instruction | input | output | source
---|---|---|---
Applying kernels to separate dimensions in gpytorch | I am translating three-dimensional Gaussian process regression code from GPflow into GPyTorch to take advantage of PyTorch's native scalability for exact GPR.
This example is helpful, but I can't find an explanation of how to apply distinct kernels to individual dimensions in a multi-dimensional GPR (e.g. a Matern52 kernel to dimensions [0, 1] and a Matern32 kernel to dimension [2]).
Does GPyTorch support that functionality, and if so how is it implemented/where can I find an example?
| Use "active_dims" in the kernel functions :
from(https://docs.gpytorch.ai/en/v1.1.1/kernels.html#rbfkernel)
active_dims (tuple of ints, optional):
Set this if you want to compute the covariance of only a few input dimensions. The ints corresponds to the indices of the dimensions. Default: None.
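So yes, GPyTorch supports this. A minimal sketch (combining the two kernels by multiplication is an assumption for illustration; addition works the same way):
import gpytorch

# Matern 5/2 on input dimensions 0 and 1, Matern 3/2 on dimension 2
k52 = gpytorch.kernels.MaternKernel(nu=2.5, active_dims=(0, 1))
k32 = gpytorch.kernels.MaternKernel(nu=1.5, active_dims=(2,))
# combine the per-dimension kernels into one covariance module
covar_module = gpytorch.kernels.ScaleKernel(k52 * k32)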
| https://stackoverflow.com/questions/62838452/ |
Creating MLP model to predict the ratings that a user will give to an unseen movie using PyTorch | For my project, I'm trying to predict the ratings that a user will give to an unseen movie, based on the ratings he gave to other movies. I'm using the MovieLens dataset. The main folder, ml-100k, contains information about 100,000 movie ratings.
Before processing the data, the main data (ratings data) contains user ID, movie ID, user rating from 0 to 5, and timestamps (not considered for this project). I then split the data into a training set (80%) and test data (20%) using the sklearn library.
To create the recommendation system, the 'Stacked Autoencoder' model is being used. I'm using PyTorch and the code is implemented on Google Colab. The project is based on this https://towardsdatascience.com/stacked-auto-encoder-as-a-recommendation-system-for-movie-rating-prediction-33842386338
I'm new to deep learning and I want to compare this model (Stacked Autoencoder) to another deep learning model. For instance, I want to use a Multilayer Perceptron (MLP). This is for research purposes. Below is the code for creating the Stacked Autoencoder model and training it.
### Part 1: Architecture of the AutoEncoder
# nn.Module is a parent class
# SAE is a child class of the parent class nn.Module
class SAE(nn.Module):
    # self is the object of the SAE class
    # Architecture
    def __init__(self):
        # self can use all the methods of the class nn.Module
        super(SAE, self).__init__()
        # Fully connected layer n°1: input and 20 neurons/nodes in the first layer
        # one neuron can represent, e.g., the genre of the movie
        # Encode step
        self.fc1 = nn.Linear(nb_movies, 20)
        # Fully connected layer n°2
        self.fc2 = nn.Linear(20, 10)
        # Decode step
        # Fully connected layer n°3
        self.fc3 = nn.Linear(10, 20)
        # Fully connected layer n°4
        self.fc4 = nn.Linear(20, nb_movies)
        # Sigmoid activation function
        self.activation = nn.Sigmoid()
    # Forward pass: activation of the neurons
    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        # don't use the activation function on the last layer,
        # use the linear output only
        x = self.fc4(x)
        # x is the vector of predicted ratings
        return x
# Create the AutoEncoder object
sae = SAE()
# MSE loss: imported from torch.nn
criterion = nn.MSELoss()
# RMSprop optimizer (updates the weights), imported from torch.optim
# sae.parameters() are the weights and biases adjusted during training
optimizer = optim.RMSprop(sae.parameters(), lr=0.01, weight_decay=0.5)
### Part 2: Training of the SAE
# number of epochs
nb_epoch = 200
# Epoch for-loop
for epoch in range(1, nb_epoch + 1):
    # at the beginning of each epoch the loss is zero
    s = 0.
    train_loss = 0
    # Users for-loop
    for id_user in range(nb_users):
        # add one dimension to make a two-dimensional tensor:
        # create a new dimension in the first position with .unsqueeze(0)
        input = Variable(training_set[id_user].unsqueeze(0))
        # clone the input to obtain the target
        target = input.clone()
        # target.data are all the ratings
        # only train on users who rated at least one movie (ratings > 0)
        if torch.sum(target.data > 0) > 0:
            output = sae(input)
            # don't compute the gradients with respect to the target
            target.requires_grad = False
            # only deal with true ratings
            output[target == 0] = 0
            # Loss criterion
            loss = criterion(output, target)
            # Average the error over the movies that have non-zero ratings
            mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
            # Direction of the backpropagation
            loss.backward()
            train_loss += np.sqrt(loss.item() * mean_corrector)
            s += 1.
            # Intensity of the backpropagation (weight update)
            optimizer.step()
    print('epoch: ' + str(epoch) + ' loss: ' + str(train_loss / s))
If I want to train using an MLP model, how can I implement that model class?
Also, what other deep learning models (besides MLP) can I use to compare with the Stacked Autoencoder?
Thanks.
| An MLP is not suited for recommendations. If you want to go this route, you will need to create an embedding for your userid and another for your itemid and then add linear layers on top of the embeddings. Your target will be to predict the rating for a userid-itemid pair.
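A minimal sketch of that embedding-based MLP (the layer sizes and names here are assumptions for illustration, not part of the original answer):
import torch
import torch.nn as nn

class MLPRecommender(nn.Module):
    def __init__(self, nb_users, nb_movies, emb_dim=32):
        super().__init__()
        # one embedding per user id and one per movie id
        self.user_emb = nn.Embedding(nb_users, emb_dim)
        self.item_emb = nn.Embedding(nb_movies, emb_dim)
        # linear layers on top of the concatenated embeddings
        self.layers = nn.Sequential(
            nn.Linear(2 * emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        # predicted rating for each (user, movie) pair
        return self.layers(x).squeeze(-1)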
I suggest you take a look at variational autoencoders (VAE). They give state-of-the-art results in recommender systems and will also give a fair comparison with your stacked autoencoder. Here's the research paper applying VAEs to collaborative filtering: https://arxiv.org/pdf/1802.05814.pdf
| https://stackoverflow.com/questions/62839095/ |
select sub elements from another batch | I have a first tensor of size torch.Size([12, 64, 8, 8, 3]), where (8, 8, 3) is the image size, 64 is the number of patches, and 12 is the batch size.
There is another tensor of size torch.Size([12, 10]) which selects 10 patches for each item in the batch (10 patches out of the total of 64); it stores the indices. How can I use it to index into the first tensor with a list comprehension?
| You can use index_select:
c = [torch.index_select(i, dim=0, index=j) for i, j in zip(a,b)]
a and b are your tensor and indices respectively.
You could stack it in the zero dimension afterwards.
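Putting it together with the shapes from the question (the random data is just for illustration):
import torch

a = torch.randn(12, 64, 8, 8, 3)    # batch of 64 patches per item
b = torch.randint(0, 64, (12, 10))  # 10 patch indices per batch item
c = torch.stack([torch.index_select(i, dim=0, index=j) for i, j in zip(a, b)], dim=0)
print(c.shape)  # torch.Size([12, 10, 8, 8, 3])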
| https://stackoverflow.com/questions/62850814/ |
PyTorch forward propagation returns different logits on same samples | Consider the following LeNet model for MNIST
import torch
from torch import nn
import torch.nn.functional as F
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4*4*50, 500)
        self.fc2 = nn.Linear(500, 10)
        self.ceriation = nn.CrossEntropyLoss()
    def forward(self, x):
        x = self.conv1(x)
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(x)
        x = x.view(-1, 4*4*50)
        x = self.fc1(x)
        x = self.fc2(x)
        return x
Now, I use this model to do a single forward step on a batch of samples like
network=LeNet()
optimizer = torch.optim.SGD(network.parameters(), lr=0.001, momentum=0.9)
device = torch.device("cpu")
network.to(device)
network.train()
optimizer.zero_grad()
# X_batch= ... some batch of 50 samples pulled from a train_loader defined as
# torch.manual_seed(42)
# training_set = datasets.MNIST('./mnist_data', train=True, download=False,
# transform=transforms.Compose([
# transforms.ToTensor(),
# transforms.Normalize((0.1307,), (0.3081,))]))
# train_loader = torch.utils.data.DataLoader(training_set,
# batch_size=50,
# shuffle=False)
logits = network(X_batch)
Note that shuffle=False and download=False for the loader, since the data set is already downloaded and I don't want to shuffle. My problem is that if I run this code twice I get different values for logits, and I don't understand why, since everything else seems unchanged. For an extra check, I also extracted X_batch to a numpy array and verified that the batch of samples is exactly the same as in the previous execution. I did this check with the numpy.array_equal() function.
I really can't figure out what I am missing here unless there are precision issues.
| The reason is that every time you run this code you call
network = LeNet()
and end up with a different random initialization of the network's weights. If you set the random seed before doing that, e.g. like this:
torch.manual_seed(42)
network = LeNet()
then you should get the same results on the first forward step, given that you use the same data as input.
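If you need runs to be reproducible beyond the first forward step, it is common to seed every source of randomness; a sketch (which of these you actually need depends on the ops you use):
import random
import numpy as np
import torch

torch.manual_seed(42)
np.random.seed(42)
random.seed(42)
# only relevant when running on GPU with cuDNN:
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False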
| https://stackoverflow.com/questions/62854337/ |
Pytorch: Fast create score tensor method | I am new to PyTorch and am looking for a fast "get score" function that, given a bunch of samples and a distribution, outputs a tensor consisting of the corresponding score for each individual sample. For instance, consider the following code:
norm = torch.distributions.multivariate_normal.MultivariateNormal(torch.zeros(2),torch.eye(2))
samples = norm.sample((1000,))
samples.requires_grad_(True)
Using samples I would like to create a score tensor of size [1000,2] where the ith component score[i] is the gradient of log p(samples[i]), where p is the density of the given distribution. The method I have come up with is the following:
def get_score(samples, distribution):
    log_probs = distribution.log_prob(samples)
    for i in range(log_probs.size()[0]):
        log_probs[i].backward(retain_graph=True)
The resulting score tensor is then samples.grad. The issue is that my method is quite slow for larger samples (e.g. for a sample of size [50000,2] it takes about 25-30 seconds on my CPU). Is this as fast as it can get?
The only alternative I can think of is to hard-code the score function for each distribution I will use, this doesn't seem like a good solution!
From experimentation, for 50000 samples, the following is about 50% quicker:
for i in range(50000):
    sample = norm.sample((1,))
    sample.requires_grad_(True)
    log_prob = norm.log_prob(sample)
    log_prob.backward()
This indicates that there should be a better way!
| I'm assuming that log_probs is stored as a pytorch tensor.
You can take advantage of the linearity of differentiation to calculate the derivative for all samples at once: log_probs.sum().backward(retain_graph = True)
At least with GPU acceleration this will be a lot faster.
If log_probs is not a tensor but a list of scalars (represented as pytorch tensors of rank 0), you can use log_probs = torch.stack(log_probs) first.
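Applied to the setup from the question, the whole score tensor then comes out of a single backward pass:
import torch

norm = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
samples = norm.sample((50000,))
samples.requires_grad_(True)
norm.log_prob(samples).sum().backward()
score = samples.grad  # shape [50000, 2]: gradient of log p at each sample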
| https://stackoverflow.com/questions/62854659/ |
Does PyTorch's nn.DataParallel load the same model in each GPU? | The only way it would seem to work (logically) is if the model was loaded in each of the GPUs. This would mean that when weights are updated, each GPU would need to update the weights as well, increasing the workload compared to a single GPU. Is this line of reasoning correct?
| First of all, it is advised to use torch.nn.parallel.DistributedDataParallel instead.
You can check the torch.nn.DataParallel documentation where the process is described (you can also check the source code and dig a little deeper on GitHub; here is how replication of the module is performed).
Here is roughly how it's done:
Initialization
All (or the chosen) device ids are saved in the constructor, together with the dimension along which the data will be scattered (almost always 0, meaning the batch will be split across devices).
Forward
This is done during every forward run:
Inputs are scattered (tensors along the chosen dimension; tuples, lists and dicts are shallow copied; other data is shared among threads).
If there is only one device, just return module(*args, **kwargs).
If there are multiple devices, the network is copied from the source device to the other devices (this is done on every forward run!).
Run forward on each device with its respective input.
Outputs from all devices are gathered (concatenated) onto the single source device.
Run the rest of the code: backprop, weight updates etc. on the source device.
The source device is cuda:0 by default, but it can be chosen. Also, the weights are updated on a single device; only the batch is scattered and the outputs gathered.
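So your reasoning is essentially correct for the forward replication, although the weight update itself happens only once, on the source device. A minimal usage sketch (the wrapped module and batch are placeholders; this assumes two visible GPUs):
import torch
import torch.nn as nn

# wrap any module; the batch dimension (dim 0) is split across the listed GPUs
model = nn.DataParallel(nn.Linear(10, 2), device_ids=[0, 1]).to("cuda:0")
batch = torch.randn(64, 10, device="cuda:0")
out = model(batch)  # each GPU sees a 32-sample slice; outputs are gathered on cuda:0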
| https://stackoverflow.com/questions/62857640/ |
Loss not reducing in Linear Regression with Pytorch | I'm working on a Linear Regression problem with Pytorch. The dataset I'm using is the Housing Prices from Kaggle. While training the model I see the loss is not reducing. It shows an erratic pattern. This is the Loss I'm getting after 100 epochs:
Epoch [10/100], Loss: 222273830912.0000
Epoch [20/100], Loss: 348813688832.0000
Epoch [30/100], Loss: 85658296320.0000
Epoch [40/100], Loss: 290305572864.0000
Epoch [50/100], Loss: 59399933952.0000
Epoch [60/100], Loss: 80360054784.0000
Epoch [70/100], Loss: 90352918528.0000
Epoch [80/100], Loss: 534457679872.0000
Epoch [90/100], Loss: 256064503808.0000
Epoch [100/100], Loss: 102400483328.0000
This is the code:
import torch
import numpy as np
from torch.utils.data import TensorDataset
import torch.nn as nn
from torch.utils.data import DataLoader
import torch.nn.functional as F
inputs = normalized_X
targets = np.array(train_y)
# Tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
targets = targets.view(-1, 1)
train_ds = TensorDataset(inputs, targets.squeeze())
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
model = nn.Linear(10, 1)
# Define Loss func
loss_fn = F.mse_loss
# Optimizer
opt = torch.optim.SGD(model.parameters(), lr = 1e-1)
num_epochs = 100
model.train()
for epoch in range(num_epochs):
    # Train with batches of data
    for xb, yb in train_dl:
        # 1. Generate predictions
        pred = model(xb.float())
        # 2. Calculate loss
        yb = yb.view(yb.size(0), -1)
        loss = loss_fn(pred, yb.float())
        # 3. Compute gradients
        loss.backward()
        # 4. Update parameters using gradients
        opt.step()
        # 5. Reset the gradients to zero
        opt.zero_grad()
    if (epoch+1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))
| I ran the code you gave and I got this error:
p.py:38: UserWarning: Using a target size (torch.Size([50])) that is
different to the input size (torch.Size([50, 1])). This will likely lead
to incorrect results due to broadcasting. Please ensure they have the same size.
Your problem is due to the difference in dimensions between pred and yb.
This code shows how to resolve it:
import torch
import numpy as np
from torch.utils.data import TensorDataset
import torch.nn as nn
from torch.utils.data import DataLoader
import torch.nn.functional as F

inputs = np.random.rand(50, 10)
targets = np.random.randint(0, 2, 50)
# Tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
train_ds = TensorDataset(inputs, targets.squeeze())
batch_size = 100
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
model = nn.Linear(10, 1)
# Define loss function
loss_fn = F.mse_loss
# Optimizer
opt = torch.optim.SGD(model.parameters(), lr=1e-1)
num_epochs = 100
model.train()
for epoch in range(num_epochs):
    # Train with batches of data
    for xb, yb in train_dl:
        # 1. Generate predictions
        pred = model(xb.float())
        # 2. Calculate loss
        yb = yb.view(yb.size(0), -1)
        loss = loss_fn(pred, yb.float())
        # 3. Compute gradients
        loss.backward()
        # 4. Update parameters using gradients
        opt.step()
        # 5. Reset the gradients to zero
        opt.zero_grad()
    if (epoch+1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))
This discussion explains it in more detail:
https://discuss.pytorch.org/t/target-size-torch-size-10-must-be-the-same-as-input-size-torch-size-2/72354/6
| https://stackoverflow.com/questions/62863028/ |
How to split test and train data in a dataset based on number of targets in each category | I have an imageFolder in PyTorch which holds my categorized data images. Each folder is the name of the category and in the folder are images of that category.
I've loaded the data and split train and test data via a sampler with a random train_test_split. But the problem is my data distribution isn't good: some classes have lots of images and some have fewer.
So to solve this problem I want to choose 20% of each class as my test data, and the rest would be the training data:
ds = ImageFolder(filePath, transform=transform)
batch_size = 64
validation_split = 0.2
indices = list(range(len(ds))) # indices of the dataset
# TODO: fix spliting
train_indices,test_indices = train_test_split(indices,test_size=0.2)
# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
test_sampler = SubsetRandomSampler(test_indices)
train_loader = torch.utils.data.DataLoader(ds, batch_size=batch_size, sampler=train_sampler, num_workers=16)
test_loader = torch.utils.data.DataLoader(ds, batch_size=batch_size, sampler=test_sampler, num_workers=16)
Any idea of how I should fix it?
| Use the stratify argument in train_test_split according to the docs. If your label indices is an array-like called y, do:
train_indices,test_indices = train_test_split(indices, test_size=0.2, stratify=y)
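For an ImageFolder specifically, the per-sample class labels are available as ds.targets, so a stratified version of the split in the question looks like (a sketch based on the question's variables):
train_indices, test_indices = train_test_split(indices, test_size=0.2, stratify=ds.targets)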
| https://stackoverflow.com/questions/62872089/ |
Can't import ToTensorV2 in Colab | from albumentations.pytorch.transforms import ToTensorV2
I used the above code, and it doesn't work.
| Just add a code block with the line
! pip install albumentations==0.4.6
above the block where you do the import. I tried installing it without the specific version and it failed.
When I did not specify the version number in pip install, version 0.1.12 was installed, which does not contain ToTensorV2.
| https://stackoverflow.com/questions/62872413/ |
pytorch training loop ends with ''int' object has no attribute 'size' exception | The code I am posting below is just a small part of the application:
def train(self, training_reviews, training_labels):
    # make sure we have a matching number of reviews and labels
    assert(len(training_reviews) == len(training_labels))
    # Keep track of correct predictions to display accuracy during training
    correct_so_far = 0
    # Remember when we started for printing time statistics
    start = time.time()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate)
    # loop through all the given reviews and run a forward and backward pass,
    # updating weights for every item
    for i in range(len(training_reviews)):
        # TODO: Get the next review and its correct label
        review = training_reviews[i]
        label = training_labels[i]
        print('processing item ', i)
        self.update_input_layer(review)
        output = self.forward(torch.from_numpy(self.layer_0).float())
        target = self.get_target_for_label(label)
        print('output ', output)
        print('target ', target)
        loss = criterion(output, target)
        ...
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
and it ends with the exception in the title line when evaluating:
loss = criterion(output, target)
prior to that, the variables are as follows:
output tensor([[0.5803]], grad_fn=<SigmoidBackward>)
target 1
| Target should be a torch.Tensor variable. Use torch.tensor([target]).
Additionally, you may want to use batches (so there are N samples and shape of torch.tensor is (N,), same for target).
Also see an introductory PyTorch tutorial, as you're not using batches, not stepping the optimizer, and not using torch.utils.data.Dataset and torch.utils.data.DataLoader as you probably should.
| https://stackoverflow.com/questions/62873633/ |
Pytorch: multiply two high dimensions tensor, (2, 5, 3) * (2, 5) into (2, 5, 3) | I want to multiply two high-dimensional tensors, (2, 5, 3) * (2, 5), into (2, 5, 3), i.e. multiply each row vector by a scalar.
E.g.
emb = nn.Embedding(6, 3)
input = torch.tensor([[1, 2, 3, 4, 5,],
[2, 3, 1, 4, 5,]])
input_emb = emb(input)
print(input.shape)
> torch.Size([2, 5])
print(input_emb.shape)
> torch.Size([2, 5, 3])
print(input_emb)
> tensor([[[-1.9114, -0.1580, 1.2186],
[ 0.4627, 0.9119, -1.1691],
[ 0.6452, -0.6944, 1.9659],
[-0.5048, 0.6411, -1.3568],
[-0.2328, -0.9498, 0.7216]],
[[ 0.4627, 0.9119, -1.1691],
[ 0.6452, -0.6944, 1.9659],
[-1.9114, -0.1580, 1.2186],
[-0.5048, 0.6411, -1.3568],
[-0.2328, -0.9498, 0.7216]]], grad_fn=<EmbeddingBackward>)
I want to multiply as follows:
// It is written this way for convenience; it is not mathematically exact notation.
// multiply each row vector by a scalar
[[
[-1.9114, -0.1580, 1.2186] * 1
[ 0.4627, 0.9119, -1.1691] * 2
[ 0.6452, -0.6944, 1.9659] * 3
[-0.5048, 0.6411, -1.3568] * 4
[-0.2328, -0.9498, 0.7216] * 5
]
[
[ 0.4627, 0.9119, -1.1691] * 2
[ 0.6452, -0.6944, 1.9659] * 3
[-1.9114, -0.1580, 1.2186] * 1
[-0.5048, 0.6411, -1.3568] * 4
[-0.2328, -0.9498, 0.7216] * 5
]]
Apart from multi-for-loop approaches, how can I implement this concisely with PyTorch APIs?
Thanks in advance.
| You can, by correctly aligning the dimensions of the two tensors:
import torch
from torch.nn import Embedding
emb = Embedding(6, 3)
inp = torch.tensor([[1, 2, 3, 4, 5,],
[2, 3, 1, 4, 5,]])
input_emb = emb(inp)
inp[...,None] * input_emb
tensor([[[-0.3069, -0.7727, -0.3772],
[-2.8308, 1.3438, -1.1167],
[ 0.6366, 0.6509, -3.2282],
[-4.3004, 3.2342, -0.6556],
[-3.0045, -0.0191, -7.4436]],
[[-2.8308, 1.3438, -1.1167],
[ 0.6366, 0.6509, -3.2282],
[-0.3069, -0.7727, -0.3772],
[-4.3004, 3.2342, -0.6556],
[-3.0045, -0.0191, -7.4436]]], grad_fn=<MulBackward0>)
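Here inp[..., None] adds a trailing dimension, turning the (2, 5) tensor into (2, 5, 1) so it broadcasts against the (2, 5, 3) embeddings. An equivalent spelling uses unsqueeze:
out = inp.unsqueeze(-1) * input_emb  # same result, shape (2, 5, 3)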
| https://stackoverflow.com/questions/62875960/ |
Running GPU on multiple PyTorch tensor operators | I have the following PyTorch tensor:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6]])
X = torch.FloatTensor(X).cuda()
I was wondering if there is any difference (especially in speed) between Scenario A and Scenario B below, when running multiple PyTorch operators in one line.
Scenario A:
X_sq_sum = (X**2).cuda().sum(dim = 1).cuda()
Scenario B:
X_sq_sum = (X**2).sum(dim = 1).cuda()
ie. Scenario A has two .cuda() whereas Scenario B only has one .cuda().
Many thanks in advance.
| They will perform equally, as the CUDA conversion is only done once.
As described in the docs, repeated .cuda() calls will be no-ops if the object is already in CUDA memory.
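You can verify that the second .cuda() is a no-op: when a tensor is already on the right device, .cuda() returns the very same object:
import torch

X = torch.randn(2, 4).cuda()
Y = X.cuda()     # already on the GPU, so no copy is performed
print(Y is X)    # True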
| https://stackoverflow.com/questions/62878189/ |
How to create a balancing cycling iterator in PyTorch? | Say I have 2 classes: for one I have only 17 samples, and for the other 83. I want to always have an equal amount of data from each class per epoch (meaning 17 and 17 in this case). Also, I want to slide a sampling window across the class with more data each epoch (first 17, next 17, ...).
Currently I have a looping sampling iterator like this:
class CyclicIterator:
    def __init__(self, loader, sampler):
        self.loader = loader
        self.sampler = sampler
        self.epoch = 0
        self._next_epoch()
    def _next_epoch(self):
        self.iterator = iter(self.loader)
        self.epoch += 1
    def __len__(self):
        return len(self.loader)
    def __iter__(self):
        return self
    def __next__(self):
        try:
            return next(self.iterator)
        except StopIteration:
            self._next_epoch()
            return next(self.iterator)
I wonder how to force all samples from each class to be of equal count per epoch?
| For a balanced batch, meaning an equal (or close to equal) number of samples per category in each batch, there are a couple of approaches:
-Oversampling (the minority classes are oversampled until they reach the sample count of the largest class). For this approach you can use the following code:
https://github.com/galatolofederico/pytorch-balanced-batch
-Undersampling (delivers the number of samples for all categories based on the smallest category's count). In my experience, the following PyTorch function does that:
torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights))
Here weights is the probability of each sample; it depends on how many samples per category you have. For instance, if your data is as simple as data = [0, 1, 0, 0, 1], the count of class '0' is 3 and the count of class '1' is 2, so the weights vector is [1/3, 1/2, 1/3, 1/3, 1/2]. With that you can call WeightedRandomSampler and it will handle it for you.
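A short sketch of building that weights vector from a list of labels (the variable names are placeholders):
from collections import Counter

labels = [0, 1, 0, 0, 1]                      # class label of each sample
counts = Counter(labels)                      # {0: 3, 1: 2}
weights = [1.0 / counts[y] for y in labels]   # [1/3, 1/2, 1/3, 1/3, 1/2]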
You then pass the sampler to the DataLoader. The code to set it up is:
sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights))
train_dataloader = DataLoader(dataset_train, batch_size=mini_batch,
                              sampler=sampler, shuffle=False,
                              num_workers=1)
| https://stackoverflow.com/questions/62878940/ |
Tensorboard only showing dots, line not showing | I am using TensorBoard through PyTorch and have created scalars which show accuracy, loss, and validation accuracy. When I open the graph it only shows dots and doesn't show a line connecting these dots.
Here is a snippet of my code
tb = SummaryWriter(comment=f'-{epoch}')
tb.add_scalar("Accuracy", float(correct/len(train_tensor)*100), epoch)
tb.add_scalar("Loss", loss, epoch)
tb.add_scalar("ValiAccuracy", float(correct/len(train_tensor)*100), epoch)
tb.close()
What is leading tensorboard to not show these lines?
| You are always creating a new directory with your "comment" (which depends on the epoch) in the definition of the SummaryWriter. Each subdirectory will be treated as a different experiment in TensorBoard. That's why they have different colors and only show dots instead of a connected line.
You can try to define your SummaryWriter without a comment:
tb = SummaryWriter()
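Expanded into the loop from the question (acc, loss, val_acc and num_epochs are placeholders for your own values): create the writer once, outside the epoch loop, so all points land in the same run:
from torch.utils.tensorboard import SummaryWriter

tb = SummaryWriter()  # one run directory for the whole training
for epoch in range(num_epochs):
    tb.add_scalar("Accuracy", acc, epoch)
    tb.add_scalar("Loss", loss, epoch)
    tb.add_scalar("ValiAccuracy", val_acc, epoch)
tb.close()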
| https://stackoverflow.com/questions/62880550/ |
Element wise batch matrix multiplication of a row with every other row in matrix, in PyTorch | I have two matrices, both of size (30, 24, 512), where 30 is the batch size. Let us call them A and B.
Now what I need to do is this:
For every batch in A, I want to compute element-wise batch matrix multiplication of each row in a single batch of A with each row in a single batch of B and sum them.
In other words, for every batch, I have a (24, 512) matrix on left-hand side (A) and on right-hand side (B). Now for each row (1 * 512) in A, I want to compute element-wise matrix multiplication of that row with each of the (24 * 512) rows in B, one by one, and sum them.
This operation would thus result in a (1 * 512) sized vector.
Done for each row in A, it would result in a (24 * 512) sized matrix and then, done for each batch it would result in (30, 24, 512) sized matrix, which is the preferred dimension of my output.
How do I go about doing this?
I do not want to use for loops, since that would be inefficient. I also do not want to use repeat method for the same reason.
| I figured out the answer to my own question. Instead of computing the element-wise product of each pair of rows and then summing, I summed the rows first and then took the element-wise product, which made the problem (and the solution) much simpler.
import torch
a = torch.randn(30, 24, 512)
b = torch.randn(30, 24, 512)
# Step 1: Summing b along dimension 1 and unsqueezing
# Dimension of b_sum is 30*1*512
b_sum = torch.sum(b, dim=1).unsqueeze(1)
# Step 2: Element-wise product
result = a * b_sum
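The same result can be written without the explicit unsqueeze by keeping the summed dimension:
# Equivalent one-liner: keepdim=True preserves the (30, 1, 512) shape for broadcasting
result = a * b.sum(dim=1, keepdim=True)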
| https://stackoverflow.com/questions/62880631/ |
Copy large github file into existing repo without downloading/cloning | How do I add the bin file associated with this repo https://github.com/graykode/gpt-2-Pytorch to my existing private repo without downloading/cloning? There is a 500 MB file that needs to be downloaded to get the codebase to work, which I need in my repo, and I don't want to download it onto my small hard disk during the cloning process.
It's a noob question but i'm at a complete loss as to how to proceed. Please be nice.
| You can't, sorry. The maximum file size on GitHub is 100 MB, and for good reason: Git does not work well with large, binary files.
Simplest thing is to find a USB key or other external storage and use it as temporary space to store the file while you add it with git-lfs.
Or don't add the file to your repository. Download and cache it as part of your build process.
| https://stackoverflow.com/questions/62881852/ |
Loading json file using torchtext | I'm working on the DailyDialog dataset, which I've converted into a
JSON file that looks something like this:
[{"response": "You know that is tempting but is really not good for our fitness.", "message": "Say, Jim, how about going for a few beers after dinner?"}, {"response": "Do you really think so? I don't. It will just make us fat and act silly. Remember last time?", "message": "What do you mean? It will help us to relax."}, {"response": "I suggest a walk over to the gym where we can play singsong and meet some of our friends.", "message": "I guess you are right. But what shall we do? I don't feel like sitting at home."}, {"response": "Sounds great to me! If they are willing, we could ask them to go dancing with us.That is excellent exercise and fun, too.", "message": "That's a good idea. I hear Mary and Sally often go there to play pingpong.Perhaps we can make a foursome with them."}, {"response": "All right.", "message": "Please lie down over there."}]
So, each item has two keys - response and message.
This is my first time using PyTorch, so I was following a few online available resources. These are the relevant snippets of my code:
def tokenize_en(text):
return [tok.text for tok in spacy_en.tokenizer(text)]
src = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
fields = {'response': ('r', src)}
train_data, test_data, validation_data = TabularDataset.splits(
path = 'FilePath',
train = 'trainset.json',
test = 'testset.json',
validation = 'validationset.json',
format = 'json',
fields = fields
)
Although no errors are raised, despite having many items in my JSON file, the train, test and validation datasets strangely have only 1 example each, as seen in this image:
Image Showing the length of train_data, test_data and validation_data
I'd be really grateful if someone could point out the error to me.
Edit: I found out that the whole file is being treated as a single text string due to the lack of line breaks in the file. But if I pretty-print the JSON file with indentation, the TabularDataset function throws a JSONDecodeError, suggesting it can no longer decode the file. How can I get rid of this problem?
| I think the code is alright, but the issue is with your JSON file. Can you try removing the square brackets ("[]") at the beginning and the end of the file?
That is probably the reason your Python code is reading it as one single object.
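The underlying reason is that torchtext's "json" format expects JSON Lines: one JSON object per line, with no enclosing list and no commas between objects. A conversion sketch (the file names match the question; the output name is a placeholder):
import json

with open('trainset.json') as f:
    items = json.load(f)                      # the original list of dicts
with open('trainset_fixed.json', 'w') as f:
    for item in items:
        f.write(json.dumps(item) + '\n')      # one object per line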
| https://stackoverflow.com/questions/62882169/ |
Can numpy arrays run in GPUs? | I am using PyTorch. I have the following code:
import numpy as np
import torch
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
X_split = np.array_split(X.numpy(),
                         indices_or_sections=2,
                         axis=0)
X_split
but I am getting this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-121-870b5d3f67b6> in <module>()
----> 1 X_prime_class_split = np.array_split(X_prime_class.numpy(),
2 indices_or_sections = 2,
3 axis = 0)
4 X_prime_class_split
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
The error message is clear and I know how to fix this error by just including .cpu(), i.e. X_prime_class.cpu().numpy(). I am just curious to know whether this confirms that numpy arrays cannot run on GPUs/CUDA?
| No, you cannot generally run numpy functions on GPU arrays. PyTorch reimplements much of numpy's functionality for PyTorch tensors. For example, torch.chunk works similarly to np.array_split, so you could do the following:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
X_split = torch.chunk(X, chunks=2, dim=0)
which splits X into multiple tensors without ever moving the X off the GPU.
| https://stackoverflow.com/questions/62887574/ |
Im not running anything on GPU. But GPU stat is not clearing my usage | GPUstat image
I'm not running anything on the GPU.
My ID is suil5044. GPUstat is not clearing my usage.
I think I finished the job in the IDE, but the code is actually still running on the server.
How do I kill my code that is still running on the server,
without affecting other users?
Thanks!
| It would be better if you could show us which processes are using your GPU, because sometimes many utility processes use the GPU too under the same user name.
However, please try:
Find the PIDs of all running processes which are using the GPU. This tool may help.
Kill the process by PID with this command: kill -9 <pid> (be sure this PID is under your username)
| https://stackoverflow.com/questions/62888242/ |
How can I incorporate PReLU in a quantized model? | I'm trying to quantize a model which uses PReLU. Replacing PReLU with ReLU is not possible, as it drastically affects the network performance to the point where it's useless.
As far as I know, PReLU is not supported in PyTorch when it comes to quantization. So I tried to rewrite this module manually and implement the multiplications and additions using nn.quantized.FloatFunctional() to get around this limitation.
This is what I have come up so far:
class PReLU_Quantized(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.weight = prelu_object.weight
        self.quantized_op = nn.quantized.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
    def forward(self, inputs):
        # inputs = torch.max(0, inputs) + self.weight * torch.min(0, inputs)
        self.weight = self.quant(self.weight)
        weight_min_res = self.quantized_op.mul(self.weight, torch.min(inputs)[0])
        inputs = self.quantized_op.add(torch.max(inputs)[0], weight_min_res).unsqueeze(0)
        self.weight = self.dequant(self.weight)
        return inputs
and for the replacement :
class model(nn.Module):
    def __init__(self):
        super().__init__()
        ....
        self.prelu = PReLU()
        self.prelu_q = PReLU_Quantized(self.prelu)
        ....
Basically, I read the learned parameter of the existing prelu module, and run the calculation myself in a new module. The module seems to be working in the sense, its not failing the whole application.
However, in order to assess whether my implementation is actually correct and yields the same result as the original module, I tried to test it.
Here is a counterpart for normal models (i.e. not quantized model):
For some reason, the error between the actual PReLU and my implementation is very large!
Here are sample diffs in different layers:
diff : 1.1562038660049438
diff : 0.02868632599711418
diff : 0.3653906583786011
diff : 1.6100226640701294
diff : 0.8999372720718384
diff : 0.03773299604654312
diff : -0.5090572834014893
diff : 0.1654307246208191
diff : 1.161868691444397
diff : 0.026089997962117195
diff : 0.4205571115016937
diff : 1.5337920188903809
diff : 0.8799554705619812
diff : 0.03827812895178795
diff : -0.40296515822410583
diff : 0.15618863701820374
and the diff is calculated like this in the forward pass:
def forward(self, x):
    residual = x
    out = self.bn0(x)
    out = self.conv1(out)
    out = self.bn1(out)
    out = self.prelu(out)
    out2 = self.prelu2(out)
    print(f'diff : {(out - out2).mean().item()}')
    out = self.conv2(out)
    ...
This is the normal implementation which I used on ordinary model (i.e. not quantized!) to asses whether it produces correct result and then move on to quantized version:
class PReLU_2(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight
    def forward(self, inputs):
        x = self.weight
        tmin, _ = torch.min(inputs, dim=0)
        tmax, _ = torch.max(inputs, dim=0)
        weight_min_res = torch.mul(x, tmin)
        inputs = torch.add(tmax, weight_min_res)
        inputs = inputs.unsqueeze(0)
        return inputs
What am I missing here?
| I figured it out! I made a huge mistake at the very beginning. I needed to calculate
PReLU(x) = max(0, x) + a * min(0, x)
and not use the actual torch.min or torch.max reductions, which don't make any sense here!
Here is the final solution for normal models (i.e not quantized)!:
class PReLU_2(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight
    def forward(self, inputs):
        pos = torch.relu(inputs)
        neg = -self.weight * torch.relu(-inputs)
        inputs = pos + neg
        return inputs
and this is the quantized version :
class PReLU_Quantized(nn.Module):
    def __init__(self, prelu_object):
        super().__init__()
        self.prelu_weight = prelu_object.weight
        self.weight = self.prelu_weight
        self.quantized_op = nn.quantized.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
    def forward(self, inputs):
        # inputs = max(0, inputs) + alpha * min(0, inputs)
        self.weight = self.quant(self.weight)
        weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
        inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
        inputs = self.dequant(inputs)
        self.weight = self.dequant(self.weight)
        return inputs
Side note:
I also had a typo where I was calculating the diff:
out = self.prelu(out)
out2 = self.prelu2(out)
print(f'diff : {( out - out2).mean().item()}')
out = self.conv2(out)
needs to be
out1 = self.prelu(out)
out2 = self.prelu2(out)
print(f'diff : {( out1 - out2).mean().item()}')
out = self.conv2(out1)
Update:
In case you face issues in quantization, you may try this version:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.quantized as nnq
from torch.quantization import fuse_modules

class QPReLU(nn.Module):
    def __init__(self, num_parameters=1, init: float = 0.25):
        super(QPReLU, self).__init__()
        self.num_parameters = num_parameters
        self.weight = nn.Parameter(torch.Tensor(num_parameters).fill_(init))
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.f_mul_neg_one1 = nnq.FloatFunctional()
        self.f_mul_neg_one2 = nnq.FloatFunctional()
        self.f_mul_alpha = nnq.FloatFunctional()
        self.f_add = nnq.FloatFunctional()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant2 = torch.quantization.QuantStub()
        self.quant3 = torch.quantization.QuantStub()
        # self.dequant2 = torch.quantization.QuantStub()
        self.neg_one = torch.Tensor([-1.0])
    def forward(self, x):
        x = self.quant(x)
        # PReLU, with modules only
        x1 = self.relu1(x)
        neg_one_q = self.quant2(self.neg_one)
        weight_q = self.quant3(self.weight)
        x2 = self.f_mul_alpha.mul(
            weight_q, self.f_mul_neg_one2.mul(
                self.relu2(
                    self.f_mul_neg_one1.mul(x, neg_one_q),
                ),
                neg_one_q)
        )
        x = self.f_add.add(x1, x2)
        x = self.dequant(x)
        return x

m1 = nn.PReLU()
m2 = QPReLU()

# check correctness in fp
for i in range(10):
    data = torch.randn(2, 2) * 1000
    assert torch.allclose(m1(data), m2(data))

# toy model
class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.prelu = QPReLU()
    def forward(self, x):
        x = self.prelu(x)
        return x

# quantize it
m = M()
m.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(m, inplace=True)
# calibrate
m(torch.randn(4, 4))
# convert
torch.quantization.convert(m, inplace=True)
# run some data through
res = m(torch.randn(4, 4))
print(res)
and make sure to read the related notes here
| https://stackoverflow.com/questions/62891103/ |
Reshape rows into groups of columns | I have a number of row vectors which I would like to batch as column vectors and use as input for Conv1d. As an example I'd like to reshape the tensor x into y i.e. making two groups of two column vectors.
# size = [4, 3]
x = torch.tensor([
[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[9, 10, 11]
])
# size = [2, 3, 2]
y = torch.tensor([
[[0, 3],
[1, 4],
[2, 5]],
[[6, 9],
[7, 10],
[8, 11]]
])
Is there a way to do this with just reshape and similar functions? The only way I can think of is using loops and copying into a new tensor.
| You need to use permute as well as reshape:
x.reshape(2, 2, 3).permute(0, 2, 1)
Out[*]:
tensor([[[ 0, 3],
[ 1, 4],
[ 2, 5]],
[[ 6, 9],
[ 7, 10],
[ 8, 11]]])
First, you split the vectors into two groups with x.reshape(2, 2, 3), placing the extra dimension in the middle. Then, using permute, you change the order of the dimensions to be as you expected.
| https://stackoverflow.com/questions/62892378/ |
Deployment of a PyTorch computer vision deep learning model into a windows desktop app | I have trained a yolov3 model in PyTorch with my dataset, and I have also written some utility code that runs alongside the model, all in Python. Now I want to deploy this model and my utilities as a Windows desktop app that takes a video and runs the model on its frames. How can I do this with minimal changes to my code, or by rewriting it in another language? What framework is the best option for designing the UI of the app?
Thanks.
|
I would first use your model with PyTorch to run detection on each frame, and draw a box around each detected object (e.g. directly in the numpy array). Here is an article on this: Drawing a rectangle inside a 2D numpy array
Then I would use OpenCV (cv2) to stitch all the frames together into a video; you could also use ffmpeg. Here is an article on this (OpenCV): How to make a movie out of images in python
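A sketch of that frame-stitching step with OpenCV (assuming frames is a list of H×W×3 uint8 numpy arrays in BGR order, and 30 fps):
import cv2

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
h, w = frames[0].shape[:2]
writer = cv2.VideoWriter("output.mp4", fourcc, 30.0, (w, h))
for frame in frames:
    writer.write(frame)   # append each annotated frame to the video
writer.release()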
Then for your UI framework you could use PyQt5 to display your video: Load an opencv video frame by frame using PyQT. But you could also use Kivy with Gstreamer: Kivy VideoPlayer fullscreen, loop, and hide controls
Finally, to turn your .py file into a .exe (a Windows executable), I would use PyInstaller: http://www.pyinstaller.org/
| https://stackoverflow.com/questions/62894560/ |
Truncated backpropagation in PyTorch (code check) | I am trying to implement truncated backpropagation through time in PyTorch, for the simple case where K1=K2. I have an implementation below that produces reasonable output, but I just want to make sure it is correct. When I look online for PyTorch examples of TBTT, they do inconsistent things around detaching the hidden state and zeroing out the gradient, and the ordering of these operations. Please let me know if I have made a mistake.
In the code below, H maintains the current hidden state, and model(weights, H, x) outputs the prediction and the new hidden state.
while i < NUM_STEPS:
    # Grab x, y for ith datapoint
    x = data[i]
    target = true_output[i]
    # Run model
    output, new_hidden = model(weights, H, x)
    H = new_hidden
    # Update running error
    error += (output - target)**2
    if (i+1) % K == 0:
        # Backpropagate
        error.backward()
        opt.step()
        opt.zero_grad()
        error = 0
        H = H.detach()
    i += 1
| So the idea of your code is to isolate the last variables after every Kth step. Yes, your implementation is absolutely correct, and this answer confirms it.
# truncated to the last K timesteps
while i < NUM_STEPS:
    out = model(out)
    if (i+1) % K == 0:
        out.backward()
        out.detach()
out.backward()
You can also follow this example for your reference.
import torch
from ignite.engine import Engine, EventEnum, _prepare_batch
from ignite.utils import apply_to_tensor

class Tbptt_Events(EventEnum):
    """Aditional tbptt events.

    Additional events for truncated backpropagation throught time dedicated
    trainer.
    """

    TIME_ITERATION_STARTED = "time_iteration_started"
    TIME_ITERATION_COMPLETED = "time_iteration_completed"

def _detach_hidden(hidden):
    """Cut backpropagation graph.

    Auxillary function to cut the backpropagation graph by detaching the hidden
    vector.
    """
    return apply_to_tensor(hidden, torch.Tensor.detach)

def create_supervised_tbptt_trainer(
    model, optimizer, loss_fn, tbtt_step, dim=0, device=None, non_blocking=False, prepare_batch=_prepare_batch
):
    """Create a trainer for truncated backprop through time supervised models.

    Training recurrent model on long sequences is computationally intensive as
    it requires to process the whole sequence before getting a gradient.
    However, when the training loss is computed over many outputs
    (`X to many <https://karpathy.github.io/2015/05/21/rnn-effectiveness/>`_),
    there is an opportunity to compute a gradient over a subsequence. This is
    known as
    `truncated backpropagation through time <https://machinelearningmastery.com/
    gentle-introduction-backpropagation-time/>`_.
    This supervised trainer apply gradient optimization step every `tbtt_step`
    time steps of the sequence, while backpropagating through the same
    `tbtt_step` time steps.

    Args:
        model (`torch.nn.Module`): the model to train.
        optimizer (`torch.optim.Optimizer`): the optimizer to use.
        loss_fn (torch.nn loss function): the loss function to use.
        tbtt_step (int): the length of time chunks (last one may be smaller).
        dim (int): axis representing the time dimension.
        device (str, optional): device type specification (default: None).
            Applies to batches.
        non_blocking (bool, optional): if True and this copy is between CPU and GPU,
            the copy may occur asynchronously with respect to the host. For other cases,
            this argument has no effect.
        prepare_batch (callable, optional): function that receives `batch`, `device`,
            `non_blocking` and outputs tuple of tensors `(batch_x, batch_y)`.

    .. warning::
        The internal use of `device` has changed.
        `device` will now *only* be used to move the input data to the correct device.
        The `model` should be moved by the user before creating an optimizer.
        For more information see:
        * `PyTorch Documentation <https://pytorch.org/docs/stable/optim.html#constructing-it>`_
        * `PyTorch's Explanation <https://github.com/pytorch/pytorch/issues/7844#issuecomment-503713840>`_

    Returns:
        Engine: a trainer engine with supervised update function.
    """

    def _update(engine, batch):
        loss_list = []
        hidden = None
        x, y = batch
        for batch_t in zip(x.split(tbtt_step, dim=dim), y.split(tbtt_step, dim=dim)):
            x_t, y_t = prepare_batch(batch_t, device=device, non_blocking=non_blocking)
            # Fire event for start of iteration
            engine.fire_event(Tbptt_Events.TIME_ITERATION_STARTED)
            # Forward, backward and
            model.train()
            optimizer.zero_grad()
            if hidden is None:
                y_pred_t, hidden = model(x_t)
            else:
                hidden = _detach_hidden(hidden)
                y_pred_t, hidden = model(x_t, hidden)
            loss_t = loss_fn(y_pred_t, y_t)
            loss_t.backward()
            optimizer.step()
            # Setting state of engine for consistent behaviour
            engine.state.output = loss_t.item()
            loss_list.append(loss_t.item())
            # Fire event for end of iteration
            engine.fire_event(Tbptt_Events.TIME_ITERATION_COMPLETED)
        # return average loss over the time splits
        return sum(loss_list) / len(loss_list)

    engine = Engine(_update)
    engine.register_events(*Tbptt_Events)
    return engine
| https://stackoverflow.com/questions/62901561/ |
Pytorch with Modified Derivatives | I'm in the process of rewriting the TRACX2 model, a variation of a recurrent neural network used for training encodings in the context of word segmentation from continuous speech or text. The writer of the original code manually wrote the network in Numpy, while I want to optimize it with Pytorch. However, they implement something they call "temperature" and a "fahlman offset":
This clearly isn't the actual derivative of tanh(x), one of their activation functions, but they used this derivative instead. How would I go about implementing this modification in Pytorch?
| Basically, you add a backward hook like so:
a = Variable(torch.randn(2,2), requires_grad=True)
m = nn.Linear(2,1)
m(a).mean().backward()
print(a.grad)
# shows a 2x2 tensor of non-zero values
temperature = 0.3
fahlmanOffset = .1
def hook(module, grad_input, grad_output):
    # Use custom gradient output
    return grad_output * temperature + fahlmanOffset
m.register_backward_hook(hook)
a.grad.zero_()
m(a).mean().backward()
print(a.grad)
# shows a 2x2 tensor with modified gradient
(courtesy of this answer)
| https://stackoverflow.com/questions/62903224/ |
Can't parse arguments (deep learning tutorial using pytorch) | I'm following this tutorial.
The first scripts run fine, and I have a "data" folder in my scripts' folder containing the MRI data downloaded from MRNet.
However, when it comes to the "train" script I get an error. Here's the full script and the error (using Jupyter Notebook):
import shutil
import os
import time
from datetime import datetime
import argparse
import numpy as np
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torchsample.transforms import RandomRotate, RandomTranslate, RandomFlip, ToTensor, Compose, RandomAffine
from torchvision import transforms
import torch.nn.functional as F
from tensorboardX import SummaryWriter
import nbimporter
from dataloader import MRDataset
import model
from sklearn import metrics
def train_model(model, train_loader, epoch, num_epochs, optimizer, writer, current_lr, log_every=100):
    _ = model.train()
    if torch.cuda.is_available():
        model.cuda()
    y_preds = []
    y_trues = []
    losses = []
    for i, (image, label, weight) in enumerate(train_loader):
        optimizer.zero_grad()
        if torch.cuda.is_available():
            image = image.cuda()
            label = label.cuda()
            weight = weight.cuda()
        label = label[0]
        weight = weight[0]
        prediction = model.forward(image.float())
        loss = torch.nn.BCEWithLogitsLoss(weight=weight)(prediction, label)
        loss.backward()
        optimizer.step()
        loss_value = loss.item()
        losses.append(loss_value)
        probas = torch.sigmoid(prediction)
        y_trues.append(int(label[0][1]))
        y_preds.append(probas[0][1].item())
        try:
            auc = metrics.roc_auc_score(y_trues, y_preds)
        except:
            auc = 0.5
        writer.add_scalar('Train/Loss', loss_value,
                          epoch * len(train_loader) + i)
        writer.add_scalar('Train/AUC', auc, epoch * len(train_loader) + i)
        if (i % log_every == 0) & (i > 0):
            print('''[Epoch: {0} / {1} |Single batch number : {2} / {3} ]| avg train loss {4} | train auc : {5} | lr : {6}'''.
                  format(
                      epoch + 1,
                      num_epochs,
                      i,
                      len(train_loader),
                      np.round(np.mean(losses), 4),
                      np.round(auc, 4),
                      current_lr
                  ))
    writer.add_scalar('Train/AUC_epoch', auc, epoch + i)
    train_loss_epoch = np.round(np.mean(losses), 4)
    train_auc_epoch = np.round(auc, 4)
    return train_loss_epoch, train_auc_epoch

def evaluate_model(model, val_loader, epoch, num_epochs, writer, current_lr, log_every=20):
    _ = model.eval()
    if torch.cuda.is_available():
        model.cuda()
    y_trues = []
    y_preds = []
    losses = []
    for i, (image, label, weight) in enumerate(val_loader):
        if torch.cuda.is_available():
            image = image.cuda()
            label = label.cuda()
            weight = weight.cuda()
        label = label[0]
        weight = weight[0]
        prediction = model.forward(image.float())
        loss = torch.nn.BCEWithLogitsLoss(weight=weight)(prediction, label)
        loss_value = loss.item()
        losses.append(loss_value)
        probas = torch.sigmoid(prediction)
        y_trues.append(int(label[0][1]))
        y_preds.append(probas[0][1].item())
        try:
            auc = metrics.roc_auc_score(y_trues, y_preds)
        except:
            auc = 0.5
        writer.add_scalar('Val/Loss', loss_value, epoch * len(val_loader) + i)
        writer.add_scalar('Val/AUC', auc, epoch * len(val_loader) + i)
        if (i % log_every == 0) & (i > 0):
            print('''[Epoch: {0} / {1} |Single batch number : {2} / {3} ] | avg val loss {4} | val auc : {5} | lr : {6}'''.
                  format(
                      epoch + 1,
                      num_epochs,
                      i,
                      len(val_loader),
                      np.round(np.mean(losses), 4),
                      np.round(auc, 4),
                      current_lr
                  ))
    writer.add_scalar('Val/AUC_epoch', auc, epoch + i)
    val_loss_epoch = np.round(np.mean(losses), 4)
    val_auc_epoch = np.round(auc, 4)
    return val_loss_epoch, val_auc_epoch

def get_lr(optimizer):
    for param_group in optimizer.param_groups:
        return param_group['lr']

def run(args):
    log_root_folder = "./logs/{0}/{1}/".format(args.task, args.plane)
    if args.flush_history == 1:
        objects = os.listdir(log_root_folder)
        for f in objects:
            if os.path.isdir(log_root_folder + f):
                shutil.rmtree(log_root_folder + f)
    now = datetime.now()
    logdir = log_root_folder + now.strftime("%Y%m%d-%H%M%S") + "/"
    os.makedirs(logdir)
    writer = SummaryWriter(logdir)
    augmentor = Compose([
        transforms.Lambda(lambda x: torch.Tensor(x)),
        RandomRotate(25),
        RandomTranslate([0.11, 0.11]),
        RandomFlip(),
        transforms.Lambda(lambda x: x.repeat(3, 1, 1, 1).permute(1, 0, 2, 3)),
    ])
    train_dataset = MRDataset('./data/', args.task,
                              args.plane, transform=augmentor, train=True)
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=1, shuffle=True, num_workers=11, drop_last=False)
    validation_dataset = MRDataset(
        './data/', args.task, args.plane, train=False)
    validation_loader = torch.utils.data.DataLoader(
        validation_dataset, batch_size=1, shuffle=True, num_workers=11, drop_last=False)
    mrnet = model.MRNet()
    if torch.cuda.is_available():
        mrnet = mrnet.cuda()
    optimizer = optim.Adam(mrnet.parameters(), lr=args.lr, weight_decay=0.1)
    if args.lr_scheduler == "plateau":
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, patience=3, factor=.3, threshold=1e-4, verbose=True)
    elif args.lr_scheduler == "step":
        scheduler = torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=3, gamma=args.gamma)
    best_val_loss = float('inf')
    best_val_auc = float(0)
    num_epochs = args.epochs
    iteration_change_loss = 0
    patience = args.patience
    log_every = args.log_every
    t_start_training = time.time()
    for epoch in range(num_epochs):
        current_lr = get_lr(optimizer)
        t_start = time.time()
        train_loss, train_auc = train_model(
            mrnet, train_loader, epoch, num_epochs, optimizer, writer, current_lr, log_every)
        val_loss, val_auc = evaluate_model(
            mrnet, validation_loader, epoch, num_epochs, writer, current_lr)
        if args.lr_scheduler == 'plateau':
            scheduler.step(val_loss)
        elif args.lr_scheduler == 'step':
            scheduler.step()
        t_end = time.time()
        delta = t_end - t_start
        print("train loss : {0} | train auc {1} | val loss {2} | val auc {3} | elapsed time {4} s".format(
            train_loss, train_auc, val_loss, val_auc, delta))
        iteration_change_loss += 1
        print('-' * 30)
        if val_auc > best_val_auc:
            best_val_auc = val_auc
            if bool(args.save_model):
                file_name = f'model_{args.prefix_name}_{args.task}_{args.plane}_val_auc_{val_auc:0.4f}_train_auc_{train_auc:0.4f}_epoch_{epoch+1}.pth'
                for f in os.listdir('./models/'):
                    if (args.task in f) and (args.plane in f) and (args.prefix_name in f):
                        os.remove(f'./models/{f}')
                torch.save(mrnet, f'./models/{file_name}')
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            iteration_change_loss = 0
        if iteration_change_loss == patience:
            print('Early stopping after {0} iterations without the decrease of the val loss'.
                  format(iteration_change_loss))
            break
    t_end_training = time.time()
    print(f'training took {t_end_training - t_start_training} s')

def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument('-t', '--task', type=str, required=True,
                        choices=['abnormal', 'acl', 'meniscus'])
    parser.add_argument('-p', '--plane', type=str, required=True,
                        choices=['sagittal', 'coronal', 'axial'])
    parser.add_argument('--prefix_name', type=str, required=True)
    parser.add_argument('--augment', type=int, choices=[0, 1], default=1)
    parser.add_argument('--lr_scheduler', type=str,
                        default='plateau', choices=['plateau', 'step'])
    parser.add_argument('--gamma', type=float, default=0.5)
    parser.add_argument('--epochs', type=int, default=50)
    parser.add_argument('--lr', type=float, default=1e-5)
    parser.add_argument('--flush_history', type=int, choices=[0, 1], default=0)
    parser.add_argument('--save_model', type=int, choices=[0, 1], default=1)
    parser.add_argument('--patience', type=int, default=5)
    parser.add_argument('--log_every', type=int, default=100)
    args = parser.parse_args()
    return args

if __name__ == "__main__":
    args = parse_arguments()
    run(args)
Error:
usage: ipykernel_launcher.py [-h] -t {abnormal,acl,meniscus} -p
{sagittal,coronal,axial} --prefix_name
PREFIX_NAME [--augment {0,1}]
[--lr_scheduler {plateau,step}] [--gamma GAMMA]
[--epochs EPOCHS] [--lr LR]
[--flush_history {0,1}] [--save_model {0,1}]
[--patience PATIENCE] [--log_every LOG_EVERY]
ipykernel_launcher.py: error: the following arguments are required: -t/--task, -p/--plane, --prefix_name
%tb:
---------------------------------------------------------------------------
SystemExit Traceback (most recent call last)
<ipython-input-3-e6a34ab63dc0> in <module>
275
276 if __name__ == "__main__":
--> 277 args = parse_arguments()
278 run(args)
<ipython-input-3-e6a34ab63dc0> in parse_arguments()
270 parser.add_argument('--patience', type=int, default=5)
271 parser.add_argument('--log_every', type=int, default=100)
--> 272 args = parser.parse_args()
273 return args
274
~\anaconda3\envs\Pytorch\lib\argparse.py in parse_args(self, args, namespace)
1753 # =====================================
1754 def parse_args(self, args=None, namespace=None):
-> 1755 args, argv = self.parse_known_args(args, namespace)
1756 if argv:
1757 msg = _('unrecognized arguments: %s')
~\anaconda3\envs\Pytorch\lib\argparse.py in parse_known_args(self, args, namespace)
1785 # parse the arguments and exit if there are any errors
1786 try:
-> 1787 namespace, args = self._parse_known_args(args, namespace)
1788 if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
1789 args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
~\anaconda3\envs\Pytorch\lib\argparse.py in _parse_known_args(self, arg_strings, namespace)
2020 if required_actions:
2021 self.error(_('the following arguments are required: %s') %
-> 2022 ', '.join(required_actions))
2023
2024 # make sure all required groups had one option present
~\anaconda3\envs\Pytorch\lib\argparse.py in error(self, message)
2506 self.print_usage(_sys.stderr)
2507 args = {'prog': self.prog, 'message': message}
-> 2508 self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
~\anaconda3\envs\Pytorch\lib\argparse.py in exit(self, status, message)
2493 if message:
2494 self._print_message(message, _sys.stderr)
-> 2495 _sys.exit(status)
2496
2497 def error(self, message):
SystemExit: 2
I have no clue how to move forward. I'm stranded here. Does anyone know where to go from here?
| I will guess.
ArgumentParser was designed to read arguments when you run the script in a console/terminal, not in Jupyter:
python script.py -t abnormal -p axial --prefix_name abc
and Python puts these arguments as a list in sys.argv, and ArgumentParser automatically uses the values from sys.argv in parser.parse_args()
If you want to run it in Jupyter then you have to send the arguments manually as a list
args = parser.parse_args( ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"] )
or you have to append argument to sys.argv
sys.argv.append("-t")
sys.argv.append("abnormal")
sys.argv.append("-p")
sys.argv.append("axial")
sys.argv.append("--prefix_name")
sys.argv.append("abc")
or using .extend( list )
sys.argv.extend( ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"] )
or using string with split(" ")
sys.argv.extend( "-t abnormal -p axial --prefix_name abc".split(' ') )
but if you run it in Jupyter many times with different arguments then it will remember all previous arguments and you will need to remove them
sys.argv.clear()
sys.argv.extend( ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"] )
or replace all elements (except first which sometimes can be useful)
sys.argv[1:] = ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"]
if __name__ == "__main__":
sys.argv[1:] = ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"]
args = parse_arguments()
run(args)
Alternatively, you can change the function parse_arguments() to make it more universal.
You can set
def parse_arguments(arguments=None):
# ... code ...
args = parser.parse_args(arguments)
and then you can use it in a script which you run in a console/terminal
args = parse_arguments()
or in Jupyter
args = parse_arguments( ["-t", "abnormal", "-p", "axial", "--prefix_name", "abc"] )
BTW:
sys.argv.append() and sys.argv.extend() can also be useful to add some options to all executions when you run in a console/terminal
There is a module shlex for splitting arguments that contain spaces, like -msg "Hello World"
A plain text.split(" ") will create the incorrect list ['-msg', '"Hello', 'World"']
shlex.split(text) will create correct list ['-msg', 'Hello World']
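For example:
import shlex

text = '-msg "Hello World"'
print(text.split(" "))    # ['-msg', '"Hello', 'World"']
print(shlex.split(text))  # ['-msg', 'Hello World']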
| https://stackoverflow.com/questions/62904639/ |
PyTorch - Element-wise signed min/max? | I may be missing something obvious, but I can't find a way to compute this.
Given two tensors, I want to keep, element-wise, the value with the smaller absolute value, together with its sign.
I thought about
sign_x = torch.sign(x)
sign_y = torch.sign(y)
min = torch.min(torch.abs(x), torch.abs(y))
in order to eventually multiply the signs with the obtained minimums, but then I have no way to apply the correct sign to each element that was kept without arbitrarily choosing one of the two tensors.
| Here is one way to do it. Multiply torch.sign(x) and torch.sign(y) by a tensor of booleans representing whether x or y is the result of the min calculation. Then take the logical or (|) of the two resulting tensors to combine them, and multiply that by the min calculation.
mins = torch.min(torch.abs(x), torch.abs(y))
xSigns = (mins == torch.abs(x)) * torch.sign(x)
ySigns = (mins == torch.abs(y)) * torch.sign(y)
finalSigns = xSigns.int() | ySigns.int()
result = mins * finalSigns
If x and y have the same absolute value for a certain element, in the code above the sign of x takes precedence. For y to take precedence, swap the order and use finalSigns = ySigns.int() | xSigns.int() instead.
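As a side note, a shorter alternative sketch uses torch.where to pick, element-wise, whichever input has the smaller magnitude (here x wins ties):
import torch

x = torch.tensor([ 1., -4.,  3.])
y = torch.tensor([-2.,  2., -3.])
result = torch.where(x.abs() <= y.abs(), x, y)
print(result)  # tensor([1., 2., 3.]) -- x wins the tie in the last slot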
| https://stackoverflow.com/questions/62907427/ |
PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))? | In PyTorch, what is the difference between the following two methods in sending a tensor (or model) to GPU:
Setup:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) # X = model()
X = torch.DoubleTensor(X)
Method 1:
X.cuda()
Method 2:
device = torch.device("cuda:0")
X = X.to(device)
(I don't really need a detailed explanation of what is happening in the backend, just want to know if they are both essentially doing the same thing)
| There is no difference between the two.
Early versions of pytorch had .cuda() and .cpu() methods to move tensors and models from cpu to gpu and back. However, this made code writing a bit cumbersome:
if cuda_available:
x = x.cuda()
model.cuda()
else:
x = x.cpu()
model.cpu()
Later versions introduced .to() that basically takes care of everything in an elegant way:
device = torch.device('cuda') if cuda_available else torch.device('cpu')
x = x.to(device)
model = model.to(device)
| https://stackoverflow.com/questions/62907815/ |
tensorflow's TimeDistributed equivalent in PyTorch | Is there any equivalent implementation of tensorflow.keras.layers.TimeDistributed for pytorch?
I am trying to build something like
TimeDistributed(Resnet50()).
| Credit to miguelvr on this topic.
You can use this code, which is a PyTorch module developed to mimic the TimeDistributed wrapper.
import torch.nn as nn
class TimeDistributed(nn.Module):
def __init__(self, module, batch_first=False):
super(TimeDistributed, self).__init__()
self.module = module
self.batch_first = batch_first
def forward(self, x):
if len(x.size()) <= 2:
return self.module(x)
# Squash samples and timesteps into a single axis
x_reshape = x.contiguous().view(-1, x.size(-1)) # (samples * timesteps, input_size)
y = self.module(x_reshape)
# We have to reshape Y
if self.batch_first:
y = y.contiguous().view(x.size(0), -1, y.size(-1)) # (samples, timesteps, output_size)
else:
y = y.view(-1, x.size(1), y.size(-1)) # (timesteps, samples, output_size)
return y
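A quick usage sketch (a small Linear layer stands in for Resnet50 just to keep the example short):
import torch
import torch.nn as nn

layer = TimeDistributed(nn.Linear(128, 10), batch_first=True)
x = torch.randn(4, 7, 128)  # (samples, timesteps, input_size)
y = layer(x)
print(y.shape)              # torch.Size([4, 7, 10])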
| https://stackoverflow.com/questions/62912239/ |
PyTorch value thresholding and zeroing all other values | I have 2D tensor in PyTorch, representing model confidences. I want:
if 2nd value in row is greater or equal to threshold, all other values should be changed to 0
else values should not change
The simple approach would be:
iterate through rows
check 2nd value
if value is greater or equal, create row of zeroes, change 2nd value to the 2nd value from row and replace row
else don't do anything
It is inefficient, however. Is there a vectorized / tensorized way to do this?
| I would do this by first constructing a new zero matrix, and then moving items from your matrix to the zero matrix as needed. You copy all rows whose second element is below the threshold; for all other rows, you only copy the second element.
import torch
threshold = .2
X = torch.rand((100, 10))
new = torch.zeros_like(X)
mask = X[:, 1] < threshold  # the "2nd value" is index 1; these rows stay unchanged
new[mask] = X[mask]
new[~mask, 1] = X[~mask, 1]
| https://stackoverflow.com/questions/62915838/ |
How does one use torch.optim.lr_scheduler.OneCycleLR()? | I wanted to use torch.optim.lr_scheduler.OneCycleLR() while training. Can some kindly explain to me how to use it?
What i got from the documentation was that it should be called after each train_batch.
My confusions are as follows:
Does the max_lr parameter has to be same with the optimizer lr parameter?
Can this scheduler be used with Adam optimizer. How is the momentum calculated then?
Let’s say i trained my model for some number of epochs at a stretch now, i wanted to train for some more epochs. Would i have to reset the the scheduler?
Can anybody provide me a sort of a toy example/training loop that implements this scheduler?
I am kind of new to deep learning & PyTorch so my question might be somewhat silly.
| You might get some use out of this thread: How to use Pytorch OneCycleLR in a training loop (and optimizer/scheduler interactions)?
But to address your points:
Does the max_lr parameter has to be same with the optimizer lr parameter? No, this is the max or highest value -- a hyperparameter that you will experiment with. Notice in the paper the use of max_lr: https://arxiv.org/pdf/1708.07120.pdf
Can this scheduler be used with Adam optimizer. How is the momentum calculated then? Yes. With cycle_momentum enabled (the default), OneCycleLR detects that Adam uses betas rather than a momentum parameter and cycles beta1 between base_momentum and max_momentum.
Let’s say i trained my model for some number of epochs at a stretch now, i wanted to train for some more epochs. Would i have to reset the the scheduler? Depends, are you loading the model from a saved checkpoint or not? Check PyTorch's tutorials: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
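Since you asked for a toy example, here is a minimal sketch of a training loop; the data, model, and hyperparameter values are placeholders, not taken from your setup:
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
train_loader = DataLoader(dataset, batch_size=8)
model = nn.Linear(10, 2)
num_epochs = 3

optimizer = optim.Adam(model.parameters(), lr=1e-5)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-3,
                                          steps_per_epoch=len(train_loader),
                                          epochs=num_epochs)

for epoch in range(num_epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped after every batch, not every epoch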
| https://stackoverflow.com/questions/62917353/ |
Create a tensor view (no data copy) where one row/col is removed | TL;DR: Is there a way to delete one row or column of a torch tensor's view without creating a copy of the underlying data?
Background:
Starting from a base tensor (samples, features) of shape (2000,10000) (about 90MB), I want to create a 2-tuple of new views for each feature that consist of the one-feature-training-target with shape (2000,1) and the training inputs with shape (2000,9999) containing the remaining features.
As I want to train one rather simple model for each feature (10k) and as many as possible in parallel, I would like to have this data available in parallel, but my memory does not allow for this unless the input tensors are views.
While slicing creates views and can easily be used for the targets, so far I haven't found a slicing or function that can be used for the inputs. Creating two slices and concatenating them again creates a copy. The same happens when I use a mask to remove the unwanted feature.
| Currently, this is not possible.
PyTorch's dense tensors are always represented by a dense data array and the view only changes where the indices break into the next dimension.
PyTorch's sparse tensors only do the opposite of what you intend, they allow to spread the densely stored data to be interpreted as being spread out.
That being said, you could design your models to take the (2000, 10000) sized tensor as input and effectively remove the target input-feature from the output of your first layer.
Let's say your first layer is a torch.nn.Linear which means that there is an underlying matrix of shape (10000, d_out) (where d_out is the intended output size of the layer - actually, the current implementation uses the transposed weight matrix but I'll ignore that in the examples unless it's explicitly layer.weight that's being used). So your first internal feature tensor is mathematically equivalent to:
layer_output = x.matmul(weights) + biases
which you can rid of input feature i_target by:
layer_output = x.matmul(weights) + biases - x[:,i_target].unsqueeze(dim=-1) * weights[i_target]
or for layer = torch.nn.Linear(10000, d_out):
layer_output = layer(x) - x[:,i_target].unsqueeze(dim=-1) * layer.weight[:,i_target]
So the rest of the network needs no adjustments. A similar approach is also possible for convolution layers.
Also note that this is only necessary for training. If you want your final model to take samples with 9999 features you can just erase the i_target-th column from the linear layer's weights matrix and remove the
- x[:,i_target].unsqueeze(dim=-1) * weights[i_target]
part.
Final note:
Personally I wouldn't train 10k individual models but rather have a single model with 10k outputs where each of the outputs only depends on 9999 of the input features. Of course this makes the model design much more difficult as it requires a lot of care to make sure that each layer's output contains features that don't depend on one of the input features, but it removes a lot of communication overhead that comes with processing lots of small operations on the GPU (which can be devastating in my experience).
| https://stackoverflow.com/questions/62921102/ |
Is it possible to use non-PyTorch augmentation in transforms.Compose | I am working on a data classification problem that takes images as an input in Pytorch. I would like to use the imgaug library, but unfortunately I keep on getting errors. Here is my code.
#import necessary libraries
from torch import nn
from torchvision import models
import imgaug as ia
import imgaug.augmenters as iaa
from torchvision import datasets
from torch.utils.data.dataloader import DataLoader
from torchvision import transforms
from torch import optim
import numpy as np
from PIL import Image
import glob
from matplotlib import image
#preprocess images
#create data transformers
seq = iaa.Sequential([iaa.Sometimes(0.5,iaa.GaussianBlur(sigma=(0,3.0))),
iaa.Sometimes(0.5,iaa.LinearContrast((0.75,1.5))),
iaa.AdditiveGaussianNoise(loc=0,scale=(0.0,0.05*255),per_channel=0.5),
iaa.Sometimes(0.5,iaa.Affine(
scale={"x": (0.8, 1.2), "y": (0.8, 1.2)},
translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)},
rotate=(-25, 25),
shear=(-8, 8)))],random_order=True)
train_transformation = transforms.Compose([transforms.RandomResizedCrop(300),
seq,
transforms.ToTensor()])
train_data = datasets.ImageFolder(root = 'train')
train_loader = DataLoader(train_data,shuffle = True,batch_size = 32,num_workers = 0)
train_iter = iter(train_loader)
train_iter.next()
Error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
20 train_loader = DataLoader(train_data,shuffle = True,batch_size = 32,num_workers = 0)
21 train_iter = iter(train_loader)
---> 22 train_iter.next()
D:\Python\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
343
344 def __next__(self):
--> 345 data = self._next_data()
346 self._num_yielded += 1
347 if self._dataset_kind == _DatasetKind.Iterable and \
D:\Python\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
383 def _next_data(self):
384 index = self._next_index() # may raise StopIteration
--> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
386 if self._pin_memory:
387 data = _utils.pin_memory.pin_memory(data)
D:\Python\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
45 else:
46 data = self.dataset[possibly_batched_index]
---> 47 return self.collate_fn(data)
D:\Python\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
77 elif isinstance(elem, container_abcs.Sequence):
78 transposed = zip(*batch)
---> 79 return [default_collate(samples) for samples in transposed]
80
81 raise TypeError(default_collate_err_msg_format.format(elem_type))
D:\Python\lib\site-packages\torch\utils\data\_utils\collate.py in (.0)
77 elif isinstance(elem, container_abcs.Sequence):
78 transposed = zip(*batch)
---> 79 return [default_collate(samples) for samples in transposed]
80
81 raise TypeError(default_collate_err_msg_format.format(elem_type))
D:\Python\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
79 return [default_collate(samples) for samples in transposed]
80
---> 81 raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found
I am aware that the input to the imgaug transformer must be a numpy array, but I am not sure how to incorporate that into my transforms.Compose (if I can at all, that is). When the imgaug seq is not in the transforms.Compose it works properly.
Thank you for the help!
| Looking at the documentation of transforms in pytorch gives us a hint of how to do it: https://pytorch.org/docs/stable/torchvision/transforms.html#generic-transforms
I would try something like this (imgaug works on numpy arrays, so the PIL image has to be converted first):
train_transformation = transforms.Compose([transforms.RandomResizedCrop(300),
                                           transforms.Lambda(lambda x: seq.augment_image(np.array(x))),
                                           transforms.ToTensor()])
Also remember to actually pass it to the dataset, e.g. datasets.ImageFolder(root='train', transform=train_transformation), otherwise the loader returns raw PIL images.
| https://stackoverflow.com/questions/62922324/ |
Why does an indexed tensor retain the original tensor's stride? | My understanding of stride is the number of steps you'd need to take to jump to the next part in an axis. So if you have a stride of (3,1) you'd need to take 3 steps to get to the next row and 1 step to get to the next column (assuming the first axis is row and the second axis is column).
Yet when I index into a pytorch tensor, b[1:, 1:] with shape (3,3) lopping off the first row
and the first column, then query its stride, I get (3,1) instead of (2,1).
Why is this the case?
import unittest
import torch
class TestChapter3(unittest.TestCase):
def setUp(self):
self.a = torch.tensor(list(range(9)))
self.b = self.a.view(3,3)
self.c = self.b[1:, 1:]
def test_index_of_view(self):
print(self.c)
self.assertEqual(self.c.size(), torch.Size([2, 2]))
self.assertEqual(self.c.storage_offset(), 4)
self.assertEqual(self.c.stride(), (2, 1)) # self.c.stride() is actually (3,1)
if __name__ == "__main__":
unittest.main()
| The self.c tensor still uses the same underlying data storage as self.a and self.b. They are all just different views on the same storage.
self.c.storage_offset() tells you where the view of self.c begins (skipping the first entire row and the first element of the second row = 4 elements in total), but the data is still stored in memory as one long array.
Since the underlying storage is still the same, the memory address to go one row down is still 3*sizeof(float) bytes or three elements further, which is what self.c.stride()[0] says.
In your example we have:
self.a = (visible=[0, 1, 2, 3, 4, 5, 6, 7, 8], data=[0, 1, 2, 3, 4, 5, 6, 7, 8])
self.b = (visible=[
[0, 1, 2],
[3, 4, 5],
[6, 7, 8]
], data=[0, 1, 2, 3, 4, 5, 6, 7, 8])
self.c = (visible=[
[4, 5],
[7, 8]
], data=[4, 5, 6, 7, 8])
And yes, the 6 in the last data array is not a mistake. The last data array is just an offset view of the same memory that self.a and self.b are referencing.
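You can verify both numbers directly; .contiguous() copies the view into fresh storage, which then has the compact (2, 1) stride you expected:
import torch

a = torch.tensor(list(range(9)))
b = a.view(3, 3)
c = b[1:, 1:]
print(c.storage_offset())       # 4
print(c.stride())               # (3, 1) -- still 3 elements to the next row
print(c.contiguous().stride())  # (2, 1) -- after copying into fresh storage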
| https://stackoverflow.com/questions/62922796/ |
TypeError: pic should be PIL Image or ndarray. Got | I would like to split images using OpenCV and then feed them to the PyTorch model to count objects in each image. I am getting the following error message when running the code: TypeError: pic should be PIL Image or ndarray. Got <class ‘bool’>
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms, utils
#from torchvision.transforms import Grayscalei
import pandas as pd
import pdb
import cv2
class CellsDataset(Dataset):
# a very simple dataset
def __init__(self, root_dir, transform=None, return_filenames=False):
self.root = root_dir
self.transform = transform
self.return_filenames = return_filenames
self.files = [os.path.join(self.root,filename) for filename in os.listdir(self.root)]
self.files = [path for path in self.files
if os.path.isfile(path) and os.path.splitext(path)[1]=='.png']
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
path = self.files[idx]
sample = Image.open(path)
sample = cv2.imread(path)
b,g,r=cv2.split(sample)
sample=cv2.imwrite('sample.png', g)
#transform3 = Grayscale(num_output_channels=3)
#sample = transform3(sample) # convert to a 3 channel grayscale, as it needs to be 3 channel.
if self.transform:
sample = self.transform(sample)
if self.return_filenames:
return sample, path
else:
return sample
| The variable sample is reassigned to what cv2.imwrite returns in this line:
sample=cv2.imwrite('sample.png', g)
That replaces your ndarray with something else.
Try replacing that line with simply cv2.imwrite('sample.png', g) without assigning the value of the evaluated expression to sample.
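A minimal sketch of the corrected __getitem__, assuming you want the green channel to be the image that gets transformed:
def __getitem__(self, idx):
    path = self.files[idx]
    sample = cv2.imread(path)     # ndarray in BGR order
    b, g, r = cv2.split(sample)
    cv2.imwrite('sample.png', g)  # imwrite returns a bool; don't reassign sample
    sample = g                    # keep the green channel as the ndarray

    if self.transform:
        sample = self.transform(sample)

    if self.return_filenames:
        return sample, path
    else:
        return sample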
| https://stackoverflow.com/questions/62924547/ |
How to adjust the batch data by the amount of labels in PyTorch | I have made n-grams / doc-ids for document classification,
def create_dataset(tok_docs, vocab, n):
n_grams = []
document_ids = []
for i, doc in enumerate(tok_docs):
for n_gram in [doc[0][i:i+n] for i in range(len(doc[0]) - 1)]:
n_grams.append(n_gram)
document_ids.append(i)
return n_grams, document_ids
def create_pytorch_datasets(n_grams, doc_ids):
n_grams_tensor = torch.tensor(n_grams)
doc_ids_tensor = torch.tensor(doc_ids)
full_dataset = TensorDataset(n_grams_tensor, doc_ids_tensor)
return full_dataset
create_dataset returns pair of (n-grams, document_ids) like below:
n_grams, doc_ids = create_dataset( ... )
train_data = create_pytorch_datasets(n_grams, doc_ids)
>>> train_data[0:100]
(tensor([[2076, 517, 54, 3647, 1182, 7086],
[517, 54, 3647, 1182, 7086, 1149],
...
]),
tensor(([0, 0, 0, 0, 0, ..., 3, 3, 3]))
train_loader = DataLoader(train_data, batch_size = batch_size, shuffle = True)
The first tensor contains the n-grams and the second one the doc_ids.
But as you know, the amount of training data per label changes with the length of the documents.
If one document is very long, there will be many pairs carrying its label in the training data.
I think this can cause overfitting, because the classification model will tend to classify inputs as belonging to the long documents.
So, I want to sample input batches from a uniform distribution over the labels (doc_ids). How can I do that in the code above?
P.S.:
If train_data looks like below, I want to sample batches with probabilities like this:
n-grams doc_ids
([1, 2, 3, 4], 1) ====> 0.33
([1, 3, 5, 7], 2) ====> 0.33
([2, 3, 4, 5], 3) ====> 0.33 * 0.25
([3, 5, 2, 5], 3) ====> 0.33 * 0.25
([6, 3, 4, 5], 3) ====> 0.33 * 0.25
([2, 3, 1, 5], 3) ====> 0.33 * 0.25
| In pytorch you can specify a sampler or a batch_sampler to the dataloader to change how the sampling of datapoints is done.
docs on the dataloader:
https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler
documentation on the sampler: https://pytorch.org/docs/stable/data.html#torch.utils.data.Sampler
For instance, you can use the WeightedRandomSampler to specify a weight to every datapoint. The weighting can be the inverse length of the document for instance.
I would make the following modifications in the code:
def create_dataset(tok_docs, vocab, n):
n_grams = []
document_ids = []
weights = [] # << list of weights for sampling
for i, doc in enumerate(tok_docs):
for n_gram in [doc[0][i:i+n] for i in range(len(doc[0]) - 1)]:
n_grams.append(n_gram)
document_ids.append(i)
weights.append(1/len(doc[0])) # << ngrams of long documents are sampled less often
return n_grams, document_ids, weights
sampler = WeightedRandomSampler(weights, len(weights), replacement=True) # << create the sampler; draw as many samples per epoch as there are datapoints
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=False, sampler=sampler) # << includes the sampler in the dataloader
| https://stackoverflow.com/questions/62929637/ |
GPU memory leakage when creating objects from sentence-transformers | Description
I am creating a function in R that embeds sentences using the sentence_transformers library from Python.
For some unknown reason, creating the object multiple times under the same variable name eventually leaves insufficient GPU memory to allocate the transformer. To reproduce:
sentence_transformers <- reticulate::import("sentence_transformers")
for (i in 1:10) {
print(i)
bert_encoder <- sentence_transformers$SentenceTransformer("bert-large-nli-stsb-mean-tokens")
}
However, doing the same operation directly on Python does not produce an error
from sentence_transformers import SentenceTransformer
for i in range(10):
print(i)
    bert_encoder = SentenceTransformer("bert-large-nli-stsb-mean-tokens")
This happens with any model that is allocated in GPU. On my NVIDIA GTX 1060 it reaches the 4th cycle, but on smaller GPUs it crashes earlier. One temporary solution is to create the model outside the function only once, and then pass the model as a parameter to the function as many times as wanted, but I would rather avoid that because it adds an extra step and in any case calling multiple models might just make it crash as well.
Expected behaviour
The for loop finishes without an error
Observed behaviour
Error in py_call_impl(callable, dots$args, dots$keywords) :
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 2.95 GiB already allocated; 16.11 MiB free; 238.68 MiB cached)
Unsuccessful attempts at solving it
The solutions proposed here
Using numba as suggested here
Declaring the variable explicitely on Python via reticulate::py_run_string() and then doing del bert_encoder and calling the garbage collector
Details
Windows 10 Home
Python 3.7.4
R 4.0.1
Reticulate 1.16
Torch 1.3.1
Tensorflow 2.2.0
Transformers 2.11.0
sentence_transformers 0.2.6
| Ok so I am posting my solution for anyone else having this issue.
After each call to the model as
sentence_transformers <- import("sentence_transformers")
encoder <- sentence_transformers$SentenceTransformer("bert-large-nli-stsb-mean-tokens")
I release GPU memory using
# Has this been done on a GPU?
py <- reticulate::py_run_string("import torch
is_cuda_available = torch.cuda.is_available()")
# Release GPU
if (isTRUE(reticulate::py$is_cuda_available)) {
tryCatch(reticulate::py_run_string("del encoder"),
warning = function(e) {},
error = function(e) {})
tryCatch(rm(encoder),
warning = function(e) {},
error = function(e) {})
gc(full = TRUE, verbose = FALSE)
py <- reticulate::py_run_string("import torch
torch.cuda.empty_cache()")
}
and it works perfectly.
| https://stackoverflow.com/questions/62931082/ |
Pytorch geometric Data object edge_attr for undirected graphs | How would one construct an edge_attr list for undirected graphs in a pytorch geometric Data object? Say we had an undirected graph such as this one. The graph COO matrix required by the pytorch geometric object is:
[['a', 'b', 'a', 'c', 'b', 'c'] ['b', 'a', 'c', 'a', 'c', 'b']]
How would one construct the edge_attr list then (which is an array of one-hot encoded vectors for the features of each edge). Since the graph is undirected, would one simply append two of the same one-hot encoded feature vectors at a time. For example, say these are the edge features :
(a, b) = [0,0,0,1] (a, c) = [1,0,0,0] (b, c) = [0,1,0,0]
Would the edge_attr list look like:
[[0,0,0,1], [0,0,0,1], [1,0,0,0], [1,0,0,0], [0,1,0,0], [0,1,0,0]
Note how each one-hot encoded feature vector is repeated twice, and the feature vector index in edge_attr corresponds to its respective edge in the graph COO matrix. Because the graph is undirected, we just use the same feature vector for both directions. Is this the right way to do it, or is there some other way?
| The answer I figured out is yes. In an undirected graph for pytorch_geometric, the edge features would need to be repeated twice, once for each respective connection present in the COO matrix.
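A minimal sketch for the example graph above, mapping a=0, b=1, c=2 (the node features x are just dummies):
import torch
from torch_geometric.data import Data

edge_index = torch.tensor([[0, 1, 0, 2, 1, 2],
                           [1, 0, 2, 0, 2, 1]], dtype=torch.long)

# each undirected edge contributes two identical rows, in the same order as edge_index
edge_attr = torch.tensor([[0, 0, 0, 1], [0, 0, 0, 1],
                          [1, 0, 0, 0], [1, 0, 0, 0],
                          [0, 1, 0, 0], [0, 1, 0, 0]], dtype=torch.float)

x = torch.ones(3, 1)  # dummy node features
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)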
| https://stackoverflow.com/questions/62936573/ |
Pytorch dynamic amount of Layers? | I am trying to specify a dynamic amount of layers, which I seem to be doing wrong.
My issue is that when I define the 100 layers here, I will get an error in the forward step.
But when I define the layer properly it works?
Below simplified example
class PredictFromEmbeddParaSmall(LightningModule):
def __init__(self, hyperparams={'lr': 0.0001}):
super(PredictFromEmbeddParaSmall, self).__init__()
#Input is something like tensor.size=[768*100]
self.TO_ILLUSTRATE = nn.Linear(768, 5)
self.enc_red=[]
for i in range(100):
self.enc_red.append(nn.Linear(768, 5))
# gather the layers output sth
self.dense_simple1 = nn.Linear(5*100, 2)
self.output = nn.Sigmoid()
def forward(self, x):
# first input to enc_red
x_vecs = []
for i in range(self.para_count):
layer = self.enc_red[i]
# The first dim is the batch size here, output is correct
processed_slice = x[:, i * 768:(i + 1) * 768]
# This works and give the out of size 5
rand = self.TO_ILLUSTRATE(processed_slice)
#This will fail? Error below
ret = layer(processed_slice)
#more things happening we can ignore right now since we fail earlier
I get this error when executing "ret = layer.forward(processed_slice)"
RuntimeError: Expected object of device type cuda but got device type
cpu for argument #1 'self' in call to _th_addmm
Is there a smarter way to program this? OR solve the error?
| You should use a ModuleList from pytorch instead of a plain list: https://pytorch.org/docs/master/generated/torch.nn.ModuleList.html . That is because Pytorch has to keep a graph with all modules of your model; if you just add them in a plain list, they are not properly registered as submodules, so their parameters are not moved to the GPU with the rest of the model, resulting in the error you faced.
Your code should be something like:
class PredictFromEmbeddParaSmall(LightningModule):
def __init__(self, hyperparams={'lr': 0.0001}):
super(PredictFromEmbeddParaSmall, self).__init__()
#Input is something like tensor.size=[768*100]
self.TO_ILLUSTRATE = nn.Linear(768, 5)
self.enc_red = nn.ModuleList() # << MODIFIED LINE <<
for i in range(100):
self.enc_red.append(nn.Linear(768, 5))
# gather the layers output sth
self.dense_simple1 = nn.Linear(5*100, 2)
self.output = nn.Sigmoid()
def forward(self, x):
# first input to enc_red
x_vecs = []
for i in range(self.para_count):
layer = self.enc_red[i]
# The first dim is the batch size here, output is correct
processed_slice = x[:, i * 768:(i + 1) * 768]
# This works and give the out of size 5
rand = self.TO_ILLUSTRATE(processed_slice)
#This will fail? Error below
ret = layer(processed_slice)
#more things happening we can ignore right now since we fail earlier
Then it should work all right!
Edit: alternative way.
Instead of using ModuleList you can also just use nn.Sequential; this allows you to avoid the for loop in the forward pass. It also means that you will not have access to intermediary activations, so this is not the solution for you if you need them. (Note that nn.Sequential chains the modules, feeding each module's output into the next, so make sure that is actually the behaviour you want here.)
class PredictFromEmbeddParaSmall(LightningModule):
def __init__(self, hyperparams={'lr': 0.0001}):
super(PredictFromEmbeddParaSmall, self).__init__()
#Input is something like tensor.size=[768*100]
self.TO_ILLUSTRATE = nn.Linear(768, 5)
self.enc_red = []
for i in range(100):
    self.enc_red.append(nn.Linear(768, 5))
self.enc_red = nn.Sequential(*self.enc_red) # << MODIFIED LINE <<
# gather the layers output sth
self.dense_simple1 = nn.Linear(5*100, 2)
self.output = nn.Sigmoid()
def forward(self, x):
# first input to enc_red
x_vecs = []
out = self.enc_red(x) # << MODIFIED LINE <<
| https://stackoverflow.com/questions/62937388/ |
Pytorch Import Error when deploying django on AWS | My django app works locally. When i try to deploy it and then open it on AWS, i get the following error:
I included torch==1.5.1 in my requirements.txt
Requirements.txt:
When I comment out the parts of the application that require torch and remove it from requirements.txt, it works fine. What can I do to ensure that torch is installed like the other modules in requirements.txt? Here is the link to requirements.txt
| By default torch will require CUDA and GPU. Depending on your ec2 instance this may not be viable option, thus leading to the errors you are observing.
You can install the non-CUDA version of torch as follows and see if this resolves the error:
pip3 install torch==1.5.1+cpu torchvision==0.6.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
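If that works, the same CPU-only wheels can go straight into requirements.txt, since pip accepts -f lines in requirements files:
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.5.1+cpu
torchvision==0.6.1+cpu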
| https://stackoverflow.com/questions/62940320/ |
Problem with output of neural network in a cross-entropy method attempt at solving CartPole-v0 | I am trying to implement the cross-entropy policy-based method to the classic CartPole-v0 environment. I am actually reformatting a working implementation of this algorithm on the MountainCarContinuous-v0, but when I try to get the agent learning, I get this error message:
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
in
4
5 agent = Agent(env)
----> 6 scores = agent.learn()
7
8 # plot the scores
~/cross_entropy.py in learn(self, n_iterations, max_t, gamma, print_every, pop_size, elite_frac, sigma)
83 for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
84 weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
---> 85 rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
86
87 # get the best policies
~/cross_entropy.py in (.0)
83 for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
84 weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
---> 85 rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
86
87 # get the best policies
~/cross_entropy.py in evaluate(self, weights, gamma, max_t)
56 action = self.forward(state)
57 #action = torch.argmax(action_vals).item()
---> 58 state, reward, done, _ = self.env.step(action)
59 episode_return += reward * math.pow(gamma, t)
60 if done:
/gym/wrappers/time_limit.py in step(self, action)
14 def step(self, action):
15 assert self._elapsed_steps is not None, "Cannot call env.step() before calling reset()"
---> 16 observation, reward, done, info = self.env.step(action)
17 self._elapsed_steps += 1
18 if self._elapsed_steps >= self._max_episode_steps:
/gym/envs/classic_control/cartpole.py in step(self, action)
102 def step(self, action):
103 err_msg = "%r (%s) invalid" % (action, type(action))
--> 104 assert self.action_space.contains(action), err_msg
105
106 x, x_dot, theta, theta_dot = self.state
AssertionError: tensor([ 0.3987, 0.6013]) () invalid
I found this is because the MountainCarContinuous-v0 environment has an action_space of type Box(2) whereas CartPole-v0 is Discrete(2), meaning that I only want an integer as action selection.
I have tried working around this notion by applying a softmax activation function and then took the index of the higher value as the action.
action_vals = self.forward(state)
action = torch.argmax(action_vals).item()
This gets rid of the error but when I train the agent, it seems to learn incredibly fast which is kind of an indicator that something is wrong. This is my full agent class:
class Agent(nn.Module):
def __init__(self, env, h_size=16):
super().__init__()
self.env = env
# state, hidden layer, action sizes
self.s_size = env.observation_space.shape[0]
self.h_size = h_size
self.a_size = env.action_space.n
# define layers
self.fc1 = nn.Linear(self.s_size, self.h_size)
self.fc2 = nn.Linear(self.h_size, self.a_size)
self.device = torch.device('cpu')
def set_weights(self, weights):
s_size = self.s_size
h_size = self.h_size
a_size = self.a_size
# separate the weights for each layer
fc1_end = (s_size*h_size)+h_size
fc1_W = torch.from_numpy(weights[:s_size*h_size].reshape(s_size, h_size))
fc1_b = torch.from_numpy(weights[s_size*h_size:fc1_end])
fc2_W = torch.from_numpy(weights[fc1_end:fc1_end+(h_size*a_size)].reshape(h_size, a_size))
fc2_b = torch.from_numpy(weights[fc1_end+(h_size*a_size):])
# set the weights for each layer
self.fc1.weight.data.copy_(fc1_W.view_as(self.fc1.weight.data))
self.fc1.bias.data.copy_(fc1_b.view_as(self.fc1.bias.data))
self.fc2.weight.data.copy_(fc2_W.view_as(self.fc2.weight.data))
self.fc2.bias.data.copy_(fc2_b.view_as(self.fc2.bias.data))
def get_weights_dim(self):
return (self.s_size+1)*self.h_size + (self.h_size+1)*self.a_size
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.softmax(self.fc2(x))
return x
def evaluate(self, weights, gamma=1.0, max_t=5000):
self.set_weights(weights)
episode_return = 0.0
state = self.env.reset()
for t in range(max_t):
state = torch.from_numpy(state).float().to(self.device)
action_vals = self.forward(state)
action = torch.argmax(action_vals).item()
state, reward, done, _ = self.env.step(action)
episode_return += reward * math.pow(gamma, t)
if done:
break
return episode_return
def learn(self, n_iterations=500, max_t=1000, gamma=1.0, print_every=10, pop_size=50, elite_frac=0.2, sigma=0.5):
"""PyTorch implementation of the cross-entropy method.
Params
======
n_iterations (int): maximum number of training iterations
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
pop_size (int): size of population at each iteration
elite_frac (float): percentage of top performers to use in update
sigma (float): standard deviation of additive noise
"""
n_elite=int(pop_size*elite_frac) # number of elite policies from the population
scores_deque = deque(maxlen=100) # list of the past 100 scores
scores = [] # list of all the scores
best_weight = sigma*np.random.randn(self.get_weights_dim()) # initialize the first best weight randomly
for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
# get the best policies
##
elite_idxs = rewards.argsort()[-n_elite:]
elite_weights = [weights_pop[i] for i in elite_idxs]
##
best_weight = np.array(elite_weights).mean(axis=0) # take the average of the best weights
reward = self.evaluate(best_weight, gamma=1.0) # evaluate this new policy
scores_deque.append(reward) # append the reward
scores.append(reward) # also append the reward
torch.save(self.state_dict(), 'checkpoint.pth') # save the agent
if i_iteration % print_every == 0: # print every 100 steps
print('Episode {}\tAverage Score: {:.2f}'.format(i_iteration, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0: # print if environment is solved
print('\nEnvironment solved in {:d} iterations!\tAverage Score: {:.2f}'.format(i_iteration-100, np.mean(scores_deque)))
break
return scores
If anyone has an idea on how to get the agent training properly, please give me any suggestions.
| Turns out all I needed was to add an act() method to the Agent class.
from torch.distributions import Categorical

def act(self, state):
    state = state.unsqueeze(0)
    probs = self.forward(state).cpu()
    m = Categorical(probs)
    action = m.sample()
    return action.item()
| https://stackoverflow.com/questions/62941625/ |
Pytorch convert a pd.DataFrame which is variable length sequence to tensor | I get a pandas DataFrame as follows and want to convert it to torch.tensor for embedding.
# output first 5 rows examples
print(df['col'].head(5))
col
0 [a, bc, cd]
1 [d, ed, fsd, g, h]
2 [i, hh, ihj, gfw, hah]
3 [a, cb]
4 [sad]
train_tensor = torch.from_numpy(train)
But it gets an error:
TypeError: can't convert np.ndarray of type numpy.str_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
It seems that from_numpy() doesn't support variable length sequences.
So if I want to initialize a tensor from it, what is the proper way?
And after getting the corresponding tensor, I will try to add padding to the variable length sequences and pass them through an embedding layer.
Could anyone help me?
Thanks in advance.
| There are multiple steps involved here
words to IDs
Pretrained: If you are using pretrained embeddings like Glove/word2vec, you will have to map each word to its ID in the vocabulary so that the embedding layer can load the pretrained embeddings.
In case you want to train your own embeddings you will have to map each word to an ID and save the map for later use (during predictions). This is normally called the vocabulary.
# Vocabulary to our own ID
def to_vocabulary_id(df):
word2id = {}
sentences = []
for v in df['col'].values:
row = []
for w in v:
if w not in word2id:
word2id[w] = len(word2id)+1
row.append(word2id[w])
sentences.append(row)
return sentences, word2id
df = pd.DataFrame({'col': [
['a', 'bc', 'cd'],
['d', 'ed', 'fsd', 'g', 'h'],
['i', 'hh', 'ihj', 'gfw', 'hah'],
['a', 'cb'],
['sad']]})
sentences, word2id = to_vocabulary_id(df)
Embedding layer
If our vocabulary size is say 100 and embedding size is 8, then we will create an embedding layer as below
embedding = nn.Embedding(100, 8)
Pad variable length sentences to 0 and create Tensor
data = pad_sequence([torch.LongTensor(s) for s in sentences], batch_first=True, padding_value=0)
Run through the embedding layer
Finally
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence
data = pad_sequence([torch.LongTensor(s) for s in sentences], batch_first=True, padding_value=0)
embedding = nn.Embedding(100, 8)
embedding(data).shape
Output:
torch.Size([5, 5, 8])
As you can see we have passed 5 sentences and the max length is 5. So we get embeddings of size 5 x 5 x 8, i.e. 5 sentences, 5 words each, with every word having an embedding of size 8.
| https://stackoverflow.com/questions/62949148/ |
Accuracy per epoch in PyTorch | I have made a chatbot using pytorch and would like to display accuracy on every epoch. I am not quite understanding how to do that. I can display loss but can't figure out how to display my accuracy.
Here is my code :-
from nltk_utils import tokenize, stem, bag_of_words
import json
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from model import NeuralNet
from torch.autograd import Variable
all_words=[]
tags=[]
xy=[]
questionsP1=[]
questionsP2=[]
questionsP3=[]
questionsP4=[]
questionTag={}
with open('new.json', encoding="utf8") as file:
data = json.load(file)
for intent in data["intents"]:
for proficiency in intent["proficiency"]:
for questions in proficiency["questions"]:
for responses in questions["responses"]:
wrds = tokenize(responses)
all_words.extend(wrds)
xy.append((wrds, questions["tag"]))
if questions["tag"] in tags:
print(questions["tag"])
if questions["tag"] not in tags:
tags.append(questions["tag"])
if proficiency["level"] == "P1":
questionsP1.append(questions["question"])
questionTag[questions["question"]]=questions["tag"]
if proficiency["level"] == "P2":
questionsP2.append(questions["question"])
questionTag[questions["question"]]=questions["tag"]
if proficiency["level"] == "P3":
questionsP3.append(questions["question"])
questionTag[questions["question"]]=questions["tag"]
if proficiency["level"] == "P4":
questionsP4.append(questions["question"])
questionTag[questions["question"]]=questions["tag"]
ignore_words = ['?', '!', '.', ',']
all_words = [stem(x) for x in all_words if x not in ignore_words]
all_words = sorted(set(all_words))
tags = sorted(set(tags))
X_train = []
y_train = []
for tokenized_response, tag in xy:
bag = bag_of_words(tokenized_response, all_words)
print(bag)
X_train.append( bag )
label = tags.index( tag )
y_train.append( label )
print(y_train)
X_train = np.array( X_train )
y_train = np.array( y_train )
class ChatDataset(Dataset):
def __init__(self):
self.n_samples = len(X_train)
self.x_data = X_train
self.y_data = y_train
def __getitem__(self, index):
return self.x_data[index], self.y_data[index]
def __len__(self):
return self.n_samples
#HyperParameters
batch_size = 8
hidden_size = 8
output_size = len(tags)
input_size = len(X_train[0])
learning_rate = 0.001
num_epochs = 994
dataset = ChatDataset()
train_loader = DataLoader(dataset = dataset, batch_size=batch_size, shuffle = True, num_workers = 2)
device = 'cpu'
model = NeuralNet(input_size, hidden_size, output_size).to(device)
#loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate)
for epoch in range( num_epochs ):
for (words, labels) in train_loader:
words = words.to(device)
labels = labels.to(device)
#Forward
outputs = model(words)
loss = criterion(outputs, labels)
#backward and optimizer step
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch {epoch + 1}/ {num_epochs}, loss={loss.item(): .4f}')
print(f'final loss, loss={loss.item(): .4f}')
data = {
"model_state": model.state_dict(),
"input_size": input_size,
"output_size": output_size,
"hidden_size": hidden_size,
"all_words": all_words,
"tags": tags,
}
FILE = "data.pth"
torch.save(data, FILE)
with open('new.json', 'r') as f:
intents = json.load(f)
bot_name = "Sam"
while True:
sentence = input("You: ")
if sentence == 'quit':
break
sentence = tokenize(sentence)
X = bag_of_words(sentence, all_words)
X = X.reshape( 1, X.shape[0])
X = torch.from_numpy( X )
output = model( X )
_, predicted = torch.max(output, dim=1)
tag = tags[predicted.item()]
print(tag)
probs = torch.softmax(output, dim=1)
probs = probs[0][predicted.item()]
print( probs.item() )
if probs.item() > 0.75:
for intent in intents["intents"]:
for proficiency in intent["proficiency"]:
for questions in proficiency["questions"]:
if questions["tag"] == tag:
print(f'{bot_name}: {questions["question"]}')
else:
print(f'{bot_name}: Probability Too Low')
print(f'Training Complete. File saved to {FILE}')
My chatbot is working inversely... I am trying to map the answer to the right question.
Any help would be appreciated.
| According to your code labels contains the indices that should have the highest values in outputs in order for the samples to be counted as correct predictions.
So to calculate the validation accuracy:
correct = 0
total = 0
model.eval()
with torch.no_grad():
for (words, labels) in validation_loader:
words = words.to(device)
labels = labels.to(device)
total += labels.shape[0]
outputs = model(words)
correct += torch.sum(labels == outputs.argmax(dim=-1))
accuracy = correct / total
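To also print training accuracy every epoch, the same counting can be folded into your existing training loop; this sketch reuses the names already defined in your question:
for epoch in range(num_epochs):
    correct, total = 0, 0
    for words, labels in train_loader:
        words, labels = words.to(device), labels.to(device)
        outputs = model(words)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        correct += (outputs.argmax(dim=-1) == labels).sum().item()
        total += labels.shape[0]
    print(f'epoch {epoch + 1}/{num_epochs}, loss={loss.item():.4f}, acc={correct / total:.4f}')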
| https://stackoverflow.com/questions/62950311/ |
How to get count number of equal elements in two torch tensors that also equal a specific value | I'm working in pytorch and trying to count the number of equal elements in 2 torch tensors, that also equal a specific value.
That is, if tensor a=[0,1,2,0,1,2] and tensor b = [0,2,1,0,2,1]
I want it to return:
2 when I check how many elements are equal in a and b and also equal 0: sum(a == b and a == 0 and b == 0) = 2,
0 when I check how many elements are equal in a and b and also equal 1 or 2: sum(a == b and a == 1,2 and b == 1,2) = 0.
Thanks!
| A straight forward approach would be:
torch.sum((a==b) * (a==val))
if val is the value you look for.
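For your example this gives:
import torch

a = torch.tensor([0, 1, 2, 0, 1, 2])
b = torch.tensor([0, 2, 1, 0, 2, 1])
print(torch.sum((a == b) * (a == 0)))  # tensor(2)
print(torch.sum((a == b) * (a == 1)))  # tensor(0)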
| https://stackoverflow.com/questions/62952728/ |
fastapi could not find model definition when run with uvicorn | I want to host a pytorch model in a fastapi backend. When I run the code with python it is working fine: the unpickled model can use the defined class. When the same file is started with uvicorn it cannot find the class definition.
Sourcecode looks like this:
import uvicorn
import json
from typing import List
from fastapi import Body, FastAPI
from fastapi.encoders import jsonable_encoder
import requests
from pydantic import BaseModel
#from model_ii import Model_II_b
import dill as pickle
import torch as T
import sys
app = FastAPI()
current_model = 'model_v2b_c2_small_ep15.pkl'
verbose_model = False # for model v2
class Model_II_b(T.nn.Module):
[...]
@app.post('/function')
def API_call(req_json: dict = Body(...)):
try:
# load model...
model = pickle.load(open('models/' + current_model, 'rb'))
result = model.dosomething_with(req_json)
return result
except Exception as e:
raise e
return {"error" : str(e)}
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
When I run this with python main.py it is working fine and I am getting results. When I run it with uvicorn main:app and send a request I get the following error:
AttributeError: Can't get attribute 'Model_II_b' on <module '__mp_main__' from '/opt/webapp/env/bin/uvicorn'>
Both should be using the same Python env, as I use the uvicorn from within the env.
I hope someone has an idea what is wrong with my setup or code.
Update Stacktrace:
(model_2) root@machinelearning-01:/opt/apps# uvicorn main:app --env-file /opt/apps/env/pyvenv.cfg --reload
INFO: Loading environment from '/opt/apps/env/pyvenv.cfg'
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [164777] using statreload
INFO: Started server process [164779]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:33872 - "POST /ml/v2/predict HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/opt/apps/env/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/opt/apps/env/lib/python3.6/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/opt/apps/env/lib/python3.6/site-packages/fastapi/applications.py", line 183, in __call__
await super().__call__(scope, receive, send) # pragma: no cover
File "/opt/apps/env/lib/python3.6/site-packages/starlette/applications.py", line 102, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/opt/apps/env/lib/python3.6/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/opt/apps/env/lib/python3.6/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 550, in __call__
await route.handle(scope, receive, send)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/opt/apps/env/lib/python3.6/site-packages/fastapi/routing.py", line 197, in app
dependant=dependant, values=values, is_coroutine=is_coroutine
File "/opt/apps/env/lib/python3.6/site-packages/fastapi/routing.py", line 149, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/opt/apps/env/lib/python3.6/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
return await loop.run_in_executor(None, func, *args)
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "./main.py", line 155, in API_call
raise e
File "./main.py", line 129, in API_call
model = pickle.load(open('models/' + current_model, 'rb'))
File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 473, in load
obj = StockUnpickler.load(self)
File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 463, in find_class
return StockUnpickler.find_class(self, module, name)
AttributeError: Can't get attribute 'Model_II_b' on <module '__mp_main__' from '/opt/apps/env/bin/uvicorn'>
enter code here
| With the help from @lsabi I found the solution here https://stackoverflow.com/a/51397373/13947506
With the custom unpickler my problem was solved:
class CustomUnpickler(pickle.Unpickler):
def find_class(self, module, name):
if name == 'Model_II_b':
from model_ii_b import Model_II_b
return Model_II_b
return super().find_class(module, name)
current_model = 'model_v2b_c2_small_ep24.pkl'
model = CustomUnpickler(open('models/' + current_model, 'rb')).load()
| https://stackoverflow.com/questions/62953477/ |
Broadcasting element wise multiplication in pytorch | I have a tensor in pytorch with size torch.Size([1443747, 128]). Let's name it tensor A. In this tensor, 128 represents a batch size. I have another 1D tensor with size torch.Size([1443747]). Let's call it B. I want to do element wise multiplication of B with A, such that B is multiplied with all 128 columns of tensor A (obviously in an element wise manner). In other words, I want to broadcast the element wise multiplication along dimension=1.
How can I achieve this in pytorch?
If I didn't have a batch size involved in the tensor A (batch size = 1), then the normal * operator would do the multiplication easily. A*B would then generate a resultant tensor of size torch.Size([1443747]). However, I don't understand why pytorch is not broadcasting the tensor multiplication along dimension 1. Is there any way to do this?
What I want is: B should be multiplied with all 128 columns of A in an element wise manner. So, the resultant tensor's size would be torch.Size([1443747, 128]).
| The dimensions should match; it should work if you transpose A or unsqueeze B:
C = A.transpose(1,0) * B # shape: [128, 1443747]
or
C = A * B.unsqueeze(dim=1) # shape: [1443747, 128]
Note that the shapes of the two solutions are different.
| https://stackoverflow.com/questions/62955389/ |
calculate accuracy for each class using CNN and pytorch | I can calculate accuracy after each epoch using this code. But I want to calculate the accuracy for each class at the end. How can I do that?
I have two folders, train and val. Each folder has 7 subfolders for 7 different classes. The train folder is used for training, while the val folder is used for testing.
def train_model(model, criterion, optimizer, lr_scheduler, num_epochs=25):
since = time.time()
best_model = model
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
mode='train'
optimizer = lr_scheduler(optimizer, epoch)
model.train() # Set model to training mode
else:
model.eval()
mode='val'
running_loss = 0.0
running_corrects = 0
counter=0
# Iterate over data.
for data in dset_loaders[phase]:
inputs, labels = data
print(inputs.size())
# wrap them in Variable
if use_gpu:
try:
inputs, labels = Variable(inputs.float().cuda()), Variable(labels.long().cuda())
except:
print(inputs,labels)
else:
inputs, labels = Variable(inputs), Variable(labels)
# Set gradient to zero to delete history of computations in previous epoch. Track operations so that differentiation can be done automatically.
optimizer.zero_grad()
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
# print('loss done')
# Just so that you can keep track that something's happening and don't feel like the program isn't running.
# if counter%10==0:
# print("Reached iteration ",counter)
counter+=1
# backward + optimize only if in training phase
if phase == 'train':
# print('loss backward')
loss.backward()
# print('done loss backward')
optimizer.step()
# print('done optim')
# print evaluation statistics
try:
# running_loss += loss.data[0]
running_loss += loss.item()
# print(labels.data)
# print(preds)
running_corrects += torch.sum(preds == labels.data)
# print('running correct =',running_corrects)
except:
print('unexpected error, could not calculate loss or do a sum.')
print('trying epoch loss')
epoch_loss = running_loss / dset_sizes[phase]
epoch_acc = running_corrects.item() / float(dset_sizes[phase])
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val':
if USE_TENSORBOARD:
foo.add_scalar_value('epoch_loss',epoch_loss,step=epoch)
foo.add_scalar_value('epoch_acc',epoch_acc,step=epoch)
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
print('new best accuracy = ',best_acc)
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
print('returning and looping back')
return best_model
def exp_lr_scheduler(optimizer, epoch, init_lr=BASE_LR, lr_decay_epoch=EPOCH_DECAY):
"""Decay learning rate by a factor of DECAY_WEIGHT every lr_decay_epoch epochs."""
lr = init_lr * (DECAY_WEIGHT**(epoch // lr_decay_epoch))
if epoch % lr_decay_epoch == 0:
print('LR is set to {}'.format(lr))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
| Calculating overall accuracy is rather straightforward:
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
acc_all = (preds == labels).float().mean()
To calculate it per class requires a few more lines of code:
acc = [0 for c in list_of_classes]
for c in list_of_classes:
    acc[c] = ((preds == labels) * (labels == c)).float().sum() / max((labels == c).sum().item(), 1)
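If you want the per-class accuracy over the whole validation set rather than a single batch, one approach is to keep running counts. A sketch, assuming num_classes = 7 and the loader name from your code above:
num_classes = 7
correct_per_class = [0] * num_classes
total_per_class = [0] * num_classes

model.eval()
with torch.no_grad():
    for inputs, labels in dset_loaders['val']:
        outputs = model(inputs)
        _, preds = torch.max(outputs.data, 1)
        for c in range(num_classes):
            mask = (labels == c)
            correct_per_class[c] += (preds[mask] == c).sum().item()
            total_per_class[c] += mask.sum().item()

for c in range(num_classes):
    print('class {}: {:.4f}'.format(c, correct_per_class[c] / max(total_per_class[c], 1)))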
| https://stackoverflow.com/questions/62958248/ |
Why do I get the error 'The ordinal 242 could not be located in the dynamic link library' when trying to import torch? | I am currently encountering some issues when trying to import PyTorch on my computer. I am working from my own local Windows laptop (which doesn't have any GPU) and installed Python 3.6 from python.org. I don't have the Anaconda distribution and usually install any new package by opening a Windows Command Prompt and using this command: pip install package.
Usually that's enough for me to be able to use the package right away (either via a Jupyter Notebook or by writing and running a .py script in Sublime Text). But PyTorch seems to be a little less straightforward.
I followed the installation instruction copied below (from the PyTorch website):
pip
No CUDA
To install PyTorch via pip, and do not have a CUDA-capable system or do not require CUDA, in the above selector, choose OS: Windows,
Package: Pip and CUDA: None. Then, run the command that is presented
to you.
I opened my Windows Command Prompt and simply ran this command:
pip install torch==1.5.1+cpu torchvision==0.6.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
The installation was successful and no error showed up during the process.
Now, when I open a Jupyter Notebook and run the command: import torch
I get the following error:
python.exe - Ordinal Not Found
The ordinal 242 could not be located in the dynamic link library
c:\users\bdour\appdata\local\programs\python\python36\lib\site-packages\torch\lib\torch_cpu.dll
I checked and the torch_cpu.dll file DOES exist at the path mentioned in the error.
I tried to understand what that error meant but could not find much help. It seems like many people have issues with PyTorch, often due to some issues with their conda environment, but I am not using any environment. I am just trying to import and run the package locally.
And like I mentioned above, I usually encounter no problem with any other library when simply using a pip install command.
Does anyone know where that error is coming from and how to fix it?
Thank you in advance for your time and help.
| I was actually able to find a solution on my own and thought I would post it here in case someone else struggled with the same error.
I found this helpful link and the suggested solution worked for me: https://kittaiwong.wordpress.com/2019/11/04/how-to-fix-the-ordinal-242-could-not-be-located-in-the-dynamic-link-library-mkl_intel_thread-dll/
In short, the problem seems to originate from a file named libiomp5md.dll present in the C:\Windows\System32 folder that is simply incompatible with numpy.
To fix it, I just searched for the file by pasting its name in Windows search, opened the file location (which should be C:\Windows\System32) and then renamed the file to: libiomp5md.dll.bak
Now I can import torch without getting any error.
Hope that will help others who struggled with getting torch to run!
| https://stackoverflow.com/questions/62961170/ |
OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] | When I load the BERT pretrained model online I get this error OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory uncased_L-12_H-768_A-12 or 'from_tf' set to False what should I do?
| Here is what I found. Go to the following link, download the pytorch_model.bin file (via the download icon next to it), rename it to pytorch_model.bin if needed, and drop it into the biobert-nli directory; the issue is then resolved. I didn't figure out how to clone it from the link.
https://huggingface.co/gsarti/biobert-nli/tree/main
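Once pytorch_model.bin is in place, loading should work along these lines (a sketch, assuming the transformers library and a local directory named biobert-nli):
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./biobert-nli")
model = AutoModel.from_pretrained("./biobert-nli")  # finds pytorch_model.bin in this directory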
| https://stackoverflow.com/questions/62961627/ |
Best way to convert a tensor from a condensed representation | I have a Tensor that is in a condensed format representing a sparse 3-D matrix. I need to convert it to a normal matrix (the one that it is actually representing).
So, in my case, each row of any 2-D slice of my matrix can only contain one non-zero element. As data, then, I have for each of these rows, the value, and the index where it appears. For example, the tensor
inp = torch.tensor([[ 1, 2],
[ 3, 4],
[-1, 0],
[45, 1]])
represents a 4x5 matrix (first dimension comes from the first dimension of the tensor, second comes from the metadata) A, where A[0][2] = 1, A[1][4] = 3, A[2][0] = -1, A[3][1] = 45.
This is just one 2-D slice of my Matrix, and I have a variable number of these.
I was able to do this for a 2-D slice as described above in the following way using sparse_coo_tensor:
>>> torch.sparse_coo_tensor(torch.stack([torch.arange(0, 4), inp.t()[1]]), inp.t()[0], [4,5]).to_dense()
tensor([[ 0, 0, 1, 0, 0],
[ 0, 0, 0, 0, 3],
[-1, 0, 0, 0, 0],
[ 0, 45, 0, 0, 0]])
Is this the best way to accomplish this? Is there a simpler, more readable alternative?
How do I extend this to a 3-D matrix without looping?
For a 3-D matrix, you can imagine the input to be something like
inp_list = torch.stack([inp, inp, inp, inp])
and the desired output would be the above output stacked 4 times.
I feel like I should be able to do something if I create an index array correctly, but I cannot think of a way to do this without using some kind of looping.
| OK, after a lot of experiments with different types of indexing, I got this to work. Turns out, the answer was in Advanced Indexing. Unfortunately, the PyTorch documentation doesn't go into the details of Advanced Indexing. Here is a link to it in the NumPy documentation.
For the problem described above, this command did the trick:
>>> k_lst = torch.zeros([4,4,5])
>>> k_lst[torch.arange(4).unsqueeze(1), torch.arange(4), inp_list[:,:,1]] = inp_list[:,:,0].float()
>>> k_lst
tensor([[[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 3.],
[-1., 0., 0., 0., 0.],
[ 0., 45., 0., 0., 0.]],
[[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 3.],
[-1., 0., 0., 0., 0.],
[ 0., 45., 0., 0., 0.]],
[[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 3.],
[-1., 0., 0., 0., 0.],
[ 0., 45., 0., 0., 0.]],
[[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 3.],
[-1., 0., 0., 0., 0.],
[ 0., 45., 0., 0., 0.]]])
Which is exactly what I wanted.
I learned quite a few things searching for this, and I want to share this for anyone who stumbles on this question. So, why does this work? The answer lies in the way Broadcasting works. If you look at the shapes of the different index tensors involved, you'd see that they are (of necessity) broadcastable.
>>> torch.arange(4).unsqueeze(1).shape, torch.arange(4).shape, inp_list[:,:,1].shape
(torch.Size([4, 1]), torch.Size([4]), torch.Size([4, 4]))
Clearly, to access an element of a 3-D tensor such as k_lst here, we need 3 indexes - one for each dimension. If you give 3 tensors of same shapes to the [] operator, it can get a bunch of legal indexes by matching corresponding elements from the 3 tensors.
If the 3 tensors are of different shapes, but broadcastable (as is the case here), it copies the relevant rows/columns of the lacking tensors the requisite number of times to get tensors with the same shapes.
Ultimately, in my case, if we go into how the different values got assigned, this would be equivalent to doing
k_lst[0,0,inp_list[0,0,1]] = inp_list[0,0,0].float()
k_lst[0,1,inp_list[0,1,1]] = inp_list[0,1,0].float()
k_lst[0,2,inp_list[0,2,1]] = inp_list[0,2,0].float()
k_lst[0,3,inp_list[0,3,1]] = inp_list[0,3,0].float()
k_lst[1,0,inp_list[1,0,1]] = inp_list[1,0,0].float()
k_lst[1,1,inp_list[1,1,1]] = inp_list[1,1,0].float()
.
.
.
k_lst[3,3,inp_list[3,3,1]] = inp_list[3,3,0].float()
This format reminds me of torch.Tensor.scatter(), but if it can be used to solve this problem, I haven't figured out how yet.
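For what it's worth, scatter_ can in fact do this in one call: dim=2 is the scattered dimension, and the index and src tensors must share a shape. This is my own sketch, separate from the indexing solution above:
vals = inp_list[:, :, 0].unsqueeze(-1).float()  # shape [4, 4, 1], values to place
idx = inp_list[:, :, 1].unsqueeze(-1)           # shape [4, 4, 1], column indices (long)
k_lst = torch.zeros(4, 4, 5).scatter_(2, idx, vals)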
| https://stackoverflow.com/questions/62961992/ |
pytorch "trying to backward through the graph a second time" error with chracter level RNN | I am training a character level GRU with pytorch, while dividing the text into batches of a certain chunk length.
This is the training loop:
for e in range(self.epochs):
self.model.train()
h = self.get_init_state(self.batch_size)
for batch_num in range(self.num_batch_runs):
batch = self.generate_batch(batch_num).to(device)
inp_batch = batch[:-1,:]
tar_batch = batch[1:,:]
self.model.zero_grad()
loss = 0
for i in range(inp_batch.shape[0]):
out, h = self.model(inp_batch[i:i+1,:],h)
loss += loss_fn(out[0],tar_batch[i].view(-1))
loss.backward()
nn.utils.clip_grad_norm_(self.model.parameters(), 5.0)
optimizer.step()
if not (batch_num % 5):
print("epoch: {}, loss: {}".format(e,loss.data.item()/inp_batch.shape[0]))
Still, I am getting this error after the first batch:
Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Thanks in advance..
| I found the answer myself: the hidden state of the GRU was still attached to the graph of the last batch run, so it had to be detached using
h.detach_()
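In the loop above, that means detaching the hidden state once per batch, before the inner time-step loop, so the graph from the previous batch can be freed. A sketch of the relevant part:
for batch_num in range(self.num_batch_runs):
    batch = self.generate_batch(batch_num).to(device)
    h.detach_()  # keep the values, drop the graph from the previous batch

    inp_batch = batch[:-1, :]
    tar_batch = batch[1:, :]
    # ... forward passes, loss, loss.backward(), optimizer.step() as before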
| https://stackoverflow.com/questions/62962149/ |
How to slice 3D torch tensor into 2D slices | I'm working with 3D CT medical data, and I'm trying to slice it into 2D slices that I can input into a UNet model.
I've loaded the data into a torch dataloader, and each iteration currently produces a 4D tensor:
for batch_index, batch_samples in enumerate(train_loader):
data, target = batch_samples['image'].float().cuda(), batch_samples['label'].float().cuda()
print(data.size())
torch.Size([1, 333, 512, 512])
torch.Size([1, 356, 512, 512])
such as this one. I want to iterate over the 333 slices, and then the 356 slices, such that the model receives torch sizes [1, 1, 512, 512] each time.
I was hoping something like :
for x in (data[:,x,:,:]):
would work but it says I need to define x first. How can I iterate over a specific dimension in a torch tensor?
| Simply specify the dimension:
for i in range(data.shape[1]): # dim=1
x = data[:, i, :, :]
# [...]
If you really need that extra dimension, simply add .unsqueeze():
d = 1
for i in range(data.shape[d]): # dim=1
x = data[:, i, :, :].unsqueeze(d) # same dim=1
# [...]
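Another option (my addition, not part of the original answer): torch.unbind returns all slices along a dimension as a tuple, so you can iterate without indexing by hand:
for x in torch.unbind(data, dim=1):  # each x has shape [1, 512, 512]
    x = x.unsqueeze(1)               # -> [1, 1, 512, 512]
    # [...]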
| https://stackoverflow.com/questions/62968562/ |
How to create a custom data loader in Pytorch? | I have a file containing paths to images I would like to load into Pytorch, while utilizing the built-in dataloader features (multiprocess loading pipeline, data augmentations, and so on).
def create_links():
data_dir = "/myfolder"
full_path_list = []
assert os.path.isdir(data_dir)
for _, _, filenames in os.walk(data_dir):
for filename in filenames:
full_path_list.append(os.path.join(data_dir, filename))
with open(config.data.links_file, 'w+') as links_file:
for full_path in full_path_list:
links_file.write(f"{full_path}\n")
def read_links_file_to_list():
config = ConfigProvider.config()
links_file_path = config.data.links_file
if not os.path.isfile(links_file_path):
raise RuntimeError("did you forget to create a file with links to images? Try using 'create_links()'")
with open(links_file_path, 'r') as links_file:
return links_file.readlines()
So I have a list of files (or a generator, or whatever works), file_list = read_links_file_to_list().
How can I build a Pytorch dataloader around it, and how would I use it?
| What you want is a Custom Dataset. The __getitem__ method is where you would apply transforms such as data-augmentation etc. To give you an idea of what it looks like in practice you can take a look at this Custom Dataset I wrote the other day:
import os
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

class GTSR43Dataset(Dataset):
"""German Traffic Sign Recognition dataset."""
def __init__(self, root_dir, train_file, transform=None):
self.root_dir = root_dir
self.train_file_path = train_file
self.label_df = pd.read_csv(os.path.join(self.root_dir, self.train_file_path))
self.transform = transform
self.classes = list(self.label_df['ClassId'].unique())
def __getitem__(self, idx):
"""Return (image, target) after resize and preprocessing."""
img = os.path.join(self.root_dir, self.label_df.iloc[idx, 7])
X = Image.open(img)
y = self.class_to_index(self.label_df.iloc[idx, 6])
if self.transform:
X = self.transform(X)
return X, y
def class_to_index(self, class_name):
"""Returns the index of a given class."""
return self.classes.index(class_name)
def index_to_class(self, class_index):
"""Returns the class of a given index."""
return self.classes[class_index]
def get_class_count(self):
"""Return a list of label occurences"""
cls_count = dict(self.label_df.ClassId.value_counts())
# cls_percent = list(map(lambda x: (1 - x / sum(cls_count)), cls_count))
return cls_count
def __len__(self):
"""Returns the length of the dataset."""
return len(self.label_df)
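Applied to your case, where the file stores only image paths, a minimal dataset could look like this (a sketch; the transform and the absence of labels are assumptions, since the links file contains paths only):
from PIL import Image
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as T

class LinksFileDataset(Dataset):
    def __init__(self, file_list, transform=None):
        self.paths = [line.strip() for line in file_list]
        self.transform = transform

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        if self.transform:
            img = self.transform(img)
        return img

    def __len__(self):
        return len(self.paths)

dataset = LinksFileDataset(read_links_file_to_list(), transform=T.ToTensor())
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)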
| https://stackoverflow.com/questions/62971444/ |
Pytorch: explain torch.argmax | Hello I have the following code:
import torch
x = torch.zeros(1,8,4,576) # create a 4 dimensional tensor
x[0,4,2,333] = 1.0 # put on 1 on a random spot
# I want to find the index of the highest value (0,4,2,333)
print(x.argmax()) # this should return the index
This returns
tensor(10701)
How does this 10701 make sense?
How do I get the actual indices 0,4,2,333 ?
| The data in the 4-dimensional array is stored linearly in memory, and argmax() returns the corresponding index of this flat representation.
Numpy has a function for unraveling the index (converting from the flat array index to the corresponding multi-dimensional indices).
import numpy as np
np.unravel_index(10701, (1,8,4,576))
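If you'd rather stay in torch, the unraveling is just repeated modulo and floor division (a sketch; recent PyTorch versions also ship torch.unravel_index):
idx = x.argmax()
coords = []
for dim in reversed(x.shape):
    coords.append((idx % dim).item())
    idx = idx // dim
print(tuple(reversed(coords)))  # (0, 4, 2, 333)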
| https://stackoverflow.com/questions/62974904/ |
ValueError: only one element tensors can be converted to Python scalars | I'm following this tutorial.
I'm at the last part where we combine the models in a regression.
I'm coding this in jupyter as follows:
import shutil
import os
import time
from datetime import datetime
import argparse
import pandas
import numpy as np
from tqdm import tqdm
from tqdm import tqdm_notebook
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torchsample.transforms import RandomRotate, RandomTranslate, RandomFlip, ToTensor, Compose, RandomAffine
from torchvision import transforms
import torch.nn.functional as F
from tensorboardX import SummaryWriter
import dataloader
from dataloader import MRDataset
import model
from sklearn import metrics
def extract_predictions(task, plane, train=True):
assert task in ['acl', 'meniscus', 'abnormal']
assert plane in ['axial', 'coronal', 'sagittal']
models = os.listdir('models/')
model_name = list(filter(lambda name: task in name and plane in name, models))[0]
model_path = f'models/{model_name}'
mrnet = torch.load(model_path)
_ = mrnet.eval()
train_dataset = MRDataset('data/',
task,
plane,
transform=None,
train=train,
)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=1,
shuffle=False,
num_workers=10,
drop_last=False)
predictions = []
labels = []
with torch.no_grad():
for image, label, _ in tqdm_notebook(train_loader):
logit = mrnet(image.cuda())
prediction = torch.sigmoid(logit)
predictions.append(prediction.item())
labels.append(label.item())
return predictions, labels
task = 'acl'
results = {}
for plane in ['axial', 'coronal', 'sagittal']:
predictions, labels = extract_predictions(task, plane)
results['labels'] = labels
results[plane] = predictions
X = np.zeros((len(predictions), 3))
X[:, 0] = results['axial']
X[:, 1] = results['coronal']
X[:, 2] = results['sagittal']
y = np.array(labels)
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(X, y)
task = 'acl'
results_val = {}
for plane in ['axial', 'coronal', 'sagittal']:
predictions, labels = extract_predictions(task, plane, train=False)
results_val['labels'] = labels
results_val[plane] = predictions
y_pred = logreg.predict_proba(X_val)[:, 1]
metrics.roc_auc_score(y_val, y_pred)
However I get this error:
ValueError Traceback (most recent call last)
<ipython-input-2-979acb314bc5> in <module>
3
4 for plane in ['axial', 'coronal', 'sagittal']:
----> 5 predictions, labels = extract_predictions(task, plane)
6 results['labels'] = labels
7 results[plane] = predictions
<ipython-input-1-647731b6b5c8> in extract_predictions(task, plane, train)
54 logit = mrnet(image.cuda())
55 prediction = torch.sigmoid(logit)
---> 56 predictions.append(prediction.item())
57 labels.append(label.item())
58
ValueError: only one element tensors can be converted to Python scalars
Here's the MRDataset code in case:
class MRDataset(data.Dataset):
def __init__(self, root_dir, task, plane, train=True, transform=None, weights=None):
super().__init__()
self.task = task
self.plane = plane
self.root_dir = root_dir
self.train = train
if self.train:
self.folder_path = self.root_dir + 'train/{0}/'.format(plane)
self.records = pd.read_csv(
self.root_dir + 'train-{0}.csv'.format(task), header=None, names=['id', 'label'])
else:
transform = None
self.folder_path = self.root_dir + 'valid/{0}/'.format(plane)
self.records = pd.read_csv(
self.root_dir + 'valid-{0}.csv'.format(task), header=None, names=['id', 'label'])
self.records['id'] = self.records['id'].map(
lambda i: '0' * (4 - len(str(i))) + str(i))
self.paths = [self.folder_path + filename +
'.npy' for filename in self.records['id'].tolist()]
self.labels = self.records['label'].tolist()
self.transform = transform
if weights is None:
pos = np.sum(self.labels)
neg = len(self.labels) - pos
self.weights = torch.FloatTensor([1, neg / pos])
else:
self.weights = torch.FloatTensor(weights)
def __len__(self):
return len(self.paths)
def __getitem__(self, index):
array = np.load(self.paths[index])
label = self.labels[index]
if label == 1:
label = torch.FloatTensor([[0, 1]])
elif label == 0:
label = torch.FloatTensor([[1, 0]])
if self.transform:
array = self.transform(array)
else:
array = np.stack((array,)*3, axis=1)
array = torch.FloatTensor(array)
# if label.item() == 1:
# weight = np.array([self.weights[1]])
# weight = torch.FloatTensor(weight)
# else:
# weight = np.array([self.weights[0]])
# weight = torch.FloatTensor(weight)
return array, label, self.weights
I've only trained my models using 1 and 2 epochs for each plane of the MRI instead of 35 as in the tutorial, not sure if that has anything to do with it. Other than that I'm stranded as to what this could be? I also removed normalize=False in the options for train_dataset as it kept giving me an error and I read that it could be removed, but I'm not so sure?
| Only a tensor that contains a single value can be converted to a scalar with item(). Try printing the contents of prediction; I imagine this is a vector of probabilities indicating which label is most likely. Using argmax on prediction will give you your actual predicted label (assuming your labels are 0-n).
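Concretely, inside extract_predictions the fix could look like this (a sketch; it assumes the model emits one logit per class, matching the two-element labels built in MRDataset):
with torch.no_grad():
    for image, label, _ in tqdm_notebook(train_loader):
        logit = mrnet(image.cuda())
        prediction = torch.sigmoid(logit)
        # both tensors carry two entries (one per class), so take the
        # positive-class entry instead of calling .item() on the whole tensor
        predictions.append(prediction[0][1].item())
        labels.append(label[0][0][1].item())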
| https://stackoverflow.com/questions/62978582/ |
PyTorch AttributeError: 'UNet3D' object has no attribute 'size' | I am making an image segmentation transfer learning project using Pytorch. I am using the weights of this pre-trained model and class UNet3D.
https://github.com/MrGiovanni/ModelsGenesis
When I run the following codes I get this error at the line which MSELoss is called: "AttributeError: 'DataParallel' object has no attribute 'size' ".
When I delete the first line I get a similar error: "AttributeError: 'UNet3D' object has no attribute 'size'"
How can I convert DataParallel or UNet3D class to an object which MSELoss can use? I do not need DataParallel for now. I need to run the UNet3D() class for transfer learning.
model = nn.DataParallel(model, device_ids = [i for i in range(torch.cuda.device_count())])
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), conf.lr, momentum=0.9, weight_decay=0.0, nesterov=False)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
initial_epoch=10
for epoch in range(initial_epoch, conf.nb_epoch):
scheduler.step(epoch)
model.train()
for batch_ndx, (x,y) in enumerate(train_loader):
x, y = x.float().to(device), y.float().to(device)
pred = model
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-46-20d1943b3498> in <module>
25 x, y = x.float().to(device), y.float().to(device)
26 pred = model
---> 27 loss = criterion(pred, y)
28 optimizer.zero_grad()
29 loss.backward()
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
430
431 def forward(self, input, target):
--> 432 return F.mse_loss(input, target, reduction=self.reduction)
433
434
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction)
2528 mse_loss, tens_ops, input, target, size_average=size_average, reduce=reduce,
2529 reduction=reduction)
-> 2530 if not (target.size() == input.size()):
2531 warnings.warn("Using a target size ({}) that is different to the input size ({}). "
2532 "This will likely lead to incorrect results due to broadcasting. "
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'UNet3D' object has no attribute 'size'
| You have a typo on this line:
pred = model
should be
pred = model(x)
model is nn.Module object which describes the network. x, y, pred are (supposed to be) torch tensors.
Aside from this particular case, I think it would be good to think about how to solve this type of problems in general.
You saw an error (exception) on a certain line. Is the problem there, or earlier? Can you isolate the problem?
For example, if you print out the arguments you're passing to criterion(pred, y) just before the call, do they look right? (they don't)
What happens if you create a couple of tensors of the right shape just before the call and pass them instead? (works fine)
What is the error really saying? "AttributeError: 'UNet3D' object has no attribute 'size'" - well, of course it's not supposed to have a size, but why is the code trying to access its size? Actually, why is the code even able to access that object on that line? (since the model is not supposed to be passed to the criterion function - right?)
Maybe useful further reading: https://ericlippert.com/2014/03/05/how-to-debug-small-programs/
| https://stackoverflow.com/questions/62985943/ |
Training works but prediction produces constant values (cnn with pytorch) | I have a model trying to predict the class of an image: cat or dog. I reach 95% accuracy in training. However, when I try to predict a single image, I am stuck with an almost constant output every time I run the model. There are some non-constant values, but they mostly look like catastrophic failure.
I read similar topics from forums but that hasn't helped, as it appears there is no particular solution for this problem...
I have tried the following:
Changing epochs 5 to 15,20,30...
Changing lr = 0.001 to 0.01, 0.0001...
I implemented with both dropout regularization model and batch
normalization model...
I changed testing pictures...
Changing last activation layer from softmax to torch.sigmoid...
Reducing batch size from 100 to 30, 75...
Trying with a batch, which results with normal acc, loss and
predictions.
My dataset is scaled which is mentioned in forums as solution.
My optim is Adam which is mentioned in forums as solution.
Loading dataset with torch.data.DataLoader...
Sampling randomly...
I saved and load the model, in case there are problems with that.
However, I already checked that state_dict's are different...
I re-prepared data which resulted the constant value to change
otherwise (dog to cat), somehow? Idk if that's a coincidence though.
Infos:
Dataset :
https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
Here is all my code with predictions in Jupyter Notebook, feel free to investigate. I am really tired of this solution. Any help is highly appreciated!
https://github.com/yusuftengriverdi/neural_networks/blob/master/CNN_Last.ipynb
Similar topics around the web:
https://discuss.pytorch.org/t/rnn-predicting-a-constant-output/40397/5
https://discuss.pytorch.org/t/cnn-does-not-predict-properly-does-not-converge-as-expected/43567
https://discuss.pytorch.org/t/making-a-prediction-with-a-trained-model/2193
https://datascience.stackexchange.com/questions/46779/predict-gives-the-same-output-value-for-every-image-keras
https://github.com/keras-team/keras/issues/6447
PyTorch model prediction fail for single item
Having trouble with CNN prediction
| If something works in training but fails during prediction, the most likely cause is you're not preprocessing the data the same way.
I had a look at the notebook (huge amount of code, in future please condense this to just the relevant parts here). At a glance - this is your prediction code which doesn't work as expected:
img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
plt.imshow(img, cmap='gray')
x = torch.Tensor([i for i in img]).view(-1, 50, 50)
y= torch.Tensor([0,1]).to(device)
test_x = x.view(-1, 1, 50, 50)
test_x = test_x.to(device)
net.eval()
#with torch.no_grad():
yhat.append(net(test_x))
But during training you're using a dataloader
testloader = DataLoader(v_dataset, batch_size = BATCH_SIZE, sampler= test_sampler)
...
test_dt = next(iter(testloader))
X, y = test_dt[0].view(-1, 1, 50, 50), test_dt[1]
val_acc, val_loss = fwd_pass(X.view(-1, 1, 50, 50).to(device), y.to(device))
which works (since your test/validation accuracy goes up to a good level).
Figure out what the dataloader code path does which the other code path doesn't do, and you'll have the solution. Eg, load the same image in both ways and compare - same dimensions? data average / standard deviation the same? etc
For a shortcut - just use a dataloader to make predictions as well. P.S. Yes, it is okay to create a dataloader for just one image.
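For example, a one-image loader can be built by wrapping the existing dataset (a sketch; torch.utils.data.Subset picks out the samples at the given indices):
from torch.utils.data import DataLoader, Subset

single_loader = DataLoader(Subset(v_dataset, [0]), batch_size=1)

net.eval()
with torch.no_grad():
    for X, y in single_loader:
        yhat = net(X.view(-1, 1, 50, 50).to(device))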
| https://stackoverflow.com/questions/62986273/ |
error in BatchNorm2d in pytorch CNN model | My dataset has grayscale images of size 128 * 128 * 1 each, with batch size = 10.
I am using a CNN model, but I get this error in BatchNorm2d:
expected 4D input (got 2D input)
Below is how I transform my images (grayscale - tensor - normalize) and divide them into batches
'train': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize(128),
transforms.CenterCrop(128),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])
]),
'val': transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize(128),
transforms.CenterCrop(128),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])
]),
}
data_dir = '/content/drive/My Drive/Colab Notebooks/pytorch'
dsets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
for x in ['train', 'val']}
dset_loaders = {x: torch.utils.data.DataLoader(dsets[x], batch_size=10,
shuffle=True, num_workers=25)
for x in ['train', 'val']}
dset_sizes = {x: len(dsets[x]) for x in ['train', 'val']}
dset_classes = dsets['train'].classes
I used this model
class HeartNet(nn.Module):
def __init__(self, num_classes=7):
super(HeartNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(64),
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(64),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(128),
nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(128),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
nn.ELU(inplace=True),
nn.BatchNorm2d(256),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.classifier = nn.Sequential(
nn.Dropout(0.5),
nn.Linear(16*16*256, 2048),
nn.ELU(inplace=True),
nn.BatchNorm2d(2048),
nn.Linear(2048, num_classes)
)
nn.init.xavier_uniform_(self.classifier[1].weight)
nn.init.xavier_uniform_(self.classifier[4].weight)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), 16 * 16 * 256)
x = self.classifier(x)
return x
How can I solve this problem ?
| You have a problem with the batch norm layer inside your self.classifier sub-network: while your self.features sub-network is fully convolutional and requires BatchNorm2d, the self.classifier sub-network is a fully-connected multi-layer perceptron (MLP) and is 1D in nature. Note how the forward function removes the spatial dimensions from the feature map x before feeding it to the classifier.
Try replacing the BatchNorm2d in self.classifier with BatchNorm1d.
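That is, keeping everything else in the model the same:
self.classifier = nn.Sequential(
    nn.Dropout(0.5),
    nn.Linear(16 * 16 * 256, 2048),
    nn.ELU(inplace=True),
    nn.BatchNorm1d(2048),  # 1d: the classifier sees [batch, features] inputs
    nn.Linear(2048, num_classes)
)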
| https://stackoverflow.com/questions/62991076/ |
How to specify the input dimension of pytorch nn.Linear? | For example, I have defined a model shown below:
class Net(nn.module):
def __init__():
self.conv11 = nn.Conv2d(in_channel,out1_channel,3)
self.conv12 = nn.Conv2d(...)
self.conv13 = nn.Conv2d(...)
self.conv14 = nn.Conv2d(...)
...
#Here is the point
flat = nn.Flatten()
#I don't want to compute the size of data after flatten, but I need a linear layer.
fc_out = nn.Linear(???,out_dim)
The problem is the Linear layer, I don't want to compute the size of the input to the Linear layer, but the defining model needs to specify it. How can I solve this problem?
| The way I do it is to put in some arbitrary value and let the model throw an error. You will be able to see the expected number of input features in the error description.
There are other ways to do it as well. You can compute the size by hand and write a comment next to each nn.Conv2d layer noting that layer's output shape. Just before the nn.Flatten(), take that output shape and multiply all the dimensions except the batch size; the resulting value is the number of input features for the nn.Linear() layer.
If you don't want to do any of this, you can try torchlayers. A handy package that lets you define pytorch models like Keras.
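One more option (my addition, assuming PyTorch 1.8 or newer): nn.LazyLinear infers in_features from the first batch it sees, so you never have to compute it:
import torch
import torch.nn as nn

fc_out = nn.LazyLinear(out_features=10)  # in_features is inferred on the first forward
x = torch.randn(4, 123)                  # any flattened feature size works
print(fc_out(x).shape)                   # torch.Size([4, 10])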
| https://stackoverflow.com/questions/62993470/ |
PyTorch gradients have different shape for CUDA and CPU | I’m dealing with a strange issue where the gradients after backward pass have different shapes depending on whether CUDA or CPU is used. The model used is relatively simple:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool1 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.pool2 = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.relu1 = nn.ReLU()
self.relu2 = nn.ReLU()
self.relu3 = nn.ReLU()
self.relu4 = nn.ReLU()
def forward(self, x):
x = self.pool1(self.relu1(self.conv1(x)))
x = self.pool2(self.relu2(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = self.relu3(self.fc1(x))
x = self.relu4(self.fc2(x))
x = self.fc3(x)
return x
The input tensor has shape (1, 3, 32, 32), and the relevant section of code is as follows, with the method generate_gradients being of particular importance:
class VanillaBackprop():
"""
Produces gradients generated with vanilla back propagation from the image
"""
def __init__(self, model):
self.model = model
self.gradients = None
# Put model in evaluation mode
self.model.eval()
# Hook the first layer to get the gradient
self.hook_layers()
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.model.to(self.device)
def hook_layers(self):
def hook_function(module, grad_in, grad_out):
self.gradients = grad_in[0]
# Register hook to the first layer
try:
first_layer = list(self.model.features._modules.items())[0][1]
except:
first_layer = list(self.model._modules.items())[0][1]
first_layer.register_backward_hook(hook_function)
def generate_gradients(self, input_image, target_class):
# Forward
model_output = self.model(input_image.to(self.device))
# Zero grads
self.model.zero_grad()
# Target for backprop
one_hot_output = torch.FloatTensor(1, model_output.size()[-1]).zero_()
one_hot_output[0][target_class] = 1
# Backward pass
model_output.backward(gradient=one_hot_output.to(self.device))
# Convert Pytorch variable to numpy array
gradients_as_arr = self.gradients.data.cpu().numpy()[0]
return gradients_as_arr
When on CPU, self.gradients has shape (1, 3, 32, 32), while on CUDA it has shape (1, 6, 28, 28). How is that possible, and how do I fix this? Any help is much appreciated.
| It looks like the issue stems from the register_backward_hook() function. As pointed out in the PyTorch forums:
You might want to double check the register_backward_hook() doc. But
it is known to be kind of broken at the moment and can have this
behavior.
I would recommend you use autograd.grad() for this though. That will
make it simpler than backward+access to the .grad field.
I, however, opted to use register_hook() instead of register_backward_hook() (as opposed to autograd.grad() as suggested), which seems to work as well:
class VanillaBackprop():
"""
Produces gradients generated with vanilla back propagation from the image
"""
def __init__(self, model):
self.model = model
self.gradients = None
# Put model in evaluation mode
self.model.eval()
# Hook the first layer to get the gradient
def hook_input(self, input_tensor):
def hook_function(grad_in):
self.gradients = grad_in
input_tensor.register_hook(hook_function)
def generate_gradients(self, input_image, target_class):
# Register input hook
self.hook_input(input_image)
# Forward
model_output = self.model(input_image)
# Zero grads
self.model.zero_grad()
# Target for backprop
device = next(self.model.parameters()).device
one_hot_output = torch.FloatTensor(1, model_output.size()[-1]).zero_()
one_hot_output[0][target_class] = 1
one_hot_output = one_hot_output.to(device)
# Backward pass
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_output.backward(gradient=one_hot_output.to(device))
# Convert Pytorch variable to numpy array
# [0] to get rid of the first channel (1,3,224,224)
gradients_as_arr = self.gradients.data.cpu().numpy()[0]
return gradients_as_arr
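If you'd rather follow the forum's autograd.grad suggestion directly, the same gradients can be obtained without any hook (a sketch; the input must have requires_grad=True):
input_image.requires_grad_(True)
model_output = self.model(input_image)
gradients = torch.autograd.grad(outputs=model_output,
                                inputs=input_image,
                                grad_outputs=one_hot_output)[0]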
| https://stackoverflow.com/questions/62994718/ |
Split and extract values from a PyTorch tensor according to given dimensions | I have a tensor A of size torch.Size([32, 32, 3, 3]) and I want to split it and extract a tensor B of size torch.Size([16, 16, 3, 3]) from it. The tensor can be 1d or 4d, and the split has to follow the given new tensor dimensions. I have been able to generate the target dimensions, but I'm unable to split and extract the values from the source tensor. I have tried torch.narrow, but it takes only 3 arguments and I need 4 in many cases. torch.split takes dim as an int, so the tensor is split along one dimension only. But I want to split it along multiple dimensions.
| You have multiple options:
use .split multiple times
use .narrow multiple times
use slicing
e.g.:
t = torch.rand(32, 32, 3, 3)
t0, t1 = t.split((16, 16), 0)
print(t0.shape, t1.shape)
>>> torch.Size([16, 32, 3, 3]) torch.Size([16, 32, 3, 3])
t00, t01 = t0.split((16, 16), 1)
print(t00.shape, t01.shape)
>>> torch.Size([16, 16, 3, 3]) torch.Size([16, 16, 3, 3])
t00_alt, t01_alt = t[:16, :16, :, :], t[16:, 16:, :, :]
print(t00_alt.shape, t01_alt.shape)
>>> torch.Size([16, 16, 3, 3]) torch.Size([16, 16, 3, 3])
| https://stackoverflow.com/questions/62999882/ |
PyTorch - RuntimeError: Error(s) in loading state_dict for VGG: | I've trained a model using PyTorch and saved a state dict file. I have loaded the pre-trained model using the code below. I am getting an error message regarding RuntimeError: Error(s) in loading state_dict for VGG:
RuntimeError: Error(s) in loading state_dict for VGG:
Missing key(s) in state_dict: "features.0.weight", "features.0.bias", "features.2.weight", "features.2.bias", "features.5.weight", "features.5.bias", "features.7.weight", "features.7.bias", "features.10.weight", "features.10.bias", "features.12.weight", "features.12.bias", "features.14.weight", "features.14.bias", "features.17.weight", "features.17.bias", "features.19.weight", "features.19.bias", "features.21.weight", "features.21.bias", "features.24.weight", "features.24.bias", "features.26.weight", "features.26.bias", "features.28.weight", "features.28.bias", "classifier.0.weight", "classifier.0.bias", "classifier.3.weight", "classifier.3.bias", "classifier.6.weight", "classifier.6.bias".
Unexpected key(s) in state_dict: "state_dict", "optimizer_state_dict", "globalStep", "train_paths", "test_paths".
I am following instruction available at this site: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-across-devices
Many Thanks
import argparse
import datetime
import glob
import os
import random
import shutil
import time
from os.path import join
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision.transforms import ToTensor
from tqdm import tqdm
import torch.optim as optim
from convnet3 import Convnet
from dataset2 import CellsDataset
from convnet3 import Convnet
from VGG import VGG
from dataset2 import CellsDataset
from torchvision import models
from Conv import Conv2d
parser = argparse.ArgumentParser('Predicting hits from pixels')
parser.add_argument('name',type=str,help='Name of experiment')
parser.add_argument('data_dir',type=str,help='Path to data directory containing images and gt.csv')
parser.add_argument('--weight_decay',type=float,default=0.0,help='Weight decay coefficient (something like 10^-5)')
parser.add_argument('--lr',type=float,default=0.0001,help='Learning rate')
args = parser.parse_args()
metadata = pd.read_csv(join(args.data_dir,'gt.csv'))
metadata.set_index('filename', inplace=True)
# create datasets:
dataset = CellsDataset(args.data_dir,transform=ToTensor(),return_filenames=True)
dataset = DataLoader(dataset,num_workers=4,pin_memory=True)
model_path = '/Users/nubstech/Documents/GitHub/CellCountingDirectCount/VGG_model_V1/checkpoints/checkpoint.pth'
class VGG(nn.Module):
def __init__(self, pretrained=True):
super(VGG, self).__init__()
vgg = models.vgg16(pretrained=pretrained)
# if pretrained:
vgg.load_state_dict(torch.load(model_path))
features = list(vgg.features.children())
self.features4 = nn.Sequential(*features[0:23])
self.de_pred = nn.Sequential(Conv2d(512, 128, 1, same_padding=True, NL='relu'),
Conv2d(128, 1, 1, same_padding=True, NL='relu'))
def forward(self, x):
x = self.features4(x)
x = self.de_pred(x)
return x
model=VGG()
#model.load_state_dict(torch.load(model_path),strict=False)
model.eval()
#optimizer = torch.optim.Adam(model.parameters(),lr=args.lr,weight_decay=args.weight_decay)
for images, paths in tqdm(dataset):
targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B
targets = targets.float()
# code to print training data to a csv file
#filename=CellsDataset(args.data_dir,transform=ToTensor(),return_filenames=True)
output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap)
preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B)
print(preds)
paths_test = np.array([paths])
names_preds = np.hstack(paths)
print(names_preds)
df=pd.DataFrame({'Image_Name':names_preds, 'Target':targets.detach(), 'Prediction':preds.detach()})
print(df)
# save image name, targets, and predictions
df.to_csv(r'model.csv', index=False, mode='a')
Code for saving the state dict
torch.save({'state_dict':model.state_dict(),
'optimizer_state_dict':optimizer.state_dict(),
'globalStep':global_step,
'train_paths':dataset_train.files,
'test_paths':dataset_test.files},checkpoint_path)
| The problem is that what is being saved is not the same as what is expected to be loaded. The code is trying to load only a state_dict; it is saving quite a bit more than that - looks like a state_dict inside another dict with additional info. The load method doesn't have any logic to look inside the dict.
This should work:
import torch, torchvision.models
model = torchvision.models.vgg16()
path = 'test.pth'
torch.save(model.state_dict(), path) # nothing else here
model.load_state_dict(torch.load(path))
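Alternatively, to reuse the checkpoint you already saved, load the outer dict and unwrap its "state_dict" entry. Note that the keys must still match the module the checkpoint was saved from, which depends on your training code:
checkpoint = torch.load(model_path, map_location='cpu')
vgg.load_state_dict(checkpoint['state_dict'])  # the nested dict holds the actual weights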
| https://stackoverflow.com/questions/63001490/ |
Where are those numbers coming from in pytorch neural networks? | I'm newbie in Pytorch (python), i was just scrolling through their official tutorial and i found this simple neural network architecture. Everything is clear, but those numbers, last three fully connected layers, where are they coming from?
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension and 16 from last conv layer
self.fc2 = nn.Linear(120, 84) # but here, 120? 84? why? is this just random or there is some logic behind it?
self.fc3 = nn.Linear(84, 10)
full code - https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
|
self.fc1 = nn.Linear(16 * 6 * 6, 120)
self.fc2 = nn.Linear(120, 84) # you can use any number instead of 120; play with this number and see which gives you the best result.
self.fc3 = nn.Linear(84, 10)
120 is the number of units in the first layer after the conv layers, 84 in the second layer, and 10 in the last, which is the dimension of your output layer, i.e. the 10 possible classes.
You are correct that the dimensions of the second and third layers are not fixed; you can try different numbers of units and choose the one that gives you the best result. You can play with it, but you can also look at some of the best-performing models and follow the structure they use.
| https://stackoverflow.com/questions/63002352/ |
Pytorch, INPUT (normal tensor) and WEIGHT (cuda tensor) mismatch | DISCLAIMER: I know this question has already been asked multiple times, but I tried those solutions and none of them worked for me; after all that effort I can't find anything else and eventually have to ask again.
I'm doing image classification with CNNs (PyTorch). I want to train it on GPU (an NVIDIA GPU, CUDA-compatible, with CUDA installed). I successfully managed to put the net on it, but the problem is with the data.
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU")
print("Available GPU", torch.cuda.device_count())
Running on the GPU
Available GPU 1
net = Net()
net.to(device)
for epoch in range(2):
running_loss=0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data[0].to(device), data[1].to(device) # putting data on gpu
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
dataiter = iter(testloader)
images, labels = dataiter.next()
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
out = net(images)
ERROR
----------------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-81-76c52eabb174> in <module>
----> 1 out = net(images)
~/anaconda3/envs/home/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-57-c74f8361a10b> in forward(self, x)
11
12 def forward(self, x):
---> 13 x = self.pool(F.relu(self.conv1(x)))
14 x = self.pool(F.relu(self.conv2(x)))
15 x = x.view(-1, 16*5*5)
~/anaconda3/envs/home/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/anaconda3/envs/home/lib/python3.8/site-packages/torch/nn/modules/conv.py in forward(self, input)
351
352 def forward(self, input):
--> 353 return self._conv_forward(input, self.weight)
354
355 class Conv3d(_ConvNd):
~/anaconda3/envs/home/lib/python3.8/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
347 weight, self.bias, self.stride,
348 _pair(0), self.dilation, self.groups)
--> 349 return F.conv2d(input, weight, self.bias, self.stride,
350 self.padding, self.dilation, self.groups)
351
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
inputs.is_cuda
True
and same for labels.
What I've tried:
https://stackoverflow.com/q/59013109/13287412
https://github.com/sksq96/pytorch-summary/issues/57
https://blog.csdn.net/qq_27261889/article/details/86575033
https://blog.csdn.net/public669/article/details/96510293
but nothing worked so far...
| Your images tensor is located on the CPU while your net is located on the GPU. Even when evaluating, you want to make sure that your input tensors and model are located on the same device; otherwise you will get tensor data type errors.
out = net(images.to(device))
| https://stackoverflow.com/questions/63005606/ |
Should I transpose features or weights in Neural network? | I am learning about neural networks.
Here is the complete piece of code:
https://github.com/udacity/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/Part%201%20-%20Tensors%20in%20PyTorch%20(Exercises).ipynb
When I transpose features, I get the following output:
import torch
def activation(x):
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
product = features.t() * weights + bias
output = activation(product.sum())
tensor(0.9897)
However, if I transpose weights, I get a different output:
weights_prime = weights.view(5,1)
prod = torch.mm(features, weights_prime) + bias
y_hat = activation(prod.sum())
tensor(0.1595)
Why does this happen?
Update
I took a look at the solution:
https://github.com/udacity/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/Part%201%20-%20Tensors%20in%20PyTorch%20(Solution).ipynb
And I saw this:
y = activation((features * weights).sum() + bias)
why can a matrix features(1,5) multiply another matrix weights(1,5) without transposing weights first?
Update 2
After read several posts, I realized that
matrixA * matrixB is different from torch.mm(matrixA,matrixB) and torch.matmul(matrixA,matrixB).
Could someone confirm my three understandings below?
So the * means element-wise multiplication, whereas torch.mm() and torch.matmul() are matrix-wise multiplication.
differences between torch.mm() and torch.matmul(): mm() is used specifically for 2 dimensions matrix, whereas matmul() can be used for more complicated cases.
In the Neural Network for this Udacity coding exercise mentioned in my link above, it needs element-wise multiplication.
Update 3
Just to bring in the Video screenshot for someone who has the same confusion:
And here is the video link: https://www.youtube.com/watch?time_continue=98&v=6Z7WntXays8&feature=emb_logo
| Looking at https://pytorch.org/docs/master/generated/torch.nn.Linear.html
The typical linear (fully connected) layer in torch uses input features of shape (N,∗,in_features) and weights of shape (out_features,in_features) to produce an output of shape (N,*,out_features). Here N is the batch size, and * is any number of other dimensions (may be none).
The implementation is:
output = input.matmul(weight.t())
So, the answer is that neither of your formulas is correct according to convention; the standard formula is the one above.
You may use a non-standard shape since you're implementing things from scratch; as long as it's consistent it may work, but I don't recommend it for learning. It's unclear what 1 and 5 is in your code, but presumably you want 5 input features and one output feature, with a batch size of 1 as well. In which case the standard shapes should be input = torch.randn((1, 5)) for batch size=1 and in_features=5, and weights = torch.randn((5, 1)) for in_features=5 and out_features=1.
There is no reason why weights should ever be the same shape as features; thus weights = torch.randn_like(features) doesn't make sense.
Lastly, for your actual questions:
"Should I transpose features or weights in Neural network?" - in torch convention, you should transpose weights, but use matmul with the features first. Other frameworks may have a different convention; as long as in_features dimension of the weights is multiplied by the num_features dimension of the input, it would work.
"Why does this happen?" - these are two completely different calculations; there is no reason to think they would produce the same result.
"So the * means element-wise multiplication, whereas torch.mm() and torch.matmul() are matrix-wise multiplication." - Yes; mm is matrix-matrix only, matmul is vector-matrix or matrix-matrix, including batched versions of same - check the docs for everything matmul can do (which is kinda a lot).
"differences between torch.mm() and torch.matmul(): mm() is used specifically for 2 dimensions matrix, whereas matmul() can be used for more complicated cases." - Yes; the big difference is that matmul can broadcast. Use it when you specifically intend that; use mm to prevent unintentional broadcasting.
"In Neutral Network for this Udacity coding exercise mentioned in my above link, it needs element-wise multiplication." - I doubt it; it's probably an error in the Udacity code. This bit of code weights = torch.randn_like(features) looks like an error in any case; the dimensions of weights have a meaning different from the dimensions of features.
| https://stackoverflow.com/questions/63006388/ |
Load pickle saved GPU tensor with CPU? | I save the last hidden layer of BERT for my downstream process, using pickle on GPU.
# output is the last hidden layer of bert, transformed on GPU
with open(filename, 'wb') as f:
pk.dump(output, f)
Is it possible to load it on my personal laptop, which has no GPU? I tried the following code, but it all failed.
# 1st try
with open(filename, 'rb') as f:
torch.load(f, map_location='cpu')
# 2nd
torch.load(filename, map_location=torch.device('cpu'))
All get the following error
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Is it possible to load the file on my laptop?
| If you are using PyTorch, you can save yourself some headache by saving the state_dict of the model instead of the model itself. The state_dict is an ordered dictionary that stores the weights of your neural network.
The saving routine:
import torch
model = MyFabulousPytorchModel()
torch.save(model.state_dict(), "best_model.pt")
Loading it requires you to initialize the model first:
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = MyFabulousPytorchModel()
model.load_state_dict(torch.load(PATH_TO_MODEL, map_location=device))
model.to(device)
There are many advantages of saving the state_dict instead of the object directly. One of them has to do with your issue: porting your model to a different environment isn't as painless as you were hoping for. Another advantage is that it is a lot easier to save checkpoints that allow you to resume training as if training had never stopped. All you have to do is save the state of the optimizer and the loss:
Saving checkpoint:
# somewhere in your training loop:
opt.zero_grad()
pred = model(x)
loss = loss_func(pred, target)
torch.save({"model": model.state_dict(), "opt": opt.state_dict(), "loss":loss}, "checkpoing.pt")
I highly recommend checking the documentation for further information on how to save and load models using pytorch. It is quite a smooth process if you understand its inner workings. https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference
Hope that helps =)
Edit:
More directly, to solve your problem I recomend the following
1- On the computer you used to train the model:
import torch
model = torch.load("PATH_TO_MODEL")
torch.save(model.state_dict(), "PATH.pt")
2- On the other computer:
import torch
from FILE_WHERE_THE_MODEL_CLASS_IS_DEFINED import Model
model = Model() # initialize one instance of the model)
model.load_state_dict(torch.load("PATH.pt")
| https://stackoverflow.com/questions/63008865/ |
PyTorch RuntimeError: CUDA out of memory. Tried to allocate 14.12 GiB | I have faced Cuda out of memory error for a simple fully connected layer model. I have tried torch.cuda.empty_cache() and gc.collect(). Also, I delete unnecessary variables by del and tried by reducing batch size. But the error does not resolved. Also, the error appears only for SUN datasets where 1440 test images are used to evaluate. But the code runs well for AWA2 datasets where no. of test images is 7913. I am using google colab here. I have used RTX 2060 too.
Here is the code snippet, where it gets error:
def euclidean_dist(x, y):
# x: N x D
# y: M x D
torch.cuda.empty_cache()
n = x.size(0)
m = y.size(0)
d = x.size(1)
assert d == y.size(1)
x = x.unsqueeze(1).expand(n, m, d)
y = y.unsqueeze(0).expand(n, m, d)
del n,m,d
return torch.pow(x - y, 2).sum(2)
def compute_accuracy(test_att, test_visual, test_id, test_label):
global s2v
s2v.eval()
with torch.no_grad():
test_att = Variable(torch.from_numpy(test_att).float().to(device))
test_visual = Variable(torch.from_numpy(test_visual).float().to(device))
outpre = s2v(test_att, test_visual)
del test_att, test_visual
outpre = torch.argmax(torch.softmax(outpre, dim=1), dim=1)
outpre = test_id[outpre.cpu().data.numpy()]
#compute averaged per class accuracy
test_label = np.squeeze(np.asarray(test_label))
test_label = test_label.astype("float32")
unique_labels = np.unique(test_label)
acc = 0
for l in unique_labels:
idx = np.nonzero(test_label == l)[0]
acc += accuracy_score(test_label[idx], outpre[idx])
acc = acc / unique_labels.shape[0]
return acc
The error is:
Traceback (most recent call last): File "GBU_new_v2.py", line 234, in <module>
acc_seen_gzsl = compute_accuracy(attribute, x_test_seen, np.arange(len(attribute)), test_label_seen) File "GBU_new_v2.py", line 111, in compute_accuracy
outpre = s2v(test_att, test_visual) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs) File "GBU_new_v2.py", line 80, in forward
a1 = euclidean_dist(feat, a1) File "GBU_new_v2.py", line 62, in euclidean_dist
return torch.pow(x - y, 2).sum(2)#.sqrt() # return: N x M RuntimeError: CUDA out of memory. Tried to allocate 14.12 GiB (GPU 0;
15.90 GiB total capacity; 14.19 GiB already allocated; 669.88 MiB free; 14.55 GiB reserved in total by PyTorch)
| It seems like you have batches defined only for training, while during test you attempt to process the entire test set simultaneously.
You should split your test set into smaller "batches" and evaluate one batch at a time, combining all the batch scores at the end into one score for the model.
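A minimal sketch of what that could look like inside compute_accuracy (the batch size of 64 and batching only over the visual features are assumptions about your model's inputs):
pred_chunks = []
with torch.no_grad():
    for i in range(0, test_visual.shape[0], 64):
        out_b = s2v(test_att, test_visual[i:i + 64])
        pred_chunks.append(torch.argmax(torch.softmax(out_b, dim=1), dim=1).cpu())
outpre = torch.cat(pred_chunks)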
| https://stackoverflow.com/questions/63010568/ |
Can I change volatile = False to torch.set_grad_enabled(True)??(in Pytorch) | I have old python codes, so I need to change some parts.
next_q_values.volatile=False
I have this code, and next_q_values is 'torch.Tensor'
when I run this code:
error occured: "volatile was removed and now has no effect use with
torch no_grad instead"
After search, I know that volatile = True is same as torch.no_grad(),
but I want to use volatile = False, So I can't use torch.no_grad().
Can I change volatile = False to torch.set_grad_enabled(True)?
| If you have:
next_q_values.volatile = False
You can change it to:
with torch.no_grad():
next_q_values
...
# You do something with next_q_values here
Every operation on next_q_values should be within the scope of the context manager.
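For the volatile=False case you asked about, gradients are enabled (which is the default behavior), and torch.set_grad_enabled(True) is the matching context manager, so your proposed replacement works:
with torch.set_grad_enabled(True):
    next_q_values
    ...
    # operations on next_q_values here will track gradients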
| https://stackoverflow.com/questions/63011899/ |
Pytorch MNIST code is returning IndexError | I've followed the Pytorch documentation and have made an extremely simple classifier for the MNIST dataset. Below is my code:
import numpy as np
import torch
import torchvision
from torchvision import transforms, datasets
import torch.nn as nn
import torch.nn.functional as F
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])
])
train = datasets.MNIST('', train=True, download=True, transform=transform)
test = datasets.MNIST('', train=False, download=True, transform=transform)
trainset = torch.utils.data.DataLoader(train, batch_size=1, shuffle=True)
testset = torch.utils.data.DataLoader(test, batch_size=1, shuffle=False)
class Classifier(nn.Module):
def __init__(self, D_in, H, D_out):
super(Classifier, self).__init__()
self.linear_1 = torch.nn.Linear(D_in, H)
self.linear_2 = torch.nn.Linear(H, D_out)
def forward(self, x):
x = self.linear_1(x).clamp(min=0)
x = self.linear_2(x)
return F.log_softmax(x, dim=1)
net = Classifier(28*28, 128, 10)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(3):
running_loss = 0.0
for X, label in iter(trainset):
X = X.view(28*28, -1)
optimizer.zero_grad()
output = net(torch.flatten(X))
loss = nn.CrossEntropyLoss(output, label)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1}] loss: {running_loss / 2000}')
running_loss = 0.0
print("Finished training.")
torch.save(net.state_dict(), './classifier.pth')
For some reason, I'm getting the output
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
at the line: output = net(torch.flatten(X))
Thanks in advance for your help!
| When you flatten() you remove all dimensions, including the batch dimension!
Try:
output = net(X.view(X.shape[0], -1))
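Applied to the question's loop, a minimal sketch (two asides, which are my assumptions rather than part of the answer above: the loss must be instantiated before it is called, and since the network already ends in log_softmax, nn.NLLLoss is the matching criterion):
criterion = nn.NLLLoss()  # pairs with the log_softmax output of the network

for X, label in trainset:
    X = X.view(X.shape[0], -1)  # keep the batch dimension, flatten the rest
    optimizer.zero_grad()
    output = net(X)
    loss = criterion(output, label)
    loss.backward()
    optimizer.step()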
| https://stackoverflow.com/questions/63014528/ |
Pytorch based Resnet18 achieves low accuracy on CIFAR100 | I'm training a ResNet18 on the CIFAR100 dataset. After about 50 iterations the validation accuracy converged at about 34%, while the training accuracy reached almost 100%.
I suspect it's overfitting, so I applied data augmentation like RandomHorizontalFlip and RandomRotation, which made the validation accuracy converge at about 40%.
I also tried a decaying learning rate [0.1, 0.03, 0.01, 0.003, 0.001], decaying after every 50 iterations. Decaying the learning rate did not seem to enhance performance.
I've heard that ResNet on CIFAR100 can reach 70%~80% accuracy. What other tricks could I apply? Or is there anything wrong in my implementation? The same code on CIFAR10 can achieve about 80% accuracy.
My whole training and evaluation code is here below:
import torch
from torch import nn
from torch import optim
from torch.utils.data import DataLoader
from torchvision.models import resnet18
from torchvision.transforms import Compose, ToTensor, RandomHorizontalFlip, RandomRotation, Normalize
from torchvision.datasets import CIFAR10, CIFAR100
import os
from datetime import datetime
import matplotlib.pyplot as plt
def draw_loss_curve(histories, legends, save_dir):
os.makedirs(save_dir, exist_ok=True)
for key in histories[0][0].keys():
if key != "epoch":
plt.figure()
plt.title(key)
for history in histories:
x = [h["epoch"] for h in history]
y = [h[key] for h in history]
# plt.ylim(ymin=0, ymax=3.0)
plt.plot(x, y)
plt.legend(legends)
plt.savefig(os.path.join(save_dir, key + ".png"))
def cal_acc(out, label):
batch_size = label.shape[0]
pred = torch.argmax(out, dim=1)
num_true = torch.nonzero(pred == label).shape[0]
acc = num_true / batch_size
return torch.tensor(acc)
class LrManager(optim.lr_scheduler.LambdaLR):
def __init__(self, optimizer, lrs):
def f(epoch):
rate = 1
for k in sorted(lrs.keys()):
if epoch >= k:
rate = lrs[k]
else:
break
return rate
super(LrManager, self).__init__(optimizer, f)
def main(cifar=100, epochs=250, batches_show=100):
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
print("warning: CUDA is not available, using CPU instead")
dataset_cls = CIFAR10 if cifar == 10 else CIFAR100
dataset_train = dataset_cls(root=f"data/{dataset_cls.__name__}/", download=True, train=True,
transform=Compose([RandomHorizontalFlip(), RandomRotation(15), ToTensor(), Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]))
dataset_val = dataset_cls(root=f"data/{dataset_cls.__name__}/", download=True, train=False,
transform=Compose([ToTensor(), Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]))
loader_train = DataLoader(dataset_train, batch_size=128, shuffle=True)
loader_val = DataLoader(dataset_val, batch_size=128, shuffle=True)
model = resnet18(pretrained=False, num_classes=cifar).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5)
lr_scheduler = LrManager(optimizer, {0: 1.0, 50: 0.3, 100: 0.1, 150: 0.03, 200: 0.01})
criterion = nn.CrossEntropyLoss()
history = []
model.train()
for epoch in range(epochs):
print("------------------- TRAINING -------------------")
loss_train = 0.0
running_loss = 0.0
acc_train = 0.0
running_acc = 0.0
for batch, data in enumerate(loader_train, 1):
img, label = data[0].to(device), data[1].to(device)
optimizer.zero_grad()
pred = model(img)
loss = criterion(pred, label)
loss.backward()
optimizer.step()
running_loss += loss.item()
loss_train += loss.item()
acc = cal_acc(pred, label)
running_acc += acc.item()
acc_train += acc.item()
if batch % batches_show == 0:
print(f"epoch: {epoch}, batch: {batch}, loss: {running_loss/batches_show:.4f}, acc: {running_acc/batches_show:.4f}")
running_loss = 0.0
running_acc = 0.0
loss_train = loss_train / batch
acc_train = acc_train / batch
lr_scheduler.step()
print("------------------- EVALUATING -------------------")
with torch.no_grad():
running_acc = 0.0
for batch, data in enumerate(loader_val, 1):
img, label = data[0].to(device), data[1].to(device)
pred = model(img)
acc = cal_acc(pred, label)
running_acc += acc.item()
acc_val = running_acc / batch
print(f"epoch: {epoch}, acc_val: {acc_val:.4f}")
history.append({"epoch": epoch, "loss_train": loss_train, "acc_train": acc_train, "acc_val": acc_val})
draw_loss_curve([history], legends=[f"resnet18-CIFAR{cifar}"], save_dir=f"history/resnet18-CIFAR{cifar}[{datetime.now()}]")
if __name__ == '__main__':
main()
| Resnet18 from torchvision.models is an ImageNet implementation. Because ImageNet samples are much bigger (224x224) than CIFAR10/100 samples (32x32), the first layers are designed to aggressively downsample the input (the 'stem' of the network). This leads to losing much valuable information on the small CIFAR10/100 images.
To achieve good accuracy on CIFAR10, the authors use a different network structure, as described in the original paper:
https://arxiv.org/pdf/1512.03385.pdf
and explained in this article:
https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0
You can download a ResNet for CIFAR10 from this repo: https://github.com/akamaster/pytorch_resnet_cifar10
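If you would rather keep the torchvision model, a common adaptation (a sketch of my own, not taken from the links above) is to replace the ImageNet stem with a CIFAR-sized one:
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(pretrained=False, num_classes=100)
# swap the 7x7/stride-2 conv for a 3x3/stride-1 conv suited to 32x32 inputs
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# drop the max-pool so the early feature maps are not downsampled twice
model.maxpool = nn.Identity()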
| https://stackoverflow.com/questions/63015883/ |
Loss does not decrease after modifying prediction with torch.floor() | For the following code, the loss decreases.
loss_function=nn.MSELoss()
loss=loss_function(pred,label)
But the loss remains completely unchanged if I transform pred with the floor function. I checked the parameters after opt.step(); they are not changing.
loss_function=nn.MSELoss()
loss=loss_function(torch.floor(pred),label)
Why might this happen?
My guess: the torch.floor(pred) operation breaks the computation graph. Other "real" mathematical operations like pred*3 (for example) do not break the computation graph.
| It's not breaking the computation graph; the gradients are zero, so your steps have no effect.
Consider the plot of floor(x) shown below. Notice that the function is discontinuous, so it's technically not differentiable at whole numbers. Also, at every point where it is differentiable it's a flat function. In other words, the derivative is zero almost everywhere. PyTorch simply assigns the gradient of floor to be zero everywhere, since there's really no alternative other than to raise an exception. This implies that, regardless of the value of the loss, the gradients of your loss function w.r.t. your parameters will be zero (following the chain rule/backprop). Therefore, any gradient-descent-based optimization will have no effect on the model parameters.
(Image source: Wikipedia)
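A quick sketch confirms this: the backward pass succeeds (so the graph is intact), but the gradient that arrives is zero:
import torch

x = torch.tensor([2.5], requires_grad=True)
y = torch.floor(x).sum()
y.backward()    # backward succeeds: the graph is not broken
print(x.grad)   # tensor([0.]) -- but the gradient is zero everywhere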
| https://stackoverflow.com/questions/63020101/ |
Getting indices of values in high dimension matrix | I have tensorA of size 10x4x9x2 and another tensorB of size 10x5x2 that contains values from tensorA. How can I find the index of each element of tensorB in tensorA?
Example:
First 2 elements of TensorA:
[[[[ 4., 1.],
[ 1., 2.],
[ 2., 5.],
[ 5., 3.],
[ 3., 11.],
[11., 10.],
[10., -1.],
[-1., -1.],
[-1., -1.]],
[[12., 13.],
[13., 9.],
[ 9., 7.],
[ 7., 5.],
[ 5., 3.],
[ 3., 4.],
[ 4., 1.],
[ 1., 0.],
[ 0., -1.]],
...... so on
First 2 elements of TensorB:
[[[ 2., 5.],
[ 5., 7.],
[ 7., 9.],
[ 9., 10.],
[10., 12.]],
[[ 0., 1.],
[ 1., 2.],
[ 2., 5.],
[ 5., -1.],
[-1., -1.]],
Now in tensorB the first element is [2, 5], included in the first 5x2 matrix (dimension 0), so the element should be matched against dimension 0 in tensorA, and the output should be index 0,0,2 since it is the 3rd element.
| You can compare the rows that are equal, sum along the last axis, and check that sum against the size of the searched tensor. Then the nonzero function will get you the indices you're looking for.
Since for the example tensors you have given, TensorB[0, 0] is [2., 5.], that looks like:
((TensorA == TensorB[0, 0]).sum(dim=3) == 2).nonzero()
This will return a tensor of [[0, 0, 2]] if that is the only matching row. If you don't want to hard-code 2 (the size of the searched tensor), you can use:
((TensorA == TensorB[0, 0]).sum(dim=3) == TensorB[0, 0].size()[0]).nonzero()
| https://stackoverflow.com/questions/63020336/ |
one of the variables needed for gradient computation has been modified by an inplace operation : can't find inplace operation |
I have this code below and I can't find the in-place operation that prevents the gradient from being computed.
for epoch in range(nepoch):
model.train()
scheduler.step()
for batch1 in loader1:
torch.ones(len(batch1[0]), dtype=torch.float)
x, label = batch1
x = x.to('cuda', non_blocking=True)
optimizer.zero_grad()
pred = model(x)
pred = pred.squeeze() if pred.ndimension() > 1 else pred
label = (label.float()).cuda(cuda0)
weights = torch.ones(len(label))
loss_fun = torch.nn.BCEWithLogitsLoss(weight=weights.cuda(cuda0))
score = loss_fun(pred, label)
label = np.array(np.round(label.cpu().detach())).astype(bool)
pred = np.array(pred.cpu().detach()>0).astype(bool)
torch.autograd.set_detect_anomaly(True)
score.backward()
optimizer.step()
At the end I have this error that pops up:
Warning: Error detected in MulBackward0. Traceback of forward call that caused the error:
File "train.py", line 98, in <module>
pred = model(x)
File "/home/anatole2/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/anatole2/miniconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/anatole2/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/anatole2/best/PCEN_pytorch.py", line 30, in forward
filtered[i] = filtered[i] + (1-exp(self.log_s)) * filtered[i-1]
(print_stack at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:60)
Traceback (most recent call last):
File "train.py", line 116, in <module>
score.backward()
File "/home/anatole2/miniconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/anatole2/miniconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 1, 80]], which is output 0 of SelectBackward, is at version 378; expected version 377 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
If you could help me that'd be great !
| The in-place operation seems to be on this line:
File "/home/anatole2/best/PCEN_pytorch.py", line 30, in forward
filtered[i] = filtered[i] + (1-exp(self.log_s)) * filtered[i-1]
Note that it is using the value from filtered[i] and then storing the result in filtered[i]. This is what in-place means; the new value overwrites the old one.
To fix it, you'd need to do something like this:
filtered_new = torch.zeros_like(filtered)
...
filtered_new[i] = filtered[i] + (1-exp(self.log_s)) * filtered[i-1]
The part that makes this a bit complicated is that this seems to be inside a loop (I assume i is the loop counter) and it probably uses the values from the previous pass through the loop. The modified version is not in-place, but probably won't produce the same results as the original either. So you may have to do something like this:
filtered_new[i] = filtered[i] + (1-exp(self.log_s)) * filtered_new[i-1]
It's impossible to solve this without seeing more code around this, but basically - look around, and replace any operation which changes existing tensors with an operation which creates new tensors to store the results of the calculation.
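As a minimal sketch of that idea (the shapes and the exp(self.log_s) recurrence are modelled on the traceback; the surrounding PCEN code is not shown in the question, so this is illustrative only):
import torch

filtered = torch.randn(16, 1, 80, requires_grad=True)  # stand-in input
log_s = torch.zeros(1, requires_grad=True)             # stand-in for self.log_s

out = [filtered[0]]
for i in range(1, filtered.shape[0]):
    # read the previous *new* value, write into a fresh list slot:
    out.append(filtered[i] + (1 - torch.exp(log_s)) * out[i - 1])
result = torch.stack(out)  # no existing tensor was overwritten in place

result.sum().backward()    # backward now succeeds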
| https://stackoverflow.com/questions/63022629/ |
How to prevent the initial pytorch variable from changing using a function? | I want to apply a function to the variable x and save the result as y. But why is x also changed? How can I prevent it?
import torch
def minus_min(raw):
for col_i in range(len(raw[0])):
new=raw
new[:,col_i] = (raw[:,col_i] - raw[:,col_i].min())
return new
x=torch.tensor([[0,1,2,3,4],
[2,3,4,0,8],
[0,1,2,3,4]])
y=minus_min(x)
print(y)
print(x)
output:
tensor([[0, 0, 0, 3, 0],
[2, 2, 2, 0, 4],
[0, 0, 0, 3, 0]])
tensor([[0, 0, 0, 3, 0],
[2, 2, 2, 0, 4],
[0, 0, 0, 3, 0]])
| Because this assignment:
new[:,col_i] = (raw[:,col_i] - raw[:,col_i].min())
is an in-place operation. Therefore, x and y will share the underlying .data.
The smallest change that would solve this issue would be to make a copy of x inside the function:
def minus_min(raw):
new = raw.clone() # <--- here
for col_i in range(len(raw[0])):
new[:,col_i] = raw[:,col_i] - raw[:,col_i].min()
return new
If you want, you can simplify your function (and remove the for loop):
y = x - x.min(dim=0).values
| https://stackoverflow.com/questions/63023793/ |
How do I use BertForMaskedLM or BertModel to calculate perplexity of a sentence? | I want to use BertForMaskedLM or BertModel to calculate perplexity of a sentence, so I write code like this:
import numpy as np
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertForMaskedLM
# Load pre-trained model (weights)
with torch.no_grad():
model = BertForMaskedLM.from_pretrained('hfl/chinese-bert-wwm-ext')
model.eval()
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('hfl/chinese-bert-wwm-ext')
sentence = "我不会忘记和你一起奋斗的时光。"
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
sen_len = len(tokenize_input)
sentence_loss = 0.
for i, word in enumerate(tokenize_input):
# add mask to i-th character of the sentence
tokenize_input[i] = '[MASK]'
mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
output = model(mask_input)
prediction_scores = output[0]
softmax = nn.Softmax(dim=0)
ps = softmax(prediction_scores[0, i]).log()
word_loss = ps[tensor_input[0, i]]
sentence_loss += word_loss.item()
tokenize_input[i] = word
ppl = np.exp(-sentence_loss/sen_len)
print(ppl)
I think this code is right, but I also notice BertForMaskedLM's parameter masked_lm_labels, so could I use this parameter to calculate the PPL of a sentence more easily?
I know the input_ids argument is the masked input and the masked_lm_labels argument is the desired output. But I couldn't understand the actual meaning of its output loss; its code looks like this:
if masked_lm_labels is not None:
loss_fct = CrossEntropyLoss() # -100 index = padding token
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size),
masked_lm_labels.view(-1))
outputs = (masked_lm_loss,) + outputs
| Yes, you can use the parameter labels (or masked_lm_labels; I think the param name varies across versions of huggingface transformers) to specify the masked token positions, and use -100 to ignore the tokens that you don't want to include in the loss computation.
For example,
sentence='我爱你'
from transformers import BertTokenizer, BertForMaskedLM
import torch
import numpy as np
tokenizer = BertTokenizer(vocab_file='vocab.txt')
model = BertForMaskedLM.from_pretrained('bert-base-chinese')
tensor_input = tokenizer.encode(sentence, return_tensors='pt')
# tensor([[ 101, 2769, 4263, 872, 102]])
repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
# tensor([[ 101, 2769, 4263, 872, 102],
# [ 101, 2769, 4263, 872, 102],
# [ 101, 2769, 4263, 872, 102]])
mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
# tensor([[0., 1., 0., 0., 0.],
# [0., 0., 1., 0., 0.],
# [0., 0., 0., 1., 0.]])
masked_input = repeat_input.masked_fill(mask == 1, 103)
# tensor([[ 101, 103, 4263, 872, 102],
# [ 101, 2769, 103, 872, 102],
# [ 101, 2769, 4263, 103, 102]])
labels = repeat_input.masked_fill( masked_input != 103, -100)
# tensor([[-100, 2769, -100, -100, -100],
# [-100, -100, 4263, -100, -100],
# [-100, -100, -100, 872, -100]])
loss,_ = model(masked_input, masked_lm_labels=labels)
score = np.exp(loss.item())
The function:
def score(model, tokenizer, sentence, mask_token_id=103):
tensor_input = tokenizer.encode(sentence, return_tensors='pt')
repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
masked_input = repeat_input.masked_fill(mask == 1, 103)
labels = repeat_input.masked_fill( masked_input != 103, -100)
loss,_ = model(masked_input, masked_lm_labels=labels)
result = np.exp(loss.item())
return result
score(model, tokenizer, '我爱你') # returns 45.63794545581973
| https://stackoverflow.com/questions/63030692/ |
Weights not updating on my neural net (Pytorch) | I'm completely new to neural nets, so I tried to roughly follow some tutorials to create a neural net that can just distinguish if a given binary picture contains a white circle or if it is all black. So, I generated 1000 arrays of size 10000 representing a 100x100 picture with half of them containing a white circle somewhere. The generation of my dataset looks like this:
for i in range(1000):
image = [0] * (IMAGE_SIZE * IMAGE_SIZE)
if random() < 0.5:
dataset.append([image, [[0]]])
else:
#inserts circle in image
#...
dataset.append([image, [[1]]])
np.random.shuffle(dataset)
np.save("testdataset.npy", dataset)
The double list around the classifications is because the net seemed to give that format as an output, so I matched that.
Now since I don't really have any precise idea of how pytorch works, I don't really know which parts of the code are relevant for solving my problem and which aren't. Therefore, I gave the code for the net and the training down below and really hope that someone can explain to me where I went wrong. I'm sorry if it's too much code. The code runs without errors, but if I print the parameters before and after training they didn't change in any way, and the net will always just return a 0 for every image/array.
IMAGE_SIZE = 100
EPOCHS = 3
BATCH_SIZE = 50
VAL_PCT = 0.1
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(IMAGE_SIZE * IMAGE_SIZE, 64)
self.fc2 = nn.Linear(64, 64)
self.fc3 = nn.Linear(64, 64)
self.fc4 = nn.Linear(64, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return F.log_softmax(x, dim = 1)
net = Net()
optimizer = optim.Adam(net.parameters(), lr = 0.01)
loss_function = nn.MSELoss()
dataset = np.load("testdataset.npy", allow_pickle = True)
X = torch.Tensor([i[0] for i in dataset]).view(-1, 10000)
y = torch.Tensor([i[1] for i in dataset])
val_size = int(len(X) * VAL_PCT)
train_X = X[:-val_size]
train_y = y[:-val_size]
test_X = X[-val_size:]
test_y = y[-val_size:]
for epoch in range(EPOCHS):
for i in range(0, len(train_X), BATCH_SIZE):
batch_X = train_X[i:i + BATCH_SIZE].view(-1, 1, 10000)
batch_y = train_y[i:i + BATCH_SIZE]
net.zero_grad()
outputs = net(batch_X)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step()
| Instead of net.zero_grad() I would recommend using optimizer.zero_grad() as it's more common and de facto standard. Your training loop should be:
for epoch in range(EPOCHS):
for i in range(0, len(train_X), BATCH_SIZE):
batch_X = train_X[i:i + BATCH_SIZE].view(-1, 1, 10000)
batch_y = train_y[i:i + BATCH_SIZE]
optimizer.zero_grad()
outputs = net(batch_X)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step()
I would recommend reading a bit about different loss functions. It seems you have a classification problem; for that you should use the logits (binary classification) or cross entropy (multi-class) loss. I would make the following changes to the network and loss function:
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(IMAGE_SIZE * IMAGE_SIZE, 64)
self.fc2 = nn.Linear(64, 64)
self.fc3 = nn.Linear(64, 64)
self.fc4 = nn.Linear(64, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
loss_function = nn.BCEWithLogitsLoss()
Check the documentation before using it: https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss
Good luck!
| https://stackoverflow.com/questions/63035319/ |
How do I install the d2l library on Windows without conda? | The d2l.ai textbook includes its own Python library, d2l. Their instructions recommend installing with conda, a Python package manager that also functions as a general library manager. But I've been burned by conda a couple times, and prefer pure pip. How can I install d2l on Windows with pure pip? I only need PyTorch, not MxNet or Tensorflow.
| Using pip
This is kind of an embarrassing answer to my own question, but I recommend you follow the CPU or GPU instructions provided, following just the PYTORCH tab:
"C:\Program Files\Python37\Scripts\pip" install torch==1.5.1 torchvision -f https://download.pytorch.org/whl/torch_stable.html
and
"C:\Program Files\Python37\Scripts\pip" install -U d2l
I include the full paths to Python37 here because Python38 is in my path, and torch (and therefore d2l.torch) only works with 3.5 <= python < 3.8.
Earlier, I had trouble with mxnet being missing, but when I ran pip install -U d2l again (with the full path to my preferred Python distribution), from d2l import torch as d2l worked just fine for me.
As standalone script
Sometimes it is convenient just to use the direct files. For example, for installing with torch, you can just save this file, renaming it to d2l.py or d2l_torch.py to distinguish it from the main torch library. Place it besides your other python script, and then you can import it with import d2l_torch or whatever you called the script.
If you read the contents of this python file, you will see it is simply a concatenation of all the code samples in the book, with a few convenience aliases at the end of the file that make it easier for the authors to write their book in three languages at the same time. You can often get by without importing d2l at all and just copy-paste the example code that you need into your own code (with citation of course), or learn what torch commands they are aliasing.
Earlier instructions
In order to import d2l, you will need torch installed. Do NOT try to install these with a simple pip install torch. You need specific versions of both libraries for d2l to work.
In this example, I use Python 3.7. These instructions assume that you have another version of Python as your default version and show how to use full paths to pip to ensure the right installation.
To install torch, go to https://pytorch.org/, in the colorful grid, click on pip, copy the command, open a command prompt as an administrator (right-click and select "Run as Administrator") then paste the command, which should look something like:
pip install torch===1.5.1 torchvision===0.6.1 -f https://download.pytorch.org/whl/torch_stable.html
Then, edit the command to replace pip replaced with the full path to your version of pip, e.g.:
"C:\Program Files\Python37\Scripts\pip" install torch===1.5.1 torchvision===0.6.1 -f https://download.pytorch.org/whl/torch_stable.html
(You don't need to edit the command as long as Python 3.7 is in your path.)
I expanded my own answer to a PyTorch-specific question in writing this answer.
| https://stackoverflow.com/questions/63039866/ |
How to extract and use BERT encodings of sentences for Text similarity among sentences. (PyTorch/Tensorflow) | I want to make a text similarity model which I intend to use for FAQ finding and other methods of retrieving the most related text. I want to use the highly optimised BERT model for this NLP task. I intend to use the encodings of all the sentences to get a similarity matrix using cosine_similarity and return results.
In hypothetical terms, if I have two sentences hello world and hello hello world, then I am assuming BERT would give me something like [0.2,0.3,0] (0 for padding) and [0.2,0.2,0.3], and I can pass these two into sklearn's cosine_similarity.
How am I supposed to extract the embeddings of the sentences to use them in the model? I found somewhere that they can be extracted like:
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
Using Tensorflow:
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
Is this the right way? Because I read somewhere that there are different types of embeddings that BERT offers.
Also, please do suggest any other method to find text similarity.
| When you want to compare the embeddings of sentences the recommended way to do this with BERT is to use the value of the CLS token. This corresponds to the first token of the output (after the batch dimension).
last_hidden_states = outputs[0]
cls_embedding = last_hidden_states[0][0]
This will give you one embedding for the entire sentence. As you will have the same size embedding for each sentence you can then easily compute the cosine similarity.
If you do not get satisfactory results using the CLS token you can also try averaging the output embedding for each word in the sentence.
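Putting both steps together, a minimal sketch using the PyTorch snippet from the question (the second sentence is made up for illustration):
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

def cls_embedding(sentence):
    input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0)
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]
    return last_hidden_states[0][0]  # embedding of the CLS token

a = cls_embedding("Hello, my dog is cute")
b = cls_embedding("Hello, my cat is cute")
print(torch.cosine_similarity(a, b, dim=0).item())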
| https://stackoverflow.com/questions/63040954/ |
Consecutive calls to stack throws RuntimeError: stack expects each tensor to be equal size | I think both torch.cat and torch.stack cannot fully satisfy my requirement.
Initially, I define an empty tensor. Then I want to append a 1d-tensor to it multiple times.
x = torch.tensor([]).type(torch.DoubleTensor)
y = torch.tensor([ 0.3981, 0.6952, -1.2320]).type(torch.DoubleTensor)
x = torch.stack([x,y])
This will throw an error:
RuntimeError: stack expects each tensor to be equal size, but got [0] at entry 0 and [3] at entry 1
So I have to initialise x as torch.tensor([0,0,0]) (but can this be avoided?)
x = torch.tensor([0,0,0]).type(torch.DoubleTensor)
y = torch.tensor([ 0.3981, 0.6952, -1.2320]).type(torch.DoubleTensor)
x = torch.stack([x,y]) # this is okay
x = torch.stack([x,y]) # <--- this got error again
But when I run x = torch.stack([x,y]) the second time, I got this error:
RuntimeError: stack expects each tensor to be equal size, but got [2, 3] at entry 0 and [3] at entry 1
What I want to achieve is being able to append a 1d-tensor multiple times (the added 1d-tensor is different each time; here I use the same one for simplicity):
tensor([[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
...
[ 0.3981, 0.6952, -1.2320]], dtype=torch.float64)
How to achieve this?
| From the documentation of torch.cat "All tensors must either have the same shape (except in the concatenating dimension) or be empty". So, the easiest solution is to add one more dimension (size 1) to the tensor you want to add. Then, you will have tensors of size (n, whatever) and (1, whatever) which will be concatenated along the 0th dimension, meeting the requirements for torch.cat.
Code:
x = torch.empty(size=(0,3))
y = torch.tensor([ 0.3981, 0.6952, -1.2320])
for n in range(5):
y1 = y.unsqueeze(dim=0) # same as y but with shape (1,3)
x = torch.cat([x,y1], dim=0)
print(x)
Output:
tensor([[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320],
[ 0.3981, 0.6952, -1.2320]])
| https://stackoverflow.com/questions/63041723/ |
MSE with predictions not expected | Here is a regression model where I attempt to predict the y values (outputs) from x values (inputs) . Each class is given a different mean and normalized with l2 normalization:
x_values = sklearn.preprocessing.normalize(x_values, norm="l2")
This may appear as a classification problem attempting to be solved using regression. I'm attempting to understand multiclass regression in PyTorch, as the PyTorch doc gives the following example which suggests multiclass regression is possible:
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
src: https://pytorch.org/docs/master/generated/torch.nn.MSELoss.html
Entire code:
% reset - f
from datetime import datetime
from sklearn import metrics
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data as data_utils
import torch.nn as nn
import torch.nn.functional as F
import random
from torch.autograd import Variable
import pandas as pd
import unittest
import time
from collections import Counter
import sklearn
x_values = []
y_values = []
input_size = 17
lr = .1
# Class1
mu, sigma = 0, 0.1 # mean and standard deviation
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
# Class2
mu, sigma = 5, 0.5 # mean and standard deviation
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
# Class3
mu, sigma = 10, 1.0 # mean and standard deviation
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
# Class4
mu, sigma = 15, 1.5 # mean and standard deviation
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
# Class5
mu, sigma = 20, 2.0 # mean and standard deviation
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values.append(np.random.normal(mu, sigma, input_size))
x_values = sklearn.preprocessing.normalize(x_values, norm="l2")
y_values.append(0)
y_values.append(0)
y_values.append(0)
y_values.append(1)
y_values.append(1)
y_values.append(1)
y_values.append(2)
y_values.append(2)
y_values.append(2)
y_values.append(3)
y_values.append(3)
y_values.append(3)
y_values.append(4)
y_values.append(4)
y_values.append(4)
num_classes = len(y_values)
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
self.criterion = torch.nn.MSELoss()
self.model = torch.nn.Sequential(
torch.nn.Linear(input_size, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, num_classes)
# torch.nn.ReLU()
)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
def update(self, state, action):
y_pred = self.model(torch.Tensor(state))
loss = self.criterion(y_pred, Variable(torch.Tensor(action)))
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return loss
def predict(self, s):
with torch.no_grad():
return self.model(torch.Tensor(s))
def weights_init(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0.0, 1)
model = NeuralNet()
model.apply(weights_init)
print('len(states)', len(x_values))
i = 0
for s in range(7000):
if i == 15:
i = 0
x = x_values[i]
loss_value = model.update(x, y_values)
if s % 1000 == 0:
print('loss_value', loss_value)
i = i + 1
Predicting on the x_values:
[torch.argmax(model.predict(s)) for s in x_values]
returns:
[tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14),
tensor(14)]
As I have defined classes with difference means and the final loss value is low (4.7370e-15) I expect the predicted values to be closer to:
[tensor(0)
tensor(0),
tensor(0),
tensor(1),
tensor(1),
tensor(1),
tensor(2),
tensor(2),
tensor(2),
tensor(3),
tensor(3),
tensor(3),
tensor(4),
tensor(4),
tensor(4)]
Why are the predicted outputs not close to my expectation?
Have I set up the model incorrectly?
| Are you sure you have a regression problem? When the output is a specific class, the problem is usually treated as classification, regardless of the input.
Another concept is that you are trying to represent some sort of ordinal categorical variable.
You can pose the problem in two ways:
1 - Consider that you have a classification problem.
class NeuralNet(nn.Module):
class ExpActivation(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.exp(x)
class BoundedPositiveNumber(nn.Module):
def __init__(self):
super().__init__()
self.max_value = 4
def forward(self, x):
return self.max_value * torch.sigmoid(x)
def __init__(self):
super(NeuralNet, self).__init__()
self.criterion = torch.nn.CrossEntropyLoss()
self.model = torch.nn.Sequential(
torch.nn.Linear(input_size, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, num_classes),
torch.nn.Softmax()
)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
def update(self, state, action):
y_pred = self.model(state)
loss = self.criterion(y_pred, action)
# print(torch.argmax(y_pred, axis=-1), action, loss)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return loss
def predict(self, s):
with torch.no_grad():
return self.model(torch.Tensor(s))
def weights_init(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0.0, 1)
model = NeuralNet()
#model.apply(weights_init)
print('len(states)', len(x_values))
i = 0
x_values = torch.from_numpy(x_values).float()
y_values = torch.from_numpy(np.array(y_values)).long()
for s in range(700000):
if i == 15:
i = 0
x = x_values[i:i+1]
y = y_values[i:i+1]
loss_value = model.update(x_values, y_values)
if s % 1000 == 0:
print('loss_value', loss_value)
i = i + 1
# Example
f = model.model(x)
proba = torch.softmax(f, dim=-1) # obtain the probability distribution
np.argmax(proba.detach().cpu().numpy()) # or np.argmax(f.detach().cpu().numpy()); in this case they are equivalent
2 - Consider that you want to get that "number" as a regression and not as a class. You are not looking for a probability distribution but for the value directly. In this case it is not very common, but if you want only positive values it is interesting to use the exponential as the activation. This condenses (-inf, inf) to (0, inf).
class NeuralNet(nn.Module):
class ExpActivation(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.exp(x)
class AcotatedPositiveNumber(nn.Module):
def __init__(self):
super().__init__()
self.max_value = 4
def forward(self, x):
return self.max_value * torch.sigmoid(x)
def __init__(self):
super(NeuralNet, self).__init__()
self.criterion = torch.nn.MSELoss()
self.model = torch.nn.Sequential(
torch.nn.Linear(input_size, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 1),
NeuralNet.ExpActivation()
)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
def update(self, state, action):
y_pred = self.model(state)
loss = self.criterion(y_pred, action.unsqueeze(-1))
# print(torch.round(y_pred.squeeze()).long(), action, loss)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return loss
def predict(self, s):
with torch.no_grad():
return self.model(torch.Tensor(s))
def weights_init(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0.0, 1)
model = NeuralNet()
#model.apply(weights_init)
print('len(states)', len(x_values))
i = 0
x_values = torch.from_numpy(x_values).float()
y_values = torch.from_numpy(np.array(y_values)).float()
for s in range(700000):
if i == 15:
i = 0
x = x_values[i:i+1]
y = y_values[i:i+1]
loss_value = model.update(x_values, y_values)
if s % 1000 == 0:
print('loss_value', loss_value)
i = i + 1
# Example
regression_value = model.model(x)
regression_value.detach().cpu().numpy()
| https://stackoverflow.com/questions/63043722/ |
How to avoid iterating over Dataloader while resuming training in Huggingface Trainer class? | I'm currently using Huggingface's Trainer class to train Distillbert for a regression problem using a custom loss function. I'm using their checkpoints to resume training due to the ephemeral nature of compute / unexpected errors.
The issue I'm facing is that each time I resume training from a checkpoint as per their Trainer class via the model_path in the Trainer.train() method, I noticed that the class iterates over the dataloader until it reaches the iteration count as saved in the checkpoint (see the lines from the Trainer class that match the issue).
This might usually not be a issue, but due to the nature of my dataloader's collate function and the size of the dataset, iterating for such a duration without any training is pretty expensive and slows down the overall training.
I planned on utilizing a custom sampler class, something along the lines of this, with a parameter to resume the indices from a given location, but that too seems quite a hack for the given problem.
What could be an alternative that I could try to save on this wasted compute cycles?
| Well, it looks like huggingface has provided a solution to this via the ignore_data_skip argument in TrainingArguments.
You would have to be careful using this flag, though. It will essentially be as if you're starting a new epoch from step 0, but with the optimizer/model state restored to whatever it was at the resume point.
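A minimal sketch of the flag in use (paths and the model/dataset variables are hypothetical; in newer transformers releases the train() argument is resume_from_checkpoint rather than model_path):
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./checkpoints",
    ignore_data_skip=True,  # don't iterate the dataloader up to the saved step
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train(model_path="./checkpoints/checkpoint-500")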
| https://stackoverflow.com/questions/63045229/ |
Correct way to register a parameter for model in Pytorch | I tried to define a simple model in Pytorch. The model computes negative log prob for a gaussian distribution:
import torch
import torch.nn as nn
class GaussianModel(nn.Module):
def __init__(self):
super(GaussianModel, self).__init__()
self.register_parameter('mean', nn.Parameter(torch.zeros(1),
requires_grad=True))
self.pdf = torch.distributions.Normal(self.state_dict()['mean'],
torch.tensor([1.0]))
def forward(self, x):
return -self.pdf.log_prob(x)
model = GaussianModel()
Then I tried to optimize the mean parameter:
optimizer = torch.optim.SGD(model.parameters(), lr=0.002)
for _ in range(5):
optimizer.zero_grad()
nll = model(torch.tensor([3.0], requires_grad=True))
nll.backward()
optimizer.step()
print('mean : ', model.state_dict()['mean'],
' - Negative Loglikelihood : ', nll.item())
But it seems the gradient is zero and mean does not change:
mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785
mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785
mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785
mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785
mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785
Did I register and use the mean parameter correctly? can autograd compute the gradient for torch.distributions.Normal.log_prob or I should implement the backward() for the model?
| You're overcomplicating registering your parameter. You can just assign a new self.mean attribute to be an nn.Parameter and then use it like a tensor for the most part.
nn.Module overrides the __setattr__ method which is called every time you assign a new class attribute. One of the things it does is check to see if you assigned an nn.Parameter type, and if so, it adds it to the modules dictionary of registered parameters.
Because of this, the easiest way to register your parameter is as follows:
import torch
import torch.nn as nn
class GaussianModel(nn.Module):
def __init__(self):
super(GaussianModel, self).__init__()
self.mean = nn.Parameter(torch.zeros(1))
self.pdf = torch.distributions.Normal(self.mean, torch.tensor([1.0]))
def forward(self, x):
return -self.pdf.log_prob(x)
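One caveat: because self.pdf is built once in __init__, it captures the tensor it was given at that point. A variant I'd consider safer (my own assumption, not part of the answer above) builds the distribution inside forward so it always reads the live parameter:
import torch
import torch.nn as nn

class GaussianModel(nn.Module):
    def __init__(self):
        super(GaussianModel, self).__init__()
        self.mean = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # construct the distribution per call so it always sees the current mean
        pdf = torch.distributions.Normal(self.mean, torch.tensor([1.0]))
        return -pdf.log_prob(x)

model = GaussianModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.002)
for _ in range(5):
    optimizer.zero_grad()
    nll = model(torch.tensor([3.0]))
    nll.backward()
    optimizer.step()
    print('mean : ', model.mean.item(), ' - NLL : ', nll.item())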
| https://stackoverflow.com/questions/63047762/ |
Pytorch: all-but-one summation? | I am in the process of moving some message passing code from Numpy to Pytorch. I am unsure of how to do this single step of a much larger algorithm. Below is the simplest explanation of the step.
Given the following:
index = [[2,0,1], [2,2,0]]
value = [[0.1, 1.2, 2.3], [3.4, 4.5, 5.6]]
I would like to compute the "all-but-one" sum of messages to each index. Here is a graphical representation:
The answer I am looking for is:
ans = [[7.9, 5.6, 0], [4.6, 3.5, 1.2]]
The explanation is that, for example, index[0][0] points at node 2. The sum of all messages at node 2 is 0.1+3.4+4.5=8. However we want to exclude the message we are considering (value[0][0]=0.1), so we obtain ans[0][0]=7.9. If only a single index points at a node then the answer is 0 (eg. node 1 with ans[0][2]).
I would be happy with computing the sums for each node, and then subtracting out the individual messages. I am aware that this can lead to loss of significance, but I believe that my use case is very well-behaved (eg. no floating point infinities).
I can also provide the minimal numpy code, but even the minimal example is a bit long. I have looked at pytorch's scatter and gather commands, but I don't think that they're appropriate here.
| After working with pytorch for a while longer, and writing some code for other situations, I realized that there is a much more efficient solution that I hadn't considered. So I am pasting it here for anyone else who comes after me:
import torch
index = [[2, 0, 1], [2, 2, 0]]
value = [[0.1, 1.2, 2.3], [3.4, 4.5, 5.6]]
# convert to tensor
index_tensor = torch.tensor(index)
value_tensor = torch.tensor(value)
num_nodes = 3
totals = torch.zeros(num_nodes)
totals = totals.index_add_(0, index_tensor.flatten(), value_tensor.flatten())
result = totals[index_tensor] - value_tensor
print(result)
It uses much less memory than the scatter_add solution given by Jodag. It avoids all the for loops given in the other solutions. Hooray for much faster code!
| https://stackoverflow.com/questions/63056076/ |
How to ignore and initialize Missing key(s) in state_dict | My saved state_dict does not contain all the layers that are in my model.
How can I ignore the Missing key(s) in state_dict error and initialize the remaining weights?
| This can be achieved by passing strict=False to load_state_dict.
load_state_dict(state_dict, strict=False)
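A minimal sketch of the call (the checkpoint path and model are placeholders); the layers missing from the checkpoint simply keep their current, e.g. freshly initialized, weights, and the returned named tuple tells you which keys were skipped:
import torch

state_dict = torch.load("checkpoint.pth")  # hypothetical path
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(missing)     # keys present in the model but absent from the checkpoint
print(unexpected)  # keys present in the checkpoint but absent from the model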
Documentation
| https://stackoverflow.com/questions/63057468/ |
Pytorch C++ RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select | I am calculating word similarity using torch::Embedding module by pretrained wordvector (glove.300d) on Ubuntu 18.04LTS PyTorch C++ (1.5.1, CUDA 10.1). I believe I have moved everything I can to the GPU, but when I execute it, it still says (full error log on the end of the question):
Expected object of device type cuda but got device type cpu for
argument #1 'self' in call to _th_index_select
(checked_dense_tensor_unwrap at /pytorch/aten/src/ATen/Utils.h:72)
I have checked my model initialization method in main.cpp, and it is okay if I only do initialization.
SimilarityModel simiModel(args, 400000, 300);
simiModel.to(device);
//model forward
torch::Tensor data = ids.index({Slice(i*batch_size, (i+1)*batch_size), Slice()}).to(torch::kInt64).to(device); //take a batch
tie(score, indice) = simiModel.forward(data); //forward and transfer score, indice to cpu for further calculation
and this is how I define SimilarityModel in Similarity.h:
class SimilarityModel : public torch::nn::Module {
public:
int64_t topk; // num of top words;
Dictionary dict;
int64_t vocab_size;
int64_t embedding_dim;
torch::nn::Embedding embedding{nullptr};
vector<vector<float> > vec_embed;
SimilarityModel(unordered_map<string, string> args, int64_t vocab_size, int64_t embed_dim);
tuple<torch::Tensor, torch::Tensor> forward(torch::Tensor x);
};
At the same time I have done embedding initialization in SimilarityModel function in Similarity.cpp:
SimilarityModel::SimilarityModel(unordered_map<string, string> args, int64_t vocab_size, int64_t embed_dim)
:embedding(vocab_size, embed_dim) { //Embedding initialize
this->topk = stoi(args["topk"]);
vector<vector<float> > pre_embed;
tie(pre_embed, dict) = loadwordvec(args); //load pretrained wordvec from txt file
this->vocab_size = int64_t(dict.size());
this->embedding_dim = int64_t(pre_embed[0].size());
this->vec_embed = pre_embed;
this->dict = dict;
vector<float> temp_embed;
for(const auto& i : pre_embed) //faltten to 1-d
for(const auto& j : i)
temp_embed.push_back(j);
torch::Tensor data = torch::from_blob(temp_embed.data(), {this->vocab_size, this->embedding_dim}, torch::TensorOptions().dtype(torch::kFloat32)).clone(); //vector to tensor
register_module("embedding", embedding);
this->embedding = embedding.from_pretrained(data, torch::nn::EmbeddingFromPretrainedOptions().freeze(true));
}
and forward function in Similarity.cpp:
tuple<torch::Tensor, torch::Tensor> SimilarityModel::forward(torch::Tensor x) {
auto cuda_available = torch::cuda::is_available(); //copy to gpu
torch::Device device(cuda_available ? torch::kCUDA : torch::kCPU);
torch::Tensor wordvec;
wordvec = this->embedding->forward(x).to(device); //python:embedding(x)
torch::Tensor similarity_score = wordvec.matmul(this->embedding->weight.transpose(0, 1)).to(device);
torch::Tensor score, indice;
tie(score, indice) = similarity_score.topk(this->topk, -1, true, true); //Tensor.topk(int64_t k, int64_t dim, bool largest = true, bool sorted = true)
score = score.to(device);
indice = indice.to(device);
score.slice(1, 1, score.size(1)); //Tensor.slice(int64_t dim, int64_t start, int64_t end, int64_t step)
indice.slice(1, 1, indice.size(1));
return {score.cpu(), indice.cpu()}; //transfer to cpu for further calculation
}
As for intermediate variables in forward() have been put to GPU too. However, I totally have no idea which one is left in CPU, and the error log does not help so much. I have tried the method in Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select to do SimilarityModel().to(device), but that does not work. I am still having a hard time reading this error log and would like some instructions on how to debug such questions.
Error log:
terminate called after throwing an instance of 'c10::Error'
what(): Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select (checked_dense_tensor_unwrap at /pytorch/aten/src/ATen/Utils.h:72)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fb566a27536 in /home/switchsyj/Downloads/libtorch/lib/libc10.so)
frame #1: <unknown function> + 0x101a80b (0x7fb520fa380b in /home/switchsyj/Downloads/libtorch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x105009c (0x7fb520fd909c in /home/switchsyj/Downloads/libtorch/lib/libtorch_cuda.so)
frame #3: <unknown function> + 0xf9d76b (0x7fb520f2676b in /home/switchsyj/Downloads/libtorch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x10c44e3 (0x7fb558d224e3 in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #5: at::native::embedding(at::Tensor const&, at::Tensor const&, long, bool, bool) + 0x2e2 (0x7fb558870712 in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x114ef9d (0x7fb558dacf9d in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x1187b4d (0x7fb558de5b4d in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x2bfe42f (0x7fb55a85c42f in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x1187b4d (0x7fb558de5b4d in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x32b63a9 (0x7fb55af143a9 in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #11: torch::nn::EmbeddingImpl::forward(at::Tensor const&) + 0x71 (0x7fb55af127b1 in /home/switchsyj/Downloads/libtorch/lib/libtorch_cpu.so)
frame #12: SimilarityModel::forward(at::Tensor) + 0xa9 (0x55c96b8e5793 in ./demo)
frame #13: main + 0xaba (0x55c96b8bfe5c in ./demo)
frame #14: __libc_start_main + 0xe7 (0x7fb51edf5b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #15: _start + 0x2a (0x55c96b8bd74a in ./demo)
Aborted (core dumped)
| Based on the error message, one of the following two Tensors is not on the GPU when you're running SimilarityModel::forward():
this->embedding->weight
x
Given that the error points to the argument #1, I'd say that weight is the one on the CPU.
Here's the native embedding implementation that makes the index_select call:
Tensor embedding(const Tensor & weight, const Tensor & indices,
int64_t padding_idx, bool scale_grad_by_freq, bool sparse) {
auto indices_arg = TensorArg(indices, "indices", 1);
checkScalarType("embedding", indices_arg, kLong);
// TODO: use tensor.index() after improving perf
if (indices.dim() == 1) {
return weight.index_select(0, indices);
}
auto size = indices.sizes().vec();
for (auto d : weight.sizes().slice(1)) {
size.push_back(d);
}
return weight.index_select(0, indices.reshape(-1)).view(size);
}
First, try to directly move the weight to the GPU. If it works, it means that when you called TORCH_MODULE(SimilarityModel) and moved the model to the device, it should have worked too. Remember that you have to change the name to SimilarityModelImpl (Name+Impl) in this case. Otherwise, it won't work either.
| https://stackoverflow.com/questions/63057577/ |
lstm:Input batch size 100 doesn't match hidden[0] batch size 1 | I am trying to add an LSTM layer to my previously working AI model.
After adding it, I get the following error when training on a batch. Earlier, without the LSTM, there was no error and it worked fine.
Input batch size 100 doesn't match hidden[0] batch size 1.
I am using nn.LSTMCell
Can anyone please check whether I am missing some parameter when initializing my LSTMCell so that it can take batch inputs as well?
Below is my code...
import os
import time
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import StandardScaler
import torch
import torch.nn as nn
import torch.nn.functional as F
from random import random as rndm
from torch.autograd import Variable
from collections import deque
os.chdir("C:\\Users\\granthjain\\Desktop\\startup_code")
torch.set_default_tensor_type('torch.DoubleTensor')
class ReplayBuffer(object):
def __init__(self, max_size=1e6):
self.storage = []
self.max_size = max_size
self.ptr = 0
def add(self, transition):
if len(self.storage) == self.max_size:
self.storage[int(self.ptr)] = transition
else:
self.storage.append(transition)
self.ptr = (self.ptr + 1) % self.max_size
def sample(self, batch_size):
ind = np.random.randint(0, self.ptr, size=batch_size)
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = [], [], [], [], []
for i in ind:
state, next_state, action, reward, done = self.storage[i]
if state is None:
continue
elif next_state is None:
continue
elif action is None:
continue
elif reward is None:
continue
elif done is None:
continue
batch_states.append(np.array(state, copy=False))
batch_next_states.append(np.array(next_state, copy=False))
batch_actions.append(np.array(action, copy=False))
batch_rewards.append(np.array(reward, copy=False))
batch_dones.append(np.array(done, copy=False))
return np.array(batch_states,dtype=object).astype(float), np.array(batch_next_states,dtype=object).astype(float), np.array(batch_actions,dtype=object).astype(float), np.array(batch_rewards,dtype=object).astype(float), np.array(batch_dones,dtype=object).astype(float)
class Actor(nn.Module):
def __init__(self, state_dim, action_dim, max_action):
super(Actor, self).__init__()
self.lstm = nn.LSTMCell(state_dim, 256)
self.layer_1 = nn.Linear(256, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.hx = torch.zeros(1,256)
self.cx = torch.zeros(1,256)
self.max_action = max_action
def forward(self, x):
self.hx, self.cx = self.lstm(x, (self.hx, self.cx))
x = F.relu(self.layer_1(self.hx))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
return x
class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super(Critic, self).__init__()
# Defining the first Critic neural network
self.lstm1 = nn.LSTMCell(state_dim + action_dim, 256)
self.layer_1 = nn.Linear(256, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, 1)
# Defining the second Critic neural network
self.lstm2 = nn.LSTMCell(state_dim + action_dim, 256)
self.layer_4 = nn.Linear(256, 400)
self.layer_5 = nn.Linear(400, 300)
self.layer_6 = nn.Linear(300, 1)
self.hx1 = torch.zeros(1,256)
self.cx1 = torch.zeros(1,256)
self.hx2 = torch.zeros(1,256)
self.cx2 = torch.zeros(1,256)
def forward(self, x, u):
xu = torch.cat([x, u], 1)
# Forward-Propagation on the first Critic Neural Network
self.hx1,self.cx1 = self.lstm(xu, (self.hx1, self.cx1))
x1 = F.relu(self.layer_1(self.hx1))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
# Forward-Propagation on the second Critic Neural Network
self.hx2,self.cx2 = self.lstm(xu, (self.hx2, self.cx2))
x2 = F.relu(self.layer_4(self.hx2))
x2 = F.relu(self.layer_5(x2))
x2 = self.layer_6(x2)
return x1, x2
def Q1(self, x, u):
xu = torch.cat([x, u], 1)
self.hx1,self.cx1 = self.lstm(xu, (self.hx1, self.cx1))
x1 = F.relu(self.layer_1(self.hx1))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
return x1
# Selecting the device (CPU or GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Building the whole Training Process into a class
class TD3(object):
def __init__(self, state_dim, action_dim, max_action):
self.actor = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target.load_state_dict(self.actor.state_dict())
self.actor_optimizer = torch.optim.Adam(self.actor.parameters())
self.critic = Critic(state_dim, action_dim).to(device)
self.critic_target = Critic(state_dim, action_dim).to(device)
self.critic_target.load_state_dict(self.critic.state_dict())
self.critic_optimizer = torch.optim.Adam(self.critic.parameters())
self.max_action = max_action
def reset_hxcx(self):
self.actor.cx = torch.zeros(1,256)
self.actor.hx = torch.zeros(1,256)
self.actor_target.cx = torch.zeros(1,256)
self.actor_target.hx = torch.zeros(1,256)
self.critic.cx1 = torch.zeros(1,256)
self.critic.cx2 = torch.zeros(1,256)
self.critic.hx1 = torch.zeros(1,256)
self.critic.hx2 = torch.zeros(1,256)
self.critic_target.cx1 = torch.zeros(1,256)
self.critic_target.cx2 = torch.zeros(1,256)
self.critic_target.hx1 = torch.zeros(1,256)
self.critic_target.hx2 = torch.zeros(1,256)
def select_action(self, state):
print("state =", type(state))
return self.actor(state).cpu().data.numpy().flatten()
def train(self, replay_buffer, iterations, batch_size=50, discount=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5, policy_freq=2):
for it in range(iterations):
# Step 4: We sample a batch of transitions (s, s’, a, r) from the memory
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = replay_buffer.sample(batch_size)
batch_states=batch_states.astype(float)
batch_next_states=batch_next_states.astype(float)
batch_actions=batch_actions.astype(float)
batch_rewards=batch_rewards.astype(float)
batch_dones=batch_dones.astype(float)
state = torch.from_numpy(batch_states)
next_state = torch.from_numpy(batch_next_states)
action = torch.from_numpy(batch_actions)
reward = torch.from_numpy(batch_rewards)
done = torch.from_numpy(batch_dones)
# print("actor cx:",self.actor.cx)
# print("actor hx:",self.actor.hx)
# print("actor_target cx:",self.actor_target.cx)
# print("actor_target cx:",self.actor_target.cx)
# print("self.critic.cx1:",self.critic.cx1)
# print("self.critic.cx2",self.critic.cx2)
# print("self.critic.hx1:",self.critic.hx1)
# print("self.critic.hx2:",self.critic.hx2)
# print("self.critic_target.cx1:",self.critic_target.cx1)
# print("self.critic_target.hx1",self.critic_target.hx1)
# print("self.critic_target.cx2:",self.critic_target.cx2)
# print("self.critic_target.hx2:",self.critic_target.hx2)
# Step 5: From the next state s’, the Actor target plays the next action a’
next_action = self.actor_target(next_state)
# Step 6: We add Gaussian noise to this next action a’ and we clamp it in a range of values supported by the environment
noise = torch.Tensor(batch_actions).data.normal_(0, policy_noise).to(device)
noise = noise.clamp(-noise_clip, noise_clip)
next_action = (next_action + noise).clamp(-self.max_action, self.max_action)
# Step 7: The two Critic targets take each the couple (s’, a’) as input and return two Q-values Qt1(s’,a’) and Qt2(s’,a’) as outputs
target_Q1, target_Q2 = self.critic_target(next_state, next_action)
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2).double()
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
done = done.resize_((done.shape[0],1))
reward = reward.resize_((reward.shape[0],1))
target_Q = reward + ((1 - done) * discount * target_Q).detach()
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
current_Q1, current_Q2 = self.critic(state, action)
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
if it % policy_freq == 0:
actor_loss = -self.critic.Q1(state, self.actor(state)).mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Actor target by polyak averaging
for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Critic target by polyak averaging
for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Making a save method to save a trained model
def save(self, filename, directory):
torch.save(self.actor.state_dict(), '%s/%s_actor.pth' % (directory, filename))
torch.save(self.critic.state_dict(), '%s/%s_critic.pth' % (directory, filename))
# Making a load method to load a pre-trained model
def load(self, filename, directory):
self.actor.load_state_dict(torch.load('%s/%s_actor.pth' % (directory, filename)))
self.critic.load_state_dict(torch.load('%s/%s_critic.pth' % (directory, filename)))
#set the parameters
start_timesteps = 1e3 # Number of iterations/timesteps before which the model randomly chooses an action, and after which it starts to use the policy network
eval_freq = 5e1 # How often the evaluation step is performed (after how many timesteps)
max_timesteps = 5e3 # Total number of iterations/timesteps
save_models = True # Boolean checker whether or not to save the pre-trained model
expl_noise = 0.1 # Exploration noise - STD value of exploration Gaussian noise
batch_size = 100 # Size of the batch
discount = 0.99 # Discount factor gamma, used in the calculation of the total discounted reward
tau = 0.005 # Target network update rate
policy_noise = 0.2 # STD of Gaussian noise added to the actions for the exploration purposes
noise_clip = 0.5 # Maximum value of the Gaussian noise added to the actions (policy)
policy_freq = 2 # Number of iterations to wait before the policy network (Actor model) is updated
state_dim = 3
action_dim = 3
max_action = 1
idx = 0
class env1:
def __init__(self,state_dim,action_dim,data):
self.state_dim = state_dim
self.state = torch.zeros(self.state_dim)
self.state[state_dim-1]=1000.0
self.next_state = torch.zeros(self.state_dim)
self.next_state[state_dim-1] = 1000.0
self.action_dim = action_dim
self.data = data
self.idx = 0
self.count = 0
self._max_episode_steps = 200
self.state[1] = self.data[self.idx]
self.next_state[1] = self.data[self.idx]
def reset(self):
self.next_state = torch.zeros(self.state_dim)
self.next_state[state_dim-1]=1000.0
self.state = torch.zeros(self.state_dim)
self.state[state_dim-1]=1000.0
self.state[1] = self.data[self.idx]
self.next_state[1] = self.data[self.idx]
self.count = 0
ch = self.state[0]
cp = self.state[1]
cc = self.state[2]
st = torch.tensor([ch,cp,cc])
return st
def step(self,action):
done = False
act_t = torch.argmax(action)
self.idx += 1
if(act_t==0):
num_s = int(self.state[2]/self.state[1])
self.next_state[0] += num_s
self.next_state[2] = self.state[2]%self.state[1]
self.next_state[1] = self.data[self.idx]
elif(act_t==1):
self.next_state[1] = self.data[self.idx]
elif(act_t==2):
self.next_state[2] = self.state[2]+ self.state[1]*self.state[0]
self.next_state[0] = 0
self.next_state[1] = self.data[self.idx]
reward = self.next_state[2] - self.state[2] + self.next_state[1]*self.next_state[0] - self.state[1]*self.state[0] -1
self.state[0] = self.next_state[0]
self.state[1] = self.next_state[1]
self.state[2] = self.next_state[2]
ch = self.state[0]
cp = self.state[1]
cc = self.state[2]
st = torch.tensor([ch,cp,cc])
self.count = (self.count + 1)%100
if(self.count==0):
done = True
return st, reward, done
policy = TD3(state_dim, action_dim, max_action)
#Create the environment
data = pd.read_csv('PAGEIND.csv')
data = data['Close']
data = np.array(data).reshape(-1,1)
max_timesteps = data.shape[0]
sc = StandardScaler()
data = sc.fit_transform(data)
data = torch.DoubleTensor(data)
env = env1(state_dim,action_dim,data)
replay_buffer = ReplayBuffer()
#init training variables
total_timesteps = 0
timesteps_since_eval = 0
episode_num = 0
done = True
t0 = time.time()
# We start the main loop over 500,000 timesteps
while total_timesteps < max_timesteps:
# If the episode is done
if done:
# If we are not at the very beginning, we start the training process of the model
if total_timesteps != 0:
print("Total Timesteps: {} Episode Num: {} Reward: {}".format(total_timesteps, episode_num, episode_reward))
policy.train(replay_buffer, episode_timesteps, batch_size, discount, tau, policy_noise, noise_clip, policy_freq)
# When the training step is done, we reset the state of the environment
obs = env.reset()
policy.reset_hxcx()
# Set the Done to False
done = False
# Set rewards and episode timesteps to zero
episode_reward = 0
episode_timesteps = 0
episode_num += 1
# Before 1000 timesteps, we play random actions
if total_timesteps < 0.8*max_timesteps:
#random action
actn = torch.randn(action_dim)
action = torch.zeros(action_dim)
action[torch.argmax(actn)] = 1
else: # After 1000 timesteps, we switch to the model
action = policy.select_action(torch.tensor(obs))
# If the explore_noise parameter is not 0, we add noise to the action and we clip it
if expl_noise != 0:
print("policy action:",action)
actn = (action + torch.randn(action_dim))
action = torch.zeros(action_dim)
action[torch.argmax(actn)] = 1
# The agent performs the action in the environment, then reaches the next state and receives the reward
new_obs, reward, done = env.step(action)
# We check if the episode is done
done_bool = 0 if episode_timesteps + 1 == env._max_episode_steps else float(done)
# We increase the total reward
episode_reward += reward
# We store the new transition into the Experience Replay memory (ReplayBuffer)
replay_buffer.add((obs, new_obs, action, reward, done_bool))
# We update the state, the episode timestep, the total timesteps, and the timesteps since the evaluation of the policy
obs = new_obs
episode_timesteps += 1
total_timesteps += 1
timesteps_since_eval += 1
And below is the error message:
policy.train(replay_buffer, episode_timesteps, batch_size, discount, tau, policy_noise, noise_clip, policy_freq)
File "C:/Users/granthjain/Desktop/startup_code/td3_lstm_try.py", line 196, in train
next_action = self.actor_target(next_state)
File "C:\Users\granthjain\Anaconda3_1\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:/Users/granthjain/Desktop/startup_code/td3_lstm_try.py", line 79, in forward
self.hx, self.cx = self.lstm(x, (self.hx, self.cx))
File "C:\Users\granthjain\Anaconda3_1\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\granthjain\Anaconda3_1\lib\site-packages\torch\nn\modules\rnn.py", line 708, in forward
self.check_forward_hidden(input, hx[0], '[0]')
File "C:\Users\granthjain\Anaconda3_1\lib\site-packages\torch\nn\modules\rnn.py", line 532, in check_forward_hidden
input.size(0), hidden_label, hx.size(0)))
RuntimeError: Input batch size 100 doesn't match hidden[0] batch size 1
| If you initialize your cell state and hidden state with zeros, there is no need to provide the initialization at all; it will be provided for you by default (see the docs). However, should you decide to do it on your own, you should always take the batch size into account (it might be different on every iteration).
In the end, both the cell and hidden state of nn.LSTMCell have shape (batch_size, hidden_size), whereas you initialize them once in the constructor with shape (1, hidden_size). You have to move the initialization into forward() and, on each call, take the batch size from x, which is just x.shape[0], as in the sketch below.
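A minimal sketch of that change, applied to the Actor (the Critic's two LSTM cells need the same treatment). Note that creating fresh zero states on every call discards recurrence between timesteps, which may or may not be what your episode logic needs:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action):
        super(Actor, self).__init__()
        self.lstm = nn.LSTMCell(state_dim, 256)
        self.layer_1 = nn.Linear(256, 400)
        self.layer_2 = nn.Linear(400, 300)
        self.layer_3 = nn.Linear(300, action_dim)
        self.max_action = max_action

    def forward(self, x):
        # hidden/cell states sized to the batch of this particular call
        # (equivalently, call self.lstm(x) with no states to get the zero default)
        hx = torch.zeros(x.shape[0], 256, dtype=x.dtype, device=x.device)
        cx = torch.zeros(x.shape[0], 256, dtype=x.dtype, device=x.device)
        hx, cx = self.lstm(x, (hx, cx))
        x = F.relu(self.layer_1(hx))
        x = F.relu(self.layer_2(x))
        return self.max_action * torch.tanh(self.layer_3(x))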
As a side note, you are using nn.LSTMCell, which computes just a single time step. Using it only once per forward pass does not make much sense, so make sure that is really what you want. Maybe use nn.LSTM instead?
| https://stackoverflow.com/questions/63063218/ |
Single shot multi-dimension indexing in torch - perhaps with index_select or gather? | I am performing a multi-index re-arrangement of a matrix based upon its correspondence data. Right now, I am doing this with a pair of index_select calls, but this is very memory inefficient (n^2 in terms of memory usage) and not exactly ideal in terms of computational efficiency either. Is there some way that I can boil my operation down into a single .gather or .index_select call?
What I essentially want to do is when given a source array of shape (I,J,K), and an array of indices of shape (I,J,2), produce a result which meets the condition:
result[i][j][:] = source[idx[i][j][0]] [idx[i][j][1]] [:]
Here's a runnable toy example of how I'm doing things right now:
source = torch.tensor([[1,2,3], [4,5,6], [7,8,9], [10,11,12]])
indices = torch.tensor([[[2,2],[3,1],[0,2]],[[0,2],[0,1],[0,2]],[[0,2],[0,1],[0,2]],[[0,2],[0,1],[0,2]]])
ax1 = torch.index_select(source,0,indices[:,:,0].flatten())
ax2 = torch.index_select(ax1, 1, indices[:,:,1].flatten())
result = ax2.diagonal().reshape(indices.shape[0], indices.shape[1])
This approach works for me only because my images are rather small, so they fit into memory even with the diagonalization issue. Regardless, I am producing a pretty massive amount of data that doesn't need to exist. Furthermore, if K becomes large, this issue gets much worse. Perhaps I'm just missing something obvious in the documentation, but I feel like this is a problem somebody else must have run into before and can help me out with!
| You already have your indices in nice form for integer array indexing so we can simply do
result = source[indices[..., 0], indices[..., 1], ...]
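For instance, with the toy tensors from the question, this produces the (4, 3) result directly, with no diagonal trick needed:
import torch

source = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
indices = torch.tensor([[[2, 2], [3, 1], [0, 2]],
                        [[0, 2], [0, 1], [0, 2]],
                        [[0, 2], [0, 1], [0, 2]],
                        [[0, 2], [0, 1], [0, 2]]])

result = source[indices[..., 0], indices[..., 1], ...]
print(result)
# tensor([[ 9, 11,  3],
#         [ 3,  2,  3],
#         [ 3,  2,  3],
#         [ 3,  2,  3]])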
| https://stackoverflow.com/questions/63065497/ |
pytorch RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | I am trying to train an Actor-Critic model with an LSTM in both the actor and the critic.
I am new to all this and cannot understand why "RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)" is coming up.
I am forward-propagating through the actor and getting the error.
Below are my code and the error message. I am using PyTorch version 0.4.1.
Can someone please help check what is wrong with this code?
import os
import time
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import StandardScaler
import torch
import torch.nn as nn
import torch.nn.functional as F
from random import random as rndm
from torch.autograd import Variable
from collections import deque
torch.set_default_tensor_type('torch.DoubleTensor')
class Actor(nn.Module):
def __init__(self, state_dim, action_dim, max_action):
super(Actor, self).__init__()
self.lstm = nn.LSTMCell(state_dim, 256)
self.layer_1 = nn.Linear(256, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.hx = torch.zeros(1,256)
self.cx = torch.zeros(1,256)
self.max_action = max_action
def forward(self, x):
self.hx, self.cx = self.lstm(x, (self.hx, self.cx))
x = F.relu(self.layer_1(self.hx))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
return x
state_dim = 3
action_dim = 3
max_action = 1
policy = Actor(state_dim, action_dim, max_action)
s = torch.tensor([20,20,100])
next_action = policy(s)
and the error message is :
next_action = policy(s)
Traceback (most recent call last):
File "<ipython-input-20-de717f0ad3d2>", line 1, in <module>
next_action = policy(s)
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-4-aed4daf511cb>", line 14, in forward
self.hx, self.cx = self.lstm(x, (self.hx, self.cx))
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 704, in forward
self.check_forward_input(input)
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 523, in check_forward_input
if input.size(1) != self.input_size:
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
| Got it.
The input to the LSTM layer has a different shape than required: nn.LSTMCell expects a 2-D input of shape (batch, input_size), but a 1-D tensor of shape (3,) is being passed. See the docs:
https://pytorch.org/docs/master/generated/torch.nn.LSTMCell.html
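A minimal sketch of the fix: add a batch dimension (and make the tensor floating point) before calling the policy:
s = torch.tensor([20.0, 20.0, 100.0]).unsqueeze(0)  # shape (1, 3): batch of 1
next_action = policy(s)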
| https://stackoverflow.com/questions/63072770/ |
PyTorch: How to parallelize over multiple GPUs using multiprocessing.pool | I have the following code which I am trying to parallelize over multiple GPUs in PyTorch:
import numpy as np
import torch
from torch.multiprocessing import Pool
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
def X_power_func(j):
X_power = X**j
return X_power
if __name__ == '__main__':
with Pool(processes = 2) as p: # Parallelizing over 2 GPUs
results = p.map(X_power_func, range(4))
results
But when I ran the code, I am getting this error:
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-35-6529ab6dac60>", line 11, in X_power_func
X_power = X**j
RuntimeError: CUDA error: initialization error
"""
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-35-6529ab6dac60> in <module>()
14 if __name__ == '__main__':
15 with Pool(processes = 1) as p:
---> 16 results = p.map(X_power_func, range(8))
17
18 results
1 frames
/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
642 return self._value
643 else:
--> 644 raise self._value
645
646 def _set(self, i, obj):
RuntimeError: CUDA error: initialization error
Where have I gone wrong? Any help would really be appreciated.
| I think the usual approach is to call model.share_memory() once before multiprocessing, assuming you have a model which subclasses nn.Module. For tensors, it should be X.share_memory_(). Unfortunately, I had trouble getting that to work with your code: it hangs (without errors) if X.share_memory_() is called before calling pool.map. I'm not sure if the reason is that X is a global variable which is not passed as one of the arguments in map.
What does work is this:
X = torch.DoubleTensor(X)
def X_power_func(j):
X_power = X.cuda()**j
return X_power
Btw: https://github.com/pytorch/pytorch/issues/15734 mentions that "CUDA API must not be initialized before you fork" (this is likely the issue you were seeing).
Also, per https://github.com/pytorch/pytorch/issues/17680, if using spawn in Jupyter notebooks, "the spawn method will run everything in your notebook top-level" (likely the issue I was seeing when my code was hanging in a notebook). In short, I couldn't get either fork or spawn to work, except using the sequence above (which doesn't use CUDA until it's in the forked process).
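Putting that together, a minimal sketch of the full pattern; moving results back to the CPU before returning is an assumption on my part, to avoid passing CUDA tensors between processes:
import numpy as np
import torch
from torch.multiprocessing import Pool

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)  # stays on the CPU in the parent process

def X_power_func(j):
    # CUDA is first touched here, inside the worker process
    return (X.cuda() ** j).cpu()

if __name__ == '__main__':
    with Pool(processes=2) as p:
        results = p.map(X_power_func, range(4))
    print(results)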
| https://stackoverflow.com/questions/63075594/ |
numpy array type not supported? | I'm trying to copy a model I was able to follow and run through a tutorial, but this time with my own data.
I was able to convert my own MRI images to numpy arrays in the same dimensions as the arrays the tutorial data is.
I tried replacing the numpy arrays in my tutorial with my own arrays and writing my own fictional csv file for normal or abnormal (case, not case).
However when I run it, I get:
(Pytorch) C:\Users\GlaDOS\PythonProjects\dicomnpy>python train.py -t acl -p sagittal --epochs=10 --prefix_name hue
Traceback (most recent call last):
File "train.py", line 277, in <module>
run(args)
File "train.py", line 214, in run
mrnet, train_loader, epoch, num_epochs, optimizer, writer, current_lr, log_every)
File "train.py", line 34, in train_model
for i, (image, label, weight) in enumerate(train_loader):
File "C:\Users\GlaDOS\anaconda3\envs\Pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\Users\GlaDOS\anaconda3\envs\Pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\GlaDOS\anaconda3\envs\Pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\GlaDOS\anaconda3\envs\Pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\GlaDOS\PythonProjects\dicomnpy\dataloader.py", line 56, in __getitem__
array = self.transform(array)
File "c:\users\glados\src\torchsample\torchsample\transforms\tensor_transforms.py", line 32, in __call__
inputs = transform(*inputs)
File "C:\Users\GlaDOS\anaconda3\envs\Pytorch\lib\site-packages\torchvision\transforms\transforms.py", line 313, in __call__
return self.lambd(img)
File "train.py", line 167, in <lambda>
transforms.Lambda(lambda x: torch.Tensor(x)),
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
Now I'm wondering if this error means I somehow didn't convert my MRIs to the "correct" numpy array type? And if so, how do I go about changing them to the correct type?
| You can redefine the variable with astype. Note that numpy.uint16 itself is not in the list of supported dtypes from the error message, so cast to one that is, e.g. np.int32 (which holds the full uint16 range) or np.float32:
your_array = your_array.astype(np.int32)
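For example, cast inside your Dataset's __getitem__ before the transforms run (load_volume is a hypothetical stand-in for however your uint16 array is produced):
def __getitem__(self, idx):
    array = self.load_volume(idx)   # hypothetical loader returning a uint16 array
    array = array.astype(np.int32)  # cast to a supported dtype before transforming
    return self.transform(array)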
| https://stackoverflow.com/questions/63075914/ |
How to create a 1D sparse tensors from given list of indices and values? | I have a list of indices and values. I want to create a sparse tensor of size 30000 from this indices and values as follows.
indices = torch.LongTensor([1,3,4,6])
values = torch.FloatTensor([1,1,1,1])
So, I want to build a 30k dimensional sparse tensor in which the indices [1,3,4,6] are ones and the rest are zeros. How can I do that?
I want to store the sequences of such sparse tensors efficiently.
| In general the indices tensor needs to have shape (sparse_dim, nnz) where nnz is the number of non-zero entries and sparse_dim is the number of dimensions for your sparse tensor.
In your case nnz = 4 and sparse_dim = 1 since your desired tensor is 1D. All we need to do to make your indices work is to insert a unitary dimension at the front of indices to make it shape (1, 4).
t = torch.sparse_coo_tensor(indices.unsqueeze(0), values, (30000,))
or equivalently
t = torch.sparse.FloatTensor(indices.unsqueeze(0), values, (30000,))
Keep in mind only a limited number of operations are supported on sparse tensors. To convert a tensor back to it's dense (inefficient) representation you can use the to_dense method
t_dense = t.to_dense()
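A quick check that the layout matches the intent:
print(t_dense[:8])
# tensor([0., 1., 0., 1., 1., 0., 1., 0.])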
| https://stackoverflow.com/questions/63076260/ |
PyTorch Average Accuracy after each epoch | I am trying to calculate the accuracy of the model after the end of each epoch. After each epoch I would like to calculate the accuracy over the previous epoch. The model only seems to print the same value as the mean test error.
model.eval()
for images, paths in tqdm(loader_test):
images = images.to(device)
targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B
targets = targets.float().to(device)
# forward pass:
output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap)
preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B)
# logging:
loss = torch.mean((preds - targets)**2)
count_error = torch.abs(preds - targets).mean()
mean_test_error += count_error
writer.add_scalar('test_loss', loss.item(), global_step=global_step)
writer.add_scalar('test_count_error', count_error.item(), global_step=global_step)
global_step += 1
average_accuracy = 0
mean_test_error = mean_test_error / len(loader_test)
writer.add_scalar('mean_test_error', mean_test_error.item(), global_step=global_step)
average_accuracy += mean_test_error
average_accuracy = average_accuracy /len(loader_test)
print("Average accuracy: %f" % average_accuracy)
print("Test count error: %f" % mean_test_error)
if mean_test_error < best_test_error:
best_test_error = mean_test_error
torch.save({'state_dict':model.state_dict(),
'optimizer_state_dict':optimizer.state_dict(),
'globalStep':global_step,
'train_paths':dataset_train.files,
'test_paths':dataset_test.files},checkpoint_path)
| IF your model is a classifier, calculating accuracy follows:
acc = (pred.argmax(dim=1) == target).float().mean()
Where:
pred.shape = (batch_size, n_classes)
target.shape = (batch_size)
A similar question was asked recently: calculate accuracy for each class using CNN and pytorch
However, you are using MSE loss to train your network. Which is a regression loss. For classification problems CrossEntropyLoss is the GO TO option: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
IF you are solving a regression task, then usually accuracy is defined by a threshold: predictions that are below a certain error from the ground truth count as correct, the others don't. In this case the calculation is the following:
errors = (pred - target) ** 2 # Squared error
acc = (errors < threshold).float().mean()
error = errors.mean()
Where:
pred.shape = (batch_size, channels, ...)
target.shape = (batch_size, channels, ...)
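Applied to your loop (reusing your loader_test, metadata and model), a minimal sketch of accumulating an epoch-level accuracy; threshold is a hypothetical tolerance you would need to pick for your counting task:
threshold = 1.0  # hypothetical tolerance on the squared count error
total_correct = 0
total_samples = 0
for images, paths in loader_test:
    images = images.to(device)
    targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths])
    targets = targets.float().to(device)
    preds = model(images).sum(dim=[1, 2, 3])
    errors = (preds - targets) ** 2
    total_correct += (errors < threshold).float().sum().item()
    total_samples += targets.shape[0]
epoch_accuracy = total_correct / total_samples
print("Epoch accuracy: %f" % epoch_accuracy)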
Was it something like that you were looking for?
| https://stackoverflow.com/questions/63083439/ |
Saving model in pytorch and keras | I have trained a model with Keras and saved it with the help of PyTorch. Will it cause any problems in the future? As far as I know, the only difference between them is that Keras saves its model's weights as doubles while PyTorch saves its weights as floats.
| You can convert your model to double by doing
model.double()
Note that after this, you will need your input to be DoubleTensor.
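For example:
model = model.double()
x = x.double()      # inputs must match the model's dtype
output = model(x)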
| https://stackoverflow.com/questions/63086902/ |
accuracy is not increasing in classification images | I am trying to implement image classification with a Bayesian CNN using dropout.
I have defined two classes:
with dropout for the training phase
without dropout for the test phase (no dropout at test time? to be confirmed, please)
When I ran the program I noticed that the train/test accuracy remains stable; it doesn't increase. I don't see what the problem is.
I don't know if it's because of the convolution and pooling layer parameters or something else. Any ideas, please?
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5, padding=2)
self.fc1 = nn.Linear(16 * 8 * 8, 1024)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 192 * 8 * 8)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# Lenet with MCDO
class Net_MCDO(nn.Module):
def __init__(self):
super(Net_MCDO, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(16, 192, 5, padding=2)
self.fc1 = nn.Linear(16 * 8 * 8, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.dropout = nn.Dropout(p=0.3)
def forward(self, x):
x = self.pool(self.dropout(self.conv1(x)))
x = self.pool(self.dropout(self.conv2(x)))
x = x.view(-1, 192 * 8 * 8)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(self.dropout(x)))
x = F.softmax(self.fc3(self.dropout(x)),dim=1)
return x
net=Net()
mcdo=Net_MCDO()
CE = nn.CrossEntropyLoss()
learning_rate=0.001
optimizer=optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
epoch_num = 30
train_accuracies=np.zeros(epoch_num)
test_accuracies=np.zeros(epoch_num)
for epoch in range(epoch_num):
average_loss = 0.0
total=0
success=0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
inputs, labels = Variable(inputs), Variable(labels)
optimizer.zero_grad()
outputs = mcdo(inputs)
loss=CE(outputs, labels)
loss.backward()
optimizer.step()
average_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
success += (predicted==labels.data).sum()
train_accuracy = 100.0*success/total
succes=0
total=0
for (inputs, labels) in testloader:
inputs, labels = Variable(inputs), Variable(labels)
outputs = net(inputs)
_,predicted = torch.max(outputs.data, 1)
total += labels.size(0)
success += (predicted==labels.data).sum()
test_accuracy = 100.0*success/total
print(u"epoch{}, average_loss{}, train_accuracy{},
test_accuracy{}".format(
epoch,
average_loss/n_batches,
train_accuracy,
100*success/total
))
#save
train_accuracies[epoch] = train_accuracy
test_accuracies[epoch] = 100.0*success/total
plt.plot(np.arange(1, epoch_num+1), train_accuracies)
plt.plot(np.arange(1, epoch_num+1), test_accuracies)
plt.show()
| Pytorch merges Softmax inside CrossEntropyLoss for numerical stability (and better training), so you should remove the softmax layer from your models (check the documentation here: https://pytorch.org/docs/stable/nn.html#crossentropyloss). Keeping the Softmax layer in your model will lead to slower training and possibly worse metrics; that is because you are squashing the gradient twice, so the weight update is a lot less significant.
Change your code to:
class Net_MCDO(nn.Module):
def __init__(self):
super(Net_MCDO, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5, padding=2) # in/out channels fixed to match conv1's output and fc1's input
self.fc1 = nn.Linear(16 * 8 * 8, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.dropout = nn.Dropout(p=0.3)
def forward(self, x):
x = self.pool(F.relu(self.dropout(self.conv1(x)))) # recommended to add the relu
x = self.pool(F.relu(self.dropout(self.conv2(x)))) # recommended to add the relu
x = x.view(-1, 16 * 8 * 8) # 16 channels at 8x8 after two 2x2 poolings, matching fc1
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(self.dropout(x)))
x = self.fc3(self.dropout(x)) # no activation function needed for the last layer
return x
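Since CrossEntropyLoss applies log-softmax internally, the network should output raw logits. A quick sanity check of the equivalence:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))

ce = nn.CrossEntropyLoss()(logits, labels)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(ce, nll))  # True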
Furthermore, I would recommend using an activation function, such as ReLU(), after every conv or linear layer. Otherwise you are just stacking linear operations that could be learned by one single layer.
I hope that helps =)
| https://stackoverflow.com/questions/63087017/ |
Question on restoring training after loading model | Having trained for 24 hours, the training process saved the model files via torch.save. There was a power-off or another issue that caused the process to exit. Normally, we can load the model and continue training from the last step.
Should we not also load the states of the optimizers (Adam, etc.)? Is it necessary?
| Yes, you can load the model from the last step and retrain it from that very step.
If you want to use it only for inference, you can save just the model's state_dict as
torch.save(model.state_dict(), PATH)
And load it as
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
However, for your concern you need to save the optimizer state dict as well. For that purpose, you need to save it as
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
and load the model for further training as:
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
It is necessary to save the optimizer state dictionary, since it contains buffers and parameters that are updated as the model trains: for Adam, for example, the running first- and second-moment estimates of the gradients. Without it, training resumes as if the optimizer were freshly initialized.
| https://stackoverflow.com/questions/63089129/ |
is binary cross entropy an additive function? | I am trying to train a machine learning model where the loss function is binary cross entropy. Because of GPU limitations I can only do a batch size of 4, and I'm having a lot of spikes in the loss graph. So I'm thinking of back-propagating after some predefined batch size (>4): I'll do 10 iterations of batch size 4, store the losses, and after the 10th iteration add the losses and back-propagate. Will that be similar to a batch size of 40?
TL;DR
Is f(a+b) = f(a) + f(b) true for binary cross entropy?
| f(a+b) = f(a) + f(b) doesn't seem to be what you're after. This would imply that BCELoss is additive which it clearly isn't. I think what you really care about is if for some index i
# false
f(x, y) == f(x[:i], y[:i]) + f([i:], y[i:])
is true?
The short answer is no, because you're missing some scale factors. What you probably want is the following identity
# true
f(x, y) == (i / b) * f(x[:i], y[:i]) + (1.0 - i / b) * f(x[i:], y[i:])
where b is the total batch size.
This identity is used as motivation for the gradient accumulation method (see below). Also, this identity applies to any objective function which returns an average loss across each batch element, not just BCE.
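You can sanity-check the identity numerically:
import torch
import torch.nn as nn

torch.manual_seed(0)
b, i = 8, 3
x = torch.rand(b, 4).clamp(0.01, 0.99)   # predictions in (0, 1)
y = torch.randint(0, 2, (b, 4)).float()  # binary targets
f = nn.BCELoss()

lhs = f(x, y)
rhs = (i / b) * f(x[:i], y[:i]) + (1.0 - i / b) * f(x[i:], y[i:])
print(torch.allclose(lhs, rhs))  # True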
Caveat/Pitfall: Keep in mind that batch norm will not behave exactly the same when using this approach since it updates its internal statistics based on batch size during the forward pass.
We can actually do a little better memory-wise than just computing the loss as a sum followed by backpropagation. Instead we can compute the gradient of each component in the equivalent sum individually and allow the gradients to accumulate. To better explain I'll give some examples of equivalent operations
Consider the following model
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
def __init__(self):
super().__init__()
num_outputs = 5
# assume input shape is 10x10
self.conv_layer = nn.Conv2d(3, 10, 3, 1, 1)
self.fc_layer = nn.Linear(10*5*5, num_outputs)
def forward(self, x):
x = self.conv_layer(x)
x = F.max_pool2d(x, 2, 2, 0, 1, False, False)
x = F.relu(x)
x = self.fc_layer(x.flatten(start_dim=1))
x = torch.sigmoid(x) # or omit this and use BCEWithLogitsLoss instead of BCELoss
return x
# to ensure same results for this example
torch.manual_seed(0)
model = MyModel()
# the examples will work as long as the objective averages across batch elements
objective = nn.BCELoss()
# doesn't matter what type of optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
and lets say our data and targets for a single batch are
torch.manual_seed(1) # to ensure same results for this example
batch_size = 32
input_data = torch.randn((batch_size, 3, 10, 10))
targets = torch.randint(0, 2, (batch_size, 5)).float()  # 5 matches num_outputs; randint's high bound is exclusive
Full batch
The body of our training loop for an entire batch may look something like this
# entire batch
output = model(input_data)
loss = objective(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_value = loss.item()
print("Loss value: ", loss_value)
print("Model checksum: ", sum([p.sum().item() for p in model.parameters()]))
Weighted sum of loss on sub-batches
We could have computed this using the sum of multiple loss functions using
# This is simpler if the sub-batch size is a factor of batch_size
sub_batch_size = 4
assert (batch_size % sub_batch_size == 0)
# for this to work properly the batch_size must be divisible by sub_batch_size
num_sub_batches = batch_size // sub_batch_size
loss = 0
for sub_batch_idx in range(num_sub_batches):
start_idx = sub_batch_size * sub_batch_idx
end_idx = start_idx + sub_batch_size
sub_input = input_data[start_idx:end_idx]
sub_targets = targets[start_idx:end_idx]
sub_output = model(sub_input)
# add loss component for sub_batch
loss = loss + objective(sub_output, sub_targets) / num_sub_batches
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_value = loss.item()
print("Loss value: ", loss_value)
print("Model checksum: ", sum([p.sum().item() for p in model.parameters()]))
Gradient accumulation
The problem with the previous approach is that in order to apply back-propagation, pytorch needs to store intermediate results of layers in memory for every sub-batch. This ends up requiring a relatively large amount of memory and you may still run into memory consumption issues.
To alleviate this problem, instead of computing a single loss and performing back-propagation once, we could perform gradient accumulation. This gives equivalent results of the previous version. The difference here is that we instead perform a backward pass on each component of
the loss, only stepping the optimizer once all of them have been backpropagated. This way the computation graph is cleared after each sub-batch which will help with memory usage. Note that this works because .backward() actually accumulates (adds) the newly computed gradients to the existing .grad member of each model parameter. This is why optimizer.zero_grad() must be called only once, before the loop, and not during or after.
# This is simpler if the sub-batch size is a factor of batch_size
sub_batch_size = 4
assert (batch_size % sub_batch_size == 0)
# for this to work properly the batch_size must be divisible by sub_batch_size
num_sub_batches = batch_size // sub_batch_size
# Important! zero the gradients before the loop
optimizer.zero_grad()
loss_value = 0.0
for sub_batch_idx in range(num_sub_batches):
start_idx = sub_batch_size * sub_batch_idx
end_idx = start_idx + sub_batch_size
sub_input = input_data[start_idx:end_idx]
sub_targets = targets[start_idx:end_idx]
sub_output = model(sub_input)
# compute loss component for sub_batch
sub_loss = objective(sub_output, sub_targets) / num_sub_batches
# accumulate gradients
sub_loss.backward()
loss_value += sub_loss.item()
optimizer.step()
print("Loss value: ", loss_value)
print("Model checksum: ", sum([p.sum().item() for p in model.parameters()]))
| https://stackoverflow.com/questions/63089824/ |
Splitting custom PyTorch dataset into train loader and validation loader: Length of both same, even though dataset was split? | I'm trying to split one of the Pytorch custom datasets (MNIST) into a training set and a validation set as follows:
def get_train_valid_splits(data_dir,
batch_size,
random_seed=1,
valid_size=0.2,
shuffle=True,
num_workers=4,
pin_memory=False):
normalize = transforms.Normalize((0.1307,), (0.3081,)) # MNIST
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.MNIST(root=data_dir, train=True,
download=True, transform=train_transform)
valid_dataset = datasets.MNIST(root=data_dir, train=True,
download=True, transform=valid_transform)
dataset_size = len(train_dataset)
indices = list(range(dataset_size))
split = int(np.floor(valid_size * dataset_size))
if shuffle == True:
np.random.seed(random_seed)
np.random.shuffle(indices)
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = sampler.SubsetRandomSampler(train_idx)
valid_sampler = sampler.SubsetRandomSampler(valid_idx)
print(len(train_sampler))
print(len(valid_sampler))
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, sampler=train_sampler,
num_workers=num_workers, pin_memory=pin_memory)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=batch_size, sampler=valid_sampler,
num_workers=num_workers, pin_memory=pin_memory)
print(len(train_loader.dataset))
print(len(valid_loader.dataset))
return (train_loader, valid_loader)
After calling the function I notice that the results of the indices to sample look right, 48000 and 12000:
print(len(train_sampler))
print(len(valid_sampler))
But when I look at the length of the data set associated with train_loader and valid_loader:
print(len(train_loader.dataset))
print(len(valid_loader.dataset))
I get the same length for both: 60000! Any idea what is going on here? Why is it giving the same length for both, even though I clearly split it by indices?
| It's because the dataloader doesn't modify the dataset you pass it, but "applies" things like batch size, samplers, etc. to the data when you iterate over it. Your issue is that you're using len(loader.dataset), which gives you the length of the provided dataset without modification, when you really want len(loader), which is the length of the dataset after "applying" things like batch size and samplers.
import torch
import numpy as np
dataset = np.random.rand(100,200)
sampler = torch.utils.data.SubsetRandomSampler(list(range(70)))
loader = torch.utils.data.DataLoader(dataset, sampler=sampler)
print(len(loader))
>>> 70
print(len(loader.dataset))
>>> 100
Note: The result of len will be affected by batch size:
# with batch size
loader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=2)
print(len(loader))
>>> 35
print(len(loader.dataset))
>>> 100
| https://stackoverflow.com/questions/63092193/ |
How to conditionally construct a tensor from two other tensors in PyTorch on the GPU? | An example:
import torch
pred = torch.tensor([1,2,1,0,0], device='cuda:0')
correct = torch.tensor([1,0,1,1,0], device='cuda:0')
assigned = torch.tensor([1,2,2,1,0], device='cuda:0')
I want result = tensor([1,2,1,1,0], device='cuda:0').
Basically, when pred is the same as correct, take correct; otherwise take assigned.
Moreover, I want to exclude this computation from the gradient computation.
Is there a way to do this without iterating over the tensors?
| torch.where does exactly what you're looking for:
import torch
pred = torch.tensor([1,2,1,0,0], device='cuda:0')
correct = torch.tensor([1,0,1,1,0], device='cuda:0')
assigned = torch.tensor([1,2,2,1,0], device='cuda:0')
result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1, 2, 1, 1, 0], device='cuda:0')
Since none of these tensors have requires_grad=True, nothing needs to be done to avoid gradient computation. Otherwise, you can do something like this:
import torch
pred = torch.tensor([1.,2.,1.,0.,0.], device='cuda:0')
correct = torch.tensor([1.,0.,1.,1.,0.], device='cuda:0', requires_grad=True)
assigned = torch.tensor([1.,2.,2.,1.,0.], device='cuda:0', requires_grad=True)
with torch.no_grad():
result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1, 2, 1, 1, 0], device='cuda:0')
If you do not use torch.no_grad(), you'll have:
result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1., 2., 1., 1., 0.], device='cuda:0', grad_fn=<SWhereBackward>)
which, then, can be detached from the computational graph by using:
result = result.detach()
print(result)
# >>> tensor([1., 2., 1., 1., 0.], device='cuda:0')
| https://stackoverflow.com/questions/63092529/ |
lstm pytorch RuntimeError: Expected hidden[0] size (1, 1, 256), got (1, 611, 256) | I am trying to do batch processing with nn.LSTM.
From the documentation https://pytorch.org/docs/master/generated/torch.nn.LSTM.html I get that h0 and c0 should be of dimension (num_layers * num_directions, batch, hidden_size).
But when I try to give an input tensor with batch size > 1 and h0, c0 with batch size > 1, it gives me an error stating: "RuntimeError: Expected hidden[0] size (1, 1, 256), got (1, 611, 256)"
Here is my code:
It contains a memory buffer and Actor, Critic, TD3 and ENV classes; the main training is in TD3, which holds the actor and critic objects.
Can someone please help check what I am missing here?
import os
import time
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import StandardScaler
import torch
import torch.nn as nn
import torch.nn.functional as F
from random import random as rndm
from torch.autograd import Variable
from collections import deque
import pandas_datareader.data as pdr
import datetime
os.chdir('C:\\Users\\granthjain\\Desktop\\startup_code')
torch.set_default_tensor_type('torch.DoubleTensor')
f = open('lstm_with_noise_batch.txt', 'w+')
class ReplayBuffer(object):
def __init__(self, max_size=1e6):
self.storage = []
self.max_size = max_size
self.ptr = 0
def add(self, transition):
if len(self.storage) == self.max_size:
self.storage[int(self.ptr)] = transition
else:
self.storage.append(transition)
self.ptr = (self.ptr + 1) % self.max_size
def sample(self, batch_size):
ind = np.random.randint(0, self.ptr, size=batch_size)
ind = np.random.randint(self.ptr)
(batch_states, batch_next_states, batch_actions, batch_rewards,
batch_dones) = ([], [], [], [], [])
for i in range(ind - batch_size, ind):
(state, next_state, action, reward, done) = self.storage[i]
if state is None:
continue
elif next_state is None:
continue
elif action is None:
continue
elif reward is None:
continue
elif done is None:
continue
batch_states.append(np.array(state, copy=False))
batch_next_states.append(np.array(next_state, copy=False))
batch_actions.append(np.array(action, copy=False))
batch_rewards.append(np.array(reward, copy=False))
batch_dones.append(np.array(done, copy=False))
return (np.array(batch_states, dtype=object).astype(float),
np.array(batch_next_states,
dtype=object).astype(float), np.array(batch_actions,
dtype=object).astype(float), np.array(batch_rewards,
dtype=object).astype(float), np.array(batch_dones,
dtype=object).astype(float))
class Actor(nn.Module):
def __init__(
self,
state_dim,
action_dim,
max_action,
):
super(Actor, self).__init__()
self.lstm = nn.LSTM(state_dim, 256)
self.layer_1 = nn.Linear(256, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.max_action = max_action
def forward(self, x, hx):
(hx, cx) = hx
(output, (hx, cx)) = self.lstm(x, (hx, cx))
x = F.relu(self.layer_1(output))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
# print("inside forward type cx:",len(output))
return (x, hx, cx)
class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super(Critic, self).__init__()
# Defining the first Critic neural network
self.lstm1 = nn.LSTM(state_dim + action_dim, 256)
self.layer_1 = nn.Linear(256, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, 1)
# Defining the second Critic neural network
self.lstm2 = nn.LSTM(state_dim + action_dim, 256)
self.layer_4 = nn.Linear(256, 400)
self.layer_5 = nn.Linear(400, 300)
self.layer_6 = nn.Linear(300, 1)
def forward(
self,
x,
u,
hx,
):
xu = torch.cat([x, u], 1)
# Forward-Propagation on the first Critic Neural Network
xu = torch.reshape(xu, (xu.shape[0], 1, 6))
(hx1, cx1) = hx
(hx2, cx2) = hx
(output, (hx1, cx1)) = self.lstm1(xu, (hx1, cx1))
x1 = F.relu(self.layer_1(output))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
# Forward-Propagation on the second Critic Neural Network
(output, (hx2, cx2)) = self.lstm2(xu, (hx2, cx2))
x2 = F.relu(self.layer_4(output))
x2 = F.relu(self.layer_5(x2))
x2 = self.layer_6(x2)
return (
x1,
x2,
hx1,
hx2,
cx1,
cx2,
)
def Q1(
self,
x,
u,
hx1,
):
xu = torch.cat([x, u], 1)
xu = torch.reshape(xu, (xu.shape[0], 1, 6))
(hx1, cx1) = hx1
(output, (hx1, cx1)) = self.lstm1(xu, (hx1, cx1))
x1 = F.relu(self.layer_1(output))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
return (x1, hx1, cx1)
class TD3(object):
def __init__(
self,
state_dim,
action_dim,
max_action,
):
self.actor = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target = Actor(state_dim, action_dim,
max_action).to(device)
self.actor_target.load_state_dict(self.actor.state_dict())
self.actor_optimizer = torch.optim.Adam(self.actor.parameters())
self.critic = Critic(state_dim, action_dim).to(device)
self.critic_target = Critic(state_dim, action_dim).to(device)
self.critic_target.load_state_dict(self.critic.state_dict())
self.critic_optimizer = \
torch.optim.Adam(self.critic.parameters())
self.max_action = max_action
def select_action(self, state, hx1):
(hx, cx) = hx1
x = self.actor(state, hx1)
return x
def train(
self,
replay_buffer,
iterations,
batch_size=50,
discount=0.99,
tau=0.005,
policy_noise=0.2,
noise_clip=0.5,
policy_freq=2,
):
b_state = torch.Tensor([])
b_next_state = torch.Tensor([])
b_done = torch.Tensor([])
b_reward = torch.Tensor([])
b_action = torch.Tensor([])
for it in range(iterations):
# print ('it: ', it, ' iterations: ', iterations)
# Step 4: We sample a batch of transitions (s, s’, a, r) from the memory
(batch_states, batch_next_states, batch_actions,
batch_rewards, batch_dones) = \
replay_buffer.sample(batch_size)
batch_states = batch_states.astype(float)
batch_next_states = batch_next_states.astype(float)
batch_actions = batch_actions.astype(float)
batch_rewards = batch_rewards.astype(float)
batch_dones = batch_dones.astype(float)
state = torch.from_numpy(batch_states)
next_state = torch.from_numpy(batch_next_states)
action = torch.from_numpy(batch_actions)
reward = torch.from_numpy(batch_rewards)
done = torch.from_numpy(batch_dones)
b_size = 1
seq_len = state.shape[0]
batch = b_size
input_size = state_dim
state = torch.reshape(state, (seq_len, 1, state_dim))
next_state = torch.reshape(next_state, (seq_len, 1,
state_dim))
done = torch.reshape(done, (seq_len, 1, 1))
reward = torch.reshape(reward, (seq_len, 1, 1))
action = torch.reshape(action, (seq_len, 1, action_dim))
b_state = torch.cat((b_state, state),dim=1)
b_next_state = torch.cat((b_next_state, next_state),dim=1)
b_done = torch.cat((b_done, done),dim=1)
b_reward = torch.cat((b_reward, reward),dim=1)
b_action = torch.cat((b_action, action),dim=1)
print("dim state:",b_state.shape)
print("dim next_state:",b_next_state.shape)
print("dim done:",b_done.shape)
print("dim reward:",b_reward.shape)
print("dim action:",b_action.shape)
# for h and c shape (num_layers * num_directions, batch, hidden_size)
h0 = torch.zeros(1, b_state.shape[1], 256)
c0 = torch.zeros(1, b_state.shape[1], 256)
# Step 5: From the next state s’, the Actor target plays the next action a’
next_action = self.actor_target(next_state, (h0, c0))
next_action = next_action[0]
# Step 6: We add Gaussian noise to this next action a’ and we clamp it in a range of values supported by the environment
noise = torch.Tensor(next_action).data.normal_(0,
policy_noise).to(device)
noise = noise.clamp(-noise_clip, noise_clip)
next_action = (next_action + noise).clamp(-self.max_action,
self.max_action)
# Step 7: The two Critic targets take each the couple (s’, a’) as input and return two Q-values Qt1(s’,a’) and Qt2(s’,a’) as outputs
result = self.critic_target(next_state, next_action, (h0,
c0))
target_Q1 = result[0]
target_Q2 = result[1]
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2).double()
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
target_Q = reward + (1 - done) * discount * target_Q
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
action = torch.reshape(action, next_action.shape)
result = self.critic(state, action, (h0, c0))
current_Q1 = result[0]
current_Q2 = result[1]
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) \
+ F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
out = self.actor(state, (h0, c0))
out = out[0]
(actor_loss, hx, cx) = self.critic.Q1(state, out, (h0,
c0))
actor_loss = -1 * actor_loss.mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Actor target by polyak averaging
for (param, target_param) in zip(self.actor.parameters(),
self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau)
* target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Critic target by polyak averaging
for (param, target_param) in zip(self.critic.parameters(),
self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau)
* target_param.data)
# Making a save method to save a trained model
def save(self, filename, directory):
torch.save(self.actor.state_dict(), '%s/%s_actor.pth'
% (directory, filename))
torch.save(self.critic.state_dict(), '%s/%s_critic.pth'
% (directory, filename))
# Making a load method to load a pre-trained model
def load(self, filename, directory):
self.actor.load_state_dict(torch.load('%s/%s_actor.pth'
% (directory, filename)))
self.critic.load_state_dict(torch.load('%s/%s_critic.pth'
% (directory, filename)))
class ENV:
def __init__(
self,
state_dim,
action_dim,
data,
):
self.state_dim = state_dim
self.state = torch.zeros(self.state_dim)
self.state[state_dim - 1] = 100000.0
self.next_state = torch.zeros(self.state_dim)
self.next_state[state_dim - 1] = 100000.0
self.action_dim = action_dim
self.data = data
self.idx = 0
self._max_episode_steps = 200
self.state[1] = self.data[self.idx]
self.next_state[1] = self.data[self.idx]
self.buy = 0
def reset(self):
self.next_state = torch.zeros(self.state_dim)
self.next_state[state_dim - 1] = 100000.0
self.state = torch.zeros(self.state_dim)
self.state[state_dim - 1] = 100000.0
self.state[1] = self.data[self.idx]
self.next_state[1] = self.data[self.idx]
ch = self.state[0]
cp = self.state[1]
cc = self.state[2]
st = torch.tensor([ch, cp, cc])
self.buy = 0
return st
def step(self, action):
done = False
act_t = torch.argmax(action)
self.idx += 1
if act_t == 0:
cp = 1.0003 * self.state[1]
num_s = int(self.state[2] / cp)
self.next_state[0] += num_s
self.next_state[2] = self.state[2] % cp
self.next_state[1] = self.data[self.idx]
self.buy = 1
elif act_t == 1:
self.next_state[1] = self.data[self.idx]
elif act_t == 2:
self.next_state[2] = self.state[2] + self.state[1] * (1
- 0.0023) * self.state[0]
self.next_state[0] = 0
self.next_state[1] = self.data[self.idx]
if self.buy == 1:
done = True
self.buy = 0
reward = self.next_state[2] - self.state[2] \
+ self.next_state[1] * self.next_state[0] - self.state[1] \
* self.state[0] - 1
self.state[0] = self.next_state[0]
self.state[1] = self.next_state[1]
self.state[2] = self.next_state[2]
ch = self.state[0]
cp = self.state[1]
cc = self.state[2]
st = torch.tensor([ch, cp, cc])
return (st, reward, done)
# Selecting the device (CPU or GPU)
device = torch.device(('cuda' if torch.cuda.is_available() else 'cpu'))
# set the parameters
start_timesteps = 1e3 # Number of iterations/timesteps before which the model randomly chooses an action, and after which it starts to use the policy network
eval_freq = 5e1 # How often the evaluation step is performed (after how many timesteps)
max_timesteps = 5e3 # Total number of iterations/timesteps
save_models = True # Boolean checker whether or not to save the pre-trained model
expl_noise = 0.1 # Exploration noise - STD value of exploration Gaussian noise
batch_size = 200 # Size of the batch
discount = 0.99 # Discount factor gamma, used in the calculation of the total discounted reward
tau = 0.005 # Target network update rate
policy_noise = 0.2 # STD of Gaussian noise added to the actions for the exploration purposes
noise_clip = 0.5 # Maximum value of the Gaussian noise added to the actions (policy)
policy_freq = 2 # Number of iterations to wait before the policy network (Actor model) is updated
state_dim = 3
action_dim = 3
max_action = 1
idx = 0
# instantiate policy
policy = TD3(state_dim, action_dim, max_action)
indices = pd.read_csv('nifty_test.csv')
indices = indices['0']
indices = pd.read_csv('EQUITY_L.csv')
indices = indices['SYMBOL']
# Create the environment for each ticker
# data = pd.read_csv('PAGEIND.csv')
for ticker in indices:
print(ticker)
ohlcv = pd.read_csv(ticker + '.csv')
data = ohlcv.copy()
data = data['Close']
data = np.array(data).reshape(-1, 1)
count = 0
max_timesteps = data.shape[0]
data = torch.DoubleTensor(data)
env = ENV(state_dim, action_dim, data)
replay_buffer = ReplayBuffer()
# init training variables
total_timesteps = 0
timesteps_since_eval = 0
episode_num = 0
done = True
t0 = time.time()
obs = env.reset()
hx = torch.zeros(1, 1, 256)
cx = torch.zeros(1, 1, 256)
# Set rewards and episode timesteps to zero
episode_reward = 0
episode_timesteps = 0
episode_num = 0
# We start the main loop over max_timesteps
while total_timesteps < max_timesteps:
# If the episode is done
if done | (total_timesteps == max_timesteps - 2) \
& (episode_timesteps > 200):
count = count + 1
if (count % 100 == 0) & (count >= 100) \
| (total_timesteps == max_timesteps - 2) \
& (episode_timesteps > 200):
# If we are not at the very beginning, we start the training process of the model
if total_timesteps != 0:
print('Total Timesteps: {} Episode Num: {} Reward: {}'.format(total_timesteps,
episode_num, episode_reward))
policy.train(
replay_buffer,
episode_timesteps,
batch_size,
discount,
tau,
policy_noise,
noise_clip,
policy_freq,
)
if total_timesteps > 0.6 * max_timesteps + 1:
print('model output: Total Timesteps: {} Episode Num: {} Reward: {}'.format(total_timesteps,
episode_num, episode_reward))
f.write('model output: Total Timesteps: '
+ str(total_timesteps)
+ ' episode_num '
+ str(episode_num)
+ ' episode_reward '
+ str(episode_reward))
# When the training step is done, we reset the state of the environment
obs = env.reset()
# Set the Done to False
done = False
# Set rewards and episode timesteps to zero
episode_reward = 0
episode_timesteps = 0
episode_num += 1
hx = torch.zeros(1, 1, 256)
cx = torch.zeros(1, 1, 256)
# Before 1000 timesteps, we play random actions
if total_timesteps < 0.6 * max_timesteps:
# random action
actn = torch.randn(action_dim)
action = torch.zeros(action_dim)
action[torch.argmax(actn)] = 1
else:
# After 1000 timesteps, we switch to the model
# input of shape (seq_len, batch, input_size)
obs1 = torch.reshape(obs, (1, 1, state_dim))
action = policy.select_action(obs1, (hx, cx))
actn = action[0]
hx = action[1]
cx = action[2]
# If expl_noise is not 0, we add random noise to the action scores and re-discretize via argmax
if expl_noise != 0:
print ('policy action:', actn)
actn = actn + torch.randn(action_dim)
action = torch.zeros(action_dim)
action[torch.argmax(actn)] = 1
# The agent performs the action in the environment, then reaches the next state and receives the reward
(new_obs, reward, done) = env.step(action)
# We check if the episode is done
done_bool = (0 if episode_timesteps + 1
== env._max_episode_steps else float(done))
# We increase the total reward
episode_reward += reward
# We store the new transition into the Experience Replay memory (ReplayBuffer)
replay_buffer.add((obs, new_obs, action, reward, done_bool))
# We update the state, the episode timestep, the total timesteps, and the timesteps since the evaluation of the policy
obs = new_obs
episode_timesteps += 1
total_timesteps += 1
timesteps_since_eval += 1
f.close()
And below is the output:
20MICRONS
Total Timesteps: 611 Episode Num: 0 Reward: -53044.2697380831
dim state: torch.Size([200, 611, 3])
dim next_state: torch.Size([200, 611, 3])
dim done: torch.Size([200, 611, 1])
dim reward: torch.Size([200, 611, 1])
dim action: torch.Size([200, 611, 3])
Traceback (most recent call last):
File "C:\Users\granthjain\Desktop\try_lstm.py", line 538, in <module>
policy_freq,
File "C:\Users\granthjain\Desktop\try_lstm.py", line 279, in train
next_action = self.actor_target(next_state, (h0, c0))
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\granthjain\Desktop\try_lstm.py", line 106, in forward
(output, (hx, cx)) = self.lstm(x, (hx, cx))
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 567, in forward
self.check_forward_args(input, hx, batch_sizes)
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 523, in check_forward_args
'Expected hidden[0] size {}, got {}')
File "C:\Users\granthjain\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 187, in check_hidden_size
raise RuntimeError(msg.format(expected_hidden_size, tuple(hx.size())))
RuntimeError: Expected hidden[0] size (1, 1, 256), got (1, 611, 256)
| Did you make the input dimensions match what nn.LSTM expects as well? You haven't set batch_first = True, so the input tensor has to be of shape
(seq_len, batch, input_size)
and the initial hidden and cell states of shape (num_layers, batch, hidden_size), with the batch dimension agreeing between all three tensors. Your traceback says the LSTM expected a hidden state of batch size 1 but received one of batch size 611, so the hidden state you pass in does not match the batch dimension of the input.
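A minimal sketch of the shape contract (the dimension values are illustrative, not taken from your code):
import torch
import torch.nn as nn

seq_len, batch, input_size = 5, 4, 3
num_layers, hidden_size = 1, 256

lstm = nn.LSTM(input_size, hidden_size, num_layers)  # batch_first=False is the default
x = torch.randn(seq_len, batch, input_size)          # (seq_len, batch, input_size)
h0 = torch.zeros(num_layers, batch, hidden_size)     # (num_layers, batch, hidden_size)
c0 = torch.zeros(num_layers, batch, hidden_size)     # batch dim must match x's batch dim
out, (hn, cn) = lstm(x, (h0, c0))                    # shapes agree, so this runs
With batch_first=True, only the input/output layout changes to (batch, seq_len, input_size); the hidden and cell states stay (num_layers, batch, hidden_size).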
| https://stackoverflow.com/questions/63097995/ |
Is it the right way to process a PIL image in PyTorch? | I want to train a deep model on .png images using PyTorch. I'm using a model pre-trained on ImageNet, so I need to normalize images before feeding them to the network, but when I look at the result of the transform I see some values greater than 1 and some less than -1. Shouldn't they all be in the [-1, 1] range? Am I doing this the right way?
Here is my code:
from PIL import Image
from torchvision import transforms

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225])
preprocess = transforms.Compose([
    transforms.ToTensor(),
    normalize])
x = Image.open("path/to/the/file").convert('RGB')
x = preprocess(x)  # x is then fed to the network
| What you are doing is correct, but the mean and std are not calculated from your data; you've taken those values from the ImageNet dataset.
Normalization does not clamp values to [-1, 1]; it only shifts and scales each channel towards zero mean and unit standard deviation with respect to the ImageNet statistics. Any pixel that lies more than one standard deviation from the channel mean ends up outside [-1, 1], and since your images weren't part of the mean and std calculation in the first place, such values are expected at test time.
If you wish to fine-tune your neural network, you should calculate the per-channel mean and std over your own dataset and use those values instead (though it might not make a large difference, depending on the dataset and how many images you have).
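A minimal sketch of computing the per-channel statistics, assuming a DataLoader over your own images (the dataset variable and loader settings are illustrative, not from your code):
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

loader = DataLoader(dataset, batch_size=64)  # `dataset` must yield ToTensor()-converted images

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:                     # images: (B, 3, H, W), values in [0, 1]
    b, c, h, w = images.shape
    n_pixels += b * h * w
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
normalize = transforms.Normalize(mean=mean.tolist(), std=std.tolist())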
| https://stackoverflow.com/questions/63098379/ |
Gradient Checkpointing returning values | I have a checkpoint callback function (i.e., custom_dec) that returns a Tensor and a dictionary. But it seems that torch.utils.checkpoint only supports returning tensors, not dictionaries (or other data types). What is the workaround, given that the module I want to checkpoint returns a tensor plus a dictionary?
def custom_dec(self, module):
    def custom_forward(*inputs):
        output = module(inputs[0], inputs[1],
                        encoder_attn_mask=inputs[2],
                        decoder_padding_mask=inputs[3],
                        layer_state=inputs[4],
                        causal_mask=inputs[5],
                        output_attentions=inputs[6],
                        )
        # output[2] is a python dictionary
        return output[0], output[2]
    return custom_forward
The following is the checkpoint call:
x, layer_past = \
    checkpoint.checkpoint(
        self.custom_dec(decoder_layer),
        x,
        encoder_hidden_states,
        encoder_padding_mask,
        decoder_padding_mask,
        layer_state,
        decoder_causal_mask,
        output_attentions,
    )
The error:
TypeError: CheckpointFunctionBackward.forward: expected Variable (got
dictionary) for return value 1
| A similar situation was discussed here.
What you can do is convert the dictionary into some tensor form. I once hit an error caused by an input list, which torch.utils.checkpoint does not accept; my solution was to pass the list's tensors as independent arguments and reassemble the list inside custom_forward.
I don't know the form of your dictionary (e.g. whether every key always has a value), but you can come up with a dictionary-to-tensor interchange scheme that works for it.
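For instance, if the dictionary always has the same tensor-valued keys, you can unpack it into plain tensors inside custom_forward and rebuild it after the checkpoint call. A minimal sketch with a toy module (the key names and the module are hypothetical, not from your code):
import torch
import torch.utils.checkpoint as checkpoint

DICT_KEYS = ["prev_key", "prev_value"]  # hypothetical fixed key order

class ToyLayer(torch.nn.Module):
    def forward(self, x):
        state = {"prev_key": x * 2.0, "prev_value": x + 1.0}  # stands in for layer_state
        return torch.relu(x), state

layer = ToyLayer()

def custom_forward(x):
    out, state = layer(x)
    # checkpoint can only hand tensors back through autograd, so return
    # the dictionary's values as plain tensors in a fixed key order
    return (out,) + tuple(state[k] for k in DICT_KEYS)

x = torch.randn(4, 8, requires_grad=True)
outputs = checkpoint.checkpoint(custom_forward, x)
x_out = outputs[0]
layer_past = dict(zip(DICT_KEYS, outputs[1:]))  # rebuild the dictionary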
| https://stackoverflow.com/questions/63102642/ |