instruction | input | output | source
---|---|---|---|
import matplotlib failed while deploying my model in AWS SageMaker | I have deployed my AWS model successfully.
But while testing I am getting a runtime error on "import matplotlib.pyplot as plt". I think it is due to the PyTorch framework version I used (framework_version=1.2.0). I am facing the same issue when I use higher versions as well.
PyTorchModel(model_data=model_artifact,
             role=role,
             framework_version='1.2.0',
             entry_point='predict.py',
             predictor_cls=ImagePredictor)
I have another issue when I use version=1.0.0, i.e. I am not able to import libraries from subdirectories and the deployment itself is failing.
E.g. I have some code files in a "Code" directory.
from Code.CTModel import NetWork ---> **this line will fail with "No module named Code" when I use version=1.0.0**
Ultimately I want to know how to use/import libraries which are written under subdirectories.
| It sounds like you want to inject some additional code libraries into the SageMaker PyTorch serving container. You might have to dig into the source code for how the PyTorch serving container is built to further customize it: https://github.com/aws/sagemaker-pytorch-inference-toolkit, or build your own image.
Digging into that source code a bit, I see that the container has enabled the importing of arbitrary code, but only when "multi-model mode" is enabled. Can you verify that the code exists under a directory "code" in your model directory and that "multi-model mode" is enabled?
def initialize(self, context):
    # Adding the 'code' directory path to sys.path to allow importing user modules when multi-model mode is enabled.
    if (not self._initialized) and ENABLE_MULTI_MODEL:
        code_dir = os.path.join(context.system_properties.get("model_dir"), 'code')
        sys.path.append(code_dir)
        self._initialized = True
Reference: https://github.com/aws/sagemaker-pytorch-inference-toolkit/blob/c4e7abc49aeebc2f9b6035337548a90e4330113d/src/sagemaker_pytorch_serving_container/handler_service.py#L47
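If the problem is how your archive is laid out, a minimal repackaging sketch that puts the custom modules under a top-level code/ directory inside the model archive could look like this (file names here are placeholders for your own files):
import tarfile
# hypothetical layout: model weights at the root, user modules under code/
with tarfile.open("model.tar.gz", "w:gz") as archive:
    archive.add("model.pth", arcname="model.pth")
    archive.add("predict.py", arcname="code/predict.py")
    archive.add("Code/CTModel.py", arcname="code/CTModel.py")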
If this all seems complicated to you (it is), you might want to look into some standardized formats for serializing your PyTorch model such as https://onnx.ai/. I'd love to learn more about what you're trying to do here sometime if you reach out to me at [email protected]. I'm beta-testing a platform that enables deployment in a single line of code and would love to test it out here.
| https://stackoverflow.com/questions/62912156/ |
How does one save torch.nn.Sequential models in pytorch properly? | I am well aware of loading the state dictionary and then having an instance of the class be loaded with the old dictionary of parameters (e.g. this great question & answer). Unfortunately, when I have a torch.nn.Sequential I of course do not have a class definition for it.
So I wanted to double check: what is the proper way to do it? I believe torch.save is sufficient (so far my code has not collapsed), though these things can be more subtle than one might expect (e.g. I get a warning when I use pickle, but torch.save uses it internally so it's confusing). Also, numpy has its own save functions (e.g. see this answer) which tend to be more efficient, so there might be a subtle trade-off I might be overlooking.
My test code:
# creating data and running through a nn and saving it
import torch
import torch.nn as nn
from pathlib import Path
from collections import OrderedDict
import numpy as np
import pickle
path = Path('~/data/tmp/').expanduser()
path.mkdir(parents=True, exist_ok=True)
num_samples = 3
Din, Dout = 1, 1
lb, ub = -1, 1
x = torch.distributions.Uniform(low=lb, high=ub).sample((num_samples, Din))
f = nn.Sequential(OrderedDict([
    ('f1', nn.Linear(Din, Dout)),
    ('out', nn.SELU())
]))
y = f(x)
# save data torch to numpy
x_np, y_np = x.detach().cpu().numpy(), y.detach().cpu().numpy()
np.savez(path / 'db', x=x_np, y=y_np)
print(x_np)
# save model
with open('db_saving_seq', 'wb') as file:
    pickle.dump({'f': f}, file)
# load model
with open('db_saving_seq', 'rb') as file:
    db = pickle.load(file)
    f2 = db['f']
# test that it outputs the right thing
y2 = f2(x)
y_eq_y2 = y == y2
print(y_eq_y2)
db2 = {'f': f, 'x': x, 'y': y}
torch.save(db2, path / 'db_f_x_y')
print('Done')
db3 = torch.load(path / 'db_f_x_y')
f3 = db3['f']
x3 = db3['x']
y3 = db3['y']
yy3 = f3(x3)
y_eq_y3 = y == y3
print(y_eq_y3)
y_eq_yy3 = y == yy3
print(y_eq_yy3)
Related:
related question from forum: https://discuss.pytorch.org/t/how-to-save-nn-sequential-as-a-model/89117/14
| As can be seen in the code torch.nn.Sequential is based on torch.nn.Module:
https://pytorch.org/docs/stable/_modules/torch/nn/modules/container.html#Sequential
So you can use
f = torch.nn.Sequential(...)
torch.save(f.state_dict(), path)
just like with any other torch.nn.Module.
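To restore it later, rebuild the same Sequential architecture and load the saved state dict into it; a minimal sketch, assuming the same layer names, Din/Dout and path as above:
from collections import OrderedDict
import torch
import torch.nn as nn
f2 = nn.Sequential(OrderedDict([
    ('f1', nn.Linear(Din, Dout)),
    ('out', nn.SELU())
]))
f2.load_state_dict(torch.load(path))
f2.eval()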
| https://stackoverflow.com/questions/62923052/ |
What should __len__ be for PyTorch when generating unlimited data? | Say I am trying to use PyTorch to learn the equation y = 2x and I want to generate an unlimited amount of data to train my model with. I am supposed to provide a __len__ function. Here's an example below. What should it be in this case? How do I specify the number of mini-batch iterations per epoch? Do I just set a number arbitrarily?
import numpy as np
from torch.utils.data import Dataset
class GenerateUnlimitedData(Dataset):
    def __init__(self):
        pass
    def __getitem__(self, index):
        x = np.random.randint(1, 10)
        y = 2 * x
        return x, y
    def __len__(self):
        return 1000000  # This works but is not correct
| You should use torch.utils.data.IterableDataset instead of torch.utils.data.Dataset. In your case it would be:
import torch
class Dataset(torch.utils.data.IterableDataset):
    def __init__(self, batch_size):
        super().__init__()
        self.batch_size = batch_size
    def __iter__(self):
        while True:
            x = torch.randint(1, 10, (self.batch_size,))
            y = 2 * x
            yield x, y
You should use batches (probably large ones), as that would speed up computations (pytorch is well suited for GPU computations on many samples at once).
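A minimal usage sketch (assuming the class above): since __iter__ already yields whole batches, the DataLoader is told not to batch again with batch_size=None:
dataset = Dataset(batch_size=256)
loader = torch.utils.data.DataLoader(dataset, batch_size=None)  # automatic batching disabled
for i, (x, y) in enumerate(loader):
    # train on (x, y); break manually because the stream is infinite
    if i == 10:
        break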
| https://stackoverflow.com/questions/62925418/ |
How to add new sample to CIFAR10 torchvision? | Hi I want to add my own images to the CIFAR10 dataset in torchvision, how can I do that?
train_data = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform)
train_data.add # or a workaround!
thanks
| You can either create a custom dataset for CIFAR10, using the raw cifar10 images here or you can still use the CIFAR10 dataset inside your new custom dataset and then add your logic in the __getitem__() method.
This is a simple example to get you going:
class CIFAR10_2(torch.utils.data.Dataset):
    def __init__(self, dataset_path='/cifar10', transformations=None, should_download=True):
        self.dataset_train = torchvision.datasets.CIFAR10(dataset_path, download=should_download)
        self.transformations = transformations
    def __getitem__(self, index):
        # do as you wish, add your logic here
        (img, label) = self.dataset_train[index]
        # for transformations for example
        if self.transformations is not None:
            return self.transformations(img), label
        return img, label
    def __len__(self):
        return len(self.dataset_train)
You can get fancy and add logic for test, validation, etc. and do whatever you like.
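As a concrete (hypothetical) illustration of "adding your logic", extra samples could be kept in a list and served after the CIFAR10 indices run out; extra_samples here is a placeholder for your own (PIL image, label) pairs:
class CIFAR10_Extended(CIFAR10_2):
    def __init__(self, extra_samples, **kwargs):
        super().__init__(**kwargs)
        self.extra_samples = extra_samples  # list of (PIL image, label) tuples
    def __getitem__(self, index):
        if index < len(self.dataset_train):
            return super().__getitem__(index)
        img, label = self.extra_samples[index - len(self.dataset_train)]
        if self.transformations is not None:
            return self.transformations(img), label
        return img, label
    def __len__(self):
        return len(self.dataset_train) + len(self.extra_samples)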
| https://stackoverflow.com/questions/62931963/ |
Best way to save many tensors of different shapes? | I would like to store thousands to millions of tensors with different shapes to disk. The goal is to use them as a time series dataset. The dataset will probably not fit into memory and I will have to load samples or ranges of samples from disk.
What is the best way to accomplish this while keeping storage and access time low?
| The easiest way to save anything to disk is by using pickle:
import pickle
import torch
a = torch.rand(3,4,5)
# save
with open('filename.pickle', 'wb') as handle:
    pickle.dump(a, handle)
# open
with open('filename.pickle', 'rb') as handle:
    b = pickle.load(handle)
You can also save things with PyTorch directly, but that is just a PyTorch wrapper around pickle.
import torch
x = torch.tensor([0, 1, 2, 3, 4])
torch.save(x, 'tensor.pt')
If you want to save multiple tensors in one file, you can wrap them in a dictionary:
import torch
x = torch.tensor([0, 1, 2, 3, 4])
a = torch.rand(2,3,4,5)
b = torch.zeros(37)
torch.save({"a": a, "b":b, "x", x}, 'tensors.pt')
| https://stackoverflow.com/questions/62932368/ |
How to use a bert pretrained model somewhere else? | I followed this course https://www.coursera.org/learn/sentiment-analysis-bert about building a pretrained model for sentiment analysis. During the training, at each epoch they saved the model using torch.save(model.state_dict(), f'BERT_ft_epoch{epoch}.model'). Now I want to use one of these models (the best one obviously) elsewhere, for example where a user can paste a tweet as an input and get the emotion of the writer. But I don't know how to load the model and predict; here's what I tried:
import torchvision.models as models
import torch
model = models.resnet101(pretrained=False)
model.load_state_dict(torch.load('Models/BERT_ft_epoch15.model'), strict=False)
model_ft.eval()
output = model_ft(input) #input is a tweets list
I get this error: TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list
| How to define, initialize, save and load models using PyTorch.
Initializing a model. That is done by inheriting from the class nn.Module; consider this simple two-layer model:
import torch
import torch.nn as nn
class Model(nn.Module):
    def __init__(self, input_size=128, output_size=10):
        super(Model, self).__init__()
        self.layer1 = nn.Sequential(nn.Linear(input_size, 64), nn.LeakyReLU())
        self.layer2 = nn.Linear(64, output_size)
    def forward(self, x):
        y = self.layer2(self.layer1(x))
        return y
The layers of the model are first initialized in __init__() and then we specify the operations of the forward pass in forward(). You can be creative there, just remember to use PyTorch differentiable operations.
You initialize the model by creating an instance of the new class:
model = Model() # brand new instance!
After training your model you want to save it:
import torch
model = Model(128, 10) # initialization
torch.save(model.state_dict(), 'model.pt') # saving the state dict
You are not saving the model here, you are saving the state_dict; this is an ordered dictionary that contains all the weights, biases and other parameters of your model. The reason we save the state_dict instead of the model directly can be found in the documentation (https://pytorch.org/tutorials/beginner/saving_loading_models.html). For now, just consider it best practice.
Finally, we arrive at how to load the model. You have to initialize the model first, then load the state_dict from disk.
model = Model(128, 10) # model initialization
model.load_state_dict(torch.load('model.pt'))
model.eval() # put the model in inference mode
Notice that, when we save the state_dict we may also save the optimizer and the graph used for back propagation. That is useful to checkpoint the training and resume it at a later stage.
# in the training loop
torch.save({"epoch": epoch,
"model": model.state_dict,
"optim": optim.state_dict,
"loss": loss}, f'checkpoint{epoch}.pt')
I hope that paints a clearer picture for you =)
| https://stackoverflow.com/questions/62938230/ |
pytorch training loss invariant with varying forward pass implementations | The following code (MNIST MLP in PyTorch) delivers approximately the same training loss regardless of having the last layer in the forward pass returning:
1. F.log_softmax(x)
2. x
Option 1 is incorrect because I use criterion = nn.CrossEntropyLoss() and yet the results are almost identical. Am I missing anything?
import torch
import numpy as np
import time
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # linear layer (784 -> 1 hidden node)
        self.fc1 = nn.Linear(28 * 28, 512)
        self.dropout1 = nn.Dropout(p=0.2, inplace=False)
        self.fc2 = nn.Linear(512, 256)
        self.dropout2 = nn.Dropout(p=0.2, inplace=False)
        self.dropout = nn.Dropout(p=0.2, inplace=False)
        self.fc3 = nn.Linear(256, 10)
    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        x = self.dropout1(x)
        x = F.relu(self.fc2(x))
        x = self.dropout2(x)
        x = self.fc3(x)
        # return F.log_softmax(x)
        return x
# initialize the NN
model = Net()
print(model)
model.to('cuda')
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
n_epochs = 10
model.train() # prep model for training
for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0
    start = time.time()
    for data, target in train_loader:
        data, target = data.to('cuda'), target.to('cuda')
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * data.size(0)
    train_loss = train_loss / len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f} \ttime: {:.6f}'.format(
        epoch + 1,
        train_loss,
        time.time() - start
    ))
| For numerical stability, the nn.CrossEntropyLoss() is implemented with the softmax layer inside it. So you should NOT use the softmax layer in your forward pass.
From the docs (https://pytorch.org/docs/stable/nn.html#crossentropyloss):
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
Using the softmax layer in the forward pass will lead to worse metrics because the gradient magnitudes are lowered (thus, the weight updates are also lowered). I learned it the hard way!
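You can verify the equivalence the docs describe with a tiny check (a sketch on random logits, not tied to the model above):
import torch
import torch.nn as nn
import torch.nn.functional as F
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))  # True: CrossEntropyLoss = LogSoftmax + NLLLoss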
I guess your problem is that the loss is similar at the beginning of training, but at the end of training they should not be. It is usually a good sanity check to overfit your model on one batch of data. The model should reach 100% accuracy if the batch is small enough. If the model is taking too long to train then you probably have a bug somewhere.
Hope that helps =)
| https://stackoverflow.com/questions/62941089/ |
Received a make error while building warp-ctc pytorch | I am trying to build https://github.com/SeanNaren/warp-ctc.git on Google Colab, following this notebook. I am using these commands on Colab:
!git clone https://github.com/SeanNaren/warp-ctc.git;\
cd warp-ctc;\
mkdir build;\
cd build;\
cmake ..;\
make;
but I am receiving an error building it:
[-11%] Building NVCC (Device) object CMakeFiles/warpctc.dir/src/warpctc_generated_ctc_entrypoint.cu.o
/content/drive/My Drive/simple_hwr/warp-ctc/src/ctc_entrypoint.cu(1): error: this declaration has no storage class or type specifier
/content/drive/My Drive/simple_hwr/warp-ctc/src/ctc_entrypoint.cu(1): error: expected a ";"
2 errors detected in the compilation of "/tmp/tmpxft_00000191_00000000-13_ctc_entrypoint.compute_70.cpp1.ii".
CMake Error at warpctc_generated_ctc_entrypoint.cu.o.cmake:279 (message):
Error generating file /content/drive/My
Drive/simple_hwr/warp-ctc/build/CMakeFiles/warpctc.dir/src/./warpctc_generated_ctc_entrypoint.cu.o
CMakeFiles/warpctc.dir/build.make:220: recipe for target 'CMakeFiles/warpctc.dir/src/warpctc_generated_ctc_entrypoint.cu.o' failed
make[2]: *** [CMakeFiles/warpctc.dir/src/warpctc_generated_ctc_entrypoint.cu.o] Error 1
CMakeFiles/Makefile2:146: recipe for target 'CMakeFiles/warpctc.dir/all' failed
make[1]: *** [CMakeFiles/warpctc.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
How can this be resolved?
| This post documents an apparent fix for this issue, which also affects the original warp-ctc repository:
[...] the ctc_entrypoint.cu file needs to be a symlink. So, go to src dir and run:
rm ctc_entrypoint.cu
ln -s ctc_entrypoint.cpp ctc_entrypoint.cu
Then, re-run make, which should resolve the issue.
| https://stackoverflow.com/questions/62941402/ |
How is batching normally performed for sequence data for an RNN/LSTM | This Udacity course notebook batches data in a way that is not intuitive to me.
For a long sequence of data, they first truncate the data so it can be evenly divided by batch_size. Next, they .reshape() the data to be (batch_size, -1). Then they create a sliding window over these batches of sub-sequences. When the sliding window goes out of bounds, they add fake data (via wrap-around) to the end.
This provided graphic might explain better than I can:
I'm just wondering if this practice is normal, or if there is a different way. It seems strange that the batches are non-consecutive sub-sequences. Wouldn't that make it hard to interpret the output of a single batch?
Is there a better approach? The woman in the video literally said something to the effect of "I don't know why it's done this way, but I've seen it before and the network trains fine".
| You should check the documentation on padded sequences from PyTorch. (If I had more experience with it I would give you a more detailed explanation, but the truth is that I never really understood them!)
Packed Sequence:
https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.PackedSequence.html#torch.nn.utils.rnn.PackedSequence
Pack padded sequence:
https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pack_padded_sequence.html#torch.nn.utils.rnn.pack_padded_sequence
Pad packed sequence:
https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_packed_sequence.html#torch.nn.utils.rnn.pad_packed_sequence
Pad sequence:
https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_sequence.html#torch.nn.utils.rnn.pad_sequence
Pack sequence:
https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pack_sequence.html#torch.nn.utils.rnn.pack_sequence
The names are a bit confusing. But the idea is that you create a tensor with the size of your largest sequence in the batch. The other sequences will be padded to have the same size as the longest sequence in the batch. This packed padded sequence is given to the recurrent model (RNN, LSTM, GRU, your favorite). With that you can pack arbitrary sequences with minor memory limitations.
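A minimal sketch of how these utilities fit together (toy sequences and a hypothetical LSTM, just to show the mechanics):
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
# three variable-length sequences, feature size 4
seqs = [torch.randn(5, 4), torch.randn(3, 4), torch.randn(2, 4)]
lengths = torch.tensor([len(s) for s in seqs])
padded = pad_sequence(seqs, batch_first=True)  # (3, 5, 4), zero-padded
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
lstm = torch.nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
packed_out, _ = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)  # back to a padded tensor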
| https://stackoverflow.com/questions/62946271/ |
Flattening Efficientnet model | I am trying to remove the top layer in the efficientnet-pytorch implementation. However, if I simply replace the final _fc layer with my own fully connected layer, as suggested by the author in this github comment, I am worried that there is still a swish activation even after this layer, as opposed to having nothing as I expected. When I print the model, the final lines are as follows:
(_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_avg_pooling): AdaptiveAvgPool2d(output_size=1)
(_dropout): Dropout(p=0.2, inplace=False)
(_fc): Sequential(
(0): Linear(in_features=1280, out_features=512, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
(3): Linear(in_features=512, out_features=128, bias=True)
(4): ReLU()
(5): Dropout(p=0.25, inplace=False)
(6): Linear(in_features=128, out_features=1, bias=True)
)
(_swish): MemoryEfficientSwish()
)
)
where _fc is my replaced module.
What I hoped to do was:
base_model = EfficientNet.from_pretrained('efficientnet-b3')
model = nn.Sequential(*list(base_model.children())[:-3])
where in my mind base_model.children() flattens the model from the nested structure. However, now I cannot seem to be able to use the model: if I use a dummy input, x = torch.randn(1, 3, 255, 255), I get the error: TypeError: forward() takes 1 positional argument but 2 were given.
It should be noted that model[:2](x) works, but not model[:3](x). model[2] seems to be the mobile blocks.
Here is a colab notebook with the above code.
| This is a common misunderstanding of what print(net) actually does.
The fact that there is a _swish Module after the _fc simply means that the former was registered after the latter. You can check that in the code:
class EfficientNet(nn.Module):
    def __init__(self, blocks_args=None, global_params=None):
        # [...]
        # Final linear layer
        self._avg_pooling = nn.AdaptiveAvgPool2d(1)
        self._dropout = nn.Dropout(self._global_params.dropout_rate)
        self._fc = nn.Linear(out_channels, self._global_params.num_classes)
        self._swish = MemoryEfficientSwish()
The order in which they are defined is the order that they will be printed. When it comes to what exactly is performed, you have to check the forward:
def forward(self, inputs):
    # Convolution layers
    x = self.extract_features(inputs)
    # Pooling and final linear layer
    x = self._avg_pooling(x)
    x = x.flatten(start_dim=1)
    x = self._dropout(x)
    x = self._fc(x)
    return x
and, as you can see, there is nothing after self._fc(x), which means no Swish will be applied.
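So, keeping the original model and only swapping out _fc (as the author suggested), rather than rebuilding it with nn.Sequential from children(), should behave as you expect; a sketch assuming the efficientnet_pytorch package from the question:
import torch
import torch.nn as nn
from efficientnet_pytorch import EfficientNet
base_model = EfficientNet.from_pretrained('efficientnet-b3')
base_model._fc = nn.Sequential(
    nn.Linear(base_model._fc.in_features, 512),
    nn.ReLU(),
    nn.Dropout(p=0.25),
    nn.Linear(512, 1),
)
out = base_model(torch.randn(1, 3, 300, 300))  # no Swish is applied after the new head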
| https://stackoverflow.com/questions/62954999/ |
Does my for loop run parallely, if all the tensors involved in the loop are on the GPU? | I have a list of tensors and all of them are present on the GPU. I obtained this list by splitting one tensor on the GPU using torch.split. I want to get list of sums of the list of tensors I have. So, in simple terms, I want to get a list in which, the first element is sum of first tensor in the list, and so on. If I run a for loop for this, does it get parallelised? If not, is there a way to make it run parallely? I want to parallelize it since the list is pretty long, and the sum operation can be done parallely, and independently on every tensor present on the list. If this operation can be performed on the GPU, the performance gain would be immense.
UPDATE : Consider I have a list of tensors as follows :
ls
[tensor([[0.8469, 0.3712, 0.2956],
[0.6548, 0.5284, 0.8682],
[0.5748, 0.2390, 0.1402],
[0.0010, 0.1794, 0.6048],
[0.4636, 0.4101, 0.6543]], device='cuda:0'),
tensor([[0.2138, 0.3613, 0.8712],
[0.4689, 0.0503, 0.7342],
[0.1368, 0.0688, 0.9223]], device='cuda:0'),
tensor([[0.3131, 0.6142, 0.1555],
[0.4099, 0.5000, 0.7578],
[0.7353, 0.2425, 0.4407],
[0.5943, 0.0377, 0.4820],
[0.5898, 0.9585, 0.6993]], device='cuda:0'),
tensor([[0.8629, 0.3172, 0.4248],
[0.9957, 0.6998, 0.0931],
[0.0258, 0.9898, 0.5250]], device='cuda:0'),
tensor([[0.0298, 0.4033, 0.9465],
[0.2763, 0.9412, 0.4873]], device='cuda:0')]
As you can see, I have a list of 5 tensors of different shapes. Each tensor has a shape of 3 in their first dimension. The shape is different because of 0th dimension. So, in this example, the shapes of the tensor in the list are [[5,3], [3, 3], [5, 3], [3, 3], [2,3]]. I want to get a list of tensors from this list as follows :
sums = [torch.sum(li, axis=0) for li in ls]
sums
[tensor([2.5412, 1.7280, 2.5632], device='cuda:0'),
tensor([0.8195, 0.4804, 2.5277], device='cuda:0'),
tensor([2.6424, 2.3528, 2.5352], device='cuda:0'),
tensor([1.8844, 2.0068, 1.0429], device='cuda:0'),
tensor([0.3062, 1.3445, 1.4338], device='cuda:0')]
So, as you can see, the first tensor in the list is sum of first tensor in the list ls along the dimension 0. The second tensor is sum of second tensor in list ls along the dimension 0 and so on.
To do this task, I'm currently using a for loop which iteratively calculates the sums and appends them to the sums list. However, this is very inefficient as my list of tensors is really big, of the order of 100K, and doing this in each iteration is super inefficient. I wanted to find out if there is any way to do this more efficiently.
The list ls of tensors is obtained by splitting a big tensor like this :
splitter = [5, 3, 5, 3, 2]
A = torch.rand(18, 3).cuda()
ls = torch.split(A, splitter)
ls
(tensor([[0.1969, 0.6113, 0.3563],
[0.9180, 0.7759, 0.5953],
[0.0279, 0.4014, 0.2268],
[0.9026, 0.3821, 0.1498],
[0.3630, 0.9144, 0.3277]], device='cuda:0'),
tensor([[2.1312e-02, 5.2311e-01, 8.9177e-02],
[4.7427e-01, 2.4503e-04, 1.2559e-01],
[5.1641e-01, 9.1357e-01, 9.5637e-01]], device='cuda:0'),
tensor([[0.3730, 0.4251, 0.9437],
[0.5634, 0.3086, 0.5891],
[0.5602, 0.0872, 0.2128],
[0.7717, 0.1920, 0.3977],
[0.5787, 0.3488, 0.7499]], device='cuda:0'),
tensor([[0.9338, 0.4330, 0.8843],
[0.5646, 0.0574, 0.8790],
[0.4692, 0.5831, 0.9160]], device='cuda:0'),
tensor([[0.9786, 0.5209, 0.9364],
[0.4370, 0.4917, 0.3672]], device='cuda:0'))
So, if avoiding the for loop is not possible, does anyone have any ideas on summing the main tensor A according to a provided splitter? So, for example, in the code above, the splitter is [5, 3, 5, 3, 2]. I want to obtain a tensor res from tensor A such that the first row of res is the sum of the first 5 rows of A (because splitter[0] = 5) along dim=0. The second row of res is the sum of the next 3 rows (row 5 to row 7) of A. And so on. Can I do this without using a for loop? Or can I parallelise this for loop, since the operations it is doing are independent of each other and are mutually exclusive and exhaustive?
I hope the added details are enough. If I need to add any further details to the question, please let me know. Thanks in advance :)
| PyTorch runs GPU operations asynchronously (see docs).
When you call a function that uses the GPU, the operations are enqueued to the particular device
This means, your sum operations may run in parallel.
I have made a simple experiment to test this. If I am right, it proves that you don't need to worry about parallelism here.
import torch
A = torch.rand(100000, 32, device='cuda')
splits = torch.split(A, 4)
Your code:
%%timeit -r1 -n5
sums = [s.sum() for s in splits]
torch.cuda.synchronize()
# Output: 5 loops, best of 1: 374 ms per loop
Added synchronization after every sum operation:
%%timeit -r1 -n5
sums = [torch.cuda.synchronize() or s.sum() for s in splits]
# Output: 5 loops, best of 1: 897 ms per loop
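If you still want to avoid the Python loop entirely, the per-segment sums can also be computed in one shot; a sketch assuming the A and splitter from the question (index_add_ scatters each row of A into the output row of its segment):
import torch
splitter = [5, 3, 5, 3, 2]
A = torch.rand(18, 3, device='cuda')
# one segment id per row of A, e.g. [0, 0, 0, 0, 0, 1, 1, 1, 2, ...]
segment_ids = torch.repeat_interleave(
    torch.arange(len(splitter), device=A.device),
    torch.tensor(splitter, device=A.device))
res = torch.zeros(len(splitter), A.size(1), device=A.device)
res.index_add_(0, segment_ids, A)  # row i of res = sum of the rows of A in segment i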
| https://stackoverflow.com/questions/62966810/ |
PyTorch: NVIDIA NGC image or Docker Hub image? | What are the differences between the official PyTorch image on Docker Hub and the PyTorch image on NVIDIA NGC?
The NGC page is more documented than the Docker Hub page, which has no description. But the NGC image is also heavier by a few gigabytes, and it seems to require a CUDA 10.2-compatible driver.
Is there any advantage in using the NVIDIA NGC image instead of the one from Docker Hub?
| I'm not a docker expert.
To the best of my knowledge, Nvidia puts a lot of effort into shipping GPU-optimized containers, such that running GPU PyTorch on Nvidia hardware with an Nvidia container should give the best possible performance.
Therefore, if you are using Nvidia hardware you should expect better performance using NGC containers. The gap, though, might not be that significant.
| https://stackoverflow.com/questions/62978545/ |
Why error FileNotFoundError: [Errno 2] No such file or directory: './ML/model.pth' occurs? | Error
ml_1 | from Ml import demo_model
ml_1 | File "/app/Ml/demo.py", line 84, in
ml_1 | model = torch.load('./ML/model.pth', map_location='cpu')
ml_1 | File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 584, in load
ml_1 | with _open_file_like(f, 'rb') as opened_file:
ml_1 | File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 234, in _open_file_like
ml_1 | return _open_file(name_or_buffer, mode)
ml_1 | File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 215, in init
ml_1 | super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './ML/model.pth'
occurs when starting the docker container.
Dockerfile:
FROM python:3.7
WORKDIR /app
COPY ./server/requirements.txt /app
COPY ./server/Ml/model.pth /app/Ml/
COPY ./server/Ml/demo.py /app/Ml/
RUN python -m pip install --no-cache-dir -r requirements.txt
CMD gunicorn -b=0.0.0.0:8443 -w=4 project.app:app --timeout 2000
The line of code where the error occurs:
best_model = torch.load('./ML/model.pth', map_location='cpu')
| It seems like you have a typo: the folders are Ml (small "l") while in the code it is ML (capital "L").
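So the load call just needs to point at the folder the Dockerfile actually creates (Ml, assuming the working directory stays at /app as set by WORKDIR):
best_model = torch.load('./Ml/model.pth', map_location='cpu')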
| https://stackoverflow.com/questions/62980773/ |
Pytorch how to increase batch size | I currently have a tensor of torch.Size([1, 3, 256, 224]) but I need it to be input shape [32, 3, 256, 224]. I am capturing data in real-time so dataloader doesn't seem to be a good option. Is there any easy way to take 32 of size torch.Size([1, 3, 256, 224]) and combine them to create 1 tensor of size [32, 3, 256, 224]?
| You are probably using a JIT model, and the batch size must be exactly the one the model was trained with.
t = torch.rand(1, 3, 256, 224)
t.size() # torch.Size([1, 3, 256, 224])
t2= t.expand(32, -1,-1,-1)
t2.size() # torch.Size([32, 3, 256, 224])
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor, and you get what you need. Only the tensor stride was changed.
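If instead you are capturing 32 different frames over time (rather than repeating one), a sketch that stacks them into a single batch:
frames = [torch.rand(1, 3, 256, 224) for _ in range(32)]  # stand-ins for captured frames
batch = torch.cat(frames, dim=0)  # torch.Size([32, 3, 256, 224])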
| https://stackoverflow.com/questions/62981621/ |
How to convert PyTorch tensor to image and send it with flask? | I have a neural network in PyTorch which gives an image as a Tensor. I have converted it to a numpy array and then followed the explanation here to send that image to the html. The problem is that it's always black.
This is my code in the PyTorch:
def getLinearImage():
    with torch.no_grad():
        test_z = torch.randn(1, 100).to(device)
        generated = G(test_z)
        generated = generated.cpu()
        numpy = generated[0].view(64, 64).numpy()
        numpu = (numpy + 1) * 128
        return numpy
This is the code in the flask where arr is the returned value from getLinearImage()
def getImage(arr):
    img = Image.fromarray(arr.astype("uint8"))
    file_object = io.BytesIO()
    img.save(file_object, "PNG")
    file_object.seek(0)
    return send_file(file_object, mimetype="image/PNG")
If I open a static image and I send it to getImage() it works but won't work with the generated one. In the html I call it like:
<img src="/getLinearImage" alt="User Image" height="100px" width="100px">
| Logically speaking, since the static image works, the error is somewhere in your getLinearImage code. I would suggest running things through using PDB (or a debugger of your choice) to figure out why it's not generated correctly.
That said, you create a variable in your code:
numpu = (numpy + 1) * 128
which you don't seem to use, since you return the other variable afterwards:
return numpy
Could that be your problem?
Also: I presume that when you created this, you saved the original image locally to ensure something gets generated in the first place?
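For reference, a corrected sketch of the generator function that actually returns the rescaled array (and clips it into the valid 8-bit range), under the assumption that G outputs values in [-1, 1]:
def getLinearImage():
    with torch.no_grad():
        test_z = torch.randn(1, 100).to(device)
        generated = G(test_z).cpu()
        arr = generated[0].view(64, 64).numpy()
        arr = (arr + 1) * 128  # map [-1, 1] roughly onto [0, 256]
        return arr.clip(0, 255)  # keep values in the valid uint8 range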
| https://stackoverflow.com/questions/62987619/ |
Loss Analysis of Deep Learning | I'm new to deep learning, and I have built a graph convolution network. I used 5 fold cross-validations. After plotting the mean train_loss (blue) and validate_loss (orange) together, I got this baby.
MSE loss
As you can see from the curvilinear trend of validate_loss, it seems that my network learns only a little. (Is it the data? The GCN framework? The learning rate?)
Could you guys specifically help me to figure out the bugs?
I would appreciate that! If you don't get my point, please let me know.
class Scorer(nn.Module):
    """
    Three conv_layers and two fc_layers with Dropout
    """
    def __init__(self):
        super(Scorer, self).__init__()
        self.conv_layer1 = GraphConvNet(5, 64)
        self.conv_layer2 = GraphConvNet(64, 128)
        self.conv_layer3 = GraphConvNet(128, 256)  # (I have tried deleting conv_layer3)
        self.fc_layer1 = nn.Linear(256, 128)
        self.drop_layer1 = nn.Dropout(0.5)
        self.fc_layer2 = nn.Linear(128, 64)
        self.drop_layer2 = nn.Dropout(0.5)
        self.out_layer = nn.Linear(64, 1)
    def forward(self, NormLap, feat):
        h = self.conv_layer1(NormLap, feat)
        h = F.leaky_relu(h)
        h = self.conv_layer2(NormLap, h)
        h = F.leaky_relu(h)
        h = self.conv_layer3(NormLap, h)
        h = F.leaky_relu(h)
        h = self.fc_layer1(h)
        h = self.drop_layer1(h)
        h = F.leaky_relu(h)
        h = self.fc_layer2(h)
        h = self.drop_layer2(h)
        h = F.leaky_relu(h)
        h = self.out_layer(h)
        h = F.leaky_relu(h)
        return h
Below are my network and parameters:
# parameter setting
learning_rate = 0.001 # (I have tried 1e-1, 1e-2)
weight_decay = 1e-3 # (I have tried 1e-4)
epochs = 500
batch_size = 50 # (I have tried 30)
model = Scorer()
loss_func = nn.MSELoss()
optimizer = th.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)
| This is pretty much what train and validation loss is supposed to do. Loss goes down over time; that's what an optimizer is trying to do. train_loss keeps going down after valid_loss levels out or plateaus, indicating the model starts overfitting after epoch ~100 or so. Whether MSE 0.3 is good or bad for your application depends entirely on the application, but yes, the optimizer is optimizing just fine.
Please have a look at this resource for how to interpret loss curves: https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/
"from the curvilinear trend of validate_loss, it seems that my networks learn few things" - It would help if you explain (in a lot of detail :) WHY you think that, in order to get better answers. What do you expect to see other than what you see? I look at the same graph and it seems to me your network is learning to model the data and predict whatever you're trying to predict.
| https://stackoverflow.com/questions/62988773/ |
Using pre-trained weights in PyTorch | I am working on implementing a research paper based on computer vision in PyTorch. I have built the model architecture by referring to the paper. The author has uploaded saved weights on GitHub in ".pth.tar" format. I want to put the same weights in my model so that I can skip training and optimization part and directly get output from the neural net.
The paper is Learning to see in the dark.
Model architecture is as follow:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv1d(32, 12, 1)
        .
        .
    def forward(self, x):
        x = F.relu(self.conv1(x))
        .
        .
        return x
net = Net()
And it is to be followed by importing trained weight from google drive/cloud storage and defining the function to put the trained weights in the net.
PS: Model architecture is exactly same for both
| If you are using google colab
#mount drive onto google colab
from google.colab import drive
drive.mount('/content/gdrive')
Define the path of the weights
weights_path="/content/gdrive/My Drive/weights.pth"
Extract the tar file
!tar -xvf weights.pth.tar
Load the weights into the model net
net=torch.load(weights_path)
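Note that this only works if the file stores the whole model object. If, as is common for .pth.tar checkpoints, it stores a state dict (possibly nested inside a dictionary), a sketch of the alternative would be:
checkpoint = torch.load(weights_path, map_location='cpu')
# many .pth.tar checkpoints wrap the weights, e.g. under a 'state_dict' key
state_dict = checkpoint.get('state_dict', checkpoint) if isinstance(checkpoint, dict) else checkpoint
net = Net()  # the architecture defined above
net.load_state_dict(state_dict)
net.eval()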
| https://stackoverflow.com/questions/62994194/ |
Cuda out of memory in loop on second forward pass? | I have the following problem:
model = MyModel()
model.load_state_dict(checkpoint[weights])
model.train()
data, label = get_data()  # just take one training example
data.cuda()
for i in range(10):  # let's predict the data 10 times
    output = model(data)
    print(i)
I can do one forward step (i is printed 0) but then I get a Cuda OOM error at output = model(data).
It works if I use with torch.no_grad() but I'd like to train in this forward loop and so I need the gradients later.
Do I somehow have to clear the earlier output or data so it doesn't take up my Vram?
output = model(data.clone()) # this doesn't fix the problem
for i in range(10):
    model.zero_grad()  # doesn't work either
    output = model(data)
| Ok you have to manually remove the output tensor.
It is kept alive even though the loop starts over again, so its computation graph is never freed:
This is fixable by
for i in range(10):
    model.zero_grad()
    output = model(data)
    del output
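If you actually want to train inside this loop (as mentioned in the question), computing the loss and calling backward each iteration also frees the graph, so memory stops accumulating; a rough sketch assuming some criterion and optimizer exist:
for i in range(10):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, label)
    loss.backward()  # frees the graph built during this iteration
    optimizer.step()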
| https://stackoverflow.com/questions/62994816/ |
pytorch sliding window with unfold & fold | I'm trying to use a simple ssd that was trained on 300x300 data with annotated bounding boxes. If I crop the images manually, it works correctly but with full size images it fails (obviously) due to resizing large images to 300x300 removes many visual features.
I figured the good old sliding window will work here, but I have some problems rebuilding the images with detection and must admit I'm a bit clueless on how to approach, What I have so far is:
at first, I tried this:
chips = F.unfold(img_t.data, kernel_size=300)
following some examples from stack overflow, but this gives me error Input Error: Only 4D input Tensors are supported (got 3D)
so following some more googling I found something that works:
patch_w = 300
patch_h = 300
patches = img_t.data.unfold(0, 3, 3).unfold(1, patch_w, patch_h).unfold(2, patch_w, patch_h)
#Visualise small part:
fig = plt.figure(figsize=(4, 4))
fig.tight_layout()
plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, wspace=0.01, hspace=0.01)
for i in range(4):
    for j in range(4):
        inp = transp(patches[0][i][j])
        inp = np.array(inp)
        ax = fig.add_subplot(4, 4, ((i*4)+j)+1, xticks=[], yticks=[])
        plt.imshow(inp)
plt.show()
I then feed the patches to my detector and it looks more or less OK, but there's no overlap (an object can be cut into pieces and missed) and, more importantly, I can't reverse the process with fold without getting drowned in exceptions.
I'm not adamant on using the fold/unfold combination for the task as what I really want is to be able to feed large image into the network in a way that will preserve as much information as possible, mark down the detections and rebuild image with bounding boxes from the smaller patches.
What I came up with is this:
new_im = Image.new("RGB", (300*dims[1], 300*dims[0]))
idx = 0
for i in range(dims[0]):
    for j in range(dims[1]):
        new_im.paste(tiles[idx], (j*300, i*300))
        idx += 1
new_im.show()
which rebuilds the image, but in a very artificial way: the detector annotates the image crops and returns them as a list of images, which I then reassemble here; this is both ugly and inefficient.
After a bit of fiddling, I got it to work, but there comes a peculiarity of PyTorch. It adds overlapping parts of patches instead of averaging them (see image). How can I fix it? I realise normalisation won't do anything here since it would normalise the good pixels as well, so it needs to just average overlapping pixels.
Also, please note the image was cropped erroneously. Simple code to reproduce:
def fold_unfold(img_path):
    transt = transforms.Compose([transforms.ToTensor(),
                                 # transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
                                 ])
    transp = transforms.Compose([
        # transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
        transforms.ToPILImage()
    ])
    img_t = transt(Image.open(img_path))
    img_t = img_t.unsqueeze(0)
    kernel = 300
    stride = 200
    img_shape = img_t.shape
    B, C, H, W = img_shape
    # number of pixels missing:
    pad_w = W % kernel
    pad_h = H % kernel
    # Padding the **INPUT** image with missing pixels:
    img_t = F.pad(input=img_t, pad=(pad_w//2, pad_w-pad_w//2, pad_h//2, pad_h-pad_h//2), mode='constant', value=0)
    img_shape = img_t.shape
    B, C, H, W = img_shape
    print("\n-----input shape: ", img_shape)
    patches = img_t.unfold(3, kernel, stride).unfold(2, kernel, stride).permute(0, 1, 2, 3, 5, 4)
    print("\n-----patches shape:", patches.shape)
    # reshape output to match F.fold input
    patches = patches.contiguous().view(B, C, -1, kernel*kernel)
    print("\n", patches.shape)  # [B, C, nb_patches_all, kernel_size*kernel_size]
    patches = patches.permute(0, 1, 3, 2)
    print("\n", patches.shape)  # [B, C, kernel_size*kernel_size, nb_patches_all]
    patches = patches.contiguous().view(B, C*kernel*kernel, -1)
    print("\n", patches.shape)  # [B, C*prod(kernel_size), L] as expected by Fold
    # https://pytorch.org/docs/stable/nn.html#torch.nn.Fold
    output = F.fold(patches, output_size=(H, W), kernel_size=kernel, stride=stride)
    # mask that mimics the original folding:
    recovery_mask = F.fold(torch.ones_like(patches), output_size=(H, W), kernel_size=kernel, stride=stride)
    output = output/recovery_mask
    print(output.shape)  # [B, C, H, W]
    aspil = transp(output[0])
    aspil.show()
still, the image is cropped quite a lot so something is still wrong:
Finally, I got the cropping sorted out; the code above has been updated to the working version.
The problem was coming from the way PyTorch does the unfolding. The tensor unfold method doesn't zero-pad automatically, but rather cuts off the part of the image that doesn't fit into the sliding window. I solved it by zero-padding the tensor before cropping it.
| I'm posting this as an answer so it's easier for others to find a solution if they encounter a similar problem.
Key Highlights:
pytorch unfold will crop out the part of the image that doesn't fit into the sliding window used (e.g. with a 300x300 image and a 100x100 window, nothing would get cropped, but with a 290x290 image and the same window the cropping will, well... crop out the last 90 rows and columns of the original image). The solution is to zero-pad the image preemptively to match the size of the sliding window.
the size of the input image will change after zero padding (no surprise here) but it's easy to forget about that when reconstructing the original image.
Ideally you may want to crop the image to original size in the end, but with sliding window approach it makes more sense for my application to keep the padding around the image so that the center of my detector can be applied to edges of the image too.
Unfolding: I couldn't find a practical difference between patches = img_t.unfold(3, kernel, stride).unfold(2, kernel, stride).permute(0,1,2,3,5,4) and
patches = img_t.unfold(2, kernel, stride).unfold(3, kernel, stride) so explanation on that would be welcome.
Image tensor must be reshaped a number of times before it is possible to fold it back into the original (padded!) image.
normalisation - not in the sense of an image transform, but rather to revert the effect of the sliding-window overlap. Another peculiarity of pytorch I found is the way it pastes tensors onto one another when folding overlapping patches. Instead of taking the average of the overlap area, it adds them together. This can be reverted with a form of overlap mask. This mask has the exact shape of the produced patches and a value of 1 for each point. After folding, the value of each pixel/point will be equal to the number of stacked folds. Ultimately this is the denominator for averaging colors for the overlaps.
The code that ultimately worked for me:
import torch
from torchvision.transforms import transforms
import torch.nn.functional as F
from PIL import Image
img_path = 'filename.jpg'
def fold_unfold(img_path):
    transt = transforms.Compose([transforms.ToTensor(),
                                 # transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
                                 ])
    transp = transforms.Compose([
        # transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
        transforms.ToPILImage()
    ])
    img_t = transt(Image.open(img_path))
    img_t = img_t.unsqueeze(0)
    kernel = 300
    stride = 200  # smaller than kernel will lead to overlap
    img_shape = img_t.shape
    B, C, H, W = img_shape  # Batch size, here 1, channels (3), height, width
    # number of pixels missing in each dimension:
    pad_w = W % kernel
    pad_h = H % kernel
    # Padding the **INPUT** image with missing pixels:
    img_t = F.pad(input=img_t, pad=(pad_w//2, pad_w-pad_w//2,
                                    pad_h//2, pad_h-pad_h//2), mode='constant', value=0)
    img_shape = img_t.shape
    # UPDATE the shape information to account for padding
    B, C, H, W = img_shape
    print("\n----- input shape: ", img_shape)
    patches = img_t.unfold(3, kernel, stride).unfold(2, kernel, stride).permute(0, 1, 2, 3, 5, 4)
    print("\n----- patches shape:", patches.shape)
    # reshape output to match F.fold input
    patches = patches.contiguous().view(B, C, -1, kernel*kernel)
    print("\n", patches.shape)  # [B, C, nb_patches_all, kernel_size*kernel_size]
    patches = patches.permute(0, 1, 3, 2)
    print("\n", patches.shape)  # [B, C, kernel_size*kernel_size, nb_patches_all]
    patches = patches.contiguous().view(B, C*kernel*kernel, -1)
    print("\n", patches.shape)  # [B, C*prod(kernel_size), L] as expected by Fold
    # https://pytorch.org/docs/stable/nn.html#torch.nn.Fold
    output = F.fold(patches, output_size=(H, W),
                    kernel_size=kernel, stride=stride)
    # mask that mimics the original folding:
    recovery_mask = F.fold(torch.ones_like(patches), output_size=(H, W),
                           kernel_size=kernel, stride=stride)
    output = output/recovery_mask
    print(output.shape)  # [B, C, H, W]
    aspil = transp(output[0])
    aspil.show()
fold_unfold(img_path)
| https://stackoverflow.com/questions/62995726/ |
Update targets in classification images | Why are we updating targets in the implementation of bayesian cnn with mc dropout here?
https://github.com/sungyubkim/MCDO/blob/master/Bayesian_CNN_with_MCDO.ipynb?fbclid=IwAR18IMLcdUUp90TRoYodsJS7GW1smk-KGYovNpojn8LtRhDQckFI_gnpOYc
def update_target(target, original, update_rate):
    for target_param, param in zip(target.parameters(), original.parameters()):
        target_param.data.copy_((1.0 - update_rate) * target_param.data + update_rate * param.data)
| The implementation you have referred to is a data-parallel one, which means the author intends to train multiple networks with the same architecture but different hyper-parameters.
Although in an unconventional way, this is what update_target does:
update_target(net_test, net, 0.001)
It updates net_test with a lower learning rate compared to net, but with the exact same parameter changes applied to the original net, which is actually being trained. Only the scale of the change is different.
I am assuming that this is found to be useful in terms of computational efficiency, since only one of the networks is actually being "trained" during the main training phase:
outputs = net(inputs)
loss = CE(outputs, labels)
loss.backward()
optimizer.step()
One less forward pass and one less backprop per step.
| https://stackoverflow.com/questions/62998832/ |
load and freeze one model and train others in PyTorch | I have a model A that including three submodels model1, model2, model3.
the model flow: model1 --> model2 --> model3
I have trained model1 in an independent project.
The question is how to use the pre-trained model1 when training the model A?
Now, I try to implement this as follow:
I load the checkpoint of model1 with model1.load_state_dict(torch.load('model1.pth')) and then set the requires_grad of model1's parameters to False?
Is it right?
| Yes, that is correct.
When you structure your model the way you explained, what you are doing is correct.
ModelA consists of three submodels - model1, models, model3
Then you load the weights of each individual model with model*.load_state_dict(torch.load(model*.pth))
Then make requires_grad=False for the model you want to freeze.
for param in model*.parameters():
    param.requires_grad = False
You can also freeze the weights of particular layers by accessing the submodules. For example, if you have a layer named fc in model1, then you can freeze its weights by making model1.fc.weight.requires_grad = False.
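One extra detail worth adding: when you build the optimizer for the combined model, it is common to pass only the parameters that still require gradients; a sketch, assuming model_A is the combined model from the question:
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model_A.parameters()), lr=1e-3)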
| https://stackoverflow.com/questions/62998911/ |
Getting different eigenvalues between using numpy.linalg.eigh() and torch.symeig() | I am trying to understand why am I getting different eigenvalues between using numpy.linalg.eigh() and torch.symeig().
An example is as below:
Code:
import numpy as np
import torch
arr_symmetric = np.array([[1.,2,3], [2,5,6], [3,6,9]])
arr_symmetric, arr_symmetric.dtype
Output:
(array([[1., 2., 3.],
[2., 5., 6.],
[3., 6., 9.]]), dtype('float64'))
Code:
tsr_symmetric = torch.tensor(arr_symmetric)
tsr_symmetric
Output:
tensor([[1., 2., 3.],
[2., 5., 6.],
[3., 6., 9.]], dtype=torch.float64)
Code:
w, v = np.linalg.eigh(arr_symmetric)
w, v
Output:
(array([4.05517871e-16, 6.99264746e-01, 1.43007353e+01]),
array([[-9.48683298e-01, 1.77819106e-01, -2.61496397e-01],
[ 2.22044605e-16, -8.26924214e-01, -5.62313386e-01],
[ 3.16227766e-01, 5.33457318e-01, -7.84489190e-01]]))
Code:
e, v = torch.symeig(tsr_symmetric, eigenvectors=True)
e, v
Output:
(tensor([-2.6047e-16, 6.9926e-01, 1.4301e+01], dtype=torch.float64),
tensor([[ 9.4868e-01, -1.7782e-01, 2.6150e-01],
[ 8.6389e-16, 8.2692e-01, 5.6231e-01],
[-3.1623e-01, -5.3346e-01, 7.8449e-01]], dtype=torch.float64))
As you can see one of the eigenvalues is different, ie. 4.05517871e-16 vs. -2.6047e-16
Why is this happening?
| 4.05517871e-16 is very close to zero, and so is -2.6047e-16; they differ only by floating-point noise. You can verify this as below, because input = V·e·V^T where e is a diagonal matrix with the eigenvalues on the diagonal.
import numpy as np
import torch
arr_symmetric = np.array([[1.,2,3], [2,5,6], [3,6,9]])
e, v = np.linalg.eigh(arr_symmetric)
print (np.dot(v, np.dot(np.diag(e), v.T)))
for i in range(3):
    print(np.dot(arr_symmetric, v[:, i].reshape(-1, 1)), e[i]*v[:, i])
e, v = torch.symeig(torch.tensor(arr_symmetric), eigenvectors=True)
print(torch.matmul(v, torch.matmul(e.diag_embed(), v.transpose(-2, -1))))
for i in range(3):
    print(np.dot(arr_symmetric, v[:, i].reshape(-1, 1)), e[i]*v[:, i])
Output:
[[1. 2. 3.]
[2. 5. 6.]
[3. 6. 9.]]
[[3.33066907e-16]
[8.88178420e-16]
[8.88178420e-16]] [-3.84708031e-16 9.00430554e-32 1.28236010e-16]
[[ 0.12434263]
[-0.57823895]
[ 0.3730279 ]] [ 0.12434263 -0.57823895 0.3730279 ]
[[ -3.73959074]
[ -8.04149487]
[-11.21877222]] [ -3.73959074 -8.04149487 -11.21877222]
tensor([[1.0000, 2.0000, 3.0000],
[2.0000, 5.0000, 6.0000],
[3.0000, 6.0000, 9.0000]], dtype=torch.float64)
[[-3.33066907e-16]
[ 0.00000000e+00]
[-8.88178420e-16]] tensor([-2.4710e-16, -2.2502e-31, 8.2368e-17], dtype=torch.float64)
[[-0.12434263]
[ 0.57823895]
[-0.3730279 ]] tensor([-0.1243, 0.5782, -0.3730], dtype=torch.float64)
[[ 3.73959074]
[ 8.04149487]
[11.21877222]] tensor([ 3.7396, 8.0415, 11.2188], dtype=torch.float64)
| https://stackoverflow.com/questions/63000741/ |
What does it mean to assert an object in python? | We are getting an assertion error in this line "assert dataset". We are printing the dataset object and the value we got was this '<datasets.TextDataset object at 0x000002531F10E408>'. We are using 'python 3.7' in this code. Why are we getting assertion error on dataset object?
We are basically trying to run the code of AttnGAN (https://github.com/taoxugit/AttnGAN).
The error is happening on Line: 130 in 'code/main.py'.
Code
dataset = TextDataset(cfg.DATA_DIR, split_dir, base_size=cfg.TREE.BASE_SIZE, transform=image_transform)
print(dataset)
assert dataset
dataloader = torch.utils.data.DataLoader(dataset, batch_size=cfg.TRAIN.BATCH_SIZE, drop_last=True, shuffle=bshuffle, num_workers=int(cfg.WORKERS))
Output
Load from: C:\Users\admin\Desktop\TextToImage\AttnGAN-master (1)\AttnGAN-master\code/../data/birds/captions.pickle
<datasets.TextDataset object at 0x000002531F10E408>
Traceback (most recent call last):
File "./code/main.py", line 131, in
assert dataset
AssertionError
PS C:\Users\admin\Desktop\TextToImage\AttnGAN-master (1)\AttnGAN-master>
| In this case, assert dataset is a not-very-clear way of checking that the dataset is not empty. assert throws an exception if the expression (in this case the dataset object) evaluates to false.
https://docs.python.org/3/library/stdtypes.html "Truth Value Testing" says
By default, an object is considered true unless its class defines
either a __bool__() method that returns False or a __len__() method
that returns zero
Looking at the GitHub repo, TextDataset does define __len__(). The logical conclusion is that the returned length of the dataset in your case (after it is loaded) is zero.
Try to look at where it is loading data from, try to make sure the data is there, and print the length before the assertion. Bonus: try to figure out why the original loading doesn't throw an exception but succeeds and produces an empty dataset.
| https://stackoverflow.com/questions/63001830/ |
How to use AMD GPU for fastai/pytorch? | I'm using a laptop which has Intel Corporation HD Graphics 5500 (rev 09), and AMD Radeon r5 m255 graphics card.
Does anyone know how to it set up for Deep Learning, specifically fastai/Pytorch?
| Update 2:
Microsoft has release Pytorch_DML a few hours ago.
You can now install it (in windows or WSL) using pypi package:
pytorch-directml 1.8.0a0.dev211021
pip install pytorch-directml
So if you are on windows or using WSL, you can hop in and give this a try!
Update :
As of Pytorch 1.8 (March 04, 2021), AMD ROCm versions are made available from Pytorch's official website. You can now easily install them on Linux and Mac, the same way you used to install the CUDA/CPU versions.
Currently, the pip packages are being provided only. Also, the Mac and Windows platforms are still not supported (I haven't tested with WSL2 though!)
Old answer:
You need to install the ROCm version. The official AMD instructions on building Pytorch is here.
There was previously a wheel package for rocm, but it seems AMD doesn't distribute that anymore, and instead, you need to build PyTorch from the source as the guide which I linked to explains.
However, you may consult this page, to build the latest PyTorch version: The unofficial page of ROCm/PyTorch.
| https://stackoverflow.com/questions/63008040/ |
using huggingface Trainer with distributed data parallel | To speed up performance I looked into PyTorch's DistributedDataParallel and tried to apply it to the transformers Trainer.
The pytorch examples for DDP states that this should at least be faster:
DataParallel is single-process, multi-thread, and only works on a single machine, while DistributedDataParallel is multi-process and works for both single- and multi- machine training. DataParallel is usually slower than DistributedDataParallel even on a single machine due to GIL contention across threads, per-iteration replicated model, and additional overhead introduced by scattering inputs and gathering outputs.
My DataParallel trainer looks like this:
import os
from datetime import datetime
import sys
import torch
from transformers import Trainer, TrainingArguments, BertConfig
training_args = TrainingArguments(
output_dir=os.path.join(path_storage, 'results', "mlm"), # output directory
num_train_epochs=1, # total # of training epochs
gradient_accumulation_steps=2, # for accumulation over multiple steps
per_device_train_batch_size=4, # batch size per device during training
per_device_eval_batch_size=4, # batch size for evaluation
logging_dir=os.path.join(path_storage, 'logs', "mlm"), # directory for storing logs
evaluate_during_training=False,
max_steps=20,
)
mlm_train_dataset = ProteinBertMaskedLMDataset(
path_vocab, os.path.join(path_storage, "data", "uniparc", "uniparc_train_sorted.h5"),
)
mlm_config = BertConfig(
vocab_size=mlm_train_dataset.tokenizer.vocab_size,
max_position_embeddings=mlm_train_dataset.input_size
)
mlm_model = ProteinBertForMaskedLM(mlm_config)
trainer = Trainer(
model=mlm_model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=mlm_train_dataset, # training dataset
data_collator=mlm_train_dataset.collate_fn,
)
print("build trainer with on device:", training_args.device, "with n gpus:", training_args.n_gpu)
start = datetime.now()
trainer.train()
print(f"finished in {datetime.now() - start} seconds")
The output:
build trainer with on device: cuda:0 with n gpus: 4
finished in 0:02:47.537038 seconds
My DistributedDataParallel trainer is built like this:
def create_transformer_trainer(rank, world_size, train_dataset, model):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    os.environ["RANK"] = str(rank)
    os.environ["WORLD_SIZE"] = str(world_size)
    training_args = TrainingArguments(
        output_dir=os.path.join(path_storage, 'results', "mlm"),  # output directory
        num_train_epochs=1,  # total # of training epochs
        gradient_accumulation_steps=2,  # for accumulation over multiple steps
        per_device_train_batch_size=4,  # batch size per device during training
        per_device_eval_batch_size=4,  # batch size for evaluation
        logging_dir=os.path.join(path_storage, 'logs', "mlm"),  # directory for storing logs
        local_rank=rank,
        max_steps=20,
    )
    trainer = Trainer(
        model=model,  # the instantiated Transformers model to be trained
        args=training_args,  # training arguments, defined above
        train_dataset=train_dataset,  # training dataset
        data_collator=train_dataset.collate_fn,
    )
    print("build trainer with on device:", training_args.device, "with n gpus:", training_args.n_gpu)
    start = datetime.now()
    trainer.train()
    print(f"finished in {datetime.now() - start} seconds")
mlm_train_dataset = ProteinBertMaskedLMDataset(
path_vocab, os.path.join(path_storage, "data", "uniparc", "uniparc_train_sorted.h5"))
mlm_config = BertConfig(
vocab_size=mlm_train_dataset.tokenizer.vocab_size,
max_position_embeddings=mlm_train_dataset.input_size
)
mlm_model = ProteinBertForMaskedLM(mlm_config)
torch.multiprocessing.spawn(create_transformer_trainer,
args=(4, mlm_train_dataset, mlm_model),
nprocs=4,
join=True)
The output:
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
build trainer with on device: cuda:1 with n gpus: 1
build trainer with on device: cuda:2 with n gpus: 1
build trainer with on device: cuda:3 with n gpus: 1
build trainer with on device: cuda:0 with n gpus: 1
finished in 0:04:15.937331 seconds
finished in 0:04:16.899411 seconds
finished in 0:04:16.938141 seconds
finished in 0:04:17.391887 seconds
About the initial forking warning: what exactly is forked, and is this expected?
And about the resulting time: is the trainer being used incorrectly, since it seems to be a lot slower than the DataParallel approach?
| Kind of late to the party but anyway. I'm going to leave this comment here to help anyone wondering if it is possible to keep the parallelism in the tokenizer.
According to this comment on github the FastTokenizers seem to be the issue.
Also according to another comment on gitmemory you shouldn't use the tokenizer before forking the process. (which basically means before iterating through your dataloader)
So the solution is to not use FastTokenizers before training/fine-tuning or use the normal Tokenizers.
Check the huggingface documentation to find out if you really need the FastTokenizer.
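For what it's worth, here is a minimal sketch of the two workarounds. It assumes a standard Hugging Face tokenizer; the ProteinBert dataset in the question builds its own tokenizer, so the exact call will differ:
import os
# either disable tokenizer parallelism before the processes are forked ...
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# ... or fall back to the slow, pure-Python tokenizer, which is not affected:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)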
| https://stackoverflow.com/questions/63017931/ |
How does pytorch compute derivatives for simple functions? | When we talk about the auto-differentiation in the pytorch, we are usually presented a graphical structures of tensors based on their formulas, and pytorch will compute the gradients by tracing down the graphical tree using chain rules. However, I want to know what will happen at the leaf nodes? Does pytorch hardcode a whole list of basic functions with their analytical derivatives, or does it compute the gradients using numerical methods? A quick example:
import torch
def f(x):
return x ** 2
x = torch.tensor([1.0], requires_grad=True)
y = f(x)
y.backward()
print(x.grad) # 2.0
In this example, does pytorch compute the derivative by $$ (x^2)' = 2x = 2 * 1 = 2 $$, or does pytorch compute in a way similar to $$ (1.00001^2 - 1^2) / (1.000001 - 1) ~ 2 $$ ?
Thanks!
| See this paper for exact answer, specifically section 2.1 or figure 2.
In short, PyTorch has a list of basic functions together with the expressions for their derivatives. So, for your case (y = x**2), it evaluates $$ y' = 2x $$ analytically.
The numerical method you mentioned is called numerical differentiation or finite differences, and it is an approximation of the derivative. But it is not what PyTorch does.
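To make the difference concrete, here is a tiny sketch for f(x) = x**2 at x = 1:
import torch
x = torch.tensor([1.0], requires_grad=True)
y = x ** 2
y.backward()
print(x.grad)  # exact 2.0, obtained from the stored rule d(x**2)/dx = 2x
# finite differences (NOT what PyTorch does), only an approximation:
eps = 1e-5
print(((1.0 + eps) ** 2 - 1.0 ** 2) / eps)  # ~2.00001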
| https://stackoverflow.com/questions/63026854/ |
pytorch Gradient with respect to 3D input | I want to construct sobolev network for 3D input regression
In TensorFlow, the gradients of neural network model can be computed using tf.gradient like:
dfdx,dfdy,dfdz = tf.gradients(pred,[x,y,z])
Let M be a torch neural network with 5 layers.
If X is a set of (x, y, z) points (3-dim data)
and M.forward(X) is a 1-dim output,
how can I compute the gradient of M.forward(X) with respect to X? Something like:
tf.gradient(M.forward(X),X)
| If you want to compute gradient of this function for example
y_i = 5*(x_i + 1)²
Create tensor of size 2x1 filled with 1's that requires gradient
x = torch.ones(2, requires_grad=True)
Simple linear equation with x tensor created
y = 5 * (x + 1) ** 2
Now take o as a scalar-valued reduction of the multi-dimensional output
o = 1/2 * sum(y_i)
in python
o = (1/2) * torch.sum(y)
you can compute grad with
o.backward()
x.grad
you can get more information here https://www.deeplearningwizard.com/deep_learning/practical_pytorch/pytorch_gradients/
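To directly answer the tf.gradients(pred, [x, y, z]) part, here is a minimal sketch assuming X holds the (x, y, z) points as a tensor of shape (N, 3) and M is the network from the question, which processes each point independently:
import torch
X = torch.randn(8, 3, requires_grad=True)      # hypothetical batch of (x, y, z) points
pred = M(X)                                    # shape (N, 1) or (N,)
grads = torch.autograd.grad(pred.sum(), X)[0]  # same shape as X
dfdx, dfdy, dfdz = grads[:, 0], grads[:, 1], grads[:, 2]
Summing pred before calling autograd.grad simply collects the per-point gradients into one backward pass; since each output only depends on its own input point, each row of grads is the gradient for that point.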
| https://stackoverflow.com/questions/63027843/ |
What are some ways to speed up data loading on large sparse arrays (~1 million x 1 million, density ~0.0001) in Pytorch? | I am working on a binary classification problem. I have ~1.5 million data points, and the dimensionality of the feature space is 1 million. This dataset is stored as a sparse array, with a density of ~0.0001. For this post, I'll limit the scope to assume that the model is a shallow feedforward neural network, and also assume that the dimensionality has already been optimized (so cannot be reduced below 1 million). Naiive approaches to create mini-batches out of this data to feed into the network would take a lot of time (As an example, a basic approach of creating a TensorDataset (map style) from a torch.sparse.FloatTensor representation of the input array, and wrapping a DataLoader around it, means ~20s to get a mini-batch of 32 to the network, as opposed to say ~0.1s to perform the actual training). I am looking for ways to speed this up.
What I've tried
I first figured that reading from such a large sparse array in every iteration of the DataLoader was computationally intensive, so I broke down this sparse array into smaller sparse arrays
For the DataLoader to read from these multiple sparse arrays in an iterative fashion, I replaced the map style dataset that I had inside the DataLoader with an IterableDataset, and streamed these smaller sparse arrays into this IterableDataset like so:
import torch
from itertools import chain
from scipy import sparse
class SparseIterDataset(torch.utils.data.IterableDataset):
def __init__(self, fpaths):
super().__init__()
self.fpaths = fpaths
def read_from_file(self, fpath):
data = sparse.load_npz(fpath).toarray()
for d in data:
yield torch.Tensor(d)
def get_stream(self, fpaths):
return chain.from_iterable(map(self.read_from_file, fpaths))
def __iter__(self):
return self.get_stream(self.fpaths)
With this approach, I was able to bring down the time from the naive base case of ~20s to ~0.2s per minibatch of 32. However, given that my dataset has ~1.5 million samples, this still implies a lot of time spent on even making one pass through the dataset. (As a comparison, even though it's slightly apples to oranges, running a logistic regression on scikit-learn on the original sparse array takes about ~6s per iteration through the whole dataset. With pytorch, with the approach I just outlined, it would take ~3000s just to load all the minibatches in an epoch)
One thing which I am aware of but yet to try is using multiprocess data loading by setting the num_workers argument in the DataLoader. I believe this has its own catches in the case of iterable style datasets though. Plus even a 10x speedup would still mean ~300s per epoch in loading mini batches. I feel I'm being inordinately slow! Are there any other approaches/improvements/best practices that you could suggest?
| Your dataset in un-sparsified form would be 1.5M x 1M x 1 byte = 1.5TB as uint8, or 1.5M x 1M x 4 byte = 6TB as float32. Simply reading 6TB from memory to CPU could take 5-10 minutes on a modern CPU (depending on the architecture), and transfer speeds from CPU to GPU would be a bit slower than that (NVIDIA V100 on PCIe has 32GB/s theoretical).
Approaches:
Benchmark everything individually - eg in jupyter
%%timeit data = sparse.load_npz(fpath).toarray()
%%timeit dense = data.toarray() # un-sparsify for comparison
%%timeit t = torch.tensor(data) # probably about the same as the line above
Also print out the shapes and datatypes of everything to make sure they are as expected. I haven't tried running your code but I am pretty sure that (a) sparse.load_npz is extremely fast and unlikely to be a bottleneck, but (b) torch.tensor(data) produces a dense tensor and is also quite slow here
Use torch.sparse. I think torch sparse tensors can be used as regular tensors in most cases. You'd have to do some data prep to convert from scipy.sparse to torch.sparse:
A sparse tensor is represented as a pair of dense tensors: a tensor of
values and a 2D tensor of indices. A sparse tensor can be constructed by
providing these two tensors, as well as the size of the sparse tensor
You mention torch.sparse.FloatTensor but I'm pretty sure you're not making sparse tensors in your code - there is no reason to expect those would be constructed simply from passing a scipy.sparse array to a regular tensor constructor, since that's not how they're usually made.
If you figure out a good way to do this, I recommend you post it as a project or git on github, it would be quite useful.
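A hedged sketch of one way to do that conversion, reusing the load_npz call from the question but keeping the data sparse instead of calling .toarray():
import numpy as np
import torch
from scipy import sparse
m = sparse.load_npz(fpath).tocoo()  # COO format exposes explicit row/col indices
indices = torch.LongTensor(np.vstack((m.row, m.col)))
values = torch.FloatTensor(m.data)
t = torch.sparse_coo_tensor(indices, values, m.shape)
Whether the rest of the pipeline (DataLoader collation, the first linear layer) accepts such tensors is something you would still have to verify.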
If torch.sparse doesn't work out, think of other ways to either convert the data to dense only on the GPU, or avoid converting it entirely.
See also:
https://towardsdatascience.com/sparse-matrices-in-pytorch-be8ecaccae6
https://github.com/rusty1s/pytorch_sparse
| https://stackoverflow.com/questions/63037571/ |
Why is the super constructor necessary in PyTorch custom modules? | Why does super(LR, self).__init__() need to be called in the code below? I get the error "AttributeError: cannot assign module before Module.init() call" otherwise. That error is caused by self.linear = nn.Linear(input_size, output_size).
I don't understand what the connection is between calling super(LR, self).__init__() and being able to assign the nn.Linear object to self.linear. nn.Linear is a separate object which can be assigned to a variable outside of any class, so why does super(LR, self).__init__() need to be called to assign a Linear object to self.linear within the class?
class LR(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
# Inherit from parent
super(LR, self).__init__()
self.test = 1
self.linear = nn.Linear(input_size, output_size)
# Prediction function
def forward(self, x):
out = self.linear(x)
return out
| When you write self.linear = nn.Linear(...) inside your custom class, you are actually calling the __setattr__ function of your class. It just happens that when you extend nn.Module, there are a bunch of things that your class is inheriting, and one of them is the __setattr__. As you can see in the implementation (I post only the relevant part below), if nn.Linear is an instance of nn.Module, your class must have an attribute called _modules, otherwise it will throw the AttributeError you got:
def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
# [...]
modules = self.__dict__.get('_modules')
if isinstance(value, Module):
if modules is None:
raise AttributeError("cannot assign module before Module.__init__() call")
remove_from(self.__dict__, self._parameters, self._buffers, self._non_persistent_buffers_set)
modules[name] = value
If you take a look at the nn.Module's __init__, you'll see that self._modules is initialized there:
def __init__(self):
"""
Initializes internal Module state, shared by both nn.Module and ScriptModule.
"""
torch._C._log_api_usage_once("python.nn_module")
self.training = True
self._parameters = OrderedDict()
self._buffers = OrderedDict()
self._non_persistent_buffers_set = set()
self._backward_hooks = OrderedDict()
self._forward_hooks = OrderedDict()
self._forward_pre_hooks = OrderedDict()
self._state_dict_hooks = OrderedDict()
self._load_state_dict_pre_hooks = OrderedDict()
self._modules = OrderedDict() # <---- here
The same is true for buffers and parameters.
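A tiny illustration with two hypothetical minimal classes:
import torch.nn as nn

class Bad(nn.Module):
    def __init__(self):
        # no super().__init__(), so self._modules does not exist yet
        self.linear = nn.Linear(2, 1)  # raises: cannot assign module before Module.__init__() call

class Good(nn.Module):
    def __init__(self):
        super().__init__()             # creates _modules, _parameters, _buffers, ...
        self.linear = nn.Linear(2, 1)  # __setattr__ can now register the submodule

Good()    # works
# Bad()   # raises the AttributeError from the question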
| https://stackoverflow.com/questions/63058355/ |
How to fix Process finished with exit code -1073741819 (0xC0000005) when using Convolutional layers in pytorch, error on backward() | When I use Conv1d or Conv2d layers on pytorch, the process is killed unexpectedly. I am getting the error in the following line:
loss.backward()
My set up:
Windows 10
cuda 10.2
cudnn 7.6.5
RTX 2060 Super
Nvidia driver 451.67
Pycharm 2020.04
Error:
Process finished with exit code -1073741819 (0xC0000005)
In comparison, when I replace the conv layer with a dense layer, the problem doesn't occur.
For more comparison, the same project and the same code was run on Ubuntu 20.04 as well, and it worked quite well.
There seems to be a known bug around this problem that happens with PyTorch on Windows when run on GPU (with CUDA).
Ensure all params supplied to Conv1d and Conv2d are correct, especially the padding value. Note that it can behave differently on other OSes like Linux/Ubuntu.
And also if you are using Python-3.6 or higher version, it could be this bug. In that case try with Python-3.5
| https://stackoverflow.com/questions/63059001/ |
Constant loss through epochs | I code this neural network to make a gaussian regression but I don't understand why my loss doesn't change through epochs. I set the learning rate to 1 to see the loss decreases but it does not. I chose to take 2000 poitns to train my Neural network. I watched several algorithms on this website and I don't really understand why my algorithm do not achieve what I expect.
I have already imported all libraries needed.
Thank you for your help
def f(x):
return x * np.sin(x) # function to predict
m =2000
X_bis = np.zeros((1,m),dtype = float)
X_bis=np.random.random(m)*10
## Create my training,validation and test set
X_train = X_bis[0:600]
X_val = X_bis[600:800]
X_test = X_bis[800:]
y_train = f(X_train)
y_val = f(X_val)
y_test = f(X_test)
mean_X_train = np.mean(X_train)
std_X_train = np.std(X_train)
mean_y_train = np.mean(y_train)
std_y_train =np.std(y_train)
class MyDataset(data.Dataset):
def __init__(self, data_feature, data_target):
self.data_feature = data_feature
self.data_target = data_target
def __len__(self):
return len(self.data_feature)
def __getitem__(self, index):
X_train_normalized = (self.data_feature[index] - mean_X_train) / std_X_train
y_train_normalized = (self.data_target[index] - mean_y_train) / std_y_train
return torch.from_numpy(np.array(X_train_normalized,ndmin=1)).float(), torch.from_numpy(np.array(y_train_normalized, ndmin = 1)).float()
training_set = MyDataset(X_train,y_train)
train_loading = torch.utils.data.DataLoader(training_set, batch_size= 100)
val_set = MyDataset(X_val, y_val)
val_loading = torch.utils.data.DataLoader(val_set, batch_size= 10)
test_set = MyDataset(X_test,y_test)
test_loading = torch.utils.data.DataLoader(test_set, batch_size= 100)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.FC1 = nn.Linear(1,10)
self.FC2 = nn.Linear(10, 1)
def forward(self, x):
x = F.relu(self.FC1(x))
x = self.FC2(x)
return x
model = Net()
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(),
lr=1, weight_decay= 0.01, momentum = 0.9)
def train(net, train_loader, optimizer, epoch):
net.train()
total_loss=0
for idx,(data, target) in enumerate(train_loader, 0):
outputs = net(data)
loss = criterion(outputs,target)
total_loss +=loss.cpu().item()
optimizer.step()
print('Epoch:', epoch , 'average training loss ', total_loss/ len(train_loader))
def test(net,test_loader):
net.eval()
total_loss = 0
for idx,(data, target) in enumerate(test_loader,0):
outputs = net(data)
outputs = outputs * std_X_train + mean_X_train
target = target * std_y_train + mean_y_train
loss = criterion(outputs,target)
total_loss += sqrt(loss.cpu().item())
print('average testing loss', total_loss/len(test_loader))
for epoch in range(50):
train(model,train_loading,optimizer,epoch)
test(model,val_loading)
| I'm wondering why you don't have loss.backward() after the line that you compute the loss (i.e., loss = criterion(outputs,target)) in your training snippet. This will help backpropagating and ultimately updating the parameters of your network upon optimizer.step(). Also, try using lower learning rates as lr=1 normally is too much in training such networks. Try using learning rates in between 0.001-0.01 to see if your network is learning the mapping between input X and target Y.
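A sketch of the corrected inner loop (adding loss.backward(), and also optimizer.zero_grad(), which the original loop was missing), using the same variables as your train function:
for idx, (data, target) in enumerate(train_loader, 0):
    optimizer.zero_grad()
    outputs = net(data)
    loss = criterion(outputs, target)
    loss.backward()    # backpropagate before updating the parameters
    optimizer.step()
    total_loss += loss.item()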
| https://stackoverflow.com/questions/63060735/ |
pytorch when do I need to use `.to(device)` on a model or tensor? | I am new to Pytorch, but it seems pretty nice. My only question was when to use tensor.to(device) or Module.nn.to(device).
I was reading the documentation on this topic, and it indicates that this method will move the tensor or model to the specified device. But I was not clear for what operations this is necessary, and what kind of errors I will get if I don't use .to() at the right time?
For example, if I just create a tensor, I imagine that the tensor is stored in CPU accessible memory until I move the tensor to the GPU. Once the tensor is on the GPU, then the GPU will execute any mathematical operations on that tensor.
However, do I have to worry about accidentally transferring the data tensor to the GPU while not transferring the model to the GPU? Will this just give me straight errors, or will it engage in a lot of expensive data transfer behind the scenes. This example is easy enough for me to test, but I was just wondering about other cases where it might not be so obvious.
Any guidance would be helpful.
| It is necessary to have both the model, and the data on the same device, either CPU or GPU, for the model to process data. Data on CPU and model on GPU, or vice-versa, will result in a Runtime error.
You can set a variable device to cuda if it's available, else it will be set to cpu, and then transfer data and model to device :
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
data = data.to(device)
| https://stackoverflow.com/questions/63061779/ |
Pytorch and Torchvision are compiled different CUDA versions | RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=10.2 and torchvision has CUDA Version=10.1. Please reinstall the torchvision that matches your PyTorch install.
I am trying to run YOLACT on my Google Colab and found this error. Can someone help solve this issue?
| You need to upgrade your torchvision to one compiled with CUDA 10.2:
pip install --upgrade torchvision>=0.6.0
or, if you're using Conda:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
Check here the version you should install based on your PyTorch.
| https://stackoverflow.com/questions/63062741/ |
How to get occurrence count of item in array in-place | I have a array which I am trying to get weights for. It looks like this:
Target: tensor([ 1, 6, 5, 8, 10, 5, 4, 5, 10, 10, 9, 8, 10, 4, 10, 9])
I need to get the counts of each item in the array. So I would like it to return this:
Class count: [1, 1, 3, 2, 5, 3, 2, 3, 5, 5, 2, 2, 5, 2, 5, 2]
This is my code right now:
class_sample_count = np.unique(y, return_counts=True)[1]
weight = 1. / class_sample_count
print(f'Target: {y}, Class count: {class_sample_count}, Weight: {weight}')
samples_weight = weight[y]
weights = torch.from_numpy(samples_weight)
.. but the class count prints as this:
Class count: [1 2 3 1 2 2 5]
How can I get the counts of each value while keeping the length of the original array?
| Use return_inverse of numpy.unique:
y = [ 1, 6, 5, 8, 10, 5, 4, 5, 10, 10, 9, 8, 10, 4, 10, 9]
uniq, inv, cnt = np.unique(y, return_inverse=True, return_counts=True)
cnt[inv]
Output:
array([1, 1, 3, 2, 5, 3, 2, 3, 5, 5, 2, 2, 5, 2, 5, 2])
| https://stackoverflow.com/questions/63065765/ |
How to do hyperparameter tuning in Detectron2 | Detectron2
COCO Person Keypoint Detection Baselines with Keypoint R-CNN R50-FPN
How do I do hyperparameter tuning with the model above? Which files do I have to open?
Thanks
| You can use "config" to tune your model. Here is an official tutorial how you can use it (https://detectron2.readthedocs.io/tutorials/configs.html)
And here is the file of all hyperparameters that you can tune (https://github.com/facebookresearch/detectron2/blob/master/detectron2/config/defaults.py)
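For example, here is a hedged sketch of overriding a few common solver hyperparameters from Python with the standard detectron2 config API (the values below are placeholders, not recommendations):
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.SOLVER.BASE_LR = 0.0025              # learning rate
cfg.SOLVER.MAX_ITER = 30000              # number of training iterations
cfg.SOLVER.IMS_PER_BATCH = 2             # images per batch
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
# then build a DefaultTrainer (or your own training loop) from this cfg, as in the tutorial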
| https://stackoverflow.com/questions/63065916/ |
How to implement a Sparse Embedding in Tensorflow 2 like Pytorch Embedding(sparse=True)? | I have a network that has a lot of items that need to be embedded.
However, in each training batch, only a very small portion of the items will actually be used.
If I use the normal tf.keras.layers.Embedding layer, it will add all the items into the network parameter, thus consuming a lot of memory and decreasing speed in distributed training significantly since in each step all the unused grads are still synchronized.
What I want is, that in each training step only the actually used embedding weights are added into the graph and be computed and synchronized.
Pytorch already provides this functionality with torch.nn.Embedding(sparse=True).
How can I implement this in Tensorflow 2?
My bad... checking with tf.GradientTape() tells me that the gradient of tf.gather is already a sparse tensor, so there is nothing to worry about here.
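A quick sketch of that check, for anyone else who wonders:
import tensorflow as tf

params = tf.Variable(tf.random.normal([1000, 16]))  # the big embedding table
ids = tf.constant([3, 42, 7])                       # the few items used in this batch
with tf.GradientTape() as tape:
    out = tf.reduce_sum(tf.gather(params, ids))
grad = tape.gradient(out, params)
print(type(grad))  # tf.IndexedSlices: only the gathered rows carry a gradient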
| https://stackoverflow.com/questions/63069863/ |
migrating from keras to pytorch | i'm newly in pytorch
it is the model with bidirectional lstm, is there any body to tel me what is the equivalent of this two different lstm & bi-lstm model?
i try some torch codes but it do not worked.because this code has suitable acc in keras,i want the exact model in torch and i unfortunately can't find it:(
fist_one:
def lstm_model(embedding_size, vocab_size):
title = layers.Input(shape=(None,), dtype='int32', name='title')
body = layers.Input(shape=(None,), dtype='int32', name='body')
embedding = layers.Embedding(
mask_zero=True,
input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
trainable=True
)
lstm_1 = layers.LSTM(units=80, return_sequences=True)
lstm_2 = layers.LSTM(units=80, return_sequences=False)
emb_title = embedding(title)
print("question embedding shape", emb_title.shape)
sum_a = lstm_2(lstm_1(emb_title))
print("q_output shape", sum_a.shape)
emb_body = embedding(body)
print("answer embedding shape", emb_body.shape)
sum_b = lstm_2(lstm_1(emb_body))
print("a_output shape", sum_a.shape)
sim = layers.dot([sum_a, sum_b], axes=1, normalize=True)
print("qa_similarity shape", sim.shape)
# sim = layers.Activation(activation='sigmoid')(sim)
sim_model = models.Model(
inputs=[title, body],
outputs=[sim],
)
sim_model.compile(loss='mean_squared_error', optimizer='nadam', metrics=['accuracy'])
embedding_model = models.Model(
inputs=[title],
outputs=[sum_a]
)
return sim_model, embedding_model
second one:
def bilstm_model(embedding_size, vocab_size):
title = layers.Input(shape=(None,), dtype='int32', name='title')
body = layers.Input(shape=(None,), dtype='int32', name='body')
embedding = layers.Embedding(
mask_zero=True,
input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
trainable=True
)
lstm_1 = layers.Bidirectional(LSTM(activation='tanh', dropout=0.2, units=100, return_sequences=True))
lstm_2 = layers.Bidirectional(LSTM(activation='tanh', dropout=0.2, units=100, return_sequences=False))
sum_a = lstm_2(lstm_1(embedding(title)))
sum_b = lstm_2(lstm_1(embedding(body)))
sim = layers.dot([sum_a, sum_b], axes=1, normalize=True)
# sim = layers.Activation(activation='sigmoid')(sim)
sim_model = models.Model(
inputs=[title, body],
outputs=[sim],
)
sim_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
embedding_model = models.Model(
inputs=[title],
outputs=[sum_a]
)
return sim_model, embedding_model
I've been looking for the right answer for weeks :(
| class Model(nn.Module):
def __init__(self, **kwargs):
super().__init__()
self.embeddings = nn.Embedding(num_embeddings=kwargs["vocab_size"],
embedding_dim=kwargs["embedding_dim"],
padding_idx=kwargs["pad_idx"])
self.embeddings.weight.requires_grad = True # True to fine-tune the embeddings; set to False to freeze them
if kwargs["model"] == "lstm":
self.lstm = nn.LSTM(input_size=kwargs["embedding_dim"], # input
hidden_size=kwargs["lstm_units"], # output
num_layers=kwargs["lstm_layers"],
bidirectional=False,
batch_first=True)
if kwargs["model"] == "BiLSTM":
self.lstm = nn.LSTM(input_size=kwargs["embedding_dim"], # input
hidden_size=kwargs["bilstm_units"], # output
num_layers=kwargs["bilstm_layers"],
bidirectional=True,
batch_first=True)
self.dropout = nn.Dropout(kwargs["dropout"])
self.tanh = F.tanh
self.dropout = nn.Dropout(kwargs["dropout"])
def forward(self):
pass
class LSTM_Model(Model):
"""
a class to define multiple models
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
def forward(self, question, answer):
question_embedding = self.embeddings(question)
# print("question embedding shape:", question_embedding.shape)
answer_embedding = self.embeddings(answer)
# print("answer embedding shape:", answer_embedding.shape)
q_output, (qhidden, qcell) = self.lstm(question_embedding)
print("q_output shape:", q_output.shape)
# print("qhidden shape:", qhidden.shape)
# print("qcell shape:", qcell.shape)
a_output, (ahidden, acell) = self.lstm(answer_embedding)
print("a_output shape:", a_output.shape)
# print("ahidden shape:", ahidden.shape)
# print("acell shape:", acell.shape)
# qa_similary = torch.mm(qhidden[-1], ahidden[-1])
# qa_similary =torch.matmul((qhidden[-1]), torc.th(ahidden[-1]))
q_output = q_output[-1]
q_output = q_output.squeeze()
a_output = a_output[-1]
a_output = a_output.squeeze()
mm = torch.mul((q_output), (a_output))
mm -= mm.min(1, keepdim=True)[0]
mm /= mm.max(1, keepdim=True)[0]
qa_similary =torch.mean(mm, dim=1)
# print("qa_similary shape:", qa_similary.shape)
return qa_similary, qhidden
print("**************************MODEL DEFINE & CREATED!****************************")
Is this a true and completely exact implementation of that Keras code for the two-layer LSTM?
| https://stackoverflow.com/questions/63076590/ |
PyTorch multiprocessing error with Hogwild | I've encountered a mysterious bug while trying to implement Hogwild with torch.multiprocessing. In particular, one version of the code runs fine, but when I add in a seemingly unrelated bit of code before the multiprocessing step, this somehow causes an error during the multiprocessing step: RuntimeError: Unable to handle autograd's threading in combination with fork-based multiprocessing. See https://github.com/pytorch/pytorch/wiki/Autograd-and-Fork
I reproduced the error in a minimal code sample, pasted below. If I comment out the two lines of code m0 = Model(); train(m0) which carry out a non-parallel training run on a separate model instance, then everything runs fine. I can't figure out how these lines could be causing a problem.
I'm running PyTorch 1.5.1 and Python 3.7.6 on a Linux machine, training on CPU only.
import torch
import torch.multiprocessing as mp
from torch import nn
def train(model):
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
for _ in range(10000):
opt.zero_grad()
# We train the model to output the value 4 (arbitrarily)
loss = (model(0) - 4)**2
loss.backward()
opt.step()
# Toy model with one parameter tensor of size 3.
# Output is always the sum of the elements in the tensor,
# independent of the input
class Model(nn.Module):
def __init__(self):
super().__init__()
self.x = nn.Parameter(torch.ones(3))
def forward(self, x):
return torch.sum(self.x)
############################################
# Create a separate Model instance and run
# a non-parallel training run.
# For some reason, this code causes the
# subsequent parallel run to fail.
m0 = Model()
train(m0)
print ('Done with preliminary run')
############################################
num_processes = 2
model = Model()
model.share_memory()
processes = []
for rank in range(num_processes):
p = mp.Process(target=train, args=(model,))
p.start()
processes.append(p)
for p in processes:
p.join()
print(model.x)
| If you modify your code to create new processes like this:
processes = []
ctx = mp.get_context('spawn')
for rank in range(num_processes):
p = ctx.Process(target=train, args=(model,))
it seems to run fine (rest of code same as yours, tested on pytorch 1.5.0 / python 3.6 / NVIDIA T4 GPU).
I'm not completely sure what is carried over from the non-parallel run to the parallel run; I tried creating a completely new model for the two runs (with its own class), and/or deleting anything from the original, and/or making sure to delete any tensors and free up memory, and none of that made any difference.
What did make a difference was making sure that .backward() never got called outside of mp.Process() before it was called by a function within mp.Process(). I think what may be carried over is an autograd thread; if the thread exists before multiprocessing with the default fork method it fails, if the thread is created after fork it seems to work okay, and if using spawn it also works okay.
Btw: That's a really interesting question - thank you especially for digesting it to a minimal example!
| https://stackoverflow.com/questions/63081486/ |
Reward not increasing while training a Bipedal System | I am completely new to reinforcement learning and this is my first program in practice. I am trying to train the bipedal system in the OpenAI gym environment using the policy gradient algorithm.
However, the reward never changes, either at episode 0 or at episode 1000 and I cannot figure out what is going wrong. Could anyone please help me..
TIA.
The code is as follows:
import torch
import torch.nn as nn
import numpy
from torch.autograd import Variable
import gym
def train(model, name):
model.train()
env = gym.make(name)
env.reset()
num_episodes = 20000
max_steps = 10000
optim = torch.optim.Adam(model.parameters(), lr=1)
numsteps = []
rewards = []
avg_numsteps = []
min_reward = -1000
for episode in range(num_episodes):
state = env.reset()
probs = []
rewards = []
for steps in range(max_steps):
action, log_prob = model.action(state)
env.render()
state, reward, finished, _ = env.step(action.squeeze(0).detach().numpy())
env.render()
probs.append(log_prob)
rewards.append(reward)
if finished:
break
if finished:
Rewards = []
for i in range(len(rewards)):
G = 0
p = 0
for reward in rewards[i:]:
G = G + 0.9 * p * reward
p = p + 1
Rewards.append(G)
Rewards = torch.tensor(Rewards)
discounted_reward = (Rewards - Rewards.mean()) / (Rewards.std() + 1e-9)
gradients = []
for log_prob, G in zip(log_prob, discounted_reward):
gradients.append(-log_prob * G)
optim.zero_grad()
policy_gradient = Variable(torch.stack(gradients).sum(), requires_grad=True)
policy_gradient.backward()
optim.step()
numsteps.append(steps)
avg_numsteps.append(numpy.mean(numsteps[-10:]))
rewards.append(numpy.sum(rewards))
print("episode: {}, total reward: {}, average_reward: {}, length: {}\n".format(episode,
numpy.sum(rewards),
numpy.round(numpy.mean(
rewards[-10:]),
decimals=3),
steps))
if numpy.sum(rewards) > min_reward:
torch.save(model.state_dict(), '/home/atharva/policyNet.pth')
min_reward = numpy.sum(rewards)
def test(model, name):
env = gym.make(name)
model.eval()
state = env.reset()
with torch.no_grad():
while True:
action, log_prob = model(state)
state, reward, finished, _ = env.step(action.squeeze(0).numpy())
env.render()
if finished:
break
class PolicyNet(nn.Module):
def __init__(self, inputs, actions, hidden_size):
super(PolicyNet, self).__init__()
self.num_actions = actions
self.layer1 = nn.Linear(inputs, hidden_size)
self.layer2 = nn.Linear(hidden_size, hidden_size)
self.layer3 = nn.Linear(hidden_size, 2*hidden_size)
self.layer4 = nn.Linear(2*hidden_size, hidden_size)
self.layer5 = nn.Linear(hidden_size, actions)
def forward(self, x):
x = self.layer1(x)
x = nn.functional.relu(x)
x = self.layer2(x)
x = nn.functional.relu(x)
x = self.layer3(x)
x = nn.functional.relu(x)
x = self.layer4(x)
x = nn.functional.relu(x)
x = self.layer5(x)
return x
def action(self, state):
state = torch.from_numpy(state).float().unsqueeze(0)
actions = self.forward(Variable(state))
#prob = numpy.random.choice(self.num_actions, p=numpy.squeeze(actions.detach().numpy()))
log_prob = torch.log(actions.squeeze(0))
return actions, log_prob
and the calling is as follows
from REINFORCE import PolicyNet, train
model = PolicyNet(24, 4, 256)
train(model, 'BipedalWalker-v3')
| Your learning rate is way too high! High learning rate may lead your model to never converge (in fact, the loss may diverge). A learning rate that is way too low will make the training process unnecessarily long. There is a trade-off you have to find for yourself.
Tuning your learning rate may have a tremendous impact on the performance of your model. I recommend spending some time reading this (quite well written) blog post: https://machinelearningmastery.com/learning-rate-for-deep-learning-neural-networks/
For starters, try a learning rate that is in the range [0.01, 0.00001]. For instance:
optim = torch.optim.Adam(model.parameters(), lr=0.001)
| https://stackoverflow.com/questions/63083899/ |
Can I create an Upper triangular tensor in pytorch? | I want to create an upper triangular tensor in pytorch which I want that the lower half of the upper triangular tensor constant zeros. And the lower half of the upper triangular tensor have no grad.
When I use torch.triu() to get upper triangular tensor, the lower half of the upper triangular tensor have grad which means that such "zeros" are not constant.
So how to get an upper triangular tensor and let the lower half of the upper triangular tensor constant zeros?
import torch
a=torch.randn(5,5)
c=torch.randn(1,5)
b=torch.triu(a).requires_grad_()
loss=torch.matmul(c,b)
loss=loss.sum()
loss.backward()
print(b.grad)
| It does appear that torch.triu() gives you an upper triangular matrix and the gradients appear to be correct.
For example, lets say that
x = torch.randn(5, 5, requires_grad=True)
produces
tensor([[ 0.1907, -0.0990, 1.0373, 0.3676, -0.2752],
[ 1.8987, 1.0265, -0.1133, 0.1476, -3.5617],
[ 1.2581, 0.2860, -1.9215, -0.7674, -0.1687],
[ 0.1559, -0.9870, -0.6928, 0.1487, 0.3346],
[ 0.6317, -0.4915, 1.2506, 0.8678, 0.6367]], requires_grad=True)
Then we can take the upper triangular part of x using
y = torch.triu(x)
and y will then be
tensor([[ 0.1907, -0.0990, 1.0373, 0.3676, -0.2752],
[ 0.0000, 1.0265, -0.1133, 0.1476, -3.5617],
[ 0.0000, 0.0000, -1.9215, -0.7674, -0.1687],
[ 0.0000, 0.0000, 0.0000, 0.1487, 0.3346],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.6367]], grad_fn=<TriuBackward>)
The claim that "the lower half of the upper triangular tensor have grad which means that such zeros are not constant" indicates there may be a little confusion about what is expected of the gradients here.
The only implication as far as gradients are concerned is that no matter what we do with y, the lower part of x will not have any impact on the final result since it has been effectively multiplied by a constant zero. Therefore the derivative of any function resulting from y with respect to any of the lower components of x will be zero. We find that this is indeed the case. For example
y.sum().backward()
populates x.grad with the gradient of y.sum() w.r.t. x which is correctly reported as
# x.grad
tensor([[1., 1., 1., 1., 1.],
[0., 1., 1., 1., 1.],
[0., 0., 1., 1., 1.],
[0., 0., 0., 1., 1.],
[0., 0., 0., 0., 1.]])
| https://stackoverflow.com/questions/63090384/ |
Function with torch.mm showing error while using torch.optim | I am kind of a newbie with PyTorch. Please forgive me if the question is childish. I am trying to minimize a function using PyTorch's optim. The function includes matrix multiplication. The details are given below.
First I have a tensor:
Xv.requires_grad_()
XT.requires_grad_()
My Objective Function:
def errorFun(x):
ax = x[0]
ay = x[1]
x0 = x[2]
y0 = x[3]
A = torch.tensor([[ax, 0., x0], [0., ay, y0], [0., 0., 1.]], dtype=torch.float64)
B = torch.tensor([[b11, b12, b13], [b21, b22, b23], [b31, b32, b33]], dtype=torch.float64)
H = torch.mm(A, B)
Ps = torch.mm(H, X)
px = Ps[0,:]
py = Ps[1,:]
PX = torch.stack([px, py], dim=0)
PX.requires_grad_()
return mseloss(PX, XT)
I am minimizing it:
for ii in range(n_optim_steps):
optimizer.zero_grad()
loss = errorFun(params)
#print('Step # {}, loss: {}'.format(ii, loss.item()))
loss.backward()
# Access gradient if necessary
grad = params.grad.data
optimizer.step()
But I am getting this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-54-84b874448a25> in <module>()
77 loss.backward()
78 # Access gradient if necessary
---> 79 grad = params.grad.data
80 optimizer.step()
81
AttributeError: 'NoneType' object has no attribute 'data'
Thanks in advance.
| I am not sure I understand your task. But it seems you are not using pytorch the way it was designed to be used.
There are 5 things you have to have:
Data for training;
A task you are interested in per forming;
A parameterized model (eg a neural network);
A cost function to be minimized;
An optimizer;
Consider the simple example:
Data: vectors containing random numbers;
Task: sum the numbers of the vector;
Model: Linear regressor (i.e: 1 layer neural net)
Cost function: Mean Squared Error
Optimizer: Stochastic Gradient Descent
The implementation:
import torch
import torch.nn as nn
from torch.optim import SGD
input_size = 5
batch_size = 32
model = nn.Linear(input_size, 1)
opt = SGD(model.parameters(), lr=0.01)
loss_func = nn.MSELoss()
for _ in range(100):
    data = torch.rand(batch_size, input_size)
    target = data.sum(dim=1, keepdim=True)  # keep shape (batch_size, 1) to match the prediction
    opt.zero_grad()
    pred = model(data)
    loss = loss_func(pred, target)
    loss.backward()
    opt.step()
| https://stackoverflow.com/questions/63093141/ |
PyTorch: Is it possible to differentiate a matrix? | How do you differentiate a matrix in PyTorch? I have tried the following but neither work:
Instance 1:
a = torch.tensor([1., 2, 3], requires_grad=True)
b = torch.tensor([4., 5, 6], requires_grad=True)
c = a*b
c.backward()
#print(b.grad)
>>> RuntimeError: grad can be implicitly created only for scalar outputs
Instance 2:
a = torch.tensor([1., 2, 3], requires_grad=True)
b = torch.tensor([4., 5, 6], requires_grad=True)
c = a*b
print(b.grad)
>>> None
| It is possible but it doesn't really fit into the standard use case of PyTorch where you are generally interested in the gradient of a scalar valued function.
The derivative of a matrix Y w.r.t. a matrix X can be represented as a Generalized Jacobian. For the case where both matrices are just vectors this reduces to the standard Jacobian matrix, where each row of the Jacobian is the transpose of the gradient of one element of Y with respect to X. More generally if X is shape (n1, n2, ..., nD) and Y is shape (m1, m2, ..., mE) then a natural way to represent the Generalized Jacobian of Y with respect to X is as a tensor of shape (m1, m2, ..., mE, n1, n2, ..., nD).
There are two ways to compute the Generalized Jacobian that I'm aware of in PyTorch.
Option 1
Repeated application of back-propagation on each element of Y.
import torch
def construct_jacobian(y, x, retain_graph=False):
x_grads = []
for idx, y_element in enumerate(y.flatten()):
if x.grad is not None:
x.grad.zero_()
# if specified set retain_graph=False on last iteration to clean up
y_element.backward(retain_graph=retain_graph or idx < y.numel() - 1)
x_grads.append(x.grad.clone())
return torch.stack(x_grads).reshape(*y.shape, *x.shape)
then the Jacobian for your test case may be computed using
a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.], requires_grad=True)
c = a * b
jacobian = construct_jacobian(c, b)
print(jacobian)
which results in
tensor([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
Option 2
In PyTorch 1.5.1 a new autograd.functional API was introduced, including the new function torch.autograd.functional.jacobian. This produces the same results as the previous example but takes a function as an argument. Not demonstrated here, but you can provide the jacobian function a list of inputs if your function takes multiple independent tensors as input. In that case the jacobian would return a tuple containing the Generalized Jacobian for each of the input arguments.
import torch
a = torch.tensor([1., 2., 3.])
def my_fun(b):
return a * b
b = torch.tensor([4., 5., 6.], requires_grad=True)
jacobian = torch.autograd.functional.jacobian(my_fun, b)
print(jacobian)
which also produces
tensor([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
As an aside, in some literature the term "gradient" is used to refer to the transpose of the Jacobian matrix. If that's what you're after then, assuming Y and X are vectors, you can simply use the code above and take the transpose of the resulting Jacobian matrix. If Y or X are higher order tensors (matrices or n-dimensional tensors) then I'm not aware of any literature that distinguishes between gradient and Generalized Jacobian. A natural way to represent such a "transpose" of the Generalized Jacobian would be to use Tensor.permute to turn it into a tensor of shape (n1, n2, ..., nD, m1, m2, ..., mE).
As another aside, the concept of the Generalized Jacobian is rarely used in literature (example usage) but is actually relatively useful in practice. This is because it basically works as a bookkeeping technique to keep track of the original dimensionality of Y and X. By this I mean you could just as easily take Y and X and flatten them into vectors, regardless of their original shape. Then the derivative would be a standard Jacobian matrix. Consequently this Jacobian matrix would be equivalent to a reshaped version of the Generalized Jacobian.
| https://stackoverflow.com/questions/63096122/ |
Pytorch schedule learning rate | I am trying to re-implement one paper, which suggests to adjust the learning rate as below:
The learning rate is decreased by a factor of the regression value with patience epochs 10 on the change value of 0.0001.
Should I use the torch.optim.lr_scheduler.ReduceLROnPlateau()?
I am not sure what value should I pass to each parameter.
Is the change value in the statement denotes to the parameter threshold?
Is the factor in the statement denotes to the parameter factor?
| torch.optim.lr_scheduler.ReduceLROnPlateau is indeed what you are looking for. I summarized all of the important stuff for you.
mode=min: lr will be reduced when the quantity monitored has stopped decreasing
factor: factor by which the learning rate will be reduced
patience: number of epochs with no improvement after which learning rate will be reduced
threshold: threshold for measuring the new optimum, to only focus on significant changes (change value). Say we have threshold=0.0001, if loss is 18.0 on epoch n and loss is 17.9999 on epoch n+1 then we have met our criteria to multiply the current learning rate by the factor.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
factor=0.1, patience=10, threshold=0.0001, threshold_mode='abs')
for epoch in range(20):
# training loop stuff
loss = criterion(...)
scheduler.step(loss)
You can check more details in the documentation: https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
| https://stackoverflow.com/questions/63108131/ |
Does Pytorch-Lightning have a multiprocessing (or Joblib) module? | I have been googling around but can't seem to find if there is a multiprocessing module available in Pytorch-Lightning, just like how Pytorch has a torch.multiprocessing module.
Does anyone know if Pytorch-Lightning has this (or a similar Joblib-like) module? I am looking for a Pytorch-Lightning module which allows me to parallelize over multiple GPUs.
Many thanks in advance.
Edit: To be more specific, I am looking for a multiprocessing module in Pytorch-Lightning which allows me to parallelize over multiple GPUs on non-neural network computations, such as:
import numpy as np
import torch
from torch.multiprocessing import Pool
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)
def X_power_func(j):
X_power = X.cuda()**j
return X_power
if __name__ == '__main__':
with Pool(processes = 2) as p: # Parallelizing over 2 GPUs
results = p.map(X_power_func, range(4))
results
| Yes, basically all you have to do is to provide Trainer with appropriate argument gpus=N and specify backend:
# train on 8 GPUs (same machine (ie: node))
trainer = Trainer(gpus=8, distributed_backend='ddp')
# train on 32 GPUs (4 nodes)
trainer = Trainer(gpus=8, distributed_backend='ddp', num_nodes=4)
You can read more about it in multi-GPU training documentation.
EDIT:
What you were actually looking for is distributed module instead of multiprocessing, torch.distributed.DistributedDataParallel is usually recommended for parallelizing over multiple GPUs.
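For the non-neural-network computation in the question, here is a hedged sketch that stays with plain torch.multiprocessing and pins one GPU per spawned process (no Lightning required):
import torch
import torch.multiprocessing as mp

X = torch.DoubleTensor([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])

def x_power_func(rank, powers_per_rank):
    device = torch.device(f"cuda:{rank}")        # each spawned process owns one GPU
    for j in powers_per_rank[rank]:
        result = (X.to(device) ** j).cpu()
        torch.save(result, f"x_power_{j}.pt")    # or push results through an mp.Queue

if __name__ == "__main__":
    powers_per_rank = [[0, 1], [2, 3]]           # split the work across 2 GPUs
    mp.spawn(x_power_func, args=(powers_per_rank,), nprocs=2, join=True)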
| https://stackoverflow.com/questions/63108616/ |
OSError: [WinError 127] The specified procedure could not be found. pytorch | I get this error OSError: [WinError 127] The specified procedure could not be found.
OSError Traceback (most recent call last)
in
5 import matplotlib.pyplot as plt
6
----> 7 import torch
8 import torch.nn as nn
9 import torch.optim as optim
~\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\__init__.py in
79 dlls = glob.glob(os.path.join(th_dll_path, '*.dll'))
80 for dll in dlls:
---> 81 ctypes.CDLL(dll)
82
83
~\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
371
372 if handle is None:
--> 373 self._handle = _dlopen(self._name, mode)
374 else:
375 self._handle = handle
OSError: [WinError 127] La procédure spécifiée est introuvable
I installed pytorch a few days ago and hit this error for the first time today ... so I don't think it's an installation problem...
I also tried to reinstall pytorch and still get this error
I am using torch 1.5.1+cu101 and python 3.8.5 on windows 10
I have only one version of python installed.
if someone could please help me on this
thanks a lot
| I got a similar error with Python 3.9.2 on Windows 10 and solved it doing the following:
I copied the CUDA 11 drivers (specifically cublas64_11.dll) from:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin to
C:\<your python directory>\Lib\site-packages\torch\lib
| https://stackoverflow.com/questions/63112597/ |
PyTorch: Running unseen text through generated model | I am trying to implement a PyTorch project, found here.
import os
from process_file import process_doc
import random
import torch
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
import numpy as np
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import classification_report, confusion_matrix
from full_model import Classifier
import torch.nn as nn
import torch.optim as optim
import time
def get_batch(doc, ref_type='headline'):
sent, ls, out, sids = [], [], [], []
sent.append(doc.headline)
ls.append(len(doc.headline))
for sid in doc.sentences:
if SPEECH:
out.append(out_map[doc.sent_to_speech.get(sid, 'NA')])
else:
out.append(out_map[doc.sent_to_event.get(sid)])
sent.append(doc.sentences[sid])
ls.append(len(doc.sentences[sid]))
sids.append(sid)
ls = torch.LongTensor(ls)
out = torch.LongTensor(out)
return sent, ls, out, sids
def train(epoch, data):
start_time = time.time()
total_loss = 0
global prev_best_macro
for ind, doc in enumerate(data):
model.train()
optimizer.zero_grad()
sent, ls, out, _ = get_batch(doc)
if has_cuda:
ls = ls.cuda()
out = out.cuda()
_output, _, _, _ = model.forward(sent, ls)
loss = criterion(_output, out)
total_loss += loss.item()
loss.backward()
optimizer.step()
del sent, ls, out
if has_cuda:
torch.cuda.empty_cache()
print("--Training--\nEpoch: ", epoch, "Loss: ", total_loss, "Time Elapsed: ", time.time()-start_time)
perf = evaluate(validate_data)
# print(perf)
if prev_best_macro < perf:
prev_best_macro = perf
print ("-------------------Test start-----------------------")
_ = evaluate(test_data, True)
print ("-------------------Test end-----------------------")
torch.save(model.state_dict(), 'discourse_lstm_model.pt')
def evaluate(data, is_test=False):
y_true, y_pred = [], []
model.eval()
for doc in data:
sent, ls, out, sids = get_batch(doc)
if has_cuda:
ls = ls.cuda()
#out = out.cuda()
_output, _, _, _ = model.forward(sent, ls)
_output = _output.squeeze()
_, predict = torch.max(_output, 1)
y_pred += list(predict.cpu().numpy() if has_cuda else predict.numpy())
temp_true = list(out.numpy())
y_true += temp_true
print("MACRO: ", precision_recall_fscore_support(y_true, y_pred, average='macro'))
print("MICRO: ", precision_recall_fscore_support(y_true, y_pred, average='micro'))
if is_test:
print("Classification Report \n", classification_report(y_true, y_pred))
print("Confusion Matrix \n", confusion_matrix(y_true, y_pred))
return precision_recall_fscore_support(y_true, y_pred, average='macro')[2]
if __name__ == '__main__':
parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
# parser.add_argument('--drop', help='DROP', default=6, type=float)
# parser.add_argument('--learn_rate', help='LEARNING RATE', default=0, type=float)
# parser.add_argument('--loss_wt', help='LOSS WEIGHTS', default=0, type=str)
parser.add_argument('--seed', help='SEED', default=0, type=int)
args = parser.parse_args()
has_cuda = torch.cuda.is_available()
SPEECH = 0
if SPEECH:
out_map = {'NA':0, 'Speech':1}
else:
out_map = {'NA':0,'Main':1,'Main_Consequence':2, 'Cause_Specific':3, 'Cause_General':4, 'Distant_Historical':5,
'Distant_Anecdotal':6, 'Distant_Evaluation':7, 'Distant_Expectations_Consequences':8}
train_data = []
validate_data = []
test_data = []
for domain in ["Business", "Politics", "Crime", "Disaster", "kbp"]:
subdir = "../data/train/"+domain
files = os.listdir(subdir)
for file in files:
if '.txt' in file:
doc = process_doc(os.path.join(subdir, file), domain) #'../data/Business/nyt_corpus_data_2007_04_27_1843240.txt'
#print(doc.sent_to_event)
train_data.append(doc)
subdir = "../data/test/"+domain
files = os.listdir(subdir)
for file in files:
if '.txt' in file:
doc = process_doc(os.path.join(subdir, file), domain) #'../data/Business/nyt_corpus_data_2007_04_27_1843240.txt'
#print(doc.sent_to_event)
test_data.append(doc)
subdir = "../data/validation"
files = os.listdir(subdir)
for file in files:
if '.txt' in file:
doc = process_doc(os.path.join(subdir, file), 'VAL') #'../data/Business/nyt_corpus_data_2007_04_27_1843240.txt'
#print(doc.sent_to_event)
validate_data.append(doc)
print(len(train_data), len(validate_data), len(test_data))
seed = args.seed
np.random.seed(seed)
torch.manual_seed(seed)
if has_cuda:
torch.cuda.manual_seed(seed)
random.seed(seed)
np.random.seed(seed)
prev_best_macro = 0.
model = Classifier({'num_layers': 1, 'hidden_dim': 512, 'bidirectional': True, 'embedding_dim': 1024,
'dropout': 0.5, 'out_dim': len(out_map)})
if has_cuda:
model = model.cuda()
model.init_weights()
criterion = nn.CrossEntropyLoss()
print("Model Created")
params = filter(lambda p: p.requires_grad, model.parameters())
optimizer = optim.Adam(params, lr=5e-5, betas=[0.9, 0.999], eps=1e-8, weight_decay=0)
try:
for epoch in range(15):
print("---------------------------Started Training Epoch = {0}--------------------------".format(epoch+1))
train(epoch, train_data)
except KeyboardInterrupt:
print ("----------------- INTERRUPTED -----------------")
evaluate(validate_data)
evaluate(test_data)
Running this code, I have successfully outputted a .pt model trained on a corpus of ~400 articles, each article annotated according to its sectional content (data from the Github repo).
Now, I want to annotate a new, unseen article using this model, but I can't quite figure out how to do so. I have a feeling the classification code is already implemented in the snippet above and I'd really appreciate any help/guidance as to how I could go about classifying an unseen article using this code. Thanks so much in advance!
| Well, your code does training and testing already. You just need an extra piece of code that loads the trained model and test data for inference.
It should be something like that:
# load trained model:
model = Classifier({'num_layers': 1, 'hidden_dim': 512, 'bidirectional': True, 'embedding_dim': 1024,
'dropout': 0.5, 'out_dim': len(out_map)})
model.load_state_dict(torch.load("PATH/TO/SAVED/MODEL.pt"))
# load data:
subdir = "PATH/TO/DOCS/"
files = os.listdir(subdir)
validate_data = []
for file in files:
if '.txt' in file:
doc = process_doc(os.path.join(subdir, file), 'VAL')
validate_data.append(doc)
print(len(validate_data))
# use the evaluate function with is_test=True for inference.
evaluate(validate_data, is_test=True)
| https://stackoverflow.com/questions/63113593/ |
why does batch normalization make my batches so abnormal? | I'm playing with pytorch for the first time, and I've noticed that when training my neural net, about one time in four or so the loss takes a left turn towards infinity, then nan shortly after that. I've seen a few other questions about nan-ing, but the recommendations there seem to be essentially to do normalization; but the first layer in my net below is such a normalization, and I still see this problem! The full net is a bit convoluted, but I've done some debugging to try to produce a very small, understandable net that still displays the same issue.
The code is below; it consists of sixteen inputs, 0-1, which are passed through a batch normalization and then a fully-connected layer to a single output. I'd like it to learn the function that always outputs 1, so I take the squared error from 1 for the loss.
import torch as t
import torch.nn as tn
import torch.optim as to
if __name__ == '__main__':
board = t.rand([1,1,1,16])
net = tn.Sequential \
( tn.BatchNorm2d(1)
, tn.Conv2d(1, 1, [1,16])
)
optimizer = to.SGD(net.parameters(), lr=0.1)
for i in range(10):
net.zero_grad()
nn_outputs = net.forward(board)
loss = t.sum((nn_outputs - 1)**2)
print(i, nn_outputs, loss)
loss.backward()
optimizer.step()
If you run it a few times, eventually you'll see a run that looks like this:
0 tensor([[[[-0.7594]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(3.0953, grad_fn=<SumBackward0>)
1 tensor([[[[4.0954]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(9.5812, grad_fn=<SumBackward0>)
2 tensor([[[[5.5210]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(20.4391, grad_fn=<SumBackward0>)
3 tensor([[[[-3.4042]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(19.3966, grad_fn=<SumBackward0>)
4 tensor([[[[823.6523]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(676756.7500, grad_fn=<SumBackward0>)
5 tensor([[[[3.5471e+08]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(1.2582e+17, grad_fn=<SumBackward0>)
6 tensor([[[[2.8560e+25]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(inf, grad_fn=<SumBackward0>)
7 tensor([[[[inf]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(inf, grad_fn=<SumBackward0>)
8 tensor([[[[nan]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(nan, grad_fn=<SumBackward0>)
9 tensor([[[[nan]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(nan, grad_fn=<SumBackward0>)
Why does my loss go to nan, and what can I do about it?
| Welcome to pytorch!
Here is how I would set up your training. Please check the comments.
# how the community usually does the import:
import torch # some people do: import torch as th
import torch.nn as nn
import torch.optim as optim
if __name__ == '__main__':
# setting some parameters:
batch_size = 32
n_dims = 128
# select GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# initializing a simple neural net
net = nn.Sequential(nn.Linear(n_dims, n_dims // 2), # Batch norm is not usually used directly on the input
                    nn.BatchNorm1d(n_dims // 2), # Batch norm is used before the activation function (it centers the input and helps make the dims of the previous layers independent of each other)
                    nn.ReLU(), # the most common activation function
                    nn.Linear(n_dims // 2, 1)) # final layer
net.to(device) # model is copied to the GPU if it is available
optimizer = optim.SGD(net.parameters(), lr=0.01) # it is better to start with a low lr and increase it in later experiments to avoid training divergence, the range [1.e-6, 5.e-2] is recommended.
for i in range(10):
# generating random data:
board = torch.rand([batch_size, n_dims])
# for sequences: [batch_size, channels, L]
# for image data: [batch_size, channels, W, H]
# for videos: [batch_size, chanels, L, W, H]
board = board.to(device) # data is copied to the GPU if it is available
optimizer.zero_grad() # the convention the community uses, though the result is the same as net.zero_grad()
nn_outputs = net(board) # don't call net.forward(x), call net(x). Pytorch applies some hooks in the net.__call__(x) that are useful for backpropagation.
loss = ((nn_outputs - 1)**2).mean() # using .mean() makes your training less sensitive to the batch size.
print(i, nn_outputs, loss.item())
loss.backward()
optimizer.step()
One comment about the batch norm. Per dimension, it calculates the mean and the standard deviation of your batch (check the documentation https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d):
x_normalized = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-6) * scale + shift
Where scale and shift are learnable parameters. If you only give one example per batch, x.std(0) = 0 will make x_normalized contain very very large values.
| https://stackoverflow.com/questions/63127564/ |
get the index of element in NumPy array | I have a Numpy integer array with a lot of duplicate elements.
For example:
a = np.random.randint(0,5,20)
a
Out[23]:
array([3, 1, 2, 4, 1, 2, 4, 3, 2, 3, 1, 4, 4, 1, 2, 4, 2, 4, 1, 1])
There are two cases:
if an element occurs fewer than 4 times, get all the indexes of this element
if an element occurs 4 or more times, select four of its indexes randomly
I solved this with a loop.
ans = np.array([])
num = 4
for i in range(1,5):
indexes = np.where(a == i)[0] # all indexes of elements equal to i
index_i = np.random.choice(indexes, num, False) if len(indexes) >=num else indexes
ans = np.concatenate([ans, index_i])
np.sort(ans)
Out[57]:
array([ 0., 1., 2., 5., 6., 7., 8., 9., 10., 11., 13., 14., 15.,
17., 19.])
Can I solve this problem without a loop or more efficiently in Numpy or PyTorch?
| You can do it quite easily, using Pandas.
First convert your array to a pandasonic Series:
s = pd.Series(a)
Then:
Group it by its value.
Apply to each group a function, which:
for groups of size 4 or smaller returns just this group,
for groups with more members, returns a random sample of 4 elements
from them.
Drop the 0-th level of the resulting index (added during grouping).
Sort by the (original) index, to bring back the original order (without
the dropped elements, for now we have original values with their
corresponding indices).
Return the index of the above result, as a Numpy array.
The code to do it is:
s.groupby(s).apply(lambda grp: grp if grp.size <= 4 else grp.sample(4))\
.reset_index(level=0, drop=True).sort_index().index.values
For a sample array containing:
array([2, 2, 1, 0, 1, 0, 2, 2, 2, 3, 0, 2, 1, 0, 0, 3, 3, 0, 2, 4])
the result is:
array([ 0, 2, 4, 5, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19])
To show that this result is correct, I repeated the source array,
with "x" marks below the elements at the returned indices.
array([2, 2, 1, 0, 1, 0, 2, 2, 2, 3, 0, 2, 1, 0, 0, 3, 3, 0, 2, 4])
x x x x x x x x x x x x x x x
| https://stackoverflow.com/questions/63129794/ |
How to concat two tensors of size [B,C,13,18] and [B,C,14,18] respectively in Pytorch? | I often met this problem when the height or width of an image or a tensor becomes odd.
For example, suppose the original tensor is of size [B,C,13,18]. After forwarding a strided-2 conv and several other conv layers, its size will become [B,C,7,9]. If we upsample the output by 2 and concat it with the original feature map as most cases, the error occurs.
I found that in many source codes, they use even sizes like (512,512) for training, so this kind of problem won't happen. But for test, I use the original image size to keep fine details and often met this problem.
What should I do? Do I need to change the network architecture?
| Concatenating tensors with incompatible shapes does not make sense. Information is missing, and you need to specify it yourself. The question is, what do you expect from this concatenation? Usually, you pad the input with zeros, or truncate the output, in order to get compatible shapes (in the general case, being even is not the required condition). If the height and width are large enough, the edge effect should be negligible (well, except perhaps on the edge, it depends).
So if you are dealing with convolutions only, there is no need to change the architecture strictly speaking, just add a padding layer wherever it seems appropriate.
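For instance, a minimal sketch for the [B,C,13,18] vs [B,C,14,18] case from the question (the shapes and the choice between cropping and padding are assumptions you have to make yourself):
import torch
import torch.nn.functional as F
skip = torch.randn(2, 8, 13, 18)   # original feature map
up = torch.randn(2, 8, 14, 18)     # upsampled feature map
# option 1: truncate the upsampled tensor to the skip connection's spatial size
up_cropped = up[:, :, :skip.shape[2], :skip.shape[3]]
out = torch.cat([skip, up_cropped], dim=1)    # [2, 16, 13, 18]
# option 2: zero-pad the smaller tensor instead (pad order is (left, right, top, bottom))
skip_padded = F.pad(skip, (0, up.shape[3] - skip.shape[3], 0, up.shape[2] - skip.shape[2]))
out2 = torch.cat([skip_padded, up], dim=1)    # [2, 16, 14, 18]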
| https://stackoverflow.com/questions/63138587/ |
How to make sure PyTorch has deallocated GPU memory? | Say we have a function like this:
def trn_l(totall_lc, totall_lw, totall_li, totall_lr):
self.model_large.cuda()
self.model_large.train()
self.optimizer_large.zero_grad()
for fb in range(self.fake_batch):
val_x, val_y = next(self.valid_loader)
val_x, val_y = val_x.cuda(), val_y.cuda()
logits_main, emsemble_logits_main = self.model_large(val_x)
cel = self.criterion(logits_main, val_y)
loss_weight = cel / (self.fake_batch)
loss_weight.backward(retain_graph=False)
cel = cel.cpu().detach()
emsemble_logits_main = emsemble_logits_main.cpu().detach()
totall_lw += float(loss_weight.item())
val_x = val_x.cpu().detach()
val_y = val_y.cpu().detach()
loss_weight = loss_weight.cpu().detach()
self._clip_grad_norm(self.model_large)
self.optimizer_large.step()
self.model_large.train(mode=False)
self.model_large = self.model_large.cpu()
return totall_lc, totall_lw, totall_li, totall_lr
On the first call, it allocates 8GB of GPU memory. On the next call, no new memory gets allocated, yet 8GB is still occupied. After it is called and has produced its first result, I want the allocated GPU memory to be 0, or as low as possible.
What I have tried: do retain_graph=False and .cpu().detach() everywhere - no positive effects.
Memory snapshot before
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 33100 KB | 33219 KB | 40555 KB | 7455 KB |
| from large pool | 3072 KB | 3072 KB | 3072 KB | 0 KB |
| from small pool | 30028 KB | 30147 KB | 37483 KB | 7455 KB |
|---------------------------------------------------------------------------|
| Active memory | 33100 KB | 33219 KB | 40555 KB | 7455 KB |
| from large pool | 3072 KB | 3072 KB | 3072 KB | 0 KB |
| from small pool | 30028 KB | 30147 KB | 37483 KB | 7455 KB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 51200 KB | 51200 KB | 51200 KB | 0 B |
| from large pool | 20480 KB | 20480 KB | 20480 KB | 0 B |
| from small pool | 30720 KB | 30720 KB | 30720 KB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 18100 KB | 20926 KB | 56892 KB | 38792 KB |
| from large pool | 17408 KB | 18944 KB | 18944 KB | 1536 KB |
| from small pool | 692 KB | 2047 KB | 37948 KB | 37256 KB |
|---------------------------------------------------------------------------|
| Allocations | 12281 | 12414 | 12912 | 631 |
| from large pool | 2 | 2 | 2 | 0 |
| from small pool | 12279 | 12412 | 12910 | 631 |
|---------------------------------------------------------------------------|
| Active allocs | 12281 | 12414 | 12912 | 631 |
| from large pool | 2 | 2 | 2 | 0 |
| from small pool | 12279 | 12412 | 12910 | 631 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 16 | 16 | 16 | 0 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 15 | 15 | 15 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 3 | 30 | 262 | 259 |
| from large pool | 1 | 1 | 1 | 0 |
| from small pool | 2 | 29 | 261 | 259 |
|===========================================================================|
And after calling the function and
torch.cuda.empty_cache()
torch.cuda.synchronize()
We get:
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 10957 KB | 8626 MB | 272815 MB | 272804 MB |
| from large pool | 0 KB | 8596 MB | 272477 MB | 272477 MB |
| from small pool | 10957 KB | 33 MB | 337 MB | 327 MB |
|---------------------------------------------------------------------------|
| Active memory | 10957 KB | 8626 MB | 272815 MB | 272804 MB |
| from large pool | 0 KB | 8596 MB | 272477 MB | 272477 MB |
| from small pool | 10957 KB | 33 MB | 337 MB | 327 MB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 8818 MB | 9906 MB | 19618 MB | 10800 MB |
| from large pool | 8784 MB | 9874 MB | 19584 MB | 10800 MB |
| from small pool | 34 MB | 34 MB | 34 MB | 0 MB |
|---------------------------------------------------------------------------|
| Non-releasable memory | 5427 KB | 3850 MB | 207855 MB | 207850 MB |
| from large pool | 0 KB | 3850 MB | 207494 MB | 207494 MB |
| from small pool | 5427 KB | 5 MB | 360 MB | 355 MB |
|---------------------------------------------------------------------------|
| Allocations | 3853 | 13391 | 34339 | 30486 |
| from large pool | 0 | 557 | 12392 | 12392 |
| from small pool | 3853 | 12838 | 21947 | 18094 |
|---------------------------------------------------------------------------|
| Active allocs | 3853 | 13391 | 34339 | 30486 |
| from large pool | 0 | 557 | 12392 | 12392 |
| from small pool | 3853 | 12838 | 21947 | 18094 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 226 | 226 | 410 | 184 |
| from large pool | 209 | 209 | 393 | 184 |
| from small pool | 17 | 17 | 17 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 46 | 358 | 12284 | 12238 |
| from large pool | 0 | 212 | 7845 | 7845 |
| from small pool | 46 | 279 | 4439 | 4393 |
|===========================================================================|
| I don't think the other answer is correct. Allocation and deallocation definitely happens during runtime, the thing to note is that the CPU code runs asynchronously from the GPU code, so you need to wait for any deallocation to happen if you want to reserve more memory after it. Take a look at this:
import torch
a = torch.zeros(100,100,100).cuda()
print(torch.cuda.memory_allocated())
del a
torch.cuda.synchronize()
print(torch.cuda.memory_allocated())
Outputs
4000256
0
So you should del the tensors you don't need and call torch.cuda.synchronize() to make sure that the deallocation goes through before your CPU code continues to run.
In your specific case, after your function trn_l returns, any variables that were local to that function, and do not have references elsewhere, will be deallocated along with the corresponding GPU tensors. All you need to do is wait for this to happen by calling torch.cuda.synchronize() after the function call.
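A short sketch of what that looks like around the call (trn_l and its arguments are the ones from the question):
totals = self.trn_l(totall_lc, totall_lw, totall_li, totall_lr)
torch.cuda.synchronize()              # wait for the pending deallocations to go through
torch.cuda.empty_cache()              # optionally hand cached blocks back to the driver
print(torch.cuda.memory_allocated())  # should now be back near the pre-call level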
| https://stackoverflow.com/questions/63145729/ |
Is there an equivalent of pytorch.nn.functional.unfold() in keras or tensorflow? | I want to perform a similar operation in keras. However, I am unable to do the unfold operation in keras. I tried it with conv1D layer, but unable to figure out. Any help would be appreciated
'''
import numpy as np
import torch
x = torch.tensor(np.random.rand(25,100,24)) # tensor of shape (batch_size, seq_length,feature_dim)
x = x.unsqueeze(1) # shape=(25,1,100,24)
import torch.nn.functional as F
x = F.unfold(x,(5, 24), stride=(1,24),dilation=(1,1)) #shape (25,120,96)
'''
| I don't think there is. But you can do one thing. Use tensorly for unfolding. Make a function that unfolds the input array. Then using that function make a lambda layer in keras or tf2.0. Suppose you have input array X:
X = np.array([[[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11],
[12, 13],
[14, 15]],
[[16, 17],
[18, 19],
[20, 21],
[22, 23]]])
To unfold a tensor, simply use the unfold function from TensorLy:
> from tensorly import unfold
> unfold(X, 0)
>> array([[ 0, 1, 2, 3, 4, 5, 6, 7],
[ 8, 9, 10, 11, 12, 13, 14, 15],
[16, 17, 18, 19, 20, 21, 22, 23]])
Now create a function that takes an input array and returns the unfolded array (give it a different name than tensorly's unfold so it does not call itself recursively):
def unfold_layer(X):
    return unfold(X, 0)
Now use this function as a layer in keras
from keras.layers import Lambda
from keras.models import Sequential
model = Sequential()
model.add(....some_layer....)
model.add(....another_layer....)
model.add(Lambda(unfold_layer)) <<<<=== using our unfold_layer function as a keras layer
model.add(...more_layers..)
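If you want something closer to the sliding-window semantics of F.unfold (rather than tensorly's mode-0 unfolding), tf.image.extract_patches is arguably the closer equivalent. A sketch mirroring the question's (5, 24) kernel with stride (1, 24) -- note the NHWC layout, and that the patch dimension ends up last, so a transpose may be needed to match PyTorch's output exactly:
import tensorflow as tf
x = tf.random.uniform((25, 100, 24, 1))          # (batch, H, W, C) = NHWC
patches = tf.image.extract_patches(x,
                                   sizes=[1, 5, 24, 1],
                                   strides=[1, 1, 24, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='VALID')
print(patches.shape)                             # (25, 96, 1, 120) vs torch's (25, 120, 96)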
Hope this will help !
| https://stackoverflow.com/questions/63156717/ |
What should be the size of input image for training a YOLOv3 Model Architecture CNN.? | I've implemented a YOLOv3 from scratch and I plan to fine-tune using MS-COCO weights for some different data.
The dataset I've chosen has images of 720*1280 size.
When I go through the YOLOv3 paper, 1st CONV2d layer is there with filter_size =3 and stride = 1, and output size is 256*256....
Can someone give me a walkthrough for how YOLO training part works in here?
| From Yolov3 paper:
If best possible accuracy/mAP is what you want then use 608 x 608 as input layer size in the config.
If you want good inference speed at the cost of accuracy then use 320 x 320
If balanced model is what you want then use 416 x 416
Note that first layer automatically resizes your images to the size of first layer in Yolov3 CNN, so you need not convert your 1280 x 720 images to the input layer size.
I suggest you read the following:
To understand how Yolov3 works, read this blog post.
To understand some basic stuff read from original site
Learn how to train your custom object detector here
| https://stackoverflow.com/questions/63160524/ |
pytorch beginner :torch.data.new() torch.new() | I have a small question, what is the difference between tensor.data.new() and tensor.new()? It seems they all return an empty tensor with the same dtype and device as the self tensor.
Thank you
| There is no difference. It's a little convoluted, but, you can think of .data as being essentially the same object as the Tensor that holds it. Every Tensor has a .data, and the .data is, itself, a Tensor, so there's some circular references going on.
The most important part is that they both always point to the same data, so all operations that don't require a gradient on either the tensor or its .data will give you the same result.
import torch
a = torch.randn((1,2))
a.data.data_ptr() == a.data_ptr()
# True -- indicating it's precisely the same memory/buffer
The .data property is the counter-part to the .grad property. But, for convenience, as most people only care about the data, not the gradients associated with them, there's a default buffer for the Tensor that .data also points to.
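So, continuing the small snippet above, both calls behave identically (a quick sketch):
b1 = a.new(2, 3)        # uninitialized tensor with a's dtype and device
b2 = a.data.new(2, 3)   # exactly the same thing
b1.dtype == b2.dtype and b1.device == b2.device
# True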
| https://stackoverflow.com/questions/63166005/ |
How to save and load models of custom dataset in Detectron2? | I have tried to save and load the model using:
All keys are mapped but there is no prediction in output
#1
from detectron2.modeling import build_model
model = build_model(cfg)
torch.save(model.state_dict(), 'checkpoint.pth')
model.load_state_dict(torch.load(checkpoint_path,map_location='cpu'))
I also tried doing it using the official doc but can't understand the input format part
from detectron2.checkpoint import DetectionCheckpointer
DetectionCheckpointer(model).load(file_path_or_url) # load a file, usually from cfg.MODEL.WEIGHTS
checkpointer = DetectionCheckpointer(model, save_dir="output")
checkpointer.save("model_999") # save to output/model_999.pth
| cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file('COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml'))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # Set threshold for this model
cfg.MODEL.WEIGHTS = '/content/model_final.pth' # Set path model .pth
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
predictor = DefaultPredictor(cfg)
My code to load custom model works.
| https://stackoverflow.com/questions/63166152/ |
Problem of exporting batchnorm weight from pytorch to Keras | I follow Pytorch Batchnorm layer different from Keras Batchnorm,
Pytorch Batchnorm implementation, but they do not solve my problem.
I also read Wiki about Batchnorm.
And search source code from tensorflow batchnorm and from pytorch source code.
Below is my testing code, and the results between pytorch and keras differ by an error on the order of 1e-2 to 1e-3, which is big. Functions b0 and b1 give results similar to the torch result but are still not quite accurate.
And b2 tries to follow the formula used in the tensorflow batchnorm.
The convolution part yields the same result, but I am stuck at the batchnorm layer. I also use eval() and no_grad() for pytorch and model.predict for the keras model to make sure they are in the inference stage.
The Tensorflow implementation does not use 1/sqrt(var+eps), but sqrt(var+eps) instead. I tried to transfer 1/running_var to keras.BN.moving_var but the result still fails.
import tensorflow as tf
import tensorflow.keras.layers as L
from tensorflow.keras import Model as KModel
import torch.nn as nn
import torch
def KM():
x = L.Input((None,None,3))
y0 = L.Concatenate(axis=-1)([x[:,::2,::2,:],x[:,::2,1::2,:],x[:,1::2,::2,:],x[:,1::2,1::2,:]])
y1 = L.Conv2D(32,3,1,"same",use_bias=False)(y0)
y2 = L.BatchNormalization()(y1)
y3 = L.LeakyReLU(0.1)(y2)
return KModel(x, [y1, y2, y3])
class YM(nn.Module):
def __init__(self):
super(YM, self).__init__()
self.cat = lambda x : torch.cat([x[:,:,::2,::2],x[:,:,::2,1::2],x[:,:,1::2,::2],x[:,:,1::2,1::2]],axis=1)
self.conv = nn.Conv2d(12,32,3,1,1,bias=False)
self.bn = nn.BatchNorm2d(32)
self.act = nn.LeakyReLU(0.1)
def forward(self, x):
y0 = ym.cat(x)
y0 = ym.conv(y0)
y1 = ym.bn(y0)
y2 = ym.act(y1)
return [y0, y1, y2]
np.random.seed(0)
img = np.random.randint(0,255,(1,12,14,3)).astype(np.float32)
img_torch = torch.from_numpy(img.transpose(0,3,1,2).astype(np.float32))
w1 = np.random.rand(32,12,3,3).astype(np.float32)*0.1
bw1 = np.random.rand(32).astype(np.float32)*0.1
bb1 = np.random.rand(32).astype(np.float32)
bm1 = np.random.rand(32).astype(np.float32)
bv1 = np.abs(np.random.rand(32).astype(np.float32))*0.1
ym = YM()
km = KM()
ym.conv.weight = nn.Parameter(torch.from_numpy(w1))
ym.bn.weight = nn.Parameter(torch.from_numpy(bw1))
ym.bn.bias = nn.Parameter(torch.from_numpy(bb1))
ym.bn.running_mean = torch.from_numpy(bm1)
ym.bn.running_var = torch.from_numpy(bv1)
km.layers[6].set_weights([w1.transpose(2,3,1,0)])
km.layers[7].set_weights([bw1, bb1, bm1, bv1])
ym.eval()
ym.bn.track_running_stats = True
with torch.no_grad():
t0 = Ym(ym, img_torch/255.-0.5)
k0 = km.predict(img/255.-0.5)
for i in range(len(t0)):
print(t0[i].shape, k0[i].shape)
Key = 1
print(t0[Key][0,0,:,:].detach().numpy())
print(k0[Key][0,:,:,0])
>>>>>>>>>>>
[[ 0.71826 0.72964 0.73189 0.70224 0.74954 0.72928 0.7524]
[ 0.71305 0.68717 0.68581 0.7242 0.73491 0.71925 0.70781]
[ 0.70145 0.66769 0.6857 0.70804 0.73533 0.73165 0.72006]
[ 0.6758 0.69231 0.71173 0.71325 0.72097 0.71414 0.75782]
[ 0.68255 0.72283 0.71273 0.7226 0.71788 0.68119 0.72556]
[ 0.70452 0.68088 0.74389 0.73558 0.72853 0.7174 0.74389]]
[[ 0.71953 0.73082 0.73306 0.70365 0.75056 0.73046 0.75339]
[ 0.71437 0.6887 0.68736 0.72543 0.73605 0.72052 0.70918]
[ 0.70287 0.66939 0.68724 0.7094 0.73647 0.73282 0.72133]
[ 0.67743 0.6938 0.71306 0.71457 0.72223 0.71545 0.75877]
[ 0.68413 0.72407 0.71405 0.72384 0.71916 0.68278 0.72678]
[ 0.70592 0.68246 0.74495 0.73671 0.72972 0.71868 0.74496]]
tt = t0[Key].detach().numpy().transpose(0,2,3,1)
kk = k0[Key]
np.abs(tt-kk).max()
>>>>>>>>>>
0.078752756
gamma, beta = bw1[0], bb1[0]
mu, var = bm1[0], bv1[0]
x_p = t0[0][0,0,0,0]
print(gamma,beta,mu,var,x_p)
eps = 1e-10
def bn0(x_p, mu, var, gamma, beta):
# wiki
xhat = (x_p - mu)/np.sqrt(var + eps)
_x = xhat * gamma + beta
return _x
def bn1(x_p, mu, var, gamma, beta):
# pytorch cpp
inv_var = 1/ np.sqrt(var + eps)
alpha_d = gamma * inv_var
beta_d = beta - mu * inv_var * gamma
return x_p * alpha_d + beta_d
def bn2(x_p, mu, var, gamma, beta):
# tensorflow cpp
inv_var = np.sqrt(var + eps)
xhat = (x_p - mu)*inv_var
_x = xhat * gamma + beta
return _x
print(bn0(x_p, mu, var, gamma, beta))
print(bn1(x_p, mu, var, gamma, beta))
print(bn2(x_p, mu, var, gamma, beta))
print(bn2(x_p, mu, 1/var, gamma, beta))
>>>>>>>>
0.048011426 0.87305844 0.67954195 0.059197646 tensor(-0.26256)
tensor(0.68715)
tensor(0.68715)
tensor(0.86205)
tensor(0.68715)
|
Keras seems to use a different default epsilon value (1e-3) than PyTorch (1e-5).
The "var" input in the TensorFlow source code seems to have already been inverted (1/moving_variance) somewhere.
Besides batchnorm, the padding strategy of TensorFlow vs PyTorch may yield different outputs. It is recommended to use ZeroPadding2D in TensorFlow to specify the padding explicitly and use a valid-padded Conv2D afterwards when the stride is greater than 1 (for transferring weights from PyTorch to TensorFlow).
The accumulated error can be big through the whole network. There is around 0.6 maximum error before the final activation function for a small network of ~70 layers.
| https://stackoverflow.com/questions/63177631/ |
Pytorch guide not using optim to train | I am currently working through a guide on the Pytroch website here: https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html
I have done pytorch projects before and they have always made use of an optimizer. This guide instead uses the code here:
# Add parameters' gradients to their values, multiplied by learning rate
for p in rnn.parameters():
p.data.add_(p.grad.data, alpha=-learning_rate)
I was confused by this and wanted to know why this works. Additionally I tried to rewrite the code using an optimizer and it was unable to learn. It is using a recurrent neural network which may be the reason, but I am unsure why. Thanks!
| Why do you expect it to not work? Basically what it is doing is manually implementing an optimizer. p.data is the stored value of the parameter. It also provides an internal function add_ that calculates +=. Once loss.backward() is called, pytorch also calculates and stores the gradient. It is simply taking the gradient value from the backward pass and updating the parameters to perform gradient descent. There is no reason an optimizer shouldn't work here either, but I can't help with that unless you give more info.
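For reference, a sketch of what the optimizer-based version of the tutorial's train() function could look like (rnn, criterion and learning_rate are assumed to be the ones defined in the tutorial):
import torch.optim as optim
optimizer = optim.SGD(rnn.parameters(), lr=learning_rate)
def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()
    optimizer.zero_grad()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    optimizer.step()   # replaces the manual p.data.add_(p.grad.data, alpha=-learning_rate) loop
    return output, loss.item()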
| https://stackoverflow.com/questions/63181394/ |
RuntimeError: The expanded size of the tensor (7) must match the existing size (128) at non-singleton dimension 3 | When I run AdaIN code
def adaptive_instance_normalization(content_feat, style_mean, style_std):
size = content_feat.size()
content_mean, content_std = calc_mean_std(content_feat)
normalized_feat = (content_feat - content_mean.expand(
size)) / content_std.expand(size)
return normalized_feat * style_std.expand(size) + style_mean.expand(size)
I got the following error
RuntimeError: The expanded size of the tensor (7) must match the existing size (128) at non-singleton dimension 3. Target sizes: [100, 128, 7, 7]. Tensor sizes: [100, 128]
| You should be more precise and descriptive while explaining your issue. You cannot expect people to read your mind or be familiar with your exact problem. So first, what should the expected output be, and which line is failing? I guess from the expand calls that you would like to enable broadcasting. Unfortunately, as you can read in the official documentation, expand works the same as usual broadcasting, and adds the required extra dimensions at the beginning, not the end.
So you should use reshape(size[:2] + (1, 1)) in place of expand(size).
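A minimal sketch of the fixed function (assuming calc_mean_std returns [N, C] tensors, as the error message suggests):
def adaptive_instance_normalization(content_feat, style_mean, style_std):
    N, C = content_feat.size()[:2]
    content_mean, content_std = calc_mean_std(content_feat)
    content_mean = content_mean.reshape(N, C, 1, 1)
    content_std = content_std.reshape(N, C, 1, 1)
    style_mean = style_mean.reshape(N, C, 1, 1)
    style_std = style_std.reshape(N, C, 1, 1)
    normalized_feat = (content_feat - content_mean) / content_std
    return normalized_feat * style_std + style_mean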
| https://stackoverflow.com/questions/63185421/ |
How should I apply a variational autoencoder in a low-dimensional real value case? | I am trying to apply a VAE to a simple toy example to familiarize myself with its properties. However, I got stuck training the model. The total loss and the reconstruction error do not seem to decrease.
The toy example is listed below
random generate 5000 observation from a 2-dimensional multivariate normal distribution.
apply a transformation f, [x,y] --> [sin(x),sin(y)]
train a VAE with 1 hidden layer with 5 neuron in both encoder and decoder. The VAE has 2 latent variables.
In this example, I am not able to decrease the training loss to a sufficiently low level, and the reconstruction is also messy.
I have made several attempts
increase the hidden layers to 2 and 3 (this does not help)
--> I think it is not due to the complexity of the model
check the network on MNIST (the result is comparable with the example I found on other sources)
--> the model design is right
delete the KL divergence in the Loss function (the model can reconstruct well)
--> the model design is right
I try to balance the weight on KL divergence
--> when beta on KL divergence is low, it reconstruct well, but latent space is too far away from standard normal, when beta on KL divergence is high, it can not reconstruct well, but the latent space perform well.
I now suspect several potential reasons, but I can not distinguish which one could be the reason.
It seems that I need to find a balance between weight on KL divergence and reconstruction loss
Is it appropriate to use MSE loss + KL divergence as loss function?
In low-dimension, the VAE does not perform because the ELBO is not so tight?
Could any one help?
The code is attached.
This part defines the model
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
class VAE_Encoder(nn.Module):
def __init__(self,input_size,hidden_size_list,latent_size):
"""
The class is the builder of the encoder part of VAE. It does not need to be directly called.
:param input_size: int
:param hidden_size_list: list(int)
:param latent_size: int
"""
super().__init__()
encoder_size = [input_size]+hidden_size_list
encoder_layers = []
for in_size,out_size in zip(encoder_size[:-1],encoder_size[1:]):
encoder_layers.append(nn.Linear(in_size,out_size))
encoder_layers.append(nn.ReLU())
self.encoder = nn.Sequential(*encoder_layers)
self.encoder_mu = nn.Linear(encoder_size[-1],latent_size)
self.encoder_logvar = nn.Linear(encoder_size[-1],latent_size)
def encode(self,x):
return self.encoder(x)
def encode_gaussian_param(self,encode_x):
return self.encoder_mu(encode_x),self.encoder_logvar(encode_x)
def reparametrize(self,mu,logvar):
std = torch.exp(0.5*logvar)
eps = torch.randn_like(std)
return mu+eps*std
def forward(self,x):
encode_x = self.encoder(x)
mu,logvar = self.encode_gaussian_param(encode_x)
z = self.reparametrize(mu,logvar)
return z,mu,logvar
class VAE_Decoder(nn.Module):
def __init__(self,input_size,hidden_size_list,latent_size):
"""
The class is the builder of the decoder part of VAE. It does not need to be directly called.
:param input_size: int
:param hidden_size_list: list(int)
:param latent_size: int
"""
super().__init__()
decoder_size = [latent_size] + hidden_size_list
decoder_layers = []
for in_size,out_size in zip(decoder_size[:-1],decoder_size[1:]):
decoder_layers.append(nn.Linear(in_size,out_size))
decoder_layers.append(nn.ReLU())
decoder_layers.append(nn.Linear(decoder_size[-1],input_size))
self.decoder = nn.Sequential(*decoder_layers)
def forward(self,z):
return self.decoder(z)
class VAE(nn.Module):
def __init__(self,input_size,encoder_size,latent_size,decoder_size=None):
"""
The class builds the whole VAE. It consists of a encoder model and a decoder model.
The user has flexibility to choose the number of layers in each part of the model by
setting the encoder size and decoder size.
:param input_size: int
:param encoder_size: list(int)
:param latent_size: int
:param decoder_size: list(int)
"""
super().__init__()
if decoder_size is None:
decoder_size = encoder_size[::-1]
self.encoder = VAE_Encoder(input_size,encoder_size,latent_size)
self.decoder = VAE_Decoder(input_size,decoder_size,latent_size)
def decode(self,z):
return self.decoder(z)
def forward(self,x):
z,mu,logvar = self.encoder(x)
x = self.decoder(z)
return x,mu,logvar
def simple_vae_loss(real,recon,mu,logvar,penalty=1):
MSE = nn.functional.mse_loss(recon,real,reduction="sum")
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return MSE+penalty*KLD
This part generate the toy example
def generate_value(x,y):
v1 = np.sin(x)
v2 = np.sin(y)
x = np.zeros(x.shape)
y = np.zeros(y.shape)
return (v1,v2)
rand_number = torch.randn((5000,2))
x = rand_number.numpy()[:,0]
y = rand_number.numpy()[:,1]
new_value = generate_value(x,y)
new_x = new_value[0].reshape((5000,1))
new_y = new_value[1].reshape((5000,1))
new_data = np.concatenate((new_x,new_y),axis=1)
| Just to pose a potential reason that leads to the result.
In a paper introducing Hyperspherical Variational Auto-Encoders, the authors point out a problem with using a Gaussian prior in the low-dimensional setting.
The problem is called origin gravity. Quoting the paper:
In low dimensions, the Gaussian density presents a concentrated probability mass around the origin, encouraging points to cluster in the center. This is particularly problematic when the data is divided into multiple clusters. Although an ideal latent space should separate clusters for each class, the normal prior will encourage all the cluster centers towards the origin. An ideal prior would only stimulate the variance of the posterior without forcing its mean to be close to the center. A prior satisfying these properties is a uniform over the entire space. Such a uniform prior, however, is not well defined on the hyperplane.
To verify whether this is the case, I generated 100 synthetic data points from the VAE model. The most interesting finding is that all the latent variables are concentrated at the origin (0,0).
"If we decrease the weight on the KL divergence, the latent variable starts to spread out. This is consistent with the origin gravity. When the gaussian prior is strong, the latent variable starts to cluster around origin. In this case, we have to reduce the influence of gaussian prior by reducing the weight on KL divergence."
I guess this is one of the reasons.
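If you want to try the workaround the quoted observation suggests (down-weighting the KL term), a sketch reusing the question's loss function and its torch/nn imports (the value of beta is an assumption to tune):
def beta_vae_loss(real, recon, mu, logvar, beta=0.1):
    mse = nn.functional.mse_loss(recon, real, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + beta * kld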
| https://stackoverflow.com/questions/63188002/ |
Is it normal that accuracy decreases when advancing to the next epoch? | I’m training a CNN to predict digits using the MNIST database. I’m doing Data Augmentation and for some reason accuracy sharply decreases when advancing to next epoch (iteration 60 in the image)
It has to do with data augmentation (transform = my_transforms in the code) because when I deactivate augmentation (transform = None) accuracy doesn't decrease when advancing to next epoch. But I can't explain why. Does anyone have an idea why this happens?
my_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomCrop((25,25)),
transforms.Resize((28,28)),
transforms.RandomRotation(degrees=45, fill=255),
transforms.RandomVerticalFlip(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor()
])
dataset = MNISTDataset(transform = my_transforms)
train_loader = DataLoader(dataset = dataset, batch_size = 1000, shuffle=True)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1,10,kernel_size=5)
self.pool = nn.MaxPool2d(kernel_size=2,stride=2)
self.conv2 = nn.Conv2d(10,20,kernel_size=5)
self.fc1 = nn.Linear(20*4*4, 64)
self.fc2 = nn.Linear(64, 10)
def forward(self,x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1,20*4*4)
x = F.relu(self.fc1(x))
x = F.softmax(self.fc2(x), dim=1)
return x
net = Net()
loss_function=nn.NLLLoss()
optimizer=optim.Adam(net.parameters())
EPOCHS=2
iteracion = 0
for epoch in range(EPOCHS):
for data in train_loader:
inputs, labels = data
inputs = inputs.view(-1,1,28,28)
net.zero_grad()
probabilities=net(inputs)
matches=[torch.argmax(i)==int(j) for i,j in zip(probabilities,labels)]
in_batch_acc=matches.count(True)/len(matches)
loss=loss_function(torch.log(probabilities), labels)
print('Loss:', round(float(loss), 3))
print('In-batch acc:', round(in_batch_acc, 2))
iteracion += 1
loss.backward()
optimizer.step()
| I replicated your model with data augmentation and tried to plot the Accuracy and Loss and it seems that the problem is the way you are plotting.
In the following lines I attach my code and Loss and Accuracy plots:
Code:
# -- Imports -- #
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import transforms, datasets
import torch.nn.functional as F
import matplotlib.pyplot as plt
# -- Data Loader -- #
my_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomCrop((25,25)),
transforms.Resize((28,28)),
transforms.RandomRotation(degrees=45, fill=255),
transforms.RandomVerticalFlip(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor()
])
dataset = datasets.MNIST('../data', train=True, download=True,
transform = transforms.Compose([
transforms.RandomCrop((25,25)),
transforms.Resize((28,28)),
transforms.RandomRotation(degrees=45, fill=255),
transforms.RandomVerticalFlip(p=0.1),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor()
]))
train_loader = DataLoader(dataset = dataset, batch_size = 1000, shuffle=True)
# -- Define Model -- #
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1,10,kernel_size=5)
self.pool = nn.MaxPool2d(kernel_size=2,stride=2)
self.conv2 = nn.Conv2d(10,20,kernel_size=5)
self.fc1 = nn.Linear(20*4*4, 64)
self.fc2 = nn.Linear(64, 10)
def forward(self,x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1,20*4*4)
x = F.relu(self.fc1(x))
x = F.softmax(self.fc2(x), dim=1)
return x
device = 'cuda'
net = Net()
net.to(device)
loss_function=nn.NLLLoss()
optimizer=optim.Adam(net.parameters())
EPOCHS=2
iteracion = 0
accuracy = []
loss_record = []
for epoch in range(EPOCHS):
for data in train_loader:
inputs, labels = data
inputs = inputs.view(-1,1,28,28)
inputs, labels = inputs.to(device), labels.to(device)
# -- Forward -- #
net.zero_grad()
probabilities=net(inputs)
matches=[torch.argmax(i)==int(j) for i,j in zip(probabilities,labels)]
in_batch_acc=matches.count(True)/len(matches)
loss=loss_function(torch.log(probabilities), labels)
# -- Statistics -- #
accuracy.append(in_batch_acc)
loss_record.append(loss)
print('Loss:', round(float(loss), 3))
print('In-batch acc:', round(in_batch_acc, 2))
iteracion += 1
loss.backward()
optimizer.step()
# -- Accuracy plot -- #
iterations = range(0,120)
plt.plot(iterations, accuracy, 'g', label='Accuracy')
plt.title('Accuracy')
plt.xlabel('Iterations')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# -- Loss plot -- #
plt.plot(iterations, loss_record, label='Loss')
plt.title('Loss')
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.legend()
plt.show()
Plots:
Accuracy plot
Loss plot
As you can see, there are no jumps at iteration 61 -> next epoch
| https://stackoverflow.com/questions/63194095/ |
Why Pytorch is slower than Tensorflow in read data | I tried two versions of code for iterating over the MNIST data to compare the elapsed time.
Pytorch version
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import torch
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import time
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('~/data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
# transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=30000, shuffle=True,pin_memory=True,num_workers=4)
tic = time.time()
for epoch in range(0, 5):
for batch_idx, (data, target) in enumerate(train_loader):
continue
toc=time.time()
print('elapsed time:',toc-tic)
Tensorflow 2.x version
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import tensorflow as tf
from tensorflow.keras import datasets
import time
(xs, ys), (xs_, ys_) = datasets.mnist.load_data()
# print('datasets:', xs.shape, ys.shape, xs.min(), xs.max())
xs = tf.convert_to_tensor(xs, dtype=tf.float32) / 255.
db = tf.data.Dataset.from_tensor_slices((xs, ys))
db = db.batch(30000)
tic = time.time()
for epoch in range(5):
for step, (x, y) in enumerate(db):
continue
toc = time.time()
print('elapsed time:', toc - tic)
The result: TF took about 2s and PyTorch about 15s.
So, why is PyTorch slower than TensorFlow at reading data?
Am I setting it wrong?
Thanks!
| Shijia li,
You are doing everything correctly; the data-reading code used in PyTorch is simply slower (perhaps for a reason).
I looked into Pytorch source code and found the following:
train_loader generates indices for each batch (a 30000-long list)
train_loader passes these indices to a fetcher.
Fetcher collects data from the dataset, but it only does it one record at a time, in cycle called by line 44 in fetch.py module:
data = [self.dataset[idx] for idx in possibly_batched_index]
I suspect that this is the main source of difference with TF - it may be producing whole batch as a single slice in one optimized operation (vs. Python cycle in Torch)
data gets compacted from list of tuples of tensors into two large tensors by collate_fn and delivered to the user.
If you want to accelerate your code - convert the data into Tensors (equivalent of transforms), then generate the indices yourself and get slices of the data and targets without calling the loader. Or pack the resulting tensors into TensorDataset which should be much faster than VisionDataset (used for MNIST).
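A sketch of that TensorDataset approach (datasets here is torchvision.datasets as in the question; the whole MNIST training set easily fits in memory):
from torch.utils.data import TensorDataset, DataLoader
mnist = datasets.MNIST('~/data', train=True, download=True)
data = mnist.data.float().div_(255.).unsqueeze(1)   # [60000, 1, 28, 28], one tensor for all images
targets = mnist.targets
train_loader = DataLoader(TensorDataset(data, targets), batch_size=30000, shuffle=True)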
Someone at Pytorch development team may want to take a look at this.
| https://stackoverflow.com/questions/63202478/ |
How to add hidden neurons to a Pytorch RNN | How can I add hidden neurons to a Recurrent Neural Network in pytorch? In my understanding, torch.nn.RNN has n neurons with inputs being the input and the hidden state, where n is equal to the size of the hidden state.
How can I add additional layers before the neurons go back to the hidden state? E.g. If i only have 1 input and 1 output, but want to be able to model more complex functions?
I tried using the num_layers parameter but this just adds more layers of single neurons. I also tried using torch.nn.Sequential to stack individual RNNs with different sized inputs/outputs but this didnt work as Sequential objects dont seem to pass through additional parameters (h0, the initial hidden state).
I'm trying to model f(x)=sin(x) with the initial hidden state being the initial value of the sine wave (sin(x_0)), the inputs being x and the outputs being sin(x).
| You can't define an RNN without defining hidden neurons.
Let's look at the official example:
from torch import cat, zeros
from torch.nn import Module, Linear, LogSoftmax
from torch.nn.functional import relu
class RNNTutorial(Module):
def __init__(self, input_size, hidden_size,
output_size):
super(RNNTutorial, self).__init__()
self.hidden_size = hidden_size
size_sum = input_size + hidden_size
self.i2h = Linear(size_sum, hidden_size)
self.i2o = Linear(size_sum, output_size)
self.softmax = LogSoftmax(dim=1)
def forward(self, input_, hidden_):
combined = cat(tensors=(input_, hidden_), dim=1)
hidden_ = self.i2h(input=combined)
hidden_ = relu(hidden_)
output = self.i2o(input=combined)
output = self.softmax(input=output)
return output, hidden_
def init_hidden(self):
return zeros(1, self.hidden_size)
Above is a two-layer RNN structure. On the 1st layer
self.i2h = Linear(size_sum, hidden_size)
The hidden neuron input size is size_sum and the output size is hidden_size.
How do you add a neuron? You can change the parameter values.
For instance, with size_sum + 1 you now add one more hidden neuron.
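For the sine-wave toy problem from the question, a sketch of instantiating the class above with 1 input, 1 output and a larger hidden state (hidden_size=16 is an arbitrary choice):
import torch
rnn = RNNTutorial(input_size=1, hidden_size=16, output_size=1)
hidden = rnn.init_hidden()     # shape (1, 16)
x = torch.randn(1, 1)          # one time step, shape (1, input_size)
out, hidden = rnn(x, hidden)   # out has shape (1, 1)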
| https://stackoverflow.com/questions/63210533/ |
when importing pytorch microsoft visual C++ Redistributable is not installed | I work in a windows machine with GPU.
I have installed pytorch in a conda environment with
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
then I run python and inside of python I do import torch and I get this error
Python 3.6.10 |Anaconda, Inc.| (default, May 7 2020, 19:46:08) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Microsoft Visual C++ Redistributable is not installed, this may lead to the DLL load failure.
It can be downloaded at https://aka.ms/vs/16/release/vc_redist.x64.exe
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\aliag\anaconda3\envs\pytorchPractice\lib\site-packages\torch\__init__.py", line 127, in <module>
raise err
OSError: [WinError 126] The specified module could not be found.
Error loading "C:\Users\aliag\anaconda3\envs\pytorchPractice\lib\site-packages\torch\lib\asmjit.dll"
or one of its dependencies.
>>>
How can I correct this error?
| Get the Microsoft Visual C++ Redistributable installer from the link in the error; in this case it is this one.
Run the installer and, when it has finished, launch your shell with conda configured again.
| https://stackoverflow.com/questions/63212096/ |
PyTorch tensors have same value after being added to a list | While learning about gradients and optimization in PyTorch, I wanted to plot how the loss function values change against the weight values. I tried this with both numpy and torch so I could compare them, storing lists of gradient and loss-function values along the way.
It works with numpy:
def gradient(x,y, y_predicted):
return np.dot(2*x, y_predicted-y).mean()
dw = gradient(x,y,y_pred)
dw_list.append(dw)
[-120.0, -112.8, -106.032, -99.67008, -93.68988, -88.06848, -82.78437, -77.81731, -73.14827, -68.75938]
It doesn't work with torch:
for epoch in range(n_iters):
# prediction = forward pass
y_pred = forward(x)
# loss
l = loss(y, y_pred)
loss_list.append(l)
print('loss')
print(l_list)
# gradients = backward pass
l.backward() # calculate w.grad = dl/dw and cumulative
# update weights
with torch.no_grad():
w -= learning_rate * w.grad
print(f'w.grad before zero setting = {w.grad}')
dw_list.append(w.grad)
print(dw_list)
#print(f'w.grad before zero setting = {w.grad}')
# zero gradients
w.grad.zero_()
[tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485), tensor(-6.9485)]
Why does dw_list.append(dw) work with numpy, but dw_list.append(w.grad) not work with torch?
Why does the whole list end up filled with only the newest grad value at each iteration with the torch tensor?
| w.grad is a tensor; it (the same tensor) is appended to the list at each iteration, so the list contains copies of the same tensor, not copies of its value at each point in time as you'd probably intend.
The standard way of handling this is to use:
dw_list.append(w.grad.detach().cpu().numpy())
Please have a look at this discussion for why detach() is necessary:
https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489/6
By contrast, np.mean() returns a new python float object every time it is called, and so the values are different at the end. The list append() is not doing anything different in the two cases.
P.S. I think this would also work:
dw_list.append(w.grad.clone())
however it would keep the cloned tensors in the graph (and on the gpu, if the originals were on the gpu). That may or may not be what you want.
| https://stackoverflow.com/questions/63213881/ |
Number of layers vs list(net.parameters()) | New to convolutional neural nets so sorry if this doesn't make much sense. I have this code:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
and I think this has 5 layers. However, when I print len((list(net.parameters())) I get 10. Shouldn't this be a list of size 5 with parameters for each layer?
| Quick answer : You get an extra parameter array for each layer containing the bias vector associated to the layer.
Detailed answer:
I will try to guide you in my process of investigating your questions.
It seems like a good idea to see what our 10 parameters are :
for param in net.parameters():
print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([6])
<class 'torch.nn.parameter.Parameter'> torch.Size([16, 6, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([16])
<class 'torch.nn.parameter.Parameter'> torch.Size([120, 400])
<class 'torch.nn.parameter.Parameter'> torch.Size([120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84, 120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10, 84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10])
We can recognize our 5 layers and an extra line for each layer. For instance, if we look at a specific layer for instance the first conv layer, we get :
for param in net.conv1.parameters():
print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([6])
So now that we know we have two arrays of parameters per layer, the question is why. The first 6*3*5*5 array corresponds to your 6 kernels of size 5*5 with 3 channels, the second one corresponds to the bias associated to each of your kernels. Mathematically speaking, to compute the value at the next layer associated to a given kernel, you make the convolution between the area under your desired pixel and the kernel and you add a real number. That number is called the bias and it is empirically proven that using a bias gives better results.
Now you can also create a layer without bias, and then you will only get one parameter array :
layer = nn.Conv2d(3,6,5, bias= False)
for param in layer.parameters():
print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
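And if you just want the total number of trainable values across all 10 parameter arrays, a quick sketch:
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(n_params)  # = 6*3*5*5 + 6 + 16*6*5*5 + 16 + 120*400 + 120 + 84*120 + 84 + 10*84 + 10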
| https://stackoverflow.com/questions/63215362/ |
Fine-Tuning DistilBertForSequenceClassification: Is not learning, why is loss not changing? Weights not updated? |
I am relatively new to PyTorch and Huggingface-transformers and experimented with DistillBertForSequenceClassification on this Kaggle-Dataset.
from transformers import DistilBertForSequenceClassification
import torch.optim as optim
import torch.nn as nn
from transformers import get_linear_schedule_with_warmup
n_epochs = 5 # or whatever
batch_size = 32 # or whatever
bert_distil = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
#bert_distil.classifier = nn.Sequential(nn.Linear(in_features=768, out_features=1), nn.Sigmoid())
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(bert_distil.parameters(), lr=0.1)
X_train = []
Y_train = []
for row in train_df.iterrows():
seq = tokenizer.encode(preprocess_text(row[1]['text']), add_special_tokens=True, pad_to_max_length=True)
X_train.append(torch.tensor(seq).unsqueeze(0))
Y_train.append(torch.tensor([row[1]['target']]).unsqueeze(0))
X_train = torch.cat(X_train)
Y_train = torch.cat(Y_train)
running_loss = 0.0
bert_distil.cuda()
bert_distil.train(True)
for epoch in range(n_epochs):
permutation = torch.randperm(len(X_train))
j = 0
for i in range(0,len(X_train), batch_size):
optimizer.zero_grad()
indices = permutation[i:i+batch_size]
batch_x, batch_y = X_train[indices], Y_train[indices]
batch_x.cuda()
batch_y.cuda()
outputs = bert_distil.forward(batch_x.cuda())
loss = criterion(outputs[0],batch_y.squeeze().cuda())
loss.requires_grad = True
loss.backward()
optimizer.step()
running_loss += loss.item()
j+=1
if j == 20:
#print(outputs[0])
print('[%d, %5d] running loss: %.3f loss: %.3f ' %
(epoch + 1, i*1, running_loss / 20, loss.item()))
running_loss = 0.0
j = 0
[1, 608] running loss: 0.689 loss: 0.687
[1, 1248] running loss: 0.693 loss: 0.694
[1, 1888] running loss: 0.693 loss: 0.683
[1, 2528] running loss: 0.689 loss: 0.701
[1, 3168] running loss: 0.690 loss: 0.684
[1, 3808] running loss: 0.689 loss: 0.688
[1, 4448] running loss: 0.689 loss: 0.692 etc...
Regardless of what I tried, the loss never decreased (it sometimes even increased), nor did the predictions get better. It seems to me that I forgot something so that the weights are actually not updated. Does someone have an idea?
What I tried:
Different loss functions
BCE
CrossEntropy
even MSE-loss
One-Hot Encoding vs A single neuron output
Different learning rates, and optimizers
I even changed all the targets to one single label, but even then the network didn't converge.
| Looking at running loss and minibatch loss is easily misleading. You should look at epoch loss, because the inputs are the same for every loss.
Besides, there are some problems in your code; after fixing all of them the behavior is as expected: the loss slowly decreases after each epoch, and it can also overfit to a small minibatch. Please look at the code; changes include using model(x) instead of model.forward(x), calling cuda() only once, a smaller learning rate, etc.
Tuning and fine-tuning ML models is difficult work.
n_epochs = 5
batch_size = 1
bert_distil = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(bert_distil.parameters(), lr=1e-3)
X_train = []
Y_train = []
for row in train_df.iterrows():
seq = tokenizer.encode(row[1]['text'], add_special_tokens=True, pad_to_max_length=True)[:100]
X_train.append(torch.tensor(seq).unsqueeze(0))
Y_train.append(torch.tensor([row[1]['target']]))
X_train = torch.cat(X_train)
Y_train = torch.cat(Y_train)
running_loss = 0.0
bert_distil.cuda()
bert_distil.train(True)
for epoch in range(n_epochs):
permutation = torch.randperm(len(X_train))
for i in range(0,len(X_train), batch_size):
optimizer.zero_grad()
indices = permutation[i:i+batch_size]
batch_x, batch_y = X_train[indices].cuda(), Y_train[indices].cuda()
outputs = bert_distil(batch_x)
loss = criterion(outputs[0], batch_y)
loss.backward()
optimizer.step()
running_loss += loss.item()
print('[%d] epoch loss: %.3f' %
(epoch + 1, running_loss / len(X_train) * batch_size))
running_loss = 0.0
Output:
[1] epoch loss: 0.695
[2] epoch loss: 0.690
[3] epoch loss: 0.687
[4] epoch loss: 0.685
[5] epoch loss: 0.684
| https://stackoverflow.com/questions/63218778/ |
RuntimeError: DataLoader worker (pid 27351) is killed by signal: Killed | I'm running the data loader below, which applies a filter to a microscopy image prior to training in order to count the red and green cells; this code filters the red cells. Since I applied this filter I keep getting the error message above. I have tried increasing the memory allocation to the maximum allowance possible but that didn't help. Is there a way I could modify the filter so it isn't causing this issue, please? Many thanks in advance
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms, utils
#from torchvision.transforms import Grayscalei
import pandas as pd
import pdb
import cv2
class CellsDataset(Dataset):
# a very simple dataset
def __init__(self, root_dir, transform=None, return_filenames=False):
self.root = root_dir
self.transform = transform
self.return_filenames = return_filenames
self.files = [os.path.join(self.root,filename) for filename in os.listdir(self.root)]
self.files = [path for path in self.files
if os.path.isfile(path) and os.path.splitext(path)[1]=='.png']
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
path = self.files[idx]
image = cv2.imread(path)
sample = image.copy()
# set blue and green channels to 0
sample[:, :, 0] = 0
sample[:, :, 1] = 0
if self.transform:
sample = self.transform(sample)
if self.return_filenames:
return sample, path
else:
return sample
| I met a similar problem before.
One possible solution is to disable cv2 multi-processing by
def __getitem__(self, idx):
import cv2
cv2.setNumThreads(0)
# ...
in your dataloader. It might be because cv2's multi-processing conflicts with torch's DataLoader multi-processing. It did not work for me since my case does not involve OpenCV, but you might try this first.
In my case, it's worth mentioning that torch's multiprocessing with CUDA access must use spawn or forkserver instead of fork to spawn new processes. To do this, you should
if __name__ == '__main__':
import multiprocessing
multiprocessing.set_start_method('spawn')
# ... all the rest of the code
Notice you should put all the remaining code in the if __name__ == '__main__' block, including the imports, because other imports (like cv2) might set the start method to fork and then your loaders will not work.
| https://stackoverflow.com/questions/63221468/ |
How can I cross-validate by Pytorch and Optuna | I want to use cross-validation against the official Optuna and pytorch-based sample code (https://github.com/optuna/optuna/blob/master/examples/pytorch_simple.py).
I thought about splitting the data for cross-validation and trying parameter tuning for each fold, but it seems that the average accuracy of each parameter cannot be obtained because the parameters that can be checked in study.trials_dataframe() are different each time.
| I think we need to evaluate all folds and calculate the mean inside an objective function. I created an example notebook, so please take a look.
In the notebook, I slightly modified the objective function to pass the dataset with the arguments and added a wrapper function objective_cv to call the objective function with the split dataset. Then, I optimized the objective_cv instead of the objective function.
def objective(trial, train_loader, valid_loader):
# Remove the following line.
# train_loader, valid_loader = get_mnist()
...
return accuracy
def objective_cv(trial):
# Get the MNIST dataset.
dataset = datasets.MNIST(DIR, train=True, download=True, transform=transforms.ToTensor())
fold = KFold(n_splits=3, shuffle=True, random_state=0)
scores = []
for fold_idx, (train_idx, valid_idx) in enumerate(fold.split(range(len(dataset)))):
train_data = torch.utils.data.Subset(dataset, train_idx)
valid_data = torch.utils.data.Subset(dataset, valid_idx)
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=BATCHSIZE,
shuffle=True,
)
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=BATCHSIZE,
shuffle=True,
)
accuracy = objective(trial, train_loader, valid_loader)
scores.append(accuracy)
return np.mean(scores)
study = optuna.create_study(direction="maximize")
study.optimize(objective_cv, n_trials=20, timeout=600)
| https://stackoverflow.com/questions/63224426/ |
Torch sum subsets of tensor | If the tensor is of shape [20, 5], I need to take 10 rows at a time and sum them, so the result is [2, 5].
eg:
shape[20,5] -> shape[2, 5] (sum 10 at a time)
shape[100, 20] -> shape[10,20] (sum 10 at a time)
Is there any faster/optimal way to do this?
eg:
[[1, 1], [1, 2], [3, 4], [1,2]] -> I want [[2, 3], [4, 6]] by taking the sum of every 2 rows.
| It is not completely clear, but I cannot use a comment for this, so.
For the first case you have:
t1 = torch.tensor([[1., 1.], [1., 2.], [3., 4.], [1.,2.]])
t1.shape #=> torch.Size([4, 2])
t1
tensor([[1., 1.],
[1., 2.],
[3., 4.],
[1., 2.]])
To get the desired output you should reshape:
tr1 = t1.reshape([2, 2, 2])
res1 = torch.sum(tr1, axis = 1)
res1.shape #=> torch.Size([2, 2])
res1
tensor([[2., 3.],
[4., 6.]])
Let's take a tensor with all one elements (torch.ones) for the second case.
t2 = torch.ones((20, 5))
t2.shape #=> torch.Size([20, 5])
t2
tensor([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
So, reshaping to get the required (?) result:
tr2 = t2.reshape((2, 10, 5))
res2 = torch.sum(tr2, axis = 1)
res2.shape #=> torch.Size([2, 5])
res2
tensor([[10., 10., 10., 10., 10.],
[10., 10., 10., 10., 10.]])
Is this what you are looking for?
| https://stackoverflow.com/questions/63240702/ |
NameError: name 'base' is not defined, while running open AI gym in GOOGLE COLAB | I am going through the tutorial DQN reinforcement learning in Pytorch.org,https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
But when I am trying to render a screen and display it using python display, I get "name 'base' is not defined". Can anyone help me here? If you need any clarification about the question, I am here.
Thanks in advance
| Use the GPU on Colab. If that isn't enough, execute this cell at the beginning:
!apt install xvfb -y
!pip install pyvirtualdisplay
!pip install pyglet
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
| https://stackoverflow.com/questions/63250935/ |
Unable to create custom dataset and dataloader using torchtext | I have questions regarding building custom dataset and iterator using torchtext. I used the following code found in this post and modified based on my case:
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
text_field = Field(sequential=True, eos_token="[CLS]", tokenize=tokenizer)
label_field = Field(sequential=False, use_vocab=False)
data_fields = [("file", None),
("text", text_field),
("label", label_field)]
train, val = train_test_split(input_dt, test_size=0.1)
train.to_csv("train_output_path", index=False)
val.to_csv("val_output_path", index=False)
train, val = TabularDataset(path="path", train="train.csv", validation="val.csv",
format="csv", skip_header=True, fields=data_fields)
When it comes to text_field.build_vocab(train), I got this error: TypeError: '<' not supported between instances of 'list' and 'int'.
The only difference between my code and the post is the pre-trained word embeddings. In the post, the author used GloVe, whereas I use the XLNetTokenizer from the transformers package. I also searched for other posts that used a similar method, but they all used pre-trained word embeddings and therefore did not run into this issue.
Does anyone know how to fix this issue? Many thanks!
| I think since you are using a predefined tokenizer you don't need to build a vocab; instead you can follow these steps. Below is an example of how to do it using the BERT tokenizer.
sentences: a list of text data
labels: the labels associated with each sentence
###tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
# Tokenize all of the sentences and map the tokens to thier word IDs.
input_ids = []
attention_masks = []
# For every sentence...
for sent in sentences:
# `encode_plus` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
# (5) Pad or truncate the sentence to `max_length`
# (6) Create attention masks for [PAD] tokens.
encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 100, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)
# Print sentence 0, now as a list of IDs.
print('Original: ', sentences[0])
print('Token IDs:', input_ids[0])
### Now combine the input ids, masks and labels, and split the dataset:
from torch.utils.data import TensorDataset, random_split
# Combine the training inputs into a TensorDataset.
dataset = TensorDataset(input_ids, attention_masks, labels)
# Create a 90-10 train-validation split.
# Calculate the number of samples to include in each set.
train_size = int(0.90 * len(dataset))
val_size = len(dataset) - train_size
# Divide the dataset by randomly selecting samples.
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
print('{:>5,} training samples'.format(train_size))
print('{:>5,} validation samples'.format(val_size))
### Now create DataLoaders for these datasets
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
# The DataLoader needs to know our batch size for training, so we specify it
# here. For fine-tuning BERT on a specific task, the authors recommend a batch
# size of 16 or 32.
batch_size = 32
# Create the DataLoaders for our training and validation sets.
# We'll take training samples in random order.
train_dataloader = DataLoader(
train_dataset, # The training samples.
sampler = RandomSampler(train_dataset), # Select batches randomly
batch_size = batch_size # Trains with this batch size.
)
# For validation the order doesn't matter, so we'll just read them sequentially.
validation_dataloader = DataLoader(
val_dataset, # The validation samples.
sampler = SequentialSampler(val_dataset), # Pull out batches sequentially.
batch_size = batch_size # Evaluate with this batch size.
)
| https://stackoverflow.com/questions/63254843/ |
Why cant I use ONNX Runtime training with pytorch? | When I run
from onnxruntime.capi.ort_trainer import ORTTrainer
as stated at https://github.com/microsoft/onnxruntime/#training-start, I get this error:
ModuleNotFoundError: No module named 'onnxruntime.capi.ort_trainer'
What can I do to fix this?
I have onnxruntime installed via pip but I couldn't even find "ort_trainer.py" in [python path]/site_packages/onnx-runtime/capi
| You should build ort training from source.
https://www.onnxruntime.ai/docs/how-to/build.html#training
| https://stackoverflow.com/questions/63255037/ |
Pytorch and data augmentation: how to augmentate data with blur, rotations, etc | I want to do some data augmentation with Pytorch, but i don't know the libraries very well:
I tried this:
def gaussian_blur(img):
image = np.array(img)
image_blur = cv2.GaussianBlur(image,(65,65),10)
new_image = image_blur
im = Image.fromarray(new_image)
return im
data_transforms = {
'train': transforms.Compose([
transforms.RandomRotation([-8,+8]),
transforms.Lambda(gaussian_blur),
transforms.ColorJitter(brightness=0, contrast=0.4, saturation=0, hue=0),
transforms.Compose([transforms.Lambda(lambda x : x + torch.randn_like(x))]),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.RandomRotation([-8,+8]),
transforms.Lambda(gaussian_blur),
transforms.ColorJitter(brightness=0, contrast=0.4, saturation=0, hue=0),
transforms.Compose([transforms.Lambda(lambda x : x + torch.randn_like(x))]),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
Because the effects i want to do are: gaussian blur/rotation/contrast/gamma+random noise
But i have errors considering several aspects, like the size of the images doesn't match.
Any suggestions?
| If you want to apply Gaussian blur, there is already a torchvision transform for it:
torchvision.transforms.GaussianBlur(kernel_size, sigma=(0.1, 2.0))
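GaussianBlur requires torchvision >= 0.8. A hedged sketch of how it could slot into the Compose pipeline from the question (kernel size and sigma are illustrative values, and note that the additive-noise Lambda only works on tensors, so it belongs after ToTensor):
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation([-8, +8]),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),        # replaces the cv2 Lambda
    transforms.ColorJitter(brightness=0, contrast=0.4, saturation=0, hue=0),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.05 * torch.randn_like(x)),      # random noise on the tensor
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])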
| https://stackoverflow.com/questions/63288388/ |
CUDA out of memory runtime error, anyway to delete pytorch "reserved memory" | Like many othersm I'm getting a Runtime error of Cuda out of memory, but for some reason pytorch has reserved a large amount of it.
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 6.00 GiB total capacity; 4.31 GiB already allocated; 844.80 KiB free; 4.71 GiB reserved in total by PyTorch)
I've tried the torch.cuda.empy_cache(), but this isn't working either and none of the other CUDA out of memory posts have helped me either.
When I've checked my gpu usage(nvidia-smi) before running my python program, it is plenty free.
| From the given description it seems that the problem is not memory PyTorch allocated before execution; rather, CUDA ran out of memory while allocating data during the run. In other words, the 4.31 GiB was already allocated (not just cached) and the final 2 MiB block could not be allocated on top of it.
A possible solution that has worked for me is to decrease the batch size. Hope that helps!
| https://stackoverflow.com/questions/63293620/ |
Overlap and add tensor in pytorch using nn.fold | I have the following problem, I have tensor with shape (1,16198), first i would like to dive it into chunks so I used unfold like:
my_tensor.unfold(-1, 178, 89) -> tensor with shape (1, 182, 178)
which means 182 overlapping chunks of size 178, that's perfect.
But now I would like to undo the operation adding the overlapping chunks and get back a (1,16198) tensor. I believe that the fold method would do that, but I've burned through several hours trying to understand how it works to no progress at all and I cant seem to find a good explanation source for how to use it and I am afraid the official documentation is way to complex for my understanding.
| Got it to work last night. I had to transpose the (1, 182, 178) tensor to (1, 178, 182), then I used
torch.nn.functional.fold(my_tensor, output_size=(1, 16198), kernel_size=(1, 178), stride=(1, 89))
and I get back my tensor with the overlapping sections summed, thus completing the overlap-add algorithm. I'm still not entirely sure how fold works internally.
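For completeness, a minimal end-to-end sketch of the split-and-reassemble round trip (frame length and hop are the values from the question; the original length is recovered exactly only when the signal divides evenly into frames):
import torch
import torch.nn.functional as F

signal = torch.randn(1, 16198)
frame_len, hop = 178, 89

# split into overlapping frames: (1, n_frames, frame_len)
frames = signal.unfold(-1, frame_len, hop)
n_frames = frames.shape[1]

# fold expects (N, C * prod(kernel_size), L), so the frame axis goes last
blocks = frames.transpose(1, 2)                      # (1, frame_len, n_frames)

out_len = (n_frames - 1) * hop + frame_len           # length covered by the frames
recon = F.fold(blocks, output_size=(1, out_len),
               kernel_size=(1, frame_len), stride=(1, hop))
recon = recon.reshape(1, -1)                          # (1, out_len), overlaps summed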
| https://stackoverflow.com/questions/63293877/ |
Difference between hidden dimension and n_layers in rnn using pytorch | I am stuck between hidden dimension and n_layers. What I understood so far, is that n_layers in the parameters of RNN using pytorch, is number of hidden layers. If n_layers represents the number if hidden layers than what is hidden dimension?
| The documentation is actually quite clear about the difference. hidden_size is the number of features in the hidden state of the RNN, so increasing it means each hidden state (and therefore each output feature vector) is larger.
num_layers, on the other hand, simply stacks several RNN units on top of each other, each with hidden states of that same hidden size.
num_layers=2 would mean stacking two RNNs together to form a stacked
RNN, with the second RNN taking in outputs of the first RNN and
computing the final results
To explain with the image below: each RNN unit (blue rectangle) takes one hidden state h_n and one input. The hidden dimension determines the feature-vector size of h_n. At each timestep t (horizontal propagation in the image) the RNN takes an h_n and an input; if num_layers > 1 it produces an intermediate output and passes it to the layer above (vertical). So hidden_size determines the size of the horizontal h_n in the image, whereas num_layers determines the number of blue cells stacked on the vertical axis.
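A small sketch showing how the two arguments show up in the output shapes (the sizes are arbitrary):
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, num_layers=2, batch_first=True)

x = torch.randn(4, 10, 8)    # (batch, seq_len, input_size)
out, h_n = rnn(x)

print(out.shape)   # torch.Size([4, 10, 16]) -> hidden_size features per timestep
print(h_n.shape)   # torch.Size([2, 4, 16])  -> one final hidden state per layer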
| https://stackoverflow.com/questions/63294347/ |
Loading enormous custom dataset using IterableDataset | I have a huge dataset with features (input_id,input_mask,segment_id,label_id) saved in batches of 64 in a pickle file. I read this file, create a TensorDataset and pass to dataloader for training. Since the features file is too big to create the full TensorDataset, I want to convert TensorDataset to IterableDataset so that one batch of samples can be retrieved from the features file at a time and passed on to the dataloader. But while training, I get the following error:
TypeError: iter() returned non-iterator of type 'TensorDataset'
Following is the custom dataset class I wrote:
class MyDataset(IterableDataset):
def __init__(self,args):
self.args=args
def get_features(self,filename):
with open(filename, "rb") as f:
while True:
try:
yield pickle.load(f)
except EOFError:
break
def process(self,args):
if args.cached_features_file:
cached_features_file = args.cached_features_file
if os.path.exists(cached_features_file):
features=self.get_features(cached_features_file)
feat = next (features)
li=list(feat)
all_input_ids=torch.tensor([f.input_ids for f in li ], dtype=torch.long)
all_input_mask= torch.tensor([f.input_mask for f in li ], dtype=torch.long)
all_segment_ids= torch.tensor([f.segment_ids for f in li], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in li ], dtype=torch.long)
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
return dataset
def __iter__(self):
dataset=self.process(self.args)
return dataset
And I use it like this:
train_dataset=MyDataset(args)
train_dataloader = DataLoader(train_dataset, batch_size=args.train_batch_size)
I understand that TensorDataset is map-style requiring index while IterableDataset is iterable-style which is the reason for error. Even if I return a list/tuple of feature tensors instead of TensorDataset, I get similar error. Can someone please tell me how to load a batched dataset in the correct way with IterableDataset?
| I solved the problem by saving the dataset in a different manner. I saved the features as dictionary objects incrementally pickled in a pickle file and just read them off one at a time and pass on to dataloader for processing. Batching is done automatically by the dataloader. This is how the custom class looks now:
class MyDataset(IterableDataset):
def __init__(self,filename):
self.filename=filename
super().__init__()
def process(self,filename):
with open(filename, "rb") as f:
while True:
try:
yield pickle.load(f)
except EOFError:
break
def __iter__(self):
dataset=self.process(self.filename)
return dataset
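A hedged usage sketch (the pickle file name is hypothetical); the DataLoader takes care of batching the yielded samples:
from torch.utils.data import DataLoader

train_dataset = MyDataset("train_features.pkl")
train_dataloader = DataLoader(train_dataset, batch_size=32)

for batch in train_dataloader:
    ...  # each batch collates 32 of the pickled feature objects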
| https://stackoverflow.com/questions/63298647/ |
Index multidimensional torch tensor with array of variable length | I have a list of indices and a tensor with shape:
shape = [batch_size, d_0, d_1, ..., d_k]
idx = [i_0, i_1, ..., i_k]
Is there a way to efficiently index the tensor on each dim d_0, ..., d_k with the indices i_0, ..., i_k? (k is available only at run time)
The result should be:
tensor[:, i_0, i_1, ..., i_k] #tensor.shape = [batch_size]
At the moment I'm creating a tuple of slices, one for each dimension:
idx = (slice(tensor.shape[0]),) + tuple(slice(i, i+1) for i in idx)
tensor[idx]
but I would prefer something like:
tensor[:, *idx]
Example:
a = torch.randint(0,10,[3,3,3,3])
indexes = torch.LongTensor([1,1,1])
I would like to index only the last len(indexes) dimensions like:
a[:, indexes[0], indexes[1], indexes[2]]
but in the general case where I don't know how long indexes is.
Note: this answer does not help since it indexes all the dimensions, and does not work for a proper subset!
| Unfortunately you can't mix a slice with an unpacked list of indices directly in a subscript (e.g. a[:, *idx]). However, you can achieve the same thing by building the whole index as a tuple:
a[(slice(None), *idx)]
In Python, x[(exp1, exp2, ..., expN)] is equivalent to x[exp1, exp2, ..., expN]; the latter is just syntactic sugar for the former.
https://numpy.org/doc/stable/reference/arrays.indexing.html
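Applied to the example from the question, a quick sanity check:
import torch

a = torch.randint(0, 10, [3, 3, 3, 3])
idx = torch.LongTensor([1, 1, 1])

out = a[(slice(None), *idx)]       # equivalent to a[:, 1, 1, 1]
print(out.shape)                    # torch.Size([3]) -> one value per element of dim 0
assert torch.equal(out, a[:, 1, 1, 1])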
| https://stackoverflow.com/questions/63309876/ |
trying to import png images to torchvision | I am attempting to import images for use with torch and torchvision. But I am receiving this error:
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "c:\python38\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "c:\python38\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in default_collate
return [default_collate(samples) for samples in transposed]
File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 79, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "c:\python38\lib\site-packages\torch\utils\data\_utils\collate.py", line 81, in default_collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.Image.Image'>
Based on this post, I am converting them to Tensor:
https://discuss.pytorch.org/t/typeerror-default-collate-batch-must-contain-tensors-numpy-arrays-numbers-dicts-or-lists-found-class-imageio-core-util-array/62667
Here is my code:
import torch
import torchvision
import torchvision.transforms
from torchvision import datasets, transforms
transform = transforms.Compose([
transforms.Resize(256),
transforms.ToTensor()
])
dataset = torchvision.datasets.ImageFolder('datasets')
dataloader = torch.utils.data.DataLoader(dataset,
batch_size=16,
shuffle=True,
num_workers=12)
tensor_dataset = []
for i, data in enumerate(dataloader, 0):
Tensor = torch.tensor(data)
tensor_dataset.append(Tensor.flatten)
The first last part is from https://github.com/TerragonDE/PyTorch but I have had no success. The data I am trying to load is from here:
http://www.cvlibs.net/datasets/kitti/
How can I solve this?
UPDATE:
Thanks @trialNerror, but now I am getting this error:
ValueError Traceback (most recent call last)
<ipython-input-6-aa72392b67e8> in <module>
1 for i, data in enumerate(dataloader, 0):
----> 2 Tensor = torch.tensor(data)
3 tensor_dataset.append(Tensor.flatten)
ValueError: only one element tensors can be converted to Python scalars
This is what I have found so far but am not sure how to apply it:
https://discuss.pytorch.org/t/pytorch-autograd-grad-only-one-element-tensors-can-be-converted-to-python-scalars/56681
UPDATE 2:
The reason why I didn't end up using the dataloader is because I end up getting this error:
num_epochs = 10
loss_values = list()
for epoch in range(1, num_epochs):
for i, data in enumerate(train_array, 0):
outputs = model(data.unsqueeze(0))
loss = criterion(outputs,data.unsqueeze(0))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Epoch - %d, loss - %0.5f '%(epoch, loss.item()))
loss_values.append(loss.item())
torch.Size([1, 16, 198, 660])
torch.Size([1, 32, 97, 328])
torch.Size([1, 1018112])
RuntimeError Traceback (most recent call last)
<ipython-input-106-5e6fa86df079> in <module>
4 for epoch in range(1, num_epochs):
5 for i, data in enumerate(train_array, 0):
----> 6 outputs = model(data.unsqueeze(0))
7 loss = criterion(outputs,data.unsqueeze(0))
8
c:\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-90-467a3f84a03f> in forward(self, x)
29 print(out.shape)
30
---> 31 out = self.fc(out)
32 print(out.shape)
33
c:\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
c:\python38\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
85
86 def forward(self, input):
---> 87 return F.linear(input, self.weight, self.bias)
88
89 def extra_repr(self):
c:\python38\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1608 if input.dim() == 2 and bias is not None:
1609 # fused op is marginally faster
-> 1610 ret = torch.addmm(bias, input, weight.t())
1611 else:
1612 output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [1 x 1018112], m2: [512 x 10] at C:\w\b\windows\pytorch\aten\src\TH/generic/THTensorMath.cpp:41
I realize that if you have m1: [a * b] and m2: [c * d] then b and c have to be the same value, but I am not sure, what is the best way to resize my images?
| Please note that I had wanted to automatically load all PNG images in a directory as pytorch tensors. I had previously looked at posts like this (and many other web pages):
Loading a huge dataset batch-wise to train pytorch
Instead, I ended up opening the images one by one with Image.open rather than going through the torch DataLoader:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
image = Image.open("datasets/image_02/data/my_image.png").convert('RGB')
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms as tf
transforms = tf.Compose([tf.Resize(400),
tf.ToTensor()])
img_tensor = transforms(image)
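To actually sweep a whole directory (my original goal), here is a minimal sketch continuing from the snippet above — the path is the one from my example, and torch.stack only works if every transformed image ends up the same size (e.g. use transforms.Resize((400, 400)) for a fixed shape):
import glob

img_dir = "datasets/image_02/data"
img_tensors = []
for path in sorted(glob.glob(img_dir + "/*.png")):
    img = Image.open(path).convert('RGB')
    img_tensors.append(transforms(img))    # (3, H, W) after Resize/ToTensor

batch = torch.stack(img_tensors)            # (N, 3, H, W)
print(batch.shape)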
| https://stackoverflow.com/questions/63332048/ |
How to transform labels in pytorch to onehot | How to give target_transform a function for changing the labels to onehot encoding?
For example, the MNIST dataset in torchvision:
train_dataset = torchvision.datasets.MNIST(root='./mnist_data/',
train=True,
download=True,
transform=train_transform,
target_transform=<????>)
Tried F.onehot() but it didn't work.
| This is how I implemented it. Not sure if there's a cleaner way.
train_dataset = torchvision.datasets.MNIST(root='./data/', train=True,
transform=torchvision.transforms.ToTensor(),
target_transform=torchvision.transforms.Compose([
lambda x:torch.LongTensor([x]), # or just torch.tensor
lambda x:F.one_hot(x,10)]),
download=True)
F.one_hot needs an index tensor, i.e. int64
Can't use torchvision.ToTensor because it's not an image
Also torch.LongTensor and torch.tensor behave differently with int input
Need to provide number of classes
| https://stackoverflow.com/questions/63342147/ |
Cant install pytorch - ModuleNotFoundError: No module named 'tools.nnwrap' | I cannot install PyTorch on python 3.7. This error occurs for others but the suggested fixes did not work. I have tried instaling wheel, and importing tools but neither work.
Got error: ModuleNotFoundError: No module named 'tools.nnwrap'
ran command pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
full output: Looking in links: https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html Collecting torch Using cached torch-0.1.2.post2.tar.gz (128 kB) Collecting torchvision Using cached torchvision-0.2.2.post3-py2.py3-none-any.whl (64 kB) Requirement already satisfied: pyyaml in c:\users\lv7682\documents\repos\security_patch\spms\webapp\securitypatchmanagementsystem\venv\venv\lib\site-packages (from torch) (5.3.1) Collecting pillow>=4.1.1 Using cached Pillow-7.2.0-cp37-cp37m-win32.whl (1.8 MB) Collecting numpy Using cached numpy-1.19.1-cp37-cp37m-win32.whl (10.9 MB) Collecting six Using cached six-1.15.0-py2.py3-none-any.whl (10 kB) Using legacy 'setup.py install' for torch, since package 'wheel' is not installed. Installing collected packages: torch, pillow, numpy, six, torchvision Running setup.py install for torch ... error ERROR: Command errored out with exit status 1: command: 'c:\users\lv7682\documents\repos\security_patch\spms\webapp\securitypatchmanagementsystem\venv\venv\scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[ 0] = '"'"'C:\\Users\\LV7682\\AppData\\Local\\Temp\\pip-install-4b80gt33\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\LV7682\\AppData\\Local\\Temp\\pip-install-4b80gt33\\torch\\setup .py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --reco rd 'C:\Users\LV7682\AppData\Local\Temp\pip-record-z5_fy7cc\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\lv7682\documents\repos\security _patch\spms\webapp\securitypatchmanagementsystem\venv\venv\include\site\python3.7\torch' cwd: C:\Users\LV7682\AppData\Local\Temp\pip-install-4b80gt33\torch\ Complete output (23 lines): running install running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\LV7682\AppData\Local\Temp\pip-install-4b80gt33\torch\setup.py", line 265, in <module> description="Tensors and Dynamic neural networks in Python with strong GPU acceleration", File "c:\users\lv7682\documents\repos\security_patch\spms\webapp\securitypatchmanagementsystem\venv\venv\lib\site-packages\setuptools\__init__.py", line 163, in setup return distutils.core.setup(**attrs) File "C:\Users\LV7682\AppData\Local\Programs\Python\Python37-32\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\LV7682\AppData\Local\Programs\Python\Python37-32\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Users\LV7682\AppData\Local\Programs\Python\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\LV7682\AppData\Local\Temp\pip-install-4b80gt33\torch\setup.py", line 99, in run self.run_command('build_deps') File "C:\Users\LV7682\AppData\Local\Programs\Python\Python37-32\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\LV7682\AppData\Local\Programs\Python\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\LV7682\AppData\Local\Temp\pip-install-4b80gt33\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\lv7682\documents\repos\security_patch\spms\webapp\securitypatchmanagementsystem\venv\venv\scripts\python.exe' -u -c 'import sys , 
setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\LV7682\\AppData\\Local\\Temp\\pip-install-4b80gt33\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\LV7682\\AppData\\Local\\Temp\\p ip-install-4b80gt33\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\LV7682\AppData\Local\Temp\pip-record-z5_fy7cc\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\user s\lv7682\documents\repos\security_patch\spms\webapp\securitypatchmanagementsystem\venv\venv\include\site\python3.7\torch' Check the logs for full command output.
| Fixed: I needed to use 64-bit Python, not 32-bit — PyTorch does not provide 32-bit Windows wheels.
| https://stackoverflow.com/questions/63345212/ |
pytorch dataset map-style vs iterable-style | A map-style dataset in Pytorch has the __getitem__() and __len__() and iterable-style datasets has __iter__() protocol. If we use map-style, we can access the data with dataset[idx] which is great, however with the iterable dataset we can't.
My question is why this distinction was necessary? What makes the data random read so expensive or even improbable?
| I wrote a short post on how to use PyTorch datasets, and the difference between map-style and iterable-style dataset.
In essence, you should use map-style datasets when possible. Map-style datasets give you their size ahead of time, are easier to shuffle, and allow for easy parallel loading.
It’s a common misconception that if your data doesn’t fit in memory, you have to use an iterable-style dataset. That is not true. You can implement a map-style dataset such that it retrieves data as needed.
Check out the full post here.
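As a minimal sketch of that last point (the file list here is hypothetical), a map-style dataset can keep only the paths in memory and load each sample on demand:
import torch
from torch.utils.data import Dataset

class LazyFileDataset(Dataset):
    def __init__(self, file_paths):
        self.file_paths = file_paths      # only the paths live in memory

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        # the sample is read from disk only when the DataLoader asks for it
        return torch.load(self.file_paths[idx])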
| https://stackoverflow.com/questions/63347149/ |
pytorch code sudden fails on colab with NVIDIA driver on your system is too old | I had some code which worked on colab (gpu runtime) just a short while ago. Suddenly I am getting
The NVIDIA driver on your system is too old (found version 10010).
nvcc shows
Cuda compilation tools, release 10.1, V10.1.243
I tried torch versions 1.5.1, then 1.13.0. Both keep getting this error.
There is a discussion showing other people having doubts. with no clear resolution.
https://github.com/pytorch/pytorch/issues/27738
Anyone having the same problem?
| The light-the-torch package is designed to solve exactly this type of issue. Try this:
!pip install light-the-torch
!ltt install torch torchvision
| https://stackoverflow.com/questions/63349832/ |
will loop decrease the utilization of the GPU? | In PyTorch, I have a loop in my DeepLearning Pipeline's forward part to normalize the intermediate result.
Will it run on CPU and decrease the utilization of the GPU?
some snippet as follow:
def forward(self):
...
for b in range(batch_size):
self.points[b] = self.unit_cube(self.points[b])
....
| In Pytorch, whether an operation is done on the GPU or CPU is decided by where the data is. One of the main selling points of Pytorch is that you don't (usually) have to care where the data is; the interface is the same.
If the tensor data is on the GPU, then the operation is done on the GPU. If it's on the CPU, then the operation is done on the CPU. How you choose to organise those operations (ifs, for loops, etc) have no impact on it.
>>> import torch
>>> a = torch.randn(3,4,5)
>>> b = a.cuda()
>>> a.device
device(type='cpu')
>>> b.device
device(type='cuda', index=0)
>>> c = b
>>> for x in range(10):
... c = c * 2
...
>>> c.device
device(type='cuda', index=0)
In the above example, I used a for loop to double b 10 times, storing the result in c. This was all done on the GPU, and I equally could have done this on a, making it happen on the CPU.
| https://stackoverflow.com/questions/63350404/ |
torch.no_grad() affects on model accuracy | I am getting an error "CUDA out of memory" then i add torch.no_grad() function into my code. Is it affect on my accuracy?
for iters in range(args.iterations):
with torch.no_grad():
encoded, encoder_h_1, encoder_h_2, encoder_h_3 = encoder(
res, encoder_h_1, encoder_h_2, encoder_h_3)
with torch.no_grad():
code = binarizer(encoded)
with torch.no_grad():
output, decoder_h_1, decoder_h_2, decoder_h_3, decoder_h_4 = decoder(
code, decoder_h_1, decoder_h_2, decoder_h_3, decoder_h_4)
res = res - output.detach()
codes.append(code.data.cpu().numpy())
torch.cuda.empty_cache()
print('Iter: {:02d}; Loss: {:.06f}'.format(iters, res.data.abs().mean()))
| torch.no_grad() just disables the tracking of any calculations required to later calculate a gradient.
It won't have any effect on accuracy in a pure inference mode, since gradients are not needed there. Of course you can't use it during training time since we need the gradients to train and optimize.
In general if you go for inference you always want to set the network to eval mode and disable gradients. This saves run time and memory consumption and won't affect accuracy.
Answer to a similar question, explaining eval() and no_grad(): https://discuss.pytorch.org/t/model-eval-vs-with-torch-no-grad/19615/2
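The typical inference pattern then looks like this (a minimal sketch):
model.eval()              # dropout/batchnorm switch to evaluation behaviour
with torch.no_grad():     # no gradient tracking -> less memory, identical outputs
    output = model(inputs)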
| https://stackoverflow.com/questions/63351268/ |
Why is the accuracy difference so much when I use the image data set and pytorch's own data set directly? | For example, for the cifar10 data set, directly using the data set that comes with pytorch, the accuracy rate can reach 96% under the same network structure, but after I converted cifar10 into a picture, I tested it and the accuracy rate was only 92%. why?
This is the previous code:
train_dataset = dset.CIFAR10(args.data_path, train=True, transform=train_transform, download=True)
test_dataset = dset.CIFAR10(args.data_path, train=False, transform=test_transform, download=True)
This is the modified code:
train_dataset = datasets.ImageFolder(root='/home/ubuntu/bigdisk/DataSets/cifar10/static/orig/train/',
transform=train_transform
)
test_dataset = datasets.ImageFolder(root='/home/ubuntu/bigdisk/DataSets/cifar10/static/orig/test/',
transform=test_transform
)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True,
num_workers=args.prefetch, pin_memory=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=args.test_bs, shuffle=False,
num_workers=args.prefetch, pin_memory=True)
| If the downloaded dataset, the hyperparameters (such as batch size or learning rate), the dataset transformations, etc. were all equal, I think the difference comes down to randomness.
Your dataloader shuffles the dataset randomly, so the order of samples is different on every run, which can lead to a different final accuracy.
Also, the model will be initialized with different values each time. (Unless you have used some initialization method that always initializes the model with the same values.)
You could check https://pytorch.org/docs/stable/notes/randomness.html for more information.
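If you want to rule randomness out, a minimal seeding sketch (see the linked notes for the caveats — some operations remain non-deterministic):
import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False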
| https://stackoverflow.com/questions/63352551/ |
Google Cloud Function Build timeout - all requirements have been loaded | I have the following code on my cloud function -
import os
import numpy as np
import requests
import torch
from torch import nn
from torch.nn import functional as F
import math
from torch.nn import BCEWithLogitsLoss
from torch.utils.data import TensorDataset
from transformers import AdamW, XLNetTokenizer, XLNetModel, XLNetLMHeadModel, XLNetConfig
from keras.preprocessing.sequence import pad_sequences
import numpy as np
import pandas as pd
def polarization(request):
MODEL_URL = 'https://polarization.s3-us-west-1.amazonaws.com/classifier_state_dict.pt'
print(MODEL_URL)
r = requests.get(MODEL_URL)
print(r)
#Cloud function vm is a read only s/m. The only writable place is the tmp folder
file = open("/tmp/model.pth", "wb")
file.write(r.content)
file.close()
print("Wrote to the tmp file")
# State dict requires model object
model = XLNetForPolarizationClassification(num_labels=1)
model.load_state_dict(torch.load('/tmp/model.pth'))
# Tokenize the embedded article
embeddedArticle = request["embeddedArticle"]
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased', do_lower_case=True)
textIds = tokenize_inputs(embeddedArticle, tokenizer, num_embeddings=250)
# Generate the attention masks and padding
masks = create_attn_masks(textIds)
article = pd.DataFrame()
article["features"] = textIds.tolist()
article["masks"] = masks
# Call generate_predictions
pred = generate_predictions(model, article, 1)
return pred
## Extracting parameter and returning prediction
def generate_predictions(model, df, num_labels, device="cpu"):
model.eval()
X = df["features"].values.tolist()
masks = df["masks"].values.tolist()
X = torch.tensor(X)
masks = torch.tensor(masks, dtype=torch.long)
with torch.no_grad():
# Run the model with the input_ids and attention_masks separately
logits = model(input_ids=X, attention_mask=masks)
# Get the logits for each class
logits = logits.sigmoid().detach().cpu().numpy()
return round(logits)
class XLNetForPolarizationClassification(torch.nn.Module):
def __init__(self, num_labels=2):
super(XLNetForPolarizationClassification, self).__init__()
self.num_labels = num_labels
self.xlnet = XLNetModel.from_pretrained('xlnet-base-cased')
self.classifier = torch.nn.Linear(768, 1)
torch.nn.init.xavier_normal_(self.classifier.weight)
def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
last_hidden_state = self.xlnet(input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids)
mean_last_hidden_state = self.pool_hidden_state(last_hidden_state)
logits = self.classifier(mean_last_hidden_state)
# If you know the labels, compute the loss otherwise
if labels is not None:
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels))
return loss
else:
return logits
def pool_hidden_state(self, last_hidden_state):
last_hidden_state = last_hidden_state[0]
mean_last_hidden_state = torch.mean(last_hidden_state, 1)
return mean_last_hidden_state
def create_attn_masks(input_ids):
"""
This will set a 1 or 0 based on if it is a mask or an actual input it for the word
"""
attention_masks = []
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
return attention_masks
def tokenize_inputs(text, tokenizer, num_embeddings=250):
# tokenize the text, then truncate sequence to the desired length minus 2 for
# the 2 special characters
tokenized_texts = list(map(lambda t: tokenizer.tokenize(t)[:num_embeddings-2], text))
# convert tokenized text into numeric ids for the appropriate LM
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
# append special token "<s>" and </s> to end of sentence
input_ids = [tokenizer.build_inputs_with_special_tokens(x) for x in input_ids]
# pad sequences
input_ids = pad_sequences(input_ids, maxlen=num_embeddings, dtype="long", truncating="post", padding="post")
return input_ids
and requirements.txt
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
cycler==0.10.0
filelock==3.0.12
future==0.18.2
h5py==2.10.0
idna==2.10
joblib==0.16.0
Keras==2.4.3
kiwisolver==1.2.0
matplotlib==3.3.0
numpy==1.19.1
packaging==20.4
Pillow==7.2.0
pyparsing==2.4.7
python-dateutil==2.8.1
PyYAML==5.3.1
regex==2020.7.14
requests==2.24.0
sacremoses==0.0.43
scipy==1.5.2
sentencepiece==0.1.91
six==1.15.0
tokenizers==0.8.1rc1
torch==1.6.0
tqdm==4.48.2
transformers==3.0.2
urllib3==1.25.10
However when I deploy, it gives me build timed out. The logs don't show any errors and show that each of the dependencies in the requirements.txt file have built. The first print statement doesn't get logged either. I don't see what part of the model/ requirements is causing the time out issue.
Here's a screenshot of the logs - there are no errors; all the entries are INFO until there's a "Context deadline exceeded" statement. I can't share the actual logs without giving access to the function, I believe. I've set the timeout to 9 minutes (540 seconds).
| It might be due to the torch dependency: the default PyTorch wheel bundles the CUDA libraries, which makes the package very large and slow to install, even though Cloud Functions has no GPU to use them.
Instead, you can use a direct link to the CPU-only wheel in your requirements.txt, like this:
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
cycler==0.10.0
filelock==3.0.12
future==0.18.2
h5py==2.10.0
idna==2.10
joblib==0.16.0
Keras==2.4.3
kiwisolver==1.2.0
matplotlib==3.3.0
numpy==1.19.1
packaging==20.4
Pillow==7.2.0
pyparsing==2.4.7
python-dateutil==2.8.1
PyYAML==5.3.1
regex==2020.7.14
requests==2.24.0
sacremoses==0.0.43
scipy==1.5.2
sentencepiece==0.1.91
six==1.15.0
tokenizers==0.8.1rc1
https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp37-cp37m-linux_x86_64.whl
tqdm==4.48.2
transformers==3.0.2
urllib3==1.25.10
See this answer about how to select a different PyTorch version.
| https://stackoverflow.com/questions/63369555/ |
Max function in PyTorch versus in NumPy | I am developing an algorithm that involves a CPU case, allowing for NumPy, and a GPU case, allowing for PyTorch. The object will almost always be 4D. Two versions of the object are as follows.
B = [
[[[0.5000, 0.5625],
[0.5000, 0.5625]],
[[1.2500, 0.5000],
[0.5625, 0.6875]],
[[0.5625, 0.6250],
[0.5000, 0.5625]]]
]
B_array = np.array(B)
B_tensor = torch.Tensor(B)
I want to take the max of each 2D matrix, such that I get a result of:
max_array_fn(B_array) # returns array([0.5625, 1.250, 0.6250])
max_tensor_fn(B_tensor) # returns tensor([0.5625, 1.250, 0.6250])
Part of the solution was discussed here, but this is only for NumPy on CPU:
Max of each 2D matrix in 4D NumPy array
However, on the GPU, it seems that PyTorch does not use the same convention as NumPy.
If this were defined as a NumPy array, we could solve the problem using np.max(B_array, axis=(0,2,3)). The usage of axis in PyTorch similar to the NumPy example is not supported, as suggested here:
PyTorch torch.max over multiple dimensions
Is there an alternative vectorization of the method? Why is it only able to be vectorized on the CPU with NumPy and not on GPU with PyTorch?
The correct solution should not use any loops and ideally one function call.
| Not the most elegant way:
B_tensor.max(dim=3)[0].max(dim=2)[0].max(dim=0)[0]
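On more recent PyTorch (1.7+), torch.amax accepts a tuple of dims, which mirrors the NumPy call from the question and works the same on CPU or GPU:
torch.amax(B_tensor, dim=(0, 2, 3))   # tensor([0.5625, 1.2500, 0.6250])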
| https://stackoverflow.com/questions/63369743/ |
Pytorch rename labels of a trained model | I have a trained model. During training there was some problem with string characters. So i converted my labels into numbers like:
red : 0
blue: 1
green: 2
Now is it possible to rename my label back to actual label names.
Hours of training. Would be helpful if anyone has an idea.
Train and validate the model
for epoch in range(1, epoch_num + 1):
loss_train, acc_train = train(train_loader, model, criterion, optimizer, epoch)
loss_val, acc_val = validate(val_loader, model, criterion, epoch)
total_loss_val.append(loss_val)
total_acc_val.append(acc_val)
Test Single Image:
def eval_image(file_path):
model = torch.load(file_path)
model.eval()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
X = Image.open('red.jpeg')
test_transforms = transforms.Compose([transforms.ToTensor()])
image_tensor = test_transforms(X).float()
image_tensor = image_tensor.unsqueeze_(0)
input = Variable(image_tensor)
input = input.to(device)
output = model(input)
index = output.data.cpu().numpy().argmax()
print(index)
Once the training is done, The evaluation script cannot be made generic. I will have to pass following code
idx_to_class = {
0: "red",
1: "blue",
2: "green",
}
class_name = idx_to_class[index]
But I do not want to pass the above code. As my evaluation script needs to generic.
| Use a dictionary for it!
labels = [0, 1, 2, 1, 0, 2, 1]
dct = {0: "red", 1: "blue", 2: "green"}
renamed_labels = [dct[x] for x in labels]
renamed_labels ## ["red", "blue", "green", "blue", "red", "green", "blue"]
| https://stackoverflow.com/questions/63372759/ |
Summing over specific indices PyTorch (similar to scatter_add) | I have some matrix where rows belong to some label, unordered. I want to sum all rows for each label.
Here is how it can be done with a loop:
labels = torch.tensor([0, 1, 0])
x = torch.tensor([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
torch.stack([torch.sum(x[labels == i], dim=0) for i in torch.unique(labels)])
desired output:
tensor([[ 8, 10, 12],
[ 4, 5, 6]])
EDIT: Just to make it clear, I have the labels tensor, I know which labels repeat, I am interested in computing the final line without the use of a loop. I was thinking scatter_add_ or gather might help.
| Just use torch.index_add function.
labels = torch.tensor([0, 1, 0])
x = torch.tensor([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
nrow = torch.unique(labels).size(0)
ncol = x.size(1)
out = torch.zeros((nrow, ncol), dtype=x.dtype)
out.index_add_(0, labels, x)
print(out)
# the output will be
# tensor([[ 8, 10, 12],
# [ 4, 5, 6]])
| https://stackoverflow.com/questions/63380445/ |
Creating a criterion that measures the F1 Loss | I am currently creating criterion to measure the MSE loss function using:
loss_fcn = torch.nn.MSELoss()
loss = loss_fcn(logits[getMaskForBatch(subgraph)], labels.float())
Now I need to change it to F1 score but I cannot seem to find one library that could be used for it
| In particular, the loss function you need depends on the task.
The loss function (also known as the objective, cost, or error function) is in a sense the counterpart of the optimizer: the loss function measures the error, the optimizer reduces it. :) The two should live in balance so we don't overfit.
PyTorch Regression losses:
nn.L1Loss L1 Loss (MAE)
nn.MSELoss L2 Loss (MSE)
nn.SmoothL1Loss Huber
PyTorch Classification losses:
nn.CrossEntropyLoss
nn.KLDivLoss
nn.NLLLoss
PyTorch GAN training
nn.MarginRankingLoss
So if you used nn.MSELoss you probably need to stay with regression, because F1 is a classification metric.
If you really need the F1 score for some other reason, you can compute it with scikit-learn.
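F1 is not differentiable in its usual thresholded form, so it is normally used as an evaluation metric rather than a training loss. A sketch with scikit-learn (logits and labels here are hypothetical tensors from your model and dataset):
from sklearn.metrics import f1_score

preds = logits.argmax(dim=1).cpu().numpy()       # hypothetical multi-class logits
f1 = f1_score(labels.cpu().numpy(), preds, average="macro")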
| https://stackoverflow.com/questions/63400496/ |
torch.jit.script(module) vs @torch.jit.script decorator | Why is adding the decorator "@torch.jit.script" results in an error, while I can call torch.jit.script on that module, e.g. this fails:
import torch
@torch.jit.script
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
"C:\Users\Administrator\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torch\jit\__init__.py", line 1262, in script
raise RuntimeError("Type '{}' cannot be compiled since it inherits"
RuntimeError: Type '<class '__main__.MyCell'>' cannot be compiled since it inherits from nn.Module, pass an instance instead
While the following code works well:
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
This question is also featured on PyTorch forums.
| Reason for your error is here, this bulletpoint precisely:
No support for inheritance or any other polymorphism strategy, except
for inheriting from object to specify a new-style class.
Also, as stated at the top:
TorchScript class support is experimental. Currently it is best suited
for simple record-like types (think a NamedTuple with methods
attached).
Currently, it's purpose is for simple Python classes (see other points in the link I've provided) and functions, see link I've provided for more information.
You can also check torch.jit.script source code to get a better grasp of how it works.
From what it seems, when you pass an instance, all attributes which should be preserved are recursively parsed (source). You can follow this function along (quite commented, but too long for an answer, see here), though exact reason why this is the case (and why it was designed this way) is beyond my knowledge (so hopefully someone with expertise in torch.jit's inner workings will speak more about it).
| https://stackoverflow.com/questions/63401585/ |
Pytorch: How to train a network with two loss functions? | I want to pretrain a network with reconstruction loss first, then finetune it by crossentropy loss. But it seems that I have to define two network in this two stage. How to achieve it?
class Net():
def __init__(self,pretrain):
self.pretrain = pretrain
def encoder(self,x):
# do something here
return x
def decoder(self,x):
# do something here
return x
def forward(self):
e_x = self.encoder(x)
if self.pretrain:
return decoder(e_x)
else:
return e_x
def train(x,y):
pretrain = True
if pretrain:
network = Net(pretrain=True)
output = network(x)
loss = MSE(x,output)
else:
network = Net(pretrain=False)
output = network(x)
loss = crossentropy(output,y)
loss.backward()
| You can achieve this by simply defining the two loss functions; calling loss.backward() on whichever one you computed for the current stage works as expected. See the relevant discussion here
MSE = torch.nn.MSELoss()
crossentropy = torch.nn.CrossEntropyLoss()
def train(x,y):
pretrain = True
if pretrain:
network = Net(pretrain=True)
output = network(x)
loss = MSE(x,output)
else:
network = Net(pretrain=False)
output = network(x)
loss = crossentropy(output,y)
loss.backward()
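If you ever want to optimize both objectives at the same time rather than in two stages, the two losses can simply be added before a single backward pass. A sketch (output and reconstruction are hypothetical tensors produced by your network, and the 0.5 weight is arbitrary):
loss = crossentropy(output, y) + 0.5 * MSE(x, reconstruction)
loss.backward()   # gradients flow through both terms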
| https://stackoverflow.com/questions/63404656/ |
The use of meshgrid in pytorch/numpy | Below code is a snippet taken from https://blog.paperspace.com/how-to-implement-a-yolo-v3-object-detector-from-scratch-in-pytorch-part-3/, and I am confused as to what it is trying to achieve.
grid = np.arange(grid_size)
a,b = np.meshgrid(grid, grid)
x_offset = torch.FloatTensor(a).view(-1,1)
y_offset = torch.FloatTensor(b).view(-1,1)
if CUDA:
x_offset = x_offset.cuda()
y_offset = y_offset.cuda()
x_y_offset = torch.cat((x_offset, y_offset), 1).repeat(1,num_anchors).view(-1,2).unsqueeze(0)
I tried the case when grid_size = 3, and it outputed:
tensor([[0., 1.],
[2., 0.],
[1., 2.],
[0., 1.],
[2., 0.],
[0., 0.],
[1., 1.],
[1., 2.],
[2., 2.],
[0., 1.],
[2., 0.],
[1., 2.],
[0., 1.],
[2., 0.],
[0., 0.],
[1., 1.],
[1., 2.],
[2., 2.],
[0., 1.],
[2., 0.],
[1., 2.],
[0., 1.],
[2., 0.],
[0., 0.],
[1., 1.],
[1., 2.],
[2., 2.]])
I cannot quite see what is the pattern here. According to the description in the given link I think I should really expect something like:
tensor([[0,0],
[0,0],
[0,0],
[0,1],
[0,1],
[0,1],
[0,2],
[0,2],
...]])
| If you are expecting the second output you show, simply change
x_y_offset = (
torch.cat((x_offset, y_offset), 1).repeat(1, num_anchors).view(-1, 2).unsqueeze(0)
)
to
x_y_offset = (
torch.cat((y_offset, x_offset), 1).repeat(1, num_anchors).view(-1, 2).unsqueeze(0)
)
It just has to do with the ordering of the output of meshgrid.
| https://stackoverflow.com/questions/63417951/ |
Three dimensional autoencoder has low MSE loss but gives noisy reconstruction image | I am re-constructing spatio-temporal cuboids of 3-dimensional size with width and height equal to 32 and depth equal to 20. I am using Conv3d layers in my autoencoder architecture.
So my input shape is 32x32x20, which I am reducing to size 2048, and then reconstructing it back to 32x32x20.
The MSE loss of the model has nice convergence even though the reconstruction is just noise.
My encoder architecture:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv3d-1 [-1, 64, 20, 32, 32] 5,248
AvgPool3d-2 [-1, 64, 10, 16, 16] 0
Conv3d-3 [-1, 128, 10, 16, 16] 221,312
AvgPool3d-4 [-1, 128, 5, 8, 8] 0
Conv3d-5 [-1, 256, 5, 8, 8] 884,992
AvgPool3d-6 [-1, 256, 2, 4, 4] 0
Conv3d-7 [-1, 512, 2, 4, 4] 3,539,456
AvgPool3d-8 [-1, 512, 1, 2, 2] 0
My decoder architecture:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Interpolate-1 [-1, 512, 2, 4, 4] 0
Conv3d-2 [-1, 256, 4, 4, 4] 3,539,200
Interpolate-3 [-1, 256, 8, 8, 8] 0
Conv3d-4 [-1, 128, 10, 8, 8] 884,864
Interpolate-5 [-1, 128, 20, 16, 16] 0
Conv3d-6 [-1, 64, 20, 16, 16] 221,248
Interpolate-7 [-1, 64, 40, 32, 32] 0
Conv3d-8 [-1, 3, 40, 32, 32] 5,187
Conv3d-9 [-1, 3, 20, 32, 32] 246
My reconstruction code:
from mpl_toolkits.axes_grid1 import ImageGrid
# recon_batch is the last batch of the autoencoder output.
# recon_batch has shape (batch_size, 3, 20, 32, 32)
recon_batch = recon_batch.permute(0, 2, 3, 4, 1) # new shape = (batch_size, 20, 32, 32, 3)
recon_batch = recon_batch.detach().cpu()
recon_batch_numpy = recon_batch.detach().cpu().numpy()
for k in range(3): # K=3 because I want to display 3 frames
fig = plt.figure(figsize=(4., 4.))
grid = ImageGrid(fig, 111,
nrows_ncols=(4, 4),
axes_pad=0.1,
)
images = [recon_batch_numpy[i][k] for i in range(16)] # 16 because Original Image is of size (128x128), when I make 32x32 patches, 16 sub frames are formed
for ax, im in zip(grid, images):
ax.imshow((rgb2gray(im) * 255).astype(np.uint8), cmap='gray', vmin=0, vmax=255)
name = str(num_epochs) + 'th_figure_' + str(k)
plt.savefig(name)
Original input image(actual cuboids of size 32x32x20, the image is reconstructed by taking the first frame for depth 0 of each such cuboid):
The reconstructed output image:
If needed, I can also add a loss plot, the loss starts around 10,000 and converges around 200.
| I'm not sure what your loss term or training process looks like, but you may want to consider a couple of things:
training for more epochs, since 200 may still be a relatively large MSE (you may also be stuck in a local minimum, in which case a higher learning rate or a different optimizer / optimizer hyper-parameters may help)
changing your loss term by using a different loss or by adding additional terms to the loss, such as a KL-divergence term
| https://stackoverflow.com/questions/63418904/ |
pytorch model saved from TPU run on CPU | I found interesting model - question generator, but can't run it. I got an error:
Traceback (most recent call last):
File "qg.py", line 5, in <module>
model = AutoModelWithLMHead.from_pretrained("/home/user/ml-experiments/gamesgen/t5-base-finetuned-question-generation-ap/")
File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py", line 806, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py", line 798, in from_pretrained
import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'
I briefly googled and found that "torch_xla" is a something that is used to train pytorch model on TPU. But I would like to run it localy on cpu (for inference, of course) and got this error when pytorch tried to load tpu-bound tensors.
How can I fix it?
this is model, which I tried: https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap
| As @cronoik suggested, I installed the transformers library from GitHub. I cloned the latest version and executed python3 setup.py install in its directory. This bug was fixed there, but the fix has not yet been released on PyPI.
| https://stackoverflow.com/questions/63419835/ |
Using pytorch Cuda on MacBook Pro | I am using MacBook Pro (16-inch, 2019, macOS 10.15.5 (19F96))
GPU
AMD Radeon Pro 5300M
Intel UHD Graphics 630
I am trying to use Pytorch with Cuda on my mac.
All of the guides I saw assume that i have Nvidia graphic card.
I found this: https://github.com/pytorch/pytorch/issues/10657 issue, but it looks like I need to install ROCm, and according to their Supported Operating Systems, it only supports Linux.
Is it possible to run Pytorch on GPU using mac and AMD Graphic card?
| No.
CUDA works only with supported NVidia GPUs, not with AMD GPUs.
There is an ongoing effort to support acceleration for AMD GPUs with PyTorch (via ROCm, which does not work on MacOS).
| https://stackoverflow.com/questions/63423463/ |
AssertionError: Torch not compiled with CUDA enabled | I want to run this repo. I installed everything that is needed for this project.
I have Windows 8.1 operating system, seems that I don't have NVIDIA GPU (from Device Manager: Display adapters - AMD Radeon HD 7660G + 7670M Dual Graphics and AMD Radeon HD 7670M).
I installed torch with command that is presented on Pytorch web-site
pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
But when I run the project I receive the error - AssertionError: Torch not compiled with CUDA enabled.
Then I tried to install torch with CUDA enabled.
pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
But when I run the project I receive the error - AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from https://www.nvidia.com/Download/index.aspx.
Please, help me to solve my issue and run the project without errors.
| I have already fixed this issue. There was a problem in source code where they use
opt.device
in main.py and violin_dataset.py. But this was declared as
opt.device = torch.device('cuda:0')
in config.py even if you didn't have cuda support.
So I changed it to
opt.device = torch.device('cpu')
And everything works fine now.
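A more portable option (just a sketch of an alternative, not what the repo does) is to pick the device based on availability, so the same code runs with or without CUDA:
opt.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')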
| https://stackoverflow.com/questions/63440170/ |
Use SageMaker Pytorch image for training | I am trying to containerize the training process for a fine tuned BERT model and run it on SageMaker. I was planning to use the pre-built SageMaker Pytorch GPU containers (https://aws.amazon.com/releasenotes/available-deep-learning-containers-images/) as my starting point but I am having issues pulling the images during my build process.
My Dockerfile looks like this:
# SageMaker PyTorch image
FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-training:1.5.0-gpu-py36-cu101-ubuntu16.04
ENV PATH="/opt/ml/code:${PATH}"
# /opt/ml and all subdirectories are utilized by SageMaker, we use the /code subdirectory to store our user code.
COPY /bert /opt/ml/code
# this environment variable is used by the SageMaker PyTorch container to determine our user code directory.
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code
# this environment variable is used by the SageMaker PyTorch container to determine our program entry point
# for training and serving.
# For more information: https://github.com/aws/sagemaker-pytorch-container
ENV SAGEMAKER_PROGRAM bert/train
My build_and_push script:
#!/usr/bin/env bash
# This script shows how to build the Docker image and push it to ECR to be ready for use
# by SageMaker.
# The argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
IMAGE="my-bert"
# parameters
PY_VERSION="py36"
# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)
if [ $? -ne 0 ]
then
exit 255
fi
chmod +x bert/train
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-east-2}
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names ${IMAGE} || aws ecr create-repository --repository-name ${IMAGE}
echo "---> repository done.."
# Get the login command from ECR and execute it directly
aws ecr get-login-password --region $region | docker login --username AWS --password-stdin $account.dkr.ecr.$region.amazonaws.com
echo "---> logged in to account ecr.."
# Get the login command from ECR in order to pull down the SageMaker PyTorch image
# aws ecr get-login-password --region $region | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
# echo "---> logged in to pytorch ecr.."
echo "Building image with arch=gpu, region=${region}"
TAG="gpu-${PY_VERSION}"
FULLNAME="${account}.dkr.ecr.${region}.amazonaws.com/${IMAGE}:${TAG}"
docker build -t ${IMAGE}:${TAG} --build-arg ARCH="$arch" -f "Dockerfile" .
docker tag ${IMAGE}:${TAG} ${FULLNAME}
docker push ${FULLNAME}
I get the following message during the build, and the SageMaker PyTorch image is not pulled:
Get https://763104351884.dkr.ecr.us-east-1.amazonaws.com/v2/pytorch-training/manifests/1.5.0-gpu-py36-cu101-ubuntu16.04: no basic auth credentials
Please let me know if this is the correct way to use a pre-built SageMaker image and what I could do to fix this error.
| You should run something like this, before running the docker build:
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin 763104351884.dkr.ecr.${region}.amazonaws.com
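In your build_and_push script this corresponds to the login block you have commented out (the one targeting the 763104351884.dkr.ecr.us-east-1.amazonaws.com registry): uncomment it and make sure it runs before the docker build step, in addition to the login against your own account's registry. The base image lives in an AWS-owned ECR account, so logging in to your own registry does not grant pull access to it.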
| https://stackoverflow.com/questions/63440881/ |
How to know whether the data passed to GPU will cause CUDA out of memory or not | I am using a GPU to run some very large deep learning models. With a batch size of 8 everything fits into memory, but with a batch size of 16 I get a CUDA out-of-memory error and have to kill the process.
My question is: before actually passing the data to the GPU, is there a way to know how much memory it will occupy on the GPU?
For example, the following code shows how I create a PyTorch DataLoader and pass each batch to the GPU; can I know how large a batch is before I call batch.to(device)?
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
for step, batch in enumerate(train_dataloader):
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
| I would recommend using the torchsummary package here.
pip install torchsummary
and use it like this:
from torchsummary import summary
myModel.cuda()
summary(myModel, (shapeOfInput)) # where shapeOfInput is a tuple of the sample's dimensions
This will give you the size of the model, the size of the forward pass, and the size of the backward pass in MB for a batch size of 1, and you can then multiply by your batch size.
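If you only want to know how big the batch tensors themselves are before calling .to(device), a minimal sketch (assuming every element of the batch is a tensor, as in the DataLoader above) is to sum element_size() * nelement() over the batch:
def batch_mem_mb(batch):
    # total bytes of all tensors in the batch, converted to megabytes
    return sum(t.element_size() * t.nelement() for t in batch) / (1024 ** 2)

for step, batch in enumerate(train_dataloader):
    print(f"batch {step}: {batch_mem_mb(batch):.2f} MB")
    break
Keep in mind this only measures the input data; most of the GPU memory during training goes to activations, gradients, and optimizer state, which is what the torchsummary estimate above accounts for.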
| https://stackoverflow.com/questions/63443270/ |
Runtime error while executing code from Google Colab document for creating Deepfakes image animation | I'm getting a runtime error while executing code from the Google Colab document for creating Deepfakes image animation.
RuntimeError Traceback (most recent call last)
<ipython-input-5-dbd18151b569> in <module>()
1 from demo import load_checkpoints
2 generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
----> 3 checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')
10 frames
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
188 raise AssertionError(
189 "libcudart functions unavailable. It looks like you have a broken build?")
--> 190 torch._C._cuda_init()
191 # Some of the queued calls may reentrantly call _lazy_init();
192 # we need to just return without initializing in that case.
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:47
| You may be running in a Colab environment that only has TPUs available and not GPUs, in which case you need to use XLA with PyTorch. You might find this notebook and repository very helpful if that is the case:
https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/resnet18-training.ipynb
https://github.com/pytorch/xla
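As a quick sanity check (a sketch, not part of the original notebook), you can confirm from a Colab cell which accelerator the runtime actually exposes:
import torch
print(torch.cuda.is_available())  # False matches the "no CUDA-capable device is detected" error

# on a TPU runtime with torch_xla installed (as in the notebook above):
import torch_xla.core.xla_model as xm
device = xm.xla_device()
print(device)  # an XLA device such as xla:0
If torch.cuda.is_available() returns False and you intended to train on a GPU, the runtime simply has no CUDA device attached.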
| https://stackoverflow.com/questions/63445839/ |