st80368 | Hi Rui An!
ruian1:
Could you also point me to the block where “log-sum-exp” is applied to ? I traced it back to functional.py but still can’t figure out how it is done…
I don’t have the binary_cross_entropy_with_logits code
in front of me, so I can’t give you the specifics.
The issue is that floating-point error can get amplified when you
compute log of an expression containing exp. sigmoid has
the exp, and cross-entropy has the log, so you can run into
this problem when using sigmoid as input to cross-entropy.
Dealing with this issue is the main reason that
binary_cross_entropy_with_logits exists.
See, for example, the comments about “log1p” in the Wikipedia
article about logarithm.
(I was speaking loosely when I mentioned the related
“log-sum-exp-trick.” This is more directly relevant to computing
softmax, but is basically another facet of the same issue. For
more on this, see, for example, Wikipedia’s LogSumExp.)
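As a small illustrative sketch (my own toy numbers, not from the thread), the naive log-of-sigmoid underflows for large-magnitude logits, while the fused "with logits" versions stay finite:
import torch
import torch.nn.functional as F

logit = torch.tensor([-100.0])
target = torch.tensor([1.0])

# Naive: sigmoid(-100) underflows to exactly 0 in float32, so the log becomes -inf.
naive = torch.log(torch.sigmoid(logit))                    # tensor([-inf])

# Fused versions compute log(sigmoid(x)) without forming sigmoid(x) explicitly.
stable = F.logsigmoid(logit)                               # tensor([-100.])
loss = F.binary_cross_entropy_with_logits(logit, target)   # tensor(100.)
print(naive, stable, loss)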
Best.
K. Frank |
st80369 | Hi
I have a 1080 Ti and am now thinking about whether I should get a 2080 Ti.
If I apply DataParallel to train the model in PyTorch with the 1080 Ti and 2080 Ti, will the 1080 Ti become the bottleneck of the training process?
Sorry, I am not familiar with the DataParallel process.
Let's say the whole dataset can be divided into 10 batches; will each GPU be assigned a fixed number of batches (the 1080 Ti and 2080 Ti each train 5 batches)?
Or, if one GPU trains faster, will it take more batches?
Thanks |
st80370 | If you are using nn.DataParallel, each GPU will get the same batch size (if possible).
If you look at the internals you might adapt the code and use a custom chunking approach.
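A minimal illustrative sketch of the standard usage (toy model of my own, not from the thread): nn.DataParallel splits each input batch along dim 0 across the listed devices and gathers the outputs on the first one.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

inputs = torch.randn(64, 3, 224, 224).cuda()   # the batch of 64 is split roughly 32/32 across the two GPUs
outputs = model(inputs)                        # per-GPU outputs are gathered back onto device 0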
However, I would generally advise against a system with mixed GPUs.
E.g. your 2080 Ti has TensorCores, which can speed up FP16 calculations, while your 1080 Ti won't be able to do that. |
st80371 | thanks so much for your reply.
Just curious for the performance’s point of view, would you replace 1080ti by 2080ti ? or
train model with dual 1080ti gpu? |
st80372 | What is the most efficient way to do a multi batch prediction in PyTorch?
I have a bunch of images (the Dogs vs Cats test set, to be precise) that I want to run prediction on. I call the following code in a loop over a DataLoader iterator with a batch size of 64 and store the result in a torch tensor. How should I efficiently collect all the results on the GPU and transfer them to the host?
# Loop over
def step(self, inputs):
    data, label = inputs  # ignore label
    outputs = self.model(data)
    _, preds = torch.max(outputs.data, 1)
    # preds, outputs are cuda tensors. Right?
    return preds, outputs

def predict(self, dataloader):
    for i, batch in enumerate(dataloader):
        pred, output = self.step(batch)
        # How to collect these results efficiently without incurring performance penalty ? |
st80373 | I think it should be reasonably efficient to call .cpu() on pred and put it in a list.
prediction_list = []

def predict(self, dataloader):
    for i, batch in enumerate(dataloader):
        pred, output = self.step(batch)
        prediction_list.append(pred.cpu())
A more extreme case is to use CUDA pinned memory on the CPU, http://pytorch.org/docs/master/notes/cuda.html?highlight=pinned#best-practices
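As an illustrative sketch (my own toy example, not from the thread): pinned host memory mainly speeds up host-to-device copies and enables asynchronous transfers.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 10))
# pin_memory=True makes the loader return tensors in page-locked (pinned) host memory
loader = DataLoader(dataset, batch_size=64, pin_memory=True)

for (batch,) in loader:
    batch = batch.to('cuda', non_blocking=True)  # asynchronous copy is possible from pinned memory
    # ... run the model on `batch` ...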
However, in your use case I'm not sure you'll gain much with that. |
st80374 | Are GPU to Host copies also affected by Pinned Memory? I was wondering if we could collect all the results in GPU and transfer it one shot to CPU.
As you said, these copies didn’t affect my run time significantly. My network is taking 250ms. But I was wondering if there was any better way to do this.
Right now I have a torch Tensor pre allocated with the number of elements and in each iteration, I am indexing from n to n + batch_size on that tensor and storing the values. Hope I am not doing anything wrong by that.
def predict(self, dataloader):
    num_elements = len(dataloader.dataset)
    num_batches = len(dataloader)
    batch_size = dataloader.batch_size
    predictions = torch.zeros(num_elements)
    for i, batch in enumerate(dataloader):
        start = i * batch_size
        end = start + batch_size
        if i == num_batches - 1:
            end = num_elements
        pred, output = self.step(batch)
        predictions[start:end] = pred |
st80375 | I am indexing from n to n + batch_size on that tensor and storing the values.
This seems fine. |
st80376 | I am also looking for a more efficient way to make predictions on the entire validation or testing dataset. The way I did it before was like this:
def pytorch_predict(model, test_loader, device):
    '''
    Make predictions from a PyTorch model
    '''
    # set model to evaluation mode
    model.eval()
    y_true = torch.tensor([], dtype=torch.long, device=device)
    all_outputs = torch.tensor([], device=device)
    # deactivate the autograd engine to reduce memory usage and speed up computations
    with torch.no_grad():
        for data in test_loader:
            inputs = [i.to(device) for i in data[:-1]]
            labels = data[-1].to(device)
            outputs = model(*inputs)
            y_true = torch.cat((y_true, labels), 0)
            all_outputs = torch.cat((all_outputs, outputs), 0)
    y_true = y_true.cpu().numpy()
    _, y_pred = torch.max(all_outputs, 1)
    y_pred = y_pred.cpu().numpy()
    y_pred_prob = F.softmax(all_outputs, dim=1).cpu().numpy()
    return y_true, y_pred, y_pred_prob
I understand that transferring data between GPU and CPU is costly, so I decided to do it all at once. However, I am not sure if transferring step by step is faster than transferring in one shot.
From your code, you allocate memory for the prediction array first; however, I think you still need to transfer your data from GPU to CPU if your model is on the GPU and the outputs are also produced on the GPU. |
st80377 | Hi, the below section of code is creating the error in the title.
for epoch in range(3):  # 3 full passes over the data
    for data in trainset:  # `data` is a batch of data
        X, y = data  # X is the batch of features, y is the batch of targets.
        net.zero_grad()  # sets gradients to 0 before loss calc. You will do this likely every step.
        output = net(X.view(-1, 11))  # pass in the reshaped batch (recall they are 28x28 atm)
        loss = F.nll_loss(output, y)  # calc and grab the loss value
        loss.backward()  # apply this loss backwards thru the network's parameters
        optimizer.step()  # attempt to optimize weights to account for loss/gradients
    print(loss)  # print loss. We hope loss (a measure of wrong-ness) declines!
Note: I credit my code to sentdex; I'm only trying to modify it to use for my dataset.
My repository with full code: https://github.com/itisyeetimetoday/pytorch_reggression
Sorry about the messy/incorrect formatting. This is my first post here. Please let me know how to fix any errors or mistakes with the formatting and site usage in general. |
st80378 | Which line of code is throwing this error?
The linked repository throws a 404 error and there seems to be another TF implementation.
If the error is thrown in your trainset, you should check, that its __getitem__ method returns two values, since you are unpacking them to X and y. |
st80379 | I checked all the methods here https://pytorch.org/docs/stable/cuda.html#module-torch.cuda and could not find a single method that gives me the GPU memory size of the device.
How to get total memory of GPU device and total available GPU memory on device? |
st80380 | Solved by ptrblck in post #2 |
st80381 | print(torch.cuda.get_device_properties('cuda:0'))
> _CudaDeviceProperties(name='TITAN V', major=7, minor=0, total_memory=12034MB, multi_processor_count=80)
will give you some information about your device.
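As a small illustrative sketch (device index is arbitrary), the total and currently used memory can be queried like this:
import torch

props = torch.cuda.get_device_properties(0)
print(props.total_memory)                 # total device memory in bytes
print(torch.cuda.memory_allocated(0))     # bytes currently occupied by tensors
print(torch.cuda.memory_reserved(0))      # bytes held by the caching allocator (formerly memory_cached)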
Note that the CUDA context (and other applications) might take some memory, which will not be tracked by e.g. torch.cuda.memory_allocated() or torch.cuda.memory_cached(). |
st80382 | Does anyone know how to add my custom layer at a specific location after loading pre-trained weights, without breaking the pre-trained weights? |
st80383 | You could add the extra layer, manipulate the forward method as you wish, and load the state_dict passing strict=False.
Here is a small example:
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(1, 1)
        self.fc2 = nn.Linear(1, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

class MyModelModified(nn.Module):
    def __init__(self):
        super(MyModelModified, self).__init__()
        self.fc1 = nn.Linear(1, 1)
        self.fc2 = nn.Linear(1, 1)
        self.extra = nn.Linear(1, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.extra(x)
        return x

model = MyModel()
state_dict = model.state_dict()

model_mod = MyModelModified()
model_mod.load_state_dict(state_dict, strict=False)
> _IncompatibleKeys(missing_keys=['extra.weight', 'extra.bias'], unexpected_keys=[])

# Check for equal params
print(model.fc1.weight == model_mod.fc1.weight)
> True
print(model.fc1.bias == model_mod.fc1.bias)
> True
print(model.fc2.weight == model_mod.fc2.weight)
> True
print(model.fc2.bias == model_mod.fc2.bias)
> True |
st80384 | I want to subclass Sequential to create a Residual sequential class in a few lines.
class Residual(nn.Sequential):
    def forward(self, x):
        return x + super().__call__(x)
However, this crashes with:
RecursionError: maximum recursion depth exceeded while calling a Python object
If I understand well, the super().__call__ actually calls nn.Module.__call__ which then calls the forward of self … which is the one I overrode, that calls the __call__ again …
Changing the __call__ to forward solves the problem :
class Residual(nn.Sequential):
    def forward(self, x):
        return x + super().forward(x)
But I will lose the hooks then, is that right ?
Is my global understanding correct ?
Could it pose any problem for modules such as DataParallel that the forward of Sequential is called directly?
Thank you very much in advance! |
st80385 | Solved by albanD in post #3 |
st80386 | The simplest solution I can come up with is the following
class Residual(nn.Module):
    def __init__(self, *mods):
        super().__init__()
        self.seq = nn.Sequential(*mods)

    def forward(self, x):
        return x + self.seq(x)
This should retain all the hooks with 3 extra lines of code! |
st80387 | Hi,
The hooks for this module have already been called before entering the forward of your module.
So you should use super().forward(x). |
st80388 | Good morning all,
I am doing some machine learning on the metro traffic volume from here: http://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume
The aim is to predict an hourly traffic volume according to various features that are:
holiday : Yes/No
temperature in C
rain and snow fall in mm
clouds cover %
hours of the day : integer from 0 to 23
day of the week (wd_i, i=0…6).
I made a random forest and get a 20% error on traffic prediction, which is OK for me. Then I have plotted the FEATURES IMPORTANCE and got:
Feature ranking:
holiday (0.736229)
temp (0.137736)
rain_1h (0.029498)
snow_1h (0.028497)
clouds_all (0.023778)
weather_main (0.019016)
hours (0.004792)
wd_0 (0.004365)
…
wd_6 (0.000018)
Comments :
‘holiday’ is very important because of course, on holidays, traffic is very slow the whole day !
‘hours’ seems to be almost insignificant.
Action: I have REMOVED ‘hours’ from the features because it was so insignificant and then I run random forest again.
Result: result is a disaster (error jumped from 20 to 60%) and YES, it is normal, because traffic volume depends so much on the hour of the day !
Question:
why was ‘hours’ given such a low ranking in the feature-importance analysis?
it seems it is not a good idea at all to remove features based on the .feature_importances_ attribute of RandomForestRegressor(). Is that correct?
So what does .feature_importances_ precisely measure ?
Thanks in advance for your comments ! |
st80389 | Which library are you using?
The lib should have some information on how the feature importance is calculated.
You are posting in the PyTorch forum, which might not be the best place to ask about other toolkits. |
st80390 | Yes you are correct. I was confused… I am using SKLEARN so it is not the right place ! |
st80391 | Hi,
I am trying to apply torch.sqrt to a vector that contains 0 values. My goal is to avoid the 0 values. I mean, only apply torch.sqrt to values that are different from 0. To do that I have tried the following:
t=torch.tensor([[0.0,1.0,2.0],[3.0,0.0,4.0],[5.0,6.0,0.0]])
aux = t[t!=0].clone()
aux = torch.sqrt(aux)
t[t!=0] = aux.clone()
This works, but breaks the backpropagation. I am getting the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [7822, 1]], which is output 0 of DivBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
It seems like this line aux=t[t!=0].clone() is breaking the backpropagation. How can I solve it? Is there any way to achieve my goal easily? |
st80392 | Hi Dhorka!
Dhorka:
I am trying to apply torch.sqrt to a vector that contains 0 values. My goal is to avoid the 0 values. I mean, only apply torch.sqrt to values that are different from 0.
I speculate that you wish to avoid torch.sqrt (0) because the
derivative of sqrt is infinite at 0.
How can I solve it? Is there any way to achieve my goal easily?
If your goal is, in fact, to avoid the infinite derivative, you could
simply add a small “epsilon” to your value before calling sqrt:
epsilon = 1.e-8
t = torch.sqrt (t + epsilon)
Now the infinite derivative for (elements of) t = 0 just becomes
a large derivative (specifically 1 / (2 * sqrt (epsilon))).
epsilon should be chosen to be comfortably smaller than the
typical small values that show up in this part of your computation.
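If the in-place masked assignment itself is what breaks autograd, an out-of-place variant along these lines (my own sketch, not from the thread) leaves the zeros untouched:
import torch

epsilon = 1.e-8
t = torch.tensor([[0.0, 1.0, 2.0], [3.0, 0.0, 4.0], [5.0, 6.0, 0.0]], requires_grad=True)

# Clamp inside the sqrt so the non-selected branch never produces an infinite
# gradient (torch.where differentiates both branches before masking).
result = torch.where(t != 0, torch.sqrt(t.clamp(min=epsilon)), t)
result.sum().backward()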
Best.
K. Frank |
st80393 | I have a AutoEncoder Model in which I return both the encoding and the reconstructed output. This is the forward method that I use:
def forward(self, x):
    for module in self.encoder:
        x = module(x)
    encoding = x*1  # added the multiply just for testing
    for module in self.decoder:
        x = module(x)
    return x, encoding
In line 4, is the simple assign I am doing to copy wrong?
When I check the output, they both refer to different grad_fn. Am I missing something here? |
st80394 | I’m not sure, what your use case is, but encoding should be assigned to the multiplication with the “old” x tensor, since x will be overwritten in the following loop. |
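As an illustrative sketch (my own, not from the thread), using .clone() makes the intent explicit and also guards against any in-place ops inside the decoder modules:
def forward(self, x):
    for module in self.encoder:
        x = module(x)
    encoding = x.clone()   # snapshot of the latent code before decoding
    for module in self.decoder:
        x = module(x)
    return x, encoding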
st80395 | Thank you for the reply. I was thinking that I would have to use .clone() for copying the tensor.
I am planning to use the encoding as well as x for optimisation. Will that work fine in this case? |
st80396 | I am using an LSTM in order to perform binary classification. When I plot the test loss, it is not decreasing over time; it rather fluctuates a lot and looks extremely weird. The training loss, on the other hand, looks normal and decreases over time. Here's a picture of it:
(screenshot of the fluctuating test-loss curve omitted)
This is my code of the model definition and configuration.
# Create LSTM Model
class LSTMModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
        super(LSTMModel, self).__init__()
        # Number of hidden dimensions
        self.hidden_dim = hidden_dim
        # Number of hidden layers
        self.layer_dim = layer_dim
        # LSTM
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True, dropout=0.2)
        # Readout layer
        self.f1 = nn.Linear(hidden_dim, output_dim)
        self.softmax = nn.Sigmoid()

    def forward(self, x):
        # Initialize hidden state with zeros
        h0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).type(torch.FloatTensor).cuda())
        # Initialize cell state
        c0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).type(torch.FloatTensor).cuda())
        out, (hn, cn) = self.lstm(x, (h0, c0))
        out = self.f1(hn[-1])
        out = self.softmax(out)
        return out
#LSTM Configuration
batch_size = 10000
num_epochs = 200
learning_rate = 0.001#Try lowering the rate
# Create LSTM
input_dim = 1 # input dimension
hidden_dim = 50 # hidden layer dimension
layer_dim =2 # number of hidden layers
output_dim = 1 # output dimension
model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim)
model.cuda()
error = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
This is my code for training and testing
from tensorboardcolab import TensorBoardColab

globaliter = 0
globaliter2 = 0
tb = TensorBoardColab()

for epoch in tqdm(range(num_epochs)):
    # Train
    model.train()
    for i, (inputs, targets) in enumerate(train_loader):
        train = Variable(inputs.type(torch.FloatTensor).cuda())
        targets = Variable(targets.type(torch.FloatTensor).cuda())
        optimizer.zero_grad()
        outputs = model(train)
        loss = error(outputs, targets)
        loss_list_train.append(loss.item())
        loss.backward()
        optimizer.step()
        tb.save_value('Train Loss', 'train_loss', globaliter, loss.item())
        globaliter += 1
        tb.flush_line('train_loss')

    # Test
    model.eval()
    for inputs, targets in test_loader:
        inputs = Variable(inputs.type(torch.FloatTensor).cuda())
        targets = Variable(targets.type(torch.FloatTensor).cuda())
        outputs = model(inputs)
        loss_test = error(outputs, targets)
        loss_list_test.append(loss_test.item())
        tb.save_value('Test Loss', 'test_loss', globaliter2, loss_test.item())
        globaliter2 += 1
        tb.flush_line('test_loss')
I’d really be grateful if someone helped me figure this out, or offered suggestions or advice |
st80397 | To install PyTorch nightly from pytorch.org:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch-nightly
To install PyTorch nightly from anaconda:
conda install -c pytorch pytorch-nightly
So, which one is correct? The latter installs pytorch-nightly from the pytorch channel, the first installs pytorch from the pytorch-nightly channel!! Confusing. |
st80398 | Solved by ptrblck in post #4 |
st80399 | The first one is the new nightly build. If in doubt, stick to the install instructions on the website. |
st80400 | I use the first version. It seems it now always downgrades pytorch if I also include torchvision. The nightly torchvision seems to depend on an older version of the nightly torch; also, the stable torchvision raises an error when I try it with the nightly pytorch.
If I use conda update to update the nightly I get:
Updating pytorch is constricted by
torchvision -> requires pytorch==1.3.0.dev20190917
So I will have to remove torchvision first, and then I can only install the stable torchvision (which does not work against nightly pytorch) |
st80401 | dashesy:
nightly torchvision seems to be depending on an older version of nightly torch
I would assume the nightly binaries of torchvision and PyTorch should be matching, i.e. both should have the same build date. If some torchvision builds were skipped (for whatever reason), you might have to downgrade PyTorch to the latest matching nightly build.
@fmassa might know more about it. |
st80402 | @fmassa pytorch.org suggests
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch-nightly
To get the latest nightly packages for PyTorch and torchvision.
So can I assume
conda update pytorch torchvision -c pytorch-nightly
Is the right way to update them both?
I currently cannot update both (torchvision forced PyTorch to donwgrade):
The following packages will be UPDATED:
torchvision pytorch::torchvision-0.3.0-py36_cu10.~ --> pytorch-nightly::torchvision-0.5.0.dev20190917-py36_cu100
The following packages will be DOWNGRADED:
pytorch 1.3.0.dev20191002-py3.6_cuda10.0.130_~ --> 1.3.0.dev20190917-py3.6_cuda10.0.130_cudnn7.6.2_0
I just want to know if this is normal or my setup is wrong. Also, I cannot load the latest released torchvision against the nightly PyTorch:
ImportError: /home/ehazar/miniconda3/envs/py3_night/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c1011CPUTensorIdEv |
st80403 | Hi,
I have been trying to train a LSTM on customer sequence model to predict the next event that is likely to happen.
The event space consists of 2983 events. To begin with, I have made a dataset that the most frequent 100 events form the labels for training and the last 10 events before the label as the attributes( or event sequence).
For example the event sequence in a row might be like
* Attributes : [[2,3,5,7,42,432,4,3,1,5],
[243,343,52,7,423,432,42,31,112,532]]
* Labels : [[2],
[1]]
Encoding : One-Hot encoding (For Attributes)
Raw Values (For Labels)
#Imports and defaults :
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import ast
from sklearn.model_selection import train_test_split
from datetime import datetime
import torch.nn.functional as F
from torch.optim.lr_scheduler import ReduceLROnPlateau, CyclicLR
from torch.utils import data
import matplotlib.pyplot as plt
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, classification_report
torch.multiprocessing.set_sharing_strategy('file_system')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
#HyperParameters :
sequence_length = 10
input_size = 2983
hidden_size = 1024
num_layers = 2
num_classes = 100
batch_size = 50
num_epochs = 10
learning_rate = 1
#Model :
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        h0 = torch.randn(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.randn(self.num_layers, x.size(0), self.hidden_size).to(device)
        out, _ = self.lstm(x, (h0, c0))
        out_ = self.fc(out[:, -1, :])
        return out_
model = RNN(input_size, hidden_size, num_layers, num_classes).cuda()
#Loss and optimizer and Scheduler :
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum= 0.1)
scheduler = CyclicLR(optimizer, mode = 'exp_range',gamma =0.9999, base_lr= 1e-5, max_lr = 1)
#DataLoader :
class Dataset_1(data.Dataset):
    def __init__(self, list_IDs, labels, n_uniq):
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_uniq = n_uniq

    def __len__(self):
        return len(self.list_IDs)

    def __getitem__(self, index):
        X = self.list_IDs[index]
        X = F.one_hot(torch.from_numpy(X).to(torch.int64), num_classes=self.n_uniq)
        y = self.labels[index]
        return X, y
#Train and validation generators:
params = {'batch_size': 50,
'shuffle': True,
'num_workers': 30}
training_set = Dataset_1(x_train, y_train, input_size )
training_generator = data.DataLoader(training_set, **params)
validation_set = Dataset_1(x_test, y_test, input_size)
validation_generator = data.DataLoader(validation_set, **params)
#Training :
for epoch in range(10):
    epoch_time = datetime.now()
    # Training
    print('Training Start')
    counter = 0
    for local_batch, local_labels in training_generator:
        # Transfer to GPU
        counter += 1
        #local_batch, local_labels = (copy.deepcopy(local_batch), copy.deepcopy(local_labels))
        local_batch, local_labels = local_batch.to(device).float(), local_labels.to(device).long()
        outputs = model(local_batch)
        loss = criterion(outputs, local_labels)
        #loss = criterion(local_labels,outputs, input_lengths, target_lengths)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if counter % 5000 == 0:
            print(f"Counter : {counter} || Loss : {loss.item()}\n LR : {optimizer.state_dict()['param_groups'][0]['lr']}")
        if counter % 50 == 0:
            lst_loss.append(loss.item())
            lst_lr.append(optimizer.state_dict()['param_groups'][0]['lr'])
        scheduler.step()
    print(f"Epoch time : {datetime.now()-epoch_time} || Loss : {loss.item()}")

    with torch.no_grad():
        correct = 0
        total = 0
        pred = []
        orig = []
        for local_batch, local_labels in validation_generator:
            outputs = model(local_batch.to(device).float())
            _, predicted = torch.max(outputs.data, 1)
            total += local_labels.squeeze().size(0)
            correct += (predicted == local_labels.squeeze().to(device).long()).sum().item()
            orig.append(local_labels.squeeze().cpu().numpy())
            pred.append(predicted.cpu().numpy())
            if total > 100000:
                print(f'Epoch : {epoch}')
                print('Test Accuracy of the model : {} %'.format(100 * correct / total))
                pred = np.array(pred).reshape((1, -1)).squeeze()
                orig = np.array(orig).reshape((1, -1)).squeeze()
                s = classification_report(orig, pred)
                df_eval = pd.DataFrame({'Labels': s.split()[4:504:5], 'precision': s.split()[5:504:5],
                                        'recall': s.split()[6:504:5], 'f-1 score': s.split()[7:504:5], 'support': s.split()[8:504:5]})
                torch.save({'model_state_dict': model.state_dict(),
                            'optimizer_state_dict': optimizer.state_dict(),
                            'loss': loss.item(), 'df_eval': df_eval,
                            'accuracy': accuracy_score(orig, pred),
                            'epoch': epoch
                            }, f'./models/epoch_test/model_top_100_LR_0.01_REDUCELR_layers_2_epoch_{epoch}_accur_{int(accuracy_score(orig,pred)*100)}.pth')
                break
Below is the model's loss and learning rate:
(plot of the loss and learning rate omitted)
I have tried many variations of the model :
Num of layers from 1-20
hidden_size 128,1024
The dataset I used has 200M rows for training, and since the dataset is huge, with the current infra (GPU - 8GB, Memory - 700GB) that we have here it takes 15 hours to complete 1 epoch.
The accuracy of the model in general fluctuates between 31% and 34%.
Questions :
When the number of nodes in each layer is 128, the amount of GPU memory used was around 1GB, and though increasing the batch size helps with utilization, the loss was even bigger with each iteration.
What is the range of batch sizes that I need to look for? Is there anything specific that is making the model take so much time, or does the time seem normal for the dataset?
Is there anything that I am doing that is weird or not intended to be used the way I am using it (updating the LR after 1000 batches rather than each epoch, or the LR range of 1e-5 to 1)?
Why isn't the loss value decreasing any further for any of the learning rates?
I would like to expand the model to all of the 2983 events and predict n events in the future. Are there any better algorithms and techniques that would help in achieving that?
Thanks for the help and for the resources that are being provided to make easy for the beginners to use deeplearning.
Thanks,
Swamy |
st80404 | Suppose I have 2 3-D tensors A, and B and want to copy some elements from B into A. Specifically, I have two lists of the form [(x_1, y_1), (x_2, y_2), ...] and [(x'_1, y'_1), (x'_2, y'_2), ...] and I want to perform A[x_1, y_1, :] = B[x'_1, y'_1, :] and so on. Is there any fast way of doing this or is a for-loop the only way?
Also, will such an operation support the flow of gradients from A to elements copied from B?
Thanks! |
st80405 | Solved by albanD in post #2 |
st80406 | Hi,
Here is an example implementation.
Gradients will flow as expected for the copied values from A to B (not gradient for indices of course).
Note that inplace operations sometimes prevent the autograd from being able to compute gradients so there is a flag in case you encounter this error.
Let me know if you have more questions.
import torch

A = torch.zeros(3, 4, 6)
B = torch.rand(3, 4, 6)
indA = torch.LongTensor([[0, 0], [0, 1], [1, 1]])
indB = torch.LongTensor([[1, 1], [2, 1], [2, 2]])

def indices_copy(A, B, indA, indB, inplace=True):
    # To make sure our views below are valid
    assert A.is_contiguous()
    assert B.is_contiguous()
    # Get the size
    size = A.size()
    # Collapse the first two dimensions, so that we index only one
    vA = A.view(size[0] * size[1], size[2])
    vB = B.view(size[0] * size[1], size[2])
    # If we need out of place, clone to get a tensor backed by new memory
    if not inplace:
        vA = vA.clone()
    # Transform the 2D indices into 1D indices in our collapsed dimension
    lin_indA = indA.select(1, 0) * size[1] + indA.select(1, 1)
    lin_indB = indB.select(1, 0) * size[1] + indB.select(1, 1)
    # Read B and write in A
    vA.index_copy_(0, lin_indA, vB.index_select(0, lin_indB))
    return vA.view(size)

print("Inputs")
print(A)
print(B)
print(indA)
print(indB)

indices_copy(A, B, indA, indB)
print("Output inplace")
print(A)

A = torch.zeros(3, 4, 6)
new_A = indices_copy(A, B, indA, indB, inplace=False)
print("Output out of place")
print(new_A)
print("Unmodified A")
print(A) |
st80407 | I have a complex vision model. When doing inference on CPU, everything is fine but inference on GPU only does well for the first pass (image) and then all following passes (images) go a way off and produce garbage results. The problem is not in data. When I drop the first image that does well, then the new first (which previously was second image and did not do well) gets predicted well. After couple of days of debugging I have pinpointed the single line of code where the divergence from normal execution first occurs. But I cannot figure out why, except that it is perhaps something to do with asynchronous execution on GPU, and possibly tensor.transpose() that is used on this line. I have tested it on several GPU-s, all the same. Using Pytorch 1.2, Python 3.6 and Cuda 10.
Here is the line of code that first diverges:
# real size: torch.Size([1, 128, 10, 400, 352])
dense = data.new_empty((batch_size, 128, grid_size[1], grid_size[2], grid_size[0]))
# copy data from sparse tensor to new tensor
# THIS IS WHERE THE OUTPUT IS DIFFERENT ON GPU from 2nd pass
# dense (CPU) == dense (GPU) on first pass, but differ on each following pass
dense[:, :, coords[:,:,1], coords[:,:,2], coords[:,:,0]] = data.transpose(0,2)
The shape of coords is [1, N, 3], and shape of data is [N, 1, 128] - and N is different for each image (pass). Coords is used to copy data to correct location in a new tensor. Might variable size of N be a problem?
I have tried to copy_() the data tensor to intermediate variable before building dense, but no luck. The dense tensor content is same for the first image on CPU and GPU. But from second image onward it diverges, on CPU it predicts well, but GPU gets different results (out of 180M values 155K were different using very loose 1e-04 tolerance). This does not even involve any learning, just tensor creation and transpose().
I have of course called the model in eval() mode and even checked that dense.requires_grad == False on each iteration for dense tensor.
Well, I am totally out of ideas where to look next. I might work around it somehow (initialise a new model each time), but I suspect that this might also affect training and I’d like to understand the cause. |
st80408 | Solved by martinr in post #2 |
st80409 | OK, I have now solved the issue in a sense that I know what caused the issue and fixed it. I created a new tensor using .new_empty() which worked fine on CPU where I developed the code and also on the first iteration on GPU. But this was the source of the problem. When I changed the code to use .new_zeros() instead, the problem disappeared and inference is fine both on CPU and GPU.
If someone can give an explanation on why this happens internally, I would be grateful.
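(A likely explanation, my own note rather than from the thread: new_empty returns uninitialized memory, so every location of dense that the indexed assignment does not cover keeps whatever bytes happened to be in that allocation; on the GPU this is often stale data reused by the caching allocator from earlier iterations. new_zeros guarantees a defined value everywhere. A tiny demo:)
import torch

a = torch.empty(2, 3, device='cuda')    # uninitialized: contents are arbitrary
b = torch.zeros(2, 3, device='cuda')    # defined: all zeros
a[0, 0] = 1.0                           # only the explicitly written element is well defined
print(a)                                # remaining entries may hold garbage from previous allocations
print(b)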
PS! If someone wonders how I located the problematic code lines then I described it in this comment 16. |
st80410 | Hi everyone, I am getting an error when trying to load the model from the saved checkpoint file.
Model:
class RecNet2(nn.Module):
    def __init__(self, in_shape, num_classes=12):
        super(RecNet2, self).__init__()
        self.layer1 = nn.GRU(in_shape[-1], 256)
        self.layer2 = nn.LSTM(256, 512)
        self.fc1 = nn.Linear(512, 256)
        self.fc2 = nn.Linear(40 * 256, num_classes)

    def forward(self, x):
        x, h_out = self.layer1(x)
        x = F.dropout(x, p=0.5)
        x, h_ou2 = self.layer2(x)
        x = F.dropout(x, p=0.3)
        x = self.fc1(x)
        x = self.fc2(x.view(-1, 40 * 256))
        return x  # logits |
Error message:
>>> model.load_state_dict(fle)
Traceback (most recent call last):
File "/home/user/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 482, in load_state_dict
own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /opt/conda/conda-bld/pytorch_1512387374934/work/torch/lib/THC/generic/THCTensorCopy.c:101
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 487, in load_state_dict
.format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named layer1.weight_ih_l0, whose dimensions in the model are torch.Size([768]) and whose dimensions in the checkpoint are torch.Size([768, 101]).
The .pth file has been created during training of the above model. Any suggestions on how to investigate this and why are there mismatch tensor shape errors? |
st80411 | kirk86:
RuntimeError: While copying the parameter named layer1.weight_ih_l0, whose dimensions in the model are torch.Size([768]) and whose dimensions in the checkpoint are torch.Size([768, 101]).
That says that a layer of your model has been changed between the checkpoint and the definition. In your case it is the nn.GRU. |
st80412 | RuntimeError Traceback (most recent call last)
in ()
1 model = CaptionModel_B(2048, 50, 160, vocab_size, num_layers=1)
----> 2 model.load_state_dict(torch.load('im_caption_35.727_0.316_epoch_20.pth.tar', map_location='cpu'))
      3 solver = NetSolver(data, model)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
    843         if len(error_msgs) > 0:
    844             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
--> 845                 self.__class__.__name__, "\n\t".join(error_msgs)))
846 return _IncompatibleKeys(missing_keys, unexpected_keys)
847
RuntimeError: Error(s) in loading state_dict for CaptionModel_B:
size mismatch for rnn.embed.weight: copying a param with shape torch.Size([9080, 50]) from checkpoint, the shape in current model is torch.Size([8947, 50]).
size mismatch for rnn.linear.weight: copying a param with shape torch.Size([9080, 160]) from checkpoint, the shape in current model is torch.Size([8947, 160]).
size mismatch for rnn.linear.bias: copying a param with shape torch.Size([9080]) from checkpoint, the shape in current model is torch.Size([8947]).
Can you help me to solve this error? |
st80413 | Hi all,
I want a custom optimizer to react to the last layers of a graph in a specific way. Usually, the last group in the param_groups corresponds to the output layer, but many networks have multiple output layers. Is there any way to know, at the optimization step of the optimizer, which ones are output layers?
Thanks in advance. |
st80414 | If you have named the output layers, maybe you can segregate the layers based on that while doing
the back-propagation yourself.
for name, param in model.named_parameters():
    print(name, param.data) |
st80415 | Hi
All optimizers accept param_groups as argument to construct them.
So you can require the user to provide parameters with two groups called body and output in your optimizer's init.
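A minimal illustrative sketch of such a constructor call (the toy model and the 'name' key are my own placeholders, not from the thread):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
param_groups = [
    {'params': model[:2].parameters(), 'name': 'body'},
    {'params': model[2].parameters(), 'name': 'output'},
]
# A custom optimizer can then check group['name'] for each group inside its step() method.
optimizer = torch.optim.SGD(param_groups, lr=0.1)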
That way you know exactly which is which. |
st80416 | I was actually looking to see if there was any way to do that generically, but @albanD's approach is a reasonable requirement. |
st80417 | Unfortunately, in pytorch, a user can define their network in any way they want. So I don't think there is a generic way to do this. |
st80418 | This question is conceptual as I have a model working decently well and I am looking for different things to try out.
Research Problem: I am trying to classify each pixel in an image with values from 0->1. This is a regression problem.
Dataset: I have 30 images that are 1000x1000 and ground truth for those images.
Current Approach: For the training, I am extracting 65x65 patches to train a CNN which gives a label to the center pixel. Then in the evaluation of the model, I am striding through the full image labelling each pixel.
Are there any other methods I can try out? |
st80419 | Hi,
I have tried reading around to see if this question has been answered before, but haven’t found anything specific.
In python I could easily convert my image numpy array to cube using image.swapaxes(1, 2).swapaxes(0, 1)
or simply using image.transpose((2,0,1)) before converting to tensor for making inference.
Is there any quick way of doing this, assuming I have read in an image with OpenCV in the C++ front-end?
Any hint or example will be appreciated. |
st80420 | Thanks, although I have just used cv::split to split and concatenate the channels, and seems to work. |
st80421 | Hello,
I have implemented a VAE in Pytorch with MNIST database.
I noticed that when I use the F.mse_loss, I don’t get a good reconstruction of the input images.
Here is an example of the reconstruction of some input images:
(reconstruction screenshot omitted)
But when I use the F.binary_cross_entropy loss, I get a good reconstruction:
Can you please help me understand why I can’t get a good reconstruction of the input images when using the mse loss?
I appreciate your help! |
st80422 | Based on Chapter 6.2.2, DeepLearningBook 14, I would assume nn.BCELoss to work better, as you are dealing with a Bernoulli Output Distribution, but I’m no expert on this topic, so let’s wait for others to chime in. |
st80423 | I want to do fine-tuning.
For example, there is a pre-trained weight trained with the layer1 -> layer2 -> layer3 structure.
But I want to fine-tune with a layer1 -> my_layer1 -> layer2 -> my_layer2 -> layer3 structure.
Is there any way?
Please help me. |
st80424 | Solved by albanD in post #2 |
st80425 | Hi,
This is technically possible by setting the weights to the pretrained layers yourself with something like model.layer1.load_state_dict(pretrained_layer1_state_dict).
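A small self-contained sketch of that idea (toy modules of my own, not the poster's network):
import torch.nn as nn

pretrained = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 2))                       # layer1 -> layer2 -> layer3
new_model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))  # with layers inserted

# Copy the pretrained weights into the matching sub-modules; the inserted layers keep their fresh init.
new_model[0].load_state_dict(pretrained[0].state_dict())
new_model[2].load_state_dict(pretrained[1].state_dict())
new_model[4].load_state_dict(pretrained[2].state_dict())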
That being said, the insertion of your custom layers might change the behavior of the network completely and make the initialization useless. |
st80426 | As the title says I am trying to get a newer version of pytorch to get this option in the torch.unique function. However when I do
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
I do not get the latest version of pytorch (just the 1.0), which apparently does not include this option.
How can I get the correct version ?
EDIT :
I of course tried :
conda install pytorch=1.2 torchvision cudatoolkit=10.1 -c pytorch
but I cannot compile C++ extension as it says the cuda installation is not found and there is no cuda_make_macros.h |
st80427 | Solved by ptrblck in post #2 |
st80428 | Could you try to use cudatoolkit=10.0?
If I’m not mistaken, conda will try to install PyTorch 1.0.0 with CUDA9.0 if you specify a wrong cudatoolkit version. |
st80429 | Alright that was the solution, so it is ok to use a different toolkit than the one installed on the system apparently, which is nice.
Thank you very much |
st80430 | Yes, the binaries ship with their own CUDA and cudnn, so that you would just need to install the NVIDIA drivers locally. |
st80431 | From my dataset I have three numpy variables/channels which can be characterized as RGB respectively. Furthermore I converted them into tensors using torch.from_numpy(u) and concatenated them into an image tensor as:
HR_data = torch.cat((u_tensor,v_tensor,w_tensor), dim=1)
Then I normalized like this: HR_data_norm = (HR_data - HR_data.mean()) / HR_data.std()
and used TensorDataset to put it into a dataloader.
My question is, should I normalize the numpy arrays before I convert them into tensors or is my method ok? |
st80432 | You shouldn’t see a difference between the numpy and PyTorch approach.
However, usually you would normalize the data using the statistics from the channels, i.e.:
x = torch.randint(0, 256, (3, 224, 224)).float()
y = (x - x.mean([1, 2], keepdim=True)) / x.std([1, 2], keepdim=True)
print(y.mean(), y.std())
> tensor(-4.6021e-08) tensor(1.0000) |
st80433 | I have a small dataset on which I want to fine-tune the weights of a CNN, but I do not want that little dataset to mess up my batchnorm parameters. How can I freeze them during fine-tuning? |
st80434 | I wrote a custom network that uses custom RNN cells that I also wrote. Everything works on CPU and I now try to get it to run on a GPU for faster training.
I’m registering all weights as well as a weight mask (with requires_grad=False) as Parameters in my RNN cells so they get passed to the GPU by just using .to(device) on my network. This seems to work. However in the forward pass method of my network I’m using three constructs that cause some trouble. (The code is too large to paste here, so I try to explain as good as I can.)
A python list contexbuffer, where I append hidden states that I computed and need to access again in the same forward pass.
A torch tensor output initialized with zeros that I fill with the final outputs after all hidden states are computed.
Torch tensors state_mask, zero-one masks where at least one is generated per point in the sequence and passed together with input and hidden states to the RNN cell for computation of the next hidden state.
I tried moving the hidden states from the context buffer as well as the state masks to the GPU right before computation in the RNN cell. This works, but I feel like this is far from ideal. I didn't manage to get the output to be on the GPU, however; I always get
Expected object of backend CPU but got backend CUDA for argument #2 ‘target’
when calculating the loss with .to(device) on the target.
Is there some way to initialize those lists/tensors so that they are moved to the GPU with the whole network? |
st80435 | Try this torch.cuda.set_device(0); This will put everything to GPU by default.
Expected object of backend CPU but got backend CUDA for argument #2 ‘target’
The error says that the loss was expecting something to be on CPU but rather it was found to be on GPU.
So share your loss function maybe and the way you are calling it (might help others to help ) |
st80436 | Thanks for the answer, I’ll try when I get home.
To the error: I’m moving input and target to the GPU with .to(device), then pass the input through my network (which has also been moved to GPU) with prediction = model(input).
When I then call loss = criterion(prediction, target), the error is thrown. So it means the prediction I get from my network is still on CPU even though the network should be on GPU. The target is on GPU as it should be. |
st80437 | Hello everyone
As some people might know, the human vision system is more sensitive toward the color green than red & blue which is why we adjust for this while capturing images from cameras.
It’s necessary to include more information from the green pixels in order to create an image that the eye will perceive as a “true color.” source 1
In a similar fashion, do our conv-nets show greater interest in / derive more information from either red, green, or blue? I don't know the answer to this and invite anyone to share thoughts or links to more information. |
We will focus on the most commonly used models where the first conv layer’s kernels are 3-dimensional (height, width, depth). I wanted to see how these kernels differed in the depth / RGB dimension. I wrote a script that summed the absolute values of the kernels so it was possible to see which channel ‘carried more weight’ under the assumption that larger weights are more important. Here are the results and the code.
alexnet RGB: [0.35563377 0.35734916 0.28701708]
densenet121 RGB: [0.3414047 0.4036838 0.25491145]
densenet161 RGB: [0.33822682 0.40176123 0.2600119 ]
densenet169 RGB: [0.34025708 0.4014946 0.25824833]
densenet201 RGB: [0.3429028 0.3985608 0.2585365]
googlenet RGB: [0.35481286 0.37088254 0.2743046 ]
inception_v3 RGB: [0.39560044 0.31095004 0.29344955]
mnasnet0_5 RGB: [0.3219102 0.44260073 0.2354891 ]
mnasnet1_0 RGB: [0.30014 0.48660594 0.21325403]
mobilenet_v2 RGB: [0.3212116 0.45090452 0.22788389]
resnet101 RGB: [0.33598453 0.40465072 0.2593648 ]
resnet152 RGB: [0.33805275 0.3991825 0.26276478]
resnet18 RGB: [0.3450004 0.390642 0.26435766]
resnet34 RGB: [0.3454328 0.38308167 0.2714856 ]
resnet50 RGB: [0.34040007 0.38215744 0.27744249]
resnext101_32x8d RGB: [0.33099824 0.4089949 0.26000684]
resnext50_32x4d RGB: [0.3365792 0.39434975 0.269071 ]
shufflenet_v2_x0_5 RGB: [0.330948 0.45226046 0.21679151]
shufflenet_v2_x1_0 RGB: [0.326566 0.46278057 0.21065338]
squeezenet1_0 RGB: [0.3383747 0.38222563 0.27939966]
squeezenet1_1 RGB: [0.3423823 0.40102834 0.25658935]
vgg11 RGB: [0.34595174 0.37333512 0.2807131 ]
vgg11_bn RGB: [0.32353705 0.41581088 0.26065207]
vgg13 RGB: [0.3471459 0.3867222 0.2661319]
vgg13_bn RGB: [0.32355314 0.4105902 0.26585665]
vgg16 RGB: [0.3416876 0.37477094 0.2835415 ]
vgg16_bn RGB: [0.32831633 0.4132697 0.25841403]
vgg19 RGB: [0.3444211 0.37442288 0.2811561 ]
vgg19_bn RGB: [0.3258713 0.41411158 0.26001713]
wide_resnet101_2 RGB: [0.3324678 0.41679963 0.25073257]
wide_resnet50_2 RGB: [0.33688802 0.39650932 0.26660258]
Mean RGB: [0.3378277 0.40201578 0.26015648]
STD RGB: [0.01547325 0.03337919 0.02063317]
import torch
import torchvision.models as models
from types import FunctionType

def check_first_layer(model):
    for name, weights in model.named_parameters():
        w = weights.abs()
        chn = w.sum(dim=0).sum(-1).sum(-1)
        # Normalize so that R+G+B=1
        chn = chn / chn.sum(0).expand_as(chn)
        chn[torch.isnan(chn)] = 0
        return chn.detach().numpy()

chn_info = torch.tensor([])
for model_name in dir(models):
    if model_name[0].islower():
        attr = getattr(models, model_name)
        if isinstance(attr, FunctionType):
            try:
                model = attr(pretrained=True)
                rgb_vec = check_first_layer(model)
                print(f'{model_name: <25} RGB: {rgb_vec}')
                rgb_vec = torch.tensor(rgb_vec).view(1, -1)
                chn_info = torch.cat((chn_info, rgb_vec), dim=0)
            except:
                pass

mean = chn_info.mean(dim=0)
std = chn_info.std(dim=0)
print(f'Mean RGB: {mean.numpy()}')
print(f'STD RGB: {std.numpy()}')
Results: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mean RGB: [0.3378277 0.40201578 0.26015648]
STD RGB: [0.01547325 0.03337919 0.02063317]
.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From these results, we can see that the weights that are convolved with the green channel are the largest and those for the blue channel are the smallest.
Does this mean that the green layer is more important for the networks? What do you think?
What could be some other reasons for these differences? |
st80438 | These are some interesting results and I’m not sure, if the amplitude of the channel is corresponding to the importance of it.
Did you check, if the green color channel of the input images was “adjusted”?
E.g. wouldn’t the weight in the green channel be larger, if the signal from the green channel is smaller in amplitude?
However, as a red-green color blind person, I cannot confirm the assumption of a higher sensitivity towards red and green. |
st80439 | I can’t speak about the relation between kernel weights and colors, but it’s possible that the Bayer filters in cameras play a role in it.
Most RGB images result from demosaicing, basically interpolation from the existing “color” values, and the green channel has two times the number of pixels than red and blue, therefore being more “accurate” in the green channel than red and blue… |
st80440 | ptrblck:
Did you check, if the green color channel of the input images was “adjusted”?
Good idea. The models are trained on ImageNet but I don't have this dataset so I can't (yet) do this. I guess you could compare the relative values of the image channels, similarly to what was done with the weights.
To my knowledge, all the models were trained with this normalization.
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
I’m not sure, but I believe that these parameter values were derived from the imagenet dataset so that normalized images would resemble a certain distribution, e.g. mean=0, std=1. If we assume that this is true, then ->
The normalization parameters are quite similar for the different channels which suggest that imagenet doesn’t have large differences for the RGB-channels. Idk if my reasoning holds here, but it makes sense to me.
ptrblck:
However, as a red-green color blind person, I cannot confirm the assumption of a higher sensitivity towards red and green. |
st80441 | I agree, this sounds like a somewhat plausible explanation. We already know that the nets largely take their cues from local image features (fur, eyes, etc) source 1. But would that 'image correctness' at the pixel level matter that much for a network?
I guess we have seen that 'pixel correctness' matters a lot in some adversarial attacks source1 source2
Is there a way to test for this? The only way I can think of right now would be to train on images where we delete green pixels corresponding to 50% of the original green pixels in the Bayer filters, and interpolate them back. I see a few problems with this approach though… |
st80442 | Hi all,
I am training my model on the CPU. A very strange behaviour occured (that I could solve) but I thought I would bring it up because I cannot imagine that this is a desired behaviour:
So when I just train my model on the CPU on my PC with 24 cores, all 24 cores are used at 100% even though my model is rather small (that's why I don't train it on the GPU). Most of the workload is also kernel usage. The training time per epoch is about 2.5 seconds. I have version 1.0.1.post2 on that PC.
So to make it train faster I pushed it to a server with 80 cores. There, however, I got the exact same behaviour: When training all 80 cores were used with 100% work load. The time per epoch took again about 2.5 seconds on average. On that server I use pytorch version 1.1.0.
Reading through some threads
github.com/pytorch/pytorch - Issue: Very high CPU utilization with pin_memory=True and num_workers > 0
github.com/pytorch/pytorch - Issue: cpu usage is too high on the main thread after pytorch version 1.1 (and 1.2) (not data loader workers)
I tried torch.set_num_threads(1) and this not just cut the CPU usage to one core (as expected) but the training also is much faster: About 1 seconds per epoch now.
So, I am not sure if this behaviour is really desired, as it seems like spreading the workload over all CPU cores not only requires all resources but is also much slower. |
st80443 | Hi,
Unfortunately, this is a known limitation. We use openMP to parallelize cpu work and by default it uses all the available cores. So for machines with many cores, it is sometimes necessary to manually reduce the number of cores it can use |
st80444 | okay, I see. But do you have any clue why the code is running so much slower when running on 80 CPUs as opposed to when running on one CPU? |
st80445 | It is most likely because it tries to parallelize many “small” operations on many cores. So each core has almost nothing to do, but the overhead of communication between the cores is very large. |
st80446 | By the way, this is in the process of being solved, you can track the progress here: https://github.com/pytorch/pytorch/issues/24080 538. |
st80447 | Hi all,
My 10TB hard disk has only 1TB of free space now. The data loader (which was working fine before) is super slow now (around 20x slower). Could the hard disk space be causing this issue?
Thanks |
st80448 | Solved by Gkv in post #4
Problem solved upon making some free space. Thanks |
st80449 | Hi,
This is most likely the hard drive that can do some optimization for speed when it’s not too full, but cannot when it has more content. So not pytorch related.
Have you tried freeing more space on the disk to see if performance come back? |
st80450 | I’m trying to run a PyTorch job through AWS ECS (just running a docker container inside EC2) but I receive the following error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device(‘cpu’) to map your storages to the CPU.
I use the GPU ECS AMI (ami-0180e79579e32b7e6) together with the 19.09 Nvidia PyTorch docker image.
The weird thing that throws me off is that the nvidia-smi command tells me everything is fine with CUDA:
== PyTorch == NVIDIA Release 19.09 (build 7911588), PyTorch Version 1.2.0a0+afb7a16 (container banner and copyright notices condensed)
NOTE: MOFED driver for multi-node communication was not detected. NOTE: The SHMEM allocation limit is set to the default of 64MB; NVIDIA recommends nvidia-docker run --ipc=host …
NVIDIA-SMI 418.40.04 | Driver Version: 418.40.04 | CUDA Version: 10.1 | GPU 0: Tesla V100-SXM2, 0MiB / 16130MiB, 6% util | No running processes found
Plus, it also runs fine locally with my GTX 1070. The issue can't be that the Docker image has CUDA 10.1, right? It would be weird if NVIDIA created a PyTorch Docker image that doesn't actually work…
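For what it's worth, the workaround the error message suggests would look roughly like the line below ('model.pth' is just a stand-in for my actual checkpoint file), but I'd rather figure out why CUDA isn't detected in the first place:
state_dict = torch.load('model.pth', map_location=torch.device('cpu'))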
Any help is greatly appreciated |
st80451 | Could you try to create a simple dummy CUDA tensor inside our container on the AWS machine and check if it's working?
If so, some issue might come from the deserialization, although I’m not sure, why it should be the case.
Do you mean the 19.09 container is working fine on your local machine with a GTX 1070? |
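Something like this would be enough as a quick check (just a minimal sketch):
import torch
print(torch.cuda.is_available())
x = torch.randn(3, 3, device='cuda')  # raises a RuntimeError if CUDA isn't usable
print(x.sum())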
st80452 | Actually, it was the AWS AMI's fault: I used the one that they display on https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html 34 instead of the one in the AWS Marketplace. |
st80453 | Hello all,
I’m trying to execute a modified version of matrix factorization via deep learning. The goal is to find a matrix of shape [9724x300] where the rows are items and there are (arbitrarily) 300 features. The function would be optimized when the dot product of vector_i and vector_j is really, really close to the value in the interaction matrix, Xij.
Xij has dimensions [9724x9724] where the value in cell [0,1] is the number of users who liked both items i and j. Ergo, when optimized, Vi*VjT should be really close to the number of users who liked both items i and j.
I’ve modified this code from a tutorial 8. The key difference is that in this resource, the author has a user-to-item matrix, not an item-to-item matrix.
I’m stuck trying to index vectors i and j in tensor self.vectors. It appears the datatype did not match what was expected, despite making i and j into LongTensors.
Any feedback would be appreciated!
import torch
import torch.nn as nn
from torch.autograd import Variable

class MatrixFactorization(torch.nn.Module):
    def __init__(self, n_items=len(movie_ids), n_factors=300):
        super().__init__()
        self.vectors = nn.Embedding(n_items, n_factors, sparse=True)

    def forward(self, i, j):
        return (self.vectors([i]) * torch.transpose(self.vectors([j]))).sum(1)

    def predict(self, i, j):
        return self.forward(i, j)

model = MatrixFactorization(n_items=len(movie_ids), n_factors=300)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for i in range(len(movie_ids)):
    for j in range(len(movie_ids)):
        # get user, item and rating data
        rating = Variable(torch.FloatTensor([Xij[i, j]]))
        # predict
        i = Variable(torch.LongTensor([int(i)]))
        j = Variable(torch.LongTensor([int(j)]))
        prediction = model(i, j)
        loss = loss_fn(prediction, rating)
        # backpropagate
        loss.backward()
        # update weights
        optimizer.step()
And the error I receive:
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list |
st80454 | Solved by albanD in post #2
st80455 | Hi,
Variables are not needed anymore, and the predict method should not be needed either. An updated version of your code is:
import torch
import torch.nn as nn

class MatrixFactorization(torch.nn.Module):
    def __init__(self, n_items=len(movie_ids), n_factors=300):
        super().__init__()
        self.vectors = nn.Embedding(n_items, n_factors, sparse=True)

    def forward(self, i, j):
        # i and j are LongTensors here of size (batch)
        feat_i = self.vectors(i)
        feat_j = self.vectors(j)
        # feat_i and feat_j are of size (batch, n_factors)
        # Since you only want the interactions element-wise
        # for i and j in the batch dimension, you want diag(feat_i * feat_j.t())
        # This can be efficiently computed using element-wise product and summing
        result = (feat_i * feat_j).sum(-1)
        # result is of size (batch)
        return result

model = MatrixFactorization(n_items=len(movie_ids), n_factors=300)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for i in range(len(movie_ids)):
    for j in range(len(movie_ids)):
        # get user, item and rating data
        rating = torch.FloatTensor([Xij[i, j]])
        # predict
        i = torch.LongTensor([int(i)])
        j = torch.LongTensor([int(j)])
        prediction = model(i, j)
        loss = loss_fn(prediction, rating)
        # Reset the gradients to 0
        optimizer.zero_grad()
        # backpropagate
        loss.backward()
        # update weights
        optimizer.step()
Note that with this code you can use a real batch size and have:
# Generate a batch of 4 pairs i,j
i = torch.LongTensor([1, 3, 4, 5])
j = torch.LongTensor([2, 4, 6, 7])
# Get the ground truth and do forward for all of them at once
ratings = X[i, j]
pred = model(i, j)
# By default, MSELoss will compute the average MSE loss over the batch (you can change that if you need to; check the doc for how to do so)
loss = loss_fn(pred, ratings) |
st80456 | Thank you very much for your detailed response!
Can you elaborate on the last block of code beginning at
# Generate a batch of 4 pairs i,j
Does this code block replace the nested for loop?
And what is the significance of the pairs, just an example? Or will this sample large batches of data?
i = torch.LongTensor([1, 3, 4, 5])
j = torch.LongTensor([2, 4, 6, 7])
Thanks @albanD! |
st80457 | Pytorch has been built to work with batch of data. That allows to perform larger operations and thus better use device such as GPUs.
A batch is basically a bunch of independent data that you process at the same time.
In your case, if len(movie_ids) == 2, you have 4 pairs of indices to evaluate: (0,0), (0,1), (1,0), (1,1).
You can do this using your nested for loops or by doing a single forward with a batch of 4 samples with i = torch.LongTensor([0, 0, 1, 1]) and j=torch.LongTensor([0, 1, 0, 1]). That way, in one call to your model, you get the result for all four samples, instead of doing 4 calls to your model with the for loops.
Loss functions such as MSELoss are built to support such batches, and you can see in the doc that they have a special reduction argument to choose how the individual values for each element in your batch should be aggregated into the final loss value. |
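For example, the default averages over the batch, but you can pick something else:
loss_fn = nn.MSELoss()                        # reduction='mean' (default): average over the batch
loss_fn_sum = nn.MSELoss(reduction='sum')     # sum over the batch
loss_fn_none = nn.MSELoss(reduction='none')   # one loss value per sample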
st80458 | Awesome! I tried the batch method you’ve described, and the following warning was returned:
/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py:431: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([5931640])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
However, the code hasn’t thrown an error (yet) so maybe all is well. |
st80459 | Check the size of the elements you give to your loss.
One has the full batch size, but the other is 1. They should both be of the batch size, so something is wrong somewhere. |
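With the batched version, both should come out with the batch size (a quick sketch, assuming Xij is a torch tensor):
i = torch.LongTensor([1, 3, 4, 5])
j = torch.LongTensor([2, 4, 6, 7])
ratings = Xij[i, j].float()   # size (4,), same as the prediction
pred = model(i, j)            # size (4,)
loss = loss_fn(pred, ratings)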
st80460 | Hi.
I am working on a project, and the update formula is
parameter = parameter + learning_rate * gradient
rather than the usual rule, which subtracts the gradient.
I tried to make the learning rate negative to implement this, but that is not allowed. So my question is: how do I add the gradient when updating?
Hope for your reply. Thanks. |
st80461 | Solved by albanD in post #2
st80462 | Hi,
I guess the simplest thing is to get -gradient. You can do this by flipping the sign of your loss: loss = - loss. |
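In a standard training step that just means negating the loss before calling backward (a minimal sketch, assuming model, loss_fn and optimizer are already set up):
output = model(data)
loss = loss_fn(output, target)
loss = -loss          # flip the sign so optimizer.step() ascends the original objective
optimizer.zero_grad()
loss.backward()
optimizer.step()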
st80463 | Besides flipping the loss, you can modify the gradient update rule yourself by using a for loop over the network parameters and applying the update with the sign you want. This will give you more flexibility.
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate * -1.0)
(Source: code snippet taken from the 60 Minute Blitz tutorial) |
st80464 | I took the example from https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html 55 and copied it. When I run it, tensorboard does not show any graph (see attached image). I also ran it with my own code and got exactly the same result.
To show it I use:
tensorboard --logdir=data/ --host localhost --port 8088
Because simply using tensorboard --logdir does not work.
Using writer.add_image() works though. So, could you shed some light on how to get the graph to show up?
[attached screenshot: tensorboard.PNG, 1917×1014] |
st80465 | We have similar issues here 134 and here 100
CC @lanpa, @orionr, who might know what's going on. |
st80466 | Hi, which version of pytorch are you using? Have you run step 3?
writer.add_graph(net, images)
writer.close()
Any errors or warnings?
Based on your description, you might have changed the logging dir. Also make sure that you are passing the correct parameters. |
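For reference, a minimal version of the graph logging looks something like this (assuming net is your model, images is a sample input batch, and data/ is the directory you point tensorboard at):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('data/')   # must match the --logdir passed to tensorboard
writer.add_graph(net, images)     # images is a sample input batch
writer.close()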
st80467 | I am using PyTorch 1.2. Yes, that was a mistake in my description, but the logging dir is the same. In fact, you can tell it's the same because if it weren't, I wouldn't get anything at all, yet I still get the images from writer.add_image(). |