st83168 | Thanks, and sorry for my ignorance. How can I know when the torch 0.4 binaries are built and published to the official channel? I will keep an eye on the link you gave. |
st83169 | Just to remind you that torchvision is uploaded and you can use the commands on pytorch.org 2 to install. |
st83170 | Hello,
I was wondering whether something like numpy.memmap exists in PyTorch? Or should I just save as a numpy array and use numpy.memmap instead, because conversion to torch tensors is cheap?
Numpy: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.memmap.html 66 |
st83171 | You can use torch.Storage.from_file (docs 400):
e.g.
x = torch.IntTensor(torch.IntStorage.from_file('file.bin', size=100)) |
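As a side note, the same API can also write through the mapping: from_file takes a shared flag, and with shared=True changes to the storage are written back to the file. A minimal sketch (the file name is a placeholder and the file must be large enough for the requested size):
st = torch.IntStorage.from_file('file.bin', shared=True, size=100)
x = torch.IntTensor(st)
x[0] = 42  # with shared=True this write ends up in file.bin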
st83172 | Is there a way to store the data to memory mapped files as well. Does Pytorch have a method for this? |
st83173 | I am relatively new to Pytorch and have been training an LSTM model. Any feedback on code in general would be appreciated.
When I train the model I receive the following warning
UserWarning: Using a target size (torch.Size([4050, 1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
Can anyone help me with this problem?
My goal is to train a model on a time series containing multiple features. Please see the code below:
LSTM:
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from Tools.data_loader import CreateDataset
class LSTMModel(nn.Module):
def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
super(LSTMModel, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.num_layers = num_layers
self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers)
self.forecast = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
batch_size = 1
h0 = torch.zeros(self.num_layers, batch_size, self.hidden_dim).requires_grad_()
c0 = torch.zeros(self.num_layers, batch_size, self.hidden_dim).requires_grad_()
out, (hn,cn) = self.lstm(x, (h0.detach(), c0.detach()))
y_pred = self.forecast(out[-1].view(batch_size,-1))
return y_pred.view(-1)
dataset = CreateDataset('Data/Bloomberg_PV_weather.csv', 'Datetime', 0.8, 'Power') #PLACEHOLDER
input_dim = dataset.num_features
hidden_dim = 50
num_layers = 1
output_dim = 1
num_epochs = 50
X_train = dataset.X_train
y_train = dataset.y_train
X_test = dataset.X_test
y_test = dataset.y_test
model = LSTMModel(input_dim, hidden_dim, num_layers, output_dim)
criterion = nn.MSELoss()
optimizer=torch.optim.Adam(model.parameters())
hist = np.zeros(num_epochs)
for epoch in range(num_epochs):
optimizer.zero_grad()
output=model(X_train)
loss = criterion(output,y_train)
if epoch % 100 == 0:
print('Epoch ', epoch, 'Loss: ',loss.item())
hist[epoch] = loss.item()
loss.backward()
optimizer.step()
Data Loader:
and here is my CreateDataset class:
'''
Loads data and creates train and test sets
'''
import torch
import math
import pandas as pd
class CreateDataset():
def __init__(self, file, datetime_col, train_proportion, target):
self.file = file
self.datetime_col = datetime_col
self.train_proportion = train_proportion
self.target = target
self.data = self.load_data()
self.train_set, self.test_set = self.split_data()
self.X_train, self.y_train,self.X_test, self.y_test, self.num_features = self.reshape_data()
def load_data(self):
'''
Reads in data
'''
data = pd.read_csv(self.file, header=0)
data.drop(columns=self.datetime_col, inplace=True)
return data
def split_data(self):
'''
Creates test and train sets
'''
train_length = math.ceil(len(self.data)*self.train_proportion)
train_set = self.data[0:train_length]
test_set = self.data[train_length:]
return train_set, test_set
def reshape_data(self):
'''
Splits datasets into X and y sets and reshapes into 3D tensor
'''
num_features = (len(self.test_set.columns)-1)
y_train = torch.tensor(self.train_set[self.target].values).float()
X_train = torch.tensor(self.train_set.drop(columns=self.target).values).float()
y_test = torch.tensor(self.test_set[self.target].values).float()
X_test = torch.tensor(self.test_set.drop(columns=self.target).values).float()
X_train = X_train.view(-1,1,num_features)
y_train = y_train.view(-1,1,1)
X_test = X_test.view(-1,1,num_features)
y_test = y_test.view(-1,1,1)
return X_train, y_train, X_test, y_test, num_features |
st83174 | Hi everyone.
I come from a C++/C# background and this has had me thinking for some time now! Here is my question:
Since PyTorch has a dynamic nature, there will be many situations in which you deal with different classes on the fly and need to check which class you are dealing with.
Currently, one way to access the class name is through __class__ and __class__.__name__ (class attributes).
My question is: knowing that names with __ are supposed to be private and not accessed by the developer, why doesn't PyTorch provide a simple property to access them?
Simple things such as :
for m in model.modules():
    if (m.__class__.__name__ == torch.nn.Linear.__class__.__name__):
        ...
become needlessly long and ugly, let alone more complex checks!
I could ask this about the Python language, but thought that maybe, due to compatibility reasons, they cannot do that there; however, nothing stops a library from providing its own set of utility methods and properties.
So what is stopping you from providing such utilities that would make life easier and code much more readable?
Thanks in advance |
st83175 | Solved by ptrblck in post #5. |
st83176 | Hi,
I think the point is that comparing names should be avoided.
For what you want, isinstance(m, torch.nn.Linear) is the Pythonic way to do it, I think. |
st83177 | Hi, Thank you very much.
That snippet was given in the Udacity PyTorch course, and I just used it as an example.
But how do you get the type of any class?
It seems type() only works on tensors, and I couldn't find any way to query and get a class type! |
st83178 | I was actually trying to finetune a resnet18. I wanted to freeze the gradients of all layers except the last linear layer! isinstance simply doesn't work!
I wrote:
resnet18 = models.resnet18(pretrained=True)
resnet18.fc = nn.Linear(512, 10)
for module in resnet18.modules():
    if not isinstance(module, nn.Linear):
        for param in module.parameters():
            param.requires_grad = False
What seems to work is to compare the names, i.e.:
for module in resnet18.modules():
    if module._get_name() != 'Linear':
        print('layer: ', module._get_name())
        for param in module.parameters():
            param.requires_grad_(False)
    elif module._get_name() == 'Linear':
        for param in module.parameters():
            param.requires_grad_(True)
something like nn.Linear.__class__.__name__ doesn't have the proper name! It contains type as the name, which is weird!
That's why I'm asking.
On a side note:
Something weird happens: if I omit the second elif block, requires_grad for the resnet18.fc parameters will be False, even though they are initially True and the first if clause clearly checks for all layers except 'Linear'.
I'd like to know why this is happening as well, and whether this is a bug!? |
st83179 | The isinstance comparison is working:
resnet18 = models.resnet18(pretrained=True)
resnet18.fc = nn.Linear(512, 10)
for module in resnet18.modules():
    if isinstance(module, nn.Linear):
        print(module)
> Linear(in_features=512, out_features=10, bias=True)
as it returns the only nn.Linear layer in the model.
However, note that modules will be called recursively on your main model.
E.g. the first result will be the parent ResNet class.
If you call module.parameters() you will get all parameters of the model, including the parameters of the linear layer. |
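A minimal sketch of the finetuning pattern this implies (freeze everything first, then replace the head; the newly created layer's parameters default to requires_grad=True, and models refers to torchvision.models as in the snippets above):
resnet18 = models.resnet18(pretrained=True)
for param in resnet18.parameters():
    param.requires_grad = False  # freeze the whole backbone
resnet18.fc = nn.Linear(512, 10)  # the fresh head is trainable again
optimizer = torch.optim.Adam(resnet18.fc.parameters())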
st83180 | also __name__ and __class__ are special attributes defined by Python (not specific to PyTorch) and they aren’t really private and you can expect they to always be there: https://docs.python.org/3/reference/datamodel.html |
st83181 | Shisho_Sama:
nn.Linear.__class__.__name__
Thanks, but do you know why nn.Linear.__class__.__name__ doesn't have a proper name in it (it returns type)? Should it not return 'Linear'? |
st83182 | Because obj.__class__ accesses the class (type) of the object. Classes are also objects in Python. Most classes are just instances of the type metaclass. Hence, if you do A_Class.__class__ or type(A_Class) you (often) get the type metaclass.
On the other hand, an instance of nn.Linear, e.g., nn.Linear(3, 4) has class nn.Linear so you get nn.Linear(3, 4).__class__ == nn.Linear and type(nn.Linear(3, 4)) == nn.Linear.
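A small illustration of that distinction:
import torch.nn as nn
print(nn.Linear.__class__.__name__)        # 'type'   (the metaclass of the class object)
print(nn.Linear.__name__)                  # 'Linear'
print(nn.Linear(3, 4).__class__.__name__)  # 'Linear' (the class of the instance)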
You probably now realize that if you want to get the name of nn.Linear, you should instead use nn.Linear.__name__ rather than nn.Linear.__class__.__name__ which returns the name of the metaclass type. |
st83183 | Hi,
I am practicing PyTorch and implemented a Keras-based seq2seq example:
https://keras.io/examples/lstm_seq2seq/
Below is my implementation:
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random
import numpy as np
import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
data_path = './eng_fra.txt'
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
#print('line:',line)
input_text, target_text = line.split('\t')
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = '\t' + target_text + '\n' # why?
# print('input_text and target_text:',input_text, target_text)
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
print('input_characters',input_characters)
num_decoder_tokens = len(target_characters)
print('target_characters',target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('max_encoder_seq_length and max_decoder_seq_length',max_encoder_seq_length,max_decoder_seq_length)
input_token_index = dict(
[(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
[(char, i) for i, char in enumerate(target_characters)])
# define the shapes
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
# one hot encoding for each word in each sentence
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
encoder_input_data=torch.Tensor(encoder_input_data).to(device)
decoder_input_data=torch.Tensor(decoder_input_data).to(device)
decoder_target_data=torch.Tensor(decoder_target_data).to(device)
class encoder(nn.Module):
def __init__(self):
super(encoder,self).__init__()
self.LSTM=nn.LSTM(input_size=num_encoder_tokens,hidden_size=256,batch_first=True)
def forward(self,x):
out,(h,c)=self.LSTM(x)
return h,c
class decoder(nn.Module):
def __init__(self):
super(decoder,self).__init__()
self.LSTM=nn.LSTM(input_size=num_decoder_tokens,hidden_size=256,batch_first=True)
self.FC=nn.Linear(256,num_decoder_tokens)
def forward(self,x, hidden):
out,(h,c)=self.LSTM(x,hidden)
out=self.FC(out)
return out,(h,c)
class seq2seq(nn.Module):
def __init__(self,encoder,decoder):
super(seq2seq,self).__init__()
self.encoder=encoder
self.decoder=decoder
def forward(self,encode_input_data,decode_input_data):
hidden, cell = self.encoder(encode_input_data)
output, (hidden, cell) = self.decoder(decode_input_data, (hidden, cell))
return output
encoder=encoder().to(device)
# encoder_loss = nn.CrossEntropyLoss() # CrossEntropyLoss compute softmax internally in pytorch
# encoder_optimizer = torch.optim.Adam(encoder.parameters(), lr=0.001)
decoder=decoder().to(device)
# decoder_loss = nn.CrossEntropyLoss() # CrossEntropyLoss compute softmax internally in pytorch
# decoder_optimizer = torch.optim.Adam(decoder.parameters(), lr=0.001)
model=seq2seq(encoder,decoder).to(device)
optimizer = optim.RMSprop(model.parameters(),lr=0.01)
loss_fun=nn.CrossEntropyLoss()
# model.train()
num_epochs=50
batches=np.array_split(range(decoder_target_data.shape[0]),100)
total_step=len(batches)
for epoch in range(num_epochs):
for i,batch_ids in enumerate(batches):
encoder_input=encoder_input_data[batch_ids]
decoder_input=decoder_input_data[batch_ids]
decoder_target=decoder_target_data[batch_ids]
output = model(encoder_input, decoder_input)
loss=loss_fun(output.view(-1,93).to(device),decoder_target.view(-1,93).max(dim=1)[1].to(device))
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 20 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
h,c=model.encoder(input_seq)
# Generate empty target sequence of length 1.
# Populate the first character of target sequence with the start character.
target_seq = torch.zeros((1, 1, num_decoder_tokens)).to(device)
target_seq[0, 0, target_token_index['\t']] = 1.
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, (h_t, c_t) = model.decoder(target_seq,(h,c))
# Sample a token
sampled_token_index = output_tokens.view(-1,93).squeeze(0).max(dim=0)[1].item()
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = torch.zeros((1, 1, num_decoder_tokens)).to(device)
target_seq[0, 0, sampled_token_index] = 1.
# Update states
h,c=h_t,c_t
return decoded_sentence
for seq_index in range(100):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index: seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Input sentence:', input_texts[seq_index])
print('Decoded sentence:', decoded_sentence)
Basically I follow exactly the same data processing steps and encoder/decoder structure as the Keras example, but the result is worse than Keras; did I do anything wrong? |
st83184 | I would recommend to start with a simple code base and compare the Keras results to your PyTorch model.
E.g. isolate the model and use some dummy data to compare the model outputs.
Your current code is pretty long, which makes debugging hard. |
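A minimal sketch of such an isolated check (the shapes are illustrative; model, device and the token counts refer to the code above, and the same dummy arrays would be fed to the Keras model for comparison):
x_enc = torch.randn(2, 10, num_encoder_tokens).to(device)  # dummy encoder batch
x_dec = torch.randn(2, 10, num_decoder_tokens).to(device)  # dummy decoder batch
with torch.no_grad():
    out = model(x_enc, x_dec)
print(out.shape, out.mean().item())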
st83185 | Hi,
I am using the latest stable version of Pytorch (1.2.0) to train and test a model. For testing, I map the trained model to the CPU and run there. However, I get the following error:
RuntimeError: code is too big
Could anybody let me know what is the main reason for such an error or at least what is the meaning of it? Unfortunately, I could not diagnose the problem based on this limited error message.
This issue only happens when I run on the CPU; with the GPU everything is fine. I was thinking it is a RAM issue; however, this error occurs on a workstation with 64 GB of RAM and disappears when I run on a normal computer with 8 GB of RAM.
Many thanks in advance |
st83186 | It is a normal nn model like the following one:
class G(nn.Module):
"""G"""
def __init__(self):
super().__init__()
torch.manual_seed(5)
self.pad1= nn.ReflectionPad1d(15)
self.enc1 = SpectralNorm(nn.Conv1d(in_channels=1, out_channels=16, kernel_size=32, stride=2, padding=0, bias= False)) # out : [B x 16 x 8192]
self.enc1_nl = nn.PReLU() # non-linear transformation after encoder layer 1
self.pad2= nn.ReflectionPad1d(15)
self.enc2 = SpectralNorm(nn.Conv1d(16, 16, 32, 2, 0, bias= True)) # [B x 32 x 4096]
self.enc2_nl = nn.PReLU()
self.pad3= nn.ReflectionPad1d(15)
self.enc3 = SpectralNorm(nn.Conv1d(16, 32, 32, 2, 0, bias= True)) # [B x 32 x 2048]
self.enc3_nl = nn.PReLU()
self.pad4= nn.ReflectionPad1d(15)
self.enc4 = SpectralNorm(nn.Conv1d(32, 32, 32, 2, 0, bias= True)) # [B x 64 x 1024]
self.enc4_nl = nn.PReLU()
self.pad5= nn.ReflectionPad1d(15)
self.enc5 = SpectralNorm(nn.Conv1d(32, 64, 32, 2, 0, bias= True)) # [B x 64 x 1024]
self.enc5_nl = nn.PReLU()
self.pad6= nn.ReflectionPad1d(15)
self.enc6 = SpectralNorm(nn.Conv1d(64, 64, 32, 2, 0, bias= True)) # [B x 64 x 1024]
self.enc6_nl = nn.PReLU()
self.pad7= nn.ReflectionPad1d(15)
self.enc7 = SpectralNorm(nn.Conv1d(64, 128, 32, 2, 0, bias= True)) # [B x 64 x 1024]
self.enc7_nl = nn.PReLU()
self.conv1x1_1 = SpectralNorm(nn.Conv1d(in_channels=128, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec7_nl = nn.PReLU()
self.conv1x1_2 = SpectralNorm(nn.Conv1d(in_channels=65, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec6_nl = nn.PReLU()
self.conv1x1_3 = SpectralNorm(nn.Conv1d(in_channels=65, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec5_nl = nn.PReLU()
self.conv1x1_4 = SpectralNorm(nn.Conv1d(in_channels=33, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec4_nl = nn.PReLU()
self.conv1x1_5 = SpectralNorm(nn.Conv1d(in_channels=33, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec3_nl = nn.PReLU()
self.conv1x1_6 = SpectralNorm(nn.Conv1d(in_channels=17, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec2_nl = nn.PReLU()
self.conv1x1_7 = SpectralNorm(nn.Conv1d(in_channels=17, out_channels=1, kernel_size=1, stride=1, padding=0, bias= False))
self.dec1_nl = nn.PReLU()
def forward(self, x):
"""
Forward pass of generator.
Args:
x: input batch (signal)
z: latent vector
"""
### encoding step
enc1= self.enc1(self.pad1(x))
enc1_out= self.enc1_nl(enc1)
enc2= self.enc2(self.pad2(enc1_out))
enc2_out= self.enc2_nl(enc2)
enc3= self.enc3(self.pad3(enc2_out))
enc3_out= self.enc3_nl(enc3)
enc4= self.enc4(self.pad4(enc3_out))
enc4_out= self.enc4_nl(enc4)
enc5= self.enc5(self.pad5(enc4_out))
enc5_out= self.enc5_nl(enc5)
enc6= self.enc6(self.pad6(enc5_out))
enc6_out= self.enc6_nl(enc6)
enc7= self.enc7(self.pad7(enc6_out))
enc7_out= self.enc7_nl(enc7)
code= enc7_out
dec7= F.interpolate(self.conv1x1_1(code), scale_factor=2, mode="linear")
dec7_out= self.dec7_nl(torch.cat((dec7,enc6),dim=1))
dec6= F.interpolate(self.conv1x1_2(dec7_out), scale_factor=2, mode="linear")
dec6_out= self.dec6_nl(torch.cat((dec6,enc5),dim=1))
dec5= F.interpolate(self.conv1x1_3(dec6_out), scale_factor=2, mode="linear")
dec5_out= self.dec5_nl(torch.cat((dec5,enc4),dim=1))
dec4= F.interpolate(self.conv1x1_4(dec5_out), scale_factor=2, mode="linear")
dec4_out= self.dec4_nl(torch.cat((dec4,enc3),dim=1))
dec3= F.interpolate(self.conv1x1_5(dec4_out), scale_factor=2, mode="linear")
dec3_out= self.dec3_nl(torch.cat((dec3,enc2),dim=1))
dec2= F.interpolate(self.conv1x1_6(dec3_out), scale_factor=2, mode="linear")
dec2_out= self.dec2_nl(torch.cat((dec2,enc1),dim=1))
dec1= F.interpolate(self.conv1x1_7(dec2_out), scale_factor=2, mode="linear")
out = self.dec1_nl(dec1)
return out
When I try to run it on the workstation’s CPU, I got that error:
g=G()
z= torch.randn((1,1,16000))
o= g(z)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-129-2d7f710cef1a> in <module>
1 g=G()
2 z= torch.randn((1,1,16000))
----> 3 o= g(z)
4 out.shape
/home/amm-er/ahd/anaconda3/envs/amustenv/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
<ipython-input-128-de262c205903> in forward(self, x)
129 enc1= self.enc1(self.pad1(x))
130 enc1_out= self.enc1_nl(enc1)
--> 131 enc2= self.enc2(self.pad2(enc1_out))
132 enc2_out= self.enc2_nl(enc2)
133 enc3= self.enc3(self.pad3(enc2_out))
/home/amm-er/ahd/anaconda3/envs/amustenv/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
<ipython-input-65-68e726f6c853> in forward(self, *args)
62 def forward(self, *args):
63 self._update_u_v()
---> 64 return self.module.forward(*args)
/home/amm-er/ahd/anaconda3/envs/amustenv/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
198 _single(0), self.dilation, self.groups)
199 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 200 self.padding, self.dilation, self.groups)
201
202
RuntimeError: code is too big
On the GPU, everything works fine! |
st83187 | It’s hard to say without looking at the rest of the code - but maybe try reduce the batch size? Maybe even try reduce the size of your dataset?
Try monitoring it on the GPusing using the command nvidia-smi or monitor your CPU/Memory usage when you start it? |
st83188 | Actually, the batch size I use for running on CPU is equal to 1. “nvidia-smi” provides information about memory utilization by the GPU, which is not relevant to this problem.
Do you have at least any idea what this error “code is too big” should refer to? |
st83189 | Assume you have an nn module which was trained and saved to a .pkl file.
For inference, you need to load this model and decide whether you would like to run it on GPU or CPU. This is done by setting the argument “map_location” of “torch.load” to “cpu” for running on the CPU.
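i.e. something like this (a minimal sketch; the file name is a placeholder):
model = torch.load('model.pkl', map_location='cpu')
model.eval()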
I think if you could give me any info about this error message then I would understand the issue better. |
st83190 | To be honest it's difficult to reproduce, so I couldn't give a definitive answer. My guess is that you're training on the GPU and then serialising the model in an unexpected way, such that when you try to reload the model on a CPU backend it doesn't work.
To check this, how are you saving the model? |
st83191 | It is the normal save/load of modules. I have already posted on github and it seems to be an internal issue 95. |
st83192 | I have the following packages installed on my conda environment:
[screenshot of installed packages]
my nvidia drivers are:
[screenshot of nvidia-smi driver info]
But pytorch returns false when:
[screenshot: torch.cuda.is_available() returns False]
Any ideas what’s going on here? |
st83193 | Never mind folks, I found that only after restarting did torch.cuda.is_available() return True. One thing that seems strange to me, though: when updating the conda environment packages, pytorch & torchvision update to the latest versions 1.2 & 0.4, but cudatoolkit downgrades from 10.1 to 10.0. Why is that; can't we use cudatoolkit 10.1 instead of 10.0? Is there some conflict which forces conda to downgrade to cudatoolkit 10.0? |
st83194 | The binaries are shipped with CUDA10.0. Did you build from source with CUDA10.1 or how did you install it? |
st83195 | Basically, if you install in a conda env with conda install pytorch torchvision cudatoolkit=10.1 -c pytorch, it installs pytorch 1.0 & torchvision 0.2 instead of 1.2 & 0.4, but it seems to work fine with cuda 10.1. Now if you do conda update --all, it upgrades torch & torchvision to 1.2 & 0.4 respectively, but it downgrades cuda to 10.0. The only reasoning I can find behind this is that the pytorch 1.0 binary is built with cuda 10.1? If so, then why is the 1.2 binary built with cuda 10.0 instead of 10.1? |
st83196 | Hello !
I hope I am not asking something dumb, but I could not find a clear answer despite a bit of googling so I’ll try my luck here :
When reading the docs about learning rate scheduling in PyTorch, I see that the scheduler has to be provided with a validation error value. Unfortunately it is not described anywhere what this corresponds to nor how to compute it, so I don't really know how to call scheduler.step() properly …
Can someone help me ?
Thanks in advance ! |
st83197 | Some schedulers, e.g. ReduceLROnPlateau 71 expect to track a metric in order to lower the learning rate.
Usually you would split your dataset into training, validation (, and test) and use the validation loss to trigger the learning rate reduction.
The idea behind it is that once your validation loss doesn’t decrease anymore (or not significantly), lowering the learning rate might help decreasing it further.
However, you could use whatever metric you like to use.
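A minimal sketch of that workflow (train_one_epoch and evaluate stand in for your own routines):
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=5)
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)
    val_loss = evaluate(model, val_loader)  # the metric you want to track
    scheduler.step(val_loss)                # pass it to the scheduler here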
PS: other learning rate schedulers do not expect any input and use some other schedule, e.g. epochs as milestones. |
st83198 | Hello, I have a model called M. When I call it, call backward() to calculate the gradients, and then zero the gradients, the model has no gradients. But once I call optimizer.step() on zero gradients, the model still updates its parameters based on something I don't understand, so in the next iteration the model has new params.
output = M(input,GT)
error = L1(output,GT)
error.backward()
print(M.grad) —> non-zero.
M.zero_grad()
print(M.grad) —> zero.
optimizer.step() -> the model is updated!!!
I don’t want to use with torch.no_grad() because I want to check each iteration. |
st83199 | Hi @falmasri
I think you’re on the right track with this, if you’re doing a typical training cycle - you don’t need to zero_grad() on your model, instead you zero out your optimizer.
Try this and let me know how you go:
# I personally like to .zero_grad() as the first thing.
optimizer.zero_grad()
output = M(input) # Run input through. No need for the target here?
error = L1(output, GT)
# Remove the .zero_grad, backpropagate and step the optimizer.
error.backward()
optimizer.step()
This tutorial 28 has a lot more detail on this as well. |
st83200 | It didn't work; the model is still updating its params. To make the model clear: M includes two sequential blocks, instantiated separately. If the model is called as M(input, 1) the first is computed, while the second is computed with M(input, 2).
I'm doing this procedure to make sure the first and second calls are not interfering, but apparently they are. |
st83201 | It depends on your update rule (optimizer), actually. Many optimizers do not depend only on the gradient, e.g. Nesterov-SGD, Adam, RMSprop:
# weight = weight - learning_rate * gradient, if gradients are zero, the weight will not update
optimizer = optim.SGD(model.parameters(), lr=0.01)
# even gradients are zero, the weight will decay
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=0.1)
You can run below code, see the differences
model = nn.Sequential(
    nn.Linear(6, 2, bias=False),
    nn.Sigmoid(),
)
input = torch.randn(6)
target = torch.randn(2)
#optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=0.1)
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
output = model(input)
loss = criterion(output, target)
loss.backward()
print(list(model.parameters())[0].grad)
optimizer.zero_grad()
print(list(model.parameters())[0].grad)
print("before step: ", list(model.parameters()))
optimizer.step()
print("after step: ", list(model.parameters())) |
st83202 | Then maybe it is better to have two optimizers, each working on part of the model, if there is no way to reset the optimizer's state once backward has been called. |
st83203 | If you are talking about this training, I think you can just change it here:
github.com
pytorch/examples/blob/4581968193699de14b56527296262dd76ab43557/word_language_model/main.py#L104
###############################################################################
# Build the model
###############################################################################
ntokens = len(corpus.dictionary)
if args.model == 'Transformer':
model = model.TransformerModel(ntokens, args.emsize, args.nhead, args.nhid, args.nlayers, args.dropout).to(device)
else:
model = model.RNNModel(args.model, ntokens, args.emsize, args.nhid, args.nlayers, args.dropout, args.tied).to(device)
criterion = nn.CrossEntropyLoss()
###############################################################################
# Training code
###############################################################################
def repackage_hidden(h):
"""Wraps hidden states in new Tensors, to detach them from their history."""
if isinstance(h, torch.Tensor):
return h.detach() |
st83204 | I have a sequence of 50 inputs. Each input is a vector of dimension 6. This is src_vocab. tgt_vocab is a number from 0 to 1 that corresponds to each sequence.
I have a lot of such sequences and each of them corresponds to a number from 0 to 1.
To the network I feed a two-dimensional tensor of shape 50x6, and at the output I get a number from 0 to 1.
I need to calculate the loss as MSELoss.
Comparing it with translation: I have a sentence, and at the output, instead of a translation, I get a number. |
st83205 | Hi,
I am using PyTorch to generate a temporal median across "n" images in an array and this works very well. But now I would like to mask out image regions I don't need to compute the median for and set these regions to zero in the image array, i.e. a mask.
Now I have a stacked array of images that are mostly zero values, but the PyTorch median function takes the same time to compute.
How do I construct a torch.median(image_array) that ignores zeros but retains the image shape in the result? If I do torch.nonzero I get tuples, and if I set as_tupples=False, PyTorch says there is no such thing as "as_tupples".
Any help would be appreciated. Image array is [1920x1080x1] by 21 frames stacked into a buffer, converted to torch tensor, then apply torch.median to the buffer. This works fine and retains shape provided I use np.array(tensor) to convert back to an image.
thanks in advance |
st83206 | What would be the result, if a certain pixel location contains all zeros?
You are most likely looking for as_tuple=False (note the missing s at the end). |
st83207 | thanks for you quick response ptrblck,
The result I am getting is,
TypeError: nonzero() got an unexpected keyword argument ‘as_tuple’
I am using PyTorch on ARM64, version 1.1.0 |
st83208 | The as_tuple argument was introduced in 1.2.0 (docs 27), so you should update to the latest stable version. |
st83209 | Hi ptrblck,
Yes, upgrading to 1.2 fixes the missing option as_tuple, thank you for pointing this out, much appreciated.
I still have a problem with maintaining or reconstructing the result back to the original image shape. It appears that the result becomes a 1D array, whereas the original image was a 2D array.
Can you tell me what is required to reconstruct the image array from the nonzero median result [1D array].
If I do not use the torch.nonzero step, a 2D image format is preserved.
This is my test code (buffer = stacked array of 2D images; each image has the same masked area, which contains zero values):
buffer2 = torch.nonzero(buffer, as_tuple=False)
median = torch.median(buffer2, 0)[0] # this is a temporal median along axis 0
torch.cuda.synchronize()
median2 = median.cpu()
median3 = np.array(median2)
cv2.imshow("median", median3)
Error:
ValueError: Image must be 2D (grayscale, RGB, or RGBA).
Regards rapidproto |
st83210 | Hey,
since I can generate my training data, I basically have access to unlimited datasets and generate the samples using a torch Dataset and Dataloader on the fly.
But as generating samples is (moderately) expensive, I want to reuse the generated data for a limited lifetime, let's say 10 epochs, before I would get overfitting effects.
The thing is, when I use single threaded dataloader (i.e. num_workers=0) everything seems to be fine. However as soon as I use multiple workers, they do not seem to have access to write to the dataset, so checking for existing samples fails all the time and new samples need to be generated.
The reason why I like doing this with torch’s dataset / dataloader is that I do not have to care about parallel programming and all of my CPU cores are nicely utilised to generate the data.
Summarised I basically want to do the following:
Init the Dataset class and give it a lifetime of 10 epochs
for each epoch, for each batch: draw sample from dataset. if sample is None, generate a new one and save it as let’s say self.sample[index] = function_which_generates_samples() and return otherwise use existing sample
after each epoch: dataset.time_til_death -= 1, if dataset.time_til_death == 0: self.samples = [None] * pseudo_length
If it’s helpful you can find a dummy example below (works for num_workers=0 but not for more)
class DummyDataset(Dataset):
def __init__(self, lifetime=5, ds_size=8):
super().__init__()
self.lifetime = lifetime
self.time_til_death = lifetime
self.ds_size = ds_size
self.samples = None
self.gt = None
self._drop_dataset()
def step(self):
self.time_til_death -= 1
if self.time_til_death <= 0:
self._drop_dataset()
self.time_til_death = self.lifetime
def _drop_dataset(self):
self.samples = [None] * self.__len__()
self.gt = [None] * self.__len__()
print("Dropped.")
def _new_sample(self):
return torch.rand(1), torch.tensor([1])
def __len__(self):
return self.ds_size
def __getitem__(self, index):
if self.samples[index] is not None:
print("Old sample.")
sample = self.samples[index]
gt = self.gt[index]
else:
print("New sample.")
sample, gt = self._new_sample()
self.samples[index] = sample
self.gt[index] = gt
return sample, gt
if __name__ == '__main__':
ds = DummyDataset(lifetime=2, ds_size=2)
dl = DataLoader(ds, batch_size=2, shuffle=True, num_workers=4)
for e in range(3):
print(f"Epoch {e}")
for i, (sample, gt) in enumerate(dl):
print(f'Batch: {i} Sample: {sample} GT {gt}')
# pass
ds.step() |
st83211 | Solved by ptrblck in post #3. |
st83212 | Or to shorten the question: Do Dataloaders have ‘write’ access to variables when using multiple workers? |
st83213 | The Dataset is copied for each worker, if I'm not mistaken, so all changes performed on these copies will be lost.
Have a look at this example of shared arrays and let me know if that would help in your use case. |
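A minimal sketch of one way to keep a writable cache visible across workers by placing the tensors in shared memory (this assumes a fork-based DataLoader, e.g. on Linux, and only mirrors the idea of the linked example, not its exact code):
import torch
from torch.utils.data import Dataset

class SharedCacheDataset(Dataset):
    def __init__(self, size, sample_shape):
        # shared storage survives the per-worker copies of the Dataset
        self.samples = torch.zeros(size, *sample_shape).share_memory_()
        self.filled = torch.zeros(size, dtype=torch.bool).share_memory_()
    def __len__(self):
        return self.filled.numel()
    def __getitem__(self, index):
        if not self.filled[index]:
            # the expensive sample generation would go here
            self.samples[index] = torch.rand_like(self.samples[index])
            self.filled[index] = True
        return self.samples[index]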
st83214 | Can I make a convolution over the layer like in the picture?
[image: rnn5.png] |
st83215 | You could use a linear layer or a conv layer with a specific kernel size:
x = torch.randn(1, 16, 7)
lin = nn.Linear(7, 1, bias=False)
output = lin(x)
conv = nn.Conv2d(1, 1, kernel_size=(1, 7), stride=(1, 1), bias=False)
with torch.no_grad():
    conv.weight.copy_(lin.weight.unsqueeze(1).unsqueeze(2))
x_conv = x.unsqueeze(1)
output_conv = conv(x_conv)
print(torch.allclose(output, output_conv.squeeze(1))) |
st83216 | I do not understand why we do it:
with torch.no_grad():
    conv.weight.copy_(lin.weight.unsqueeze(1).unsqueeze(2))
x_conv = x.unsqueeze(1)
output_conv = conv(x_conv)
won't it work this way?:
conv = nn.Conv2d(1, 1, kernel_size=(1, 7), stride=(1, 1), bias=False)
output = conv(x) |
st83217 | It will work. I just used this code snippet to set the weights according to the linear layer’s, so that we can make sure both results are equal.
Of course you don’t have to set the weights manually. |
st83218 | if you do this (print(output_conv(x)):
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 1 1 1, but got 3-dimensional input of size [1, 16, 7] instead |
st83219 | output_conv is the result of the convolution so I assume you are using conv instead.
As shown in my code, you would have to unsqueeze x in dim1. |
st83220 | How can I use a transformer instead of conv?
I have a sequence of 16 inputs. Each input has a length of 7.
To use the transformer I have to specify src and tgt. I can't understand what src and tgt should look like. I can assume that src = 7, but what is tgt? |
st83221 | Okay, what I’m trying to do is use an existing pytorch Model (BERT) in my own custom network.
If you didn’t know, BERT works on input that is of shape (512).
As my input data is much larger than that, I've split each example into chunks of length 512, with at most 4 chunks for each example. So for example, in my custom network, with a batch size of 32, the input shape would be (32, 4, 512).
So I’ve included a pretrained bert model in my network in the first layer, and in the forward function I loop over each of the four 512-length chunks, pass them into bert, get the output back, and stack the output back together. The output of a single bert pass is 768, therefore after the looping I now have a batch shape of (32, 4, 768).
What I’m struggling with is that my model simply isn’t converging, could it be that the backwards pass isn’t propagating to the Bert loop properly? |
st83222 | Could you check, if your BERT model gets valid gradients after the loss.backward() call?
You can print the .grad attributes of some layers or of all parameters:
...
loss.backward()
print(model.bert_model.some_layer.weight.grad)
# or
for name, param in model.named_parameters():
    print(name, param.grad)
If you don’t see any gradients, this would mean that the computation graph was detached somewhere. |
st83223 | I’ve looked at the rad.bert.bert.encoder.layer[10].intermediate.dense.weight.grad values and they do change.
Unsure really why else the model isn’t converging.
If it’s any help here’s the rest of it:
class ClassificationBert(nn.Module):
# AS input all our data is shaped so that it is (documents, segments)
# that is, we have multiple segments per document.
def __init__(self, bert, labels):
super(ClassificationBert, self).__init__()
self.num_labels = labels
self.bert = bert
self.linear = nn.Linear(3072, self.num_labels)
self.sigmoid = nn.Sigmoid()
def forward(self, text, labels):
# We loop over all the sequences to get the bert representaions
pooled_layer_output = []
for i in range(len(text)):
bert_outputs = []
for j in range(len(text[i])):
bert_out = self.bert(text[i][j].unsqueeze(0))
bert_outputs.append(bert_out)
bs = torch.stack(bert_outputs).view(-1)
pooled_layer_output.append(bs)
# Flatten the input so that we have a single dimension for all the bert pooled layer.
pooled_layer_output = torch.stack(pooled_layer_output)
logits = self.linear(pooled_layer_output) #We only use the output of the last hidden layer.
logits = self.sigmoid(logits)
outputs = (logits,) # add hidden states and attention if they are here
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs
So you can see that I pass data into bert in a loop. |
st83224 | The loop shouldn’t be the problem here.
Could you try to overfit your model on a small subset of your data, e.g. just 10 samples?
If your model cannot overfit these samples nearly perfectly, some other bugs might be in the code. |
st83225 | Yeah, it's unable to fit the data. Just an idea, but as I call BERT in a loop during the forward pass 4 times, could the gradient being passed to BERT be calculated four times during the backward pass? So that for each backward pass it's amplifying the gradient 4x? |
st83226 | That could be one reason.
Also, I’m a bit sceptical about these lines of code:
bs = torch.stack(bert_outputs).view(-1)
...
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
Could you print the shape of bs, pooled_layer_output and logits?
Also, what is the last line of code doing? |
st83227 | bs.shape: torch.Size([3072])
pooled_layer_shape: torch.Size([2, 3072])
logits shape: torch.Size([2, 2])
The last line of code calculates the loss, and then returns it as a tuple (loss, logits) |
st83228 | Thanks for the info.
Your batch size should be 32 as described in the first post.
Reshaping the activations to [2, 3072] seems to be wrong or am I missing something? |
st83229 | Apologies that was just an example. I do pass in batches of size two as that’s pretty much the limit of the computing power available and instead emulate it with 32 batches. So it IS a batch of 2. |
st83230 | Thanks for the information and sorry for missing the probably obvious bug.
nn.CrossEntropyLoss expects raw logits, so could you remove the logits = self.sigmoid(logits) and try to overfit the small data sample again? |
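For reference, a small example of what nn.CrossEntropyLoss expects: raw logits of shape [N, C] and class indices of shape [N]:
logits = torch.randn(2, 2)                   # no sigmoid/softmax applied
labels = torch.tensor([0, 1])
loss = nn.CrossEntropyLoss()(logits, labels)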
st83231 | Nothing seems to happen, the loss just bounces around the 0.69 mark and I get AUROC of roughly 0.44 on the training data. |
st83232 | Not quite sure all the steps I took, but I removed the sigmoid function and replaced the loss calculation:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
With this:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits, labels.view(-1))
Now it has overfitted extremely well and all the metrics it has outputted are making sense and I get the output below:
{'train': {'loss': 0.04647320210933685, 'accuracy': 0.7676767676767676, 'roc': 1.0}, 'test': {'loss': 0.7614774504303932, 'accuracy': 0.5555555555555556, 'roc': 0.6168539325842697}}
So thank you very much for your advice! |
st83233 | I have two mask tensors which both have size of [b,c,w,h] (c=63*63)
weight_1 and weight_2 are their corresponding labels [b,1,w,h], indicating which positions in the mask I should pick.
Because mask_1 and mask_2 are paired (mask_1[i] is paired with mask_2[i]), I want to select a common minimum number of masks for each pair.
below is my code
def select_mask(mask_1, weight_1, mask_2, weight_2, o_sz=63, g_sz=127):
mask_1 = mask_1.permute(0, 2, 3, 1).contiguous()
mask_2 = mask_2.permute(0, 2, 3, 1).contiguous()
sel_pm_1 = torch.tensor([])
sel_pm_2 = torch.tensor([])
sel_pm_1 = torch.autograd.Variable(sel_pm_1).cuda()
sel_pm_2 = torch.autograd.Variable(sel_pm_2).cuda()
for i in range(weight_1.size(0)):
tmp = weight_1[i][0].view(-1) #[w*h] (w=25,h=25)
tmp2 = weight_2[i][0].view(-1) #[w*h]
tmp = Variable(tmp.data.eq(1).nonzero())
tmp2 = Variable(tmp2.data.eq(1).nonzero())
n_tmp = tmp.nelement()
n_tmp2 = tmp2.nelement()
mi = min(n_tmp, n_tmp2)
tmp = tmp[:mi].squeeze()
tmp2 = tmp2[:mi].squeeze()
T = mask_1[i].view(25*25,63*63)
T2 = mask_2[i].view(25*25,63*63)
tmp_pm_1 = torch.index_select(T,0,tmp)
tmp_pm_2 = torch.index_select(T2,0,tmp2)
sel_pm_1 = torch.cat([tmp_pm_1,sel_pm_1])
sel_pm_2 = torch.cat([tmp_pm_2,sel_pm_2])
if sel_pm_1.nelement()==0 or sel_pm_2.nelement()==0: return None,None,False
#sel_pm_1 = [n,63*63]
sel_pm_1 = sel_pm_1.view(-1, 1, o_sz, o_sz)
sel_pm_1 = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])(sel_pm_1)
sel_pm_1 = sel_pm_1.view(-1, g_sz , g_sz)
sel_pm_2 = sel_pm_2.view(-1, 1, o_sz, o_sz)
sel_pm_2 = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])(sel_pm_2)
sel_pm_2 = sel_pm_2.view(-1, g_sz , g_sz)
return sel_pm_1, sel_pm_2, True
Forwarding seems to work without problems; however, I got the following error message.
Where could it possibly go wrong?
Traceback (most recent call last):
File "/home/teresa/SiamMask/tools/train_siammask.py", line 301, in <module>
main()
File "/home/teresa/SiamMask/tools/train_siammask.py", line 169, in main
train(train_loader, dist_model, optimizer, lr_scheduler, args.start_epoch, cfg)
File "/home/teresa/SiamMask/tools/train_siammask.py", line 255, in train
loss.backward()
File "/home/teresa/anaconda3/envs/siammask/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/teresa/anaconda3/envs/siammask/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: invalid argument 4: expecting vector of indices at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:16 |
st83234 | Anyone know where I can get PyTorch v1.2?
Conda has 1.1 for Windows but 1.2 for Mac and Linux |
st83235 | Solved by FilipAndersson245 in post #9
Pytorch just got updated for windows on conda.
[image] |
st83236 | Hi,
PyTorch 1.2 is available through conda
[screenshot of the pytorch.org install selector showing version 1.2]
pytorch.org: An open source deep learning platform that provides a seamless path from research prototyping to production deployment. |
st83237 | My bad, you are right. It is not available on pip either.
[screenshots of pip and conda showing only 1.1 available]
@fmassa, @ptrblck could you please help us? |
st83238 | Yes, I received an email last night about v1.2, and as you can see in the first image, all numbers are pointing to 1.2. There is also a separate branch on GitHub for v1.2. They did not mention a particular platform for this release, so it should be available for all.
GitHub: pytorch/pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration |
st83239 | It seems they are working on it as you know!
github.com/pytorch/pytorch - Issue: [v1.2.0] Release Tracker (opened by soumith on 2019-07-30)
"The branch v1.2.0 has been cut. If you need any particular patches onto this branch, please comment below and send a PR..." |
st83240 | github.com/pytorch/pytorch - Issue: Unable to install 1.2 (opened by ZhuBaohe on 2019-08-10)
"Whether using pip or conda, I still get 1.1 by the official installation command." |
st83241 | Conda
conda install pytorch cpuonly -c pytorch
conda install pytorch cudatoolkit=9.2 -c pytorch -c numba/label/dev
conda install pytorch cudatoolkit=10.0 -c pytorch
Wheels
CPU
https://download.pytorch.org/whl/cpu/torch-1.2.0%2Bcpu-cp35-cp35m-win_amd64.whl
https://download.pytorch.org/whl/cpu/torch-1.2.0%2Bcpu-cp36-cp36m-win_amd64.whl
https://download.pytorch.org/whl/cpu/torch-1.2.0%2Bcpu-cp37-cp37m-win_amd64.whl
CUDA 9.2
https://download.pytorch.org/whl/cu92/torch-1.2.0%2Bcu92-cp35-cp35m-win_amd64.whl
https://download.pytorch.org/whl/cu92/torch-1.2.0%2Bcu92-cp36-cp36m-win_amd64.whl
https://download.pytorch.org/whl/cu92/torch-1.2.0%2Bcu92-cp37-cp37m-win_amd64.whl
CUDA 10.0
https://download.pytorch.org/whl/cu100/torch-1.2.0-cp35-cp35m-win_amd64.whl
https://download.pytorch.org/whl/cu100/torch-1.2.0-cp36-cp36m-win_amd64.whl
https://download.pytorch.org/whl/cu100/torch-1.2.0-cp37-cp37m-win_amd64.whl |
st83242 | I am installing torchvision, pandas, PIL and other things, here is a glimpse -
!pip install -q pandas
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\10/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision
import torch
!pip install -q Pillow==4.3.0
!pip install -q PIL
!pip install -q image
import PIL
#%reload_ext autoreload <------------— comment out
#%autoreload 0 <------------— comment out
%matplotlib inline
!pip install --no-cache-dir -I pillow
But when I run this, the result is this -
ERROR: torchvision 0.3.0 has requirement torch>=1.1.0, but you’ll have torch 1.0.0 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you’ll have imgaug 0.2.9 which is incompatible.
ERROR: Could not find a version that satisfies the requirement PIL (from versions: none)
ERROR: No matching distribution found for PIL
But I have torch 1.1.0 successfully installed -
!pip install -q torch
import torch
print(torch.__version__)
The result of the print is 1.1.0. Then why is it showing this result? I am doing all this in Google Colab. Thanks in advance. |
st83243 | Solved; the main problem was the torchvision version. I just had to update torchvision to the latest version. |
st83244 | pip install torchvision==version or pip install torchvision --upgrade ? For Linux, you can now get torchvision 0.4.0 directly as well with PyTorch-v1.2.0 |
st83245 | If I do
conda install pytorch=0.4.0 cuda90 -c pytorch
then it actually installs cuda 9.2. If I forcefully install cuda 9.0 via anaconda before I issue the above command, I can't run pytorch. It fails with an error message that, if you google it, says that the pytorch and cuda versions are incompatible.
If I try to do it with pip
pip install torch==0.4.0 -f https://download.pytorch.org/whl/cu90/stable
I get
ERROR: Could not find a version that satisfies the requirement torch==0.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0)
ERROR: No matching distribution found for torch==0.4.0
Installing 0.4.1 works but then the package I want to compile does not compile. Btw if I install 0.4.0 with cuda 9.2 the software I want to compile also does not work, but hints directly that it needs cuda 9.0.
An edit to further my point:
(base) julian@thinstation:~/PycharmProjects$ conda create -n human_dynamics
Collecting package metadata: done
Solving environment: done
## Package Plan ##
environment location: /home/julian/Software/anaconda3/envs/human_dynamics
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate human_dynamics
#
# To deactivate an active environment, use
#
# $ conda deactivate
(base) julian@thinstation:~/PycharmProjects$ conda activate human_dynamics
(human_dynamics) julian@thinstation:~/PycharmProjects$ conda install pytorch=0.4.0 cuda90 -c pytorch
Collecting package metadata: done
Solving environment: done
## Package Plan ##
environment location: /home/julian/Software/anaconda3/envs/human_dynamics
added / updated specs:
- cuda90
- pytorch=0.4.0
The following packages will be downloaded:
package | build
---------------------------|-----------------
certifi-2019.6.16 | py36_0 154 KB
cudatoolkit-9.2 | 0 351.0 MB
------------------------------------------------------------
Total: 351.2 MB
The following NEW packages will be INSTALLED:
blas pkgs/main/linux-64::blas-1.0-mkl
ca-certificates pkgs/main/linux-64::ca-certificates-2019.5.15-0
certifi pkgs/main/linux-64::certifi-2019.6.16-py36_0
cffi pkgs/main/linux-64::cffi-1.12.3-py36h2e261b9_0
cuda90 pytorch/linux-64::cuda90-1.0-h6433d27_0
cudatoolkit pkgs/main/linux-64::cudatoolkit-9.2-0
[...]
Am I doing something wrong? This should install cudatoolkit 9.0 and not 9.2, right? This looks similar to #11138, but that issue got solved, I guess? Also, the workaround there does not work for me, because in my case the pytorch bundled for it actually seems to be compiled against cuda 9.2.
(Another edit to clear the logs from update warnings. I tried it with both anaconda 4.6.11 and 4.6.14, it yields the same result)
Okay, maybe I should have read the issue more carefully: "PyTorch no longer installs or depends on cudatoolkit == 9". Hmm, so would compiling it myself solve it? |
st83246 | If your env already has cuda92, it will only install the corresponding PyTorch build.
See if it is there with:
conda list
then uninstall it
conda uninstall cuda92
Then you should be able to install pytorch 0.4 + cuda90 |
st83247 | I am mostly certain this is something trivial, yet I can’t find much on Google about it.
I’m populating 3D tensors from BGR data, which I need to place in a 4D tensor to transform into a batch for evaluation/testing purposes.
I know how to get my 3D tensor:
img = Image.open(file)
in_t = self.img_tf(img).cuda(non_blocking=True).float()
And I know the size of the batch:
def make_batch(self, faces):
    data = []
    for (uuid, hwid, file) in faces:
        img = Image.open(file)
        in_t = self.img_tf(img).cuda(non_blocking=True).float()
        print(in_t.shape)
        data.append(in_t)
    self.input = torch.tensor(data)
    return self.input
My problem of course is how to stack the in_t in self.input so that it makes sense to use for propagation right afterwards:
def run_batch(self):
    with torch.no_grad():
        output = self.net(self.input)
        pred = output.argmax(dim=1, keepdim=True)
Placing them in a list doesn't work; passing them one at a time by unsqueezing makes no sense since I know I have a batch for evaluation, so AFAIK I need to stack the BGR 3-channel tensors into a 4D tensor, with the 1st dimension being the batch? |
st83248 | Yes, that’s what I’ve been trying to use, but I keep getting the dimensions mixed up.
I’ve found another post on here asking the same question, and his solution was to just setup a 4D tensor and populate it directly:
data = torch.zeros([len(faces), 3, 224, 224])
self.face_map = []
i = 0
for (uuid, hwid, file) in faces:
    img = Image.open(file)
    in_t = self.img_tf(img)
    data[i] = in_t
    i += 1
Like so. I imagine the “proper” approach would be to use stack instead? |
st83249 | This approach is fine. Alternatively, you could append the tensors in a list and use torch.stack on the list to get the same result. |
st83250 | Thanks @ptrblck I’ll stick to using that one then. I’ve tried to use torch.stack but I seem to be getting the dimensions wrong, because every time I get an error/exception.
What would be a minimal example of how to stack 3D (BGR 3, 224, 224) into a 4D batch tensor? |
st83251 | This would be a simple example:
x = []
for _ in range(10):
    x.append(torch.randn(3, 224, 224))
x = torch.stack(x)
print(x.shape)
> torch.Size([10, 3, 224, 224]) |
st83252 | Hey all,
I developed PyTorchWrapper, a helper library for pytorch. It provides a systematic and extensible way to build, train, evaluate, and tune deep learning models. It also provides several ready to use modules and functions for fast model development.
If you’d like to check it out follow the following links.
install with: pip install pytorch-wrapper
GitHub: https://github.com/jkoutsikakis/pytorch-wrapper 5
docs: https://pytorch-wrapper.readthedocs.io/en/latest/ |
st83253 | When PyTorch does batch normalization in SGD, which batch size does it use? The per-GPU batch size, or the total batch size over all GPUs?
For example, suppose I set the per-GPU batch size to 32 and I use 8 GPUs. When PyTorch does batch normalization, which batch size does it use, 32 or 32 x 8?
My concern comes from Facebook's paper "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour". If PyTorch's implementation matches what this paper describes, I can modify the parameters of my model accordingly; if not, I need to know. Thanks! |
st83254 | I am pretty sure that if you, e.g., set the minibatch size to 32 in the dataloader and have, say, 8 GPUs, you get 4 data points per GPU with DataParallel (you can kind of see that based on the number of training instances per minibatch and also the memory use). I am wondering if you are referring to DataParallel in your question?
In practice, I found that yes, using e.g., 4 GPUs can maybe speed up computation by ~2.5 times, but then I also need to train 1.5 more epochs to get comparable results (compared to 1 GPU).
Also, I usually use minibatchsize * number of GPUs as my minibatch param to make best use of your GPUs. I.e., if you have a model with a current batch size that fills up 80% of the GPU memory, and you want to maintain that level of utilization, you’d need to increase the batch size then of course as the batch gets scattered across the GPUs. |
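A minimal sketch of that convention (assuming 4 visible GPUs; note that with nn.DataParallel the batch norm statistics are computed independently on each GPU's shard):
model = nn.DataParallel(model.cuda())             # replicates the model and scatters each batch
loader = DataLoader(dataset, batch_size=256 * 4)  # 256 samples end up on each of the 4 GPUs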
st83255 | Hi,
Thanks for your detailed explanation.
rasbt:
I usually use minibatchsize * number of GPUs as my minibatch param
What does the minibatch param mean? The number of epochs to train the model?
Thank you. |
st83256 | sorry that was maybe a bit confusing. With minibatch param, I meant the minibatch size. E.g., if my minibatch size is 256 for 1GPU, I use 4*256 for four GPUs. |
st83257 | What I'm asking is which batch size is used in batch normalization when there is more than one GPU. Does PyTorch do batch normalization per GPU, or does it do batch normalization across all GPUs?
For example, let's assume the batch size per GPU is 32 and there are 4 GPUs, so the total batch size is 32 * 4 = 128. So, when PyTorch does batch normalization, which 'batch' does it use? 32 or 128? |
st83258 | Yeah, this is also what I found. So you mean we cannot implement linear acceleration using the current PyTorch? |
st83259 | I've noticed that if I take a slice of a big tensor with the standard numpy syntax (a = b[0:2]) and then save it to a file with pickle, the saved file has the same size for the two tensors.
What is actually happening?
I found that I can make the size reduction effective by converting the newly created tensor to a numpy array and then back to a tensor again, but I'm sure there's an easier way. |
st83260 | Solved by ptrblck in post #2. |
st83261 | a and b will share the same memory, but some attributes in a such as the size and possibly storage_offset will be changed. Here is a small example
b = torch.zeros(3, 2)
a = b[1:3]
print(b)
> tensor([[0., 0.],
[0., 0.],
[0., 0.]])
a[:] = 1.
print(b)
> tensor([[0., 0.],
[1., 1.],
[1., 1.]])
print(a.storage_offset())
> 2
Note that I used other indices for slicing to demonstrate the storage_offset.
If you want to create a copy, use a = b[0:2].clone() |
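A small sketch of the practical consequence for saving: the view serializes the whole shared storage, while the clone serializes only its own data:
import io
b = torch.zeros(1000, 1000)
a = b[0:2]
buf_view, buf_clone = io.BytesIO(), io.BytesIO()
torch.save(a, buf_view)            # roughly the size of the full 1000x1000 storage
torch.save(a.clone(), buf_clone)   # roughly the size of 2x1000 elements
print(buf_view.tell(), buf_clone.tell())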
st83262 | Just as the title says, how should I modify my forward pass to use this function?
My current forward pass is:
def forward(self, x):
    out = F.relu(self.pool1(self.conv1(x)))
    out = F.relu(self.pool2(self.conv2(out)))
    out = F.relu(self.fc1(out))
    out = F.relu(self.fc2(out))
    out = self.fc3(out)
    return out
Many thanks! |
st83263 | I like @Priya_Goyal's tutorial on checkpointing.
Note that it hasn't been updated in a while and uses an old PyTorch version, but the general workflow should be the same. |
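For completeness, a minimal sketch of wrapping part of the forward pass above in torch.utils.checkpoint.checkpoint (a sketch, not the tutorial's exact code; the checkpointed segment's input should require grad for gradients to flow through it):
from torch.utils.checkpoint import checkpoint

def forward(self, x):
    def conv_block(t):
        t = F.relu(self.pool1(self.conv1(t)))
        t = F.relu(self.pool2(self.conv2(t)))
        return t
    out = checkpoint(conv_block, x)  # activations of conv_block are recomputed during backward
    out = F.relu(self.fc1(out))
    out = F.relu(self.fc2(out))
    return self.fc3(out)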
st83264 | Hi I want to make the following tensor
tensor([[1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3.],
[4., 4., 4., 4., 4.],
[5., 5., 5., 5., 5.],
[6., 6., 6., 6., 6.]])
to
tensor([[1., 1., 1., 1., 1., 4., 4., 4., 4., 4.],
[2., 2., 2., 2., 2., 5., 5., 5., 5., 5.],
[3., 3., 3., 3., 3., 6., 6., 6., 6., 6.]])
I tried to do it with view, but failed. I want to do it in a way that lets me compute backward() on the tensor.
Can anyone help? |
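One way to do this while keeping autograd intact (a sketch; t stands for the 6x5 tensor above) is to concatenate the two halves along dim 1:
out = torch.cat([t[:3], t[3:]], dim=1)  # shape [3, 10]; gradients flow back to t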
st83265 | One more question.
what about if I want to add the first half and the second half?
That is,
[5,5,5,5,5]
[7,7,7,7,7]
[9,9,9,9,9]
Thanks a lot! |
st83266 | Sorry for coming back with a similar question.
How do I make
tensor([[1., 1., 1., 1., 1., 4., 4., 4., 4., 4.],
[2., 2., 2., 2., 2., 5., 5., 5., 5., 5.],
[3., 3., 3., 3., 3., 6., 6., 6., 6., 6.]])
to
tensor([[1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3.],
[4., 4., 4., 4., 4.],
[5., 5., 5., 5., 5.],
[6., 6., 6., 6., 6.]]) |
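The reverse direction works the same way (again a sketch, with t standing for the 3x10 tensor above):
out = torch.cat([t[:, :5], t[:, 5:]], dim=0)  # shape [6, 5]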
st83267 | I tried to add a layer ‘Transformer’, but got an error:
module ‘torch.nn’ has no attribute ‘Transformer’
How can I fix this? |