st84368
|
I’m tracing back the error and it points at this:
def zlogit(re, im):
    abs_ = t.sqrt(re**2 + im**2)
    ang = t.atan2(im, re)
    mask = ~(ang > 0)
    abs_[mask] = -1 * abs_[mask]
    return abs_
Where is the in-place operation that I did?
|
st84369
|
Solved by tom in post #3
No, sqrt’s backward wants abs_ to calculate d sqrt(x)/dx = 0.5 / sqrt(x).
If you absolutely needed the memory, you could use checkpointing for zlogit, but most of the time that type of memory optimization is not worth the effort.
Best regards
Thomas
|
st84370
|
I solved it! Woah! The error was misleading and was pointing to
abs_ = t.sqrt(re**2 + im**2)
while in reality the error is in:
abs_[mask] = -1 * abs_[mask]
That was the in-place operation. I had to create a clone of that tensor and modify the clone like this:
clone = abs_.clone()
clone[mask] = -1 * abs_[mask]
Is there a more efficient way? Because abs_ is wasted in my solution. Can the clone be a detached tensor so I can do abs_ = detached[mask]? IDK.
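A minimal sketch of an alternative that avoids both the in-place write and the extra clone: torch.where builds the sign-flipped result out of place (same re/im/mask as above, with dummy inputs to make it self-contained):
import torch as t

re = t.randn(4, requires_grad=True)
im = t.randn(4, requires_grad=True)

abs_ = t.sqrt(re**2 + im**2)
ang = t.atan2(im, re)
mask = ~(ang > 0)
# out-of-place: sqrt's saved output stays intact for its backward
abs_ = t.where(mask, -abs_, abs_)
abs_.sum().backward()  # backward now works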
|
st84371
|
No, sqrt’s backward wants abs_ to calculate d sqrt(x)/dx = 0.5 / sqrt(x).
If you absolutely needed the memory, you could use checkpointing for zlogit, but most of the time that type of memory optimization is not worth the effort.
Best regards
Thomas
|
st84372
|
Hello, I have this dataset, and I made a data loader that returns image, mask, and label. But when I use it, I get this issue.
Traceback (most recent call last):
File "test/test_dataset.py", line 16, in test_dataset
for i, j in enumerate(dataload):
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 582, in __next__
return self._process_next_batch(batch)
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate
return [default_collate(samples) for samples in transposed]
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 68, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 52, in default_collate
return default_collate([torch.from_numpy(b) for b in batch])
File "/home/ubuntu/fzh/.conda/envs/rando/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 43, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: Expected object of scalar type Byte but got scalar type Short for sequence element 1 in sequence argument at position #1 'tensors'
and here is what I have done, briefly:
def __getitem__(self, idx):
    im = []
    for i in range(self.num_input):
        direct, _ = self.root_dir[self.num_input * idx + i].split("\n")
        if i < self.num_input - 1:
            image = nib.load(direct).get_data()
            image = np.expand_dims(image, axis=0)
            im.append(image)
            if i == 0:
                direct = os.path.split(direct)[0] + "/mask"
                mask = nib.load(direct + "/mask.nii.gz").get_data()
        else:
            labels = nib.load(direct).get_data()
            labels = np.asarray(labels)
    images = np.concatenate(im, axis=0).astype(float)
    # images shape: 4 x H x W x D
    # labels shape: H x W x D
    # mask shape: H x W x D
    images = np.transpose(images, (0, 3, 1, 2))
    labels = np.transpose(labels, (2, 0, 1))
    mask = np.transpose(mask, (2, 0, 1))
    return images, labels, mask
Thank you for your help
|
st84373
|
Hello, I have this dataset containing the labels of English letters and their points. I want to train yolov3 with that dataset, so I need to use anchor images, but before that I need to convert this file into original images. I found this script to convert, but it didn’t help me much. Does anyone know any other way to train yolov3 with that dataset? Or help me to convert this file into original images.
|
st84374
|
Solved by ptrblck in post #2
There seem to be a few scripts which are loading the data and visualizing a few examples using matplotlib.
E.g. have a look at this script, which loads the data using pandas.
After loading, you can directly index the DataFrame using .iloc and reshape the numpy array to [28, 28] in order to get the…
|
st84375
|
There seem to be a few scripts which load the data and visualize a few examples using matplotlib.
E.g. have a look at this script, which loads the data using pandas.
After loading, you can directly index the DataFrame using .iloc and reshape the numpy array to [28, 28] in order to get the appropriately shaped images.
If you are planning on creating a custom Dataset, you could load the data in the __init__ method and get each sample in __getitem__.
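A minimal sketch of that idea, assuming a CSV file whose first column is the label and whose remaining 784 columns are the flattened 28x28 pixels (this column layout is an assumption about the file):
import pandas as pd
import numpy as np
import torch
from torch.utils.data import Dataset

class LettersDataset(Dataset):
    def __init__(self, csv_path):
        df = pd.read_csv(csv_path)
        self.labels = df.iloc[:, 0].values   # assumed: first column holds the class label
        self.images = df.iloc[:, 1:].values  # assumed: remaining columns hold flattened pixels

    def __getitem__(self, idx):
        img = self.images[idx].reshape(28, 28).astype(np.float32)
        return torch.from_numpy(img), int(self.labels[idx])

    def __len__(self):
        return len(self.labels)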
|
st84376
|
I was going through the masking code in the chatbot tutorial and noticed that it masks with a zero on indices that are 0 but are NOT padding tokens (e.g. the first token). Is that a bug? Is the fix to use the lengths of the sequences to pad?
# Returns padded target sequence tensor, padding mask, and max target length
def outputVar(l, voc):
    '''
    padVar = padded (transposed) list of sentences with the batches
    tensor([[1391,  188,  122,   53, 5091],
            [   4,   53,   12,  154, 7708],
            [   2, 3026, 1048,  747,    4],
            [   0,    4,  115, 5747,    2],
            [   0,    2,   12, 2281,    0],
            [   0,    0, 1048,    4,    0],
            [   0,    0,    4,    2,    0],
            [   0,    0,    2,    0,    0]])
    mask = mask indicating where words occur and what isn't a word (i.e. 0 for padding)
    tensor([[1, 1, 1, 1, 1],
            [1, 1, 1, 1, 1],
            [1, 1, 1, 1, 1],
            [0, 1, 1, 1, 1],
            [0, 1, 1, 1, 0],
            [0, 0, 1, 1, 0],
            [0, 0, 1, 1, 0],
            [0, 0, 1, 0, 0]], dtype=torch.uint8)
    max_target_len = length of longest target sentence
    max_target_len = 8
    '''
    # list of index representations of sentences [[124, 101, 102, 4401, 98, 382, 4, 2], ..., [67, 188, 38, 4, 2]]
    indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
    # get length of the longest (target) sentence
    max_target_len = max([len(indexes) for indexes in indexes_batch])
    # (transposed) list of index representations of sentences with padded zeros at the end [(124, 25, 25, 218, 67), ..., (4, 2, 0, 0, 0), (2, 0, 0, 0, 0)]
    padList = zeroPadding(indexes_batch)  # pads with zeros the sentences that are too short
    # build the mask indicating which positions are word locations and mark with 0 which ones are simply padding zeros
    st()  # debugger breakpoint left in by the author
    mask = binaryMatrix(padList)
    # mask = torch.Tensor(padList) != PAD_token  ## ALSO BUGGY?!
    mask = torch.ByteTensor(mask)
    # tensorfy the (transposed) list of index representations of sentences with padded zeros at the end. The last list is now a tensor/matrix
    padVar = torch.LongTensor(padList)
    return padVar, mask, max_target_len
|
st84377
|
Solved by tom in post #2
That’s not a bug, the token with index 0 is the padding token:
PAD_token = 0 # Used for padding short sentences
Best regards
Thomas
|
st84378
|
That’s not a bug, the token with index 0 is the padding token:
PAD_token = 0 # Used for padding short sentences
Best regards
Thomas
|
st84379
|
I also missed that things get transposed at some point, so that confused me too (because I saw 2 pad tokens in each row at the beginning of the code).
|
st84380
|
hi, assume I have a (m,n) Tensor, say mat.
I would like to have each of its n columns shuffled in a different manner.
A naive implementation would be, mat being the (m, n) Tensor:
res = mat.clone()
for i in range(res.shape[1]):
    ind = torch.randperm(res.shape[0], device=device)
    res[:, i] = res[ind, i]
Can I do the same without the for loop, in a vectorized manner?
|
st84381
|
Hi!
One way would be to use advanced indexing and the stdlib function random.shuffle. I used it on a list rather than a call to torch.arange, as shuffle seems to go against torch's semantics under the hood.
import random
import torch

mat = torch.linspace(1, 16, 16).view(4, 4)
# tensor([[ 1.,  2.,  3.,  4.],
#         [ 5.,  6.,  7.,  8.],
#         [ 9., 10., 11., 12.],
#         [13., 14., 15., 16.]])
col_idxs = list(range(mat.shape[1]))
random.shuffle(col_idxs)
mat = mat[:, torch.tensor(col_idxs)]
# tensor([[ 2.,  1.,  4.,  1.],
#         [ 6.,  5.,  8.,  5.],
#         [10.,  9., 12.,  9.],
#         [14., 13., 16., 13.]])
Hope that helps!
|
st84382
|
Hi Antoine (and tymokvo)!
hi, assume I have a (m,n) Tensor, say mat.
I would like to have each of its n columns shuffled in a different manner.
…
can I do the same without the for loop in a vectorized manner ?
I can’t* think of a way to do this using built-in tensor functions without a loop.
*) Well, actually I can, but with a cost in efficiency.
Try this (for m = 4, n = 3):
import torch
mat = torch.tensor ([[11.0, 12, 13],[21, 22, 23],[31, 32, 33],[41, 42, 43]])
ind = torch.rand (4, 3).argsort (dim = 0)
res = torch.zeros (4, 3).scatter_ (0, ind, mat)
The computational time complexity of your task should be m * n. (You have n columns and the cost of randomly permuting a length-m column is m.) But the cost of sorting a length-m column is m * log(m), so my scheme has cost n * m * log(m).
The point is that I can’t figure out how to get the randomly permuted columns of indices without a loop or using the sort trick.
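A self-contained sketch of the sort trick above, with a check that each column of the result is, independently, a permutation of the matching input column:
import torch

m, n = 4, 3
mat = torch.tensor([[11.0, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]])

# argsort of i.i.d. uniform noise yields an independent random permutation per column
ind = torch.rand(m, n).argsort(dim=0)
res = torch.zeros(m, n).scatter_(0, ind, mat)

for j in range(n):
    assert torch.equal(res[:, j].sort()[0], mat[:, j].sort()[0])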
(Note that tymokvo’s approach is applying the same random permutation to each of the rows. Antoine is asking for distinct random permutations for (in his case) each of the columns, as his loop-based solution does. Also, for reasons I don’t understand – tymokvo’s code looks right for what it does – the final result in tymokvo’s post has a duplicated column, (1, 5, 9, 13), and a missing column (3, 7, 11, 15).)
Have fun!
K. Frank
|
st84383
|
Thanks both,
@KFrank your solution does work, but indeed, on a (1000, 10000) problem I get:
scatter: 1.084769 ms
while the naive version with for loops gives:
naive: 0.394080 ms
|
st84384
|
Hi,
I’m training a neural network model in triplet format, where I have three inputs: an anchor image and two other images, one positive and one negative in relation to the anchor image, indicating similarity.
I have the following difficulty: my experiments consist of extracting features for each image in the test dataset and using these features for the task of image retrieval. I’m saving the trained model, but I’m not able to load it to extract the features of each image, because the model is trained for triplet-format input. Any ideas?
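For context, a hedged sketch of the usual triplet setup: a single shared embedding network called three times, which is what makes single-image feature extraction possible afterwards (the class and layer names here are hypothetical, not from the original code):
import torch
import torch.nn as nn

class TripletNet(nn.Module):
    def __init__(self, embedding_net):
        super().__init__()
        self.embedding_net = embedding_net  # shared weights for all three inputs

    def forward(self, anchor, positive, negative):
        return (self.embedding_net(anchor),
                self.embedding_net(positive),
                self.embedding_net(negative))

# at retrieval time, bypass the triplet wrapper and embed single images
model = TripletNet(nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)))
model.eval()
with torch.no_grad():
    features = model.embedding_net(torch.randn(1, 3, 32, 32))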
|
st84385
|
Hi,
I would like to collapse a series of 2d convolutions into a single conv operation.
Basically, I’d like to make the pytorch convolution associative.
For example, if I have something like
y = F.conv2d(F.conv2d(a1,k1),k0)
I’d like to write it this way: y = F.conv2d(a1,K) (where K depends on k1 and k0).
If my calculations are not wrong, this should work:
a1 = torch.randn(1,100,256,256)
k1 = torch.randn(25,100,5,5)
k0 = torch.rand(1,25,5,5)
a0 = F.conv2d(a1,k1)
y = F.conv2d(a0,k0)
K = F.conv2d(k1.transpose(0,1),k0,padding=4).transpose(0,1)
y2 = F.conv2d(a1,K)
But it doesn’t work.
Does anyone know what I am forgetting ? Thanks
|
st84386
|
Solved by ptrblck in post #2
The convolution operator in PyTorch is a cross-correlation and not a convolution in the signal processing sense.
From the docs:
[…] where ⋆ is the valid 2D cross-correlation operator, N is a batch size, C denotes a number of channels, H is a height of input planes in pixels, and W is width …
|
st84387
|
The convolution operator in PyTorch is a cross-correlation and not a convolution in the signal processing sense.
From the docs:
[…] where ⋆ is the valid 2D cross-correlation operator, N is a batch size, C denotes a number of channels, H is a height of input planes in pixels, and W is width in pixels.
While a convolution is commutative, a cross-correlation is not.
Thus, you would need to flip the kernels in the spatial dimensions to get approx. the same result:
a1 = torch.randn(1,100,256,256)
k1 = torch.randn(25,100,5,5)
k0 = torch.randn(1,25,5,5)
a0 = F.conv2d(a1,torch.flip(k1, [2, 3]))
y = F.conv2d(a0,torch.flip(k0, [2, 3]))
K = F.conv2d(k1.transpose(0, 1),torch.flip(k0, [2, 3]),padding=4).transpose(0, 1)
y2 = F.conv2d(a1,torch.flip(K, [2, 3]))
print((y - y2).abs().max())
> tensor(0.0052)
The max abs error seems to be a bit high, but if we lower the number of channels, we can reduce this error to approx. ~1e-5, so I assume it might be due to the limited floating point precision (or I’m not seeing the bug in the code).
|
st84388
|
I’m following the chatbot tutorial at this URL:
https://pytorch.org/tutorials/beginner/chatbot_tutorial.html
I’ve written two Modules. One is a decoder and one is an encoder. They run. My problem is that when I look at my output, quite often I just see the word ‘I’ alone. In other words, the chatbot thinks that it is sufficient to answer every question with the word ‘I’ alone. I downloaded the code from the tutorial and that code does not show this behaviour.
I’ve tried a tensorflow project, and though it was a long time ago I believe I remember having this same problem. I am using the movie database as my corpus. I have to think someone has seen this problem before.
My setup currently is that the encoder processes a whole batch at a time, while the decoder goes through the batch line by line and the lines word by word. Is this my problem?
my code changes often and is very messy, but the url to the seq2seq model on github is here:
github.com/radiodee1/awesome-chatbot/blob/master/model/seq_2_seq.py
any help would be appreciated.
|
st84389
|
Your code is quite long (~2800 lines of code), so it would be helpful if you could narrow down possible issues to some functions.
I’m not an NLP expert, but usually it helps me to scale down the problem, i.e. use a simple model and a small dataset, and check my code for possible bugs.
|
st84390
|
why don’t you write a meaningful title for your question? What issue/question? Can I know your question/issue just by reading your question title?
|
st84391
|
Hi,
I’m trying to build the Chatbot from the official tutorials and I’m running into two issues. The main issue is the mismatch of dimensions when running through the GRU in the decoder.
def forward(self, input_step, last_hidden, encoder_output):
    # we run this one step (word) at a time
    embedded = self.embedding(input_step)
    embedded = self.embedding_dropout(embedded)
    # forward through unidirectional GRU
    set_trace()
    rnn_output, hidden_state = self.gru(embedded, last_hidden)
    # calculate attention weights from the current GRU output
    attn_weights = self.attn(rnn_output, encoder_output)
The error is occurring at the gru call after set_trace(). Here is the error:
Traceback (most recent call last):
File "chatbot.py", line 110, in <module>
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_output)
File "/net/vaosl01/opt/NFS/su0/anaconda3/envs/pyt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/su0/pytorch-learn/chatbot_tutorial/decoder.py", line 99, in forward
rnn_output, hidden_state = self.gru(embedded, last_hidden)
File "/net/vaosl01/opt/NFS/su0/anaconda3/envs/pyt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/net/vaosl01/opt/NFS/su0/anaconda3/envs/pyt/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 175, in forward
self.check_forward_args(input, hx, batch_sizes)
File "/net/vaosl01/opt/NFS/su0/anaconda3/envs/pyt/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 131, in check_forward_args
expected_input_dim, input.dim()))
RuntimeError: input must have 3 dimensions, got 2
I’m only working with a very small subset to make sure things are good. Here are the parameters:
batch_size = 5
hidden_size = 500
n_encoder_layers = 2
n_decoder_layers = 2
dropout = 0.1
attn_model = 'dot'
embedding = nn.Embedding(len(vocab), hidden_size)
Basically, my embedded shape that’s passed as input is of shape (batch_size, hidden_size) = (5, 500), and it’s asking for a 3-d input. Which makes sense from the GRU documentation, which says the input is either of shape (seq_len, batch, input_size) or a pad_packed_sequence. However, here it is neither of those and I’d like some help fixing it. This code is directly from the tutorial.
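For illustration, a minimal sketch of the missing dimension (assuming one word is decoded at a time, so seq_len is 1; this is an illustration, not the tutorial's own fix):
# embedded currently: (batch_size, hidden_size) = (5, 500)
# nn.GRU expects (seq_len, batch, input_size), so add a leading length-1 axis
embedded = embedded.unsqueeze(0)   # -> (1, 5, 500)
rnn_output, hidden_state = self.gru(embedded, last_hidden)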
There is one more error related to the use of pack_padded_sequence, which is also asked here. I’ve temporarily averted that by not setting a default device (so everything is running on the CPU) and will get back to it after I get everything else working.
Let me know if more information is required.
Thanks.
|
st84392
|
Embedding is multidimensional, so it shouldn’t have a dimension problem.
I tried running the jupyter notebook for the chatbot, and it seems to work.
Can you try running that and check the output?
Also what version of pytorch are you using?
|
st84393
|
Thanks for your reply. I ran the notebook really quickly and it’s training (on the GPU) without any problems. I don’t get why, when I modularize the (same) code into different files and try to run one iteration, I get the issues that I’ve highlighted in my first post and the one that you had earlier.
I’m running PyTorch 1.0.1.post2
|
st84394
|
Well, I guess I lied a bit about it being completely the same code. Instead of voc, I named the instance vocab, and I just gave the class a __len__ method.
|
st84395
|
why don’t you write a meaningful title for your question? What issue? Can I know your issue just by reading your question title?
|
st84396
|
Hi,
I’m new to pytorch and have been following the many tutorials available.
But when I did the chatbot tutorial
(https://pytorch.org/tutorials/beginner/chatbot_tutorial.html?highlight=chatbot%20tutorial) it did not work,
as in the screenshot below.
[screenshot 8788.JPG: Spyder error output]
What should I do and what is causing this?
USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")

corpus_name = "cornell movie-dialogs corpus"
corpus = os.path.join("data", corpus_name)

def printLines(file, n=10):
    with open(file, 'rb') as datafile:
        lines = datafile.readlines()
    for line in lines[:n]:
        print(line)

#printLines(os.path.join(corpus, "movie_lines.txt"))

# Splits each line of the file into a dictionary of fields
def loadLines(fileName, fields):
    lines = {}
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            lineObj = {}
            for i, field in enumerate(fields):
                lineObj[field] = values[i]
            lines[lineObj['lineID']] = lineObj
    return lines

# Groups fields of lines from loadLines into conversations based on movie_conversations.txt
def loadConversations(fileName, lines, fields):
    conversations = []
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            convObj = {}
            for i, field in enumerate(fields):
                convObj[field] = values[i]
            # Convert string to list (convObj["utteranceIDs"] == "['L598485', 'L598486', ...]")
            lineIds = eval(convObj["utteranceIDs"])
            # Reassemble lines
            convObj["lines"] = []
            for lineId in lineIds:
                convObj["lines"].append(lines[lineId])
            conversations.append(convObj)
    return conversations

# Extracts pairs of sentences from conversations
def extractSentencePairs(conversations):
    qa_pairs = []
    for conversation in conversations:
        # Iterate over all the lines of the conversation
        for i in range(len(conversation["lines"]) - 1):  # We ignore the last line (no answer for it)
            inputLine = conversation["lines"][i]["text"].strip()
            targetLine = conversation["lines"][i+1]["text"].strip()
            # Filter wrong samples (if one of the lists is empty)
            if inputLine and targetLine:
                qa_pairs.append([inputLine, targetLine])
    return qa_pairs

datafile = os.path.join(corpus, "formatted_movie_lines.txt")

delimiter = '\t'
# Unescape the delimiter
delimiter = str(codecs.decode(delimiter, "unicode_escape"))

lines = {}
conversations = []
MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"]
MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"]

# Load lines and process conversations
print("\nProcessing corpus...")
lines = loadLines(os.path.join(corpus, "movie_lines.txt"), MOVIE_LINES_FIELDS)
print("\nLoading conversations...")
conversations = loadConversations(os.path.join(corpus, "movie_conversations.txt"),
                                  lines, MOVIE_CONVERSATIONS_FIELDS)

# Write new csv file
print("\nWriting newly formatted file...")
with open(datafile, 'w', encoding='utf-8') as outputfile:
    writer = csv.writer(outputfile, delimiter=delimiter)
    for pair in extractSentencePairs(conversations):
        writer.writerow(pair)

# Print a sample of lines
print("\nSample lines from file:")
printLines(datafile)

# Default word tokens
PAD_token = 0  # Used for padding short sentences
SOS_token = 1  # Start-of-sentence token
EOS_token = 2  # End-of-sentence token

class Voc:
    def __init__(self, name):
        self.name = name
        self.trimmed = False
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3  # Count SOS, EOS, PAD

    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.num_words
            self.word2count[word] = 1
            self.index2word[self.num_words] = word
            self.num_words += 1
        else:
            self.word2count[word] += 1

    # Remove words below a certain count threshold
    def trim(self, min_count):
        if self.trimmed:
            return
        self.trimmed = True
        keep_words = []
        for k, v in self.word2count.items():
            if v >= min_count:
                keep_words.append(k)
        print('keep_words {} / {} = {:.4f}'.format(
            len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
        ))
        # Reinitialize dictionaries
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3  # Count default tokens
        for word in keep_words:
            self.addWord(word)

MAX_LENGTH = 10  # Maximum sentence length to consider

# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    s = re.sub(r"\s+", r" ", s).strip()
    return s

# Read query/response pairs and return a voc object
def readVocs(datafile, corpus_name):
    print("Reading lines...")
    # Read the file and split into lines
    lines = open(datafile, encoding='utf-8').read().strip().split('\n')
    # Split every line into pairs and normalize
    pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
    voc = Voc(corpus_name)
    return voc, pairs

def filterPair(p):
    # Input sequences need to preserve the last word for EOS token
    return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH

# Filter pairs using filterPair condition
def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]

# Using the functions defined above, return a populated voc object and pairs list
def loadPrepareData(corpus, corpus_name, datafile, save_dir):
    print("Start preparing training data ...")
    voc, pairs = readVocs(datafile, corpus_name)
    print("Read {!s} sentence pairs".format(len(pairs)))
    pairs = filterPairs(pairs)
    print("Trimmed to {!s} sentence pairs".format(len(pairs)))
    print("Counting words...")
    for pair in pairs:
        voc.addSentence(pair[0])
        voc.addSentence(pair[1])
    print("Counted words:", voc.num_words)
    return voc, pairs

save_dir = os.path.join("data", "save")
voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)

# Print some pairs to validate
print("\npairs:")
for pair in pairs[:10]:
    print(pair)
File "", line 1, in <module>
    runfile('C:/Users/lab723/Desktop/glove1005/untitled0.py', wdir='C:/Users/lab723/Desktop/glove1005')
File "C:\Users\lab723\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
File "C:\Users\lab723\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/lab723/Desktop/glove1005/untitled0.py", line 219, in <module>
    voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)
File "C:/Users/lab723/Desktop/glove1005/untitled0.py", line 209, in loadPrepareData
    pairs = filterPairs(pairs)
File "C:/Users/lab723/Desktop/glove1005/untitled0.py", line 202, in filterPairs
    return [pair for pair in pairs if filterPair(pair)]
File "C:/Users/lab723/Desktop/glove1005/untitled0.py", line 202, in <listcomp>
    return [pair for pair in pairs if filterPair(pair)]
File "C:/Users/lab723/Desktop/glove1005/untitled0.py", line 198, in filterPair
    return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
IndexError: list index out of range
|
st84397
|
I say this with your best interest in mind, but nobody is going to be able to help you with this. Make the question shorter, or at least write a more meaningful, short, concise title. No one wants to read a novel. We want to help, but you’re not making it easy for us…
|
st84398
|
Hi:
For example, I have only two classes, 0 and 1. For class 0 I have 100 samples; for class 1 I have 900 samples. From what I understand, class 0 should have a higher weight, so would the weights [0.9, 0.1] be a good approach for training?
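A minimal sketch of passing such weights to the loss (the counts are from the question; the weighting rule shown, one minus the class frequency, is just one common choice):
import torch
import torch.nn as nn

counts = torch.tensor([100.0, 900.0])   # samples per class
weights = 1.0 - counts / counts.sum()   # -> tensor([0.9, 0.1])
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))
loss = criterion(logits, targets)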
|
st84399
|
Solved by Nikronic in post #2
Hi,
I think this question is very similar to yours. Give it a try and feel free to ask any other questions.
best
|
st84400
|
Hi,
I think this question is very similar to yours. Give it a try and feel free to ask any other questions.
best
|
st84401
|
I’m implementing an atari pong playing policy gradient agent.
The main loop at the moment is.
Run the policy to gather some experience and save the experience (images, actions, rewards) in a dataset.
Run a single iteration of training over the gathered experience to update the policy.
Throw all the data away and start again.
Up until now, I have just been using cpu training, but now I’d like to push the training to GPU.
When it comes to loading, it seems I have a matrix of choices.
Where the tensors end up:
1. Create tensors on CPU, then push them to GPU via pinned memory.
2. Create tensors directly on GPU.
Where in the pipeline they are created:
1. Create tensors in the __getitem__(index) of the Dataset.
2. Create tensors in the collate_batch function of the DataLoader (or write my own DataLoader).
I thought I understood all this, but as it turns out, there are a few gaps in my understanding. I have 3 questions.
1. Is there any downside to directly creating GPU tensors in the __getitem__(index) call on the Dataset? Is this a bad idea?
2. What’s the best practice for creating tensors generally? Should they be created for each item in the dataset, or should the dataset just return what it returns, and the loader take care of creating tensors?
3. Where is the right place to decide which device a tensor goes to? The Dataset, the DataLoader, or the training loop itself?
Any insight appreciated!
|
st84402
|
Solved by ptrblck in post #2
The downside of creating GPU tensors in __getitem__ or push CPU tensors onto the device is that your DataLoader won’t be able to use multiple workers anymore.
If you try to set num_workers > 0, you’ll get a CUDA error:
RuntimeError: CUDA error: initialization error
This also means that your hos…
|
st84403
|
The downside of creating GPU tensors in __getitem__, or pushing CPU tensors onto the device there, is that your DataLoader won’t be able to use multiple workers anymore.
If you try to set num_workers > 0, you’ll get a CUDA error:
RuntimeError: CUDA error: initialization error
This also means that your host and device operations (most likely) won’t be able to overlap anymore, i.e. your CPUs cannot load and process the data while your GPU is busy training the model.
If you are “creating” the tensors, i.e. sampling them, this could still be a valid approach.
However, if you are loading and processing some data (e.g. images), I would write the Dataset such that a single example is loaded, processed, and returned as a CPU tensor.
If you use pin_memory=True in your DataLoader, the transfer from host to device will be faster, as described in this blogpost.
Inside the training loop you would push the tensors onto the GPU. If you set non_blocking=True as an argument in tensor.to(), PyTorch will try to perform the transfer asynchronously, as described here.
The DataLoader might use a sampler or a custom collate_fn, but shouldn’t be responsible for creating the tensors.
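A minimal sketch of that pattern: the Dataset returns CPU tensors, the DataLoader pins memory, and the training loop moves batches to the GPU asynchronously (random tensors stand in for real image loading):
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, n=1000):
        self.data = torch.randn(n, 3, 32, 32)        # stand-in for loaded images
        self.targets = torch.randint(0, 10, (n,))

    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]      # CPU tensors only

    def __len__(self):
        return len(self.data)

loader = DataLoader(MyDataset(), batch_size=64, num_workers=4, pin_memory=True)
device = torch.device('cuda')
for images, targets in loader:
    images = images.to(device, non_blocking=True)     # async host-to-device copy
    targets = targets.to(device, non_blocking=True)
    # ... forward / backward ...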
|
st84404
|
@ptrblck I had a question related to this. Actually, I am loading image filenames in my dataset and loading the images in __getitem__. I also use manually written data augmentation functions in __getitem__, so wouldn’t it be better to have these run on the GPU? Currently training is pretty slow.
Now I think I have 2 options: either I write the data augmentation functions for batched tensors outside the DataLoader, or I create GPU tensors in __getitem__.
What would you recommend?
|
st84405
|
What kind of data augmentation methods are you currently using?
If you are using PIL, you might want to install PIL-SIMD, which is a drop-in replacement and will use SIMD operations to speed up the transformations.
Are you using multiple workers in your DataLoader? This should usually speed up the data loading and processing.
If you are bottlenecked by the CPU and have enough GPU resources, you might also try to use NVIDIA/DALI. @JanuszL might give you some more information about it. Have a look at his post here for some additional information.
|
st84406
|
I know what inplace does, but can you please explain why I should opt to use it or not? Are there any caveats, and what are the scenarios in which this option is important, specifically when building a NN? Take the ReLU layer as an example in which this option is available.
|
st84407
|
Solved by Nikronic in post #2
Hello,
First, there is an important thing you have to consider: you can only use inplace=True when you are sure your model won’t cause any error. For example, if you are trying to train a CNN, at the time of backpropagation autograd needs all the values, but an inplace=True operation can cause a change s…
|
st84408
|
Hello,
First, there is an important thing you have to consider: you can only use inplace=True when you are sure your model won’t cause any error. For example, if you are trying to train a CNN, at the time of backpropagation autograd needs all the values, but an inplace=True operation can cause a change so your backprop is no longer valid. Actually, this kind of error is handled by PyTorch, so you’ll be notified about it.
Second, if you do not get any error, it is better to use the inplace=True operation because it won’t allocate new memory for the output of your layer. So it can prevent an out-of-memory error.
Finally, as far as I know, developers usually use inplace=True unless they get an error.
I got 'RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation' error
Would you please show me how to fix this? I changed it like this:
def forward(self, logits, labels):
    N, C, H, W = logits.size()
    n_pixs = N * H * W
    logits = logits.permute(0, 2, 3, 1).contiguous().view(-1, C)
    with torch.no_grad():
        scores = F.softmax(logits, dim=1).cpu().detach()
        labels = labels.view(-1)
        labels_cpu = labels.cpu().detach()
        invalid_mask = labels_cpu==self.ignore_lb
        labels_cpu[invalid_mask]…
What's the difference between nn.ReLU() and nn.ReLU(inplace=True)?
I implemented generative adversarial network using both nn.ReLU() and nn.ReLU(inplace=True). It seems that nn.ReLU(inplace=True) saved very small amount of memory.
What’s the purpose of the using inplace=True?
Is the behavior different in backpropagation?
bests
|
st84409
|
Indeed, @Nikronic nails it with the rule of thumb: you can use inplace for memory efficiency unless it breaks.
You might also be less eager to use inplace when planning to use the JIT, as it will fuse pointwise non-inplace operations like ReLU if there are several in a row.
The two things to avoid are:
you move a leaf tensor into the graph (using inplace on something that you just defined with requires_grad=True),
when the operation before the inplace wants to have its result to compute the backward. Whether this is the case is not easy to tell from the outside, unfortunately.
As a corollary, you should avoid using inplace on the inputs of your re-usable module, lest a future use fall into one of the two situations.
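A minimal sketch of the first failure mode (an in-place op on a leaf that requires grad raises immediately):
import torch

x = torch.ones(3, requires_grad=True)   # leaf tensor
try:
    x.add_(1)                           # in-place op on the leaf
except RuntimeError as e:
    print(e)  # complains that a leaf Variable requiring grad is used in an in-place operation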
Best regards
Thomas
|
st84410
|
Hi there,
In torch, when we train our model on an image, the model can store the output inside itself. Is it also possible to store the output in a pytorch model after training on the image?
In torch we can access the output after training the model as in this example:
out = model:forward(input)
Suppose there are 10 modules in the model and I want to access the output of the 5th layer; in this case:
layer_5 = model.modules[5].output
Is something like this possible in pytorch too? If not, how would I do the same in pytorch?
Thanks
|
st84411
|
If you would like to store the output of a certain (internal) layer, you could use forward hooks as described here.
|
st84412
|
Hi @ptrblck, how can I store each layer’s (internal or external) output as the forward pass is calculated? I am not using the name of each layer; I want to store according to the index of each layer.
My model also contains layers nested inside other layers.
Example:
Sequential(
(0): Conv2d()
(1): BatchNorm2d()
(2): ReLU()
(3): Sequential(
(0): ConcatTable(
(0): Sequential(
(0): BatchNorm2d()
(1): ReLU()
(2): Conv2d()
(3): BatchNorm2d()
(4): ReLU()
(5): Conv2d()
(6): BatchNorm2d()
(7): ReLU()
(8): Conv2d()
)
(1): Sequential()
)
(1): CAddTable()
)
Note: I removed the input, output, and kernel etc. info.
Thanks
|
st84413
|
You could iterate all modules and add the forward hooks using their name.
Here is a small example:
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

model = nn.Sequential(
    nn.Conv2d(1, 1, 3, 1, 1),
    nn.BatchNorm2d(1),
    nn.ReLU(),
    nn.Sequential(
        nn.Conv2d(1, 1, 3, 1, 1),
        nn.BatchNorm2d(1),
        nn.ReLU()
    ),
    nn.Conv2d(1, 1, 3, 1, 1),
    nn.BatchNorm2d(1),
    nn.ReLU()
)

for name, module in model.named_modules():
    module.register_forward_hook(get_activation(name))

model(torch.randn(1, 1, 4, 4))
for key in activation:
    print(key, activation[key])
Note that I removed ConcatTable and CAddTable, as they are classes from Torch7, which is no longer under active development. I would therefore recommend switching to PyTorch.
Have a look at the website for install instructions, if you haven’t already installed it.
|
st84414
|
why does the way of calculating l1_loss depend on target.requires_grad?
def l1_loss(input, target, size_average=None, reduce=None, reduction='mean'):
    # type: (Tensor, Tensor, Optional[bool], Optional[bool], str) -> Tensor
    r"""l1_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor

    Function that takes the mean element-wise absolute value difference.

    See :class:`~torch.nn.L1Loss` for details.
    """
    if not (target.size() == input.size()):
        warnings.warn("Using a target size ({}) that is different to the input size ({}). "
                      "This will likely lead to incorrect results due to broadcasting. "
                      "Please ensure they have the same size.".format(target.size(), input.size()),
                      stacklevel=2)
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    if target.requires_grad:
        ret = torch.abs(input - target)
        if reduction != 'none':
            ret = torch.mean(ret) if reduction == 'mean' else torch.sum(ret)
    else:
        expanded_input, expanded_target = torch.broadcast_tensors(input, target)
        ret = torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
    return ret
|
st84415
|
I just trained my CRNN model for Chinese text recognition and I met this problem.
[screenshot problem2.JPG: error message]
It looks like an out-of-memory problem.
|
st84416
|
But the situation of the GPUs is shown in the figure. When I trained my model, the PID of the process was 4585. But when the problem occurred, process 4585 was replaced by the process in the figure.
I suspect there is a bug in torch.nn.CTCLoss.
Please help.
|
st84417
|
Hey guys,
I trained my CIFAR-10 full-precision network using a vgg architecture. I got 92.27 percent accuracy on the validation set. However, when I saved and loaded the model and then tested using the loop below, I am getting only 35 percent accuracy. I am pretty sure the file is not corrupted.
model = torch.load('/content/cifar_fullprecison_vgg8.pth')

import torch

class Ternary_batch_rel(torch.nn.Module):
    def __init__(self, batchnorm_size):
        super(Ternary_batch_rel, self).__init__()
        self.l1 = torch.nn.Sequential(
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(batchnorm_size)
        )

    def forward(self, x):
        out = self.l1(x)
        return out

z1 = Ternary_batch_rel(128).to(device)
z2 = Ternary_batch_rel(256).to(device)
z3 = Ternary_batch_rel(512).to(device)

class Ternary_max_pool(torch.nn.Module):
    def __init__(self):
        super(Ternary_max_pool, self).__init__()
        self.l1 = torch.nn.Sequential(
            torch.nn.MaxPool2d(kernel_size=2, stride=2))

    def forward(self, x):
        out = self.l1(x)
        return out

zm = Ternary_max_pool().to(device)
# Testing loop
correct = 0
total = 0
for images, labels in train_loader:
    images = images.to(device)
    labels = labels.to(device)
    y1 = F.conv2d(images, model['layer1.0.weight'], padding=1)
    y2 = z1(y1)
    y2 = F.conv2d(y2, model['layer1.3.weight'], padding=1)
    y3 = z1(y2)
    y3 = zm(y3)
    y4 = F.conv2d(y3, model['layer2.0.weight'], padding=1)
    y4 = z2(y4)
    y5 = F.conv2d(y4, model['layer2.3.weight'], padding=1)
    y5 = z2(y5)
    y6 = zm(y5)
    y7 = F.conv2d(y6, model['layer3.0.weight'], padding=1)
    y8 = z3(y7)
    y9 = F.conv2d(y8, model['layer3.3.weight'], padding=1)
    y10 = z3(y9)
    y11 = zm(y10)
    y11 = y11.view(y11.size(0), -1)
    y12 = F.linear(y11, model['layer4.0.weight'])
    y13 = F.relu(y12)
    y14 = F.dropout(y13)
    y15 = F.linear(y14, model['layer4.3.weight'])
    _, predicted = torch.max(y15, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
print('Test accuracy of the model on the 10000 test images: {}%'.format((correct / total) * 100))
|
st84418
|
I’m not sure why you are using the model components explicitly rather than using the model definition file’s forward method. Any good reason for that?
What about bias terms in the conv and linear layers? Are you using them too?
|
st84419
|
I am training without bias terms, including in the trained model that I am currently loading.
Also, I am implementing a paper that does not train the weights directly but rather the parameters of a distribution from which the weights are drawn, so testing is not straightforward (since weights need to be sampled from the distribution and then passed through to infer on the data).
|
st84420
|
Did you call model.eval() after loading the model?
This is usually needed, if your model contains layers such as dropout and batchnorm.
|
st84421
|
But the model here is a dictionary, since it contains the state_dict. The model here is not an instance of the class.
|
st84422
|
It’s not completely clear to me how you are using the model, but even if you call the layers in a functional way, you should set them to eval to get the proper validation accuracy.
You can call .eval() on each module separately.
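For the modules defined earlier in this thread, that would be, e.g.:
for m in (z1, z2, z3, zm):
    m.eval()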
|
st84423
|
@ptrblck, let’s say the model were an instance of a neural network class:
model = Conv_net().to(device)
During the training phase, I would be using the forward function of the class to forward propagate and optimize the parameters. The parameters here are not the weights of the network, but rather some parameters of a probability distribution to which the weights belong.
During the eval() phase, I would have to first sample the trained probability distribution (parameterized by the weight parameters) and then pass the sampled weights to the neural network.
So during the model.eval() phase, I would not be able to do something like the following:
model.eval()
for images, labels in valid_loader:
    yout = model(images)
    ...
However, if I have the samples of the distribution, I want to do the following to see my validation accuracy:
y1 = F.conv2d(images, w_sampled_1, padding=1)
y2 = z1(y1)
...
Is there a way to run inference of this kind using a model instance rather than a dictionary?
|
st84424
|
@ptrblck you could check out the code written here:
Blown up gradients and loss autograd
I have my entire code written here and the output, I am surprised to see such huge gradients and losses. I am working on this code on CIFAR-10 based on local reparameterization trick by Shayer et.al, I am unable to reproduce the results in the paper. I have been working on this for over a month but no clue where I am going wrong. Could someone help me out Please.
class Ternary_batch_rel(torch.nn.Module):
def __init__(self,batchnorm_size):
super(Ternary_batch_rel,self).__init__()
self.…
|
st84425
|
Did you solve this problem? I have realized that there are accuracy problems when I save the parameters of the model at a certain epoch and try to reproduce its outputs. For instance, suppose I stop the model at epoch 120 and save the model and its outputs. If I load the state_dict and predict over the same data points, there is about ~0.1% error. Why would that be? Is there something I should set to save the state_dict with higher precision?
|
st84426
|
I see that the new pytorch comes with a much-needed learning rate scheduler. Actually, several of them. I want to use torch.optim.lr_scheduler.MultiStepLR, since I need it to divide my learning rate at certain milestones.
What I do is:
learning_rate = 0.1
momentum = 0.9
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=momentum, weight_decay=0.0001)
scheduler = MultiStepLR(optimizer, milestones=[60, 100, 150, 400], gamma=0.1)

for epoch in range(number_of_training_epochs):  # loop over the dataset multiple times
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        inputs = inputs.float()
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        scheduler.step()
What I see is that now my network is not learning anything. Should I also use optimizer.step()?
|
st84427
|
And I think scheduler.step should be outside the innermost loop (i.e. one step per epoch).
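Putting that together with the missing optimizer.step(), a sketch of the corrected loop (same names as above):
for epoch in range(number_of_training_epochs):
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        inputs, labels = inputs.float().cuda(), labels.cuda()
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()   # update the weights every batch
    scheduler.step()       # adjust the learning rate once per epoch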
|
st84428
|
When I use MultiStepLR, I also need both optimizer.step() and scheduler.step(). OMG, you saved my time. Really, thanks.
And then… I think the torch.optim.lr_scheduler.MultiStepLR docs need to mention optimizer.step() as well…
|
st84429
|
I understand that loading a model made with torch.jit will always load it into CPU by default, but what should I do in C++ as the equivalent of .cuda() please? Sorry for asking something so basic.
|
st84430
|
try tensor.to(kCUDA)
edit: sorry, didn’t realize you were trying to move a module in c++.
|
st84431
|
I’m following the tutorial called Loading a Pytorch Model in C++. I was able to trace the python model, export it to the .pt format, load it in C++ and perform inference. That all worked fine on the CPU.
However, I want to do the same thing on the GPU. Following the above suggestion, I made one modification to the .cpp file. I took:
inputs.push_back(torch::ones({1, 3, 224, 224}))
And made it:
inputs.push_back(torch::ones({1, 3, 224, 224}).to(at::kCUDA))
I also added the following include:
#include <ATen/ATen.h>
I then reran make in my build directory. It built. However, here is what happens when I try to run the executable:
$ ./example-app ../model.pt
ok
terminate called after throwing an instance of 'at::Error'
what(): Expected object of backend CUDA but got backend CPU for argument #2 'weight' (checked_tensor_unwrap at /pytorch/aten/src/ATen/Utils.h:70)
frame #0: at::CUDAFloatType::thnn_conv2d_forward(at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>) const + 0xb5 (0x7f47506484c5 in /home/abweiss/libtorch/lib/libcaffe2_gpu.so)
frame #1: torch::autograd::VariableType::thnn_conv2d_forward(at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>) const + 0x55f (0x7f477f1729df in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #2: at::TypeDefault::thnn_conv2d(at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>) const + 0x73 (0x7f4774fd1933 in /home/abweiss/libtorch/lib/libcaffe2.so)
frame #3: torch::autograd::VariableType::thnn_conv2d(at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>) const + 0x179 (0x7f477f0d01b9 in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #4: at::native::_convolution_nogroup(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>) + 0x75f (0x7f4774d1490f in /home/abweiss/libtorch/lib/libcaffe2.so)
frame #5: at::TypeDefault::_convolution_nogroup(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>) const + 0x6d (0x7f4774fb680d in /home/abweiss/libtorch/lib/libcaffe2.so)
frame #6: torch::autograd::VariableType::_convolution_nogroup(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>) const + 0x1ae (0x7f477f0c0d2e in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #7: at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>, long, bool, bool, bool) + 0x1b48 (0x7f4774d18b38 in /home/abweiss/libtorch/lib/libcaffe2.so)
frame #8: at::TypeDefault::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>, long, bool, bool, bool) const + 0x93 (0x7f4774fb68e3 in /home/abweiss/libtorch/lib/libcaffe2.so)
frame #9: torch::autograd::VariableType::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::ArrayRef<long>, at::ArrayRef<long>, at::ArrayRef<long>, bool, at::ArrayRef<long>, long, bool, bool, bool) const + 0x22d (0x7f477f0c102d in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #10: <unknown function> + 0x47410b (0x7f477f27610b in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #11: <unknown function> + 0x4a87ed (0x7f477f2aa7ed in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #12: <unknown function> + 0x49486c (0x7f477f29686c in /home/abweiss/libtorch/lib/libtorch.so.1)
frame #13: torch::jit::script::Method::run(std::vector<torch::jit::IValue, std::allocator<torch::jit::IValue> >&) + 0xf6 (0x42ba88 in ./example-app)
frame #14: torch::jit::script::Method::operator()(std::vector<torch::jit::IValue, std::allocator<torch::jit::IValue> >) + 0x4a (0x42bb16 in ./example-app)
frame #15: torch::jit::script::Module::forward(std::vector<torch::jit::IValue, std::allocator<torch::jit::IValue> >) + 0x81 (0x42c325 in ./example-app)
frame #16: main + 0x1fe (0x42787e in ./example-app)
frame #17: __libc_start_main + 0xf0 (0x7f474ee32830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: _start + 0x29 (0x426cb9 in ./example-app)
Aborted (core dumped)
Are there additional steps required to make this run on the GPU? It seems that I have moved the input Tensor to GPU, but the model weights are still on the CPU. How do I move the whole model to GPU?
|
st84432
|
A couple of follow up comments/questions:
First off, I tried moving my loaded Module to the GPU by doing the following:
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
module->to(at::kCUDA);
However, this yields a compile error:
$ make
Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
/home/abweiss/pt1_example_0/example-app.cpp: In function ‘int main(int, const char**)’:
/home/abweiss/pt1_example_0/example-app.cpp:16:11: error: ‘struct torch::jit::script::Module’ has no member named ‘to’
module->to(at::kCUDA);
So apparently torch::nn::Module has a to function, but not torch::jit::script::Module.
Second, I have a question. What is the difference between at::kCUDA and torch::kCUDA? When should I be using one instead of the other?
|
st84433
|
It’s suggested in TensorOptions.h that the rule of thumb is that if there’s an at:: and torch:: version of the same function then you should use the at:: version with Tensors and the torch:: version with Variables.
|
st84434
|
Does anyone know if it is even possible to move a traced model to the GPU in C++? Maybe this is just not possible in the preview version of Pytorch 1.0. It would be really nice if someone could clarify.
|
st84435
|
I checked the pytorch code and found that in the python code torch.jit.load returns a torch.jit.ScriptModule, which is inherited from torch.nn.Module, so I can use torch.jit.ScriptModule like torch.nn.Module:
traced_script_module = torch.jit.load("model.pt")
traced_script_module.cuda()
But in C++, torch::jit::load returns std::shared_ptr<torch::jit::script::Module>, which is a struct Module: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/module.h
|
st84436
|
Might the answer be that we’re supposed to move certain parts within the module to cuda rather than the whole thing?
|
st84437
|
I’m still hoping to get a definitive answer on this from one of the developers (@goldsborough? @smth?), or from someone who has already made this work:
In the current preview version Pytorch 1.0, is it possible to move a traced model to the GPU in C++?
If so, how? If not, is there an alternative way to get a C++ GPU model?
These are really fundamental questions for anyone who has been waiting for a way to productionize their Pytorch models.
|
st84438
|
Seems to me that at https://github.com/pytorch/pytorch/issues/12563 they are talking about the python interface.
|
st84439
|
Yeah, the fact that traced modules are loaded back onto the CPU by default was explained at the developer conference. It’s somewhere in https://www.facebook.com/pytorch/videos/482401942168584/ if you’re interested.
How to move them from CPU to CUDA in python is well documented. C++ seems to have slipped through the net.
|
st84440
|
Excellent! My doombots will soon be…
Um, I mean… oh good, there’s a patch. That’s promising.
|
st84441
|
Hi everyone, sorry for being late to the party. I think there are two broad questions that were asked in this thread:
1. How can I move my script::Module to CUDA? We found that there indeed was no easy way to do this without iterating over the parameters yourself, so I went ahead and implemented script::Module::to(...) in https://github.com/pytorch/pytorch/pull/12710. We’ll try to land it today or tomorrow.
2. Some of you noticed that the torch::nn::Module class from the C++ frontend has a to(...) method, and you were wondering whether you could mix torch::nn::Module and script::Module. At the moment, there is a strict division between torch::nn::Module, which is for the C++ frontend (the pure C++ alternative to the Python eager frontend), and script::Module (the C++ module class for TorchScript). They may not be mixed at the moment. The torch::nn::Module class is currently friendlier to use because it’s meant to provide the same API as torch.nn.Module in Python, for research. We are working actively on blending the TorchScript C++ API with the C++ frontend API, so I would expect torch::nn::Module and script::Module to become largely the same in the next few months. Feel free to raise more issues for operations you’d like to do on script::Module that are currently hard to do, and we can send more patches.
Hope this helps and let me know if you have more questions.
|
st84442
|
You can try loading the model directly onto the GPU, like:
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1], torch::kCUDA);
|
st84443
|
I have a neural network called model. I expected that model.training=True would have the same effect as model.train(). However, the behaviors are different, at least for dropout. In the former, dropout seemed to be disabled (different runs model(x) give the same output), while in the latter, dropout is activated (different runs model(x) give different outputs). Thanks.
|
st84444
|
.training is an attribute of the current module.
If you call model.train() or model.eval(), this attribute will be changed for all child modules inside the parent class, as can be seen here.
Thus, if you would like to set the .training attribute by hand, you would have to change it for each module individually.
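A minimal sketch of the difference: .train() flips .training recursively, while assigning the attribute only touches the top-level module:
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
model.eval()

model.training = True                           # child Dropout still has training=False
print([m.training for m in model.modules()])    # [True, False, False]

model.eval()
model.train()                                   # recursively sets training=True
print([m.training for m in model.modules()])    # [True, True, True]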
|
st84445
|
Hi all,
Is there an elegant way to apply a network to a tensor that is larger than GPU memory?
Tensorflow has tensorflow-mesh, maybe there’s something similar for pytorch?
I am aware of a sliding window approach, but that can lead to artifacts at edges of outputs.
Thanks!
|
st84446
|
Solved by smth in post #2
have you looked at something like https://pytorch.org/docs/stable/checkpoint.html
|
st84447
|
have you looked at something like https://pytorch.org/docs/stable/checkpoint.html
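A minimal sketch of torch.utils.checkpoint: activations inside the wrapped block are recomputed during backward instead of being stored, trading compute for memory:
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
x = torch.randn(1, 8, 64, 64, requires_grad=True)
y = checkpoint(block, x)   # forward pass without saving intermediate activations
y.sum().backward()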
|
st84448
|
Hi everyone,
I am fairly new to Pytorch and I’m currently working on a project that needs to perform classification on images. However, it’s not a binary classification.
The outputs of the neural network are real numbers. For instance, the output I’m looking for the neural network to provide is as such:
Reads an image.
Says that the image has attribute A at a value of 1200 and another attribute B at a value of 8.
The image data that’s fed into this neural net usually has one attribute with a value range of 1200-1800 and another attribute with a range of 4-20. The main goal is to train the network to give an estimated analog value based on the image fed into it.
The classes provided by Pytorch seem to favor binary classification, such as the Dataloader and ImageFolder classes. However, I cannot provide the class folder structure that is often used, since I do not have set binary classes.
I want to discuss what possible structures of neural networks can support a problem like this. Including activation functions and what Pytorch might have to offer.
Are there implementations out there that I haven’t found on the web yet? What type of loss functions and optimizers in Pytorch may best serve this type of problem? Should I even be considering neural networks to perform this task in the first place?
To be clear I’m not asking for code or a complete solution. Just a discussion on developing a neural net that can work with this analog system.
Also if I need to clarify anything I will be happy to.
|
st84449
|
Hi Andrew!
I am fairly new to Pytorch and I’m currently working on a project that needs to perform classification on images. However, it’s not a binary classification.
The outputs of the neural network are real numbers.
A quick note on terminology: To me, at least, the term “classification” has the connotation of assigning input samples to discrete classes, that is, labelling them with integer class labels. So I might call what you’re talking about something like continuous value estimation, rather than “classification.”
For instance the classification I’m looking the neural network to provide is as such:
Reads Image
says that the image has attribute A at a value of 1200 and another attribute B at a value of 8.
The image data that’s fed into this neural net usually has a value range of 1200 - 1800 and another attribute range of 4 - 20. The main goal is to train the network to give an estimated analog value based on the image fed into it.
This seems perfectly reasonable to attempt with a pytorch neural network (depending on the details of your problem).
The classes provided by Pytorch seem to favor binary classification, such as the Dataloader and ImageFolder classes.
This is an overstatement. Pytorch certainly supports binary classification, but it supports lots of other things, too.
However, I cannot provide the class folder structure that is often used since I do not have set binary classes:
Yes, while you can certainly encode binary class labels (and multi-class labels) in a directory structure, you can’t very well encode real-number attributes that way.
So the straightforward approach would be to have a data file that has the real-number attributes in it. It could be as simple as containing a sequence of (pairs of) real numbers, with the understanding that the order in which you read in your images will (somehow) match the order of the attribute values in your file. Or your data file could have a character string for the filename of each image file, followed by two numbers for the values of the two attributes.
I want to discuss what possible structures of neural networks can support a problem like this. Including activation functions and what Pytorch might have to offer.
Since you’re analyzing images your network would probably
start with a couple of convolutional layers, and then switch
over to a couple of fully-connected layers, with the last
layer having two (floating-point number) outputs as the
predictions for your two attributes. (There are many
additional techniques that could improve the performance
of your network – things like nn.MaxPool2d or
nn.Dropout2d – but I would recommend starting simple
and adding refinements as the need arises. Or start with
a prepackaged network.)
Rectified linear units (nn.ReLU) are commonly used as
(non-linear) activation functions.
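As a rough sketch of such an architecture (the channel
counts and the assumed 3x32x32 input size are placeholders,
not recommendations):
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeRegressor(nn.Module):
    def __init__(self):
        super(AttributeRegressor, self).__init__()
        # two convolutional layers to extract image features
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        # 32 * 8 * 8 assumes 32x32 inputs downsampled twice by max pooling
        self.fc1 = nn.Linear(32 * 8 * 8, 64)
        self.fc2 = nn.Linear(64, 2)  # two outputs: attributes A and B

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw floating-point predictions, no final activation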
What type of loss functions and optimizers in Pytorch may best serve this type of problem?
For your loss function, you would probably simply use the squared
error between your predictions and actuals (nn.MSELoss).
That is:
loss = (predicted_A - actual_A)^2 + (predicted_B - actual_B)^2.
You might want to weight the two terms so they have comparable
size lest you preferentially optimize for accuracy in A at the
cost of B.
I always like to start simple, so I would recommend starting
with plain-vanilla (stochastic) gradient descent (optim.SGD).
You can also add momentum to gradient descent (supported by
optim.SGD), and pytorch offers more sophisticated optimizers,
such as the commonly-used optim.Adam (“adaptive moments”).
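Setting either one up is a one-liner; a small sketch (the
learning rates here are typical starting points, not tuned
values):
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)  # stand-in for your actual network
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# or, for adaptive moments:
# optimizer = optim.Adam(model.parameters(), lr=0.001)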
Are there implementations out there that I haven’t found on the web yet?
Undoubtedly! There are lots of (semi-) prepackaged architectures
out there for image processing, but I don’t know much about them
and don’t have any concrete suggestions. But if you give us more
detail about your specific problem (and post some sample images!),
some of the experts here will likely have good advice.
Should I even be considering neural networks to perform this task in the first place?
Likely yes, but it really depends on your problem. (So post
details and sample images!) For example, if the two attributes
you are trying to “predict” are the intensity and saturation
averaged over the pixels in a color image, you’d be much better
off just calculating them directly.
Best.
K. Frank
|
st84450
|
Hi K. Frank,
Thank you for taking the time to answer my questions! I appreciate it!
KFrank:
I might call what you’re talking
about something like continuous value estimation, rather
than “classification.”
I think that’s a better name for this project. I’ll use that from now on.
KFrank:
So the straightforward approach would be to have a data file
that has the real-number attributes in it. It could be as
simple as containing a sequence of (pairs of) real numbers,
with the understanding that the order in which you read in
your images will (somehow) match the order of the attribute
values in your file. Or your data file could have a character
string for the filename of each image file, followed by two
numbers for the values of the two attributes.
Yes, I have this setup nicely. I can parse a png file name for the attributes and make them into labels.
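As a rough sketch of that, assuming filenames like sample_1200_8.png with A and B encoded in the name (the naming scheme and class name here are just examples):
import os
import torch
from torch.utils.data import Dataset
from PIL import Image
import torchvision.transforms.functional as TF

class AttributeDataset(Dataset):
    # assumes filenames like 'sample_1200_8.png', i.e. A=1200, B=8
    def __init__(self, image_dir):
        self.paths = [os.path.join(image_dir, f)
                      for f in os.listdir(image_dir) if f.endswith('.png')]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        name = os.path.splitext(os.path.basename(path))[0]
        _, a, b = name.split('_')  # relies on the naming scheme above
        image = TF.to_tensor(Image.open(path).convert('RGB'))
        target = torch.tensor([float(a), float(b)])
        return image, target
A torch.utils.data.DataLoader can then wrap this directly for shuffling and batching, with no class-folder hierarchy needed.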
KFrank:
For your loss function, you would probably simply use the squared
error between your predictions and actuals (nn.MSELoss).
I agree, after looking through the loss functions this seems like a good choice for real value estimation.
KFrank:
You might want to weight the two terms so they have comparable
size lest you preferentially optimize for accuracy in A at the
cost of B.
This is very true. Since the values ranges for A and B are much different I would need to go through this. I haven’t looked into how to weight the MSE loss function yet, but is it possible that you could give some guidance on this?
Lastly, I wanted to make batched sets of data. I found that Pytorch does this automatically through the Dataloader class via a parameter. I am confused about whether this Dataloader still needs the dataset parameter structured in the hierarchical fashion previously mentioned. If that's the case, would there be anything else to quickly batch data, or should I code my own function?
Thanks again,
Andrew
|
st84451
|
Hello Andrew!
First, a general comment:
We will be able to give you advice that is more
likely to be useful to you if you give us some
concrete detail about the problem you are working
on.
How big are your images? How many will you be
training on? What do they look like? What is
the typical distribution of your two attributes?
What is the conceptual meaning of your attributes?
I haven’t looked into how to weight the MSE loss function yet, but is it possible that you could give some guidance on this?
I don’t believe that nn.MSELoss has a built-in
way to include these relative weights. There are
a number of straightforward approaches to including
such weights.
Myself, I would just write my own loss function,
something like this:
import torch
# define weighted loss function
def wtSqErr (pred, targ, wts):
return (wts * (pred - targ)**2).mean()
# construct some sample data
# use a batch size of 10
# y_targ are the actuals, y_pred are the predictions
# which, for this example, are the actuals plus noise
y_targ = torch.tensor ([1000.0, 2.0]) * torch.randn (10, 2) + torch.tensor ([2000.0, 3.0])
y_targ
y_pred = y_targ + torch.tensor ([100.0, 0.15]) * torch.randn (10, 2)
y_pred.requires_grad = True
y_pred
# set up the weights for the loss
wtA = 1.0 / 1000.0**2
wtB = 1.0 / 2.0**2
wtAB = torch.tensor ([wtA, wtB])
wtAB
# calculate loss
loss = wtSqErr (y_pred, y_targ, wtAB)
loss
# show that autograd works
print (y_pred.grad)
loss.backward()
print (y_pred.grad)
Here is the output of the above script:
>>> import torch
>>>
>>> # define weighted loss function
...
>>> def wtSqErr (pred, targ, wts):
... return (wts * (pred - targ)**2).mean()
...
>>> # construct some sample data
... # use a batch size of 10
... # y_targ are the actuals, y_pred are the predictions
... # which, for this example, are the actuals plus noise
...
>>> y_targ = torch.tensor ([1000.0, 2.0]) * torch.randn (10, 2) + torch.tensor ([2000.0, 3.0])
>>> y_targ
tensor([[2.3612e+03, 2.4401e+00],
[2.2880e+03, 7.0144e+00],
[1.2435e+02, 4.6300e+00],
[3.7007e+03, 1.4845e+00],
[1.7911e+03, 2.0490e+00],
[2.6058e+03, 2.2381e+00],
[6.1270e+02, 2.1648e+00],
[6.9680e+02, 1.4656e+00],
[1.2903e+03, 2.8559e+00],
[1.6696e+03, 5.5197e+00]])
>>> y_pred = y_targ + torch.tensor ([100.0, 0.15]) * torch.randn (10, 2)
>>> y_pred.requires_grad = True
>>> y_pred
tensor([[2.6065e+03, 2.5329e+00],
[2.3034e+03, 7.2111e+00],
[2.9170e+02, 4.4378e+00],
[3.7426e+03, 1.4848e+00],
[1.8188e+03, 2.2676e+00],
[2.8676e+03, 2.3148e+00],
[5.6415e+02, 2.1441e+00],
[7.7348e+02, 1.4650e+00],
[1.2437e+03, 2.9639e+00],
[1.5545e+03, 5.5731e+00]], requires_grad=True)
>>>
>>> # set up the weights for the loss
...
>>> wtA = 1.0 / 1000.0**2
>>> wtB = 1.0 / 2.0**2
>>>
>>> wtAB = torch.tensor ([wtA, wtB])
>>> wtAB
tensor([1.0000e-06, 2.5000e-01])
>>>
>>> # calculate loss
...
>>> loss = wtSqErr (y_pred, y_targ, wtAB)
>>> loss
tensor(0.0111, grad_fn=<MeanBackward1>)
>>>
>>> # show that autograd works
...
>>> print (y_pred.grad)
None
>>> loss.backward()
>>> print (y_pred.grad)
tensor([[ 2.4533e-05, 2.3200e-03],
[ 1.5451e-06, 4.9169e-03],
[ 1.6735e-05, -4.8059e-03],
[ 4.1961e-06, 8.9884e-06],
[ 2.7656e-06, 5.4653e-03],
[ 2.6174e-05, 1.9158e-03],
[-4.8546e-06, -5.1618e-04],
[ 7.6680e-06, -1.5900e-05],
[-4.6640e-06, 2.7002e-03],
[-1.1507e-05, 1.3345e-03]])
Note that if you use pytorch tensors to do your
calculations, autograd will work for you without
your having to do anything special.
Lastly, I wanted to make batched sets of data.
Pytorch naturally works with batches. The first
index of your input data, predictions, and target
data tensors is the index that indexes over samples
in the batch.
In the above example, you can understand the
generated data to be a batch of 10 samples.
(In fact, pytorch loss functions require batches,
even if the batch size is only 1. Following the
above example, for batch-size = 1, a “batch” of,
say, predictions would then have a shape of
y_pred.shape = torch.Size ([1, 2]).)
If your training data fits in memory (We don’t
know – you’ve told us nothing concrete about
your problem.), you can read it all into one
tensor, and then use “indexing” or “slicing”
to get your batches.
import torch
all_data = torch.ones (10, 3)       # e.g. 10 samples with 3 features each
first_batch_of_two = all_data[0:2]  # rows 0 and 1
second_batch_of_two = all_data[2:4] # rows 2 and 3
Doing this does not create new tensors with their
own storage – it just sets up a view into the
existing all_data tensor.
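Iterating over such slices gives a simple manual batching
loop, for example:
import torch

all_data = torch.ones (10, 3)
batch_size = 2
for start in range(0, all_data.size(0), batch_size):
    batch = all_data[start:start + batch_size]  # still a view, not a copy
    print(batch.shape)  # torch.Size([2, 3])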
Good luck.
K. Frank
|
st84452
|
Hi K. Frank,
Thank you so much for the detailed response! I have a prototype somewhat working now from your help!
I am going to work on getting the loss down faster and get better prediction values sometime soon.
Thank you again,
Andrew Smith
|
st84453
|
hi there,
for some reason, I need to know the device that is expected by a Module,
is there a way to do that ?
|
st84454
|
That's a relative question. Errors happen in an operation that makes use of two tensors. Thus, from the point of view of tensor 1, the expected device would be tensor 2's, whereas from the point of view of tensor 2 it would be the other way around.
Don't think about "modules", because errors are thrown by tensors inside modules, not by the modules themselves.
The closest thing you can do, if you know which operation involves your module/tensor, is to query the analogous tensor's device via its tensor.device attribute (or the device of one of the other module's parameters).
|
st84455
|
Yes, sure, but I have to think about modules in my case; I'm handling them as black boxes
So I guess the answer is no and I must rely on the user to provide me with a device parameter
|
st84456
|
For modules with parameters, next(m.parameters()).device. Modules without parameters should take tensors on any device.
This will fail for modules with mixed devices, but you probably don’t expect those to work, do you?
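A small helper along those lines, falling back to buffers and then to CPU for modules that hold no tensors at all (the fallback choice is arbitrary, just a sketch):
import torch
import torch.nn as nn

def module_device(m: nn.Module) -> torch.device:
    # device of the first parameter, else the first buffer, else CPU
    try:
        return next(m.parameters()).device
    except StopIteration:
        pass
    try:
        return next(m.buffers()).device
    except StopIteration:
        return torch.device("cpu")

print(module_device(nn.Linear(3, 4)))  # cpu (or cuda:0 after .cuda())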
Best regards
Thomas
|
st84457
|
thanks,
it's clear that I could indeed get the device of some arbitrary parameter.
I guess the question becomes: is there some introspection trick to get the first parameter that will be involved in the forward function of a module, to then bring my tensor to that place.
(likewise, having the expected input shape would be cool, and is a related question)
anyways, for now I'm getting user instructions, at the cost of more parameters
|
st84458
|
For my current use case, I would like BatchNorm to behave as though it is in inference mode and not training (just BatchNorm and not the whole network).
I notice from Pytorch documentation that
track_running_stats: a boolean value that when set to ``True``, this
module tracks the running mean and variance, and when set to ``False``,
this module does not track such statistics and always uses batch
statistics in both training and eval modes. Default: ``True``
Now I was under the impression that BatchNorm at test/eval time would not compute anything on the fly, but would instead use the mean/variance stored during training. The track_running_stats flag's documentation says that when it is set to False, batch statistics are used even in test mode.
When I see this:
http://cs231n.stanford.edu/slides/2019/cs231n_2019_lecture07.pdf
it mentions that at test time BatchNorm can be fused because there is no separate calculation performed at test time. (from the slides quote: "during testing batchnorm becomes a linear operator! Can be fused with the previous fully-connected or conv layer "). So if I do this in Pytorch
def __init__(self):
.
.
.
self.bn = nn.BatchNorm2d(2, track_running_stats=False)
.
.
and after instantiating the net object, I do
net.bn.eval(),
will this calculate batch mean and variance when an input batch is applied, as the documentation suggests? How do I avoid calculating stats in test mode? Basically, how do I ensure that in eval mode it uses only previously stored stats and nothing from the current batch?
|
st84459
|
Arun_Vishwanathan:
Basically how to ensure that in eval mode, it uses only previous stats and does not use anything from the current batch?
Leave track_running_stats=True and set the batchnorm layer to eval().
This setup will update the running stats in train() mode and just use them (without updating) in eval() mode.
If you set track_running_stats=False, the batch statistics will always be used as explained in the docs.
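A quick toy example to see the difference (random data, default settings):
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(2)   # track_running_stats=True by default
x = torch.randn(4, 2, 3, 3)

bn.train()
out = bn(x)              # normalizes with batch stats and updates the running stats
print(bn.running_mean)   # no longer all zeros

bn.eval()
out = bn(x)              # normalizes with the stored running stats
print(bn.running_mean)   # unchanged by the eval-mode forward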
|
st84460
|
Thanks a lot for your response!
Actually, the reason I asked this question (which I should have included above) is because I was not observing what you mentioned. So let me put forth the example I used:
I have a simple model that looks like this:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3,2,kernel_size=3, bias=False)
self.bn = nn.BatchNorm2d(2, track_running_stats=True)
self.fc1 = nn.Linear(18, 10, bias=False)
def forward(self, x):
x1 = self.conv1(x)
print("conv output")
print(x1)
# amean = np.mean(x1.data.numpy(), axis=(0,2,3))
# print(amean)
# astd = np.std(x1.data.numpy(), axis=(0,2,3))
# print(astd)
# x11 = x1.data.numpy()
# for i in range(2):
# x11[:,i,:,:] = (x1.data.numpy()[:,i,:,:] - amean[i]) / astd[i]
# print(x11)
x2 = self.bn(x1)
print("bn output")
print(x2)
x3 = F.relu(x2)
print("relu1 output")
print(x3)
x4 = x3.view(x3.size(0), -1)
x5 = self.fc1(x4)
print("fc output")
print(x5)
x6 = F.relu(x5)
print("relu2 output")
print(x6)
return x6
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
torch.manual_seed(0)
net = Net()
net.bn.bias.requires_grad=False
net.bn.eval()
Now I pass in one training sample. If I check the output of bn from the forward function, this is what it prints:
bn output
tensor([[[[-1.6799, -0.0496, 0.2127],
[ 0.3922, 1.3428, 0.0598],
[ 0.3674, 0.3883, -1.0339]],
[[ 0.4344, 0.1697, -0.1216],
[ 0.1510, -0.0127, -0.1253],
[-0.0596, -0.1495, -0.2863]]]], grad_fn=<NativeBatchNormBackward>)
and output of the previous layer (conv) is:
tensor([[[[-0.0403, 0.0103, 0.0185],
[ 0.0240, 0.0535, 0.0137],
[ 0.0233, 0.0239, -0.0202]],
[[-0.1044, -0.1664, -0.2347],
[-0.1708, -0.2092, -0.2356],
[-0.2202, -0.2412, -0.2733]]]], grad_fn=<MkldnnConvolutionBackward>)
if eval did not calculate anything from the current batch, then the output of bn should not have been -1.6799 at the first index, for example. The -1.6799 is obtained when normalizing the output of conv. In the forward function, the commented-out code shows the formula used to obtain the same result. If I check the first index of x11, it is also -1.6799.
Could you please explain why I see this? If I replace eval() with train(), I get the same result. This is the one and only training sample used (a single batch). Since no previous mean and variance had been calculated (this is the first input the model has seen), I expected a different output if the current batch stats were not being used. But it looks like even in eval(), the current batch is used to produce the output.
Thanks!
|
st84461
|
Note that batchnorm layers have also affine parameters by default (affine=True).
While the weight and bias are initialized with ones and zeros, respectively, in the current master, the weight parameter was initialized with a uniform distribution up to PyTorch 1.1.0.
If you are not using a nightly build, you might add this to your code:
torch.manual_seed(0)
net = Net()
net.bn.bias.requires_grad=False
with torch.no_grad():
net.bn.weight.fill_(1.)
net.bn.eval()
|
st84462
|
Thanks.
But I think it is still doing the same thing. It is using the current batch in eval mode to calculate mean and variance.
So basically, while the weight is initialized differently now, it still uses the current batch to produce the bn output.
If you check the output of conv listed above and calculate the per channel mean and variance and do the following (based on my code above):
x11[:,0,:,:]*self.bn.weight.data.numpy()[0]
x11[:,1,:,:]*self.bn.weight.data.numpy()[1]
x11 is the normalized conv output from the applied sample input.
The result above will match the first index for example of the output of bn printed by Pytorch!
The issue is basically that the current batch is used to produce the output of bn in eval mode.
I would expect the default mean and variance to be used (if none have been stored yet), without including the current batch. But it uses this one batch to make the calculation.
|
st84463
|
Arun_Vishwanathan:
The issue is basically the current batch is used to make the output of bn in eval mode.
Could you check the output again?
Using your model and this code snippet:
torch.manual_seed(0)
net = Net()
net.bn.bias.requires_grad=False
net.bn.weight.requires_grad=False
with torch.no_grad():
net.bn.weight.fill_(1.)
net.bn.eval()
x = torch.randn(1, 3, 5, 5)
net(x)
yields the exactly same outputs:
conv output
tensor([[[[ 0.4132035971, -0.7129006982, 0.0829921290],
[ 0.5575969219, -0.5301585793, -0.2478107512],
[ 0.2830447555, -0.8774284720, -0.7162327170]],
[[-0.1680016071, -0.6251038313, -0.3634709716],
[ 0.5589009523, 0.4439742863, 0.1032543331],
[ 0.4814728498, 0.3313015699, -0.5421049595]]]],
grad_fn=<MkldnnConvolutionBackward>)
bn output
tensor([[[[ 0.4132015407, -0.7128971219, 0.0829917118],
[ 0.5575941205, -0.5301558971, -0.2478095144],
[ 0.2830433249, -0.8774240613, -0.7162291408]],
[[-0.1680007726, -0.6251006722, -0.3634691536],
[ 0.5588981509, 0.4439720511, 0.1032538190],
[ 0.4814704359, 0.3312999010, -0.5421022177]]]],
grad_fn=<NativeBatchNormBackward>)
...
|
st84464
|
Sure, I can take a look. What is the mean and variance by default for the calculation? Is it set to zero in this case by default?
|
st84465
|
running_mean is initialized as zeros, while running_var as ones (in 1.1.0 and the current master).
|
st84466
|
I just found the issue!!
In my code I additionally do,
x = torch.randn(1, 3, 5, 5)
torch.onnx.export(net, x, 'ayeonnx.onnx')
net(x)
to convert the model to onnx (for some use case) before executing net(x)
and this messes up the whole thing!
if I instead do
torch.onnx.export(copy.deepcopy(net), x, 'ayeonnx.onnx')
I get what you get. Is this not a bug, in that I had to deepcopy and that, without it, the net object got affected? Or is this expected?
|
st84467
|
Yeah, that’s probably the issue and might be considered a bug.
As you can see here, set_training changes the current training mode of the model to the passed argument mode.
By default torch.onnx.export uses training=False, which should be fine.
However, since you are not setting the complete model to eval, net.training will still return True:
net.bn.eval()
print(net.training)
> True
print(net.bn.training)
> False
While this is your desired use case, set_training only checks the training attribute of the parent model and sets the complete model to its “old” mode again:
torch.onnx.export(net, x, 'tmp.onnx')
print(net.training)
> True
print(net.bn.training)
> True
This will of course cause the next forward call to update the running statistics, so you should set the batchnorm layer to eval again after exporting the model using onnx.
The proper approach would maybe be to restore the training attribute for each submodule recursively, but I’m not sure if that’s an edge case.
Anyway, feel free to open an issue and link to this topic so that this can be discussed with the ONNX devs.
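In the meantime, a small workaround sketch based on the snippets above (net and x as defined earlier in this thread):
import copy
import torch

# option 1: export a deep copy so the original module's modes stay untouched
torch.onnx.export(copy.deepcopy(net), x, 'tmp.onnx')

# option 2: export the model itself, then re-apply the desired mode
torch.onnx.export(net, x, 'tmp.onnx')
net.bn.eval()  # put the batchnorm layer back into eval mode after exporting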
|