st84168 | also my question is in the context of padding not packing…so idk why packing is relevant…
https://pytorch.org/docs/stable/nn.html#pad-sequence
pad_sequence
torch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0)
Pad a list of variable length Tensors with padding_value
pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences with size L x *, the output is of size T x B x * if batch_first is False, and B x T x * otherwise.
B is batch size. It is equal to the number of elements in sequences. T is length of the longest sequence. L is length of the sequence. * is any number of trailing dimensions, including none. |
st84169 | Yeah, well, you linked pack_padded_sequence before.
There it is the same, except that the lengths (which are an input to pack_padded_sequence) are not given as a parameter but are implicit in [len(s) for s in sequences]. The total length is then the sum of the lengths parameter (sum(lengths)), or equivalently sum([len(s) for s in sequences]).
Best regards
Thomas |
st84170 | AFAIK this means that the resulting tensor can have an arbitrary number of dimensions with any shape after T and B dimensions.
For example, it could be a tensor of sizes:
torch.Size([T, B])
torch.Size([T, B, 10])
torch.Size([T, B, 25, 500])
and so on… |
st84171 | From the documentation:
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
Can be mutated to
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10)
>>> b = torch.ones(22, 300, 10)
>>> c = torch.ones(15, 300, 10)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10])
to
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10, 5)
>>> b = torch.ones(22, 300, 10, 5)
>>> c = torch.ones(15, 300, 10, 5)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10, 5])
to
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300, 10, 5, 7)
>>> b = torch.ones(22, 300, 10, 5, 7)
>>> c = torch.ones(15, 300, 10, 5, 7)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300, 10, 5, 7])
I think you get the rough idea? * replaces (300,) or (300, 10) or (300, 10, 5) or (300, 10, 5, 7).
Best regards
Thomas |
st84172 | Hi, I am trying to execute a Python script from PHP, but it doesn't work when the script uses PyTorch. Any hint or idea what could be the reason?
Thanks |
st84173 | I think that’s really a PHP question more than a PyTorch one, so this forum might not be the ideal venue…
That said, you might consider implementing a flask or similar server for your PyTorch model and connect to that from PHP.
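A minimal sketch of such a server (the route name, port, and model file are made up for illustration; PHP would then just POST JSON to it):
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.jit.load("model.pt")  # hypothetical TorchScript export of your model
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    x = torch.tensor(request.json["inputs"], dtype=torch.float32)
    with torch.no_grad():
        y = model(x)
    return jsonify(y.tolist())

app.run(port=5000)  # PHP can then call http://localhost:5000/predict, e.g. via curl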
Best regards
Thomas |
st84174 | Is there a reason why Pytorch doesn’t favour batch size first when organizing the data? Is there a reason for it to not be like that?
Is that the opposite of tensorflow or something? Perhaps there is a good reason idk… |
st84175 | Hi,
In pytorch, we do have batch-first scenarios. For example, if you process any dataset from torchvision, it is processed batch-first. You can change it. I added one small example below.
testset = torchvision.datasets.CIFAR10(root=data_path, train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
If you process the above testloader, you will get something as follows.
for i, data in enumerate(testloader, 0):
inputs, labels = data # inputs and labels are tensors with size 4x3x32x32 and 4
Here the batch is the first dimension, then the channels. At the end, we have height and width.
One more thing: batch-first and batch-last are just two different layouts of the same data. The desired representation can be achieved using torch.transpose.
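For example (shapes are made up):
x = torch.randn(4, 10, 32)             # batch-first: (batch, seq_len, features)
x_seq_first = x.transpose(0, 1)        # sequence-first: (seq_len, batch, features)
x_back = x_seq_first.transpose(0, 1)   # back to batch-first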
Thanks |
st84176 | I was going through the chatbot tutorial and saw the following:
pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True)
why does the tutorial (reverse) sort? Is there a good reason?
code:
# Returns all items for a given batch of pairs
def batch2TrainData(voc, pair_batch):
    '''
    '''
    pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True)
    # print([len(pair[0].split(" ")) for pair in pair_batch])
    # st()
    input_batch, output_batch = [], []
    # separate the pairs (for the current batch) into X and Y values
    for pair in pair_batch:
        input_batch.append(pair[0])
        output_batch.append(pair[1])
    # returns a matrix of (max_length, batch_size) corresponding to the feature vectors X
    inp, lengths = inputVar(input_batch, voc)
    output, mask, max_target_len = outputVar(output_batch, voc)
    return inp, lengths, output, mask, max_target_len |
st84177 | I just found out that the packing documentation says:
For unsorted sequences, use enforce_sorted = False. If enforce_sorted is True, the sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted = True is only necessary for ONNX export.
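As a small sanity check (shapes are made up), packing unsorted padded sequences then just looks like this:
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
seqs = [torch.randn(5, 4), torch.randn(3, 4), torch.randn(7, 4)]   # unsorted lengths
lengths = [len(s) for s in seqs]
padded = pad_sequence(seqs)                                        # (7, 3, 4)
packed = pack_padded_sequence(padded, lengths, enforce_sorted=False)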
so now the question is, if I should be sorting myself or if I should let pytorch do the sorting internally? Which one is the best or recommended or standard one?
docs:
https://pytorch.org/docs/stable/nn.html?highlight=pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence |
st84178 | So I decided to have the sorting happen in the data processing with the Dataset/DataLoader, because these allow num_workers > 0, which allows for parallelization and speed-ups since we can process many things at the same time.
Obviously I'd need to benchmark to check what is fastest, but that doesn't seem worth my time, so I'll just go with my gut; this seems like a fine solution.
Feel free to contribute a suggestion! |
st84179 | the only thing I'm not sure about is whether to use the sorting as suggested by the ChatBot tutorial (i.e. doing it in Python), or if there is some PyTorch way to do it that is better/more efficient.
For now I will sort in Python directly and not use Pytorch. |
st84180 | Hi guys, I have a problem when I load my model:
This is the code when I trained my model:
model = models.densenet121(pretrained = True)
train_on_gpu = torch.cuda.is_available()
for param in model.parameters():
    param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('c1', nn.Linear(1024, 740)),
    ('relu', nn.ReLU()),
    ('c2', nn.Linear(740, 400)),
    ('relu1', nn.ReLU()),
    ('c3', nn.Linear(400, 2)),
    ('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
…
The train code
…
torch.save(model.state_dict(), 'classificador-1.pth')
When I load the model:
state_dict = checkpoint['state_dict']
model = models.densenet121(pretrained = False)
model.load_state_dict(new_state_dict, strict=False)
model.fc = classifier
for parameter in model.parameters():
    parameter.requires_grad = False
model.eval()
But I get the wrong predicted class when I test the model,
because I only wanted two output classes but I'm getting more. |
st84181 | It seems you can load the saved model, but what is not clear is where the variable classifier is initialized before you use it when loading:
state_dict = checkpoint['state_dict']
model = models.densenet121(pretrained = False)
model.load_state_dict(new_state_dict, strict=False)
model.fc = **classifier**
for parameter in model.parameters():
    parameter.requires_grad = False
model.eval()
If the linear layer is not initialized/loaded with trained weights then the predictions will not be accurate. |
st84182 | When I check what the memory usage on my GPU is in training and validation iterations, I notice that something from the training iteration is not released during the validation iteration. Output of print statements:
...
training:
3940352
validation:
131654144
training:
3940352
validation:
131654144
...
For reference, my code looks something like this:
def iteration(step, batch, model, device, stats_dict, optim_schedule=None, criterion=None, train=True):
    if train:
        model.train()
    else:
        model.eval()
    batch = {key: value.to(device) for key, value in batch.items()}
    output = model.forward(batch['input_ids'])
    loss = criterion(output, batch['label'])
    if train:
        optim_schedule.zero_grad()
        loss.backward()
        optim_schedule.step_and_update_lr()
    # for debugging GPU memory issues:
    if train:
        print('training: ')
    else:
        print('validation: ')
    print(torch.cuda.memory_allocated())
    #################################
    # gather loss and correct prediction statistics for this data chunk
    #################################
    predicted = output.argmax(dim=-1).cpu()
    num_correct = (predicted == batch['label'].cpu()).sum().item()  # attempt at solving the bug by moving to CPU
    stats_dict['loss'] += loss.item()
    stats_dict['num_correct'] += num_correct
    stats_dict['nb_tr_examples'] += batch['input_ids'].size(0)
    stats_dict['nb_tr_steps'] += 1
    return stats_dict |
And in the main function:
def main():
    # ....
    # Accumulate loss for plotting training history
    train_hist = {'loss': [], 'acc': []}
    val_hist = {'loss': [], 'acc': []}
    train_stats = {'loss': 0,
                   'num_correct': 0,
                   'nb_tr_steps': 0,
                   'nb_tr_examples': 0,
                   }
    val_stats = {'loss': 0,
                 'num_correct': 0,
                 'nb_tr_steps': 0,
                 'nb_tr_examples': 0,
                 }
    #################################
    # Start looping through the epochs
    ################################
    for epoch in tqdm.trange(int(args.num_train_epochs), desc="Epoch"):
        for step, batch in enumerate(tqdm.tqdm(train_dataloader, desc="Iteration")):
            if args.train_dataset:
                train_stats = iteration(step, batch, model, device, train_stats, optim_schedule, criterion, train=True)
                if step % args.time_steps_per_plot_point == 0:
                    # let's add a new point to the array that will be plotted:
                    train_hist['loss'].append(train_stats['loss'] / train_stats['nb_tr_steps'])
                    train_hist['acc'].append(train_stats['num_correct'] / train_stats['nb_tr_examples'])
            if args.val_dataset:
                val_stats = iteration(step, batch, model, device, val_stats, optim_schedule, criterion, train=False)
                if step % args.time_steps_per_plot_point == 0:
                    val_hist['loss'].append(val_stats['loss'] / val_stats['nb_tr_steps'])
                    val_hist['acc'].append(val_stats['num_correct'] / val_stats['nb_tr_examples'])
I'm relatively new to PyTorch, and I'd appreciate any tips on how to debug this or on writing efficient PyTorch code. I've seen GPU memory not fully released after training loop, but if the stats_dict is what is causing the problem, what is the best way to still track performance for plotting? |
st84183 | Is there any reason to choose one vs the other? I noticed that everything can be done in the collate_fn which I thought was more conceptually nice since everything was in one place (specially for my complicated data processing custom class).
Or is there a real performance reason (i.e. runs faster) for choosing transforms vs collate_fn? What are all the advantages and disadvantages? |
st84184 | I'm working on an action recognition task. The paper I'm working from says that after feeding the images to VGG16, you add a projection layer, as in the image.
How can I implement the projection layer in PyTorch and feed it to a bidirectional recurrent neural network? |
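A rough sketch of such a pipeline (layer sizes and the choice of a GRU are assumptions for illustration, not taken from the paper) might look like this:
import torch
import torch.nn as nn
import torchvision.models as models

class VGGProjectionBiRNN(nn.Module):
    def __init__(self, proj_dim=512, hidden_dim=256):
        super().__init__()
        self.features = models.vgg16(pretrained=True).features   # conv backbone
        self.projection = nn.Linear(512 * 7 * 7, proj_dim)       # the "projection layer"
        self.rnn = nn.GRU(proj_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, frames):                     # frames: (B, T, 3, 224, 224)
        B, T = frames.shape[:2]
        f = self.features(frames.flatten(0, 1))    # (B*T, 512, 7, 7)
        f = self.projection(f.flatten(1))          # (B*T, proj_dim)
        out, _ = self.rnn(f.view(B, T, -1))        # (B, T, 2*hidden_dim)
        return out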
st84185 | I ran into a non-numpy-like behavior while trying to use a 2d boolean tensor as a mask. This code reproduces it:
a = torch.zeros((3, 3, 2))
mask = torch.zeros((3, 3), dtype=bool)
mask[(1, 2, 2), (0, 0, 1)] = 1
print(mask)
a[mask, 0] = 1
What I get is:
IndexError: The shape of the mask [3, 3] at index 1 does not match the shape of the indexed tensor [3, 2] at index 1
Basically, I cannot select which elements in my third dimension should be masked. The numpy version works fine on this one. |
st84186 | Hello, I'm getting the "TypeError: can't pickle X objects" error. As has been posted many times before, this is in fact because I am running torch.save(net, PATH) and not torch.save(net.state_dict(), PATH). However, my understanding is that state_dict() is just used to fill in values, so you have to initialize a network with the same architecture to load a state_dict() into it. My application modifies the architecture after initialization, so if that is how it works I cannot use state_dict for saving. Everything was working in PyTorch 0.2; I have now updated and am working with 0.4. I was wondering if there is any advice on how to track down what exactly is the problem with my object that can't be pickled, or possibly something that changed between versions that might cause this to no longer work.
If it helps my object is class myObject(torch.autograd.Function). It has a forward and backward and also fills in and keeps track of a few values.
Thanks! |
st84187 | I have the same problem. Have you solved it? If so, please help me out! Thanks! |
st84188 | I believe I eventually decided pytorch didn’t like the concept of pickling torch.autograd.functions. I ended up making a function that saved the variables from the class, deleted them all then pickled and upon load created new class objects and filled in the values. |
st84189 | Hello, I know that this might be trivial, but I've been struggling with it for quite some time. I have a tensor of shape [batch, num_nodes, num_nodes] and I would like to mask it so that along the second dimension I replace the biggest number with 1 and the rest with 0. So far I've come to this: mask = (mask == mask.max(dim=2, keepdim=True)[0]). It works, but I'm losing the gradients. Any help is welcome. |
st84190 | I have a module that outputs 2 losses, l_1 and l_2.
I only want the gradients from l_2 to flow if l_2 is negative, but I still need l_2 for later computations. How can I achieve this? |
st84191 | You could just add a condition before calling backward:
l_1.backward(retain_graph=True)
if l_2 < 0:
l_2.backward()
Would that work in your use case or is it more complicated? |
st84192 | thanks for the quick answer. Is there another way to code this self-contained in the nn.Module subclass? Because this would get very, very complicated programming-wise (I have not started the project with this option in mind and combine them early). I have a lot of these modules and a lot of these “l_2” losses. I want to disable the gradients for positive values for some modules and not for others, based on hyper-parameters. |
st84193 | Could you give a small example, how these l_2 losses are calculated in the modules and where/how you would like to disable them?
Maybe detaching the critical tensors based on the condition would work. |
st84194 | Okay, I have modules that compute the loss for the main task and the absolute log-determinant of the transformation (for example, the sum of the absolute logs of the diagonal elements for multiplication with a triangular matrix; for non-linear transformations the whole thing is more difficult). This is similar to normalizing flows, if you know this approach.
In my earlier layers, I don’t want the weights to optimize the absolute log-determinant because of numerical problems, I only want to avoid “low” log-determinants (< zero). But I don’t know how many layers to disable etc. or whether this would really solve my problem. So I have to experiment a lot.
All the log-determinants of the modules get combined via a sum and added to the overall loss. Since I have not started the problem with the approach in mind, I constantly combine the log-determinants to get better code-organization. |
st84195 | I essentially have something like this:
class CustomClass(nn.Module):
    def forward(self, input):
        # (do some calculations)
        return l_1, logdets
l_1 gets used immediately in the next layer, and I want to disable the gradient from logdets if it's greater than zero. |
st84196 | LeanderK:
I want to disable the gradient from logdets if it’s greater than zero.
If you would like to disable the gradient flowing back from logdets if it’s > 0, this code should probably work:
def forward(self, input):
    ...
    if logdets > 0:
        logdets = logdets.detach()
    return l_1, logdets |
st84197 | @ptrblck I revisited my problem. Your solution is unfortunately not working, because logdets is a tensor and I want to let the gradient flow depending on an element-wise condition. I thought about the torch.autograd.Function interface, but I am not sure whether it works, since it provides gradients with respect to the output, whereas I would need to manipulate the gradients flowing into the logdets computation. |
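One element-wise variant that might work (an assumption, not something confirmed in this thread) is to keep the values everywhere but route the gradient only through the entries that are not positive:
logdets = torch.where(logdets > 0, logdets.detach(), logdets)  # grad flows only where logdets <= 0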
st84198 | import torch
def batch_wise_index_select(input, dim, index):
    assert input.size(0) == index.size(0)
    out = []
    for i in range(input.size(0)):
        out.append(torch.index_select(input[i], dim=dim-1, index=index[i].view(-1)))
    return torch.stack(out, dim=0)
a = (torch.rand(2,2,5)*10).int()
b = (torch.rand(2,3)*3).long()
out = batch_wise_index_select(a, dim=2, index=b)
print(a)
print(b)
print(out)
Output
tensor([[[6, 4, 7, 4, 0],
[2, 3, 1, 4, 5]],
[[6, 2, 6, 6, 3],
[7, 4, 7, 2, 9]]], dtype=torch.int32)
tensor([[1, 1, 2],
[1, 1, 0]])
tensor([[[4, 4, 7],
[3, 3, 1]],
[[2, 2, 6],
[4, 4, 7]]], dtype=torch.int32)
Then how to accelerate batch_wise_index_select ? |
st84199 | Let’s say I have a classification model. And my job is to predict the correct class out of 30 different classes. The current accuracy is 70%.
The thing is: I have to consume another team’s classification result which is 80% accurate. So I’m using their prediction result as a feature. I’ll call it “golden feature”. Let’s say I’m aiming >80% accuracy with the golden feature.
Here is my current approach:
(I’m using Deep Learning.) I have several features and each feature has its own weight. I also create a weight vector for one hot vector (1 by 30) of “golden feature” and train all weights together. However the result doesn’t seem to provide much.
I thought about the reason why and realized that the learned vector (30 by n) won’t be that meaningful. They would be just positive numbers.
(Please yell at me if my reasoning is wrong!)
Has anyone faced the similar problem? Any suggestion will be highly appreciated!
The method that you suggest doesn’t have to be Deep Learning approach. |
st84200 | You could try to use the output of the penultimate layers, concatenate them, and train a final classifier. This would be similar to the vanilla fine tuning approach, but with multiple inputs.
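A rough sketch of that idea (names and feature sizes are made up): extract penultimate-layer features from both models, concatenate them, and train a small head on top.
import torch
import torch.nn as nn

class CombinedClassifier(nn.Module):
    def __init__(self, feat_dim_a, feat_dim_b, num_classes=30):
        super().__init__()
        self.fc = nn.Linear(feat_dim_a + feat_dim_b, num_classes)

    def forward(self, feats_a, feats_b):   # penultimate features of the two models
        return self.fc(torch.cat([feats_a, feats_b], dim=1))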
If this doesn’t provide much benefit, I would go for e.g. XGBoost and try to create a model ensemble out of your trained models. |
st84201 | I define some layer by myself, and I want to replace some layers in origin Net using my layer.
For Example:
Net define:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 3)
        self.conv3 = nn.Conv2d(16, 32, 3)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(32, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 32)
        x = self.fc1(x)
        return x
In this Net, F.relu is used. However, I want to replace the relu in this Net.
Therefore, if I print this Net() :
Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
(conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=32, out_features=10, bias=True)
)
in which I found that the relu operation does not appear.
So I want to know: if I want to modify the relu operation in Net (the Net definition cannot be rewritten), how can I deal with it?
Can jit.script help me with this process? |
st84202 | Since relu is used as a functional call, the cleanest way would be to override the class and reimplement the forward method with your new non-linearity.
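A minimal sketch of that suggestion, reusing the Net definition above (the choice of nn.LeakyReLU is just an arbitrary example replacement):
class NetWithNewActivation(Net):
    def __init__(self, act=None):
        super().__init__()
        self.act = act if act is not None else nn.LeakyReLU()   # any nn.Module activation

    def forward(self, x):
        x = self.pool(self.act(self.conv1(x)))
        x = self.pool(self.act(self.conv2(x)))
        x = self.pool(self.act(self.conv3(x)))
        x = x.view(-1, 32)
        return self.fc1(x)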
By doing so, you could also create the new activation function as an nn.Module, so that it can be replaced easily, if that’s your use case. |
st84203 | The situation (or rather the requirement) is that I get a model and Net definition from others, and I need to design a framework that replaces certain operations such as relu. So modifying the net definition by hand is not realistic for me.
Is there any other approach? |
st84204 | You could try to override the F.relu directly with your desired method, but I would consider this quite a hack, since this will invisibly redefine this method. |
st84205 | Hi there!!
I have a problem in my code with the torch.dot product:
ypo=torch.tensor([[4,6,7],[4,6,7],[4,6,7]],dtype=torch.float)
yo=torch.tensor([4,6,7],dtype=torch.float)
xyz=torch.dot(yo,ypo)
this gives the error:
dot: Expected 1-D argument tensor, but got 2-D
Please help me… |
st84206 | an99:
ypo=torch.tensor([[4,6,7],[4,6,7],[4,6,7]],dtype=torch.float) yo=torch.tensor([4,6,7],dtype=torch.float) xyz=torch.dot(yo,ypo)
Hi @tom,
Just curious, there is another function torch.mm which can do the same multiplication. Which one is the updated one?
Thanks |
st84207 | Hi all,
I am currently trying to train a network for a regression task, basically I need to map a variable-length sequence (say a spectrogram) to a fixed size vector. My baseline model was a GRU (many-to-one) which mapped the sequence to a vector, and then fed it to a Linear layer for the output, this didn’t go very well. Which led me to believe that perhaps I should shrink the spectrogram before handing it over to the GRU. This actually helped a lot.
My problem is, with the baseline GRU model I just used PackedSequence because I knew the original lengths of the sequences (which are needed to create a PackedSequence). But now, after the sequences first pass through a conv2d layer, I "lose" that information and no longer know the lengths of the sequences. So my question is - what is the common practice in such cases? I am open to other architectures as well btw.
Thanks,
Felix. |
st84208 | Maybe I’m misunderstanding something, but why would passing a sequence through a convolutional layer make you “lose” information about the length of that sequence? With a given series of convolutions, you should be able to quite easily figure out the output shape given the input shape? |
st84209 | So let’s say I have sequences of different length, I need to zero-pad them all to the same length to work in batches. So for each sequence I “remember” its length. Let’s say for example I have a sequence tensor of 300 x 100, and I “remember” that its actual length is 230, meaning there are 70 zero padded time frames. Now, after I pass that through conv+pool this shrinks to a new size, the old length of 230 is not longer relevant. Did I manage to explain myself? I feel like maybe this is a bit tricky, or I’m missing something. |
st84210 | Hey @Felix_Kreuk,
Did you manage to find a solution to this, I am in the exact same position now. |
st84211 | I can’t find an example anywhere. Additionally I can’t find out what exactly optimizer.param_groups returns, so I don’t know whether I can filter by name to change the learning rate. |
st84212 | The param groups are dictionaries with the state information. If you only have one param group (this is the case if you just initialized the optimizer with model.parameters()) then optimizer.param_groups[0]['lr'] is the learning rate.
Parameter groups are useful e.g. when you want different learning rates for different parameters. For example fast.ai advocates that for finetuning.
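For example (a sketch; the features/classifier submodule names are assumptions about your model):
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
optimizer.param_groups[0]['lr'] = 0.01   # single group: just overwrite the lr

optimizer = torch.optim.SGD([            # two groups with different learning rates
    {'params': model.features.parameters(), 'lr': 1e-4},
    {'params': model.classifier.parameters(), 'lr': 1e-3},
])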
Best regards
Thomas |
st84213 | Hello so I have to use BCHW format for my dqn and I have it working with 1 frame, and it’s a “grayscale” image, so no rgb values. I just have:
state = getState()  # -> 50x50 matrix
state = state[None, None, :, :]
However, I want to try stacking the frames and I’m not sure how I would go about doing that. So would I need a new array to store 4 bchw format “states” or could I give the number of layers as the channel in one bchw format state?
Also, would stacking affect anything else in the code that I should be aware of, like in the network? |
st84214 | I initially tried pip3 install torch, but when I import torch in a Jupyter notebook it said that no module was found. I then used conda to install it and the import worked. Does anyone know why pip would fail for any specific reason? |
st84215 | There are detailed pip commands on the Get started page, maybe the download.pytorch.org-hosted versions for CUDA 10 or CPU work better. I always install PyTorch via pip3.
Best regards
Thomas |
st84216 | Hello all, I created a simple network where a convolutional layers weight matrix is altered by a custom function.
I came up with this :
class snet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 2, 1, 0)
        shape = self.conv1.weight.shape
        self.var1 = nn.Parameter(torch.ones(shape))
        self.var2 = nn.Parameter(torch.ones(shape))
        self.conv2 = nn.Conv2d(6, 6, 5, 1, 0)
        self.fc = nn.Linear(6*11*11, num_classes)

    def some_method(self):
        """
        Suppose this is a custom method, tasked with
        producing values for each entry in the weight matrix
        of a convolutional layer. for simplicity we used
        addition here
        """
        return self.var1 + self.var2

    def forward(self, input):
        self.conv1.weight = nn.Parameter(self.some_method())
        output = self.conv1(input)
        # or using the functional api and using the weight matrix directly is no different
        # output = F.conv2d(input, self.some_method())
        output = self.conv2(output)
        output = output.view(input.size(0), -1)
        output = self.fc(output)
        return output

n = snet(num_classes=3)
fake_dataset = torchvision.datasets.FakeData(100,
                                             image_size=(3, 16, 16),
                                             num_classes=3,
                                             transform=transforms.ToTensor())
fake_dataloader = torch.utils.data.DataLoader(fake_dataset,
                                              batch_size=20)
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(n.parameters(), lr=0.01)

for imgs, labels in fake_dataloader:
    p = n(imgs)
    loss = criterion(p, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(loss.item())
Apparently this is wrong, as nothing happens! The parameters are added to the module and they show up in the parameters list; however, the gradient is always zero!
I noticed the grad_fn property of both variables/parameters is None, whereas it should have been the addition, right?
Based on the autograd tutorial, when one variable in an operation has requires_grad = True, the output will also have requires_grad = True, and thus the gradient should flow back to those with requires_grad set to True.
Since nn.Parameter() sets this property implicitly to True, this should work, yet it does not!
whats wrong here? what am I missing here?
Any help is greatly appreciated. |
st84217 | Solved by Krish in post #8. |
st84218 | var1 and var2 have no grad_fn because you are not performing any operation that changes their values.
I believe that the conv1 weight doesn’t have a grad_fn because some_function() (or any python function) returns the value rather than the reference. So although the value gets updated, the lack of reference means Pytorch can’t know what changed the value. |
st84219 | Thanks a lot for your response, but I thought that since they are being used in an operation, they must have a gradient in any case, right?
If not, how am I supposed to introduce dependent variables?
For example in my case, what should I be doing ? I’m really lost here! |
st84220 | They do take part in an operation but Pytorch doesn’t know what operation it is, because it just gets the value from the function.
What you can try in this case is to perform the computation of some_method inside the init or forward method.
I will try this out and let you know if that worked. |
st84221 | nn.Parameter doesn’t seem to transfer any kind of grad information. One workaround that I could get to work was to use the torch.nn.functional API and passing the kernel through there.
def some_function(self):
    self.kernel = self.var1 + self.var2

def forward(self, x):
    self.some_function()
    output = F.conv2d(x, self.kernel, padding=0, stride=1)
Here grad_fn works as usual.
You do have to figure out the kernel size by yourself though. |
st84222 | Thanks, but this only acts as a one-time initialization; it's as if I write:
def __init__(self):
    super().__init__()
    self.conv1 = nn.Conv2d(3, 6, 3)
    shape = self.conv1.weight.shape
    self.conv1.weight = nn.Parameter(torch.rand(shape)) + nn.Parameter(torch.rand(shape))
and use self.conv in the forward pass!
The difference is that here the values of the self.kernel will get optimized, but what I’m after is that I want the values of self.var1 and self.var2 to get changed so that their interactions result in a set of filters/kernels ( the resulting weight matrix) that minimize the loss.
In other words, in your self.kernel case, there are n entries that get learned, but in my case, there are 2x more variables that are used to create a kernel.
The method call is necessary since, each time its called in the forward pass, it runs the underlying operation that creates a new set of filters, then these filters are used, some losses will be produced and ultimately a set of gradients will be created, I want these gradients to update not the filters values themselves, but the self.var1 and self.var2 values that create the filters values |
st84223 | Had a missing line of code above; it should be:
def forward(self, x):
    self.some_function()
    ...
The values of kernel won’t be optimised directly, instead the optimizer will optimise values of var1 and var2 only as they are the parameters.
You can check that out by tracing the grad_fn backwards (tedious process, although). |
st84224 | Shisho_Sama:
in your self.kernel case, there are n entries that get learned, but in my case, there are 2x more variables that are used to create a kernel.
I don’t clearly understand what you are saying. Can you explain it? |
st84225 | Thanks, That actually worked Thanks a gazillion times sir
Also I noticed there is no need to use a class attribute for this and simply returning the result like this would also work :
def some_method(self):
    """
    Suppose this is a custom method, tasked with
    producing values for each entry in the weight matrix
    of a convolutional layer. for simplicity we used
    addition here
    """
    result = self.var1 + self.var2
    return result

def forward(self, inputs):
    weight = self.some_method()
    output = F.conv2d(inputs, weight)
    print('var1 grad', self.var1.grad)
    print('weight grad', weight.grad)
    return output
When running this the self.var1.grad actually has values!
The catch here seems to be that this only works when using the functional form! (Why is that? I'd really like to know why the first version wouldn't work.)
var1 grad None
weight grad None
var1 grad tensor([[[[ 0.0295, 0.0308],
[ 0.0284, 0.0288]],
[[ 0.0157, 0.0115],
[ 0.0275, 0.0370]],
[[ 0.0231, 0.0077],
[ 0.0269, 0.0438]]],
[[[ 0.0812, 0.0750],
[ 0.0818, 0.0876]],
[[ 0.0944, 0.0916],
[ 0.0720, 0.0938]],
[[ 0.0702, 0.0918],
[ 0.0966, 0.0842]]],
[[[-0.0011, 0.0065],
[ 0.0024, 0.0097]],
[[ 0.0100, -0.0002],
[ 0.0133, 0.0061]],
[[ 0.0084, 0.0118],
[ 0.0030, 0.0073]]],
[[[ 0.0204, 0.0375],
[ 0.0376, 0.0324]],
[[ 0.0149, 0.0122],
[ 0.0230, 0.0258]],
[[ 0.0214, 0.0505],
[ 0.0280, 0.0071]]],
[[[-0.0798, -0.0690],
[-0.0813, -0.0650]],
[[-0.0819, -0.0741],
[-0.0805, -0.0698]],
[[-0.0827, -0.0689],
[-0.0654, -0.0692]]],
[[[ 0.0201, 0.0045],
[ 0.0195, 0.0160]],
[[ 0.0128, 0.0298],
[ 0.0173, 0.0195]],
[[ 0.0271, 0.0173],
[ 0.0270, 0.0049]]]])
weight grad None
var1 grad tensor([[[[ 0.0686, 0.0713],
[ 0.0695, 0.0971]],
[[ 0.0705, 0.0810],
[ 0.0792, 0.0667]],
[[ 0.0934, 0.0621],
[ 0.0784, 0.0773]]],
[[[ 0.4964, 0.4827],
[ 0.4911, 0.4820]],
[[ 0.5122, 0.5005],
[ 0.5119, 0.4927]],
[[ 0.4984, 0.5238],
[ 0.4832, 0.5231]]],
[[[-0.2394, -0.2491],
[-0.2389, -0.2550]],
[[-0.2355, -0.2570],
[-0.2466, -0.2347]],
[[-0.2434, -0.2351],
[-0.2315, -0.2475]]],
[[[ 0.1815, 0.1805],
[ 0.2223, 0.2162]],
[[ 0.2135, 0.2039],
[ 0.1949, 0.2067]],
[[ 0.2072, 0.2130],
[ 0.2181, 0.1701]]],
[[[-0.1945, -0.1760],
[-0.2120, -0.2003]],
[[-0.1972, -0.1877],
[-0.2123, -0.1866]],
[[-0.1686, -0.2197],
[-0.1895, -0.2136]]],
[[[ 0.0689, 0.0675],
[ 0.0926, 0.0759]],
[[ 0.0856, 0.0696],
[ 0.0806, 0.0503]],
[[ 0.0815, 0.0521],
[ 0.0534, 0.0843]]]])
...
The weird thing is that although we have gradients, yet grad_fn is None for both self.var1 and self.var2
and concerning :
I don’t clearly understand what you are saying. Can you explain it?
for the sake of simplicity, lets assume we a kernel of size 2x2 .
this simply means we have 4 variables (separate values) that together form a kernel we know and use.
in the forward pass, what happens is that these 4 variables are used in different operations (multiplication, sum , etc) and then in the backward pass, each of them get a gradient with respect to how they contributed to the end loss.
Now, when I said, we have two variables for each entry, I simply meant suppose we don’t have this kernel of 2x2 (4 variables), instead suppose we have an empty 2x2 grid and we need to fill its entries.
in the first case, a variable is allocated to each entry. thus a grid of 2x2 has 4 variables that each represent a single entry! (the entry they occupy !). now suppose we want to fill this grid not with one variable but 2. meaning 2 variables with be used to create a value for a single entry in our grid.
2 variables per entry. In other words, suppose we have a grid1 of 2x2 and a grid2 of 2x2, and we use these two grids (for example by adding them together element-wise) to get the values for our empty grid, i.e. the kernel values. Here this very kernel is not treated as our variables; it is simply a matrix of values. The real variables are grid1 and grid2 respectively. I hope this makes it a bit more clear! Although I got what I was after thanks to you
By the way, What puzzles me now is that we have gradients for our vars, yet their grad_fn attribute is None!!?
Why is that? |
st84226 | Shisho_Sama:
we have gradients for our var s, yet their grad_fn attribute is None!!?
Why is that?
As I said earlier, a variable has a grad_fn only when an operation changes its value (except when the code is wrapped in with torch.no_grad():). Here no operation changes the value of var1 or var2.
Something you can try to understand is to run this code:
x = torch.ones(2).requires_grad_(True)
y = x + 2
print (y.grad_fn, x.grad_fn)
The output should be: AddBackward None
The gradients store how much the variable affects the loss. You can see it for yourself by running along with the above code:
y.sum().backward()
print(x.grad) |
st84227 | Thanks a lot again got it this time
Now my only question that still persists, is why it doesn’t work on setting the conv layer weights directly using the non-functional form? |
st84228 | Hi all,
I am wondering if there is a way to set the learning rate each epoch to a custom value.
for instance, in MatConvNet you can specify the learning rate as LR_SCHEDULE = np.logspace(-3, -5, 120) to have it change from .001 to .00001 over 120 training epochs.
is there something similar I can do in Pytorch?
my first idea is to define the following function and then re-define the optimizer each epoch
def scheduler(optimizer, lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    return optimizer
so then
for epoch in range(EPOCHS):
    lr = LR_SCHEDULE[epoch]
    optimizer = scheduler(optimizer, lr)
could this work?
thanks |
st84229 | torch.optim.lr_scheduler is basically doing the same update step as your scheduler code (besides some other checks).
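For example, your np.logspace schedule could be expressed with LambdaLR like this (a sketch; model and EPOCHS are placeholders):
import numpy as np

LR_SCHEDULE = np.logspace(-3, -5, 120)
optimizer = torch.optim.SGD(model.parameters(), lr=LR_SCHEDULE[0])
# LambdaLR multiplies the initial lr by the factor returned for the current epoch
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda epoch: LR_SCHEDULE[min(epoch, len(LR_SCHEDULE) - 1)] / LR_SCHEDULE[0])

for epoch in range(EPOCHS):
    # ... train for one epoch ...
    scheduler.step()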
Have a look at the implemented lr_schedulers to avoid rewriting them. |
st84230 | Thank you, I am aware of the torch.optim.lr_scheduler, I was more looking for something I can customize if needed instead of using the implemented versions |
st84231 | torch.optim.lr_scheduler seems to adjust the LR in a "relative" fashion, seeing that its get_lr() method does not take any argument. However, in most of my cases I wish to do it in an "absolute" fashion that sets the LR given the current epoch and current iteration (yes, I adjust the LR each iteration). In this case, such a wheel as mentioned above actually works great for me. |
st84232 | For me, this is a quite interesting problem. I ran the same codes but got different results on CPU and GPU. To be specific, the following codes can run on CPU without any error raised. However, when running on GPU, the following error is raised:
RuntimeError: rnn: hx is not contiguous
The codes of my class are given as follows:
class EncoderLSTM(nn.Module):
    def __init__(self, voc_size, hidden_size=HIDDEN_SIZE, max_length=MAX_LENGTH+2):
        super(EncoderLSTM, self).__init__()
        self.hidden_size = hidden_size
        self.max_length = max_length
        self.memorising = nn.Embedding(voc_size, self.hidden_size)
        self.attn = Attn(hidden_size)
        self.dropout = nn.Dropout(DROPOUT_RATIO)
        self.lstm = nn.LSTM(hidden_size, hidden_size)
        self.init_hidden = self.init_hidden_and_cell()
        self.init_cell = self.init_hidden_and_cell()

    def forward(self, embedded_input_var):
        batch_size = embedded_input_var.shape[0]
        # Initialise the initial hidden and cell states for encoder
        last_hidden = self.init_hidden.expand(-1, batch_size, -1)
        last_cell = self.init_cell.expand(-1, batch_size, -1)
        # Forward pass through LSTM
        for t in range(NUM_WORD):
            # Calculate attention weights from the current LSTM input
            attn_weights = self.attn(last_hidden, embedded_input_var)
            # Calculate the attention weighted representation
            r = attn_weights.bmm(embedded_input_var).transpose(0, 1)
            # Forward through unidirectional LSTM
            lstm_output, (lstm_hidden, lstm_cell) = self.lstm(r, (last_hidden, last_cell))
        # Return hidden and cell state of LSTM
        return lstm_hidden, lstm_cell

    def init_hidden_and_cell(self):
        return nn.Parameter(torch.zeros(1, 1, self.hidden_size, device=DEVICE))
I guess the cause of this error is that the variable last_hidden in line 19 is not contiguous. However, what I don’t understand is that the different results on CPU and GPU.
Why wouldn't running on CPU raise this error? |
st84233 | No expert, but I suppose CPU and GPU handle memory very differently. Read this for a good explanation of contiguity.
Anyway, try using the .contiguous() method on the tensor which needs it. |
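In the EncoderLSTM.forward above, that would presumably mean something like this (a guess at the exact spot; expand returns a non-contiguous view):
last_hidden = self.init_hidden.expand(-1, batch_size, -1).contiguous()
last_cell = self.init_cell.expand(-1, batch_size, -1).contiguous()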
st84234 | Hi,
I'm trying to train on my own data. I generated a COCO-like JSON with my training and validation sets and am trying to run it with the default argument values, except --images, which I'm excluding since the data is from multiple repos.
Something is loading too many batches into memory and I get:
RuntimeError: CUDA out of memory. Tried to allocate 348.35 GiB (GPU 0; 7.93 GiB total capacity; 591.43 MiB already allocated; 6.54 GiB free; 26.57 MiB cached)
How do I configure it to adjust the number of batches?
Thanks |
st84235 | Could you post a link to the repo you are using?
Most likely there is a flag like --batch_size or -b where you can adjust the batch size.
Alternatively, have a look at the DataLoader initialization in the code and change the batch_size argument there. |
st84236 | Ok.
I have a config.py file with:
annotations = "/input/train/annotations.json"
val_annotations = "/input/valid/annotations.json"
backbone = 'ResNet50FPN'
classes = 3
model = "retinanet_rn50fpn.pth"
fine_tune = "/media/Data/ObjectDetector/resnet50-19c8e357.pth"
iters = 100
val_iters = 10
lr = 0.0001
resize = 516
batch = 1
max_size = 516
Then in UseExample.py:
import retinanet.config_train as config
import retinanet.main as main

args = [
    "train", config.model,
    '--backbone', config.backbone,
    '--annotations', config.annotations,
    # '--val-annotations', config.val_annotations,
    '--classes', str(config.classes),
    '--resize', str(config.resize),
    '--batch', str(config.batch),
    '--max-size', str(config.max_size),
    '--iters', str(config.iters),
    '--lr', str(config.lr),
]
main.main(args) |
st84237 | Do you see this OOM error only using your custom dataset or also the original one?
If this error is raised only for your custom dataset, are you using the same image resolution or are you working with bigger images? In the latter case, could you resize your custom images to the same size and try to run the code again?
Since the batch size is already set to 1, you cannot lower is further and would need to save some memory in another part of your training. |
st84238 | My mistake:
My COCO-like JSON metadata file was messed up. Now it works wonderfully.
Sorry for bothering you |
st84239 | The pytorch docs for the groups parameter of nn.Conv2d state that:
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
At groups= in_channels, each input channel is convolved with its own set of filters, of size: in_channels / out_channels
However, this description seems inconsistent with the behaviour of nn.Conv2d in reality.
for example:
import torch
import torch.nn as nn
conv_layer = nn.Conv2d(16, 16, 1, groups=2, bias=False)
conv_layer.weight.shape
Returns torch.Size([16, 8, 1, 1])
But based on my interpretation of :
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
Shouldn’t there be two weights, each of size [8, 8, 1, 1]?
These inconsistencies carry over to other values for the groups parameter, in my view.
I must be missing something - could someone please clarify? |
st84240 | liam.schoneveld:
Shouldn’t there be two weights, each of size [8, 8, 1, 1] ?
There are two groups of this size concatenated in dim0.
Have a look at this small example:
x = torch.randn(2, 16, 5, 5)  # example input added for completeness; any spatial size works
conv_grouped = nn.Conv2d(16, 16, 1, groups=2, bias=False)
output_grouped = conv_grouped(x)
output_grouped.shape

conv1 = nn.Conv2d(8, 8, 1, bias=False)
conv2 = nn.Conv2d(8, 8, 1, bias=False)
with torch.no_grad():
    conv1.weight.copy_(conv_grouped.weight[:8])
    conv2.weight.copy_(conv_grouped.weight[8:])

output1 = conv1(x[:, :8])
output2 = conv2(x[:, 8:])
output_manual = torch.cat((output1, output2), dim=1)
print((output_manual == output_grouped).all())
> tensor(1, dtype=torch.uint8)
As you can see, the first half of the conv_grouped weights will be applied to the first half of the channels in x and the second half of the weights to the second half of the channels. |
st84241 | I’ve been having some trouble with my neural net, as it’s outputting all negatives for all 200k test set data points. I know that this isn’t possible with ReLU activation, which is why I’m so concerned.
Here’s my neural net class:
class Net(nn.Module):
    def __init__(self, *, dims: dict):
        super(Net, self).__init__()
        # Layer 1
        self.fc_1 = nn.Linear(in_features=dims['input_dim'], out_features=dims['layer_1'])
        self.actv_1 = nn.ReLU()
        # Layer 2
        self.fc_2 = nn.Linear(in_features=dims['layer_1'], out_features=dims['layer_2'])
        self.actv_2 = nn.ReLU()
        self.bn_2 = nn.BatchNorm1d(num_features=dims['layer_2'])
        # Layer 3
        self.fc_3 = nn.Linear(in_features=dims['layer_2'], out_features=dims['layer_3'])
        self.actv_3 = nn.ReLU()
        self.bn_3 = nn.BatchNorm1d(num_features=dims['layer_3'])
        # initialize
        xavier_normal_(self.fc_1.weight)
        xavier_normal_(self.fc_2.weight)
        xavier_normal_(self.fc_3.weight)
        self.fc_output = nn.Linear(in_features=dims['layer_3'], out_features=dims['output_dim'])

    def forward(self, x):
        x = self.actv_1(self.fc_1(x))
        x = self.actv_2(self.bn_2(self.fc_2(x)))
        x = self.actv_3(self.bn_3(self.fc_3(x)))
        return self.fc_output(x)
Is this something wrong with my weight initialization? If so, what other options should I use?
Thanks in advance! |
st84242 | Since the last layer (self.fc_output) doesn’t use any activation function, its output values are unbound and can take negative as well as positive values.
If you would like to clip the output to a certain range, you could apply relu again on the last layer or use .clamp(min=0.0). |
st84243 | Thanks for your response!
I’m also having another issue though - after applying ReLU to the output layer, I’m still getting all 0 values (as expected) as all the predictions from nn.Linear have been negative. Thus my confusion matrix has exclusively true negatives and false negatives, and no real predictions.
Is there something wrong with my model? It doesn’t look like there’s anything wrong but I’m not sure.
Thanks for your time! |
st84244 | Hey guys,
I was implementing a GAN network from online (following this GitHub repo: https://github.com/AaronLeong/BigGAN-pytorch).
The test file is missing so I wrote it myself. The code ran successfully, but the result doesn't look like it was generated from the learnt weights; it looks like it was generated from the initial random noise. The code is as follows:
model = Generator(z_dim,n_class,chn)
checkpoint = torch.load('./model/faces/241280_G.pth', map_location = str(device))
model.load_state_dict(checkpoint,strict=False)
model.to(device)
model.float()
model.eval()
def label_sampel():
    label = torch.LongTensor(batch_size, 1).random_() % n_class
    one_hot = torch.zeros(batch_size, n_class).scatter_(1, label, 1)
    print(device)
    return label.squeeze(1).to(device), one_hot.to(device)
z = torch.randn(batch_size, z_dim).to(device)
z_class, z_class_one_hot = label_sampel()
fake_images = model(z, z_class_one_hot)
save_image(denorm(fake_images.data), os.path.join(path, '1_generated.png'))
These are face image samples that were generated during the training phase.
42900_fake.jpg1042×522 263 KB
This one is the image that I tried to generate from the trained generator by loading the generator's weights file.
Can someone help with it?
Thanks in advance. |
st84245 | Did the training run generate the first picture?
If so, could you store this random input tensor for the sake of debugging, and use it for the test case? |
st84246 | Hey!
Yeah, the first picture is a sample generated during training with batch size 32.
As you suggested, I reran the training and stored the input, i.e. the random noise data corresponding to that iteration.
It worked when I redid everything.
And then I figured out the weight file was actually loaded. The inputs were fine too. But when the input went through the model built with those weights, it didn't generate a face-like image, so I figured something was wrong with the weight file.
What I did before was copy the weight file from a locked folder (you have to use sudo to manipulate the folder) to another normal folder and load the weight file from there. I guess something got lost while I moved the weight file. It works if I keep the weights in the original folder that was created during initialization. |
st84247 | Hey, I think I’m wrong.
Actually, when I did it again, the training had finished; the first time, the training was interrupted and did not complete.
I still don’t know exactly why is that. The code used torch.nn.DataParallel for both Generator and Discriminator during training. And I loaded the weights to gpu while testing.
How do I check if weights are loaded? |
st84248 | In an ideal case I prefer enumerating over dataloader with a for loop.
But for a particular implementation, I need multiple batches within one iteration in a sequence.
To do this I use, next(iter(dataloader)) to get as many batches as I need within one iteration.
The issue is, this causes linear increase in RAM over iterations and epochs. Somehow, the memory is not cleared after every iteration.
Any way I can avoid using iter and get multiple batches within one iteration? |
st84249 | Sorry for the confusion. I am storing the iterator and using next() in every iteration to get the batch.
Both lead to memory leak. |
st84250 | Found the problem that was causing the memory leak: the code I was using also uses itertools.cycle() to iterate over another dataloader, and that was what was leaking.
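For reference, the pattern that does not leak looks roughly like this (the second loader name is made up):
second_iter = iter(second_dataloader)          # persistent iterator instead of itertools.cycle
for step, batch in enumerate(train_dataloader):
    try:
        extra_batch = next(second_iter)
    except StopIteration:                      # restart the iterator once it is exhausted
        second_iter = iter(second_dataloader)
        extra_batch = next(second_iter)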
Using iter(dataloader) does not cause any memory leak. |
st84251 | So,
>>> net = nn.Conv2d(3, 3, 1, 1, 0)
takes
>>> sys.getsizeof(net.weight.data)
72
which is in bytes.
But,
>>> torch.save(net.state_dict(), './net.pth')
>>> !du -h ./net.pth
4.0K ./net.pth
which if converted to bytes
>>> 4 * 1000
4000
is 4000 bytes!!!
Please, tell me what I am missing and why there is such a massive increase in size when it is stored in a file?
P.S. Its quite obvious I am missing something obvious, just point to some article or something, or maybe in the PyTorch Codebase where this is implemented. |
st84252 | Solved by albanD in post #3. |
st84253 | Okay, My second line is wrong.
>>> sys.getsizeof(net.weight.data)
72
getsizeof can’t handle torch.Tensor
it returns 72 in case of empty tensors also!
So, I did this
>>> t1 = torch.Tensor([5.])
>>> sys.getsizeof(t1.item())
24
Okay, looks good.
But in that case the model will contain
>>> 24 * torch.numel(net.weight.data)
216
I am closer to 4000 bytes! But still far away!
My guess is that each parameter uses far more bytes than its showing up.
I am doing something wrong most likely. |
st84254 | Hi,
Few things I think:
du -h actually counts the size of the folder as well. Running “ls -lha” will show you that the empty folder takes 4K, not the .pth file which is only 515bytes.
sys.getsizeof() measure the size of the Python object. So it is very unreliable for most pytorch elements.
The storage format is not really optimized for space. You can actually “cat” the file and you’ll see that it contains more strings than actual data.
Using different pickle backend might improve that but I have never tried (pickle_module argument to the save function) |
st84255 | Okay, I didn't know that about du -h. Now that you say so, I checked it through ls -lha and the GUI, and the file shows as 5xx bytes, which is good. Thanks!
Maybe I will try changing the pickle_module once and play around with it a bit.
My main aim was to calculate the size of a model, and from that the amount of memory it will take when I move it to the GPU.
It should take around the same memory, right? (i.e. the size of the model as calculated from its datatypes should roughly match the memory it consumes on the GPU once I move it there?) Intuitively it should.
Please feel free to reply when free, I have no hurries.
Thanks a ton for getting back to me Really Appreciated |
st84256 | Hi,
The size it will take on gpu is the size of all the tensors. So the following will give you the memory in Bytes:
size = 0
for p in model.parameters():
    size += p.nelement() * p.element_size() |
st84257 | I am just starting with PyTorch and I can’t seem to figure out how to output an array of 6 binary values or probabilities that do not sum to 1. All of my training data has labels such as [0, 1, 1, 0, 1, 1] for a single entry representing the presence of certain chemical compounds.
I’m just trying to get an output value that is similar to my labels or a collection of probabilities such as [0.0, 0,9, 0.7, 0.1, 0,8, 0.9]. I don’t know what that would be called and I’m sorry I can’t phrase my question well. |
st84258 | It looks like you are dealing with a multi-label calssification, i.e. each sample might have more than a single ground truth class.
If that’s the case, you could use nn.BCEWithLogitsLoss and use an output layer as:
nn.Linear(in_features, nb_classes)
Each unit in the output layer will be treated as giving the probability for the corresponding class independently.
The target should be a FloatTensor having the same shape as your model output in this case, e.g. [batch_size, nb_classes].
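A small sketch of that setup (the input size of 128 features and the hidden layer are made up; 6 output units for the 6 compounds):
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 6))
criterion = nn.BCEWithLogitsLoss()
x = torch.randn(8, 128)                         # batch of 8 samples
target = torch.randint(0, 2, (8, 6)).float()    # e.g. [0., 1., 1., 0., 1., 1.] per sample
loss = criterion(model(x), target)
probs = torch.sigmoid(model(x))                 # per-class probabilities; they don't sum to 1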
Let us know, if you need more information or get stuck somewhere. |
st84259 | Can someone explain when to use BatchNorm1d and when BatchNorm2d?
From here BN1d is called temporal:
Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal Batch Normalization.
From here BN2d is called spatial:
Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial Batch Normalization. |
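In terms of expected input shapes that means (a small sketch; both layers normalize over everything except the channel dimension C):
bn1d = nn.BatchNorm1d(16)
out1 = bn1d(torch.randn(8, 16, 50))       # (N, C, L): stats over N and L per channel
bn2d = nn.BatchNorm2d(16)
out2 = bn2d(torch.randn(8, 16, 32, 32))   # (N, C, H, W): stats over N, H, W per channel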
st84260 | I have a learning rate scheduler that reduces the learning rate on a plateau of the validation error. However, I am wondering if this is the correct way? Should instead the training error be used? |
st84261 | Should I use TensorFlow or Caffe2 to create an app for Android? I said let's use Caffe2, since TensorFlow is a longer name and I like coffee too.
But when I tried the tutorials, I believe someone is trying to make creating the app as hard as possible.
Do I really have to use Android Studio to create an app? Can I make an APK in Python? |
st84262 | Solved by Vandalko in post #2. |
st84263 | It’s not a problem to integrate Python into your app, but adding PyTorch or Tensorflow or any other framework is completely different problem.
Mainly because those libraries usually rely on native bindings which are not available for Android.
Speaking about performance, your best choice would be TensorFlow(lite) or any other framework that supports Android Neural Networks API.
Otherwise you would rely on the quality of code ported by 3rd parties. For example, the tutorial you mentioned contains no black magic - they just provide you a native (C++) library and it's up to you to implement all required integrations with your client code.
Also, check this: Pytorch model running in Android
Do I really have to use Android Studio to create an app? Can I make an APK in Python?
No, you are not required to use Android Studio. Even Android SDK is optional (you may write your own implementation of existing tools - they are open source). But you have to satisfy APK format in one or another way: compile code -> convert to DEX -> archive everything into APK (which is just ZIP with extra meta info) -> zipalign & sign |
st84264 | You gave excellent feedback, but I will need a few days to process it, @Vandalko |
st84265 | I have a model, it is a bit complicated, and I want parts of it to be grouped under one name and other part, ditto. The reason why I want this is so that I can train the parts individually, specifically, so that I can pass the parameters of that part alone to the optimizer. How can I do that?
To be concrete, here’s a snippet of my code:
self.feat1 = Cconv1d(in_channels=30, out_channels=60, kernel_size=2, stride=1, padding=2)
self.feat2 = Cconv1d(in_channels=60, out_channels=120, kernel_size=3, stride=2, padding=0)
# self.feat3 = nn.MaxPool1d(kernel_size=4)
self.featl1 = Clinear(6360, 500)
self.featl2 = Clinear(500, 31)
self.featl3 = nn.Softmax(dim=1)
I want all of those to be under model.part1
PS: I know about the Sequential thing, but I don't want it because I have a special forward pass between my layers. Any other solution? Or a hacky way to pass the parameters of those layers alone to the optimizer? |
st84266 | Solved by ptrblck in post #2. |
st84267 | You could create a custom submodule inside your main module:
class MySubmodule(nn.Module):
    def __init__(self):
        super(MySubmodule, self).__init__()
        self.feat1 = nn.Conv2d(1, 1, 3, 1, 1)
        # ...

    def forward(self, x):
        x = self.feat1(x)
        return x

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.part1 = MySubmodule()

    def forward(self, x):
        x = self.part1(x)
        return x
Alternatively, you could just create lists with the necessary parameters and pass them to the corresponding optimizer:
params = list(model.feat1.parameters()) + list(model...)
optimizer = torch.optim.SGD(params, lr=1e-3) |