st81268
I would also appreciate having a .device() function I could call that would work for models, tensors, etc.
st81269
It would be convenient in many cases if we had a method device() that returns the exact device the model/tensor is located on; then some_tensor.to(some_model.device()) would be an elegant solution for many functions (which accept a model as input and perform some inference on the model).
st81270
Is there a way to get the device of a module (a torchvision model or criterion)?
st81271
Yes, although it might be a bit misleading in some special cases. In case your model is stored on just one GPU, you could simply print the device of one parameter, e.g.:

print(next(model.parameters()).device)

However, you could also use model sharding and split the model among a few GPUs. In that case you would have to check all parameters for their device.
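A minimal sketch covering both cases (the model here is just a stand-in):

import torch
import torch.nn as nn

model = nn.Linear(10, 10).to("cuda:0" if torch.cuda.is_available() else "cpu")

# Single-device case: any parameter will do.
print(next(model.parameters()).device)

# Sharded / multi-device case: collect the devices of all parameters.
devices = {p.device for p in model.parameters()}
print(devices)  # more than one entry means the model is split across devices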
st81272
I am using copy.deepcopy to instantiate multiple instances of the same model (including the same initialization parameters). I would like to confirm whether deepcopy copies this storage property, so I'd still have to check somehow.
st81273
When I use torch.nn.parallel.DistributedDataParallel for training, I found that the processes launched for training on all other GPUs take up some memory on GPU 0, i.e. worker 0. This takes up so much memory on GPU 0 that I can't use a bigger batch. How can I solve this? Is it because the other processes send gradients to GPU 0 by default to calculate the average? (screenshot: Screenshot_2019-04-10_18-11-05.png)
st81274
By the way, in the screenshot the process for the training step on GPU 0 was killed since it crashed due to an out-of-memory error.
st81275
I have implemented a convolutional autoencoder that works perfectly without weight sharing between encoder and decoder. I guess you all know how a conv. autoencoder works. When tying the weights of the decoder to the encoder, I noticed a weird behaviour of the weights of a standard nn.Conv2d. In my case the input layer is

self.conv1 = nn.Conv2d(1, 100, (16, 5), stride=(16, 5), padding=0)

and the auto-initialized weights for this layer are of size [100, 1, 16, 5]. For the deconv I should use the functional library with the transpose of these weights, right? That is the mathematically correct way to share weights. So what I would do looks like this:

F.conv_transpose2d(out, weight=self.conv1.weight.transpose(0, 1), bias=None, stride=(16, 5), padding=0)

This throws an error; if I don't transpose the weights in the conv_transpose2d it doesn't throw an error. So this one works:

F.conv_transpose2d(out, weight=self.conv1.weight, bias=None, stride=(16, 5), padding=0)

This seems like weird behaviour (and maybe leads to errors in the future), especially because for fully connected (linear) layers it works exactly the way I would expect. Any ideas on this? Thanks in advance, Niclas
st81276
I was interested in a similar question: which dimensions should be transposed when sharing weights for deconvolution? Is transposing the filters redundant? I experimented with non-symmetric filters:

self.encoder = nn.Sequential(nn.Conv2d(1, 64, (4, 2), 2, 1))
self.decoder = nn.Sequential(nn.ConvTranspose2d(64, 1, (2, 4), 2, 1))

does NOT work. The correct decoder is as follows:

self.decoder = nn.Sequential(nn.ConvTranspose2d(64, 1, (4, 2), 2, 1))

Therefore I think weight=self.conv1.weight.transpose(0, 1) should do the job for the transposed convolution.
st81277
EDIT: I get the same error as you with the use of nn.functional.conv_transpose2d():

class AE_tied_weights(nn.Module):
    def __init__(self, input_dim=28*28, hidden_layers=(), output_dim=2000):
        super(AE_tied_weights, self).__init__()
        self.encoder = nn.ModuleList([nn.Conv2d(1, 64, 4, 2, 1)])
        self.bias = nn.ParameterList([nn.Parameter(torch.randn(1))])

    def forward(self, x):
        h = self.encoder[0](x)
        h = torch.sigmoid(h)
        y = nn.functional.conv_transpose2d(h, weight=self.encoder[0].weight.transpose(0, 1),
                                           bias=self.bias[0], stride=2, padding=1)
        return x, h, y

Transposing the weight (weight=self.encoder[0].weight.transpose(0, 1)) gives the following error:

RuntimeError: Given transposed=1, weight[1, 64, 4, 4], so expected input[60, 64, 14, 14] to have 1 channels, but got 64 channels instead

which I don't find very consistent with my previous feedback…
st81278
I experienced the same behaviour and I don’t really know why it works like that…
st81279
I am trying to calculate the mutual information between the hidden layers' output and the input and output using the following code:

def InfoNCE(X, Y, batch_size=256, num_epochs=200, dev=torch.device("cpu"), model=None, rg=True):
    A = torch.tensor([float(batch_size)] * batch_size).reshape(batch_size, 1)  # .cuda()
    if not model:
        model = nn.Sequential(
            nn.Linear(X.shape[1] + Y.shape[1], 16),
            nn.ReLU(),
            nn.Linear(16, 8),
            nn.ReLU(),
            nn.Linear(8, 1),
        )
    # Move data to device
    X = X.to(dev)
    Y = Y.to(dev) + torch.randn_like(Y) * 1e-4
    model = model.to(dev)
    opt = optim.SGD(model.parameters(), lr=0.03, momentum=0.9)
    td = TensorDataset(X, Y)
    result = []
    for epoch in range(num_epochs):
        for x, y in DataLoader(td, batch_size, shuffle=True, drop_last=True):
            opt.zero_grad()
            top = model(torch.cat([x, y], 1)).flatten()
            xiyj = torch.cat([x.repeat_interleave(batch_size, dim=0), y.repeat(batch_size, 1)], 1)
            bottom = torch.logsumexp(model(xiyj).reshape(batch_size, batch_size), 1) - A.log()
            loss = -(top - bottom).mean()
            result.append(-loss.item())
            loss.backward(retain_graph=rg)
            opt.step()
    r = torch.tensor(result[-20:]).mean()
    # plt.plot(result)
    print(r)
    return r

InfoNCE(dataset.x, layer_2_log[1])

I tried setting retain_graph both True and False after reading some posts; it still gives the runtime error no matter what. Any help would be appreciated!
st81280
Solved by albanD in post #9 Ho, looking at the code in a nice editor made me realise that you pass layer_2_log to your second training stage. But that Tensor has some history already from the first training stage. Is that expected that this will backward in the first part of the model as well? If no, you should add a .detach() …
st81281
Hi, which version of PyTorch are you using? Some autograd fixes have been added to master last week that might be related. Do you see the same behavior if you use the nightly builds? If you still do, could you write a small script that I could run to reproduce the error?
st81282
Thank you so much for answering! 3.6.9. How do I check if I have a nightly build? I am running in an environment that I created using conda. https://drive.google.com/file/d/1MvO3DYE_W0pJfcnBb7Yosfp1M2943lil/view?usp=sharing
st81283
Hi, you can check your version by printing torch.__version__ and report it here. For your code sample, what are the sizes of the objects in the dataset? Could you replace the p.load() with just a big torch.rand(your_size) so that I can run it without your dataset?
st81284
1.2.0. Sorry that I forgot to share the dataset. Here you go: https://drive.google.com/file/d/1lMHCbVqdPFI21hsy8Rqd_oB1nHVdEsn5/view?usp=sharing
st81285
Can you try and install the PyTorch preview build from this page and check if it still happens? Thanks. I asked for random inputs because that way we don't have to download datasets and move them around on our machines to reproduce, and it's much simpler for us.
st81286
My bad. The system is not allowing me to reply with a share link because they think it is promotional. Can you comment out lines 10 and 11 and change lines 13 and 14 to:

x_train = torch.rand([5198, 22])  # d['mushrooms'][0][0:5198]
y_train = torch.rand([5198, 1])   # d['mushrooms'][1][0:5198].view(5198, -1)
st81287
Ho, looking at the code in a nice editor made me realise that you pass layer_2_log to your second training stage. But that Tensor has some history already from the first training stage. Is it expected that this will backward into the first part of the model as well? If not, you should add a .detach() (when saving it in the first training loop or when passing it to the InfoNCE function) so as not to keep its history.
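A tiny self-contained illustration of the suggestion (tensor names are made up, not the thread's exact variables):

import torch

x = torch.randn(4, 8, requires_grad=True)
h = (x * 2).relu()           # h carries autograd history back to x

saved = h.detach()           # cut the history before handing it to another training stage
print(saved.requires_grad)   # False: backward in the second stage stops here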
st81288
Questions: can I pass non-tensor arguments to autograd forward/backward? Can I save non-tensor arguments with ctx.save_for_backward?

Background info: in the documentation for torch.autograd.Function.backward it is stated that: "It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input."

I need 2 arguments to be passed from the module variables to the autograd function and into the kernel: an integer and a boolean. From the above documentation it sounds like this will not work with backward, as those aren't tensors. (I don't want to make them tensors either.) Currently my forward looks like:

class modulefunction(autograd.Function):
    @staticmethod
    def forward(ctx, *args):
        output, *variables = _cuda_.module(args)
        ctx.save_for_backward(variables)
        return output

    @staticmethod
    def backward(ctx, d_output):
        return _cuda_.dmodule(d_output, *ctx.saved_variables)

class module(nn.Module):
    ...
    def forward(self, input):
        ...
        return modulefunction.apply(input, *other_tensors, integer, boolean)

Do I just have the cpp backward binding return placeholder values like 0 for those extra arguments?
st81289
It is more of a Python question than a PyTorch one. I have a model which I would like to pass to a class A object; then the same model should be returned from the class A object and passed to a class B object. Class A and class B have different data to train the model, but instead of passing the model independently to class A and then class B, I would like to pass the model returned from class A to class B. If anyone could help or suggest material, that would be amazing. Thank you.
st81290
I'm not sure what your use case is, but would this work?

class classA(object):
    def train(self, model):
        # train model
        return model

class classB(object):
    def train(self, model):
        # train model
        return model

model = nn.Linear(10, 10)
A = classA()
B = classB()

model = A.train(model)
model = B.train(model)

# alternatively
model = B.train(A.train(model))
st81291
I used to think that as a VAE model is trained, the KL term gets smaller. But according to my training, that seems to be wrong. I was wondering if someone can explain why the KL term is increasing instead of decreasing? Is that what is expected, or am I maybe doing something wrong here? (blue = total loss, green = BCE loss, red = KL)
st81292
While working on PR #23917, I found that for a CPU sparse tensor with a non-contiguous indices tensor, after calling coalesce() on the sparse tensor, the indices tensor is still non-contiguous. Is this expected?
st81293
I want to write a custom autograd function to save GPU memory and recompute results during the backward pass. The whole architecture is similar to invertible ResNets. A simplified example of what I want to write is:

class MemorySaving(torch.autograd.Function):
    @staticmethod
    def forward(ctx, keep, pred, eval_function, combine_function, add_input, *weights):
        ctx.eval_function = eval_function
        ctx.combine_function = combine_function
        ctx.weights = weights
        with torch.no_grad():
            temp, add_out = eval_function(keep, add_input)
            res_pred, factor = combine_function.forward(temp, pred)
        pred.set_()
        del pred
        ctx.save_for_backward(keep.data, res_pred, add_input)
        return (res_pred, add_out, factor)

    @staticmethod
    def backward(ctx, res_pred_grad, add_out_grad, factor_grad):
        eval_function = ctx.eval_function
        combine_function = ctx.combine_function
        weights = ctx.weights
        keep, res_pred, add_input = ctx.saved_tensors
        with torch.enable_grad():
            keep.requires_grad = True
            temp, add_out = eval_function(keep, add_input)
            pred, _ = combine_function.reverse(temp, res_pred)
            pred = pred.detach()
            pred.requires_grad = True
            resulting_pred, factor = combine_function.forward(temp, pred)
            grad_pred = torch.autograd.grad(resulting_pred, (keep, add_input, pred) + weights, res_pred_grad)
            grad_factor = torch.autograd.grad(factor, (keep, add_input) + weights, factor_grad)
            grad_add_out = torch.autograd.grad(add_out, (keep, add_input) + weights, add_out_grad)
        # omitting code to combine the grads
        return (keep_grad, pred_grad, None, None, add_input) + weights_grads

I am omitting some code for the backward function because this function is (currently) not the problem. My problem is that I have some code like this:

if memory_saving:
    res_pred, add_out, factor = MemorySaving.apply(keep, pred, eval_function, combine_function, add_input, *weights)
else:
    temp, add_out = eval_function(keep, add_input)
    res_pred, factor = combine_function.forward(temp, pred)

and res_pred, add_out, factor all have requires_grad set to False and no grad_fn if memory_saving is True. What am I doing wrong? I tried adding

res_pred, add_out, factor = res_pred.clone(), add_out.clone(), factor.clone()

to the forward of torch.autograd.Function, but it does not change anything.
st81294
Solved by albanD in post #4 Ok, maybe I’m missing something. It’s just that your implementation is very close to the checkpoint one in the sense that you save the input, and redo a forward/backward during the backward pass. What are the types of the inputs to your function? Do you have at least one Tensor that requires gradie…
st81295
Hi, is there a reason why you're not using the checkpoint tool? It looks like exactly what you're trying to reimplement. For your code, the forward pass in a Function already runs in a torch.no_grad() block, so there is no need to add another one. The fact that you return a tuple might be a problem. Can you just return your three tensors like return res_pred, add_out, factor?
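A minimal sketch of the built-in checkpoint tool mentioned above (the layer sizes are arbitrary): activations inside the block are not stored during the forward pass and are recomputed during the backward pass.

import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(128, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 128),
)

x = torch.randn(16, 128, requires_grad=True)
out = checkpoint(block, x)   # forward without saving intermediates
out.sum().backward()         # block is re-run here to rebuild them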
st81296
If I combine them correctly I get constant memory consumption independent of depth, which is different from checkpointing (I only need the resulting tensors to compute the input gradients, assuming add_out is None). The fact that you return a tuple might be a problem. Can you just return your three tensors like return res_pred, add_out, factor ? return res_pred, add_out, factor does not make any difference.
st81297
Ok, maybe I’m missing something. It’s just that your implementation is very close to the checkpoint one in the sense that you save the input, and redo a forward/backward during the backward pass. What are the types of the inputs to your function? Do you have at least one Tensor that requires gradients there?
st81298
albanD: What are the types of the inputs to your function? Do you have at least one Tensor that requires gradients there? This was the problem: I accidentally called it with self.parameters() and not *[p for p in self.parameters()].
st81299
I've recently started observing errors of the following kind:

2019-09-15 13:52:45,866 - log.py:60 - Failed
Traceback (most recent call last):
  File "train.py", line 770, in
    main_loop()
  File "train.py", line 649, in main_loop
    util.record('params', torch.sum(util.flat_param(model)).item())
RuntimeError: CUDA error: uncorrectable ECC error encountered

Is it something that I could be causing, or should I blame it on cosmic rays?
st81300
Solved by Yaroslav_Bulatov in post #3 I had two such crashes in one day, but have not seen it since then after rerunning a few times, so seems hardware related.
st81301
I had two such crashes in one day, but have not seen it since then after rerunning a few times, so seems hardware related.
st81302
I'm trying to fine-tune vgg16. I then took its classifier (I have changed the last output layer):

(classifier): Sequential(
  (0): Linear(in_features=25088, out_features=4096, bias=True)
  (1): ReLU(inplace)
  (2): Dropout(p=0.5)
  (3): Linear(in_features=4096, out_features=4096, bias=True)
  (4): ReLU(inplace)
  (5): Dropout(p=0.5)
  (6): Linear(in_features=4096, out_features=1, bias=True)
)

My question is that, when I use the model to predict, its output has no fixed range. Actually, I got:

tensor([[ 0.9261],
        [ 0.6800],
        [ 0.5750],
        [ 0.5498],
        [ 0.6597],
        [ 0.7453],
        [ 0.5137],
        [ 0.6788],
        [ 1.0495],
        [ 0.7216],
        [-0.2671],
        ...

And I use nn.BCEWithLogitsLoss() (is nn.BCEWithLogitsLoss() better than nn.BCELoss()?), so I can't (shouldn't) use output = torch.sigmoid(output), and there is no softmax layer in the model. What is the correct way to get the accuracy? (The label is 0 or 1.)
st81303
Solved by phan_phan in post #5 Because calling nn.BCEWithLogitsLoss is the equivalent of calling first nn.Sigmoid, and then nn.BCELoss. It’s just that those two are often called one after the other, so they designed nn.BCEWithLogitsLoss that does both, and better. To cite the docs : This loss combines a Sigmoid layer and the B…
st81304
The way I think of it is:

output = torch.sigmoid(output)
if 0 <= output < 0.5:
    # prediction is label 0
else:
    # prediction is label 1

But this means I get the value of the loss from the raw output data and the value of the accuracy from torch.sigmoid(output data). Can I do it like this? Does it mean the loss and the accuracy are computed from different data, so it's not in line with mathematical logic?
st81305
Yes, your own answer makes sense. But you can do it more simply: comparing the output of the sigmoid to 0.5 is equivalent to comparing the input of the sigmoid to 0 (see Wikipedia). So you don't need to call .sigmoid(); just see where output < 0.
st81306
Oh… "comparing the output of the sigmoid to 0.5 is equivalent to comparing the input of the sigmoid to 0" – yes, thank you for your suggestion. But why is getting the value of the loss from the raw output data and the value of the accuracy from torch.sigmoid(output data) the right way to compute the loss and accuracy? Shouldn't we get both the loss and the accuracy from the same data?
st81307
Because calling nn.BCEWithLogitsLoss is the equivalent of calling first nn.Sigmoid, and then nn.BCELoss. It’s just that those two are often called one after the other, so they designed nn.BCEWithLogitsLoss that does both, and better. To cite the docs : This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability. (no idea what it means) So in both cases (for the loss and for the accuracy) it’s the data resulting of the sigmoid which is taken into account. It’s just hidden beneath a trick for the loss, and for the accuracy you don’t really need to compute it since you can just compare the raw logits with 0.
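A minimal sketch putting the two answers together (shapes and values are made up): the loss takes the raw logits, and the accuracy is computed from the same logits by thresholding at 0, which is equivalent to thresholding the sigmoid output at 0.5.

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(8, 1)                      # raw model output, no sigmoid applied
targets = torch.randint(0, 2, (8, 1)).float()   # labels are 0 or 1

loss = criterion(logits, targets)               # sigmoid is applied internally
preds = (logits > 0).float()                    # same as torch.sigmoid(logits) > 0.5
accuracy = (preds == targets).float().mean()
print(loss.item(), accuracy.item())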
st81308
Hello, I tried to initialize the weights of the embedding layer with my own embeddings, via the method _create_emb_layer below. I am confused about why the weights changed after initializing the model.

class clf(nn.Module):
    def __init__(self, weight_matrix):
        super(clf, self).__init__()
        self.embedding, self.vocal_size, self.embed_dim = self._create_emb_layer(weight_matrix, trainable=False)
        print('original matrix:', weight_matrix[0])
        print('after init matrix', self.embedding.weight.detach().numpy()[0])

    def _create_emb_layer(self, weight_matrix, trainable=False):
        num_embeddings, embedding_dim = weight_matrix.shape
        emb_layer = nn.Embedding(num_embeddings, embedding_dim)
        emb_layer.weights = torch.nn.Parameter(torch.from_numpy(weight_matrix))
        if trainable:
            emb_layer.weight.requires_grad = True
        else:
            emb_layer.weight.requires_grad = False
        return emb_layer, num_embeddings, embedding_dim
st81309
Solved by Dan_Xie in post #2 Found out the problem – I set emb_layer.weights instead of emb_layer.weight. Found out by printing out state_dict.
st81310
Found out the problem – I set emb_layer.weights instead of emb_layer.weight. Found out by printing out state_dict.
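A minimal sketch of the corrected version (the matrix sizes are made up): assign to weight, the actual parameter, rather than weights.

import numpy as np
import torch
import torch.nn as nn

weight_matrix = np.random.randn(1000, 300).astype(np.float32)
num_embeddings, embedding_dim = weight_matrix.shape

emb_layer = nn.Embedding(num_embeddings, embedding_dim)
emb_layer.weight = nn.Parameter(torch.from_numpy(weight_matrix))  # 'weight', not 'weights'
emb_layer.weight.requires_grad = False                            # freeze the pretrained embedding

print(torch.allclose(emb_layer.weight, torch.from_numpy(weight_matrix)))  # True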
st81311
Hello! I am trying to accelerate CNNs on a Titan V using FP16. I use this piece of code:

# ...
model.half()
# ...
image = image.half()
# ...
outputs = model(image)

but I wonder whether the intermediate results (i.e., the accumulator) in convolution operations can be represented in FP16. Besides, several activation functions (e.g., Sigmoid, Tanh) may require high-precision representations. As the docs say, half() "Casts all floating-point parameters and buffers to half datatype", which means those nonlinear operations still use FP16? Thank you!
st81312
Solved by albanD in post #2 Hi, It does convert everything to float16 which indeed is not optimal in most cases. The nvidia folks have done a lot of testing around this and release the AMP library to automatically only convert the right part of the model to fp16 !
st81313
Hi, it does convert everything to float16, which indeed is not optimal in most cases. The NVIDIA folks have done a lot of testing around this and released the AMP library to automatically convert only the right parts of the model to fp16!
st81314
Hi all, I am getting an error with the usage of the nn.TransformerDecoder layer. I initialize the layer as follows:

self.transformer_decoder_layer = nn.TransformerDecoderLayer(2048, 8)
self.transformer_decoder = nn.TransformerDecoder(self.transformer_decoder_layer, num_layers=6)

However, in the forward method, when I run the self.transformer_decoder layer as follows:

tgt_mem = self.transformer_decoder(tgt_emb, mem)

where tgt_emb.shape = torch.Size([8, 9, 2048]) and mem.shape = torch.Size([8, 68, 2048]), I get the following error:

File "/gpfs/hpc/home/hasan90/nPAPER2/IMG+TXT/model/MultiModal/Decoder.py", line 59, in forward
    tgt_mem = self.transformer_decoder(tgt_emb, mem)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/transformer.py", line 216, in forward
    memory_key_padding_mask=memory_key_padding_mask)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/transformer.py", line 329, in forward
    key_padding_mask=memory_key_padding_mask)[0]
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 783, in forward
    attn_mask=attn_mask)
File "/gpfs/hpc/home/hasan90/build/miniconda2/envs/python3/lib/python3.6/site-packages/torch/nn/functional.py", line 3213, in multi_head_attention_forward
    k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
RuntimeError: shape '[-1, 72, 256]' is invalid for input of size 1114112

How can I solve the problem? Thanks,
st81315
Oh, my bad, I just read the docs of the Transformer class; I was passing the tensors in the wrong shape. They must be in (seq_len, B, feat_size) format.
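A minimal sketch of the fix (a single decoder layer is used here just to keep it light; the tensor shapes mirror the question): nn.TransformerDecoder expects (seq_len, batch, feat_size), so batch-first tensors need to be permuted before the call.

import torch
import torch.nn as nn

decoder_layer = nn.TransformerDecoderLayer(d_model=2048, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=1)

tgt_emb = torch.randn(8, 9, 2048)   # (batch, tgt_len, feat)
mem = torch.randn(8, 68, 2048)      # (batch, src_len, feat)

# move the sequence dimension first before calling the decoder
out = decoder(tgt_emb.permute(1, 0, 2), mem.permute(1, 0, 2))
print(out.shape)                    # torch.Size([9, 8, 2048])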
st81316
Hi, I was wondering if someone could tell me what the differences are between ConvTranspose2d(groups=in_channels) and Upsample(mode='bilinear'). Thanks
st81317
Solved by ptrblck in post #2 Upsample will use the mode to “mathematically” upsample the activation (no training), while ConvTranspose2d will use trainable filter kernels.
st81318
Upsample will use the mode to “mathematically” upsample the activation (no training), while ConvTranspose2d will use trainable filter kernels.
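A small sketch contrasting the two (the channel counts are arbitrary): Upsample has nothing to train, while the grouped ConvTranspose2d carries learnable kernels.

import torch
import torch.nn as nn

x = torch.randn(1, 16, 8, 8)

up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
deconv = nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2, groups=16)

print(up(x).shape, deconv(x).shape)                  # both torch.Size([1, 16, 16, 16])
print(sum(p.numel() for p in up.parameters()),       # 0 – nothing to train
      sum(p.numel() for p in deconv.parameters()))   # trainable kernels + bias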
st81319
import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([transforms.ToTensor()])

trainset = torchvision.datasets.MNIST(root='.data/', train=True, download=True, transform=transform)
train_data = torch.utils.data.DataLoader(trainset, shuffle=True, batch_size=4, num_workers=2)

testset = torchvision.datasets.MNIST(root='.data/', train=False, download=True, transform=transform)
test_data = torch.utils.data.DataLoader(testset, shuffle=True, batch_size=4)

class SimpleConv(torch.nn.Module):
    def __init__(self):
        super(SimpleConv, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)

        # return 28/2 = 14*14
        self.layer1 = torch.nn.Sequential(
            # 1 - count of input maps
            # 32 - count of output maps
            torch.nn.Conv2d(1, 32, kernel_size=5, stride=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2)
        )
        # return 14/2 = 7*7
        self.layer2 = torch.nn.Sequential(
            torch.nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.drop_out = torch.nn.Dropout()
        self.fc1 = torch.nn.Linear(7 * 7 * 64, 1000)
        self.fc2 = torch.nn.Linear(1000, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        print(out.shape)
        out = x.view(-1, 7 * 7 * 64)
        print(out.shape)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out

model = SimpleConv()
loss_fun = torch.nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=0.001)

num_epochs = 5
num_classes = 10

for epoch in range(num_epochs):
    for i, batch in enumerate(train_data):
        X_batch, y_batch = batch
        print(X_batch.shape)
        print(y_batch.shape)
        optim.zero_grad()
        output = model(X_batch)
        loss = loss_fun(output, y_batch)
st81320
Your out tensor is of shape [4, 64, 6, 6], so firstly you should reshape out, not x, and secondly you should reshape with [-1, 64 * 6 * 6]. See below:

...
self.fc1 = torch.nn.Linear(64 * 6 * 6, 1000)
self.fc2 = torch.nn.Linear(1000, 10)

def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)
    print(out.shape)
    out = out.view(-1, 64 * 6 * 6)
    print(out.shape)
    out = self.drop_out(out)
    out = self.fc1(out)
    out = self.fc2(out)
    return out
st81321
I want to have, say, a list of all the available layers in PyTorch. How do I get that info? Is it possible? As a starter, I am fine with only sequential models…
st81322
I don't know if there is a cleaner way but you can do:

from torch import nn

all_layers = []
for sym in dir(nn):
    attr = getattr(nn, sym)
    if type(attr).__name__ == 'type':
        all_layers.append(attr)
st81323
Nice! <3 Is there an easy way to sort out which are activation layers vs other layers? Ideally the finer the granularity the better, e.g. sequence layers, affine, activations, normalization, regularization, etc., but at the very least separate activation layers from the rest…
st81324
This would work:

activations = []
for l in all_layers:
    if 'torch.nn.modules.activation.' in repr(l):
        activations.append(l)
st81325
Well, a nice thing about PyTorch is that you can easily use Python's reflection, which complements the official docs, e.g. dir(module) to get the attributes of a module, or help(module.func) to get directly the docstring associated with module.func. Here you can find more information about it.
st81326
On a related note, how does one do this: is it possible to automatically extract all the arguments to PyTorch layers/functions/modules? I asked a similar question where I wanted to get a list of all the layers in PyTorch, but I also want to have a list of all the hyperparameters/arguments for each of them. Is it possible to extract all of those automatically? (At the very least as string representations or something like that.)
st81327
In that specific case, repr and str will return the same string. But in general repr should return all the important information on the object while str is meant to return only a formatted and human readable output.
st81328
Hello, I have a tensor of size B,C,W,H and I generated a random 4x4 kernel filled with zeros except for a single one value. This kernel is multiplied with the tensor and regenerated for each sliding step. The result I have is the same tensor half filled with zeros. I want to downsample this tensor to half its size by removing only the zero values while keeping its dimensions; this can be done by creating a new smaller tensor and copying each row independently. What is not clear to me is the gradients: by creating this new tensor and copying into it, am I keeping the flow of the gradients or losing it? Or is there a better way to do it?
st81329
Hi, I am trying to calculate the plane passing through 2 vectors which are both of 200 dimensions. I know that if these vectors were 3-D, I could use the cross product. I wonder if there are any functions in PyTorch for a generalization of the cross product to more than 3 dimensions? Thanks, T
st81330
I am implementing a custom autograd function. In the forward of class xxxFunction(torch.autograd.Function) I set ctx.matches, and then I want to access ctx.matches outside this function. I tried:

@staticmethod
def get_matches(ctx):
    return ctx.matches

Then in the class xxx(torch.nn.Module) I run xxxFunction.get_matches(), which fails with: get_matches() takes exactly 1 argument (0 given). Thank you in advance!
st81331
Hi, You can access this from the backward method that gets the same ctx as input. I don’t think you can extract it in a reliable way though. Why do you need to do this? Can’t you just return it as another output of the forward method?
st81332
I am facing the same issue. If I return it as another output of the forward method, how do I adjust the backward method? @albanD Many thanks!
st81333
The backward method should get and return None for everything that is not differentiable.
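A minimal sketch under the thread's assumptions (the function and the matches tensor are made up for illustration): the extra, non-differentiable output is returned from forward, and backward simply accepts and ignores the gradient slot that corresponds to it.

import torch

class MyFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        matches = (x > 0)               # extra, non-differentiable result
        ctx.save_for_backward(matches)
        return x * 2, matches

    @staticmethod
    def backward(ctx, grad_out, grad_matches):
        # grad_matches corresponds to the non-differentiable output and is None; ignore it.
        matches, = ctx.saved_tensors
        return grad_out * 2

x = torch.randn(5, requires_grad=True)
y, matches = MyFunction.apply(x)
y.sum().backward()
print(matches, x.grad)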
st81334
Here is an example: github.com/Zhaoyi-Yan/Shift-Net_pytorch/blob/master/models/shift_net/InnerShiftTriple.py#L34

cur_device = torch.cuda.current_device()
self.cur_mask = self.mask_all[cur_device*cur_bsize:(cur_device+1)*cur_bsize, :, :, :]

# If mask changes, then need to set cal_fix_flag true each iteration.
def forward(self, input):
    self.bz, self.c, self.h, self.w = input.size()
    self._split_mask(self.bz)
    self.flag = util.cal_flag_given_mask_thred(self.cur_mask, self.shift_sz, self.stride, self.mask_thred)
    final_out = InnerShiftTripleFunction.apply(input, self.shift_sz, self.stride, self.triple_weight, self.flag, self.show_flow)
    if self.show_flow:
        self.flow_srcs = InnerShiftTripleFunction.get_flow_src()
    return final_out

def get_flow(self):
    return self.flow_srcs

def set_flow_true(self):
    self.show_flow = True

def set_flow_false(self):
    self.show_flow = False
st81335
I am training my model on one GPU (V100); the speed is as below:

2019-09-17 01:46:48,876 - INFO - [ 1022/10000] lr: 0.000100 Time 1.773 ( 1.744) Data 0.001 ( 0.001) Loss 6341.771 (6945.944)
2019-09-17 01:46:50,593 - INFO - [ 1023/10000] lr: 0.000100 Time 1.607 ( 1.722) Data 0.001 ( 0.001) Loss 7225.229 (6958.357)
2019-09-17 01:46:52,323 - INFO - [ 1024/10000] lr: 0.000100 Time 1.717 ( 1.732) Data 0.001 ( 0.001) Loss 7218.038 (6929.233)

In the time info, such as Time 1.717 ( 1.732), 1.717 is the time for the current batch and 1.732 is the time averaged over the most recent one hundred batches. When I use 8 GPUs on one node with torch.nn.parallel.DistributedDataParallel, torch.nn.SyncBatchNorm.convert_sync_batchnorm and mp.spawn, the speed is as below:

2019-09-16 06:06:40,619 - INFO - [ 9/5000] lr: 0.000036 Time 2.822 ( 4.896) Data 0.001 ( 1.428) Loss 307113.969 (331260.794)
2019-09-16 06:06:43,485 - INFO - [ 10/5000] lr: 0.000037 Time 3.419 ( 4.749) Data 0.001 ( 0.001) Loss 303037.688 (325792.062)
2019-09-16 06:06:46,120 - INFO - [ 11/5000] lr: 0.000037 Time 2.866 ( 2.943) Data 0.001 ( 0.001) Loss 296579.000 (320417.425)
2019-09-16 06:06:48,925 - INFO - [ 12/5000] lr: 0.000037 Time 2.634 ( 2.879) Data 0.001 ( 0.001) Loss 292080.625 (315081.881)
2019-09-16 06:06:51,671 - INFO - [ 13/5000] lr: 0.000038 Time 2.806 ( 2.847) Data 0.001 ( 0.001) Loss 286678.000 (309843.294)

The speedup ratio with 8 GPUs is only about 0.6.
st81336
Hi, I am using PyTorch 1.2.0 with CUDA 10.0.130 to train a neural network on a GeForce 2080 Ti GPU. I am receiving a cublas runtime error (traceback pasted at the bottom). My code runs successfully with PyTorch 1.1, CUDA 9, and a GeForce GTX 1080, so the problem probably relates to my environment configuration. I thought PyTorch 1.2.0, CUDA 10, and the 2080 Tis were all compatible? More specifics about my environment are pasted below the error traceback. I appreciate any help resolving this issue – thank you!

Error traceback:

CUDA runtime error: misaligned address (74) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda100_1549065924616/work/interface_cuda/interface.cpp:944
CUDA runtime error: misaligned address (74) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda100_1549065924616/work/interface_cuda/interface.cpp:945
CUDA runtime error: misaligned address (74) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda100_1549065924616/work/interface_cuda/interface.cpp:946
Traceback (most recent call last):
  File "/dscrhome/snb21/autoencoded-vocal-analysis-master/mouse_sylls_mwe.py", line 150, in
    model.train_loop(loaders, epochs=600, test_freq=None)
  File "/hpchome/mooneylab/snb21/autoencoded-vocal-analysis-master/ava/models/vae.py", line 422, in train_loop
    loss = self.train_epoch(loaders['train'])
  File "/hpchome/mooneylab/snb21/autoencoded-vocal-analysis-master/ava/models/vae.py", line 354, in train_epoch
    loss.backward()
  File "/dscrhome/snb21/.conda/envs/ava2/lib/python3.7/site-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/dscrhome/snb21/.conda/envs/ava2/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCBlas.cu:331

Environment Details:

PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Red Hat Enterprise Linux Server release 7.7 (Maipo)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
CMake version: version 2.8.12.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 418.39
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.5.0

Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] magma-cuda100 2.5.1 1 pytorch
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchvision 0.4.0 py37_cu100 pytorch
st81337
Hi, is it possible that loops are faster than the DataLoader?

Earlier:

def one_hot_vector(x_raw, n_uniq):
    #time_strt = datetime.now()
    input_len = x_raw.shape[0]
    input_col_len = x_raw.shape[1]
    x = np.zeros((input_len*input_col_len, n_uniq), dtype=np.int8)
    x_raw = x_raw.reshape(-1, 1)
    for i in range(n_uniq):
        ind, _ = np.where(x_raw == i)
        x[ind, i] = 1
    x = x.reshape(input_len, input_col_len, n_uniq)
    x_raw = x_raw.reshape(input_len, input_col_len)
    #print(f"Completed in {datetime.now()-time_strt}")
    return x

for epoch in range(num_epochs):
    epoch_time = datetime.now()
    for i in range(0, x_train.shape[0], 100000):
        #strt_time = datetime.now()
        one_hot_x_train = one_hot_vector(x_train[i:i+100000], 2983)
        one_hot_x_train = torch.from_numpy(one_hot_x_train)
        y_train_ = torch.from_numpy(y_train[i:i+100000].astype(np.int32))
        for j in range(0, one_hot_x_train.shape[0], batch_size):
            outputs = model(one_hot_x_train[j:j+batch_size].to(device).float())
            loss = criterion(outputs, y_train_[j:j+batch_size].squeeze().to(device).long())
            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"Epoch : {epoch}/{num_epochs} Train % : {(i+100000)/(x_train.shape[0])} Loss : {loss.item()} Loop Cost : {datetime.now()-strt_time} ")
    print(f"Epoch time : {datetime.now()-epoch_time}")

And after using the DataLoader:

import torch
from torch.utils import data
import numpy as np
import torch.nn.functional as F

class Dataset_1(data.Dataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self, list_IDs, labels, n_uniq):
        'Initialization'
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_uniq = n_uniq

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_IDs)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        # Load data and get label
        X = self.list_IDs[index]
        X = F.one_hot(torch.tensor(X).to(torch.int64), num_classes=self.n_uniq)
        y = self.labels[index]
        return X, y

params = {'batch_size': 50,
          'shuffle': True,
          'num_workers': 50}

training_set = Dataset_1(x_train, y_train, 2983)
training_generator = data.DataLoader(training_set, **params)

validation_set = Dataset_1(x_test, y_test, 2983)
validation_generator = data.DataLoader(validation_set, **params)

for epoch in range(num_epochs):
    epoch_time = datetime.now()
    # Training
    print('Training Start')
    counter = 0
    for local_batch, local_labels in training_generator:
        # Transfer to GPU
        counter += 1
        local_batch, local_labels = local_batch.to(device).float(), local_labels.to(device).long()
        outputs = model(local_batch)
        loss = criterion(outputs, local_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if counter % 10000 == 0:
            print(f"Counter : {counter} || Loss : {loss.item()}")
    print(f"Epoch time : {datetime.now()-epoch_time} || Loss : {loss.item()}")

I have a huge dataset (200M samples) and I do not have exact timings, but the time to run 2 epochs almost doubled with the DataLoader. Also, the model loss doesn't seem to improve much: it remains between 4 and 5 if I use the Adam optimizer and between 2 and 3 if I use the SGD optimizer. What could I try to improve the accuracy of the LSTM model? Here are the model parameters and the model that I am using:

# Hyper-parameters
sequence_length = 10
input_size = 2983
hidden_size = 128
num_layers = 4
num_classes = 100
num_epochs = 2
learning_rate = 0.1

# Recurrent neural network (many-to-one)
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        #self.lsmax = nn.LogSoftmax(hidden_size, hidden_size)
        self.fc = nn.Linear(hidden_size, num_classes)
        #self.fc_ = nn.Linear(num_classes, num_classes)
        #self.fc_2 = nn.Linear(num_classes, num_classes)

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.randn(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.randn(self.num_layers, x.size(0), self.hidden_size).to(device)
        # h1 = torch.randn(1, self.hidden_size, self.hidden_size).to(device)
        # c1 = torch.randn(1, self.hidden_size, self.hidden_size).to(device)

        # Forward propagate LSTM
        out, _ = self.lstm(x, (h0, c0))  # out: tensor of shape (batch_size, seq_length, hidden_size)
        #out_, _ = self.lsmax(out, (h1, c1))

        # Decode the hidden state of the last time step
        out_ = self.fc(out[:, -1, :])
        #out_ = self.fc_(out)
        #out__ = self.fc_2(out_)
        return out_

model = RNN(input_size, hidden_size, num_layers, num_classes).cuda()

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
st81338
Solved by ptrblck in post #2 The manual loop might be faster, since you are just slicing the tensor, while your Dataset copies the data. Try to use torch.from_numpy in your __getitem__ and compare the results again.
st81339
The manual loop might be faster, since you are just slicing the tensor, while your Dataset copies the data. Try to use torch.from_numpy in your __getitem__ and compare the results again.
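A minimal sketch of that suggestion (names and shapes are placeholders, not the poster's exact code): torch.from_numpy shares memory with the numpy array instead of copying it, and the one-hot expansion still happens per sample.

import numpy as np
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset

class OneHotDataset(Dataset):
    def __init__(self, x, y, n_uniq):
        self.x, self.y, self.n_uniq = x, y, n_uniq

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        x = torch.from_numpy(self.x[index]).long()           # shares memory, no copy
        x = F.one_hot(x, num_classes=self.n_uniq).float()    # expand on the fly
        y = torch.from_numpy(np.asarray(self.y[index])).long()
        return x, y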
st81340
Hello, I'm developing third-party software with the goal of releasing it as FOSS (MIT license) on GitHub, strictly connected with PyTorch. Is it okay if I use your documentation template provided here? I have changed the footer to "Built with Sphinx using PyTorch's theme provided originally by Read the Docs.", removed any copyrights, stripped down some parts I will not use, and changed the header a little bit; other than that it is almost the same. Secondly, is it okay to use PyTorch's new logo as part of this library/these libraries? I would assume it is after asking permission (based on this thread), but that was quite a few months ago – has anything changed? If so, do you want me to include any licenses or notices if you allow me to use any/all of the above?
st81341
Yes, you can use the documentation template. Yes, you can use the PyTorch logo to represent PyTorch as the product. As long as it isn't misrepresented, we are fine.
st81342
@smth I just came to ping you with the results: torchdata and a direct link to the documentation itself. If anything is wrong, hit me up. Thanks for the permission.
st81343
I have two GPUs installed. With the same script, if I select to run on the 2080 Ti the error below occurs, and if I run on the GTX 1080 Ti it works fine.

~/Molecule_Optimizer.py in forward(self, atom_list, bond_list, atom_degree_list, bond_degree_list, atom_mask)
     33     atom_mask = atom_mask.unsqueeze(2)
     34     batch_size, mol_length, num_atom_feat = atom_list.size()
---> 35     atom_feature = self.dropout(self.atom_fc(atom_list))
     36
     37     bond_neighbor = [bond_list[i][bond_degree_list[i]] for i in range(batch_size)]

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475         result = self._slow_forward(*input, **kwargs)
    476     else:
--> 477         result = self.forward(*input, **kwargs)
    478     for hook in self._forward_hooks.values():
    479         hook_result = hook(self, input, result)

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
     53
     54     def forward(self, input):
---> 55         return F.linear(input, self.weight, self.bias)
     56
     57     def extra_repr(self):

~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1024         return torch.addmm(bias, input, weight.t())
   1025
-> 1026     output = input.matmul(weight.t())
   1027     if bias is not None:
   1028         output += bias

RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/THCBlas.cu:249
st81344
Excuse me, could you please tell me how to tackle this problem? I found that it occurs when I call loss.backward().
st81345
Is this resolved? It may be related to an issue I am having and cannot resolve. See here.
st81346
Say I have a training situation where I have two loss functions defined and want to update my gradients by a backward pass over both. If I simply write:

err1 = criterion1(...)
err2 = criterion2(...)
err = err1 + err2
err.backward()
optimizer.step()

is it possible that one gradient update will overpower the other so that, in effect, only one loss really gets optimized since the other gradient update is so much smaller? If so, can I normalize the gradients of the two before combining them and make sure they have a similar impact on my update?
st81347
Solved by DerekGloudemans in post #4 Yeah so weight_decay is used to add a regularization penalty to model weights over time, to keep model weights small on average and prevent overfitting. In answer to your second question, yes, weight decay will generally tend to decay the weights for both losses, unless it was implemented in a cust…
st81348
The short answer is yes. One way to balance the contribution of each loss is to weight them. In this way you can control which loss you want to prioritise. Something similar happens when you use weight_decay in your optimiser, which essentially scales the loss by the given value.
st81349
Just a weighted sum, like:

k = 0.2
err = k * err1 + (1 - k) * err2

How do I determine a good weight_decay value? Is it based on the norm of the gradient updates of one compared to that of the other? Actually, I'm realizing that weight_decay might not work, because it will decay the weights for both losses, no?
st81350
Yeah, so weight_decay is used to add a regularization penalty to model weights over time, to keep model weights small on average and prevent overfitting. In answer to your second question: yes, weight decay will generally tend to decay the weights for both losses, unless it was implemented in a custom manner to apply only to one loss. The important point here is that weight_decay is used to adjust the values' weightings over time during training (see "How does SGD weight_decay work?"). What you are likely looking for is instead a weighting function that combines two scalar values, probably in the same way at all points during training:

err = k * err1 + (1 - k) * err2

One way to do this is a simple weighted sum, as shown above. Practically, this weighting value k must be selected by trying a number of candidate values. There are other ways to combine these values as well, such as multiplication:

err = err1 * err2

or the harmonic mean (this is a commonly used technique for combining the error metrics of precision and recall into a single metric called the F1 score):

err = (2 * err1 * err2) / (err1 + err2)

The short of it is, there's not an easy way to see how different weightings of the different loss components will translate into performance gains, since the relationship is a function of your model. Try a range of values or functions to see what yields satisfactory results.
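A runnable illustration of the weighted sum (the model, criteria, and value of k here are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion1, criterion2 = nn.MSELoss(), nn.L1Loss()

x, target = torch.randn(4, 10), torch.randn(4, 1)
out = model(x)

k = 0.2                                   # weighting to tune empirically
err = k * criterion1(out, target) + (1 - k) * criterion2(out, target)

optimizer.zero_grad()
err.backward()                            # gradients from both losses are accumulated
optimizer.step()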
st81351
I ran 3 experiments:

(1) BatchNorm2d, num_gpus = 1, batch_size = 2, learning_rate = 2e-4 -> accuracy = a1
(2) SyncBatchNorm2d, num_gpus = 2, batch_size = 1, learning_rate = 2e-4 -> accuracy = a2
(3) SyncBatchNorm2d, num_gpus = 2, batch_size = 1, learning_rate = 1e-4 -> accuracy = a3

All other variables are the same for the 3 experiments. I expected to see either a1 = a2 or a1 = a3, but both a2 and a3 are lower than a1. Also, when using SyncBatchNorm2d, the network converges more slowly. Any idea what is happening?
st81352
In the following code, I would like to make delta a parameter learnable by the model instead of a fixed scalar value. I would like to have the option of a learnable delta for each component of the input tensor or for each layer.

self.features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2),
    nn.Threshold(-delta, -delta, inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2),
    nn.Threshold(-delta, -delta, inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
st81353
I tried using torch.clamp, but it also seems non-differentiable:

import torch
from torch import autograd

threshold = autograd.Variable(torch.rand(1), requires_grad=True)
print('threshold', threshold)
# m = torch.nn.Threshold(threshold, threshold)
input = autograd.Variable(torch.rand(1, 5), requires_grad=True) - 0.5
print('input', input)
# out = m(input)
out = torch.clamp(input, min=threshold)
print('out', out)
out.backward(torch.ones(1, 5))
print('threshold.grad.data', threshold.grad.data)

> Traceback (most recent call last):
>   File "4729.py", line 11, in <module>
>     out = torch.clamp(input, min=threshold)
>   File "/Users/hugh2/conda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/variable.py", line 396, in clamp
>     return CmaxConstant(min)(self)
>   File "/Users/hugh2/conda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/_functions/pointwise.py", line 232, in forward
>     self._max_buffer = i.gt(self.constant).type_as(i)
> TypeError: gt received an invalid combination of arguments - got (Variable), but expected one of:
>  * (float value)
>       didn't match because some of the arguments have invalid types: (Variable)
>  * (torch.FloatTensor other)
>       didn't match because some of the arguments have invalid types: (Variable)

I tried it in TensorFlow, and it seemed to work OK:

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    input_t = tf.placeholder(tf.float32, [None], 'input')
    threshold_t = tf.Variable(0.05)
    out_t = tf.minimum(input_t, threshold_t)

sess = tf.Session()
with sess.as_default():
    sess.run(tf.global_variables_initializer())
    print('out', sess.run(out_t, feed_dict={input_t: [-0.3, 0.0, 0.7]}))

    # get grad of out_t wrt threshold_t
    grad_out_t = tf.gradients(out_t, [threshold_t])[0]
    print('d(out)/d(threshold)', sess.run(grad_out_t, feed_dict={input_t: [-0.3, 0.0, 0.7]}))
    print('d(out)/d(threshold)', sess.run(grad_out_t, feed_dict={input_t: [-0.3, 0.0, -0.7]}))
    print('d(out)/d(threshold)', sess.run(grad_out_t, feed_dict={input_t: [-0.3, 0.5, 0.7]}))

out [-0.30000001  0.          0.05      ]
d(out)/d(threshold) 1.0
d(out)/d(threshold) 0.0
d(out)/d(threshold) 2.0

Edit: I guess maybe this needs the Scalar support that's on the way in order to be solved?
st81354
hughperkins: "out = torch.clamp(input, min=threshold)"

Try out = input.max(threshold) instead.

Best regards, Thomas
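A quick check of this suggestion (values borrowed from the TensorFlow example above): the elementwise max is differentiable with respect to the threshold tensor.

import torch

threshold = torch.tensor(0.05, requires_grad=True)
inp = torch.tensor([-0.3, 0.0, 0.7])

out = torch.max(inp, threshold)   # elementwise max, broadcasting the scalar threshold
out.sum().backward()
print(out)              # tensor([0.0500, 0.0500, 0.7000])
print(threshold.grad)   # tensor(2.) – one for each thresholded element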
st81355
threshold = autograd.Variable(torch.rand(1), requires_grad=True)

self.features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2),
    nn.Max(threshold),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2),
    nn.Max(threshold),
    nn.MaxPool2d(kernel_size=3, stride=2),
)

Do you think the code above should work?
st81356
As far as I can tell, it's not possible. Also see "Creating a custom loss function". So, it looks like you could create a custom autograd module to handle this. If it were me, I might consider logging it on the PyTorch issues page and/or submitting the custom autograd module as a PR.
st81357
Hi, the thing is that the threshold operation is not differentiable w.r.t. the threshold value. More specifically, the operation it performs for each element is:

if inp[el] <= threshold:
    out[el] = thresholded_value
else:
    out[el] = inp[el]

What is d(out)/d(threshold) here?
st81358
It works in TensorFlow. I reckon it's not differentiable at the threshold itself, but it's differentiable almost everywhere?
st81359
Well, the threshold_value will have a gradient that accumulates the grad_out for every element where it has been thresholded. So that one in theory you could learn, even though I am not sure what that means in practice. The threshold is definitely not learnable with pure gradients, or maybe I am missing something? What would be the gradient "almost everywhere"?
st81360
(by the way, you can see that the theoretical result I’ve proposed matches the results I’m getting from tensorflow)
st81361
Sum of the output_grad for things below the threshold, zero otherwise. You can see this by looking sternly at the max formulation or, if you prefer, rewrite as relu(x-t)+t. Best regards Thomas
st81362
@tom good point when threshold == threshold_value. But can you get a similar expression for the general formula of threshold when they are not equal?
st81363
Hi @albanD, no, I do not know what to do then with respect to the input cut-off. And with my bias towards theory, using a discontinuous function seems unintuitive, too. In fact, I prefer to think about this as shrinkage (i.e. relu(x-t), with its well-studied connections, e.g. to a quadratic activation penalty or regression with noisy observations) plus a bias, and don't really like to think about thresholding. If you fed the output to (optionally) a relu plus a layer that uses a bias, I would think you do not need the offset +t outside the relu at all. But that's me.
st81364
I'm like 80% sure it should be differentiable. What makes you feel it would not be? By the way, one of the nice things about TF is that I've tried some really convoluted, bizarre cost functions, and they've all been differentiable. It's quite nice…
st81365
Oh, you're right. Fair enough. (screenshot: Screen Shot 2017-07-12 at 1.18.54 AM.jpg)
st81366
What is wrong below?

import torch
from torch import autograd
import torch.nn as nn

__all__ = ['CNN', 'cnn']

class CNN(nn.Module):
    def __init__(self, dataset):
        super(CNN, self).__init__()
        self.threshold = autograd.Variable(torch.rand(1), requires_grad=True)
        self.threshold.data.fill_(0.05)
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=None, padding=0)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool(x)
        #x = x + self.threshold.expand_as(x)
        x = x + self.threshold
        x = self.relu(x)
        x = self.conv2(x)
        x = self.conv2_drop(x)
        x = self.maxpool(x)
        x = self.relu(x)
        x = x.view(-1, 320)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.relu(x)
        return x

def cnn(dataset):
    model = CNN(dataset)
    return model

Error:

Traceback (most recent call last):
  File "train.py", line 390, in
    main()
  File "train.py", line 165, in main
    cnn(args.epochs, train_loader, val_loader, model, criterion, optimizer, experiment)
  File "train.py", line 200, in cnn
    training_time += train(train_loader, model, criterion, optimizer, epoch)
  File "train.py", line 260, in train
    output = model(input_var)
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 59, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dlm/code/deeplearninglab/sem/models/cnn.py", line 27, in forward
    x = x + self.threshold
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 745, in __add__
    return self.add(other)
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 283, in add
    return self._add(other, False)
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 277, in _add
    return Add(inplace)(self, other)
  File "/home/dlm/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/basic_ops.py", line 20, in forward
    return a.add(b)
TypeError: add received an invalid combination of arguments - got (torch.FloatTensor), but expected one of:
 * (float value)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (torch.cuda.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (torch.cuda.sparse.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor)
 * (float value, torch.cuda.FloatTensor other)
 * (float value, torch.cuda.sparse.FloatTensor other)
st81367
I'd use nn.Parameter instead of Variable for the parameter. You probably did call model.cuda().
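A minimal sketch of that fix (the module and init value are made up for illustration): registering the threshold as an nn.Parameter makes it part of the module, so model.cuda() and the optimizer pick it up, and it receives gradients through the relu(x - t) + t formulation mentioned earlier in the thread.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableThreshold(nn.Module):
    def __init__(self, init=0.05):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init))

    def forward(self, x):
        # relu(x - t) + t clamps the input from below at the learnable threshold
        return F.relu(x - self.threshold) + self.threshold

m = LearnableThreshold()
out = m(torch.randn(2, 3, requires_grad=True))
out.sum().backward()
print(m.threshold.grad)   # the threshold receives a gradient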