st80468
Same issue for me. I ran the demo from the official tutorial "Visualizing Models, Data, and Training with TensorBoard" without changing anything, and no graph appeared. (Anaconda 3 environment: python 3.7, pytorch 1.2.0, torchvision 0.4.0, tensorboard 1.14.0)
st80469
Same issue here with python3.7.4, pytorch 1.2.0, torchvision 0.4.0, tensorboard 1.14.0
st80470
@ptrblck Updating PyTorch to the nightly build resolved the issue. Thank you very much for the suggestion.
st80471
pytorch version: 1.1.0, cuda version: 10.0.130, GPU: TITAN V. I hit a problem when I use 10 GPUs: RuntimeError: cuda runtime error (60) : peer mapping resources exhausted at /opt/conda/conda-bld/pytorch_1556653114079/work/aten/src/THC/THCGeneral.cpp:164. When I use 8 or 4 GPUs, I don't get this error.
st80472
I am having the same problem with 10 GPUs; with 8 or fewer GPUs it is fine. pytorch version: 1.2.0, CUDA version: 10.1.105
st80473
Hi, I was wondering how to store some of the model's modules (including their weights and biases) in a global variable so that I can copy them back later?
st80474
Solved by albanD in post #9 If you have the same tree structure, you should encounter the same modules at the same time during the forward no? So you can simply .append() and .pop() your list.
st80475
You can save the network parameters in a list, if that's what you are after. Global or local is your design choice. Loop over the network parameters and save them. Check out nn.ModuleList, which can be of use as well.
st80476
You can also get all the parameters as a nice dictionary with model.state_dict(). Then use model.load_state_dict(state_dict) to restore it.
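For example, a minimal save/restore round trip (the model here is only a placeholder):

import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Snapshot the current parameters; deepcopy so later training doesn't mutate the snapshot,
# since state_dict() returns references to the live tensors.
snapshot = copy.deepcopy(model.state_dict())

# ... train or otherwise modify the model ...

# Restore the saved parameters
model.load_state_dict(snapshot)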
st80477
I want something along these lines: store all the Linear layers (modules or weights) in a list, then loop over all model modules and, if a module is Linear, assign it the stored Linear module that matches its index.
st80478
Ok, in that case I would use for mod_uniq_name, mod in model.named_modules() to find all the Linear layers and save their weights in some structure (like a dict) using mod_uniq_name as the key. Then, when you want to reload them, do the same iteration and, if mod_uniq_name is present in your saved structure, load what is there.
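A small sketch of that save/restore loop (the model here is just a stand-in):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Save: collect the state of every Linear submodule, keyed by its unique name
saved = {name: {k: v.clone() for k, v in mod.state_dict().items()}
         for name, mod in model.named_modules()
         if isinstance(mod, nn.Linear)}

# Restore: walk the modules again and reload whatever was saved
for name, mod in model.named_modules():
    if name in saved:
        mod.load_state_dict(saved[name])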
st80479
Thank you, but it is a bit different since I am using a kind of tree, so I have to save every Linear node in the forward function. That means I need a different way to store them, for example a list that stores the weights or a list that stores the entire module. Which one do you think is better?
st80480
If you don't plan on modifying the module itself, I think saving only the weights is better.
st80481
Yes, but iterating over that list breaks and you can't index into it. If I try: for p55 in temppara: print(p55); print(temppara.index(p55)); input() I get errors for the index call, and I need an index to know which module should be mapped to which module, or which weights should be mapped to which module's weights.
st80482
If you have the same tree structure, you should encounter the same modules at the same time during the forward no? So you can simply .append() and .pop() your list.
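A minimal sketch of that append/pop idea (the helper names are illustrative; it assumes both passes visit the Linear nodes in the same order):

import torch

saved_weights = []

# First pass: whenever the forward reaches a Linear node, stash its weight
def save_linear(mod):
    saved_weights.append(mod.weight.detach().clone())

# Second pass (same tree, same traversal order): take them back out in the same order
def restore_linear(mod):
    w = saved_weights.pop(0)  # pop(0) keeps FIFO order, matching the forward traversal
    with torch.no_grad():
        mod.weight.copy_(w)

Calling save_linear(node) inside the first forward and restore_linear(node) inside the second keeps the mapping implicit in the traversal order, so no explicit index is needed.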
st80483
How do I make my neural network learn the same pattern multiple times, i.e. one neural network learns minor modifications of the same pattern, stores them in a list, and returns this list to another neural network? If I do something like this:

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(3, 3)

    def forward(self, x):
        out = self.lin(x)
        return out, out

then it returns two outputs, but both of them are the same. I want redundancy, but not exactly the same patterns, only minor modifications.
st80484
I'm not sure what you are trying to do, because there might be a better way to achieve the desired output. However, one way to achieve 'minor' modifications is to add some random noise (possibly Gaussian) to x in the forward() method.
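A minimal sketch of that suggestion (noise_std is an illustrative hyperparameter, not something from the original post):

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, noise_std=0.01):
        super().__init__()
        self.lin = nn.Linear(3, 3)
        self.noise_std = noise_std

    def forward(self, x):
        out = self.lin(x)
        # Second output: the same input perturbed with small Gaussian noise,
        # so the two returned tensors are slightly different "versions" of one pattern.
        noisy_out = self.lin(x + torch.randn_like(x) * self.noise_std)
        return out, noisy_out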
st80485
Hello, I have a situation where I work with multiple instances of the same model, like this:

class Decoder(nn.Module):
    pass

decoders = []
for _ in range(some_number):
    decoders.append(Decoder())

Some doubts: (1) All the decoder instances are independent of each other as long as they don't share a tensor - true/false? (2) To save them to disk and reload them I'll have to loop through and do something like 'decoder0': decoders[0].state_dict(), ... - is there a better way? Primarily, I am trying to understand the independence among the decoders themselves, and the conditional dependence that comes in when there is a shared model (like an encoder). Cheers.
st80486
Solved by ptrblck in post #4 Yes, each output tensor will be attached to a computation graph, which involves the same encoder instance. If you calculate some losses based on these outputs and call backward on them, the gradients will be accumulated in the parameters of encoder: encoder = nn.Linear(1, 1) decoders = nn.ModuleLi…
st80487
Yes, all Decoder instances are independent, since you've initialized each one separately. In your current approach, yes. However, you could also use an nn.ModuleList, which will return all internal submodule states via its state_dict() method:

modules = nn.ModuleList()
for _ in range(10):
    modules.append(nn.Linear(1, 1))
modules.state_dict()
> OrderedDict([('0.weight', tensor([[-0.0277]])), ('0.bias', tensor([-0.3542])), ('1.weight', tensor([[0.2417]])), ('1.bias', tensor([0.2794])), ('2.weight', tensor([[0.6173]])), ('2.bias', tensor([0.7524])), ('3.weight', tensor([[-0.9020]])), ('3.bias', tensor([0.7507])), ('4.weight', tensor([[-0.2359]])), ('4.bias', tensor([0.6560])), ('5.weight', tensor([[-0.8661]])), ('5.bias', tensor([-0.9012])), ('6.weight', tensor([[0.7482]])), ('6.bias', tensor([0.6804])), ('7.weight', tensor([[0.7841]])), ('7.bias', tensor([-0.6375])), ('8.weight', tensor([[0.6187]])), ('8.bias', tensor([-0.3414])), ('9.weight', tensor([[0.2675]])), ('9.bias', tensor([0.0969]))])
st80488
Great! Thanks for that clarification. And for the case when an encoder comes into the picture?

class Encoder(nn.Module):
    pass

class Decoder(nn.Module):
    pass

encoder = Encoder()
decoders = []
for _ in range(some_number):
    decoders.append(Decoder())

intermediate = encoder(input)
output0 = decoders[0](intermediate)
output1 = decoders[1](intermediate)
...

In this scenario, the decoders are still independent of each other, but the encoder is now tied to all the decoders. Correct?
st80489
Yes, each output tensor will be attached to a computation graph, which involves the same encoder instance. If you calculate some losses based on these outputs and call backward on them, the gradients will be accumulated in the parameters of encoder:

encoder = nn.Linear(1, 1)
decoders = nn.ModuleList()
for _ in range(3):
    decoders.append(nn.Linear(1, 1))

x = torch.randn(1, 1)
intermediate = encoder(x)
output0 = decoders[0](intermediate)
output1 = decoders[1](intermediate)
output2 = decoders[2](intermediate)

output0.backward(retain_graph=True)
print(encoder.weight.grad)
output1.backward(retain_graph=True)
print(encoder.weight.grad)
output2.backward()
print(encoder.weight.grad)
st80490
I have a data set for artificial neural network training, but I don't know how to transfer it from Excel. Can you help me?
st80491
Hi Busra, I'll add the translation of your text here, so that we can help better: "I have a data set for artificial neural network training, but I don't know how to transfer it. Can you help me?" If you want to fine-tune a model, have a look at this tutorial. This tutorial might also be a good starting point. PS: feel free to use a translation service (e.g. Google Translate) to post your question.
st80492
Hi, I was wondering if there are any repos or notebooks in the ecosystem which use PyTorch to demonstrate theory, rather than to do experiments (although the border between the two is a bit vague). There's a lot of beautiful theory on function approximation; for example, "Learning Real and Boolean Functions: When Is Deep Better Than Shallow" is quite readable. I think it builds on Vapnik–Chervonenkis theory. @smth - Vladimir Vapnik is at FAIR now, I think? So I guess there might be some "in-house teaching courses" that cover statistical learning theory and are implemented using PyTorch? @apaszke, @fmassa, is this something you guys would be interested in? Or is the focus of PyTorch leaning more towards experiments/applications?
st80493
If anyone knows of any good theory repos/projects in any deep learning framework (it doesn't have to be PyTorch), could you please link them here? I'd be interested in implementing them in PyTorch - I'll be doing some DL theory teaching/demos over the summer. This would also be very helpful to others who are giving DL demos/teaching. Thanks a lot for your help, Aj
st80494
Definitely not learning theory, but a notebook that teaches NLP properly using PyTorch as a tool, rather than showcasing PyTorch as a tech demo, is this: https://github.com/rguthrie3/DeepLearningForNLPInPytorch In general, to give you my completely honest and regularized opinion, something like "Learning Real and Boolean Functions" is much more valuable to teach in numpy than in PyTorch. I won't pretend otherwise: numpy is much more accessible. Unless you really need to teach / showcase a gradient-based learning method (SVM-SGD?) or teach GPU-based whatever, you can probably just stick to numpy.
st80495
On my list of things I'd like to see or do is an implementation of Alemi et al.: Deep Variational Information Bottleneck, or Shwartz-Ziv and Tishby: Opening the Black Box of Deep Neural Networks via Information. Also, I think the Bayesian neural network chapters in the textbook MacKay: Information Theory, Inference, and Learning Algorithms might be inspiration for some things you could do. Though for this, Soumith's comment regarding numpy vs. torch may apply as well. Naturally, I'd be very interested in what you do. Best regards, Thomas
st80496
I know this is an old conversation, but I am developing a full-fledged package for the information theory of deep learning in PyTorch, which has a lot of information-bottleneck functionality, including HSIC-bottleneck sigma networks (yes, they train without backprop). The library is currently in a testing phase, but there are notebooks available, which might be useful. Link to the repo: https://github.com/spino17/PyGlow - "This package is an attempt to implement Keras like API functionalities on PyTorch backend with functionalities supporting information theoretic methods which are relevant for understanding neural n..." Link to the documentation: https://pyglow.github.io/
st80497
Hello, I have this module where I try to add a positional embedding to the word embeddings. The code below runs without error; however, when I use the commented-out line in the forward pass, I get RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED. Code:

class QuestionEmbeddingLayer(nn.Module):
    def __init__(self, vocab, args):
        super().__init__()
        self.word_embedding = nn.Embedding(num_embeddings=vocab.shape[0], embedding_dim=vocab.shape[1])  # dim=300
        torch.nn.init.xavier_uniform(self.word_embedding.weight)
        self.word_embedding.weight.requires_grad = False
        self.word_embedding.weight.data.copy_(vocab)
        self.position_enc = nn.Embedding(num_embeddings=150, embedding_dim=300)
        tmp = get_sinusoid_encoding_table(150, 300, padding_idx=0)
        self.position_enc.weight.data.copy_(tmp)
        self.position_enc.weight.requires_grad = False
        self.bilstm = nn.LSTM(input_size=300, hidden_size=int(args.d_model/2), num_layers=2,
                              dropout=0.1, bidirectional=True, batch_first=False)

    def forward(self, q):
        word_emb = self.word_embedding(q.transpose(0, 1))
        # word_emb = self.word_embedding(q.transpose(0, 1)) + self.position_enc(q.transpose(0, 1))  # <- this gives the error!
        output, (hidden, cell) = self.bilstm(word_emb)
        q = torch.cat((hidden[-1], hidden[-2]), 1)  # (batchsize, hiddensize*2)
        return q

def get_sinusoid_encoding_table(n_position, d_hid, padding_idx=None):
    ''' Sinusoid position encoding table '''
    def cal_angle(position, hid_idx):
        return position / np.power(10000, 2 * (hid_idx // 2) / d_hid)

    def get_posi_angle_vec(position):
        return [cal_angle(position, hid_j) for hid_j in range(d_hid)]

    sinusoid_table = np.array([get_posi_angle_vec(pos_i) for pos_i in range(n_position)])
    sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])  # dim 2i
    sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])  # dim 2i+1
    if padding_idx is not None:
        # zero vector for padding dimension
        sinusoid_table[padding_idx] = 0
    print(sinusoid_table.shape)
    return torch.Tensor(sinusoid_table).cuda()

Any idea why this is acting this way? I've checked the shapes; they are all correct. I'm using CUDA 9, PyTorch 1.0.0.
st80498
It might be because self.word_embedding or q is not on the GPU. Since the rest of the code is not available, have you checked whether these tensors are on the GPU, e.g. by explicitly moving them with your_tensor.to(torch.device("cuda"))?
st80499
I just got the same problem in a piece of code that was otherwise running flawlessly. The difference is that I was concatenating rather than summing the embedding. I figured out that the error appeared when the size of the embedding is too small - not the dimension, but the number of items to embed. In my case, embedding 4 items was problematic; in fact it was problematic up to 13, but 14 worked fine. So I just added 10 dummy tokens. What is weird is that the error only appears when you start to use those embeddings in cat or similar. Maybe someone knows why small embeddings are a problem. Best
st80500
That’s kind of a weird issue. Do you have a small code example, so that we can have a look?
st80501
Here it is. It is a bit longish, but it is self contained import torch import torch.nn as nn from torch import tensor, cat from tqdm import tqdm, trange from random import seed torch.manual_seed(0) seed(0) c = ['(', ')', '+', '-', '*', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ' '] ioc = {c[i]:i for i in range(len(c))} toc = {'(':'PAR', ')':'PAR', '+':'OP', '-':'OP' ,'*':'OP' ,'/':'OP' , '0':'NUM', '1':'NUM', '2':'NUM', '3':'NUM', '4':'NUM', '5':'NUM', '6':'NUM', '7':'NUM', '8':'NUM', '9':'NUM', ' ':'SPC'} types = list(toc.values()) types.sort() iop = {types[i]:i for i in range(len(types))} class Typed_LSTM(nn.Module): def __init__(self, input_dim, hidden_dim, num_layer, device): nn.Module.__init__(self) self.hidden_dim = hidden_dim self.device = device self.LSTM = {'_':nn.LSTM(input_dim, hidden_dim, num_layer)} for t, l in self.LSTM.items(): self.add_module('LSTM'+t, l) def forward(self, xts): b = xts.shape[0] states = [] hc = None for xt in xts: xt = xt.reshape((1, xt.shape[0], xt.shape[1])) s, hc = self.LSTM['_'](xt, hc) states.append(s) states = cat(states) return states class PSolver(nn.Module): def __init__(self, ioc, iop, emb_dim, pos_dim, hidden_dim, num_layer, types, device): nn.Module.__init__(self) self.device = device self.ioc = ioc self.iop = iop #self.P = nn.Embedding(len(self.iop)+20, pos_dim) # add some dummy embeddings to avoid the cudnn bug self.P = nn.Embedding(len(self.iop), pos_dim) self.E = nn.Embedding(len(self.ioc), emb_dim) self.LSTM = Typed_LSTM(emb_dim+pos_dim, hidden_dim, num_layer, device) #self.LSTM = nn.LSTM(emb_dim+pos_dim, hidden_dim) self.H = nn.Linear(hidden_dim, hidden_dim) self.O = nn.Linear(hidden_dim, 1, bias=True) self.loss = nn.L1Loss(reduction='sum') self.trainer = torch.optim.Adadelta(self.parameters()) self.to(device) def encode(self, exp, tags): emb = self.E(tensor([[self.ioc[c] for c in exp]], device=self.device)).transpose(0,1) pemb = self.P(tensor([[self.iop[p] for p in tags]], device=self.device)).transpose(0,1) emb = cat([emb, pemb], dim=2) state = self.LSTM(emb)[-1][-1] value = self.H(state) value = self.O(value) return value def test(self, exp, tags): value = self.encode(exp, tags) return value.item() device = torch.device('cuda:0') solver = PSolver(ioc, iop, 10, 10, 50, 1, types, device) exp = '1+2-3+4-5+6-7' tags = ['NUM', 'OP', 'NUM', 'OP', 'NUM', 'OP', 'NUM', 'OP', 'NUM', 'OP', 'NUM', 'OP', 'NUM'] v = solver.test(exp, tags) The problems is around line 69/70 where adding a few dummy embeddings resolves the cuDNN problem. The problem otherwise appears with both the “house” lstm and torch native lstm. At times, on top of the cuDNN error, it also throws a bunch of : /pytorch/aten/src/THC/THCTensorIndex.cu:308: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [0,0,0] Assertion srcIndex < srcSelectDimSize failed. But it is not alway easy to reproduce this one. If it helps I’ll be glad to know. Best
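For what it's worth, the device-side assertion srcIndex < srcSelectDimSize usually points at an index that is >= num_embeddings reaching an nn.Embedding lookup, which would also explain why enlarging the table with dummy entries makes the error disappear. A quick bounds check (a sketch reusing the names from the snippet above) can rule that in or out:

# Sanity check (illustrative): every index fed to an nn.Embedding must be
# strictly smaller than its num_embeddings, otherwise CUDA raises the assertion above.
idx = torch.tensor([[solver.ioc[c] for c in exp]])
pos = torch.tensor([[solver.iop[p] for p in tags]])
assert idx.max().item() < solver.E.num_embeddings, "character index out of range"
assert pos.max().item() < solver.P.num_embeddings, "tag index out of range"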
st80502
I have a tensor x of shape [batch_size, 2]. I want to split it in shapes [batch_size, 1] and [batch_size, 1] then use one as a classification head and one as a regression head. Is there a way to do this or will I have to use different tensors?
st80503
I did that, but I am asking: if I pass these to BCE loss and MSE loss respectively, will the gradients be stored correctly so that optimization can happen properly?
st80504
Yes! Say you have loss_1 and loss_2 (computed, for example, from x[:, 0] and x[:, 1]). Then if you call .backward() on a combination of the two losses, for example

loss_sum = 0.5 * loss_1 + 0.5 * loss_2
loss_sum.backward()

optimization can happen properly.
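A slightly fuller sketch of the two-head setup (all shapes are illustrative; BCEWithLogitsLoss is used here instead of plain BCE so no explicit sigmoid is needed):

import torch
import torch.nn as nn

x = torch.randn(8, 2, requires_grad=True)        # stand-in for the model output of shape [batch, 2]
cls_target = torch.randint(0, 2, (8,)).float()   # binary classification labels
reg_target = torch.randn(8)                      # regression targets

cls_logits = x[:, 0]   # classification head
reg_pred = x[:, 1]     # regression head

loss_1 = nn.BCEWithLogitsLoss()(cls_logits, cls_target)
loss_2 = nn.MSELoss()(reg_pred, reg_target)

loss_sum = 0.5 * loss_1 + 0.5 * loss_2
loss_sum.backward()    # gradients flow back through both slices into x

Since slicing is an autograd-tracked operation, both heads contribute gradients to the shared tensor (and to whatever produced it).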
st80505
I'm planning on using PyTorch Hub segmentation models for my research project. The models look fine overall, but I'm wondering if they have been screened for possible errors. I know that they are pulled in from other people on GitHub, but is the PyTorch team taking the time to screen them? Or is it an automated process with no checks? Essentially: do models on PyTorch Hub have high credibility, like TensorFlow Hub? Thanks a lot for the initiative!
st80506
Yes, absolutely. We carefully vet and review them to the best extent we can. If they are on pytorch.org/hub, they are screened.
st80507
Amazing! Thanks @smth @ptrblck, the research community thanks you for making our job easier through Pytorch in so many ways and continually improving it
st80508
How can I configure the DataLoader to accept a batch size that is larger than the dataset size? Is it possible for the DataLoader to continue sampling from the dataset?

uti_va_loader = torch.utils.data.DataLoader(uti_va_data,
                                            batch_size=args.batch_size,
                                            shuffle=True,
                                            num_workers=0,
                                            drop_last=False,
                                            pin_memory=torch.cuda.is_available())

The loop below is completely skipped if the batch size is larger than the dataset size:

for batch_idx, data in enumerate(uti_va_loader):
    print(data.size())
st80509
Solved by HaziqRazali in post #3 Anyway if you artificially enlarge the number that dataset. len returns you will be able to. Thank you. That give me an idea to simply take the modulo of dataset.len, allowing me to set a batch size larger then the size of the dataset. I still needed to set __len__ to return a larger number, eit…
st80510
It's not about the DataLoader but the Dataset. The loading machinery is data-agnostic and just iterates over a list of indices whose length is set by the Dataset's __len__ method. How do you expect the DataLoader to accept a larger batch size if it cannot load non-existing data? It will throw errors. Anyway, if you artificially enlarge the number that the dataset's __len__ returns, you will be able to. If what you want to do is create a kind of infinite loop, you can use the built-in itertools.repeat, which allows you to iterate an iterator as many times as you want: https://docs.python.org/2/library/itertools.html#itertools.repeat
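As a sketch of the "infinite loop" idea (an alternative to itertools, not something from the thread), a plain generator keeps yielding batches forever and still reshuffles on every pass when shuffle=True:

def infinite_batches(loader):
    """Yield batches from a DataLoader indefinitely, restarting it after each pass."""
    while True:
        for batch in loader:
            yield batch

# Illustrative usage, assuming `dataset` and `torch` are already defined:
# loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)
# batches = infinite_batches(loader)
# data = next(batches)   # can be called more times than there are batches in the dataset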
st80511
Anyway if you artificially enlarge the number that dataset.__len__ returns you will be able to.

Thank you. That gave me an idea: simply take the index modulo the dataset length, allowing me to set a batch size larger than the size of the dataset. I still needed to set __len__ to return a larger number, either the length of the dataframe or the batch size.

Set the length of the dataset to the max of the dataset length and the batch size:

def __len__(self):
    return max(len(self.df), args.batch_size)

Take the index modulo the actual length of the data:

def __getitem__(self, idx):
    idx = idx % self.data_len

Template below:

class uti_dataset(torch.utils.data.Dataset):
    def __init__(self, args, data_path):
        # load dataset
        self.df = pd.DataFrame()
        ...
        self.data_len = len(self.df)

    def __len__(self):
        return max(len(self.df), args.batch_size)

    def __getitem__(self, idx):
        idx = idx % self.data_len
        filenames = self.df.iloc[idx]["filepaths"]
        # load and transform data
        # ...
        return images
st80512
My model:

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # self.fc1 = nn.Linear(3, 32)
        self.gru = nn.GRU(3, 256, 3)
        # nn.init.xavier_uniform_(self.gru.weight)
        self.fc3 = nn.Linear(256, 1)
        nn.init.xavier_uniform_(self.fc3.weight)

    def forward(self, x):
        batch = x.size(0)
        # out = self.fc1(out)
        out = torch.transpose(x, 0, 1)
        hidden = self.__init__hidden(batch)
        out, hidden = self.gru(out, hidden)
        # print(hidden.size())  # torch.Size([3, 128, 256])
        out = self.fc3(hidden)
        return out

    def __init__hidden(self, batch):
        hidden = torch.zeros(3, batch, 256).to(device)
        return hidden

I need the fc3 layer to receive not all 3 layers of the hidden state, but only the last one. How do I write this?
st80513
Solved by phan_phan in post #2 In your example, hidden[-1] is the hidden state for the last step, for the last layer. It is shaped [batch_size, hidden_size], so self.fc3(hidden[-1]) will do fine.
st80514
In your example, hidden[-1] is the hidden state for the last step, for the last layer. It is shaped [batch_size, hidden_size], so self.fc3(hidden[-1]) will do fine.
st80515
What does the torch.cuda.is_available() result mean? Does it mean the CUDA toolchain (nvcc, …) is available, or does it mean CUDA code can run (NVIDIA GPU/drivers/…)? I would like to build extensions if the CUDA tools are available, even if no GPU is present. It seems I can import CUDAExtension only to fail when building starts. My current workaround is to try-catch everything:

try:
    from torch.utils.cpp_extension import CUDAExtension
    import torch
    assert torch.cuda.is_available(), "No CUDA found"
except (ImportError, OSError, AssertionError) as e:
    CUDAExtension = None
    print("No CUDA was detected, building without CUDA error: {}".format(e))

But is there a standard way?
st80516
My collate function for my DataLoader involves the construction of a sparse matrix via .to_sparse(). The .to_sparse() call is the bottleneck in the data loading process and slows down training significantly. Is there any way to get around this? I'm thinking of saving the sparse matrix for each batch with torch.save() and then loading that instead of calling .to_sparse() every time. I'm not exactly sure how to go about it, though - such that a unique hash is constructed for each batch. Thanks!
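A rough sketch of that caching idea (names and cache layout are illustrative; it keys the cache by sample index rather than by a batch hash, which sidesteps the hashing question since batches are just collections of samples):

import os
import torch

CACHE_DIR = "sparse_cache"   # illustrative location on disk
os.makedirs(CACHE_DIR, exist_ok=True)

def cached_sparse(dense, sample_idx):
    """Return the sparse version of `dense`, converting and saving it only the first time."""
    path = os.path.join(CACHE_DIR, f"{sample_idx}.pt")
    if os.path.exists(path):
        return torch.load(path)
    sparse = dense.to_sparse()
    torch.save(sparse, path)
    return sparse

The collate_fn would then call cached_sparse(sample, idx) per sample, paying the .to_sparse() cost only on the first epoch.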
st80517
I am using a bidirectional LSTM for a binary classification model on text sequences.

self.rnn = nn.LSTM(embed_size, hidden_size, batch_first=True, bidirectional=True)
out, (ht, ct) = self.rnn(X_packed)
print(ht.shape)

For bs=64 and hidden_size=128, the shape of ht is 2 x 64 x 128. This is then pushed to an FC layer and finally passed through a sigmoid activation function. Should the input to the FC layer be ht[-1], i.e. 64 x 128, or a concatenated version of the two, torch.cat([ht[0], ht[-1]], dim=1), i.e. 64 x 256?
st80518
Here ht[0] corresponds to the last output of the forward LSTM, and ht[1] corresponds to the last output of the backward LSTM. You used a bidirectional LSTM, so you might as well use the output of both directions! Your solution torch.cat([ht[0], ht[-1]], dim=1) seems correct.
st80519
I have a float tensor B, B = torch.randn(2, 3, 4), and I also have a bool tensor C that is the same size as B and whose elements are true or false. I want to extract the values in B at the locations where C is true. What should I do?
st80520
You could just index B with the BoolTensor:

B = torch.randn(2, 3, 4)
C = torch.randint(0, 2, (2, 3, 4)).bool()
res = B[C]
st80521
class NetDataset(Dataset):
    def __init__(self):
        xy = np.loadtxt('data_2d.txt', delimiter=';', dtype=np.float32)
        self.len = xy.shape[0]
        self.x_data = torch.from_numpy(xy[:, 0:-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.len

self.x_data = torch.from_numpy(xy[:, 0:-1]) gives me the following view: [0.1 0.4 0.5 0.2 0.8 0.5]. How can I get this instead: [[0.1 0.4 0.5] [0.2 0.8 0.5]]?
st80522
dataset = NetDataset()
train_loader = DataLoader(dataset=dataset, batch_size=128, shuffle=True)
…
for i, (inputs, labels) in enumerate(train_loader):
    y_pred = model(inputs)
    loss = criterion(y_pred, labels)
st80523
slavavs: [[0.1 0.4 0.5][0.2 0.8 0.5]] I am currently using nn.Linear. I want to use nn.GRU with [[0.1 0.4 0.5][0.2 0.8 0.5]] - the sequence length is 2.
st80524
Sorry, what I meant was, What do you expect xy to be here? And what do you expect xy[:, 0:-1] to give you? I am not familiar with such notation.
st80525
We can use torch.nn.Conv2d to create a usual convolution layer, but what if I want to create a convolution layer with a kernel of a novel shape, such as a 'T' shape (meaning a kernel weight of [w1 w2 w3; 0 w4 0; 0 w5 0])? Could somebody help me? Thanks very much!
st80526
You can make use of torch.nn.functional.conv2d():

import torch.nn.functional as F

kernel = torch.Tensor([[[[1, 2, 3],
                         [0, 4, 0],
                         [0, 5, 0]]]])
data = torch.randn(1, 1, 10, 10)
output = F.conv2d(data, kernel)
st80527
Thanks for your answer! But I want to train the layer in a network, meaning w1 to w5 are learnable. Just like dilated convolution, some weights are 0 and the others are learnable. Do you know how to do this?
st80528
If you initialize your kernel with zeros in the right places, you could use register_hook to zero the corresponding gradients and thus avoid learning those parameters.
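A hedged sketch of that suggestion (the mask below mirrors the 'T' shape asked about above):

import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)

# 1 where the weight is learnable, 0 where it must stay zero
mask = torch.tensor([[[[1., 1., 1.],
                       [0., 1., 0.],
                       [0., 1., 0.]]]])

with torch.no_grad():
    conv.weight.mul_(mask)                            # zero out the frozen positions once

conv.weight.register_hook(lambda grad: grad * mask)  # zero their gradients on every backward

Note that optimizers with weight decay or momentum can still nudge the masked entries away from zero, so the approach shown later in this thread (building the kernel from scalar parameters) is often more robust.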
st80529
You can still use the functional approach:

class NovelConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.Tensor([[w1, w2, w3], [w4, 0, 0], [w5, 0, 0]]))

    def forward(self, x):
        return F.conv2d(x, self.W)

Backpropagation will work by doing this.
st80530
Thank you! As far as I know, register_hook is used to get intermediate results. How do I use register_hook to zero the gradients? Could you please explain it more specifically?
st80531
Thank you! In this approach I need to extract w1 to w5 from the original layer and assign them to W - how do I do this?
st80532
Thank you! I used your function, but I get NameError: name 'w1' is not defined. How can I solve this?
st80533
I found a solution for a problem similar to yours. Check the code below for a convolution block with symmetric coefficients. I initially tried to create the weight matrix as @Naman-ntc did, but the graph was broken and the gradients did not flow to the scalar variables. For this reason I create the weight matrix by summing the coefficients into a zero matrix. Any simpler alternatives are welcome.

class Conv2d_symmetric(nn.Module):
    def __init__(self):
        super(Conv2d_symmetric, self).__init__()
        self.a = nn.Parameter(torch.randn(1))
        self.b = nn.Parameter(torch.randn(1))
        self.c = nn.Parameter(torch.randn(1))
        self.bias = None
        self.stride = 2
        self.padding = 1
        self.dilation = 1
        self.groups = 1

    def forward(self, input):
        # in case we use a GPU we need to create the weight matrix there
        device = self.a.device
        weight = torch.zeros((1, 1, 3, 3)).to(device)
        weight[0, 0, 0, 0] += self.c[0]
        weight[0, 0, 0, 1] += self.b[0]
        weight[0, 0, 0, 2] += self.c[0]
        weight[0, 0, 1, 0] += self.b[0]
        weight[0, 0, 1, 1] += self.a[0]
        weight[0, 0, 1, 2] += self.b[0]
        weight[0, 0, 2, 0] += self.c[0]
        weight[0, 0, 2, 1] += self.b[0]
        weight[0, 0, 2, 2] += self.c[0]
        # print("weight = ", weight)
        # print("input = ", input)
        return F.conv2d(input, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
st80534
Hello everyone I am working on a simple LSTM demo but I keep running into the following error during training: RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 90 and 89 in dimension 1 Any help would be greatly appreciated! Full stack trace: RuntimeError: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate return [default_collate(samples) for samples in transposed] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 68, in <listcomp> return [default_collate(samples) for samples in transposed] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 43, in default_collate return torch.stack(batch, 0, out=out) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 90 and 89 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:711 Hyperparameters: SEQ_LENGTH = 90 # 90 day average BATCH_SIZE = 2 EPOCHS = 100 NUM_FEATURES = 4 HIDDEN_SIZE = 32 NUM_LAYERS = 2 DROPOUT = 0.2 NUM_DIR = 1 LEARNING_RATE = 0.002 Dataset: class TimeSeriesDataset(data.Dataset): def __init__(self, samples, targets, seq_length): 'Initialization' self.samples = samples self.targets = targets self.seq_length = seq_length def __getitem__(self, index): x = torch.tensor(self.samples.iloc[index:index + self.seq_length].values).float() y = torch.tensor(self.targets.iloc[index:index + self.seq_length].values).float() return x, y def __len__(self): return len(self.samples) Dataloader: training_dataset = TimeSeriesDataset(x_train, y_train, SEQ_LENGTH) test_dataset = TimeSeriesDataset(x_test, y_test, SEQ_LENGTH) training_generator = torch.utils.data.DataLoader( training_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True) test_generator = torch.utils.data.DataLoader( test_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True) Training code: def training(model, epochs, state_dim): for epoch in range(epochs): # Initialize states # (num_layers * num_directions, batch, hidden_size) states = (torch.zeros(state_dim).to(device), torch.zeros(state_dim).to(device)) # Training for step, (x_batch, y_batch) in enumerate(training_generator): x_batch = x_batch.permute(1, 0, 2) y_batch = y_batch.permute(1, 0).unsqueeze(dim=2) # # Move to GPU x_batch, y_batch = x_batch.to(device), y_batch.to(device) # (seq_len, batch, input_size) states = [state.detach() for state in states] prediction, states = model(x_batch, states) model.zero_grad() loss = criterion(prediction, y_batch) loss.backward() optimizer.step() print('Epoch [{}/{}], Step[{}], Loss: {:.4f}' .format(epoch+1, epochs, step, loss.item())) training(model, EPOCHS, state_dim=(NUM_LAYERS * NUM_DIR, BATCH_SIZE, HIDDEN_SIZE))
st80535
Another perplexing detail: if I set the batch size to 1, the data is loaded successfully and the model trains for several epochs, but with a batch size bigger than 1 I get the aforementioned error.
st80536
Based on the error message it looks like the samples are stacked in dim0 and have apparently a different length (90 vs 89). Are you loading the samples as [seq_len, batch_size, features]? Note that the default collate_fn will try to stack the tensors in dim0, which will increase your seq_len, if you are using the aforementioned format.
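If the variable-length samples are intentional, a sketch of a padding collate_fn (assuming each sample is an (x, y) pair with the sequence in dim 0) could look like this:

import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # batch is a list of (x, y) pairs whose seq_len (dim 0) may differ
    xs, ys = zip(*batch)
    x_padded = pad_sequence(xs, batch_first=True)   # [batch, max_seq_len, features]
    y_padded = pad_sequence(ys, batch_first=True)
    lengths = torch.tensor([x.size(0) for x in xs]) # true lengths, useful for packing later
    return x_padded, y_padded, lengths

# loader = torch.utils.data.DataLoader(dataset, batch_size=2, collate_fn=pad_collate)

Alternatively, since the Dataset above slices iloc[index:index + seq_length] but reports __len__ as the full number of rows, the trailing indices yield short slices; returning len(self.samples) - self.seq_length + 1 from __len__ should also make every sample the full length.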
st80537
I'm searching for a multiplication that is applied between a 1-d tensor and an n-d tensor. Expected behaviour:

A = torch.tensor(np.array([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]],
                           [[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]]]))
v = torch.tensor(np.array([2.0, 1.0]))
torch.element_wise(v, A)  # the operation I am looking for

Expected result:

tensor([[[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]],
        [[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]]])
st80538
I found a solution, but I wonder if there is a simpler one:

def ev(a, b):
    N = b.shape[0]
    shp = b.shape[1:]
    return torch.mul(a, b.view(N, -1).transpose(0, 1)).transpose(0, 1).view(N, *shp)

R = ev(v, A)  # gives the right answer
st80539
You can do the following:

v.view(-1, 1, 1).expand_as(A) * A

Note that automatic broadcasting can take care of the expand, so you can simply do:

v.view(-1, 1, 1) * A
st80540
That works as well. I usually prefer the version with functions compared to advanced indexing because I know when I get a view of the data and when I get a copy.
st80541
I've been trying to implement the KL divergence using tf/pytorch and numpy. So far (thanks to @Nikronic) the tf and pytorch results are similar, but the numpy version is quite off, and I cannot find any reason why. One thing I've noticed is that if preds and labels contain one array repeated multiple times (np.broadcast_to(np.random.uniform(0., 1., (11,)), (64, 12, 300, 11))), the results are very similar. Any help will be more than appreciated.

preds = np.random.uniform(0., 1., (64, 12, 300, 11))
labels = np.random.uniform(0., 1., (64, 12, 300, 11))

preds_tf = tf.distributions.Categorical(probs=tf.convert_to_tensor(preds))
labels_tf = tf.distributions.Categorical(probs=tf.convert_to_tensor(labels))
tf_res = tf.reduce_mean(tf.distributions.kl_divergence(preds_tf, labels_tf))

preds_torch = torch.distributions.Categorical(probs=torch.from_numpy(preds))
labels_torch = torch.distributions.Categorical(logits=torch.from_numpy(labels).log())
torch_res = torch.mean(kl_divergence(preds_torch, labels_torch))

np_res = np.mean(np.sum(preds * np.log(preds / labels), axis=-1))

print(tf_res.numpy(), torch_res.item(), np_res)
st80542
Hi, I implemented my own collate_fn as I need to pad data up to a variable length. The length up to which the batch has to be padded is also determined in this function. When I set num_workers > 0 in the DataLoader, it runs fine on my CPU. However, on the GPU I get the following error:

RuntimeError: cuda runtime error (3) : initialization error at /pytorch/aten/src/THC/THCCachingAllocator.cpp:507

I read several similar threads where it is recommended to set the start method for multiprocessing like this:

torch.multiprocessing.set_start_method('forkserver')

Now if I do that, I get the following error on both my CPU and GPU:

AttributeError: Can't pickle local object 'MyDataset.get_collate_fn.<locals>.collate_fn'

Maybe important to know: I pass the collate_fn to the DataLoader like this:

class MyDataset(Dataset):
    ...
    @staticmethod
    def get_collate(device):
        def collate_fn(batch):
            ...
            batch = batch.to(device)
        return collate_fn

data_loader = DataLoader(MyDataset(), collate_fn=MyDataset.get_collate(device))
st80543
Same thing here, but only with set_start_method('spawn') or 'forkserver'; the 'fork' start method is fine. PyTorch 1.2, Linux.
st80544
Our setup involves an initial part of the network (the input interface) which runs on separate GPU cards. Each GPU gets its own portion of data (model parallelism) and processes it separately. Each input interface is, in turn, itself a complex nn.Module. Every input interface can occupy one or several cards (say, interface_1 runs on GPUs 0 and 1, interface_2 on GPUs 2 and 3, and so on). We need to keep the weights of these input interfaces the same throughout training. We also need them to run in parallel to save training time, which is already weeks for our scenario. The best idea we could think of was initializing the interfaces with the same weights and then averaging the gradients for them. As the interfaces are identical, updating the same weights with the same gradients should keep them identical over the whole training process, thus achieving the desired "shared weights" mode. However, I cannot find any good solution for changing the values of these weights and their gradients, represented as Parameter in PyTorch. Apparently, PyTorch does not allow you to do so. Our current state: if we copy.deepcopy the parameter.data of the "master" interface and assign it to parameter.data of the "slave" interface, the values are indeed changed, but .to(device_id) does not work and keeps them on the "master" device. However, we need them to move to a "slave" device. Could someone please tell me if this is possible at all or, if not, whether there is a better way to implement shared weights along with parallel execution for our scenario?
st80545
Based on your description, I assume some parallel methods could probably help in your use case. The usage of the .data attribute is not recommended; you could try to adapt some synchronization workflows from DDP.
st80546
You are right, the whole system is a variation of model parallelism. We were also thinking about data parallelism across the interfaces, but that does not work because in that case PyTorch puts all input data on GPU 0 before distributing it across the parallel nodes. For our scenario, the input data is so large that it alone takes more than the entire 32 GB of GPU 0 memory.
st80547
In that case, try to also stick to the DDP approach (I would generally recommend DDP instead of DataParallel).
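For reference, a minimal, generic DDP skeleton (one process per GPU); this is standard boilerplate, not tailored to the model-parallel input interfaces discussed above:

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size, model_fn):
    # Assumes the rendezvous env vars (MASTER_ADDR, MASTER_PORT) are set,
    # e.g. by the launcher that spawns one process per GPU.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = model_fn().to(rank)
    ddp_model = DDP(model, device_ids=[rank])  # gradients are averaged across ranks on backward
    # ... usual training loop on ddp_model ...
    dist.destroy_process_group()

The gradient averaging that DDP performs during backward is essentially the "same weights, same averaged gradients" scheme described above, done automatically.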
st80548
Hello! I created an environment with conda and I want to install pytorch in it, but it doesn’t work. After I get inside my environment with source activate env_name I tried this: conda install pytorch torchvision -c pytorch (I also tried it like this: conda install -c pytorch pytorch torchvision) but I am getting this error: Using Anaconda Cloud api site https://api.anaconda.org Fetching package metadata: ...... Solving package specifications: ...... Error: Could not find some dependencies for pytorch: mkl >=2018, cudatoolkit >=9.0,<9.1, blas * mkl, cudatoolkit >=10.0,<10.1, cudatoolkit >=9.2,<9.3, blas * openblas, cudnn 7.0.*, cudatoolkit 9.* Did you mean one of these? pytorch, pytorch-gpu, pytorch-cpu Did you mean one of these? cudatoolkit You can search for this package on anaconda.org with anaconda search -t conda cudatoolkit 9.* (and similarly for the other packages) Here are my installed packages: backports 1.0 py34_0 backports.shutil-get-terminal-size 1.0.0 <pip> decorator 4.0.11 py34_0 get_terminal_size 1.0.0 py34_0 ipython 4.2.0 py34_0 ipython-genutils 0.1.0 <pip> ipython_genutils 0.1.0 py34_0 libgfortran 1.0 0 numpy 1.9.2 py34_0 openssl 1.0.2l 0 path.py 10.0 py34_0 pexpect 4.2.1 py34_0 pickleshare 0.7.4 py34_0 pip 9.0.1 py34_1 ptyprocess 0.5.1 py34_0 python 3.4.5 0 readline 6.2 2 scipy 0.16.0 np19py34_0 setuptools 27.2.0 py34_0 simplegeneric 0.8.1 py34_1 six 1.10.0 py34_0 sqlite 3.13.0 0 tk 8.5.18 0 traitlets 4.3.1 py34_0 wheel 0.29.0 py34_0 xz 5.2.3 0 zlib 1.2.11 0 What should I do? Thank you!
st80549
It seems you are using Python3.4. Could you create a new conda environment with Python >= 3.5 and try to install PyTorch again?
st80550
I want to know: are pad_packed_sequence and pack_padded_sequence necessary when using a biLSTM?
st80551
Please see "Understanding pack_padded_sequence and pad_packed_sequence" and https://suzyahyah.github.io/pytorch/2019/07/01/DataLoader-Pad-Pack-Sequence.html (referred to in that article).
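A minimal sketch of the pack/pad round trip around a bidirectional LSTM (all sizes are illustrative); packing is not strictly required, but it keeps the LSTM from running over the padding:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)

x = torch.randn(3, 5, 8)            # [batch, max_seq_len, features], already padded
lengths = torch.tensor([5, 3, 2])   # true length of each sequence

packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
packed_out, (h, c) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)  # back to padded form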
st80552
Hi, I have two tensors like the ones below:

hc1 = torch.randn(5, 1, 1, 1)
hc2 = torch.randn(5, 1, 1, 1)

I want to concatenate these two tensors as hc3 and then sort hc3 based on the values of hc1. How can I do it? Thanks
st80553
Solved by phan_phan in post #4 Hi, You can get the permutation by calling perm = hc1.argsort(dim=0).squeeze(). In your example it returns tensor([2, 1, 0, 4, 3]) Then you can rearrange hc2 with this permutation : hc2_rearranged = hc2[perm]. In your example it returns tensor([[3.], [2.], [1.], [5.], …
st80554
Could you post a simple example with some values? You could concatenate these tensors using out = torch.cat((hc1, hc2), dim=dim); however, I'm not sure how you would like to sort them.
st80555
Dear ptrblck, thanks for your reply. Sure - I'll simplify my question by reducing the tensors to 2 dimensions. Suppose we have these two tensors:

hc1 = tensor([[30], [20], [10], [50], [40]])
hc2 = tensor([[1], [2], [3], [4], [5]])

1, 2, 3, 4, 5 correspond, respectively, to 30, 20, 10, 50, 40. After sorting hc1 we would have 10, 20, 30, 40, 50 and 3, 2, 1, 5, 4, because hc1 and hc2 are matched one-to-one. I thought that I should concatenate them as hc3 and sort it based on hc1. I hope I explained my question well enough. Thanks
st80556
Hi, You can get the permutation by calling perm = hc1.argsort(dim=0).squeeze(). In your example it returns tensor([2, 1, 0, 4, 3]) Then you can rearrange hc2 with this permutation : hc2_rearranged = hc2[perm]. In your example it returns tensor([[3.], [2.], [1.], [5.], [4.]]) You can always concatenate those afterwards
st80557
Thanks for your reply. I don't understand how to get the sorted hc1 itself, since you obtained the sorted indices of hc1. Is there a way? Thanks
st80558
phan_phan: hc2_rearranged = hc2[perm] Dear phan_phan, oh, what a funny question I asked. It just needs: print(hc1[perm]). Many thanks
st80559
For example, I want to convert torch.tensor([1, 2, 3, 4, 5, 6]) to torch.tensor([[1, 0, 0], [2, 3, 0], [4, 5, 6]]). What is the most efficient way to do the conversion?
st80560
Currently I take tril_indices from numpy and use advanced indexing to assign into an all-zeros matrix.
st80561
Besides numpy, PyTorch has now implemented tril_indices:

x = torch.tensor([1., 2., 3., 4., 5., 6.])
m = torch.zeros((3, 3))
tril_indices = torch.tril_indices(row=3, col=3, offset=0)
m[tril_indices[0], tril_indices[1]] = x
st80562
Hello! I have sequences of data created from multiple agents playing a game for a given number of timesteps in sync, thus with a shape of (#agents, #timesteps, x, …), where the data denoted by 'x, …' can be of any shape. What would be an easy way to slice the 'timesteps'-long sequences of data into smaller sequential parts? A specific example (with a picture as data): from (32, 1024, 3, 28, 28) into (128, 256, 3, 28, 28), as we create 256-long sequences from the 1024-long sequences and as a result increase our "number" of data samples from 32 to 4*32=128. My only guess would be fancy indexing, as one would do on a list of lists of lists, but there must be a better way. Thanks in advance.
st80563
Solved by spanev in post #3 Hi @Dudly01, Yes this is the way to go, why would the sequence order not be intact? view doesn’t modify the underlying sequence, just how you “see” it. and you can simply do this to get the shape list: b = list(a.size())
st80564
I might have come up with a solution, but I am not sure whether the sequence order stays intact:

# Long-sequence data
a = torch.randn((32, 1024, 3, 28, 28))

# Get the original shape and modify it
b = [i for i in a.size()]
b[0] = -1
b[1] = int(b[1] / 4)  # desired sequence length

c = a.view(b)
>>> c.shape
torch.Size([128, 256, 3, 28, 28])
st80565
Hi @Dudly01, Yes this is the way to go, why would the sequence order not be intact? view doesn’t modify the underlying sequence, just how you “see” it. and you can simply do this to get the shape list: b = list(a.size())
st80566
Hello, I am very new to PyTorch and I am now trying to train my first neuronal net with my own data. Sadly I get an error message I could not figure out a solution for. It would be great if you could help me. I am getting the following error: ValueError Traceback (most recent call last) in 13 for inputs, labels in training_loader: 14 outputs = model(inputs) —> 15 loss = criterion(outputs, labels) 16 17 optimizer.zero_grad() ~\Anaconda3\envs\opencv4\lib\site-packages\torch\nn\modules\module.py in call(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: –> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) ~\Anaconda3\envs\opencv4\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target) 940 def forward(self, input, target): 941 return F.cross_entropy(input, target, weight=self.weight, –> 942 ignore_index=self.ignore_index, reduction=self.reduction) 943 944 ~\Anaconda3\envs\opencv4\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2054 if size_average is not None or reduce is not None: 2055 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2056 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2057 2058 ~\Anaconda3\envs\opencv4\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1879 if target.size()[1:] != input.size()[2:]: 1880 raise ValueError(‘Expected target size {}, got {}’.format( -> 1881 out_size, target.size())) 1882 input = input.contiguous().view(n, c, 1, -1) 1883 target = target.contiguous().view(n, 1, -1) ValueError: Expected target size (100, 4), got torch.Size([100]) This is my code: import torch import matplotlib.pyplot as plt import numpy as np import torch.nn.functional as F from torch import nn from torchvision import datasets, transforms from torchvision import transforms import pandas class CsvDataset(torch.utils.data.Dataset): def __init__(self, csv_file, transforms=None): """ Args: csv_file (string): Path to csv file transforms (callable, optional): Optional tranforms to be applied on a sample """ self.df = pandas.read_csv(csv_file, sep=';') self.df = self.df.loc[:, ~self.df.columns.str.contains('^Unnamed')] #Delete last colum self.transforms = transforms def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() sample_tensor = torch.tensor(self.df.iloc[idx:idx+1, 4:].values).float() lbl_to_idx = { 'z': 0, 'g': 1, 'e': 2, 'u': 3 } lbl_val = self.df.iloc[idx:idx+1, 1:2].values label = torch.tensor(lbl_to_idx[lbl_val[0,0]]).float() sample = {'sample': sample_tensor, 'label': label} if self.transforms: # Something Wrong?????????? 
sample = self.transform(sample) return sample_tensor, label def __len__(self): return self.df.shape[0] dataset = CsvDataset(csv_file='C:/data.csv') # __Split Dataset__ train_size = int(0.8 * len(dataset)) test_size = len(dataset) - train_size train_dataset, validation_dataset = torch.utils.data.random_split(dataset, [train_size, test_size]) training_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=100, shuffle=True) validation_loader = torch.utils.data.DataLoader(validation_dataset, batch_size=100, shuffle=False) class Classifier(nn.Module): def __init__(self, D_in, H1, H2, D_out): super().__init__() #Define neural net: self.linear1 = nn.Linear(D_in, H1) self.linear2 = nn.Linear(H1, H2) self.linear3 = nn.Linear(H2, D_out) def forward(self, x): x = F.relu(self.linear1(x)) x = F.relu(self.linear2(x)) x = self.linear3(x) return model = Classifier(224, 125, 65, 4) #224 Inputs, 125 nodes in H1, 65 nodes in H2, 4 ouput classes model criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.0001) epochs = 15 running_loss_history = [] running_corrects_history = [] val_running_loss_history = [] #For Validation with validation dataset val_running_corrects_history = [] for e in range(epochs): running_loss = 0.0 running_corrects = 0.0 val_running_loss = 0.0 val_running_corrects = 0.0 for inputs, labels in training_loader: outputs = model(inputs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() _, preds = torch.max(outputs, 1) running_loss += loss.item() running_corrects += torch.sum(preds == labels.data) else: #Validation: with torch.no_grad(): for val_inputs, val_labels in validation_loader: val_outputs = model(val_inputs) val_loss = criterion(val_outputs, val_labels) _, val_preds = torch.max(val_outputs, 1) val_running_loss += val_loss.item() val_running_corrects += torch.sum(val_preds == val_labels.data) epoch_loss = running_loss/len(training_loader) epoch_acc = running_corrects.float()/ len(training_loader) running_loss_history.append(epoch_loss) running_corrects_history.append(epoch_acc) val_epoch_loss = val_running_loss/len(validation_loader) val_epoch_acc = val_running_corrects.float()/ len(validation_loader) val_running_loss_history.append(val_epoch_loss) val_running_corrects_history.append(val_epoch_acc) print('epoch :', (e+1)) print('training loss: {:.4f}, acc {:.4f} '.format(epoch_loss, epoch_acc.item())) print('validation loss: {:.4f}, validation acc {:.4f} '.format(val_epoch_loss, val_epoch_acc.item())) Thank you very much!
st80567
Solved by phan_phan in post #4 Okay, it seems that you have a phantom dimension that you need to get rid of. The tensor inputs has shape [batch, 1, 224], you need to squeeze it to have [batch, 224] instead. Instead of calling outputs = model(inputs), try : outputs = model(inputs.squeeze(1))