st103700
(briefly thought I’d found the source, but that was for alpha_dropout, source for _functions.dropout.Dropout.apply seems elsewhere)
st103701
empirically, it seems to be short-circuited. For example, run:

import torch
from torch import nn
import time

dropout = 0.01
layers = []
for i in range(10000):
    layers.append(nn.Dropout(dropout))
x = torch.rand(10000, requires_grad=True)
for l in layers:
    x = l(x)
back_start = time.time()
x.sum().backward()
print('back time', time.time() - back_start)

Result for dropout=0: back time 0.02752685546875; for dropout=0.01: back time 0.11877131462097168. Additional evidence: passing inplace=True fails if dropout is not 0, but works fine with dropout=0; in addition, the backward time is unchanged whether inplace is True or not, for dropout=0.
st103702
I think this is the implementation of the nn Module for Dropout. And so yes, this check is already done.
st103703
(note: oh, the source code is in the api folder. Interestingly, I keep ending up at https://github.com/pytorch/pytorch/tree/5d474e1812b2343b7dc1c9561f5f154334ccae38/torch/csrc/nn, and then I have no idea where to go next)
st103704
Recently I found this repo. A quick question: can this tool be used to build the source on a local machine to generate a wheel file every day? Does it automatically detect the CUDA version versus the non-CUDA version?
st103705
How do I make a closure() that re-evaluates the model with new weights? Any comment on my attempt below? (modified from here)

for input, target in dataset:
    def closure(new_weight_idx=None, new_weight=None):
        # update the model with the new_weight
        if (new_weight_idx is not None) and (new_weight is not None):
            state_dict = model.state_dict()
            key = list(state_dict.keys())[new_weight_idx]
            old_weight = state_dict[key]
            state_dict[key] = new_weight
            model.load_state_dict(state_dict)
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        # put the old_weight back
        # TODO
        return loss
    optimizer.step(closure)

Use case: trying to implement Algorithm 7.1 (Line Search Newton–CG) from Nocedal's numerical optimization book.
st103706
I don’t really understand your question. Could you explain your use case a bit more? Is the closure working or do you have some issues with it?
st103707
@ptrblck I do apologize, I have updated my question. Previously I accidentally hit the tab button, then it was posted right away, thank you.
st103708
Currently the function is passed into .step() and called without arguments. See this line of code. However, you could pass arguments using lambda and most likely partial. Here is a small example:

model = nn.Linear(20, 2)
loss_fn = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=1.)
input = torch.randn(1, 20)
target = torch.empty(1, dtype=torch.long).random_(2)

optimizer.step(lambda: closure(0, torch.randn(size=model.weight.size())))

I cannot comment on the correctness of the approach, since the paper is behind a paywall and I'm currently out of office, so I cannot see it.
st103709
Hi, I was reading the docs on CUDA semantics, where the following snippet is available:

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

I am not clear on the indexing here. Basically, I think I understand that we can create devices with torch.device, and that all of our GPUs will be devices of type cuda and be given an index (always starting at 0). In this example, I understand the lines that follow (and which I omitted) regarding the use of the context manager, etc. But I don't understand the indexing above. In this example: Are there two or three GPUs available? Why is the default device (cuda) not given an index? Is it the same as the device cuda:0? Is there anything special about the default device? Is there always a default device even if I give all devices an index? Why is there no cuda:1? Also, is it correct that the indexing in PyTorch (always starting at zero) is independent of the available device ids that the environment variable CUDA_VISIBLE_DEVICES holds? If someone could clear up these questions, or just shed some light on the naming and indexing of devices, I'd be very grateful. Best wishes, Max
st103710
1: there are 3 GPUs available and visible (indices 0, 1, 2).
2: because otherwise it would not be the default device anymore but a specific one.
3: usually yes, but not necessarily. You can set the default device to another index if you want to.
4: despite the missing index: no.
5: yes. If you don't call x.to(device) but x.cuda() without specifying an index, the default GPU will be used again.
6: probably simply left out to provide a compact example.
7: yes. If you set CUDA_VISIBLE_DEVICES to 5,6 and 7, cuda:0 will be GPU 5 and so on.
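To make the mapping concrete, a small sketch (the environment variable value and indices are only illustrative, and the second device is only touched if it exists):

import torch

# with CUDA_VISIBLE_DEVICES=5,6,7 set before launching Python, cuda:0 maps to
# physical GPU 5, cuda:1 to GPU 6 and cuda:2 to GPU 7
default_dev = torch.device('cuda')   # the current/default CUDA device, no explicit index
x = torch.randn(2, 3)

if torch.cuda.is_available():
    print(torch.cuda.current_device())    # index of the default device, usually 0
    y = x.cuda()                           # no index given -> goes to the default device
if torch.cuda.device_count() > 1:
    z = x.to(torch.device('cuda:1'))       # explicitly the second visible GPU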
st103711
RuntimeError: Expected object of type CPUFloatType but found type CUDAFloatType for argument #2 'weight’ deleted
st103712
Check if your model and data are on the same device. This error may be caused by, for example, the model being on the GPU (model.cuda()) while the input data is on the CPU.
st103713
Agree with zeakey. In addition, I just want to add that we can use x.is_cuda where x is the tensor of interest to check whether it is on GPU or not.
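As a quick illustration of that check (the tiny model and tensor here are placeholders, not from the thread):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
data = torch.randn(4, 10)

if torch.cuda.is_available():
    model = model.cuda()   # parameters now live on the GPU
    # data = data.cuda()   # forgetting this line triggers the CPUFloatType/CUDAFloatType error

print(next(model.parameters()).is_cuda)  # device of the model's weights
print(data.is_cuda)                      # device of the input batch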
st103714
In torch/optim/sgd.py, we have:

p.data.add_(-group['lr'], d_p)

which does the parameter update (correct me if I'm wrong). Would it be more intuitive/obvious to write it as:

p.data.add_(-group['lr'] * d_p)

? If not, why not? Quick test:

>>> p = torch.tensor([[0.], [3.]])
>>> p2 = p.clone()
>>> alpha = 0.5  # step length
>>> dir = torch.tensor([[5.], [10.]])  # step direction
>>> p.data.add(-alpha, dir)
tensor([[-2.5000], [-2.0000]])
>>> p2.data.add(-alpha * dir)
tensor([[-2.5000], [-2.0000]])
>>> torch.equal(p, p2)
True
st103715
It will be the same, since it's just the second method described here as the in-place version. It seems the docs for the in-place version are currently missing, but as far as I know @richard is onto this.
st103716
What is the best way to resume training in PyTorch? Saving the optimizer would be necessary, since some optimizers store the momentum of gradients along with the current gradients. What is an elegant way to do this?
st103717
Solved by ptrblck in post #4 There is a good example in the ImageNet demo.
st103718
The most straightforward way I can think of for the optimizers is using state_dict() and load_state_dict(). For the dataloader there isn’t a way. I guess the implicit recommendation is to start from a fresh epoch (you could play games there, too, like caching the permutation of items it is currently going through and the position), but what should e.g. len(dl) return? Best regards Thomas
st103719
What about unifying torch.save for the optimizer and the model, so that the optimizer can be saved together with the model and loaded together? I am really feeling lazy about calling torch.save twice, although it (somewhat) makes sense to do so, in my opinion.
st103720
Ohhh, exactly what I was looking for. I didn't know that torch.save works on a dictionary as well! Thanks
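For reference, a minimal sketch of the dictionary pattern discussed above (the keys, file name and tiny model are arbitrary choices):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)
epoch = 5

# one call saves everything needed to resume
checkpoint = {
    'epoch': epoch,
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pth')

# later, to resume:
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state'])
optimizer.load_state_dict(checkpoint['optimizer_state'])
start_epoch = checkpoint['epoch'] + 1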
st103721
The error below happens when using any command related to cuda on PyTorch. I made sure my glibc is correctly at 2.14 and my conda environment contains only the PyTorch installation and corresponding dependencies. Anyone else seen this? import torch torch.cuda.current_device() *** glibc detected *** python: double free or corruption (!prev): 0x00007ffff80d1180 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x33f4675dee)[0x7ffff734adee] /lib64/libc.so.6(+0x33f4678c80)[0x7ffff734dc80] /usr/lib64/libcuda.so.1(+0x30b736)[0x7fffec01e736] /usr/lib64/libcuda.so.1(+0x30b7a4)[0x7fffec01e7a4] /usr/lib64/libcuda.so.1(+0x1f03d0)[0x7fffebf033d0] /usr/lib64/libcuda.so.1(+0x1c877b)[0x7fffebedb77b] /usr/lib64/libcuda.so.1(cuInit+0x4d)[0x7fffebf2e7cd] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(+0x1c8aa)[0x7fffe433b8aa] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(+0x1c901)[0x7fffe433b901] /lib64/libpthread.so.0(pthread_once+0x53)[0x7ffff7675e03] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(+0x54869)[0x7fffe4373869] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(+0x18b6a)[0x7fffe4337b6a] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(+0x1dd8b)[0x7fffe433cd8b] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/…/…/…/libcudart.so.9.0(cudaGetDeviceCount+0x4a)[0x7fffe435425a] /home/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_Z29THCPModule_isDriverSufficientP7_object+0x1e)[0x7fffeb33970e] python(_PyCFunction_FastCallDict+0x19a)[0x7ffff7bb8a2a] python(+0x19cdfc)[0x7ffff7c45dfc] python(_PyEval_EvalFrameDefault+0x2fa)[0x7ffff7c6a94a] python(+0x196f8b)[0x7ffff7c3ff8b] python(+0x19ced5)[0x7ffff7c45ed5] python(_PyEval_EvalFrameDefault+0x2fa)[0x7ffff7c6a94a] python(+0x196f8b)[0x7ffff7c3ff8b] python(+0x19ced5)[0x7ffff7c45ed5] python(_PyEval_EvalFrameDefault+0x2fa)[0x7ffff7c6a94a] python(+0x196f8b)[0x7ffff7c3ff8b] python(+0x19ced5)[0x7ffff7c45ed5] python(_PyEval_EvalFrameDefault+0x2fa)[0x7ffff7c6a94a] python(PyEval_EvalCodeEx+0x329)[0x7ffff7c40cb9] python(PyEval_EvalCode+0x1c)[0x7ffff7c41a4c] python(+0x214c44)[0x7ffff7cbdc44] python(+0xdb84e)[0x7ffff7b8484e] python(PyRun_InteractiveLoopFlags+0xf3)[0x7ffff7b84a04] python(+0xdbaa4)[0x7ffff7b84aa4] python(+0xdd994)[0x7ffff7b86994] python(main+0xee)[0x7ffff7b8975e] /lib64/libc.so.6(__libc_start_main+0xfd)[0x7ffff72f3d1d] python(+0x1c847b)[0x7ffff7c7147b] ======= Memory map: ======== 7fff60000000-7fff60021000 rw-p 00000000 00:00 0 7fff60021000-7fff64000000 —p 00000000 00:00 0 7fff66bf8000-7fff7a777000 r-xp 00000000 00:1b 1406612009 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcudnn.so.7.1.2 7fff7a777000-7fff7a977000 —p 13b7f000 00:1b 1406612009 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcudnn.so.7.1.2 7fff7a977000-7fff7a9d2000 rw-p 13b7f000 00:1b 1406612009 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcudnn.so.7.1.2 7fff7a9d2000-7fff7aa24000 rw-p 00000000 00:00 0 7fff7aa24000-7fff7aa27000 rw-p 13bda000 00:1b 1406612009 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcudnn.so.7.1.2 7fff7aa27000-7fff82855000 r-xp 00000000 00:1b 3168206853 
/panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcufft.so.9.0.176 7fff82855000-7fff82a55000 —p 07e2e000 00:1b 3168206853 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcufft.so.9.0.176 7fff82a55000-7fff82a64000 rw-p 07e2e000 00:1b 3168206853 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcufft.so.9.0.176 7fff82a64000-7fff82ac8000 rw-p 00000000 00:00 0 7fff82ac8000-7fff82aca000 rw-p 07e3e000 00:1b 3168206853 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcufft.so.9.0.176 7fff82aca000-7fff85cbb000 r-xp 00000000 00:1b 1140328941 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcublas.so.9.0.176 7fff85cbb000-7fff85eba000 —p 031f1000 00:1b 1140328941 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcublas.so.9.0.176 7fff85eba000-7fff85ef1000 rw-p 031f0000 00:1b 1140328941 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcublas.so.9.0.176 7fff85ef1000-7fff85f00000 rw-p 00000000 00:00 0 7fff85f00000-7fff85f03000 rw-p 03228000 00:1b 1140328941 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcublas.so.9.0.176 7fff85f03000-7fff8943f000 r-xp 00000000 00:1b 2771446159 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcusparse.so.9.0.176 7fff8943f000-7fff8963f000 —p 0353c000 00:1b 2771446159 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcusparse.so.9.0.176 7fff8963f000-7fff89659000 rw-p 0353c000 00:1b 2771446159 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcusparse.so.9.0.176 7fff89659000-7fff89669000 rw-p 00000000 00:00 0 7fff89669000-7fff8966e000 rw-p 03556000 00:1b 2771446159 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libcusparse.so.9.0.176 7fff8966e000-7fff8d459000 r-xp 00000000 00:1b 2793288561 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_core.so 7fff8d459000-7fff8d658000 —p 03deb000 00:1b 2793288561 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_core.so 7fff8d658000-7fff8d65f000 r–p 03dea000 00:1b 2793288561 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_core.so 7fff8d65f000-7fff8d68c000 rw-p 03df1000 00:1b 2793288561 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_core.so 7fff8d68c000-7fff8d6a0000 rw-p 00000000 00:00 0 7fff8d6a0000-7fff8ebdd000 r-xp 00000000 00:1b 2881525789 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gnu_thread.so 7fff8ebdd000-7fff8eddd000 —p 0153d000 00:1b 2881525789 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gnu_thread.so 7fff8eddd000-7fff8ede0000 r–p 0153d000 00:1b 2881525789 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gnu_thread.so 7fff8ede0000-7fff8edf7000 rw-p 01540000 00:1b 2881525789 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gnu_thread.so 7fff8edf7000-7fff8f6e9000 r-xp 00000000 00:1b 1976849771 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gf_lp64.so 7fff8f6e9000-7fff8f8e8000 —p 008f2000 00:1b 1976849771 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gf_lp64.so 7fff8f8e8000-7fff8f8ea000 r–p 008f1000 00:1b 1976849771 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gf_lp64.so 7fff8f8ea000-7fff8f8fc000 rw-p 008f3000 00:1b 1976849771 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libmkl_gf_lp64.so 7fff8f8fc000-7fff8f902000 rw-p 
00000000 00:00 0 7fff8f902000-7fff91a64000 r-xp 00000000 00:1b 432129565 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libnccl.so.1.3.5 7fff91a64000-7fff91c63000 —p 02162000 00:1b 432129565 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libnccl.so.1.3.5 7fff91c63000-7fff91c64000 r–p 02161000 00:1b 432129565 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libnccl.so.1.3.5 7fff91c64000-7fff91c65000 rw-p 02162000 00:1b 432129565 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/libnccl.so.1.3.5 7fff91c65000-7fffab2f7000 r-xp 00000000 00:1b 1480490381 /panfs/roc/groups/6/kumarv/willa099/miniconda3/envs/pytorch4/lib/python3.6/site-packages/torch/lib/libATen.soAborted
st103722
I solved the problem by switching to a different cluster. I think it’s possible the GPU was bad. Thanks for the response, though.
st103723
The tutorial shows learning rate decay according to epoch, but how do I adjust the lr according to batch?
st103724
You could just call scheduler.step() inside your "batch loop":

dataset = datasets.FakeData(size=200, transform=transforms.ToTensor())
loader = DataLoader(
    dataset,
    batch_size=10,
    shuffle=False,
    num_workers=0
)

model = nn.Linear(3*224*224, 10)
optimizer = optim.SGD(model.parameters(), lr=1.)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
criterion = nn.NLLLoss()

for epoch in range(10):
    for batch_idx, (data, target) in enumerate(loader):
        print('Epoch {}, Batch idx {}, lr {}'.format(
            epoch, batch_idx, optimizer.param_groups[0]['lr']))
        optimizer.zero_grad()
        output = model(data.view(10, -1))
        loss = criterion(output, target.long())
        loss.backward()
        optimizer.step()
        scheduler.step()

The scheduler itself does not contain any logic about the epochs, but just uses .step() to manipulate the learning rate of the optimizer.
st103725
here is my model:

class vgg16Net(nn.Module):
    def __init__(self):
        super(vgg16Net, self).__init__()
        vgg = models.vgg16_bn(pretrained=True)
        self.block1 = nn.Sequential(*list(vgg.features.children())[:7])
        self.block2 = nn.Sequential(*list(vgg.features.children())[7:14])
        self.block3 = nn.Sequential(*list(vgg.features.children())[14:24])
        self.block4 = nn.Sequential(*list(vgg.features.children())[24:34])
        self.block5 = nn.Sequential(*list(vgg.features.children())[34:], nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Sequential(nn.Linear(512, 10))

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = x.view(x.shape[0], -1)
        x = self.classifier(x)

vggmodel = vgg16Net()
vggmodel = vggmodel.cuda()
device = torch.device("cuda:0")
inputs = torch.randn((4, 3, 224, 224))
inputs = inputs.to(device)
outputs = vggmodel(inputs)  # get a None type object
st103726
Solved by ptrblck in post #2 It looks like you forgot to return x.
st103727
you need to return the output of forward:

def forward(self, x):
    x = self.block1(x)
    x = self.block2(x)
    x = self.block3(x)
    x = self.block4(x)
    x = self.block5(x)
    x = x.view(x.shape[0], -1)
    x = self.classifier(x)
    return x
st103728
Hello We (at google) would like to use the pytorch logo as part of our Cloud AI Platform services. The logo will be displayed on one or more web pages. Do we have permissions to use the logo? Let us know if you have any questions. Thanks.
st103729
The answer will be yes, but can you please email me at [my-first-name]@pytorch.org so that we can have the request and response on record.
st103730
a = tensor([4,4,3]), b = tensor([3,3,2]). I think a*b should result in tensor([12,12,6]), but the real result is tensor([[12,12,6]]), maybe with one more dimension. I want tensor([12,12,6]); how can I get this result?
st103731
The result was my mistake: a is not tensor([4,4,3]) but tensor([[4,4,3]]), so the result is tensor([[12,12,6]]). I want to get tensor([12,12,6]).
st103732
a is not tensor([4,4,3]), but tensor([[4,4,3]]) so result is tensor([[12,12,6]]). i get tensor([12,12,6]). That’s slightly confusing. You want to element-wise multiply tensor ([[4,4,3]]) with ([3, 3, 2]) but get a tensor([ 12, 12, 6])? Just use the view method to drop one dimension. E.g., result = (a * b).view(-1)
st103733
I was trying to do the following:

import torch
print(torch.Tensor(2,3))

I don't know why, but sometimes it works and gives an output, and sometimes it throws the following error:

RuntimeError                              Traceback (most recent call last)
D:\softwares\anaconda\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
    700                 type_pprinters=self.type_printers,
    701                 deferred_pprinters=self.deferred_printers)
--> 702             printer.pretty(obj)
    703             printer.flush()
    704             return stream.getvalue()

D:\softwares\anaconda\lib\site-packages\IPython\lib\pretty.py in pretty(self, obj)
    398             if cls is not object
    399                     and callable(cls.__dict__.get('__repr__')):
--> 400                 return _repr_pprint(obj, self, cycle)
    401
    402         return _default_pprint(obj, self, cycle)

D:\softwares\anaconda\lib\site-packages\IPython\lib\pretty.py in _repr_pprint(obj, p, cycle)
    693     """A pprint that just redirects to the normal repr function."""
    694     # Find newlines and replace them with p.break_()
--> 695     output = repr(obj)
    696     for idx, output_line in enumerate(output.splitlines()):
    697         if idx:

D:\softwares\anaconda\lib\site-packages\torch\tensor.py in __repr__(self)
     55         # characters to replace unicode characters with.
     56         if sys.version_info > (3,):
---> 57             return torch._tensor_str._str(self)
     58         else:
     59             if hasattr(sys.stdout, 'encoding'):

D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in _str(self)
    216         suffix = ', dtype=' + str(self.dtype) + suffix
    217
--> 218     fmt, scale, sz = _number_format(self)
    219     if scale != 1:
    220         prefix = prefix + SCALE_FORMAT.format(scale) + ' ' * indent

D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in _number_format(tensor, min_sz)
     94     # TODO: use fmod?
     95     for value in tensor:
---> 96         if value != math.ceil(value.item()):
     97             int_mode = False
     98             break

RuntimeError: Overflow when unpacking long

Can anyone tell me the reason for this behaviour? My thought was that torch.Tensor(2,3) is similar to creating an uninitialized tensor, which is the same as torch.empty(2,3). If this is correct, then https://github.com/pytorch/pytorch/issues/6339 has the solution. If not, can anyone help me out here?
st103734
Solved by ptrblck in post #4 It seems this bug is fixed in the master branch, although I couldn’t reproduce the issue using 0.4.0. Maybe I was just lucky with the uninitialized values. However, you can just use the tensor as you wish. Just avoid printing an uninitialized tensor.
st103735
You are most likely right and this should be already fixed. Which PyTorch version are you using?
st103736
It seems this bug is fixed in the master branch, although I couldn’t reproduce the issue using 0.4.0. Maybe I was just lucky with the uninitialized values. However, you can just use the tensor as you wish. Just avoid printing an uninitialized tensor.
st103737
I guess torch.Tensor just creates memory on the device (cpu or gpu) and you are trying to print an uninitialized tensor. Assign some values and then print. It should work.
st103738
Hello, I would like to know: is there a rule or an equation for when a decay of the learning rate must be introduced? Also, is multiplying the old learning rate by a decay ratio the best way, taking that value as 0.7 or some arbitrary value between [0,1]? Thank you.
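There is no single rule, but as an illustration of the multiplicative-decay idea, here is a minimal sketch using the built-in scheduler (the tiny model, the step size and the 0.7 factor are arbitrary examples, not recommendations):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)
# multiply the current learning rate by 0.7 every 10 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.7)

for epoch in range(30):
    # ... run the training batches for this epoch here ...
    optimizer.step()
    scheduler.step()   # decay once per epoch
    print(epoch, optimizer.param_groups[0]['lr'])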
st103739
Is there any way to consume the whole GPU memory in PyTorch? I know it's not an ideal thing to have, but there are times when you don't want others to interfere with your training on the same GPU. When you are doing training and evaluation, you could release your memory and use it for other work to get faster processing. Sometimes a big sentence comes into the batch, so you need more memory, and someone comes to the server after you, runs their process, and you get OOM. This actually happens for me while training an NMT system. I don't find any suggestions about this for PyTorch. From my experience with TensorFlow it was there by default and, to be honest, I don't like this feature being the default. I think this is a must-have feature, though it has some dark sides. I would very much like to know how people handle this type of situation when the server doesn't have any kind of resource allocator for GPUs. I found one forum post, which is not clear to me: How to occupy all the gpu memory at the beginning of training. And finally, if this feature is not there right now, will it be included in a future release?
st103740
Hi, this does not exist in PyTorch. I think it is unlikely to be added in future releases because of how the CUDA allocator that we use works. Most of the work that we do is actually GPU compute bound. So even if there is extra memory available, there is no point for someone else to launch a job on the same GPU: running both jobs at the same time is actually slower than running them one after the other, plus it has a higher memory demand. I guess the solution is not to share a single GPU if you actually need all of it.
st103741
I am trying to understand how contiguous() and permute() work in PyTorch, but I'm not smart enough to get it. Can anyone please provide explanations and an example to make it clear for me? Thanks
st103742
Solved by tom in post #2 Use stride and size to track the memory layout: a = torch.randn(3, 4, 5) b = a.permute(1, 2, 0) c = b.contiguous() d = a.contiguous() # a has "standard layout" (also known as C layout in numpy) descending strides, and no memory gaps (stride(i-1) == size(i)*stride(i)) print (a.shape, a.stride(), a.d…
st103743
Use stride and size to track the memory layout:

a = torch.randn(3, 4, 5)
b = a.permute(1, 2, 0)
c = b.contiguous()
d = a.contiguous()

# a has "standard layout" (also known as C layout in numpy):
# descending strides, and no memory gaps (stride(i-1) == size(i)*stride(i))
print(a.shape, a.stride(), a.data_ptr())
# b has the same storage as a (same data_ptr), but has the strides and sizes swapped around
print(b.shape, b.stride(), b.data_ptr())
# c is in new storage, where it has been arranged in standard layout (which is "contiguous")
print(c.shape, c.stride(), c.data_ptr())
# d is exactly as a, as a was contiguous all along
print(d.shape, d.stride(), d.data_ptr())

Best regards
Thomas
st103744
Hello, I hope you are doing well. I am solving a classification problem with two main classes. I am using GRU and linear layers in my model, the Adam optimizer, and the CrossEntropyLoss loss function. Both loss and accuracy are poor for some reason unknown to me: the accuracy is really low and the loss reaches 75%. Do you know what could be the gap and why I am getting such results? Thanks
st103745
It seems your model or training procedure might have a bug. The best way of making sure your code works is to try to overfit your model on a small sample (e.g. just one single sample). If your model cannot overfit the single sample, something in the architecture or training might be wrong. Could you try to do that and report the results?
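For illustration, a minimal sketch of such an overfitting sanity check (the model, shapes and hyperparameters are placeholders, not the poster's actual setup):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-2)

# a single fixed sample and label
x = torch.randn(1, 20)
y = torch.tensor([1])

for step in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())  # should be very close to zero if the training code is sound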
st103746
Thanks, here is the result of running. 100 epochs 1sample for training and 1 for testing. " the values of auc_roc always nan? Epoch 1/100 loss: 0.6639 - acc: 1.0000 - val_loss: 0.6122 - val_acc: 1.0000 - val_roc: 000nan Epoch 2/100 loss: 0.6447 - acc: 1.0000 - val_loss: 0.6142 - val_acc: 1.0000 - val_roc: 000nan Epoch 3/100 loss: 0.6999 - acc: 0.0000 - val_loss: 0.6368 - val_acc: 1.0000 - val_roc: 000nan Epoch 4/100 loss: 0.6451 - acc: 1.0000 - val_loss: 0.5434 - val_acc: 1.0000 - val_roc: 000nan Epoch 5/100 loss: 0.6243 - acc: 1.0000 - val_loss: 0.6269 - val_acc: 1.0000 - val_roc: 000nan Epoch 6/100 loss: 0.5886 - acc: 1.0000 - val_loss: 0.5920 - val_acc: 1.0000 - val_roc: 000nan Epoch 7/100 loss: 0.5408 - acc: 1.0000 - val_loss: 0.5496 - val_acc: 1.0000 - val_roc: 000nan Epoch 8/100 loss: 0.6247 - acc: 1.0000 - val_loss: 0.5303 - val_acc: 1.0000 - val_roc: 000nan Epoch 9/100 loss: 0.6389 - acc: 1.0000 - val_loss: 0.5526 - val_acc: 1.0000 - val_roc: 000nan Epoch 10/100 loss: 0.7407 - acc: 0.0000 - val_loss: 0.6292 - val_acc: 1.0000 - val_roc: 000nan Epoch 11/100 loss: 0.5630 - acc: 1.0000 - val_loss: 0.6084 - val_acc: 1.0000 - val_roc: 000nan Epoch 12/100 loss: 0.6090 - acc: 1.0000 - val_loss: 0.6732 - val_acc: 1.0000 - val_roc: 000nan Epoch 13/100 loss: 0.5534 - acc: 1.0000 - val_loss: 0.6409 - val_acc: 1.0000 - val_roc: 000nan Epoch 14/100 loss: 0.6593 - acc: 1.0000 - val_loss: 0.6653 - val_acc: 1.0000 - val_roc: 000nan Epoch 15/100 loss: 0.5573 - acc: 1.0000 - val_loss: 0.4994 - val_acc: 1.0000 - val_roc: 000nan Epoch 16/100 loss: 0.5948 - acc: 1.0000 - val_loss: 0.6182 - val_acc: 1.0000 - val_roc: 000nan Epoch 17/100 loss: 0.5219 - acc: 1.0000 - val_loss: 0.6496 - val_acc: 1.0000 - val_roc: 000nan Epoch 18/100 loss: 0.4764 - acc: 1.0000 - val_loss: 0.5855 - val_acc: 1.0000 - val_roc: 000nan Epoch 19/100 loss: 0.5772 - acc: 1.0000 - val_loss: 0.6684 - val_acc: 1.0000 - val_roc: 000nan Epoch 20/100 loss: 0.5014 - acc: 1.0000 - val_loss: 0.5266 - val_acc: 1.0000 - val_roc: 000nan Epoch 21/100 loss: 0.5669 - acc: 1.0000 - val_loss: 0.6259 - val_acc: 1.0000 - val_roc: 000nan Epoch 22/100 loss: 0.5227 - acc: 1.0000 - val_loss: 0.5831 - val_acc: 1.0000 - val_roc: 000nan Epoch 23/100 loss: 0.6289 - acc: 1.0000 - val_loss: 0.5928 - val_acc: 1.0000 - val_roc: 000nan Epoch 24/100 loss: 0.5928 - acc: 1.0000 - val_loss: 0.6153 - val_acc: 1.0000 - val_roc: 000nan Epoch 25/100 loss: 0.5538 - acc: 1.0000 - val_loss: 0.5555 - val_acc: 1.0000 - val_roc: 000nan Epoch 26/100 loss: 0.5515 - acc: 1.0000 - val_loss: 0.4815 - val_acc: 1.0000 - val_roc: 000nan Epoch 27/100 loss: 0.6470 - acc: 1.0000 - val_loss: 0.5834 - val_acc: 1.0000 - val_roc: 000nan Epoch 28/100 loss: 0.5991 - acc: 1.0000 - val_loss: 0.6326 - val_acc: 1.0000 - val_roc: 000nan Epoch 29/100 loss: 0.5483 - acc: 1.0000 - val_loss: 0.5693 - val_acc: 1.0000 - val_roc: 000nan Epoch 30/100 loss: 0.4610 - acc: 1.0000 - val_loss: 0.5385 - val_acc: 1.0000 - val_roc: 000nan Epoch 31/100 loss: 0.6384 - acc: 1.0000 - val_loss: 0.4610 - val_acc: 1.0000 - val_roc: 000nan Epoch 32/100 loss: 0.6100 - acc: 1.0000 - val_loss: 0.4956 - val_acc: 1.0000 - val_roc: 000nan Epoch 33/100 loss: 0.5538 - acc: 1.0000 - val_loss: 0.5950 - val_acc: 1.0000 - val_roc: 000nan Epoch 34/100 loss: 0.4978 - acc: 1.0000 - val_loss: 0.4806 - val_acc: 1.0000 - val_roc: 000nan Epoch 35/100 loss: 0.5172 - acc: 1.0000 - val_loss: 0.5676 - val_acc: 1.0000 - val_roc: 000nan Epoch 36/100 loss: 0.5714 - acc: 1.0000 - val_loss: 0.5535 - val_acc: 1.0000 - val_roc: 000nan Epoch 37/100 
loss: 0.6651 - acc: 1.0000 - val_loss: 0.5463 - val_acc: 1.0000 - val_roc: 000nan Epoch 38/100 loss: 0.5412 - acc: 1.0000 - val_loss: 0.6234 - val_acc: 1.0000 - val_roc: 000nan Epoch 39/100 loss: 0.5188 - acc: 1.0000 - val_loss: 0.5380 - val_acc: 1.0000 - val_roc: 000nan Epoch 40/100 loss: 0.5751 - acc: 1.0000 - val_loss: 0.6268 - val_acc: 1.0000 - val_roc: 000nan Epoch 41/100 loss: 0.4608 - acc: 1.0000 - val_loss: 0.5827 - val_acc: 1.0000 - val_roc: 000nan Epoch 42/100 loss: 0.5282 - acc: 1.0000 - val_loss: 0.4714 - val_acc: 1.0000 - val_roc: 000nan Epoch 43/100 loss: 0.5524 - acc: 1.0000 - val_loss: 0.4762 - val_acc: 1.0000 - val_roc: 000nan Epoch 44/100 loss: 0.5560 - acc: 1.0000 - val_loss: 0.4422 - val_acc: 1.0000 - val_roc: 000nan Epoch 45/100 loss: 0.5200 - acc: 1.0000 - val_loss: 0.5564 - val_acc: 1.0000 - val_roc: 000nan Epoch 46/100 loss: 0.5861 - acc: 1.0000 - val_loss: 0.5194 - val_acc: 1.0000 - val_roc: 000nan Epoch 47/100 loss: 0.4303 - acc: 1.0000 - val_loss: 0.5422 - val_acc: 1.0000 - val_roc: 000nan Epoch 48/100 loss: 0.4627 - acc: 1.0000 - val_loss: 0.4563 - val_acc: 1.0000 - val_roc: 000nan Epoch 49/100 loss: 0.4126 - acc: 1.0000 - val_loss: 0.5426 - val_acc: 1.0000 - val_roc: 000nan Epoch 50/100 loss: 0.4448 - acc: 1.0000 - val_loss: 0.4956 - val_acc: 1.0000 - val_roc: 000nan Epoch 51/100 loss: 0.4756 - acc: 1.0000 - val_loss: 0.3807 - val_acc: 1.0000 - val_roc: 000nan Epoch 52/100 loss: 0.4906 - acc: 1.0000 - val_loss: 0.4367 - val_acc: 1.0000 - val_roc: 000nan Epoch 53/100 loss: 0.5350 - acc: 1.0000 - val_loss: 0.4401 - val_acc: 1.0000 - val_roc: 000nan Epoch 54/100 loss: 0.5158 - acc: 1.0000 - val_loss: 0.5971 - val_acc: 1.0000 - val_roc: 000nan Epoch 55/100 loss: 0.3638 - acc: 1.0000 - val_loss: 0.4439 - val_acc: 1.0000 - val_roc: 000nan Epoch 56/100 loss: 0.4309 - acc: 1.0000 - val_loss: 0.4926 - val_acc: 1.0000 - val_roc: 000nan Epoch 57/100 loss: 0.5687 - acc: 1.0000 - val_loss: 0.5362 - val_acc: 1.0000 - val_roc: 000nan Epoch 58/100 loss: 0.4342 - acc: 1.0000 - val_loss: 0.5274 - val_acc: 1.0000 - val_roc: 000nan Epoch 59/100 loss: 0.5823 - acc: 1.0000 - val_loss: 0.5437 - val_acc: 1.0000 - val_roc: 000nan Epoch 60/100 loss: 0.4977 - acc: 1.0000 - val_loss: 0.4626 - val_acc: 1.0000 - val_roc: 000nan Epoch 61/100 loss: 0.4301 - acc: 1.0000 - val_loss: 0.5634 - val_acc: 1.0000 - val_roc: 000nan Epoch 62/100 loss: 0.5764 - acc: 1.0000 - val_loss: 0.4220 - val_acc: 1.0000 - val_roc: 000nan Epoch 63/100 loss: 0.4134 - acc: 1.0000 - val_loss: 0.4579 - val_acc: 1.0000 - val_roc: 000nan Epoch 64/100 loss: 0.4567 - acc: 1.0000 - val_loss: 0.5778 - val_acc: 1.0000 - val_roc: 000nan Epoch 65/100 loss: 0.4165 - acc: 1.0000 - val_loss: 0.5290 - val_acc: 1.0000 - val_roc: 000nan Epoch 66/100 loss: 0.3902 - acc: 1.0000 - val_loss: 0.4509 - val_acc: 1.0000 - val_roc: 000nan Epoch 67/100 loss: 0.3772 - acc: 1.0000 - val_loss: 0.5043 - val_acc: 1.0000 - val_roc: 000nan Epoch 68/100 loss: 0.3754 - acc: 1.0000 - val_loss: 0.3601 - val_acc: 1.0000 - val_roc: 000nan Epoch 69/100 loss: 0.3820 - acc: 1.0000 - val_loss: 0.4563 - val_acc: 1.0000 - val_roc: 000nan Epoch 70/100 loss: 0.3793 - acc: 1.0000 - val_loss: 0.4135 - val_acc: 1.0000 - val_roc: 000nan Epoch 71/100 loss: 0.5005 - acc: 1.0000 - val_loss: 0.4114 - val_acc: 1.0000 - val_roc: 000nan Epoch 72/100 loss: 0.5058 - acc: 1.0000 - val_loss: 0.4379 - val_acc: 1.0000 - val_roc: 000nan Epoch 73/100 loss: 0.5321 - acc: 1.0000 - val_loss: 0.4355 - val_acc: 1.0000 - val_roc: 000nan Epoch 74/100 loss: 0.4466 - acc: 1.0000 - val_loss: 
0.4963 - val_acc: 1.0000 - val_roc: 000nan Epoch 75/100 loss: 0.3814 - acc: 1.0000 - val_loss: 0.3516 - val_acc: 1.0000 - val_roc: 000nan Epoch 76/100 loss: 0.3500 - acc: 1.0000 - val_loss: 0.4960 - val_acc: 1.0000 - val_roc: 000nan Epoch 77/100 loss: 0.3406 - acc: 1.0000 - val_loss: 0.4104 - val_acc: 1.0000 - val_roc: 000nan Epoch 78/100 loss: 0.4097 - acc: 1.0000 - val_loss: 0.3950 - val_acc: 1.0000 - val_roc: 000nan Epoch 79/100 loss: 0.4751 - acc: 1.0000 - val_loss: 0.4306 - val_acc: 1.0000 - val_roc: 000nan Epoch 80/100 loss: 0.3236 - acc: 1.0000 - val_loss: 0.4117 - val_acc: 1.0000 - val_roc: 000nan Epoch 81/100 loss: 0.3355 - acc: 1.0000 - val_loss: 0.3545 - val_acc: 1.0000 - val_roc: 000nan Epoch 82/100 loss: 0.4293 - acc: 1.0000 - val_loss: 0.3483 - val_acc: 1.0000 - val_roc: 000nan Epoch 83/100 loss: 0.3347 - acc: 1.0000 - val_loss: 0.4013 - val_acc: 1.0000 - val_roc: 000nan Epoch 84/100 loss: 0.3636 - acc: 1.0000 - val_loss: 0.3877 - val_acc: 1.0000 - val_roc: 000nan Epoch 85/100 loss: 0.4909 - acc: 1.0000 - val_loss: 0.3191 - val_acc: 1.0000 - val_roc: 000nan Epoch 86/100 loss: 0.4887 - acc: 1.0000 - val_loss: 0.4015 - val_acc: 1.0000 - val_roc: 000nan Epoch 87/100 loss: 0.3689 - acc: 1.0000 - val_loss: 0.3816 - val_acc: 1.0000 - val_roc: 000nan Epoch 88/100 loss: 0.4261 - acc: 1.0000 - val_loss: 0.4574 - val_acc: 1.0000 - val_roc: 000nan Epoch 89/100 loss: 0.3855 - acc: 1.0000 - val_loss: 0.4199 - val_acc: 1.0000 - val_roc: 000nan Epoch 90/100 loss: 0.4726 - acc: 1.0000 - val_loss: 0.4844 - val_acc: 1.0000 - val_roc: 000nan Epoch 91/100 loss: 0.2764 - acc: 1.0000 - val_loss: 0.3909 - val_acc: 1.0000 - val_roc: 000nan Epoch 92/100 loss: 0.3507 - acc: 1.0000 - val_loss: 0.3096 - val_acc: 1.0000 - val_roc: 000nan Epoch 93/100 loss: 0.3022 - acc: 1.0000 - val_loss: 0.3459 - val_acc: 1.0000 - val_roc: 000nan Epoch 94/100 loss: 0.4090 - acc: 1.0000 - val_loss: 0.3050 - val_acc: 1.0000 - val_roc: 000nan Epoch 95/100 loss: 0.3081 - acc: 1.0000 - val_loss: 0.3604 - val_acc: 1.0000 - val_roc: 000nan Epoch 96/100 loss: 0.3630 - acc: 1.0000 - val_loss: 0.2944 - val_acc: 1.0000 - val_roc: 000nan Epoch 97/100 loss: 0.5035 - acc: 1.0000 - val_loss: 0.3200 - val_acc: 1.0000 - val_roc: 000nan Epoch 98/100 loss: 0.4066 - acc: 1.0000 - val_loss: 0.3779 - val_acc: 1.0000 - val_roc: 000nan Epoch 99/100 loss: 0.3435 - acc: 1.0000 - val_loss: 0.3576 - val_acc: 1.0000 - val_roc: 000nan Epoch 100/100 loss: 0.3131 - acc: 1.0000 - val_loss: 0.3457 - val_acc: 1.0000 - val_roc: 000nan Test score: 36.65938675403595 Test accuracy: 100.0 Test ROC: nan GRU( (gru): GRU(7, 4) (linear): Linear(in_features=4, out_features=2, bias=True) )
st103747
How do you calculate the AUC? Is the loss shrinking to zero or a really low number?
st103748
from sklearn import metrics

fpr, tpr, _ = metrics.roc_curve(y, yy)
roc_auc = metrics.auc(fpr, tpr)

It was added to the evaluation function. Sorry, I did not understand the second question.
st103749
I think the AUC and ROC are not defined for a single point. Do you get any warnings? For the second question: Your accuracy is 100% from the first epoch and just switches to 0% two times. If you look at the training loss, do you see it approaching zero?
st103750
y is the test labels and yy is the predicted ones from the model:

y_pred = self.predict(X)
yy = y_pred.detach().numpy()[:, 1].flatten()

y_pred has dimensions of (number of test samples, 2), where 2 is the number of classes.
st103751
Have you used any activation function in self.predict? As far as I know, roc_curve needs the probabilities. So maybe you would like to use F.softmax(y_pred, dim=1), if y_pred are logits.
st103752
Actually no, I am not getting warnings. I define AUC/ROC only in the evaluate function, which we call twice: when validating the model after the training, and at testing. I don't think the loss is approaching 0; the values in the run are not percentages, and the minimum so far is 31%. Do you think I have to increase the number of epochs to see if it approaches 0?
st103753
Actually no, I am not using any activation function, but I am using CrossEntropyLoss, which should apply softmax already.
st103754
CrossEntropyLoss applies LogSoftmax on the input, but your model output will be logits. Therefore you would need to transform them into probabilities using Softmax to calculate the ROC/AUC. Yes, it would be a good idea to make sure the loss approaches zero. Changing the learning rate, increasing the epochs, etc. might help.
st103755
So I added a softmax at the end of the model, but I am still getting nan for ROC/AUC. Regarding the loss: yes, it is approaching 0.
st103756
Sorry for the confusion. You shouldn’t add it at the end of your model, but just for the calculation of the ROC/AUC. It’s good if it’s approaching zero, so the sanity check was successful. Now you could try to scale your problem up, i.e. give it more data and see, if it’s still capable of learning.
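A small sketch of that suggestion, computing the AUC from logits without touching the model itself (the tensors below are stand-ins; it assumes a binary problem with model outputs of shape (N, 2)):

import torch
import torch.nn.functional as F
from sklearn import metrics

logits = torch.randn(10, 2)             # stand-in for the model's raw outputs
y_true = torch.randint(0, 2, (10,))     # stand-in for the test labels

probs = F.softmax(logits, dim=1)[:, 1]  # probability of the positive class
fpr, tpr, _ = metrics.roc_curve(y_true.numpy(), probs.detach().numpy())
roc_auc = metrics.auc(fpr, tpr)
print(roc_auc)

Note that with a single test sample (or a test set containing only one class) the ROC curve is still undefined, which would match the nan seen above.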
st103757
Actually, I applied the softmax to the predicted data before the calculation of ROC/AUC, but with one sample for testing and training it still gives nan. However, when I increase the number of samples it starts giving good values, like above 70%, but the loss is still high.
st103758
I am not sure if this information helps, but when I calculated the confusion matrix for the testing data I got these results:

val_loss: 63.5753 - val_acc: 63.3333 - val_roc: 67.1875
avg_correct: 63.3333 - avg_wrong: 36.6667

This is the way of calculating the metric:

cm = metrics.confusion_matrix(y, yy.round(), labels=[0, 1])

where y is the true labels and yy is the predicted value:

y_pred = F.softmax(y_pred)
yy = y_pred.detach().numpy()[:, 1].flatten()
st103759
Could someone please tell me the difference between them? Since both of them can convert a 2x2 tensor to a 4x4 tensor, when should I use each of them?
st103760
Following the guide here, I set up distributed training on my devbox with 4 GPUs (12 GB each). However, I notice that after a few steps the memory on GPU0 always maxes out and the corresponding rank dies. The remaining ranks continue to work, though. This is happening even with very small batch sizes (e.g. 16). Any ideas on what could be happening? The model seems small enough to easily work with 12 GB. The input tensors are not too large either. Happy to provide more specific information; debugging tips are welcome. Thanks!
st103761
Given an M×N×C matrix, I want to calculate the l2-norm of all sub-matrices of size m×n×C, with m<M and n<N, and then get an (M-m+1)×(N-n+1)×1 matrix. How can I implement this efficiently in PyTorch? Thank you for your attention and answer.
st103762
Solved by ptrblck in post #2 This code should work for your matrix given the kernel size and stride: kh, kw = 3, 3 # kernel size dh, dw = 1, 1 # stride C, M, N = 4, 7, 7 x = torch.randn(M, N, C) patches = x.unfold(0, kh, dh).unfold(1, kw, dw) print(patches.shape) > torch.Size([5, 5, 4, 3, 3]) patches = patches.contiguous().v…
st103763
This code should work for your matrix, given the kernel size and stride:

kh, kw = 3, 3  # kernel size
dh, dw = 1, 1  # stride
C, M, N = 4, 7, 7

x = torch.randn(M, N, C)
patches = x.unfold(0, kh, dh).unfold(1, kw, dw)
print(patches.shape)
> torch.Size([5, 5, 4, 3, 3])
patches = patches.contiguous().view(*patches.size()[:-2], -1)
patches = torch.pow(torch.pow(patches, 2).sum(3), 0.5)
print(patches.shape)
> torch.Size([5, 5, 4])

You would have to permute the tensor or
st103764
I'm trying to use both PyTorch and Caffe2 in a single project. To do so, I tried installing PyTorch from source (with the environment variable FULL_CAFFE2=1). This appeared to work fine. I then added pytorch/build to my PYTHONPATH. The issue is that within a Python session, I get a segmentation fault:

Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from caffe2.python import workspace
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named 'caffe2.python.caffe2_pybind11_state_hip'
Segmentation fault

Whereas if I import torch or import caffe2 first, I don't see this error:

Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from caffe2.python import workspace
>>>

Maybe I didn't install caffe2 correctly with PyTorch? I'm on Ubuntu 16.04, CUDA 8, cuDNN v6.0.21.
st103765
Installing Caffe2 first from source following this, and then simply installing PyTorch using pip install torch, seems to work. But given that Caffe2 lives within PyTorch, installing Caffe2 from source as described above seems redundant. Guess I'll wait for PyTorch 1.0.
st103766
Right now I'm trying to replicate the input dimension size of a loaded model to convert the content into ONNX. Initially I got an error for not having enough input arguments, for anyone who remembers my previous post (Missing key(s) in state_dict). I fixed my previous error and am now facing another one. [screenshot of the error: partONe.png] I think it has something to do with my forward function handling the data type.
st103767
It's a bit of a silly thing because it works, but it may point to some linting required in the source. I have a function that calls torch.max(some_tensor, dim=1) and it works just fine; but for some reason Visual Studio Code flags it as [E1101 module 'torch' has no 'max' member]. I'm using the correct interpreter, pointing to torch 0.4.0.
st103768
I was going through this tutorial. There I have a doubt with the following class code:

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax()

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(1, self.hidden_size))

This code was taken from here. There it was mentioned that "Since the state of the network is held in the graph and not in the layers, you can simply create an nn.Linear and reuse it over and over again for the recurrence." What I don't understand is: how can one just increase the input feature size in nn.Linear and say it is an RNN? Am I missing something here?
st103769
Hi, I guess this module should be used in a for loop depending on what you need. For example, to just get the output for each input:

def forward_inputs(inputs):
    hidden = mod.init_hidden()
    outputs = []
    for inp in inputs:
        out, hidden = mod(inp, hidden)
        outputs.append(out)
    return outputs
st103770
Hi, thanks for the answer. I was wondering, is it very common to use this kind of setup for an RNN?
st103771
I don't use RNNs much, but that depends a lot on what you want to do. If you're looking for standard stuff, there are existing modules that do many-to-many or many-to-one mappings and so on. You can find them here in the docs. This implementation is useful if you want to do something less common and you need to control how each step of the RNN is done, possibly changing the hidden state between iterations and filtering which outputs you want to return.
st103772
I just looked it up and saw that torch.cat combines multiple tensors. So, in the code, combined = torch.cat((input, hidden), 1) means that combined is the input and the hidden layer combined, right? So self.i2h goes from the input to the hidden layer, and self.i2o is the output of combined to the next layer? So, in a way, he made a single tensor for all the time steps. Is my understanding correct?
st103773
Well, this is how an RNN works: you combine the current input with the previous step's hidden state to get, on one hand, the output of this step and, on the other, the new hidden state. Here, to do so, the input and the previous hidden state are combined into a single tensor. Then a linear layer is used to get the output and another to get the new state.
st103774
Stack Overflow answer for the same question: https://stackoverflow.com/questions/51152658/building-recurrent-neural-network-with-feed-forward-network-in-pytorch/51158304#51158304
st103775
I want to separate the vgg16 layers into different blocks so that the learning rate of the parameters of different blocks can be different, so I rewrote the class. Here is the code:

class vgg16Net(nn.Module):
    def __init__(self):
        super(vgg16Net, self).__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.block2 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.block3 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.block4 = nn.Sequential(
            nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.block5 = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Linear(512, 10))

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = self.classifier(x)

Then I want to load pretrained weights:

vggmodel = vgg16Net()
model_urls = 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth'
weights = model_zoo.load_url(model_urls)
vggmodel.load_state_dict(weights, False)

but I found it doesn't work:

(vggmodel.block1[0].weight.data == weights['features.0.weight']).sum()  # the result is 0

Can anyone help? Or is there a simple way to set a different learning rate for different layers within the same Sequential, since the source code of vgg16 in PyTorch only has two Sequentials?
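One likely reason, plus a sketch: load_state_dict(weights, False) only copies parameters whose key names match, and the custom model uses keys like block1.0.weight while the checkpoint uses features.0.weight, so nothing in the feature extractor actually gets loaded. Below is a hedged sketch of remapping the keys by position and shape, assuming the custom blocks keep exactly the layer order of vgg16_bn.features (vgg16Net refers to the class defined in the post above):

import torch.utils.model_zoo as model_zoo

vggmodel = vgg16Net()
pretrained = model_zoo.load_url('https://download.pytorch.org/models/vgg16_bn-6c64b313.pth')

own_state = vggmodel.state_dict()
# feature-extractor tensors on both sides, in registration order; num_batches_tracked
# buffers (absent from the old checkpoint) are skipped so the pairing stays aligned
own_keys = [k for k in own_state
            if not k.startswith('classifier') and 'num_batches_tracked' not in k]
src_items = [(k, v) for k, v in pretrained.items() if k.startswith('features')]

for own_key, (src_key, src_tensor) in zip(own_keys, src_items):
    if own_state[own_key].shape == src_tensor.shape:
        own_state[own_key].copy_(src_tensor)
    else:
        print('shape mismatch, skipped:', own_key, src_key)

vggmodel.load_state_dict(own_state)

For the per-layer learning rates themselves, passing several parameter groups to the optimizer (one dict per block with its own lr) also works without splitting the model at all.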
st103776
Does PyTorch allow using multiple computers to parallelize training? I have 8 computers with no GPU (only CPU) and want to parallelize across them. I have read this, but it doesn't have information on setting up the other PCs for distributed training: https://pytorch.org/tutorials/intermediate/dist_tuto.html If yes, please suggest an example with setup details.
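For illustration, a minimal sketch of the multi-machine setup that the linked tutorial builds on (the IP, port, rank and world size are placeholders you would set per machine; it assumes the machines can reach each other over TCP):

import torch
import torch.distributed as dist

def init_process(rank, world_size, master_ip='192.168.1.1', master_port='29500'):
    # every machine runs the same script with its own rank (0..world_size-1);
    # rank 0's address/port is used as the rendezvous point
    dist.init_process_group(
        backend='gloo',   # CPU-friendly backend
        init_method='tcp://{}:{}'.format(master_ip, master_port),
        rank=rank,
        world_size=world_size)

def average_gradients(model):
    # call after loss.backward() to average gradients across all machines
    world_size = float(dist.get_world_size())
    for param in model.parameters():
        dist.all_reduce(param.grad.data, op=dist.reduce_op.SUM)
        param.grad.data /= world_size

Each machine runs the same training script with its own rank; averaging gradients this way after every backward pass keeps the model replicas in sync.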
st103777
Hi, Sigmoid for the last layer and MSE loss are used in my model; however, the model doesn't converge and the loss doesn't decrease during training. Therefore, I did some tests in a snippet. The first test:

class Net(nn.Module):
    def __init__(self, input_size, output_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        return F.sigmoid(out)

net = Net(1000, 1)
for name, param in net.named_parameters():
    if "weight" in name or "bias" in name:
        param.data.uniform_(-0.1, 0.1)

optimizer = torch.optim.SGD(net.parameters(), lr=0.5, momentum=0.9)
input_net = torch.randn(100, 100, 1000)
target = torch.ones(100, 100)
mask = torch.randn(100, 100).ge(0.5)

for epoch in range(1000):
    optimizer.zero_grad()
    outputs = []
    for i in range(input_net.size(0)):
        output = net(input_net[i])
        outputs += [output.squeeze(1)]
    outputs = torch.stack(outputs)
    loss = F.mse_loss(outputs, target, reduce=False)[mask]
    total_loss = loss.sum()
    print(total_loss)
    total_loss.backward()
    optimizer.step()

In this snippet, the total_loss couldn't decrease much, which is similar to my model mentioned before. Then I made one change to the same snippet, replacing the loss line with:

    total_loss = loss.sum() / mask.sum().float()  # change: average loss

In this second version I averaged the loss over the masked elements, which made the loss decrease rapidly. I couldn't completely understand the reason for this change. Can anyone explain it? If I don't average, what should I do to make the model converge?
st103778
Because your losses are most likely in a different range (sum vs. mean), you would have to change your learning rate accordingly to get the same weight updates. In your first example your learning rate might just be too high for the high loss values.
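As a rough illustration of that point (a back-of-the-envelope sketch, not from the thread): with a summed loss the gradients are roughly N times larger than with the averaged loss, where N is the number of unmasked elements, so the learning rate has to shrink by about the same factor to get comparable updates.

import torch

mask = torch.randn(100, 100).ge(0.5)   # same masking as in the snippets above
N = mask.sum().item()                   # number of elements contributing to the loss

lr_mean = 0.5                           # lr that worked with the averaged loss
lr_sum = lr_mean / N                    # roughly equivalent lr for the summed loss
print(N, lr_sum)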
st103779
Thanks for your reply. I tested the first example with different learning rates, but the results of the experiment are still terrible. What's wrong with it?
st103780
I have some questions regarding the use of adaptive average pooling instead of a concatenate. The questions come from two threads on the forum:

"Global Average Pooling in Pytorch: I am trying to use global average pooling, however I have no idea on how to implement this in pytorch. So global average pooling is described briefly as: It means that if you have a 3D 8,8,128 tensor at the end of your last convolution, in the traditional method, you flatten it into a 1D vector of size 8x8x128. And you then add one or several fully connected layers and then at the end, a softmax layer that reduces the size to 10 classification categories and applies the softmax operator. Th…"

"Global average pooling misunderstanding (vision): Hello, l would like to replace my fully connected layer with a global average pooling layer. l have 10 classes and my last convolutional layer outputs a 3D tensor of 16,25,32. Last_conv=tensor.view(16,25,32) # something to do here !! final_layer=self.global_average_pooling(last_conv) # output=self.Softmax(final_layer) My question is how to pass from a 3d tensor of 16,25,32 to 10 (number of classes) through global average pooling? Thank you"

Q1: What is the preferred approach to using global average pooling for current SOTA models: should there be a fully connected layer after it, or should it be a fully convolutional network?
Q2: How do I change the output size to be size k? Do I need to have a conv2d layer before it? From the first forum thread it seems like I need to have a layer with k out_channels before it:

self.conv2d_last = nn.Conv2d(in_channel, out_channels=k, kernel_size=1)

then in the forward pass I have:

x = torch.cat(...)  # if I want to concatenate outputs from different conv layers
x = self.conv2d_last(x)
x = F.adaptive_avg_pool2d(x, (1, 1))

Will that get me a vector of size k that I can use as the output (or will I need to flatten it)? Is this right?
st103781
I use a fully connected layer:

self.out1 = nn.Sequential(
    nn.AvgPool2d(4)  # where 4 is the kernel size
)

x = self.conv4(x)
x = self.out1(x)
x = x.view(-1, 1024*1*1)
st103782
Look at Adaptive average pooling. Credits: https://github.com/pytorch/vision/issues/538 Hth!
st103783
So for the adaptive pooling with output size (1, 1), the input is batch x channel x H x W and the output is batch x channel x 1 x 1?
st103784
Yeah, it will. You can also use convolutions instead of max pooling; you'll just have to manually create the Conv2d layers, and the fully connected layer instead of average pooling. (As they say, the entire network is a hyperparameter.)
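A short sketch of the pooling head discussed above (the channel count and number of classes are made up):

import torch
import torch.nn as nn
import torch.nn.functional as F

k = 10                                   # number of classes
feat = torch.randn(8, 512, 7, 7)         # batch x channel x H x W feature map

conv_last = nn.Conv2d(512, k, kernel_size=1)   # map channels to k before pooling
x = conv_last(feat)                            # 8 x k x 7 x 7
x = F.adaptive_avg_pool2d(x, (1, 1))           # 8 x k x 1 x 1
x = x.view(x.size(0), -1)                      # flatten to 8 x k for the loss
print(x.shape)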
st103785
Hi, the statistics in batch normalization can be obtained in two ways: 1, using "track_running_stats": then the running average over training is used. 2, not using it: then the statistics of the individual batch are used. I wonder how it can be done as in the original paper, "running over the dataset again and getting the statistics". In other words, how to "only update the statistics without updating other parameters" when running the model? Thanks.
st103786
Since the running stats will be updated during the forward pass, you could set the model to model.train() (or just the BatchNorm layer if you wish), and just do forward passes on your data without calculating the backward pass or update any gradients.
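A minimal sketch of that suggestion, i.e. refreshing only the running statistics with forward passes and no parameter updates (the tiny model and dataset here are placeholders):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
loader = DataLoader(TensorDataset(torch.randn(64, 3, 24, 24)), batch_size=8)

model.train()                 # BatchNorm updates running stats in train mode
with torch.no_grad():         # no gradients, so no parameters can change
    for (data,) in loader:
        model(data)           # forward pass alone refreshes running_mean/running_var
model.eval()

print(model[1].running_mean)  # the updated statistics of the BatchNorm layer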
st103787
Thanks for the solution. So if I want to use the individual batch statistics for validation (during the training process), and obtain the statistics over the whole dataset at the end of training, is the following process correct? 1, set "track_running_stats" to True, allowing the running average of the statistics to be kept. 2, train the model. 3, set the model to .eval(), but set batch norm to .train(), to validate the model using individual statistics? This does not work, since it will update the statistics using the validation data. One trick is to reset the saved average statistics to their initial state (0 or 1) before the final update, so the validation data has no effect on the final statistics. Is there an easier way? 4, set the model to .train(), but don't do a backward pass, to estimate the statistics for the whole dataset. (Here the statistics are changed from the average obtained during training. Any way to avoid this?) How do I correctly implement "use the individual batch statistics for the validation (during the training process), and obtain the statistics over the dataset at the end of training"? Is there any way to toggle "track_running_stats" during the training process, similarly to the .train() and .eval() options? Thanks.
st103788
Sunnydreamrain: “use the individual batch statistics for the validation (during the training process), and to obtain the statistics over the dataset in the end of the training” I don’t really understand this question. You would like to use individual batch statistics during the validation while use the running stats during training? If you set the model to .eval(), the running stats will be used, if track_running_stats=True. Otherwise the batch stats will be used. Do you want the running average of the dataset statistics or the “global” stats? If the latter, you could calculate them offline and just set the running stats to these values.
st103789
For training, the statistics do not matter, as they do not concern the training process. For validation, I want to use the batch stats. For testing, I want to use the global stats over the whole dataset. So the solution would be: set track_running_stats=False, calculate the global stats offline and assign them to the model parameters? Is there an easy way to calculate the global stats? Thanks.
st103790
I think you should set track_running_stats=False, because you don't want to use the running stats in any case. You could calculate the global stats using this example:

class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 3, 24, 24)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

dataset = MyDataset()
loader = DataLoader(dataset,
                    batch_size=5,
                    shuffle=False,
                    num_workers=1)

tmp_mean = 0
tmp_var = 0
nb_elems = 0
for data in loader:
    b, _, h, w = data.size()
    tmp_mean += data.sum(3).sum(2).sum(0)
    tmp_var += torch.pow(data, 2).sum(3).sum(2).sum(0)
    nb_elems += b*h*w

global_mean = tmp_mean / nb_elems
global_var = tmp_var / (nb_elems - 1) - global_mean**2
st103791
Okay. Thanks for the example. I was actually wondering whether there is a simple way to directly get the batch stats during the forward pass, and the global stats can be simply obtained by averaging them. I mean the batch stats is already calculated and used even when setting track_running_stats=False. Thanks.
st103792
Hello , I’m having a stagnate accuracy on my model Screenshot from 2018-07-02 16-59-17.png734×429 80.6 KB model_in = MLPModel(input_dim, args.MLP_in_width[0], args.MLP_in_width[1], args.MLP_in_width[2],args.MLP_in_width[2]) if torch.cuda.is_available(): model_in.cuda() # RNN-------------------------------------------------------------------------- model_RNN = LSTMModel(args.MLP_in_width[2], args.hidden_dim, args.layer_dim, args.output_dim_rnn) if torch.cuda.is_available(): model_RNN.cuda() # MLP_OUT----------------------------------------------------------------------- model_out = MLPModel(args.output_dim_rnn, args.MLP_out_width[0], args.MLP_out_width[1], args.MLP_out_width[2], output_dim) if torch.cuda.is_available(): model_out.cuda() #features_train_MLP = np.zeros((features_train.shape[0],features_train.shape[1],features_train.shape[2])) if torch.cuda.is_available(): features_train_MLP=torch.zeros([batch_size,features_train.shape[1],features_train.shape[2]]).cuda() #STEP 5: INSTANTIATE LOSS CLASS------------------------------------------------ unique, counts = np.unique(labels_train, return_counts=True) #dict(zip(unique, counts)) #counts = np.power(counts.astype(float), -1/2) #counts = counts / counts.sum() #counts_t = torch.from_numpy(counts).type(torch.FloatTensor).cuda() criterion = nn.CrossEntropyLoss() #STEP 6: INSTANTIATE OPTIMIZER CLASS------------------------------------------- learning_rate = args.learning_rate parameters = list(model_in.parameters()) + list(model_RNN.parameters()) + list(model_out.parameters()) optimizer = torch.optim.SGD(parameters, lr=learning_rate) #STEP 7: TRAIN THE MODEL------------------------------------------------------- iter = 0 first_pass = False for epoch in range(num_epochs): for i, (features, labels) in enumerate(train_loader): features=features.type(torch.FloatTensor) labels = labels.type(torch.LongTensor) features_mlp_in = torch.zeros([batch_size , seq_dim , args.MLP_in_width[2]]) features_mlp_in = features_mlp_in.type(torch.FloatTensor) if torch.cuda.is_available(): features = Variable(features.cuda()) labels = Variable(labels.cuda()) features_mlp_in = Variable(features_mlp_in.cuda()) else: features = Variable(features) labels = Variable(labels) features_mlp_in = Variable(torch.zeros([batch_size , seq_dim , args.MLP_in_width[2]])) # Clear gradients w.r.t. parameters optimizer.zero_grad() # Forward pass to get output from MLP_in for j in range (seq_dim): outputs_mlp_1 = model_in(features[:,j,:]) features_mlp_in[:,j,:] = outputs_mlp_1.data # Forward pass to get output from rnn outputs_rnn = model_RNN(features_mlp_in) # Forward pass to get output from MLP_out outputs = model_out(outputs_rnn) # Calculate Loss: softmax --> cross entropy loss loss = criterion(outputs, labels) # Getting gradients w.r.t. 
parameters loss.backward() # Updating parameters #if epoch == args.lr_steps and ( first_pass == False ) : # learning_rate = learning_rate * args.learning_rate # optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # first_pass = True optimizer.step() iter += 1 if iter % 500 == 0 : # Calculate Accuracy correct = 0 total = 0 # Iterate through test dataset for features, labels in test_loader: features = features.type(torch.FloatTensor) features_mlp_in_t = torch.zeros([batch_size , seq_dim , args.MLP_in_width[2]]) features_mlp_in_t = features_mlp_in_t.type(torch.FloatTensor) if torch.cuda.is_available(): features = Variable(features.cuda()) features_mlp_in_t = Variable(features_mlp_in_t.cuda()) # Forward pass to get output from MLP_in for j in range (seq_dim): outputs_mlp_1 = model_in(features[:,j,:]) features_mlp_in_t[:,j,:] = outputs_mlp_1.data # Forward pass to get output from rnn outputs_rnn = model_RNN(features_mlp_in_t) # Forward pass only to get output outputs = model_out(outputs_rnn) # Get predictions from the maximum value _, predicted = torch.max(outputs.data, 1) # Total number of labels total += labels.size(0) # Total correct predictions correct += (predicted.type(torch.DoubleTensor).cpu() == labels.cpu()).sum() accuracy = 100 * correct / total # Print Loss print('Iteration: {}. epoch: {}. Loss: {:.3f} . Accuracy: {:.3f} '.format(iter, epoch, loss.data[0] , accuracy))
st103793
Solved by ptrblck in post #2 Your loss is not decreasing either. Try to simplify your approach by trying to overfit a single sample. If the loss approaches zero and you succesfully overfitted your model, we could have a look at other possible errors. If that’s not possible, your model or training procedure has most likely a b…
st103794
Your loss is not decreasing either. Try to simplify your approach by trying to overfit a single sample. If the loss approaches zero and you succesfully overfitted your model, we could have a look at other possible errors. If that’s not possible, your model or training procedure has most likely a bug.
st103795
I think the problem lies here in the optimizer parameters; I did this line as you recommended in another post:

parameters = list(model_in.parameters()) + list(model_RNN.parameters()) + list(model_out.parameters())
optimizer = torch.optim.SGD(parameters, lr=learning_rate)
st103796
I fixed it. It seems stacking a feedforward network into an RNN into another feedforward network requires more epochs than a simple RNN architecture. I pushed it to 1000 epochs for now; it started learning on the 180th epoch and slowly increased in value, from stagnating at 31% to 75% at the 1000th epoch. Thank you anyway.
st103797
Sounds good. Was there still an error in passing the parameters to the optimizer? If so, could you comment on the fix?
st103798
Already did yesterday, thank you. [screenshot: Screenshot from 2018-07-03 14-50-33.png]
st103799
No worries! I thought there was still a problem: JaeGer: I think the problem lie here in the optimizer parameters , I did this line as you recommended in another post