st80668 | How can we simply calculate the GPU memory a model (nn.Module) uses? Just a single GPU unit. |
st80669 | Solved by ptrblck in post #2 |
st80670 | To calculate the memory requirement for all parameters and buffers, you could simply sum the number of these and multiply by the element size:
mem_params = sum([param.nelement()*param.element_size() for param in model.parameters()])
mem_bufs = sum([buf.nelement()*buf.element_size() for buf in model.buffers()])
mem = mem_params + mem_bufs # in bytes
However, this will not include the peak memory usage for the forward and backward pass (if that’s what you are looking for). |
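For concreteness, a minimal sketch applying this to a torchvision model (assuming torchvision is installed):
import torchvision.models as models

model = models.resnet18()
mem_params = sum(p.nelement() * p.element_size() for p in model.parameters())
mem_bufs = sum(b.nelement() * b.element_size() for b in model.buffers())
print((mem_params + mem_bufs) / 1024**2, "MB")  # roughly 45 MB for resnet18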
st80671 | During training you are using intermediate tensors needed to backpropagate and calculate the gradients. These intermediate tensors will be freed once the gradients have been calculated (assuming you haven’t used retain_graph=True), so you’ll see more memory usage during training than the model parameters and buffers alone would suggest. |
st80672 | Is peak memory usage equivalent to forward/backward pass size here?
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 10, 24, 24] 260
Conv2d-2 [-1, 20, 8, 8] 5,020
Dropout2d-3 [-1, 20, 8, 8] 0
Linear-4 [-1, 50] 16,050
Linear-5 [-1, 10] 510
================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.08
Estimated Total Size (MB): 0.15
---------------------------------------------------------------- |
st80673 | It might be, but I’m not sure which utility you are using and how it estimates the memory usage. |
st80674 | @ptrblck, how can we measure the peak memory usage? This sounds like the most important question for avoiding CUDA out of memory errors. Should I ask a separate question for this? |
st80675 | torch.cuda.max_memory_allocated() should give you the max value. I’m not sure if your current logging library gives a matching number, but it would be interesting to see. |
st80676 | Thanks, it was:
import torch
torch.cuda.max_memory_allocated()
This can hopefully help me figure out the max batch size I can use on a model. But I wonder if something similar is already present in PyTorch.
However, I am not sure if this will also count memory that the garbage collector could free after gc.collect().
Maybe this is called the cache. |
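A sketch of how the peak counter is typically used around a single training step (model and batch are hypothetical placeholders; reset_max_memory_allocated() clears the running peak so each measurement starts fresh):
import torch

torch.cuda.reset_max_memory_allocated()
output = model(batch.cuda())    # hypothetical model and input batch
loss = output.sum()
loss.backward()
torch.cuda.synchronize()
print(torch.cuda.max_memory_allocated() / 1024**2, "MB peak")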
st80677 | From a WGAN-GP tutorial where you create the gradient penalty.
penalty = (tf.norm(tf.gradients(D(interpolation), interpolation), axis=1) - 1) ** 2.0
Can you do this with a one-liner in PyTorch too? |
st80678 | Sure you can:
penalty = (torch.norm(torch.autograd.grad(outputs=D(interpolation).sum(), inputs=interpolation, create_graph=True)[0], p=2, dim=1) - 1) ** 2.0
(autograd.grad returns a tuple, hence the [0]; summing the outputs yields the per-sample gradients here, and create_graph=True lets the penalty itself be backpropagated.) |
st80679 | Hi,
I am using Spyder (Python 3.6) on Windows 10 to work with Python programming. I need to install PyTorch 1.2 or a later version on Windows 10. Can someone show me how to install PyTorch 1.2 or later?
Thanks in advance! |
st80680 | Solved by Nikronic in post #2 |
st80681 | Hi,
Open up conda prompt from anaconda installation path, then run this command:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
Or open up PowerShell (the Windows native shell) and run the command there.
By the way, the PyTorch docs are easy to follow and almost complete; a Google query would be great!
pytorch.org: An open source deep learning platform that provides a seamless path from research prototyping to production deployment. |
st80682 | Why is “run_custom_batch” 20 times faster than “run_pytorch_batch”?
from time import time
import torch
from torch import nn
import torch.utils.data as data_utils

class Controller(torch.nn.Module):
    def __init__(self, n=20):
        super(Controller, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(12, n),
            nn.Tanh(),
            nn.Linear(n, 4),
        )

    def forward(self, Z):
        return self.net(Z)

def run_custom_batch(n_epoch, n, batch_size):
    print(f"running custom_batch, n = {n}, batch_size = {batch_size}")
    controller = Controller()
    Z = torch.rand(n, 12, dtype=torch.float32)
    for i in range(n_epoch):
        t0 = time()
        i1, i2 = 0, batch_size
        while i2 < n:
            res = controller(Z[i1:i2])
            i1, i2 = i2, i2 + batch_size
        res = controller(Z[i1:n])
        print(f"i = {i}, time={time() - t0}")
    print(f"------------------------------------------------------")

def run_pytorch_batch(n_epoch, n, batch_size):
    print(f"running pytorch_batch, n = {n}, batch_size = {batch_size}")
    controller = Controller()
    Z = torch.rand(n, 12, dtype=torch.float32)
    train = data_utils.TensorDataset(Z)
    train_loader = data_utils.DataLoader(train, batch_size=batch_size, shuffle=False)
    for i in range(n_epoch):
        t0 = time()
        for batch in train_loader:
            res = controller(batch[0])
        print(f"i = {i}, time={time() - t0}")
    print(f"------------------------------------------------------")

if __name__ == '__main__':
    batch_size = 1024
    run_custom_batch(n_epoch=5, n=300000, batch_size=batch_size)
    """
    i = 0, time=0.09275412559509277
    i = 1, time=0.07978677749633789
    i = 2, time=0.07679605484008789
    i = 3, time=0.07879066467285156
    i = 4, time=0.07879066467285156
    """
    run_pytorch_batch(n_epoch=5, n=300000, batch_size=batch_size)
    """
    i = 0, time=1.7543346881866455
    i = 1, time=1.7942287921905518
    i = 2, time=1.8034889698028564
    i = 3, time=1.8829929828643799
    i = 4, time=1.836116075515747
    """ |
st80683 | I would assume that simple slicing is faster than calling into the __getitem__ method of your Dataset and stacking the samples into a tensor.
A 20x slowdown is quite large, though, and I wouldn’t expect it. Did you try different numbers of workers for your DataLoader approach and compare it to the manual slicing approach?
The main advantage of using the DataLoader is that you can, e.g., use multiple processes to prefetch the next batch while your main training loop is busy with the model training.
Also, you can easily shuffle, provide a custom sampler or a custom collate_fn, use pinned memory, and easily implement lazy loading of the samples in your Dataset.
If you can store the data in memory and don’t need the advantages of the Dataset and DataLoader, your approach seems completely fine. |
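For reference, a sketch of trying those DataLoader options on the thread’s setup (the num_workers and pin_memory values are just examples):
import torch
import torch.utils.data as data_utils

Z = torch.rand(300000, 12)
train = data_utils.TensorDataset(Z)
# num_workers > 0 prefetches batches in worker processes;
# pin_memory speeds up host-to-GPU copies when training on CUDA
train_loader = data_utils.DataLoader(train, batch_size=1024, shuffle=False,
                                     num_workers=4, pin_memory=True)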
st80684 | Thank you, @ptrblck.
Different numbers of workers do not help, I think because all the data is already in memory. |
st80685 | Given a model and an input tensor, how do I access all the intermediate tensors? I have a network with multiple parallel (residual) branches in it. Are there any restrictions with such networks?
Appreciate your inputs, thanks! |
st80686 | Hi,
The simplest way to access such elements is usually to just access them during the forward pass of your network. |
st80687 | @albanD, thanks for your response. I have a nested structure, and additionally I am worried about putting code in the forward function.
I was hoping I could use some of the torch.jit utilities for this, and that there is a good example I can use. If I am able to construct a network graph, it would help me with handling the different residual branches as well.
I'd appreciate some pointers on this one. Thanks! |
st80688 | Why are you worried about putting code in the forward function?
One of the main advantages of PyTorch is that forward is called at every iteration, so you can run arbitrary code in there. |
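If you'd rather not touch the forward code at all, forward hooks are a common alternative for grabbing intermediate tensors (a sketch using a torchvision model as a stand-in for the nested network):
import torch
import torchvision.models as models

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = models.resnet18()
model.layer1.register_forward_hook(save_activation("layer1"))  # works on any submodule
_ = model(torch.randn(1, 3, 224, 224))
print(activations["layer1"].shape)  # torch.Size([1, 64, 56, 56])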
st80689 | Hi –
The openAI gym environments use a seed() method that hashes its input; the relevant code snippet:
seed = create_seed(seed)
rng = np.random.RandomState()
rng.seed(_int_list_from_bigint(hash_seed(seed)))
(from here).
Does torch.manual_seed() do the same? This isn’t mentioned in the docs. |
st80690 | Solved by albanD in post #2 |
st80691 | Hi,
We don’t do any hashing of the seed.
But the RNG engine might, depending on which one is used (CUDA or CPU). |
st80692 | Option 1:
def forward(self, x):
    if self.downsample:
        identity = self.downsample(x)
    else:
        identity = x
Option 2:
if need_downsampling:
    self.downsample = ...  # do the downsampling stuff
else:
    self.downsample = lambda x: x

def forward(self, x):
    identity = self.downsample(x)
Is one of the above more computation- or memory-efficient than the other? |
st80693 | Hi,
Yes, they are.
Note that in both cases, you can change self.downsample between forward passes if needed. |
st80694 | I am working with ensembles of neural networks, roughly of the following form:
class NNEnsemble(nn.Module):
    def __init__(self, n_estimators):
        super().__init__()
        self.n_estimators = n_estimators
        self.estimators = nn.ModuleList([generate_model() for _ in range(n_estimators)])

    def forward(self, x):
        out = torch.stack([est(x) for est in self.estimators], dim=1)
        return torch.sum(out, dim=1)
Here, generate_model() generates a small base model (e.g., an MLP with 3 layers and 128 hidden neurons). As it turns out, the list comprehension/for loop during forward is rather slow. In fact, I only utilize < 10% of the GPU for an ensemble with 100 small networks. Is there any way to speed up this operation, e.g. by using a “parallel” list comprehension?
I found one post about grouped convolution which mentions a similar structure, but there was no real solution besides waiting for cudnn support. |
st80695 | Hi,
Unfortunately, if you do many small operations, it’s going to be fairly slow. The overhead of launching the work on the GPU might be larger than the time you spend actually computing.
You can try running the small parts of your model on the CPU, increase the batch size to give more work to your GPU, or find a way to aggregate your estimators into a single one that performs one big operation. |
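One way to follow the “aggregate your estimators” suggestion for linear layers is to stack the per-estimator weights and run a single batched matmul (a sketch with made-up sizes; a real MLP would need one such step per layer, plus the nonlinearity in between):
import torch

n_models, in_f, out_f, batch = 100, 128, 10, 32
W = torch.randn(n_models, in_f, out_f)   # stacked weights, one slice per estimator
b = torch.randn(n_models, 1, out_f)      # stacked biases
x = torch.randn(batch, in_f)

# one kernel launch instead of 100: out[i] = x @ W[i] + b[i]
out = torch.baddbmm(b, x.expand(n_models, batch, in_f), W)  # [n_models, batch, out_f]
ensemble_sum = out.sum(dim=0)            # [batch, out_f]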
st80696 | Hi!
I have the following piece of code:
sorted_values, indices = torch.sort(score, dim=0, descending=True)
score[filt_idxs] = float("-inf")
filt_values, filt_indices = torch.sort(score, dim=0, descending=True)
I have manually examined the score tensor, and it only has a single -inf value after filtering. That would mean that filt_indices might shift up by at most one position compared to indices.
However, I see that in the cases where values of score tensor are the same, indices and filt_indices are completely different.
That behavior gets me thinking that sorting in PyTorch is non-deterministic. Is that so? If yes - how can I make it deterministic?
Extra details:
All the operations are on CPU
PyTorch 1.1.0 |
st80697 | Hi,
Could you find a small score Tensor that reproduces this behavior to help us reproduce the issue? |
st80698 | Also, I have realized that my choice of wording was incorrect.
It would be more proper to say that sorting in PyTorch is non-stable, which makes sense if the sorting operation internally uses quicksort.
However, it would be a really good feature if you could specify whether you need it to be stable, e.g. by passing a flag to the sorting function. In that case it would internally run merge sort, since it’s stable. |
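Until such a flag exists, one workaround on CPU (a sketch, assuming the tensor can round-trip through numpy) is to use numpy's stable sort on the values:
import numpy as np
import torch

score = torch.tensor([0.5, 0.3, 0.5, 0.1])
# negate for descending order; kind='stable' preserves the order of equal values
indices = torch.from_numpy(np.argsort(-score.numpy(), kind='stable'))
sorted_values = score[indices]
print(indices)  # tensor([0, 2, 1, 3])
For what it's worth, later PyTorch releases added a stable=True argument to torch.sort.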
st80699 | Good proposal. I also met a similar problem when translating PC-side PyTorch code to mobile-side C++ code. In my scenario, counting sort is more efficient. However, the indices returned by PyTorch differ from my counting sort when there are equal items (both sort correctly). It would be nice if we could control the PyTorch sort algorithm to keep results consistent. |
st80700 | Hi,
We would be happy to accept PR implementing new sorting algorithms that can be chosen based on a flag (with the current behavior as default). |
st80701 | Hi All,
I was wondering if someone knows if this same bug exists in pytorch?
github.com/tensorflow/tensorflow
Issue: tf.keras batch normalization is batch dependent at test time
opened by isaacgerg
on 2019-09-15
Essentially, function outputs are dependent on batch size/data. For example, on a given input x, you get y = f(x). Now add x to a batch of data z and send over the batch. You would expect that, for x’s index in z, y_z would equal y, but this is not the case. |
st80702 | I have to admit it’s a strange question to ask for a specific bug.
However, if you set your batch norm to .eval() in PyTorch, the result will not depend on the batch, and the running estimates will be used for each sample.
Let us know, if you encounter any strange behavior. |
st80703 | I understand what you mean. The test example is very simple to implement, and if PyTorch has the same error, then the issue might be in cudnn. |
st80704 | You could try to run this dummy example on your machine and check the difference:
import torch
from torch import nn

x = torch.randn(10, 3, 24, 24)
bn = nn.BatchNorm2d(3)

# dummy forward passes to update running estimates
for _ in range(10):
    _ = bn(x)

# Set to eval
bn.eval()

output_all = bn(x)
output_manual = []
for i in range(x.size(0)):
    output_manual.append(bn(x[i:i+1]))
output_manual = torch.cat(output_manual)

print((output_all - output_manual).abs().max())
> tensor(0., grad_fn=<MaxBackward1>) |
st80705 | I know this is a strange request but i really want to run this issue to ground. Thanks for the code! |
st80706 | What is a way to load Omniglot in a few-shot setting, like 1-shot 5-way or 5-shot 5-way? |
st80707 | Hey guys,
So I have a batch of convolutional filters and a batch of images. What is the best way to perform the per-element convolution so it is executed in parallel (without iterating through the batch indices).
Thanks! |
st80708 | Solved by phan_phan in post #2 |
st80709 | So, you have filters in a tensor w of shape:
[batch_size, out_channels, in_channels, kernel_height, kernel_width],
and images in a tensor x of shape:
[batch_size, in_channels, in_height, in_width],
and you want an output of shape:
[batch_size, out_channels, out_height, out_width],
where the i-th output is the convolution of the i-th image by the i-th filter?
You can do it by merging the batch and channel dimensions together, and use a grouped convolution with groups=batch_size:
o = torch.nn.functional.conv2d(
    x.view(1, batch_size * in_channels, x.size(2), x.size(3)),
    w.view(batch_size * out_channels, in_channels, w.size(3), w.size(4)),
    groups=batch_size)
o = o.view(batch_size, out_channels, o.size(2), o.size(3))
It works! But is it the best way to do it? I don’t know.
Let me know if this ends up faster. |
st80710 | Hey phan_phan,
Thanks. Your solution is correct. It seems to be ever so slightly faster than simply iterating over the batch dimension.
Cheers |
st80711 | Hi! I am new to deep learning and have some questions about training networks. I hope someone can help me clear up my doubts.
When I train my network, I usually stop training when I see that the training-loss plot has converged. However, this always produces a validation-loss plot that decreases and then increases after a point, meaning the network has overfitted, right? Should I stop training at the point where the validation loss starts to increase, even though the training loss has not converged?
Thanks in advance |
st80712 | You can save the state of your model regularly during training, in different files, so at the end you can choose which one to use, by looking at your losses plot.
To get the best results on data that is not in the training set, you’d want to use the model state saved at the lowest validation loss (just before the over-fitting starts to increase validation loss). |
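A minimal sketch of that checkpointing pattern (train_one_epoch, evaluate, the model, and the loaders are hypothetical placeholders standing in for your own loop):
import torch

best_val = float("inf")
for epoch in range(num_epochs):                  # num_epochs assumed defined
    train_one_epoch(model, train_loader)         # hypothetical training step
    val_loss = evaluate(model, val_loader)       # hypothetical validation step
    torch.save(model.state_dict(), f"checkpoint_epoch{epoch}.pt")
    if val_loss < best_val:                      # keep the lowest-validation-loss state
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")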
st80713 | Hi There,
I cannot find a tutorial for the Transformer as it is implemented in PyTorch 1.2.
I found this one, but it looks like fancy stuff that uses only one part (the encoder) and not the plain Transformer.
Does an example for beginners exist on how to use this?
thanks for the amazing feature,
best |
st80714 | So, I have an issue with some code where GPU:0 always seems to have abnormally high usage (and it gets worse with bigger inputs). Is it possible to see what tensors and models exist on a particular GPU? And if so, how would I do that? The hope is that I could use this information to help determine the cause of the issue. |
st80715 | Solved by Nikronic in post #2 |
st80716 | Hi,
print(tensor.device)
print(next(model.parameters()).device)
Note that .device works for both CPU and GPU tensors, while tensor.get_device() only works for CUDA tensors and will throw an error otherwise. These tell you the id of the GPU currently being used.
Bests
Nik |
st80717 | Hi.
I have a problem with printing the weights between the inputs and the first layer.
class LSTMClassifier(nn.Module):
    """Very simple implementation of LSTM-based time-series classifier."""

    def __init__(self, input_size, hidden_size, n_layers, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.rnn = nn.LSTM(input_size, hidden_size, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)
        self.batch_size = None
        self.hidden = None

    def forward(self, x):
        h0, c0 = self.init_hidden(x)
        out, (hn, cn) = self.rnn(x, (h0, c0))
        out = self.fc(out[:, -1, :])
        return out

    def init_hidden(self, x):
        h0 = torch.zeros(self.n_layers, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.n_layers, x.size(0), self.hidden_size)
        return [t for t in (h0, c0)]
I know how to do this with a feedforward neural network, but with an LSTM it’s not working.
Thanks for help. |
st80718 | Hi,
Why can’t you add a print statement in your forward() function?
Can you give an example of what you want to achieve? |
st80719 | What should this print statement look like?
I just want to know which inputs the neural network sees as important, judging by the weights. |
st80720 | You can add print(self.rnn.weight_ih_l0), for example. You can find the definition of each weight and their names here.
Rick_Sanchez:
I just want to know which inputs the neural network sees as important, judging by the weights.
Not sure what that means |
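To see all of the LSTM's weight names at once (the per-layer parameters are named weight_ih_l0, weight_hh_l0, and so on), a quick sketch:
import torch
from torch import nn

rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
for name, param in rnn.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 (64, 8), weight_hh_l0 (64, 16), bias_ih_l0 (64,), bias_hh_l0 (64,), ...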
st80721 | albanD:
Not sure what that means
I’m just checking my theory.
Thanks for the help. |
st80722 | I am doing a binary classification task, for which I created an AlexNet network using the weights of the pretrained AlexNet model. Instead of training the whole network, I have frozen all the layers except the final layer, for which I kept requires_grad as True. But during training I observed no major improvement in the run time of the model: 20 epochs still took almost 6 hours of training on 20000 images. I am not sure where I am going wrong. Please tell me, with respect to the code below, whether I am going in the right direction or there is some mistake in freezing the layers or in the training.
alexnet_url = 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth'

transform = transforms.Compose([transforms.RandomHorizontalFlip(),
                                transforms.Resize([256, 256]),
                                transforms.ToTensor(),
                                transforms.Normalize(mean=[0.5], std=[0.5])])
test_transform = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize(mean=[0.5], std=[0.5])])

dataset = CatDogDataset(root_dir='./Cat-Dog-data/cat-dog-train', transform=transform)
train_data, val_data = random_split(dataset, [19000, 1041])
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=64, shuffle=True)
test_data = CatDogDataset(root_dir='./Cat-Dog-data/cat-dog-test', transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True)

now = datetime.now()
print('Before training:-', now)

# loading pre-trained alexnet model
model_pre = models.alexnet(pretrained=True)

# creating new alexnet model for binary classification
class AlexNet(nn.Module):
    def __init__(self, num_classes=1):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
            nn.Sigmoid()
        )
        # copy the pretrained weights for everything except the final classifier layer
        state = model_zoo.load_url(alexnet_url)
        state_dict = self.state_dict()
        for k, v in state.items():
            if k == 'classifier.6.weight' or k == 'classifier.6.bias':
                continue
            state_dict.update({k: v})
        self.load_state_dict(state_dict)

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        x = self.classifier(x)
        return x

model = AlexNet(num_classes=1).to(dev)

model.features[0].weight.requires_grad = False
model.features[0].bias.requires_grad = False
model.features[3].weight.requires_grad = False
model.features[3].bias.requires_grad = False
model.features[6].weight.requires_grad = False
model.features[6].bias.requires_grad = False
model.features[8].weight.requires_grad = False
model.features[8].bias.requires_grad = False
model.features[10].weight.requires_grad = False
model.features[10].bias.requires_grad = False
model.classifier[1].weight.requires_grad = False
model.classifier[1].bias.requires_grad = False
model.classifier[4].weight.requires_grad = False
model.classifier[4].bias.requires_grad = False
model.classifier[6].weight.requires_grad = True
model.classifier[6].bias.requires_grad = True

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.0001, betas=(0.9, 0.999))
total_step = len(train_loader)
train_losses, val_losses, train_accs, val_accs = [], [], [], []
for epoch in range(num_epochs):
    train_loss = 0
    val_loss = 0
    train_acc = 0
    val_acc = 0
    for images, labels in train_loader:
        images = images.to(dev)
        labels = labels.to(dev)
        outputs = model(images)
        loss = criterion(outputs, labels.type(torch.FloatTensor).to(dev))
        train_loss += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total = labels.size(0)
        labels = torch.squeeze(labels)
        predicted = np.where(outputs.cpu().detach().numpy() < 0.5, 0, 1)
        predicted = torch.from_numpy(predicted)
        predicted = torch.squeeze(predicted)
        correct = (predicted == labels.cpu()).sum().item()
        train_acc += correct / total
Time observed before training: 2019-09-26 10:38:14.336391
Time observed after training: 2019-09-26 17:02:13.680405
Though there are only 4,097 learnable parameters, the model still takes almost 5-6 hours to train. |
st80723 | Could you try to pass only the learnable parameters to the optimizer?
I think the problem is that even if requires_grad is set to False, it only means the values aren't going to be updated; gradients may still be computed, as they have to be backpropagated to previous tensors.
import torch
import time

x = torch.rand(10, requires_grad=False)
y = torch.rand(10, requires_grad=False)
z = torch.rand(10, requires_grad=True)

def timing(f):
    def wrap(*args):
        time1 = time.time()
        ret = f(*args)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(f.__name__, (time2 - time1) * 1000.0))
        return ret
    return wrap

@timing
def run(optim, backprop=True):
    optim.zero_grad()
    q = x + y
    p = (z + q).mean()
    print(x)
    print(y)
    print(z)
    print(q)
    if backprop:
        p.backward()
    print(x.grad)
    print(y.grad)
    print(z.grad)
    print(q.grad)
    print(p.grad)
    if backprop:
        optim.step()
    print(x)
    print(y)
    print(z)
    print(q)

run(torch.optim.SGD([x, y, z], lr=1))
with torch.no_grad():
    run(torch.optim.SGD([x, y, z], lr=1), False)
run(torch.optim.SGD([x], lr=1))
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([0.0729, 0.0993, 0.8631, 0.5255, 0.8684, 0.8381, 0.3063, 0.4755, 0.3240,
0.5715], requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
None
None
tensor([0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000,
0.1000])
None
None
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([-2.7121e-02, -7.1023e-04, 7.6311e-01, 4.2552e-01, 7.6838e-01,
7.3814e-01, 2.0632e-01, 3.7555e-01, 2.2402e-01, 4.7150e-01],
requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
run function took 62.251 ms
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([-2.7121e-02, -7.1023e-04, 7.6311e-01, 4.2552e-01, 7.6838e-01,
7.3814e-01, 2.0632e-01, 3.7555e-01, 2.2402e-01, 4.7150e-01],
requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
None
None
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
None
None
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([-2.7121e-02, -7.1023e-04, 7.6311e-01, 4.2552e-01, 7.6838e-01,
7.3814e-01, 2.0632e-01, 3.7555e-01, 2.2402e-01, 4.7150e-01],
requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
run function took 1.616 ms
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([-2.7121e-02, -7.1023e-04, 7.6311e-01, 4.2552e-01, 7.6838e-01,
7.3814e-01, 2.0632e-01, 3.7555e-01, 2.2402e-01, 4.7150e-01],
requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
None
None
tensor([0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000, 0.1000,
0.1000])
None
None
tensor([0.8253, 0.7537, 0.7688, 0.9117, 0.2731, 0.4141, 0.5021, 0.5691, 0.8326,
0.9245])
tensor([0.2086, 0.2583, 0.3304, 0.6977, 0.2884, 0.6722, 0.2859, 0.4168, 0.3863,
0.2488])
tensor([-2.7121e-02, -7.1023e-04, 7.6311e-01, 4.2552e-01, 7.6838e-01,
7.3814e-01, 2.0632e-01, 3.7555e-01, 2.2402e-01, 4.7150e-01],
requires_grad=True)
tensor([1.0340, 1.0120, 1.0993, 1.6094, 0.5615, 1.0863, 0.7880, 0.9859, 1.2189,
1.1733])
run function took 1.965 ms
If you look at this messy code: even if grads are false, running the optimizer over those tensors takes a lot of time, whereas if you go for a no_grad strategy, or simply don't pass them to the optimizer, you save that time. |
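Following that reasoning, the frozen feature extractor in the AlexNet above could be run under no_grad (a sketch, assuming only the final classifier layer stays trainable, as in the posted code):
def forward(self, x):
    # every layer in self.features is frozen, so skip graph construction here
    with torch.no_grad():
        x = self.features(x)
        x = self.avgpool(x)
    x = x.view(x.size(0), 256 * 6 * 6)
    # the trainable final Linear inside the classifier still gets gradients
    return self.classifier(x)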
st80724 | Hi,
There have been previous discussions on weighted BCELoss here, but none of them give a clear answer on how to actually apply the weight tensor and what it should contain.
I’m doing binary segmentation where the output is either foreground or background (1 and 0). But my dataset is highly imbalanced: there is way more background than foreground (to be exact, there are 95 times more background pixels than foreground). So I want to penalize the background by multiplying it with a small number.
I see that BCELoss has a weight parameter: https://pytorch.org/docs/stable/nn.html, which according to the docs should be of size nbatch.
So if I have a batch of size 1, would the weight tensor be of size 1 as well? Won't that just be a single float value?
I've read the discussion in "Binary cross entropy weights", but that does not answer what the weight tensor would look like for a batch size of 1.
Sorry if I’m rambling, but I can’t seem to find anywhere how to use the weight parameter properly. If my output and label tensors are of size [1, 1, 60, 40, 40], how would I make my weight parameter penalize the background class (0)? |
st80725 | Solved by KFrank in post #2 |
st80726 | Hi phdproblems!
phdproblems:
I’m doing binary segmentation where the output is either foreground or background (1 and 0). But my dataset is highly imbalanced and there is way more background than foreground. (To be exact there is 95 times more background pixels than foreground).
…
Sorry if I’m rambling but I can’t seem to find anywhere how to use the weight parameter properly. If my output and label tensors are of the size [1, 1, 60, 40, 40] how would I make my weight parameter to penalize the background class (0)?
Let me add some hypothetical context to your question to make it
more concrete:
As you say, you have a batch size of one. Each batch tensor is
of torch.Size([1, 1, 60, 40, 40]), so each sample is of
torch.Size([1, 60, 40, 40]).
Let me imagine that each sample represents, say, a one-channel
(e.g., black and white), three-dimensional image of 60 z-slices
(or time-slices), each of x-y dimension of 40x40. The output
of your network is a tensor for which each element is the predicted
probability (between 0 and 1) that the corresponding voxel (3-d
pixel) in your 3-d image is a foreground voxel. I will call
this your predict tensor.
Your target tensor is of the same shape, and is the known
training data with values of 0 or 1 indicating background and
foreground voxels, respectively.
(That is, your network performs a binary classification of your
voxels. As far as the loss is concerned, you don’t care that
the voxels happen to be arranged as a (one-channel) 60x40x40
3-d image.)
To weight each voxel’s contribution separately, you want a weight
tensor that is the same shape as your predict and target tensors,
i.e., torch.Size([1, 1, 60, 40, 40]). (So its first dimension is
your batch size of one.)
The following script (for pytorch 0.3.0) illustrates this:
import torch
print (torch.__version__)
torch.manual_seed (2019)
predict = torch.rand ((1, 1, 60, 40, 40))
target = torch.bernoulli (predict)
weight = torch.rand ((1, 1, 60, 40, 40))
weight_rebal = torch.ones_like (target) / 95.0 + (1.0 - 1.0 / 95.0) * target
predict_f = predict.clone()
target_f = target.clone()
weight_f = weight.clone()
predict_f.resize_((96000, 1))
target_f.resize_((96000, 1))
weight_f.resize_((96000, 1))
predict = torch.autograd.Variable (predict)
target = torch.autograd.Variable (target)
predict_f = torch.autograd.Variable (predict_f)
target_f = torch.autograd.Variable (target_f)
loss = torch.nn.functional.binary_cross_entropy (predict, target)
lossw = torch.nn.functional.binary_cross_entropy (predict, target, weight)
loss_rebal = torch.nn.functional.binary_cross_entropy (predict, target, weight_rebal)
loss_f = torch.nn.functional.binary_cross_entropy (predict, target)
loss_fw = torch.nn.functional.binary_cross_entropy (predict_f, target_f, weight_f)
print ('predict.shape =', predict.shape)
print ('loss =', loss)
print ('lossw =', lossw)
print ('loss_rebal =', loss_rebal)
print ('predict_f.shape =', predict_f.shape)
print ('loss_f =', loss_f)
print ('loss_fw =', loss_fw)
The output is:
0.3.0b0+591e73e
predict.shape = torch.Size([1, 1, 60, 40, 40])
loss = Variable containing:
0.5002
[torch.FloatTensor of size 1]
lossw = Variable containing:
0.2503
[torch.FloatTensor of size 1]
loss_rebal = Variable containing:
0.2530
[torch.FloatTensor of size 1]
predict_f.shape = torch.Size([96000, 1])
loss_f = Variable containing:
0.5002
[torch.FloatTensor of size 1]
loss_fw = Variable containing:
0.2503
[torch.FloatTensor of size 1]
weight is a tensor of random weights (one for each voxel).
weight_rebal is a tensor of foreground-background weights
(again, one for each voxel), computed from your training-data
target, where background voxels are given a weight factor of
1/95 (to account for their greater frequency of occurrence).
Last, the _f (for flattened) tensors and losses are just to show
that the shape doesn’t affect the per-voxel loss computation.
These can be understood, if you will, as consisting of a batch
of 96,000 samples (batch size = 96,000) of single floating-point
prediction values and single 0 or 1 class labels.
(As a side note, CrossEntropyLoss permits you to specify a
per-class weight, e.g., one weight for foreground and another
weight for background voxels, rather than having to pass in a
tensor of per-voxel weights. You could recast your problem
as a two-class (foreground / background) multiclass classification
problem and use CrossEntropyLoss, but I wouldn’t recommend it.)
Good luck.
K. Frank |
st80727 | Thank you so much. I think this might be the first detailed response to this question here, and it finally cleared up the weight-tensor size and shape confusion. I really appreciate the time you took on this detailed answer; it cleared up all of my confusion.
KFrank:
(As a side note, CrossEntropyLoss permits you to specify a
per-class weight, e.g., one weight for foreground and another
weight for background voxels, rather than having to pass in a
tensor of per-voxel weights. You could recast your problem
as a two-class (foreground / background) multiclass classification
problem and use CrossEntropyLoss, but I wouldn’t recommend it.)
Can you please mention why you would not recommend this? I thought BCE is just CE for a binary problem; is it not advised to use CE for binary problems?
Thanks again. |
st80728 | Hi, another follow-up question to this.
What you explained in your answer is exactly what I need, i.e., calculating the weight tensor for each instance based on the target tensor. But from what I have read, PyTorch does not support this; it only supports the same weight for all instances in a batch, which has to be provided when the loss is declared/initialized.
Is this the case, or can I provide a different weight for each instance?
Thank you |
st80729 | Hello phdproblems!
phdproblems:
KFrank:
(You could recast your problem
as a two-class (foreground / background) multiclass classification
problem and use CrossEntropyLoss , but I wouldn’t recommend it.)
Can you please mention why you would not recommend this, I though BCE is just CE but for a binary problem, is it not advised to use CE for binary problems?
BCE takes a single number per sample for its prediction – the
probability of the sample being in class “1”.
Multiclass CE takes, for N classes, N numbers for its prediction.
These are the probabilities of the sample being in each of the
N classes. (Pytorch’s CrossEntropyLoss has a softmax
implicitly built in, so it takes logits, rather than probabilities.)
So if you use multiclass CE for a binary (two-class) problem,
you will now have to pass two numbers to it instead of one.
(If these are official probabilities, they will be redundant, in
that they will sum to 1.) So your network will have to have
two (redundant) outputs instead of one. Hardly a big deal,
but it would presumably make your network ever so slightly
less efficient.
If you do everything correctly, you will be doing exactly the
same problem and will get exactly the same result (up to
some round-off error), but it just seems cleaner to me to
use BCE for the binary problem.
Best.
K. Frank |
st80730 | Hi phdproblems!
phdproblems:
But from what i have read, pytorch does not support this, it only supports the same weight for all instances in a batch which has to be provided when the loss is declared/initialized.
Is this the case or can I provide a different weight for each instance?
Well, the documentation for the weight argument in BCELoss
and binary_cross_entropy is, to put it nicely, somewhat abbreviated.
I purposely used binary_cross_entropy in my example,
because you can pass in a batch of weights (together with
your predict and target) every time the loss is called.
(As you note, with BCELoss you pass in the weight only at
the beginning when you instantiate the BCELoss class, so
you can’t give it different weights every time you call it with a
new predict and target.)
Also in my example I showed that passing in per-voxel weights
to binary_cross_entropy does indeed work, even if the
documentation doesn’t make this clear, and showed that this
gives the same result as the “flattened” version (predict_f,
target_f, weight_f) where a batch of weights – one weight
for each sample in the batch – is passed in, consistent with
the simplest interpretation of the documentation.
Good luck.
K. Frank |
st80731 | You’re right, sorry I missed that you were using binary_cross_entropy() and not BCELoss() (which I am using).
Thanks for clarifying. I’ll just use binary_cross_entropy()
Thanks for your help |
st80732 | Hi all,
After 2-3 epochs (sometimes even before finishing the first epoch) my model gives the error Torch: invalid memory size -- maybe an overflow? at /opt/conda/conda-bld/pytorch-nightly_1549065556721/work/aten/src/TH/THGeneral.cpp:188. Can anyone help me resolve this? PyTorch version: 1.0.0.dev20190201 on Ubuntu 18.04.
Thanks |
st80733 | Do you see the same error using the latest stable release (or latest nightly build)? |
st80734 | I have updated PyTorch. Now I am getting another error in my data loader: RuntimeError: Trying to create tensor with negative dimension -2: [-2, 300]. |
st80735 | Could you try to isolate which batch creates this issue, or is it thrown in the first step of your DataLoader iteration?
You could iterate your Dataset and check which index is raising this error.
Once you’ve isolated it, you can check what’s going wrong in __getitem__ (or wherever this exception is raised). |
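A quick sketch of that isolation loop (dataset stands in for the failing Dataset instance):
# iterate the Dataset directly to find the sample whose __getitem__ fails
for idx in range(len(dataset)):
    try:
        _ = dataset[idx]
    except Exception as e:
        print(f"index {idx} raised: {e}")
        break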
st80736 | Hi,
I have read this thread carefully, but I am still confused about it.
Let’s say we have 3 operations:
x1 = input
x2 = op1(x1)
x3 = op2(x2)
x4 = op3(x3)
loss = lossfunction(x4)
What is the difference between:
x1 = input
x2 = op1(x1)
with torch.no_grad():
    x3 = op2(x2)
x4 = op3(x3)
loss = lossfunction(x4)
and (using detach)
x1 = input
x2 = op1(x1)
x3 = op2(x2).detach()
x4 = op3(x3)
loss = lossfunction(x4)
Assuming all operations have learnable parameters, how do the gradient propagation, memory management, and activations change? |
st80737 | Hi,
In this particular case, they will be quite similar.
The only difference, I guess, is that in the first case you never even instantiate the intermediate state in op2, while in the second case you do and discard it later.
If you do inplace operations in op2 though, there will be some difference. |
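A small concrete check of the detach variant (the ops are stand-in Linear layers): the graph is cut at the detach, so op1 and op2 receive no gradients while op3 still does:
import torch
from torch import nn

op1, op2, op3 = nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 4)
x1 = torch.randn(2, 4)

x3 = op2(op1(x1)).detach()   # cuts the graph: op1/op2 are excluded from backward
loss = op3(x3).sum()
loss.backward()
print(op1.weight.grad, op2.weight.grad)  # None None
print(op3.weight.grad.shape)             # torch.Size([4, 4])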
st80738 | [ 16] 12410 || B: 5.001 | C: 2.542 | M: 6.508 | S: 0.126 | T: 14.177 || ETA: 3 days, 6:05:50 || timer: 0.355
[ 16] 12420 || B: 5.125 | C: 2.548 | M: 6.596 | S: 0.129 | T: 14.397 || ETA: 3 days, 6:05:45 || timer: 0.357
Computing validation mAP (this may take a while)…
Calculating mAP…
      |  all  |  .50  |  .55  |  .60  |  .65  |  .70  |  .75  |  .80  |  .85  |  .90  |  .95  |
------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
 box  |  6.09 | 20.33 | 20.30 | 16.83 |  3.47 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |
 mask |  3.22 |  7.92 |  7.92 |  7.92 |  7.92 |  0.50 |  0.00 |  0.00 |  0.00 |  0.00 |  0.00 |
------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
[ 17] 12430 || B: 4.725 | C: 2.548 | M: 6.641 | S: 0.135 | T: 14.049 || ETA: 3 days, 6:52:38 || timer: 0.356
[ 17] 12440 || B: 4.636 | C: 2.541 | M: 6.520 | S: 0.121 | T: 13.818 || ETA: 3 days, 6:52:41 || timer: 0.357
[ 17] 12450 || B: 4.655 | C: 2.538 | M: 6.710 | S: 0.119 | T: 14.022 || ETA: 3 days, 6:52:40 || timer: 0.357
[ 17] 12460 || B: 4.801 | C: 2.533 | M: 6.707 | S: 0.113 | T: 14.155 || ETA: 3 days, 6:52:29 || timer: 0.359
[ 17] 12470 || B: 4.465 | C: 2.544 | M: 6.497 | S: 0.123 | T: 13.629 || ETA: 3 days, 6:52:34 || timer: 0.357
[ 17] 12480 || B: 4.464 | C: 2.539 | M: 6.670 | S: 0.136 | T: 13.809 || ETA: 3 days, 6:52:32 || timer: 0.356
[ 17] 12490 || B: 4.622 | C: 2.534 | M: 7.012 | S: 0.139 | T: 14.306 || ETA: 3 days, 6:52:29 || timer: 0.355
[ 17] 12500 || B: 4.615 | C: 2.538 | M: 7.111 | S: 0.153 | T: 14.417 || ETA: 3 days, 6:52:25 || timer: 0.358
[ 17] 12510 || B: 4.576 | C: 2.560 | M: 6.934 | S: 0.144 | T: 14.214 || ETA: 3 days, 6:52:24 || timer: 0.359
[ 17] 12520 || B: 4.516 | C: 2.553 | M: 6.547 | S: 0.142 | T: 13.758 || ETA: 3 days, 6:52:19 || timer: 0.357
[ 17] 12530 || B: 4.567 | C: 2.547 | M: 6.474 | S: 0.135 | T: 13.724 || ETA: 3 days, 6:52:20 || timer: 0.355
[ 17] 12540 || B: 4.792 | C: 2.532 | M: 6.801 | S: 0.121 | T: 14.246 || ETA: 3 days, 6:52:16 || timer: 0.355
[ 17] 12550 || B: 4.766 | C: 2.525 | M: 6.734 | S: 0.130 | T: 14.155 || ETA: 3 days, 6:52:20 || timer: 0.358
[ 17] 12560 || B: 4.564 | C: 2.511 | M: 6.705 | S: 0.129 | T: 13.910 || ETA: 3 days, 6:52:25 || timer: 0.357
[ 17] 12570 || B: 4.467 | C: 2.486 | M: 6.713 | S: 0.136 | T: 13.803 || ETA: 3 days, 6:52:25 || timer: 0.355
[ 17] 12580 || B: 4.381 | C: 2.495 | M: 6.505 | S: 0.126 | T: 13.507 || ETA: 3 days, 6:52:31 || timer: 0.356
[ 17] 12590 || B: 3.971 | C: 2.483 | M: 6.208 | S: 0.132 | T: 12.795 || ETA: 3 days, 6:52:30 || timer: 0.356
[ 17] 12600 || B: 3.766 | C: 2.477 | M: 6.069 | S: 0.129 | T: 12.441 || ETA: 3 days, 6:52:29 || timer: 0.358
[ 17] 12610 || B: 3.656 | C: 2.452 | M: 5.964 | S: 0.128 | T: 12.200 || ETA: 3 days, 6:52:27 || timer: 0.356
[ 17] 12620 || B: 3.622 | C: 2.439 | M: 6.141 | S: 0.125 | T: 12.326 || ETA: 3 days, 6:52:33 || timer: 0.356
[ 17] 12630 || B: 3.712 | C: 2.427 | M: 6.246 | S: 0.124 | T: 12.509 || ETA: 3 days, 6:52:38 || timer: 0.359
[ 17] 12640 || B: 3.489 | C: 2.434 | M: 6.113 | S: 0.120 | T: 12.156 || ETA: 3 days, 6:52:37 || timer: 0.358
[ 17] 12650 || B: 3.384 | C: 2.445 | M: 5.906 | S: 0.120 | T: 11.855 || ETA: 3 days, 6:52:33 || timer: 0.355
[ 17] 12660 || B: 3.653 | C: 2.459 | M: 6.166 | S: 0.117 | T: 12.394 || ETA: 3 days, 6:52:36 || timer: 0.353
[ 17] 12670 || B: 3.760 | C: 2.464 | M: 6.224 | S: 0.100 | T: 12.549 || ETA: 3 days, 6:52:38 || timer: 0.357
[ 17] 12680 || B: 3.882 | C: 2.452 | M: 6.390 | S: 0.097 | T: 12.822 || ETA: 3 days, 6:52:39 || timer: 0.359
[ 17] 12690 || B: 4.236 | C: 2.455 | M: 6.754 | S: 0.095 | T: 13.540 || ETA: 3 days, 6:52:50 || timer: 0.354
[ 17] 12700 || B: 4.662 | C: 2.460 | M: 7.379 | S: 0.104 | T: 14.606 || ETA: 3 days, 6:49:43 || timer: 0.357
[ 17] 12710 || B: 4.683 | C: 2.454 | M: 7.404 | S: 0.111 | T: 14.652 || ETA: 3 days, 6:49:45 || timer: 0.359
[ 17] 12720 || B: 4.698 | C: 2.459 | M: 7.431 | S: 0.112 | T: 14.700 || ETA: 3 days, 6:49:39 || timer: 0.357
[ 17] 12730 || B: 4.437 | C: 2.468 | M: 7.125 | S: 0.128 | T: 14.158 || ETA: 3 days, 6:49:43 || timer: 0.356
[ 17] 12740 || B: 4.434 | C: 2.470 | M: 7.025 | S: 0.145 | T: 14.074 || ETA: 3 days, 6:49:40 || timer: 0.354
[ 17] 12750 || B: 4.448 | C: 2.453 | M: 7.122 | S: 0.168 | T: 14.192 || ETA: 3 days, 6:49:34 || timer: 0.357
[ 17] 12760 || B: 4.289 | C: 2.456 | M: 6.763 | S: 0.171 | T: 13.679 || ETA: 3 days, 6:49:26 || timer: 0.361
[ 17] 12770 || B: 4.462 | C: 2.445 | M: 6.904 | S: 0.171 | T: 13.983 || ETA: 3 days, 6:49:18 || timer: 0.354
[ 17] 12780 || B: 4.606 | C: 2.441 | M: 6.875 | S: 0.173 | T: 14.096 || ETA: 3 days, 6:49:05 || timer: 0.354
[ 17] 12790 || B: 4.360 | C: 2.450 | M: 6.649 | S: 0.159 | T: 13.616 || ETA: 3 days, 6:48:55 || timer: 0.356
[ 17] 12800 || B: 4.125 | C: 2.456 | M: 6.210 | S: 0.138 | T: 12.928 || ETA: 3 days, 6:48:58 || timer: 0.360
[ 17] 12810 || B: 4.230 | C: 2.464 | M: 6.044 | S: 0.129 | T: 12.867 || ETA: 3 days, 6:48:49 || timer: 0.357
[ 17] 12820 || B: 3.991 | C: 2.453 | M: 5.686 | S: 0.128 | T: 12.259 || ETA: 3 days, 6:48:46 || timer: 0.358
[ 17] 12830 || B: 4.185 | C: 2.455 | M: 5.814 | S: 0.114 | T: 12.567 || ETA: 3 days, 6:48:32 || timer: 0.354
[ 17] 12840 || B: 4.152 | C: 2.443 | M: 5.731 | S: 0.099 | T: 12.424 || ETA: 3 days, 6:48:17 || timer: 0.358
[ 17] 12850 || B: 4.328 | C: 2.451 | M: 5.711 | S: 0.064 | T: 12.554 || ETA: 3 days, 6:48:10 || timer: 0.358
[ 17] 12860 || B: 4.382 | C: 2.445 | M: 5.515 | S: 0.063 | T: 12.404 || ETA: 3 days, 6:48:15 || timer: 0.357
[ 17] 12870 || B: 4.257 | C: 2.455 | M: 5.234 | S: 0.081 | T: 12.026 || ETA: 3 days, 6:48:15 || timer: 0.353
[ 17] 12880 || B: 4.149 | C: 2.469 | M: 5.083 | S: 0.081 | T: 11.782 || ETA: 3 days, 6:48:12 || timer: 0.357
[ 17] 12890 || B: 4.106 | C: 2.469 | M: 4.845 | S: 0.086 | T: 11.506 || ETA: 3 days, 6:48:12 || timer: 0.359
[ 17] 12900 || B: 4.060 | C: 2.466 | M: 4.778 | S: 0.116 | T: 11.420 || ETA: 3 days, 6:48:11 || timer: 0.355
[ 17] 12910 || B: 3.959 | C: 2.472 | M: 4.964 | S: 0.124 | T: 11.519 || ETA: 3 days, 6:48:11 || timer: 0.355
[ 17] 12920 || B: 4.053 | C: 2.493 | M: 5.045 | S: 0.146 | T: 11.738 || ETA: 3 days, 6:48:03 || timer: 0.355
[ 17] 12930 || B: 4.230 | C: 2.499 | M: 5.152 | S: 0.145 | T: 12.026 || ETA: 3 days, 6:47:59 || timer: 0.360
[ 17] 12940 || B: 4.517 | C: 2.501 | M: 5.243 | S: 0.146 | T: 12.408 || ETA: 3 days, 6:48:02 || timer: 0.357
[ 17] 12950 || B: 4.292 | C: 2.520 | M: 5.012 | S: 0.205 | T: 12.029 || ETA: 3 days, 6:47:58 || timer: 0.357
[ 17] 12960 || B: 4.167 | C: 2.512 | M: 5.189 | S: 0.228 | T: 12.096 || ETA: 3 days, 6:47:54 || timer: 0.358
[ 17] 12970 || B: 4.392 | C: 2.516 | M: 5.241 | S: 0.213 | T: 12.363 || ETA: 3 days, 6:47:48 || timer: 0.357
[ 17] 12980 || B: 4.339 | C: 2.501 | M: 5.229 | S: 0.214 | T: 12.283 || ETA: 3 days, 6:47:57 || timer: 0.358
[ 17] 12990 || B: 4.413 | C: 2.495 | M: 5.140 | S: 0.218 | T: 12.266 || ETA: 3 days, 6:48:00 || timer: 0.356
[ 17] 13000 || B: 4.544 | C: 2.484 | M: 4.857 | S: 0.192 | T: 12.077 || ETA: 3 days, 6:47:55 || timer: 0.355
[ 17] 13010 || B: 4.561 | C: 2.477 | M: 4.672 | S: 0.217 | T: 11.927 || ETA: 3 days, 6:47:55 || timer: 0.357
[ 17] 13020 || B: 4.617 | C: 2.474 | M: 4.626 | S: 0.198 | T: 11.915 || ETA: 3 days, 6:47:59 || timer: 0.358
[ 17] 13030 || B: 4.330 | C: 2.474 | M: 4.454 | S: 0.199 | T: 11.457 || ETA: 3 days, 6:48:02 || timer: 0.354
[ 17] 13040 || B: 4.154 | C: 2.482 | M: 4.343 | S: 0.202 | T: 11.180 || ETA: 3 days, 6:48:02 || timer: 0.355
[ 17] 13050 || B: 4.306 | C: 2.458 | M: 4.562 | S: 0.264 | T: 11.590 || ETA: 3 days, 6:48:10 || timer: 0.358
[ 17] 13060 || B: 4.233 | C: 2.463 | M: 4.518 | S: 0.241 | T: 11.455 || ETA: 3 days, 6:48:01 || timer: 0.358
[ 17] 13070 || B: 3.974 | C: 2.460 | M: 4.445 | S: 0.267 | T: 11.146 || ETA: 3 days, 6:47:55 || timer: 0.361
[ 17] 13080 || B: 3.946 | C: 2.459 | M: 4.613 | S: 0.262 | T: 11.281 || ETA: 3 days, 6:47:59 || timer: 0.357
[ 17] 13090 || B: 3.916 | C: 2.463 | M: 4.699 | S: 0.254 | T: 11.331 || ETA: 3 days, 6:48:01 || timer: 0.356
[ 17] 13100 || B: 3.617 | C: 2.444 | M: 4.736 | S: 0.246 | T: 11.043 || ETA: 3 days, 6:47:56 || timer: 0.357
[ 17] 13110 || B: 3.743 | C: 2.442 | M: 5.006 | S: 0.211 | T: 11.403 || ETA: 3 days, 6:47:58 || timer: 0.356
[ 17] 13120 || B: 3.750 | C: 2.436 | M: 5.360 | S: 0.205 | T: 11.751 || ETA: 3 days, 6:48:03 || timer: 0.357
[ 17] 13130 || B: 3.877 | C: 2.423 | M: 5.334 | S: 0.231 | T: 11.866 || ETA: 3 days, 6:48:04 || timer: 0.356
[ 17] 13140 || B: 3.743 | C: 2.425 | M: 5.388 | S: 0.228 | T: 11.783 || ETA: 3 days, 6:48:00 || timer: 0.358
[ 17] 13150 || B: 3.511 | C: 2.420 | M: 5.206 | S: 0.108 | T: 11.244 || ETA: 3 days, 6:47:55 || timer: 0.354
[ 18] 13160 || B: 3.625 | C: 2.424 | M: 5.388 | S: 0.124 | T: 11.560 || ETA: 3 days, 6:50:44 || timer: 0.359
[ 18] 13170 || B: 3.746 | C: 2.428 | M: 5.483 | S: 0.102 | T: 11.760 || ETA: 3 days, 6:50:49 || timer: 0.358
[ 18] 13180 || B: 3.817 | C: 2.440 | M: 5.298 | S: 0.111 | T: 11.665 || ETA: 3 days, 6:50:39 || timer: 0.356
[ 18] 13190 || B: 3.805 | C: 2.447 | M: 5.404 | S: 0.127 | T: 11.783 || ETA: 3 days, 6:50:39 || timer: 0.360
[ 18] 13200 || B: 3.899 | C: 2.494 | M: 5.348 | S: 0.128 | T: 11.868 || ETA: 3 days, 6:50:53 || timer: 0.357
[ 18] 13210 || B: 3.804 | C: 2.495 | M: 5.139 | S: 0.129 | T: 11.566 || ETA: 3 days, 6:50:54 || timer: 0.357
[ 18] 13220 || B: 3.627 | C: 2.495 | M: 4.718 | S: 0.132 | T: 10.972 || ETA: 3 days, 6:50:40 || timer: 0.356
[ 18] 13230 || B: 3.515 | C: 2.516 | M: 4.471 | S: 0.108 | T: 10.610 || ETA: 3 days, 6:50:37 || timer: 0.359
[ 18] 13240 || B: 3.490 | C: 2.516 | M: 4.434 | S: 0.107 | T: 10.546 || ETA: 3 days, 6:50:41 || timer: 0.358
[ 18] 13250 || B: 3.711 | C: 2.534 | M: 4.487 | S: 0.125 | T: 10.856 || ETA: 3 days, 6:50:41 || timer: 0.358
[ 18] 13260 || B: 3.955 | C: 2.518 | M: 4.463 | S: 0.106 | T: 11.042 || ETA: 3 days, 6:50:42 || timer: 0.354
[ 18] 13270 || B: 4.028 | C: 2.522 | M: 4.476 | S: 0.109 | T: 11.135 || ETA: 3 days, 6:50:51 || timer: 0.359
[ 18] 13280 || B: 3.926 | C: 2.513 | M: 4.480 | S: 0.101 | T: 11.020 || ETA: 3 days, 6:50:48 || timer: 0.358
[ 18] 13290 || B: 3.781 | C: 2.493 | M: 4.353 | S: 0.084 | T: 10.710 || ETA: 3 days, 6:50:50 || timer: 0.356
[ 18] 13300 || B: 3.715 | C: 2.466 | M: 4.472 | S: 0.083 | T: 10.736 || ETA: 3 days, 6:50:44 || timer: 0.356
[ 18] 13310 || B: 3.626 | C: 2.461 | M: 4.440 | S: 0.085 | T: 10.612 || ETA: 3 days, 6:50:38 || timer: 0.357
[ 18] 13320 || B: 3.671 | C: 2.474 | M: 4.495 | S: 0.087 | T: 10.727 || ETA: 3 days, 6:50:29 || timer: 0.358
[ 18] 13330 || B: 3.743 | C: 2.463 | M: 4.461 | S: 0.097 | T: 10.764 || ETA: 3 days, 6:50:21 || timer: 0.357
[ 18] 13340 || B: 3.751 | C: 2.471 | M: 4.291 | S: 0.102 | T: 10.615 || ETA: 3 days, 6:50:13 || timer: 0.356
[ 18] 13350 || B: 3.877 | C: 2.460 | M: 4.501 | S: 0.084 | T: 10.921 || ETA: 3 days, 6:50:03 || timer: 0.355
[ 18] 13360 || B: 3.676 | C: 2.476 | M: 4.548 | S: 0.094 | T: 10.794 || ETA: 3 days, 6:49:55 || timer: 0.356
[ 18] 13370 || B: 3.430 | C: 2.475 | M: 4.317 | S: 0.103 | T: 10.325 || ETA: 3 days, 6:49:57 || timer: 0.358
[ 18] 13380 || B: 3.524 | C: 2.510 | M: 4.092 | S: 0.104 | T: 10.230 || ETA: 3 days, 6:50:03 || timer: 0.354
[ 18] 13390 || B: 3.721 | C: 2.527 | M: 4.112 | S: 0.109 | T: 10.468 || ETA: 3 days, 6:49:57 || timer: 0.357
[ 18] 13400 || B: 3.842 | C: 2.520 | M: 4.090 | S: 0.107 | T: 10.559 || ETA: 3 days, 6:49:57 || timer: 0.360
[ 18] 13410 || B: 4.212 | C: 2.557 | M: 4.283 | S: 0.105 | T: 11.157 || ETA: 3 days, 6:49:55 || timer: 0.358
[ 18] 13420 || B: 4.443 | C: 2.561 | M: 4.393 | S: 0.103 | T: 11.499 || ETA: 3 days, 6:49:58 || timer: 0.358
[ 18] 13430 || B: 4.454 | C: 2.556 | M: 4.536 | S: 0.093 | T: 11.640 || ETA: 3 days, 6:03:02 || timer: 0.355
[ 18] 13440 || B: 4.514 | C: 2.544 | M: 4.703 | S: 0.089 | T: 11.850 || ETA: 3 days, 6:03:04 || timer: 0.359
[ 18] 13450 || B: 4.237 | C: 2.526 | M: 4.542 | S: 0.086 | T: 11.391 || ETA: 3 days, 6:03:06 || timer: 0.358
[ 18] 13460 || B: 4.406 | C: 2.517 | M: 4.432 | S: 0.077 | T: 11.432 || ETA: 3 days, 6:03:03 || timer: 0.357
[ 18] 13470 || B: 4.534 | C: 2.518 | M: 4.628 | S: 0.056 | T: 11.737 || ETA: 3 days, 6:02:57 || timer: 0.356
[ 18] 13480 || B: 4.620 | C: 2.495 | M: 4.871 | S: 0.070 | T: 12.057 || ETA: 3 days, 6:02:55 || timer: 0.359
[ 18] 13490 || B: 4.506 | C: 2.489 | M: 4.917 | S: 0.131 | T: 12.042 || ETA: 3 days, 6:03:01 || timer: 0.357
[ 18] 13500 || B: 4.460 | C: 2.501 | M: 4.790 | S: 0.139 | T: 11.891 || ETA: 3 days, 6:02:55 || timer: 0.357
[ 18] 13510 || B: 4.200 | C: 2.473 | M: 4.525 | S: 0.137 | T: 11.335 || ETA: 3 days, 6:02:55 || timer: 0.356
[ 18] 13520 || B: 4.136 | C: 2.467 | M: 4.478 | S: 0.137 | T: 11.217 || ETA: 3 days, 6:02:52 || timer: 0.356
[ 18] 13530 || B: 4.161 | C: 2.479 | M: 4.594 | S: 0.154 | T: 11.387 || ETA: 3 days, 6:02:52 || timer: 0.358
[ 18] 13540 || B: 3.935 | C: 2.475 | M: 4.498 | S: 0.157 | T: 11.064 || ETA: 3 days, 6:02:52 || timer: 0.355
[ 18] 13550 || B: 4.013 | C: 2.479 | M: 4.456 | S: 0.160 | T: 11.107 || ETA: 3 days, 6:02:43 || timer: 0.354
[ 18] 13560 || B: 3.889 | C: 2.491 | M: 4.388 | S: 0.165 | T: 10.934 || ETA: 3 days, 6:02:35 || timer: 0.357
[ 18] 13570 || B: 3.705 | C: 2.480 | M: 4.153 | S: 0.190 | T: 10.529 || ETA: 3 days, 6:02:33 || timer: 0.356
[ 18] 13580 || B: 3.566 | C: 2.479 | M: 4.146 | S: 0.197 | T: 10.388 || ETA: 3 days, 6:02:29 || timer: 0.356
[ 18] 13590 || B: 3.570 | C: 2.484 | M: 4.070 | S: 0.154 | T: 10.278 || ETA: 3 days, 6:02:29 || timer: 0.355
[ 18] 13600 || B: 3.694 | C: 2.485 | M: 4.107 | S: 0.153 | T: 10.438 || ETA: 3 days, 6:02:29 || timer: 0.360
[ 18] 13610 || B: 3.701 | C: 2.491 | M: 4.319 | S: 0.154 | T: 10.665 || ETA: 3 days, 6:02:35 || timer: 0.358
[ 18] 13620 || B: 3.580 | C: 2.461 | M: 4.316 | S: 0.152 | T: 10.509 || ETA: 3 days, 6:02:32 || timer: 0.358
[ 18] 13630 || B: 3.632 | C: 2.447 | M: 4.437 | S: 0.131 | T: 10.647 || ETA: 3 days, 6:02:19 || timer: 0.354
[ 18] 13640 || B: 3.744 | C: 2.446 | M: 4.378 | S: 0.140 | T: 10.708 || ETA: 3 days, 6:02:26 || timer: 0.358
[ 18] 13650 || B: 3.661 | C: 2.444 | M: 4.271 | S: 0.146 | T: 10.523 || ETA: 3 days, 6:02:24 || timer: 0.358
[ 18] 13660 || B: 3.768 | C: 2.447 | M: 4.720 | S: 0.150 | T: 11.086 || ETA: 3 days, 6:02:28 || timer: 0.357
[ 18] 13670 || B: 4.226 | C: 2.485 | M: 5.642 | S: 0.129 | T: 12.481 || ETA: 3 days, 6:02:25 || timer: 0.354
/pytorch/aten/src/THCUNN/BCECriterion.cu:57: void bce_updateOutput_no_reduce_functor<Dtype, Acctype>::operator()(const Dtype *, const Dtype *, Dtype *) [with Dtype = float, Acctype = float]: block: [20,0,0], thread: [320,0,0] Assertion `*input >= 0. && *input <= 1.` failed.
[... the same assertion failure is repeated for dozens of other CUDA blocks/threads ...]
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCReduceAll.cuh line=327 error=59 : device-side assert triggered
Traceback (most recent call last):
File "train.py", line 382, in <module>
train()
File "train.py", line 257, in train
losses = criterion(out, wrapper, wrapper.make_mask())
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/administrator/Sheet_detection/yoloact_new/yolact/layers/modules/multibox_loss.py", line 180, in forward
losses['C'] = self.ohem_conf_loss(conf_data, conf_t, pos, batch_size)
File "/home/administrator/Sheet_detection/yoloact_new/yolact/layers/modules/multibox_loss.py", line 243, in ohem_conf_loss
loss_c = log_sum_exp(batch_conf) - batch_conf[:, 0]
File "/home/administrator/Sheet_detection/yoloact_new/yolact/layers/box_utils.py", line 279, in log_sum_exp
x_max = x.data.max()
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/THCReduceAll.cuh:327
st80739 | Based on the error message, it seems you are passing an invalid input (values outside [0, 1]) to your criterion.
This code shows the error:
import torch
import torch.nn as nn

criterion = nn.BCELoss()
output = torch.randn(10, 1)  # raw logits, not probabilities in [0, 1]
target = torch.randint(0, 2, (10, 1)).float()
loss = criterion(output, target)
> RuntimeError: Assertion `x >= 0. && x <= 1.' failed. input value should be between 0~1, but got -0.718591
loss = criterion(torch.sigmoid(output), target) # works |
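For numerical stability you can also skip the explicit sigmoid and use nn.BCEWithLogitsLoss, which fuses the sigmoid and the BCE computation and accepts raw logits directly:

criterion = nn.BCEWithLogitsLoss()
loss = criterion(output, target)  # works on the raw logits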
st80740 | I want to extract all the parameters of a certain BatchNorm2d layer from a pretrained model: epsilon, number of features, momentum, etc.
Then I will feed the extracted parameters into a new BatchNorm2d layer in a different model (let's call this the target model).
I don't think torch.save & torch.load will work here, because the pretrained model and the target model have different architectures.
How do I read out the parameters of a BatchNorm2d layer?
st80741 | You could get all attributes from the original batch norm layer simply by accessing them:
import torch.nn as nn

bn = nn.BatchNorm2d(3)
print(bn.eps)
print(bn.momentum)
print(bn.running_mean, bn.running_var)  # buffers holding the running statistics
...
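A minimal sketch of transferring these values into a layer of another model (assuming both layers have the same number of features; bn_src and bn_dst are hypothetical names):

import torch.nn as nn

bn_src = nn.BatchNorm2d(3)
bn_dst = nn.BatchNorm2d(3, eps=bn_src.eps, momentum=bn_src.momentum)
bn_dst.load_state_dict(bn_src.state_dict())  # copies weight, bias, running_mean, running_var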
st80742 | The PyTorch tutorial has a BiLSTM-CRF example, but it doesn't use minibatches.
When I try to add minibatching to it, I find that the CRF part doesn't seem to support batches.
Also, does the CRF need to run on the CPU? That would be so slow!
Besides these, there are also some questions below:
How does PyTorch automatically deal with variable sequence lengths? Do I pad everything to the same length? But PyTorch is dynamic, right? (See the sketch after this list.)
I don't know why, but my PyTorch code is very slow and GPU utilization is low.
Maybe all these questions are caused by the combination of batching and the CRF.
How can the CRF read a batch and run on the GPU?
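For the variable-length part specifically, the usual pattern is to pad to a common length and then pack before the LSTM (a minimal sketch; the sizes below are made up):

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

seqs = [torch.randn(5, 8), torch.randn(3, 8)]          # two sequences, feature dim 8
padded = pad_sequence(seqs, batch_first=True)           # shape (2, 5, 8), zero-padded
packed = pack_padded_sequence(padded, [5, 3], batch_first=True)
lstm = torch.nn.LSTM(8, 16, batch_first=True, bidirectional=True)
out, _ = lstm(packed)                                   # padding is skipped inside the LSTM

The CRF transition scoring itself is not covered by this; batching it requires vectorizing the forward algorithm over the batch dimension.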
st80743 | Hi, I just started to use PyTorch and wanted to know if there's anything wrong with my approach.
I'll briefly explain the problem I have and how I have decided to solve it.
Neural network: M
Data: A, labels: a, loss: l_a (cross-entropy), gradient of loss w.r.t. parameters: G_a
Data: B, labels: b, loss: l_b (cross-entropy), gradient of loss w.r.t. parameters: G_b
Learning rate: lr
Algorithm I want to implement goes like this:
Forward-pass A to obtain M(A)
Compute l_a and correspondingly G_a using M(A)
Forward-pass B to obtain M(B)
Compute l_b and correspondingly G_b using M(B)
Some computations based on G_a and G_b to obtain G_c
Update M w.r.t. G_c
My solution:
G_a = []
G_b = []

M.zero_grad()
outputs = M(A)
loss_value_A = l_a(a, outputs)
loss_value_A.backward()
for f in M.parameters():
    G_a.append(f.grad.data.clone())  # clone, otherwise zero_grad() below zeroes these entries in place

M.zero_grad()
outputs = M(B)
loss_value_B = l_b(b, outputs)
loss_value_B.backward()
for f in M.parameters():
    G_b.append(f.grad.data.clone())

G_c = some_computation(G_a, G_b)

M.zero_grad()
for f, g in zip(M.parameters(), G_c):  # pair each parameter with its combined gradient
    f.data.sub_(lr * g)
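A sketch of an alternative that avoids mutating .grad at all, using torch.autograd.grad, which returns the gradients directly as a tuple:

G_a = torch.autograd.grad(l_a(a, M(A)), M.parameters())
G_b = torch.autograd.grad(l_b(b, M(B)), M.parameters())
G_c = some_computation(G_a, G_b)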
Thanks. |
st80744 | Hi guys, I can't seem to export my model to ONNX properly: when I convert the ONNX model back to CoreML, the app crashes when I run it on my iPhone. Does anyone know a way to do this properly? It's a DRN segmentation model.
import argparse
import os
import sys

import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image

from networks.drn_seg import DRNSeg
from utils.tools import *
from utils.visualize import *


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input_path", required=True, help="the model input")
    parser.add_argument(
        "--dest_folder", required=True, help="folder to store the results")
    parser.add_argument(
        "--model_path", required=True, help="path to the drn model")
    parser.add_argument(
        "--gpu_id", default='0', help="the id of the gpu to run model on")
    parser.add_argument(
        "--no_crop",
        action="store_true",
        help="do not use a face detector, instead run on the full input image")
    args = parser.parse_args()

    img_path = args.input_path
    dest_folder = args.dest_folder
    model_path = args.model_path
    gpu_id = args.gpu_id

    # Loading the model
    if torch.cuda.is_available():
        device = 'cuda:{}'.format(gpu_id)
    else:
        device = 'cpu'

    model = DRNSeg(2, None)
    state_dict = torch.load(model_path, map_location=device)
    model.load_state_dict(state_dict['model'])
    model.to(device)
    model.eval()

    # Data preprocessing
    tf = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    im_w, im_h = Image.open(img_path).size
    if args.no_crop:
        face = Image.open(img_path).convert('RGB')
    else:
        faces = face_detection(img_path, verbose=False)
        if len(faces) == 0:
            print("no face detected by dlib, exiting")
            sys.exit()
        face, box = faces[0]
    face = resize_shorter_side(face, 400)[0]
    face_tens = tf(face).to(device)

    with torch.no_grad():
        flow = model(face_tens.unsqueeze(0))[0].cpu().numpy()
        flow = np.transpose(flow, (1, 2, 0))
        h, w, _ = flow.shape

    modified = face.resize((w, h), Image.BICUBIC)
    modified_np = np.asarray(modified)
    reverse_np = warp(modified_np, flow)
    reverse = Image.fromarray(reverse_np)

    # Saving the results
    modified.save(
        os.path.join(dest_folder, 'cropped_input.jpg'),
        quality=90)
    reverse.save(
        os.path.join(dest_folder, 'warped.jpg'),
        quality=90)
    flow_magn = np.sqrt(flow[:, :, 0]**2 + flow[:, :, 1]**2)
    save_heatmap_cv(
        modified_np, flow_magn,
        os.path.join(dest_folder, 'heatmap.jpg'))
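For the export step itself, a minimal sketch with torch.onnx.export would look like this (the 400x400 input size and the opset version are assumptions; match them to your data and to what your onnx-coreml converter supports):

dummy = torch.randn(1, 3, 400, 400, device=device)  # assumed input shape
torch.onnx.export(
    model, dummy, 'drn_seg.onnx',
    input_names=['image'], output_names=['flow'],
    opset_version=9)  # pick an opset your CoreML conversion toolchain supports

If the converted model crashes on device, it is often worth validating the intermediate graph with onnx.checker.check_model before converting further.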
st80745 | When I use pack_padded_sequence and pad_packed_sequence to handle variable-length sequences, I have a problem: I can't print the result of the pad_packed_sequence output when batch_first is set to True.
[screenshot of the error output]
However, once I set batch_first in the pad_packed_sequence call to False, I can print the result correctly.
Thank you for your help. |
st80746 | Because I am a newcomer, I can only upload one image per post. Below is part of the code I wrote.
[screenshot of the code]
st80747 | The error message points to a non-contiguous tensor.
Call .contiguous() on your tensor before the .view() call.
Based on the error message, this should work:
start = [get_summarized_data(self[i]).contiguous().view(-1) for i in range(...)] |
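A minimal reproduction of this error and the fix (get_summarized_data comes from the poster's code; a plain transposed tensor shows the same behavior):

import torch

t = torch.arange(6).reshape(2, 3).t()   # the transpose makes the tensor non-contiguous
# t.view(-1)                            # would raise the RuntimeError asking for .contiguous()
flat = t.contiguous().view(-1)          # works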
st80748 | I am reading the code of batch normalization, and I found this at line 96:
f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
But I cannot find any library called _C. I do not know where torch._C._functions.BatchNorm comes from.
st80749 | Solved by fmassa in post #3
For completeness, the _C comes from here |
st80750 | Here is the C source for PyTorch https://github.com/pytorch/pytorch/tree/master/torch/csrc 4.2k
And the external libraries that perform the math computation can be found in https://github.com/pytorch/pytorch/tree/master/torch/lib 1.2k |
st80751 | hi, fmassa. I want to find the defination of torch._C.CudaFloatTensorBase class.But i didn’t find it in the torch/csrc/Module.cpp.Do you know where is it? Thanks. |
st80752 | @lmf we do these things via C macro expansions:
GitHub: pytorch/pytorch (Tensors and Dynamic neural networks in Python with strong GPU acceleration)
st80753 | Thanks for the two links, I tried a lot but I still could not find exactly where torch._C._functions.ConvNd come from, could you give me an exact link? Thanks a lot! |
st80754 | Hi
The module torch._C._functions is created here.
The ConvNd class is added on this line.
st80755 | Thanks @albanD
One more thing, and very important, is how do I understand this line of code addClass<ConvForward, ConvCtor>(module, ConvClass, "ConvNd", conv_forward_properties); , because I plan to learn C and understand the logic of each function in Torch Neural Network library.
I have to admit at this moment I have not learnt C language, so could you just give me a basic idea of what does this line of code do?
Thanks a lot! |
st80756 | @dl4daniel this is C++. Without knowing C or C++, it’s not easy to give you a basic idea of what’s going on. |
st80757 | Thanks for your reply! Could you give me a basic picture while assuming I am a C or C++ beginner, is it feasible? thanks |
st80758 | @smth @albanD @apaszke
Is it possible to debug from pytorch code into torch._C code?
I can use pdbpp to debug pytorch code and check all variable values, which is very convenient for learning. For example, when I want to see what is going on inside 'self.conv1(x)', I can step into
27 -> x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
and eventually I am taken to the following code, which is the edge between PyTorch's Python side and torch._C. I want to be able to continue debugging and check variable values inside torch._C code such as ConvNd below. Is it possible? If so, how could I do it? Thanks a lot
f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
           _pair(0), groups, torch.backends.cudnn.benchmark, torch.backends
return f(input, weight, bias)
st80759 | Hi,
Yes, you can do it, but since you are going into cpp land, you will need a cpp debugger.
If you have a cpp IDE, you may have a debugger in it.
To check stack traces and basic values without any graphical tool you can use gdb, but this is significantly less user friendly.
If you are not used to a cpp debugger, it may be simpler to just add prints into the function and recompile pytorch with your prints. You can find some cpp dev tips for pytorch here.
st80760 | @albanD thanks a lot!
You are very right about gdb; I spent several hours installing it and trying to get it to work for running a Python file, but I still failed to get it working properly.
I prefer a simple solution to my problem above: how to check or debug pytorch code all the way from Python code down to C or C++ code. If I understand you correctly, the simple solution for me is to pick a C/C++ IDE with a debugger. I have been using Atom for a while. Is it possible to make Atom a proper C/C++ IDE? If so, can I use it to check or debug pytorch code from Python to C and C++ in one go within Atom?
Right now, it seems I can compile and run C/C++ in Atom OK.
[screenshot: compiling and running a C file in Atom works]
However, I can’t just compile and debug my pytorch code as if it is a C/C++ file. Also, although it says “compile and debug”, I don’t see any features like gdb or pdb in atom. What should I do from here? Thanks a lot!
[screenshot: Atom fails to compile/debug the PyTorch Python file]
st80761 | I have never used Atom, so I can’t really help you here unfortunately. You may be able to find more information on google though |
st80762 | Thanks for your patience! one last question, put atom aside, for a c/c++ ide with debuger, we can use C/C++ debugger to debug a pytorch file like “neural_network_tutoral.py”, for both python code and C code like ConvNd under the hood, right? |
st80763 | I am not aware of any IDE that would be able to do that. I think you will have to have two different debuggers. |
st80764 | Thanks, so two debugers (one for python, one for C/C++) for the same pytorch tutorlal in python. This is interesting. |
st80765 | Thanks, could you give me a simple demo on how to debug C code in a numpy code example? |
st80766 | Well, you usually want to debug in C when you have segmentation faults, as you don’t have any information that was returned from C.
If you want to debug from python, you can use pdb 9.
Note: in most cases you don’t need to go into gdb to debug a python program, because the libraries were designed to give meaningful error messages and not segfault. In the numpy case that I will show below, I will use an unsafe numpy function to obtain the segfault, but that should not happen with normal numpy code. The same applies for pytorch, so if you observe segmentation faults, please let us know
If you want to go into debugging C, you can use gdb for example. Say you have a python script example.py that gives a problem. Here I’ll post an example in numpy, called example.py:
import numpy as np
from numpy.lib.stride_tricks import as_strided
a = np.array([1, 2, 3])
b = as_strided(a, shape=(2,), strides=(20,))
# accessing invalid memory, will segfault
b[1] = 1
If you try running it in python, it will give segmentation fault. Try running python example.py.
To run it on gdb, you can do something like
gdb python
and then, once it enters gdb, you do
run example.py
This will run the program and, when it crashes, it will give you the possibility to inspect where it crashed by running bt (from backtrace). You can find more information on gdb options online.
st80767 | I want to use the roi_pooling module defined in Fast R-CNN, as follows:
class AdaptiveMaxPool2d(Function):
    def __init__(self, out_w, out_h):
        super(AdaptiveMaxPool2d, self).__init__()
        self.out_w = out_w
        self.out_h = out_h

    def forward(self, input):
        output = input.new()
        indices = input.new().long()
        self.save_for_backward(input)
        self.indices = indices
        self._backend = type2backend[type(input)]
        self._backend.SpatialAdaptiveMaxPooling_updateOutput(
            self._backend.library_state, input, output, indices,
            self.out_w, self.out_h)
        return output

    def backward(self, grad_output):
        input, = self.saved_tensors
        indices = self.indices
        grad_input = grad_output.new()
        self._backend.SpatialAdaptiveMaxPooling_updateGradInput(
            self._backend.library_state, input, grad_output, grad_input,
            indices)
        return grad_input, None

def adaptive_max_pool(input, size):
    return AdaptiveMaxPool2d(size[0], size[1])(input)

def roi_pooling(input, rois, size=(7, 7), spatial_scale=1.0):
    assert(rois.dim() == 2)
    assert(rois.size(1) == 5)
    output = []
    rois = rois.data.float()
    num_rois = rois.size(0)
    rois[:, 1:].mul_(spatial_scale)
    rois = rois.long()
    for i in range(num_rois):
        roi = rois[i]
        im_idx = roi[0]
        im = input.narrow(0, im_idx, 1)[..., roi[2]:(roi[4] + 1), roi[1]:(roi[3] + 1)]
        output.append(adaptive_max_pool(im, size))
    return torch.cat(output, 0)
my code for calling the function is:
roi_pooling(myfeaturemap, rois, size=(1,1), spatial_scale=1.0/16)
where the type of myfeaturemap is torch.Tensor
but I got errors like the following:
File "/home/user/lr/twostage/roi_pooling.py", line 54, in roi_pooling
output.append(adaptive_max_pool(im, size))
File "/home/user/lr/twostage/roi_pooling.py", line 37, in adaptive_max_pool
return AdaptiveMaxPool2d(size[0], size[1])(input)
File "/home/user/lr/twostage/roi_pooling.py", line 21, in forward
self._backend = type2backend[type(input)]
File "/usr/local/lib/python2.7/dist-packages/torch/_thnn/__init__.py", line 15, in __getitem__
return self.backends[name].load()
KeyError: <class 'torch.Tensor'>
I don't have any ideas about the above error. Could anybody help me?
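As a possibly helpful note, and purely as an assumption: in recent PyTorch versions type(input) is torch.Tensor for every tensor, while the type2backend table is keyed by type-name strings, so older third-party code like this is often patched to index by the name instead:

self._backend = type2backend[input.type()]  # e.g. 'torch.cuda.FloatTensor' (hypothetical patch)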