st84868
|
That's strange, since the data loading time seems to be completely hidden behind the computation.
I just tried your code on my machine and simplified it a bit:
removed the test loop
used random inputs (torch.randn as data and torch.randint as target)
used an input batch of [20, 1, 100]
Using this I got a GPU utilization of 97% on a TITAN V GPU.
Does nvidia-smi show any utilization at all?
EDIT: Just saw your second post now.
Could you run nvidia-smi, as I’m not sure how the Windows task manager handles the GPU util and how accurate it is?
|
st84869
|
Of course, here it is:
Thu Jul 04 00:56:34 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.86 Driver Version: 430.86 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 WDDM | 00000000:01:00.0 On | N/A |
| N/A 68C P2 99W / N/A | 2088MiB / 8192MiB | 71% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1384 C+G Insufficient Permissions N/A |
| 0 3292 C+G ...hell.Experiences.TextInput.InputApp.exe N/A |
| 0 3512 C+G ...a\Local\Vivaldi\Application\vivaldi.exe N/A |
| 0 3812 C ...cts\DeepAndroid\venv\Scripts\python.exe N/A |
| 0 7228 C+G ... Files (x86)\Dropbox\Client\Dropbox.exe N/A |
| 0 8260 C+G ...11411.0_x64__8wekyb3d8bbwe\Video.UI.exe N/A |
| 0 9068 C+G Insufficient Permissions N/A |
| 0 9244 C+G C:\Windows\explorer.exe N/A |
| 0 9760 C+G ...5n1h2txyewy\StartMenuExperienceHost.exe N/A |
| 0 10128 C+G ...t_cw5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 10164 C+G Insufficient Permissions N/A |
| 0 10804 C+G ...dows.Cortana_cw5n1h2txyewy\SearchUI.exe N/A |
| 0 11948 C+G ...48.51.0_x64__kzf8qxf38zg5c\SkypeApp.exe N/A |
| 0 11972 C+G ....410.0_x64__8wekyb3d8bbwe\YourPhone.exe N/A |
| 0 12332 C+G ...DIA GeForce Experience\NVIDIA Share.exe N/A |
| 0 18804 C+G ....28.0_x64__8wekyb3d8bbwe\Calculator.exe N/A |
+-----------------------------------------------------------------------------+
|
st84870
|
Thank you very much! Finally, I'd like to ask a few questions, if you don't mind:
Is there a GUI to monitor this?
Why is the GPU memory usage for my process (the process with PID 3812) N/A?
Why is the type of my process C while others are C+G?
|
st84871
|
I'm not sure, but Windows might have some GUIs for it. Since I'm using Ubuntu, I'm used to these beautiful black terminals.
Might be some kind of permission issue. Could you try to run the process as an admin and see if that helps?
G = graphics, C = compute. Since you are using CUDA, only C should be shown. As far as I know, G will pop up if, e.g., DirectX is used (but I have really limited knowledge of this area).
|
st84872
|
Thank you very much, again, for your great contribution; much appreciated. Couldn't agree more: the terminal is what I miss most when not using Linux.
|
st84873
|
P.S. Running the process as the admin(istrator) did not help to show the GPU memory usage of the process. A small note for potential readers of this topic.
|
st84874
|
talhak:
Is there a GUI to monitor this?
One thing that could help is to use the small arrow right before one of the graphs (such as video encode) and change it to Compute_0, CUDA, or some other similar name that may exist on your system. The result should be similar to:
[Image: gpu_monit.PNG]
|
st84875
|
That is really helpful and exactly what I was looking for, thank you very much for your contribution. @fireis
|
st84876
|
Glad to help!
One addition: if you want something fancier, there is a proprietary (but free) tool called MSI Afterburner, which can provide more info, such as temperature and power usage.
|
st84877
|
Why is the GPU memory usage for my process (the process with PID 3812) N/A?
Maybe if you open CMD as administrator and run nvidia-smi, it could give you more information, since you have more privileges.
|
st84878
|
It seems that it is not possible to see the GPU memory usage with nvidia-smi while you have a display connected to the GPU in Windows, according to this Stack Overflow answer: https://stackoverflow.com/a/44228331
Another option seems to be an API (https://developer.nvidia.com/nvapi) that lets you see GPU info on Windows, but I have not tried it since I work on Ubuntu.
As a last resort, you could come to Linux, we have penguins and cookies!
|
st84879
|
Hi, I am a newbie in PyTorch and GANs, and I don't have much experience in Python (although I am a C/C++ programmer).
I have a simple tutorial DCGAN for generating fake images, and it was OK when I ran the code with DATASET_NAME = 'MNIST'. However, when I changed the dataset to 'CIFAR10', the program produced an error related to running_mean.
The code is as below
import torch.nn as nn

def weights_init(module):
    if isinstance(module, nn.Conv2d) or isinstance(module, nn.ConvTranspose2d):
        module.weight.detach().normal_(mean=0., std=0.02)
    elif isinstance(module, nn.BatchNorm2d):
        module.weight.detach().normal_(1., 0.02)
        module.bias.detach().zero_()
    else:
        pass

class View(nn.Module):
    def __init__(self, output_shape):
        super(View, self).__init__()
        self.output_shape = output_shape

    def forward(self, x):
        return x.view(x.shape[0], *self.output_shape)

class Generator(nn.Module):
    def __init__(self, dataset_name):
        super(Generator, self).__init__()
        act = nn.ReLU(inplace=True)
        norm = nn.BatchNorm2d
        if dataset_name == 'CIFAR10':  # Output shape 3x32x32
            model = [nn.Linear(100, 512 * 4 * 4), View([512, 4, 4]), norm(512), act]  # 4x4
            model += [nn.ConvTranspose2d(512, 256, 5, stride=2, padding=2, output_padding=1), norm(256), act]  # 8x8
            model += [nn.ConvTranspose2d(256, 128, 5, stride=2, padding=2, output_padding=1), norm(128), act]  # 16x16
            model += [nn.ConvTranspose2d(128, 3, 5, stride=2, padding=2, output_padding=1), nn.Tanh()]  # 32x32
        elif dataset_name == 'LSUN':  # Output shape 3x64x64
            model = [nn.Linear(100, 1024 * 4 * 4), View([1024, 4, 4]), norm(1024), act]  # 4x4
            model += [nn.ConvTranspose2d(1024, 512, 5, stride=2, padding=2, output_padding=1), norm(512), act]  # 8x8
            model += [nn.ConvTranspose2d(512, 256, 5, stride=2, padding=2, output_padding=1), norm(256), act]  # 16x16
            model += [nn.ConvTranspose2d(256, 128, 5, stride=2, padding=2, output_padding=1), norm(128), act]  # 32x32
            model += [nn.ConvTranspose2d(128, 3, 5, stride=2, padding=2, output_padding=1), nn.Tanh()]  # 64x64
        elif dataset_name == 'MNIST':  # Output shape 1x28x28
            model = [nn.Linear(100, 256 * 4 * 4), View([256, 4, 4]), norm(256), act]  # 4x4
            model += [nn.ConvTranspose2d(256, 128, 5, stride=2, padding=2), norm(128), act]  # 7x7
            model += [nn.ConvTranspose2d(128, 64, 5, stride=2, padding=2, output_padding=1), norm(64), act]  # 14x14
            model += [nn.ConvTranspose2d(64, 1, 5, stride=2, padding=2, output_padding=1), nn.Tanh()]  # 28x28
        else:
            raise NotImplementedError
        self.model = nn.Sequential(*model)

    def forward(self, x):
        return self.model(x)

class Discriminator(nn.Module):
    def __init__(self, dataset_name):
        super(Discriminator, self).__init__()
        act = nn.LeakyReLU(inplace=True, negative_slope=0.2)
        norm = nn.BatchNorm2d
        if dataset_name == 'CIFAR10':  # Input shape 3x32x32
            model = [nn.Conv2d(3, 128, 5, stride=2, padding=2, bias=False), act]  # 16x16
            model += [nn.Conv2d(128, 256, 5, stride=2, padding=2, bias=False), norm(128), act]  # 8x8
            model += [nn.Conv2d(256, 512, 5, stride=2, padding=2, bias=False), norm(256), act]  # 4x4
            model += [nn.Conv2d(512, 1, 4, stride=2, padding=2, bias=False), nn.Sigmoid()]  # 1x1
        elif dataset_name == 'LSUN':  # Input shape 3x64x64
            model = [nn.Conv2d(3, 128, 5, stride=2, padding=2, bias=False), act]  # 128x32x32
            model += [nn.Conv2d(128, 256, 5, stride=2, padding=2, bias=False), norm(128), act]  # 256x16x16
            model += [nn.Conv2d(256, 512, 5, stride=2, padding=2, bias=False), norm(256), act]  # 512x8x8
            model += [nn.Conv2d(512, 1024, 5, stride=2, padding=2, bias=False), norm(512), act]  # 1024x4x4
            model += [nn.Conv2d(1024, 1, 4), nn.Sigmoid()]  # 1x1x1
        elif dataset_name == 'MNIST':  # Input shape 1x28x28
            model = [nn.Conv2d(1, 64, 5, stride=2, padding=2, bias=False), act]  # 14x14
            model += [nn.Conv2d(64, 128, 5, stride=2, padding=2, bias=False), norm(128), act]  # 7x7
            model += [nn.Conv2d(128, 256, 5, stride=2, padding=2, bias=False), norm(256), act]  # 4x4
            model += [nn.Conv2d(256, 1, 4, bias=False), nn.Sigmoid()]  # 1x1
        else:
            raise NotImplementedError
        self.model = nn.Sequential(*model)

    def forward(self, x):
        return self.model(x)

if __name__ == '__main__':
    import os
    from torchvision.transforms import Compose, Normalize, Resize, ToTensor
    from torch.utils.data import DataLoader
    # from models import Discriminator, Generator, weights_init
    import torch
    import torch.nn as nn
    import matplotlib.pyplot as plt
    from time import time
    from tqdm import tqdm
    from torchvision.utils import save_image

    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    BETA1, BETA2 = 0.5, 0.99
    BATCH_SIZE = 16
    DATASET_NAME = 'CIFAR10'
    DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu:0')
    EPOCHS = 1
    ITER_REPORT = 10
    LATENT_DIM = 100
    LR = 2e-4
    N_D_STEP = 1
    ITER_DISPLAY = 500
    IMAGE_DIR = './GAN/checkpoints/' + DATASET_NAME + '/Image'
    MODEL_DIR = './GAN/checkpoints/' + DATASET_NAME + '/Model'

    if DATASET_NAME == 'CIFAR10':
        IMAGE_SIZE = 32
        OUT_CHANNEL = 3
        from torchvision.datasets import CIFAR10
        transforms = Compose([ToTensor(), Normalize(mean=[0.5], std=[0.5])])
        dataset = CIFAR10(root='./datasets', train=True, transform=transforms, download=True)
    elif DATASET_NAME == 'LSUN':
        IMAGE_SIZE = 64
        OUT_CHANNEL = 3
        from torchvision.datasets import LSUN
        transforms = Compose([Resize(64), ToTensor(), Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
        dataset = LSUN(root='./datasets/LSUN', classes=['bedroom_train'], transform=transforms)
    elif DATASET_NAME == 'MNIST':
        IMAGE_SIZE = 28
        OUT_CHANNEL = 1
        from torchvision.datasets import MNIST
        transforms = Compose([ToTensor(), Normalize(mean=[0.5], std=[0.5])])
        dataset = MNIST(root='./datasets', train=True, transform=transforms, download=True)
    else:
        raise NotImplementedError

    data_loader = DataLoader(dataset=dataset, batch_size=BATCH_SIZE, num_workers=0, shuffle=True)

    D = Discriminator(DATASET_NAME).apply(weights_init).to(DEVICE)
    G = Generator(DATASET_NAME).apply(weights_init).to(DEVICE)
    print(D, G)

    criterion = nn.BCELoss()
    optim_D = torch.optim.Adam(D.parameters(), lr=LR, betas=(BETA1, BETA2))
    optim_G = torch.optim.Adam(G.parameters(), lr=LR, betas=(BETA1, BETA2))

    list_D_loss = list()
    list_G_loss = list()
    total_step = 0

    st = time()
    for epoch in range(EPOCHS):
        for data in tqdm(data_loader):
            total_step += 1
            real, label = data[0].to(DEVICE), data[1].to(DEVICE)
            z = torch.randn(BATCH_SIZE, LATENT_DIM).to(DEVICE)
            fake = G(z)
            real_score = D(real)
            fake_score = D(fake.detach())
            D_loss = 0.5 * (criterion(fake_score, torch.zeros_like(fake_score).to(DEVICE))
                            + criterion(real_score, torch.ones_like(real_score).to(DEVICE)))
            optim_D.zero_grad()
            D_loss.backward()
            optim_D.step()
            list_D_loss.append(D_loss.detach().cpu().item())

            if total_step % ITER_DISPLAY == 0:
                # (BatchSize, Channel*ImageSize*ImageSize) --> (BatchSize, Channel, ImageSize, ImageSize)
                fake = fake.view(BATCH_SIZE, OUT_CHANNEL, IMAGE_SIZE, IMAGE_SIZE)
                real = real.view(BATCH_SIZE, OUT_CHANNEL, IMAGE_SIZE, IMAGE_SIZE)
                save_image(fake, IMAGE_DIR + '/{}_fake.png'.format(epoch + 1), nrow=4, normalize=True)
                save_image(real, IMAGE_DIR + '/{}_real.png'.format(epoch + 1), nrow=4, normalize=True)

            if total_step % N_D_STEP == 0:
                fake_score = D(fake)
                G_loss = criterion(fake_score, torch.ones_like(fake_score))
                optim_G.zero_grad()
                G_loss.backward()
                optim_G.step()
                list_G_loss.append(G_loss.detach().cpu().item())

                if total_step % ITER_REPORT == 0:
                    print("Epoch: {}, D_loss: {:.{prec}} G_loss: {:.{prec}}"
                          .format(epoch, D_loss.detach().cpu().item(), G_loss.detach().cpu().item(), prec=4))

    torch.save(D.state_dict(), '{}_D.pt'.format(DATASET_NAME))
    torch.save(G.state_dict(), '{}_G.pt'.format(DATASET_NAME))

    plt.figure()
    plt.plot(range(0, len(list_D_loss)), list_D_loss, linestyle='--', color='r', label='Discriminator loss')
    plt.plot(range(0, len(list_G_loss) * N_D_STEP, N_D_STEP), list_G_loss, linestyle='--', color='g',
             label='Generator loss')
    plt.xlabel('Iteration')
    plt.ylabel('Loss')
    plt.legend()
    plt.savefig('Loss.png')
    print(time() - st)
---------------------------------------------------------------------------------------------------------------------------------
The error seems to come from Discriminator.forward, as follows:
---------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-0e0e520d9b12> in <module>
71 fake = G(z)
72
---> 73 real_score = D(real)
74 fake_score = D(fake.detach())
75
C:\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
<ipython-input-1-5160dda25717> in forward(self, x)
87
88 def forward(self, x):
---> 89 return self.model(x)
C:\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
C:\Anaconda3\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94
C:\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
C:\Anaconda3\lib\site-packages\torch\nn\modules\batchnorm.py in forward(self, input)
81 input, self.running_mean, self.running_var, self.weight, self.bias,
82 self.training or not self.track_running_stats,
---> 83 exponential_average_factor, self.eps)
84
85 def extra_repr(self):
C:\Anaconda3\lib\site-packages\torch\nn\functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
1695 return torch.batch_norm(
1696 input, weight, bias, running_mean, running_var,
-> 1697 training, momentum, eps, torch.backends.cudnn.enabled
1698 )
1699
RuntimeError: running_mean should contain 256 elements not 128
Can anyone tell me what this error is about? It seems to come from the size setting of something in the model, but that's all I can guess.
Thank you in advance.
|
st84880
|
The number of features for an nn.BatchNorm2d layer should be equal to the number of output channels of the preceding conv layer.
Fix the model definition from e.g.
[nn.Conv2d(128, 256, 5, stride=2, padding=2, bias=False), norm(128), act]
to
[nn.Conv2d(128, 256, 5, stride=2, padding=2, bias=False), norm(256), act]
PS: I’ve formatted your code for better readability. If you would like to post code snippets, you can wrap your code in three backticks ```
|
st84881
|
Thank you very much! The problem seems to be solved (although I'm now suffering another hardware problem: after fixing the problem, the computer shuts down when I run the code).
|
st84882
|
Good to hear it's working now!
paradism:
After fixing the problem, the computer shuts down when I run the code
Could you create a new topic and describe this problem a bit?
It might be due to a weak PSU or maybe another hardware bug.
|
st84883
|
It seems that it was a PSU problem. The previous power supply capacity was 700W; I changed it to a 1200W unit, and the computer now seems to work well. I have a dual-GPU system, so maybe that was the problem. Anyhow, it seems to be solved. Thank you.
|
st84884
|
Hello,
I have two classifiers and two associated discriminator networks. While I can create instances of the classifier networks, I can't in the case of the discriminator networks: it generates "AttributeError: 'torch.device' object has no attribute '_apply'". I've seen several related posts, and I also checked my PyTorch version, which is 1.1.0. I don't know how to debug this issue. I've attached a screenshot of the Jupyter notebook and the Discriminator class. If needed, I can provide more detailed information. Can you please help me out?
FYI: The main difference between Classifier and Discriminator class is that the Discriminator network has CNN layers.
Thank you.
[Image: PyTorch Error.png (screenshot of the notebook and the Discriminator class)]
|
st84885
|
Solved by ptrblck in post #2
You are not creating an instance of the discriminator, i.e. you forgot the parentheses.
Try to call Discriminator().to(DEVICE) instead.
PS: It’s better to post the code directly by wrapping it in three backticks ``` instead of screenshots. This will allow the search to index it and allows us to co…
|
st84886
|
You are not creating an instance of the discriminator, i.e. you forgot the parentheses.
Try to call Discriminator().to(DEVICE) instead.
PS: It’s better to post the code directly by wrapping it in three backticks ``` instead of screenshots. This will allow the search to index it and allows us to copy/paste it in case we need to debug it.
|
st84887
|
Ah! Thank you for pointing out the mistake I made.
Next time I'll use three backticks to wrap up the code. Thanks again for the suggestion.
|
st84888
|
Say we have two features x1 and x2.
I want the neural network to give the same results when I switch the two features.
A non-strict way would be to augment the data by switching the two columns, but this won't guarantee symmetry.
The neural network could have multiple layers, and there could be other features on which I don't want to enforce any constraints.
Edit: My guess is I could create a layer purely for the two features, e.g. a Linear(1, 10), and use it to act on both of them, and use another linear layer to act on the other features. I could then concatenate or add at the next layer. Please give me suggestions if there is a better practice. Thanks!
|
st84889
|
I agree, I think the best way to go about this is to create a linear layer to act on just these two features and create a separate linear layer for the remaining features and then concatenate the output.
|
st84890
|
What is the final output you want? It is dependent on that as well. If you apply the same linear layer to the two features, there would be symmetry if the ordering of the output did not matter. What is the output that you are expecting from your model?
|
st84891
|
After I concatenate them, they will be passed through a few more layers to yield an output. The two features are not really symmetric in this case.
x1–a few layers–>f(x1)
x2–a few layers–>f(x2)
(f(x1),f(x2),x3,x4)—subsequent layers—>output
So when I switch x1 and x2, it becomes (f(x2),f(x1),x3,x4) which is different and will yield different final results.
|
st84892
|
Yes, if you have more linear layers after the initial ('symmetric') linear layer, you will lose symmetry. You will likely have to keep applying the same linear layer to the two features, and only combine the outputs at the very end.
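A minimal sketch of that idea (all layer sizes invented): apply one shared linear layer to x1 and x2 separately, and combine the two outputs with a symmetric operation such as a sum, so that swapping x1 and x2 cannot change the result:

import torch
import torch.nn as nn

class PartiallySymmetricNet(nn.Module):
    # Hypothetical sizes: x1 and x2 are scalar features, x_rest holds 8 others.
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(1, 10)  # the same weights see both x1 and x2
        self.rest = nn.Linear(8, 10)
        self.head = nn.Linear(20, 1)

    def forward(self, x1, x2, x_rest):
        h_sym = self.shared(x1) + self.shared(x2)  # the sum is invariant to swapping
        h_rest = self.rest(x_rest)
        return self.head(torch.cat([h_sym, h_rest], dim=1))

net = PartiallySymmetricNet()
out = net(torch.randn(4, 1), torch.randn(4, 1), torch.randn(4, 8))  # shape [4, 1]

Note that concatenating the two shared outputs instead of summing them would reintroduce the order sensitivity discussed above.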
|
st84893
|
Hi,
I’m working on an image classification problem, with 12 classes. I’d like to use test time augmentation by averaging the probabilities of each class over the augmented test images but I am uncertain as to the correct formula.
Currently, I just take the output of the final fc layer, treat it as unnormalized output, and apply a softmax across classes, interpreting the result as the class probabilities. This is what I average over, taking the argmax after averaging.
The loss function is cross-entropy, which involves log(softmax), and so my question is: am I interpreting the probabilities properly in my scheme above, or should I do something different?
Grateful for any advice.
|
st84894
|
The geometric mean of the 'probabilities' from the softmax layer provides the best result for this. If you've got the output of log-softmax, you can just take the (arithmetic) mean, as it's equivalent to the geometric mean of the non-log values. Then yes, take the argmax of either of those…
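As a small sketch of this (shapes invented: 8 augmentations of a batch of 4 images with 12 classes), the arithmetic mean of log-softmax outputs is the log of the geometric mean of the softmax probabilities, so both give the same argmax:

import torch
import torch.nn.functional as F

logits = torch.randn(8, 4, 12)  # [n_augmentations, batch, n_classes]

# arithmetic mean of log-probabilities == log of the geometric mean of probabilities
mean_log_probs = F.log_softmax(logits, dim=-1).mean(dim=0)  # [4, 12]
pred_geometric = mean_log_probs.argmax(dim=-1)

# arithmetic mean of the probabilities themselves, for comparison
pred_arithmetic = F.softmax(logits, dim=-1).mean(dim=0).argmax(dim=-1)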
|
st84895
|
rwightman:
The geometric mean of the 'probabilities' from the softmax layer provides the best result for this.
Hi Ross,
do you know if there is a fundamental argument for using the geometric mean rather than the arithmetic one, or is it more empirical?
If I assume that the prediction is a noisy estimate of the "true PD" with sufficiently nice noise, my intuition would lead me to an arithmetic estimate. Also, the geometric mean of probabilities doesn't sum to 1 (in general), so you'd need to re-normalize:
a = torch.randn(2,4)
p = a.softmax(1)
p_gm = (p[0]**0.5*p[1]**0.5)
print(p_gm.sum())
Best regards
Thomas
|
st84896
|
@tom My usage is based on empirical results; the geometric mean of the probabilities for ensembling predictions from a NN has produced better results for me on many occasions.
I searched for the theory once, as the intuition didn't sit well, especially when thinking about the case where one ensemble member has a near-0 output. In practice, though, it works. I suppose a 'try both' answer might be more appropriate.
In my theory search, the papers that provided some insight in the context of NN were about dropout
https://arxiv.org/abs/1312.6197
http://papers.nips.cc/paper/4878-understanding-dropout.pdf
|
st84897
|
I have an implementation of LeNet in PyTorch. It works completely fine with losses like MSE and MAE, but when I implement cross-entropy loss according to the documentation and this forum post (RuntimeError: multi-target not supported (newbie)), I get this error:
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'. The only reference to this error that exists on the internet is here: https://stackoverflow.com/questions/51818225/pytorch-runtimeerror-host-softmax-not-implemented-for-torch-cuda-longtensor, and I'm not doing what he did wrong according to the comments. Since the error was in computing softmax, I tried switching from cross-entropy loss to NLL loss to get rid of it, and I got this error: 'RuntimeError: nll_loss_forward is not implemented for type torch.cuda.LongTensor'
What stupid thing am I missing here? I've stared at it and all the documentation and posts for 4 hours and still can't see what I'm doing wrong.
import torch
import torch.utils.data
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)
        out = F.relu(self.conv2(out))
        out = F.max_pool2d(out, 2)
        out = out.view(out.size(0), -1)
        out = F.relu(self.fc1(out))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out

def train(model, device, train_loader, optimizer):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device, dtype=torch.int64)
        optimizer.zero_grad()
        output = model(data)
        loss = F.cross_entropy(output.argmax(dim=1, keepdim=True), target.argmax(dim=1, keepdim=True)).item()
        loss.backward()
        optimizer.step()

def accuracy(model, loader, device):
    model.eval()
    correct = 0
    with torch.no_grad():
        for (data, target) in loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            _, predicted = output.max(1)
            _, target = target.max(1)
            correct += (predicted == target).sum().item()
    return correct / len(loader.dataset)

def loss(model, loader, device):
    model.eval()
    loss = 0
    with torch.no_grad():
        for data, target in loader:
            data, target = data.to(device), target.to(device, dtype=torch.int64)
            output = model(data)
            loss += F.cross_entropy(output.argmax(dim=1, keepdim=True), target.argmax(dim=1, keepdim=True)).item()
    loss /= len(loader.dataset)
    return loss

def main():
    batch_size = 6000
    test_batch_size = 1000
    epochs = 50
    device = torch.device('cuda')
    train_data = torch.utils.data.TensorDataset(torch.load('trainImages.pt'), torch.load('trainLabels.pt'))
    train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True)
    test_data = torch.utils.data.TensorDataset(torch.load('testImages.pt'), torch.load('testLabels.pt'))
    test_loader = torch.utils.data.DataLoader(test_data, batch_size=test_batch_size, shuffle=True, num_workers=1, pin_memory=True)
    model = Net().to(device)
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    for epoch in range(1, epochs + 1):
        train(model, device, train_loader, optimizer)
        trainLoss = loss(model, train_loader, device)
        testLoss = loss(model, test_loader, device)
        trainAcc = accuracy(model, train_loader, device)
        testAcc = accuracy(model, test_loader, device)
        print(epoch, trainLoss, testLoss, trainAcc, testAcc)

main()
|
st84898
|
It just means that you are feeding the wrong format. The cross-entropy loss function you are using should receive a matrix containing the logits with dimensions num_examples x num_classes, i.e., one score per class. The targets, however, are a 1D long-type array with dimension num_examples only. (Currently you have 2D long vs 2D long, but with num_examples x 1 dimensions, which is why you get the error "not implemented for type torch.cuda.LongTensor".)
I have a step-by-step example notebook here if that helps with understanding the expected format: https://github.com/rasbt/stat479-deep-learning-ss19/blob/master/L08_logistic/code/cross-entropy-pytorch.ipynb
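A minimal sketch of the expected format (all values invented):

import torch
import torch.nn.functional as F

logits = torch.randn(6, 10)            # [num_examples, num_classes], raw scores; no argmax
targets = torch.randint(0, 10, (6,))   # [num_examples], class indices as a LongTensor
loss = F.cross_entropy(logits, targets)

# if the targets start out one-hot encoded, recover the indices first:
one_hot = torch.eye(10)[targets]       # fake one-hot targets for illustration
loss_same = F.cross_entropy(logits, one_hot.argmax(dim=1))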
|
st84899
|
Thanks a ton!
So to make sure I understand correctly: I shouldn't take the argmax of the output, and I should reshape the argmax of the target to (N,) instead of (N, 1)?
If so, how would I properly reshape the target? I tried .view(n,) and it doesn't appear PyTorch supports that.
|
st84900
|
Hey Justin, did you end up resolving this issue? I’ve also been searching online for a while, but I have not found a solution.
|
st84901
|
I switched to the other implementation of cross-entropy in PyTorch and everything has been fine. Still no clue what's up here, though.
|
st84902
|
Hi everyone,
How can I use univariate selection to select the best K features in PyTorch?
|
st84903
|
I have a problem where I train a fully connected NN, and once it is trained I gather new data and train again (with both the old and new data) a few times.
Do I need to reinitialize something in order for the model to learn correctly, such as the optimizer state or the model weights? I guess I would need to reinitialize an lr_scheduler if I used one, but I don't yet.
I use the Adam optimizer.
Thanks in advance!
(New to Pytorch)
|
st84904
|
Can you give some more information about the model, the data, and why you believe the model is not training properly?
|
st84905
|
I've posted this question on SO (https://stackoverflow.com/questions/56962116/bert-pytorch-to-onnx-conversion-lambda-error), but I thought this might be a better place for it. I'm trying to convert the PyTorch BERT implementation from https://github.com/codertimo/BERT-pytorch, but I'm hitting an error with the TransformerBlock:
builtins.ValueError: Auto nesting doesn't know how to process an input object of type bert_pytorch.model.transformer.TransformerBlock.forward.<locals>.<lambda>. Accepted types: Tensors, or lists/tuples of them
The code for the TransformerBlock is:
class TransformerBlock(nn.Module):
    """
    Bidirectional Encoder = Transformer (self-attention)
    Transformer = MultiHead_Attention + Feed_Forward with sublayer connection
    """

    def __init__(self, hidden, attn_heads, feed_forward_hidden, dropout):
        """
        :param hidden: hidden size of transformer
        :param attn_heads: head sizes of multi-head attention
        :param feed_forward_hidden: feed_forward_hidden, usually 4*hidden_size
        :param dropout: dropout rate
        """
        super().__init__()
        self.attention = MultiHeadedAttention(h=attn_heads, d_model=hidden)
        self.feed_forward = PositionwiseFeedForward(d_model=hidden, d_ff=feed_forward_hidden, dropout=dropout)
        self.input_sublayer = SublayerConnection(size=hidden, dropout=dropout)
        self.output_sublayer = SublayerConnection(size=hidden, dropout=dropout)
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x, mask):
        x = self.input_sublayer(x, lambda _x: self.attention.forward(_x, _x, _x, mask=mask))  # <-- Error!
        x = self.output_sublayer(x, self.feed_forward)
        return self.dropout(x)
Any workarounds?
|
st84906
|
Hello, below is my first linear regression model with PyTorch.
If I increase num_data to 1000, the loss becomes infinite…
I don't know why.
Any help would be appreciated so much!!
My Colab Link :
https://colab.research.google.com/drive/1oi9jZyb5TucwUERMSaVw1ZEVOt9uxrT4
|
st84907
|
Hi,
I am very new to deep learning and the PyTorch platform. I have started a project on image classification with a CNN, and for this reason I would like to load my custom dataset (a folder containing 4000 numpy files) in PyTorch. I used DatasetFolder as a reference, but I am unable to load my dataset.
Here is my code
import torch
import torchvision
from torch.utils.data import DataLoader, Sampler
from torchvision import datasets
from torchvision.transforms import transforms
import matplotlib.pyplot as plt
import numpy as np
DATA_TRAIN = '/home/mchow/Downloads/homework/data/pp'
def npy_loader(path):
    sample = torch.from_numpy(np.load(path))
    return sample

dataset = datasets.DatasetFolder(
    root='DATA_TRAIN',
    loader=npy_loader,
    extensions=['.npy']
)
But I got the following error, and I could not solve this problem:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/mchow/Downloads/pycharm-professional-2019.1.3/pycharm-2019.1.3/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/mchow/Downloads/pycharm-professional-2019.1.3/pycharm-2019.1.3/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/mchow/Downloads/Assignment 3.py", line 29, in <module>
extensions=['.npy']
File "/home/mchow/miniconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 75, in __init__
classes, class_to_idx = find_classes(root)
File "/home/mchow/miniconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 23, in find_classes
classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
FileNotFoundError: [Errno 2] No such file or directory: 'DATA_TRAIN'
I would be very grateful if anyone could help me out.
|
st84908
|
I'm on Ubuntu 14.04, with the latest version of pip and no CUDA, so I did
pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl
which gave the error
torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
|
st84909
|
Not sure if that's the problem in your case, but I had a similar issue (or a similar message) when I accidentally tried to install a package into the wrong environment (a Py 3.5 wheel into a 3.6 env). Have you checked that your instance has Python 2.7 in your env and that the default pip is the pip of that Python install?
|
st84910
|
I’m encountering the same issue on Ubuntu 14.04, python 2.7.13.
pip install https://s3.amazonaws.com/pytorch/whl/cu80/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl -v
torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 257, in run
    InstallRequirement.from_line(name, None))
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 168, in from_line
    raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
UnsupportedWheel: torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
|
st84911
|
@Nadav, are you on Ubuntu 32-bit? Are you on an ARM processor?
We don't support 32-bit platforms.
Can you tell me the output of:
uname -a
lsb_release -a
|
st84912
|
I get the same error with the same wheel, on a rhel6 system (ugh). I’m working in a virtualenv without root.
pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl
torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
Storing debug log for failure in /u/tsercu/.pip/pip.log
pip --version
pip 1.5.6 from /opt/share/Python-2.7.9/lib/python2.7/site-packages (python 2.7)
lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 6.6 (Santiago)
Release: 6.6
Codename: Santiago
|
st84913
|
I am getting the same error
Output of uname -a
Linux billy.local 3.13.0-66-generic #108-Ubuntu SMP Wed Oct 7 15:20:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Output of lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
|
st84914
|
Tom, if you are using RHEL6, I HIGHLY recommend Anaconda.
Also, can you post this file somewhere: "Storing debug log for failure in /u/tsercu/.pip/pip.log"
I will rebuild the pip wheels with manylinux this week so that they work on ALL machines.
|
st84915
|
pip.log has no extra info; its content is pasted below.
I just built from source, which is looking good. I will check out Anaconda in the future for a better install.
$ cat ~/.pip/pip.log
torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
File "/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/commands/install.py", line 257, in run
InstallRequirement.from_line(name, None))
File "/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/req.py", line 167, in from_line
raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
UnsupportedWheel: torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.
|
st84916
|
@Kalamaya yes. I expect the new wheel to solve your errors as well. Working on it now.
|
st84917
|
Hi everyone.
This should be fixed now for all of you.
The new command is available on the website, but pasting here for convenience:
pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-none-linux_x86_64.whl
pip install torchvision
|
st84918
|
I am facing the same problem on Ubuntu.
$ uname -a
Linux koparakesari 3.19.0-65-generic #73~14.04.1-Ubuntu SMP Wed Jun 29 21:05:22 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ sudo pip install torch-0.1.9.post2-cp35-cp35m-linux_x86_64.whl
torch-0.1.9.post2-cp35-cp35m-linux_x86_64.whl is not a supported wheel on this platform.
Help me please
|
st84919
|
@paarulakan you don't seem to have a Python 3.5-based install. To figure out your Python version, run these commands:
pip --version
python --version
|
st84920
|
@smth Hi, I use Python 3.6, so what is the alternative of https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl? Thank you!
|
st84921
|
I Googled it myself again, and I think it may be: https://s3.amazonaws.com/pytorch/whl/cu80/torch-0.1.8.post1-cp36-cp36m-linux_x86_64.whl.
|
st84922
|
Our website has a chooser for the wheels. Choose your appropriate options and the correct link is given:
http://pytorch.org/
|
st84923
|
Here is my custom VGG19 class
class EncoderCNN(nn.Module):
    def __init__(self, cnn_output_size):
        super(EncoderCNN, self).__init__()
        vggnet = models.vgg19(pretrained=True)
        self.vggmodel = vggnet.features
        self.linear = nn.Linear(100352, cnn_output_size)
        self.tanh = nn.Tanh()
        self.dropout = nn.Dropout(0.5)

    def forward(self, image_tensor):
        # INPUT = Image as Tensor
        features = self.vggmodel(image_tensor)
        features = features.view(-1, 512*14*14)
        features = self.linear(features)
        features = self.dropout(self.tanh(features))
        print(features.size())
        return features
When I build the model in main(), this is the structure I get:
EncoderCNN(
(vggmodel): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(17): ReLU(inplace)
(18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(24): ReLU(inplace)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): ReLU(inplace)
(27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(31): ReLU(inplace)
(32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(33): ReLU(inplace)
(34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(35): ReLU(inplace)
(36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(linear): Linear(in_features=100352, out_features=512, bias=True)
(tanh): Tanh()
(dropout): Dropout(p=0.5)
)
train_loader = torch.utils.data.DataLoader(d, batch_size=4,
                                           sampler=SubsetRandomSampler(train_indices),
                                           shuffle=False,
                                           num_workers=0, collate_fn=customBatchBuilder)
Each batch contains image, question, ques_len, and ans; the image tensor has shape 3 x 448 x 448.
Now, when I execute the below lines,
for i, batch in enumerate(train_loader):
    image, question, ques_len, ans = batch
    optimizer.zero_grad()
    img_emb = image_model(image)
I get this error:
TypeError: conv2d(): argument ‘input’ (position 1) must be Tensor, not tuple
Please help!
|
st84924
|
Could you print the type of image as it seems to be a tuple instead of a tensor?
Maybe your Dataset returns more than these 4 values.
|
st84925
|
It's returning a tuple.
This is my code for the custom batch builder:
def customBatchBuilder(samples):
    imgs_batch, question_batch, length_batch, ans_batch = zip(*samples)
    # Sort sequences based on length
    seqLengths = [len(seq) for seq in question_batch]
    sorted_list = sorted(zip(list(imgs_batch), question_batch, seqLengths, ans_batch), key=lambda x: -x[2])
    imgs_batch, question_batch, seqLengths, ans_batch = zip(*sorted_list)
    paddedSeqs = torch.nn.utils.rnn.pad_sequence(question_batch, batch_first=True)
    # Create tensor with padded sequences.
    print(type(imgs_batch))
    # print("##############")
    return imgs_batch, paddedSeqs, seqLengths, ans_batch
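A likely fix, though this is my assumption and not something confirmed later in the thread: zip(*sorted_list) hands back a plain Python tuple of per-sample image tensors, so stacking them into one batch tensor before returning gives the model an actual tensor:

# hypothetical fix: stack the tuple of [3, 448, 448] tensors into [batch, 3, 448, 448]
imgs_batch = torch.stack(list(imgs_batch), dim=0)
return imgs_batch, paddedSeqs, seqLengths, ans_batch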
|
st84926
|
Morning all,
I am reading in a 2048x420000 numpy array; this array is 420000 samples, each 2048 long.
I convert this np array into a tensor using torch.Tensor and then set up a single tensor, 1x2048 long, for the labels.
I use TensorDataset(X_train, y_train) to combine them (I think, as I understand it???), and this is then passed into a DataLoader with a batch size of 32.
This then gives me the error
RuntimeError: Expected 3-dimensional input for 3-dimensional weight 16 1, but got 2-dimensional input of size [32, 42000] instead
I'm confused: why isn't the tensor generated by the DataLoader [32, 2048]? It is using the size of the rows (42000) and not the columns (2048).
chaslie
|
st84927
|
Solved by chaslie in post #3
Further to this. Looking at the input vector, it has shape 32x42000 and not 32x2048 as intended; how do I change this?
|
st84928
|
The error indicates that the supplied input dimension and weight dimension do not match.
It is expecting a 3-dimensional input.
|
st84929
|
Further to this. Looking at the input vector, it has shape 32x42000 and not 32x2048 as intended; how do I change this?
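One way to get there (a sketch based on my reading of the thread, using a smaller sample count so it runs quickly; 420000 in the original): transpose the array so each row is one sample, then add a channel dimension, since the conv layer expects a 3-dimensional [batch, channels, length] input:

import torch
from torch.utils.data import TensorDataset, DataLoader

n_samples = 1000                   # 420000 in the original post
x = torch.randn(2048, n_samples)   # as read from file: [sample_length, n_samples]
x = x.t().contiguous()             # -> [n_samples, 2048]: one sample per row
x = x.unsqueeze(1)                 # -> [n_samples, 1, 2048] for a Conv1d
y = torch.zeros(n_samples, dtype=torch.long)

loader = DataLoader(TensorDataset(x, y), batch_size=32)
batch_x, batch_y = next(iter(loader))
print(batch_x.shape)               # torch.Size([32, 1, 2048])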
|
st84930
|
I run DeepLabv3+ on the Cityscapes dataset; in training I run the following code:
for cur_step, (images, labels) in enumerate(train_loader):
    if scheduler is not None:
        scheduler.step()
    # images is an [8, 3, 512, 512] tensor, labels is an [8, 512, 512] tensor; 8 is the batch_size
    images = images.to(device, dtype=torch.float32)
    labels = labels.to(device, dtype=torch.long)
    print(np.unique(labels.cpu().numpy()))
    # N, C, H, W
    optim.zero_grad()
    # outputs is an [8, 20, 512, 512] tensor; 20 is the number of classes
    outputs = model(images)
    # criterion is cross-entropy loss
    loss = criterion(outputs, labels)
    loss.backward()
    optim.step()
In the above code, I got the following error:
[Image: IMG_0118.JPG (photo of the error message)]
I think the bug is in loss = criterion(outputs, labels), but I don't know how to fix it.
|
st84931
|
How did you initialize the criterion?
Note that 'mean' should be passed as a string.
Also, in older PyTorch versions (pre 1.0), you had to use 'elementwise_mean' as far as I remember.
If you are using one of the older versions, I would strongly recommend updating.
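For reference, a minimal sketch of the two spellings (the second only on pre-1.0 versions):

import torch.nn as nn

criterion = nn.CrossEntropyLoss(reduction='mean')               # current versions; 'mean' as a string
# criterion = nn.CrossEntropyLoss(reduction='elementwise_mean')  # pre-1.0 naming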
|
st84932
|
Hi Everyone,
I really hope you guys can help me out with my problem. For my Master's thesis I want to program an LSTM that predicts the remaining lifetime of a mechanical part, using time/vibration signals as input. To get a feeling for LSTMs, I'm trying to train one to predict a sine curve.
Here is My Code. My model doesn't train very well, hence the predictions are bad as well. The loss starts at 0.5 and decreases really fast after just 3 epochs. I tried the many-to-one approach, where my input is a seq_len tensor and my output is the next value after the sequence.
I've tried so many configurations: added LSTM layers, changed the hidden size in a range of 2-500, tried learning rates from 1e-06 to 1e-03, tried different sequence lengths and batch sizes, and used different optimizers, but it seems like nothing really works.
Does anyone have an idea what I can try, or does anyone see a mistake in my code? I would be really happy if someone could help me.
My latest tries look like this:
[Image: output while training]
[Image: prediction]
|
st84933
|
Hello there!
I don't know if it's still important, but I noticed that you used shuffle:
trainloader = DataLoader(Trainingset, batch_size, shuffle = True, drop_last=True, num_workers=0)
validloader = DataLoader(Validationset, batch_size, shuffle = True, drop_last=True, num_workers=0)
For an LSTM the order is important, and by using shuffle you've confused the LSTM. I changed shuffle to False and it works okay.
|
st84934
|
I want to implement an LSTM.
input_size: 10x512x7x7 (dtype: double)
output_size: 512x7x7 (dtype: double)
Do I need an embedding layer? I heard embedding is used to convert words to numbers…
Could you explain more about the embedding layer in LSTMs?
|
st84935
|
Bug
To Reproduce
Steps to reproduce the behavior:
import torch
from torch import nn

class SparseTest(nn.Module):
    def __init__(self):
        super(SparseTest, self).__init__()
        self.S = torch.sparse_coo_tensor(
            indices=torch.tensor([[0, 0, 1, 2], [2, 3, 0, 3]]),
            values=torch.tensor([1.0, 2.0, 1.0, 3.0]),
            size=[3, 4]).cuda()
        self.fc = nn.Linear(6, 4)

    def forward(self, x):
        self.S = self.S
        x = torch.spmm(self.S, x)
        x = x.reshape(-1)
        x = self.fc(x)
        return x

if __name__ == "__main__":
    X = torch.ones(4, 2, dtype=torch.float).cuda()
    y = torch.zeros(4, dtype=torch.float).cuda()
    sparseTest = SparseTest()
    sparseTest = sparseTest.cuda()
    sparseTest = torch.nn.DataParallel(sparseTest)  # whether to use DataParallel
    optimizer = torch.optim.Adam(sparseTest.parameters(), lr=0.001, weight_decay=0.00005)
    lossMSE = nn.MSELoss()
    with torch.set_grad_enabled(True):
        for i in range(10):
            x = sparseTest(X)
            optimizer.zero_grad()
            loss = lossMSE(x, y)
            loss.backward()
            optimizer.step()
            print("loss: {:.8f}".format(loss.item()))
Expected behavior
Without Dataparallel, i.e., # sparseTest = torch.nn.DataParallel(sparseTest)
$ python sparseTest.py
loss: 1.46217334
loss: 1.42986751
loss: 1.39802396
loss: 1.36665058
loss: 1.33575463
loss: 1.30534196
loss: 1.27541876
loss: 1.24598980
loss: 1.21705961
loss: 1.18863106
With Dataparallel, i.e., sparseTest = torch.nn.DataParallel(sparseTest)
$ python sparseTest.py
Traceback (most recent call last):
File "sparseTest.py", line 31, in <module>
x = sparseTest(X)
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/pai/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "sparseTest.py", line 15, in forward
x = torch.spmm(self.S, x)
RuntimeError: addmm: Argument #3 (dense): Expected dim 0 size 4, got 1
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: TITAN Xp
GPU 1: TITAN Xp
GPU 2: TITAN Xp
GPU 3: TITAN Xp
Nvidia driver version: 390.116
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2
/usr/local/cuda-9.0/lib64/libcudnn.so.7
Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] torch==1.0.1.post2
[pip3] torch-dct==0.1.5
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.2.2.post3
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.1.0 py3.6_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torch-dct 0.1.5 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
|
st84936
|
I have a dataset which has multiple target attributes.
Sample targets for 12 data points (4 attributes as target outputs, not one-hot encoded):
[Image: sample target matrix]
class Classifier(nn.Module):
    def __init__(self, input_nodes):
        super(Classifier, self).__init__()
        self.fc3 = nn.Linear(75000, 500)
        self.fc4 = nn.Linear(500, 200)
        self.fc5 = nn.Linear(200, 100)
        self.fc_out = nn.Linear(100, 4)
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        x = F.relu(self.fc3(x))
        x = self.dropout(x)
        x = F.relu(self.fc4(x))
        x = self.dropout(x)
        x = F.relu(self.fc5(x))
        x = self.dropout(x)
        x = self.fc_out(x)
        return x

criterion = nn.CrossEntropyLoss()

for epoch in range(n_epochs):
    running_loss = 0
    i = 0
    model.train()
    for data, label in trainloader:
        y_hat = model(data)  # Shape of y_hat = 6*4 (as 4 targets)
        loss = criterion(y_hat, label)  # Shape of label = 6*4
        # --> Gives error: RuntimeError: multi-target not supported at ..\aten\src\THNN/generic/ClassNLLCriterion.c:20
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
        train_losses.append(loss.item())
        running_loss += loss.item()
Error: RuntimeError: multi-target not supported at …\aten\src\THNN/generic/ClassNLLCriterion.c:20
How can I train and evaluate the multi-target classification problem?
Please help.
|
st84937
|
Hello Animesh!
Animesh_Kumar_Paul:
I have a dataset which has multiple target attributes.
Sample targets for 6 data points (4 attributes as target outputs: not one-hot encoded):
Just to confirm, are you really working with a "multi-target"
(multi-class) classification problem? I ask because the sample
targets you show in your .png image have either 0 (in just row 2)
or 1 (in the rest of the rows) of the labels set.
That is, even though you didn’t show such a row, could you have
a row (say, row 17) for which all four fields are set to 1?
If not, you have a multi-class (but not multi-label) classification
problem, and you should recast it as such, and (most likely)
use nn.CrossEntropyLoss as your loss function.
Error: RuntimeError: multi-target not supported at …\aten\src\THNN/generic/ClassNLLCriterion.c:20
How can I train and evaluate the multi-target classification problem?
The general consensus is that MultiLabelSoftMarginLoss
is the loss function to start with for a multi-label classification
problem.
Best.
K. Frank
|
st84938
|
Hi @KFrank,
Thanks for your help.
That is, even though you didn’t show such a row, could you have
a row (say, row 17) for which all four fields are set to 1?
Yes, I have some rows with all 1s and some rows with more than one 1 in them.
The targets can look like this:
[Image: sample targets]
|
st84939
|
Hi Prerna and Animesh!
Prerna_Dhareshwar:
I think you’re looking for BCEloss which can be found here.
You don’t want – in the typical case – BCELoss for classification
problems. This is because BCELoss requires the predictions fed
into it to be numbers in (0, 1) (probability-like numbers).
For example, Animesh’s output layer is a Linear
self.fc_out = nn.Linear(100, 4)
that will, in general, output numbers ranging from -infinity to
+infinity. You also don’t want to pass these outputs through
a Sigmoid (or Softmax) layer to map them to (0, 1) because
of the risk of overflow.
You want, instead, a loss function that takes logit-like
predictions (that run from -infinity to +infinity), such as
BCEWithLogitsLoss.
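A minimal sketch of what that looks like for Animesh's four-label setup (shapes assumed from the posts above):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(6, 4)              # raw fc_out outputs; no Sigmoid applied
targets = torch.empty(6, 4).random_(2)  # float multi-hot targets in {0, 1}
loss = criterion(logits, targets)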
MultiLabelSoftMarginLoss and BCEWithLogitsLoss
are essentially the same function, as Peter explains here:
What is the difference between BCEWithLogitsLoss and MultiLabelSoftMarginLoss
You are right. Both loss functions seem to return the same loss values:
x = Variable(torch.randn(10, 3))
y = Variable(torch.FloatTensor(10, 3).random_(2))
# double the loss for class 1
class_weight = torch.FloatTensor([1.0, 2.0, 1.0])
# double the loss for last sample
element_weight = torch.FloatTensor([1.0]*9 + [2.0]).view(-1, 1)
element_weight = element_weight.repeat(1, 3)
bce_criterion = nn.BCEWithLogitsLoss(weight=None, reduce=False)
multi_criterion = nn.MultiLabelSoftMarginLoss(weight=No…
Good luck.
K. Frank
|
st84940
|
I have a problem that I cannot explain, and I need an expert opinion, as it might be architecture-related.
So I define a fresh Xception-based model, which is the following (the Xception block is also provided):
[Image: SamplingSegmentation model definition]
[Image: Xception block definition]
I then run:
model = SamplingSegmentation(3, 1, 6).cuda()
out = model(torch.ones((5, 3, 64, 64)).cuda())
And the error is:
[Image: CUDA error traceback]
Does anyone know why this is happening? Printing the input to the SeparableConvolution tells me it is already on CUDA.
|
st84941
|
It seems you are using plain Python lists to store submodules in your model.
While this might work logically in your forward method, the submodules won’t be properly registered, so that e.g. model.parameters() won’t show these parameters.
Also, .to() or .cuda() calls won't have any effect on the submodules, as is the case here.
Instead you should use nn.ModuleList to add your modules, which will properly register them.
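A small sketch of the difference (module contents invented):

import torch.nn as nn

class BadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = [nn.Linear(8, 8) for _ in range(3)]  # plain list: not registered

class GoodNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(8, 8) for _ in range(3))

print(len(list(BadNet().parameters())))   # 0 -- .cuda()/.parameters() never see the layers
print(len(list(GoodNet().parameters())))  # 6 -- 3 weights + 3 biases are registered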
PS: It’s always better to post code snippets and wrap them in three backticks ```, so that this code will be searchable in the discussion board and we could copy it in case we would like to debug it
|
st84942
|
Thank you, your advice saved my day, I just enclosed the modules list inside a nn.ModuleList and the problem was fixed. I will keep your other note in mind the next time I post something.
|
st84943
|
Hi.
A colleague and I have been struggling with this problem for the last few days, and we are losing our sanity. Unfortunately, the conditions needed to recreate this problem are so rare and fragile that I have to attach a ~600MB data file to this post for people interested in reproducing it. This data file does not contain any personal or sensitive data; it's just some random Monte Carlo numbers generated by another program.
Anyway here is the problem:
We have a really simple model: 7 input neurons, going to 5 and then a single output. Everything is made with linear layers and sigmoid activations. In the example below, the model doesn't even need to be trained for this problem to appear.
When we evaluate this model on some large datasets (~10 million points), there are some inconsistencies between the CPU and GPU evaluations. In particular, the CPU is always right, but the GPU starts giving random results after about 2 million points.
Here are some plots that show this behavior:
a histogram of the difference between the CPU and GPU evaluations (you can see a peak at 0 where the two evaluations coincide);
a plot of the same difference (you can see that the two evaluations coincide for the first 2 million points and then it becomes noise).
[Image: plots.png]
If you want to reproduce these plots here is a minimal working code:
import torch, os, h5py
from torch import nn
import matplotlib.pyplot as plt

class Model(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_units):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_units, bias=False)
        self.fc2 = nn.Linear(hidden_units, output_dim, bias=False)

    def forward(self, x):
        x = torch.sigmoid(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))
        return x

# create the model
model = Model(7, 1, 5)

# read the data file
data = h5py.File(os.getcwd() + '/data.h5', 'r')
test_input = torch.Tensor((data['Data'])[()])

# shuffle the data
test_input = test_input[torch.randperm(test_input.size(0))]

# get indices for which variable 6 is between 350 and 450
idx1 = test_input[:, 6] > 350
idx2 = test_input[:, 6] < 450

# normalize data
norm_test_input = (test_input - test_input.mean(0)) / test_input.std(0)

# take only data between 350 and 450
test_input_cut = norm_test_input[idx1 * idx2]
print(test_input_cut.shape)

# --- #
# evaluate the model on CPU
print("CPU")
with torch.no_grad():
    print(model(test_input_cut)[-10:])
    print(model(test_input_cut[-10:]))
    cpu_results = model(test_input_cut)

# evaluate the model on GPU
print("GPU")
model.cuda()
test_input_cut = test_input_cut.cuda()
with torch.no_grad():
    print(model(test_input_cut)[-10:])
    print(model(test_input_cut[-10:]))
    gpu_results = model(test_input_cut)

# plot difference between the two evaluations
diff = (cpu_results - gpu_results.cpu()).squeeze().numpy()
plt.hist(diff, bins=30)
plt.show()

plx = range(cpu_results.size(0))
ply = (cpu_results.squeeze() - gpu_results.squeeze().cpu()).numpy()
plt.plot(plx, ply)
plt.show()
And here is the 600MB .h5 file (which needs to be put in the same folder as the Python code) for which this problem appears.
This behavior is really fragile in this example: changing the two numbers (350 and 450) even slightly makes everything work fine. For different (or randomly generated) data we couldn't find any "cut" that would make this appear, but from what we tried it doesn't look like there are any problems with the input data. In the real case, i.e. in the full project code, this seems to happen more consistently. We can't post the full source code, so this tiny, fragile example is the only thing we could come up with to reproduce the issue in a consistent way for a forum post.
Thanks everybody in advance for your help.
|
st84944
|
This seems to be an interesting problem, but unfortunately I couldn’t reproduce this issue.
I used your code and the provided data, but the differences on my system seem to be alright.
Also, printing the unique values for diff gives:
np.unique(diff, return_counts=True)
(array([-1.1920929e-07, -5.9604645e-08, 0.0000000e+00, 5.9604645e-08,
1.1920929e-07], dtype=float32),
array([ 765, 441331, 3331571, 101474, 2]))
which is in the expected floating point precision range.
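For reference, that step size is exactly single-precision machine epsilon, which you can query directly:
import torch

# eps for float32 is 2**-23 ≈ 1.19e-07, matching the differences above
print(torch.finfo(torch.float32).eps)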
I’m using a 1080Ti, 418.56 driver and CUDA10.1.
|
st84945
|
Hi, thanks for your answer. We tested this on two different machines and got the same result.
Machine 1:
GPU: GTX 1070
Pytorch: 1.1.0
Cuda: 9.0.36
Driver: 430.14
Machine 2:
GPU: GTX 1050
Pytorch: 1.0.1
Cuda: 9.0
Driver: 430.86
It's odd that you don't see the same behavior. Could it be related to CUDA 9 vs. CUDA 10?
|
st84946
|
I’m not sure, but it would be interesting to see, if updating to CUDA10 helps.
Would that be possible or are you stuck to CUDA9 on your machines?
|
st84947
|
I have updated the GTX 1050 machine to PyTorch 1.1.0 and CUDA 10, and now the problem seems to be gone, both in the example and in the full code. I wonder what the problem was, then.
|
st84948
|
I could reproduce this issue on my system with CUDA9.0.
I also debugged a little bit and found that the GPU outputs a constant value after 2097152 input values, which is exactly 2**21.
print(gpu_results[2**21:].shape)
> torch.Size([1777991, 1])
print(gpu_results[2**21:].unique())
> tensor([0.5127], device='cuda:0', grad_fn=<NotImplemented>)
I’m not sure, what’s going on, but maybe @ngimel might know some limitation regarding this magic number and CUDA9.
|
st84949
|
The 2**21 number makes me think that you are hitting https://github.com/pytorch/pytorch/pull/22034. The solution is to update to CUDA 9.2.
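As a quick sanity check after updating, a minimal sketch to print the CUDA and cuDNN versions your PyTorch build was compiled against:
import torch

print(torch.version.cuda)              # e.g. '9.2' after the update
print(torch.backends.cudnn.version())  # cuDNN build version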
|
st84950
|
In NumPy, there is a simple way to check which BLAS is being used:
numpy.show_config()
Is there a similar way in PyTorch?
I am experiencing abnormally slow performance of PyTorch on the CPU of a remote server, and I suspect PyTorch is not using BLAS.
So I am looking for ways to check:
if PyTorch is using BLAS;
which BLAS
Thanks in advance!
|
st84951
|
The binaries are all built with MKL, as you can verify by printing torch.__config__.show().
However, also be sure to check torch.__config__.parallel_info() as it could be that the number of threads is not properly set.
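For example (parallel_info assumes a recent enough PyTorch build):
import torch

print(torch.__config__.show())           # build info, including the BLAS/MKL backend
print(torch.__config__.parallel_info())  # OMP/MKL thread settings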
|
st84952
|
Thanks very much! On torch-1.0.1.post2, it shows
AttributeError: module 'torch' has no attribute '__config__'
|
st84953
|
Thanks. Just updated to ‘1.1.0’ and it shows a lot of information.
Question-1: do they literally mean the libs that PyTorch is linked to and will use at runtime?
Reason for the question: I observe a >100x slowdown on a remote server (where I am not an admin) compared to my personal laptop, so I suspect PyTorch is not really using the resources it should, and I need a way to verify that.
Question-2:
AttributeError: module 'torch.__config__' has no attribute 'parallel_info'?
|
st84954
|
Ah, my bad, I am using a nightly build; that function was probably added later. You can still check the number of threads using the usual POSIX facilities.
|
st84955
|
Thanks all the same! I used torch.get_num_threads() to see the number of threads being used, and found that this was the cause of the abnormal slowdown. When I set it back to OMP_NUM_THREADS = number of cores, performance returned to normal.
However, I was wondering:
What is the best practice for setting OMP_NUM_THREADS? # physical cores or logical cores? or neither?
Where is the best place to set it? I have two options:
2.1) set env vars OMP_NUM_THREADS when submitting jobs to cluster
2.2) set via torch.set_num_threads()
Not sure which way is better and why. Can you help?
|
st84956
|
HMEIatJHU:
What is the best practice for setting OMP_NUM_THREADS? # physical cores or logical cores? or neither?
I usually try with different values between those two numbers. But I am not really an expert…
HMEIatJHU:
Where is the best place to set it? I have two options:
2.1) set env vars OMP_NUM_THREADS when submitting jobs to cluster
2.2) set via torch.set_num_threads()
Not sure which way is better and why. Can you
Use torch.set_num_threads. PyTorch uses OMP, MKL, and a native thread pool (as well as TBB maybe). The function takes care of all of them. Not sure if the env flag will set all of them.
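A minimal sketch (the thread count here is an assumption; tune it for your machine):
import torch

torch.set_num_threads(8)        # e.g. the number of physical cores
print(torch.get_num_threads())  # verify the setting took effect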
|
st84957
|
SimonW:
HMEIatJHU:
What is the best practice for setting OMP_NUM_THREADS? # physical cores or logical cores? or neither?
I usually try with different values between those two numbers. But I am not really an expert…
I tried many options and found smaller values tend to give good performance in my case.
SimonW:
HMEIatJHU:
Where is the best place to set it? I have two options:
2.1) set env vars OMP_NUM_THREADS when submitting jobs to cluster
2.2) set via torch.set_num_threads()
Not sure which way is better and why. Can you
Use torch.set_num_threads. PyTorch uses OMP, MKL, and a native thread pool (as well as TBB maybe). The function takes care of all of them. Not sure if the env flag will set all of them.
Thanks for the advice!
|
st84958
|
I currently have a lot of code that looks like:
tensor[mask, :][..., i]
because, apparently, I can't mix masks and selection by dimension: tensor[mask, i] complains about the dimensions not working out, even though the mask has the shape of the tensor up to the last dimension.
Is there a better way to achieve what I want? (A minimal sketch of the pattern is below.)
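A minimal sketch of what I mean (shapes here are assumptions):
import torch

t = torch.randn(4, 5, 3)   # assumed shape
mask = t[..., 0] > 0       # boolean mask with the shape of t up to the last dim: [4, 5]
i = 2

out = t[mask, :][..., i]   # the two-step version: shape [K]
# t[mask, i] is what I would like to write instead, but it errors for me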
|
st84959
|
Is mask a uint8 or bool tensor?
If so, could you post a small example of what you would like to achieve?
|
st84960
|
When I saved the weights of a layer from a pretrained model into a dictionary, only some of the values are shown, but I need all of the weights. How can I see all of the numbers?
OrderedDict([('weight', tensor([[[[-3.4618e-02, -3.3801e-02, -2.2497e-02],
[-3.2275e-02, -2.8543e-02, -1.5026e-02],
[-3.1307e-02, -1.8263e-02, -1.1662e-02]],
[[-6.7787e-03, -5.1631e-03, -1.1498e-02],
[-3.1353e-03, -9.5856e-03, -1.1940e-02],
[-1.6923e-02, -1.5893e-02, -9.2058e-03]],
[[ 2.1721e-02, 1.6138e-03, -1.2762e-02],
[ 1.1541e-02, 3.9117e-03, -1.7966e-02],
[ 8.2566e-03, -7.2988e-03, -1.8711e-02]],
...,
[[ 1.1666e-02, 5.8356e-03, 5.1317e-03],
[ 2.7172e-02, 3.0096e-02, 2.2107e-02],
[ 5.0745e-02, 5.0426e-02, 5.2568e-02]],
[[ 1.5641e-02, 6.1531e-03, 1.0435e-02],
[-4.9628e-03, -1.0472e-02, -9.7399e-03],
[-1.3043e-02, -1.1895e-02, -8.3080e-03]],
[[-7.5200e-03, -8.6918e-03, -7.7642e-03],
[-2.1090e-02, -1.2527e-02, -1.4423e-02],
[-1.5833e-02, -1.0271e-02, -6.5119e-03]]],
[[[ 2.9262e-02, 2.4535e-02, 9.8333e-03],
[ 1.3875e-02, 1.1888e-02, 1.5868e-03],
[ 3.2187e-02, 2.7896e-02, 2.1428e-02]],
[[-1.2907e-02, -1.6465e-02, -1.0769e-02],
[-1.2730e-02, -8.0963e-04, -9.7610e-03],
[-1.7824e-02, -1.9745e-03, -6.0412e-03]],
[[ 7.3908e-03, 1.3782e-03, -2.7653e-03],
[ 2.2362e-02, 3.1297e-03, 7.8103e-03],
[-3.6186e-03, 4.7948e-03, -3.9560e-03]],
...,
[[ 5.8102e-03, 6.1981e-04, -9.2120e-04],
[-2.2606e-03, -2.2002e-02, -3.1097e-03],
[-9.4541e-03, -2.8003e-02, -2.2712e-02]],
[[ 2.0525e-03, -3.4167e-03, 1.0797e-02],
[ 8.3235e-03, -7.1854e-03, 8.9610e-03],
[ 1.1586e-02, -4.0099e-03, 1.9634e-02]],
[[ 7.4921e-03, -1.0478e-03, 1.5801e-02],
[ 7.9514e-04, -1.6881e-02, -2.4732e-04],
[ 1.5973e-02, 1.6192e-02, 3.7039e-03]]],
[[[-2.3103e-02, -2.4085e-02, -2.1704e-02],
[ 1.0276e-02, -7.1129e-03, 6.8848e-03],
[ 8.5694e-03, -1.9288e-03, -2.8843e-03]],
[[-1.3841e-02, -1.0492e-02, -2.4102e-02],
[-2.7612e-02, -2.5791e-02, -1.2960e-02],
[-1.5598e-03, 3.6200e-04, 2.1741e-02]],
[[ 2.7855e-02, 2.6277e-02, 2.2290e-02],
[ 2.4780e-02, 2.5150e-02, 2.5566e-03],
[-1.9022e-03, 1.0823e-02, -8.6423e-03]],
...,
[[-6.3089e-03, 1.8908e-03, 6.9918e-03],
[-6.9307e-03, -8.7947e-03, -7.2253e-03],
[-3.5385e-03, -4.9549e-03, -8.7845e-03]],
[[ 8.2309e-03, 1.8546e-04, -9.4308e-04],
[ 2.4796e-04, -1.3734e-03, -6.4844e-03],
[ 1.0330e-02, -6.3096e-03, 2.9899e-03]],
[[-3.2615e-03, 1.1206e-03, 2.0341e-03],
[ 3.0882e-03, -1.5877e-02, -3.2855e-03],
[-1.0955e-02, -5.5521e-03, -1.2025e-04]]],
...,
[[[-1.7462e-02, -3.8714e-02, -3.6843e-02],
[-2.3918e-02, -3.9357e-02, -2.3753e-02],
[-6.3624e-03, -1.7850e-02, 1.0370e-03]],
[[ 2.8356e-02, 1.4364e-02, 8.2898e-03],
[ 1.7873e-02, -1.1615e-03, 5.5851e-03],
[-1.2643e-03, -8.9554e-03, -3.0568e-03]],
[[ 5.1422e-03, 4.6503e-04, 9.0143e-03],
[ 1.9823e-02, 3.4737e-02, 2.8242e-02],
[ 3.2088e-02, 2.3840e-02, 2.2072e-02]],
...,
[[ 3.4983e-03, 9.9425e-03, 4.5399e-03],
[ 1.1027e-02, 4.5929e-03, 2.6449e-03],
[ 1.7532e-02, -1.0313e-03, -8.3937e-03]],
[[-1.0549e-02, -1.0171e-02, -5.7416e-03],
[-8.2740e-03, -8.3159e-03, -1.4377e-02],
[-1.1800e-02, -1.1000e-02, -1.6324e-03]],
[[-1.3096e-02, -3.1618e-02, -2.5536e-02],
[-4.8419e-03, -1.3772e-02, -1.3568e-02],
[-1.5098e-02, -8.9054e-03, -1.2804e-02]]],
[[[ 3.8384e-03, -6.2667e-03, -1.1664e-02],
[ 3.3901e-03, -1.1676e-02, -6.1101e-03],
[-1.7972e-02, -3.3468e-02, -9.4137e-03]],
[[ 1.0985e-03, 1.3939e-04, 5.5219e-03],
[-9.3755e-03, -3.6006e-03, -3.1602e-05],
[-2.7297e-03, -5.7176e-03, -3.4485e-03]],
[[ 1.9236e-02, 1.2842e-02, 8.1627e-03],
[ 1.5062e-02, 4.3942e-04, 1.0795e-03],
[ 6.7360e-03, 9.0229e-03, 2.8575e-03]],
...,
[[-7.0815e-03, -4.9577e-03, 1.6272e-03],
[ 2.0418e-03, -8.2440e-03, 8.4400e-03],
[ 6.3961e-03, -5.5046e-03, 1.0984e-03]],
[[ 9.5378e-04, -1.1439e-02, -3.1516e-03],
[ 1.0669e-03, -1.8538e-02, -1.2831e-02],
[ 5.5736e-03, -1.1039e-02, -1.3992e-02]],
[[-1.9465e-02, -1.2852e-02, -1.8318e-02],
[-2.6698e-02, -1.8011e-02, -1.0662e-02],
[-1.6479e-02, -1.4482e-02, -1.6194e-02]]],
[[[-2.3728e-02, -3.6695e-02, -3.7500e-02],
[-1.3462e-03, -3.9573e-04, 3.2643e-03],
[ 1.5174e-02, 2.6909e-02, 1.3840e-02]],
[[ 2.8536e-04, -1.5611e-02, -6.2689e-03],
[-4.9708e-04, -2.2292e-02, -2.0291e-02],
[ 1.2825e-02, -1.6755e-02, -2.1885e-02]],
[[ 8.5642e-03, 6.9444e-03, 9.6820e-03],
[ 3.2590e-03, 1.7821e-02, 7.3303e-03],
[ 1.2386e-02, 2.1082e-02, -7.5897e-03]],
...,
[[ 7.6673e-03, 1.3758e-02, 1.1945e-02],
[ 8.6567e-03, 1.0310e-02, -1.6190e-03],
[ 1.7062e-02, -6.1529e-03, -4.0647e-03]],
[[ 1.5687e-02, 3.6550e-03, -1.1310e-02],
[-1.4527e-04, 8.9713e-04, 2.0525e-03],
[-1.0805e-02, -5.5566e-03, 8.8173e-03]],
[[-1.4262e-02, -1.7638e-02, -6.3932e-03],
[-2.4702e-03, -1.2362e-02, -2.2834e-02],
[-4.4622e-03, -9.5870e-03, -1.7204e-02]]]], device='cuda:0'))
Thanks for any help
|
st84962
|
You could serialize them to a text file e.g. using numpy’s savetxt method, in case you need to access them directly.
May I ask what your use case is that you need to see all values?
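A minimal sketch of the savetxt route (the layer here is a stand-in, and savetxt expects a 2D array, hence the reshape):
import numpy as np
import torch.nn as nn

conv = nn.Conv2d(3, 64, 3)  # stand-in for the pretrained layer
w = conv.weight.detach().cpu().numpy()
# flatten each filter into one row so savetxt can write the array
np.savetxt('weights.txt', w.reshape(w.shape[0], -1))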
|
st84963
|
In that case, you could also try to use imshow from matplotlib, which might be clearer in case your tensor is huge.
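A minimal sketch (the layer is again a stand-in):
import matplotlib.pyplot as plt
import torch.nn as nn

conv = nn.Conv2d(3, 64, 3)  # stand-in for the pretrained layer
w = conv.weight.detach().cpu()
# show every filter as one row of a 2D map
plt.imshow(w.view(w.size(0), -1).numpy(), aspect='auto', cmap='viridis')
plt.colorbar()
plt.show()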
|
st84964
|
Training...
LR 1e-05
Train Epoch: 0 [0/1 (0%)] Loss 2.4733 (2.4733)
LR 1e-05
Train Epoch: 1 [0/1 (0%)] Loss 2.4733 (2.4733)
LR 1e-05
Train Epoch: 2 [0/1 (0%)] Loss 2.4733 (2.4733)
LR 1e-05
Train Epoch: 3 [0/1 (0%)] Loss 2.4733 (2.4733)
LR 1e-05
Train Epoch: 4 [0/1 (0%)] Loss 2.4733 (2.4733)
LR 1e-05
Any idea?
|
st84965
|
Could you post more of your code? It looks like something is wrong; the loss should decrease.
|
st84966
|
Excuse me! I don't understand this code. What is "data"? I have looked at the website but still don't understand. Please help me. Thank you so much.
|
st84967
|
(data, target) are whatever your train_data_loader returns per batch.
Go back to the code of your train_data_loader to see exactly what it returns.
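For illustration, a minimal sketch with random stand-in data (shapes and batch size are assumptions):
import torch
from torch.utils.data import DataLoader, TensorDataset

inputs = torch.randn(100, 7)          # 100 samples, 7 features
labels = torch.randint(0, 2, (100,))  # binary targets
loader = DataLoader(TensorDataset(inputs, labels), batch_size=10)

for data, target in loader:           # exactly the (data, target) pair above
    print(data.shape, target.shape)   # torch.Size([10, 7]) torch.Size([10])
    break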
|