st80568 | Your output tensor seems to be of shape [100, something, 4].
I suppose 100 is the batch size and 4 is the number of classes?
But what does the other dimension correspond to? |
st80569 | Thanks for your help.
Right, 100 would be the batch size and I have 4 classes.
A sample tensor consists of 224 values.
This is what a piece of the dataset looks like (first the measurement sample, then the label):
dataset[5]
(tensor([[0.0681, 0.0647, 0.0697, 0.0680, 0.0632, 0.0590, 0.0717, 0.0640, 0.0558,
0.0673, 0.0649, 0.0675, 0.0678, 0.0717, 0.0726, 0.0713, 0.0720, 0.0757,
0.0756, 0.0744, 0.0744, 0.0760, 0.0777, 0.0797, 0.0760, 0.0781, 0.0758,
0.0759, 0.0768, 0.0758, 0.0760, 0.0744, 0.0757, 0.0744, 0.0754, 0.0774,
0.0777, 0.0788, 0.0829, 0.0857, 0.0875, 0.0942, 0.1035, 0.1125, 0.1204,
0.1349, 0.1482, 0.1601, 0.1754, 0.1912, 0.2024, 0.2140, 0.2218, 0.2294,
0.2344, 0.2405, 0.2447, 0.2486, 0.2535, 0.2542, 0.2544, 0.2568, 0.2554,
0.2477, 0.2405, 0.2350, 0.2305, 0.2221, 0.2132, 0.2095, 0.2054, 0.2022,
0.1978, 0.1955, 0.1899, 0.1882, 0.1892, 0.1882, 0.1846, 0.1804, 0.1777,
0.1741, 0.1703, 0.1661, 0.1627, 0.1583, 0.1578, 0.1576, 0.1568, 0.1529,
0.1481, 0.1425, 0.1366, 0.1270, 0.1198, 0.1130, 0.1099, 0.1076, 0.1022,
0.0994, 0.0946, 0.0880, 0.0840, 0.0820, 0.0815, 0.0802, 0.0827, 0.0880,
0.0988, 0.1171, 0.1491, 0.1890, 0.2348, 0.2812, 0.3261, 0.3630, 0.3943,
0.4248, 0.4504, 0.4734, 0.4962, 0.5118, 0.5217, 0.5316, 0.5350, 0.5365,
0.5429, 0.5515, 0.5720, 0.5907, 0.5909, 0.5929, 0.6001, 0.6017, 0.6039,
0.6022, 0.6034, 0.6033, 0.5968, 0.5984, 0.5936, 0.5948, 0.5960, 0.5980,
0.6030, 0.6056, 0.6022, 0.5938, 0.5872, 0.5832, 0.5741, 0.5695, 0.5610,
0.5634, 0.5688, 0.5707, 0.5690, 0.5678, 0.5646, 0.5563, 0.5579, 0.5532,
0.5549, 0.5573, 0.5536, 0.5591, 0.5611, 0.5601, 0.5557, 0.5563, 0.5591,
0.5535, 0.5577, 0.5565, 0.5551, 0.5545, 0.5469, 0.5592, 0.5621, 0.5559,
0.5560, 0.5592, 0.5609, 0.5594, 0.5594, 0.5601, 0.5568, 0.5478, 0.5518,
0.5447, 0.5483, 0.5453, 0.5404, 0.5367, 0.5438, 0.5450, 0.5398, 0.5360,
0.5313, 0.5155, 0.5035, 0.4977, 0.4822, 0.4723, 0.4667, 0.4621, 0.4480,
0.4449, 0.4368, 0.4308, 0.4343, 0.4166, 0.4172, 0.4014, 0.3871, 0.3612,
0.3562, 0.3375, 0.3182, 0.2991, 0.3049, 0.2872, 0.2858, 0.2696]]),
tensor(1.)) |
st80570 | Okay, it seems that you have a phantom dimension that you need to get rid of.
The input tensor has shape [batch, 1, 224]; you need to squeeze it to [batch, 224] instead.
Instead of calling outputs = model(inputs), try:
outputs = model(inputs.squeeze(1)) |
st80571 | Hello,
I'm working on a semantic segmentation project in the PyTorch framework. I wrote example code for a UNet model with n_classes=1 and ran it on a Windows 10 PC. Everything was fine, but training took a lot of time because of a weak GPU. The env was created via conda:
python = 3.6.6
PIL = 5.4.1
pytorch = 1.0.1
So, I moved to a dedicated server with a 1080 Ti running Ubuntu 18.04 LTS. I created the same env and checked the package versions - all matched. After that I moved the source code and dataset to the dedicated server and ran it. But after the first epoch I get the following exception:
https://hastebin.com/medolataxa.sql
OSError: image file is truncated (28 bytes not processed)
I don't use truncated files. Everything was okay on Windows 10, but after the first epoch everything collapsed. (P.S.: allowing truncated files via LOAD_TRUNCATED_IMAGES doesn't solve the issue.)
It is also strange that running PIL.Image.open(<image_path>) recursively over the dataset on the dedicated server didn't throw any exception.
Does someone know how to fix it? |
st80572 | Maybe there were some issues moving these files to the server?
Could you try to load all images in a loop, store the index which gives you the error, and have a look at that particular file?
Something like this should give you the index:
for idx, (data, target) in enumerate(dataset):
    print(idx)
The error should then be related to idx+1. Depending on the Dataset you are using, you could try to get the corresponding image path and check the file manually. |
st80573 | Thank you @ptrblck for your reply.
I wrote a custom Dataset. Here is the code:
class DatasetLoader(Dataset):
    def __init__(self, X, y, input_transform=None, label_transform=None):
        self.data = X
        self.labels = y
        self.input_transform = input_transform
        self.label_transform = label_transform

    @staticmethod
    def load_dataset(data_dir: str):
        logger.debug(f"load_dataset: Loading dataset from {data_dir}")
        inputs_dir = f'{data_dir}/inputs'
        labels_dir = f'{data_dir}/labels'
        inputs = []
        for image_path in tqdm(glob.glob(inputs_dir + '/*')):
            image = Image.open(image_path)
            inputs.append(image)
        labels = []
        for image_path in tqdm(glob.glob(labels_dir + '/*')):
            label = Image.open(image_path).convert('L')
            labels.append(label)
        return inputs, labels

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        data = self.data[idx]
        if self.input_transform is not None:
            data = self.input_transform(data)
        label = self.labels[idx]
        if self.label_transform is not None:
            label = self.label_transform(label)
        return data, label
Also, I zipped the dataset folder, moved it to the dedicated server via scp, and checked the sha256sum. Everything matched. |
st80574 | Thanks for the code!
You could add a try ... except block around the loop where Image.open is called to see which image causes problems.
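Something like this minimal sketch could locate the offending file (inputs_dir is the directory from your dataset code; the explicit img.load() matters because Image.open alone is lazy and doesn't decode the file yet):
import glob
from PIL import Image

for image_path in glob.glob(inputs_dir + '/*'):
    try:
        img = Image.open(image_path)
        img.load()  # force a full decode; this is where truncation errors surface
    except OSError as e:
        print(f'failed on {image_path}: {e}')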
Maybe you are using different PIL versions, such that this issue was fixed on your local machine? |
st80575 | Actually, image loading works fine. I checked it in Jupyter. The calling code is shown below:
def load_datasets(data_dir, input_size, test_pct=0.2, eval_size=10):
    train_transform, test_transform, label_transform = create_transforms(input_size)
    X, y = DatasetLoader.load_dataset(data_dir)
    train_slice = round((1 - test_pct) * len(X))
    train_data = DatasetLoader(X[:train_slice], y[:train_slice],
                               input_transform=train_transform, label_transform=label_transform)
    test_data = DatasetLoader(X[train_slice:], y[train_slice:],
                              input_transform=test_transform, label_transform=label_transform)
    eval_data = DatasetLoader(X[-eval_size:], y[-eval_size:],
                              input_transform=test_transform, label_transform=label_transform)
    logger.debug(f"load_datasets: (train_data, test_data, eval_data) sizes = "
                 f"{len(train_data), len(test_data), len(eval_data)}")
    return train_data, test_data, eval_data
So, images get loaded before iterating over the trainloader. |
st80576 | The error message points to PIL.ImageFile.load, which is weird if image loading is not the issue.
Could you create a (small) code snippet to reproduce this error? |
st80577 | Wait a minute, I'll provide sample code for training and the exact line the exception is thrown from. |
st80578 | Here is my sample code for training:
with tensorboardX.SummaryWriter(log_dir=log_dir) as summary_writer:
    for epoch in range(epochs):
        epoch_train_loss = 0
        model.train()
        logger.debug(f"train: Running epoch {epoch + 1} out of {epochs}")
        for inputs, labels in tqdm(trainloader):
            inputs, labels = inputs.cuda(non_blocking=True), labels.cuda(non_blocking=True)
            outputs = model.forward(inputs)
            loss = criterion(outputs, labels)
            epoch_train_loss += loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step() |
st80579 | Also some utils:
def create_transforms(input_size):
    channel_means = [0.485, 0.456, 0.406]
    channel_stds = [0.229, 0.224, 0.225]
    train_tfms = transforms.Compose([transforms.Resize(input_size),
                                     transforms.ToTensor(),
                                     transforms.Normalize(channel_means, channel_stds)])
    test_tfms = transforms.Compose([transforms.Resize(input_size),
                                    transforms.ToTensor(),
                                    transforms.Normalize(channel_means, channel_stds)])
    mask_tfms = transforms.Compose([transforms.Resize(input_size),
                                    transforms.ToTensor()])
    return train_tfms, test_tfms, mask_tfms

def create_dataloaders(data_dir, input_size=256, test_pct=0.2, batch_size=64) -> (DataLoader, DataLoader, DataLoader):
    train_data, test_data, eval_data = load_datasets(data_dir, input_size, test_pct)
    trainloader = DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=6, pin_memory=True)
    testloader = DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=6, pin_memory=True)
    evalloader = DataLoader(eval_data, batch_size=1, shuffle=False, num_workers=6, pin_memory=True)
    return trainloader, testloader, evalloader

trainloader, testloader, evalloader = create_dataloaders(data_dir, test_pct=test_pct, batch_size=batch_size) |
st80580 | [screenshot of the stack trace, image.png 1014×604]
As you can see, the exception is thrown at idx=0 within the __iter__() and __next__() functions. |
st80581 | @ptrblck
I had an idea that JPG image compression differs between Win10 and Linux machines depending on the library. So I converted all JPG images to PNG on Win10 and sent the zipped folder to the Linux machine via scp. But again the same error. The first iteration over the trainloader is okay, but it looks like images get corrupted after reading, because when we try to read from the same trainloader again, we get a truncated image.
Any ideas? |
st80582 | Hmm, I tried some random things and found that on my Ubuntu server there were 6 instances of the Python process, because num_workers = 6. So I removed num_workers from the DataLoader creation and it worked.
Could you provide some insight into why this happened? @ptrblck
Best regards,
Alex. |
st80583 | I'm really not sure why PIL throws an error if you use multiple workers.
Using
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
did not solve the error, right?
Could you upload the image somewhere? I could try to reproduce this issue on my Ubuntu machine. |
st80584 | I use the LFW dataset, downloaded from here: http://vis-www.cs.umass.edu/lfw/part_labels/ |
st80585 | Thanks for the link.
I've downloaded the lfw_funneled dataset and it's running fine:
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

path = './lfw_funneled/'
dataset = datasets.ImageFolder(
    path,
    transform=transforms.ToTensor()
)

# Check Dataset
for image, target in dataset:
    print(target)

# Check DataLoader
loader = DataLoader(
    dataset,
    num_workers=6,
    shuffle=False)

for data, target in loader:
    print(target)
No PIL errors on my machine with 6 workers.
I'm using Ubuntu 18.04.1 LTS and PIL 5.4.1. |
st80586 | It's a clear case of a corrupted image (I've seen this error before).
Print the image filename as well, and inspect / redownload / delete the bad image. |
st80587 | I don’t get any errors for 5 epochs and still think it’s related to your image.
Did you try to redownload or delete the image as suggested? |
st80588 | Hi, my custom model looks like this:
mask_conv_model = models.resnet50(pretrained=True)
for param in mask_conv_model.parameters():
    param.requires_grad = False
num_ftrs = mask_conv_model.fc.in_features
mask_conv_model.fc = nn.Sequential(nn.Linear(num_ftrs, masked_output_ftrs), nn.ReLU(), nn.Dropout())
for name, param in mask_conv_model.named_parameters():
    if param.requires_grad == True:
        mask_params.append(param)
mask_conv_model_optimizer = optim.Adam(mask_params, lr=masked_learning_rate)
Then later on, I save my model:
torch.save({
    'epoch': epoch,
    'target_seq_len': hype['target_seq_len'],
    'input_seq_len': hype['input_seq_len'],
    'masked_output_ftrs': hype['masked_output_ftrs'],
    'mask_conv': dict_of_models['MASKED CONV MODEL'].state_dict(),
}, os.path.join(save_dir, "MASK_CONV.tar"))
Now, to load the model I have to write a lot of code:
mask_conv_model = models.resnet50(pretrained=True)
mask_checkpoint = torch.load(mask_dir)
epoch = mask_checkpoint['epoch']
masked_output_ftrs = mask_checkpoint['masked_output_ftrs']
num_ftrs = mask_conv_model.fc.in_features
mask_conv_model.fc = nn.Linear(num_ftrs, masked_output_ftrs)
mask_conv_model.load_state_dict(mask_checkpoint['mask_conv'])
Is there a one-line method of loading models? Something like this:
mask_checkpoint = torch.load(mask_dir)
mask_conv_model = mask_checkpoint.load_state_dict(mask_checkpoint['mask_conv']) |
st80589 | On Windows 10, I know you are supposed to do DataLoader-related tasks like the following:
def main():
    # do multiprocessing with the data loader here

if __name__ == '__main__':
    main()
However, I decided to move things around, and now I do all my training in a separate module that I import into my main.py.
This creates a problem when using DataLoader with num_workers > 0 on Windows 10: my entire script gets executed multiple times (this honestly still makes no sense to me; why is normal forking not possible on Windows?). So, now my flow of execution is:
# main.py
from train import train
# some preparatory code here
train(x)

##########################
##########################

# train.py
def train(x):
    # initial data processing
    loader = DataLoader(dataset=_DATASET, batch_size=batch_size, shuffle=False, num_workers=4)
    # data post processing
However, the above doesn’t work. I even tried adding the if-condition in main.py:
# main.py
from train import train
# some preparatory code here
if __name__ == '__main__':
    train(x)

##########################
##########################

# train.py
def train(x):
    # initial data processing
    loader = DataLoader(dataset=_DATASET, batch_size=batch_size, shuffle=False, num_workers=4)
    # data post processing
But the entire script still gets executed multiple times (same problem as before). Then I tried adding the if-condition inside train.py:
# main.py
from train import train
# some preparatory code here
train(x)

##########################
##########################

# train.py
def train(x):
    # initial data processing
    if __name__ == 'utils':
        loader = DataLoader(dataset=_DATASET, batch_size=batch_size, shuffle=False, num_workers=4)
        for i, (image_batch, image_batch_flipped) in enumerate(loader):
            masked_img_array.append(image_batch)
            masked_img_flipped_array.append(image_batch_flipped)
        for frame in data[start_index:end_index]:
            if frame['labels'] is not None:
                unmasked_img_array.append(frame['name'])
    # data post processing done outside the if-condition (is this correct?)
    fp = open(os.path.join(MASTER_ROOT_DIR, script_args['ROOT_DIR'], "bbox_files", "bboxes" + str(start_index) +
                           "to" + str(end_index)) + ".txt", "rb")
    list_of_dicts = pickle.load(fp)
    all_agents = []
    for dict in list_of_dicts:
        for agent in dict.keys():
            all_agents.append(agent)
    all_agents_unique = list(set(all_agents))
    print("TOTAL UNIQUE AGENTS " + str(len(all_agents_unique)))
    trainFile.write("TOTAL UNIQUE AGENTS " + str(len(all_agents_unique)) + "\n")
    print(len(masked_img_array))
    return all_agents_unique, list_of_dicts, masked_img_array, masked_img_flipped_array, unmasked_img_array
But now the dataloader doesn't load my data into the arrays masked_img_array, masked_img_flipped_array and unmasked_img_array.
So, should I just go back to the original method, or is there a way around it where you can do multiprocessing in a separate module?
Edit: Actually, after trying the recent code (I was using the wrong name in the if-condition) I again face the same problem as before, where the whole script gets executed multiple times. |
st80590 | # main.py
from train import train
# some preparatory code here
if __name__ == '__main__':
    train(x)

##########################
##########################

# train.py
def train(x):
    # initial data processing
    loader = DataLoader(dataset=_DATASET, batch_size=batch_size, shuffle=False, num_workers=4)
    # data post processing
This one should work. Could you please upload all your scripts so that I can have a look? |
st80591 | As mentioned in my post, this doesn’t work. However, it works in Google Colab when I use num_workers = -1 |
st80592 | The first question I would ask is how many GPUs you have.
For instance, if you have 8 GPUs, then torch.cuda.empty_cache() should be rephrased as something like this:
how_many_gpus = torch.cuda.device_count()
for _ in range(how_many_gpus):
    torch.cuda.set_device(_)
    torch.cuda.empty_cache()
What have you tried so far? |
st80593 | .empty_cache will only clear the cache if no references to the data are stored anymore. If you don't see any memory released after the call, you would have to delete some tensors first. |
st80594 | How can I clear the cache? I am using this: https://github.com/avinashpaliwal/Super-SloMo |
st80595 | I tried this and got:
TypeError: device_count() takes 0 positional arguments but 1 was given |
st80596 | What is wrong with this?
import torch
how_many_gpus = torch.cuda.device_count()
for _ in range(how_many_gpus):
    torch.cuda.set_device("cuda0")
    torch.cuda.empty_cache() |
st80597 | Mr_Tajniak:
I have NVIDIA GeForce GTX 1060 6GB
Did you mean to say you don't have multiple GPUs? Can you confirm? |
st80598 | Mr_Tajniak:
What is wrong with this?
Please check out the CUDA semantics document.
Instead of torch.cuda.set_device("cuda0"), I would use torch.cuda.set_device("cuda:0"), but in general the code you provided in your last update @Mr_Tajniak would not work for the case of multiple GPUs.
In case you have a single GPU (which I would assume based on your hardware), what @ptrblck said applies:
.empty_cache will only clear the cache if no references to the data are stored anymore. If you don't see any memory released after the call, you would have to delete some tensors first.
This basically means torch.cuda.empty_cache() clears the PyTorch cache area inside the GPU.
You can check out the size of this area with this code:
import torch
import gc

def p():
    c = torch.cuda.memory_cached()
    print(f'cached :{c}')
    a = torch.cuda.memory_allocated()
    print(f'allocated:{a}')
    f = torch.cuda.memory_cached() - torch.cuda.memory_allocated()
    print(f'free :{f}')

torch.cuda.empty_cache()
p()
r = torch.randn(1, 128).cuda()
p()
Out:
cached :0
allocated:0
free :0
cached :2097152
allocated:512
free :2096640
See how PyTorch allocated 2 MB of cache just for storing these 128 floats. If you del r and then call p(), the GPU memory will be free again.
If you have some objects you haven't deleted, make sure you delete them if they are not needed.
Why did PyTorch cache the memory in advance?
To reuse it later. This is the idea of the cache: we assume this precious resource will be used later. |
st80599 | @Mr_Tajniak, all I wanted to say is that you need to be smart about the variables that consume GPU memory.
I haven't tested the project you mentioned, but I just saw you created the issue.
You may expect feedback there. Check whether the OOM still occurs if you use a smaller video format, and check what the GPU memory requirements are based on the video format (frame size) you use. |
st80600 | I've noticed that the PyTorch implementation of KL divergence yields different results from the TensorFlow implementation. The results differ significantly (0.20 vs 0.14) and I was curious what the reason could be. Below you can find a small example. Any help will be more than appreciated.
import tensorflow as tf
import numpy as np
import torch
from torch.distributions.kl import kl_divergence
tf.enable_eager_execution()
preds = np.array([1.9417487e-03, 9.9999997e-10, 5.8252434e-03, 9.9999997e-10, 3.8834962e-03,
8.1553400e-02, 3.6893204e-01, 5.2427185e-01, 7.7669914e-03, 5.8252434e-03,
9.9999997e-10]).astype('float32')
labels = np.array([1.0362695e-02, 9.9999997e-10, 9.9999997e-10, 9.9999997e-10, 3.1088084e-02,
9.0673573e-02, 3.4974092e-01, 5.1036268e-01, 2.5906744e-03, 9.9999997e-10,
5.1813480e-03]).astype('float32')
preds_tf = tf.distributions.Categorical(probs=tf.convert_to_tensor(preds))
labels_tf = tf.distributions.Categorical(probs=tf.convert_to_tensor(labels))
tf_res = tf.distributions.kl_divergence(preds_tf, labels_tf)
preds_torch = torch.distributions.Categorical(probs=torch.from_numpy(preds))
labels_torch = torch.distributions.Categorical(probs=torch.from_numpy(labels))
torch_res = kl_divergence(preds_torch, labels_torch)
print(tf_res.numpy(), torch_res.item()) |
st80602 | Hi,
I have not read the distribution package source code, but from what I know of the C++ source code, I prefer using the torch.nn.functional.kl_div function to calculate the divergence.
github.com
pytorch/pytorch/blob/35fed93b1ef05175143f883c6f89f06c6dd9429b/aten/src/ATen/native/Loss.cpp#L71
  return apply_loss_reduction(output, reduction);
}

Tensor margin_ranking_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction) {
  auto output = (-target * (input1 - input2) + margin).clamp_min_(0);
  return apply_loss_reduction(output, reduction);
}

Tensor kl_div(const Tensor& input, const Tensor& target, int64_t reduction) {
  auto zeros = at::zeros_like(target);
  auto output_pos = target * (at::log(target) - input);
  auto output = at::where(target > 0, output_pos, zeros);
  return apply_loss_reduction(output, reduction);
}

Tensor kl_div_backward_cpu(const Tensor& grad, const Tensor& input, const Tensor& target, int64_t reduction) {
  auto grad_input = at::zeros_like(input);
  auto grad_expand = grad.expand_as(input);
  AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "kl_div_backward_cpu", [&]() {
    at::CPU_tensor_apply3<scalar_t, scalar_t, scalar_t>(
        grad_input,
Based on the source code, you should provide log-probabilities for the target distribution.
Notice that PyTorch uses kl_div like this: kl_div(b, a) for KL(a||b), so it means you need the following code to get the same result as TensorFlow:
preds_torch = torch.Tensor(preds)
labels_torch = torch.Tensor(labels)
out = F.kl_div(labels_torch.log(), preds_torch, reduction='sum')
print(out.item()) #0.2038460671901703
Also, it is equivalent to:
out = (preds_torch * (preds_torch / labels_torch).log()).sum()
print(out.item())
In the end, I am really not sure about the distribution package yet. I will check it out and let you know if you are interested.
Further reading:
KL-divergence between two multivariate gaussian
Hi,
Yes, this is the correct approach.
Just be aware that the input a should contain log-probabilities and the target b should contain probabilities.
https://pytorch.org/docs/stable/nn.functional.html?highlight=kl_div#kl-div
By the way, PyTorch use this approach:
[image]
https://pytorch.org/docs/stable/distributions.html?highlight=kl_div#torch.distributions.kl.kl_divergence
Good luck
Nik
github.com/pytorch/pytorch
Issue: kldiv loss formula (opened by skynbe on 2018-05-07, closed by soumith on 2018-05-07)
Does F.kl_div(a, b) mean KL(b||a), not KL(a||b)?
Good luck
Nik |
st80603 | @razvanc92
I just found the solution using distribution package too.
As I mentioned in the previous post, the target should be given as log-probs, so we must have these:
preds_torch = torch.distributions.Categorical(probs=torch.from_numpy(preds))
labels_torch = torch.distributions.Categorical(logits=torch.from_numpy(np.log(labels)))
torch_res = kl_divergence(preds_torch, labels_torch)
Note that for the target (labels_torch) we use logits, not probs, and also provide log(labels) rather than labels itself.
Good luck
Nik |
st80604 | Could you also help me with the differences between tf/pytorch and numpy? It seems to work fine when the input is 2d, but when the input has more than 2 dimensions it doesn't. For example, now I'm trying with a 4d array where the distributions are on the last axis. This is my implementation:
np.mean(np.sum(preds * np.log(preds / labels), axis=-1))
Thanks in advance. |
st80605 | @razvanc92 Sorry for the late reply, I was dealing with a bunch of problems.
To be frank with you, I could not get the same output for randomly generated numbers using either nn.kl_div or the formula itself. Can you post your last question as a separate topic?
And please mention me there too, so I can understand what is really happening there.
Maybe other experienced users could help us too. |
st80606 | I am trying to train a model for 1d data that has 500 features.
Unfortunately, since the common GAN implementations are for images,
I wasn't able to find a working version of a GAN for 1d data.
Does anyone know any public implementation?
Is there anything specific that I have to keep in mind?
I preprocessed the input data to the range -1 to 1 to match Tanh() and also tried different activation functions, but none seems to give me promising results…
Thank you for the time |
st80608 | I've reimplemented a toy example from this blog post a while ago and you can find the code here.
Since this code is quite old by now, you might need to change some details (e.g. swap data[0] for .item()).
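If it helps, here is a minimal sketch of how the generator and discriminator could look for 1 x d feature vectors (d, the latent size, and the layer widths are assumptions, not taken from the linked code):
import torch.nn as nn

d = 500       # number of features per sample
latent = 64   # size of the generator's noise input

generator = nn.Sequential(
    nn.Linear(latent, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, d),
    nn.Tanh(),        # matches inputs scaled to [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(d, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),     # real/fake probability for BCELoss
) |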
st80609 | Thank you so much.
I can try to adapt some of your approaches.
The only thing is that,
when I said 1d, I meant 1xd, where d is the number of features.
It seems like your implementation generates a single number.
I guess things should work out without much patching, right? |
st80610 | ljj7975:
I guess things should work out without much patching, right?
I hope so. If it’s not that easy and you get stuck somewhere, just let me know and I can try to help out. |
st80611 | I've implemented some basic code for GANs, with plots showing training progress. You can find it here:
https://github.com/mese79/GANs_experiments |
st80612 | Instead of working with image files such as .png and .jpg, I've extracted three variables u, v, w from a NETCDF file, with dimensions (hours, number of grid points in X, number of grid points in Y). The u, v, w components can be seen as three separate channels, just like an image has RGB channels. Afterwards, I transform all the components into tensors using torch.from_numpy, reshape them into (hours, channel, gridX, gridY), and concatenate them to get my data. Thus the tensor dimension is (745, 3, 128, 128). Furthermore, I have low resolution data of the form (745, 3, 64, 64).
Concatenating the tensors together like three RGB channels
HR_data = torch.cat((u_tensor,v_tensor,w_tensor), dim=1) # output = dim ( 745, 3, 128, 128)
LR_data = torch.cat((u_tensor_lr,v_tensor_lr,w_tensor_lr), dim=1) # output = dim ( 745, 3, 64, 64)
Normalizing
HR_data_norm = (HR_data - HR_data.mean()) / HR_data.std()
LR_data_norm = (LR_data - LR_data.mean()) / LR_data.std()
Creating training set
dataset_train = torch.utils.data.TensorDataset(LR_data_norm, HR_data_norm)
trainloader = torch.utils.data.DataLoader(dataset_train, batch_size=batchSize,
shuffle=True, num_workers=0)
I'm trying to fit my tensor data into a DCGAN, using this as a basis: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html. I'm treating my data just like images, as they are transformed into tensor form. I have set some hyperparameters such as:
nz = HR_data.shape[0] #size of generator input
ngf = 64 #size of feature maps in generator
ndf = 128 #size of feature maps in discriminator
batchSize = 64
However, when I try to train the discriminator with the real image, i.e. high resolution data, I get this error:
“ValueError: Target and input must have the same number of elements. target nelement (64) != input nelement (1600)”
which means that the binary cross entropy function needs the input and target to have the same number of elements. My target dim is 64 (I guess it's the batch size), and my output dim is 1600.
My question is therefore, how do I change the training loop and the DCGAN_example code to work with my data? I think there must be something wrong in my discriminator.
tl;dr: I want to train DCGAN with low and high resolution data. How to change DCGAN_example with TensorDataset instead of ImageFolder? |
st80613 | I'm facing a problem copying a tensor to the CPU.
I tested the same steps on V100 and P100 cards, in the same environment.
On the V100 machine, the .cpu() step only costs less than 0.01 s.
But on the P100 machine, this single step costs up to 5 seconds (depending on the length of the tensor; one dimension of length about 100,000).
Is that only about the GPU? I use CUDA 9 and PyTorch 1.0.0. |
st80614 | How are you testing?
Moving to CPU is potentially going to require synchronization, as most GPU operations are asynchronous. So it depends on whether the GPU result is actually there, and the time will often be due to other operations that aren't yet completed.
It's also likely to depend a fair bit on bus latency, which will depend on the connection method and other load on the system. Are they both NVLink or both PCIe? Performance will likely differ a lot between the two.
In general, though, you want to avoid waiting on CPU transfers. Not actually accessing the CPU tensor immediately generally results in not synchronising when you call .cpu() (though I would love clarification here), or you can explicitly do an asynchronous transfer to CPU (a search will find details, but the basics are to pin your destination host memory and use dest.copy_(src, non_blocking=True)). Though I think this was only enabled in PyTorch 1.2 due to a previous bug; that issue should also have or link to full code. |
st80615 | I've added the synchronize(); the result didn't change.
Here is the code I tested:
Both are PCIe. I tried using copy_() like:
y = torch.empty(audio.shape[0], device='cpu')
y.copy_(audio, non_blocking=True)
The time is almost the same as with .cpu(). |
st80616 | That code will likely be partly (perhaps mostly) measuring the time taken to do the actual processing in waveglow.infer, which would of course be expected to be faster on the V100. To measure just the copy you should synchronize before the copy, start timing, copy, and then sync again before taking the end time. As in:
torch.cuda.synchronize()
start = time.time()
audio = audio.cpu()
torch.cuda.synchronize()
elapsed = time.time() - start
Or you can use CUDA events to do the timing. Here you’d do:
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
audio = audio.cpu()
end.record()
torch.cuda.synchronize()
elapsed = start.elapsed_time(end)
The synchronize here is needed to retrieve the time; it is not actually related to the timing and can be done at a later point, so you don't have to slow down the processing with synchronize just to collect timings.
terryyizhong:
I tried using copy_() like:
Oh, I forgot to note that asynchronous copies can only happen on a non-default stream; they won't work on the default stream 0. By default all work in PyTorch occurs on stream 0, so asynchronous copies won't happen. If you use a non-default stream you also need to synchronize it with the default stream yourself, for example by recording an event on the default stream after issuing the work (i.e. calling the appropriate tensor methods, remembering they are asynchronous so work is not completed immediately), and then having the non-default stream wait on that event before copying. I won't give a full example here, as you need to understand the issues around synchronisation before applying this. There are examples of at least the PyTorch-specific parts of this, though they generally assume some familiarity with CUDA programming.
I believe you also need to pin the CPU memory you are copying to, either by calling pin_memory on an existing tensor, or by passing pin_memory when creating the tensor (though I'm not sure all methods of creation support this parameter).
And, as noted, asynchronous copies from GPU to CPU were only fixed in PyTorch 1.2, so unless you have upgraded they won't work.
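For orientation only, a minimal sketch of the pieces involved (not a full treatment of the synchronisation pitfalls; the tensor size is an assumption):
import torch

src = torch.randn(100000, device='cuda')      # produced by earlier GPU work
dst = torch.empty(100000, pin_memory=True)    # pinned CPU destination

copy_stream = torch.cuda.Stream()             # non-default stream
ready = torch.cuda.Event()
ready.record()                                # marks this point on the default stream

with torch.cuda.stream(copy_stream):
    copy_stream.wait_event(ready)             # don't copy before src is computed
    dst.copy_(src, non_blocking=True)
# ... do other work; synchronize copy_stream before actually reading dst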
I can confirm that, properly implemented, they can help: I reduced the impact of some work in a forward hook (which was slowing down the training loop) by taking care of all the above issues (non-default stream, pinned memory, synchronised streams, PyTorch 1.2). But they won't always help. If you go on to access items in the CPU tensor immediately after the copy, it will force synchronisation and performance won't increase. But I think just issuing PyTorch operations on the CPU tensor is fine as they are asynchronous (again, happy to be corrected by someone more knowledgeable here). |
st80617 | Instead of working with image files such as .png and .jpg, I've extracted three variables u, v, w from a NETCDF file, with dimensions (hours, number of grid points in X, number of grid points in Y). The u, v, w components can be seen as three separate channels, just like an image has RGB channels. Afterwards, I transform all the components into tensors using torch.from_numpy, reshape them into (hours, channel, gridX, gridY), and concatenate them to get my data. Thus the tensor dimension is (745, 3, 132, 132).
Now I want to plug it into an ESRGAN: https://github.com/xinntao/ESRGAN
The problem is, the dataloader uses an ImageSetFolder class and takes .png images as inputs. In PyTorch we usually use a dataset such as ImageFolder, which loads the images automatically. However, my data is already given in tensor form. What can I do from here? For practical use, the tensor data can be treated as RGB images. I'm thinking about maybe utilizing the TensorDataset class? Or just removing the whole dataloader and sampling from my dataset directly.
Help appreciated! |
st80618 | I’m not sure if the posted shape ([745, 3, 132, 132]) is a single sample or your complete dataset.
In the latter case, you could simply use a TensorDataset, pass it to a DataLoader, and remove the currently used Dataset from the code. |
st80619 | It is the latter case. But how will I change the code, as it expects me to pass some images in a folder in order to use dataroot? I also have to normalize it; I'm thinking about creating my own transform or something like that. I will keep you updated. |
st80620 | I’m not sure which script you are using, but instead of loading the images you would just replace these lines with your loaded tensors.
What kind of transformations would you like to apply? |
st80621 | By transformation I meant normalization, but I fixed that. I'm trying to fit my tensor data into a DCGAN, using this as a basis: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
However, I will make a new discussion thread, as I now have a clearer understanding of what I'm asking for, so you can gladly delete this discussion. |
st80622 | Hi,
I am trying to export a PyTorch model to ONNX and convert it with onnx2trt. When I run the command:
$ onnx2trt model.onnx model.trt
I get this error:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
Parsing model
ERROR: /root/build/TensorRT/parsers/onnx/ModelImporter.cpp:523 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports networks with an explicit batch dimension"
How could I export my ONNX model with an older version so that it is compatible with TensorRT? |
st80623 | The model is Tacotron2 GST,
almost the same as this: https://github.com/mozilla/TTS/blob/master/models/tacotrongst.py
I left training running overnight and found this:
https://i.imgur.com/59vuPyf.png
I have 2 thoughts on this:
1. try to track down which data it is; maybe it is corrupted.
2. skip the step and don't apply changes to the model if the loss spikes.
Are these good ideas? If so, what is the best way to do it? Maybe there are examples, or someone has already made a library for this?
Or should I do something else?
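For idea 2, a minimal sketch of skipping the update when the loss spikes (the threshold and variable names are assumptions):
loss = criterion(outputs, targets)
if not torch.isfinite(loss) or loss.item() > spike_threshold:
    optimizer.zero_grad()  # drop this batch's update entirely
else:
    optimizer.zero_grad()
    loss.backward()
    optimizer.step() |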
st80624 | I am implementing one component in my NN, which consists of several nn.functional operations (parameter-free). The component takes a list of tensors as input for one batch. Each tensor in the list is variable-sized (it might be empty); one tensor per entry in the batch. Each row of a tensor is some kind of feature. I don't want to use a for loop to process those tensors, perform sum pooling for each tensor after this component, and concatenate them. Is there any way to do such computation in parallel? |
st80625 | Can torch.cuda.manual_seed_all(seed) be used on a single GPU? And if so, then what is the point of torch.cuda.manual_seed(seed)?
And is it the same with torch.cuda.seed_all() and torch.cuda.seed()?
I was using this code, but for some reason it’s not reproducing the same results:
torch.manual_seed(seed_val)
torch.cuda.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
torch.backends.cudnn.deterministic=True
I’m wondering if I was doing something wrong? Or if I was missing something? My code can run on a single GPU or multiple devices, so I was trying to cover all the bases. |
st80626 | I removed torch.cuda.manual_seed(params.seed), and now it seems to be working correctly again. |
st80627 | I have a two-layer neural network. Given the weights and biases predicted by the neural network, how do I draw the decision boundary on this dataset?
My neural network is defined as follows:
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2,2)
        self.fc2 = nn.Linear(2,1)

    def forward(self,x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
list(model.parameters()) returns
[Parameter containing:
tensor([[-0.5128, -0.3776],
        [ 0.4611, 0.4625]], requires_grad=True), Parameter containing:
tensor([ 0.6443, -3.6282], requires_grad=True), Parameter containing:
tensor([[-0.1985, 2.8158]], requires_grad=True), Parameter containing:
tensor([-5.4537], requires_grad=True)]
The Google Colab notebook is attached. You will find the complete code under "Toy Dataset 3, predicting logits". Besides, I have drawn a 1-layer neural network decision boundary as an example; find it under "Toy Dataset 2, predicting logits". For your ease, the 1-layer neural network dataset and its decision boundary are attached here as well. |
st80628 | A naive way I can think of is to have a grid of points, make predictions for each of them, and then draw the boundary as a contour.
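A minimal sketch of that idea (assuming the trained 2-input model from above and matplotlib):
import numpy as np
import torch
import matplotlib.pyplot as plt

xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = torch.tensor(np.stack([xx.ravel(), yy.ravel()], axis=1), dtype=torch.float32)
with torch.no_grad():
    logits = model(grid)                        # shape [40000, 1]
preds = (logits > 0).float().reshape(xx.shape)  # boundary where the logit crosses 0
plt.contourf(xx, yy, preds.numpy(), alpha=0.3) |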
st80629 | Hi guys, I would like to use transfer learning with resnet50. I use KLDivLoss for the divergence of 2 distributions. Is there a possibility to change the FC layer and have an output dimension of [9, 224, 224] (the dimensions of a photo with 9 channels, my second distribution)? |
st80631 | I am new to PyTorch so I will only be able to provide high-level help. If you wish, you can add a GlobalPooling layer just after the last layer to obtain (9, 1). Or you can define the new layer in your __init__ and bypass the FC(x) in the forward() method of your derived nn.Module class. |
st80632 | There should be no problem doing this. Work on this problem from the last convolution layer and add the necessary 2d operations (conv2d, upsample2d, etc.) until you reach your desired output shape.
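For instance, a minimal sketch of swapping the classifier head for a small upsampling decoder (layer widths are assumptions; for a 224 x 224 input the resnet50 backbone outputs [N, 2048, 7, 7]):
import torch.nn as nn
from torchvision import models

backbone = nn.Sequential(*list(models.resnet50(pretrained=True).children())[:-2])

def up(cin, cout):
    # each block doubles the spatial size: 7 -> 14 -> 28 -> 56 -> 112 -> 224
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())

decoder = nn.Sequential(up(2048, 512), up(512, 128), up(128, 64), up(64, 32),
                        nn.ConvTranspose2d(32, 9, 4, stride=2, padding=1))
model = nn.Sequential(backbone, decoder)  # [N, 3, 224, 224] -> [N, 9, 224, 224] |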
st80633 | Hi,
could anyone point me towards more information regarding the torch.nn.functional.batch_norm method?
When I checked the Python source code I was only able to get to the point of
return F.batch_norm(
    input, self.running_mean, self.running_var, self.weight, self.bias,
    self.training or not self.track_running_stats,
    exponential_average_factor, self.eps)
in torch/nn/modules/batchnorm (in the PyTorch source code).
My question is: over which dimension does functional.batch_norm compute the batch statistics?
Judging from the code I would suspect the 2nd dimension.
The same method functional.batch_norm is used for BatchNorm1d, BatchNorm2d and BatchNorm3d.
So for BatchNorm1d with [batch_size, features] data tensors the second dimension is obviously the relevant one for batch_norm, and for BatchNorm2d with [batch_size, channels, height, width] the second dimension is the relevant one again.
Thanks in advance. |
st80634 | I defined a new loss module and used it to train my own model. However, the first batch's loss always comes out inf or nan, which causes training to fail.
I tried to print the loss item info as follows:
loss item: inf
loss item: 7.118189334869385
…
loss item: 7.123733997344971
Why might this happen? I tested the loss module and it works with some synthetic data, and the module is implemented with torch functions as well.
Has someone encountered this kind of problem? |
st80635 | Could you check your input for NaN or Inf values by calling torch.isnan and torch.isinf on it?
If that's not the case, you could use the anomaly detection util to debug your model.
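Something along these lines (a minimal sketch; criterion, model, inputs and targets are your own objects):
import torch

assert not torch.isnan(inputs).any(), 'NaN in inputs'
assert not torch.isinf(inputs).any(), 'Inf in inputs'

with torch.autograd.detect_anomaly():  # reports the op that produced NaN/Inf grads
    loss = criterion(model(inputs), targets)
    loss.backward() |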
st80636 | Thanks for your suggestion; I have solved this problem.
It was because of the torch.exp function: sometimes it takes a large input and torch.exp produces an inf value. |
st80637 | Hi,
I am somewhat of a beginner to PyTorch. I am implementing a paper with a classification CNN (input -> convolutional layers -> dense layers -> output). However, each filter of the final convolutional layer has its own loss calculation. During back-propagation, the gradient of the final loss of the network output is summed with the gradient of the loss for each particular filter. This is then back-propagated to the lower convolutional layers as per usual.
I have no idea how to implement something like this. I have only ever implemented pre-defined loss functions (e.g. nn.CrossEntropyLoss()) and only for the final output of the network.
Do you have any suggestions on how to get started? |
st80638 | Hi @ari,
You can have the model return both the last conv layer output and the final output:
def forward(self, x):
    [...]
    c = self.last_conv(...)
    x = self.dense(x)
    return c, x
Next, depending on how you calculate the "loss for each particular filter", you can define your loss function and sum it with the classic output loss (such as nn.CrossEntropyLoss):
def special_loss(c, c_label):
    # Implements the paper's algorithm
    return l

loss = nn.CrossEntropyLoss()
c, output = model(input)
final_loss = loss(output, target) + special_loss(c, c_label)
final_loss.backward()
Keep in mind that a loss function is normal PyTorch code that returns a scalar (on which you usually call backward). |
st80639 | File "/home/ramachap/autodl_starting_kit_stable/AutoDL_ingestion_program/ingestion.py", line 326, in <module>
    remaining_time_budget=remaining_time_budget)
  File "AutoDL_simple_baseline_models/resnet/model.py", line 371, in train
    self.trainloop(self.criterion, self.optimizer, steps=steps_to_train)
  File "AutoDL_simple_baseline_models/resnet/model.py", line 573, in trainloop
    log_ps = self.pytorchmodel(images)
  File "/home/ramachap/envs/autodl/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "AutoDL_simple_baseline_models/resnet/model.py", line 73, in forward
    out = self.model(x)
  File "/home/ramachap/envs/autodl/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ramachap/envs/autodl/lib/python3.5/site-packages/torchvision/models/resnet.py", line 150, in forward
    x = self.conv1(x)
  File "/home/ramachap/envs/autodl/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ramachap/envs/autodl/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 3 does not equal 0 (while checking arguments for cudnn_convolution) |
st80640 | When using DataParallel, all the tensors you give to the model should be on device 0 (device_ids[0]); that's the PyTorch convention, and they get scattered to the other devices automatically. I think this discussion might help you: How to solve the problem of `RuntimeError: all tensors must be on devices[0]` |
st80641 | I'm confused about how to use DataParallel properly over multiple GPUs, because it seems to be distributing along the wrong dimension (the code works fine using only a single GPU).
The model, using dim=0 in DataParallel, batch_size=32 and 8 GPUs, is:
import torch
import torch.nn as nn
from torch.autograd import Variable

class StepRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.num_layers = num_layers
        self.encoder = nn.Embedding(input_size, hidden_size)
        self.rnn = nn.GRU(input_size=hidden_size,
                          hidden_size=hidden_size,
                          num_layers=num_layers)
        self.decoder = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        batch_size = input.size(0)
        encoded = self.encoder(input)
        output, hidden = self.rnn(encoded.view(1, batch_size, -1), hidden)
        output = self.decoder(output.view(batch_size, -1))
        return output, hidden

    def init_hidden(self, batch_size):
        return Variable(torch.zeros(self.num_layers, batch_size, self.hidden_size))

decoder = StepRNN(
    input_size=100,
    hidden_size=64,
    output_size=100,
    num_layers=1)
decoder_dist = nn.DataParallel(decoder, device_ids=[0,1,2,3,4,5,6,7], dim=0)
decoder_dist.cuda()

batch_size = 32
hidden = decoder.init_hidden(batch_size).cuda()
input_ = Variable(torch.LongTensor(batch_size, 10)).cuda()
target = Variable(torch.LongTensor(batch_size, 10)).cuda()

for c in range(10):
    decoder_dist(input_[:,c].contiguous(), hidden)  # RuntimeError: Expected hidden size (1, 4, 64), got (1, 32, 64)
The result is RuntimeError: Expected hidden size (1, 4, 64), got (1, 32, 64). It makes sense that it's expecting a 32/8 hidden size, but it seems to be passing the full batch. What am I missing? Full traceback here.
With dim=1 I get RuntimeError: invalid argument 2: out of range. Full trace here.
Interestingly, if I open an IPython session and run the code once, I get the runtime error above. But if I run it again unchanged, I get a different error: RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generic/THCTensorCopy.c:18. This seems pretty consistent, but I'm not sure why the error would change with the exact same code.
I found another question where the issue is related to batch_first=True, so taking dim=0 by default doesn't work. But I'm using the default batch_first=False. |
st80642 | If it's an nn.GRU, I think you have to use the flag batch_first=True to make sure the inputs are interpreted as having the mini-batch in dimension 0:
http://pytorch.org/docs/master/nn.html?highlight=batch_first#torch.nn.GRU |
st80643 | I changed my code to use an LSTM like so:
import torch
import torch.nn as nn
from torch.autograd import Variable

class StepRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.num_layers = num_layers
        self.encoder = nn.Embedding(input_size, hidden_size)
        self.rnn = nn.LSTM(input_size=hidden_size,
                           hidden_size=hidden_size,
                           num_layers=num_layers)
        self.decoder = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        batch_size = input.size(0)
        encoded = self.encoder(input)
        output, hidden = self.rnn(encoded.view(1, batch_size, -1), hidden)
        output = self.decoder(output.view(batch_size, -1))
        return output, hidden

    def init_hidden(self, batch_size):
        return (Variable(torch.zeros(self.num_layers, batch_size, self.hidden_size)).cuda(),
                Variable(torch.zeros(self.num_layers, batch_size, self.hidden_size)).cuda())

decoder = StepRNN(
    input_size=100,
    hidden_size=8,
    output_size=100,
    num_layers=1)
decoder_dist = nn.DataParallel(decoder, device_ids=[0,1,2,3,4,5,6,7], dim=0)
decoder_dist.cuda()

batch_size = 16
hidden = decoder.init_hidden(batch_size)
input_ = Variable(torch.LongTensor(batch_size, 10)).cuda()
target = Variable(torch.LongTensor(batch_size, 10)).cuda()

for c in range(10):
    decoder_dist(input_[:,c].contiguous(), hidden)
The result is again RuntimeError: Expected hidden size (1, 2, 8), got (1, 16, 8) (full trace). It doesn't seem to affect GRU only, so I modified the title of this post for future searches.
What is the right way to parallelize consistent with the PyTorch defaults? It seems like DataParallel is expecting data in a non-standard way, or am I missing anything?
As a beginner it's confusing to rethink everything I've learned using batch_first=True; how can I go about using DataParallel with the defaults, or how would I have to modify the code above to use batch_first=True?
Thanks for any input. |
st80644 | I've traced the source of the error to cudnn/rnn.py, during the forward pass that starts at line 190. The error comes from this part of the code:
190 def forward(fn, input, hx, weight, output, hy):
191     with torch.cuda.device_of(input):
        (...)
264         if tuple(hx.size()) != hidden_size:
265             raise RuntimeError('Expected hidden size {}, got {}'.format(
266                 hidden_size, tuple(hx.size())))
where it looks like hidden_size is the properly formatted data and hx is the actual data being passed to each GPU (there's a similar snippet at line 370, but I think this is the relevant one because it's during the forward pass).
The offending object is hx, which is passed to forward as an argument, so it looks like forward expects hx to already be properly split by some previous process? |
st80645 | I ran into the same problem when trying to combine RNN modules with DataParallel.
If you wrap the RNN in a module where the forward function only requires an input parameter, it works fine. It doesn't seem to work when you need both the input and hidden parameters. If you can contain your hidden-state logic within your module, it's an effective work-around. |
st80646 | Thanks, I'm not sure I follow: what do you mean by containing the hidden parameter inside the module?
Glad to hear I'm not the only one having this annoying issue. |
st80647 | Something like this (not working code):
class LSTM(nn.Module):
    def __init__(self, initial_state):
        super(LSTM, self).__init__()
        self.lstm = nn.LSTM(
            ...
            batch_first=True)
        self.hn = initial_state

    def forward(self, input):
        output, hn = self.lstm(input, self.hn)
        self.hn = hn
        return output |
st80648 | Thanks for the snippet, I see what you meant. It seems a bit similar to other responses I've seen here where people actually have to use the non-default batch_first=True. I would definitely try to help debug, but this issue is a bit over my head. |
st80650 | I think both batch-first and batch-second modes are compatible with DataParallel (it assumes batch-first by default, since that’s true of all non-RNN-related tensors, but it has a keyword argument to split over a different dimension). Both modes are definitely compatible with the rest of the RNN infrastructure, including pack_padded_sequence. |
st80651 | DataParallel is not working for me over multiple GPUs with batch_first=False, and I think there are other questions in the forum with similar issues iirc. The two snippets I posted above (GRU and LSTM) will not work with multiple GPUs even when splitting on a different dimension with batch_first=False (I made the snippets self-contained to make it easy to verify). It seems from other questions here that batch_first=True works fine, but I don’t think it works with False unless my code is wrong --which is entirely possible. If you have a minute I’d appreciate the validation of the code as I’m learning pytorch and can’t say for sure. |
st80652 | What’s the conclusion? I have the same issue. Should we use batch_first=True? I want to use DataParallel with batch_first=False. |
st80653 | How about this?
Just use batch_first=False, but get the batch data as B x S. Then you just need to transpose once in your RNN cell, and that's it.
def forward(self, input, seq_lengths):
    # Note: we run this all at once (over the whole input sequence)
    # input shape: B x S (input size)
    # transpose to make S (sequence) x B (batch)
    input = input.t()
    batch_size = input.size(1)
Then everything is OK. For the entire code, please check it out at https://github.com/hunkim/PyTorchZeroToAll/blob/master/12_4_name_classify.py |
st80654 | I'm using batch_first=True, and my forward function requires just one parameter. I'm still facing errors. Here's my model:
class CryptoLSTM(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, batch_size, vocab_size):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.batch_size = batch_size
        self.embedding_dim = embedding_dim
        self.vocab_size = vocab_size
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.hidden2text = nn.Linear(hidden_dim, vocab_size)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        self.hidden = (torch.autograd.Variable(torch.zeros(1, self.batch_size,
                       self.hidden_dim).cuda()), torch.autograd.Variable(torch.zeros(
                       1, self.batch_size, self.hidden_dim).cuda()))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        tag_space = self.hidden2text(lstm_out)
        scores = F.log_softmax(tag_space, dim=2)
        return scores
Here's the training script:
model = CryptoLSTM(args.embedding_dim, args.hidden_dim,
                   args.batch_size, len(alphabet))
model = torch.nn.DataParallel(model).cuda()
loss_function = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for epoch in range(args.num_epochs):
    for batch in dataloader:
        model.zero_grad()
        model.module.init_hidden()
        inputs, targets = batch
        predictions = model(inputs)
        # predictions.size() == 64x128x27
        # NLLLoss expects classes to be the second dim
        predictions = predictions.transpose(1, 2)
        # predictions.size() == 64x27x128
        loss = loss_function(predictions, targets)
        loss.backward()
        optimizer.step()
And here's the Traceback.
Traceback (most recent call last):
File "train.py", line 244, in <module>
main()
File "train.py", line 186, in main
predictions = model(inputs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 73, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 83, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "train.py", line 132, in forward
lstm_out, self.hidden = self.lstm(embeds, self.hidden)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py", line 190, in forward
self.check_forward_args(input, hx, batch_sizes)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py", line 158, in check_forward_args
'Expected hidden[0] size {}, got {}')
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py", line 154, in check_hidden_size
raise RuntimeError(msg.format(expected_hidden_size, tuple(hx.size())))
RuntimeError: Expected hidden[0] size (1, 32, 2000), got (1, 64, 2000)
I’m using torch 0.3.1 on Python 3.5. Any help would be greatly appreciated. |
st80655 | I am using DataParallel with a GRU and batch_first=True, and I don't think the GRU's hidden state is output the way the documentation says it is supposed to be. |
st80656 | Same issue here, trying to use DataParallel with an LSTM model:
RuntimeError: Expected hidden[0] size (1, 2500, 50), got (1, 10000, 50)
I can see from the shape mismatch what the general problem is. The hidden state is being created for the entire model input (10000 in my case), while DataParallel divides that input by the GPU count (4 in my case) to spread the load. Maybe we can also wrap the hidden input tensor with DataParallel so it's also distributed correctly? |
st80657 | I might have found a workaround for this issue, or maybe it's actually the correct way to implement it. According to the torch.nn.LSTM docs:
"If (h_0, c_0) is not provided, both h_0 and c_0 default to zero."
So the workaround is basically to let nn.LSTM initialize its own hidden state rather than having separate init_hidden logic. Some might say this is the correct way to initialize the hidden state. Thoughts?
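For reference, a minimal sketch of that pattern (dimensions are assumptions):
import torch.nn as nn

class Wrapped(nn.Module):
    def __init__(self, embedding_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # no hidden passed in: h_0/c_0 default to zeros sized for this
        # replica's batch chunk, so DataParallel splits no longer mismatch
        out, (h_n, c_n) = self.lstm(x)
        return out |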
st80658 | I think the reason why DataParallel didn't work when you needed both the input and hidden parameters is that the h_0 shape is (num_layers * num_directions, batch, hidden_size) whether batch_first is True or False.
See the description:
"""
Args:
input_size: The number of expected features in the input `x`
hidden_size: The number of features in the hidden state `h`
num_layers: Number of recurrent layers. E.g., setting ``num_layers=2``
would mean stacking two LSTMs together to form a `stacked LSTM`,
with the second LSTM taking in outputs of the first LSTM and
computing the final results. Default: 1
bias: If ``False``, then the layer does not use bias weights `b_ih` and `b_hh`.
Default: ``True``
batch_first: If ``True``, then the input and output tensors are provided
as (batch, seq, feature). Default: ``False``
dropout: If non-zero, introduces a `Dropout` layer on the outputs of each
LSTM layer except the last layer, with dropout probability equal to
:attr:`dropout`. Default: 0
bidirectional: If ``True``, becomes a bidirectional LSTM. Default: ``False``
Inputs: input, (h_0, c_0)
- **input** of shape `(seq_len, batch, input_size)`: tensor containing the features
of the input sequence.
The input can also be a packed variable length sequence.
See :func:`torch.nn.utils.rnn.pack_padded_sequence` or
:func:`torch.nn.utils.rnn.pack_sequence` for details.
- **h_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
containing the initial hidden state for each element in the batch.
If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- **c_0** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
containing the initial cell state for each element in the batch.
If `(h_0, c_0)` is not provided, both **h_0** and **c_0** default to zero.
"""
batch_first only decides the input sizes, not the hidden, which causes the problem. Maybe you can try using batch_first=False and change the input sizes to match.
I also tried using batch_first=True in LSTM. It turns out that it works if you provide hidden as (num_layers * num_directions, batch, hidden_size), or if you don't specify the hidden like @robd2 said; either way works. |
st80659 | Hello,
I have the following error while loading a model trained on 2 GPUs onto the CPU:
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
I tried loading with:
torch.load('my_model.pt', 'cpu')
torch.load('my_model.pt', map_location=lambda storage, location: 'cpu')
as I've researched, but I always get the same error.
It seems the gpu-1 part is loaded onto the CPU, but the gpu-0 part is not, hence the error message.
I welcome some help.
Regards,
Daniel |
st80660 | Did you save the model itself or the state dict? You can check if all params are on the GPU by using
next(model.parameters()).is_cuda
I think it would be best to load it on the GPU, move it to the CPU, and save it again. |
st80661 | Thanks for the answer, but:
next(model.parameters()).is_cuda
--> True
I tried to load, then move and save again:
model = torch.load('gpu_model.pt')
model.to('cpu')
torch.save(model, 'models_train/CPU_model_cpu.pt')
That didn't work either; same error message. |
st80662 | Don't save the model itself; save the model's state dict. Saving the whole model can cause various problems:
https://pytorch.org/tutorials/beginner/saving_loading_models.html
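A minimal sketch of the state-dict round trip (paths and MyModel are placeholders):
# save (on the GPU machine)
torch.save(model.state_dict(), 'my_model_state.pt')

# load (works on CPU-only machines too)
model = MyModel()  # rebuild the architecture first
state = torch.load('my_model_state.pt', map_location='cpu')
model.load_state_dict(state)
Note that if the model was wrapped in DataParallel, save model.module.state_dict() so the keys carry no 'module.' prefix. |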
st80663 | Hi all,
Whether I have requires_grad_ set to True or False for my tensor does not seem to affect the time taken for operations on that tensor to complete. This is confusing to me: I would've expected operations to take longer when requires_grad_ is True, as the extra step of creating the graph must be taken. Any help in understanding why I'm seeing these results would be much appreciated.
Thanks in advance |
st80664 | This is actually good news, as many hours were spent optimizing the autograd engine so that its overhead is negligible in terms of runtime. |
st80665 | Very interesting! So creating the graph is a cheap operation? It's not until you start using the graph (backprop, etc.) that the computational cost becomes significant? |
st80666 | Yes.
The overhead of creating the graph is simply creating one C++ object (the Node), wrapping some tensors (the ones saved for the backward), creating the links to the previous Nodes, and linking the output to the newly created Node.
Backprop needs to traverse this whole graph to know which operations to perform (fairly expensive), and then computing the gradients themselves is roughly as expensive as the forward pass.
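A quick way to see this for yourself, as a minimal sketch (exact numbers depend on hardware):
import time
import torch

x = torch.randn(1000, 1000)
for rg in (False, True):
    a = x.clone().requires_grad_(rg)
    start = time.time()
    for _ in range(100):
        b = (a * 2).sum()  # a graph is built when rg=True, yet the timing barely changes
    print(rg, time.time() - start) |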
st80667 | I have found AdamW by LiyuanLucasLiu.
Comparing the implementation with Adam, one thing I wonder about:
why does the AdamW implementation use p_data_fp32 = p.data.float() and later p.data.copy_(p_data_fp32)?
Is this the placeholder trick for the optim to be memory efficient?
Will this improve the original Adam implementation, or is it not needed? |