st30268
|
Where are you installing your version of torch from? It could be that the binary package was not compiled with CUDA support.
|
st30269
|
eqy:
It could be that the binary package was not compiled with CUDA support.
It indeed seems to be the case:
Bhavya_Soni:
Current version of torch (after updating) is 1.9.0+cpu.
@Bhavya_Soni make sure you are specifying the desired CUDA runtime version when installing the binaries as given here.
|
st30270
|
I was getting a “package not found” error, so I tried to do pip install --upgrade torch torchvision.
After that I got this error.
|
st30271
|
Looks like PyTorch doesn’t support CUDA 11.3 right now; it’s only showing CUDA 11.1 + torch 1.9.0.
So do I need to downgrade the CUDA version? @ptrblck
|
st30272
|
Not necessarily. The binaries ship with their CUDA runtime (as well as cudnn, NCCL etc.) so your local CUDA toolkit installation won’t be used unless you are building PyTorch from source or are compiling a custom CUDA extension.
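A quick way to verify which CUDA runtime (if any) your installed binary ships with (a minimal check, not from this thread):
import torch
print(torch.__version__)          # e.g. '1.9.0+cu102' or '1.9.0+cpu'
print(torch.version.cuda)         # CUDA runtime the binary was built with, or None
print(torch.cuda.is_available())  # True if the driver and a GPU are visible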
|
st30273
|
So how can I do that? You said “CUDA toolkit installation won’t be used unless you are building PyTorch from source or are compiling a custom CUDA extension”; how can I do this?
|
st30274
|
You can just install the NVIDIA driver, select a desired PyTorch binary using the posted link, and install it.
E.g.
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
will install PyTorch with the CUDA 10.2 runtime (as well as cudnn7.6.5).
|
st30275
|
pip install torch==1.9.0+cu102 torchvision==0.10.0+cu102 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
This was helpful, thank you.
|
st30276
|
Let’s suppose we have a convolutional module. Right now it is boring:
class My_Conv(nn.Module):
    def __init__(self, kwargs):
        super(My_Conv, self).__init__()
        self.conv = nn.Conv2d(**kwargs)

    def forward(self, x):
        return self.conv(x)
But I want to do something like this:
class My_Conv(nn.Module):
    def __init__(self, kwargs):
        super(My_Conv, self).__init__()
        self.something = Something()
        self.conv = nn.Conv2d(**kwargs)

    def forward(self, x):
        alpha = self.something(x)
        return self.conv(x, weight_multipliers=alpha)
Let’s suppose I have an alpha tensor. Alpha contains multipliers for the convolution weights. But not globally for the whole convolution. It contains different multipliers for each stride of the convolution window. Think of it as some kind of attention.
Now I want to get the convolution outputs just as before, but first I want the weights to be multiplied by the relevant numbers from alpha, for each given window stride.
Is this possible to do efficiently in Pytorch? Is there a hack for this without reimplementing the convolution operation? I looked in the source, and I don’t feel like messing around on the C++ level.
|
st30277
|
I think the easiest approach would be to unfold the inputs, apply the weights to each patch, and use a matrix multiplication approach for the convolution, which will most likely use a lot of memory and be slow.
Checking the native conv implementation in C++ could yield a speedup, but I also understand this wouldn’t be interesting to you.
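A minimal sketch of that unfold + matmul approach (the shapes and the per-window alpha are assumptions for illustration, not code from this thread):
import torch
import torch.nn.functional as F

N, C_in, H, W = 2, 3, 8, 8
C_out, k = 4, 3
x = torch.randn(N, C_in, H, W)
weight = torch.randn(C_out, C_in, k, k)

patches = F.unfold(x, kernel_size=k)    # [N, C_in*k*k, L], L = number of windows
L = patches.size(-1)
alpha = torch.rand(N, L)                # hypothetical per-window multipliers
patches = patches * alpha.unsqueeze(1)  # scaling each patch is equivalent to scaling the weights per window
out = weight.view(C_out, -1) @ patches  # [N, C_out, L] via matrix multiplication
out = out.view(N, C_out, H - k + 1, W - k + 1)  # back to the spatial layout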
|
st30278
|
Thank you for the suggestion! I will try it and share my results. If it works well in Python (apart from the performance), maybe it will motivate me to seek a more efficient solution, such as hacking away in C++.
|
st30279
|
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(out_channels, out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out

# ResNet
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(3, 16)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self.make_layer(block, 16, layers[0])
        self.layer2 = self.make_layer(block, 32, layers[1], 2)
        self.layer3 = self.make_layer(block, 64, layers[2], 2)
        self.avg_pool = nn.AvgPool2d(8)
        self.fc = nn.Linear(64, num_classes)

    def make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        if (stride != 1) or (self.in_channels != out_channels):
            downsample = nn.Sequential(
                conv3x3(self.in_channels, out_channels, stride=stride),
                nn.BatchNorm2d(out_channels))
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for i in range(1, blocks):
            layers.append(block(out_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
error : RuntimeError: Expected 5-dimensional input for 5-dimensional weight [16, 1, 3, 3, 3], but got 4-dimensional input of size [5529, 1, 5, 5] instead
|
st30280
|
Based on the error message it seems you are using nn.Conv3d in your model, which would expect a 5-dimensional input in the shape [batch_size, channels, depth, height, width], while you are passing a 4-dimensional input. You could either use nn.Conv2d layers or make sure the input has the expected shape.
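For reference, a small shape example (illustrative only):
import torch
conv = torch.nn.Conv3d(1, 16, 3)
x = torch.randn(2, 1, 8, 24, 24)  # 5-dimensional: [batch_size, channels, depth, height, width]
print(conv(x).shape)              # torch.Size([2, 16, 6, 22, 22])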
|
st30281
|
@ptrblck
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample:
            residual = self.downsample(x)
        # out += residual
        out = self.relu(out)
        return out

# ResNet
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=2):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = nn.Conv2d(1, 16, 3)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self.make_layer(block, 16, layers[0])
        self.layer2 = self.make_layer(block, 32, layers[1], 2)
        self.layer3 = self.make_layer(block, 64, layers[2], 2)
        self.avg_pool = nn.AvgPool2d(8)
        self.fc = nn.Linear(64, num_classes)

    def make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        if (stride != 1) or (self.in_channels != out_channels):
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels, 3),
                nn.BatchNorm2d(out_channels))
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for i in range(1, blocks):
            layers.append(block(out_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (3 x 3). Kernel size can’t be greater than actual input size
|
st30282
|
This new error is raised if your input is too small, such that a convolution layer fails because its kernel size is larger than the padded input.
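A tiny reproduction of this failure mode (illustrative, not the poster's exact setup):
import torch
conv = torch.nn.Conv2d(16, 16, 3)
x = torch.randn(1, 16, 1, 1)  # spatial size 1x1 is smaller than the 3x3 kernel
out = conv(x)                 # raises: Kernel size can't be greater than actual input size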
|
st30283
|
@ptrblck
loss_train.backward()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2232, 16, 37, 37]], which is output 0 of ReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
|
st30284
|
I have a 2-layer LSTM (torch.nn.LSTM) model, with dropout enabled.
For a task, I need the intermediate gradients as well; I do this by using backward hooks.
But when I call .backward(), I get the following error:
cudnn RNN backward can only be called in training mode
I can bypass this by setting the LSTM layer to .train() but this also enables the dropout.
Can this be fixed?
|
st30285
|
You can call .train() on the nn.LSTM module alone or disable cudnn for this layer.
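A minimal sketch of the second option, disabling cudnn just for the backward-relevant region (lstm and inputs are assumed to exist):
import torch

with torch.backends.cudnn.flags(enabled=False):
    output, (h, c) = lstm(inputs)  # runs the non-cudnn RNN path
    output.sum().backward()        # backward now also works in eval mode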
|
st30286
|
Hello!
I wish to ask how to make the output of the LSTM network in PyTorch become the input of the next step.
I need to feed the value output by the LSTM back in as the next input, and then make predictions step by step. How do I write this code?
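A minimal sketch of the usual step-by-step (autoregressive) approach (the sizes and the proj layer here are illustrative assumptions, not from this thread):
import torch
import torch.nn as nn

input_size, hidden_size, steps = 8, 16, 5
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
proj = nn.Linear(hidden_size, input_size)  # maps the hidden state back to the input size

x = torch.randn(1, 1, input_size)  # last known input step
state = None
outputs = []
for _ in range(steps):
    out, state = lstm(x, state)  # carry the hidden/cell state forward
    x = proj(out)                # the prediction becomes the next input
    outputs.append(x)
preds = torch.cat(outputs, dim=1)  # [1, steps, input_size]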
|
st30287
|
For the LSTM dropout, I use the dropout parameter of nn.LSTM. When I call .train() on nn.LSTM, the dropout is enabled as well.
|
st30288
|
In that case you could still disable cudnn for this layer only or set the dropout to zero after calling .train() on it.
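E.g., a small sketch of the second suggestion (assuming lstm is the nn.LSTM module):
lstm.train()        # enables the cudnn RNN backward
lstm.dropout = 0.0  # but zero out the dropout it would otherwise apply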
|
st30289
|
I have a bug when training my model, and I ran two experiments. In the first, I evaluate every 1000 steps during training and also evaluate the model at the end of each epoch (without saving the end-of-epoch model). In the second, I evaluate every 1000 steps during training but do not evaluate at the end of each epoch. The two experiments do not produce the same results, and I don't know why.
The train and evaluate function are here:
Train:
def train_and_evaluate(args, model, tokenizer, optimizer, scheduler, train_dataloader, val_loader, epoch, max_f1):
    """ Train the model """
    # Train!
    logger.info("***** Running training *****")
    logger.info("  Num examples = %d", len(train_dataloader) * args.train_batch_size)
    epoch_step = 0
    epoch_loss = 0.0
    model.zero_grad()
    # the batch loading below needs to be adapted to your own data script
    epoch_iterator = tqdm(train_dataloader, desc="Training")
    # model.train()
    scaler = GradScaler()
    # adversarial training code
    # fgm = FGM(model, epsilon=1, emb_name='word_embeddings.weight')
    # pgd = PGD(model, emb_name='word_embeddings.weight', epsilon=1.0, alpha=0.3)
    # k = 3
    for step, batch in enumerate(epoch_iterator):
        model.train()
        batch = tuple(t.to(args.device) for t in batch)
        inputs = {'input_ids': batch[0], 'attention_mask': batch[1],
                  'token_type_ids': batch[2],
                  'start_positions': batch[3],
                  'end_positions': batch[4],
                  'answerable_label': batch[5]}
        if args.model_type in ["xlm", "roberta", "distilbert", "camembert", "bart", "longformer"]:
            del inputs["token_type_ids"]
        if args.model_type in ['xlnet', 'xlm']:
            inputs.update({'cls_index': batch[6],
                           'p_mask': batch[9]})
        with autocast():
            outputs = model(**inputs)
            loss = outputs[0]
        # if args.n_gpu > 1:
        #     loss = loss.mean()  # mean() to average on multi-gpu parallel training
        epoch_loss += loss.item()
        scaler.scale(loss).backward()
        # if args.fp16:
        #     with amp.scale_loss(loss, optimizer) as scaled_loss:
        #         scaled_loss.backward()
        # else:
        #     loss.backward()
        # pgd adversarial training
        # pgd.backup_grad()
        # for t in range(k):
        #     pgd.attack(is_first_attack=(t == 0))  # add adversarial perturbation on the embeddings; back up param.data on the first attack
        #     if t != k - 1:
        #         model.zero_grad()
        #     else:
        #         pgd.restore_grad()
        #     with autocast():
        #         loss_adv = model(**inputs)[0]
        #     scaler.scale(loss_adv).backward()  # backprop, accumulating the adversarial gradients on top of the normal gradients
        # pgd.restore()  # restore the embedding parameters
        # fgm adversarial training code
        # fgm.attack()
        # with autocast():
        #     adv_outputs = model(**inputs)
        #     loss_adv = adv_outputs[0]
        # if args.n_gpu > 1:
        #     loss_adv = loss_adv.mean()  # mean() to average on multi-gpu parallel training
        # scaler.scale(loss_adv).backward()
        # if args.fp16:
        #     with amp.scale_loss(loss_adv, optimizer) as adv_scaled_loss:
        #         adv_scaled_loss.backward()
        # else:
        #     loss_adv.backward()
        # fgm.restore()
        # if args.fp16:
        #     torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
        # else:
        #     torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
        # optimizer.step()
        scaler.step(optimizer)
        # scaler.step(aux_opt)
        scaler.update()
        # optimizer.step()
        scheduler.step()  # Update learning rate schedule
        optimizer.zero_grad()
        epoch_step += 1
        # optimizer.step()
        # scheduler.step()  # Update learning rate schedule
        # model.zero_grad()
        # epoch_step += 1
        # evaluate the model every few steps
        if (epoch_step % args.evaluate_steps == 0) or (step == len(train_dataloader) - 1):
            val_results = evaluate(args, model, tokenizer, val_loader)
            # logger.info('evaluate f1 is {:.4f}'.format(val_results.get('f1')))
            # logger.info('***** Epoch {} Running result *****'.format(epoch + 1))
            # logger.info('Training loss is {:.4f}'.format(epoch_loss / epoch_step))
            # logger.info("***** Eval results %s *****", "")
            # info = "-".join([f' {key}: {value:.4f} ' for key, value in val_results.items()])
            # logger.info(info)
            if max_f1 < val_results.get('f1'):
                max_f1 = val_results.get('f1')
                # logger.info('Epoch {} Training loss is {:.4f}'.format(epoch + 1, epoch_loss / epoch_step))
                logger.info("***** Eval results %s *****", "")
                info = "-".join([f' {key}: {value:.4f} ' for key, value in val_results.items()])
                logger.info(info)
                # Save best model checkpoint
                output_dir = os.path.join(args.output_dir, args.model_type)
                if not os.path.exists(output_dir):
                    os.makedirs(output_dir)
                # Save weights of the network
                model_to_save = model.module if hasattr(model, "module") else model  # Take care of distributed/parallel training
                # model_checkpoint = {'epoch': epoch + 1,
                #                     'state_dict': model_to_save.state_dict(),
                #                     'optim_state_dict': optimizer.state_dict(),
                #                     'scheduler_dict': scheduler.state_dict(),
                #                     }
                # model_to_save.save_pretrained(output_dir)
                tokenizer.save_pretrained(output_dir)
                model_file_path = os.path.join(output_dir, 'qa-best.bin')
                torch.save(model_to_save.state_dict(), model_file_path)
                logger.info("Saving best model checkpoint to %s", output_dir)
    # if 'cuda' in str(args.device):
    #     torch.cuda.empty_cache()
    return max_f1
Evaluate:
def evaluate(args, model, tokenizer, val_loader, prefix=""):
    features = val_loader.dataset.features
    examples = val_loader.dataset.examples
    # args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
    # Note that DistributedSampler samples randomly
    # eval_sampler = SequentialSampler(dataset)
    # eval_dataloader = DataLoader(dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
    # multi-gpu evaluate
    # if args.n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
    #     model = torch.nn.DataParallel(model)
    # Eval!
    logger.info("***** Running evaluation {} *****".format(prefix))
    logger.info("  Num examples = %d", len(val_loader) * args.eval_batch_size)
    # logger.info("  Batch size = %d", args.eval_batch_size)
    all_results = []
    # start_time = timeit.default_timer()
    model.eval()
    # for batch in tqdm(val_loader, desc="Evaluating"):
    for batch in val_loader:
        model.eval()
        batch = tuple(t.to(args.device) for t in batch)
        with torch.no_grad():
            inputs = {'input_ids': batch[0], 'attention_mask': batch[1],
                      'token_type_ids': batch[2], }
                      # 'start_positions': batch[3],
                      # 'end_positions': batch[4],}
                      # 'answerable_label': batch[5]}
            if args.model_type in ["xlm", "roberta", "distilbert", "camembert", "bart", "longformer"]:
                del inputs["token_type_ids"]
            batch_unique_id = batch[6]
            # XLNet and XLM use more arguments for their predictions
            if args.model_type in ["xlnet", "xlm"]:
                inputs.update({"cls_index": batch[4], "p_mask": batch[7]})
                # for lang_id-sensitive xlm models
                if hasattr(model, "config") and hasattr(model.config, "lang2id"):
                    inputs.update(
                        {"langs": (torch.ones(batch[0].shape, dtype=torch.int64) * args.lang_id).to(args.device)}
                    )
            outputs = model(**inputs)
        for i, unique_id in enumerate(batch_unique_id):
            # eval_feature = features[example_indice]
            # unique_id = int(eval_feature.unique_id)
            unique_id = int(unique_id.item())
            output = [output[i].detach().to('cpu').tolist() for output in outputs[:2]]
            # Some models (XLNet, XLM) use 5 arguments for their predictions, while the other "simpler"
            # models only use two.
            if args.model_type in ["xlnet", "xlm"]:
                start_logits = output[0]
                # start_top_index = output[1]
                end_logits = output[1]
                # end_top_index = output[3]
                # cls_logits = output[2]
                result = SquadResult(
                    unique_id,
                    start_logits,
                    end_logits,
                    # start_top_index=start_top_index,
                    # end_top_index=end_top_index,
                    cls_logits=None,
                )
            else:
                start_logits = output[0]
                end_logits = output[1]
                # cls_logits = output[2]
                result = SquadResult(unique_id, start_logits, end_logits)
            all_results.append(result)
    # evalTime = timeit.default_timer() - start_time
    # logger.info("  Evaluation done in total %f secs (%f sec per example)", evalTime, evalTime / len(dataset))
    # Compute predictions
    output_prediction_file = None
    output_nbest_file = None
    # output_prediction_file = os.path.join(args.output_dir, "predictions_{}.json".format(prefix))
    # output_nbest_file = os.path.join(args.output_dir, "nbest_predictions_{}.json".format(prefix))
    if args.version_2_with_negative:
        output_null_log_odds_file = None
        # output_null_log_odds_file = os.path.join(args.output_dir, "null_odds_{}.json".format(prefix))
    else:
        output_null_log_odds_file = None
    # XLNet and XLM use a more complex post-processing procedure
    if args.model_type in ["xlnet", "xlm"]:
        start_n_top = model.config.start_n_top if hasattr(model, "config") else model.module.config.start_n_top
        end_n_top = model.config.end_n_top if hasattr(model, "config") else model.module.config.end_n_top
        predictions = compute_predictions_extended(
            examples,
            features,
            all_results,
            args.n_best_size,
            args.max_answer_length,
            output_prediction_file,
            output_nbest_file,
            output_null_log_odds_file,
            start_n_top,
            end_n_top,
            args.version_2_with_negative,
            tokenizer,
            args.verbose_logging
        )
    else:
        # predictions is a dict: {qid: [pred_text, start_logits, end_logits, start_index, end_index]}
        predictions, nbest_predictions = compute_predictions(
            examples,
            features,
            all_results,
            args.n_best_size,
            args.max_answer_length,
            args.do_lower_case,
            output_prediction_file,
            output_nbest_file,
            output_null_log_odds_file,
            args.verbose_logging,
            args.version_2_with_negative,
            args.null_score_diff_threshold,
            tokenizer
        )
    # Compute the F1 and exact scores.
    results = squad_evaluate(examples, predictions, tokenizer)
    return results
The training loop function:
def train_loop(args, model, tokenizer, optimizer, scheduler, train_dataloader, val_dataloader):
    # run train and val here
    seed_everything(args.seed)
    max_f1 = 0.0
    # global_steps = 0
    for epoch in range(int(args.num_train_epochs)):
        logger.info('******************** Epoch {} Running Start! ********************'.format(epoch + 1))
        max_f1 = train_and_evaluate(args, model, tokenizer, optimizer, scheduler, train_dataloader, val_dataloader, epoch, max_f1)
        # this is the difference between the two experiments:
        # last_evaluate_results = evaluate(args, model, tokenizer, val_dataloader)
        # logger.info('The last step evaluate f1 is {:.4f}'.format(last_evaluate_results.get('f1')))
        # max_f1 = new_max_f1
        # logger.info('The best Acc-score is {:.4f}'.format(max_acc))
        # logger.info('The best new F1-score is {:.4f}'.format(new_max_f1))
        logger.info('The best F1-score is {:.4f}'.format(max_f1))
        logger.info('******************** Epoch {} Running End! ********************'.format(epoch + 1))
        # logger.info('Negative best F1-score is {:.4f}'.format(max_neg_f1))
        if 'cuda' in str(args.device):
            torch.cuda.empty_cache()
|
st30290
|
I still think the reason could be the different order of calls into the pseudorandom number generator, as explained in your double post. Did you take a look at it and e.g. re-seed the training for debugging purposes?
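E.g., a small helper to re-seed everything at a fixed point for debugging (a sketch, not from this thread):
import random
import numpy as np
import torch

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)  # call right before the section you want to make reproducible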
|
st30291
|
@ptrblck Thanks for your reply. Do you mean I need to choose another seed to train the model? Thanks!
|
st30292
|
@ptrblck I use this code to check the random state:
{
    "python": random.getstate(),
    "numpy": np.random.get_state(),
    "cpu": torch.random.get_rng_state(),
    "gpu": torch.cuda.random.get_rng_state()
}
and the result is that the state does not change between the start and the end of the evaluate function.
|
st30293
|
When I save my checkpoint, an error occurs. How can I fix it?
This is the config file (note CHECKPOINT):
import torch
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
CSV_DIR = 'C:/Users/PML/Documents/Florentino_space/cast_iron_preprocess/csv512_offset50'
IMG_DIR = 'C:/Users/PML/Documents/Florentino_space/cast_iron_preprocess/img512_offset50'
LEARNING_RATE = 2e-4
IMAGE_SIZE = 512
IN_CHANNELS = 1
OUT_CHANNELS = 1
BATCH_SIZE = 8
NUM_EPOCHS = 30
LOAD_MODEL = False
SAVE_MODEL = True
NUM_WORKER = 4
CHECKPOINT = 'sur_checkpoint.pth.tar'
this is my utils function:
def save_checkpoint(model, optimizer, filename="my_checkpoint.pth.tar"):
    print("=> Saving checkpoint")
    checkpoint = {
        "state_dict": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    torch.save(checkpoint, filename)
and this is my training script:
from model import AutoEncoder
import torch
from dataset import SurDataset
from utils import save_checkpoint, load_checkpoint, save_some_examples
from torch.utils.data import DataLoader
from torchvision.utils import save_image
import torch.nn as nn
import torch.optim as optim
import config
from tqdm import tqdm

model = AutoEncoder(in_channels=1, out_channels=1).to(config.DEVICE)
optimizer = optim.Adam(list(model.parameters()),
                       lr=config.LEARNING_RATE,
                       betas=(0.5, 0.999))
mse = nn.MSELoss()
scalar = torch.cuda.amp.GradScaler()
dataset = SurDataset(csv_dir=config.CSV_DIR, img_dir=config.IMG_DIR)
train_set, val_set = torch.utils.data.random_split(dataset, [15000, len(dataset) - 15000])
loader = DataLoader(dataset=train_set, batch_size=config.BATCH_SIZE, shuffle=True)
val_loader = DataLoader(dataset=val_set, batch_size=config.BATCH_SIZE)
loop = tqdm(loader)
if config.LOAD_MODEL:
    load_checkpoint(
        config.CHECKPOINT, model, optimizer, config.LEARNING_RATE,
    )
model.train()
for epoch in range(config.NUM_EPOCHS):
    print(f"epoch {epoch+1}/{config.NUM_EPOCHS}:")
    losses = []
    for idx, (csv_, target) in enumerate(loop):
        csv_ = csv_.to(config.DEVICE)
        target = target.to(config.DEVICE)
        predict = model(csv_)
        loss = mse(predict, target)
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        # gradient descent or adam step
        optimizer.step()
        if idx % 5 == 0:
            save_some_examples(model, val_loader, epoch, folder="evaluation")
    if config.SAVE_MODEL and epoch % 5 == 0:
        save_checkpoint(model, optimizer, filename=config.CHECKPOINT)  # here comes the error!
    print(f"loss at epoch {epoch+1}/{config.NUM_EPOCHS} is {losses:.4f}.")
|
st30294
|
Solved by Florentino in post #4
I know what is wrong: I passed an array rather than a value. Thank you!
|
st30295
|
The error is raised if you try to use a wrong format identifier in a str, as seen here:
print('{:3}'.format(['aa', 'bb']))
> TypeError: unsupported format string passed to list.__format__
but I’m unsure where exactly the error is coming from.
Could you post a minimal executable code snippet to reproduce this issue, please?
|
st30296
|
Would these errors help?
Traceback (most recent call last):
  File "C:\Users\PML\.conda\envs\floren\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\PML\.conda\envs\floren\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\Users\PML\.vscode\extensions\ms-python.python-2021.6.944021595\pythonFiles\lib\python\debugpy\__main__.py", line 45, in <module>
    cli.main()
  File "c:\Users\PML\.vscode\extensions\ms-python.python-2021.6.944021595\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 444, in main
    run()
  File "c:\Users\PML\.vscode\extensions\ms-python.python-2021.6.944021595\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 285, in run_file
    runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
  File "C:\Users\PML\.conda\envs\floren\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\PML\.conda\envs\floren\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\PML\.conda\envs\floren\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\Users\PML\Documents\Florentino_space\cast_iron_preprocess\autoencoder20210615\train.py", line 48, in <module>
    print(f"loss at epoch {epoch+1}/{config.NUM_EPOCHS} is {losses:.4f}.")
TypeError: unsupported format string passed to list.__format__
|
st30297
|
Hello, I’m trying to solve MNIST tutorial.
Here is my code
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
from torchvision import datasets, transforms

dev = 'cuda'
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
no_cuda = False
use_cuda = not no_cuda and torch.cuda.is_available()
device = torch.device('cuda' if use_cuda else 'cpu')
seed = 1
batch_size = 64
test_batch_size = 64
torch.manual_seed(seed)

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('dataset/', train=True, download=True,
                   transform=transforms.Compose([transforms.ToTensor(),
                                                 transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('dataset/', train=False,
                   transform=transforms.Compose([transforms.ToTensor(),
                                                 transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=test_batch_size, shuffle=True)

image, label = next(iter(train_loader))

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4*4*50, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        # feature extraction
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2, 2)
        # print(x.shape)
        # model = Net()
        # model.forward(image)
        # fully connected (classification)
        x = x.view(-1, 4*4*50)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
epochs = 1
log_interval = 100

for epoch in range(1, 2):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()  # optimizer clear
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()  # update
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)] \t Loss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100 * batch_idx / len(train_loader), loss.item()))

    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)  # size [64, 10]
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)  # size [64, 1]
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss = test_loss / len(test_loader.dataset)
    print(len(data), target.shape, data.shape, output.shape)
    print('\nTest set: Average Loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset), 100 * correct / len(test_loader.dataset)))
Even though I designated test_batch_size as 64, the batch size in the evaluation part is 16.
I’m wondering why the batch size had changed.
I am looking forward to see any help. Thanks in advance.
Kind regards,
Yoon Ho
|
st30298
|
You have train as for batch_idx, (data, target) in enumerate(train_loader):
and test as for data, target in test_loader:. Fix that and it should work?
Also, shuffling the test set is not a good idea; you want to keep it the same for comparison across runs.
|
st30299
|
The test data of MNIST will contain 10000 samples.
If you are using a batch size of 64, you would get 156 full batches (9984 samples) and a last batch of 16 samples (9984+16=10000), so I guess you are only checking the shape of the last batch.
If you don’t want to use this last (smaller) batch, you can use drop_last=True in the DataLoader.
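E.g. (a sketch, assuming dataset is the MNIST test set):
test_loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False, drop_last=True)
# 156 batches of 64; the final 16-sample batch is dropped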
|
st30300
|
Is it possible to generate batch data at runtime using PyTorch?
For example, my training data consists of 30000 256×256 images. Can I generate batches containing, for example, 10 images at each iteration, so that I don't have to load all 30000 images at once but can load them step by step?
Thank you.
|
st30301
|
Have you looked at our examples much? Most of the vision ones do this: https://github.com/pytorch/examples/
|
st30302
|
The Dataset & DataLoader tutorial explains how lazy loading can be used and how the DataLoader creates the batches, so you might want to take a look at it.
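A minimal lazy-loading Dataset sketch (the paths list, transform, and file layout here are assumptions):
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class LazyImageDataset(Dataset):
    def __init__(self, paths, transform=None):
        self.paths = paths  # list of image file paths; nothing is loaded yet
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')  # read from disk on demand
        if self.transform is not None:
            img = self.transform(img)
        return img

# loader = DataLoader(LazyImageDataset(paths), batch_size=10, shuffle=True)
# each iteration loads only the 10 images of the current batch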
|
st30303
|
I’m going through the Pytorch Recipe: Defining a Neural Network in Pytorch, and I have a question regarding the parameters for F.max_pool2d.
[image: screenshot of the recipe's code]
In the above code, there are two parameters for F.max_pool2d: x and 2.
My questions:
To clarify, is the first parameter ‘x’ just representing our data?
I think I read somewhere that the second parameter is the kernel size, but I’m confused because I thought pooling doesn’t use kernels (aka filters)? I thought only the convolution operation uses kernels/filters? What is this kernel doing in the pooling operation? Or does this second parameter represent something else?
Any help would be appreciated, thank you!
|
st30304
|
Yes, x is the input activation.
Pooling layers do use a kernel (which doesn’t contain trainable weights) and you can imagine it as a “window” or “patch” which is selecting the input locations for the pooling operation. E.g. a kernel size of 2 would select a 2x2 window (= 4 pixels) and in case of nn.MaxPool2d it would then return the max. value of these 4 inputs in the output activation.
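E.g., a 2x2 max-pool window on a 4x4 input (illustrative):
import torch
import torch.nn.functional as F

x = torch.arange(16.).view(1, 1, 4, 4)
print(F.max_pool2d(x, 2))
# tensor([[[[ 5.,  7.],
#           [13., 15.]]]])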
|
st30305
|
I was going through the Pytorch Recipe: Defining a Neural Network in Pytorch, and I didn’t understand what the torch.nn.Dropout2d function was doing and what its purpose was in the following algorithm in Step 2 of the recipe when it teaches us how to define and initialize the neural network.
[image: screenshot of the recipe's network definition]
Question 1: The comments say that the torch.nn.Dropout2d function is “Designed to ensure that adjacent pixels are either all 0s or all active with an input probability” – what “adjacent pixels” is this referring to, and what is the purpose of making them either all 0s or all active? also, what is the purpose of giving them a probability?
Question 2: Also, I don’t understand where the “9216” (i.e. the number highlighted) comes from for the first parameter in nn.Linear where self.fc1 is defined. The second convolutional layer outputted 64 features, but then the input to the first fully connected layer has 9216 features, so I don’t see the connection? I’m assuming what I’m missing is whatever the torch.nn.Dropout2d function is doing, or perhaps not, I’m not sure.
Any guidance on both of my above questions would be so appreciated, thank you!
|
st30306
|
“Adjacent pixels” refer to the values of an entire channel, which would be dropped using nn.Dropout2d, while nn.Dropout would drop random pixel locations as seen here:
x = torch.randn(2, 2, 4, 4)
drop = nn.Dropout2d()
out = drop(x)
print(out)
> tensor([[[[ 0.0000, 0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, -0.0000],
[-0.0000, 0.0000, 0.0000, 0.0000],
[-0.0000, -0.0000, 0.0000, 0.0000]],
[[-1.7700, 0.5395, 1.1095, 1.5484],
[-1.5528, -0.6495, 1.5294, 0.6949],
[-1.5919, 0.3380, 2.6201, 2.0743],
[-1.5087, 0.5487, -0.4077, 1.1598]]],
[[[-5.5042, -0.3527, -0.1202, 1.6333],
[ 1.9476, -1.1323, 1.2164, -1.9838],
[ 1.9263, 1.0842, -1.4239, -0.8705],
[ 2.7384, 1.5202, 2.0018, -1.3804]],
[[ 0.0000, -0.0000, 0.0000, -0.0000],
[ 0.0000, 0.0000, -0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000],
[-0.0000, 0.0000, 0.0000, 0.0000]]]])
drop = nn.Dropout()
out = drop(x)
print(out)
> tensor([[[[ 1.3324, 0.0000, -0.0000, -0.0000],
[-0.9783, -0.0000, 0.0000, -4.8274],
[-0.0000, 2.1448, 0.1935, 0.0000],
[-0.6435, -0.0000, 2.8480, 0.2524]],
[[-0.0000, 0.0000, 0.0000, 1.5484],
[-1.5528, -0.6495, 0.0000, 0.0000],
[-0.0000, 0.0000, 2.6201, 0.0000],
[-0.0000, 0.0000, -0.4077, 0.0000]]],
[[[-0.0000, -0.3527, -0.1202, 0.0000],
[ 0.0000, -0.0000, 0.0000, -0.0000],
[ 0.0000, 1.0842, -1.4239, -0.8705],
[ 2.7384, 1.5202, 0.0000, -0.0000]],
[[ 2.2113, -0.5281, 3.7269, -1.7598],
[ 0.0000, 0.0000, -0.0000, 0.0000],
[ 3.0427, 0.0000, 0.0000, 0.0000],
[-0.0000, 0.4729, 0.0000, 0.0000]]]])
The purpose is best explained in the Dropout paper, which explains it can help avoiding “co-adaption”. The specified probability is a hyperparameter and specifies the drop probability.
The in_features of a linear layer are defined by the number of features of the flattened input activation. Since nn.Conv2d layers output a 4-dimensional tensor in the shape [batch_size, channels, height, width], you would specify the in_features of the linear layer as channels*height*width and flatten the activation via nn.Flatten() or manually via x = x.view(x.size(0), -1).
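E.g., for the recipe's MNIST setup the 9216 can be traced through the shapes (a sketch assuming 28x28 inputs, the recipe's two 3x3 convs, and a 2x2 max-pool):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)
x = F.relu(nn.Conv2d(1, 32, 3, 1)(x))   # -> [1, 32, 26, 26]
x = F.relu(nn.Conv2d(32, 64, 3, 1)(x))  # -> [1, 64, 24, 24]
x = F.max_pool2d(x, 2)                  # -> [1, 64, 12, 12]
print(x.view(x.size(0), -1).shape)      # -> [1, 9216], i.e. 64 * 12 * 12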
|
st30307
|
Hi,
I tried to use torch.no_grad() with DDP, but it throws:
This error indicates that your module has parameters that were not used in producing its output (the return value of forward). You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel
The pseudo code is as follows:
class MyModel(nn.Module):
    def forward(self, x):
        with torch.no_grad():
            x = self.layers(x)
        return x

class WholeFlow(nn.Module):
    def __init__(self):
        super().__init__()
        self.f = MyModel()
        self.g = nn.Linear(256, 256)

    def forward(self, x):
        x = self.f(x)
        x = self.g(x)
        return x

SGD = (WholeFlow.g.parameters(), ...)
There is a similar issue here: DDP does not work well with `torch.no_grad()` in 1.2 · Issue #6087 · PyTorchLightning/pytorch-lightning · GitHub
It works with DataParallel, but doesn't work with DDP.
Any idea?
|
st30308
|
Did you try to add the suggested find_unused_parameters=True argument and if so, did you get any other error?
|
st30309
|
I am currently training a network in PyTorch and my training loss decreases but wavers a lot (it actually fluctuates between two ranges after 25 epochs of training). The training loss for roughly a third of the iterations is in the 30-40 range, whereas for the other two-thirds it is in the 150-400 range (never in between, i.e. never between 40 and 150, only in these extreme values), as is evident from the training loss profile.
However, the validation loss has always been in the lower range (as seen in its validation loss profile).
After 25 epochs, the validation loss is around 30. This makes me wonder how the validation loss is computed at the end of each epoch.
I was under the impression that whatever parameters are computed at the end of an epoch, the network is inferred for the datapoints in the validation set and the val loss is thus reported. But with that understanding, the validation loss should have been high on a few instances (when the training loss is high after the end of an epoch).
I can only think of a couple of reasons why my training loss is so unsteady:
1. The training loss curve is extremely rough, full of local peaks and valleys, and my learning rate is high, resulting in the loss going up and down during training. However, in that case the validation loss reported after each epoch should have been unsteady as well, since the training loss can be up or down at the end of an epoch, unless the network has been extremely lucky so far with low losses whenever an epoch is ending. Or unless the validation loss after each epoch is not computed based on a single set of network parameters on the validation set, but instead based on, say, 10 sets of network parameters with the best val loss reported.
2. Around a third of the datapoints are poorly explained by the learnt network. Since the training loss is computed for each training datapoint separately, it is unsteady. Given that the val loss is computed on a large set of validation datapoints, the losses get averaged out and the curve is therefore smooth. However, it does not explain why the validation loss is on the lower end of the spectrum (i.e. around 30 instead of something like 80-100, since some of the datapoints are not fit well).
Can anyone think of some other possibility that I am unable to imagine?
|
st30310
|
Normalize and shuffle your data. Since the training loss is most likely logged every step, it is normal for it to fluctuate, especially for smaller batch sizes. I have seen this mostly in GANs, but the variation here is a bit too much. A good approach would be to inspect the samples that produce the abnormal loss.
|
st30311
|
I do shuffle my data. You're right that the training loss is logged every step, unlike the val loss, which would be steady as it is computed on a larger set. However, in line with my point 2 above, it does not explain why the validation loss is low instead of high.
|
st30312
|
Why do you expect val to be high? The trend per epoch in the training is also steadily decreasing, so the val is decreasing too.
I had this issue before but can't remember how I solved it. You may have samples in your data that are too different.
Do you normalize your data? Check if you are doing some augmentations that change the data too much; I believe this was the issue I had.
Try clipping the gradients?
|
st30313
|
Hi guys,
I got a very strange problem.
I’m now experimenting with two identical cards and the same code, and both of them are free of other tasks. But I get an ‘out of memory’ error on one of them. Has anyone encountered similar problems?
|
st30314
|
I have the following snippet, I’m wondering how to effectively vectorize this in ‘pure’ pytorch:
indices = []
for i in range(0, self.dataset.shape[0]):
    if torch.mean(self.dataset[i]) >= .2:
        indices.append(i)
self.dataset = self.dataset[indices, :, :, :, :]
self.target = self.target[indices, :, :, :, :]
I tried
b = torch.mean(self.dataset, dim=0) > .2
indices = b.nonzero()
and
torch.where(torch.mean(self.dataset[:, ???]) > .2, self.dataset, torch.FloatTensor([0.0]))
to no avail. Any thoughts?
|
st30315
|
The first one should work; it would be nice to see the expected output and the one you get.
a = torch.FloatTensor(3,3)
b = a.mean(dim=0) > 2
c = b.nonzero()
a -> tensor([[-2.8721e+27, 4.5780e-41, -2.8721e+27], [ 4.5780e-41, 0.0000e+00, 0.0000e+00], [ 0.0000e+00, 6.8929e+34, 8.5771e-39]])
b -> tensor([False, True, False])
c -> tensor([[1]])
|
st30316
|
given the following:
dataset = torch.randn((123, 1, 64, 64, 64))
b = torch.mean(dataset, dim=0) > .2
c = b.nonzero()
print(c.shape)
this prints:
torch.Size([3495, 4])
where I expect it to be more like (assuming that 23 of the original 123 3-d volumes don't exceed the threshold):
torch.Size([100, 1, 64, 64, 64])
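If the goal is the per-sample filtering from the original loop, a sketch would reduce over all dims except dim 0 and index with the boolean mask (my reading of the intended logic):
mask = dataset.mean(dim=(1, 2, 3, 4)) >= .2  # one mean per sample -> shape [123]
filtered = dataset[mask]                     # keeps only samples above the threshold
print(filtered.shape)                        # e.g. torch.Size([100, 1, 64, 64, 64])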
|
st30317
|
Let’s say I have a torch dataloader = DataLoader(...) object. I don’t want to iterate through the whole dataset whenever I call for data, label in dataloader: in a function, so currently I use:
dataloader = DataLoader(...)
iter_dataloader = iter(dataloader)
batch = iter_dataloader.next()  # Set the first batch

def train_batch():
    data, label = batch
    prediction = model(data)
    # Do fancy things here
    try:
        batch = iter_dataloader.next()  # Load the next batch
    except:
        iter_dataloader = iter(dataloader)  # if the iterator object reaches the end, reset the dataloader
        batch = iter_dataloader.next()

for _ in range(N):
    train_batch()  # This function is called multiple times
For each call of train_batch(), I get a batch from the dataset, train the model, and load the next batch. If there is no batch left, I reset the DataLoader object.
Now to my questions:
Is there a way to have a cleaner code? That is, I don’t want to use the iter and next method. Everytime I call it, it auto samples a batch from it and auto resets when it reaches the end. I hear about the Sampler, but I have not used it.
Extension of above: instead of a batch, can I have like K batches or 1/K of the dataset’s size I am using?
I want three sampling methods: (1) sampling batches from it in sequence (no shuffle), (2) sampling from it randomly (with and without replacement - shuffle), and (3) sampling from it such that the labels are equal. Is there a way to do this?
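For question 1, one common pattern is a small generator that restarts the DataLoader automatically (a sketch, not the only way):
def infinite_batches(dataloader):
    # yields batches forever, re-creating the iterator when it is exhausted
    while True:
        for batch in dataloader:
            yield batch

batches = infinite_batches(dataloader)
data, label = next(batches)  # each call returns the next batch, resetting as needed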
|
st30318
|
I have tried multiple approaches but am unable to resolve this issue. I have trained my TTS model, but after running the inference code I'm facing this issue. I tried the reshape and unsqueeze methods, but nothing worked. The model is a trained Tacotron 2 model for generating audio samples. However, my input size is already 3-dimensional, as shown in the screenshot attached. Please help me resolve this!
(embedding): Embedding(148, 512)
(encoder): Encoder(
  (convolutions): ModuleList(
    (0): Sequential(
      (0): ConvNorm(
        (conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,))
      )
      (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): Sequential(
      (0): ConvNorm(
        (conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,))
      )
      (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): Sequential(
      (0): ConvNorm(
        (conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,))
      )
      (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (lstm): LSTM(512, 256, batch_first=True, bidirectional=True)
Actual code:
waveglow_path = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_waveglow')
# waveglow = torch.load(waveglow_path, map_location=torch.device("cpu"))  # waveglow_pt is the path to the .pth file
# waveglow = torch.load(waveglow_path)
waveglow = waveglow_path.to('cuda')
waveglow.eval()
# waveglow.cpu().eval()
tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
model_path = '/content/TTS_Nigerian'
# tacotron2 = torch.load(model_path, map_location=torch.device("cpu"))
# tacotron2 = torch.load(model_path)
tacotron2.load_state_dict(torch.load(model_path)['state_dict'])
print(tacotron2.state_dict)
tacotron2 = tacotron2.to('cuda')
tacotron2.eval()
tk = Tokenizer()
text = "Now please click on the start button or say 'start' to proceed"
tk.fit_on_texts(text)
sequence = np.array(tk.texts_to_sequences([text]))[None, :]
# sequence = sequence.astype(int)
sequence = torch.from_numpy(sequence).to(device='cuda', dtype=torch.int64)
print(sequence.shape)
torch.squeeze(sequence, [1, 3])
with torch.no_grad():
    mel_output, mel_output_postnet, _, alignment = tacotron2.infer(sequence, input_lengths=20)
    audio = waveglow.infer(mel_output_postnet)
audio_numpy = audio[0].data.cpu().numpy()
sampling_rate = 22050
write("audio.wav", sampling_rate, audio_numpy)
from IPython.display import Audio
Audio(audio_numpy, rate=sampling_rate)
[image: screenshot of the error message]
|
st30319
|
nn.Conv1d layers expect a 3-dimensional input in the shape [batch_size, channels, seq_len], while you are trying to pass a 4-dimensional one (with an empty dimension).
Check the shape of the activation tensor and make sure it’s a valid tensor (containing values) with 3 dimensions.
|
st30320
|
I have checked the shape of the input tensor and it's showing a 3-dimensional shape/size. Also, could you please tell me specifically what changes are needed in the CNN architecture, or can I achieve this without making any changes to the trained model?
|
st30321
|
It shouldn’t be necessary to change the model architecture, since the input shape is wrong.
I’m not sure where you’ve checked the shape of the input tensor, but you might want to check the shape of all intermediate activation tensors in the forward method, since the error message gives the shape as [1, 0, 1, 512].
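A sketch for printing the inputs of every Conv1d before they are consumed, via forward pre-hooks (illustrative; model is assumed to be the Tacotron 2 module):
import torch

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv1d):
        module.register_forward_pre_hook(
            lambda mod, inp, name=name: print(name, [t.shape for t in inp]))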
|
st30322
|
I have a network, in which there are 3 architectures that share the same classifier.
class VGGBlock(nn.Module):
    def __init__(self, in_channels, out_channels, batch_norm=False):
        super(VGGBlock, self).__init__()
        conv2_params = {'kernel_size': (3, 3),
                        'stride': (1, 1),
                        'padding': 1
                        }
        noop = lambda x: x
        self._batch_norm = batch_norm
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, **conv2_params)
        self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop
        self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, **conv2_params)
        self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop
        self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))

    @property
    def batch_norm(self):
        return self._batch_norm

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = F.relu(x)
        x = self.max_pooling(x)
        return x

class VGG16(nn.Module):
    def __init__(self, input_size, num_classes=1, batch_norm=False):
        super(VGG16, self).__init__()
        self.in_channels, self.in_width, self.in_height = input_size
        self.block_1 = VGGBlock(self.in_channels, 64, batch_norm=batch_norm)
        self.block_2 = VGGBlock(64, 128, batch_norm=batch_norm)
        self.block_3 = VGGBlock(128, 256, batch_norm=batch_norm)
        self.block_4 = VGGBlock(256, 512, batch_norm=batch_norm)

    @property
    def input_size(self):
        return self.in_channels, self.in_width, self.in_height

    def forward(self, x):
        x = self.block_1(x)
        x = self.block_2(x)
        x = self.block_3(x)
        x = self.block_4(x)
        return x

class VGG16Classifier(nn.Module):
    def __init__(self, num_classes=1, classifier=None, batch_norm=False):
        super(VGG16Classifier, self).__init__()
        self._vgg_a = VGG16((1, 32, 32), batch_norm=True)
        self._vgg_b = VGG16((1, 32, 32), batch_norm=True)
        self._vgg_star = VGG16((1, 32, 32), batch_norm=True)
        self.classifier = classifier
        if (self.classifier is None):
            self.classifier = nn.Sequential(
                nn.Linear(2048, 2048),
                nn.ReLU(True),
                nn.Dropout(p=0.5),
                nn.Linear(2048, 512),
                nn.ReLU(True),
                nn.Dropout(p=0.5),
                nn.Linear(512, num_classes)
            )

    def forward(self, x1, x2, x3):
        op1 = self._vgg_a(x1)
        op1 = torch.flatten(op1, 1)
        op2 = self._vgg_b(x2)
        op2 = torch.flatten(op2, 1)
        op3 = self._vgg_star(x3)
        op3 = torch.flatten(op3, 1)
        x1 = self.classifier(op1)
        x2 = self.classifier(op2)
        x3 = self.classifier(op3)
        return x1, x2, x3

model1 = VGG16((1, 32, 32), batch_norm=True)
model2 = VGG16((1, 32, 32), batch_norm=True)
model_star = VGG16((1, 32, 32), batch_norm=True)
model_combo = VGG16Classifier(model1, model2, model_star)
I want to train model_combo using the following loss function:
class CombinedLoss(nn.Module):
    def __init__(self, loss_a, loss_b, loss_star, _lambda=1.0):
        super().__init__()
        self.loss_a = loss_a
        self.loss_b = loss_b
        self.loss_star = loss_star
        self.register_buffer('_lambda', torch.tensor(float(_lambda), dtype=torch.float32))

    def forward(self, y_hat, y):
        return (self.loss_a(y_hat[0], y[0]) +
                self.loss_b(y_hat[1], y[1]) +
                self.loss_combo(y_hat[2], y[2]) +
                self._lambda * torch.sum(model_star.weight - torch.pow(torch.cdist(model1.weight + model2.weight), 2)))
In the training function I pass loaders, which for simplicity are loaders_a, loaders_b, and again loaders_a, where loaders_a is related to the first 50% of MNIST and loaders_b to the latter 50%.
def train(net, loaders, optimizer, criterion, epochs=20, dev=None, save_param=False, model_name="valerio"):
    loaders_a, loaders_b, loaders_star = loaders
    # try:
    net = net.to(dev)
    # print(net)
    # summary(net, [(net.in_channels, net.in_width, net.in_height)]*2)
    criterion.to(dev)
    # Initialize history
    history_loss = {"train": [], "val": [], "test": []}
    history_accuracy_a = {"train": [], "val": [], "test": []}
    history_accuracy_b = {"train": [], "val": [], "test": []}
    history_accuracy_star = {"train": [], "val": [], "test": []}
    # Store the best val accuracy
    best_val_accuracy = 0
    # Process each epoch
    for epoch in range(epochs):
        # Initialize epoch variables
        sum_loss = {"train": 0, "val": 0, "test": 0}
        sum_accuracy_a = {"train": 0, "val": 0, "test": 0}
        sum_accuracy_b = {"train": 0, "val": 0, "test": 0}
        sum_accuracy_star = {"train": 0, "val": 0, "test": 0}
        progbar = None
        # Process each split
        for split in ["train", "val", "test"]:
            if split == "train":
                net.train()
                # widgets = [
                #     ' [', pb.Timer(), '] ',
                #     pb.Bar(),
                #     ' [', pb.ETA(), '] ', pb.Variable('ta', '[Train Acc: {formatted_value}]')]
                # progbar = pb.ProgressBar(max_value=len(loaders_a[split]), widgets=widgets, redirect_stdout=True)
            else:
                net.eval()
            # Process each batch
            for j, ((input_a, labels_a), (input_b, labels_b), (input_s, labels_s)) in enumerate(zip(loaders_a[split], loaders_b[split], loaders_star[split])):
                labels_a = labels_a.unsqueeze(1).float()
                labels_b = labels_b.unsqueeze(1).float()
                labels_s = labels_s.unsqueeze(1).float()
                input_a = input_a.to(dev)
                labels_a = labels_a.to(dev)
                input_b = input_b.to(dev)
                labels_b = labels_b.to(dev)
                input_s = input_s.to(dev)
                labels_s = labels_s.to(dev)
                # Reset gradients
                optimizer.zero_grad()
                # Compute output
                pred = net(input_a, input_b, input_s)
                loss = criterion(pred, [labels_a, labels_b, labels_s])
                # Update loss
                sum_loss[split] += loss.item()
                # Check parameter update
                if split == "train":
                    # Compute gradients
                    loss.backward()
                    # Optimize
                    optimizer.step()
                # Compute accuracy
                pred_labels = (pred[2] >= 0.0).long()    # Binarize predictions to 0 and 1
                pred_labels_a = (pred[0] >= 0.0).long()  # Binarize predictions to 0 and 1
                pred_labels_b = (pred[1] >= 0.0).long()  # Binarize predictions to 0 and 1
                batch_accuracy_star = (pred_labels == labels_s).sum().item() / len(labels_s)
                batch_accuracy_a = (pred_labels_a == labels_a).sum().item() / len(labels_a)
                batch_accuracy_b = (pred_labels_b == labels_b).sum().item() / len(labels_b)
                # Update accuracy
                sum_accuracy_star[split] += batch_accuracy_star
                sum_accuracy_a[split] += batch_accuracy_a
                sum_accuracy_b[split] += batch_accuracy_b
                # if (split == 'train'):
                #     progbar.update(j, ta=batch_accuracy)
                #     progbar.update(j, ta=batch_accuracy_a)
                #     progbar.update(j, ta=batch_accuracy_b)
            # if (progbar is not None):
            #     progbar.finish()
        # Compute epoch loss/accuracy
        # for split in ["train", "val", "test"]:
        #     epoch_loss = sum_loss[split] / (len(loaders_a[split]) + len(loaders_b[split]))
        #     epoch_accuracy_combo = {split: sum_accuracy_combo[split] / len(loaders[split]) for split in ["train", "val", "test"]}
        #     epoch_accuracy_a = sum_accuracy_a[split] / len(loaders_a[split])
        #     epoch_accuracy_b = sum_accuracy_b[split] / len(loaders_b[split])
        epoch_loss = sum_loss["train"] / (len(loaders_a["train"]) + len(loaders_b["train"]) + len(loaders_s["train"]))
        epoch_accuracy_a = sum_accuracy_a["train"] / len(loaders_a["train"])
        epoch_accuracy_b = sum_accuracy_b["train"] / len(loaders_b["train"])
        epoch_accuracy_star = sum_accuracy_star["train"] / len(loaders_s["train"])
        epoch_loss_val = sum_loss["val"] / (len(loaders_a["val"]) + len(loaders_b["val"]) + len(loaders_s["val"]))
        epoch_accuracy_a_val = sum_accuracy_a["val"] / len(loaders_a["val"])
        epoch_accuracy_b_val = sum_accuracy_b["val"] / len(loaders_b["val"])
        epoch_accuracy_star_val = sum_accuracy_star["val"] / len(loaders_s["val"])
        epoch_loss_test = sum_loss["test"] / (len(loaders_a["test"]) + len(loaders_b["test"]) + len(loaders_s["test"]))
        epoch_accuracy_a_test = sum_accuracy_a["test"] / len(loaders_a["test"])
        epoch_accuracy_b_test = sum_accuracy_b["test"] / len(loaders_b["test"])
        epoch_accuracy_star_test = sum_accuracy_star["test"] / len(loaders_s["test"])
        # Store params at the best validation accuracy
        if save_param and epoch_accuracy["val"] > best_val_accuracy:
            # torch.save(net.state_dict(), f"{net.__class__.__name__}_best_val.pth")
            torch.save(net.state_dict(), f"{model_name}_best_val.pth")
            best_val_accuracy = epoch_accuracy["val"]
        # Update history
        for split in ["train", "val", "test"]:
            history_loss[split].append(epoch_loss)
            history_accuracy_a[split].append(epoch_accuracy_a)
            history_accuracy_b[split].append(epoch_accuracy_b)
            history_accuracy_star[split].append(epoch_accuracy_star)
        # Print info
        print(f"Epoch {epoch + 1}:",
              f"Training Loss = {epoch_loss:.4f},",)
        print(f"Epoch {epoch + 1}:",
              f"Training Accuracy for A = {epoch_accuracy_a:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Training Accuracy for B = {epoch_accuracy_b:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Training Accuracy for star = {epoch_accuracy_star:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Val Loss = {epoch_loss_val:.4f},",)
        print(f"Epoch {epoch + 1}:",
              f"Val Accuracy for A = {epoch_accuracy_a_val:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Val Accuracy for B = {epoch_accuracy_b_val:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Val Accuracy for star = {epoch_accuracy_star_val:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Test Loss = {epoch_loss_test:.4f},",)
        print(f"Epoch {epoch + 1}:",
              f"Test Accuracy for A = {epoch_accuracy_a_test:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Test Accuracy for B = {epoch_accuracy_b_test:.4f},")
        print(f"Epoch {epoch + 1}:",
              f"Test Accuracy for star = {epoch_accuracy_star_test:.4f},")
        print("\n")
But I got this error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 1, 3, 3], but got 2-dimensional input of size [128, 2048] instead
|
st30323
|
Hi Bruno!
CasellaJr:
self.conv1 = nn.Conv2d(in_channels=in_channels,out_channels=out_channels , **conv2_params)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 1, 3, 3], but got 2-dimensional input of size [128, 2048] instead
You are passing a tensor of the wrong shape into the first Conv2d layer in one of your VGGBlocks, most likely because you are passing an input batch of 1-d vectors, rather than “3-d” images, to your model.
Conv2d requires a shape of [nBatch, in_channels, height, width] for its input (where nBatch can be arbitrary, in_channels matches the in_channels of the Conv2d, and height and width are at least as large as kernel_size). You need the in_channels dimension, even if in_channels = 1.
I’m guessing that you have a batch size of 128, and that your input images have been flattened into 1-d vectors of length 2048.
The following example illustrates a valid input and then reproduces your error with invalid input:
>>> import torch
>>> torch.__version__
'1.7.1'
>>> conv = torch.nn.Conv2d (1, 64, (3, 3))
>>> x_good = torch.randn (128, 1, 32, 64)
>>> x_bad = torch.randn (128, 32 * 64)
>>> conv (x_good).shape
torch.Size([128, 64, 30, 62])
>>> conv (x_bad).shape
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/user/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "/home/user/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 1, 3, 3], but got 2-dimensional input of size [128, 2048] instead
Best.
K. Frank
|
st30324
|
I edited the original post: I have a flatten in the forward, just before passing to the classifier. I think it is correct, but it does not work…
|
st30325
|
I would like to binarize the computational flow of a model (basically the model class) without weights, to send it over the network and instantiate the same network architecture remotely. I’ve looked into torch.onnx, but it seems this is not the right tool, as it also includes the weights and the input format needs to be fixed. Any suggestions on how this could be achieved?
|
st30326
|
Hi, I’m trying to train a basic classifier.
My model is:
class Model(nn.Module):
    def __init__(self, input_size=512, output_size=3, hidden_size=512):
        super(Model, self).__init__()
        self.cnn = CNN()
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, bidirectional=True)
        self.hidden_size = hidden_size
        self.linear = nn.Sequential(nn.Linear(hidden_size*2, hidden_size), nn.ReLU(), nn.Linear(hidden_size, output_size),
                                    nn.Dropout(0.2))
        torch.nn.init.xavier_normal_(self.linear[0].weight, gain=1.0)
        torch.nn.init.xavier_normal_(self.linear[2].weight, gain=1.0)

    def forward(self, x, indices):
        features = self.cnn(x)
        num_samp = torch.unique(indices)
        preds = []
        for i in num_samp:
            p, _ = self.lstm(features[torch.where(indices == i)[0]].unsqueeze(1),
                             (torch.zeros((2, 1, 512)).cuda(), torch.zeros((2, 1, 512)).cuda()))
            out = self.linear(p.squeeze(1))
            preds.append(out)
        preds = torch.stack(preds).squeeze(0)
        return preds
My dataset code is:
class AudioDataset(data.Dataset):
    def __init__(self, root, indices):
        super(AudioDataset, self).__init__()
        self.audio_files = [os.path.join(root, f) for f in os.listdir(root) if f.endswith('.wav') and
                            f.startswith(tuple([str(f) for f in indices]))]
        ann_fnames = [f for f in os.listdir(root) if f.endswith('.txt') and f.startswith(tuple([str(f) for f in indices]))]
        self.annotations = []
        for file_name in ann_fnames:
            ann = []
            if file_name.split('_')[0].isdigit():
                with open(os.path.join(root, file_name), 'r') as fid:
                    for line in fid:
                        ann.append([float(f) for f in line.split('\t')])
                self.annotations.append(np.array(ann))

    def __getitem__(self, index):
        audio_file = self.audio_files[index]
        ann = self.annotations[index]
        spectorgrams = split_according_to_cycle(audio_file, ann)
        spectograms, labels = split_spectrodrams(spectorgrams, ann[:, 2:])
        return spectograms, labels

    def __len__(self):
        return len(self.audio_files)

def split_according_to_cycle(audio_file, ann):
    waveform, sample_rate = torchaudio.load(audio_file)
    channel = 0
    transformed = torchaudio.transforms.Resample(sample_rate, 16000)(waveform[channel, :].view(1, -1))
    base_len = 5 * 16000
    spectorgrams = []
    for cycle in ann:
        rasp_cycle = transformed[:, floor(cycle[0]*16000):ceil(cycle[1]*16000)]
        while rasp_cycle.shape[1] < base_len:
            rasp_cycle = torch.cat([rasp_cycle, rasp_cycle], 1)
        spectorgrams.append(torchaudio.transforms.Spectrogram()(rasp_cycle))
    return spectorgrams

def split_spectrodrams(spectrograms, ann):
    split_spec = []
    tiled_lables = []
    for s, an in zip(spectrograms, ann):
        a = torch.stack([F.interpolate(a.unsqueeze(1), size=(64, 128), mode='bicubic') for a in
                         torch.split(s, 128, 2)])
        if an[0] > 0:
            tiled_lables.append(torch.tensor([1] * a.shape[0]))
        elif an[1] > 0:
            tiled_lables.append(torch.tensor([2] * a.shape[0]))
        else:
            tiled_lables.append(torch.tensor([0] * a.shape[0]))
        split_spec.append(a)
    return torch.cat(split_spec, dim=0).squeeze(1), torch.cat(tiled_lables)
My model and data loader initialization code is:
trainset = AudioDataset(root, train_indices)
validationset = AudioDataset(root, eval_indices)
trainloader = DataLoader(dataset=trainset,
                         batch_size=1,
                         shuffle=True,
                         collate_fn=collate_fn,  # use custom collate function here
                         pin_memory=True,
                         num_workers=0)
validationloader = DataLoader(dataset=validationset,
                              batch_size=1,
                              shuffle=False,
                              collate_fn=collate_fn,  # use custom collate function here
                              pin_memory=True,
                              num_workers=0)
logger.info(f'Building Model')
net = Model()
net = net.to("cuda" if torch.cuda.is_available() else "cpu")
optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
scheduler = utils.LinearWarmupScheduler(optimizer, 10, lr_sched.CosineAnnealingLR(optimizer, total_epoch))
criterion = nn.CrossEntropyLoss()
And my training code is:
net.train()
train_loss = 0
total = 0
correct = 0
optimizer.zero_grad()
for batch_idx, (inputs, targets, indices) in enumerate(trainloader):
    if use_cuda:
        inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    outputs = net(inputs, indices)
    loss = criterion(outputs.squeeze(1), targets)
    loss.backward()
    utils.clip_gradient(optimizer, 0.1)
    # if (batch_idx + 1) % 10 == 0:
    # every 10 iterations of batches of size 10
    optimizer.step()
    train_loss += loss.data
    _, predicted = torch.max(outputs.data, 1)
    total += targets.data.shape[0]
    correct += predicted.eq(targets.data).cpu().sum()
    utils.progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                       % (train_loss / (batch_idx + 1), 100. * correct / total, correct, total))
After a couple of batches my training procedure gets stuck. I don't get any errors; it just hangs. Does anyone have any idea why?
|
st30327
|
With num_workers=0, if I interrupt the procedure, the error I get is:
Epoch: 0
^CTraceback (most recent call last):
File "train.py", line 147, in <module>
train(epoch)
File "train.py", line 74, in train
for batch_idx, (inputs, targets,indices) in enumerate(trainloader):
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gal/mixture-of-experts/datasets/ichbi.py", line 39, in __getitem__
spectorgrams = split_according_to_cycle(audio_file, ann)
File "/home/gal/mixture-of-experts/datasets/ichbi.py", line 55, in split_according_to_cycle
rasp_cycle = torch.cat([rasp_cycle, rasp_cycle] ,1)
KeyboardInterrupt
If num_workers=1, I get:
^CTraceback (most recent call last):
File "train.py", line 147, in <module>
train(epoch)
File "train.py", line 74, in train
for batch_idx, (inputs, targets,indices) in enumerate(trainloader):
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 841, in _next_data
idx, data = self._get_data()
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 798, in _get_data
success, data = self._try_get_data()
File "/home/gal/.conda/envs/audio/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 761, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/gal/.conda/envs/audio/lib/python3.6/queue.py", line 173, in get
self.not_empty.wait(remaining)
File "/home/gal/.conda/envs/audio/lib/python3.6/threading.py", line 299, in wait
gotit = waiter.acquire(True, timeout)
KeyboardInterrupt
My guess is that there is a data lock or something. How can I fix this issue?
|
st30328
|
Hi @galsk87, it seems like I have a similar problem, explained here 43 in detail.
I want to know whether you have solved your problem. If yes, could you please share your solution?
|
st30329
|
Hi all,
I am trying to reshape and duplicate a tensor based on a window that is applied along one of its dimensions.
Here is a code example:
x = torch.rand([1, 10, 60, 256]) # [batch, subsamples, timepoints, channels]
window_size = 3
st = torch.arange(0, x.shape[1] - window_size + 1, 1)
nd = st + window_size
t_list = list()
for s, n in zip(st, nd):
    t_list.append(x[:, s:n, :, :].clone())
new_x = torch.stack(t_list).squeeze(1)
new_x.shape  # torch.Size([8, 3, 60, 256])
I was wondering if there is a tensor function that does this, or if there is a more elegant manner to do so, perhaps without using a for loop.
Thanks!
|
st30330
|
Solved by ptrblck in post #2
You could use tensor.unfold for it:
out = x.unfold(1, window_size, 1)
out = out.squeeze(0).permute(0, 3, 1, 2)
print(out.shape) # torch.Size([8, 3, 60, 256])
print((out == new_x).all()) # tensor(True)
|
st30331
|
You could use tensor.unfold for it:
out = x.unfold(1, window_size, 1)
out = out.squeeze(0).permute(0, 3, 1, 2)
print(out.shape) # torch.Size([8, 3, 60, 256])
print((out == new_x).all()) # tensor(True)
|
st30332
|
I am getting a batch of strings from the dataloader.
The way I get it is by setting batch_size=1 and creating buckets in Dataset.__init__(), so every batch is only "one file" for the dataloader, but it is 8 strings in a list.
When using CPU, the dataloader puts every string in a tuple like this: ("some string",).
So we get a list of 8 tuples.
When using GPU, the dataloader puts every string in a list like this: ["some string"].
So we get a list of 8 lists.
Is that something that is known to happen?
|
st30333
|
With this code:
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        self.data = 100 * [['a', 'b']]

    def __getitem__(self, index):
        x = self.data[index]
        return x

    def __len__(self):
        return len(self.data)

dataset = MyDataset()
loader = DataLoader(
    dataset,
    batch_size=5,
    num_workers=2,
    shuffle=True
)
for data in loader:
print(data)
I get
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
[('a', 'a', 'a', 'a', 'a'), ('b', 'b', 'b', 'b', 'b')]
Any clue @Isaac_Kargar ?
|
st30334
|
Hi everyone,
Is it possible to split a GPU in half and apply DataParallel? It seems like my model doesn't use the full GPU capacity, and I've read that increasing the batch size (which would give more work to the GPU) changes the learning dynamics and could make results worse.
Thanks
|
st30335
|
To my knowledge, the effect would be the same. DataParallel makes a copy of the model on each GPU and merges the results; the gradients are also passed across all the GPUs and collectively applied, so all the copies update at once. So would there be a difference? It shouldn't.
So how do you use your GPU more efficiently? Run multiple experiments at once!
|
st30336
|
I am working on a recommendation algorithm and I need user_embeddings and item_embeddings to calculate the score. In the algorithm, a user_encoder produces the user_embeddings and an item_encoder produces the item_embeddings.
The item_embeddings should be updated once per epoch, and the user_embeddings in every batch within an epoch. I am wondering how to implement this. The relevant part of the code is as follows:
user_model = UserModel()
item_model = ItemModel()
criterion = nn.BCEWithLogitsLoss().to(device)
optimizer = optim.Adam(list(user_model.parameters()) + list(item_model.parameters()), lr=LEARNING_RATE)

for epoch in range(num_epochs):
    user_model.train()
    item_model.train()
    item_embeddings = item_model(input)
    # the size of item_embeddings = [ITEMS NUM, item_embed_dim]
    for idx, batch in enumerate(train_dataloader):
        data = batch["data"].to(device)
        label = batch["label"].to(device)
        optimizer.zero_grad()
        user_embeddings = user_model(data, item_embeddings)
        # the size of user_embeddings = [batch size, item_embed_dim]
        score = torch.matmul(user_embeddings, item_embeddings.T)
        loss = criterion(score, label)
        loss.backward()
        optimizer.step()
I met RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
Any ideas to fix it? Thanks in advance.
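For context, this error usually means the second loss.backward() is trying to go back through the graph that produced item_embeddings, which was already freed after the first backward. One possible sketch (an assumption about the intended semantics, not a confirmed fix) is to recompute the item embeddings inside the batch loop so every backward sees a fresh graph:
for epoch in range(num_epochs):
    user_model.train()
    item_model.train()
    for idx, batch in enumerate(train_dataloader):
        data = batch["data"].to(device)
        label = batch["label"].to(device)
        optimizer.zero_grad()
        item_embeddings = item_model(input)  # fresh graph for each batch
        user_embeddings = user_model(data, item_embeddings)
        score = torch.matmul(user_embeddings, item_embeddings.T)
        loss = criterion(score, label)
        loss.backward()
        optimizer.step()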
|
st30337
|
Let's say I have an array like [1, 2, 3], expand it to [1, 2, 3, 3, 3], and process it.
How can I take the average over the repeated values and collapse the result back to the original sequence length?
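For reference, a minimal sketch of one way to do this, assuming you keep track of which original index each expanded position came from:
import torch

processed = torch.tensor([10., 20., 30., 31., 32.])  # values after processing the expanded sequence
src = torch.tensor([0, 1, 2, 2, 2])                  # original index of each expanded position
n = 3                                                # original sequence length

sums = torch.zeros(n).index_add_(0, src, processed)  # per-index sums
counts = torch.bincount(src, minlength=n).float()    # number of copies per index
collapsed = sums / counts                            # tensor([10., 20., 31.])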
|
st30338
|
The error occurs in torch.nn.functional.instance_norm(),
with mean, variance, weight, and bias all having the same size.
I am not able to figure out what the problem is.
|
st30339
|
Could you post a minimal code snippet to reproduce this issue (including the used shapes) so that we could have a look at it, please?
|
st30340
|
Hi ~
I found that running_mean/var's shape is C, not N * C.
According to [1607.08022] Instance Normalization: The Missing Ingredient for Fast Stylization 3, the shape of the mean/var is N * C, but torch.nn.InstanceNorm does
at::alias(running_mean).copy_(running_mean_.view({ b, c }).mean(0, false));
I don't know what the mean(0, false) operation is used for.
So the mean, variance, weight, and bias all have the same size, C.
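A quick check (a minimal sketch) confirms this: the per-instance means are averaged over the batch dimension, which is exactly what the mean(0, false) call does, so the running statistics end up per channel:
import torch
import torch.nn as nn

norm = nn.InstanceNorm2d(4, track_running_stats=True)
x = torch.randn(8, 4, 16, 16)   # N=8, C=4
norm(x)                         # train mode updates the running stats
print(norm.running_mean.shape)  # torch.Size([4]) -> shape C, not N*C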
|
st30341
|
I am using transfer learning from MobileNetV3 Small to predict 5 different points on an image. I am doing this as a regression task.
For both models:
Setting the last 50 layers trainable and adding the same fully connected layers to the end.
Learning rate 3e-2
Batch size 32
Adam optimizer with the same betas
100 epochs
The inputs consist of RGB unscaled images
Pytorch
Model
def _init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

def get_mob_v3_small():
    model = torchvision.models.mobilenet_v3_small(pretrained=True)
    children_list = get_children(model)
    for c in children_list[:-50]:
        for p in c.parameters():
            p.requires_grad = False
    return model

class TransferMobileNetV3_v2(nn.Module):
    def __init__(self, num_keypoints: int = 5):
        super(TransferMobileNetV3_v2, self).__init__()
        self.classifier_neurons = num_keypoints * 2
        self.base_model = get_mob_v3_small()
        self.base_model.classifier = nn.Sequential(
            nn.Linear(in_features=1024, out_features=1024),
            nn.ReLU(),
            nn.Linear(in_features=1024, out_features=512),
            nn.ReLU(),
            nn.Linear(in_features=512, out_features=self.classifier_neurons)
        )
        self.base_model.apply(_init_weights)

    def forward(self, x):
        out = self.base_model(x)
        return out
Training Script
def train(net, trainloader, testloader, train_loss_fn, optimizer, scaler, args):
    len_dataloader = len(trainloader)
    for epoch in range(1, args.epochs + 1):
        net.train()
        for batch_idx, sample in enumerate(trainloader):
            inputs, labels = sample
            inputs, labels = inputs.to(args.device), labels.to(args.device)
            optimizer.zero_grad()
            with torch.cuda.amp.autocast(args.use_amp):
                prediction = net(inputs)
                loss = train_loss_fn(prediction, labels)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()

def main():
    args = make_args_parser()
    args.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    seed = args.seed
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(seed)
    loss_fn = nn.MSELoss()
    optimizer = optim.Adam(net.parameters(), lr=3e-2, betas=(0.9, 0.999))
    scaler = torch.cuda.amp.GradScaler(enabled=args.use_amp)
    train(net, train_loader, test_loader, loss_fn, optimizer, scaler, args)
Tensorflow
Model
base_model = tf.keras.applications.MobileNetV3Small(weights='imagenet', input_shape=(224, 224, 3))
x_in = base_model.layers[-6].output
x = Dense(units=1024, activation="relu")(x_in)
x = Dense(units=512, activation="relu")(x)
x = Dense(units=10, activation="linear")(x)
model = Model(inputs=base_model.input, outputs=x)
for layer in model.layers[:-50]:
    layer.trainable = False
Training Script
model.compile(loss = "mse",
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-2))
history = model.fit(input_numpy, output_numpy,
verbose=1,
batch_size=32, epochs=100,validation_split = 0.2)
Results
The PyTorch model predicts one single point around the center for all 5 different points.
The Tensorflow model predicts the points quite well and are quite accurate.
The loss in the Pytorch model is much higher than the Tensorflow model.
Please do let me know what is going wrong as I am trying my best to shift to PyTorch for this work and I need this model to give me similar/identical results.
Note: I also noticed that the MobileNetV3 Small model seems to be different in PyTorch and different in Tensorflow. I do not know if am interpreting it wrong, but I’m putting it here just in case.
|
st30342
|
Hi, I have a 4x4 tensor, for example [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] and I also have a mapping dict, like {4: 5, 7: 10, 13: 10, 15: 1, 16: 1}. Is there any way to map the value in the tensor using this dictionary efficiently?
|
st30343
|
I do not know how efficient it is but it is readable by using the apply function. If you have the tensor t and the dictionary d, then simply write t.apply_(d.get).
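One caveat: d.get returns None for keys missing from the dict, which apply_ cannot store, and apply_ only works on CPU tensors. A safer sketch that leaves unmapped values unchanged:
import torch

t = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
d = {4: 5, 7: 10, 13: 10, 15: 1, 16: 1}

t.apply_(lambda v: d.get(v, v))  # in-place, CPU-only; falls back to the original value
print(t)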
|
st30344
|
Hi, I'm new to PyTorch.
If I call loss.backward for every 0.1% of my training set but optimizer.step only for every 1% of my training set, what could be the problem?
Due to a characteristic of my training data, I wrote my code as below:
BATCH_SIZE = parameter from user
for epoch in range(1, 10001):
    i = 0
    (..........)
    for ..... in training_generator:
        (.......)
        loss.backward()
        i += 1
        if i % BATCH_SIZE == 0:
            optimizer.step()
            optimizer.zero_grad()
As far as I know, loss.backward accumulates gradients by summation. Is this kind of gradient accumulation okay? Is there any way to divide the gradients by BATCH_SIZE?
In my case, I use the Adam optimizer.
|
st30345
|
Solved by ptrblck in post #2
Based on your description I understand that you are calling optimizer.step() more often (1 out of 100 steps) and calculate the gradients only 1 out of 1000 steps.
In this case, the general problem could be that the optimizer updates the parameters with “old” gradients, which might not work.
To cha…
|
st30346
|
Based on your description I understand that you are calling optimizer.step() more often (1 out of 100 steps) and calculate the gradients only 1 out of 1000 steps.
In this case, the general problem could be that the optimizer updates the parameters with “old” gradients, which might not work.
To change the gradients, you could either scale the loss itself (divide with a constant) or use hooks to manipulate the .grad attributes of all parameters.
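A minimal sketch of the loss-scaling option, assuming BATCH_SIZE backward calls per optimizer step and the model/criterion/generator names from the question:
for i, (data, target) in enumerate(training_generator, start=1):
    loss = criterion(model(data), target)
    (loss / BATCH_SIZE).backward()  # accumulated grads become the mean instead of the sum
    if i % BATCH_SIZE == 0:
        optimizer.step()
        optimizer.zero_grad()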
|
st30347
|
Sorry for my English; I wrote a sentence that confused you. Actually, I meant to say calling loss.backward more often than optimizer.step.
Anyhow, you gave the answer I wanted to know. Thanks!
|
st30348
|
Hi, I have a question about using multiple GPU devices.
I set my model to use multiple device as below.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
model = model().to(device)
model = nn.DataParallel(model.to(device))
(...)
def train():
    (....)
    for epoch in range(start_epoch, 10001):
        (....)
        for ..... in training_generator:
            (.......)
            output = model(user_input)
My model is a 3D-CNN and my 'user_input' has dimensions (A, B, C, D, E). Here, if I set A=4, does my model run on 4 devices in parallel? If I use 4 devices, does the number A have to be a multiple of 4 for efficient calculation?
Thanks.
|
st30349
|
Solved by ptrblck in post #4
No, each device will get a batch containing 2 samples.
DDP should be faster as it reduces the communication overhead in nn.DataParallel.
The details of the latter (including the scatter/gather calls) are described in this blog post.
|
st30350
|
Yes, nn.DataParallel would split the input batch in dim0 and transfer each chunk to the corresponding device.
Hyeonuk_Woo:
If I use 4 devices, does the number A have to be a multiple of 4 for efficient calculation?
Also yes, but for the most efficient approach you should use DistributedDataParallel with a single process per GPU.
|
st30351
|
First of all, thank you so much!
I have one more basic question.
If I use 4 devices and dim0 of my input data is 8, does each GPU handle two samples simultaneously or sequentially?
Why is a single process per GPU recommended?
+) Does the term 'single process' mean that the batch size of the data assigned to a single GPU is 1?
|
st30352
|
Hyeonuk_Woo:
If I use 4 devices and dim0 of my input data is 8, does each GPU handle two samples simultaneously or sequentially?
+) Does the term 'single process' mean that the batch size of the data assigned to a single GPU is 1?
No, each device will get a batch containing 2 samples.
Hyeonuk_Woo:
Why is a single process per GPU recommended?
DDP should be faster as it reduces the communication overhead in nn.DataParallel.
The details of the latter (including the scatter/gather calls) are described in this blog post 1.
|
st30353
|
Really confused about why the __getitem__() method is not being called here:
class MaleFacesDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        self.landmarks_frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)

    def __getitem__(self, idx):
        print("GET!")
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_name = os.path.join(self.root_dir,
                                self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        sample = Image.fromarray(np.uint8(image)).convert('RGB')
        if self.transform:
            sample = self.transform(sample)
        return sample
Have tried rewriting the whole class a few times but still the same problem - any input is appreciated!
|
st30354
|
Yeah, here is how I am instantiating it:
male_dataset = MaleFacesDataset(csv_file='./attribute_dir',
                                root_dir='./img_align_celeba', transform=transform)
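For reference, instantiating a Dataset by itself never triggers __getitem__; it only runs when the dataset is indexed or iterated through a DataLoader. A minimal check, assuming the dataset above was constructed successfully and the transform ends in ToTensor():
sample = male_dataset[0]  # should print "GET!"

from torch.utils.data import DataLoader
loader = DataLoader(male_dataset, batch_size=4)
batch = next(iter(loader))  # also calls __getitem__ once per index in the batch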
|
st30355
|
Hey,
Should we expect any performance changes from PyTorch 1.8 with CUDA 11.1 in comparison to PyTorch 1.7.1 with CUDA 11?
And if not, what version of pytorch should give the most out of the new cards?
Thanks!
|
st30356
|
Solved by ptrblck in post #2
You shouldn’t expect a lot of changes for CNNs, as both versions ship with cudnn8.0.5.
Due to this issue we couldn’t use the cudnn8.1 release and are working on it. For the best performance on your 3070 you could build from source using the latest cudnn release.
|
st30357
|
You shouldn’t expect a lot of changes for CNNs, as both versions ship with cudnn8.0.5.
Due to this issue 7 we couldn’t use the cudnn8.1 release and are working on it. For the best performance on your 3070 you could build from source using the latest cudnn release.
|
st30358
|
Hey @ptrblck,
Now that 1.8.1 is out, is the answer different?
Does it use cudnn 8.1?
|
st30359
|
No, it’s still using cudnn8.0.5, as 1.8.1 was a hot-fix release mainly for Kineto and we’ll revisit the issue with the now released cudnn8.2 version.
|
st30360
|
@ptrblck Hey,
Now that PyTorch 1.9 is out, does it come with cudnn8.2? Will it use the RTX 3070 at full speed?
|
st30361
|
The binaries will not use it as described here 5, so you would have to build it from source.
|
st30362
|
In TensorFlow we can do the operations below with tf.gather_nd, but how can we do this in PyTorch?
Simple indexing into a matrix:
indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
Slice indexing into a matrix:
indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
Indexing into a 3-tensor:
indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]
indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]
indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
[['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
Batched indexing into a matrix:
indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
|
st30363
|
Yes, there are equivalent operations in pytorch. Try something like the following:
Simple indexing into matrix:
x = torch.randn(2, 2)
indices = torch.ByteTensor([[0, 0],[1,1]])
x.masked_select(indices)
Slice indexing into matrix:
x = torch.randn(2, 2)
indices = torch.LongTensor([1, 0])
x.index_select(0, indices)
Indexing into a 3-tensor:
x = torch.randn(1, 4, 2)
x[:,:,1]
Batched indexing into a matrix:
x = torch.randn(2, 2)
indices = torch.LongTensor([[0, 0],[1,1]])
[x[i] for i in indices]
|
st30364
|
Are you sure these snippets are correct? Just comparing to TensorFlow’s example 23:
indices = torch.ByteTensor([[0, 0], [1, 1]])
params = torch.Tensor([[1, 2], [3, 4]])
params.masked_select(indices)
Tensorflow documentation says the output should be tensor([ 1., 4.]), but your code gives tensor([ 3., 4.])
|
st30365
|
That one should be
indices = torch.tensor([[ 0, 1],[ 0, 1]])
params = torch.tensor([[1, 2], [3, 4]])
params[indices.tolist()]
|
st30366
|
It should be
rows_list = [0, 1]
col_list = [0, 1]
params[rows_list, col_list]
|
st30367
|
Aren't you missing something in the "Batched indexing into a matrix" block at the end? If you do it that way, you have to loop over all indices along dim=0 in your case. My question would be: is there a fast way in PyTorch to do gather_nd where I have a 3D matrix that stores all the indices and a 3D matrix that holds all the values, and I would like to create a new 3D matrix where each value is gathered according to the indices from the index matrix?
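Not a full gather_nd, but for the common case where the index tensor picks positions along a single dimension, torch.gather does this without a Python loop. A minimal sketch with assumed shapes:
import torch

values = torch.randn(2, 3, 4)             # [B, N, M]
indices = torch.randint(0, 4, (2, 3, 4))  # same shape as values, indexing along dim 2
out = torch.gather(values, 2, indices)    # out[b, n, m] = values[b, n, indices[b, n, m]]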
|