st83068 | Dear all,
I just tested the Facebook code for recommendation. The DLRM model gets good results (Kaggle ads challenge) in predicting whether an ad will be clicked, given the user info.
But I don’t understand how an ad recommendation would come out of this. Does one have to build a generative network to find an ad that the customer is likely to click?
Indeed, DLRM is a discriminative network, not a generative one.
Thanks, |
st83069 | I’m implementing a backward HMM algorithm, using this link as a reference. The link contains the results of the numerical example used (I am attempting to implement it and compare my generated results to it). On page 3, section 2 (Backward probability), there is a table containing the calculated results.
Here is my code:
import numpy as np
import torch

# Initial transition matrix as shown on page 2 of the above link
A = np.array([[0.6, 0.4], [0.3, 0.7]])
A = torch.from_numpy(A)
# Initial state probability (page 2)
pi = np.array([0.8, 0.2])
pi = torch.from_numpy(pi)
# Output probabilities (page 2)
emission_matrix = np.array([[0.3, 0.4, 0.3, 0.3], [0.4, 0.3, 0.3, 0.3]])
emission_matrix = torch.from_numpy(emission_matrix)
# Initialize an empty 2x4 matrix (dimensions of the emission matrix)
backward = torch.zeros(emission_matrix.shape, dtype=torch.float64)

# Backward algorithm
def _backward(emission_matrix):
    # Initialization: A(i, j) * B(T, i) * B(Ot+1, j), where B(Ot+1, j) = 1
    backward[:, -1] = torch.matmul(A, emission_matrix[:, -1])
    # Reverse the emission matrix so as to start from the last column
    rev_emission_mat = torch.flip(emission_matrix[:, :-1], [1])
    # Transpose the reversed emission matrix so that each iterable in the for
    # loop is the observation sequence probability
    T_rev_emission_mat = torch.transpose(rev_emission_mat, 1, 0)
    # Assign a reverse index enumeration so that iteration over the
    # emission matrix runs from time T down to 0, rather than the opposite
    zipped_cols = list(zip(range(len(T_rev_emission_mat)-1, -1, -1), T_rev_emission_mat))
    for i, obs_prob in zipped_cols:
        # Induction: Σ A(i, j) * B(j)(Ot+1) * β(t+1, j)
        if i != 0:
            backward[:, i] = torch.matmul(A * obs_prob, backward[:, i+1])
    # Termination: Σ π(i) * bi * β(1, i)
    backward[:, 0] = torch.matmul(pi * obs_prob, backward[:, 1])

# run the backward algorithm
_backward(emission_matrix)
# check results; backward is the all-zero matrix initialized above
print(backward)
>>> tensor([[0.0102, 0.0324, 0.0900, 0.3000],
            [0.0102, 0.0297, 0.0900, 0.3000]], dtype=torch.float64)
As you can see, the 0-th index does not match the result on page 3 of the previously linked page. What did I do wrong? If there is anything I can clarify, please let me know. Thanks in advance! |
st83070 | Leaving this for reference. Someone solved the question on Stack Overflow with a simple one-liner (https://stackoverflow.com/questions/57466547/backward-algorithm-hidden-markov-model-0th-index-termination-step-yields-wron/57482129#57482129):
backward[:, 0] = pi * obs_prob * backward[:, 1]
I had the wrong matrix shape, in the termination step part of the code (the last line in the method). Cheers! |
st83071 | I found an example of “wrap padding” in
https://github.com/pytorch/pytorch/issues/3858
and I modified the code a little so that some dimensions get “wrap padding” and some are padded with zeros.
def pad_circular_nd2(x: torch.Tensor, pad: int, dim, dim0) -> torch.Tensor:
    """
    :param x: shape [H, W]
    :param pad: int >= 0
    :param dim: the dimensions over which the tensor is circularly padded
    :param dim0: the dimensions over which the tensor is padded with zeros
    :return: the padded tensor
    """
    if isinstance(dim, int):
        dim = [dim]
    if isinstance(dim0, int):
        dim0 = [dim0]

    for d in dim:
        if d >= len(x.shape):
            raise IndexError(f"dim {d} out of range")
        idx = tuple(slice(0, None if s != d else pad, 1) for s in range(len(x.shape)))
        x = torch.cat([x, x[idx]], dim=d)
        idx = tuple(slice(None if s != d else -2 * pad, None if s != d else -pad, 1) for s in range(len(x.shape)))
        x = torch.cat([x[idx], x], dim=d)

    x0 = torch.zeros(x.size()).double().cuda()
    for d in dim0:
        if d >= len(x.shape):
            raise IndexError(f"dim {d} out of range")
        idx = tuple(slice(0, None if s != d else pad, 1) for s in range(len(x.shape)))
        x = torch.cat([x, x0[idx]], dim=d)
        idx = tuple(slice(None if s != d else -2 * pad, None if s != d else -pad, 1) for s in range(len(x.shape)))
        x = torch.cat([x0[idx], x], dim=d)

    return x.cuda()
However, this “wrap padding” seems to run on the CPU even though I expect it to run on the GPU, and it makes training much slower. Is there any way to fix it? |
st83072 | It should run on the GPU, if you pass x as a CUDATensor.
How did you check this operation runs on the CPU?
There also seems to be a mode='circular' now in F.pad. |
st83073 | Hi ptrblck,
CPU usage is about 70% on an i9 CPU, and without this “wrap padding” it is close to 0. So I guess I did something wrong and the code passes data between CPU and GPU. |
st83074 | I want to only store the layers block and the computation graph.
To start off I want it to work for Sequential models. I want to be able to store the Sequential model architecture without having to store the parameters. Is that possible?
After that, is it generalizable for arbitrary models? I only want to store the shape and types of the pytorch graph. I don’t need the parameters. |
st83075 | I think the easiest way would be to store the model class definition, if you don’t want the parameters. |
st83076 | hmmm what does that mean? how does one do that?
I’ve been just saving the string of the Sequential model for the moment…but then I need to parse the string to extract things from it or even worse use an eval function… |
st83077 | I am training a model on the GPU (80% load), but my CPU is at 100% load all the time. I want to know whether there exists a command to limit the CPU usage (like restricting the core count, percentage, …).
st83078 | Reduce the num_workers used when you create your dataloader.
num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader |
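For example, a minimal sketch (my_dataset is a placeholder for your dataset):
from torch.utils.data import DataLoader

# fewer worker processes mean lower CPU load; 0 loads the data in the main process
loader = DataLoader(my_dataset, batch_size=64, shuffle=True, num_workers=2)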
st83079 | I am trying to do something like this (simplified version of my code):
for x in range(1, 1000):
    output = model(data)
    # Change data
    loss = loss + F.nll_loss(output, target)
    # Calculate gradients of model in backward pass
    loss.backward()
    # Collect gradients
    final_result = final_result + myvar.grad.data
The problem is that a significant number of temporary variables are causing me to run out of GPU memory. Hence, is this next piece of code logically equivalent?
for x in range(1, 1000):
    output = model(data)
    loss = F.nll_loss(output, target)
    # Change data
    # Calculate gradients of model in backward pass
    loss.backward(retain_graph=True)
    # Collect gradients
    final_result = final_result + myvar.grad.data
    del loss
    del other_variables
If I understand how .backward and .grad.data work correctly, then it should be equivalent. However, this is not the case for me and I’m currently looking for the bug. |
st83080 | The addition of final_result = final_result + myvar.grad.data won’t work, if you don’t zero out the gradients in each iteration.
Currently you are accumulating:
final_result = (grad0) + (grad0+grad1) + (grad0+grad1+grad2) + ...
since loss.backward will already accumulate the gradients.
Another approach would be to let loss.backward() accumulate the gradients automatically and just assign final_result after the loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

# 1
torch.manual_seed(2809)
model = nn.Linear(1, 2, bias=False)
loss = 0.
for _ in range(1000):
    x = torch.randn(1, 1)
    target = torch.randint(0, 2, (1,))
    output = model(x)
    loss = loss + F.nll_loss(output, target)
loss.backward()
final_grad1 = model.weight.grad

# 2
torch.manual_seed(2809)
model = nn.Linear(1, 2, bias=False)
for _ in range(1000):
    x = torch.randn(1, 1)
    target = torch.randint(0, 2, (1,))
    output = model(x)
    loss = F.nll_loss(output, target)
    loss.backward()
final_grad2 = model.weight.grad

print(torch.allclose(final_grad1, final_grad2))
> True |
st83081 | Is it still true that loss.backward was accumulating the gradients even though I did “del loss” in each loop? |
st83082 | The deletion of the loss shouldn’t make a difference, as the gradients were already accumulated. |
st83083 | Sorry, One thing I forgot to add: I am doing model.zero_grad() after each iteration. Does this zero out the myvar.grad.data? |
st83084 | model.zero_grad will zero out the gradients of all internal parameters.
If you’ve registered self.myvar = nn.Parameter(...), it should be also zeroed out. |
st83085 | What I know about the problem
Adam is stateful and requires a memory space proportional to the parameters in your model.
Model parameters must be loaded onto device 0
OOM occurs at state['exp_avg_sq'] = torch.zeros_like(p.data), which seems to be the last allocation of memory in the optimizer source code.
Neither manual allocation nor the use of nn.DataParallel prevents the OOM error.
I moved the loss computation into the forward function to reduce memory during training.
Below are my training and forward methods
def train(dataloader, vocabulary_dict, epoch):
    model = ViralClassification(len(vocabulary_dict), 0.5, 6588)  #, device_ids=[0,1], output_device=1)
    model.to('cuda:0')
    model.print_gpu_memory_info()
    optimizer = optim.Adam(model.parameters(), lr=0.01)
    cudnn.benchmark = True
    cudnn.enabled = True
    for i in range(epoch):
        running_loss = 0.0
        for (i, (label, sequence)) in enumerate(dataloader):
            loss = model(sequence.to('cuda:0'), label.to('cuda:1'))  #.to('cuda:0'))
            running_loss += loss.item()
            loss.backward()
            del loss
            print('gpu_memory_one in training_loop')
            model.print_gpu_memory_info()
            torch.cuda.empty_cache()
            print('gpu_memory_two in training_loop')
            model.print_gpu_memory_info()
            print('bout to step')
            optimizer.step()
            optimizer.zero_grad()
            print('bottom of training loop')

def forward(self, inputs, labels):
    inputs = self.embedding(inputs)  #.to('cuda:0')
    #print('embedded completed')
    (inputs, hidden_state) = self.bilstm_layer(inputs)
    #print('bilstm completed')
    inputs.to('cuda:1')
    self.bilstm_layer.flatten_parameters()
    torch.cuda.synchronize('cuda:0')
    torch.cuda.synchronize('cuda:1')
    inputs = self.attention_layer(inputs)
    #print('attention completed')
    inputs = inputs.view(-1, self.row*2*self.lstm_dim)  #.to('cuda:1')
    #print('view transformation completed')
    inputs = self.mlp_one(inputs, self.relu_one)
    #print('mlp one completed')
    inputs = self.mlp_two(inputs, self.relu_two)
    #print('mlp two completed')
    torch.cuda.synchronize('cuda:0')
    torch.cuda.synchronize('cuda:1')
    logits = self._classify(inputs)
    #print('logits completed')
    torch.cuda.empty_cache()
    torch.cuda.synchronize('cuda:0')
    torch.cuda.synchronize('cuda:1')
    #self.print_gpu_memory_info()
    loss = self.criterion(logits, labels)
    return loss
My OOM occurs when I perform optimizer.step.
My problem is that before optimizer.step my memory on device 1 has plenty of open room but since the optimizer performs it calculations on device 0, the OOM occurs.
Is this a problem that checkpointing may be able to solve?
Is it possible to change the location of the optimizer? |
st83086 | I’m not sure how your model works.
Your forward method seems to use model sharding, i.e. different parts of the model are located on different devices.
However, you are never transferring the inputs to GPU1, since you are not reassigning it in this line of code:
inputs.to('cuda:1')
Also, based on your code you are only calling model.to('cuda:0'), which would mean everything runs on GPU0.
However, it seems that at least your labels are on GPU1, which should create an error in your loss calculation.
Are code parts missing or am I missing something? |
st83087 | Don’t think you missed anything. This has given me a way forward towards solving my problem. Thanks a lot for the help.
So the reassignment of inputs acts like a pointer to a device memory space containing the tensors?
model.to('cuda:0') will override explicit statements within the model itself?
Also, the reason I am passing my labels to device 1 is that in theory (if I had reassigned) the logits should be output on device 1. I did this because I thought it would reduce memory use on device 0, allowing more room for the optimizer.
The model is for genomic sequence data and can be found at https://www.biorxiv.org/content/10.1101/694851v1 if you’re interested.
E: I think I pinpointed the error: when the optimizer attempts to step over the embedding layer, it exceeds memory because of the size of my vocabulary. When I move it to device 1, I get an OOM on device 1 while the optimizer params stay on device 0. If I move the embedding to the CPU, the model trains. |
st83088 | djseivad:
So the reassignment of inputs acts like a pointer to a device memory space containing the tensors?
Tensors return a copy on the specified device, while model.to() works recursively on all submodules.
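I.e., since tensor.to() is not an in-place operation, the result has to be reassigned:
inputs = inputs.to('cuda:1')  # calling inputs.to('cuda:1') on its own does not move the tensor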
How large is your embedding layer? |
st83089 | Vocabulary dimension is 8,390,658.
Embedding dimension is 100.
Embedding is supposed to represent all 12 char strings and their reverse complement with a nucleotide alphabet. |
st83090 | I am trying to plot a ROC curve for multiclass classification. I followed https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html.
I used the code below to calculate y_test and y_score:
def test_epoch(net, test_loader):
    y_test = []
    y_score = []
    with torch.no_grad():
        for batch in test_loader:
            images, labels = batch['image'], batch['grade']
            images = Variable(images)
            labels = Variable(labels)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            c = (predicted == labels).squeeze().numpy()
            y_score.append(outputs.numpy())
            y_test.append(labels.numpy())
    return y_test, y_score
My y_test is a list of arrays like below: [array([0, 4, 0, 2, 1, 0, 0, 3, 0, 1, 1, 0, 3, 2, 0, 2, 2, 2, 1, 1, 1, 0, 0, 0, 0, 3, 0, 0, 0, 4, 2, 1, 2, 0, 2, 1, 0, 4, 0, 0, 0, 1, 2, 2, 0, 1, 2, 2, 0, 2, 0, 2, 2, 3, 2, 3, 3, 1, 1, 1, 0, 2, 0, 0, 2, 1, 3, 0, 2, 0, 3, 2, 1, 0, 2, 2, 1, 0, 0, 0, 0, 2, 1, 2, 0, 3, 0, 1, 3, 0, 3, 2, 0, 3, 1, 0, 1, 2, 2, 0])]
And y_score is like[array([[ 0.30480504, -0.12213976, 0.09632117, -0.16465648, -0.44081157],[ 0.21797988, -0.09650452, 0.07616544, -0.12001953, -0.34972644],[ 0.3230184 , -0.13098559, 0.10277118, -0.17656785, -0.45888817],[ 0.38143447, -0.15880316, 0.12123139, -0.21719441, -0.5281661 ],[ 0.3427343 , -0.13945231, 0.11076729, -0.19657779, -0.4913683 ],
Whenever I called the function for plotting ROC
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(N_classes):
    fpr[i], tpr[i], _ = roc_curve(truth_val[:, i], preds_val[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(truth_val.ravel(), preds_val.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
I am getting this error message:
Traceback (most recent call last):
File "/home/Downloads/demo 3.py", line 435, in <module>
    plot_roc(y_test, y_score, 5)
    fpr[i], tpr[i], _ = roc_curve(truth_val[:, i], preds_val[:, i])
TypeError: list indices must be integers or slices, not tuple
I could not understand how to solve this problem.
I would highly appreciate any kind of help regarding this issue. |
st83091 | Based on the error, I think truth_val[:, i] or preds_val[:, i] is a Python list? |
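If so, one possible fix (my sketch, assuming y_test and y_score are the per-batch lists returned by test_epoch above) is to concatenate them into arrays first, and to one-hot encode the labels as the scikit-learn example does:
import numpy as np
from sklearn.preprocessing import label_binarize

truth_val = np.concatenate(y_test)    # shape [num_samples]
preds_val = np.concatenate(y_score)   # shape [num_samples, N_classes]
truth_val = label_binarize(truth_val, classes=list(range(N_classes)))  # shape [num_samples, N_classes]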
st83092 | The train data’s shape is (3, 224, 224).
There are 5 labels in total:
label 1 = (0~5)
label 2 = (0~7)
label 3 = (0~3)
label 4 = (0~2)
label 5 = (0~8)
so I mapped them, for example:
image <-> [1, 7, 3, 1, 4]
and
model = models.resnet50(pretrained=False)
in_features = model.fc.in_features
model.fc = nn.Linear(in_features=in_features, out_features=5)
model.to(device)
print(model)
Is it right to design it this way?
I have no idea how to set up the loss and the model when there are multiple labels like this using ResNet50. |
st83093 | One idea is to work with one-hot vectors instead. Basically, your last fc layer will be connected to 5 different outputs such that:
output 1 is a 6-element vector that represents the first label. If your label is 3, for example, then your vector would be [0, 0, 0, 1, 0, 0]
output 2 is an 8-element vector,
and so on.
Once you have these representations, you apply CrossEntropyLoss on each of your outputs, and your final loss is the sum of all these 5 losses, as in the sketch below.
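A minimal sketch of this multi-head idea (my code, not from the thread; the head sizes 6, 8, 4, 3, 9 come from the label ranges above, and nn.Identity requires a recent PyTorch):
import torch
import torch.nn as nn
from torchvision import models

class MultiHeadResNet(nn.Module):
    def __init__(self, label_sizes=(6, 8, 4, 3, 9)):
        super(MultiHeadResNet, self).__init__()
        self.backbone = models.resnet50(pretrained=False)
        in_features = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()  # keep the pooled features
        self.heads = nn.ModuleList(nn.Linear(in_features, n) for n in label_sizes)

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]

model = MultiHeadResNet()
criterion = nn.CrossEntropyLoss()            # takes class indices, so no explicit one-hot is needed
targets = torch.tensor([[1, 7, 3, 1, 4]])    # one class index per label, as in the question
outputs = model(torch.randn(1, 3, 224, 224))
loss = sum(criterion(out, targets[:, i]) for i, out in enumerate(outputs))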
You can also weight your loss labels if you think one label is more important than the other. |
st83094 | When I add CUDA to the conv network, my network stops working. If I build a network with only Linear layers, then everything works … Does anyone know why?
The forward pass works, but the backward pass does not. |
st83095 | The problem was that I used CUDA 9.2. When I installed CUDA 10, the problem was resolved. |
st83096 | Hello,
I have a vector of values and a vector of labels
(image 1)
and a set of sets of values that I will call ‘results’
(image 2).
Is there any way I can fill all of the results S_i such that the condition in image 3 holds?
Because I’m a new user I can only post one picture. Here’s the rest: https://imgur.com/a/MUJ5tcw
Here’s an example ->
v = [1,2,3,4]
u = [1,1,2,3]
f(u,v) = S
S_1 = [1,2]
S_2 = [3]
S_3 = [4]
I know I can do this using for loops, but I’d like to work with greater amount of data, for which their references are calculated on the spot (not done by a human beforehand) and it is taking too much time with for loops…
Thanks. If there’s anything that is unclear about my query, please feel free to ask.
Best regards, |
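For reference, one loop-free approach (my own sketch, not from the thread) is to sort by label and split at the label boundaries:
import torch

v = torch.tensor([1, 2, 3, 4])
u = torch.tensor([1, 1, 2, 3])

order = torch.argsort(u)                    # group equal labels together
counts = torch.bincount(u[order])           # occurrences of each label value
counts = counts[counts > 0]                 # drop label values that never appear
S = torch.split(v[order], counts.tolist())  # (tensor([1, 2]), tensor([3]), tensor([4]))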
st83097 | pip3 install --user http://download.pytorch.org/whl/cu90/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
ERROR: torch-0.3.1-cp36-cp36m-linux_x86_64.whl is not a supported wheel on this platform.
Python 3.6.9
pip 19.2.2
Thanks in advance. |
st83098 | I’m not sure about this particular error, but why are you trying to install PyTorch 0.3.1, which is quite old by now?
Have a look here for install instructions for the latest stable release (1.2.0). |
st83099 | Hello everyone.
I’m trying to write a function that does both training and validation, since about 90% of the code for them is the same. But I’m facing a slight problem: I don’t know where with torch.no_grad() should be placed so that gradients are not calculated for the tensors.
Should I place with torch.no_grad() right before operations such as model(imgs) and criterion(preds, labels), e.g. like this:
for imgs, labels in dataloader:
    imgs = imgs.to(device)
    labels = labels.to(device)
    with torch.no_grad():
        model.eval()
        preds = model(imgs)
        # the rest
        loss = criterion(preds, labels)
or
for imgs, labels in dataloader:
    with torch.no_grad():
        imgs = imgs.to(device)
        labels = labels.to(device)
        model.eval()
        preds = model(imgs)
        # the rest
        loss = criterion(preds, labels)
        # acc, etc |
st83100 | Both codes would work the same, if you just want to run inference and if your input doesn’t require gradients. |
st83101 | Thank you very much.
So you mean simply doing sth like :
def train_validation_loop(model, dataloader, optimizer, criterion, is_training,
                          device, topk=1, interval=1000):
    preds = None
    loss = 0.0
    loss_total = 0.0
    accuracy_total = 0.0
    total_batches = len(dataloader)
    status = 'training' if is_training else 'validation'
    for i, (imgs, labels) in enumerate(dataloader):
        imgs = imgs.to(device)
        labels = labels.to(device)
        if is_training:
            model.train()
            preds = model(imgs)
            loss = criterion(preds, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        else:
            model.eval()
            with torch.no_grad():
                preds = model(imgs)
                loss = criterion(preds, labels)
        loss_total += loss.item()
        _, class_idxs = preds.topk(topk, dim=1)
        results = (class_idxs.view(*labels.shape) == labels).float()
        accuracy_total += torch.mean(results)
        if i % interval == 0:
            _, class_idxs = preds.topk(topk, dim=1)
            results = (class_idxs.view(*labels.shape) == labels).float()
            accuracy_per_batch = torch.mean(results)
            print(f'{status} loss/accuracy(per batch): {loss:.6f} / {accuracy_per_batch:.4f}')
    # calculate the loss/accuracy after each epoch
    print(f'{status} loss/accuracy: {loss_total/total_batches:.6f} / {accuracy_total/total_batches:.4f}')
will consider gradients when in training mode and won’t when in validation mode? |
st83102 | Hi there,
I am currently drawing batch data from three same-shape (i.e., m by n matrix) independent datasets loaded on the GPU all together, but due to GPU memory usage I would like to write my own custom Dataset.
The thing is, I could pick item indices and use them in the three data sources before, but the DataLoader does not actually provide any information about them. So my possible solution is to modify __getitem__ in my custom Dataset and use it as below:
class MyDataset(Dataset):
    def __init__(self):
        self.cifar10 = datasets.CIFAR10(root='YOUR_PATH',
                                        download=False,
                                        train=True,
                                        transform=transforms.ToTensor())

    def __getitem__(self, index):
        data, target = self.cifar10[index]
        return data, target, index
which is from here: https://discuss.pytorch.org/t/how-does-one-obtain-indicies-from-a-dataloader/16847/3
Here comes the question: I want to pick the indices from the DataLoader and use the indices to get items from the other two datasets (the data shapes are the same). Would there be a more efficient way? |
st83103 | Could you explain a bit, what you mean by
Minseok:
I want to pick the indices from dataloader
?
Would you like to determine the indices manually?
Would it work to initialize all three datasets inside your custom MyDataset and just index all three in __getitem__ or am I misunderstanding the use case? |
st83104 | I think I found a workaround: initialize the datasets independently and concatenate them into one dataset, then index it using the key information during data loading, so that data from the same row comes out of all datasets. shuffle=True in the DataLoader seems to shuffle all the concatenated datasets with the same shuffled indices. The specific code is below and it works for me:
class ContextTensorDataset(Dataset):
    def __init__(self, context_type, directory_path, context_threshold=0, context_neglect=False):
        print(context_type + " Context Tensor Dataset initialized.")
        # Read data if the context is provided
        self.context_matrix = T.FloatTensor(np.load(directory_path + 'user_' + context_type + '_context.npy'))
        self.context_threshold = context_threshold
        self.context_type = context_type
        if context_neglect:
            self.context_matrix[self.context_matrix < self.context_threshold] = 0

    def __getitem__(self, index):
        return {self.context_type: self.context_matrix[index]}

    def __len__(self):
        '''
        Returns the length of the data (i.e., number of users)
        '''
        return self.context_matrix.shape[0]

    def shuffle_index(self, new_index):
        self.context_matrix = self.context_matrix[new_index]

    @property
    def shape(self):
        return self.context_matrix.shape


class ConcatDatasets(Dataset):
    def __init__(self, *datasets):
        self.datasets = datasets

    def __getitem__(self, index):
        batch = {}
        for dataset in self.datasets:
            batch = {**batch, **dataset[index]}
        return batch

    def __len__(self):
        return min(len(d) for d in self.datasets)
Thus the code in main.py would be something like below:
if 'G' in args.input_val:
    place_data = ContextTensorDataset('geo', directory_path, args.context, context_neglect=True)
    datasets += (place_data,)
if 'T' in args.input_val:
    time_data = ContextTensorDataset('time', directory_path, args.context, context_neglect=True)
    datasets += (time_data,)
if 'S' in args.input_val:
    seq_data = ContextTensorDataset('seq', directory_path, args.context, context_neglect=True)
    datasets += (seq_data,)

dataloaders = DataLoader(ConcatDatasets(*datasets),
                         batch_size=args.batch_size,
                         shuffle=True,
                         pin_memory=True,
                         num_workers=4) |
st83105 | I use a Windows environment.
code 1:
for ii, (X, y, y_weight) in enumerate(dataLoader[phase]):  # for each of the batches
    X = X.to(device)  # [Nbatch, 3, H, W]
    y_weight = y_weight.type('torch.FloatTensor').to(device)
    y = y.type('torch.LongTensor').to(device)
error:
File "", line 1, in <module>
  debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
  debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
  exec(cmd, globals, locals)
File "", line 1, in <module>
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
  execfile(filename, namespace)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/mbmhm/desktop/unet/train_unet.py", line 267, in <module>
  for ii, (X, y, y_weight) in enumerate(dataLoader[phase]):  # for each of the batches
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
  return _DataLoaderIter(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
  w.start()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\process.py", line 112, in start
  self._popen = self._Popen(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 223, in _Popen
  return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 322, in _Popen
  return Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
  reduction.dump(process_obj, to_child)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\reduction.py", line 60, in dump
  ForkingPickler(file, protocol).dump(obj)
File "stringsource", line 2, in tables.hdf5extension.Array.__reduce_cython__
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
code 2:
for x, y, w in dataLoader['train']:
    print(x.shape, y.shape, w.shape)
error:
File "", line 1, in <module>
  debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
  debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
  exec(cmd, globals, locals)
File "", line 1, in <module>
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
  execfile(filename, namespace)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/mbmhm/desktop/unet/train_unet.py", line 200, in <module>
  for w, y, z in dataLoader['train']:
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 576, in __next__
  idx, batch = self._get_batch()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 543, in _get_batch
  success, data = self._try_get_batch()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 519, in _try_get_batch
  raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 7744, 4584) exited unexpectedly |
st83106 | Is this error related to this topic, this one, and this one?
If so, I would recommend continuing the discussion in one of the mentioned threads to avoid confusion.
Let me know if these issues are related and where you would like to continue the discussion. |
st83107 | for ii, (X, y, y_weight) in enumerate(dataLoader[phase]):  # for each of the batches
    X = X.to(device)  # [Nbatch, 3, H, W]
    y_weight = y_weight.type('torch.FloatTensor').to(device)
    y = y.type('torch.LongTensor').to(device)
error:
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
====================================
for x, y, w in dataLoader['train']:
    print(x.shape, y.shape, w.shape)
error:
RuntimeError: DataLoader worker (pid(s) 7744, 4584) exited unexpectedly |
st83108 | Could you post the line of code, which is raising this error:
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
or alternatively the definition of your Dataset? I assume something is wrong in __getitem__. |
st83109 | Here is my code:
class Dataset(object):
    def __init__(self, fname, img_transform=None, mask_transform=None, edge_weight=False):
        # nothing special here, just internalizing the constructor parameters
        self.fname = fname
        self.edge_weight = edge_weight
        self.img_transform = img_transform
        self.mask_transform = mask_transform
        self.tables = tables.open_file(self.fname)
        self.numpixels = self.tables.root.numpixels[:]
        self.nitems = self.tables.root.img.shape[0]
        self.tables.close()
        self.img = None
        self.mask = None

    def __getitem__(self, index):
        # opening should be done in __init__ but seems to be
        # an issue with multithreading so doing it here
        with tables.open_file(self.fname, 'r') as db:
            self.img = db.root.img
            self.mask = db.root.mask
            # get the requested image and mask from the pytable
            img = self.img[index, :, :, :]
            mask = self.mask[index, :, :]
        # the original U-Net paper assigns increased weights to the edges of the annotated objects
        # their method is more sophisticated, but this one is faster: we simply dilate the mask and
        # highlight all the pixels which were "added"
        if self.edge_weight:
            weight = scipy.ndimage.morphology.binary_dilation(mask == 1, iterations=2) & ~mask
        else:  # otherwise the edge weight is all ones and thus has no effect
            weight = np.ones(mask.shape, dtype=mask.dtype)
        mask = mask[:, :, None].repeat(3, axis=2)      # in order to use the transformations given by torchvision
        weight = weight[:, :, None].repeat(3, axis=2)  # inputs need to be 3D, so here we convert from 1d to 3d by repetition
        img_new = img
        mask_new = mask
        weight_new = weight
        seed = random.randrange(sys.maxsize)  # get a random seed so that we can reproducibly do the transformations
        if self.img_transform is not None:
            random.seed(seed)  # apply this seed to img transforms
            img_new = self.img_transform(img)
        if self.mask_transform is not None:
            random.seed(seed)
            mask_new = self.mask_transform(mask)
            mask_new = np.asarray(mask_new)[:, :, 0].squeeze()
            random.seed(seed)
            weight_new = self.mask_transform(weight)
            weight_new = np.asarray(weight_new)[:, :, 0].squeeze()
        return img_new, mask_new, weight_new

    def __len__(self):
        return self.nitems |
st83110 | I’ve trained on a nn.Sequential model and would like to load the previously saved weights into another instance of the model. However, I’ve been trying to add names to the layers in this new model definition (which I didn’t previously do) and am running into an error with missing/unexpected keys.
Any ideas for how to circumvent such an issue? |
st83112 | You could load the state_dict and rename all keys to the newly given names.
The fastest way would probably be to use the state_dict of your new model definition and just create a loop over the keys of both dicts changing one to the other. |
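A rough sketch of that loop, assuming both models contain the same layers in the same order ('checkpoint.pth' is a placeholder path):
old_sd = torch.load('checkpoint.pth')  # state_dict saved from the unnamed model
new_sd = new_model.state_dict()        # state_dict of the new, named definition
renamed = {new_key: old_val for new_key, old_val in zip(new_sd.keys(), old_sd.values())}
new_model.load_state_dict(renamed)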
st83113 | Hello,
I’ve been struggling for hours with this.
Say I have a 2d tensor:
A = torch.randn(10,4)
And also a 1d LongTensor with A.shape[0] elements (e.g. 10):
I = torch.randint(0, 4, (10,))
I’d like an easier but equivalent way of doing this:
torch.take(A,I+torch.arange(0,I.shape[0])*A.shape[1])
That is, I want to obtain a 1-d tensor like this: from the 0th row of A, the I[0]-th value, from the first row of A the I[1]-th value, etc.
If I do A[I] it gives me a matrix (I want a 1d tensor) that selects the corresponding rows.
I also tried index_select, but I’m also getting a matrix.
Thank you! |
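For reference, two equivalent loop-free ways to do this row-wise selection (my suggestions, not from the thread):
out = A.gather(1, I.unsqueeze(1)).squeeze(1)  # out[i] = A[i, I[i]]
# or with advanced indexing:
out = A[torch.arange(A.shape[0]), I]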
st83114 | Hello,
I want to translate caffe model to caffe2 model using caffe_translator.py.
but I cannot figure out how to specify the --input_dims options.
usage: caffe_translator.py [-h] [--init_net INIT_NET]
[--predict_net PREDICT_NET] [--remove_legacy_pad]
[--input_dims INPUT_DIMS [INPUT_DIMS ...]]
prototext caffemodel
Could anybody explain the usage of --input_dims with Caffe’s MNIST tutorial
at http://caffe.berkeleyvision.org/gathered/examples/mnist.html?
Thanks. |
st83115 | python caffe_translator.py --remove_legacy_pad new.prototxt new.caffemodel --input_dims 1 3 210 280
Putting it at the end worked for me. |
st83116 | %%timeit -n 1000000
import torch.nn as nn
1000000 loops, best of 3: 357 ns per loop
%%timeit -n 1000000
import torch.nn
1000000 loops, best of 3: 307 ns per loop
%%timeit -n 1000000
from torch import nn
1000000 loops, best of 3: 864 ns per loop |
st83117 | I’ve never seen someone profiling the import.
My personal and biased choice is import torch.nn as nn. |
st83118 | How can I have different behavior for stochastic layers in the forward passes? For example, with both batch normalization and dropout present, how can I prevent BN from updating its mean and variance while dropout behaves as if in the training phase? Or in other words, use BN in eval mode and dropout in train mode in a forward pass.
Thanks a lot |
st83120 | You could also do:
net = net.eval()
for m in net.modules():
    if 'Dropout' in str(type(m)):
        m.train() |
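An isinstance-based check works too (note that Dropout2d/Dropout3d have to be listed explicitly):
net = net.eval()
for m in net.modules():
    if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
        m.train()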
st83121 | Hi,
I have two tensors of size [b, c, h, w, dim]; one contains values, while the other contains indices that these values should map to. I would like to create a third tensor of this dimension and perform the map:
out = value[idx]
How can I do this? I believe index operations like index_copy don’t work with dimensions greater than 1.
Thanks |
st83122 | Could you post an example of the input and index tensors, which shows, how the indexing should work? |
st83123 | value = [[[[[0.4196, 0.4720, 0.4312], [0.7560, 0.2302, 0.0757]],
[[0.4395, 0.1510, 0.0983], [0.7749, 0.4382, 0.3548]]]]]
idx = [[[[[2, 1, 0], [2, 0, 1]],
[[0, 1, 2], [0, 1, 2]]]]]
So here I would want a result like:
out = [[[[[0.4312, 0.4720, 0.4196 ], [0.0757, 0.7560, 0.2302]],
[[0.4395, 0.1510, 0.0983], [0.7749, 0.4382, 0.3548]]]]] |
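For what it’s worth, this mapping is exactly what torch.gather does along the last dimension (assuming idx indexes the dim axis, as in the example above):
out = torch.gather(value, -1, idx)  # out[..., k] = value[..., idx[..., k]]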
st83124 | I’m trying to train an NN on a dataset containing 1d vectors of features.
class data_loader(Dataset):
    def __init__(self, data):
        data = np.array(data.values, dtype=float)
        self.len = data.shape[0]
        self.X = torch.from_numpy(data[:, 0:-1])
        self.y = torch.from_numpy(data[:, [-1]])

    def __getitem__(self, index):
        return self.X[index], self.y[index]

    def __len__(self):
        return self.len
train_set = data_loader(real_train)
train_loader = DataLoader(dataset=train_set,
                          batch_size=64,
                          shuffle=True,
                          num_workers=0)
val_set = data_loader(real_val)
val_loader = DataLoader(dataset=val_set,
                        batch_size=64,
                        shuffle=False,
                        num_workers=0)
class genreNN(nn.Module):
    def __init__(self, n_in=26, n_hidden=128, n_out=8, dropout=0.4):
        super(genreNN, self).__init__()
        self.n_in = n_in
        self.n_hidden = n_hidden
        self.n_out = n_out
        self.dropout = dropout
        self.fc = nn.Linear(n_in, n_hidden)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        x = x.view(-1, self.n_in)
        sigmoid = nn.Sigmoid()
        x = sigmoid(self.fc(x))
        x = self.drop(x)
        x = F.log_softmax(self.out(x), dim=1)
        return x

model = genreNN()
criterion = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)
def train(epoch, model, train_loader, optimizer, print_every=50, cuda=None):
    model.train()
    correct = 0
    for k, (data, target) in enumerate(train_loader):
        # if cuda:
        #     data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data.float())
        pred = output.data.max(1)[1]
        correct += pred.eq(target.long().data).cpu().sum()
        acc = 100. * correct / len(train_loader.dataset)
        loss = F.nll_loss(output, target.long())
        loss.backward()
        optimizer.step()
        if k % print_every == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f} Accuracy: {}'.format(
                epoch, k * len(data), len(train_loader.dataset),
                100. * k / len(train_loader), loss.data[0], acc))
def validate(loss_vec, acc_vec, model, val_loader, cuda=None):
    model.eval()
    val_loss, correct = 0, 0
    for data, target in val_loader:
        # if cuda:
        #     data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data.float())
        val_loss += F.nll_loss(output, target.long()).data[0]
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.long().data).cpu().sum()
    val_loss /= len(val_loader)
    loss_vec.append(val_loss)
    acc = 100. * correct / len(val_loader.dataset)
    acc_vec.append(acc)
    print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        val_loss, correct, len(val_loader.dataset), acc))
epochs = 100
loss_vec = []
acc_vec = []
gpu_available = torch.cuda.is_available()
for e in range(epochs):
    train(e, model, train_loader, optimizer, cuda=gpu_available)
    validate(loss_vec, acc_vec, model, val_loader, cuda=gpu_available)
I believe the loss function in train() is causing this error but I don’t know how to fix this |
st83125 | Could you print the shape of your target tensor?
Based on your model architecture, it should be a LongTensor with the shape [batch_size] containing class indices in the range [0, n_out-1]. |
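For example, a valid target for the genreNN above (n_out=8) and a batch of 64 samples would be (shapes assumed):
target = torch.randint(0, 8, (64,))  # dtype torch.int64, shape [64]
loss = F.nll_loss(output, target)    # output shape: [64, 8]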
st83126 | How does one make a bidirectional RNN if one is processing a sequence token by token? Do I have to use a hardcoded for loop or is the bidirectional flag essentially useless?
Do things change if my RNN is to some degree custom (but has LSTM/GRU in it)?
Some example code of how I am doing one direction:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from pdb import set_trace as st

torch.manual_seed(1)

def step_by_step(net, sequence, hidden):
    '''
    an example of an LSTM processing the whole sequence one token at a time (one time step at a time)
    '''
    ## process sequence one element at a time
    print()
    print('start processing sequence')
    for i, token in enumerate(sequence):
        print(f'-- i = {i}')
        #print(f'token.size() = {token.size()}')
        ## to add fake batch_size and fake seq_len
        h_n, c_n = hidden  # hidden state, cell state
        processed_token = token.view(1, 1, -1)  # torch.Size([1, 1, 3])
        print(f'processed_token.size() = {processed_token.size()}')
        print(f'h_n.size() = {h_n.size()}')
        #print(f'processed_token = {processed_token}')
        #print(f'h_n = {h_n}')
        # after each step, hidden contains the hidden state.
        out, hidden = net(processed_token, hidden)
    ## print results
    print()
    print(out)
    print(hidden)

if __name__ == '__main__':
    ## model params
    hidden_size = 6
    input_size = 3
    lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size)
    ## make a sequence of length Tx (list of Tx tensors)
    Tx = 5
    sequence = [torch.randn(1, input_size) for _ in range(Tx)]  # make a sequence of length 5
    ## initialize the hidden state.
    hidden = (torch.randn(1, 1, hidden_size), torch.randn(1, 1, hidden_size))
    step_by_step(lstm, sequence, hidden)
    print('DONE \a') |
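One common workaround for the bidirectional case (my sketch, not an official pattern) is to keep two unidirectional RNNs and feed the second one the time-reversed sequence; the bidirectional=True flag only helps when the whole sequence is passed in one call:
lstm_fwd = nn.LSTM(input_size=input_size, hidden_size=hidden_size)
lstm_bwd = nn.LSTM(input_size=input_size, hidden_size=hidden_size)
seq = torch.stack(sequence)                # [Tx, 1, input_size]
out_f, _ = lstm_fwd(seq)
out_b, _ = lstm_bwd(torch.flip(seq, [0]))  # run over the reversed sequence
out = torch.cat([out_f, torch.flip(out_b, [0])], dim=-1)  # [Tx, 1, 2*hidden_size]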
st83127 | I asked a similar question where I want to download all the layers into pytorch:
How does one download all possible layers allowable in Pytorch into one data structure?
I want to have say, a list of all the allowable layers in pytorch. How do I get that info? Is it possible?
I am fine, as a starter, with only sequential models,
but I also want to have a list of all the hyperparameters/arguments for each of them. Is it possible to extract all of those automatically? (at the very least as string representations or something like that) |
st83128 | I was using this code to ensemble the model from several epochs, but as you can see from the kappa score printed out below, except for the last epoch, all other epochs have very strange scores:
global labels_v
val_preds = np.zeros(len(val19_df))
for i in range(len(Ws)):
    if Ws[i] > 0:
        model.load_state_dict(torch.load(f'weight_best{i * 4 + 5}.pt'))
        model.cuda()
        # test_model runs inference on the validation set in model.eval() mode and returns the kappa score
        # def test_model(val_loader=val_loader, val_df=val19_df, return_p=False)
        preds_i, labels_v = test_model(return_p=True)
        val_preds += Ws[i] * preds_i
val_preds /= w_total
kappa score: -0.007528865258592532
kappa score: 0.0
kappa score: 0.0
kappa score: 0.49560576349219887
kappa score: 0.9044690071390352
The exact same test_model function was used at training time, and the models behaved normally before saving. The models in these epochs were saved this way:
if epoch % 4 == 1:
    print(f'model{epoch} saved')
    torch.save(model.state_dict(), f'weight_best{epoch}.pt')
It also seems that if I use model.train() mode for inference, it behaves more normally. So what’s wrong? |
st83129 | I’m not sure what Ws contains. Did you make sure to call model.eval() inside test_model? |
st83130 | Hi! I am new to PyTorch. I want to update the weights using an old grad:
Here is a pseudocode example:
output = model(input)
loss1 = criterion1(output, target)
loss2 = criterion2(output, target)
grad1 = loss1.compute_grad()
grad2 = loss2.compute_grad()
parameters.update(grad1)
... # other operation will be done here.
parameters.update(grad2)
The key point is that the grad2 is calculated w.r.t the old weight, before the parameters.update(grad1).
I have no ideas to do this in an efficient way. Can create_graph=True or other method in Pytorch can help me? An easy example is better. Thank you! |
st83131 | It’s probably not the most efficient way, but you could clone all .grad attributes, zero them out in the model, perform your custom operations, copy them back into the model parameters, and call optimizer.step() when you are done. |
st83132 | Hello All, I’m trying to fine-tune a resnet18 model.
I want to freeze all layers except the last one. I did
resnet18 = models.resnet18(pretrained=True)
resnet18.fc = nn.Linear(512, 10)
for param in resnet18.parameters():
    param.requires_grad = False
However, doing
for param in resnet18.fc.parameters():
    param.requires_grad = True
Fails. How can I set a specific layers parameters to have requires_grad to True?
Thank you all in advance
Note:
I specifically don’t want to swap the order of assigning a new layer with setting all the grads to false
I want to learn how this specific thing can be done. |
st83134 | Your code works.
After running your code snippets, you can print the requires_grad attributes:
for name, param in resnet18.named_parameters():
    print(name, param.requires_grad)
which shows that fc.weight and fc.bias both require the gradient.
You will also get valid gradients in these layers:
resnet18(torch.randn(1, 3, 224, 224)).mean().backward()
for name, param in resnet18.named_parameters():
    print(name, param.grad) |
st83135 | Thank you very much, but the code I gave produces an error. It says:
fc doesn’t have any attribute named parameters()
so instead I did:
for _, param in resnet18.fc._parameters.items():
    print(param.requires_grad)
    param.requires_grad = True
and interestingly, for this to work I have to do:
for module in resnet18.modules():
    if module._get_name() != 'Linear':
        print('layer: ', module._get_name())
        for param in module.parameters():
            param.requires_grad_(False)
    elif module._get_name() == 'Linear':
        for param in module.parameters():
            param.requires_grad_(True)
again, if I just do:
for module in resnet18.modules():
    if module._get_name() != 'Linear':
        print('layer: ', module._get_name())
        for param in module.parameters():
            param.requires_grad_(False)
and print
for param in resnet18.parameters():
    print(param.requires_grad)
all parameters are set to False!
This is really puzzling. |
st83136 | I would recommend sticking to the named_parameters approach, as in your approach resnet18.modules() will also return fc.weight and fc.bias, which do not contain the 'Linear' name.
Shisho_Sama:
but the code I gave produces an error. It says:
fc doesn’t have any attribute named parameters()
Does this code raise this error:
for name, param in resnet18.fc.named_parameters():
    print(name, param.requires_grad)
If so, could you post your PyTorch and torchvision versions, as I would like to have a look at it? |
st83137 | It’s very weird! Both your code and fc.parameters() are now working just fine!
This had me confused for two days, and now they just work!
I don’t know what could have caused this, or I may well have made a mistake.
By the way, I am running PyTorch 1.0!
Anyway, thanks a gazillion times, that was a tremendous help.
By the way, do you mind if I ask you to kindly have a look here as well? |
st83138 | Good to hear, it’s working now!
If you are running a Jupyter notebook, make sure to run all previous cells, as it’s easy to forget about old variables etc. |
st83139 | Yes, it was on Jupyter,
One more thing I was experimenting with different ways of doing this and wrote this :
for k, p in resnet18.fc._parameters.items():
    p.requires_grad = True
which works but I tried to changed it again and wrote it this way this time:
(p.requires_grad_(True) for k,p in resnet18.fc._parameters.items())
which failed miserably!
I expected this to also work since I’m using the in-place operator (requires_grad_), but it doesn’t! Do you know why this is not working? |
st83140 | In your second code snippet you are creating a Python Generator, which will be lazily evaluated.
Your code works, if you execute the generator or use a list comprehension instead.
resnet18 = models.resnet18()
for param in resnet18.fc.parameters():
    print(param.requires_grad)

# 1
gen = (p.requires_grad_(False) for k, p in resnet18.fc._parameters.items())
next(gen)
next(gen)

# 2
list((p.requires_grad_(False) for k, p in resnet18.fc._parameters.items()))

# 3
[p.requires_grad_(False) for k, p in resnet18.fc._parameters.items()]

for param in resnet18.fc.parameters():
    print(param.requires_grad) |
st83141 | Below is test code
import torch
import torch.nn as nn
data = torch.rand(3, 10, 145)
bn = nn.BatchNorm1d(10)
bn.eval()
print bn(data) is None |
st83142 | Thanks for your help. Below is my result
Snipaste_2019-08-12_14-51-28.png1274×599 48 KB
But when I update pytorch to 1.2.0, the problem is solved |
st83143 | I’ve installed PyTorch 1.1.0 for Python2.7 and used your code snippet in python and ipython.
Both yield valid results and I cannot reproduce this issue.
Do you also get a None output for nn.BatchNorm2d?
Could you update PyTorch to the latest stable release (1.2.0) and check it again? |
st83144 | Yes, when I update PyTorch to 1.2.0 it yields valid results. But I didn’t figure out why this occurred in PyTorch 1.1.0. |
st83145 | I’m still unsure how this might happen on your system, as I’ve tried to reproduce it using matching versions of Python, PyTorch, etc. and wasn’t able to. |
st83146 | Hi,
I have a tensor where I would like to evaluate a function on each element and, based on its output, fill corresponding values in another tensor. Let’s say if the function returns a value less than 0 or greater than 1, then we set the value in the output tensor to 0; otherwise we set it to the function output. How would I go about doing that?
Thanks! |
st83147 | You can do it like this:
def f2(x):
    return x * 2

def f1(x):
    # evaluate a function on each element
    tmp = f2(x)
    # based on its output, fill corresponding values in another tensor
    if tmp < 0 or tmp > 1:
        return 0
    else:
        return tmp

x = torch.randn(10)
x.apply_(f1)
But the apply_() function only works with CPU tensors and should not be used in code sections that require high performance (because it’s slow). |
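A vectorized alternative I would suggest instead (it also works on the GPU):
tmp = x * 2  # the elementwise function f2 from above
out = torch.where((tmp < 0) | (tmp > 1), torch.zeros_like(tmp), tmp)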
st83148 | In some cases I have multiple gamma distributions with different parameters, and I aim to sample from each. I wonder whether I could make this parallel. Right now I am doing an explicit loop, which takes time. Gamma is only an example. |
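torch.distributions accepts batched parameters, so all the gammas can be sampled in one call; a minimal sketch with assumed parameter values:
import torch
from torch.distributions import Gamma

concentration = torch.tensor([1.0, 2.0, 5.0])
rate = torch.tensor([0.5, 1.0, 2.0])
dist = Gamma(concentration, rate)  # one object holding three distributions
samples = dist.sample((1000,))     # shape [1000, 3]: 1000 draws from each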
st83149 | I saw that it is possible to use CUDA to write to memory-mapped files (reference: https://stackoverflow.com/questions/29518875/cuda-zero-copy-memory-memory-mapped-file).
I am wondering if it is somehow possible in PyTorch to write a CUDA tensor directly to a memory map stored on the GPU.
The purpose of this is to speed up writing tensors after each training step. Currently,
with torch.no_grad():
    numpyMemmap[arrayOfRandomIndexes] = u_embeddings.weight.data.detach().cpu().numpy()
takes 6 seconds. I think it’s because the numpy memory map is stored on the CPU. I need something that writes in a fraction of a second, since I will be storing the tensors after each training step and there will be hundreds of thousands of training steps. |
st83150 | How are you profiling the code?
Make sure to synchronize the code before starting and stopping the timer using torch.cuda.synchronize().
Since CUDA operations are asynchronous, the cpu() call will create a synchronization point, so that you might be in fact profiling some other workload which is still executed on the GPU. |
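A minimal profiling sketch along those lines, reusing the names from the question:
import time

torch.cuda.synchronize()  # wait for pending GPU work before starting the timer
t0 = time.time()
numpyMemmap[arrayOfRandomIndexes] = u_embeddings.weight.detach().cpu().numpy()
torch.cuda.synchronize()
print('copy took {:.3f}s'.format(time.time() - t0))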
st83151 | print(len(dir(torch.nn.modules.activation.torch)))
646
print(len(dir(torch)))
646 |
st83152 | I have time series data where each column (feature) has a different scale, and the task is forecasting the next time step; the output dimension is the same as the input dimension. I am using an RNN. Is it a good idea to standardize the data?
In case I do, at test time should I stack the BPTT windows and then apply standardization?
Thanks |
st83153 | I used to use Keras, and the image format it follows is [Height x Width x Channels x Samples]. I decided to switch to PyTorch, but I didn’t switch out my data loading schemes. So now I have numpy arrays of shape HxWxCxS instead of SxCxHxW, which is what PyTorch requires. Does anyone have an idea how to convert this? |
st83154 | What you want to achieve here sounds more like permutation of dimensions rather than reshaping?
If that is the case, this should do:
t = torch.Tensor(np_arr)
t = t.permute(3, 2, 0, 1)
But if you know that the underlying data is in SxCxHxW but your array dims are HxWxCxS, you can use
t = t.reshape(S, C, H, W) |
st83155 | I’m trying to reproduce the Wide residual network 28-2 for a semi supervised learning article I’m creating. But I’m having trouble using the Batch_norm.
I keep getting this error:
File "C:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1708, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: the derivative for 'running_mean' is not implemented
Currently I’m using it like this:
F.batch_norm(z, weight=bnW0, bias=bnB0, running_mean=bnM0, running_var=bnV0, training=training)
where weight, bias, running_mean, and running_var have all been instantiated as:
nn.Parameter((torch.rand(16) - 0.5) * 1e-1)
Is batch_norm currently not working in training mode, or am I just doing something wrong here? |
st83157 | Hi tueboesen,
The running_mean and running_var should be registered as buffers and not parameters. The update to these tensors happens inside the function call. And gradients are not available. Hence, the error. |
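A minimal sketch of the buffer registration, if you build the module manually (F.batch_norm updates the buffers in-place when training=True):
class MyBatchNorm(nn.Module):
    def __init__(self, num_features=16):
        super(MyBatchNorm, self).__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))

    def forward(self, x, training=True):
        return F.batch_norm(x, self.running_mean, self.running_var,
                            weight=self.weight, bias=self.bias, training=training)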
st83158 | Okay, thank you for clarifying.
If there is no plan to implement that at some point, I would say the error message should probably be changed to be more informative, because right now it just sounds like batch_norm isn’t working. |
st83159 | Hi,
I am trying to use tensor.index_copy_, but having some issues regarding tensor’s dimensions.
On the Pytorch documents, there is an example:
x = torch.zeros(5,3)
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9]], dtype=torch.float)
index = torch.tensor([0, 4, 2])
x.index_copy_(0, index, t)
These codes result in x as:
x = [[1, 2, 3],
[0, 0, 0],
[7, 8, 9],
[0, 0, 0],
[4, 5, 6]]
I understood this example with the dimension of 0, but if I try to use it with dimension of 1 in similar example, I receive an runtime error that indicates some dimension issues. For the clarification, I will leave an example of this below:
x = torch.zeros(5,3)
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9], [10,11,12], [13,14,15]], dtype=torch.float)
index = torch.tensor([0, 2, 1])
x.index_copy_(1, index, t)
I expect to receive x as
[[1, 3, 2],
[4, 6, 5],
[7, 9, 8],
[10, 12, 11],
[13, 15, 14]]
but running these codes results in the following error:
RuntimeError: index_copy_(): Source/destination tensor must have same slice shapes. Destination slice shape: [5] at dimension 1 and source slice shape: [3] at dimension 0.
Could someone explain what is wrong with this? Thank you! |
st83160 | Your code snippet seems to work and returns the right result:
x = torch.zeros(5,3)
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9], [10,11,12], [13,14,15]], dtype=torch.float)
index = torch.tensor([0, 2, 1])
x.index_copy_(1, index, t)
> tensor([[ 1., 3., 2.],
[ 4., 6., 5.],
[ 7., 9., 8.],
[10., 12., 11.],
[13., 15., 14.]]) |
st83161 | Thank you for your reply to my inquiry.
When I run the same code in my environment, it raises the error below.
RuntimeError Traceback (most recent call last)
----> 1 x.index_copy_(1, index, t)
RuntimeError: index_copy_(): Source/destination tensor must have same slice shapes. Destination slice shape: [5] at dimension 1 and source slice shape: [3] at dimension 0.
I am running this code on torch version 0.4.0.
What I notice is that if I run the same code on torch version 0.3.1 it runs as I expected, but it does not run properly on version 0.4.0. Do you know how to run this code on version 0.4.0?
Thank you! |
st83162 | I’m not sure, but I would recommend updating to the latest stable release as described here.
A lot of bug fixes were shipped in the versions after 0.4.0, and this might be a particular bug. |
st83163 | I updated to 1.2.
I found that the torchvision that comes with it is 0.3 instead of 0.4 as promoted,
and it is not even working: it throws an error when importing it.
import torchvision
gives a very long error, the last line of which is: DLL failure: specified module not found. |
st83164 | Solved by peterjc123 in post #8
Just to remind you that torchvision is uploaded and you can use the commands on pytorch.org to install. |
st83165 | github.com/pytorch/pytorch
Issue: Unable to install 1.2
opened by ZhuBaohe on 2019-08-10
Whether using pip or conda, I still get 1.1 by the official installation command. |
st83166 | That was true until this morning; they fixed it. Now you get 1.2, you can try it.
However, it comes with torchvision 0.3 instead of 0.4 as promoted, and it doesn’t even work. |