id | text
---|---
st83968 | Hi All,
Anyone know what this error is all about?
Traceback (most recent call last):
File "train.py", line 135, in <module>
loss.backward()
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
Any and all help is very much appreciated
Thanks |
st83969 | This error might point to a faulty driver installation, a failure in initializing the CUDA context or an unsupported GPU.
Was your setup working before or are you using a new environment/system?
In the former case, could you disable cudnn (torch.backends.cudnn.enabled=False), and run the code again?
Maybe another error is hidden behind this error message. |
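For example, a quick way to test this suggestion before the training run (a minimal sketch; the surrounding training script is assumed unchanged):
import torch

# Disable cuDNN so PyTorch falls back to its native CUDA kernels;
# if the script then runs, the problem likely lies in the cuDNN setup.
torch.backends.cudnn.enabled = False

# Sanity check that the CUDA context and driver can be initialized at all.
print(torch.cuda.is_available(), torch.version.cuda)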
st83970 | After having no luck with various manual build guides for Caffe2 and OpenCV (within my anaconda python 2.7 environment, Ubuntu 16.04), I followed the advice to install the nightly build binaries via:
conda install pytorch-nightly torchvision -c pytorch
I want to use models from R2Plus1D 10 for extracting action recognition features. However, when running their extract_features.py it will throw the following error message:
Traceback (most recent call last):
File "lib/extract_features.py", line 333, in <module>
main()
File "lib/extract_features.py", line 328, in main
ExtractFeatures(args)
File "lib/extract_features.py", line 131, in ExtractFeatures
devices=gpus,
File "/anaconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 34, in Parallelize_GPU
Parallelize(*args, **kwargs)
File "/anaconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 219, in Parallelize
input_builder_fun(model_helper_obj)
File "lib/extract_features.py", line 106, in input_fn
use_local_file=args.use_local_file,
File "/models/c3d/R2Plus1D/lib/utils/model_helper.py", line 120, in AddVideoInput
data, label, video_id = model.net.VideoInput(
File "/anaconda2/lib/python2.7/site-packages/caffe2/python/core.py", line 2171, in __getattr__
",".join(workspace.C.nearby_opnames(op_type)) + ']'
AttributeError: Method VideoInput is not a registered operator. Did you mean: []
Some issues mention that there is a simple solution. Does somebody know a solution? |
st83971 | Hi,
I’m trying to understand the difference between a Dataset and DataLoader for a specific case. The DataLoader allows us to specify a sampler.
Lets say I’ve a sampler that looks like the following.
class MyAwesomeSampler(Sampler):
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return iter(self.indices)
def __len__(self):
return len(self.indices)
and then I give plug an instance of MyAwesomeSampler to the input of DataLoader
How is this different from using Subset(mydataset, indices) where the indices are the same as above.
Does this make any difference in Dataset objects with data-augmentation (eg any of the image datasets in torchvision). |
st83972 | Solved by ptrblck in post #2 |
st83973 | Your custom sampler and the Subset will yield the same data samples, if you don't use shuffle=True for the DataLoader using the Subset.
The main difference is that a custom sampler can do much more than just returning a subset of the data.
E.g. the WeightedRandomSampler implements a sample weight, which can be used to balance the batches for an imbalanced dataset.
It shouldn’t make any difference regarding data augmentation, as only the indices passed to __getitem__ will be customized, not the method itself. |
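For example, a minimal sketch of the two approaches side by side, reusing the MyAwesomeSampler class from the question (the toy dataset here is made up):
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset, WeightedRandomSampler

# Toy dataset standing in for `mydataset` from the question.
mydataset = TensorDataset(torch.arange(10).float().unsqueeze(1), torch.arange(10))
indices = [0, 2, 5, 7]

# Option 1: restrict the dataset itself.
subset_loader = DataLoader(Subset(mydataset, indices), batch_size=2, shuffle=False)

# Option 2: keep the full dataset and restrict it via the custom sampler.
sampler_loader = DataLoader(mydataset, batch_size=2, sampler=MyAwesomeSampler(indices))

# Both loaders iterate over the same four samples in the same order.
print([batch[1].tolist() for batch in subset_loader])
print([batch[1].tolist() for batch in sampler_loader])

# A sampler can also do more than subsetting, e.g. draw samples with replacement
# according to per-sample weights to balance an imbalanced dataset.
weights = torch.tensor([0.1] * 8 + [1.0] * 2)
balanced_loader = DataLoader(mydataset, batch_size=2,
                             sampler=WeightedRandomSampler(weights, num_samples=10))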
st83974 | I wanted to save my model while training every few epochs and was wondering about the best way to go about it. The approach suggested in this link 4 seems to be a common/popular way to do so.
However, I don’t fully understand how the above method works. By calling model.cpu and then model.cuda won’t we be creating new objects for the parameters different from the ones before calling either of the two functions as suggested in the docs? If this is true, then won’t we need to change the parameters the optimizer updates? I’ve not seen this being mentioned anywhere though.
I’m just getting started with PyTorch and so I apologize for any ignorance on my part. |
st83975 | Solved by ptrblck in post #2 |
st83976 | The push to the CPU and back to GPU shouldn’t be a problem, as the id of the parameters shouldn’t change, thus the optimizer still holds valid references to the parameters.
However, I would suggest using the same device after storing the state_dict, since internal states of the optimizer (e.g. when using Adam) will also be stored on the initial device. |
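For example, a minimal sketch of the checkpoint pattern discussed here (the model, optimizer, and file name are made up):
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Save every few epochs; the parameter objects themselves are untouched,
# so the optimizer keeps valid references to them afterwards.
torch.save({'model': model.state_dict(),
            'optimizer': optimizer.state_dict()},
           'checkpoint.pth')

# When resuming, map the checkpoint onto the device training will continue on,
# since Adam's internal buffers were stored on the original device.
checkpoint = torch.load('checkpoint.pth', map_location=device)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])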
st83977 | Thanks! Yeah, I’ll make sure to use the same device.
Just a quick follow-up though: what do the ids of the parameters depend on?
model_1.cpu()
model_2.cpu()
model_2.cuda()
model_1.cuda()
Would the id of the parameters change in this case? Just trying to avoid possible mistakes. |
st83978 | I am making a LSTM model for Sentiment Analysis in PyTorch. I am using a Twitter Dataset for this purpose and have used Scikit-learn for data splitting. This is how I am doing it.
# training params
batch_size = 25
epochs = 5  # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip = 5  # gradient clipping

# move model to GPU, if available
# if(train_on_gpu):
#     net.cuda()

net.train()
# train for some number of epochs
for e in range(epochs):
    # initialize hidden state
    h = net.init_hidden(batch_size)

    # batch loop
    for inputs, labels in X_train:
        counter += 1
        # if(train_on_gpu):
        #     inputs, labels = inputs.cuda(), labels.cuda()
        inputs, labels = inputs.to(device), labels.to(device)

        # Creating new variables for the hidden state, otherwise
        # we'd backprop through the entire training history
        h = tuple([each.data for each in h])

        # zero accumulated gradients
        net.zero_grad()

        # get the output from the model
        output, h = net(inputs, h)

        # calculate the loss and perform backprop
        loss = criterion(output.squeeze(), labels.float())
        loss.backward()
        # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
        nn.utils.clip_grad_norm_(net.parameters(), clip)
        optimizer.step()

        # loss stats
        if counter % print_every == 0:
            # Get validation loss
            val_h = net.init_hidden(batch_size)
            val_losses = []
            net.eval()
            for inputs, labels in X_test:
                # Creating new variables for the hidden state, otherwise
                # we'd backprop through the entire training history
                val_h = tuple([each.data for each in val_h])

                # if(train_on_gpu):
                #     inputs, labels = inputs.cuda(), labels.cuda()
                inputs, labels = inputs.to(device), labels.to(device)

                output, val_h = net(inputs, val_h)
                val_loss = criterion(output.squeeze(), labels.float())
                val_losses.append(val_loss.item())

            net.train()
            print("Epoch: {}/{}...".format(e+1, epochs),
                  "Step: {}...".format(counter),
                  "Loss: {:.6f}...".format(loss.item()),
                  "Val Loss: {:.6f}".format(np.mean(val_losses)))
This is what I get -
19 # batch loop
---> 20 for inputs, labels in X_train:
21 counter += 1
22
ValueError: too many values to unpack (expected 2)
Can any of you guys help me solve this problem? I am at the very last step and clueless about what to do next. This is a link to my Github page for this problem. You can check it anytime. Thanks in advance. |
st83979 | Thank you, sir, for the response. I used Keras instead of PyTorch and am done with the sentiment analysis. Thank you again. |
st83980 | Does the following cause parameter sharing between the conv layers?
self.conv = nn.Conv2d(channels_in, channels_out, kernel_size=3)
layers = []
for _ in range(num_iters):
    layers.append(self.conv)
self.model = nn.Sequential(*layers)
If so, does the following remedy it?
layers = []
for _ in range(num_iters):
    layers.append(nn.Conv2d(channels_in, channels_out, kernel_size=3))
self.model = nn.Sequential(*layers) |
st83981 | Solved by ptrblck in post #2 |
st83982 | In the first code snippet you will use the same conv layer num_iters times, so yes, the parameters will be shared (or rather reused).
The second code snippet creates num_iters new conv layers, each containing new parameters. |
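For example, a quick check of the difference (a small sketch with made-up sizes):
import torch.nn as nn

conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

shared = nn.Sequential(*[conv for _ in range(3)])  # the same layer reused three times
separate = nn.Sequential(*[nn.Conv2d(3, 3, kernel_size=3, padding=1) for _ in range(3)])

# The shared model registers a single weight/bias pair, the other registers three.
print(len(list(shared.parameters())))            # 2
print(len(list(separate.parameters())))          # 6
print(shared[0].weight is shared[1].weight)      # True
print(separate[0].weight is separate[1].weight)  # False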
st83983 | Is this function differentiable?
Is there any implementation code behind this function? I cannot find the source code of this function in the pytorch website.
I have implemented a paper 8 using this function, but the result is quite weird.
Another question:
I have a problem with differentiating through indices, described below.
In this paper section 3.3
We first select Y frames (i.e. keyframes) based on the prediction scores from the
decoder.
The decoder output is [2,320], which means non-keyframe score and key frame score of the 320 frames. We want to find a 0/1 vector according to the decoder output but the process of [2,320] -> 0/1 vector seems not differentiable…
How to implement this in pytorch?
Thanks for anyone pointing out the reason. |
st83984 | I have found its source code.
CPU:
github.com
pytorch/pytorch/blob/10c60b601a63710b17e2d14e8384c7c9bb51f9ff/aten/src/TH/generic/THTensorEvenMoreMath.cpp#L203-L207 15
void THTensor_(indexSelect)(THTensor *tensor, THTensor *src, int dim, THLongTensor *index)
{
ptrdiff_t i, numel;
THTensor *tSlice, *sSlice;
int64_t *index_data;
CUDA:
github.com
pytorch/pytorch/blob/2e37ab85afa1b3b7b05f1ebe24c220809a05de9b/aten/src/THC/generic/THCTensorIndex.cu#L395-L400 7
void THCTensor_(indexSelect)(THCState *state, THCTensor *dst, THCTensor *src, int dim, THCudaLongTensor *indices)
{
THCAssertSameGPU(THCTensor_(checkGPU)(state, 3, dst, src, indices));
int dims = THCTensor_(nDimensionLegacyNoScalars)(state, dst);
THArgCheck(dims <= MAX_CUTORCH_DIMS, 2, CUTORCH_DIM_WARNING); |
st83985 | github.com
pytorch/pytorch/blob/6dfecc7e01842bb7e5024794fdce94480de7bb3a/tools/autograd/derivatives.yaml#L415-L417 5
- name: index_select(Tensor self, int dim, Tensor index) -> Tensor
self: at::zeros(self.sizes(), grad.options()).index_add_(dim, index, grad)
index: non_differentiable
It is non-differentiable with respect to index. |
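For example, a small check of what this entry means in practice: gradients flow back to the source tensor, while the index stays an integer tensor without gradients (a minimal sketch):
import torch

src = torch.randn(5, 3, requires_grad=True)
index = torch.tensor([0, 2])  # integer indices; no gradient can flow to them

out = torch.index_select(src, 0, index)
out.sum().backward()

# Rows 0 and 2 receive gradient 1, the non-selected rows receive 0.
print(src.grad)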
st83986 | Thank you very much, Tony-Y.
What do you think of the pytorch implementation of the “select” action in “we select k key frames to form the predicted summary video” 4 ?
I used torch.index_select at first, but I now know the function is not differentiable with respect to the index.
Do you have any implementation suggestions for the "select" action? |
st83987 | You use index_select 4 times in your code:
github.com
pcshih/pytorch-VSLUD/blob/b3ed7aba9332d2e0a21a66e84eae3654c9e254af/SK.py#L38-L40 1
index = torch.tensor(column_mask, device=torch.device('cuda:0'))
h_select = torch.index_select(h, 3, index)
x_select = torch.index_select(x_temp, 3, index)
github.com
pcshih/pytorch-VSLUD/blob/b3ed7aba9332d2e0a21a66e84eae3654c9e254af/training_set_preparation.py#L42-L47
gt_summary = torch.from_numpy(dataset[key]["gtsummary"][...]).to(device)
column_index = gt_summary.nonzero().view(-1)
feature_summary_cuda = torch.from_numpy(dataset[key]["features"][...]).to(device)
feature_summary_cuda = feature_summary_cuda.transpose(1,0).view(1,1024,1,feature_summary_cuda.shape[0])
feature_summary_cuda = torch.index_select(feature_summary_cuda, 3, column_index)
attributes["summary_features"] = feature_summary_cuda; #print(torch.isnan(feature_summary_cuda).nonzero().view(-1))
github.com
pcshih/pytorch-VSLUD/blob/b3ed7aba9332d2e0a21a66e84eae3654c9e254af/train.py#L195-L198
index = torch.tensor(column_mask, device=device)
select_vd = torch.index_select(vd, 3, index)
reconstruct_loss = torch.norm(S_K_summary-select_vd, p=2)**2
reconstruct_loss /= len(column_mask)
Where is the problem? |
st83988 | Thank you very much, Tony-Y.
The index_select function is not differentiable with respect to the index, so the gradient cannot backpropagate to the previous S_K architecture.
My problem is how to implement the "select" action in PyTorch without using the index_select function. |
st83989 | Do you want a derivative with respect to the index rather than the source tensor? |
st83990 | I want a derivative with respect to the source tensor[index] -> the tensor on the “index” location.
Because the output tensor of FCSN 2 architecture is in the shape of [1,2,1,#frame]. This tensor means whether frames are selected or not.
The algo. of this paper 2 is below:
Downsample every video to 2 fps and pre-process every downsampled training video into [1,1024,1,T] (video) and [1,1024,1,S] (summary) features through a pre-trained GoogLeNet.
For every pre-processed downsampled video feature (in the format [1,1024,1,T]) and real summary video feature (in the format [1,1024,1,S]), where T and S may differ for each video:
Put [1,1024,1,T] into FCSN and get the index_mask (this index_mask is constructed from the output of FCSN, [1,2,1,T], which indicates which frames should be selected).
[image: S__44130656.jpg]
Select the K key outputs of FCSN according to index_mask and get the output in the format [1,2,1,K].
Put the selected K key outputs of FCSN ([1,2,1,K]) into the 1x1 conv to get [1,1024,1,K].
Add the K key features [1,1024,1,K] (picked from the original video feature according to index_mask) to the 1x1 conv output [1,1024,1,K] as a skip connection (this is the output of S_K).
Pick the K key features from the original video feature according to index_mask and calculate the reconstruction loss with the previous step's output.
Calculate the diversity loss.
Calculate the adversarial loss by putting the output of S_K into S_D with target score 1.
Update S_K.
Put the real summary videos' features [1,1024,1,S] into S_D with target score 1 to get the adversarial loss.
Put the fake summary videos' features [1,1024,1,K] coming from S_K into S_D with target score 0 to get the adversarial loss.
Update S_D.
end
Thank you very much, Tony-Y. |
st83991 | [image: rec loss.png]
This reconstruction loss can be calculated by the weighted mean where the index_mask is used as the weights and k is the sum of the index_mask. |
st83992 | Sorry for my poor English understanding.
Could you please give an example?
Thank you very much, Tony-Y. |
st83993 | import torch
index_mask = torch.Tensor([0.0, 0.0, 1.0, 1.0, 0.0])
v = torch.randn(3,5)
sk = torch.randn(3,5)
torch.sum((sk-v)**2 * index_mask) / torch.sum(index_mask)
where the feature size is 3 and the number of frames is 5. |
st83994 | I have followed your idea but the loss is still quite weird.
[image: loss_index_mask.PNG]
github.com
pcshih/pytorch-VSLUD/blob/20a91393c040cd6a005c1b01ce77ca3db82efd25/SK.py#L24-L56
h = x
x_temp = x
h = self.FCSN(h)
values, indices = h.max(1, keepdim=True)
###old###
# # 0/1 vector, we only want key(indices=1) frame
# column_mask = (indices==1).view(-1).nonzero().view(-1).tolist()
# # if S_K doesn't select more than one element, then random select two element(for the sake of diversity loss)
# if len(column_mask)<2:
# print("S_K does not select anything, give a random mask with 2 elements")
# column_mask = random.sample(list(range(h.shape[3])), 2)
# index = torch.tensor(column_mask, device=torch.device('cuda:0'))
# h_select = torch.index_select(h, 3, index)
# x_select = torch.index_select(x_temp, 3, index)
###old###
github.com
pcshih/pytorch-VSLUD/blob/20a91393c040cd6a005c1b01ce77ca3db82efd25/train.py#L202-L214
###new reconstruct###
reconstruct_loss = torch.sum((S_K_summary-vd)**2 * index_mask) / torch.sum(index_mask)
###new reconstruct###
# diversity
S_K_summary_reshape = S_K_summary.view(S_K_summary.shape[1], S_K_summary.shape[3])
norm_div = torch.norm(S_K_summary_reshape, 2, 0, True)
S_K_summary_reshape = S_K_summary_reshape/norm_div
loss_matrix = S_K_summary_reshape.transpose(1, 0).mm(S_K_summary_reshape)
diversity_loss = loss_matrix.sum() - loss_matrix.trace()
#diversity_loss = diversity_loss/len(column_mask)/(len(column_mask)-1)
diversity_loss = diversity_loss/(torch.sum(index_mask))/(torch.sum(index_mask)-1)
P.S. The torch.index_select below is just for training set preparation, so I did not change it.
github.com
pcshih/pytorch-VSLUD/blob/b3ed7aba9332d2e0a21a66e84eae3654c9e254af/training_set_preparation.py#L42-L47
gt_summary = torch.from_numpy(dataset[key]["gtsummary"][...]).to(device)
column_index = gt_summary.nonzero().view(-1)
feature_summary_cuda = torch.from_numpy(dataset[key]["features"][...]).to(device)
feature_summary_cuda = feature_summary_cuda.transpose(1,0).view(1,1024,1,feature_summary_cuda.shape[0])
feature_summary_cuda = torch.index_select(feature_summary_cuda, 3, column_index)
attributes["summary_features"] = feature_summary_cuda; #print(torch.isnan(feature_summary_cuda).nonzero().view(-1))
Thank you very much, Tony-Y. |
st83995 | The selection is based on the output of the FCSN architecture (i.e. [1,1024,1,T] tensor)
Thank you very much, Tony-Y. |
st83996 | if S_K doesn’t select more than one element, then random select two element(for the sake of diversity loss)
What’s this? |
st83997 | I implement the diversity loss through the concept below, where the feature size is 2 and the number of frames is 3.
[image: diversity.jpg] |
st83998 | If I have a batch of 100, can I do backpropagation for each of the 100 samples individually? That is, one forward pass, then 100 backward passes? |
st83999 | You can. PyTorch typically computes an element-wise loss, then averages it, and then backprops.
When defining the loss, just choose the option that does not compute the average, and the loss will return 100 elements instead of a single number.
Then just backprop each element like element.backward().
You will have to set retain_graph=True.
for element in loss:
element.backward(retain_graph=True) |
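For example, a runnable sketch of this loop approach with an MSE-style loss (the tiny model and data are made up):
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
x = torch.randn(100, 4)
y = torch.randn(100, 1)

criterion = nn.MSELoss(reduction='none')   # keep one loss value per element
loss = criterion(model(x), y).mean(dim=1)  # shape [100]: one loss per sample

for element in loss:
    # retain_graph keeps the intermediate buffers alive for the next backward call;
    # each call accumulates that sample's gradients into the model's .grad attributes.
    element.backward(retain_graph=True)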
st84000 | If I use MSELoss, which is the correct option?
"none" does not work with the batch.
That leaves only "sum" and "mean", but they will not give the desired result. Most likely I will have to write my own loss function. |
st84001 | It's not necessary to use a for-loop:
mse_criterion = torch.nn.MSELoss(reduction='none')
loss = mse_criterion(input, label)  # input -> BxCxHxW, loss -> BxCxHxW (same shape as input)
loss.backward(gradient=torch.ones_like(loss))
This backpropagates all per-sample losses in a single call; the parameter gradients accumulate the contributions of every sample, equivalent to backpropagating their sum. |
st84002 | Hi,
I'm new to PyTorch; any help will be appreciated.
I want to use deconvolution to estimate an image from a 1-D feature, similar to the DWT (discrete wavelet transform).
Thanks |
st84003 | torch.nn.ConvTranspose2d can do upsampling and can be regarded as a deconvolution operation. |
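For example, a minimal sketch of upsampling a 1-D feature with ConvTranspose2d (the sizes are made up and this is not a full inverse DWT):
import torch
import torch.nn as nn

# Treat the 1-D feature as a 1x1 "image" with many channels and upsample it.
latent = torch.randn(1, 64, 1, 1)  # a batch with one 64-dimensional feature vector

upsample = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 1x1 -> 2x2
    nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 2x2 -> 4x4
)

print(upsample(latent).shape)  # torch.Size([1, 1, 4, 4])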
st84004 | Thanks for the reply.
I tried modifying this code but it gives me an error.
self.layer1 = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2))
Kindly, can you modify this code so it can be used for deconvolution? |
st84005 | I am running PyTorch on two different machines (no GPU), and I observe drastic speed differences between the two. Both machines are running Ubuntu 18.04, Python 3.7.3 with Anaconda and Pytorch v1.2.0 was built from source.
Here is a small sample of code to reproduce my issues:
import torch
from time import time
torch.set_num_threads(10)
N_samples, N_features = (128,5000)
x = [ torch.randn(N_samples, N_features) for _ in range(10) ]
y = [ torch.randn(N_samples) for _ in range(10) ]
model = torch.nn.Sequential(
    torch.nn.Linear(N_features, 500),
    torch.nn.ReLU(),
    torch.nn.Linear(500, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 1),
    torch.nn.ReLU()
)
def run_batch(xi, yi):
    t0 = time()
    model(xi)
    print(time() - t0)
[ run_batch(xi, yi) for xi, yi in zip(x, y) ]
On the first machine, I get:
0.45372891426086426
0.37187933921813965
0.37082910537719727
...
On the second, I get:
0.013288736343383789
0.009611368179321289
0.009567499160766602
...
For the slower machine, Python was compiled with --enable-optimizations, but I’m not sure for the faster one.
Moreover, the slower machine has more CPUs and more RAM memory than the other one.
Any idea where the difference might come from?
Thank you for your help |
st84006 | Hey all! I’m using the MNIST dataset available through torchvision and trying to use transform operations to create synthetic data.
In addition to a regular train_set where I only used transforms.ToTensor(), I wrote the following with the intention of appending it to the original train_set:
train_set2 = torchvision.datasets.MNIST(
    root='./data',
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.RandomAffine(degrees=20,
                                translate=(0.9, 0.9),
                                scale=(0.9, 1.1),
                                shear=(-20, 20)),
        transforms.ToTensor()
    ])
)
However, when I view the images that are produced through the extracting and transforming of the dataset there does not appear to be any difference in how the images look at all.
For example, the following are my results:
plt.imshow(train_set.data[0])
plt.imshow(train_set2.data[0])
Any clarification would be greatly appreciated! |
st84007 | Solved by ptrblck in post #4 |
st84008 | Well, when I convert it to a numpy array it produces the same results… I am more confused about why there isn't any sort of transformation of the data occurring. |
st84009 | You are directly indexing the underlying non-transformed data using train_set2.data.
Try to index the Dataset, to get the transformed tensors train_set2[index]. |
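For example, a small sketch of the difference, reusing train_set2 from the question:
import matplotlib.pyplot as plt

# train_set2.data[0] is the raw, untransformed uint8 image stored inside the dataset.
raw_img = train_set2.data[0]

# Indexing the dataset itself calls __getitem__, which applies the transform
# and returns a (tensor, label) tuple.
aug_img, label = train_set2[0]

plt.subplot(1, 2, 1); plt.imshow(raw_img, cmap='gray')
plt.subplot(1, 2, 2); plt.imshow(aug_img.squeeze(0), cmap='gray')
plt.show()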
st84010 | torch.dot(nn.Conv2d(3, 64, kernel_size=(3,1), stride=1, padding=(0,1)).weight, nn.Conv2d(3, 64, kernel_size=(1,3), stride=1, padding=(0,1)).weight)
RuntimeError: dot: Expected 1-D argument self, but got 4-D
Like this, I want to make a 3x3 filter and apply it during training.
How should I do it?
Thank you. |
st84011 | The dot product will return a scalar value, so maybe you would like to apply torch.matmul on the filter kernels? |
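For example, if the intent is to combine a (3,1) and a (1,3) kernel into a full 3x3 kernel per output/input channel, the outer product via torch.matmul could look like this (a sketch of one possible interpretation, not necessarily what the paper being reproduced intends):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv_v = nn.Conv2d(3, 64, kernel_size=(3, 1))  # weight shape: [64, 3, 3, 1]
conv_h = nn.Conv2d(3, 64, kernel_size=(1, 3))  # weight shape: [64, 3, 1, 3]

# Batched matmul over the last two dims: [64, 3, 3, 1] @ [64, 3, 1, 3] -> [64, 3, 3, 3]
kernel3x3 = torch.matmul(conv_v.weight, conv_h.weight)

x = torch.randn(2, 3, 32, 32)
out = F.conv2d(x, kernel3x3, padding=1)
print(out.shape)  # torch.Size([2, 64, 32, 32])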
st84012 | Suppose I have a list of indices and wish to modify an existing array with this list. Currently the only way I can do this is by using a for loop as follows. Just wondering if there is a faster/ efficient way.
torch.manual_seed(0)
a = torch.randn(5,3)
idx = torch.Tensor([[1,2], [3,2]]).to(torch.long)
for i, j in idx:
    a[i, j] = 1
I initially assumed that gather or index_select would go some way in answering this question, but looking at documentation 14 this doesn’t seem to be the answer.
In my particular case, a is a 5 dimensional vector and idx is a Nx5 vector. So the output (after subscripting with something like a[idx]) I’d expect is a (N,) shaped vector.
This (unanswered) question is similar: More efficient way of indexing to avoid loop 65
(this is a crosspost from stackoverflow 42.) |
st84013 | @sachinruk Thanks for mentioning, and I would like to know how to extend this to higher dimension.
If we have a 4-D tensor of [batch, channel, height, width], and a set of [x, y] coordinates with shape [batch, num_points, 2] , how to select this 4-D tensor without loop?
Here is my implementation with loop:
Suppose the 4-D feature map [10, 256, 64, 64] in dimension, and coordinates is [10, 68, 2] in dimension(for each batch, there are 68 points that we wanted to select from feature map)
coord_features = torch.zeros(10, 68, 256)
feature_map = feature_map.transpose(1,2).transpose(2,3) #reshape to [10, 64, 64, 256]
for i in xrange(coords.shape[0]):  # loop through each sample in batch
    for j in xrange(coords.shape[1]):  # loop through each point
        # select coordinate on feature map
        coord_features[i][j] = feature_map[i][coords[i][j][1].type(torch.int64)][coords[i][j][0].type(torch.int64)] |
st84014 | @ptrblck’s answer does work, but turns out this was the answer I was looking for as shown on SO:
a[idx.t().chunk(chunks=2,dim=0)] = 1 |
st84015 | To do this with a batch dimension, you can just create index tensors for the other dimensions as needed.
For a simple example, let’s say we have a tensor data of grayscale images with size b, h, w and a set of random pixels coordinates coords with size b, 2 of pixel coordinates (one pixel per image in the batch) that we want to zero out. We can do this with:
bselect = torch.arange(data.size(0), dtype=torch.long)
data[bselect, coords[:, 0], coords[:, 1]] = 0
Your example is a little more complicated, because you have multiple points per image, but you can just do something like bselect = torch.arange(batch, dtype=torch.long)[:, None].expand(batch, num_points).reshape(-1).
For the channel dimension, I think you can just use a single integer 0, but you may need to create a vector of zeros of the same length as bselect. |
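For example, putting that together for the multi-point case (a sketch with made-up sizes):
import torch

b, h, w, num_points = 4, 8, 8, 3
data = torch.randn(b, h, w)
coords = torch.randint(0, h, (b, num_points, 2))  # (row, col) per point

# Repeat each batch index once per point, then index all points in one shot.
bselect = torch.arange(b).unsqueeze(1).expand(b, num_points).reshape(-1)
rows = coords[:, :, 0].reshape(-1)
cols = coords[:, :, 1].reshape(-1)

data[bselect, rows, cols] = 0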
st84016 | Hello,
I’m trying to train an adversarial network and I’m using BCELoss from PyTorch. I’ve provided the discriminator network, training code, error message snippet below. I’m training the network only for 5 epochs and there is no error generated for the initial 4 epochs but stuck into a runtime error at the fifth epoch. Any suggestions, please?
Discriminator Network
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.Dconv1 = nn.Sequential(
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(1, 5)),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.Dconv2 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=(1, 3)),
            nn.LeakyReLU(negative_slope=0.2)
        )
        self.Dfc1 = nn.Sequential(
            nn.Linear(in_features=1536, out_features=256),
            nn.LeakyReLU(negative_slope=0.2),
            nn.Dropout(),
            nn.Linear(in_features=256, out_features=1),
            nn.Sigmoid()
        )
        self._initialize_weights()

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                n = m.weight.size(1)
                m.weight.data.normal_(0, 0.01)
                m.bias.data.zero_()

    def forward(self, input):
        out = self.Dconv1(input)
        out = self.Dconv2(out)
        out = out.reshape(-1, 512*1*3)
        out = self.Dfc1(out)
        return out
Training Code Snippet
s1_length = len(s1_source)
s2_length = len(s2_source)
t_length= len(s1_target)
logging.warning("Iteration: %d, S1 length: %d, S2 length: %d, Target length: %d", i, s1_length, s2_length, t_length)
s1_error_fake = loss(s1_source, ones_target(s1_length))
s1_error_real = loss(s1_target, zeros_target(t_length))
s1_t_dis_loss = s1_error_fake + s1_error_real
s2_error_fake = loss(s2_source, ones_target(s2_length))
s2_error_real = loss(s2_target, zeros_target(t_length))
s2_t_dis_loss = s2_error_fake + s2_error_real
logging.warning("S1 Disc loss: %s, S2 Disc Loss: %s", s1_t_dis_loss.data, s2_t_dis_loss.data)
Error Message
146it [00:02, 53.81it/s]
146it [00:02, 56.52it/s]
146it [00:02, 57.55it/s]
42it [00:00, 58.23it/s]
RuntimeError Traceback (most recent call last)
in
63 logging.warning(“Iteration: %d, S1 length: %d, S2 length: %d, Target length: %d”, i, s1_length, s2_length, t_length)
64
—> 65 s1_error_fake = loss(s1_source, ones_target(s1_length))
66 s1_error_real = loss(s1_target, zeros_target(t_length))
67 s1_t_dis_loss = s1_error_fake + s1_error_real
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
–> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
510 @weak_script_method
511 def forward(self, input, target):
–> 512 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
513
514
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2111
2112 return torch._C._nn.binary_cross_entropy(
-> 2113 input, target, weight, reduction_enum)
2114
2115
RuntimeError: reduce failed to synchronize: device-side assert triggered
|
st84017 | Hi! Maybe this helps:
https://github.com/pytorch/pytorch/issues/5560 20
You can also try to run your code on the CPU; you will receive a much more informative error message then. |
st84018 | Thank you for the link. I actually read through that link yesterday and as you can see in my discriminator class I’ve used a Sigmoid at the final output. I also used the target tensors using the following code snippet -
def ones_target(size):
    '''
    Tensor containing ones, with shape = size
    '''
    data = Variable(torch.ones(size, 1)).cuda(gpu_id)
    return data

def zeros_target(size):
    '''
    Tensor containing zeros, with shape = size
    '''
    data = Variable(torch.zeros(size, 1)).cuda(gpu_id)
    return data
I'm pretty sure I've done all the fixes that link suggests. I'm now confused about what could be wrong. What is really annoying is that the error appears only after running a few epochs! |
st84019 | Yes, it is confusing.
Did you try to print out the loss? Maybe it goes to infinity or vanishes, causing the overflow? |
st84020 | Hi,
I am trying to implement an efficient histogram method in PyTorch.
I know PyTorch already has a histc and bincount though there is no 2D version of that.
I am looking to implement it in parallel to be fast; i.e. for a vector of k values and n bins, I want to do the k*n operations in parallel.
I have looked at the max grid & block sizes for CUDA and it is doable given my number of bins and the size of the vector.
How should I go about and do this to be integrated with PyTorch? |
st84021 | Somehow when I do the install it installs torchvision but not torch. Command I am running as dictated from the main website:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
then I do conda list but look:
$ conda list
# packages in environment at /home/ubuntu/anaconda3/envs/automl:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
alabaster 0.7.12 py36_0
anaconda 2019.07 py36_0
anaconda-client 1.7.2 py36_0
anaconda-project 0.8.3 py_0
asn1crypto 0.24.0 py36_0
astroid 2.2.5 py36_0
astropy 3.2.1 py36h7b6447c_0
atomicwrites 1.3.0 py36_1
attrs 19.1.0 py36_1
babel 2.7.0 py_0
backcall 0.1.0 py36_0
backports 1.0 py_2
backports.os 0.1.1 py36_0
backports.shutil_get_terminal_size 1.0.0 py36_2
beautifulsoup4 4.7.1 py36_1
bitarray 0.9.3 py36h7b6447c_0
bkcharts 0.2 py36_0
blas 1.0 mkl
bleach 3.1.0 py36_0
blosc 1.16.3 hd408876_0
bokeh 1.2.0 py36_0
boto 2.49.0 py36_0
bottleneck 1.2.1 py36h035aef0_1
bzip2 1.0.8 h7b6447c_0
ca-certificates 2019.5.15 0
cairo 1.14.12 h8948797_3
certifi 2019.6.16 py36_0
cffi 1.12.3 py36h2e261b9_0
chardet 3.0.4 py36_1
click 7.0 py36_0
cloudpickle 1.2.1 py_0
clyent 1.2.2 py36_1
colorama 0.4.1 py36_0
contextlib2 0.5.5 py36_0
cryptography 2.7 py36h1ba5d50_0
cudatoolkit 10.0.130 0
curl 7.65.2 hbc83047_0
cycler 0.10.0 py36_0
cython 0.29.12 py36he6710b0_0
cytoolz 0.10.0 py36h7b6447c_0
dask 2.1.0 py_0
dask-core 2.1.0 py_0
dbus 1.13.6 h746ee38_0
decorator 4.4.0 py36_1
defusedxml 0.6.0 py_0
distributed 2.1.0 py_0
docutils 0.14 py36_0
entrypoints 0.3 py36_0
et_xmlfile 1.0.1 py36_0
expat 2.2.6 he6710b0_0
fastcache 1.1.0 py36h7b6447c_0
filelock 3.0.12 py_0
flask 1.1.1 py_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
fribidi 1.0.5 h7b6447c_0
get_terminal_size 1.0.0 haa9412d_0
gevent 1.4.0 py36h7b6447c_0
glib 2.56.2 hd408876_0
glob2 0.7 py_0
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py36h10f8cd9_2
graphite2 1.3.13 h23475e2_0
greenlet 0.4.15 py36h7b6447c_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.9.0 py36h7918eee_0
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.4 hb1b8bf9_0
heapdict 1.0.0 py36_2
html5lib 1.0.1 py36_0
icu 58.2 h9c2bf20_1
idna 2.8 py36_0
imageio 2.5.0 py36_0
imagesize 1.1.0 py36_0
importlib_metadata 0.17 py36_1
intel-openmp 2019.4 243
ipykernel 5.1.1 py36h39e3cac_0
ipython 7.6.1 py36h39e3cac_0
ipython_genutils 0.2.0 py36_0
ipywidgets 7.5.0 py_0
isort 4.3.21 py36_0
itsdangerous 1.1.0 py36_0
jbig 2.1 hdba287a_0
jdcal 1.4.1 py_0
jedi 0.13.3 py36_0
jeepney 0.4 py36_0
jinja2 2.10.1 py36_0
joblib 0.13.2 py36_0
jpeg 9b h024ee3a_2
json5 0.8.4 py_0
jsonschema 3.0.1 py36_0
jupyter 1.0.0 py36_7
jupyter_client 5.3.1 py_0
jupyter_console 6.0.0 py36_0
jupyter_core 4.5.0 py_0
jupyterlab 1.0.2 py36hf63ae98_0
jupyterlab_server 1.0.0 py_0
keyring 18.0.0 py36_0
kiwisolver 1.1.0 py36he6710b0_0
krb5 1.16.1 h173b8e3_7
lazy-object-proxy 1.4.1 py36h7b6447c_0
libarchive 3.3.3 h5d8350f_5
libcurl 7.65.2 h20c2e04_0
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
liblief 0.9.0 h7725739_2
libpng 1.6.37 hbc83047_0
libsodium 1.0.16 h1bed415_0
libssh2 1.8.2 h1ba5d50_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.0.10 h2733197_2
libtool 2.4.6 h7b6447c_5
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.9 hea5a465_1
libxslt 1.1.33 h7d1a2b0_0
llvmlite 0.29.0 py36hd408876_0
locket 0.2.0 py36_1
lxml 4.3.4 py36hefd8a0e_0
lz4-c 1.8.1.2 h14c3975_0
lzo 2.10 h49e0be7_2
markupsafe 1.1.1 py36h7b6447c_0
matplotlib 3.1.0 py36h5429711_0
mccabe 0.6.1 py36_1
mistune 0.8.4 py36h7b6447c_0
mkl 2019.4 243
mkl-service 2.0.2 py36h7b6447c_0
mkl_fft 1.0.12 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
mock 3.0.5 py36_0
more-itertools 7.0.0 py36_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.1 hdf1c602_3
mpmath 1.1.0 py36_0
msgpack-python 0.6.1 py36hfd86e86_1
multipledispatch 0.6.0 py36_0
nbconvert 5.5.0 py_0
nbformat 4.4.0 py36_0
ncurses 6.1 he6710b0_1
networkx 2.3 py_0
ninja 1.9.0 py36hfd86e86_0
nltk 3.4.4 py36_0
nose 1.3.7 py36_2
notebook 6.0.0 py36_0
numba 0.45.0 py36h962f231_0
numexpr 2.6.9 py36h9e4a6bb_0
numpy 1.16.4 py36h7e9f1db_0
numpy-base 1.16.4 py36hde5b4d6_0
numpydoc 0.9.1 py_0
olefile 0.46 py36_0
openpyxl 2.6.2 py_0
openssl 1.1.1c h7b6447c_1
packaging 19.0 py36_0
pandas 0.24.2 py36he6710b0_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
pango 1.42.4 h049681c_0
parso 0.5.0 py_0
partd 1.0.0 py_0
patchelf 0.9 he6710b0_3
path.py 12.0.1 py_0
pathlib2 2.3.4 py36_0
patsy 0.5.1 py36_0
pcre 8.43 he6710b0_0
pep8 1.7.1 py36_0
pexpect 4.7.0 py36_0
pickleshare 0.7.5 py36_0
pillow 6.1.0 py36h34e0f95_0
pip 19.1.1 py36_0
pixman 0.38.0 h7b6447c_0
pkginfo 1.5.0.1 py36_0
pluggy 0.12.0 py_0
ply 3.11 py36_0
prometheus_client 0.7.1 py_0
prompt_toolkit 2.0.9 py36_0
psutil 5.6.3 py36h7b6447c_0
ptyprocess 0.6.0 py36_0
py 1.8.0 py36_0
py-lief 0.9.0 py36h7725739_2
pycodestyle 2.5.0 py36_0
pycosat 0.6.3 py36h14c3975_0
pycparser 2.19 py36_0
pycrypto 2.6.1 py36h14c3975_9
pycurl 7.43.0.3 py36h1ba5d50_0
pyflakes 2.1.1 py36_0
pygments 2.4.2 py_0
pylint 2.3.1 py36_0
pyodbc 4.0.26 py36he6710b0_0
pyopenssl 19.0.0 py36_0
pyparsing 2.4.0 py_0
pyqt 5.9.2 py36h05f1152_2
pyrsistent 0.14.11 py36h7b6447c_0
pysocks 1.7.0 py36_0
pytables 3.5.2 py36h71ec239_1
pytest 5.0.1 py36_0
pytest-arraydiff 0.3 py36h39e3cac_0
pytest-astropy 0.5.0 py36_0
pytest-doctestplus 0.3.0 py36_0
pytest-openfiles 0.3.2 py36_0
pytest-remotedata 0.3.1 py36_0
python 3.6.8 h0371630_0
python-dateutil 2.8.0 py36_0
python-libarchive-c 2.8 py36_11
pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch
pytz 2019.1 py_0
pywavelets 1.0.3 py36hdd07704_1
pyyaml 5.1.1 py36h7b6447c_0
pyzmq 18.0.0 py36he6710b0_0
qt 5.9.7 h5867ecd_1
qtawesome 0.5.7 py36_1
qtconsole 4.5.1 py_0
qtpy 1.8.0 py_0
readline 7.0 h7b6447c_5
requests 2.22.0 py36_0
rope 0.14.0 py_0
ruamel_yaml 0.15.46 py36h14c3975_0
scikit-image 0.15.0 py36he6710b0_0
scikit-learn 0.21.2 py36hd81dba3_0
scipy 1.3.0 py36h7c811a0_0
seaborn 0.9.0 py36_0
secretstorage 3.1.1 py36_0
send2trash 1.5.0 py36_0
setuptools 41.0.1 py36_0
simplegeneric 0.8.1 py36_2
singledispatch 3.4.0.3 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.12.0 py36_0
snappy 1.1.7 hbae5bb6_3
snowballstemmer 1.9.0 py_0
sortedcollections 1.1.2 py36_0
sortedcontainers 2.1.0 py36_0
soupsieve 1.8 py36_0
sphinx 2.1.2 py_0
sphinxcontrib 1.0 py36_1
sphinxcontrib-applehelp 1.0.1 py_0
sphinxcontrib-devhelp 1.0.1 py_0
sphinxcontrib-htmlhelp 1.0.2 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.2 py_0
sphinxcontrib-serializinghtml 1.1.3 py_0
sphinxcontrib-websupport 1.1.2 py_0
spyder 3.3.6 py36_0
spyder-kernels 0.5.1 py36_0
sqlalchemy 1.3.5 py36h7b6447c_0
sqlite 3.29.0 h7b6447c_0
statsmodels 0.10.0 py36hdd07704_0
sympy 1.4 py36_0
tblib 1.4.0 py_0
terminado 0.8.2 py36_0
testpath 0.4.2 py36_0
tk 8.6.8 hbc83047_0
toolz 0.10.0 py_0
torchvision 0.3.0 py36_cu10.0.130_1 pytorch
tornado 6.0.3 py36h7b6447c_0
tqdm 4.32.1 py_0
traitlets 4.3.2 py36_0
typed-ast 1.3.4 py36h7b6447c_0
unicodecsv 0.14.1 py36_0
unixodbc 2.3.7 h14c3975_0
urllib3 1.24.2 py36_0
wcwidth 0.1.7 py36_0
webencodings 0.5.1 py36_1
werkzeug 0.15.4 py_0
wheel 0.33.4 py36_0
widgetsnbextension 3.5.0 py36_0
wrapt 1.11.2 py36h7b6447c_0
wurlitzer 1.0.2 py36_0
xlrd 1.2.0 py36_0
xlsxwriter 1.1.8 py_0
xlwt 1.3.0 py36_0
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zeromq 4.3.1 he6710b0_3
zict 1.0.0 py_0
zipp 0.5.1 py_0
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
stackoverflow.com: How is it that torch is not installed by torchvision is? 2 (python, machine-learning, deep-learning, pytorch; asked by Charlie Parker on 27 Jul 19 UTC) |
st84022 | I was told I do have pytorch installed but my script keeps giving me this error:
$ cat nohup.out
Traceback (most recent call last):
File "high_performing_data_point_models_cifar10.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
does that mean that I need to install it as pytorch and not torch? Is this not weird? |
st84023 | Note I am running this on an AWS p3.2xlarge instance. This keeps happening: when I log out and then go back in, my torch package goes missing…!!! |
st84024 | Hi,
This is AJAX to flask server project.
For more info about this project I have a couple postings about it here, please see link-1 link-2
When I run the python code in the terminal I get the TypeError: ‘top_block_22’ object is not callable error. The program starts running without problems, but once I start moving the slider the program will give that error.
More details:
This is AJAX to flask server project. I created a ‘Slider’ folder inside Flask directory : /home/fit-pc/my_flask_app/virtualenv/Slider. In this folder I have Templates and Static folders. Inside Templates folder I have my index.html file (see below). This index.html file had the script for the roundSlider widget that I am trying to use to control some variable value inside my python code ‘top_block_22.py’. My main python code is in the main Slider folder. Static folder is just empty.
Please, I need your help to solve this problem.
error log:
fit-pc@fitpc-fitlet2:~$ python /home/fit-pc/my_flask_app/virtualenv/Slider/top_block_22.py
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 269-962-008
127.0.0.1 - - [26/Jul/2019 11:20:11] "GET / HTTP/1.1" 200 -
gr::log :INFO: audio source - Audio sink arch: alsa
127.0.0.1 - - [26/Jul/2019 11:20:15] "GET /valueofslider?slide_val=903 HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1615, in full_dispatch_request
    return self.finalize_request(rv)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1630, in finalize_request
    response = self.make_response(rv)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1740, in make_response
    rv = self.response_class.force_type(rv, request.environ)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wrappers.py", line 921, in force_type
    response = BaseResponse(*_run_wsgi_app(response, environ))
  File "/usr/lib/python2.7/dist-packages/werkzeug/wrappers.py", line 59, in _run_wsgi_app
    return _run_wsgi_app(*args)
  File "/usr/lib/python2.7/dist-packages/werkzeug/test.py", line 923, in run_wsgi_app
    app_rv = app(environ, start_response)
TypeError: 'top_block_22' object is not callable
And here is the Python code:
from gnuradio import analog
from gnuradio import audio
from gnuradio import blocks
from gnuradio import eng_notation
from gnuradio import gr
from gnuradio.eng_option import eng_option
from gnuradio.filter import firdes
from optparse import OptionParser
from flask import Flask, render_template, jsonify, request, redirect, url_for
from random import randint

class top_block_22(gr.top_block):
    def __init__(self, slide_val):
        self.slide_val = slide_val
        gr.top_block.__init__(self, "Top Block 22")

        ##################################################
        # Variables
        ##################################################
        self.samp_rate = samp_rate = 32000

        ##################################################
        # Blocks
        ##################################################
        self.blocks_add_xx = blocks.add_vff(1)
        self.audio_sink = audio.sink(32000, '', True)
        self.analog_sig_source_x_1 = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, 440, 0.4, 0)
        self.analog_sig_source_x_0 = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, 350, 0.4, 0)
        self.analog_noise_source_x_0 = analog.noise_source_f(analog.GR_GAUSSIAN, 0.005, -42)

        ##################################################
        # Connections
        ##################################################
        self.connect((self.analog_noise_source_x_0, 0), (self.blocks_add_xx, 2))
        self.connect((self.analog_sig_source_x_0, 0), (self.blocks_add_xx, 0))
        self.connect((self.analog_sig_source_x_1, 0), (self.blocks_add_xx, 1))
        self.connect((self.blocks_add_xx, 0), (self.audio_sink, 0))

app = Flask(__name__)

@app.route('/')
def hex_color():
    return render_template("index.html")

@app.route('/valueofslider')
def slide():
    slide_val = request.args.get('slide_val')
    return top_block_22(slide_val)

def main(top_block_cls=top_block_22, options=None):
    tb = top_block_cls()
    tb.start()
    try:
        raw_input('Press Enter to quit: ')
    except EOFError:
        pass
    tb.stop()
    tb.wait()
    samp_rate = int(slide_val) + 100
    print(samp_rate)
    return(slide_val)  # Still need to return or get TypeError

if __name__ == '__main__':
    app.run(debug=True)
And this is the index.html script:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>jQuery roundSlider - JS Bin</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/roundSlider/1.3.2/roundslider.min.css" rel="stylesheet" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/roundSlider/1.3.2/roundslider.min.js"></script>
</head>
<body>
<!-- Only html needed -->
<div id="slider"></div>
<script>
var val;
$("#slider").roundSlider({
    radius: 215,
    min: 0,
    max: 40000,
    change: function () {
        var obj1 = $("#slider").data("roundSlider");
        val = obj1.getValue();
        value: 1
        $.getJSON('/valueofslider', {
            slide_val: val
        });
    }
});
</script>
</body> |
st84025 | Could you replace the line
gr.top_block.__init__(self, "Top Block 22")
with the code given below?
super(self, top_block_22).__init__(self, "Top Block 22")
Hope this helps. |
st84026 | Salam alaykum Mazhar, and thanks for your attempt to help. I tried super(self, top_block_22).__init__(self, "Top Block 22"), but it gave another error: TypeError: super() argument 1 must be type, not top_block_22.
Hadad |
st84027 | Apologies, I committed an error. Could you try this?
super(top_block_22, self).__init__(self, "Top Block 22") |
st84028 | This time I got this error:
super(top_block_22, self).__init__(self, "Top Block 22")
TypeError: __init__() takes at most 2 arguments (3 given) |
st84029 | In my architecture, I have to reuse a module with the same weights as the original one. Is the following the right way to realize this?
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        ...
    def forward(self, x1, x2, emb, input_sel):
        if input_sel == 0:
            x = x1
        else:
            x = x2
        x = ...
        ...

class Top(nn.Module):
    def __init__(self):
        super(Top, self).__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()
        ...
    def forward(self, input, emb1, emb2):
        encoder_output = self.encoder(input, input, emb1, 0)
        decoder_output = self.decoder(encoder_output)
        code_output = self.encoder(decoder_output, decoder_output, emb1, 1)
        return encoder_output, decoder_output, code_output
Can the two encoder calls share the same weights with this method? |
st84030 | Hi nkcdy,
Yes, the code that you’ve shared would be one way to have shared weights, and should work for your purpose. |
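For example, a tiny sketch confirming that calling the same submodule twice reuses one set of weights (the Linear layers stand in for the real Encoder/Decoder):
import torch
import torch.nn as nn

class Top(nn.Module):
    def __init__(self):
        super(Top, self).__init__()
        self.encoder = nn.Linear(8, 8)  # stand-in for Encoder()
        self.decoder = nn.Linear(8, 8)  # stand-in for Decoder()

    def forward(self, x):
        enc = self.encoder(x)     # first use of the encoder
        dec = self.decoder(enc)
        code = self.encoder(dec)  # second use: same weights, gradients accumulate
        return code

top = Top()
print(sum(p.numel() for p in top.parameters()))  # parameters of only two Linear layers

top(torch.randn(2, 8)).sum().backward()
print(top.encoder.weight.grad.shape)  # one grad tensor holding contributions from both uses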
st84031 | Hi everyone, I
import midi
import os
from midi_to_statematrix import *
import glob
import numpy as np
batch_width = 10 # number of sequences in a batch
batch_len = 16*8 # length of each sequence
division_len = 16 # interval between possible start locations
piece = 0
pieces = loadPieces("MIDIS_train")
input = torch.FloatTensor(MusicToTensor(pieces))
lstm_input_size = batch_len
h1 = 128
hidden = torch.zeros(65, n_hidden, 2)
num_train = 2
batch_size = 1
num_epochs = 1000
print(np.shape(input))
print(np.shape(hidden))
def loadPieces(dirpath):
    pieces = {}
    for fname in os.listdir(dirpath):
        if fname[-4:] not in ('.mid', '.MID'):
            continue
        name = fname[:-4]
        tensor = midiToNoteStateMatrix(os.path.join(dirpath, fname))
        if len(tensor) < batch_len:
            continue
        pieces[name] = tensor
        print("Loaded {}".format(name))
    return pieces

def MusicToTensor(piece):
    dirpath = "MIDIS_train"
    pieces = {}
    for fname in os.listdir(dirpath):
        if fname[-4:] not in ('.mid', '.MID'):
            continue
        matrix = midiToNoteStateMatrix(os.path.join(dirpath, fname))
        tensor = torch.FloatTensor(matrix)
        return tensor
import torch.nn as nn
Here we define our model as a class
class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, batch_size, output_dim=1,
                 num_layers=2):
        super(LSTM, self).__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.batch_size = batch_size
        self.num_layers = num_layers
        # Define the LSTM layer
        self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers)
        # Define the output layer
        self.linear = nn.Linear(self.hidden_dim, output_dim)

    def init_hidden(self):
        # This is what we'll initialise our hidden state as
        return (torch.zeros(self.num_layers, self.batch_size, self.hidden_dim),
                torch.zeros(self.num_layers, self.batch_size, self.hidden_dim))

    def forward(self, input):
        # Forward pass through LSTM layer
        # shape of lstm_out: [input_size, batch_size, hidden_dim]
        # shape of self.hidden: (a, b), where a and b both
        # have shape (num_layers, batch_size, hidden_dim).
        lstm_out, self.hidden = self.lstm(input.view(len(input), self.batch_size, -1))
        # Only take the output from the final timestep
        # Can pass on the entirety of lstm_out to the next layer if it is a seq2seq prediction
        y_pred = self.linear(lstm_out[-1].view(self.batch_size, -1))
        return y_pred.view(-1)
model = LSTM(lstm_input_size, h1, batch_size=num_train, output_dim=output_dim, num_layers=num_layers)
loss_fn = torch.nn.MSELoss(size_average=False)
optimiser = torch.optim.Adam(model.parameters(), lr=learning_rate)
#####################
# Train model
#####################
import torch.optim as optim

# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Model instance
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
def get_accuracy(logit, target, batch_size):
    ''' Obtain accuracy for training round '''
    corrects = (torch.max(logit, 1)[1].view(target.size()).data == target.data).sum()
    accuracy = 100.0 * corrects/batch_size
    return accuracy.item()

for epoch in range(num_epochs):  # loop over the dataset multiple times
    train_running_loss = 0.0
    train_acc = 0.0
    model.train()

    # zero the parameter gradients
    optimizer.zero_grad()

    # reset hidden states
    model.hidden = model.init_hidden()

    # get the inputs
    inputs = input
    inputs = inputs.view(batch_size, 65*78*2)

    # forward + backward + optimize
    outputs = model(inputs)
    loss = criterion(outputs)
    loss.backward()
    optimizer.step()

    train_running_loss += loss.detach().item()
    train_acc += get_accuracy(outputs, labels, batch_size)

    model.eval()
    print('Epoch: %d | Loss: %.4f | Train Accuracy: %.2f'
          %(epoch, train_running_loss / i, train_acc/i))
The input is a matrix of 78 columns, 65 rows and a vector of 2 integers for each element of the matrix. I have tried to adjust all the dimensions but I got the error:
“RuntimeError: input.size(-1) must be equal to input_size. Expected 128, got 5070” |
st84032 | Could you please post the complete error? Or point to which line is generating the error? |
st84033 | Let’s say we have dropout or something else that zero’s out part of the state space. Would a forward pass and backward pass still take the same time compared to the non-zero’d out input? |
st84034 | And if so, why. We should at least be able to avoid matrix multiplications and just sum the biases. There should be a shortcut.
Note: I am not talking about ‘mostly zero’ sparse matrices. I am talking about something more fundamental; you do not need to multiply by a matrix if the input is zero. |
st84035 | In my honest opinion, it comes down to a trade-off. If you want to conditionally compute some operation, then the cost of the condition should also be considered.
Total cost of operations = cost of conditional check * number of elements + cost of multiply-add * number of non-zero elements
In most cases, the reduction in cost from skipping multiply-adds is not enough to compensate for the additional cost of the conditional checks.
For sparse tensors, where more than 70% of the elements are zero, the trade off favors having checks.
Assumption: Cost of multiply-add of 32-bit number is O(32 squared) while cost of checking whether 32-bit number is non-zero is O(32). |
st84036 | Epoch: 0 – Training Loss: 0.057411
Epoch: 1 – Training Loss: 0.056986
Epoch: 2 – Training Loss: 0.057286
Epoch: 3 – Training Loss: 0.057266
Does anyone know why it is so slow?
My data set has 1248 images for training.
CODE:
model = models.densenet121(pretrained=True)
# keep the feature parameters frozen
for param in model.parameters():
    param.requires_grad = False
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
for e in range(epochs):
    # keep track of training and validation loss
    running_loss = 0.0
    running_corrects = 0.0
    for inputs, label in (dataloaders['train']):
        model.train()
        # if GPU is available
        if train_on_gpu:
            inputs, label = inputs.cuda(), label.cuda()
        optimizer.zero_grad()
        with torch.set_grad_enabled(True):
            logps = model(inputs)
            _, preds = torch.max(logps, 1)  # new validation technique
            loss = criterion(logps, label)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            running_corrects += torch.sum(preds == label.data) |
st84037 | I haven't checked, but is the softmax already incorporated inside the DenseNet model? |
st84038 | Could you tell me why you are using this line?
Sam_Fst:
for param in model.parameters():
param.requires_grad = False
It seems to me that setting requires_grad to False prevents gradients from being calculated for those parameters, which prevents any optimization from taking place. |
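For example, one common setup for this kind of fine-tuning (a sketch, not necessarily what the original poster intended) is to replace the classifier with a new trainable head and pass only its parameters to the optimizer; note that NLLLoss also expects log-probabilities:
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.densenet121(pretrained=True)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# New classifier head; its parameters are trainable by default.
# The number of classes (2 here) is just an example.
model.classifier = nn.Sequential(
    nn.Linear(1024, 2),    # densenet121's classifier takes 1024 input features
    nn.LogSoftmax(dim=1)   # NLLLoss expects log-probabilities
)

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)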
st84039 | There have been a couple of general discussions about accumulating gradients before before (How to implement accumulated gradient? 3 and How to implement accumulated gradient in pytorch (i.e. iter_size in caffe prototxt) 3).
But I want to know how loss momentum behaves when we’re training a model like this:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# suppose we want to accumulate gradients for 'N' steps:
for i, input in enumerate(data_loader):
    output = model(input)
    loss = criterion(output, input) / N
    loss.backward()
    if i % N == 0 and i > 0:
        optimizer.step()
        optimizer.zero_grad()
Are the loss momentum values overwritten at each call of loss.backward() or (desirably) when optimizer.step() is called? |
st84040 | Hi Ali,
The loss variable is reassigned every iteration due to the line:
loss = criterion(output, input) / N
The momentum of the gradient is maintained by the optimizer for the parameters/tensors that it is tracking, in this case those are model.parameters().
Hope this helps. |
st84041 | So I am trying to do filtering via Conv2d: I define my kernel and then change the weights of Conv2d, and I thought that should be it, but the results do not match.
For a toy example, I define a 3x3 kernel: [[0, 0, 0],[0, 1, 0],[0, 0, 0]]. The output of using this kernel should be identical to the input, but I am not sure why that is not happening.
Here is the whole toy code:
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch
Input = torch.rand(1,2,6,6)
Guass = nn.Conv2d(Input.size(1), Input.size(1), 3, padding = 1)
KernelGauss = torch.ones(3,3)*0.
KernelGauss[1,1] = 1.
KernelGaussExpand = KernelGauss.expand(Guass.weight.size())
Guass.weight = torch.nn.Parameter(KernelGaussExpand, requires_grad=False)
OutPut = Guass(Input)
print('KernelGauss: {}' .format(KernelGauss))
print('Input: {}' .format(Input))
print('OutPut: {}' .format(OutPut))
KernelGauss: tensor([[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]])
Input: tensor([[[[0.1694, 0.0436, 0.7638, 0.4422, 0.6436, 0.1260],
[0.2136, 0.2536, 0.1094, 0.6026, 0.2102, 0.8117],
[0.6218, 0.5365, 0.9977, 0.4297, 0.4261, 0.7260],
[0.8062, 0.3274, 0.4722, 0.2773, 0.4304, 0.7448],
[0.6605, 0.8976, 0.7353, 0.7233, 0.9757, 0.2564],
[0.6651, 0.2333, 0.2524, 0.8768, 0.0754, 0.4961]],
[[0.8020, 0.6601, 0.4271, 0.3141, 0.6387, 0.8583],
[0.3664, 0.9589, 0.7841, 0.6330, 0.0851, 0.4114],
[0.4414, 0.9218, 0.1864, 0.5323, 0.4601, 0.4648],
[0.0841, 0.8478, 0.8975, 0.1383, 0.0706, 0.0810],
[0.8904, 0.7208, 0.2150, 0.0034, 0.4385, 0.7455],
[0.8900, 0.6735, 0.5207, 0.4046, 0.6463, 0.9222]]]])
OutPut: tensor([[[[1.1774, 0.9097, 1.3969, 0.9623, 1.4883, 1.1903],
[0.7860, 1.4185, 1.0995, 1.4416, 0.5013, 1.4290],
[1.2692, 1.6643, 1.3901, 1.1680, 1.0922, 1.3967],
[1.0962, 1.3812, 1.5756, 0.6216, 0.7070, 1.0318],
[1.7568, 1.8244, 1.1563, 0.9327, 1.6201, 1.2078],
[1.7611, 1.1128, 0.9791, 1.4874, 0.9277, 1.6242]],
[[1.0561, 0.7884, 1.2756, 0.8410, 1.3670, 1.0690],
[0.6647, 1.2972, 0.9782, 1.3203, 0.3800, 1.3077],
[1.1479, 1.5430, 1.2688, 1.0467, 0.9709, 1.2754],
[0.9749, 1.2599, 1.4543, 0.5003, 0.5857, 0.9105],
[1.6355, 1.7031, 1.0349, 0.8114, 1.4988, 1.0865],
[1.6398, 0.9915, 0.8578, 1.3661, 0.8064, 1.5029]]]],
grad_fn=<ThnnConv2DBackward>) |
st84042 | You forgot to deactivate the bias term (which is True by default)
Edit: @InnovArul was a few seconds faster, so nvm |
st84043 | Guass = nn.Conv2d(Input.size(1), Input.size(1), 3, padding = 1, bias=False)
Also note your typo in the layers name. |
st84044 | nn.Conv2d(Input.size(1), Input.size(1), 3, padding = 1, bias=False)
@justusschock |
st84045 | @InnovArul @justusschock
I deactivated it, but it is still not giving the answer it should…
Update:
It is a little weird…
When i do
Input = torch.rand(1,1,6,6)
it works
but when i do:
Input = torch.rand(1,2,6,6)
it doesnot |
st84046 | OK, well, you have 2 channels (which I didn't notice). The conv layer will multiply the kernel with the input and sum the results over the input channels at every position.
Try having only one channel and check the results. |
st84047 | Yeah, exactly!
I just noticed that…
Do you know what I should do to make it work for a batch size and channel size greater than 1? |
st84048 | you can use group convolutions.
Guass = nn.Conv2d(Input.size(1), Input.size(1), 3, padding = 1, bias=False, groups=Input.size(1)) |
st84049 | And you could also use the functional API, which removes the necessity to change the weights by hand, since you can simply pass the weights to the function. |
st84050 | x = torch.rand(1, 1, 64, 64)
gauss_kernel = torch.Tensor([[0, 0, 0], [0, 1, 0], [0, 0, 0]]).unsqueeze(0).unsqueeze(0)
out = F.conv2d(x, gauss_kernel, padding=1)
print(x)
print(gauss_kernel)
print(out)
tensor([[[[0.7888, 0.7222, 0.9251, ..., 0.1763, 0.8585, 0.7747],
[0.8203, 0.2307, 0.4474, ..., 0.8498, 0.3399, 0.2610],
[0.0001, 0.2678, 0.2113, ..., 0.9698, 0.3422, 0.1166],
...,
[0.7921, 0.1250, 0.1268, ..., 0.8102, 0.1775, 0.4617],
[0.1761, 0.6137, 0.6097, ..., 0.0786, 0.8547, 0.4948],
[0.7365, 0.6266, 0.6305, ..., 0.6367, 0.0811, 0.8240]]]])
tensor([[[[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]]]])
tensor([[[[0.7888, 0.7222, 0.9251, ..., 0.1763, 0.8585, 0.7747],
[0.8203, 0.2307, 0.4474, ..., 0.8498, 0.3399, 0.2610],
[0.0001, 0.2678, 0.2113, ..., 0.9698, 0.3422, 0.1166],
...,
[0.7921, 0.1250, 0.1268, ..., 0.8102, 0.1775, 0.4617],
[0.1761, 0.6137, 0.6097, ..., 0.0786, 0.8547, 0.4948],
[0.7365, 0.6266, 0.6305, ..., 0.6367, 0.0811, 0.8240]]]])
This way you don’t have to wrap your kernel into a Parameter and you don’t have to modify the weights by hand, it might also reduce some memory (which really is not that relevant in this small example). |
st84051 | if i have an input of size
x = torch.rand(1, 3, 64, 64)
how would you do it to have a output of size (1,3,64,64)? |
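A minimal sketch of one way to do that (assuming you still want the identity kernel): repeat the single-channel kernel once per channel and pass groups=3 to F.conv2d, so each channel is filtered independently.
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 64, 64)
kernel = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
# Weight shape (out_channels, in_channels/groups, kH, kW) = (3, 1, 3, 3)
weight = kernel.expand(x.size(1), 1, 3, 3).contiguous()
out = F.conv2d(x, weight, padding=1, groups=x.size(1))
print(out.shape)               # torch.Size([1, 3, 64, 64])
print(torch.allclose(out, x))  # True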
st84052 | I want to implement a ResNet network but I really want it to be in the sequential network form.
What I mean by sequential network form is the following:
## mdl5, from cifar10 tutorial
mdl5 = nn.Sequential(OrderedDict([
('pool1', nn.MaxPool2d(2, 2)),
('relu1', nn.ReLU()),
('conv1', nn.Conv2d(3, 6, 5)),
('pool2', nn.MaxPool2d(2, 2)),
('relu2', nn.ReLU()),
('conv2', nn.Conv2d(6, 16, 5)),
('relu3', nn.ReLU()),
('Flatten', Flatten()),
('fc1', nn.Linear(1024, 120)), # figure out equation properly
('relu4', nn.ReLU()),
('fc2', nn.Linear(120, 84)),
('relu5', nn.ReLU()),
('fc3', nn.Linear(84, 10))
]))
but of course with the NN Lego blocks being ResNet blocks.
Thanks! |
st84053 | cross posted: https://stackoverflow.com/questions/57229054/how-does-one-implement-my-own-resnet-with-torch-nn-sequential-in-pytorch 403 |
st84054 | Hello
I have been playing with a basic fully connected neuralNet using the Sequential function.
dcn.NN = torch.nn.Sequential(
torch.nn.Linear(dcn.d, dcn.hidden_d, bias=True),
torch.nn.ReLU(),
torch.nn.Linear(dcn.hidden_d, dcn.hidden_d, bias=True),
torch.nn.ReLU(),
torch.nn.Linear(dcn.hidden_d, dcn.hidden_d, bias=True),
torch.nn.ReLU(),
torch.nn.Linear(dcn.hidden_d, dcn.hidden_d, bias=True),
torch.nn.ReLU(),
torch.nn.Linear(dcn.hidden_d, dcn.output_d, bias=True),
torch.nn.Softmax(dim=1)
)
I know that PyTorch has a pre-packaged ResNet in the library, but to my understanding it is a CNN. If I want a basic fully connected neural net with a ResNet-style structure, I assume I would need to build it myself? If so, how do I do it using the Sequential function? Or do I have to do it without the Sequential method?
Thank you in advanced.
Chieh |
st84055 | you have to do it without the Sequential container, because you have to do the += operation (i.e. residual connection) |
st84056 | I also want to know. Duplicate: How to have Residual Network using only Sequential Blocks? 139 |
st84057 | Like Keras. If I have two input a, b, I want to design a model. Is it possible for the model to first take one input(ex. input a), so that model(a) become a new model/function that only takes one input(input b)?
Or are there any equivalent methods to design a network this way?
Really thanks. |
st84058 | Well, considering you can code whatever you want inside the forward function it should be possible to do “whatever” you can code. Could you put an example? |
st84059 | Of course. Thanks for you attention.
For example, There are two input: 1 is a latent vector, assume its size = (1, 500). Then we get a secondary input, which is a batch of 3D vertices coordinates, assume its size = (1000, 3), that is, there are 1000 (x, y, z) tuples.
I want to design a model that takes 500+3 as input size. That means, it takes first input, which makes it a new function whose input size is 3. Then this functions takes the secondary input and return a result. It is like functional programming.
One possible but brute-force way is just duplicate the first input 1000 times and concat them. However, I think there should be a better way. |
st84060 | Something like this?
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = create_layer()
self.flag = True
def func_input1(self, x):
return None
def func_input2(self, x1, x2, x3):
return None
def forward(self, *inputs):
if self.flag:
x = self.func_input1(*inputs)
self.flag = False
else:
x = self.func_input2(*inputs)
return x
You can really code it however you want to. You can also pass multiple inputs and call methods inside forward. |
st84061 | Thanks for your reply. However, I am talking about another problem.
I want a model that can be seen as a function f(x, y). I give it a first input a, and I want to regard f(a, y) as g(y), a function that only takes y as argument.
And in the training procedure, there may be only 1 x-input but 1000 y-input for each input data item. |
st84062 | Well, regarding my previous answer the aforementioned function can work that way. Assuming that x is a tensor it may have learnable or non-learnable parameters.
You can simply store x in a model attribute self.x = a (equivalent to define g(y) and then use the forward function as g(y) itself.
You can actively modify self.x whenever you want.
from torch import nn
import torch
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
# "defining" f(a,y)
self.a = torch.tensor(10)
#g(y)
def forward(self, y):
return y+self.a
#Instantiate model
model = Model()
for _ in range(10):
print('Defining x dynamically')
model.a = torch.tensor(torch.rand(1))
for _ in range(2):
y = torch.rand(1)
print('Running model for x = %.3f, y = %.3f' % (model.a.item(),y.item()))
run1 = model(y)
print(run1)
Running model for x = 0.456, y = 0.978
tensor([1.4344])
Running model for x = 0.456, y = 0.740
tensor([1.1966])
Defining x dynamically
Running model for x = 0.461, y = 0.845
tensor([1.3054])
Running model for x = 0.461, y = 0.893
tensor([1.3531])
Defining x dynamically
Running model for x = 0.328, y = 0.956
tensor([1.2839])
Running model for x = 0.328, y = 0.405
tensor([0.7331])
Defining x dynamically
Running model for x = 0.985, y = 0.218
tensor([1.2030])
Running model for x = 0.985, y = 0.172
tensor([1.1572])
Defining x dynamically
Running model for x = 0.479, y = 0.954
tensor([1.4331])
Running model for x = 0.479, y = 0.351
tensor([0.8302])
Defining x dynamically
Running model for x = 0.085, y = 0.192
tensor([0.2775])
Running model for x = 0.085, y = 0.371
tensor([0.4562])
Defining x dynamically
Running model for x = 0.844, y = 0.585
tensor([1.4286])
Running model for x = 0.844, y = 0.381
tensor([1.2249])
Defining x dynamically
Running model for x = 0.846, y = 0.239
tensor([1.0844])
Running model for x = 0.846, y = 0.473
tensor([1.3185])
Defining x dynamically
Running model for x = 0.506, y = 0.517
tensor([1.0226])
Running model for x = 0.506, y = 0.092
tensor([0.5975])
Defining x dynamically
Running model for x = 0.539, y = 0.125
tensor([0.6635])
Running model for x = 0.539, y = 0.834
tensor([1.3732]) |
st84063 | Thank you!
In my specific case, I need to concatenate a latent vector (size = 500) with many (1000, For now) XYZ coordinates (x, y, z). That means I want a 503 dimension vector. I just don’t know how to do that.
And each pair of inputs (a latent vector, 1000 vertices) are also in batches. |
st84064 | Well that depends a lot on what kind of data do you have in that latent space. One option could be to use a Linear transformation to map 500–>1000 and then to expand that to 1000x3 (x y z)
Anyway have a look at this paper, https:FiLM 15
They propose a good general concat method. |
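If you do go with plain concatenation, a minimal sketch (assuming shapes batch x 500 for the latent code and batch x 1000 x 3 for the vertices; the variable names are just placeholders) is to expand the latent vector along the vertex dimension; expand creates a view, so the copy only happens at the concatenation itself.
import torch

batch, n_verts = 4, 1000
z = torch.rand(batch, 500)              # latent vector per sample
verts = torch.rand(batch, n_verts, 3)   # (x, y, z) coordinates per sample

# View the latent code as (batch, 1, 500) and expand it across the vertices.
z_exp = z.unsqueeze(1).expand(-1, n_verts, -1)
features = torch.cat([z_exp, verts], dim=-1)
print(features.shape)  # torch.Size([4, 1000, 503])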
st84065 | Hello,
I am making a VQA model with co-attention over Y adaptive image features (10-100). To calculate the co-attention, I first project my question (batch * q_len * q_Dim) to (batch * q_len * new_dim).
Then I have the following for loop, in which I project each of my image features (Y * feature_dim) to (Y * new_dim):
def attn_weights(self, q2, v2, n_objs):
batch_size = n_objs.size(0)
weights = torch.zeros(batch_size, self.max_objs, 1).to(self.device)
q_proj = self.q_proj(q2)
for i in range(batch_size):
n_i = int(n_objs[i].item()) ### number of objects for the ith image in batch
v_i = v2[i] ## the ith image in batch
v_i = v_i[:n_i-1, :] ## selecting number of object in image
v_i = self.v_proj(v_i) ## projecting feature dim to new_dim
q_i = q_proj[i] ## the ith question in batch
fusion = v_i * q_i.repeat(n_i-1 ,1) ## repeat the question Y times
fusion = self.dropout(fusion)
scores = self.linear(fusion)
att_weights = softmax(scores, 0)
weights[i, :n_i -1] = att_weights
return weights
During training this causes CUDA's memory usage to skyrocket. I checked nvidia-smi, and this function alone causes 14113MiB / 15079MiB of memory to be used.
This is the error I have received:
File "main.py", line 181, in <module>
main()
File "main.py", line 166, in main
run(mod, train_loader, optimizer, train=[], prefix='train', epoch=i)
File "main.py", line 79, in run
loss.backward()
File "/opt/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 1.43 GiB (GPU 0; 14.73 GiB total capacity; 8.45 GiB already allocated; 1.04 GiB free; 4.54 GiB cached)
Is there a reason why this is happening, and is there a known way around it? If nn.Linear layers are not supposed to be called in a for loop, my next question would be how to project the Y image features for every image in the batch from (Y * feature_dim) to (Y * new_dim), where the batched tensors go from (batch * 100 * feature_dim) to (batch * 100 * new_dim), and everything after the Y valid image features (the remaining 100 - Y entries) is zero-padded, without the zero padding affecting the gradient of the projection.
Any help would be greatly appreciated! |
st84066 | Solved by JuanFMontesinos in post #2
Hi, nn.Linear works with an arbitrary number of dimensions, namely you can pass any tensor of size (BATCH, *, dim) to obtain (BATCH, *, new_dim).
Never do for loops in PyTorch, as it is equivalent to generating Siamese modules. It duplicates the computational graph as many times as you call the module.
If…
st84067 | Hi, nn.linear work with an arbitrary amount of dimensions, namely you can pass whatever tensor of size BATCH,,dim yo obtain BATCH,,new_dim.
Never do for loops in pytorch as it is equivalent to generate Siamese modules. It duplicates the computational graph as many times as you call the module.
If you would like to do something similar (linear is spatial as you can pass arbitrary dimensions) the proper way is squeezing everything into the BATCH dimension |
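For the padded case asked about above, a minimal sketch (the shapes and layer names below are just illustrative stand-ins for v_proj, q_proj, etc. from the original code): apply the projections to the whole batch at once and mask the padded positions before the softmax, so they get zero attention weight and do not influence the gradients through the attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, max_objs, feat_dim, q_dim, new_dim = 8, 100, 2048, 512, 256
v2 = torch.rand(batch, max_objs, feat_dim)      # zero-padded image features
q2 = torch.rand(batch, q_dim)                   # one question vector per sample
n_objs = torch.randint(10, 101, (batch,))       # number of valid objects per sample

v_proj = nn.Linear(feat_dim, new_dim)
q_proj = nn.Linear(q_dim, new_dim)
score = nn.Linear(new_dim, 1)

v = v_proj(v2)                                  # (batch, 100, new_dim), no loop needed
q = q_proj(q2).unsqueeze(1)                     # (batch, 1, new_dim), broadcasts over objects
fusion = v * q
scores = score(fusion).squeeze(-1)              # (batch, 100)

# Mask padded positions so softmax assigns them exactly zero weight.
mask = torch.arange(max_objs).unsqueeze(0) < n_objs.unsqueeze(1)   # (batch, 100) bool
scores = scores.masked_fill(~mask, float('-inf'))
weights = F.softmax(scores, dim=1).unsqueeze(-1)                   # (batch, 100, 1)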