instruction (stringlengths 13-150) | input (stringlengths 36-29.1k) | output (stringlengths 31-29.5k) | source (stringlengths 45-45) |
---|---|---|---|
Add 2-D tensor as column of dataframe | My dataframe looks like
INCIDENT_NUMBER
0 INC000030884498
1 INC000029956111
2 INC000029555353
3 INC000029555338
I also have a 2-D tensor for the above four incidents like
sample_concatenated_embedding=
tensor(
[[ 0.6993, -0.1427, -0.1532, ..., 0.8386, 0.5151, 0.8906],
[ 0.7382, -0.8497, 0.1363, ..., 0.8054, 0.5432, 0.9082],
[ 0.0835, -0.2431, -0.0815, ..., 0.8025, 0.5217, 0.9041],
[-0.0346, -0.2396, -0.5831, ..., 0.7591, 0.6138, 0.9649]],
grad_fn=<ViewBackward>)
The embeddings are of size [4, 161280]
I want to insert the tensor into the four consecutive rows of my DataFrame
The end Dataframe should look like
INCIDENT_NUMBER embedding
0 INC000030884498 [ 0.6993, -0.1427, -0.1532, ..., 0.8386, 0.5151, 0.8906]
1 INC000029956111 [ 0.7382, -0.8497, 0.1363, ..., 0.8054, 0.5432, 0.9082]
2 INC000029555353 [ 0.0835, -0.2431, -0.0815, ..., 0.8025, 0.5217, 0.9041]
3 INC000029555338 [-0.0346, -0.2396, -0.5831, ..., 0.7591, 0.6138, 0.9649]
If the tensor had been a Series, I could have simply used the command below
my_dataframe['embedding'] = sample_concatenated_embedding
I could use a for loop and easily insert into the dataframe like this
empty_dataframe = pd.DataFrame(columns=['incident','embedding'])
for item in range(0,4):
INCIDENT_NUMBER = my_dataframe['INCIDENT_NUMBER'].iloc[item]
temp_df = pd.DataFrame([[INCIDENT_NUMBER, sample_concatenated_embedding[item]]], columns=['incident','embedding'])
frames = [empty_dataframe, temp_df]
empty_dataframe = pd.concat(frames)
But a for loop would be inefficient. Is there any shorter way to achieve the end goal?
| If the row order of INCIDENT_NUMBER matches the row order of sample_concatenated_embedding, you can just convert sample_concatenated_embedding to a list and assign it to the new column, like this:
import pandas as pd
df = pd.DataFrame({'INCIDENT_NUMBER': ['INC000030884498', 'INC000029956111', 'INC000029555353', 'INC000029555338']})
data = [[ 0.6993, -0.1427, -0.1532, 0.8386, 0.5151, 0.8906],
[ 0.7382, -0.8497, 0.1363, 0.8054, 0.5432, 0.9082],
[ 0.0835, -0.2431, -0.0815, 0.8025, 0.5217, 0.9041],
[-0.0346, -0.2396, -0.5831, 0.7591, 0.6138, 0.9649]]
df['embedding'] = data
df.rename(columns={'INCIDENT_NUMBER': 'incident'}, inplace=True)
print(df)
incident embedding
0 INC000030884498 [0.6993, -0.1427, -0.1532, 0.8386, 0.5151, 0.8906]
1 INC000029956111 [0.7382, -0.8497, 0.1363, 0.8054, 0.5432, 0.9082]
2 INC000029555353 [0.0835, -0.2431, -0.0815, 0.8025, 0.5217, 0.9041]
3 INC000029555338 [-0.0346, -0.2396, -0.5831, 0.7591, 0.6138, 0.9649]
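If your embeddings are an actual torch tensor rather than a plain list, a small sketch of the same idea (assuming the tensor still carries a grad_fn, as in the question) would be:
df['embedding'] = sample_concatenated_embedding.detach().tolist()
detach() drops the autograd history so the tensor can be converted, and tolist() turns each row into a Python list that pandas stores as one cell per row.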
| https://stackoverflow.com/questions/67210373/ |
How can I reshape the (1006,19) result of keras regressor predictions into a (1006,1) numpy array? | I'm trying to create a stock prediction model in both PyTorch and Keras. I have already followed some tutorials online and modified them to fit my data, and it works fine.
Now I'm translating that code into a compatible Keras model. I've already created the model and did the predictions but the problem is that the regressor.predict() function from Keras returns a (1006,19) numpy array whereas when I do predictions = model(x_test) it returns a (1006,1) which is what I need for my following work so I can plot the results.
Here's my Keras code so far:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout
lookback = 20
x_train_keras, y_train_keras, x_test_keras, y_test_keras = split_data(price, lookback)
print('x_train.shape = ',x_train_keras.shape) # x_train.shape = (1006, 19, 1)
print('y_train.shape = ',y_train_keras.shape) # y_train.shape = (1006, 1)
print('x_test.shape = ',x_test_keras.shape) # x_test.shape = (252, 19, 1)
print('y_test.shape = ',y_test_keras.shape) # y_test.shape = (252, 1)
regression = Sequential()
regression.add(LSTM(units=50, return_sequences=True, kernel_initializer='glorot_uniform', input_shape=(x_train_keras.shape[1],1)))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50,kernel_initializer='glorot_uniform',return_sequences=True))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50,kernel_initializer='glorot_uniform',return_sequences=True))
regression.add(Dropout(0.2))
regression.add(LSTM(units=50,kernel_initializer='glorot_uniform',return_sequences=True))
regression.add(Dropout(0.2))
regression.add(Dense(units=1))
regression.compile(optimizer='adam', loss='mean_squared_error')
from keras.callbacks import History
history = History()
history = regression.fit(x_train_keras, y_train_keras, batch_size=30, epochs=100, callbacks=[history])
train_predict_keras = regression.predict(x_train_keras)
train_predict_keras = train_predict_keras.reshape((train_predict_keras.shape[0], train_predict_keras.shape[1]))
predict = pd.DataFrame(scaler.inverse_transform(train_predict_keras))
original = pd.DataFrame(scaler.inverse_transform(y_train_keras))
fig = plt.figure()
fig.subplots_adjust(hspace=0.2, wspace=0.2)
plt.subplot(1,2,1)
ax = sns.lineplot(x=original.index, y=original[0], label='Data', color='royalblue')
ax = sns.lineplot(x=predict.index, y=predict[0], label='Training Prediction', color='tomato')
ax.set_title('Stock Price', size=14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Cost (USD)", size = 14)
ax.set_xticklabels('', size=10)
plt.subplot(1,2,2)
ax = sns.lineplot(data=history.history.get('loss'), color='royalblue')
ax.set_xlabel("Epoch", size = 14)
ax.set_ylabel("Loss", size = 14)
ax.set_title("Training Loss", size = 14, fontweight='bold')
fig.set_figheight(6)
fig.set_figwidth(16)
# Make predictions
test_predict_keras = regression.predict(x_test_keras)
# Invert predictions
train_predict_keras = scaler.inverse_transform(train_predict_keras)
y_train_keras = scaler.inverse_transform(y_train_keras)
test_predict_keras = scaler.inverse_transform(test_predict_keras.reshape((test_predict_keras.shape[0], test_predict_keras.shape[1])))
y_test = scaler.inverse_transform(y_test_keras)
# Calculate root MSE
trainScore = math.sqrt(mean_squared_error(y_train[:,0], y_train_pred[:,0]))
print(f'Train score {trainScore:.2f} RMSE')
testScore = math.sqrt(mean_squared_error(y_test[:,0], y_test_pred[:,0]))
print(f'Test score {testScore:.2f} RMSE')
# shift train predictions for plotting
trainPredictPlot_keras = np.empty_like(price)
trainPredictPlot_keras[:, :] = np.nan
trainPredictPlot_keras[lookback:len(train_predict_keras)+lookback, :] = train_predict_keras
# shift test predictions for plotting
testPredictPlot_keras = np.empty_like(price)
testPredictPlot_keras[:, :] = np.nan
testPredictPlot_keras[len(train_predict_keras)+lookback-1:len(price)-1, :] = test_predict_keras
original = scaler.inverse_transform(price['Close'].values.reshape(-1,1))
predictions_keras = np.append(trainPredictPlot_keras, testPredictPlot_keras, axis=1)
predictions_keras = np.append(predictions_keras, original, axis=1)
result_keras = pd.DataFrame(predictions_keras)
The error occurs in the trainPredictPlot_keras[lookback:len(train_predict_keras)+lookback, :] = train_predict_keras line saying could not broadcast input array from shape (1006,19) into shape (1006,1)
| Set return_sequences to False for the last LSTM layer. You need to do the following:
....
....
regression.add(LSTM(units=50,kernel_initializer='glorot_uniform',
return_sequences=False))
regression.add(Dropout(0.2))
regression.add(Dense(units=1))
regression.compile(optimizer='adam', loss='mean_squared_error')
Check doc:
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.
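To see the effect on the output shape, here is a minimal sketch (with made-up layer sizes, not the question's full model):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
x = np.random.rand(4, 19, 1)  # (batch, timesteps, features)
seq = Sequential([LSTM(8, return_sequences=True, input_shape=(19, 1)), Dense(1)])
print(seq.predict(x).shape)   # (4, 19, 1) -> one prediction per timestep
last = Sequential([LSTM(8, return_sequences=False, input_shape=(19, 1)), Dense(1)])
print(last.predict(x).shape)  # (4, 1) -> a single prediction per sample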
| https://stackoverflow.com/questions/67213190/ |
Is .contiguous().flatten() the same as .view(-1) in PyTorch? | Are these exactly the same?
myTensor.contiguous().flatten()
myTensor.view(-1)
Will they return the same auto grad function etc?
| No, they are not exactly the same.
myTensor.contiguous().flatten():
Here, contiguous() either returns a copy of myTensor stored in contiguous memory, or returns myTensor itself if it is already contiguous. Then, flatten() reshapes the tensor to a single dimension. However, the returned tensor could be the same object as myTensor, a view, or a copy, so whether the output shares memory with myTensor is not guaranteed.
Relevant documentation:
It’s also worth mentioning a few ops with special behaviors:
reshape(), reshape_as() and flatten() can return either a view or new tensor, user code shouldn’t rely on whether it’s view or not.
contiguous() returns itself if input tensor is already contiguous, otherwise it returns a new contiguous tensor by copying data.
myTensor.view(-1):
Here, view() returns a tensor with the same data as myTensor, and will only work if myTensor is already contiguous. The result may not be contiguous depending on the shape of myTensor.
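A quick sketch showing the difference on a non-contiguous tensor (the example tensor is made up):
import torch
t = torch.arange(6).reshape(2, 3).t()   # transposing makes the tensor non-contiguous
print(t.contiguous().flatten())          # works: copies into contiguous memory first
try:
    t.view(-1)                           # fails: view() needs a compatible memory layout
except RuntimeError as e:
    print(e)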
| https://stackoverflow.com/questions/67214586/ |
How to accelerate batch-size data from memory when using dataloader | I am trying to use a dataloader for training. The dataset is 150 GB of .npz files. Due to memory limitations, only one sample is read from disk at a time. The following is part of the code.
class VimeoDataset(Dataset):
def __init__(self, mode, batch_size=32, num_workers = 8, num_gpus = 4):
self.batch_size = batch_size
self.num_workers = num_workers
self.num_gpus = num_gpus
self.mode = mode
self.load_data()
self.h = 256
self.w = 448
xx = np.arange(0, self.w).reshape(1,-1).repeat(self.h,0)
yy = np.arange(0, self.h).reshape(-1,1).repeat(self.w,1)
self.grid = np.stack((xx,yy),2).copy()
self.npzs=[]
count = self.batch_size * self.num_workers * self.num_gpus
if self.mode == 'train':
filelist = glob('/data/vimeoFlow2/dataset/train/*.npz')
self.npzs = [filelist[i:i + count] for i in range(0, len(filelist), count)]
else:
filelist = glob('/data/vimeoFlow2/dataset/val/*.npz')
self.npzs = [filelist[i:i + count] for i in range(0, len(filelist), count)]
def __len__(self):
return len(self.npzs)
def load_data(self, index):
self.data = []
self.flow_data = []
for i in range(len(self.npzs[index])):
f = np.load(self.npzs[index][i])
self.data.append(f['i0i1gt'])
if self.mode == 'train':
self.flow_data.append(f['ft0ft1'])
else:
self.flow_data.append(np.zeros((256, 448, 4)))
def getimg(self, index):
data = self.meta_data[index]
img0 = data[0:3].transpose(1, 2, 0)
img1 = data[3:6].transpose(1, 2, 0)
gt = data[6:9].transpose(1, 2, 0)
flow_gt = (self.flow_data[index]).transpose(1, 2, 0)
return img0, gt, img1, flow_gt
def __getitem__(self, index):
img0, gt, img1, flow_gt = self.getimg(index)
dataset = VimeoDataset(mode = 'train', batch_size=32, num_workers = 8, num_gpus = 4)
sampler = DistributedSampler(dataset)
train_data = DataLoader(dataset, batch_size=args.batch_size, pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=sampler)
dataset_val = VimeoDataset(mode = 'val', batch_size=32, num_workers = 8, num_gpus = 4)
val_data = DataLoader(dataset_val, batch_size=args.batch_size, pin_memory=True, num_workers=args.num_workers)
However, reading data from the disk one by one makes the dataloader very time-consuming. So I want to improve this program: first load num_gpus × num_workers × batch_size samples into memory, then read the data from memory with __getitem__, and finally replace the data in memory after each iteration. But I still don't know how to achieve it. I have tried my idea in the code above, but I don't know how to supply the parameters of the load_data function.
| It looks like you are using the torch Dataset in the wrong way. Your Dataset subclass should neither batch the data itself nor deal with the number of workers.
Batching the data and loading it in parallel is the role of the DataLoader class. Your Dataset subclass's __getitem__ method should return only one sample (and, additionally, one ground-truth annotation) from the dataset; it should be data like a Tensor or an array that can be concatenated in order to create a batch.
Take a look at the Dataset and DataLoader documentation that is pretty clear on this.
The purpose of DataLoader is to load (i.e. read from disk to memory) and pre-process your data in parallel. If you specify 8 workers, it roughly means that 8 parallel worker processes are calling the __getitem__ method to create batches of items. Note that DataLoader already "caches" the data and loads it in advance to be ready in time (take a look at the prefetch_factor parameter).
This should be a sufficient compromise between loading speed and memory consumption, you should try this before writing any custom caching, loading and parallel processing of your data.
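As a rough sketch of that per-sample pattern (the file paths, npz keys, and shapes are taken from the question's code and are assumptions, not a tested implementation):
from glob import glob
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class VimeoDataset(Dataset):
    def __init__(self, mode='train'):
        self.mode = mode
        self.files = glob('/data/vimeoFlow2/dataset/' + mode + '/*.npz')
    def __len__(self):
        return len(self.files)                  # one item per .npz file
    def __getitem__(self, index):
        f = np.load(self.files[index])          # read a single sample from disk
        data = torch.from_numpy(f['i0i1gt'])
        if self.mode == 'train':
            flow = torch.from_numpy(f['ft0ft1'])
        else:
            flow = torch.zeros(256, 448, 4)
        return data, flow                       # DataLoader collates these into batches

loader = DataLoader(VimeoDataset('train'), batch_size=32, num_workers=8,
                    pin_memory=True, drop_last=True, prefetch_factor=2)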
| https://stackoverflow.com/questions/67215351/ |
TypeError: img should be PIL Image. Got - PyTorch | I'm trying to prepare some image data for my neural to classify. As part of the image preprocessing step, I'm applying the HOG filter in my dataset class as such:
class GetHogData(Dataset):
def __init__(self, df, root, transform = None):
self.df = df
self.root = root
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_path = os.path.join(self.root, self.df.iloc[idx, 0])
# image = Image.open(img_path)
image = cv2.imread(img_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
label = self.df.iloc[idx, 1]
if self.transform:
image = self.transform(image)
hog_, hog_image = hog(
image,
orientations = 9,
pixels_per_cell = (14,14),
cells_per_block = (2,2),
block_norm = "L1")
image = np.transpose(image, (2, 0, 1))
img_hog_lbl = {
"image" : torch.tensor(image, dtype = torch.float32),
"label" : torch.tensor(label, dtype = torch.long),
"hog": torch.tensor(hog_, dtype = torch.float32)
}
return img_hog_lbl
After this, I define my train and validation transformation as such:
# Image mean and standard dev
img_mean = [0.485, 0.456, 0.406]
img_std = [0.229, 0.224, 0.225]
train_trans = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(img_mean, img_std)
])
test_trans = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(img_mean, img_std)
])
and finally, I create the loaders as such:
train_img = GetHogData(df = train_lab, root = "/content/train", transform = train_trans)
test_img = GetHogData(df = test_lab ,root = "/content/test", transform = test_trans)
However, when I attempt to preview the training image with test_img[1] I get the error:
TypeError Traceback (most recent call last)
<ipython-input-132-b9a9394eb1e0> in <module>()
----> 1 test_img[1]
5 frames
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional_pil.py in resize(img, size, interpolation)
207 def resize(img, size, interpolation=Image.BILINEAR):
208 if not _is_pil_image(img):
--> 209 raise TypeError('img should be PIL Image. Got {}'.format(type(img)))
210 if not (isinstance(size, int) or (isinstance(size, Sequence) and len(size) in (1, 2))):
211 raise TypeError('Got inappropriate size arg: {}'.format(size))
TypeError: img should be PIL Image. Got <class 'numpy.ndarray'>
I've tried to add transforms.ToPILImage() to my transforms by doing:
# Image mean and standard dev
img_mean = [0.485, 0.456, 0.406]
img_std = [0.229, 0.224, 0.225]
train_trans = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(img_mean, img_std)
])
test_trans = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(img_mean, img_std)
])
but I got the error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-135-b9a9394eb1e0> in <module>()
----> 1 test_img[1]
1 frames
<ipython-input-129-8551c2e76038> in __getitem__(self, idx)
27 pixels_per_cell = (14,14),
28 cells_per_block = (2,2),
---> 29 block_norm = "L1")
30
31 image = np.transpose(image, (2, 0, 1))
/usr/local/lib/python3.7/dist-packages/skimage/feature/_hog.py in hog(image, orientations, pixels_per_cell, cells_per_block, block_norm, visualize, transform_sqrt, feature_vector, multichannel)
273 n_blocks_col = (n_cells_col - b_col) + 1
274 normalized_blocks = np.zeros((n_blocks_row, n_blocks_col,
--> 275 b_row, b_col, orientations))
276
277 for r in range(n_blocks_row):
ValueError: negative dimensions are not allowed
Does anybody have any ideas? Thanks in advance!
Edit - New Error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-154-b9a9394eb1e0> in <module>()
----> 1 test_img[1]
<ipython-input-151-8551c2e76038> in __getitem__(self, idx)
27 pixels_per_cell = (14,14),
28 cells_per_block = (2,2),
---> 29 block_norm = "L1")
30
31 image = np.transpose(image, (2, 0, 1))
ValueError: too many values to unpack (expected 2)
| The problem is, as I wrote in the comment, that skimage's hog() requires the data to be an ndarray, but you are giving it a torch tensor, hence the error.
Try this
train_trans = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(img_mean, img_std),
lambda x: np.rollaxis(x.numpy(), 0, 3)
])
Edit
This basically transforms the output to an ndarray and moves the channel axis to the end.
But as you can see, it's not the best way to fix things, since you have to transform the PIL image to a tensor, then the tensor to an ndarray, and then the ndarray back to a tensor again.
The better way is to transform the PIL image directly to an ndarray and normalize that, for example:
In __getitem__:
if self.transform:
image = self.transform(image)
# add these
image = np.array(image, dtype=np.float32) / 255.0  # PIL image -> float ndarray in [0, 1]
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
image[..., 0] -= mean[0]
image[..., 1] -= mean[1]
image[..., 2] -= mean[2]
image[..., 0] /= std[0]
image[..., 1] /= std[1]
image[..., 2] /= std[2]
# these are your code
hog_, hog_image = hog(
And in transform just use
train_trans = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
])
Edit2
Refer to this line: you need to either add visualize=True in hog() or remove , hog_image. If you don't need hog_image, the latter is preferred.
# either:
hog_, hog_image = hog(image, visualize=True, ...)
# or:
hog_ = hog(image, ...)
| https://stackoverflow.com/questions/67217190/ |
Concatenating two multi-dimensional numpy arrays at specific index | I have two numpy arrays of sizes: [20,3,100,100] and [20,5,100,100].
They are 20 (100,100) images with 3 channels, and 5 channels, respectively. I want to concatenate the channels so that I have 20 (100,100) images with 8 channels.
I would like to concatenate them along dim = 1, without having to create a new numpy.zeros array of size [20,8,100,100]. Is this possible?
| np.concatenate merges arrays along an existing axis:
import numpy as np
a = np.zeros((20,3,100,100))
b = np.ones((20,5,100,100))
output = np.concatenate((a,b), axis=1)
output.shape
# (20, 8, 100, 100)
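If you are working with torch tensors instead (the question's dim=1 wording matches PyTorch's argument name), torch.cat behaves the same way:
import torch
a = torch.zeros(20, 3, 100, 100)
b = torch.ones(20, 5, 100, 100)
out = torch.cat((a, b), dim=1)
out.shape
# torch.Size([20, 8, 100, 100])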
| https://stackoverflow.com/questions/67217536/ |
The loss value does not decrease | I am implementing a simple feedforward neural network with Pytorch and the loss function does not seem to decrease. Because of some other tests I have done, the problem seems to be in the computations I do to compute pred, since if I slightly change the network so that it spits out a 2-dimensional vector for each entry and save it as pred, everything works perfectly.
Do you see the problem in defining pred here? Thanks
import torch
import numpy as np
from torch import nn
dt = 0.1
class Neural_Network(nn.Module):
def __init__(self, ):
super(Neural_Network, self).__init__()
self.l1 = nn.Linear(2,300)
self.nl = nn.Tanh()
self.l2 = nn.Linear(300,1)
def forward(self, X):
z = self.l1(X)
z = self.nl(z)
o = self.l2(z)
return o
N = 1000
X = torch.rand(N,2,requires_grad=True)
y = torch.rand(N,1)
NN = Neural_Network()
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(NN.parameters(), lr=1e-5)
epochs = 200
for i in range(epochs): # trains the NN 1,000 times
HH = torch.mean(NN(X))
gradH = torch.autograd.grad(HH, X)[0]
XH= torch.cat((gradH[:,1].unsqueeze(0),-gradH[:,0].unsqueeze(0)),dim=0).t()
pred = X + dt*XH
#Optimize and improve the weights
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print (" Loss: ", loss.detach().numpy()) # mean sum squared loss
P.S. With these X and y the loss is not expected to go to zero; I have added them here just for simplicity. I will apply this architecture to data points which are expected to satisfy this model. However, I am just interested in seeing the loss decrease.
My aim is to approximate, with a neural network, the Hamiltonian of a vector field where only some trajectory is known, e.g. only the updates $x(t) \rightarrow x(t+\Delta t)$ for some choice of points. So the vector X contains the points $x(t)$, while y contains the $x(t+\Delta t)$.
In particular, XH aims to be the Hamiltonian vector field associated with the approximated Hamiltonian. The time update pred = X + dt*XH is simply one step of forward Euler.
However, my main issue here can be abstracted as: how can I involve the gradient of a network with respect to its inputs in the loss function?
| Probably because the gradient flow graph for NN is destroyed in the gradH step (check HH.grad_fn vs gradH.grad_fn).
So your pred tensor (and the subsequent loss) does not contain the necessary gradient flow through the NN network.
The loss contains gradient flow for the input X, but not for NN.parameters(). Because the optimizer only takes a step() over those NN.parameters(), the network NN is not being updated, and since X is not being updated either, the loss does not change.
You can check how the loss is sending its gradients backward by checking loss.grad_fn after loss.backward(),
and here's a neat function (found on Stackoverflow) to check it:
def getBack(var_grad_fn):
print(var_grad_fn)
for n in var_grad_fn.next_functions:
if n[0]:
try:
tensor = getattr(n[0], 'variable')
print(n[0])
print('Tensor with grad found:', tensor)
print(' - gradient:', tensor.grad)
print()
except AttributeError as e:
getBack(n[0])
Call getBack(loss.grad_fn) after loss.backward() to check it for yourself (maybe reduce the batch size N beforehand, though).
Edit: It works after changing the gradient computation to gradH = torch.autograd.grad(HH, X, create_graph=True)[0]
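For completeness, a sketch of the corrected training step, combining the question's loop with the create_graph=True fix (everything else is unchanged from the question):
HH = torch.mean(NN(X))
gradH = torch.autograd.grad(HH, X, create_graph=True)[0]  # keep the graph so the loss can reach NN.parameters()
XH = torch.cat((gradH[:,1].unsqueeze(0), -gradH[:,0].unsqueeze(0)), dim=0).t()
pred = X + dt*XH
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()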
| https://stackoverflow.com/questions/67226701/ |
I want to confirm which of these methods to calculate Dice Loss is correct | So I have 4 methods to calculate the dice loss, and 3 of them return the same result, so I can conclude that 1 of them is calculating it wrong, but I would like to confirm it with you guys:
import torch
torch.manual_seed(0)
inputs = torch.rand((3,1,224,224))
target = torch.rand((3,1,224,224))
Method 1: flatten tensors
def method1(inputs, target):
inputs = inputs.reshape( -1)
target = target.reshape( -1)
intersection = (inputs * target).sum()
union = inputs.sum() + target.sum()
dice = (2. * intersection) / (union + 1e-8)
dice = dice.sum()
print("method1", dice)
Method 2: flatten tensors except for batch size, sum all dims
def method2(inputs, target):
num = target.shape[0]
inputs = inputs.reshape(num, -1)
target = target.reshape(num, -1)
intersection = (inputs * target).sum()
union = inputs.sum() + target.sum()
dice = (2. * intersection) / (union + 1e-8)
dice = dice.sum()/num
print("method2", dice)
Method 3: flatten tensors except for batch size, sum dim 1
def method3(inputs, target):
num = target.shape[0]
inputs = inputs.reshape(num, -1)
target = target.reshape(num, -1)
intersection = (inputs * target).sum(1)
union = inputs.sum(1) + target.sum(1)
dice = (2. * intersection) / (union + 1e-8)
dice = dice.sum()/num
print("method3", dice)
Method 4: don't flatten tensors
def method4(inputs, target):
intersection = (inputs * target).sum()
union = inputs.sum() + target.sum()
dice = (2. * intersection) / (union + 1e-8)
print("method4", dice)
method1(inputs, target)
method2(inputs, target)
method3(inputs, target)
method4(inputs, target)
Methods 1, 3 and 4 print 0.5006;
method 2 prints 0.1669,
which makes sense, since in method 2 I am flattening the inputs and targets over 3 dimensions (leaving out the batch size) and then summing over both resulting dimensions instead of just dim 1.
Method 4 seems to be the most optimized one
| First, you need to decide what dice score you report: the dice score of all samples in the batch (methods 1,2 and 4) or the averaged dice score of each sample in the batch (method 3).
If I'm not mistaken, you want to use method 3 - you want to optimize the dice score of each of the samples in the batch and not a "global" dice score: Suppose you have one "difficult" sample in an "easy" batch. The misclassified pixels of the "difficult" sample will be negligible w.r.t all other pixels. But if you look at the dice score of each sample separately then the dice score of the "difficult" sample will not be negligible.
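A small illustration of that point, with made-up per-sample intersections and unions for a batch of 3 where the third sample is the "difficult" one:
import torch
inter = torch.tensor([900., 950., 0.])
union = torch.tensor([2000., 2000., 10.])
global_dice = 2 * inter.sum() / union.sum()   # ~0.92, in the spirit of methods 1/2/4
per_sample_dice = (2 * inter / union).mean()  # ~0.62, in the spirit of method 3
print(global_dice, per_sample_dice)
The badly predicted sample barely moves the global score but clearly lowers the per-sample average.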
| https://stackoverflow.com/questions/67230305/ |
Indexing whole Tensor along specific dimension and specific channels | Let's say we have a tensor A with dimensions dim(A) = [i, j, k=6, u, v]. Now we are interested in getting the whole tensor at dimension k for channels [0:3]. I know we can get it this way:
B = A[:, :, 0:3, :, :]
Now I would like to know if there is any better "pythonic" way to achieve the same result without this suboptimal indexing. I mean something like:
B = subset(A, dim=2, index=[0, 1, 2])
No matter in which framework, i.e. pytorch, tensorflow, numpy, etc.
Thanks a lot
| In numpy, you can use the take method:
B = A.take([0,1,2], axis=2)
In TensorFlow, there is not really a more concise way than using the traditional approach. Using tf.slice would be really verbose:
B = tf.slice(A,[0,0,0,0,0],[-1,-1,3,-1,-1])
You can potentially use the experimental version of take (since TF 2.4):
B = tf.experimental.numpy.take(A, [0,1,2], axis=2)
in PyTorch, you can use index_select:
torch.index_select(A, dim=2, index=torch.tensor([0,1,2]))
Note that you can skip listing explicitly the first dimensions (or the last) by using an ellipsis:
# Both are equivalent in that case
B = A[..., 0:3, :, :]
B = A[:, :, 0:3, ...]
| https://stackoverflow.com/questions/67231324/ |
Can I find the number of specific numeric data in this MNIST training, test data? | import torchvision.datasets as dsets
import torchvision.transforms as transforms
import torch.nn.init
import torch.nn.functional as F
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
learning_rate = 0.001
training_epochs = 15
batch_size = 100
mnist_train = dsets.MNIST(root='MNIST_data/', # Specify download path
train=True, # Specify True to download as training data
transform=transforms.ToTensor(), # Convert to tensor
download=True)
mnist_test = dsets.MNIST(root='MNIST_data/', # Specify download path
train=False, # If false is specified, download as test data
transform=transforms.ToTensor(), # Convert to tensor
download=True)
This is the part that loads the data in an MNIST classification code using a CNN.
The book I am following says you can find out how many samples of a specific digit are in the training set and the test set, but it only mentions that part.
For example, can you tell how many '5' samples are in the training or test set?
I know that the data tensors can be accessed with mnist_train.train_data or mnist_train.train_labels, etc., but I can't figure out how to count the samples of a specific digit from that. Help!
| You can access the data and labels of the dataset, for either split, using the data and targets attributes respectively. So, for example, here you can access the training data and labels using mnist_train.data and mnist_train.targets respectively.
Since the targets attribute is a torch.Tensor for this dataset, you can count the number of instances of each target by using torch.bincount. Since there are 10 classes in total, the output will be a tensor of length 10, where the ith index specifies the number of data points of class i.
Example:
>>> mnist_train = dsets.MNIST(root='MNIST_data/', train=True, transform=transforms.ToTensor(), download=True)
>>> mnist_train.targets
tensor([5, 0, 4, ..., 5, 6, 8])
>>> torch.bincount(mnist_train.targets, minlength=10)
tensor([5923, 6742, 5958, 6131, 5842, 5421, 5918, 6265, 5851, 5949])
You can see that class 5 has 5,421 data points in the training split.
| https://stackoverflow.com/questions/67235505/ |
Why is the clip_grad_norm_ function used here? | I am learning LSTM with PyTorch from someone's code. Here he uses the clip_grad_norm_ function in the training process of a two-layer LSTM. I want to know why he uses the clip_grad_norm_ function here, so I can understand the whole code properly (he used it in the second-to-last line).
for x, y in get_batches(data, batch_size, seq_length):
counter += 1
x = one_hot_encode(x, n_chars)
inputs, targets = torch.from_numpy(x), torch.from_numpy(y)
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
h = tuple([each.data for each in h])
net.zero_grad()
output, h = net(inputs, h)
loss = criterion(output, targets.view(batch_size*seq_length).long())
loss.backward()
nn.utils.clip_grad_norm_(net.parameters(), clip)
opt.step()
If you need more information about the question, please let me know.
| torch.nn.utils.clip_grad_norm_ performs gradient clipping. It is used to mitigate the problem of exploding gradients, which is of particular concern for recurrent networks (which LSTMs are a type of).
Further details can be found in the original paper.
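As a rough sketch of what the call does (an illustration, not the library implementation): the total L2 norm of all parameter gradients is computed, and if it exceeds the given clip value, every gradient is scaled down by the same factor so the total norm equals clip:
import torch
def clip_grad_norm_sketch(parameters, max_norm):
    grads = [p.grad for p in parameters if p.grad is not None]
    total_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    scale = max_norm / (total_norm + 1e-6)
    if scale < 1:                 # only rescale when the gradients are too large
        for g in grads:
            g.mul_(scale)
    return total_norm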
| https://stackoverflow.com/questions/67236480/ |
Modifying a Numpy float32 bitwise and returning the modified np.float32 value | I'm currently working on a bit chopping algorithm for simulation of memory footprint reduction when training neural networks. I'm using PyTorch to achieve this.
However, what I'm basically trying to do is set to 0 the less significant bits of the mantissa of the float32 value to see for now if the neural network will train and how much precision it will lose depending on the number of bits that are set to 0.
My problem is that every value on the tensors is of type Numpy float32, and I would like to get the literal bit representation of the value (how the actual float32 value is represented in memory) or as an integer, apply the bitwise modification and convert it back to np.float32.
However, I've tried the following (Note, x is of type numpy.ndarray):
print(x)
value_as_int = self.x.view(np.int32)
print(value_as_int)
value_as_int = value_as_int&0xFFFFFFFE
print(value_as_int)
new_float = value_as_int.view(np.float32)
print(new_float)
Here's an example output of the part that works:
0.13498048
1040857171
1040857170
This does convert the value to its literal bit integer representation and allows me to set to 0 the last bit, although when trying to convert it back to np.float32 I get the following error:
ValueError: Changing the dtype of a 0d array is only supported if the itemsize is unchanged
Is there a proper way to do this? Am I missing something in my code or is it the wrong approach?
Thank you in advance
| The problem here is that 0xFFFFFFFE is a Python int (rather than a numpy int32). NumPy implicitly upcasts to int64 when the bit-wise AND operator is applied. To avoid this you can make your bit-mask a np.uint32.
It seems that in Windows (but not Linux), you need to use np.uint32 instead of np.int32 to avoid getting a "Python int too large to convert to C long" error when your bit-mask is larger than 0x7FFFFFFF.
value_as_int = self.x.view(np.uint32)
value_as_int = value_as_int & np.uint32(0xFFFFFFFE)
new_float = value_as_int.view(np.float32)
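For example, a sketch that zeroes the 10 least-significant mantissa bits of a whole float32 array (the mask value is just an illustration):
import numpy as np
x = np.array([0.13498048, 0.25, 1.0], dtype=np.float32)
bits = x.view(np.uint32)
chopped = (bits & np.uint32(0xFFFFFC00)).view(np.float32)  # clear the low 10 mantissa bits
print(chopped)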
| https://stackoverflow.com/questions/67238504/ |
Accessing PyTorch modules - ResNet18 | I am using a ResNet-18 coded as follows:
class ResidualBlock(nn.Module):
'''
Residual Block within a ResNet CNN model
'''
def __init__(self, input_channels, num_channels,
use_1x1_conv = False, strides = 1):
# super(ResidualBlock, self).__init__()
super().__init__()
self.conv1 = nn.Conv2d(
in_channels = input_channels, out_channels = num_channels,
kernel_size = 3, padding = 1, stride = strides,
bias = False
)
self.bn1 = nn.BatchNorm2d(num_features = num_channels)
self.conv2 = nn.Conv2d(
in_channels = num_channels, out_channels = num_channels,
kernel_size = 3, padding = 1, stride = 1,
bias = False
)
self.bn2 = nn.BatchNorm2d(num_features = num_channels)
if use_1x1_conv:
self.conv3 = nn.Conv2d(
in_channels = input_channels, out_channels = num_channels,
kernel_size = 1, stride = strides
)
self.bn3 = nn.BatchNorm2d(num_features = num_channels)
else:
self.conv3 = None
self.relu = nn.ReLU(inplace = True)
self.initialize_weights()
def forward(self, X):
Y = F.relu(self.bn1(self.conv1(X)))
Y = self.bn2(self.conv2(Y))
if self.conv3:
X = self.bn3(self.conv3(X))
# print(f"X.shape due to 1x1: {X.shape} & Y.shape = {Y.shape}")
else:
# print(f"X.shape without 1x1: {X.shape} & Y.shape = {Y.shape}")
pass
Y += X
return F.relu(Y)
def shape_computation(self, X):
Y = self.conv1(X)
print(f"self.conv1(X).shape: {Y.shape}")
Y = self.conv2(Y)
print(f"self.conv2(X).shape: {Y.shape}")
if self.conv3:
h = self.conv3(X)
print(f"self.conv3(X).shape: {h.shape}")
def initialize_weights(self):
for m in self.modules():
# print(m)
if isinstance(m, nn.Conv2d):
nn.init.kaiming_uniform_(m.weight)
'''
# Do not initialize bias (due to batchnorm)-
if m.bias is not None:
nn.init.constant_(m.bias, 0)
'''
elif isinstance(m, nn.BatchNorm2d):
# Standard initialization for batch normalization-
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.kaiming_normal_(m.weight)
nn.init.constant_(m.bias, 0)
b0 = nn.Sequential(
nn.Conv2d(in_channels = 3, out_channels = 64, kernel_size = 3, stride = 1, padding = 1),
nn.BatchNorm2d(num_features = 64),
nn.ReLU())
def create_resnet_block(input_filters, output_filters, num_residuals, first_block = False):
# Python list to hold the created ResNet blocks-
resnet_blk = []
for i in range(num_residuals):
if i == 0 and first_block:
resnet_blk.append(ResidualBlock(input_channels = input_filters, num_channels = output_filters, use_1x1_conv = True, strides = 2))
else:
resnet_blk.append(ResidualBlock(input_channels = output_filters, num_channels = output_filters, use_1x1_conv = False, strides = 1))
return resnet_blk
b1 = nn.Sequential(*create_resnet_block(input_filters = 64, output_filters = 64, num_residuals = 2, first_block = True))
b2 = nn.Sequential(*create_resnet_block(input_filters = 64, output_filters = 128, num_residuals = 2, first_block = True))
b3 = nn.Sequential(*create_resnet_block(input_filters = 128, output_filters = 256, num_residuals = 2, first_block = True))
b4 = nn.Sequential(*create_resnet_block(input_filters = 256, output_filters = 512, num_residuals = 2, first_block = True))
# Initialize a ResNet-18 CNN model-
model = nn.Sequential(
b0, b1, b2, b3, b4,
nn.AdaptiveAvgPool2d(output_size = (1, 1)),
nn.Flatten(),
nn.Linear(in_features = 512, out_features = 10))
The layer names are now as follows:
for layer_name, param in trained_model.named_parameters():
print(f"layer name: {layer_name} has {param.shape}")
Result:
> layer name: 0.0.weight has torch.Size([64, 3, 3, 3])
> layer name: 0.0.bias has torch.Size([64])
> layer name: 0.1.weight has torch.Size([64])
> layer name: 0.1.bias has torch.Size([64])
> layer name: 1.0.conv1.weight has torch.Size([64, 64, 3, 3])
> layer name: 1.0.bn1.weight has torch.Size([64])
> layer name: 1.0.bn1.bias has torch.Size([64])
> layer name: 1.0.conv2.weight has torch.Size([64, 64, 3, 3])
> layer name: 1.0.bn2.weight has torch.Size([64])
> layer name: 1.0.bn2.bias has torch.Size([64])
> layer name: 1.0.conv3.weight has torch.Size([64, 64, 1, 1])
> layer name: 1.0.conv3.bias has torch.Size([64])
> layer name: 1.0.bn3.weight has torch.Size([64])
> layer name: 1.0.bn3.bias has torch.Size([64])
> layer name: 1.1.conv1.weight has torch.Size([64, 64, 3, 3])
> layer name: 1.1.bn1.weight has torch.Size([64])
> layer name: 1.1.bn1.bias has torch.Size([64])
> layer name: 1.1.conv2.weight has torch.Size([64, 64, 3, 3])
> layer name: 1.1.bn2.weight has torch.Size([64])
> layer name: 1.1.bn2.bias has torch.Size([64])
> layer name: 2.0.conv1.weight has torch.Size([128, 64, 3, 3])
> layer name: 2.0.bn1.weight has torch.Size([128])
> layer name: 2.0.bn1.bias has torch.Size([128])
> layer name: 2.0.conv2.weight has torch.Size([128, 128, 3, 3])
> layer name: 2.0.bn2.weight has torch.Size([128])
> layer name: 2.0.bn2.bias has torch.Size([128])
> layer name: 2.0.conv3.weight has torch.Size([128, 64, 1, 1])
> layer name: 2.0.conv3.bias has torch.Size([128])
> layer name: 2.0.bn3.weight has torch.Size([128])
> layer name: 2.0.bn3.bias has torch.Size([128])
> layer name: 2.1.conv1.weight has torch.Size([128, 128, 3, 3])
> layer name: 2.1.bn1.weight has torch.Size([128])
> layer name: 2.1.bn1.bias has torch.Size([128])
> layer name: 2.1.conv2.weight has torch.Size([128, 128, 3, 3])
> layer name: 2.1.bn2.weight has torch.Size([128])
> layer name: 2.1.bn2.bias has torch.Size([128])
> layer name: 3.0.conv1.weight has torch.Size([256, 128, 3, 3])
> layer name: 3.0.bn1.weight has torch.Size([256])
> layer name: 3.0.bn1.bias has torch.Size([256])
> layer name: 3.0.conv2.weight has torch.Size([256, 256, 3, 3])
> layer name: 3.0.bn2.weight has torch.Size([256])
> layer name: 3.0.bn2.bias has torch.Size([256])
> layer name: 3.0.conv3.weight has torch.Size([256, 128, 1, 1])
> layer name: 3.0.conv3.bias has torch.Size([256])
> layer name: 3.0.bn3.weight has torch.Size([256])
> layer name: 3.0.bn3.bias has torch.Size([256])
> layer name: 3.1.conv1.weight has torch.Size([256, 256, 3, 3])
> layer name: 3.1.bn1.weight has torch.Size([256])
> layer name: 3.1.bn1.bias has torch.Size([256])
> layer name: 3.1.conv2.weight has torch.Size([256, 256, 3, 3])
> layer name: 3.1.bn2.weight has torch.Size([256])
> layer name: 3.1.bn2.bias has torch.Size([256])
> layer name: 4.0.conv1.weight has torch.Size([512, 256, 3, 3])
> layer name: 4.0.bn1.weight has torch.Size([512])
> layer name: 4.0.bn1.bias has torch.Size([512])
> layer name: 4.0.conv2.weight has torch.Size([512, 512, 3, 3])
> layer name: 4.0.bn2.weight has torch.Size([512])
> layer name: 4.0.bn2.bias has torch.Size([512])
> layer name: 4.0.conv3.weight has torch.Size([512, 256, 1, 1])
> layer name: 4.0.conv3.bias has torch.Size([512])
> layer name: 4.0.bn3.weight has torch.Size([512])
> layer name: 4.0.bn3.bias has torch.Size([512])
> layer name: 4.1.conv1.weight has torch.Size([512, 512, 3, 3])
> layer name: 4.1.bn1.weight has torch.Size([512])
> layer name: 4.1.bn1.bias has torch.Size([512])
> layer name: 4.1.conv2.weight has torch.Size([512, 512, 3, 3])
> layer name: 4.1.bn2.weight has torch.Size([512])
> layer name: 4.1.bn2.bias has torch.Size([512])
> layer name: 7.weight has torch.Size([10, 512])
> layer name: 7.bias has torch.Size([10])
In order to prune this model, I am referring to the PyTorch pruning tutorial. It mentions that, to prune a module/layer, you use the following code:
parameters_to_prune = (
(model.conv1, 'weight'),
(model.conv2, 'weight'),
(model.fc1, 'weight'),
(model.fc2, 'weight'),
(model.fc3, 'weight'),
)
But for the code above, the modules/layers no longer have this naming convention. For example, to prune the first conv layer of this model:
> layer name: 0.0.weight has torch.Size([64, 3, 3, 3])
on trying the following code:
prune.random_unstructured(model.0.0, name = 'weight', amount = 0.3)
It gives me the error:
prune.random_unstructured(trained_model.0.0, name = 'weight', amount = 0.3)
^ SyntaxError: invalid syntax
How do I handle this?
| This will work:
import torch.nn.utils.prune as prune
prune.random_unstructured(list(model.children())[0][0] , name = 'weight', amount = 0.3) # first conv layer
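If you would rather prune every convolutional layer instead of indexing positions by hand, a sketch (assuming you keep the nn.Sequential structure above) is:
import torch.nn as nn
import torch.nn.utils.prune as prune
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        prune.random_unstructured(module, name='weight', amount=0.3)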
| https://stackoverflow.com/questions/67243218/ |
How to get class and bounding box coordinates from YOLOv5 predictions? | I am trying to perform inference on my custom YOLOv5 model. The official documentation uses the default detect.py script for inference. I have written my own python script but I cannot access the predicted class and the bounding box coordinates from the output of the model. Here is my code:
import torch
model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='best.pt')
predictions = model("my_image.png")
print(predictions)
| results = model(input_images)
labels, cord_thres = results.xyxyn[0][:, -1].numpy(), results.xyxyn[0][:, :-1].numpy()
This will give you the labels, coordinates, and confidence scores for each detected object, which you can use to plot bounding boxes.
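For example, a minimal sketch (assuming a model loaded as above; results.names maps class indices to names) that prints each detection:
results = model("my_image.png")
det = results.xyxyn[0]                          # one tensor per image: [x1, y1, x2, y2, conf, class]
for *box, conf, cls in det.tolist():
    print(results.names[int(cls)], conf, box)   # class name, confidence, normalized box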
You can check out this repo for more detailed code.
https://github.com/akash-agni/Real-Time-Object-Detection
| https://stackoverflow.com/questions/67244258/ |
Neural network learning to sum two numbers | I am learning PyTorch, and I am trying to implement a really simple network which takes an input of length 2, i.e. a point in the plane, and aims to learn the sum of its components.
In principle the network should just learn a linear layer with weight matrix W = [1., 1.] and zero bias, so I expect very low training error. However, I don't understand why I am not getting this as expected.
The code I am writing is this:
import torch
from torch import nn, optim
import numpy as np
device = torch.device("cuda:0" if
torch.cuda.is_available() else "cpu")
N = 1000 # number of samples
D = 2 # input dimension
C = 1 # output dimension
def model(z):
q = z[:,0]
p = z[:,1]
return q+p
X = torch.rand(N, D,requires_grad=True).to(device)
y = model(X)
lr = 1e-2 #Learning rate
Rete = nn.Sequential(nn.Linear(D, C))
Rete.to(device) #Convert to CUDA
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(Rete.parameters(), lr=lr)
for t in range(5000):
y_pred = Rete(X)
loss = criterion(y_pred, y)
print("[EPOCH]: %i, [LOSS]: %.6f" % (t, loss.item()))
optimizer.zero_grad()
optimizer.step()
| There are 2 problems.
The first problem is that you forgot to backpropogate the loss:
optimizer.zero_grad()
loss.backward() # you forgot this step
optimizer.step()
It is important that optimizer.zero_grad() IS NOT put in between loss.backward() and optimizer.step(), otherwise you'll be resetting the gradients before performing the gradient descent step. In general, it is advisable to either put optimizer.zero_grad() at the very beginning of your loop, or right after you call optimizer.step():
loss.backward()
optimizer.step()
optimizer.zero_grad() # this should go right after .step(), or at the very beginning of your training loop
Even after this change, you'll notice that your model still doesn't converge:
[EPOCH]: 0, [LOSS]: 0.232405
[EPOCH]: 1, [LOSS]: 0.225010
[EPOCH]: 2, [LOSS]: 0.218473
...
[EPOCH]: 4997, [LOSS]: 0.178762
[EPOCH]: 4998, [LOSS]: 0.178762
[EPOCH]: 4999, [LOSS]: 0.178762
This leads us to the second problem, the shapes of the output (y_pred) and labels (y) do not match. y_pred has shape (N, C) but y has shape (N,). To fix this, just reshape y to match y_pred:
y = y.reshape(-1, C)
Then our model will converge:
[EPOCH]: 0, [LOSS]: 1.732189
[EPOCH]: 1, [LOSS]: 1.680017
[EPOCH]: 2, [LOSS]: 1.628712
...
[EPOCH]: 4997, [LOSS]: 0.000000
[EPOCH]: 4998, [LOSS]: 0.000000
[EPOCH]: 4999, [LOSS]: 0.000000
Both of these bugs fail silently, which makes debugging them difficult. Unfortunately, these kinds of bugs are very easy to come across when doing machine learning. I highly recommend reading this blog post on best practices when training neural networks to minimize the risk of silent bugs.
Full code:
import torch
import numpy as np
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
N = 1000 # number of samples
D = 2 # input dimension
C = 1 # output dimension
X = torch.rand(N, D).to(device) # (N, D)
y = torch.sum(X, axis=-1).reshape(-1, C) # (N, C)
lr = 1e-2 # Learning rate
model = torch.nn.Sequential(torch.nn.Linear(D, C)) # model
model.to(device)
criterion = torch.nn.MSELoss() # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=lr) # optimizer
for epoch in range(1000):
y_pred = model(X) # forward step
loss = criterion(y_pred, y) # compute loss
loss.backward() # backprop (compute gradients)
optimizer.step() # update weights (gradient descent step)
optimizer.zero_grad() # reset gradients
if epoch % 50 == 0:
print(f"[EPOCH]: {epoch}, [LOSS]: {loss.item():.6f}")
| https://stackoverflow.com/questions/67248686/ |
Related to SubsetRandomSampler | I am using SubsetRandomSampler for splitting a classification dataset into test and validation sets. Can we split the dataset for each class?
import numpy as np
import torch
from torchvision import transforms
from torch.utils.data.sampler import SubsetRandomSampler
train_transforms = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
dataset = datasets.ImageFolder( '/data/images/train', transform=train_transforms )
validation_split = .2
shuffle_dataset = True
random_seed= 42
batch_size = 20
dataset_size = len(dataset) #4996
indices = list(range(dataset_size))
split = int(np.floor(validation_split * dataset_size))
if shuffle_dataset :
np.random.seed(random_seed)
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
validation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=valid_sampler)
| Did you mean train and validation, not test and validation?
If so: SubsetRandomSampler randomly selects samples from the indices you give it. Therefore you can just randomly split the indices of each class before putting them into train_indices and val_indices.
Like this:
indexs = [[] for _ in range(len(dataset.classes))] # note: `[[]] * len(dataset.classes)` won't work here because every entry would be the same list; there might be better ways, but I don't know of one
for idx, (_, class_idx) in enumerate(dataset):
indexs[class_idx].append(idx)
train_indices, val_indices = [], []
for cl_idx in indexs:
size = len(cl_idx)
split = int(np.floor(validation_split * size))
np.random.shuffle(cl_idx)
train_indices.extend(cl_idx[split:])
val_indices.extend(cl_idx[:split])
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)
| https://stackoverflow.com/questions/67250023/ |
Stuck at this error "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu" | I am running the following code. If I run it on CPU only, it runs fine but takes too much time to train. So I decided to change the runtime to GPU and made the appropriate changes. Now it is stuck.
import torch
from models.bert_attention_model import AttentionModel
from models.bert_cnn_model import BERTCNNSentiment
import sys
if sys.argv[1].lower() =="hinglish":
data_path = "../data/hinglish/"
elif sys.argv[1].lower() == "spanglish":
data_path = "../data/spanglish/"
else:
print("Format: %s %s" %(argv[0], argv[1]))
train_name = "train.txt"
test_name = "test.txt"
model_save_names = ["../checkpoint/cnn_model.txt", "../checkpoint/attention_model.txt"]
import random
import numpy as np
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
from transformers import BertTokenizer, AutoTokenizer, XLMRobertaTokenizer
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
print('XLM Roberta Tokenizer Loaded...')
init_token_idx = tokenizer.cls_token_id
eos_token_idx = tokenizer.sep_token_id
pad_token_idx = tokenizer.pad_token_id
unk_token_idx = tokenizer.unk_token_id
max_input_length = 150
print("Max input length: %d" %(max_input_length))
def tokenize_and_cut(sentence):
tokens = tokenizer.tokenize(sentence)
tokens = tokens[:max_input_length-2]
return tokens
from torchtext import data
UID = data.Field(sequential=False, use_vocab=False, pad_token=None)
TEXT = data.Field(batch_first = True,
use_vocab = False,
tokenize = tokenize_and_cut,
preprocessing = tokenizer.convert_tokens_to_ids,
init_token = init_token_idx,
eos_token = eos_token_idx,
pad_token = pad_token_idx,
unk_token = unk_token_idx)
LABEL = data.LabelField()
from torchtext import datasets
fields = [('uid',UID),('text', TEXT),('label', LABEL)]
train_data, test_data = data.TabularDataset.splits(
path = data_path,
train = train_name,
test = test_name,
format = 'tsv',
fields = fields,
skip_header = True)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
print('Data loading complete')
print(f"Number of training examples: {len(train_data)}")
print(f"Number of validation examples: {len(valid_data)}")
print(f"Number of test examples: {len(test_data)}")
tokens = tokenizer.convert_ids_to_tokens(vars(train_data.examples[0])['text'])
LABEL.build_vocab(train_data, valid_data)
print(LABEL.vocab.stoi)
BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("Device in use:",device)
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
sort_key=lambda x: len(x.text),
batch_size = BATCH_SIZE,
device = device)
print('Iterators created')
print('Downloading XLM Roberta model...')
from transformers import XLMRobertaModel
bert = XLMRobertaModel.from_pretrained('xlm-roberta-base')
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
print('XLM Roberta model downloaded')
OUTPUT_DIM = 3
DROPOUT = 0.3
N_FILTERS = 100
FILTER_SIZES = [2,3,4]
HIDDEN_DIM = 100
model_names = ["CNN_Model", "Attention_Model"]
models = [ BERTCNNSentiment(bert, OUTPUT_DIM, DROPOUT, N_FILTERS, FILTER_SIZES),
AttentionModel(bert, BATCH_SIZE, OUTPUT_DIM, HIDDEN_DIM, 50000, 768) ]
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
for i in range(2):
print(f'The {models[i]} has {count_parameters(models[i]):,} trainable parameters')
for i in range(2):
print("Parameters for " + f'{model_names[i]}')
for name, param in models[i].named_parameters():
if param.requires_grad:
print(name)
import torch.optim as optim
from sklearn.metrics import confusion_matrix
def clip_gradient(model, clip_value):
params = list(filter(lambda p: p.grad is not None, model.parameters()))
for p in params:
p.grad.data.clamp_(-clip_value, clip_value)
optimizers = [optim.Adam(models[0].parameters()), optim.Adam(models[1].parameters())]
criterion = nn.CrossEntropyLoss()
nll_loss = nn.NLLLoss()
log_softmax = nn.LogSoftmax()
for i in range(2):
models[i] = models[i].to(device)
criterion = criterion.to(device)
nll_loss = nll_loss.to(device)
log_softmax = log_softmax.to(device)
from sklearn.metrics import f1_score
def categorical_accuracy(preds, y):
count0,count1,count2 = torch.zeros(1),torch.zeros(1),torch.zeros(1)
count0 = torch.zeros(1).to(device)
count1 = torch.zeros(1).to(device)
count2 = torch.zeros(1).to(device)
total0,total1,total2 = torch.FloatTensor(1),torch.FloatTensor(1),torch.FloatTensor(1)
max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
correct = max_preds.squeeze(1).eq(y)
predictions = max_preds.squeeze(1)
true_correct = [0,0,0]
for j,i in enumerate(y.cpu().numpy()):
true_correct[y.cpu().numpy()[j]]+=1
if i==0:
count0+=correct[j]
total0+=1
elif i==1:
count1+=correct[j]
total1+=1
elif i==2:
count2+=correct[j]
else:
total2+=1
metric=torch.FloatTensor([count0/true_correct[0],count1/true_correct[1],count2/true_correct[2],f1_score(y.cpu().numpy(),predictions.cpu().numpy(),average='macro')])
return correct.sum() / torch.FloatTensor([y.shape[0]]),metric,confusion_matrix(y.cpu().numpy(),max_preds.cpu().numpy())
def train(model, iterator, optimizer, criterion, i):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
if (i == 0):
predictions = model(batch.text).squeeze(1)
else:
predictions = model(batch.text, batch_size = len(batch)).squeeze(1)
loss = criterion(predictions, batch.label)
acc,_,_ = categorical_accuracy(predictions, batch.label)
loss.backward()
clip_gradient(model, 1e-1)
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion, i):
epoch_loss = 0
epoch_acc = 0
epoch_all_acc = torch.FloatTensor([0,0,0,0])
confusion_mat = torch.zeros((3,3))
confusion_mat_temp = torch.zeros((3,3))
model.eval()
with torch.no_grad():
for batch in iterator:
if (i == 0):
predictions = model(batch.text).squeeze(1)
else:
predictions = model(batch.text,batch_size=len(batch)).squeeze(1)
loss = criterion(predictions, batch.label)
acc,all_acc,confusion_mat_temp = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
epoch_all_acc += all_acc
confusion_mat+=confusion_mat_temp
return epoch_loss / len(iterator), epoch_acc / len(iterator),epoch_all_acc/len(iterator),confusion_mat
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 40
best_f1 = [-1, -1]
for epoch in range(N_EPOCHS):
for i in range(2):
start_time = time.time()
train_loss, train_acc = train(models[i], train_iterator, optimizers[i], criterion, i)
valid_loss, valid_acc,tot,conf = evaluate(models[i], valid_iterator, criterion, i)
f1 = tot[3]
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if f1 > best_f1[i]:
best_f1[i] = f1
path = model_save_names[i]
print(path)
torch.save(models[i].state_dict(), path)
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
print(tot)
print(conf)
for i in range(2):
path = model_save_names[i]
models[i].load_state_dict(torch.load(path))
def ensemble_evaluate(models, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
epoch_all_acc = torch.FloatTensor([0,0,0,0])
models[0].eval()
models[1].eval()
confusion_mat = torch.zeros((3,3))
confusion_mat_temp = torch.zeros((3,3))
with torch.no_grad():
for batch in iterator:
predictions0 = models[0](batch.text).squeeze(1)
predictions1 = models[1](batch.text, batch_size=len(batch)).squeeze(1)
predictions = F.softmax(predictions0, dim=1) * F.softmax(predictions1, dim=1)
loss = criterion(predictions, batch.label)
acc,all_acc,confusion_mat_temp = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
epoch_all_acc += all_acc
confusion_mat += confusion_mat_temp
print(confusion_mat)
return epoch_loss / len(iterator), epoch_acc / len(iterator),epoch_all_acc/len(iterator)
def ensemble_write_to_file(models, test_iterator):
label_dict = {'0':'negative', '1':'neutral', '2':'positive'}
file = open("answer.txt", "w")
file.write('Uid,Sentiment\n')
count = 0
for batch in test_iterator:
predictions0 = models[0](batch.text).squeeze(1)
predictions1 = models[1](batch.text, batch_size=len(batch)).squeeze(1)
predictions = F.softmax(predictions0, dim=1) * F.softmax(predictions1, dim=1)
max_preds = predictions.argmax(dim = 1, keepdim = True).detach().cpu().numpy()
for i,row in enumerate(batch.uid.cpu().numpy()):
count += 1
label_number = max_preds[i][0]
label_number_str = list(LABEL.vocab.stoi.keys())[list(LABEL.vocab.stoi.values()).index(label_number)]
predicted_label_name = label_dict[label_number_str]
if count != len(test_data):
file.write('%s,%s\n'%(row,predicted_label_name))
else:
file.write('%s,%s'%(row,predicted_label_name))
file.close()
valid_loss, valid_acc, tot = ensemble_evaluate(models, test_iterator, criterion)
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
print(tot)
Here is the output I am getting. I have shown only the output required for debugging:
Traceback (most recent call last):
File "main.py", line 268, in <module>
train_loss, train_acc = train(models[i], train_iterator, optimizers[i], criterion, i)
File "main.py", line 212, in train
acc,_,_ = categorical_accuracy(predictions, batch.label)
File "main.py", line 184, in categorical_accuracy
count1+=correct[j]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Can someone tell me where to make the appropriate changes so that the code can run on the GPU in Google Colab?
Edit_1:
Thank you so much for the help you provided; it actually solved my problem, but now a similar new problem has arisen. Note that I have updated the code to include the changes you mentioned.
Traceback (most recent call last):
File "main.py", line 271, in <module>
train_loss, train_acc = train(models[i], train_iterator, optimizers[i], criterion, i)
File "main.py", line 215, in train
acc,_,_ = categorical_accuracy(predictions, batch.label)
File "main.py", line 194, in categorical_accuracy
return correct.sum() / torch.FloatTensor([y.shape[0]]),metric,confusion_matrix(y.cpu().numpy(),max_preds.cpu().numpy())
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
| The problem is exactly what the error says: PyTorch expects all tensors involved in an operation to be on the same device, but the two tensors you are adding live in different places.
You need to add .to(device) to these variables
count0,count1,count2 = torch.zeros(1),torch.zeros(1),torch.zeros(1)
Like
count0 = torch.zeros(1).to(device)
count1 = torch.zeros(1).to(device)
count2 = torch.zeros(1).to(device)
But in case you get NameError: name 'device' is not defined, you can simply reuse the device of y (or of the predictions), like:
device = y.device
count0 = torch.zeros(1).to(device)
count1 = torch.zeros(1).to(device)
count2 = torch.zeros(1).to(device)
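For the second error in your Edit_1, the same idea applies to the return line of categorical_accuracy: torch.FloatTensor([y.shape[0]]) is created on the CPU while correct.sum() lives on the GPU. A sketch, reusing the names from your traceback:
return (correct.sum() / torch.FloatTensor([y.shape[0]]).to(y.device),
        metric,
        confusion_matrix(y.cpu().numpy(), max_preds.cpu().numpy()))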
| https://stackoverflow.com/questions/67251758/ |
pytorch modifying the input data to forward to make it suitable to my model |
Here is what I want to do.
I have an individual data of shape (20,20,20) where 20 tensors of shape (1,20,20) will be used as an input for 20 separate CNN. Here's the code I have so far.
class MyModel(torch.nn.Module):
def __init__(self, ...):
...
self.features = nn.ModuleList([nn.Sequential(
nn.Conv2d(1,10, kernel_size = 3, padding = 1),
nn.ReLU(),
nn.Conv2d(10, 14, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(14, 18, kernel_size=3, padding=1),
nn.ReLU(),
nn.Flatten(),
nn.Linear(28*28*18, 256)
) for _ in range(20)])
self.fc_module = nn.Sequential(
nn.Linear(256*n_selected, cnn_output_dim),
nn.Softmax(dim=n_classes)
)
def forward(self, input_list):
concat_fusion = cat([cnn(x) for x,cnn in zip(input_list,self.features)], dim = 0)
output = self.fc_module(concat_fusion)
return output
The shape of the input_list in forward function is torch.Size([100, 20, 20, 20]), where 100 is the batch size.
However, there's an issue with
concat_fusion = cat([cnn(x) for x,cnn in zip(input_list,self.features)], dim = 0)
as it results in this error.
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [10, 1, 3, 3], but got 3-dimensional input of size [20, 20, 20] instead
First off, I wonder why it expects me to give 4-dimensional weight [10,1,3,3]. I've seen
"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?
but I'm not sure where those specific numbers are coming from.
I have an input_list which is a batch of 100 samples. I'm not sure how I can deal with an individual sample of shape (20,20,20) so that I can actually separate it into 20 pieces to use as independent inputs to the 20 CNNs.
| why it expects me to give 4-dimensional weight [10,1,3,3].
Note that the following log means the nn.Conv2d whose weight has shape (10, 1, 3, 3) requires a 4-dimensional input.
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [10, 1, 3, 3]
How to separate input into 20 pieces along channels.
Iterating over input_list of shape (100, 20, 20, 20) produces 100 tensors of shape (20, 20, 20), which is why each CNN receives a 3-dimensional input.
If you want to split the input along the channel dimension instead, slice input_list along its second dimension:
concat_fusion = torch.cat([cnn(input_list[:, i:i+1]) for i, cnn in enumerate(self.features)], dim = 1)
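Putting it together, a minimal sketch of the forward pass with this slicing (it assumes the Flatten/Linear sizes inside self.features are adjusted to the 20x20 spatial input):
def forward(self, input_list):                       # input_list: (batch, 20, 20, 20)
    # input_list[:, i:i+1] has shape (batch, 1, 20, 20), matching Conv2d(1, 10, ...)
    feats = [cnn(input_list[:, i:i+1]) for i, cnn in enumerate(self.features)]
    concat_fusion = torch.cat(feats, dim=1)          # (batch, 20 * 256)
    return self.fc_module(concat_fusion)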
| https://stackoverflow.com/questions/67256305/ |
OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such file or directory | I am trying to run a model on TPU as given in colab notebook. The model was working fine, but today I could not run the model.
I used the following code to install pytorch-xla.
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
I try to install required libraries as below:
!pip install -U nlp
!pip install sentencepiece
!pip install numpy --upgrade
However, when I try the following
import nlp
It gives the following error:
OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such file or directory
I searched the error and tried the following, but it still does not work. Any ideas how to fix it? Note: it was working a few days ago; however, today it is not.
!pip install mkl
#!export PATH="$PATH:/opt/intel/bin"
#!export LD_LIBRARY_PATH="$PATH:opt/intel/mkl/lib/intel64_lin/"
!export LID_LIBRAEY_PATH="$LID_LIBRARY_PATH:/opt/intel/mkl/lib/intel64_lin/"
| import os
os.environ['LD_LIBRARY_PATH']='/usr/local/lib'
!echo $LD_LIBRARY_PATH
!sudo ln -s /usr/local/lib/libmkl_intel_lp64.so /usr/local/lib/libmkl_intel_lp64.so.1
!sudo ln -s /usr/local/lib/libmkl_intel_thread.so /usr/local/lib/libmkl_intel_thread.so.1
!sudo ln -s /usr/local/lib/libmkl_core.so /usr/local/lib/libmkl_core.so.1
!ldconfig
!ldd /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so
worked for me. We will also try to fix the problem internally.
| https://stackoverflow.com/questions/67257008/ |
Semantic Segmentation runtime error at loss function | I am using a custom model for segmentation (SETRModel). The model output shape is (nBatch, 256, 256) and the code below confirms it (note that the channel is squeezed out). The target shape is the same (it’s a PILMask).
When I start training, I get a runtime error (see below) related to the loss function. What am I doing wrong?
size = 480
half= (256, 256)
splitter = FuncSplitter(lambda o: Path(o).parent.name == 'validation')
dblock = DataBlock(blocks=(ImageBlock, MaskBlock(codes)),
get_items=get_relevant_images,
splitter=splitter,
get_y=get_mask,
item_tfms=Resize((size,size)),
batch_tfms=[*aug_transforms(size=half), Normalize.from_stats(*imagenet_stats)])
dls = dblock.dataloaders(path/'images', bs=4)
model = SETRModel(patch_size=(32, 32),
in_channels=3,
out_channels=1,
hidden_size=1024,
num_hidden_layers=8,
num_attention_heads=16,
decode_features=[512, 256, 128, 64])
# Create a Learner using a custom model
loss = nn.BCEWithLogitsLoss()
learn = Learner(dls, model, loss_func=loss, lr=1.0e-4, cbs=callbacks, metrics=[Dice()])
# Let's test and make sure the loss function is happy with its inputs
learn.eval()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t1 = torch.rand(4, 3, 256, 256).to(device)
print("input: " + str(t1.shape))
pred = learn.model(t1).to(device)
print("output: " + str(pred.shape))
# prints this:
# input: torch.Size([4, 3, 256, 256])
# output: torch.Size([4, 256, 256])
target = next(iter(learn.dls.train))[1]
target = target.type(torch.float32).to(device)
target.size(), pred.size()
# prints this:
# (torch.Size([4, 256, 256]), torch.Size([4, 256, 256]))
loss(pred, target)
# prints this:
# TensorMask(0.6844, device='cuda:0', grad_fn=<AliasBackward>)
# so, the loss function is happy with its inputs
learn.fine_tune(50)
# prints this:
# ---------------------------------------------------------------------------
# RuntimeError Traceback (most recent call last)
# <ipython-input-114-0e514c73651a> in <module>()
# ----> 1 learn.fine_tune(50)
# 19 frames
# /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
# 2827 pixel_shuffle = _add_docstr(torch.pixel_shuffle, r"""
# 2828 Rearranges elements in a tensor of shape :math:`(*, C \times r^2, H, W)` to a
# -> 2829 tensor of shape :math:`(*, C, H \times r, W \times r)`.
# 2830
# 2831 See :class:`~torch.nn.PixelShuffle` for details.
# RuntimeError: result type Float can't be cast to the desired output type Long
| This is something that happens when you use PyTorch inside fastai (I believe this should be fixed).
Just create a custom loss_func. For example:
def loss_func(output, target): return CrossEntropyLossFlat()(output, target.long())
and pass it when creating the DataBlock:
dblock = DataBlock(... , loss_func=loss_func, ...)
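Alternatively, since Learner already accepts a loss_func argument (as in the question's original call), the same custom function can be passed there; a sketch:
learn = Learner(dls, model, loss_func=loss_func, lr=1.0e-4, cbs=callbacks, metrics=[Dice()])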
| https://stackoverflow.com/questions/67257634/ |
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same - PyTorch | I'm trying to push both my model and data, images and labels, to run on the GPU by doing:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Followed by:
count = 0
loss_list = []
iteration_list = []
accuracy_list = []
epochs = 30
for epoch in range(epochs):
for i, (images, labels) in enumerate(trainloader):
net = net.to(device)
images.to(device)
labels.to(device)
optimizer.zero_grad()
outputs = net(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
count += 1
if count % 50 == 0:
correct = 0
total = 0
for i, (images, labels) in enumerate(testloader):
images.to(device)
labels.to(device)
outputs = net(images)
predicted = torch.max(outputs.data, 1)[1]
total += len(labels)
correct += (predicted == labels).sum()
accuracy = 100 * correct / float(total)
loss_list.append(loss.data)
iteration_list.append(count)
accuracy_list.append(accuracy)
if count % 500 == 0:
print("Iteration: {} Loss: {} Accuracy: {} %".format(count, loss.data, accuracy))
I'm explicitly pushing my model and data to device however I am met by the error:
RuntimeError Traceback (most recent call last)
<ipython-input-341-361b906da73d> in <module>()
12
13 optimizer.zero_grad()
---> 14 outputs = net(images)
15 loss = criterion(outputs, labels)
16 loss.backward()
4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
394 _pair(0), self.dilation, self.groups)
395 return F.conv2d(input, weight, bias, self.stride,
--> 396 self.padding, self.dilation, self.groups)
397
398 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
I feel like I'm doing the right thing by pushing both model and data to GPU but I can't figure out why it's not working. Does somebody know what's going wrong? Thank you in advance.
| Your weights are on your GPU but your input is still on your CPU. Note that .to(device) (and .cuda()) on a tensor is not in-place, so writing images.to(device) alone does nothing; you need to assign the result back, e.g. images = images.to(device), and the same for labels, in both loops.
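A sketch of the corrected lines from the question's training loop (the same change applies to the test loop):
for i, (images, labels) in enumerate(trainloader):
    images = images.to(device)   # .to() is not in-place, so assign the result
    labels = labels.to(device)
    ...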
| https://stackoverflow.com/questions/67257940/ |
Dataloader worker exited unexpectedly while running on Visual Studio. But runs okay on Google Colab | So I have this dataloader that loads data from hdf5 but exits unexpectedly when I am using num_workers>0 (it works ok when 0). More strangely, it works okay with more workers on google colab, but not on my computer.
On my computer I have the following error:
Traceback (most recent call last):
File "C:\Users\Flavio Maia\AppData\Roaming\Python\Python37\site-packages\torch\utils\data\dataloader.py", line 986, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\multiprocessing\queues.py", line 105, in get
raise Empty
_queue.Empty
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "", line 2, in
File "C:\Users\Flavio Maia\AppData\Roaming\Python\Python37\site-packages\torch\utils\data\dataloader.py", line 517, in next
data = self._next_data()
File "C:\Users\Flavio Maia\AppData\Roaming\Python\Python37\site-packages\torch\utils\data\dataloader.py", line 1182, in _next_data
idx, data = self._get_data()
File "C:\Users\Flavio Maia\AppData\Roaming\Python\Python37\site-packages\torch\utils\data\dataloader.py", line 1148, in _get_data
success, data = self._try_get_data()
File "C:\Users\Flavio Maia\AppData\Roaming\Python\Python37\site-packages\torch\utils\data\dataloader.py", line 999, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 12332) exited unexpectedly
Also, my getitem function is:
def __getitem__(self,index):
desired_file = int(index/self.file_size)
position = index % self.file_size
h5_file = h5py.File(self.files[desired_file], 'r')
image = h5_file['Screenshots'][position]
rect = h5_file['Rectangles'][position]
numb = h5_file['Numbers'][position]
h5_file.close()
image = torch.from_numpy(image).float()
rect = torch.from_numpy(rect).float()
numb = torch.from_numpy( np.asarray(numb) ).float()
return (image, rect, numb)
Does anyone have any idea what can be causing this empty queue?
| Windows can't handle num_workers > 0 the same way Linux does (worker processes are spawned rather than forked), so you can just set it to 0, which is fine. What should also work: put all your train / test script into train()/test() functions and call them under if __name__ == "__main__":
For example like this:
# Keep the Dataset definition at module level
class MyDataset(torch.utils.data.Dataset):
    . . .

def train():
    train_set = create_dataloader()   # build the DataLoader inside the function
    . . .

def test():
    test_set = create_dataloader()
    . . .

if __name__ == "__main__":
    train()
    test()
| https://stackoverflow.com/questions/67258047/ |
Creating a model whose weights are the sum of the weights of 2 different neural networks | I am doing an experiment on transfer learning.
I trained 2 CNNs that have exactly the same structure, one for MNIST and one for SVHN.
I obtained the parameters (weights and bias) of the 2 models.
Now, I want to combine (sum, or other operations) these weights. A thing like this:
modelMNIST.parameters()
modelSVHN.parameters()
#now the new model
model3 = MyCNN(1)
model3.parameters = modelMNIST.parameters()+modelSVHN.parameters()
If I do in this way, I obtain this error:
SyntaxError: can't assign to function call
And in this way:
model3.block_1[0].weight = modelMNIST.block_1[0].weight + modelSVHN.block_1[0].weight
I get this error:
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
Is there any way to combine weights of different models?
| My solution is this:
class VGG16SUM(nn.Module):
def __init__(self, model1, model2, num_classes):
super(VGG16SUM, self).__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=1,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.classifier = nn.Sequential(
nn.Linear(2048, 4096),
nn.ReLU(True),
nn.Dropout(p=0.25),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(p=0.25),
nn.Linear(4096, num_classes),
)
for p_out, p_in1, p_in2 in zip(self.parameters(), model1.parameters(), model2.parameters()):
p_out.data = p_in1.data + p_in2.data
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
# x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
#logits = self.classifier(x)
#probas = F.softmax(logits, dim=1)
# probas = nn.Softmax(logits)
#return probas
# return logits
It works!!!
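A shorter alternative sketch (not part of the original answer, and it assumes both models share exactly the same architecture and state_dict keys): sum the two state_dicts and load the result into a fresh model.
sd1, sd2 = modelMNIST.state_dict(), modelSVHN.state_dict()
summed_state = {k: sd1[k] + sd2[k] for k in sd1}
model3 = MyCNN(1)                      # same architecture as the two source models
model3.load_state_dict(summed_state)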
| https://stackoverflow.com/questions/67262565/ |
torch concat 1D to a 2D tensor | I have an issue concatenating 2 tensors,
say I have x and y:
x = torch.randn(35, 50)
y = torch.randn(35)
How do I concatenate each value of y to the corresponding row of x so that x has shape (35, 51)?
I tried:
for i in y:
for a in range(x.shape[0]):
x[a] = torch.cat((x[a],i),0)
I still get a shape error. Is there any smart way of doing it?
| This should work:
z = torch.cat([x,y.reshape(-1,1)], axis=1)
print(z.shape)
Output:
torch.Size([35, 51])
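An equivalent alternative, if you prefer unsqueeze over reshape:
z = torch.cat([x, y.unsqueeze(1)], dim=1)   # y.unsqueeze(1) has shape (35, 1)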
| https://stackoverflow.com/questions/67267440/ |
Pytorch: create a mask that is larger than the n-th quantile of each 2D tensor in a batch | I have a torch.Tensor of shape (2, 2, 2) (can be bigger), where the values are normalized within range [0, 1].
Now I am given a positive integer K, which tells me that I need to create a mask where for each 2D tensor inside the batch, values are 1 if it is larger than 1/k of all the values, and 0 elsewhere. The return mask also has shape (2, 2, 2).
For example, if I have a batch like this:
tensor([[[1., 3.],
[2., 4.]],
[[5., 7.],
[9., 8.]]])
and let K=2, it means that I must mask the values where they are greater than 50% of all the values inside each 2D tensor.
In the example, the 0.5 quantile is 2.5 and 7.5, so this is the desired output:
tensor([[[0, 1],
[0, 1]],
[[0, 0],
[1, 1]]])
I tried:
a = torch.tensor([[[0, 1],
[0, 1]],
[[0, 0],
[1, 1]]])
quantile = torch.tensor([torch.quantile(x, 1/K) for x in a])
torch.where(a > val, 1, 0)
But this is the result:
tensor([[[0, 0],
[0, 0]],
[[1, 0],
[1, 1]]])
| t = torch.tensor([[[1., 3.],
[2., 4.]],
[[5., 7.],
[9., 8.]]])
t_flat = torch.reshape(t, (t.shape[0], -1))
quants = torch.quantile(t_flat, 1/K, dim=1)
quants = torch.reshape(quants, (quants.shape[0], 1, 1))
res = torch.where(t > quants, 1, 0)
and after this res is:
tensor([[[0, 1],
[0, 1]],
[[0, 0],
[1, 1]]])
which is what you wanted
| https://stackoverflow.com/questions/67268152/ |
How to get modules of 3d tensors in 4d torch tensor? | I have a torch tensor with 4 dimensions.
Its shape is [50, 1, 1, 200]. I have to get a list of 200 3D tensors. What is the easiest way to do so?
| Did you try torch.unbind (https://pytorch.org/docs/stable/generated/torch.unbind.html)?
a = torch.rand(50, 1, 1, 200)
b = torch.unbind(a, dim=3)
len(b) # 200
b[0].shape # torch.Size([50, 1, 1])
| https://stackoverflow.com/questions/67268673/ |
PyTorch: apply mask with different shape | I have a tensor of shape (60, 3, 32, 32) and a boolean mask of shape (60, 32, 32). I want to apply this mask to the tensor. The output tensor should have shape (60, 3, 32, 32), and values are kept if the mask is 1, else 0.
How can I do that fast?
| Let t be the tensor and m be the mask. You can use:
t * m.unsqueeze(1)
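A quick self-contained check of the broadcasting:
import torch

t = torch.randn(60, 3, 32, 32)
m = torch.randint(0, 2, (60, 32, 32), dtype=torch.bool)

out = t * m.unsqueeze(1)    # the (60, 1, 32, 32) mask broadcasts over the channel dim
print(out.shape)            # torch.Size([60, 3, 32, 32])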
| https://stackoverflow.com/questions/67275794/ |
attn_output_weights in MultiheadAttention | I want to know whether the attn_output_weights matrix can demonstrate the relationship between every word pair in the input sequence.
In my project, I draw the heat map based on this output, and it looks like this:
However, I can hardly see any information in this heat map.
I referred to other people's work, and their heat maps look like this: at least the diagonal of the matrix should have a deeper color.
So I wonder whether my method of drawing the heat map is correct (i.e. directly using the output of attn_output_weights). If this is not the correct way, could you please tell me how to draw the heat map?
| It seems your range of values is rather limited. In the target example the range of values lies between [0, 1], since each row represents the softmax distribution. This is visible from the definition of attention:
I suggest you normalize each row / column (according to the attention implementation you are using) and finally visualize the attention maps in the range [0, 1]. You can do this using the arguments vmin and vmax respectively in matplotlib plottings.
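For example, a small plotting sketch (it assumes attn_output_weights has shape (batch, tgt_len, src_len), which is the default head-averaged output of nn.MultiheadAttention):
import matplotlib.pyplot as plt

attn = attn_output_weights[0].detach().cpu().numpy()   # first example in the batch
plt.imshow(attn, vmin=0.0, vmax=1.0, cmap="viridis")
plt.colorbar()
plt.xlabel("key position")
plt.ylabel("query position")
plt.show()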
If this doesn't solve the problem, maybe add a snippet of code containing the model you are using and the visualization script.
| https://stackoverflow.com/questions/67276766/ |
What is the simplest way to continue training a pre-trained BERT model, on a specific domain? | I want to use a pre-trained BERT model in order to use it on a text classification task (I'm using Huggingface library). However, the pre-trained model was trained on domains that are different than mine, and I have a large unannotated dataset that can be used for fine-tuning it. If I use only my tagged examples and fine-tune it "on the go" while training on the specific task (BertForSequenceClassification), the dataset is too small for adapting the language model for the specific domain. What it the best way to do so?
Thanks!
| Let's clarify a couple of points first to reduce some ambiguity.
BERT uses two pretraining objectives: Masked Language Modeling (MLM) and Next Sentence Prediction.
You mentioned having a large unannotated dataset, which you plan on using to fine-tune your BERT model. This is not how fine-tuning works. In order to fine-tune your pretrained model, you would need an annotated dataset i.e. document & class pair for sequence classification downstream task.
So what can you do? First, extend your general domain tokenizer with your unannotated dataset consisting of domain-specific vocabulary. Then, using this extended tokenizer you can continue pretraining on MLM and/or NSP objectives to modify your word embeddings. Finally, fine-tune your model using an annotated dataset.
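A minimal sketch of that middle step (continued MLM pretraining with the Hugging Face Trainer); the model name, corpus path, extra tokens and hyperparameters below are placeholders, not from the question:
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Optionally extend the vocabulary with domain-specific tokens first
new_tokens = ["domain_term_1", "domain_term_2"]          # hypothetical placeholders
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

raw = load_dataset("text", data_files={"train": "unannotated_domain_corpus.txt"})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-domain-adapted", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()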
| https://stackoverflow.com/questions/67282155/ |
Randomly set some elements in a tensor to zero (with low computational time) | I have a tensor of shape (3072,1000) which represents the weights in my neural network. I want to:
randomly set 60% of its elements to zero.
After updating the weights, keep 60% of the elements equal to zero but again randomly i.e., not the same previous elements.
Note: my network is not the usual artificial neural network which uses backpropagation algorithm but it is a biophysical model of the neurons in the brain so I am using special weight updating rules. Therefore, I think the ready functions in pytorch, if any, might not be helpful.
I tried the follwoing code, it is working but it takes so long because after every time I update my weight tensor, I have to run that code to set the weight tensor again to be 60% zeros
row_indices = np.random.choice(np.size(mytensor.weight, 0),
replace=False,size=int(np.size(mytensor.weight, 0)* 0.6))
column_indices = np.random.choice(np. size(mytensor.weight, 1),
replace=False, size=int(np. size(mytensor.weight, 1) * 0.6))
for r in row_indices:
for c in column_indices:
(mytensor.weight)[r][c] = 0
| You can use the dropout function for this:
import torch.nn.functional as F
my_tensor.weight = F.dropout(my_tensor.weight, p=0.6)
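One caveat worth noting (my addition, not part of the original answer): F.dropout also rescales the surviving elements by 1/(1 - p) when training=True. If you only want the zeroing without that rescaling, a plain masking sketch (assuming my_tensor.weight is a float tensor) is:
with torch.no_grad():
    mask = (torch.rand_like(my_tensor.weight) > 0.6).float()   # keeps ~40% of the entries
    my_tensor.weight.mul_(mask)                                 # zeros the rest in place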
| https://stackoverflow.com/questions/67282712/ |
PyTorch tensor declared as torch.long becomes torch.int64 | I am new to PyTorch so I haven't worked a lot with PyTorch Tensors. Something I am puzzled about is if I declare the dytpe of a tensor as torch.long, and then check the dtype it is int64. For example:
In [62]: a = torch.tensor([[0, 1, 1, 2],
[1, 0, 2, 1]], dtype=torch.long)
a.dtype
Out[62]: torch.int64
I am probably making some silly mistake.
Why is this happening?
Edit:
89 if isinstance(edge_index, Tensor):
---> 90 assert edge_index.dtype == torch.long
91 assert edge_index.dim() == 2
92 assert edge_index.size(0) == 2
In my case a is edge_index.
| From the the documentation we can see that torch.long and torch.int64 are synonymous and both refer to the 64-bit signed integer type.
| https://stackoverflow.com/questions/67287559/ |
How to check accuracy on BCELoss Pytorch? | I'm trying to use Pytorch to take a HeartDisease.csv and predict whether the patient has heart disease or not... the .csv provides 13 inputs and 1 target
I'm using BCELoss and I'm having trouble understanding how to write an accuracy check function.
My num_samples is correct but not my num_correct. I think this is a result of not understanding the predictions tensor. Right now my num_correct is usually over 8000 while my num_samples is 303...
Any insight on how to write this check accuracy function is much appreciated
I wrote this on Google Colab
#imports
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import pandas as pd
#create fully connected network
class NN(nn.Module):
def __init__(self, input_size, num_classes):
super(NN, self).__init__()
self.outputs = nn.Linear(input_size, 1)
def forward(self, x):
x = self.outputs(x)
return torch.sigmoid(x)
#set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
#hyperparameters
input_size = 13 # 13 inputs
num_classes = 1 # heartdisease or not
learning_rate = 0.001
batch_size = 64
num_epochs = 1
#load data
class MyDataset(Dataset):
def __init__(self, root, n_inp):
self.df = pd.read_csv(root)
self.data = self.df.to_numpy()
self.x , self.y = (torch.from_numpy(self.data[:,:n_inp]),
torch.from_numpy(self.data[:,n_inp:]))
def __getitem__(self, idx):
return self.x[idx, :], self.y[idx,:]
def __len__(self):
return len(self.data)
train_dataset = MyDataset("heart.csv", input_size)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle =True)
test_dataset = MyDataset("heart.csv", input_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle =True)
#initialize network
model = NN(input_size=input_size, num_classes=num_classes).to(device)
#loss and optimizer
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
#train network
for epoch in range(num_epochs):
for batch_idx, (data, targets) in enumerate(train_loader):
#get data to cuda if possible
data = data.to(device=device)
targets = targets.to(device=device)
#forward
scores = model(data.float())
targets = targets.float()
loss = criterion(scores, targets)
#backward
optimizer.zero_grad()
loss.backward()
#grad descent or adam step
optimizer.step()
#check accuracy of model
def check_accuracy(loader, model):
num_correct = 0
num_samples = 0
model.eval()
with torch.no_grad():
for x, y in loader:
x = x.to(device=device)
y = y.to(device=device)
scores = model(x.float())
_, predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print("Got {} / {} with accuracy {}".format(num_correct, num_samples, float(num_correct)/float(num_samples)*100))
model.train()
print("checking accuracy on training data")
check_accuracy(train_loader, model)
print("checking accuracy on test data")
check_accuracy(test_loader, model)
| Note: Don't fool yourself. A single linear layer + a sigmoid + BCE loss = logistic regression. This is a linear model, so just take note of that when referring to it as a "neural network", which is a term usually reserved for similar networks but with at least one hidden layer and nonlinear activations.
The sigmoid layer at the end of your model's forward() function returns an (N,1)-sized tensor, where N is the batch size. In other words, it returns a scalar for every data point. Each scalar is a value between 0 and 1 (this is the range of the sigmoid function).
The idea is to interpret those scalars as probabilities corresponding to the positive class. Suppose 1 corresponds to heart disease, and 0 corresponds to no heart disease; heart disease is the positive class, and no heart disease is the negative class. Now suppose a score is 0.6. This might be interpreted as a 60% chance that the associated label is heart disease, and a 40% chance that the associated label is no heart disease. This interpretation of the sigmoid output is what motivates the BCE loss to begin with (it's ultimately just a negative log likelihood).
So what you might do is check if your scores are greater than 0.5. If so, predict heart disease. If not, predict no heart disease.
Right now, you're computing maximums from the scores across dimension 1, which does nothing because dimension 1 is already of size 1; taking the maximum of a single value simply gives you that value.
Try something like this:
def check_accuracy(loader, model):
num_correct = 0
num_samples = 0
model.eval()
with torch.no_grad():
for x, y in loader:
x = x.to(device=device)
y = y.to(device=device)
scores = model(x.float())
# Create a Boolean tensor (True for scores > 0.5, False for others)
# and then cast it to a long tensor (Trues -> 1, Falses -> 0)
predictions = (scores > 0.5).long()
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print("Got {} / {} with accuracy {}".format(num_correct, num_samples, float(num_correct)/float(num_samples)*100))
model.train()
You may also want to squeeze your prediction and target tensors to size (N) instead of (N,1), though I'm not sure it's necessary in your case.
| https://stackoverflow.com/questions/67288750/ |
How I can convert model.pt to model.h5? | After using YOLOv5 to train model weights as .pt file,
how can I convert the weights file (model.pt) to hdf5 file (model.h5)?
Running python train.py --batch 16 --epochs 3 --data mydata.yaml --weights yolov5s.pt, the result is given as a best.pt file in a subfolder of YOLOv5; how can I convert it to an h5 file?
| Do this installation and get started here
This could be a lengthy procedure ... as you are aware, .pt only contains the weights and not the model architecture, hence your model class should also be present in your conversion code
Edit: New links are added
| https://stackoverflow.com/questions/67291066/ |
How do you test a custom dataset in Pytorch? | I've been following tutorials in Pytorch that use built-in datasets which let you specify whether you'd like to use the data for training or not... But now I'm using a .csv and a custom dataset.
class MyDataset(Dataset):
def __init__(self, root, n_inp):
self.df = pd.read_csv(root)
self.data = self.df.to_numpy()
self.x , self.y = (torch.from_numpy(self.data[:,:n_inp]),
torch.from_numpy(self.data[:,n_inp:]))
def __getitem__(self, idx):
return self.x[idx, :], self.y[idx,:]
def __len__(self):
return len(self.data)
How can I tell Pytorch not to train my test_dataset so I can use it as a reference of how accurate my model is?
train_dataset = MyDataset("heart.csv", input_size)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle =True)
test_dataset = MyDataset("heart.csv", input_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle =True)
| In pytorch, a custom dataset inherits the class Dataset. Mainly it contains two methods: __len__(), which specifies the length of your dataset object to iterate over, and __getitem__(), which returns a batch of data at a time.
Once the dataloader objects are initialized (train_loader and test_loader as specified in your code), you need to write a train loop and a test loop.
def train(model, optimizer, loss_fn, dataloader):
model.train()
for i, (input, gt) in enumerate(dataloader):
if params.use_gpu: #(If training using GPU)
input, gt = input.cuda(non_blocking = True), gt.cuda(non_blocking = True)
predicted = model(input)
loss = loss_fn(predicted, gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
and your test loop should be:
def test(model,loss_fn, dataloader):
model.eval()
for i, (input, gt) in enumerate(dataloader):
if params.use_gpu: #(If training using GPU)
input, gt = input.cuda(non_blocking = True), gt.cuda(non_blocking = True)
predicted = model(input)
loss = loss_fn(predicted, gt)
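One small addition (my own suggestion, not part of the loops above): wrap the evaluation loop in torch.no_grad() so no gradients are tracked during inference:
def test(model, loss_fn, dataloader):
    model.eval()
    with torch.no_grad():
        for i, (input, gt) in enumerate(dataloader):
            predicted = model(input)
            loss = loss_fn(predicted, gt)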
In addition, you can use a metrics dictionary to log your predictions, loss, epochs, etc. The main difference between the training and test loops is that we exclude backpropagation (zero_grad(), backward(), step()) at the inference stage.
Finally,
for epoch in range(1, epochs + 1):
train(model, optimizer, loss_fn, train_loader)
test(model, loss_fn, test_loader)
| https://stackoverflow.com/questions/67291566/ |
How to move data_parallel model to a specific cuda device? | I currently need to use a pretrained model by setting it on a specific cuda device. The pretrained model is defined as below:
DataParallel(
(module): MobileFaceNet(
(conv1): Conv_block(
(conv): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=64)
)
(conv2_dw): Conv_block(
(conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=64)
)
(conv_23): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(conv_dw): Conv_block(
(conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(project): Linear_block(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv_3): Residual(
(model): Sequential(
(0): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(conv_dw): Conv_block(
(conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(project): Linear_block(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(conv_dw): Conv_block(
(conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(project): Linear_block(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(conv_dw): Conv_block(
(conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(project): Linear_block(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(conv_dw): Conv_block(
(conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=128)
)
(project): Linear_block(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
(conv_34): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv_4): Residual(
(model): Sequential(
(0): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
(conv_45): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=512)
)
(conv_dw): Conv_block(
(conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=512, bias=False)
(bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=512)
)
(project): Linear_block(
(conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv_5): Residual(
(model): Sequential(
(0): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Depth_Wise(
(conv): Conv_block(
(conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(conv_dw): Conv_block(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=256)
)
(project): Linear_block(
(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
(conv_6_sep): Conv_block(
(conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(prelu): PReLU(num_parameters=512)
)
(conv_6_dw): Linear_block(
(conv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), groups=512, bias=False)
(bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv_6_flatten): Flatten()
(linear): Linear(in_features=512, out_features=512, bias=False)
(bn): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
If I conventionally declare
model.to(device)
with device set to cuda:1, then it raises an error when forwarding:
model(imgs)
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1
I think this is because the model was previously trained with data parallel utils in pytorch.
How can I properly set the model to the device that I specifically want?
| You should get the neural network out of DataParallel first.
Assuming your DataParallel is named model you could do:
device = torch.device("cuda:1")
module = model.module.to(device)
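If you still need DataParallel behaviour but pinned to that GPU, a hedged sketch (device_ids[0] then becomes the primary device):
module = model.module.to("cuda:1")
model = torch.nn.DataParallel(module, device_ids=[1])
out = model(imgs.to("cuda:1"))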
| https://stackoverflow.com/questions/67298294/ |
Elegant Numpy Tensor product | I need to take the product over two tensors in numpy (or pytorch):
I have
A = np.arange(1024).reshape(8,1,128)
B = np.arange(9216).reshape(8, 128, 9)
And want to obtain C, with dot products summing over the last dim of A (axis=2) and the middle dim of B (axis=1). This should have dimensions 8x9. Currently, I am doing:
C = np.zeros([8, 9])
for i in range(8):
C[i,:] = np.matmul(A[i,:,:], B[i,:,:])
How to do this elegantly?
I tried:
np.tensordot(weights, features, axes=(2,1)).
but it returns 8x1x8x9.
| One way would be to use numpy.einsum.
C = np.einsum('ijk,ikl->il', A, B)
Or you could use broadcasted matrix multiply.
C = (A @ B).squeeze(axis=1)
# equivalent: C = np.matmul(A, B).squeeze(axis=1)
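A quick self-contained check that both forms match the original loop:
import numpy as np

A = np.arange(1024).reshape(8, 1, 128)
B = np.arange(9216).reshape(8, 128, 9)

C_loop = np.stack([(A[i] @ B[i]).squeeze(0) for i in range(8)])   # the original per-batch loop
C_einsum = np.einsum('ijk,ikl->il', A, B)
C_matmul = (A @ B).squeeze(axis=1)

assert np.array_equal(C_loop, C_einsum) and np.array_equal(C_loop, C_matmul)   # all are (8, 9)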
| https://stackoverflow.com/questions/67302419/ |
How do I load a local model with torch.hub.load? | I need to avoid downloading the model from the web (due to restrictions on the machine installed).
This works, but it downloads the model from the Internet
model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet101', pretrained=True)
I have placed the .pth file and the hubconf.py file in the /tmp/ folder and changed my code to
model = torch.hub.load('/tmp/', 'deeplabv3_resnet101', pretrained=True, source='local')
but to my surprise, it still downloads the model from the Internet. What am I doing wrong? How can I load the model locally?
Just to give you a bit more details, I'm doing all this in a Docker container that has a read-only volume at runtime, so that's why the download of new files fails.
| There are two approaches you can take to get a shippable model on a machine without an Internet connection.
Load DeepLab with a pretrained model on a normal machine, use a JIT compiler to export it as a graph, and put it into the machine. The Script is easy to follow:
# To export
model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet101', pretrained=True).eval()
traced_graph = torch.jit.trace(model, torch.randn(1, 3, H, W))
traced_graph.save('DeepLab.pth')
# To load
model = torch.jit.load('DeepLab.pth').eval().to(device)
In this case, the weights and network structure is saved as computational graph, so you won't need any extra files.
Take a look at torchvision's GitHub repository.
There's a download URL for DeepLabV3 with Resnet101 backbone weights.
You can download those weights once, and then use deeplab from torchvision with pretrained=False flag and load weights manually.
model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet101', pretrained=False)
model.load_state_dict(torch.load('downloaded weights path'))
Take into consideration that there might be a ['state_dict'] or some similar parent key in the state dict, in which case you would use:
model.load_state_dict(torch.load('downloaded weights path')['state_dict'])
| https://stackoverflow.com/questions/67302634/ |
How to translate a conv2D in keras or tensorflow which is already implemented in PyTorch? | I have the following PyTorch implementation for replacing a conv2D layer with 3 different layers:
first_layer = torch.nn.Conv2d(in_channels=3, \
out_channels=3, kernel_size=1,
stride=1, padding=0, dilation = (1,1), bias=False)
core_layer = torch.nn.Conv2d(in_channels=3, \
out_channels=16, kernel_size=(3,3),
stride=(1,1), padding=(1,1), dilation=(1,1),
bias=False)
last_layer = torch.nn.Conv2d(in_channels=16, \
out_channels=64, kernel_size=1, stride=1,
padding=0, dilation=(1,1), bias=True)
last_layer.bias.data = layer.bias.data
first_layer.weight.data = \
torch.transpose(first, 1, 0).unsqueeze(-1).unsqueeze(-1)
last_layer.weight.data = last.unsqueeze(-1).unsqueeze(-1)
core_layer.weight.data = core
new_layers = [first_layer, core_layer, last_layer]
y = nn.Sequential(*new_layers)
where, 'first' represents a random 3 by 3 matrix.
'core' represents a tensor of shape [16,3,3,3]
'last' represents another random matrix of size (64,16).
When I tried to translate this into keras, I have the following :
first_layer = tf.keras.layers.SeparableConv2D(3, kernel_size=1, strides = (1,1), padding = 'same', dilation_rate = (1,1), use_bias = False )
core_layer = tf.keras.layers.Conv2D(16, kernel_size=3, strides = (1,1), padding = (1,1), dilation_rate = (1,1), use_bias = False)
last_layer = tf.keras.layers.SeparableConv2D(64, kernel_size=1, strides = (1,1), \
padding = 'same', dilation_rate = (1,1), use_bias =True )
first_layer = tf.expand_dims(tf.expand_dims(tf.transpose(first, perm = [1,0]),0),0)
last_layer = tf.expand_dims(tf.expand_dims(last, 0),0)
core_layer = core
new_layers = [first_layer, core_layer, last_layer]
when I tried to get back the weights of the model in keras, I am getting a list with no weights at all. The convolution is not performed. Any idea on how to proceed further/ any other approached of transforming the above pytorch implementation to keras or tensorflow?
| You are missing exactly the last step in Keras transformation.
There is also a Sequential() class in TensorFlow(Keras) that you can use to instantiate the model.
I haven't checked for the exact match between TF and PyTorch, but this should be your starting point to solve your problem.
model = tf.keras.Sequential([first_layer, core_layer, last_layer])
y = model(x)
| https://stackoverflow.com/questions/67302719/ |
How to convert probability to angle degree in a head-pose estimation problem? | I reused code from others to make head-pose prediction in Euler angles. The author trained a classification network that returns bin classification results for the three angles, i.e. yaw, roll, pitch. The number of bins is 66. They somehow convert the probabilities to the corresponding angle, as written from line 150 to 152 here. Could someone help to explain the formula?
These are the relevant lines of code in the above file:
[56] model = hopenet.Hopenet(torchvision.models.resnet.Bottleneck, [3, 4, 6, 3], 66) # a variant of ResNet50
[80] idx_tensor = [idx for idx in xrange(66)]
[81] idx_tensor = torch.FloatTensor(idx_tensor).cuda(gpu)
[144] yaw, pitch, roll = model(img)
[146] yaw_predicted = F.softmax(yaw)
[150] yaw_predicted = torch.sum(yaw_predicted.data[0] * idx_tensor) * 3 - 99
| If we look at the training code, and the authors' paper,* we see that the loss function is a sum of two losses:
the raw model output (vector of probabilities for each bin category):
[144] yaw, pitch, roll = model(img)
a linear combination of the bin predictions (the predicted continuous angle):
[146] yaw_predicted = F.softmax(yaw)
[150] yaw_predicted = torch.sum(yaw_predicted.data[0] * idx_tensor) * 3 - 99
Since 3 * label_weighted_sum(softmax(output)) - 99 is the final step in training the regression loss (but is not explicitly a part of the model's forward), this must be applied to the raw output to convert it from the vector of bin probabilities to a single angle prediction.
*
3.2. The Multi-Loss Approach
All previous work which predicted head pose using convolutional networks regressed all three Euler angles directly using a mean squared error loss. We notice that this approach does not achieve the best results on our large-scale synthetic training data.
We propose to use three separate losses, one for each angle. Each loss is a combination of two components: a binned pose classification and a regression component. Any backbone network can be used and augmented with three fully-connected layers which predict the angles. These three fully-connected layers share the previous convolutional layers of the network.
The idea behind this approach is that by performing bin classification we use the very stable softmax layer and cross-entropy, thus the network learns to predict the neighbourhood of the pose in a robust fashion. By having three cross-entropy losses, one for each Euler angle, we have three signals which are backpropagated into the network which improves learning. In order to obtain fine-grained predictions we compute the expectation of each output angle for the binned output. The detailed architecture is shown in Figure 2.
We then add a regression loss to the network, namely a mean-squared error loss, in order to improve fine-grained predictions. We have three final losses, one for each angle, and each is a linear combination of both the respective classification and the regression losses. We vary the weight of the regression loss in Section 4.4 and we hold the weight of the classification loss constant at 1. The final loss for each Euler angle is the following:
Where H and MSE respectively designate the cross-entropy and mean squared error loss functions.
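Putting the quoted description into a formula (my paraphrase, since the equation image is not reproduced above): for each Euler angle the total loss has the form L = H(bin_labels, logits) + alpha * MSE(angle, expected_angle), i.e. cross-entropy on the bin classification plus a weighted mean-squared error on the continuous angle obtained from the expectation, which is exactly the 3 * sum(softmax(logits) * idx) - 99 computation in lines 150 to 152.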
| https://stackoverflow.com/questions/67311147/ |
Batch inference of softmax does not sum to 1 | I am working with REINFORCE algorithm with PyTorch. I noticed that the batch inference/predictions of my simple network with Softmax doesn’t sum to 1 (not even close to 1). I am attaching a minimum working code so that you can reproduce it. What am I missing here?
import numpy as np
import torch
obs_size = 9
HIDDEN_SIZE = 9
n_actions = 2
np.random.seed(0)
model = torch.nn.Sequential(
torch.nn.Linear(obs_size, HIDDEN_SIZE),
torch.nn.ReLU(),
torch.nn.Linear(HIDDEN_SIZE, n_actions),
torch.nn.Softmax(dim=0)
)
state_transitions = np.random.rand(3, obs_size)
state_batch = torch.Tensor(state_transitions)
pred_batch = model(state_batch) # WRONG PREDICTIONS!
print('wrong predictions:\n', *pred_batch.detach().numpy())
# [0.34072137 0.34721774] [0.30972624 0.30191955] [0.3495524 0.3508627]
# DOES NOT SUM TO 1 !!!
pred_batch = [model(s).detach().numpy() for s in state_batch] # CORRECT PREDICTIONS
print('correct predictions:\n', *pred_batch)
# [0.5955179 0.40448207] [0.6574412 0.34255883] [0.624833 0.37516695]
# DOES SUM TO 1 AS EXPECTED
|
Although PyTorch lets us get away with it, we don’t actually provide an input with the right dimensionality. We have a model that takes one input and produces one output, but PyTorch nn.Module and its subclasses are designed to do so on multiple samples at the same time. To accommodate multiple samples, modules expect the zeroth dimension of the input to be the number of samples in the batch.
Deep Learning with PyTorch
That your model works on each individual sample is an implementation nicety. You have incorrectly specified the dimension for the softmax (across batches instead of across the variables), and hence when given a batch dimension it is computing the softmax across samples instead of within samples:
nn.Softmax requires us to specify the dimension along which the softmax function is applied:
softmax = nn.Softmax(dim=1)
In this case, we have two input vectors in two rows (just like when we work with
batches), so we initialize nn.Softmax to operate along dimension 1.
Change torch.nn.Softmax(dim=0) to torch.nn.Softmax(dim=1) to get appropriate results.
| https://stackoverflow.com/questions/67314352/ |
Is it possible to resolve TypeError: argument 'input' (position 1) must be Tensor error without retraining the model? | I have made a model in PyTorch for use in an openAI Gym environment. I have made it in the following way:
class Policy(nn.Module):
def __init__(self, s_size=8, h_size=16, a_size=4):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, 32)
self.fc3 = nn.Linear(32, 64)
self.fc4 = nn.Linear(64, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return F.softmax(x, dim=1 )
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
return action.item(), m.log_prob(action)
I then save its state in a dictionary and use it as follows:
env = gym.make('LunarLander-v2')
policy = Policy().to(torch.device('cpu'))
policy.load_state_dict(torch.load('best_params_cloud.ckpt', map_location='cpu'))
policy.eval()
ims = []
rewards = []
state = env.reset()
for step in range(STEPS):
img = env.render(mode='rgb_array')
action,log_prob = policy(state)
# print(action)
state,reward,done,i_ = env.step(action)
rewards.append(reward)
# print(reward,done)
cv2_im_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
pil_im = Image.fromarray(cv2_im_rgb)
draw = ImageDraw.Draw(pil_im)
# Choose a font
font = ImageFont.truetype("Roboto-Regular.ttf", 20)
# Draw the text
draw.text((0, 0), f"Step: {step} Action : {action} Reward: {int(reward)} Total Rewards: {int(np.sum(rewards))} done: {done}", font=font,fill="#FDFEFE")
# Save the image
img = cv2.cvtColor(np.array(pil_im), cv2.COLOR_RGB2BGR)
im = plt.imshow(img, animated=True)
ims.append([im])
if done:
env.close()
break
Writer = animation.writers['pillow']
writer = Writer(fps=15, metadata=dict(artist='Me'), bitrate=1800)
im_ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=3000,
blit=True)
im_ani.save('ll_train1.gif', writer=writer)
But this returns the error:
TypeError Traceback (most recent call last)
<ipython-input-3-da32222edde2> in <module>
9 for step in range(STEPS):
10 img = env.render(mode='rgb_array')
---> 11 action,log_prob = policy(state)
12 # print(action)
13 state,reward,done,i_ = env.step(action)
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
<ipython-input-2-66d42ebb791e> in forward(self, x)
33
34 def forward(self, x):
---> 35 x = F.relu(self.fc1(x))
36 x = F.relu(self.fc2(x))
37 x = F.relu(self.fc3(x))
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
92
93 def forward(self, input: Tensor) -> Tensor:
---> 94 return F.linear(input, self.weight, self.bias)
95
96 def extra_repr(self) -> str:
~\anaconda3\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
TypeError: linear(): argument 'input' (position 1) must be Tensor, not numpy.ndarray
I tried to change the forward function by adding the following line of code:
def forward(self, x):
x = torch.tensor(x,dtype=torch.float32,device=DEVICE).unsqueeze(0) //Added this line
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return F.softmax(x, dim=1 )
But this also returns an error : ValueError: not enough values to unpack (expected 2, got 1)
The policy took a lot of time to train, and I am trying to avoid retraining it, is there a workaround for it to run without retraining?
| This error is not related to your model.
The forward function only returns the probability distribution, but what you need is the action and the corresponding log-probability (the output of Policy.act).
Change your code from
for step in range(STEPS):
img = env.render(mode='rgb_array')
# This line causes the error.
action,log_prob = policy(state)
to
for step in range(STEPS):
img = env.render(mode='rgb_array')
# Fixed: use the act() helper instead of calling the model directly.
action,log_prob = policy.act(state)
| https://stackoverflow.com/questions/67316491/ |
Wav2Vec pytorch element 0 of tensors does not require grad and does not have a grad_fn | I am retraining a wav2vec model from hugging face for a classification problem. I have 5 classes and the input is a list of tensors of shape [1, 400].
Here is how I am getting the model
num_labels = 5
model_name = "Zaid/wav2vec2-large-xlsr-53-arabic-egyptian"
model_config = AutoConfig.from_pretrained(model_name, num_labels=num_labels) ##needed for the visualizations
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name, config=model_config)
Here is the model updated settings
# Freeze the pre trained parameters
for param in model.parameters():
param.requires_grad = False
criterion = nn.MSELoss().to(device)
optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-6)
# Add three new layers at the end of the network
model.classifier = nn.Sequential(
nn.Linear(768, 256),
nn.Dropout(0.25),
nn.ReLU(),
nn.Linear(256, 64),
nn.Dropout(0.25),
nn.ReLU(),
nn.Linear(64, 2),
nn.Dropout(0.25),
nn.Softmax(dim=1)
)
Then the training loop
print_every = 300
total_loss = 0
all_losses = []
model.train()
for epoch in range(2):
print("Epoch number: ", epoch)
for row in range(16918):
Input = torch.tensor(trn_ivectors[row]).double()
label = torch.tensor(trn_labels[row]).long().to(device)
label = torch.unsqueeze(label,0).to(device)
#print("Label", label.shape)
Input = torch.unsqueeze(Input,1).to(device)
#print(Input.shape)
optimizer.zero_grad()
#Input.requires_grad = True
Input = F.softmax(Input[0], dim=-1)
if label == 0:
label = torch.tensor([1.0, 0.0]).float().to(device)
elif label == 1:
label = torch.tensor([0.0, 1.0]).float().to(device)
# print(overall_output, label)
loss = criterion(Input, label)
total_loss += loss.item()
loss.backward()
optimizer.step()
if idx % print_every == 0 and idx > 0:
average_loss = total_loss / print_every
print("{}/{}. Average loss: {}".format(idx, len(train_data), average_loss))
all_losses.append(average_loss)
total_loss = 0
torch.save(model.state_dict(), "model_after_train.pt")
Unfortunately when I try to train the program it gives me the following error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I would appreciate it if you could tell me how to fix this error. I have been searching a lot for a way to fix it but haven't managed to.
Thanks
| Please try adding
requires_grad = True
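For instance (a minimal sketch based on the training loop above; the exact placement is an assumption), enabling gradients on the tensor the loss is computed from gives loss.backward() something to differentiate:
Input = torch.unsqueeze(Input, 1).to(device)
Input.requires_grad_(True)  # now the softmax output and the loss carry a grad_fn
Input = F.softmax(Input[0], dim=-1)
loss = criterion(Input, label)
loss.backward()  # no longer raises the "does not require grad" error
Note that with every model parameter frozen and the loss computed directly from the input, optimizer.step() will not actually update any weights; that is a separate issue from this error.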
| https://stackoverflow.com/questions/67326333/ |
PyTorch LSTM for Daily Stock Return Prediction - Train loss is consistently lower than test loss | I was wondering if someone could share some ideas for why my training loss begins at a higher level than the test loss?
I am trying to run an LSTM on daily stock return data as the only input and using the 10 previous days to predict the price on the next day. Training/test/validation sets do not overlap, so there is no leakage. Not using any regularisation that would impact the training data only.
Really confused at the moment as I cannot seem to find the error.
I will include the code below, but it's quite long
# Defining the LSTM class
import torch
import torch.nn as nn
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler, MinMaxScaler
class LSTM(nn.Module):
def __init__(self, n_inputs, n_hidden, num_layers, n_outputs):
super(LSTM, self).__init__()
self.D = n_inputs
self.M = n_hidden
self.K = n_outputs
self.L = num_layers
self.rnn = nn.LSTM(
input_size=self.D,
hidden_size=self.M,
num_layers=self.L,
batch_first=True)
self.fc = nn.Linear(self.M, self.K)
def forward(self, X):
# initial hidden states
h0 = torch.zeros(self.L, X.size(0), self.M).to(device)
c0 = torch.zeros(self.L, X.size(0), self.M).to(device)
# get RNN unit output
out, _ = self.rnn(X, (h0, c0))
# we only want h(T) at the final time step
out = self.fc(out[:, -1, :])
return out
# Defining a function to train the LSTM
def full_gd(model,
loss_function,
optimizer,
X_train,
y_train,
X_test,
y_test,
no_epochs):
# Stuff to store
train_losses = np.zeros(no_epochs)
test_losses = np.zeros(no_epochs)
for it in range(no_epochs):
# zero the parameter gradients
optimizer.zero_grad()
# Forward pass
outputs = model(X_train)
loss = loss_function(outputs, y_train)
# Backward and optimize
loss.backward()
optimizer.step()
# Save losses
train_losses[it] = loss.item()
# Test loss
test_outputs = model(X_test)
test_loss = loss_function(test_outputs, y_test)
test_losses[it] = test_loss.item()
if (it + 1) % 10 == 0:
print(f'Epoch {it+1}/{no_epochs}, Train Loss: {loss.item():.4f}, Test Loss: {test_loss.item():.4f}')
return train_losses, test_losses
# Import sklearn's StandardScaler to scale the returns data
scaler = StandardScaler()
scaler.fit(data[:3*len(data)//5])
historical_returns = scaler.transform(data)
# Creating the dataset to train the LSTM. D is the number of input features. T is the number of data points used in forecasting
T = 10
D = 1
X = []
Y = []
for t in range(len(historical_returns) - T):
x = historical_returns[t:t+T]
X.append(x)
y = historical_returns[t+T]
Y.append(y)
X_historical = np.array(X).reshape(-1, T, 1)
Y_historical = np.array(Y).reshape(-1, 1)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Splitting the data into a 60/20/20 train/validation/test split. No random split is used here as this is a time series dataset
x_train1 = torch.from_numpy(X_historical[:3*len(historical_returns)//5].astype(np.float32))
y_train1 = torch.from_numpy(Y_historical[:3*len(historical_returns)//5].astype(np.float32))
x_val1 = torch.from_numpy(X_historical[-2*len(historical_returns)//5: -1*len(historical_returns)//5].astype(np.float32))
y_val1 = torch.from_numpy(Y_historical[-2*len(historical_returns)//5: -1*len(historical_returns)//5].astype(np.float32))
x_test1 = torch.from_numpy(X_historical[-1*len(historical_returns)//5:].astype(np.float32))
y_test1 = torch.from_numpy(Y_historical[-1*len(historical_returns)//5:].astype(np.float32))
# move data to GPU
x_train1, y_train1 = x_train1.to(device), y_train1.to(device)
x_val1, y_val1 = x_val1.to(device), y_val1.to(device)
x_test1, y_test1 = x_test1.to(device), y_test1.to(device)
x_train1 = x_train1.reshape(-1, T, 1)
x_test1 = x_test1.reshape(-1, T, 1)
x_val1 = x_val1.reshape(-1, T, 1)
# Define the model parameters
Hidden = 10
model = LSTM(1, Hidden, 1, 1)
model.to(device)
loss_function = nn.MSELoss()
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
no_epochs = 200
train_losses, validation_losses = full_gd(model,
loss_function,
optimizer,
x_train1,
y_train1,
x_val1,
y_val1,
no_epochs)
# Plot training and validation loss
plt.figure(figsize=(12,8))
plt.plot(train_losses, label='train loss')
plt.plot(validation_losses, label='test loss')
plt.legend()
plt.show()
| Well there might be several reasons.
Your task is difficult, or it is hard with the data you have.
Your validation split contains very easy tasks.
Another natural reason for this issue is the dataset size, since the validation split is relatively smaller than the training split. Theoretically, with random guesses (which is roughly the model's initial state), you are more likely to fail on a large number of guesses.
Your model seems not to have learned; it performs poorly on the training data, which is undesirable. Keep in mind that RNNs are hard to train, though. You can try some potential aids, like increasing the number of epochs or making the model more complex. If you can compare your results with another work, you should do it. That would show you how well or badly your experiment turned out.
| https://stackoverflow.com/questions/67326379/ |
Pytorch: smarter way to reduce dimension by reshape | I want to reshape a Tensor by merging its first two dimensions (multiplying their sizes together).
For example,
1st_tensor: torch.Size([12, 10]) to torch.Size([120])
2nd_tensor: torch.Size([12, 10, 5, 4]) to torch.Size([120, 5, 4])
I.e. The first two dimensions shall be merged into one, while the other dimensions shall remain the same.
Is there a smarter way than
1st_tensor.reshape(-1,)
2nd_tensor.reshape(-1,5,4),
that can adapt to the shape of different Tensors?
Test cases:
import torch
tests = [
torch.rand(11, 11),
torch.rand(12, 15351, 6, 4),
torch.rand(13, 65000, 8)
]
| For tensor t, you can use:
t.reshape((-1,)+t.shape[2:])
This uses -1 for flattening the first two dimensions, and then uses t.shape[2:] to keep the other dimensions identical to the original tensor.
For your examples:
>>> tests = [
... torch.rand(11, 11),
... torch.rand(12, 15351, 6, 4),
... torch.rand(13, 65000, 8)
... ]
>>> tests[0].reshape((-1,)+tests[0].shape[2:]).shape
torch.Size([121])
>>> tests[1].reshape((-1,)+tests[1].shape[2:]).shape
torch.Size([184212, 6, 4])
>>> tests[2].reshape((-1,)+tests[2].shape[2:]).shape
torch.Size([845000, 8])
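Equivalently (an alternative worth noting, using a built-in instead of constructing the shape tuple), torch.flatten with explicit start and end dimensions merges just the first two dimensions:
>>> torch.flatten(tests[1], start_dim=0, end_dim=1).shape
torch.Size([184212, 6, 4])
>>> torch.flatten(tests[0], start_dim=0, end_dim=1).shape
torch.Size([121])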
| https://stackoverflow.com/questions/67327630/ |
How to find input that maximizes output of a neural network using pytorch | I have a pytorch network that has been trained and whose weights have been updated (training is complete).
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(1, H)
self.fc2 = nn.Linear(1, H)
self.fc3 = nn.Linear(H, 1)
def forward(self, x, y):
h1 = F.relu(self.fc1(x)+self.fc2(y))
h2 = self.fc3(h1)
return h2
After training, I want to maximize the output of the network with respect to the input. In other words, I want to optimize the input to maximize the neural network output, without changing the weights. How can I achieve that?
My trial, but it doesn't make sense:
in = torch.autograd.Variable(x)
out = Net(in)
grad = torch.autograd.grad(out, input)
|
Disable gradients for the network.
Set your input tensor as a parameter requiring grad.
Initialize an optimizer wrapping the input tensor.
Backprop with some loss function and a goal tensor
...
Profit!
import torch
f = torch.nn.Linear(10, 5)
f.requires_grad_(False)
x = torch.nn.Parameter(torch.rand(10), requires_grad=True)
optim = torch.optim.SGD([x], lr=1e-1)
mse = torch.nn.MSELoss()
y = torch.ones(5) # the desired network response
num_steps = 5 # how many optim steps to take
for _ in range(num_steps):
loss = mse(f(x), y)
loss.backward()
optim.step()
optim.zero_grad()
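To sanity-check the result (a small usage sketch on the same toy setup), the loss against the goal should have dropped after the loop:
with torch.no_grad():
    print(mse(f(x), y))  # should be noticeably smaller than the initial loss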
But make sure that your goal tensor is well defined wrt. the network's monotonicity, otherwise you might end up with nans.
| https://stackoverflow.com/questions/67328098/ |
ValueError: only one element tensors can be converted to Python scalars when using torch.Tensor on list of tensors | I have a list of tensors:
object_ids = [tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.])]
Intuitively, it seems like I should be able to create a new tensor from this:
torch.as_tensor(object_ids, dtype=torch.float32)
But this does NOT work. Apparently, torch.as_tensor and torch.Tensor can only turn lists of scalars into new tensors; they cannot turn a list of d-dim tensors into a (d+1)-dim tensor.
| You can use torch.stack.
In your example:
>>> object_ids = [tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.]), tensor([2., 3.])]
>>> torch.stack(object_ids)
tensor([[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.],
[2., 3.]])
| https://stackoverflow.com/questions/67328121/ |
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [640]] is at version 4; | I want to use pytorch DistributedDataParallel for adversarial training. The loss function is trades. The code can run in DataParallel mode. But in DistributedDataParallel mode, I got this error.
When I change the loss to AT, it can run successfully. Why can't it run with the trades loss? The two loss functions are as follows:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/lthpc/.conda/envs/bba/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/data/zsd/defense/my_adv_training/muti_gpu_2.py", line 170, in main_worker
train_loss = train(train_loader, model, optimizer, epoch, local_rank, args)
File "/data/zsd/defense/my_adv_training/muti_gpu_2.py", line 208, in train
loss = trades(model,x, y,optimizer, args.epsilon,args.step_size,args.num_steps,beta=6.0)
File "/data/zsd/defense/my_adv_training/loss_functions.py", line 137, in trades
loss_kl.backward()
File "/home/lthpc/.conda/envs/bba/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/lthpc/.conda/envs/bba/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [640]] is at version 4; expected version 3 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
for i, (x, y) in enumerate(train_loader):
# measure data loading time
x,y = x.cuda(local_rank, non_blocking=True), y.cuda(local_rank, non_blocking=True)
loss = trades(model,x, y,optimizer, args.epsilon,args.step_size,args.num_steps,beta=6.0)
torch.distributed.barrier()
optimizer.zero_grad()
loss.backward(retain_graph=True)
optimizer.step()
def trades():
model.eval()
criterion_kl = nn.KLDivLoss(reduction='sum')
x_adv = x.detach() + 0.001 * torch.randn_like(x).detach()
nat_output = model(x)
for _ in range(num_steps):
x_adv.requires_grad_()
with torch.enable_grad():
loss_kl = criterion_kl(F.log_softmax(model(x_adv), dim=1),
F.softmax(nat_output, dim=1))
loss_kl.backward()
eta = step_size * x_adv.grad.sign()
x_adv = x_adv.detach() + eta
x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
x_adv = torch.clamp(x_adv, 0.0, 1.0)
model.train()
x_adv = Variable(x_adv, requires_grad=False)
optimizer.zero_grad()
# calculate robust loss
logits = model(x)
loss_natural = nn.CrossEntropyLoss()(logits, y)
loss_robust = (1.0 / x.size(0)) * criterion_kl(F.log_softmax(model(x_adv), dim=1),
F.softmax(logits, dim=1))
loss = loss_natural + beta * loss_robust
return loss
def AT():
model.eval()
x_adv = x.detach() + torch.from_numpy(np.random.uniform(-epsilon,
epsilon, x.shape)).float().cuda()
x_adv = torch.clamp(x_adv, 0.0, 1.0)
for k in range(num_steps):
x_adv.requires_grad_()
output = model(x_adv)
model.zero_grad()
with torch.enable_grad():
loss = nn.CrossEntropyLoss()(output, y)
loss.backward()
eta = step_size * x_adv.grad.sign()
x_adv = x_adv.detach() + eta
x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
x_adv = torch.clamp(x_adv, 0.0, 1.0)
x_adv = Variable(x_adv, requires_grad=False)
model.train()
logits_adv = model(x_adv)
loss = nn.CrossEntropyLoss()(logits_adv, y)
return loss
| I changed the code of trades and solved this error. But I don't know why this works.
def trades():
model.eval()
criterion_kl = nn.KLDivLoss(reduction='sum')
x_adv = x.detach() + 0.001 * torch.randn_like(x).detach()
nat_output = model(x)
for _ in range(num_steps):
x_adv.requires_grad_()
with torch.enable_grad():
loss_kl = criterion_kl(F.log_softmax(model(x_adv), dim=1),
F.softmax(nat_output, dim=1))
grad = torch.autograd.grad(loss_kl, [x_adv])[0]
x_adv = x_adv.detach() + step_size * torch.sign(grad.detach())
x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
x_adv = torch.clamp(x_adv, 0.0, 1.0)
model.train()
x_adv = Variable(x_adv, requires_grad=False)
optimizer.zero_grad()
# calculate robust loss
logits = model(x)
loss_natural = nn.CrossEntropyLoss()(logits, y)
loss_robust = (1.0 / x.size(0)) * criterion_kl(F.log_softmax(model(x_adv), dim=1),
F.softmax(logits, dim=1))
loss = loss_natural + beta * loss_robust
return loss
| https://stackoverflow.com/questions/67329997/ |
TypeError: img should be PIL Image. Got even though using latest pytorch version | I had to create a new pytorch environment in Anaconda. My code worked fine in my old environment. I then created the new environment with the same version of pytorch and cuda 10.1 (but then updated to cuda 11, same as the old one). When I try to run the same code I get the error:
TypeError: img should be PIL Image. Got <class 'torch.Tensor'>
When trying to apply any transformation to my tensors, for example the following code gives me an error:
def randRoll(batch, deg):
rotator = torchvision.transforms.RandomRotation(deg)
batch = rotator(batch)
return batch
Nothing has changed and I can't understand why I would get this.
Any suggestions?
| Solved
Turns out somehow torchvision 0.2.2 had been installed instead of the latest 0.9.1 (which my other environment used).
This was solved by uninstalling torchvision using
conda remove torchvision
then installing torchvision using pip (using conda install gave me version 0.2.2)
pip install torchvision
I also had to reinstall six using pip.
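To confirm which version is active after the reinstall (a quick check):
import torchvision
print(torchvision.__version__)  # should now report 0.9.x rather than 0.2.2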
| https://stackoverflow.com/questions/67332948/ |
Pixelwise regression. How to go from Nx1xHxW to Nx3xHxW? | I have Nx1xHxW feature maps. I need to add a second head that generates Nx3xHxW, representing pixel-wise regression with a triplet for each pixel.
The question is: how would you go from Nx1xHxW to Nx3xHxW? A fully connected layer would be too expensive in terms of introduced parameters.
What I am currently trying is a 1x1 convolution with 3 output channels and stride 1, defined in PyTorch as nn.Conv2d(1, 3, (1, 1), stride=1, bias=True), but the results do not seem encouraging. Any suggestion would be welcome.
Best
| You can expand the dimension of the data at any point in the forward function with non-parametric operations to force the output into this shape. For instance:
def forward(input):
input = input.repeat(1,3,1,1)
output = self.layers(input)
return output
or:
def forward(input):
intermediate = self.layers(input)
intermediate = intermediate.repeat(1,3,1,1)  # repeat() is not in-place, so assign the result
output = self.more_layers(intermediate)
return output
Theoretically, there is some nonlinear function that produces the 3d pixelwise output given a 1-dimensional input. You can try and learn this nonlinear function using a series of NN layers, but, as you indicated above, this may not give great results and moreover may be difficult to learn well. Instead, you can simply expand the input at some point so that you are instead learning a 3d to 3d pixelwise nonlinear function with NN layers. torch.repeat and other similar operations are differentiable so shouldn't cause an issue with learning.
| https://stackoverflow.com/questions/67333510/ |
calculating accuracy (in lstm model) | I am trying to get my accuracy, and I have this code:
num_correct = 0.0
for inputs, labels in dataloader(
valid_features, valid_labels, batch_size=batch_size, sequence_length=20):
top_val, top_class = torch.exp(output).topk(1)
num_correct += torch.sum(top_class.squeeze() == labels)
#...
print(#...,
"Accuracy: {:.3f}".format(num_correct*1.0 / len(valid_labels) *1.0)
It always prints 0.000, so I decided to print the raw values going into num_correct:
print(top_class.squeeze(), labels)
tensor([ 1, 3, 3, ..., 3, 4, 3], device='cuda:0') tensor([ 1, 1, 3, ..., 3, 3, 3], device='cuda:0')
tensor([ 4, 3, 1, ..., 4, 4, 3], device='cuda:0') tensor([ 4, 3, 1, ..., 4, 4, 3], device='cuda:0')
tensor([ 2, 4, 2, ..., 4, 4, 4], device='cuda:0') tensor([ 3, 4, 1, ..., 4, 4, 4], device='cuda:0')
tensor([ 0, 1, 3, ..., 2, 3, 0], device='cuda:0') tensor([ 0, 1, 3, ..., 2, 2, 0], device='cuda:0')
These appear pretty accurate. So... I could extract this out to numpy and be done with it, but there is a pytorch way.
| This works:
num_correct += torch.sum(torch.eq(top_class.squeeze(), labels)).item()
| https://stackoverflow.com/questions/67337015/ |
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() | I'm new on PyTorch and I'm trying to code with it
so I have a function called OH which takes a number and returns a vector like this
def OH(x,end=10,l=12):
x = T.LongTensor([[x]])
end = T.LongTensor([[end]])
one_hot_x = T.FloatTensor(1,l)
one_hot_end = T.FloatTensor(1,l)
first=one_hot_x.zero_().scatter_(1,x,1)
second=one_hot_end.zero_().scatter_(1,end,1)
vector=T.cat((one_hot_x,one_hot_end),dim=1)
return vector
OH(0)
output:
tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0.]])
now I have a NN that takes this output and return number but this warning always appear in my compiling
online.act(OH(obs))
output:
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:17: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
4
I tried to use online.act(OH(obs).clone().detach()) but it gives me the same warning,
and the code works fine and gives good results, but I need to understand this warning.
Edit
the following is my NN that has the act function
class Network(nn.Module):
def __init__(self,lr,n_action,input_dim):
super(Network,self).__init__()
self.f1=nn.Linear(input_dim,128)
self.f2=nn.Linear(128,64)
self.f3=nn.Linear(64,32)
self.f4=nn.Linear(32,n_action)
#self.optimizer=optim.Adam(self.parameters(),lr=lr)
#self.loss=nn.MSELoss()
self.device=T.device('cuda' if T.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self,x):
x=F.relu(self.f1(x))
x=F.relu(self.f2(x))
x=F.relu(self.f3(x))
x=self.f4(x)
return x
def act(self,obs):
state=T.tensor(obs).to(device)
actions=self.forward(state)
action=T.argmax(actions).item()
return action
| The problem is that the act function of the Network already receives a tensor, and you then wrap it in T.tensor again, which is what triggers the warning.
Just remove the T.tensor call in act, like this:
def act(self,obs):
#state=T.tensor(obs).to(device)
state=obs.to(device)
actions=self.forward(state)
action=T.argmax(actions).item()
return action
| https://stackoverflow.com/questions/67341208/ |
How to convert this nested JSON file in pandas df? | I have to deal with this database for a project:
In particular, I need to obtain a pandas df to formatting these data as input for a neural network in an NLP task. The Json format is the following:
json file
├── "data"
│ └── [i]
│ ├── "paragraphs"
│ │ └── [j]
│ │ ├── "context": "paragraph text"
│ │ └── "qas"
│ │ └── [k]
│ │ ├── "answers"
│ │ │ └── [l]
│ │ │ ├── "answer_start": N
│ │ │ └── "text": "answer"
│ │ ├── "id": "&lt;uuid&gt;"
│ │ └── "question": "paragraph question?"
│ └── "title": "document id"
└── "version": 1.1
I tried hard using the .json_normalize method but I can't get any results. I've noticed that most of my attempts (those which don't end with an error) end up recognizing only "data" and "version" as columns, with the rest of the text treated as a single object, as here:
f = open("SQuAD_it-test.json", "r",encoding="Latin-1" )
data = json.load(f)
df = pd.json_normalize(data)
df.sample(1)
data version
0 [{'paragraphs': [{'qas': [{'question': 'Quando... 1.1
And if I try to visualize more samples, an error occurs telling me the population is only 1.
My desired output is something like this, selecting the indexes to use which can be at different level of the tree:
df.sample(5)
title context question text answer_start
str1 str6 str11 str16 N1
str2 str7 str12 str17 N2
str3 str8 str13 str13 N3
str4 str9 str14 str18 N4
str5 str10 str15 str19 N5
I've also worked on the arguments of .json_normalize, but I'm not able to fully comprehend the explanation. Could you help me?
| Since the given json has many nested fileds, we can use record_path and meta arguments to get the desired dataframe:
df = pd.json_normalize(data, record_path=['data', 'paragraphs', 'qas', 'answers'],
meta=[['data','title'], ['data', 'paragraphs','context'],
['data', 'paragraphs', 'qas','question']])
Note that the output keys will not be in the exact order as given in the desired output table. Also, keys will have slightly different (fully qualified) names.
print(df.keys())
Ouput:
Index(['text', 'answer_start', 'data.title', 'data.paragraphs.context',
'data.paragraphs.qas.question'],
dtype='object')
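If you want the short column names from the desired output table, a follow-up rename works (a sketch; the mapping simply strips the prefixes):
df = df.rename(columns={'data.title': 'title',
                        'data.paragraphs.context': 'context',
                        'data.paragraphs.qas.question': 'question'})
print(df.keys())
# Index(['text', 'answer_start', 'title', 'context', 'question'], dtype='object')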
| https://stackoverflow.com/questions/67345054/ |
Training a model on GPU is very slow | I am using an A100-SXM4-40GB GPU but training is terribly slow. I tried two models, a simple classification on cifar and a Unet on Cityscapes. I tried my code on other GPUs and it worked totally fine, but I do not know why training on this high-capacity GPU is super slow.
I would appreciate any help.
Here are some other properties of GPUs.
GPU 0: A100-SXM4-40GB
GPU 1: A100-SXM4-40GB
GPU 2: A100-SXM4-40GB
GPU 3: A100-SXM4-40GB
Nvidia driver version: 460.32.03
cuDNN version: Could not collect
| Thank you for your answer. Before trying your answer, I decided to uninstall anaconda and reinstall it and this solved the problem.
| https://stackoverflow.com/questions/67346102/ |
pytorch change input image size | I am new to pytorch and I am following a tutorial, but when I try to modify the code to use 64x64x3 images instead of 32x32x3 images, I get a bunch of errors. Here is the code from the tutorial:
import torch
from torch.utils.data import DataLoader
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Resize(32),
transforms.RandomCrop(32),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = ImageFolder("Train", transform=transform)
trainloader = DataLoader(trainset, shuffle=True, batch_size=batch_size, num_workers=0)
classes = ('Dog', 'Cat')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
print("training started")
from tqdm import tqdm
for epoch in range(5): # loop over the dataset multiple times
running_loss = 0.0
for i, data in tqdm(enumerate(trainloader, 0), desc=f"epoch: {epoch + 1}"):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
PATH = './net.pth'
torch.save(net.state_dict(), PATH)
If I change transforms.Resize(32) and transforms.RandomCrop(32) to 64 (to get 64x64x3 images), I get this error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\Documents\pyth\classifier\train_classifier.py in <module>
86
87 # forward + backward + optimize
---> 88 outputs = net(inputs)
89 loss = criterion(outputs, labels)
90 loss.backward()
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\Documents\pyth\classifier\train_classifier.py in forward(self, x)
57 x = self.pool(F.relu(self.conv1(x)))
58 x = self.pool(F.relu(self.conv2(x)))
---> 59 x = x.view(-1, 10816+1)
60 x = F.relu(self.fc1(x))
61 x = F.relu(self.fc2(x))
RuntimeError: shape '[-1, 10817]' is invalid for input of size 10816
and if I try to change the parameters of x.view(...) I get this error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\Documents\pyth\classifier\train_classifier.py in <module>
86
87 # forward + backward + optimize
---> 88 outputs = net(inputs)
89 loss = criterion(outputs, labels)
90 loss.backward()
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\Documents\pyth\classifier\train_classifier.py in forward(self, x)
58 x = self.pool(F.relu(self.conv2(x)))
59 x = x.view(-1, 16 * 2 * 5 * 5)
---> 60 x = F.relu(self.fc1(x))
61 x = F.relu(self.fc2(x))
62 x = self.fc3(x)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
92
93 def forward(self, input: Tensor) -> Tensor:
---> 94 return F.linear(input, self.weight, self.bias)
95
96 def extra_repr(self) -> str:
~\Anaconda3\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x800 and 400x120)
| I think this should work, because after the 2nd pooling operation the output feature map comes out as N x C x 13 x 13:
self.fc1 = nn.Linear(16 * 13 * 13, 120)
x = x.view(-1, 16 * 13 * 13)
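For reference, the shape arithmetic behind the 13 (assuming 64x64 inputs, kernel size 5 with no padding, and 2x2 max pooling) is:
# 64x64 input -> Conv2d(3, 6, 5):  64 - 5 + 1 = 60  -> 60x60
# MaxPool2d(2, 2):                 60 / 2     = 30  -> 30x30
# Conv2d(6, 16, 5):                30 - 5 + 1 = 26  -> 26x26
# MaxPool2d(2, 2):                 26 / 2     = 13  -> 13x13
# so the flattened size is 16 * 13 * 13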
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 13 * 13, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 13 * 13)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| https://stackoverflow.com/questions/67355392/ |
RuntimeError: Given groups=1, weight of size [32, 1, 5, 5], expected input[256, 3, 256, 256] to have 1 channels, but got 3 channels instead | I am trying to run following code but getting an error:
import torch.nn as nn
import torch.nn.functional as F
class EmbeddingNet(nn.Module):
def __init__(self):
super(EmbeddingNet, self).__init__()
self.convnet = nn.Sequential(nn.Conv2d(1, 32, 5), nn.PReLU(),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(32, 64, 5), nn.PReLU(),
nn.MaxPool2d(2, stride=2))
self.fc = nn.Sequential(nn.Linear(64 * 4 * 4, 256),
nn.PReLU(),
nn.Linear(256, 256),
nn.PReLU(),
nn.Linear(256, 2)
)
def forward(self, x):
output = self.convnet(x)
output = output.view(output.size()[0], -1)
output = self.fc(output)
return output
def get_embedding(self, x):
return self.forward(x)
class EmbeddingNetL2(EmbeddingNet):
def __init__(self):
super(EmbeddingNetL2, self).__init__()
def forward(self, x):
output = super(EmbeddingNetL2, self).forward(x)
output /= output.pow(2).sum(1, keepdim=True).sqrt()
return output
def get_embedding(self, x):
return self.forward(x)'''enter code here
| The error is very simple. It's saying that instead of 1-channel images you have given it 3-channel images.
One change would be in this block:
class EmbeddingNet(nn.Module):
def __init__(self):
super(EmbeddingNet, self).__init__()
self.convnet = nn.Sequential(nn.Conv2d(3, 32, 5), #instead of 1 i have made it 3
nn.PReLU(),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(32, 64, 5), nn.PReLU(),
nn.MaxPool2d(2, stride=2))
self.fc = nn.Sequential(nn.Linear(64 * 4 * 4, 256),
nn.PReLU(),
nn.Linear(256, 256),
nn.PReLU(),
nn.Linear(256, 2)
)
EDIT to next error:
Change to this:
self.fc = nn.Sequential(nn.Linear(64 * 61 * 61, 256), #here is the change
nn.PReLU(),
nn.Linear(256, 256),
nn.PReLU(),
nn.Linear(256, 2)
)
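For reference, the 61 follows from the same kind of shape arithmetic, using the 256x256 input size shown in the error message (kernel size 5, no padding, 2x2 pooling):
# 256x256 input -> Conv2d(3, 32, 5): 256 - 5 + 1 = 252 -> 252x252
# MaxPool2d(2, stride=2):            252 / 2     = 126 -> 126x126
# Conv2d(32, 64, 5):                 126 - 5 + 1 = 122 -> 122x122
# MaxPool2d(2, stride=2):            122 / 2     = 61  -> 61x61
# so the flattened size is 64 * 61 * 61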
| https://stackoverflow.com/questions/67360787/ |
Decreasing number of nodes each layers in torch.nn.lstm | Is there an easy way to decrease the number of nodes in each layer by a factor? I don't see this option on the documentation page, perhaps there is a similar function I can use though instead of manually defining each layer?
self.lstm = nn.LSTM(
input_size=input_size,
hidden_size=hidden_size,
num_layers=num_layers,
batch_first=True,
dropout=0.2,
) # lstm
| Not that I know of, but writing it from scratch is straightforward:
def _constant_scale(initial: int, factor: int) -> int:
return initial//factor
class StackedLSTM(Module):
def __init__(self, input_size: int, hidden_sizes: list[int], *args, **kwargs):
super(StackedLSTM, self).__init__()
self.layers = ModuleList([LSTM(input_size=xs, hidden_size=hs, *args, **kwargs) for xs, hs in zip([input_size] + hidden_sizes, hidden_sizes)])
def forward(self, x: Tensor, hc: Optional[tuple[Tensor, Tensor]] = None) -> Tensor:
for layer in self.layers:
x, _ = layer(x, hc)
hc = None
return x
hidden_sizes = [_constant_scale(300, 2**i) for i in range(3)]
sltm = StackedLSTM(100, hidden_sizes)
x = torch.rand(10, 32, 100)
h = torch.rand(1, 32, 300)
c = torch.rand(1, 32, 300)
out = sltm(x, (h, c))
print(out.shape)
# torch.Size([10, 32, 75])
| https://stackoverflow.com/questions/67363579/ |
How to sample similar vectors given a vector and cosine similarity in pytorch? | I have this vector
>>> vec
tensor([[0.2677, 0.1158, 0.5954, 0.9210, 0.3622, 0.4081, 0.4477, 0.7930, 0.1161,
0.5111, 0.2010, 0.3680, 0.1162, 0.1563, 0.4478, 0.9732, 0.7962, 0.0873,
0.9793, 0.9382, 0.9468, 0.0851, 0.7601, 0.0322, 0.7553, 0.4025, 0.3627,
0.5706, 0.3015, 0.1344, 0.8343, 0.8187, 0.4287, 0.5785, 0.9527, 0.1632,
0.2890, 0.5411, 0.5319, 0.7163, 0.3166, 0.5717, 0.5018, 0.5368, 0.3321]])
using this vector I want to generate 15 vectors which have a cosine similarity greater than 80%.
How can I do this in pytorch?
| I modified the answer here, adding an extra dimension and converting from numpy to torch.
def torch_cos_sim(v,cos_theta,n_vectors = 1,EXACT = True):
"""
EXACT - if True, all vectors will have exactly cos_theta similarity.
if False, all vectors will have >= cos_theta similarity
v - original vector (1D tensor)
cos_theta -cos similarity in range [-1,1]
"""
# unit vector in direction of v
u = v / torch.norm(v)
u = u.unsqueeze(0).repeat(n_vectors,1)
# random vector with elements in range [-1,1]
r = torch.rand([n_vectors,len(v)])*2 -1
# unit vector perpendicular to v and u
uperp = torch.stack([r[i] - (torch.dot(r[i],u[i]) * u[i]) for i in range(len(u))])
uperp = uperp/ (torch.norm(uperp,dim = 1).unsqueeze(1).repeat(1,v.shape[0]))
if not EXACT:
cos_theta = torch.rand(n_vectors)* (1-cos_theta) + cos_theta
cos_theta = cos_theta.unsqueeze(1).repeat(1,v.shape[0])
# w is the linear combination of u and uperp with coefficients costheta
# and sin(theta) = sqrt(1 - costheta**2), respectively:
w = cos_theta*u + torch.sqrt(1 - torch.tensor(cos_theta)**2)*uperp
return w
You can check the output with:
vec = torch.rand(54)
output = torch_cos_sim(vec,0.6,n_vectors = 15, EXACT = False)
# test cos similarity
for item in output:
print(torch.dot(vec,item)/(torch.norm(vec)*torch.norm(item)))
| https://stackoverflow.com/questions/67370107/ |
Convert keras model architecture to Pytorch | I am trying to convert a Keras model to PyTorch for human activity recognition. The Keras model could achieve up to 98% accuracy, while the PyTorch model could achieve only ~60% accuracy. I couldn't figure out the problem; at first I suspected Keras's padding='same', but I have already adjusted the padding in PyTorch. Can you check what is wrong?
The Keras code is as below:
model = keras.Sequential()
model.add(layers.Input(shape=[100,12]))
model.add(layers.Conv1D(filters=32, kernel_size=3, padding="same"))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())
model.add(layers.Conv1D(filters=64, kernel_size=3, padding="same"))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())
model.add(layers.MaxPool1D(2))
model.add(layers.LSTM(64))
model.add(layers.Dense(units=128, activation='relu'))
model.add(layers.Dense(13, activation='softmax'))
model.summary()
and my pytorch model code is as below
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
seq_len = 100
# output: [, 32, 100]
self.conv1 = nn.Conv1d(seq_len, 32, kernel_size=3, stride=1, padding=1)
self.bn1 = nn.BatchNorm1d(32)
# output: [, 64, 100]
self.conv2 = nn.Conv1d(32, 64, kernel_size=3, padding=1)
self.bn2 = nn.BatchNorm1d(64)
# output: [, 64, 50]
self.mp = nn.MaxPool1d(kernel_size=2, stride=2)
# output: [, 64]
self.lstm = nn.LSTM(6, 64, 1)
# output: [, 128]
self.fc1 = nn.Linear(64, 128)
# output: [, 13]
self.fc2 = nn.Linear(128, 13)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = F.relu(x)
x = self.mp(x)
out, _ = self.lstm(x)
x = out[:, -1, :]
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
| Since it was too lengthy to write in the comments, I am writing it in the answer. After testing your PyTorch architecture for shapes with a random tensor of the size you mentioned, torch.randn(1, 100, 12) (NCH format).
This is the outcome:
input= torch.Size([1, 100, 12])
1st Conv= torch.Size([1, 32, 12])
1st batchNorm= torch.Size([1, 32, 12])
1st relu= torch.Size([1, 32, 12])
2nd Conv= torch.Size([1, 64, 12])
2nd batchnorm= torch.Size([1, 64, 12])
2nd relu= torch.Size([1, 64, 12])
1st maxPool= torch.Size([1, 64, 6])
LSTM= torch.Size([1, 64])
1st FC= torch.Size([1, 128])
3rd relu= torch.Size([1, 128])
2nd FC= torch.Size([1, 13])
Your network is receiving the 100 as channels, and as you mentioned in the comments, after the 1st convolution in Keras the shape is [batch_size, 100, 32], but in torch it changes to [batch_size, 32, 12].
Change to this:
def __init__(self):
super(CNN, self).__init__()
in_channels = 12
# output: [, 32, 100]
self.conv1 = nn.Conv1d(in_channels, 32, kernel_size=3, stride=1, padding=1)
self.bn1 = nn.BatchNorm1d(32)
# output: [, 64, 100]
self.conv2 = nn.Conv1d(32, 64, kernel_size=3, padding=1)
self.bn2 = nn.BatchNorm1d(64)
# output: [, 64, 50]
self.mp = nn.MaxPool1d(kernel_size=2, stride=2)
# output: [, 64]
self.lstm = nn.LSTM(50, 64, 1)
# output: [, 128]
self.fc1 = nn.Linear(64, 128)
# output: [, 13]
self.fc2 = nn.Linear(128, 13)
self.softmax = nn.Softmax()
The output for this will be:
input= torch.Size([1, 12, 100])
1st Conv= torch.Size([1, 32, 100])
1st batchNorm= torch.Size([1, 32, 100])
1st relu= torch.Size([1, 32, 100])
2nd Conv= torch.Size([1, 64, 100])
2nd batchnorm= torch.Size([1, 64, 100])
2nd relu= torch.Size([1, 64, 100])
1st maxPool= torch.Size([1, 64, 50])
LSTM= torch.Size([1, 64])
1st FC= torch.Size([1, 128])
3rd relu= torch.Size([1, 128])
2nd FC= torch.Size([1, 13])
| https://stackoverflow.com/questions/67370134/ |
Can't do lazy loading with allennlp | Currently I'm trying to implement lazy loading with allennlp, but can't.
My code is as follows.
def biencoder_training():
params = BiEncoderExperiemntParams()
config = params.opts
reader = SmallJaWikiReader(config=config)
# Loading Datasets
train, dev, test = reader.read('train'), reader.read('dev'), reader.read('test')
vocab = build_vocab(train)
vocab.extend_from_instances(dev)
# TODO: avoid memory consumption and lazy loading
train, dev, test = list(reader.read('train')), list(reader.read('dev')), list(reader.read('test'))
train_loader, dev_loader, test_loader = build_data_loaders(config, train, dev, test)
train_loader.index_with(vocab)
dev_loader.index_with(vocab)
embedder = emb_returner()
mention_encoder, entity_encoder = Pooler_for_mention(word_embedder=embedder), \
Pooler_for_cano_and_def(word_embedder=embedder)
model = Biencoder(mention_encoder, entity_encoder, vocab)
trainer = build_trainer(lr=config.lr,
num_epochs=config.num_epochs,
model=model,
train_loader=train_loader,
dev_loader=dev_loader)
trainer.train()
return model
When I comment out train, dev, test = list(reader.read('train')), list(reader.read('dev')), list(reader.read('test')), the iterator doesn't work and training is conducted with 0 samples.
Building the vocabulary
100it [00:00, 442.15it/s]01, 133.57it/s]
building vocab: 100it [00:01, 95.84it/s]
100it [00:00, 413.40it/s]
100it [00:00, 138.38it/s]
You provided a validation dataset but patience was set to None, meaning that early stopping is disabled
0it [00:00, ?it/s]
0it [00:00, ?it/s]
I'd like to know if there is any solution to avoid this.
Thanks.
Supplement, added on the fifth of May.
Currently I am trying to avoid loading all of the sample data into memory before training the model.
So I have implemented the _read method as a generator. My understanding is that by calling this method and wrapping it with SimpleDataLoader, I can actually pass the data to the model.
In the DatasetReader, the code for the _read method looks like this. It is my understanding that this is intended to be a generator that avoids memory consumption.
@overrides
def _read(self, train_dev_test_flag: str) -> Iterator[Instance]:
'''
:param train_dev_test_flag: 'train', 'dev', 'test'
:return: list of instances
'''
if train_dev_test_flag == 'train':
dataset = self._train_loader()
random.shuffle(dataset)
elif train_dev_test_flag == 'dev':
dataset = self._dev_loader()
elif train_dev_test_flag == 'test':
dataset = self._test_loader()
else:
raise NotImplementedError(
"{} is not a valid flag. Choose from train, dev and test".format(train_dev_test_flag))
if self.config.debug:
dataset = dataset[:self.config.debug_data_num]
for data in tqdm(enumerate(dataset)):
data = self._one_line_parser(data=data, train_dev_test_flag=train_dev_test_flag)
yield self.text_to_instance(data)
Also, build_data_loaders actually looks like this.
def build_data_loaders(config,
train_data: List[Instance],
dev_data: List[Instance],
test_data: List[Instance]) -> Tuple[DataLoader, DataLoader, DataLoader]:
train_loader = SimpleDataLoader(train_data, config.batch_size_for_train, shuffle=False)
dev_loader = SimpleDataLoader(dev_data, config.batch_size_for_eval, shuffle=False)
test_loader = SimpleDataLoader(test_data, config.batch_size_for_eval, shuffle=False)
return train_loader, dev_loader, test_loader
But, for some reason I don't know, this code doesn't work.
def biencoder_training():
params = BiEncoderExperiemntParams()
config = params.opts
reader = SmallJaWikiReader(config=config)
# Loading Datasets
train, dev, test = reader.read('train'), reader.read('dev'), reader.read('test')
vocab = build_vocab(train)
vocab.extend_from_instances(dev)
train_loader, dev_loader, test_loader = build_data_loaders(config, train, dev, test)
train_loader.index_with(vocab)
dev_loader.index_with(vocab)
embedder = emb_returner()
mention_encoder, entity_encoder = Pooler_for_mention(word_embedder=embedder), \
Pooler_for_cano_and_def(word_embedder=embedder)
model = Biencoder(mention_encoder, entity_encoder, vocab)
trainer = build_trainer(lr=config.lr,
num_epochs=config.num_epochs,
model=model,
train_loader=train_loader,
dev_loader=dev_loader)
trainer.train()
return model
In this code, the SimpleDataLoader will wrap the generator type as it is. I would like to do the lazy loading that allennlp supported in version 0.9.
But this code iterates training over 0 instances, so currently I have added
train, dev, test = list(reader.read('train')), list(reader.read('dev')), list(reader.read('test'))
before
train_loader, dev_loader, test_loader = build_data_loaders(config, train, dev, test).
And it works. But this means that I can't train or evaluate the model until I have all the instances in memory. Rather, I want each batch to be called into memory only when it is time to train.
| The SimpleDataLoader is not capable of lazy loading. You should use the MultiProcessDataLoader instead. Setting max_instances_in_memory to a non-zero integer (usually some multiple of your batch size) will trigger lazy loading.
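A minimal sketch of what that could look like in your setup (assuming allennlp 2.x; check the argument names against the version you use):
from allennlp.data.data_loaders import MultiProcessDataLoader

train_loader = MultiProcessDataLoader(
    reader,
    'train',                                    # the same path/flag your _read() expects
    batch_size=config.batch_size_for_train,
    max_instances_in_memory=8 * config.batch_size_for_train,  # non-zero -> lazy loading
)
train_loader.index_with(vocab)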
| https://stackoverflow.com/questions/67378820/ |
Why is this tensorflow training taking so long? | I'm learning DRL with the book Deep Reinforcement Learning in Action. In chapter 3, they present the simple game Gridworld (instructions here, in the rules section) with the corresponding code in PyTorch.
I've experimented with the code and it takes less than 3 minutes to train the network with 89% of wins (won 89 of 100 games after training).
As an exercise, I have migrated the code to tensorflow. All the code is here.
The problem is that with my tensorflow port it takes near 2 hours to train the network with a win rate of 84%. Both versions are using the only CPU to train (I don't have GPU)
Training loss figures seem correct and also the rate of a win (we have to take into consideration that the game is random and can have impossible states). The problem is the performance of the overall process.
I'm doing something terribly wrong, but what?
The main differences are in the training loop, in torch is this:
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
....
Q1 = model(state1_batch)
with torch.no_grad():
Q2 = model2(state2_batch) #B
Y = reward_batch + gamma * ((1-done_batch) * torch.max(Q2,dim=1)[0])
X = Q1.gather(dim=1,index=action_batch.long().unsqueeze(dim=1)).squeeze()
loss = loss_fn(X, Y.detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()
and in the tensorflow version:
loss_fn = tf.keras.losses.MSE
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
...
Q2 = model2(state2_batch) #B
with tf.GradientTape() as tape:
Q1 = model(state1_batch)
Y = reward_batch + gamma * ((1-done_batch) * tf.math.reduce_max(Q2, axis=1))
X = [Q1[i][action_batch[i]] for i in range(len(action_batch))]
loss = loss_fn(X, Y)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
Why is the training taking so long?
| Why is TensorFlow slow
TensorFlow has 2 execution modes: eager execution, and graph mode. TensorFlow default behavior, since version 2, is to default to eager execution. Eager execution is great as it enables you to write code close to how you would write standard python. It's easier to write, and it's easier to debug. Unfortunately, it's really not as fast as graph mode.
So the idea is, once the function is prototyped in eager mode, to make TensorFlow execute it in graph mode. For that you can use tf.function. tf.function compiles a callable into a TensorFlow graph. Once the function is compiled into a graph, the performance gain is usually quite important. The recommended approach when developing in TensorFlow is the following:
Debug in eager mode, then decorate with @tf.function.
Don't rely on Python side effects like object mutation or list appends.
tf.function works best with TensorFlow ops; NumPy and Python calls are converted to constants.
I would add: think about the critical parts of your program, and which ones should be converted first into graph mode. It's usually the parts where you call a model to get a result. It's where you will see the best improvements.
You can find more information in the following guides:
Better performance with tf.function
Introduction to graphs and tf.function
Applying tf.function to your code
So, there are at least two things you can change in your code to make it run quite faster:
The first one is to not use model.predict on a small amount of data. The function is made to work on a huge dataset or on a generator. (See this comment on Github). Instead, you should call the model directly, and for performance enhancement, you can wrap the call to the model in a tf.function.
Model.predict is a top-level API designed for batch-predicting outside of any loops, with the fully-features of the Keras APIs.
The second one is to make your training step a separate function, and to decorate that function with @tf.function.
So, I would declare the following things before your training loop:
# to call instead of model.predict
model_func = tf.function(model)
def get_train_func(model, model2, loss_fn, optimizer):
"""Wrapper that creates a train step using the two model passed"""
@tf.function
def train_func(state1_batch, state2_batch, done_batch, reward_batch, action_batch):
Q2 = model2(state2_batch) #B
with tf.GradientTape() as tape:
Q1 = model(state1_batch)
Y = reward_batch + gamma * ((1-done_batch) * tf.math.reduce_max(Q2, axis=1))
# gather is more efficient than a list comprehension, and needed in a tf.function
X = tf.gather(Q1, action_batch, batch_dims=1)
loss = loss_fn(X, Y)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
return train_func
# train step is a callable
train_step = get_train_func(model, model2, loss_fn, optimizer)
And you can use that function in your training loop:
if len(replay) > batch_size:
minibatch = random.sample(replay, batch_size)
state1_batch = np.array([s1 for (s1,a,r,s2,d) in minibatch]).reshape((batch_size, 64))
action_batch = np.array([a for (s1,a,r,s2,d) in minibatch]) #TODO: Posibles diferencies
reward_batch = np.float32([r for (s1,a,r,s2,d) in minibatch])
state2_batch = np.array([s2 for (s1,a,r,s2,d) in minibatch]).reshape((batch_size, 64))
done_batch = np.array([d for (s1,a,r,s2,d) in minibatch]).astype(np.float32)
loss = train_step(state1_batch, state2_batch, done_batch, reward_batch, action_batch)
losses.append(loss)
There are other changes that you could make to make your code more TensorFlow-esque, but with those modifications, your code takes ~2 minutes on my CPU (with a 97% win rate).
| https://stackoverflow.com/questions/67383458/ |
Sequential network with the VGG layers | I want to have a sequential network with the characteristics of a VGG network (I want to pass my network to another function, which doesn't support VGG objects but supports nn.Sequential).
I added a getSequentialVersion method to the VGG class to get the sequential network with the linear layer. However, apparently, there is a size mismatch in the network.
'''VGG for CIFAR10. FC layers are removed.
(c) YANG, Wei
'''
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
import math
__all__ = [
'VGG','vgg16_bn',
]
model_urls = {
'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
}
class VGG(nn.Module):
def __init__(self, features, num_classes=1000, cfg_type=None, batch_norm=False, **kwargs):
super(VGG, self).__init__()
self.features = features
self.classifier = nn.Linear(512, num_classes)
self._initialize_weights()
self.cfg_type = cfg_type
self.batch_norm = batch_norm
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
n = m.weight.size(1)
m.weight.data.normal_(0, 0.01)
m.bias.data.zero_()
def getSequentialVersion(self):
return make_layers(cfg[self.cfg_type], batch_norm=self.batch_norm, flag=True)
def make_layers(cfg, batch_norm=False, flag=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1, bias=False)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
if flag:
#for Cifar10
layers += [nn.Linear(512, 10)]
return nn.Sequential(*layers)
cfg = {
'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
def vgg16_bn(**kwargs):
"""VGG 16-layer model (configuration "D") with batch normalization"""
print("VGG16-bn")
model = VGG(make_layers(cfg['D'], batch_norm=True), cfg_type='D', batch_norm=True,**kwargs)
return model
When I call summary(net, ( 3, 32, 32)) (for cifar10), I get the mismatch error. In other words, the main problem is when I add this line layers+= [nn.linear(512, 10)].
Can anyone help me? Thanks a lot.
The error message:
File "./main.py", line 284, in <module>
summary(net, ( 3, 32, 32))
File "./anaconda3/envs/my_env/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 72, in summary
model(*x)
File ".anaconda3/envs/my_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "./anaconda3/envs/my_env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "./anaconda3/envs/my_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "./anaconda3/envs/my_env/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "./envs/my_env/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 dim 1 must match mat2 dim 0
Additional information:
This is how exactly I initialize and use my network:
net = vgg16_bn(depth=args.depth,
num_classes=num_classes,
growthRate=args.growthRate,
compressionRate=args.compressionRate,
widen_factor=args.widen_factor,
dropRate=args.dropRate,
base_width=args.base_width,
cardinality=args.cardinality).getSequentialVersion()
net = net.to(args.device)
module_names = ''
if hasattr(net, 'features'):
module_names = 'features'
elif hasattr(net, 'children'):
module_names = 'children'
else:
print('unknown net modules...')
summary(net, ( 3, 32, 32))
| The problem is quite simple. When flag=True (as in getSequentialVersion()), there's a missing Flatten operation. Therefore, to fix the problem, you need to add this operation like this:
if flag:
# for Cifar10
layers += [nn.Flatten(), nn.Linear(512, 10)] # <<< add Flatten before Linear
In the forward call, you can see the flatten in its view form:
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1) # here, equivalent to torch.flatten(x, 1)
x = self.classifier(x)
return x
and this is what was missing when you were transforming the layers to Sequential.
| https://stackoverflow.com/questions/67385342/ |
Pytorch custom CUDA extension build fails for torch 1.6.0 or higher | I have a custom CUDA extension for pytorch (https://pytorch.org/tutorials/advanced/cpp_extension.html), which used to work fine with pytorch1.4, CUDA10.1, and Titan Xp GPUs. However, recently we changed our system to new A40 GPUs and CUDA11.1. When I try to build my custom pytorch extension using CUDA11.1, pytorch 1.8.1, gcc 9.3.0, and Ubuntu 20.04 I get the following errors:
$ python3 setup.py install
running install
running bdist_egg
running egg_info
creating cuda_test.egg-info
writing cuda_test.egg-info/PKG-INFO
writing dependency_links to cuda_test.egg-info/dependency_links.txt
writing top-level names to cuda_test.egg-info/top_level.txt
writing manifest file 'cuda_test.egg-info/SOURCES.txt'
reading manifest file 'cuda_test.egg-info/SOURCES.txt'
writing manifest file 'cuda_test.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'cuda_test' extension
creating /path/to/code/cuda/test/build
creating /path/to/code/cuda/test/build/temp.linux-x86_64-3.7
Emitting ninja build file /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] /cm/shared/apps/cuda11.1/toolkit/11.1.1/bin/nvcc --generate-dependencies-with-compile --dependency-output /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/test_cuda.o.d -I/path/to/code/venv/lib/python3.7/site-packages/torch/include -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/TH -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/THC -I/cm/shared/apps/cuda11.1/toolkit/11.1.1/include -I/path/to/code/venv/include/python3.7m -c -c /path/to/code/cuda/test/test_cuda.cu -o /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/test_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cuda_test -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
FAILED: /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/test_cuda.o
/cm/shared/apps/cuda11.1/toolkit/11.1.1/bin/nvcc --generate-dependencies-with-compile --dependency-output /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/test_cuda.o.d -I/path/to/code/venv/lib/python3.7/site-packages/torch/include -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/TH -I/path/to/code/venv/lib/python3.7/site-packages/torch/include/THC -I/cm/shared/apps/cuda11.1/toolkit/11.1.1/include -I/path/to/code/venv/include/python3.7m -c -c /path/to/code/cuda/test/test_cuda.cu -o /path/to/code/cuda/test/build/temp.linux-x86_64-3.7/test_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=cuda_test -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/arithmetic.h(256): error: identifier "FLT_MIN" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/arithmetic.h(274): error: identifier "DBL_MIN" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(190): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(228): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(243): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(293): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(406): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(498): error: identifier "DBL_MAX" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(562): error: identifier "DBL_MAX_EXP" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(565): error: identifier "DBL_MANT_DIG" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrig.h(630): error: identifier "DBL_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(119): error: identifier "FLT_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(137): error: identifier "FLT_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(147): error: identifier "FLT_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(170): error: identifier "FLT_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(249): error: identifier "FLT_EPSILON" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(327): error: identifier "FLT_MAX" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(375): error: identifier "FLT_MAX_EXP" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(377): error: identifier "FLT_MANT_DIG" is undefined
/cm/shared/apps/cuda11.1/toolkit/11.1.1/include/thrust/detail/complex/catrigf.h(420): error: identifier "FLT_EPSILON" is undefined
I also wrote a simple test code to verify that my larger CPP/CUDA code isn't the culprit, which produced the same error messages. I also checked if arithmetic.h and catrig.h include <cfloat>, which should provide the {FLT,DBL}_{MIN,MAX,EPSILON,MANT_DIG} definitions but this looks all normal, since it's standard NVIDIA code.
Let me know if anyone has encountered a similar problem or would know a solution.
---- UPDATE ----
Here are a couple of more things that I've tried:
The CUDA code compiles when I use CUDA10.1, pytorch 1.4.0, gcc 9.3.0, and Ubuntu 20.04.
Using pytorch 1.5.1 instead generates the following error:
/usr/include/c++/9/bits/stl_function.h(437): error: identifier "__builtin_is_constant_evaluated" is undefined
but this can be solved by downgrading gcc to version 7.5.
Using pytorch 1.6.0 or higher instead always results in the errors reported in the beginning, even when using gcc-7.
I found the issue. The Intel MKL module wasn't loaded properly, which caused the error. After fixing this, the compilation worked just fine with CUDA 11.1 and PyTorch 1.8.1 as well!
| https://stackoverflow.com/questions/67386709/ |
numpy equivalent code of unsqueeze and expand from torch tensor method | I have these 2 tensors
box_a = torch.randn(1,4)
box_b = torch.randn(1,4)
and I have this code in PyTorch:
box_a[:, 2:].unsqueeze(1).expand(1, 1, 2)
but I want to convert the above code to NumPy.
For box_a and box_b I can do something like this:
box_a = numpy.random.randn(1,4)
box_b = numpy.random.randn(1,4)
But what about this
box_a[:, 2:].unsqueeze(1).expand(1, 1, 2)
| solved it
box_a = np.random.randn(1,4)
box_b = np.random.randn(1,4)
max_xy = np.broadcast_to(np.expand_dims(box_a[:, 2:],axis=1),(1,1,2))
| https://stackoverflow.com/questions/67387383/ |
How do I one hot encode along a specific dimension using PyTorch? | I have a tensor of size [3, 15, 136], where:
3 is the batch size,
15 is the sequence length, and
136 is the number of tokens.
I want to one-hot my tensor using the probabilities in the tokens dimension (136). To do so, for each position in the sequence I want to put a 1 at the index of the largest probability and mark all other tokens as 0.
| You can use PyTorch's one_hot function to achieve this:
import torch.nn.functional as F
t = torch.rand(3, 15, 136)
F.one_hot(t.argmax(dim=2), 136)
| https://stackoverflow.com/questions/67387722/ |
Attention weighted aggregation | Let the tensor shown below be the representation of two sentences (batch_size = 2) composed of 3 words each (max_length = 3), with each word represented by a vector of dimension 5 (hidden_size = 5), obtained as output from a neural network:
net_output
# tensor([[[0.7718, 0.3856, 0.2545, 0.7502, 0.5844],
# [0.4400, 0.3753, 0.4840, 0.2483, 0.4751],
# [0.4927, 0.7380, 0.1502, 0.5222, 0.0093]],
# [[0.5859, 0.0010, 0.2261, 0.6318, 0.5636],
# [0.0996, 0.2178, 0.9003, 0.4708, 0.7501],
# [0.4244, 0.7947, 0.5711, 0.0720, 0.1106]]])
Also consider the following attention scores:
att_scores
# tensor([[0.2425, 0.5279, 0.2295],
# [0.2461, 0.4789, 0.2751]])
What efficient approach allows obtaining the aggregation of the vectors in net_output, weighted by att_scores, resulting in a tensor of shape (2, 5)?
| This should work:
weighted = (net_output * att_scores[..., None]).sum(axis = 1)
This uses broadcasting to multiply the attention weights elementwise with each vector and then aggregates (by summing) all the vectors in a batch.
| https://stackoverflow.com/questions/67389071/ |
issue with calculating accuracy | I'm using TorchMetrics to try to calculate the accuracy of my model, but I'm getting this error. I tried using .to(device="cuda:0") but got a CUDA initialization error. I also tried using .cuda(), but that didn't work either. I'm using PyTorch Lightning with a Titan Xp GPU. I'm using a Mish activation function with the MovieLens dataset.
code:
# %% [markdown]
# # Data Preprocessing
#
# Before we start building and training our model, let's do some preprocessing to get the data in the required format.
# %% [code] {"_kg_hide-input":true,"_kg_hide-output":true}
import pandas as pd
import numpy as np
from tqdm.notebook import tqdm
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import pytorch_lightning as pl
import torch.nn.functional as F
from pytorch_lightning.callbacks import EarlyStopping
import wandb
import torchmetrics
wandb.init(project="Mocean-Recommendor",entity="maxall4")
config = wandb.config
def mish(x):
return (x*torch.tanh(F.softplus(x)))
np.random.seed(123)
# %% [markdown]
# First, we import the ratings dataset.
# %% [code]
ratings = pd.read_csv('rating.csv',
parse_dates=['timestamp'])
# %% [markdown]
# In order to keep memory usage manageable within Kaggle's kernel, we will only use data from 30% of the users in this dataset. Let's randomly select 30% of the users and only use data from the selected users.
# %% [code]
rand_userIds = np.random.choice(ratings['userId'].unique(),
size=int(len(ratings['userId'].unique())*0.3),
replace=False)
ratings = ratings.loc[ratings['userId'].isin(rand_userIds)]
print('There are {} rows of data from {} users'.format(len(ratings), len(rand_userIds)))
# %% [code]
ratings.sample(5)
# %% [code]
ratings['rank_latest'] = ratings.groupby(['userId'])['timestamp'] \
.rank(method='first', ascending=False)
train_ratings = ratings[ratings['rank_latest'] != 1]
test_ratings = ratings[ratings['rank_latest'] == 1]
# drop columns that we no longer need
train_ratings = train_ratings[['userId', 'movieId', 'rating']]
test_ratings = test_ratings[['userId', 'movieId', 'rating']]
# %% [markdown]
# ### Converting the dataset into an implicit feedback dataset
# %% [code]
train_ratings.loc[:, 'rating'] = 1
train_ratings.sample(5)
# %% [markdown]
# The code below generates 4 negative samples for each row of data. In other words, the ratio of negative to positive samples is 4:1. This ratio is chosen arbitrarily but I found that it works rather well (feel free to find the best ratio yourself!)
# %% [code]
# Get a list of all movie IDs
all_movieIds = ratings['movieId'].unique()
# Placeholders that will hold the training data
users, items, labels = [], [], []
# This is the set of items that each user has interaction with
user_item_set = set(zip(train_ratings['userId'], train_ratings['movieId']))
# 4:1 ratio of negative to positive samples
num_negatives = 4
for (u, i) in tqdm(user_item_set):
users.append(u)
items.append(i)
labels.append(1) # items that the user has interacted with are positive
for _ in range(num_negatives):
# randomly select an item
negative_item = np.random.choice(all_movieIds)
# check that the user has not interacted with this item
while (u, negative_item) in user_item_set:
negative_item = np.random.choice(all_movieIds)
users.append(u)
items.append(negative_item)
labels.append(0) # items not interacted with are negative
# %% [code]
class MovieLensTrainDataset(Dataset):
"""MovieLens PyTorch Dataset for Training
Args:
ratings (pd.DataFrame): Dataframe containing the movie ratings
all_movieIds (list): List containing all movieIds
"""
def __init__(self, ratings, all_movieIds):
self.users, self.items, self.labels = self.get_dataset(ratings, all_movieIds)
def __len__(self):
return len(self.users)
def __getitem__(self, idx):
return self.users[idx], self.items[idx], self.labels[idx]
def get_dataset(self, ratings, all_movieIds):
users, items, labels = [], [], []
user_item_set = set(zip(ratings['userId'], ratings['movieId']))
num_negatives = 4
for u, i in user_item_set:
users.append(u)
items.append(i)
labels.append(1)
for _ in range(num_negatives):
negative_item = np.random.choice(all_movieIds)
while (u, negative_item) in user_item_set:
negative_item = np.random.choice(all_movieIds)
users.append(u)
items.append(negative_item)
labels.append(0)
return torch.tensor(users), torch.tensor(items), torch.tensor(labels)
# %% [code]
acc_metric = torchmetrics.Accuracy()
class NCF(pl.LightningModule):
""" Neural Collaborative Filtering (NCF)
Args:
num_users (int): Number of unique users
num_items (int): Number of unique items
ratings (pd.DataFrame): Dataframe containing the movie ratings for training
all_movieIds (list): List containing all movieIds (train + test)
"""
def __init__(self, num_users, num_items, ratings, all_movieIds):
super().__init__()
self.user_embedding = nn.Embedding(num_embeddings=num_users, embedding_dim=8)
self.item_embedding = nn.Embedding(num_embeddings=num_items, embedding_dim=8)
self.fc1 = nn.Linear(in_features=16, out_features=64)
self.fc2 = nn.Linear(in_features=64, out_features=32)
self.output = nn.Linear(in_features=32, out_features=1)
self.ratings = ratings
self.all_movieIds = all_movieIds
def on_validation_end(self,outputs):
loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return { 'loss' : loss }
def forward(self, user_input, item_input):
# Pass through embedding layers
user_embedded = self.user_embedding(user_input)
item_embedded = self.item_embedding(item_input)
# Concat the two embedding layers
vector = torch.cat([user_embedded, item_embedded], dim=-1)
# Pass through dense layer
vector = mish(self.fc1(vector))
vector = mish(self.fc2(vector))
# Output layer
pred = nn.Sigmoid()(self.output(vector))
return pred
def training_step(self, batch, batch_idx):
user_input, item_input, labels = batch
predicted_labels = self(user_input, item_input)
loss = nn.BCELoss()(predicted_labels, labels.view(-1, 1).float())
acc = acc_metric(predicted_labels,labels)
wandb.log({"loss": loss,"acc":acc})
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters())
def train_dataloader(self):
return DataLoader(MovieLensTrainDataset(self.ratings, self.all_movieIds),
batch_size=512, num_workers=4)
# %% [markdown]
# We instantiate the NCF model using the class that we have defined above.
# %% [code]
num_users = ratings['userId'].max()+1
num_items = ratings['movieId'].max()+1
all_movieIds = ratings['movieId'].unique()
model = NCF(num_users, num_items, train_ratings, all_movieIds)
# %% [code]
wandb.watch(model)
early_stopping = EarlyStopping(
monitor='loss',
min_delta=0.00,
patience=3,
verbose=False,
mode='min',
)
trainer = pl.Trainer(max_epochs=100, gpus=1, reload_dataloaders_every_epoch=True,
progress_bar_refresh_rate=50, logger=False, checkpoint_callback=True,callbacks=[early_stopping])
trainer.fit(model)
# %% [markdown]
# ### Hit Ratio @ 10
# %% [code]
# User-item pairs for testing
test_user_item_set = set(zip(test_ratings['userId'], test_ratings['movieId']))
# Dict of all items that are interacted with by each user
user_interacted_items = ratings.groupby('userId')['movieId'].apply(list).to_dict()
hits = []
for (u,i) in tqdm(test_user_item_set):
interacted_items = user_interacted_items[u]
not_interacted_items = set(all_movieIds) - set(interacted_items)
selected_not_interacted = list(np.random.choice(list(not_interacted_items), 99))
test_items = selected_not_interacted + [i]
predicted_labels = np.squeeze(model(torch.tensor([u]*100),
torch.tensor(test_items)).detach().numpy())
top10_items = [test_items[i] for i in np.argsort(predicted_labels)[::-1][0:10].tolist()]
if i in top10_items:
hits.append(1)
else:
hits.append(0)
print("The Hit Ratio @ 10 is {:.2f}".format(np.average(hits)))
wandb.log({"hit ratio": np.average(hits)})
error:
Traceback (most recent call last):
File "main.py", line 359, in <module>
trainer = pl.Trainer(max_epochs=100, gpus=1, reload_dataloaders_every_epoch=True,
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 637, in run_train
self.train_loop.run_training_epoch()
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 492, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 654, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 425, in optimizer_step
model_ref.optimizer_step(
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1390, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 277, in optimizer_step
self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 282, in run_optimizer_step
self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 163, in optimizer_step
optimizer.step(closure=lambda_closure, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
return func(*args, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
loss = closure()
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 648, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 742, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 293, in training_step
training_step_output = self.trainer.accelerator.training_step(args)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 156, in training_step
return self.training_type_plugin.training_step(*args)
File "/home/max/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 125, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "main.py", line 318, in training_step
print(type(labels))
File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torchmetrics/metric.py", line 152, in forward
self.update(*args, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torchmetrics/metric.py", line 199, in wrapped_func
return update(*args, **kwargs)
File "/home/max/.local/lib/python3.8/site-packages/torchmetrics/classification/accuracy.py", line 142, in update
self.correct += correct
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
| I am explaining it here,
This command:
print(next(model.parameters()).device)
It will print the device on which your model's parameters are loaded.
To check if they are loaded on GPU or not, you can do this:
print(next(model.parameters()).is_cuda)
It will return a boolean value,
After seeing your code, and as you mentioned it was returning "CPU" when printed:
next(model.parameters()).device
This means that your model's parameters are loaded on the CPU, but this line
trainer = pl.Trainer(max_epochs=100, gpus=1, reload_dataloaders_every_epoch=True,
progress_bar_refresh_rate=50, logger=False, checkpoint_callback=True,callbacks=[early_stopping])
Here gpus=1 sets the number of GPUs to train on;
since all your tensors are loaded on the CPU by default, you were getting that error.
When you set gpus=None, it no longer uses GPUs for training.
To run on GPUs:
You have to move tensors from CPU to GPU,
For example:
ex_tensor=torch.zeros((7,7))
ex_tensor = ex_tensor.cuda()
And also the parameters of the model:
model = model.cuda()
| https://stackoverflow.com/questions/67393293/ |
torchvision.transforms.RandomRotation fill argument not working | I am working on tensors and want to rotate them with torchvision.transforms.RandomRotation and use the fill option.
import torch
import torchvision
img1 = torch.rand((1, 16, 16))
img2 = torchvision.transforms.RandomRotation(45, fill=1)(img1)
However, I always get:
Argument fill/fillcolor is not supported for Tensor input. Fill value is zero
and it does not get filled with ones. I have the same problem with torchvision.transforms.RandomPerspective.
I am using Python 3.8 and PyTorch 1.7.1. I tried using fill=(1,), which seemed to be a workaround, but it did not work for me. Do you know what could be wrong?
| You're probably using torchvision v0.8.2 or older. This issue was fixed 5 months ago in the PR #2904. If you are not using v0.9.0 or newer, you won't be able to use fill in a Tensor input :(
So, the only solution is to upgrade your torchvision.
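As a quick sanity check (a sketch that assumes torchvision 0.9.0 or newer with a matching PyTorch build; the zeros input is only for illustration), the original snippet should then fill the exposed area with ones:
import torch
import torchvision

print(torchvision.__version__)  # should print 0.9.0 or newer

img1 = torch.zeros((1, 16, 16))
img2 = torchvision.transforms.RandomRotation(45, fill=1)(img1)
print(img2.max())  # typically 1.0, since the area exposed by the rotation is filled with 1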
| https://stackoverflow.com/questions/67393353/ |
Pytorch on GCP: Machine type is not available on this endpoint | I'm new to GCP, so pardon me for perhaps asking about or missing something obvious here.
I'm trying to deploy and create a version resource on GCP with a custom PyTorch model. Everything had been working fine until I tried to create a new version of the model. Then I keep getting:
INVALID_ARGUMENT: Machine type is not available on this endpoint.
I've tried switching between different types from their list here without luck. What am I missing?
Here's the script I run to deploy:
MODEL_NAME='test_iris'
MODEL_VERSION='v1'
RUNTIME_VERSION='2.4'
MODEL_CLASS='model.PyTorchIrisClassifier'
PYTORCH_PACKAGE='gs://${BUCKET_NAME}/packages/torch-1.8.1+cpu-cp37-cp37m-linux_x86_64.whl'
DIST_PACKAGE='gs://${BUCKET_NAME}/models/Test_model-0.1.tar.gz'
GCS_MODEL_DIR='models/'
REGION="europe-west1"
# Creating model on AI platform
gcloud alpha ai-platform models create ${MODEL_NAME}\
--region=europe-west1 --enable-logging \
--enable-console-logging
gcloud beta ai-platform versions create ${MODEL_VERSION} --model=${MODEL_NAME} \
--origin=gs://${BUCKET_NAME}/${GCS_MODEL_DIR} \
--python-version=3.7 \
--machine-type=mls1-c4-m2\
--runtime-version=${RUNTIME_VERSION} \
--package-uris=${DIST_PACKAGE},${PYTORCH_PACKAGE} \
--prediction-class=${MODEL_CLASS}
Thanks!
| According to the documentation, you can only deploy a Custom prediction routine when using a legacy (MLS1) machine type for your model version. However, you can not use a regional endpoint with this type of machine, as stated here,
Regional endpoints only support Compute Engine (N1) machine types. You cannot use legacy (MLS1) machine types on regional endpoints.
As I can see, you have specified a regional endpoint with the --region flag, which does not support the machine type you required for your use case. Thus, you need to change the model and its version to a global endpoint, so you won't face the error anymore.
In addition, when you specify a regional endpoint with gcloud create model --region, you need to specify the same region when creating the model's version. On the other hand, when creating a model on the global endpoint with gcloud create model --regions, you can omit the region flag in the gcloud ai-platform versions create command. Note that the --regions flag is used only for the global endpoint.
Lastly, I must point out that, as per the documentation, when you select a region for the global endpoint by using the --regions flag at model creation, your prediction nodes run in the specified region, although the AI Platform Prediction infrastructure managing your resources might not necessarily run in the same region.
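For illustration only (this reuses the variables from the script in the question; double-check the exact flag spellings against the current gcloud reference), moving both the model and the version to the global endpoint would look roughly like this:
gcloud ai-platform models create ${MODEL_NAME} \
    --regions=europe-west1 \
    --enable-logging --enable-console-logging

gcloud beta ai-platform versions create ${MODEL_VERSION} --model=${MODEL_NAME} \
    --origin=gs://${BUCKET_NAME}/${GCS_MODEL_DIR} \
    --python-version=3.7 \
    --machine-type=mls1-c4-m2 \
    --runtime-version=${RUNTIME_VERSION} \
    --package-uris=${DIST_PACKAGE},${PYTORCH_PACKAGE} \
    --prediction-class=${MODEL_CLASS}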
| https://stackoverflow.com/questions/67398763/ |
How to add extra dense layer on top of BertForSequenceClassification? | I want to add an extra layer (and dropout) before the classification layer (I'm using PyTorch Lightning). What is the best way to do it?
The class BertForSequenceClassification (from the Hugging Face Transformers library) implements a fixed architecture. If you want to change it (e.g., by adding layers), you need to write your own subclass of a module.
This is actually quite simple. You can copy the code of BertForSequenceClassification and modify the code between getting the pooled BERT output and getting the logits.
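A minimal sketch of that idea (assuming the Hugging Face transformers package; the extra hidden size of 256, the dropout rate, and the class name are arbitrary illustrations, not part of the original BertForSequenceClassification code):
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithExtraLayer(nn.Module):
    def __init__(self, num_labels, extra_dim=256, dropout=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.extra = nn.Linear(self.bert.config.hidden_size, extra_dim)  # the added dense layer
        self.dropout = nn.Dropout(dropout)                               # the added dropout
        self.classifier = nn.Linear(extra_dim, num_labels)

    def forward(self, input_ids, attention_mask=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        hidden = self.dropout(torch.relu(self.extra(pooled)))
        return self.classifier(hidden)
Inside a PyTorch Lightning module you would then use this class in place of BertForSequenceClassification and keep the rest of the training loop unchanged.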
Note, however, that adding a hidden layer to the classifier does not make much difference when fine-tuning BERT. The capacity of the additional hidden layer is negligible compared to the entire stack of BERT layers. Even if you cannot fine-tune the entire model, fine-tuning just the last BERT layer is probably better than adding an extra layer to the classifier.
| https://stackoverflow.com/questions/67398812/ |
Convert one-hot encoded dimension into the index of position of 1 | I have a tensor of three dimensions [batch_size, sequence_length, number_of_tokens].
The last dimension is one-hot encoded. I want to receive a tensor of two dimensions, where each position along sequence_length holds the index of the 1 in the number_of_tokens dimension.
For example, to turn a tensor of shape (2, 3, 4):
[[[0, 1, 0, 0]
[1, 0, 0, 0]
[0, 0, 0, 1]]
[[1, 0, 0, 0]
[1, 0, 0, 0]
[0, 0, 1, 0]]]
into a tensor of shape (2, 3) where number_of_tokens dimension is converted into the 1's position:
[[1, 0, 3]
[0, 0, 2]]
I'm doing this to prepare the model output for comparison with the reference answer when computing the loss; I hope this is the correct way.
| If your original tensor is as specified in your previous question, you can bypass the one-hot encoding and directly use the argmax:
t = torch.rand(2, 3, 4)
t = t.argmax(dim=2)
| https://stackoverflow.com/questions/67399043/ |
How to change the value of a torch tensor with requires_grad=True so that the backpropogation can start again? | a=torch.tensor([1,2],dtype=dtype,requires_grad=True)
b=a[0]*a[1]
b.backward()
print(a.grad)
But when I use a += 1 to give a another value and do another round of backpropagation, it shows "a leaf Variable that requires grad is being used in an in-place operation".
However a=a+1 seems right. What's the difference between a=a+1 and a+=1?
(solved)
I find my doubt about torch actually lies in the following small example,
a=torch.tensor([1,2],dtype=dtype,requires_grad=True)
b=a[0]*a[1]
b.backward()
print(a.grad)
print(a.requires_grad)
c=2*a[0]*a[1]
c.backward()
print(a.grad)
When I do the first round of backpropagation, I get a.grad = [2, 1].
Then I want to make a fresh start and calculate the derivative of c with respect to a; however, the gradient seems to accumulate. How can I eliminate this effect?
| The += operator is for in-place operations, while a = a+1 is not in-place (that is, a refers to a different tensor after this operation).
But in your example you don't seem to be using either of these, so it is hard to say what you want to achieve.
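To make the distinction concrete (a small sketch, not from the original question):
import torch

a = torch.tensor([1., 2.], requires_grad=True)
old = a                 # keep a reference to the original leaf tensor
# a += 1                # in-place: raises an error on a leaf that requires grad
a = a + 1               # out-of-place: the name `a` now points to a brand new tensor
print(a is old)         # False
print(old.is_leaf, a.is_leaf)   # True False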
| https://stackoverflow.com/questions/67400795/ |
Can we have inputs that is more than 1D in Pytorch (e.g word-embedding) | Say I have some text that I want to classify into three groups: food, sports, science. If I have the sentence "I dont like to eat mushrooms", we can use a word embedding (say 100 dimensions) to create a 6x100 matrix for this particular sentence.
Usually, when training a neural network, our data is a 2D array with dimensions n_obs x m_features.
If I want to train a neural network on word-embedded sentences (I'm using PyTorch), then our input is 3D: n_obs x (m_sentences x k_words)
e.g
#Say word-embedding is 3-dimensions
I = [1,2,3]
dont = [4,5,6]
eat = [7,8,9]
mushrooms = [10,11,12]
"I dont eat mushrooms" = [I,dont,eat,mushrooms] #First observation
Is the best approach, when we have N > 2 dimensions, to do some kind of pooling (e.g. mean), or can we use the actual 2D features as input?
| Technically the input will be 1D, but that doesn't matter.
The internal architecture of your neural network will take care of recognizing the different words. You could for example have a convolution with a stride equal to the embedding size.
You can flatten a 2D input to become 1D and it will work fine. This is the way you'd normally do it with word embeddings.
I = [1,2,3]
dont = [4,5,6]
eat = [7,8,9]
mushrooms = [10,11,12]
input = np.array([I,dont,eat,mushrooms]).flatten()
The inputs of a neural network always have to be of the same size, but as sentences are not, you will probably have to limit the max length of a sentence to a set number of words and add padding to the end of the shorter sentences:
I = [1,2,3]
Am = [4,5,6]
short = [7,8,9]
paddingword = [1,1,1]
input = np.array([I, Am, short, paddingword]).flatten()
Also you might want to look at doc2vec from gensim, which is an easy way to make embeddings for texts, which are then easy to use for a text classification problem.
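As a rough sketch of that last suggestion (gensim's Doc2Vec API; the toy sentences and the vector size of 100 are just placeholders):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=["i", "dont", "eat", "mushrooms"], tags=[0]),
        TaggedDocument(words=["i", "am", "short"], tags=[1])]

model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=20)
vec = model.infer_vector(["i", "dont", "eat", "mushrooms"])  # fixed-size 100-dim vector
print(vec.shape)  # (100,)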
| https://stackoverflow.com/questions/67403070/ |
Implementing Backprop for custom loss functions | I have a neural network Network that has a vector output. Instead of using a typical loss function, I would like to implement my own loss function that is a method in some class. This looks something like:
class whatever:
def __init__(self, network, optimizer):
self.network = network
self.optimizer = optimizer
def cost_function(relevant_data):
...implementation of cost function with respect to output of network and relevant_data...
def train(self, epochs, other_params):
...part I'm having trouble with...
The main thing I'm concerned with is taking gradients. Since I'm using my own custom loss function, do I need to implement the gradient with respect to the cost function myself?
Once I do the math, I realize that if the cost is J, then the gradient of J is a fairly simple function in terms of the gradient of the final layer of the Network. I.e, it looks something like: Equation link.
If I used a traditional loss function like CrossEntropy, my backward pass would look like:
objective = nn.CrossEntropyLoss()
for epochs:
optimizer.zero_grad()
output = Network(input)
loss = objective(output, data)
loss.backward()
optimizer.step()
But how do we do this in my case? My guess is something like:
for epochs:
optimizer.zero_grad()
output = Network(input)
loss = cost_function(output, data)
#And here is where the problem comes in
loss.backward()
optimizer.step()
loss.backward(), as I understand it, takes the gradients of the loss function with respect to the parameters. But can I still invoke it while using my own loss function (presumably the program doesn't know what the gradient equation is)? Do I have to implement another method/subroutine to find the gradients as well?
Which brings me to my other question: if I do want to implement gradient calculation for my loss function, I also need the gradient of the neural network parameters. How do I obtain those? Is there a function for that?
| As long as all your steps starting from the input till the loss function involve differentiable operations on PyTorch's tensors, you need not do anything extra. PyTorch builds a computational graph that keeps track of each operation, its inputs, and gradients. So, calling loss.backward() on your custom loss would still propagate gradients back correctly through the graph. A Gentle Introduction to torch.autograd from the PyTorch tutorials may be a useful reference.
After the backward pass, if you need to directly access the gradients for further processing, you can do so using the .grad attribute (so t.grad for tensor t in the graph).
Finally, if you have a specific use case for finding the gradient of an arbitrary differentiable function implemented using PyTorch's tensors with respect to one of its inputs (e.g. gradient of the loss with respect to a particular weight in the network), you could use torch.autograd.grad.
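A small sketch of that last option (the function here is made up purely for illustration):
import torch

x = torch.randn(5, requires_grad=True)
w = torch.randn(5, requires_grad=True)

y = (w * x).sum() ** 2                 # any differentiable function built from tensor ops

dy_dw, = torch.autograd.grad(y, w)     # gradient of y w.r.t. w, without populating .grad
print(dy_dw.shape)                     # torch.Size([5])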
| https://stackoverflow.com/questions/67404862/ |
PyTorch - Import dataset with images as labels | I have a dataset containing images as inputs and labels/targets as images as well. The structure in the folder is as follows:
> DATASET/
> ---TRAIN/
> ------image_xx.png
> ------label_xx.png
> ---TEST/
> ------image_xx.png
> ------label_xx.png
I've currently tried to use ImageFolder from torchvision's datasets to load the images as follows:
TRAIN_PATH = '/path/to/dataset/DATASET'
train_data = datasets.ImageFolder(root=TRAIN_PATH, transform=transforms.ToTensor())
train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
However as shown below:
for img, label in train_loader:
print(img.shape)
print(label.shape)
break
torch.Size([16, 3, 128, 128])
torch.Size([16])
The labels aren't images but rather indices or something similar. Is there a convenient way of importing this dataset with the aforementioned structure?
| The ImageFolder dataset is suitable when you have discrete, scalar classes for each image. It expects the directory structure to be such that each subdirectory contains a certain class.
For your case, you can simply define your own subclass of torch.utils.data.Dataset. This tutorial may be helpful.
A simple example (I have not tried running it to see if it works correctly):
import torch
import os
import cv2
from torchvision import transforms  # needed for the ToTensor() transform used below
class MyDataset(torch.utils.data.Dataset):
def __init__(self, root_path, transform=None):
        self.data_paths = [os.path.join(root_path, f) for f in sorted(os.listdir(root_path)) if f.startswith("image")]
        self.label_paths = [os.path.join(root_path, f) for f in sorted(os.listdir(root_path)) if f.startswith("label")]
self.transform = transform
def __getitem__(self, idx):
img = cv2.imread(self.data_paths[idx])
label = cv2.imread(self.label_paths[idx])
if self.transform:
img = self.transform(img)
return img, label
def __len__(self):
return len(self.data_paths)
TRAIN_PATH = '/path/to/dataset/DATASET/TRAIN/'
train_data = MyDataset(root_path=TRAIN_PATH, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)
| https://stackoverflow.com/questions/67406731/ |
difference between initializing bias by nn.init.constant and (bias=1) in pytorch | I'm writing code for AlexNet and I'm confused about how to initialize the weights.
what is the difference between:
for layer in self.cnnnet:
if isinstance(layer, nn.Conv2d):
nn.init.constant_(layer.bias, 0)
and
nn.Linear(shape, bias=0)
| The method nn.init.constant_ receives a parameter to initialize and a constant value to initialize it with. In your case, you use it to initialize the bias parameter of a convolution layer with the value 0.
In the method nn.Linear, the bias parameter is a boolean stating whether you want the layer to have a bias or not. By setting it to 0 (i.e. False), you're actually creating a linear layer with no bias at all.
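To see the two behaviours side by side (a short sketch, not from the original answer):
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3)     # created with a bias parameter by default
nn.init.constant_(conv.bias, 0)            # fill that existing bias tensor with zeros

lin = nn.Linear(10, 5, bias=False)         # created without any bias parameter at all
print(lin.bias)                            # None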
A good practice is to start with PyTorch's default initialization for each layer; this happens implicitly when you simply create the layers. In more advanced development stages you can also change it explicitly if necessary.
For more info see the official documentation of nn.Linear and nn.Conv2d.
| https://stackoverflow.com/questions/67412090/ |
How to use torch.utils.tensorboard.SummaryWriter from the last interrupted events |
As shown in the picture, I want to keep adding scalars to an existing events.out.tfevents file rather than creating a new one.
How can I set the params in this code:
SummaryWriter(self, log_dir=None, comment='', purge_step=None, max_queue=10,
flush_secs=120, filename_suffix='')
| You should be able to run it the same way (e.g. log_dir has to be the same, tensorboard in your case).
You have to remember to use the next global step when adding scalars, though.
First run, assume it crashed at 9th step:
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("my_dir")
x = range(10)
for i in x:
writer.add_scalar("y=x", i, i)
writer.close()
If you want to continue writing to this event file, you have to shift the last parameter (the global step) by 10:
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("my_dir")
x = range(10)
for i in x:
writer.add_scalar("y=x", i, i + 10) # start from step 10
writer.close()
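As a side note (a sketch based on the purge_step argument listed in the question's signature, not part of the original answer): if the previous run crashed partway, you can also reopen the same log_dir with purge_step set to the step where you restart, so that stale events from that step onward are dropped before you continue logging:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("my_dir", purge_step=10)  # old events with step >= 10 are purged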
Running the first file, followed by the second one, and opening TensorBoard via tensorboard --logdir my_dir would give you a single continuous y=x curve across all 20 steps.
| https://stackoverflow.com/questions/67413721/ |
Tracing Tensor Sizes in TorchScript | I'm exporting a PyTorch model via TorchScript tracing, but I'm facing issues. Specifically, I have to perform some operations on tensor sizes, but the JIT compiler hardcodes the variable shapes as constants, breaking compatibility with tensors of different sizes.
For example, create the class:
class Foo(nn.Module):
"""Toy class that plays with tensor shape to showcase tracing issue.
It creates a new tensor with the same shape as the input one, except
for the last dimension, which is doubled. This new tensor is filled
based on the values of the input.
"""
def __init__(self):
nn.Module.__init__(self)
def forward(self, x):
new_shape = (x.shape[0], 2*x.shape[1]) # incriminated instruction
x2 = torch.empty(size=new_shape)
x2[:, ::2] = x
x2[:, 1::2] = x + 1
return x2
and run the test code:
x = torch.randn((3, 5)) # create example input
foo = Foo()
traced_foo = torch.jit.trace(foo, x) # trace
print(traced_foo(x).shape) # obviously this works
print(traced_foo(x[:, :4]).shape) # but fails with a different shape!
I could solve the issue by scripting, but in this case I really need to use tracing. Moreover, I think that tracing should be able to handle tensor size manipulations correctly.
|
but in this case I really need to use tracing
You can freely mix torch.jit.script and torch.jit.trace wherever needed. For example, one could do this:
import torch
class MySuperModel(torch.nn.Module):
def __init__(self, *args, **kwargs):
super().__init__()
self.scripted = torch.jit.script(Foo(*args, **kwargs))
self.traced = Bar(*args, **kwargs)
def forward(self, data):
return self.scripted(self.traced(data))
model = MySuperModel()
torch.jit.trace(model, (input1, input2))
You could also move part of the functionality dependent on shape to separate function and decorate it with @torch.jit.script:
@torch.jit.script
def _forward_impl(x):
new_shape = (x.shape[0], 2*x.shape[1]) # incriminated instruction
x2 = torch.empty(size=new_shape)
x2[:, ::2] = x
x2[:, 1::2] = x + 1
return x2
class Foo(nn.Module):
def forward(self, x):
return _forward_impl(x)
There is no way other than scripting for that, as the compiler has to understand your code. Tracing merely records the operations you perform on the tensor and has no knowledge of control flow that depends on the data (or on the shape of the data).
Anyway, this should cover most of the cases and if it doesn't you should be more specific.
| https://stackoverflow.com/questions/67413808/ |
Does dissecting a Pytorch model lower memory usage? | Suppose I have a Pytorch autoencoder model defined as:
class ae(torch.nn.Module):
def __init__(self, z_dim, n_channel=3, size_=8):
super(ae, self).__init__()
self.encoder = Encoder()
self.decoder = Decoder()
def forward(self, x):
z = self.encoder(x)
x_reconstructed = self.decoder(z)
        return z, x_reconstructed
Now, instead of defining a specific ae model and loading it, I can use the Encoder and Decoder code directly in my code. I know the total number of parameters wouldn't change, but here's my question: since these two models are now separated, is it possible that the code can run with lower RAM/GPU memory? Does separating them mean they do not need to be loaded into memory at once?
(Note that the autoencoder is just an example; my question is really about any model that consists of several sub-modules.)
|
is it possible that the code can run on lower ram/gpu-memory?
The way you created it right now no, it isn't. If you instantiate it and move to device, something along those lines:
encoder = ...
decoder = ...
autoencoder = ae(encoder, decoder).to("cuda")
It will take, in total, decoder + encoder GPU memory when moved to the device and will be loaded to memory at once.
But, instead, you could do this:
inputs = ...
inputs = inputs.to("cuda")
encoder = ...
encoder.to("cuda")
output = encoder(inputs)
encoder.to("cpu") # Free GPU memory
decoder = ...
decoder.to("cuda") # Uses less in total
result = decoder(output)
You could wrap this idea in a model (or a function); still, one would have to wait for parts of the network to be copied to the GPU, so performance will be worse (but GPU memory usage will be smaller).
Depending on where you instantiate the models, the RAM footprint could also be lower (Python will automatically destroy objects that go out of function scope). Let's look at this option (no need for casting to CPU, as the objects will be garbage collected automatically, as mentioned above):
def encode(inputs):
encoder = ...
encoder.to("cuda")
results = encoder(inputs)
return results
def decode(inputs):
decoder = ...
decoder.to("cuda")
return decoder(inputs)
outputs = encode(inputs)
result = decode(outputs)
| https://stackoverflow.com/questions/67414377/ |
How to build CUDA custom C++ extension for PyTorch without CUDA? | I was tasked with creating a CI workflow for building a PyTorch CUDA extension for this application. Up until now, the application was deployed by creating the target AWS VM with a CUDA GPU, pushing all the sources there and running setup.py, but instead I want to do the build in our CI system and deploy pre-built binaries to the production environment.
When running setup.py in the CI system, I get the error "No CUDA GPUs are available" - which is true, there are no CUDA GPUs in the CI system. Is there a way to just build the CUDA extension without a CUDA GPU available?
This is the error message:
gcc -pthread -shared -B /usr/local/miniconda/envs/build/compiler_compat -L/usr/local/miniconda/envs/build/lib -Wl,-rpath=/usr/local/miniconda/envs/build/lib -Wl,--no-as-needed -Wl,--sysroot=/ /app/my-app/build/temp.linux-x86_64-3.6/my-extension/my-module.o -L/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/my-extension/my-module.cpython-36m-x86_64-linux-gnu.so
building 'my-extension.my-module._cuda_ext' extension
creating /app/my-app/build/temp.linux-x86_64-3.6/my-extension/src
Traceback (most recent call last):
File "setup.py", line 128, in <module>
'build_ext': BuildExtension
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
build_ext.build_extensions(self)
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/usr/local/miniconda/envs/build/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 468, in unix_wrap_ninja_compile
cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 377, in unix_cuda_flags
cflags + _get_cuda_arch_flags(cflags) +
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1407, in _get_cuda_arch_flags
capability = torch.cuda.get_device_capability()
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/cuda/__init__.py", line 291, in get_device_capability
prop = get_device_properties(device)
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/cuda/__init__.py", line 296, in get_device_properties
_lazy_init() # will define _get_device_properties
File "/usr/local/miniconda/envs/build/lib/python3.6/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
I'm not very familiar with CUDA and only half proficient in Python (I'm here as the "ops" part of "devops").
It is not a complete solution, as I lack the details to fully figure one out, but it should help you or your teammates.
First, based on the source code, it is not required to reach torch._C._cuda_init() if you have the CUDA arch flags set.
This means PyTorch is trying to figure out the CUDA arch because it is not specified by the user.
Here is a related thread. As you can see, setting the TORCH_CUDA_ARCH_LIST environment variable to something that fits your target environment should work for you.
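For example (a sketch only; the right value depends on the GPUs you target, e.g. compute capability 8.6 for an A40), exporting the variable before the build avoids the GPU query entirely:
TORCH_CUDA_ARCH_LIST="8.6" python3 setup.py install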
| https://stackoverflow.com/questions/67430901/ |
How to use numpy dataset in Pytorch Lightning | I want to make a dataset using NumPy and then train and test a simple model like linear or logistic regression.
I am trying to learn PyTorch Lightning. I have found a tutorial showing that we can use a NumPy dataset and a uniform distribution here. As a newcomer, I am not getting the full idea of how to do that.
My code is given below
import numpy as np
import pytorch_lightning as pl
from torch.utils.data import random_split, DataLoader, TensorDataset
import torch
from torch.autograd import Variable
from torchvision import transforms
np.random.seed(42)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class DataModuleClass(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.constant = 2
self.batch_size = 10
self.transform = transforms.Compose([
transforms.ToTensor()
])
def prepare_data(self):
a = np.random.uniform(0, 500, 500)
        b = np.random.normal(0, self.constant, len(a))
c = a + b
X = np.transpose(np.array([a, b]))
idx = np.arange(500)
np.random.shuffle(idx)
        # Uses first 400 random indices for training
train_idx = idx[:400]
# Uses the remaining indices for validation
val_idx = idx[400:]
# Generate train and validation dataset
        x_train, y_train = X[train_idx], c[train_idx]
        x_val, y_val = X[val_idx], c[val_idx]
# Converting numpy array to Tensor
self.x_train_tensor = torch.from_numpy(x_train).float().to(device)
self.y_train_tensor = torch.from_numpy(y_train).float().to(device)
self.x_val_tensor = torch.from_numpy(x_val).float().to(device)
self.y_val_tensor = torch.from_numpy(y_val).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
validation_dataset = TensorDataset(self.x_val_tensor, self.y_val_tensor)
return training_dataset, validation_dataset
def train_dataloader(self):
training_dataloader = prepare_data() # Most probably this is wrong way!!!
return DataLoader(self.training_dataloader)
def val_dataloader(self):
validation_dataloader = prepare_data() # Most probably this is wrong way!!!
return DataLoader(self.validation_dataloader)
# def test_dataloader(self):
obj = DataModuleClass()
print(obj.prepare_data())
This part is done based on the answer given here. [I want to take a and b as features and c as the label or target variable.]
Now, how can I pass the dataset into the training and validation methods?
| You can get the data from prepare_data() or setup() both using the following code.
def prepare_data(self):
a = np.random.uniform(0, 500, 500)
b = np.random.normal(0, self.constant, len(a))
c = a + b
X = np.transpose(np.array([a, b]))
# Converting numpy array to Tensor
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(c).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
self.training_dataset = training_dataset
def setup(self):
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [400, 100])
def train_dataloader(self):
return DataLoader(self.train_data)
def val_dataloader(self):
return DataLoader(self.val_data)
You can split the dataset using random_split().
| https://stackoverflow.com/questions/67437448/ |
Calling pytorch neural network forward() gives error "mat1 and mat2 shapes cannot be multiplied" | I have the following code, which defines a simple neural network:
class MlpNN(nn.Module):
def __init__(self, in_dim=301, hidden_dims=[500,500,500,500,500,30]):
super(MlpNN, self).__init__()
self.net = nn.Sequential()
self.net.add_module("lin_0", nn.Linear(in_dim, hidden_dims[0]))
self.net.add_module("relu_0", nn.ReLU())
layer_id = 1
for hidden_dim in hidden_dims[1:]:
self.net.add_module("lin_"+str(layer_id), nn.Linear(hidden_dim, hidden_dim))
self.net.add_module("relu_"+str(layer_id), nn.ReLU())
layer_id += 1
def forward(self, x):
return self.net(x)
The network formed is:
MlpNN(
(net): Sequential(
(lin_0): Linear(in_features=301, out_features=500, bias=True)
(relu_0): ReLU()
(lin_1): Linear(in_features=500, out_features=500, bias=True)
(relu_1): ReLU()
(lin_2): Linear(in_features=500, out_features=500, bias=True)
(relu_2): ReLU()
(lin_3): Linear(in_features=500, out_features=500, bias=True)
(relu_3): ReLU()
(lin_4): Linear(in_features=500, out_features=500, bias=True)
(relu_4): ReLU()
(lin_5): Linear(in_features=30, out_features=30, bias=True)
(relu_5): ReLU()
)
)
While doing forward(), it gives me the following error:
mat1 and mat2 shapes cannot be multiplied (1x500 and 30x30)
What am I doing wrong here? I know it must be something basic, given that I am quite new to machine learning.
| In your network, your lin_4 layer has 500 output features, while your lin_5 layer has 30 input features. This causes a mismatch in the shapes. To fix this, you should ensure that the output features of a layer and the input features of the layer after it are the same. In your code, you could do that as:
class MlpNN(nn.Module):
def __init__(self, in_dim=301, hidden_dims=[500,500,500,500,500,30]):
super(MlpNN, self).__init__()
self.net = nn.Sequential()
self.net.add_module("lin_0", nn.Linear(in_dim, hidden_dims[0]))
self.net.add_module("relu_0", nn.ReLU())
for layer_id in range(1, len(hidden_dims)):
self.net.add_module("lin_"+str(layer_id), nn.Linear(hidden_dims[layer_id-1], hidden_dims[layer_id]))
self.net.add_module("relu_"+str(layer_id), nn.ReLU())
def forward(self, x):
return self.net(x)
| https://stackoverflow.com/questions/67440763/ |
How to get dataset from prepare_data() to setup() in PyTorch Lightning | I made my own dataset using NumPy in the prepare_data() method of a PyTorch Lightning DataModule. Now, I want to pass the data into the setup() method to split it into training and validation sets.
import numpy as np
import pytorch_lightning as pl
from torch.utils.data import random_split, DataLoader, TensorDataset
import torch
from torch.autograd import Variable
from torchvision import transforms
np.random.seed(42)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class DataModuleClass(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.constant = 2
self.batch_size = 10
def prepare_data(self):
a = np.random.uniform(0, 500, 500)
b = np.random.normal(0, self.constant, len(a))
c = a + b
X = np.transpose(np.array([a, b]))
# Converting numpy array to Tensor
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(c).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
return training_dataset
def setup(self):
data = # What I have to write to get the data from prepare_data()
self.train_data, self.val_data = random_split(data, [400, 100])
def train_dataloader(self):
training_dataloader = setup() # Need to get the training data
return DataLoader(self.training_dataloader)
def val_dataloader(self):
validation_dataloader = prepare_data() # Need to get the validation data
return DataLoader(self.validation_dataloader)
obj = DataModuleClass()
print(obj.prepare_data())
| The same answer as your previous question...
def prepare_data(self):
a = np.random.uniform(0, 500, 500)
b = np.random.normal(0, self.constant, len(a))
c = a + b
X = np.transpose(np.array([a, b]))
# Converting numpy array to Tensor
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(c).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
self.training_dataset = training_dataset
def setup(self):
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [400, 100])
def train_dataloader(self):
return DataLoader(self.train_data)
def val_dataloader(self):
return DataLoader(self.val_data)
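A short sketch of how these pieces fit together (note that prepare_data must run before setup, since setup reads self.training_dataset):
dm = DataModuleClass()
dm.prepare_data()
dm.setup()
batch = next(iter(dm.train_dataloader()))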
| https://stackoverflow.com/questions/67441163/ |
Train a neural network when the training has only the derivative of output wrt all inputs | There is a scalar function F with 1000 inputs. I want to train a model to predict F given the inputs. However, in the training dataset, we only know the derivative of F with respect to each input, not the value of F itself. How can I construct a neural network with this limitation in TensorFlow or PyTorch?
| I think you can use torch.autograd to compute the gradients, and then use them for the loss. You need:
(a) A trainable nn.Module to represent the (unknown) function F:
import torch
from torch import nn, autograd

class UnknownF(nn.Module):
def __init__(self, ...):
# whatever combinations of linear layers and activations and whatever...
def forward(self, x):
# x is 1000 dim vector
y = self.layers(x)
# y is a _scalar_ output
return y
model = UnknownF(...) # instansiate the model of the unknown function
(b) Training data:
x = torch.randn(n, 1000, requires_grad=True) # n examples of 1000-dim vectors
dy = torch.randn(n, 1000) # the corresponding n-dim gradients of the n inputs
(c) An optimizer:
opt = torch.optim.SGD(model.parameters(), lr=0.1)
(d) Put it together:
criterion = nn.MSELoss()
for e in range(num_epochs):
for i in range(n):
# batch size = 1, pick one example
x_ = x[i, :]
dy_ = dy[i, :]
opt.zero_grad()
# predict the unknown output
y_ = model(x_)
# compute the gradients of the model using autograd:
pred_dy_ = autograd.grad(y_, x_, create_graph=True)[0]
# compute the loss between the model's gradients and the GT ones:
loss = criterion(pred_dy_, dy_)
loss.backward()
opt.step() # update model's parameters accordingly.
| https://stackoverflow.com/questions/67442104/ |
PyTorch: compare three tensors? | I have three boolean mask tensors, and I want to create a boolean mask where, if the values match across all three tensors, the result is 1, else 0.
I tried torch.where(A == B == C, 1, 0), but it doesn't seem to support such.
| The torch.eq operator only supports binary tensor comparisons, hence you need to perform two comparisons:
(A==B) & (B==C)
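A small worked example (a sketch), including a 1/0 output if that is what you need:
import torch
A = torch.tensor([1, 0, 1, 1], dtype=torch.bool)
B = torch.tensor([1, 0, 0, 1], dtype=torch.bool)
C = torch.tensor([1, 1, 0, 1], dtype=torch.bool)
mask = (A == B) & (B == C)  # True where all three tensors agree
result = mask.int()         # tensor([1, 0, 0, 1])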
| https://stackoverflow.com/questions/67444564/ |
Normalising images using mean and std of a dataset | I used the following snippet to compute the mean and std of the images in the cityscapes dataset to normalise them:
def compute_mean_std(dataloader):
pop_mean = []
pop_std = []
for i, (img,mask, rgb_mask) in enumerate(dataloader):
numpy_image = img.cpu().numpy()
batch_mean = np.mean(numpy_image,axis=(0,2,3))
pop_mean.append(batch_mean)
#print(batch_mean.shape)
batch_std = np.mean(numpy_image, axis=(0,2,3))
pop_std.append(batch_std)
#print(batch_std.shape)
pop_mean = np.array(pop_mean).mean(axis=0)
pop_std = np.array(pop_std).std(axis=0)
print(pop_mean.shape)
print(pop_std.shape)
return(pop_mean, pop_std)
This code gave me the following mean and std:
MEAN = [0.28660315, 0.32426634, 0.28302112]
STD = [0.00310452, 0.00292714, 0.00296411]
but when I computed the mean and std of images after normalisation using these mean and std, they are not close to 0 and 1.
Is this approach correct to compute mean and std over the whole dataset and normalising images?
| Your formulas are not correct. You can't take the mean of the values of a batch and then the standard deviation of these means and expect it to be the standard deviation over the entire dataset. Try something like:
total = 0.0
totalsq = 0.0
count = 0
for data, *_ in dataloader:
count += np.prod(data.shape)
total += data.sum()
totalsq += (data**2).sum()
mean = total/count
var = (totalsq/count) - (mean**2)
std = torch.sqrt(var)
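If you need per-channel statistics (as in your MEAN/STD lists), here is a sketch of a variant that sums over the batch, height and width dimensions only (it assumes 3-channel images of shape [B, 3, H, W]):
total = torch.zeros(3)
totalsq = torch.zeros(3)
count = 0
for data, *_ in dataloader:
    count += data.shape[0] * data.shape[2] * data.shape[3]
    total += data.sum(dim=(0, 2, 3))
    totalsq += (data ** 2).sum(dim=(0, 2, 3))
mean = total / count
std = torch.sqrt(totalsq / count - mean ** 2)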
| https://stackoverflow.com/questions/67445440/ |
Custom Dataset not accepting argument in PyTorch | I am trying to create a custom Dataset in PyTorch using this dataset. It is of the shape (X, 785), X being number of samples and each row containing the label at index 0 and 784 pixel values. This is my code :
from torch.utils.data import Dataset
def SignMNISTDataset(Dataset):
def __init__(self, csv_file_path, mode='Train'):
self.labels = []
self.pixels = []
self.mode = mode
data = pd.read_csv(csv_file_path).values
if self.mode == 'Train':
self.labels = data[:,0].tolist()
print("Training labels acquired")
for idx in range(len(self.labels)):
self.pixels.append(data[idx][1:].tolist())
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
pixels = self.pixels[idx]
if self.mode == 'Train':
labels = self.labels[idx]
content = {"pixels":pixels, "label":labels}
else:
content = {"pixels":pixels}
return content
training_data = SignMNISTDataset('sign_mnist_train/sign_mnist_train.csv', 'Train')
On running, I get the following error :
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-46-0173199f8794> in <module>()
27 return content
28
---> 29 training_data = SignMNISTDataset('sign_mnist_train/sign_mnist_train.csv', 'Train')
30 from torch.utils.data import DataLoader
31
TypeError: SignMNISTDataset() takes 1 positional argument but 2 were given
Where exactly is this coming from? Is the mode argument somehow not being read during the object creation?
My end goal is to create a neural network for classifying sign characters, following this tutorial.
I tried explicitly mentioning the keyword mode during the object creation. This is what I got -
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-48-fd796c48dc67> in <module>()
27 return content
28
---> 29 training_data = SignMNISTDataset('sign_mnist_train/sign_mnist_train.csv', mode='Train')
TypeError: SignMNISTDataset() got an unexpected keyword argument 'mode'
| Please use
class SignMNISTDataset(Dataset):
Instead of
def SignMNISTDataset(Dataset):
With def, you created a plain function whose single positional parameter happens to be named Dataset, so calling SignMNISTDataset(csv_path, 'Train') (or with mode='Train') passes a second argument the function does not accept, hence both TypeErrors. Declaring it as a class inheriting from Dataset lets __init__ receive csv_file_path and mode as intended.
| https://stackoverflow.com/questions/67447071/ |
PyTorch backward() on a tensor element affected by nan in other elements | Consider the following two examples:
x = torch.tensor(1., requires_grad=True)
y = torch.tensor(0., requires_grad=True)
z = torch.full((2, ), float("nan"))
z0 = x * y / y
z1 = x + y
print(z0, z1) # tensor(nan, grad_fn=<DivBackward0>) tensor(1., grad_fn=<AddBackward0>)
z1.backward()
print(x.grad) # tensor(1.)
x = torch.tensor(1., requires_grad=True)
y = torch.tensor(0., requires_grad=True)
z = torch.full((2, ), float("nan"))
z[0] = x * y / y
z[1] = x + y
print(z) # tensor([nan, 1.], grad_fn=<CopySlices>)
z[1].backward()
print(x.grad) # tensor(nan)
In example 1, z0 does not affect z1, and the backward() of z1 executes as expected and x.grad is not nan. However, in example 2, the backward() of z[1] seems to be affected by z[0], and x.grad is nan.
How do I prevent this (example 1 is desired behaviour)? Specifically I need to retain the nan in z[0] so adding epsilon to division does not help.
| When indexing the tensor in the assignment, PyTorch accesses all elements of the tensor (it uses binary multiplicative masking under the hood to maintain differentiability) and this is where it is picking up the nan of the other element (since 0*nan -> nan).
We can see this in the computational graph:
torchviz.make_dot(z1, params={'x':x,'y':y})
torchviz.make_dot(z[1], params={'x':x,'y':y})
If you wish to avoid this behaviour, either mask the nan's, or do as you have done in the first example - separate these into two different objects.
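One possible sketch of the masking idea for this specific case: detach the nan-producing result before writing it into the shared tensor, so its graph never participates in the other element's backward (this keeps the nan value in z[0]):
z0 = x * y / y
z[0] = z0.detach() if torch.isnan(z0) else z0
z[1] = x + y
z[1].backward()
print(x.grad)  # tensor(1.)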
| https://stackoverflow.com/questions/67447166/ |
Building CUDA extension fails even after issue in code (deprecated AT_CHECK) is fixed | I'm trying to install neural_renderer. Unfortunately, the original implementation only supports Python 2.7+ and PyTorch 0.4.0, so I'm using a fork that includes some fixes for compatibility with torch 1.7 (here). The main issue was using AT_CHECK(), which was not compatible with newer versions of PyTorch, and was replaced with TORCH_CHECK().
After running pip install neural_renderer_pytorch on the fixed version, using a virtual environment, I get the output (which I truncated to just the error):
/tmp/pip-install-[somestring]/neural-renderer-pytorch_[somelongstring]/neural_renderer/cuda/load_textures_cuda.cpp:15:23: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’?
15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^~~~~~~~
with [somestring] and [somelongstring] being some alphanumeric strings that changed with each compilation.
It looks like AT_CHECK is still being used somewhere in the code, but I don't know where. I know this error is exactly what the fork fixed, so I assume the cpp file is still cached somewhere from a previous compilation. But I can't figure out where exactly. I'm sure I'm on branch pytorch1.7 and running pip in the right repository; with torch==1.7.0 installed.
What I've tried so far, to no avail:
run pip cache purge before attempting to install
running pip with --no-cache-dir
deleting the virtualenv I'm using and making a new one
deleting the entire repository and making a new one
This issue on GitHub suggested just using PyTorch 1.4.0. This worked (i.e. I created a Python3.7 environment and ran conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch, then installed), but my goal is still to compile it for PyTorch 1.7.0 (and a newer version of Python).
| If you want to install the fork, you cannot use pip install neural_renderer_pytorch. This command installs the original one.
To install the fork, you have to clone it to your local machine and install it:
git clone https://github.com/ZhengZerong/neural_renderer
cd neural_renderer
pip install .
You can do it in just one go as well:
pip install git+https://github.com/ZhengZerong/neural_renderer.git
Don't forget to uninstall the original version first, or just start a new venv.
| https://stackoverflow.com/questions/67447724/ |
AttributeError: 'DataModuleClass' object has no attribute 'training_dataset' | I am trying to learn PyTorch Lightning by writing a very simple DataModuleClass. After prepare_data() and setup(), I am trying to check whether these functions are working or not. So, I am trying to get the training and validation datasets from setup(). But I am getting an error
AttributeError: 'DataModuleClass' object has no attribute 'training_dataset'
Code
def prepare_data(self):
x = np.random.uniform(0, 10, 10)
e = np.random.normal(0, self.sigma, len(x))
# Making target or labels
y = x + e
# Marging x and e for 2 features
X = np.transpose(np.array([x, e]))
# Converting numpy array to Tensor
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(y).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
self.training_dataset = training_dataset
def setup(self):
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [8, 2])
return self.train_data, self.val_data
def train_dataloader(self):
return DataLoader(self.train_data)
def val_dataloader(self):
return DataLoader(self.val_data)
obj = DataModuleClass()
print(obj.setup())
Could you tell me why I am getting this error?
| From the way the code looks to me:
The variable self.training_dataset of the DataModuleClass is initialized in prepare_data, and setup needs it in its first line.
But you called setup without calling prepare_data first.
If prepare_data is expected to be called every time you create a DataModuleClass object then it best to put prepare_data in __init__. Like
def __init__(self, other_params):
..... all your code previously in __init__
self.prepare_data() # put this in the last line of this function
But if you don't need that then you need to call prepare_data before setup
obj = DataModuleClass()
obj.prepare_data()
print(obj.setup())
Or put prepare_data in setup itself.
def setup(self):
self.prepare_data()
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [8, 2])
return self.train_data, self.val_data
Edit 1: See the actual value of self.train_data and self.val_data
The objects returned from setup are torch.utils.data.dataset.Subset.
There are basically 2 ways to get the tensors.
1. Treat them like lists
train_data, val_data = obj.setup()
print(train_data[0])
2. Use for loop
train_data, val_data = obj.setup()
for data in train_data:
print(data)
Note
I'm not sure whether you'll get the tensors or TensorDataset. If it's the latter then use the same trick again, like
train_data, val_data = obj.setup()
train_tensor_data = train_data[0]
print(train_tensor_data[0])
| https://stackoverflow.com/questions/67448220/ |
Why is accuracy dropping in the last batch? | I am using PyTorch for a classification task. For some reason, the accuracy drops in the last iteration, and I would like to know why. Any answer is appreciated.
Here is the code
class Classifier(nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.layers = nn.Sequential(nn.Linear(89, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 2))
def forward(self, x):
return self.layers(x)
def train(train_dl, model, epochs):
loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)
for epoch in range(epochs):
for (features, target) in train_dl:
optimizer.zero_grad()
features, target = features.to(device), target.to(device)
output = model(features.float())
target = target.view(-1)
loss = loss_function(output, target)
loss.backward()
optimizer.step()
output = torch.argmax(output, dim=1)
correct = (output == target).float().sum()
accuracy = correct / 512
print(accuracy, loss)
break
model = Classifier().to(device)
train(train_dl, model, 10)
and the last part of the output
tensor(0.6465, device='cuda:0') tensor(0.6498, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6348, device='cuda:0') tensor(0.6574, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6582, device='cuda:0') tensor(0.6423, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6660, device='cuda:0') tensor(0.6375, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6719, device='cuda:0') tensor(0.6338, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6426, device='cuda:0') tensor(0.6523, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6699, device='cuda:0') tensor(0.6347, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6582, device='cuda:0') tensor(0.6422, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6543, device='cuda:0') tensor(0.6449, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6465, device='cuda:0') tensor(0.6502, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6992, device='cuda:0') tensor(0.6147, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6777, device='cuda:0') tensor(0.6289, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6836, device='cuda:0') tensor(0.6244, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6738, device='cuda:0') tensor(0.6315, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.1387, device='cuda:0') tensor(0.5749, device='cuda:0', grad_fn=<NllLossBackward>)
| Probably because your last batch size is less than 512. It would be better to change this line
accuracy = correct / 512
to:
accuracy = correct / features.shape[0]
Alternatively, if you don't want your last batch to have a different size, you can drop it when you create the DataLoader, by setting drop_last=True, something like this:
train_dl = DataLoader(..., drop_last=True)
| https://stackoverflow.com/questions/67448552/ |
One liner tensor with conditions | I have two tensors:
a = torch.nn.Parameter(torch.rand(7, requires_grad=True))
b = torch.randint(0,60, (20,))
Is there a one liner (or a quick & short way) that can create a tensor (call it x) of size 20 (similar to "b") with conditions?
i.e.
[b<4 use a[0], 4 <=b<12 use a[1], 12<=b<22 use a[2], <28, <38, <50, >50] for every b
So if:
b = [12, 93, 54, 0...]
I want my new tensor "x" to be:
x = [a[2],a[6], a[6]...]
I'm going to use this "x" tensor to train and need the values to be backpropped and learnable
i.e.
loss = torch.rand(20) * x
loss.backward() ...
So if one of the a's is not in x I want it to not change.
| You can sum multiplicative masks of the conditions:
x = a[0]*(b<4) + a[1]*((4<=b)&(b<12)) + a[2]*((12<=b)&(b<22)) + a[3]*((22<=b)&(b<28)) + a[4]*((28<=b)&(b<38)) + a[5]*((38<=b)&(b<50)) + a[6]*(b>=50)
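An alternative sketch using torch.bucketize with the bin edges from the question (indexing a keeps the result differentiable with respect to a, and entries of a that never appear receive zero gradient):
edges = torch.tensor([4, 12, 22, 28, 38, 50])
x = a[torch.bucketize(b, edges, right=True)]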
| https://stackoverflow.com/questions/67448750/ |
AttributeError: 'tuple' object has no attribute 'train_dataloader' | I have 3 files. In the datamodule file, I have created the data and used the basic structure of PyTorch Lightning. In the linear_model file I made a linear regression model based on this page. Finally, in a train file, I am calling the model and trying to fit the data. But I am getting this error
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/mostafiz/Dropbox/MSc/Thesis/regreesion_EC/src/test_train.py", line 10, in <module>
train_dataloader=datamodule.DataModuleClass().setup().train_dataloader(),
AttributeError: 'tuple' object has no attribute 'train_dataloader'
Sample datamodule file
class DataModuleClass(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.sigma = 5
self.batch_size = 10
self.prepare_data()
def prepare_data(self):
x = np.random.uniform(0, 10, 10)
e = np.random.normal(0, self.sigma, len(x))
y = x + e
X = np.transpose(np.array([x, e]))
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(y).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
self.training_dataset = training_dataset
def setup(self):
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [8, 2])
return self.train_data, self.val_data
def train_dataloader(self):
return DataLoader(self.train_data)
def val_dataloader(self):
return DataLoader(self.val_data)
Sample training file
from . import datamodule, linear_model
model = linear_model.LinearRegression(input_dim=2, l1_strength=1, l2_strength=1)
trainer = pl.Trainer()
trainer.fit(model,
train_dataloader=datamodule.DataModuleClass().setup().train_dataloader(),
val_dataloaders=datamodule.DataModuleClass().setup().val_dataloaders())
Let me know if you need more code or explanation.
Update (Based on the comment)
Now, I am getting the following error after removing self.prepare_data() from the __init__() of the DataModuleClass(), removed return self.train_data, self.val_data from setup(), and changed the test file to
data_module = datamodule.DataModuleClass()
trainer = pl.Trainer()
trainer.fit(model,data_module)
Error:
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/mostafiz/Dropbox/MSc/Thesis/regreesion_EC/src/test_train.py", line 10, in <module>
train_dataloader=datamodule.DataModuleClass().train_dataloader(),
File "/home/mostafiz/Dropbox/MSc/Thesis/regreesion_EC/src/datamodule.py", line 54, in train_dataloader
return DataLoader(self.train_data)
AttributeError: 'DataModuleClass' object has no attribute 'train_data'
| Most of the things were correct, except a few things like:
def prepare_data(self):
This function was right except that it should not return anything.
Another thing was
def setup(self,stage=None):
This requires a stage argument, which can be given a default value of None when we don't need to switch between separate train and test stages.
Putting everything together, here is the code:
from argparse import ArgumentParser
import numpy as np
import pytorch_lightning as pl
from torch.utils.data import random_split, DataLoader, TensorDataset
import torch
from torch.autograd import Variable
from torchvision import transforms
import pytorch_lightning as pl
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim import Adam
from torch.optim.optimizer import Optimizer
class LinearRegression(pl.LightningModule):
def __init__(
self,
input_dim: int = 2,
output_dim: int = 1,
bias: bool = True,
learning_rate: float = 1e-4,
optimizer: Optimizer = Adam,
l1_strength: float = 0.0,
l2_strength: float = 0.0
):
super().__init__()
self.save_hyperparameters()
self.optimizer = optimizer
self.linear = nn.Linear(in_features=self.hparams.input_dim, out_features=self.hparams.output_dim, bias=bias)
def forward(self, x):
y_hat = self.linear(x)
return y_hat
def training_step(self, batch, batch_idx):
x, y = batch
# flatten any input
x = x.view(x.size(0), -1)
y_hat = self(x)
loss = F.mse_loss(y_hat, y, reduction='sum')
# L1 regularizer
if self.hparams.l1_strength > 0:
l1_reg = sum(param.abs().sum() for param in self.parameters())
loss += self.hparams.l1_strength * l1_reg
# L2 regularizer
if self.hparams.l2_strength > 0:
l2_reg = sum(param.pow(2).sum() for param in self.parameters())
loss += self.hparams.l2_strength * l2_reg
loss /= x.size(0)
tensorboard_logs = {'train_mse_loss': loss}
progress_bar_metrics = tensorboard_logs
return {'loss': loss, 'log': tensorboard_logs, 'progress_bar': progress_bar_metrics}
def validation_step(self, batch, batch_idx):
x, y = batch
x = x.view(x.size(0), -1)
y_hat = self(x)
return {'val_loss': F.mse_loss(y_hat, y)}
def validation_epoch_end(self, outputs):
val_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_mse_loss': val_loss}
progress_bar_metrics = tensorboard_logs
return {'val_loss': val_loss, 'log': tensorboard_logs, 'progress_bar': progress_bar_metrics}
def configure_optimizers(self):
return self.optimizer(self.parameters(), lr=self.hparams.learning_rate)
np.random.seed(42)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class DataModuleClass(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.sigma = 5
self.batch_size = 10
def prepare_data(self):
x = np.random.uniform(0, 10, 10)
e = np.random.normal(0, self.sigma, len(x))
y = x + e
X = np.transpose(np.array([x, e]))
self.x_train_tensor = torch.from_numpy(X).float().to(device)
self.y_train_tensor = torch.from_numpy(y).float().to(device)
training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
self.training_dataset = training_dataset
def setup(self,stage=None):
data = self.training_dataset
self.train_data, self.val_data = random_split(data, [8, 2])
def train_dataloader(self):
return DataLoader(self.train_data)
def val_dataloader(self):
return DataLoader(self.val_data)
model = LinearRegression(input_dim=2, l1_strength=1, l2_strength=1)
trainer = pl.Trainer()
dummy = DataModuleClass()
trainer.fit(model,dummy)
| https://stackoverflow.com/questions/67449926/ |
Does pytorch register modules that are assigned to object properties? | Say I have a module like:
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.cnn = CNN(params)
And then I do:
module = MyModule()
module.cnn = CNN(some_other_params)
Is the replacement registered? Will there be any nasty side effects down the line?
| Yes. This is the standard way to reassign attributes. Whether there are nasty side effects depends on whether your replacement is specified compatibly.
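A small sketch to verify the registration (the Conv2d arguments here are just placeholders for your CNN replacement):
module = MyModule()
module.cnn = nn.Conv2d(3, 16, 3)
print(dict(module.named_children()).keys())   # dict_keys(['cnn'])
print(list(module.state_dict().keys()))       # ['cnn.weight', 'cnn.bias']
One common gotcha: an optimizer built from the old module's parameters will not know about the replacement, so build the optimizer after swapping submodules.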
| https://stackoverflow.com/questions/67450932/ |
Nearly Constant training and validation accuracy | I'm new to PyTorch and my question may be a little naive.
I'm training a pretrained VGG16 network on my dataset, which has roughly 33,000 images in 8 classes with labels [1,2,…,8], and the classes are imbalanced. My problem is that during training, both training and validation accuracy stay low and don't increase. Is there any problem in my code?
If not, what do you suggest to improve training?
import torch
import time
import torch.nn as nn
import numpy as np
from sklearn.model_selection import train_test_split
from torch.optim import Adam
import cv2
import torchvision.models as models
from classify_dataset import Classification_dataset
from torchvision import transforms
transform = transforms.Compose([transforms.Resize((224,224)),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomRotation(degrees=45),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])
dataset = Classification_dataset(root_dir=r'//home/arisa/Desktop/Hamid/IQA/Hamid_Dataset',
csv_file=r'/home/arisa/Desktop/Hamid/IQA/new_label.csv',transform=transform)
target = dataset.labels - 1
train_indices, test_indices = train_test_split(np.arange(target.shape[0]), stratify=target)
test_dataset = torch.utils.data.Subset(dataset, indices=test_indices)
train_dataset = torch.utils.data.Subset(dataset, indices=train_indices)
class_sample_count = np.array([len(np.where(target[train_indices] == t)[0]) for t in np.unique(target)])
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in target[train_indices]])
samples_weight = torch.from_numpy(samples_weight)
samples_weight = samples_weight.double()
sampler = torch.utils.data.WeightedRandomSampler(samples_weight, len(samples_weight), replacement = True)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=64,
sampler=sampler)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=64,
shuffle=False)
for param in model.parameters():
param.requires_grad = False
num_ftrs = model.classifier[0].in_features
model.classifier = nn.Linear(num_ftrs,8)
optimizer = Adam(model.parameters(), lr = 0.0001 )
criterion = nn.CrossEntropyLoss()
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.01)
path = '/home/arisa/Desktop/Hamid/IQA/'
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
def train_model(model, train_loader,valid_loader, optimizer, criterion, scheduler=None, num_epochs=10 ):
min_valid_loss = np.inf
model.train()
start = time.time()
TrainLoss = []
model = model.to(device)
for epoch in range(num_epochs):
total = 0
correct = 0
train_loss = 0
#lr_scheduler.step()
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-' * 10)
train_loss = 0.0
for x,y in train_loader:
x = x.to(device)
#print(y.shape)
y = y.view(y.shape[0],).to(device)
y = y.to(device)
y -= 1
out = model(x)
loss = criterion(out, y)
optimizer.zero_grad()
loss.backward()
TrainLoss.append(loss.item()* y.shape[0])
train_loss += loss.item() * y.shape[0]
_,predicted = torch.max(out.data,1)
total += y.size(0)
correct += (predicted == y).sum().item()
optimizer.step()
lr_scheduler.step()
accuracy = 100*correct/total
valid_loss = 0.0
val_loss = []
model.eval()
val_correct = 0
val_total = 0
with torch.no_grad():
for x_val, y_val in test_loader:
x_val = x_val.to(device)
y_val = y_val.view(y_val.shape[0],).to(device)
y_val -= 1
target = model(x_val)
loss = criterion(target, y_val)
valid_loss += loss.item() * y_val.shape[0]
_,predicted = torch.max(target.data,1)
val_total += y_val.size(0)
val_correct += (predicted == y_val).sum().item()
val_loss.append(loss.item()* y_val.shape[0])
val_acc = 100*val_correct / val_total
print(f'Epoch {epoch + 1} \t\t Training Loss: {train_loss / len(train_loader)} \t\t Validation Loss: {valid_loss / len(test_loader)} \t\t Train Acc:{accuracy} \t\t Validation Acc:{val_acc}')
if min_valid_loss > (valid_loss / len(test_loader)):
print(f'Validation Loss Decreased({min_valid_loss:.6f}--->{valid_loss / len(test_loader):.6f}) \t Saving The Model')
min_valid_loss = valid_loss / len(test_loader)
state = {'state_dict': model.state_dict(),'optimizer': optimizer.state_dict(),}
torch.save(state,'/home/arisa/Desktop/Hamid/IQA/checkpoint.t7')
end = time.time()
print('TRAIN TIME:')
print('%.2gs'%(end-start))
train_model(model=model, train_loader=train_loader, optimizer=optimizer, criterion=criterion, valid_loader= test_loader,num_epochs=500 )
Thanks in advance
Here is the result of 15 epochs:
Epoch 1/500
----------
Epoch 1 Training Loss: 205.63448420514916 Validation Loss: 233.89266112356475 Train Acc:39.36360386127994 Validation Acc:24.142040038131555
Epoch 2/500
----------
Epoch 2 Training Loss: 199.05699240435197 Validation Loss: 235.08799531243065 Train Acc:41.90998291820601 Validation Acc:24.27311725452812
Epoch 3/500
----------
Epoch 3 Training Loss: 199.15626737127448 Validation Loss: 236.00033430619672 Train Acc:41.1035633416756 Validation Acc:23.677311725452814
Epoch 4/500
----------
Epoch 4 Training Loss: 199.02581041173886 Validation Loss: 233.60767459869385 Train Acc:41.86628530568466 Validation Acc:24.606768350810295
Epoch 5/500
----------
Epoch 5 Training Loss: 198.61493769454472 Validation Loss: 233.7503859202067 Train Acc:41.53656695665991 Validation Acc:25.0
Epoch 6/500
----------
Epoch 6 Training Loss: 198.71323942956585 Validation Loss: 234.17176149830675 Train Acc:41.639852222619474 Validation Acc:25.369399428026693
Epoch 7/500
----------
Epoch 7 Training Loss: 199.9395153770592 Validation Loss: 234.1744423635078 Train Acc:40.98041552456998 Validation Acc:24.84509056244042
Epoch 8/500
----------
Epoch 8 Training Loss: 199.3533399020355 Validation Loss: 235.4645173188412 Train Acc:41.26643626107337 Validation Acc:24.165872259294567
Epoch 9/500
----------
Epoch 9 Training Loss: 199.6451746921249 Validation Loss: 233.33387595956975 Train Acc:40.96452548365312 Validation Acc:24.59485224022879
Epoch 10/500
----------
Epoch 10 Training Loss: 197.9305159737011 Validation Loss: 233.76405122063377 Train Acc:41.8782028363723 Validation Acc:24.6186844613918
Epoch 11/500
----------
Epoch 11 Training Loss: 199.33247244055502 Validation Loss: 234.41085289463854 Train Acc:41.59218209986891 Validation Acc:25.119161105815063
Epoch 12/500
----------
Epoch 12 Training Loss: 199.87399289874256 Validation Loss: 234.23621463775635 Train Acc:41.028085647320545 Validation Acc:24.49952335557674
Epoch 13/500
----------
Epoch 13 Training Loss: 198.85540591944292 Validation Loss: 234.33149099349976 Train Acc:41.206848607635166 Validation Acc:24.857006673021925
Epoch 14/500
----------
Epoch 14 Training Loss: 199.92641723337513 Validation Loss: 233.37722391070741 Train Acc:41.15520597465539 Validation Acc:24.988083889418494
Epoch 15/500
----------
Epoch 15 Training Loss: 197.82172771698328 Validation Loss: 234.4943131533536 Train Acc:41.69943987605768 Validation Acc:24.380362249761678
| My problem was with model.train(). This call should be inside the training loop, but in my case I put it outside the loop, so once model.eval() was reached the model stayed in eval mode for the rest of training.
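A minimal sketch of the corrected structure (same variable names as in the question's code):
for epoch in range(num_epochs):
    model.train()              # re-enable training behaviour every epoch
    for x, y in train_loader:
        ...                    # forward, loss, backward, optimizer.step()
    model.eval()               # evaluation mode only for validation
    with torch.no_grad():
        for x_val, y_val in test_loader:
            ...                # compute validation loss / accuracy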
| https://stackoverflow.com/questions/67451311/ |
Pytorch semantic segmentation loss function | I'm new to segmentation models.
I would like to use the deeplabv3_resnet50 model.
My image has shape (256, 256, 3) and my label has shape (256, 256). Each pixel in my label has a class value(0-4). And the batch size set in the DataLoader is 32.
Therefore, the shape of my input batch is [32, 3, 256, 256] and the shape of corresponding target is [32, 256, 256]. I believe this is correct.
I was trying to use nn.BCEWithLogitsLoss().
Is this the correct loss function for my case? Or should I use
CrossEntropy instead?
If this is the right one, the output of my model is [32, 5, 256, 256]. Each image prediction has the shape [5, 256, 256]; does channel 0 mean the unnormalized probabilities of class 0? In order to make a [32, 256, 256] tensor match the target to feed into BCEWithLogitsLoss, do I need to transform the unnormalized probabilities to classes?
If I should use CrossEntropy, what the size of my output and label should be?
Thank you everyone.
| You are using the wrong loss function.
nn.BCEWithLogitsLoss() stands for Binary Cross-Entropy loss: that is a loss for Binary labels. In your case, you have 5 labels (0..4).
You should be using nn.CrossEntropyLoss: a loss designed for discrete labels, beyond the binary case.
Your model should output a tensor of shape [32, 5, 256, 256]: for each pixel in the 32 images of the batch, it should output a 5-dim vector of logits. The logits are the "raw" scores for each class, to be later normalized into class probabilities using the softmax function.
For numerical stability and computational efficiency, nn.CrossEntropyLoss does not require you to explicitly compute the softmax of the logits, but does it internally for you. As the documentation reads:
This criterion combines LogSoftmax and NLLLoss in one single class.
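A minimal shape check (random placeholder values, just to illustrate the expected sizes):
import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()
logits = torch.randn(32, 5, 256, 256)          # model output: per-pixel class logits
target = torch.randint(0, 5, (32, 256, 256))   # per-pixel class indices in [0, 4]
loss = criterion(logits, target.long())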
| https://stackoverflow.com/questions/67451818/ |
How can I more efficiently multiply every element in a batch of tensors with every other batch element, except for itself? | So, I have this code that multiplies every element in a batch of tensors with every other element, except for itself. The code works, but it becomes painfully slow with larger batch sizes (Ideally I want to be able to use it with batch sizes of up to 1000 or more, but even a couple hundred is okay). It basically freezes when using the PyTorch autograd system and large batch sizes (like 50 or greater).
I need help making the code faster and more efficient, while still getting the same output. Any help would be appreciated!
import torch
tensor = torch.randn(50, 512, 512)
batch_size = tensor.size(0)
list1 = []
for i in range(batch_size):
list2 = []
for j in range(batch_size):
if j != i:
x_out = (tensor[i] * tensor[j]).sum()
list2.append(x_out )
list1.append(sum(list2))
out = sum(list1)
I thought that torch.prod might be able to be used, but it doesn't seem to result in the same output as the code above. NumPy answers are acceptable as long as they can be recreated in PyTorch.
| You could do the following:
import torch
tensor = torch.randn(50, 512, 512)
batch_size = tensor.size(0)
tensor = tensor.reshape(batch_size, -1)
prod = torch.matmul(tensor, tensor.transpose(0,1))
out = torch.sum(prod) - torch.trace(prod)
Here, you first flatten each element. Then, you multiply the matrix where each row is an element with its own transpose, which gives a batch_size x batch_size matrix, where the ijth element equals the product of tensor[i] with tensor[j]. So, summing up over the values in this matrix and subtracting its trace (i.e., sum of diagonal elements) gives the desired result.
I tried both methods with a batch_size of 1000, and the time taken dropped from 61.43s to 0.59s.
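If you want to convince yourself the two computations agree, here is a quick check on a small batch (a sketch):
small = torch.randn(5, 8, 8)
flat = small.reshape(5, -1)
prod = flat @ flat.transpose(0, 1)
fast = prod.sum() - torch.trace(prod)
slow = sum((small[i] * small[j]).sum() for i in range(5) for j in range(5) if i != j)
print(torch.allclose(fast, slow))  # should print True, up to floating point tolerance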
| https://stackoverflow.com/questions/67452064/ |
Is there a way to load torchvision model by string? | Currently, I load pretrained torchvision model using following code:
import torchvision
torchvision.models.resnet101(pretrained=True)
However, I'd love to have model name as string parameter and then load the pretrained model using that string. A pseudo-code that would do so would be something like:
model_name = 'resnet101'
torchvision.models.get(model_name)(pretrained=True)
Is there a way to accomplish this in a rather simple manner?
| You can use getattr
getattr(torchvision.models, 'resnet101')(pretrained=True)
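A small sketch of a helper built around getattr (the function name load_model is just an example) that fails clearly on unknown names:
import torchvision

def load_model(name: str, pretrained: bool = True):
    if not hasattr(torchvision.models, name):
        raise ValueError(f"Unknown torchvision model: {name}")
    return getattr(torchvision.models, name)(pretrained=pretrained)

model = load_model("resnet101")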
| https://stackoverflow.com/questions/67454884/ |
pytorch (cpu only) on osx fails with symbol not found | I am trying to get started with PyTorch - on a mac osx computer. However, basic steps fail:
from torch_sparse import coalesce, SparseTensor
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-1-dad8246d5249> in <module>
----> 1 from torch_sparse import coalesce, SparseTensor
/usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/site-packages/torch_sparse/__init__.py in <module>
10 '_saint', '_padding'
11 ]:
---> 12 torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
13 library, [osp.dirname(__file__)]).origin)
14
/usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/site-packages/torch/_ops.py in load_library(self, path)
102 # static (global) initialization code in order to register custom
103 # operators with the JIT.
--> 104 ctypes.CDLL(path)
105 self.loaded_libraries.add(path)
106
/usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
371
372 if handle is None:
--> 373 self._handle = _dlopen(self._name, mode)
374 else:
375 self._handle = handle
OSError: dlopen(/usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/site-packages/torch_sparse/_version.so, 6): Symbol not found: __ZN3c105ErrorC1ENS_14SourceLocationERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE
Referenced from: /usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/site-packages/torch_sparse/_version.so
Expected in: flat namespace
in /usr/local/Caskroom/miniconda/base/envs/my_conda_env/lib/python3.8/site-packages/torch_sparse/_version.so
I am using a conda environment of:
name: my_conda_env
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- python>=3.8
- pytorch
- pytorch_geometric
and instantiated it using:
conda env create --force -f environment.yml
| https://github.com/rusty1s/pytorch_sparse/issues/135
Indeed, https://github.com/conda-forge/pytorch_sparse-feedstock/issues/13 is the problem
The manual installation of pytorch geometric and its dependencies such as pytorch sparse using pip https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html is a currently suitable workaround
| https://stackoverflow.com/questions/67455890/ |
Pytorch getting RuntimeError: Found dtype Double but expected Float | I am trying to implement a neural net in PyTorch but it doesn't seem to work. The problem seems to be in the training loop. I've spent several hours on this but can't get it right. Please help, thanks.
I haven't added the data preprocessing parts.
# importing libraries
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import torch.nn.functional as F
# get x function (dataset related stuff)
def Getx(idx):
sample = samples[idx]
vector = Calculating_bottom(sample)
vector = torch.as_tensor(vector, dtype = torch.float64)
return vector
# get y function (dataset related stuff)
def Gety(idx):
y = np.array(train.iloc[idx, 4], dtype = np.float64)
y = torch.as_tensor(y, dtype = torch.float64)
return y
# dataset
class mydataset(Dataset):
def __init__(self):
super().__init__()
def __getitem__(self, index):
x = Getx(index)
y = Gety(index)
return x, y
def __len__(self):
return len(train)
dataset = mydataset()
# sample dataset value
print(dataset.__getitem__(0))
(tensor([ 5., 5., 8., 14.], dtype=torch.float64), tensor(-0.3403, dtype=torch.float64))
# data-loader
dataloader = DataLoader(dataset, batch_size = 1, shuffle = True)
# nn architecture
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(4, 4)
self.fc2 = nn.Linear(4, 2)
self.fc3 = nn.Linear(2, 1)
def forward(self, x):
x = x.float()
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
# device
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
# hyper-parameters
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
# training loop
for epoch in range(5):
for batch in dataloader:
# unpacking
x, y = batch
x.to(device)
y.to(device)
# reset gradients
optimizer.zero_grad()
# forward propagation through the network
out = model(x)
# calculate the loss
loss = criterion(out, y)
# backpropagation
loss.backward()
# update the parameters
optimizer.step()
Error:
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py:446: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-3f68fcee9ff3> in <module>
20
21 # backpropagation
---> 22 loss.backward()
23
24 # update the parameters
/opt/conda/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
RuntimeError: Found dtype Double but expected Float
| You need the data type of the data to match the data type of the model.
Either convert the model to double (recommended for simple nets with no serious performance problems such as yours)
# nn architecture
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(4, 4)
self.fc2 = nn.Linear(4, 2)
self.fc3 = nn.Linear(2, 1)
self.double()
or convert the data to float.
class mydataset(Dataset):
def __init__(self):
super().__init__()
def __getitem__(self, index):
x = Getx(index)
y = Gety(index)
return x.float(), y.float()
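Unrelated to the dtype error, the size warning in your traceback comes from comparing a [1, 1] output with a [1] target; one possible way to silence it (a sketch) is to match the shapes explicitly in the training loop:
loss = criterion(out, y.unsqueeze(1))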
| https://stackoverflow.com/questions/67456368/ |
How to replace torch.norm with other pytorch function? | I would like to replace the torch.norm function using other PyTorch functions.
I was able to replace torch.norm in the case where x is not a matrix, as shown in the following code.
import torch
x = torch.randn(9)
out1 = torch.norm(x)
out2 = sum(abs(x)**2)**(1./2)
out1 == out2
>> tensor(True)
But I don't know how to replace it when x is a matrix.
Especially, I want to replace it in my case of dim=1 and keepdim=True.
x = torch.randn([3, 136, 64, 64])
out1 = torch.norm(x, dim=1, keepdim=True)
out2 = ???
out1 == out2
Background:
I'm converting a Pytorch model to CoreML, but the _VF.frobenius_norm operator defined in the torch.norm function is not implemented with CoreMLTools.
(The implementation inside torch.norm can be found here.)
A few people have trouble with this problem, but CoreMLTools is still unsupported (You can check from this issue).
So I'd like to replace it without the operator used in torch.norm.
I have tried torch.linalg.norm() and numpy.linalg.norm but they were not supported.
I have created a simple colaboratory notebook that reproduces this.
Please test it using the following colab.
https://colab.research.google.com/drive/11o6rTxHzEgZ_Rc7nFZHd3TvPugybB88h?usp=sharing
| You could try the following:
import torch
x = torch.randn([3, 136, 64, 64])
out1 = torch.norm(x, dim=1, keepdim=True)
out2 = torch.square(x).sum(dim=1, keepdim=True).sqrt()
Note that out1 == out2 won't give exactly all True due to small errors in precision. You can check that the errors are in the order of 1e-7 for float32.
Here, the norm is computed directly using its mathematical definition. You can see this reference from Wolfram MathWorld for more details.
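To compare the two results despite those precision differences, a tolerance-based check can be used instead of ==, for example:
print(torch.allclose(out1, out2, atol=1e-6))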
| https://stackoverflow.com/questions/67456870/ |