instruction | input | output | source
---|---|---|---|
How to get a specific parameter matrix (tensor) of a model by its name in pytorch? | I have trained my model in pytorch. Now I want to extract a specific parameter tensor by its name (not all tensors). How can I do that?
print(myModel.parameters)
OUTPUT:
<bound method Module.parameters of ANN(
(fc1): Linear(in_features=2, out_features=4, bias=True)
(fc2): Linear(in_features=4, out_features=1, bias=True)
)>
For example, I only want to get fc1.weight and fc1.bias.
For example, I only want to get fc1.weight and fc1.bias. | You can access sub-modules with dot notation directly, while parameters() returns an iterator over all tensors without providing their names:
>>> myModel.fc1.weight
>>> myModel.fc1.bias
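If you want to look a parameter tensor up by its fully qualified name instead, a minimal sketch using named_parameters (which does provide the names):
>>> params = dict(myModel.named_parameters())
>>> params['fc1.weight']
>>> params['fc1.bias']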
| https://stackoverflow.com/questions/73416801/ |
How do I create a custom handler in torchserve? | I am trying to create a custom handler on Torchserve.
The custom handler has been modified as follows
# custom handler file
# model_handler.py
"""
ModelHandler defines a custom model handler.
"""
import os
import numpy as np
import soundfile
import torch
from espnet2.bin.enh_inference import *
from ts.torch_handler.base_handler import BaseHandler
class ModelHandler(BaseHandler):
"""
A custom model handler implementation.
"""
def __init__(self):
self._context = None
self.initialized = False
self.model = None
self.device = None
def initialize(self, context):
"""
Invoke by torchserve for loading a model
:param context: context contains model server system properties
:return:
"""
# load the model
self.manifest = context.manifest
properties = context.system_properties
model_dir = properties.get("model_dir")
self.device = torch.device("cuda:" + str(properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")
# Read model serialize/pt file
serialized_file = self.manifest['model']['serializedFile']
model_pt_path = os.path.join(model_dir, serialized_file)
if not os.path.isfile(model_pt_path):
raise RuntimeError("Missing the model.pt file")
self.model = SeparateSpeech("train_enh_transformer_tf.yaml", "valid.loss.best.pth")
self.initialized = True
def preprocess(self,data):
audio_data, rate = soundfile.read(data)
preprocessed_data = audio_data[np.newaxis, :]
return preprocessed_data
def inference(self, model_input):
model_output = self.model(model_input)
return model_output
def postprocess(self, inference_output):
"""
Return inference result.
:param inference_output: list of inference output
:return: list of predict results
"""
# Take output from network and post-process to desired format
postprocess_output = inference_output
# convert to wav
return postprocess_output
def handle(self, data, context):
model_input = self.preprocess(data)
model_output = self.inference(model_input)
return self.postprocess(model_output)
The torchserve appears to be working.
The Torchserve logs are as follows
root@5c780ba74916:~/files# torchserve --start --ncs --model-store model_store --models denoise_transformer=denoise_transformer.mar
root@5c780ba74916:~/files# WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-24T14:06:06,662 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2022-08-24T14:06:06,796 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.6.0
TS Home: /usr/local/lib/python3.8/dist-packages
Current directory: /root/files
Temp directory: /tmp
Number of GPUs: 1
Number of CPUs: 16
Max heap size: 4002 M
Python executable: /usr/bin/python3
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: /root/files/model_store
Initial Models: denoise_transformer=denoise_transformer.mar
Log dir: /root/files/logs
Metrics dir: /root/files/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
Workflow Store: /root/files/model_store
Model config: N/A
2022-08-24T14:06:06,805 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2022-08-24T14:06:06,817 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: denoise_transformer.mar
2022-08-24T14:06:07,006 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model denoise_transformer
2022-08-24T14:06:07,007 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model denoise_transformer
2022-08-24T14:06:07,007 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model denoise_transformer loaded.
2022-08-24T14:06:07,007 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: denoise_transformer, count: 1
2022-08-24T14:06:07,015 [DEBUG] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/usr/bin/python3, /usr/local/lib/python3.8/dist-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000]
2022-08-24T14:06:07,016 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2022-08-24T14:06:07,059 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2022-08-24T14:06:07,059 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2022-08-24T14:06:07,060 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2022-08-24T14:06:07,060 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2022-08-24T14:06:07,062 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.
2022-08-24T14:06:07,258 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2022-08-24T14:06:07,363 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2022-08-24T14:06:07,364 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - [PID]6258
2022-08-24T14:06:07,364 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Torch worker started.
2022-08-24T14:06:07,365 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Python runtime: 3.8.10
2022-08-24T14:06:07,365 [DEBUG] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-denoise_transformer_1.0 State change null -> WORKER_STARTED
2022-08-24T14:06:07,368 [INFO ] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2022-08-24T14:06:07,374 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2022-08-24T14:06:07,376 [INFO ] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1661317567376
2022-08-24T14:06:07,398 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - model_name: denoise_transformer, batchSize: 1
2022-08-24T14:06:07,596 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,596 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:220.49971389770508|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,597 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:17.66714859008789|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,597 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:7.4|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,597 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUtilization.Percent:17.9931640625|#Level:Host,device_id:0|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,597 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUsed.Megabytes:1474|#Level:Host,device_id:0|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,598 [INFO ] pool-3-thread-1 TS_METRICS - GPUUtilization.Percent:8|#Level:Host,device_id:0|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,598 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:14307.53515625|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,598 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:1372.1640625|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:07,598 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:10.6|#Level:Host|#hostname:5c780ba74916,timestamp:1661317567
2022-08-24T14:06:08,306 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - encoder self-attention layer type = self-attention
2022-08-24T14:06:08,328 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Eps is deprecated in si_snr loss, set clamp_db instead.
2022-08-24T14:06:08,373 [INFO ] W-9000-denoise_transformer_1.0-stdout MODEL_LOG - Perform direct speech enhancement on the input
2022-08-24T14:06:08,390 [INFO ] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 992
2022-08-24T14:06:08,391 [DEBUG] W-9000-denoise_transformer_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-denoise_transformer_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2022-08-24T14:06:08,391 [INFO ] W-9000-denoise_transformer_1.0 TS_METRICS - W-9000-denoise_transformer_1.0.ms:1380|#Level:Host|#hostname:5c780ba74916,timestamp:1661317568
2022-08-24T14:06:08,391 [INFO ] W-9000-denoise_transformer_1.0 TS_METRICS - WorkerThreadTime.ms:23|#Level:Host|#hostname:5c780ba74916,timestamp:1661317568
I sent the wav file to torchserve with the following command
curl http://127.0.0.1:8080/predictions/denoise_transformer -T Mix.wav
However, the following error was returned
<HTML><HEAD>
<TITLE>Request Error</TITLE>
</HEAD>
<BODY>
<FONT face="Helvetica">
<big><strong></strong></big><BR>
</FONT>
<blockquote>
<TABLE border=0 cellPadding=1 width="80%">
<TR><TD>
<FONT face="Helvetica">
<big>Request Error (invalid_request)</big>
<BR>
<BR>
</FONT>
</TD></TR>
<TR><TD>
<FONT face="Helvetica">
Your request could not be processed. Request could not be handled
</FONT>
</TD></TR>
<TR><TD>
<FONT face="Helvetica">
This could be caused by a misconfiguration, or possibly a malformed request.
</FONT>
</TD></TR>
<TR><TD>
<FONT face="Helvetica" SIZE=2>
<BR>
For assistance, contact your network support team.
</FONT>
</TD></TR>
</TABLE>
</blockquote>
</FONT>
</BODY></HTML>
Is there something wrong somewhere?
Best regards.
| Hi, here's an example, but it seems the problem is how you receive and parse the data:
from typing import Dict, List, Tuple
import numpy as np
import soundfile as sf
from io import BytesIO
from ts.torch_handler.base_handler import BaseHandler
import torch
from model import Model
class SoundModelHandler(BaseHandler):
def __init__(self):
super(SoundModelHandler, self).__init__()
self.initialized = False
def preproc_one_wav(self, req: Dict[str, bytearray]) -> Tuple[np.ndarray, int]:
"""
Function to convert request data to a waveform
:param req: request dict containing the raw wav bytes
:return: np array of the waveform and its sample rate
"""
wav_byte = req.get("data")
if wav_byte is None:
wav_byte = req.get('body')
# create a stream from the encoded image
wav, sr = sf.read(BytesIO(wav_byte))
return wav, sr
def initialize(self, context):
"""
Invoked by torchserve for loading a model
:param context: context contains model server system properties
:return:
"""
self.manifest = context.manifest
properties = context.system_properties
model_dir = properties.get("model_dir")
self._context = context
self.model = Model()
self.model.load_state_dict(torch.load(model_dir + 'weights.pt'))
self.initialized = True
self.device = torch.device(
"cuda:" + str(properties.get("gpu_id"))
if torch.cuda.is_available() and properties.get("gpu_id") is not None
else "cpu"
)
def preprocess(self, requests: List[Dict[str, bytearray]]):
"""
Function to prepare data from the model
:param requests:
:return: tensor of the processed shape specified by the model
"""
batch_crop = [self.preproc_one_wav(req) for req in requests]
# Do something here if you want return as torch tensor
# You can apply torch cat here as well
batch = torch.cat(batch_crop)
return batch
def inference(self, model_input: torch.Tensor):
"""
Given the data from .preprocess, perform inference using the model.
:param model_input:
:return: Logits or predictions given by the model
"""
with torch.no_grad():
generated_ids = self.model.generate(model_input.to(self.device))
return generated_ids
Btw, here's an example of how you can make a request to your model in Python
import json
import requests
sample = {'data': wav_in_byte}
results = requests.post('ip:port', data=json.dumps(sample))
# parse results
get_results = results.json()
| https://stackoverflow.com/questions/73424388/ |
RuntimeError: mse_cuda not implemented for Long when training a transformer.Trainer | I'm attempting to train a HuggingFace Trainer but seeing the following error:
RuntimeError: "mse_cuda" not implemented for 'Long' when training a transformer.Trainer
I've tried this in multiple cloud environments (CPU & GPU) with no luck. The dataset (tok_dds) is of the following shape and type, and I've ensured there are no NULL values.
Dataset({
features: ['label', 'title', 'text', 'input', 'input_ids', 'token_type_ids', 'attention_mask'],
num_rows: 5000
})
{'label': int,
'title': str,
'text': str,
'input': str,
'input_ids': list,
'token_type_ids': list,
'attention_mask': list}
I have defined my loss functions as below:
def corr(x,y): return np.corrcoef(x,y)[0][1]
def corr_d(eval_pred): return {'pearson': corr(*eval_pred)}
However, when attempting to train model_nm = 'microsoft/deberta-v3-small' on the train/test split of my dataset, I see the following error:
dds = tok_ds.train_test_split(0.25, seed=42)
tokz = AutoTokenizer.from_pretrained(model_nm)
model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=1)
trainer = Trainer(model, args, train_dataset=dds['train'], eval_dataset=dds['test'],
tokenizer=tokz, compute_metrics=corr_d)
...
...
File /shared-libs/python3.9/py/lib/python3.9/site-packages/torch/nn/functional.py:3280, in mse_loss(input, target, size_average, reduce, reduction)
3277 reduction = _Reduction.legacy_get_string(size_average, reduce)
3279 expanded_input, expanded_target = torch.broadcast_tensors(input, target)
-> 3280 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: "mse_cuda" not implemented for 'Long' when training a transformer.Trainer
Here are the args passed into the Trainer if it's relevant:
args = TrainingArguments('outputs', learning_rate=lr, warmup_ratio=0.1, lr_scheduler_type='cosine', fp16=True,
evaluation_strategy="epoch", per_device_train_batch_size=bs, per_device_eval_batch_size=bs*2,
num_train_epochs=epochs, weight_decay=0.01, report_to='none')
Here's is what I think may be relevant environment information
!python --version
Python 3.9.13
!pip list
Package Version
----------------------------- ------------
...
transformers 4.21.1
huggingface-hub 0.8.1
pandas 1.2.5
protobuf 3.19.4
scikit-learn 1.1.1
tensorflow 2.9.1
torch 1.12.0
Can anyone point me in the right direction to solve this problem?
| Changing the datatype of the labels column from int to float solved this issue for me. If your Dataset is from a pandas DataFrame, you can change the datatype of the column before passing the dataframe to a Dataset.
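As a sketch, assuming your labels sit in a pandas DataFrame df with a column named label (as in the dataset above), the conversion could look like this:
from datasets import Dataset

df['label'] = df['label'].astype('float32')  # MSE loss needs float targets, not Long
tok_ds = Dataset.from_pandas(df)             # rebuild the datasets.Dataset from the fixed frame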
| https://stackoverflow.com/questions/73428120/ |
"SystemError: google/protobuf/pyext/descriptor.cc:358: bad argument to internal function" while using Audio Transformers in Hugging Face | I am trying to do a task of "Speech2Text" using transformer model in Hugging Face.
I tried the code in this documentation on hugging face
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
transcription
but when I tried to run this code in Google Colab I am receiving the following error :
SystemError: google/protobuf/pyext/descriptor.cc:358: bad argument to internal function
On checking the other error lines, it seems that when calling processor(), return_tensors is None even though it is specified as "pt". Because of that, the code imports tensorflow, and that is where the error comes from (known issue).
Full error message :
SystemError Traceback (most recent call last)
<ipython-input-4-2a3231ef630c> in <module>
9 ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
10
---> 11 inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
12
13 generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
10 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/speech_to_text/processing_speech_to_text.py in __call__(self, *args, **kwargs)
51 information.
52 """
---> 53 return self.current_processor(*args, **kwargs)
54
55 def batch_decode(self, *args, **kwargs):
/usr/local/lib/python3.7/dist-packages/transformers/models/speech_to_text/feature_extraction_speech_to_text.py in __call__(self, raw_speech, padding, max_length, truncation, pad_to_multiple_of, return_tensors, sampling_rate, return_attention_mask, **kwargs)
230 pad_to_multiple_of=pad_to_multiple_of,
231 return_attention_mask=return_attention_mask,
--> 232 **kwargs,
233 )
234
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_sequence_utils.py in pad(self, processed_features, padding, max_length, truncation, pad_to_multiple_of, return_attention_mask, return_tensors)
161
162 if return_tensors is None:
--> 163 if is_tf_available() and _is_tensorflow(first_element):
164 return_tensors = "tf"
165 elif is_torch_available() and _is_torch(first_element):
/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py in _is_tensorflow(x)
96
97 def _is_tensorflow(x):
---> 98 import tensorflow as tf
99
100 return isinstance(x, tf.Tensor)
/usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py in <module>
35 import typing as _typing
36
---> 37 from tensorflow.python.tools import module_util as _module_util
38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
39
/usr/local/lib/python3.7/dist-packages/tensorflow/python/__init__.py in <module>
35
36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
---> 37 from tensorflow.python.eager import context
38
39 # pylint: enable=wildcard-import
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py in <module>
27 import six
28
---> 29 from tensorflow.core.framework import function_pb2
30 from tensorflow.core.protobuf import config_pb2
31 from tensorflow.core.protobuf import coordination_config_pb2
/usr/local/lib/python3.7/dist-packages/tensorflow/core/framework/function_pb2.py in <module>
14
15
---> 16 from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
17 from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
18 from tensorflow.core.framework import op_def_pb2 as tensorflow_dot_core_dot_framework_dot_op__def__pb2
/usr/local/lib/python3.7/dist-packages/tensorflow/core/framework/attr_value_pb2.py in <module>
14
15
---> 16 from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
/usr/local/lib/python3.7/dist-packages/tensorflow/core/framework/tensor_pb2.py in <module>
14
15
---> 16 from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
/usr/local/lib/python3.7/dist-packages/tensorflow/core/framework/resource_handle_pb2.py in <module>
148 ,
149 'DESCRIPTOR' : _RESOURCEHANDLEPROTO,
--> 150 '__module__' : 'tensorflow.core.framework.resource_handle_pb2'
151 # @@protoc_insertion_point(class_scope:tensorflow.ResourceHandleProto)
152 })
SystemError: google/protobuf/pyext/descriptor.cc:358: bad argument to internal function
here's my colab link for reference
Let me know what can be done to resolve this error
Thank you
| import tensorflow as tf
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
........
Import the tensorflow lib first, even if you are not using it, before importing any torch libraries. I don't know the exact reason, but after importing the lib the code works on the notebook you have shared.
Refer to these links:
https://github.com/tensorflow/tensorflow/issues/48797
torchvision and tensorflow-gpu import error
| https://stackoverflow.com/questions/73433868/ |
ValueError: base_distribution needs to have shape with size at least 6, but got torch.Size([6]) | I have the following architecture for my neural network
import math

import torch
import torch.distributions as pyd
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.distributions import Normal
from torch.distributions import transforms as tT
from torch.distributions.transformed_distribution import TransformedDistribution
LOG_STD_MIN = -5
LOG_STD_MAX = 0
class TanhTransform(pyd.transforms.Transform):
domain = pyd.constraints.real
codomain = pyd.constraints.interval(-1.0, 1.0)
bijective = True
sign = +1
def __init__(self, cache_size=1):
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
super().__init__(cache_size=cache_size)
@staticmethod
def atanh(x):
return 0.5 * (x.log1p() - (-x).log1p())
def __eq__(self, other):
return isinstance(other, TanhTransform)
def _call(self, x):
return x.tanh()
def _inverse(self, y):
return self.atanh(y.clamp(-0.99, 0.99))
def log_abs_det_jacobian(self, x, y):
return 2.0 * (math.log(2.0) - x - F.softplus(-2.0 * x))
def get_spec_means_mags(spec):
means = (spec.maximum + spec.minimum) / 2.0
mags = (spec.maximum - spec.minimum) / 2.0
means = Variable(torch.tensor(means).type(torch.FloatTensor), requires_grad=False)
mags = Variable(torch.tensor(mags).type(torch.FloatTensor), requires_grad=False)
return means, mags
class Split(torch.nn.Module):
def __init__(self, module, n_parts: int, dim=1):
super().__init__()
self._n_parts = n_parts
self._dim = dim
self._module = module
def forward(self, inputs):
output = self._module(inputs)
if output.ndim==1:
result=torch.hsplit(output, self._n_parts )
else:
chunk_size = output.shape[self._dim] // self._n_parts
result =torch.split(output, chunk_size, dim=self._dim)
return result
class Network(nn.Module):
def __init__(
self,
state,
act,
fc_layer_params=(),
):
super(Network, self).__init__()
self._act = act
self._layers = nn.ModuleList()
for hidden_size in fc_layer_params:
if len(self._layers)==0:
self._layers.append(nn.Linear(state.shape[0], hidden_size))
else:
self._layers.append(nn.Linear(hidden_size, hidden_size))
self._layers.append(nn.ReLU())
output_layer = nn.Linear(hidden_size,self._act.shape[0] * 2)
self._layers.append(output_layer)
self._act_means, self._act_mags = get_spec_means_mags(
self._act)
def _get_outputs(self, state):
h = state
for l in nn.Sequential(*(list(self._layers.children())[:-1])):
h = l(h)
self._mean_logvar_layers = Split(
self._layers[-1],
n_parts=2,
)
mean, log_std = self._mean_logvar_layers(h)
a_tanh_mode = torch.tanh(mean) * self._action_mags + self._action_means
log_std = torch.tanh(log_std).to(device=self.device)
log_std = LOG_STD_MIN + 0.5 * (LOG_STD_MAX - LOG_STD_MIN) * (log_std + 1)
std = torch.exp(log_std)
a_distribution = TransformedDistribution(
base_distribution=Normal(loc=torch.full_like(mean, 0).to(device=self.device),
scale=torch.full_like(mean, 1).to(device=self.device)),
transforms=tT.ComposeTransform([
tT.AffineTransform(loc=self._action_means, scale=self._action_mags, event_dim=mean.shape[-1]),
TanhTransform(),
tT.AffineTransform(loc=mean, scale=std, event_dim=mean.shape[-1])]))
return a_distribution, a_tanh_mode
def get_log_density(self, state, action):
a_dist, _ = self._get_outputs(state)
log_density = a_dist.log_prob(action)
return log_density
def __call__(self, state):
a_dist, a_tanh_mode = self._get_outputs(state)
a_sample = a_dist.sample()
log_pi_a = a_dist.log_prob(a_sample)
return a_tanh_mode, a_sample, log_pi_a
When I run my code I get this error message:
action = self._a_network(latent_states)[1]
File "/home/planner_regularizer.py", line 182, in __call__
a_dist, a_tanh_mode = self._get_outputs(state.to(device=self.device))
File "/home/planner_regularizer.py", line 159, in _get_outputs
a_distribution = TransformedDistribution(
File "/home/dm_control/lib/python3.8/site-packages/torch/distributions/transformed_distribution.py", line 61, in __init__
raise ValueError("base_distribution needs to have shape with size at least {}, but got {}."
ValueError: base_distribution needs to have shape with size at least 6, but got torch.Size([6]).
How can I fix this error message?
Update: if I remove event_dim from AffineTransform, I don't get the above error, but then the output of log_prob has size 1, which is not correct. Any suggestions?
| The error is telling you exactly what the problem is: TransformedDistribution expects the base distribution's shape to have length at least 6 (the event_dim you pass to AffineTransform is mean.shape[-1], which is 6 here), but you are passing a Normal distribution whose shape is torch.Size([6]), i.e. a single dimension of size 6.
This minimum length requirement exists because TransformedDistribution applies affine transforms, which require at least 2 dimensions:
1 for the batch_shape
1 for the event coordinates being transformed
Simply construct your Normal distribution with more dimensions, e.g.
Normal(loc=torch.zeros(1, 6), scale=torch.ones(1, 6))
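As a side note (my addition, not part of the original answer): you can inspect how PyTorch splits a distribution's shape into batch and event dimensions, and Independent is one way to move dimensions into the event shape:
base = Normal(loc=torch.zeros(1, 6), scale=torch.ones(1, 6))
print(base.batch_shape, base.event_shape)                    # torch.Size([1, 6]) torch.Size([])
print(torch.distributions.Independent(base, 1).event_shape)  # torch.Size([6])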
| https://stackoverflow.com/questions/73445876/ |
Change dtype of weights for pytorch pretrained model | I am using the YOLOv7 model. The pretrained weights that are shared are optimised and stored in float16 dtype.
How can I convert the dtype of the parameters of a model in PyTorch? I want to convert the weights to float32.
weights = torch.load('yolov7-mask.pt')
model = weights['model']
| Load weights to your model and just call .float().
example:
cp = torch.load('yolov7-mask.pt')
model.load_state_dict(cp['weight'])
model = model.float()
It'll work if the model's class is nn.Module. (Checked for torch version 1.8)
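Alternatively, since the YOLOv7 checkpoint in the question already stores the full model object under the 'model' key, a sketch of casting that object directly:
weights = torch.load('yolov7-mask.pt')
model = weights['model'].float()  # casts all floating-point parameters and buffers to float32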
| https://stackoverflow.com/questions/73454134/ |
Get torch parameter name from layer | If I have a model with a linear layer lin = nn.Linear(out_dim, in_dim) then lin.named_parameters() produces a sequence something like [('weight',Tensor), ('bias',Tensor)]
But if I run model.named_parameters() the sequence is [('lin.weight',Tensor), ('lin.bias',Tensor)].
Is it possible to get the full name of the tensor from the layer? i.e. the name it has inside the root module?
| Modules containing other modules will just prepend their own name, separated by a period ., to the names of the modules they contain, so to get the "final" names you just have to split off the last part:
import torch
a = torch.nn.Linear(1, 1)
a.b = torch.nn.Linear(1, 1)
a.b.c = torch.nn.Linear(1, 1)
for (fullname, param) in a.named_parameters():
name = fullname.split('.')[-1]
print(name, '\t', fullname)
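Going the other way, from a layer object to its full name inside the root module, a small sketch using named_modules:
module_to_name = {module: name for name, module in a.named_modules()}
print(module_to_name[a.b.c])  # 'b.c'
# full parameter names are then f'{module_to_name[layer]}.{param_name}'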
| https://stackoverflow.com/questions/73456102/ |
What do the output of pytorch lightning predicts mean? | Could someone explain to me where I can find out what the output of pytorch lightning prediction tensors mean?
I have this code:
#Predicting
path = analysis.best_checkpoint + '/' + "ray_ckpt"
model = GraphLevelGNN.load_from_checkpoint(path)
model.eval()
trainer = pl.Trainer()
test_result = trainer.test(model, graph_test_loader, verbose=False)
print(test_result)
##[{'test_acc': 0.65625, 'test_f1': 0.7678904428904428, 'test_precision': 1.0, 'test_recall': 0.65625}]
predictions = trainer.predict(model, graph_test_loader)
print(predictions)
And it prints:
[(tensor(0.7582), tensor(0.5000), 0.6666666666666666, 1.0, 0.5), (tensor(0.4276), tensor(0.7500), 0.8571428571428571, 1.0, 0.75), (tensor(0.4436), tensor(0.7500), 0.8571428571428571, 1.0, 0.75), (tensor(0.2545), tensor(1.), 1.0, 1.0, 1.0), (tensor(1.0004), tensor(0.3750), 0.5454545454545454, 1.0, 0.375)]
But I can't seem to understand what these numbers mean? Can someone explain how to get more info?
| Well, in summary: it's the forward pass, which we can define with a predict step. trainer.predict returns a list with one entry per batch, and each entry is whatever your predict_step (by default just forward) returns, so the tuples you are seeing are simply your model's per-batch outputs.
import pytorch_lightning as pl
class LitModel(pl.LightningModule):
def forward(self, inputs):
return self.base_model(inputs)
# Overwrite the predict step
def predict_step(self, batch, batch_idx):
return self(batch)
model = LitModel()
trainer = pl.Trainer()
trainer.predict(model, data) # note data is a dataloader
for a deeper explanation read this:
output prediction of pytorch lightning model
| https://stackoverflow.com/questions/73456852/ |
Why is testing accuracy so low, could there be a bug in my code? | I've been training an image classification model using object detection and then applying image classification to the images. I have 87 custom classes in my data (not ImageNet classes), and just over 7000 images altogether (around 60 images per class). I am happy with my object detection code and I think it works quite well; however, for classification I have been using ResNet and AlexNet. I have tried AlexNet, ResNet18, ResNet50 and ResNet101 for training; however, I am getting very low testing accuracies (around 10%), while my training accuracies are high for all models. I've also attempted regularisation and changing the learning rates, but I am not getting the higher accuracies (>80%) that I require. I wonder if there is a bug in my code, although I haven't been able to figure it out.
Here is my training code, I have also processed images in the way that Pytorch pretrained models expect:
import torch.nn as nn
import torch.optim as optim
from typing import Callable
import numpy as np
EPOCHS=100
resnet = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50')
resnet.eval()
resnet.fc = nn.Linear(2048, 87)
res_loss = nn.CrossEntropyLoss()
res_optimiser = optim.SGD(resnet.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-5)
def train_model(model, loss_fn, optimiser, modelsavepath):
train_acc = 0
for j in range(EPOCHS):
running_loss = 0.0
correct = 0
total = 0
for i, data in enumerate(training_generator, 0):
model.train()
inputs, labels, paths = data
total += 1
optimiser.zero_grad()
outputs = model(inputs)
_, predicted = torch.max(outputs, 1)
if(predicted.int() == labels.int()):
correct += 1
loss = loss_fn(outputs, labels)
loss.backward()
optimiser.step()
running_loss += loss.item()
train_acc = correct / total
print("Epoch:{}/{} AVG Training Loss:{:.3f} AVG Training Acc {:.2f}% ".format(j + 1, EPOCHS, running_loss / len(training_generator), train_acc))
torch.save(model, modelsavepath)
train_model(resnet, res_loss, res_optimiser, 'resnet.pth')
Here is the testing code used for a single image, it is part of a class:
self.model.eval()
outputs = self.model(img[None, ...]) #models expect batches, so give it a singleton batch
scores, predictions = torch.max(outputs, 1)
predictions = predictions.numpy()[0]
possible_scores= np.argmax(scores.detach().numpy())
Is there a bug in my code, either testing or training, or is my model just overfitting? Additionally, is there a better image classification model that I could try?
| Your dataset is very small, so you're most likely overfitting. Try:
decrease learning rate (try 0.001, 0.0001, 0.00001)
increase weight_decay (try 1e-4, 1e-3, 1e-2)
if you don't already, use image augmentations (at least the default ones, like random crop and flip); a minimal sketch is shown after this list.
Watch train/test loss curves when finetuning your model and stop training as soon as you see test accuracy going down while train accuracy goes up.
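For the augmentation point above, a minimal torchvision sketch (the crop size and ImageNet normalization stats below are the usual defaults for pretrained ResNets; adjust to your data):
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])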
| https://stackoverflow.com/questions/73458074/ |
Torchvision model cannot be loaded from storage when no GPU available | I trained a torchvision Mask R-CNN model on GPU and saved it to disk using torch.save(model, model_name). On another machine, without a GPU, I try to load it again using torch.load(model_name). The model cannot be deserialized because torch does not know about device cuda:0.
How can I 'convert' such a model to be used on non-GPU environments?
I assume it is best practice to move a model to CPU before saving it?
| torch.load() has an argument map_location where you can specify the device. So you can use
torch.load(..., map_location='cpu')
or specify any other device to directly load it there.
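For example, a small sketch that loads the checkpoint onto whatever device is available:
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.load(model_name, map_location=device)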
| https://stackoverflow.com/questions/73458229/ |
Why heads share same KQV weights(matrix) in transformer? | self.values = nn.Linear(self.head_dim, self.head_dim, bias=False)
self.keys = nn.Linear(self.head_dim, self.head_dim, bias=False)
self.queries = nn.Linear(self.head_dim, self.head_dim, bias=False)
self.fc_out = nn.Linear(heads * self.head_dim, embed_size)
def forward(self, values, keys, query, mask):
# Get number of training examples
N = query.shape[0]
value_len, key_len, query_len = values.shape[1], keys.shape[1], query.shape[1]
# Split the embedding into self.heads different pieces
values = values.reshape(N, value_len, self.heads, self.head_dim)
keys = keys.reshape(N, key_len, self.heads, self.head_dim)
query = query.reshape(N, query_len, self.heads, self.head_dim)
values = self.values(values) # (N, value_len, heads, head_dim)
keys = self.keys(keys) # (N, key_len, heads, head_dim)
queries = self.queries(query) # (N, query_len, heads, heads_dim)
# Einsum does matrix mult. for query*keys for each training example
# with every other training example, don't be confused by einsum
# it's just how I like doing matrix multiplication & bmm
energy = torch.einsum("nqhd,nkhd->nhqk", [queries, keys])
I have noticed that many implementations of multi-headed attention are similar to the above code.
But I am confused as to why, here, the KQV projections for the different heads seem to be shared.
Is it because in back propagation they receive the same signal?
| It's not shared: the reference implementations project Q/K/V with full embed_size-sized linear layers and then split the result into heads, so each head gets its own slice of the projection weights. See the original implementation (https://github.com/google-research/google-research/blob/6a30ad7a6655fc481ab040ad6e54a92be93a8db3/summae/transformer.py#L73), and also the implementations in huggingface and pytorch:
https://github.com/huggingface/transformers/blob/c55d6e4e10ce2d9c37e5f677f0842b04ef8b73f3/src/transformers/models/bert/modeling_bert.py#L251
https://github.com/pytorch/pytorch/blob/f5bfa4d0888e6cd5984092b38cb8b10609558d05/torch/nn/modules/activation.py#L946
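For illustration, here is a minimal sketch (not the questioner's code) of the non-shared pattern those implementations use: project with full embed_size-sized linear layers, then reshape into heads, so each head ends up with its own slice of the projection weights:
import torch
import torch.nn as nn

embed_size, heads = 256, 8
head_dim = embed_size // heads

q_proj = nn.Linear(embed_size, embed_size, bias=False)  # one weight matrix covering all heads
x = torch.randn(2, 10, embed_size)                       # (batch, seq_len, embed_size)
q = q_proj(x).reshape(2, 10, heads, head_dim)            # per-head slices, parameters not shared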
| https://stackoverflow.com/questions/73459106/ |
Dataset input when combining models in pytorch | I have two deep learning models, A and B; B is trained separately on the prediction results of A. To train model B, I want to use the prediction results of A as B's input. It can be understood as a concatenation of the two models. So when I write the Dataset for model B in PyTorch, it reads data from model A rather than from a local file. However, training model B is very slow, since model A needs to be run for every input sample of model B. Is there an efficient way to combine the two models? I have tried saving the prediction data of model A to files and then reading the files as B's dataset input, but this takes a lot of storage, so it is probably not the best way. Looking forward to ideas!
| Assuming that you're backpropagating loss only through model B, it should take only roughly 2x time to do the forward pass through both models, and the backwards pass should take the same time regardless of whether the inputs to model B are loaded from file cache or directly from model A. Importantly, you need to disable gradient computation for model A, or model A will be rather slow and backpropagation will be slow. Your code should roughly look like:
for batch in dataset:
input_A, target_B = batch
modelA.eval()
# ensure no gradient computation for model A
with torch.no_grad():
output_A = modelA(input_A)
output_A = output_A.detach() # ensure backpropagation will stop here
# gradient computation as normal
output_B = modelB(output_A)
loss = loss_fn(output_B,target_B)
loss.backward()
optimizer.step()
optimizer.zero_grad()
| https://stackoverflow.com/questions/73459826/ |
How can I iterate over all the batches from DataLoader? | I wanted to iterate over all the batches and save the images, but with this process it's saving only images of the first batch
for batch_idx, (test_data, test_targets) in enumerate(test_loader):
for i in range(0, test_loader.batch_size-1):
img = np.array(test_data[i][0])*255
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
low_black = np.array([0,0,0])
high_black = np.array([360,255,0])
mask = cv2.inRange(hsv, low_black, high_black)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img[mask>0]=random.choice(list(color_dict.values()))
cv2.imwrite(f'/content/test_data/{test_targets[i].item()}_{i+1}.png', img)
| Since i starts from 0 for every batch, the saved file names are duplicated and later batches overwrite earlier ones. One common way to solve this is to use a running counter (note also that range(0, test_loader.batch_size-1) skips the last sample of each batch and assumes every batch is full; iterating over range(len(test_data)) avoids both issues):
count = 0 # here
for batch_idx, (test_data, test_targets) in enumerate(test_loader):
for i in range(0, test_loader.batch_size-1):
img = np.array(test_data[i][0])*255
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
low_black = np.array([0,0,0])
high_black = np.array([360,255,0])
mask = cv2.inRange(hsv, low_black, high_black)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img[mask>0]=random.choice(list(color_dict.values()))
cv2.imwrite(f'/content/test_data/{test_targets[i].item()}_{count}.png', img)
count+=1 # plus one every sample
| https://stackoverflow.com/questions/73466156/ |
How do I use Pytorch models in Deep Java Library(DJL)? | I would like to run EasyNMT in Java.
However I don't know how to load and run the model.
I loaded the model as follows:
URI uri = new URI("file:////Users/.../prior.pth");
Path modelDir = Paths.get(uri);
Model model = Model.newInstance("model.pth", Device.cpu(), "PyTorch");
model.load(modelDir);
However, I do not know what to do after this.
EasyNMT performs the following:
model.translate("Dies ist ein Satz in Deutsch.", target_lang='en', max_new_tokens=1000)
How does DJL perform translations?
| You need to create your own Translator to do pre-processing and post-processing. You can find a jupyter notebook that explains how Translator works in DJL.
For NMT model, you can find this example in DJL: https://github.com/deepjavalibrary/djl/blob/master/examples/docs/neural_machine_translation.md
| https://stackoverflow.com/questions/73467509/ |
Shuffling of time series data in pytorch-forecasting | I am using pytorch-forecasting for count time series. I have some date information such as hour of day, day of week, day of month etc...
When I assign these as categorical variables in TimeSeriesDataSet using time_varying_known_categoricals, the training.data['categoricals'] values seem shuffled and not in the same order as the target. Why is that?
pandas dataframe is like below before going through TimeSeriesDataSet
After the following code
why has hour of day column changed to 0, 1, 12, 17?
| Actually, the time_varying_known_categoricals are NOT shuffled. The categories assigned to them are just not in order (e.g. 1 for the 1st hour, 2 for the 2nd hour, etc.), which is why it feels like the time series has been shuffled. I tried to align the "hour_of_day" categorical variable for 3 days and noticed that the encoding for each hour matches correctly across days, so there is no shuffling. This information should be mentioned in the docstring at least; it would save a lot of time and confusion.
| https://stackoverflow.com/questions/73468198/ |
PyTorch - read single pixel from each image channel and write it to another image | I have a tensor [C, H, W], where C is a number of channels (does not have to be 3).
I have another array with indices and I would like to read pixels at those indices and write them to another image.
For example:
two images A (source) and B (target)
C = 3
H, W = 64
indicesRead = [[0, 0], [13, 15], [32, 43]]
indicesWrite = [[7, 5], [1, 1], [4, 4]]
and I would like to get from image A pixel for channel 0 at [0, 0], channel 1 at [13, 15] and channel 2 at [32, 43].
Once I have these values, I want to write them to image B to channel 0 to position [7, 5] (basically copy A[0, 0] to B[7, 5]) etc.
Can it be done with torch methods or I have to iterate tensor manually?
| You can use list-based indexing in Pytorch.
# add channel index to indicesRead
indicesRead = [[0,0, 0], [1,13, 15], [2,32, 43]]
indicesWrite = [[7, 5], [1, 1], [4, 4]]
cRead, aRead , bRead = zip(*indicesRead)
aWrite, bWrite = zip(*indicesWrite)
B[aWrite,bWrite]= A[cRead,aRead,bRead]
Note I use a and b to denote the height and width dimensions (no relation to the input and output matrices A and B), because the x/y (column-first) convention becomes a bit confusing with arrays. I use c to indicate the channel.
| https://stackoverflow.com/questions/73474650/ |
Torch automatic differentiation for matrix defined with components of vector | The title is quite self-explanatory. I have the following
import torch
x = torch.tensor([3., 4.], requires_grad=True)
A = torch.tensor([[x[0], x[1]],
[x[1], x[0]]], requires_grad=True)
f = torch.norm(A)
f.backward()
I would like to compute the gradient of f with respect to x, but if I type x.grad I just get None. If I use the more explicit command torch.autograd.grad(f, x) instead of f.backward(), I get
RuntimeError: One of the differentiated Tensors appears to not have
been used in the graph. Set allow_unused=True if this is the desired
behavior.
| The problem might be that when you take a slice of a leaf tensor, it returns a non-leaf tensor, like so:
>>> x.is_leaf
True
>>> x[0].is_leaf
False
So what's happening is that x is not what was added to the graph, but instead x[0].
Try this instead:
>>> import torch
>>>
>>> x = torch.tensor([3., 4.], requires_grad=True)
>>> xt = torch.empty_like(x).copy_(x.flip(0))
>>> A = torch.stack([x,xt])
>>>
>>> f = torch.norm(A)
>>> f.backward()
>>>
>>> x.grad
tensor([0.8485, 1.1314])
The difference is that PyTorch knows to add x to the graph, so f.backward() populates it's gradient.
Here you'll find a few different way of copying tensors and the effect it has on the graph.
| https://stackoverflow.com/questions/73475930/ |
pytorch subset on dataloader decreases number of data | I'm using random_split()
dataset_train, dataset_valid = random_split(dataset, [int(len(dataset) * 0.8), int(len(dataset) * 0.2+1)])
len(dataset_train) is 1026 and len(dataset_valid) is 257, but putting these two variables into DataLoaders seems to decrease the number of data points
loader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True, num_workers=0)
loader_val = DataLoader(dataset_valid, batch_size=batch_size, shuffle=True, num_workers=0)
print (len(loader_train))
print (len(loader_val))
output is :
257, 65
I don't know why the size of the dataset decreases.
Please, any help is appreciated. Thanks.
| First, note that len(DataLoader) returns the number of batches, not the number of samples: with a batch size of 4 (which matches your numbers), ceil(1026/4) = 257 and ceil(257/4) = 65, so no data is actually lost. Separately, if you do want to train on only part of the data, you can use data.Subset, which works as a wrapper for data.Dataset instances. You need to provide a generator or list of indices that you want to retain in the constructed dataset.
Here is a minimal setup example:
>>> import torch
>>> from torch.utils import data
>>> dataset_train = data.TensorDataset(torch.arange(100))
Construct the subset by wrapping dataset_train:
>>> subset = data.Subset(dataset_train, range(10)) # will select the first 10 samples of dataset_train
Finally construct your dataloader:
>>> loader_train = data.DataLoader(subset, batch_size=2)
As an illustration here is loader_train:
>>> for x in loader_train:
... print(x)
[tensor([0, 1])]
[tensor([2, 3])]
[tensor([4, 5])]
[tensor([6, 7])]
[tensor([8, 9])]
| https://stackoverflow.com/questions/73483902/ |
How self.anchors is not self. declared in the __init__ but it's used in the method? | class Detect(nn.Module):
stride = None # strides computed during build
onnx_dynamic = False # ONNX export parameter
export = False # export mode
def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
super().__init__()
self.nc = nc # number of classes
self.no = nc + 5 # number of outputs per anchor
self.nl = len(anchors) # number of detection layers
self.na = len(anchors[0]) // 2 # number of anchors
self.grid = [torch.zeros(1)] * self.nl # init grid
self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid
self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
self.inplace = inplace # use inplace ops (e.g. slice assignment)
def _make_grid(self, nx=20, ny=20, i=0):
d = self.anchors[i].device
t = self.anchors[i].dtype
shape = 1, self.na, ny, nx, 2 # grid shape
y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
if check_version(torch.__version__, '1.10.0'): # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility
yv, xv = torch.meshgrid(y, x, indexing='ij')
else:
yv, xv = torch.meshgrid(y, x)
grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5
anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
return grid, anchor_grid
In __init__ there's no self.anchors = anchors, but the _make_grid() method uses self.anchors. How is that possible?
PS I had to drop the forward method otherwise stackoverflow claimed there was too much code
| Notice this call in __init__:
self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
Let's see what the PyTorch docs have to say about nn.Module.register_buffer:
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module.
This is typically used to register a buffer that should not to be
considered a model parameter. For example, BatchNorm’s running_mean is
not a parameter, but is part of the module’s state. Buffers, by
default, are persistent and will be saved alongside parameters. This
behavior can be changed by setting persistent to False. The only
difference between a persistent buffer and a non-persistent buffer is
that the latter will not be a part of this module’s state_dict.
Buffers can be accessed as attributes using given names.
Parameters
name (string) – name of the buffer. The buffer can be
accessed from this module using the given name
[...]
So it appears the base class you're inheriting from exposes any registered buffer as an attribute, hence the availability of self.anchors at runtime
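A quick sketch of that behaviour in isolation:
import torch
import torch.nn as nn

m = nn.Module()
m.register_buffer('anchors', torch.zeros(3, 2, 2))
print(m.anchors.shape)                  # the buffer is exposed as an attribute
print(dict(m.named_buffers()).keys())   # dict_keys(['anchors'])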
| https://stackoverflow.com/questions/73484654/ |
RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (4 x 4). Kernel size can't be greater than actual input size | I am trying to train a GAN to transfer style. I am getting an error when passing images through the discriminator:
for epoch in range(epochs):
#code for stats
for real_images in tqdm(t_dl):
optimizer["discriminator"].zero_grad()
real_preds = model["discriminator"](real_images)#-----------------------error here
#code
And here is model
model = {
"discriminator": discriminator.to(device),
"generator": generator.to(device)
}
And code for discriminator
discriminator = nn.Sequential(
# in: 3 x 256 x 256
PrintLayer(),
nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.2, inplace=True),
# out: 64 x 128 x 128
PrintLayer(),
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.2, inplace=True),
# out: 128 x 64 x 64
PrintLayer(),
nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2, inplace=True),
# out: 256 x 32 x 32
PrintLayer(),
nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2, inplace=True),
# out: 512 x 16 x 16
PrintLayer(),
nn.Conv2d(512, 1024, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(1024),
nn.LeakyReLU(0.2, inplace=True),
# out: 512 x 8 x 8
PrintLayer(),
nn.Conv2d(1024, 1024, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(1024),
nn.LeakyReLU(0.2, inplace=True),
# out: 1024 x 4 x 4
PrintLayer(),
nn.Conv2d(1024, 1, kernel_size=4, stride=1, padding=0, bias=False),
# out: 1 x 1 x 1
PrintLayer(),
nn.Flatten(),
nn.Sigmoid())
So I added PrintLayer() to check dimensions after convolutions
class PrintLayer(nn.Module):
def __init__(self):
super(PrintLayer, self).__init__()
def forward(self, x):
print(x.shape)
return x
All images in the batch are 256*256; I printed the image sizes right before passing them to the discriminator:
0 torch.Size([3, 256, 256])
1 torch.Size([3, 256, 256])
2 torch.Size([3, 256, 256])
3 torch.Size([3, 256, 256])
4 torch.Size([3, 256, 256])
5 torch.Size([3, 256, 256])
6 torch.Size([3, 256, 256])
7 torch.Size([3, 256, 256])
8 torch.Size([3, 256, 256])
9 torch.Size([3, 256, 256])
It works for the first batch, but somehow the second batch is 112*112
torch.Size([10, 3, 256, 256])
torch.Size([10, 64, 128, 128])
torch.Size([10, 128, 64, 64])
torch.Size([10, 256, 32, 32])
torch.Size([10, 512, 16, 16])
torch.Size([10, 1024, 8, 8])
torch.Size([10, 1024, 4, 4])
torch.Size([10, 1, 1, 1])
torch.Size([10, 3, 112, 112])
torch.Size([10, 64, 56, 56])
torch.Size([10, 128, 28, 28])
torch.Size([10, 256, 14, 14])
torch.Size([10, 512, 7, 7])
torch.Size([10, 1024, 3, 3])
torch.Size([10, 1024, 1, 1])
| My problem was solved here https://discuss.pytorch.org/t/runtimeerror-calculated-padded-input-size-per-channel-1-x-1-kernel-size-4-x-4-kernel-size-cant-be-greater-than-actual-input-size/160184/11
Briefly: the generator generated images of the wrong shape, so the discriminator was getting 112*112 inputs.
| https://stackoverflow.com/questions/73493485/ |
Multiple-series training input is giving NaN loss while same data but one-series training input is not | I want to train an N-BEATS time series model using Darts. I have a time series DataFrame for each user, so I want to use multiple-series training, but when I feed the list of TimeSeries I directly get NaN losses during training. If I concatenate all users' TimeSeries into one, I get a normal loss. In both cases the data is scaled, filled and cast to float32:
data = scaler.transform(filler.transform(data)).astype(np.float32)
Here is the code that I use to combine the list of TimeSeries into a single TimeSeries. I also have pure Darts code for that, but it is much slower for the same result.
SPLIT = 0.8
if concatenate_to_one_ts:
all_dfs = []
all_dfs_cov = []
for i in range(len(list_of_target_ts)):
all_dfs.append(list_of_target_ts[i].pd_series())
all_dfs_cov.append(list_of_cov_ts[i].pd_dataframe())
all_dfs = pd.concat(all_dfs)
all_dfs_cov = pd.concat(all_dfs_cov)
nbr_train_sample = int(len(all_dfs) * SPLIT)
all_dfs_train = all_dfs[:nbr_train_sample]
all_dfs_test = all_dfs[nbr_train_sample:]
list_of_target_ts_train = TimeSeries.from_series(all_dfs_train.reset_index(drop=True))
list_of_target_ts_test = TimeSeries.from_series(all_dfs_test.reset_index(drop=True))
all_dfs_cov_train = all_dfs_cov[:nbr_train_sample]
all_dfs_cov_test = all_dfs_cov[nbr_train_sample:]
list_of_cov_ts_train = TimeSeries.from_dataframe(all_dfs_cov_train.reset_index(drop=True))
list_of_cov_ts_test = TimeSeries.from_dataframe(all_dfs_cov_test.reset_index(drop=True))
else:
nbr_train_sample = int(len(list_of_target_ts) * SPLIT)
list_of_target_ts_train = list_of_target_ts[:nbr_train_sample]
list_of_target_ts_test = list_of_target_ts[nbr_train_sample:]
list_of_cov_ts_train = list_of_cov_ts[:nbr_train_sample]
list_of_cov_ts_test = list_of_cov_ts[nbr_train_sample:]
model = NBEATSModel(input_chunk_length=4,
output_chunk_length=1,
batch_size=512,
n_epochs=5,
nr_epochs_val_period=1,
model_name="NBEATS_test",
generic_architecture=True,
force_reset=True,
save_checkpoints=True,
show_warnings=True,
log_tensorboard=True,
torch_device_str='cuda:0'
)
model.fit(series=list_of_target_ts_train,
past_covariates=list_of_cov_ts_train,
val_series=list_of_target_ts_val,
val_past_covariates=list_of_cov_ts_val,
verbose=True,
num_loader_workers=20)
As Multiple-Series training I get:
Epoch 0: 8%|██████████▉ | 2250/27807 [03:00<34:11, 12.46it/s, loss=nan, v_num=logs, train_loss=nan.0
As a single serie training I get:
Epoch 0: 24%|█████████████████████████▋ | 669/2783 [01:04<03:24, 10.33it/s, loss=0.00758, v_num=logs, train_loss=0.00875]
I am also confused by the number of samples per epoch with the same batch size: from what I read here: https://unit8.com/resources/training-forecasting-models/ the single series should have more samples, as the window-size cut is not happening for each of the multiple series.
|
Regarding the NaNs, I would try reducing the learning rate if I were you. Also double check that there's no NaN remaining in your data (see corresponding entry here 1)
Regarding the number of samples, each of the separate time series are split into several (input, output) slices. For the single series, this split is done once overall, whereas for the multiple series, this split is done once per series and then all the resulting samples are regrouped in a common training set. So it is expected to have more training samples with multiple series (and each training sample will have fewer dimensions compared to the single-multivariate-series case).
| https://stackoverflow.com/questions/73503021/ |
How can I apply a linear transformation on sparse matrix in PyTorch? | In PyTorch, we have nn.linear that applies a linear transformation to the incoming data:
y = WA+b
In this formula, W and b are our learnable parameters and A is my input data matrix. In my case, the matrix A is too large to fully load into RAM, so I use it as a sparse matrix. Is it possible to perform such an operation on sparse matrices using PyTorch?
| This is possible with PyTorch using sparse matrix multiply. In your case, I think you want something like:
>> i = [[0, 1, 1],
[2, 0, 2]]
>> v = [3, 4, 5]
>> A = torch.sparse_coo_tensor(i, v, (2, 3))
>> A.to_dense()
tensor([[0, 0, 3],
[4, 0, 5]])
# compute W@A by computing ((A.T)@(W.T)).T because...
# at time of writing, the sparse matrix must be first in the matmul
>> (A.t() @ W.t()).t()
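If you want this wrapped as a learnable layer, here is a rough sketch (my addition, not from the original answer): W and b stay dense nn.Parameters and forward applies the same sparse-first trick:
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    # computes y = W @ A + b for a sparse COO matrix A of shape (in_features, n)
    def __init__(self, out_features, in_features):
        super().__init__()
        self.W = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_features, 1))

    def forward(self, A):
        # the sparse tensor must come first in the matmul, so compute ((A.T) @ (W.T)).T
        return (A.t() @ self.W.t()).t() + self.b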
| https://stackoverflow.com/questions/73504976/ |
Custom torch.nn.Module not learning, even though grad_fn=MmBackward | I am training a model to predict pose using a custom PyTorch model. However, V1 below never learns (params don't change). The output is connected to the backprop graph and has grad_fn=MmBackward.
I can't understand why V1 isn't learning but V2 is?
V1
class cam_pose_transform_V1(torch.nn.Module):
def __init__(self):
super(cam_pose_transform_V1, self).__init__()
self.elevation_x_rotation_radians = torch.nn.Parameter(torch.normal(0., 1e-6, size=()))
self.azimuth_y_rotation_radians = torch.nn.Parameter(torch.normal(0., 1e-6, size=()))
self.z_rotation_radians = torch.nn.Parameter(torch.normal(0., 1e-6, size=()))
def forward(self, x):
exp_i = torch.zeros((4,4))
c1 = torch.cos(self.elevation_x_rotation_radians)
s1 = torch.sin(self.elevation_x_rotation_radians)
c2 = torch.cos(self.azimuth_y_rotation_radians)
s2 = torch.sin(self.azimuth_y_rotation_radians)
c3 = torch.cos(self.z_rotation_radians)
s3 = torch.sin(self.z_rotation_radians)
rotation_in_matrix = torch.tensor([
[c2, s2 * s3, c3 * s2],
[s1 * s2, c1 * c3 - c2 * s1 * s3, -c1 * s3 - c2 * c3 * s1],
[-c1 * s2, c3 * s1 + c1 * c2 * s3, c1 * c2 * c3 - s1 * s3]
], requires_grad=True)
exp_i[:3, :3] = rotation_in_matrix
exp_i[3, 3] = 1.
return torch.matmul(exp_i, x)
However, this version learns as expected (params and loss change) and also has grad_fn=MmBackward on the output:
V2
def vec2ss_matrix(vector): # vector to skewsym. matrix
ss_matrix = torch.zeros((3,3))
ss_matrix[0, 1] = -vector[2]
ss_matrix[0, 2] = vector[1]
ss_matrix[1, 0] = vector[2]
ss_matrix[1, 2] = -vector[0]
ss_matrix[2, 0] = -vector[1]
ss_matrix[2, 1] = vector[0]
return ss_matrix
class cam_pose_transform_V2(torch.nn.Module):
def __init__(self):
super(cam_pose_transform_V2, self).__init__()
self.w = torch.nn.Parameter(torch.normal(0., 1e-6, size=(3,)))
self.v = torch.nn.Parameter(torch.normal(0., 1e-6, size=(3,)))
self.theta = torch.nn.Parameter(torch.normal(0., 1e-6, size=()))
def forward(self, x):
exp_i = torch.zeros((4,4))
w_skewsym = vec2ss_matrix(self.w)
v_skewsym = vec2ss_matrix(self.v)
exp_i[:3, :3] = torch.eye(3) + torch.sin(self.theta) * w_skewsym + (1 - torch.cos(self.theta)) * torch.matmul(w_skewsym, w_skewsym)
exp_i[:3, 3] = torch.matmul(torch.eye(3) * self.theta + (1 - torch.cos(self.theta)) * w_skewsym + (self.theta - torch.sin(self.theta)) * torch.matmul(w_skewsym, w_skewsym), self.v)
exp_i[3, 3] = 1.
return torch.matmul(exp_i, x)
Update #1
In the training loop I printed the .grad attributes using:
print([i.grad for i in list(cam_pose.parameters())])
loss.backward()
print([i.grad for i in list(cam_pose.parameters())])
Results:
# V1
[None, None, None]
[None, None, None]
# V2
[None, None, None]
[tensor([-0.0032, 0.0025, -0.0053]), tensor([ 0.0016, -0.0013, 0.0054]), tensor(-0.0559)]
Nothing else in the code was changed, just swapped V1 model for V2.
| this is your problem right here:
rotation_in_matrix = torch.tensor([
[c2, s2 * s3, c3 * s2],
[s1 * s2, c1 * c3 - c2 * s1 * s3, -c1 * s3 - c2 * c3 * s1],
[-c1 * s2, c3 * s1 + c1 * c2 * s3, c1 * c2 * c3 - s1 * s3]], requires_grad=True)
you are creating a tensor out of a list of tensors, which is not a differentiable operation -- i.e. there's no gradient flow from rotation_in_matrix to its elements c1..c3
the solution would be to create the rotation_in_matrix using tensor operations like stack and cat instead
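For example, a minimal sketch of that fix inside V1's forward (the names row1..row3 are just for illustration; c1..s3 are the tensors already computed above):
# build each row with torch.stack, then stack the rows; every op stays on the autograd graph,
# so gradients flow back through c1..s3 to the three angle parameters
row1 = torch.stack([c2, s2 * s3, c3 * s2])
row2 = torch.stack([s1 * s2, c1 * c3 - c2 * s1 * s3, -c1 * s3 - c2 * c3 * s1])
row3 = torch.stack([-c1 * s2, c3 * s1 + c1 * c2 * s3, c1 * c2 * c3 - s1 * s3])
rotation_in_matrix = torch.stack([row1, row2, row3])  # shape (3, 3), carries a grad_fn
Note that no requires_grad=True is needed here; the gradient requirement is inherited from the parameters through c1..s3.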
| https://stackoverflow.com/questions/73506309/ |
Pytorch backward does not compute the gradients for requested variables | I'm trying to train a resnet18 model on pytorch (+pytorch-lightning) with the use of Virtual Adversarial Training. During the computations required for this type of training I need to obtain the gradient of D (ie. the cross-entropy loss of the model) with regard to tensor r.
This should, in theory, happen in the following code snippet:
def generic_step(self, train_batch, batch_idx, step_type):
x, y = train_batch
unlabeled_idx = y is None
d = torch.rand(x.shape).to(x.device)
d = d/(torch.norm(d) + 1e-8)
pred_y = self.classifier(x)
y[unlabeled_idx] = pred_y[unlabeled_idx]
l = self.criterion(pred_y, y)
R_adv = torch.zeros_like(x)
for _ in range(self.ip):
r = self.xi * d
r.requires_grad = True
pred_hat = self.classifier(x + r)
# pred_hat = F.log_softmax(pred_hat, dim=1)
D = self.criterion(pred_hat, pred_y)
self.classifier.zero_grad()
D.requires_grad=True
D.backward()
R_adv += self.eps * r.grad / (torch.norm(r.grad) + 1e-8)
R_adv /= 32
loss = l + R_adv * self.a
loss.backward()
self.accuracy[step_type] = self.acc_metric(torch.argmax(pred_y, 1), y)
return loss
Here, to my understanding, r.grad should in theory be the gradient of D with respect to r. However, the code throws this at D.backward():
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
(full traceback excluded because this error is not helpful and technically "solved" as I know the cause for it, explained just below)
After some research and debugging it seems that in this situation D.backward() attempts to calculate dD/dD disregarding any previous mention of requires_grad=True. This is confirmed when I add D.requires_grad=True and I get D.grad=Tensor(1.,device='cuda:0') but r.grad=None.
Does anyone know why this may be happening?
| In Lightning, .backward() and optimizer step are all handled under the hood. If you do it yourself like in the code above, it will mess with Lightning because it doesn't know you called backward yourself.
You can enable manual optimization in the LightningModule:
def __init__(self):
super().__init__()
# put this in your init
self.automatic_optimization = False
This tells Lightning that you are taking over calling backward and handling optimizer step + zero grad yourself. Don't forget to add that in your code above. You can access the optimizer and scheduler like so in your training step:
def training_step(self, batch, batch_idx):
optimizer = self.optimizers()
scheduler = self.lr_schedulers()
# do your training step
# don't forget to call:
# 1) backward 2) optimizer step 3) zero grad
Read more about manual optimization here.
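As a rough sketch only (not a drop-in replacement for the VAT code above), the manual loop could look like this. Here generic_step is assumed to be your existing method with its final loss.backward() removed; the inner D.backward() used to obtain r.grad can stay:
def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    opt.zero_grad()
    loss = self.generic_step(batch, batch_idx, "train")  # compute l + R_adv * a, without calling loss.backward() inside
    self.manual_backward(loss)  # preferred over loss.backward() so Lightning can do its bookkeeping
    opt.step()
    return loss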
| https://stackoverflow.com/questions/73511703/ |
Why my cross entropy loss function does not converge? | I try to write a cross entropy loss function by myself. My loss function gives the same loss value as the official one, but when i use my loss function in the code instead of official cross entropy loss function, the code does not converge. When i use the official cross entropy loss function, the code converges. Here is my code, please give me some suggestions. Thanks very much
The input 'out' is a tensor (B*C) and 'label' contains class indices (1 * B)
class MylossFunc(nn.Module):
def __init__(self):
super(MylossFunc, self).__init__()
def forward(self, out, label):
out = torch.nn.functional.softmax(out, dim=1)
n = len(label)
loss = torch.FloatTensor([0])
loss = Variable(loss, requires_grad=True)
tmp = torch.log(out)
#print(out)
torch.scalar_tensor(-100)
for i in range(n):
loss = loss - torch.max(tmp[i][label[i]], torch.scalar_tensor(-100) )/n
loss = torch.sum(loss)
return loss
| Instead of using torch.softmax and torch.log, you should use torch.log_softmax, otherwise your training will become unstable with nan values everywhere.
This happens because when you take the softmax of your logits using the following line:
out = torch.nn.functional.softmax(out, dim=1)
you might get a zero in one of the components of out, and when you follow that by applying torch.log it will result in nan (since log(0) is undefined). That is why torch (and other common libraries) provide a single stable operation, log_softmax, to avoid the numerical instabilities that occur when you use torch.softmax and torch.log individually.
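As an illustration, here is one possible way to restructure the loss above around log_softmax; the clamp at -100 from the original is omitted for brevity, and label is assumed to be a 1-D LongTensor of class indices:
import torch
from torch import nn

class MyLossFunc(nn.Module):
    def forward(self, out, label):
        # log_softmax computes log(softmax(x)) in a numerically stable way
        log_probs = torch.nn.functional.log_softmax(out, dim=1)
        # pick the log-probability of the true class for each sample and average the negatives
        return -log_probs[torch.arange(len(label)), label].mean()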
| https://stackoverflow.com/questions/73520088/ |
Pytorch: What is the best way to sample two items of different labels form a dataset? | I'm trying to define a pytorch Dataset/Dataloader for an image style transfer network. I have a dataset of images grouped by styles, and I want each sample from this dataset to consist of two images, one for style, the other for content. My first idea was to implement a Dataset with something like this in __init__:
n = len(images)
itr_style = random.shuffle([i for i in range(n)]))
itr_content = random.shuffle([i for i in range(n)]))
and this in __getitem__:
return (images[itr_style[index]], images[itr_content[index]])
Which is probably not the most efficient implementation, and I also need to make sure that:
The two images don't come from the same style
The dataset re-shuffles every epoch
So what is the best way to implement this Dataset?
| I understand you want to make combinations of two images that come from different groups.
Assuming you have groups of images, you can precompute every combination of image indices (one from each of two different groups) in __init__, and load the actual images in __getitem__.
from typing import List
from torch.utils.data import Dataset
class Image():
"""Placeholder class - you may change Image class into some tensor objects"""
pass
class PreloadedDataset(Dataset):
def __init__(self, img_groups: List[List[Image]]):
super(PreloadedDataset, self).__init__()
self.groups = img_groups
self.combinations = []
for group_idx1, group1 in enumerate(img_groups):
            for group_idx2, group2 in enumerate(img_groups[group_idx1 + 1:], start=group_idx1 + 1):  # absolute group index, skipping pairs from the same group
for img1 in range(len(group1)):
for img2 in range(len(group2)):
self.combinations.append((group_idx1, img1, group_idx2, img2))
def __len__(self):
return len(self.combinations)
def __getitem__(self, item):
group1, img1, group2, img2 = self.combinations[item]
return self.groups[group1][img1], self.groups[group2][img2]
| https://stackoverflow.com/questions/73524287/ |
Pytorch: Test each row of the first 2D tensor also exist in the second tensor? | Given two tensors t1 and t2:
t1=torch.tensor([[1,2],[3,4],[5,6]])
t2=torch.tensor([[1,2],[5,6]])
If the row elements of t1 is exist in t2, return True, otherwise return False. The ideal result is
[True, False, True].
I tried torch.isin(t1, t2), but its return the results by elements not by rows. By the way, if they are numpy arrays, it can be completed by
np.in1d(t1.view('i,i').reshape(-1), t2.view('i,i').reshape(-1))
I wonder how to get the similar result in tensor?
| def rowwise_in(a,b):
"""
a - tensor of size a0,c
b - tensor of size b0,c
    returns - tensor of size a0 with 1 for each row of a in b, 0 otherwise
"""
# dimensions
a0 = a.shape[0]
b0 = b.shape[0]
c = a.shape[1]
assert c == b.shape[1] , "Tensors must have same number of columns"
a_expand = a.unsqueeze(1).expand(a0,b0,c)
b_expand = b.unsqueeze(0).expand(a0,b0,c)
# element-wise equality
equal = a_expand == b_expand
# sum along dim 2 (all elements along this dimension must be true for the summed dimension to be True)
row_equal = torch.prod(equal,dim = 2)
row_in_b = torch.max(row_equal, dim = 1)[0]
return row_in_b
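For the example tensors above, a compact broadcasting one-liner (doing essentially the same expand-and-compare) gives the desired result directly:
(t1.unsqueeze(1) == t2).all(dim=2).any(dim=1)
# tensor([ True, False,  True])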
| https://stackoverflow.com/questions/73524338/ |
Using hook method in a data parallelism approach in PyTorch | A pretrained model, as my encoder, has several layers as follows and I want to extract some features from some layers of that :
TimeSformer(
(model): VisionTransformer(
(dropout): Dropout(p=0.0, inplace=False)
(patch_embed): PatchEmbed(
(proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
)
(pos_drop): Dropout(p=0.0, inplace=False)
(time_drop): Dropout(p=0.0, inplace=False)
(blocks): ModuleList( #************
(0): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): Identity()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
.
.
.
.
.
.
I need to extract features of specific layers of the above mentioned model and I am using hook method as follows:
import torch.nn as nn
class my_class(nn.Module):
def __init__(self, pretrained=False):
super(my_class, self).__init__()
self.featureExtractor =TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time',
pretrained_model='/home/TimeSformer_divST_16x16_448_K400.pyth')
self.featureExtractor=nn.DataParallel(self.featureExtractor)
self.list = list(self.featureExtractor.children())
self.activation = {}
def get_activation(name):
def hook(model, input, output):
self.activation[name] = output.detach()
return hook
self.featureExtractor.model.blocks[4].register_forward_hook(get_activation('block4')) # line 30 in the error
self.featureExtractor.model.blocks[8].register_forward_hook(get_activation('block8'))
self.featureExtractor.model.blocks[4].temporal_attn.register_forward_hook(get_activation('block4.temporal_attn'))
self.featureExtractor.model.blocks[11].register_forward_hook(get_activation('block11'))
def forward(self, x, out_consp = False):
b = self.featureExtractor(x)
block4_output_Temporal_att = self.activation['block4.temporal_attn']
block4_output = self.activation['block4']
block8_output = self.activation['block8']
block11_output = self.activation['block11']
.
.
.
.
The problem is I face the following error:
File "/home/TimeSformer/models/timesformer.py", line 30, in init
self.featureExtractor.model.blocks[4].register_forward_hook(get_activation('block4'))
File "/home/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'model'
How can I solve the problem
| nn.DataParallel wraps your network and exposes it through its module attribute, so the wrapper itself no longer has a model attribute. Refer to line 140 of this file.
In other words, you should access your blocks via self.featureExtractor.module.model.blocks: the extra .module hop gets you back to the wrapped TimeSformer, whose VisionTransformer (and its blocks) still lives under .model.
| https://stackoverflow.com/questions/73525619/ |
Pytorch progress bar disappear on vscode jupyter | I have a problem when training a PyTorch model: the progress bar disappeared for no reason today. It still worked properly in the days before. I'm using Jupyter through VS Code, connected to a kernel that runs on the Ubuntu subsystem. How can I show the progress bar as normal?
| I had this issue and it seems to come from a problem with tqdm for new versions of ipywidget (see https://github.com/microsoft/vscode-jupyter/issues/8552).
As mentioned in the link, I solved it by downgrading ipywidgets:
pip install ipywidgets==7.7.2
| https://stackoverflow.com/questions/73526940/ |
The size of Logits of Roberta model is weird | My input size is [8,22]. A batch with 8 tokenized sentences with a length of 22.
I don't want to use the default classifier.
model = RobertaForSequenceClassification.from_pretrained("xlm-roberta-large")
model.classifier=nn.Identity()
After model(batch)
The size of result is torch.Size([8, 22, 1024]). I have no idea why. Should it be [8,1024]?
| The model.classifier object you have replaced used to be an instance of a RobertaClassificationHead. If you take a look at its source code[1], the layer is hard-coded into indexing the first item of the second dimension of its input, which is supposed to be the [CLS] token.
By replacing it with an Identity you miss out on the indexing operation, hence your output shape.
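If the goal is a single 1024-dim vector per sentence, a minimal sketch would be to do that pooling yourself; here output stands for the [8, 22, 1024] tensor you already get back:
pooled = output[:, 0, :]  # keep only the first (<s>, RoBERTa's CLS equivalent) token -> torch.Size([8, 1024])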
Long story short, don't assume functionality you haven't verified when it comes to code you didn't write, huggingface in particular (lots of ad-hoc classes and spaghetti interfaces, at least as far as I'm concerned).
[1] source
| https://stackoverflow.com/questions/73530622/ |
Output of layer in CNN with equal dim of feature | I read a bit about autoencoders and read that the encoder transforms data from the $\mathbb{R}^{N \times D}$ space to the $\mathbb{R}^{N \times d}$ space.
I wrote CNN autoencoder with for example one layer like this:
self.conv1 = nn.Conv2d(1, 1, 5)
but when I apply x = F.relu(self.conv1(x)) of course I get an image with smaller dimensions (if I had the dimension torch.Size([1, 400, 1024]) at the beginning)
after the operation I have torch.Size([1, 396, 1020]).
It is clear that convolution reduces the spatial dimensions of the image (that's all clear :) ). What I'm interested in is whether there is a way to keep the number of features by filling the rest with zeros, whether that would be a step in the right direction, and if so, which features should be filled with zeros.
P.S. I use only one channel because I want the final result to be in a 2d matrix, and my images are gray scale anyway.
| I think you need to add a padding='same' option
self.conv1 = nn.Conv2d(1, 1, 5, padding='same')
This is the usual way of ensuring output dimensions don't get shrunk
when applying convolutional layers. It pads the original image before
applying the convolution, by default with zeros, rather than padding the result.
(If this were Tensorflow I would be certain, but it looks like torch works
the same way. https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html)
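A quick shape check (random values, batched 4-D input; string padding needs a reasonably recent PyTorch version):
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, 5, padding='same')
x = torch.randn(1, 1, 400, 1024)  # (batch, channels, height, width)
print(conv(x).shape)              # torch.Size([1, 1, 400, 1024]), spatial size preserved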
| https://stackoverflow.com/questions/73532153/ |
why do object detection methods have an output value for every class | Most recent object detection methods rely on a convolutional neural network. They create a feature map by running input data through a feature extraction step. They then add more convolutional layers to output a set of values like so (this set is from YOLO, but other architectures like SSD differ slightly):
pobj: probability of being an object
c1, c2 ... cn: indicating which class the object belongs to
x, y, w, h: bounding box of the object
However, one particular box cannot be multiple objects. As in, wouldn't having a high value for, say, c1 mean that the values for all the others c2 ... cn would be low? So why use different values for c1, c2 ... cn? Couldn't they all be represented by a single value, say 0-1, where each object has a certain range within the 0-1, say 0-0.2 is c1, 0.2-0.4 is c2 and so on...
This would reduce the dimension of the output from NxNx(5+C) (5 for the probability and bounding box, +C one for each class) to NxNx(5+1) (5 same as before and 1 for the class)
Thank you
| Short answer, NO! That is almost certainly not an acceptable solution. It sounds like your core question is: Why is a a single value in the range [0,1] not a sufficient, compact output for object classification? As a clarification, I'd say this doesn't really have to do with single-shot detectors; the outputs from 2-stage detectors and most all classification networks follows this same 1D embedding structure. As a secondary clarification, I'd say that many 1-stage networks also don't output pobj in their original implementations (YOLO is the main one that does but Retinanet and I believe SSD does not).
An object's class is a categorical attribute. Assumed within a standard classification problem is that the set of possible classes is flat (i.e. no class is a subclass of any other), mutually exclusive (each example falls into only a single class), and unrelated (not quite the right term here but essentially no class is any more or less related to any other class).
This assumed attribute structure is well represented by an orthonormal encoding vector of the same length as the set of possible attributes. A vector [1,0,0,0] is no more similar to [0,1,0,0] than it is to [0,0,0,1] in this space.
(As an aside, a separate branch of ML problems called multilabel classification removes the mutual exclusivity constraint (so [0,1,1,0] and [0,1,1,1] would both be valid label predictions). In this space, class or label combinations COULD be construed as more or less related since they share constituent labels or "basis vectors" in the orthonormal categorical attribute space. But enough digression...)
A single, continuous variable output for class destroys the assumption that all classes are unrelated. In fact, it assumes that the relation between any two classes is exact and quantifiable! What an assumption! Consider attempting to arrange the classes of, let's say, the ImageNet classification task, along a single dimension. Bus and car should be close, no? Let's say 0.1 and 0.2, respectively in our 1D embedding range of [0,1]. Zebra must be far away from them, maybe 0.8. But should be close to zebra fish (0.82)? Is a striped shirt closer to a zebra or a bus? Is the moon more similar to a bicycle or a trumpet? And is a zebra really 5 times more similar to a zebra fish than a bus is to a car? The exercise is immediately, patently absurd. A 1D embedding space for object class is not sufficiently rich to capture the differences between object classes.
Why can't we just place object classes randomly in the continuous range [0,1]? In a theoretical sense nothing is stopping you, but the gradient of the network would become horrendously, unmanageably non-convex and conventional approaches to training the network would fail. Not to mention the network architecture would have to encode extremely non-linear activation functions to predict the extremely hard boundaries between neighboring classes in the 1D space, resulting in a very brittle and non-generalizable model.
From here, the nuanced reader might suggest that in fact, some classes ARE related to one another (i.e. the unrelated assumption of the standard classification problem is not really correct). Bus and car are certainly more related than bus and trumpet, no? Without devolving into a critique on the limited usefulness of strict ontological categorization of the world, I'll simply suggest that in many cases there is an information embedding that strikes a middle ground. A vast field of work has been devoted to finding embedding spaces that are compact (relative to the exhaustive enumeration of "everything is its own class of 1") but still meaningful. This is the work of principal component analysis and object appearance embedding in deep learning.
Depending on the particular problem, you may be able to take advantage of a more nuanced embedding space better suited towards the final task you hope to accomplish. But in general, canonical deep learning tasks such as classification / detection ignore this nuance in the hopes of designing solutions that are "pretty good" generalized over a large range of problem spaces.
| https://stackoverflow.com/questions/73536858/ |
Drastic difference in accuracy for varying batch sizes? | When training my CNN image classifier using PyTorch I noticed a ~20+% difference in accuracy when using a batch size of 4 vs 32. What might be causing such drastic differences?
batch_size 4
100%|██████████| 10/10 [02:50<00:00, 17.04s/it, TestAcc=71%, TrainAcc=74%, loss=0.328]
batch_size 32
100%|██████████| 10/10 [02:38<00:00, 15.85s/it, TestAcc=53%, TrainAcc=57%, loss=0.208]
Model:
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, num_classes)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| You can try to adjust your learning rate too.
With a larger batch size you should also use a larger learning rate.
This article has additional explanations for the relation of learning rate and batch size.
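One common heuristic (just a rough sketch, not a guarantee) is the linear scaling rule: multiply the learning rate by the same factor as the batch size. Here net stands for the Net() model defined above:
base_lr, base_batch_size = 0.001, 4            # settings that worked for the small batch
batch_size = 32
lr = base_lr * batch_size / base_batch_size    # -> 0.008
optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)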
| https://stackoverflow.com/questions/73540096/ |
Pytorch gives error Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu | I'm beginning my journey with Python 3 / PyTorch, and I'm having the following issue:
I'm trying to run this code on my GPU, but i get the following error :
Expected all tensors to be on the same device, but found at least two
devices, cuda:0 and cpu!
I know it means I'm trying to manipulate two tensors that are on different devices, but I can't figure out where in my code I missed transferring this tensor.
Any help would be appreciated.
Here is the code
import sys
import os
sys.path.append(os.path.abspath("include"))
sys.path.append(os.path.abspath("models"))
from utils import *
from MD5EncryptedDataEncoder import *
import torch
import torch.nn.functional as F
from torch import nn
import numpy as np
import hashlib
import pathlib
from pathlib import Path
# instantiate the model
model = MD5EncryptionEncoder()
model_file = Path(model.m_save_path)
if model_file.is_file():
model = torch.load(model.m_save_path)
print("Loaded previously saved model from ["+model.m_save_path+"]")
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print("Device that will be used : ")
print(device)
model = model.to(device)
print("Model parameter devices :")
print(next(model.parameters()).device)
#define our loss function
loss_function = nn.MSELoss()
loss_function = loss_function.to(device)
#defined our optimizer
#learning_rate = 0.00000005
#optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.7,patience=8,cooldown=2,verbose=True)
loss_history = [0.02]
for i in range(50000000):
#Create our random unencrypted md5 data + convert it to a pytorch tensor
md5_unencrypted_data_hexadecimal_string = secrets.token_hex(64)
md5_unencrypted_data_tensor = HexStringToBinaryTensor(md5_unencrypted_data_hexadecimal_string)
md5_unencrypted_data_binary_blob = HexStringToBinaryBlob(md5_unencrypted_data_hexadecimal_string)
#Run the MD5 algorithm on our unencrypted md5 data + convert the result to a pytorch tensor
md5_encrypted_data = hashlib.md5(md5_unencrypted_data_binary_blob)
md5_encrypted_data_hexadecimal_string = md5_encrypted_data.hexdigest()
md5_encrypted_data_tensor = HexStringToBinaryTensor(md5_encrypted_data_hexadecimal_string)
md5_encrypted_data_tensor = md5_encrypted_data_tensor.to(device)
print(md5_encrypted_data_tensor)
#print("Unencrypted tensor :")
#print(md5_unencrypted_data_tensor)
#print("Encrypted tensor :")
#print(md5_encrypted_data_tensor)
#Run our forward pass
predictions_tensor = model(md5_encrypted_data_tensor)
#print("Prediction : ",predictions_tensor)
#Run our loss function
loss = loss_function(predictions_tensor, md5_encrypted_data_tensor)
#Compute the gradient (this does NOT update the weight : only compute the error gradient)
loss.backward()
if i % 100 == 0:
#Update the weights
optimizer.step()
#We are not using batch update, so restart the gradients
optimizer.zero_grad()
#Call the scheduler that will changer some params if necessary
scheduler.step(loss)
print("Loss : ",loss.item())
if(loss.item() < min(loss_history)):
print("Saving model to ["+model.m_save_path+"]")
torch.save(model,model.m_save_path)
loss_history.append(loss.item())
if i % 500 == 0:
print("Prediction : ",predictions_tensor)
print("Wanted result : ",md5_encrypted_data_tensor)
Here is the class MD5EncryptedDataEncoder code :
import torch
import torch.nn.functional as F
from torch import nn
# define the network class
class MD5EncryptionEncoder(nn.Module):
m_save_path = "data/MD5EncryptionEncoder.model"
def __init__(self):
# call constructor from superclass
super().__init__()
self.input_to_hidden_1 = nn.Linear(128, 10240)
self.hidden_layers = []
self.hidden_layers.append(nn.Linear(10240, 10240))
self.hidden_layers.append(nn.Linear(10240, 10240))
self.last_hidden_to_output = nn.Linear(10240, 128)
self.hidden_layers_activation_function = torch.nn.LeakyReLU(0.1)
def forward(self, x):
# define forward pass.
# Here, 'x' represent the output of a network defined in __init__
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
x = self.hidden_layers_activation_function(self.hidden_layers[0](x))
x = self.hidden_layers_activation_function(self.hidden_layers[1](x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
def EncryptedTensorToStateTensor(self,x):
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
x = self.hidden_layers_activation_function(self.hidden_layers[0](x))
return x
def StateTensorToEncryptedTensor(self,x):
x = self.hidden_layers_activation_function(self.hidden_layers[1](x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
Here is the content of utils.py
import torch
import secrets
import binascii
def HexStringToBinaryBlob(p_hex_string):
if(len(p_hex_string)%2 != 0):
print("[HexStringToBinaryTensor] Parameter [p_hex_string]'s size if not an even number")
exit(1)
binary_blob = binascii.unhexlify(p_hex_string)
return binary_blob
def HexStringToBinaryTensor(p_hex_string):
if(len(p_hex_string)%2 != 0):
print("[HexStringToBinaryTensor] Parameter [p_hex_string]'s size if not an even number")
exit(1)
binary_blob = binascii.unhexlify(p_hex_string)
binary_string = ''.join(format(x, '08b') for x in binary_blob)
result_tensor_size = int(len(p_hex_string)/2*8)
result_tensor = torch.randn(result_tensor_size)
for binary_position in range (len(binary_string)):
binary_digit = int(binary_string[binary_position])
result_tensor[binary_position] = binary_digit
return result_tensor
def Create512BitBinaryTensor():
#Create a 512 bits (64 BYTES) block of data in hexadecimal
hexadecimal_blob = secrets.token_hex(64)
result_tensor = HexStringToBinaryTensor(hexadecimal_blob)
return result_tensor
Here is the terminal error
Loaded previously saved model from [data/MD5EncryptionEncoder.model]
Device that will be used :
cuda:0
Model parameter devices :
cuda:0
tensor([0., 1., 0., 1., 0., 1., 0., 1., 0., 1., 1., 0., 1., 0., 1., 1., 1., 0.,
1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 1., 0., 0.,
1., 1., 1., 0., 1., 0., 1., 1., 0., 0., 1., 1., 0., 1., 1., 0., 0., 1.,
0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
1., 0., 0., 1., 1., 1., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 0.,
1., 1., 1., 0., 0., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 0., 0., 1.,
0., 1., 1., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 1., 0., 1., 1.,
1., 0.], device='cuda:0')
Traceback (most recent call last):
File "TrainEncryptedDataEncoder.py", line 71, in <module>
predictions_tensor = model(md5_encrypted_data_tensor)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/Pytorch/models/MD5EncryptedDataEncoder.py", line 45, in forward
x = self.hidden_layers_activation_function(self.hidden_layers[0](x))
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat in method wrapper_addmv_)
| The issue is the List data type defined in the Model Class.
You have used
self.hidden_layers = []
self.hidden_layers.append(nn.Linear(10240, 10240))
self.hidden_layers.append(nn.Linear(10240, 10240))
Layers stored in a plain Python list are never registered as sub-modules, so model.to(device) does not move them and they stay on the CPU. You need to either put them in an nn.Sequential and then call them by index, as below:
class MD5EncryptionEncoder(nn.Module):
m_save_path = "data/MD5EncryptionEncoder.model"
def __init__(self):
# call constructor from superclass
super().__init__()
self.input_to_hidden_1 = nn.Linear(128, 10240)
self.hidden_layers = []
self.hidden_layers.append(nn.Linear(10240, 10240))
self.hidden_layers.append(nn.Linear(10240, 10240))
self.hidden = nn.Sequential(*self.hidden_layers)
self.last_hidden_to_output = nn.Linear(10240, 128)
self.hidden_layers_activation_function = torch.nn.LeakyReLU(0.1)
def forward(self, x):
# define forward pass.
# Here, 'x' represent the output of a network defined in __init__
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
x = self.hidden_layers_activation_function(self.hidden[0](x))
x = self.hidden_layers_activation_function(self.hidden[1](x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
def EncryptedTensorToStateTensor(self, x):
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
x = self.hidden_layers_activation_function(self.hidden_layers[0](x))
return x
def StateTensorToEncryptedTensor(self, x):
x = self.hidden_layers_activation_function(self.hidden_layers[1](x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
or just ditch the list and define the layers as separate attributes instead.
class MD5EncryptionEncoder(nn.Module):
m_save_path = "data/MD5EncryptionEncoder.model"
def __init__(self):
# call constructor from superclass
super().__init__()
self.input_to_hidden_1 = nn.Linear(128, 10240)
        self.hidden_layer1 = nn.Linear(10240, 10240)
        self.hidden_layer2 = nn.Linear(10240, 10240)
self.last_hidden_to_output = nn.Linear(10240, 128)
self.hidden_layers_activation_function = torch.nn.LeakyReLU(0.1)
def forward(self, x):
# define forward pass.
# Here, 'x' represent the output of a network defined in __init__
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
        x = self.hidden_layers_activation_function(self.hidden_layer1(x))
        x = self.hidden_layers_activation_function(self.hidden_layer2(x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
def EncryptedTensorToStateTensor(self, x):
x = self.hidden_layers_activation_function(self.input_to_hidden_1(x))
        x = self.hidden_layers_activation_function(self.hidden_layer1(x))
return x
def StateTensorToEncryptedTensor(self, x):
        x = self.hidden_layers_activation_function(self.hidden_layer2(x))
x = torch.sigmoid(self.last_hidden_to_output(x))
return x
| https://stackoverflow.com/questions/73543150/ |
torch.cuda.device_count() returns 2, but torch.load(model_path, map_location='cuda:1') throws an error | I have two GPUs and when I run
import torch
print('count: ', torch.cuda.device_count()) # prints count: 2
However, my model throws an error
RuntimeError: Attempting to deserialize object on CUDA device 2 but torch.cuda.device_count() is 1
on the line
torch.load(model_path, map_location='cuda:1')
What could cause it and how to fix it?
This issue is somehow linked to my Flask, because the training itself works with torch.load(model_path, map_location='cuda:1')
| This is a known Flask + CUDA issue. Run your Flask app with
print('count: ', torch.cuda.device_count()) in it and check if you see
count: 2
reloading
count: 1
If so, add app.run(... , use_reloader=False)
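For example (host and port are placeholders):
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, use_reloader=False)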
| https://stackoverflow.com/questions/73543195/ |
Understanding vocab.txt of vinai/bertweet-base | While looking at vocab.txt here, I was left wondering why the vocabulary for Bertweet is not continuous.
For instance, see the below sample:
...
dice 63328
)@@ 63327
struggled 63326
wraps 63324
Investors 63312
#summer@@ 63305
...
As you can see, after 63305, we have 63312, followed by 63324... what about the numbers in between?
Also, it feels a bit strange why vocabulary starts at around 3800.
르@@ 3800
utory 3798
...
Any explanations will be really appreciated.
|
Each number denotes how many times the corresponding word appears in the pre-training corpus.
Only top 64k words are included in the vocab.
| https://stackoverflow.com/questions/73546673/ |
Sample rows from tensor in every 3 rows in Python | How to sample the tensor in Python?
I want to sample a frame every 3 frames from videos, and the tensor shape will be [color, frames, height, width].
Thus, the sampling tensor shape will be [color, frames / 3, height, width]
Assume there is a tensor.size([3,300,10,10]).
After sampling rows every 3 rows in the second dimension, the tensor will be tensor.size([3,100,10,10])
Another example,
A tensor = [[1,2,3,4,5,6,7,8,9,10],[1,2,3,4,5,6,7,8,9,10],[1,2,3,4,5,6,7,8,9,10]].
After sampling rows every 3 rows in the second dimension, the tensor will be [[1,4,7,10],[1,4,7,10],[1,4,7,10]]
| Let N be the size of dimension you want to sample and you want to sample every kth row.
You can do (assuming you want to sample from the 1st dimension, and there are 4 dimensions),
new_tensor = tensor[:, torch.arange(0, N, k), : ,: ]
You may skip slicing the last two dimensions and the result won't change.
new_tensor = tensor[:, torch.arange(0, N, k)]
More specifically for the 2D tensor in question, you can use this code.
tensor=torch.tensor([
[1,2,3,4,5,6,7,8,9,10],
[1,2,3,4,5,6,7,8,9,10],
[1,2,3,4,5,6,7,8,9,10]
])
new_tensor=tensor[:, torch.arange(0,10,3)]
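As a side note, plain step slicing gives the same result without building an index tensor, and works the same way on the 4-D video tensor (e.g. video[:, ::3]):
new_tensor = tensor[:, ::3]
# tensor([[ 1,  4,  7, 10],
#         [ 1,  4,  7, 10],
#         [ 1,  4,  7, 10]])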
| https://stackoverflow.com/questions/73547020/ |
SGD Optimizer Custom Parameters | I am practicing using Pytorch and trying to implement a simple linear model.
I have initialized some random x and y inputs alongside some random parameters 'a' and 'b'.
import torch
torch.manual_seed(42)
x = torch.rand(100,1)
y = 1 + 2 * x + .1 * torch.rand(100, 1)
a = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
I would like to use gradient descent on a and b and update them as the model trains. I am using an sklearn linear regression model to compare my results to but they do not seem to be in accordance with each other as I would expect.
Here is the code to create the model
import torch.nn as nn
from torch.optim import SGD
#create simple linear model
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(1, 1)
def forward(self, x):
return self.linear(x)
#loss function
criterion = torch.nn.MSELoss()
SGDmodel = LinearModel()
I am thinking my problem could possibly be within
sgd = torch.optim.SGD([a, b], lr=0.001, momentum=0.9, weight_decay=0.1)
Can I simply input my a and b parameters into the torch.optim function and have them update as the model is trained? or do I need to embed them into my model.parameters() in a different way somehow?
Here is the rest of my code if needed.
epochs = 1
inputs = x
targets = y
for epoch in range(epochs):
sgd.zero_grad()
yhat = SGDmodel(inputs)
loss = criterion(yhat, targets)
loss.backward()
sgd.step()
print(f'Loss: {loss}')
I am using the following to see what a and b are
print(SGDmodel.linear.weight.grad)
print(SGDmodel.linear.bias.grad)
tensor([[-3.1456]])
tensor([-5.4119])
Any help would be appreciated!
|
Can I simply input my a and b parameters into the torch.optim function and have them update as the model is trained? or do I need to embed them into my model.parameters() in a different way somehow?
I think I figured it out!
I added a and b as parameters in my model by implementing torch.nn.Parameter()
I added them into the model, and called self.weight and self.bias.
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(1, 1)
self.weight = torch.nn.Parameter(a) #sets a as the weight in the linear layer
self.bias = torch.nn.Parameter(b) # sets b as the bias in the linear layer
def forward(self, x):
return self.linear(x)
Now when I call my optimizer it will update the custom parameters I defined as a and b
sgd = torch.optim.SGD(SGDmodel.parameters(), lr=0.001, momentum=0.9, weight_decay=0.1)
| https://stackoverflow.com/questions/73550795/ |
As a non-root user, how to install another version cuda in conda environment in Linux server? | I am a non-root user in a Linux server, when I input nvidia-smi it shows my cuda is 10.2.
However, my project is asked to
pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
Now, I have create my own environment, for example, cu111, by conda create --name cu111 python=3.10 and conda activate cu111
What should I do, step by step? Is the first step to run pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html?
I read some tutorials that said to add a path variable to your environment, but I am not sure if that is correct:
export PATH=/home/dj/miniconda3/envs/cu111/lib/:$PATH
Any suggestion is helpful for me!
|
I am a non-root user in a Linux server, when I input nvidia-smi it shows my cuda is 10.2.
The "CUDA version" shown by nvidia-smi is the maximum CUDA version supported by the GPU driver installed on the system. That means that no matter what CUDA toolkit or CUDA accelerated framework you choose to install, nothing newer than CUDA 10.2 or anything compiled against anything newer than CUDA 10.2 will actually work on the machine in question. If you install a version of PyTorch compiled against CUDA 11.1, it cannot work on the machine in its current state.
If you are a non-root user, there is nothing you can do to fix this other than ask a system administrator to perform a driver upgrade (if that is possible, i.e. the hardware is actually supported by a more modern driver).
| https://stackoverflow.com/questions/73551419/ |
why my model does not learn? Same on every epoch (Pytorch) | from sklearn import datasets
import pandas as pd
import numpy as np
import torch
from torch import nn
#loading the dataset
(data, target) = datasets.load_diabetes(as_frame=True,return_X_y=True) #with the as_frame=True data: pd.DataFrame
# converting data,target to tensors
data = torch.tensor(data.values,dtype=torch.float)
target = torch.tensor(target.values,dtype=torch.float)
#split the data 80% train 20% testing
a = 0.8
train_data , train_target = data[:int(a*len(data))] , data[:int(a*len(data))]
test_data , test_target = data[int(a*len(data)):] , data[int(a*len(data)):]
#constructing the model
# for this dataset dimentionality is 10 so the in_features will be 10
model = nn.Sequential(
nn.Linear(in_features=10, out_features=128),
nn.Linear(in_features=128, out_features=128),
nn.Linear(in_features=128, out_features=1)
)
#loss fn , optimizer
loss_fn = nn.L1Loss() # mean absolute error (L1) loss
optimizer = torch.optim.SGD(params = model.parameters(),lr=0.001) #stochastic gradient descent
#training loop
epochs = 1000
for epoch in range(epochs):
#1. make prediction
model.train()
train_pred = model(train_data)
loss = loss_fn(train_pred, train_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
model.eval()
with torch.inference_mode():
test_pred = model(test_data)
loss_test = loss_fn(test_pred, test_target)
    if epoch%(epochs//min(10,epochs))==0: print(f"{epoch} - training loss: {round(float(loss),4)} | test loss: {round(float(loss_test),4)}")
Output
0 - training loss: 0.0837 | test loss: 0.0806
100 - training loss: 0.0433 | test loss: 0.0431
200 - training loss: 0.0426 | test loss: 0.0425
300 - training loss: 0.042 | test loss: 0.0419
400 - training loss: 0.0414 | test loss: 0.0414
500 - training loss: 0.0408 | test loss: 0.0408
600 - training loss: 0.0403 | test loss: 0.0403
700 - training loss: 0.0398 | test loss: 0.0398
800 - training loss: 0.0393 | test loss: 0.0394
900 - training loss: 0.0388 | test loss: 0.0389
| First, as it was mentioned in the comments, you probably meant:
train_data, train_target = data[:int(a*len(data))] , target[:int(a*len(data))]
test_data, test_target = data[int(a*len(data)):] , target[int(a*len(data)):]
Next, your target size is not consistent with the output size (this should give a warning). Using
loss = loss_fn(train_pred, train_target.unsqueeze(1))
and
loss_test = loss_fn(test_pred, test_target.unsqueeze(1))
should give you some traction.
| https://stackoverflow.com/questions/73554174/ |
Runtime error generated by pytorch functions | Let us consider following code:
from multiprocessing import freeze_support
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
from torchvision import datasets,models, transforms
import time
import os
import copy
import matplotlib.pyplot as plt
#import torch.backends.cudnn as cudnn
#cudnn.benchmark = True
plt.ion() # interactive mode
path ='C:/Users/User/PycharmProjects/AI_Project/hymenoptera_data'
data_transforms ={
'train':transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val':transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
image_datasets ={x:datasets.ImageFolder(os.path.join(path,x),data_transforms[x])
for x in ['train','val']}
dataloaders ={x: torch.utils.data.DataLoader(image_datasets[x],batch_size=4,shuffle=True,num_workers=4)
for x in ['train','val']}
class_names = image_datasets['train'].classes
device =torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def imshow(inp,title=None):
inp =inp.numpy().transpose((1,2,0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a b
inputs,classes =next((iter(dataloaders['train'])))
out =torchvision.utils.make_grid(inputs)
imshow(out,title=[class_names[x]for x in classes])
When I run following code, I got this error
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I don't see exactly where this error is coming from (maybe the DataLoader causes it), but how do I fix it?
| You need to run the code that spawns multiple workers (like your DataLoader) under the main guard. On Windows (and whenever the spawn start method is used instead of fork), child processes re-import your script, and the if __name__ == '__main__': guard stops them from re-executing the worker-creating code.
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
from torchvision import datasets,models, transforms
import time
import os
import copy
import matplotlib.pyplot as plt
#import torch.backends.cudnn as cudnn
#cudnn.benchmark = True
plt.ion() # interactive mode
path ='C:/Users/User/PycharmProjects/AI_Project/hymenoptera_data'
if __name__ == "__main__":
data_transforms ={
'train':transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val':transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
image_datasets ={x:datasets.ImageFolder(os.path.join(path,x),data_transforms[x])
for x in ['train','val']}
dataloaders ={x: torch.utils.data.DataLoader(image_datasets[x],batch_size=4,shuffle=True,num_workers=4)
for x in ['train','val']}
class_names = image_datasets['train'].classes
device =torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def imshow(inp,title=None):
inp =inp.numpy().transpose((1,2,0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a b
inputs,classes =next((iter(dataloaders['train'])))
out =torchvision.utils.make_grid(inputs)
imshow(out,title=[class_names[x]for x in classes])
| https://stackoverflow.com/questions/73558458/ |
Best way to store seed vectors from pytorch? | I like to make a pytorch calculation reproducible without storing a large random vector for each step. I tried to first generate a random seed and then re-seed the random number generator like this:
seed = torch.rand(1, dtype=torch.float64)
torch.manual_seed(seed) # re-seed, so we get the same vector as we would get when using a stored seed
torch.save(seed, "seedfile") # store the seed
myvector = torch.randn(myvector.shape)
This way I would only need to store a float to reproduce the result. But when I use this in a loop, I get always the same result inside the loop.
Explanation what I try to achieve: Let's say I generate a batch of images in a loop. Each image depends on an initialization vector. Now I can reproduce the image by storing the initialization vector and loading it when I want to re-do the calculation (e.g. with other hyper-parameters). But when the vector is random anyway, it is sufficient to store the random seed.
To do so, I currently generate a random seed (a float64 in that code) and then manually seed with it. The manual_seed is not useful in the first run, but should not be a problem either. When I want to reproduce the image, I do not generate the manual seed with torch.rand, but load the seed from a file. This way I need less than 1 kb (with torch.save which has some overhead, the actual data I need to store would be just 8 byte) instead of, e.g., 64 kb for storing the vector that is generated by
loaded_seed = torch.load("seedfile")
torch.manual_seed(loaded_seed)
myvector = torch.randn(myvector.shape)
| It looks like the problem was (re-)seeding with torch.rand(1) instead of torch.random.seed().
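A minimal sketch of that pattern (torch.random.seed() re-seeds the default generator with a fresh non-deterministic seed and returns it as an integer, so only that integer needs to be stored; shape is a placeholder for the vector size you need):
seed = torch.random.seed()                 # re-seed and get back the seed that was used (a 64-bit int)
torch.save(seed, "seedfile")               # tiny payload instead of the whole random vector
myvector = torch.randn(shape)

# later, to reproduce exactly the same vector:
torch.manual_seed(torch.load("seedfile"))
myvector_again = torch.randn(shape)        # equal to myvector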
| https://stackoverflow.com/questions/73558510/ |
Why does Softmax(dim=0) produce poor results? | I'm getting weird results from a PyTorch Softmax layer, trying to figure out what's going on, so I boiled it down to a minimal test case, a neural network that just learns to decode binary numbers into one-hot form.
Just Softmax() gets a warning:
UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
Okay, so what to supply for X? I had been guessing 0 would be a sensible argument. Just to make sure, I tried Softmax(dim=1):
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Okay, so that seems clear about allowed values. -1 apparently means the last dimension, so in this case, where the output is just a one-dimensional vector, that should mean the same thing as 0. Trying it with Softmax(dim=-1) works fine; in a few thousand epochs, the network reliably learns to decode the numbers with 100% accuracy.
Just to make sure it gives the same results, I tried it again with Softmax(dim=0) (as shown below)...
And it does not give the same result at all. The accuracy oscillates, but levels off somewhere around 20-30%.
What's going on? Why is 0 not the same as -1 in this context, and what exactly is 0 doing?
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
bits = 5
class Dataset1(Dataset):
def __init__(self):
s = []
for i in range(1 << bits):
x = []
for c in format(i, "b").zfill(bits):
x.append(float(c == "1"))
y = []
for j in range(1 << bits):
y.append(float(i == j))
x = torch.as_tensor(x)
y = torch.as_tensor(y)
s.append((x, y))
self.s = s
def __len__(self):
return len(self.s)
def __getitem__(self, i):
return self.s[i]
trainDs = Dataset1()
batchSize = 16
trainDl = DataLoader(trainDs, batch_size=batchSize)
for x, y in trainDl:
print(x.shape)
print(y.shape)
break
hiddenSize = 100
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layers = nn.Sequential(
nn.Linear(bits, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, hiddenSize),
nn.Tanh(),
nn.Linear(hiddenSize, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, 1 << bits),
nn.Softmax(dim=0),
)
def forward(self, x):
return self.layers(x)
device = torch.device("cpu")
model = Net().to(device)
def accuracy(model, ds):
n = 0
for x, y in ds:
with torch.no_grad():
z = model(x)
if torch.argmax(y) == torch.argmax(z):
n += 1
return n / len(ds)
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epochs = 10000
interval = epochs // 10
for epoch in range(epochs + 1):
for bi, (x, y) in enumerate(trainDl):
x = x.to(device)
y = y.to(device)
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % interval == 0 and not bi:
print(f"{epoch}\t{loss}\t{accuracy(model, trainDs)}")
| In the accuracy function, you forgot to create a new dimension for the batch (batchsize=1), which explains why it gives that error when you use dim=1. Regarding the dimension of the softmax, you can check this post.
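To make the dim question concrete, here is a small illustrative check (shapes chosen arbitrarily). With a 2-D (batch, classes) output, dim=-1 (or dim=1) normalizes each sample's scores over the classes, while dim=0 normalizes each class's scores over the batch, so a sample's "probabilities" end up depending on whatever else happens to be in the batch:
logits = torch.randn(16, 32)             # (batch, classes)
p_over_classes = logits.softmax(dim=-1)  # each row (one sample) sums to 1 -> what a classifier needs
p_over_batch = logits.softmax(dim=0)     # each column sums to 1 -> normalizes across the batch
print(p_over_classes.sum(dim=1))         # all ~1.0
print(p_over_batch.sum(dim=0))           # all ~1.0, but per-sample scores now depend on the rest of the batch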
Below is the modified code.
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
bits = 5
class Dataset1(Dataset):
def __init__(self):
s = []
for i in range(1 << bits):
x = []
for c in format(i, "b").zfill(bits):
x.append(float(c == "1"))
y = []
for j in range(1 << bits):
y.append(float(i == j))
x = torch.as_tensor(x)
y = torch.as_tensor(y)
s.append((x, y))
self.s = s
def __len__(self):
return len(self.s)
def __getitem__(self, i):
return self.s[i]
trainDs = Dataset1()
batchSize = 16
trainDl = DataLoader(trainDs, batch_size=batchSize, drop_last=True)
for x, y in trainDl:
print(x.shape)
print(y.shape)
break
hiddenSize = 100
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layers = nn.ModuleList(
[nn.Linear(bits, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, hiddenSize),
nn.Tanh(),
nn.Linear(hiddenSize, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, 1 << bits),
nn.Softmax(dim=1)]
)
def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            # print(x.shape) here if you want to inspect the intermediate shapes
return x
device = torch.device("cpu")
model = Net().to(device)
def accuracy(model, ds):
n = 0
for x, y in ds:
x = x.unsqueeze(0) # create a batch of size 1
y = y.unsqueeze(0) # create a batch of size 1
with torch.no_grad():
z = model(x)
if torch.argmax(y) == torch.argmax(z):
n += 1
return n / len(ds)
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epochs = 10000
interval = epochs // 10
for epoch in range(epochs + 1):
for bi, (x, y) in enumerate(trainDl):
x = x.to(device)
y = y.to(device)
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % interval == 0 and not bi:
print(f"{epoch}\t{loss}\t{accuracy(model, trainDs)}")
| https://stackoverflow.com/questions/73561694/ |
Setting torch.nn.linear() diagonal elements zero | I am trying to build a model with a torch.nn.Linear layer that has the same input and output size, so its weight is a square matrix. In this model, I want the diagonal elements of this matrix fixed to zero. That means, during training, I don't want the diagonal elements to change from zero. I could only think of adding some kind of step that sets the diagonal elements to zero at each training epoch, but I am not sure if that is a valid or efficient way. Is there a definite way of making this kind of layer which can ensure that the diagonal elements don't change?
Sorry if my question is weird.
| You can always implement your own layers. Note that all custom layers should be implemented as classes derived from nn.Module. For example:
class LinearWithZeroDiagonal(nn.Module):
def __init__(self, num_features, bias):
super(LinearWithZeroDiagonal, self).__init__()
self.base_linear_layer = nn.Linear(num_features, num_features, bias)
def forward(self, x):
# first, make sure the diagonal is zero
with torch.no_grad():
self.base_linear_layer.weight.fill_diagonal_(0.)
return self.base_linear_layer(x)
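A quick usage check (feature size picked arbitrarily). Note that the zeroing happens at the start of every forward call, so any diagonal values the optimizer re-introduces are reset before they can influence the output:
layer = LinearWithZeroDiagonal(num_features=4, bias=True)
x = torch.randn(2, 4)
y = layer(x)
print(layer.base_linear_layer.weight.diagonal())  # all zeros after the forward pass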
| https://stackoverflow.com/questions/73563677/ |
Trying to run imagen-pytorch | I'm trying to run this "imagen-pytorch" repo https://github.com/lucidrains/imagen-pytorch/blob/main/README.md
I've cloned it and tried running the setup.py file on pycharm (CE) however I get the following error:
$ python setup.py
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
There are no further instructions on the README.md file. Please can you explain where I am going wrong?
| You have to run with some command.
The most popular are
python setup.py build
python setup.py install
You need them to install module.
You can see other commands using
python setup.py --help-commands
Maybe they forgot to show this information because other modules on GitHub already show it ;)
| https://stackoverflow.com/questions/73569307/ |
Does equal probabilities not summing to one in torch.utils.data.WeightedRandomSampler still make it uniform? | In pytorch, there is a sampler class called WeightedRandomSampler (https://pytorch.org/docs/stable/data.html#torch.utils.data.WeightedRandomSampler). It ('weights' parameter) expects probabilities for N samples. For uniform distribution, I believe it expects array with 1/N value.
But if I put say 0.5 for each sample, where N*0.5 is not equal to 1, does it still make the sampling uniform, given equal probabilities are there for each sample?
| Yes, the sampling will still be uniform. Only the relative magnitude of the weights with respect to the other weights is important, not the absolute magnitude, as pytorch normalizes the weights.
If we look under the hood of WeightedRandomSampler, it makes a call to torch.multinomial which itself makes a call to torch.distributions.Categorical, which we can see here (line 57) normalizes the weights such that they sum to one.
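A quick empirical check of that (WeightedRandomSampler draws indices with torch.multinomial under the hood; the counts are random but should come out roughly equal):
import torch

weights = torch.full((4,), 0.5)                              # sums to 2.0, not 1.0
idx = torch.multinomial(weights, 100000, replacement=True)
print(torch.bincount(idx))                                   # roughly 25000 draws for each of the 4 indices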
| https://stackoverflow.com/questions/73570419/ |
Get image name of pytorch dataset | I am using a custom dataset for image segmentation. While visualizing some of the images and masks I found an error. The problem for me now is how to find the name of the image. The code I use for the pytorch dataset creation is:
class SegmentationDataset(Dataset):
def __init__(self, df, augmentations):
self.df = df
self.augmentations = augmentations
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
row = self.df.iloc[idx]
image_path = DATA_DIR + row.images
mask_path = DATA_DIR + row.masks
image = skimage.io.imread(image_path)
mask = skimage.io.imread(mask_path)
mask = np.expand_dims(mask, axis = -1)
if self.augmentations:
data = self.augmentations(image = image, mask = mask)
image = data['image']
mask = data['mask']
image = np.transpose(image, (2, 0, 1)).astype(np.float32)
mask = np.transpose(mask, (2, 0, 1)).astype(np.float32)
image = torch.Tensor(image) / 255.0
mask = torch.round(torch.Tensor(mask) / 255.0)
return image, mask
trainset = SegmentationDataset(train_df, get_train_augs())
validset = SegmentationDataset(valid_df, get_valid_augs())
When I then print one specific image, I see that the mask is not available/wrong:
idx = 9
print('Drawn sample ID:', idx)
image, mask = validset[idx]
show_image(image, mask)
How do I now get the image name for idx = 9?
| I'd imagine you could print out one of the following just under the line image = skimage.io.imread(image_path); it should help lead you to your answer:
print(row)
print(row.images)
print(images)
print(image_path)
To get the file name after you have parsed the fully qualified path above:
my_str = '/my/data/path/images/wallpaper.jpg'
result = my_str.rsplit('/', 1)[1]
print(result) # 'wallpaper.jpg'
with_slash = '/' + my_str.rsplit('/', 1)[1]
print(with_slash) # '/wallpaper.jpg'
print(my_str.rsplit('/', 1))    # ['/my/data/path/images/', 'wallpaper.jpg']
print(my_str.rsplit('/', 1)[1]) # 'wallpaper.jpg'
| https://stackoverflow.com/questions/73571393/ |
Is it possible that the inference time is large while number of parameters and flops are low in pytorch? | I calculated the flops of my network using Pytorch.
I used the function 'profile' from the 'thop' library.
In my experiment, my network showed:
Flops : 619.038M
Parameters : 4.191M
Inference time : 25.911
For comparison, I checked the flops and parameters of ResNet50, which showed:
Flops : 1.315G
Parameters: 26.596M
Inference time : 8.553545
Is it possible that the inference time is large while the flops are low?
Or are there some functions whose flops the 'profile' function can't measure?
However, similar results came out using FlopCountAnalysis in fvcore.nn and get_model_complexity_info in ptflops.
Here is the code I used to measure the inference time with Pytorch.
model.eval()
model.cuda()
dummy_input = torch.randn(1,3,32,32).cuda()
#flops = FlopCountAnalysis(model, dummy_input)
#print(flop_count_table(flops))
#print(flops.total())
macs, params = profile(model, inputs=(dummy_input,))
macs, params = clever_format([macs, params], "%.3f")
print('Flops:',macs)
print('Parameters:',params)
starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
repetitions = 300
timings=np.zeros((repetitions,1))
for _ in range(10):
_ = model(dummy_input)
# MEASURE PERFORMANCE
with torch.no_grad():
for rep in range(repetitions):
starter.record()
_ = model(dummy_input)
ender.record()
# WAIT FOR GPU SYNC
torch.cuda.synchronize()
curr_time = starter.elapsed_time(ender)
timings[rep] = curr_time
print('time(s) :',np.average(timings))
| It is an absolutely normal situation. The thing is that FLOPs (or MACs) are theoretical measures that may be useful when you want to disregard the hardware/software optimizations that make different operations run faster or slower on different hardware.
For example, in the case of neural networks, different architectures will have different CPU/GPU utilization. Let's consider two simple architectures with almost the same number of parameters / FLOPs:
Deep network:
layers = [nn.Conv2d(3, 16, 3)]
for _ in range(12):
layers.extend([nn.Conv2d(16, 16, 3, padding=1)])
deep_model = nn.Sequential(*layers)
Wide network
wide_model = nn.Sequential(nn.Conv2d(3, 1024, 3))
Modern GPUs allow you to parallelize a large number of simple operations.
But when you have a deep network, you need to know the outputs of layer[i] to compute the outputs of layer[i+1]. So it becomes a blocking factor that reduces the utilization of your hardware.
Complete example:
import numpy as np
import torch
from thop import clever_format, profile
from torch import nn
def measure(model, name):
model.eval()
model.cuda()
dummy_input = torch.randn(1, 3, 64, 64).cuda()
macs, params = profile(model, inputs=(dummy_input,), verbose=0)
macs, params = clever_format([macs, params], "%.3f")
print("<" * 50, name)
print("Flops:", macs)
print("Parameters:", params)
starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(
enable_timing=True
)
repetitions = 300
timings = np.zeros((repetitions, 1))
for _ in range(10):
_ = model(dummy_input)
# MEASURE PERFORMANCE
with torch.no_grad():
for rep in range(repetitions):
starter.record()
_ = model(dummy_input)
ender.record()
# WAIT FOR GPU SYNC
torch.cuda.synchronize()
curr_time = starter.elapsed_time(ender)
timings[rep] = curr_time
print("time(ms) :", np.average(timings))
layers = [nn.Conv2d(3, 16, 3)]
for _ in range(12):
layers.extend([nn.Conv2d(16, 16, 3, padding=1)])
deep_model = nn.Sequential(*layers)
measure(deep_model, "My deep model")
wide_model = nn.Sequential(nn.Conv2d(3, 1024, 3))
measure(wide_model, "My wide model")
Results:
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< My deep model
Flops: 107.940M
Parameters: 28.288K
time(ms) : 0.6160109861691793
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< My wide model
Flops: 106.279M
Parameters: 28.672K
time(ms) : 0.1514971748739481
As you can see, the models have a similar number of parameters/flops, but the computing time is 4x larger for the deep network.
It is just one possible reason why the inference time is large when the number of parameters and flops are low. You may need to take into account other underlying hardware/software optimizations.
| https://stackoverflow.com/questions/73577108/ |
How to redefine model for Pytorch max pool error | Hi, I am trying to use TensorBoard for the model given at - https://github.com/siddharthvaria/WordPair-CNN/blob/master/src/pytorch_impl/cnn_pdtb_arg_multiclass_jl.py
However, I get an error at line 135:
x_max_pools = [F.max_pool1d(xi, xi.size(2)).squeeze(2) for xi in x_convs]
The error is:
max_pool1d(): argument 'kernel_size' must be tuple of ints. Is there any solution to resolve this?
There is a discussion about this in pytorch forum - https://discuss.pytorch.org/t/typeerror-avg-pool2d-argument-kernel-size-must-be-tuple-of-ints-not-proxy/108482
| The line that raises the error should be line 135:
# At this point x_convs is [(batch_size, nfmaps_arg, seq_len_new), ...]*len(fsz_arg)
x_max_pools = [F.max_pool1d(xi, xi.size(2)).squeeze(2) for xi in x_convs] # [(batch_size, nfmaps_arg), ...]*len(fsz_arg)
Actually max_pool1d reduces the size of dimension 2, which represents the sequence (after the batch and channel dimensions); see the documentation. Plus, according to the discussion you point out, the kernel_size argument now requires a concrete value at execution and cannot be inferred dynamically from the inputs.
Fortunately your case is particularly simple because the kernel size is the entire sequence length. In other words, your pooling is actually a global max pooling that reduces the sequence to a single scalar. This scalar is simply the maximum of all the values (check the shapes in the comments!). So it is actually easy to implement a version without dynamic shapes. Just change line 135 to:
x_max_pools = [torch.max(xi, dim=2)[0] for xi in x_convs] # [(batch_size, nfmaps_arg), ...]*len(fsz_arg)
Note:
there is no direct implementation of a global_max_pooling layer, but as you can see it is just a torch.max
Actually torch.max returns a tuple (max, max_indices), so you have to take only the first element with [0] (see the doc)
gradients can propagate through torch.max just like through usual max_pooling layers (see here for one source, for instance)
torch.max(xi, dim=2) already removes dimension 2, so its values have exactly the same shape as F.max_pool1d(xi, xi.size(2)).squeeze(2); it is the same thing, just cleaner
| https://stackoverflow.com/questions/73588661/ |
CrossEntropyLoss showing poor accuracy on 2d output | I'm trying some experiments on a simple neural network that just tries to learn the squares of some random numbers, represented as arrays of decimal digits, code copied below, with changes indicated by comments.
The version using nn.Softmax(dim=2) and criterion = nn.BCELoss() works fine.
But for situations like this, where the output is N-way classification (in this case, an array of outputs, each of which indicates one of ten decimal digits), CrossEntropyLoss is considered ideal, so I made that change. nn.CrossEntropyLoss does softmax for you, so I also commented out the nn.Softmax line.
And instead of performing slightly better, the result performs much worse; it now maxes out around something like 76% accuracy on the training set, where previously it had reached 100%.
What am I doing wrong? The same substitution worked fine on an even simpler test case https://github.com/russellw/ml/blob/main/compound_output/single.py with the main difference being that case produces only a single N-way output whereas this produces an array of them. Am I misunderstanding how CrossEntropyLoss handles shapes, or some such?
import random
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
def oneHot(n, i, s):
for j in range(n):
s.append(float(i == j))
size = 12
class Dataset1(Dataset):
def __init__(self):
s = []
for _ in range(1000):
a = random.randrange(10 ** size)
x = []
for c in str(a).zfill(size):
oneHot(10, int(c), x)
y = []
for c in str(a ** 2).zfill(size * 2):
y1 = []
oneHot(10, int(c), y1)
y.append(y1)
x = torch.as_tensor(x)
y = torch.as_tensor(y)
s.append((x, y))
self.s = s
def __len__(self):
return len(self.s)
def __getitem__(self, i):
return self.s[i]
trainDs = Dataset1()
testDs = Dataset1()
batchSize = 20
trainDl = DataLoader(trainDs, batch_size=batchSize)
testDl = DataLoader(testDs, batch_size=batchSize)
for x, y in trainDl:
print(x.shape)
print(y.shape)
break
class View(nn.Module):
def __init__(self, *shape):
super(View, self).__init__()
self.shape = shape
def forward(self, x):
batchSize = x.data.size(0)
shape = (batchSize,) + self.shape
return x.view(*shape)
hiddenSize = 100
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layers = nn.Sequential(
nn.Linear(size * 10, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, hiddenSize),
nn.Tanh(),
nn.Linear(hiddenSize, hiddenSize),
nn.ReLU(),
nn.Linear(hiddenSize, size * 2 * 10),
View(size * 2, 10),
#nn.Softmax(dim=2),
)
def forward(self, x):
return self.layers(x)
device = torch.device("cpu")
model = Net().to(device)
print(sum(p.numel() for p in model.parameters()))
def accuracy(model, ds):
n = 0
for x, y in ds:
# make input sample shape match a mini batch
# for the sake of things like softmax that cause the model
# to expect a specific shape
x = x.unsqueeze(0)
# this is just for reporting, not part of training
# so we don't need to track gradients here
with torch.no_grad():
z = model(x)
# conversely, the model will return a batch-shaped output
# so unwrap it for comparison with the unwrapped expected output
z = z[0]
# at this point, if the output were a scalar mapped to one-hot
# we could use a simple argmax comparison
# but it is an array thereof
# which makes comparison a little more complex
assert y.shape[0] == size * 2
assert z.shape[0] == size * 2
for i in range(0, size * 2):
if torch.argmax(y[i]) == torch.argmax(z[i]):
n += 1
return n / (len(ds) * size * 2)
#criterion = nn.BCELoss()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epochs = 10000
interval = epochs // 10
for epoch in range(epochs + 1):
for bi, (x, y) in enumerate(trainDl):
x = x.to(device)
y = y.to(device)
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % interval == 0 and not bi:
print(
f"{epoch}\t{loss}\t{accuracy(model, trainDs)}\t{accuracy(model, testDs)}"
)
| Even if it doesn't raise an error, torch.nn.BCELoss is not actually what you want to minimize here, because it wrongly interprets your tensors as a multitude of binary classifications. Therefore it is a good idea to switch to torch.nn.CrossEntropyLoss.
As you can see in its documentation, the function takes class indices as targets (not one-hot-encoded) and only supports a single batch dimension, so you can try:
x = x.to(device)
y = y.to(device)
# Flat together the figures prediction in the batch
pred = model(x).reshape(-1, 10) # shape (batch_size*2*size , 10)
# Reverse one-hot encoding for targets + flat
y = torch.argmax(y, dim=2).reshape(-1) # shape (batch_size*2*size, )
loss = criterion(pred, y)
I got 100% training accuracy at epoch 1100 with your config (same architecture, CPU, batch size 20) as you can see:
Note that the model actually overfits the training data in this scenario, but that is another problem...
| https://stackoverflow.com/questions/73591918/ |
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple | I am studying NLP and trying to make a model for classifying sentences. I am creating my own class around a model, but I get an error saying that the input should be of type Tensor, not tuple. I am using transformers version 4.21.2.
class BertClassificationModel(nn.Module):
def __init__(self, bert_model_name, num_labels, dropout=0.1):
super(BertClassificationModel, self).__init__()
self.bert = BertForSequenceClassification.from_pretrained(bert_model_name, return_dict=False)
self.dropout = nn.Dropout(dropout)
self.classifier = nn.Linear(768, num_labels)
self.num_labels = num_labels
def forward(self, input_ids, attention_mask=None, token_type_ids=None):
pooled_output = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
return logits
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple
| The issue you face, is that the output of self.bert is not a tensor but a tuple:
from transformers import BertForSequenceClassification, BertTokenizer
bert_model_name = "bert-base-cased"
t = BertTokenizer.from_pretrained(bert_model_name)
m = BertForSequenceClassification.from_pretrained(bert_model_name, return_dict=False)
o=m(**t("test test", return_tensors="pt"))
print(type(o))
Output:
tuple
I personally do not recommend using return_dict=False as the code becomes more difficult to read. But changing this parameter doesn't help in your case, as you want to use the pooler output which is removed by the classification head of BertForSequenceClassification (the output of BertForSequenceClassification is listed here).
You already wrote in your own answer, that you don't intend to use the classification head of BertForSequenceClassification and you can therefore load BertModel directly (instead of initializing BertForSequenceClassification and only using BERT as you did with: BertForSequenceClassification.from_pretrained(bert_model_name, return_dict=True).bert):
from torch import nn
from transformers import BertModel, BertTokenizer
class BertClassificationModel(nn.Module):
def __init__(self, bert_model_name, num_labels, dropout=0.1):
super(BertClassificationModel, self).__init__()
self.bert = BertModel.from_pretrained(bert_model_name)
self.dropout = nn.Dropout(dropout)
self.classifier = nn.Linear(768, num_labels)
self.num_labels = num_labels
def forward(self, input_ids, attention_mask=None, token_type_ids=None):
pooled_output = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids).pooler_output
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
return logits
m = BertClassificationModel("bert-base-cased",4, 0.1)
o = m(**t("test test", return_tensors="pt"))
print(o.shape)
Output:
torch.Size([1, 4])
| https://stackoverflow.com/questions/73593136/ |
What is the reason for low GPU util when training machine learning model? | Suppose I have 8 gpus on a server. (From 0 to 7)
When I train a simple (and small) model on a gpu #0, it takes about 20 minutes per epoch.
However, when I load more than 5 or 6 models on some gpus,
for example, 2 experiments per gpu from gpu #0 to #2, (6 in total)
the training time per epoch explodes. ( about 1 hour ),
When I train 2 models per gpu for all gpus ( 16 experiments in total ), it takes about 3 hours to complete an epoch.
When I see the CPU utilization, it is fine.
But GPU utilization drops.
What is the reason for the drop, and how can I solve the problem?
| There are basically two ways of using multi-GPUs for deep learning:
Use torch.nn.DataParallel(module) (DP)
This function is quite discouraged by the official documentation because it replicates the entire module on all GPUs at each forward pass; at the end of the forward pass, the replicas are destroyed. Therefore, when you have big models it can be an important bottleneck in your training time and can even make it slower than a single GPU. This can be the case, for instance, when you freeze a large part of a big module for fine-tuning.
That's why you may consider using:
torch.nn.parallel.DistributedDataParallel(module, device_ids=) (DDP) documentation
This function often requires refactoring your code a little bit more, but it improves the efficiency because it copies the model to each GPU only once, at the beginning of training. The models are persistent over time and the gradients are synchronized after each backward pass via hooks. To go further, you can distribute the data and the optimizer as well to avoid data transfer. You can do it simply (as well as parallelize modules) using torch-ignite/distributed.
I don't know what kind of method you tried but I encourage you to use DDP instead of DP if you are using it.
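For reference, a minimal DDP sketch (assumptions: a single node launched with torchrun, and MyModel / dataset / num_epochs are placeholders for your own objects):
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# launch with: torchrun --nproc_per_node=8 train.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = MyModel().cuda(local_rank)           # placeholder model
model = DDP(model, device_ids=[local_rank])  # copied to its GPU once and kept there

sampler = DistributedSampler(dataset)        # placeholder dataset
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(num_epochs):              # placeholder epoch count
    sampler.set_epoch(epoch)                 # different shuffle every epoch
    for x, y in loader:
        ...                                  # usual forward / backward / step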
| https://stackoverflow.com/questions/73594247/ |
400% higher error with PyTorch compared with identical Keras model (with Adam optimizer) |
TLDR:
A simple (single hidden-layer) feed-forward Pytorch model trained to predict the function y = sin(X1) + sin(X2) + ... sin(X10) substantially underperforms an identical model built/trained with Keras. Why is this so and what can be done to mitigate the difference in performance?
In training a regression model, I noticed that PyTorch drastically underperforms an identical model built with Keras.
This phenomenon has been observed and reported previously:
The same model produces worse results on pytorch than on tensorflow
CNN model in pytorch giving 30% less accuracy to Tensoflowflow model:
PyTorch Adam vs Tensorflow Adam
Suboptimal convergence when compared with TensorFlow model
RNN and Adam: slower convergence than Keras
PyTorch comparable but worse than keras on a simple feed forward network
Why is the PyTorch model doing worse than the same model in Keras even with the same weight initialization?
Why Keras behave better than Pytorch under the same network configuration?
The following explanations and suggestions have been made previously as well:
Using the same decimal precision (32 vs 64): 1, 2,
Using a CPU instead of a GPU: 1,2
Change retain_graph=True to create_graph=True in computing the 2nd derivative with autograd.grad: 1
Check if keras is using a regularizer, constraint, bias, or loss function in a different way from pytorch: 1,2
Ensure you are computing the validation loss in the same way: 1
Use the same initialization routine: 1,2
Training the pytorch model for longer epochs: 1
Trying several random seeds: 1
Ensure that model.eval() is called in validation step when training pytorch model: 1
The main issue is with the Adam optimizer, not the initialization: 1
To understand this issue, I trained a simple two-layer neural network (much simpler than my original model) in Keras and PyTorch, using the same hyperparameters and initialization routines, and following all the recommendations listed above. However, the PyTorch model results in a mean squared error (MSE) that is 400% higher than the MSE of the Keras model.
Here is my code:
0. Imports
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
from torch.utils.data import Dataset, DataLoader
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.regularizers import L2
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
1. Generate a reproducible dataset
def get_data():
np.random.seed(0)
Xtrain = np.random.normal(0, 1, size=(7000,10))
Xval = np.random.normal(0, 1, size=(700,10))
ytrain = np.sum(np.sin(Xtrain), axis=-1)
yval = np.sum(np.sin(Xval), axis=-1)
scaler = MinMaxScaler()
ytrain = scaler.fit_transform(ytrain.reshape(-1,1)).reshape(-1)
yval = scaler.transform(yval.reshape(-1,1)).reshape(-1)
return Xtrain, Xval, ytrain, yval
class XYData(Dataset):
def __init__(self, X, y):
super(XYData, self).__init__()
self.X = torch.tensor(X, dtype=torch.float32)
self.y = torch.tensor(y, dtype=torch.float32)
self.len = len(y)
def __getitem__(self, index):
return (self.X[index], self.y[index])
def __len__(self):
return self.len
# Data, dataset, and dataloader
Xtrain, Xval, ytrain, yval = get_data()
traindata = XYData(Xtrain, ytrain)
valdata = XYData(Xval, yval)
trainloader = DataLoader(dataset=traindata, shuffle=True, batch_size=32, drop_last=False)
valloader = DataLoader(dataset=valdata, shuffle=True, batch_size=32, drop_last=False)
2. Build Keras and PyTorch models with identical hyperparameters and initialization methods
class TorchLinearModel(nn.Module):
def __init__(self, input_dim=10, random_seed=0):
super(TorchLinearModel, self).__init__()
_ = torch.manual_seed(random_seed)
self.hidden_layer = nn.Linear(input_dim,100)
self.initialize_layer(self.hidden_layer)
self.output_layer = nn.Linear(100, 1)
self.initialize_layer(self.output_layer)
def initialize_layer(self, layer):
_ = torch.nn.init.xavier_normal_(layer.weight)
#_ = torch.nn.init.xavier_uniform_(layer.weight)
_ = torch.nn.init.constant(layer.bias,0)
def forward(self, x):
x = self.hidden_layer(x)
x = self.output_layer(x)
return x
def mean_squared_error(ytrue, ypred):
return torch.mean(((ytrue - ypred) ** 2))
def build_torch_model():
torch_model = TorchLinearModel()
optimizer = optim.Adam(torch_model.parameters(),
betas=(0.9,0.9999),
eps=1e-7,
lr=1e-3,
weight_decay=0)
return torch_model, optimizer
def build_keras_model():
x = layers.Input(shape=10)
z = layers.Dense(units=100, activation=None, use_bias=True, kernel_regularizer=None,
bias_regularizer=None)(x)
y = layers.Dense(units=1, activation=None, use_bias=True, kernel_regularizer=None,
bias_regularizer=None)(z)
keras_model = Model(x, y, name='linear')
optimizer = Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.9999, epsilon=1e-7,
amsgrad=False)
keras_model.compile(optimizer=optimizer, loss='mean_squared_error')
return keras_model
# Instantiate models
torch_model, optimizer = build_torch_model()
keras_model = build_keras_model()
3. Train PyTorch model for 100 epochs:
torch_trainlosses, torch_vallosses = [], []
for epoch in range(100):
# Training
losses = []
_ = torch_model.train()
for i, (x,y) in enumerate(trainloader):
optimizer.zero_grad()
ypred = torch_model(x)
loss = mean_squared_error(y, ypred)
_ = loss.backward()
_ = optimizer.step()
losses.append(loss.item())
torch_trainlosses.append(np.mean(losses))
# Validation
losses = []
_ = torch_model.eval()
with torch.no_grad():
for i, (x, y) in enumerate(valloader):
ypred = torch_model(x)
loss = mean_squared_error(y, ypred)
losses.append(loss.item())
torch_vallosses.append(np.mean(losses))
print(f"epoch={epoch+1}, train_loss={torch_trainlosses[-1]:.4f}, val_loss={torch_vallosses[-1]:.4f}")
4. Train Keras model for 100 epochs:
history = keras_model.fit(Xtrain, ytrain, sample_weight=None, batch_size=32, epochs=100,
validation_data=(Xval, yval))
5. Loss in training history
plt.plot(torch_trainlosses, color='blue', label='PyTorch Train')
plt.plot(torch_vallosses, color='blue', linestyle='--', label='PyTorch Val')
plt.plot(history.history['loss'], color='brown', label='Keras Train')
plt.plot(history.history['val_loss'], color='brown', linestyle='--', label='Keras Val')
plt.legend()
Keras records a much lower error in the training. Since this may be due to a difference in how Keras computes the loss, I calculated the prediction error on the validation set with sklearn.metrics.mean_squared_error
6. Validation error after training
ypred_keras = keras_model.predict(Xval).reshape(-1)
ypred_torch = torch_model(torch.tensor(Xval, dtype=torch.float32))
ypred_torch = ypred_torch.detach().numpy().reshape(-1)
mse_keras = metrics.mean_squared_error(yval, ypred_keras)
mse_torch = metrics.mean_squared_error(yval, ypred_torch)
print('Percent error difference:', (mse_torch / mse_keras - 1) * 100)
r_keras = pearsonr(yval, ypred_keras)[0]
r_pytorch = pearsonr(yval, ypred_torch)[0]
print("r_keras:", r_keras)
print("r_pytorch:", r_pytorch)
plt.scatter(ypred_keras, yval); plt.title('Keras'); plt.show(); plt.close()
plt.scatter(ypred_torch, yval); plt.title('Pytorch'); plt.show(); plt.close()
Percent error difference: 479.1312469426776
r_keras: 0.9115184443702814
r_pytorch: 0.21728812737220082
The correlation of predicted values with ground truth is 0.912 for Keras but 0.217 for Pytorch, and the error for Pytorch is 479% higher!
7. Other trials
I also tried:
Lowering the learning rate for Pytorch (lr=1e-4), R increases from 0.217 to 0.576, but it's still much worse than Keras (r=0.912).
Increasing the learning rate for Pytorch (lr=1e-2), R is worse at 0.095
Training numerous times with different random seeds. The performance is roughly the same, regardless.
Trained for longer than 100 epochs. No improvement was observed!
Used torch.nn.init.xavier_uniform_ instead of torch.nn.init.xavier_normal_ in the initialization of the weights. R improves from 0.217 to 0.639, but it's still worse than Keras (0.912).
What can be done to ensure that the PyTorch model converges to a reasonable error comparable with the Keras model?
| The problem here is unintentional broadcasting in the PyTorch training loop.
The result of a nn.Linear operation always has shape [B,D], where B is the batch size and D is the output dimension. Therefore, in your mean_squared_error function ypred has shape [32,1] and ytrue has shape [32]. By the broadcasting rules used by NumPy and PyTorch this means that ytrue - ypred has shape [32,32]. What you almost certainly meant is for ypred to have shape [32]. This can be accomplished in many ways; probably the most readable is to use Tensor.flatten
class TorchLinearModel(nn.Module):
...
def forward(self, x):
x = self.hidden_layer(x)
x = self.output_layer(x)
return x.flatten()
which produces the following train/val curves
| https://stackoverflow.com/questions/73600481/ |
Forcing PyTorch Neural Net to output a specific datatype | I am learning how to create a GAN with PyTorch 1.12 and I need the instance returned by my generator to fall into a specific feature space.
The model in my generator class looks like this:
self.model = nn.Sequential(
nn.Linear(2, 16),
nn.ReLU(),
nn.Linear(16, 32),
nn.ReLU(),
nn.Linear(32, 2),
)
I need every feature in the instance returned by my generator to be an unsigned integer. The noise fed into the generator fits this, but the output has decimals and some features are negative. How can I set this?
(If needed I can include more of my code but I figured this is the relevant line and wanted to keep this question short to make things easier for everyone.)
| You can try clipping negative values and casting to torch.int32:
from torch import nn
import torch
class TransformOutput(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return torch.clamp_min(x, min=0).to(dtype=torch.int32)
model = nn.Sequential(
nn.Linear(2, 16),
nn.ReLU(),
nn.Linear(16, 32),
nn.ReLU(),
nn.Linear(32, 2),
TransformOutput(),
)
x = torch.rand(4, 2) - 0.5
out = model(x)
assert out.dtype == torch.int32
| https://stackoverflow.com/questions/73602930/ |
VGG-16 and ResNet-9 loss values not corresponding to accuracy on test set | I have two models whose performance I am comparing, a ResNet-9 model and a VGG-16 model. They are being used for image classification. Their accuracies on the same test set are:
ResNet-9 = 99.25% accuracy
VGG-16 = 97.90% accuracy
However from their loss curves during training, shown in the images, I see that the VGG-16 has lower losses as compared to ResNet-9 which has higher losses.
I am using torch.nn.CrossEntropyLoss() for both VGG-16 and ResNet-9. I would have expected the ResNet-9 to have lower losses (because it performs better on the test set) but this is not the case.
Is this observation normal?
| Yes, the loss for a model can be greater even if the accuracy is greater. This is because the loss function penalizes if the confidence is lower.
For example:
if you have a label [0, 0, 0, 1],
and model A predicts [0.1, 0.1, 0.1, 0.7]
and model B predicts [0.1, 0.0, 0.0, 0.9]
Both models will have an accuracy of 100% but the CrossEntropyLoss for model A will be greater than that of model B because model A predicts with a lower confidence score.
So, a higher loss of ResNet-9 compared to VGG-16, even when ResNet-9 has a greater accuracy, means ResNet-9's predictions (on average) have less confidence on the predicted labels.
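A quick sketch that reproduces the numbers of this example (illustrative only; NLLLoss on the log-probabilities is used so the probabilities above can be fed in directly, whereas nn.CrossEntropyLoss expects raw logits):
import torch
from torch import nn

target = torch.tensor([3])                     # the label [0, 0, 0, 1] as a class index
pred_a = torch.tensor([[0.1, 0.1, 0.1, 0.7]])  # model A's probabilities
pred_b = torch.tensor([[0.1, 0.0, 0.0, 0.9]])  # model B's probabilities

nll = nn.NLLLoss()
print(nll(pred_a.log(), target))  # tensor(0.3567), i.e. -log(0.7)
print(nll(pred_b.log(), target))  # tensor(0.1054), i.e. -log(0.9)
# Both argmax to class 3 (100% accuracy), yet model A's loss is over 3x larger.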
| https://stackoverflow.com/questions/73603560/ |
pytorch std must be more than or equal to 0.0 | I tried to use torch.normal but got an error saying std must be >= 0.0; I need to fix this error.
b=32
n_s = 10
dim = 64
slots_mu = nn.Parameter(torch.randn(1, 1, dim))
slots_log_sigma = nn.Parameter(torch.randn(1, 1, dim))
mu = slots_mu.expand(b, n_s, -1)
sigma = slots_log_sigma.expand(b, n_s, -1)
slots = torch.normal(mu, sigma)
and it raised an error below
---> 10 slots = torch.normal(mu, sigma)
RuntimeError: normal expects all elements of std >= 0.0
| It's because of the definition of standard deviation: std is a non-negative distance measure, so it cannot be negative; please take a look at this answer.
To solve your problem, converting std to its absolute value can help:
slots_log_sigma = nn.Parameter(abs(torch.randn(1, 1, dim)))
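For example, re-running the question's snippet with that one change (a sketch; the sampled values are random):
import torch
from torch import nn

b, n_s, dim = 32, 10, 64
slots_mu = nn.Parameter(torch.randn(1, 1, dim))
slots_log_sigma = nn.Parameter(abs(torch.randn(1, 1, dim)))  # std entries are now >= 0
mu = slots_mu.expand(b, n_s, -1)
sigma = slots_log_sigma.expand(b, n_s, -1)
slots = torch.normal(mu, sigma)  # no longer raises "std >= 0.0"
print(slots.shape)               # torch.Size([32, 10, 64])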
| https://stackoverflow.com/questions/73603882/ |
Why is the grad unavailable for the tensor on the GPU? | a = torch.nn.Parameter(torch.ones(5, 5))
a = a.cuda()
print(a.requires_grad)
b = a
b = b - 2
print('a ', a)
print('b ', b)
loss = (b - 1).pow(2).sum()
loss.backward()
print(a.grad)
print(b.grad)
After executing the code, a.grad is None although a.requires_grad is True.
But if the line a = a.cuda() is removed, a.grad is available after the backward pass.
|
The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information.
a = torch.nn.Parameter(torch.ones(5, 5))
a = a.cuda()
print(a.requires_grad)
b = a
b = b - 2
print('a ', a)
print('b ', b)
loss = (b - 1).pow(2).sum()
a.retain_grad() # added this line
loss.backward()
print(a.grad)
That happens because of your line a = a.cuda(), which overrides the original value of a with a new, non-leaf tensor.
You could use
a = torch.nn.Parameter(torch.ones(5, 5))
a.cuda()
Or
a = torch.nn.Parameter(torch.ones(5, 5, device='cuda'))
a = torch.nn.Parameter(torch.ones(5, 5).cuda())
Or explicitly requesting to retain the gradients of a
a.retain_grad() # added this line
Erasing the gradients of intermediate variables can save a significant amount of memory, so it is good to retain gradients only where you need them.
| https://stackoverflow.com/questions/73605095/ |
Pytorch - RuntimeError: Error(s) in loading state_dict for Sequential: Unexpected key(s) in state_dict: "0.weight", "0.bias", | I have created a Pytorch object from the class Sequential (see official page).
As they suggest, I am saving it using the command torch.save(model.state_dict(), PATH).
It seems that everything has worked fine, since when I use torch.load(PATH) in another file I get
an ordered Dict like
'0.weight': tensor([[ 0.1202, ...]]) ,
'0.bias': tensor([ 0.1422, ...]) ,
...
with all the shapes of the tensors being correct. However, when doing
model = Sequential()
model.load_state_dict(torch.load(PATH))
I get the error
RuntimeError: Error(s) in loading state_dict for Sequential:
Unexpected key(s) in state_dict: "0.weight", "0.bias", "2.weight", "2.bias", "4.weight", "4.bias".
| When trying to load, the model you are trying to load into (model) is an empty Sequential object with no layers. On the other hand, looking at the error message, the state dictionary of the model you are trying to load from indicates that it has at least five layers, with the first, third, and fifth layers containing a weight and bias parameter. This is a mismatch since the corresponding layers and parameters do not exist in model.
To fix this, model should have the same architecture as the saved model you are trying to load. Since you say you saved the original model yourself, use an identical initialization step when creating model.
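A sketch of what loading should look like: given the saved keys 0, 2 and 4, the original model presumably had three parameterized layers at those indices, but the layer types, sizes and PATH below are placeholders, so use exactly what you used when saving:
import torch
from torch import nn

in_features, hidden, out_features = 10, 64, 2  # placeholders: use your real sizes
PATH = "model_state.pt"                        # placeholder path

model = nn.Sequential(
    nn.Linear(in_features, hidden),   # index 0 -> "0.weight", "0.bias"
    nn.ReLU(),                        # index 1, no parameters
    nn.Linear(hidden, hidden),        # index 2 -> "2.weight", "2.bias"
    nn.ReLU(),                        # index 3, no parameters
    nn.Linear(hidden, out_features),  # index 4 -> "4.weight", "4.bias"
)
model.load_state_dict(torch.load(PATH))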
| https://stackoverflow.com/questions/73607116/ |
Loading resnet50 pretrained model in PyTorch | I want to use the resnet50 pretrained model with PyTorch, and I am using the following code for loading it:
import torch
model = torch.hub.load("pytorch/vision", "resnet50", weights="IMAGENET1K_V2")
Although I upgraded torchvision, I still receive the following error:
Any idea?
| As per the latest API, models are now loaded using the torchvision library; you can try that using:
from torchvision.models import resnet50, ResNet50_Weights
# Old weights with accuracy 76.130%
model1 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
# New weights with accuracy 80.858%
model2 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
| https://stackoverflow.com/questions/73609732/ |
Is there any difference between the DNN model by keras and the DNN model by pytorch? | Here is my code for a DNN in torch and in keras.
I use them to train on the same data, but finally get totally different AUC results (the keras version reaches 0.74 and the torch version reaches 0.67).
So I'm quite confused!
I have tried many times and the results remain different.
Is there any difference between the two models?
categorical_embed_sizes = [589806, 21225, 2565, 2686, 343, 344, 10, 2, 8, 8, 7, 7, 2, 2, 2, 17, 17, 17]
#keras model
cat_input, embeds = [], []
for i in range(cat_len):
input_ = Input(shape=(1, ))
cat_input.append(input_)
nums = categorical_embed_sizes[i]
embed = Embedding(nums, 8)(input_)
embeds.append(embed)
cont_input = Input(shape=(cont_len,), name='cont_input', dtype='float32')
cont_input_r = Reshape((1, cont_len))(cont_input)
embeds.append(cont_input_r)
#Merge_L=concatenate([train_emb,trainnumber_emb,departstationname_emb,arrivestationname_emb,seatname_emb,orderofftime_emb,fromcityname_emb,tocityname_emb,daytype_emb,num_input_r])
Merge_L=concatenate(embeds, name='cat_1')
Merge_L=Dense(256,activation=None,name='dense_0')(Merge_L)
Merge_L=PReLU(name='merge_0')(Merge_L)
Merge_L=BatchNormalization(name='bn_0')(Merge_L)
Merge_L=Dense(128,activation=None,name='dense_1')(Merge_L)
Merge_L=PReLU(name='prelu_1')(Merge_L)
Merge_L=BatchNormalization(name='bn_1')(Merge_L)
Merge_L=Dense(64,activation=None,name='Dense_2')(Merge_L)
Merge_L=PReLU(name='prelu_2')(Merge_L)
Merge_L=BatchNormalization(name='bn_2')(Merge_L)
Merge_L=Dense(32,activation=None,name='Dense_3')(Merge_L)
Merge_L=PReLU(name='prelu_3')(Merge_L)
Merge_L=BatchNormalization(name='bn_3')(Merge_L)
Merge_L=Dense(16,activation=None,name='Dense_4')(Merge_L)
Merge_L=PReLU(name='prelu_4')(Merge_L)
predictions= Dense(1, activation='sigmoid', name='Dense_rs')(Merge_L)
predictions=Reshape((1,), name='pred')(predictions)
cat_input.append(cont_input)
model = Model(inputs=cat_input,
outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.AUC()])
# torch model
class DNN(nn.Module):
def __init__(self, categorical_length, categorical_embed_sizes, categorical_embed_dim, in_size):
super(DNN, self).__init__()
self.categorical_length = categorical_length
self.categorical_embed_sizes = categorical_embed_sizes
self.categorical_embed_dim = categorical_embed_dim
self.in_size = in_size
self.nn = torch.nn.Sequential(
nn.Linear(self.in_size, 256),
nn.PReLU(256),
nn.BatchNorm1d(256),
nn.Linear(256, 128),
nn.PReLU(128),
nn.BatchNorm1d(128),
nn.Linear(128, 64),
nn.PReLU(64),
nn.BatchNorm1d(64),
nn.Linear(64, 32),
nn.PReLU(32),
nn.BatchNorm1d(32),
nn.Linear(32, 16),
nn.PReLU(16)
)
self.out = torch.nn.Sequential(
nn.Linear(16, 1),
nn.Sigmoid()
)
self.embedding = nn.Embedding(self.categorical_embed_sizes, self.categorical_embed_dim)
def forward(self, x):
x_categorical = x[:, :self.categorical_length].long()
x_categorical = self.embedding(x_categorical).view(x_categorical.size(0), -1)
x = torch.cat((x_categorical, x[:, self.categorical_length:]), dim=1)
x = self.nn(x)
out = self.out(x)
return out
| Finally I found the real reason for the error I met. It has nothing to do with the model structure or parameters. Actually, the wrong input to sklearn's roc_auc_score function is the direct cause of this error.
As we know, sklearn.metrics.roc_auc_score needs at least y_true and y_score. y_true holds the true labels of the dataset, and y_score the predicted probabilities of label 1 (for binary tasks).
But when I used torch's outputs to calculate the two metrics (accuracy and AUC), I transformed the outputs into 0-1 vectors. So my y_score was no longer probabilities but 0-1 vectors.
Then the error happened...
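In other words, a sketch of the fix (y_true and model_outputs are placeholders for the true labels and the sigmoid outputs of the network):
from sklearn.metrics import accuracy_score, roc_auc_score

probs = model_outputs.detach().cpu().numpy().ravel()  # keep the probabilities for AUC
preds = (probs > 0.5).astype(int)                      # threshold only for accuracy

auc = roc_auc_score(y_true, probs)   # correct: y_score must be probabilities
acc = accuracy_score(y_true, preds)  # thresholded predictions are fine here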
| https://stackoverflow.com/questions/73616875/ |
"RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation" when there's actually no in-place operations | I am working on some paper replication, but I am having trouble with it.
According to the log, it says RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. However, when I check the line the error refers to, it is just a simple property setter inside the class:
@pdfvec.setter
def pdfvec(self, value):
self.param.pdfvec[self.key] = value # where the error message is referring to
Aren't in-place operations something like += or *=, etc.? I don't see why this error message appeared on this line.
I am really confused about this message, and I would be glad if anyone knows any possible reason why this can happen.
For additional information, this is the part where the setter function was called:
def _update_params(params, pdfvecs):
idx = 0
for param in params:
totdim = param.stats.numel()
shape = param.stats.shape
param.pdfvec = pdfvecs[idx: idx + totdim].reshape(shape) # where the setter function was called
idx += totdim
I know this may still lack information for solving the problem, but if you know any possible reason why the error message appeared, I would be really glad to hear it.
| An in-place operation means the assignment you've done modifies the underlying storage of your Tensor, whose requires_grad is set to True, according to your error message.
That said, your param.pdfvec[self.key] is not a leaf Tensor itself but a view of one, and it will be updated during back-propagation. You tried to assign a value to it, which would interfere with autograd, so this action is prohibited by default. You can still do it by explicitly bypassing autograd, e.g. by modifying the underlying storage (with .data).
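A sketch of that workaround applied to the question's setter (whether bypassing autograd here is what you actually want is a separate question):
@pdfvec.setter
def pdfvec(self, value):
    with torch.no_grad():                     # do not record this assignment in the graph
        self.param.pdfvec[self.key] = value
    # equivalently, through the underlying storage:
    # self.param.pdfvec.data[self.key] = value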
| https://stackoverflow.com/questions/73616963/ |
PyTorch CNN doesn't update weights while training | I want to predict an 8x8 matrix from the original 8x8 matrix. But the weights DO NOT update in the training process.
I use two simple conv layers to conv input matrix from 1x8x8 to 2x8x8. Then I used another conv layer to convert 2x8x8 to 1x8x8. The inputs and outputs in the data folder are generated randomly. The pytorch codes are shown as follows.
I have already checked some posts about weights not updating. I think there must be something wrong with "requires_grad = True" on the data or with loss.backward().
Any suggestions about the codes would be grateful. Thanks in advance.
M
Tue Sep 6 15:34:17 CST 2022
The data input folder is in
data/CM10_1/CM_1.txt
data/CM10_1/CM_2.txt
data/CM10_1/CM_3.txt
data/CM10_1/CM_4.txt
The data output folder is in
data/CM10_2/CM_1.txt
data/CM10_2/CM_2.txt
data/CM10_2/CM_3.txt
data/CM10_2/CM_4.txt
CM_i.txt is shown as
207 244 107 173 70 111 180 244
230 246 233 193 11 97 192 86
32 40 202 189 24 195 70 149
232 247 244 100 209 202 173 57
161 244 167 167 177 47 167 191
24 123 9 43 80 124 41 65
71 204 216 180 242 113 30 129
139 36 238 8 8 164 127 178
data/CM_info_tr.csv
CMname,
CM_1.txt,
CM_2.txt,
CM_3.txt,
CM_4.txt,
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# outline###############################################################
#
# CM10_1/CM_i.txt to predict CM10_2/CM_i.txt
#
# data pair example
# CM10_1/CM_1.txt -> CM10_2/CM_1.txt
#
# CM10_1/CM_1.txt is 8x8 matrix with random int
# CM10_2/CM_1.txt is 8x8 matrix with random int
#
# The model uses two conv layers
# layer 01 : 1x8x8 -> 2x8x8
# layer 02 : 2x8x8 -> 1x8x8
#
# The loss is the difference between
# CM10_2/CM_1.txt(predicted) and CM10_2/CM_1.txt
#
# main ###############################################################
from __future__ import print_function, division
import os
import sys
import torch
import pandas as pd
import numpy as np
import torch.nn.functional as F
from skimage import io, transform
from torch.utils.data import Dataset, DataLoader
from torch import nn
from torch.autograd import Variable
torch.cuda.empty_cache()
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
# test CM parameters
n_Ca = 8
batch_size = 4
#device = "cuda" if torch.cuda.is_available() else "cpu"
device = "cpu"
# define class dataset CMDataset ###################################################
class CMDataset(Dataset):
"""CM dataset"""
def __init__(self,csv_CM,CM_beg_dir,CM_end_dir,n_Ca=n_Ca):
"""
Args:
csv_CM (string): Path to the csv file with CM class.
CM_beg_dir (string): Directory with all the CM begin data.
CM_end_dir (string): Directory with all the CM end data.
"""
self.CM_info = pd.read_csv(csv_CM)
self.CM_beg_dir = CM_beg_dir
self.CM_end_dir = CM_end_dir
def __len__(self):
return len(self.CM_info)# the number of the samples
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
#load and convert CM begin data ---------------------------------------
CM_beg_path = os.path.join(self.CM_beg_dir, self.CM_info.iloc[idx, 0])
CM_beg_data = np.loadtxt(CM_beg_path)
CM_beg_data = CM_beg_data.reshape(1,n_Ca,n_Ca)
CM_beg_data = CM_beg_data.astype(np.float32)
CM_beg_data = torch.from_numpy(CM_beg_data)
CM_beg_data = CM_beg_data.to(device)
#load and convert CM endin data ---------------------------------------
CM_end_path = os.path.join(self.CM_end_dir, self.CM_info.iloc[idx, 0])
CM_end_data = np.loadtxt(CM_end_path)
CM_end_data = CM_end_data.reshape(1,n_Ca,n_Ca)
CM_end_data = CM_end_data.astype(np.float32)
CM_end_data = torch.from_numpy(CM_end_data)
CM_end_data = CM_end_data.to(device)
return CM_beg_data, CM_end_data
# define class model CMNet ###################################################
class CMNet(nn.Module):
def __init__(self):
super(CMNet, self).__init__()
self.lay_CM_01 = nn.Conv2d(in_channels=1,out_channels=2,kernel_size=1,stride=1,bias=True)
self.lay_CM_02 = nn.Conv2d(in_channels=2,out_channels=1,kernel_size=1,stride=1,bias=True)
def forward(self, CM_data):
[n_in_batch,n_in_chan,n_in_hei,n_in_wid]=CM_data.shape
n_Ca = n_in_hei
out1_1 = self.lay_CM_01(CM_data)
out1_2 = out1_1
out1_3 = self.lay_CM_02(out1_2)
out = out1_3
return out
# load data for training and validing
CM_dataset_train = CMDataset(csv_CM = 'data/CM_info_tr.csv',
CM_beg_dir = 'data/CM10_1/',
CM_end_dir = 'data/CM10_2/',
n_Ca = n_Ca)
train_dataloader = DataLoader(CM_dataset_train,
batch_size=batch_size,
shuffle=True)
# training parameter
learning_rate = 2
epochs = 5
model = CMNet()
model = model.to(device)
# Initialize the loss function
loss_fn = nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# define train loop ###############################################################
def train_loop(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X1,Y) in enumerate(dataloader):
X1=X1.to(torch.float32)
Y = Y.to(torch.float32)
# Compute prediction and loss
X1=torch.autograd.Variable(X1)
pred = model(X1)
pred = torch.autograd.Variable(pred)
# compute loss
loss = loss_fn(pred,Y)
loss = Variable(loss, requires_grad = True)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss, current = loss.item(), batch * len(X1)
print(f" loss:{loss:>15f}, [{current:>5d}/{size:>5d}]")
# Train ###############################################################
for t in range(epochs):
print(f"Epoch {t+1}\n----------------------------------------------")
# print(list(model.parameters()))
train_loop(train_dataloader, model, loss_fn, optimizer)
#print("Train and Valid Done!")
| What pytorch version are you using? Variable has been deprecated for 5 years now. Remove the lines loss = Variable(loss, requires_grad = True) and pred = torch.autograd.Variable(pred); that should do the trick. Try to read the current documentation and don't rely on archaic tutorials.
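For reference, the loop body without the Variable wrappers could look like this (a sketch based on the code in the question):
def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X1, Y) in enumerate(dataloader):
        pred = model(X1.float())         # no Variable / requires_grad juggling needed
        loss = loss_fn(pred, Y.float())  # keep the graph that the model built
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f" loss:{loss.item():>15f}, [{batch * len(X1):>5d}/{size:>5d}]")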
| https://stackoverflow.com/questions/73618570/ |
Scipy minimize.optimize LBFGS vs PyTorch LBFGS | I have written some code with scipy.optimize.minimize using the LBFGS algorithm. Now I want to implement the same with PyTorch.
SciPy:
res = minimize(calc_cost, x_0, args = const_data, method='L-BFGS-B', jac=calc_grad)
def calc_cost(x, const_data):
# do some calculations with array "calculation" as result
return np.sum(np.square(calculation)) #this returns a scalar!
def calc_grad(x, const_data):
# do some calculations which result in array "calculation"
return np.ravel(calculation) #in PyTorch this returns without ravel!
Now in PyTorch I am following this example. However, I want to use my own gradient calculation. This results in the error RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([3, 200, 200]) and output[0] has a shape of torch.Size([]). I understand that the shape/size of my gradient should be the same as the objective function (i.e. here a scalar), but this is not what I need (see above). How do I adapt the following code in a way that it does the same calculations as the SciPy version:
optimizer = optim.LBFGS([x_0], history_size=10, max_iter=10, line_search_fn="strong_wolfe")
h_lbfgs = []
for i in range(10):
optimizer.zero_grad()
objective = calc_cost(x_0, const_data)
objective.backward(gradient = calc_gradient(x_0, const_data))
optimizer.step(lambda: calc_cost(x_0, const_data))
h_lbfgs.append(objective.item())
I have had a look at the PyTorch docs already but don't quite understand how they apply here:
https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html
https://pytorch.org/docs/stable/optim.html#optimizer-step-closure
| The problem is that I was using the wrong "objective" function. What I am trying to optimize is the x_0 array, therefore I had to alter my code as follows:
for i in range(10):
optimizer.zero_grad()
x_0.backward(gradient = calc_gradient(x_0, const_data))
objective = optimizer.step(lambda: calc_cost(x_0, const_data))  # step(closure) returns the closure's loss
h_lbfgs.append(objective.item())
| https://stackoverflow.com/questions/73619180/ |
Why is my pytorch Autoencoder giving me a "mat1 and mat2 shapes cannot be multiplied" error? | I know this is because the shapes don't match for the multiplication, but why when my code is similar to most example code I found:
import torch.nn as nn
...
#input is a 256x256 image
num_input_channels = 3
self.encoder = nn.Sequential(
nn.Conv2d(num_input_channels*2**0, num_input_channels*2**1, kernel_size=3, padding=1, stride=2), #1 6 128 128
nn.Tanh(),
nn.Conv2d(num_input_channels*2**1, num_input_channels*2**2, kernel_size=3, padding=1, stride=2), #1 12 64 64
nn.Tanh(),
nn.Conv2d(num_input_channels*2**2, num_input_channels*2**3, kernel_size=3, padding=1, stride=2), #1 24 32 32
nn.Tanh(),
nn.Conv2d(num_input_channels*2**3, num_input_channels*2**4, kernel_size=3, padding=1, stride=2), #1 48 16 16
nn.Tanh(),
nn.Conv2d(num_input_channels*2**4, num_input_channels*2**5, kernel_size=3, padding=1, stride=2), #1 96 8 8
nn.Tanh(),
nn.Conv2d(num_input_channels*2**5, num_input_channels*2**6, kernel_size=3, padding=1, stride=2), #1 192 4 4
nn.LeakyReLU(),
nn.Conv2d(num_input_channels*2**6, num_input_channels*2**7, kernel_size=3, padding=1, stride=2), #1 384 2 2
nn.LeakyReLU(),
nn.Conv2d(num_input_channels*2**7, num_input_channels*2**8, kernel_size=2, padding=0, stride=1), #1 768 1 1
nn.LeakyReLU(),
nn.Flatten(),
nn.Linear(768, 1024*32),
nn.ReLU(),
nn.Linear(1024*32, 256),
nn.ReLU(),
).cuda()
I get the error "RuntimeError: mat1 and mat2 shapes cannot be multiplied (768x1 and 768x32768)"
To my understanding I should end up with a Tensor of shape [1,768,1,1] after the convolutions
and [1,768] after flattening, so I can use a fully connected Linear layer that goes to 1024*32 in size (by which I tried to add some more ways for the neural net to store data/knowledge).
Using nn.Linear(1,1024*32) runs with a warning later: "UserWarning: Using a target size (torch.Size([3, 256, 256])) that is different to the input size (torch.Size([768, 3, 256, 256]))". I think it comes from my decoder, though
What am I not understanding correctly here?
| All torch.nn Modules require batched inputs, and it seems in your case you have no batch dimension. Without knowing your code I'm assuming you are using
my_input.shape == (3, 256, 256)
But you will need to add a batch dimension, that is, you need to have
my_input.shape == (1, 3, 256, 256)
You can easily do that by introducing a dummy dimension using:
my_input = my_input[None, ...]
| https://stackoverflow.com/questions/73623626/ |
Command Line stable diffusion runs out of GPU memory but GUI version doesn't | I installed the GUI version of Stable Diffusion here. With it I was able to make 512 by 512 pixel images using my GeForce RTX 3070 GPU with 8 GB of memory:
However when I try to do the same thing with the command line interface, I run out of memory:
Input:
>> C:\SD\stable-diffusion-main>python scripts/txt2img.py --prompt "a close-up portrait of a cat by pablo picasso, vivid, abstract art, colorful, vibrant" --plms --n_iter 3 --n_samples 1 --H 512 --W 512
Error:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
If I reduce the size of the image to 256 X 256, it gives a result, but obviously much lower quality.
So part 1 of my question is why do I run out of memory at 6.13 GiB when I have 8 GiB on the card, and part 2 is what does the GUI do differently to allow 512 by 512 output? Is there a setting I can change to reduce the load on the GPU?
Thanks a lot,
Alex
| This might not be the only answer, but I solved it by using the optimized version here. If you already have the standard version installed, just copy the "OptimizedSD" folder into your existing folders, and then run the optimized txt2img script instead of the original:
>> python optimizedSD/optimized_txt2img.py --prompt "a close-up portrait of a cat by pablo picasso, vivid, abstract art, colorful, vibrant" --H 512 --W 512 --seed 27 --n_iter 2 --n_samples 10 --ddim_steps 50
It's quite slow on my computer, but produces 512 X 512 images!
Thanks,
Alex
| https://stackoverflow.com/questions/73629154/ |
What exactly is meant by param_groups in pytorch? | I would like to update learning rates corresponding to each weight matrix and each bias in pytorch during training. The answers here and here and many other answers I found online talk about doing this using the model's param_groups which to the best of my knowledge applies learning rates in groups, not layer weight/bias specific. I also want to update the learning rates during training, not pre-setting them with torch.optim.
Any help is appreciated.
| Updates to model parameters are handled by an optimizer in PyTorch. When you define the optimizer you have the option of partitioning the model parameters into different groups, called param groups. Each param group can have different optimizer settings. For example one group of parameters could have learning rate of 0.1 and another could have learning rate of 0.01.
To do what you're asking, you can just make every parameter belong to a different param group. You'll need some way to keep track of which param group corresponds to which parameter. Once you've defined the optimizer with different groups you can update the learning rate whenever you want, including at training time.
For example, say we have the following simple linear model
import torch
import torch.nn as nn
import torch.optim as optim
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(10, 20)
self.layer2 = nn.Linear(20, 1)
def forward(self, x):
return self.layer2(self.layer1(x))
model = LinearModel()
and suppose we want learning rates for each trainable parameter initialized according to the following:
learning_rates = {
'layer1.weight': 0.01,
'layer1.bias': 0.1,
'layer2.weight': 0.001,
'layer2.bias': 1.0}
We can use this dictionary to define a different learning rate for each parameter when we initialize the optimizer.
# Build param_group where each group consists of a single parameter.
# `param_group_names` is created so we can keep track of which param_group
# corresponds to which parameter.
param_groups = []
param_group_names = []
for name, parameter in model.named_parameters():
param_groups.append({'params': [parameter], 'lr': learning_rates[name]})
param_group_names.append(name)
# optimizer requires default learning rate even if its overridden by all param groups
optimizer = optim.SGD(param_groups, lr=10)
Alternatively, we could omit the 'lr' entry and each param group would be initialized with the default learning rate (lr=10 in this case).
At training time if we wanted to update the learning rates we could do so by iterating over each of the optimizer.param_groups and updating the 'lr' entry for each of them. For example, in the following simplified training loop, we update the learning rates before each step.
for i in range(10):
output = model(torch.zeros(1, 10))
loss = output.sum()
optimizer.zero_grad()
loss.backward()
# we can change the learning rate whenever we want for each param group
print(f'step {i} learning rates')
for name, param_group in zip(param_group_names, optimizer.param_groups):
param_group['lr'] = learning_rates[name] / (i + 1)
print(f' {name}: {param_group["lr"]}')
optimizer.step()
which prints
step 0 learning rates
layer1.weight: 0.01
layer1.bias: 0.1
layer2.weight: 0.001
layer2.bias: 1.0
step 1 learning rates
layer1.weight: 0.005
layer1.bias: 0.05
layer2.weight: 0.0005
layer2.bias: 0.5
step 2 learning rates
layer1.weight: 0.0033333333333333335
layer1.bias: 0.03333333333333333
layer2.weight: 0.0003333333333333333
layer2.bias: 0.3333333333333333
step 3 learning rates
layer1.weight: 0.0025
layer1.bias: 0.025
layer2.weight: 0.00025
layer2.bias: 0.25
step 4 learning rates
layer1.weight: 0.002
layer1.bias: 0.02
layer2.weight: 0.0002
layer2.bias: 0.2
step 5 learning rates
layer1.weight: 0.0016666666666666668
layer1.bias: 0.016666666666666666
layer2.weight: 0.00016666666666666666
layer2.bias: 0.16666666666666666
step 6 learning rates
layer1.weight: 0.0014285714285714286
layer1.bias: 0.014285714285714287
layer2.weight: 0.00014285714285714287
layer2.bias: 0.14285714285714285
step 7 learning rates
layer1.weight: 0.00125
layer1.bias: 0.0125
layer2.weight: 0.000125
layer2.bias: 0.125
step 8 learning rates
layer1.weight: 0.0011111111111111111
layer1.bias: 0.011111111111111112
layer2.weight: 0.00011111111111111112
layer2.bias: 0.1111111111111111
step 9 learning rates
layer1.weight: 0.001
layer1.bias: 0.01
layer2.weight: 0.0001
layer2.bias: 0.1
| https://stackoverflow.com/questions/73629330/ |
cuda out of memory during training | I am using Pytorch to do a cat-dog classification. I keep getting a Cuda out of memory problem during training and validation. If I only run the training, I don't have this issue. But when I add a validation process, I get this OOM issue. I cannot see what is happening.
I have tried: changing the batch size to 1; torch.cuda.empty_cache(); and tensor.cpu() for all variables.
RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 8.00 GiB total capacity; 7.21 GiB already allocated; 0 bytes free; 7.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
| Could you please update your question to show your code?
Also check that you use torch.no_grad() for validation, because otherwise PyTorch keeps building the computation graph for the backward pass and thus consumes more memory.
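A minimal sketch of what the validation part could look like (model, val_loader and loss_fn are placeholders for your own objects):
import torch

def validate(model, val_loader, loss_fn, device):
    model.eval()
    total_loss = 0.0
    with torch.no_grad():  # no graph is built, so intermediate activations are freed right away
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            pred = model(x)
            total_loss += loss_fn(pred, y).item()  # .item() keeps only a Python float, not a graph node
    model.train()
    return total_loss / len(val_loader)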
| https://stackoverflow.com/questions/73629755/ |
Getting the probability density value for a given distribution in PyTorch | Consider the following code for generating a random sample from a normal distribution with given mean and standard deviation
# import the torch module
import torch
# create the mean with 5 values
mean = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
# create the standard deviation with 5 values
std = torch.tensor([1.22, 0.78, 0.56, 1.23, 0.23])
# create normal distribution
print(torch.normal(mean, std))
Output:
tensor([-0.0367, 1.7494, 2.3784, 4.2227, 5.0095])
But I want to calculate the probability density value of the normal distribution for a particular sample given the mean and standard deviation. Is there any function available in PyTorch for doing the same?
Note that I can get the pdf value by coding the analytical expression for the normal distribution with the given mean and standard deviation. But, I want to use an inbuilt from PyTorch.
| There's torch.distributions, providing some helpful methods. How about:
dist = torch.distributions.normal.Normal(mean, std)
print(torch.exp(dist.log_prob(torch.tensor(my_value))))
The result seems to be the same as scipy.stats.norm.pdf() yields.
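For example, with the mean and std from the question (the sample values below are just illustrative):
import torch

mean = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
std = torch.tensor([1.22, 0.78, 0.56, 1.23, 0.23])
dist = torch.distributions.normal.Normal(mean, std)

sample = torch.tensor([-0.0367, 1.7494, 2.3784, 4.2227, 5.0095])
pdf = torch.exp(dist.log_prob(sample))  # element-wise density value for each component
print(pdf)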
| https://stackoverflow.com/questions/73634337/ |
Writing a custom pytorch dataloader iter with pre-processing on batch | A typical custom PyTorch Dataset looks like this,
class TorchCustomDataset(torch.utils.data.Dataset):
def __init__(self, filenames, speech_labels):
pass
def __len__(self):
return 100
def __getitem__(self, idx):
return 1, 0
Here, with __getitem__ I can read any file, and apply any pre-processing for that specific file.
What if I want to apply some tensor-level pre-processing to the whole batch of data? Technically, it's possible to just iterate through the data loader to get the batch sample and apply the pre-processing on it.
But how to do it with a custom data loader? In short, what will be the __getitem__ equivalent for data loader to apply some operation on the whole batch of data?
| You can override the collate_fn of DataLoader: This function takes the individual items from the underlying Dataset and forms the batch. You can add your custom pre-processing at that point by modifying the collate_fn.
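A minimal sketch, assuming each dataset item is an (input, label) pair and batch_preprocess is a placeholder for your own tensor-level transform:
import torch
from torch.utils.data import DataLoader

def my_collate_fn(items):
    # items is a list of individual samples as returned by __getitem__
    xs = torch.stack([torch.as_tensor(x) for x, _ in items])
    ys = torch.tensor([y for _, y in items])
    xs = batch_preprocess(xs)  # hypothetical pre-processing applied to the whole batch at once
    return xs, ys

loader = DataLoader(dataset, batch_size=32, collate_fn=my_collate_fn)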
| https://stackoverflow.com/questions/73634372/ |
How to replace PyTorch model layer's tensor with another layer of same shape in Huggingface model? | Given a Huggingface model, e.g.
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=2)
I can access a layer's tensor as such:
# Shape [1024, 1024]
model.state_dict()["bert.encoder.layer.0.attention.self.query.weight"]
[out]:
tensor([[ 0.0167, -0.0422, -0.0425, ..., 0.0302, -0.0341, 0.0251],
[ 0.0323, 0.0347, -0.0041, ..., -0.0722, 0.0031, -0.0351],
[ 0.0387, -0.0293, -0.0694, ..., 0.0492, 0.0201, -0.0727],
...,
[ 0.0035, 0.0081, -0.0337, ..., 0.0460, 0.0268, 0.0747],
[ 0.0513, 0.0131, 0.0735, ..., -0.0127, 0.0144, -0.0400],
[ 0.0385, 0.0013, -0.0272, ..., 0.0148, 0.0399, 0.0339]])
Given the another tensor of the same shape that I've pre-defined from somewhere else, in this case, for illustration, I'm creating a random tensor but this can be any tensor that is pre-defined.
import torch
replacement_layer = torch.rand([1024, 1024])
Note: I'm not trying to replace a layer with a random tensor but replace it with a pre-defined one.
When I try to do this to replace the layer tensor through the state_dict(), it didn't seem to work:
import torch
from transformers import AutoModelForSequenceClassification
# The model with a layer that we want to replace.
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=2)
# A replacement layer.
replacement_layer = torch.rand([1024, 1024])
# Replacing the layer in the statedict.
model.state_dict()["bert.encoder.layer.0.attention.self.query.weight"] = replacement_layer
# Check that the layer is replaced. No, it is not =(
assert torch.equal(
model.state_dict()["bert.encoder.layer.0.attention.self.query.weight"],
replacement_layer)
How to replace PyTorch model layer's tensor with another layer of same shape in Huggingface model?
| A state_dict is something special. It behaves more like an on-the-fly snapshot than like the actual contents of the model, so assigning a new tensor to one of its keys does not change the model itself.
You can directly access a model's layers by dot notation. Note that 0 often indicates an index rather than a string. You'll also need to transform your tensor into a torch Parameter for it to work within a model.
So this should work:
model.bert.encoder.layer[0].attention.self.query.weight = torch.nn.Parameter(replacement_layer)
or in full:
# Note I used the base model for testing
import torch
from transformers import AutoModelForSequenceClassification
# The model with a layer that we want to replace.
model: torch.nn.Module = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# A replacement layer.
replacement_layer = torch.rand([768, 768])
model.bert.encoder.layer[0].attention.self.query.weight = torch.nn.Parameter(replacement_layer)
# Check that the layer is replaced
assert torch.equal(
model.state_dict()["bert.encoder.layer.0.attention.self.query.weight"],
replacement_layer)
assert torch.equal(
model.bert.encoder.layer[0].attention.self.query.weight,
replacement_layer)
print("Succes!")
| https://stackoverflow.com/questions/73635388/ |
Precision,recall, F1 score with Sklearn on Pytorch | I've been looking through samples but am unable to understand how to integrate the precision, recall and f1 metrics for my model. My code is as follows:
for epoch in range(num_epochs):
#Calculate Accuracy (stack tutorial no n_total)
n_correct = 0
n_total = 0
for i, (words, labels) in enumerate(train_loader):
words = words.to(device)
labels = labels.to(dtype=torch.long).to(device)
# Forward pass
outputs = model(words)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
#feedforward tutorial solution
_, predicted = torch.max(outputs, 1)
n_correct += (predicted == labels).sum().item()
n_total += labels.shape[0]
accuracy = 100 * n_correct/n_total
#Push to matplotlib
train_losses.append(loss.item())
train_epochs.append(epoch)
train_acc.append(accuracy)
#Loss and Accuracy
if (epoch+1) % 10 == 0:
print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.2f}, Acc: {accuracy:.2f}')
| Since you have the predicted and the labels variables, you can aggregate them during the epoch loop and convert them to numpy arrays to calculate the required metrics.
At the beginning of the epoch, initialize two empty lists: one for the predicted labels and one for the ground-truth labels.
for epoch in range(num_epochs):
predicted_labels, ground_truth_labels = [], []
...
Then, keep appending the respective entries to each list during the epoch:
...
_, predicted = torch.max(outputs, 1)
n_correct += (predicted == labels).sum().item()
# appending
predicted_labels.append(predicted.cpu().detach().numpy())
ground_truth_labels.append(labels.cpu().detach().numpy())
...
Then, at the epoch end, you could use precision_recall_fscore_support with predicted_labels and ground_truth_labels as inputs.
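A sketch of that epoch-end computation, assuming the two lists were filled as above (np.concatenate flattens the per-batch arrays into one long array each):
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_pred = np.concatenate(predicted_labels)
y_true = np.concatenate(ground_truth_labels)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
print(f'precision: {precision:.3f}, recall: {recall:.3f}, f1: {f1:.3f}')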
Notes:
You'll probably have to refer something like this to flatten the above two lists.
Read about torch.no_grad() to apply it as a good practice during the calculations of metrics.
| https://stackoverflow.com/questions/73640393/ |
gpu memory is still occupied after validation phase is finished, pytorch | As far as I know, when training and validating a model on a GPU, GPU memory is mainly used for loading data and for the forward & backward passes, so I would expect GPU memory usage to be the same 1) before training, 2) after training, 3) before validation, and 4) after validation. But in my case, the GPU memory used in the validation phase is still occupied in the training phase and vice versa. It is not increasing per epoch, so I'm sure it is not a common mistake like accumulating the loss without .item().
Here is the summary of my question
Shouldn't the GPU memory used in one phase be cleaned up before another(except for model weights)?
If it should, are there any noob mistakes I'm making here..?
Thank you for your help.
Here is the code for training loop
eval_result = evaluate(model,val_loader,True,True)
print(eval_result)
print('start training')
for epoch in range(num_epoch):
model.train()
time_ = datetime.datetime.now()
for iter_, data in enumerate(tr_loader):
x, y = data
x = x.to(device).view(x.shape[0],1,*(x.shape[1:]))
y = y.to(device).long()
pred = model.forward(x)
loss = loss_fn(pred,y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# print
print_iter = 16
if (iter_+1) % print_iter == 0:
elapsed = datetime.datetime.now() - time_
expected = elapsed * (num_batches / print_iter)
_epoch = epoch + ((iter_ + 1) / num_batches)
print('\rTRAIN [{:.3f}/{}] loss({}) '
'elapsed {} expected per epoch {}'.format(
_epoch,num_epoch, loss.item(), elapsed, expected)
,end="\t\t\t")
time_ = datetime.datetime.now()
print()
eval_result = evaluate(model,val_loader,True,True)
print(eval_result)
scheduler.step(eval_result[0])
if (epoch+1) %1 == 0:
save_model(model, optimizer, scheduler)
I've read that making the validation phase a function helps, since Python is function-scoped (local tensors can be freed once the function returns).
so the evaluate() is
def evaluate(model, val_loader, get_acc = True, get_IOU = True):
"""
pred: Tensor of shape B C D H W
label Tensor of shape B D H W
"""
val_loss = 0
val_acc = 0
val_IOU = 0
with torch.no_grad():
model.eval()
for data in tqdm(val_loader):
x, y = data
x = x.to(device).view(x.shape[0],1,*(x.shape[1:]))
y = y.to(device).long()
pred = model.forward(x)
loss = loss_fn(pred,y)
val_loss += loss.item()
pred = torch.argmax(pred, dim=1)
if get_acc:
total = np.prod(y.shape)
total = total if total != 0 else 1
val_acc += torch.sum((pred == y)).cpu().item()/total
if get_IOU:
iou = 0
for class_num in range(1,8):
iou += torch.sum((pred==class_num)&(y==class_num)).cpu().item()\
/ torch.sum((pred==class_num)|(y==class_num)).cpu().item()
val_IOU += iou/7
val_loss /= len(val_loader)
val_acc /= len(val_loader)
val_IOU /= len(val_loader)
return (val_loss, val_acc, val_IOU)
and here is GPU usage in colab. 1 is the point where the evaluate() is first called, and 2 is when the train started.
| Allocating GPU memory is slow. PyTorch retains (caches) the GPU memory it allocates, even after no tensors referencing that memory remain, so that later allocations are fast. You can call torch.cuda.empty_cache() to release that cached-but-unused memory back to the driver.
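For example, following the loop in the question, you could release the cache right after the validation call (this only changes what nvidia-smi reports as used; it does not reduce the memory your live tensors actually need):
eval_result = evaluate(model, val_loader, True, True)
torch.cuda.empty_cache()  # hand cached-but-unused blocks back to the CUDA driver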
| https://stackoverflow.com/questions/73643887/ |
I am kinda new to the pytorch, now struggling with a classification problem | I built a very simple structure
class classifier (nn.Module):
def __init__(self):
super().__init__()
self.classify = nn.Sequential(
nn.Linear(166,80),
nn.Tanh(),
nn.Linear(80,40),
nn.Tanh(),
nn.Linear(40,1),
nn.Softmax()
)
def forward (self, x):
pred = self.classify(x)
return pred
model = classifier()
The loss function and optimizer are defined as
criteria = nn.BCEWithLogitsLoss()
iteration = 1000
learning_rate = 0.1
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
and here is the training and evaluation section
for epoch in range (iteration):
model.train()
y_pred = model(x_train)
loss = criteria(y_pred,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
model.eval()
with torch.inference_mode():
test_pred = model(x_test)
test_loss = criteria(test_pred, y_test)
if epoch % 100 == 0:
print(loss)
print(test_loss)
I received the same loss values, and by debugging, I found that the weights were not being updated.
| The problem is in the network architecture: you are using a Softmax layer on a single-valued output at the end. As per the definition of the softmax function, for an output vector x, we have, for index i:
softmax(x_i) = e^{x_i} / sum_j (e^{x_j})
Here, you only have a single valued output. Due to this, the output of your neural network is always 1, irrespective of the inputs or the weights. To fix this, remove the Softmax layer at the end. An activation function like Sigmoid might be more appropriate, and in fact you are already applying this when using the BCEWithLogitsLoss.
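A minimal sketch of the corrected architecture, reusing the layer sizes from the question (the last layer outputs a raw logit, and BCEWithLogitsLoss applies the sigmoid internally):
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.classify = nn.Sequential(
            nn.Linear(166, 80),
            nn.Tanh(),
            nn.Linear(80, 40),
            nn.Tanh(),
            nn.Linear(40, 1),  # raw logit; no Softmax/Sigmoid here
        )

    def forward(self, x):
        return self.classify(x)
Note that for BCEWithLogitsLoss the targets should be floats with the same shape as the output (e.g. (N, 1)).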
| https://stackoverflow.com/questions/73644038/ |
Pytorch transformation for just certain batch | Hi, is there any method to apply a transformation only to certain batches?
It means I want to apply the transformation just to the last batch in every epoch.
Here is what I tried:
import torch
class test(torch.utils.data.Dataset):
def __init__(self):
self.source = [i for i in range(10)]
def __len__(self):
return len(self.source)
def __getitem__(self, idx):
print(idx)
return self.source[idx]
ds = test()
dl = torch.utils.data.DataLoader(dataset = ds, batch_size = 3,
shuffle = False, num_workers = 5)
for i in dl:
print(i)
because I thought that if I could get the idx number, it would be possible to apply the transformation to certain batches.
However, when using num_workers, the outputs are
0
1
2
3
964
57
8
tensor([0, 1, 2])
tensor([3, 4, 5])
tensor([6, 7, 8])
tensor([9])
which is not what I expected.
Without num_workers:
0
1
2
tensor([0, 1, 2])
3
4
5
tensor([3, 4, 5])
6
7
8
tensor([6, 7, 8])
9
tensor([9])
So the questions are:
Why does idx behave like this with num_workers?
How can I apply a transform to certain batches (or certain idx)?
|
When you have num_workers > 1, you have multiple subprocesses doing data loading in parallel. So what is likely happening is that there is a race condition for the print step, and the order you see in the output depends on which subprocess goes first each time.
For most transforms, you can apply them on a specific batch simply by calling the transform after the batch has been loaded. To do this just for the last batch, you could do something like:
for batch_idx, batch_data in enumerate(dl):
    # check if this batch is the last batch
    if ((batch_idx + 1) * batch_size) >= len(ds):
        batch_data = transform(batch_data)
| https://stackoverflow.com/questions/73644178/ |
Change image labels when using pytorch | I am loading an image dataset with pytorch as seen below:
dataset = datasets.ImageFolder('...', transform=transform)
loader = DataLoader(dataset, batch_size=args.batchsize)
The dataset is in a folder with the structure seen below:
dataset/
class_1/
class_2/
class_3/
As a result, each image in the class_1 folder has a label of 0, and so on.
However, I would like to change these labels and randomly assign a label to each image in the dataset. What I tried is:
new_labels = [random.randint(0, 3) for i in range(len(dataset.targets))]
dataset.targets = new_labels
This, however, does not change the labels as I wanted and causes errors later in model training.
Is this the correct way to do it, or is there a more appropriate one?
| You can have a transformation for the labels:
import random
class rand_label_transform(object):
def __init__(self, num_labels):
self.num_labels = num_labels
def __call__(self, labels):
# generate new random label
new_label = random.randint(0, self.num_labels - 1)
return new_label
dataset = datasets.ImageFolder('...', transform=transform, target_transform=rand_label_transform(num_labels=3))
See ImageFolder for more details.
| https://stackoverflow.com/questions/73645125/ |
how to convert ML Project from a GPU project to CPU project? | I am learning ML and I want to retrain an AI model for lane detection.
I want to become familiar with the ML training process. The accuracy/result is not my primary goal and I do not need the best ML model for lane detection.
I found this AI model and want to try it out, but I have been facing a problem:
I do not have a GPU, so I want to train this model with my CPU. But sadly some parts of the code are written for CUDA. Is there a way I can convert this GPU code to CPU-only code?
Should I find another AI model that supports CPU-only training?
| You can use the tensor.to(device) command to move a tensor to a device.
The .to() command is also used to move a whole model to a device, like in the post you linked to.
Another possibility is to set the device of a tensor during creation using the device= keyword argument, like in t = torch.tensor(some_list, device=device)
To set the device dynamically in your code, you can use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
to set cuda as your device if possible.
The above shows how device placement is expressed in the code. So use Ctrl+F to find the keywords that force the code onto the GPU, such as hard-coded "cuda" devices or .to(...) calls, and change them to use the device variable above, which falls back to the CPU when no GPU is available.
| https://stackoverflow.com/questions/73649586/ |
How to apply function element wise to 2D tensor | Very simple question but I have been struggling with this forever now.
import torch
t = torch.tensor([[2,3],[4,6]])
overlap = [2, 6]
f = lambda x: x in overlap
I want:
torch.tensor([[True,False],[False,True]])
Both the tensor and overlap are very big, so efficiency matters here.
| I found an easy way. Since torch tensors interoperate with numpy arrays, the following works:
import torch
import numpy as np
t = torch.tensor([[2,3],[4,6]])
overlap = [2, 6]
f = lambda x: x in overlap
mask = np.vectorize(f)(t)
Found here.
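As an alternative that stays entirely in PyTorch, recent versions (1.10+) provide torch.isin, which builds the same boolean mask without going through numpy:
import torch

t = torch.tensor([[2, 3], [4, 6]])
overlap = torch.tensor([2, 6])
mask = torch.isin(t, overlap)
print(mask)  # tensor([[ True, False], [False,  True]])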
| https://stackoverflow.com/questions/73650652/ |
Docker Image for Image prediction using Pytorch Image Models (timm) | I want to create a docker image for Image Prediction using Timm which gives outputs in JSON format like {"predicted": "cat", "confidence": "0.99"}.
I am using this code.
from __future__ import print_function
import argparse
import torch
import timm
import urllib
import json
import torchvision.io as io
from PIL import Image as img
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
# Training settings
parser = argparse.ArgumentParser(description='Assignment 1')
parser.add_argument('--model',type=str)
parser.add_argument('--image',type=str)
if __name__ =='__main__':
args =parser.parse_args()
model = timm.create_model(args.model,pretrained=True)
model.eval()
config = resolve_data_config({},model=model)
transform = create_transform(**config)
url, filename = (args.image, "cat.jpg")
urllib.request.urlretrieve(url, filename)
img = img.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0) # transform and add batch dimension
import torch
with torch.no_grad():
out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print({"predicted" : "cat", "confidence" : f"{int(probabilities.max()*100)/100}"})
After running a test script, the output is:
Output is not valid json ! got: {'predicted': 'cat', 'confidence': '0.99'}
After using json.dump I am getting this error.
Traceback (most recent call last):
File "/opt/src/main.py", line 25, in <module>
urllib.request.urlretrieve(url, filename)
File "/usr/local/lib/python3.9/urllib/request.py", line 239, in
urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/usr/local/lib/python3.9/urllib/request.py", line 214, in
urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.9/urllib/request.py", line 523, in
open
response = meth(req, response)
File "/usr/local/lib/python3.9/urllib/request.py", line 632, in
http_response
response = self.parent.error(
File "/usr/local/lib/python3.9/urllib/request.py", line 561, in
error
return self._call_chain(*args)
File "/usr/local/lib/python3.9/urllib/request.py", line 494, in
_call_chain
result = func(*args)
File "/usr/local/lib/python3.9/urllib/request.py", line 641, in
http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
| You need to use json.dumps:
import json
...
print(json.dumps({"predicted" : "cat", "confidence" : f"{int(probabilities.max()*100)/100}"}))
| https://stackoverflow.com/questions/73654169/ |
Normally distributed vector given mean, stddev|variance | how to do this in torch
np.random.normal(loc=mean,scale=stdev,size=vsize)
i.e. by providing mean, stddev or variance
| There's a similar torch function torch.normal:
torch.normal(mean=mean, std=std)
If mean and std are scalars, this will produce a scalar value. You can simply expand them to the target shape you want, e.g.:
mean = 2
std = 10
size = (3,3)
r = torch.normal(mean=torch.full(size, mean).float(), std=torch.full(size, std).float())
print(r)
> tensor([[ 2.2263, 1.1374, 4.5766],
[ 3.4727, 2.6712, 2.4878],
[-0.1787, 2.9600, 2.7598]])
| https://stackoverflow.com/questions/73656322/ |
Pytorch silent data corruption | I am on a workstation with 4 A6000 GPUs. Moving a Torch tensor from one GPU to another GPU corrupts the data, silently!!!
See the simple example below.
x
>tensor([1], device='cuda:0')
x.to(1)
>tensor([1], device='cuda:1')
x.to(2)
>tensor([0], device='cuda:2')
x.to(3)
>tensor([0], device='cuda:3')
Any ideas what is the cause of this issue?
Other info that might be handy:
(there were two NVLink bridges which I manually removed while trying to solve the problem)
GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity
GPU0 X SYS SYS SYS 0-63 N/A
GPU1 SYS X SYS SYS 0-63 N/A
GPU2 SYS SYS X SYS 0-63 N/A
GPU3 SYS SYS SYS X 0-63 N/A
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6
Edit: adding some screenshots
It seems to be stateful. Changes which GPUs work fine together after starting a new python runtime.
| The solution was to disable IOMMU. On our server, in the BIOS settings
/Advanced/AMD CBS/NBIO Common Options/IOMMU -> IOMMU - Disabled
See the PyTorch issues thread for more information.
| https://stackoverflow.com/questions/73656975/ |
Larger batch size cause larger loss | I am trying to solve a regression problem using pytorch. I have a pre-trained model to start with. When I was tuning hyperparameters, I found my batch size and train/validation loss have a weird correlation. Specifically:
batch size = 16 -> train/val loss around 0.6 (for epoch 1)
batch size = 64 -> train/val loss around 0.8 (for epoch 1)
batch size = 128 -> train/val loss around 1 (for epoch 1)
I want to know if this is normal, or there is something wrong with my code.
optimizer: SGD with learning rate of 1e-3
Loss function:
def rmse(pred, real):
residuals = pred - real
square = torch.square(residuals)
sum_of_square = torch.sum(square)
mean = sum_of_square / pred.shape[0]
root = torch.sqrt(mean)
return root
train loop:
def train_loop(dataloader, model, optimizer, epoch):
num_of_batches = len(dataloader)
total_loss = 0
for batch, (X, y) in enumerate(dataloader):
optimizer.zero_grad()
pred = model(X)
loss = rmse(pred, y)
loss.backward()
optimizer.step()
total_loss += loss.item()
#lr_scheduler.step(epoch*num_of_batches+batch)
#last_lr = lr_scheduler.get_last_lr()[0]
train_loss = total_loss / num_of_batches
return train_loss
test loop:
def test_loop(dataloader, model):
size = len(dataloader.dataset)
num_of_batches = len(dataloader)
test_loss = 0
with torch.no_grad():
for X, y in dataloader:
pred = model(X)
test_loss += rmse(pred, y).item()
test_loss /= num_of_batches
return test_loss
| I'll start with a) an analogy, then b) dive into the math, and c) end with a numerical experiment.
a.) What you are witnessing is roughly the same phenomenon as the difference between stochastic and batched gradient descent. In the analog case, the "true" gradient or direction in which the learned parameters should be shifted minimizes the loss over the entire training set of data. In stochastic gradient descent, the gradient shifts the learned parameters in the direction that minimizes the loss for a single example. As the size of the batch is increased from 1 towards the size of the overall dataset, the gradient estimated from the minibatch becomes closer to the gradient for the whole dataset.
Now, is stochastic gradient descent useful at all, given that it is imprecise wrt the whole dataset? Absolutely. In fact, the noise in this estimate can be useful for escaping local minima in the optimization. Analogously, any noise in your estimate of loss wrt the whole dataset is likely nothing to worry about.
b.) But let's next look at why this behavior occurs. RMSE over the whole dataset is defined as:
RMSE = sqrt( (1/N) * sum_i (pred_i - real_i)^2 )
where N is the total number of examples in your dataset. And if RMSE were calculated this way, we would expect the value to be roughly the same (and to approach exactly the same value as N becomes large). However, in your case, you are actually calculating the mean epoch loss as:
epoch_loss = (1/B) * sum_j sqrt( (1/b) * sum_i (pred_(j,i) - real_(j,i))^2 )
where B is the number of minibatches per epoch, and b is the number of examples per minibatch (so N = B*b). Thus, epoch loss is the average RMSE per minibatch. Rearranging, we can see how the two extremes behave. When B is large (B = N) and the minibatch size is 1,
epoch_loss = (1/N) * sum_i |pred_i - real_i|
which clearly has quite different properties than RMSE defined above (it is a mean absolute error). However, as B becomes small (B = 1) and the minibatch size is N,
epoch_loss = sqrt( (1/N) * sum_i (pred_i - real_i)^2 )
which is exactly equal to RMSE above. So as you increase the batch size, the expected value for the quantity you compute moves between these two expressions. This explains the (roughly square root) scaling of your loss with different minibatch sizes. Epoch loss is an estimate of RMSE (which can be thought of as the standard deviation of model prediction error). One training goal could be to drive this error standard deviation to zero, but your expression for epoch loss is also likely a good proxy for this. And both quantities are themselves proxies for whatever model performance you actually hope to obtain.
c. You can try this for yourself with a trivial toy problem. A normal distribution is used as a proxy for model error.
EXAMPLE 1: Compute RMSE for whole dataset ( of size 10000 x b)
import torch
for b in [1, 2, 3, 5, 9, 10, 100, 1000, 10000, 100000]:
    b_errors = []
    for i in range(10000):
        error = torch.normal(0, 100, size=(1, b))
        error = error ** 2
        error = error.mean()
        b_errors.append(error)
    RMSE = torch.sqrt(sum(b_errors) / len(b_errors))
    print("Average RMSE for b = {}: {}".format(b, RMSE))
Result:
Average RMSE for b = 1: 99.94982147216797
Average RMSE for b = 2: 100.38357543945312
Average RMSE for b = 3: 100.24600982666016
Average RMSE for b = 5: 100.97154998779297
Average RMSE for b = 9: 100.06820678710938
Average RMSE for b = 10: 100.12358856201172
Average RMSE for b = 100: 99.94219970703125
Average RMSE for b = 1000: 99.97941589355469
Average RMSE for b = 10000: 100.00338745117188
EXAMPLE 2: Compute Epoch Loss with B = 10000
import torch
for b in [1, 2, 3, 5, 9, 10, 100, 1000, 10000, 100000]:
    b_errors = []
    for i in range(10000):
        error = torch.normal(0, 100, size=(1, b))
        error = error ** 2
        error = error.mean()
        error = torch.sqrt(error)
        b_errors.append(error)
    avg = sum(b_errors) / len(b_errors)
    print("Average Epoch Loss for b = {}: {}".format(b, avg))
Result:
Average Epoch Loss for b = 1: 80.95650482177734
Average Epoch Loss for b = 2: 88.734375
Average Epoch Loss for b = 3: 92.08515930175781
Average Epoch Loss for b = 5: 95.56260681152344
Average Epoch Loss for b = 9: 97.49445343017578
Average Epoch Loss for b = 10: 97.20250701904297
Average Epoch Loss for b = 100: 99.6297607421875
Average Epoch Loss for b = 1000: 99.96969604492188
Average Epoch Loss for b = 10000: 99.99618530273438
Average Epoch Loss for b = 100000: 100.00079345703125
| https://stackoverflow.com/questions/73659118/ |
What is the most efficient way to make a method that is able to process single and multi dimensional arrays in python? | I was using pytorch and realized that for a linear layer you could pass not only 1D tensors but multidimensional tensors, as long as the last dimensions matched. Multi dimensional inputs in pytorch Linear method?
I tried looping over each item, but is that what pytorch does?
I'm having trouble thinking how you would program looping with more dimensions, maybe recursion but that seems messy.
What is the most efficient way to implement this?
| I tried looping over each item, but is that what pytorch does?
The short answer is yes, loops are used, but it's more complicated than you probably think. If input is a 2D tensor (a matrix), then the output of a linear operation is computed as input @ weight.T + bias using an external BLAS library's GEMM operation. Otherwise it uses torch.matmul(input, weight.T) + bias which uses broadcast semantics to compute a batched version of the operation. Broadcasting is a semantic, not an implementation, so how the broadcasting is performed is going to be backend-dependent. Ultimately some form of looping combined with parallel processing will be used for most of these implementation.
To go a little deeper, lets take a look at the PyTorch implementation of the linear layer. This quickly leads down some rabbit holes since PyTorch uses different backend libraries for performing linear algebra operations efficiently on the hardware available (libraries like oneAPI, Intel MKL, or MAGMA) but perhaps understanding some of the details can help.
Starting at the C++ entrypoint to nn.functional.linear:
Tensor linear(const Tensor& input, const Tensor& weight, const Tensor& bias) {
if (input.is_mkldnn()) {
return at::mkldnn_linear(input, weight, bias);
}
if (input.dim() == 2 && bias.defined()) {
// Fused op is marginally faster.
return at::addmm(bias, input, weight.t());
}
auto output = at::matmul(input, weight.t());
if (bias.defined()) {
output.add_(bias);
}
return output;
}
There are three cases here.
input.is_mkldnn(). This condition occurs if the input tensor is in the MKL-DNN format (Tensor.to_mkldnn) and will make PyTorch use the at::mkldnn_linear function, which in turn makes calls to ideep, which in turn makes calls to the oneDNN library (previous known as Intel MKL-DNN), which ultimately selects a specific general matrix-matrix multiplication (GEMM) routine dependent on platform and data types. The simplest implementation is the reference implementation, and from that we can see that they use a parallel-for loop (note the anonymous function they use uses a quadruple nested for-loop). In practice the reference implementation probably isn't used, instead, you would probably be calling the x86 optimized version compiled with the Xbyak JIT assembler to produce highly optimized code. I'm not going to pretend to follow all the details of the optimized code, but efficient GEMM is a heavily studied topic that I only have a passing knowledge of.
input.dim() == 2 && bias.defined(). This condition means that input is a 2D tensor (shape [B,M]) and bias is defined. In this case pytorch uses the addmm function. This efficiently computes the output as input @ weight.T + bias where @ is matrix multiplication. There are multiple implementations of addmm registered in PyTorch depending on what types of tensors are being used. The dense-CPU specific version is here which eventually makes calls to an external BLAS library's GEMM subroutine. The backend used is likely Intel MKL but you can check using print(torch.__config__.parallel_info()). Whichever BLAS implementation is being used, its certainly a highly optimized implementation of matrix multiplication similar to the oneDNN implementation, probably using multi-threading and optimized compilation.
If neither of the previous two conditions are met then PyTorch uses the torch.matmul function, which performs a broadcasted version of input @ weight.T where input is shape [..., M]. The result of this operation is a tensor of shape [..., N]. Similar to addmm, there are multiple implementations of this function depending on the tensor types but an external library will ultimately be used that uses parallelization and optimized matrix-multiplication subroutines. After the broadcasted matrix-multiplication a broadcasted add_ operation is used to add the bias term (if bias is defined).
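As a small illustration of that broadcasted case (this is not the internal implementation, just a check of the semantics), a 3D input to nn.Linear gives the same result as looping over the leading dimension yourself:
import torch
import torch.nn as nn

linear = nn.Linear(8, 4)
x = torch.randn(5, 3, 8)   # arbitrary leading dimensions; only the last one must match in_features

out_batched = linear(x)    # shape (5, 3, 4), computed with a broadcasted matmul
out_looped = torch.stack([linear(x[i]) for i in range(x.shape[0])])

print(torch.allclose(out_batched, out_looped))  # True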
| https://stackoverflow.com/questions/73660261/ |
What should the input to DeepLabV3 be in training mode? | I am trying to train a deeplabv3_resnet50 model on a custom dataset, but get the error ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1]) when trying to do the forward pass. The following minimal example produces this error:
import torch
import torchvision
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.train()
batch_size = 1
nbr_of_channels = 3
img_height, img_width = (500, 500)
input = torch.rand((batch_size, nbr_of_channels, img_height, img_width))
model(input)
I do not understand this at all. What is meant by got input size torch.Size([1, 256, 1, 1]), and what should I do differently?
| The error you get is from a deep BatchNorm layer: deep in the backbone, the feature map size is reduced to 1x1 pixels. As a result, BatchNorm cannot compute std of the feature map when batch size is only one.
For any batch_size > 1 this will work:
batch_size = 2 # Need bigger batches for training
nbr_of_channels = 3
img_height, img_width = (500, 500)
input = torch.rand((batch_size, nbr_of_channels, img_height, img_width))
model(input) # working with batch_size > 1
| https://stackoverflow.com/questions/73661815/ |
Pytorch lightning: see input/ouptut size in model summary when using nn.ModuleList | When I use nn.ModuleList() to define layers in my pytorch lightning model, their "In sizes" and "Out sizes" in ModelSummary are "?"
Is there a way to have the input/output sizes of each layer in the model summary, possibly using something other than nn.ModuleList() to define layers from a list of arguments?
Here is a dummy model:
(12 is the batch size)
import torch
import torch.nn as nn
from pytorch_lightning import LightningModule
from pytorch_lightning.utilities.model_summary import ModelSummary
class module_list_dummy(LightningModule):
def __init__(self,
layers_size_list,
):
super().__init__()
self.example_input_array = torch.zeros((12,100), dtype=torch.float32)
self.fc11 = nn.Linear(100,50)
self.moduleList = nn.ModuleList()
input_size = 50
for layer_size in layers_size_list:
self.moduleList.append(nn.Linear(input_size, layer_size))
input_size = layer_size
self.loss_fn = nn.MSELoss()
def forward(self, x):
out = self.fc11(x)
for layer in self.moduleList:
out = layer(out)
return out
def training_step(self, batch, batch_idx):
x, y = batch
out = self.fc11(x)
for layer in self.moduleList:
out = layer(out)
loss = torch.sqrt(self.loss_fn(out, y))
self.log('train_loss', loss)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
out = self.fc11(x)
for layer in self.moduleList:
out = layer(out)
loss = torch.sqrt(self.loss_fn(out, y))
self.log('val_loss', loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.lr)
return optimizer
the following code print the summary:
net = module_list_dummy(layers_size_list=[30,20,1])
summary = ModelSummary(net)
print(summary)
But the output is:
| Name | Type | Params | In sizes | Out sizes
------------------------------------------------------------------
0 | fc11 | Linear | 5.0 K | [12, 100] | [12, 50]
1 | moduleList | ModuleList | 2.2 K | ? | ?
2 | loss_fn | MSELoss | 0 | ? | ?
------------------------------------------------------------------
7.2 K Trainable params
0 Non-trainable params
7.2 K Total params
0.029 Total estimated model params size (MB)
I would expect to have:
1 | moduleList | ModuleList | 2.2 K | [12,50] | [12,1]
or even better something like
1 | Linear | Linear | ... | [12,50] | [12,30]
2 | Linear | Linear | ... | [12,30] | [12,20]
3 | Linear | Linear | ... | [12,20] | [12,1]
I did this to check that those layer are being used in forward:
x = torch.zeros((12,100), dtype=torch.float32)
net(x).shape
and they are ( model output is size [12,1])
| In your case you are using the layers in the list in a sequential manner:
for layer in self.moduleList:
out = layer(out)
However, nn.ModuleList does not force you to run your layers sequentially. You could be doing this:
out = self.moduleList[3](out)
out = self.moduleList[1](out)
out = self.moduleList[0](out)
Hence, ModuleList cannot itself be interpreted as a "layer" with an input- and output shape. It is completely arbitrary how the user might integrate the layers inside the list in their graph.
In your specific case where you run the layers sequentially, you could define a nn.Sequential instead. This would look like this:
self.layers = nn.Sequential()
for layer_size in layers_size_list:
self.layers.append(nn.Linear(input_size, layer_size))
input_size = layer_size
And then in your forward:
out = self.layers(input)
Now, the model summary in Lightning will also show the shape sizes, because nn.Sequential's output shape is now directly the output shape of the last layer inside of it.
I hope this answer helps your understanding.
| https://stackoverflow.com/questions/73665285/ |
RuntimeError: Could not infer dtype of generator | I build a training data for a model using pytorch.
def shufflerow(tensor1, tensor2, axis):
row_perm = torch.rand(tensor1.shape[:axis+1]).argsort(axis) # get permutation indices
for _ in range(tensor1.ndim-axis-1): row_perm.unsqueeze_(-1)
row_perm = row_perm.repeat(*[1 for _ in range(axis+1)], *(tensor1.shape[axis+1:])) # reformat this for the gather operation
return tensor1.gather(axis, row_perm),tensor2.gather(axis, row_perm)
class Dataset:
def __init__(self, observation, next_observation):
self.data =(observation, next_observation)
indices = torch.randperm(observation.shape[0])
self.train_samples = (observation[indices ,:], next_observation[indices ,:])
self.test_samples = shufflerow(observation, next_observation, 0)
I also have this function, which checks whether the data has been converted to torch.Tensor and sets the device:
def to_tensor(x, device):
if torch.is_tensor(x):
return x
elif isinstance(x, np.ndarray):
return torch.from_numpy(x).to(device=device, dtype=torch.float32)
elif isinstance(x, list):
if all(isinstance(item, np.ndarray) for item in x):
return [torch.from_numpy(item).to(device=device, dtype=torch.float32) for item in x]
elif isinstance(x, tuple):
return (torch.from_numpy(item).to(device=device, dtype=torch.float32) for item in x)
else:
print(f"X:{x} and X's type{type(x)}")
return torch.tensor(x).to(device=device, dtype=torch.float32)
But passing the input data that basically looks like this through the Dataset class
data=Dataset(s1,s2)
print(data.train_samples)
(tensor([[-0.3121, -0.9500, 1.4518],
[-0.9903, -0.1391, -4.4141],
[-0.9645, -0.2642, 5.0233],
[-0.6413, 0.7673, -4.5495],
[-0.3073, 0.9516, -1.0128],
[-0.5495, 0.8355, 3.4044],
[-0.5710, -0.8209, -3.2716],
[-0.9388, 0.3445, 3.9225],
[-0.8402, -0.5423, -4.0820]]), tensor([[-0.2723, -0.9622, 0.8342],
[-0.9958, 0.0912, -4.6186],
[-0.8747, -0.4847, 4.7741],
[-0.5495, 0.8355, 3.4044],
[-0.7146, 0.6996, 4.2841],
[-0.7128, -0.7014, -3.7148],
[-0.9915, 0.1303, 4.4200],
[-0.9358, -0.3526, -4.2585]]))
I am getting this error message
-> 1725 self._target_samples = to_tensor(true_samples)
1726 self._steps = []
/content/data_gen.py in to_tensor(x)
1368 else:
1369 print(f"X:{x} and X's type{type(x)}")
-> 1370 return torch.tensor(x).to(device=device, dtype=torch.float32)
X:<generator object to_tensor.<locals>.<genexpr> at 0x7f380235d6d0> and X's type<class 'generator'>
RuntimeError: Could not infer dtype of generator
Any suggestion, why I am getting this error?
| The expression (torch.from_numpy(item).to(device=device, dtype=torch.float32) for item in x) isn't creating a tuple, it's a generator expression. Since it's in a case where you test for tuples, I suspect you wanted a tuple instead of a generator. Try:
elif isinstance(x, tuple):
return tuple(torch.from_numpy(item).to(device=device, dtype=torch.float32) for item in x)
| https://stackoverflow.com/questions/73667338/ |
some parameters appear in more than one parameter group | When I try the below I get the error ValueError: some parameters appear in more than one parameter group. However, inspecting the model it is not clear to me what is the overlapping module.
The only possibility as to why it might think so may be because lm_head and transformer.wte have parameters named weight. I'm wondering if this name is what is causing this error.
I am doing this so that I can have the lower layers "moving slowly" compared to the upper layers. Happy to hear if there is an alternative way to do these discriminative learning rates where I don't have overlapping parameters (if any).
import torch
from transformers import AutoModelForCausalLM
language_model = AutoModelForCausalLM.from_pretrained("gpt2")
FREEZE_LAYERS = 2
caption_params = [
{"params": language_model.lm_head.parameters() , "lr": 1e-4},
{"params": language_model.transformer.ln_f.parameters() , "lr": 1e-4},
{"params": language_model.transformer.h[FREEZE_LAYERS:].parameters() , "lr": 5e-5},
{"params": language_model.transformer.wte.parameters() , "lr": 1e-5},
]
optimizer = torch.optim.Adam(caption_params)
| The error message is diagnosing the problem correctly: there are some parameters that appear in more than one parameter group. You can prove this to yourself by doing the following:
>>> parameter_ids = [[id(p) for p in group["params"]] for group in caption_params]
>>> parameter_ids[0]
[140666221372896]
>>> parameter_ids[3]
[140666221372896]
This reveals that the first and last parameter groups, each of which contains a single large embedding tensor, are actually holding a reference to the same exact tensor. What is this tensor? Let's look at it, using both routes of reference to further show it's the same thing:
>>> a = next(language_model.lm_head.parameters())
>>> a
Parameter containing:
tensor([[-0.1101, -0.0393, 0.0331, ..., -0.1364, 0.0151, 0.0453],
[ 0.0403, -0.0486, 0.0462, ..., 0.0861, 0.0025, 0.0432],
[-0.1275, 0.0479, 0.1841, ..., 0.0899, -0.1297, -0.0879],
...,
[-0.0445, -0.0548, 0.0123, ..., 0.1044, 0.0978, -0.0695],
[ 0.1860, 0.0167, 0.0461, ..., -0.0963, 0.0785, -0.0225],
[ 0.0514, -0.0277, 0.0499, ..., 0.0070, 0.1552, 0.1207]],
requires_grad=True)
>>> b = next(language_model.transformer.wte.parameters())
>>> b
Parameter containing:
tensor([[-0.1101, -0.0393, 0.0331, ..., -0.1364, 0.0151, 0.0453],
[ 0.0403, -0.0486, 0.0462, ..., 0.0861, 0.0025, 0.0432],
[-0.1275, 0.0479, 0.1841, ..., 0.0899, -0.1297, -0.0879],
...,
[-0.0445, -0.0548, 0.0123, ..., 0.1044, 0.0978, -0.0695],
[ 0.1860, 0.0167, 0.0461, ..., -0.0963, 0.0785, -0.0225],
[ 0.0514, -0.0277, 0.0499, ..., 0.0070, 0.1552, 0.1207]],
requires_grad=True)
>>> a is b
True
This makes sense, because many Transformer-based models tie the weights used in mapping between word IDs and word representations at the beginning (the initial Embedding layer) and end (the LM head) of the model.
For your specific problem, you can either accept that the tied weights will be moving at the same LR, or you can untie them by cloning and assigning a new copy of the parameter to one of the two modules.
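A minimal sketch of the untying option (after this, lm_head and wte no longer share storage, so each parameter group updates its own tensor):
import torch

language_model.lm_head.weight = torch.nn.Parameter(
    language_model.transformer.wte.weight.clone().detach()
)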
| https://stackoverflow.com/questions/73675928/ |
Why isn't my neural network learning and why is its prediction the same on the whole test dataset? | I am trying to train my first simple neural network. I downloaded a dataset from Kaggle that comes with a model already written in TensorFlow. I tried to reproduce this model in PyTorch, but my network is obviously not training and the prediction does not change from example to example. Please help me. What am I doing wrong?
My model
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.conv_2d_1 = Conv2d(3, out_channels=16, kernel_size=16, padding=0)
self.pooling1 = MaxPool2d((2, 2))
self.conv_2d_2 = Conv2d(16, out_channels=32, kernel_size=32, padding=0)
self.pooling2 = MaxPool2d((2, 2))
self.conv_2d_3 = Conv2d(32, out_channels=64, kernel_size=16, padding=0)
self.pooling3 = MaxPool2d((2, 2))
self.l1 = Linear(64*25, 128)
self.l2 = Linear(128, 2)
self.sigmoid = Sigmoid()
self.dropout = Dropout(0.2)
self.relu = ReLU()
self.softmax = softmax
self.flatten = tr.nn.Flatten()
def forward(self, x):
x.requires_grad = True
x = self.conv_2d_1(x)
x = self.relu(self.pooling1(x))
x = self.conv_2d_2(x)
x = self.relu(self.pooling2(x))
x = self.conv_2d_3(x)
x = self.relu(self.pooling3(x))
x = self.flatten(x)
x = x.reshape(64*25)
x = self.l1(x)
x = self.l2(x)
return x
Optimizer and loss function
loss_fn = CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=1e-3)
Training cycle
from matplotlib import pyplot as plt
test1 = tr.Tensor([0, 1]).to('cuda:1')
test2 = tr.Tensor([1, 0]).to('cuda:1')
epochs_number = 10
loss_data = np.zeros((train_clear.shape[0] + train_ill.shape[0]) * epochs_number)
loss_data_cursor = 0
model.train()
for k in range(epochs_number):
i = 0
while i < max(train_clear.shape[0], train_ill.shape[0]) + 1:
try:
optimizer.zero_grad()
prediction = model(tr.Tensor(train_clear[i]).to('cuda:1'))
loss = loss_fn(prediction, test1)
loss.backward()
optimizer.step()
# print(loss, prediction, test1)
loss_item = loss.item()
loss_data[loss_data_cursor] = loss_item
loss_data_cursor += 1
except:
pass
try:
prediction = model(tr.Tensor(train_ill[i]).to('cuda:1'))
loss = loss_fn(prediction, test2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# print(loss, prediction, test1)
loss_item = loss.item()
loss_data[loss_data_cursor] = loss_item
loss_data_cursor += 1
except:
pass
i += 1
plt.plot(loss_data)
plt.show()
Training loss
The test
model.train(mode=False)
prediction1 = model(tr.Tensor(train_clear[0]).to('cuda:1'))
prediction2 = model(tr.Tensor(train_ill[5]).to('cuda:1'))
prediction1, prediction2, model.parameters(), test_clear[0].shape
rand = tr.randn(test_clear[0].shape).to('cuda:1')
prediction3 = model(rand)
prediction1, prediction2, prediction3, model.parameters(), test_clear[0].shape
Test results
(tensor([ 0.2431, -0.3565], device='cuda:1', grad_fn=<AddBackward0>),
tensor([ 0.2431, -0.3565], device='cuda:1', grad_fn=<AddBackward0>),
tensor([ 58.9911, -75.1154], device='cuda:1', grad_fn=<AddBackward0>),
<generator object Module.parameters at 0x00000183A4845970>,
(3, 180, 180))
| I found the solution. My problem was the use of torch.nn.ReLU instead of torch.nn.functional.relu. I found that they aren't equivalent methods, and my problem was solved when I replaced one with the other.
| https://stackoverflow.com/questions/73678326/ |
why `local_rank` is zero in DDP even I set visible CUDA as 2? | There are 3 GPUs in my system.
I want to run on the last one i.e. 2. For this reason, I set gpu_id as 2 in my configuration file as well as CUDA_VISIBLE_DEVICES=2. But in my program, the following line always assigns the 0th GPU.
local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
How to fix this issue?
| When setting CUDA_VISIBLE_DEVICES=2 you tell the OS to only expose the third GPU to your process. That is, as far as PyTorch is concerned, there is only one GPU. Therefore torch.distributed.get_world_size() returns 1 (and not 3).
The rank of this GPU, in your process, will be 0 - since there are no other GPUs available for the process. But as far as the OS is concerned - all processing are done on the third GPU that was allocated to the job.
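A quick way to see this from inside the process (assuming it was launched with CUDA_VISIBLE_DEVICES=2):
import torch

print(torch.cuda.device_count())      # 1 -- only the exposed GPU is visible
print(torch.cuda.current_device())    # 0 -- PyTorch indexes the visible GPUs from zero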
| https://stackoverflow.com/questions/73678882/ |
RuntimeError: The size of tensor a (38) must match the size of tensor b (34) at non-singleton dimension 3 | I am studying ResNet-50 using CIFAR-10, but I faced a RuntimeError.
Here is the code:
class BasicBlock(nn.Module):
def __init__(self, in_planes, planes, stride = 1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size = 1, stride = stride, padding = 1, bias = False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size = 3, stride = 1, padding = 1, bias = False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size = 1, stride = 1, padding = 1, bias = False)
self.bn3 = nn.BatchNorm2d(planes * 4)
if stride != 1:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, planes, kernel_size = 1, stride = stride, bias = False),
nn.BatchNorm2d(planes)
)
else:
self.shortcut = nn.Sequential()
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out = self.bn3(self.conv3(out))
out += self.shortcut(x) #shortcut connection
out = F.relu(out)
and Error is
RuntimeError: The size of tensor a (38) must match the size of tensor b (34) at non-singleton dimension 3
How can I fix it?
| You don't want padding in your self.conv1 or self.conv3, because you'll be increasing your image size by 2 (1 on each side) each time. Padding should only be used to avoid reducing your image size when using a kernel size above 1.
| https://stackoverflow.com/questions/73681725/ |
PyTorch model output different dimension when using DataParallel | I implemented my PyTorch model with DataParallel for multi-GPU training. However, it seems that the model doesn't consistently output the right dimension. In the training loop, it seems that the model gave the correct output dimension for the first two batches, but it failed to do so for the third batch and caused an error when calculating the loss:
I also tried to use the solution from this post but it didn't help.
| It seems like you are left with only one sample for the last batch. Try setting drop_last=True in your Dataloader: This will discard the last "not-full" batch.
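For example (batch_size stands for whatever value you already use):
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, drop_last=True)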
| https://stackoverflow.com/questions/73683611/ |
Simultaneous Batch and Channel slicing in PyTorch | In PyTorch I have an RGB tensor imgA of batch size 256. I want to retain the green channel for first 128 batches and red channel for remaining 128 batches, something like below:
imgA[:128,2,:,:] = imgA[:128,1,:,:]
imgA[128:,2,:,:] = imgA[128:,0,:,:]
imgA = imgA[:,2,:,:].unsqueeze(1)
or same can be achieved like
imgA = torch.cat((imgA[:128,1,:,:].unsqueeze(1),imgA[128:,0,:,:].unsqueeze(1)),dim=0)
but as I have multiple such images like imgA, imgB, imgC, etc what is the fastest way of achieving the above goal?
| A slicing-based solution can be achieved using torch.gather and repeat_interleave:
select = torch.tensor([1, 0], device=imgA.device)
imgA = imgA.gather(dim=1, index=select.repeat_interleave(128, dim=0).view(256, 1, 1, 1).expand(-1, -1, *imgA.shape[-2:]))
You can also do that using matrix multiplication and repeat_interleave:
# select c=1 for first half and c=0 for second
select = torch.tensor([[0, 1],[1, 0],[0, 0]], dtype=imgA.dtype, device=imgA.device)
imgA = torch.einsum('cb,bchw->bhw',select.repeat_interleave(128, dim=1), imgA).unsqueeze(dim=1)
| https://stackoverflow.com/questions/73687700/ |
How to load one model’s output as another model’s parameters and do end-to-end optimization | Here we have an abstract of this problem:
Assume that we have two models: a ResNet and an EfficientNet.
The first model is as follow (ResNet):
def __init__(self, in_channels, out_channels, num_classes):
super().__init__()
self.conv1_0 = _conv3x3(3, 32, stride=2)
self.bn1_0 = _bn(32)
self.conv1_1 = _conv3x3(32, 32, stride=1)
self.bn1_1 = _bn(32)
self.conv1_2 = _conv3x3(32, 64, stride=1)
self.relu = nn.ReLU()
self.pad = torch.nn.ReplicationPad2d(padding=(0, 0, 1, 1))
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2)
self.end_point = _fc(225, num_classes)
def forward(self, x):
x = self.conv1(x)
#and so on...
out = self.end_point(x)
return out
while the second model has been downloaded as follow (Efficient):
efficientNet = models.efficientnet_b5().to(device)
So, we have two models, but the first is developed from scratch and the second one is imported from the torchvision.models library.
Now, we want the EfficientNet to take the output of the ResNet, and then make an end-to-end optimization from the EfficientNet's output back to the first layer of the ResNet.
I know an easier way, which consists of changing the code and directly using the ResNet's result as an input of EfficientNet's forward(). However, the EfficientNet model is too complex, so making such a change is difficult.
For mathematical reasons, assume that we have another simple model between the ResNet and the EfficientNet, called gumbel_model.
So, in conclusion, we have 3 models, but we only have target labels for the last one of them, so we can only calculate the loss of the last model (EfficientNet).
When we calculate the loss of the last model, we actually write the three rows as follow:
optimizer.zero_grad()
loss.backward()
optimizer.step()
Where the optimizer is as follow:
optimizer = optim.SGD([dict(params=efficientNet.parameters(), lr=LR)])
Is it correct that we are backpropagating through only the last model?
To backpropagate end-to-end through the models, is it necessary to add the parameters of the ResNet, like the first arg of optim.SGD? If yes, how could we get the params of the ResNet? (see above)
I have tried using some codes during one epoch, as follow :
efficientNet = get_efficient_trained(device, out_features=1, in_features=3,path_model)
predictor = ResNet(ResidualBlockBase, layer_config, num_classes=num_cl)
for i, imgs in enumerate(dataloader):
inputs, labels = imgs
inputs, labels = inputs.to(device), labels.to(device)
predictor_output = predictor(inputs)
predictor_gumbel_output = gumbel(predictor_output)
optimizer.zero_grad()
outputs = torch.squeeze(efficientnet(predictor_gumbel_output), 1)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
return model
But from the results I suspect that only the EfficientNet is actually being trained.
Is there any way to let the backpropagation reach the ResNet?
How could I solve this type of problem?
Waiting for responses.
I really thank you all in advance.
| The params argument of the optimizer specifies all the parameters you want to optimize. Here, as you are only passing the parameters of EfficientNet, only those get optimized, as you suspect.
To optimize for all parameters end-to-end, simply pass them all when initializing the optimizer. This can be done as:
optimizer = optim.SGD(list(efficientNet.parameters()) + list(gumbel.parameters()) + list(predictor.parameters()), lr=LR)
| https://stackoverflow.com/questions/73691492/ |
How to select indices according to another tensor in pytorch? | I have two tensors a and b, and I want to retrieve the values of b according to the positions of the max values in a. That is,
max_values, indices = torch.max(a, dim=0, keepdim=True)
However, I do not know how to use the indices to retrieve the values of b. Can anybody help solve it? Thanks a lot!!
Edit:
Sorry for not describing my problem concretely. To give a minimal example, the value of tensors a and b are:
a = torch.tensor([[1,2,4],[2,1,3]])
b = torch.tensor([[10,24,2],[23,4,5]])
If I use torch.max(a, dim=0, keepdim=True), it will return:
max: tensor([[2, 2, 4]])
indices: tensor([[1, 0, 0]])
What I want to obtain is the selected value of tensor b according to the indices of max values of a in dim=0, that is,
tensor([[23, 24, 2]])
I have tried b[indices], whereas the result is not what I want:
tensor([[[ 2, 3, 5],
[10, 30, 40],
[10, 30, 40]]])
| You can use torch.gather:
torch.gather(b, dim=0, index=indices)
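With the example tensors from the question this gives exactly the expected result:
import torch

a = torch.tensor([[1, 2, 4], [2, 1, 3]])
b = torch.tensor([[10, 24, 2], [23, 4, 5]])

max_values, indices = torch.max(a, dim=0, keepdim=True)
print(torch.gather(b, dim=0, index=indices))  # tensor([[23, 24,  2]])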
| https://stackoverflow.com/questions/73691660/ |
Pytorch : TypeError when I call my data_transformation function inside where I define my train_dataset object | When I try to make an object from my get_data function :
train = get_data(root ="My_train_path",
transform = data_transforms[TRAIN] )
it returns an TypeError: 'function' object is not subscriptable.
data_dir = 'my_dataset_dir'
TEST = 'test'
TRAIN = 'train'
VAL = 'val'
def data_transforms(phase):
if phase == TRAIN:
transform = A.Compose([
A.CLAHE(clip_limit=4.0, p=0.7),
A.CoarseDropout(max_height=8, max_width=8, max_holes=8, p=0.5),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
if phase == VAL:
transform = A.Compose([
A.Resize(height=256,width=256),
A.CenterCrop(height=224,width=224),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
if phase == TEST:
transform = A.Compose([
A.Resize(height=256,width=256),
A.CenterCrop(height=224,width=224),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
return transform
def get_data(root,transform):
image_dataset = CustomImageFolder(root=".",
transform = transform,
)
return image_dataset
def make_loader(dataset, batch_size,shuffle,num_workers):
loader = torch.utils.data.DataLoader(dataset=dataset,
batch_size=batch_size,
shuffle=shuffle,
pin_memory=True, num_workers=num_workers)
return loader
| The problem is exactly as the error message says it: data_transforms is a function you have defined, and you want to call it with the training phase as the argument. However, you are erroneously subscripting the function, with your use of square brackets ([]). To fix this, replace the square brackets with parentheses (()), as is done for a function call.
That is,
train = get_data(root ="My_train_path",
transform = data_transforms(TRAIN) )
| https://stackoverflow.com/questions/73692593/ |
What is the difference between sagemaker-pytorch-training-toolkit and sagemaker-training-toolkit in SageMaker? | When porting PyTorch code / models to SageMaker, which one should we use:
PyTorch Training Toolkit (https://github.com/aws/sagemaker-pytorch-training-toolkit/) or
SageMaker Training Toolkit (https://github.com/aws/sagemaker-training-toolkit)? What's the difference when using these toolkits?
| The SageMaker PyTorch Training Toolkit repository used to be the repository for the SageMaker PyTorch training containers, and similarly the SageMaker PyTorch Inference Toolkit was the repository for the SageMaker PyTorch inference containers.
At some point, AWS started to use the Dockerfiles from the AWS Deep Learning Containers repository directly, so the repositories above were renamed: AWS now uses them to build a library that gets installed into the DL containers to make them SageMaker-compatible for training.
Example: the toolkit's package definition is here: https://github.com/aws/sagemaker-pytorch-training-toolkit/blob/master/setup.py and an example of that package being installed into a DL container is here: https://github.com/aws/deep-learning-containers/blob/1596489c9002cea08f8a2a7d2f4642c4b3727d52/pytorch/training/docker/1.6.0/py3/Dockerfile.cpu#L112
| https://stackoverflow.com/questions/73694705/ |
Pytorch crashes with error IndexError: Target 32 is out of bounds | I am attempting to train a model on the CIFAR-100 dataset, on CPU.
But, I get an error:
Traceback (most recent call last):
File "recog.py", line 68, in <module>
loss = criterion(outputs, labels)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 1152, in forward
label_smoothing=self.label_smoothing)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2846, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
IndexError: Target 32 is out of bounds.
I took a snippet from here and modified it a little.
Code:
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = torchvision.datasets.CIFAR100(root='./dataone', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR100(root='./dataone', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('aquatic mammals','fish','flowers','food containers','fruit and vegetables','household electrical devices','household furniture','insects','large carnivores','large man-made outdoor things','large natural outdoor scenes','large omnivores and herbivores','medium-sized mammals','non-insect invertebrates','people','reptiles','small mammals','trees','vehicles 1','vehicles 2')
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
import torch.optim as optim
#criterion = nn.CrossEntropyLoss()
#optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
print('Finished Training')
Also, the target number in the error is different each time. I use Python 3.9 with the latest PyTorch.
When I attempt to do the same thing, but with CIFAR-10, it works perfectly. I'm stuck.
Please help.
| Your model only predicts 10 classes. CIFAR100 has 100 classes.
Change
self.fc3 = nn.Linear(84, 10)
to
self.fc3 = nn.Linear(84, 100)
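As a quick sanity check (a sketch, reusing the trainset and net objects from the question), the number of classes can be read from the torchvision dataset instead of being hard-coded:
num_classes = len(trainset.classes)         # 100 for CIFAR-100
assert net.fc3.out_features == num_classes  # the final layer must match the class count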
| https://stackoverflow.com/questions/73697011/ |
How retain_grad() in pytorch works? I found its position changes the grad result | In a simple test in PyTorch, I want to see the grad of a non-leaf tensor, so I use retain_grad():
import torch
a = torch.tensor([1.], requires_grad=True)
y = torch.zeros((10))
gt = torch.zeros((10))
y[0] = a
y[1] = y[0] * 2
y.retain_grad()
loss = torch.sum((y-gt) ** 2)
loss.backward()
print(y.grad)
it gives me a normal output:
tensor([2., 4., 0., 0., 0., 0., 0., 0., 0., 0.])
but when I call retain_grad() after y[0] is assigned and before y[1] is computed:
import torch
a = torch.tensor([1.], requires_grad=True)
y = torch.zeros((10))
gt = torch.zeros((10))
y[0] = a
y.retain_grad()
y[1] = y[0] * 2
loss = torch.sum((y-gt) ** 2)
loss.backward()
print(y.grad)
now the output changes to:
tensor([10., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
I can't understand the result at all.
| Okay so what's going on is really weird.
What .retain_grad() essentially does is make a non-leaf tensor behave like a leaf tensor for gradient purposes, so that it gets a populated .grad attribute (since by default, PyTorch only populates .grad on leaf tensors).
Hence, in your first example, calling y.retain_grad() basically turned y into a leaf-like tensor with an accessible .grad attribute.
However, in your second example, you first turned the entire y tensor into a leaf-like tensor, and then created a non-leaf tensor (y[1]) inside it, which is what causes the confusion.
y = torch.zeros((10)) # y is a non-leaf tensor
y[0] = a # y[0] is a non-leaf tensor
y.retain_grad() # y is a leaf tensor (including y[1])
y[1] = y[0] * 2 # y[1] is a non-leaf tensor, BUT y[0], y[2], y[3], ..., y[9] are all leaf tensors!
The confusing part is:
Right after calling y.retain_grad(), y[1] is a leaf-like entry with a .grad attribute. However, after the computation (y[1] = y[0] * 2), the y[1] slot no longer holds that entry; it is now treated as a new non-leaf variable/tensor.
Therefore, when calling loss.backward(), the chain rule of the loss w.r.t. the retained y (in particular its y[0] and overwritten y[1] entries) works out as shown below:
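(The original answer's illustration is not reproduced here; the following is a reconstruction of the computation, assuming a = 1 as in the question, so that y[0] = 1 and y[1] = 2 at backward time. Writing y_ret for the retained tensor:)

\frac{\partial L}{\partial y_{ret}[0]} = 2 \cdot y[0] + \frac{\partial L}{\partial y[1]} \cdot \frac{\partial y[1]}{\partial y_{ret}[0]} = 2 \cdot 1 + (2 \cdot 2) \cdot 2 = 10

\frac{\partial L}{\partial y_{ret}[1]} = 0, \quad \text{since that entry was overwritten before reaching the loss}

which matches the printed tensor([10., 0., 0., 0., 0., 0., 0., 0., 0., 0.]).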
| https://stackoverflow.com/questions/73698041/ |
In pytorch, how to calculate gradient for a element in a tensor when it is used to calculate another element in this tensor? | In this pytorch code:
import torch
a = torch.tensor([2.], requires_grad=True)
y = torch.zeros((10))
gt = torch.zeros((10))
y[0] = a
y[1] = y[0] * 2
y.retain_grad()
loss = torch.sum((y-gt) ** 2)
loss.backward()
print(y.grad)
I want y[0]'s gradient to consist of 2 parts:
the loss backpropagated to y[0] itself;
y[0] is used to calculate y[1], so it should also receive y[1]'s contribution to the gradient.
But when I run this code, only part 1 appears in y[0]'s gradient.
So how do I make y[0]'s gradient include both parts?
edit: the output is:
tensor([4., 8., 0., 0., 0., 0., 0., 0., 0., 0.])
but I expect:
tensor([20., 8., 0., 0., 0., 0., 0., 0., 0., 0.])
| y[0] and y[1] are two different elements, so they have separate grads. The only thing that "binds" them is the underlying relation to a. If you inspect the grad of a, you'll see:
print(a.grad)
tensor([20.])
That is, the two parts of the gradients are combined in a.grad.
| https://stackoverflow.com/questions/73698878/ |
How to use architecture of T5 without pretrained model (Hugging face) | I would like to study the effect of pre-training, so I want to test the T5 model with and without pre-trained weights. Using pre-trained weights is straightforward, but I cannot figure out how to use the architecture of T5 from Hugging Face without the weights. I am using Hugging Face with PyTorch but am open to different solutions.
| https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model
"Initializing with a config file does not load the weights associated with the model, only the configuration."
To use the model without pretrained weights, create a T5Model from a config file:
from transformers import AutoConfig
from transformers import T5Tokenizer, T5Model
model_name = "t5-small"
config = AutoConfig.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5Model.from_pretrained(model_name)
model_raw = T5Model(config)
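If you need the full encoder-decoder with the LM head rather than the bare T5Model, the same from-config pattern applies (a sketch):
from transformers import T5ForConditionalGeneration

model_raw_lm = T5ForConditionalGeneration(config)  # randomly initialized, architecture only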
| https://stackoverflow.com/questions/73700165/ |
How do I use a pt file in Pytorch to predict the label of a new data? | This is my training model run.py, my data is a one-dimensional matrix with one row and one category.
import numpy as np # linear algebra
import pandas as pd
import os
for dirname, _, filenames in os.walk('./kaggle'):
for filename in filenames:
print(os.path.join(dirname, filename))
import torch
from torch.utils.data import DataLoader
from torch import nn,optim
import sys
from tqdm import tqdm
import io
import torch.utils.model_zoo as model_zoo
import torch.onnx
def my_DataLoader(train_root,test_root,batch_size = 100, val_split_factor = 0.2):
train_df = pd.read_csv(train_root, header=None)
test_df = pd.read_csv(test_root, header=None)
train_data = train_df.to_numpy()
test_data = test_df.to_numpy()
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(train_data[:, :-1]).float(),
torch.from_numpy(train_data[:, -1]).long(),)#
test_dataset = torch.utils.data.TensorDataset(torch.from_numpy(test_data[:, :-1]).float(),
torch.from_numpy(test_data[:, -1]).long())
train_len = train_data.shape[0]
val_len = int(train_len * val_split_factor)
train_len -= val_len
train_dataset, val_dataset = torch.utils.data.random_split(train_dataset, [train_len, val_len])
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
return train_loader, val_loader, test_loader
class conv_net(nn.Module):
def __init__(self, num_of_class):
super(conv_net, self).__init__()
self.model = nn.Sequential(
#nn.Conv1d(1, 16, kernel_size=5, stride=1, padding=2),
#nn.Conv1d(1, 16, kernel_size=1, stride=1),
nn.Conv1d(1, 16, kernel_size=1, stride=1),
nn.BatchNorm1d(16),
nn.ReLU(),
nn.MaxPool1d(2),
nn.Conv1d(16, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.MaxPool1d(2),
)
#self.relu = nn.ReLU()
self.linear = nn.Sequential(
#nn.Linear(5120,32),
nn.Linear(5120,32),
nn.LeakyReLU(inplace=True),
nn.Linear(32, num_of_class),
)
def forward(self,x):
#org = x
x = x.unsqueeze(1)
x = self.model(x)
#x = self.relu(x)
# print(x.shape)
x = x.view(x.size(0), -1)
#x [b, 2944]
# print(x.shape)
x = self.linear(x)
return x
batch_size=32
lr = 3e-3
epochs = 150
torch.manual_seed(1234)
#device = torch.device("cpu:0 cuda:0" if torch.cuda.is_available() else "cpu")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("using {} device.".format(device))
def evalute(model, loader):
model.eval()
correct = 0
total = len(loader.dataset)
val_bar = tqdm(loader, file=sys.stdout)
for x, y in val_bar:
x, y = x.to(device), y.to(device)
with torch.no_grad():
logits = model(x)
pred = logits.argmax(dim=1)
correct += torch.eq(pred, y).sum().float().item()
return correct / total
def main():
train_loader, val_loader, test_loader = my_DataLoader('./kaggle/train.csv',
'./kaggle/test.csv',
batch_size=batch_size,
val_split_factor=0.2)
model = conv_net(8).to(device)
optimizer = optim.Adam(model.parameters(), lr=lr)
criteon = nn.CrossEntropyLoss()
# Print model's state_dict
print(model)
best_acc, best_epoch = 0, 0
global_step = 0
for epoch in range(epochs):
train_bar = tqdm(train_loader, file=sys.stdout)
for step, (x, y) in enumerate(train_bar):
# x: [b, 187], y: [b]
x, y = x.to(device), y.to(device)
model.train()
logits = model(x)
loss = criteon(logits, y)
optimizer.zero_grad()
loss.backward()
# for param in model.parameters():
# print(param.grad)
optimizer.step()
train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1,
epochs,
loss)
global_step += 1
if epoch % 1 == 0: # You can change the validation frequency as you wish
val_acc = evalute(model, val_loader)
print('val_acc = ',val_acc)
if val_acc > best_acc:
best_epoch = epoch
best_acc = val_acc
# Export the model
name_pt = 'best3.pt'
torch.save(model.state_dict(), name_pt)
print('best acc:', best_acc, 'best epoch:', best_epoch)
model.load_state_dict(torch.load(name_pt))
print('loaded from ckpt!')
test_acc = evalute(model, test_loader)
print('test acc:', test_acc)
if __name__ == '__main__':
main()
Then I try to make predictions, adapting other people's code:
import torch
from torchvision.transforms import transforms
import pandas as pd
from PIL import Image
from run import conv_net
from pathlib import Path
name_pt = 'best3.pt'
model = conv_net(8)
checkpoint = torch.load(name_pt)
model.load_state_dict(checkpoint)
testdata = './kaggle/onedata.csv'
test_df = pd.read_csv(testdata, header=None)
test_data = test_df.to_numpy()
csv = torch.utils.data.TensorDataset(torch.from_numpy(test_data[:, :]).float())
output = model(csv)
prediction = int(torch.max(output.data, 1)[1].numpy())
print(prediction)
if (prediction == 0):
print ('other')
if (prediction == 1):
print ('100%PET')
if (prediction == 2):
print ('100% Cotton')
if (prediction == 3):
print ('100% Nylon')
if (prediction == 4):
print ('>70% PET')
if (prediction == 5):
print ('<70% PET')
if (prediction == 6):
print ('Spandex/PET Spandex<5%')
if (prediction == 7):
print ('Spandex/PET Spandex>5%')
Something went wrong
File "C:\Users\54-0461100-01\Desktop\for_spec_train\run.py", line 70, in forward
x = x.unsqueeze(1)
AttributeError: 'TensorDataset' object has no attribute 'unsqueeze'
Most of the existing questions are about images, not CSV files. Any help or suggestions would be appreciated.
By the way, this is my data format: the LJ column holds the labels, and the train and test sets use the same format.
[image: train/test data format]
The onedata file format:
[image: onedata format]
| When calling output = model(csv) you are passing the model a 'TensorDataset' object as the input instead of a tensor. You can access the tensors in this object by indexing it. https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset
Additionally, you can avoid the TensorDataset object all together by replacing
csv = torch.utils.data.TensorDataset(torch.from_numpy(test_data[:, :]).float())
with
csv = torch.from_numpy(test_data[:, :]).float()
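A minimal sketch of the corrected prediction path, assuming onedata.csv holds a single row of features with no label column:
model.eval()
with torch.no_grad():
    x = torch.from_numpy(test_data).float()        # shape [1, num_features]
    output = model(x)
    prediction = int(output.argmax(dim=1).item())
print(prediction)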
| https://stackoverflow.com/questions/73701545/ |