torch.Tensor.dist Tensor.dist(other, p=2) -> Tensor See "torch.dist()"
https://pytorch.org/docs/stable/generated/torch.Tensor.dist.html
pytorch docs
torch.cuda.device_count torch.cuda.device_count() Returns the number of GPUs available. Return type: int
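A minimal usage sketch (the returned count depends on the machine; a hypothetical two-GPU host is assumed here):
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
2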
https://pytorch.org/docs/stable/generated/torch.cuda.device_count.html
pytorch docs
SyncBatchNorm class torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None, device=None, dtype=None) Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups. \gamma and \beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of \gamma are sampled from \mathcal{U}(0, 1) and the elements of \beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default "momentum" of 0.1. If "track_running_stats" is set to "False", this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note: This "momentum" argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value. Because the Batch Normalization is done for each channel in the "C" dimension, computing statistics on "(N, +)" slices, it's common terminology to call this Volumetric Batch Normalization or Spatio-
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
temporal Batch Normalization. Currently "SyncBatchNorm" only supports "DistributedDataParallel" (DDP) with a single GPU per process. Use "torch.nn.SyncBatchNorm.convert_sync_batchnorm()" to convert "BatchNorm*D" layers to "SyncBatchNorm" before wrapping the network with DDP. Parameters: * num_features (int) -- C from an expected input of size (N, C, +) * **eps** (*float*) -- a value added to the denominator for numerical stability. Default: "1e-5" * **momentum** (*float*) -- the value used for the running_mean and running_var computation. Can be set to "None" for cumulative moving average (i.e. simple average). Default: 0.1 * **affine** (*bool*) -- a boolean value that when set to "True", this module has learnable affine parameters. Default: "True" * **track_running_stats** (*bool*) -- a boolean value that when set to "True", this module tracks the running mean and
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
variance, and when set to "False", this module does not track such statistics, and initializes statistics buffers "running_mean" and "running_var" as "None". When these buffers are "None", this module always uses batch statistics in both training and eval modes. Default: "True" * **process_group** (*Optional**[**Any**]*) -- synchronization of stats happens within each process group individually. Default behavior is synchronization across the whole world Shape: * Input: (N, C, +) * Output: (N, C, +) (same shape as input) Note: Synchronization of batchnorm statistics occurs only while training, i.e. synchronization is disabled when "model.eval()" is set or if "self.training" is otherwise "False". Examples: >>> # With Learnable Parameters >>> m = nn.SyncBatchNorm(100) >>> # creating process group (optional) >>> # ranks is a list of int identifying rank ids.
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
>>> ranks = list(range(8)) >>> r1, r2 = ranks[:4], ranks[4:] >>> # Note: every rank calls into new_group for every >>> # process group created, even if that rank is not >>> # part of the group. >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] >>> # Without Learnable Parameters >>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input) >>> # network is nn.BatchNorm layer >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group) >>> # only single gpu per process is currently supported >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel( >>> sync_bn_network, >>> device_ids=[args.local_rank],
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
output_device=args.local_rank) classmethod convert_sync_batchnorm(module, process_group=None) Helper function to convert all "BatchNorm*D" layers in the model to "torch.nn.SyncBatchNorm" layers. Parameters: * **module** (*nn.Module*) -- module containing one or more "BatchNorm*D" layers * **process_group** (*optional*) -- process group to scope synchronization, default is the whole world Returns: The original "module" with the converted "torch.nn.SyncBatchNorm" layers. If the original "module" is a "BatchNorm*D" layer, a new "torch.nn.SyncBatchNorm" layer object will be returned instead. Example: >>> # Network with nn.BatchNorm layer >>> module = torch.nn.Sequential( >>> torch.nn.Linear(20, 100), >>> torch.nn.BatchNorm1d(100), >>> ).cuda() >>> # creating process group (optional)
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
>>> # ranks is a list of int identifying rank ids. >>> ranks = list(range(8)) >>> r1, r2 = ranks[:4], ranks[4:] >>> # Note: every rank calls into new_group for every >>> # process group created, even if that rank is not >>> # part of the group. >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html
pytorch docs
LambdaLR class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr. Parameters: * optimizer (Optimizer) -- Wrapped optimizer. * **lr_lambda** (*function** or **list*) -- A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups. * **last_epoch** (*int*) -- The index of last epoch. Default: -1. * **verbose** (*bool*) -- If "True", prints a message to stdout for each update. Default: "False". -[ Example ]- >>> # Assuming optimizer has two groups. >>> lambda1 = lambda epoch: epoch // 30 >>> lambda2 = lambda epoch: 0.95 ** epoch >>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2]) >>> for epoch in range(100):
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html
pytorch docs
>>> train(...) >>> validate(...) >>> scheduler.step() get_last_lr() Return last computed learning rate by current scheduler. load_state_dict(state_dict) Loads the scheduler's state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. Parameters: **state_dict** (*dict*) -- scheduler state. Should be an object returned from a call to "state_dict()". print_lr(is_verbose, group, lr, epoch=None) Display the current learning rate. state_dict() Returns the state of the scheduler as a "dict". It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, please make sure to also
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html
pytorch docs
save or load the state of the optimizer.
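A minimal checkpointing sketch of the advice above; the optimizer, scheduler, and file name here are illustrative placeholders:
>>> # save both states together, as recommended above
>>> torch.save({'optimizer': optimizer.state_dict(),
...             'scheduler': scheduler.state_dict()}, 'checkpoint.pt')
>>> # later, restore both before resuming training
>>> ckpt = torch.load('checkpoint.pt')
>>> optimizer.load_state_dict(ckpt['optimizer'])
>>> scheduler.load_state_dict(ckpt['scheduler'])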
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html
pytorch docs
torch.eye torch.eye(n, m=None, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor Returns a 2-D tensor with ones on the diagonal and zeros elsewhere. Parameters: * n (int) -- the number of rows * **m** (*int**, **optional*) -- the number of columns with default being "n" Keyword Arguments: * out (Tensor, optional) -- the output tensor. * **dtype** ("torch.dtype", optional) -- the desired data type of returned tensor. Default: if "None", uses a global default (see "torch.set_default_tensor_type()"). * **layout** ("torch.layout", optional) -- the desired layout of returned Tensor. Default: "torch.strided". * **device** ("torch.device", optional) -- the desired device of returned tensor. Default: if "None", uses the current device for the default tensor type (see "torch.set_default_tensor_type()"). "device" will be the CPU
https://pytorch.org/docs/stable/generated/torch.eye.html
pytorch docs
for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False". Returns: A 2-D tensor with ones on the diagonal and zeros elsewhere Return type: Tensor Example: >>> torch.eye(3) tensor([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]])
https://pytorch.org/docs/stable/generated/torch.eye.html
pytorch docs
Adagrad class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10, foreach=None, *, maximize=False, differentiable=False) Implements Adagrad algorithm. \begin{aligned} &\rule{110mm}{0.4pt} \\ &\textbf{input} : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)}, \: \lambda \text{ (weight decay)}, \\ &\hspace{12mm} \tau \text{ (initial accumulator value)}, \: \eta\text{ (lr decay)}\\ &\textbf{initialize} : state\_sum_0 \leftarrow 0 \\[-1.ex] &\rule{110mm}{0.4pt} \\ &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\ &\hspace{5mm}g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1}) \\ &\hspace{5mm} \tilde{\gamma} \leftarrow \gamma / (1 +(t-1)
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
\eta) \\ &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\ &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\ &\hspace{5mm}state\_sum_t \leftarrow state\_sum_{t-1} + g^2_t \\ &\hspace{5mm}\theta_t \leftarrow \theta_{t-1} - \tilde{\gamma} \frac{g_t}{\sqrt{state\_sum_t}+\epsilon} \\ &\rule{110mm}{0.4pt} \\[-1.ex] &\bf{return} \: \theta_t \\[-1.ex] &\rule{110mm}{0.4pt} \\[-1.ex] \end{aligned} For further details regarding the algorithm we refer to Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Parameters: * params (iterable) -- iterable of parameters to optimize or dicts defining parameter groups * **lr** (*float**, **optional*) -- learning rate (default: 1e-2) * **lr_decay** (*float**, **optional*) -- learning rate decay
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
(default: 0) * **weight_decay** (*float**, **optional*) -- weight decay (L2 penalty) (default: 0) * **eps** (*float**, **optional*) -- term added to the denominator to improve numerical stability (default: 1e-10) * **foreach** (*bool**, **optional*) -- whether foreach implementation of optimizer is used. If unspecified by the user (so foreach is None), we will try to use foreach over the for-loop implementation on CUDA, since it is usually significantly more performant. (default: None) * **maximize** (*bool**, **optional*) -- maximize the params based on the objective, instead of minimizing (default: False) * **differentiable** (*bool**, **optional*) -- whether autograd should occur through the optimizer step in training. Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
False if you don't intend to run autograd through this instance (default: False) add_param_group(param_group) Add a param group to the "Optimizer" s *param_groups*. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the "Optimizer" as training progresses. Parameters: **param_group** (*dict*) -- Specifies what Tensors should be optimized along with group specific optimization options. load_state_dict(state_dict) Loads the optimizer state. Parameters: **state_dict** (*dict*) -- optimizer state. Should be an object returned from a call to "state_dict()". register_step_post_hook(hook) Register an optimizer step post hook which will be called after optimizer step. It should have the following signature: hook(optimizer, args, kwargs) -> None The "optimizer" argument is the optimizer instance being used.
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
Parameters: hook (Callable) -- The user defined hook to be registered. Returns: a handle that can be used to remove the added hook by calling "handle.remove()" Return type: "torch.utils.hooks.RemovableHandle" register_step_pre_hook(hook) Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature: hook(optimizer, args, kwargs) -> None or modified args and kwargs The "optimizer" argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs. Parameters: **hook** (*Callable*) -- The user defined hook to be registered. Returns: a handle that can be used to remove the added hook by calling "handle.remove()" Return type:
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
"handle.remove()" Return type: "torch.utils.hooks.RemoveableHandle" state_dict() Returns the state of the optimizer as a "dict". It contains two entries: * state - a dict holding current optimization state. Its content differs between optimizer classes. * param_groups - a list containing all parameter groups where each parameter group is a dict zero_grad(set_to_none=False) Sets the gradients of all optimized "torch.Tensor" s to zero. Parameters: **set_to_none** (*bool*) -- instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
"zero_grad(set_to_none=True)" followed by a backward pass, ".grad"s are guaranteed to be None for params that did not receive a gradient. 3. "torch.optim" optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
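A minimal training-step sketch, assuming a model, a loss_fn, and a (data, target) batch already exist (all hypothetical names):
>>> optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01, lr_decay=1e-4, weight_decay=1e-5)
>>> optimizer.zero_grad(set_to_none=True)
>>> loss = loss_fn(model(data), target)
>>> loss.backward()
>>> optimizer.step()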
https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html
pytorch docs
torch.nn.functional.tanh torch.nn.functional.tanh(input) -> Tensor Applies element-wise, \text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)} See "Tanh" for more details.
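A small usage sketch (values shown as in the usual rounded tensor printout):
>>> x = torch.tensor([-1.0, 0.0, 1.0])
>>> torch.nn.functional.tanh(x)
tensor([-0.7616,  0.0000,  0.7616])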
https://pytorch.org/docs/stable/generated/torch.nn.functional.tanh.html
pytorch docs
torch.Tensor.cholesky_inverse Tensor.cholesky_inverse(upper=False) -> Tensor See "torch.cholesky_inverse()"
https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_inverse.html
pytorch docs
torch.Tensor.new_empty Tensor.new_empty(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor Returns a Tensor of size "size" filled with uninitialized data. By default, the returned Tensor has the same "torch.dtype" and "torch.device" as this tensor. Parameters: size (int...) -- a list, tuple, or "torch.Size" of integers defining the shape of the output tensor. Keyword Arguments: * dtype ("torch.dtype", optional) -- the desired type of returned tensor. Default: if None, same "torch.dtype" as this tensor. * **device** ("torch.device", optional) -- the desired device of returned tensor. Default: if None, same "torch.device" as this tensor. * **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False". * **layout** ("torch.layout", optional) -- the desired layout of
https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html
pytorch docs
returned Tensor. Default: "torch.strided". * **pin_memory** (*bool**, **optional*) -- If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: "False". Example: >>> tensor = torch.ones(()) >>> tensor.new_empty((2, 3)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html
pytorch docs
default_debug_observer torch.quantization.observer.default_debug_observer alias of "RecordingObserver"
https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_debug_observer.html
pytorch docs
default_per_channel_weight_observer torch.quantization.observer.default_per_channel_weight_observer alias of functools.partial(PerChannelMinMaxObserver, dtype=torch.qint8, qscheme=torch.per_channel_symmetric)
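A hedged sketch of how this observer factory is commonly used, either directly or inside a QConfig (the workflow shown is an assumption, not part of this page):
>>> import torch
>>> from torch.quantization import QConfig
>>> from torch.quantization.observer import default_observer, default_per_channel_weight_observer
>>> qconfig = QConfig(activation=default_observer, weight=default_per_channel_weight_observer)
>>> obs = default_per_channel_weight_observer()    # instantiate the partial
>>> _ = obs(torch.randn(4, 8))                     # record per-channel min/max over dim 0
>>> scales, zero_points = obs.calculate_qparams()  # one scale/zero_point per output channel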
https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_per_channel_weight_observer.html
pytorch docs
torch.Tensor.transpose Tensor.transpose(dim0, dim1) -> Tensor See "torch.transpose()"
https://pytorch.org/docs/stable/generated/torch.Tensor.transpose.html
pytorch docs
InstanceNorm2d class torch.ao.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None) This is the quantized version of "InstanceNorm2d". Additional args: * scale - quantization scale of the output, type: double. * **zero_point** - quantization zero point of the output, type: long.
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm2d.html
pytorch docs
torch.Tensor.lerp Tensor.lerp(end, weight) -> Tensor See "torch.lerp()"
https://pytorch.org/docs/stable/generated/torch.Tensor.lerp.html
pytorch docs
torch.Tensor.div_ Tensor.div_(value, *, rounding_mode=None) -> Tensor In-place version of "div()"
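A small sketch of the in-place call, including the optional "rounding_mode" keyword:
>>> x = torch.tensor([7., -7.])
>>> x.div_(2)                          # true division, in place
tensor([ 3.5000, -3.5000])
>>> torch.tensor([7., -7.]).div_(2, rounding_mode='floor')
tensor([ 3., -4.])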
https://pytorch.org/docs/stable/generated/torch.Tensor.div_.html
pytorch docs
torch.diag torch.diag(input, diagonal=0, *, out=None) -> Tensor If "input" is a vector (1-D tensor), then returns a 2-D square tensor with the elements of "input" as the diagonal. If "input" is a matrix (2-D tensor), then returns a 1-D tensor with the diagonal elements of "input". The argument "diagonal" controls which diagonal to consider: If "diagonal" = 0, it is the main diagonal. If "diagonal" > 0, it is above the main diagonal. If "diagonal" < 0, it is below the main diagonal. Parameters: * input (Tensor) -- the input tensor. * **diagonal** (*int**, **optional*) -- the diagonal to consider Keyword Arguments: out (Tensor, optional) -- the output tensor. See also: "torch.diagonal()" always returns the diagonal of its input. "torch.diagflat()" always constructs a tensor with diagonal elements specified by the input. Examples:
https://pytorch.org/docs/stable/generated/torch.diag.html
pytorch docs
Get the square matrix where the input vector is the diagonal: >>> a = torch.randn(3) >>> a tensor([ 0.5950, -0.0872, 2.3298]) >>> torch.diag(a) tensor([[ 0.5950, 0.0000, 0.0000], [ 0.0000, -0.0872, 0.0000], [ 0.0000, 0.0000, 2.3298]]) >>> torch.diag(a, 1) tensor([[ 0.0000, 0.5950, 0.0000, 0.0000], [ 0.0000, 0.0000, -0.0872, 0.0000], [ 0.0000, 0.0000, 0.0000, 2.3298], [ 0.0000, 0.0000, 0.0000, 0.0000]]) Get the k-th diagonal of a given matrix: >>> a = torch.randn(3, 3) >>> a tensor([[-0.4264, 0.0255, -0.1064], [ 0.8795, -0.2429, 0.1374], [ 0.1029, -0.6482, -1.6300]]) >>> torch.diag(a, 0) tensor([-0.4264, -0.2429, -1.6300]) >>> torch.diag(a, 1) tensor([ 0.0255, 0.1374])
https://pytorch.org/docs/stable/generated/torch.diag.html
pytorch docs
torch.nn.functional.multilabel_soft_margin_loss torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor See "MultiLabelSoftMarginLoss" for details. Return type: Tensor
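A minimal usage sketch, assuming a batch of 3 samples with 4 labels and multi-hot float targets:
>>> input = torch.randn(3, 4, requires_grad=True)
>>> target = torch.tensor([[1., 0., 1., 0.],
...                        [0., 1., 0., 0.],
...                        [1., 1., 1., 0.]])
>>> loss = torch.nn.functional.multilabel_soft_margin_loss(input, target)
>>> loss.backward()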
https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_soft_margin_loss.html
pytorch docs
ConvBnReLU1d class torch.ao.nn.intrinsic.ConvBnReLU1d(conv, bn, relu) This is a sequential container which calls the Conv 1d, Batch Norm 1d, and ReLU modules. During quantization this will be replaced with the corresponding fused module.
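A hedged construction sketch; in practice this container is usually produced by the fusion utilities rather than built by hand, and the layer sizes below are arbitrary:
>>> import torch
>>> import torch.nn as nn
>>> from torch.ao.nn.intrinsic import ConvBnReLU1d
>>> conv, bn, relu = nn.Conv1d(3, 16, 3), nn.BatchNorm1d(16), nn.ReLU()
>>> fused = ConvBnReLU1d(conv, bn, relu)   # behaves like relu(bn(conv(x)))
>>> out = fused(torch.randn(2, 3, 10))
>>> out.shape
torch.Size([2, 16, 8])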
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU1d.html
pytorch docs
torch.Tensor.isclose Tensor.isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor See "torch.isclose()"
https://pytorch.org/docs/stable/generated/torch.Tensor.isclose.html
pytorch docs
torch.fft.hfft2 torch.fft.hfft2(input, s=None, dim=(-2, -1), norm=None, *, out=None) -> Tensor Computes the 2-dimensional discrete Fourier transform of a Hermitian symmetric "input" signal. Equivalent to "hfftn()" but only transforms the last two dimensions by default. "input" is interpreted as a one-sided Hermitian signal in the time domain. By the Hermitian property, the Fourier transform will be real-valued. Note: Supports torch.half and torch.chalf on CUDA with GPU Architecture SM53 or greater. However it only supports powers of 2 signal length in every transformed dimension. With default arguments, the size of last dimension should be (2^n + 1) as argument *s* defaults to even output size = 2 * (last_dim_size - 1) Parameters: * input (Tensor) -- the input tensor * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the transformed dimensions. If given, each dimension "dim[i]" will
https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html
pytorch docs
either be zero-padded or trimmed to the length "s[i]" before computing the Hermitian FFT. If a length "-1" is specified, no padding is done in that dimension. Defaults to even output in the last dimension: "s[-1] = 2*(input.size(dim[-1]) - 1)". * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be transformed. The last dimension must be the half-Hermitian compressed dimension. Default: last two dimensions. * **norm** (*str**, **optional*) -- Normalization mode. For the forward transform ("hfft2()"), these correspond to: * ""forward"" - normalize by "1/n" * ""backward"" - no normalization * ""ortho"" - normalize by "1/sqrt(n)" (making the Hermitian FFT orthonormal) Where "n = prod(s)" is the logical FFT size. Calling the backward transform ("ihfft2()") with the same normalization mode will apply an overall normalization of "1/n" between the
https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html
pytorch docs
two transforms. This is required to make "ihfft2()" the exact inverse. Default is ""backward"" (no normalization). Keyword Arguments: out (Tensor, optional) -- the output tensor. -[ Example ]- Starting from a real frequency-space signal, we can generate a Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9) >>> t = torch.fft.ihfft2(T) Without specifying the output length to "hfftn()", the output will not round-trip properly because the input is odd-length in the last dimension: >>> torch.fft.hfft2(t).size() torch.Size([10, 10]) So, it is recommended to always pass the signal shape "s". >>> roundtrip = torch.fft.hfft2(t, T.size()) >>> roundtrip.size() torch.Size([10, 9]) >>> torch.allclose(roundtrip, T) True
https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html
pytorch docs
torch.Tensor.bitwise_and Tensor.bitwise_and() -> Tensor See "torch.bitwise_and()"
https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and.html
pytorch docs
torch.topk torch.topk(input, k, dim=None, largest=True, sorted=True, *, out=None) Returns the "k" largest elements of the given "input" tensor along a given dimension. If "dim" is not given, the last dimension of the input is chosen. If "largest" is "False" then the k smallest elements are returned. A namedtuple of (values, indices) is returned with the values and indices of the largest k elements of each row of the input tensor in the given dimension dim. If the boolean option "sorted" is "True", the returned k elements are themselves sorted. Parameters: * input (Tensor) -- the input tensor. * **k** (*int*) -- the k in "top-k" * **dim** (*int**, **optional*) -- the dimension to sort along * **largest** (*bool**, **optional*) -- controls whether to return largest or smallest elements * **sorted** (*bool**, **optional*) -- controls whether to
https://pytorch.org/docs/stable/generated/torch.topk.html
pytorch docs
return the elements in sorted order Keyword Arguments: out (tuple, optional) -- the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers Example: >>> x = torch.arange(1., 6.) >>> x tensor([ 1., 2., 3., 4., 5.]) >>> torch.topk(x, 3) torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))
https://pytorch.org/docs/stable/generated/torch.topk.html
pytorch docs
SmoothL1Loss class torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0) Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than "torch.nn.MSELoss" and in some cases prevents exploding gradients (e.g. see the paper Fast R-CNN by Ross Girshick). For a batch of size N, the unreduced loss can be described as: \ell(x, y) = L = \{l_1, ..., l_N\}^T with l_n = \begin{cases} 0.5 (x_n - y_n)^2 / \beta, & \text{if } |x_n - y_n| < \beta \\ |x_n - y_n| - 0.5 \beta, & \text{otherwise } \end{cases} If reduction is not none, then: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} Note: Smooth L1 loss can be seen as exactly "L1Loss", but with the |x -
https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html
pytorch docs
y| < beta portion replaced with a quadratic function such that its slope is 1 at |x - y| = beta. The quadratic segment smooths the L1 loss near |x - y| = 0. Note: Smooth L1 loss is closely related to "HuberLoss", being equivalent to huber(x, y) / beta (note that Smooth L1's beta hyper-parameter is also known as delta for Huber). This leads to the following differences: * As beta -> 0, Smooth L1 loss converges to "L1Loss", while "HuberLoss" converges to a constant 0 loss. When beta is 0, Smooth L1 loss is equivalent to L1 loss. * As beta -> +\infty, Smooth L1 loss converges to a constant 0 loss, while "HuberLoss" converges to "MSELoss". * For Smooth L1 loss, as beta varies, the L1 segment of the loss has a constant slope of 1. For "HuberLoss", the slope of the L1 segment is beta. Parameters: * size_average (bool, optional) -- Deprecated (see
https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html
pytorch docs
"reduction"). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field "size_average" is set to "False", the losses are instead summed for each minibatch. Ignored when "reduce" is "False". Default: "True" * **reduce** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged or summed over observations for each minibatch depending on "size_average". When "reduce" is "False", returns a loss per batch element instead and ignores "size_average". Default: "True" * **reduction** (*str**, **optional*) -- Specifies the reduction to apply to the output: "'none'" | "'mean'" | "'sum'". "'none'": no reduction will be applied, "'mean'": the sum of the output will be divided by the number of elements in the output, "'sum'": the output will be summed. Note:
https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html
pytorch docs
"size_average" and "reduce" are in the process of being deprecated, and in the meantime, specifying either of those two args will override "reduction". Default: "'mean'" * **beta** (*float**, **optional*) -- Specifies the threshold at which to change between L1 and L2 loss. The value must be non- negative. Default: 1.0 Shape: * Input: (*), where * means any number of dimensions. * Target: (*), same shape as the input. * Output: scalar. If "reduction" is "'none'", then (*), same shape as the input.
https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html
pytorch docs
AdaptiveAvgPool2d class torch.nn.AdaptiveAvgPool2d(output_size) Applies a 2D adaptive average pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes. Parameters: output_size (Union[int, None, Tuple[Optional[int], Optional[int]]]) -- the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an "int", or "None" which means the size will be the same as that of the input. Shape: * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}). * Output: (N, C, S_{0}, S_{1}) or (C, S_{0}, S_{1}), where S=\text{output\_size}. -[ Examples ]- >>> # target output size of 5x7 >>> m = nn.AdaptiveAvgPool2d((5, 7)) >>> input = torch.randn(1, 64, 8, 9) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html
pytorch docs
>>> # target output size of 7x7 (square) >>> m = nn.AdaptiveAvgPool2d(7) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input) >>> # target output size of 10x7 >>> m = nn.AdaptiveAvgPool2d((None, 7)) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html
pytorch docs
torch.quantized_max_pool1d torch.quantized_max_pool1d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor Applies a 1D max pooling over an input quantized tensor composed of several input planes. Parameters: * input (Tensor) -- quantized tensor * **kernel_size** (*list of python:int*) -- the size of the sliding window * **stride** ("list of int", optional) -- the stride of the sliding window * **padding** ("list of int", optional) -- padding to be added on both sides, must be >= 0 and <= kernel_size / 2 * **dilation** ("list of int", optional) -- The stride between elements within a sliding window, must be > 0. Default 1 * **ceil_mode** (*bool**, **optional*) -- If True, will use ceil instead of floor to compute the output shape. Defaults to False. Returns: A quantized tensor with max_pool1d applied. Return type: Tensor
https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html
pytorch docs
Example: >>> qx = torch.quantize_per_tensor(torch.rand(2, 2), 1.5, 3, torch.quint8) >>> torch.quantized_max_pool1d(qx, [2]) tensor([[0.0000], [1.5000]], size=(2, 1), dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)
https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html
pytorch docs
torch.Tensor.rsqrt_ Tensor.rsqrt_() -> Tensor In-place version of "rsqrt()"
https://pytorch.org/docs/stable/generated/torch.Tensor.rsqrt_.html
pytorch docs
torch.sort torch.sort(input, dim=-1, descending=False, stable=False, *, out=None) Sorts the elements of the "input" tensor along a given dimension in ascending order by value. If "dim" is not given, the last dimension of the input is chosen. If "descending" is "True" then the elements are sorted in descending order by value. If "stable" is "True" then the sorting routine becomes stable, preserving the order of equivalent elements. A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor. Parameters: * input (Tensor) -- the input tensor. * **dim** (*int**, **optional*) -- the dimension to sort along * **descending** (*bool**, **optional*) -- controls the sorting order (ascending or descending) * **stable** (*bool**, **optional*) -- makes the sorting routine
https://pytorch.org/docs/stable/generated/torch.sort.html
pytorch docs
stable, which guarantees that the order of equivalent elements is preserved. Keyword Arguments: out (tuple, optional) -- the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers Example: >>> x = torch.randn(3, 4) >>> sorted, indices = torch.sort(x) >>> sorted tensor([[-0.2162, 0.0608, 0.6719, 2.3332], [-0.5793, 0.0061, 0.6058, 0.9497], [-0.5071, 0.3343, 0.9553, 1.0960]]) >>> indices tensor([[ 1, 0, 2, 3], [ 3, 1, 0, 2], [ 0, 3, 1, 2]]) >>> sorted, indices = torch.sort(x, 0) >>> sorted tensor([[-0.5071, -0.2162, 0.6719, -0.5793], [ 0.0608, 0.0061, 0.9497, 0.3343], [ 0.6058, 0.9553, 1.0960, 2.3332]]) >>> indices tensor([[ 2, 0, 0, 1], [ 0, 1, 1, 2], [ 1, 2, 2, 0]]) >>> x = torch.tensor([0, 1] * 9)
https://pytorch.org/docs/stable/generated/torch.sort.html
pytorch docs
>>> x.sort() torch.return_types.sort( values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]), indices=tensor([ 2, 16, 4, 6, 14, 8, 0, 10, 12, 9, 17, 15, 13, 11, 7, 5, 3, 1])) >>> x.sort(stable=True) torch.return_types.sort( values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]), indices=tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 1, 3, 5, 7, 9, 11, 13, 15, 17]))
https://pytorch.org/docs/stable/generated/torch.sort.html
pytorch docs
torch.sparse.addmm torch.sparse.addmm(mat, mat1, mat2, *, beta=1., alpha=1.) -> Tensor This function does the exact same thing as "torch.addmm()" in the forward, except that it supports backward for sparse COO matrix "mat1". When "mat1" is a COO tensor it must have sparse_dim = 2. When inputs are COO tensors, this function also supports backward for both inputs. Supports both CSR and COO storage formats. Note: This function doesn't support computing derivatives with respect to CSR matrices. Parameters: * mat (Tensor) -- a dense matrix to be added * **mat1** (*Tensor*) -- a sparse matrix to be multiplied * **mat2** (*Tensor*) -- a dense matrix to be multiplied * **beta** (*Number**, **optional*) -- multiplier for "mat" (\beta) * **alpha** (*Number**, **optional*) -- multiplier for mat1 @ mat2 (\alpha)
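A small sketch of the dense-plus-sparse-times-dense pattern described above (shapes arbitrary):
>>> mat = torch.randn(2, 4)
>>> mat1 = torch.randn(2, 3).to_sparse().requires_grad_(True)
>>> mat2 = torch.randn(3, 4, requires_grad=True)
>>> out = torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0)
>>> out.sum().backward()   # gradients flow to both mat1 and mat2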
https://pytorch.org/docs/stable/generated/torch.sparse.addmm.html
pytorch docs
torch.is_nonzero torch.is_nonzero(input) Returns True if the "input" is a single element tensor which is not equal to zero after type conversions. i.e. not equal to "torch.tensor([0.])" or "torch.tensor([0])" or "torch.tensor([False])". Throws a "RuntimeError" if "torch.numel() != 1" (even in case of sparse tensors). Parameters: input (Tensor) -- the input tensor. Examples: >>> torch.is_nonzero(torch.tensor([0.])) False >>> torch.is_nonzero(torch.tensor([1.5])) True >>> torch.is_nonzero(torch.tensor([False])) False >>> torch.is_nonzero(torch.tensor([3])) True >>> torch.is_nonzero(torch.tensor([1, 3, 5])) Traceback (most recent call last): ... RuntimeError: bool value of Tensor with more than one value is ambiguous >>> torch.is_nonzero(torch.tensor([])) Traceback (most recent call last): ... RuntimeError: bool value of Tensor with no values is ambiguous
https://pytorch.org/docs/stable/generated/torch.is_nonzero.html
pytorch docs
torch.signbit torch.signbit(input, *, out=None) -> Tensor Tests if each element of "input" has its sign bit set or not. Parameters: input (Tensor) -- the input tensor. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.tensor([0.7, -1.2, 0., 2.3]) >>> torch.signbit(a) tensor([ False, True, False, False]) >>> a = torch.tensor([-0.0, 0.0]) >>> torch.signbit(a) tensor([ True, False]) Note: signbit handles signed zeros, so negative zero (-0) returns True.
https://pytorch.org/docs/stable/generated/torch.signbit.html
pytorch docs
torch.kaiser_window torch.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor Computes the Kaiser window with window length "window_length" and shape parameter "beta". Let I_0 be the zeroth order modified Bessel function of the first kind (see "torch.i0()") and "N = L - 1" if "periodic" is False and "L" if "periodic" is True, where "L" is the "window_length". This function computes: out_i = I_0 \left( \beta \sqrt{1 - \left( {\frac{i - N/2}{N/2}} \right) ^2 } \right) / I_0( \beta ) Calling "torch.kaiser_window(L, B, periodic=True)" is equivalent to calling "torch.kaiser_window(L + 1, B, periodic=False)[:-1]". The "periodic" argument is intended as a helpful shorthand to produce a periodic window as input to functions like "torch.stft()". Note: If "window_length" is one, then the returned window is a single element tensor containing a one.
https://pytorch.org/docs/stable/generated/torch.kaiser_window.html
pytorch docs
Parameters: * window_length (int) -- length of the window. * **periodic** (*bool**, **optional*) -- If True, returns a periodic window suitable for use in spectral analysis. If False, returns a symmetric window suitable for use in filter design. * **beta** (*float**, **optional*) -- shape parameter for the window. Keyword Arguments: * dtype ("torch.dtype", optional) -- the desired data type of returned tensor. Default: if "None", uses a global default (see "torch.set_default_tensor_type()"). * **layout** ("torch.layout", optional) -- the desired layout of returned window tensor. Only "torch.strided" (dense layout) is supported. * **device** ("torch.device", optional) -- the desired device of returned tensor. Default: if "None", uses the current device for the default tensor type (see
https://pytorch.org/docs/stable/generated/torch.kaiser_window.html
pytorch docs
"torch.set_default_tensor_type()"). "device" will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False".
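A small usage sketch, e.g. producing a window for "torch.stft()" (the random signal here is purely illustrative):
>>> window = torch.kaiser_window(400, periodic=True, beta=12.0)
>>> window.shape
torch.Size([400])
>>> spec = torch.stft(torch.randn(16000), n_fft=400, window=window, return_complex=True)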
https://pytorch.org/docs/stable/generated/torch.kaiser_window.html
pytorch docs
torch.prod torch.prod(input, *, dtype=None) -> Tensor Returns the product of all elements in the "input" tensor. Parameters: input (Tensor) -- the input tensor. Keyword Arguments: dtype ("torch.dtype", optional) -- the desired data type of returned tensor. If specified, the input tensor is cast to "dtype" before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(1, 3) >>> a tensor([[-0.8020, 0.5428, -1.5854]]) >>> torch.prod(a) tensor(0.6902) torch.prod(input, dim, keepdim=False, *, dtype=None) -> Tensor Returns the product of each row of the "input" tensor in the given dimension "dim". If "keepdim" is "True", the output tensor is of the same size as "input" except in the dimension "dim" where it is of size 1. Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
https://pytorch.org/docs/stable/generated/torch.prod.html
pytorch docs
the output tensor having 1 fewer dimension than "input". Parameters: * input (Tensor) -- the input tensor. * **dim** (*int*) -- the dimension to reduce. * **keepdim** (*bool*) -- whether the output tensor has "dim" retained or not. Keyword Arguments: dtype ("torch.dtype", optional) -- the desired data type of returned tensor. If specified, the input tensor is cast to "dtype" before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(4, 2) >>> a tensor([[ 0.5261, -0.3837], [ 1.1857, -0.2498], [-1.1646, 0.0705], [ 1.1131, -1.0629]]) >>> torch.prod(a, 1) tensor([-0.2018, -0.2962, -0.0821, -1.1831])
https://pytorch.org/docs/stable/generated/torch.prod.html
pytorch docs
torch.Tensor.stft Tensor.stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None) See "torch.stft()" Warning: This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result.
https://pytorch.org/docs/stable/generated/torch.Tensor.stft.html
pytorch docs
torch.fft.hfftn torch.fft.hfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor Computes the n-dimensional discrete Fourier transform of a Hermitian symmetric "input" signal. "input" is interpreted as a one-sided Hermitian signal in the time domain. By the Hermitian property, the Fourier transform will be real-valued. Note: "hfftn()"/"ihfftn()" are analogous to "rfftn()"/"irfftn()". The real FFT expects a real signal in the time-domain and gives Hermitian symmetry in the frequency-domain. The Hermitian FFT is the opposite; Hermitian symmetric in the time-domain and real- valued in the frequency-domain. For this reason, special care needs to be taken with the shape argument "s", in the same way as with "irfftn()". Note: Some input frequencies must be real-valued to satisfy the Hermitian property. In these cases the imaginary component will be ignored. For example, any imaginary component in the zero-
https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html
pytorch docs
frequency term cannot be represented in a real output and so will always be ignored. Note: The correct interpretation of the Hermitian input depends on the length of the original data, as given by "s". This is because each input shape could correspond to either an odd or even length signal. By default, the signal is assumed to be even length and odd signals will not round-trip properly. It is recommended to always pass the signal shape "s". Note: Supports torch.half and torch.chalf on CUDA with GPU Architecture SM53 or greater. However it only supports powers of 2 signal length in every transformed dimensions. With default arguments, the size of last dimension should be (2^n + 1) as argument *s* defaults to even output size = 2 * (last_dim_size - 1) Parameters: * input (Tensor) -- the input tensor * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html
pytorch docs
transformed dimensions. If given, each dimension "dim[i]" will either be zero-padded or trimmed to the length "s[i]" before computing the real FFT. If a length "-1" is specified, no padding is done in that dimension. Defaults to even output in the last dimension: "s[-1] = 2*(input.size(dim[-1]) - 1)". * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be transformed. The last dimension must be the half-Hermitian compressed dimension. Default: all dimensions, or the last "len(s)" dimensions if "s" is given. * **norm** (*str**, **optional*) -- Normalization mode. For the forward transform ("hfftn()"), these correspond to: * ""forward"" - normalize by "1/n" * ""backward"" - no normalization * ""ortho"" - normalize by "1/sqrt(n)" (making the Hermitian FFT orthonormal) Where "n = prod(s)" is the logical FFT size. Calling the
https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html
pytorch docs
backward transform ("ihfftn()") with the same normalization mode will apply an overall normalization of "1/n" between the two transforms. This is required to make "ihfftn()" the exact inverse. Default is ""backward"" (no normalization). Keyword Arguments: out (Tensor, optional) -- the output tensor. -[ Example ]- Starting from a real frequency-space signal, we can generate a Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9) >>> t = torch.fft.ihfftn(T) Without specifying the output length to "hfftn()", the output will not round-trip properly because the input is odd-length in the last dimension: >>> torch.fft.hfftn(t).size() torch.Size([10, 10]) So, it is recommended to always pass the signal shape "s". >>> roundtrip = torch.fft.hfftn(t, T.size()) >>> roundtrip.size() torch.Size([10, 9]) >>> torch.allclose(roundtrip, T) True
https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html
pytorch docs
MultiMarginLoss class torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1)-1): For each mini-batch sample, the loss in terms of the 1D input x and scalar output y is: \text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)} where i \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\} and i \neq y. Optionally, you can give non-equal weighting on the classes by passing a 1D "weight" tensor into the constructor. The loss function then becomes: \text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)} Parameters:
https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html
pytorch docs
Parameters: * p (int, optional) -- Has a default value of 1. 1 and 2 are the only supported values. * **margin** (*float**, **optional*) -- Has a default value of 1. * **weight** (*Tensor**, **optional*) -- a manual rescaling weight given to each class. If given, it has to be a Tensor of size *C*. Otherwise, it is treated as if having all ones. * **size_average** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field "size_average" is set to "False", the losses are instead summed for each minibatch. Ignored when "reduce" is "False". Default: "True" * **reduce** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged or summed over observations for each minibatch depending on
https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html
pytorch docs
"size_average". When "reduce" is "False", returns a loss per batch element instead and ignores "size_average". Default: "True" * **reduction** (*str**, **optional*) -- Specifies the reduction to apply to the output: "'none'" | "'mean'" | "'sum'". "'none'": no reduction will be applied, "'mean'": the sum of the output will be divided by the number of elements in the output, "'sum'": the output will be summed. Note: "size_average" and "reduce" are in the process of being deprecated, and in the meantime, specifying either of those two args will override "reduction". Default: "'mean'" Shape: * Input: (N, C) or (C), where N is the batch size and C is the number of classes. * Target: (N) or (), where each value is 0 \leq \text{targets}[i] \leq C-1. * Output: scalar. If "reduction" is "'none'", then same shape as the target. Examples: >>> loss = nn.MultiMarginLoss()
https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html
pytorch docs
>>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]]) >>> y = torch.tensor([3]) >>> # 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4))) >>> loss(x, y) tensor(0.32...)
https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html
pytorch docs
torch.slice_scatter torch.slice_scatter(input, src, dim=0, start=None, end=None, step=1) -> Tensor Embeds the values of the "src" tensor into "input" at the given dimension. This function returns a tensor with fresh storage; it does not create a view. Parameters: * input (Tensor) -- the input tensor. * **src** (*Tensor*) -- The tensor to embed into "input" * **dim** (*int*) -- the dimension to insert the slice into * **start** (*Optional**[**int**]*) -- the start index of where to insert the slice * **end** (*Optional**[**int**]*) -- the end index of where to insert the slice * **step** (*int*) -- how many elements to skip between the positions where the slice is inserted Example: >>> a = torch.zeros(8, 8) >>> b = torch.ones(8) >>> a.slice_scatter(b, start=6) tensor([[0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0.],
https://pytorch.org/docs/stable/generated/torch.slice_scatter.html
pytorch docs
[0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1.]]) >>> b = torch.ones(2) >>> a.slice_scatter(b, dim=1, start=2, end=6, step=2) tensor([[0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.], [0., 0., 1., 0., 1., 0., 0., 0.]])
https://pytorch.org/docs/stable/generated/torch.slice_scatter.html
pytorch docs
torch.det torch.det(input) -> Tensor Alias for "torch.linalg.det()"
https://pytorch.org/docs/stable/generated/torch.det.html
pytorch docs
torch.linalg.matrix_power torch.linalg.matrix_power(A, n, *, out=None) -> Tensor Computes the n-th power of a square matrix for an integer n. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if "A" is a batch of matrices then the output has the same batch dimensions. If "n" = 0, it returns the identity matrix (or batch) of the same shape as "A". If "n" is negative, it returns the inverse of each matrix (if invertible) raised to the power of abs(n). Note: Consider using "torch.linalg.solve()" if possible for multiplying a matrix on the left by a negative power as, if "n" > 0: matrix_power(torch.linalg.solve(A, B), n) == matrix_power(A, -n) @ B It is always preferred to use "solve()" when possible, as it is faster and more numerically stable than computing A^{-n} explicitly. See also: "torch.linalg.solve()" computes "A".inverse() @ "B" with a
https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html
pytorch docs
numerically stable algorithm. Parameters: * A (Tensor) -- tensor of shape (*, m, m) where * is zero or more batch dimensions. * **n** (*int*) -- the exponent. Keyword Arguments: out (Tensor, optional) -- output tensor. Ignored if None. Default: None. Raises: RuntimeError -- if "n" < 0 and the matrix "A" or any matrix in the batch of matrices "A" is not invertible. Examples: >>> A = torch.randn(3, 3) >>> torch.linalg.matrix_power(A, 0) tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> torch.linalg.matrix_power(A, 3) tensor([[ 1.0756, 0.4980, 0.0100], [-1.6617, 1.4994, -1.9980], [-0.4509, 0.2731, 0.8001]]) >>> torch.linalg.matrix_power(A.expand(2, -1, -1), -2) tensor([[[ 0.2640, 0.4571, -0.5511], [-1.0163, 0.3491, -1.5292], [-0.4899, 0.0822, 0.2773]],
https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html
pytorch docs
[[ 0.2640, 0.4571, -0.5511], [-1.0163, 0.3491, -1.5292], [-0.4899, 0.0822, 0.2773]]])
https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html
pytorch docs
Adadelta class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, foreach=None, *, maximize=False, differentiable=False) Implements Adadelta algorithm. \begin{aligned} &\rule{110mm}{0.4pt} \\ &\textbf{input} : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)}, \: \rho \text{ (decay)}, \: \lambda \text{ (weight decay)} \\ &\textbf{initialize} : v_0 \leftarrow 0 \: \text{ (square avg)}, \: u_0 \leftarrow 0 \: \text{ (accumulate variables)} \\[-1.ex] &\rule{110mm}{0.4pt} \\ &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\ &\hspace{5mm}g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1}) \\ &\hspace{5mm}if \: \lambda \neq 0 \\ &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\ &\hspace{5mm}
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
v_t \leftarrow v_{t-1} \rho + g^2_t (1 - \rho) \\ &\hspace{5mm}\Delta x_t \leftarrow \frac{\sqrt{u_{t-1} + \epsilon }}{ \sqrt{v_t + \epsilon} }g_t \hspace{21mm} \\ &\hspace{5mm} u_t \leftarrow u_{t-1} \rho + \Delta x^2_t (1 - \rho) \\ &\hspace{5mm}\theta_t \leftarrow \theta_{t-1} - \gamma \Delta x_t \\ &\rule{110mm}{0.4pt} \\[-1.ex] &\bf{return} \: \theta_t \\[-1.ex] &\rule{110mm}{0.4pt} \\[-1.ex] \end{aligned} For further details regarding the algorithm we refer to ADADELTA: An Adaptive Learning Rate Method. Parameters: * params (iterable) -- iterable of parameters to optimize or dicts defining parameter groups * **rho** (*float**, **optional*) -- coefficient used for computing a running average of squared gradients (default: 0.9)
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
* **eps** (*float**, **optional*) -- term added to the denominator to improve numerical stability (default: 1e-6) * **lr** (*float**, **optional*) -- coefficient that scales delta before it is applied to the parameters (default: 1.0) * **weight_decay** (*float**, **optional*) -- weight decay (L2 penalty) (default: 0) * **foreach** (*bool**, **optional*) -- whether foreach implementation of optimizer is used. If unspecified by the user (so foreach is None), we will try to use foreach over the for-loop implementation on CUDA, since it is usually significantly more performant. (default: None) * **maximize** (*bool**, **optional*) -- maximize the params based on the objective, instead of minimizing (default: False) * **differentiable** (*bool**, **optional*) -- whether autograd should occur through the optimizer step in training.
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it False if you don't intend to run autograd through this instance (default: False) add_param_group(param_group) Add a param group to the "Optimizer" s *param_groups*. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the "Optimizer" as training progresses. Parameters: **param_group** (*dict*) -- Specifies what Tensors should be optimized along with group specific optimization options. load_state_dict(state_dict) Loads the optimizer state. Parameters: **state_dict** (*dict*) -- optimizer state. Should be an object returned from a call to "state_dict()". register_step_post_hook(hook) Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
hook(optimizer, args, kwargs) -> None The "optimizer" argument is the optimizer instance being used. Parameters: **hook** (*Callable*) -- The user defined hook to be registered. Returns: a handle that can be used to remove the added hook by calling "handle.remove()" Return type: "torch.utils.hooks.RemovableHandle" register_step_pre_hook(hook) Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature: hook(optimizer, args, kwargs) -> None or modified args and kwargs The "optimizer" argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs. Parameters: **hook** (*Callable*) -- The user defined hook to be registered. Returns:
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
a handle that can be used to remove the added hook by calling "handle.remove()" Return type: "torch.utils.hooks.RemovableHandle" state_dict() Returns the state of the optimizer as a "dict". It contains two entries: * state - a dict holding current optimization state. Its content differs between optimizer classes. * param_groups - a list containing all parameter groups where each parameter group is a dict zero_grad(set_to_none=False) Sets the gradients of all optimized "torch.Tensor" s to zero. Parameters: **set_to_none** (*bool*) -- instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
         None attribute or a Tensor full of 0s will behave
         differently. 2. If the user requests
         "zero_grad(set_to_none=True)" followed by a backward pass,
         ".grad"s are guaranteed to be None for params that did not
         receive a gradient. 3. "torch.optim" optimizers have a
         different behavior if the gradient is 0 or None (in one
         case it does the step with a gradient of 0 and in the
         other it skips the step altogether).
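   A minimal usage sketch tying these pieces together; the model,
   data, and the logging hook below are illustrative placeholders
   rather than part of the official documentation:

      import torch
      import torch.nn as nn

      # Toy model and data (hypothetical, for illustration only).
      model = nn.Linear(10, 1)
      inputs = torch.randn(8, 10)
      targets = torch.randn(8, 1)

      optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0, rho=0.9, eps=1e-6)

      # Optional: observe each completed step via a post hook.
      def log_step(optimizer, args, kwargs):
          print("step finished")

      handle = optimizer.register_step_post_hook(log_step)

      for _ in range(3):
          optimizer.zero_grad(set_to_none=True)  # grads become None, not 0
          loss = nn.functional.mse_loss(model(inputs), targets)
          loss.backward()
          optimizer.step()

      handle.remove()  # detach the hook when it is no longer needed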
https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html
pytorch docs
torch.autograd.function.FunctionCtx.mark_non_differentiable

FunctionCtx.mark_non_differentiable(*args)

   Marks outputs as non-differentiable.

   This should be called at most once, only from inside the
   "forward()" method, and all arguments should be tensor outputs.

   This will mark outputs as not requiring gradients, increasing
   the efficiency of backward computation. You still need to accept
   a gradient for each output in "backward()", but it's always
   going to be a zero tensor with the same shape as the
   corresponding output.

   This is used e.g. for indices returned from a sort. See example:

      >>> class Func(Function):
      >>>     @staticmethod
      >>>     def forward(ctx, x):
      >>>         sorted, idx = x.sort()
      >>>         ctx.mark_non_differentiable(idx)
      >>>         ctx.save_for_backward(x, idx)
      >>>         return sorted, idx
      >>>
      >>>     @staticmethod
https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html
pytorch docs
      >>>     @once_differentiable
      >>>     def backward(ctx, g1, g2):  # still need to accept g2
      >>>         x, idx = ctx.saved_tensors
      >>>         grad_input = torch.zeros_like(x)
      >>>         grad_input.index_add_(0, idx, g1)
      >>>         return grad_input
https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html
pytorch docs
torch.sparse.mm

torch.sparse.mm(mat1, mat2)

   Performs a matrix multiplication of the sparse matrix "mat1" and
   the (sparse or strided) matrix "mat2". Similar to "torch.mm()",
   if "mat1" is a (n \times m) tensor and "mat2" is a (m \times p)
   tensor, out will be a (n \times p) tensor. When "mat1" is a COO
   tensor it must have *sparse_dim = 2*. When inputs are COO
   tensors, this function also supports backward for both inputs.

   Supports both CSR and COO storage formats.

   Note: This function doesn't support computing derivatives with
     respect to CSR matrices.

   Args:
      mat1 (Tensor): the first sparse matrix to be multiplied

      mat2 (Tensor): the second matrix to be multiplied, which
      could be sparse or dense

   Shape:
      The format of the output tensor of this function follows:

      - sparse x sparse -> sparse

      - sparse x dense -> dense

   Example:

      >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)
https://pytorch.org/docs/stable/generated/torch.sparse.mm.html
pytorch docs
      >>> a
      tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
                             [0, 1, 2, 0, 1, 2]]),
             values=tensor([ 1.5901,  0.0183, -0.6146,  1.8061, -0.0112,  0.6302]),
             size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)
      >>> b = torch.randn(3, 2, requires_grad=True)
      >>> b
      tensor([[-0.6479,  0.7874],
              [-1.2056,  0.5641],
              [-1.1716, -0.9923]], requires_grad=True)
      >>> y = torch.sparse.mm(a, b)
      >>> y
      tensor([[-0.3323,  1.8723],
              [-1.8951,  0.7904]], grad_fn=<SparseAddmmBackward>)
      >>> y.sum().backward()
      >>> a.grad
      tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
                             [0, 1, 2, 0, 1, 2]]),
             values=tensor([ 0.1394, -0.6415, -2.1639,  0.1394, -0.6415, -2.1639]),
             size=(2, 3), nnz=6, layout=torch.sparse_coo)
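   The page above also notes that CSR storage is supported (without
   CSR derivatives). A short illustrative sketch, assuming a
   PyTorch build where CSR inputs are accepted by "torch.sparse.mm";
   the matrix values below are arbitrary:

      import torch

      # Build a CSR sparse matrix from a dense one (values are arbitrary).
      dense = torch.tensor([[0., 2., 0.],
                            [3., 0., 4.]])
      mat1_csr = dense.to_sparse_csr()
      mat2 = torch.randn(3, 2)

      # sparse (CSR) x dense -> dense
      out = torch.sparse.mm(mat1_csr, mat2)
      print(out.shape)  # torch.Size([2, 2])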
https://pytorch.org/docs/stable/generated/torch.sparse.mm.html
pytorch docs
torch.Tensor.get_device

Tensor.get_device() -> Device ordinal (Integer)

   For CUDA tensors, this function returns the device ordinal of
   the GPU on which the tensor resides. For CPU tensors, this
   function returns -1.

   Example:

      >>> x = torch.randn(3, 4, 5, device='cuda:0')
      >>> x.get_device()
      0
      >>> x.cpu().get_device()
      -1
https://pytorch.org/docs/stable/generated/torch.Tensor.get_device.html
pytorch docs
torch.cuda.set_per_process_memory_fraction

torch.cuda.set_per_process_memory_fraction(fraction, device=None)

   Set the memory fraction for a process. The fraction is used to
   limit the memory the caching allocator may allocate on a CUDA
   device. The allowed value equals the total visible memory
   multiplied by the fraction. If a process tries to allocate more
   than the allowed value, the allocator raises an out of memory
   error.

   Parameters:
      * **fraction** (*float*) -- Range: 0~1. Allowed memory equals
        total_memory * fraction.

      * **device** (*torch.device** or **int**, **optional*) --
        selected device. If it is "None" the default CUDA device is
        used.

   Note: In general, the total available free memory is less than
     the total capacity.
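   A brief illustrative sketch; the 0.5 fraction and the allocation
   size are arbitrary examples, not recommendations:

      import torch

      if torch.cuda.is_available():
          # Cap this process at roughly half of GPU 0's total memory.
          torch.cuda.set_per_process_memory_fraction(0.5, device=0)

          # Allocations within the cap succeed as usual.
          x = torch.empty(1024, 1024, device='cuda:0')

          # Allocating far beyond the cap would raise a CUDA
          # out-of-memory error instead of consuming the whole GPU.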
https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html
pytorch docs
ConvBn2d class torch.ao.nn.intrinsic.ConvBn2d(conv, bn) This is a sequential container which calls the Conv 2d and Batch Norm 2d modules. During quantization this will be replaced with the corresponding fused module.
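   A minimal illustrative sketch of constructing the fused
   container directly from an existing Conv2d/BatchNorm2d pair; the
   channel counts and input shape below are arbitrary:

      import torch
      import torch.nn as nn
      from torch.ao.nn.intrinsic import ConvBn2d

      # Wrap an existing Conv2d/BatchNorm2d pair in one container
      # (channel counts are arbitrary examples).
      conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
      bn = nn.BatchNorm2d(16)
      fused = ConvBn2d(conv, bn)

      x = torch.randn(1, 3, 32, 32)
      out = fused(x)       # behaves like bn(conv(x))
      print(out.shape)     # torch.Size([1, 16, 32, 32])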
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn2d.html
pytorch docs
torch.func.grad

torch.func.grad(func, argnums=0, has_aux=False)

   The "grad" operator computes gradients of "func" with respect to
   the input(s) specified by "argnums". This operator can be nested
   to compute higher-order gradients.

   Parameters:
      * **func** (*Callable*) -- A Python function that takes one
        or more arguments. Must return a single-element Tensor. If
        "has_aux" is "True", the function can return a tuple of a
        single-element Tensor and other auxiliary objects:
        "(output, aux)".

      * **argnums** (*int** or **Tuple**[**int**]*) -- Specifies
        the arguments to compute gradients with respect to.
        "argnums" can be a single integer or a tuple of integers.
        Default: 0.

      * **has_aux** (*bool*) -- Flag indicating that "func" returns
        a tensor and other auxiliary objects: "(output, aux)".
        Default: False.

   Returns:
      Function to compute gradients with respect to its inputs. By
https://pytorch.org/docs/stable/generated/torch.func.grad.html
pytorch docs
      default, the output of the function is the gradient tensor(s)
      with respect to the first argument. If "has_aux" is "True", a
      tuple of gradients and auxiliary output objects is returned.
      If "argnums" is a tuple of integers, a tuple of output
      gradients with respect to each "argnums" value is returned.

   Return type:
      Callable

   Example of using "grad":

      from torch.func import grad
      x = torch.randn([])
      cos_x = grad(lambda x: torch.sin(x))(x)
      assert torch.allclose(cos_x, x.cos())

      # Second-order gradients
      neg_sin_x = grad(grad(lambda x: torch.sin(x)))(x)
      assert torch.allclose(neg_sin_x, -x.sin())

   When composed with "vmap", "grad" can be used to compute per-
   sample-gradients:

      from torch.func import grad, vmap
      batch_size, feature_size = 3, 5

      def model(weights, feature_vec):
          # Very simple linear model with activation
          assert feature_vec.dim() == 1
https://pytorch.org/docs/stable/generated/torch.func.grad.html
pytorch docs
          return feature_vec.dot(weights).relu()

      def compute_loss(weights, example, target):
          y = model(weights, example)
          return ((y - target) ** 2).mean()  # MSELoss

      weights = torch.randn(feature_size, requires_grad=True)
      examples = torch.randn(batch_size, feature_size)
      targets = torch.randn(batch_size)
      inputs = (weights, examples, targets)
      grad_weight_per_example = vmap(grad(compute_loss), in_dims=(None, 0, 0))(*inputs)

   Example of using "grad" with "has_aux" and "argnums":

      from torch.func import grad

      def my_loss_func(y, y_pred):
          loss_per_sample = (0.5 * y_pred - y) ** 2
          loss = loss_per_sample.mean()
          return loss, (y_pred, loss_per_sample)

      fn = grad(my_loss_func, argnums=(0, 1), has_aux=True)
      y_true = torch.rand(4)
      y_preds = torch.rand(4, requires_grad=True)
      out = fn(y_true, y_preds)
https://pytorch.org/docs/stable/generated/torch.func.grad.html
pytorch docs
      # output is ((grads w.r.t y_true, grads w.r.t y_preds), (y_pred, loss_per_sample))

   Note: Using PyTorch "torch.no_grad" together with "grad".

     Case 1: Using "torch.no_grad" inside a function:

        >>> def f(x):
        >>>     with torch.no_grad():
        >>>         c = x ** 2
        >>>     return x - c

     In this case, "grad(f)(x)" will respect the inner
     "torch.no_grad".

     Case 2: Using "grad" inside the "torch.no_grad" context
     manager:

        >>> with torch.no_grad():
        >>>     grad(f)(x)

     In this case, "grad" will respect the inner "torch.no_grad",
     but not the outer one. This is because "grad" is a "function
     transform": its result should not depend on the result of a
     context manager outside of "f".
https://pytorch.org/docs/stable/generated/torch.func.grad.html
pytorch docs
torch.sparse_coo_tensor

torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor

   Constructs a sparse tensor in COO(rdinate) format with specified
   values at the given "indices".

   Note: This function returns an uncoalesced tensor.

   Note: If the "device" argument is not specified the device of
     the given "values" and indices tensor(s) must match. If,
     however, the argument is specified the input Tensors will be
     converted to the given device and in turn determine the device
     of the constructed sparse tensor.

   Parameters:
      * **indices** (*array_like*) -- Initial data for the tensor.
        Can be a list, tuple, NumPy "ndarray", scalar, and other
        types. Will be cast to a "torch.LongTensor" internally. The
        indices are the coordinates of the non-zero values in the
        matrix, and thus should be two-dimensional where the first
        dimension is
https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html
pytorch docs
        the number of tensor dimensions and the second dimension is
        the number of non-zero values.

      * **values** (*array_like*) -- Initial values for the tensor.
        Can be a list, tuple, NumPy "ndarray", scalar, and other
        types.

      * **size** (list, tuple, or "torch.Size", optional) -- Size
        of the sparse tensor. If not provided the size will be
        inferred as the minimum size big enough to hold all non-
        zero elements.

   Keyword Arguments:
      * **dtype** ("torch.dtype", optional) -- the desired data
        type of returned tensor. Default: if None, infers data type
        from "values".

      * **device** ("torch.device", optional) -- the desired device
        of returned tensor. Default: if None, uses the current
        device for the default tensor type (see
        "torch.set_default_tensor_type()"). "device" will be the
        CPU for CPU tensor types and the current CUDA device for
        CUDA tensor types.
https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html
pytorch docs
      * **requires_grad** (*bool**, **optional*) -- If autograd
        should record operations on the returned tensor. Default:
        "False".

      * **check_invariants** (*bool**, **optional*) -- If sparse
        tensor invariants are checked. Default: as returned by
        "torch.sparse.check_sparse_tensor_invariants.is_enabled()",
        initially False.

   Example:

      >>> i = torch.tensor([[0, 1, 1],
      ...                   [2, 0, 2]])
      >>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
      >>> torch.sparse_coo_tensor(i, v, [2, 4])
      tensor(indices=tensor([[0, 1, 1],
                             [2, 0, 2]]),
             values=tensor([3., 4., 5.]),
             size=(2, 4), nnz=3, layout=torch.sparse_coo)

      >>> torch.sparse_coo_tensor(i, v)  # Shape inference
      tensor(indices=tensor([[0, 1, 1],
                             [2, 0, 2]]),
             values=tensor([3., 4., 5.]),
             size=(2, 3), nnz=3, layout=torch.sparse_coo)
https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html
pytorch docs
      >>> torch.sparse_coo_tensor(i, v, [2, 4],
      ...                         dtype=torch.float64,
      ...                         device=torch.device('cuda:0'))
      tensor(indices=tensor([[0, 1, 1],
                             [2, 0, 2]]),
             values=tensor([3., 4., 5.]),
             device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
             layout=torch.sparse_coo)

      # Create an empty sparse tensor with the following invariants:
      #   1. sparse_dim + dense_dim = len(SparseTensor.shape)
      #   2. SparseTensor._indices().shape = (sparse_dim, nnz)
      #   3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])
      #
      # For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and
      # sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))
      >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
      tensor(indices=tensor([], size=(1, 0)),
             values=tensor([], size=(0,)),
https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html
pytorch docs
             size=(1,), nnz=0, layout=torch.sparse_coo)

      # and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and
      # sparse_dim = 1
      >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])
      tensor(indices=tensor([], size=(1, 0)),
             values=tensor([], size=(0, 2)),
             size=(1, 2), nnz=0, layout=torch.sparse_coo)
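   Because the constructor returns an uncoalesced tensor (see the
   note above), duplicate coordinates are kept until "coalesce()"
   is called, which sums them. An illustrative sketch with
   arbitrary values; the printed output is indicative:

      >>> # Duplicate coordinates are allowed; coalesce() sums them.
      >>> i = torch.tensor([[0, 0, 1],
      ...                   [1, 1, 0]])
      >>> v = torch.tensor([1., 2., 3.])
      >>> s = torch.sparse_coo_tensor(i, v, [2, 2])
      >>> s.is_coalesced()
      False
      >>> s.coalesce()
      tensor(indices=tensor([[0, 1],
                             [1, 0]]),
             values=tensor([3., 3.]),
             size=(2, 2), nnz=2, layout=torch.sparse_coo)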
https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html
pytorch docs