text | source | category
---|---|---|
torch.autograd.profiler.profile.key_averages
profile.key_averages(group_by_input_shape=False, group_by_stack_n=0)
Averages all function events over their keys.
Parameters:
* group_by_input_shape -- group entries by (event name,
input shapes) rather than just event name. This is useful to
see which input shapes contribute to the runtime the most and
may help with size-specific optimizations or choosing the best
candidates for quantization (aka fitting a roof line)
* **group_by_stack_n** -- group by top n stack trace entries
Returns:
An EventList containing FunctionEventAvg objects. | https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.key_averages.html | pytorch docs |
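A minimal usage sketch for the key_averages() entry above (not part of the original page); the profiled op and iteration count are arbitrary:
    import torch
    from torch.autograd import profiler

    x = torch.randn(64, 64)
    with profiler.profile(record_shapes=True) as prof:
        for _ in range(10):
            torch.mm(x, x)

    # Average events over (name, input shapes) and render the EventList.
    print(prof.key_averages(group_by_input_shape=True)
              .table(sort_by="cpu_time_total"))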
Hardsigmoid
class torch.nn.Hardsigmoid(inplace=False)
Applies the Hardsigmoid function element-wise.
Hardsigmoid is defined as:
\text{Hardsigmoid}(x) = \begin{cases}
    0             & \text{if~} x \le -3, \\
    1             & \text{if~} x \ge +3, \\
    x / 6 + 1 / 2 & \text{otherwise}
\end{cases}
Parameters:
inplace (bool) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
Examples:
>>> m = nn.Hardsigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html | pytorch docs |
torch._foreach_sqrt
torch._foreach_sqrt(self: List[Tensor]) -> List[Tensor]
Apply "torch.sqrt()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_sqrt.html | pytorch docs |
torch.linalg.vander
torch.linalg.vander(x, N=None) -> Tensor
Generates a Vandermonde matrix.
Returns the Vandermonde matrix V
V = \begin{pmatrix}
        1      & x_1    & x_1^2  & \dots  & x_1^{N-1} \\
        1      & x_2    & x_2^2  & \dots  & x_2^{N-1} \\
        1      & x_3    & x_3^2  & \dots  & x_3^{N-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots    \\
        1      & x_n    & x_n^2  & \dots  & x_n^{N-1}
    \end{pmatrix}.
for N > 1. If "N" = None, then N = x.size(-1) so that the
output is a square matrix.
Supports inputs of float, double, cfloat, cdouble, and integral
dtypes. Also supports batches of vectors, and if "x" is a batch of
vectors then the output has the same batch dimensions.
Differences with numpy.vander:
Unlike numpy.vander, this function returns the powers of "x" in
ascending order. To get them in the reverse order call
"linalg.vander(x, N).flip(-1)".
Parameters: | https://pytorch.org/docs/stable/generated/torch.linalg.vander.html | pytorch docs |
x (Tensor) -- tensor of shape (*, n) where * is zero
or more batch dimensions consisting of vectors.
Keyword Arguments:
N (int, optional) -- Number of columns in the output.
Default: x.size(-1)
Example:
>>> x = torch.tensor([1, 2, 3, 5])
>>> linalg.vander(x)
tensor([[ 1, 1, 1, 1],
[ 1, 2, 4, 8],
[ 1, 3, 9, 27],
[ 1, 5, 25, 125]])
>>> linalg.vander(x, N=3)
tensor([[ 1, 1, 1],
[ 1, 2, 4],
[ 1, 3, 9],
[ 1, 5, 25]])
| https://pytorch.org/docs/stable/generated/torch.linalg.vander.html | pytorch docs |
torch.nn.functional.silu
torch.nn.functional.silu(input, inplace=False)
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The
SiLU function is also known as the swish function.
\text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{
is the logistic sigmoid.}
Note:
See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid
Linear Unit) was originally coined, and see Sigmoid-Weighted
Linear Units for Neural Network Function Approximation in
Reinforcement Learning and Swish: a Self-Gated Activation
Function where the SiLU was experimented with later.
See "SiLU" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html | pytorch docs |
torch.clone
torch.clone(input, *, memory_format=torch.preserve_format) -> Tensor
Returns a copy of "input".
Note:
This function is differentiable, so gradients will flow back from
the result of this operation to "input". To create a tensor
without an autograd relationship to "input" see "detach()".
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.clone.html | pytorch docs |
LinearReLU
class torch.ao.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None)
A LinearReLU module fused from Linear and ReLU modules, attached
with FakeQuantize modules for weight, used in quantization aware
training.
We adopt the same interface as "torch.nn.Linear".
Similar to torch.nn.intrinsic.LinearReLU, with FakeQuantize
modules initialized to default.
Variables:
weight (torch.Tensor) -- fake quant module for weight
Examples:
>>> m = nn.qat.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
| https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.LinearReLU.html | pytorch docs |
torch._foreach_cosh_
torch._foreach_cosh_(self: List[Tensor]) -> None
Apply "torch.cosh()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_cosh_.html | pytorch docs |
torch.imag
torch.imag(input) -> Tensor
Returns a new tensor containing imaginary values of the "self"
tensor. The returned tensor and "self" share the same underlying
storage.
Warning:
"imag()" is only supported for tensors with complex dtypes.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.imag
tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
| https://pytorch.org/docs/stable/generated/torch.imag.html | pytorch docs |
RMSprop
class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False, foreach=None, maximize=False, differentiable=False)
Implements RMSprop algorithm.
\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \alpha \text{ (alpha)},\: \gamma \text{ (lr)},
        \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)} \\
    &\hspace{13mm} \lambda \text{ (weight decay)},\: \mu \text{ (momentum)},\: centered \\
    &\textbf{initialize} : v_0 \leftarrow 0 \text{ (square average)}, \:
        \textbf{b}_0 \leftarrow 0 \text{ (buffer)}, \: g^{ave}_0 \leftarrow 0 \\[-1.ex]
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} v_t \leftarrow \alpha v_{t-1} + (1 - \alpha) g^2_t \\
    &\hspace{5mm} \tilde{v_t} \leftarrow v_t \\
    &\hspace{5mm} \textbf{if} \: centered \\
    &\hspace{10mm} g^{ave}_t \leftarrow \alpha g^{ave}_{t-1} + (1 - \alpha) g_t \\
    &\hspace{10mm} \tilde{v_t} \leftarrow \tilde{v_t} - \big( g^{ave}_t \big)^2 \\
    &\hspace{5mm} \textbf{if} \: \mu > 0 \\
    &\hspace{10mm} \textbf{b}_t \leftarrow \mu \textbf{b}_{t-1} + g_t / \big( \sqrt{\tilde{v_t}} + \epsilon \big) \\
    &\hspace{10mm} \theta_t \leftarrow \theta_{t-1} - \gamma \textbf{b}_t \\
    &\hspace{5mm} \textbf{else} \\
    &\hspace{10mm} \theta_t \leftarrow \theta_{t-1} - \gamma g_t / \big( \sqrt{\tilde{v_t}} + \epsilon \big) \\
    &\rule{110mm}{0.4pt} \\[-1.ex]
    &\textbf{return} \: \theta_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}
For further details regarding the algorithm we refer to lecture
notes by G. Hinton, and for the centered version to Generating
Sequences With Recurrent Neural Networks. The implementation here
takes the square
root of the gradient average before adding epsilon (note that
TensorFlow interchanges these two operations). The effective
learning rate is thus \gamma/(\sqrt{v} + \epsilon) where \gamma is
the scheduled learning rate and v is the weighted moving average of
the squared gradient.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-2)
* **momentum** (*float**, **optional*) -- momentum factor
(default: 0)
| https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
* **alpha** (*float**, **optional*) -- smoothing constant
(default: 0.99)
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **centered** (*bool**, **optional*) -- if "True", compute the
centered RMSProp, the gradient is normalized by an estimation
of its variance
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
| https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook) | https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
| https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where each
parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
| https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html | pytorch docs |
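A minimal sketch of one RMSprop training step for the class documented above; the model, data, and hyperparameters are placeholders:
    import torch

    model = torch.nn.Linear(10, 1)
    opt = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99,
                              momentum=0.9, centered=False)

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    opt.zero_grad(set_to_none=True)                    # clear old gradients
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()                                    # compute gradients
    opt.step()                                         # apply the RMSprop update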
torch.qr
torch.qr(input, some=True, *, out=None)
Computes the QR decomposition of a matrix or a batch of matrices
"input", and returns a namedtuple (Q, R) of tensors such that
\text{input} = Q R with Q being an orthogonal matrix or batch of
orthogonal matrices and R being an upper triangular matrix or batch
of upper triangular matrices.
If "some" is "True", then this function returns the thin (reduced)
QR factorization. Otherwise, if "some" is "False", this function
returns the complete QR factorization.
Warning:
"torch.qr()" is deprecated in favor of "torch.linalg.qr()" and
will be removed in a future PyTorch release. The boolean
parameter "some" has been replaced with a string parameter
"mode"."Q, R = torch.qr(A)" should be replaced with
Q, R = torch.linalg.qr(A)
"Q, R = torch.qr(A, some=False)" should be replaced with
Q, R = torch.linalg.qr(A, mode="complete")
Warning: | https://pytorch.org/docs/stable/generated/torch.qr.html | pytorch docs |
If you plan to backpropagate through QR, note that the current
backward implementation is only well-defined when the first
\min(input.size(-1), input.size(-2)) columns of "input" are
linearly independent. This behavior will probably change once QR
supports pivoting.
Note:
This function uses LAPACK for CPU inputs and MAGMA for CUDA
inputs, and may produce different (valid) decompositions on
different device types or different platforms.
Parameters:
* input (Tensor) -- the input tensor of size (*, m, n)
where * is zero or more batch dimensions consisting of
matrices of dimension m \times n.
* **some** (*bool**, **optional*) --
Set to "True" for reduced QR decomposition and "False" for
complete QR decomposition. If *k = min(m, n)* then:
* "some=True" : returns *(Q, R)* with dimensions (m, k),
(k, n) (default)
| https://pytorch.org/docs/stable/generated/torch.qr.html | pytorch docs |
* "'some=False'": returns *(Q, R)* with dimensions (m, m),
(m, n)
Keyword Arguments:
out (tuple, optional) -- tuple of Q and R tensors.
The dimensions of Q and R are detailed in the description of
"some" above.
Example:
>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
>>> r
tensor([[ -14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
>>> torch.mm(q, r).round()
tensor([[ 12., -51., 4.],
[ 6., 167., -68.],
[ -4., 24., -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1., 0., 0.],
[ 0., 1., -0.],
[ 0., -0., 1.]])
| https://pytorch.org/docs/stable/generated/torch.qr.html | pytorch docs |
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.mT, q), torch.eye(5))
True | https://pytorch.org/docs/stable/generated/torch.qr.html | pytorch docs |
torch.linalg.lu_factor_ex
torch.linalg.lu_factor_ex(A, *, pivot=True, check_errors=False, out=None)
This is a version of "lu_factor()" that does not perform error
checks unless "check_errors" = True. It also returns the "info"
tensor returned by LAPACK's getrf.
Note:
When the inputs are on a CUDA device, this function synchronizes
only when "check_errors" = True.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
Parameters:
A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
Keyword Arguments:
* pivot (bool, optional) -- Whether to compute the LU
decomposition with partial pivoting, or the regular LU
decomposition. "pivot"= False not supported on CPU. Default:
True.
* **check_errors** (*bool**, **optional*) -- controls whether to
| https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html | pytorch docs |
check the content of "infos" and raise an error if it is non-
zero. Default: False.
* **out** (*tuple**, **optional*) -- tuple of three tensors to
write the output to. Ignored if *None*. Default: *None*.
Returns:
A named tuple (LU, pivots, info). | https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html | pytorch docs |
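A minimal sketch: factor a batch of matrices without error checking and inspect the info tensor manually (nonzero entries indicate a failed factorization):
    import torch

    A = torch.randn(4, 3, 3)
    LU, pivots, info = torch.linalg.lu_factor_ex(A)
    print(info)      # all zeros when getrf succeeded for every matrix
    print(LU.shape)  # torch.Size([4, 3, 3])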
torch.Tensor.sum_to_size
Tensor.sum_to_size(*size) -> Tensor
Sum "this" tensor to "size". "size" must be broadcastable to "this"
tensor size.
Parameters:
size (int...) -- a sequence of integers defining the
shape of the output tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.sum_to_size.html | pytorch docs |
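A minimal sketch: reduce a (2, 3) tensor to broadcast-compatible shapes, much as autograd does when accumulating gradients of broadcast inputs:
    import torch

    t = torch.arange(6.0).reshape(2, 3)  # [[0., 1., 2.], [3., 4., 5.]]
    print(t.sum_to_size(1, 3))           # tensor([[3., 5., 7.]])  (summed over dim 0)
    print(t.sum_to_size(2, 1))           # tensor([[ 3.], [12.]])  (summed over dim 1)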
torch.logcumsumexp
torch.logcumsumexp(input, dim, *, out=None) -> Tensor
Returns the logarithm of the cumulative summation of the
exponentiation of elements of "input" in the dimension "dim".
For summation index j given by dim and other indices i, the
result is
\text{logcumsumexp}(x)_{ij} = \log \sum\limits_{j=0}^{i}
\exp(x_{ij})
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to do the operation over
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(10)
>>> torch.logcumsumexp(a, dim=0)
tensor([-0.42296738, -0.04462666, 0.86278635, 0.94622083, 1.05277811,
1.39202815, 1.83525007, 1.84492621, 2.06084887, 2.06844475])
| https://pytorch.org/docs/stable/generated/torch.logcumsumexp.html | pytorch docs |
torch.Tensor.conj_physical
Tensor.conj_physical() -> Tensor
See "torch.conj_physical()" | https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical.html | pytorch docs |
torch.Tensor.unsqueeze
Tensor.unsqueeze(dim) -> Tensor
See "torch.unsqueeze()" | https://pytorch.org/docs/stable/generated/torch.Tensor.unsqueeze.html | pytorch docs |
device
class torch.cuda.device(device)
Context-manager that changes the selected device.
Parameters:
device (torch.device or int) -- device index to
select. It's a no-op if this argument is a negative integer or
"None". | https://pytorch.org/docs/stable/generated/torch.cuda.device.html | pytorch docs |
torch.Tensor.fmod_
Tensor.fmod_(divisor) -> Tensor
In-place version of "fmod()" | https://pytorch.org/docs/stable/generated/torch.Tensor.fmod_.html | pytorch docs |
torch.diagonal
torch.diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor
Returns a partial view of "input" with its diagonal elements
with respect to "dim1" and "dim2" appended as a dimension at the
end of the shape.
The argument "offset" controls which diagonal to consider:
If "offset" = 0, it is the main diagonal.
If "offset" > 0, it is above the main diagonal.
If "offset" < 0, it is below the main diagonal.
Applying "torch.diag_embed()" to the output of this function with
the same arguments yields a diagonal matrix with the diagonal
entries of the input. However, "torch.diag_embed()" has different
default dimensions, so those need to be explicitly specified.
Parameters:
* input (Tensor) -- the input tensor. Must be at least
2-dimensional.
* **offset** (*int**, **optional*) -- which diagonal to
consider. Default: 0 (main diagonal).
* **dim1** (*int**, **optional*) -- first dimension with respect
| https://pytorch.org/docs/stable/generated/torch.diagonal.html | pytorch docs |
to which to take diagonal. Default: 0.
* **dim2** (*int**, **optional*) -- second dimension with
respect to which to take diagonal. Default: 1.
Note:
To take a batch diagonal, pass in dim1=-2, dim2=-1.
Examples:
>>> a = torch.randn(3, 3)
>>> a
tensor([[-1.0854, 1.1431, -0.1752],
[ 0.8536, -0.0905, 0.0360],
[ 0.6927, -0.3735, -0.4945]])
>>> torch.diagonal(a, 0)
tensor([-1.0854, -0.0905, -0.4945])
>>> torch.diagonal(a, 1)
tensor([ 1.1431, 0.0360])
>>> x = torch.randn(2, 5, 4, 2)
>>> torch.diagonal(x, offset=-1, dim1=1, dim2=2)
tensor([[[-1.2631, 0.3755, -1.5977, -1.8172],
[-1.1065, 1.0401, -0.2235, -0.7938]],
[[-1.7325, -0.3081, 0.6166, 0.2335],
[ 1.0500, 0.7336, -0.3836, -1.1015]]])
| https://pytorch.org/docs/stable/generated/torch.diagonal.html | pytorch docs |
MultiLabelMarginLoss
class torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')
Creates a criterion that optimizes a multi-class multi-
classification hinge loss (margin-based loss) between input x (a 2D
mini-batch Tensor) and output y (which is a 2D Tensor of target
class indices). For each sample in the mini-batch:
\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}
where x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\},
y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\},
0 \leq y[j] \leq \text{x.size}(0) - 1, and i \neq y[j] for all i and j.
y and x must have the same size.
The criterion only considers a contiguous block of non-negative
targets that starts at the front.
This allows for different samples to have variable amounts of
target classes.
Parameters:
* size_average (bool, optional) -- Deprecated (see | https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html | pytorch docs |
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
| https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html | pytorch docs |
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (C) or (N, C) where N is the batch size and C is
the number of classes.
* Target: (C) or (N, C), label targets padded by -1 ensuring
same shape as the input.
* Output: scalar. If "reduction" is "'none'", then (N).
Examples:
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
>>> loss(x, y)
tensor(0.85...)
| https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html | pytorch docs |
BatchNorm3d
class torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
Applies Batch Normalization over a 5D input (a mini-batch of 3D
inputs with additional channel dimension) as described in the paper
Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated per-dimension over
the mini-batches and \gamma and \beta are learnable parameter
vectors of size C (where C is the input size). By default, the
elements of \gamma are set to 1 and the elements of \beta are set
to 0. The standard-deviation is calculated via the biased
estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates
of its computed mean and variance, which are then used for | https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html | pytorch docs |
normalization during evaluation. The running estimates are kept
with a default "momentum" of 0.1.
If "track_running_stats" is set to "False", this layer then does
not keep running estimates, and batch statistics are instead used
during evaluation time as well.
Note:
This "momentum" argument is different from one used in optimizer
classes and the conventional notion of momentum. Mathematically,
the update rule for running statistics here is \hat{x}_\text{new}
= (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times
x_t, where \hat{x} is the estimated statistic and x_t is the new
observed value.
Because the Batch Normalization is done over the C dimension,
computing statistics on (N, D, H, W) slices, it's common
terminology to call this Volumetric Batch Normalization or Spatio-
temporal Batch Normalization.
Parameters:
* num_features (int) -- C from an expected input of size
(N, C, D, H, W) | https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html | pytorch docs |
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
Shape:
* Input: (N, C, D, H, W) | https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html | pytorch docs |
* Output: (N, C, D, H, W) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html | pytorch docs |
Softshrink
class torch.nn.Softshrink(lambd=0.5)
Applies the soft shrinkage function elementwise:
\text{SoftShrinkage}(x) = \begin{cases}
    x - \lambda, & \text{ if } x > \lambda \\
    x + \lambda, & \text{ if } x < -\lambda \\
    0,           & \text{ otherwise }
\end{cases}
Parameters:
lambd (float) -- the \lambda (must be no less than zero)
value for the Softshrink formulation. Default: 0.5
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
Examples:
>>> m = nn.Softshrink()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Softshrink.html | pytorch docs |
torch.Tensor.slogdet
Tensor.slogdet()
See "torch.slogdet()" | https://pytorch.org/docs/stable/generated/torch.Tensor.slogdet.html | pytorch docs |
torch._foreach_sigmoid_
torch._foreach_sigmoid_(self: List[Tensor]) -> None
Apply "torch.sigmoid()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_sigmoid_.html | pytorch docs |
torch.scatter_reduce
torch.scatter_reduce(input, dim, index, src, reduce, *, include_self=True) -> Tensor
Out-of-place version of "torch.Tensor.scatter_reduce_()" | https://pytorch.org/docs/stable/generated/torch.scatter_reduce.html | pytorch docs |
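A minimal sketch of the out-of-place call, mirroring the Tensor.scatter_reduce_() semantics it refers to:
    import torch

    base = torch.zeros(2)
    index = torch.tensor([0, 0, 1, 1])
    src = torch.tensor([1.0, 2.0, 3.0, 4.0])
    out = torch.scatter_reduce(base, 0, index, src, reduce="sum",
                               include_self=True)
    print(out)  # tensor([3., 7.])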
torch.cross
torch.cross(input, other, dim=None, *, out=None) -> Tensor
Returns the cross product of vectors in dimension "dim" of "input"
and "other".
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of vectors, for which it computes the product
along the dimension "dim". In this case, the output has the same
batch dimensions as the inputs.
If "dim" is not given, it defaults to the first dimension found
with the size 3. Note that this might be unexpected.
See also:
"torch.linalg.cross()" which requires specifying dim (defaulting
to -1).
Warning:
This function may change in a future PyTorch release to match the
default behaviour in "torch.linalg.cross()". We recommend using
"torch.linalg.cross()".
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
* **dim** (*int**, **optional*) -- the dimension to take the
| https://pytorch.org/docs/stable/generated/torch.cross.html | pytorch docs |
cross-product in.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4, 3)
>>> a
tensor([[-0.3956, 1.1455, 1.6895],
[-0.5849, 1.3672, 0.3599],
[-1.1626, 0.7180, -0.0521],
[-0.1339, 0.9902, -2.0225]])
>>> b = torch.randn(4, 3)
>>> b
tensor([[-0.0257, -1.4725, -1.2251],
[-1.1479, -0.7005, -1.9757],
[-1.3904, 0.3726, -1.1836],
[-0.9688, -0.7153, 0.2159]])
>>> torch.cross(a, b, dim=1)
tensor([[ 1.0844, -0.5281, 0.6120],
[-2.4490, -1.5687, 1.9792],
[-0.8304, -1.3037, 0.5650],
[-1.2329, 1.9883, 1.0551]])
>>> torch.cross(a, b)
tensor([[ 1.0844, -0.5281, 0.6120],
[-2.4490, -1.5687, 1.9792],
[-0.8304, -1.3037, 0.5650],
[-1.2329, 1.9883, 1.0551]])
| https://pytorch.org/docs/stable/generated/torch.cross.html | pytorch docs |
torch.Tensor.sinc_
Tensor.sinc_() -> Tensor
In-place version of "sinc()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sinc_.html | pytorch docs |
torch.is_inference_mode_enabled
torch.is_inference_mode_enabled()
Returns True if inference mode is currently enabled. | https://pytorch.org/docs/stable/generated/torch.is_inference_mode_enabled.html | pytorch docs |
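A minimal sketch showing the flag flipping inside an inference_mode block:
    import torch

    print(torch.is_inference_mode_enabled())      # False
    with torch.inference_mode():
        print(torch.is_inference_mode_enabled())  # True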
torch.Tensor.lerp_
Tensor.lerp_(end, weight) -> Tensor
In-place version of "lerp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lerp_.html | pytorch docs |
torch.Tensor.nanquantile
Tensor.nanquantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor
See "torch.nanquantile()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nanquantile.html | pytorch docs |
torch.cuda.nvtx.range_pop
torch.cuda.nvtx.range_pop()
Pops a range off of a stack of nested range spans. Returns the
zero-based depth of the range that is ended. | https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_pop.html | pytorch docs |
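A minimal sketch pairing range_push with range_pop; the ranges are only visible when the program runs under an NVIDIA profiler such as Nsight Systems, and a CUDA build of PyTorch is assumed:
    import torch

    torch.cuda.nvtx.range_push("forward")  # open a named range
    # ... CUDA work to annotate ...
    torch.cuda.nvtx.range_pop()            # close the innermost open range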
torch.dequantize
torch.dequantize(tensor) -> Tensor
Returns an fp32 Tensor by dequantizing a quantized Tensor
Parameters:
tensor (Tensor) -- A quantized Tensor
torch.dequantize(tensors) -> sequence of Tensors
Given a list of quantized Tensors, dequantize them and return a
list of fp32 Tensors
Parameters:
tensors (sequence of Tensors) -- A list of quantized
Tensors | https://pytorch.org/docs/stable/generated/torch.dequantize.html | pytorch docs |
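A minimal sketch: quantize a float tensor per-tensor, then recover an fp32 tensor with dequantize(); scale and zero_point are arbitrary:
    import torch

    x = torch.tensor([0.0, 0.5, 1.0])
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0,
                                   dtype=torch.quint8)
    print(torch.dequantize(qx))  # tensor([0.0000, 0.5000, 1.0000])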
torch.Tensor.bitwise_left_shift
Tensor.bitwise_left_shift(other) -> Tensor
See "torch.bitwise_left_shift()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift.html | pytorch docs |
LinearReLU
class torch.ao.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)
A LinearReLU module fused from Linear and ReLU modules
We adopt the same interface as "torch.ao.nn.quantized.Linear".
Variables:
Same as "torch.ao.nn.quantized.Linear"
Examples:
>>> m = nn.intrinsic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
| https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.LinearReLU.html | pytorch docs |
FakeQuantizeBase
class torch.quantization.fake_quantize.FakeQuantizeBase
Base fake quantize module Any fake quantize implementation should
derive from this class.
Concrete fake quantize module should follow the same API. In
forward, they will update the statistics of the observed Tensor and
fake quantize the input. They should also provide a
calculate_qparams function that computes the quantization
parameters given the collected statistics. | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantizeBase.html | pytorch docs |
torch.optim.Optimizer.add_param_group
Optimizer.add_param_group(param_group)
Add a param group to the "Optimizer" s param_groups.
This can be useful when fine tuning a pre-trained network as frozen
layers can be made trainable and added to the "Optimizer" as
training progresses.
Parameters:
param_group (dict) -- Specifies what Tensors should be
optimized along with group specific optimization options. | https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.add_param_group.html | pytorch docs |
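A minimal sketch of the fine-tuning pattern described above: optimize only a head at first, then add previously frozen parameters with their own learning rate; the modules are placeholders:
    import torch

    backbone = torch.nn.Linear(10, 10)
    head = torch.nn.Linear(10, 2)
    opt = torch.optim.SGD(head.parameters(), lr=0.1)

    # Later in training: unfreeze the backbone with a smaller learning rate.
    opt.add_param_group({"params": backbone.parameters(), "lr": 0.01})
    print(len(opt.param_groups))  # 2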
ConvBnReLU2d
class torch.ao.nn.intrinsic.ConvBnReLU2d(conv, bn, relu)
This is a sequential container which calls the Conv 2d, Batch Norm
2d, and ReLU modules. During quantization this will be replaced
with the corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU2d.html | pytorch docs |
torch.Tensor.new_ones
Tensor.new_ones(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor
Returns a Tensor of size "size" filled with "1". By default, the
returned Tensor has the same "torch.dtype" and "torch.device" as
this tensor.
Parameters:
size (int...) -- a list, tuple, or "torch.Size" of
integers defining the shape of the output tensor.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired type of
returned tensor. Default: if None, same "torch.dtype" as this
tensor.
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, same "torch.device" as this
tensor.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **layout** ("torch.layout", optional) -- the desired layout of
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html | pytorch docs |
returned Tensor. Default: "torch.strided".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> tensor = torch.tensor((), dtype=torch.int32)
>>> tensor.new_ones((2, 3))
tensor([[ 1, 1, 1],
[ 1, 1, 1]], dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html | pytorch docs |
AdaptiveMaxPool3d
class torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False)
Applies a 3D adaptive max pooling over an input signal composed of
several input planes.
The output is of size D_{out} \times H_{out} \times W_{out}, for
any input size. The number of output features is equal to the
number of input planes.
Parameters:
* output_size (Union[int, None,
Tuple[Optional[int], Optional[int],
Optional[int]]]) -- the target output size of the
image of the form D_{out} \times H_{out} \times W_{out}. Can
be a tuple (D_{out}, H_{out}, W_{out}) or a single D_{out} for
a cube D_{out} \times D_{out} \times D_{out}. D_{out}, H_{out}
and W_{out} can be either a "int", or "None" which means the
size will be the same as that of the input.
* **return_indices** (*bool*) -- if "True", will return the
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html | pytorch docs |
indices along with the outputs. Useful to pass to
nn.MaxUnpool3d. Default: "False"
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},
H_{out}, W_{out}), where (D_{out}, H_{out},
W_{out})=\text{output\_size}.
Examples:
    >>> # target output size of 5x7x9
    >>> m = nn.AdaptiveMaxPool3d((5, 7, 9))
    >>> input = torch.randn(1, 64, 8, 9, 10)
    >>> output = m(input)
    >>> # target output size of 7x7x7 (cube)
    >>> m = nn.AdaptiveMaxPool3d(7)
    >>> input = torch.randn(1, 64, 10, 9, 8)
    >>> output = m(input)
    >>> # target output size of 7x9x8
    >>> m = nn.AdaptiveMaxPool3d((7, None, None))
    >>> input = torch.randn(1, 64, 10, 9, 8)
    >>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html | pytorch docs |
torch.optim.Optimizer.zero_grad
Optimizer.zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
set_to_none (bool) -- instead of setting to zero, set the
grads to None. This will in general have lower memory footprint,
and can modestly improve performance. However, it changes
certain behaviors. For example: 1. When the user tries to access
a gradient and perform manual ops on it, a None attribute or a
Tensor full of 0s will behave differently. 2. If the user
requests "zero_grad(set_to_none=True)" followed by a backward
pass, ".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a different
behavior if the gradient is 0 or None (in one case it does the
step with a gradient of 0 and in the other it skips the step
altogether). | https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html | pytorch docs |
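A minimal sketch of the set_to_none difference described above:
    import torch

    p = torch.nn.Parameter(torch.ones(2))
    opt = torch.optim.SGD([p], lr=0.1)
    p.sum().backward()

    opt.zero_grad(set_to_none=False)
    print(p.grad)  # tensor([0., 0.])
    opt.zero_grad(set_to_none=True)
    print(p.grad)  # None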
torch.nn.modules.module.register_module_backward_hook
torch.nn.modules.module.register_module_backward_hook(hook)
Registers a backward hook common to all the modules.
This function is deprecated in favor of
"torch.nn.modules.module.register_module_full_backward_hook()" and
the behavior of this function will change in future versions.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle" | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_backward_hook.html | pytorch docs |
torch._foreach_cosh
torch._foreach_cosh(self: List[Tensor]) -> List[Tensor]
Apply "torch.cosh()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_cosh.html | pytorch docs |
ConstantPad3d
class torch.nn.ConstantPad3d(padding, value)
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in all boundaries. If a 6-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom},
\text{padding_front}, \text{padding_back})
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},
H_{out}, W_{out}), where
D_{out} = D_{in} + \text{padding\_front} +
\text{padding\_back}
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ConstantPad3d(3, 3.5)
| https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html | pytorch docs |
>>> input = torch.randn(16, 3, 10, 20, 30)
>>> output = m(input)
>>> # using different paddings for different sides
>>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html | pytorch docs |
torch.nn.utils.weight_norm
torch.nn.utils.weight_norm(module, name='weight', dim=0)
Applies weight normalization to a parameter in the given module.
\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}
Weight normalization is a reparameterization that decouples the
magnitude of a weight tensor from its direction. This replaces the
parameter specified by "name" (e.g. "'weight'") with two
parameters: one specifying the magnitude (e.g. "'weight_g'") and
one specifying the direction (e.g. "'weight_v'"). Weight
normalization is implemented via a hook that recomputes the weight
tensor from the magnitude and direction before every "forward()"
call.
By default, with "dim=0", the norm is computed independently per
output channel/plane. To compute a norm over the entire weight
tensor, use "dim=None".
See https://arxiv.org/abs/1602.07868
Parameters:
* module (Module) -- containing module | https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html | pytorch docs |
* name (str, optional) -- name of weight parameter
* dim (int, optional) -- dimension over which to compute the norm
Returns:
The original module with the weight norm hook
Return type:
T_module
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
| https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html | pytorch docs |
torch.cuda.make_graphed_callables
torch.cuda.make_graphed_callables(callables, sample_args, num_warmup_iters=3, allow_unused_input=False)
Accepts callables (functions or "nn.Module"s) and returns graphed
versions.
Each graphed callable's forward pass runs its source callable's
forward CUDA work as a CUDA graph inside a single autograd node.
The graphed callable's forward pass also appends a backward node to
the autograd graph. During backward, this node runs the callable's
backward work as a CUDA graph.
Therefore, each graphed callable should be a drop-in replacement
for its source callable in an autograd-enabled training loop.
See Partial-network capture for detailed use and constraints.
If you pass a tuple of several callables, their captures will use
the same memory pool. See Graph memory management for when this is
appropriate.
Parameters:
* callables (torch.nn.Module or Python function*, or | https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html | pytorch docs |
*tuple of these) -- Callable or callables to graph. See
Graph memory management for when passing a tuple of callables
is appropriate. If you pass a tuple of callables, their order
in the tuple must be the same order they'll run in the live
workload.
* **sample_args** (*tuple of Tensors**, or **tuple of tuples of
Tensors*) -- Samples args for each callable. If a single
callable was passed, "sample_args" must be a single tuple of
argument Tensors. If a tuple of callables was passed,
"sample_args" must be tuple of tuples of argument Tensors.
* **num_warmup_iters** (*int*) -- The number of warmup
iterations. Currently, "DistributedDataParallel" needs 11
iterations for warm up. Default: "3".
* **allow_unused_input** (*bool*) -- If False, specifying inputs
that were not used when computing outputs (and therefore their
grad is always zero) is an error. Defaults to False.
Note: | https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html | pytorch docs |
The "requires_grad" state of each Tensor in "sample_args" must
match the state that's expected for the corresponding real input
in the training loop.
Warning:
This API is in beta and may change in future releases.
Warning:
"sample_args" for each callable must contain only Tensors. Other
types are not allowed.
Warning:
Returned callables do not support higher order differentiation
(e.g., double backward).
Warning:
In any "Module" passed to "make_graphed_callables()", only
parameters may be trainable. Buffers must have
"requires_grad=False".
Warning:
After you pass a "torch.nn.Module" through
"make_graphed_callables()", you may not add or remove any of that
Module's parameters or buffers.
Warning:
"torch.nn.Module"s passed to "make_graphed_callables()" must not
have module hooks registered on them at the time they are passed.
However, registering hooks on modules *after* passing them
| https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html | pytorch docs |
through "make_graphed_callables()" is allowed.
Warning:
When running a graphed callable, you must pass its arguments in
the same order and format they appeared in that callable's
"sample_args".
Warning:
The automatic mixed precision is supported in
"make_graphed_callables()" only with disabled caching. The
context manager *torch.cuda.amp.autocast()* must have
*cache_enabled=False*.
| https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html | pytorch docs |
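A hedged sketch of the basic call; it needs a CUDA device with graph-capture support, and the module, shapes, and requires_grad flags are placeholders that must match the real workload:
    import torch

    if torch.cuda.is_available():
        model = torch.nn.Linear(64, 64).cuda()
        sample_args = (torch.randn(8, 64, device="cuda", requires_grad=True),)
        graphed = torch.cuda.make_graphed_callables(model, sample_args)

        # The graphed callable is a drop-in replacement in the training loop.
        out = graphed(torch.randn(8, 64, device="cuda", requires_grad=True))
        out.sum().backward()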
torch.nn.utils.spectral_norm
torch.nn.utils.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)
Applies spectral normalization to a parameter in the given module.
\mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})},
\sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0}
\dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2}
Spectral normalization stabilizes the training of discriminators
(critics) in Generative Adversarial Networks (GANs) by rescaling
the weight tensor with spectral norm \sigma of the weight matrix
calculated using power iteration method. If the dimension of the
weight tensor is greater than 2, it is reshaped to 2D in power
iteration method to get spectral norm. This is implemented via a
hook that calculates spectral norm and rescales weight before every
"forward()" call.
See Spectral Normalization for Generative Adversarial Networks .
Parameters: | https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html | pytorch docs |
* module (nn.Module) -- containing module
* **name** (*str**, **optional*) -- name of weight parameter
* **n_power_iterations** (*int**, **optional*) -- number of
power iterations to calculate spectral norm
* **eps** (*float**, **optional*) -- epsilon for numerical
stability in calculating norms
* **dim** (*int**, **optional*) -- dimension corresponding to
number of outputs, the default is "0", except for modules that
are instances of ConvTranspose{1,2,3}d, when it is "1"
Returns:
The original module with the spectral norm hook
Return type:
T_module
Note:
This function has been reimplemented as
"torch.nn.utils.parametrizations.spectral_norm()" using the new
parametrization functionality in
"torch.nn.utils.parametrize.register_parametrization()". Please
use the newer version. This function will be deprecated in a
future version of PyTorch.
Example: | https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html | pytorch docs |
>>> m = spectral_norm(nn.Linear(20, 40))
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_u.size()
torch.Size([40])
| https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html | pytorch docs |
torch.roll
torch.roll(input, shifts, dims=None) -> Tensor
Roll the tensor "input" along the given dimension(s). Elements that
are shifted beyond the last position are re-introduced at the first
position. If "dims" is None, the tensor will be flattened before
rolling and then restored to the original shape.
Parameters:
* input (Tensor) -- the input tensor.
* **shifts** (*int** or **tuple of ints*) -- The number of
places by which the elements of the tensor are shifted. If
shifts is a tuple, dims must be a tuple of the same size, and
each dimension will be rolled by the corresponding value
* **dims** (*int** or **tuple of ints*) -- Axis along which to
roll
Example:
>>> x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2)
>>> x
tensor([[1, 2],
[3, 4],
[5, 6],
[7, 8]])
>>> torch.roll(x, 1)
tensor([[8, 1],
[2, 3],
[4, 5],
| https://pytorch.org/docs/stable/generated/torch.roll.html | pytorch docs |
[6, 7]])
>>> torch.roll(x, 1, 0)
tensor([[7, 8],
[1, 2],
[3, 4],
[5, 6]])
>>> torch.roll(x, -1, 0)
tensor([[3, 4],
[5, 6],
[7, 8],
[1, 2]])
>>> torch.roll(x, shifts=(2, 1), dims=(0, 1))
tensor([[6, 5],
[8, 7],
[2, 1],
[4, 3]]) | https://pytorch.org/docs/stable/generated/torch.roll.html | pytorch docs |
torch.Tensor.new_tensor
Tensor.new_tensor(data, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor
Returns a new Tensor with "data" as the tensor data. By default,
the returned Tensor has the same "torch.dtype" and "torch.device"
as this tensor.
Warning:
"new_tensor()" always copies "data". If you have a Tensor "data"
and want to avoid a copy, use "torch.Tensor.requires_grad_()" or
"torch.Tensor.detach()". If you have a numpy array and want to
avoid a copy, use "torch.from_numpy()".
Warning:
When data is a tensor *x*, "new_tensor()" reads out 'the data'
from whatever it is passed, and constructs a leaf variable.
Therefore "tensor.new_tensor(x)" is equivalent to
"x.clone().detach()" and "tensor.new_tensor(x,
requires_grad=True)" is equivalent to
"x.clone().detach().requires_grad_(True)". The equivalents using
"clone()" and "detach()" are recommended.
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html | pytorch docs |
"clone()" and "detach()" are recommended.
Parameters:
data (array_like) -- The returned Tensor copies "data".
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired type of
returned tensor. Default: if None, same "torch.dtype" as this
tensor.
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, same "torch.device" as this
tensor.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> tensor = torch.ones((2,), dtype=torch.int8)
>>> data = [[0, 1], [2, 3]]
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html | pytorch docs |
>>> tensor.new_tensor(data)
tensor([[ 0, 1],
[ 2, 3]], dtype=torch.int8)
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html | pytorch docs |
torch.set_printoptions
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)
Set options for printing. Items shamelessly taken from NumPy
Parameters:
* precision -- Number of digits of precision for floating
point output (default = 4).
* **threshold** -- Total number of array elements which trigger
summarization rather than full *repr* (default = 1000).
* **edgeitems** -- Number of array items in summary at beginning
and end of each dimension (default = 3).
* **linewidth** -- The number of characters per line for the
purpose of inserting line breaks (default = 80). Thresholded
matrices will ignore this parameter.
* **profile** -- Sane defaults for pretty printing. Can override
with any of the above options. (any one of *default*, *short*,
*full*)
* **sci_mode** -- Enable (True) or disable (False) scientific
| https://pytorch.org/docs/stable/generated/torch.set_printoptions.html | pytorch docs |
notation. If None (default) is specified, the value is defined
by torch._tensor_str._Formatter. This value is automatically
chosen by the framework.
Example:
>>> # Limit the precision of elements
>>> torch.set_printoptions(precision=2)
>>> torch.tensor([1.12345])
tensor([1.12])
>>> # Limit the number of elements shown
>>> torch.set_printoptions(threshold=5)
>>> torch.arange(10)
tensor([0, 1, 2, ..., 7, 8, 9])
>>> # Restore defaults
>>> torch.set_printoptions(profile='default')
>>> torch.tensor([1.12345])
tensor([1.1235])
>>> torch.arange(10)
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
| https://pytorch.org/docs/stable/generated/torch.set_printoptions.html | pytorch docs |
torch.jit.ignore
torch.jit.ignore(drop=False, **kwargs)
This decorator indicates to the compiler that a function or method
should be ignored and left as a Python function. This allows you to
leave code in your model that is not yet TorchScript compatible. If
called from TorchScript, ignored functions will dispatch the call
to the Python interpreter. Models with ignored functions cannot be
exported; use "@torch.jit.unused" instead.
Example (using "@torch.jit.ignore" on a method):
import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore
def debugger(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
x += 10
# The compiler would normally try to compile `debugger`,
# but since it is `@ignore`d, it will be left as a call
# to Python
self.debugger(x)
return x
| https://pytorch.org/docs/stable/generated/torch.jit.ignore.html | pytorch docs |
m = torch.jit.script(MyModule())
# Error! The call `debugger` cannot be saved since it calls into Python
m.save("m.pt")
Example (using "@torch.jit.ignore(drop=True)" on a method):
import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore(drop=True)
def training_method(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
if self.training:
self.training_method(x)
return x
m = torch.jit.script(MyModule())
# This is OK since `training_method` is not saved, the call is replaced
# with a `raise`.
m.save("m.pt")
| https://pytorch.org/docs/stable/generated/torch.jit.ignore.html | pytorch docs |
torch.nn.functional.adaptive_avg_pool2d
torch.nn.functional.adaptive_avg_pool2d(input, output_size)
Applies a 2D adaptive average pooling over an input signal composed
of several input planes.
See "AdaptiveAvgPool2d" for details and output shape.
Parameters:
output_size (None) -- the target output size (single
integer or double-integer tuple)
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool2d.html | pytorch docs |
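A minimal sketch: pool an (N, C, H, W) input down to a fixed 5x7 spatial size:
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 64, 8, 9)
    y = F.adaptive_avg_pool2d(x, (5, 7))
    print(y.shape)  # torch.Size([1, 64, 5, 7])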
torch.sparse_bsc_tensor
torch.sparse_bsc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor
Constructs a sparse tensor in BSC (Block Compressed Sparse Column)
with specified 2-dimensional blocks at the given "ccol_indices" and
"row_indices". Sparse matrix multiplication operations in BSC
format are typically faster than that for sparse tensors in COO
format. Make sure you have a look at the note on the data type of the
indices.
Note:
If the "device" argument is not specified the device of the given
"values" and indices tensor(s) must match. If, however, the
argument is specified the input Tensors will be converted to the
given device and in turn determine the device of the constructed
sparse tensor.
Parameters:
* ccol_indices (array_like) -- (B+1)-dimensional array of
size "(*batchsize, ncolblocks + 1)". The last element of each | https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html | pytorch docs |
batch is the number of non-zeros. This tensor encodes the
index in values and row_indices depending on where the given
column starts. Each successive number in the tensor subtracted
by the number before it denotes the number of elements in a
given column.
* **row_indices** (*array_like*) -- Row block co-ordinates of
each block in values. (B+1)-dimensional tensor with the same
length as values.
* **values** (*array_list*) -- Initial blocks for the tensor.
Can be a list, tuple, NumPy "ndarray", and other types that
represents a (1 + 2 + K)-dimensional tensor where "K" is the
number of dense dimensions.
* **size** (list, tuple, "torch.Size", optional) -- Size of the
sparse tensor: "(*batchsize, nrows * blocksize[0], ncols *
blocksize[1], *densesize)" If not provided, the size will be
inferred as the minimum size big enough to hold all non-zero
blocks.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html | pytorch docs |
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if None, infers data type from
"values".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **check_invariants** (*bool**, **optional*) -- If sparse
tensor invariants are checked. Default: as returned by
"torch.sparse.check_sparse_tensor_invariants.is_enabled()",
initially False.
Example::
>>> ccol_indices = [0, 1, 2]
>>> row_indices = [0, 1] | https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html | pytorch docs |
>>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> torch.sparse_bsc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),
... torch.tensor(row_indices, dtype=torch.int64),
... torch.tensor(values), dtype=torch.double)
tensor(ccol_indices=tensor([0, 1, 2]),
row_indices=tensor([0, 1]),
values=tensor([[[1., 2.],
[3., 4.]],
[[5., 6.],
[7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,
layout=torch.sparse_bsc)
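As a supplementary sketch (not from the upstream docs), BSC tensors
are often obtained by converting an existing tensor rather than by
constructing them by hand; the route below goes through CSR, and the
4x4 identity matrix and 2x2 block size are chosen only for
illustration:
    >>> dense = torch.eye(4, dtype=torch.float64)
    >>> bsc = dense.to_sparse_csr().to_sparse_bsc((2, 2))
    >>> bsc.layout
    torch.sparse_bsc
    >>> bsc.values().shape  # two non-zero 2x2 blocks on the diagonal
    torch.Size([2, 2, 2])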
| https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html | pytorch docs |
LSTM
class torch.nn.LSTM(args, *kwargs)
Applies a multi-layer long short-term memory (LSTM) RNN to an input
sequence.
For each element in the input sequence, each layer computes the
following function:
\begin{array}{ll} \\ i_t = \sigma(W_{ii} x_t + b_{ii} +
W_{hi} h_{t-1} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if}
+ W_{hf} h_{t-1} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t +
b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t = \sigma(W_{io} x_t
+ b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t = f_t \odot
c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \\
\end{array}
where h_t is the hidden state at time t, c_t is the cell state at
time t, x_t is the input at time t, h_{t-1} is the hidden state
of the layer at time t-1 or the initial hidden state at time 0,
and i_t, f_t, g_t, o_t are the input, forget, cell, and output
gates, respectively. \sigma is the sigmoid function, and \odot is
the Hadamard product. | https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
In a multilayer LSTM, the input x^{(l)}_t of the l -th layer (l >=
2) is the hidden state h^{(l-1)}_t of the previous layer multiplied
by dropout \delta^{(l-1)}_t where each \delta^{(l-1)}_t is a
Bernoulli random variable which is 0 with probability "dropout".
If "proj_size > 0" is specified, LSTM with projections will be
used. This changes the LSTM cell in the following way. First, the
dimension of h_t will be changed from "hidden_size" to "proj_size"
(dimensions of W_{hi} will be changed accordingly). Second, the
output hidden state of each layer will be multiplied by a learnable
projection matrix: h_t = W_{hr}h_t. Note that as a consequence of
this, the output of LSTM network will be of different shape as
well. See Inputs/Outputs sections below for exact dimensions of all
variables. You can find more details in
https://arxiv.org/abs/1402.1128.
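As a quick illustration (a sketch that is not part of the original
description; the sizes are arbitrary), the resulting shapes when
"proj_size" is set:
    >>> rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, proj_size=5)
    >>> input = torch.randn(5, 3, 10)
    >>> h0 = torch.randn(2, 3, 5)   # hidden state uses proj_size
    >>> c0 = torch.randn(2, 3, 20)  # cell state keeps hidden_size
    >>> output, (hn, cn) = rnn(input, (h0, c0))
    >>> output.shape, hn.shape, cn.shape
    (torch.Size([5, 3, 5]), torch.Size([2, 3, 5]), torch.Size([2, 3, 20]))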
Parameters:
* **input_size** -- The number of expected features in the input
x
* **hidden_size** -- The number of features in the hidden state
*h*
* **num_layers** -- Number of recurrent layers. E.g., setting
"num_layers=2" would mean stacking two LSTMs together to form
a *stacked LSTM*, with the second LSTM taking in outputs of
the first LSTM and computing the final results. Default: 1
* **bias** -- If "False", then the layer does not use bias
weights *b_ih* and *b_hh*. Default: "True"
* **batch_first** -- If "True", then the input and output
tensors are provided as *(batch, seq, feature)* instead of
*(seq, batch, feature)*. Note that this does not apply to
hidden or cell states. See the Inputs/Outputs sections below
for details. Default: "False"
* **dropout** -- If non-zero, introduces a *Dropout* layer on
the outputs of each LSTM layer except the last layer, with
dropout probability equal to "dropout". Default: 0
| https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
* **bidirectional** -- If "True", becomes a bidirectional LSTM.
  Default: "False"
* **proj_size** -- If "> 0", will use LSTM with projections of
  corresponding size. Default: 0
Inputs: input, (h_0, c_0)
* **input**: tensor of shape (L, H_{in}) for unbatched input,
(L, N, H_{in}) when "batch_first=False" or (N, L, H_{in}) when
"batch_first=True" containing the features of the input
sequence. The input can also be a packed variable length
sequence. See "torch.nn.utils.rnn.pack_padded_sequence()" or
"torch.nn.utils.rnn.pack_sequence()" for details.
* **h_0**: tensor of shape (D * \text{num\_layers}, H_{out}) for
unbatched input or (D * \text{num\_layers}, N, H_{out})
containing the initial hidden state for each element in the
input sequence. Defaults to zeros if (h_0, c_0) is not
provided.
* **c_0**: tensor of shape (D * \text{num\_layers}, H_{cell})
| https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
for unbatched input or (D * \text{num\_layers}, N, H_{cell})
containing the initial cell state for each element in the
input sequence. Defaults to zeros if (h_0, c_0) is not
provided.
where:
\begin{aligned} N ={} & \text{batch size} \\ L ={} &
\text{sequence length} \\ D ={} & 2 \text{ if
bidirectional=True otherwise } 1 \\ H_{in} ={} &
\text{input\_size} \\ H_{cell} ={} & \text{hidden\_size}
\\ H_{out} ={} & \text{proj\_size if }
\text{proj\_size}>0 \text{ otherwise hidden\_size} \\
\end{aligned}
Outputs: output, (h_n, c_n)
* **output**: tensor of shape (L, D * H_{out}) for unbatched
input, (L, N, D * H_{out}) when "batch_first=False" or (N, L,
D * H_{out}) when "batch_first=True" containing the output
features (h_t) from the last layer of the LSTM, for each
t. If a "torch.nn.utils.rnn.PackedSequence" has been given | https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
as the input, the output will also be a packed sequence. When
"bidirectional=True", output will contain a concatenation of
the forward and reverse hidden states at each time step in the
sequence.
* **h_n**: tensor of shape (D * \text{num\_layers}, H_{out}) for
unbatched input or (D * \text{num\_layers}, N, H_{out})
containing the final hidden state for each element in the
sequence. When "bidirectional=True", *h_n* will contain a
concatenation of the final forward and reverse hidden states,
respectively.
* **c_n**: tensor of shape (D * \text{num\_layers}, H_{cell})
for unbatched input or (D * \text{num\_layers}, N, H_{cell})
containing the final cell state for each element in the
sequence. When "bidirectional=True", *c_n* will contain a
concatenation of the final forward and reverse cell states,
respectively.
Variables:
* **weight_ih_l[k]** -- the learnable input-hidden weights of
  the \text{k}^{th} layer *(W_ii|W_if|W_ig|W_io)*, of shape
  *(4*hidden_size, input_size)* for *k = 0*. Otherwise, the
  shape is *(4*hidden_size, num_directions * hidden_size)*. If
  "proj_size > 0" was specified, the shape will be
  *(4*hidden_size, num_directions * proj_size)* for *k > 0*
* **weight_hh_l[k]** -- the learnable hidden-hidden weights of
the \text{k}^{th} layer *(W_hi|W_hf|W_hg|W_ho)*, of shape
*(4*hidden_size, hidden_size)*. If "proj_size > 0" was
specified, the shape will be *(4*hidden_size, proj_size)*.
* **bias_ih_l[k]** -- the learnable input-hidden bias of the
\text{k}^{th} layer *(b_ii|b_if|b_ig|b_io)*, of shape
*(4*hidden_size)*
* **bias_hh_l[k]** -- the learnable hidden-hidden bias of the
\text{k}^{th} layer *(b_hi|b_hf|b_hg|b_ho)*, of shape
*(4*hidden_size)*
| https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
* **weight_hr_l[k]** -- the learnable projection weights of the
\text{k}^{th} layer of shape *(proj_size, hidden_size)*. Only
present when "proj_size > 0" was specified.
* **weight_ih_l[k]_reverse** -- Analogous to *weight_ih_l[k]*
for the reverse direction. Only present when
"bidirectional=True".
* **weight_hh_l[k]_reverse** -- Analogous to *weight_hh_l[k]*
for the reverse direction. Only present when
"bidirectional=True".
* **bias_ih_l[k]_reverse** -- Analogous to *bias_ih_l[k]* for
the reverse direction. Only present when "bidirectional=True".
* **bias_hh_l[k]_reverse** -- Analogous to *bias_hh_l[k]* for
the reverse direction. Only present when "bidirectional=True".
* **weight_hr_l[k]_reverse** -- Analogous to *weight_hr_l[k]*
for the reverse direction. Only present when
"bidirectional=True" and "proj_size > 0" was specified.
Note: | https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
All the weights and biases are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{hidden\_size}}
Note:
For bidirectional LSTMs, forward and backward are directions 0
and 1 respectively. Example of splitting the output layers when
"batch_first=False": "output.view(seq_len, batch, num_directions,
hidden_size)".
Note:
For bidirectional LSTMs, *h_n* is not equivalent to the last
element of *output*; the former contains the final forward and
reverse hidden states, while the latter contains the final
forward hidden state and the initial reverse hidden state.
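A sketch (not part of the original notes) that ties the two notes
above together by splitting the bidirectional output into its
direction slices and comparing them against *h_n*:
    >>> rnn = nn.LSTM(10, 20, bidirectional=True)
    >>> input = torch.randn(5, 3, 10)
    >>> output, (hn, cn) = rnn(input)
    >>> directions = output.view(5, 3, 2, 20)  # (seq_len, batch, num_directions, hidden_size)
    >>> torch.allclose(directions[-1, :, 0, :], hn[0])  # forward final state
    True
    >>> torch.allclose(directions[0, :, 1, :], hn[1])   # reverse final state sits at t = 0
    True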
Note:
"batch_first" argument is ignored for unbatched inputs.
Warning:
There are known non-determinism issues for RNN functions on some
versions of cuDNN and CUDA. You can enforce deterministic
behavior by setting the following environment variables: On CUDA
10.1, set environment variable "CUDA_LAUNCH_BLOCKING=1". This may
| https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
affect performance. On CUDA 10.2 or later, set environment
variable (note the leading colon symbol)
"CUBLAS_WORKSPACE_CONFIG=:16:8" or
"CUBLAS_WORKSPACE_CONFIG=:4096:2". See the cuDNN 8 Release Notes
for more information.
Note:
If the following conditions are satisfied: 1) cudnn is enabled,
2) input data is on the GPU, 3) input data has dtype
"torch.float16", 4) a V100 GPU is used, and 5) input data is not in
"PackedSequence" format, then the persistent algorithm can be selected to
improve performance.
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
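A further sketch (not part of the original examples) showing
variable-length inputs via packed sequences; the sequence lengths are
arbitrary:
    >>> seqs = [torch.randn(4, 10), torch.randn(2, 10)]  # two sequences of different length
    >>> packed = nn.utils.rnn.pack_sequence(seqs)        # must be sorted by decreasing length
    >>> rnn = nn.LSTM(10, 20)
    >>> packed_out, (hn, cn) = rnn(packed)
    >>> out, lengths = nn.utils.rnn.pad_packed_sequence(packed_out)
    >>> out.shape, lengths
    (torch.Size([4, 2, 20]), tensor([4, 2]))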
| https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html | pytorch docs |
torch.zeros_like
torch.zeros_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns a tensor filled with the scalar value 0, with the same
size as "input". "torch.zeros_like(input)" is equivalent to
"torch.zeros(input.size(), dtype=input.dtype, layout=input.layout,
device=input.device)".
Warning:
As of 0.4, this function does not support an "out" keyword. As an
alternative, the old "torch.zeros_like(input, out=output)" is
equivalent to "torch.zeros(input.size(), out=output)".
Parameters:
input (Tensor) -- the size of "input" will determine size
of the output tensor.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: if "None", defaults to the dtype
of "input".
* **layout** ("torch.layout", optional) -- the desired layout of
| https://pytorch.org/docs/stable/generated/torch.zeros_like.html | pytorch docs |
returned tensor. Default: if "None", defaults to the layout of
"input".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of
"input".
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **memory_format** ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
Example:
>>> input = torch.empty(2, 3)
>>> torch.zeros_like(input)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
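A small follow-up sketch (not part of the original example) showing
that the dtype is inherited from "input" unless overridden:
    >>> base = torch.ones(2, 3, dtype=torch.float16)
    >>> torch.zeros_like(base).dtype
    torch.float16
    >>> torch.zeros_like(base, dtype=torch.float64).dtype
    torch.float64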
| https://pytorch.org/docs/stable/generated/torch.zeros_like.html | pytorch docs |