text (string, 0–1.73k chars) | source (string, 35–119 chars) | category (2 classes)
---|---|---
hop_length (int, optional) -- the distance between
neighboring sliding window frames. Default: "None" (treated as
equal to "floor(n_fft / 4)")
win_length (int, optional) -- the size of window
frame and STFT filter. Default: "None" (treated as equal to
"n_fft")
window (Tensor, optional) -- the optional window
function. Default: "None" (treated as a window of all ones)
center (bool, optional) -- whether to pad "input" on
both sides so that the t-th frame is centered at time t \times
\text{hop_length}. Default: "True"
pad_mode (str, optional) -- controls the padding
method used when "center" is "True". Default: ""reflect""
normalized (bool, optional) -- controls whether to
return the normalized STFT results. Default: "False"
onesided (bool, optional) -- controls whether to
| https://pytorch.org/docs/stable/generated/torch.stft.html | pytorch docs |
return half of results to avoid redundancy for real inputs.
Default: "True" for real "input" and "window", "False"
otherwise.
* **return_complex** (*bool**, **optional*) --
whether to return a complex tensor, or a real tensor with an
extra last dimension for the real and imaginary components.
Changed in version 2.0: "return_complex" is now a required
argument for real inputs, as the default is being transitioned
to "True".
Deprecated since version 2.0: "return_complex=False" is
deprecated, instead use "return_complex=True". Note that
calling "torch.view_as_real()" on the output will recover the
deprecated output format.
Returns:
A tensor containing the STFT result with shape described above
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.stft.html | pytorch docs |
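A minimal usage sketch of "torch.stft()" (added here, not part of the original page), tying together the parameters above; the "n_fft" argument comes from the full signature, which this excerpt truncates:
    import torch

    x = torch.randn(1024)                 # a mono signal
    window = torch.hann_window(256)       # win_length defaults to n_fft
    # return_complex=True is required for real inputs (see the note above)
    spec = torch.stft(x, n_fft=256, hop_length=64, window=window,
                      return_complex=True)
    print(spec.shape)                     # (n_fft // 2 + 1, n_frames) since onesided defaults to True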
upsample_nearest
class torch.ao.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None)
Upsamples the input, using nearest neighbours' pixel values.
Warning:
This function is deprecated in favor of
"torch.nn.quantized.functional.interpolate()". This is equivalent
with "nn.quantized.functional.interpolate(..., mode='nearest')".
Note:
The input quantization parameters propagate to the output.
Note:
Only 2D inputs are supported
Parameters:
* input (Tensor) -- quantized input
* **size** (*int** or **Tuple**[**int**, **int**] or
**Tuple**[**int**, **int**, **int**]*) -- output spatial size.
* **scale_factor** (*int*) -- multiplier for spatial size. Has
to be an integer.
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_nearest.html | pytorch docs |
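A minimal sketch (not from the original page) of the replacement named in the warning above, assuming a 4D quantized input:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.rand(1, 3, 4, 4)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
    out = qF.interpolate(qx, scale_factor=2, mode='nearest')
    print(out.shape)   # torch.Size([1, 3, 8, 8]); scale/zero_point propagate from qx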
torch.addmm
torch.addmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor
Performs a matrix multiplication of the matrices "mat1" and "mat2".
The matrix "input" is added to the final result.
If "mat1" is a (n \times m) tensor, "mat2" is a (m \times p)
tensor, then "input" must be broadcastable with a (n \times p)
tensor and "out" will be a (n \times p) tensor.
"alpha" and "beta" are scaling factors on matrix-vector product
between "mat1" and "mat2" and the added matrix "input"
respectively.
\text{out} = \beta\ \text{input} + \alpha\ (\text{mat1}_i
\mathbin{@} \text{mat2}_i)
If "beta" is 0, then "input" will be ignored, and nan and inf
in it will not be propagated.
For inputs of type FloatTensor or DoubleTensor, arguments
"beta" and "alpha" must be real numbers, otherwise they should be
integers.
This operation has support for arguments with sparse layouts. If | https://pytorch.org/docs/stable/generated/torch.addmm.html | pytorch docs |
"input" is sparse the result will have the same layout and if "out"
is provided it must have the same layout as "input".
Warning:
Sparse support is a beta feature and some layout(s)/dtype/device
combinations may not be supported, or may not have autograd
support. If you notice missing functionality please open a
feature request.
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Parameters:
* input (Tensor) -- matrix to be added
* **mat1** (*Tensor*) -- the first matrix to be matrix
multiplied
* **mat2** (*Tensor*) -- the second matrix to be matrix
multiplied
Keyword Arguments:
* beta (Number, optional) -- multiplier for "input"
(\beta)
* **alpha** (*Number**, **optional*) -- multiplier for mat1 @
mat2 (\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
| https://pytorch.org/docs/stable/generated/torch.addmm.html | pytorch docs |
Example:
>>> M = torch.randn(2, 3)
>>> mat1 = torch.randn(2, 3)
>>> mat2 = torch.randn(3, 3)
>>> torch.addmm(M, mat1, mat2)
tensor([[-4.8716, 1.4671, -1.3746],
[ 0.7573, -3.9555, -2.8681]])
| https://pytorch.org/docs/stable/generated/torch.addmm.html | pytorch docs |
torch.Tensor.subtract_
Tensor.subtract_(other, *, alpha=1) -> Tensor
In-place version of "subtract()". | https://pytorch.org/docs/stable/generated/torch.Tensor.subtract_.html | pytorch docs |
torch.Tensor.arcsin
Tensor.arcsin() -> Tensor
See "torch.arcsin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arcsin.html | pytorch docs |
torch.quantized_max_pool2d
torch.quantized_max_pool2d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor
Applies a 2D max pooling over an input quantized tensor composed of
several input planes.
Parameters:
* input (Tensor) -- quantized tensor
* **kernel_size** ("list of int") -- the size of the sliding
window
* **stride** ("list of int", optional) -- the stride of the
sliding window
* **padding** ("list of int", optional) -- padding to be added
on both sides, must be >= 0 and <= kernel_size / 2
* **dilation** ("list of int", optional) -- The stride between
elements within a sliding window, must be > 0. Default 1
* **ceil_mode** (*bool**, **optional*) -- If True, will use ceil
instead of floor to compute the output shape. Defaults to
False.
Returns:
A quantized tensor with max_pool2d applied.
Return type:
Tensor
| https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html | pytorch docs |
Example:
>>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)
>>> torch.quantized_max_pool2d(qx, [2,2])
tensor([[[[1.5000]],
[[1.5000]]],
[[[0.0000]],
[[0.0000]]]], size=(2, 2, 1, 1), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)
| https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html | pytorch docs |
ReLU
class torch.nn.ReLU(inplace=False)
Applies the rectified linear unit function element-wise:
\text{ReLU}(x) = (x)^+ = \max(0, x)
Parameters:
inplace (bool) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.ReLU()
>>> input = torch.randn(2)
>>> output = m(input)
An implementation of CReLU - https://arxiv.org/abs/1603.05201
>>> m = nn.ReLU()
>>> input = torch.randn(2).unsqueeze(0)
>>> output = torch.cat((m(input), m(-input)))
| https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html | pytorch docs |
hardtanh
class torch.ao.nn.quantized.functional.hardtanh(input, min_val=- 1.0, max_val=1.0, inplace=False)
This is the quantized version of "hardtanh()".
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardtanh.html | pytorch docs |
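A small usage sketch (added here, not in the source page), quantizing a float tensor and clamping it with the quantized hardtanh:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(4)
    qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
    out = qF.hardtanh(qx, min_val=-1.0, max_val=1.0)   # values clamped to [-1, 1]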
torch.Tensor.put_
Tensor.put_(index, source, accumulate=False) -> Tensor
Copies the elements from "source" into the positions specified by
"index". For the purpose of indexing, the "self" tensor is treated
as if it were a 1-D tensor.
"index" and "source" need to have the same number of elements, but
not necessarily the same shape.
If "accumulate" is "True", the elements in "source" are added to
"self". If accumulate is "False", the behavior is undefined if
"index" contain duplicate elements.
Parameters:
* index (LongTensor) -- the indices into self
* **source** (*Tensor*) -- the tensor containing values to copy
from
* **accumulate** (*bool*) -- whether to accumulate into self
Example:
>>> src = torch.tensor([[4, 3, 5],
... [6, 7, 8]])
>>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))
tensor([[ 4, 9, 5],
[ 10, 7, 8]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.put_.html | pytorch docs |
torch._foreach_trunc
torch._foreach_trunc(self: List[Tensor]) -> List[Tensor]
Apply "torch.trunc()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_trunc.html | pytorch docs |
torch.Tensor.acosh
Tensor.acosh() -> Tensor
See "torch.acosh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.acosh.html | pytorch docs |
torch.Tensor.backward
Tensor.backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)
Computes the gradient of current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is
non-scalar (i.e. its data has more than one element) and requires
gradient, the function additionally requires specifying "gradient".
It should be a tensor of matching type and location, that contains
the gradient of the differentiated function w.r.t. "self".
This function accumulates gradients in the leaves - you might need
to zero ".grad" attributes or set them to "None" before calling it.
See Default gradient layouts for details on the memory layout of
accumulated gradients.
Note:
If you run any forward ops, create "gradient", and/or call
"backward" in a user-specified CUDA stream context, see Stream
semantics of backward passes.
| https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html | pytorch docs |
Note:
When "inputs" are provided and a given input is not a leaf, the
current implementation will call its grad_fn (though it is not
strictly needed to get the gradients). It is an implementation
detail on which the user should not rely. See https://github.com
/pytorch/pytorch/pull/60521#issuecomment-867061780 for more
details.
Parameters:
* gradient (Tensor or None) -- Gradient w.r.t. the
tensor. If it is a tensor, it will be automatically converted
to a Tensor that does not require grad unless "create_graph"
is True. None values can be specified for scalar Tensors or
ones that don't require grad. If a None value would be
acceptable then this argument is optional.
* **retain_graph** (*bool**, **optional*) -- If "False", the
graph used to compute the grads will be freed. Note that in
nearly all cases setting this option to True is not needed and
| https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html | pytorch docs |
often can be worked around in a much more efficient way.
Defaults to the value of "create_graph".
* **create_graph** (*bool**, **optional*) -- If "True", graph of
the derivative will be constructed, allowing to compute higher
order derivative products. Defaults to "False".
* **inputs** (*sequence of Tensor*) -- Inputs w.r.t. which the
gradient will be accumulated into ".grad". All other Tensors
will be ignored. If not provided, the gradient is accumulated
into all the leaf Tensors that were used to compute the
current tensor.
| https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html | pytorch docs |
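A short sketch (added, not from the page) showing the scalar case, the explicit "gradient" argument for a non-scalar output, and the accumulation behaviour described above:
    import torch

    x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
    (x ** 2).sum().backward()          # scalar output: no gradient argument needed
    print(x.grad)                      # tensor([2., 4., 6.])

    x.grad = None                      # gradients accumulate, so clear before reuse
    z = x ** 2                         # non-scalar output
    z.backward(gradient=torch.ones_like(z))
    print(x.grad)                      # tensor([2., 4., 6.]) again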
torch.pinverse
torch.pinverse(input, rcond=1e-15) -> Tensor
Alias for "torch.linalg.pinv()" | https://pytorch.org/docs/stable/generated/torch.pinverse.html | pytorch docs |
torch.Tensor.reshape_as
Tensor.reshape_as(other) -> Tensor
Returns this tensor as the same shape as "other".
"self.reshape_as(other)" is equivalent to
"self.reshape(other.sizes())". This method returns a view if
"other.sizes()" is compatible with the current shape. See
"torch.Tensor.view()" on when it is possible to return a view.
Please see "reshape()" for more information about "reshape".
Parameters:
other ("torch.Tensor") -- The result tensor has the same
shape as "other". | https://pytorch.org/docs/stable/generated/torch.Tensor.reshape_as.html | pytorch docs |
torch.nn.modules.module.register_module_forward_pre_hook
torch.nn.modules.module.register_module_forward_pre_hook(hook)
Registers a forward pre-hook common to all modules.
Warning:
This adds global state to the *nn.module* module and it is only
intended for debugging/profiling purposes.
The hook will be called every time before "forward()" is invoked.
It should have the following signature:
hook(module, input) -> None or modified input
The input contains only the positional arguments given to the
module. Keyword arguments won't be passed to the hooks and only to
the "forward". The hook can modify the input. User can either
return a tuple or a single modified value in the hook. We will wrap
the value into a tuple if a single value is returned (unless that
value is already a tuple).
This hook has precedence over the specific module hooks registered
with "register_forward_pre_hook".
| https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html | pytorch docs |
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle" | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html | pytorch docs |
torch.nn.functional.prelu
torch.nn.functional.prelu(input, weight) -> Tensor
Applies element-wise the function \text{PReLU}(x) = \max(0,x) +
\text{weight} * \min(0,x) where weight is a learnable parameter.
Note:
*weight* is expected to be a scalar or 1-D tensor. If *weight* is
1-D, its size must match the number of input channels, determined
by *input.size(1)* when *input.dim() >= 2*, otherwise 1. In the
1-D case, note that when *input* has dim > 2, *weight* can be
expanded to the shape of *input* in a way that is not possible
using normal broadcasting semantics.
See "PReLU" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.prelu.html | pytorch docs |
default_per_channel_weight_fake_quant
torch.quantization.fake_quantize.default_per_channel_weight_fake_quant
alias of functools.partial(<class
'torch.ao.quantization.fake_quantize.FakeQuantize'>, observer=<class
'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,
quant_min=-128, quant_max=127,
dtype=torch.qint8, qscheme=torch.per_channel_symmetric,
reduce_range=False, ch_axis=0){}
torch.fft.irfft
torch.fft.irfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor
Computes the inverse of "rfft()".
"input" is interpreted as a one-sided Hermitian signal in the
Fourier domain, as produced by "rfft()". By the Hermitian property,
the output will be real-valued.
Note:
Some input frequencies must be real-valued to satisfy the
Hermitian property. In these cases the imaginary component will
be ignored. For example, any imaginary component in the zero-
frequency term cannot be represented in a real output and so will
always be ignored.
Note:
The correct interpretation of the Hermitian input depends on the
length of the original data, as given by "n". This is because
each input shape could correspond to either an odd or even length
signal. By default, the signal is assumed to be even length and
odd signals will not round-trip properly. So, it is recommended
to always pass the signal length "n".
| https://pytorch.org/docs/stable/generated/torch.fft.irfft.html | pytorch docs |
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimension. With default arguments,
size of the transformed dimension should be (2^n + 1) as argument
*n* defaults to even output size = 2 * (transformed_dim_size - 1)
Parameters:
* input (Tensor) -- the input tensor representing a half-
Hermitian signal
* **n** (*int**, **optional*) -- Output signal length. This
determines the length of the output signal. If given, the
input will either be zero-padded or trimmed to this length
before computing the real IFFT. Defaults to even output:
"n=2*(input.size(dim) - 1)".
* **dim** (*int**, **optional*) -- The dimension along which to
take the one dimensional real IFFT.
* **norm** (*str**, **optional*) --
| https://pytorch.org/docs/stable/generated/torch.fft.irfft.html | pytorch docs |
Normalization mode. For the backward transform ("irfft()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the real IFFT
orthonormal)
Calling the forward transform ("rfft()") with the same
normalization mode will apply an overall normalization of
"1/n" between the two transforms. This is required to make
"irfft()" the exact inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.linspace(0, 1, 5)
>>> t
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
>>> T = torch.fft.rfft(t)
>>> T
tensor([ 2.5000+0.0000j, -0.6250+0.8602j, -0.6250+0.2031j])
Without specifying the output length to "irfft()", the output will | https://pytorch.org/docs/stable/generated/torch.fft.irfft.html | pytorch docs |
not round-trip properly because the input is odd-length:
>>> torch.fft.irfft(T)
tensor([0.1562, 0.3511, 0.7812, 1.2114])
So, it is recommended to always pass the signal length "n":
>>> roundtrip = torch.fft.irfft(T, t.numel())
>>> torch.testing.assert_close(roundtrip, t, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.irfft.html | pytorch docs |
torch.hamming_window
torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Hamming window function.
w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1}
\right),
where N is the full window size.
The input "window_length" is a positive integer controlling the
returned window size. "periodic" flag determines whether the
returned window trims off the last duplicate value from the
symmetric window and is ready to be used as a periodic window with
functions like "torch.stft()". Therefore, if "periodic" is true,
the N in above formula is in fact \text{window_length} + 1. Also,
we always have "torch.hamming_window(L, periodic=True)" equal to
"torch.hamming_window(L + 1, periodic=False)[:-1])".
Note:
If "window_length" =1, the returned window contains a single
value 1.
| https://pytorch.org/docs/stable/generated/torch.hamming_window.html | pytorch docs |
Note:
This is a generalized version of "torch.hann_window()".
Parameters:
* window_length (int) -- the size of returned window
* **periodic** (*bool**, **optional*) -- If True, returns a
window to be used as periodic function. If False, return a
symmetric window.
* **alpha** (*float**, **optional*) -- The coefficient \alpha in
the equation above
* **beta** (*float**, **optional*) -- The coefficient \beta in
the equation above
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()"). Only floating point
types are supported.
* **layout** ("torch.layout", optional) -- the desired layout of
returned window tensor. Only "torch.strided" (dense layout) is
supported.
* **device** ("torch.device", optional) -- the desired device of
| https://pytorch.org/docs/stable/generated/torch.hamming_window.html | pytorch docs |
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Returns:
A 1-D tensor of size (\text{window_length},) containing the
window.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.hamming_window.html | pytorch docs |
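A brief sketch (added here) checking the periodic/symmetric relation stated above:
    import torch

    w_periodic = torch.hamming_window(8)                        # periodic=True by default
    w_symmetric = torch.hamming_window(9, periodic=False)[:-1]  # drop the duplicate endpoint
    torch.testing.assert_close(w_periodic, w_symmetric)         # identical, as stated above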
torch.Tensor.matrix_power
Tensor.matrix_power(n) -> Tensor
Note:
"matrix_power()" is deprecated, use "torch.linalg.matrix_power()"
instead.
Alias for "torch.linalg.matrix_power()" | https://pytorch.org/docs/stable/generated/torch.Tensor.matrix_power.html | pytorch docs |
BackendConfig
class torch.ao.quantization.backend_config.BackendConfig(name='')
Config that defines the set of patterns that can be quantized on a
given backend, and how reference quantized models can be produced
from these patterns.
A pattern in this context refers to a module, a functional, an
operator, or a directed acyclic graph of the above. Each pattern
supported on the target backend can be individually configured
through "BackendPatternConfig" in terms of:
(1) The supported input/output activation, weight, and bias data
    types
(2) How observers and quant/dequant ops are inserted in order to
    construct the reference pattern, and
(3) (Optionally) Fusion, QAT, and reference module mappings.
The format of the patterns is described in:
https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/backend_config/README.md
Example usage:
import torch
from torch.ao.quantization.backend_config import (
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html | pytorch docs |
BackendConfig,
BackendPatternConfig,
DTypeConfig,
ObservationType,
)
weighted_int8_dtype_config = DTypeConfig(
input_dtype=torch.quint8,
output_dtype=torch.quint8,
weight_dtype=torch.qint8,
bias_dtype=torch.float)
def fuse_conv2d_relu(is_qat, conv, relu):
return torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)
# For quantizing Linear
linear_config = BackendPatternConfig(torch.nn.Linear) \
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \
    .add_dtype_config(weighted_int8_dtype_config) \
    .set_root_module(torch.nn.Linear) \
    .set_qat_module(torch.ao.nn.qat.Linear) \
    .set_reference_quantized_module(torch.ao.nn.quantized.reference.Linear)
# For fusing Conv2d + ReLU into ConvReLU2d
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html | pytorch docs |
conv_relu_config = BackendPatternConfig((torch.nn.Conv2d, torch.nn.ReLU)) \
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \
    .add_dtype_config(weighted_int8_dtype_config) \
    .set_fused_module(torch.ao.nn.intrinsic.ConvReLU2d) \
    .set_fuser_method(fuse_conv2d_relu)
# For quantizing ConvReLU2d
fused_conv_relu_config = BackendPatternConfig(torch.ao.nn.intrinsic.ConvReLU2d) \
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \
    .add_dtype_config(weighted_int8_dtype_config) \
    .set_root_module(torch.nn.Conv2d) \
    .set_qat_module(torch.ao.nn.intrinsic.qat.ConvReLU2d) \
    .set_reference_quantized_module(torch.ao.nn.quantized.reference.Conv2d)
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html | pytorch docs |
backend_config = BackendConfig("my_backend") .set_backend_pattern_config(linear_config) .set_backend_pattern_config(conv_relu_config) .set_backend_pattern_config(fused_conv_relu_config)
property configs: List[BackendPatternConfig]
Return a copy of the list of configs set in this
*BackendConfig*.
classmethod from_dict(backend_config_dict)
Create a "BackendConfig" from a dictionary with the following
items:
"name": the name of the target backend
"configs": a list of dictionaries that each represents a
*BackendPatternConfig*
Return type:
*BackendConfig*
set_backend_pattern_config(config)
Set the config for a pattern that can be run on the target
backend. This overrides any existing config for the given
pattern.
Return type:
*BackendConfig*
set_backend_pattern_configs(configs)
Set the configs for patterns that can be run on the target
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html | pytorch docs |
backend. This overrides any existing config for a given pattern
if it was previously registered already.
Return type:
*BackendConfig*
set_name(name)
Set the name of the target backend.
Return type:
*BackendConfig*
to_dict()
Convert this "BackendConfig" to a dictionary with the items
described in "from_dict()".
Return type:
*Dict*[str, *Any*]
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html | pytorch docs |
torch.Tensor.as_subclass
Tensor.as_subclass(cls) -> Tensor
Makes a "cls" instance with the same data pointer as "self".
Changes in the output mirror changes in "self", and the output
stays attached to the autograd graph. "cls" must be a subclass of
"Tensor". | https://pytorch.org/docs/stable/generated/torch.Tensor.as_subclass.html | pytorch docs |
torch.Tensor.cumprod_
Tensor.cumprod_(dim, dtype=None) -> Tensor
In-place version of "cumprod()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod_.html | pytorch docs |
torch.Tensor.flipud
Tensor.flipud() -> Tensor
See "torch.flipud()" | https://pytorch.org/docs/stable/generated/torch.Tensor.flipud.html | pytorch docs |
torch.zeros
torch.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with the scalar value 0, with the shape
defined by the variable argument "size".
Parameters:
size (int...) -- a sequence of integers defining the
shape of the output tensor. Can be a variable number of
arguments or a collection like a list or tuple.
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
| https://pytorch.org/docs/stable/generated/torch.zeros.html | pytorch docs |
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.zeros(2, 3)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> torch.zeros(5)
tensor([ 0., 0., 0., 0., 0.])
| https://pytorch.org/docs/stable/generated/torch.zeros.html | pytorch docs |
torch.Tensor.swapaxes
Tensor.swapaxes(axis0, axis1) -> Tensor
See "torch.swapaxes()" | https://pytorch.org/docs/stable/generated/torch.Tensor.swapaxes.html | pytorch docs |
torch.jit.save
torch.jit.save(m, f, _extra_files=None)
Save an offline version of this module for use in a separate
process. The saved module serializes all of the methods,
submodules, parameters, and attributes of this module. It can be
loaded into the C++ API using "torch::jit::load(filename)" or into
the Python API with "torch.jit.load".
To be able to save a module, it must not make any calls to native
Python functions. This means that all submodules must be
subclasses of "ScriptModule" as well.
Danger:
All modules, no matter their device, are always loaded onto the
CPU during loading. This is different from "torch.load()"'s
semantics and may change in the future.
Parameters:
* m -- A "ScriptModule" to save.
* **f** -- A file-like object (has to implement write and flush)
or a string containing a file name.
* **_extra_files** -- Map from filename to contents which will
be stored as part of *f*.
| https://pytorch.org/docs/stable/generated/torch.jit.save.html | pytorch docs |
Note:
torch.jit.save attempts to preserve the behavior of some
operators across versions. For example, dividing two integer
tensors in PyTorch 1.5 performed floor division, and if the
module containing that code is saved in PyTorch 1.5 and loaded in
PyTorch 1.6 its division behavior will be preserved. The same
module saved in PyTorch 1.6 will fail to load in PyTorch 1.5,
however, since the behavior of division changed in 1.6, and 1.5
does not know how to replicate the 1.6 behavior.
Example:
import torch
import io
class MyModule(torch.nn.Module):
    def forward(self, x):
        return x + 10
m = torch.jit.script(MyModule())
# Save to file
torch.jit.save(m, 'scriptmodule.pt')
# This line is equivalent to the previous
m.save("scriptmodule.pt")
# Save to io.BytesIO buffer
buffer = io.BytesIO()
torch.jit.save(m, buffer)
| https://pytorch.org/docs/stable/generated/torch.jit.save.html | pytorch docs |
# Save with extra files
extra_files = {'foo.txt': b'bar'}
torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)
| https://pytorch.org/docs/stable/generated/torch.jit.save.html | pytorch docs |
Sequential
class torch.nn.Sequential(*args: Module)
class torch.nn.Sequential(arg: OrderedDict[str, Module])
A sequential container. Modules will be added to it in the order
they are passed in the constructor. Alternatively, an "OrderedDict"
of modules can be passed in. The "forward()" method of "Sequential"
accepts any input and forwards it to the first module it contains.
It then "chains" outputs to inputs sequentially for each subsequent
module, finally returning the output of the last module.
The value a "Sequential" provides over manually calling a sequence
of modules is that it allows treating the whole container as a
single module, such that performing a transformation on the
"Sequential" applies to each of the modules it stores (which are
each a registered submodule of the "Sequential").
What's the difference between a "Sequential" and a
"torch.nn.ModuleList"? A "ModuleList" is exactly what it sounds | https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html | pytorch docs |
like--a list for storing "Module" s! On the other hand, the layers
in a "Sequential" are connected in a cascading way.
Example:
# Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Conv2d(20,64,5),
nn.ReLU()
)
# Using Sequential with OrderedDict. This is functionally the
# same as the above code
model = nn.Sequential(OrderedDict([
('conv1', nn.Conv2d(1,20,5)),
('relu1', nn.ReLU()),
('conv2', nn.Conv2d(20,64,5)),
('relu2', nn.ReLU())
| https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html | pytorch docs |
]))
append(module)
Appends a given module to the end.
Parameters:
**module** (*nn.Module*) -- module to append
Return type:
*Sequential*
| https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html | pytorch docs |
torch.ones
torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with the scalar value 1, with the shape
defined by the variable argument "size".
Parameters:
size (int...) -- a sequence of integers defining the
shape of the output tensor. Can be a variable number of
arguments or a collection like a list or tuple.
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
| https://pytorch.org/docs/stable/generated/torch.ones.html | pytorch docs |
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.ones(2, 3)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> torch.ones(5)
tensor([ 1., 1., 1., 1., 1.])
| https://pytorch.org/docs/stable/generated/torch.ones.html | pytorch docs |
torch.arcsin
torch.arcsin(input, *, out=None) -> Tensor
Alias for "torch.asin()". | https://pytorch.org/docs/stable/generated/torch.arcsin.html | pytorch docs |
torch.mean
torch.mean(input, *, dtype=None) -> Tensor
Returns the mean value of all elements in the "input" tensor.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
dtype ("torch.dtype", optional) -- the desired data type of
returned tensor. If specified, the input tensor is casted to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481, 1.3288]])
>>> torch.mean(a)
tensor(0.3367)
torch.mean(input, dim, keepdim=False, *, dtype=None, out=None) -> Tensor
Returns the mean value of each row of the "input" tensor in the
given dimension "dim". If "dim" is a list of dimensions, reduce
over all of them.
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1. | https://pytorch.org/docs/stable/generated/torch.mean.html | pytorch docs |
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints*) -- the dimension or
dimensions to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is casted
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None.
* **out** (*Tensor**, **optional*) -- the output tensor.
See also:
"torch.nanmean()" computes the mean value of *non-NaN* elements.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3841, 0.6320, 0.4254, -0.7384],
[-0.9644, 1.0131, -0.6549, -1.4279],
| https://pytorch.org/docs/stable/generated/torch.mean.html | pytorch docs |
[-0.2951, -1.3350, -0.7694, 0.5600],
[ 1.0842, -0.9580, 0.3623, 0.2343]])
>>> torch.mean(a, 1)
tensor([-0.0163, -0.5085, -0.4599, 0.1807])
>>> torch.mean(a, 1, True)
tensor([[-0.0163],
[-0.5085],
[-0.4599],
[ 0.1807]]) | https://pytorch.org/docs/stable/generated/torch.mean.html | pytorch docs |
torch.fft.fft
torch.fft.fft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor
Computes the one dimensional discrete Fourier transform of "input".
Note:
The Fourier domain representation of any real signal satisfies
the Hermitian property: *X[i] = conj(X[-i])*. This function
always returns both the positive and negative frequency terms
even though, for real inputs, the negative frequencies are
redundant. "rfft()" returns the more compact one-sided
representation where only the positive frequencies are returned.
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimension.
Parameters:
* input (Tensor) -- the input tensor
* **n** (*int**, **optional*) -- Signal length. If given, the
input will either be zero-padded or trimmed to this length
before computing the FFT.
| https://pytorch.org/docs/stable/generated/torch.fft.fft.html | pytorch docs |
* **dim** (*int**, **optional*) -- The dimension along which to
take the one dimensional FFT.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("fft()"), these
correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the FFT
orthonormal)
Calling the backward transform ("ifft()") with the same
normalization mode will apply an overall normalization of
"1/n" between the two transforms. This is required to make
"ifft()" the exact inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
| https://pytorch.org/docs/stable/generated/torch.fft.fft.html | pytorch docs |
>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])
>>> torch.fft.fft(t)
tensor([12.+16.j, -8.+0.j, -4.-4.j,  0.-8.j])
| https://pytorch.org/docs/stable/generated/torch.fft.fft.html | pytorch docs |
torch.Tensor.var
Tensor.var(dim=None, *, correction=1, keepdim=False) -> Tensor
See "torch.var()" | https://pytorch.org/docs/stable/generated/torch.Tensor.var.html | pytorch docs |
torch.erfc
torch.erfc(input, *, out=None) -> Tensor
Alias for "torch.special.erfc()". | https://pytorch.org/docs/stable/generated/torch.erfc.html | pytorch docs |
torch.nn.functional.one_hot
torch.nn.functional.one_hot(tensor, num_classes=-1) -> LongTensor
Takes LongTensor with index values of shape "(*)" and returns a
tensor of shape "(*, num_classes)" that has zeros everywhere
except where the index of last dimension matches the corresponding
value of the input tensor, in which case it will be 1.
See also One-hot on Wikipedia .
Parameters:
* tensor (LongTensor) -- class values of any shape.
* **num_classes** (*int*) -- Total number of classes. If set to
-1, the number of classes will be inferred as one greater than
the largest class value in the input tensor.
Returns:
LongTensor that has one more dimension with 1 values at the
index of last dimension indicated by the input, and 0 everywhere
else.
-[ Examples ]-
>>> F.one_hot(torch.arange(0, 5) % 3)
tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0],
| https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html | pytorch docs |
[0, 1, 0]])
>>> F.one_hot(torch.arange(0, 5) % 3, num_classes=5)
tensor([[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0]])
>>> F.one_hot(torch.arange(0, 6).view(3,2) % 3)
tensor([[[1, 0, 0],
[0, 1, 0]],
[[0, 0, 1],
[1, 0, 0]],
[[0, 1, 0],
[0, 0, 1]]])
| https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html | pytorch docs |
torch.Tensor.tile
Tensor.tile(*reps) -> Tensor
See "torch.tile()" | https://pytorch.org/docs/stable/generated/torch.Tensor.tile.html | pytorch docs |
torch.Tensor.log2
Tensor.log2() -> Tensor
See "torch.log2()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log2.html | pytorch docs |
torch.Tensor.lcm_
Tensor.lcm_(other) -> Tensor
In-place version of "lcm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lcm_.html | pytorch docs |
torch.cholesky_solve
torch.cholesky_solve(input, input2, upper=False, *, out=None) -> Tensor
Solves a linear system of equations with a positive semidefinite
matrix to be inverted given its Cholesky factor matrix u.
If "upper" is "False", u is and lower triangular and c is
returned such that:
c = (u u^T)^{{-1}} b
If "upper" is "True" or not provided, u is upper triangular and c
is returned such that:
c = (u^T u)^{{-1}} b
torch.cholesky_solve(b, u) can take in 2D inputs b, u or inputs
that are batches of 2D matrices. If the inputs are batches, then
returns batched outputs c
Supports real-valued and complex-valued inputs. For the complex-
valued inputs the transpose operator above is the conjugate
transpose.
Parameters:
* input (Tensor) -- input matrix b of size (*, m, k),
where * is zero or more batch dimensions
* **input2** (*Tensor*) -- input matrix u of size (*, m, m),
| https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html | pytorch docs |
where * is zero or more batch dimensions composed of upper or
lower triangular Cholesky factor
* **upper** (*bool**, **optional*) -- whether to consider the
Cholesky factor as a lower or upper triangular matrix.
Default: "False".
Keyword Arguments:
out (Tensor, optional) -- the output tensor for c
Example:
>>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
>>> u = torch.linalg.cholesky(a)
>>> a
tensor([[ 0.7747, -1.9549, 1.3086],
[-1.9549, 6.7546, -5.4114],
[ 1.3086, -5.4114, 4.8733]])
>>> b = torch.randn(3, 2)
>>> b
tensor([[-0.6355, 0.9891],
[ 0.1974, 1.4706],
[-0.4115, -0.6225]])
>>> torch.cholesky_solve(b, u)
tensor([[ -8.1625, 19.6097],
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]])
>>> torch.mm(a.inverse(), b)
tensor([[ -8.1626, 19.6097],
| https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html | pytorch docs |
[ -5.8398, 14.2387],
[ -4.3771, 10.4173]]) | https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html | pytorch docs |
torch.tensor_split
torch.tensor_split(input, indices_or_sections, dim=0) -> List of Tensors
Splits a tensor into multiple sub-tensors, all of which are views
of "input", along dimension "dim" according to the indices or
number of sections specified by "indices_or_sections". This
function is based on NumPy's "numpy.array_split()".
Parameters:
* input (Tensor) -- the tensor to split
* **indices_or_sections** (*Tensor**, **int** or **list** or
**tuple of ints*) --
If "indices_or_sections" is an integer "n" or a zero
dimensional long tensor with value "n", "input" is split into
"n" sections along dimension "dim". If "input" is divisible by
"n" along dimension "dim", each section will be of equal size,
"input.size(dim) / n". If "input" is not divisible by "n", the
sizes of the first "int(input.size(dim) % n)" sections will
have size "int(input.size(dim) / n) + 1", and the rest will
| https://pytorch.org/docs/stable/generated/torch.tensor_split.html | pytorch docs |
have size "int(input.size(dim) / n)".
If "indices_or_sections" is a list or tuple of ints, or a one-
dimensional long tensor, then "input" is split along dimension
"dim" at each of the indices in the list, tuple or tensor. For
instance, "indices_or_sections=[2, 3]" and "dim=0" would
result in the tensors "input[:2]", "input[2:3]", and
"input[3:]".
If "indices_or_sections" is a tensor, it must be a zero-
dimensional or one-dimensional long tensor on the CPU.
* **dim** (*int**, **optional*) -- dimension along which to
split the tensor. Default: "0"
Example:
>>> x = torch.arange(8)
>>> torch.tensor_split(x, 3)
(tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))
>>> x = torch.arange(7)
>>> torch.tensor_split(x, 3)
(tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))
>>> torch.tensor_split(x, (1, 6))
(tensor([0]), tensor([1, 2, 3, 4, 5]), tensor([6]))
| https://pytorch.org/docs/stable/generated/torch.tensor_split.html | pytorch docs |
>>> x = torch.arange(14).reshape(2, 7)
>>> x
tensor([[ 0, 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12, 13]])
>>> torch.tensor_split(x, 3, dim=1)
(tensor([[0, 1, 2],
[7, 8, 9]]),
tensor([[ 3, 4],
[10, 11]]),
tensor([[ 5, 6],
[12, 13]]))
>>> torch.tensor_split(x, (1, 6), dim=1)
(tensor([[0],
[7]]),
tensor([[ 1, 2, 3, 4, 5],
[ 8, 9, 10, 11, 12]]),
tensor([[ 6],
[13]]))
| https://pytorch.org/docs/stable/generated/torch.tensor_split.html | pytorch docs |
torch.fft.rfftn
torch.fft.rfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor
Computes the N-dimensional discrete Fourier transform of real
"input".
The FFT of a real signal is Hermitian-symmetric, "X[i_1, ..., i_n]
= conj(X[-i_1, ..., -i_n])" so the full "fftn()" output contains
redundant information. "rfftn()" instead omits the negative
frequencies in the last dimension.
Note:
Supports torch.half on CUDA with GPU Architecture SM53 or
greater. However it only supports powers of 2 signal length in
every transformed dimension.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the real FFT. If a length "-1" is specified, no
padding is done in that dimension. Default: "s =
| https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html | pytorch docs |
[input.size(d) for d in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: all dimensions, or the last "len(s)"
dimensions if "s" is given.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("rfftn()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the real FFT
orthonormal)
Where "n = prod(s)" is the logical FFT size. Calling the
backward transform ("irfftn()") with the same normalization
mode will apply an overall normalization of "1/n" between the
two transforms. This is required to make "irfftn()" the exact
inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.rand(10, 10)
| https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html | pytorch docs |
>>> rfftn = torch.fft.rfftn(t)
>>> rfftn.size()
torch.Size([10, 6])
Compared against the full output from "fftn()", we have all
elements up to the Nyquist frequency.
>>> fftn = torch.fft.fftn(t)
>>> torch.testing.assert_close(fftn[..., :6], rfftn, check_stride=False)
The discrete Fourier transform is separable, so "rfftn()" here is
equivalent to a combination of "fft()" and "rfft()":
>>> two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
>>> torch.testing.assert_close(rfftn, two_ffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html | pytorch docs |
torch.randperm
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor
Returns a random permutation of integers from "0" to "n - 1".
Parameters:
n (int) -- the upper bound (exclusive)
Keyword Arguments:
* generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling
* **out** (*Tensor**, **optional*) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: "torch.int64".
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
| https://pytorch.org/docs/stable/generated/torch.randperm.html | pytorch docs |
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> torch.randperm(4)
tensor([2, 1, 0, 3])
| https://pytorch.org/docs/stable/generated/torch.randperm.html | pytorch docs |
torch.nn.functional.tanhshrink
torch.nn.functional.tanhshrink(input) -> Tensor
Applies element-wise, \text{Tanhshrink}(x) = x - \text{Tanh}(x)
See "Tanhshrink" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.tanhshrink.html | pytorch docs |
torch.func.replace_all_batch_norm_modules_
torch.func.replace_all_batch_norm_modules_(root)
In place updates "root" by setting the "running_mean" and
"running_var" to be None and setting track_running_stats to be
False for any nn.BatchNorm module in "root"
Return type:
Module | https://pytorch.org/docs/stable/generated/torch.func.replace_all_batch_norm_modules_.html | pytorch docs |
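A small sketch (not in the source entry) on a toy module:
    import torch.nn as nn
    from torch.func import replace_all_batch_norm_modules_

    net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
    replace_all_batch_norm_modules_(net)             # updates net in place
    bn = net[1]
    print(bn.running_mean, bn.track_running_stats)   # None False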
hardswish
class torch.ao.nn.quantized.functional.hardswish(input, scale, zero_point)
This is the quantized version of "hardswish()".
Parameters:
* input (Tensor) -- quantized input
* **scale** (*float*) -- quantization scale of the output tensor
* **zero_point** (*int*) -- quantization zero point of the
output tensor
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardswish.html | pytorch docs |
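A minimal sketch (added) with arbitrary example quantization parameters:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(4)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
    out = qF.hardswish(qx, scale=0.1, zero_point=128)   # output uses the given scale/zero_point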
Transformer
class torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation=<function relu>, custom_encoder=None, custom_decoder=None, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)
A transformer model. User is able to modify the attributes as
needed. The architecture is based on the paper "Attention Is All
You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia
Polosukhin. 2017. Attention is all you need. In Advances in Neural
Information Processing Systems, pages 6000-6010.
Parameters:
* d_model (int) -- the number of expected features in the
encoder/decoder inputs (default=512).
* **nhead** (*int*) -- the number of heads in the
multiheadattention models (default=8).
* **num_encoder_layers** (*int*) -- the number of sub-encoder-
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
layers in the encoder (default=6).
* **num_decoder_layers** (*int*) -- the number of sub-decoder-
layers in the decoder (default=6).
* **dim_feedforward** (*int*) -- the dimension of the
feedforward network model (default=2048).
* **dropout** (*float*) -- the dropout value (default=0.1).
* **activation** (*Union**[**str**,
**Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the
activation function of encoder/decoder intermediate layer, can
be a string ("relu" or "gelu") or a unary callable. Default:
relu
* **custom_encoder** (*Optional**[**Any**]*) -- custom encoder
(default=None).
* **custom_decoder** (*Optional**[**Any**]*) -- custom decoder
(default=None).
* **layer_norm_eps** (*float*) -- the eps value in layer
normalization components (default=1e-5).
* **batch_first** (*bool*) -- If "True", then the input and
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
output tensors are provided as (batch, seq, feature). Default:
"False" (seq, batch, feature).
* **norm_first** (*bool*) -- if "True", encoder and decoder
layers will perform LayerNorms before other attention and
feedforward operations, otherwise after. Default: "False"
(after).
Examples::
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
>>> src = torch.rand((10, 32, 512))
>>> tgt = torch.rand((20, 32, 512))
>>> out = transformer_model(src, tgt)
Note: A full example to apply nn.Transformer module for the word
language model is available in
https://github.com/pytorch/examples/tree/master/word_language_model
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)
Take in and process masked source/target sequences.
Parameters:
* **src** (*Tensor*) -- the sequence to the encoder
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
(required).
* **tgt** (*Tensor*) -- the sequence to the decoder
(required).
* **src_mask** (*Optional**[**Tensor**]*) -- the additive
mask for the src sequence (optional).
* **tgt_mask** (*Optional**[**Tensor**]*) -- the additive
mask for the tgt sequence (optional).
* **memory_mask** (*Optional**[**Tensor**]*) -- the additive
mask for the encoder output (optional).
* **src_key_padding_mask** (*Optional**[**Tensor**]*) -- the
ByteTensor mask for src keys per batch (optional).
* **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the
ByteTensor mask for tgt keys per batch (optional).
* **memory_key_padding_mask** (*Optional**[**Tensor**]*) --
the ByteTensor mask for memory keys per batch (optional).
Return type:
*Tensor*
Shape:
* src: (S, E) for unbatched input, (S, N, E) if
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
batch_first=False or (N, S, E) if batch_first=True.
* tgt: (T, E) for unbatched input, (T, N, E) if
*batch_first=False* or *(N, T, E)* if *batch_first=True*.
* src_mask: (S, S) or (N\cdot\text{num\_heads}, S, S).
* tgt_mask: (T, T) or (N\cdot\text{num\_heads}, T, T).
* memory_mask: (T, S).
* src_key_padding_mask: (S) for unbatched input otherwise (N,
S).
* tgt_key_padding_mask: (T) for unbatched input otherwise (N,
T).
* memory_key_padding_mask: (S) for unbatched input otherwise
(N, S).
Note: [src/tgt/memory]_mask ensures that position i is
allowed to attend the unmasked positions. If a ByteTensor is
provided, the non-zero positions are not allowed to attend
while the zero positions will be unchanged. If a BoolTensor
is provided, positions with "True" are not allowed to attend
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
while "False" values will be unchanged. If a FloatTensor is
provided, it will be added to the attention weight.
[src/tgt/memory]_key_padding_mask provides specified elements
in the key to be ignored by the attention. If a ByteTensor is
provided, the non-zero positions will be ignored while the
zero positions will be unchanged. If a BoolTensor is
provided, the positions with the value of "True" will be
ignored while the position with the value of "False" will be
unchanged.
* output: (T, E) for unbatched input, (T, N, E) if
*batch_first=False* or *(N, T, E)* if *batch_first=True*.
Note: Due to the multi-head attention architecture in the
transformer model, the output sequence length of a
transformer is same as the input sequence (i.e. target)
length of the decoder.
where S is the source sequence length, T is the target
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
sequence length, N is the batch size, E is the feature number
-[ Examples ]-
>>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
static generate_square_subsequent_mask(sz, device='cpu')
Generate a square mask for the sequence. The masked positions
are filled with float('-inf'). Unmasked positions are filled
with float(0.0).
Return type:
*Tensor*
| https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html | pytorch docs |
default_placeholder_observer
torch.quantization.observer.default_placeholder_observer
alias of "PlaceholderObserver" | https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_placeholder_observer.html | pytorch docs |
torch.sparse_csr_tensor
torch.sparse_csr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor
Constructs a sparse tensor in CSR (Compressed Sparse Row) with
specified values at the given "crow_indices" and "col_indices".
Sparse matrix multiplication operations in CSR format are typically
faster than that for sparse tensors in COO format. Make sure you
have a look at the note on the data type of the indices.
Note:
If the "device" argument is not specified the device of the given
"values" and indices tensor(s) must match. If, however, the
argument is specified the input Tensors will be converted to the
given device and in turn determine the device of the constructed
sparse tensor.
Parameters:
* crow_indices (array_like) -- (B+1)-dimensional array of
size "(*batchsize, nrows + 1)". The last element of each | https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html | pytorch docs |
batch is the number of non-zeros. This tensor encodes the
index in values and col_indices depending on where the given
row starts. Each successive number in the tensor subtracted by
the number before it denotes the number of elements in a given
row.
* **col_indices** (*array_like*) -- Column co-ordinates of each
element in values. (B+1)-dimensional tensor with the same
length as values.
* **values** (*array_list*) -- Initial values for the tensor.
Can be a list, tuple, NumPy "ndarray", scalar, and other types
that represents a (1+K)-dimensional tensor where "K" is the
number of dense dimensions.
* **size** (list, tuple, "torch.Size", optional) -- Size of the
sparse tensor: "(*batchsize, nrows, ncols, *densesize)". If
not provided, the size will be inferred as the minimum size
big enough to hold all non-zero elements.
| https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html | pytorch docs |
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if None, infers data type from
"values".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **check_invariants** (*bool**, **optional*) -- If sparse
tensor invariants are checked. Default: as returned by
"torch.sparse.check_sparse_tensor_invariants.is_enabled()",
initially False.
Example::
>>> crow_indices = [0, 2, 4]
>>> col_indices = [0, 1, 0, 1]
>>> values = [1, 2, 3, 4] | https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html | pytorch docs |
>>> torch.sparse_csr_tensor(torch.tensor(crow_indices, dtype=torch.int64),
... torch.tensor(col_indices, dtype=torch.int64),
... torch.tensor(values), dtype=torch.double)
tensor(crow_indices=tensor([0, 2, 4]),
col_indices=tensor([0, 1, 0, 1]),
values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,
dtype=torch.float64, layout=torch.sparse_csr)
| https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html | pytorch docs |
torch.column_stack
torch.column_stack(tensors, *, out=None) -> Tensor
Creates a new tensor by horizontally stacking the tensors in
"tensors".
Equivalent to "torch.hstack(tensors)", except each zero or one
dimensional tensor "t" in "tensors" is first reshaped into a
"(t.numel(), 1)" column before being stacked horizontally.
Parameters:
tensors (sequence of Tensors) -- sequence of tensors to
concatenate
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.column_stack((a, b))
tensor([[1, 4],
[2, 5],
[3, 6]])
>>> a = torch.arange(5)
>>> b = torch.arange(10).reshape(5, 2)
>>> torch.column_stack((a, b, b))
tensor([[0, 0, 1, 0, 1],
[1, 2, 3, 2, 3],
[2, 4, 5, 4, 5],
[3, 6, 7, 6, 7],
[4, 8, 9, 8, 9]])
| https://pytorch.org/docs/stable/generated/torch.column_stack.html | pytorch docs |
torch.index_reduce
torch.index_reduce(input, dim, index, source, reduce, *, include_self=True, out=None) -> Tensor
See "index_reduce_()" for function description. | https://pytorch.org/docs/stable/generated/torch.index_reduce.html | pytorch docs |
torch.Tensor.neg
Tensor.neg() -> Tensor
See "torch.neg()" | https://pytorch.org/docs/stable/generated/torch.Tensor.neg.html | pytorch docs |
torch.Tensor.is_complex
Tensor.is_complex() -> bool
Returns True if the data type of "self" is a complex data type. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_complex.html | pytorch docs |
torch.nn.utils.parametrizations.orthogonal
torch.nn.utils.parametrizations.orthogonal(module, name='weight', orthogonal_map=None, *, use_trivialization=True)
Applies an orthogonal or unitary parametrization to a matrix or a
batch of matrices.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the parametrized
matrix Q \in \mathbb{K}^{m \times n} is orthogonal as
\begin{align*} Q^{\text{H}}Q &= \mathrm{I}_n
\mathrlap{\qquad \text{if }m \geq n}\\ QQ^{\text{H}} &=
\mathrm{I}_m \mathrlap{\qquad \text{if }m < n} \end{align*}
where Q^{\text{H}} is the conjugate transpose when Q is complex and
the transpose when Q is real-valued, and \mathrm{I}_n is the
n-dimensional identity matrix. In plain words, Q will have
orthonormal columns whenever m \geq n and orthonormal rows
otherwise.
If the tensor has more than two dimensions, we consider it as a
batch of matrices of shape (..., m, n). | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html | pytorch docs |
The matrix Q may be parametrized via three different
"orthogonal_map" in terms of the original tensor:
""matrix_exp""/""cayley"": the "matrix_exp()" Q = \exp(A) and the
Cayley map Q = (\mathrm{I}_n + A/2)(\mathrm{I}_n - A/2)^{-1} are
applied to a skew-symmetric A to give an orthogonal matrix.
""householder"": computes a product of Householder reflectors
("householder_product()").
""matrix_exp""/""cayley"" often make the parametrized weight
converge faster than ""householder"", but they are slower to
compute for very thin or very wide matrices.
If "use_trivialization=True" (default), the parametrization
implements the "Dynamic Trivialization Framework", where an extra
matrix B \in \mathbb{K}^{n \times n} is stored under
"module.parametrizations.weight[0].base". This helps the
convergence of the parametrized layer at the expense of some extra | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html | pytorch docs |
memory use. See Trivializations for Gradient-Based Optimization on
Manifolds .
Initial value of Q: If the original tensor is not parametrized and
"use_trivialization=True" (default), the initial value of Q is that
of the original tensor if it is orthogonal (or unitary in the
complex case) and it is orthogonalized via the QR decomposition
otherwise (see "torch.linalg.qr()"). Same happens when it is not
parametrized and "orthogonal_map="householder"" even when
"use_trivialization=False". Otherwise, the initial value is the
result of the composition of all the registered parametrizations
applied to the original tensor.
Note:
This function is implemented using the parametrization
functionality in "register_parametrization()".
Parameters:
* module (nn.Module) -- module on which to register the
parametrization.
* **name** (*str**, **optional*) -- name of the tensor to make
orthogonal. Default: ""weight"".
| https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html | pytorch docs |
orthogonal. Default: ""weight"".
* **orthogonal_map** (*str**, **optional*) -- One of the
following: ""matrix_exp"", ""cayley"", ""householder"".
Default: ""matrix_exp"" if the matrix is square or complex,
""householder"" otherwise.
* **use_trivialization** (*bool**, **optional*) -- whether to
use the dynamic trivialization framework. Default: "True".
Returns:
The original module with an orthogonal parametrization
registered to the specified weight
Return type:
Module
Example:
>>> orth_linear = orthogonal(nn.Linear(20, 40))
>>> orth_linear
ParametrizedLinear(
in_features=20, out_features=40, bias=True
(parametrizations): ModuleDict(
(weight): ParametrizationList(
(0): _Orthogonal()
)
)
)
>>> Q = orth_linear.weight
>>> torch.dist(Q.T @ Q, torch.eye(20))
tensor(4.9332e-07)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html | pytorch docs |
torch.Tensor.any
Tensor.any(dim=None, keepdim=False) -> Tensor
See "torch.any()" | https://pytorch.org/docs/stable/generated/torch.Tensor.any.html | pytorch docs |