text | source | category |
---|---|---|
>>> LD, pivots = torch.linalg.ldl_factor(A)
>>> LD
tensor([[ 7.2079, 0.0000, 0.0000],
[ 0.5884, 0.9595, 0.0000],
[ 0.2695, -0.8513, 0.1633]])
>>> pivots
tensor([1, 2, 3], dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html | pytorch docs |
torch.cuda.change_current_allocator
torch.cuda.change_current_allocator(allocator)
Changes the currently used memory allocator to be the one provided.
If the current allocator has already been used/initialized, this
function will error.
Parameters:
allocator (torch.cuda.memory._CUDAAllocator) -- allocator
to be set as the active one.
Note:
See Memory management for details on creating and using a custom
allocator
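Example (an illustrative sketch, not an official recipe; it assumes a
compiled shared library "alloc.so" exporting hypothetical "my_malloc" /
"my_free" functions, and it must run before any CUDA allocation is made):
>>> new_alloc = torch.cuda.memory.CUDAPluggableAllocator('alloc.so', 'my_malloc', 'my_free')
>>> torch.cuda.change_current_allocator(new_alloc)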
| https://pytorch.org/docs/stable/generated/torch.cuda.change_current_allocator.html | pytorch docs |
torch.bitwise_not
torch.bitwise_not(input, *, out=None) -> Tensor
Computes the bitwise NOT of the given input tensor. The input
tensor must be of integral or Boolean types. For bool tensors, it
computes the logical NOT.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))
tensor([ 0, 1, -4], dtype=torch.int8)
| https://pytorch.org/docs/stable/generated/torch.bitwise_not.html | pytorch docs |
torch.nn.functional.hardshrink
torch.nn.functional.hardshrink(input, lambd=0.5) -> Tensor
Applies the hard shrinkage function element-wise
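Example (illustrative; entries with magnitude at or below the default
"lambd=0.5" are set to zero):
>>> input = torch.tensor([-1.0, -0.25, 0.25, 1.0])
>>> torch.nn.functional.hardshrink(input)
tensor([-1.,  0.,  0.,  1.])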
See "Hardshrink" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.hardshrink.html | pytorch docs |
torch.atleast_2d
torch.atleast_2d(*tensors)
Returns a 2-dimensional view of each input tensor with zero or one
dimensions. Input tensors with two or more dimensions are returned
as-is.
Parameters:
input (Tensor or list of Tensors) --
Returns:
output (Tensor or tuple of Tensors)
Example:
>>> x = torch.tensor(1.)
>>> x
tensor(1.)
>>> torch.atleast_2d(x)
tensor([[1.]])
>>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
[2, 3]])
>>> torch.atleast_2d(x)
tensor([[0, 1],
[2, 3]])
>>> x = torch.tensor(0.5)
>>> y = torch.tensor(1.)
>>> torch.atleast_2d((x, y))
(tensor([[0.5000]]), tensor([[1.]]))
| https://pytorch.org/docs/stable/generated/torch.atleast_2d.html | pytorch docs |
torch.nn.functional.dropout1d
torch.nn.functional.dropout1d(input, p=0.5, training=True, inplace=False)
Randomly zero out entire channels (a channel is a 1D feature map,
e.g., the j-th channel of the i-th sample in the batched input is a
1D tensor \text{input}[i, j]) of the input tensor. Each channel
will be zeroed out independently on every forward call with
probability "p" using samples from a Bernoulli distribution.
See "Dropout1d" for details.
Parameters:
* p (float) -- probability of a channel to be zeroed.
Default: 0.5
* **training** (*bool*) -- apply dropout if is "True". Default:
"True"
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
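Example (illustrative sketch with a random (batch, channels, length) input):
>>> x = torch.randn(20, 16, 32)
>>> out = torch.nn.functional.dropout1d(x, p=0.5)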
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout1d.html | pytorch docs |
torch.signal.windows.exponential
torch.signal.windows.exponential(M, *, center=None, tau=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes a window with an exponential waveform. Also known as
Poisson window.
The exponential window is defined as follows:
w_n = \exp{\left(-\frac{|n - c|}{\tau}\right)}
where c is the "center" of the window.
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* center (float, optional) -- where the center of the
window will be located. Default: M / 2 if sym is False,
else (M - 1) / 2.
* **tau** (*float**, **optional*) -- the decay value. Tau is
generally associated with a percentage, that means, that the
| https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html | pytorch docs |
value should vary within the interval (0, 100]. If tau is 100,
it is considered the uniform window. Default: 1.0.
* **sym** (*bool**, **optional*) -- If *False*, returns a
periodic window suitable for use in spectral analysis. If
*True*, returns a symmetric window suitable for use in filter
design. Default: *True*.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
| https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html | pytorch docs |
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric exponential window of size 10 and with a decay value of 1.0.
>>> # The center will be at (M - 1) / 2, where M is 10.
>>> torch.signal.windows.exponential(10)
tensor([0.0111, 0.0302, 0.0821, 0.2231, 0.6065, 0.6065, 0.2231, 0.0821, 0.0302, 0.0111])
>>> # Generates a periodic exponential window and decay factor equal to .5
>>> torch.signal.windows.exponential(10, sym=False,tau=.5)
tensor([4.5400e-05, 3.3546e-04, 2.4788e-03, 1.8316e-02, 1.3534e-01, 1.0000e+00, 1.3534e-01, 1.8316e-02, 2.4788e-03, 3.3546e-04])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html | pytorch docs |
torch.ne
torch.ne(input, other, *, out=None) -> Tensor
Computes \text{input} \neq \text{other} element-wise.
The second argument can be a number or a tensor whose shape is
broadcastable with the first argument.
Parameters:
* input (Tensor) -- the tensor to compare
* **other** (*Tensor** or **float*) -- the tensor or value to
compare
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Returns:
A boolean tensor that is True where "input" is not equal to
"other" and False elsewhere
Example:
>>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [True, False]])
| https://pytorch.org/docs/stable/generated/torch.ne.html | pytorch docs |
torch.Tensor.logcumsumexp
Tensor.logcumsumexp(dim) -> Tensor
See "torch.logcumsumexp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logcumsumexp.html | pytorch docs |
default_activation_only_qconfig
torch.quantization.qconfig.default_activation_only_qconfig
alias of QConfig(activation=functools.partial(FakeQuantize,
observer=MovingAverageMinMaxObserver,
quant_min=0, quant_max=255, dtype=torch.quint8,
qscheme=torch.per_tensor_affine, reduce_range=True){},
weight=torch.nn.Identity) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_activation_only_qconfig.html | pytorch docs |
torch.set_float32_matmul_precision
torch.set_float32_matmul_precision(precision)
Sets the internal precision of float32 matrix multiplications.
Running float32 matrix multiplications in lower precision may
significantly increase performance, and in some programs the loss
of precision has a negligible impact.
Supports three settings:
* "highest", float32 matrix multiplications use the float32
datatype for internal computations.
* "high", float32 matrix multiplications use the TensorFloat32
or bfloat16_3x datatypes for internal computations, if fast
matrix multiplication algorithms using those datatypes
internally are available. Otherwise float32 matrix
multiplications are computed as if the precision is "highest".
* "medium", float32 matrix multiplications use the bfloat16
datatype for internal computations, if a fast matrix
| https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html | pytorch docs |
multiplication algorithm using that datatype internally is
available. Otherwise float32 matrix multiplications are
computed as if the precision is "high".
Note:
This does not change the output dtype of float32 matrix
multiplications, it controls how the internal computation of the
matrix multiplication is performed.
Note:
This does not change the precision of convolution operations.
Other flags, like *torch.backends.cudnn.allow_tf32*, may control
the precision of convolution operations.
Note:
This flag currently only affects one native device type: CUDA. If
"high" or "medium" are set then the TensorFloat32 datatype will
be used when computing float32 matrix multiplications, equivalent
to setting *torch.backends.cuda.matmul.allow_tf32 = True*. When
"highest" (the default) is set then the float32 datatype is used
for internal computations, equivalent to setting
*torch.backends.cuda.matmul.allow_tf32 = False*.
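Example (illustrative sketch; the speedup only materializes on hardware
with fast TF32/bfloat16 matmul support, and the output dtype stays
float32):
>>> torch.set_float32_matmul_precision("high")
>>> a = torch.randn(1024, 1024)
>>> b = torch.randn(1024, 1024)
>>> c = a @ b  # may use TF32 / bfloat16_3x internally
>>> torch.set_float32_matmul_precision("highest")  # restore the default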
| https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html | pytorch docs |
Parameters:
precision (str) -- can be set to "highest" (default),
"high", or "medium" (see above). | https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html | pytorch docs |
Linear
class torch.ao.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)
A linear module attached with FakeQuantize modules for weight, used
for quantization aware training.
We adopt the same interface as torch.nn.Linear, please see
https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for
documentation.
Similar to torch.nn.Linear, with FakeQuantize modules initialized
to default.
Variables:
weight (torch.Tensor) -- fake quant module for weight
classmethod from_float(mod)
Create a qat module from a float module or qparams_dict.
Parameters:
    **mod** -- a float module, either produced by torch.ao.quantization
    utilities or directly from user
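Example (an illustrative sketch, assuming the default QAT qconfig for the
"fbgemm" backend; "from_float" expects "qconfig" to be set on the float
module):
>>> import torch.ao.nn.qat
>>> import torch.ao.quantization
>>> float_linear = torch.nn.Linear(5, 10)
>>> float_linear.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
>>> qat_linear = torch.ao.nn.qat.Linear.from_float(float_linear)
>>> out = qat_linear(torch.randn(3, 5))  # weight is fake-quantized during the forward pass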
| https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Linear.html | pytorch docs |
LazyModuleMixin
class torch.nn.modules.lazy.LazyModuleMixin(*args, **kwargs)
A mixin for modules that lazily initialize parameters, also known
as "lazy modules."
Modules that lazily initialize parameters, or "lazy modules",
derive the shapes of their parameters from the first input(s) to
their forward method. Until that first forward they contain
"torch.nn.UninitializedParameter" s that should not be accessed or
used, and afterward they contain regular "torch.nn.Parameter" s.
Lazy modules are convenient since they don't require computing some
module arguments, like the "in_features" argument of a typical
"torch.nn.Linear".
After construction, networks with lazy modules should first be
converted to the desired dtype and placed on the expected device.
This is because lazy modules only perform shape inference so the
usual dtype and device placement behavior applies. The lazy modules
should then perform "dry runs" to initialize all the components in | https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
the module. These "dry runs" send inputs of the correct size,
dtype, and device through the network and to each one of its lazy
modules. After this the network can be used as usual.
>>> class LazyMLP(torch.nn.Module):
...    def __init__(self):
...        super().__init__()
... self.fc1 = torch.nn.LazyLinear(10)
... self.relu1 = torch.nn.ReLU()
... self.fc2 = torch.nn.LazyLinear(1)
... self.relu2 = torch.nn.ReLU()
...
... def forward(self, input):
... x = self.relu1(self.fc1(input))
... y = self.relu2(self.fc2(x))
... return y
>>> # constructs a network with lazy modules
>>> lazy_mlp = LazyMLP()
>>> # transforms the network's device and dtype
>>> # NOTE: these transforms can and should be applied after construction and before any 'dry runs'
>>> lazy_mlp = lazy_mlp.cuda().double()
>>> lazy_mlp
LazyMLP( (fc1): LazyLinear(in_features=0, out_features=10, bias=True)
(relu1): ReLU()
| https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
(fc2): LazyLinear(in_features=0, out_features=1, bias=True)
(relu2): ReLU()
)
>>> # performs a dry run to initialize the network's lazy modules
>>> lazy_mlp(torch.ones(10,10).cuda())
>>> # after initialization, LazyLinear modules become regular Linear modules
>>> lazy_mlp
LazyMLP(
(fc1): Linear(in_features=10, out_features=10, bias=True)
(relu1): ReLU()
(fc2): Linear(in_features=10, out_features=1, bias=True)
(relu2): ReLU()
)
>>> # attaches an optimizer, since parameters can now be used as usual
>>> optim = torch.optim.SGD(lazy_mlp.parameters(), lr=0.01)
A final caveat when using lazy modules is that the order of
initialization of a network's parameters may change, since the lazy
modules are always initialized after other modules. For example, if
the LazyMLP class defined above had a "torch.nn.LazyLinear" module
first and then a regular "torch.nn.Linear" second, the second | https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
module would be initialized on construction and the first module
would be initialized during the first dry run. This can cause the
parameters of a network using lazy modules to be initialized
differently than the parameters of a network without lazy modules
as the order of parameter initializations, which often depends on a
stateful random number generator, is different. Check
Reproducibility for more details.
Lazy modules can be serialized with a state dict like other
modules. For example:
>>> lazy_mlp = LazyMLP()
>>> # The state dict shows the uninitialized parameters
>>> lazy_mlp.state_dict()
OrderedDict([('fc1.weight', Uninitialized parameter),
('fc1.bias',
tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,
4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),
('fc2.weight', Uninitialized parameter),
('fc2.bias', tensor([0.0019]))])
| https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
Lazy modules can load regular "torch.nn.Parameter" s (i.e. you can
serialize/deserialize initialized LazyModules and they will remain
initialized)
>>> full_mlp = LazyMLP()
>>> # Dry run to initialize another module
>>> full_mlp.forward(torch.ones(10, 1))
>>> # Load an initialized state into a lazy module
>>> lazy_mlp.load_state_dict(full_mlp.state_dict())
>>> # The state dict now holds valid values
>>> lazy_mlp.state_dict()
OrderedDict([('fc1.weight',
tensor([[-0.3837],
[ 0.0907],
[ 0.6708],
[-0.5223],
[-0.9028],
[ 0.2851],
[-0.4537],
[ 0.6813],
[ 0.5766],
[-0.8678]])),
('fc1.bias',
tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,
| https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),
('fc2.weight',
tensor([[ 0.1320, 0.2938, 0.0679, 0.2793, 0.1088, -0.1795, -0.2301, 0.2807,
0.2479, 0.1091]])),
('fc2.bias', tensor([0.0019]))])
Note, however, that the loaded parameters will not be replaced when
doing a "dry run" if they are initialized when the state is loaded.
This prevents using initialized modules in different contexts.
has_uninitialized_params()
Check if a module has parameters that are not initialized
initialize_parameters(*args, **kwargs)
Initialize parameters according to the input batch properties.
This adds an interface to isolate parameter initialization from
the forward pass when doing parameter shape inference.
| https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html | pytorch docs |
torch.fft.rfft
torch.fft.rfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor
Computes the one dimensional Fourier transform of real-valued
"input".
The FFT of a real signal is Hermitian-symmetric, "X[i] =
conj(X[-i])" so the output contains only the positive frequencies
below the Nyquist frequency. To compute the full output, use
"fft()"
Note:
Supports torch.half on CUDA with GPU Architecture SM53 or
greater. However it only supports powers of 2 signal length in
every transformed dimension.
Parameters:
* input (Tensor) -- the real input tensor
* **n** (*int**, **optional*) -- Signal length. If given, the
input will either be zero-padded or trimmed to this length
before computing the real FFT.
* **dim** (*int**, **optional*) -- The dimension along which to
take the one dimensional real FFT.
* **norm** (*str**, **optional*) --
| https://pytorch.org/docs/stable/generated/torch.fft.rfft.html | pytorch docs |
Normalization mode. For the forward transform ("rfft()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the FFT
orthonormal)
Calling the backward transform ("irfft()") with the same
normalization mode will apply an overall normalization of
"1/n" between the two transforms. This is required to make
"irfft()" the exact inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j])
Compare against the full output from "fft()":
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
Notice that the symmetric element "T[-1] == T[1].conj()" is | https://pytorch.org/docs/stable/generated/torch.fft.rfft.html | pytorch docs |
omitted. At the Nyquist frequency "T[-2] == T[2]" is its own
symmetric pair, and therefore must always be real-valued. | https://pytorch.org/docs/stable/generated/torch.fft.rfft.html | pytorch docs |
torch.nanmean
torch.nanmean(input, dim=None, keepdim=False, *, dtype=None, out=None) -> Tensor
Computes the mean of all non-NaN elements along the specified
dimensions.
This function is identical to "torch.mean()" when there are no
NaN values in the "input" tensor. In the presence of NaN,
"torch.mean()" will propagate the NaN to the output whereas
"torch.nanmean()" will ignore the NaN values (torch.nanmean(a)
is equivalent to torch.mean(a[~a.isnan()])).
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
dimension or dimensions to reduce. If "None", all dimensions
are reduced.
| https://pytorch.org/docs/stable/generated/torch.nanmean.html | pytorch docs |
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is casted
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None.
* **out** (*Tensor**, **optional*) -- the output tensor.
See also:
"torch.mean()" computes the mean value, propagating *NaN*.
Example:
>>> x = torch.tensor([[torch.nan, 1, 2], [1, 2, 3]])
>>> x.mean()
tensor(nan)
>>> x.nanmean()
tensor(1.8000)
>>> x.mean(dim=0)
tensor([ nan, 1.5000, 2.5000])
>>> x.nanmean(dim=0)
tensor([1.0000, 1.5000, 2.5000])
# If all elements in the reduced dimensions are NaN then the result is NaN
>>> torch.tensor([torch.nan]).nanmean()
tensor(nan)
| https://pytorch.org/docs/stable/generated/torch.nanmean.html | pytorch docs |
Identity
class torch.nn.utils.prune.Identity
Utility pruning method that does not prune any units but generates
the pruning parametrization with a mask of ones.
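Example (illustrative; uses the "prune.identity" convenience wrapper
around this class):
>>> import torch.nn as nn
>>> import torch.nn.utils.prune as prune
>>> m = prune.identity(nn.Linear(2, 3), "weight")
>>> m.weight_mask
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])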
classmethod apply(module, name)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html | pytorch docs |
pruned_tensor (torch.Tensor)
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html | pytorch docs |
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html | pytorch docs |
torch.kthvalue
torch.kthvalue(input, k, dim=None, keepdim=False, *, out=None)
Returns a namedtuple "(values, indices)" where "values" is the "k"
th smallest element of each row of the "input" tensor in the given
dimension "dim". And "indices" is the index location of each
element found.
If "dim" is not given, the last dimension of the input is chosen.
If "keepdim" is "True", both the "values" and "indices" tensors are
the same size as "input", except in the dimension "dim" where they
are of size 1. Otherwise, "dim" is squeezed (see
"torch.squeeze()"), resulting in both the "values" and "indices"
tensors having 1 fewer dimension than the "input" tensor.
Note:
When "input" is a CUDA tensor and there are multiple valid "k" th
values, this function may nondeterministically return "indices"
for any of them.
Parameters:
* input (Tensor) -- the input tensor.
* **k** (*int*) -- k for the k-th smallest element
| https://pytorch.org/docs/stable/generated/torch.kthvalue.html | pytorch docs |
* **dim** (*int**, **optional*) -- the dimension to find the kth
  value along
* **keepdim** (*bool*) -- whether the output tensor has "dim"
  retained or not.
Keyword Arguments:
out (tuple, optional) -- the output tuple of (Tensor,
LongTensor) can be optionally given to be used as output buffers
Example:
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.kthvalue(x, 4)
torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))
>>> x=torch.arange(1.,7.).resize_(2,3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.]])
>>> torch.kthvalue(x, 2, 0, True)
torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))
| https://pytorch.org/docs/stable/generated/torch.kthvalue.html | pytorch docs |
torch._foreach_sinh_
torch._foreach_sinh_(self: List[Tensor]) -> None
Apply "torch.sinh()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_sinh_.html | pytorch docs |
torch.Tensor.nanmedian
Tensor.nanmedian(dim=None, keepdim=False)
See "torch.nanmedian()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nanmedian.html | pytorch docs |
torch.Tensor.fix_
Tensor.fix_() -> Tensor
In-place version of "fix()" | https://pytorch.org/docs/stable/generated/torch.Tensor.fix_.html | pytorch docs |
torch.Tensor.nonzero
Tensor.nonzero() -> LongTensor
See "torch.nonzero()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nonzero.html | pytorch docs |
interpolate
class torch.ao.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None)
Down/up samples the input to either the given "size" or the given
"scale_factor"
See "torch.nn.functional.interpolate()" for implementation details.
The input dimensions are interpreted in the form: mini-batch x
channels x [optional depth] x [optional height] x width.
Note:
The input quantization parameters propagate to the output.
Note:
Only 2D/3D input is supported for quantized inputs
Note:
Only the following modes are supported for the quantized inputs:
* *bilinear*
* *nearest*
Parameters:
* input (Tensor) -- the input tensor
* **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,
**int**] or **Tuple**[**int**, **int**, **int**]*) -- output
spatial size.
* **scale_factor** (*float** or **Tuple**[**float**]*) --
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html | pytorch docs |
multiplier for spatial size. Has to match input size if it is
a tuple.
* **mode** (*str*) -- algorithm used for upsampling: "'nearest'"
| "'bilinear'"
* **align_corners** (*bool**, **optional*) -- Geometrically, we
consider the pixels of the input and output as squares rather
than points. If set to "True", the input and output tensors
are aligned by the center points of their corner pixels,
preserving the values at the corner pixels. If set to "False",
the input and output tensors are aligned by the corner points
of their corner pixels, and the interpolation uses edge value
padding for out-of-boundary values, making this operation
*independent* of input size when "scale_factor" is kept the
same. This only has an effect when "mode" is "'bilinear'".
Default: "False"
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html | pytorch docs |
torch.mul
torch.mul(input, other, *, out=None) -> Tensor
Multiplies "input" by "other".
\text{out}_i = \text{input}_i \times \text{other}_i
Supports broadcasting to a common shape, type promotion, and
integer, float, and complex inputs.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor** or **Number*) --
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Examples:
>>> a = torch.randn(3)
>>> a
tensor([ 0.2015, -0.4255, 2.6087])
>>> torch.mul(a, 100)
tensor([ 20.1494, -42.5491, 260.8663])
>>> b = torch.randn(4, 1)
>>> b
tensor([[ 1.1207],
[-0.3137],
[ 0.0700],
[ 0.8378]])
>>> c = torch.randn(1, 4)
>>> c
tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])
>>> torch.mul(b, c)
tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],
[-0.1614, -0.0382, 0.1645, -0.7021],
| https://pytorch.org/docs/stable/generated/torch.mul.html | pytorch docs |
[ 0.0360, 0.0085, -0.0367, 0.1567],
[ 0.4312, 0.1019, -0.4394, 1.8753]]) | https://pytorch.org/docs/stable/generated/torch.mul.html | pytorch docs |
torch.nn.functional.adaptive_avg_pool3d
torch.nn.functional.adaptive_avg_pool3d(input, output_size)
Applies a 3D adaptive average pooling over an input signal composed
of several input planes.
See "AdaptiveAvgPool3d" for details and output shape.
Parameters:
output_size (None) -- the target output size (single
integer or triple-integer tuple)
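Example (illustrative):
>>> x = torch.randn(1, 64, 8, 9, 10)
>>> out = torch.nn.functional.adaptive_avg_pool3d(x, (5, 7, 9))
>>> out.shape
torch.Size([1, 64, 5, 7, 9])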
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool3d.html | pytorch docs |
torch.use_deterministic_algorithms
torch.use_deterministic_algorithms(mode, *, warn_only=False)
Sets whether PyTorch operations must use "deterministic"
algorithms. That is, algorithms which, given the same input, and
when run on the same software and hardware, always produce the same
output. When enabled, operations will use deterministic algorithms
when available, and if only nondeterministic algorithms are
available they will throw a "RuntimeError" when called.
Note:
This setting alone is not always enough to make an application
reproducible. Refer to Reproducibility for more information.
Note:
"torch.set_deterministic_debug_mode()" offers an alternative
interface for this feature.
The following normally-nondeterministic operations will act
deterministically when "mode=True":
* "torch.nn.Conv1d" when called on CUDA tensor
* "torch.nn.Conv2d" when called on CUDA tensor
| https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
"torch.nn.Conv3d" when called on CUDA tensor
"torch.nn.ConvTranspose1d" when called on CUDA tensor
"torch.nn.ConvTranspose2d" when called on CUDA tensor
"torch.nn.ConvTranspose3d" when called on CUDA tensor
"torch.bmm()" when called on sparse-dense CUDA tensors
"torch.Tensor.getitem()" when attempting to differentiate
a CPU tensor and the index is a list of tensors
"torch.Tensor.index_put()" with "accumulate=False"
"torch.Tensor.index_put()" with "accumulate=True" when called
on a CPU tensor
"torch.Tensor.put_()" with "accumulate=True" when called on a
CPU tensor
"torch.Tensor.scatter_add_()" when called on a CUDA tensor
"torch.gather()" when called on a CUDA tensor that requires
grad
"torch.index_add()" when called on CUDA tensor
"torch.index_select()" when attempting to differentiate a CUDA
tensor
| https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
* "torch.repeat_interleave()" when attempting to differentiate a
CUDA tensor
* "torch.Tensor.index_copy()" when called on a CPU or CUDA
tensor
The following normally-nondeterministic operations will throw a
"RuntimeError" when "mode=True":
* "torch.nn.AvgPool3d" when attempting to differentiate a CUDA
tensor
* "torch.nn.AdaptiveAvgPool2d" when attempting to differentiate
a CUDA tensor
* "torch.nn.AdaptiveAvgPool3d" when attempting to differentiate
a CUDA tensor
* "torch.nn.MaxPool3d" when attempting to differentiate a CUDA
tensor
* "torch.nn.AdaptiveMaxPool2d" when attempting to differentiate
a CUDA tensor
* "torch.nn.FractionalMaxPool2d" when attempting to
differentiate a CUDA tensor
* "torch.nn.FractionalMaxPool3d" when attempting to
differentiate a CUDA tensor
* "torch.nn.MaxUnpool1d"
* "torch.nn.MaxUnpool2d"
| https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
"torch.nn.MaxUnpool2d"
"torch.nn.MaxUnpool3d"
"torch.nn.functional.interpolate()" when attempting to
differentiate a CUDA tensor and one of the following modes is
used:
"linear"
"bilinear"
"bicubic"
"trilinear"
"torch.nn.ReflectionPad1d" when attempting to differentiate a
CUDA tensor
"torch.nn.ReflectionPad2d" when attempting to differentiate a
CUDA tensor
"torch.nn.ReflectionPad3d" when attempting to differentiate a
CUDA tensor
"torch.nn.ReplicationPad1d" when attempting to differentiate a
CUDA tensor
"torch.nn.ReplicationPad2d" when attempting to differentiate a
CUDA tensor
"torch.nn.ReplicationPad3d" when attempting to differentiate a
CUDA tensor
"torch.nn.NLLLoss" when called on a CUDA tensor
"torch.nn.CTCLoss" when attempting to differentiate a CUDA
tensor
| https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
* "torch.nn.EmbeddingBag" when attempting to differentiate a
CUDA tensor when "mode='max'"
* "torch.Tensor.put_()" when "accumulate=False"
* "torch.Tensor.put_()" when "accumulate=True" and called on a
CUDA tensor
* "torch.histc()" when called on a CUDA tensor
* "torch.bincount()" when called on a CUDA tensor
* "torch.kthvalue()" with called on a CUDA tensor
* "torch.median()" with indices output when called on a CUDA
tensor
* "torch.nn.functional.grid_sample()" when attempting to
differentiate a CUDA tensor
* "torch.cumsum()" when called on a CUDA tensor when dtype is
floating point or complex
A handful of CUDA operations are nondeterministic if the CUDA
version is 10.2 or greater, unless the environment variable
"CUBLAS_WORKSPACE_CONFIG=:4096:8" or
"CUBLAS_WORKSPACE_CONFIG=:16:8" is set. See the CUDA documentation | https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
for more details:
https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
If one of these environment variable
configurations is not set, a "RuntimeError" will be raised from
these operations when called with CUDA tensors:
* "torch.mm()"
* "torch.mv()"
* "torch.bmm()"
Note that deterministic operations tend to have worse performance
than nondeterministic operations.
Note:
This flag does not detect or prevent nondeterministic behavior
caused by calling an inplace operation on a tensor with an
internal memory overlap or by giving such a tensor as the "out"
argument for an operation. In these cases, multiple writes of
different data may target a single memory location, and the order
of writes is not guaranteed.
Parameters:
mode ("bool") -- If True, makes potentially nondeterministic
operations switch to a deterministic algorithm or throw a | https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
runtime error. If False, allows nondeterministic operations.
Keyword Arguments:
warn_only ("bool", optional) -- If True, operations that do
not have a deterministic implementation will throw a warning
instead of an error. Default: "False"
Example:
>>> torch.use_deterministic_algorithms(True)
# Forward mode nondeterministic error
>>> torch.randn(10, device='cuda').kthvalue(0)
...
RuntimeError: kthvalue CUDA does not have a deterministic implementation...
# Backward mode nondeterministic error
>>> torch.nn.AvgPool3d(1)(torch.randn(3, 4, 5, 6, requires_grad=True).cuda()).sum().backward()
...
RuntimeError: avg_pool3d_backward_cuda does not have a deterministic implementation...
| https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html | pytorch docs |
torch.as_strided
torch.as_strided(input, size, stride, storage_offset=None) -> Tensor
Create a view of an existing torch.Tensor "input" with specified
"size", "stride" and "storage_offset".
Warning:
Prefer using other view functions, like "torch.Tensor.expand()",
to setting a view's strides manually with *as_strided*, as this
function's behavior depends on the implementation of a tensor's
storage. The constructed view of the storage must only refer to
elements within the storage or a runtime error will be thrown,
and if the view is "overlapped" (with multiple indices referring
to the same element in memory) its behavior is undefined.
Parameters:
* input (Tensor) -- the input tensor.
* **size** (*tuple** or **ints*) -- the shape of the output
tensor
* **stride** (*tuple** or **ints*) -- the stride of the output
tensor
* **storage_offset** (*int**, **optional*) -- the offset in the
| https://pytorch.org/docs/stable/generated/torch.as_strided.html | pytorch docs |
underlying storage of the output tensor. If "None", the
storage_offset of the output tensor will match the input
tensor.
Example:
>>> x = torch.randn(3, 3)
>>> x
tensor([[ 0.9039, 0.6291, 1.0795],
[ 0.1586, 2.1939, -0.4900],
[-0.1909, -0.7503, 1.9355]])
>>> t = torch.as_strided(x, (2, 2), (1, 2))
>>> t
tensor([[0.9039, 1.0795],
[0.6291, 0.1586]])
>>> t = torch.as_strided(x, (2, 2), (1, 2), 1)
>>> t
tensor([[0.6291, 0.1586],
[1.0795, 2.1939]])
| https://pytorch.org/docs/stable/generated/torch.as_strided.html | pytorch docs |
torch.einsum
torch.einsum(equation, *operands) -> Tensor
Sums the product of the elements of the input "operands" along
dimensions specified using a notation based on the Einstein
summation convention.
Einsum allows computing many common multi-dimensional linear
algebraic array operations by representing them in a short-hand
format based on the Einstein summation convention, given by
"equation". The details of this format are described below, but the
general idea is to label every dimension of the input "operands"
with some subscript and define which subscripts are part of the
output. The output is then computed by summing the product of the
elements of the "operands" along the dimensions whose subscripts
are not part of the output. For example, matrix multiplication can
be computed using einsum as torch.einsum("ij,jk->ik", A, B).
Here, j is the summation subscript and i and k the output
subscripts (see section below for more details on why). | https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
Equation:
The "equation" string specifies the subscripts (letters in
*[a-zA-Z]*) for each dimension of the input "operands" in the
same order as the dimensions, separating subscripts for each
operand by a comma (','), e.g. *'ij,jk'* specify subscripts for
two 2D operands. The dimensions labeled with the same subscript
must be broadcastable, that is, their size must either match or
be *1*. The exception is if a subscript is repeated for the same
input operand, in which case the dimensions labeled with this
subscript for this operand must match in size and the operand
will be replaced by its diagonal along these dimensions. The
subscripts that appear exactly once in the "equation" will be
part of the output, sorted in increasing alphabetical order. The
output is computed by multiplying the input "operands" element-
wise, with their dimensions aligned based on the subscripts, and
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
then summing out the dimensions whose subscripts are not part of
the output.
Optionally, the output subscripts can be explicitly defined by
adding an arrow ('->') at the end of the equation followed by
the subscripts for the output. For instance, the following
equation computes the transpose of a matrix multiplication:
'ij,jk->ki'. The output subscripts must appear at least once for
some input operand and at most once for the output.
Ellipsis ('...') can be used in place of subscripts to broadcast
the dimensions covered by the ellipsis. Each input operand may
contain at most one ellipsis which will cover the dimensions not
covered by subscripts, e.g. for an input operand with 5
dimensions, the ellipsis in the equation *'ab...c'* cover the
third and fourth dimensions. The ellipsis does not need to cover
the same number of dimensions across the "operands" but the
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
'shape' of the ellipsis (the size of the dimensions covered by
them) must broadcast together. If the output is not explicitly
defined with the arrow ('->') notation, the ellipsis will come
first in the output (left-most dimensions), before the subscript
labels that appear exactly once for the input operands. e.g. the
following equation implements batch matrix multiplication
'...ij,...jk'.
A few final notes: the equation may contain whitespaces between
the different elements (subscripts, ellipsis, arrow and comma)
but something like *'. . .'* is not valid. An empty string *''*
is valid for scalar operands.
Note:
"torch.einsum" handles ellipsis ('...') differently from NumPy in
that it allows dimensions covered by the ellipsis to be summed
over, that is, ellipsis are not required to be part of the
output.
Note:
This function uses opt_einsum (https://optimized-
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
einsum.readthedocs.io/en/stable/) to speed up computation or to
consume less memory by optimizing contraction order. This
optimization occurs when there are at least three inputs, since
the order does not matter otherwise. Note that finding the
optimal path is an NP-hard problem, thus, opt_einsum relies on
different heuristics to achieve near-optimal results. If
opt_einsum is not available, the default order is to contract
from left to right. To bypass this default behavior, add the
following line to disable the usage of opt_einsum and skip path
calculation: torch.backends.opt_einsum.enabled = False. To
specify which strategy you'd like for opt_einsum to compute the
contraction path, add the following line:
torch.backends.opt_einsum.strategy = 'auto'. The default
strategy is 'auto', and we also support 'greedy' and 'optimal'.
Disclaimer that the runtime of 'optimal' is factorial in the | https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
number of inputs! See more details in the opt_einsum
documentation (https://optimized-
einsum.readthedocs.io/en/stable/path_finding.html).
Note:
As of PyTorch 1.10 "torch.einsum()" also supports the sublist
format (see examples below). In this format, subscripts for each
operand are specified by sublists, list of integers in the range
[0, 52). These sublists follow their operands, and an extra
sublist can appear at the end of the input to specify the
output's subscripts., e.g. *torch.einsum(op1, sublist1, op2,
sublist2, ..., [subslist_out])*. Python's *Ellipsis* object may
be provided in a sublist to enable broadcasting as described in
the Equation section above.
Parameters:
* equation (str) -- The subscripts for the Einstein
summation.
* **operands** (*List**[**Tensor**]*) -- The tensors to compute
the Einstein summation of.
Return type:
Tensor
Examples:
>>> # trace
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
>>> torch.einsum('ii', torch.randn(4, 4))
tensor(-1.2104)
>>> # diagonal
>>> torch.einsum('ii->i', torch.randn(4, 4))
tensor([-0.1034, 0.7952, -0.2433, 0.4545])
>>> # outer product
>>> x = torch.randn(5)
>>> y = torch.randn(4)
>>> torch.einsum('i,j->ij', x, y)
tensor([[ 0.1156, -0.2897, -0.3918, 0.4963],
[-0.3744, 0.9381, 1.2685, -1.6070],
[ 0.7208, -1.8058, -2.4419, 3.0936],
[ 0.1713, -0.4291, -0.5802, 0.7350],
[ 0.5704, -1.4290, -1.9323, 2.4480]])
>>> # batch matrix multiplication
>>> As = torch.randn(3, 2, 5)
>>> Bs = torch.randn(3, 5, 4)
>>> torch.einsum('bij,bjk->bik', As, Bs)
tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],
[-1.6706, -0.8097, -0.8025, -2.1183]],
[[ 4.2239, 0.3107, -0.5756, -0.2354],
[-1.4558, -0.3460, 1.5087, -0.8530]],
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
[[ 2.8153, 1.8787, -4.3839, -1.2112],
[ 0.3728, -2.1131, 0.0921, 0.8305]]])
>>> # with sublist format and ellipsis
>>> torch.einsum(As, [..., 0, 1], Bs, [..., 1, 2], [..., 0, 2])
tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],
[-1.6706, -0.8097, -0.8025, -2.1183]],
[[ 4.2239, 0.3107, -0.5756, -0.2354],
[-1.4558, -0.3460, 1.5087, -0.8530]],
[[ 2.8153, 1.8787, -4.3839, -1.2112],
[ 0.3728, -2.1131, 0.0921, 0.8305]]])
>>> # batch permute
>>> A = torch.randn(2, 3, 4, 5)
>>> torch.einsum('...ij->...ji', A).shape
torch.Size([2, 3, 5, 4])
>>> # equivalent to torch.nn.functional.bilinear
>>> A = torch.randn(3, 5, 4)
>>> l = torch.randn(2, 5)
>>> r = torch.randn(2, 4)
>>> torch.einsum('bn,anm,bm->ba', l, A, r)
tensor([[-0.3430, -5.2405, 0.4494],
[ 0.3311, 5.5201, -3.0356]])
| https://pytorch.org/docs/stable/generated/torch.einsum.html | pytorch docs |
torch.less_equal
torch.less_equal(input, other, *, out=None) -> Tensor
Alias for "torch.le()". | https://pytorch.org/docs/stable/generated/torch.less_equal.html | pytorch docs |
torch.nn.functional.margin_ranking_loss
torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') -> Tensor
See "MarginRankingLoss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.margin_ranking_loss.html | pytorch docs |
torch.linalg.ldl_factor_ex
torch.linalg.ldl_factor_ex(A, *, hermitian=False, check_errors=False, out=None)
This is a version of "ldl_factor()" that does not perform error
checks unless "check_errors"= True. It also returns the "info"
tensor returned by LAPACK's sytrf. "info" stores integer error
codes from the backend library. A positive integer indicates the
diagonal element of D that is zero. Division by 0 will occur if the
result is used for solving a system of linear equations. "info"
filled with zeros indicates that the factorization was successful.
If "check_errors=True" and "info" contains positive integers, then
a RuntimeError is thrown.
Note:
When the inputs are on a CUDA device, this function synchronizes
only when "check_errors"*= True*.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is zero or
more batch dimensions consisting of symmetric or Hermitian
matrices.
Keyword Arguments:
* hermitian (bool, optional) -- whether to consider
the input to be Hermitian or symmetric. For real-valued
matrices, this switch has no effect. Default: False.
* **check_errors** (*bool**, **optional*) -- controls whether to
check the content of "info" and raise an error if it is non-
zero. Default: *False*.
* **out** (*tuple**, **optional*) -- tuple of three tensors to
write the output to. Ignored if *None*. Default: *None*.
Returns:
A named tuple (LD, pivots, info).
Examples:
>>> A = torch.randn(3, 3)
>>> A = A @ A.mT # make symmetric
>>> A
tensor([[7.2079, 4.2414, 1.9428],
[4.2414, 3.4554, 0.3264],
[1.9428, 0.3264, 1.3823]])
>>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)
>>> LD
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html | pytorch docs |
tensor([[ 7.2079, 0.0000, 0.0000],
[ 0.5884, 0.9595, 0.0000],
[ 0.2695, -0.8513, 0.1633]])
>>> pivots
tensor([1, 2, 3], dtype=torch.int32)
>>> info
tensor(0, dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html | pytorch docs |
torch.nn.utils.prune.random_unstructured
torch.nn.utils.prune.random_unstructured(module, name, amount)
Prunes tensor corresponding to parameter called "name" in "module"
by removing the specified "amount" of (currently unpruned) units
selected at random. Modifies module in place (and also return the
modified module) by:
1. adding a named buffer called "name+'_mask'" corresponding to the
binary mask applied to the parameter "name" by the pruning
method.
2. replacing the parameter "name" by its pruned version, while the
original (unpruned) parameter is stored in a new parameter named
"name+'_orig'".
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters to
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html | pytorch docs |
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents
the absolute number of parameters to prune.
Returns:
modified (i.e. pruned) version of the input module
Return type:
module (nn.Module)
-[ Examples ]-
>>> m = prune.random_unstructured(nn.Linear(2, 3), 'weight', amount=1)
>>> torch.sum(m.weight_mask == 0)
tensor(1)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html | pytorch docs |
clamp
class torch.ao.nn.quantized.functional.clamp(input, min_, max_)
clamp(input, min_, max_) -> Tensor
Applies the clamp function element-wise. See "clamp" for more
details.
Parameters:
* input (Tensor) -- quantized input
* **min** -- minimum value for clamping
* **max** -- maximum value for clamping
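Example (illustrative sketch on a per-tensor quantized input):
>>> x = torch.quantize_per_tensor(torch.randn(4), scale=0.1, zero_point=64, dtype=torch.quint8)
>>> out = torch.ao.nn.quantized.functional.clamp(x, -0.5, 0.5)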
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.clamp.html | pytorch docs |
torch.Tensor.scatter_
Tensor.scatter_(dim, index, src, reduce=None) -> Tensor
Writes all values from the tensor "src" into "self" at the indices
specified in the "index" tensor. For each value in "src", its
output index is specified by its index in "src" for "dimension !=
dim" and by the corresponding value in "index" for "dimension =
dim".
For a 3-D tensor, "self" is updated as:
self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in
"gather()".
"self", "index" and "src" (if it is a Tensor) should all have the
same number of dimensions. It is also required that "index.size(d)
<= src.size(d)" for all dimensions "d", and that "index.size(d) <=
self.size(d)" for all dimensions "d != dim". Note that "index" and
"src" do not broadcast. | https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html | pytorch docs |
"src" do not broadcast.
Moreover, as for "gather()", the values of "index" must be between
"0" and "self.size(dim) - 1" inclusive.
Warning:
When indices are not unique, the behavior is non-deterministic
(one of the values from "src" will be picked arbitrarily) and the
gradient will be incorrect (it will be propagated to all
locations in the source that correspond to the same index)!
Note:
The backward pass is implemented only for "src.shape ==
index.shape".
Additionally accepts an optional "reduce" argument that allows
specification of an optional reduction operation, which is applied
to all values in the tensor "src" into "self" at the indices
specified in the "index". For each value in "src", the reduction
operation is applied to an index in "self" which is specified by
its index in "src" for "dimension != dim" and by the corresponding
value in "index" for "dimension = dim".
Given a 3-D tensor and reduction using the multiplication | https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html | pytorch docs |
operation, "self" is updated as:
self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2
Reducing with the addition operation is the same as using
"scatter_add_()".
Parameters:
* dim (int) -- the axis along which to index
* **index** (*LongTensor*) -- the indices of elements to
scatter, can be either empty or of the same dimensionality as
"src". When empty, the operation returns "self" unchanged.
* **src** (*Tensor** or **float*) -- the source element(s) to
scatter.
* **reduce** (*str**, **optional*) -- reduction operation to
apply, can be either "'add'" or "'multiply'".
Example:
>>> src = torch.arange(1, 11).reshape((2, 5))
>>> src
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
>>> index = torch.tensor([[0, 1, 2, 0]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html | pytorch docs |
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)
tensor([[1, 0, 0, 4, 0],
[0, 2, 0, 0, 0],
[0, 0, 3, 0, 0]])
>>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)
tensor([[1, 2, 3, 0, 0],
[6, 7, 0, 0, 8],
[0, 0, 0, 0, 0]])
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),
... 1.23, reduce='multiply')
tensor([[2.0000, 2.0000, 2.4600, 2.0000],
[2.0000, 2.0000, 2.0000, 2.4600]])
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),
... 1.23, reduce='add')
tensor([[2.0000, 2.0000, 3.2300, 2.0000],
[2.0000, 2.0000, 2.0000, 3.2300]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html | pytorch docs |
torch.ceil
torch.ceil(input, *, out=None) -> Tensor
Returns a new tensor with the ceil of the elements of "input", the
smallest integer greater than or equal to each element.
For integer inputs, follows the array-api convention of returning a
copy of the input tensor.
\text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.6341, -1.4208, -1.0900, 0.5826])
>>> torch.ceil(a)
tensor([-0., -1., -1., 1.])
| https://pytorch.org/docs/stable/generated/torch.ceil.html | pytorch docs |
torch.Tensor.remainder_
Tensor.remainder_(divisor) -> Tensor
In-place version of "remainder()" | https://pytorch.org/docs/stable/generated/torch.Tensor.remainder_.html | pytorch docs |
torch.real
torch.real(input) -> Tensor
Returns a new tensor containing real values of the "self" tensor.
The returned tensor and "self" share the same underlying storage.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.real
tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
| https://pytorch.org/docs/stable/generated/torch.real.html | pytorch docs |
torch.jit.isinstance
torch.jit.isinstance(obj, target_type)
This function provides for container type refinement in
TorchScript. It can refine parameterized containers of the List,
Dict, Tuple, and Optional types. E.g. "List[str]", "Dict[str,
List[torch.Tensor]]", "Optional[Tuple[int,str,int]]". It can also
refine basic types such as bools and ints that are available in
TorchScript.
Parameters:
* obj -- object to refine the type of
* **target_type** -- type to try to refine obj to
Returns:
True if obj was successfully refined to the type of target_type,
False otherwise with no new type refinement
Return type:
"bool"
Example (using "torch.jit.isinstance" for type refinement):
import torch
from typing import Any, Dict, List
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
| https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html | pytorch docs |
def forward(self, input: Any): # note the Any type
if torch.jit.isinstance(input, List[torch.Tensor]):
for t in input:
y = t.clamp(0, 0.5)
elif torch.jit.isinstance(input, Dict[str, str]):
for val in input.values():
print(val)
m = torch.jit.script(MyModule())
x = [torch.rand(3,3), torch.rand(4,3)]
m(x)
y = {"key1":"val1","key2":"val2"}
m(y)
| https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html | pytorch docs |
torch.Tensor.index_fill_
Tensor.index_fill_(dim, index, value) -> Tensor
Fills the elements of the "self" tensor with value "value" by
selecting the indices in the order given in "index".
Parameters:
* dim (int) -- dimension along which to index
* **index** (*LongTensor*) -- indices of "self" tensor to fill
in
* **value** (*float*) -- the value to fill with
Example::
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 2])
>>> x.index_fill_(1, index, -1)
tensor([[-1., 2., -1.],
[-1., 5., -1.],
[-1., 8., -1.]]) | https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill_.html | pytorch docs |
torch.Tensor.clone
Tensor.clone(*, memory_format=torch.preserve_format) -> Tensor
See "torch.clone()" | https://pytorch.org/docs/stable/generated/torch.Tensor.clone.html | pytorch docs |
LPPool1d
class torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)
Applies a 1D power-average pooling over an input signal composed of
several input planes.
On each window, the function computed is:
f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}
At p = \infty, one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to Average
Pooling)
Note:
If the sum to the power of *p* is zero, the gradient of this
function is not defined. This implementation will set the
gradient to zero in this case.
Parameters:
* kernel_size (Union[int, Tuple[int]]) --
a single int, the size of the window
* **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- a
single int, the stride of the window. Default value is
"kernel_size"
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
Shape: | https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html | pytorch docs |
Shape:
* Input: (N, C, L_{in}) or (C, L_{in}).
* Output: (N, C, L_{out}) or (C, L_{out}), where
L_{out} = \left\lfloor\frac{L_{in} -
\text{kernel\_size}}{\text{stride}} + 1\right\rfloor
Examples::
>>> # power-2 pool of window of length 3, with stride 2.
>>> m = nn.LPPool1d(2, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input) | https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html | pytorch docs |
Embedding
class torch.ao.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8)
A quantized Embedding module with quantized packed weights as
inputs. We adopt the same interface as torch.nn.Embedding, please
see https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding for
documentation.
Similar to "Embedding", attributes will be randomly initialized at
module creation time and will be overwritten later
Variables:
weight (Tensor) -- the non-learnable quantized weights of
    the module of shape (\text{num\_embeddings},
    \text{embedding\_dim}).
Examples::
>>> m = nn.quantized.Embedding(num_embeddings=10, embedding_dim=12)
>>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8])
>>> output = m(indices)
>>> print(output.size())
torch.Size([9, 12])
classmethod from_float(mod)
Create a quantized embedding module from a float module
Parameters:
**mod** (*Module*) -- a float module, either produced by
torch.ao.quantization utilities or provided by user
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Embedding.html | pytorch docs |
torch.multiply
torch.multiply(input, other, *, out=None)
Alias for "torch.mul()". | https://pytorch.org/docs/stable/generated/torch.multiply.html | pytorch docs |
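A minimal example (not from the original page); the alias behaves
exactly like "torch.mul()":

    >>> import torch
    >>> torch.multiply(torch.tensor([2, 3]), 4)
    tensor([ 8, 12])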
AlphaDropout
class torch.nn.AlphaDropout(p=0.5, inplace=False)
Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-
normalizing property. For an input with zero mean and unit standard
deviation, the output of Alpha Dropout maintains the original mean
and standard deviation of the input. Alpha Dropout goes hand-in-
hand with SELU activation function, which ensures that the outputs
have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the
input tensor with probability p using samples from a Bernoulli
distribution. The elements to be masked are randomized on every
forward call, and scaled and shifted to maintain zero mean and unit
standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural
Networks .
Parameters: | https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html | pytorch docs |
* p (float) -- probability of an element to be dropped.
Default: 0.5
* **inplace** (*bool**, **optional*) -- If set to "True", will
do this operation in-place
Shape:
* Input: (*). Input can be of any shape
* Output: (*). Output is of the same shape as input
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
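    >>> # added rough check of the self-normalizing claim above (not in the
    >>> # original example): for a large standard-normal input, the output
    >>> # keeps a mean near 0 and a standard deviation near 1
    >>> y = m(torch.randn(100000))
    >>> y.mean(), y.std()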
| https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html | pytorch docs |
torch.logdet
torch.logdet(input) -> Tensor
Calculates log determinant of a square matrix or batches of square
matrices.
It returns "-inf" if the input has a determinant of zero, and "NaN"
if it has a negative determinant.
Note:
Backward through "logdet()" internally uses SVD results when
"input" is not invertible. In this case, double backward through
"logdet()" will be unstable in when "input" doesn't have distinct
singular values. See "torch.linalg.svd()" for details.
See also:
"torch.linalg.slogdet()" computes the sign (resp. angle) and
natural logarithm of the absolute value of the determinant of
real-valued (resp. complex) square matrices.
Parameters:
    input (Tensor) -- the input tensor of size "(*, n, n)"
    where "*" is zero or more batch dimensions.
Example:
>>> A = torch.randn(3, 3)
>>> torch.det(A)
tensor(0.2611)
>>> torch.logdet(A)
tensor(-1.3430)
    >>> A = torch.randn(3, 2, 2)  # a batch of three 2x2 matrices
    >>> A
| https://pytorch.org/docs/stable/generated/torch.logdet.html | pytorch docs |
tensor([[[ 0.9254, -0.6213],
[-0.5787, 1.6843]],
[[ 0.3242, -0.9665],
[ 0.4539, -0.0887]],
[[ 1.1336, -0.4025],
[-0.7089, 0.9032]]])
>>> A.det()
tensor([1.1990, 0.4099, 0.7386])
>>> A.det().log()
tensor([ 0.1815, -0.8917, -0.3031])
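    >>> # added illustration of the zero / negative determinant behavior
    >>> # described above (not in the original example)
    >>> torch.logdet(torch.zeros(2, 2))                    # det == 0
    tensor(-inf)
    >>> torch.logdet(torch.tensor([[0., 1.], [1., 0.]]))   # det == -1
    tensor(nan)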
| https://pytorch.org/docs/stable/generated/torch.logdet.html | pytorch docs |
torch.Tensor.max
Tensor.max(dim=None, keepdim=False)
See "torch.max()" | https://pytorch.org/docs/stable/generated/torch.Tensor.max.html | pytorch docs |
torch.abs
torch.abs(input, *, out=None) -> Tensor
Computes the absolute value of each element in "input".
\text{out}_{i} = |\text{input}_{i}|
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.abs(torch.tensor([-1, -2, 3]))
tensor([ 1, 2, 3])
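    >>> # for complex inputs, the absolute value is the magnitude
    >>> # (added illustration, not in the original example)
    >>> torch.abs(torch.tensor([3 + 4j]))
    tensor([5.])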
| https://pytorch.org/docs/stable/generated/torch.abs.html | pytorch docs |
torch.positive
torch.positive(input) -> Tensor
Returns "input". Throws a runtime error if "input" is a bool
tensor.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> t = torch.randn(5)
>>> t
tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])
>>> torch.positive(t)
tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])
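    >>> # added illustration of the note above (not in the original example):
    >>> # bool tensors are rejected
    >>> torch.positive(torch.tensor([True, False]))   # raises RuntimeError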
| https://pytorch.org/docs/stable/generated/torch.positive.html | pytorch docs |
prepare_fx
class torch.quantization.quantize_fx.prepare_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, _equalization_config=None, backend_config=None)
Prepare a model for post training static quantization
Parameters:
* **model** -- torch.nn.Module model
    * **qconfig_mapping** -- QConfigMapping object to
      configure how a model is quantized, see "QConfigMapping" for
      more details
    * **example_inputs** -- Example inputs for forward
      function of the model, Tuple of positional args (keyword args
      can be passed as positional args as well)
    * **prepare_custom_config** -- customization configuration
      for quantization tool. See "PrepareCustomConfig" for more
      details
    * **_equalization_config** -- config for specifying how to
      perform equalization on the model
    * **backend_config** -- config that specifies how
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
operators are quantized in a backend, this includes how the
operators are observed, supported fusion patterns, how
quantize/dequantize ops are inserted, supported dtypes etc.
See "BackendConfig" for more details
Returns:
A GraphModule with observer (configured by qconfig_mapping),
ready for calibration
Return type:
ObservedGraphModule
Example:
import torch
from torch.ao.quantization import get_default_qconfig_mapping
    from torch.ao.quantization.quantize_fx import prepare_fx
class Submodule(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(5, 5)
def forward(self, x):
x = self.linear(x)
return x
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(5, 5)
self.sub = Submodule()
def forward(self, x):
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
def forward(self, x):
x = self.linear(x)
x = self.sub(x) + x
return x
# initialize a floating point model
float_model = M().eval()
# define calibration function
def calibrate(model, data_loader):
model.eval()
with torch.no_grad():
for image, target in data_loader:
model(image)
# qconfig is the configuration for how we insert observers for a particular
# operator
# qconfig = get_default_qconfig("fbgemm")
# Example of customizing qconfig:
# qconfig = torch.ao.quantization.QConfig(
# activation=MinMaxObserver.with_args(dtype=torch.qint8),
# weight=MinMaxObserver.with_args(dtype=torch.qint8))
# `activation` and `weight` are constructors of observer module
# qconfig_mapping is a collection of quantization configurations, user can
# set the qconfig for each operator (torch op calls, functional calls, module calls)
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
# in the model through qconfig_mapping
# the following call will get the qconfig_mapping that works best for models
# that target "fbgemm" backend
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
# We can customize qconfig_mapping in different ways.
# e.g. set the global qconfig, which means we will use the same qconfig for
# all operators in the model, this can be overwritten by other settings
# qconfig_mapping = QConfigMapping().set_global(qconfig)
# e.g. quantize the linear submodule with a specific qconfig
# qconfig_mapping = QConfigMapping().set_module_name("linear", qconfig)
# e.g. quantize all nn.Linear modules with a specific qconfig
# qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)
# for a more complete list, please see the docstring for :class:`torch.ao.quantization.QConfigMapping`
# argument
# example_inputs is a tuple of inputs, that is used to infer the type of the
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
# outputs in the model
# currently it's not used, but please make sure model(*example_inputs) runs
example_inputs = (torch.randn(1, 3, 224, 224),)
# TODO: add backend_config after we split the backend_config for fbgemm and qnnpack
# e.g. backend_config = get_default_backend_config("fbgemm")
# `prepare_fx` inserts observers in the model based on qconfig_mapping and
# backend_config. If the configuration for an operator in qconfig_mapping
# is supported in the backend_config (meaning it's supported by the target
# hardware), we'll insert observer modules according to the qconfig_mapping
# otherwise the configuration in qconfig_mapping will be ignored
#
# Example:
# in qconfig_mapping, user sets linear module to be quantized with quint8 for
# activation and qint8 for weight:
# qconfig = torch.ao.quantization.QConfig(
# observer=MinMaxObserver.with_args(dtype=torch.quint8),
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
#    weight=MinMaxObserver.with_args(dtype=torch.qint8))
# Note: current qconfig api does not support setting output observer, but
# we may extend this to support these more fine grained control in the
# future
#
# qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)
# in backend config, linear module also supports in this configuration:
# weighted_int8_dtype_config = DTypeConfig(
# input_dtype=torch.quint8,
# output_dtype=torch.quint8,
# weight_dtype=torch.qint8,
# bias_type=torch.float)
# linear_pattern_config = BackendPatternConfig(torch.nn.Linear) \
# .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \
# .add_dtype_config(weighted_int8_dtype_config) \
# ...
# backend_config = BackendConfig().set_backend_pattern_config(linear_pattern_config)
# `prepare_fx` will check that the setting requested by the user in qconfig_mapping
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
# is supported by the backend_config and insert observers and fake quant modules
# in the model
prepared_model = prepare_fx(float_model, qconfig_mapping, example_inputs)
# Run calibration
calibrate(prepared_model, sample_inference_data)
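    # For context (a hedged sketch, not part of the original example): after
    # calibration, the observed model is typically converted to an actual
    # quantized model with `convert_fx`
    from torch.ao.quantization.quantize_fx import convert_fx
    quantized_model = convert_fx(prepared_model)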
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html | pytorch docs |
ExponentialLR
class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False)
Decays the learning rate of each parameter group by gamma every
epoch. When last_epoch=-1, sets initial lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **gamma** (*float*) -- Multiplicative factor of learning rate
decay.
* **last_epoch** (*int*) -- The index of last epoch. Default:
-1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict() | https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html | pytorch docs |
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
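A minimal usage sketch (not from the original page; the model and the
training step are placeholders):

    import torch
    from torch.optim.lr_scheduler import ExponentialLR

    model = torch.nn.Linear(10, 2)                      # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scheduler = ExponentialLR(optimizer, gamma=0.9)

    for epoch in range(3):
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).sum()          # placeholder loss
        loss.backward()
        optimizer.step()
        scheduler.step()                                # lr <- lr * gamma

    print(scheduler.get_last_lr())                      # roughly [0.0729]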
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html | pytorch docs |
torch.Tensor.logdet
Tensor.logdet() -> Tensor
See "torch.logdet()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logdet.html | pytorch docs |
torch.Tensor.log1p_
Tensor.log1p_() -> Tensor
In-place version of "log1p()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log1p_.html | pytorch docs |