text | source | category
---|---|---|
torch._foreach_round
torch._foreach_round(self: List[Tensor]) -> List[Tensor]
Apply "torch.round()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_round.html | pytorch docs |
LeakyReLU
class torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)
Applies the element-wise function:
\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} *
\min(0, x)
or
\text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\
\text{negative\_slope} \times x, & \text{ otherwise }
\end{cases}
Parameters:
* negative_slope (float) -- Controls the angle of the
negative slope. Default: 1e-2
* **inplace** (*bool*) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of additional
dimensions
* Output: (*), same shape as the input
Examples:
>>> m = nn.LeakyReLU(0.1)
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html | pytorch docs |
torch.Tensor.isreal
Tensor.isreal() -> Tensor
See "torch.isreal()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isreal.html | pytorch docs |
torch.nn.functional.leaky_relu
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) -> Tensor
Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) +
\text{negative_slope} * \min(0, x)
See "LeakyReLU" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu.html | pytorch docs |
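A brief illustrative sketch for "torch.nn.functional.leaky_relu" (not part of the linked page):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-1.0, 0.0, 2.0])
>>> F.leaky_relu(x, negative_slope=0.1)   # negative inputs are scaled by 0.1
tensor([-0.1000,  0.0000,  2.0000])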
torch.Tensor.fill_diagonal_
Tensor.fill_diagonal_(fill_value, wrap=False) -> Tensor
Fill the main diagonal of a tensor that has at least 2 dimensions.
When dims>2, all dimensions of input must be of equal length. This
function modifies the input tensor in-place, and returns the input
tensor.
Parameters:
* fill_value (Scalar) -- the fill value
* **wrap** (*bool*) -- whether the diagonal is 'wrapped' after N
      columns for tall matrices.
Example:
>>> a = torch.zeros(3, 3)
>>> a.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]])
>>> b = torch.zeros(7, 3)
>>> b.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> c = torch.zeros(7, 3)
>>> c.fill_diagonal_(5, wrap=True)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]]) | https://pytorch.org/docs/stable/generated/torch.Tensor.fill_diagonal_.html | pytorch docs |
torch.Tensor.acos
Tensor.acos() -> Tensor
See "torch.acos()" | https://pytorch.org/docs/stable/generated/torch.Tensor.acos.html | pytorch docs |
ConstantLR
class torch.optim.lr_scheduler.ConstantLR(optimizer, factor=0.3333333333333333, total_iters=5, last_epoch=- 1, verbose=False)
Decays the learning rate of each parameter group by a small
constant factor until the number of epochs reaches a pre-defined
milestone: total_iters. Notice that such decay can happen
simultaneously with other changes to the learning rate from outside
this scheduler. When last_epoch=-1, sets initial lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **factor** (*float*) -- The factor by which the learning rate
      is multiplied until the milestone. Default: 1./3.
* **total_iters** (*int*) -- The number of steps that the
scheduler decays the learning rate. Default: 5.
* **last_epoch** (*int*) -- The index of the last epoch.
Default: -1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]- | https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html | pytorch docs |
# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.025   if epoch == 0
# lr = 0.025   if epoch == 1
# lr = 0.025   if epoch == 2
# lr = 0.025   if epoch == 3
# lr = 0.05    if epoch >= 4
scheduler = ConstantLR(self.opt, factor=0.5, total_iters=4)
for epoch in range(100):
    train(...)
    validate(...)
    scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the scheduler's state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html | pytorch docs |
BNReLU2d
class torch.ao.nn.intrinsic.quantized.BNReLU2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)
A BNReLU2d module is a fused module of BatchNorm2d and ReLU
We adopt the same interface as "torch.ao.nn.quantized.BatchNorm2d".
Variables:
Same as "torch.ao.nn.quantized.BatchNorm2d" | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU2d.html | pytorch docs |
freeze_bn_stats
class torch.ao.nn.intrinsic.qat.freeze_bn_stats(mod) | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.freeze_bn_stats.html | pytorch docs |
ConvertCustomConfig
class torch.ao.quantization.fx.custom_config.ConvertCustomConfig
Custom configuration for "convert_fx()".
Example usage:
convert_custom_config = ConvertCustomConfig() \
    .set_observed_to_quantized_mapping(ObservedCustomModule, QuantizedCustomModule) \
    .set_preserved_attributes(["attr1", "attr2"])
classmethod from_dict(convert_custom_config_dict)
Create a "ConvertCustomConfig" from a dictionary with the
following items:
"observed_to_quantized_custom_module_class": a nested
dictionary mapping from quantization mode to an inner mapping
from observed module classes to quantized module classes,
e.g.:: { "static": {FloatCustomModule: ObservedCustomModule},
"dynamic": {FloatCustomModule: ObservedCustomModule},
"weight_only": {FloatCustomModule: ObservedCustomModule} }
"preserved_attributes": a list of attributes that persist
even if they are not used in "forward"
This function is primarily for backward compatibility and may be
removed in the future.
Return type:
*ConvertCustomConfig*
set_observed_to_quantized_mapping(observed_class, quantized_class, quant_type=QuantType.STATIC)
Set the mapping from a custom observed module class to a custom
quantized module class.
The quantized module class must have a "from_observed" class
method that converts the observed module class to the quantized
module class.
Return type:
*ConvertCustomConfig*
set_preserved_attributes(attributes)
Set the names of the attributes that will persist in the graph
module even if they are not used in the model's "forward"
method.
Return type:
*ConvertCustomConfig*
to_dict()
Convert this "ConvertCustomConfig" to a dictionary with the
items described in "from_dict()".
Return type:
*Dict*[str, *Any*]
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.ConvertCustomConfig.html | pytorch docs |
torch.Tensor.angle
Tensor.angle() -> Tensor
See "torch.angle()" | https://pytorch.org/docs/stable/generated/torch.Tensor.angle.html | pytorch docs |
torch.set_default_tensor_type
torch.set_default_tensor_type(t)
Sets the default "torch.Tensor" type to floating point tensor type
"t". This type will also be used as default floating point type for
type inference in "torch.tensor()".
The default floating point tensor type is initially
"torch.FloatTensor".
Parameters:
t (type or string) -- the floating point tensor type
or its name
Example:
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_tensor_type(torch.DoubleTensor)
>>> torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64
| https://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html | pytorch docs |
PairwiseDistance
class torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False)
Computes the pairwise distance between input vectors, or between
columns of input matrices.
Distances are computed using "p"-norm, with constant "eps" added to
avoid division by zero if "p" is negative, i.e.:
\mathrm{dist}\left(x, y\right) = \left\Vert x-y + \epsilon e
\right\Vert_p,
where e is the vector of ones and the "p"-norm is given by.
\Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p
\right) ^ {1/p}.
Parameters:
* p (real, optional) -- the norm degree. Can be
negative. Default: 2
* **eps** (*float**, **optional*) -- Small value to avoid
division by zero. Default: 1e-6
* **keepdim** (*bool**, **optional*) -- Determines whether or
not to keep the vector dimension. Default: False
Shape:
* Input1: (N, D) or (D) where N = batch dimension and D =
vector dimension | https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html | pytorch docs |
* Input2: (N, D) or (D), same shape as the Input1
* Output: (N) or () based on input dimension. If "keepdim" is
"True", then (N, 1) or (1) based on input dimension.
Examples::
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = pdist(input1, input2) | https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html | pytorch docs |
torch.fft.ifftn
torch.fft.ifftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor
Computes the N dimensional inverse discrete Fourier transform of
"input".
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the IFFT. If a length "-1" is specified, no padding
is done in that dimension. Default: "s = [input.size(d) for d
in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: all dimensions, or the last "len(s)"
dimensions if "s" is given.
| https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html | pytorch docs |
* **norm** (*str**, **optional*) --
Normalization mode. For the backward transform ("ifftn()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the IFFT
orthonormal)
Where "n = prod(s)" is the logical IFFT size. Calling the
forward transform ("fftn()") with the same normalization mode
will apply an overall normalization of "1/n" between the two
transforms. This is required to make "ifftn()" the exact
inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
x = torch.rand(10, 10, dtype=torch.complex64)
ifftn = torch.fft.ifftn(x)
The discrete Fourier transform is separable, so "ifftn()" here is
equivalent to two one-dimensional "ifft()" calls: | https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html | pytorch docs |
two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
torch.testing.assert_close(ifftn, two_iffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html | pytorch docs |
torch.Tensor.ldexp
Tensor.ldexp(other) -> Tensor
See "torch.ldexp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp.html | pytorch docs |
torch.nn.functional.lp_pool1d
torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False)
Applies a 1D power-average pooling over an input signal composed of
several input planes. If the sum of all inputs to the power of p
is zero, the gradient is set to zero as well.
See "LPPool1d" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool1d.html | pytorch docs |
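A shape-oriented sketch for "torch.nn.functional.lp_pool1d" (not part of the linked page); with kernel_size=4 and the default stride (equal to kernel_size), the length dimension shrinks from 32 to 8:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 4, 32)                           # (batch, channels, length)
>>> F.lp_pool1d(x, norm_type=2, kernel_size=4).shape
torch.Size([1, 4, 8])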
torch.frexp
torch.frexp(input, *, out=None) -> (Tensor mantissa, Tensor exponent)
Decomposes "input" into mantissa and exponent tensors such that
\text{input} = \text{mantissa} \times 2^{\text{exponent}}.
The range of mantissa is the open interval (-1, 1).
Supports float inputs.
Parameters:
input (Tensor) -- the input tensor
Keyword Arguments:
out (tuple, optional) -- the output tensors
Example:
>>> x = torch.arange(9.)
>>> mantissa, exponent = torch.frexp(x)
>>> mantissa
tensor([0.0000, 0.5000, 0.5000, 0.7500, 0.5000, 0.6250, 0.7500, 0.8750, 0.5000])
>>> exponent
tensor([0, 1, 2, 2, 3, 3, 3, 3, 4], dtype=torch.int32)
>>> torch.ldexp(mantissa, exponent)
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8.])
| https://pytorch.org/docs/stable/generated/torch.frexp.html | pytorch docs |
torch.vsplit
torch.vsplit(input, indices_or_sections) -> List of Tensors
Splits "input", a tensor with two or more dimensions, into multiple
tensors vertically according to "indices_or_sections". Each split
is a view of "input".
This is equivalent to calling torch.tensor_split(input,
indices_or_sections, dim=0) (the split dimension is 0), except that
if "indices_or_sections" is an integer it must evenly divide the
split dimension or a runtime error will be thrown.
This function is based on NumPy's "numpy.vsplit()".
Parameters:
* input (Tensor) -- tensor to split.
* **indices_or_sections** (*int** or **list** or **tuple of
ints*) -- See argument in "torch.tensor_split()".
Example::
>>> t = torch.arange(16.0).reshape(4,4)
>>> t
tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]])
>>> torch.vsplit(t, 2) | https://pytorch.org/docs/stable/generated/torch.vsplit.html | pytorch docs |
(tensor([[0., 1., 2., 3.],
[4., 5., 6., 7.]]),
tensor([[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]))
>>> torch.vsplit(t, [3, 6])
(tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]]),
tensor([[12., 13., 14., 15.]]),
tensor([], size=(0, 4)))
| https://pytorch.org/docs/stable/generated/torch.vsplit.html | pytorch docs |
no_grad
class torch.no_grad
Context-manager that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you
are sure that you will not call "Tensor.backward()". It will reduce
memory consumption for computations that would otherwise have
requires_grad=True.
In this mode, the result of every computation will have
requires_grad=False, even when the inputs have
requires_grad=True.
This context manager is thread local; it will not affect
computation in other threads.
Also functions as a decorator. (Make sure to instantiate with
parentheses.)
Note:
No-grad is one of several mechanisms that can enable or disable
gradients locally; see Locally disabling gradient computation for
more information on how they compare.
Note:
This API does not apply to forward-mode AD. If you want to
disable forward AD for a computation, you can unpack your dual
tensors.
Example:: | https://pytorch.org/docs/stable/generated/torch.no_grad.html | pytorch docs |
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
... return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False | https://pytorch.org/docs/stable/generated/torch.no_grad.html | pytorch docs |
ConvReLU1d
class torch.ao.nn.intrinsic.quantized.ConvReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
A ConvReLU1d module is a fused module of Conv1d and ReLU
We adopt the same interface as "torch.ao.nn.quantized.Conv1d".
Variables:
Same as "torch.ao.nn.quantized.Conv1d" | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU1d.html | pytorch docs |
torch.nn.functional.rrelu
torch.nn.functional.rrelu(input, lower=1. / 8, upper=1. / 3, training=False, inplace=False) -> Tensor
Randomized leaky ReLU.
See "RReLU" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu.html | pytorch docs |
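An illustrative sketch for "torch.nn.functional.rrelu" (not part of the linked page); in training mode the negative slope is sampled uniformly from [lower, upper], so negative outputs are random while positive values pass through unchanged:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-2.0, 0.5])
>>> out = F.rrelu(x, lower=0.1, upper=0.3, training=True)
>>> out[1]                 # positive input is unchanged
tensor(0.5000)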
InstanceNorm1d
class torch.ao.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)
This is the quantized version of "InstanceNorm1d".
Additional args:
* scale - quantization scale of the output, type: double.
* **zero_point** - quantization zero point of the output, type:
long.
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm1d.html | pytorch docs |
torch.cuda.max_memory_reserved
torch.cuda.max_memory_reserved(device=None)
Returns the maximum GPU memory managed by the caching allocator in
bytes for a given device.
By default, this returns the peak cached memory since the beginning
of this program. "reset_peak_memory_stats()" can be used to reset
the starting point in tracking this metric. For example, these two
functions can measure the peak cached memory amount of each
iteration in a training loop.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_reserved.html | pytorch docs |
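A hedged sketch for "torch.cuda.max_memory_reserved" (not part of the linked page), guarded so it only runs when a CUDA device is available:
>>> import torch
>>> if torch.cuda.is_available():
...     _ = torch.randn(1024, 1024, device="cuda")     # allocate something on the GPU
...     peak = torch.cuda.max_memory_reserved()        # peak bytes reserved by the caching allocator
...     print(f"peak reserved: {peak / 2**20:.1f} MiB")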
torch.nn.functional.affine_grid
torch.nn.functional.affine_grid(theta, size, align_corners=None)
Generates a 2D or 3D flow field (sampling grid), given a batch of
affine matrices "theta".
Note:
This function is often used in conjunction with "grid_sample()"
to build Spatial Transformer Networks .
Parameters:
* theta (Tensor) -- input batch of affine matrices with
shape (N \times 2 \times 3) for 2D or (N \times 3 \times 4)
for 3D
* **size** (*torch.Size*) -- the target output image size. (N
\times C \times H \times W for 2D or N \times C \times D
\times H \times W for 3D) Example: torch.Size((32, 3, 24, 24))
* **align_corners** (*bool**, **optional*) -- if "True",
consider "-1" and "1" to refer to the centers of the corner
pixels rather than the image corners. Refer to "grid_sample()"
for a more complete description. A grid generated by
"affine_grid()" should be passed to "grid_sample()" with the
same setting for this option. Default: "False"
Returns:
output Tensor of size (N \times H \times W \times 2)
Return type:
output (Tensor)
Warning:
When "align_corners = True", the grid positions depend on the
pixel size relative to the input image size, and so the locations
sampled by "grid_sample()" will differ for the same input given
at different resolutions (that is, after being upsampled or
downsampled). The default behavior up to version 1.2.0 was
"align_corners = True". Since then, the default behavior has been
changed to "align_corners = False", in order to bring it in line
with the default for "interpolate()".
Warning:
When "align_corners = True", 2D affine transforms on 1D data and
3D affine transforms on 2D data (that is, when one of the spatial
dimensions has unit size) are ill-defined, and not an intended
use case. This is not a problem when "align_corners = False". Up
to version 1.2.0, all grid points along a unit dimension were
considered arbitrarily to be at "-1". From version 1.3.0, under
"align_corners = True" all grid points along a unit dimension are
considered to be at "0" (the center of the input image). | https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html | pytorch docs |
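A minimal sketch pairing "affine_grid()" with "grid_sample()" (not part of the linked page); the identity theta used here is an illustrative assumption:
>>> import torch
>>> import torch.nn.functional as F
>>> theta = torch.tensor([[[1.0, 0.0, 0.0],
...                        [0.0, 1.0, 0.0]]])          # identity 2D affine matrix, shape (1, 2, 3)
>>> grid = F.affine_grid(theta, size=torch.Size((1, 3, 8, 8)), align_corners=False)
>>> img = torch.randn(1, 3, 8, 8)
>>> F.grid_sample(img, grid, align_corners=False).shape
torch.Size([1, 3, 8, 8])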
torch.Tensor.to_sparse_coo
Tensor.to_sparse_coo()
Convert a tensor to coordinate format.
Examples:
>>> dense = torch.randn(5, 5)
>>> sparse = dense.to_sparse_coo()
>>> sparse._nnz()
25
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_coo.html | pytorch docs |
torch.Tensor.negative_
Tensor.negative_() -> Tensor
In-place version of "negative()" | https://pytorch.org/docs/stable/generated/torch.Tensor.negative_.html | pytorch docs |
torch.Tensor.expand_as
Tensor.expand_as(other) -> Tensor
Expand this tensor to the same size as "other".
"self.expand_as(other)" is equivalent to
"self.expand(other.size())".
Please see "expand()" for more information about "expand".
Parameters:
other ("torch.Tensor") -- The result tensor has the same
size as "other". | https://pytorch.org/docs/stable/generated/torch.Tensor.expand_as.html | pytorch docs |
torch.erf
torch.erf(input, *, out=None) -> Tensor
Alias for "torch.special.erf()". | https://pytorch.org/docs/stable/generated/torch.erf.html | pytorch docs |
torch.cuda.get_rng_state
torch.cuda.get_rng_state(device='cuda')
Returns the random number generator state of the specified GPU as a
ByteTensor.
Parameters:
device (torch.device or int, optional) -- The
device to return the RNG state of. Default: "'cuda'" (i.e.,
"torch.device('cuda')", the current CUDA device).
Return type:
Tensor
Warning:
This function eagerly initializes CUDA.
| https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state.html | pytorch docs |
torch.Tensor.diff
Tensor.diff(n=1, dim=- 1, prepend=None, append=None) -> Tensor
See "torch.diff()" | https://pytorch.org/docs/stable/generated/torch.Tensor.diff.html | pytorch docs |
torch.range
torch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a 1-D tensor of size \left\lfloor \frac{\text{end} -
\text{start}}{\text{step}} \right\rfloor + 1 with values from
"start" to "end" with step "step". Step is the gap between two
values in the tensor.
\text{out}_{i+1} = \text{out}_i + \text{step}.
Warning:
This function is deprecated and will be removed in a future
release because its behavior is inconsistent with Python's range
builtin. Instead, use "torch.arange()", which produces values in
[start, end).
Parameters:
* start (float) -- the starting value for the set of
points. Default: "0".
* **end** (*float*) -- the ending value for the set of points
* **step** (*float*) -- the gap between each pair of adjacent
points. Default: "1".
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.range.html | pytorch docs |
* out (Tensor, optional) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()"). If *dtype* is not
given, infer the data type from the other input arguments. If
any of *start*, *end*, or *step* are floating-point, the
*dtype* is inferred to be the default dtype, see
"get_default_dtype()". Otherwise, the *dtype* is inferred to
be *torch.int64*.
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.range(1, 4)
tensor([ 1., 2., 3., 4.])
>>> torch.range(1, 4, 0.5)
tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000])
| https://pytorch.org/docs/stable/generated/torch.range.html | pytorch docs |
ConvBnReLU1d
class torch.ao.nn.intrinsic.qat.ConvBnReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)
A ConvBnReLU1d module is a module fused from Conv1d, BatchNorm1d
and ReLU, attached with FakeQuantize modules for weight, used in
quantization aware training.
We combined the interface of "torch.nn.Conv1d" and
"torch.nn.BatchNorm1d" and "torch.nn.ReLU".
Similar to torch.nn.Conv1d, with FakeQuantize modules initialized
to default.
Variables:
weight_fake_quant -- fake quant module for weight | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU1d.html | pytorch docs |
torch.rand_like
torch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns a tensor with the same size as "input" that is filled with
random numbers from a uniform distribution on the interval [0, 1).
"torch.rand_like(input)" is equivalent to "torch.rand(input.size(),
dtype=input.dtype, layout=input.layout, device=input.device)".
Parameters:
input (Tensor) -- the size of "input" will determine size
of the output tensor.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: if "None", defaults to the dtype
of "input".
* **layout** ("torch.layout", optional) -- the desired layout of
returned tensor. Default: if "None", defaults to the layout of
"input".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of
"input".
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **memory_format** ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
| https://pytorch.org/docs/stable/generated/torch.rand_like.html | pytorch docs |
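A short sketch for "torch.rand_like" (not part of the linked page), showing that the size and dtype are taken from the reference tensor:
>>> import torch
>>> x = torch.empty(2, 3, dtype=torch.float64)
>>> r = torch.rand_like(x)               # uniform samples in [0, 1) with x's size and dtype
>>> r.shape, r.dtype
(torch.Size([2, 3]), torch.float64)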
torch.orgqr
torch.orgqr(input, tau) -> Tensor
Alias for "torch.linalg.householder_product()". | https://pytorch.org/docs/stable/generated/torch.orgqr.html | pytorch docs |
torch.Tensor.detach
Tensor.detach()
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
This method also affects forward mode AD gradients and the result
will never have forward mode AD gradients.
Note:
Returned Tensor shares the same storage with the original one.
In-place modifications on either of them will be seen, and may
trigger errors in correctness checks. IMPORTANT NOTE: Previously,
in-place size / stride / storage changes (such as *resize_* /
*resize_as_* / *set_* / *transpose_*) to the returned tensor also
update the original tensor. Now, these in-place changes will not
update the original tensor anymore, and will instead trigger an
error. For sparse tensors: In-place indices / values changes
(such as *zero_* / *copy_* / *add_*) to the returned tensor will
not update the original tensor anymore, and will instead trigger
an error.
| https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html | pytorch docs |
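A small sketch for "Tensor.detach" (not part of the linked page), showing the shared storage described above:
>>> import torch
>>> x = torch.tensor([1.0, 2.0], requires_grad=True)
>>> y = x.detach()
>>> y.requires_grad
False
>>> y[0] = 10.0      # in-place change is visible through x (shared storage)
>>> x
tensor([10.,  2.], requires_grad=True)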
torch.clip
torch.clip(input, min=None, max=None, *, out=None) -> Tensor
Alias for "torch.clamp()". | https://pytorch.org/docs/stable/generated/torch.clip.html | pytorch docs |
torch.Tensor.histogram
Tensor.histogram(input, bins, *, range=None, weight=None, density=False)
See "torch.histogram()" | https://pytorch.org/docs/stable/generated/torch.Tensor.histogram.html | pytorch docs |
CUDAGraph
class torch.cuda.CUDAGraph
Wrapper around a CUDA graph.
Warning:
This API is in beta and may change in future releases.
capture_begin(pool=None)
Begins capturing CUDA work on the current stream.
Typically, you shouldn't call "capture_begin" yourself. Use
"graph" or "make_graphed_callables()", which call
"capture_begin" internally.
Parameters:
**pool** (*optional*) -- Token (returned by
"graph_pool_handle()" or "other_Graph_instance.pool()") that
hints this graph may share memory with the indicated pool.
See Graph memory management.
capture_end()
Ends CUDA graph capture on the current stream. After
"capture_end", "replay" may be called on this instance.
Typically, you shouldn't call "capture_end" yourself. Use
"graph" or "make_graphed_callables()", which call "capture_end"
internally.
debug_dump(debug_path)
Parameters:
**debug_path** (*required*) -- Path to dump the graph to.
Calls a debugging function to dump the graph if the debugging is
enabled via CUDAGraph.enable_debug_mode()
enable_debug_mode()
Enables debugging mode for CUDAGraph.debug_dump.
pool()
Returns an opaque token representing the id of this graph's
memory pool. This id can optionally be passed to another graph's
"capture_begin", which hints the other graph may share the same
memory pool.
replay()
Replays the CUDA work captured by this graph.
reset()
Deletes the graph currently held by this instance.
| https://pytorch.org/docs/stable/generated/torch.cuda.CUDAGraph.html | pytorch docs |
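A minimal capture/replay sketch (not part of the linked page); it assumes a CUDA device and uses the "torch.cuda.graph" context manager, which calls "capture_begin"/"capture_end" internally. The warmup on a side stream recommended in the CUDA Graphs documentation is omitted for brevity:
>>> import torch
>>> g = torch.cuda.CUDAGraph()
>>> x = torch.zeros(4, device="cuda")     # static input tensor
>>> with torch.cuda.graph(g):
...     y = x + 1                         # work recorded into the graph
>>> x.fill_(3)                            # update the static input in place
>>> g.replay()                            # re-runs the captured kernels
>>> y
tensor([4., 4., 4., 4.], device='cuda:0')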
torch.are_deterministic_algorithms_enabled
torch.are_deterministic_algorithms_enabled()
Returns True if the global deterministic flag is turned on. Refer
to "torch.use_deterministic_algorithms()" documentation for more
details. | https://pytorch.org/docs/stable/generated/torch.are_deterministic_algorithms_enabled.html | pytorch docs |
torch.Tensor.squeeze
Tensor.squeeze(dim=None) -> Tensor
See "torch.squeeze()" | https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze.html | pytorch docs |
Softmax2d
class torch.nn.Softmax2d
Applies SoftMax over features to each spatial location.
When given an image of "Channels x Height x Width", it will apply
Softmax to each location (Channels, h_i, w_j)
Shape:
* Input: (N, C, H, W) or (C, H, W).
* Output: (N, C, H, W) or (C, H, W) (same shape as input)
Returns:
a Tensor of the same dimension and shape as the input with
values in the range [0, 1]
Return type:
None
Examples:
>>> m = nn.Softmax2d()
>>> # you softmax over the 2nd dimension
>>> input = torch.randn(2, 3, 12, 13)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Softmax2d.html | pytorch docs |
torch.chunk
torch.chunk(input, chunks, dim=0) -> List of Tensors
Attempts to split a tensor into the specified number of chunks.
Each chunk is a view of the input tensor.
Note:
This function may return fewer than the specified number of
chunks!
See also:
"torch.tensor_split()" a function that always returns exactly the
specified number of chunks
If the tensor size along the given dimension "dim" is divisible by
"chunks", all returned chunks will be the same size. If the tensor
size along the given dimension "dim" is not divisible by "chunks",
all returned chunks will be the same size, except the last one. If
such division is not possible, this function may return fewer than
the specified number of chunks.
Parameters:
* input (Tensor) -- the tensor to split
* **chunks** (*int*) -- number of chunks to return
* **dim** (*int*) -- dimension along which to split the tensor
-[ Example ]- | https://pytorch.org/docs/stable/generated/torch.chunk.html | pytorch docs |
torch.arange(11).chunk(6)
(tensor([0, 1]),
tensor([2, 3]),
tensor([4, 5]),
tensor([6, 7]),
tensor([8, 9]),
tensor([10]))
torch.arange(12).chunk(6)
(tensor([0, 1]),
tensor([2, 3]),
tensor([4, 5]),
tensor([6, 7]),
tensor([8, 9]),
tensor([10, 11]))
torch.arange(13).chunk(6)
(tensor([0, 1, 2]),
tensor([3, 4, 5]),
tensor([6, 7, 8]),
tensor([ 9, 10, 11]),
tensor([12]))
| https://pytorch.org/docs/stable/generated/torch.chunk.html | pytorch docs |
torch.nn.functional.group_norm
torch.nn.functional.group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)
Applies Group Normalization over a mini-batch of inputs.
See "GroupNorm" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.group_norm.html | pytorch docs |
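A shape-only sketch for "torch.nn.functional.group_norm" (not part of the linked page); six channels are normalized in three groups of two:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(2, 6, 4)              # (batch, channels, length)
>>> F.group_norm(x, num_groups=3).shape   # channels must be divisible by num_groups
torch.Size([2, 6, 4])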
SGD
class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False, *, maximize=False, foreach=None, differentiable=False)
Implements stochastic gradient descent (optionally with momentum).
\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)}, \: \lambda \text{ (weight decay)}, \\
    &\hspace{13mm} \: \mu \text{ (momentum)}, \: \tau \text{ (dampening)}, \: \textit{nesterov}, \: \textit{maximize} \\[-1.ex]
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} \textbf{if} \: \mu \neq 0 \\
    &\hspace{10mm} \textbf{if} \: t > 1 \\
    &\hspace{15mm} \textbf{b}_t \leftarrow \mu \textbf{b}_{t-1} + (1-\tau) g_t \\
    &\hspace{10mm} \textbf{else} \\
    &\hspace{15mm} \textbf{b}_t \leftarrow g_t \\
    &\hspace{10mm} \textbf{if} \: \textit{nesterov} \\
    &\hspace{15mm} g_t \leftarrow g_t + \mu \textbf{b}_t \\
    &\hspace{10mm} \textbf{else} \\[-1.ex]
    &\hspace{15mm} g_t \leftarrow \textbf{b}_t \\
    &\hspace{5mm} \textbf{if} \: \textit{maximize} \\
    &\hspace{10mm} \theta_t \leftarrow \theta_{t-1} + \gamma g_t \\[-1.ex]
    &\hspace{5mm} \textbf{else} \\[-1.ex]
    &\hspace{10mm} \theta_t \leftarrow \theta_{t-1} - \gamma g_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
    &\bf{return} \: \theta_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}
Nesterov momentum is based on the formula from On the importance of
initialization and momentum in deep learning.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float*) -- learning rate
* **momentum** (*float**, **optional*) -- momentum factor
(default: 0)
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **dampening** (*float**, **optional*) -- dampening for
momentum (default: 0)
* **nesterov** (*bool**, **optional*) -- enables Nesterov
momentum (default: False)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used (default: None)
| https://pytorch.org/docs/stable/generated/torch.optim.SGD.html | pytorch docs |
differentiable (bool, optional) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
-[ Example ]-
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
Note:
The implementation of SGD with Momentum/Nesterov subtly differs
from Sutskever et. al. and implementations in some other
frameworks. Considering the specific case of Momentum, the update
can be written as
\begin{aligned} v_{t+1} & = \mu * v_{t} + g_{t+1}, \\
p_{t+1} & = p_{t} - \text{lr} * v_{t+1}, \end{aligned}
where p, g, v and \mu denote the parameters, gradient, velocity,
| https://pytorch.org/docs/stable/generated/torch.optim.SGD.html | pytorch docs |
and momentum respectively.This is in contrast to Sutskever et.
al. and other frameworks which employ an update of the form
\begin{aligned} v_{t+1} & = \mu * v_{t} + \text{lr} *
g_{t+1}, \\ p_{t+1} & = p_{t} - v_{t+1}. \end{aligned}
The Nesterov version is analogously modified. Moreover, the
initial value of the momentum buffer is set to the gradient value
at the first step. This is in contrast to some other frameworks
that initialize it to all zeros.
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
state_dict (dict) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
| https://pytorch.org/docs/stable/generated/torch.optim.SGD.html | pytorch docs |
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where
each
parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.SGD.html | pytorch docs |
torch.std
torch.std(input, dim=None, *, correction=1, keepdim=False, out=None) -> Tensor
Calculates the standard deviation over the dimensions specified by
"dim". "dim" can be a single dimension, list of dimensions, or
"None" to reduce over all dimensions.
The standard deviation (\sigma) is calculated as
\sigma = \sqrt{\frac{1}{N - \delta
N}\sum_{i=0}^{N-1}(x_i-\bar{x})^2}
where x is the sample set of elements, \bar{x} is the sample mean,
N is the number of samples and \delta N is the "correction".
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints*) -- the dimension or
dimensions to reduce.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.std.html | pytorch docs |
* correction (int) --
difference between the sample size and sample degrees of
freedom. Defaults to Bessel's correction, "correction=1".
Changed in version 2.0: Previously this argument was called
"unbiased" and was a boolean with "True" corresponding to
"correction=1" and "False" being "correction=0".
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
* **out** (*Tensor**, **optional*) -- the output tensor.
-[ Example ]-
a = torch.tensor(
... [[ 0.2035, 1.2959, 1.8101, -0.4644],
... [ 1.5027, -0.3270, 0.5905, 0.6538],
... [-1.5745, 1.3330, -0.5596, -0.6548],
... [ 0.1264, -0.5080, 1.6420, 0.1992]])
torch.std(a, dim=1, keepdim=True)
tensor([[1.0311],
[0.7477],
[1.2204],
[0.9087]])
| https://pytorch.org/docs/stable/generated/torch.std.html | pytorch docs |
torch.linalg.inv
torch.linalg.inv(A, *, out=None) -> Tensor
Computes the inverse of a square matrix if it exists. Throws a
RuntimeError if the matrix is not invertible.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, for a matrix A \in
\mathbb{K}^{n \times n}, its inverse matrix A^{-1} \in
\mathbb{K}^{n \times n} (if it exists) is defined as
A^{-1}A = AA^{-1} = \mathrm{I}_n
where \mathrm{I}_n is the n-dimensional identity matrix.
The inverse matrix exists if and only if A is invertible. In this
case, the inverse is unique.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU.
Note:
Consider using "torch.linalg.solve()" if possible for multiplying
a matrix on the left by the inverse, as:
linalg.solve(A, B) == linalg.inv(A) @ B # When B is a matrix
It is always preferred to use "solve()" when possible, as it is
faster and more numerically stable than computing the inverse
explicitly.
See also:
"torch.linalg.pinv()" computes the pseudoinverse (Moore-Penrose
inverse) of matrices of any shape.
"torch.linalg.solve()" computes "A"*.inv() @ *"B" with a
numerically stable algorithm.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is
zero or more batch dimensions consisting of invertible matrices.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Raises:
RuntimeError -- if the matrix "A" or any matrix in the batch
of matrices "A" is not invertible.
Examples:
>>> A = torch.randn(4, 4)
>>> Ainv = torch.linalg.inv(A)
>>> torch.dist(A @ Ainv, torch.eye(4))
tensor(1.1921e-07)
>>> A = torch.randn(2, 3, 4, 4) # Batch of matrices
>>> Ainv = torch.linalg.inv(A)
>>> torch.dist(A @ Ainv, torch.eye(4))
tensor(1.9073e-06)
>>> A = torch.randn(4, 4, dtype=torch.complex128) # Complex matrix
>>> Ainv = torch.linalg.inv(A)
>>> torch.dist(A @ Ainv, torch.eye(4))
tensor(7.5107e-16, dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.linalg.inv.html | pytorch docs |
ChannelShuffle
class torch.nn.ChannelShuffle(groups)
Divide the channels in a tensor of shape (*, C, H, W) into g
groups and rearrange them as (*, \frac{C}{g}, g, H, W), while keeping
the original tensor shape.
Parameters:
groups (int) -- number of groups to divide channels in.
Examples:
>>> channel_shuffle = nn.ChannelShuffle(2)
>>> input = torch.randn(1, 4, 2, 2)
>>> print(input)
[[[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]],
[[9, 10],
[11, 12]],
[[13, 14],
[15, 16]],
]]
>>> output = channel_shuffle(input)
>>> print(output)
[[[[1, 2],
[3, 4]],
[[9, 10],
[11, 12]],
[[5, 6],
[7, 8]],
[[13, 14],
[15, 16]],
]]
| https://pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html | pytorch docs |
torch.gt
torch.gt(input, other, *, out=None) -> Tensor
Computes \text{input} > \text{other} element-wise.
The second argument can be a number or a tensor whose shape is
broadcastable with the first argument.
Parameters:
* input (Tensor) -- the tensor to compare
* **other** (*Tensor** or **float*) -- the tensor or value to
compare
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Returns:
A boolean tensor that is True where "input" is greater than
"other" and False elsewhere
Example:
>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [False, False]])
| https://pytorch.org/docs/stable/generated/torch.gt.html | pytorch docs |
Bilinear
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True, device=None, dtype=None)
Applies a bilinear transformation to the incoming data: y = x_1^T A
x_2 + b
Parameters:
* in1_features (int) -- size of each first input sample
* **in2_features** (*int*) -- size of each second input sample
* **out_features** (*int*) -- size of each output sample
* **bias** (*bool*) -- If set to False, the layer will not learn
an additive bias. Default: "True"
Shape:
* Input1: (*, H_{in1}) where H_{in1}=\text{in1_features} and *
means any number of additional dimensions including none. All
but the last dimension of the inputs should be the same.
* Input2: (*, H_{in2}) where H_{in2}=\text{in2\_features}.
* Output: (*, H_{out}) where H_{out}=\text{out\_features} and
all but the last dimension are the same shape as the input.
Variables: | https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html | pytorch docs |
* weight (torch.Tensor) -- the learnable weights of the
module of shape (\text{out_features}, \text{in1_features},
\text{in2_features}). The values are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}), where k =
\frac{1}{\text{in1_features}}
* **bias** -- the learnable bias of the module of shape
(\text{out\_features}). If "bias" is "True", the values are
initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k =
\frac{1}{\text{in1\_features}}
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40])
| https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html | pytorch docs |
torch.Tensor.logical_and_
Tensor.logical_and_() -> Tensor
In-place version of "logical_and()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and_.html | pytorch docs |
torch.arange
torch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a 1-D tensor of size \left\lceil \frac{\text{end} -
\text{start}}{\text{step}} \right\rceil with values from the
interval "[start, end)" taken with common difference "step"
beginning from start.
Note that non-integer "step" is subject to floating point rounding
errors when comparing against "end"; to avoid inconsistency, we
advise adding a small epsilon to "end" in such cases.
\text{out}_{{i+1}} = \text{out}_{i} + \text{step}
Parameters:
* start (Number) -- the starting value for the set of
points. Default: "0".
* **end** (*Number*) -- the ending value for the set of points
* **step** (*Number*) -- the gap between each pair of adjacent
points. Default: "1".
Keyword Arguments:
* out (Tensor, optional) -- the output tensor. | https://pytorch.org/docs/stable/generated/torch.arange.html | pytorch docs |
dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()"). If dtype is not
given, infer the data type from the other input arguments. If
any of start, end, or step are floating-point, the
dtype is inferred to be the default dtype, see
"get_default_dtype()". Otherwise, the dtype is inferred to
be torch.int64.
layout ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
device ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
| https://pytorch.org/docs/stable/generated/torch.arange.html | pytorch docs |
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.arange(5)
tensor([ 0, 1, 2, 3, 4])
>>> torch.arange(1, 4)
tensor([ 1, 2, 3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000, 1.5000, 2.0000])
| https://pytorch.org/docs/stable/generated/torch.arange.html | pytorch docs |
torch.signal.windows.hann
torch.signal.windows.hann(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the Hann window.
The Hann window is defined as follows:
w_n = \frac{1}{2}\ \left[1 - \cos \left( \frac{2 \pi n}{M - 1}
\right)\right] = \sin^2 \left( \frac{\pi n}{M - 1} \right)
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric Hann window.
>>> torch.signal.windows.hann(10)
tensor([0.0000, 0.1170, 0.4132, 0.7500, 0.9698, 0.9698, 0.7500, 0.4132, 0.1170, 0.0000])
>>> # Generates a periodic Hann window.
>>> torch.signal.windows.hann(10, sym=False)
tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html | pytorch docs |
torch.maximum
torch.maximum(input, other, *, out=None) -> Tensor
Computes the element-wise maximum of "input" and "other".
Note:
If one of the elements being compared is a NaN, then that element
is returned. "maximum()" is not supported for tensors with
complex dtypes.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor((1, 2, -1))
>>> b = torch.tensor((3, 0, 4))
>>> torch.maximum(a, b)
tensor([3, 2, 4])
| https://pytorch.org/docs/stable/generated/torch.maximum.html | pytorch docs |
strict_fusion
class torch.jit.strict_fusion
This class errors if not all nodes have been fused in inference, or
symbolically differentiated in training.
Example:
Forcing fusion of additions.
@torch.jit.script
def foo(x):
    with torch.jit.strict_fusion():
        return x + x + x
| https://pytorch.org/docs/stable/generated/torch.jit.strict_fusion.html | pytorch docs |
Linear
class torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)
Applies a linear transformation to the incoming data: y = xA^T + b
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Parameters:
* in_features (int) -- size of each input sample
* **out_features** (*int*) -- size of each output sample
* **bias** (*bool*) -- If set to "False", the layer will not
learn an additive bias. Default: "True"
Shape:
* Input: (*, H_{in}) where * means any number of dimensions
including none and H_{in} = \text{in_features}.
* Output: (*, H_{out}) where all but the last dimension are the
same shape as the input and H_{out} = \text{out\_features}.
Variables:
* weight (torch.Tensor) -- the learnable weights of the
module of shape (\text{out_features}, \text{in_features}). | https://pytorch.org/docs/stable/generated/torch.nn.Linear.html | pytorch docs |
The values are initialized from \mathcal{U}(-\sqrt{k},
\sqrt{k}), where k = \frac{1}{\text{in_features}}
* **bias** -- the learnable bias of the module of shape
(\text{out\_features}). If "bias" is "True", the values are
initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{in\_features}}
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
| https://pytorch.org/docs/stable/generated/torch.nn.Linear.html | pytorch docs |
torch.cumulative_trapezoid
torch.cumulative_trapezoid(y, x=None, *, dx=None, dim=- 1) -> Tensor
Cumulatively computes the trapezoidal rule along "dim". By default
the spacing between elements is assumed to be 1, but "dx" can be
used to specify a different constant spacing, and "x" can be used
to specify arbitrary spacing along "dim".
For more details, please read "torch.trapezoid()". The difference
between "torch.trapezoid()" and this function is that,
"torch.trapezoid()" returns a value for each integration, where as
this function returns a cumulative value for every spacing within
the integration. This is analogous to how .sum returns a value
and .cumsum returns a cumulative sum.
Parameters:
* y (Tensor) -- Values to use when computing the
trapezoidal rule.
* **x** (*Tensor*) -- If specified, defines spacing between
values as specified above.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html | pytorch docs |
Keyword Arguments:
* dx (float) -- constant spacing between values. If
      neither "x" nor "dx" is specified, this defaults to 1.
Effectively multiplies the result by its value.
* **dim** (*int*) -- The dimension along which to compute the
trapezoidal rule. The last (inner-most) dimension by default.
Examples:
>>> # Cumulatively computes the trapezoidal rule in 1D, spacing is implicitly 1.
>>> y = torch.tensor([1, 5, 10])
>>> torch.cumulative_trapezoid(y)
tensor([3., 10.5])
>>> # Computes the same trapezoidal rule directly up to each element to verify
>>> (1 + 5) / 2
3.0
>>> (1 + 10 + 10) / 2
10.5
>>> # Cumulatively computes the trapezoidal rule in 1D with constant spacing of 2
>>> # NOTE: the result is the same as before, but multiplied by 2
>>> torch.cumulative_trapezoid(y, dx=2)
tensor([6., 21.])
| https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html | pytorch docs |
>>> # Cumulatively computes the trapezoidal rule in 1D with arbitrary spacing
>>> x = torch.tensor([1, 3, 6])
>>> torch.cumulative_trapezoid(y, x)
tensor([6., 28.5])
>>> # Computes the same trapezoidal rule directly up to each element to verify
>>> ((3 - 1) * (1 + 5)) / 2
6.0
>>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2
28.5
>>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 matrix
>>> y = torch.arange(9).reshape(3, 3)
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> torch.cumulative_trapezoid(y)
tensor([[ 0.5, 2.],
[ 3.5, 8.],
[ 6.5, 14.]])
>>> # Cumulatively computes the trapezoidal rule for each column of the matrix
>>> torch.cumulative_trapezoid(y, dim=0)
tensor([[ 1.5, 2.5, 3.5],
[ 6.0, 8.0, 10.0]])
| https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html | pytorch docs |
>>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix
>>> # with the same arbitrary spacing
>>> y = torch.ones(3, 3)
>>> x = torch.tensor([1, 3, 6])
>>> torch.cumulative_trapezoid(y, x)
tensor([[2., 5.],
[2., 5.],
[2., 5.]])
>>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix
>>> # with different arbitrary spacing per row
>>> y = torch.ones(3, 3)
>>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])
>>> torch.cumulative_trapezoid(y, x)
tensor([[1., 2.],
[2., 4.],
[3., 6.]])
| https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html | pytorch docs |
BatchNorm2d
class torch.ao.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)
This is the quantized version of "BatchNorm2d". | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm2d.html | pytorch docs |
ParameterDict
class torch.nn.ParameterDict(parameters=None)
Holds parameters in a dictionary.
ParameterDict can be indexed like a regular Python dictionary, but
Parameters it contains are properly registered, and will be visible
by all Module methods. Other objects are treated as would be done
by a regular Python dictionary
"ParameterDict" is an ordered dictionary. "update()" with other
unordered mapping types (e.g., Python's plain "dict") does not
preserve the order of the merged mapping. On the other hand,
"OrderedDict" or another "ParameterDict" will preserve their
ordering.
Note that the constructor, assigning an element of the dictionary
and the "update()" method will convert any "Tensor" into
"Parameter".
Parameters:
values (iterable, optional) -- a mapping (dictionary)
of (string : Any) or an iterable of key-value pairs of type
(string, Any)
Example:
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.params = nn.ParameterDict({
'left': nn.Parameter(torch.randn(5, 10)),
'right': nn.Parameter(torch.randn(5, 10))
})
def forward(self, x, choice):
x = self.params[choice].mm(x)
return x
clear()
Remove all items from the ParameterDict.
copy()
Returns a copy of this "ParameterDict" instance.
Return type:
*ParameterDict*
fromkeys(keys, default=None)
Return a new ParameterDict with the keys provided
Parameters:
* **keys** (*iterable**, **string*) -- keys to make the new
ParameterDict from
* **default** (*Parameter**, **optional*) -- value to set for
all keys
Return type:
*ParameterDict*
get(key, default=None) | https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html | pytorch docs |
Return the parameter associated with key if present. Otherwise
return default if provided, None if not.
Parameters:
* **key** (*str*) -- key to get from the ParameterDict
* **default** (*Parameter**, **optional*) -- value to return
if key not present
Return type:
*Any*
items()
Return an iterable of the ParameterDict key/value pairs.
Return type:
*Iterable*[*Tuple*[str, *Any*]]
keys()
Return an iterable of the ParameterDict keys.
Return type:
*Iterable*[str]
pop(key)
Remove key from the ParameterDict and return its parameter.
Parameters:
**key** (*str*) -- key to pop from the ParameterDict
Return type:
*Any*
popitem()
Remove and return the last inserted *(key, parameter)* pair from
the ParameterDict
Return type:
*Tuple*[str, *Any*]
setdefault(key, default=None) | https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html | pytorch docs |
If key is in the ParameterDict, return its value. If not, insert
*key* with a parameter *default* and return *default*. *default*
defaults to *None*.
Parameters:
* **key** (*str*) -- key to set default for
* **default** (*Any*) -- the parameter set to the key
Return type:
*Any*
update(parameters)
Update the "ParameterDict" with the key-value pairs from a
mapping or an iterable, overwriting existing keys.
Note:
If "parameters" is an "OrderedDict", a "ParameterDict", or an
iterable of key-value pairs, the order of new elements in it
is preserved.
Parameters:
**parameters** (*iterable*) -- a mapping (dictionary) from
string to "Parameter", or an iterable of key-value pairs of
type (string, "Parameter")
values()
Return an iterable of the ParameterDict values.
Return type:
*Iterable*[*Any*]
| https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html | pytorch docs |
torch.bitwise_and
torch.bitwise_and(input, other, *, out=None) -> Tensor
Computes the bitwise AND of "input" and "other". The input tensor
must be of integral or Boolean types. For bool tensors, it computes
the logical AND.
Parameters:
* input -- the first input tensor
* **other** -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.bitwise_and(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([1, 0, 3], dtype=torch.int8)
>>> torch.bitwise_and(torch.tensor([True, True, False]), torch.tensor([False, True, False]))
tensor([ False, True, False])
| https://pytorch.org/docs/stable/generated/torch.bitwise_and.html | pytorch docs |
torch.nn.functional.selu
torch.nn.functional.selu(input, inplace=False) -> Tensor
Applies element-wise, \text{SELU}(x) = scale * (\max(0,x) + \min(0,
\alpha * (\exp(x) - 1))), with
\alpha=1.6732632423543772848170429916717 and
scale=1.0507009873554804934193349852946.
See "SELU" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.selu.html | pytorch docs |
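A brief sketch for "torch.nn.functional.selu" (not part of the linked page); the printed values are rounded by the default tensor formatting:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-1.0, 0.0, 1.0])
>>> F.selu(x)          # scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
tensor([-1.1113,  0.0000,  1.0507])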
torch.view_as_real
torch.view_as_real(input) -> Tensor
Returns a view of "input" as a real tensor. For an input complex
tensor of "size" m1, m2, \dots, mi, this function returns a new
real tensor of size m1, m2, \dots, mi, 2, where the last dimension
of size 2 represents the real and imaginary components of complex
numbers.
Warning:
"view_as_real()" is only supported for tensors with "complex
dtypes".
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.4737-0.3839j), (-0.2098-0.6699j), (0.3470-0.9451j), (-0.5174-1.3136j)])
>>> torch.view_as_real(x)
tensor([[ 0.4737, -0.3839],
[-0.2098, -0.6699],
[ 0.3470, -0.9451],
[-0.5174, -1.3136]])
| https://pytorch.org/docs/stable/generated/torch.view_as_real.html | pytorch docs |
torch.Tensor.sspaddmm
Tensor.sspaddmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor
See "torch.sspaddmm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sspaddmm.html | pytorch docs |
torch.less
torch.less(input, other, *, out=None) -> Tensor
Alias for "torch.lt()". | https://pytorch.org/docs/stable/generated/torch.less.html | pytorch docs |