torch.nn.functional.softplus torch.nn.functional.softplus(input, beta=1, threshold=20) -> Tensor Applies the function \text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) element-wise. For numerical stability the implementation reverts to the linear function when \text{input} \times \beta > \text{threshold}. See "Softplus" for more details.
https://pytorch.org/docs/stable/generated/torch.nn.functional.softplus.html
pytorch docs
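A short sketch (values arbitrary) checking both regimes of softplus described above:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-2.0, 0.0, 25.0])
>>> y = F.softplus(x, beta=1, threshold=20)
>>> torch.allclose(y[:2], torch.log1p(torch.exp(x[:2])))   # (1/beta) * log(1 + exp(beta*x))
True
>>> y[2].item()   # beta * x = 25 > threshold = 20, so the linear regime returns x itself
25.0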
torch.tile torch.tile(input, dims) -> Tensor Constructs a tensor by repeating the elements of "input". The "dims" argument specifies the number of repetitions in each dimension. If "dims" specifies fewer dimensions than "input" has, then ones are prepended to "dims" until all dimensions are specified. For example, if "input" has shape (8, 6, 4, 2) and "dims" is (2, 2), then "dims" is treated as (1, 1, 2, 2). Analogously, if "input" has fewer dimensions than "dims" specifies, then "input" is treated as if it were unsqueezed at dimension zero until it has as many dimensions as "dims" specifies. For example, if "input" has shape (4, 2) and "dims" is (3, 3, 2, 2), then "input" is treated as if it had the shape (1, 1, 4, 2). Note: This function is similar to NumPy's tile function. Parameters: * input (Tensor) -- the tensor whose elements to repeat. * **dims** (*tuple*) -- the number of repetitions per dimension. Example:
>>> x = torch.tensor([1, 2, 3]) >>> x.tile((2,)) tensor([1, 2, 3, 1, 2, 3]) >>> y = torch.tensor([[1, 2], [3, 4]]) >>> torch.tile(y, (2, 2)) tensor([[1, 2, 1, 2], [3, 4, 3, 4], [1, 2, 1, 2], [3, 4, 3, 4]])
https://pytorch.org/docs/stable/generated/torch.tile.html
pytorch docs
Conv3d class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None) Applies a 3D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C_{in}, D, H, W) and output (N, C_{out}, D_{out}, H_{out}, W_{out}) can be precisely described as: out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k) where \star is the valid 3D cross-correlation operator This module supports TensorFloat32. On certain ROCm devices, when using float16 inputs this module will use different precision for backward. "stride" controls the stride for the cross-correlation. "padding" controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or a tuple of ints
giving the amount of implicit padding applied on both sides. "dilation" controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what "dilation" does. "groups" controls the connections between inputs and outputs. "in_channels" and "out_channels" must both be divisible by "groups". For example, * At groups=1, all inputs are convolved to all outputs. * At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. * At groups= "in_channels", each input channel is convolved with its own set of filters (of size \frac{\text{out\_channels}}{\text{in\_channels}}). The parameters "kernel_size", "stride", "padding", "dilation" can either be:
* a single "int" -- in which case the same value is used for the depth, height and width dimension * a "tuple" of three ints -- in which case, the first *int* is used for the depth dimension, the second *int* for the height dimension and the third *int* for the width dimension Note: When *groups == in_channels* and *out_channels == K * in_channels*, where *K* is a positive integer, this operation is also known as a "depthwise convolution". In other words, for an input of size (N, C_{in}, L_{in}), a depthwise convolution with a depthwise multiplier *K* can be performed with the arguments (C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in}). Note: In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a performance cost) by setting "torch.backends.cudnn.deterministic = True". See Reproducibility for more information. Note: "padding='valid'" is the same as no padding. "padding='same'" pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1. Note: This module supports complex data types i.e. "complex32, complex64, complex128". Parameters: * in_channels (int) -- Number of channels in the input image * **out_channels** (*int*) -- Number of channels produced by the convolution * **kernel_size** (*int** or **tuple*) -- Size of the convolving kernel * **stride** (*int** or **tuple**, **optional*) -- Stride of the convolution. Default: 1 * **padding** (*int**, **tuple** or **str**, **optional*) -- Padding added to all six sides of the input. Default: 0
padding_mode (str, optional) -- "'zeros'", "'reflect'", "'replicate'" or "'circular'". Default: "'zeros'" dilation (int or tuple, optional) -- Spacing between kernel elements. Default: 1 groups (int, optional) -- Number of blocked connections from input channels to output channels. Default: 1 bias (bool, optional) -- If "True", adds a learnable bias to the output. Default: "True" Shape: * Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in}, H_{in}, W_{in}) * Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out}, D_{out}, H_{out}, W_{out}), where D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor H_{out} = \left\lfloor\frac{H_{in} + 2 \times
\text{padding}[1] - \text{dilation}[1] \times (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor Variables: * weight (Tensor) -- the learnable weights of the module of shape (\text{out_channels}, \frac{\text{in_channels}}{\text{groups}}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]}). The values of these weights are sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel_size}[i]} * **bias** (*Tensor*) -- the learnable bias of the module of shape (out_channels). If "bias" is "True", then the values of these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel_size}[i]} Examples: >>> # With square kernels and equal stride >>> m = nn.Conv3d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0)) >>> input = torch.randn(20, 16, 10, 50, 100) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html
pytorch docs
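A short sketch (numbers arbitrary) tying the shape formula above to a concrete module:
>>> m = torch.nn.Conv3d(16, 33, kernel_size=3, stride=2, padding=1)
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> # D_out = floor((10 + 2*1 - 1*(3-1) - 1)/2 + 1) = 5; likewise H_out = 25, W_out = 50
>>> m(input).shape
torch.Size([20, 33, 5, 25, 50])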
torch.Tensor.cuda Tensor.cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) -> Tensor Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned. Parameters: * device ("torch.device") -- The destination GPU device. Defaults to the current CUDA device. * **non_blocking** (*bool*) -- If "True" and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: "False". * **memory_format** ("torch.memory_format", optional) -- the desired memory format of returned Tensor. Default: "torch.preserve_format".
https://pytorch.org/docs/stable/generated/torch.Tensor.cuda.html
pytorch docs
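A short sketch (assumes a CUDA device is available):
>>> x = torch.randn(3)
>>> y = x.cuda()                  # copy to the current CUDA device
>>> y.cuda() is y                 # already on the right device: no copy, same object
True
>>> z = x.pin_memory().cuda(non_blocking=True)  # async copy from pinned host memory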
torch.Tensor.exponential_ Tensor.exponential_(lambd=1, *, generator=None) -> Tensor Fills "self" tensor with elements drawn from the exponential distribution: f(x) = \lambda e^{-\lambda x}
https://pytorch.org/docs/stable/generated/torch.Tensor.exponential_.html
pytorch docs
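A small sketch (sample size arbitrary); the empirical mean approaches 1/\lambda:
>>> t = torch.empty(100000).exponential_(lambd=2.0)
>>> bool(t.min() >= 0)
True
>>> t.mean()   # approximately 1 / lambd = 0.5
tensor(0.5...)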
torch.randn_like torch.randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor Returns a tensor with the same size as "input" that is filled with random numbers from a normal distribution with mean 0 and variance 1. "torch.randn_like(input)" is equivalent to "torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)". Parameters: input (Tensor) -- the size of "input" will determine size of the output tensor. Keyword Arguments: * dtype ("torch.dtype", optional) -- the desired data type of returned Tensor. Default: if "None", defaults to the dtype of "input". * **layout** ("torch.layout", optional) -- the desired layout of returned tensor. Default: if "None", defaults to the layout of "input". * **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of "input". * **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False". * **memory_format** ("torch.memory_format", optional) -- the desired memory format of returned Tensor. Default: "torch.preserve_format".
https://pytorch.org/docs/stable/generated/torch.randn_like.html
pytorch docs
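A minimal sketch showing that shape, dtype, and device follow "input":
>>> x = torch.empty(2, 3, dtype=torch.float64)
>>> r = torch.randn_like(x)
>>> r.shape, r.dtype
(torch.Size([2, 3]), torch.float64)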
torch.nn.functional.poisson_nll_loss torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') Poisson negative log likelihood loss. See "PoissonNLLLoss" for details. Parameters: * input (Tensor) -- expectation of underlying Poisson distribution. * **target** (*Tensor*) -- random sample target \sim \text{Poisson}(input). * **log_input** (*bool*) -- if "True" the loss is computed as \exp(\text{input}) - \text{target} * \text{input}, if "False" then loss is \text{input} - \text{target} * \log(\text{input}+\text{eps}). Default: "True" * **full** (*bool*) -- whether to compute full loss, i. e. to add the Stirling approximation term. Default: "False" \text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target}).
* **size_average** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field "size_average" is set to "False", the losses are instead summed for each minibatch. Ignored when reduce is "False". Default: "True" * **eps** (*float**, **optional*) -- Small value to avoid evaluation of \log(0) when "log_input"="False". Default: 1e-8 * **reduce** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged or summed over observations for each minibatch depending on "size_average". When "reduce" is "False", returns a loss per batch element instead and ignores "size_average". Default: "True" * **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'". "'none'": no reduction will be applied, "'mean'": the sum of the output will be divided by the number of elements in the output, "'sum'": the output will be summed. Note: "size_average" and "reduce" are in the process of being deprecated, and in the meantime, specifying either of those two args will override "reduction". Default: "'mean'" Return type: Tensor
https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html
pytorch docs
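A small sketch (values arbitrary) verifying the default log-input form, \exp(\text{input}) - \text{target} * \text{input}, averaged under reduction='mean':
>>> import torch.nn.functional as F
>>> log_rate = torch.randn(5, requires_grad=True)
>>> target = torch.poisson(torch.rand(5) * 5)
>>> loss = F.poisson_nll_loss(log_rate, target)   # log_input=True by default
>>> torch.allclose(loss, (log_rate.exp() - target * log_rate).mean())
True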
torch._foreach_log1p_ torch._foreach_log1p_(self: List[Tensor]) -> None Apply "torch.log1p()" to each Tensor of the input list.
https://pytorch.org/docs/stable/generated/torch._foreach_log1p_.html
pytorch docs
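A tiny sketch of the in-place foreach form (it returns None and mutates the list elements):
>>> xs = [torch.zeros(2), torch.ones(3)]
>>> torch._foreach_log1p_(xs)
>>> xs[1]   # log1p(1) = ln 2
tensor([0.6931, 0.6931, 0.6931])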
torch.max torch.max(input) -> Tensor Returns the maximum value of all elements in the "input" tensor. Warning: This function produces deterministic (sub)gradients unlike "max(dim=0)" Parameters: input (Tensor) -- the input tensor. Example: >>> a = torch.randn(1, 3) >>> a tensor([[ 0.6763, 0.7445, -2.2369]]) >>> torch.max(a) tensor(0.7445) torch.max(input, dim, keepdim=False, *, out=None) Returns a namedtuple "(values, indices)" where "values" is the maximum value of each row of the "input" tensor in the given dimension "dim". And "indices" is the index location of each maximum value found (argmax). If "keepdim" is "True", the output tensors are of the same size as "input" except in the dimension "dim" where they are of size 1. Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in the output tensors having 1 fewer dimension than "input". Note:
If there are multiple maximal values in a reduced row then the indices of the first maximal value are returned. Parameters: * input (Tensor) -- the input tensor. * **dim** (*int*) -- the dimension to reduce. * **keepdim** (*bool*) -- whether the output tensor has "dim" retained or not. Default: "False". Keyword Arguments: out (tuple, optional) -- the result tuple of two output tensors (max, max_indices) Example: >>> a = torch.randn(4, 4) >>> a tensor([[-1.2360, -0.2942, -0.1222, 0.8475], [ 1.1949, -1.1127, -2.2379, -0.6702], [ 1.5717, -0.9207, 0.1297, -1.8768], [-0.6172, 1.0036, -0.6060, -0.2432]]) >>> torch.max(a, 1) torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1])) torch.max(input, other, *, out=None) -> Tensor See "torch.maximum()".
https://pytorch.org/docs/stable/generated/torch.max.html
pytorch docs
torch.Tensor.storage Tensor.storage() -> torch.TypedStorage Returns the underlying "TypedStorage". Warning: "TypedStorage" is deprecated. It will be removed in the future, and "UntypedStorage" will be the only storage class. To access the "UntypedStorage" directly, use "Tensor.untyped_storage()".
https://pytorch.org/docs/stable/generated/torch.Tensor.storage.html
pytorch docs
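A short sketch of the recommended replacement from the warning above:
>>> x = torch.arange(4)           # int64, so 4 * 8 = 32 bytes
>>> s = x.untyped_storage()       # preferred over x.storage()
>>> s.nbytes()
32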
torch.Tensor.cross Tensor.cross(other, dim=None) -> Tensor See "torch.cross()"
https://pytorch.org/docs/stable/generated/torch.Tensor.cross.html
pytorch docs
torch.corrcoef torch.corrcoef(input) -> Tensor Estimates the Pearson product-moment correlation coefficient matrix of the variables given by the "input" matrix, where rows are the variables and columns are the observations. Note: The correlation coefficient matrix R is computed using the covariance matrix C as given by R_{ij} = \frac{ C_{ij} } { \sqrt{ C_{ii} * C_{jj} } } Note: Due to floating point rounding, the resulting array may not be Hermitian and its diagonal elements may not be 1. The real and imaginary values are clipped to the interval [-1, 1] in an attempt to improve this situation. Parameters: input (Tensor) -- A 2D matrix containing multiple variables and observations, or a Scalar or 1D vector representing a single variable. Returns: (Tensor) The correlation coefficient matrix of the variables. See also: "torch.cov()" covariance matrix. Example:
>>> x = torch.tensor([[0, 1, 2], [2, 1, 0]]) >>> torch.corrcoef(x) tensor([[ 1., -1.], [-1., 1.]]) >>> x = torch.randn(2, 4) >>> x tensor([[-0.2678, -0.0908, -0.3766, 0.2780], [-0.5812, 0.1535, 0.2387, 0.2350]]) >>> torch.corrcoef(x) tensor([[1.0000, 0.3582], [0.3582, 1.0000]]) >>> torch.corrcoef(x[0]) tensor(1.)
https://pytorch.org/docs/stable/generated/torch.corrcoef.html
pytorch docs
torch.bitwise_left_shift torch.bitwise_left_shift(input, other, *, out=None) -> Tensor Computes the left arithmetic shift of "input" by "other" bits. The input tensor must be of integral type. This operator supports broadcasting to a common shape and type promotion. The operation applied is: \text{out}_i = \text{input}_i << \text{other}_i Parameters: * input (Tensor or Scalar) -- the first input tensor * **other** (*Tensor** or **Scalar*) -- the second input tensor Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> torch.bitwise_left_shift(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8)) tensor([-2, -2, 24], dtype=torch.int8)
https://pytorch.org/docs/stable/generated/torch.bitwise_left_shift.html
pytorch docs
torch.heaviside torch.heaviside(input, values, *, out=None) -> Tensor Computes the Heaviside step function for each element in "input". The Heaviside step function is defined as: \text{{heaviside}}(input, values) = \begin{cases} 0, & \text{if input < 0}\\ values, & \text{if input == 0}\\ 1, & \text{if input > 0} \end{cases} Parameters: * input (Tensor) -- the input tensor. * **values** (*Tensor*) -- The values to use where "input" is zero. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> input = torch.tensor([-1.5, 0, 2.0]) >>> values = torch.tensor([0.5]) >>> torch.heaviside(input, values) tensor([0.0000, 0.5000, 1.0000]) >>> values = torch.tensor([1.2, -2.0, 3.5]) >>> torch.heaviside(input, values) tensor([0., -2., 1.])
https://pytorch.org/docs/stable/generated/torch.heaviside.html
pytorch docs
float16_dynamic_qconfig torch.quantization.qconfig.float16_dynamic_qconfig alias of QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>, dtype=torch.float16, is_dynamic=True){}, weight=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>, dtype=torch.float16){})
https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_dynamic_qconfig.html
pytorch docs
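A sketch (the Linear module here is an arbitrary choice) of reaching this qconfig through the dynamic-quantization helper, which applies it when dtype=torch.float16 is requested:
>>> import torch
>>> model = torch.nn.Linear(4, 4)
>>> qmodel = torch.ao.quantization.quantize_dynamic(
...     model, {torch.nn.Linear}, dtype=torch.float16)
>>> # weights are stored as float16 and cast back to float32 at run time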
torch.cuda.get_allocator_backend torch.cuda.get_allocator_backend() Returns a string describing the active allocator backend as set by "PYTORCH_CUDA_ALLOC_CONF". Currently available backends are "native" (PyTorch's native caching allocator) and "cudaMallocAsync" (CUDA's built-in asynchronous allocator). Note: See Memory management for details on choosing the allocator backend. Return type: str
https://pytorch.org/docs/stable/generated/torch.cuda.get_allocator_backend.html
pytorch docs
torch.Tensor.cholesky_solve Tensor.cholesky_solve(input2, upper=False) -> Tensor See "torch.cholesky_solve()"
https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_solve.html
pytorch docs
torch.nn.functional.upsample torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) Upsamples the input to either the given "size" or the given "scale_factor". Warning: This function is deprecated in favor of "torch.nn.functional.interpolate()". This is equivalent to "nn.functional.interpolate(...)". Note: This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. The algorithm used for upsampling is determined by "mode". Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only)
Parameters: * input (Tensor) -- the input tensor * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**, **int**] or **Tuple**[**int**, **int**, **int**]*) -- output spatial size. * **scale_factor** (*float** or **Tuple**[**float**]*) -- multiplier for spatial size. Has to match input size if it is a tuple. * **mode** (*str*) -- algorithm used for upsampling: "'nearest'" | "'linear'" | "'bilinear'" | "'bicubic'" | "'trilinear'". Default: "'nearest'" * **align_corners** (*bool**, **optional*) -- Geometrically, we consider the pixels of the input and output as squares rather than points. If set to "True", the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to "False", the input and output tensors are aligned by the corner points
of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when "scale_factor" is kept the same. This only has an effect when "mode" is "'linear'", "'bilinear'", "'bicubic'" or "'trilinear'". Default: "False" Note: With "mode='bicubic'", it's possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call "result.clamp(min=0, max=255)" if you want to reduce the overshoot when displaying the image. Warning: With "align_corners = True", the linearly interpolating modes (*linear*, *bilinear*, and *trilinear*) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is
"align_corners = False". See "Upsample" for concrete examples on how this affects the outputs.
https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html
pytorch docs
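Since the entry above deprecates this function, a short sketch using the recommended "interpolate()" instead:
>>> import torch.nn.functional as F
>>> x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 2, 2)
>>> F.interpolate(x, scale_factor=2, mode='nearest').shape
torch.Size([1, 1, 4, 4])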
torch.nn.functional.relu_ torch.nn.functional.relu_(input) -> Tensor In-place version of "relu()".
https://pytorch.org/docs/stable/generated/torch.nn.functional.relu_.html
pytorch docs
torch.Tensor.storage_offset Tensor.storage_offset() -> int Returns "self" tensor's offset in the underlying storage in terms of number of storage elements (not bytes). Example: >>> x = torch.tensor([1, 2, 3, 4, 5]) >>> x.storage_offset() 0 >>> x[3:].storage_offset() 3
https://pytorch.org/docs/stable/generated/torch.Tensor.storage_offset.html
pytorch docs
Hardswish class torch.ao.nn.quantized.Hardswish(scale, zero_point) This is the quantized version of "Hardswish". Parameters: * scale -- quantization scale of the output tensor * **zero_point** -- quantization zero point of the output tensor
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Hardswish.html
pytorch docs
torch.linalg.vecdot torch.linalg.vecdot(x, y, *, dim=-1, out=None) -> Tensor Computes the dot product of two batches of vectors along a dimension. In symbols, this function computes \sum_{i=1}^n \overline{x_i}y_i over the dimension "dim", where \overline{x_i} denotes the conjugate for complex vectors, and it is the identity for real vectors. Supports input of half, bfloat16, float, double, cfloat, cdouble and integral dtypes. It also supports broadcasting. Parameters: * x (Tensor) -- first batch of vectors of shape (*, n). * **y** (*Tensor*) -- second batch of vectors of shape (*, n). Keyword Arguments: * dim (int) -- Dimension along which to compute the dot product. Default: -1. * **out** (*Tensor**, **optional*) -- output tensor. Ignored if *None*. Default: *None*. Examples: >>> v1 = torch.randn(3, 2) >>> v2 = torch.randn(3, 2) >>> linalg.vecdot(v1, v2)
tensor([ 0.3223, 0.2815, -0.1944]) >>> torch.vdot(v1[0], v2[0]) tensor(0.3223)
https://pytorch.org/docs/stable/generated/torch.linalg.vecdot.html
pytorch docs
torch.Tensor.stride Tensor.stride(dim) -> tuple or int Returns the stride of "self" tensor. Stride is the jump necessary to go from one element to the next one in the specified dimension "dim". A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension "dim". Parameters: dim (int, optional) -- the desired dimension in which stride is required Example: >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) >>> x.stride() (5, 1) >>> x.stride(0) 5 >>> x.stride(-1) 1
https://pytorch.org/docs/stable/generated/torch.Tensor.stride.html
pytorch docs
torch.Tensor.bitwise_left_shift_ Tensor.bitwise_left_shift_(other) -> Tensor In-place version of "bitwise_left_shift()"
https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift_.html
pytorch docs
torch.Tensor.logsumexp Tensor.logsumexp(dim, keepdim=False) -> Tensor See "torch.logsumexp()"
https://pytorch.org/docs/stable/generated/torch.Tensor.logsumexp.html
pytorch docs
ReplicationPad1d class torch.nn.ReplicationPad1d(padding) Pads the input tensor using replication of the input boundary. For N-dimensional padding, use "torch.nn.functional.pad()". Parameters: padding (int, tuple) -- the size of the padding. If it is an int, uses the same padding in all boundaries. If a 2-tuple, uses (\text{padding\_left}, \text{padding\_right}) Shape: * Input: (C, W_{in}) or (N, C, W_{in}). * Output: (C, W_{out}) or (N, C, W_{out}), where W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right} Examples: >>> m = nn.ReplicationPad1d(2) >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4) >>> input tensor([[[0., 1., 2., 3.], [4., 5., 6., 7.]]]) >>> m(input) tensor([[[0., 0., 0., 1., 2., 3., 3., 3.], [4., 4., 4., 5., 6., 7., 7., 7.]]]) >>> # using different paddings for different sides
>>> m = nn.ReplicationPad1d((3, 1)) >>> m(input) tensor([[[0., 0., 0., 0., 1., 2., 3., 3.], [4., 4., 4., 4., 5., 6., 7., 7.]]])
https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad1d.html
pytorch docs
torch.linalg.qr torch.linalg.qr(A, mode='reduced', *, out=None) Computes the QR decomposition of a matrix. Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the full QR decomposition of a matrix A \in \mathbb{K}^{m \times n} is defined as A = QR\mathrlap{\qquad Q \in \mathbb{K}^{m \times m}, R \in \mathbb{K}^{m \times n}} where Q is orthogonal in the real case and unitary in the complex case, and R is upper triangular with real diagonal (even in the complex case). When m > n (tall matrix), as R is upper triangular, its last m - n rows are zero. In this case, we can drop the last m - n columns of Q to form the reduced QR decomposition: A = QR\mathrlap{\qquad Q \in \mathbb{K}^{m \times n}, R \in \mathbb{K}^{n \times n}} The reduced QR decomposition agrees with the full QR decomposition when n >= m (wide matrix). Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices then the output has the same batch dimensions. The parameter "mode" chooses between the full and reduced QR decomposition. If "A" has shape (*, m, n), denoting k = min(m, n): * "mode"= 'reduced' (default): Returns (Q, R) of shapes (*, m, k), (*, k, n) respectively. It is always differentiable. * "mode"= 'complete': Returns (Q, R) of shapes (*, m, m), (*, m, n) respectively. It is differentiable for m <= n. * "mode"= 'r': Computes only the reduced R. Returns (Q, R) with Q empty and R of shape (*, k, n). It is never differentiable. Differences with numpy.linalg.qr: * "mode"= 'raw' is not implemented. * Unlike numpy.linalg.qr, this function always returns a tuple of two tensors. When "mode"= 'r', the Q tensor is an empty tensor. Warning: The elements in the diagonal of *R* are not necessarily positive.
As such, the returned QR decomposition is only unique up to the sign of the diagonal of R. Therefore, different platforms, like NumPy, or inputs on different devices, may produce different valid decompositions. Warning: The QR decomposition is only well-defined if the first *k = min(m, n)* columns of every matrix in "A" are linearly independent. If this condition is not met, no error will be thrown, but the QR produced may be incorrect and its autodiff may fail or produce incorrect results. Parameters: * A (Tensor) -- tensor of shape (*, m, n) where * is zero or more batch dimensions. * **mode** (*str**, **optional*) -- one of *'reduced'*, *'complete'*, *'r'*. Controls the shape of the returned tensors. Default: *'reduced'*. Keyword Arguments: out (tuple, optional) -- output tuple of two tensors. Ignored if None. Default: None. Returns: A named tuple (Q, R).
Examples: >>> A = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]]) >>> Q, R = torch.linalg.qr(A) >>> Q tensor([[-0.8571, 0.3943, 0.3314], [-0.4286, -0.9029, -0.0343], [ 0.2857, -0.1714, 0.9429]]) >>> R tensor([[ -14.0000, -21.0000, 14.0000], [ 0.0000, -175.0000, 70.0000], [ 0.0000, 0.0000, -35.0000]]) >>> (Q @ R).round() tensor([[ 12., -51., 4.], [ 6., 167., -68.], [ -4., 24., -41.]]) >>> (Q.T @ Q).round() tensor([[ 1., 0., 0.], [ 0., 1., -0.], [ 0., -0., 1.]]) >>> Q2, R2 = torch.linalg.qr(A, mode='r') >>> Q2 tensor([]) >>> torch.equal(R, R2) True >>> A = torch.randn(3, 4, 5) >>> Q, R = torch.linalg.qr(A, mode='complete') >>> torch.dist(Q @ R, A) tensor(1.6099e-06)
>>> torch.dist(Q.mT @ Q, torch.eye(4)) tensor(6.2158e-07)
https://pytorch.org/docs/stable/generated/torch.linalg.qr.html
pytorch docs
torch.cuda.get_device_capability torch.cuda.get_device_capability(device=None) Gets the cuda capability of a device. Parameters: device (torch.device or int, optional) -- device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by "current_device()", if "device" is "None" (default). Returns: the major and minor cuda capability of the device Return type: tuple(int, int)
https://pytorch.org/docs/stable/generated/torch.cuda.get_device_capability.html
pytorch docs
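A one-line sketch (assumes a CUDA build with at least one visible device; the value shown is just an example):
>>> torch.cuda.get_device_capability(0)  # e.g. (8, 0) on an A100
(8, 0)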
torch.Tensor.fliplr Tensor.fliplr() -> Tensor See "torch.fliplr()"
https://pytorch.org/docs/stable/generated/torch.Tensor.fliplr.html
pytorch docs
torch.Tensor.addmm_ Tensor.addmm_(mat1, mat2, *, beta=1, alpha=1) -> Tensor In-place version of "addmm()"
https://pytorch.org/docs/stable/generated/torch.Tensor.addmm_.html
pytorch docs
torch.Tensor.logical_or_ Tensor.logical_or_() -> Tensor In-place version of "logical_or()"
https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or_.html
pytorch docs
torch.cuda.get_arch_list torch.cuda.get_arch_list() Returns the list of CUDA architectures this library was compiled for. Return type: List[str]
https://pytorch.org/docs/stable/generated/torch.cuda.get_arch_list.html
pytorch docs
torch.Tensor.bitwise_right_shift_ Tensor.bitwise_right_shift_(other) -> Tensor In-place version of "bitwise_right_shift()"
https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_right_shift_.html
pytorch docs
torch._foreach_lgamma torch._foreach_lgamma(self: List[Tensor]) -> List[Tensor] Apply "torch.lgamma()" to each Tensor of the input list.
https://pytorch.org/docs/stable/generated/torch._foreach_lgamma.html
pytorch docs
torch.is_complex torch.is_complex(input) Returns True if the data type of "input" is a complex data type, i.e., one of "torch.complex64" and "torch.complex128". Parameters: input (Tensor) -- the input tensor.
https://pytorch.org/docs/stable/generated/torch.is_complex.html
pytorch docs
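A quick sketch:
>>> torch.is_complex(torch.tensor([1 + 2j]))   # complex64
True
>>> torch.is_complex(torch.tensor([1.0]))      # float32
False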
torch._foreach_erfc_ torch._foreach_erfc_(self: List[Tensor]) -> None Apply "torch.erfc()" to each Tensor of the input list.
https://pytorch.org/docs/stable/generated/torch._foreach_erfc_.html
pytorch docs
CosineAnnealingLR class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False) Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr and T_{cur} is the number of epochs since the last restart in SGDR: \begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned} When last_epoch=-1, sets initial lr as lr. Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning
rate at each step becomes: \eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right) It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts. Parameters: * optimizer (Optimizer) -- Wrapped optimizer. * **T_max** (*int*) -- Maximum number of iterations. * **eta_min** (*float*) -- Minimum learning rate. Default: 0. * **last_epoch** (*int*) -- The index of last epoch. Default: -1. * **verbose** (*bool*) -- If "True", prints a message to stdout for each update. Default: "False". get_last_lr() Return last computed learning rate by current scheduler. load_state_dict(state_dict) Loads the schedulers state. Parameters: **state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()". print_lr(is_verbose, group, lr, epoch=None) Display the current learning rate. state_dict() Returns the state of the scheduler as a "dict". It contains an entry for every variable in self.__dict__ which is not the optimizer.
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html
pytorch docs
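A minimal usage sketch (model, optimizer, and loop body are arbitrary placeholders) showing where "scheduler.step()" belongs:
>>> model = torch.nn.Linear(2, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
>>> for epoch in range(100):
...     optimizer.step()    # stand-in for the real forward/backward pass
...     scheduler.step()    # anneal lr from 0.1 toward eta_min=0
>>> round(scheduler.get_last_lr()[0], 6)
0.0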
torch._foreach_tan_ torch._foreach_tan_(self: List[Tensor]) -> None Apply "torch.tan()" to each Tensor of the input list.
https://pytorch.org/docs/stable/generated/torch._foreach_tan_.html
pytorch docs
torch.is_floating_point torch.is_floating_point(input) Returns True if the data type of "input" is a floating point data type i.e., one of "torch.float64", "torch.float32", "torch.float16", and "torch.bfloat16". Parameters: input (Tensor) -- the input tensor.
https://pytorch.org/docs/stable/generated/torch.is_floating_point.html
pytorch docs
Conv1d class torch.ao.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None) Applies a 1D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see "Conv1d". Note: Only *zeros* is supported for the "padding_mode" argument. Note: Only *torch.quint8* is supported for the input data type. Variables: * weight (Tensor) -- packed tensor derived from the learnable weight parameter. * **scale** (*Tensor*) -- scalar for the output scale * **zero_point** (*Tensor*) -- scalar for the output zero point See "Conv1d" for other attributes. Examples: >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2) >>> input = torch.randn(20, 16, 100) >>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, ... dtype=torch.quint8) >>> output = m(q_input) classmethod from_float(mod) Creates a quantized module from a float module or qparams_dict. Parameters: **mod** (*Module*) -- a float module, either produced by torch.ao.quantization utilities or provided by the user
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv1d.html
pytorch docs
disable_observer class torch.quantization.fake_quantize.disable_observer(mod) Disable observation for this module, if applicable. Example usage: # model is any PyTorch model model.apply(torch.ao.quantization.disable_observer)
https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.disable_observer.html
pytorch docs
torch.autograd.graph.Node.metadata abstract Node.metadata() Returns the metadata. Return type: dict
https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.metadata.html
pytorch docs
torch.Tensor.arccosh_ Tensor.arccosh_() -> Tensor In-place version of "arccosh()". Alias for "acosh_()".
https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh_.html
pytorch docs
DTypeConfig class torch.ao.quantization.backend_config.DTypeConfig(input_dtype=None, output_dtype=None, weight_dtype=None, bias_dtype=None, is_dynamic=None) Config object that specifies the supported data types passed as arguments to quantize ops in the reference model spec, for input and output activations, weights, and biases. For example, consider the following reference model: quant1 - [dequant1 - fp32_linear - quant2] - dequant2 The pattern in the square brackets refers to the reference pattern of statically quantized linear. Setting the input dtype as torch.quint8 in the DTypeConfig means we pass in torch.quint8 as the dtype argument to the first quantize op (quant1). Similarly, setting the output dtype as torch.quint8 means we pass in torch.quint8 as the dtype argument to the second quantize op (quant2). Note that the dtype here does not refer to the interface dtypes of
the op. For example, the "input dtype" here is not the dtype of the input tensor passed to the quantized linear op. Though it can still be the same as the interface dtype, this is not always the case, e.g. the interface dtype is fp32 in dynamic quantization but the "input dtype" specified in the DTypeConfig would still be quint8. The semantics of dtypes here are the same as the semantics of the dtypes specified in the observers. These dtypes are matched against the ones specified in the user’s QConfig. If there is a match, and the QConfig satisfies the constraints specified in the DTypeConfig (if any), then we will quantize the given pattern using this DTypeConfig. Otherwise, the QConfig is ignored and the pattern will not be quantized. Example usage: >>> dtype_config1 = DTypeConfig( ... input_dtype=torch.quint8, ... output_dtype=torch.quint8, ... weight_dtype=torch.qint8, ... bias_dtype=torch.float)
>>> dtype_config2 = DTypeConfig( ... input_dtype=DTypeWithConstraints( ... dtype=torch.quint8, ... quant_min_lower_bound=0, ... quant_max_upper_bound=255, ... ), ... output_dtype=DTypeWithConstraints( ... dtype=torch.quint8, ... quant_min_lower_bound=0, ... quant_max_upper_bound=255, ... ), ... weight_dtype=DTypeWithConstraints( ... dtype=torch.qint8, ... quant_min_lower_bound=-128, ... quant_max_upper_bound=127, ... ), ... bias_dtype=torch.float) >>> dtype_config1.input_dtype torch.quint8 >>> dtype_config2.input_dtype torch.quint8 >>> dtype_config2.input_dtype_with_constraints DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None)
classmethod from_dict(dtype_config_dict) Create a "DTypeConfig" from a dictionary with the following items (all optional): "input_dtype": torch.dtype or "DTypeWithConstraints" "output_dtype": torch.dtype or "DTypeWithConstraints" "weight_dtype": torch.dtype or "DTypeWithConstraints" "bias_type": torch.dtype "is_dynamic": bool Return type: *DTypeConfig* to_dict() Convert this "DTypeConfig" to a dictionary with the items described in "from_dict()". Return type: *Dict*[str, *Any*]
https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html
pytorch docs
BCEWithLogitsLoss class torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability. The unreduced (i.e. with "reduction" set to "'none'") loss can be described as: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right], where N is the batch size. If "reduction" is not "'none'" (default "'mean'"), then \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1. It's possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as: \ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right], where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of the sample in the batch and p_c is the weight of the positive answer for the class c. p_c > 1 increases the recall, p_c < 1 increases the precision. For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should
be equal to \frac{300}{100}=3. The loss would act as if the dataset contains 3\times 100=300 positive examples. Examples: >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10 >>> output = torch.full([10, 64], 1.5) # A prediction (logit) >>> pos_weight = torch.ones([64]) # All weights are equal to 1 >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight) >>> criterion(output, target) # -log(sigmoid(1.5)) tensor(0.20...) Parameters: * weight (Tensor, optional) -- a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch. * **size_average** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each minibatch. Ignored when "reduce" is "False". Default: "True" * **reduce** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged or summed over observations for each minibatch depending on "size_average". When "reduce" is "False", returns a loss per batch element instead and ignores "size_average". Default: "True" * **reduction** (*str**, **optional*) -- Specifies the reduction to apply to the output: "'none'" | "'mean'" | "'sum'". "'none'": no reduction will be applied, "'mean'": the sum of the output will be divided by the number of elements in the output, "'sum'": the output will be summed. Note: "size_average" and "reduce" are in the process of being deprecated, and in the meantime, specifying either of those two args will override "reduction". Default: "'mean'"
pos_weight (Tensor, optional) -- a weight of positive examples. Must be a vector with length equal to the number of classes. Shape: * Input: (*), where * means any number of dimensions. * Target: (*), same shape as the input. * Output: scalar. If "reduction" is "'none'", then (*), same shape as input. Examples: >>> loss = nn.BCEWithLogitsLoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(input, target) >>> output.backward()
https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html
pytorch docs
torch.Tensor.nextafter_ Tensor.nextafter_(other) -> Tensor In-place version of "nextafter()"
https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter_.html
pytorch docs
torch.Tensor.qscheme Tensor.qscheme() -> torch.qscheme Returns the quantization scheme of a given QTensor.
https://pytorch.org/docs/stable/generated/torch.Tensor.qscheme.html
pytorch docs
torch.autograd.gradcheck torch.autograd.gradcheck(func, inputs, *, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_batched_forward_grad=False, check_forward_ad=False, check_backward_ad=True, fast_mode=False) Check gradients computed via small finite differences against analytical gradients w.r.t. tensors in "inputs" that are of floating point or complex type and with "requires_grad=True". The check between numerical and analytical gradients uses "allclose()". For most of the complex functions we consider for optimization purposes, no notion of Jacobian exists. Instead, gradcheck verifies if the numerical and analytical values of the Wirtinger and Conjugate Wirtinger derivatives are consistent. Because the gradient computation is done under the assumption that the overall
function has a real-valued output, we treat functions with complex output in a special way. For these functions, gradcheck is applied to two real-valued functions corresponding to taking the real components of the complex outputs for the first, and taking the imaginary components of the complex outputs for the second. For more details, check out Autograd for Complex Numbers. Note: The default values are designed for "input" of double precision. This check will likely fail if "input" is of less precision, e.g., "FloatTensor". Warning: If any checked tensor in "input" has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from "torch.expand()"), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address. Parameters:
* func (function) -- a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors * **inputs** (*tuple of Tensor** or **Tensor*) -- inputs to the function * **eps** (*float**, **optional*) -- perturbation for finite differences * **atol** (*float**, **optional*) -- absolute tolerance * **rtol** (*float**, **optional*) -- relative tolerance * **raise_exception** (*bool**, **optional*) -- indicating whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure. This is helpful when debugging gradchecks. * **check_sparse_nnz** (*bool**, **optional*) -- if True, gradcheck allows for SparseTensor input, and for any SparseTensor at input, gradcheck will perform check at nnz positions only. * **nondet_tol** (*float**, **optional*) -- tolerance for non-
determinism. When running identical inputs through the differentiation, the results must either match exactly (default, 0.0) or be within this tolerance. * **check_undefined_grad** (*bool**, **optional*) -- if True, check if undefined output grads are supported and treated as zeros, for "Tensor" outputs. * **check_batched_grad** (*bool**, **optional*) -- if True, check if we can compute batched gradients using prototype vmap support. Defaults to False. * **check_batched_forward_grad** (*bool**, **optional*) -- if True, checks if we can compute batched forward gradients using forward ad and prototype vmap support. Defaults to False. * **check_forward_ad** (*bool**, **optional*) -- if True, check that the gradients computed with forward mode AD match the numerical ones. Defaults to False. * **check_backward_ad** (*bool**, **optional*) -- if False, do
not perform any checks that rely on backward mode AD to be implemented. Defaults to True. * **fast_mode** (*bool**, **optional*) -- Fast mode for gradcheck and gradgradcheck is currently only implemented for R to R functions. If none of the inputs and outputs are complex a faster implementation of gradcheck that no longer computes the entire jacobian is run; otherwise, we fall back to the slow implementation. Returns: True if all differences satisfy allclose condition Return type: bool
https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html
pytorch docs
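A compact sketch (function and shapes arbitrary) following the note above that inputs should be double precision:
>>> from torch.autograd import gradcheck
>>> inp = torch.randn(3, dtype=torch.double, requires_grad=True)
>>> gradcheck(lambda x: (x * x).sum(), (inp,), eps=1e-6, atol=1e-4)
True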
MaxPool3d class torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) Applies a 3D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and "kernel_size" (kD, kH, kW) can be precisely described as: \begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times d + k, \text{stride[1]} \times h + m, \text{stride[2]} \times w + n) \end{aligned} If "padding" is non-zero, then the input is implicitly padded with negative infinity on both sides for "padding" number of points. "dilation" controls the spacing between the kernel points. It is
harder to describe, but this link has a nice visualization of what "dilation" does. Note: When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored. The parameters "kernel_size", "stride", "padding", "dilation" can either be: * a single "int" -- in which case the same value is used for the depth, height and width dimension * a "tuple" of three ints -- in which case, the first *int* is used for the depth dimension, the second *int* for the height dimension and the third *int* for the width dimension Parameters: * kernel_size (Union[int, Tuple[int, int, int]]) -- the size of the window to take a max over * **stride** (*Union**[**int**, **Tuple**[**int**, **int**, **int**]**]*) -- the stride of the window. Default value is "kernel_size"
* **padding** (*Union**[**int**, **Tuple**[**int**, **int**, **int**]**]*) -- Implicit negative infinity padding to be added on all three sides * **dilation** (*Union**[**int**, **Tuple**[**int**, **int**, **int**]**]*) -- a parameter that controls the stride of elements in the window * **return_indices** (*bool*) -- if "True", will return the max indices along with the outputs. Useful for "torch.nn.MaxUnpool3d" later * **ceil_mode** (*bool*) -- when True, will use *ceil* instead of *floor* to compute the output shape Shape: * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in}, W_{in}). * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out}, H_{out}, W_{out}), where D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} +
1\right\rfloor H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor Examples: >>> # pool of square window of size=3, stride=2 >>> m = nn.MaxPool3d(3, stride=2) >>> # pool of non-square window >>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2)) >>> input = torch.randn(20, 16, 50, 44, 31) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html
pytorch docs
torch.Tensor.index_reduce Tensor.index_reduce() Out-of-place version of "torch.Tensor.index_reduce_()".
https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce.html
pytorch docs
torch.hspmm torch.hspmm(mat1, mat2, *, out=None) -> Tensor Performs a matrix multiplication of a sparse COO matrix "mat1" and a strided matrix "mat2". The result is a (1 + 1)-dimensional hybrid COO matrix. Parameters: * mat1 (Tensor) -- the first sparse matrix to be matrix multiplied * **mat2** (*Tensor*) -- the second strided matrix to be matrix multiplied Keyword Arguments: out (Tensor, optional) -- the output tensor.
https://pytorch.org/docs/stable/generated/torch.hspmm.html
pytorch docs
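A small sketch (values arbitrary); to_dense() is only used here to display the hybrid result:
>>> i = torch.tensor([[0, 1], [1, 0]])
>>> v = torch.tensor([2., 3.])
>>> mat1 = torch.sparse_coo_tensor(i, v, (2, 2))  # dense view: [[0, 2], [3, 0]]
>>> mat2 = torch.eye(2)
>>> torch.hspmm(mat1, mat2).to_dense()
tensor([[0., 2.],
        [3., 0.]])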
torch.sparse.sampled_addmm torch.sparse.sampled_addmm(input, mat1, mat2, *, beta=1., alpha=1., out=None) -> Tensor Performs a matrix multiplication of the dense matrices "mat1" and "mat2" at the locations specified by the sparsity pattern of "input". The matrix "input" is added to the final result. Mathematically this performs the following operation: \text{out} = \alpha\ (\text{mat1} \mathbin{@} \text{mat2})*\text{spy}(\text{input}) + \beta\ \text{input} where \text{spy}(\text{input}) is the sparsity pattern matrix of "input", "alpha" and "beta" are the scaling factors. \text{spy}(\text{input}) has value 1 at the positions where "input" has non-zero values, and 0 elsewhere. Note: "input" must be a sparse CSR tensor. "mat1" and "mat2" must be dense tensors. Parameters: * input (Tensor) -- a sparse CSR matrix of shape (m, n) to be added and used to compute the sampled matrix multiplication
* **mat1** (*Tensor*) -- a dense matrix of shape *(m, k)* to be multiplied * **mat2** (*Tensor*) -- a dense matrix of shape *(k, n)* to be multiplied Keyword Arguments: * beta (Number, optional) -- multiplier for "input" (\beta) * **alpha** (*Number**, **optional*) -- multiplier for mat1 @ mat2 (\alpha) * **out** (*Tensor**, **optional*) -- output tensor. Ignored if *None*. Default: *None*. Examples: >>> input = torch.eye(3, device='cuda').to_sparse_csr() >>> mat1 = torch.randn(3, 5, device='cuda') >>> mat2 = torch.randn(5, 3, device='cuda') >>> torch.sparse.sampled_addmm(input, mat1, mat2) tensor(crow_indices=tensor([0, 1, 2, 3]), col_indices=tensor([0, 1, 2]), values=tensor([ 0.2847, -0.7805, -0.1900]), device='cuda:0', size=(3, 3), nnz=3, layout=torch.sparse_csr) >>> torch.sparse.sampled_addmm(input, mat1, mat2).to_dense()
tensor([[ 0.2847, 0.0000, 0.0000], [ 0.0000, -0.7805, 0.0000], [ 0.0000, 0.0000, -0.1900]], device='cuda:0') >>> torch.sparse.sampled_addmm(input, mat1, mat2, beta=0.5, alpha=0.5) tensor(crow_indices=tensor([0, 1, 2, 3]), col_indices=tensor([0, 1, 2]), values=tensor([ 0.1423, -0.3903, -0.0950]), device='cuda:0', size=(3, 3), nnz=3, layout=torch.sparse_csr)
https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html
pytorch docs
torch.take torch.take(input, index) -> Tensor Returns a new tensor with the elements of "input" at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices. Parameters: * input (Tensor) -- the input tensor. * **index** (*LongTensor*) -- the indices into tensor Example: >>> src = torch.tensor([[4, 3, 5], ... [6, 7, 8]]) >>> torch.take(src, torch.tensor([0, 2, 5])) tensor([ 4, 5, 8])
https://pytorch.org/docs/stable/generated/torch.take.html
pytorch docs
torch.Tensor.equal Tensor.equal(other) -> bool See "torch.equal()"
https://pytorch.org/docs/stable/generated/torch.Tensor.equal.html
pytorch docs
default_weight_only_qconfig torch.quantization.qconfig.default_weight_only_qconfig alias of QConfig(activation=<class 'torch.nn.modules.linear.Identity'>, weight=functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>, observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric, reduce_range=False){})
https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_weight_only_qconfig.html
pytorch docs
torch.vander torch.vander(x, N=None, increasing=False) -> Tensor Generates a Vandermonde matrix. The columns of the output matrix are elementwise powers of the input vector x^{(N-1)}, x^{(N-2)}, ..., x^0. If increasing is True, the order of the columns is reversed x^0, x^1, ..., x^{(N-1)}. Such a matrix with a geometric progression in each row is named for Alexandre-Théophile Vandermonde. Parameters: * x (Tensor) -- 1-D input tensor. * **N** (*int**, **optional*) -- Number of columns in the output. If N is not specified, a square array is returned (N = len(x)). * **increasing** (*bool**, **optional*) -- Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed. Returns: Vandermonde matrix. If increasing is False, the first column is x^{(N-1)}, the second x^{(N-2)} and so forth. If increasing is
True, the columns are x^0, x^1, ..., x^{(N-1)}. Return type: Tensor Example: >>> x = torch.tensor([1, 2, 3, 5]) >>> torch.vander(x) tensor([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [ 27, 9, 3, 1], [125, 25, 5, 1]]) >>> torch.vander(x, N=3) tensor([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) >>> torch.vander(x, N=3, increasing=True) tensor([[ 1, 1, 1], [ 1, 2, 4], [ 1, 3, 9], [ 1, 5, 25]])
https://pytorch.org/docs/stable/generated/torch.vander.html
pytorch docs
NLLLoss class torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') The negative log likelihood loss. It is useful to train a classification problem with C classes. If provided, the optional argument "weight" should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set. The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1 for the K-dimensional case. The latter is useful for higher dimension inputs, such as computing NLL loss per-pixel for 2D images. Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range [0, C-1] where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range). The unreduced (i.e. with "reduction" set to "'none'") loss can be described as: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\}, where x is the input, y is the target, w is the weight, and N is the batch size. If "reduction" is not "'none'" (default "'mean'"), then \ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases} Parameters: * weight (Tensor, optional) -- a manual rescaling
weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones. * **size_average** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field "size_average" is set to "False", the losses are instead summed for each minibatch. Ignored when "reduce" is "False". Default: "None" * **ignore_index** (*int**, **optional*) -- Specifies a target value that is ignored and does not contribute to the input gradient. When "size_average" is "True", the loss is averaged over non-ignored targets. * **reduce** (*bool**, **optional*) -- Deprecated (see "reduction"). By default, the losses are averaged or summed over observations for each minibatch depending on
https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html
pytorch docs
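A minimal sketch of the LogSoftmax + NLLLoss pairing described above (shapes arbitrary):
>>> m = torch.nn.LogSoftmax(dim=1)
>>> loss = torch.nn.NLLLoss()
>>> input = torch.randn(3, 5, requires_grad=True)  # N = 3 samples, C = 5 classes
>>> target = torch.tensor([1, 0, 4])               # class indices in [0, C-1]
>>> output = loss(m(input), target)
>>> output.backward()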