"fft_ifft2", "fft_fftn", "fft_ifftn", "fft_rfft", "fft_irfft",
"fft_rfft2", "fft_irfft2", "fft_rfftn", "fft_irfftn", "fft_hfft",
"fft_ihfft", "linalg_matrix_norm", "linalg_cond",
"linalg_matrix_rank", "linalg_solve", "linalg_cholesky",
"linalg_svdvals", "linalg_eigvals", "linalg_eigvalsh", "linalg_inv",
"linalg_householder_product", "linalg_tensorinv",
"linalg_tensorsolve", "fake_quantize_per_tensor_affine", "eig",
"geqrf", "lstsq", "_lu_with_info", "qr", "solve", "svd", "symeig",
"triangular_solve", "fractional_max_pool2d", "fractional_max_pool3d",
"adaptive_max_pool3d", "multilabel_margin_loss_forward", "linalg_qr",
"linalg_cholesky_ex", "linalg_svd", "linalg_eig", "linalg_eigh",
"linalg_lstsq", "linalg_inv_ex"
CPU Ops that promote to the widest input type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These ops don't require a particular dtype for stability, but take
multiple inputs and require that the inputs' dtypes match. If all of
the inputs are "bfloat16", the op runs in "bfloat16". If any of the
inputs is "float32", autocast casts all inputs to "float32" and runs
the op in "float32".
"cat", "stack", "index_copy"
Some ops not listed here (e.g., binary ops like "add") natively
promote inputs without autocasting's intervention. If inputs are a
mixture of "bfloat16" and "float32", these ops run in "float32" and
produce "float32" output, regardless of whether autocast is enabled.
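As an illustrative sketch (not part of the original reference; assumes a CPU
build with bfloat16 support), the promotion behavior for "cat" looks like this:

    import torch

    a = torch.randn(4, 4, dtype=torch.bfloat16)
    b = torch.randn(4, 4, dtype=torch.float32)

    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        mixed = torch.cat([a, b])   # mixed dtypes: autocast promotes to float32
        low = torch.cat([a, a])     # all bfloat16: the op stays in bfloat16

    print(mixed.dtype)  # torch.float32
    print(low.dtype)    # torch.bfloat16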
torch._dynamo
Warning:
This module is an early prototype and is subject to change.
torch._dynamo.allow_in_graph(fn)
Customize which functions TorchDynamo will include in the generated
graph. Similar to torch.fx.wrap().
torch._dynamo.allow_in_graph(my_custom_function)
    @torch._dynamo.optimize(...)
    def fn(a):
        x = torch.add(a, 1)
        x = my_custom_function(x)
        x = torch.add(x, 1)
        return x
fn(...)
Will capture a single graph containing my_custom_function().
torch._dynamo.disallow_in_graph(fn)
Customize which functions TorchDynamo will exclude from the generated
graph and force a graph break on.
torch._dynamo.disallow_in_graph(torch.sub)
    @torch._dynamo.optimize(...)
    def fn(a):
        x = torch.add(a, 1)
        x = torch.sub(x, 1)
        x = torch.add(x, 1)
        return x
fn(...)
Will break the graph on torch.sub, and give two graphs each with
a single torch.add() op.
torch._dynamo.graph_break()
Force a graph break
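An illustrative sketch (not from the original reference; assumes the debug
"eager" backend listed by torch._dynamo.list_backends()):

    import torch

    @torch._dynamo.optimize("eager")
    def fn(x):
        x = x + 1
        torch._dynamo.graph_break()  # the captured graph is forced to end here
        return x + 1

    fn(torch.randn(3))  # TorchDynamo captures two graphs, one per side of the break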
torch._dynamo.optimize(backend='inductor', *, nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)
The main entrypoint of TorchDynamo. Do graph capture and call
backend() to optimize extracted graphs.
Parameters:
* backend -- One of the two things: - Either, a
function/callable taking a torch.fx.GraphModule and
example_inputs and returning a python callable that runs the
graph faster. One can also provide additional context for the
backend, like torch.jit.fuser("fuser2"), by setting the
backend_ctx_ctor attribute. See
AOTAutogradMemoryEfficientFusionWithContext for the usage. -
Or, a string backend name in torch._dynamo.list_backends()
* **nopython** -- If True, graph breaks will be errors and there
will be a single whole-program graph.
* **disable** -- If True, turn this decorator into a no-op
* **dynamic** -- If True, turn on dynamic shapes support
Example Usage:
@torch._dynamo.optimize()
def toy_example(a, b):
...
torch._dynamo.optimize_assert(backend, *, hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, dynamic=False)
The same as torch._dynamo.optimize(backend, nopython=True)
torch._dynamo.run(fn=None)
Don't do any dynamic compiles, just use prior optimizations
torch._dynamo.disable(fn=None)
Decorator and context manager to disable TorchDynamo
torch._dynamo.reset()
Clear all compile caches and restore initial state
torch._dynamo.list_backends()
Return valid strings that can be passed to:
@torch._dynamo.optimize(<backend>)
def foo(...):
....
torch._dynamo.skip(fn=None)
Skip frames associated with the function code, but still process
recursively invoked frames
class torch._dynamo.OptimizedModule(mod, dynamo_ctx)
Wraps the original nn.Module object and later patches its forward
method to the optimized self.forward method.
Tensor Views
PyTorch allows a tensor to be a "View" of an existing tensor. A view
tensor shares the same underlying data with its base tensor.
Supporting "View" avoids explicit data copies, which allows fast and
memory-efficient reshaping, slicing, and element-wise operations.
For example, to get a view of an existing tensor "t", you can call
"t.view(...)".
t = torch.rand(4, 4)
b = t.view(2, 8)
t.storage().data_ptr() == b.storage().data_ptr() # t and b share the same underlying data.
True
# Modifying view tensor changes base tensor as well.
b[0][0] = 3.14
t[0][0]
tensor(3.14)
Since views share underlying data with their base tensors, editing
the data in a view will be reflected in the base tensor as well.
Typically a PyTorch op returns a new tensor as output, e.g. "add()".
But in the case of view ops, outputs are views of input tensors to
avoid unnecessary data copies. No data movement occurs when creating
a view; the view tensor just changes the way it interprets the same
data. Taking a view of a contiguous tensor could potentially produce
a non-contiguous tensor. Users should pay additional attention, as
contiguity can have an implicit performance impact. "transpose()" is
a common example.
base = torch.tensor([[0, 1],[2, 3]])
base.is_contiguous()
True
t = base.transpose(0, 1) # t is a view of base. No data movement happened here.
# View tensors might be non-contiguous.
t.is_contiguous()
False
# To get a contiguous tensor, call .contiguous() to enforce
# copying data when t is not contiguous.
c = t.contiguous()
For reference, here's a full list of view ops in PyTorch:
Basic slicing and indexing ops, e.g. "tensor[0, 2:, 1:7:2]", return a
view of the base "tensor"; see the note below.
"adjoint()"
"as_strided()"
"detach()"
"diagonal()"
"expand()"
"expand_as()"
"movedim()"
"narrow()"
"permute()"
"select()"
"squeeze()"
"transpose()"
"t()"
"T"
"H"
"mT"
"mH"
"real"
"imag"
"view_as_real()"
"unflatten()"
"unfold()"
"unsqueeze()"
"view()"
"view_as()"
"unbind()"
"split()"
"hsplit()"
"vsplit()"
"tensor_split()"
"split_with_sizes()"
"swapaxes()"
"swapdims()"
"chunk()"
"indices()" (sparse tensor only)
"values()" (sparse tensor only)
Note:
When accessing the contents of a tensor via indexing, PyTorch
follows Numpy behaviors that basic indexing returns views, while
advanced indexing returns a copy. Assignment via either basic or
advanced indexing is in-place. See more examples in Numpy indexing
documentation.
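A minimal sketch of the difference:

    import torch

    t = torch.zeros(3, 3)
    v = t[0]            # basic indexing: `v` is a view of `t`
    v[0] = 1.0
    print(t[0, 0])      # tensor(1.) -- the base tensor changed

    c = t[[0, 1]]       # advanced indexing: `c` is a copy
    c[0, 0] = 2.0
    print(t[0, 0])      # still tensor(1.)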
It's also worth mentioning a few ops with special behaviors:
"reshape()", "reshape_as()" and "flatten()" can return either a view
or a new tensor; user code shouldn't rely on whether or not it's a
view.
"contiguous()" returns itself if the input tensor is already
contiguous; otherwise it returns a new contiguous tensor by copying
the data.
For a more detailed walk-through of PyTorch internal implementation,
please refer to ezyang's blogpost about PyTorch Internals.
torch.Storage
"torch.Storage" is an alias for the storage class that corresponds
with the default data type ("torch.get_default_dtype()"). For
instance, if the default data type is "torch.float", "torch.Storage"
resolves to "torch.FloatStorage".
The "torch.<type>Storage" and "torch.cuda.<type>Storage" classes,
like "torch.FloatStorage", "torch.IntStorage", etc., are not actually
ever instantiated. Calling their constructors creates a
"torch.TypedStorage" with the appropriate "torch.dtype" and
"torch.device". "torch.<type>Storage" classes have all of the same
class methods that "torch.TypedStorage" has.
A "torch.TypedStorage" is a contiguous, one-dimensional array of
elements of a particular "torch.dtype". It can be given any
"torch.dtype", and the internal data will be interpreted
appropriately. "torch.TypedStorage" contains a "torch.UntypedStorage"
which holds the data as an untyped array of bytes.
Every strided "torch.Tensor" contains a "torch.TypedStorage", which
stores all of the data that the "torch.Tensor" views.
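A small sketch of how these pieces relate:

    import torch

    t = torch.arange(6, dtype=torch.float32)
    typed = t.storage()                # TypedStorage with dtype torch.float32
    print(typed.dtype, typed.size())   # torch.float32 6
    untyped = typed.untyped()          # the underlying UntypedStorage of raw bytes
    print(untyped.nbytes())            # 24 (6 elements * 4 bytes each)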
Warning:
All storage classes except for "torch.UntypedStorage" will be
removed in the future, and "torch.UntypedStorage" will be used in
all cases.
class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byte()
Casts this storage to byte type
char()
Casts this storage to char type
clone()
Returns a copy of this storage
complex_double()
Casts this storage to complex double type
complex_float()
Casts this storage to complex float type
copy_(source, non_blocking=None)
cpu()
Returns a CPU copy of this storage if it's not already on the
CPU
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct
device, then no copy is performed and the original object is
returned.
Parameters:
* **device** (*int*) -- The destination GPU id. Defaults to
the current device.
* **non_blocking** (*bool*) -- If "True" and the source is in
pinned memory, the copy will be asynchronous with respect
to the host. Otherwise, the argument has no effect.
* ****kwargs** -- For compatibility, may contain the key
"async" in place of the "non_blocking" argument.
Return type:
*T*
data_ptr()
property device
double()
Casts this storage to double type
dtype: dtype
element_size()
fill_(value)
float()
Casts this storage to float type
classmethod from_buffer(*args, **kwargs)
classmethod from_file(filename, shared=False, size=0) -> Storage
If *shared* is *True*, then memory is shared between all
processes. All changes are written to the file. If *shared* is
False, then the changes on the storage do not affect the file.
*size* is the number of elements in the storage. If *shared* is
*False*, then the file must contain at least *size *
sizeof(Type)* bytes (*Type* is the type of storage). If *shared*
is *True* the file will be created if needed.
Parameters:
* **filename** (*str*) -- file name to map
* **shared** (*bool*) -- whether to share memory
* **size** (*int*) -- number of elements in the storage
get_device()
Return type:
int
half()
Casts this storage to half type
int()
Casts this storage to int type
property is_cuda
is_pinned()
is_shared()
is_sparse = False
long()
Casts this storage to long type
nbytes()
pickle_storage_type()
pin_memory()
Copies the storage to pinned memory, if it's not already
pinned.
resize_(size)
share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for
CUDA storages, which do not need to be moved for sharing across
processes. Storages in shared memory cannot be resized.
Returns: self
short()
Casts this storage to short type
size()
tolist()
Returns a list containing the elements of this storage
type(dtype=None, non_blocking=False)
Returns the type if *dtype* is not provided, else casts this
object to the specified type.
If this is already of the correct type, no copy is performed and
the original object is returned.
Parameters:
* **dtype** (*type** or **string*) -- The desired type
* **non_blocking** (*bool*) -- If "True", and the source is
in pinned memory and destination is on the GPU or vice
versa, the copy is performed asynchronously with respect to
the host. Otherwise, the argument has no effect.
* ****kwargs** -- For compatibility, may contain the key
"async" in place of the "non_blocking" argument. The
"async" arg is deprecated.
Return type:
Union[T, str]
untyped()
Returns the internal "torch.UntypedStorage"
class torch.UntypedStorage(*args, **kwargs)
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byte()
Casts this storage to byte type
char()
Casts this storage to char type
clone()
Returns a copy of this storage
complex_double()
Casts this storage to complex double type
complex_float()
Casts this storage to complex float type
copy_()
cpu()
Returns a CPU copy of this storage if it's not already on the
CPU
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct
device, then no copy is performed and the original object is
returned.
Parameters:
* **device** (*int*) -- The destination GPU id. Defaults to
the current device.
* **non_blocking** (*bool*) -- If "True" and the source is in
pinned memory, the copy will be asynchronous with respect
to the host. Otherwise, the argument has no effect.
* ****kwargs** -- For compatibility, may contain the key
"async" in place of the "non_blocking" argument.
data_ptr()
device: device
double()
Casts this storage to double type
element_size()
fill_()
float()
Casts this storage to float type
static from_buffer()
static from_file(filename, shared=False, size=0) -> Storage
If *shared* is *True*, then memory is shared between all
processes. All changes are written to the file. If *shared* is
*False*, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is
False, then the file must contain at least size *
sizeof(Type) bytes (Type is the type of storage). If shared
is True the file will be created if needed.
Parameters:
* **filename** (*str*) -- file name to map
* **shared** (*bool*) -- whether to share memory
* **size** (*int*) -- number of elements in the storage
get_device()
Return type:
int
half()
Casts this storage to half type
int()
Casts this storage to int type
property is_cuda
is_pinned()
is_shared()
is_sparse: bool = False
is_sparse_csr: bool = False
long()
Casts this storage to long type
mps()
Returns an MPS copy of this storage if it's not already on MPS.
nbytes()
new()
pin_memory()
Copies the storage to pinned memory, if it's not already pinned.
resize_()
share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for
CUDA storages, which do not need to be moved for sharing across
processes. Storages in shared memory cannot be resized.
Returns: self
short()
Casts this storage to short type
size()
Return type:
int
tolist()
Returns a list containing the elements of this storage
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if *dtype* is not provided, else casts this
object to the specified type.
If this is already of the correct type, no copy is performed and
the original object is returned.
Parameters:
* **dtype** (*type** or **string*) -- The desired type
* **non_blocking** (*bool*) -- If "True", and the source is
in pinned memory and destination is on the GPU or vice
versa, the copy is performed asynchronously with respect to
the host. Otherwise, the argument has no effect.
* ****kwargs** -- For compatibility, may contain the key
"async" in place of the "non_blocking" argument. The
"async" arg is deprecated.
untyped()
class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.float64
class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.float32
class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.float16
class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.int64
class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.int32
class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.int16
class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.int8
class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.uint8
class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.bool
class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.bfloat16
class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.complex128
class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.complex64
class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.quint8
class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.qint8
class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.qint32
class torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.quint4x2
class torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)
dtype: dtype = torch.quint2x4
torch.monitor
Warning:
This module is a prototype release, and its interfaces and
functionality may change without warning in future PyTorch releases.
"torch.monitor" provides an interface for logging events and counters
from PyTorch.
The stat interfaces are designed to be used for tracking high level
metrics that are periodically logged out to be used for monitoring
system performance. Since the stats aggregate with a specific window
size, you can log to them from critical loops with minimal performance
impact.
For more infrequent events or values, such as loss, accuracy, or usage
tracking, the event interface can be used directly.
Event handlers can be registered to handle the events and pass them to
an external event sink.
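A hedged sketch of wiring these pieces together (the stat name and handler
below are illustrative, not part of the reference):

    from datetime import timedelta
    from torch.monitor import Aggregation, Stat, register_event_handler

    # Print every event that reaches the registered handlers.
    handle = register_event_handler(lambda event: print(event.name, event.data))

    # Aggregate values over 60-second windows; one Event is logged per window.
    loss_stat = Stat("train.loss", [Aggregation.MEAN, Aggregation.COUNT],
                     timedelta(seconds=60))
    loss_stat.add(0.5)
    loss_stat.add(0.25)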
API Reference
class torch.monitor.Aggregation
These are types of aggregations that can be used to accumulate
stats.
Members:
VALUE :
VALUE returns the last value to be added.
MEAN :
MEAN computes the arithmetic mean of all the added values.
COUNT :
COUNT returns the total number of added values.
SUM :
SUM returns the sum of the added values.
MAX :
MAX returns the max of the added values.
MIN :
MIN returns the min of the added values.
property name
class torch.monitor.Stat
Stat is used to compute summary statistics in a performant way over
fixed intervals. Stat logs the statistics as an Event once every
"window_size" duration. When the window closes the stats are logged
via the event handlers as a "torch.monitor.Stat" event.
"window_size" should be set to something relatively high to avoid a
huge number of events being logged. Ex: 60s. Stat uses millisecond
precision.
If "max_samples" is set, the stat will cap the number of samples
per window by discarding add calls once "max_samples" adds have
occurred. If it's not set, all "add" calls during the window will
be included. This is an optional field to make aggregations more
directly comparable across windows when the number of samples might
vary.
When the Stat is destructed it will log any remaining data even if
the window hasn't elapsed.
__init__(self: torch._C._monitor.Stat, name: str, aggregations: List[torch._C._monitor.Aggregation], window_size: datetime.timedelta, max_samples: int = 9223372036854775807) -> None
Constructs the "Stat".
add(self: torch._C._monitor.Stat, v: float) -> None
Adds a value to the stat to be aggregated according to the
configured stat type and aggregations.
property count
Number of data points that have currently been collected. Resets
once the event has been logged.
get(self: torch._C._monitor.Stat) -> Dict[torch._C._monitor.Aggregation, float]
Returns the current value of the stat, primarily for testing
purposes. If the stat has logged and no additional values have
been added this will be zero.
property name
The name of the stat that was set during creation.
class torch.monitor.data_value_t
data_value_t is one of "str", "float", "int", "bool".
class torch.monitor.Event
Event represents a specific typed event to be logged. This can
represent high-level data points such as loss or accuracy per epoch
or more low-level aggregations such as through the Stats provided
through this library.
All Events of the same type should have the same name so downstream
handlers can correctly process them.
__init__(self: torch._C._monitor.Event, name: str, timestamp: datetime.datetime, data: Dict[str, data_value_t]) -> None
Constructs the "Event".
property data
The structured data contained within the "Event".
property name
The name of the "Event".
property timestamp
The timestamp when the "Event" happened.
class torch.monitor.EventHandlerHandle
EventHandlerHandle is a wrapper type returned by
"register_event_handler" used to unregister the handler via
"unregister_event_handler". This cannot be directly initialized.
torch.monitor.log_event(event: torch._C._monitor.Event) -> None
log_event logs the specified event to all of the registered event
handlers. It's up to the event handlers to log the event out to the
corresponding event sink.
If there are no event handlers registered this method is a no-op.
torch.monitor.register_event_handler(callback: Callable[[torch._C._monitor.Event], None]) -> torch._C._monitor.EventHandlerHandle
register_event_handler registers a callback to be called whenever
an event is logged via "log_event". These handlers should avoid
blocking the main thread since that may interfere with training as
they run during the "log_event" call.
torch.monitor.unregister_event_handler(handler: torch._C._monitor.EventHandlerHandle) -> None
unregister_event_handler unregisters the "EventHandlerHandle"
returned after calling "register_event_handler". After this returns
the event handler will no longer receive events.
class torch.monitor.TensorboardEventHandler(writer)
TensorboardEventHandler is an event handler that will write known
events to the provided SummaryWriter.
This currently only supports "torch.monitor.Stat" events which are
logged as scalars.
-[ Example ]-
from torch.utils.tensorboard import SummaryWriter
from torch.monitor import TensorboardEventHandler, register_event_handler
writer = SummaryWriter("log_dir")
register_event_handler(TensorboardEventHandler(writer))
__init__(writer)
Constructs the "TensorboardEventHandler".
Note:
If the following conditions are satisfied: 1) cudnn is enabled, 2)
input data is on the GPU, 3) input data has dtype "torch.float16", 4)
a V100 GPU is used, 5) input data is not in "PackedSequence" format,
the persistent algorithm can be selected to improve performance.
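A hedged sketch of inputs that satisfy these conditions (whether the
persistent kernels are actually chosen still depends on the GPU and cuDNN
version):

    import torch
    import torch.nn as nn

    if torch.cuda.is_available():
        # cuDNN is enabled by default; half-precision weights and inputs live on
        # the GPU and are passed as a padded tensor, not a PackedSequence.
        rnn = nn.LSTM(input_size=64, hidden_size=128, num_layers=2).cuda().half()
        x = torch.randn(35, 8, 64, device="cuda", dtype=torch.float16)  # (seq, batch, feature)
        out, (h, c) = rnn(x)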
C++
Note:
If you are looking for the PyTorch C++ API docs, directly go here.
PyTorch provides several features for working with C++, and it's best
to choose from them based on your needs. At a high level, the
following support is available:
TorchScript C++ API
TorchScript allows PyTorch models defined in Python to be serialized
and then loaded and run in C++, capturing the model code via
compilation or tracing its execution. You can learn more in the
Loading a TorchScript Model in C++ tutorial. This means you can define
your models in Python as much as possible, but subsequently export
them via TorchScript for doing no-Python execution in production or
embedded environments. The TorchScript C++ API is used to interact
with these models and the TorchScript execution engine, including:
Loading serialized TorchScript models saved from Python
Doing simple model modifications if needed (e.g. pulling out
submodules)
Constructing the input and doing preprocessing using C++ Tensor API
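On the Python side, the export step that produces such a serialized model is
typically just the following (a minimal sketch; "MyModule" and the file name
are illustrative):

    import torch
    import torch.nn as nn

    class MyModule(nn.Module):
        def forward(self, x):
            return x.relu() + 1

    # Compile the model to TorchScript and serialize it; the resulting file can
    # then be loaded from C++ with torch::jit::load("my_module.pt").
    scripted = torch.jit.script(MyModule())
    scripted.save("my_module.pt")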
Extending PyTorch and TorchScript with C++ Extensions
TorchScript can be augmented with user-supplied code through custom
operators and custom classes. Once registered with TorchScript, these
operators and classes can be invoked in TorchScript code run from
Python or from C++ as part of a serialized TorchScript model. The
Extending TorchScript with Custom C++ Operators tutorial walks through
interfacing TorchScript with OpenCV. In addition to wrapping a
function call with a custom operator, C++ classes and structs can be
bound into TorchScript through a pybind11-like interface which is
explained in the Extending TorchScript with Custom C++ Classes
tutorial.
Tensor and Autograd in C++
Most of the tensor and autograd operations in PyTorch Python API are
also available in the C++ API. These include:
"torch::Tensor" methods such as "add" / "reshape" / "clone". For the
full list of methods available, please see:
https://pytorch.org/cppdocs/api/classat_1_1_tensor.html
C++ tensor indexing API that looks and behaves the same as the
Python API. For details on its usage, please see:
https://pytorch.org/cppdocs/notes/tensor_indexing.html
The tensor autograd APIs and the "torch::autograd" package that are
crucial for building dynamic neural networks in C++ frontend. For
more details, please see:
https://pytorch.org/tutorials/advanced/cpp_autograd.html
Authoring Models in C++
The "author in TorchScript, infer in C++" workflow requires model
authoring to be done in TorchScript. However, there might be cases
where the model has to be authored in C++ (e.g. in workflows where a
Python component is undesirable). To serve such use cases, we provide
the full capability of authoring and training a neural net model
purely in C++, with familiar components such as "torch::nn" /
"torch::nn::functional" / "torch::optim" that closely resemble the
Python API.
For an overview of the PyTorch C++ model authoring and training API,
please see: https://pytorch.org/cppdocs/frontend.html
For a detailed tutorial on how to use the API, please see:
https://pytorch.org/tutorials/advanced/cpp_frontend.html
Docs for components such as "torch::nn" / "torch::nn::functional" /
"torch::optim" can be found at:
https://pytorch.org/cppdocs/api/library_root.html
Packaging for C++
For guidance on how to install and link with libtorch (the library
that contains all of the above C++ APIs), please see:
https://pytorch.org/cppdocs/installing.html. Note that on Linux there
are two types of libtorch binaries provided: one compiled with GCC
pre-cxx11 ABI and the other with GCC cxx11 ABI, and you should make
the selection based on the GCC ABI your system is using.
torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')
Forks the RNG, so that when you return, the RNG is reset to the
state that it was previously in.
Parameters:
* devices (iterable of CUDA IDs) -- CUDA devices for which
to fork the RNG. CPU RNG state is always forked. By default,
"fork_rng()" operates on all devices, but will emit a warning
if your machine has a lot of devices, since this function will
run very slowly in that case. If you explicitly specify
devices, this warning will be suppressed
* **enabled** (*bool*) -- if "False", the RNG is not forked.
This is a convenience argument for easily disabling the
context manager without having to delete it and unindent your
Python code under it.
Return type:
Generator
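A minimal sketch of using it as a context manager:

    import torch

    torch.manual_seed(0)
    with torch.random.fork_rng():
        torch.manual_seed(1234)
        tmp = torch.randn(3)   # drawn from the temporary RNG state
    x = torch.randn(3)         # drawn as if the block had never run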
torch.random.get_rng_state()
Returns the random number generator state as a torch.ByteTensor.
Return type:
Tensor
torch.random.initial_seed()
Returns the initial seed for generating random numbers as a Python
long.
Return type:
int
torch.random.manual_seed(seed)
Sets the seed for generating random numbers. Returns a
torch.Generator object.
Parameters:
seed (int) -- The desired seed. Value must be within the
inclusive range [-0x8000_0000_0000_0000,
0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.
Negative inputs are remapped to positive values with the formula
0xffff_ffff_ffff_ffff + seed.
Return type:
Generator
torch.random.seed()
Sets the seed for generating random numbers to a non-deterministic
random number. Returns a 64 bit number used to seed the RNG.
Return type:
int
torch.random.set_rng_state(new_state)
Sets the random number generator state.
Parameters:
new_state (torch.ByteTensor) -- The desired state
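A small sketch of saving and restoring the CPU RNG state with these functions:

    import torch

    state = torch.random.get_rng_state()   # snapshot the current CPU RNG state
    a = torch.randn(2)
    torch.random.set_rng_state(state)      # roll the generator back
    b = torch.randn(2)
    assert torch.equal(a, b)               # the same numbers are reproduced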
AvgPool1d
class torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
Applies a 1D average pooling over an input signal composed of
several input planes.
In the simplest case, the output value of the layer with input size
(N, C, L), output (N, C, L_{out}) and "kernel_size" k can be
precisely described as:
\text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1}
\text{input}(N_i, C_j, \text{stride} \times l + m)
If "padding" is non-zero, then the input is implicitly zero-padded
on both sides for "padding" number of points.
Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds
if they start within the left padding or the input. Sliding
windows that would start in the right padded region are ignored.
The parameters "kernel_size", "stride", "padding" can each be an
"int" or a one-element tuple.
Parameters:
* kernel_size (Union[int, Tuple[int]]) --
the size of the window
* **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- the
stride of the window. Default value is "kernel_size"
* **padding** (*Union**[**int**, **Tuple**[**int**]**]*) --
implicit zero padding to be added on both sides
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
* **count_include_pad** (*bool*) -- when True, will include the
zero-padding in the averaging calculation
Shape:
* Input: (N, C, L_{in}) or (C, L_{in}).
* Output: (N, C, L_{out}) or (C, L_{out}), where
L_{out} = \left\lfloor \frac{L_{in} + 2 \times
\text{padding} - \text{kernel\_size}}{\text{stride}} +
1\right\rfloor
Examples:
>>> # pool with window of size=3, stride=2
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))
tensor([[[2., 4., 6.]]])
torch.Tensor.tanh
Tensor.tanh() -> Tensor
See "torch.tanh()"
torch.eq
torch.eq(input, other, *, out=None) -> Tensor
Computes element-wise equality
The second argument can be a number or a tensor whose shape is
broadcastable with the first argument.
Parameters:
* input (Tensor) -- the tensor to compare
* **other** (*Tensor** or **float*) -- the tensor or value to
compare
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Returns:
A boolean tensor that is True where "input" is equal to "other"
and False elsewhere
Example:
>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[ True, False],
[False, True]])
torch.floor
torch.floor(input, *, out=None) -> Tensor
Returns a new tensor with the floor of the elements of "input", the
largest integer less than or equal to each element.
For integer inputs, follows the array-api convention of returning a
copy of the input tensor.
\text{out}_{i} = \left\lfloor \text{input}_{i} \right\rfloor
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.8166, 1.5308, -0.2530, -0.2091])
>>> torch.floor(a)
tensor([-1., 1., -1., -1.])
torch.autograd.Function.jvp
static Function.jvp(ctx, *grad_inputs)
Defines a formula for differentiating the operation with forward
mode automatic differentiation. This function is to be overridden
by all subclasses. It must accept a context "ctx" as the first
argument, followed by as many inputs as the "forward()" got (None
will be passed in for non-tensor inputs of the forward function),
and it should return as many tensors as there were outputs to
"forward()". Each argument is the gradient w.r.t. the given input,
and each returned value should be the gradient w.r.t. the
corresponding output. If an output is not a Tensor or the function
is not differentiable with respect to that output, you can just
return None as the gradient for that output.
You can use the "ctx" object to pass any value from the forward to
this function.
Return type:
Any
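An illustrative sketch (the "ScaleShift" function below is hypothetical, not
from the reference) of a custom Function that supports forward-mode AD via
"jvp()":

    import torch
    import torch.autograd.forward_ad as fwAD

    class ScaleShift(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return 2 * x + 1

        @staticmethod
        def jvp(ctx, x_t):
            # The op is linear in x, so the output tangent is 2 * (input tangent).
            return 2 * x_t

        @staticmethod
        def backward(ctx, grad_out):
            return 2 * grad_out

    with fwAD.dual_level():
        x = fwAD.make_dual(torch.tensor(3.0), torch.tensor(1.0))
        y = ScaleShift.apply(x)
        print(fwAD.unpack_dual(y).tangent)  # tensor(2.)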
ConvReLU3d
class torch.ao.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
A ConvReLU3d module is a fused module of Conv3d and ReLU
We adopt the same interface as "torch.ao.nn.quantized.Conv3d".
Attributes: Same as torch.ao.nn.quantized.Conv3d
torch.Tensor.corrcoef
Tensor.corrcoef() -> Tensor
See "torch.corrcoef()"
torch.Tensor.tolist
Tensor.tolist() -> list or number
Returns the tensor as a (nested) list. For scalars, a standard
Python number is returned, just like with "item()". Tensors are
automatically moved to the CPU first if necessary.
This operation is not differentiable.
Examples:
>>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
torch.autograd.gradgradcheck
torch.autograd.gradgradcheck(func, inputs, grad_outputs=None, *, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_fwd_over_rev=False, check_rev_over_rev=True, fast_mode=False)
Check gradients of gradients computed via small finite differences
against analytical gradients w.r.t. tensors in "inputs" and
"grad_outputs" that are of floating point or complex type and with
"requires_grad=True".
This function checks that backpropagating through the gradients
computed to the given "grad_outputs" is correct.
The check between numerical and analytical gradients uses
"allclose()".
Note:
The default values are designed for "input" and "grad_outputs" of
double precision. This check will likely fail if they are of less
precision, e.g., "FloatTensor".
Warning:
If any checked tensor in "input" and "grad_outputs" has
overlapping memory, i.e., different indices pointing to the same
memory address (e.g., from "torch.expand()"), this check will
likely fail because the numerical gradients computed by point
perturbation at such indices will change values at all other
indices that share the same memory address.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a Tensor or a tuple of Tensors
* **inputs** (*tuple of Tensor** or **Tensor*) -- inputs to the
function
* **grad_outputs** (*tuple of Tensor** or **Tensor**,
**optional*) -- The gradients with respect to the function's
outputs.
* **eps** (*float**, **optional*) -- perturbation for finite
differences
* **atol** (*float**, **optional*) -- absolute tolerance
* **rtol** (*float**, **optional*) -- relative tolerance
* **gen_non_contig_grad_outputs** (*bool**, **optional*) -- if
"grad_outputs" is "None" and "gen_non_contig_grad_outputs" is
"True", the randomly generated gradient outputs are made to be
noncontiguous
* **raise_exception** (*bool**, **optional*) -- indicating
whether to raise an exception if the check fails. The
exception gives more information about the exact nature of the
failure. This is helpful when debugging gradchecks.
* **nondet_tol** (*float**, **optional*) -- tolerance for non-
determinism. When running identical inputs through the
differentiation, the results must either match exactly
(default, 0.0) or be within this tolerance. Note that a small
amount of nondeterminism in the gradient will lead to larger
inaccuracies in the second derivative.
* **check_undefined_grad** (*bool**, **optional*) -- if True,
check if undefined output grads are supported and treated as
zeros
* **check_batched_grad** (*bool**, **optional*) -- if True,
check if we can compute batched gradients using prototype vmap
support. Defaults to False.
* **fast_mode** (*bool**, **optional*) -- if True, run a faster
implementation of gradgradcheck that no longer computes the
entire jacobian.
Returns:
True if all differences satisfy allclose condition
Return type:
bool
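For instance, a small sketch with double-precision inputs, as recommended
above ("func" here is an arbitrary test function):

    import torch
    from torch.autograd import gradgradcheck

    def func(x):
        return (x ** 3).sum()

    x = torch.randn(4, dtype=torch.double, requires_grad=True)
    print(gradgradcheck(func, (x,)))  # True if second derivatives match numerically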
torch.Tensor.bmm
Tensor.bmm(batch2) -> Tensor
See "torch.bmm()"
default_fused_wt_fake_quant
torch.quantization.fake_quantize.default_fused_wt_fake_quant
alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
quant_min=-128, quant_max=127, dtype=torch.qint8,
qscheme=torch.per_tensor_symmetric){}
torch.jit.trace
torch.jit.trace(func, example_inputs=None, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>, example_kwarg_inputs=None)
Trace a function and return an executable or "ScriptFunction" that
will be optimized using just-in-time compilation. Tracing is ideal
for code that operates only on "Tensor"s and lists, dictionaries,
and tuples of "Tensor"s.
Using torch.jit.trace and torch.jit.trace_module, you can turn
an existing module or Python function into a TorchScript
"ScriptFunction" or "ScriptModule". You must provide example
inputs, and we run the function, recording the operations performed
on all the tensors.
The resulting recording of a standalone function produces
ScriptFunction.
The resulting recording of nn.Module.forward or nn.Module
produces ScriptModule.
This module also contains any parameters that the original module
had as well.
Warning:
Tracing only correctly records functions and modules which are
not data dependent (e.g., do not have conditionals on data in
tensors) and do not have any untracked external dependencies
(e.g., perform input/output or access global variables). Tracing
only records operations done when the given function is run on
the given tensors. Therefore, the returned *ScriptModule* will
always run the same traced graph on any input. This has some
important implications when your module is expected to run
different sets of operations, depending on the input and/or the
module state. For example,
* Tracing will not record any control-flow like if-statements or
loops. When this control-flow is constant across your module,
this is fine and it often inlines the control-flow decisions.
But sometimes the control-flow is actually part of the model
itself. For instance, a recurrent network is a loop over the
(possibly dynamic) length of an input sequence.
* In the returned "ScriptModule", operations that have different
behaviors in "training" and "eval" modes will always behave as
if it is in the mode it was in during tracing, no matter which
mode the *ScriptModule* is in.
In cases like these, tracing would not be appropriate and
"scripting" is a better choice. If you trace such models, you may
silently get incorrect results on subsequent invocations of the
model. The tracer will try to emit warnings when doing something
that may cause an incorrect trace to be produced.
Parameters:
func (callable or torch.nn.Module) -- A Python
function or torch.nn.Module that will be run with
example_inputs. func arguments and return values must be
tensors or (possibly nested) tuples that contain tensors. When a
module is passed torch.jit.trace, only the "forward" method is
run and traced (see "torch.jit.trace" for details).
Keyword Arguments:
* example_inputs (tuple or torch.Tensor or None,
optional) -- A tuple of example inputs that will be passed
to the function while tracing. Default: "None". Either this
argument or "example_kwarg_inputs" should be specified. The
resulting trace can be run with inputs of different types and
shapes assuming the traced operations support those types and
shapes. example_inputs may also be a single Tensor in which
case it is automatically wrapped in a tuple. When the value is
None, "example_kwarg_inputs" should be specified.
* **check_trace** ("bool", optional) -- Check if the same inputs
run through traced code produce the same outputs. Default:
"True". You might want to disable this if, for example, your
network contains non-deterministic ops or if you are sure
that the network is correct despite a checker failure.
* **check_inputs** (*list of tuples**, **optional*) -- A list of
tuples of input arguments that should be used to check the
trace against what is expected. Each tuple is equivalent to a
set of input arguments that would be specified in
"example_inputs". For best results, pass in a set of checking
inputs representative of the space of shapes and types of
inputs you expect the network to see. If not specified, the
original "example_inputs" are used for checking
* **check_tolerance** (*float**, **optional*) -- Floating-point
comparison tolerance to use in the checker procedure. This
can be used to relax the checker strictness in the event that
results diverge numerically for a known reason, such as
operator fusion.
* **strict** ("bool", optional) -- run the tracer in a strict
mode or not (default: "True"). Only turn this off when you
want the tracer to record your mutable container types
(currently "list"/"dict") and you are sure that the container
you are using in your problem is a "constant" structure and
does not get used as control flow (if, for) conditions.
* **example_kwarg_inputs** (*dict**, **optional*) -- This
parameter is a pack of keyword arguments of example inputs
that will be passed to the function while tracing. Default:
"None". Either this argument or "example_inputs" should be
specified. The dict will be unpacked by the argument names of the
traced function. If the keys of the dict don't match the traced
function's argument names, a runtime exception will be raised.
Returns:
If func is nn.Module or "forward" of nn.Module, trace
returns a "ScriptModule" object with a single "forward" method
containing the traced code. The returned ScriptModule will
have the same set of sub-modules and parameters as the original
"nn.Module". If "func" is a standalone function, "trace"
returns ScriptFunction.
Example (tracing a function):
    import torch

    def foo(x, y):
        return 2 * x + y

    # Run `foo` with the provided inputs and record the tensor operations
    traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))

    # `traced_foo` can now be run with the TorchScript interpreter or saved
    # and loaded in a Python-free environment
Example (tracing an existing module):
import torch
import torch.nn as nn
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv = nn.Conv2d(1, 1, 3)

        def forward(self, x):
            return self.conv(x)
n = Net()
example_weight = torch.rand(1, 1, 3, 3)
example_forward_input = torch.rand(1, 1, 3, 3)
# Trace a specific method and construct `ScriptModule` with
# a single `forward` method
module = torch.jit.trace(n.forward, example_forward_input)
# Trace a module (implicitly traces `forward`) and construct a
# `ScriptModule` with a single `forward` method
module = torch.jit.trace(n, example_forward_input)
Unflatten
class torch.nn.Unflatten(dim, unflattened_size)
Unflattens a tensor dim expanding it to a desired shape. For use
with "Sequential".
"dim" specifies the dimension of the input tensor to be
unflattened, and it can be either int or str when Tensor or
NamedTensor is used, respectively.
"unflattened_size" is the new shape of the unflattened dimension
of the tensor and it can be a tuple of ints or a list of ints
or torch.Size for Tensor input; a NamedShape (tuple of
(name, size) tuples) for NamedTensor input.
Shape:
* Input: (*, S_{\text{dim}}, *), where S_{\text{dim}} is the
size at dimension "dim" and * means any number of dimensions
including none.
* Output: (*, U_1, ..., U_n, *), where U = "unflattened_size"
and \prod_{i=1}^n U_i = S_{\text{dim}}.
Parameters:
* dim (Union[int, str]) -- Dimension to be
unflattened
* **unflattened_size** (*Union**[**torch.Size**, **Tuple**,
**List**, **NamedShape**]*) -- New shape of the unflattened
dimension
-[ Examples ]-
input = torch.randn(2, 50)
With tuple of ints
m = nn.Sequential(
    nn.Linear(50, 50),
    nn.Unflatten(1, (2, 5, 5))
)
output = m(input)
output.size()
torch.Size([2, 2, 5, 5])
With torch.Size
m = nn.Sequential(
    nn.Linear(50, 50),
    nn.Unflatten(1, torch.Size([2, 5, 5]))
)
output = m(input)
output.size()
torch.Size([2, 2, 5, 5])
With namedshape (tuple of tuples)
input = torch.randn(2, 50, names=('N', 'features'))
unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))
output = unflatten(input)
output.size()
torch.Size([2, 2, 5, 5])
torch.Tensor.coalesce
Tensor.coalesce() -> Tensor
Returns a coalesced copy of "self" if "self" is an uncoalesced
tensor.
Returns "self" if "self" is a coalesced tensor.
Warning:
Throws an error if "self" is not a sparse COO tensor.
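A small sketch (the uncoalesced COO tensor below is constructed purely for
illustration):

    import torch

    i = torch.tensor([[0, 0, 1]])              # index 0 appears twice
    v = torch.tensor([3., 4., 5.])
    s = torch.sparse_coo_tensor(i, v, (2,))
    print(s.is_coalesced())                    # False
    c = s.coalesce()                           # duplicate entries are summed
    print(c.values())                          # tensor([7., 5.])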
torch.outer
torch.outer(input, vec2, *, out=None) -> Tensor
Outer product of "input" and "vec2". If "input" is a vector of size
n and "vec2" is a vector of size m, then "out" must be a matrix of
size (n \times m).
Note:
This function does not broadcast.
Parameters:
* input (Tensor) -- 1-D input vector
* **vec2** (*Tensor*) -- 1-D input vector
Keyword Arguments:
out (Tensor, optional) -- optional output matrix
Example:
>>> v1 = torch.arange(1., 5.)
>>> v2 = torch.arange(1., 4.)
>>> torch.outer(v1, v2)
tensor([[ 1., 2., 3.],
[ 2., 4., 6.],
[ 3., 6., 9.],
[ 4., 8., 12.]])
torch.nn.functional.avg_pool3d
torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor
Applies 3D average-pooling operation in kT \times kH \times kW
regions by step size sT \times sH \times sW steps. The number of
output features is equal to \lfloor\frac{\text{input
planes}}{sT}\rfloor.
See "AvgPool3d" for details and output shape.
Parameters:
* **input** -- input tensor (\text{minibatch} ,
\text{in_channels} , iT , iH , iW)
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kT, kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sT, sH, sW)*. Default: "kernel_size"
* **padding** -- implicit zero paddings on both sides of the
input. Can be a single number or a tuple *(padT, padH, padW)*,
Default: 0
* **ceil_mode** -- when True, will use *ceil* instead of *floor*
in the formula to compute the output shape
* **count_include_pad** -- when True, will include the zero-
padding in the averaging calculation
* **divisor_override** -- if specified, it will be used as
divisor, otherwise size of the pooling region will be used.
Default: None
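A short usage sketch:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 16, 16)             # (N, C, T, H, W)
    out = F.avg_pool3d(x, kernel_size=2)         # -> torch.Size([1, 3, 4, 8, 8])
    out = F.avg_pool3d(x, kernel_size=(2, 4, 4), count_include_pad=False)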
torch.autograd.graph.Node.next_functions
abstract property Node.next_functions: Tuple[Tuple[Optional[Node], int], ...]
torch.Tensor.byte
Tensor.byte(memory_format=torch.preserve_format) -> Tensor
"self.byte()" is equivalent to "self.to(torch.uint8)". See "to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
LinearReLU
class torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)
A LinearReLU module fused from Linear and ReLU modules that can be
used for dynamic quantization. Supports both, FP16 and INT8
quantization.
We adopt the same interface as
"torch.ao.nn.quantized.dynamic.Linear".
Variables:
torch.ao.nn.quantized.dynamic.Linear (Same as) --
Examples:
>>> m = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
torch.nn.functional.adaptive_max_pool2d
torch.nn.functional.adaptive_max_pool2d(*args, **kwargs)
Applies a 2D adaptive max pooling over an input signal composed of
several input planes.
See "AdaptiveMaxPool2d" for details and output shape.
Parameters:
* output_size -- the target output size (single integer or
double-integer tuple)
* **return_indices** -- whether to return pooling indices.
Default: "False"
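A short usage sketch:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 64, 10, 9)
    out = F.adaptive_max_pool2d(x, output_size=(5, 7))           # -> (1, 64, 5, 7)
    out, idx = F.adaptive_max_pool2d(x, 5, return_indices=True)  # 5x5 output plus indices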
MaxUnpool1d
class torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)
Computes a partial inverse of "MaxPool1d".
"MaxPool1d" is not fully invertible, since the non-maximal values
are lost.
"MaxUnpool1d" takes in as input the output of "MaxPool1d" including
the indices of the maximal values and computes a partial inverse in
which all non-maximal values are set to zero.
Note:
"MaxPool1d" can map several input sizes to the same output sizes.
Hence, the inversion process can get ambiguous. To accommodate
this, you can provide the needed output size as an additional
argument "output_size" in the forward call. See the Inputs and
Example below.
Parameters:
* kernel_size (int or tuple) -- Size of the max
pooling window.
* **stride** (*int** or **tuple*) -- Stride of the max pooling
window. It is set to "kernel_size" by default.
* **padding** (*int** or **tuple*) -- Padding that was added to
the input
Inputs:
* input: the input Tensor to invert
* *indices*: the indices given out by "MaxPool1d"
* *output_size* (optional): the targeted output size
Shape:
* Input: (N, C, H_{in}) or (C, H_{in}).
* Output: (N, C, H_{out}) or (C, H_{out}), where
H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times
\text{padding}[0] + \text{kernel\_size}[0]
or as given by "output_size" in the call operator
Example:
>>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool1d(2, stride=2)
>>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]])
>>> output, indices = pool(input)
>>> unpool(output, indices)
tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])
>>> # Example showcasing the use of output_size
>>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]])
>>> output, indices = pool(input)
>>> unpool(output, indices, output_size=input.size())
tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8., 0.]]])
>>> unpool(output, indices)
tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])
torch.Tensor.argmax
Tensor.argmax(dim=None, keepdim=False) -> LongTensor
See "torch.argmax()"
torch.nn.functional.max_pool2d
torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
Applies a 2D max pooling over an input signal composed of several
input planes.
Note:
The order of "ceil_mode" and "return_indices" is different from
what seen in "MaxPool2d", and will change in a future release.
See "MaxPool2d" for details.
Parameters:
* input -- input tensor (\text{minibatch} ,
\text{in_channels} , iH , iW), minibatch dim optional.
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sH, sW)*. Default: "kernel_size"
* **padding** -- Implicit negative infinity padding to be added
on both sides, must be >= 0 and <= kernel_size / 2.
* **dilation** -- The stride between elements within a sliding
window, must be > 0.
* **ceil_mode** -- If "True", will use *ceil* instead of *floor*
to compute the output shape. This ensures that every element
in the input tensor is covered by a sliding window.
* **return_indices** -- If "True", will return the argmax along
with the max values. Useful for
"torch.nn.functional.max_unpool2d" later
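A short usage sketch:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)
    out = F.max_pool2d(x, kernel_size=2)                 # -> (1, 3, 4, 4)
    out, idx = F.max_pool2d(x, 2, return_indices=True)   # indices for max_unpool2d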
torch.Tensor.unsqueeze_
Tensor.unsqueeze_(dim) -> Tensor
In-place version of "unsqueeze()"
QFunctional
class torch.ao.nn.quantized.QFunctional
Wrapper class for quantized operations.
The instance of this class can be used instead of the
"torch.ops.quantized" prefix. See example usage below.
Note:
This class does not provide a "forward" hook. Instead, you must
use one of the underlying functions (e.g. "add").
Examples:
>>> q_add = QFunctional()
>>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32)
>>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32)
>>> q_add.add(a, b) # Equivalent to ``torch.ops.quantized.add(a, b, 1.0, 0)``
Valid operation names:
* add
* cat
* mul
* add_relu
* add_scalar
* mul_scalar
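A hedged usage sketch (not from the original entry); it assumes "mul"
and "cat" take the same arguments as their "torch.ops.quantized"
counterparts, with the output scale and zero point taken from the
"QFunctional" instance:
>>> q_func = QFunctional()
>>> a = torch.quantize_per_tensor(torch.tensor([1.0, 2.0]), 0.1, 0, torch.quint8)
>>> b = torch.quantize_per_tensor(torch.tensor([3.0, 4.0]), 0.1, 0, torch.quint8)
>>> prod = q_func.mul(a, b)           # elementwise quantized multiply
>>> both = q_func.cat([a, b], dim=0)  # quantized concatenation along dim 0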
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.QFunctional.html | pytorch docs |
LazyBatchNorm1d
class torch.nn.LazyBatchNorm1d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
A "torch.nn.BatchNorm1d" module with lazy initialization of the
"num_features" argument of the "BatchNorm1d" that is inferred from
the "input.size(1)". The attributes that will be lazily initialized
are weight, bias, running_mean and running_var.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html | pytorch docs |
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
cls_to_become
alias of "BatchNorm1d"
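A minimal sketch (not part of the original entry) showing how
"num_features" is inferred on the first forward pass:
>>> bn = nn.LazyBatchNorm1d()
>>> input = torch.randn(20, 100)
>>> output = bn(input)
>>> bn.weight.shape      # num_features was inferred as input.size(1)
torch.Size([100])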
| https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html | pytorch docs |
torch.fliplr
torch.fliplr(input) -> Tensor
Flip tensor in the left/right direction, returning a new tensor.
Flip the entries in each row in the left/right direction. Columns
are preserved, but appear in a different order than before.
Note:
Requires the tensor to be at least 2-D.
Note:
*torch.fliplr* makes a copy of "input"'s data. This is different
from NumPy's *np.fliplr*, which returns a view in constant time.
Since copying a tensor's data is more work than viewing that
data, *torch.fliplr* is expected to be slower than *np.fliplr*.
Parameters:
input (Tensor) -- Must be at least 2-dimensional.
Example:
>>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
[2, 3]])
>>> torch.fliplr(x)
tensor([[1, 0],
[3, 2]])
| https://pytorch.org/docs/stable/generated/torch.fliplr.html | pytorch docs |
EmbeddingBag
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None, include_last_offset=False, padding_idx=None, device=None, dtype=None)
Computes sums or means of 'bags' of embeddings, without
instantiating the intermediate embeddings.
For bags of constant length, no "per_sample_weights", no indices
equal to "padding_idx", and with 2D inputs, this class
* with "mode="sum"" is equivalent to "Embedding" followed by
"torch.sum(dim=1)",
* with "mode="mean"" is equivalent to "Embedding" followed by
"torch.mean(dim=1)",
* with "mode="max"" is equivalent to "Embedding" followed by
"torch.max(dim=1)".
However, "EmbeddingBag" is much more time and memory efficient than
using a chain of these operations.
EmbeddingBag also supports per-sample weights as an argument to the
forward pass. This scales the output of the Embedding before | https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
performing a weighted reduction as specified by "mode". If
"per_sample_weights" is passed, the only supported "mode" is
""sum"", which computes a weighted sum according to
"per_sample_weights".
Parameters:
* **num_embeddings** (*int*) -- size of the dictionary of
embeddings
* **embedding_dim** (*int*) -- the size of each embedding vector
* **max_norm** (*float**, **optional*) -- If given, each
embedding vector with norm larger than "max_norm" is
renormalized to have norm "max_norm".
* **norm_type** (*float**, **optional*) -- The p of the p-norm
to compute for the "max_norm" option. Default "2".
* **scale_grad_by_freq** (*bool**, **optional*) -- if given,
this will scale gradients by the inverse of frequency of the
words in the mini-batch. Default "False". Note: this option is
not supported when "mode="max"".
* **mode** (*str**, **optional*) -- ""sum"", ""mean"" or
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
""max"". Specifies the way to reduce the bag. ""sum"" computes
the weighted sum, taking "per_sample_weights" into
consideration. ""mean"" computes the average of the values in
the bag, ""max"" computes the max value over each bag.
Default: ""mean""
* **sparse** (*bool**, **optional*) -- if "True", gradient
w.r.t. "weight" matrix will be a sparse tensor. See Notes for
more details regarding sparse gradients. Note: this option is
not supported when "mode="max"".
* **include_last_offset** (*bool**, **optional*) -- if "True",
"offsets" has one additional element, where the last element
is equivalent to the size of *indices*. This matches the CSR
format.
* **padding_idx** (*int**, **optional*) -- If specified, the
entries at "padding_idx" do not contribute to the gradient;
therefore, the embedding vector at "padding_idx" is not
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
updated during training, i.e. it remains as a fixed "pad". For
a newly constructed EmbeddingBag, the embedding vector at
"padding_idx" will default to all zeros, but can be updated to
another value to be used as the padding vector. Note that the
embedding vector at "padding_idx" is excluded from the
reduction.
Variables:
weight (Tensor) -- the learnable weights of the module of
shape (num_embeddings, embedding_dim) initialized from
\mathcal{N}(0, 1).
Examples:
>>> # an EmbeddingBag module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long)
>>> offsets = torch.tensor([0, 4], dtype=torch.long)
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
[ 1.1306, -2.5798, -1.0044]])
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
>>> # Example with padding_idx
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2)
>>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long)
>>> offsets = torch.tensor([0, 4], dtype=torch.long)
>>> embedding_sum(input, offsets)
tensor([[ 0.0000, 0.0000, 0.0000],
[-0.7082, 3.2145, -2.6251]])
>>> # An EmbeddingBag can be loaded from an Embedding like so
>>> embedding = nn.Embedding(10, 3, padding_idx=2)
>>> embedding_sum = nn.EmbeddingBag.from_pretrained(
embedding.weight,
padding_idx=embedding.padding_idx,
mode='sum')
forward(input, offsets=None, per_sample_weights=None)
Forward pass of EmbeddingBag.
Parameters:
* **input** (*Tensor*) -- Tensor containing bags of indices
into the embedding matrix.
* **offsets** (*Tensor**, **optional*) -- Only used when
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
"input" is 1D. "offsets" determines the starting index
position of each bag (sequence) in "input".
* **per_sample_weights** (*Tensor**, **optional*) -- a tensor
of float / double weights, or None to indicate all weights
should be taken to be "1". If specified,
"per_sample_weights" must have exactly the same shape as
input and is treated as having the same "offsets", if those
are not "None". Only supported for "mode='sum'".
Returns:
Tensor output shape of *(B, embedding_dim)*.
Return type:
*Tensor*
Note:
A few notes about "input" and "offsets":
* "input" and "offsets" have to be of the same type, either
int or long
* If "input" is 2D of shape *(B, N)*, it will be treated as
"B" bags (sequences) each of fixed length "N", and this will
return "B" values aggregated in a way depending on the
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
"mode". "offsets" is ignored and required to be "None" in
this case.
* If "input" is 1D of shape *(N)*, it will be treated as a
concatenation of multiple bags (sequences). "offsets" is
required to be a 1D tensor containing the starting index
positions of each bag in "input". Therefore, for "offsets"
of shape *(B)*, "input" will be viewed as having "B" bags.
Empty bags (i.e., having 0-length) will have returned
vectors filled by zeros.
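As an illustration of the 1D form (not part of the original entry),
combining "offsets" with "per_sample_weights" under "mode='sum'" and
an identity weight matrix:
>>> bag = nn.EmbeddingBag.from_pretrained(torch.eye(4), mode='sum')
>>> input = torch.tensor([0, 1, 2, 3])
>>> offsets = torch.tensor([0, 2])
>>> weights = torch.tensor([1., 2., 3., 4.])
>>> bag(input, offsets, per_sample_weights=weights)
tensor([[1., 2., 0., 0.],
        [0., 0., 3., 4.]])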
classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False, padding_idx=None)
Creates EmbeddingBag instance from given 2-dimensional
FloatTensor.
Parameters:
* **embeddings** (*Tensor*) -- FloatTensor containing weights
for the EmbeddingBag. First dimension is being passed to
EmbeddingBag as 'num_embeddings', second as
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
'embedding_dim'.
* **freeze** (*bool**, **optional*) -- If "True", the tensor
does not get updated in the learning process. Equivalent to
"embeddingbag.weight.requires_grad = False". Default:
"True"
* **max_norm** (*float**, **optional*) -- See module
initialization documentation. Default: "None"
* **norm_type** (*float**, **optional*) -- See module
initialization documentation. Default "2".
* **scale_grad_by_freq** (*bool**, **optional*) -- See module
initialization documentation. Default "False".
* **mode** (*str**, **optional*) -- See module initialization
documentation. Default: ""mean""
* **sparse** (*bool**, **optional*) -- See module
initialization documentation. Default: "False".
* **include_last_offset** (*bool**, **optional*) -- See
module initialization documentation. Default: "False".
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
* **padding_idx** (*int**, **optional*) -- See module
  initialization documentation. Default: "None".
Return type:
EmbeddingBag
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([[1, 0]])
>>> embeddingbag(input)
tensor([[ 2.5000, 3.7000, 4.6500]])
| https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html | pytorch docs |
default_float_qparams_observer
torch.quantization.observer.default_float_qparams_observer
alias of functools.partial(torch.ao.quantization.observer.PerChannelMinMaxObserver,
dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,
ch_axis=0){} | https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_float_qparams_observer.html | pytorch docs |
torch.Tensor.retains_grad
Tensor.retains_grad
Is "True" if this Tensor is non-leaf and its "grad" is enabled to
be populated during "backward()", "False" otherwise. | https://pytorch.org/docs/stable/generated/torch.Tensor.retains_grad.html | pytorch docs |
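A short sketch (not part of the original entry) showing the
interaction with "Tensor.retain_grad()":
>>> x = torch.randn(2, requires_grad=True)
>>> y = x * 2            # non-leaf tensor
>>> y.retains_grad
False
>>> y.retain_grad()
>>> y.retains_grad
True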
torch.Tensor.index_copy_
Tensor.index_copy_(dim, index, tensor) -> Tensor
Copies the elements of "tensor" into the "self" tensor by selecting
the indices in the order given in "index". For example, if "dim ==
0" and "index[i] == j", then the "i"th row of "tensor" is copied to
the "j"th row of "self".
The "dim"th dimension of "tensor" must have the same size as the
length of "index" (which must be a vector), and all other
dimensions must match "self", or an error will be raised.
Note:
If "index" contains duplicate entries, multiple elements from
"tensor" will be copied to the same index of "self". The result
is nondeterministic since it depends on which copy occurs last.
Parameters:
* **dim** (*int*) -- dimension along which to index
* **index** (*LongTensor*) -- indices of "tensor" to select from
* **tensor** (*Tensor*) -- the tensor containing values to copy
| https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html | pytorch docs |
Example:
>>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1., 2., 3.],
[ 0., 0., 0.],
[ 7., 8., 9.],
[ 0., 0., 0.],
[ 4., 5., 6.]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html | pytorch docs |
torch.Tensor.vsplit
Tensor.vsplit(split_size_or_sections) -> List of Tensors
See "torch.vsplit()" | https://pytorch.org/docs/stable/generated/torch.Tensor.vsplit.html | pytorch docs |
MultiheadAttention
class torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)
Allows the model to jointly attend to information from different
representation subspaces as described in the paper: Attention Is
All You Need.
Multi-Head Attention is defined as:
\text{MultiHead}(Q, K, V) =
\text{Concat}(head_1,\dots,head_h)W^O
where head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V).
"forward()" will use a special optimized implementation if all of
the following conditions are met:
* self attention is being computed (i.e., "query", "key", and
  "value" are the same tensor. This restriction will be loosened in
  the future.)
* inputs are batched (3D) with "batch_first==True"
* Either autograd is disabled (using "torch.inference_mode" or
  "torch.no_grad") or no tensor argument "requires_grad"
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
* training is disabled (using ".eval()")
* "add_bias_kv" is "False"
* "add_zero_attn" is "False"
* "batch_first" is "True" and the input is batched
* "kdim" and "vdim" are equal to "embed_dim"
* if a NestedTensor is passed, neither "key_padding_mask" nor
  "attn_mask" is passed
* autocast is disabled
If the optimized implementation is in use, a NestedTensor can be
passed for "query"/"key"/"value" to represent padding more
efficiently than using a padding mask. In this case, a NestedTensor
will be returned, and an additional speedup proportional to the
fraction of the input that is padding can be expected.
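A minimal sketch (not part of the original entry) of an eval-mode,
batched, "batch_first" self-attention call of the kind the optimized
path targets; whether the fast path is actually taken still depends
on all of the conditions above:
>>> mha = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True).eval()
>>> x = torch.randn(2, 10, 64)   # (batch, seq, feature), used as query, key, and value
>>> with torch.inference_mode():
...     out, _ = mha(x, x, x, need_weights=False)
>>> out.shape
torch.Size([2, 10, 64])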
Parameters:
* **embed_dim** -- Total dimension of the model.
* **num_heads** -- Number of parallel attention heads. Note that
"embed_dim" will be split across "num_heads" (i.e. each head
will have dimension "embed_dim // num_heads").
* **dropout** -- Dropout probability on "attn_output_weights".
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
Default: "0.0" (no dropout).
* **bias** -- If specified, adds bias to input / output
projection layers. Default: "True".
* **add_bias_kv** -- If specified, adds bias to the key and
value sequences at dim=0. Default: "False".
* **add_zero_attn** -- If specified, adds a new batch of zeros
to the key and value sequences at dim=1. Default: "False".
* **kdim** -- Total number of features for keys. Default: "None"
(uses "kdim=embed_dim").
* **vdim** -- Total number of features for values. Default:
"None" (uses "vdim=embed_dim").
* **batch_first** -- If "True", then the input and output
tensors are provided as (batch, seq, feature). Default:
"False" (seq, batch, feature).
Examples:
>>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
>>> attn_output, attn_output_weights = multihead_attn(query, key, value)
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)
Parameters:
* **query** (*Tensor*) -- Query embeddings of shape (L, E_q)
for unbatched input, (L, N, E_q) when "batch_first=False"
or (N, L, E_q) when "batch_first=True", where L is the
target sequence length, N is the batch size, and E_q is the
query embedding dimension "embed_dim". Queries are compared
against key-value pairs to produce the output. See
"Attention Is All You Need" for more details.
* **key** (*Tensor*) -- Key embeddings of shape (S, E_k) for
unbatched input, (S, N, E_k) when "batch_first=False" or
(N, S, E_k) when "batch_first=True", where S is the source
sequence length, N is the batch size, and E_k is the key
embedding dimension "kdim". See "Attention Is All You Need"
for more details.
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
for more details.
* **value** (*Tensor*) -- Value embeddings of shape (S, E_v)
for unbatched input, (S, N, E_v) when "batch_first=False"
or (N, S, E_v) when "batch_first=True", where S is the
source sequence length, N is the batch size, and E_v is the
value embedding dimension "vdim". See "Attention Is All You
Need" for more details.
* **key_padding_mask** (*Optional**[**Tensor**]*) -- If
specified, a mask of shape (N, S) indicating which elements
within "key" to ignore for the purpose of attention (i.e.
treat as "padding"). For unbatched *query*, shape should be
(S). Binary and byte masks are supported. For a binary
mask, a "True" value indicates that the corresponding "key"
value will be ignored for the purpose of attention. For a
float mask, it will be directly added to the corresponding
"key" value.
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
"key" value.
* **need_weights** (*bool*) -- If specified, returns
"attn_output_weights" in addition to "attn_outputs".
Default: "True".
* **attn_mask** (*Optional**[**Tensor**]*) -- If specified, a
2D or 3D mask preventing attention to certain positions.
Must be of shape (L, S) or (N\cdot\text{num\_heads}, L, S),
where N is the batch size, L is the target sequence length,
and S is the source sequence length. A 2D mask will be
broadcasted across the batch while a 3D mask allows for a
different mask for each entry in the batch. Binary, byte,
and float masks are supported. For a binary mask, a "True"
value indicates that the corresponding position is not
allowed to attend. For a byte mask, a non-zero value
indicates that the corresponding position is not allowed to
attend. For a float mask, the mask values will be added to
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
the attention weight.
* **is_causal** (*bool*) -- If specified, applies a causal
mask as attention mask. Mutually exclusive with providing
attn_mask. Default: "False".
* **average_attn_weights** (*bool*) -- If true, indicates
that the returned "attn_weights" should be averaged across
heads. Otherwise, "attn_weights" are provided separately
per head. Note that this flag only has an effect when
"need_weights=True". Default: "True" (i.e. average weights
across heads)
Return type:
*Tuple*[*Tensor*, *Optional*[*Tensor*]]
Outputs:
* **attn_output** - Attention outputs of shape (L, E) when
input is unbatched, (L, N, E) when "batch_first=False" or
(N, L, E) when "batch_first=True", where L is the target
sequence length, N is the batch size, and E is the
embedding dimension "embed_dim".
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |