Returns:
the dictionary used to save all logger stats
Return type:
target_dict
class torch.ao.ns._numeric_suite.Logger
Base class for stats logging
forward(x)
class torch.ao.ns._numeric_suite.ShadowLogger
Class used in Shadow module to record the outputs of the original
and shadow modules.
forward(x, y)
class torch.ao.ns._numeric_suite.OutputLogger
Class used to log the outputs of the module
forward(x)
class torch.ao.ns._numeric_suite.Shadow(q_module, float_module, logger_cls)
Shadow module attaches the float module to its matching quantized
module as the shadow. Then it uses Logger module to process the
outputs of both modules.
Parameters:
* q_module -- module quantized from float_module that we
want to shadow
* **float_module** -- float module used to shadow q_module
* **logger_cls** -- type of logger used to process the outputs
of q_module and float_module. ShadowLogger or custom loggers
can be used.
forward(*x)
Return type:
*Tensor*
add(x, y)
Return type:
*Tensor*
add_scalar(x, y)
Return type:
*Tensor*
mul(x, y)
Return type:
*Tensor*
mul_scalar(x, y)
Return type:
*Tensor*
cat(x, dim=0)
Return type:
*Tensor*
add_relu(x, y)
Return type:
*Tensor*
torch.ao.ns._numeric_suite.prepare_model_with_stubs(float_module, q_module, module_swap_list, logger_cls)
Prepare the model by attaching the float module to its matching
quantized module as the shadow if the float module type is in
module_swap_list.
Example usage:
prepare_model_with_stubs(float_model, q_model, module_swap_list, Logger)
q_model(data)
ob_dict = get_logger_dict(q_model)
Parameters:
* float_module (Module) -- float module used to generate
the q_module
* **q_module** (*Module*) -- module quantized from float_module
     * **module_swap_list** (*Set**[**type**]*) -- list of float
       module types to attach the shadow

     * **logger_cls** (*Callable*) -- type of logger to be used in
       shadow module to process the outputs of quantized module and
       its float shadow module
torch.ao.ns._numeric_suite.compare_model_stub(float_model, q_model, module_swap_list, *data, logger_cls=<class 'torch.ao.ns._numeric_suite.ShadowLogger'>)
Compare quantized module in a model with its floating point
counterpart, feeding both of them the same input. Return a dict
with key corresponding to module names and each entry being a
dictionary with two keys 'float' and 'quantized', containing the
output tensors of quantized and its matching float shadow module.
This dict can be used to compare and compute the module level
quantization error.
  This function first calls prepare_model_with_stubs() to swap the
  quantized module that we want to compare with the Shadow module,
  which takes the quantized module, the corresponding float module and
  a logger as input, and creates a forward path inside it so that the
  float module shadows the quantized module while sharing the same
  input. The logger is customizable; the default logger is
  ShadowLogger, and it saves the outputs of the quantized module and
  the float module so that they can be used to compute the module
  level quantization error.
Example usage:
module_swap_list = [torchvision.models.quantization.resnet.QuantizableBasicBlock]
ob_dict = compare_model_stub(float_model,qmodel,module_swap_list, data)
for key in ob_dict:
print(key, compute_error(ob_dict[key]['float'], ob_dict[key]['quantized'].dequantize()))
Parameters:
* float_model (Module) -- float model used to generate the
q_model
* **q_model** (*Module*) -- model quantized from float_model
* **module_swap_list** (*Set**[**type**]*) -- list of float
module types at which shadow modules will be attached.
     * **data** -- input data used to run the prepared q_model

     * **logger_cls** -- type of logger to be used in shadow module
       to process the outputs of quantized module and its float
       shadow module
Return type:
Dict[str, Dict]
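  The examples on this page call compute_error, which is not defined
  here; a common choice is the signal-to-quantization-noise ratio
  (SQNR). The helper below is a hedged sketch and not part of this
  module:

     import torch

     def compute_error(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
         # SQNR (in dB) between a float reference x and its (dequantized)
         # quantized counterpart y.
         Ps = torch.norm(x)        # signal power
         Pn = torch.norm(x - y)    # quantization noise power
         return 20 * torch.log10(Ps / Pn)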
torch.ao.ns._numeric_suite.get_matching_activations(float_module, q_module)
Find the matching activation between float and quantized modules.
Parameters:
* float_module (Module) -- float module used to generate
the q_module
* **q_module** (*Module*) -- module quantized from float_module
Returns:
dict with key corresponding to quantized module names and each
entry being a dictionary with two keys 'float' and 'quantized',
containing the matching float and quantized activations
Return type:
act_dict
torch.ao.ns._numeric_suite.prepare_model_outputs(float_module, q_module, logger_cls=<class 'torch.ao.ns._numeric_suite.OutputLogger'>, allow_list=None)
Prepare the model by attaching the logger to both float module and
quantized module if they are in the allow_list.
Parameters:
* float_module (Module) -- float module used to generate
the q_module
* **q_module** (*Module*) -- module quantized from float_module
* **logger_cls** -- type of logger to be attached to
float_module and q_module
* **allow_list** -- list of module types to attach logger
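  A hedged sketch of combining prepare_model_outputs and
  get_matching_activations by hand (compare_model_outputs below wraps
  the same flow); float_model, q_model, and data are placeholders for
  an existing float model, its quantized counterpart, and a
  calibration input:

     import torch.ao.ns._numeric_suite as ns

     # Attach OutputLogger instances to matching modules in both models,
     # run the same input through each, then collect the logged activations.
     ns.prepare_model_outputs(float_model, q_model, ns.OutputLogger)
     float_model(data)
     q_model(data)
     act_dict = ns.get_matching_activations(float_model, q_model)
     print(list(act_dict.keys()))  # module names whose activations were matched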
torch.ao.ns._numeric_suite.compare_model_outputs(float_model, q_model, *data, logger_cls=<class 'torch.ao.ns._numeric_suite.OutputLogger'>, allow_list=None)
Compare output activations between float and quantized models at
corresponding locations for the same input. Return a dict with key
corresponding to quantized module names and each entry being a
dictionary with two keys 'float' and 'quantized', containing the
activations of quantized model and float model at matching
  locations. This dict can be used to compare and compute the
propagation quantization error.
Example usage:
act_compare_dict = compare_model_outputs(float_model, qmodel, data)
for key in act_compare_dict:
print(
key,
compute_error(
act_compare_dict[key]['float'],
act_compare_dict[key]['quantized'].dequantize()
)
)
Parameters:
* float_model (Module) -- float model used to generate the
q_model
* **q_model** (*Module*) -- model quantized from float_model
* **data** -- input data used to run the prepared float_model
and q_model
* **logger_cls** -- type of logger to be attached to
float_module and q_module
* **allow_list** -- list of module types to attach logger
Returns:
dict with key corresponding to quantized module names and each
entry being a dictionary with two keys 'float' and 'quantized',
containing the matching float and quantized activations
  Return type:
     act_compare_dict
torch.utils.model_zoo
Moved to torch.hub.
torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)
Loads the Torch serialized object at the given URL.
If downloaded file is a zip file, it will be automatically
decompressed.
If the object is already present in model_dir, it's deserialized
  and returned. The default value of "model_dir" is
  "<hub_dir>/checkpoints" where "hub_dir" is the directory returned
  by "get_dir()".
Parameters:
* url (str) -- URL of the object to download
* **model_dir** (*str**, **optional*) -- directory in which to
save the object
* **map_location** (*optional*) -- a function or a dict
specifying how to remap storage locations (see torch.load)
* **progress** (*bool**, **optional*) -- whether or not to
display a progress bar to stderr. Default: True
     * **check_hash** (*bool**, **optional*) -- If True, the filename
       part of the URL should follow the naming convention
       "filename-<sha256>.ext" where "<sha256>" is the first eight or
       more digits of the SHA256 hash of the contents of the file.
       The hash is used to ensure unique names and to verify the
       contents of the file. Default: False

     * **file_name** (*str**, **optional*) -- name for the downloaded
       file. Filename from "url" will be used if not set.
Return type:
Dict[str, Any]
-[ Example ]-
state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')
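  A hedged variant of the same call, forcing the checkpoint into a
  chosen directory, remapping storages to CPU, and verifying the hash
  embedded in the file name (the directory path is a placeholder):

     import torch

     state_dict = torch.hub.load_state_dict_from_url(
         'https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth',
         model_dir='/tmp/checkpoints',  # placeholder directory
         map_location='cpu',            # remap storages to CPU on load
         check_hash=True,               # "5c106cde" must match the SHA256 prefix
     )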
Warning:
There are known non-determinism issues for RNN functions on some
  versions of cuDNN and CUDA. You can enforce deterministic behavior
  by setting the following environment variables:

  On CUDA 10.1, set environment variable "CUDA_LAUNCH_BLOCKING=1".
  This may affect performance.

  On CUDA 10.2 or later, set environment variable (note the leading
  colon symbol) "CUBLAS_WORKSPACE_CONFIG=:16:8" or
  "CUBLAS_WORKSPACE_CONFIG=:4096:2".

  See the cuDNN 8 Release Notes for more information.
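  For illustration only, the same configuration can be applied from
  Python before any CUDA work runs; torch.use_deterministic_algorithms
  is a related but separate knob and is an assumption here, not part
  of the warning above:

     import os

     # Must be set before the first cuBLAS call in the process (CUDA 10.2+).
     os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:2"   # or ":16:8"

     import torch

     # Optionally ask PyTorch to error out on nondeterministic ops.
     torch.use_deterministic_algorithms(True)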
PyTorch documentation
PyTorch is an optimized tensor library for deep learning using GPUs
and CPUs.
Features described in this documentation are classified by release
status:
Stable: These features will be maintained long-term and there
should generally be no major performance limitations or gaps in
documentation. We also expect to maintain backwards compatibility
(although breaking changes can happen and notice will be given one
release ahead of time).
Beta: These features are tagged as Beta because the API may
change based on user feedback, because the performance needs to
improve, or because coverage across operators is not yet complete.
For Beta features, we are committing to seeing the feature through
to the Stable classification. We are not, however, committing to
backwards compatibility.
Prototype: These features are typically not available as part of
binary distributions like PyPI or Conda, except sometimes behind
run-time flags, and are at an early stage for feedback and testing.
Community
^^^^^^^^^
PyTorch Governance | Build + CI
PyTorch Contribution Guide
PyTorch Design Philosophy
PyTorch Governance | Mechanics
PyTorch Governance | Maintainers
Developer Notes
^^^^^^^^^^^^^^^
CUDA Automatic Mixed Precision examples
Autograd mechanics
Broadcasting semantics
CPU threading and TorchScript inference
CUDA semantics
Distributed Data Parallel
Extending PyTorch
Extending torch.func with autograd.Function
Frequently Asked Questions
Gradcheck mechanics
HIP (ROCm) semantics
Features for large-scale deployments
Modules
MPS backend
Multiprocessing best practices
Numerical accuracy
Reproducibility
Serialization semantics
Windows FAQ
Language Bindings
^^^^^^^^^^^^^^^^^
C++
Javadoc
torch::deploy
Python API
^^^^^^^^^^
torch
Tensors
Generators
Random sampling
Serialization
Parallelism
Locally disabling gradient computation
Math operations
Utilities
Symbolic Numbers
Optimizations
Operator Tags
Engine Configuration
torch.nn
Parameter
UninitializedParameter
UninitializedBuffer
Containers
Convolution Layers
Pooling layers
Padding Layers
Non-linear Activations (weighted sum, nonlinearity)
Non-linear Activations (other)
Normalization Layers
Recurrent Layers
Transformer Layers
Linear Layers
Dropout Layers
Sparse Layers
Distance Functions
Loss Functions
Vision Layers
Shuffle Layers
DataParallel Layers (multi-GPU, distributed)
Utilities
Quantized Functions
Lazy Modules Initialization
torch.nn.functional
Convolution functions
Pooling functions
Non-linear activation functions
Linear functions
Dropout functions
Sparse functions
Distance functions
Loss functions
Vision functions
DataParallel functions (multi-GPU, distributed)
torch.Tensor
Data types
Initializing and basic operations
Tensor class reference
Tensor Attributes
torch.dtype
torch.device
torch.layout
torch.memory_format
Tensor Views
torch.amp
Autocasting
Gradient Scaling
Autocast Op Reference
torch.autograd
torch.autograd.backward
torch.autograd.grad
Forward-mode Automatic Differentiation
Functional higher level API
Locally disabling gradient computation
Default gradient layouts
In-place operations on Tensors
Variable (deprecated)
Tensor autograd functions
Function
Context method mixins
Numerical gradient checking
Profiler
Anomaly detection
Autograd graph
torch.library
torch.cuda
StreamContext
torch.cuda.can_device_access_peer
torch.cuda.current_blas_handle
torch.cuda.current_device
torch.cuda.current_stream
torch.cuda.default_stream
device
torch.cuda.device_count
device_of
torch.cuda.get_arch_list
torch.cuda.get_device_capability
torch.cuda.get_device_name
torch.cuda.get_device_properties
torch.cuda.get_gencode_flags
torch.cuda.get_sync_debug_mode
torch.cuda.init
torch.cuda.ipc_collect
torch.cuda.is_available
torch.cuda.is_initialized
torch.cuda.memory_usage
torch.cuda.set_device
torch.cuda.set_stream
torch.cuda.set_sync_debug_mode
torch.cuda.stream
torch.cuda.synchronize
torch.cuda.utilization
torch.cuda.OutOfMemoryError
Random Number Generator
Communication collectives
Streams and events
Graphs (beta)
Memory management
NVIDIA Tools Extension (NVTX)
Jiterator (beta)
Stream Sanitizer (prototype)
torch.backends
torch.backends.cuda
torch.backends.cudnn
torch.backends.mps
torch.backends.mkl
torch.backends.mkldnn
torch.backends.openmp
torch.backends.opt_einsum
torch.backends.xeon
torch.distributed
Backends
Basics
Initialization
Post-Initialization
Distributed Key-Value Store
Groups
Point-to-point communication
Synchronous and asynchronous collective operations
Collective functions
Profiling Collective Communication
Multi-GPU collective functions
Third-party backends
Launch utility
Spawn utility
Debugging "torch.distributed" applications
Logging
torch.distributed.algorithms.join
torch.distributed.elastic
Get Started
Documentation
torch.distributed.fsdp
torch.distributed.optim
torch.distributed.tensor.parallel
torch.distributed.checkpoint
torch.distributions
Score function
Pathwise derivative
Distribution
ExponentialFamily
Bernoulli
Beta
Binomial
Categorical
Cauchy
Chi2
ContinuousBernoulli
Dirichlet
Exponential
FisherSnedecor
Gamma
Geometric
Gumbel
HalfCauchy
HalfNormal
Independent
Kumaraswamy
LKJCholesky
Laplace
LogNormal
LowRankMultivariateNormal
MixtureSameFamily
Multinomial
MultivariateNormal
NegativeBinomial
Normal
OneHotCategorical
Pareto
Poisson
RelaxedBernoulli
LogitRelaxedBernoulli
RelaxedOneHotCategorical
StudentT
TransformedDistribution
Uniform
VonMises
Weibull
Wishart
KL Divergence
Transforms
Constraints
Constraint Registry
torch._dynamo
torch.fft
Fast Fourier Transforms
Helper Functions
torch.func
What are composable function transforms?
Why composable function transforms?
Read More
torch.futures
torch.fx
Overview
Writing Transformations
Debugging
Limitations of Symbolic Tracing
API Reference
torch.hub
Publishing models
Loading models from Hub
torch.jit
TorchScript Language Reference
Creating TorchScript Code
Mixing Tracing and Scripting
TorchScript Language
Built-in Functions and Modules
Debugging
Frequently Asked Questions
Known Issues
Appendix
torch.linalg
Matrix Properties
Decompositions
Solvers
Inverses
Matrix Functions
Matrix Products
Tensor Operations
Misc
Experimental Functions
torch.monitor
API Reference
torch.signal
torch.signal.windows
torch.special
Functions
torch.overrides
Functions
torch.package
Tutorials
How do I...
Explanation
API Reference
torch.profiler
Overview
API Reference
Intel Instrumentation and Tracing Technology APIs
torch.nn.init
torch.onnx
Example: AlexNet from PyTorch to ONNX
Tracing vs Scripting
Avoiding Pitfalls
Limitations
Adding support for operators
Frequently Asked Questions
Contributing / developing
Functions
Classes
torch.onnx diagnostics
Overview
Diagnostic Rules
API Reference
torch.optim
How to use an optimizer
Base class
Algorithms
How to adjust learning rate
Stochastic Weight Averaging
Complex Numbers
Creating Complex Tensors
Transition from the old representation
Accessing real and imag
Angle and abs
Linear Algebra
Serialization
Autograd
DDP Communication Hooks
How to Use a Communication Hook?
What Does a Communication Hook Operate On?
Default Communication Hooks
PowerSGD Communication Hook
Debugging Communication Hooks
Checkpointing of Communication Hooks
Acknowledgements
Pipeline Parallelism
Model Parallelism using multiple GPUs
Pipelined Execution
Pipe APIs in PyTorch
Tutorials
Acknowledgements
Quantization
Introduction to Quantization
Quantization API Summary
Quantization Stack
Quantization Support Matrix
Quantization API Reference
Quantization Backend Configuration
Quantization Accuracy Debugging
Quantization Customizations
Best Practices
Frequently Asked Questions
Common Errors
Distributed RPC Framework
Basics
RPC
RRef
RemoteModule
Distributed Autograd Framework
Distributed Optimizer
Design Notes
Tutorials
torch.random
torch.masked
Introduction
Supported Operators
torch.nested
Introduction
Construction
size
unbind
Nested tensor constructor and conversion functions
Supported operations
torch.sparse
Why and when to use sparsity
Functionality overview
Operator overview
Sparse COO tensors
Sparse Compressed Tensors
Supported operations
torch.Storage
torch.testing
torch.utils.benchmark
torch.utils.bottleneck
torch.utils.checkpoint
torch.utils.cpp_extension
torch.utils.data
Dataset Types
Data Loading Order and "Sampler"
Loading Batched and Non-Batched Data
Single- and Multi-process Data Loading
Memory Pinning
torch.utils.jit
torch.utils.dlpack
torch.utils.mobile_optimizer
torch.utils.model_zoo
torch.utils.tensorboard
Type Info
torch.finfo
torch.iinfo
Named Tensors
Creating named tensors
Named dimensions
Name propagation semantics
Explicit alignment by names
Manipulating dimensions
Autograd support
Currently supported operations and subsystems
Named tensor API reference
Named Tensors operator coverage
Keeps input names
Removes dimensions
Unifies names from inputs
Permutes dimensions
Contracts away dims
Factory functions
out function and in-place variants
torch.__config__
Libraries
^^^^^^^^^
torchaudio
TorchData
TorchRec
TorchServe
torchtext
torchvision
PyTorch on XLA Devices
Indices and tables
* Index
Module Index
Events
Module contains events processing mechanisms that are integrated with
the standard python logging.
Example of usage:
from torch.distributed.elastic import events
event = events.Event(name="test_event", source=events.EventSource.WORKER, metadata={...})
events.get_logging_handler(destination="console").info(event)
API Methods
torch.distributed.elastic.events.record(event, destination='null')
torch.distributed.elastic.events.get_logging_handler(destination='null')
Return type:
Handler
Event Objects
class torch.distributed.elastic.events.api.Event(name, source, timestamp=0, metadata=<factory>)
The class represents the generic event that occurs during the
torchelastic job execution. The event can be any kind of meaningful
action.
Parameters:
* name (str) -- event name.
* **source** (*EventSource*) -- the event producer, e.g. agent
or worker
* **timestamp** (*int*) -- timestamp in milliseconds when event
occured.
* **metadata** (*Dict**[**str**, **Optional**[**Union**[**str**,
**int**, **float**, **bool**]**]**]*) -- additional data that
is associated with the event.
class torch.distributed.elastic.events.api.EventSource(value)
Known identifiers of the event producers.
torch.distributed.elastic.events.api.EventMetadataValue
   alias of "Optional"["Union"["str", "int", "float", "bool"]]
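A hedged end-to-end sketch combining the pieces above (the event name
and metadata values are made-up examples):

   from torch.distributed.elastic import events

   event = events.Event(
       name="checkpoint_saved",
       source=events.EventSource.WORKER,
       metadata={"run_id": "job-42", "global_rank": 0},
   )
   # Route to the console handler; destination="null" (the default) drops it.
   events.record(event, destination="console")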
Metrics
Metrics API
Overview:
The metrics API in torchelastic is used to publish telemetry metrics.
It is designed to be used by torchelastic's internal modules to
publish metrics for the end user with the goal of increasing
visibility and helping with debugging. However you may use the same
API in your jobs to publish metrics to the same metrics "sink".
A "metric" can be thought of as timeseries data and is uniquely
identified by the string-valued tuple "(metric_group, metric_name)".
torchelastic makes no assumptions about what a "metric_group" is and
what relationship it has with "metric_name". It is totally up to the
user to use these two fields to uniquely identify a metric.
Note:
The metric group "torchelastic" is reserved by torchelastic for
platform level metrics that it produces. For instance torchelastic
may output the latency (in milliseconds) of a re-rendezvous
operation from the agent as "(torchelastic,
  agent.rendezvous.duration.ms)"
A sensible way to use metric groups is to map them to a stage or
module in your job. You may also encode certain high level properties
the job such as the region or stage (dev vs prod).
Publish Metrics:
Using torchelastic's metrics API is similar to using python's logging
framework. You first have to configure a metrics handler before trying
to add metric data.
The example below measures the latency for the "calculate()" function.
import time
import torch.distributed.elastic.metrics as metrics
   # makes all metrics other than the ones from "my_module" go to /dev/null
   metrics.configure(metrics.NullMetricHandler())
   metrics.configure(metrics.ConsoleMetricHandler(), "my_module")
def my_method():
start = time.time()
calculate()
end = time.time()
metrics.put_metric("calculate_latency", int(end-start), "my_module")
You may also use the torch.distributed.elastic.metrics.prof decorator
to conveniently and succinctly profile functions
# -- in module examples.foobar --
import torch.distributed.elastic.metrics as metrics
   metrics.configure(metrics.ConsoleMetricHandler(), "foobar")
   metrics.configure(metrics.ConsoleMetricHandler(), "Bar")
@metrics.prof
def foo():
pass
class Bar():
@metrics.prof
def baz():
pass
"@metrics.prof" will publish the following metrics
   <leaf_module or classname>.success - 1 if the function finished successfully
   <leaf_module or classname>.failure - 1 if the function threw an exception
   <leaf_module or classname>.duration.ms - function duration in milliseconds
Configuring Metrics Handler:
torch.distributed.elastic.metrics.MetricHandler is responsible for
emitting the added metric values to a particular destination. Metric
groups can be configured with different metric handlers.
By default torchelastic emits all metrics to "/dev/null". By adding
the following configuration metrics, "torchelastic" and "my_app"
metric groups will be printed out to console.
import torch.distributed.elastic.metrics as metrics
metrics.configure(metrics.ConsoleMetricHandler(), group = "torchelastic")
metrics.configure(metrics.ConsoleMetricHandler(), group = "my_app")
Writing a Custom Metric Handler:
If you want your metrics to be emitted to a custom location, implement
the torch.distributed.elastic.metrics.MetricHandler interface and
configure your job to use your custom metric handler.
Below is a toy example that prints the metrics to "stdout"
import torch.distributed.elastic.metrics as metrics
class StdoutMetricHandler(metrics.MetricHandler):
def emit(self, metric_data):
ts = metric_data.timestamp
group = metric_data.group_name
name = metric_data.name
value = metric_data.value
print(f"[{ts}][{group}]: {name}={value}")
   metrics.configure(StdoutMetricHandler(), group="my_app")
Now all metrics in the group "my_app" will be printed to stdout as:
   [1574213883.4182858][my_app]: my_metric=<value>
   [1574213940.5237644][my_app]: my_metric=<value>
Metric Handlers
Below are the metric handlers that come included with torchelastic.
class torch.distributed.elastic.metrics.api.MetricHandler
class torch.distributed.elastic.metrics.api.ConsoleMetricHandler
class torch.distributed.elastic.metrics.api.NullMetricHandler
Methods
torch.distributed.elastic.metrics.configure(handler, group=None)
torch.distributed.elastic.metrics.prof(fn=None, group='torchelastic')
@profile decorator publishes duration.ms, count, success, failure
metrics for the function that it decorates. The metric name
defaults to the qualified name ("class_name.def_name") of the
function. If the function does not belong to a class, it uses the
leaf module name instead.
Usage
@metrics.prof
def x():
pass
   @metrics.prof(group="agent")
def y():
pass
torch.distributed.elastic.metrics.put_metric(metric_name, metric_value, metric_group='torchelastic')
Publishes a metric data point.
Usage
put_metric("metric_name", 1)
put_metric("metric_name", 1, "metric_group_name")
Quickstart
To launch a fault-tolerant job, run the following on all nodes.
torchrun
--nnodes=NUM_NODES
--nproc_per_node=TRAINERS_PER_NODE
--max_restarts=NUM_ALLOWED_FAILURES
--rdzv_id=JOB_ID
--rdzv_backend=c10d
--rdzv_endpoint=HOST_NODE_ADDR
YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
To launch an elastic job, run the following on at least "MIN_SIZE"
nodes and at most "MAX_SIZE" nodes.
torchrun
--nnodes=MIN_SIZE:MAX_SIZE
--nproc_per_node=TRAINERS_PER_NODE
--max_restarts=NUM_ALLOWED_FAILURES_OR_MEMBERSHIP_CHANGES
--rdzv_id=JOB_ID
--rdzv_backend=c10d
--rdzv_endpoint=HOST_NODE_ADDR
YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
Note:
TorchElastic models failures as membership changes. When a node
fails, this is treated as a "scale down" event. When the failed node
  is replaced by the scheduler, it is a "scale up" event. Hence for
both fault tolerant and elastic jobs, "--max_restarts" is used to
control the total number of restarts before giving up, regardless of
whether the restart was caused due to a failure or a scaling event.
"HOST_NODE_ADDR", in form [:] (e.g.
node1.example.com:29400), specifies the node and the port on which the
C10d rendezvous backend should be instantiated and hosted. It can be
any node in your training cluster, but ideally you should pick a node
that has a high bandwidth.
Note:
If no port number is specified "HOST_NODE_ADDR" defaults to 29400.
Note:
The "--standalone" option can be passed to launch a single node job
  with a sidecar rendezvous backend. You don't have to pass
  "--rdzv_id", "--rdzv_endpoint", and "--rdzv_backend" when the
  "--standalone" option is used.
Note:
Learn more about writing your distributed training script here.
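As a rough illustration only (not part of this page),
YOUR_TRAINING_SCRIPT.py mostly needs to read the environment variables
that torchrun exports and initialize the default process group; the
sketch below assumes a CPU-friendly "gloo" backend:

   import os

   import torch.distributed as dist

   def main():
       # torchrun exports RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR and
       # MASTER_PORT, so init_process_group() can rely on "env://" init.
       dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
       local_rank = int(os.environ["LOCAL_RANK"])
       print(f"rank {dist.get_rank()}/{dist.get_world_size()} (local rank {local_rank})")
       # ... build the model/optimizer and run the training loop here ...
       dist.destroy_process_group()

   if __name__ == "__main__":
       main()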
If "torchrun" does not meet your requirements you may use our APIs
directly for more powerful customization. Start by taking a look at
the elastic agent API.
Examples
Please refer to the elastic/examples README.
Customization
This section describes how to customize TorchElastic to fit your
needs.
Launcher
The launcher program that ships with TorchElastic should be sufficient
for most use-cases (see torchrun (Elastic Launch)). You can implement
a custom launcher by programmatically creating an agent and passing it
specs for your workers as shown below.
# my_launcher.py
   if __name__ == "__main__":
args = parse_args(sys.argv[1:])
rdzv_handler = RendezvousHandler(...)
spec = WorkerSpec(
local_world_size=args.nproc_per_node,
fn=trainer_entrypoint_fn,
           args=(*args.fn_args,),  # arguments for trainer_entrypoint_fn
rdzv_handler=rdzv_handler,
max_restarts=args.max_restarts,
monitor_interval=args.monitor_interval,
)
agent = LocalElasticAgent(spec, start_method="spawn")
try:
run_result = agent.run()
if run_result.is_failed():
print(f"worker 0 failed with: run_result.failures[0]")
else:
| https://pytorch.org/docs/stable/elastic/customization.html | pytorch docs |
else:
print(f"worker 0 return value is: run_result.return_values[0]")
except Exception ex:
# handle exception
Rendezvous Handler
To implement your own rendezvous, extend
"torch.distributed.elastic.rendezvous.RendezvousHandler" and implement
its methods.
Warning:
Rendezvous handlers are tricky to implement. Before you begin make
sure you completely understand the properties of rendezvous. Please
refer to Rendezvous for more information.
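A hedged skeleton of such a handler is sketched below; the abstract
method names reflect my reading of the "RendezvousHandler" interface
and may differ between releases, so treat them as assumptions and
check the Rendezvous docs. The constructor arguments (e.g. the params
shown in the next snippet) are entirely up to your implementation.

   from typing import Tuple

   from torch.distributed import Store
   from torch.distributed.elastic.rendezvous import RendezvousHandler

   class MyRendezvousHandler(RendezvousHandler):
       """Skeleton only -- every method needs real coordination logic."""

       def get_backend(self) -> str:
           return "my_backend"

       def next_rendezvous(self) -> Tuple[Store, int, int]:
           # Block until a new worker group is formed, then return
           # (store, rank, world_size) for this node.
           raise NotImplementedError

       def is_closed(self) -> bool:
           return False

       def set_closed(self) -> None:
           pass

       def num_nodes_waiting(self) -> int:
           return 0

       def get_run_id(self) -> str:
           return "my_run_id"

       def shutdown(self) -> bool:
           return True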
Once implemented you can pass your custom rendezvous handler to the
worker spec when creating the agent.
spec = WorkerSpec(
rdzv_handler=MyRendezvousHandler(params),
...
)
elastic_agent = LocalElasticAgent(spec, start_method=start_method)
elastic_agent.run(spec.role)
Metric Handler
TorchElastic emits platform level metrics (see Metrics). By default
metrics are emitted to /dev/null so you will not see them. To have
the metrics pushed to a metric handling service in your
infrastructure, implement a
torch.distributed.elastic.metrics.MetricHandler and configure it
in your custom launcher.
# my_launcher.py
import torch.distributed.elastic.metrics as metrics
class MyMetricHandler(metrics.MetricHandler):
def emit(self, metric_data: metrics.MetricData):
# push metric_data to your metric sink
def main():
metrics.configure(MyMetricHandler())
spec = WorkerSpec(...)
agent = LocalElasticAgent(spec)
agent.run()
Events Handler
TorchElastic supports events recording (see Events). The events module
defines API that allows you to record events and implement custom
EventHandler. EventHandler is used for publishing events produced
during torchelastic execution to different sources, e.g. AWS
CloudWatch. By default it uses
torch.distributed.elastic.events.NullEventHandler that ignores
events. To configure custom events handler you need to implement
torch.distributed.elastic.events.EventHandler interface and
configure it in your custom launcher.
# my_launcher.py
import torch.distributed.elastic.events as events
class MyEventHandler(events.EventHandler):
def record(self, event: events.Event):
# process event
def main():
events.configure(MyEventHandler())
spec = WorkerSpec(...)
agent = LocalElasticAgent(spec)
agent.run()
Multiprocessing
Library that launches and manages "n" copies of worker subprocesses
either specified by a function or a binary.
For functions, it uses "torch.multiprocessing" (and therefore python
"multiprocessing") to spawn/fork worker processes. For binaries it
uses python "subprocess.Popen" to create worker processes.
Usage 1: Launching two trainers as a function
from torch.distributed.elastic.multiprocessing import Std, start_processes
def trainer(a, b, c):
pass # train
# runs two trainers
# LOCAL_RANK=0 trainer(1,2,3)
# LOCAL_RANK=1 trainer(4,5,6)
ctx = start_processes(
name="trainer",
entrypoint=trainer,
args={0: (1,2,3), 1: (4,5,6)},
envs={0: {"LOCAL_RANK": 0}, 1: {"LOCAL_RANK": 1}},
log_dir="/tmp/foobar",
redirects=Std.ALL, # write all worker stdout/stderr to a log file
tee={0: Std.ERR}, # tee only local rank 0's stderr to console
   )
# waits for all copies of trainer to finish
ctx.wait()
Usage 2: Launching 2 echo workers as a binary
# same as invoking
# echo hello
# echo world > stdout.log
ctx = start_processes(
name="echo"
entrypoint="echo",
log_dir="/tmp/foobar",
args={0: "hello", 1: "world"},
redirects={1: Std.OUT},
)
Just like "torch.multiprocessing", the return value of the function
"start_processes()" is a process context ("api.PContext"). If a
function was launched, a "api.MultiprocessContext" is returned and if
a binary was launched a "api.SubprocessContext" is returned. Both are
specific implementations of the parent "api.PContext" class.
Starting Multiple Workers
torch.distributed.elastic.multiprocessing.start_processes(name, entrypoint, args, envs, log_dir, start_method='spawn', redirects=Std.NONE, tee=Std.NONE)
Starts "n" copies of "entrypoint" processes with the provided | https://pytorch.org/docs/stable/elastic/multiprocessing.html | pytorch docs |
options. "entrypoint" is either a "Callable" (function) or a "str"
(binary). The number of copies is determined by the number of
entries for "args" and "envs" arguments, which need to have the
same key set.
"args" and "env" parameters are the arguments and environment
variables to pass down to the entrypoint mapped by the replica
index (local rank). All local ranks must be accounted for. That is,
the keyset should be "{0,1,...,(nprocs-1)}".
Note:
When the "entrypoint" is a binary ("str"), "args" can only be
strings. If any other type is given, then it is casted to a
string representation (e.g. "str(arg1)"). Furthermore, a binary
failure will only write an "error.json" error file if the main
function is annotated with
"torch.distributed.elastic.multiprocessing.errors.record". For
function launches, this is done by default and there is no need
to manually annotate with the "@record" annotation.
"redirects" and "tee" are bitmasks specifying which std stream(s)
to redirect to a log file in the "log_dir". Valid mask values are
defined in "Std". To redirect/tee only certain local ranks, pass
"redirects" as a map with the key as the local rank to specify the
redirect behavior for. Any missing local ranks will default to
"Std.NONE".
"tee" acts like the unix "tee" command in that it redirects +
prints to console. To avoid worker stdout/stderr from printing to
console, use the "redirects" parameter.
For each process, the "log_dir" will contain:
"{local_rank}/error.json": if the process failed, a file with
the error info
"{local_rank}/stdout.json": if "redirect & STDOUT == STDOUT"
"{local_rank}/stderr.json": if "redirect & STDERR == STDERR"
Note:
It is expected that the "log_dir" exists, is empty, and is a
directory.
Example:
log_dir = "/tmp/test"
     # ok; two copies of foo: foo("bar0"), foo("bar1")
     start_processes(
        name="trainer",
        entrypoint=foo,
        args={0: ("bar0",), 1: ("bar1",)},
        envs={0: {}, 1: {}},
        log_dir=log_dir
     )

     # invalid; envs missing for local rank 1
     start_processes(
        name="trainer",
        entrypoint=foo,
        args={0: ("bar0",), 1: ("bar1",)},
        envs={0: {}},
        log_dir=log_dir
     )

     # ok; two copies of /usr/bin/touch: touch file1, touch file2
     start_processes(
        name="trainer",
        entrypoint="/usr/bin/touch",
        args={0: ("file1",), 1: ("file2",)},
        envs={0: {}, 1: {}},
        log_dir=log_dir
     )

     # caution; arguments cast to string, runs:
     # echo "1" "2" "3" and echo "[1, 2, 3]"
     start_processes(
        name="trainer",
        entrypoint="/usr/bin/echo",
        args={0: (1, 2, 3), 1: ([1, 2, 3],)},
        envs={0: {}, 1: {}},
        log_dir=log_dir
     )
Parameters:
     * name (str) -- a human readable short name that describes
what the processes are (used as header when tee'ing
stdout/stderr outputs)
* **entrypoint** (*Union**[**Callable**, **str**]*) -- either a
"Callable" (function) or "cmd" (binary)
* **args** (*Dict**[**int**, **Tuple**]*) -- arguments to each
replica
* **envs** (*Dict**[**int**, **Dict**[**str**, **str**]**]*) --
env vars to each replica
* **log_dir** (*str*) -- directory used to write log files
* **start_method** (*str*) -- multiprocessing start method
(spawn, fork, forkserver) ignored for binaries
* **redirects** (*Union**[**Std**, **Dict**[**int**,
**Std**]**]*) -- which std streams to redirect to a log file
* **tee** (*Union**[**Std**, **Dict**[**int**, **Std**]**]*) --
which std streams to redirect + print to console
Return type:
PContext
Process Context
class torch.distributed.elastic.multiprocessing.api.PContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)
The base class that standardizes operations over a set of processes
that are launched via different mechanisms. The name "PContext" is
intentional to disambiguate with
"torch.multiprocessing.ProcessContext".
Warning:
stdouts and stderrs should ALWAYS be a superset of tee_stdouts
and tee_stderrs (respectively) this is b/c tee is implemented as
a redirect + tail -f <stdout/stderr.log>
class torch.distributed.elastic.multiprocessing.api.MultiprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files, start_method)
"PContext" holding worker processes invoked as a function.
class torch.distributed.elastic.multiprocessing.api.SubprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)
"PContext" holding worker processes invoked as a binary.
class torch.distributed.elastic.multiprocessing.api.RunProcsResult(return_values=<factory>, failures=<factory>, stdouts=<factory>, stderrs=<factory>)
Results of a completed run of processes started with
"start_processes()". Returned by "PContext".
Note the following:
All fields are mapped by local rank
"return_values" - only populated for functions (not the
binaries).
"stdouts" - path to stdout.log (empty string if no redirect)
"stderrs" - path to stderr.log (empty string if no redirect)
Elastic Agent
Server
The elastic agent is the control plane of torchelastic. It is a
process that launches and manages underlying worker processes. The
agent is responsible for:
Working with distributed torch: the workers are started with all
the necessary information to successfully and trivially call
"torch.distributed.init_process_group()".
Fault tolerance: monitors workers and upon detecting worker
failures or unhealthiness, tears down all workers and restarts
everyone.
Elasticity: Reacts to membership changes and restarts workers with
the new members.
The simplest agents are deployed per node and work with local
processes. A more advanced agent can launch and manage workers
remotely. Agents can be completely decentralized, making decisions
based on the workers they manage, or they can be coordinated,
communicating with other agents (that manage workers in the same job)
to make a collective decision.
Below is a diagram of an agent that manages a local group of workers.
[image]
Concepts
This section describes the high-level classes and concepts that are
relevant to understanding the role of the "agent" in torchelastic.
class torch.distributed.elastic.agent.server.ElasticAgent
Agent process responsible for managing one or more worker
processes. The worker processes are assumed to be regular
distributed PyTorch scripts. When the worker process is created by
the agent, the agent provides the necessary information for the
worker processes to properly initialize a torch process group.
The exact deployment topology and ratio of agent-to-worker is
dependent on the specific implementation of the agent and the
user's job placement preferences. For instance, to run a
distributed training job on GPU with 8 trainers (one per GPU) one
can:
Use 8 x single GPU instances, place an agent per instance,
managing 1 worker per agent.
Use 4 x double GPU instances, place an agent per instance,
managing 2 workers per agent.
Use 2 x quad GPU instances, place an agent per instance,
managing 4 workers per agent.
Use 1 x 8 GPU instance, place an agent per instance, managing 8
workers per agent.
Usage
group_result = agent.run()
if group_result.is_failed():
# workers failed
failure = group_result.failures[0]
log.exception(f"worker 0 failed with exit code : {failure.exit_code}")
else:
return group_result.return_values[0] # return rank 0's results
abstract get_worker_group(role='default')
Returns:
The "WorkerGroup" for the given "role". Note that the worker
group is a mutable object and hence in a multi-
threaded/process environment it may change state.
Implementors are encouraged (but not required) to return a
defensive read-only copy.
Return type:
WorkerGroup
abstract run(role='default')
Runs the agent, retrying the worker group on failures up to
"max_restarts".
Returns:
The result of the execution, containing the return values or
failure details for each worker mapped by the worker's global
rank.
Raises:
        **Exception** -- any other failures NOT related to worker
        process
Return type:
*RunResult*
class torch.distributed.elastic.agent.server.WorkerSpec(role, local_world_size, rdzv_handler, fn=None, entrypoint=None, args=(), max_restarts=3, monitor_interval=30.0, master_port=None, master_addr=None, local_addr=None, redirects=Std.NONE, tee=Std.NONE)
Contains blueprint information about a particular type of worker.
For a given role, there must only exist a single worker spec.
Worker spec is expected to be homogenous across all nodes
(machine), that is each node runs the same number of workers for a
   particular spec.
Parameters:
* role (str) -- user-defined role for the workers with
this spec
* **local_world_size** (*int*) -- number local workers to run
* **fn** (*Optional**[**Callable**]*) -- (deprecated use
entrypoint instead)
* **entrypoint** (*Optional**[**Union**[**Callable**,
**str**]**]*) -- worker function or command
* **args** (*Tuple*) -- arguments to pass to "entrypoint"
* **rdzv_handler** (*RendezvousHandler*) -- handles rdzv for
this set of workers
* **max_restarts** (*int*) -- number of max retries for the
workers
* **monitor_interval** (*float*) -- monitor status of workers
every "n" seconds
* **master_port** (*Optional**[**int**]*) -- fixed port to run
       the c10d store on rank 0; if not specified then a random free
       port will be chosen

     * **master_addr** (*Optional**[**str**]*) -- fixed master_addr
       to run the c10d store on rank 0; if not specified then the
       hostname of agent rank 0 will be chosen
* **redirects** (*Union**[**Std**, **Dict**[**int**,
**Std**]**]*) -- redirect std streams to a file, selectively
redirect for a particular local rank by passing a map
* **tee** (*Union**[**Std**, **Dict**[**int**, **Std**]**]*) --
tees the specified std stream(s) to console + file,
selectively tee for a particular local rank by passing a map,
takes precedence over "redirects" settings.
get_entrypoint_name()
If the entrypoint is a function (e.g. "Callable") returns its
"__qualname__", else if the entrypoint is a binary (e.g. "str"),
returns the binary name.
class torch.distributed.elastic.agent.server.WorkerState(value)
State of the "WorkerGroup". Workers in a worker group change state
as a unit. If a single worker in a worker group fails the entire
   set is considered failed:
UNKNOWN - agent lost track of worker group state, unrecoverable
INIT - worker group object created not yet started
HEALTHY - workers running and healthy
UNHEALTHY - workers running and unhealthy
STOPPED - workers stopped (interrupted) by the agent
SUCCEEDED - workers finished running (exit 0)
FAILED - workers failed to successfully finish (exit !0)
A worker group starts from an initial "INIT" state, then progresses
to "HEALTHY" or "UNHEALTHY" states, and finally reaches a terminal
"SUCCEEDED" or "FAILED" state.
Worker groups can be interrupted and temporarily put into "STOPPED"
state by the agent. Workers in "STOPPED" state are scheduled to be
restarted in the near future by the agent. Some examples of workers
being put into "STOPPED" state are:
Worker group failure|unhealthy observed
Membership change detected
   When an action (start, stop, rdzv, retry, etc.) on the worker group
   fails and results in the action being partially applied to the
   worker group, the state will be "UNKNOWN". Typically this happens on
uncaught/unhandled exceptions during state change events on the
agent. The agent is not expected to recover worker groups in
"UNKNOWN" state and is better off self terminating and allowing the
job manager to retry the node.
static is_running(state)
Returns:
True if the worker state represents workers still running
(e.g. that the process exists but not necessarily healthy).
Return type:
bool
class torch.distributed.elastic.agent.server.Worker(local_rank, global_rank=-1, role_rank=-1, world_size=-1, role_world_size=-1)
Represents a worker instance. Contrast this with "WorkerSpec" that
represents the specifications of a worker. A "Worker" is created
from a "WorkerSpec". A "Worker" is to a "WorkerSpec" as an object
is to a class.
The "id" of the worker is interpreted by the specific | https://pytorch.org/docs/stable/elastic/agent.html | pytorch docs |
implementation of "ElasticAgent". For a local agent, it could be
the "pid (int)" of the worker, for a remote agent it could be
encoded as "host:port (string)".
Parameters:
* id (Any) -- uniquely identifies a worker (interpreted by
the agent)
* **local_rank** (*int*) -- local rank of the worker
* **global_rank** (*int*) -- global rank of the worker
* **role_rank** (*int*) -- rank of the worker across all workers
that have the same role
* **world_size** (*int*) -- number of workers (globally)
* **role_world_size** (*int*) -- number of workers that have the
same role
class torch.distributed.elastic.agent.server.WorkerGroup(spec)
Represents the set of "Worker" instances for the given "WorkerSpec"
managed by "ElasticAgent". Whether the worker group contains cross
instance workers or not depends on the implementation of the agent.
Implementations
Below are the agent implementations provided by torchelastic.
class torch.distributed.elastic.agent.server.local_elastic_agent.LocalElasticAgent(spec, start_method='spawn', exit_barrier_timeout=300, log_dir=None)
An implementation of "torchelastic.agent.server.ElasticAgent" that
handles host-local workers. This agent is deployed per host and is
configured to spawn "n" workers. When using GPUs, "n" maps to the
number of GPUs available on the host.
The local agent does not communicate to other local agents deployed
on other hosts, even if the workers may communicate inter-host. The
worker id is interpreted to be a local process. The agent starts
and stops all worker processes as a single unit.
The worker function and argument passed to the worker function must
be python multiprocessing compatible. To pass multiprocessing data
   structures to the workers you may create the data structure in the
same multiprocessing context as the specified "start_method" and
pass it as a function argument.
The "exit_barrier_timeout" specifies the amount of time (in
seconds) to wait for other agents to finish. This acts as a safety
net to handle cases where workers finish at different times, to
prevent agents from viewing workers that finished early as a scale-
down event. It is strongly advised that the user code deal with
ensuring that workers are terminated in a synchronous manner rather
than relying on the exit_barrier_timeout.
A named pipe based watchdog can be enabled in "LocalElasticAgent"
if an environment variable "TORCHELASTIC_ENABLE_FILE_TIMER" with
value 1 has been defined in the "LocalElasticAgent" process.
Optionally, another environment variable
"TORCHELASTIC_TIMER_FILE" can be set with a unique file name for
the named pipe. If the environment variable
"TORCHELASTIC_TIMER_FILE" is not set, "LocalElasticAgent" will | https://pytorch.org/docs/stable/elastic/agent.html | pytorch docs |
internally create a unique file name and set it to the environment
variable "TORCHELASTIC_TIMER_FILE", and this environment variable
will be propagated to the worker processes to allow them to connect
to the same named pipe that "LocalElasticAgent" uses.
Example launching function
def trainer(args) -> str:
return "do train"
def main():
start_method="spawn"
shared_queue= multiprocessing.get_context(start_method).Queue()
spec = WorkerSpec(
role="trainer",
local_world_size=nproc_per_process,
entrypoint=trainer,
args=("foobar",),
...<OTHER_PARAMS...>)
agent = LocalElasticAgent(spec, start_method)
results = agent.run()
if results.is_failed():
print("trainer failed")
else:
print(f"rank 0 return value: {results.return_values[0]}")
prints -> rank 0 return value: do train
Example launching binary
def main():
spec = WorkerSpec(
role="trainer",
local_world_size=nproc_per_process,
entrypoint="/usr/local/bin/trainer",
args=("--trainer_args", "foobar"),
...<OTHER_PARAMS...>)
agent = LocalElasticAgent(spec)
results = agent.run()
if not results.is_failed():
print("binary launches do not have return values")
Extending the Agent
To extend the agent you can implement "ElasticAgent" directly,
however we recommend you extend "SimpleElasticAgent" instead, which
provides most of the scaffolding and leaves you with a few specific
abstract methods to implement.
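A hedged skeleton of such a subclass is shown below; the four abstract
methods come from the entries that follow, while the bodies are
placeholders that a real agent must replace with actual process
management:

   import signal
   from typing import Any, Dict

   from torch.distributed.elastic.agent.server import SimpleElasticAgent, WorkerGroup
   from torch.distributed.elastic.agent.server.api import RunResult, WorkerState

   class MyElasticAgent(SimpleElasticAgent):
       """Skeleton only -- launch/stop/monitor logic must be supplied."""

       def _start_workers(self, worker_group: WorkerGroup) -> Dict[int, Any]:
           # Launch worker_group.spec.local_world_size workers and return a
           # map of local_rank -> worker id (e.g. a pid or "host:port").
           raise NotImplementedError

       def _stop_workers(self, worker_group: WorkerGroup) -> None:
           # Tear down every worker, tolerating already-dead or stuck ones.
           raise NotImplementedError

       def _monitor_workers(self, worker_group: WorkerGroup) -> RunResult:
           # Inspect the workers and report the group state.
           return RunResult(state=WorkerState.HEALTHY)

       def _shutdown(self, death_sig: signal.Signals = signal.SIGTERM) -> None:
           # Release any resources allocated while running the workers.
           pass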
class torch.distributed.elastic.agent.server.SimpleElasticAgent(spec, exit_barrier_timeout=300)
An "ElasticAgent" that manages workers ("WorkerGroup") for a single | https://pytorch.org/docs/stable/elastic/agent.html | pytorch docs |
"WorkerSpec" (e.g. one particular type of worker role).
_assign_worker_ranks(store, group_rank, group_world_size, spec)
Determines proper ranks for worker processes. The rank
assignment is done according to the following algorithm:
1. Each agent writes its configuration(group_rank,
group_world_size , num_workers) to the common store.
2. Each agent retrieves configuration for all agents and
performs two level sort using role and rank.
3. Determine the global rank: the global rank of the workers for
the current agent is the offset of the infos array up to
group_rank of the agent. The offset is computed as a sum of
local_world_size of all agents that have rank less than the
group_rank. The workers would have the ranks: [offset,
offset+local_world_size)
4. Determine the role rank: The role rank is determined using
the algorithms in the point 3 with the exception that the
offset is done from the first agent that has the same role as
current one and has the minimum group rank.
Return type:
*List*[*Worker*]
_exit_barrier()
Wait for "exit_barrier_timeout" seconds for all agents to finish
executing their local workers (either successfully or not). This
acts as a safety guard against user scripts that terminate at
different times. This barrier keeps the agent process alive
until all workers finish.
_initialize_workers(worker_group)
Starts a fresh set of workers for the worker_group. Essentially
a rendezvous followed by a start_workers.
The caller should first call "_stop_workers()" to stop running
workers prior to calling this method.
Optimistically sets the state of the worker group that just
started as "HEALTHY" and delegates the actual monitoring of
state to "_monitor_workers()" method
   abstract _monitor_workers(worker_group)
Checks on the workers for the "worker_group" and returns the new
state of the worker group.
Return type:
*RunResult*
_rendezvous(worker_group)
Runs rendezvous for the workers specified by worker spec.
Assigns workers a new global rank and world size. Updates the
rendezvous store for the worker group.
_restart_workers(worker_group)
Restarts (stops, rendezvous, starts) all local workers in the
group.
abstract _shutdown(death_sig=Signals.SIGTERM)
Cleans up any resources that were allocated during the agent's
work.
Parameters:
**death_sig** (*Signals*) -- Signal to send to the child
process, SIGTERM is default
abstract _start_workers(worker_group)
Starts "worker_group.spec.local_world_size" number of workers
according to worker spec for the worker group .
Returns a map of "local_rank" to worker "id".
Return type:
Dict[int, Any]
abstract _stop_workers(worker_group)
Stops all workers in the given worker group. Implementors must
deal with workers in all states defined by "WorkerState". That
is, it must gracefully handle stopping non-existent workers,
unhealthy (stuck) workers, etc.
class torch.distributed.elastic.agent.server.api.RunResult(state, return_values=<factory>, failures=<factory>)
Results returned by the worker executions. Run results follow an
"all-or-nothing" policy where the run is successful if and only if
ALL local workers managed by this agent complete successfully.
If the result is successful (e.g. "is_failed() = False") then the
"return_values" field contains the outputs (return values) of the
workers managed by THIS agent mapped by their GLOBAL ranks. That is
"result.return_values[0]" is the return value of global rank 0.
Note:
"return_values" are only meaningful for when the worker
entrypoint is a function. Workers specified as a binary
entrypoint do not canonically have a return value and the
"return_values" field is meaningless and may be empty.
If "is_failed()" returns "True" then the "failures" field contains
the failure information, again, mapped by the GLOBAL rank of the
worker that failed.
The keys in "return_values" and "failures" are mutually exclusive,
that is, a worker's final state can only be one of: succeeded,
failed. Workers intentionally terminated by the agent according to
the agent's restart policy, are not represented in either
"return_values" nor "failures".
Watchdog in the Agent
A named pipe based watchdog can be enabled in "LocalElasticAgent" if
an environment variable "TORCHELASTIC_ENABLE_FILE_TIMER" with value 1
has been defined in the "LocalElasticAgent" process. Optionally,
another environment variable "TORCHELASTIC_TIMER_FILE" can be set
with a unique file name for the named pipe. If the environment
variable "TORCHELASTIC_TIMER_FILE" is not set, "LocalElasticAgent"
will internally create a unique file name and set it to the
environment variable "TORCHELASTIC_TIMER_FILE", and this environment
variable will be propagated to the worker processes to allow them to
connect to the same named pipe that "LocalElasticAgent" uses.
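For illustration only, enabling the watchdog amounts to exporting
these variables in the agent process before it starts its workers (the
FIFO path below is a placeholder):

   import os

   # Enable the named pipe based watchdog in the LocalElasticAgent process.
   os.environ["TORCHELASTIC_ENABLE_FILE_TIMER"] = "1"
   # Optional: choose the FIFO file name yourself; otherwise the agent picks one.
   os.environ["TORCHELASTIC_TIMER_FILE"] = "/tmp/torchelastic_watchdog"  # placeholder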
Expiration Timers
Expiration timers are set up on the same process as the agent and used
from your script to deal with stuck workers. When you go into a code-
block that has the potential to get stuck you can acquire an
expiration timer, which instructs the timer server to kill the process
if it does not release the timer by the self-imposed expiration
deadline.
Usage:
import torchelastic.timer as timer
import torchelastic.agent.server as agent
def main():
start_method = "spawn"
message_queue = mp.get_context(start_method).Queue()
         server = timer.LocalTimerServer(message_queue, max_interval=0.01)
server.start() # non-blocking
spec = WorkerSpec(
fn=trainer_func,
args=(message_queue,),
...<OTHER_PARAMS...>)
agent = agent.LocalElasticAgent(spec, start_method)
agent.run()
def trainer_func(message_queue):
         timer.configure(timer.LocalTimerClient(message_queue))
with timer.expires(after=60): # 60 second expiry
# do some work
In the example above if "trainer_func" takes more than 60 seconds to
complete, then the worker process is killed and the agent retries the
worker group.
Client Methods
torch.distributed.elastic.timer.configure(timer_client)
Configures a timer client. Must be called before using "expires".
torch.distributed.elastic.timer.expires(after, scope=None, client=None)
Acquires a countdown timer that expires in "after" seconds from
now, unless the code-block that it wraps is finished within the
timeframe. When the timer expires, this worker is eligible to be
reaped. The exact meaning of "reaped" depends on the client
implementation. In most cases, reaping means to terminate the
worker process. Note that the worker is NOT guaranteed to be reaped
at exactly "time.now() + after", but rather the worker is
"eligible" for being reaped and the "TimerServer" that the client | https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
talks to will ultimately make the decision when and how to reap the
workers with expired timers.
Usage:
torch.distributed.elastic.timer.configure(LocalTimerClient())
with expires(after=10):
torch.distributed.all_reduce(...)
Server/Client Implementations
Below are the timer server and client pairs that are provided by
torchelastic.
Note:
Timer server and clients always have to be implemented and used in
pairs since there is a messaging protocol between the server and
client.
Below is a pair of timer server and client that is implemented based
on a "multiprocess.Queue".
class torch.distributed.elastic.timer.LocalTimerServer(mp_queue, max_interval=60, daemon=True)
Server that works with "LocalTimerClient". Clients are expected to
be subprocesses to the parent process that is running this server.
Each host in the job is expected to start its own timer server
locally and each server instance manages timers for local workers | https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
(running on processes on the same host).
class torch.distributed.elastic.timer.LocalTimerClient(mp_queue)
Client side of "LocalTimerServer". This client is meant to be used
on the same host that the "LocalTimerServer" is running on and uses
pid to uniquely identify a worker. This is particularly useful in
situations where one spawns a subprocess (trainer) per GPU on a
host with multiple GPU devices.
Below is another pair of timer server and client that is implemented
based on a named pipe.
class torch.distributed.elastic.timer.FileTimerServer(file_path, max_interval=10, daemon=True, log_event=None)
Server that works with "FileTimerClient". Clients are expected to
be running on the same host as the process that is running this
server. Each host in the job is expected to start its own timer
server locally and each server instance manages timers for local
workers (running on processes on the same host).
Parameters: | https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
* file_path (str) -- str, the path of a FIFO special file
to be created.
* **max_interval** (*float*) -- float, max interval in seconds
for each watchdog loop.
* **daemon** (*bool*) -- bool, running the watchdog thread in
daemon mode or not. A daemon thread will not block a process
to stop.
     * **log_event** (*Callable[[str, Optional[FileTimerRequest]], None]*)
       -- an optional callback for logging the events in JSON format.
class torch.distributed.elastic.timer.FileTimerClient(file_path, signal=Signals.SIGKILL)
Client side of "FileTimerServer". This client is meant to be used
on the same host that the "FileTimerServer" is running on and uses
pid to uniquely identify a worker. This client uses a named_pipe to
send timer requests to the "FileTimerServer". This client is a
producer while the "FileTimerServer" is a consumer. Multiple | https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
clients can work with the same "FileTimerServer".
Parameters:
* file_path (str) -- str, the path of a FIFO special file.
"FileTimerServer" must have created it by calling os.mkfifo().
* **signal** -- signal, the signal to use to kill the process.
Using a negative or zero signal will not kill the process.
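For illustration, a minimal sketch (path, intervals, and the work inside
the timed block are assumptions; the file-based server is assumed to
expose the same "start()" call as the queue-based server shown earlier,
and it is documented to create the FIFO itself):

    import torch.distributed.elastic.timer as timer

    fifo_path = "/tmp/elastic_timer_fifo"      # hypothetical FIFO path

    # Agent/server process: owns the FIFO and reaps workers with expired timers.
    server = timer.FileTimerServer(fifo_path, max_interval=5)
    server.start()

    # Worker process on the same host: send timer requests through the FIFO.
    timer.configure(timer.FileTimerClient(fifo_path))
    with timer.expires(after=30):
        ...  # work that might hang; the worker is eligible for reaping after 30s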
Writing a custom timer server/client
To write your own timer server and client extend the
"torch.distributed.elastic.timer.TimerServer" for the server and
"torch.distributed.elastic.timer.TimerClient" for the client. The
"TimerRequest" object is used to pass messages between the server and
client.
class torch.distributed.elastic.timer.TimerRequest(worker_id, scope_id, expiration_time)
Data object representing a countdown timer acquisition and release
that is used between the "TimerClient" and "TimerServer". A
negative "expiration_time" should be interpreted as a "release"
request.
Note: | https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
the type of "worker_id" is implementation specific. It is
whatever the TimerServer and TimerClient implementations have on
to uniquely identify a worker.
class torch.distributed.elastic.timer.TimerServer(request_queue, max_interval, daemon=True)
Entity that monitors active timers and expires them in a timely
fashion. This server is responsible for reaping workers that have
expired timers.
abstract clear_timers(worker_ids)
Clears all timers for the given "worker_ids".
abstract get_expired_timers(deadline)
Returns all expired timers for each worker_id. An expired timer
is a timer for which the expiration_time is less than or equal
to the provided deadline.
Return type:
*Dict*[str, *List*[*TimerRequest*]]
abstract register_timers(timer_requests)
Processes the incoming timer requests and registers them with
      the server. The timer request can either be an acquire-timer or
| https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
release-timer request. Timer requests with a negative
expiration_time should be interpreted as a release-timer
request.
class torch.distributed.elastic.timer.TimerClient
Client library to acquire and release countdown timers by
communicating with the TimerServer.
abstract acquire(scope_id, expiration_time)
Acquires a timer for the worker that holds this client object
given the scope_id and expiration_time. Typically registers the
timer with the TimerServer.
abstract release(scope_id)
Releases the timer for the "scope_id" on the worker this client
represents. After this method is called, the countdown timer on
the scope is no longer in effect.
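For illustration, a minimal sketch of such a pair built only from the
abstract methods documented above (all names are illustrative, the queue
type is whatever the base class expects, and a concrete server may need
additional hooks, e.g. for actually reaping a worker):

    from typing import Any, Dict, List, Set

    from torch.distributed.elastic.timer import TimerClient, TimerRequest, TimerServer

    class MyTimerClient(TimerClient):
        """Sends acquire/release requests for one worker over a shared queue."""

        def __init__(self, request_queue, worker_id: Any) -> None:
            self._queue = request_queue
            self._worker_id = worker_id

        def acquire(self, scope_id: str, expiration_time: float) -> None:
            self._queue.put(TimerRequest(self._worker_id, scope_id, expiration_time))

        def release(self, scope_id: str) -> None:
            # By convention a negative expiration_time means "release".
            self._queue.put(TimerRequest(self._worker_id, scope_id, -1))

    class MyTimerServer(TimerServer):
        """Keeps the latest timer per (worker_id, scope_id) in a dict."""

        def __init__(self, request_queue, max_interval: float) -> None:
            super().__init__(request_queue, max_interval)
            self._timers: Dict[Any, TimerRequest] = {}

        def register_timers(self, timer_requests: List[TimerRequest]) -> None:
            for req in timer_requests:
                key = (req.worker_id, req.scope_id)
                if req.expiration_time < 0:        # release-timer request
                    self._timers.pop(key, None)
                else:                              # acquire-timer request
                    self._timers[key] = req

        def clear_timers(self, worker_ids: Set[Any]) -> None:
            self._timers = {
                k: v for k, v in self._timers.items() if k[0] not in worker_ids
            }

        def get_expired_timers(self, deadline: float) -> Dict[str, List[TimerRequest]]:
            expired: Dict[str, List[TimerRequest]] = {}
            for (worker_id, _), req in self._timers.items():
                if req.expiration_time <= deadline:
                    expired.setdefault(str(worker_id), []).append(req)
            return expired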
| https://pytorch.org/docs/stable/elastic/timer.html | pytorch docs |
Train script
If your train script works with "torch.distributed.launch" it will
continue working with "torchrun" with these differences:
No need to manually pass "RANK", "WORLD_SIZE", "MASTER_ADDR", and
"MASTER_PORT".
"rdzv_backend" and "rdzv_endpoint" can be provided. For most users
this will be set to "c10d" (see rendezvous). The default
"rdzv_backend" creates a non-elastic rendezvous where
"rdzv_endpoint" holds the master address.
Make sure you have a "load_checkpoint(path)" and
"save_checkpoint(path)" logic in your script. When any number of
workers fail we restart all the workers with the same program
arguments so you will lose progress up to the most recent
checkpoint (see elastic launch).
"use_env" flag has been removed. If you were parsing local rank by
parsing the "--local_rank" option, you need to get the local rank
from the environment variable "LOCAL_RANK" (e.g.
"int(os.environ["LOCAL_RANK"])").
| https://pytorch.org/docs/stable/elastic/train_script.html | pytorch docs |
"int(os.environ["LOCAL_RANK"])").
Below is an expository example of a training script that checkpoints
on each epoch, hence the worst-case progress lost on failure is one
full epoch worth of training.
    def main():
        args = parse_args(sys.argv[1:])
        state = load_checkpoint(args.checkpoint_path)
        initialize(state)

        # torch.distributed.run ensures that this will work
        # by exporting all the env vars needed to initialize the process group
        torch.distributed.init_process_group(backend=args.backend)

        for i in range(state.epoch, state.total_num_epochs):
            for batch in iter(state.dataset):
                train(batch, state.model)

            state.epoch += 1
            save_checkpoint(state)
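One possible shape for the checkpoint helpers used above (a sketch; the
TrainState container and the placeholder model are assumptions, not
torchelastic APIs):

    import os
    from dataclasses import dataclass, field

    import torch
    import torch.nn as nn

    @dataclass
    class TrainState:                      # hypothetical container
        checkpoint_path: str
        model: nn.Module = field(default_factory=lambda: nn.Linear(10, 10))
        epoch: int = 0
        total_num_epochs: int = 10

    def save_checkpoint(state: TrainState) -> None:
        torch.save(
            {"model": state.model.state_dict(), "epoch": state.epoch},
            state.checkpoint_path,
        )

    def load_checkpoint(path: str) -> TrainState:
        state = TrainState(checkpoint_path=path)
        if os.path.exists(path):           # resume from the last snapshot, if any
            snapshot = torch.load(path, map_location="cpu")
            state.model.load_state_dict(snapshot["model"])
            state.epoch = snapshot["epoch"]
        return state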
For concrete examples of torchelastic-compliant train scripts, visit
our examples page. | https://pytorch.org/docs/stable/elastic/train_script.html | pytorch docs |
TorchElastic Kubernetes
Please refer to our GitHub's Kubernetes README for more information on
Elastic Job Controller and custom resource definition. | https://pytorch.org/docs/stable/elastic/kubernetes.html | pytorch docs |
Automatic Mixed Precision package - torch.amp
"torch.amp" provides convenience methods for mixed precision, where
some operations use the "torch.float32" ("float") datatype and other
operations use lower precision floating point datatype
("lower_precision_fp"): "torch.float16" ("half") or "torch.bfloat16".
Some ops, like linear layers and convolutions, are much faster in
"lower_precision_fp". Other ops, like reductions, often require the
dynamic range of "float32". Mixed precision tries to match each op to
its appropriate datatype.
Ordinarily, "automatic mixed precision training" with datatype of
"torch.float16" uses "torch.autocast" and "torch.cuda.amp.GradScaler"
together, as shown in the CUDA Automatic Mixed Precision examples and
CUDA Automatic Mixed Precision recipe. However, "torch.autocast" and
"torch.cuda.amp.GradScaler" are modular, and may be used separately if
desired. As shown in the CPU example section of "torch.autocast", | https://pytorch.org/docs/stable/amp.html | pytorch docs |
"automatic mixed precision training/inference" on CPU with datatype of
"torch.bfloat16" only uses "torch.autocast".
For CUDA and CPU, APIs are also provided separately:
"torch.autocast("cuda", args...)" is equivalent to
"torch.cuda.amp.autocast(args...)".
"torch.autocast("cpu", args...)" is equivalent to
"torch.cpu.amp.autocast(args...)". For CPU, only lower precision
floating point datatype of "torch.bfloat16" is supported for now.
Autocasting
Gradient Scaling
Autocast Op Reference
Op Eligibility
CUDA Op-Specific Behavior
CUDA Ops that can autocast to "float16"
CUDA Ops that can autocast to "float32"
CUDA Ops that promote to the widest input type
Prefer "binary_cross_entropy_with_logits" over
"binary_cross_entropy"
CPU Op-Specific Behavior
CPU Ops that can autocast to "bfloat16"
CPU Ops that can autocast to "float32"
CPU Ops that promote to the widest input type
Autocasting | https://pytorch.org/docs/stable/amp.html | pytorch docs |
class torch.autocast(device_type, dtype=None, enabled=True, cache_enabled=None)
Instances of "autocast" serve as context managers or decorators
that allow regions of your script to run in mixed precision.
In these regions, ops run in an op-specific dtype chosen by
autocast to improve performance while maintaining accuracy. See the
Autocast Op Reference for details.
When entering an autocast-enabled region, Tensors may be any type.
You should not call "half()" or "bfloat16()" on your model(s) or
inputs when using autocasting.
"autocast" should wrap only the forward pass(es) of your network,
including the loss computation(s). Backward passes under autocast
are not recommended. Backward ops run in the same type that
autocast used for corresponding forward ops.
Example for CUDA Devices:
# Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
for input, target in data:
optimizer.zero_grad()
# Enables autocasting for the forward pass (model + loss)
with autocast():
output = model(input)
loss = loss_fn(output, target)
# Exits the context manager before backward()
loss.backward()
optimizer.step()
See the CUDA Automatic Mixed Precision examples for usage (along
with gradient scaling) in more complex scenarios (e.g., gradient
penalty, multiple models/losses, custom autograd functions).
"autocast" can also be used as a decorator, e.g., on the "forward"
method of your model:
class AutocastModel(nn.Module):
...
@autocast()
def forward(self, input):
...
Floating-point Tensors produced in an autocast-enabled region may
be "float16". After returning to an autocast-disabled region, using
them with floating-point Tensors of different dtypes may cause type | https://pytorch.org/docs/stable/amp.html | pytorch docs |
mismatch errors. If so, cast the Tensor(s) produced in the
autocast region back to "float32" (or other dtype if desired). If a
Tensor from the autocast region is already "float32", the cast is a
no-op, and incurs no additional overhead. CUDA Example:
# Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
# torch.mm is on autocast's list of ops that should run in float16.
# Inputs are float32, but the op runs in float16 and produces float16 output.
# No manual casts are required.
e_float16 = torch.mm(a_float32, b_float32)
# Also handles mixed input types
f_float16 = torch.mm(d_float32, e_float16)
# After exiting autocast, calls f_float16.float() to use with d_float32
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
g_float32 = torch.mm(d_float32, f_float16.float())
CPU Training Example:
# Creates model and optimizer in default precision
model = Net()
optimizer = optim.SGD(model.parameters(), ...)
for epoch in epochs:
for input, target in data:
optimizer.zero_grad()
# Runs the forward pass with autocasting.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
output = model(input)
loss = loss_fn(output, target)
loss.backward()
optimizer.step()
CPU Inference Example:
# Creates model in default precision
model = Net().eval()
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
for input in data:
# Runs the forward pass with autocasting.
output = model(input)
CPU Inference Example with Jit Trace:
class TestModel(nn.Module):
def __init__(self, input_size, num_classes):
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
        super(TestModel, self).__init__()
self.fc1 = nn.Linear(input_size, num_classes)
def forward(self, x):
return self.fc1(x)
input_size = 2
num_classes = 2
model = TestModel(input_size, num_classes).eval()
# For now, we suggest to disable the Jit Autocast Pass,
# As the issue: https://github.com/pytorch/pytorch/issues/75956
torch._C._jit_set_autocast_mode(False)
with torch.cpu.amp.autocast(cache_enabled=False):
model = torch.jit.trace(model, torch.randn(1, input_size))
model = torch.jit.freeze(model)
# Models Run
for _ in range(3):
model(torch.randn(1, input_size))
Type mismatch errors in an autocast-enabled region are a bug; if
this is what you observe, please file an issue.
"autocast(enabled=False)" subregions can be nested in autocast-
enabled regions. Locally disabling autocast can be useful, for | https://pytorch.org/docs/stable/amp.html | pytorch docs |
example, if you want to force a subregion to run in a particular
"dtype". Disabling autocast gives you explicit control over the
execution type. In the subregion, inputs from the surrounding
region should be cast to "dtype" before use:
# Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
e_float16 = torch.mm(a_float32, b_float32)
with autocast(enabled=False):
# Calls e_float16.float() to ensure float32 execution
# (necessary because e_float16 was created in an autocasted region)
f_float32 = torch.mm(c_float32, e_float16.float())
# No manual casts are required when re-entering the autocast-enabled region.
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
        # torch.mm again runs in float16 and produces float16 output, regardless of input types.
g_float16 = torch.mm(d_float32, f_float32)
The autocast state is thread-local. If you want it enabled in a
new thread, the context manager or decorator must be invoked in
that thread. This affects "torch.nn.DataParallel" and
"torch.nn.parallel.DistributedDataParallel" when used with more
than one GPU per process (see Working with Multiple GPUs).
Parameters:
* device_type (str, required) -- Whether to use 'cuda'
or 'cpu' device
* **enabled** (*bool**, **optional*) -- Whether autocasting
should be enabled in the region. Default: "True"
* **dtype** (*torch_dtype**, **optional*) -- Whether to use
torch.float16 or torch.bfloat16.
* **cache_enabled** (*bool**, **optional*) -- Whether the weight
cache inside autocast should be enabled. Default: "True"
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
class torch.cuda.amp.autocast(enabled=True, dtype=torch.float16, cache_enabled=True)
See "torch.autocast". "torch.cuda.amp.autocast(args...)" is
equivalent to "torch.autocast("cuda", args...)"
torch.cuda.amp.custom_fwd(fwd=None, *, cast_inputs=None)
Helper decorator for "forward" methods of custom autograd functions
(subclasses of "torch.autograd.Function"). See the example page
for more detail.
Parameters:
cast_inputs ("torch.dtype" or None, optional, default=None)
-- If not "None", when "forward" runs in an autocast-enabled
region, casts incoming floating-point CUDA Tensors to the target
dtype (non-floating-point Tensors are not affected), then
executes "forward" with autocast disabled. If "None",
"forward"'s internal ops execute with the current autocast
state.
Note:
If the decorated "forward" is called outside an autocast-enabled
region, "custom_fwd" is a no-op and "cast_inputs" has no effect.
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
torch.cuda.amp.custom_bwd(bwd)
Helper decorator for backward methods of custom autograd functions
(subclasses of "torch.autograd.Function"). Ensures that "backward"
executes with the same autocast state as "forward". See the example
page for more detail.
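As described above, the pair is typically applied to the "forward" and
"backward" staticmethods of a "torch.autograd.Function"; the sketch below
follows the pattern from the example page (the function itself is
illustrative):

    import torch
    from torch.cuda.amp import custom_bwd, custom_fwd

    class MyMM(torch.autograd.Function):
        @staticmethod
        @custom_fwd(cast_inputs=torch.float32)
        def forward(ctx, a, b):
            # Inputs arriving from an autocast region are cast to float32,
            # and the body below runs with autocast disabled.
            ctx.save_for_backward(a, b)
            return a.mm(b)

        @staticmethod
        @custom_bwd
        def backward(ctx, grad):
            # Runs with the same autocast state as forward (disabled here).
            a, b = ctx.saved_tensors
            return grad.mm(b.t()), a.t().mm(grad)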
class torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16, cache_enabled=True)
See "torch.autocast". "torch.cpu.amp.autocast(args...)" is
equivalent to "torch.autocast("cpu", args...)"
Gradient Scaling
If the forward pass for a particular op has "float16" inputs, the
backward pass for that op will produce "float16" gradients. Gradient
values with small magnitudes may not be representable in "float16".
These values will flush to zero ("underflow"), so the update for the
corresponding parameters will be lost.
To prevent underflow, "gradient scaling" multiplies the network's
loss(es) by a scale factor and invokes a backward pass on the scaled
loss(es). Gradients flowing backward through the network are then | https://pytorch.org/docs/stable/amp.html | pytorch docs |
scaled by the same factor. In other words, gradient values have a
larger magnitude, so they don't flush to zero.
Each parameter's gradient (".grad" attribute) should be unscaled
before the optimizer updates the parameters, so the scale factor does
not interfere with the learning rate.
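Putting this together with autocast, a typical iteration looks like the
following sketch (a CUDA device is assumed, and "model", "optimizer",
"data", and "loss_fn" are assumed to exist; "step()" unscales the
gradients internally):

    scaler = torch.cuda.amp.GradScaler()

    for input, target in data:
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            output = model(input)
            loss = loss_fn(output, target)
        scaler.scale(loss).backward()   # backward on the scaled loss
        scaler.step(optimizer)          # unscales grads; skips step on inf/NaN
        scaler.update()                 # adjusts the scale for the next iteration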
class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)
get_backoff_factor()
Returns a Python float containing the scale backoff factor.
get_growth_factor()
Returns a Python float containing the scale growth factor.
get_growth_interval()
Returns a Python int containing the growth interval.
get_scale()
Returns a Python float containing the current scale, or 1.0 if
scaling is disabled.
Warning:
"get_scale()" incurs a CPU-GPU sync.
is_enabled()
Returns a bool indicating whether this instance is enabled.
load_state_dict(state_dict) | https://pytorch.org/docs/stable/amp.html | pytorch docs |
Loads the scaler state. If this instance is disabled,
"load_state_dict()" is a no-op.
Parameters:
**state_dict** (*dict*) -- scaler state. Should be an object
returned from a call to "state_dict()".
scale(outputs)
Multiplies ('scales') a tensor or list of tensors by the scale
factor.
Returns scaled outputs. If this instance of "GradScaler" is not
enabled, outputs are returned unmodified.
Parameters:
**outputs** (*Tensor** or **iterable of Tensors*) -- Outputs
to scale.
set_backoff_factor(new_factor)
Parameters:
        **new_factor** (*float*) -- Value to use as the new scale
        backoff factor.
set_growth_factor(new_factor)
Parameters:
        **new_factor** (*float*) -- Value to use as the new scale
        growth factor.
set_growth_interval(new_interval)
Parameters:
**new_interval** (*int*) -- Value to use as the new growth
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
interval.
state_dict()
Returns the state of the scaler as a "dict". It contains five
entries:
* ""scale"" - a Python float containing the current scale
* ""growth_factor"" - a Python float containing the current
growth factor
* ""backoff_factor"" - a Python float containing the current
backoff factor
* ""growth_interval"" - a Python int containing the current
growth interval
* ""_growth_tracker"" - a Python int containing the number of
recent consecutive unskipped steps.
If this instance is not enabled, returns an empty dict.
Note:
If you wish to checkpoint the scaler's state after a
particular iteration, "state_dict()" should be called after
"update()".
step(optimizer, *args, **kwargs)
"step()" carries out the following two operations:
1. Internally invokes "unscale_(optimizer)" (unless "unscale_()"
was explicitly called for "optimizer" earlier in the
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
iteration). As part of the "unscale_()", gradients are
checked for infs/NaNs.
2. If no inf/NaN gradients are found, invokes "optimizer.step()"
using the unscaled gradients. Otherwise, "optimizer.step()"
is skipped to avoid corrupting the params.
"*args" and "**kwargs" are forwarded to "optimizer.step()".
Returns the return value of "optimizer.step(*args, **kwargs)".
Parameters:
* **optimizer** (*torch.optim.Optimizer*) -- Optimizer that
applies the gradients.
* **args** -- Any arguments.
* **kwargs** -- Any keyword arguments.
Warning:
Closure use is not currently supported.
unscale_(optimizer)
Divides ("unscales") the optimizer's gradient tensors by the
scale factor.
"unscale_()" is optional, serving cases where you need to modify
or inspect gradients between the backward pass(es) and "step()".
If "unscale_()" is not called explicitly, gradients will be
| https://pytorch.org/docs/stable/amp.html | pytorch docs |
unscaled automatically during "step()".
Simple example, using "unscale_()" to enable clipping of
unscaled gradients:
...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
Parameters:
**optimizer** (*torch.optim.Optimizer*) -- Optimizer that
owns the gradients to be unscaled.
Note:
"unscale_()" does not incur a CPU-GPU sync.
Warning:
"unscale_()" should only be called once per optimizer per
"step()" call, and only after all gradients for that
optimizer's assigned parameters have been accumulated. Calling
"unscale_()" twice for a given optimizer between each "step()"
triggers a RuntimeError.
Warning:
"unscale_()" may unscale sparse gradients out of place,
replacing the ".grad" attribute.
update(new_scale=None) | https://pytorch.org/docs/stable/amp.html | pytorch docs |
Updates the scale factor.
If any optimizer steps were skipped the scale is multiplied by
"backoff_factor" to reduce it. If "growth_interval" unskipped
iterations occurred consecutively, the scale is multiplied by
"growth_factor" to increase it.
Passing "new_scale" sets the new scale value manually.
("new_scale" is not used directly, it's used to fill
GradScaler's internal scale tensor. So if "new_scale" was a
tensor, later in-place changes to that tensor will not further
affect the scale GradScaler uses internally.)
Parameters:
**new_scale** (float or "torch.cuda.FloatTensor", optional,
default=None) -- New scale factor.
Warning:
"update()" should only be called at the end of the iteration,
after "scaler.step(optimizer)" has been invoked for all
optimizers used this iteration.
Autocast Op Reference
Op Eligibility | https://pytorch.org/docs/stable/amp.html | pytorch docs |
Ops that run in "float64" or non-floating-point dtypes are not
eligible, and will run in these types whether or not autocast is
enabled.
Only out-of-place ops and Tensor methods are eligible. In-place
variants and calls that explicitly supply an "out=..." Tensor are
allowed in autocast-enabled regions, but won't go through autocasting.
For example, in an autocast-enabled region "a.addmm(b, c)" can
autocast, but "a.addmm_(b, c)" and "a.addmm(b, c, out=d)" cannot. For
best performance and stability, prefer out-of-place ops in autocast-
enabled regions.
Ops called with an explicit "dtype=..." argument are not eligible, and
will produce output that respects the "dtype" argument.
CUDA Op-Specific Behavior
The following lists describe the behavior of eligible ops in autocast-
enabled regions. These ops always go through autocasting whether they
are invoked as part of a "torch.nn.Module", as a function, or as a | https://pytorch.org/docs/stable/amp.html | pytorch docs |
"torch.Tensor" method. If functions are exposed in multiple
namespaces, they go through autocasting regardless of the namespace.
Ops not listed below do not go through autocasting. They run in the
type defined by their inputs. However, autocasting may still change
the type in which unlisted ops run if they're downstream from
autocasted ops.
If an op is unlisted, we assume it's numerically stable in "float16".
If you believe an unlisted op is numerically unstable in "float16",
please file an issue.
CUDA Ops that can autocast to "float16"
"__matmul__", "addbmm", "addmm", "addmv", "addr", "baddbmm", "bmm",
"chain_matmul", "multi_dot", "conv1d", "conv2d", "conv3d",
"conv_transpose1d", "conv_transpose2d", "conv_transpose3d", "GRUCell",
"linear", "LSTMCell", "matmul", "mm", "mv", "prelu", "RNNCell"
CUDA Ops that can autocast to "float32"
"pow", "rdiv", "rpow", "rtruediv", "acos", "asin", | https://pytorch.org/docs/stable/amp.html | pytorch docs |
"binary_cross_entropy_with_logits", "cosh", "cosine_embedding_loss",
"cdist", "cosine_similarity", "cross_entropy", "cumprod", "cumsum",
"dist", "erfinv", "exp", "expm1", "group_norm",
"hinge_embedding_loss", "kl_div", "l1_loss", "layer_norm", "log",
"log_softmax", "log10", "log1p", "log2", "margin_ranking_loss",
"mse_loss", "multilabel_margin_loss", "multi_margin_loss", "nll_loss",
"norm", "normalize", "pdist", "poisson_nll_loss", "pow", "prod",
"reciprocal", "rsqrt", "sinh", "smooth_l1_loss", "soft_margin_loss",
"softmax", "softmin", "softplus", "sum", "renorm", "tan",
"triplet_margin_loss"
CUDA Ops that promote to the widest input type
These ops don't require a particular dtype for stability, but take
multiple inputs and require that the inputs' dtypes match. If all of
the inputs are "float16", the op runs in "float16". If any of the
inputs is "float32", autocast casts all inputs to "float32" and runs
the op in "float32". | https://pytorch.org/docs/stable/amp.html | pytorch docs |
the op in "float32".
"addcdiv", "addcmul", "atan2", "bilinear", "cross", "dot",
"grid_sample", "index_put", "scatter_add", "tensordot"
Some ops not listed here (e.g., binary ops like "add") natively
promote inputs without autocasting's intervention. If inputs are a
mixture of "float16" and "float32", these ops run in "float32" and
produce "float32" output, regardless of whether autocast is enabled.
Prefer "binary_cross_entropy_with_logits" over "binary_cross_entropy"
The backward passes of "torch.nn.functional.binary_cross_entropy()"
(and "torch.nn.BCELoss", which wraps it) can produce gradients that
aren't representable in "float16". In autocast-enabled regions, the
forward input may be "float16", which means the backward gradient must
be representable in "float16" (autocasting "float16" forward inputs to
"float32" doesn't help, because that cast must be reversed in
backward). Therefore, "binary_cross_entropy" and "BCELoss" raise an | https://pytorch.org/docs/stable/amp.html | pytorch docs |
error in autocast-enabled regions.
Many models use a sigmoid layer right before the binary cross entropy
layer. In this case, combine the two layers using
"torch.nn.functional.binary_cross_entropy_with_logits()" or
"torch.nn.BCEWithLogitsLoss". "binary_cross_entropy_with_logits" and
"BCEWithLogits" are safe to autocast.
CPU Op-Specific Behavior
The following lists describe the behavior of eligible ops in autocast-
enabled regions. These ops always go through autocasting whether they
are invoked as part of a "torch.nn.Module", as a function, or as a
"torch.Tensor" method. If functions are exposed in multiple
namespaces, they go through autocasting regardless of the namespace.
Ops not listed below do not go through autocasting. They run in the
type defined by their inputs. However, autocasting may still change
the type in which unlisted ops run if they're downstream from
autocasted ops.
If an op is unlisted, we assume it's numerically stable in "bfloat16". | https://pytorch.org/docs/stable/amp.html | pytorch docs |
If you believe an unlisted op is numerically unstable in "bfloat16",
please file an issue.
CPU Ops that can autocast to "bfloat16"
"conv1d", "conv2d", "conv3d", "bmm", "mm", "baddbmm", "addmm",
"addbmm", "linear", "matmul", "_convolution"
CPU Ops that can autocast to "float32"
"conv_transpose1d", "conv_transpose2d", "conv_transpose3d",
"avg_pool3d", "binary_cross_entropy", "grid_sampler",
"grid_sampler_2d", "_grid_sampler_2d_cpu_fallback", "grid_sampler_3d",
"polar", "prod", "quantile", "nanquantile", "stft", "cdist", "trace",
"view_as_complex", "cholesky", "cholesky_inverse", "cholesky_solve",
"inverse", "lu_solve", "orgqr", "inverse", "ormqr", "pinverse",
"max_pool3d", "max_unpool2d", "max_unpool3d", "adaptive_avg_pool3d",
"reflection_pad1d", "reflection_pad2d", "replication_pad1d",
"replication_pad2d", "replication_pad3d", "mse_loss", "ctc_loss",
"kl_div", "multilabel_margin_loss", "fft_fft", "fft_ifft", "fft_fft2", | https://pytorch.org/docs/stable/amp.html | pytorch docs |