text | source | category
---|---|---|
These implementations include FlashAttention from HazyResearch, Memory-Efficient Attention from the xFormers project, and a native C++ implementation that is ideal for non-CUDA devices or when high precision is required.
PyTorch 2.0 will automatically select the optimal implementation for your use case, but you can also toggle them individually for finer-grained control. Additionally, the scaled dot product attention function can be used to build common transformer architecture components.
Learn more with the documentation and this tutorial.
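As a rough illustration (the shapes, dtype, and device below are arbitrary and not taken from the post), torch.nn.functional.scaled_dot_product_attention can be called directly, and on CUDA machines the backend choice can be constrained with the torch.backends.cuda.sdp_kernel context manager:
```python
import torch
import torch.nn.functional as F

# Toy tensors shaped (batch, heads, seq_len, head_dim); adjust to your model.
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Let PyTorch pick the fastest available implementation automatically.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Or restrict the choice for finer-grained control (CUDA backends only).
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```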
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
[Beta] functorch -> torch.func
Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:
* model ensembling
* efficiently computing jacobians and hessians
* computing per-sample-gradients (or other per-sample quantities) | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
We’re excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules works. Please see the docs and the migration guide for more details.
Furthermore, we have added support for torch.autograd.Function: one is now able to apply function transformations (e.g. vmap, grad, jvp) over torch.autograd.Function.
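For example, per-sample gradients can be computed by composing grad and vmap from torch.func; the following is a minimal sketch (the model, data, and loss are placeholders, not from the post):
```python
import torch
from torch.func import functional_call, grad, vmap

model = torch.nn.Linear(4, 1)
params = dict(model.named_parameters())
x, y = torch.randn(8, 4), torch.randn(8, 1)

def loss_fn(params, sample, target):
    # functional_call runs the module with an explicit parameter dict
    out = functional_call(model, params, (sample.unsqueeze(0),))
    return torch.nn.functional.mse_loss(out, target.unsqueeze(0))

# grad differentiates w.r.t. params; vmap maps over the batch dimension of (x, y)
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))(params, x, y)
# per_sample_grads["weight"] has shape (8, 1, 4): one gradient per sample
```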
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
[Beta] Dispatchable Collectives
Dispatchable collectives is an improvement to the existing init_process_group() API that makes the backend argument optional. For users, the main advantage of this feature is that it allows them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature will also make it easier for users to support both GPU and CPU collectives, as they will no longer need to specify the backend manually (e.g. "NCCL" or "GLOO"). Existing backend specifications by users will be honored and will not require change.
Usage example:
```python
import torch.distributed as dist
...
# old
dist.init_process_group(backend="nccl", ...)
dist.all_reduce(...) # with CUDA tensors works
dist.all_reduce(...) # with CPU tensors does not work

# new
dist.init_process_group(...) # backend is optional
dist.all_reduce(...) # with CUDA tensors works
dist.all_reduce(...) # with CPU tensors works
```
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
Learn more here.
[Beta] torch.set_default_device and torch.device as context manager
torch.set_default_device allows users to change the default device that factory functions in PyTorch allocate on. For example, if you call torch.set_default_device('cuda'), a call to torch.empty(2) will allocate on CUDA (rather than on CPU). You can also use torch.device as a context manager to change the default device on a local basis. This resolves a long-standing feature request, dating back to PyTorch's initial release, for a way to do this.
Learn more here.
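A minimal sketch of both forms (assuming a CUDA-enabled build):
```python
import torch

torch.set_default_device("cuda")
x = torch.empty(2)            # allocated on cuda:0 rather than CPU

with torch.device("cpu"):     # scope the default device locally
    y = torch.zeros(3)        # allocated on CPU inside this block
```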
[Beta] "X86" as the new default quantization backend for x86 CPU | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
The new X86 quantization backend, which utilizes the FBGEMM and oneDNN kernel libraries, replaces FBGEMM as the default quantization backend for x86 CPU platforms. It offers improved int8 inference performance compared to the original FBGEMM backend by leveraging the strengths of both libraries, with a 1.3X – 2X inference performance speedup measured on 40+ deep learning models. The new backend is functionally compatible with the original FBGEMM backend.
Table: Geomean Speedup of X86 Quantization Backend vs. FBGEMM Backend

| | 1 core/instance | 2 cores/instance | 4 cores/instance | 1 socket (32 cores)/instance |
|---|---|---|---|---|
| Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz | 1.76X | 1.80X | 2.04X | 1.34X |
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
By default, users on x86 platforms will utilize the x86 quantization backend and their PyTorch programs will remain unchanged when using the default backend. Alternatively, users have the option to specify "X86" as the quantization backend explicitly. Example code is shown below:
```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.quantization.quantize_fx import prepare_fx, convert_fx

# get default configuration
qconfig_mapping = get_default_qconfig_mapping()

# or explicitly specify the backend
# qengine = 'x86'
# torch.backends.quantized.engine = qengine
# qconfig_mapping = get_default_qconfig_mapping(qengine)

# construct fp32 model
model_fp32 = ...

# prepare (x is the example calibration input)
prepared_model = prepare_fx(model_fp32, qconfig_mapping, example_inputs=x)

# calibrate
...

# convert
quantized_model = convert_fx(prepared_model)
```
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
Find more information: https://github.com/pytorch/pytorch/issues/83888 and https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html.
[Beta] GNN inference and training optimization on CPU
PyTorch 2.0 includes several critical optimizations to improve GNN inference and training performance on CPU. Before 2.0, GNN models from PyG suffered from low efficiency on CPU due to the lack of performance tuning for several critical kernels (scatter/gather, etc.) and the lack of GNN-related sparse matrix multiplication ops. To be specific, optimizations include:
* scatter_reduce: performance hotspot in Message Passing when the edge index is stored in Coordinate format (COO). | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
* gather: backward of scatter_reduce, specially tuned for the GNN compute when the index is an expanded tensor.
* torch.sparse.mm with reduce flag: performance hotspot in Message Passing when the edge index is stored in Compressed Sparse Row (CSR) format. Supported reduce flags: sum, mean, amax, amin.
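For illustration, here is a minimal sketch of the scatter_reduce aggregation pattern that shows up in COO-format message passing (the node counts, edge indices, and feature sizes are made up, not from the post):
```python
import torch

num_nodes, feat_dim = 4, 8
dst = torch.tensor([0, 0, 1, 3])              # destination node of each edge (COO)
messages = torch.randn(dst.numel(), feat_dim)  # one message per edge

out = torch.zeros(num_nodes, feat_dim)
# The index is expanded to match the message shape, the case tuned in 2.0.
index = dst.unsqueeze(-1).expand_as(messages)
out.scatter_reduce_(0, index, messages, reduce="sum", include_self=True)
```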
On PyG benchmarks/examples and OGB benchmarks, a 1.12x - 4.07x performance speedup is measured (2.0 compared with 1.13.1) for single node inference and training.
| Model-Dataset | Option | Speedup Ratio |
|---|---|---|
| GCN-Reddit (inference) | 512-2-64-dense | 1.22x |
| | 1024-3-128-dense | 1.25x |
| | 512-2-64-sparse | 1.31x |
| | 1024-3-128-sparse | 1.68x |
| GraphSage-ogbn-products (inference) | 512-2-64-dense | 1.22x |
| | 1024-3-128-dense | 1.15x |
| | 512-2-64-sparse | 1.20x |
| | 1024-3-128-sparse | 1.33x |
| | full-batch-sparse | 4.07x |
| GCN-PROTEINS (training) | 3-32 | 1.67x |
| GCN-REDDIT-BINARY (training) | 3-32 | 1.67x |
| GCN-Reddit (training) | 512-2-64-dense | 1.20x |
| | 1024-3-128-dense | 1.12x |
Learn more: PyG CPU Performance Optimization. | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
[Beta] Accelerating inference on CPU with PyTorch by leveraging oneDNN Graph
oneDNN Graph API extends oneDNN with a flexible graph API to maximize the optimization opportunity for generating efficient code on AI hardware.
* It automatically identifies the graph partitions to be accelerated via fusion.
* The fusion patterns focus on fusing compute-intensive operations such as convolution, matmul and their neighbor operations for both inference and training use cases. | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
Although work is ongoing to integrate oneDNN Graph with TorchDynamo as well, its integration with the PyTorch JIT Fuser attained beta status in PyTorch 2.0 for Float32 & BFloat16 inference (on machines that support AVX512_BF16 ISA).
From a developer’s/researcher’s perspective, the usage is quite simple & intuitive, with the only change in code being an API invocation:
* To leverage oneDNN Graph with JIT tracing, a model is profiled with an example input.
* The context manager with torch.jit.fuser("fuser3"): can also be used instead of invoking torch.jit.enable_onednn_fusion(True). | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
For accelerating BFloat16 inference, we rely on eager-mode AMP (Automatic Mixed Precision) support in PyTorch & disable JIT mode’s AMP, as both of them are currently divergent:
# Assuming we have a model of the name 'model'
example_input = torch.rand(1, 3, 224, 224)

# enable oneDNN Graph
torch.jit.enable_onednn_fusion(True)
# Disable AMP for JIT
torch._C._jit_set_autocast_mode(False)

with torch.no_grad(), torch.cpu.amp.autocast():
    model = torch.jit.trace(model, (example_input))
    model = torch.jit.freeze(model)
    # 2 warm-ups (2 for tracing/scripting with an example, 3 without an example)
    model(example_input)
    model(example_input)
    # speedup would be observed in subsequent runs.
    model(example_input)
Learn more here.
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
Prototype Features
Distributed API
[Prototype] DTensor | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
PyTorch DistributedTensor (DTensor) is a prototyping effort with distributed tensor primitives to allow easier distributed computation authoring in the SPMD (Single Program Multiple Devices) paradigm. The primitives are simple but powerful when used to express tensor distributions with both sharded and replicated parallelism strategies. PyTorch DTensor empowered PyTorch Tensor Parallelism along with other advanced parallelism explorations. In addition, it also offers a uniform way to save/load state_dict for distributed checkpointing purposes, even when there are complex tensor distribution strategies such as combining tensor parallelism with parameter sharding in FSDP. More details can be found in this RFC and the DTensor examples notebook. | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
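As a hedged sketch of the prototype API (the module path torch.distributed._tensor and exact names are prototype-level and may change; this is not an official example), a global tensor can be sharded over a one-dimensional device mesh, assuming the script is launched with torchrun so a process group can be initialized:
```python
import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

dist.init_process_group("gloo")

# One-dimensional mesh spanning all ranks (CPU here for simplicity).
mesh = DeviceMesh("cpu", list(range(dist.get_world_size())))

# Shard a global tensor along dim 0; each rank keeps only its own slice.
global_tensor = torch.randn(8, 4)
dtensor = distribute_tensor(global_tensor, mesh, placements=[Shard(0)])
print(dtensor.to_local().shape)
```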
[Prototype] TensorParallel
We now support DTensor-based Tensor Parallel, with which users can distribute their model parameters across different GPU devices. We also support Pairwise Parallel, which shards two concatenated linear layers in a col-wise and row-wise style respectively so that only one collective (all-reduce/reduce-scatter) is needed in the end. More details can be found in this example.
[Prototype] 2D Parallel
We implemented the integration of the aforementioned TP with FullyShardedDataParallel(FSDP) as 2D parallel to further scale large model training. More details can be found in this slide and code example.
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
[Prototype] torch.compile(dynamic=True)
Experimental support for PT2 compilation with dynamic shapes is available in this release. Inference compilation with inductor for simple models is supported, but there are a lot of limitations:
* Training available in a future release (This is partially fixed in nightlies!)
* Minifier available in a future release.
* It is easy to end up in a situation where the dimension you wanted to be dynamic gets specialized anyway. Some of these issues are fixed in nightlies, others are not.
* We do not appropriately propagate Inductor guards to the top-level, this is tracked at #96296.
* Data-dependent operations like nonzero still require a graph break.
* Dynamic does not work with non-standard modes like reduce-overhead or max-autotune.
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
There are many bugs in Inductor compilation. To track known bugs, check the dynamic shapes label on the PyTorch issue tracker.
For the latest and greatest news about dynamic shapes support on master, check out our status reports.
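A minimal sketch of opting in (the toy function below is ours, not from the post):
```python
import torch

def fn(x):
    return torch.nn.functional.relu(x) * x.shape[0]

compiled = torch.compile(fn, dynamic=True)  # keep shapes symbolic where possible

# Different batch sizes should avoid a full recompile, subject to the
# limitations listed above.
print(compiled(torch.randn(4, 16)).shape)
print(compiled(torch.randn(7, 16)).shape)
```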
Highlights/Performance Improvements
Deprecation of CUDA 11.6 and Python 3.7 support for PyTorch 2.0
If you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as these are the minimum versions required for PyTorch 2.0. For more detail, please refer to the Release Compatibility Matrix for PyTorch releases.
| https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
Python 3.11 support on Anaconda Platform
Due to the lack of Python 3.11 support for packages that PyTorch depends on, including NumPy, SciPy, SymPy, Pillow and others on the Anaconda platform, we will not be releasing Conda binaries compiled with Python 3.11 for PyTorch Release 2.0. The Pip packages with Python 3.11 support will be released, so if you intend to use PyTorch 2.0 with Python 3.11 please use our Pip packages. Please note: Conda packages with Python 3.11 support will be made available on our nightly channel. We are also planning on releasing Conda Python 3.11 binaries as part of a future release once Anaconda provides these key dependencies. More information and instructions on how to download the Pip packages can be found here.
Optimized PyTorch Inference with AWS Graviton processors | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
The optimizations focused on three key areas: GEMM kernels, bfloat16 support, primitive caching and the memory allocator. For aarch64 platforms, PyTorch supports Arm Compute Library (ACL) GEMM kernels via the Mkldnn (oneDNN) backend. The ACL library provides Neon/SVE GEMM kernels for fp32 and bfloat16 formats. The bfloat16 support on c7g allows efficient deployment of bfloat16-trained, AMP (Automatic Mixed Precision)-trained, or even standard fp32-trained models. The standard fp32 models leverage bfloat16 kernels via the oneDNN fast math mode, without any model quantization. Next, we implemented primitive caching for conv, matmul and inner product operators. More information on the updated PyTorch user guide with the upcoming 2.0 release improvements and TorchBench benchmark details can be found here. | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
layout: blog_detail
title: "Case Study: PathAI Uses PyTorch to Improve Patient Outcomes with AI-powered Pathology"
author: Logan Kilpatrick - Sr. Technology Advocate, Harshith Padigela - ML Engineer, Syed Ashar Javed - ML Technical Lead, Robert Egger - Biomedical Data Scientist
featured-img: "/assets/images/2022-7-15-PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology-1.png"
PathAI is the leading provider of AI-powered technology tools and services for pathology (the study of disease). Our platform was built to enable substantial improvements to the accuracy of diagnosis and the measurement of therapeutic efficacy for complex diseases, leveraging modern approaches in machine learning like image segmentation, graph neural networks, and multiple instance learning.
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Traditional manual pathology is prone to subjectivity and observer variability that can negatively affect diagnoses and drug development trials. Before we dive into how we use PyTorch to improve our diagnosis workflow, let us first lay out the traditional analog Pathology workflow without machine learning.
How Traditional Biopharma Works
There are many avenues that biopharma companies take to discover novel therapeutics or diagnostics. One of those avenues relies heavily on the analysis of pathology slides to answer a variety of questions: how does a particular cellular communication pathway work? Can a specific disease state be linked to the presence or lack of a particular protein? Why did a particular drug in a clinical trial work for some patients but not others? Might there be an association between patient outcomes and a novel biomarker? | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
To help answer these questions, biopharma companies rely on expert pathologists to analyze slides and help evaluate the questions they might have.
As you might imagine, it takes an expert board certified pathologist to make accurate interpretations and diagnosis. In one study, a single biopsy result was given to 36 different pathologists and the outcome was 18 different diagnoses varying in severity from no treatment to aggressive treatment necessary. Pathologists also often solicit feedback from colleagues in difficult edge cases. Given the complexity of the problem, even with expert training and collaboration, pathologists can still have a hard time making a correct diagnosis. This potential variance can be the difference between a drug being approved and it failing the clinical trial.
How PathAI utilizes machine learning to power drug development | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
PathAI develops machine learning models which provide insights for drug development R&D, for powering clinical trials, and for making diagnoses. To this end, PathAI leverages PyTorch for slide level inference using a variety of methods including graph neural networks (GNN) as well as multiple instance learning. In this context, “slides” refers to full size scanned images of glass slides, which are pieces of glass with a thin slice of tissue between them, stained to show various cell formations. PyTorch enables our teams using these different methodologies to share a common framework which is robust enough to work in all the conditions we need. PyTorch’s high level, imperative, and pythonic syntax allows us to prototype models quickly and then take those models to scale once we have the results we want.
Multi-instance learning on gigabyte images | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
One of the uniquely challenging aspects of applying ML to pathology is the immense size of the images. These digital slides can often be 100,000 x 100,000 pixels or more in resolution and gigabytes in size. Loading the full image in GPU memory and applying traditional computer vision algorithms on them is an almost impossible task. It also takes both a considerable amount of time and resources to have a full slide image (100k x 100k) annotated, especially when annotators need to be domain experts (board-certified pathologists). We often build models to predict image-level labels, like the presence of cancer, on a patient slide which covers a few thousand pixels in the whole image. The cancerous area is sometimes a tiny fraction of the entire slide, which makes the ML problem similar to finding a needle in a haystack. On the other hand, some problems like the prediction of certain histological biomarkers require an aggregation of information from the whole slide which is again hard due to the size of the images. All these factors add significant algorithmic, computational, and logistical complexity when applying ML techniques to pathology problems. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Breaking down the image into smaller patches, learning patch representations, and then pooling those representations to predict an image-level label is one way to solve this problem as is depicted in the image below. One popular method for doing this is called Multiple Instance Learning (MIL). Each patch is considered an ‘instance’ and a set of patches forms a ‘bag’. The individual patch representations are pooled together to predict a final bag-level label. Algorithmically, the individual patch instances in the bag do not require labels and hence allow us to learn bag-level labels in a weakly-supervised way. They also use permutation invariant pooling functions which make the prediction independent of the order of patches and allows for an efficient aggregation of information. Typically, attention based pooling functions are used which not only allow for efficient aggregation but also provide attention values for each patch in the bag. These values indicate the importance of the corresponding patch in the prediction and can be visualized to better understand the model predictions. This element of interpretability can be very important to drive adoption of these models in the real world and we use variations like Additive MIL models to enable such spatial explainability. Computationally, MIL models circumvent the problem of applying neural networks to large image sizes since patch representations are obtained independently of the size of the image. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
At PathAI, we use custom MIL models based on deep nets to predict image-level labels. The overview of this process is as follows:
1. Select patches from a slide using different sampling approaches.
2. Construct a bag of patches based on random sampling or heuristic rules.
3. Generate patch representations for each instance based on pre-trained models or large-scale representation learning models.
4. Apply permutation invariant pooling functions to get the final slide-level score.
Now that we have walked through some of the high-level details around MIL in PyTorch, let’s look at some code to see how simple it is to go from ideation to code in production with PyTorch. We begin by defining a sampler, transformations, and our MIL dataset:
```python
# Create a bag sampler which randomly samples patches from a slide
bag_sampler = RandomBagSampler(bag_size=12)

# Setup the transformations
crop_transform = FlipRotateCenterCrop(use_flips=True)

# Create the dataset which loads patches for each bag
train_dataset = MILDataset(
    bag_sampler=bag_sampler,
    samples_loader=sample_loader,
    transform=crop_transform,
)
```
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
After we have defined our sampler and dataset, we need to define the model we will actually train with said dataset. PyTorch’s familiar model definition syntax makes this easy to do while also allowing us to create bespoke models at the same time.
```python
classifier = DefaultPooledClassifier(hidden_dims=[256, 256], input_dims=1024, output_dims=1)

pooling = DefaultAttentionModule(
    input_dims=1024,
    hidden_dims=[256, 256],
    output_activation=StableSoftmax()
)

# Define the model which is a composition of the featurizer, pooling module and a classifier
model = DefaultMILGraph(featurizer=ShuffleNetV2(), classifier=classifier, pooling=pooling)
```
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Since these models are trained end-to-end, they offer a powerful way to go directly from a gigapixel whole slide image to a single label. Due to their wide applicability to different biological problems, two aspects of their implementation and deployment are important:
1. Configurable control over each part of the pipeline including the data loaders, the modular parts of the model, and their interaction with each other.
2. Ability to rapidly iterate through the ideate-implement-experiment-productionize loop.
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
PyTorch has various advantages when it comes to MIL modeling. It offers an intuitive way to create dynamic computational graphs with flexible control flow which is great for rapid research experimentation. The map-style datasets, configurable sampler and batch-samplers allow us to customize how we construct bags of patches, enabling faster experimentation. Since MIL models are IO heavy, data parallelism and pythonic data loaders make the task very efficient and user friendly. Lastly, the object-oriented nature of PyTorch enables building of reusable modules which aid in the rapid experimentation, maintainable implementation and ease of building compositional components of the pipeline.
Exploring spatial tissue organization with GNNs in PyTorch
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
In both healthy and diseased tissue, the spatial arrangement and structure of cells can oftentimes be as important as the cells themselves. For example, when assessing lung cancers, pathologists try to look at the overall grouping and structure of tumor cells (do they form solid sheets? Or do they occur in smaller, localized clusters?) to determine if the cancer belongs to specific subtypes which can have vastly different prognosis. Such spatial relationships between cells and other tissue structures can be modeled using graphs to capture tissue topology and cellular composition at the same time. Graph Neural Networks (GNNs) allow learning spatial patterns within these graphs that relate to other clinical variables, for example overexpression of genes in certain cancers. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
In late 2020, when PathAI started using GNNs on tissue samples, PyTorch had the best and most mature support for GNN functionality via the PyG package. This made PyTorch the natural choice for our team given that GNN models were something that we knew would be an important ML concept we wanted to explore. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
One of the main value-adds of GNN’s in the context of tissue samples is that the graph itself can uncover spatial relationships that would otherwise be very difficult to find by visual inspection alone. In our recent AACR publication, we showed that by using GNNs, we can better understand the way the presence of immune cell aggregates (specifically tertiary lymphoid structures, or TLS) in the tumor microenvironment can influence patient prognosis. In this case, the GNN approach was used to predict expression of genes associated with the presence of TLS, and identify histological features beyond the TLS region itself that are relevant to TLS. Such insights into gene expression are difficult to identify from tissue sample images when unassisted by ML models. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
One of the most promising GNN variations we have had success with is self attention graph pooling. Let’s take a look at how we define our Self Attention Graph Pooling (SAGPool) model using PyTorch and PyG:
```python
class SAGPool(torch.nn.Module):
    def __init__(self, ...):
        super().__init__()
        self.conv1 = GraphConv(in_features, hidden_features, aggr='mean')
        self.convs = torch.nn.ModuleList()
        self.pools = torch.nn.ModuleList()
        self.convs.extend([GraphConv(hidden_features, hidden_features, aggr='mean') for i in range(num_layers - 1)])
        self.pools.extend([SAGPooling(hidden_features, ratio, GNN=GraphConv, min_score=min_score) for i in range((num_layers) // 2)])
        self.jump = JumpingKnowledge(mode='cat')
        self.lin1 = Linear(num_layers * hidden_features, hidden_features)
        self.lin2 = Linear(hidden_features, out_features)
        self.out_activation = out_activation
        self.dropout = dropout
```
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
In the above code, we begin by defining a single convolutional graph layer and then add two module list layers which allow us to pass in a variable number of layers. We then take our empty module list and append a variable number of GraphConv layers followed by a variable number of SAGPooling layers. We finish up our SAGPool definition by adding a JumpingKnowledge Layer, two linear layers, our activation function, and our dropout value. PyTorch’s intuitive syntax allows us to abstract away the complexity of working with state of the art methods like SAG Poolings while also maintaining the common approach to model development we are familiar with. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Models like our SAG Pool one described above are just one example of how GNNs with PyTorch are allowing us to explore new and novel ideas. We also recently explored multimodal CNN-GNN hybrid models, which ended up being 20% more accurate than traditional pathologist consensus scores. These innovations and the interplay between traditional CNNs and GNNs are again enabled by the short research-to-production model development loop.
| https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Improving Patient Outcomes
In order to achieve our mission of improving patient outcomes with AI-powered pathology, PathAI needs to rely on an ML development framework that (1) facilitates quick iteration and easy extension (i.e. Model configuration as code) during initial phases of development and exploration (2) scales model training and inference to massive images (3) easily and robustly serves models for production uses of our products (in clinical trials and beyond). As we’ve demonstrated, PyTorch offers us all of these capabilities and more. We are incredibly excited about the future of PyTorch and cannot wait to see what other impactful challenges we can solve using the framework. | https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/ | pytorch blogs |
Torch Distributed Elastic
Makes distributed PyTorch fault-tolerant and elastic.
Get Started
Usage
^^^^^
Quickstart
Train script
Examples
Documentation
API
^^^
torchrun (Elastic Launch)
Elastic Agent
Multiprocessing
Error Propagation
Rendezvous
Expiration Timers
Metrics
Events
Advanced
^^^^^^^^
Customization
Plugins
^^^^^^^
TorchElastic Kubernetes
| https://pytorch.org/docs/stable/distributed.elastic.html | pytorch docs |
torch.overrides
This module exposes various helper functions for the
"__torch_function__" protocol. See Extending torch for more detail on
the "__torch_function__" protocol.
Functions
torch.overrides.get_ignored_functions()
Return public functions that cannot be overridden by
"__torch_function__".
Returns:
A tuple of functions that are publicly available in the torch
API but cannot be overridden with "__torch_function__". Mostly
this is because none of the arguments of these functions are
tensors or tensor-likes.
Return type:
Set[Callable]
-[ Examples ]-
>>> torch.Tensor.as_subclass in torch.overrides.get_ignored_functions()
True
>>> torch.add in torch.overrides.get_ignored_functions()
False
torch.overrides.get_overridable_functions()
List functions that are overridable via "__torch_function__"
Returns:
A dictionary that maps namespaces that contain overridable | https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
functions to functions in that namespace that can be overridden.
Return type:
Dict[Any, List[Callable]]
torch.overrides.resolve_name(f)
Get a human readable string name for a function passed to
"__torch_function__"
Parameters:
f (Callable) -- Function to resolve the name of.
Returns:
Name of the function; if eval'ed it should give back the input
function.
Return type:
str
torch.overrides.get_testing_overrides()
Return a dict containing dummy overrides for all overridable
functions
Returns:
A dictionary that maps overridable functions in the PyTorch API
to lambda functions that have the same signature as the real
function and unconditionally return -1. These lambda functions
are useful for testing API coverage for a type that defines
"__torch_function__".
Return type:
Dict[Callable, Callable]
-[ Examples ]-
>>> import inspect
| https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
>>> my_add = torch.overrides.get_testing_overrides()[torch.add]
>>> inspect.signature(my_add)
torch.overrides.handle_torch_function(public_api, relevant_args, *args, **kwargs)
Implement a function with checks for "__torch_function__"
overrides.
See torch::autograd::handle_torch_function for the equivalent of
this function in the C++ implementation.
Parameters:
* public_api (function) -- Function exposed by the public
torch API originally called like "public_api(*args, **kwargs)"
on which arguments are now being checked.
* **relevant_args** (*iterable*) -- Iterable of arguments to
check for __torch_function__ methods.
* **args** (*tuple*) -- Arbitrary positional arguments
originally passed into "public_api".
* **kwargs** (*tuple*) -- Arbitrary keyword arguments originally
passed into "public_api".
Returns: | https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
Result from calling "implementation" or an "__torch_function__"
method, as appropriate.
Return type:
object
Raises:
    **TypeError** -- if no implementation is found.
-[ Example ]-
>>> def func(a):
...     if has_torch_function_unary(a):
...         return handle_torch_function(func, (a,), a)
...     return a + 0
torch.overrides.has_torch_function()
Check for __torch_function__ implementations in the elements of an
iterable or if a __torch_function__ mode is enabled. Considers
exact "Tensor" s and "Parameter" s non-dispatchable. Use this to
guard a call to "handle_torch_function()"; don't use it to test if
something is Tensor-like, use "is_tensor_like()" instead.
Parameters:
    relevant_args (iterable) -- Iterable or arguments to check for
    __torch_function__ methods.
Returns:
True if any of the elements of relevant_args have
__torch_function__ implementations, False otherwise. | https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
Return type:
bool
See also:
"torch.is_tensor_like"
Checks if something is a Tensor-like, including an exact
"Tensor".
torch.overrides.is_tensor_like(inp)
Returns "True" if the passed-in input is a Tensor-like.
Currently, this occurs whenever there's a "__torch_function__"
attribute on the type of the input.
-[ Examples ]-
A subclass of tensor is generally a Tensor-like.
>>> class SubTensor(torch.Tensor): ...
>>> is_tensor_like(SubTensor([0]))
True
Built-in or user types aren't usually Tensor-like.
>>> is_tensor_like(6)
False
>>> is_tensor_like(None)
False
>>> class NotATensor: ...
>>> is_tensor_like(NotATensor())
False
But, they can be made Tensor-like by implementing
"__torch_function__".
>>> class TensorLike:
...     @classmethod
...     def __torch_function__(cls, func, types, args, kwargs):
...         return -1
>>> is_tensor_like(TensorLike())
True
| https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
torch.overrides.is_tensor_method_or_property(func)
Returns True if the function passed in is a handler for a method or
property belonging to "torch.Tensor", as passed into
"__torch_function__".
Note:
For properties, their "__get__" method must be passed in.
This may be needed, in particular, for the following reasons:
1. Methods/properties sometimes don't contain a "__module__" slot.
2. They require that the first passed-in argument is an instance of
   "torch.Tensor".
-[ Examples ]-
>>> is_tensor_method_or_property(torch.Tensor.add)
True
>>> is_tensor_method_or_property(torch.add)
False
Return type:
bool
torch.overrides.wrap_torch_function(dispatcher)
Wraps a given function with "__torch_function__"-related
functionality.
Parameters:
dispatcher (Callable) -- A callable that returns an
iterable of Tensor-likes passed into the function.
| https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
Note:
This decorator may reduce the performance of your code.
Generally, it's enough to express your code as a series of
functions that, themselves, support __torch_function__. If you
find yourself in the rare situation where this is not the case,
e.g. if you're wrapping a low-level library and you also need it
to work for Tensor-likes, then this function is available.
-[ Examples ]-
>>> def dispatcher(a): # Must have the same signature as func
...     return (a,)
>>> @torch.overrides.wrap_torch_function(dispatcher)
... def func(a): # This will make func dispatchable by __torch_function__
...     return a + 0
| https://pytorch.org/docs/stable/torch.overrides.html | pytorch docs |
Quantization Accuracy Debugging
This document provides high level strategies for improving
quantization accuracy. If a quantized model has error compared to the
original model, we can categorize the error into:
* data insensitive error - caused by intrinsic model quantization error, large portion of input data has large error
* data sensitive error - caused by outlier input data, small portion of input data has large error
* implementation error - quantized kernel is not matching reference implementation
Data insensitive error
General tips
For PTQ, ensure that the data you are calibrating with is
representative of your dataset. For example, for a classification
problem a general guideline is to have multiple samples in every
category, and the overall number of samples should be at least 100.
There is no penalty for calibrating with more data other than
calibration time.
| https://pytorch.org/docs/stable/quantization-accuracy-debugging.html | pytorch docs |
If your model has Conv-BN or Linear-BN patterns, consider fusing
them. If you are using FX graph mode quantization, this is done
automatically by the workflow. If you are using Eager mode
quantization, you can do this manually with the
"torch.ao.quantization.fuse_modules" API (see the example after this list).
Increase the precision of dtype of the problematic ops. Usually,
fp32 will have the highest accuracy, followed by fp16, followed by
dynamically quantized int8, followed by statically quantized int8.
Note: this is trading off performance for accuracy.
Note: availability of kernels per dtype per op can vary by
backend.
Note: dtype conversions add an additional performance cost. For
example, "fp32_op -> quant -> int8_op -> dequant -> fp32_op ->
quant -> int8_op -> dequant" will have a performance penalty
compared to "fp32_op -> fp32_op -> quant -> int8_op -> int8_op
-> dequant" because of a higher number of required dtype
conversions.
| https://pytorch.org/docs/stable/quantization-accuracy-debugging.html | pytorch docs |
If you are using PTQ, consider using QAT to recover some of the
accuracy loss from quantization.
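For reference, a minimal Eager-mode fusion sketch (the module and submodule names below are hypothetical, not from this page):
```python
import torch
import torch.nn as nn
from torch.ao.quantization import fuse_modules

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = SmallNet().eval()  # Conv-BN fusion for inference requires eval mode
fused = fuse_modules(m, [["conv", "bn", "relu"]])
```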
Int8 quantization tips
If you are using per-tensor weight quantization, consider using
per-channel weight quantization (see the example after this list).
If you are doing inference on fbgemm, ensure that you set the
reduce_range argument to False if your CPU is Cooperlake or
newer, and to True otherwise.
Audit the input activation distribution variation across different
samples. If this variation is high, the layer may be suitable for
dynamic quantization but not static quantization.
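As an illustration, a per-channel weight qconfig with reduce_range set explicitly might look like the following (a hedged sketch, not an officially recommended configuration):
```python
import torch
from torch.ao.quantization import QConfig
from torch.ao.quantization.observer import MinMaxObserver, PerChannelMinMaxObserver

my_qconfig = QConfig(
    # reduce_range=False is appropriate for Cooperlake or newer CPUs (see above)
    activation=MinMaxObserver.with_args(dtype=torch.quint8, reduce_range=False),
    weight=PerChannelMinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric
    ),
)
```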
Data sensitive error
If you are using static quantization and a small portion of your input
data is resulting in high quantization error, you can try:
Adjust your calibration dataset to make it more representative of
your inference dataset.
Manually inspect (using Numeric Suite) which layers have high
| https://pytorch.org/docs/stable/quantization-accuracy-debugging.html | pytorch docs |
quantization error. For these layers, consider leaving them in
floating point or adjusting the observer settings to choose a
better scale and zero_point.
Implementation error
If you are using PyTorch quantization with your own backend you may
see differences between the reference implementation of an operation
(such as "dequant -> op_fp32 -> quant") and the quantized
implementation (such as op_int8) of the op on the target hardware.
This could mean one of two things:
the differences (usually small) are expected due to specific
behavior of the target kernel on the target hardware compared to
fp32/cpu. An example of this is accumulating in an integer dtype.
Unless the kernel guarantees bitwise equivalency with the reference
implementation, this is expected.
the kernel on the target hardware has an accuracy issue. In this
case, reach out to the kernel developer.
Numerical Debugging Tooling (prototype)
| https://pytorch.org/docs/stable/quantization-accuracy-debugging.html | pytorch docs |
Warning:
Numerical debugging tooling is early prototype and subject to
change.
torch.ao.ns._numeric_suite Eager mode numeric suite
torch.ao.ns._numeric_suite_fx FX numeric suite
| https://pytorch.org/docs/stable/quantization-accuracy-debugging.html | pytorch docs |
JIT Utils - torch.utils.jit
| https://pytorch.org/docs/stable/jit_utils.html | pytorch docs |
Distributed Optimizers
Warning:
Distributed optimizer is not currently supported when using CUDA
tensors
"torch.distributed.optim" exposes DistributedOptimizer, which takes a
list of remote parameters ("RRef") and runs the optimizer locally on
the workers where the parameters live. The distributed optimizer can
use any of the local optimizer Base class to apply the gradients on
each worker.
class torch.distributed.optim.DistributedOptimizer(optimizer_class, params_rref, *args, **kwargs)
DistributedOptimizer takes remote references to parameters
scattered across workers and applies the given optimizer locally
for each parameter.
This class uses "get_gradients()" in order to retrieve the
gradients for specific parameters.
Concurrent calls to "step()", either from the same or different
clients, will be serialized on each worker -- as each worker's
optimizer can only work on one set of gradients at a time. However, | https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
there is no guarantee that the full forward-backward-optimizer
sequence will execute for one client at a time. This means that the
gradients being applied may not correspond to the latest forward
pass executed on a given worker. Also, there is no guaranteed
ordering across workers.
DistributedOptimizer creates the local optimizer with TorchScript
enabled by default, so that optimizer updates are not blocked by
the Python Global Interpreter Lock (GIL) in the case of
multithreaded training (e.g. Distributed Model Parallel). This
feature is currently enabled for most optimizers. You can also
follow the recipe in PyTorch tutorials to enable TorchScript
support for your own custom optimizers.
Parameters:
* optimizer_class (optim.Optimizer) -- the class of
optimizer to instantiate on each worker.
* **params_rref** (*list**[**RRef**]*) -- list of RRefs to local
or remote parameters to optimize.
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
* **args** -- arguments to pass to the optimizer constructor on
each worker.
* **kwargs** -- arguments to pass to the optimizer constructor
on each worker.
Example::
>>> import torch.distributed.autograd as dist_autograd
>>> import torch.distributed.rpc as rpc
>>> from torch import optim
>>> from torch.distributed.optim import DistributedOptimizer
>>>
>>> with dist_autograd.context() as context_id:
>>> # Forward pass.
>>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
>>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
>>> loss = rref1.to_here() + rref2.to_here()
>>>
>>> # Backward pass.
>>> dist_autograd.backward(context_id, [loss.sum()])
>>>
>>> # Optimizer.
>>> dist_optim = DistributedOptimizer(
>>> optim.SGD,
>>> [rref1, rref2],
>>> lr=0.05, | https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
>>> )
>>> dist_optim.step(context_id)
step(context_id)
Performs a single optimization step.
This will call "torch.optim.Optimizer.step()" on each worker
containing parameters to be optimized, and will block until all
workers return. The provided "context_id" will be used to
retrieve the corresponding "context" that contains the gradients
that should be applied to the parameters.
Parameters:
**context_id** -- the autograd context id for which we should
run the optimizer step.
class torch.distributed.optim.PostLocalSGDOptimizer(optim, averager)
Wraps an arbitrary "torch.optim.Optimizer" and runs post-local SGD.
This optimizer runs the local optimizer at every step. After the warm-
up stage, it averages parameters periodically after the local
optimizer is applied.
Parameters:
* optim (Optimizer) -- The local optimizer. | https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
averager (ModelAverager) -- A model averager instance to
run post-localSGD algorithm.
Example:
>>> import torch
>>> import torch.distributed as dist
>>> import torch.distributed.algorithms.model_averaging.averagers as averagers
>>> import torch.nn as nn
>>> from torch.distributed.optim import PostLocalSGDOptimizer
>>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (
>>> PostLocalSGDState,
>>> post_localSGD_hook,
>>> )
>>>
>>> model = nn.parallel.DistributedDataParallel(
>>> module, device_ids=[rank], output_device=rank
>>> )
>>>
>>> # Register a post-localSGD communication hook.
>>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100)
>>> model.register_comm_hook(state, post_localSGD_hook)
>>>
>>> # Create a post-localSGD optimizer that wraps a local optimizer.
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
>>> # Note that warmup_steps used in PostLocalSGDOptimizer must be the same as
>>> # ``start_localSGD_iter`` used in ``PostLocalSGDState``.
>>> local_optim = torch.optim.SGD(params=model.parameters(), lr=0.01)
>>> opt = PostLocalSGDOptimizer(
>>> optim=local_optim,
>>> averager=averagers.PeriodicModelAverager(period=4, warmup_steps=100)
>>> )
>>>
>>> # In the first 100 steps, DDP runs global gradient averaging at every step.
>>> # After 100 steps, DDP runs gradient averaging within each subgroup (intra-node by default),
>>> # and post-localSGD optimizer runs global model averaging every 4 steps after applying the local optimizer.
>>> for step in range(0, 200):
>>> opt.zero_grad()
>>> loss = loss_fn(output, labels)
>>> loss.backward()
>>> opt.step()
load_state_dict(state_dict)
This is the same as "torch.optim.Optimizer" "load_state_dict()",
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
but also restores model averager's step value to the one saved
in the provided "state_dict".
If there is no ""step"" entry in "state_dict", it will raise a
warning and initialize the model averager's step to 0.
state_dict()
This is the same as "torch.optim.Optimizer" "state_dict()", but
adds an extra entry to record model averager's step to the
checkpoint to ensure reload does not cause unnecessary warm up
again.
step()
Performs a single optimization step (parameter update).
class torch.distributed.optim.ZeroRedundancyOptimizer(params, optimizer_class, process_group=None, parameters_as_bucket_view=False, overlap_with_ddp=False, **defaults)
This class wraps an arbitrary "optim.Optimizer" and shards its
states across ranks in the group as described by ZeRO. The local
optimizer instance in each rank is only responsible for updating
approximately "1 / world_size" parameters and hence only needs to | https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
keep "1 / world_size" optimizer states. After parameters are
updated locally, each rank will broadcast its parameters to all
other peers to keep all model replicas in the same state.
"ZeroRedundancyOptimizer" can be used in conjunction with
"torch.nn.parallel.DistributedDataParallel" to reduce per-rank peak
memory consumption.
"ZeroRedundancyOptimizer" uses a sorted-greedy algorithm to pack a
number of parameters at each rank. Each parameter belongs to a
single rank and is not divided among ranks. The partition is
arbitrary and might not match the parameter registration or
usage order.
Parameters:
params ("Iterable") -- an "Iterable" of "torch.Tensor" s or
"dict" s giving all parameters, which will be sharded across
ranks.
Keyword Arguments:
* optimizer_class ("torch.optim.Optimizer") -- the class of the
local optimizer.
* **process_group** ("ProcessGroup", optional) --
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
"torch.distributed" "ProcessGroup" (default:
"dist.group.WORLD" initialized by
"torch.distributed.init_process_group()").
* **parameters_as_bucket_view** (*bool**, **optional*) -- if
"True", parameters are packed into buckets to speed up
communication, and "param.data" fields point to bucket views
at different offsets; if "False", each individual parameter is
communicated separately, and each "params.data" stays intact
(default: "False").
* **overlap_with_ddp** (*bool**, **optional*) -- if "True",
"step()" is overlapped with "DistributedDataParallel" 's
gradient synchronization; this requires (1) either a
functional optimizer for the "optimizer_class" argument or one
with a functional equivalent and (2) registering a DDP
communication hook constructed from one of the functions in
"ddp_zero_hook.py"; parameters are packed into buckets
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
matching those in "DistributedDataParallel", meaning that the
"parameters_as_bucket_view" argument is ignored. If "False",
"step()" runs disjointly after the backward pass (per normal).
(default: "False")
* ****defaults** -- any trailing arguments, which are forwarded
to the local optimizer.
Example:
>>> import torch.nn as nn
>>> from torch.distributed.optim import ZeroRedundancyOptimizer
>>> from torch.nn.parallel import DistributedDataParallel as DDP
>>> model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)])
>>> ddp = DDP(model, device_ids=[rank])
>>> opt = ZeroRedundancyOptimizer(
>>> ddp.parameters(),
>>> optimizer_class=torch.optim.Adam,
>>> lr=0.01
>>> )
>>> ddp(inputs).sum().backward()
>>> opt.step()
Warning:
Currently, "ZeroRedundancyOptimizer" requires that all of the
passed-in parameters are the same dense type.
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
Warning:
If you pass "overlap_with_ddp=True", be wary of the following:
Given the way that overlapping "DistributedDataParallel" with
"ZeroRedundancyOptimizer" is currently implemented, the first two
or three training iterations do not perform parameter updates in
the optimizer step, depending on if "static_graph=False" or
"static_graph=True", respectively. This is because it needs
information about the gradient bucketing strategy used by
"DistributedDataParallel", which is not finalized until the
second forward pass if "static_graph=False" or until the third
forward pass if "static_graph=True". To adjust for this, one
option is to prepend dummy inputs.
Warning:
ZeroRedundancyOptimizer is experimental and subject to change.
add_param_group(param_group)
Add a parameter group to the "Optimizer" 's "param_groups".
This can be useful when fine tuning a pre-trained network, as
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- specifies the parameters to be
optimized and group-specific optimization options.
Warning:
This method handles updating the shards on all partitions but
needs to be called on all ranks. Calling this on a subset of
the ranks will cause the training to hang because
communication primitives are called depending on the managed
parameters and expect all the ranks to participate on the same
set of parameters.
consolidate_state_dict(to=0)
Consolidate a list of "state_dict" s (one per rank) on the
target rank.
Parameters:
**to** (*int*) -- the rank that receives the optimizer states
(default: 0).
Raises:
**RuntimeError** -- if "overlap_with_ddp=True" and this
method is called before this "ZeroRedundancyOptimizer"
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
instance has been fully initialized, which happens once
"DistributedDataParallel" gradient buckets have been
rebuilt.
Warning:
This needs to be called on all ranks.
join_hook(**kwargs)
Returns the ZeRO join hook, which enables training on uneven
inputs by shadowing the collective communications in the
optimizer step.
Gradients must be properly set before this hook is called.
Parameters:
**kwargs** (*dict*) -- a "dict" containing any keyword
arguments to modify the behavior of the join hook at run
time; all "Joinable" instances sharing the same join context
manager are forwarded the same value for "kwargs".
This hook does not support any keyword arguments; i.e. "kwargs"
is unused.
load_state_dict(state_dict)
Load the state pertaining to the given rank from the input
"state_dict", updating the local optimizer as needed.
Parameters:
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
state_dict (dict) -- optimizer state; should be an
object returned from a call to "state_dict()".
Raises:
**RuntimeError** -- if "overlap_with_ddp=True" and this
method is called before this "ZeroRedundancyOptimizer"
instance has been fully initialized, which happens once
"DistributedDataParallel" gradient buckets have been
rebuilt.
state_dict()
Returns the last global optimizer state known to this rank.
Raises:
**RuntimeError** -- if "overlap_with_ddp=True" and this
method is called before this "ZeroRedundancyOptimizer"
instance has been fully initialized, which happens once
"DistributedDataParallel" gradient buckets have been
rebuilt; or if this method is called without a preceding call
to "consolidate_state_dict()".
Return type:
*Dict*[str, *Any*]
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
step(closure=None, **kwargs)
Performs a single optimizer step and syncs parameters across all
ranks.
Parameters:
**closure** (*Callable*) -- a closure that re-evaluates the
model and returns the loss; optional for most optimizers.
Returns:
Optional loss depending on the underlying local optimizer.
Return type:
*Optional*[float]
| https://pytorch.org/docs/stable/distributed.optim.html | pytorch docs |
Distributed Autograd Design
This note will present the detailed design for distributed autograd
and walk through the internals of the same. Make sure you're familiar
with Autograd mechanics and the Distributed RPC Framework before
proceeding.
Background
Let's say you have two nodes and a very simple model partitioned
across two nodes. This can be implemented using
"torch.distributed.rpc" as follows:
import torch
import torch.distributed.rpc as rpc
def my_add(t1, t2):
return torch.add(t1, t2)
# On worker 0:
t1 = torch.rand((3, 3), requires_grad=True)
t2 = torch.rand((3, 3), requires_grad=True)
# Perform some computation remotely.
t3 = rpc.rpc_sync("worker1", my_add, args=(t1, t2))
# Perform some computation locally based on remote result.
t4 = torch.rand((3, 3), requires_grad=True)
t5 = torch.mul(t3, t4)
# Compute some loss.
loss = t5.sum()
The main motivation behind distributed autograd is to enable running a | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
backward pass on such distributed models with the "loss" that we've
computed and record appropriate gradients for all tensors that require
gradients.
Autograd recording during the forward pass
PyTorch builds the autograd graph during the forward pass and this
graph is used to execute the backward pass. For more details see How
autograd encodes the history.
For distributed autograd, we need to keep track of all RPCs during the
forward pass to ensure the backward pass is executed appropriately.
For this purpose, we attach "send" and "recv" functions to the
autograd graph when we perform an RPC.
The "send" function is attached to the source of the RPC and its
output edges point to the autograd function for the input tensors of
the RPC. The input for this function during the backward pass is
received from the destination as the output of the appropriate
"recv" function.
The "recv" function is attached to the destination of the RPC and
| https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
its inputs are retrieved from operators executed on the destination
using the input tensors. The output gradients of this function are
sent to the source node to the appropriate "send" function during
the backward pass.
Each "send-recv" pair is assigned a globally unique
"autograd_message_id" to uniquely identify the pair. This is useful
to look up the corresponding function on a remote node during the
backward pass.
For RRef, whenever we call "torch.distributed.rpc.RRef.to_here()" we
attach an appropriate "send-recv" pair for the tensors involved.
As an example, this is what the autograd graph for our example above
would look like (t5.sum() excluded for simplicity):
[image]
Distributed Autograd Context
Each forward and backward pass that uses distributed autograd is
assigned a unique "torch.distributed.autograd.context" and this
context has a globally unique "autograd_context_id". This context is
created on each node as needed. | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
This context serves the following purpose:
Multiple nodes running distributed backward passes might accumulate
gradients on the same tensor and as a result the ".grad" field of
the tensor would have gradients from a variety of distributed
backward passes before we have the opportunity to run the
optimizer. This is similar to calling "torch.autograd.backward()"
multiple times locally. In order to provide a way of separating out
the gradients for each backward pass, the gradients are accumulated
in the "torch.distributed.autograd.context" for each backward pass.
During the forward pass we store the "send" and "recv" functions
for each autograd pass in this context. This ensures we hold
references to the appropriate nodes in the autograd graph to keep
it alive. In addition to this, it is easy to look up the
appropriate "send" and "recv" functions during the backward pass.
| https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
In general we also use this context to store some metadata for each
distributed autograd pass.
From the user's perspective the autograd context is setup as follows:
import torch.distributed.autograd as dist_autograd
with dist_autograd.context() as context_id:
loss = model.forward()
dist_autograd.backward(context_id, loss)
It is important to note that your model's forward pass must be invoked
within the distributed autograd context manager, as a valid context is
needed in order to ensure that all "send" and "recv" functions are
stored properly to run the backward pass across all participating
nodes.
Distributed Backward Pass
In this section we outline the challenge of computing dependencies
accurately during a distributed backward pass and describe a couple of
algorithms (with tradeoffs) on how we can execute a distributed
backward pass.
Computing dependencies
Consider the following piece of code being run on a single machine: | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
import torch
a = torch.rand((3, 3), requires_grad=True)
b = torch.rand((3, 3), requires_grad=True)
c = torch.rand((3, 3), requires_grad=True)
d = a + b
e = b * c
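  # "e" is never used in the loss below, so the "mul" node will not need
  # to run during the backward pass.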
  d.sum().backward()
This is what the autograd graph for the code above would look like:
[image]
The first step the autograd engine performs as part of the backward
pass is computing the number of dependencies for each node in the
autograd graph. This helps the autograd engine know when a node in the
graph is ready for execution. The numbers in brackets for "add(1)" and
"mul(0)" denote the number of dependencies. As you can see, this means
during the backward pass the "add" node needs 1 input and the "mul"
node doesn't need any inputs (in other words doesn't need to be
executed). The local autograd engine computes these dependencies by
traversing the graph from the root nodes ("d" in this case).
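The traversal itself is simple; below is a minimal sketch of such a
dependency count over a toy graph, where the dictionary encoding and the
node names are illustrative rather than PyTorch's internal representation:
    from collections import defaultdict, deque

    def count_dependencies(root, edges):
        # "edges" maps each autograd node to the nodes its output edges point to.
        deps = defaultdict(int)
        seen, queue = {root}, deque([root])
        while queue:
            node = queue.popleft()
            for nxt in edges.get(node, ()):
                deps[nxt] += 1  # one more input expected during the backward pass
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return dict(deps)

    # Toy graph for d = a + b and e = b * c, starting the backward pass from "d":
    edges = {"d": ["add"], "add": ["accum_a", "accum_b"], "mul": ["accum_b", "accum_c"]}
    print(count_dependencies("d", edges))
    # -> {'add': 1, 'accum_a': 1, 'accum_b': 1}; "mul" is never reached (dependency 0)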
The fact that certain nodes in the autograd graph might not be
executed in the backward pass poses a challenge for distributed | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
autograd. Consider this piece of code which uses RPC.
import torch
import torch.distributed.rpc as rpc
a = torch.rand((3, 3), requires_grad=True)
b = torch.rand((3, 3), requires_grad=True)
c = torch.rand((3, 3), requires_grad=True)
d = rpc.rpc_sync("worker1", torch.add, args=(a, b))
e = rpc.rpc_sync("worker1", torch.mul, args=(b, c))
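  # "e" is never used to compute "loss", so its "send"/"recv" pair is not
  # actually needed during the backward pass.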
loss = d.sum()
The associated autograd graph for the code above would be:
[image]
Computing dependencies of this distributed autograd graph is much more
challenging and requires some overhead (either in terms of computation
or network communication).
For performance sensitive applications we can avoid a lot of overhead
by assuming every "send" and "recv" function are valid as part of the
backward pass (most applications don't perform RPCs that aren't used).
This simplifies the distributed autograd algorithm and is much more
efficient, but at the cost that the application needs to be aware of | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
the limitations. This algorithm is called the FAST mode algorithm and
is described in detail below.
In the general case it might not be necessary that every "send" and
"recv" function is valid as part of the backward pass. To address
this, we have proposed a SMART mode algorithm which is described in a
later section. Please note that currently, only the FAST mode
algorithm is implemented.
FAST mode algorithm
The key assumption of this algorithm is that each "send" function has
a dependency of 1 when we run a backward pass. In other words, we
assume we'll receive a gradient over RPC from another node.
The algorithm is as follows:
1. We start from the worker which has the roots for the backward pass
   (all roots must be local).
2. Lookup all the "send" functions for the current Distributed
   Autograd Context.
3. Compute dependencies locally starting from the provided roots and
   all the "send" functions we retrieved.
 | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
4. After computing dependencies, kick off the local autograd engine
   with the provided roots.
5. When the autograd engine executes the "recv" function, the "recv"
   function sends the input gradients via RPC to the appropriate
   worker. Each "recv" function knows the destination worker id since
   it is recorded as part of the forward pass. The "recv" function
   also sends over the "autograd_context_id" and "autograd_message_id"
   to the remote host.
6. When this request is received on the remote host, we use the
   "autograd_context_id" and "autograd_message_id" to look up the
   appropriate "send" function.
7. If this is the first time a worker has received a request for the
   given "autograd_context_id", it will compute dependencies locally
   as described in points 1-3 above.
8. The "send" function retrieved in 6. is then enqueued for execution
   on the local autograd engine for that worker.
 | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
9. Finally, instead of accumulating the gradients on the ".grad" field
   of the Tensor, we accumulate the gradients separately per
   Distributed Autograd Context. The gradients are stored in a
   "Dict[Tensor, Tensor]", which is basically a map from Tensor to its
   associated gradient and this map can be retrieved using the
   "get_gradients()" API.
As an example the complete code with distributed autograd would be as
follows:
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
def my_add(t1, t2):
return torch.add(t1, t2)
# On worker 0:
# Setup the autograd context. Computations that take
# part in the distributed backward pass must be within
# the distributed autograd context manager.
with dist_autograd.context() as context_id:
t1 = torch.rand((3, 3), requires_grad=True)
t2 = torch.rand((3, 3), requires_grad=True)
# Perform some computation remotely.
| https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
t3 = rpc.rpc_sync("worker1", my_add, args=(t1, t2))
# Perform some computation locally based on remote result.
t4 = torch.rand((3, 3), requires_grad=True)
t5 = torch.mul(t3, t4)
# Compute some loss.
loss = t5.sum()
# Run the backward pass.
dist_autograd.backward(context_id, [loss])
# Retrieve the gradients from the context.
dist_autograd.get_gradients(context_id)
The distributed autograd graph with dependencies would be as follows
(t5.sum() excluded for simplicity):
[image]
The FAST mode algorithm applied to the above example would be as
follows:
On "Worker 0" we start from the roots "loss" and "send1" to compute
dependencies. As a result "send1" is marked with a dependency of 1
and "mul" on "Worker 0" is marked with a dependency of 1.
Now, we kickoff the local autograd engine on "Worker 0". We first
execute the "mul" function, accumulate its output in the autograd
| https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
context as the gradient for "t4". Then, we execute "recv2" which
sends the gradients to "Worker 1".
Since this is the first time "Worker 1" has heard about this
backward pass, it starts dependency computation and marks the
dependencies for "send2", "add" and "recv1" appropriately.
Next, we enqueue "send2" on the local autograd engine of "Worker
1", which in turn executes "add" and "recv1".
When "recv1" is executed it sends the gradients over to "Worker 0".
Since "Worker 0" has already computed dependencies for this
backward pass, it just enqueues and executes "send1" locally.
Finally, gradients for "t1", "t2" and "t4" are accumulated in the
Distributed Autograd Context.
SMART mode algorithm
Full details of this algorithm are still in the works, but for the
general idea you can refer to Distributed Autograd Algorithm Smart
mode section in the RFC.
Distributed Optimizer
The "DistributedOptimizer" operates as follows: | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
The "DistributedOptimizer" operates as follows:
Takes a list of remote parameters ("RRef") to optimize. These could
also be local parameters wrapped within a local "RRef".
Takes a "Optimizer" class as the local optimizer to run on all
distinct "RRef" owners.
The distributed optimizer creates an instance of the local
"Optimizer" on each of the worker nodes and holds an "RRef" to
them.
When "torch.distributed.optim.DistributedOptimizer.step()" is
invoked, the distributed optimizer uses RPC to remotely execute all
the local optimizers on the appropriate remote workers. A
distributed autograd "context_id" must be provided as input to
"torch.distributed.optim.DistributedOptimizer.step()". This is used
by local optimizers to apply gradients stored in the corresponding
context.
If multiple concurrent distributed optimizers are updating the same
parameters on a worker, these updates are serialized via a lock.
Simple end to end example | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
=========================
Putting it all together, the following is a simple end to end example
using distributed autograd and the distributed optimizer. If the code
is placed into a file called "dist_autograd_simple.py", it can be run
with the command "MASTER_ADDR="localhost" MASTER_PORT=29500 python
dist_autograd_simple.py":
import torch
import torch.multiprocessing as mp
import torch.distributed.autograd as dist_autograd
from torch.distributed import rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer
def random_tensor():
return torch.rand((3, 3), requires_grad=True)
def _run_process(rank, dst_rank, world_size):
name = "worker{}".format(rank)
dst_name = "worker{}".format(dst_rank)
# Initialize RPC.
rpc.init_rpc(
name=name,
rank=rank,
world_size=world_size
)
# Use a distributed autograd context.
with dist_autograd.context() as context_id:
| https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
# Forward pass (create references on remote nodes).
rref1 = rpc.remote(dst_name, random_tensor)
rref2 = rpc.remote(dst_name, random_tensor)
loss = rref1.to_here() + rref2.to_here()
# Backward pass (run distributed autograd).
dist_autograd.backward(context_id, [loss.sum()])
# Build DistributedOptimizer.
dist_optim = DistributedOptimizer(
optim.SGD,
[rref1, rref2],
lr=0.05,
)
# Run the distributed optimizer step.
dist_optim.step(context_id)
def run_process(rank, world_size):
dst_rank = (rank + 1) % world_size
_run_process(rank, dst_rank, world_size)
rpc.shutdown()
if __name__ == '__main__':
# Run world_size workers
world_size = 2
mp.spawn(run_process, args=(world_size,), nprocs=world_size) | https://pytorch.org/docs/stable/rpc/distributed_autograd.html | pytorch docs |
torch.utils.mobile_optimizer
Warning:
This API is in beta and may change in the near future.
Torch mobile supports the
"torch.utils.mobile_optimizer.optimize_for_mobile" utility to run a
list of optimization passes on modules in eval mode. The method takes
the following parameters: a torch.jit.ScriptModule object, a
blocklisting optimization set, a preserved method list, and a backend.
For CPU Backend, by default, if optimization blocklist is None or
empty, "optimize_for_mobile" will run the following optimizations:
* Conv2D + BatchNorm fusion (blocklisting option
mobile_optimizer.MobileOptimizerType.CONV_BN_FUSION): This
optimization pass folds "Conv2d-BatchNorm2d" into "Conv2d" in
"forward" method of this module and all its submodules. The
weight and bias of the "Conv2d" are correspondingly updated.
* Insert and Fold prepacked ops (blocklisting option
mobile_optimizer.MobileOptimizerType.INSERT_FOLD_PREPACK_OPS):
| https://pytorch.org/docs/stable/mobile_optimizer.html | pytorch docs |
This optimization pass rewrites the graph to replace 2D
convolutions and linear ops with their prepacked counterparts.
Prepacked ops are stateful ops in that, they require some state
to be created, such as weight prepacking and use this state, i.e.
prepacked weights, during op execution. XNNPACK is one such
backend that provides prepacked ops, with kernels optimized for
mobile platforms (such as ARM CPUs). Prepacking of weight enables
efficient memory access and thus faster kernel execution. At the
moment "optimize_for_mobile" pass rewrites the graph to replace
"Conv2D/Linear" with 1) op that pre-packs weight for XNNPACK
conv2d/linear ops and 2) op that takes pre-packed weight and
activation as input and generates output activations. Since 1
needs to be done only once, we fold the weight pre-packing such
that it is done only once at model load time. This pass of the
"optimize_for_mobile" does 1 and 2 and then folds, i.e. removes, | https://pytorch.org/docs/stable/mobile_optimizer.html | pytorch docs |
weight pre-packing ops.
* ReLU/Hardtanh fusion: XNNPACK ops support fusion of clamping.
That is clamping of output activation is done as part of the
kernel, including for 2D convolution and linear op kernels. Thus
clamping effectively comes for free. Thus any op that can be
expressed as clamping op, such as "ReLU" or "hardtanh", can be
fused with previous "Conv2D" or "linear" op in XNNPACK. This pass
rewrites graph by finding "ReLU/hardtanh" ops that follow XNNPACK
"Conv2D/linear" ops, written by the previous pass, and fuses them
together.
* Dropout removal (blocklisting option
mobile_optimizer.MobileOptimizerType.REMOVE_DROPOUT): This
optimization pass removes "dropout" and "dropout_" nodes from
this module when training is false.
* Conv packed params hoisting (blocklisting option
mobile_optimizer.MobileOptimizerType.HOIST_CONV_PACKED_PARAMS):
This optimization pass moves convolution packed params to the
| https://pytorch.org/docs/stable/mobile_optimizer.html | pytorch docs |
root module, so that the convolution structs can be deleted. This
decreases model size without impacting numerics.
* Add/ReLU fusion (blocklisting option
mobile_optimizer.MobileOptimizerType.FUSE_ADD_RELU): This pass
finds instances of "relu" ops that follow "add" ops and fuses
them into a single "add_relu".
For Vulkan Backend, by default, if optimization blocklist is None or
empty, "optimize_for_mobile" will run the following optimization:
* Automatic GPU Transfer (blocklisting option mobile_optimize
r.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER): This
optimization pass rewrites the graph so that moving input and
output data to and from the GPU becomes part of the model.
"optimize_for_mobile" will also invoke freeze_module pass which only
preserves "forward" method. If you have other method to that needed to
be preserved, add them into the preserved method list and pass into
the method. | https://pytorch.org/docs/stable/mobile_optimizer.html | pytorch docs |
torch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU')
Parameters:
* script_module (ScriptModule) -- An instance of torch
script module with type of ScriptModule.
* **optimization_blocklist**
(*Optional**[**Set**[**_MobileOptimizerType**]**]*) -- A set
with type of MobileOptimizerType. When set is not passed,
optimization method will run all the optimizer pass;
otherwise, optimizer method will run the optimization pass
that is not included inside optimization_blocklist.
* **preserved_methods** (*Optional**[**List**]*) -- A list of
methods that needed to be preserved when freeze_module pass is
invoked
* **backend** (*str*) -- Device type to use for running the
result model ('CPU'(default), 'Vulkan' or 'Metal').
Returns:
A new optimized torch script module
Return type:
RecursiveScriptModule | https://pytorch.org/docs/stable/mobile_optimizer.html | pytorch docs |
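A minimal usage sketch follows; the module, the blocklist note and the
output file name are illustrative:
    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
            self.bn = torch.nn.BatchNorm2d(8)
            self.drop = torch.nn.Dropout(0.1)

        def forward(self, x):
            return self.drop(torch.relu(self.bn(self.conv(x))))

    scripted = torch.jit.script(TinyNet().eval())

    # Runs the default CPU passes (Conv2d+BatchNorm folding, prepacked-op
    # insertion, ReLU fusion, dropout removal, ...). To skip individual
    # passes, pass a set of MobileOptimizerType values as
    # optimization_blocklist instead.
    optimized = optimize_for_mobile(scripted)

    # Save in the lite-interpreter format commonly used on mobile.
    optimized._save_for_lite_interpreter("tiny_net.ptl")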
Quantization Backend Configuration
FX Graph Mode Quantization allows the user to configure various
quantization behaviors of an op in order to match the expectation of
their backend.
In the future, this document will contain a detailed spec of these
configurations.
Default values for native configurations
Below is the output of the configuration for quantization of ops in
x86 and qnnpack (PyTorch's default quantized backends).
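The dump below can be reproduced programmatically; a sketch, assuming the
"torch.ao.quantization.backend_config" APIs available in recent releases:
    from torch.ao.quantization.backend_config import get_native_backend_config

    # BackendConfig describing PyTorch's native quantized backends (x86/qnnpack).
    backend_config = get_native_backend_config()
    for pattern_config in backend_config.configs:
        # One entry per op pattern, mirroring the listing below.
        print(pattern_config.pattern)
        print(pattern_config.observation_type)
        print(pattern_config.dtype_configs)
Such a "BackendConfig" can also be passed to "prepare_fx" and
"convert_fx" through their "backend_config" argument.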
Results:
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'num_tensor_args_to_observation_type': {
0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
},
'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'num_tensor_args_to_observation_type': {
0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
},
'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'pattern': ,
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
},
],
'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
{
'pattern': (, ),
'dtype_configs': [
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'bias_dtype': torch.float32,
},
],
'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
'fused_module': ,
'fuser_method': ,
},
{ | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'pattern': (, ),
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'bias_dtype': torch.float32,
},
],
'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT, | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |
'root_module': ,
'reference_quantized_module_for_root': ,
'fuser_method': ,
},
{
'pattern': (, ),
'dtype_configs': [
{
'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),
'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None), | https://pytorch.org/docs/stable/quantization-backend-configuration.html | pytorch docs |