A maturing ecosystem

Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks when trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out of the box on hardware with multiple GPUs because they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.
https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/
Where to go from here?

For those who require the flexibility of serving PyTorch code directly, without the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters drifting apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling and deployment teams, and the TorchScript just-in-time compilation engine further reduces latency when serving models on GPU.
Scaling at large and the future

Finally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning, and updating models. High-level scalability must be achieved via techniques such as load balancing, horizontal scaling, and vertical scaling. If many models are involved, scale-to-zero quickly becomes a concern, as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a low-level inference server like Triton is the job of an orchestration framework. To that end, after gaining some first experience with Kubeflow, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we already crucially rely for large parts of our technology stack. This part of our journey has just begun.

Conclusion
Academia has long recognized that we are "standing on the shoulders of giants." As Artificial Intelligence matures from a scientific discipline into technology, the same spirit of collaboration that originally fueled its scientific foundation has carried over into the world of software engineering. Open-source enthusiasts join technology companies worldwide to build open software ecosystems that allow for new angles on solving some of the most pressing challenges of modern society. In this article, we've taken a look at Nuance's Dragon Ambient eXperience, an AI-powered, voice-enabled solution that automatically documents patient care, reducing healthcare providers' administrative burdens. Nuance DAX improves the patient-provider experience, reduces physician burnout, and improves financial outcomes. It brings back trust, joy, and humanity to the delivery of healthcare. Fairseq and PyTorch have proven to be an incredible platform for powering this AI technology, and in turn, Nuance has contributed back some of its innovations in this space. For further reading, we invite you to take a look at our recent ACL publication and the Nuance "What's Next" blog.
layout: blog_detail
title: 'Microsoft becomes maintainer of the Windows version of PyTorch'
author: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook

Along with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and will be responsible for the development and maintenance of the PyTorch build for Windows.
https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/
According to the latest Stack Overflow developer survey, Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). Jiachen Pu initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.
In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.
In addition to the native Windows experience, Microsoft released a preview adding GPU compute support to Windows Subsystem for Linux (WSL) 2 distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers who want to run any Linux-based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages that utilize NVIDIA CUDA for acceleration and only support Linux. This means WSL customers using the preview can run native Linux-based PyTorch applications on Windows unmodified, without the need for a traditional virtual machine or a dual-boot setup.
Getting started with PyTorch on Windows

It's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on pytorch.org.

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

Once you install PyTorch, learn more by visiting the PyTorch Tutorials and documentation.

Getting started with PyTorch on Windows Subsystem for Linux
The preview of NVIDIA CUDA support in WSL is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the command above for native Windows. If you prefer pip, use the command below.

pip install torch torchvision

You can use the same tutorials and documentation inside your WSL environment as on native Windows. This functionality is still in preview, so if you run into issues with WSL, please share feedback via the WSL GitHub repo, or for NVIDIA CUDA support, via NVIDIA's Community Forum for CUDA on WSL.
Feedback

If you find gaps in the PyTorch experience on Windows, please let us know on the PyTorch discussion forum or file an issue on GitHub using the #module: windows label.
layout: blog_detail
title: "Accelerated PyTorch 2 Transformers"
author: Michael Gschwind, Driss Guessous, Christian Puhrsch

The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API with the goal of making training and deployment of state-of-the-art Transformer models affordable. Following the successful release of "fastpath" inference execution ("Better Transformer"), this release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA).
https://pytorch.org/blog/accelerated-pytorch-2/
You can take advantage of the new fused SDPA kernels either by calling the new SDPA operator directly (as described in the SDPA tutorial) or transparently via its integration into the pre-existing PyTorch Transformer API. All features of the PyTorch Transformer API continue to work compatibly: many features are mapped to high-performance SDPA kernels, some features cannot be supported at higher performance (e.g., need_weights, as described below), and expanded high-performance support for other features is still under active development.
Similar to the "fastpath" architecture, the custom kernels are fully integrated into the PyTorch Transformer API, so using the native Transformer and MultiheadAttention APIs lets users transparently benefit from significant speed improvements. Unlike the "fastpath" architecture, the newly introduced custom kernels support many more use cases, including cross-attention, Transformer decoders, and training, in addition to the existing fastpath inference for fixed- and variable-sequence-length Transformer encoder and self-attention use cases.
To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported, with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In particular, the first custom kernels included with the PyTorch 2.0 release are the Flash Attention kernel (sdpa_flash, for 16-bit floating point training and inference on Nvidia GPUs with SM80+ architecture level) and the xFormers memory-efficient attention kernel (sdpa_mem_eff, for 16-bit and 32-bit floating point training and inference on a broad range of Nvidia GPUs). A general-purpose kernel sdpa_math provides an implementation when the custom kernels are not applicable.
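As a minimal illustration (not code from the original post), the snippet below calls the SDPA operator directly and lets the dispatch logic pick a kernel; the tensor shapes and half-precision dtype are chosen only because they make the fused kernels eligible on a recent NVIDIA GPU.

```python
import torch
import torch.nn.functional as F

# Shapes follow the (batch, num_heads, seq_len, head_dim) convention expected by SDPA.
# float16 on a CUDA device is what makes the flash / memory-efficient kernels eligible.
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# PyTorch transparently selects the highest-performance eligible kernel
# (sdpa_flash, sdpa_mem_eff, or the sdpa_math fallback).
out = F.scaled_dot_product_attention(q, k, v)
```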
As mentioned, custom kernels provide a wider range of support for execution scenarios. To ensure efficient execution (e.g., to use GPU tensor cores), model configurations need to meet a small number of requirements. This list of requirements will evolve over time, prospectively relaxing constraints limiting the usage of currently supported custom kernels, or providing additional kernels in the future. For the most up-to-date list of custom kernels and dispatch constraints, you can refer to sdp_utils.h. As of PyTorch 2.0, the existing fused SDPA kernels have the following constraints:

- Flash Attention only supports 16-bit floating point data types (float16 and bfloat16).
- The head dimension must be a multiple of 8 for 16-bit floating point numbers and a multiple of 4 for 32-bit floating point numbers. At present, the maximum head_dim supported by the Flash Attention custom kernel is 128.
- The CUDA architecture level must be sm5x or better for the mem_efficient kernel, and sm80 for Flash Attention.
- Flash Attention supports arbitrary dropout; in PyTorch 2.0 the mem_efficient kernel does not support dropout (i.e., dropout must be set to zero for this kernel to be selected in PyTorch 2.0).
- To support variable-sequence-length batches, all SDPA kernels support Nested Tensor inputs that combine input data and padding information using variable-sequence-length tensors for the forward pass. (You can find more information about Nested Tensors in the Nested Tensor tutorial.)
- You can specify both a key_padding_mask and an attn_mask by combining them before passing them to the SDPA operator. In particular, you can use the per-batch-element key padding mask of the nn.Transformer API to implement training for variable-sequence-length inputs in a batch.
- At present, the only attention mask supported by the fused kernel implementations is the causal mask commonly used for training. To specify the causal mask in custom kernels, it must be specified with the is_causal boolean, and attn_mask must be None.
- Support for Nested Tensors is still under development. Specifically, in PyTorch 2.0, only the sdpa_math kernel supports training with Nested Tensors. Also, PyTorch 2.0 does not support Nested Tensors as part of code being compiled with torch.compile().
- The SDPA operator does not support returning averaged attention weights because computing them defeats the optimizations that enabled fused kernels to execute more efficiently. The argument need_weights for torch.nn.MultiheadAttention's forward function defaults to True; in order to use the fused kernels, it needs to be set to need_weights=False, as in the sketch below.
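For instance, a minimal sketch of disabling need_weights on nn.MultiheadAttention so that the fused kernels remain eligible (the shapes here are arbitrary):

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True).cuda().half()
x = torch.randn(2, 128, 512, device="cuda", dtype=torch.float16)

# need_weights=False skips the averaged attention weights and keeps the fused path eligible.
out, _ = mha(x, x, x, need_weights=False)
```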
We find that an attention mask is rarely used in real-world applications, except for the causal mask during training. Consequently, we reduce kernel complexity and compute cost by building in the option to use a causal mask as the attention mask, selected via the is_causal parameter introduced in conjunction with the new SDPA operator. Providing the is_causal boolean flag for the frequently used causal mask also obviates the expensive and memory-intensive allocation of a causal mask, increasing training memory efficiency by allowing more memory to be used for large batch sizes, and reducing memory bandwidth and cache contention, which are both at a premium in GPU accelerators, by not needing to load an attention mask tensor.
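A small sketch of the difference, with arbitrary tensor shapes: passing is_causal=True replaces an explicitly materialized causal mask.

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Explicit causal mask: a (seq_len, seq_len) tensor must be allocated and loaded.
mask = torch.tril(torch.ones(1024, 1024, dtype=torch.bool, device="cuda"))
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

# is_causal=True expresses the same masking without allocating the mask tensor.
out_causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```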
If the constraints of none of the available custom kernels are met, training falls back to the default sdpa_math kernel, which implements the mathematical equations for scaled dot product attention using a sequence of PyTorch operators. This is the most general "catch-all" fallback kernel, ensuring successful training for all models. In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. This operator may be used to efficiently implement multi-head attention by combining it with in-projection and out-projection, as described in the SDPA tutorial.
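As a rough sketch of what that looks like (not the tutorial's code), the module below wraps scaled_dot_product_attention() with a packed in-projection and an out-projection:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMultiheadAttention(nn.Module):
    """Minimal multi-head attention built around the fused SDPA operator."""
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.in_proj = nn.Linear(embed_dim, 3 * embed_dim)  # packed Q, K, V projection
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x, is_causal=False):
        B, S, E = x.shape
        q, k, v = self.in_proj(x).chunk(3, dim=-1)
        # Reshape to (batch, num_heads, seq_len, head_dim) as expected by SDPA.
        q, k, v = (t.view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
        out = out.transpose(1, 2).reshape(B, S, E)
        return self.out_proj(out)
```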
In addition to adding custom kernels, Accelerated PyTorch 2 Transformers are integrated with PyTorch 2.0 compilation. To use your model while benefiting from the additional acceleration of PT2 compilation (for inference or training), pre-process the model with

model = torch.compile(model)

We have achieved major speedups for training transformer models, and in particular large language models, with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().

Figure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as nanoGPT shown here.
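A minimal sketch of the pattern, using a stock nn.TransformerEncoder purely as a stand-in for a real model:

```python
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
).cuda()

# One extra line layers PT2 compilation on top of the fused SDPA kernels.
model = torch.compile(model)
out = model(torch.randn(8, 256, 512, device="cuda"))
```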
Finally, because the custom kernels are much more memory efficient, try increasing the size of your training batches to achieve faster training. In addition to automatic kernel selection, a context manager enables developers to override the kernel selection algorithm; this is not required for day-to-day operation, but it lets developers debug their code and lets performance engineers override kernel selection (see the sketch below). The SDPA tutorial provides additional information on using the SDPA context manager. In addition to availability as part of the nn.Transformer API, Accelerated PyTorch 2 Transformer custom kernels are also available in conjunction with the torchtext, torchvision, and fairseq domain libraries with the launch of PyTorch 2.0.
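As an example of the kind of override described above, a sketch using the torch.backends.cuda.sdp_kernel context manager from PyTorch 2.0 (kernel availability depends on your GPU and dtypes):

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Restrict dispatch to the Flash Attention kernel, e.g. for debugging or benchmarking;
# normal operation should rely on the default selection logic.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
```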
layout: blog_detail
title: 'Mapillary Research: Seamless Scene Segmentation and In-Place Activated BatchNorm'
author: Lorenzo Porzi, Mapillary
redirect_from: /2019/07/23/mapillary-research.html

With roads in developed countries like the US changing up to 15% annually, Mapillary addresses a growing demand for keeping maps updated by combining images from any camera into a 3D visualization of the world. Mapillary's independent and collaborative approach enables anyone to collect, share, and use street-level images for improving maps, developing cities, and advancing the automotive industry. Today, people and organizations all over the world have contributed more than 600 million images toward Mapillary's mission of helping people understand the world's places through images and making this data available, with clients and partners including the World Bank, HERE, and Toyota Research Institute.
https://pytorch.org/blog/mapillary-research/
Mapillary's computer vision technology brings intelligence to maps in an unprecedented way, increasing our overall understanding of the world. Mapillary runs state-of-the-art semantic image analysis and image-based 3D modeling at scale and on all of its images. In this post we discuss two recent works from Mapillary Research and their implementations in PyTorch, Seamless Scene Segmentation [1] and In-Place Activated BatchNorm [2], which generate panoptic segmentation results and save up to 50% of GPU memory during training, respectively.

Seamless Scene Segmentation

Github project page: https://github.com/mapillary/seamseg/
The objective of Seamless Scene Segmentation is to predict a "panoptic" segmentation [3] from an image, that is, a complete labeling where each pixel is assigned a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the "Proposal Head" selects a set of candidate bounding boxes on the image (i.e. the proposals) that could contain an object; then, the "Mask Head" focuses on each proposal, predicting its class and segmentation mask. The output of this process is a "sparse" instance segmentation, covering only the parts of the image that contain countable objects (e.g. cars and pedestrians).
To complete our panoptic approach coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the “Semantic Head” predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, used losses and underlying math can be found at the project website for our CVPR 2019 paper [1].
While several versions of Mask R-CNN are publicly available, including an official implementation written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control and understanding of the whole pipeline. While doing so we encountered a couple of main stumbling blocks, and had to come up with some creative workarounds we are going to describe next.
Dealing with variable-sized tensors

Something that sets panoptic segmentation networks apart from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed-size tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, DistributedDataParallel expects its inputs to be batched, uniformly-sized tensors.
Our solution to these issues is to wrap each batch of variable-sized tensors in a PackedSequence. PackedSequence is little more than a glorified list class for tensors, tagging its contents as “related”, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn’t be much faster with batch-level parallelism, we simply iterate over the contents of the PackedSequence in a for loop. When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.
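To make the padding step concrete, here is an illustrative helper (not Mapillary's actual PackedSequence class) that concatenates variable-sized tensors with zero padding while keeping track of the original sizes:

```python
import torch

def pad_and_stack(tensors):
    """Stack variable-length 2D tensors into one zero-padded batch,
    remembering the original lengths so the padding can be stripped later."""
    sizes = [t.shape[0] for t in tensors]
    max_len = max(sizes)
    feat = tensors[0].shape[1]
    out = tensors[0].new_zeros((len(tensors), max_len, feat))
    for i, t in enumerate(tensors):
        out[i, : t.shape[0]] = t
    return out, sizes

batched, sizes = pad_and_stack([torch.randn(5, 16), torch.randn(3, 16), torch.randn(7, 16)])
```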
PackedSequences also help us deal with the second problem highlighted above. We slightly modify DistributedDataParallel to recognize PackedSequence inputs, splitting them into equally sized chunks and distributing their contents across the GPUs.

Asymmetric computational graphs with Distributed Data Parallel

Another, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are "optional", in the sense that they are not always computed for all images. As an example, when the Proposal head doesn't output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with DistributedDataParallel, this results in one of the replicas not computing gradients for the Mask head parameters.
Prior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a "fake forward pass" when no actual forward is required, i.e. something like this:

```python
def fake_forward():
    fake_input = get_correctly_shaped_fake_input()
    fake_output = mask_head(fake_input)
    fake_loss = fake_output.sum() * 0
    return fake_loss
```

Here, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-propagates zeros to all parameters. Starting from PyTorch 1.1 this workaround is no longer required: by setting find_unused_parameters=True in the constructor, DistributedDataParallel is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. This leads to some substantial simplifications in our code base!
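For reference, a minimal sketch of the PyTorch 1.1+ alternative; it assumes the script is launched with torchrun so the rendezvous environment variables are set, and uses a toy module as a stand-in for the real network:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(16, 4).cuda()  # stand-in for the real network

# find_unused_parameters=True lets DDP handle replicas in which some modules
# (e.g. the Mask head) produced no gradients, removing the fake forward pass.
model = DistributedDataParallel(model, device_ids=[local_rank],
                                find_unused_parameters=True)
```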
In-place Activated BatchNorm

Github project page: https://github.com/mapillary/inplace_abn/

Most researchers would probably agree that there are always constraints in terms of available GPU resources, regardless of whether their research lab has access to only a few or to multiple thousands of GPUs. At a time when we at Mapillary still worked with rather few, mostly 12GB Titan X-style prosumer GPUs, we were searching for a solution that virtually enhances the usable memory during training, so we would be able to obtain and push state-of-the-art results on dense labeling tasks like semantic segmentation. In-place Activated BatchNorm enables us to use up to 50% more memory (at little computational overhead) and is therefore deeply integrated in all our current projects (including Seamless Scene Segmentation described above).
When processing a BN-Activation-Convolution sequence in the forward pass, most deep learning frameworks (including PyTorch) need to store two big buffers, i.e. the input x of BN and the input z of Conv. This is necessary because the standard implementations of the backward passes of BN and Conv depend on their inputs to calculate the gradients. Using InPlace-ABN to replace the BN-Activation sequence, we can safely discard x, thus saving up to 50% GPU memory at training time. To achieve this, we rewrite the backward pass of BN in terms of its output y, which is in turn reconstructed from z by inverting the activation function.
The only limitation of InPlace-ABN is that it requires an invertible activation function, such as leaky ReLU or ELU. Apart from this, it can be used as a direct, drop-in replacement for BN+activation modules in any network. Our native CUDA implementation offers minimal computational overhead compared to PyTorch's standard BN, and is available for anyone to use from here: https://github.com/mapillary/inplace_abn/.
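A sketch of what the drop-in replacement looks like; the InPlaceABN constructor arguments shown here follow the inplace_abn project's README and are an assumption, so check the version you install:

```python
import torch.nn as nn
from inplace_abn import InPlaceABN  # provided by the inplace_abn package (assumed import path)

# Standard block: BN and the activation keep separate input buffers alive for backward.
standard_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.01),
)

# InPlace-ABN fuses BN + an invertible leaky ReLU and discards the BN input buffer.
inplace_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    InPlaceABN(64, activation="leaky_relu", activation_param=0.01),  # assumed signature
)
```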
Synchronized BN with asymmetric graphs and unbalanced batches

When training networks with synchronized SGD over multiple GPUs and/or multiple nodes, it's common practice to compute BatchNorm statistics separately on each device. However, in our experience working with semantic and panoptic segmentation networks, we found that accumulating mean and variance across all workers can bring a substantial boost in accuracy. This is particularly true when dealing with small batches, like in Seamless Scene Segmentation where we train with a single, super-high resolution image per GPU. InPlace-ABN supports synchronized operation over multiple GPUs and multiple nodes, and, since version 1.1, this can also be achieved in the standard PyTorch library using SyncBatchNorm. Compared to SyncBatchNorm, however, we support some additional functionality which is particularly important for Seamless Scene Segmentation: unbalanced batches and asymmetric graphs.
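For comparison, converting a model to the standard SyncBatchNorm mentioned above is a one-liner (a sketch; it requires an initialized process group to actually synchronize anything):

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3), nn.BatchNorm2d(64), nn.ReLU())

# Replace every BatchNorm layer with SyncBatchNorm so that mean/variance are
# accumulated across all workers during distributed training.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```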
As mentioned before, Mask R-CNN-like networks naturally give rise to variable-sized tensors. Thus, in InPlace-ABN we calculate synchronized statistics using a variant of the parallel algorithm described here, which properly takes into account the fact that each GPU can hold a different number of samples. PyTorch’s SyncBatchNorm is currently being revised to support this, and the improved functionality will be available in a future release.
Asymmetric graphs (in the sense mentioned above) are another complicating factor one has to deal with when creating a synchronized BatchNorm implementation. Luckily, PyTorch's distributed group functionality allows us to restrict distributed communication to a subset of workers, easily excluding those that are currently inactive. The only missing piece is that, in order to create a distributed group, each process needs to know the ids of all processes that will participate in the group, and even processes that are not part of the group need to call the new_group() function. In InPlace-ABN we handle it with a function like this:

```python
import torch
import torch.distributed as distributed

def active_group(active):
    """Initialize a distributed group where each process can independently
    decide whether to participate or not"""
    world_size = distributed.get_world_size()
    rank = distributed.get_rank()

    # Gather active status from all workers
    active = torch.tensor(rank if active else -1, dtype=torch.long,
                          device=torch.cuda.current_device())
    active_workers = torch.empty(world_size, dtype=torch.long,
                                 device=torch.cuda.current_device())
    distributed.all_gather(list(active_workers.unbind(0)), active)

    # Create group
    active_workers = [int(i) for i in active_workers.tolist() if i != -1]
    group = distributed.new_group(active_workers)
    return group
```

First, each process, including inactive ones, communicates its status to all others through an all_gather call; then it creates the distributed group with the shared information. In the actual implementation we also include a caching mechanism for groups, since new_group() is usually too expensive to call at each batch.

References

[1] Seamless Scene Segmentation; Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2019
[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018
[3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019
[4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017
[5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017
layout: blog_detail
title: 'Introduction to Quantization on PyTorch'
author: Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman

It's important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added support for model quantization using the familiar eager mode Python API.
https://pytorch.org/blog/introduction-to-quantization-on-pytorch/
Quantization leverages 8-bit integer (int8) instructions to reduce model size and run inference faster (reduced latency), and can be the difference between a model achieving quality-of-service goals or even fitting into the resources available on a mobile device. Even when resources aren't quite so constrained, it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3, and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library. This blog post provides an overview of the quantization support on PyTorch and its incorporation into the TorchVision domain library.

What is Quantization?

Quantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:
- 4x reduction in model size;
- 2-4x reduction in memory bandwidth;
- 2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speedup varies depending on the hardware, the runtime, and the model).

Quantization does not, however, come without additional cost. Fundamentally, quantization means introducing approximations, and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy. We designed quantization to fit into the PyTorch framework. This means that:
1. PyTorch has data types corresponding to quantized tensors, which share many of the features of tensors.
2. One can write kernels with quantized tensors, much like kernels for floating point tensors, to customize their implementation. PyTorch supports quantized modules for common operations as part of the torch.nn.quantized and torch.nn.quantized.dynamic name-space.
3. Quantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.
4. Mapping of floating point tensors to quantized tensors is customizable with user-defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.

We developed three techniques for quantizing neural networks in PyTorch as part of the quantization tooling in the torch.quantization name-space.
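As a small, self-contained illustration of the quantized tensor data types mentioned in point 1 above (values chosen arbitrarily):

```python
import torch

x = torch.randn(4, 8)

# Quantize a float tensor to int8 storage with a given scale and zero point,
# then recover an approximate float tensor from it.
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(xq.dtype)       # torch.qint8
print(xq.int_repr())  # the underlying int8 values
x_approx = xq.dequantize()
```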
The Three Modes of Quantization Supported in PyTorch starting version 1.3

Dynamic Quantization

The easiest method of quantization PyTorch supports is called dynamic quantization. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence "dynamic"). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.
PyTorch API: we have a simple API for dynamic quantization in PyTorch. torch.quantization.quantize_dynamic takes in a model, as well as a couple of other arguments, and produces a quantized model. Our end-to-end tutorial illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part that quantizes the BERT model is simply:

```python
import torch.quantization

quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```
See the documentation for the function here and an end-to-end example in our tutorials here and here.
Post-Training Static Quantization

One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting "observer" modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.
With this release, we're supporting several features that allow users to optimize their static quantization:
1. Observers: you can customize observer modules, which specify how statistics are collected prior to quantization, to try out more advanced methods to quantize your data.
2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation's numerical accuracy.
3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.

PyTorch API:
- To fuse modules, we have torch.quantization.fuse_modules
- Observers are inserted using torch.quantization.prepare
- Finally, quantization itself is done using torch.quantization.convert
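For example, a minimal sketch of operator fusion on a toy module (the module and its submodule names are hypothetical):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = SmallNet().eval()
# Fuse Conv + BN + ReLU into a single module before inserting observers.
m = torch.quantization.fuse_modules(m, [['conv', 'bn', 'relu']])
```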
We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model myModel are:

```python
# set quantization config for server (x86) deployment
myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# insert observers
torch.quantization.prepare(myModel, inplace=True)
# Calibrate the model and collect statistics

# convert to quantized version
torch.quantization.convert(myModel, inplace=True)
```
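The calibration step referenced by the comment in the middle of the snippet above (it runs between prepare and convert) might look like this sketch, where calibration_loader is a hypothetical loader of representative data:

```python
import torch

# Run representative batches through the prepared model so the inserted
# observers can record activation statistics before conversion.
myModel.eval()
with torch.no_grad():
    for images, _ in calibration_loader:
        myModel(images)
```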
Quantization Aware Training

Quantization-aware training (QAT) is the third method, and the one that typically results in the highest accuracy of the three. With QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.

PyTorch API: torch.quantization.prepare_qat inserts fake quantization modules into the model. Mimicking the static quantization API, torch.quantization.convert actually quantizes the model once training is complete.
For example, in the end-to-end example, we load in a pre-trained model as qat_model, then we simply perform quantization-aware training using:

```python
# specify quantization config for QAT
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')

# prepare QAT
torch.quantization.prepare_qat(qat_model, inplace=True)

# convert to quantized version, removing dropout, to check for accuracy on each epoch
quantized_model = torch.quantization.convert(qat_model.eval(), inplace=False)
```

Device and Operator Support

Quantization support is restricted to a subset of available operators, depending on the method being used. For a list of supported operators, please see the documentation at https://pytorch.org/docs/stable/quantization.html.
The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently, quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized) and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:

```python
import torch

backend = 'fbgemm'  # 'fbgemm' for server, 'qnnpack' for mobile
my_model.qconfig = torch.quantization.get_default_qconfig(backend)
# prepare and convert model
# Set the backend on which the quantized kernels need to be run
torch.backends.quantized.engine = backend
```
However, quantization-aware training occurs in full floating point and can run on either GPU or CPU. Quantization-aware training is typically only used in CNN models when post-training static or dynamic quantization doesn't yield sufficient accuracy. This can occur with models that are highly optimized to achieve small size (such as Mobilenet).

Integration in torchvision

We've also enabled quantization for some of the most popular models in torchvision: Googlenet, Inception, Resnet, ResNeXt, Mobilenet and Shufflenet. We have upstreamed these changes to torchvision in three forms:
1. Pre-trained quantized weights so that you can use them right away.
2. Quantization-ready model definitions so that you can do post-training quantization or quantization-aware training.
3. A script for doing quantization-aware training, which is available for any of these models though, as you will learn below, we only found it necessary for achieving accuracy with Mobilenet.

We also have a tutorial showing how you can do transfer learning with quantization using one of the torchvision models.

Choosing an approach

The choice of which scheme to use depends on multiple factors:
1. Model/Target requirements: Some models might be sensitive to quantization, requiring quantization-aware training.
2. Operator/Backend support: Some backends require fully quantized operators. Currently, operator coverage is limited and may restrict the choices listed in the table below.

The table below provides a guideline.
| Model Type | Preferred scheme | Why |
|---|---|---|
| LSTM/RNN | Dynamic Quantization | Throughput dominated by compute/memory bandwidth for weights |
| BERT/Transformer | Dynamic Quantization | Throughput dominated by compute/memory bandwidth for weights |
| CNN | Static Quantization | Throughput limited by memory bandwidth for activations |
| CNN | Quantization Aware Training | In the case where accuracy can't be achieved with static quantization |

Performance Results

Quantization provides a 4x reduction in the model size and a speedup of 2x to 3x compared to floating point implementations depending on the hardware platform and the model being benchmarked. Some sample results are:

| Model | Float Latency (ms) | Quantized Latency (ms) | Inference Performance Gain | Device | Notes |
|---|---|---|---|---|---|
| BERT | 581 | 313 | 1.8x | Xeon-D2191 (1.6GHz) | Batch size = 1, Maximum sequence length = 128, Single thread, x86-64, Dynamic quantization |
| Resnet-50 | 214 | 103 | 2x | Xeon-D2191 (1.6GHz) | Single thread, x86-64, Static quantization |
| Mobilenet-v2 | 97 | 17 | 5.7x | Samsung S9 | Static quantization, Floating point numbers are based on Caffe2 run-time and are not optimized |

Accuracy results

We also compared the accuracy of static quantized models with the floating point models on Imagenet. For dynamic quantization, we compared the F1 score of BERT on the GLUE benchmark for MRPC.

Computer Vision Model accuracy

| Model | Top-1 Accuracy (Float) | Top-1 Accuracy (Quantized) | Quantization scheme |
|---|---|---|---|
| Googlenet | 69.8 | 69.7 | Static post training quantization |
| Inception-v3 | 77.5 | 77.1 | Static post training quantization |
| ResNet-18 | 69.8 | 69.4 | Static post training quantization |
| Resnet-50 | 76.1 | 75.9 | Static post training quantization |
| ResNext-101 32x8d | 79.3 | 79 | Static post training quantization |
| Mobilenet-v2 | 71.9 | 71.6 | Quantization Aware Training |
| Shufflenet-v2 | 69.4 | 68.4 | Static post training quantization |

Speech and NLP Model accuracy

| Model | F1 (GLUE MRPC) Float | F1 (GLUE MRPC) Quantized | Quantization scheme |
|---|---|---|---|
| BERT | 0.902 | 0.895 | Dynamic quantization |
Conclusion

To get started on quantizing your models in PyTorch, start with the tutorials on the PyTorch website. If you are working with sequence data, start with dynamic quantization for LSTM or BERT. If you are working with image data, then we recommend starting with the transfer learning with quantization tutorial. Then you can explore static post-training quantization. If you find that the accuracy drop with post-training quantization is too high, then try quantization-aware training.
If you run into issues, you can get community help by posting at discuss.pytorch.org; use the quantization category for quantization-related issues. This post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing the quantization metrics included in this post.

Further reading:
- PyTorch quantization presentation at NeurIPS: https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx
- Quantized Tensors: https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor
- Quantization RFC on Github: https://github.com/pytorch/pytorch/issues/18318
layout: blog_detail
title: "Scaling PyTorch FSDP for Training Foundation Models on IBM Cloud"
author: Linsong Chu, Less Wright, Hamid Shojanazeri, Sophia Wen, Raghu Ganti, Geeta Chauhan
featured-img: "/assets/images/scaling-pytorch-fsdp-image1-IBM_scaling_FSDP_visual_new.png"

Large model training using a cloud native approach is of growing interest for many enterprises given the emergence and success of foundation models. Some AI practitioners may assume that the only way they can achieve high GPU utilization for distributed training jobs is to run them on HPC systems, such as those interconnected with InfiniBand, and may not consider Ethernet-connected systems. We demonstrate how the latest distributed training technique, Fully Sharded Data Parallel (FSDP) from PyTorch, successfully scales to models of size 10B+ parameters using commodity Ethernet networking in IBM Cloud.
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
PyTorch FSDP Scaling

As models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases, and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, DeepSpeed), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.
What is Fully Sharded Data Parallel?

FSDP extends the distributed data parallel (DDP) training approach by sharding model parameters, gradients, and optimizer states into K FSDP units, determined by using a wrapping policy. FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication. Resource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.
Performance efficiency is accomplished by overlapping all_gather communication calls for upcoming FSDP units with computation of the current FSDP unit. Once the current FSDP unit has been processed, the non-locally owned parameters are dropped, freeing memory for the upcoming FSDP units. This process achieves training efficiency by the overlap of computation and communication, while also reducing the peak memory needed by each GPU.
In what follows, we demonstrate how FSDP allows us to keep hundreds of GPUs highly utilized throughout a distributed training job, while running over standard Ethernet networking (system description towards the end of the blog). We chose the T5 architecture for our experiments and leveraged the code from the FSDP workshop. In each of our experiments, we start with a single node experiment to create a baseline and report the metric seconds/iteration normalized by the batch size as well as compute the teraflops based on the Megatron-LM paper (see Appendix for details of teraflop computation for T5). Our experiments aim to maximize the batch size (while avoiding cudaMalloc retries) to take full advantage of overlap in computation and communications, as discussed below. Scaling is defined as the ratio of the seconds/iteration normalized by batch size for N nodes versus a single node, representing how well we can utilize the additional GPUs as more nodes are added.
Experimental Results

Our first set of experiments using the T5-3B configuration (mixed precision with BF16, activation checkpointing, and transformer wrapping policy) demonstrated scaling efficiency of 95% as we increased the number of GPUs from 8 to 512 (1 to 64 nodes, respectively). We achieved these results without any modifications to the existing FSDP APIs. We observed that, for this scale, over an Ethernet based network, there is sufficient bandwidth to enable continuous overlap of communication and computation.
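A rough sketch of such a configuration with PyTorch's native FSDP APIs, assuming an already-initialized process group and a Hugging Face T5 model; the exact settings used in the experiments may differ, and activation checkpointing is omitted here:

```python
import functools
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import T5ForConditionalGeneration
from transformers.models.t5.modeling_t5 import T5Block

model = T5ForConditionalGeneration.from_pretrained("t5-3b")

# BF16 mixed precision for parameters, gradient reduction, and buffers.
bf16_policy = MixedPrecision(param_dtype=torch.bfloat16,
                             reduce_dtype=torch.bfloat16,
                             buffer_dtype=torch.bfloat16)

# Shard at the granularity of T5 transformer blocks (the "transformer wrapping policy").
t5_wrap_policy = functools.partial(transformer_auto_wrap_policy,
                                   transformer_layer_cls={T5Block})

model = FSDP(model,
             auto_wrap_policy=t5_wrap_policy,
             mixed_precision=bf16_policy,
             device_id=torch.cuda.current_device())
```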
However, when we increased the T5 model size to 11B, the scaling efficiency declined substantially to 20%. The PyTorch profiler shows that overlap of communication and computation was very limited. Further investigation into the network bandwidth usage revealed that the poor overlap is being caused by latency in the communication of individual packets and not the bandwidth required (in fact, our peak bandwidth utilization is 1/4th of that available). This led us to hypothesize that if we can increase the compute time by increasing the batch size, we can better overlap communication and computation. However, given we are already at maximum GPU memory allocation, we must identify opportunities to rebalance the memory allocation to allow for increase in batch size. We identified that the model state was being allocated a lot more memory than was needed. The primary function of these reservations is to have pre-reserved memory ready to aggressively send/receive tensors during the communication periods and too few buffers can result in increased wait times, whereas too many buffers result in smaller batch sizes.
To achieve better efficiency, the PyTorch distributed team introduced a new control knob, the rate_limiter which controls how much memory is allocated for send/receive of tensors, alleviating the memory pressure and providing room for higher batch sizes. In our case, the rate_limiter could increase the batch size from 20 to 50, thus increasing compute time by 2.5x and allowing for much greater overlap of communication and computation. With this fix, we increased the scaling efficiency to >75% (at 32 nodes)!
Continued investigation into the factors limiting scaling efficiency uncovered that the rate limiter was creating a recurring pipeline bubble of GPU idle time. This was due to the rate limiter using a block and flush approach for the allocation and release of each set of memory buffers. By waiting for the entire block to complete before initiating a new all_gather, the GPU was idling at the start of each block, while waiting for the new set of all_gather parameters to arrive. This bubble was alleviated by moving to a sliding window approach. Upon the completion of a single all_gather step and its computation (rather than a block of them), the memory is freed and the next all_gather is immediately issued in a much more uniform manner. This improvement eliminated the pipeline bubble and boosted the scaling efficiencies to >90% (at 32 nodes).
Figure 1: Scaling of T5-XL (3B) and T5-XXL (11B) from 1 node to 64 nodes

Figure 2: TFLOPs/sec usage for T5-XL (3B) and T5-XXL (11B) as we increase number of nodes

IBM Cloud AI System and Middleware

The AI infrastructure used for this work is a large-scale AI system on IBM Cloud consisting of nearly 200 nodes, each node with 8 NVIDIA A100 80GB cards, 96 vCPUs, and 1.2TB CPU RAM. The GPU cards within a node are connected via NVLink with a card-to-card bandwidth of 600GBps. Nodes are connected by 2 x 100Gbps Ethernet links with an SRIOV based TCP/IP stack, providing a usable bandwidth of 120Gbps.
The IBM Cloud AI System has been production-ready since May of 2022 and is configured with the OpenShift container platform to run AI workloads. We also built a software stack for production AI workloads that provides end-to-end tools for training workloads. The middleware leverages Ray for pre- and post-processing workloads and PyTorch for training of models. We also integrate a Kubernetes native scheduler, MCAD, that manages multiple jobs with job queuing, gang scheduling, prioritization, and quota management. A multi-NIC CNI discovers all available network interfaces and handles them as a single NIC pool, enabling optimized use of the network interfaces in Kubernetes. Finally, CodeFlare CLI supports a single pane for observability of the full stack using a desktop CLI (e.g., GPU utilization, application metrics like loss, gradient norm).
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
pytorch blogs
Figure 3: Foundation Model Middleware Stack

Conclusion and Future Work

In conclusion, we demonstrated how we can achieve remarkable scaling of FSDP APIs over non-InfiniBand networks. We identified the bottleneck that had limited scaling to less than 20% efficiency for 11B parameter model training. After identifying the issue, we were able to correct this with a new rate limiter control to ensure a more optimal balance of reserved memory and communication overlap relative to compute time. With this improvement, we were able to achieve 90% scaling efficiency (a 4.5x improvement) at 256 GPUs and 80% at 512 GPUs for training of the 11B parameter model. In addition, the 3B parameter model scales extremely well with 95% efficiency even as we increase the number of GPUs to 512.
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
pytorch blogs
This is a first in the industry to achieve such scaling efficiencies for up to 11B parameter models using Kubernetes with vanilla Ethernet and PyTorch native FSDP APIs. This improvement enables users to train huge models on a Hybrid Cloud platform in a cost-efficient and sustainable manner. We plan to continue investigating scaling with decoder-only models and increasing the size of these models to 100B+ parameters. From a system design perspective, we are exploring capabilities such as RoCE and GDR that can improve latencies of communications over Ethernet networks.

Acknowledgements

This blog was possible because of contributions from both the PyTorch Distributed and IBM Research teams. From the PyTorch Distributed team, we would like to thank Less Wright, Hamid Shojanazeri, Geeta Chauhan, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Chien-Chin Huang, and Bernard Nguyen.
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
pytorch blogs
From the IBM Research team, we would like to thank Linsong Chu, Sophia Wen, Lixiang (Eric) Luo, Marquita Ellis, Davis Wertheimer, Supriyo Chakraborty, Raghu Ganti, Mudhakar Srivatsa, Seetharami Seelam, Carlos Costa, Abhishek Malvankar, Diana Arroyo, Alaa Youssef, and Nick Mitchell.

Appendix

Teraflop computation

The T5-XXL (11B) architecture has two types of T5 blocks: an encoder block and a decoder block. We follow the approach of Megatron-LM, in which a matrix multiplication of an m×k matrix by a k×n matrix requires 2×m×k×n FLOPs. The encoder block consists of self-attention and feed forward layers, whereas the decoder block consists of self-attention, cross-attention, and feed forward layers.
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
pytorch blogs
The attention (both self and cross) block consists of a QKV projection, which requires 6Bsh² operations, an attention matrix computation requiring 2Bs²h operations, an attention over values which needs 2Bs²h computations, and the post-attention linear projection requires 2Bsh² operations. Finally, the feed forward layer requires 15Bsh² operations. The total for an encoder block is 23Bsh² + 4Bs²h, whereas for a decoder block, it comes to 31Bsh² + 8Bs²h. With a total of 24 encoder and 24 decoder blocks and 2 forward passes (as we discard the activations) and one backward pass (equivalent to two forward passes), the final FLOPs computation comes to be 96×(54Bsh² + 12Bs²h) + 6BshV. Here, B is the batch size per GPU, s is sequence length, h is hidden state size, and V is vocabulary size. We repeat a similar computation for T5-XL (3B) architecture, which is slightly different.
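As a quick sanity check of the formula above, the per-step FLOPs can be computed with a few lines of Python. This is a minimal sketch: the batch size, sequence length, hidden size, and vocabulary size below are illustrative placeholders, not the exact settings used in the experiments.

```python
def t5_xxl_flops_per_step(B, s, h, V):
    """FLOPs for one training step of T5-XXL (11B) per the formula above.

    The factor of 96 comes from 24 encoder/decoder block pairs times
    4 forward-pass equivalents (2 forward passes due to activation
    recomputation, plus a backward pass costing roughly 2 forward passes).
    """
    return 96 * (54 * B * s * h**2 + 12 * B * s**2 * h) + 6 * B * s * h * V

# Example (placeholder) values: per-GPU batch 50, sequence length 512,
# hidden size 4096, vocabulary size 32128.
flops = t5_xxl_flops_per_step(B=50, s=512, h=4096, V=32128)
print(f"{flops / 1e12:.1f} TFLOPs per step")
```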
https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/
pytorch blogs
layout: blog_detail
title: "Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16"
author: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)
featured-img: '/assets/images/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'

Overview

In recent years, the growing complexity of AI models has placed ever-increasing demands on hardware compute capability. Reduced-precision numeric formats have been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. With the same dynamic range as float32, bfloat16 doesn't require special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when running deep neural networks for both inference and training.
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
The 3rd Gen Intel® Xeon® Scalable processor (codenamed Cooper Lake) is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel® Advanced Vector Extensions-512 (Intel® AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is doubled over float32 on Cooper Lake. On the next generation of Intel® Xeon® Scalable Processors, bfloat16 compute throughput will be further enhanced through the Advanced Matrix Extensions (Intel® AMX) instruction set extension.
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier blog during the launch of Cooper Lake. In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x for bfloat16 over float32 on DLRM, ResNet-50 and ResNeXt-101-32x4d. In this blog, we will introduce the latest software enhancements for bfloat16 in PyTorch 1.12, which apply to a much broader scope of user scenarios and showcase even higher performance gains.
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
Native Level Optimization on Bfloat16

On the PyTorch CPU bfloat16 path, the compute-intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at the PyTorch native level. We have extended bfloat16 kernel-level optimizations to the majority of operators on dense tensors, applicable to both inference and training (sparse tensor bfloat16 support will be covered in future work), specifically:
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
- Bfloat16 vectorization: Bfloat16 is stored as an unsigned 16-bit integer, which requires it to be cast to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector is converted to two float32 vectors, processed accordingly and then converted back. For non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion is involved.
- Bfloat16 reduction: Reduction on bfloat16 data uses float32 as the accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.
- Channels Last optimization: For vision models, Channels Last is the preferable memory format over Channels First from a performance perspective. We have implemented fully optimized CPU kernels for all the commonly used CV modules on the channels last memory format, taking care of both float32 and bfloat16.
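As a minimal illustration of hitting these bfloat16 kernels, the snippet below runs a linear layer and a reduction on bfloat16 tensors; on CPUs with AVX512_BF16 or AMX the heavy lifting goes through the oneDNN-backed path, while on other hardware it still runs correctly, just without the speedup. This is a sketch for illustration, not part of the original benchmark code.

```python
import torch
import torch.nn as nn

# A compute-intensive op (linear) on bfloat16 inputs and weights.
linear = nn.Linear(64, 64).to(dtype=torch.bfloat16)
x = torch.randn(8, 64, dtype=torch.bfloat16)

y = linear(x)
print(y.dtype)  # torch.bfloat16

# A reduction on bfloat16 data: accumulation happens in float32 internally
# for numerical stability, but the result is returned as bfloat16.
print(x.sum().dtype)  # torch.bfloat16
```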
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
Run Bfloat16 with Auto Mixed Precision

To run a model in bfloat16, a user can either explicitly convert the data and model to bfloat16, for example:

```python
# with explicit conversion
input = input.to(dtype=torch.bfloat16)
model = model.to(dtype=torch.bfloat16)
```

or utilize the torch.amp (Automatic Mixed Precision) package. The autocast instance serves as a context manager or decorator that allows regions of your script to run in mixed precision, for example:

```python
# with AMP
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(input)
```

Generally, the explicit conversion approach and the AMP approach have similar performance. Even so, we recommend running bfloat16 models with AMP, because:

- Better user experience with automatic fallback: If your script includes operators that don't have bfloat16 support, autocast will implicitly convert them back to float32, while the explicitly converted model will give a runtime error.
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
- Mixed data types for activations and parameters: Unlike the explicit conversion, which converts all the model parameters to bfloat16, AMP mode runs in mixed data types. To be specific, input/output are kept in bfloat16 while parameters, e.g., weight/bias, are kept in float32. The mix of data types for activations and parameters helps improve performance while maintaining accuracy.

Performance Gains

We benchmarked inference performance of TorchVision models on an Intel® Xeon® Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has a 1.4x to 2.2x performance gain over float32. The performance boost of bfloat16 over float32 primarily comes from 3 aspects:
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
- The compute-intensive operators take advantage of the new bfloat16 native instruction VDPBF16PS, which doubles the hardware compute throughput.
- Bfloat16 has only half the memory footprint of float32, so memory-bandwidth-intensive operators can theoretically be up to twice as fast.
- On Channels Last, we intentionally keep the same parallelization scheme for all the memory-format-aware operators (this can't be done on Channels First), which increases the data locality when passing each layer's output to the next. Basically, it keeps the data closer to CPU cores: the data resides in cache anyway, and bfloat16 achieves a higher cache hit rate than float32 in such scenarios due to its smaller memory footprint.
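To tie these points together, below is a minimal inference sketch combining the Channels Last memory format with CPU autocast on a TorchVision model. It is a simplified illustration, not the exact benchmarking harness used to collect the numbers above.

```python
import torch
import torchvision

# Channels Last + bfloat16 autocast inference on a TorchVision model.
model = torchvision.models.resnet50(pretrained=True).eval()
model = model.to(memory_format=torch.channels_last)

x = torch.rand(2, 3, 224, 224).to(memory_format=torch.channels_last)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(x)
print(output.dtype)  # torch.bfloat16
```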
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
Conclusion & Future Work

In this blog, we introduced the recent software optimizations for bfloat16 introduced in PyTorch 1.12. Results on the 3rd Gen Intel® Xeon® Scalable processor show that bfloat16 has a 1.4x to 2.2x performance gain over float32 on TorchVision models. Further improvement is expected on the next generation of Intel® Xeon® Scalable Processors with AMX instruction support. Though the performance numbers for this blog were collected with TorchVision models, the benefit is broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!

Acknowledgement

The results presented in this blog are a joint effort of the Meta and Intel PyTorch teams. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU ecosystem.
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
Reference

- The bfloat16 numerical format
- https://pytorch.org/docs/master/amp.html#torch.autocast
- Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost's new BFloat16 capability
https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/
pytorch blogs
layout: blog_detail
title: "Announcing PyTorch Conference 2022"
author:
featured-img: "/assets/images/pytorch-conference-2022.png"

We are excited to announce that the PyTorch Conference returns in-person as a satellite event to NeurIPS (Neural Information Processing Systems) in New Orleans on Dec. 2nd.
https://pytorch.org/blog/announcing-pytorch-conference-2022/
pytorch blogs
We changed the name from PyTorch Developer Day to PyTorch Conference to signify the turning of a new chapter as we look to the future of PyTorch, encompassing the entire PyTorch community. This conference will bring together leading researchers, academics and developers from the Machine Learning (ML) and Deep Learning (DL) communities for a set of talks and a poster session, covering new software releases on PyTorch, use cases in academia and industry, as well as ML/DL development and production trends.

EVENT OVERVIEW

When: Dec 2nd, 2022 (In-Person and Virtual)

Where: New Orleans, Louisiana (USA) | Virtual option as well

SCHEDULE

All times in Central Standard.

8:00-9:00 am   Registration/Check in
9:00-11:20 am  Keynote & Technical Talks
11:30-1:00 pm  Lunch
1:00-3:00 pm   Poster Session & Breakouts
3:00-4:00 pm   Community/Partner Talks
4:00-5:00 pm   Panel Discussion
https://pytorch.org/blog/announcing-pytorch-conference-2022/
pytorch blogs
Agenda subject to change. All talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you'd like to apply to attend in person, please submit all requests here.

LINKS

- Submit Content for Consideration by Sept. 30th
- Livestream event page
- Apply for an invitation to the in-person event
https://pytorch.org/blog/announcing-pytorch-conference-2022/
pytorch blogs
layout: blog_detail
title: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'
author: Team PyTorch

Today, we are announcing updates to a number of PyTorch libraries, alongside the PyTorch 1.8 release. The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio, as well as a new version of TorchCSPRNG. These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage. Some highlights include:

* TorchVision - Added support for PyTorch Mobile including Detectron2Go (D2Go), auto-augmentation of data during training, on-the-fly type conversion, and AMP autocasting.
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
* TorchAudio - Major improvements to I/O, including defaulting to the sox_io backend and file-like object support. Added the Kaldi Pitch feature and support for a CMake-based build allowing TorchAudio to better support no-Python environments.
* TorchText - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.
* TorchCSPRNG - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.

Please note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement here.

TorchVision 0.9.0

[Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
We are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this new tutorial to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.

[Stable] New Mobile models for Classification, Object Detection and Semantic Segmentation

We have added support for the MobileNetV3 architecture and provided pre-trained weights for Classification, Object Detection and Segmentation. It is easy to get up and running with these models: just import and load them as you would any torchvision model:
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
```python
import torch
import torchvision

# Classification
x = torch.rand(1, 3, 224, 224)
m_classifier = torchvision.models.mobilenet_v3_large(pretrained=True)
m_classifier.eval()
predictions = m_classifier(x)

# Quantized Classification
x = torch.rand(1, 3, 224, 224)
m_classifier = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)
m_classifier.eval()
predictions = m_classifier(x)

# Object Detection: Highly Accurate High Resolution Mobile Model
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
m_detector = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
m_detector.eval()
predictions = m_detector(x)

# Semantic Segmentation: Highly Accurate Mobile Model
x = torch.rand(1, 3, 520, 520)
m_segmenter = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)
m_segmenter.eval()
predictions = m_segmenter(x)
```
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
These models are highly competitive with TorchVision's existing models on resource efficiency, speed, and accuracy. See our [release notes](https://github.com/pytorch/vision/releases) for detailed performance metrics.

### [Stable] AutoAugment

[AutoAugment](https://arxiv.org/pdf/1805.09501.pdf) is a common Data Augmentation technique that can increase the accuracy of Scene Classification models. Though the data augmentation policies are directly linked to the dataset they were trained on, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. We've implemented 3 policies learned on the following datasets: ImageNet, CIFAR10 and SVHN. These can be used standalone or mixed-and-matched with existing transforms:

```python
from torchvision import transforms

t = transforms.AutoAugment()
transformed = t(image)

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.AutoAugment(),
    transforms.ToTensor()])
```
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
Other New Features for TorchVision

- [Stable] All read and decode methods in the io.image package now support:
  - Palette, Grayscale Alpha and RGB Alpha image types during PNG decoding
  - On-the-fly conversion of the image from one type to the other during read
- [Stable] WiderFace dataset
- [Stable] Improved FasterRCNN speed and accuracy by introducing a score threshold on RPN
- [Stable] Modulation input for DeformConv2D
- [Stable] Option to write audio to a video file
- [Stable] Utility to draw bounding boxes
- [Beta] Autocast support in all Operators

Find the full TorchVision release notes here.

TorchAudio 0.8.0

I/O Improvements

We have continued our work from the previous release to improve TorchAudio's I/O support, including:
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
- [Stable] Changing the default backend to "sox_io" (for Linux/macOS), and updating the "soundfile" backend's interface to align with that of "sox_io". The legacy backend and interface are still accessible, though it is strongly discouraged to use them.
- [Stable] File-like object support in the "sox_io" backend, the "soundfile" backend and sox_effects.
- [Stable] New options to change the format, encoding, and bits_per_sample when saving.
- [Stable] Added GSM, HTK, AMB, AMR-NB and AMR-WB format support to the "sox_io" backend.
- [Beta] A new functional.apply_codec function which can degrade audio data by applying audio codecs supported by the "sox_io" backend in an in-memory fashion.

Here are some examples of features landed in this release:

```python
import io

import boto3
import requests
import torchaudio

# Load audio over HTTP
with requests.get(URL, stream=True) as response:
    waveform, sample_rate = torchaudio.load(response.raw)

# Save to a bytes buffer as 16-bit signed PCM
buffer_ = io.BytesIO()
torchaudio.save(
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
    buffer_, waveform, sample_rate,
    format="wav", encoding="PCM_S", bits_per_sample=16)

# Apply effects while loading audio from S3
client = boto3.client('s3')
response = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)
waveform, sample_rate = torchaudio.sox_effects.apply_effects_file(
    response['Body'],
    [["lowpass", "-1", "300"], ["rate", "8000"]])

# Apply GSM codec to Tensor
encoded = torchaudio.functional.apply_codec(
    waveform, sample_rate, format="gsm")
```

Check out the revamped audio preprocessing tutorial, Audio Manipulation with TorchAudio.
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
[Stable] Switch to CMake-based build

In the previous version of TorchAudio, CMake was used only to build third-party dependencies. Starting in 0.8.0, TorchAudio uses CMake to build its C++ extension as well. This opens the door to integrating TorchAudio in non-Python environments (such as C++ applications and mobile). We will continue working on adding example applications and mobile integrations.
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs
[Beta] Improved and New Audio Transforms

We have added two widely requested operators in this release: the SpectralCentroid transform and the Kaldi Pitch feature extraction (detailed in "A pitch extraction algorithm tuned for automatic speech recognition"). We've also exposed a normalization method to Mel transforms, and additional STFT arguments to Spectrogram. We would like to ask our community to continue to raise feature requests for core audio processing features like these!

Community Contributions
https://pytorch.org/blog/pytorch-1.8-new-library-releases/
pytorch blogs