Semantic Segmentation

In this section we will start by providing some benchmarks of the released pre-trained models. Then we will discuss how a MobileNetV3-Large backbone was combined with segmentation heads such as LR-ASPP, DeepLabV3 and FCN to perform Semantic Segmentation. We will also explain how the network was trained and propose a few optional optimization techniques for speed-critical applications.

Benchmarks

This is how to initialize the pre-trained models:

```python
lraspp = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True)
deeplabv3 = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)
```

Below are the detailed benchmarks between the new and selected existing models. As we can see, DeepLabV3 with a MobileNetV3-Large backbone is a viable replacement for FCN with ResNet50 in the majority of applications, as it achieves similar accuracy with an 8.5x speed-up. We also observe that the LR-ASPP network supersedes the equivalent FCN in all metrics:

{:.table.table-striped.table-bordered}
| Model | mIoU | Global Pixel Acc | Inference on CPU (sec) | # Params (M) |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| LR-ASPP MobileNetV3-Large | 57.9 | 91.2 | 0.3278 | 3.22 |
| DeepLabV3 MobileNetV3-Large | 60.3 | 91.2 | 0.5869 | 11.03 |
| FCN MobileNetV3-Large (not released) | 57.8 | 90.9 | 0.3702 | 5.05 |
| DeepLabV3 ResNet50 | 66.4 | 92.4 | 6.3531 | 39.64 |
| FCN ResNet50 | 60.5 | 91.4 | 5.0146 | 32.96 |

Implementation details

In this section we will discuss important implementation details of the tested segmentation heads. Note that all models described in this section use a dilated MobileNetV3-Large backbone.

LR-ASPP

The LR-ASPP is the Lite variant of the Reduced Atrous Spatial Pyramid Pooling model proposed by the authors of the MobileNetV3 paper. Unlike the other segmentation models in TorchVision, it does not make use of an auxiliary loss. Instead it uses low- and high-level features with output strides of 8 and 16 respectively.
Unlike the paper, where a 49x49 AveragePooling layer with variable strides is used, our implementation uses an AdaptiveAvgPool2d layer to process the global features. This is because the authors of the paper tailored the head to the Cityscapes dataset, while our focus is to provide a general-purpose implementation that can work on multiple datasets. Finally, our implementation always applies a bilinear interpolation before returning the output to ensure that the sizes of the input and output images match exactly.
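To make the structure of the head more concrete, here is a simplified sketch in the spirit of the description above; it is not a verbatim copy of the TorchVision class, and the channel sizes in the example call are illustrative:

```python
import torch
from torch import nn
from torch.nn import functional as F

class LRASPPHeadSketch(nn.Module):
    """Simplified LR-ASPP-style head: a 1x1 projection of the high-level features,
    gated by a global (AdaptiveAvgPool2d) branch, fused with the low-level features."""
    def __init__(self, low_channels, high_channels, num_classes, inter_channels=128):
        super().__init__()
        self.cbr = nn.Sequential(
            nn.Conv2d(high_channels, inter_channels, 1, bias=False),
            nn.BatchNorm2d(inter_channels),
            nn.ReLU(inplace=True),
        )
        self.scale = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global features, instead of the paper's 49x49 pooling
            nn.Conv2d(high_channels, inter_channels, 1, bias=False),
            nn.Sigmoid(),
        )
        self.low_classifier = nn.Conv2d(low_channels, num_classes, 1)
        self.high_classifier = nn.Conv2d(inter_channels, num_classes, 1)

    def forward(self, low, high):
        x = self.cbr(high) * self.scale(high)
        # bilinear interpolation so that the high-level branch matches the low-level size
        x = F.interpolate(x, size=low.shape[-2:], mode="bilinear", align_corners=False)
        return self.low_classifier(low) + self.high_classifier(x)

head = LRASPPHeadSketch(low_channels=40, high_channels=960, num_classes=21)
out = head(torch.rand(1, 40, 60, 60), torch.rand(1, 960, 30, 30))  # stride-8 and stride-16 features
print(out.shape)  # torch.Size([1, 21, 60, 60])
```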
DeepLabV3 & FCN

The combination of MobileNetV3 with DeepLabV3 and FCN follows closely the one used for the other models, and the stage estimation for these methods is identical to LR-ASPP. The only notable difference is that instead of using high- and low-level features, we attach the normal loss to the feature map with output stride 16 and an auxiliary loss to the feature map with output stride 8. Finally, we should note that the FCN version of the model was not released because it was completely superseded by the LR-ASPP in terms of both speed and accuracy. The pre-trained weights are still available and can be used with minimal changes to the code.
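For illustration, training with such an auxiliary loss might look like the following sketch; the 0.5 auxiliary weight and the cross-entropy criterion are assumptions, not the exact values used in our recipe:

```python
import torch
import torchvision

# aux_loss=True exposes the auxiliary classifier attached to the stride-8 features
model = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(num_classes=21, aux_loss=True)
criterion = torch.nn.CrossEntropyLoss(ignore_index=255)

images = torch.rand(2, 3, 520, 520)
targets = torch.randint(0, 21, (2, 520, 520))  # placeholder per-pixel labels

outputs = model(images)  # dict with "out" (main head) and "aux" (auxiliary head)
loss = criterion(outputs["out"], targets) + 0.5 * criterion(outputs["aux"], targets)
loss.backward()
```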
Training & Tuning process

We currently offer two MobileNetV3 pre-trained models capable of doing semantic segmentation: the LR-ASPP and the DeepLabV3. The backbones of the models were initialized with ImageNet weights and trained end-to-end. Both architectures were trained on the COCO dataset using the same scripts with similar hyper-parameters. Their details can be found in our references folder.
Normally, during inference the images are resized to 520 pixels. An optional speed optimization is to construct a Low-Res configuration of the model by using the High-Res pre-trained weights and reducing the inference resizing to 320 pixels. This will improve the CPU execution times by roughly 60% while sacrificing a couple of mIoU points. The detailed numbers of this optimization can be found in the table below:

{:.table.table-striped.table-bordered}
| Low-Res Configuration | mIoU Difference | Speed Improvement | mIoU | Global Pixel Acc | Inference on CPU (sec) |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| LR-ASPP MobileNetV3-Large | -2.1 | 65.26% | 55.8 | 90.3 | 0.1139 |
| DeepLabV3 MobileNetV3-Large | -3.8 | 63.86% | 56.5 | 90.3 | 0.2121 |
| FCN MobileNetV3-Large (not released) | -3.0 | 57.57% | 54.8 | 90.1 | 0.1571 |
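In practice, the Low-Res configuration amounts to resizing inputs to 320 pixels before feeding them to the High-Res pre-trained model. A rough sketch follows; the ImageNet normalization constants and the dummy input are assumptions standing in for a real preprocessing pipeline, and tensor-based transforms assume a recent torchvision:

```python
import torch
import torchvision
from torchvision import transforms

model = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(320),  # Low-Res configuration; the default recipe resizes to 520
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = torch.rand(3, 480, 640)  # stand-in for a real image tensor with values in [0, 1]
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    out = model(batch)["out"]  # (1, num_classes, H, W)
pred = out.argmax(1)           # per-pixel class indices
```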
Here are some examples of visualizing the predictions of the LR-ASPP MobileNetV3-Large model:

We hope that you found this article interesting. We are looking forward to your feedback to see if this is the type of content you would like us to publish more often. If the community finds such posts useful, we will be happy to publish more articles that cover the implementation details of newly introduced Machine Learning models.
layout: blog_detail
title: "PyTorch strengthens its governance by joining the Linux Foundation"
author: Soumith Chintala
featured-img: "/assets/images/pytorch-foundation-blog-image.jpg"
Today, I am proud to announce that PyTorch is moving to the Linux Foundation (LF) as a top-level project under the name PyTorch Foundation. The core mission of the Linux Foundation is the collaborative development of open source software. With a governing board of leaders from AMD, Amazon Web Services (AWS), Google Cloud, Meta, Microsoft Azure and NVIDIA, this model aligns with where PyTorch stands today and what it needs to travel forward. The creation of the PyTorch Foundation will ensure business decisions are being made in a transparent and open manner by a diverse group of members for years to come. The technical decisions remain in control of individual maintainers. I’m excited that the Linux Foundation will be our new home as they have notable experience supporting large open-source projects like ours such as Kubernetes and NodeJS. At this pivotal moment, I want to take a look back at how we started, share why we are moving, and what’s ahead.
This January, PyTorch celebrated its 5 year anniversary! I reflected on what it meant to me in this tweet thread, and this conversation with my colleagues Mike Schroepfer, Lin Qiao, and Yann LeCun. When we started PyTorch development in 2016, it was a collective effort by a band of people from the [Lua]Torch community with a big chunk of people and funding from Meta and individuals contributing from NVIDIA, Twitter and other entities.
Since 2017, PyTorch has grown far beyond our initial vision. With over 2,400 contributors who have built nearly 154,000 projects using PyTorch as a foundation, PyTorch has become one of the primary platforms for AI research, as well as commercial production use. We’ve seen its impact across industry and academia, from large companies to numerous university courses at Stanford, NYU, EPFL, Oxford, and other academic institutions. As a maintainer of PyTorch, the journey has been extremely fulfilling, with the impact of the project seen in various fields from self-driving cars to healthcare to aerospace.
As PyTorch grew, many companies have made foundational investments around it. While Meta remains the largest contributor to PyTorch, companies such as AMD, Amazon Web Services (AWS), Google Cloud, HuggingFace, Lightning AI, Microsoft Azure, Nvidia, and many others have made significant investments, including both technical contributions and community building efforts. They’ve established teams around PyTorch or filled significant voids within the PyTorch community and sent countless contributions to the PyTorch core and to the ecosystem around it — PyTorch is an important part of their future. With PyTorch continuing to grow as a multi-stakeholder project, it’s time to move to a broader open-source foundation.
The business governance of PyTorch was fairly unstructured for quite some time since launch – we operated like a scrappy startup. Team members at Meta spent the time and energy to structure this properly and organize PyTorch into an organizationally more healthy entity. Meta helped PyTorch with introducing many structures, such as Contributor License Agreements, Branding Guidelines, and Trademark registration. Keeping PyTorch’s organizational health up to check is essential and beneficial for the community. The next stage of our organizational progress is to support the interests of multiple stakeholders, hence moving to a foundation is good. We chose the Linux Foundation as it has vast organization experience hosting large multi-stakeholder open-source projects with the right balance of organizational structure and finding specific solutions for these projects.
Simultaneously, the technical governance of PyTorch has been a loosely structured community model of open-source development — A set of people maintaining PyTorch by area with their responsibility often tied to their individual identity rather than their employment. While we kept a codified list at the PyTorch - Maintainers page, the technical governance was not formalized nor codified. As PyTorch scales as a community, the next step is to structure and codify. The PyTorch Technical Governance now supports a hierarchical maintainer structure and clear outlining of processes around day to day work and escalations. This doesn’t change how we run things, but it does add discipline and openness that at our scale feels essential and timely.
It’s been an exciting journey since 2016. I am grateful for the experiences and people I’ve met along the way. PyTorch started with a small group of contributors which have grown and diversified over the years, all bringing in new ideas and innovations that would not have been possible without our community. We want to continue the open-source spirit – for the community and by the community. Thank you to our contributors, maintainers, users, supporters and new foundation members. We look forward to the next chapter of PyTorch with the PyTorch Foundation.
layout: blog_detail
title: "Fast Beam Search Decoding in PyTorch with TorchAudio and Flashlight Text"
author: Caroline Chen, Jacob Kahn (@jacob_d_kahn)
featured-img: "/assets/images/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text-6.png"

Beam search decoding with industry-leading speed from Flashlight Text (part of the Flashlight ML framework) is now available with official support in TorchAudio, bringing high-performance beam search and text utilities for speech and text applications built on top of PyTorch. The current integration supports CTC-style decoding, but it can be used for any modeling setting that outputs token-level probability distributions over time steps.
A brief beam search refresher

In speech and language settings, beam search is an efficient, greedy algorithm that can convert sequences of continuous values (i.e. probabilities or scores) into graphs or sequences (i.e. tokens, word-pieces, words) using optional constraints on valid sequences (i.e. a lexicon), optional external scoring (i.e. an LM which scores valid sequences), and other score adjustments for particular sequences. In the example that follows, we'll consider a token set of {ϵ, a, b}, where ϵ is a special token that we can imagine denotes a space between words or a pause in speech. Graphics here and below are taken from Awni Hannun's excellent distill.pub writeup on CTC and beam search.
With a greedy-like approach, beam search considers the next viable token given an existing sequence of tokens; in the example above, a, b, b is a valid sequence, but a, b, a is not. We rank each possible next token at each step of the beam search according to a scoring function s, which typically looks something like the following.
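One standard form, using the variables described below, is:

$$
s(\hat{y}) = \log P(\hat{y} \mid x) + \alpha \, \log P_{LM}(\hat{y}) + \beta \, |\hat{y}|
$$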
Where ŷ is a potential path/sequence of tokens, x is the input (P(ŷ|x) represents the model's predictions over time), and 𝛼 is a weight on the language model probability (P(y) the probability of the sequence under the language model). Some scoring functions add 𝜷 which adjusts a score based on the length of the predicted sequence |ŷ|. This particular scoring function is used in FAIR's prior work on end-to-end ASR, and there are many variations on scoring functions which can vary across application areas.
Given a particular sequence, to assess the next viable token in that sequence (perhaps constrained by a set of allowed words or sequences, such as a lexicon of words), the beam search algorithm scores the sequence with each candidate token added, and sorts token candidates based on those scores. For efficiency and since the number of paths is exponential in the token set size, the top-k highest-scoring candidates are kept — k represents the beam size. There are many other nuances with how beam search can progress: similar hypothesis sequences can be “merged”, for instance.
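As a deliberately simplified sketch of this procedure (ignoring CTC-specific details such as blank handling, hypothesis merging, and lexicon constraints), a toy beam search might look like:

```python
import math

def toy_beam_search(log_probs_per_step, beam_size=2):
    """Keep the beam_size highest-scoring hypotheses at each step.

    log_probs_per_step: list of dicts mapping token -> log-probability for that step.
    """
    beams = [((), 0.0)]  # (token sequence, cumulative log-score)
    for step in log_probs_per_step:
        candidates = [(seq + (tok,), score + lp)
                      for seq, score in beams
                      for tok, lp in step.items()]
        # keep only the top-k candidates, where k is the beam size
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

steps = [
    {"a": math.log(0.6), "b": math.log(0.3), "ϵ": math.log(0.1)},
    {"a": math.log(0.2), "b": math.log(0.7), "ϵ": math.log(0.1)},
]
print(toy_beam_search(steps, beam_size=2))  # top hypotheses: ('a', 'b') then ('b', 'b')
```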
The scoring function can be further augmented to up/down-weight token insertion or long or short words. Scoring with stronger external language models, while incurring computational cost, can also significantly improve performance; this is frequently referred to as LM fusion. There are many other knobs to tune for decoding — these are documented in TorchAudio’s documentation and explored further in TorchAudio’s ASR Inference tutorial. Since decoding is quite efficient, parameters can be easily swept and tuned.
Beam search has been used in ASR extensively over the years in far too many works to cite, and in strong, recent results and systems including wav2vec 2.0 and NVIDIA's NeMo.

Why beam search?

Beam search remains a fast competitor to heavier-weight decoding approaches such as the RNN-Transducer, which Google has invested in putting on-device and which has shown strong results on common benchmarks. Autoregressive text models at scale can benefit from beam search as well. Among other things, beam search gives:
- A flexible performance/latency tradeoff — by adjusting beam size and the external LM, users can sacrifice latency for accuracy or pay for more accurate results with a small latency cost. Decoding with no external LM can improve results at very little performance cost.
- Portability without retraining — existing neural models can benefit from multiple decoding setups and plug-and-play with external LMs without training or fine-tuning.
- A compelling complexity/accuracy tradeoff — adding beam search to an existing modeling pipeline incurs little additional complexity and can improve performance.
Performance Benchmarks

Today's most commonly-used beam search decoding libraries that support external language model integration include Kensho's pyctcdecode and NVIDIA's NeMo toolkit. We benchmark the TorchAudio + Flashlight decoder against them with a wav2vec 2.0 base model trained on 100 hours of audio, evaluated on LibriSpeech dev-other with the official KenLM 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. All computation was in-memory — KenLM memory mapping was disabled as it wasn't widely supported.
When benchmarking, we measure the time-to-WER (word error rate) — because of subtle differences in the implementation of decoding algorithms and the complex relationships between parameters and decoding speed, some hyperparameters differed across runs. To fairly assess performance, we first sweep for parameters that achieve a baseline WER, minimizing beam size if possible.

Decoding performance on LibriSpeech dev-other of a pretrained wav2vec 2.0 model. TorchAudio + Flashlight decoding outperforms by an order of magnitude at low WERs.
Time-to-WER results, deferring to smaller beam size, across decoders. The TorchAudio + Flashlight decoder scales far better with larger beam sizes and at lower WERs.

TorchAudio API and Usage

TorchAudio provides a Python API for CTC beam search decoding, with support for the following:

- lexicon and lexicon-free decoding
- KenLM n-gram language model integration
- character and word-piece decoding
- sample pretrained LibriSpeech KenLM models and corresponding lexicon and token files
- various customizable beam search parameters (beam size, pruning threshold, LM weight...)

To set up the decoder, use the factory function torchaudio.models.decoder.ctc_decoder:

```python
from torchaudio.models.decoder import ctc_decoder, download_pretrained_files

files = download_pretrained_files("librispeech-4-gram")

decoder = ctc_decoder(
    lexicon=files.lexicon,
    tokens=files.tokens,
    lm=files.lm,
    nbest=1,
    # ... additional optional customizable args ...
)
```

Given emissions of shape *(batch, time, num_tokens)*, the decoder will compute and return a List of batch Lists, each consisting of the nbest hypotheses corresponding to the emissions. Each hypothesis can be further broken down into tokens, words (if a lexicon is provided), score, and timesteps components.

```python
emissions = acoustic_model(waveforms)  # (B, T, N)
batch_hypotheses = decoder(emissions)  # List[List[CTCHypothesis]]

# transcript for a lexicon decoder
transcripts = [" ".join(hypo[0].words) for hypo in batch_hypotheses]

# transcript for a lexicon-free decoder, splitting by sil token
batch_tokens = [decoder.idxs_to_tokens(hypo[0].tokens) for hypo in batch_hypotheses]
transcripts = ["".join(tokens) for tokens in batch_tokens]
```

Please refer to the documentation for more API details, and the tutorial (ASR Inference Decoding) or sample inference script for more usage examples.

Upcoming Improvements

- Full NNLM support — decoding with large neural language models (e.g. transformers) remains somewhat unexplored at scale. Already supported in Flashlight, we plan to add support in TorchAudio, allowing users to use custom decoder-compatible LMs. Custom word-level language models are already available in the nightly TorchAudio build and are slated to be released in TorchAudio 0.13.
- Autoregressive/seq2seq decoding — Flashlight Text also supports sequence-to-sequence (seq2seq) decoding for autoregressive models, which we hope to add bindings for and add to TorchAudio and TorchText, with efficient GPU implementations as well.
- Better build support — to benefit from improvements in Flashlight Text, TorchAudio will directly submodule Flashlight Text to make upstreaming modifications and improvements easier. This is already in effect in the nightly TorchAudio build and is slated to be released in TorchAudio 0.13.

Citation

To cite the decoder, please use the following:

```bibtex
@inproceedings{kahn2022flashlight,
  title={Flashlight: Enabling innovation in tools for machine learning},
  author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},
  booktitle={International Conference on Machine Learning},
  pages={10557--10574},
  year={2022},
  organization={PMLR}
}
```

```bibtex
@inproceedings{yang2022torchaudio,
  title={Torchaudio: Building blocks for audio and speech processing},
  author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={6982--6986},
  year={2022},
  organization={IEEE}
}
```
layout: blog_detail
title: 'New PyTorch Library Releases in PyTorch 1.9, including TorchVision, TorchAudio, and more'
author: Team PyTorch

Today, we are announcing updates to a number of PyTorch libraries, alongside the PyTorch 1.9 release. The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio. These releases, along with the PyTorch 1.9 release, include a number of new features and improvements that will provide a broad set of updates for the PyTorch community. Some highlights include:

- TorchVision - Added new SSD and SSDLite models, quantized kernels for object detection, GPU Jpeg decoding, and iOS support. See release notes here.
- TorchAudio - Added wav2vec 2.0 model deployable in non-Python environments (including C++, Android, and iOS). Many performance improvements in lfilter, spectral operations, resampling. Added options for quality control in sampling (i.e. Kaiser window support). Initiated the migration of complex tensors operations. Improved autograd support. See release notes here.
- TorchText - Added a new high-performance Vocab module that provides common functional APIs for NLP workflows. See release notes here.

We'd like to thank the community for their support and work on this latest release.

Features in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in this blog post.

TorchVision 0.10

(Stable) Quantized kernels for object detection
The forward pass of the nms and roi_align operators now supports tensors with a quantized dtype, which can help lower the memory footprint of object detection models, particularly in mobile environments. For more details, refer to the documentation.

(Stable) Speed optimizations for Tensor transforms

The resize and flip transforms have been optimized, and their runtime improved by up to 5x on the CPU.

(Stable) Documentation improvements

Significant improvements were made to the documentation. In particular, a new gallery of examples is available. These examples visually illustrate how each transform acts on an image, and also properly document and illustrate the output of the segmentation models.
The example gallery will be extended in the future to provide more comprehensive examples and serve as a reference for common torchvision tasks. For more details, refer to the documentation.

(Beta) New models for detection

SSD and SSDlite are two popular object detection architectures that are efficient in terms of speed and provide good results for low resolution pictures. In this release, we provide implementations for the original SSD model with VGG16 backbone and for its mobile-friendly variant SSDlite with MobileNetV3-Large backbone. The models were pre-trained on COCO train2017 and can be used as follows:

```python
import torch
import torchvision

# Original SSD variant
x = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]
m_detector = torchvision.models.detection.ssd300_vgg16(pretrained=True)
m_detector.eval()
predictions = m_detector(x)
# Mobile-friendly SSDlite variant
x = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]
m_detector = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
m_detector.eval()
predictions = m_detector(x)
```

The following accuracies can be obtained on COCO val2017 (full results available in #3403 and #3757):

{:.table.table-striped.table-bordered}
| Model | mAP | mAP@50 | mAP@75 |
| ------------- | ------------- | ------------- | ------------- |
| SSD300 VGG16 | 25.1 | 41.5 | 26.2 |
| SSDlite320 MobileNetV3-Large | 21.3 | 34.3 | 22.1 |

For more details, refer to the documentation.
(Beta) JPEG decoding on the GPU

Decoding jpegs is now possible on GPUs with the use of nvjpeg, which should be readily available in your CUDA setup. The decoding time of a single image should be about 2 to 3 times faster than with libjpeg on CPU. While the resulting tensor will be stored on the GPU device, the input raw tensor still needs to reside on the host (CPU), because the first stages of the decoding process take place on the host:

from torchvision.io.image import read_file, decode_jpeg
data = read_file('path_to_image.jpg')  # raw data is on CPU
img = decode_jpeg(data, device='cuda')  # decoded image is on GPU

For more details, see the documentation.
(Beta) iOS support

TorchVision 0.10 now provides pre-compiled iOS binaries for its C++ operators, which means you can run Faster R-CNN and Mask R-CNN on iOS. An example app on how to build a program leveraging those ops can be found here.

TorchAudio 0.9.0

(Stable) Complex Tensor Migration

TorchAudio has functions that handle complex-valued tensors. These functions follow a convention to use an extra dimension to represent real and imaginary parts. In PyTorch 1.6, the native complex type was introduced. As its API is getting stable, torchaudio has started to migrate to the native complex type.
In this release, we added support for native complex tensors, and you can opt in to use them. Using the native complex types, we have verified that affected functions continue to support autograd and TorchScript; moreover, switching to native complex types improves their performance. For more details, refer to pytorch/audio#1337.
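To illustrate the difference between the two conventions, the pseudo-complex layout uses a trailing dimension of size 2, which maps onto PyTorch's native complex type roughly as follows (the tensor shape here is an arbitrary example):

```python
import torch

# Old convention: the last dimension of size 2 holds the (real, imaginary) parts.
spec_pseudo = torch.rand(257, 100, 2)

# Native complex type, introduced in PyTorch 1.6.
spec_complex = torch.view_as_complex(spec_pseudo.contiguous())
print(spec_complex.shape, spec_complex.dtype)  # torch.Size([257, 100]) torch.complex64
```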
(Stable) Filtering Improvement

In release 0.8, we added the C++ implementation of the core part of lfilter for CPU, which improved the performance. In this release, we optimized some internal operations of the CPU implementation for further performance improvement. We also added autograd support to both CPU and GPU. Now lfilter and all the biquad filters (biquad, band_biquad, bass_biquad, treble_biquad, allpass_biquad, lowpass_biquad, highpass_biquad, bandpass_biquad, equalizer_biquad and bandreject_biquad) benefit from the performance improvement and support autograd. We also moved the implementation of overdrive to C++ for performance improvement. For more details, refer to the documentation.
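As a small illustration of the new autograd support, gradients now flow through the filtering functions; the sample rate and cutoff below are placeholder values:

```python
import torch
import torchaudio.functional as F

waveform = torch.rand(1, 16000, requires_grad=True)  # one second of placeholder audio
filtered = F.lowpass_biquad(waveform, sample_rate=16000, cutoff_freq=3000.0)
filtered.sum().backward()  # gradients propagate back through the filter
print(waveform.grad.shape)  # torch.Size([1, 16000])
```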
(Stable) Improved Autograd Support

Along with the work of Complex Tensor Migration and Filtering Improvement, we also added autograd tests to transforms. lfilter, biquad and its variants, and most transforms are now guaranteed to support autograd. For more details, refer to the release note.

(Stable) Improved Windows Support

Torchaudio implements some operations in C++ for reasons such as performance and integration with third-party libraries. These C++ components were only available on Linux and macOS. In this release, we have added support for Windows. With this, the efficient filtering implementation mentioned above is also available on Windows. However, please note that not all the C++ components are available for Windows. The "sox_io" backend and torchaudio.functional.compute_kaldi_pitch are not supported.
(Stable) I/O Functions Migration

Since the 0.6 release, we have continuously improved I/O functionality. Specifically, in 0.8 we changed the default backend from "sox" to "sox_io" and applied the same switch to the API of the "soundfile" backend. The 0.9 release concludes this migration by removing the deprecated backends. For more details, please refer to #903.
(Beta) Wav2Vec2.0 Model

We have added the model architectures from Wav2Vec2.0. You can import fine-tuned model parameters published on fairseq and Hugging Face Hub. Our model definition supports TorchScript, and it is possible to deploy the model to non-Python environments, such as C++, Android and iOS. The following code snippet illustrates such a use case. Please check out our C++ example directory for the complete example. Currently, it is designed for running inference. If you would like more support for training, please file a feature request.

```python
# Import fine-tuned model from Hugging Face Hub
import transformers
from torchaudio.models.wav2vec2.utils import import_huggingface_model

original = transformers.Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
imported = import_huggingface_model(original)
```

```python
# Import fine-tuned model from fairseq
import fairseq
from torchaudio.models.wav2vec2.utils import import_fairseq_model

original, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(
    ["wav2vec_small_960h.pt"], arg_overrides={'data': "<data_dir>"})
imported = import_fairseq_model(original[0].w2v_encoder)
```

```python
# Build uninitialized model and load state dict
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchaudio.models import wav2vec2_base

model = wav2vec2_base(num_out=32)
model.load_state_dict(imported.state_dict())

# Quantize / script / optimize for mobile
quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
scripted_model = torch.jit.script(quantized_model)
optimized_model = optimize_for_mobile(scripted_model)
optimized_model.save("model_for_deployment.pt") ``` For more details, see the documentation. (Beta) Resampling Improvement In release 0.8, we vectorized the operation in torchaudio.compliance.kaldi.resample_waveform, which improved the performance of resample_waveform and torchaudio.transforms.Resample. In this release, we have further revised the way the resampling algorithm is implemented. We have: * Added Kaiser Window support for a wider range of resampling quality. * Added rolloff parameter for anti-aliasing control. * Added the mechanism to precompute the kernel and cache it in torchaudio.transforms.Resample for even faster operation. * Moved the implementation from torchaudio.compliance.kaldi.resample_waveform to torchaudio.functional.resample and deprecated torchaudio.compliance.kaldi.resample_waveform.
For more details, see the documentation.

(Prototype) RNN Transducer Loss

The RNN transducer loss is used in training RNN transducer models, which is a popular architecture for speech recognition tasks. The prototype loss in torchaudio currently supports autograd, torchscript, float16 and float32, and can also be run on both CPU and CUDA. For more details, please refer to the documentation.
TorchText 0.10.0

(Beta) New Vocab Module

In this release, we introduce a new Vocab module that replaces the current Vocab class. The new Vocab provides common functional APIs for NLP workflows. This module is backed by an efficient C++ implementation that reduces batch look-up time by up to ~85% (refer to the summary of #1248 and #1290 for further information on benchmarks), and provides support for TorchScript. We provide accompanying factory functions that can be used to build the Vocab object either through a python ordered dictionary or an Iterator that yields lists of tokens.

```python
# creating Vocab from text file
import io
from torchtext.vocab import build_vocab_from_iterator

# generator that yields list of tokens
def yield_tokens(file_path):
    with io.open(file_path, encoding='utf-8') as f:
        for line in f:
            yield line.strip().split()
# get Vocab object
vocab_obj = build_vocab_from_iterator(yield_tokens(file_path), specials=[""])

# creating Vocab through ordered dict
from torchtext.vocab import vocab
from collections import Counter, OrderedDict
counter = Counter(["a", "a", "b", "b", "b"])
sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
ordered_dict = OrderedDict(sorted_by_freq_tuples)
vocab_obj = vocab(ordered_dict)

# common API usage

# look-up index
vocab_obj["a"]

# batch look-up indices
vocab_obj.lookup_indices(["a", "b"])

# support forward API of PyTorch nn Modules
vocab_obj(["a", "b"])

# batch look-up tokens
vocab_obj.lookup_tokens([0, 1])

# set default index to return when token not found
vocab_obj.set_default_index(0)
vocab_obj["out_of_vocabulary"]  # prints 0
```

For more details, refer to the documentation.
Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Facebook, Twitter, Medium, YouTube or LinkedIn. Cheers! -Team PyTorch
layout: blog_detail
title: 'Everything You Need To Know About Torchvision's SSDlite Implementation'
author: Vasilis Vryniotis
featured-img: 'assets/images/mAP-of-SSD320-MobileNetV3-Large.png'

In the previous article, we've discussed how the SSD algorithm works, covered its implementation details and presented its training process. If you have not read the previous blog post, I encourage you to check it out before continuing.

In this part 2 of the series, we will focus on the mobile-friendly variant of SSD called SSDlite. Our plan is to first go through the main components of the algorithm, highlighting the parts that differ from the original SSD, then discuss how the released model was trained, and finally provide detailed benchmarks for all the new Object Detection models that we explored.
The SSDlite Network Architecture

SSDlite is an adaptation of SSD which was first briefly introduced in the MobileNetV2 paper and later reused in the MobileNetV3 paper. Because the main focus of the two papers was to introduce novel CNN architectures, most of the implementation details of SSDlite were not clarified. Our code follows all the details presented in the two papers and where necessary fills the gaps from the official implementation.

As noted before, SSD is a family of models because one can configure it with different backbones (such as VGG, MobileNetV3 etc.) and different Heads (such as using regular convolutions, separable convolutions etc.). Thus many of the SSD components remain the same in SSDlite. Below we discuss only those that are different.
Classification and Regression Heads

Following Section 6.2 of the MobileNetV2 paper, SSDlite replaces the regular convolutions used in the original Heads with separable convolutions. Consequently, our implementation introduces new heads that use 3x3 Depthwise convolutions and 1x1 projections. Since all other components of the SSD method remain the same, to create an SSDlite model our implementation initializes the SSDlite head and passes it directly to the SSD constructor.
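To make this concrete, a depthwise-separable prediction block of the kind described above might look like the following sketch; the function name and channel values are illustrative and not the actual TorchVision API:

```python
import torch
from torch import nn

def separable_prediction_block(in_channels, out_channels):
    """Illustrative SSDlite-style head block: 3x3 depthwise conv followed by a 1x1 projection."""
    return nn.Sequential(
        # 3x3 depthwise convolution (groups == in_channels)
        nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                  groups=in_channels, bias=False),
        nn.BatchNorm2d(in_channels),
        nn.ReLU6(inplace=True),
        # 1x1 projection producing the per-anchor predictions
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
    )

block = separable_prediction_block(672, 6 * 4)  # e.g. 6 anchors x 4 box coordinates
print(block(torch.rand(1, 672, 20, 20)).shape)  # torch.Size([1, 24, 20, 20])
```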
Backbone Feature Extractor

Our implementation introduces a new class for building MobileNet feature extractors. Following Section 6.3 of the MobileNetV3 paper, the backbone returns the output of the expansion layer of the Inverted Bottleneck block, which has an output stride of 16, and the output of the layer just before the pooling, which has an output stride of 32. Moreover, all extra blocks of the backbone are replaced with lightweight equivalents which use a 1x1 compression, a separable 3x3 convolution with stride 2 and a 1x1 expansion. Finally, to ensure that the heads have enough prediction power even when small width multipliers are used, the minimum depth size of all convolutions is controlled by the min_depth hyperparameter.
The SSDlite320 MobileNetV3-Large model

This section discusses the configuration of the provided SSDlite pre-trained model along with the training processes followed to replicate the paper results as closely as possible.

Training process

All of the hyperparameters and scripts used to train the model on the COCO dataset can be found in our references folder. Here we discuss the most notable details of the training process.
Tuned Hyperparameters

Though the papers don't provide any information on the hyperparameters used for training the models (such as regularization, learning rate and the batch size), the parameters listed in the configuration files on the official repo were good starting points, and using cross validation we adjusted them to their optimal values. All of the above gave us a significant boost over the baseline SSD configuration.
Data Augmentation

A key difference between SSDlite and SSD is that the backbone of the former has only a fraction of the weights of the latter. This is why in SSDlite the Data Augmentation focuses more on making the model robust to objects of variable sizes than on trying to avoid overfitting. Consequently, SSDlite uses only a subset of the SSD transformations and this way avoids over-regularizing the model.
LR Scheme

Due to the reliance on Data Augmentation to make the model robust to small and medium sized objects, we found it particularly beneficial for the training recipe to use a large number of epochs. More specifically, by using roughly 3x more epochs than SSD we are able to increase our precision by 4.2 mAP points, and by using a 6x multiplier we improve by 4.9 mAP. Increasing the epochs further seems to yield diminishing returns and makes the training too slow and impractical; nevertheless, based on the model configuration it seems that the authors of the paper used an equivalent 16x multiplier.

Weight Initialization & Input Scaling & ReLU6
A set of final optimizations that brought our implementation very close to the official one and helped us bridge the accuracy gap was training the backbone from scratch instead of initializing from ImageNet, adapting our weight initialization scheme, changing our Input Scaling and replacing all standard ReLUs added on the SSDlite heads with ReLU6. Note that since we trained the model from random weights, we additionally applied the speed optimization described on the paper of using a reduced tail on the backbone.
Implementation Differences
Comparing the above implementation with the one on the official repo, we’ve identified a few differences. Most of them are minor and they are related to how we initialize the weights (for example Normal initialization vs Truncated Normal), how we parameterize the LR Scheduling (for example smaller vs larger warmup rate, shorter vs longer training) etc. The biggest known difference lies in the way we compute the Classification loss. More specifically the implementation of SSDlite with MobileNetV3 backbone on the official repo doesn’t use the SSD’s Multibox loss but instead uses RetinaNet’s focal loss. This is a rather significant deviation from the paper and since TorchVision already offers a full implementation of RetinaNet, we decided to implement SSDlite using the normal Multi-box SSD loss.
Breakdown of key accuracy improvements

As discussed in previous articles, reproducing research papers and porting them to code is not a journey of monotonically increasing accuracies, especially in cases where the full training and implementation details are not known. Typically the process involves lots of backtracking, as one needs to identify those implementation details and parameters that have significant impact on the accuracy from those that don't. Below we try to visualize the most important iterations that improved our accuracy from the baseline:

{:.table.table-striped.table-bordered}
| Iteration | mAP |
| ------------- | ------------- |
| Baseline with "SSD-style" Hyperparams | 10.6 |
| + Tuned Hyperparams | 14.2 |
| + SSDlite Data Augmentation | 15.2 |
| + 3x LR Scheme | 19.4 |
| + 6x LR Scheme | 20.1 |
| + Weight Initialization & Input Scaling & ReLU6 | 21.3 |
The order of optimizations presented above is accurate, though a bit idealized in some cases. For example, though different schedulers were tested during the Hyperparameter tuning phase, none of them provided significant improvements, and thus we maintained the MultiStepLR which was used in the baseline. Nevertheless, while later experimenting with different LR Schemes, we found it beneficial to switch to CosineAnnealingLR, as it required less configuration. Consequently, we believe that the main takeaway from the above summary should be that even when starting with a correct implementation and a set of optimal hyperparams from a model of the same family, there are always accuracy points to be found by optimizing the training recipe and tuning the implementation. Admittedly the above is a rather extreme case where the accuracy doubled, but still in many cases there is a large number of optimizations that can help us push the accuracy significantly.
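For reference, switching the scheduler is a one-line change in the training script; the learning rate, momentum, weight decay and epoch count below are placeholders rather than the recipe's actual values:

```python
import torch
from torch import nn, optim

model = nn.Conv2d(3, 16, 3)  # stand-in for the detection model
optimizer = optim.SGD(model.parameters(), lr=0.015, momentum=0.9, weight_decay=4e-5)

# CosineAnnealingLR only needs the total number of epochs, unlike MultiStepLR,
# which also requires choosing the milestone epochs and the decay factor.
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=600)

for epoch in range(600):
    # ... one training epoch would go here ...
    scheduler.step()
```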
Benchmarks

Here is how to initialize the two pre-trained models:

ssdlite = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
ssd = torchvision.models.detection.ssd300_vgg16(pretrained=True)

Below are the benchmarks between the new and selected previous detection models:

{:.table.table-striped.table-bordered}
| Model | mAP | Inference on CPU (sec) | # Params (M) |
| ------------- | ------------- | ------------- | ------------- |
| SSDlite320 MobileNetV3-Large | 21.3 | 0.0911 | 3.44 |
| SSD300 VGG16 | 25.1 | 0.8303 | 35.64 |
| SSD512 VGG16 (not released) | 28.8 | 2.2494 | 37.08 |
| SSD512 ResNet50 (not released) | 30.2 | 1.1137 | 42.70 |
| Faster R-CNN MobileNetV3-Large 320 FPN (Low-Res) | 22.8 | 0.1679 | 19.39 |
| Faster R-CNN MobileNetV3-Large FPN (High-Res) | 32.8 | 0.8409 | 19.39 |
As we can see, the SSDlite320 MobileNetV3-Large model is by far the fastest and smallest model, and thus it's an excellent candidate for real-world mobile applications. Though its accuracy is lower than the pre-trained low-resolution Faster R-CNN equivalent, the SSDlite framework is adaptable and one can boost its accuracy by introducing heavier heads with more convolutions.

On the other hand, the SSD300 VGG16 model is rather slow and less accurate. This is mainly because of its VGG16 backbone. Though extremely important and influential, the VGG architecture is nowadays quite outdated. Thus, though the specific model has historical and research value and is therefore included in TorchVision, we recommend that users who want high-resolution detectors for real-world applications either combine SSD with alternative backbones (see this example on how to create one) or use one of the Faster R-CNN pre-trained models.
We hope you enjoyed the 2nd and final part of the SSD series. We are looking forward to your feedback.
layout: blog_detail
title: 'Announcing PyTorch Annual Hackathon 2021'
author: Team PyTorch
featured-img: 'assets/images/social_hackathon21.png'

We're excited to announce the PyTorch Annual Hackathon 2021! This year, we're looking to support the community in creating innovative PyTorch tools, libraries, and applications. 2021 is the third year we're hosting this Hackathon, and we welcome you to join the PyTorch community and put your machine learning skills into action. Submissions start on September 8 and end on November 3. Good luck to everyone!

Submission Categories

You can enter your PyTorch projects into three categories:
PyTorch Responsible AI Development Tools & Libraries - Build an AI development tool or library that helps develop AI models and applications responsibly. These tools, libraries, and apps need to support a researcher or developer to factor in fairness, security, and privacy throughout the entire machine learning development process of data gathering, model training, model validation, inferences, monitoring, and more. Web and Mobile Applications Powered by PyTorch - Build an application with the web, mobile interface, and/or embedded device powered by PyTorch so the end users can interact with it. The submission must be built on PyTorch or use PyTorch-based libraries such as torchvision, torchtext, and fast.ai.
PyTorch Developer Tools & Libraries - Build a creative, useful, and well-implemented tool or library for improving the productivity and efficiency of PyTorch researchers and developers. The submission must be a machine learning algorithm, model, or application built using PyTorch or PyTorch-based libraries.

Prizes

Submissions will be judged on the idea's quality, originality, implementation, and potential impact.

First-Place Winners in each category of the Hackathon will receive $5,000 in cash, along with a 30-minute call with the PyTorch development team.

Second-Place Winners will receive $3,000.

Third-Place Winners will receive $2,000.

All winners will also receive the opportunity to create blog posts that will be featured throughout PyTorch channels as well as an exclusive Github badge. Honorable Mentions will also be awarded to the three highest-scoring entries in each category and will receive $1,000 each.
Cloud Computing Credits

Request $100 in credits from Amazon Web Services or Google Cloud for your computing costs. Please allow 3 business days for your request to be reviewed. Credits will be provided to verified registrants until the supplies run out. For more information, see https://pytorch2021.devpost.com/details/sponsors.

2020 Winning Projects

DeMask won first place in the PyTorch Developer Tools category. Built using Asteroid, a PyTorch-based audio source separation toolkit, DeMask is an end-to-end model for enhancing speech while wearing face masks.

Q&Aid won first place in the Web/Mobile Applications Powered by PyTorch category. Backed by PyTorch core algorithms and models, Q&Aid is a conceptual health care chatbot aimed at making health care diagnoses and facilitating communication between patients and doctors.
FairTorch won first place in the PyTorch Responsible AI Development Tools category. FairTorch is a PyTorch fairness library that lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code.

How to Join

If you're interested in joining this year's PyTorch Hackathon, register at http://pytorch2021.devpost.com.
layout: blog_detail
title: "Accelerated Generative Diffusion Models with PyTorch 2"
author: Grigory Sizov, Michael Gschwind, Hamid Shojanazeri, Driss Guessous, Daniel Haziza, Christian Puhrsch

TL;DR: PyTorch 2.0 nightly offers out-of-the-box performance improvement for Generative Diffusion models by using the new torch.compile() compiler and optimized implementations of Multihead Attention integrated with PyTorch 2.

Introduction

A large part of the recent progress in Generative AI came from denoising diffusion models, which allow producing high quality images and videos from text prompts. This family includes Imagen, DALLE, Latent Diffusion, and others. However, all models in this family share a common drawback: generation is rather slow, due to the iterative nature of the sampling process by which the images are produced. This makes it important to optimize the code running inside the sampling loop.
We took an open source implementation of a popular text-to-image diffusion model as a starting point and accelerated its generation using two optimizations available in PyTorch 2: compilation and fast attention implementation. Together with a few minor memory processing improvements in the code these optimizations give up to 49% inference speedup relative to the original implementation without xFormers, and 39% inference speedup relative to using the original code with xFormers (excluding the compilation time), depending on the GPU architecture and batch size. Importantly, the speedup comes without a need to install xFormers or any other extra dependencies.
The table below shows the improvement in runtime between the original implementation with xFormers installed and our optimized version with PyTorch-integrated memory efficient attention (originally developed for and released in the xFormers library) and PyTorch compilation. The compilation time is excluded.

Runtime improvement in % compared to original+xFormers. See the absolute runtime numbers in the section "Benchmarking setup and results summary".

| GPU | Batch size 1 | Batch size 2 | Batch size 4 |
| ------------- | ------------- | ------------- | ------------- |
| P100 (no compilation) | -3.8 | 0.44 | 5.47 |
| T4 | 2.12 | 10.51 | 14.2 |
| A10 | -2.34 | 8.99 | 10.57 |
| V100 | 18.63 | 6.39 | 10.43 |
| A100 | 38.5 | 20.33 | 12.17 |

One can notice the following:

- The improvements are significant for powerful GPUs like A100 and V100. For those GPUs the improvement is most pronounced for batch size 1.
- For less powerful GPUs we observe smaller speedups (or in two cases slight regressions). The batch size trend is reversed here: improvement is larger for larger batches.

In the following sections we describe the applied optimizations and provide detailed benchmarking data, comparing the generation time with various optimization features on/off.
Specifically, we benchmark 5 configurations, and the plots below compare their absolute performance for different GPUs and batch sizes. For definitions of these configurations see the section "Benchmarking setup and results".
Optimizations

Here we'll go into more detail about the optimizations introduced into the model code. These optimizations rely on features of PyTorch 2.0 which has been released recently.

Optimized Attention

One part of the code which we optimized is the scaled dot-product attention. Attention is known to be a heavy operation: a naive implementation materializes the attention matrix, leading to time and memory complexity quadratic in sequence length. It is common for diffusion models to use attention (CrossAttention) as part of Transformer blocks in multiple parts of the U-Net. Since the U-Net runs at every sampling step, this becomes a critical point to optimize. Instead of a custom attention implementation, one can use torch.nn.MultiheadAttention, which in PyTorch 2 has an optimized attention implementation integrated into it. This optimization schematically boils down to the following pseudocode:

```
class CrossAttention(nn.Module):
    def __init__(self, ...):
        # Create matrices: Q, K, V, out_proj
        ...
    def forward(self, x, context=None, mask=None):
        # Compute out = SoftMax(Q*K/sqrt(d))V
        # Return out_proj(out)
        ...
```

gets replaced with

```
class CrossAttention(nn.Module):
    def __init__(self, ...):
        self.mha = nn.MultiheadAttention(...)
    def forward(self, x, context):
        return self.mha(x, context, context)
```
The optimized implementation of attention was already available in PyTorch 1.13 (see here) and widely adopted (see e.g. the HuggingFace transformers library example). In particular, it integrates memory-efficient attention from the xFormers library and flash attention from https://arxiv.org/abs/2205.14135. PyTorch 2.0 expands this to additional attention functions such as cross attention and custom kernels for further acceleration, making it applicable to diffusion models.
Flash attention is available on GPUs with compute capability SM 7.5 or SM 8.x - for example, on T4, A10, and A100, which are included in our benchmark (you can check compute capability of each NVIDIA GPU here). However, in our tests on A100 the memory efficient attention performed better than flash attention for the particular case of diffusion models, due to the small number of attention heads and small batch size. PyTorch understands this and in this case chooses memory efficient attention over flash attention when both are available (see the logic here). For full control over the attention backends (memory-efficient attention, flash attention, “vanilla math”, or any future ones), power users can enable and disable them manually with the help of the context manager torch.backends.cuda.sdp_kernel.
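As an illustration (the tensor shapes below are arbitrary, and a CUDA-capable GPU with PyTorch 2.0 is assumed), restricting scaled dot-product attention to a single backend might look like this:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.rand(1, 8, 1024, 64, device="cuda", dtype=torch.half)

# Allow only the memory-efficient kernel; flash attention and the math fallback are disabled.
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    out = F.scaled_dot_product_attention(q, k, v)
```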
Compilation

Compilation is a new feature of PyTorch 2.0, enabling significant speedups with a very simple user experience. To invoke the default behavior, simply wrap a PyTorch module or a function into torch.compile:

model = torch.compile(model)

The PyTorch compiler then turns Python code into a set of instructions which can be executed efficiently without Python overhead. The compilation happens dynamically the first time the code is executed. With the default behavior, under the hood PyTorch utilizes TorchDynamo to compile the code and TorchInductor to further optimize it. See this tutorial for more details.
Although the one-liner above is enough for compilation, certain modifications in the code can squeeze out a larger speedup. In particular, one should avoid so-called graph breaks - places in the code which PyTorch can’t compile. As opposed to previous PyTorch compilation approaches (like TorchScript), the PyTorch 2 compiler doesn’t fail in this case. Instead it falls back on eager execution, so the code runs, but with reduced performance. We introduced a few minor changes to the model code to get rid of graph breaks. This included eliminating functions from libraries not supported by the compiler, such as inspect.isfunction and einops.rearrange. See this doc to learn more about graph breaks and how to eliminate them.
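As a hypothetical example of such a change, a tensor reshuffle written with einops can usually be expressed with plain tensor ops that the compiler traces without a break; the shapes below are invented for the illustration.

```
import torch

x = torch.randn(2, 320, 64, 64)  # (batch, channels, height, width)

# einops version (caused a graph break in our setup at the time):
#   tokens = einops.rearrange(x, "b c h w -> b (h w) c")
# Equivalent pure-PyTorch version the compiler handles natively:
tokens = x.flatten(2).transpose(1, 2)  # shape (2, 4096, 320)
```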
Theoretically, one can apply torch.compile on the whole diffusion sampling loop. However, in practice it is enough to just compile the U-Net. The reason is that torch.compile doesn’t yet have a loop analyzer and would recompile the code for each iteration of the sampling loop. Moreover, compiled sampler code is likely to generate graph breaks, so one would need to adjust it to get good performance from the compiled version.

Note that compilation requires GPU compute capability >= SM 7.0 to run in non-eager mode. This covers all GPUs in our benchmarks - T4, V100, A10, A100 - except for P100 (see the full list).
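A minimal sketch of this idea is shown below with a tiny stand-in module instead of the real U-Net (the module, shapes, and loop are assumptions made purely for the example): compile only the denoiser, keep the sampling loop in eager mode, and spend one warm-up call on compilation.

```
import torch
import torch.nn as nn

class TinyUNetStandIn(nn.Module):
    # Placeholder for the real U-Net, just to keep the snippet self-contained.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 4, 3, padding=1)

    def forward(self, x, t):
        return self.conv(x) + t.view(-1, 1, 1, 1)

unet = torch.compile(TinyUNetStandIn())  # compile the denoiser only

x = torch.randn(1, 4, 64, 64)
t = torch.tensor([10.0])
unet(x, t)  # warm-up call: compilation happens here

for _ in range(3):            # the sampling loop itself stays in eager mode
    x = x - 0.1 * unet(x, t)
```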
Other optimizations

In addition, we have improved the efficiency of GPU memory operations by eliminating some common pitfalls, e.g. creating a tensor on the GPU directly rather than creating it on the CPU and later moving it to the GPU. The places where such optimizations were necessary were determined by line profiling and by looking at CPU/GPU traces and Flame Graphs.

Benchmarking setup and results summary

We have two versions of the code to compare: original and optimized. On top of this, several optimization features (xFormers, PyTorch memory-efficient attention, compilation) can be turned on/off. Overall, as mentioned in the introduction, we will be benchmarking 5 configurations:

* Original code without xFormers
* Original code with xFormers
* Optimized code with vanilla math attention backend and no compilation
* Optimized code with memory-efficient attention backend and no compilation
* Optimized code with memory-efficient attention backend and compilation

As the original version, we took the code that uses PyTorch 1.12 and a custom implementation of attention. The optimized version uses nn.MultiheadAttention in CrossAttention and PyTorch 2.0.0.dev20230111+cu117. It also has a few other minor optimizations in PyTorch-related code.

The table below shows the runtime of each version of the code in seconds, and the percentage improvement compared to the original with xFormers. The compilation time is excluded.

Runtimes for batch size 1. In parentheses: relative improvement with respect to the “Original with xFormers” row.

| Configuration | P100 | T4 | A10 | V100 | A100 |
| --- | --- | --- | --- | --- | --- |
| Original without xFormers | 30.4s (-19.3%) | 29.8s (-77.3%) | 13.0s (-83.9%) | 10.9s (-33.1%) | 8.0s (-19.3%) |
| Original with xFormers | 25.5s (0.0%) | 16.8s (0.0%) | 7.1s (0.0%) | 8.2s (0.0%) | 6.7s (0.0%) |
| Optimized with vanilla math attention, no compilation | 27.3s (-7.0%) | 19.9s (-18.7%) | 13.2s (-87.2%) | 7.5s (8.7%) | 5.7s (15.1%) |
| Optimized with mem. efficient attention, no compilation | 26.5s (-3.8%) | 16.8s (0.2%) | 7.1s (-0.8%) | 6.9s (16.0%) | 5.3s (20.6%) |
| Optimized with mem. efficient attention and compilation | - | 16.4s (2.1%) | 7.2s (-2.3%) | 6.6s (18.6%) | 4.1s (38.5%) |

Runtimes for batch size 2

| Configuration | P100 | T4 | A10 | V100 | A100 |
| --- | --- | --- | --- | --- | --- |
| Original without xFormers | 58.0s (-21.6%) | 57.6s (-84.0%) | 24.4s (-95.2%) | 18.6s (-63.0%) | 12.0s (-50.6%) |
| Original with xFormers | 47.7s (0.0%) | 31.3s (0.0%) | 12.5s (0.0%) | 11.4s (0.0%) | 8.0s (0.0%) |
| Optimized with vanilla math attention, no compilation | 49.3s (-3.5%) | 37.9s (-21.0%) | 17.8s (-42.2%) | 12.7s (-10.7%) | 7.8s (1.8%) |
| Optimized with mem. efficient attention, no compilation | 47.5s (0.4%) | 31.2s (0.5%) | 12.2s (2.6%) | 11.5s (-0.7%) | 7.0s (12.6%) |
| Optimized with mem. efficient attention and compilation | - | 28.0s (10.5%) | 11.4s (9.0%) | 10.7s (6.4%) | 6.4s (20.3%) |
Runtimes for batch size 4

| Configuration | P100 | T4 | A10 | V100 | A100 |
| --- | --- | --- | --- | --- | --- |
| Original without xFormers | 117.9s (-20.0%) | 112.4s (-81.8%) | 47.2s (-101.7%) | 35.8s (-71.9%) | 22.8s (-78.9%) |
| Original with xFormers | 98.3s (0.0%) | 61.8s (0.0%) | 23.4s (0.0%) | 20.8s (0.0%) | 12.7s (0.0%) |
| Optimized with vanilla math attention, no compilation | 101.1s (-2.9%) | 73.0s (-18.0%) | 28.3s (-21.0%) | 23.3s (-11.9%) | 14.5s (-13.9%) |
| Optimized with mem. efficient attention, no compilation | 92.9s (5.5%) | 61.1s (1.2%) | 23.9s (-1.9%) | 20.8s (-0.1%) | 12.8s (-0.9%) |
| Optimized with mem. efficient attention and compilation | - | 53.1s (14.2%) | 20.9s (10.6%) | 18.6s (10.4%) | 11.2s (12.2%) |
To minimize fluctuations and external influence on the performance of the benchmarked code, we ran each version of the code one after another, and then repeated this sequence 10 times: A, B, C, D, E, A, B, … So the results of a typical run would look like the one in the picture below. Note that one shouldn’t rely on comparisons of absolute run times between different graphs, but comparisons of run times inside one graph are fairly reliable, thanks to our benchmarking setup.
Each run of the text-to-image generation script produces several batches, the number of which is regulated by the CLI parameter --n_iter. In the benchmarks we used n_iter = 2, but introduced an additional “warm-up” iteration, which doesn’t contribute to the run time. This was necessary for the runs with compilation, because compilation happens the first time the code runs, and so the first iteration is much longer than all subsequent ones. To make the comparison fair, we also introduced this additional “warm-up” iteration to all other runs.

The numbers in the table above are for 2 iterations (plus a “warm-up” one), prompt “A photo”, seed 1, PLMS sampler, and autocast turned on.

Benchmarks were done using P100, V100, A100, A10 and T4 GPUs. The T4 benchmarks were done in Google Colab Pro. The A10 benchmarks were done on g5.4xlarge AWS instances with 1 GPU.
Conclusions and next steps

We have shown that the new features of PyTorch 2 - the compiler and the optimized attention implementation - give performance improvements exceeding or comparable to what previously required installing an external dependency (xFormers). PyTorch achieved this, in particular, by integrating memory-efficient attention from xFormers into its codebase. This is a significant improvement for user experience, given that xFormers, being a state-of-the-art library, in many scenarios requires a custom installation process and long builds.

There are a few natural directions in which this work can be continued:

* The optimizations we implemented and described here are only benchmarked for text-to-image inference so far. It would be interesting to see how they affect training performance. PyTorch compilation can be directly applied to training; enabling training with PyTorch optimized attention is on the roadmap
* We intentionally minimized changes to the original model code. Further profiling and optimization can probably bring more improvements
* At the moment compilation is applied only to the U-Net model inside the sampler. Since there is a lot happening outside of the U-Net (e.g. operations directly in the sampling loop), it would be beneficial to compile the whole sampler. However, this would require analysis of the compilation process to avoid recompilation at every sampling step
* The current code only applies compilation within the PLMS sampler, but it should be trivial to extend it to other samplers
* Besides text-to-image generation, diffusion models are also applied to other tasks - image-to-image and inpainting. It would be interesting to measure how their performance improves from PyTorch 2 optimizations

See if you can increase the performance of open source diffusion models using the methods we described, and share the results!
Resources

* PyTorch 2.0 overview, which has a lot of information on torch.compile: https://pytorch.org/get-started/pytorch-2.0/
* Tutorial on torch.compile: https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
* General compilation troubleshooting: https://pytorch.org/docs/master/dynamo/troubleshooting.html
* Details on graph breaks: https://pytorch.org/docs/master/dynamo/faq.html#identifying-the-cause-of-a-graph-break
* Details on guards: https://pytorch.org/docs/master/dynamo/guards-overview.html
* Video deep dive on TorchDynamo: https://www.youtube.com/watch?v=egZB5Uxki0I
* Tutorial on optimized attention in PyTorch 1.12: https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html

Acknowledgements

We would like to thank Geeta Chauhan, Natalia Gimelshein, Patrick Labatut, Bert Maher, Mark Saroufim, Michael Voznesensky and Francisco Massa for their valuable advice and early feedback on the text. Special thanks to Yudong Tao for initiating the work on using PyTorch native attention in diffusion models.
layout: blog_detail
title: "New Library Updates in PyTorch 2.0"

Summary

We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 2.0 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.

Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. Please find the list of the latest stable versions and updates below.

Latest Stable Library Versions (Full List)

* TorchArrow 0.1.0
* TorchRec 0.4.0
* TorchVision 0.15
* TorchAudio 2.0
* TorchServe 0.7.1
* TorchX 0.4.0
* TorchData 0.6.0
* TorchText 0.15.0
* PyTorch on XLA Devices 1.14

*To see prior versions or (unstable) nightlies, click on versions in the top left menu above ‘Search Docs’.

TorchAudio

[Beta] Data augmentation operators

The release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:

* torchaudio.functional.add_noise
* torchaudio.functional.convolve
* torchaudio.functional.deemphasis
* torchaudio.functional.fftconvolve
* torchaudio.functional.preemphasis
* torchaudio.functional.speed
* torchaudio.transforms.AddNoise
* torchaudio.transforms.Convolve
* torchaudio.transforms.Deemphasis
* torchaudio.transforms.FFTConvolve
* torchaudio.transforms.Preemphasis
* torchaudio.transforms.Speed
* torchaudio.transforms.SpeedPerturbation

The operators can be used to synthetically diversify training data to improve the generalizability of downstream models. For usage details, please refer to the functional and transform documentation and the Audio Data Augmentation tutorial.

[Beta] WavLM and XLS-R models

The release adds two self-supervised learning models for speech and audio:

* WavLM, which is robust to noise and reverberation.
* XLS-R, which is trained on cross-lingual datasets.

Besides the model architectures, torchaudio also supports corresponding pre-trained pipelines:

* torchaudio.pipelines.WAVLM_BASE
* torchaudio.pipelines.WAVLM_BASE_PLUS
* torchaudio.pipelines.WAVLM_LARGE
* torchaudio.pipelines.WAV2VEC_XLSR_300M
* torchaudio.pipelines.WAV2VEC_XLSR_1B
* torchaudio.pipelines.WAV2VEC_XLSR_2B

For usage details, please refer to the factory function and pre-trained pipelines documentation.

TorchRL

The initial release of torchrl includes several features that span the entire RL domain. TorchRL can already be used in online, offline, multi-agent, multi-task and distributed RL settings, among others. See below:

[Beta] Environment wrappers and transforms

torchrl.envs includes several wrappers around common environment libraries. This allows users to swap one library for another without effort (a short usage sketch follows the list). These wrappers build an interface between these simulators and torchrl:

* dm_control
* Gym
* Brax
* EnvPool
* Jumanji
* Habitat
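A hedged sketch of what using the Gym wrapper might look like, assuming torchrl and a Gym/Gymnasium backend with the Pendulum environment installed; the names follow the torchrl.envs API as we understand it, not a verified recipe.

```
from torchrl.envs.libs.gym import GymEnv

env = GymEnv("Pendulum-v1")          # wrap a Gym environment in the torchrl interface
td = env.reset()                      # returns a TensorDict with the initial observation
rollout = env.rollout(max_steps=10)   # short rollout with a random policy, also a TensorDict
print(rollout)
```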