Gym
Brax
EnvPool
Jumanji
Habitat
It also comes with many commonly used transforms and vectorized environment utilities that allow for fast execution across simulation libraries. Please refer to the documentation for more detail.
[Beta] Datacollectors
Data collection in RL is made easy via the usage of single process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user’s needs. These can be found in torchrl.collectors and are documented here.
[Beta] Objective modules
Several objective functions are included in torchrl.objectives, among which:
A generic PPOLoss class and derived ClipPPOLoss and KLPPOLoss
SACLoss and DiscreteSACLoss
DDPGLoss
DQNLoss
REDQLoss
A2CLoss
TD3Loss
ReinforceLoss
Dreamer
Vectorized value function operators also appear in the library. Check the documentation here.
[Beta] Models and exploration strategies
We provide multiple models, modules and exploration strategies. Get a detailed description in the doc.
[Beta] Composable replay buffer
A composable replay buffer class is provided that can be used to store data in multiple contexts including single and multi-agent, on- and off-policy, and more. Components include:
Storages (list, physical or memory-based contiguous storages)
Samplers (Prioritized, sampler without repetition)
Writers
Possibility to add transforms
Replay buffers and other data utilities are documented here.
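As a rough sketch of how these components compose (the class names, such as ReplayBuffer and ListStorage, reflect our reading of the torchrl.data module and may vary across versions):

```python
import torch
from torchrl.data import ListStorage, ReplayBuffer

# A buffer backed by a plain Python list, holding up to 1000 items
buffer = ReplayBuffer(storage=ListStorage(max_size=1000))

# Write a batch of items, then sample from them
buffer.extend([torch.randn(4) for _ in range(100)])
sample = buffer.sample(batch_size=32)
```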
[Beta] Logging tools and trainer
We support multiple logging tools including tensorboard, wandb and mlflow.
We provide a generic Trainer class that allows for easy code recycling and checkpointing.
These features are documented here.
TensorDict
TensorDict is a new data carrier for PyTorch.
[Beta] TensorDict: specialized dictionary for PyTorch
TensorDict allows you to execute many common operations across batches of tensors carried by a single container. TensorDict supports many shape and device or storage operations, and can readily be used in distributed settings. Check the documentation to know more.
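A brief illustrative sketch (assuming the tensordict package is installed; exact behavior may differ slightly across versions):

```python
import torch
from tensordict import TensorDict

# Two tensors sharing a leading batch dimension of 3
data = TensorDict(
    {"observation": torch.zeros(3, 4), "action": torch.zeros(3, 2)},
    batch_size=[3],
)

first = data[0]                              # indexes every entry at once
stacked = torch.stack([data, data], dim=0)   # batch_size becomes [2, 3]
if torch.cuda.is_available():
    data = data.to("cuda")                   # moves every entry to the GPU
```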
[Beta] @tensorclass: a dataclass for PyTorch
Like TensorDict, tensorclass provides the opportunity to write dataclasses with built-in torch features such as shape or device operations.
[Beta] tensordict.nn: specialized modules for TensorDict
The tensordict.nn module provides specialized nn.Module subclasses that make it easy to build arbitrarily complex graphs that can be executed with TensorDict inputs. It is compatible with the latest PyTorch features such as functorch, torch.fx and torch.compile.
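As a minimal sketch of the idea (assuming the tensordict.nn import paths of the 2.0-era release):

```python
import torch
from torch import nn
from tensordict import TensorDict
from tensordict.nn import TensorDictModule, TensorDictSequential

# Wrap plain nn.Modules so they read from and write to TensorDict keys
encoder = TensorDictModule(nn.Linear(4, 8), in_keys=["observation"], out_keys=["hidden"])
head = TensorDictModule(nn.Linear(8, 2), in_keys=["hidden"], out_keys=["action"])
policy = TensorDictSequential(encoder, head)

data = TensorDict({"observation": torch.randn(3, 4)}, batch_size=[3])
policy(data)  # writes "hidden" and "action" into the same TensorDict
```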
TorchRec
[Beta] KeyedJaggedTensor All-to-All Redesign and Input Dist Fusion
We observed performance regression due to a bottleneck in sparse data distribution for models that have multiple, large KJTs to redistribute.
To combat this we altered the comms pattern to transport the minimum data required in the initial collective to support the collective calls for the actual KJT tensor data. Sending this data, the 'splits', in the initial collective means slightly more data is transmitted over the comms stream overall, but the CPU is blocked for significantly less time, leading to better overall QPS.
Furthermore, we altered the TorchRec train pipeline to group the initial collective calls for the splits together before launching the more expensive KJT tensor collective calls. This fusion minimizes the CPU blocked time as launching each subsequent input distribution is no longer dependent on the previous input distribution.
With this feature, variable batch sizes are now natively supported across ranks. These features are documented here.
TorchVision
[Beta] Extending TorchVision’s Transforms to Object Detection, Segmentation & Video tasks
TorchVision is extending its Transforms API! Here is what’s new:
You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.
You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.
Learn more about these new transforms from our docs, and submit any feedback in our dedicated issue.
TorchText
[Beta] Adding scriptable T5 and Flan-T5 to the TorchText library with incremental decoding support!
TorchText has added the T5 model architecture with pre-trained weights for both the original T5 paper and Flan-T5. The model is fully torchscriptable and features an optimized multiheaded attention implementation. We include several examples of how to utilize the model including summarization, classification, and translation.
For more details, please refer to our docs.
TorchX
TorchX is moving to community supported mode. More details will be shared at a later time.
layout: blog_detail
title: "Deprecation of CUDA 11.6 and Python 3.7 Support"
For the upcoming PyTorch 2.0 feature release (target March 2023), we will target CUDA 11.7 as the stable version and CUDA 11.8 as the experimental version of CUDA and Python >=3.8, <=3.11.
If you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as it would be the minimum versions required for PyTorch 2.0.
Please note that as of Feb 1, CUDA 11.6 and Python 3.7 are no longer included in the nightlies.
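To check what your current environment is built against before upgrading, a quick snippet such as the following can help (torch.version.cuda reports the CUDA version the PyTorch build was compiled with):

```python
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)

# PyTorch 2.0 requires Python >= 3.8
assert sys.version_info >= (3, 8)
```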
Please refer to the Release Compatibility Matrix for PyTorch releases:
| PyTorch Version | Python | Stable CUDA | Experimental CUDA |
|---|---|---|---|
| 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84 |
| 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96 |
| 1.12 | >=3.7, <=3.10 | CUDA 11.3, CUDNN 8.3.2.44 | CUDA 11.6, CUDNN 8.3.2.44 |

As of 2/1/2023
For more information on PyTorch releases, updated compatibility matrix and release policies, please see (and bookmark) Readme.
layout: blog_detail
title: 'torchvision 0.3: segmentation, detection models, new datasets and more..'
author: Francisco Massa
redirect_from: /2019/05/23/torchvision03.html
PyTorch domain libraries like torchvision provide convenient access to common datasets and models that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. The torchvision 0.3 release brings several new features including models for semantic segmentation, object detection, instance segmentation, and person keypoint detection, as well as custom C++ / CUDA ops specific to computer vision.
New features include:
Reference training / evaluation scripts: torchvision now provides, under the references/ folder, scripts for training and evaluation of the following tasks: classification, semantic segmentation, object detection, instance segmentation and person keypoint detection. These serve as a log of how to train a specific model and provide baseline training and evaluation scripts to quickly bootstrap research.
torchvision ops: torchvision now contains custom C++ / CUDA operators. Those operators are specific to computer vision, and make it easier to build object detection models. These operators currently do not support PyTorch script mode, but support for it is planned for in the next release. Some of the ops supported include:
roi_pool (and the module version RoIPool)
roi_align (and the module version RoIAlign)
nms, for non-maximum suppression of bounding boxes
box_iou, for computing the intersection over union metric between two sets of bounding boxes
box_area, for computing the area of a set of bounding boxes
Here are a few examples of using torchvision ops:
```python
import torch
import torchvision

# create 10 random boxes
boxes = torch.rand(10, 4) * 100
# they need to be in [x0, y0, x1, y1] format
boxes[:, 2:] += boxes[:, :2]

# create a random image
image = torch.rand(1, 3, 200, 200)

# extract regions in `image` defined in `boxes`, rescaling
# them to have a size of 3x3
pooled_regions = torchvision.ops.roi_align(image, [boxes], output_size=(3, 3))
# check the size
print(pooled_regions.shape)
# torch.Size([10, 3, 3, 3])

# or compute the intersection over union between
# all pairs of boxes
print(torchvision.ops.box_iou(boxes, boxes).shape)
# torch.Size([10, 10])
```
New models and datasets: torchvision now adds support for object detection, instance segmentation and person keypoint detection models. In addition, several popular datasets have been added. Note: The API is currently experimental and might change in future versions of torchvision. New models include:
Segmentation Models
The 0.3 release also contains models for dense pixelwise prediction on images.
It adds FCN and DeepLabV3 segmentation models, using ResNet50 and ResNet101 backbones.
Pre-trained weights for ResNet101 backbone are available, and have been trained on a subset of COCO train2017, which contains the same 20 categories as those from Pascal VOC.
The pre-trained models give the following results on the subset of COCO val2017 which contains the same 20 categories as those present in Pascal VOC:
| Network | mean IoU | global pixelwise acc |
|---|:---:|:---:|
| FCN ResNet101 | 63.7 | 91.9 |
| DeepLabV3 ResNet101 | 67.4 | 92.4 |
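As an illustrative sketch, a pre-trained segmentation model can be loaded and run like this (the 21 output channels correspond to the 20 Pascal VOC categories plus background):

```python
import torch
import torchvision

# FCN with a ResNet101 backbone, pre-trained as described above
model = torchvision.models.segmentation.fcn_resnet101(pretrained=True)
model.eval()

x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    out = model(x)["out"]  # per-pixel class scores, shape (1, 21, 224, 224)
```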
Detection Models
| Network | box AP | mask AP | keypoint AP |
|---|:---:|:---:|:---:|
| Faster R-CNN ResNet-50 FPN trained on COCO | 37.0 | | |
| Mask R-CNN ResNet-50 FPN trained on COCO | 37.9 | 34.6 | |
| Keypoint R-CNN ResNet-50 FPN trained on COCO | 54.6 | | 65.0 |
The implementations of the models for object detection, instance segmentation and keypoint detection are fast, especially during training.
In the following table, we use 8 V100 GPUs, with CUDA 10.0 and CUDNN 7.4 to report the results. During training, we use a batch size of 2 per GPU, and during testing a batch size of 1 is used.
For test time, we report the time for the model evaluation and post-processing (including mask pasting in image), but not the time for computing the precision-recall.
| Network | train time (s / it) | test time (s / it) | memory (GB) |
|---|:---:|:---:|:---:|
| Faster R-CNN ResNet-50 FPN | 0.2288 | 0.0590 | 5.2 |
| Mask R-CNN ResNet-50 FPN | 0.2728 | 0.0903 | 5.4 |
| Keypoint R-CNN ResNet-50 FPN | 0.3789 | 0.1242 | 6.8 |
You can load and use pre-trained detection and segmentation models with a few lines of code:
```python
import PIL.Image
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
# set it to evaluation mode, as the model behaves differently
# during training and during evaluation
model.eval()

image = PIL.Image.open('/path/to/an/image.jpg')
image_tensor = torchvision.transforms.functional.to_tensor(image)

# pass a list of (potentially different sized) tensors
# to the model, in 0-1 range. The model will take care of
# batching them together and normalizing
output = model([image_tensor])
# output is a list of dicts, containing the postprocessed predictions
```
Classification Models
The following classification models were added:
GoogLeNet (Inception v1)
MobileNet V2
ShuffleNet v2
ResNeXt-50 32x4d and ResNeXt-101 32x8d
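As a quick sketch, the new architectures follow the same constructor pattern as the existing models (assuming pre-trained weights are available for each variant):

```python
import torchvision

googlenet = torchvision.models.googlenet(pretrained=True)
mobilenet = torchvision.models.mobilenet_v2(pretrained=True)
shufflenet = torchvision.models.shufflenet_v2_x1_0(pretrained=True)
resnext = torchvision.models.resnext50_32x4d(pretrained=True)
```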
Datasets
The following datasets were added:
Caltech101, Caltech256, and CelebA
ImageNet dataset (improving on ImageFolder, provides class-strings)
Semantic Boundaries Dataset
VisionDataset as a base class for all datasets
In addition, we've added more image transforms, general improvements and bug fixes, as well as improved documentation.
See the full release notes here as well as this getting started tutorial on Google Colab here, which describes how to fine tune your own instance segmentation model on a custom dataset.
Cheers!
Team PyTorch
layout: blog_detail
title: 'Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds'
author: Team PyTorch
Today, we are announcing four PyTorch prototype features. The first three of these will enable Mobile machine-learning developers to execute models on the full set of hardware (HW) engines making up a system-on-chip (SOC). This gives developers options to optimize their model execution for unique performance, power, and system-level concurrency.
These features include enabling execution on the following on-device HW engines:
* DSP and NPUs using the Android Neural Networks API (NNAPI), developed in collaboration with Google
* GPU execution on Android via Vulkan
* GPU execution on iOS via Metal
This release also includes developer efficiency benefits with newly introduced support for ARM64 builds for Linux.
Below, you’ll find brief descriptions of each feature with the links to get you started. These features are available through our nightly builds. Reach out to us on the PyTorch Forums for any comment or feedback. We would love to get your feedback on those and hear how you are using them!
NNAPI Support with Google Android
The Google Android and PyTorch teams collaborated to enable support for Android’s Neural Networks API (NNAPI) via PyTorch Mobile. Developers can now unlock high-performance execution on Android phones as their machine-learning models will be able to access additional hardware blocks on the phone’s system-on-chip. NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including DSPs (Digital Signal Processors) and NPUs (specialized Neural Processing Units). The API was introduced in Android 8 (Oreo) and significantly expanded in Android 10 and 11 to support a richer set of AI models. With this integration, developers can now seamlessly access NNAPI directly from PyTorch Mobile. This initial release includes fully-functional support for a core set of features and operators, and Google and Facebook will be working to expand capabilities in the coming months.
Links
* Android Blog: Android Neural Networks API 1.3 and PyTorch Mobile support
* PyTorch Medium Blog: Support for Android NNAPI with PyTorch Mobile
PyTorch Mobile GPU support
Inferencing on GPU can provide great performance on many model types, especially those utilizing high-precision floating-point math. Leveraging GPUs for ML model execution, such as those found in SOCs from Qualcomm, MediaTek, and Apple, allows for CPU offload, freeing up the mobile CPU for non-ML use cases. This initial prototype-level support for on-device GPUs is provided via the Metal API specification for iOS and the Vulkan API specification for Android. As this feature is in an early stage, performance is not optimized and model coverage is limited. We expect this to improve significantly over the course of 2021 and would like to hear from you which models and devices you would like to see performance improvements on.
Links
* Prototype source workflows
ARM64 Builds for Linux
We will now provide prototype-level PyTorch builds for ARM64 devices on Linux, as we see growing ARM usage in our community on platforms such as Raspberry Pis at the edge and Graviton(2) instances on servers. This feature is available through our nightly builds.
We value your feedback on these features and look forward to collaborating with you to continuously improve them further!
Thank you,
Team PyTorch
layout: blog_detail
title: "PyTorch Enterprise Support Program Update"
author: Team PyTorch
featured-img: ""
On May 25, 2021, we announced the PyTorch Enterprise Support Program (ESP) that enabled providers to develop and offer tailored enterprise-grade support to their customers.
The program enabled certified service providers to develop and offer tailored enterprise-grade support to their customers through contribution of hotfixes and other improvements requested by PyTorch enterprise users who were developing models in production at scale for mission-critical applications. However, as we evaluated community feedback, we found ongoing ESP support was not necessary at this time and will immediately divert these resources to other areas to improve the user experience for the entire community.
Today, we are removing the PyTorch long-term support (LTS 1.8.2) download link from the "Start Locally" download option on the "Get Started" page in order to simplify the user experience. PyTorch v1.8.2 can still be downloaded under previous versions. Please note that it is only supported for Python while it is being deprecated. If there are any updates to ESP/LTS, we will update future blogs.
Please reach out to [email protected] with any questions.
layout: blog_detail
title: 'New Releases: PyTorch 1.2, torchtext 0.4, torchaudio 0.3, and torchvision 0.4'
author: Team PyTorch
redirect_from: /2019/08/06/pytorch_aug2019_releases.html
Since the release of PyTorch 1.0, we’ve seen the community expand to add new tools, contribute to a growing set of models available in the PyTorch Hub, and continually increase usage in both research and production.
From a core perspective, PyTorch has continued to add features to support both research and production usage, including the ability to bridge these two worlds via TorchScript. Today, we are excited to announce that we have four new releases including PyTorch 1.2, torchvision 0.4, torchaudio 0.3, and torchtext 0.4. You can get started now with any of these releases at pytorch.org.
PyTorch 1.2
With PyTorch 1.2, the open source ML framework takes a major step forward for production usage with the addition of an improved and more polished TorchScript environment. These improvements make it even easier to ship production models, expand support for exporting ONNX formatted models, and enhance module level support for Transformers. In addition to these new features, TensorBoard is now no longer experimental - you can simply type `from torch.utils.tensorboard import SummaryWriter` to get started.
TorchScript Improvements
Since its release in PyTorch 1.0, TorchScript has provided a path to production for eager PyTorch models. The TorchScript compiler converts PyTorch models to a statically typed graph representation, opening up opportunities for
optimization and execution in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python.
PyTorch 1.2 significantly expands TorchScript's support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript. See the migration guide for details. Below is an example usage of the new API:
```python
import torch

class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

# Compile the model code to a static representation
my_script_module = torch.jit.script(MyModule(3, 4))

# Save the compiled code and model data so it can be loaded elsewhere
my_script_module.save("my_script_module.pt")
```
To learn more, see our Introduction to TorchScript and Loading a
PyTorch Model in C++ tutorials.
Expanded ONNX Export
The ONNX community continues to grow with an open governance structure and additional steering committee members, special interest groups (SIGs), and working groups (WGs). In collaboration with Microsoft, we’ve added full support to export ONNX Opset versions 7 (v1.2), 8 (v1.3), 9 (v1.4) and 10 (v1.5). We’ve also enhanced the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule has also been improved including support for multiple outputs, tensor factories, and tuples as inputs and outputs. Additionally, users are now able to register their own symbolic to export custom ops, and specify the dynamic dimensions of inputs during export. Here is a summary of all of the major improvements:
Support for multiple Opsets including the ability to export dropout, slice, flip, and interpolate in Opset 10.
Improvements to ScriptModule including support for multiple outputs, tensor factories, and tuples as inputs and outputs.
More than a dozen additional PyTorch operators supported including the ability to export a custom operator.
Many bug fixes and test infra improvements.
You can try out the latest tutorial here, contributed by @lara-hdr at Microsoft. A big thank you to the entire Microsoft team for all of their hard work to make this release happen!
nn.Transformer
In PyTorch 1.2, we now include a standard nn.Transformer module, based on the paper “Attention is All You Need”. The nn.Transformer module relies entirely on an attention mechanism to draw global dependencies between input and output. The individual components of the nn.Transformer module are designed so they can be adopted independently. For example, the nn.TransformerEncoder can be used by itself, without the larger nn.Transformer. The new APIs include:
nn.Transformer
nn.TransformerEncoder and nn.TransformerEncoderLayer
nn.TransformerDecoder and nn.TransformerDecoderLayer
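As a brief sketch of the new API (using the default d_model=512 and this release's (sequence, batch, feature) input layout):

```python
import torch
from torch import nn

# Full encoder-decoder Transformer with the default sizes (d_model=512, nhead=8)
transformer = nn.Transformer(num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # (source length, batch, d_model)
tgt = torch.rand(20, 32, 512)  # (target length, batch, d_model)
out = transformer(src, tgt)    # -> (20, 32, 512)

# The components can also be used on their own
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
memory = encoder(src)
```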
See the Transformer Layers documentation for more information. See here for the full PyTorch 1.2 release notes.
Domain API Library Updates
PyTorch domain libraries like torchvision, torchtext, and torchaudio provide convenient access to common datasets, models, and transforms that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. Since research domains have distinct requirements, an ecosystem of specialized libraries called domain APIs (DAPI) has emerged around PyTorch to simplify the development of new and existing algorithms in a number of fields. We’re excited to release three updated DAPI libraries for text, audio, and vision that complement the PyTorch 1.2 core release.
Torchaudio 0.3 with Kaldi Compatibility, New Transforms
Torchaudio specializes in machine understanding of audio waveforms. It is an ML library that provides relevant signal processing functionality (but is not a general signal processing library). It leverages PyTorch’s GPU support to provide many tools and transformations for waveforms to make data loading and standardization easier and more readable. For example, it offers data loaders for waveforms using sox, and transformations such as spectrograms, resampling, and mu-law encoding and decoding.
We are happy to announce the availability of torchaudio 0.3.0, with a focus on standardization and complex numbers, a transformation (resample) and two new functionals (phase_vocoder, ISTFT), Kaldi compatibility, and a new tutorial. Torchaudio was redesigned to be an extension of PyTorch and a part of the domain APIs (DAPI) ecosystem.
Standardization
Significant effort in solving machine learning problems goes into data preparation. In this new release, we've updated torchaudio's interfaces for its transformations to standardize around the following vocabulary and conventions.
Tensors are assumed to have channel as the first dimension and time as the last dimension (when applicable). This makes it consistent with PyTorch's dimensions. For size names, the prefix n_ is used (e.g. "a tensor of size (n_freq, n_mel)") whereas dimension names do not have this prefix (e.g. "a tensor of dimension (channel, time)"). The input of all transforms and functions now assumes channel first. This is done to be consistent with PyTorch, which has channel followed by the number of samples. The channel parameter of all transforms and functions is now deprecated.
The output of STFT is (channel, frequency, time, 2), meaning for each channel, the columns are the Fourier transform of a certain window, so as we travel horizontally we can see each column (the Fourier transformed waveform) change over time. This matches the output of librosa so we no longer need to transpose in our test comparisons with Spectrogram, MelScale, MelSpectrogram, and MFCC. Moreover, because of these new conventions, we deprecated LC2CL and BLC2CBL which were used to transfer from one shape of signal to another.
As part of this release, we're also introducing support for complex numbers via tensors of dimension (..., 2), and providing magphase to convert such a tensor into its magnitude and phase, and similarly complex_norm and angle.
The details of the standardization are provided in the README.
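To make the conventions concrete, here is a small sketch (using the 0.3-era APIs, where torch.stft returned a real tensor with a trailing dimension of size 2):

```python
import torch
import torchaudio

# A two-channel waveform: (channel, time), per the new convention
waveform = torch.rand(2, 16000)

# Transforms take channel-first input
mel = torchaudio.transforms.MelSpectrogram()(waveform)  # (channel, n_mels, time)

# Complex values are represented with a trailing dimension of size 2;
# magphase splits such a tensor into magnitude and phase
stft = torch.stft(waveform, n_fft=400)  # (channel, freq, time, 2)
magnitude, phase = torchaudio.functional.magphase(stft)
```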
Functionals, Transformations, and Kaldi Compatibility
Prior to the standardization, we separated state and computation into torchaudio.transforms and torchaudio.functional.
As part of the transforms, we're adding a new transformation in 0.3.0: Resample. Resample can upsample or downsample a waveform to a different frequency.
As part of the functionals, we're introducing: phase_vocoder, a phase vocoder to change the speed of a waveform without changing its pitch, and ISTFT, the inverse STFT implemented to be compatible with STFT provided by PyTorch. This separation allows us to make functionals weakly scriptable and to utilize JIT in 0.3.0. We thus have JIT and CUDA support for the following transformations: Spectrogram, AmplitudeToDB (previously named SpectrogramToDB), MelScale,
MelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding (previously named MuLawExpanding).
We now also provide a compatibility interface with Kaldi to ease onboarding and reduce a user's code dependency on Kaldi. We now have an interface for spectrogram, fbank, and resample_waveform.
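A small sketch of the new Resample transform and the Kaldi compliance interface (parameter values here are purely illustrative):

```python
import torch
import torchaudio

waveform = torch.rand(1, 44100)  # one second of audio at 44.1 kHz

# Resample down to 16 kHz
resample = torchaudio.transforms.Resample(orig_freq=44100, new_freq=16000)
waveform_16k = resample(waveform)

# Kaldi-compatible filterbank features via the compliance interface
fbank = torchaudio.compliance.kaldi.fbank(waveform_16k, sample_frequency=16000.0)
```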
New Tutorial
To showcase the new conventions and transformations, we have a new tutorial demonstrating how to preprocess waveforms using torchaudio. This tutorial walks through an example of loading a waveform and applying some of the available transformations to it.
We are excited to see an active community around torchaudio and eager to further grow and support it. We encourage you to go ahead and experiment for yourself with this tutorial and the two datasets that are available: VCTK and YESNO! They have an interface to download the datasets and preprocess them in a convenient format. You can find the details in the release notes here.
Torchtext 0.4 with supervised learning datasets
A key focus area of torchtext is to provide the fundamental elements to help accelerate NLP research. This includes easy access to commonly used datasets and basic preprocessing pipelines for working on raw text based data. The torchtext 0.4.0 release includes several popular supervised learning baselines with "one-command" data loading. A tutorial is included to show how to use the new datasets for text classification analysis. We also added and improved on a few functions such as get_tokenizer and build_vocab_from_iterator to make it easier to implement future datasets. Additional examples can be found here.
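For example, a quick sketch of the tokenizer and vocab helpers (the import paths follow recent torchtext layouts and may differ slightly across versions):

```python
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")
tokens = tokenizer("You can now install TorchText.")
# ['you', 'can', 'now', 'install', 'torchtext', '.']

vocab = build_vocab_from_iterator([tokens])
```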
Text classification is an important task in Natural Language Processing with many applications, such as sentiment analysis. The new release includes several popular text classification datasets for supervised learning including:
AG_NEWS
SogouNews
DBpedia
YelpReviewPolarity
YelpReviewFull
YahooAnswers
AmazonReviewPolarity
AmazonReviewFull
Each dataset comes with two parts (train vs. test), and can be easily loaded with a single command. The datasets also support an ngrams feature to capture the partial information about the local word order. Take a look at the tutorial here to learn more about how to use the new datasets for supervised problems such as text classification analysis.
```python
from torchtext.datasets.text_classification import DATASETS

train_dataset, test_dataset = DATASETS['AG_NEWS']()
```
In addition to the domain library, PyTorch provides many tools to make data loading easy. Users now can load and preprocess the text classification datasets with some well supported tools, like [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html) and [torch.utils.data.IterableDataset](https://pytorch.org/docs/master/data.html#torch.utils.data.IterableDataset). Here are a few lines to wrap the data with DataLoader. More examples can be found [here](https://github.com/pytorch/text/tree/master/examples/text_classification).
```python
from torch.utils.data import DataLoader
# generate_batch is a user-defined collate function (see the text classification tutorial)
data = DataLoader(train_dataset, collate_fn=generate_batch)
```
Check out the release notes here to learn more and try out the tutorial here.
Torchvision 0.4 with Support for Video
Video is now a first-class citizen in torchvision, with support for data loading, datasets, pre-trained models, and transforms. The 0.4 release of torchvision includes:
Efficient IO primitives for reading/writing video files (including audio), with support for arbitrary encodings and formats.
Standard video datasets, compatible with torch.utils.data.Dataset and torch.utils.data.DataLoader.
Pre-trained models built on the Kinetics-400 dataset for action classification on videos (including the training scripts).
Reference training scripts for training your own video models.
We wanted working with video data in PyTorch to be as straightforward as possible, without compromising too much on performance.
As such, we avoid the steps that would require re-encoding the videos beforehand, as it would involve:
A preprocessing step which duplicates the dataset in order to re-encode it.
An overhead in time and space because this re-encoding is time-consuming.
Generally, an external script should be used to perform the re-encoding.
Additionally, we provide APIs such as the utility class, VideoClips, that simplifies the task of enumerating all possible clips of fixed size in a list of video files by creating an index of all clips in a set of videos. It also allows you to specify a fixed frame-rate for the videos. An example of the API is provided below:
```python
from torchvision.datasets.video_utils import VideoClips

class MyVideoDataset(object):
    def __init__(self, video_paths):
        self.video_clips = VideoClips(video_paths,
                                      clip_length_in_frames=16,
                                      frames_between_clips=1,
                                      frame_rate=15)

    def __getitem__(self, idx):
        video, audio, info, video_idx = self.video_clips.get_clip(idx)
        return video, audio

    def __len__(self):
        return self.video_clips.num_clips()
```
Most of the user-facing API is in Python, similar to PyTorch, which makes it easily extensible. Plus, the underlying implementation is fast — torchvision decodes as little as possible from the video on-the-fly in order to return a clip from the video.
Check out the torchvision 0.4 release notes here for more details.
We look forward to continuing our collaboration with the community and hearing your feedback as we further improve and expand the PyTorch deep learning platform.
We’d like to thank the entire PyTorch team and the community for all of the contributions to this work!
layout: blog_detail
title: 'How to Train State-Of-The-Art Models Using TorchVision’s Latest Primitives'
author: Vasilis Vryniotis
featured-img: 'assets/images/fx-image2.png'
A few weeks ago, TorchVision v0.11 was released packed with numerous new primitives, models and training recipe improvements which allowed achieving state-of-the-art (SOTA) results. The project was dubbed “TorchVision with Batteries Included” and aimed to modernize our library. We wanted to enable researchers to reproduce papers and conduct research more easily by using common building blocks. Moreover, we aspired to provide the necessary tools to Applied ML practitioners to train their models on their own data using the same SOTA techniques as in research. Finally, we wanted to refresh our pre-trained weights and offer better off-the-shelf models to our users, hoping that they would build better applications.
Though there is still much work to be done, we wanted to share with you some exciting results from the above work. We will showcase how one can use the new tools included in TorchVision to achieve state-of-the-art results on a highly competitive and well-studied architecture such as ResNet50 [1]. We will share the exact recipe used to improve our baseline by over 4.7 accuracy points to reach a final top-1 accuracy of 80.9% and share the journey for deriving the new training process. Moreover, we will show that this recipe generalizes well to other model variants and families. We hope that the above will influence future research for developing stronger generalizable training methodologies and will inspire the community to adopt and contribute to our efforts.
The Results
Using our new training recipe found on ResNet50, we’ve refreshed the pre-trained weights of the following models:
| Model | Accuracy@1 | Accuracy@5 |
|----------|:--------:|:----------:|
| ResNet50 | 80.858 | 95.434 |
| ResNet101 | 81.886 | 95.780 |
| ResNet152 | 82.284 | 96.002 |
| ResNeXt50-32x4d | 81.198 | 95.340 |
Note that the accuracy of all models except ResNet50 can be further improved by adjusting their training parameters slightly, but our focus was to have a single robust recipe which performs well for all.
UPDATE: We have refreshed the majority of popular classification models of TorchVision, you can find the details on this blog post.
There are currently two ways to use the latest weights of the model.
Using the Multi-pretrained weight API
We are currently working on a new prototype mechanism which will extend the model builder methods of TorchVision to support multiple weights. Along with the weights, we store useful meta-data (such as the labels, the accuracy, links to the recipe, etc.) and the preprocessing transforms necessary for using the models. Example:
```python
from PIL import Image
from torchvision import prototype as P
img = Image.open("test/assets/encode_jpeg/grace_hopper_517x606.jpg")
# Initialize model
weights = P.models.ResNet50_Weights.IMAGENET1K_V2
model = P.models.resnet50(weights=weights)
model.eval()
# Initialize inference transforms
preprocess = weights.transforms()
# Apply inference preprocessing transforms
batch = preprocess(img).unsqueeze(0)
prediction = model(batch).squeeze(0).softmax(0)
# Make predictions
label = prediction.argmax().item()
score = prediction[label].item()
# Use meta to get the labels
category_name = weights.meta['categories'][label]
print(f"{category_name}: {100 * score}%")
```

Using the legacy API
Those who don’t want to use a prototype API have the option of accessing the new weights via the legacy API using the following approach:
```python
from torchvision.models import resnet
# Overwrite the URL of the previous weights
resnet.model_urls["resnet50"] = "https://download.pytorch.org/models/resnet50-11ad3fa6.pth"
# Initialize the model using the legacy API
model = resnet.resnet50(pretrained=True)
# TODO: Apply preprocessing + call the model
# ...
```
The Training Recipe
Our goal was to use the newly introduced primitives of TorchVision to derive a new strong training recipe which achieves state-of-the-art results for the vanilla ResNet50 architecture when trained from scratch on ImageNet with no additional external data. Though by using architecture specific tricks [2] one could further improve the accuracy, we’ve decided not to include them so that the recipe can be used in other architectures. Our recipe heavily focuses on simplicity and builds upon work by FAIR [3], [4], [5], [6], [7]. Our findings align with the parallel study of Wightman et al. [7], who also report major accuracy improvements by focusing on the training recipes.
Without further ado, here are the main parameters of our recipe:
```python
# Optimizer & LR scheme
ngpus=8,
batch_size=128, # per GPU
epochs=600,
opt='sgd',
momentum=0.9,
lr=0.5,
lr_scheduler='cosineannealinglr',
lr_warmup_epochs=5,
lr_warmup_method='linear',
lr_warmup_decay=0.01,
# Regularization and Augmentation
weight_decay=2e-05,
norm_weight_decay=0.0,
label_smoothing=0.1,
mixup_alpha=0.2,
cutmix_alpha=1.0,
auto_augment='ta_wide',
random_erase=0.1,
ra_sampler=True,
ra_reps=4,
# EMA configuration
model_ema=True,
model_ema_steps=32,
model_ema_decay=0.99998,
# Resizing
interpolation='bilinear',
val_resize_size=232,
val_crop_size=224,
train_crop_size=176,
```
Using our standard training reference script, we can train a ResNet50 using the following command:
```
torchrun --nproc_per_node=8 train.py --model resnet50 --batch-size 128 --lr 0.5 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--train-crop-size 176 --model-ema --val-resize-size 232 --ra-sampler --ra-reps 4
```
Methodology
There are a few principles we kept in mind during our explorations:
Training is a stochastic process and the validation metric we try to optimize is a random variable. This is due to the random weight initialization scheme employed and the existence of random effects during the training process. This means that we can’t do a single run to assess the effect of a recipe change. The standard practice is doing multiple runs (usually 3 to 5) and studying the summarization stats (such as mean, std, median, max, etc).
There is usually a significant interaction between different parameters, especially for techniques that focus on Regularization and reducing overfitting. Thus changing the value of one can have effects on the optimal configurations of others. To account for that one can either adopt a greedy search approach (which often leads to suboptimal results but tractable experiments) or apply grid search (which leads to better results but is computationally expensive). In this work, we used a mixture of both.
Techniques that are non-deterministic or introduce noise usually require longer training cycles to improve model performance. To keep things tractable, we initially used short training cycles (small number of epochs) to decide which paths can be eliminated early and which should be explored using longer training.
There is a risk of overfitting the validation dataset [8] because of the repeated experiments. To mitigate some of the risk, we apply only training optimizations that provide a significant accuracy improvements and use K-fold cross validation to verify optimizations done on the validation set. Moreover we confirm that our recipe ingredients generalize well on other models for which we didn’t optimize the hyper-parameters.
Break down of key accuracy improvements
As discussed in earlier blog posts, training models is not a journey of monotonically increasing accuracies and the process involves a lot of backtracking. To quantify the effect of each optimization, below we attempt to showcase an idealized linear journey of deriving the final recipe starting from the original recipe of TorchVision. We would like to clarify that this is an oversimplification of the actual path we followed and thus it should be taken with a grain of salt.
In the table below, we provide a summary of the performance of stacked incremental improvements on top of Baseline. Unless denoted otherwise, we report the model with best Acc@1 out of 3 runs: | https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/ | pytorch blogs |
| | Accuracy@1 | Accuracy@5 | Incremental Diff | Absolute Diff |
|----------|:--------:|:----------:|:---------:|:--------:|
| ResNet50 Baseline | 76.130 | 92.862 | 0.000 | 0.000 |
| + LR optimizations | 76.494 | 93.198 | 0.364 | 0.364 |
| + TrivialAugment | 76.806 | 93.272 | 0.312 | 0.676 |
| + Long Training | 78.606 | 94.052 | 1.800 | 2.476 |
| + Random Erasing | 78.796 | 94.094 | 0.190 | 2.666 |
| + Label Smoothing | 79.114 | 94.374 | 0.318 | 2.984 |
| + Mixup | 79.232 | 94.536 | 0.118 | 3.102 |
| + Cutmix | 79.510 | 94.642 | 0.278 | 3.380 |
| + Weight Decay tuning | 80.036 | 94.746 | 0.526 | 3.906 |
| + FixRes mitigations | 80.196 | 94.672 | 0.160 | 4.066 |
| + EMA | 80.450 | 94.908 | 0.254 | 4.320 |
| + Inference Resize tuning * | 80.674 | 95.166 | 0.224 | 4.544 |
| + Repeated Augmentation ** | 80.858 | 95.434 | 0.184 | 4.728 |
*The tuning of the inference size was done on top of the last model. See below for details.
** Community contribution done after the release of the article. See below for details.
Baseline
Our baseline is the previously released ResNet50 model of TorchVision. It was trained with the following recipe:
```python
# Optimizer & LR scheme
ngpus=8,
batch_size=32, # per GPU
epochs=90,
opt='sgd',
momentum=0.9,
lr=0.1,
lr_scheduler='steplr',
lr_step_size=30,
lr_gamma=0.1,
# Regularization
weight_decay=1e-4,
# Resizing
interpolation='bilinear',
val_resize_size=256,
val_crop_size=224,
train_crop_size=224,
```
Most of the above parameters are the defaults on our training scripts. We will start building on top of this baseline by introducing optimizations until we gradually arrive at the final recipe.
LR optimizations
There are a few parameter updates we can apply to improve both the accuracy and the speed of our training. This can be achieved by increasing the batch size and tuning the LR. Another common method is to apply warmup and gradually increase our learning rate. This is beneficial especially when we use very high learning rates and helps with the stability of the training in the early epochs. Finally, another optimization is to apply Cosine Schedule to adjust our LR during the epochs. A big advantage of cosine is that there are no hyper-parameters to optimize, which cuts down our search space.
Here are the additional optimizations applied on top of the baseline recipe. Note that we’ve run multiple experiments to determine the optimal configuration of the parameters:
```python
batch_size=128, # per GPU
lr=0.5,
lr_scheduler='cosineannealinglr',
lr_warmup_epochs=5,
lr_warmup_method='linear',
lr_warmup_decay=0.01,
```
The above optimizations increase our top-1 Accuracy by 0.364 points compared to the baseline. Note that in order to combine the different LR strategies we use the newly introduced [SequentialLR](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html#torch.optim.lr_scheduler.SequentialLR) scheduler.
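A minimal sketch of how these pieces map onto PyTorch schedulers (the scheduler classes are the ones named in this section; the surrounding training loop is elided):

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.9)

# 5 epochs of linear warmup starting at 1% of the base LR (lr_warmup_decay=0.01),
# followed by cosine annealing over the remaining 595 epochs
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=5)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=595)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[5]
)

for epoch in range(600):
    # ... train one epoch ...
    scheduler.step()
```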
TrivialAugment
The original model was trained using basic augmentation transforms such as Random resized crops and horizontal flips. An easy way to improve our accuracy is to apply more complex “Automatic-Augmentation” techniques. The one that performed best for us is TrivialAugment [[9]](https://arxiv.org/abs/2103.10158), which is extremely simple and can be considered “parameter free”, which means it can help us cut down our search space further.
Here is the update applied on top of the previous step:
```python
auto_augment='ta_wide',
```
The use of TrivialAugment increased our top-1 Accuracy by 0.312 points compared to the previous step.
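As a sketch, the ta_wide policy corresponds to the TrivialAugmentWide transform, which can be dropped into a standard training pipeline (the crop size and surrounding transforms mirror the recipe but are illustrative):

```python
import torch
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomResizedCrop(176),
    T.RandomHorizontalFlip(),
    T.TrivialAugmentWide(),            # the 'ta_wide' policy
    T.PILToTensor(),
    T.ConvertImageDtype(torch.float),
])
```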
Long Training
Longer training cycles are beneficial when our recipe contains ingredients that behave randomly. More specifically as we start adding more and more techniques that introduce noise, increasing the number of epochs becomes crucial. Note that at early stages of our exploration, we used relatively short cycles of roughly 200 epochs which was later increased to 400 as we started narrowing down most of the parameters and finally increased to 600 epochs at the final versions of the recipe.
Below we see the update applied on top of the earlier steps:
```python
epochs=600,
```
This further increases our top-1 Accuracy by 1.8 points on top of the previous step. This is the biggest increase we will observe in this iterative process. It’s worth noting that the effect of this single optimization is overstated and somewhat misleading. Just increasing the number of epochs on top of the old baseline won’t yield such significant improvements. Nevertheless the combination of the LR optimizations with strong Augmentation strategies helps the model benefit from longer cycles. It’s also worth mentioning that the reason we introduce the lengthy training cycles so early in the process is because in the next steps we will introduce techniques that require significantly more epochs to provide good results.
Random Erasing
Another data augmentation technique known to help the classification accuracy is Random Erasing [10], [11]. Often paired with Automatic Augmentation methods, it usually yields additional improvements in accuracy due to its regularization effect. In our experiments we tuned only the probability of applying the method via a grid search and found that it’s beneficial to keep its probability at low levels, typically around 10%.
Here is the extra parameter introduced on top of the previous:
```python
random_erase=0.1,
```
Applying Random Erasing increases our Acc@1 by a further 0.190 points.
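A sketch of where this fits in a pipeline (RandomErasing operates on tensors, so it goes after the conversion steps):

```python
import torch
import torchvision.transforms as T

train_transform = T.Compose([
    T.PILToTensor(),
    T.ConvertImageDtype(torch.float),
    T.RandomErasing(p=0.1),
])
```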
Label Smoothing
A good technique to reduce overfitting is to stop the model from becoming overconfident. This can be achieved by softening the ground truth using Label Smoothing [12]. There is a single parameter which controls the degree of smoothing (the higher the stronger) that we need to specify. Though optimizing it via grid search is possible, we found that values around 0.05-0.15 yield similar results, so to avoid overfitting it we used the same value as on the paper that introduced it.
Below we can find the extra config added on this step:
```python
label_smoothing=0.1,
```
We use PyTorch’s newly introduced CrossEntropyLoss label_smoothing parameter and that increases our accuracy by an additional 0.318 points.
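In code, this is a one-liner on the loss:

```python
import torch.nn as nn

# Distribute 10% of the probability mass uniformly over the classes
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
```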
Mixup and Cutmix
Two data augmentation techniques often used to produce SOTA results are Mixup and Cutmix [13], [14]. They both provide strong regularization effects by softening not only the labels but also the images. In our setup we found it beneficial to apply one of them randomly with equal probability. Each is parameterized with a hyperparameter alpha, which controls the shape of the Beta distribution from which the smoothing probability is sampled. We did a very limited grid search, focusing primarily on common values proposed on the papers.
Below you will find the optimal values for the alpha parameters of the two techniques:
```python
mixup_alpha=0.2,
cutmix_alpha=1.0,
```
Applying mixup increases our accuracy by 0.118 points and combining it with cutmix improves it by an additional 0.278 points.
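The recipe itself uses the RandomMixup/RandomCutmix transforms from our training references; purely as an illustration of the idea, a minimal mixup could look like this:

```python
import torch

def mixup(images, targets_one_hot, alpha=0.2):
    # Sample the mixing coefficient from a Beta(alpha, alpha) distribution
    lam = torch.distributions.Beta(alpha, alpha).sample()
    # Blend the batch with a shuffled copy of itself
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_targets = lam * targets_one_hot + (1 - lam) * targets_one_hot[perm]
    return mixed_images, mixed_targets
```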
Weight Decay tuning
Our standard recipe uses L2 regularization to reduce overfitting. The Weight Decay parameter controls the degree of the regularization (the larger the stronger) and is applied universally to all learned parameters of the model by default. In this recipe, we apply two optimizations to the standard approach. First we perform grid search to tune the parameter of weight decay and second we disable weight decay for the parameters of the normalization layers.
Below you can find the optimal configuration of weight decay for our recipe:
```python
weight_decay=2e-05,
norm_weight_decay=0.0,
```
The above update improves our accuracy by a further 0.526 points, providing additional experimental evidence for a known fact that tuning weight decay has significant effects on the performance of the model. Our approach for separating the Normalization parameters from the rest was inspired by ClassyVision’s approach.
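A sketch of how one might build the two parameter groups (the actual implementation lives in our training reference utilities; the norm classes listed here are illustrative):

```python
import torch
import torchvision
from torch import nn

model = torchvision.models.resnet50()

# Put normalization parameters in their own group with zero weight decay
norm_classes = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.GroupNorm, nn.LayerNorm)
norm_params, other_params = [], []
for module in model.modules():
    params = list(module.parameters(recurse=False))
    if isinstance(module, norm_classes):
        norm_params.extend(params)
    else:
        other_params.extend(params)

optimizer = torch.optim.SGD(
    [
        {"params": other_params, "weight_decay": 2e-05},
        {"params": norm_params, "weight_decay": 0.0},
    ],
    lr=0.5,
    momentum=0.9,
)
```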
FixRes mitigations
An important property identified early in our experiments is the fact that the models performed significantly better if the resolution used during validation was increased from the 224x224 of training. This effect is studied in detail on the FixRes paper [5] and two mitigations are proposed: a) one could try to reduce the training resolution so that the accuracy on the validation resolution is maximized or b) one could fine-tune the model on a two-phase training so that it adjusts on the target resolution. Since we didn’t want to introduce a 2-phase training, we went for option a). This means that we reduced the train crop size from 224 and used grid search to find the one that maximizes the validation on resolution of 224x224.
Below you can see the optimal value used on our recipe:
```python
val_crop_size=224,
train_crop_size=176,
```
The above optimization improved our accuracy by an additional 0.160 points and sped up our training by 10%.
It’s worth noting that the FixRes effect still persists, meaning that the model continues to perform better on validation when we increase the resolution. Moreover, further reducing the training crop-size actually hurts the accuracy. This intuitively makes sense because one can only reduce the resolution so much before critical details start disappearing from the picture. Finally, we should note that the above FixRes mitigation seems to benefit models with similar depth to ResNet50. Deeper variants with larger receptive fields seem to be slightly negatively affected (typically by 0.1-0.2 points). Hence we consider this part of the recipe optional. Below we visualize the performance of the best available checkpoints (with the full recipe) for models trained with 176 and 224 resolution:
Exponential Moving Average (EMA)
EMA is a technique that allows one to push the accuracy of a model without increasing its complexity or inference time. It performs an exponential moving average on the model weights and this leads to increased accuracy and more stable models. The averaging happens every few iterations and its decay parameter was tuned via grid search.
Below you can see the optimal values for our recipe:
model_ema=True,
model_ema_steps=32,
model_ema_decay=0.99998,
| https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/ | pytorch blogs |
The use of EMA increases our accuracy by 0.254 points compared to the previous step. Note that TorchVision’s EMA implementation is built on top of PyTorch’s AveragedModel class, with the key difference being that it averages not only the model parameters but also its buffers. Moreover, we have adopted tricks from Pycls which allow us to parameterize the decay in a way that doesn’t depend on the number of epochs.
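A simplified sketch of such an EMA wrapper on top of AveragedModel is shown below. The real TorchVision implementation additionally averages buffers and adjusts the decay, and `model`/`data_loader` are assumed to exist, so treat this only as an illustration:
```python
import torch
from torch.optim.swa_utils import AveragedModel

class ExponentialMovingAverage(AveragedModel):
    def __init__(self, model, decay):
        # avg_fn defines how the running average is updated on each call.
        def ema_avg(avg_param, param, num_averaged):
            return decay * avg_param + (1.0 - decay) * param
        super().__init__(model, avg_fn=ema_avg)

model_ema = ExponentialMovingAverage(model, decay=0.99998)  # model_ema_decay

for step, (images, targets) in enumerate(data_loader):
    # ... forward / backward / optimizer.step() ...
    if step % 32 == 0:  # model_ema_steps=32
        model_ema.update_parameters(model)
```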
Inference Resize tuning
Unlike all other steps of the process, which involved training models with different parameters, this optimization was done on top of the final model. During inference, the image is resized to a specific resolution and then a central 224x224 crop is taken from it. The original recipe used a resize size of 256, which caused a similar discrepancy to the one described in the FixRes paper [5]. By bringing this resize value closer to the target inference resolution, one can improve the accuracy. To select the value, we ran a short grid search over the interval [224, 256] with a step of 8. To avoid overfitting, the value was selected using half of the validation set and confirmed using the other half.
Below you can see the optimal value used on our recipe:
val_resize_size=232,
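The grid search itself can be sketched as follows; `evaluate()` and the two validation halves are placeholders for whatever evaluation utilities you use:
```python
import torchvision.transforms as T

def make_transform(resize_size):
    return T.Compose([T.Resize(resize_size), T.CenterCrop(224), T.ToTensor()])

best_size, best_acc = None, 0.0
for resize_size in range(224, 257, 8):
    # evaluate() is an assumed helper returning top-1 accuracy.
    acc = evaluate(model, val_half_a, make_transform(resize_size))
    if acc > best_acc:
        best_size, best_acc = resize_size, acc

# Confirm the chosen value on the held-out half of the validation set.
confirm_acc = evaluate(model, val_half_b, make_transform(best_size))
```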
The above is an optimization which improved our accuracy by 0.224 points. It’s worth noting that the optimal value for ResNet50 also works best for ResNet101, ResNet152 and ResNeXt50, which hints that it generalizes across models.
[UPDATE] Repeated Augmentation
Repeated Augmentation [15], [16] is another technique which can improve the overall accuracy and has been used by other strong recipes such as those in [6] and [7]. Tal Ben-Nun, a community contributor, has further improved upon our original recipe by proposing training the model with 4 repetitions. His contribution came after the release of this article.
Below you can see the optimal value used on our recipe:
ra_sampler=True,
ra_reps=4,
The above is the final optimization which improved our accuracy by 0.184 points.
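Conceptually, a repeated-augmentation sampler yields each selected index `ra_reps` times, so the same image lands in a batch multiple times with independently drawn augmentations. The hypothetical bare-bones sampler below illustrates the idea only; the reference training scripts use a more complete `RASampler` (with distributed support):
```python
import torch
from torch.utils.data import DataLoader, Sampler

class SimpleRepeatedAugmentationSampler(Sampler):
    # Hypothetical sketch: each sampled index is repeated `reps` times, so the
    # dataset's random transforms are re-drawn for every repetition.
    def __init__(self, dataset_len, reps=4):
        self.dataset_len = dataset_len
        self.reps = reps

    def __iter__(self):
        order = torch.randperm(self.dataset_len).tolist()
        for idx in order:
            for _ in range(self.reps):
                yield idx

    def __len__(self):
        return self.dataset_len * self.reps

# Usage sketch (train_dataset is assumed):
# loader = DataLoader(train_dataset, batch_size=128,
#                     sampler=SimpleRepeatedAugmentationSampler(len(train_dataset), reps=4))
```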
Optimizations that were tested but not adopted
During the early stages of our research, we experimented with additional techniques, configurations and optimizations. Since our target was to keep our recipe as simple as possible, we decided not to include anything that didn’t provide a significant improvement. Here are a few approaches that we tried but that didn’t make it into our final recipe:
* Optimizers: Using more complex optimizers such as Adam, RMSProp or SGD with Nesterov momentum didn’t provide significantly better results than vanilla SGD with momentum.
* LR Schedulers: We tried different LR scheduler schemes such as StepLR and Exponential. Though the latter tends to work better with EMA, it often requires additional hyper-parameters such as the minimum LR to work well. Instead, we just use cosine annealing, decaying the LR down to zero, and choose the checkpoint with the highest accuracy (see the sketch after this list).
* Automatic Augmentations: We’ve tried different augmentation strategies such as AutoAugment and RandAugment. None of these outperformed the simpler parameter-free TrivialAugment.
* Interpolation: Using bicubic or nearest interpolation didn’t provide significantly better results than bilinear.
* Normalization layers: Using Sync Batch Norm didn’t yield significantly better results than using the regular Batch Norm.
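For completeness, here is a minimal sketch of that cosine schedule; the optimizer settings, epoch count and `train_one_epoch` are illustrative placeholders, not the full recipe:
```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.9)  # model assumed
# Cosine annealing decays the LR from its initial value down to zero over
# the whole run, with no extra hyper-parameters to tune.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=600, eta_min=0.0)

for epoch in range(600):
    train_one_epoch(model, optimizer)  # placeholder for the training loop
    scheduler.step()
```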
Acknowledgements
We would like to thank Piotr Dollar, Mannat Singh and Hugo Touvron for providing their insights and feedback during the development of the recipe and for their previous research work on which our recipe is based. Their support was invaluable for achieving the above result. Moreover, we would like to thank Prabhat Roy, Kai Zhang, Yiwen Song, Joel Schlosser, Ilqar Ramazanli, Francisco Massa, Mannat Singh, Xiaoliang Dai, Samuel Gabriel, Allen Goodman and Tal Ben-Nun for their contributions to the Batteries Included project.
References
1. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. “Deep Residual Learning for Image Recognition”
2. Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. “Bag of Tricks for Image Classification with Convolutional Neural Networks”
3. Piotr Dollár, Mannat Singh, Ross Girshick. “Fast and Accurate Model Scaling”
4. Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, Ross Girshick. “Early Convolutions Help Transformers See Better”
5. Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou. “Fixing the train-test resolution discrepancy”
6. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. “Training data-efficient image transformers & distillation through attention”
7. Ross Wightman, Hugo Touvron, Hervé Jégou. “ResNet strikes back: An improved training procedure in timm”
8. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. “Do ImageNet Classifiers Generalize to ImageNet?”
9. Samuel G. Müller, Frank Hutter. “TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation”
10. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. “Random Erasing Data Augmentation”
11. Terrance DeVries, Graham W. Taylor. “Improved Regularization of Convolutional Neural Networks with Cutout”
12. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna. “Rethinking the Inception Architecture for Computer Vision”
13. Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. “mixup: Beyond Empirical Risk Minimization”
14. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo. “CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features”
15. Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry. “Augment your batch: better training with larger batches”
16. Maxim Berman, Hervé Jégou, Andrea Vedaldi, Iasonas Kokkinos, Matthijs Douze. “Multigrain: a unified image embedding for classes and instances”
layout: blog_detail
title: 'Updates & Improvements to PyTorch Tutorials'
author: Team PyTorch
PyTorch.org provides researchers and developers with documentation, installation instructions, latest news, community projects, tutorials, and more. Today, we are introducing usability and content improvements including tutorials in additional categories, a new recipe format for quickly referencing common topics, sorting using tags, and an updated homepage.
Let’s take a look at them in detail.
TUTORIALS HOME PAGE UPDATE
The tutorials home page now provides clear actions that developers can take. For new PyTorch users, there is an easy-to-discover button to take them directly to “A 60 Minute Blitz”. Right next to it, there is a button to view all recipes which are designed to teach specific features quickly with examples.
In addition to the existing left navigation bar, tutorials can now be quickly filtered by multi-select tags. Let’s say you want to view all tutorials related to “Production” and “Quantization”. You can select the “Production” and “Quantization” filters as shown in the image below:
The following additional resources can also be found at the bottom of the Tutorials homepage:
* PyTorch Cheat Sheet
* PyTorch Examples
* Tutorial on GitHub
PYTORCH RECIPES
Recipes are new bite-sized, actionable examples designed to teach researchers and developers how to use specific PyTorch features. Some notable new recipes include:
* Loading Data in PyTorch
* Model Interpretability Using Captum
* How to Use TensorBoard
View the full recipes here.
LEARNING PYTORCH
This section includes tutorials designed for users new to PyTorch. Based on community feedback, we have made updates to the current Deep Learning with PyTorch: A 60 Minute Blitz tutorial, one of our most popular tutorials for beginners. Upon completion, one can understand what PyTorch and neural networks are, and be able to build and train a simple image classification network. Updates include adding explanations to clarify output meanings and linking back to where users can read more in the docs, cleaning up confusing syntax errors, and reconstructing and explaining new concepts for easier readability.
DEPLOYING MODELS IN PRODUCTION
This section includes tutorials for developers looking to take their PyTorch models to production. The tutorials include:
* Deploying PyTorch in Python via a REST API with Flask
* Introduction to TorchScript
* Loading a TorchScript Model in C++
* Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime
FRONTEND APIS
PyTorch provides a number of frontend API features that can help developers to code, debug, and validate their models more efficiently. This section includes tutorials that teach what these features are and how to use them. Some tutorials to highlight:
* Introduction to Named Tensors in PyTorch
* Using the PyTorch C++ Frontend
* Extending TorchScript with Custom C++ Operators
* Extending TorchScript with Custom C++ Classes
* Autograd in C++ Frontend
MODEL OPTIMIZATION
Deep learning models often consume large amounts of memory, power, and compute due to their complexity. This section provides tutorials for model optimization:
* Pruning
* Dynamic Quantization on BERT
* Static Quantization with Eager Mode in PyTorch
PARALLEL AND DISTRIBUTED TRAINING
PyTorch provides features that can accelerate performance in research and production such as native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from Python and C++. This section includes tutorials on parallel and distributed training:
* Single-Machine Model Parallel Best Practices
* Getting started with Distributed Data Parallel
* Getting started with Distributed RPC Framework
* Implementing a Parameter Server Using Distributed RPC Framework
Making these improvements is just the first step of improving PyTorch.org for the community. Please submit your suggestions here.
Cheers,
Team PyTorch
layout: blog_detail
title: "PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever"
We are excited to announce the release of PyTorch® 2.0 which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at compiler level under the hood with faster performance and support for Dynamic Shapes and Distributed.
This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers); Beta includes torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, functorch APIs in the torch.func module; and other Beta/Prototype improvements across various inferences, performance and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.
Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.
This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.
Summary:
* torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
* As an underpinning technology of torch.compile, TorchInductor with NVIDIA and AMD GPUs will rely on the OpenAI Triton deep learning compiler to generate performant code and hide low-level hardware details. OpenAI Triton-generated kernels achieve performance that's on par with hand-written kernels and specialized CUDA libraries such as cuBLAS.
Accelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA). The API is integrated with torch.compile() and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator.
Metal Performance Shaders (MPS) backend provides GPU accelerated PyTorch training on Mac platforms with added support for Top 60 most used ops, bringing coverage to over 300 operators.
Amazon AWS optimizes the PyTorch CPU inference on AWS Graviton3 based C7g instances. PyTorch 2.0 improves inference performance on Graviton compared to the previous releases, including improvements for Resnet50 and Bert.
New prototype features and technologies across TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
| Stable | Beta | Prototype | Performance Improvements |
|---|---|---|---|
| Accelerated PT 2 Transformers | torch.compile | DTensor | CUDA support for 11.7 & 11.8 (deprecating CUDA 11.6) |
| | PyTorch MPS Backend | TensorParallel | Python 3.8 (deprecating Python 3.7) |
| | Scaled dot product attention | 2D Parallel | AWS Graviton3 |
| | functorch | Torch.compile (dynamic=True) | |
| | Dispatchable Collectives | Torch.set_default & torch.device | |
| | X86 quantization backend | | |
| | GNN inference and training performance | | |
*To see a full list of public 2.0, 1.13 and 1.12 feature submissions click here.
Stable Features
[Stable] Accelerated PyTorch 2 Transformers
The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA), extending the inference “fastpath” architecture, previously known as "Better Transformer."
Similar to the “fastpath” architecture, custom kernels are fully integrated into the PyTorch Transformer API – thus, using the native Transformer and MultiHeadAttention API will enable users to:
* transparently see significant speed improvements;
* support many more use cases, including models using Cross-Attention, Transformer Decoders, and training models; and
* continue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.
To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported (see below), with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. Accelerated PyTorch 2 Transformers are integrated with torch.compile(). To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with model = torch.compile(model).
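As an illustration, a direct call to the new operator can look like this (the tensor shapes are illustrative):
```python
import torch
import torch.nn.functional as F

# (batch, num_heads, sequence_length, head_dim)
query = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
key = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
value = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# The highest-performance kernel (e.g. FlashAttention) is picked automatically.
out = F.scaled_dot_product_attention(query, key, value)
```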
We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().
Figure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.
Beta Features
[Beta] torch.compile
torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
Underpinning torch.compile are new technologies – TorchDynamo, AOTAutograd, PrimTorch and TorchInductor: | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
* TorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks and is a significant innovation that was a result of 5 years of our R&D into safe graph capture.
* AOTAutograd overloads PyTorch’s autograd engine as a tracing autodiff for generating ahead-of-time backward traces.
* PrimTorch canonicalizes ~2000+ PyTorch operators down to a closed set of ~250 primitive operators that developers can target to build a complete PyTorch backend. This substantially lowers the barrier of writing a PyTorch feature or backend.
* TorchInductor is a deep learning compiler that generates fast code for multiple accelerators and backends. For NVIDIA and AMD GPUs, it uses OpenAI Triton as a key building block. For Intel CPUs, we generate C++ code using multithreading, vectorized instructions and offloading appropriate operations to mkldnn when possible.
With all the new technologies, torch.compile is able to work 93% of the time across 165 open-source models, and runs 20% faster on average at float32 precision and 36% faster on average at AMP precision.
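Getting started is a one-line change; a minimal sketch (the model choice is illustrative):
```python
import torch
import torchvision.models as models

model = models.resnet50().cuda()
compiled_model = torch.compile(model)  # TorchInductor is the default backend

x = torch.randn(16, 3, 224, 224, device="cuda")
out = compiled_model(x)  # the first call triggers compilation; later calls are fast
```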
For more information, please refer to https://pytorch.org/get-started/pytorch-2.0/ and for TorchInductor CPU with Intel here.
[Beta] PyTorch MPS Backend
MPS backend provides GPU-accelerated PyTorch training on Mac platforms. This release brings improved correctness, stability, and operator coverage. | https://pytorch.org/blog/pytorch-2.0-release/ | pytorch blogs |
MPS backend now includes support for the Top 60 most used ops, along with the most frequently requested operations by the community, bringing coverage to over 300 operators. The major focus of the release was to enable full OpInfo-based forward and gradient mode testing to address silent correctness issues. These changes have resulted in wider adoption of MPS backend by 3rd party networks such as Stable Diffusion, YoloV5, WhisperAI, along with increased coverage for Torchbench networks and Basic tutorials. We encourage developers to update to the latest macOS release to see the best performance and stability on the MPS backend.
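Opting into the backend only requires selecting the mps device; a minimal sketch (the model is a stand-in for your own):
```python
import torch
import torchvision.models as models

# Fall back to CPU when MPS is not available on the machine.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = models.resnet50().to(device)  # any model works; resnet50 is illustrative
x = torch.randn(8, 3, 224, 224, device=device)
out = model(x)  # runs on the Apple-silicon GPU when MPS is available
```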
Links
* MPS Backend
* Developer information
* Accelerated PyTorch training on Mac
* Metal, Metal Performance Shaders & Metal Performance Shaders Graph
[Beta] Scaled dot product attention 2.0
We are thrilled to announce the release of PyTorch 2.0, which introduces a powerful scaled dot product attention function as part of torch.nn.functional. This function includes multiple implementations that can be seamlessly applied depending on the input and hardware in use.
In previous versions of PyTorch, you had to rely on third-party implementations and install separate packages to take advantage of memory-optimized algorithms like FlashAttention. With PyTorch 2.0, all these implementations are readily available by default.
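If you want to constrain which implementation is dispatched, PyTorch 2.0 exposes a context manager for this; to the best of our understanding it is torch.backends.cuda.sdp_kernel, used as sketched below (tensor shapes are illustrative):
```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Restrict dispatch to the FlashAttention implementation only.
with torch.backends.cuda.sdp_kernel(
    enable_flash=True, enable_math=False, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)
```
If none of the enabled kernels supports the given inputs, the call errors out, which makes this context manager a handy way to verify that a specific implementation is actually being used.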