layout: blog_detail title: "Introducing the PlayTorch app: Rapidly Create Mobile AI Experiences" author: PlayTorch Team featured-img: "" In December, we announced PyTorch Live, a toolkit for building AI-powered mobile prototypes in minutes. The initial release included a command-line interface to set up a development environment and an SDK for building AI-powered experiences in React Native. Today, we're excited to share that PyTorch Live will now be known as PlayTorch. This new release provides an improved and simplified developer experience. PlayTorch development is independent from the PyTorch project and the PlayTorch code repository is moving into the Meta Research GitHub organization. A New Workflow: The PlayTorch App
A New Workflow: The PlayTorch App The PlayTorch team is excited to announce that we have partnered with Expo to change the way AI-powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. You will now be able to build cross-platform, AI-powered prototypes right from the browser you are using to read this blog. To make this happen, we are releasing the PlayTorch app, which can run AI-powered experiences built in the Expo Snack web-based code editor.
The PlayTorch app can be downloaded from the Apple App Store and Google Play Store. With the app installed, you can head over to playtorch.dev/snack and write the code for your AI-powered PlayTorch Snack. When you want to try what you’ve built, you can use the PlayTorch app’s QR code scanner to scan the QR code on the Snack page and load the code onto your device. NOTE: PlayTorch Snacks will not work in the Expo Go app. More to Explore in the PlayTorch App AI Demos The PlayTorch app comes with several examples of how you can build AI-powered experiences with a variety of machine learning models, from object detection to natural language processing. See what can be built with the PlayTorch SDK and be inspired to make something of your own as you play with the examples.
Sharing Your Creations Any PlayTorch Snack that you run in the PlayTorch app can be shared with others in an instant. When they open the link on their device, the PlayTorch app will instantly load what you’ve built from the cloud so they can experience it first hand. When you have something you want to share, let us know on Discord or Twitter or embed the PlayTorch Snack on your own webpage. SDK Overhaul We learned a lot from the community after our initial launch in December and have been hard at work over the past several months to make the PlayTorch SDK (formerly known as PyTorch Live) simple, performant, and robust. In our initial version, the SDK relied on config files to define how a model ingested and output data.
Today, we are happy to announce that the next version of our SDK can handle data processing in JavaScript for your prototypes, with the new PlayTorch API that leverages the JavaScript Interface (JSI) to directly call C++ code. Not only have we completely redone the way you interact with models, but we have also greatly expanded the variety of supported model architectures. A New Data Processing API for Prototyping With this JSI API, we now give users direct access to tensors (the data format for machine learning). Instead of only having access to predefined transformations, you can now manipulate tensors however you would like for your prototypes. No more switching back and forth between code and config. You can now write everything in JavaScript and have access to all of the type annotations and autocomplete features available in that language.
Check out our tutorials to see the new Data Processing API in action, take a deeper dive in the API docs, or inspect the code yourself on GitHub. Expanded Use Cases With the new version of the SDK, we have added support for several cutting edge models. Image-to-image transformations are now supported thanks to our robust JSI API, so you can see what your world would look like if it were an anime. Translate French to English with an AI-powered translator using the Seq2Seq model. Use DeepLab V3 to segment images!
Start Playing If you want to start creating AI experiences yourself, head over to playtorch.dev and try out our tutorials. Each tutorial will guide you through building a simple AI powered experience that you can instantly run on your phone and share with others. How to Get Support Join us on Discord, collaborate with us on GitHub, or follow us on Twitter. Got questions or feedback? We’d love to hear from you!
layout: blog_detail
title: 'Overview of PyTorch Autograd Engine'
author: Preferred Networks, Inc.

This blog post is based on PyTorch version 1.8, although it should apply for older versions too, since most of the mechanics have remained constant. To help understand the concepts explained here, it is recommended that you read the awesome blog post by @ezyang: PyTorch internals if you are not familiar with PyTorch architecture components such as ATen or c10d.
What is autograd? Background PyTorch computes the gradient of a function with respect to the inputs by using automatic differentiation. Automatic differentiation is a technique that, given a computational graph, calculates the gradients of the inputs. Automatic differentiation can be performed in two different ways: forward and reverse mode. Forward mode means that we calculate the gradients along with the result of the function, while reverse mode requires us to evaluate the function first, and then we calculate the gradients starting from the output. While both modes have their pros and cons, the reverse mode is the de facto choice since the number of outputs is smaller than the number of inputs, which allows a much more efficient computation. Check [3] to learn more about this. Automatic differentiation relies on a classic calculus formula known as the chain rule. The chain rule allows us to calculate very complex derivatives by splitting them and recombining them later.
Formally speaking, given a composite function f(g(x)), we can calculate its derivative as (f ∘ g)'(x) = f'(g(x)) · g'(x). This result is what makes automatic differentiation work. By combining the derivatives of the simpler functions that compose a larger one, such as a neural network, it is possible to compute the exact value of the gradient at a given point rather than relying on the numerical approximation, which would require multiple perturbations in the input to obtain a value.
To get the intuition of how the reverse mode works, let’s look at a simple function f(x, y) = log(x*y). Figure 1 shows its computational graph, where the inputs x, y on the left flow through a series of operations to generate the output w.

Figure 1: Computational graph of f(x, y) = log(x*y)

The automatic differentiation engine will normally execute this graph. It will also extend it to calculate the derivatives of w with respect to the inputs x, y, and the intermediate result v.
The example function can be decomposed into f and g, where g(x, y) = x*y produces the intermediate result v and f(v) = log(v) produces the output w. Every time the engine executes an operation in the graph, the derivative of that operation is added to the graph to be executed later in the backward pass. Note that the engine knows the derivatives of the basic functions.
In the example above, when multiplying x and y to obtain v, the engine will extend the graph to calculate the partial derivatives of the multiplication by using the multiplication derivative definition that it already knows: ∂v/∂x = y and ∂v/∂y = x. The resulting extended graph is shown in Figure 2, where the MultDerivative node also calculates the product of the resulting gradients by an incoming input gradient to apply the chain rule; this will be seen explicitly in the following operations. Note that the backward graph (green nodes) will not be executed until all the forward steps are completed.
Figure 2: Computational graph extended after executing the multiplication
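As a quick numeric check (this snippet is ours, not part of the original post), autograd reproduces exactly the multiplication derivatives described above:

```python
import torch

# Small check: for v = x * y, the derivatives recorded by the engine are
# dv/dx = y and dv/dy = x.
x = torch.tensor(0.5, requires_grad=True)
y = torch.tensor(0.75, requires_grad=True)
v = x * y

dv_dx, dv_dy = torch.autograd.grad(v, (x, y))
print(dv_dx, dv_dy)  # tensor(0.7500) tensor(0.5000), i.e. y and x
```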
Continuing, the engine now calculates the log(v) operation and extends the graph again with the log derivative, which it knows to be ∂w/∂v = 1/v. This is shown in Figure 3. This operation generates the result that, when propagated backward and multiplied by the multiplication derivative as in the chain rule, generates the derivatives ∂w/∂x = (1/v)·y = 1/x and ∂w/∂y = (1/v)·x = 1/y.
Figure 3: Computational graph extended after executing the logarithm

The original computation graph is extended with a new dummy variable z that is the same as w. The derivative of z with respect to w is 1, as they are the same variable; this trick allows us to apply the chain rule to calculate the derivatives of the inputs. After the forward pass is complete, we start the backward pass by supplying the initial value of 1.0 for ∂z/∂w. This is shown in Figure 4.
Figure 4: Computational graph extended for reverse auto differentiation
Then, following the green graph, we execute the LogDerivative operation that the auto differentiation engine introduced, and multiply its result by ∂z/∂w to obtain the gradient ∂z/∂v as the chain rule states. Next, the multiplication derivative is executed in the same way, and the desired derivatives ∂z/∂x and ∂z/∂y are finally obtained.
Formally, what we are doing here, and what the PyTorch autograd engine also does, is computing a Jacobian-vector product (Jvp) to calculate the gradients of the model parameters, since the model parameters and inputs are vectors. The Jacobian-vector product When we calculate the gradient of a vector-valued function (a function whose inputs and outputs are vectors), we are essentially constructing a Jacobian matrix: the matrix whose entry (i, j) is the partial derivative of the i-th output with respect to the j-th input.
Thanks to the chain rule, multiplying the Jacobian matrix of a function by a vector with the previously calculated gradients of a scalar function results in the gradients of the scalar output with respect to the vector-valued function inputs.
As an example, let’s look at some functions in Python notation to show how the chain rule applies.

from math import log, sin

def f(x1, x2):
    a = x1 * x2
    y1 = log(a)
    y2 = sin(x2)
    return (y1, y2)

def g(y1, y2):
    return y1 * y2
Now, if we derive this by hand using the chain rule and the definition of the derivatives, we obtain the following set of identities that we can directly plug into the Jacobian matrix of f:

∂y1/∂x1 = 1/x1,  ∂y1/∂x2 = 1/x2,  ∂y2/∂x1 = 0,  ∂y2/∂x2 = cos(x2)
Next, let’s consider the gradients for the scalar function g(y1, y2) = y1*y2:

∂g/∂y1 = y2,  ∂g/∂y2 = y1
If we now calculate the transpose-Jacobian vector product obeying the chain rule, with v = (∂g/∂y1, ∂g/∂y2) = (sin(x2), log(x1*x2)), we obtain the following expression:

Jᵀ·v = (sin(x2)/x1,  sin(x2)/x2 + cos(x2)·log(x1*x2))
Evaluating the Jvp for x = (0.5, 0.75) yields the result (1.3633, 0.1912). We can execute the same expression in PyTorch and calculate the gradient of the input:

>>> import torch
>>> x = torch.tensor([0.5, 0.75], requires_grad=True)
>>> y = torch.log(x[0] * x[1]) * torch.sin(x[1])
>>> y.backward(torch.tensor(1.0))
>>> x.grad
tensor([1.3633, 0.1912])
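As a cross-check (this snippet is ours and not part of the original post), we can build the Jacobian of f explicitly with torch.autograd.functional.jacobian and multiply its transpose by the gradient of g; it reproduces the same numbers:

```python
import torch

def f(x):
    # vector-valued function from the example: (log(x1*x2), sin(x2))
    return torch.stack([torch.log(x[0] * x[1]), torch.sin(x[1])])

def g(y):
    # scalar function from the example: y1 * y2
    return y[0] * y[1]

x = torch.tensor([0.5, 0.75])
y = f(x)

J = torch.autograd.functional.jacobian(f, x)  # 2x2 Jacobian of f at x
v = torch.autograd.functional.jacobian(g, y)  # gradient of g at y = f(x)
print(J.T @ v)  # tensor([1.3633, 0.1912]) -- same as x.grad above
```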
The result is the same as our hand-calculated Jacobian-vector product! However, PyTorch never constructed the Jacobian matrix, as it could grow prohibitively large; instead, it created a graph of operations that it traversed backward while applying the Jacobian-vector products defined in tools/autograd/derivatives.yaml. Going through the graph Every time PyTorch executes an operation, the autograd engine constructs the graph to be traversed backward.
The reverse mode auto differentiation starts by adding a scalar variable z at the end of the graph so that z = w, as we saw in the introduction. Its gradient ∂z/∂z = 1 is the initial gradient value that is supplied to the Jvp engine calculation, as we saw in the section above. In PyTorch, the initial gradient is explicitly set by the user when calling the backward method.
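To make that last point concrete, here is a small example of ours (not from the post): the value passed to backward() is the seed that gets propagated through the backward graph, so scaling it scales every resulting gradient.

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.log(x[0] * x[1]) * torch.sin(x[1])

# Seeding with 2.0 instead of 1.0 doubles the gradients computed earlier.
y.backward(torch.tensor(2.0))
print(x.grad)  # approximately tensor([2.7266, 0.3824]), i.e. 2x [1.3633, 0.1912]
```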
Then, the Jvp calculation starts, but it never constructs the Jacobian matrix. Instead, when PyTorch records the computational graph, the derivatives of the executed forward operations are added (Backward Nodes). Figure 5 shows a backward graph generated by the execution of the functions f and g seen before.

Figure 5: Computational Graph extended with the backward pass
Once the forward pass is done, the results are used in the backward pass, where the derivatives in the computational graph are executed. The basic derivatives are stored in the tools/autograd/derivatives.yaml file, and they are not regular derivatives but the Jvp versions of them [3]. They take their primitive function inputs and outputs as parameters, along with the gradient of the function outputs with respect to the final outputs. By repeatedly multiplying the resulting gradients by the next Jvp derivatives in the graph, the gradients all the way up to the inputs are generated following the chain rule.

Figure 6: How the chain rule is applied in backward differentiation
Figure 6 represents the process by showing the chain rule. We started with a value of 1.0 as detailed before which is the already calculated gradient highlighted in green. And we move to the next node in the graph. The backward function registered in derivatives.yaml will calculate the associated
derivative value, highlighted in red, and multiply it by the already calculated gradient highlighted in green (∂z/∂w in our example). By the chain rule this results in ∂z/∂v, which will be the already calculated gradient (green) when we process the next backward node in the graph.
You may also have noticed that in Figure 5 there is a gradient that is generated from two different sources. When two different functions share an input, the gradients with respect to the output are aggregated for that input, and calculations using that gradient can’t proceed unless all the paths have been aggregated together.
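Here is a small illustration of ours (not from the post) of that aggregation: x2 feeds both the multiplication and the sin node in the earlier example, so its gradient is the sum of the contributions flowing back along the two paths.

```python
import torch

x1 = torch.tensor(0.5, requires_grad=True)
x2 = torch.tensor(0.75, requires_grad=True)

out = torch.log(x1 * x2) * torch.sin(x2)
out.backward()

# Contribution through log(x1*x2) is sin(x2)/x2; through sin(x2) it is cos(x2)*log(x1*x2).
expected = (torch.sin(x2) / x2 + torch.cos(x2) * torch.log(x1 * x2)).detach()
print(torch.isclose(x2.grad, expected))  # tensor(True)
```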
Let’s see an example of how the derivatives are stored in PyTorch. Suppose that we are currently processing the backward propagation of the log function, in the LogBackward node in Figure 2. The derivative of log in derivatives.yaml is specified as grad.div(self.conj()). grad is the already calculated gradient and self.conj() is the complex conjugate of the input vector. For complex numbers PyTorch calculates a special derivative called the conjugate Wirtinger derivative [6]. This derivative takes the complex number and its conjugate and, by operating some magic that is described in [6], they are the direction of steepest descent when plugged into optimizers.
This code translates to multiplying the incoming gradient by 1/v, the corresponding green and red squares in Figure 3. Continuing, the autograd engine will execute the next operation, the backward of the multiplication. As before, the inputs are the original function’s inputs and the gradient calculated from the log backward step. This step will keep repeating until we reach the gradients with respect to the inputs and the computation will be finished. The gradient of x2 is only completed once the multiplication and sin gradients are added together. As you can see, we computed the equivalent of the Jvp but without constructing the matrix.
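As a tiny sanity check of the derivatives.yaml entry above (our snippet, not part of the post), the backward of log really does divide the incoming gradient by the input; for real tensors, self.conj() is a no-op.

```python
import torch

v = torch.tensor([0.375], requires_grad=True)
w = torch.log(v)

grad_output = torch.tensor([3.0])   # pretend this is the "already calculated" gradient
w.backward(grad_output)

print(torch.allclose(v.grad, grad_output / v.detach()))  # True: grad.div(self)
```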
In the next post we will dive inside PyTorch code to see how this graph is constructed and where the relevant pieces are, should you want to experiment with it! References
1. https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html
2. https://web.stanford.edu/class/cs224n/readings/gradient-notes.pdf
3. https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf
4. https://mustafaghali11.medium.com/how-pytorch-backward-function-works-55669b3b7c62
5. https://indico.cern.ch/event/708041/contributions/3308814/attachments/1813852/2963725/automatic_differentiation_and_deep_learning.pdf
6. https://pytorch.org/docs/stable/notes/autograd.html#complex-autograd-doc
7. https://cs.ubc.ca/~fwood/CS340/lectures/AD1.pdf (recommended: shows why the backprop is formally expressed with the Jacobian)
layout: blog_detail title: "Case Study: Amazon Ads Uses PyTorch and AWS Inferentia to Scale Models for Ads Processing" author: Yashal Kanungo – Applied Scientist, Kamran Khan - Sr. Technical Product Manager, Shubha Kumbadakone – Sr. Specialist, ML Frameworks featured-img: "" Amazon Ads uses PyTorch, TorchServe, and AWS Inferentia to reduce inference costs by 71% and drive scale out. Amazon Ads helps companies build their brand and connect with shoppers through ads shown both within and beyond Amazon’s store, including websites, apps, and streaming TV content in more than 15 countries. Businesses and brands of all sizes, including registered sellers, vendors, book vendors, Kindle Direct Publishing (KDP) authors, app developers, and agencies can upload their own ad creatives, which can include images, video, audio, and, of course, products sold on Amazon.
To promote an accurate, safe, and pleasant shopping experience, these ads must comply with content guidelines. For example, ads cannot flash on and off, products must be featured in an appropriate context, and images and text should be appropriate for a general audience. To help ensure that ads meet the required policies and standards, we needed to develop scalable mechanisms and tools.
As a solution, we used machine learning (ML) models to surface ads that might need revision. As deep neural networks flourished over the past decade, our data science team began exploring more versatile deep learning (DL) methods capable of processing text, images, audio, or video with minimal human intervention. To that end, we’ve used PyTorch to build computer vision (CV) and natural language processing (NLP) models that automatically flag potentially non-compliant ads. PyTorch is intuitive, flexible, and user-friendly, and has made our transition to using DL models seamless. Deploying these new models on AWS Inferentia-based Amazon EC2 Inf1 instances, rather than on GPU-based instances, reduced our inference latency by 30 percent and our inference costs by 71 percent for the same workloads.
Transition to deep learning Our ML systems paired classical models with word embeddings to evaluate ad text. But our requirements evolved, and as the volume of submissions continued to expand, we needed a method nimble enough to scale along with our business. In addition, our models must be fast and serve ads within milliseconds to provide an optimal customer experience. Over the last decade, DL has become very popular in numerous domains, including natural language, vision, and audio. Because deep neural networks channel data sets through many layers — extracting progressively higher-level features — they can make more nuanced inferences than classical ML models. Rather than simply detecting prohibited language, for example, a DL model can reject an ad for making false claims.
In addition, DL techniques are transferable: a model trained for one task can be adapted to carry out a related task. For instance, a pre-trained neural network can be optimized to detect objects in images and then fine-tuned to identify specific objects that are not allowed to be displayed in an ad. Deep neural networks can automate two of classical ML’s most time-consuming steps: feature engineering and data labeling. Unlike traditional supervised learning approaches, which require exploratory data analysis and hand-engineered features, deep neural networks learn the relevant features directly from the data. DL models can also analyze unstructured data, like text and images, without the preprocessing necessary in classical ML. Deep neural networks scale effectively with more data and perform especially well in applications involving large data sets.
We chose PyTorch to develop our models because it helped us maximize the performance of our systems. With PyTorch, we can serve our customers better while taking advantage of Python’s most intuitive concepts. The programming in PyTorch is object-oriented: it groups processing functions with the data they modify. As a result, our codebase is modular, and we can reuse pieces of code in different applications. In addition, PyTorch’s eager mode allows loops and control structures and, therefore, more complex operations in the model. Eager mode makes it easy to prototype and iterate upon our models, and we can work with various data structures. This flexibility helps us update our models quickly to meet changing business requirements.
“Before this, we experimented with other frameworks that were ‘Pythonic,’ but PyTorch was the clear winner for us here,” said Yashal Kanungo, Applied Scientist. “Using PyTorch was easy because the structure felt native to Python programming, which the data scientists were very familiar with.” Training pipeline Today, we build our text models entirely in PyTorch. To save time and money, we often skip the early stages of training by fine-tuning a pre-trained NLP model for language analysis. If we need a new model to evaluate images or video, we start by browsing PyTorch’s torchvision library, which offers pretrained options for image and video classification, object detection, instance segmentation, and pose estimation. For specialized tasks, we build a custom model from the ground up. PyTorch is perfect for this, because eager mode and the user-friendly front end make it easy to experiment with different architectures.
To learn how to finetune neural networks in PyTorch, head to this tutorial.
Before we begin training, we optimize our model’s hyperparameters, the variables that define the network architecture (for example, the number of hidden layers) and training mechanics (such as learning rate and batch size). Choosing appropriate hyperparameter values is essential, because they will shape the training behavior of the model. We rely on the Bayesian search feature in SageMaker, AWS’s ML platform, for this step. Bayesian search treats hyperparameter tuning as a regression problem: It proposes the hyperparameter combinations that are likely to produce the best results and runs training jobs to test those values. After each trial, a regression algorithm determines the next set of hyperparameter values to test, and performance improves incrementally.
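For illustration only, the sketch below shows what a Bayesian hyperparameter search can look like with the SageMaker Python SDK. The training script, IAM role, metric regex, and parameter ranges are placeholders we made up, not the actual Amazon Ads setup, and exact arguments may differ across SDK versions.

```python
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# Hypothetical PyTorch training job; entry_point and role are placeholders.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.8.1",
    py_version="py3",
)

# Bayesian search over learning rate and batch size, as described above.
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:f1",
    metric_definitions=[{"Name": "validation:f1", "Regex": "val_f1=([0-9\\.]+)"}],
    hyperparameter_ranges={
        "lr": ContinuousParameter(1e-5, 1e-3),
        "batch_size": IntegerParameter(16, 128),
    },
    strategy="Bayesian",
    max_jobs=20,
    max_parallel_jobs=4,
)

tuner.fit({"train": "s3://my-bucket/train", "validation": "s3://my-bucket/val"})
```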
We prototype and iterate upon our models using SageMaker Notebooks. Eager mode lets us prototype models quickly by building a new computational graph for each training batch; the sequence of operations can change from iteration to iteration to accommodate different data structures or to jibe with intermediate results. That frees us to adjust the network during training without starting over from scratch. These dynamic graphs are particularly valuable for recursive computations based on variable sequence lengths, such as the words, sentences, and paragraphs in an ad that are analyzed with NLP.
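The toy module below (ours, not one of Amazon's models) illustrates why dynamic graphs help here: the forward pass is plain Python, so the number of loop iterations can depend on how many sentences and tokens each ad happens to contain.

```python
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.encoder = nn.GRUCell(dim, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, sentences):
        # `sentences` is a list of (num_tokens, dim) tensors of varying lengths.
        h = torch.zeros(1, self.head.in_features)
        for sentence in sentences:          # iteration count differs per ad
            for token in sentence:
                h = self.encoder(token.unsqueeze(0), h)
        return self.head(h)

model = SentenceScorer()
ad = [torch.randn(5, 16), torch.randn(9, 16)]   # two sentences, different lengths
print(model(ad).shape)                           # torch.Size([1, 1])
```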
When we’ve finalized the model architecture, we deploy training jobs on SageMaker. PyTorch helps us develop large models faster by running numerous training jobs at the same time. PyTorch’s Distributed Data Parallel (DDP) module replicates a single model across multiple interconnected machines within SageMaker, and all the processes run forward passes simultaneously on their own unique portion of the data set. During the backward pass, the module averages the gradients of all the processes, so each local model is updated with the same parameter values. Model deployment pipeline When we deploy the model in production, we want to ensure lower inference costs without impacting prediction accuracy. Several PyTorch features and AWS services have helped us address the challenge.
The flexibility of a dynamic graph enriches training, but in deployment we want to maximize performance and portability. An advantage of developing NLP models in PyTorch is that out of the box, they can be traced into a static sequence of operations by TorchScript, a subset of Python specialized for ML applications. TorchScript converts PyTorch models to a more efficient, production-friendly intermediate representation (IR) graph that is easily compiled. We run a sample input through the model, and TorchScript records the operations executed during the forward pass. The resulting IR graph can run in high-performance environments, including C++ and other multithreaded Python-free contexts, and optimizations such as operator fusion can speed up the runtime.
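A minimal tracing sketch (the toy model is ours, not one of the production models described here): torch.jit.trace runs the example input through the module and records the executed operations into a TorchScript graph that can be saved and reloaded outside Python.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
example = torch.randn(1, 128)

traced = torch.jit.trace(model, example)   # records ops from the forward pass
traced.save("model_traced.pt")             # portable IR, loadable from C++ too
loaded = torch.jit.load("model_traced.pt")
print(torch.allclose(loaded(example), model(example)))  # True
```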
Neuron SDK and AWS Inferentia powered compute We deploy our models on Amazon EC2 Inf1 instances powered by AWS Inferentia, Amazon's first ML silicon designed to accelerate deep learning inference workloads. Inferentia has been shown to reduce inference costs by up to 70% compared to Amazon EC2 GPU-based instances. We used the AWS Neuron SDK — a set of software tools used with Inferentia — to compile and optimize our models for deployment on EC2 Inf1 instances. The code snippet below shows how to compile a Hugging Face BERT model with Neuron. Like torch.jit.trace(), neuron.trace() records the model’s operations on an example input during the forward pass to build a static IR graph.

```python
import torch
from transformers import BertModel, BertTokenizer
import torch.neuron

tokenizer = BertTokenizer.from_pretrained("path to saved vocab")
model = BertModel.from_pretrained("path to the saved model", return_dict=False)

inputs = tokenizer("sample input", return_tensors="pt")

neuron_model = torch.neuron.trace(
    model,
    example_inputs=(inputs["input_ids"], inputs["attention_mask"]),
    verbose=1,
)
output = neuron_model(*(inputs["input_ids"], inputs["attention_mask"]))
```

Autocasting and recalibration Under the hood, Neuron optimizes our models for performance by autocasting them to a smaller data type. As a default, most applications represent neural network values in the 32-bit single-precision floating point (FP32) number format. Autocasting the model to a 16-bit format — half-precision floating point (FP16) or Brain Floating Point (BF16) — reduces a model’s memory footprint and execution time. In our case, we decided to use FP16 to optimize for performance while maintaining high accuracy.
Autocasting to a smaller data type can, in some cases, trigger slight differences in the model’s predictions. To ensure that the model’s accuracy is not affected, Neuron compares the performance metrics and predictions of the FP16 and FP32 models. When autocasting diminishes the model’s accuracy, we can tell the Neuron compiler to convert only the weights and certain data inputs to FP16, keeping the rest of the intermediate results in FP32. In addition, we often run a few iterations with the training data to recalibrate our autocasted models. This process is much less intensive than the original training.
Deployment To analyze multimedia ads, we run an ensemble of DL models. All ads uploaded to Amazon are run through specialized models that assess every type of content they include: images, video and audio, headlines, texts, backgrounds, and even syntax, grammar, and potentially inappropriate language. The signals we receive from these models indicate whether or not an advertisement complies with our criteria.
Deploying and monitoring multiple models is significantly complex, so we depend on TorchServe, SageMaker’s default PyTorch model serving library. Jointly developed by Facebook’s PyTorch team and AWS to streamline the transition from prototyping to production, TorchServe helps us deploy trained PyTorch models at scale without having to write custom code. It provides a secure set of REST APIs for inference, management, metrics, and explanations. With features such as multi-model serving, model versioning, ensemble support, and automatic batching, TorchServe is ideal for supporting our immense workload. You can read more about deploying your Pytorch models on SageMaker with native TorchServe integration in this blog post.
In some use cases, we take advantage of PyTorch’s object-oriented programming paradigm to wrap multiple DL models into one parent object — a PyTorch nn.Module — and serve them as a single ensemble. In other cases, we use TorchServe to serve individual models on separate SageMaker endpoints, running on AWS Inf1 instances. Custom handlers We particularly appreciate that TorchServe allows us to embed our model initialization, preprocessing, inferencing, and post processing code in a single Python script, handler.py, which lives on the server. This script — the handler —preprocesses the un-labeled data from an ad, runs that data through our models, and delivers the resulting inferences to downstream systems. TorchServe provides several default handlers that load weights and architecture and prepare the model to run on a particular device. We can bundle all the additional required artifacts, such as vocabulary files or label maps, with the model in a single archive file.
When we need to deploy models that have complex initialization processes or that originated in third-party libraries, we design custom handlers in TorchServe. These let us load any model, from any library, with any required process. The following snippet shows a simple handler that can serve Hugging Face BERT models on any SageMaker hosting endpoint instance.

```python
import os

import torch
import torch.neuron
from ts.torch_handler.base_handler import BaseHandler
import transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer

class MyModelHandler(BaseHandler):
    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        model_dir = properties.get("model_dir")
        serialized_file = self.manifest["model"]["serializedFile"]
        model_pt_path = os.path.join(model_dir, serialized_file)
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_dir, do_lower_case=True
        )
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_dir
        )

    def preprocess(self, data):
        input_text = data.get("data")
        if input_text is None:
            input_text = data.get("body")
        inputs = self.tokenizer.encode_plus(
            input_text,
            max_length=512,  # set a maximum sequence length appropriate for the model
            pad_to_max_length=True,
            add_special_tokens=True,
            return_tensors='pt',
        )
        return inputs

    def inference(self, inputs):
        predictions = self.model(**inputs)
        return predictions

    def postprocess(self, output):
        return output
```
Batching Hardware accelerators are optimized for parallelism, and batching — feeding a model multiple inputs in a single step — helps saturate all available capacity, typically resulting in higher throughput. Excessively high batch sizes, however, can increase latency with minimal improvement in throughput. Experimenting with different batch sizes helps us identify the sweet spot for our models and hardware accelerator. We run experiments to determine the best batch size for our model size, payload size, and request traffic patterns. The Neuron compiler now supports variable batch sizes. Previously, tracing a model hardcoded the predefined batch size, so we had to pad our data, which can waste compute, slow throughput, and exacerbate latency. Inferentia is optimized to maximize throughput for small batches, reducing latency by easing the load on the system.
Parallelism Model parallelism on multi-cores also improves throughput and latency, which is crucial for our heavy workloads. Each Inferentia chip contains four NeuronCores that can either run separate models simultaneously or form a pipeline to stream a single model. In our use case, the data parallel configuration offers the highest throughput at the lowest cost, because it scales out concurrent processing requests. Monitoring It is critical that we monitor the accuracy of our inferences in production. Models that initially make good predictions can eventually degrade in deployment as they are exposed to a wider variety of data. This phenomenon, called model drift, usually occurs when the input data distributions or the prediction targets change.
We use SageMaker Model Monitor to track parity between the training and production data. Model Monitor notifies us when predictions in production begin to deviate from the training and validation results. Thanks to this early warning, we can restore accuracy — by retraining the model if necessary — before our advertisers are affected. To track performance in real time, Model Monitor also sends us metrics about the quality of predictions, such as accuracy, F-scores, and the distribution of the predicted classes. To determine if our application needs to scale, TorchServe logs resource utilization metrics for the CPU, Memory, and Disk at regular intervals; it also records the number of requests received versus the number served. For custom metrics, TorchServe offers a Metrics API.
A rewarding result Our DL models, developed in PyTorch and deployed on Inferentia, sped up our ads analysis while cutting costs. Starting with our first explorations in DL, programming in PyTorch felt natural. Its user-friendly features helped smooth the course from our early experiments to the deployment of our multimodal ensembles. PyTorch lets us prototype and build models quickly, which is vital as our advertising service evolves and expands. For an added benefit, PyTorch works seamlessly with Inferentia and our AWS ML stack. We look forward to building more use cases with PyTorch, so we can continue to serve our clients accurate, real-time results.
layout: blog_detail
title: 'PyTorch feature classification changes'
author: Team PyTorch

Traditionally features in PyTorch were classified as either stable or experimental with an implicit third option of testing bleeding edge features by building master or through installing nightly builds (available via prebuilt whls). This has, in a few cases, caused some confusion around the level of readiness, commitment to the feature and backward compatibility that can be expected from a user perspective. Moving forward, we’d like to better classify the 3 types of features as well as define explicitly here what each means from a user perspective. New Feature Designations We will continue to have three designations for features but, as mentioned, with a few changes: Stable, Beta (previously Experimental) and Prototype (previously Nightlies). Below is a brief description of each and a comment on the backward compatibility expected:
Stable Nothing changes here. A stable feature means that the user value-add is or has been proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption. Level of commitment: We expect to maintain these features long term and generally there should be no major performance limitations, gaps in documentation and we also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).
Beta We previously called these features ‘Experimental’ and we found that this created confusion amongst some of the users. In the case of a Beta level feature, the value add, similar to a Stable feature, has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works and is documented. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage across operators is not yet complete. Level of commitment: We are committing to seeing the feature through to the Stable classification. We are however not committing to Backwards Compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.
Prototype Previously these were features that were known about by developers who paid close attention to RFCs and to features that land in master. In this case the feature is not available as part of binary distributions like PyPI or Conda (except maybe behind run-time flags), but we would like to get high bandwidth partner feedback ahead of a real release in order to gauge utility and any changes we need to make to the UX. To test these kinds of features we would, depending on the feature, recommend building from master or using the nightly whls that are made available on pytorch.org. For each prototype feature, a pointer to draft docs or other instructions will be provided.
Level of commitment: We are committing to gathering high bandwidth feedback only. Based on this feedback and potential further engagement between community members, we as a community will decide if we want to upgrade the level of commitment or to fail fast. Additionally, while some of these features might be more speculative (e.g. new Frontend APIs), others have obvious utility (e.g. model optimization) but may be in a state where gathering feedback outside of high bandwidth channels is not practical, e.g. the feature may be in an earlier state, may be moving fast (PRs are landing too quickly to catch a major release) and/or generally active development is underway. What changes for current features? First and foremost, you can find these designations on pytorch.org/docs. We will also be linking any early stage features here for clarity. Additionally, the following features will be reclassified under this new rubric:
- High Level Autograd APIs: Beta (was Experimental)
- Eager Mode Quantization: Beta (was Experimental)
- Named Tensors: Prototype (was Experimental)
- TorchScript/RPC: Prototype (was Experimental)
- Channels Last Memory Layout: Beta (was Experimental)
- Custom C++ Classes: Beta (was Experimental)
- PyTorch Mobile: Beta (was Experimental)
- Java Bindings: Beta (was Experimental)
- Torch.Sparse: Beta (was Experimental)

Cheers, Joe, Greg, Woo & Jessica
layout: blog_detail
title: 'Introducing TorchRec, a library for modern production recommendation systems'
author: Meta AI - Donny Greenberg, Colin Taylor, Dmytro Ivchenko, Xing Liu, Anirudh Sudarshan
featured-img: ''

We are excited to announce TorchRec, a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production.
How did we get here? Recommendation Systems (RecSys) comprise a large footprint of production-deployed AI today, but you might not know it from looking at Github. Unlike areas like Vision and NLP, much of the ongoing innovation and development in RecSys is behind closed company doors. For academic researchers studying these techniques or companies building personalized user experiences, the field is far from democratized. Further, RecSys as an area is largely defined by learning models over sparse and/or sequential events, which has large overlaps with other areas of AI. Many of the techniques are transferable, particularly for scaling and distributed execution. A large portion of the global investment in AI is in developing these RecSys techniques, so cordoning them off blocks this investment from flowing into the broader AI field.
By mid-2020, the PyTorch team received a lot of feedback that there hasn't been a large-scale production-quality recommender systems package in the open-source PyTorch ecosystem. While we were trying to find a good answer, a group of engineers at Meta wanted to contribute Meta’s production RecSys stack as a PyTorch domain library, with a strong commitment to growing an ecosystem around it. This seemed like a good idea that benefits researchers and companies across the RecSys domain. So, starting from Meta’s stack, we began modularizing and designing a fully-scalable codebase that is adaptable for diverse recommendation use-cases. Our goal was to extract the key building blocks from across Meta’s software stack to simultaneously enable creative exploration and scale. After nearly two years, a battery of benchmarks, migrations, and testing across Meta, we’re excited to finally embark on this journey together with the RecSys community. We want this package to open a dialogue and collaboration across the RecSys industry, starting with Meta as the first sizable contributor.
Introducing TorchRec TorchRec includes a scalable low-level modeling foundation alongside rich batteries-included modules. We initially target “two-tower” ([1], [2]) architectures that have separate submodules to learn representations of candidate items and the query or context. Input signals can be a mix of floating point “dense” features or high-cardinality categorical “sparse” features that require large embedding tables to be trained. Efficient training of such architectures involves combining data parallelism that replicates the “dense” part of computation and model parallelism that partitions large embedding tables across many nodes. In particular, the library includes:
- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.
- Optimized RecSys kernels powered by FBGEMM, including support for sparse and quantized operations.
- A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.
- A planner which can automatically generate optimized sharding plans for models.
- Pipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.
- GPU inference support.
- Common modules for RecSys, such as models and public datasets (Criteo & Movielens).

To showcase the flexibility of this tooling, let’s look at the following code snippet, pulled from our DLRM Event Prediction example:

```python
# Specify the sparse embedding layers
eb_configs = [
    EmbeddingBagConfig(
        name=f"t_{feature_name}",
        embedding_dim=64,
        num_embeddings=100_000,
        feature_names=[feature_name],
    )
    for feature_idx, feature_name in enumerate(DEFAULT_CAT_NAMES)
]

# Import and instantiate the model with the embedding configuration
# The "meta" device indicates lazy instantiation, with no memory allocated
train_model = DLRM(
    embedding_bag_collection=EmbeddingBagCollection(
        tables=eb_configs, device=torch.device("meta")
    ),
    dense_in_features=len(DEFAULT_INT_NAMES),
    dense_arch_layer_sizes=[512, 256, 64],
    over_arch_layer_sizes=[512, 512, 256, 1],
    dense_device=device,
)

# Distribute the model over many devices, just as one would with DDP.
model = DistributedModelParallel(
    module=train_model,
    device=device,
)

optimizer = torch.optim.SGD(params, lr=args.learning_rate)

# Optimize the model in a standard loop just as you would any other model!
# Or, you can use the pipeliner to synchronize communication and compute
for epoch in range(epochs):
    # Train
```
Scaling Performance TorchRec has state-of-the-art infrastructure for scaled Recommendations AI, powering some of the largest models at Meta. It was used to train a 1.25 trillion parameter model, pushed to production in January, and a 3 trillion parameter model which will be in production soon. This should be a good indication that PyTorch is fully capable of the largest scale RecSys problems in industry. We’ve heard from many in the community that sharded embeddings are a pain point. TorchRec cleanly addresses that. Unfortunately it is challenging to provide large-scale benchmarks with public datasets, as most open-source benchmarks are too small to show performance at scale.
Looking ahead Open-source and open-technology have universal benefits. Meta is seeding the PyTorch community with a state-of-the-art RecSys package, with the hope that many join in on building it forward, enabling new research and helping many companies. The team behind TorchRec plan to continue this program indefinitely, building up TorchRec to meet the needs of the RecSys community, to welcome new contributors, and to continue to power personalization at Meta. We’re excited to begin this journey and look forward to contributions, ideas, and feedback! References
[1] Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations
[2] DLRM: An advanced, open source deep learning recommendation model
layout: blog_detail title: "Scaling PyTorch models on Cloud TPUs with FSDP" author: Ronghang Hu, Vaibhav Singh, Jack Cao, Milad Mohammadi, Yeounoh Chung, Shauheen Zahirazami, Ross Girshick featured-img: "/assets/images/scaling-pytorch-models-on-cloud-tpus-with-fsdp.jpg" Introduction The research community has witnessed a lot of successes with large models across NLP, computer vision, and other domains in recent years. Many of these successes were enabled by Cloud TPUs -- which are powerful hardware for distributed training. To support TPUs in PyTorch, the PyTorch/XLA library provides a backend for XLA devices (most notably TPUs) and lays the groundwork for scaling large PyTorch models on TPUs. However, most existing modeling scaling tools in the PyTorch ecosystem assume GPU (or CPU) devices, often depend on specific features in CUDA, and do not work directly on TPUs. The lack of scaling tools makes it challenging to build large models that cannot fit into the memory of a single TPU chip.
To support model scaling on TPUs, we implemented the widely-adopted Fully Sharded Data Parallel (FSDP) algorithm for XLA devices as part of the PyTorch/XLA 1.12 release. We provide an FSDP interface with a similar high-level design to the CUDA-based PyTorch FSDP class while also handling several restrictions in XLA (see Design Notes below for more details). This FSDP interface allowed us to easily build models with e.g. 10B+ parameters on TPUs and has enabled many research explorations. Using Fully Sharded Data Parallel (FSDP) in PyTorch/XLA We provide a wrapper class XlaFullyShardedDataParallel over a given PyTorch model to shard its parameters across data-parallel workers. An example usage is as follows:

```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

model = FSDP(my_module)
optim = torch.optim.Adam(model.parameters(), lr=0.0001)
output = model(x, y)
loss = output.sum()
loss.backward()
optim.step()
```

Wrapping an nn.Module instance with XlaFullyShardedDataParallel enables the ZeRO-2 algorithm on it, where its gradients and the optimizer states are sharded for the entire training process. During its forward and backward passes, the full parameters of the wrapped module are first reconstructed from their corresponding shards for computation.
Nested FSDP wrapping can be used to further save memory. This allows the model to store only the full parameters of one individual layer at any given time. For nested FSDP, one should first wrap the individual submodules with an inner FSDP before wrapping the base model with an outer FSDP; having an outer wrapper ensures that any leftover parameters are handled, corresponding to the ZeRO-3 algorithm. Nested FSDP wrapping can be applied at any depth of submodules and there can be more than 2 layers of nesting.
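A rough sketch of nested wrapping under those rules (the layer sizes and structure here are made up for illustration): each block gets an inner FSDP wrapper first, and the outer wrapper then picks up whatever parameters remain unwrapped.

```python
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()

# Inner wrapping: each layer is sharded individually.
blocks = [FSDP(nn.Linear(1024, 1024).to(device)) for _ in range(8)]

# Outer wrapping: handles the remaining (unwrapped) head parameters.
model = FSDP(nn.Sequential(*blocks, nn.Linear(1024, 10).to(device)))
```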
Model checkpoint saving and loading for models and optimizers can be done like before by saving and loading their .state_dict(). Meanwhile, each training process should save its own checkpoint file of the sharded model parameters and optimizer states, and load the checkpoint file for the corresponding rank when resuming (regardless of ZeRO-2 or ZeRO-3, i.e. nested wrapping or not). A command line tool and a Python interface are provided to consolidate the sharded model checkpoint files together into a full/unsharded model checkpoint file.

Gradient checkpointing (also referred to as "activation checkpointing" or "rematerialization") is another common technique for model scaling and can be used in conjunction with FSDP. We provide checkpoint_module, a wrapper function over a given nn.Module instance for gradient checkpointing (based on torch_xla.utils.checkpoint.checkpoint).
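As a sketch of the per-rank checkpointing scheme described above (our example, reusing the model and optim objects from the earlier usage snippet; the path and filename pattern are placeholders), each process can write its own shard, and the consolidation tool shown later merges them:

```python
import torch_xla.core.xla_model as xm

rank = xm.get_ordinal()
world_size = xm.xrt_world_size()

ckpt = {
    "model": model.state_dict(),        # sharded parameters owned by this rank
    "optimizer": optim.state_dict(),    # sharded optimizer states of this rank
}
# master_only=False so that every rank writes its own file.
xm.save(ckpt, f"/tmp/final_ckpt_rank-{rank}-of-{world_size}.pth", master_only=False)
```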
The MNIST and ImageNet examples below provide illustrative usages of (plain or nested) FSDP, saving and consolidation of model checkpoints, as well as gradient checkpointing. Starting examples of FSDP in PyTorch/XLA Training MNIST and ImageNet with FSDP MNIST and ImageNet classification can often be used as starting points to build more complicated deep learning models. We provide the following FSDP examples on these two datasets:
- MNIST: test/test_train_mp_mnist_fsdp_with_ckpt.py (it also illustrates checkpoint saving and consolidation)
- ImageNet: test/test_train_mp_imagenet_fsdp.py
A comparison of them with the vanilla data-parallel examples of MNIST and ImageNet illustrates how to adapt a training script to use FSDP. A major distinction to keep in mind is that when stepping the optimizer on an FSDP-wrapped model, one should directly call optimizer.step() instead of xm.optimizer_step(optimizer). The latter reduces the gradients across ranks, which is not what we need in FSDP, where the gradients are already reduced and sharded (from a reduce-scatter op in its backward pass). Installation FSDP is available from the PyTorch/XLA 1.12 and newer nightly releases. Please refer to https://github.com/pytorch/xla#-available-images-and-wheels for a guide on installation as well as Cloud TPU allocation. Then clone PyTorch/XLA repo on a TPU VM as follows
```bash
mkdir -p ~/pytorch && cd ~/pytorch
git clone --recursive https://github.com/pytorch/xla.git
cd ~/

# Train MNIST on v3-8 TPU; it gets around 98.9 accuracy for 2 epochs:
python3 ~/pytorch/xla/test/test_train_mp_mnist_fsdp_with_ckpt.py \
  --batch_size 16 --drop_last --num_epochs 2 \
  --use_nested_fsdp

# The script above automatically tests consolidation of the sharded model
# checkpoints at the end. You can also manually consolidate the sharded
# checkpoint files via:
python3 -m torch_xla.distributed.fsdp.consolidate_sharded_ckpts \
  --ckpt_prefix /tmp/mnist-fsdp/final_ckpt \
  --ckpt_suffix "_rank-*-of-*.pth"

# Train ImageNet with ResNet-50 on v3-8 TPU; it gets around 75.9 accuracy for
# 100 epochs, same as what one would get without using FSDP. Download and
# preprocess the ImageNet-1k dataset to /datasets/imagenet-1k first:
python3 ~/pytorch/xla/test/test_train_mp_imagenet_fsdp.py \
  --datadir /datasets/imagenet-1k --drop_last \
  --model resnet50 --test_set_batch_size 64 --eval_interval 10 \
  --lr 0.4 --batch_size 128 --num_warmup_epochs 5 \
  --lr_scheduler_divide_every_n_epochs 30 --lr_scheduler_divisor 10 \
  --num_epochs 100 \
  --use_nested_fsdp
```

You can also explore other options in these two examples, such as --use_gradient_checkpointing to apply gradient checkpointing (i.e. activation checkpointing) on the ResNet blocks, or --compute_dtype bfloat16 to perform forward and backward passes in bfloat16 precision. Examples on large-scale models When building large models on TPUs, we often need to be aware of the memory constraints (e.g. 16 GB per core in TPU v3 and 32 GB per chip in TPU v4). For large models that cannot fit into a single TPU memory or the host CPU memory, one should use nested FSDP to implement the ZeRO-3 algorithm and interleave submodule construction with inner FSDP wrapping, so that the full model never needs to be stored in memory during construction.
We illustrate these cases in https://github.com/ronghanghu/ptxla_scaling_examples, which provides examples of training a Vision Transformer (ViT) model with 10B+ parameters on a TPU v3 pod (with 128 cores) as well as other cases.
Design Notes
One might wonder why we need to develop a separate FSDP class in PyTorch/XLA instead of directly reusing PyTorch's FSDP class or extending it to the XLA backend. The main motivation behind a separate FSDP class in PyTorch/XLA is that the native PyTorch FSDP class heavily relies on CUDA features that are not supported by XLA devices, while XLA also has several unique characteristics that need special handling. These distinctions require a different implementation of FSDP that is much easier to build in a separate class.
Changes in API calls
One prominent distinction is that the native PyTorch FSDP is built upon separate CUDA streams for asynchronous execution in eager mode, while PyTorch/XLA runs in lazy mode and also does not support streams. In addition, TPU requires that all devices homogeneously run the same program. As a result, in the PyTorch/XLA FSDP implementation, CUDA calls and per-process heterogeneity need to be replaced by XLA APIs and alternative homogeneous implementations.
Tensor Storage Handling
Another prominent distinction is how to free a tensor's storage, which is much harder in XLA than in CUDA. To implement ZeRO-3, one needs to free the storage of the full parameters after a module's forward pass, so that the next module can reuse this memory buffer for subsequent computation. PyTorch's FSDP accomplishes this on CUDA by freeing the actual storage of a parameter p via p.data.storage().resize_(0). However, XLA tensors do not have this .storage() handle, given that the XLA HLO IRs are completely functional and do not provide any ops to deallocate a tensor or resize its storage. Below the PyTorch interface, only the XLA compiler can decide when to free the TPU device memory corresponding to an XLA tensor, and a prerequisite is that the memory can only be released when the tensor object gets deallocated in Python -- which cannot happen in FSDP because these parameter tensors are referenced as module attributes and also saved by PyTorch autograd for the backward pass.
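For contrast, the CUDA-side trick mentioned above looks roughly like the following; the re-allocation step is our own illustrative addition rather than a quote from PyTorch's FSDP source.
```python
import torch

# A stand-in "parameter": in real FSDP this is an nn.Parameter of the wrapped module.
p = torch.nn.Parameter(torch.randn(1024, 1024, device="cuda"))
full_numel = p.data.numel()

p.data.storage().resize_(0)           # free the full parameter's CUDA storage after forward
# ... later, before the backward-pass all-gather refills it, re-allocate the storage:
p.data.storage().resize_(full_numel)
```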
Our solution to this issue is to split a tensor's value properties from its autograd Variable properties, and to free an nn.Parameter tensor by setting its .data attribute to a dummy scalar of size 1. This way the actual data tensor for the full parameter gets dereferenced in Python, so that XLA can recycle its memory for other computation, while autograd can still trace the base nn.Parameter as a weak reference to the parameter data. To get this to work, one also needs to handle views over the parameters, as views in PyTorch also hold references to their actual data (this required fixing a shape-related issue with views in PyTorch/XLA).
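A heavily simplified sketch of this .data-swapping idea follows; the helper names are hypothetical, and the real implementation in torch_xla.distributed.fsdp additionally handles sharding metadata, parameter views, and autograd bookkeeping.
```python
import torch
import torch.nn as nn

def free_full_param(p: nn.Parameter) -> None:
    # Point .data at a dummy size-1 tensor. The full data tensor is no longer
    # referenced from Python, so the XLA compiler may recycle its device memory,
    # while the nn.Parameter object itself (tracked by autograd) stays alive.
    p.data = torch.zeros(1, dtype=p.dtype, device=p.device)

def rebuild_full_param(p: nn.Parameter, gathered: torch.Tensor) -> None:
    # Re-attach a freshly all-gathered full tensor before the parameter is used again.
    p.data = gathered
```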
Working with the XLA compiler
The solution above should be enough to free full parameters if the XLA compiler faithfully preserves the operations and their execution order in our PyTorch program. But there is another problem -- XLA attempts to optimize the program to speed up its execution by applying common subexpression elimination (CSE) to the HLO IRs. In a naive implementation of FSDP, the XLA compiler typically eliminates the second all-gather in the backward pass, which reconstructs the full parameters, when it sees that it is a repeated computation from the forward pass, and instead directly holds and reuses the full parameters we want to free up after the forward pass. To guard against this undesired compiler behavior, we introduced the optimization barrier op into PyTorch/XLA and used it to stop the second all-gather from being eliminated. This optimization barrier is also applied to a similar case in gradient checkpointing to prevent CSE between forward and backward passes from eliminating the rematerialization.
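For reference, applying the barrier looks roughly like the sketch below. We believe PyTorch/XLA exposes this as an in-place xm.optimization_barrier_ call over a list of tensors, but treat the exact name and signature as an assumption and consult the torch_xla documentation.
```python
import torch_xla.core.xla_model as xm

def guard_from_cse(tensors):
    # Insert an XLA optimization barrier over these tensors so the compiler cannot
    # merge their producing computation (e.g. the backward-pass all-gather) with an
    # identical computation from the forward pass.
    xm.optimization_barrier_(tensors)
    return tensors
```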
In the future, if the distinctions between CUDA and XLA become less prominent than they are today, it could be worth considering merging the PyTorch/XLA FSDP with the native PyTorch FSDP to provide a unified interface.
Acknowledgments
Thanks to Junmin Hao from AWS for reviewing the PyTorch/XLA FSDP pull request. Thanks to Brian Hirsh from the Meta PyTorch team for support on the PyTorch core issues. Thanks to Isaack Karanja, Will Cromar, and Blake Hechtman from Google for support on GCP, XLA, and TPU issues. Thanks to Piotr Dollar, Wan-Yen Lo, Alex Berg, Ryan Mark, Kaiming He, Xinlei Chen, Saining Xie, Shoubhik Debnath, Min Xu, and Vaibhav Aggarwal from Meta FAIR for various TPU-related discussions.
layout: blog_detail
title: "Accelerated Diffusers with PyTorch 2.0"
author: Pedro Cuenca, Patrick von Platen, Suraj Patil
PyTorch 2.0 has just been released. Its flagship new feature is torch.compile(), a one-line code change that promises to automatically improve performance across codebases. We have previously checked on that promise in Hugging Face Transformers and TIMM models, and delved deep into its motivation, architecture and the road ahead.
https://pytorch.org/blog/accelerated-diffusers-pt-20/
pytorch blogs
As important as torch.compile() is, there’s much more to PyTorch 2.0. Notably, PyTorch 2.0 incorporates several strategies to accelerate transformer blocks, and these improvements are very relevant for diffusion models too. Techniques such as FlashAttention, for example, have become very popular in the diffusion community thanks to their ability to significantly speed up Stable Diffusion and achieve larger batch sizes, and they are now part of PyTorch 2.0. In this post we discuss how attention layers are optimized in PyTorch 2.0 and how these optimizations are applied to the popular 🧨 Diffusers library. We finish with a benchmark that shows how the use of PyTorch 2.0 and Diffusers immediately translates to significant performance improvements across different hardware.
Accelerating transformer blocks
PyTorch 2.0 includes a scaled dot-product attention function as part of torch.nn.functional. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. Before PyTorch 2.0, you had to search for third-party implementations and install separate packages in order to take advantage of memory-optimized algorithms such as FlashAttention. The available implementations are:
* FlashAttention, from the official FlashAttention project.
* Memory-Efficient Attention, from the xFormers project.
* A native C++ implementation suitable for non-CUDA devices or when high precision is required.
All these methods are available by default, and PyTorch will try to select the optimal one automatically through the use of the new scaled dot-product attention (SDPA) API. You can also individually toggle them for finer-grained control; see the documentation for details.
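As a quick illustration of the SDPA API itself (the tensor shapes and backend flags below are arbitrary examples, and a CUDA device is assumed):
```python
import torch
import torch.nn.functional as F

# (batch, heads, sequence length, head dim) -- arbitrary example shapes
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# PyTorch dispatches to the best available backend automatically
# (FlashAttention, memory-efficient attention, or the C++ math implementation).
out = F.scaled_dot_product_attention(q, k, v)

# Finer-grained control: restrict dispatch to the memory-efficient backend only.
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    out = F.scaled_dot_product_attention(q, k, v)
```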
Using scaled dot-product attention in diffusers
The incorporation of Accelerated PyTorch 2.0 Transformer attention into the Diffusers library was achieved through the use of the set_attn_processor method, which allows for pluggable attention modules to be configured. In this case, a new attention processor was created, which is enabled by default when PyTorch 2.0 is available. For clarity, this is how you could enable it manually (but it’s usually not necessary since diffusers will automatically take care of it):
```python
from diffusers import StableDiffusionPipeline
from diffusers.models.cross_attention import AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.to("cuda")
pipe.unet.set_attn_processor(AttnProcessor2_0())

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
pipe.to("cuda") pipe.unet.set_attn_processor(AttnProcessor2_0()) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] ``` Stable Diffusion Benchmark We ran a number of tests using accelerated dot-product attention from PyTorch 2.0 in Diffusers. We installed diffusers from pip and used nightly versions of PyTorch 2.0, since our tests were performed before the official release. We also used torch.set_float32_matmul_precision('high') to enable additional fast matrix multiplication algorithms. We compared results with the traditional attention implementation in diffusers (referred to as vanilla below) as well as with the best-performing solution in pre-2.0 PyTorch: PyTorch 1.13.1 with the xFormers package (v0.0.16) installed.
Results were measured without compilation (i.e., no code changes at all), and also with a single call to torch.compile() to wrap the UNet module. We did not compile the image decoder because most of the time is spent in the 50 denoising iterations that run UNet evaluations.
Results in float32
The following figures explore performance improvement vs batch size for various representative GPUs belonging to different generations. We collected data for each combination until we reached maximum memory utilization. Vanilla attention runs out of memory earlier than xFormers or PyTorch 2.0, which explains the missing bars for larger batch sizes. Similarly, the A100 (we used the 40 GB version) is capable of running batch sizes of 64, but the other GPUs could only reach 32 in our tests.