In our experiments with the T5 3B model, using the transformer wrapping policy resulted in >2x higher throughput measured in TFLOPS versus the default wrapping policy. Activation checkpointing resulted in a 10x improvement by reinvesting the freed memory from the checkpoints into a larger batch size. Mixed precision with BFloat16 resulted in a ~5x improvement versus FP32, and finally the full sharding strategy versus Zero2 (DDP) resulted in a 1.5x improvement.
We ran similar experiments for a larger model, T5 11B, but the larger model size resulted in some changes to the experiment space. Specifically, we found that two optimizations, the transformer wrapping policy and activation checkpointing, were needed to enable us to run these experiments on 3 nodes (each node had 8 A100 GPUs with 80 GB of memory). With these optimizations, we could fit a batch size of 50 and get higher throughput compared to removing either one of them. Thus, rather than toggling a single optimization on and off as with the 3B model, the larger-model experiments toggled one of the three optimizations at a time while always keeping the other two enabled, so that both test states had a usable batch size.
Based on TFLOP comparisons, with the 11B model we saw even more payoff from the optimizations. Mixed precision (~10x improvement) and activation checkpointing (~100x improvement) had a much larger impact with the 11B model compared to the 3B parameter model. With mixed precision we could fit ~2x larger batch sizes, and with activation checkpointing >15x larger batch sizes (from 3 with no activation checkpointing to 50 with it), which translated into large throughput improvements.
We also observed that for these larger models (>3B parameters), the Zero2 sharding strategy leaves minimal room in memory for the batch data, forcing very small batch sizes (e.g., 1-2); this essentially makes the full sharding strategy a necessity for fitting larger batch sizes.
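For reference, the sharding strategy itself is a single argument on the FSDP constructor. The snippet below is a minimal sketch (not the exact configuration used in these experiments; it assumes `model` and the distributed process group are already set up) showing how the full sharding strategy or a Zero2-style strategy can be selected:
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy

# FULL_SHARD shards parameters, gradients, and optimizer states (Zero3-style);
# SHARD_GRAD_OP shards only gradients and optimizer states (Zero2-style).
model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,  # or ShardingStrategy.SHARD_GRAD_OP
    device_id=torch.cuda.current_device(),
)
```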
Note - this tutorial assumes a basic understanding of FSDP. To learn more about the basics of FSDP, please refer to the getting started and advanced FSDP tutorials.
What is FSDP? How Does it Make Large-Scale Training More Efficient?
FSDP expands upon distributed data parallel by sharding not just the data, but also the model parameters, optimizer states, and gradients associated with the model. Specifically, each GPU only stores a subset of the entire model and the associated subset of optimizer states and gradients.
To show the evolution of distributed training, we can start from the beginning, where AI models were simply trained on a single GPU.
DDP (Distributed Data Parallel) was the initial step up from training with only a single GPU, and was an effort to address the data and model size growth, where multiple GPUs each housed their own copy of the same model. The gain here is that the data for each batch could be split and processed independently on each GPU, all at the same time, thus parallelizing the processing of the data set and scaling training speed with the number of GPUs. The tradeoff is the need to communicate the gradients between the GPUs to synchronize the models after the backward pass.
FSDP expands on scaling models by removing the redundant optimizer calculations and state storage, as well as the redundant gradient and model parameter storage, present in DDP. This redundancy reduction, along with increased communication overlap, where model parameter communication takes place at the same time as model computation, is what allows FSDP to train much larger models with the same resources as DDP.
A key point is that this efficiency also allows AI models that are larger than a single GPU's memory to be trained. The model size available for training is now increased to the aggregate memory of all GPUs, rather than the size of a single GPU. (And as a point of note, FSDP can go beyond aggregated GPU memory by leveraging CPU memory as well, though we will not directly cover this aspect here.)
As discussed in a previous blog post, with DDP the largest model that we could train on 32 A100 GPUs with 40 GB of memory (4 nodes) was up to 3B parameters with a batch size of 128, with the help of activation checkpointing. By contrast, using FSDP we were able to train an 81B parameter model by combining activation checkpointing with activation and parameter offloading. In another experiment, we benchmarked a 1T parameter model with FSDP using 512 GPUs.
For intuition on the parameter level workings of FSDP, below we show an animation detailing how the model parameters are sharded and communicated assuming a two GPU scenario and a simple 8 parameter model:
Above - the animation walks through the steps involved in the initial sharding of the model amongst ranks, after which we start the all_gathers and the forward pass
We continue through the model with the forward pass. After each FSDP unit completes, non-locally owned params are dropped to free memory, and optionally activations can be checkpointed. This continues until we finish the forward pass and compute the loss.
During the backward pass, another all_gather is used to load the parameters and the gradients are computed. These gradients are then reduce_scattered so that the local owners of each param can aggregate and prepare to update the weights.
Finally, each rank passes the summed gradients through the optimizer states and updates the weights to complete the mini-batch.
With the model now distributed across the entire set of available GPUs, the logical question is how data moves through the model given this sharding of model parameters.
This is accomplished by FSDP coordinating with all GPUs to effectively share (communicate) the respective parts of the model. The model is decomposed into FSDP units, and the parameters within each unit are flattened and then sharded across all GPUs. Within each FSDP unit, GPUs are assigned interleaved ownership of individual model parameters.
By interleaving, we mean the following - assuming 2 GPUs with ids 1 and 2, the FSDP unit ownership pattern would be [12121212], rather than a contiguous chunk of [111222].
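To make the interleaving concrete, here is a toy, framework-free illustration (not FSDP code) of assigning a flattened 8-parameter unit round-robin to two ranks:
```python
# Toy illustration only - FSDP performs this sharding internally on flattened tensors.
flat_params = list(range(8))   # a flattened 8-parameter FSDP unit
ranks = [1, 2]                 # two GPUs with ids 1 and 2

ownership = [ranks[i % len(ranks)] for i in range(len(flat_params))]
print(ownership)               # [1, 2, 1, 2, 1, 2, 1, 2] -> the "[12121212]" pattern
```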
During training, an all_gather is initiated and the locally owned model parameters within a FSDP unit are shared by the owner GPU with the other non-owners, when they need it, on a ‘just in time’ type basis. FSDP prefetches parameters to overlap all_gather communication with computation.
When those requested parameters arrive, the GPU uses the delivered parameters, in combination with the parameters it already owns, to create a fully populated FSDP unit. Thus there is a moment where each GPU hits peak memory usage while holding a fully populated FSDP unit.
It then processes the data through the FSDP unit, and drops the parameters it received from other GPUs to free up memory for the next unit. The process continues unit by unit through the entire model to complete the forward pass, and is then repeated (in general) for the backward pass. (Note - this is a simplified version for understanding; there is additional complexity, but it should help construct a basic mental model of the FSDP process.)
This eliminates much of the memory redundancy present in DDP, but imposes the cost of a higher amount of network communication to shuttle these requested parameters back and forth amongst all the GPUs. Overlapping this communication with the computation taking place is the basis of many of the performance improvements we'll discuss in this series. The key gains frequently come from the fact that communication can often take place at the same time as computation. As you can surmise, having high communication speed is vital for FSDP performance.
How do I optimize my training with FSDP?
There are four main performance improvements we will cover - the transformer wrapper, activation checkpointing, mixed precision, and selecting the proper sharding strategy. The flowchart below will help as a checklist for tuning options that we will discuss in this post.
Wrapping policy - for transformers, use Transformer wrapping policy
The first performance optimization is leveraging the FSDP transformer wrapper for transformer models.
One of the pre-defined wrapping policies is size_based_auto_wrap_policy. With size_based_auto_wrap_policy, FSDP traverses the module structure from bottom to top, and a new FSDP unit is created once the current unit has at least the min_num_params specified in the size policy (this defaults to 1e8, or 100M). If a module cannot be created as an FSDP unit, FSDP will continue to check its parent module. This size-based wrapping policy may not be ideal for some model structures; the PyTorch distributed team is actively working on a new default wrapping policy for the next release that is based on size as well as module execution order, so that users can simply tune the size and achieve optimized performance.
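For reference, here is a minimal sketch of how the size-based policy is passed to FSDP; the import path and the min_num_params value reflect the API described above and may differ in newer releases:
```python
import functools

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

# Create a new FSDP unit whenever the accumulated, not-yet-wrapped parameters
# reach min_num_params (100M here, matching the default mentioned above).
size_policy = functools.partial(size_based_auto_wrap_policy, min_num_params=int(1e8))

model = FSDP(
    model,  # assumes `model` is already constructed
    auto_wrap_policy=size_policy,
    device_id=torch.cuda.current_device(),
)
```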
In the current release, you can greatly improve your performance when running Transformer models by using the ‘transformer wrapper’. You will need to provide the appropriate layer class for your model. Here, the layer class is the class that houses the Multi-Head Attention and Feed Forward Network.
FSDP will then form the FSDP units around the layer class rather than making arbitrary breaks based on parameter size. By sharding the model around layer classes that are uniformly repeated within the transformer, FSDP can create uniform FSDP units that better balance the overlap of computation and communication. By contrast, size-based wrapping can produce very uneven or skewed shards for models, which then have a poorer match of compute vs. communication overlap. As discussed earlier, the main driver of FSDP high performance is the overlap of communication and computation, which is why the Transformer wrapper provides improved performance. Note that the Transformer wrapper can also be used for non-transformer models if those models have a list of uniform layers.
Let’s compare the performance difference on a T5, 3B parameter model when running under the default wrapper and the transformer wrapper.
For default wrapping, we don’t need to take any action - we simply pass the model to FSDP as shown:
```python
model = FSDP(
model,
device_id=torch.cuda.current_device(),
)
```
In this case, FSDP will simply wrap the whole model in a single FSDP unit.
Running on an NVIDIA A100-SXM4-40GB with 8 GPUs, we are able to reach 2.3 TFlops and 95% GPU memory utilization with a batch size of 14.
However, since T5 is a transformer model, we are better served to leverage the transformer wrapper for this model.
To use that, we need to isolate the layer class for the transformer, and then pass it in to create our transformer wrapper.
```python
import functools

from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers.models.t5.modeling_t5 import T5Block
```
And now we can create our Transformer wrapper:
```python
transformer_auto_wrapper_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={
        T5Block,  # <---- Your Transformer layer class
    },
)
```
With our model aware wrapper ready, we can initialize FSDP:
```python
# invoke FSDP with your transformer wrapper policy:
model = FSDP(
model,
auto_wrap_policy=transformer_auto_wrapper_policy,
device_id=torch.cuda.current_device(), # streaming init
)
```
Running this wrapped model, we can see some substantial performance gains. We can fit nearly double the batch size, going to 28, and with better memory and communication efficiency, we see a TFlops increase from 2.3 to 5.07.
Thus, we’ve more than doubled our training throughput (a 2.19x improvement) simply by providing greater model info to FSDP! The transformer wrapping policy results in more fine-grained and balanced FSDP units, each holding a layer class, which leads to more effective communication-computation overlap.
Above: Graphical comparison of TFlops based on wrapper type
If you are training a Transformer model, it pays to configure your training with FSDP using the transformer wrapper. For more information on how to isolate your layer class, please see our in depth video on Transformer wrapping here, where we walk through a number of transformers showing where the layer class can be found.
Mixed precision - use BF16 if you have an Ampere architecture GPU
FSDP supports a flexible mixed precision policy that gives you granular control over parameters, gradients and buffer data types. This lets you easily leverage BFloat16 or FP16 to increase your training speed by up to 70%.
*Note that BFloat16 is only available on Ampere-type GPUs. On AWS this is available with p4dn and g5 instances.
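If you are unsure whether your hardware supports BFloat16, a quick runtime check (a small sketch; the helper shown exists in recent PyTorch releases) can guard the policy selection:
```python
import torch

# Fall back to FP32 (or FP16 with a gradient scaler) when BF16 is not natively supported.
bf16_ready = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
print(f"BFloat16 support: {bf16_ready}")
```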
By way of comparison, we can show a 77% speed improvement when comparing fully tuned BFloat16 vs FP32 on an 8B DeepVit model.
We have obtained even greater acceleration using BFloat16 in fine-tuning a 3B HuggingFace T5 model as shown in the figures below. We observed that because of the lower precision the validation loss of BFloat16 is slightly behind in the first few epochs, but it is able to catch up and results in the same final accuracy as FP32.
To use mixed precision, we create a policy with our desired data types, and pass it in during the FSDP initialization.
To create our policy, we need to import the MixedPrecision class, and then define our custom policy using our customized class:
```python
from torch.distributed.fsdp import MixedPrecision
bfSixteen = MixedPrecision(
param_dtype=torch.bfloat16,
# Gradient communication precision.
reduce_dtype=torch.bfloat16,
# Buffer precision.
buffer_dtype=torch.bfloat16,
)
model = FSDP(
model,
auto_wrap_policy=transformer_auto_wrapper_policy,
mixed_precision=bfSixteen)
```
You can mix and match the precision for parameters, gradients and buffers as you prefer:
```python
comboPolicy = MixedPrecision(
# Param precision
param_dtype=torch.bfloat16,
# Gradient communication precision.
reduce_dtype=torch.float32,
# Buffer precision.
buffer_dtype=torch.float32,
)
```
For training with FP16, you will also need to use the ShardedGradScaler, which we will cover in subsequent posts. For BFloat16, no gradient scaler is needed and the policy is a drop-in replacement.
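As a preview, here is a minimal sketch of what the FP16 path looks like with the ShardedGradScaler; the model, optimizer, and dataloader are assumed to be set up as in the earlier snippets, and the loop is simplified:
```python
import torch
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler

fpSixteen = MixedPrecision(
    param_dtype=torch.float16,
    reduce_dtype=torch.float16,
    buffer_dtype=torch.float16,
)

scaler = ShardedGradScaler()  # FSDP-aware counterpart of torch.cuda.amp.GradScaler

for inputs, labels in dataloader:          # placeholder loop variables
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # placeholder forward pass and loss
    scaler.scale(loss).backward()          # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)                 # unscale gradients, skip the step on inf/nan
    scaler.update()
```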
AnyPrecision Optimizer - going beyond mixed precision with full BF16 training
Mixed precision training, both in FSDP and elsewhere, maintains the working weights in the reduced datatype (BF16 or FP16) while keeping the master weights in full FP32. The reason for the master weights in FP32 is that running in pure BF16 will result in ‘weight stagnation’, where very small weight updates are lost due to the lower precision, and the accuracy flatlines over time while FP32 weights can continue to improve from these small updates.
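The stagnation effect is easy to see in isolation. The snippet below is only an illustration (it is not part of any optimizer): a small update is rounded away entirely when the weight lives in BF16, but survives in FP32:
```python
import torch

update = 1e-4  # a small step, far below BF16's spacing near 1.0 (about 2**-7, or ~0.0078)

w_bf16 = torch.tensor(1.0, dtype=torch.bfloat16)
w_fp32 = torch.tensor(1.0, dtype=torch.float32)

print(w_bf16 + update == w_bf16)  # tensor(True)  - the update is lost ("weight stagnation")
print(w_fp32 + update == w_fp32)  # tensor(False) - FP32 master weights keep the small update
```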
In order to resolve this dilemma, we can use the new AnyPrecision optimizer available in TorchDistX (Torch Distributed Experimental), which allows you to successfully train and keep the master weights in pure BF16 instead of FP32. In addition, unlike the typical storage of optimizer states in FP32, AnyPrecision is able to maintain states in pure BF16 as well.
AnyPrecision enables pure BF16 training by maintaining an extra buffer that tracks the precision lost during the weight updates and re-applies it during the next update, effectively resolving the weight stagnation issue without requiring FP32.
As a comparison of the throughput gains available with pure BF16 training using AnyPrecision, we ran experiments using FSDP with the T5 11B model with regular FP32 training, mixed precision training with BF16, and pure BF16 training using the AnyPrecision optimizer on 3 nodes with A100 GPUs as mentioned previously.
As shown above, training with AnyPrecision and pure BF16 resulted in 2x the throughput vs. mixed precision, and over a 20x improvement vs. FP32.
The potential tradeoff is the impact on final accuracy - in the cases we tested, the accuracy was equal or better than FP32 due to a regularization effect from the slightly reduced precision, but your results may vary.
The AnyPrecision optimizer is available for you to test with here, and is a drop-in replacement for the AdamW optimizer.
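Below is a minimal sketch of swapping it in. The class name (AnyPrecisionAdamW) and the keyword arguments shown are assumptions based on the experimental TorchDistX API and may differ between versions, so check the TorchDistX documentation for the exact signature:
```python
import torch

# Experimental API - the module path and argument names below are assumptions
# and may vary by TorchDistX version.
from torchdistx.optimizers import AnyPrecisionAdamW

optimizer = AnyPrecisionAdamW(
    model.parameters(),               # assumes `model` is the FSDP-wrapped model from above
    lr=4e-4,                          # hypothetical learning rate
    momentum_dtype=torch.bfloat16,    # keep optimizer states in pure BF16
    variance_dtype=torch.bfloat16,
    use_kahan_summation=True,         # compensation buffer that prevents weight stagnation
)
```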
Activation checkpointing - increasing throughput by trading compute for memory
FSDP supports activation checkpointing once the model has been sharded, and makes it easy to implement. The graph above shows ~4x throughput improvement using activation checkpointing.
Activation checkpointing is where the intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder. This generally increases available GPU memory by over 30%.
The tradeoff is that during the backward pass, these previously removed intermediate activations must be re-calculated again using information in the checkpoint (duplicate compute), but by leveraging the increased GPU memory, one can increase the batch size such that the net throughput can increase substantially.
```python
# verify we have FSDP activation support ready by importing:
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    checkpoint_wrapper,
    CheckpointImpl,
    apply_activation_checkpointing_wrapper,
)
```
The steps required to implement activation checkpointing are to first import the FSDP checkpointing functions (shown above), then declare our checkpoint wrapper type, which is non-reentrant, and create a check function to identify which layer to wrap, as follows:
```python
from functools import partial

non_reentrant_wrapper = partial(
    checkpoint_wrapper,
    offload_to_cpu=False,
    checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)

check_fn = lambda submodule: isinstance(submodule, T5Block)
```
```python
apply_activation_checkpointing_wrapper(
model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn
)
```
Important note - this must be run after the model has been initialized with FSDP.
Hopefully you’ve seen how some initial tuning with FSDP options can have a large impact on your training performance.
With that, we turn our attention from how to scale within FSDP, to how to scale your server hardware for FSDP using AWS.
Large Scale Training with FSDP on AWS - for multi-node, prioritize a high-speed network
AWS provides several services that can be used to run distributed training with FSDP: Amazon EC2 Accelerated Computing instances, AWS ParallelCluster, and Amazon SageMaker.
In this series of blog posts, we used Amazon EC2 p4d instances in a single-instance multi-GPU configuration and in a multi-instance configuration using AWS ParallelCluster and SageMaker in order to run our training jobs.
Here, we’ll focus specifically on AWS ParallelCluster and provide an overview of how to utilize it for training purposes.
AWS ParallelCluster Setup
AWS ParallelCluster is an open source, cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. AWS ParallelCluster uses yaml configuration files to provision all the necessary resources. It also supports multiple instance types, job submission queues, shared file systems like Amazon EFS (NFS) or Amazon FSx for Lustre, and job schedulers like AWS Batch and Slurm.
Workflow on Clusters
The high level idea is to have a cluster that has a head node which controls the compute nodes. The actual training job runs on the compute nodes. Overall steps to run a training job on a cluster are as follows:
Set up an AWS ParallelCluster (we discuss this below).
Connect to the head node, and import the training code / set up the environment.
Pull the data and place it in a shared folder that compute nodes can access (FSx Lustre drive).
Run the training job using a job scheduler (in this case Slurm).
Setup AWS ParallelCluster
To setup AWS ParallelCluster,
Deploy a network stack. This step is optional since you could use your account default VPC and let AWS ParallelCluster create your subnets and security groups. However, we prefer to compartmentalize our desired network infrastructure and do this deployment via a CloudFormation stack.
Since we deploy a public and a private subnet, we want to create them into an Availability Zone that contains our target instances, in this case p4d. We consult their availability in the region we use (us-east-1) through the following AWS CLI command:
`aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=instance-type,Values=p4d.24xlarge --region us-east-1 --output table`
We see three availability zones containing p4d instances, so we pick one of them (`us-east-1c`, yours may be different) when deploying our network stack. This can be done with the AWS Console or the AWS CLI. In our case we use the latter as follows:
`aws cloudformation create-stack --stack-name VPC-Large-Scale --capabilities CAPABILITY_IAM --template-body file://VPC-Large-Scale.yaml --parameters ParameterKey=SubnetsAZ,ParameterValue=us-east-1c`
CloudFormation will deploy our new VPC, subnets, security groups and endpoints on our behalf. Once done, you can retrieve the IDs of the public and private subnets by querying the stack outputs and the values PublicSubnet and PrivateSubnet.
For example, using the AWS CLI for the private subnet:
`aws cloudformation describe-stacks --stack-name VPC-Large-Scale --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet'].OutputValue" --output text`
Create the ParallelCluster. The cluster configuration file specifies the resources for our cluster. These resources include the instance type for the head node, the compute nodes, access to S3 buckets, and the shared storage where our data will be located. We will use Amazon FSx for Lustre, which offers a fully managed shared storage service with Lustre.
Here is an example of a cluster configuration file. We can use the AWS ParallelCluster CLI to create the cluster. Please note that the private and public subnet IDs will need to be replaced by the ones you retrieved earlier. You will be able to control the cluster using the AWS ParallelCluster CLI to start, stop, pause, etc.
```
pcluster create-cluster --cluster-name my-hpc-cluster --cluster-configuration cluster.yaml
```
SSH to the head node - once the cluster is ready, we can connect to the head node using the SSH protocol, pull our training code, and place the data in the shared storage specified in the cluster configuration file: `pcluster ssh --cluster-name cluster -i your-key_pair`
Launch the training job - now that we have the data and training code, we can launch the slurm job for training. Here is an example of a slurm script to launch the job using torchrun.
More details on how to set up the cluster are out of the scope of this post; however, we will have a separate post on it.
What’s next?
With this post we provided a high-level overview of FSDP and how it efficiently scales distributed AI training. The flowchart included will help provide a checklist for you to review the tuning options discussed, such as the transformer wrapper and activation checkpointing.
In the next posts, we will continue with the T5 model and go deeper into each of the topics above, specifically with sharding strategy and other optimizations to provide more insight and details. For now, a good reference for the sharding strategy is in our video tutorial here:
If you have questions or find an issue, please find the authors Less, Hamid and Geeta or open an issue on PyTorch GitHub.
Special thanks to:
PyTorch Distributed team, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Ana Simoes, Pierre-Yves Aquilanti, Sundar Ranganathan, and the broader AWS team for supporting us with providing infrastructure and technical support for running the large scale experiments.
Resources:
FSDP video series
Getting started with FSDP
Advanced tutorial on FSDP
API documentation
layout: blog_detail
title: "Performance Debugging of Production PyTorch Models at Meta"
author: CK Luk, Lei Tian
featured-img: "/assets/images/performance-debugging-of-production-pytorch-models-at-meta-1.png"
1. Meta’s AI Performance Profiling (MAIProf)
Figure 1: A simplified illustration of Meta’s AI performance profiling (MAIProf) infrastructure.
Figure 1 gives a simplified illustration of the AI performance profiling infrastructure at Meta. ML research and performance engineers submit through the User Portal a profiling request for a training job to the Profiling Service, which subsequently broadcasts the request to all the GPU hosts running the training job. When the Monitoring Daemon on a GPU host receives the profiling request, it will notify the Kineto GPU tracer (built on top of NVIDIA’s libcupti) inside the PyTorch program corresponding to the training job. As a result, Kineto traces will be collected and uploaded to the Object Store asynchronously (in more detail: there is one Kineto trace collected for each individual GPU, each is treated and stored as a blob; an example will be given in Section 2). Meanwhile, MAIProf also collects a variety of aggregated performance metrics: the Monitoring Daemon on every GPU host continuously reads performance counters from NVIDIA’s DCGM/NVML and logs them to a Time Series DB.
Once both trace and metrics collections are completed, the Profiling Service will automatically download traces from the Object Store for trace analysis and performance metrics from the Time Series DB for metric analysis. Finally, an overall profiling report with detailed and insightful analysis is delivered to the user.
To serve production uses, we deliberately made the following design choices for MAIProf:
No source-code change required in the PyTorch models: profiling is triggered by sampling the execution of an unmodified model for a user-specified amount of time.
Provide a holistic view of performance: MAIProf performs system-wide analysis that covers both CPU and GPU. Under the hood, it invokes various CPU tools (e.g., Python tracer, Autograd Observer) and GPU tools (e.g., Kineto, DCGM) and correlates their results.
Provide multiple tools that target a wide range of AI practitioners: At Meta, there are engineers with different backgrounds who may need to tune their AI workload performance. Some of them are AI experts while others are general software engineers. Therefore, MAIProf provides a variety of tools for different levels of performance debugging, from high-level automatic trace comprehension to low-level trace analysis.
Support distributed GPU profiling: MAIProf can collect profiling data from multiple hosts, each with multiple GPUs. It then shows a combined view/analysis of the entire system.
Highly scalable: MAIProf is built as a service on top of existing infrastructures in Meta data centers such as a scalable storage system called Manifold. Its profiling capability can be easily scaled by adding more machines in the service pool with the increase of workloads.
2. Case Study: Optimizing a Protection PyTorch Model
To be concrete, we use a case study on a protection PyTorch model used in production. First, we discuss our steps for identifying the performance bottlenecks in the model with MAIProf. Then we describe the corresponding optimizations applied and their impacts.
2.1 Performance Bottlenecks
Step 1:
Inspect the CPU and GPU utilization on the same timeline, as shown in Figure 2.
Figure 2: CPU usage over time (the top) vs. GPU usage over time (the bottom).
The first performance anomaly we noticed in Figure 2 is the pattern: “GPU-idle, GPU-active, GPU-idle, GPU-active …” throughout the training. Overall, the GPU is idle for more than half of the training time (this is bad for performance because the GPU is a higher-performance device and so we want it to be utilized as much as possible).
Step 2:
Collect a Python function call trace on the CPU with MAIProf while the GPU is idle, which is shown in Figure 3.
Figure 3: A Python call trace.
The Python trace shows that most of the CPU time is spent inside a Python function sharded_iterrows(). From the source code of the model, we learned that this function processes a big feature table in parallel. The number of worker threads used is controlled by a configurable parameter (num_worker_threads). Also, after investigating how the feature table is generated, we understood the performance anomaly: the training dataset is too large to fit in the CPU memory all at once; it needs to be broken into multiple sub-datasets, each has sufficient data for running 10 epochs. Consequently, a new sub-dataset needs to be read from the disk to memory every 10 epochs, during which the GPU is totally idle.
Step 3:
Collect GPU performance metrics, which are shown in Figure 4.
Figure 4: GPU performance metrics in MAIProf.
We made the following observations from Figure 4:
The streaming multiprocessor (SM) runs the model’s CUDA kernels. Its utilization [1] is 9.1%, indicating that the parallel compute units on the GPU are not well utilized.
Tensor Core utilization is 0, meaning that Tensor Core (the mixed-precision compute unit on GPU) [2] is not used at all.
Max GPU memory utilization is 47.13%, indicating that half of the GPU memory is left unused.
Step 4:
Collect a GPU trace (aka Kineto trace) of the training loop as shown in Figure 5.
Figure 5: A GPU trace (aka Kineto trace) of the training loop.
Since commonly used PyTorch functions are already annotated, their names are automatically shown on the trace. With them, we can roughly divide the trace into the four phases in a training iteration: (1) data loading, (2) forward pass, (3) backward pass, (4) gradient optimization (note: In Figure 5, the “optimizer” phase is from the previous batch while the other three phases are from the current batch).
2.2 Optimizations
We performed four simple optimizations that target the bottlenecks identified above, each requiring only a change in a config parameter or at most a few source lines. They are listed in Figure 6.
| Optimization | Amount of changes | Bottlenecks addressed |
| --- | --- | --- |
| Tune num_worker_threads by trying a few possible values within the number of CPU cores on each host. | 1 source line | GPU totally idle time |
| Double the batch sizes | 2 config parameters | GPU memory under-utilization |
| Use automatic mixed precision in PyTorch | 13 source lines | Zero Tensor Core utilization |
| Use multi-tensor optimizer in PyTorch | 1 source line | Many small GPU kernels in the optimizer |
Figure 6: Four simple optimizations applied.
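For context, the automatic mixed precision change in Figure 6 typically corresponds to an eager-mode training loop like the sketch below; the names (`data_loader`, `model`, `loss_fn`, `optimizer`) are placeholders and the production model's exact integration may differ:
```python
import torch

scaler = torch.cuda.amp.GradScaler()  # keeps FP16 gradients from underflowing

for inputs, targets in data_loader:   # placeholder loop variables
    optimizer.zero_grad()
    # Run the forward pass in mixed precision so eligible ops can use Tensor Cores.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```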
3. Concluding Remarks
Performance tuning for PyTorch in production environments is increasingly important. A capable performance-debugging tool is a key to this process. We demonstrate with a case study on a production model that MAIProf is a powerful infrastructure for identifying optimization opportunities.
At Meta, MAIProf has been used by 100s of engineers, from performance novices to experts, to identify many more types of bottlenecks. These include slow data loading, small and/or slow GPU kernels, distributed training issues such as load imbalance and excessive communication. MAIProf covers major classes of models, including recommendation, vision, and natural language processing. In summary, it is now an indispensable tool for tuning the performance of production PyTorch workloads.
References
[1] https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/achievedoccupancy.htm
[2] https://www.nvidia.com/en-us/data-center/tensor-cores/
layout: blog_detail
title: 'The torch.fft module: Accelerated Fast Fourier Transforms with Autograd in PyTorch'
author: Mike Ruberry, Peter Bell, and Joe Spisak
The Fast Fourier Transform (FFT) calculates the Discrete Fourier Transform in O(n log n) time. It is foundational to a wide variety of numerical algorithms and signal processing techniques since it makes working in signals’ “frequency domains” as tractable as working in their spatial or temporal domains.
As part of PyTorch’s goal to support hardware-accelerated deep learning and scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8, we are releasing the torch.fft module. This module implements the same functions as NumPy’s np.fft module, but with support for accelerators, like GPUs, and autograd.
Getting started
Getting started with the new torch.fft module is easy whether you are familiar with NumPy’s np.fft module or not. While complete documentation for each function in the module can be found here, a breakdown of what it offers is:
fft, which computes a complex FFT over a single dimension, and ifft, its inverse
the more general fftn and ifftn, which support multiple dimensions
The “real” FFT functions, rfft, irfft, rfftn, irfftn, designed to work with signals that are real-valued in their time domains
The "Hermitian" FFT functions, hfft and ihfft, designed to work with signals that are real-valued in their frequency domains
Helper functions, like fftfreq, rfftfreq, fftshift, ifftshift, that make it easier to manipulate signals
We think these functions provide a straightforward interface for FFT functionality, as vetted by the NumPy community, although we are always interested in feedback and suggestions!
To better illustrate how easy it is to move from NumPy’s np.fft module to PyTorch’s torch.fft module, let’s look at a NumPy implementation of a simple low-pass filter that removes high-frequency variance from a 2-dimensional image, a form of noise reduction or blurring:
```python
import numpy as np
import numpy.fft as fft

def lowpass_np(input, limit):
    pass1 = np.abs(fft.rfftfreq(input.shape[-1])) < limit
    pass2 = np.abs(fft.fftfreq(input.shape[-2])) < limit
    kernel = np.outer(pass2, pass1)

    fft_input = fft.rfft2(input)
    return fft.irfft2(fft_input * kernel, s=input.shape[-2:])
```
Now let’s see the same filter implemented in PyTorch:
```python
import torch
import torch.fft as fft
def lowpass_torch(input, limit):
    pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit
    pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit
    kernel = torch.outer(pass2, pass1)

    fft_input = fft.rfft2(input)
    return fft.irfft2(fft_input * kernel, s=input.shape[-2:])
```
Not only do current uses of NumPy’s np.fft module translate directly to torch.fft, the torch.fft operations also support tensors on accelerators, like GPUs, as well as autograd. This makes it possible to (among other things) develop new neural network modules using the FFT.
Performance
The torch.fft module is not only easy to use — it is also fast! PyTorch natively supports Intel’s MKL-FFT library on Intel CPUs, and NVIDIA’s cuFFT library on CUDA devices, and we have carefully optimized how we use those libraries to maximize performance. While your own results will depend on your CPU and CUDA hardware, computing Fast Fourier Transforms on CUDA devices can be many times faster than computing it on the CPU, especially for larger signals.
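As a small sketch of what that looks like in practice (the tensor size here is an arbitrary placeholder), the same FFT calls run on a CUDA tensor and participate in autograd without any code changes:
```python
import torch
import torch.fft as fft

# A random 2D "image" on the GPU; gradients flow through the FFT ops.
image = torch.randn(256, 256, device="cuda", requires_grad=True)

spectrum = fft.rfft2(image)                       # computed on the GPU via cuFFT
recon = fft.irfft2(spectrum, s=image.shape[-2:])  # round-trip back to the spatial domain

recon.abs().sum().backward()                      # autograd works through rfft2/irfft2
print(image.grad.shape)                           # torch.Size([256, 256])
```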
In the future, we may add support for additional math libraries to support more hardware. See below for where you can request additional hardware support.
Updating from older PyTorch versions
Some PyTorch users might know that older versions of PyTorch also offered FFT functionality with the torch.fft() function. Unfortunately, this function had to be removed because its name conflicted with the new module’s name, and we think the new functionality is the best way to use the Fast Fourier Transform in PyTorch. In particular, torch.fft() was developed before PyTorch supported complex tensors, while the torch.fft module was designed to work with them.
PyTorch also has a “Short Time Fourier Transform”, torch.stft, and its inverse torch.istft. These functions are being kept but updated to support complex tensors.
Future
As mentioned, PyTorch 1.8 offers the torch.fft module, which makes it easy to use the Fast Fourier Transform (FFT) on accelerators and with support for autograd. We encourage you to try it out!
While this module has been modeled after NumPy’s np.fft module so far, we are not stopping there. We are eager to hear from you, our community, on what FFT-related functionality you need, and we encourage you to create posts on our forums at https://discuss.pytorch.org/, or file issues on our Github with your feedback and requests. Early adopters have already started asking about Discrete Cosine Transforms and support for more hardware platforms, for example, and we are investigating those features now.
We look forward to hearing from you and seeing what the community does with PyTorch’s new FFT functionality!
layout: blog_detail
title: "PyTorch Internals Part II - The Build System"
author: "Trevor Killeen"
date: 2017-06-27 12:00:00 -0500
redirect_from: /2017/06/27/Internals2.html
In the first post I explained how we generate a torch.Tensor object that you can use in your Python interpreter. Next, I will explore the build system for PyTorch. The PyTorch codebase has a variety of components:
The core Torch libraries: TH, THC, THNN, THCUNN
Vendor libraries: CuDNN, NCCL
Python Extension libraries
Additional third-party libraries: NumPy, MKL, LAPACK
How does a simple invocation of python setup.py install do the work that allows you to call import torch and use the PyTorch library in your code?
The first part of this document will explain the build process from an end-user point of view. This will explain how we take the components above to build the library. The second part of the document will be important for PyTorch developers. It will document ways to improve your iteration speed by building only a subset of the code that you are working on.
Setuptools and PyTorch's setup( ) function
Python uses Setuptools to build the library. Setuptools is an extension to the original distutils system from the core Python library. The core component of Setuptools is the setup.py file which contains all the information needed to build the project. The most important function is the setup() function which serves as the main entry point. Let's take a look at the one in PyTorch:
```python
setup(name="torch", version=version,
description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
ext_modules=extensions,
cmdclass={
'build': build,
'build_py': build_py,
'build_ext': build_ext,
'build_deps': build_deps,
'build_module': build_module,
'develop': develop,
'install': install,
'clean': clean,
},
packages=packages,
package_data={'torch': [
'lib/*.so', 'lib/*.dylib',
'lib/torch_shm_manager',
'lib/*.h',
'lib/include/TH/*.h', 'lib/include/TH/generic/*.h',
'lib/include/THC/*.h', 'lib/include/THC/generic/*.h']},
install_requires=['pyyaml'],
)
```
The function is composed entirely of keyword arguments, which serve two purposes:
Metadata (e.g. name, description, version)
The contents of the package
We are concerned with #2. Let's break down the individual components:
ext_modules: Python modules are either "pure" modules, containing only Python code, or "extension" modules written in the low-level language of the Python implementation. Here we are listing the extension modules in the build, including the main torch._C library that contains our Python Tensor
cmdclass: When using the setup.py script from the command line, the user must specify one or more "commands", code snippets that perform a specific action. For example, the "install" command builds and installs the package. This mapping routes specific commands to functions in setup.py that implement them
packages: The list of packages in the project. These are "pure" - i.e. they only contain Python code. These are defined elsewhere in setup.py
package_data: Additional files that need to be installed into a package: in this case the header files and shared libraries that the build will generate must be included in our installation
install_requires: In order to build PyTorch, we need pyyaml. Setuptools will handle making sure that pyyaml will be available, downloading and installing it if necessary
We will consider these components in more detail, but for now it is instructive to look at the end product of an installation -- i.e. what Setuptools does after building the code.
site_packages
Third party packages are by default installed into the lib/<version>/site_packages directory associated with your Python binary. For example, because I am using a Miniconda environment, my Python binary is found at:
(p3) killeent@devgpu047:pytorch (master)$ which python
~/local/miniconda2/envs/p3/bin/python
And thus packages are installed into:
/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages
I installed PyTorch, and let's take a look into torch folder in site-packages:
```bash
(p3) killeent@devgpu047:site-packages$ cd torch
(p3) killeent@devgpu047:torch$ ls
autograd backends _C.cpython-36m-x86_64-linux-gnu.so cuda distributed _dl.cpython-36m-x86_64-linux-gnu.so functional.py __init__.py legacy lib multiprocessing nn optim __pycache__ serialization.py _six.py sparse storage.py _tensor_docs.py tensor.py _tensor_str.py _thnn _torch_docs.py utils _utils.py version.py
```
Note that everything we would expect to be here is here:
- All the "pure" packages are here [todo print packages from setup.py to explain]
- The extension libraries are here - the ._C* and ._dl* shared libraries
- The package_data is here: the contents of lib/ match exactly what we described in the setup function:
```bash
(p3) killeent@devgpu047:torch$ ls lib/
include libnccl.so.1 libTHC.so.1 libTHCUNN.so.1 libTHNN.so.1 libTH.so.1 THCUNN.h torch_shm_manager libnccl.so libshm.so libTHCS.so.1 libTHD.so.1 libTHPP.so.1 libTHS.so.1 THNN.h
```
The Python interpreter looks into `site_packages` during an import. If we call `import torch` in our Python code it will find the module here and initialize and import it. You can read more about the import system [here](https://docs.python.org/3/tutorial/modules.html).
### Building Individual Parts
Next, we will look at the various individual components of the build from start to finish. This will illustrate how we combine all the code we mentioned in the introduction.
### Backend Torch and Vendor Libraries
Let's take a look at the `install` cmd override in PyTorch's `setup.py`:
```python
class install(setuptools.command.install.install):

    def run(self):
        if not self.skip_build:
            self.run_command('build_deps')
        setuptools.command.install.install.run(self)
```
We note the first thing it does is run a command called "build_deps" - let's take a look at its run() method:
```python
def run(self):
    from tools.nnwrap import generate_wrappers as generate_nn_wrappers
    build_all_cmd = ['bash', 'torch/lib/build_all.sh']
    if WITH_CUDA:
        build_all_cmd += ['--with-cuda']
    if WITH_NCCL and not SYSTEM_NCCL:
        build_all_cmd += ['--with-nccl']
    if WITH_DISTRIBUTED:
        build_all_cmd += ['--with-distributed']
    if subprocess.call(build_all_cmd) != 0:
        sys.exit(1)
    generate_nn_wrappers()
```
Here we note that we have a shell script `build_all.sh` in the `torch/lib/` directory. This script is configurable by whether we are on a system with CUDA enabled, the NCCL library enabled, and PyTorch's distributed library enabled.
Let's take a look in `torch/lib`:
```bash
(p3) killeent@devgpu047:lib (master)$ ls
build_all.sh libshm nccl README.md TH THC THCS THCUNN THD THNN THPP THS
```
Here we see the directories for all the backend libraries. TH, THC, THNN, THCUNN, and nccl are git subtrees that are in sync with the libraries in e.g. github.com/torch. THS, THCS, THD, THPP and libshm are libraries specific to PyTorch. All of the libraries contain CMakeLists.txt - indicating they are built with CMake.
The build_all.sh is essentially a script that runs the CMake configure step on all of these libraries, and then make install. Let's run ./build_all.sh and see what we are left with:
```bash
(p3) killeent@devgpu047:lib (master)$ ./build_all.sh --with-cuda --with-nccl --with-distributed
[various CMake output logs]
(p3) killeent@devgpu047:lib (master)$ ls
build build_all.sh include libnccl.so libnccl.so.1 libshm libshm.so libTHC.so.1 libTHCS.so.1 libTHCUNN.so.1 libTHD.so.1 libTHNN.so.1 libTHPP.so.1 libTH.so.1 libTHS.so.1 nccl README.md TH THC THCS THCUNN THCUNN.h THD THNN THNN.h THPP THS tmp_install torch_shm_manager
```
Now there are a number of extra things in the directory:
- Shared library files for each library
- Headers for `THNN` and `THCUNN`
- `build` and `tmp_install` directories
- The `torch_shm_manager` executable
Let's explore further. In the shell script, we create the `build` directory and a subdir for each library to build:
```bash
# We create a build directory for the library, which will
# contain the cmake output. $1 is the library to be built
mkdir -p build/$1
cd build/$1
```
Thus e.g. build/TH contains the CMake configuration output including the Makefile for building TH, and also the result of running make install in this directory.
Let's also look at tmp_install:
(p3) killeent@devgpu047:lib (master)$ ls tmp_install/
bin include lib share
tmp_install looks like a standard install directory containing binaries, header files and library files. For example, tmp_install/include/TH contains all the TH headers, and tmp_install/lib/ contains the libTH.so.1 file.
So why have this directory? It is used to compile the libraries that depend on each other. For example, the THC library depends on the TH library and its headers. This is referenced in the build shell script as arguments to the cmake command:
# install_dir is tmp_install
cmake ...
-DTH_INCLUDE_PATH="$INSTALL_DIR/include" \
-DTH_LIB_PATH="$INSTALL_DIR/lib" \
And indeed if we look at the THC library we built:
(p3) killeent@devgpu047:lib (master)$ ldd libTHC.so.1
...
libTH.so.1 => /home/killeent/github/pytorch/torch/lib/tmp_install/lib/./libTH.so.1 (0x00007f84478b7000)
The way the `build_all.sh` specifies the include and library paths is a little messy but this is representative of the overall idea. Finally, at the end of the script:
```bash
# If all the builds succeed we copy the libraries, headers,
# binaries to torch/lib
cp $INSTALL_DIR/lib/* .
cp THNN/generic/THNN.h .
cp THCUNN/generic/THCUNN.h .
cp -r $INSTALL_DIR/include .
cp $INSTALL_DIR/bin/* .
```
As we can see, at the end, we copy everything to the top-level torch/lib directory - explaining the contents we saw above. We'll see why we do this next:
NN Wrappers
Briefly, let's touch on the last part of the build_deps command: generate_nn_wrappers(). We bind into the backend libraries using PyTorch's custom cwrap tooling, which we touched upon in a previous post. For binding TH and THC we manually write the YAML declarations for each function. However, due to the relative simplicity of the THNN and THCUNN libraries, we auto-generate both the cwrap declarations and the resulting C++ code.
The reason we copy the THNN.h and THCUNN.h header files into torch/lib is that this is where the generate_nn_wrappers() code expects these files to be located. generate_nn_wrappers() does a few things:
Parses the header files, generating cwrap YAML declarations and writing them to output .cwrap files
Calls cwrap with the appropriate plugins on these .cwrap files to generate source code for each
Parses the headers a second time to generate THNN_generic.h - a library that takes THPP Tensors, PyTorch's "generic" C++ Tensor Library, and calls into the appropriate THNN/THCUNN library function based on the dynamic type of the Tensor
If we take a look into torch/csrc/nn after running generate_nn_wrappers() we can see the output:
(p3) killeent@devgpu047:nn (master)$ ls
THCUNN.cpp THCUNN.cwrap THNN.cpp THNN.cwrap THNN_generic.cpp THNN_generic.cwrap THNN_generic.h THNN_generic.inc.h
For example, the code generates cwrap like:
```
[[
name: FloatBatchNormalization_updateOutput
return: void
cname: THNN_FloatBatchNormalization_updateOutput
arguments:
- void* state
- THFloatTensor* input
- THFloatTensor* output
- type: THFloatTensor*
name: weight
nullable: True
- type: THFloatTensor*
name: bias
nullable: True
- THFloatTensor* running_mean
- THFloatTensor* running_var
- THFloatTensor* save_mean
- THFloatTensor* save_std
- bool train
- double momentum
- double eps
]]
```
with corresponding .cpp:
```cpp
extern "C" void THNN_FloatBatchNormalization_updateOutput(void*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, THFloatTensor*, bool, double, double);
PyObject * FloatBatchNormalization_updateOutput(PyObject *_unused, PyObject *args) {
// argument checking, unpacking
PyThreadState *_save = NULL;
try {
Py_UNBLOCK_THREADS;
THNN_FloatBatchNormalization_updateOutput(arg_state, arg_input, arg_output, arg_weight, arg_bias, arg_running_mean, arg_running_var, arg_save_mean, arg_save_std, arg_train, arg_momentum, arg_eps);
Py_BLOCK_THREADS;
Py_RETURN_NONE;
} catch (...) {
if (_save) {
Py_BLOCK_THREADS;
}
throw;
}
...
}
```
In the `THPP` generated code, the function looks like this:
```cpp
void BatchNormalization_updateOutput(thpp::Tensor* input, thpp::Tensor* output, thpp::Tensor* weight, thpp::Tensor* bias, thpp::Tensor* running_mean, thpp::Tensor* running_var, thpp::Tensor* save_mean, thpp::Tensor* save_std, bool train, double momentum, double eps) {
// Call appropriate THNN function based on tensor type, whether its on CUDA, etc.
}
```
We will look a little more at how these source files are used later.
"Building" the Pure Python Modules
Now that we have built the backend libraries (the "dependencies") we can move forward with building the actual PyTorch code. The next Setuptools command that runs is build_py, which is used to build all the "Pure" python modules in our library. These are the "packages" passed to setup.py.
The packages are found using the Setuptools' utility function find_packages():
```python
packages = find_packages(exclude=('tools.*',))
['torch', 'torch._thnn', 'torch.autograd', 'torch.backends', 'torch.cuda', 'torch.distributed', 'torch.legacy', 'torch.multiprocessing', 'torch.nn', 'torch.optim', 'torch.sparse', 'torch.utils', 'torch.autograd._functions', 'torch.backends.cudnn', 'torch.legacy.nn', 'torch.legacy.optim', 'torch.nn._functions', 'torch.nn.backends', 'torch.nn.modules', 'torch.nn.parallel', 'torch.nn.utils', 'torch.nn._functions.thnn', 'torch.utils.data', 'torch.utils.ffi', 'torch.utils.serialization', 'torch.utils.trainer', 'torch.utils.backcompat', 'torch.utils.trainer.plugins']
```
As we can see, find_packages has recursively traversed the torch directory, finding all the directory paths that have an __init__.py file.
When building with Setuptools, the tool creates a build directory in the distribution root, i.e. the same location as the setup.py file. Because PyTorch is composed of both "Pure" python modules and Extension Modules, we need to preserve information about the Operating System and Python version used when performing the build. So if we look in my build directory, we see:
```bash
(p3) killeent@devgpu047:pytorch (master)$ ls build
lib.linux-x86_64-3.6 temp.linux-x86_64-3.6
```
This indicates that I've built the project on linux-x86-64 using Python 3.6. The lib directory contains the library files, while the temp directory contains files generated during the build that aren't needed in the final installation.
Because "Pure" python modules are just Python code, and don't need to be "compiled", the build_py process simply copies files from their locations as found by find_packages to the equivalent location in build/. So our build output is littered with lines like:
```bash | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
copying torch/autograd/_functions/blas.py -> build/lib.linux-x86_64-3.6/torch/autograd/_functions
```
We also noted earlier that we could pass files and directories to the package_data keyword argument to the main setup() function, and that Setuptools would handle copying those files to the installation location. During build_py, these files are copied to the build/ directory, so we also see lines like:
```bash
copying torch/lib/libTH.so.1 -> build/lib.linux-x86_64-3.6/torch/lib
...
copying torch/lib/include/THC/generic/THCTensor.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic
```
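To make the package_data mechanism described above concrete, here is a minimal sketch of how such entries can be declared in setup(); the glob patterns and most arguments are illustrative assumptions, not PyTorch's actual lists:
```python
from setuptools import setup, find_packages

packages = find_packages(exclude=('tools.*',))

setup(
    name="torch",
    version="0.0.0",  # placeholder version, for illustration only
    packages=packages,
    # Hypothetical patterns; the real setup.py enumerates the shared libraries
    # and header trees under torch/lib explicitly.
    package_data={"torch": ["lib/*.so*",
                            "lib/include/TH/*.h",
                            "lib/include/THC/generic/*.h"]},
)
```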
Building the Extension Modules
Finally, we need to build the Extension Modules, i.e. the PyTorch modules written in C++ using the CPython backend. This also constitutes the majority of the code logic in setup.py. Our overridden build_ext Command has some special logic before the extensions themselves are actually built:
```python
from tools.cwrap import cwrap | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
from tools.cwrap.plugins.THPPlugin import THPPlugin
from tools.cwrap.plugins.ArgcountSortPlugin import ArgcountSortPlugin
from tools.cwrap.plugins.AutoGPU import AutoGPU
from tools.cwrap.plugins.BoolOption import BoolOption
from tools.cwrap.plugins.KwargsPlugin import KwargsPlugin
from tools.cwrap.plugins.NullableArguments import NullableArguments
from tools.cwrap.plugins.CuDNNPlugin import CuDNNPlugin
from tools.cwrap.plugins.WrapDim import WrapDim
from tools.cwrap.plugins.AssertNDim import AssertNDim
from tools.cwrap.plugins.Broadcast import Broadcast
from tools.cwrap.plugins.ProcessorSpecificPlugin import ProcessorSpecificPlugin
thp_plugin = THPPlugin()
cwrap('torch/csrc/generic/TensorMethods.cwrap', plugins=[
ProcessorSpecificPlugin(), BoolOption(), thp_plugin,
AutoGPU(condition='IS_CUDA'), ArgcountSortPlugin(), KwargsPlugin(),
AssertNDim(), WrapDim(), Broadcast()
]) | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
cwrap('torch/csrc/cudnn/cuDNN.cwrap', plugins=[
CuDNNPlugin(), NullableArguments()
])
```
Recall that above I documented how we auto-generate C++ code for calling into the `THNN` etc. libraries. Here is where we bind `TH`, `THC` and `CuDNN`. We take the YAML declarations in `TensorMethods.cwrap` and use them to generate output C++ source files that contain implementations that work within PyTorch's C++ ecosystem. For example, a simple declaration like `zero_`:
```
[[
name: zero_
cname: zero
return: self
arguments:
- THTensor* self
]]
```
Generates code like:
```cpp
PyObject * THPTensor_(zero_)(PyObject *self, PyObject *args, PyObject *kwargs) {
...
THTensor_(zero)(LIBRARY_STATE arg_self);
...
}
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
```
In the previous post we documented how these functions are tied to specific Tensor types, so I won't expand on that here. For the build process, it's enough to know that these C++ files are generated prior to the extension being built, because these source files are used during extension compilation.
Specifying the Extensions
Unlike pure modules, it’s not enough just to list modules or packages and expect the Setuptools to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.). | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
The bulk (~200 LOC at the time of this writing) of the setup.py goes into specifying how to build these Extensions. Here, some of the choices we make in build_all.sh begin to make sense. For example, we saw that our build script specified a tmp_install directory where we installed our backend libraries. In our setup.py code, we reference this directory when adding to the list of directories containing header files to include:
```python
# tmp_install_path is torch/lib/tmp_install
include_dirs += [
    cwd,
    os.path.join(cwd, "torch", "csrc"),
    tmp_install_path + "/include",
    tmp_install_path + "/include/TH",
    tmp_install_path + "/include/THPP",
    tmp_install_path + "/include/THNN",
]
```
Similarly, we copied the shared object libraries to torch/lib at the end of the build_all.sh script. We reference these locations directly in our setup.py code when identifying libraries that we may link against:
```python
# lib_path is torch/lib
TH_LIB = os.path.join(lib_path, 'libTH.so.1') | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
THS_LIB = os.path.join(lib_path, 'libTHS.so.1')
THC_LIB = os.path.join(lib_path, 'libTHC.so.1')
THCS_LIB = os.path.join(lib_path, 'libTHCS.so.1')
THNN_LIB = os.path.join(lib_path, 'libTHNN.so.1')
...
```
Let's consider how we build the main `torch._C` Extension Module:
```python
C = Extension("torch._C",
libraries=main_libraries,
sources=main_sources,
language='c++',
extra_compile_args=main_compile_args + extra_compile_args,
include_dirs=include_dirs,
library_dirs=library_dirs,
extra_link_args=extra_link_args + main_link_args + [make_relative_rpath('lib')],
)
```
- The main libraries are all the libraries we link against. This includes things like shm, PyTorch's shared memory management library, and also system libraries like cudart and cudnn. Note that the TH libraries are not listed here.
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
- The main sources are the C++ files that make up the C++ backend for PyTorch.
- The compile args are various flags that configure compilation. For example, we might want to add debug flags when compiling in debug mode.
- The include dirs are the paths to all the directories containing header files. This is another example where the build_all.sh script is important - for example, we look for the TH header files in torch/lib/tmp_install/include/TH - which is the install location we specified with our CMake configuration.
- The library dirs are directories to search for shared libraries at link time. For example, we include torch/lib - the location we copied our .so files to at the end of build_all.sh - but also the paths to the CUDA and CuDNN directories.
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
- The link arguments are used when linking object files together to create the extension. In PyTorch, this includes more normal options like deciding to link libstdc++ statically. However, there is one key component: this is where we link the backend TH libraries. Note that we have lines like:
```python
# The explicit paths to .so files we described above
main_link_args = [TH_LIB, THS_LIB, THPP_LIB, THNN_LIB]
```
You might be wondering why we do this as opposed to adding these libraries to the list we pass to the libraries keyword argument. After all, that is a list of libraries to link against. The issue is that Lua Torch installs often set the LD_LIBRARY_PATH variable, and thus we could mistakenly link against a TH library built for Lua Torch, instead of the library we have built locally. This would be problematic because the code could be out of date, and also there are various configuration options for Lua Torch's TH that would not play nicely with PyTorch. | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
As such, we manually specify the paths to the shared libraries we generated directly to the linker.
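To illustrate that difference, here is a small sketch contrasting the two styles with a hypothetical extension; the module and source names are made up for the example:
```python
from setuptools import Extension

# Linking by name: the linker resolves -lTH by searching library_dirs and the
# system search path, so a stray Lua Torch libTH could be picked up instead.
ext_by_name = Extension("example._C",
                        sources=["example/_C.cpp"],
                        libraries=["TH"],
                        library_dirs=["torch/lib"])

# Linking by explicit path: the exact shared object we just built is handed
# straight to the linker, leaving no ambiguity about which library is used.
ext_by_path = Extension("example._C",
                        sources=["example/_C.cpp"],
                        extra_link_args=["torch/lib/libTH.so.1"])
```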
There are other extensions needed to power PyTorch and they are built in a similar way. The Setuptools library invokes the C++ compiler and linker to build all of these extensions. If the builds succeed, we have successfully built the PyTorch library and we can move on to installation.
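For readers less familiar with Setuptools, here is a minimal, hypothetical sketch of how an overridden build_ext command and its extensions get wired into setup(); the class, module and file names are invented, and PyTorch's real setup.py does considerably more:
```python
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class build_ext_with_codegen(build_ext):
    """Run code generation (e.g. cwrap-style wrapper generation) before compiling."""
    def run(self):
        # generate C++ sources here, then hand off to the normal extension build
        super().run()

example_ext = Extension("example._C", sources=["example/_C.cpp"], language="c++")

setup(
    name="example",
    version="0.0.0",
    ext_modules=[example_ext],
    cmdclass={"build_ext": build_ext_with_codegen},
)
```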
Installation
After building has finished, installation is quite simple. We simply have to copy everything from our build/lib.linux-x86_64-3.6 directory to the appropriate installation directory. Recall that we noted above that this directory is the site-packages directory associated with our Python binary. As a result, we see lines like:
```bash
running install_lib
creating /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch
copying build/lib.linux-x86_64-3.6/torch/_C.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
copying build/lib.linux-x86_64-3.6/torch/_dl.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch
creating /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn
copying build/lib.linux-x86_64-3.6/torch/_thnn/_THNN.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn
copying build/lib.linux-x86_64-3.6/torch/_thnn/_THCUNN.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn
```
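If you want to see where that installation directory lives on your own machine, the standard library can report it; a quick sketch:
```python
import site
import sysconfig

print(sysconfig.get_paths()["purelib"])   # where pure-Python modules are installed
print(sysconfig.get_paths()["platlib"])   # where extension modules are installed
print(site.getsitepackages())             # all site-packages directories for this interpreter
```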
Finally, let's power up the Python interpreter. When the Python interpreter executes an import statement, it searches for Python code and extension modules along a search path. A default value for the path is configured into the Python binary when the interpreter is built.
```bash
# note we are now in my home directory
(p3) killeent@devgpu047:~$ python
Python 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23) | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/home/killeent/local/miniconda2/envs/p3/lib/python36.zip', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/lib-dynload', '/home/killeent/.local/lib/python3.6/site-packages', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages', '/home/killeent/github/pytorch', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg']
```
As we can see, the `site-packages` directory we copied our PyTorch installation to is part of the search path. Now let's load the `torch` module and see its location: | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
```python
>>> import torch
>>> import inspect
>>> inspect.getfile(torch)
'/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/__init__.py'
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
```
As we can see, we have loaded the module from site-packages as expected - and our build and installation are successful!
Note: Python prepends the empty string to sys.path to represent the current working directory - making it the first place we search for a module. So if we run Python from the pytorch directory, we would accidentally load the local version of PyTorch rather than our installed version. This is something to watch out for.
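A small sketch of how to check which copy was actually imported (the warning message below is ours, not something PyTorch prints):
```python
import sys
import torch

print(sys.path[0])     # '' means the current working directory is searched first
print(torch.__file__)  # points into site-packages for an installed copy
if "site-packages" not in torch.__file__:
    print("warning: torch was imported from a local checkout, not the installed package")
```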
Addendum - Developer Efficiency, 3rd Party Libraries, Things I Didn't Cover
The entire installation loop for PyTorch can be quite time-consuming. On my devserver, it takes around 5 minutes for an installation from source. Oftentimes, when developing PyTorch, we only want to work on a subset of the entire project, and re-build only that subset in order to test changes. Fortunately, our build system enables this.
Setuptools Develop Mode
The main tool that supports this is the Setuptools develop command. The documentation states that:
This command allows you to deploy your project’s source for use in one or more “staging areas” where it will be available for importing. This deployment is done in such a way that changes to the project source are immediately available in the staging area(s), without needing to run a build or install step after each change.
But how does it work? Suppose we run python setup.py build develop in the PyTorch directory. The build command is run, building our dependencies (TH, THPP, etc.) and the extension libraries. However, if we look inside site-packages:
```bash
(p3) killeent@devgpu047:site-packages$ ls -la torch*
-rw-r--r--. 1 killeent users 31 Jun 27 08:02 torch.egg-link
```
Looking at the contents of the torch.egg-link file, it simply references the PyTorch directory:
```bash
(p3) killeent@devgpu047:site-packages$ cat torch.egg-link
/home/killeent/github/pytorch
```
If we navigate back to the PyTorch directory, we see there is a new directory torch.egg-info:
```bash | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
(p3) killeent@devgpu047:pytorch (master)$ ls -la torch.egg-info/
total 28
drwxr-xr-x. 2 killeent users 4096 Jun 27 08:09 .
drwxr-xr-x. 10 killeent users 4096 Jun 27 08:01 ..
-rw-r--r--. 1 killeent users 1 Jun 27 08:01 dependency_links.txt
-rw-r--r--. 1 killeent users 255 Jun 27 08:01 PKG-INFO
-rw-r--r--. 1 killeent users 7 Jun 27 08:01 requires.txt
-rw-r--r--. 1 killeent users 16080 Jun 27 08:01 SOURCES.txt
-rw-r--r--. 1 killeent users 12 Jun 27 08:01 top_level.txt
```
This directory contains metadata about the PyTorch project. For example, requires.txt lists all of the dependencies for setting up PyTorch:
```bash
(p3) killeent@devgpu047:pytorch (master)$ cat torch.egg-info/requires.txt
pyyaml
```
Without going into too much detail, develop allows us to essentially treat the PyTorch repo itself as if it were in site-packages, so we can import the module and it just works:
```bash
(p3) killeent@devgpu047:~$ python | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
Python 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__file__
'/home/killeent/github/pytorch/torch/__init__.py'
```
As a result, the following consequences hold:
- If we change a Python source file, the changes are automatically picked up, and we don't have to run any commands to let the Python interpreter see the change.
- If we change a C++ source file in one of the extension libraries, we can re-run the develop command and it will re-build the extension.
Thus we can develop the PyTorch codebase seamlessly and test our changes easily.
Working on the Dependency Libraries | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
If we are working on the dependencies (e.g. TH, THPP, etc.) we can re-build our changes more quickly by simply running the build_deps command directly. This will automatically call into build_all.sh to re-build our libraries, and copy the generated libraries appropriately. If we are using Setuptools develop mode, we will be using the local extension library built in the PyTorch directory. Because we have specified the paths to the shared libraries when compiling our Extension Libraries, the changes will be picked up:
```bash
# we are using the local extension
(p3) killeent@devgpu047:~$ python
Python 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch._C.__file__
'/home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so'
# it references the local shared object library we just re-built | https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
(p3) killeent@devgpu047:~$ ldd /home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so
...
libTH.so.1 => /home/killeent/github/pytorch/torch/lib/libTH.so.1 (0x00007f543d0e2000)
...
```
As such, we can test any changes here without having to do a full rebuild.
#### 3rd Party Libraries
PyTorch has dependencies on some 3rd party libraries. The usual mechanism for using these libraries is to install them via Anaconda, and then link against them. For example, we can use the `mkl` library with PyTorch by doing:
```bash
# installed to miniconda2/envs/p3/lib/libmkl_intel_lp64.so
conda install mkl
```
And then as long as we have the path to this lib directory on our $CMAKE_PREFIX_PATH, it will successfully find this library when compiling:
```bash
# in the site-packages dir
(p3) killeent@devgpu047:torch$ ldd _C.cpython-36m-x86_64-linux-gnu.so
# ...
libmkl_intel_lp64.so => /home/killeent/local/miniconda2/envs/p3/lib/libmkl_intel_lp64.so (0x00007f3450bba000)
# ...
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
...
```
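Beyond ldd, recent PyTorch versions can also report at runtime whether MKL was picked up at build time; a quick sketch:
```python
import torch

print(torch.backends.mkl.is_available())  # True if PyTorch was built with MKL support
print(torch.__config__.show())            # full build configuration, including the BLAS backend
```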
Not Covered, But Also Relevant
- How ccache is used to speed up build times
- How PyTorch's top-level __init__.py file handles the initial module import and pulling together all the various modules and extension libraries
- The CMake build system, how the backend libraries are configured and built with CMake
| https://pytorch.org/blog/a-tour-of-pytorch-internals-2/ | pytorch blogs |
layout: blog_detail
title: 'An Overview of the PyTorch Mobile Demo Apps'
author: Jeff Tang and Mark Saroufim
featured-img: 'assets/images/android-demo-app.png'
date: 2021-06-18 12:00:00 -0500
PyTorch Mobile provides a runtime environment to execute state-of-the-art machine learning models on mobile devices. Latency is reduced, privacy preserved, and models can run on mobile devices anytime, anywhere.
In this blog post, we provide a quick overview of 10 currently available PyTorch Mobile powered demo apps running various state-of-the-art PyTorch 1.9 machine learning models spanning images, video, audio and text.
It’s never been easier to deploy a state-of-the-art ML model to a phone. You don’t need any domain knowledge in Machine Learning and we hope one of the below examples resonates enough with you to be the starting point for your next project.
Computer Vision
Image Classification | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
This app demonstrates how to use PyTorch C++ libraries on iOS and Android to classify a static image with the MobileNetv2/3 model.
iOS #1 iOS #2 Android #1 Android #2
iOS Android
Live Image Classification
This app demonstrates how to run quantized MobileNetV2 and ResNet18 models to classify images in real time with an iOS and Android device camera. | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
iOS Android
Image Segmentation
This app demonstrates how to use the PyTorch DeepLabV3 model to segment images. The updated app for PyTorch 1.9 also demonstrates how to create the model using the Mobile Interpreter and load the model with the LiteModuleLoader API.
iOS Android | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
iOS Android
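To make the Mobile Interpreter export mentioned above concrete, here is a minimal sketch of how a segmentation model can be scripted, optimized and saved for the lite interpreter; it assumes torchvision is installed, and the demo app's own export script may differ in details such as the chosen backbone:
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
# The .ptl file is what the app loads on-device via LiteModuleLoader.
optimized._save_for_lite_interpreter("deeplabv3_scripted.ptl")
```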
Vision Transformer for Handwritten Digit Recognition
This app demonstrates how to use Facebook's latest optimized Vision Transformer DeiT model to do image classification and handwritten digit recognition.
iOS Android
Android
| https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
Object Detection
This app demonstrates how to convert the popular YOLOv5 model and use it on an iOS app that detects objects from pictures in your photos, taken with camera, or with live camera.
iOS Android
iOS Android
D2Go | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
This app demonstrates how to create and use a much lighter and faster Facebook D2Go model to detect objects from pictures in your photos, taken with camera, or with live camera.
iOS Android
iOS Android
Video
Video Classification
This app demonstrates how to use a pre-trained PyTorchVideo model to perform video classification on tested videos, videos from the Photos library, or even real-time videos. | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
iOS Android
iOS Android Deep Dive
Natural Language Processing
Text Classification
This app demonstrates how to use a pre-trained Reddit model to perform text classification.
iOS Android
| https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
Machine Translation
This app demonstrates how to convert a sequence-to-sequence neural machine translation model trained with the code in the PyTorch NMT tutorial for French-to-English translation.
iOS Android
iOS Android
| https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
Question Answering
This app demonstrates how to use the DistilBERT Hugging Face transformer model to answer questions about Pytorch Mobile itself.
iOS Android
iOS Android
Audio
Speech Recognition
This app demonstrates how to convert Facebook AI's torchaudio-powered wav2vec 2.0, one of the leading models in speech recognition, to TorchScript before deploying it. | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
iOS Android
We really hope one of these demo apps stood out for you. For the full list, make sure to visit the iOS and Android demo app repos. You should also definitely check out the video An Overview of the PyTorch Mobile Demo Apps which provides both an overview of the PyTorch mobile demo apps and a deep dive into the PyTorch Video app for iOS and Android. | https://pytorch.org/blog/mobile-demo-apps-overview/ | pytorch blogs |
layout: blog_detail
title: "Accelerating PyTorch Vision Models with Channels Last on CPU"
author: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)
featured-img: '/assets/images/accelerating-pytorch-vision-models-with-channels-last-on-cpu-2.png'
Overview
Memory format has a significant impact on performance when running vision models; generally, Channels Last is more favorable from a performance perspective due to better data locality.
This blog will introduce fundamental concepts of memory formats and demonstrate performance benefits using Channels Last on popular PyTorch vision models on Intel® Xeon® Scalable processors.
Memory Formats Introduction
Memory format refers to data representation that describes how a multidimensional (nD) array is stored in linear (1D) memory address space. The concept of memory format has two aspects: | https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/ | pytorch blogs |
Physical Order is the layout of data storage in physical memory. For vision models, we usually talk about NCHW and NHWC. These are descriptions of the physical memory layout, also referred to as Channels First and Channels Last respectively.
Logical Order is a convention on how to describe tensor shape and stride. In PyTorch, this convention is NCHW. No matter what the physical order is, tensor shape and stride will always be depicted in the order of NCHW.
Fig-1 is the physical memory layout of a tensor with shape of [1, 3, 4, 4] on both Channels First and Channels Last memory format (channels denoted as R, G, B respectively):
Fig-1 Physical memory layout of Channels First and Channels Last
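A short sketch makes this concrete: the same logical NCHW tensor can be stored in either format, and only the strides differ (the commented outputs are what we would expect):
```python
import torch

x = torch.randn(1, 3, 4, 4)                  # Channels First (NCHW physical order)
y = x.to(memory_format=torch.channels_last)  # Channels Last (NHWC physical order)

print(x.shape, x.stride())  # torch.Size([1, 3, 4, 4]) (48, 16, 4, 1)
print(y.shape, y.stride())  # torch.Size([1, 3, 4, 4]) (48, 1, 12, 3)
print(y.is_contiguous(memory_format=torch.channels_last))  # True
```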
Memory Formats Propagation | https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/ | pytorch blogs |
The general rule for PyTorch memory format propagation is to preserve the input tensor's memory format. This means that a Channels First input will generate a Channels First output and a Channels Last input will generate a Channels Last output.
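For example, a minimal sketch of the propagation rule with a single convolution layer:
```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

x_cf = torch.randn(1, 3, 32, 32)                   # Channels First input
x_cl = x_cf.to(memory_format=torch.channels_last)  # Channels Last input

print(conv(x_cf).is_contiguous())                                   # True: output stays Channels First
print(conv(x_cl).is_contiguous(memory_format=torch.channels_last))  # True: output stays Channels Last
```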
For Convolution layers, PyTorch uses oneDNN (oneAPI Deep Neural Network Library) by default to achieve optimal performance on Intel CPUs. Since it is physically impossible to achieve highly optimized performance directly with the Channels First memory format, input and weight are first converted to blocked format and then computed. oneDNN may choose different blocked formats according to input shapes, data type and hardware architecture, for vectorization and cache reuse purposes. The blocked format is opaque to PyTorch, so the output needs to be converted back to Channels First. Though blocked format would bring about optimal computing performance, the format conversions may add overhead and therefore offset the performance gain. | https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/ | pytorch blogs |
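In practice this is why the model and its input are usually converted to Channels Last together, so the whole forward pass stays on the Channels Last path; a sketch, assuming torchvision is installed and using ResNet-50 purely as an example:
```python
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True).eval()
model = model.to(memory_format=torch.channels_last)  # convert the weights

x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)  # convert the input
with torch.no_grad():
    out = model(x)  # runs end to end in Channels Last, avoiding extra format conversions
```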