PyTorch 2.0 supports multiple different kernels optimized for specific use cases, each with specific requirements. A kernel picker picks the best kernel for a particular combination of input parameters. If no optimized "custom kernel" can be identified for a particular combination of input parameters, the kernel picker selects a general kernel that can handle all input combinations. While future releases may extend this set of operators, PyTorch 2.0 launches with 3 implementations for the SDPA operator:
1. A generic kernel which implements the mathematical equation of SDPA in the function sdpa_math().
2. An optimized kernel based on the paper "Flash Attention", which supports evaluation of SDPA with 16-bit floating point data types on compute architecture SM80 (A100).
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
3. An optimized kernel based on the paper "Self-Attention Does Not Need O(n^2) Memory", implemented in xFormers, which supports both 32-bit and 16-bit floating point data types on a wider range of architectures (SM40 and later). This blog post refers to this kernel as the mem_efficient kernel.

Note that both optimized kernels (two and three listed above) support a key padding mask and limit the supported attention mask to causal attention. Accelerated PyTorch 2.0 Transformers today only support the causal mask when it is specified using the is_causal boolean. When a mask is specified, the general-purpose kernel will be selected, because it is too expensive to analyze the contents of a provided mask to determine whether it is the causal mask. Additional explanations of the constraints for each kernel can be found in the Accelerated PT2 Transformer blog.
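As a concrete illustration (not code from the blog's benchmarks), the minimal sketch below calls the operator with the is_causal flag so the optimized kernels remain eligible; the shapes and half-precision dtype are assumptions chosen to match the Flash Attention requirements described above.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: (batch, num_heads, seq_len, head_dim), fp16 on an SM80 GPU.
q = torch.randn(8, 12, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(8, 12, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(8, 12, 1024, 64, device="cuda", dtype=torch.float16)

# Passing is_causal=True (rather than a materialized mask) keeps the optimized
# kernels eligible; an explicit mask would force the generic math kernel.
y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```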
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Enabling Accelerated Transformers with nanoGPT
Because the SDPA operator is a critical component of the GPT model, we identified the open source nanoGPT model as an excellent candidate for demonstrating both the ease of implementation and the benefits of PyTorch 2.0's Accelerated Transformers. The following demonstrates the exact process by which Accelerated Transformers was enabled on nanoGPT. This process largely revolves around replacing the existing SDPA implementation with the newly added F.scaled_dot_product_attention operator from functional.py. This process can be easily adapted to enable the operator in many other LLMs. Alternatively, users can instead choose to call F.multi_head_attention_forward() or utilize the nn.MultiheadAttention module directly where applicable. The following code snippets are adapted from Karpathy's nanoGPT repository.
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Step 1: Identify the existing SDPA implementation
In the case of nanoGPT, SDPA is implemented in the model's CausalSelfAttention class. The original implementation at the time of writing is adapted below for this post.
Step 2: Replace with Torch's scaled_dot_product_attention
At this point we can note the following:
- Lines 36-42 define the mathematical implementation of SDPA which we are replacing.
- The mask applied on line 39 is no longer relevant, since we are using scaled_dot_product_attention's is_causal flag.
- The dropout layer used in line 41 is also now unnecessary.
Swapping out the SDPA implementation for torch's scaled_dot_product_attention and removing the now-redundant code yields the implementation sketched below.
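Since the resulting snippet is not reproduced above, here is a sketch of the swapped-in attention call adapted from nanoGPT; the surrounding CausalSelfAttention attributes (such as self.dropout) and the q, k, v shapes are assumed from that repository.

```python
# Inside CausalSelfAttention.forward, after q, k, v have been reshaped to
# (B, nh, T, hs): the manual attention math is replaced by the fused operator.
y = torch.nn.functional.scaled_dot_product_attention(
    q, k, v,
    attn_mask=None,
    dropout_p=self.dropout if self.training else 0,
    is_causal=True,  # replaces the explicit causal mask and masked_fill
)
```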
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Alternatively, the original mask can be passed into the attn_mask field; however, due to the kernel constraints mentioned above, that would limit the implementation to supporting only the generic sdpa_math kernel.
Step 3 (Bonus): Faster matmuls with padding
On top of the performance improvements from SDPA, our analysis yielded a nice ancillary win. In Andrej's words: "The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase the vocab size from 50257 to 50304 (nearest multiple of 64)."
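A minimal sketch of that change; the rounding rule below is our own illustration of "nearest multiple of 64", not code taken from nanoGPT.

```python
# Pad the vocabulary size up to the next multiple of 64 so that the output
# projection's matmul dimensions are well aligned.
vocab_size = 50257
padded_vocab_size = ((vocab_size + 63) // 64) * 64  # -> 50304
```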
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
The vocab size determines the dimensions of matmuls in the output layer of GPT, and these are so large that they were taking a majority of the time for the entire training loop! We discovered that they were achieving performance significantly below the peak throughput achievable on the A100 GPU, and guessed from NVIDIA's matmul documentation that 64-element alignment would yield better results. Indeed, padding these matmuls achieves nearly a 3x speedup! The underlying cause is that unaligned memory accesses significantly reduce efficiency. A deeper analysis can be found in this Twitter thread. With this optimization we were able to further reduce training time from ~113 ms (using flash attention) to ~87 ms per batch.
Results
The figure below demonstrates the performance gained using PyTorch custom kernels. Here are the exact figures:
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
- baseline (nanoGPT implementation): ~143ms
- sdpa_math (generic): ~134ms (6.71% faster)
- mem_efficient kernel: ~119ms (20.16% faster)
- flash_attention kernel: ~113ms (26.54% faster)
- flash_attention + padded vocab: ~87ms (64.37% faster)

All code was run on an 8 x NVIDIA Corporation A100 server with 80 GB HBM [A100 SXM4 80GB], and for the purpose of this experiment dropout was set to 0.

Figure 2: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Enhancing Numerical Model Stability
In addition to being faster, PyTorch's implementation offers increased numerical stability by avoiding loss of precision in many execution scenarios. There is a great explanation here, but essentially the PyTorch implementation scales the Query and Key matrices before multiplication, which is said to be more stable and to avoid loss of precision. Because of the merged custom kernel architecture of SDPA, this scaling does not introduce additional overhead in the computation of the attention result. In comparison, an implementation built from the individual computational components would require separate pre-scaling at additional cost. For an additional explanation, see Appendix A.
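As a quick, illustrative sanity check of why the pre-scaling is mathematically equivalent (not code from the post): scaling q and k each by d**-0.25 before the matmul gives the same scores as scaling q @ k.T by 1/sqrt(d), while keeping intermediate magnitudes smaller.

```python
import math
import torch

d = 64  # assumed head dimension for the check
q = torch.randn(8, d, dtype=torch.float64)
k = torch.randn(8, d, dtype=torch.float64)
s = math.sqrt(math.sqrt(d))  # d ** 0.25

# (q / s) @ (k / s).T == (q @ k.T) / sqrt(d)
assert torch.allclose((q / s) @ (k / s).T, (q @ k.T) / math.sqrt(d))
```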
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Improved Memory Consumption
Yet another large advantage of using the torch SDPA kernels is the reduced memory footprint, which allows for the utilization of larger batch sizes. The following chart compares the best validation loss after one hour of training for both the flash attention and the baseline implementations of causal attention. As can be seen, the maximum batch size achieved with the baseline causal attention implementation (on an 8 x NVIDIA Corporation A100 server with 80 GB HBM) was 24, significantly less than the maximum achieved with flash attention, which was 39.
Figure 3: Using Flash Attention enables the usage of larger batch sizes, allowing users to achieve lower validation loss after one hour of training (smaller is better).
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
Conclusion
Accelerated PyTorch 2 Transformers were designed to make the training and production deployment of state-of-the-art transformer models affordable and integrated with PyTorch 2.0 model JIT compilation. The newly introduced PyTorch SDPA operator provides improved performance for training Transformer models and is particularly valuable for expensive Large Language Model training. In this post we demonstrate a number of optimizations on the exemplary nanoGPT model, including:
- Over 26% training speedup, when compared against the baseline with constant batch size
- An additional speedup achieved with padded vocabulary, bringing the total optimization to approximately 64% compared to the baseline
- Additional numerical stability
Appendix A: Analyzing Attention Numeric Stability
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
In this section we provide a more in-depth explanation of the previously mentioned enhanced numerical stability which is gained by pre-scaling SDPA's input vectors. The following is a simplified version of nanoGPT's mathematical implementation of SDPA. The important thing to note here is that the query undergoes matrix multiplication without being scaled.

```python
# nanoGPT implementation of SDPA
# notice q (our query vector) is not scaled !
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
# Dropout is set to 0, so we can safely ignore this line in the implementation
# att = self.attn_dropout(att)
y_nanogpt = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
```

The following is the equivalent mathematical implementation in torch's scaled_dot_product_attention.

```python
# PyTorch implementation of SDPA
embed_size = q.size(-1)
scaling_factor = math.sqrt(math.sqrt(embed_size))
q = q / scaling_factor # notice q is scaled here !

# same as above, but with scaling factor
att = q @ (k.transpose(-2, -1) / scaling_factor)
att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
# Dropout is set to 0, so we can safely ignore this line in the implementation
# att = self.attn_dropout(att)
y_scale_before = att @ v
```

Mathematically both approaches should be equivalent; however, our experimentation shows that in practice we receive different results from each approach. Using the approach above, we verified that `y_scale_before` matches the expected output from the `scaled_dot_product_attention` method while `y_nanogpt` does not. The `torch.allclose` method was used to test equivalence. Specifically, we showed that:

```python
y_sdpa = torch.nn.functional._scaled_dot_product_attention(
    q,
    k,
    v,
    attn_mask=self.bias[:,:,:T,:T] != 0,
    dropout_p=0.0,
    need_attn_weights=False,
    is_causal=False,
)
```
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
```python
torch.allclose(y_sdpa, y_nanogpt)       # False, indicating fp issues
torch.allclose(y_sdpa, y_scale_before)  # True, as expected
```

## Appendix B: Reproducing Experiment Results

Researchers seeking to reproduce these results should start with the following commit from Andrej's nanoGPT repository - **b3c17c6c6a363357623f223aaa4a8b1e89d0a465**. This commit was used as the baseline when measuring the per-batch speed improvements. For results which include the padded vocabulary optimization (which yielded the most significant improvements to batch speed), use the following commit - **77e7e04c2657846ddf30c1ca2dd9f7cbb93ddeab**. From either checkout, selecting kernels for experimentation is made trivial with the use of the [torch.backends](https://pytorch.org/docs/stable/backends.html) API. The desired kernel can be selected via a context manager:
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
```python
with torch.backends.cuda.sdp_kernel(
    enable_math=False, enable_flash=False, enable_mem_efficient=True
):
    train(model)
```
https://pytorch.org/blog/accelerating-large-language-models/
pytorch blogs
layout: blog_detail
title: "Straggler Mitigation On PyTorch DDP By Hierarchical SGD"
author: Yi Wang (Cruise AI), Rohan Varma (Meta AI)

PyTorch DDP has been widely adopted across the industry for distributed training, which by default runs synchronous SGD to synchronize gradients across model replicas at every step. The performance of this technique is critical for fast iteration during model exploration as well as for resource and cost saving. To resolve a ubiquitous performance bottleneck introduced by slow nodes in large-scale training, Cruise and Meta co-developed a solution based on the Hierarchical SGD algorithm to significantly accelerate training in the presence of these stragglers.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
The Need For Straggler Mitigation
In a DDP setup, a straggler problem can occur when one or more processes run much slower ("stragglers") than other processes. When this happens, all the processes have to wait for the stragglers before synchronizing gradients and completing the communication, which essentially bottlenecks distributed performance to the slowest worker. As a result, even when training relatively small models, the communication cost can still be a major performance bottleneck.
Potential Causes of Stragglers
Severe straggler issues are usually caused by workload imbalance before synchronization, and many factors can contribute to this imbalance. For instance, some data loader workers in the distributed environment can become stragglers because some input examples are outliers in terms of data size, the data transfer of some examples is drastically slowed down by unstable network I/O, or the on-the-fly data transformation costs have a high variance.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Besides data loading, other phases before gradient synchronization can also cause stragglers, such as unbalanced workloads of embedding table lookup during the forward pass in recommendation systems.
The Appearance of Stragglers
If we profile DDP training jobs that have stragglers, we can find that some processes have a much higher gradient synchronization cost (a.k.a., allreducing gradients) than other processes at a certain step. As a result, the distributed performance can be dominated by the communication cost even if the model size is very small. In this case, some processes run faster than the straggler(s) at a step, and hence they have to wait for the stragglers and spend a much longer time on allreduce. The screenshots below show two trace files output by the PyTorch profiler in a use case; each screenshot profiles 3 steps.
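For readers who want to produce similar traces, here is a minimal sketch using torch.profiler; the dataloader, train_step, and output directory are placeholders, not taken from the original setup.

```python
import torch.profiler as profiler

# Profile a few steps per rank and dump traces that can be inspected for long
# allreduce (gradient synchronization) spans caused by waiting on stragglers.
with profiler.profile(
    activities=[profiler.ProfilerActivity.CPU, profiler.ProfilerActivity.CUDA],
    schedule=profiler.schedule(wait=1, warmup=1, active=3),
    on_trace_ready=profiler.tensorboard_trace_handler("./ddp_traces"),
) as prof:
    for step, batch in enumerate(dataloader):  # placeholder data loader
        train_step(ddp_model, batch)           # placeholder forward/backward/optimizer step
        prof.step()
        if step >= 5:
            break
```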
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
The first screenshot shows that a process has a very high allreduce cost in both the first and the third steps, because this process reaches the synchronization phase earlier than the straggler(s) and spends more time waiting. On the other hand, the allreduce cost is relatively small in the second step; this suggests that either 1) there is no straggler at this step, or 2) this process is the straggler among all the processes, so it does not need to wait for any other process.

Both the 1st and the 3rd Steps Are Slowed Down by Stragglers

The second screenshot shows a normal case without stragglers. In this case, all the gradient synchronizations are relatively short.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Normal Case Without Stragglers

Hierarchical SGD in PyTorch
Recently hierarchical SGD has been proposed to optimize communication costs by mainly reducing the total amount of data transferred in large-scale distributed training, and multiple convergence analyses have been provided (example). As a main novelty of this post, at Cruise we could leverage hierarchical SGD to mitigate stragglers, which may also occur when training relatively small models. Our implementation was upstreamed by Cruise to PyTorch in early 2022.
How Does Hierarchical SGD Work?
As the name implies, hierarchical SGD organizes all the processes into groups at different levels as a hierarchy, and runs synchronization by following the rules below:
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- All the groups at the same level have the same number of processes, and the processes in these groups synchronize at the same frequency concurrently, where the synchronization period is pre-defined by the user.
- The higher level a group is, the larger the synchronization period used, as the synchronization becomes more expensive.
- When multiple overlapping groups are supposed to synchronize according to their periods, to reduce redundant synchronization and avoid data races across groups, only the highest-level group runs synchronization.

The following figure illustrates an example of 4-level hierarchical SGD among 16 processes on 8 machines, each of which has 2 GPUs:

- Level 1: Each process runs mini-batch SGD locally;
- Level 2: Each 4-process group across 2 machines runs synchronization every 2 steps;
- Level 3: Each 8-process group across 4 machines runs synchronization every 4 steps;
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- Level 4: The global process group of all 16 processes over 8 machines runs synchronization every 8 steps.

In particular, when the step number is divisible by 8, only the synchronization at Level 4 is executed, and when the step number is divisible by 4 but not by 8, only the synchronization at Level 3 is executed. Intuitively, hierarchical SGD can be viewed as an extension of local SGD, which only has a two-level hierarchy – every process runs mini-batch SGD locally and then synchronizes globally at a certain frequency. This also helps explain that, just like local SGD, hierarchical SGD synchronizes model parameters instead of gradients. Otherwise the gradient descent will be mathematically incorrect when the frequency is greater than 1. The schedule is illustrated in the sketch below.
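The following small sketch (not part of the PyTorch API) prints which level synchronizes at each step for the 16-process example above.

```python
# Synchronization periods for the example hierarchy: level 2 -> every 2 steps,
# level 3 -> every 4 steps, level 4 (global) -> every 8 steps.
periods = {2: 2, 3: 4, 4: 8}

def sync_level(step):
    # Only the highest level whose period divides the step number synchronizes;
    # otherwise the step is purely local mini-batch SGD (level 1).
    due = [level for level, period in periods.items() if step % period == 0]
    return max(due) if due else 1

for step in range(1, 9):
    print(step, sync_level(step))
# steps 2 and 6 -> level 2; step 4 -> level 3; step 8 -> level 4 (global)
```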
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Why Can Hierarchical SGD Mitigate Stragglers?
The key insight here is that, when there is a random straggler, it only directly slows down a relatively small group of processes instead of all the processes. Next time another random straggler is very likely to slow down a different small group, and hence a hierarchy can help smooth out the straggler effect.
The example below assumes that there is one random straggler among 8 processes in total at every step. After 4 steps, vanilla DDP that runs synchronous SGD will be slowed down by a straggler 4 times, because it runs global synchronization at every step. In contrast, hierarchical SGD runs synchronization with the groups of 4 processes after the first two steps, and then a global synchronization after another two steps. We can see that both the first two and the last two stragglers have a large overlap, and hence the performance loss can be mitigated.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Essentially, the mitigation effect of this hierarchical SGD example is between that of local SGD at a synchronization frequency of every 2 steps and of every 4 steps. The main advantage of hierarchical SGD over local SGD is better convergence efficiency at the same global synchronization frequency, because hierarchical SGD allows more low-level synchronization. Moreover, it is possible for hierarchical SGD to provide a global synchronization frequency lower than local SGD with model parity, leading to higher training performance, especially in large-scale distributed training.
Ease of Use
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Straggler mitigation is not a novel study in distributed training. Multiple approaches have been proposed, such as gossip SGD, data encoding, and gradient coding, as well as some designed particularly for parameter-server architectures, including backup workers and stale synchronous parallel. However, to the best of our knowledge, before this effort we had not found a good open-source PyTorch implementation of straggler mitigation that can work like a plugin to our training system at Cruise. In contrast, our implementation requires only minimal changes – no need to modify the existing code or tune any existing hyperparameters. This is a very appealing advantage for industry users.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
As the code example below shows, only a few lines need to be added to the setup of the DDP model, and the training loop code can remain untouched. As explained previously, hierarchical SGD is an extended form of local SGD, so the enablement can be quite similar to local SGD (see PyTorch docs of PostLocalSGDOptimizer):

1. Register a post-local SGD communication hook to run a warmup stage of fully synchronous SGD and defer hierarchical SGD.
2. Create a post-local SGD optimizer that wraps an existing local optimizer and a hierarchical SGD configuration.

```python
import torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD
from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (
    PostLocalSGDState,
    post_localSGD_hook,
)
from torch.distributed.optim import PostLocalSGDOptimizer

ddp_model = nn.parallel.DistributedDataParallel(
    module=model,
    device_ids=[rank],
)

# Register a post-local SGD communication hook for the warmup.
subgroup, _ = torch.distributed.new_subgroups()
state = PostLocalSGDState(subgroup=subgroup, start_localSGD_iter=1_000)
ddp_model.register_comm_hook(state, post_localSGD_hook)

# Wraps the existing (local) optimizer to run hierarchical model averaging.
optim = PostLocalSGDOptimizer(
    optim=optim,
    averager=hierarchicalSGD.HierarchicalModelAverager(
        # The config runs a 4-level hierarchy SGD among 128 processes:
        # 1) Each process runs mini-batch SGD locally;
        # 2) Each 8-process group synchronizes every 2 steps;
        # 3) Each 32-process group synchronizes every 4 steps;
        # 4) All 128 processes synchronize every 8 steps.
        period_group_size_dict=OrderedDict([(2, 8), (4, 32), (8, 128)]),
        # Do not run hierarchical SGD until 1K steps for model parity.
        warmup_steps=1_000,
    ),
)
```
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Algorithm Hyperparameters
Hierarchical SGD has two major hyperparameters: period_group_size_dict and warmup_steps.
- period_group_size_dict is an ordered dictionary mapping from synchronization period to process group size, used for initializing process groups of different sizes in a hierarchy to synchronize parameters concurrently. A larger group is expected to use a larger synchronization period.
- warmup_steps specifies a number of steps as the warmup stage to run synchronous SGD before hierarchical SGD. Similar to the post-local SGD algorithm, a warmup stage is usually recommended to achieve a higher accuracy. The value should be the same as the start_localSGD_iter arg used in PostLocalSGDState when post_localSGD_hook is registered. Typically the warmup stage should at least cover the beginning of training, when the loss decreases drastically.
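As a concrete, hypothetical example reusing the hierarchicalSGD alias imported earlier, a 64-process job could be configured as follows; note warmup_steps matching the start_localSGD_iter used when registering the hook.

```python
from collections import OrderedDict

# Hypothetical 64-process configuration ("HSGD 2-8,4-32,8-64" in the notation
# used later in this post): 8-process groups every 2 steps, 32-process groups
# every 4 steps, all 64 processes every 8 steps.
averager = hierarchicalSGD.HierarchicalModelAverager(
    period_group_size_dict=OrderedDict([(2, 8), (4, 32), (8, 64)]),
    warmup_steps=1_000,  # should equal start_localSGD_iter in PostLocalSGDState
)
```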
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
A subtle difference between the PyTorch implementation and the initial design proposed by the relevant papers is that, after the warmup stage, by default the processes within each host still run intra-host gradient synchronization at every step. This is because:
- The intra-host communication is relatively cheap, and it can usually significantly accelerate the convergence;
- The intra-host group (of size 4 or 8 for most industry users) can usually be a good choice of the smallest group of processes that synchronize most frequently in hierarchical SGD. If the synchronization period is 1, then gradient synchronization is faster than model parameter synchronization (a.k.a., model averaging), because DDP automatically overlaps gradient synchronization and the backward pass.
Such intra-host gradient synchronization can be disabled by unsetting the post_local_gradient_allreduce arg in PostLocalSGDState, as sketched below.
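A minimal sketch of disabling it, reusing the names from the earlier snippet (the default process group is assumed):

```python
# Disable the default intra-host gradient allreduce after the warmup stage.
state = PostLocalSGDState(
    process_group=None,  # use the default (global) process group
    subgroup=subgroup,
    start_localSGD_iter=1_000,
    post_local_gradient_allreduce=False,
)
ddp_model.register_comm_hook(state, post_localSGD_hook)
```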
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
Demonstration
Now we demonstrate that hierarchical SGD can accelerate distributed training by mitigating stragglers.
Experimental Setup
We compared the performance of hierarchical SGD against local SGD and synchronous SGD on ResNet18 (model size: 45MB). Since the model is so small, the training is not bottlenecked by data transfer cost during synchronization. To avoid the noise incurred by data loading from remote storage, the input data was randomly simulated from memory. We varied the number of GPUs used for training from 64 to 256. The batch size per worker is 32, and the number of training iterations is 1,000. Since we don't evaluate convergence efficiency in this set of experiments, warmup is not enabled.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
We also emulated stragglers at a rate of 1% on 128 and 256 GPUs, and 2% on 64 GPUs, to make sure there is at least one straggler at every step on average. These stragglers randomly appear on different CUDA devices. Each straggler stalls for 1 second in addition to the normal per-step training time (~55ms in our setup). This can be perceived as a practical scenario where 1% or 2% of input data are outliers in terms of the data pre-processing cost (I/O and/or data transformation on the fly) during training, and such cost is 20X+ larger than the average.
The code snippet below shows how a straggler can be emulated in the training loop. We applied it to a ResNet model, and it can be easily applied to other models as well.

```python
loss = loss_fn(y_pred, y)
# Emulate a straggler that lags for 1 second at a rate of 1%.
if random.randint(1, 100) == 1:
    time.sleep(1)
loss.backward()
optimizer.step()
```

The experiments are conducted on a us-central1 GCP cluster. Each machine has 4 NVIDIA Tesla T4 GPUs with 16 GB memory per GPU, connected through a 32 Gbit/s ethernet network. Each instance also features 96 vCPUs and 360 GB RAM.

| Setting | Value |
|---|---|
| Architecture | ResNet18 (45MB) |
| Workers | 64, 128, 256 |
| Backend | NCCL |
| GPU | Tesla T4, 16 GB memory |
| Batch size | 32 x number of workers |
| Straggler duration | 1 sec |
| Straggler rate | 1% on 128 and 256 GPUs, 2% on 64 GPUs |

We used multiple configurations for both local SGD and hierarchical SGD. Local SGD runs global synchronization every 2, 4, and 8 steps, respectively.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
We ran hierarchical SGD with the following configurations:

On 64 GPUs:
- Each 8-process group, 32-process group, and the global 64-process group synchronizes every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-32,8-64".
- Each 32-process group and the global 64-process group synchronizes every 4 and 8 steps, respectively. Denoted as "HSGD 4-32,8-64".

On 128 GPUs:
- Each 8-process group, 32-process group, and the global 128-process group synchronizes every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-32,8-128".
- Each 32-process group and the global 128-process group synchronizes every 4 and 8 steps, respectively. Denoted as "HSGD 4-32,8-128".

On 256 GPUs:
- Each 4-process group, 16-process group, 64-process group, and the global 256-process group synchronizes every 1, 2, 4, and 8 steps, respectively. Denoted as "HSGD 1-4,2-16,4-64,8-256".
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- Each 8-process group, 64-process group, and the global 256-process group synchronizes every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-64,8-256".
- Each 16-process group and the global 256-process group synchronizes every 4 and 8 steps, respectively. Denoted as "HSGD 4-16,8-256".

Experimental Results
The figures below show the speedups of different communication schemes against the baseline of synchronous SGD, with the emulated stragglers. We can make the following observations:
- As expected, both hierarchical SGD and local SGD can achieve a higher speedup with a lower synchronization frequency.
- The speedups of the hierarchical SGD schemes are 2.08X-2.45X on 64 GPUs, 2.57X-2.68X on 128 GPUs, and 2.63X-3.25X on 256 GPUs, respectively. This shows that hierarchical SGD can significantly mitigate stragglers, and such mitigation can be more effective at a larger scale.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- The performance of local SGD with a synchronization period of 2 steps and of 8 steps can be perceived as the lower bound and upper bound, respectively, of the experimented hierarchical SGD schemes. This is because the hierarchical SGD schemes synchronize less frequently than every 2 steps globally, but their low-level synchronization within small groups is extra overhead in comparison with global synchronization every 8 steps.

Overall, hierarchical SGD can provide a finer-grained trade-off between communication cost and model quality than local SGD. Therefore, when local SGD at a relatively large synchronization period like 8 or 4 cannot give satisfactory convergence efficiency, hierarchical SGD can have a much better chance of achieving both a good speedup and model parity.

Since only simulated data is used in the experiments, we did not demonstrate model parity here, which in practice can be achieved in two ways:
1. Tuning the hyperparameters, including both the hierarchy and the warmup steps;
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
2. For some cases, hierarchical SGD could lead to a slightly lower quality than the original model for the same number of training steps (i.e., a lower convergence rate), but with a speedup like 2X+ per training step, it is still possible to achieve model parity with more steps while spending less total training time.

Limitations
Before applying hierarchical SGD to straggler mitigation, the user should be aware of a few limitations of this approach:
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- This approach can only mitigate non-persistent stragglers, which occur at different workers at different times. For the case of persistent stragglers, which can be caused by hardware degradation or a network issue on a specific host, these stragglers will slow down the same low-level subgroup every time, leading to nearly no straggler mitigation.
- This approach can only mitigate low-frequency stragglers. E.g., if 30% of workers can randomly become stragglers at every step, then most low-level synchronizations will still be slowed down by stragglers. As a result, hierarchical SGD may not show an obvious performance advantage over synchronous SGD.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- Since hierarchical SGD applies model averaging, which does not overlap with the backward pass the way the gradient averaging used by vanilla DDP does, its performance gain from straggler mitigation must outweigh the performance loss of not overlapping communication with the backward pass. Therefore, if stragglers only slow down training by less than 10%, hierarchical SGD may not be able to bring much speedup. This limitation can be addressed by overlapping the optimizer step and backward pass in the future.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
- Since hierarchical SGD is less well-studied than local SGD, there is no guarantee that hierarchical SGD with a finer-grained synchronization granularity can converge faster than certain advanced forms of local SGD, such as SlowMo, which can improve convergence efficiency with slow momentum. However, to the best of our knowledge, these advanced algorithms cannot be natively supported as a PyTorch DDP plugin like hierarchical SGD yet.

Acknowledgements
We would like to thank Cruise teammates Bo Tian, Sergei Vorobev, Eugene Selivonchyk, Tsugn-Hsien Lee, Dan Ring, Ian Ackerman, Lei Chen, Maegan Chew, Viet Anh To, Xiaohui Long, Zeyu Chen, Alexander Sidorov, Igor Tsvetkov, Xin Hu, Manav Kataria, Marina Rubtsova, and Mohamed Fawzy, as well as Meta teammates Shen Li, Yanli Zhao, Suraj Subramanian, Hamid Shojanzeri, Anjali Sridhar and Bernard Nguyen for the support.
https://pytorch.org/blog/straggler-mitigation/
pytorch blogs
layout: blog_detail
title: "Easily list and initialize models with new APIs in TorchVision"
author: Vasilis Vryniotis and Laurence Rouesnel
featured-img: "/assets/images/easily-list-and-initialize-models-with-new-apis-in-torchvision-1.png"

TorchVision now supports listing and initializing all available built-in models and weights by name. This new API builds upon the recently introduced Multi-weight support API, is currently in Beta, and it addresses a long-standing request from the community.
https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/
pytorch blogs
You can try out the new API in the latest nightly release of TorchVision. We're looking to collect feedback ahead of finalizing the feature in TorchVision v0.14. We have created a dedicated Github Issue where you can post your comments, questions and suggestions!

Querying and initializing available models

Before the new model registration API, developers had to query the __dict__ attribute of the modules in order to list all available models or to fetch a specific model builder method by its name:

```python
# Initialize a model by its name:
model = torchvision.models.__dict__[model_name]()

# List available models:
available_models = [
    k for k, v in torchvision.models.__dict__.items()
    if callable(v) and k[0].islower() and k[0] != "_"
]
```

The above approach does not always produce the expected results and is hard to discover. For example, since the [`get_weight()`](https://pytorch.org/vision/main/models.html#using-models-from-hub) method is exposed publicly under the same module, it will be included in the list despite not being a model. In general, reducing the verbosity (less imports, shorter names etc) and being able to initialize models and weights directly from their names (better support of configs, TorchHub etc) was [feedback](https://github.com/pytorch/vision/issues/5088) provided previously by the community.

To solve this problem, we have developed a model registration API.

## A new approach

We've added 4 new methods under the torchvision.models module:

```python
from torchvision.models import get_model, get_model_weights, get_weight, list_models
```
https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/
pytorch blogs
The styles and naming conventions align closely with a prototype mechanism proposed by Philip Meier for the [Datasets V2](https://github.com/pytorch/vision/blob/main/torchvision/prototype/datasets/_api.py) API, aiming to offer a similar user experience. The model registration methods are kept private on purpose as we currently focus only on supporting the built-in models of TorchVision.

### List models

Listing all available models in TorchVision can be done with a single function call:

```python
>>> list_models()
['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', 'quantized_mobilenet_v3_large', ...]
```

To list the available models of specific submodules:

```python
>>> list_models(module=torchvision.models)
['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', ...]
>>> list_models(module=torchvision.models.quantization)
['quantized_mobilenet_v3_large', ...]
```

### Initialize models

Now that you know which models are available, you can easily initialize a model with pre-trained weights:
https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/
pytorch blogs
```python
>>> get_model("quantized_mobilenet_v3_large", weights="DEFAULT")
QuantizableMobileNetV3(
  (features): Sequential(
   ....
  )
)
```

### Get weights

Sometimes, while working with config files or using TorchHub, you might have the name of a specific weight entry and wish to get its instance. This can be easily done with the following method:

```python
>>> get_weight("ResNet50_Weights.IMAGENET1K_V2")
ResNet50_Weights.IMAGENET1K_V2
```

To get the enum class with all available weights of a specific model you can use either its name:

```python
>>> get_model_weights("quantized_mobilenet_v3_large")
<enum 'MobileNet_V3_Large_QuantizedWeights'>
```

Or its model builder method:

```python
>>> get_model_weights(torchvision.models.quantization.mobilenet_v3_large)
<enum 'MobileNet_V3_Large_QuantizedWeights'>
```

### TorchHub support

The new methods are also available via TorchHub:

```python
import torch

# Fetching a specific weight entry by its name:
weights = torch.hub.load("pytorch/vision", "get_weight", weights="ResNet50_Weights.IMAGENET1K_V2")

# Fetching the weights enum class to list all available entries:
weight_enum = torch.hub.load("pytorch/vision", "get_model_weights", name="resnet50")
print([weight for weight in weight_enum])
```

### Putting it all together

For example, if you wanted to retrieve all the small-sized models with pre-trained weights and initialize one of them, it's a matter of using the above APIs:

```python
import torchvision
from torchvision.models import get_model, get_model_weights, list_models

max_params = 5000000

tiny_models = []
for model_name in list_models(module=torchvision.models):
    weights_enum = get_model_weights(model_name)
    if len([w for w in weights_enum if w.meta["num_params"] <= max_params]) > 0:
        tiny_models.append(model_name)

print(tiny_models)
# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]

model = get_model(tiny_models[0], weights="DEFAULT")
print(sum(x.numel() for x in model.state_dict().values()))
# 2239188
```

For more technical details please see the original RFC. Please spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from beta and including it in the next release. You can do this on the dedicated Github Issue. We are looking forward to reading your comments!
https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/
pytorch blogs
layout: blog_detail
title: 'Introducing native PyTorch automatic mixed precision for faster training on NVIDIA GPUs'
author: Mengdi Huang, Chetan Tekur, Michael Carilli

Most deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However, this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for mixed-precision training, which combined single-precision (FP32) with half-precision (e.g. FP16) format when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs:
- Shorter training time;
- Lower memory requirements, enabling larger batch sizes, larger models, or larger inputs.
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed Apex in 2018, which is a lightweight PyTorch extension with an Automatic Mixed Precision (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.

For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, torch.cuda.amp. torch.cuda.amp is more flexible and intuitive compared to apex.amp. Some of apex.amp's known pain points that torch.cuda.amp has been able to fix:
- Guaranteed PyTorch version compatibility, because it's part of PyTorch
- No need to build extensions
- Windows support
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
- Bitwise accurate saving/restoring of checkpoints
- DataParallel and intra-process model parallelism (although we still recommend torch.nn.DistributedDataParallel with one GPU per process as the most performant approach)
- Gradient penalty (double backward)
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
- torch.cuda.amp.autocast() has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to apex.amp.initialize() (including cross-validation) without difficulty. Multiple convergence runs in the same script should each use a fresh GradScaler instance, but GradScalers are lightweight and self-contained so that's not a problem (a short sketch of this pattern follows the example below).
- Sparse gradient support

With AMP being added to PyTorch core, we have started the process of deprecating apex.amp. We have moved apex.amp to maintenance mode and will support customers using apex.amp. However, we highly encourage apex.amp customers to transition to using torch.cuda.amp from PyTorch Core.

Example Walkthrough

Please see official docs for usage:
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
* https://pytorch.org/docs/stable/amp.html
* https://pytorch.org/docs/stable/notes/amp_examples.html

Example:

```python
import torch
# Creates once at the beginning of training
scaler = torch.cuda.amp.GradScaler()

for data, label in data_iter:
   optimizer.zero_grad()
   # Casts operations to mixed precision
   with torch.cuda.amp.autocast():
      loss = model(data)

   # Scales the loss, and calls backward()
   # to create scaled gradients
   scaler.scale(loss).backward()

   # Unscales gradients and calls
   # or skips optimizer.step()
   scaler.step(optimizer)

   # Updates the scale for next iteration
   scaler.update()
```
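And, as noted above, a fresh GradScaler per convergence run is enough. A minimal sketch, with build_model_and_optimizer and train_one_run as hypothetical placeholders:

```python
# Each convergence run in the same script gets its own GradScaler instance.
for run in range(num_runs):
    model, optimizer = build_model_and_optimizer()  # hypothetical factory
    scaler = torch.cuda.amp.GradScaler()            # fresh, lightweight, self-contained
    train_one_run(model, optimizer, scaler)         # hypothetical training loop
```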
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
Performance Benchmarks

In this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA A100 GPU and the previous-generation V100 GPU. The mixed precision performance is compared to FP32 performance, when running Deep Learning workloads in the NVIDIA pytorch:20.06-py3 container from NGC.
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
Accuracy: AMP (FP16), FP32

The advantage of using AMP for Deep Learning training is that the models converge to a similar final accuracy while providing improved training performance. To illustrate this point, for ResNet-50 v1.5 training, we see the following accuracy results, where higher is better. Please note that the below accuracy numbers are sample numbers that are subject to run-to-run variance of up to 0.4%. Accuracy numbers for other models including BERT, Transformer, ResNeXt-101, Mask-RCNN, DLRM can be found at the NVIDIA Deep Learning Examples Github.

Training accuracy: NVIDIA DGX A100 (8x A100 40GB)

| epochs | Mixed Precision Top 1(%) | TF32 Top 1(%) |
|---|---|---|
| 90 | 76.93 | 76.85 |

Training accuracy: NVIDIA DGX-1 (8x V100 16GB)

| epochs | Mixed Precision Top 1(%) | FP32 Top 1(%) |
|---|---|---|
| 50 | 76.25 | 76.26 |
| 90 | 77.09 | 77.01 |
| 250 | 78.42 | 78.30 |

Speedup Performance:
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
FP16 on NVIDIA V100 vs. FP32 on V100 AMP with FP16 is the most performant option for DL training on the V100. In Table 1, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy. Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better. FP16 on NVIDIA A100 vs. FP16 on V100 AMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better. Call to action AMP provides a healthy speedup for Deep Learning training workloads on Nvidia Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning examples. NVIDIA PyTorch with native AMP support is available from the PyTorch NGC container version 20.06. We highly encourage existing apex.amp customers to transition to using torch.cuda.amp from PyTorch Core available in the latest PyTorch 1.6 release.
https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/
pytorch blogs
layout: blog_detail
title: 'Tensor Comprehensions in PyTorch'
author: Priya Goyal (FAIR), Nicolas Vasilache (FAIR), Oleksandr Zinenko (Inria & DI ENS), Theodoros Theodoridis (ETH Zürich), Zachary DeVito (FAIR), William S. Moses (MIT CSAIL), Sven Verdoolaege (FAIR), Andrew Adams (FAIR), Albert Cohen (Inria & DI ENS & FAIR)
redirect_from: /2018/03/05/tensor-comprehensions.html

Tensor Comprehensions (TC) is a tool that lowers the barrier for writing high-performance code. It generates GPU code from a simple high-level language and autotunes the code for specific input sizes. We highly recommend reading the Tensor Comprehensions blogpost first. If you ran into any of the following scenarios, TC is a useful tool for you:
- Your PyTorch layer is large and slow, and you contemplated writing dedicated C++ or CUDA code for it. But you don't know how to program in CUDA or write low-level code.
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
- You wrote a CUDA layer, but it took a week to write, debug, and optimize for speed. You wished you could do this in an hour.
- You want to fuse multiple layers like Conv-ReLU-BatchNorm or Linear-ReLU-Linear-ReLU in your network for speed, but it was quite difficult to comprehend.
- Your research involves weird Tensor shapes that CuDNN and MKL are not optimized for. For example, you do convolutions of 13 x 24 with an input image of 143 x 55. You tried running it with CuDNN and it was slower than you wished.
- Your code is slowed down by transposing Tensors constantly to fit a particular memory layout. You wish it was easy to write custom code that operates efficiently on your input layout.

Tensor Comprehensions are seamless to use in PyTorch, interoperating with PyTorch Tensors and nn Variables.

Let us run through using TC with PyTorch.

#### 1. Install the package

```
conda install -c pytorch -c tensorcomp tensor_comprehensions
```
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
At this time we only provide Linux-64 binaries which have been tested on Ubuntu 16.04 and CentOS 7. TC depends on heavyweight C++ projects such as [Halide](http://halide-lang.org/), [Tapir-LLVM](https://github.com/wsmoses/Tapir-LLVM) and [ISL](http://isl.gforge.inria.fr/). Hence, we rely on Anaconda to distribute these dependencies reliably. For the same reason, TC is not available via PyPI.

#### 2. Import the python package

```python
import tensor_comprehensions as tc
```

#### 3. Define the TC expression and create a python function

```python
lang = """
def fcrelu(float(B,M) I, float(N,M) W1, float(N) B1) -> (O1) {
    O1(b, n) +=! I(b, m) * W1(n, m)
    O1(b, n) = O1(b, n) + B1(n)
    O1(b, n) = fmax(O1(b, n), 0)
}
"""
fcrelu = tc.define(lang, name="fcrelu")
```

This fcrelu function takes PyTorch Tensors as input and returns a PyTorch Tensor. It takes input I, weight W1, bias B1 and returns output O1.

#### 4. Let's create some dummy input tensors

```python
B, M, N = 100, 128, 100
I, W1, B1 = torch.randn(B, M).cuda(), torch.randn(N, M).cuda(), torch.randn(N).cuda()
```

#### 5. Now autotune the function for your input sizes

```python
fcrelu.autotune(I, W1, B1, cache="fcrelu_100_128_100.tc")
```

The autotuner is your biggest friend. You generally do not want to use a tc function without autotuning it first. When the autotuning is running, the current best performance is displayed. If you are satisfied with the current result or you are out of time, stop the tuning procedure by pressing Ctrl+C. cache saves the results of the autotuned kernel search to the file fcrelu_100_128_100.tc. The next time you call the same line of code, it loads the results of the autotuning without recomputing it.
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
The autotuner has a few hyperparameters (just like your ConvNet has learning rate, number of layers, etc.). We pick reasonable defaults, but you can read about using advanced options here.

#### 6. Call the function with the inputs, to get your result

```python
out = fcrelu(I, W1, B1)
```

Now, let's look at how to write TC expressions.

A quick primer on the TC language

The TC notation focuses on the mathematical nature of the layer, leaving performance considerations to its backend code that uses Halide and polyhedral compilation techniques which accumulate decades of cutting edge Loop Nest Optimization (LNO) research. TC is close to np.einsum. We shall quickly learn TC by example.

```python
lang = """
def matmul(float(M,N) A, float(N,K) B) -> (output) {
    output(i, j) +=! A(i, kk) * B(kk, j)
}
"""
```

In this example, we define a function `matmul` which takes two inputs `A` and `B` of shapes `M x N` and `N x K` and returns a single `output`. The shape of `output` is automatically inferred by the TC language (discussed below). Let's look at this line:

```python
output(i, j) +=! A(i, kk) * B(kk, j)
```

It says:

- output(i, j) means output is 2D.
- For each location output(i, j), we add (+=) A(i, kk) * B(kk, j).
- i is well-defined as all locations in A dim=0, i.e. i in range(0, M)
- j is well-defined as all locations in B dim=1, i.e. j in range(0, K)
- kk is inferred as all locations from 0 to N

The shape of output is inferred from the maximum values i and j can take, which is M and K, so output is of size M x K.

The ! symbol initializes output with 0.0. It is equivalent to:

```python
output(i, j) = 0
output(i, j) += A(i, kk) * B(kk, j)
```

Scalar inputs and range constraints: implement AvgPool2d
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
""" {% raw %}def avgpool(float(B, C, H, W) input) -> (output) {{{% endraw %} output(b, c, h, w) += input(b, c, h * {sH} + kh, w * {sW} + kw) where kh in 0:{kH}, kw in 0:{kW} {% raw %}}}{% endraw %} """ avgpool = tc.define(LANG, name="avgpool", constants={"sH":1, "sW":1, "kH":2, "kW":2}) here the where keyword can take ranges of values to operate on. 0:{kH} is equivalent range(kH) in Python. Note: the syntax for passing in scalars is subject to change in the next release. torch.nn layers We added some sugar-coating around the basic PyTorch integration of TC to make it easy to integrate TC into larger torch.nn models by defining the forward and backward TC expressions and taking Variable inputs / outputs. Here is an example of defining a convolution layer with TC. Some essentials that you will miss (we're working on them)
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
Autotuning for variable-length sequences

The TC autotuner requires all input sizes to be specified beforehand. For example, if you have an input I1 which is an image batch, the autotuner wants to know the exact shape of I1 to generate an optimized kernel. You cannot specify: image with height between 200 and 300. This is more essential for sequence data such as NLP, where each sentence can have a different length. The reason why the autotuner is non-parametric is that it is harder and harder to auto-tune parametric constraints; this is active research. Hence, for the first release, we made a conscious decision to give you the tool in a form where we know it works well.

As a work-around, if you know that you have a few specific shapes of interest, you can run the autotuner with these multiple shapes.

```python
relu = tc.define(LANG, name="relu")
batch, channels = 16, 3
tc.autotune((batch, channels, 32, 32))  # image of size 32 x 32
tc.autotune((batch, channels, 48, 48))  # image of size 48 x 48
tc.autotune((batch, channels, 64, 64))  # image of size 64 x 64
```

Now the autotuner is tuned for these three specific image sizes 32x32, 48x48 and 64x64.

Lack of loops

If you want to write an RNN, it's easy to see it as a for loop over time. However, the TC language does not have loops yet. If you really want to write RNNs, you can write unrolled loops.

Strided-Tensors

The TC backend does not support non-contiguous Tensors yet. If the inputs you give are not contiguous, they are made contiguous before being passed to the TC backend.

Reshaping Tensors within a TC expression

You cannot write this operation in TC: torch.matmul(...).view(...).mean(...). Whenever there is a need for a view to change the shape of an input, you have to get the output and view it at the PyTorch level, as sketched below.
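A sketch of that pattern, assuming matmul has been created with tc.define from the primer above and that A, B are CUDA tensors of shapes M x N and N x K (K even); the reshape is purely illustrative.

```python
# Compute the raw (M, K) output with the TC kernel, then reshape/reduce with
# ordinary PyTorch ops outside the TC expression.
out = matmul(A, B)
out = out.view(M, K // 2, 2).mean(dim=-1)  # any view()/mean() happens at the PyTorch level
```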
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
Getting Started

- Walk through the Tutorial to quickly get started with understanding and using the Tensor Comprehensions PyTorch package.
- Over 20 examples of various ML layers with TC, including avgpool, maxpool, matmul, matmul - give output buffers and batch-matmul, convolution, strided-convolution, batchnorm, copy, cosine similarity, Linear, Linear + ReLU, group-convolutions, strided group-convolutions, indexing, Embedding (lookup table), small-mobilenet, softmax, tensordot, transpose.
- Detailed docs on Tensor Comprehensions and integration with PyTorch.
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
Communication

- Slack: For discussion around framework integration, build support, collaboration, etc. join our slack channel.
- Email: [email protected]
- GitHub: bug reports, feature requests, install issues, RFCs, thoughts, etc.

Acknowledgements

We would like to thank Soumith Chintala, Edward Yang and Sam Gross for their immense guidance and help in making the integration API nice and smooth. We would also like to thank the rest of the PyTorch team and our pre-release users for their helpful feedback that guided us in making the integration better.
https://pytorch.org/blog/tensor-comprehensions/
pytorch blogs
layout: blog_detail
title: "Introducing TorchVision's New Multi-Weight Support API"
author: Vasilis Vryniotis
featured-img: "assets/images/torchvision_featured.png"

TorchVision has a new backwards compatible API for building models with multi-weight support. The new API allows loading different pre-trained weights on the same model variant, keeps track of vital meta-data such as the classification labels and includes the preprocessing transforms necessary for using the models. In this blog post, we plan to review the prototype API, show-case its features and highlight key differences with the existing one.

We are hoping to get your thoughts about the API prior to finalizing it. To collect your feedback, we have created a Github issue where you can post your thoughts, questions and comments.
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
Limitations of the current API TorchVision currently provides pre-trained models which could be a starting point for transfer learning or used as-is in Computer Vision applications. The typical way to instantiate a pre-trained model and make a prediction is: ```Python import torch from PIL import Image from torchvision import models as M from torchvision.transforms import transforms as T img = Image.open("test/assets/encode_jpeg/grace_hopper_517x606.jpg") Step 1: Initialize model model = M.resnet50(pretrained=True) model.eval() Step 2: Define and initialize the inference transforms preprocess = T.Compose([ T.Resize([256, ]), T.CenterCrop(224), T.PILToTensor(), T.ConvertImageDtype(torch.float), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) Step 3: Apply inference preprocessing transforms batch = preprocess(img).unsqueeze(0) prediction = model(batch).squeeze(0).softmax(0) Step 4: Use the model and print the predicted category
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
class_id = prediction.argmax().item()
score = prediction[class_id].item()
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]
category_name = categories[class_id]
print(f"{category_name}: {100 * score}%")
```
There are a few limitations with the above approach:

- Inability to support multiple pre-trained weights: Since the pretrained variable is boolean, we can only offer one set of weights. This poses a severe limitation when we significantly improve the accuracy of existing models and we want to make those improvements available to the community. It also stops us from offering pre-trained weights of the same model variant on different datasets.
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
- Missing inference/preprocessing transforms: The user is forced to define the necessary transforms prior to using the model. The inference transforms are usually linked to the training process and the dataset used to estimate the weights. Any minor discrepancies in these transforms (such as the interpolation value, resize/crop sizes, etc.) can lead to major reductions in accuracy or unusable models.
- Lack of meta-data: Critical pieces of information in relation to the weights are unavailable to the users. For example, one needs to look into external sources and the documentation to find things like the category labels, the training recipe, the accuracy metrics, etc.

The new API addresses the above limitations and reduces the amount of boilerplate code needed for standard tasks.

## Overview of the prototype API

Let’s see how we can achieve exactly the same results as above using the new API:
```Python
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
from PIL import Image
from torchvision.prototype import models as PM

img = Image.open("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

# Step 1: Initialize model
weights = PM.ResNet50_Weights.IMAGENET1K_V1
model = PM.resnet50(weights=weights)
model.eval()

# Step 2: Initialize the inference transforms
preprocess = weights.transforms()

# Step 3: Apply inference preprocessing transforms
batch = preprocess(img).unsqueeze(0)
prediction = model(batch).squeeze(0).softmax(0)

# Step 4: Use the model and print the predicted category
class_id = prediction.argmax().item()
score = prediction[class_id].item()
category_name = weights.meta["categories"][class_id]
print(f"{category_name}: {100 * score}%")
```
As we can see, the new API eliminates the aforementioned limitations. Let’s explore the new features in detail.
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
### Multi-weight support

At the heart of the new API, we have the ability to define multiple different weights for the same model variant. Each model building method (e.g. resnet50) has an associated Enum class (e.g. ResNet50_Weights) which has as many entries as the number of pre-trained weights available. Additionally, each Enum class has a DEFAULT alias which points to the best available weights for the specific model. This allows users who want to always use the best available weights to do so without modifying their code. Here is an example of initializing models with different weights:
```python
from torchvision.prototype.models import resnet50, ResNet50_Weights

# Legacy weights with accuracy 76.130%
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# New weights with accuracy 80.858%
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)

# Best available weights (currently alias for IMAGENET1K_V2)
model = resnet50(weights=ResNet50_Weights.DEFAULT)
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
# No weights - random initialization
model = resnet50(weights=None)
```

### Associated meta-data & preprocessing transforms

The weights of each model are associated with meta-data. The type of information we store depends on the task of the model (Classification, Detection, Segmentation, etc.). Typical information includes a link to the training recipe, the interpolation mode, the categories and the validation metrics. These values are programmatically accessible via the `meta` attribute:
```Python
from torchvision.prototype.models import ResNet50_Weights

# Accessing a single record
size = ResNet50_Weights.IMAGENET1K_V2.meta["size"]

# Iterating the items of the meta-data dictionary
for k, v in ResNet50_Weights.IMAGENET1K_V2.meta.items():
    print(k, v)
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
```

Additionally, each weights entry is associated with the necessary preprocessing transforms. All current preprocessing transforms are JIT-scriptable and can be accessed via the `transforms` attribute. Prior to using them with the data, the transforms need to be initialized/constructed. This lazy initialization scheme is done to ensure the solution is memory efficient. The input of the transforms can be either a `PIL.Image` or a `Tensor` read using `torchvision.io`.
```Python
from torchvision.prototype.models import ResNet50_Weights

# Initializing preprocessing at standard 224x224 resolution
preprocess = ResNet50_Weights.IMAGENET1K_V2.transforms()

# Initializing preprocessing at 400x400 resolution
preprocess = ResNet50_Weights.IMAGENET1K_V2.transforms(crop_size=400, resize_size=400)

# Once initialized the callable can accept the image data:
# img_preprocessed = preprocess(img)
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
```

Associating the weights with their meta-data and preprocessing will boost transparency, improve reproducibility and make it easier to document how a set of weights was produced.

### Get weights by name

The ability to link the weights directly with their properties (meta-data, preprocessing callables, etc.) is the reason why our implementation uses Enums instead of Strings. Nevertheless, for cases where only the name of the weights is available, we offer a method capable of linking Weight names to their Enums:
```Python
from torchvision.prototype.models import get_weight, ResNet50_Weights

# Weights can be retrieved by name:
assert get_weight("ResNet50_Weights.IMAGENET1K_V1") == ResNet50_Weights.IMAGENET1K_V1
assert get_weight("ResNet50_Weights.IMAGENET1K_V2") == ResNet50_Weights.IMAGENET1K_V2

# Including using the DEFAULT alias:
assert get_weight("ResNet50_Weights.DEFAULT") == ResNet50_Weights.IMAGENET1K_V2
```
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
## Deprecations

In the new API the boolean `pretrained` and `pretrained_backbone` parameters, which were previously used to load weights to the full model or to its backbone, are deprecated. The current implementation is fully backwards compatible as it seamlessly maps the old parameters to the new ones. Using the old parameters with the new builders emits the following deprecation warnings:
```Python
>>> model = torchvision.prototype.models.resnet50(pretrained=True)
UserWarning: The parameter 'pretrained' is deprecated, please use 'weights' instead.
UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
```
Additionally, the builder methods require using keyword parameters. The use of positional parameters is deprecated, and using them emits the following warning:
```Python
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
>>> model = torchvision.prototype.models.resnet50(None)
UserWarning: Using 'weights' as positional parameter(s) is deprecated. Please use keyword parameter(s) instead.
```

## Testing the new API

Migrating to the new API is very straightforward. The following method calls between the two APIs are all equivalent:
```Python
# Using pretrained weights:
torchvision.prototype.models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
torchvision.models.resnet50(pretrained=True)
torchvision.models.resnet50(True)

# Using no weights:
torchvision.prototype.models.resnet50(weights=None)
torchvision.models.resnet50(pretrained=False)
torchvision.models.resnet50(False)
```
Note that the prototype features are available only on the nightly versions of TorchVision, so to use them you need to install it as follows:
```
conda install torchvision -c pytorch-nightly
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
```
For alternative ways to install the nightly, have a look at the PyTorch download page. You can also install TorchVision from source from the latest main; for more information have a look at our repo.

## Accessing state-of-the-art model weights with the new API

If you are still unconvinced about giving the new API a try, here is one more reason to do so. We’ve recently refreshed our training recipe and achieved SOTA accuracy for many of our models. The improved weights can easily be accessed via the new API. Here is a quick overview of the model improvements:

| Model | Old Acc@1 | New Acc@1 |
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
| -------------------------- | --------- | --------- | | EfficientNet B1 | 78.642 | 79.838 | | MobileNetV3 Large | 74.042 | 75.274 | | Quantized ResNet50 | 75.92 | 80.282 | | Quantized ResNeXt101 32x8d | 78.986 | 82.574 | | RegNet X 400mf | 72.834 | 74.864 | | RegNet X 800mf | 75.212 | 77.522 | | RegNet X 1 6gf | 77.04 | 79.668 | | RegNet X 3 2gf | 78.364 | 81.198 | | RegNet X 8gf | 79.344 | 81.682 | | RegNet X 16gf | 80.058 | 82.72 | | RegNet X 32gf | 80.622 | 83.018 | | RegNet Y 400mf | 74.046 | 75.806 | | RegNet Y 800mf | 76.42 | 78.838 | | RegNet Y 1 6gf | 77.95 | 80.882 | | RegNet Y 3 2gf | 78.948 | 81.984 | | RegNet Y 8gf | 80.032 | 82.828 | | RegNet Y 16gf | 80.424 | 82.89 |
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
| RegNet Y 32gf | 80.878 | 83.366 |
| ResNet50 | 76.13 | 80.858 |
| ResNet101 | 77.374 | 81.886 |
| ResNet152 | 78.312 | 82.284 |
| ResNeXt50 32x4d | 77.618 | 81.198 |
| ResNeXt101 32x8d | 79.312 | 82.834 |
| Wide ResNet50 2 | 78.468 | 81.602 |
| Wide ResNet101 2 | 78.848 | 82.51 |

Please spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from prototype and including it in the next release. You can do this on the dedicated GitHub issue. We are looking forward to reading your comments!
https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/
pytorch blogs
layout: blog_detail
title: "Optimizing CUDA Recurrent Neural Networks with TorchScript"
author: "The PyTorch Team"
date: 2019-05-01 8:00:00 -0500

This week, we officially released PyTorch 1.1, a large feature update to PyTorch 1.0. One of the new features we've added is better support for fast, custom Recurrent Neural Networks (fastrnns) with TorchScript (the PyTorch JIT) (https://pytorch.org/docs/stable/jit.html). RNNs are popular models that come in different shapes and sizes and have shown good performance on a variety of NLP tasks. PyTorch implements a number of the most popular ones (the Elman RNN, GRU, and LSTM), as well as multi-layered and bidirectional variants.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
However, many users want to implement their own custom RNNs, taking ideas from recent literature. Applying Layer Normalization to LSTMs is one such use case. Because the PyTorch CUDA LSTM implementation uses a fused kernel, it is difficult to insert normalizations or even modify the base LSTM implementation. Many users have turned to writing custom implementations using standard PyTorch operators, but such code suffers from high overhead: most PyTorch operations launch at least one kernel on the GPU and RNNs generally run many operations due to their recurrent nature. However, we can apply TorchScript to fuse operations and optimize our code automatically, launching fewer, more optimized kernels on the GPU.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
Our goal is for users to be able to write fast, custom RNNs in TorchScript without writing specialized CUDA kernels to achieve similar performance. In this post, we'll provide a tutorial on how to write your own fast RNNs with TorchScript. To better understand the optimizations TorchScript applies, we'll examine how they work on a standard LSTM implementation, but most of the optimizations can be applied to general RNNs.

Writing custom RNNs

To get started, you can use this file as a template to write your own custom RNNs. We are constantly improving our infrastructure to make the performance better. If you want to gain the speed/optimizations that TorchScript currently provides (like operator fusion, batch matrix multiplications, etc.), here are some guidelines to follow. The next section explains the optimizations in depth.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
- If the customized operations are all element-wise, that's great because you can get the benefits of the PyTorch JIT's operator fusion automatically!
- If you have more complex operations (e.g. reduce ops mixed with element-wise ops), consider grouping the reduce operations and element-wise ops separately in order to fuse the element-wise operations into a single fusion group.
- If you want to know what has been fused in your custom RNN, you can inspect the operation's optimized graph by using graph_for. Using LSTMCell as an example:

```python
# get inputs and states for LSTMCell
inputs = get_lstm_inputs()

# instantiate a ScriptModule
cell = LSTMCell(input_size, hidden_size)

# print the optimized graph using graph_for
out = cell(inputs)
print(cell.graph_for(inputs))
```
This will generate the optimized TorchScript graph (a.k.a. the PyTorch JIT IR) for the specialized inputs that you provide:
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
```
graph(%x : Float(*, *),
      %hx : Float(*, *),
      %cx : Float(*, *),
      %w_ih : Float(*, *),
      %w_hh : Float(*, *),
      %b_ih : Float(*),
      %b_hh : Float(*)):
  %hy : Float(*, *), %cy : Float(*, *) = prim::DifferentiableGraph_0(%cx, %b_hh, %b_ih, %hx, %w_hh, %x, %w_ih)
  %30 : (Float(*, *), Float(*, *)) = prim::TupleConstruct(%hy, %cy)
  return (%30)
with prim::DifferentiableGraph_0 = graph(%13 : Float(*, *),
      %29 : Float(*),
      %33 : Float(*),
      %40 : Float(*, *),
      %43 : Float(*, *),
      %45 : Float(*, *),
      %48 : Float(*, *)):
  %49 : Float(*, *) = aten::t(%48)
  %47 : Float(*, *) = aten::mm(%45, %49)
  %44 : Float(*, *) = aten::t(%43)
  %42 : Float(*, *) = aten::mm(%40, %44)
  ...some broadcast sizes operations...
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
  ...some broadcast sizes operations...
  %hy : Float(*, *), %287 : Float(*, *), %cy : Float(*, *), %outgate.1 : Float(*, *), %cellgate.1 : Float(*, *), %forgetgate.1 : Float(*, *), %ingate.1 : Float(*, *) = prim::FusionGroup_0(%13, %346, %345, %344, %343)
  ...some broadcast sizes operations...
  return (%hy, %cy, %49, %44, %196, %199, %340, %192, %325, %185, %ingate.1, %forgetgate.1, %cellgate.1, %outgate.1, %395, %396, %287)
with prim::FusionGroup_0 = graph(%13 : Float(*, *),
      %71 : Tensor,
      %76 : Tensor,
      %81 : Tensor,
      %86 : Tensor):
  ...some chunks, constants, and add operations...
  %ingate.1 : Float(*, *) = aten::sigmoid(%38)
  %forgetgate.1 : Float(*, *) = aten::sigmoid(%34)
  %cellgate.1 : Float(*, *) = aten::tanh(%30)
  %outgate.1 : Float(*, *) = aten::sigmoid(%26)
  %14 : Float(*, *) = aten::mul(%forgetgate.1, %13)
  %11 : Float(*, *) = aten::mul(%ingate.1, %cellgate.1)
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
  %cy : Float(*, *) = aten::add(%14, %11, %69)
  %4 : Float(*, *) = aten::tanh(%cy)
  %hy : Float(*, *) = aten::mul(%outgate.1, %4)
  return (%hy, %4, %cy, %outgate.1, %cellgate.1, %forgetgate.1, %ingate.1)
```
From the above graph we can see that it has a prim::FusionGroup_0 subgraph that fuses all element-wise operations in LSTMCell (transpose and matrix multiplication are not element-wise ops). Some graph nodes might be hard to understand at first, but we will explain some of them in the optimization section; we have also omitted some long, verbose operators that are there just for correctness.

Variable-length sequences best practices

TorchScript does not support PackedSequence. Generally, when one is handling variable-length sequences, it is best to pad them into a single tensor and send that tensor through a TorchScript LSTM. Here's an example:
```python
sequences = [...]  # List[Tensor], each Tensor is T' x C
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
padded = torch.nn.utils.rnn.pad_sequence(sequences)
lengths = [seq.size(0) for seq in sequences]
padded  # T x N x C, where N is batch size and T is the max of all T'
model = LSTM(...)
output, hiddens = model(padded)
output  # T x N x C
```
Of course, output may have some garbage data in the padded regions; use lengths to keep track of which part you don't need.

Optimizations

We will now explain the optimizations performed by the PyTorch JIT to speed up custom RNNs. We will use a simple custom LSTM model in TorchScript to illustrate the optimizations, but many of these are general and apply to other RNNs. To illustrate the optimizations we did and how we benefit from them, we will run a simple custom LSTM model written in TorchScript (you can refer to the code in custom_lstm.py or the code snippets below) and time our changes.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
We set up the environment on a machine equipped with two Intel Xeon chips and one NVIDIA P100, with cuDNN v7.3 and CUDA 9.2 installed. The basic setup for the LSTM model is as follows:
```python
input_size = 512
hidden_size = 512
mini_batch = 64
numLayers = 1
seq_length = 100
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
```
The most important thing the PyTorch JIT does is compile the Python program to a PyTorch JIT IR, which is an intermediate representation used to model the program's graph structure. This IR can then benefit from whole-program optimization and hardware acceleration, and overall has the potential to provide large computation gains. In this example, we run the initial TorchScript model with only the compiler optimization passes that are provided by the JIT, including common subexpression elimination, constant pooling, constant propagation, dead code elimination and some peephole optimizations. We run the model training 100 times after a warm-up and average the training time; a minimal sketch of this timing methodology is shown below. The initial results are a forward time of around 27ms and a backward time of around 64ms, which is still quite far from what the PyTorch cuDNN LSTM provides. Next we will explain the major optimizations we made to improve training and inference performance, starting with LSTMCell and LSTMLayer, followed by some miscellaneous optimizations.
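The sketch below is only an illustration of the warm-up-then-average methodology (it times a combined forward + backward step), not the exact harness used to produce the numbers in this post; `model`, `inp` and `state` are assumed to be the TorchScript LSTM and matching CUDA inputs from the snippets that follow.
```python
import time
import torch

def benchmark(model, inp, state, warmup=10, iters=100):
    # Warm up so that compilation/caching does not pollute the measurement.
    for _ in range(warmup):
        out, _ = model(inp, state)
        out.sum().backward()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        out, _ = model(inp, state)
        out.sum().backward()
    torch.cuda.synchronize()
    return (time.time() - start) / iters * 1000  # average milliseconds per iteration
```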
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
LSTM Cell (forward)

Almost all the computations in an LSTM happen in the LSTMCell, so it's important for us to take a look at the computations it contains and how we can improve their speed. Below is a sample LSTMCell implementation in TorchScript:
```python
import torch
import torch.jit as jit
from torch.nn import Parameter
from typing import Tuple

class LSTMCell(jit.ScriptModule):
    def __init__(self, input_size, hidden_size):
        super(LSTMCell, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
        self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
        self.bias_ih = Parameter(torch.randn(4 * hidden_size))
        self.bias_hh = Parameter(torch.randn(4 * hidden_size))

    @jit.script_method
    def forward(self, input, state):
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        hx, cx = state
        gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
                 torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
        ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)

        ingate = torch.sigmoid(ingate)
        forgetgate = torch.sigmoid(forgetgate)
        cellgate = torch.tanh(cellgate)
        outgate = torch.sigmoid(outgate)

        cy = (forgetgate * cx) + (ingate * cellgate)
        hy = outgate * torch.tanh(cy)

        return hy, (hy, cy)
```
This graph representation (IR) that TorchScript generates enables several optimizations and scalable computations. In addition to the typical compiler optimizations that we could do (CSE, constant propagation, etc.), we can also run other IR transformations to make our code run faster.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
Element-wise operator fusion. The PyTorch JIT will automatically fuse element-wise ops, so when you have adjacent operators that are all element-wise, the JIT will group all those operations together into a single FusionGroup. This FusionGroup can then be launched with a single GPU/CPU kernel and performed in one pass, which avoids expensive memory reads and writes for each operation.

Reordering chunks and pointwise ops to enable more fusion. An LSTM cell adds gates together (a pointwise operation), and then chunks the gates into four pieces: the ifco gates. Then, it performs pointwise operations on the ifco gates like above. This leads to two fusion groups in practice: one fusion group for the element-wise ops pre-chunk, and one group for the element-wise ops post-chunk.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
The interesting thing to note here is that pointwise operations commute with torch.chunk: instead of performing pointwise ops on some input tensors and chunking the output, we can chunk the input tensors and then perform the same pointwise ops on the resulting chunks. By moving the chunk to before the first fusion group, we can merge the first and second fusion groups into one big group; a small sketch of this commuting property is shown below.
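As a quick sanity check of the commuting property (a standalone sketch, not part of the fuser itself): applying a pointwise op and then chunking gives the same result as chunking first and applying the op to each chunk.
```python
import torch

gates = torch.randn(4, 12)

# pointwise op first, then chunk
a = torch.sigmoid(gates).chunk(4, dim=1)

# chunk first, then the same pointwise op on each chunk
b = [torch.sigmoid(g) for g in gates.chunk(4, dim=1)]

assert all(torch.allclose(x, y) for x, y in zip(a, b))
```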
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
Tensor creation on the CPU is expensive, but there is ongoing work to make it faster. At this point, an LSTMCell runs three CUDA kernels: two gemm kernels and one for the single pointwise group. One of the things we noticed was that there was a large gap between the finish of the second gemm and the start of the single pointwise group. This gap was a period of time when the GPU was idling and not doing anything. Looking into it more, we discovered that the problem was that torch.chunk constructs new tensors and that tensor construction was not as fast as it could be. Instead of constructing new Tensor objects, we taught the fusion compiler how to manipulate a data pointer and strides to do the torch.chunk before sending it into the fused kernel, shrinking the amount of idle time between the second gemm and the launch of the element-wise fusion group. This gives us around a 1.2x speedup on the LSTM forward pass.
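The observation that chunking is really just pointer-and-stride bookkeeping can be illustrated from Python as well (a small standalone check, separate from the fusion-compiler change described above): the chunks returned by torch.chunk are views over the input's storage rather than fresh copies of the data.
```python
import torch

x = torch.randn(2, 8)
c0, c1 = x.chunk(2, dim=1)

# The chunks share the same underlying storage as x; only the
# data pointer offset and the strides differ.
assert c0.data_ptr() == x.data_ptr()
assert c1.data_ptr() == x.data_ptr() + 4 * x.element_size()
assert not c1.is_contiguous()
```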
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
By doing the above tricks, we are able to fuse almost all of the LSTMCell forward graph (except the two gemm kernels) into a single fusion group, which corresponds to the prim::FusionGroup_0 in the above IR graph. It is then launched as a single fused kernel for execution. With these optimizations the model performance improves significantly, with the average forward time reduced by around 17ms (1.7x speedup) to 10ms, and the average backward time reduced by 37ms to 27ms (1.37x speedup).

LSTM Layer (forward)

```python
class LSTMLayer(jit.ScriptModule):
    def __init__(self, cell, cell_args):
        super(LSTMLayer, self).__init__()
        self.cell = cell(*cell_args)

    @jit.script_method
    def forward(self, input, state):
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        inputs = input.unbind(0)
        outputs = torch.jit.annotate(List[Tensor], [])
        for i in range(len(inputs)):
            out, state = self.cell(inputs[i], state)
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
            outputs += [out]
        return torch.stack(outputs), state
```
We applied several tricks to the IR we generate for the TorchScript LSTM to boost the performance. Some example optimizations:

- Loop Unrolling: We automatically unroll loops in the code (for big loops, we unroll a small subset of them), which then allows us to further optimize the for loop's control flow. For example, the fuser can fuse together operations across iterations of the loop body, which results in a good performance improvement for control-flow-intensive models like LSTMs.
- Batch Matrix Multiplication: For RNNs where the input is pre-multiplied (i.e. the model has a lot of matrix multiplies with the same LHS or RHS), we can efficiently batch those operations together into a single matrix multiply while chunking the outputs to achieve equivalent semantics. A minimal sketch of this idea is shown below.
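Here is the minimal sketch referenced in the Batch Matrix Multiplication item above. It is a hand-written illustration of the idea (the JIT performs this rewrite on the IR automatically): when every timestep multiplies by the same weight, the per-step matrix multiplies can be folded into one large one.
```python
import torch

T, N, C, H = 10, 4, 8, 16          # small sizes just for illustration
inputs = torch.randn(T, N, C)      # a full sequence of inputs
w_ih = torch.randn(4 * H, C)       # shared input-to-hidden weight

# Per-timestep version: one matmul per step.
per_step = torch.stack([inp @ w_ih.t() for inp in inputs.unbind(0)])

# Batched version: a single big matmul, reshaped back to per-step outputs.
batched = (inputs.reshape(T * N, C) @ w_ih.t()).reshape(T, N, 4 * H)

assert torch.allclose(per_step, batched, atol=1e-5)
```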
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
By applying these techniques, we reduced the forward pass time by an additional 1.6ms to 8.4ms (1.2x speedup) and the backward time by 7ms to around 20ms (1.35x speedup).

LSTM Layer (backward)

“Tree” Batch Matrix Multiplication: It is often the case that a single weight is reused multiple times in the LSTM backward graph, forming a tree where the leaves are matrix multiplies and the nodes are adds. These nodes can be combined together by concatenating the LHSs and RHSs in different dimensions, then computed as a single matrix multiplication. The equivalence can be denoted as follows:

$L_1 \cdot R_1 + L_2 \cdot R_2 = \mathrm{cat}((L_1, L_2), \mathrm{dim}{=}1) \cdot \mathrm{cat}((R_1, R_2), \mathrm{dim}{=}0)$
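A quick numerical check of this identity (a standalone sketch with arbitrary shapes, not taken from the JIT pass itself):
```python
import torch

L1, L2 = torch.randn(8, 16), torch.randn(8, 32)
R1, R2 = torch.randn(16, 4), torch.randn(32, 4)

lhs = L1 @ R1 + L2 @ R2
rhs = torch.cat((L1, L2), dim=1) @ torch.cat((R1, R2), dim=0)

assert torch.allclose(lhs, rhs, atol=1e-5)
```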
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
Autograd is a critical component of what makes PyTorch such an elegant ML framework. As such, we carried this through to the PyTorch JIT, but using a new Automatic Differentiation (AD) mechanism that works at the IR level. JIT automatic differentiation slices the forward graph into symbolically differentiable subgraphs and generates backward nodes for those subgraphs. Taking the above IR as an example, we group the graph nodes into a single prim::DifferentiableGraph_0 for the operations that have AD formulas. For operations that do not have AD formulas, we fall back to Autograd during execution.
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs
Optimizing the backward path is hard, and the implicit broadcasting semantics make the optimization of automatic differentiation harder. PyTorch makes it convenient to write tensor operations without worrying about the shapes by broadcasting the tensors for you. For performance, the painful point in the backward pass is that we need a summation for these kinds of broadcastable operations. This results in the derivative of every broadcastable op being followed by a summation. Since we cannot currently fuse reduce operations, this causes FusionGroups to break into multiple small groups, leading to bad performance. To deal with this, refer to this great post written by Thomas Viehmann.

Misc Optimizations
https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
pytorch blogs