muellerzr committed on
Commit
bceceb3
·
0 Parent(s):

Initial commit, remove licence

This view is limited to 50 files because it contains too many changes. See the raw diff for the full set.
Files changed (50)
  1. README.md +29 -0
  2. docs/Makefile +19 -0
  3. docs/README.md +267 -0
  4. docs/source/_toctree.yml +100 -0
  5. docs/source/basic_tutorials/install.md +102 -0
  6. docs/source/basic_tutorials/launch.md +232 -0
  7. docs/source/basic_tutorials/migration.md +129 -0
  8. docs/source/basic_tutorials/notebook.md +459 -0
  9. docs/source/basic_tutorials/overview.md +24 -0
  10. docs/source/basic_tutorials/troubleshooting.md +222 -0
  11. docs/source/concept_guides/big_model_inference.md +341 -0
  12. docs/source/concept_guides/deferring_execution.md +130 -0
  13. docs/source/concept_guides/gradient_synchronization.md +169 -0
  14. docs/source/concept_guides/internal_mechanism.md +72 -0
  15. docs/source/concept_guides/low_precision_training.md +74 -0
  16. docs/source/concept_guides/performance.md +103 -0
  17. docs/source/concept_guides/training_tpu.md +167 -0
  18. docs/source/imgs/accelerate_logo.png +0 -0
  19. docs/source/imgs/course_banner.png +0 -0
  20. docs/source/index.md +74 -0
  21. docs/source/package_reference/accelerator.md +211 -0
  22. docs/source/package_reference/big_modeling.md +47 -0
  23. docs/source/package_reference/cli.md +308 -0
  24. docs/source/package_reference/deepspeed.md +28 -0
  25. docs/source/package_reference/fsdp.md +18 -0
  26. docs/source/package_reference/kwargs.md +39 -0
  27. docs/source/package_reference/launchers.md +22 -0
  28. docs/source/package_reference/logging.md +21 -0
  29. docs/source/package_reference/megatron_lm.md +32 -0
  30. docs/source/package_reference/state.md +28 -0
  31. docs/source/package_reference/torch_wrappers.md +37 -0
  32. docs/source/package_reference/tracking.md +35 -0
  33. docs/source/package_reference/utilities.md +178 -0
  34. docs/source/quicktour.md +441 -0
  35. docs/source/usage_guides/big_modeling.md +150 -0
  36. docs/source/usage_guides/checkpoint.md +96 -0
  37. docs/source/usage_guides/deepspeed.md +722 -0
  38. docs/source/usage_guides/distributed_inference.md +136 -0
  39. docs/source/usage_guides/explore.md +51 -0
  40. docs/source/usage_guides/fsdp.md +170 -0
  41. docs/source/usage_guides/gradient_accumulation.md +232 -0
  42. docs/source/usage_guides/ipex.md +174 -0
  43. docs/source/usage_guides/local_sgd.md +108 -0
  44. docs/source/usage_guides/low_precision_training.md +92 -0
  45. docs/source/usage_guides/megatron_lm.md +583 -0
  46. docs/source/usage_guides/model_size_estimator.md +137 -0
  47. docs/source/usage_guides/mps.md +54 -0
  48. docs/source/usage_guides/quantization.md +136 -0
  49. docs/source/usage_guides/sagemaker.md +205 -0
  50. docs/source/usage_guides/tracking.md +233 -0
README.md ADDED
@@ -0,0 +1,29 @@
1
+ ## Preparing the dataset
2
+
3
+ ### NOTICE:
4
+
5
+ All code is owned by Hugging Face and is licensed under the Apache 2.0 License. While I clean and strip the dataset for processing, do note that this dataset remains subject to the same terms as the original Apache 2.0 License.
6
+
7
+ ## Clone Repo
8
+
9
+ The data source used is the [accelerate](https://github.com/huggingface/accelerate) repository. I'm using the latest version, v0.25.0.
10
+
11
+ ```bash
12
+ git clone https://github.com/huggingface/accelerate
13
+ cd accelerate
14
+ git checkout v0.25.0
15
+ cd ..
16
+ mkdir docs src
17
+ mv accelerate/src/accelerate/* src
18
+ mv accelerate/docs/* docs
19
+ cd src
20
+ rm __init__.py commands/__init__.py test_utils/__init__.py utils/__init__.py
21
+ ```
22
+
23
+ ### Cleaning the dataset
24
+
25
+ In VS Code, use the following regex find-and-replace (replacing matches with nothing) to strip the license headers:
26
+
27
+ ```regex
28
+ # Copyright(.*\n)+# limitations under the License.
29
+ ```
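+
+ Equivalently (a minimal sketch, not part of the original workflow), the same header-stripping can be scripted in Python over the cloned sources; the paths below assume the `src` layout created above:
+
+ ```python
+ import re
+ from pathlib import Path
+
+ # Same pattern as the VS Code replacement, anchored on the Apache header lines
+ HEADER = re.compile(r"# Copyright(.*\n)+?# limitations under the License\.\n")
+
+ for path in Path("src").rglob("*.py"):
+     text = path.read_text(encoding="utf-8")
+     cleaned = HEADER.sub("", text)
+     if cleaned != text:
+         path.write_text(cleaned, encoding="utf-8")
+ ```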
docs/Makefile ADDED
@@ -0,0 +1,19 @@
1
+ # Minimal makefile for Sphinx documentation
2
+ #
3
+
4
+ # You can set these variables from the command line.
5
+ SPHINXOPTS =
6
+ SPHINXBUILD = sphinx-build
7
+ SOURCEDIR = source
8
+ BUILDDIR = _build
9
+
10
+ # Put it first so that "make" without argument is like "make help".
11
+ help:
12
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
13
+
14
+ .PHONY: help Makefile
15
+
16
+ # Catch-all target: route all unknown targets to Sphinx using the new
17
+ # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
18
+ %: Makefile
19
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/README.md ADDED
@@ -0,0 +1,267 @@
1
+ <!---
2
+ Copyright 2023 The HuggingFace Team. All rights reserved.
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ -->
16
+
17
+ # Generating the documentation
18
+
19
+ To generate the documentation, you first have to build it. Several packages are necessary to build the docs;
20
+ you can install them with the following command from the root of the code repository:
21
+
22
+ ```bash
23
+ pip install -e ".[docs]"
24
+ ```
25
+
26
+ Then you need to install our special tool that builds the documentation:
27
+
28
+ ```bash
29
+ pip install git+https://github.com/huggingface/doc-builder
30
+ ```
31
+
32
+ ---
33
+ **NOTE**
34
+
35
+ You only need to generate the documentation to inspect it locally (if you're planning changes and want to
36
+ check how they look before committing for instance). You don't have to commit the built documentation.
37
+
38
+ ---
39
+
40
+ ## Building the documentation
41
+
42
+ Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
43
+ typing the following command:
44
+
45
+ ```bash
46
+ doc-builder build accelerate docs/source/ --build_dir ~/tmp/test-build
47
+ ```
48
+
49
+ You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
50
+ the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
51
+ Markdown editor.
52
+
53
+ ## Previewing the documentation
54
+
55
+ To preview the docs, first install the `watchdog` module with:
56
+
57
+ ```bash
58
+ pip install watchdog
59
+ ```
60
+
61
+ Then run the following command:
62
+
63
+ ```bash
64
+ doc-builder preview {package_name} {path_to_docs}
65
+ ```
66
+
67
+ For example:
68
+
69
+ ```bash
70
+ doc-builder preview accelerate docs/source/
71
+ ```
72
+
73
+ The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment with a link to where the documentation with your changes lives.
74
+
75
+ ---
76
+ **NOTE**
77
+
78
+ The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
79
+
80
+ ---
81
+
82
+ ## Adding a new element to the navigation bar
83
+
84
+ Accepted files are Markdown (.md).
85
+
86
+ Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
87
+ the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.
88
+
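+ For instance, a new page saved as `docs/source/my_new_page.md` (a hypothetical filename) would get an entry like the following under the appropriate section:
+
+ ```yaml
+ - local: my_new_page
+   title: My New Page
+ ```
+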
89
+ ## Renaming section headers and moving sections
90
+
91
+ It helps to keep the old links working when renaming section headers and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it would make for a much better user experience if users reading those months later could still easily navigate to the originally intended information.
92
+
93
+ Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
94
+
95
+ So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
96
+
97
+ ```
98
+ Sections that were moved:
99
+
100
+ [ <a href="#section-b">Section A</a><a id="section-a"></a> ]
101
+ ```
102
+ and of course, if you moved it to another file, then:
103
+
104
+ ```
105
+ Sections that were moved:
106
+
107
+ [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
108
+ ```
109
+
110
+ Use the relative style to link to the new file so that the versioned docs continue to work.
111
+
112
+
113
+ ## Writing Documentation - Specification
114
+
115
+ The `huggingface/accelerate` documentation follows the
116
+ [Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
117
+ although we can write them directly in Markdown.
118
+
119
+ ### Adding a new tutorial
120
+
121
+ Adding a new tutorial or section is done in two steps:
122
+
123
+ - Add a new file under `./source`. This file should be in Markdown (.md) format.
124
+ - Link that file in `./source/_toctree.yml` on the correct toc-tree.
125
+
126
+ Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
127
+ depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or
128
+ four.
129
+
130
+ ### Writing source documentation
131
+
132
+ Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
133
+ and objects like True, None, or any strings should usually be put in `code`.
134
+
135
+ When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
136
+ adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
137
+ function to be in the main package.
138
+
139
+ If you want to create a link to some internal class or function, you need to
140
+ provide its path. For instance: \[\`utils.gather\`\]. This will be converted into a link with
141
+ `utils.gather` in the description. To get rid of the path and only keep the name of the object you are
142
+ linking to in the description, add a ~: \[\`~utils.gather\`\] will generate a link with `gather` in the description.
143
+
144
+ The same works for methods, so you can either use \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
145
+
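+ For example, an illustrative docstring sentence (using names from this library) combining these forms might read:
+
+ ```
+ Prepares the dataloader with [`~Accelerator.prepare`] and collects results across processes with [`utils.gather`].
+ ```
+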
146
+ #### Defining arguments in a method
147
+
148
+ Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
149
+ an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
150
+ description:
151
+
152
+ ```
153
+ Args:
154
+ n_layers (`int`): The number of layers of the model.
155
+ ```
156
+
157
+ If the description is too long to fit in one line (more than 119 characters in total), another indentation is necessary
158
+ before writing the description after the argument.
159
+
160
+ Finally, to maintain uniformity, if any *one* description is too long to fit on one line, the
161
+ rest of the parameters should follow suit and have an indentation before their description.
162
+
163
+ Here's an example showcasing everything so far:
164
+
165
+ ```
166
+ Args:
167
+ gradient_accumulation_steps (`int`, *optional*, defaults to 1):
168
+ The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`.
169
+ cpu (`bool`, *optional*):
170
+ Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force the execution on one process only.
171
+ ```
172
+
173
+ For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
174
+ following signature:
175
+
176
+ ```
177
+ def my_function(x: str = None, a: float = 1):
178
+ ```
179
+
180
+ then its documentation should look like this:
181
+
182
+ ```
183
+ Args:
184
+ x (`str`, *optional*):
185
+ This argument controls ... and has a description longer than 119 chars.
186
+ a (`float`, *optional*, defaults to 1):
187
+ This argument is used to ... and has a description longer than 119 chars.
188
+ ```
189
+
190
+ Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
191
+ if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
192
+ however write as many lines as you want in the indented description (see the examples above).
193
+
194
+ #### Writing a multi-line code block
195
+
196
+ Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
197
+
198
+
199
+ ````
200
+ ```python
201
+ # first line of code
202
+ # second line
203
+ # etc
204
+ ```
205
+ ````
206
+
207
+ #### Writing a return block
208
+
209
+ The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
210
+ The first line should be the type of the return, followed by a line return. No need to indent further for the elements
211
+ building the return.
212
+
213
+ Here's an example of a single value return:
214
+
215
+ ```
216
+ Returns:
217
+ `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
218
+ ```
219
+
220
+ Here's an example of a tuple return, comprising several objects:
221
+
222
+ ```
223
+ Returns:
224
+ `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
225
+ - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
226
+ Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
227
+ - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
228
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
229
+ ```
230
+
231
+ ## Styling the docstring
232
+
233
+ We have an automatic script running with the `make style` command that will make sure that:
234
+ - the docstrings fully take advantage of the line width
235
+ - all code examples are formatted using black, like the code of the Transformers library
236
+
237
+ This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
238
+ recommended to commit your changes before running `make style`, so you can revert the changes done by that script
239
+ easily.
240
+
241
+ ## Writing documentation examples
242
+
243
+ The syntax for Example docstrings can look as follows:
244
+
245
+ ```
246
+ Example:
247
+
248
+ ```python
249
+ >>> import time
250
+ >>> from accelerate import Accelerator
251
+ >>> accelerator = Accelerator()
252
+ >>> if accelerator.is_main_process:
253
+ ... time.sleep(2)
254
+ ... else:
255
+ ... print("I'm waiting for the main process to finish its sleep...")
256
+ >>> accelerator.wait_for_everyone()
257
+ >>> # Should print on every process at the same time
258
+ >>> print("Everyone is here")
259
+ ```
260
+ ```
261
+
262
+ The docstring should give a minimal, clear example of how the respective function
263
+ is to be used in inference and also include the expected (ideally sensible)
264
+ output.
265
+ Often, readers will try out the example before even going through the function
266
+ or class definitions. Therefore, it is of utmost importance that the example
267
+ works as expected.
docs/source/_toctree.yml ADDED
@@ -0,0 +1,100 @@
1
+ - sections:
2
+ - local: index
3
+ title: 🤗 Accelerate
4
+ - local: basic_tutorials/install
5
+ title: Installation
6
+ - local: quicktour
7
+ title: Quicktour
8
+ title: Getting started
9
+ - sections:
10
+ - local: basic_tutorials/overview
11
+ title: Overview
12
+ - local: basic_tutorials/migration
13
+ title: Migrating to 🤗 Accelerate
14
+ - local: basic_tutorials/launch
15
+ title: Launching distributed code
16
+ - local: basic_tutorials/notebook
17
+ title: Launching distributed training from Jupyter Notebooks
18
+ - local: basic_tutorials/troubleshooting
19
+ title: Troubleshooting guide
20
+ title: Tutorials
21
+ - sections:
22
+ - local: usage_guides/explore
23
+ title: Start Here!
24
+ - local: usage_guides/training_zoo
25
+ title: Example Zoo
26
+ - local: usage_guides/big_modeling
27
+ title: How to perform inference on large models with small resources
28
+ - local: usage_guides/model_size_estimator
29
+ title: Knowing how big of a model you can fit into memory
30
+ - local: usage_guides/quantization
31
+ title: How to quantize a model
32
+ - local: usage_guides/distributed_inference
33
+ title: How to perform distributed inference with normal resources
34
+ - local: usage_guides/gradient_accumulation
35
+ title: Performing gradient accumulation
36
+ - local: usage_guides/local_sgd
37
+ title: Accelerating training with local SGD
38
+ - local: usage_guides/checkpoint
39
+ title: Saving and loading training states
40
+ - local: usage_guides/tracking
41
+ title: Using experiment trackers
42
+ - local: usage_guides/mps
43
+ title: How to use Apple Silicon M1 GPUs
44
+ - local: usage_guides/low_precision_training
45
+ title: How to train in low precision (FP8)
46
+ - local: usage_guides/deepspeed
47
+ title: How to use DeepSpeed
48
+ - local: usage_guides/fsdp
49
+ title: How to use Fully Sharded Data Parallelism
50
+ - local: usage_guides/megatron_lm
51
+ title: How to use Megatron-LM
52
+ - local: usage_guides/sagemaker
53
+ title: How to use 🤗 Accelerate with SageMaker
54
+ - local: usage_guides/ipex
55
+ title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for CPU
56
+ title: How-To Guides
57
+ - sections:
58
+ - local: concept_guides/internal_mechanism
59
+ title: 🤗 Accelerate's internal mechanism
60
+ - local: concept_guides/big_model_inference
61
+ title: Loading big models into memory
62
+ - local: concept_guides/performance
63
+ title: Comparing performance across distributed setups
64
+ - local: concept_guides/deferring_execution
65
+ title: Executing and deferring jobs
66
+ - local: concept_guides/gradient_synchronization
67
+ title: Gradient synchronization
68
+ - local: concept_guides/low_precision_training
69
+ title: How training in low-precision environments is possible (FP8)
70
+ - local: concept_guides/training_tpu
71
+ title: TPU best practices
72
+ title: Concepts and fundamentals
73
+ - sections:
74
+ - local: package_reference/accelerator
75
+ title: Main Accelerator class
76
+ - local: package_reference/state
77
+ title: Stateful configuration classes
78
+ - local: package_reference/cli
79
+ title: The Command Line
80
+ - local: package_reference/torch_wrappers
81
+ title: Torch wrapper classes
82
+ - local: package_reference/tracking
83
+ title: Experiment trackers
84
+ - local: package_reference/launchers
85
+ title: Distributed launchers
86
+ - local: package_reference/deepspeed
87
+ title: DeepSpeed utilities
88
+ - local: package_reference/logging
89
+ title: Logging
90
+ - local: package_reference/big_modeling
91
+ title: Working with large models
92
+ - local: package_reference/kwargs
93
+ title: Kwargs handlers
94
+ - local: package_reference/utilities
95
+ title: Utility functions and classes
96
+ - local: package_reference/megatron_lm
97
+ title: Megatron-LM Utilities
98
+ - local: package_reference/fsdp
99
+ title: Fully Sharded Data Parallelism Utilities
100
+ title: "Reference"
docs/source/basic_tutorials/install.md ADDED
@@ -0,0 +1,102 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Installation and Configuration
17
+
18
+ Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
19
+
20
+ ## Installing 🤗 Accelerate
21
+
22
+ 🤗 Accelerate is available on PyPI and conda, as well as on GitHub. Details for installing from each are below:
23
+
24
+ ### pip
25
+
26
+ To install 🤗 Accelerate from PyPI, run:
27
+
28
+ ```bash
29
+ pip install accelerate
30
+ ```
31
+
32
+ ### conda
33
+
34
+ 🤗 Accelerate can also be installed with conda:
35
+
36
+ ```bash
37
+ conda install -c conda-forge accelerate
38
+ ```
39
+
40
+ ### Source
41
+
42
+ New features that haven't been released yet are added every day. To try them out yourself, install
43
+ from the GitHub repository:
44
+
45
+ ```bash
46
+ pip install git+https://github.com/huggingface/accelerate
47
+ ```
48
+
49
+ If you're working on contributing to the library or wish to play with the source code and see live
50
+ results as you run the code, an editable version can be installed from a locally-cloned version of the
51
+ repository:
52
+
53
+ ```bash
54
+ git clone https://github.com/huggingface/accelerate
55
+ cd accelerate
56
+ pip install -e .
57
+ ```
58
+
59
+ ## Configuring 🤗 Accelerate
60
+
61
+ After installing, you need to configure 🤗 Accelerate for how the current system is set up for training.
62
+ To do so, run the following and answer the questions prompted to you:
63
+
64
+ ```bash
65
+ accelerate config
66
+ ```
67
+
68
+ To write a barebones configuration that doesn't include options such as DeepSpeed configuration or running on TPUs, you can quickly run:
69
+
70
+ ```bash
71
+ python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
72
+ ```
73
+ 🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
74
+
75
+ To check that your configuration looks fine, run:
76
+
77
+ ```bash
78
+ accelerate env
79
+ ```
80
+
81
+ An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
82
+
83
+ ```bash
84
+ - `Accelerate` version: 0.11.0.dev0
85
+ - Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
86
+ - Python version: 3.7.12
87
+ - Numpy version: 1.19.5
88
+ - PyTorch version (GPU?): 1.12.0+cu102 (True)
89
+ - `Accelerate` default config:
90
+ - compute_environment: LOCAL_MACHINE
91
+ - distributed_type: MULTI_GPU
92
+ - mixed_precision: no
93
+ - use_cpu: False
94
+ - num_processes: 2
95
+ - machine_rank: 0
96
+ - num_machines: 1
97
+ - main_process_ip: None
98
+ - main_process_port: None
99
+ - main_training_function: main
100
+ - deepspeed_config: {}
101
+ - fsdp_config: {}
102
+ ```
docs/source/basic_tutorials/launch.md ADDED
@@ -0,0 +1,232 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Launching your 🤗 Accelerate scripts
17
+
18
+ In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
19
+ The final version of that code is shown below:
20
+
21
+ ```python
22
+ from accelerate import Accelerator
23
+
24
+ accelerator = Accelerator()
25
+
26
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
27
+ model, optimizer, training_dataloader, scheduler
28
+ )
29
+
30
+ for batch in training_dataloader:
31
+ optimizer.zero_grad()
32
+ inputs, targets = batch
33
+ outputs = model(inputs)
34
+ loss = loss_function(outputs, targets)
35
+ accelerator.backward(loss)
36
+ optimizer.step()
37
+ scheduler.step()
38
+ ```
39
+
40
+ But how do you run this code and have it utilize the special hardware available to it?
41
+
42
+ First, you should rewrite the above code into a function, and make it callable as a script. For example:
43
+
44
+ ```diff
45
+ from accelerate import Accelerator
46
+
47
+ + def main():
48
+ accelerator = Accelerator()
49
+
50
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
51
+ model, optimizer, training_dataloader, scheduler
52
+ )
53
+
54
+ for batch in training_dataloader:
55
+ optimizer.zero_grad()
56
+ inputs, targets = batch
57
+ outputs = model(inputs)
58
+ loss = loss_function(outputs, targets)
59
+ accelerator.backward(loss)
60
+ optimizer.step()
61
+ scheduler.step()
62
+
63
+ + if __name__ == "__main__":
64
+ + main()
65
+ ```
66
+
67
+ Next, you need to launch it with `accelerate launch`.
68
+
69
+ <Tip warning={true}>
70
+
71
+ It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
72
+ Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
73
+
74
+ </Tip>
75
+
76
+
77
+ ## Using accelerate launch
78
+
79
+ 🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
80
+ This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
81
+
82
+ <Tip>
83
+
84
+ If you are familiar with launching scripts in PyTorch yourself such as with `torchrun`, you can still do this. It is not required to use `accelerate launch`.
85
+
86
+ </Tip>
87
+
88
+ You can launch your script quickly by using:
89
+
90
+ ```bash
91
+ accelerate launch {script_name.py} --arg1 --arg2 ...
92
+ ```
93
+
94
+ Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal!
95
+
96
+ Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
97
+ For example, here is how to use `accelerate launch` with a single GPU:
98
+
99
+ ```bash
100
+ CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
101
+ ```
102
+
103
+ You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
104
+ In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without mixed precision.
105
+ Here is how you would use all GPUs and train with mixed precision disabled:
106
+
107
+ ```bash
108
+ accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
109
+ ```
110
+
111
+ Or by specifying a number of GPUs to use:
112
+
113
+ ```bash
114
+ accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
115
+ ```
116
+
117
+ To get more specific, you should pass in the needed parameters yourself. For instance, here is how you
118
+ would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings:
119
+
120
+ ```bash
121
+ accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
122
+ ```
123
+
124
+ For a complete list of parameters you can pass in, run:
125
+
126
+ ```bash
127
+ accelerate launch -h
128
+ ```
129
+
130
+ <Tip>
131
+
132
+ Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
133
+
134
+ </Tip>
135
+
136
+ For a visualization of this difference, the earlier `accelerate launch` on multi-GPU would look something like this with `torchrun`:
137
+
138
+ ```bash
139
+ MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
140
+ ```
141
+
142
+ You can also launch your script utilizing the launch CLI as a Python module itself, which lets you pass in other Python-specific
143
+ launching behaviors. To do so, use `accelerate.commands.launch` instead of `accelerate launch`:
144
+
145
+ ```bash
146
+ python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
147
+ ```
148
+
149
+ If you want to execute the script with any other Python flags, you can pass them in as well, similar to `-m`, such as
150
+ in the example below enabling unbuffered stdout and stderr:
151
+
152
+ ```bash
153
+ python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
154
+ ```
155
+
156
+ <Tip>
157
+
158
+ You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets.
159
+
160
+ ```bash
161
+ accelerate launch --cpu {script_name.py} {--arg1} {--arg2}
162
+ ```
163
+
164
+ </Tip>
165
+
166
+ ## Why you should always use `accelerate config`
167
+
168
+ Why is it useful to the point you should **always** run `accelerate config`?
169
+
170
+ Remember that earlier call to `accelerate launch` as well as `torchrun`?
171
+ Post configuration, to run that script with the needed parts you just need to use `accelerate launch` outright, without passing anything else in:
172
+
173
+ ```bash
174
+ accelerate launch {script_name.py} {--arg1} {--arg2} ...
175
+ ```
176
+
177
+
178
+ ## Custom Configurations
179
+
180
+ As briefly mentioned earlier, `accelerate launch` should mostly be used in combination with configurations
181
+ made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
182
+ This cache folder is located at (with decreasing order of priority):
183
+
184
+ - The content of your environment variable `HF_HOME` suffixed with `accelerate`.
185
+ - If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
186
+ `huggingface/accelerate`.
187
+ - If this does not exist either, the folder `~/.cache/huggingface/accelerate`.
188
+
189
+ To have multiple configurations, the flag `--config_file` can be passed to the `accelerate launch` command paired
190
+ with the location of the custom yaml.
191
+
192
+ An example yaml may look something like the following for two GPUs on a single machine using `fp16` for mixed precision:
193
+ ```yaml
194
+ compute_environment: LOCAL_MACHINE
195
+ deepspeed_config: {}
196
+ distributed_type: MULTI_GPU
197
+ fsdp_config: {}
198
+ machine_rank: 0
199
+ main_process_ip: null
200
+ main_process_port: null
201
+ main_training_function: main
202
+ mixed_precision: fp16
203
+ num_machines: 1
204
+ num_processes: 2
205
+ use_cpu: false
206
+ ```
207
+
208
+ Launching a script from the location of that custom yaml file looks like the following:
209
+ ```bash
210
+ accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
211
+ ```
212
+
213
+ ## Multi-node training
214
+ Multi-node training with 🤗 Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
215
+
216
+ - Copy your codebase and data to all nodes (or place them on a shared filesystem).
217
+ - Set up your Python packages on all nodes.
218
+ - Run `accelerate config` on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file across all of your nodes, changing the `machine_rank` to 1, 2, 3, etc. to avoid having to run the command on each node (or follow the directions for launching with `torchrun` directly instead).
219
+
220
+ Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.
221
+
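+ As an alternative to copying a config file around, a minimal sketch of the per-node launch commands, passing everything as flags instead (assuming two nodes with 8 GPUs each and a main node reachable at 192.168.0.1 on port 29500, both hypothetical values), could look like:
+
+ ```bash
+ # On the main node (machine_rank 0); num_processes is the total across all nodes
+ accelerate launch --multi_gpu --num_machines=2 --num_processes=16 \
+     --machine_rank=0 --main_process_ip=192.168.0.1 --main_process_port=29500 \
+     {script_name.py} {--arg1} {--arg2}
+
+ # On the worker node, the same command with only the rank changed
+ accelerate launch --multi_gpu --num_machines=2 --num_processes=16 \
+     --machine_rank=1 --main_process_ip=192.168.0.1 --main_process_port=29500 \
+     {script_name.py} {--arg1} {--arg2}
+ ```
+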
222
+ <Tip>
223
+ It is required that the command be run on all nodes for everything to start, not just from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command.
224
+ </Tip>
225
+
226
+ <Tip>
227
+
228
+ It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.
229
+
230
+ </Tip>
231
+
232
+ To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).
docs/source/basic_tutorials/migration.md ADDED
@@ -0,0 +1,129 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Migrating your code to 🤗 Accelerate
17
+
18
+ This tutorial will detail how to easily convert existing PyTorch code to use 🤗 Accelerate!
19
+ You'll see that by just changing a few lines of code, 🤗 Accelerate can perform its magic and get you on
20
+ your way toward running your code on distributed systems with ease!
21
+
22
+ ## The base training loop
23
+
24
+ To begin, write out a very basic PyTorch training loop.
25
+
26
+ <Tip>
27
+
28
+ We assume that `training_dataloader`, `model`, `optimizer`, `scheduler`, and `loss_function` have been defined beforehand.
29
+
30
+ </Tip>
31
+
32
+ ```python
33
+ device = "cuda"
34
+ model.to(device)
35
+
36
+ for batch in training_dataloader:
37
+ optimizer.zero_grad()
38
+ inputs, targets = batch
39
+ inputs = inputs.to(device)
40
+ targets = targets.to(device)
41
+ outputs = model(inputs)
42
+ loss = loss_function(outputs, targets)
43
+ loss.backward()
44
+ optimizer.step()
45
+ scheduler.step()
46
+ ```
47
+
48
+ ## Add in 🤗 Accelerate
49
+
50
+ To start using 🤗 Accelerate, first import and create an [`Accelerator`] instance:
51
+ ```python
52
+ from accelerate import Accelerator
53
+
54
+ accelerator = Accelerator()
55
+ ```
56
+ [`Accelerator`] is the main force behind utilizing all the possible options for distributed training!
57
+
58
+ ### Setting the right device
59
+
60
+ The [`Accelerator`] class knows the right device to move any PyTorch object to at any time, so you should
61
+ change the definition of `device` to come from [`Accelerator`]:
62
+
63
+ ```diff
64
+ - device = 'cuda'
65
+ + device = accelerator.device
66
+ model.to(device)
67
+ ```
68
+
69
+ ### Preparing your objects
70
+
71
+ Next, you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. 🤗 Accelerate will
72
+ make sure everything is set up in the current environment for you to start training:
73
+
74
+ ```python
75
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
76
+ model, optimizer, training_dataloader, scheduler
77
+ )
78
+ ```
79
+ These objects are returned in the same order they were sent in. By default when using `device_placement=True`, all of the objects that can be sent to the right device will be.
80
+ If you need to work with data that isn't passed to [`~Accelerator.prepare`] but should be on the active device, you should pass in the `device` you made earlier.
81
+
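+ For example (a minimal sketch with a hypothetical tensor name), such data can be moved manually using that `device`:
+
+ ```python
+ # `class_weights` is a hypothetical tensor that `prepare` does not handle
+ class_weights = class_weights.to(device)
+ ```
+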
82
+ <Tip warning={true}>
83
+
84
+ Accelerate will only prepare objects that inherit from their respective PyTorch classes (such as `torch.optim.Optimizer`).
85
+
86
+ </Tip>
87
+
88
+ ### Modifying the training loop
89
+
90
+ Finally, three lines of code need to be changed in the training loop. 🤗 Accelerate's DataLoader classes will automatically handle the device placement by default,
91
+ and [`~Accelerator.backward`] should be used for performing the backward pass:
92
+
93
+ ```diff
94
+ - inputs = inputs.to(device)
95
+ - targets = targets.to(device)
96
+ outputs = model(inputs)
97
+ loss = loss_function(outputs, targets)
98
+ - loss.backward()
99
+ + accelerator.backward(loss)
100
+ ```
101
+
102
+ With that, your training loop is now ready to use 🤗 Accelerate!
103
+
104
+ ## The finished code
105
+
106
+ Below is the final version of the converted code:
107
+
108
+ ```python
109
+ from accelerate import Accelerator
110
+
111
+ accelerator = Accelerator()
112
+
113
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
114
+ model, optimizer, training_dataloader, scheduler
115
+ )
116
+
117
+ for batch in training_dataloader:
118
+ optimizer.zero_grad()
119
+ inputs, targets = batch
120
+ outputs = model(inputs)
121
+ loss = loss_function(outputs, targets)
122
+ accelerator.backward(loss)
123
+ optimizer.step()
124
+ scheduler.step()
125
+ ```
126
+
127
+ ## More Resources
128
+
129
+ To see more ways to migrate to 🤗 Accelerate, check out our [interactive migration tutorial](https://huggingface.co/docs/accelerate/usage_guides/explore), which showcases other items that need to be watched for when using Accelerate and how to do so quickly.
docs/source/basic_tutorials/notebook.md ADDED
@@ -0,0 +1,459 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Launching Multi-GPU Training from a Jupyter Environment
17
+
18
+ This tutorial teaches you how to fine-tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
19
+ You will also learn how to set up a few requirements for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
20
+
21
+ <Tip>
22
+
23
+ This tutorial is also available as a Jupyter Notebook [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
24
+
25
+ </Tip>
26
+
27
+ ## Configuring the Environment
28
+
29
+ Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
30
+
31
+ ```bash
32
+ accelerate config
33
+ ```
34
+
35
+ However, if general defaults are fine and you are *not* running on a TPU, 🤗 Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
36
+
37
+ The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
38
+
39
+ <Tip warning={true}>
40
+
41
+ CUDA can't be initialized more than once on a multi-GPU system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train, a full cleanup and restart will need to be performed.
42
+
43
+ </Tip>
44
+
45
+ ```python
46
+ import os
47
+ from accelerate.utils import write_basic_config
48
+
49
+ write_basic_config() # Write a config file
50
+ os._exit(00) # Restart the notebook
51
+ ```
52
+
53
+ ## Preparing the Dataset and Model
54
+
55
+ Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
56
+
57
+ If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
58
+
59
+ Make sure the dataset is downloaded based on the directions [here](https://github.com/huggingface/accelerate/tree/main/examples#simple-vision-example)
60
+
61
+ ```python
62
+ import os, re, torch, PIL
63
+ import numpy as np
64
+
65
+ from torch.optim.lr_scheduler import OneCycleLR
66
+ from torch.utils.data import DataLoader, Dataset
67
+ from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
68
+
69
+ from accelerate import Accelerator
70
+ from accelerate.utils import set_seed
71
+ from timm import create_model
72
+ ```
73
+
74
+ First you need to create a function to extract the class name based on a filename:
75
+
76
+ ```python
77
+ import os
78
+
79
+ data_dir = "../../images"
80
+ fnames = os.listdir(data_dir)
81
+ fname = fnames[0]
82
+ print(fname)
83
+ ```
84
+
85
+ ```python out
86
+ beagle_32.jpg
87
+ ```
88
+
89
+ In the case here, the label is `beagle`. Using regex you can extract the label from the filename:
90
+
91
+ ```python
92
+ import re
93
+
94
+
95
+ def extract_label(fname):
96
+ stem = fname.split(os.path.sep)[-1]
97
+ return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0]
98
+ ```
99
+
100
+ ```python
101
+ extract_label(fname)
102
+ ```
103
+
104
+ And you can see it properly returned the right name for our file:
105
+
106
+ ```python out
107
+ "beagle"
108
+ ```
109
+
110
+ Next a `Dataset` class should be made to handle grabbing the image and the label:
111
+
112
+ ```python
113
+ class PetsDataset(Dataset):
114
+ def __init__(self, file_names, image_transform=None, label_to_id=None):
115
+ self.file_names = file_names
116
+ self.image_transform = image_transform
117
+ self.label_to_id = label_to_id
118
+
119
+ def __len__(self):
120
+ return len(self.file_names)
121
+
122
+ def __getitem__(self, idx):
123
+ fname = self.file_names[idx]
124
+ raw_image = PIL.Image.open(fname)
125
+ image = raw_image.convert("RGB")
126
+ if self.image_transform is not None:
127
+ image = self.image_transform(image)
128
+ label = extract_label(fname)
129
+ if self.label_to_id is not None:
130
+ label = self.label_to_id[label]
131
+ return {"image": image, "label": label}
132
+ ```
133
+
134
+ Now to build the dataset. Outside the training function you can find and declare all the filenames and labels and use them as references inside the
135
+ launched function:
136
+
137
+ ```python
138
+ fnames = [os.path.join("../../images", fname) for fname in fnames if fname.endswith(".jpg")]
139
+ ```
140
+
141
+ Next gather all the labels:
142
+
143
+ ```python
144
+ all_labels = [extract_label(fname) for fname in fnames]
145
+ id_to_label = list(set(all_labels))
146
+ id_to_label.sort()
147
+ label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
148
+ ```
149
+
150
+ Next, you should make a `get_dataloaders` function that will return your built dataloaders for you. As mentioned earlier, if data is automatically
151
+ sent to the GPU or a TPU device when building your `DataLoaders`, they must be built using this method.
152
+
153
+ ```python
154
+ def get_dataloaders(batch_size: int = 64):
155
+ "Builds a set of dataloaders with a batch_size"
156
+ random_perm = np.random.permutation(len(fnames))
157
+ cut = int(0.8 * len(fnames))
158
+ train_split = random_perm[:cut]
159
+ eval_split = random_perm[cut:]
160
+
161
+ # For training a simple RandomResizedCrop will be used
162
+ train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()])
163
+ train_dataset = PetsDataset([fnames[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id)
164
+
165
+ # For evaluation a deterministic Resize will be used
166
+ eval_tfm = Compose([Resize((224, 224)), ToTensor()])
167
+ eval_dataset = PetsDataset([fnames[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id)
168
+
169
+ # Instantiate dataloaders
170
+ train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4)
171
+ eval_dataloader = DataLoader(eval_dataset, shuffle=False, batch_size=batch_size * 2, num_workers=4)
172
+ return train_dataloader, eval_dataloader
173
+ ```
174
+
175
+ Finally, you should import the scheduler to be used later:
176
+
177
+ ```python
178
+ from torch.optim.lr_scheduler import CosineAnnealingLR
179
+ ```
180
+
181
+ ## Writing the Training Function
182
+
183
+ Now you can build the training loop. [`notebook_launcher`] works by passing in a function to call that will be run across the distributed system.
184
+
185
+ Here is a basic training loop for the animal classification problem:
186
+
187
+ <Tip>
188
+
189
+ The code has been split up to allow for explanations on each section. A full version that can be copied and pasted will be available at the end.
190
+
191
+ </Tip>
192
+
193
+
194
+ ```python
195
+ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
196
+ set_seed(seed)
197
+ accelerator = Accelerator(mixed_precision=mixed_precision)
198
+ ```
199
+
200
+ First you should set the seed and create an [`Accelerator`] object as early in the training loop as possible.
201
+
202
+ <Tip warning={true}>
203
+
204
+ If training on the TPU, your training loop should take in the model as a parameter and it should be instantiated
205
+ outside of the training loop function. See the [TPU best practices](../concept_guides/training_tpu)
206
+ to learn why.
207
+
208
+ </Tip>
209
+
210
+ Next you should build your dataloaders and create your model:
211
+
212
+ ```python
213
+ train_dataloader, eval_dataloader = get_dataloaders(batch_size)
214
+ model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
215
+ ```
216
+
217
+ <Tip>
218
+
219
+ You build the model here so that the seed also controls the new weight initialization
220
+
221
+ </Tip>
222
+
223
+ As you are performing transfer learning in this example, the encoder of the model starts out frozen so that initially only the head of the model is
224
+ trained:
225
+
226
+ ```python
227
+ for param in model.parameters():
228
+ param.requires_grad = False
229
+ for param in model.get_classifier().parameters():
230
+ param.requires_grad = True
231
+ ```
232
+
233
+ Normalizing the batches of images will make training a little faster:
234
+
235
+ ```python
236
+ mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
237
+ std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
238
+ ```
239
+
240
+ To make these constants available on the active device, move them to the Accelerator's device:
241
+
242
+ ```python
243
+ mean = mean.to(accelerator.device)
244
+ std = std.to(accelerator.device)
245
+ ```
246
+
247
+ Next instantiate the rest of the PyTorch classes used for training:
248
+
249
+ ```python
250
+ optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
251
+ lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))
252
+ ```
253
+
254
+ Then, pass everything to [`~Accelerator.prepare`].
255
+
256
+ <Tip>
257
+
258
+ There is no specific order to remember; you just need to unpack the objects in the same order you gave them to the prepare method.
259
+
260
+ </Tip>
261
+
262
+ ```python
263
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
264
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
265
+ )
266
+ ```
267
+
268
+ Now train the model:
269
+
270
+ ```python
271
+ for epoch in range(5):
272
+ model.train()
273
+ for batch in train_dataloader:
274
+ inputs = (batch["image"] - mean) / std
275
+ outputs = model(inputs)
276
+ loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
277
+ accelerator.backward(loss)
278
+ optimizer.step()
279
+ lr_scheduler.step()
280
+ optimizer.zero_grad()
281
+ ```
282
+
283
+ The evaluation loop will look slightly different compared to the training loop. The number of elements seen, as well as the number
284
+ of accurate predictions in each batch, will be accumulated into two counters:
285
+
286
+ ```python
287
+ model.eval()
288
+ accurate = 0
289
+ num_elems = 0
290
+ ```
291
+
292
+ Next you have the rest of your standard PyTorch loop:
293
+
294
+ ```python
295
+ for batch in eval_dataloader:
296
+ inputs = (batch["image"] - mean) / std
297
+ with torch.no_grad():
298
+ outputs = model(inputs)
299
+ predictions = outputs.argmax(dim=-1)
300
+ ```
301
+
302
+ Then comes the last major difference.
303
+
304
+ When performing distributed evaluation, the predictions and labels need to be passed through
305
+ [`~Accelerator.gather`] so that all of the data is available on the current device and a properly calculated metric can be achieved:
306
+
307
+ ```python
308
+ accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
309
+ num_elems += accurate_preds.shape[0]
310
+ accurate += accurate_preds.long().sum()
311
+ ```
312
+
313
+ Now you just need to calculate the actual metric for this problem, and you can print it on the main process using [`~Accelerator.print`]:
314
+
315
+ ```python
316
+ eval_metric = accurate.item() / num_elems
317
+ accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
318
+ ```
319
+
320
+ A full version of this training loop is available below:
321
+
322
+ ```python
323
+ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
324
+ set_seed(seed)
325
+ # Initialize accelerator
326
+ accelerator = Accelerator(mixed_precision=mixed_precision)
327
+ # Build dataloaders
328
+ train_dataloader, eval_dataloader = get_dataloaders(batch_size)
329
+
330
+ # Instantiate the model (you build the model here so that the seed also controls new weight initializations)
331
+ model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
332
+
333
+ # Freeze the base model
334
+ for param in model.parameters():
335
+ param.requires_grad = False
336
+ for param in model.get_classifier().parameters():
337
+ param.requires_grad = True
338
+
339
+ # You can normalize the batches of images to be a bit faster
340
+ mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
341
+ std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
342
+
343
+ # To make these constants available on the active device, set it to the accelerator device
344
+ mean = mean.to(accelerator.device)
345
+ std = std.to(accelerator.device)
346
+
347
+ # Instantiate the optimizer
348
+ optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
349
+
350
+ # Instantiate the learning rate scheduler
351
+ lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))
352
+
353
+ # Prepare everything
354
+ # There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the
355
+ # prepare method.
356
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
357
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
358
+ )
359
+
360
+ # Now you train the model
361
+ for epoch in range(5):
362
+ model.train()
363
+ for batch in train_dataloader:
364
+ inputs = (batch["image"] - mean) / std
365
+ outputs = model(inputs)
366
+ loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
367
+ accelerator.backward(loss)
368
+ optimizer.step()
369
+ lr_scheduler.step()
370
+ optimizer.zero_grad()
371
+
372
+ model.eval()
373
+ accurate = 0
374
+ num_elems = 0
375
+ for batch in eval_dataloader:
376
+ inputs = (batch["image"] - mean) / std
377
+ with torch.no_grad():
378
+ outputs = model(inputs)
379
+ predictions = outputs.argmax(dim=-1)
380
+ accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
381
+ num_elems += accurate_preds.shape[0]
382
+ accurate += accurate_preds.long().sum()
383
+
384
+ eval_metric = accurate.item() / num_elems
385
+ # Use accelerator.print to print only on the main process.
386
+ accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
387
+ ```
388
+
389
+ ## Using the notebook_launcher
390
+
391
+ All that's left is to use the [`notebook_launcher`].
392
+
393
+ You pass in the function, the arguments (as a tuple), and the number of processes to train on. (See the [documentation](../package_reference/launchers) for more information)
394
+
395
+ ```python
396
+ from accelerate import notebook_launcher
397
+ ```
398
+
399
+ ```python
400
+ args = ("fp16", 42, 64)
401
+ notebook_launcher(training_loop, args, num_processes=2)
402
+ ```
403
+
404
+ In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time.
405
+
406
+ For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of "172.31.43.8", it would look like so:
407
+
408
+ ```python
409
+ notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=0, num_nodes=2, num_processes=8)
410
+ ```
411
+
412
+ And in the second Jupyter session on the other machine:
413
+
414
+ <Tip>
415
+
416
+ Notice how the `node_rank` has changed
417
+
418
+ </Tip>
419
+
420
+ ```python
421
+ notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=1, num_nodes=2, num_processes=8)
422
+ ```
423
+
424
+ In the case of running on the TPU, it would look like so:
425
+
426
+ ```python
427
+ model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
428
+
429
+ args = (model, "fp16", 42, 64)
430
+ notebook_launcher(training_loop, args, num_processes=8)
431
+ ```
432
+
433
+ As it's running, it will print the progress as well as state how many devices you ran on. This tutorial was run with two GPUs:
434
+
435
+ ```python out
436
+ Launching training on 2 GPUs.
437
+ epoch 0: 88.12
438
+ epoch 1: 91.73
439
+ epoch 2: 92.58
440
+ epoch 3: 93.90
441
+ epoch 4: 94.71
442
+ ```
443
+
444
+ And that's it!
445
+
446
+ ## Debugging
447
+
448
+ A common issue when running the `notebook_launcher` is receiving a "CUDA has already been initialized" error. This usually stems
449
+ from an import or prior code in the notebook that makes a call to the PyTorch `torch.cuda` sublibrary. To help narrow down what went wrong,
450
+ you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` in your environment and an additional check
451
+ will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be run afterwards).
452
+
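+ For example, a minimal sketch of enabling this check from inside the notebook (reusing the `training_loop` and `args` defined above; the rest of the notebook stays the same):
+
+ ```python
+ import os
+
+ from accelerate import notebook_launcher
+
+ # Opt in to the extra CUDA pre-check that is performed before spawning processes
+ os.environ["ACCELERATE_DEBUG_MODE"] = "yes"
+
+ notebook_launcher(training_loop, args, num_processes=2)
+ ```
+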
453
+ ## Conclusion
454
+
455
+ This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
456
+
457
+ - Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
458
+ - Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
459
+ - If using the TPU, declare your model outside the training loop function
docs/source/basic_tutorials/overview.md ADDED
@@ -0,0 +1,24 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Overview
17
+
18
+ Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
19
+ You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
20
+ and more!
21
+
22
+ These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
23
+
24
+ If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).
docs/source/basic_tutorials/troubleshooting.md ADDED
@@ -0,0 +1,222 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Troubleshooting guide
17
+
18
+ This guide aims to provide you with the tools and knowledge required to navigate some common issues. However,
19
+ as 🤗 Accelerate continuously evolves and the use cases and setups are diverse, you might encounter an issue not covered in this
20
+ guide. If the suggestions listed in this guide do not cover your situation, please refer to the final section of
21
+ the guide, [Asking for Help](#ask-for-help), to learn where to find help with your specific issue.
22
+
23
+ ## Logging
24
+
25
+ When facing an error, logging can help narrow down where it is coming from. In a distributed setup with multiple processes,
26
+ logging can be a challenge, but 🤗 Accelerate provides a utility that streamlines the logging process and ensures that
27
+ logs are synchronized and managed effectively across the distributed setup.
28
+
29
+ To troubleshoot an issue, use `accelerate.logging` instead of the standard Python `logging` module:
30
+
31
+ ```diff
32
+ - import logging
33
+ + from accelerate.logging import get_logger
34
+ - logger = logging.getLogger(__name__)
35
+ + logger = get_logger(__name__)
36
+ ```
37
+
38
+ To set the log level (`INFO`, `DEBUG`, `WARNING`, `ERROR`, `CRITICAL`), export it as the `ACCELERATE_LOG_LEVEL` environment variable,
39
+ or pass it as `log_level` to `get_logger`:
40
+
41
+ ```python
42
+ from accelerate.logging import get_logger
43
+
44
+ logger = get_logger(__name__, log_level="INFO")
45
+ ```
46
+
47
+ By default, logging happens on the main process only. To log on all processes, pass `main_process_only=False`.
48
+ If a log should be called on all processes and in order, also pass `in_order=True`.
49
+
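+ For example, the calls might look like this sketch (the messages themselves are illustrative):
+
+ ```python
+ from accelerate.logging import get_logger
+
+ logger = get_logger(__name__)
+
+ # Logged on the main process only (the default behavior)
+ logger.info("Starting training")
+
+ # Logged on every process
+ logger.warning("Something process-specific happened", main_process_only=False)
+
+ # Logged on every process, one rank at a time, in order
+ logger.debug("Rank-ordered debug message", main_process_only=False, in_order=True)
+ ```
+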
50
+ ## Hanging code and timeout errors
51
+
52
+ ### Mismatched tensor shapes
53
+
54
+ If your code seems to be hanging for a significant amount of time on a distributed setup, a common cause is mismatched shapes of tensors on different
55
+ devices.
56
+
57
+ When running scripts in a distributed fashion, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] are
58
+ necessary to grab tensors across devices to perform operations on them collectively. These (and other) functions rely on
59
+ `torch.distributed` performing a `gather` operation, which requires that tensors have the **exact same shape** across all processes.
60
+ When the tensor shapes don't match, you will experience hanging code, and eventually hit a timeout exception.
61
+
62
+ If you suspect this to be the case, use Accelerate's operational debug mode to immediately catch the issue.
63
+
64
+ The recommended way to enable Accelerate's operational debug mode is during `accelerate config` setup.
65
+ Alternative ways to enable debug mode are:
66
+
67
+ * From the CLI:
68
+
69
+ ```bash
70
+ accelerate launch --debug {my_script.py} --arg1 --arg2
71
+ ```
72
+
73
+ * As an environment variable (which avoids the need for `accelerate launch`):
74
+
75
+ ```bash
76
+ ACCELERATE_DEBUG_MODE="1" torchrun {my_script.py} --arg1 --arg2
77
+ ```
78
+
79
+ * Manually changing the `config.yaml` file:
80
+
81
+ ```diff
82
+ compute_environment: LOCAL_MACHINE
83
+ +debug: true
84
+ ```
85
+
86
+ Once you enable the debug mode, you should get a similar traceback that points to the tensor shape mismatch issue:
87
+
88
+ ```py
89
+ Traceback (most recent call last):
90
+ File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module>
91
+ main()
92
+ File "/home/zach_mueller_huggingface_co/test.py", line 15, in main
93
+ broadcast_tensor = broadcast(tensor)
94
+ File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper
95
+ accelerate.utils.operations.DistributedOperationException:
96
+
97
+ Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
98
+
99
+ Operation: `accelerate.utils.operations.broadcast`
100
+ Input shapes:
101
+ - Process 0: [1, 5]
102
+ - Process 1: [1, 2, 5]
103
+ ```
104
+
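+ For reference, a minimal sketch of the kind of code that triggers an error like the one above (the tensor shapes are taken from the traceback; everything else is illustrative):
+
+ ```python
+ import torch
+ from accelerate import Accelerator
+ from accelerate.utils import broadcast
+
+ accelerator = Accelerator()
+
+ # Each process builds a tensor with a different shape before a collective operation
+ if accelerator.process_index == 0:
+     tensor = torch.zeros(1, 5, device=accelerator.device)
+ else:
+     tensor = torch.zeros(1, 2, 5, device=accelerator.device)
+
+ # Hangs, or raises a DistributedOperationException when debug mode is enabled
+ broadcast_tensor = broadcast(tensor)
+ ```
+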
105
+ ### Early stopping leads to hanging
106
+
107
+ When doing early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss),
108
+ it may not be synchronized across all of them. As a result, a break can happen on process 0 but not on process 1.
109
+ This will cause the code to hang indefinitely until a timeout occurs.
110
+
111
+ If you have early stopping conditionals, use the `set_trigger` and `check_trigger` methods to make sure all the processes
112
+ are ended correctly:
113
+
114
+ ```py
115
+ # Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
116
+ # and that conditional might be true only on process 1
117
+ if should_do_breakpoint(loss):
118
+ accelerator.set_trigger()
119
+
120
+ # Later in the training script when we need to check for the early stopping trigger
121
+ if accelerator.check_trigger():
122
+ break
123
+ ```
124
+
125
+ ### Hanging on low kernel versions on Linux
126
+
127
+ This is a known issue. On Linux with kernel version < 5.5, hanging processes have been reported. To avoid
128
+ encountering this problem, we recommend upgrading your system to a later kernel version.
129
+
130
+ ## CUDA out of memory
131
+
132
+ One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory",
133
+ as the entire script needs to be restarted, progress is lost, and typically a developer would want to simply
134
+ start their script and let it run.
135
+
136
+ To address this problem, `Accelerate` offers a utility `find_executable_batch_size` that is heavily based on [toma](https://github.com/BlackHC/toma).
137
+ The utility retries code that fails due to OOM (out-of-memory) conditions and lowers batch sizes automatically.
138
+
139
+ ### find_executable_batch_size
140
+
141
+ This algorithm operates with exponential decay, halving the batch size after each failed run of the
142
+ training script. To use it, restructure your training function to include an inner function that includes this wrapper,
143
+ and build your dataloaders inside it. At a minimum, this could look like 4 new lines of code.
144
+
145
+ <Tip warning={true}>
146
+
147
+ The inner function *must* take in the batch size as the first parameter, but we do not pass one to it when called. The wrapper handles this for us.
148
+
149
+ </Tip>
150
+
151
+ It should also be noted that anything which will consume CUDA memory and be passed to the `accelerator` **must** be declared inside the inner function,
152
+ such as models and optimizers.
153
+
154
+ ```diff
155
+ def training_function(args):
156
+ accelerator = Accelerator()
157
+
158
+ + @find_executable_batch_size(starting_batch_size=args.batch_size)
159
+ + def inner_training_loop(batch_size):
160
+ + nonlocal accelerator # Ensure they can be used in our context
161
+ + accelerator.free_memory() # Free all lingering references
162
+ model = get_model()
163
+ model.to(accelerator.device)
164
+ optimizer = get_optimizer()
165
+ train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
166
+ lr_scheduler = get_scheduler(
167
+ optimizer,
168
+ num_training_steps=len(train_dataloader)*num_epochs
169
+ )
170
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
171
+ model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
172
+ )
173
+ train(model, optimizer, train_dataloader, lr_scheduler)
174
+ validate(model, eval_dataloader)
175
+ + inner_training_loop()
176
+ ```
177
+
178
+ To find out more, check the documentation [here](../package_reference/utilities#accelerate.find_executable_batch_size).
179
+
180
+ ## Non-reproducible results between device setups
181
+
182
+ If you have changed the device setup and are observing different model performance, this is likely due to the fact that
183
+ you have not updated your script when moving from one setup to another. The same script with the same batch size across TPU,
184
+ multi-GPU, and single-GPU with Accelerate will have different results.
185
+
186
+ For example, if you were previously training on a single GPU with a batch size of 16, when moving to a two-GPU setup,
187
+ you need to change the batch size to 8 to have the same effective batch size. This is because when training with Accelerate,
188
+ the batch size passed to the dataloader is the **batch size per GPU**.
189
+
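+ In other words, the adjustment is just a division by the number of processes. A sketch, with hypothetical variable names:
+
+ ```python
+ single_device_batch_size = 16  # the batch size you used on one GPU
+ num_processes = 2              # e.g. two GPUs launched with Accelerate
+
+ # Pass this per-device value to your dataloader to keep the same effective batch size
+ per_device_batch_size = single_device_batch_size // num_processes
+ ```
+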
190
+ To make sure you can reproduce the results between the setups, make sure to use the same seed, adjust the batch size
191
+ accordingly, and consider scaling the learning rate.
192
+
193
+ For more details and a quick reference for batch sizes, check out the [Comparing performance between different device setups](../concept_guides/performance) guide.
194
+
195
+ ## Performance issues on different GPUs
196
+
197
+ If your multi-GPU setup consists of different GPUs, you may hit some limitations:
198
+
199
+ - There may be an imbalance in GPU memory between the GPUs. In this case, the GPU with smaller memory will limit the batch size or the size of the model that can be loaded onto the GPUs.
200
+ - If you are using GPUs with different performance profiles, the performance will be driven by the slowest GPU that you are using as the other GPUs will have to wait for it to complete its workload.
201
+
202
+ Vastly different GPUs within the same setup can lead to performance bottlenecks.
203
+
204
+ ## Ask for help
205
+
206
+ If the above troubleshooting tools and advice did not help you resolve your issue, reach out to the community
207
+ and the team.
208
+
209
+ ### Forums
210
+
211
+ Ask for help on the Hugging Face forums - post your question in the [🤗Accelerate category](https://discuss.huggingface.co/c/accelerate/18).
212
+ Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
213
+
214
+ ### Discord
215
+
216
+ Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
217
+
218
+ ### GitHub Issues
219
+
220
+ Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you suspect
221
+ you have found a bug related to the library. Include context regarding the bug and details about your distributed setup
222
+ to help us better figure out what's wrong and how we can fix it.
docs/source/concept_guides/big_model_inference.md ADDED
@@ -0,0 +1,341 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Handling big models for inference
17
+
18
+ When loading a pre-trained model in PyTorch, the usual workflow looks like this:
19
+
20
+ ```py
21
+ import torch
22
+
23
+ my_model = ModelClass(...)
24
+ state_dict = torch.load(checkpoint_file)
25
+ my_model.load_state_dict(state_dict)
26
+ ```
27
+
28
+ In plain English, those steps are:
29
+ 1. Create the model with randomly initialized weights
30
+ 2. Load the model weights (in a dictionary usually called a state dict) from the disk
31
+ 3. Load those weights inside the model
32
+
33
+ While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
34
+
35
+ <Tip warning={true}>
36
+
37
+ This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
38
+
39
+ </Tip>
40
+
41
+ ## How the Process Works: A Quick Overview
42
+
43
+ <Youtube id="MWCSGj9jEAo" />
44
+
45
+ ## How the Process Works: Working with Code
46
+
47
+ ### Instantiating an empty model
48
+
49
+ The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
50
+
51
+ ```py
52
+ from accelerate import init_empty_weights
53
+
54
+ with init_empty_weights():
55
+ my_model = ModelClass(...)
56
+ ```
57
+
58
+ For instance:
59
+
60
+ ```py
61
+ with init_empty_weights():
62
+ model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
63
+ ```
64
+
65
+ initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
66
+
67
+ <Tip warning={true}>
68
+
69
+ You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
70
+
71
+ </Tip>
72
+
73
+ ### Sharded checkpoints
74
+
75
+ It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, that is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
76
+
77
+ 🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
78
+
79
+ ```bash
80
+ first_state_dict.bin
81
+ index.json
82
+ second_state_dict.bin
83
+ ```
84
+
85
+ with index.json being the following file:
86
+
87
+ ```
88
+ {
89
+ "linear1.weight": "first_state_dict.bin",
90
+ "linear1.bias": "first_state_dict.bin",
91
+ "linear2.weight": "second_state_dict.bin",
92
+ "linear2.bias": "second_state_dict.bin"
93
+ }
94
+ ```
95
+
96
+ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"`.
97
+
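+ To produce such a sharded checkpoint from a model you already have in memory, a call along these lines should work (a sketch; the model, folder name, and `max_shard_size` value are only examples):
+
+ ```python
+ import torch.nn as nn
+ from accelerate import Accelerator
+
+ accelerator = Accelerator()
+ model = nn.Sequential(nn.Linear(1000, 1000), nn.Linear(1000, 1000))
+
+ # If the weights exceed `max_shard_size`, they are split into several files and an
+ # index mapping each parameter to its shard is saved alongside them
+ accelerator.save_model(model, save_directory="sharded_checkpoint", max_shard_size="1MB")
+ ```
+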
98
+ ### Loading weights
99
+
100
+ The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
101
+
102
+ If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
103
+
104
+ Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
105
+
106
+ Let's download the sharded version of this model.
107
+
108
+ ```bash
109
+ pip install huggingface_hub
110
+ ```
111
+
112
+ ```py
113
+ from huggingface_hub import snapshot_download
114
+ checkpoint = "marcsun13/gpt2-xl-linear-sharded"
115
+ weights_location = snapshot_download(repo_id=checkpoint)
116
+ ```
117
+
118
+ In order to initialize the model, we will use the library minGPT.
119
+
120
+ ```bash
121
+ git clone https://github.com/karpathy/minGPT.git
122
+ pip install minGPT/
123
+ ```
124
+
125
+ ```py
126
+ from accelerate import init_empty_weights
127
+ from mingpt.model import GPT
128
+
129
+ model_config = GPT.get_default_config()
130
+ model_config.model_type = 'gpt2-xl'
131
+ model_config.vocab_size = 50257
132
+ model_config.block_size = 1024
133
+
134
+ with init_empty_weights():
135
+ model = GPT(model_config)
136
+ ```
137
+
138
+ Then, load the checkpoint we just downloaded with:
139
+
140
+ ```py
141
+ from accelerate import load_checkpoint_and_dispatch
142
+
143
+ model = load_checkpoint_and_dispatch(
144
+ model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
145
+ )
146
+ ```
147
+
148
+ By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
149
+ - first, we use the maximum space available on the GPU(s)
150
+ - if we still need space, we store the remaining weights on the CPU
151
+ - if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
152
+
153
+
154
+ #### `no_split_module_classes`
155
+
156
+ This parameter will indicate that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that
157
+ include a residual connection of some kind.
158
+
159
+
160
+ #### The `device_map`
161
+
162
+ You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
163
+
164
+ ```py
165
+ model.hf_device_map
166
+ ```
167
+
168
+ ```python out
169
+ {'transformer.wte': 0,
170
+ 'transformer.wpe': 0,
171
+ 'transformer.drop': 0,
172
+ 'transformer.h.0': 0,
173
+ ...
174
+ 'transformer.h.21': 0,
175
+ 'transformer.h.22': 1,
176
+ 'transformer.h.23': 1,
177
+ 'transformer.h.24': 1,
178
+ ...
179
+ 'transformer.h.47': 1,
180
+ 'transformer.ln_f': 1,
181
+ 'lm_head': 1}
182
+ ```
183
+
184
+ It's fully possible to create your own device map as well, specifying for each layer the GPU device to use (a number), `"cpu"`, or `"disk"`, and to pass it in:
185
+
186
+ ```python
187
+ device_map = {
188
+ "transformer.wte": "cpu",
189
+ "transformer.wpe": 0,
190
+ "transformer.drop": "cpu",
191
+ "transformer.h.0": "disk"
192
+ }
193
+
194
+ model = load_checkpoint_and_dispatch(
195
+ model, checkpoint=weights_location, device_map=device_map
196
+ )
197
+
198
+ ```
199
+
200
+ ### Run the model
201
+
202
+ Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
203
+
204
+ ```py
205
+ from mingpt.bpe import BPETokenizer
206
+ tokenizer = BPETokenizer()
207
+ inputs = tokenizer("Hello, my name is").to(0)
208
+
209
+ outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
210
+ tokenizer.decode(outputs.cpu().squeeze())
211
+ ```
212
+
213
+ Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
214
+ - at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
215
+ - for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
216
+ - for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
217
+
218
+ This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
219
+
220
+ <Tip warning={true}>
221
+
222
+ This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
223
+
224
+ </Tip>
225
+
226
+ ### Designing a device map
227
+
228
+ You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
229
+
230
+ <Tip>
231
+
232
+ You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
233
+
234
+ </Tip>
235
+
236
+ All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
237
+
238
+ When you have more GPU memory available than the model size, here is the difference between each option:
239
+ - `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
240
+ - `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models.
241
+ - `"sequential"` will fit what it can on GPU 0, then move on to GPU 1 and so forth (so it won't use the last GPUs if it doesn't need to).
242
+
243
+ <Tip>
244
+
245
+ The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
246
+
247
+ </Tip>
248
+
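+ For instance, to use the `"balanced_low_0"` strategy from the list above with the minGPT example, the call might look like this sketch (reusing `model`, `weights_location`, and the `Block` class from earlier):
+
+ ```python
+ model = load_checkpoint_and_dispatch(
+     model, checkpoint=weights_location, device_map="balanced_low_0", no_split_module_classes=["Block"]
+ )
+ ```
+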
249
+ First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
250
+
251
+ Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
252
+
253
+ ```python
254
+ from accelerate import infer_auto_device_map
255
+
256
+ device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
257
+ ```
258
+
259
+ <Tip warning={true}>
260
+
261
+ When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
262
+
263
+ Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
264
+
265
+ </Tip>
266
+
267
+ Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80GB A100 setup, the close-to-ideal map is:
268
+
269
+ ```python
270
+ max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
271
+ ```
272
+ As you can see, we gave the remaining 7 GPUs ~50% more memory than GPU 0.
273
+
274
+ If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model; you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
275
+
276
+ ```python
277
+ device_map = {"block1": 0, "block2": 1}
278
+ ```
279
+
280
+ another one that is valid could be:
281
+
282
+ ```python
283
+ device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
284
+ ```
285
+
286
+ On the other hand, this one is not valid as it does not cover every parameter of the model:
287
+
288
+ ```python
289
+ device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
290
+ ```
291
+
292
+ <Tip>
293
+
294
+ To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
295
+
296
+ </Tip>
297
+
298
+ ## CPU offload only
299
+
300
+ If you want to offload your model on CPU, you can use [`cpu_offload`]. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device and passed as they are needed, then offloaded again.
301
+
302
+ ```python
303
+ cpu_offload(model, execution_device)
304
+ ```
305
+
306
+ You can also use [`cpu_offload_with_hook`]. This function offloads a model to the CPU and puts it back on an execution device when executed. The difference with [`cpu_offload`] is that the model stays on the execution device after the forward and is only offloaded again when the `offload` method of the returned `hook` is called. Furthermore, [`cpu_offload_with_hook`] is more performant but less memory saving. It is useful for pipelines running a model in a loop:
307
+
308
+ ```python
309
+ model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device)
310
+ model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1)
311
+ model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2)
312
+
313
+ hid_1 = model_1(input)
314
+ for i in range(50):
315
+ # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop.
316
+ hid_2 = model_2(hid_1)
317
+ # model2 is offloaded to the CPU just before this forward.
318
+ hid_3 = model_3(hid_2)
319
+
320
+ # For model3, you need to manually call the hook offload method.
321
+ hook_3.offload()
322
+ ```
323
+
324
+ ## Disk offload only
325
+
326
+ To perform disk offload, you can use [`disk_offload`]. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device as they are needed, then offloaded again.
327
+
328
+ ```python
329
+ disk_offload(model, offload_dir, execution_device)
330
+ ```
331
+
332
+ ## Limits and further development
333
+
334
+ We are aware of the current limitations in the API:
335
+
336
+ - [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
337
+ - [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
338
+ - [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
339
+ - The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
340
+ - When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
341
+ - Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).
docs/source/concept_guides/deferring_execution.md ADDED
@@ -0,0 +1,130 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Deferring Executions
17
+
18
+ When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
19
+ GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
20
+ faster than others.
21
+
22
+ You might need to wait for all processes to have reached a certain point before executing a given instruction. For
23
+ instance, you shouldn't save a model before being sure every process is done with training, and you wouldn't want to
24
+ continue training before all the model weights have been loaded in. To do this, just write the following line in your code:
25
+
26
+ ```python
27
+ accelerator.wait_for_everyone()
28
+ ```
29
+
30
+ This instruction will block all the processes that arrive first until all the other processes have reached that
31
+ point (if you run your script on just one GPU or CPU, this won't do anything).
32
+
33
+ A few example cases of when to use this utility are listed below:
34
+
35
+ <Tip>
36
+
37
+ Some of these are utilized with the [`~Accelerator.main_process_first`] context manager, which utilizes [`~Accelerator.wait_for_everyone`] to
38
+ run a particular set of code on the main process first, before triggering and launching the other processes.
39
+
40
+ </Tip>
41
+
42
+ ## Downloading a Dataset
43
+
44
+ When downloading a dataset, you should download it first on the main process and then load the cached dataset afterward.
45
+
46
+ <Tip>
47
+
48
+ `load_dataset` will perform a lock under the hood to stop multiple downloads from happening at once, but if you are downloading something
49
+ not using this library, you should use this method.
50
+
51
+ </Tip>
52
+
53
+ ```python
54
+ with accelerator.main_process_first():
55
+ datasets = load_dataset("glue", "mrpc")
56
+ ```
57
+
58
+ Under the hood this is the same as calling:
59
+
60
+ ```python
61
+ # First do something on the main process
62
+ if accelerator.is_main_process:
63
+ datasets = load_dataset("glue", "mrpc")
64
+ else:
65
+ accelerator.wait_for_everyone()
66
+
67
+ # And then send it to the rest of them
68
+ if not accelerator.is_main_process:
69
+ datasets = load_dataset("glue", "mrpc")
70
+ else:
71
+ accelerator.wait_for_everyone()
72
+ ```
73
+
74
+ ## Saving the `state_dict`
75
+
76
+ When saving the `state_dict` of the model, since you would normally save only one file on just the main process,
77
+ you should specify that:
78
+
79
+ ```python
80
+ if accelerator.is_main_process:
81
+ model = accelerator.unwrap_model(model)
82
+ torch.save(model.state_dict(), "weights.pth")
83
+ ```
84
+
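+ If some processes might still be training when you reach this point, you can combine this with [`~Accelerator.wait_for_everyone`], for example:
+
+ ```python
+ # Make sure every process has finished training before the main process saves
+ accelerator.wait_for_everyone()
+ if accelerator.is_main_process:
+     model = accelerator.unwrap_model(model)
+     torch.save(model.state_dict(), "weights.pth")
+ ```
+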
85
+ ## Loading in the `state_dict`
86
+
87
+ When loading in the `state_dict` to a model, optimizer, or scheduler, you should wait
88
+ for all workers to have the weights loaded in before moving on to training.
89
+
90
+ ```python
91
+ with accelerator.main_process_first():
92
+ state = torch.load("weights.pth")
93
+ model.load_state_dict(state)
94
+ ```
95
+
96
+ ## Applying a multi-worker CPU operation
97
+
98
+ Applying a `map()` operation on multiple workers, such as tokenizing, should be done on the
99
+ main process first, and then propagated to each one.
100
+
101
+ ```python
102
+ datasets = load_dataset("glue", "mrpc")
103
+
104
+ with accelerator.main_process_first():
105
+ tokenized_datasets = datasets.map(
106
+ tokenize_function,
107
+ batched=True,
108
+ remove_columns=["idx", "sentence1", "sentence2"],
109
+ )
110
+ ```
111
+
112
+ ## Applying checks such as Early Stopping
113
+
114
+ To have a check that works with a flag set by a particular process, the `set_trigger` and `check_trigger` API should be used. Useful examples
115
+ for doing so can include situations such as using early stopping and monitoring the loss (as each loss slightly differs on each process).
116
+
117
+ Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
118
+
119
+ ```python
120
+ for (x,y) in data_loader:
121
+ logits = model(x)
122
+ loss = loss_func(logits, y)
123
+ # Assume `should_do_early_stopping` is a custom defined function that returns a conditional
124
+ if should_do_early_stopping(loss):
125
+ accelerator.set_trigger()
126
+
127
+ # Later in the training script when we need to check for the trigger
128
+ if accelerator.check_trigger():
129
+ break
130
+ ```
docs/source/concept_guides/gradient_synchronization.md ADDED
@@ -0,0 +1,169 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Gradient Synchronization
17
+
18
+ PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
19
+ This communication takes time, and ensuring all processes know the states of each other happens at particular trigger points
20
+ when using the `ddp` module.
21
+
22
+ These trigger points are added to the PyTorch model, specifically its `forward()` and `backward()` methods.
23
+ This happens when the model is wrapped with `DistributedDataParallel`:
24
+ ```python
25
+ import torch.nn as nn
26
+ from torch.nn.parallel import DistributedDataParallel
27
+
28
+ model = nn.Linear(10, 10)
29
+ ddp_model = DistributedDataParallel(model)
30
+ ```
31
+ In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
32
+
33
+ ```diff
34
+ + from accelerate import Accelerator
35
+ + accelerator = Accelerator()
36
+ import torch.nn as nn
37
+ - from torch.nn.parallel import DistributedDataParallel
38
+
39
+ model = nn.Linear(10,10)
40
+ + model = accelerator.prepare(model)
41
+ ```
42
+
43
+ ## The slowdown in gradient accumulation
44
+
45
+ You now understand that PyTorch adds hooks to the `forward` and `backward` methods of your PyTorch model when
46
+ training in a distributed setup. But how does this risk slowing down your code?
47
+
48
+ In DDP (distributed data parallel), processes are expected to perform specific operations in a specific order
49
+ at specific points, and these must also occur at roughly the same time before moving on.
50
+
51
+ The most direct example is when you update model parameters through
52
+ `optimizer.step()`.
53
+ Without gradient accumulation, all instances of the model need to have updated
54
+ their gradients computed, collated, and updated before moving on to the next
55
+ batch of data.
56
+ When performing gradient accumulation, you accumulate `n` loss gradients and
57
+ skip `optimizer.step()` until `n` batches have been reached. As all training
58
+ processes only need to synchronize by the time `optimizer.step()` is called,
59
+ synchronizing gradients on every `backward()` call without any modification to your training step creates needless inter-process
60
+ communication that can cause a significant slowdown.
61
+
62
+ How can you avoid this overhead?
63
+
64
+ ## Solving the slowdown problem
65
+
66
+ Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
67
+ PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
68
+ that is added to your model after converting it to DDP.
69
+
70
+ Under this context manager, PyTorch will skip synchronizing the gradients when
71
+ `.backward()` is called, and the first call to `.backward()` outside this
72
+ context manager will trigger the synchronization. See an example below:
73
+ ```python
74
+ ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
75
+
76
+ for index, batch in enumerate(dataloader):
77
+ inputs, targets = batch
78
+ # Trigger gradient synchronization on the last batch
79
+ if index != (len(dataloader) - 1):
80
+ with ddp_model.no_sync():
81
+ # Gradients only accumulate
82
+ outputs = ddp_model(inputs)
83
+ loss = loss_func(outputs)
84
+ accelerator.backward(loss)
85
+ else:
86
+ # Gradients finally sync
87
+ outputs = ddp_model(inputs)
88
+ loss = loss_func(outputs)
89
+ accelerator.backward(loss)
90
+ optimizer.step()
91
+ ```
92
+
93
+ In 🤗 Accelerate, to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
94
+ `ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
95
+
96
+ ```diff
97
+ ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
98
+
99
+ for index, batch in enumerate(dataloader):
100
+ inputs, targets = batch
101
+ # Trigger gradient synchronization on the last batch
102
+ if index != (len(dataloader)-1):
103
+ - with ddp_model.no_sync():
104
+ + with accelerator.no_sync(model):
105
+ # Gradients only accumulate
106
+ outputs = ddp_model(inputs)
107
+ loss = loss_func(outputs, targets)
108
+ accelerator.backward(loss)
109
+ else:
110
+ # Gradients finally sync
111
+ outputs = ddp_model(inputs)
112
+ loss = loss_func(outputs)
113
+ accelerator.backward(loss)
114
+ optimizer.step()
115
+ optimizer.zero_grad()
116
+ ```
117
+
118
+ As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
119
+ gradient accumulation API:
120
+
121
+ ```python
122
+ ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
123
+
124
+ for batch in dataloader:
125
+ with accelerator.accumulate(model):
126
+ optimizer.zero_grad()
127
+ inputs, targets = batch
128
+ outputs = model(inputs)
129
+ loss = loss_function(outputs, targets)
130
+ accelerator.backward(loss)
131
+ optimizer.step()
132
+ optimizer.zero_grad()
133
+ ```
134
+
135
+ As a result, when it comes to API choice, you should use either *`accelerator.accumulate` or `accelerator.no_sync`*.
136
+
137
+ ## Just how much of a slowdown is there, and easy mistakes you can make
138
+
139
+ To set up a realistic example, consider the following setup:
140
+
141
+ * Two single-GPU T4 nodes and one node with two GPUs
142
+ * Each GPU is a T4, and all are hosted on GCP
143
+ * The script used is a modification of the [NLP Example](https://github.com/muellerzr/timing_experiments/blob/main/baseline.py) script
144
+ * Batch size per GPU is 16, and gradients are accumulated every 4 steps
145
+
146
+ All scripts are available in [this repository](https://github.com/muellerzr/timing_experiments).
147
+
148
+ If not careful about gradient synchronization and GPU communication, a *large* amount of time can be wasted
149
+ from when these GPUs communicate to each other during unnecessary periods.
150
+
151
+ By how much?
152
+
153
+ Reference:
154
+ - Baseline: uses no synchronization practices discussed here
155
+ - `no_sync` improperly: `no_sync` only around the `backward` call, not the `forward`
156
+ - `no_sync`: using the `no_sync` pattern properly
157
+ - `accumulate`: using [`~Accelerator.accumulate`] properly
158
+
159
+ Below are the average seconds per batch iterating over 29 batches of data for each setup on both a single node and on the dual-node setup:
160
+
161
+ | | Baseline | `no_sync` improperly | `no_sync` | `accumulate`|
162
+ | :---------: | :-------: | :------------------: | :-------: | :---------: |
163
+ | Multi-Node | 2±0.01s | 2.13±0.08s | **0.91±0.11s** | **0.91±0.11s** |
164
+ | Single Node | 0.50±0.01s | 0.50±0.01s | **0.41±0.015s** | **0.41±0.015s** |
165
+
166
+ As you can see, if you are not careful about how you set up your gradient synchronization, you can get more than a 2x slowdown during training!
167
+
168
+ If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing in
169
+ `gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.
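+
+ A minimal sketch of that recommendation (accumulating over 4 batches, matching the setup above):
+
+ ```python
+ from accelerate import Accelerator
+
+ # Accelerate keeps track of the step count; wrap your loop body in accelerator.accumulate(model)
+ accelerator = Accelerator(gradient_accumulation_steps=4)
+ ```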
docs/source/concept_guides/internal_mechanism.md ADDED
@@ -0,0 +1,72 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # 🤗 Accelerate's internal mechanisms
17
+
18
+ Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
19
+ kind of distributed setup is used, how many different processes there are and which one the current script is in. All
20
+ that information is stored in the [`~AcceleratorState`].
21
+
22
+ This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
23
+ specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
24
+ [`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version it inherits from.)
25
+
26
+ Then, when calling [`~Accelerator.prepare`], the library:
27
+
28
+ - wraps your model(s) in the container adapted for the distributed setup,
29
+ - wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
30
+ - wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`]
31
+ - creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`]
32
+
33
+ While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
34
+ because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
35
+ library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
36
+ `num_processes` batches (if enabled).
37
+
38
+ The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
39
+
40
+ - it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
41
+ randomization (like shuffling) is done the exact same way across processes.
42
+ - it puts the batches on the proper device before yielding them (unless you have opted out of
43
+ `device_placement=True`).
44
+
45
+ The [`~data_loader.DataLoaderDispatcher`] subclass differs from [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts from process 0 and is *then* split and sent off to each process, rather than this happening at the dataset level.
46
+
47
+ The random number generator synchronization will by default synchronize:
48
+
49
+ - the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
50
+ - the main random number generator in PyTorch <=1.5.1
51
+
52
+ You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
53
+ [`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
54
+ setting the same seed in the main random number generator in all processes.
55
+
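+ For example, a sketch of restricting synchronization to the samplers' local generators (assuming `"generator"` is the only RNG type you need):
+
+ ```python
+ from accelerate import Accelerator
+
+ # Only synchronize the local `torch.Generator` objects used by the samplers across processes
+ accelerator = Accelerator(rng_types=["generator"])
+ ```
+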
56
+ <Tip warning={true}>
57
+
58
+ Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
59
+ artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
60
+ the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
61
+ controlled by torch).
62
+
63
+ </Tip>
64
+
65
+ <Tip>
66
+
67
+ The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
68
+ `torch.Generator` object (in PyTorch >= 1.6); see the traditional `RandomSampler` as an example.
69
+
70
+ </Tip>
71
+
72
+ For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
docs/source/concept_guides/low_precision_training.md ADDED
@@ -0,0 +1,74 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Low Precision Training Methods
17
+
18
+ The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
19
+ in 8-bit precision using packages such as [TransformerEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
20
+
21
+ For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training.md) as this documentation will reference it regularly.
22
+
23
+ ## A Quick Chart
24
+
25
+ Below is a quick chart from the MS-AMP documentation showing the different bit-precisions for each solution during training:
26
+
27
+ Optimization Level | Computation(GEMM) | Comm | Weight | Master Weight | Weight Gradient | Optimizer States
28
+ -- | -- | -- | -- | -- | -- | --
29
+ FP16 AMP | FP16 | FP32 | FP32 | N/A | FP32 | FP32+FP32
30
+ Nvidia TE | FP8 | FP32 | FP32 | N/A | FP32 | FP32+FP32
31
+ MS-AMP O1 | FP8 | FP8 | FP16 | N/A | FP8 | FP32+FP32
32
+ MS-AMP O2 | FP8 | FP8 | FP16 | N/A | FP8 | FP8+FP16
33
+ MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
34
+
35
+ ## `TransformerEngine`
36
+
37
+ `TransformerEngine` is the first solution for training in 8-bit floating point. It works by using drop-in replacements for certain layers in a model that utilize its FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
38
+
39
+ Specifically, 🤗 Accelerate will find and replace the following layers with `TransformerEngine` versions:
40
+
41
+ * `nn.LayerNorm` for `te.LayerNorm`
42
+ * `nn.Linear` for `te.Linear`
43
+
44
+ As a result, we wind up with a model that has most of its layers in BF16, while some layers are in FP8, reducing some of the memory usage.
45
+
46
+ Anecdotally, we have noticed that performance gains don't really start showing when using `TransformerEngine` until a large majority of the layers
47
+ in the model are made up of those two replaceable layers. As a result, only larger models, with parameter counts of around a few billion and upwards, have shown performance improvements.
48
+
49
+ `TransformerEngine` can receive many different arguments that customize how it performs FP8 calculations. A full list of these arguments is available below:
50
+
51
+ * `margin`: The margin to use for the gradient scaling.
52
+ * `interval`: The interval to use for how often the scaling factor is recomputed.
53
+ * `fp8_format`: The format to use for the FP8 recipe. Must be one of `E4M3` or `HYBRID`.
54
+ * `amax_history_len`: The length of the history to use for the scaling factor computation.
55
+ * `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
56
+ * `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
57
+
58
+ You can customize each of these as part of [`utils.FP8RecipeKwargs`] to help optimize performance of your models.
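+
+ As a hedged, minimal sketch (the values are purely illustrative, not recommendations), these can be passed to the [`Accelerator`] through `kwargs_handlers`:
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import FP8RecipeKwargs
+
+ # Customize the TransformerEngine FP8 recipe with a few of the arguments above
+ kwargs = [FP8RecipeKwargs(fp8_format="HYBRID", amax_history_len=32, amax_compute_algo="max")]
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
+ ```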
59
+
60
+ As the chart mentioned earlier shows, TE simply casts the computation layers into FP8, while everything else is kept in FP32. As a result, this winds up utilizing the most memory, but does so with the benefit of guaranteeing the least amount of loss in end accuracy during training.
61
+
62
+ ## `MS-AMP`
63
+
64
+ MS-AMP takes a different approach from `TransformerEngine` by providing three different optimization levels that convert more operations to FP8 or FP16.
65
+
66
+ * The base optimization level (`O1`) passes communications of the weights (such as in DDP) in FP8, stores the weights of the model in FP16, and leaves the optimizer states in FP32. The main benefit of this optimization level is that we can reduce the communication bandwidth by essentially half. Additionally, more GPU memory is saved since half of everything is cast to FP8 and the weights are cast to FP16. Notably, both of the optimizer states remain in FP32.
67
+
68
+ * The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states: one is in FP8 while the other is in FP16. Generally, it's been shown that this provides a net gain of no degradation in end accuracy, increased training speed, and reduced memory, as now every state is either in FP16 or FP8.
69
+
70
+ * Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This reduces memory by the highest factor, as now almost everything is in FP8 and only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the 🤗 Accelerate integration. An example of selecting one of these optimization levels is sketched right after this list.
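+
+ Below is a minimal sketch of opting into one of these levels through 🤗 Accelerate. It assumes MS-AMP is installed and that your version of [`utils.FP8RecipeKwargs`] exposes `backend` and `opt_level` fields (as described in the low-precision usage guide); treat it as a template rather than a definitive recipe.
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import FP8RecipeKwargs
+
+ # Assumed fields: `backend` selects MS-AMP, `opt_level` picks its optimization level
+ kwargs = [FP8RecipeKwargs(backend="msamp", opt_level="O2")]
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
+ ```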
71
+
72
+ ## Combining the two
73
+
74
+ More experiments need to be performed, but it's been noted that combining MS-AMP and `TransformerEngine` can lead to the highest throughput, by relying on NVIDIA's optimized FP8 operators while utilizing how MS-AMP reduces the memory overhead.
docs/source/concept_guides/performance.md ADDED
@@ -0,0 +1,103 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Comparing performance between different device setups
17
+
18
+ Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
19
+ For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
20
+ and expect your results to line up.
21
+
22
+ But why?
23
+
24
+ There are three reasons for this that this tutorial will cover:
25
+
26
+ 1. **Setting the right seeds**
27
+ 2. **Observed Batch Sizes**
28
+ 3. **Learning Rates**
29
+
30
+ ## Setting the Seed
31
+
32
+ While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
33
+
34
+ ```python
35
+ from accelerate.utils import set_seed
36
+
37
+ set_seed(42)
38
+ ```
39
+
40
+ Why is this important? Under the hood this will set **5** different seed settings:
41
+
42
+ ```python
43
+ random.seed(seed)
44
+ np.random.seed(seed)
45
+ torch.manual_seed(seed)
46
+ torch.cuda.manual_seed_all(seed)
47
+ # ^^ safe to call this function even if cuda is not available
48
+ if is_tpu_available():
49
+ xm.set_rng_state(seed)
50
+ ```
51
+
52
+ These set Python's `random` state, NumPy's state, torch's state, torch's CUDA state, and, if TPUs are available, torch_xla's random state.
53
+
54
+ ## Observed Batch Sizes
55
+
56
+ When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. What this means is that
57
+ a batch size of 64 on two GPUs is truly a batch size of 128. As a result, when testing on a single GPU this needs to be accounted for,
58
+ and similarly for TPUs.
59
+
60
+ The below table can be used as a quick reference to try out different batch sizes:
61
+
62
+ <Tip>
63
+
64
+ In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
65
+
66
+ </Tip>
67
+
68
+ | Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size |
69
+ |-----------------------|---------------------------------|---------------------------|
70
+ | 256 | 128 | 32 |
71
+ | 128 | 64 | 16 |
72
+ | 64 | 32 | 8 |
73
+ | 32 | 16 | 4 |
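+
+ If you prefer to derive the per-device batch size at runtime rather than consult the table, one hedged option is to fix an *observed* batch size and divide it by the number of processes. A minimal sketch, assuming `train_dataset` is already defined:
+
+ ```python
+ from torch.utils.data import DataLoader
+
+ from accelerate import Accelerator
+
+ accelerator = Accelerator()
+
+ # Keep the total (observed) batch size constant across setups
+ observed_batch_size = 256
+ per_device_batch_size = observed_batch_size // accelerator.num_processes
+ train_dataloader = DataLoader(train_dataset, batch_size=per_device_batch_size, shuffle=True)
+ ```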
74
+
75
+ ## Learning Rates
76
+
77
+ As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below
78
+ snippet shows doing so with Accelerate:
79
+
80
+ <Tip>
81
+
82
+ Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their
83
+ learning rate or not.
84
+
85
+ </Tip>
86
+
87
+ ```python
88
+ learning_rate = 1e-3
89
+ accelerator = Accelerator()
90
+ learning_rate *= accelerator.num_processes
91
+
92
+ optimizer = AdamW(params=model.parameters(), lr=learning_rate)
93
+ ```
94
+
95
+ You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because
96
+ of the observed batch size noted earlier. So in the case of 2 GPUs, the learning rate will be stepped twice as often as a single GPU
97
+ to account for the batch size being twice as large (if no changes to the batch size on the single GPU instance are made).
98
+
99
+ ## Gradient Accumulation and Mixed Precision
100
+
101
+ When using gradient accumulation and mixed precision, due to how gradient averaging works (accumulation) and the precision loss (mixed precision),
102
+ some degradation in performance is expected. This will be explicitly seen when comparing the batch-wise loss between different compute
103
+ setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.
docs/source/concept_guides/training_tpu.md ADDED
@@ -0,0 +1,167 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Training on TPUs with 🤗 Accelerate
17
+
18
+ Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
19
+ where you should be careful and why, as well as the best practices in general.
20
+
21
+ ## Training in a Notebook
22
+
23
+ The main point to be careful about when training on TPUs comes from the [`notebook_launcher`]. As mentioned in the [notebook tutorial](../basic_tutorials/notebook), you need to
24
+ restructure your training code into a function that can be passed to the [`notebook_launcher`] function and be careful about not declaring any tensors on the GPU.
25
+
26
+ While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called **forking**.
27
+ When launching from the command line, you perform **spawning**, where a Python process is not currently running and you *spawn* a new one. Since your Jupyter notebook is already
28
+ utilizing a python process, you need to *fork* a new process from it to launch your code.
29
+
30
+ Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
31
+ training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
32
+ model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
33
+ on Google Colaboratory.
34
+
35
+ Below is an example of a training function passed to the [`notebook_launcher`] if training on CPUs or GPUs:
36
+
37
+ <Tip>
38
+
39
+ This code snippet is based on the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) with slight
40
+ modifications for the sake of simplicity.
41
+
42
+ </Tip>
43
+
44
+ ```python
45
+ def training_function():
46
+ # Initialize accelerator
47
+ accelerator = Accelerator()
48
+ model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
49
+ train_dataloader, eval_dataloader = create_dataloaders(
50
+ train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
51
+ )
52
+
53
+ # Instantiate optimizer
54
+ optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"])
55
+
56
+ # Prepare everything
57
+ # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
58
+ # prepare method.
59
+ model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
60
+ model, optimizer, train_dataloader, eval_dataloader
61
+ )
62
+
63
+ num_epochs = hyperparameters["num_epochs"]
64
+ # Now we train the model
65
+ for epoch in range(num_epochs):
66
+ model.train()
67
+ for step, batch in enumerate(train_dataloader):
68
+ outputs = model(**batch)
69
+ loss = outputs.loss
70
+ accelerator.backward(loss)
71
+
72
+ optimizer.step()
73
+ optimizer.zero_grad()
74
+ ```
75
+
76
+ ```python
77
+ from accelerate import notebook_launcher
78
+
79
+ notebook_launcher(training_function)
80
+ ```
81
+
82
+ <Tip>
83
+
84
+ The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU
85
+
86
+ </Tip>
87
+
88
+ If you use this example and declare the model *inside* the training function, then on a low-resource system you will potentially see an error
89
+ like:
90
+
91
+ ```
92
+ ProcessExitedException: process 0 terminated with signal SIGSEGV
93
+ ```
94
+
95
+ This error is *extremely* cryptic, but the basic explanation is that you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to
96
+ accept a single `model` argument, and declare it in an outside cell:
97
+
98
+ ```python
99
+ # In another Jupyter cell
100
+ model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
101
+ ```
102
+
103
+ ```diff
104
+ + def training_function(model):
105
+ # Initialize accelerator
106
+ accelerator = Accelerator()
107
+ - model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
108
+ train_dataloader, eval_dataloader = create_dataloaders(
109
+ train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
110
+ )
111
+ ...
112
+ ```
113
+
114
+ And finally calling the training function with:
115
+
116
+ ```diff
117
+ from accelerate import notebook_launcher
118
+ - notebook_launcher(training_function)
119
+ + notebook_launcher(training_function, (model,))
120
+ ```
121
+
122
+ <Tip>
123
+
124
+ The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If
125
+ using a script or launching on a much beefier server, declaring the model beforehand is not needed.
126
+
127
+ </Tip>
128
+
129
+ ## Mixed Precision and Global Variables
130
+
131
+ As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs.
132
+ That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
133
+
134
+ There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level.
135
+
136
+ At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
137
+ ```python
138
+ accelerator = Accelerator(mixed_precision="bf16")
139
+ ```
140
+ By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
141
+ Concretely, this sets the `XLA_USE_BF16` environment variable to `1`.
142
+
143
+ There is a further configuration you can perform, which is setting the `XLA_DOWNCAST_BF16` environment variable. If set to `1`, then
144
+ `torch.float` is `bfloat16` and `torch.double` is `float32`.
145
+
146
+ This is performed in the `Accelerator` object when passing `downcast_bf16=True`:
147
+ ```python
148
+ accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True)
149
+ ```
150
+
151
+ Using downcasting instead of bf16 everywhere is useful when you are trying to calculate metrics, log values, and more, where raw bf16 tensors would be unusable.
152
+
153
+ ## Training Times on TPUs
154
+
155
+ As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs
156
+ first run through a few batches of data to see how much memory to allocate before finally utilizing this configured
157
+ memory allocation extremely efficiently.
158
+
159
+ If you notice that the evaluation code calculating your model's metrics takes longer because it uses a larger batch size,
160
+ it is recommended to keep the evaluation batch size the same as the training one if it is too slow. Otherwise, the memory will reallocate to this
161
+ new batch size after the first few iterations.
162
+
163
+ <Tip>
164
+
165
+ Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
166
+
167
+ </Tip>
docs/source/imgs/accelerate_logo.png ADDED
docs/source/imgs/course_banner.png ADDED
docs/source/index.md ADDED
@@ -0,0 +1,74 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Accelerate
17
+
18
+ 🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
19
+
20
+ ```diff
21
+ + from accelerate import Accelerator
22
+ + accelerator = Accelerator()
23
+
24
+ + model, optimizer, training_dataloader, scheduler = accelerator.prepare(
25
+ + model, optimizer, training_dataloader, scheduler
26
+ + )
27
+
28
+ for batch in training_dataloader:
29
+ optimizer.zero_grad()
30
+ inputs, targets = batch
31
+ inputs = inputs.to(device)
32
+ targets = targets.to(device)
33
+ outputs = model(inputs)
34
+ loss = loss_function(outputs, targets)
35
+ + accelerator.backward(loss)
36
+ optimizer.step()
37
+ scheduler.step()
38
+ ```
39
+
40
+ Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
41
+ Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
42
+
43
+ <Tip>
44
+
45
+ To get a better idea of this process, make sure to check out the [Tutorials](basic_tutorials/overview)!
46
+
47
+ </Tip>
48
+
49
+
50
+ This code can then be launched on any system through Accelerate's CLI interface:
51
+ ```bash
52
+ accelerate launch {my_script.py}
53
+ ```
54
+
55
+ <div class="mt-10">
56
+ <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
57
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
58
+ ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
59
+ <p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
60
+ </a>
61
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
62
+ ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
63
+ <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
64
+ </a>
65
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
66
+ ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
67
+ <p class="text-gray-700">High-level explanations for building a better understanding of important topics such as avoiding subtle nuances and pitfalls in distributed training and DeepSpeed.</p>
68
+ </a>
69
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
70
+ ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
71
+ <p class="text-gray-700">Technical descriptions of how 🤗 Accelerate classes and methods work.</p>
72
+ </a>
73
+ </div>
74
+ </div>
docs/source/package_reference/accelerator.md ADDED
@@ -0,0 +1,211 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Accelerator
17
+
18
+ The [`Accelerator`] is the main class provided by 🤗 Accelerate.
19
+ It serves as the main entry point for the API.
20
+
21
+ ## Quick adaptation of your code
22
+
23
+ To quickly adapt your script to work on any kind of setup with 🤗 Accelerate just:
24
+
25
+ 1. Initialize an [`Accelerator`] object (that we will call `accelerator` throughout this page) as early as possible in your script.
26
+ 2. Pass your dataloader(s), model(s), optimizer(s), and scheduler(s) to the [`~Accelerator.prepare`] method.
27
+ 3. Remove all the `.cuda()` or `.to(device)` from your code and let the `accelerator` handle the device placement for you.
28
+
29
+ <Tip>
30
+
31
+ Step three is optional, but considered a best practice.
32
+
33
+ </Tip>
34
+
35
+ 4. Replace `loss.backward()` in your code with `accelerator.backward(loss)`
36
+ 5. Gather your predictions and labels before storing them or using them for metric computation using [`~Accelerator.gather`]
37
+
38
+ <Tip warning={true}>
39
+
40
+ Step five is mandatory when using distributed evaluation
41
+
42
+ </Tip>
43
+
44
+ In most cases this is all that is needed. The next section lists a few more advanced use cases and nice features;
45
+ search your code for the following patterns and replace them with the corresponding methods of your `accelerator`:
46
+
47
+ ## Advanced recommendations
48
+
49
+ ### Printing
50
+
51
+ `print` statements should be replaced by [`~Accelerator.print`] to only be printed once per server:
52
+
53
+ ```diff
54
+ - print("My thing I want to print!")
55
+ + accelerator.print("My thing I want to print!")
56
+ ```
57
+
58
+ ### Executing processes
59
+
60
+ #### Once on a single server
61
+
62
+ For statements that should be executed once per server, use [`~Accelerator.is_local_main_process`]:
63
+
64
+ ```python
65
+ if accelerator.is_local_main_process:
66
+ do_thing_once_per_server()
67
+ ```
68
+
69
+ A function can be wrapped using the [`~Accelerator.on_local_main_process`] function to achieve the same
70
+ behavior on a function's execution:
71
+
72
+ ```python
73
+ @accelerator.on_local_main_process
74
+ def do_my_thing():
75
+ "Something done once per server"
76
+ do_thing_once_per_server()
77
+ ```
78
+
79
+ #### Only ever once across all servers
80
+
81
+ For statements that should only ever be executed once, use [`~Accelerator.is_main_process`]:
82
+
83
+ ```python
84
+ if accelerator.is_main_process:
85
+ do_thing_once()
86
+ ```
87
+
88
+ A function can be wrapped using the [`~Accelerator.on_main_process`] function to achieve the same
89
+ behavior on a function's execution:
90
+
91
+ ```python
92
+ @accelerator.on_main_process
93
+ def do_my_thing():
94
+ "Something done once per server"
95
+ do_thing_once()
96
+ ```
97
+
98
+ #### On specific processes
99
+
100
+ If a function should be run on a specific overall or local process index, there are similar decorators
101
+ to achieve this:
102
+
103
+ ```python
104
+ @accelerator.on_local_process(local_process_idx=0)
105
+ def do_my_thing():
106
+ "Something done on process index 0 on each server"
107
+ do_thing_on_index_zero_on_each_server()
108
+ ```
109
+
110
+ ```python
111
+ @accelerator.on_process(process_index=0)
112
+ def do_my_thing():
113
+ "Something done on process index 0"
114
+ do_thing_on_index_zero()
115
+ ```
116
+
117
+ ### Synchronicity control
118
+
119
+ Use [`~Accelerator.wait_for_everyone`] to make sure all processes reach that point before continuing (useful before saving a model, for instance).
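+
+ For example, a minimal sketch of guarding a save with this barrier (`save_directory` is a placeholder path):
+
+ ```python
+ # Make sure every process has finished its work before saving
+ accelerator.wait_for_everyone()
+ accelerator.save_model(model, save_directory)
+ ```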
120
+
121
+ ### Saving and loading
122
+
123
+ ```python
124
+ model = MyModel()
125
+ model = accelerator.prepare(model)
126
+ ```
127
+
128
+ Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, get the state_dict of the model and save it. The state_dict will be in the same precision as the model being trained.
129
+
130
+ ```diff
131
+ - torch.save(state_dict, "my_state.pkl")
132
+ + accelerator.save_model(model, save_directory)
133
+ ```
134
+
135
+ [`~Accelerator.save_model`] can also save a model into sharded checkpoints or with safetensors format.
136
+ Here is an example:
137
+
138
+ ```python
139
+ accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
140
+ ```
141
+
142
+ #### 🤗 Transformers models
143
+
144
+ If you are using models from the [🤗 Transformers](https://huggingface.co/docs/transformers/) library, you can use the `.save_pretrained()` method.
145
+
146
+ ```python
147
+ from transformers import AutoModel
148
+
149
+ model = AutoModel.from_pretrained("bert-base-cased")
150
+ model = accelerator.prepare(model)
151
+
152
+ # ...fine-tune with PyTorch...
153
+
154
+ unwrapped_model = accelerator.unwrap_model(model)
155
+ unwrapped_model.save_pretrained(
156
+ "path/to/my_model_directory",
157
+ is_main_process=accelerator.is_main_process,
158
+ save_function=accelerator.save,
159
+ )
160
+ ```
161
+
162
+ This will ensure your model stays compatible with other 🤗 Transformers functionality like the `.from_pretrained()` method.
163
+
164
+ ```python
165
+ from transformers import AutoModel
166
+
167
+ model = AutoModel.from_pretrained("path/to/my_model_directory")
168
+ ```
169
+
170
+ ### Operations
171
+
172
+ Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and [`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
173
+
174
+ ### Gradient Accumulation
175
+
176
+ To perform gradient accumulation, use [`~Accelerator.accumulate`] and specify `gradient_accumulation_steps`.
177
+ This will also automatically ensure the gradients are synced or unsynced when on
178
+ multi-device training, check if the step should actually be performed, and auto-scale the loss:
179
+
180
+ ```diff
181
+ - accelerator = Accelerator()
182
+ + accelerator = Accelerator(gradient_accumulation_steps=2)
183
+
184
+ for (input, label) in training_dataloader:
185
+ + with accelerator.accumulate(model):
186
+ predictions = model(input)
187
+ loss = loss_function(predictions, label)
188
+ accelerator.backward(loss)
189
+ optimizer.step()
190
+ scheduler.step()
191
+ optimizer.zero_grad()
192
+ ```
193
+ #### GradientAccumulationPlugin
194
+ [[autodoc]] utils.GradientAccumulationPlugin
195
+
196
+
197
+ Instead of passing `gradient_accumulation_steps`, you can instantiate a `GradientAccumulationPlugin` and pass it to the [`Accelerator`]'s `__init__`
198
+ as `gradient_accumulation_plugin`. You can only pass one of `gradient_accumulation_plugin` or `gradient_accumulation_steps`; passing both will raise an error.
199
+ ```diff
200
+ from accelerate.utils import GradientAccumulationPlugin
201
+
202
+ gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=2)
203
+ - accelerator = Accelerator()
204
+ + accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
205
+ ```
206
+
207
+ In addition to the number of steps, this also lets you configure whether or not you adjust your learning rate scheduler to account for the change in steps due to accumulation.
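+
+ As a hedged sketch, assuming your installed version exposes an `adjust_scheduler` field on the plugin, that could look like:
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import GradientAccumulationPlugin
+
+ # Accumulate over 4 steps; `adjust_scheduler=False` is assumed to keep the
+ # scheduler stepping at its original cadence
+ gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=4, adjust_scheduler=False)
+ accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
+ ```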
208
+
209
+ ## Overall API documentation:
210
+
211
+ [[autodoc]] Accelerator
docs/source/package_reference/big_modeling.md ADDED
@@ -0,0 +1,47 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Working with large models
17
+
18
+ ## Dispatching and Offloading Models
19
+
20
+ [[autodoc]] big_modeling.init_empty_weights
21
+ [[autodoc]] big_modeling.cpu_offload
22
+ [[autodoc]] big_modeling.cpu_offload_with_hook
23
+ [[autodoc]] big_modeling.disk_offload
24
+ [[autodoc]] big_modeling.dispatch_model
25
+ [[autodoc]] big_modeling.load_checkpoint_and_dispatch
26
+ [[autodoc]] big_modeling.load_checkpoint_in_model
27
+ [[autodoc]] utils.infer_auto_device_map
28
+
29
+ ## Model Hooks
30
+
31
+ ### Hook Classes
32
+
33
+ [[autodoc]] hooks.ModelHook
34
+ [[autodoc]] hooks.AlignDevicesHook
35
+ [[autodoc]] hooks.SequentialHook
36
+
37
+ ### Adding Hooks
38
+
39
+ [[autodoc]] hooks.add_hook_to_module
40
+ [[autodoc]] hooks.attach_execution_device_hook
41
+ [[autodoc]] hooks.attach_align_device_hook
42
+ [[autodoc]] hooks.attach_align_device_hook_on_blocks
43
+
44
+ ### Removing Hooks
45
+
46
+ [[autodoc]] hooks.remove_hook_from_module
47
+ [[autodoc]] hooks.remove_hook_from_submodules
docs/source/package_reference/cli.md ADDED
@@ -0,0 +1,308 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # The Command Line
17
+
18
+ Below is a list of all the available commands in 🤗 Accelerate, along with their parameters.
19
+
20
+ ## accelerate config
21
+
22
+ **Command**:
23
+
24
+ `accelerate config` or `accelerate-config`
25
+
26
+ Launches a series of prompts to create and save a `default_config.yaml` configuration file for your training system. Should
27
+ always be run first on your machine.
28
+
29
+ **Usage**:
30
+
31
+ ```bash
32
+ accelerate config [arguments]
33
+ ```
34
+
35
+ **Optional Arguments**:
36
+ * `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
37
+ of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
38
+ (`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
39
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
40
+
41
+ ## accelerate config default
42
+
43
+ **Command**:
44
+
45
+ `accelerate config default` or `accelerate-config default`
46
+
47
+ Create a default config file for Accelerate with only a few flags set.
48
+
49
+ **Usage**:
50
+
51
+ ```bash
52
+ accelerate config default [arguments]
53
+ ```
54
+
55
+ **Optional Arguments**:
56
+ * `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
57
+ of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
58
+ (`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
59
+
60
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
61
+ * `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
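+
+ **Example** (one possible invocation, using only the flag documented above):
+
+ ```bash
+ accelerate config default --mixed_precision bf16
+ ```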
62
+
63
+ ## accelerate config update
64
+
65
+ **Command**:
66
+
67
+ `accelerate config update` or `accelerate-config update`
68
+
69
+ Update an existing config file with the latest defaults while maintaining the old configuration.
70
+
71
+ **Usage**:
72
+
73
+ ```bash
74
+ accelerate config update [arguments]
75
+ ```
76
+
77
+ **Optional Arguments**:
78
+ * `--config_file CONFIG_FILE` (`str`) -- The path to the config file to update. Will default to a file named default_config.yaml in the cache location, which is the content
79
+ of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
80
+ (`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
81
+
82
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
83
+
84
+
85
+ ## accelerate env
86
+
87
+ **Command**:
88
+
89
+ `accelerate env` or `accelerate-env` or `python -m accelerate.commands.env`
90
+
91
+ Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
92
+
93
+ **Usage**:
94
+
95
+ ```bash
96
+ accelerate env [arguments]
97
+ ```
98
+
99
+ **Optional Arguments**:
100
+ * `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
101
+ of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
102
+ (`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
103
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
104
+
105
+ ## accelerate launch
106
+
107
+ **Command**:
108
+
109
+ `accelerate launch` or `accelerate-launch` or `python -m accelerate.commands.launch`
110
+
111
+ Launches a specified script on a distributed system with the right parameters.
112
+
113
+ **Usage**:
114
+
115
+ ```bash
116
+ accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...
117
+ ```
118
+
119
+ **Positional Arguments**:
120
+
121
+ - `{training_script}` -- The full path to the script to be launched in parallel
122
+ - `--{training_script-argument-1}` -- Arguments of the training script
123
+
124
+ **Optional Arguments**:
125
+
126
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
127
+ * `--config_file CONFIG_FILE` (`str`)-- The config file to use for the default values in the launching script.
128
+ * `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
129
+ * `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
130
+ * `--debug` (`bool`) -- Whether to print out the torch.distributed stack trace when something fails.
131
+ * `-q`, `--quiet` (`bool`) -- Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations).
132
+
133
+
134
+ The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
135
+ values. They can also be passed in manually.
136
+
137
+ **Hardware Selection Arguments**:
138
+
139
+ * `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
140
+ * `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
141
+ * `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
142
+ * `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training.
143
+
144
+ **Resource Selection Arguments**:
145
+
146
+ The following arguments are useful for fine-tuning how available hardware should be used
147
+
148
+ * `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
149
+ * `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
150
+ * `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
151
+ * `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
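+
+ For example, a launch that overrides a couple of these resource flags might look like the following (`train.py` and its `--lr` argument are placeholders for your own script):
+
+ ```bash
+ accelerate launch --num_processes 2 --mixed_precision bf16 train.py --lr 1e-3
+ ```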
152
+
153
+ **Training Paradigm Arguments**:
154
+
155
+ The following arguments are useful for selecting which training paradigm to use.
156
+
157
+ * `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
158
+ * `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
159
+ * `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
160
+ * `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
161
+
162
+ **Distributed GPU Arguments**:
163
+
164
+ The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
165
+
166
+ * `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine, as a comma-separated list
167
+ * `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
168
+ * `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
169
+ * `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
170
+ * `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
171
+ * `--rdzv_backend` (`str`) -- The rendezvous method to use, such as "static" or "c10d"
172
+ * `--rdzv_conf` (`str`) -- Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
173
+ * `--max_restarts` (`int`) -- Maximum number of worker group restarts before failing.
174
+ * `--monitor_interval` (`float`) -- Interval, in seconds, to monitor the state of workers.
175
+
176
+ **TPU Arguments**:
177
+
178
+ The following arguments are only useful when `tpu` is passed or TPU training is configured through `accelerate config`:
179
+
180
+ * `--main_training_function MAIN_TRAINING_FUNCTION` (`str`) -- The name of the main function to be executed in your script.
181
+ * `--downcast_bf16` (`bool`) -- Whether when using bf16 precision on TPUs if both float and double tensors are cast to bfloat16 or if double tensors remain as float32.
182
+
183
+ **DeepSpeed Arguments**:
184
+
185
+ The following arguments are only useful when `use_deepspeed` is passed or `deepspeed` is configured through `accelerate config`:
186
+
187
+ * `--deepspeed_config_file` (`str`) -- DeepSpeed config file.
188
+ * `--zero_stage` (`int`) -- DeepSpeed's ZeRO optimization stage.
189
+ * `--offload_optimizer_device` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states.
190
+ * `--offload_param_device` (`str`) -- Decides where (none|cpu|nvme) to offload parameters.
191
+ * `--gradient_accumulation_steps` (`int`) -- Number of gradient accumulation steps used in your training script.
192
+ * `--gradient_clipping` (`float`) -- Gradient clipping value used in your training script.
193
+ * `--zero3_init_flag` (`str`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
194
+ * `--zero3_save_16bit_model` (`str`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
195
+ * `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
196
+ * `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using a multi-node setup.
197
+ * `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using a multi-node setup.
198
+ * `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
199
+
200
+ **Fully Sharded Data Parallelism Arguments**:
201
+
202
+ The following arguments are only useful when `use_fsdp` is passed or Fully Sharded Data Parallelism is configured through `accelerate config`:
203
+
204
+ * `--fsdp_offload_params` (`str`) -- Decides Whether (true|false) to offload parameters and gradients to CPU.
205
+ * `--fsdp_min_num_params` (`int`) -- FSDP's minimum number of parameters for Default Auto Wrapping.
206
+ * `--fsdp_sharding_strategy` (`int`) -- FSDP's Sharding Strategy.
207
+ * `--fsdp_auto_wrap_policy` (`str`) -- FSDP's auto wrap policy.
208
+ * `--fsdp_transformer_layer_cls_to_wrap` (`str`) -- Transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block` ...
209
+ * `--fsdp_backward_prefetch_policy` (`str`) -- FSDP's backward prefetch policy.
210
+ * `--fsdp_state_dict_type` (`str`) -- FSDP's state dict type.
211
+
212
+ **Megatron-LM Arguments**:
213
+
214
+ The following arguments are only useful when `use_megatron_lm` is passed or Megatron-LM is configured through `accelerate config`:
215
+
216
+ * `--megatron_lm_tp_degree` (``) -- Megatron-LM's Tensor Parallelism (TP) degree.
217
+ * `--megatron_lm_pp_degree` (``) -- Megatron-LM's Pipeline Parallelism (PP) degree.
218
+ * `--megatron_lm_num_micro_batches` (``) -- Megatron-LM's number of micro batches when PP degree > 1.
219
+ * `--megatron_lm_sequence_parallelism` (``) -- Decides Whether (true|false) to enable Sequence Parallelism when TP degree > 1.
220
+ * `--megatron_lm_recompute_activations` (``) -- Decides Whether (true|false) to enable Selective Activation Recomputation.
221
+ * `--megatron_lm_use_distributed_optimizer` (``) -- Decides Whether (true|false) to use distributed optimizer which shards optimizer state and gradients across Data Parallel (DP) ranks.
222
+ * `--megatron_lm_gradient_clipping` (``) -- Megatron-LM's gradient clipping value based on global L2 Norm (0 to disable).
223
+
224
+ **AWS SageMaker Arguments**:
225
+
226
+ The following arguments are only useful when training in SageMaker
227
+
228
+ * `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
229
+ * `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
230
+
231
+ ## accelerate estimate-memory
232
+
233
+ **Command**:
234
+
235
+ `accelerate estimate-memory` or `accelerate-estimate-memory` or `python -m accelerate.commands.estimate`
236
+
237
+ Estimates the total vRAM needed to load a particular model hosted on the Hub, along with an estimate for training. Requires that `huggingface_hub` be installed.
238
+
239
+ <Tip>
240
+
241
+ When performing inference, typically add ≤20% to the result as overall allocation [as referenced here](https://blog.eleuther.ai/transformer-math/). We will have more extensive estimations in the future that will automatically be included in the calculation.
242
+
243
+ </Tip>
244
+
245
+ **Usage**:
246
+
247
+ ```bash
248
+ accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ...
249
+ ```
250
+
251
+ **Required Arguments**:
252
+
253
+ * `MODEL_NAME` (`str`)-- The model name on the Hugging Face Hub
254
+
255
+ **Optional Arguments**:
256
+
257
+ * `--library_name {timm,transformers}` (`str`) -- The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub
258
+ * `--dtypes {float32,float16,int8,int4}` (`[{float32,float16,int8,int4} ...]`) -- The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`
259
+ * `--trust_remote_code` (`bool`) -- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
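+
+ **Example** (the model name is chosen purely for illustration):
+
+ ```bash
+ accelerate estimate-memory bert-base-cased --library_name transformers --dtypes float32 float16
+ ```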
260
+
261
+ ## accelerate tpu-config
262
+
263
+ `accelerate tpu-config`
264
+
265
+ **Usage**:
266
+
267
+ ```bash
268
+ accelerate tpu-config [arguments]
269
+ ```
270
+
271
+ **Optional Arguments**:
272
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
273
+
274
+ **Config Arguments**:
275
+
276
+ Arguments that can be configured through `accelerate config`.
277
+
278
+ * `--config_file` (`str`) -- Path to the config file to use for accelerate.
279
+ * `--tpu_name` (`str`) -- The name of the TPU to use. If not specified, will use the TPU specified in the config file.
280
+ * `--tpu_zone` (`str`) -- The zone of the TPU to use. If not specified, will use the zone specified in the config file.
281
+
282
+ **TPU Arguments**:
283
+
284
+ Arguments for options run inside the TPU.
285
+
286
+ * `--command_file` (`str`) -- The path to the file containing the commands to run on the pod on startup.
287
+ * `--command` (`str`) -- A command to run on the pod. Can be passed multiple times.
288
+ * `--install_accelerate` (`bool`) -- Whether to install accelerate on the pod. Defaults to False.
289
+ * `--accelerate_version` (`str`) -- The version of accelerate to install on the pod. If not specified, will use the latest pypi version. Specify 'dev' to install from GitHub.
290
+ * `--debug` (`bool`) -- If set, will print the command that would be run instead of running it.
291
+
292
+ ## accelerate test
293
+
294
+ `accelerate test` or `accelerate-test`
295
+
296
+ Runs `accelerate/test_utils/test_script.py` to verify that 🤗 Accelerate has been properly configured on your system and runs.
297
+
298
+ **Usage**:
299
+
300
+ ```bash
301
+ accelerate test [arguments]
302
+ ```
303
+
304
+ **Optional Arguments**:
305
+ * `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
306
+ of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
307
+ (`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
308
+ * `-h`, `--help` (`bool`) -- Show a help message and exit
docs/source/package_reference/deepspeed.md ADDED
@@ -0,0 +1,28 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Utilities for DeepSpeed
17
+
18
+ [[autodoc]] utils.DeepSpeedPlugin
19
+
20
+ [[autodoc]] utils.DummyOptim
21
+
22
+ [[autodoc]] utils.DummyScheduler
23
+
24
+ [[autodoc]] utils.DeepSpeedEngineWrapper
25
+
26
+ [[autodoc]] utils.DeepSpeedOptimizerWrapper
27
+
28
+ [[autodoc]] utils.DeepSpeedSchedulerWrapper
docs/source/package_reference/fsdp.md ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Utilities for Fully Sharded Data Parallelism
17
+
18
+ [[autodoc]] utils.FullyShardedDataParallelPlugin
docs/source/package_reference/kwargs.md ADDED
@@ -0,0 +1,39 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Kwargs Handlers
17
+
18
+ The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
19
+ related to distributed training or mixed precision are created.
20
+
21
+ ## AutocastKwargs
22
+
23
+ [[autodoc]] AutocastKwargs
24
+
25
+ ## DistributedDataParallelKwargs
26
+
27
+ [[autodoc]] DistributedDataParallelKwargs
28
+
29
+ ## FP8RecipeKwargs
30
+
31
+ [[autodoc]] utils.FP8RecipeKwargs
32
+
33
+ ## GradScalerKwargs
34
+
35
+ [[autodoc]] GradScalerKwargs
36
+
37
+ ## InitProcessGroupKwargs
38
+
39
+ [[autodoc]] InitProcessGroupKwargs
docs/source/package_reference/launchers.md ADDED
@@ -0,0 +1,22 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Launchers
17
+
18
+ Functions for launching training on distributed processes.
19
+
20
+
21
+ [[autodoc]] accelerate.notebook_launcher
22
+ [[autodoc]] accelerate.debug_launcher
docs/source/package_reference/logging.md ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Logging with Accelerate
17
+
18
+ Refer to the [Troubleshooting guide](../usage_guides/troubleshooting#logging) or to the example below to learn
19
+ how to use 🤗 Accelerate's logger.
20
+
21
+ [[autodoc]] logging.get_logger
docs/source/package_reference/megatron_lm.md ADDED
@@ -0,0 +1,32 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Utilities for Megatron-LM
17
+
18
+ [[autodoc]] utils.MegatronLMPlugin
19
+
20
+ [[autodoc]] utils.MegatronLMDummyScheduler
21
+
22
+ [[autodoc]] utils.MegatronLMDummyDataLoader
23
+
24
+ [[autodoc]] utils.AbstractTrainStep
25
+
26
+ [[autodoc]] utils.GPTTrainStep
27
+
28
+ [[autodoc]] utils.BertTrainStep
29
+
30
+ [[autodoc]] utils.T5TrainStep
31
+
32
+ [[autodoc]] utils.avg_losses_across_data_parallel_group
docs/source/package_reference/state.md ADDED
@@ -0,0 +1,28 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Stateful Classes
17
+
18
+ Below are variations of a [singleton class](https://en.wikipedia.org/wiki/Singleton_pattern) in the sense that all
19
+ instances share the same state, which is initialized on the first instantiation.
20
+
21
+ These classes are immutable and store information about certain configurations or
22
+ states.
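+
+ For example, a rough sketch using [`~state.PartialState`]:
+
+ ```python
+ from accelerate import PartialState
+
+ state = PartialState()
+ # Every instance shares the same underlying state
+ assert PartialState().process_index == state.process_index
+ print(state.device, state.num_processes, state.is_main_process)
+ ```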
23
+
24
+ [[autodoc]] state.PartialState
25
+
26
+ [[autodoc]] state.AcceleratorState
27
+
28
+ [[autodoc]] state.GradientState
docs/source/package_reference/torch_wrappers.md ADDED
@@ -0,0 +1,37 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
17
+
18
+ The internal classes Accelerate uses to prepare objects for distributed training
19
+ when calling [`~Accelerator.prepare`].
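+
+ As a rough sketch (assuming `model`, `optimizer`, `dataloader`, and `scheduler` already exist), these wrappers are what [`~Accelerator.prepare`] hands back:
+
+ ```python
+ from accelerate import Accelerator
+
+ accelerator = Accelerator()
+ model, optimizer, dataloader, scheduler = accelerator.prepare(model, optimizer, dataloader, scheduler)
+ # dataloader -> accelerate.data_loader.DataLoaderShard (or DataLoaderDispatcher)
+ # optimizer  -> accelerate.optimizer.AcceleratedOptimizer
+ # scheduler  -> accelerate.scheduler.AcceleratedScheduler
+ ```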
20
+
21
+ ## Datasets and DataLoaders
22
+
23
+ [[autodoc]] data_loader.prepare_data_loader
24
+ [[autodoc]] data_loader.skip_first_batches
25
+
26
+ [[autodoc]] data_loader.BatchSamplerShard
27
+ [[autodoc]] data_loader.IterableDatasetShard
28
+ [[autodoc]] data_loader.DataLoaderShard
29
+ [[autodoc]] data_loader.DataLoaderDispatcher
30
+
31
+ ## Optimizers
32
+
33
+ [[autodoc]] optimizer.AcceleratedOptimizer
34
+
35
+ ## Schedulers
36
+
37
+ [[autodoc]] scheduler.AcceleratedScheduler
docs/source/package_reference/tracking.md ADDED
@@ -0,0 +1,35 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Experiment Tracking
17
+
18
+ ## The Base Tracker Class
19
+
20
+ [[autodoc]] tracking.GeneralTracker
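+
+ A custom tracker can be built by subclassing [`~tracking.GeneralTracker`]; below is a rough, hedged sketch (the class name and print-based logging are purely illustrative):
+
+ ```python
+ from accelerate.tracking import GeneralTracker
+
+ class PrintTracker(GeneralTracker):
+     name = "print_tracker"
+     requires_logging_directory = False
+
+     def __init__(self, run_name: str):
+         super().__init__()
+         self.run_name = run_name
+
+     @property
+     def tracker(self):
+         # Return the underlying tracking object, if there is one
+         return None
+
+     def store_init_configuration(self, values: dict):
+         print(f"[{self.run_name}] config: {values}")
+
+     def log(self, values: dict, step: int = None):
+         print(f"[{self.run_name}] step {step}: {values}")
+ ```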
21
+
22
+ ## Integrated Trackers
23
+
24
+ [[autodoc]] tracking.TensorBoardTracker
25
+ - __init__
26
+ [[autodoc]] tracking.WandBTracker
27
+ - __init__
28
+ [[autodoc]] tracking.CometMLTracker
29
+ - __init__
30
+ [[autodoc]] tracking.AimTracker
31
+ - __init__
32
+ [[autodoc]] tracking.MLflowTracker
33
+ - __init__
34
+ [[autodoc]] tracking.ClearMLTracker
35
+ - __init__
docs/source/package_reference/utilities.md ADDED
@@ -0,0 +1,178 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Helpful Utilities
17
+
18
+ Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
19
+
20
+ ## Constants
21
+
22
+ Constants used throughout 🤗 Accelerate for reference
23
+
24
+ The following are constants used when utilizing [`Accelerator.save_state`]
25
+
26
+ `utils.MODEL_NAME`: `"pytorch_model"`
27
+ `utils.OPTIMIZER_NAME`: `"optimizer"`
28
+ `utils.RNG_STATE_NAME`: `"random_states"`
29
+ `utils.SCALER_NAME`: `"scaler.pt"`
30
+ `utils.SCHEDULER_NAME`: `"scheduler"`
31
+
32
+ The following are constants used when utilizing [`Accelerator.save_model`]
33
+
34
+ `utils.WEIGHTS_NAME`: `"pytorch_model.bin"`
35
+ `utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"`
36
+ `utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"`
37
+ `utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"`
38
+
39
+ ## Data Classes
40
+
41
+ These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
42
+
43
+ [[autodoc]] utils.DistributedType
44
+
45
+ [[autodoc]] utils.DynamoBackend
46
+
47
+ [[autodoc]] utils.LoggerType
48
+
49
+ [[autodoc]] utils.PrecisionType
50
+
51
+ [[autodoc]] utils.FP8RecipeKwargs
52
+
53
+ [[autodoc]] utils.ProjectConfiguration
54
+
55
+ ## Environmental Variables
56
+
57
+ These are environmental variables that can be enabled for different use cases
58
+
59
+ * `ACCELERATE_DEBUG_MODE` (`str`): Whether to run accelerate in debug mode. More info available [here](../usage_guides/debug.md).
60
+
61
+ ## Plugins
62
+
63
+ These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation,
64
+ for convenience all of them are available to see here:
65
+
66
+ [[autodoc]] utils.DeepSpeedPlugin
67
+
68
+ [[autodoc]] utils.FullyShardedDataParallelPlugin
69
+
70
+ [[autodoc]] utils.GradientAccumulationPlugin
71
+
72
+ [[autodoc]] utils.MegatronLMPlugin
73
+
74
+ [[autodoc]] utils.TorchDynamoPlugin
75
+
76
+
77
+ ## Data Manipulation and Operations
78
+
79
+ These include data operations that mimic the same `torch` ops but can be used on distributed processes.
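+
+ For instance, a rough sketch of gathering a per-process tensor onto every process (assuming the script is launched with `accelerate launch`):
+
+ ```python
+ import torch
+ from accelerate import PartialState
+ from accelerate.utils import gather
+
+ state = PartialState()
+ tensor = torch.tensor([state.process_index], device=state.device)
+ # With 2 processes this yields tensor([0, 1]) on every process
+ all_ranks = gather(tensor)
+ ```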
80
+
81
+ [[autodoc]] utils.broadcast
82
+
83
+ [[autodoc]] utils.concatenate
84
+
85
+ [[autodoc]] utils.gather
86
+
87
+ [[autodoc]] utils.pad_across_processes
88
+
89
+ [[autodoc]] utils.reduce
90
+
91
+ [[autodoc]] utils.send_to_device
92
+
93
+ ## Environment Checks
94
+
95
+ These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed.
96
+
97
+ [[autodoc]] utils.is_bf16_available
98
+
99
+ [[autodoc]] utils.is_ipex_available
100
+
101
+ [[autodoc]] utils.is_mps_available
102
+
103
+ [[autodoc]] utils.is_npu_available
104
+
105
+ [[autodoc]] utils.is_torch_version
106
+
107
+ [[autodoc]] utils.is_tpu_available
108
+
109
+ [[autodoc]] utils.is_xpu_available
110
+
111
+ ## Environment Manipulation
112
+
113
+ [[autodoc]] utils.patch_environment
114
+
115
+ [[autodoc]] utils.clear_environment
116
+
117
+ [[autodoc]] utils.write_basic_config
118
+
119
+ When setting up 🤗 Accelerate for the first time, rather than running `accelerate config`, [`~utils.write_basic_config`] can be used as an alternative for quick configuration.
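+
+ A minimal sketch of that alternative (the `fp16` choice is illustrative):
+
+ ```python
+ from accelerate.utils import write_basic_config
+
+ # Writes a default single-machine config file to the 🤗 Accelerate cache folder
+ write_basic_config(mixed_precision="fp16")
+ ```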
120
+
121
+ ## Memory
122
+
123
+ [[autodoc]] utils.get_max_memory
124
+
125
+ [[autodoc]] utils.find_executable_batch_size
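+
+ A short sketch of the decorator usage (the inner function and starting size are illustrative):
+
+ ```python
+ from accelerate.utils import find_executable_batch_size
+
+ @find_executable_batch_size(starting_batch_size=128)
+ def train(batch_size):
+     # If a CUDA out-of-memory error is raised, `train` is re-run with a halved batch_size
+     ...
+
+ train()
+ ```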
126
+
127
+ ## Modeling
128
+
129
+ These utilities relate to interacting with PyTorch models
130
+
131
+ [[autodoc]] utils.extract_model_from_parallel
132
+
133
+ [[autodoc]] utils.get_max_layer_size
134
+
135
+ [[autodoc]] utils.offload_state_dict
136
+
137
+
138
+ ## Parallel
139
+
140
+ These include general utilities that should be used when working in parallel.
141
+
142
+ [[autodoc]] utils.extract_model_from_parallel
143
+
144
+ [[autodoc]] utils.save
145
+
146
+ [[autodoc]] utils.wait_for_everyone
147
+
148
+
149
+ ## Random
150
+
151
+ These utilities relate to setting and synchronizing all of the random states.
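+
+ For example, a one-liner that seeds `random`, `numpy`, and `torch` in the current process:
+
+ ```python
+ from accelerate.utils import set_seed
+
+ set_seed(42)
+ ```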
152
+
153
+ [[autodoc]] utils.set_seed
154
+
155
+ [[autodoc]] utils.synchronize_rng_state
156
+
157
+ [[autodoc]] utils.synchronize_rng_states
158
+
159
+
160
+ ## PyTorch XLA
161
+
162
+ These include utilities that are useful while using PyTorch with XLA.
163
+
164
+ [[autodoc]] utils.install_xla
165
+
166
+ ## Loading model weights
167
+
168
+ These include utilities that are useful to load checkpoints.
169
+
170
+ [[autodoc]] utils.load_checkpoint_in_model
171
+
172
+ ## Quantization
173
+
174
+ These include utilities that are useful to quantize a model.
175
+
176
+ [[autodoc]] utils.load_and_quantize_model
177
+
178
+ [[autodoc]] utils.BnbQuantizationConfig
docs/source/quicktour.md ADDED
@@ -0,0 +1,441 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Quick tour
17
+
18
+ This guide aims to help you get started with 🤗 Accelerate quickly. It covers the essential steps you need to take to
19
+ enable distributed training, as well as the adjustments that you need to make in some common scenarios.
20
+
21
+ To help you navigate, the guide is split into two sections:
22
+ * [Getting Started with 🤗 Accelerate](#getting-started-with--accelerate): start here to learn how to modify your script to enable distributed training with 🤗 Accelerate
23
+ * [Common modifications of the base case](#common-modifications-of-the-base-case): check out this section for common deviations from the baseline scenario and what adjustments may need to be made to support them.
24
+
25
+ ## Getting started with 🤗 Accelerate
26
+
27
+ ### Enable distributed training in your script
28
+
29
+ To use 🤗 Accelerate in your own training script, you have to modify four things:
30
+
31
+ 1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object.
32
+
33
+ ```python
34
+ from accelerate import Accelerator
35
+
36
+ accelerator = Accelerator()
37
+ ```
38
+
39
+ Add this at the beginning of your training script as it will initialize everything necessary for distributed training.
40
+ You don't need to indicate the kind of environment you are in (a single machine with a GPU, a machine with several GPUs,
41
+ or several machines with multiple GPUs or a TPU), the library will detect this automatically.
42
+
43
+ 2. Remove the `.to(device)` or `.cuda()` calls for your model and input data.
44
+
45
+ The `accelerator` object will handle placing these objects on the right device for you.
46
+ If you choose to leave those `.to(device)` calls, make sure to use the device provided by the `accelerator` object: `accelerator.device`.
47
+
48
+ <Tip warning={true}>
49
+
50
+ You can fully deactivate the automatic device placement by passing along `device_placement=False` when
51
+ initializing the [`Accelerator`].
52
+ However, if you place your objects manually on the proper device, be careful to create your optimizer after putting your
53
+ model on `accelerator.device` or your training will fail on TPU.
54
+
55
+ </Tip>
56
+
57
+ 3. Pass all PyTorch objects relevant to training (optimizer, model, dataloader(s), learning rate scheduler) to the
58
+ [`~Accelerator.prepare`] method as soon as these objects are created, before starting your actual
59
+ training loop:
60
+
61
+ ```python
62
+ model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
63
+ model, optimizer, train_dataloader, lr_scheduler
64
+ )
65
+ ```
66
+
67
+ **Important notes**:
68
+
69
+ * You should always pass the learning rate scheduler to [`~Accelerator.prepare`], however if the scheduler should *not* be stepped at each optimization step, pass `step_with_optimizer=False` to the [`Accelerator`] init.
70
+ * While you can send your dataloader to [`~Accelerator.prepare`] on its own (and there are cases for doing so, such as distributed inference), it's best to send it to [`~Accelerator.prepare`] together with the model and optimizer.
71
+ * If you wish to run distributed evaluation, send your validation dataloader to [`~Accelerator.prepare`] as well. There are some nuances to distributed validation, check the [Distributed evaluation](#add-distributed-evaluation) section of the guide.
72
+ * Any instruction using your training dataloader length (for instance if you want to log the number of total training
73
+ steps) should go after the call to [`~Accelerator.prepare`].
74
+
75
+ Passing `DataLoader` objects to the [`~Accelerator.prepare`] method ensures that your dataloader will be sharded across
76
+ all GPUs/TPU cores available so that each one sees a different portion of the training dataset. In other words, if there are 8 processes and a dataset of 64 items, each process will see 8 of these items per iteration. Also, the random states
77
+ of all processes will be synchronized at the beginning of each iteration through your dataloader, to make sure the data
78
+ is shuffled the same way (if you decided to use `shuffle=True` or any kind of random sampler).
79
+
80
+ <Tip>
81
+
82
+ The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
83
+ your script. For instance, training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
84
+ train at an actual batch size of 64 (4 * 16).
85
+ If you want the batch size to remain the same regardless of how many GPUs the script is run on, you can use the
86
+ option `split_batches=True` when creating and initializing [`Accelerator`].
87
+ Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
88
+ length divided by X (since your actual batch size will be multiplied by X), unless you set
89
+ `split_batches=True`.
90
+
91
+ </Tip>
92
+
93
+
94
+ 4. Replace the `loss.backward()` line with `accelerator.backward(loss)`.
95
+
96
+ And you're all set! With all these changes, your script will run on your local machine as well as on multiple GPUs or a
97
+ TPU! You can either use your favorite tool to launch the distributed training, or you can use the 🤗 Accelerate
98
+ launcher.
99
+
100
+ ### Add distributed evaluation
101
+
102
+ You can perform regular evaluation in your training script if you leave your validation dataloader out of the
103
+ [`~Accelerator.prepare`] method. In this case, you will need to put the input data on the
104
+ `accelerator.device` manually.
105
+
106
+ To perform distributed evaluation, send along your validation dataloader to the [`~Accelerator.prepare`]
107
+ method:
108
+
109
+ ```python
110
+ validation_dataloader = accelerator.prepare(validation_dataloader)
111
+ ```
112
+
113
+ Same as with your training dataloader, each device will only see part of the evaluation data should you run your script
114
+ on multiple devices. This means you will need to group your predictions together which you can do with
115
+ the [`~Accelerator.gather_for_metrics`] method.
116
+
117
+ ```python
118
+ for inputs, targets in validation_dataloader:
119
+ predictions = model(inputs)
120
+ # Gather all predictions and targets
121
+ all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
122
+ # Example of use with a *Datasets.Metric*
123
+ metric.add_batch(predictions=all_predictions, references=all_targets)
124
+ ```
125
+
126
+ <Tip warning={true}>
127
+
128
+ Similar to the training dataloader, passing your validation dataloader through
129
+ [`~Accelerator.prepare`] may change it: if you run on X GPUs, it will have its length divided by X
130
+ (since your actual batch size will be multiplied by X), unless you set `split_batches=True`.
131
+
132
+ </Tip>
133
+
134
+ Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result,
135
+ metrics should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated
136
+ data while gathering and provide a more accurate metric.
137
+
138
+ <Tip>
139
+
140
+ If for some reason you don't wish to have this automatically done, [`~Accelerator.gather`] can be used instead to gather
141
+ the data across all processes and this can manually be done instead.
142
+
143
+ </Tip>
144
+
145
+
146
+ <Tip warning={true}>
147
+
148
+ The [`~Accelerator.gather`] and [`~Accelerator.gather_for_metrics`] methods require the tensors to be all the same size on each process. If
149
+ you have tensors of different sizes on each process (for instance when dynamically padding to the maximum length in
150
+ a batch), you should use the [`~Accelerator.pad_across_processes`] method to pad your tensor to the
151
+ biggest size across processes.
152
+
153
+ </Tip>
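+
+ As a rough sketch (the `pad_token_id` value is an assumption from your tokenizer), this could look like:
+
+ ```python
+ for inputs, targets in validation_dataloader:
+     predictions = model(inputs)
+     # Pad the variable-length dimension before gathering
+     predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=pad_token_id)
+     targets = accelerator.pad_across_processes(targets, dim=1, pad_index=pad_token_id)
+     all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
+ ```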
154
+
155
+ ### Launch your distributed script
156
+
157
+ You can use the regular commands to launch your distributed training (like `torch.distributed.run` for
158
+ PyTorch) - they are fully compatible with 🤗 Accelerate.
159
+
160
+ Alternatively, 🤗 Accelerate provides a CLI tool that unifies all launchers, so you only have to remember one command. \
161
+ To use it, run a quick configuration setup first on your machine and answer the questions:
162
+
163
+ ```bash
164
+ accelerate config
165
+ ```
166
+
167
+ At the end of the setup, a *default_config.yaml* file will be saved in your cache folder for 🤗 Accelerate. That cache
168
+ folder is (with decreasing order of priority):
169
+
170
+ - The content of your environment variable `HF_HOME` suffixed with *accelerate*.
171
+ - If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
172
+ *huggingface/accelerate*.
173
+ - If this does not exist either, the folder *~/.cache/huggingface/accelerate*.
174
+
175
+ By specifying the `--config_file` flag you can specify an alternative location of the configuration file.
176
+ Once the configuration setup is complete, you can test your setup by running:
177
+
178
+ ```bash
179
+ accelerate test
180
+ ```
181
+
182
+ This will launch a short script that will test the distributed environment. If it runs without issues, you are ready for
183
+ the next step!
184
+
185
+ Note that if you specified a location for the config file in the previous step, you need to pass it here as well:
186
+
187
+ ```bash
188
+ accelerate test --config_file path_to_config.yaml
189
+ ```
190
+
191
+ Now that this is done, you can run your script with the following command:
192
+
193
+ ```bash
194
+ accelerate launch path_to_script.py --args_for_the_script
195
+ ```
196
+
197
+ If you stored the config file in a non-default location, you can indicate it to the launcher like this:
198
+
199
+ ```bash
200
+ accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
201
+ ```
202
+
203
+ You can override any of the arguments determined by your config file. To see the complete list of parameters that you
204
+ can pass in, run `accelerate launch -h`. (You can also get help for more niche arguments by passing in partial commands, such as `accelerate launch --multi_gpu -h` for all `multi_gpu` arguments.)
205
+
206
+ Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
207
+
208
+ ## Common modifications of the base case
209
+
210
+ The previous section covers the minimal essential steps to move a training script into a distributed setup with 🤗 Accelerate.
211
+ Here we describe common modifications/deviations from the base case scenario and the adjustments you need to make to accommodate for them.
212
+
213
+ ### Launch distributed training from a notebook
214
+
215
+ Accelerate has a [`notebook_launcher`] to help you launch your training function from a
216
+ notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training on several GPUs and machines
217
+ (if the machine on which you are running your notebook has them).
218
+
219
+ Define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
220
+ cell with the following code:
221
+
222
+ ```python
223
+ from accelerate import notebook_launcher
224
+
225
+ notebook_launcher(training_function)
226
+ ```
227
+
228
+ <Tip warning={true}>
229
+
230
+ Your [`Accelerator`] object should only be defined inside the training function. This is because the
231
+ initialization should be done inside the launcher only.
232
+
233
+ </Tip>
234
+
235
+ Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
236
+
237
+ ### Specifics of training on TPU
238
+
239
+ If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
240
+ will create a graph of all the operations happening in your training step (forward pass, backward pass and optimizer
241
+ step). This is why your first step of training will always be very long as building and compiling this graph for
242
+ optimizations takes some time.
243
+
244
+ The good news is that this compilation will be cached so the second step and all the following will be much faster. The
245
+ bad news is that it only applies if all of your steps do exactly the same operations, which implies:
246
+
247
+ - having all tensors of the same length in all your batches
248
+ - having static code (i.e., not a for loop of length that could change from step to step)
249
+
250
+ Having any of the things above change between two steps will trigger a new compilation which will, once again, take a
251
+ lot of time. In practice, that means you must take special care to have all your tensors in your inputs of the same
252
+ shape (so no dynamic padding for instance if you are in an NLP problem) and should not use layers with for loops that
253
+ have different lengths depending on the inputs (such as an LSTM) or the training will be excruciatingly slow.
254
+
255
+ To introduce special behavior in your script for TPUs you can check the `distributed_type` of your
256
+ `accelerator`:
257
+
258
+ ```python docstyle-ignore
259
+ from accelerate import DistributedType
260
+
261
+ if accelerator.distributed_type == DistributedType.TPU:
262
+ # do something of static shape
263
+ else:
264
+ # go crazy and be dynamic
265
+ ```
266
+
267
+ The [NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py) shows an example in a
268
+ situation with dynamic padding.
269
+
270
+ One last thing to pay close attention to: if your model has tied weights (such as language models which tie the weights
271
+ of the embedding matrix with the weights of the decoder), moving this model to the TPU (either yourself or after you
272
+ passed your model to [`~Accelerator.prepare`]) will break the tying. You will need to retie the weights
273
+ after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
274
+ the Transformers repository.
275
+
276
+ Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
277
+
278
+ ### Execute a statement only on one process
279
+
280
+ Some of your instructions only need to run for one process on a given server: for instance a data download or a log
281
+ statement. To do this, wrap the statement in a test like this:
282
+
283
+ ```python docstyle-ignore
284
+ if accelerator.is_local_main_process:
285
+ # Is executed once per server
286
+ ```
287
+
288
+ Another example is progress bars: to avoid having multiple progress bars in your output, you should only display one on
289
+ the local main process:
290
+
291
+ ```python
292
+ from tqdm.auto import tqdm
293
+
294
+ progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
295
+ ```
296
+
297
+ The *local* means per machine: if you are running your training on two servers with several GPUs, the instruction will
298
+ be executed once on each of those servers. If you need to execute something only once for all processes (and not per
299
+ machine) for instance, uploading the final model to the 🤗 model hub, wrap it in a test like this:
300
+
301
+ ```python docstyle-ignore
302
+ if accelerator.is_main_process:
303
+ # Is executed once only
304
+ ```
305
+
306
+ For printing statements you only want executed once per machine, you can just replace the `print` function by
307
+ `accelerator.print`.
308
+
309
+
310
+ ### Defer execution on multiple GPUs
311
+
312
+ When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
313
+ GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
314
+ faster than others.
315
+
316
+ You might need to wait for all processes to have reached a certain point before executing a given instruction. For
317
+ instance, you shouldn't save a model before making sure every process is done with training. To do this, add the
318
+ following line in your code:
319
+
320
+ ```
321
+ accelerator.wait_for_everyone()
322
+ ```
323
+
324
+ This instruction will block all the processes that arrive first until all the other processes have reached that
325
+ point (if you run your script on just one GPU or CPU, this won't do anything).
326
+
327
+
328
+ ### Save/load a model in a distributed setup
329
+
330
+ Saving the model you trained might need a bit of adjustment: first you should wait for all processes to reach that
331
+ point in the script as shown above, and then, you should unwrap your model before saving it. This is because when going
332
+ through the [`~Accelerator.prepare`] method, your model may have been placed inside a bigger model,
333
+ which deals with the distributed training. This in turn means that saving your model state dictionary without taking
334
+ any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
335
+ in your base model. The [`~Accelerator.save_model`] method will help you to achieve that. It will unwrap your model and save
336
+ the model state dictionary.
337
+
338
+ Here is an example:
339
+
340
+ ```
341
+ accelerator.wait_for_everyone()
342
+ accelerator.save_model(model, save_directory)
343
+ ```
344
+
345
+ The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format:
346
+
347
+ ```python
348
+ accelerator.wait_for_everyone()
349
+ accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
350
+ ```
351
+
352
+ If your script contains logic to load a checkpoint, we also recommend you load your weights in the unwrapped model
353
+ (this is only useful if you use the load function after making your model go through
354
+ [`~Accelerator.prepare`]). Here is an example:
355
+
356
+ ```python
357
+ unwrapped_model = accelerator.unwrap_model(model)
358
+ path_to_checkpoint = os.path.join(save_directory,"pytorch_model.bin")
359
+ unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
360
+ ```
361
+
362
+ Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
363
+
364
+ If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`,
365
+ we recommend you to load it with [`~utils.load_checkpoint_in_model`] function. Here's an example:
366
+
367
+ ```python
368
+ load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
369
+ ```
370
+
371
+
372
+ ### Save/load entire states
373
+
374
+ When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially
375
+ learning rate schedulers to be restored in the _same script_.
376
+ You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so.
377
+
378
+ To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example,
379
+ if `automatic_checkpoint_naming` is enabled, each saved checkpoint will then be located at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
380
+
381
+ If you have registered any other stateful items to be stored through [`~Accelerator.register_for_checkpointing`] they will also be saved and/or loaded.
382
+
383
+ <Tip>
384
+
385
+ Every object passed to [`~Accelerator.register_for_checkpointing`] must have a `load_state_dict` and `state_dict` function to be stored
386
+
387
+ </Tip>
388
+
389
+
390
+ ### Use gradient clipping
391
+
392
+ If you are using gradient clipping in your script, you should replace the calls to
393
+ `torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with [`~Accelerator.clip_grad_norm_`]
394
+ and [`~Accelerator.clip_grad_value_`] respectively.
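+
+ A minimal sketch of what this looks like inside a training step (the `max_norm` value of 1.0 is illustrative):
+
+ ```python
+ accelerator.backward(loss)
+ if accelerator.sync_gradients:
+     accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
+ optimizer.step()
+ optimizer.zero_grad()
+ ```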
395
+
396
+
397
+ ### Train with mixed precision
398
+
399
+ If you are running your training in Mixed Precision with 🤗 Accelerate, you will get the best result with your loss being
400
+ computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
401
+ executed in full precision (which is generally what you want for loss computation, especially if it involves a
402
+ softmax). However, you might want to put your loss computation inside the [`~Accelerator.autocast`] context manager:
403
+
404
+ ```
405
+ with accelerator.autocast():
406
+ loss = complex_loss_function(outputs, target)
407
+ ```
408
+
409
+ Another caveat with Mixed Precision training is that the gradient will skip a few updates at the beginning and
410
+ sometimes during training: because of the dynamic loss scaling strategy, there are points during training where the
411
+ gradients have overflown, and the loss scaling factor is reduced to avoid this happening again at the next step.
412
+
413
+ This means that you may update your learning rate scheduler when there was no update, which is fine in general, but may
414
+ have an impact when you have very little training data, or if the first learning rate values of your scheduler are very
415
+ important. In this case, you can skip the learning rate scheduler updates when the optimizer step was not done like
416
+ this:
417
+
418
+ ```
419
+ if not accelerator.optimizer_step_was_skipped:
420
+ lr_scheduler.step()
421
+ ```
422
+
423
+ ### Use gradient accumulation
424
+
425
+ To perform gradient accumulation use [`~Accelerator.accumulate`] and specify a `gradient_accumulation_steps`.
426
+ This will also automatically ensure the gradients are synced or unsynced when on multi-device training, check if the step should
427
+ actually be performed, and auto-scale the loss:
428
+
429
+ ```python
430
+ accelerator = Accelerator(gradient_accumulation_steps=2)
431
+ model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)
432
+
433
+ for input, label in training_dataloader:
434
+ with accelerator.accumulate(model):
435
+ predictions = model(input)
436
+ loss = loss_function(predictions, label)
437
+ accelerator.backward(loss)
438
+ optimizer.step()
439
+ scheduler.step()
440
+ optimizer.zero_grad()
441
+ ```
docs/source/usage_guides/big_modeling.md ADDED
@@ -0,0 +1,150 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Handling big models for inference
17
+
18
+ One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
19
+
20
+ This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher API-level) to make use of this idea.
21
+
22
+ ## Using 🤗 Accelerate
23
+
24
+ For this tutorial, we'll assume a typical workflow for loading your model, such as the following:
25
+
26
+ ```py
27
+ import torch
28
+
29
+ my_model = ModelClass(...)
30
+ state_dict = torch.load(checkpoint_file)
31
+ my_model.load_state_dict(state_dict)
32
+ ```
33
+
34
+ Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
35
+
36
+ The first step is to init an empty skeleton of the model which won't take up any RAM using the [`init_empty_weights`] context manager:
37
+
38
+ ```py
39
+ from accelerate import init_empty_weights
40
+ with init_empty_weights():
41
+ my_model = ModelClass(...)
42
+ ```
43
+
44
+ With this, `my_model` is currently "parameterless", hence leaving a much smaller footprint than what one would normally get by loading the full model onto the CPU directly.
45
+
46
+ Next we need to load in the weights to our model so we can perform inference.
47
+
48
+ For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
49
+
50
+ To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
51
+ will attempt to fill all the space in your GPU(s) first, then offload the remaining weights onto the CPU, and finally, if there is not enough RAM, load the rest onto the disk (the absolute slowest option).
52
+
53
+ <Tip>
54
+
55
+ For more details on designing your own device map, see this section of the [concept guide](../concept_guides/big_model_inference#designing-a-device-map)
56
+
57
+ </Tip>
58
+
59
+ See an example below:
60
+
61
+ ```py
62
+ from accelerate import load_checkpoint_and_dispatch
63
+
64
+ model = load_checkpoint_and_dispatch(
65
+ model, checkpoint=checkpoint_file, device_map="auto"
66
+ )
67
+ ```
68
+
69
+ <Tip>
70
+
71
+ If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
72
+
73
+ </Tip>
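+
+ A hedged sketch of that option (the `"Block"` class name is illustrative and depends on your architecture):
+
+ ```py
+ model = load_checkpoint_and_dispatch(
+     model, checkpoint=checkpoint_file, device_map="auto", no_split_module_classes=["Block"]
+ )
+ ```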
74
+
75
+ <Tip>
76
+
77
+ Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
78
+
79
+ </Tip>
80
+
81
+ Now that the model is dispatched fully, you can perform inference as normal with the model:
82
+
83
+ ```py
84
+ input = torch.randn(2,3)
85
+ input = input.to("cuda")
86
+ output = model(input)
87
+ ```
88
+
89
+ What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
90
+
91
+ <Tip>
92
+
93
+ Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
94
+ and do not need `torchrun`, `accelerate launch`, etc.
95
+
96
+ </Tip>
97
+
98
+ For a visual representation of this, check out the animation below:
99
+
100
+ <Youtube id="MWCSGj9jEAo" />
101
+
102
+ ### Complete Example
103
+
104
+ Below is the full example showcasing what we performed above:
105
+
106
+ ```py
107
+ import torch
108
+ from accelerate import init_empty_weights, load_checkpoint_and_dispatch
109
+
110
+ with init_empty_weights():
111
+ model = MyModel(...)
112
+
113
+ model = load_checkpoint_and_dispatch(
114
+ model, checkpoint=checkpoint_file, device_map="auto"
115
+ )
116
+
117
+ input = torch.randn(2,3)
118
+ input = input.to("cuda")
119
+ output = model(input)
120
+ ```
121
+
122
+ ## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
123
+
124
+ Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
125
+
126
+ These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
127
+
128
+ As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
129
+
130
+ ```py
131
+ from transformers import AutoModelForSeq2SeqLM
132
+
133
+ model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
134
+ ```
135
+
136
+ After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
137
+ ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
138
+ specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
139
+
140
+ ```py
141
+ from transformers import AutoModelForSeq2SeqLM
142
+
143
+ model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
144
+ ```
145
+
146
+ To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
147
+
148
+ ## Where to go from here
149
+
150
+ For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)
docs/source/usage_guides/checkpoint.md ADDED
@@ -0,0 +1,96 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Checkpointing
17
+
18
+ When training a PyTorch model with 🤗 Accelerate, you may often want to save the current state of training and resume it later. Doing so requires
19
+ saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
20
+ - Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
21
+ - Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
22
+
23
+ To further customize where and how states are saved through [`~Accelerator.save_state`] the [`~utils.ProjectConfiguration`] class can be used. For example
24
+ if `automatic_checkpoint_naming` is enabled, each saved checkpoint will then be located at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
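+
+ A brief sketch of enabling that behavior (the paths are illustrative):
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import ProjectConfiguration
+
+ config = ProjectConfiguration(project_dir="my/save/path", automatic_checkpoint_naming=True)
+ accelerator = Accelerator(project_config=config)
+
+ # Saved under my/save/path/checkpoints/checkpoint_0, checkpoint_1, ...
+ accelerator.save_state()
+ ```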
25
+
26
+ Note that these states are expected to come from the same training script; they should not come from two separate scripts.
27
+
28
+ - By using [`~Accelerator.register_for_checkpointing`], you can register custom objects to be automatically stored or loaded from the two prior functions,
29
+ so long as the object has a `state_dict` **and** a `load_state_dict` functionality. This could include objects such as a learning rate scheduler.
30
+
31
+
32
+ Below is a brief example using checkpointing to save and reload a state during training:
33
+
34
+ ```python
35
+ from accelerate import Accelerator
36
+ import torch
37
+
38
+ accelerator = Accelerator(project_dir="my/save/path")
39
+
40
+ my_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer, step_size=1, gamma=0.99)
41
+ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
42
+
43
+ # Register the LR scheduler
44
+ accelerator.register_for_checkpointing(my_scheduler)
45
+
46
+ # Save the starting state
47
+ accelerator.save_state()
48
+
49
+ device = accelerator.device
50
+ my_model.to(device)
51
+
52
+ # Perform training
53
+ for epoch in range(num_epochs):
54
+ for batch in my_training_dataloader:
55
+ my_optimizer.zero_grad()
56
+ inputs, targets = batch
57
+ inputs = inputs.to(device)
58
+ targets = targets.to(device)
59
+ outputs = my_model(inputs)
60
+ loss = my_loss_function(outputs, targets)
61
+ accelerator.backward(loss)
62
+ my_optimizer.step()
63
+ my_scheduler.step()
64
+
65
+ # Restore the previous state
66
+ accelerator.load_state("my/save/path/checkpointing/checkpoint_0")
67
+ ```
68
+
69
+ ## Restoring the state of the DataLoader
70
+
71
+ After resuming from a checkpoint, it may also be desirable to resume from a particular point in the active `DataLoader` if
72
+ the state was saved during the middle of an epoch. You can use [`~Accelerator.skip_first_batches`] to do so.
73
+
74
+ ```python
75
+ from accelerate import Accelerator
76
+
77
+ accelerator = Accelerator(project_dir="my/save/path")
78
+
79
+ train_dataloader = accelerator.prepare(train_dataloader)
80
+ accelerator.load_state("my_state")
81
+
82
+ # Assume the checkpoint was saved 100 steps into the epoch
83
+ skipped_dataloader = accelerator.skip_first_batches(train_dataloader, 100)
84
+
85
+ # After the first iteration, go back to `train_dataloader`
86
+
87
+ # First epoch
88
+ for batch in skipped_dataloader:
89
+ # Do something
90
+ pass
91
+
92
+ # Second epoch
93
+ for batch in train_dataloader:
94
+ # Do something
95
+ pass
96
+ ```
docs/source/usage_guides/deepspeed.md ADDED
@@ -0,0 +1,722 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # DeepSpeed
17
+
18
+ [DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
19
+
20
+ 1. Optimizer state partitioning (ZeRO stage 1)
21
+ 2. Gradient partitioning (ZeRO stage 2)
22
+ 3. Parameter partitioning (ZeRO stage 3)
23
+ 4. Custom mixed precision training handling
24
+ 5. A range of fast CUDA-extension-based optimizers
25
+ 6. ZeRO-Offload to CPU and Disk/NVMe
26
+ 7. Hierarchical partitioning of model parameters (ZeRO++)
27
+
28
+ ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU
29
+ Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857).
30
+
31
+ DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
32
+
33
+ DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
34
+ won't be possible on a single GPU.
35
+
36
+ 🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
37
+
38
+ 1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
39
+ this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
40
+ The user may have to change a few lines of code depending on the config.
41
+ 2. Integration via `deepspeed_plugin`. This supports a subset of the DeepSpeed features and uses default options for the rest of the configuration.
42
+ The user need not change any code; this option is good for those who are fine with most of the default settings of DeepSpeed (see the sketch below).
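+
+ As a rough sketch of option 2 (the ZeRO stage and gradient accumulation values are illustrative), the plugin can also be built in code and handed to [`Accelerator`]:
+
+ ```python
+ from accelerate import Accelerator, DeepSpeedPlugin
+
+ deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
+ accelerator = Accelerator(mixed_precision="fp16", deepspeed_plugin=deepspeed_plugin)
+ ```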
43
+
44
+ ## What is integrated?
45
+
46
+ Training:
47
+
48
+ 1. 🤗 Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
49
+ Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
50
+ ![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)
51
+
52
+ (Source: [link](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/))
53
+
54
+ a. **Stage 1** : Shards optimizer states across data parallel workers/GPUs
55
+
56
+ b. **Stage 2** : Shards optimizer states + gradients across data parallel workers/GPUs
57
+
58
+ c. **Stage 3**: Shards optimizer states + gradients + model parameters across data parallel workers/GPUs
59
+
60
+ d. **Optimizer Offload**: Offloads the gradients + optimizer states to CPU/Disk building on top of ZeRO Stage 2
61
+
62
+ e. **Param Offload**: Offloads the model parameters to CPU/Disk building on top of ZeRO Stage 3
63
+
64
+ f. **Hierarchical Partitioning**: Enables efficient multi-node training with data-parallel training across nodes and ZeRO-3 sharding within a node, built on top of ZeRO Stage 3.
65
+
66
+ <u>Note</u>: With respect to Disk Offload, the disk should be an NVMe drive for decent speed, but it technically works on any disk.
67
+
68
+ Inference:
69
+
70
+ 1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but
71
+ it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see:
72
+ [deepspeed-zero-inference](#deepspeed-zero-inference).
73
+
74
+
75
+ ## How it works?
76
+
77
+ **Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
78
+ for more information.
79
+
80
+ We will first look at the easy-to-use integration via `accelerate config`,
81
+ followed by the more flexible and feature-rich `deepspeed config file` integration.
82
+
83
+ ### Accelerate DeepSpeed Plugin
84
+ On your machine(s) just run:
85
+
86
+ ```bash
87
+ accelerate config
88
+ ```
89
+
90
+ and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed to which you should answer no. Then answer the following questions to generate a basic DeepSpeed config.
91
+ This will generate a config file that will be used automatically to properly set the
92
+ default options when doing
93
+
94
+ ```bash
95
+ accelerate launch my_script.py --args_to_my_script
96
+ ```
97
+
98
+ For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with DeepSpeed Plugin:
99
+
100
+ **ZeRO Stage-2 DeepSpeed Plugin Example**
101
+ ```bash
102
+ compute_environment: LOCAL_MACHINE
103
+ deepspeed_config:
104
+ gradient_accumulation_steps: 1
105
+ gradient_clipping: 1.0
106
+ offload_optimizer_device: none
107
+ offload_param_device: none
108
+ zero3_init_flag: true
109
+ zero_stage: 2
110
+ distributed_type: DEEPSPEED
111
+ fsdp_config: {}
112
+ machine_rank: 0
113
+ main_process_ip: null
114
+ main_process_port: null
115
+ main_training_function: main
116
+ mixed_precision: fp16
117
+ num_machines: 1
118
+ num_processes: 2
119
+ use_cpu: false
120
+ ```
121
+
122
+ ```bash
123
+ accelerate launch examples/nlp_example.py --mixed_precision fp16
124
+ ```
125
+
126
+ **ZeRO Stage-3 with CPU Offload DeepSpeed Plugin Example**
127
+ ```bash
128
+ compute_environment: LOCAL_MACHINE
129
+ deepspeed_config:
130
+ gradient_accumulation_steps: 1
131
+ gradient_clipping: 1.0
132
+ offload_optimizer_device: cpu
133
+ offload_param_device: cpu
134
+ zero3_init_flag: true
135
+ zero3_save_16bit_model: true
136
+ zero_stage: 3
137
+ distributed_type: DEEPSPEED
138
+ fsdp_config: {}
139
+ machine_rank: 0
140
+ main_process_ip: null
141
+ main_process_port: null
142
+ main_training_function: main
143
+ mixed_precision: fp16
144
+ num_machines: 1
145
+ num_processes: 2
146
+ use_cpu: false
147
+ ```
148
+
149
+ ```bash
150
+ accelerate launch examples/nlp_example.py --mixed_precision fp16
151
+ ```
152
+
153
+ Currently, `Accelerate` supports the following config parameters through the CLI:
154
+
155
+ ```bash
156
+ `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
157
+ `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
158
+ `gradient_clipping`: Enable gradient clipping with value.
159
+ `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
160
+ `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
161
+ `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
162
+ `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
163
+ `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
164
+ ```
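+ 
+ If you prefer not to go through `accelerate config`, the same kind of setup can also be sketched programmatically with a `DeepSpeedPlugin`. The values below simply mirror the ZeRO Stage-2 plugin example above and are illustrative only:
+ 
+ ```python
+ from accelerate import Accelerator, DeepSpeedPlugin
+ 
+ # Mirrors the ZeRO Stage-2 plugin config shown above (illustrative values)
+ deepspeed_plugin = DeepSpeedPlugin(
+     zero_stage=2,
+     gradient_accumulation_steps=1,
+     gradient_clipping=1.0,
+     offload_optimizer_device="none",
+     offload_param_device="none",
+ )
+ accelerator = Accelerator(mixed_precision="fp16", deepspeed_plugin=deepspeed_plugin)
+ ```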
165
+ To be able to tweak more options, you will need to use a DeepSpeed config file.
166
+
167
+ ### DeepSpeed Config File
168
+ On your machine(s) just run:
169
+
170
+ ```bash
171
+ accelerate config
172
+ ```
173
+
174
+ and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed, to which you should answer yes
175
+ and provide the path to the DeepSpeed config file.
176
+ This will generate a config file that will be used automatically to properly set the
177
+ default options when doing
178
+
179
+ ```bash
180
+ accelerate launch my_script.py --args_to_my_script
181
+ ```
182
+
183
+ For instance, here is how you would run the NLP example `examples/by_feature/deepspeed_with_config_support.py` (from the root of the repo) with DeepSpeed Config File:
184
+
185
+ **ZeRO Stage-2 DeepSpeed Config File Example**
186
+ ```yaml
187
+ compute_environment: LOCAL_MACHINE
188
+ deepspeed_config:
189
+ deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
190
+ zero3_init_flag: true
191
+ distributed_type: DEEPSPEED
192
+ fsdp_config: {}
193
+ machine_rank: 0
194
+ main_process_ip: null
195
+ main_process_port: null
196
+ main_training_function: main
197
+ mixed_precision: fp16
198
+ num_machines: 1
199
+ num_processes: 2
200
+ use_cpu: false
201
+ ```
202
+
203
+ with the contents of `zero_stage2_config.json` being:
204
+ ```json
205
+ {
206
+ "fp16": {
207
+ "enabled": true,
208
+ "loss_scale": 0,
209
+ "loss_scale_window": 1000,
210
+ "initial_scale_power": 16,
211
+ "hysteresis": 2,
212
+ "min_loss_scale": 1
213
+ },
214
+ "optimizer": {
215
+ "type": "AdamW",
216
+ "params": {
217
+ "lr": "auto",
218
+ "weight_decay": "auto",
219
+ "torch_adam": true,
220
+ "adam_w_mode": true
221
+ }
222
+ },
223
+ "scheduler": {
224
+ "type": "WarmupDecayLR",
225
+ "params": {
226
+ "warmup_min_lr": "auto",
227
+ "warmup_max_lr": "auto",
228
+ "warmup_num_steps": "auto",
229
+ "total_num_steps": "auto"
230
+ }
231
+ },
232
+ "zero_optimization": {
233
+ "stage": 2,
234
+ "allgather_partitions": true,
235
+ "allgather_bucket_size": 2e8,
236
+ "overlap_comm": true,
237
+ "reduce_scatter": true,
238
+ "reduce_bucket_size": "auto",
239
+ "contiguous_gradients": true
240
+ },
241
+ "gradient_accumulation_steps": 1,
242
+ "gradient_clipping": "auto",
243
+ "steps_per_print": 2000,
244
+ "train_batch_size": "auto",
245
+ "train_micro_batch_size_per_gpu": "auto",
246
+ "wall_clock_breakdown": false
247
+ }
248
+ ```
249
+
250
+ ```bash
251
+ accelerate launch examples/by_feature/deepspeed_with_config_support.py \
252
+ --config_name "gpt2-large" \
253
+ --tokenizer_name "gpt2-large" \
254
+ --dataset_name "wikitext" \
255
+ --dataset_config_name "wikitext-2-raw-v1" \
256
+ --block_size 128 \
257
+ --output_dir "./clm/clm_deepspeed_stage2_accelerate" \
258
+ --learning_rate 5e-4 \
259
+ --per_device_train_batch_size 24 \
260
+ --per_device_eval_batch_size 24 \
261
+ --num_train_epochs 3 \
262
+ --with_tracking \
263
+ --report_to "wandb"
264
+ ```
265
+
266
+ **ZeRO Stage-3 with CPU offload DeepSpeed Config File Example**
267
+ ```yaml
268
+ compute_environment: LOCAL_MACHINE
269
+ deepspeed_config:
270
+ deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
271
+ zero3_init_flag: true
272
+ distributed_type: DEEPSPEED
273
+ fsdp_config: {}
274
+ machine_rank: 0
275
+ main_process_ip: null
276
+ main_process_port: null
277
+ main_training_function: main
278
+ mixed_precision: fp16
279
+ num_machines: 1
280
+ num_processes: 2
281
+ use_cpu: false
282
+ ```
283
+ with the contents of `zero_stage3_offload_config.json` being:
284
+ ```json
285
+ {
286
+ "fp16": {
287
+ "enabled": true,
288
+ "loss_scale": 0,
289
+ "loss_scale_window": 1000,
290
+ "initial_scale_power": 16,
291
+ "hysteresis": 2,
292
+ "min_loss_scale": 1
293
+ },
294
+ "optimizer": {
295
+ "type": "AdamW",
296
+ "params": {
297
+ "lr": "auto",
298
+ "weight_decay": "auto"
299
+ }
300
+ },
301
+ "scheduler": {
302
+ "type": "WarmupDecayLR",
303
+ "params": {
304
+ "warmup_min_lr": "auto",
305
+ "warmup_max_lr": "auto",
306
+ "warmup_num_steps": "auto",
307
+ "total_num_steps": "auto"
308
+ }
309
+ },
310
+ "zero_optimization": {
311
+ "stage": 3,
312
+ "offload_optimizer": {
313
+ "device": "cpu",
314
+ "pin_memory": true
315
+ },
316
+ "offload_param": {
317
+ "device": "cpu",
318
+ "pin_memory": true
319
+ },
320
+ "overlap_comm": true,
321
+ "contiguous_gradients": true,
322
+ "reduce_bucket_size": "auto",
323
+ "stage3_prefetch_bucket_size": "auto",
324
+ "stage3_param_persistence_threshold": "auto",
325
+ "sub_group_size": 1e9,
326
+ "stage3_max_live_parameters": 1e9,
327
+ "stage3_max_reuse_distance": 1e9,
328
+ "stage3_gather_16bit_weights_on_model_save": "auto"
329
+ },
330
+ "gradient_accumulation_steps": 1,
331
+ "gradient_clipping": "auto",
332
+ "steps_per_print": 2000,
333
+ "train_batch_size": "auto",
334
+ "train_micro_batch_size_per_gpu": "auto",
335
+ "wall_clock_breakdown": false
336
+ }
337
+ ```
338
+
339
+ ```bash
340
+ accelerate launch examples/by_feature/deepspeed_with_config_support.py \
341
+ --config_name "gpt2-large" \
342
+ --tokenizer_name "gpt2-large" \
343
+ --dataset_name "wikitext" \
344
+ --dataset_config_name "wikitext-2-raw-v1" \
345
+ --block_size 128 \
346
+ --output_dir "./clm/clm_deepspeed_stage3_offload_accelerate" \
347
+ --learning_rate 5e-4 \
348
+ --per_device_train_batch_size 32 \
349
+ --per_device_eval_batch_size 32 \
350
+ --num_train_epochs 3 \
351
+ --with_tracking \
352
+ --report_to "wandb"
353
+ ```
354
+
355
+ **ZeRO++ Config Example**
356
+ You can use the features of ZeRO++ by using the appropriate config parameters. Note that ZeRO++ is an extension of ZeRO Stage 3. Here is how the config file can be modified, from [DeepSpeed's ZeRO++ tutorial](https://www.deepspeed.ai/tutorials/zeropp/):
357
+
358
+ ```json
359
+ {
360
+ "zero_optimization": {
361
+ "stage": 3,
362
+ "reduce_bucket_size": "auto",
363
+
364
+ "zero_quantized_weights": true,
365
+ "zero_hpz_partition_size": 8,
366
+ "zero_quantized_gradients": true,
367
+
368
+ "contiguous_gradients": true,
369
+ "overlap_comm": true
370
+ }
371
+ }
372
+ ```
373
+
374
+ For hierarchical partitioning, the partition size `zero_hpz_partition_size` should ideally be set to the number of GPUs per node. (For example, the above config file assumes 8 GPUs per node.)
375
+
376
+ **Important code changes when using DeepSpeed Config File**
377
+
378
+ 1. DeepSpeed Optimizers and Schedulers. For more information on these,
379
+ see the [DeepSpeed Optimizers](https://deepspeed.readthedocs.io/en/latest/optimizers.html) and [DeepSpeed Schedulers](https://deepspeed.readthedocs.io/en/latest/schedulers.html) documentation.
380
+ We will look at the changes needed in the code when using these.
381
+
382
+ a. DS Optim + DS Scheduler: The case when both `optimizer` and `scheduler` keys are present in the DeepSpeed config file.
383
+ In this situation, those will be used and the user has to use `accelerate.utils.DummyOptim` and `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom optimizers and schedulers in their code.
384
+ Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
385
+ ```python
386
+ # Creates Dummy Optimizer if `optimizer` was specified in the config file, else creates an AdamW optimizer
387
+ optimizer_cls = (
388
+ torch.optim.AdamW
389
+ if accelerator.state.deepspeed_plugin is None
390
+ or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
391
+ else DummyOptim
392
+ )
393
+ optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
394
+
395
+ # Creates Dummy Scheduler if `scheduler` was specified in the config file, else creates an `args.lr_scheduler_type` scheduler
396
+ if (
397
+ accelerator.state.deepspeed_plugin is None
398
+ or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
399
+ ):
400
+ lr_scheduler = get_scheduler(
401
+ name=args.lr_scheduler_type,
402
+ optimizer=optimizer,
403
+ num_warmup_steps=args.num_warmup_steps,
404
+ num_training_steps=args.max_train_steps,
405
+ )
406
+ else:
407
+ lr_scheduler = DummyScheduler(
408
+ optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
409
+ )
410
+ ```
411
+ b. Custom Optim + Custom Scheduler: The case when both `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
412
+ In this situation, no code changes are needed from the user and this is the case when using integration via DeepSpeed Plugin.
413
+ In the above example we can see that the code remains unchanged if the `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
414
+
415
+ c. Custom Optim + DS Scheduler: The case when only `scheduler` key is present in the DeepSpeed config file.
416
+ In this situation, the user has to use `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom scheduler in their code.
417
+
418
+ d. DS Optim + Custom Scheduler: The case when only `optimizer` key is present in the DeepSpeed config file.
419
+ This will result in an error because you can only use DS Scheduler when using DS Optim.
420
+
421
+ 2. Notice the `auto` values in the above example DeepSpeed config files. These are automatically handled by the `prepare` method
422
+ based on the model, dataloaders, dummy optimizer and dummy schedulers provided to the `prepare` method.
423
+ Only the `auto` fields specified in the above examples are handled by the `prepare` method and the rest have to be explicitly specified by the user.
424
+
425
+ The `auto` values are calculated as:
426
+
427
+ - `reduce_bucket_size`: `hidden_size*hidden_size`
428
+ - `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
429
+ - `stage3_param_persistence_threshold`: `10 * hidden_size`
430
+
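+ As a purely illustrative example, for a hypothetical model config with `hidden_size=1024`, these formulas resolve as follows:
+ 
+ ```python
+ hidden_size = 1024  # hypothetical value read from the model config
+ 
+ reduce_bucket_size = hidden_size * hidden_size                   # 1_048_576
+ stage3_prefetch_bucket_size = 0.9 * hidden_size * hidden_size    # 943_718.4
+ stage3_param_persistence_threshold = 10 * hidden_size            # 10_240
+ ```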
431
+
432
+ **Things to note when using DeepSpeed Config File**
433
+
434
+ Below is a sample script using `deepspeed_config_file` in different scenarios.
435
+
436
+ Code `test.py`:
437
+
438
+ ```python
439
+ from accelerate import Accelerator
440
+ from accelerate.state import AcceleratorState
441
+
442
+
443
+ def main():
444
+ accelerator = Accelerator()
445
+ accelerator.print(f"{AcceleratorState()}")
446
+
447
+
448
+ if __name__ == "__main__":
449
+ main()
450
+ ```
451
+
452
+ **Scenario 1**: A manually tampered accelerate config file that has `deepspeed_config_file` along with other entries.
453
+
454
+ 1. Content of the `accelerate` config:
455
+
456
+ ```yaml
457
+ command_file: null
458
+ commands: null
459
+ compute_environment: LOCAL_MACHINE
460
+ deepspeed_config:
461
+ gradient_accumulation_steps: 1
462
+ gradient_clipping: 1.0
463
+ offload_optimizer_device: 'cpu'
464
+ offload_param_device: 'cpu'
465
+ zero3_init_flag: true
466
+ zero3_save_16bit_model: true
467
+ zero_stage: 3
468
+ deepspeed_config_file: 'ds_config.json'
469
+ distributed_type: DEEPSPEED
470
+ downcast_bf16: 'no'
471
+ dynamo_backend: 'NO'
472
+ fsdp_config: {}
473
+ gpu_ids: null
474
+ machine_rank: 0
475
+ main_process_ip: null
476
+ main_process_port: null
477
+ main_training_function: main
478
+ megatron_lm_config: {}
479
+ num_machines: 1
480
+ num_processes: 2
481
+ rdzv_backend: static
482
+ same_network: true
483
+ tpu_name: null
484
+ tpu_zone: null
485
+ use_cpu: false
486
+ ```
487
+
488
+ 2. `ds_config.json`:
489
+
490
+ ```json
491
+ {
492
+ "bf16": {
493
+ "enabled": true
494
+ },
495
+ "zero_optimization": {
496
+ "stage": 3,
497
+ "stage3_gather_16bit_weights_on_model_save": false,
498
+ "offload_optimizer": {
499
+ "device": "none"
500
+ },
501
+ "offload_param": {
502
+ "device": "none"
503
+ }
504
+ },
505
+ "gradient_clipping": 1.0,
506
+ "train_batch_size": "auto",
507
+ "train_micro_batch_size_per_gpu": "auto",
508
+ "gradient_accumulation_steps": 10,
509
+ "steps_per_print": 2000000
510
+ }
511
+ ```
512
+
513
+ 3. Output of `accelerate launch test.py`:
514
+
515
+ ```bash
516
+ ValueError: When using `deepspeed_config_file`, the following accelerate config variables will be ignored:
517
+ ['gradient_accumulation_steps', 'gradient_clipping', 'zero_stage', 'offload_optimizer_device', 'offload_param_device',
518
+ 'zero3_save_16bit_model', 'mixed_precision'].
519
+ Please specify them appropriately in the DeepSpeed config file.
520
+ If you are using an accelerate config file, remove others config variables mentioned in the above specified list.
521
+ The easiest method is to create a new config following the questionnaire via `accelerate config`.
522
+ It will only ask for the necessary config variables when using `deepspeed_config_file`.
523
+ ```
524
+
525
+ **Scenario 2**: Use the solution suggested by the error to create a new accelerate config and check that no ambiguity error is now thrown.
526
+
527
+ 1. Run `accelerate config`:
528
+
529
+ ```bash
530
+ $ accelerate config
531
+ -------------------------------------------------------------------------------------------------------------------------------
532
+ In which compute environment are you running?
533
+ This machine
534
+ -------------------------------------------------------------------------------------------------------------------------------
535
+ Which type of machine are you using?
536
+ multi-GPU
537
+ How many different machines will you use (use more than 1 for multi-node training)? [1]:
538
+ Do you wish to optimize your script with torch dynamo?[yes/NO]:
539
+ Do you want to use DeepSpeed? [yes/NO]: yes
540
+ Do you want to specify a json file to a DeepSpeed config? [yes/NO]: yes
541
+ Please enter the path to the json DeepSpeed config file: ds_config.json
542
+ Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: yes
543
+ How many GPU(s) should be used for distributed training? [1]:4
544
+ accelerate configuration saved at ds_config_sample.yaml
545
+ ```
546
+
547
+ 2. Content of the `accelerate` config:
548
+
549
+ ```yaml
550
+ compute_environment: LOCAL_MACHINE
551
+ deepspeed_config:
552
+ deepspeed_config_file: ds_config.json
553
+ zero3_init_flag: true
554
+ distributed_type: DEEPSPEED
555
+ downcast_bf16: 'no'
556
+ dynamo_backend: 'NO'
557
+ fsdp_config: {}
558
+ machine_rank: 0
559
+ main_training_function: main
560
+ megatron_lm_config: {}
561
+ num_machines: 1
562
+ num_processes: 4
563
+ rdzv_backend: static
564
+ same_network: true
565
+ use_cpu: false
566
+ ```
567
+
568
+ 3. Output of `accelerate launch test.py`:
569
+
570
+ ```bash
571
+ Distributed environment: DEEPSPEED Backend: nccl
572
+ Num processes: 4
573
+ Process index: 0
574
+ Local process index: 0
575
+ Device: cuda:0
576
+ Mixed precision type: bf16
577
+ ds_config: {'bf16': {'enabled': True}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': False, 'offload_optimizer': {'device': 'none'}, 'offload_param': {'device': 'none'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 10, 'steps_per_print': inf, 'fp16': {'enabled': False}}
578
+ ```
579
+
580
+ **Scenario 3**: Set the `accelerate launch` command arguments related to DeepSpeed to `"auto"` in the DeepSpeed configuration file and check that things work as expected.
581
+
582
+ 1. New `ds_config.json` with `"auto"` for the `accelerate launch` DeepSpeed command arguments:
583
+
584
+ ```json
585
+ {
586
+ "bf16": {
587
+ "enabled": "auto"
588
+ },
589
+ "zero_optimization": {
590
+ "stage": "auto",
591
+ "stage3_gather_16bit_weights_on_model_save": "auto",
592
+ "offload_optimizer": {
593
+ "device": "auto"
594
+ },
595
+ "offload_param": {
596
+ "device": "auto"
597
+ }
598
+ },
599
+ "gradient_clipping": "auto",
600
+ "train_batch_size": "auto",
601
+ "train_micro_batch_size_per_gpu": "auto",
602
+ "gradient_accumulation_steps": "auto",
603
+ "steps_per_print": 2000000
604
+ }
605
+ ```
606
+
607
+ 2. Output of `accelerate launch --mixed_precision="fp16" --zero_stage=3 --gradient_accumulation_steps=5 --gradient_clipping=1.0 --offload_param_device="cpu" --offload_optimizer_device="nvme" --zero3_save_16bit_model="true" test.py`:
608
+
609
+ ```bash
610
+ Distributed environment: DEEPSPEED Backend: nccl
611
+ Num processes: 4
612
+ Process index: 0
613
+ Local process index: 0
614
+ Device: cuda:0
615
+ Mixed precision type: fp16
616
+ ds_config: {'bf16': {'enabled': False}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': True, 'offload_optimizer': {'device': 'nvme'}, 'offload_param': {'device': 'cpu'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 5, 'steps_per_print': inf, 'fp16': {'enabled': True, 'auto_cast': True}}
617
+ ```
618
+
619
+ **Note**:
620
+ 1. Remaining `"auto"` values are handled in the `accelerator.prepare()` call as explained in point 2 of
621
+ `Important code changes when using DeepSpeed Config File`.
622
+ 2. The value passed while creating the `Accelerator` object via `Accelerator(gradient_accumulation_steps=k)` will only be used when `gradient_accumulation_steps` is set to `auto`. When using the DeepSpeed Plugin, its value will be used and will overwrite the value passed while creating the `Accelerator` object.
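+ 
+ As a small illustrative sketch of point 2: when the DeepSpeed config file sets `"gradient_accumulation_steps": "auto"`, the value passed below is the one that takes effect.
+ 
+ ```python
+ from accelerate import Accelerator
+ 
+ # Honored only because the DeepSpeed config file left gradient_accumulation_steps as "auto";
+ # with the DeepSpeed Plugin, the plugin's own value would take precedence instead.
+ accelerator = Accelerator(gradient_accumulation_steps=4)
+ ```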
623
+
624
+ ## Saving and loading
625
+
626
+ 1. Saving and loading of models is unchanged for ZeRO Stage-1 and Stage-2.
627
+
628
+ 2. Under ZeRO Stage-3, `state_dict` contains just placeholders since the model weights are partitioned across multiple GPUs.
629
+ ZeRO Stage-3 has 2 options:
630
+
631
+ a. Saving the entire 16-bit model weights to directly load later on using `model.load_state_dict(torch.load("pytorch_model.bin"))`.
632
+ For this, either set `zero_optimization.stage3_gather_16bit_weights_on_model_save` to True in DeepSpeed Config file or set
633
+ `zero3_save_16bit_model` to True in DeepSpeed Plugin.
634
+ **Note that this option requires consolidation of the weights on one GPU; it can be slow and memory demanding, so only use this feature when needed.**
635
+ Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
636
+ ```python
637
+ unwrapped_model = accelerator.unwrap_model(model)
638
+
639
+ # New Code #
640
+ # Saves the whole/unpartitioned fp16 model when in ZeRO Stage-3 to the output directory if
641
+ # `stage3_gather_16bit_weights_on_model_save` is True in DeepSpeed Config file or
642
+ # `zero3_save_16bit_model` is True in DeepSpeed Plugin.
643
+ # For Zero Stages 1 and 2, models are saved as usual in the output directory.
644
+ # The model name saved is `pytorch_model.bin`
645
+ unwrapped_model.save_pretrained(
646
+ args.output_dir,
647
+ is_main_process=accelerator.is_main_process,
648
+ save_function=accelerator.save,
649
+ state_dict=accelerator.get_state_dict(model),
650
+ )
651
+ ```
652
+
653
+ b. To get 32-bit weights, first save the model using `model.save_checkpoint()`.
654
+ Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
655
+ ```python
656
+ success = model.save_checkpoint(PATH, ckpt_id, checkpoint_state_dict)
657
+ status_msg = "checkpointing: PATH={}, ckpt_id={}".format(PATH, ckpt_id)
658
+ if success:
659
+ logging.info(f"Success {status_msg}")
660
+ else:
661
+ logging.warning(f"Failure {status_msg}")
662
+ ```
663
+ This will create ZeRO model and optimizer partitions along with the `zero_to_fp32.py` script in the checkpoint directory.
664
+ You can use this script to do offline consolidation.
665
+ It requires no configuration files or GPUs. Here is an example of its usage:
666
+ ```bash
667
+ $ cd /path/to/checkpoint_dir
668
+ $ ./zero_to_fp32.py . pytorch_model.bin
669
+ Processing zero checkpoint at global_step1
670
+ Detected checkpoint of type zero stage 3, world_size: 2
671
+ Saving fp32 state dict to pytorch_model.bin (total_numel=60506624)
672
+ ```
673
+ To get the 32-bit model for saving/inference, you can perform:
674
+ ```python
675
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
676
+
677
+ unwrapped_model = accelerator.unwrap_model(model)
678
+ fp32_model = load_state_dict_from_zero_checkpoint(unwrapped_model, checkpoint_dir)
679
+ ```
680
+ If you are only interested in the `state_dict`, you can do the following:
681
+ ```python
682
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
683
+
684
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)
685
+ ```
686
+ Note that all these functions require roughly 2x the size of the final checkpoint in general (CPU) RAM.
687
+
688
+ ## ZeRO Inference
689
+ DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity.
690
+ It uses the same ZeRO protocol as training, but it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant.
691
+ With accelerate integration, you just need to prepare the model and dataloader as shown below:
692
+
693
+ ```python
694
+ model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
695
+ ```
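+ 
+ For illustration only, a minimal evaluation loop on top of that could look like the sketch below; the `accelerator`, `model` and `eval_dataloader` objects, and the fact that the model returns `logits`, are assumptions taken from a typical 🤗 Transformers setup:
+ 
+ ```python
+ import torch
+ 
+ model.eval()
+ for batch in eval_dataloader:
+     with torch.no_grad():
+         outputs = model(**batch)
+     # Gather predictions from all processes for metric computation
+     predictions = accelerator.gather_for_metrics(outputs.logits.argmax(dim=-1))
+ ```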
696
+
697
+ ## A few caveats to be aware of
698
+
699
+ 1. Current integration doesn’t support Pipeline Parallelism of DeepSpeed.
700
+ 2. Current integration doesn’t support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
701
+ 3. Current integration doesn’t support multiple models.
702
+
703
+ ## DeepSpeed Resources
704
+
705
+ The documentation for the internals related to DeepSpeed can be found [here](../package_reference/deepspeed).
706
+
707
+ - [Project's github](https://github.com/microsoft/deepspeed)
708
+ - [Usage docs](https://www.deepspeed.ai/getting-started/)
709
+ - [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
710
+ - [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
711
+
712
+ Papers:
713
+
714
+ - [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
715
+ - [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
716
+ - [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)
717
+ - [ZeRO++: Extremely Efficient Collective Communication for Giant Model Training](https://arxiv.org/abs/2306.10209)
718
+
719
+
720
+ Finally, please remember that 🤗 `Accelerate` only integrates DeepSpeed; therefore, if you
721
+ have any problems or questions regarding DeepSpeed usage, please file an issue on the [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
722
+
docs/source/usage_guides/distributed_inference.md ADDED
@@ -0,0 +1,136 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Distributed Inference with 🤗 Accelerate
17
+
18
+ Distributed inference is a common use case, especially with natural language processing (NLP) models. Users often want to
19
+ send a number of different prompts, each to a different GPU, and then get the results back. This also has other cases
20
+ outside of just NLP, however for this tutorial we will focus on just this idea of each GPU receiving a different prompt,
21
+ and then returning the results.
22
+
23
+ ## The Problem
24
+
25
+ Normally when doing this, users send the model to a specific device after loading it from the CPU, and then move each prompt to a different device.
26
+
27
+ A basic pipeline using the `diffusers` library might look something like so:
28
+
29
+ ```python
30
+ import torch
31
+ import torch.distributed as dist
32
+ from diffusers import DiffusionPipeline
33
+
34
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
35
+ ```
36
+ Followed then by performing inference based on the specific prompt:
37
+
38
+ ```python
39
+ def run_inference(rank, world_size):
40
+ dist.init_process_group("nccl", rank=rank, world_size=world_size)
41
+ pipe.to(rank)
42
+
43
+ if torch.distributed.get_rank() == 0:
44
+ prompt = "a dog"
45
+ elif torch.distributed.get_rank() == 1:
46
+ prompt = "a cat"
47
+
48
+ result = pipe(prompt).images[0]
49
+ result.save(f"result_{rank}.png")
50
+ ```
51
+ One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.
52
+
53
+ A user might then also think that with 🤗 Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
54
+ a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))
55
+
56
+ Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.
57
+
58
+ ## The Solution
59
+
60
+ With 🤗 Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
61
+ This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential
62
+ to be padded) for you to use right away.
63
+
64
+ Let's rewrite the above example using this context manager:
65
+
66
+ ```python
67
+ import torch
+ from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
68
+ from diffusers import DiffusionPipeline
69
+
70
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
71
+ distributed_state = PartialState()
72
+ pipe.to(distributed_state.device)
73
+
74
+ # Assume two processes
75
+ with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
76
+ result = pipe(prompt).images[0]
77
+ result.save(f"result_{distributed_state.process_index}.png")
78
+ ```
79
+
80
+ And then to launch the code, we can use 🤗 Accelerate:
81
+
82
+ If you have generated a config file to be used via `accelerate config`:
83
+
84
+ ```bash
85
+ accelerate launch distributed_inference.py
86
+ ```
87
+
88
+ If you have a specific config file you want to use:
89
+
90
+ ```bash
91
+ accelerate launch --config_file my_config.json distributed_inference.py
92
+ ```
93
+
94
+ Or if you don't want to make any config files and want to launch on two GPUs:
95
+
96
+ > Note: You will get some warnings about values being guessed based on your system. To remove these you can do `accelerate config default` or go through `accelerate config` to create a config file.
97
+
98
+ ```bash
99
+ accelerate launch --num_processes 2 distributed_inference.py
100
+ ```
101
+
102
+ We've now reduced the boilerplate code needed to split this data to a few lines of code quite easily.
103
+
104
+ But what if we have an odd distribution of prompts to GPUs? For example, what if we have 3 prompts, but only 2 GPUs?
105
+
106
+ Under the context manager, the first GPU would receive the first two prompts and the second GPU the third, ensuring that
107
+ all prompts are split and no overhead is needed.
108
+
109
+ *However*, what if we then wanted to do something with the results of *all the GPUs*? (Say gather them all and perform some kind of post processing)
110
+ You can pass in `apply_padding=True` to ensure that the lists of prompts are padded to the same length, with extra data being taken
111
+ from the last sample. This way all GPUs will have the same number of prompts, and you can then gather the results.
112
+
113
+ <Tip>
114
+
115
+ This is only needed when trying to perform an action such as gathering the results, where the data on each device
116
+ needs to be the same length. Basic inference does not require this.
117
+
118
+ </Tip>
119
+
120
+ For instance:
121
+
122
+ ```python
123
+ import torch
+ from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
124
+ from diffusers import DiffusionPipeline
125
+
126
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
127
+ distributed_state = PartialState()
128
+ pipe.to(distributed_state.device)
129
+
130
+ # Assume two processes
131
+ with distributed_state.split_between_processes(["a dog", "a cat", "a chicken"], apply_padding=True) as prompt:
132
+ result = pipe(prompt).images
133
+ ```
134
+
135
+ On the first GPU, the prompts will be `["a dog", "a cat"]`, and on the second GPU it will be `["a chicken", "a chicken"]`.
136
+ Make sure to drop the final sample, as it will be a duplicate of the previous one.
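+ 
+ If you then want to collect all results in one place, one option is the `gather_object` utility from `accelerate.utils`; the sketch below assumes the two-process padded example above and is illustrative only:
+ 
+ ```python
+ import torch
+ from accelerate import PartialState
+ from accelerate.utils import gather_object
+ from diffusers import DiffusionPipeline
+ 
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
+ distributed_state = PartialState()
+ pipe.to(distributed_state.device)
+ 
+ prompts = ["a dog", "a cat", "a chicken"]
+ with distributed_state.split_between_processes(prompts, apply_padding=True) as prompt:
+     images = pipe(prompt).images
+ 
+ # Concatenate the per-process lists (in process order) on every process,
+ # then drop the padded duplicate so only the three real results remain.
+ all_images = gather_object(images)[: len(prompts)]
+ ```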
docs/source/usage_guides/explore.md ADDED
@@ -0,0 +1,51 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Learning how to incorporate 🤗 Accelerate features quickly!
17
+
18
+ Please use the interactive tool below to help you get started with learning about a particular
19
+ feature of 🤗 Accelerate and how to utilize it! It will provide you with a code diff, an explanation
20
+ of what is going on, as well as some useful links to explore more within
21
+ the documentation!
22
+
23
+ Most code examples start from the following python code before integrating 🤗 Accelerate in some way:
24
+
25
+ ```python
26
+ for batch in dataloader:
27
+ optimizer.zero_grad()
28
+ inputs, targets = batch
29
+ inputs = inputs.to(device)
30
+ targets = targets.to(device)
31
+ outputs = model(inputs)
32
+ loss = loss_function(outputs, targets)
33
+ loss.backward()
34
+ optimizer.step()
35
+ scheduler.step()
36
+ ```
37
+
38
+ <div class="block dark:hidden">
39
+ <iframe
40
+ src="https://hf-accelerate-accelerate-examples.hf.space?__theme=light"
41
+ width="850"
42
+ height="1600"
43
+ ></iframe>
44
+ </div>
45
+ <div class="hidden dark:block">
46
+ <iframe
47
+ src="https://hf-accelerate-accelerate-examples.hf.space?__theme=dark"
48
+ width="850"
49
+ height="1600"
50
+ ></iframe>
51
+ </div>
docs/source/usage_guides/fsdp.md ADDED
@@ -0,0 +1,170 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Fully Sharded Data Parallel
17
+
18
+ To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
19
+ This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
20
+ To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
21
+ We have integrated the latest PyTorch Fully Sharded Data Parallel (FSDP) training feature.
22
+ All you need to do is enable it through the config.
23
+
24
+ ## How it works out of the box
25
+
26
+ On your machine(s) just run:
27
+
28
+ ```bash
29
+ accelerate config
30
+ ```
31
+
32
+ and answer the questions asked. This will generate a config file that will be used automatically to properly set the
33
+ default options when doing
34
+
35
+ ```bash
36
+ accelerate launch my_script.py --args_to_my_script
37
+ ```
38
+
39
+ For instance, here is how you would run `examples/nlp_example.py` (from the root of the repo) with FSDP enabled:
40
+
41
+ ```yaml
42
+ compute_environment: LOCAL_MACHINE
43
+ debug: false
44
+ distributed_type: FSDP
45
+ downcast_bf16: 'no'
46
+ fsdp_config:
47
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
48
+ fsdp_backward_prefetch_policy: BACKWARD_PRE
49
+ fsdp_forward_prefetch: false
50
+ fsdp_cpu_ram_efficient_loading: true
51
+ fsdp_offload_params: false
52
+ fsdp_sharding_strategy: FULL_SHARD
53
+ fsdp_state_dict_type: SHARDED_STATE_DICT
54
+ fsdp_sync_module_states: true
55
+ fsdp_transformer_layer_cls_to_wrap: BertLayer
56
+ fsdp_use_orig_params: true
57
+ machine_rank: 0
58
+ main_training_function: main
59
+ mixed_precision: bf16
60
+ num_machines: 1
61
+ num_processes: 2
62
+ rdzv_backend: static
63
+ same_network: true
64
+ tpu_env: []
65
+ tpu_use_cluster: false
66
+ tpu_use_sudo: false
67
+ use_cpu: false
68
+ ```
69
+
70
+ ```bash
71
+ accelerate launch examples/nlp_example.py
72
+ ```
73
+
74
+ Currently, `Accelerate` supports the following config through the CLI:
75
+
76
+ `fsdp_sharding_strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD (DDP), [4] HYBRID_SHARD (shards optimizer states, gradients and parameters within each node while each node has full copy), [5] HYBRID_SHARD_ZERO2 (shards optimizer states and gradients within each node while each node has full copy)
77
+
78
+ `fsdp_offload_params`: Decides whether to offload parameters and gradients to CPU
79
+
80
+ `fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
81
+
82
+ `fsdp_transformer_layer_cls_to_wrap`: Only applicable for 🤗 Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer-based models. You can use `model._no_split_modules` for 🤗 Transformers models by answering `yes` to the question "Do you want to use the model's `_no_split_modules` to wrap"; it will then try to use `model._no_split_modules` when possible.
83
+
84
+ `fsdp_min_num_params`: minimum number of parameters when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`.
85
+
86
+ `fsdp_backward_prefetch_policy`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
87
+
88
+ `fsdp_forward_prefetch`: If True, then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. Should only be used for static-graph models since the prefetching follows the first iteration's execution order, i.e., if the sub-modules' order changes dynamically during the model's execution, do not enable this feature.
89
+
90
+ `fsdp_state_dict_type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
91
+
92
+ `fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP.
93
+
94
+ `fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via the `from_pretrained` method. When this setting is True, `fsdp_sync_module_states` must also be True; otherwise all the processes except the main process would have random weights, leading to unexpected behaviour during training.
95
+
96
+ `fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
97
+
98
+
99
+ For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`.
100
+ When creating the `FullyShardedDataParallelPlugin` object, pass it the parameters that weren't part of the accelerate config or that you want to override.
101
+ The FSDP parameters will be picked based on the accelerate config file or launch command arguments, and the parameters that you pass directly through the `FullyShardedDataParallelPlugin` object will set/override them.
102
+
103
+ Below is an example:
104
+
105
+ ```py
106
+ from accelerate import Accelerator, FullyShardedDataParallelPlugin
107
+ from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
108
+
109
+ fsdp_plugin = FullyShardedDataParallelPlugin(
110
+ state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
111
+ optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
112
+ )
113
+
114
+ accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
115
+ ```
116
+
117
+ ## Saving and loading
118
+
119
+ The new recommended way of checkpointing when using FSDP models is to use `SHARDED_STATE_DICT` as `StateDictType` when setting up the accelerate config.
120
+ Below is the code snippet to save using `save_state` utility of accelerate.
121
+
122
+ ```py
123
+ accelerator.save_state("ckpt")
124
+ ```
125
+
126
+ Inspect the checkpoint folder to see the model and optimizer as shards per process:
127
+ ```
128
+ ls ckpt
129
+ # optimizer_0 pytorch_model_0 random_states_0.pkl random_states_1.pkl scheduler.bin
130
+
131
+ cd ckpt
132
+
133
+ ls optimizer_0
134
+ # __0_0.distcp __1_0.distcp
135
+
136
+ ls pytorch_model_0
137
+ # __0_0.distcp __1_0.distcp
138
+ ```
139
+
140
+ To load them back for resuming the training, use the `load_state` utility of accelerate:
141
+
142
+ ```py
143
+ accelerator.load_state("ckpt")
144
+ ```
145
+
146
+ When using transformers `save_pretrained`, pass `state_dict=accelerator.get_state_dict(model)` to save the model state dict.
147
+ Below is an example:
148
+
149
+ ```diff
150
+ unwrapped_model.save_pretrained(
151
+ args.output_dir,
152
+ is_main_process=accelerator.is_main_process,
153
+ save_function=accelerator.save,
154
+ + state_dict=accelerator.get_state_dict(model),
155
+ )
156
+ ```
157
+
158
+ ### State Dict
159
+
160
+ `accelerator.get_state_dict` will call the underlying `model.state_dict` implementation using the `FullStateDictConfig(offload_to_cpu=True, rank0_only=True)` context manager to get the state dict only for rank 0, and it will be offloaded to the CPU.
161
+
162
+ You can then pass this state dict into the `save_pretrained` method. There are several modes for `StateDictType` and `FullStateDictConfig` that you can use to control the behavior of `state_dict`. For more information, see the [PyTorch documentation](https://pytorch.org/docs/stable/fsdp.html).
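+ 
+ As a short sketch (reusing the `accelerator` and `model` objects from your own training script, and with a filename chosen only for illustration), grabbing and saving the full state dict could look like this:
+ 
+ ```py
+ # Gathers the full state dict on rank 0 (offloaded to CPU) under the hood
+ state_dict = accelerator.get_state_dict(model)
+ 
+ if accelerator.is_main_process:
+     # Persist it with accelerate's save utility
+     accelerator.save(state_dict, "model_full_state_dict.bin")
+ ```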
163
+
164
+ ## A few caveats to be aware of
165
+
166
+ - In case of multiple models, pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour (see the sketch after this list).
167
+ - This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
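+ 
+ Below is a minimal sketch of the multi-model ordering point above; the two linear models and their optimizers are toy stand-ins used purely for illustration:
+ 
+ ```py
+ import torch
+ from accelerate import Accelerator
+ 
+ accelerator = Accelerator()
+ 
+ model_a = torch.nn.Linear(8, 8)
+ model_b = torch.nn.Linear(8, 8)
+ optimizer_a = torch.optim.AdamW(model_a.parameters())
+ optimizer_b = torch.optim.AdamW(model_b.parameters())
+ 
+ # Keep models and their optimizers in the same relative order so that
+ # accelerator.save_state()/load_state() can pair them back up correctly.
+ model_a, model_b, optimizer_a, optimizer_b = accelerator.prepare(
+     model_a, model_b, optimizer_a, optimizer_b
+ )
+ 
+ accelerator.save_state("ckpt")
+ ```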
168
+
169
+ For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
170
+ For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
docs/source/usage_guides/gradient_accumulation.md ADDED
@@ -0,0 +1,232 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Performing gradient accumulation with 🤗 Accelerate
17
+
18
+ Gradient accumulation is a technique where you can train on bigger batch sizes than
19
+ your machine would normally be able to fit into memory. This is done by accumulating gradients over
20
+ several batches, and only stepping the optimizer after a certain number of batches have been performed.
21
+
22
+ While technically standard gradient accumulation code would work fine in a distributed setup, it is not the most efficient
23
+ method for doing so and you may experience considerable slowdowns!
24
+
25
+ In this tutorial you will see how to quickly setup gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
26
+ which can total to adding just one new line of code!
27
+
28
+ This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
29
+
30
+ ```python
31
+ device = "cuda"
32
+ model.to(device)
33
+
34
+ gradient_accumulation_steps = 2
35
+
36
+ for index, batch in enumerate(training_dataloader):
37
+ inputs, targets = batch
38
+ inputs = inputs.to(device)
39
+ targets = targets.to(device)
40
+ outputs = model(inputs)
41
+ loss = loss_function(outputs, targets)
42
+ loss = loss / gradient_accumulation_steps
43
+ loss.backward()
44
+ if (index + 1) % gradient_accumulation_steps == 0:
45
+ optimizer.step()
46
+ scheduler.step()
47
+ optimizer.zero_grad()
48
+ ```
49
+
50
+ ## Converting it to 🤗 Accelerate
51
+
52
+ First the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
53
+
54
+ ```diff
55
+ + from accelerate import Accelerator
56
+ + accelerator = Accelerator()
57
+
58
+ + model, optimizer, training_dataloader, scheduler = accelerator.prepare(
59
+ + model, optimizer, training_dataloader, scheduler
60
+ + )
61
+
62
+ for index, batch in enumerate(training_dataloader):
63
+ inputs, targets = batch
64
+ - inputs = inputs.to(device)
65
+ - targets = targets.to(device)
66
+ outputs = model(inputs)
67
+ loss = loss_function(outputs, targets)
68
+ loss = loss / gradient_accumulation_steps
69
+ + accelerator.backward(loss)
70
+ if (index+1) % gradient_accumulation_steps == 0:
71
+ optimizer.step()
72
+ scheduler.step()
73
+ optimizer.zero_grad()
74
+ ```
75
+
76
+ <Tip warning={true}>
77
+
78
+ In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the [Concepts tutorial](../concept_guides/gradient_synchronization)!
79
+
80
+ </Tip>
81
+
82
+ ## Letting 🤗 Accelerate handle gradient accumulation
83
+
84
+ All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
85
+ of steps to perform before each call to `step()` and how to automatically adjust the loss during the call to [`~Accelerator.backward`]:
86
+
87
+ ```diff
88
+ from accelerate import Accelerator
89
+ - accelerator = Accelerator()
90
+ + accelerator = Accelerator(gradient_accumulation_steps=2)
91
+ ```
92
+
93
+ Alternatively, you can pass in a `gradient_accumulation_plugin` parameter to the [`Accelerator`] object's `__init__`, which will allow you to further customize the gradient accumulation behavior.
94
+ Read more about that in the [GradientAccumulationPlugin](../package_reference/accelerator#accelerate.utils.GradientAccumulationPlugin) docs.
95
+
96
+ From here you can use the [`~Accelerator.accumulate`] context manager from inside your training loop to automatically perform the gradient accumulation for you!
97
+ You just wrap it around the entire training part of our code:
98
+
99
+ ```diff
100
+ - for index, batch in enumerate(training_dataloader):
101
+ + for batch in training_dataloader:
102
+ + with accelerator.accumulate(model):
103
+ inputs, targets = batch
104
+ outputs = model(inputs)
105
+ ```
106
+
107
+ You can remove all the special checks for the step number and the loss adjustment:
108
+
109
+ ```diff
110
+ - loss = loss / gradient_accumulation_steps
111
+ accelerator.backward(loss)
112
+ - if (index+1) % gradient_accumulation_steps == 0:
113
+ optimizer.step()
114
+ scheduler.step()
115
+ optimizer.zero_grad()
116
+ ```
117
+
118
+ As you can see the [`Accelerator`] is able to keep track of the batch number you are on and it will automatically know whether to step through the prepared optimizer and how to adjust the loss.
119
+
120
+ <Tip>
121
+
122
+ Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are
123
+ training on. 🤗 Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
124
+
125
+ </Tip>
126
+
127
+ <Tip warning={true}>
128
+
129
+ The [`state.GradientState`] is sync'd with the active dataloader being iterated upon. As such it assumes naively that when we have reached the end of the dataloader everything will sync and a step will be performed. To disable this, set `sync_with_dataloader` to be `False` in the [`GradientAccumulationPlugin`]:
130
+
131
+ ```{python}
132
+ from accelerate import Accelerator
133
+ from accelerate.utils import GradientAccumulationPlugin
134
+
135
+ plugin = GradientAccumulationPlugin(sync_with_dataloader=False)
136
+ accelerator = Accelerator(..., gradient_accumulation_plugin=plugin)
137
+ ```
138
+
139
+ </Tip>
140
+
141
+ ## The finished code
142
+
143
+ Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
144
+
145
+ ```python
146
+ from accelerate import Accelerator
147
+ accelerator = Accelerator(gradient_accumulation_steps=2)
148
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
149
+ model, optimizer, training_dataloader, scheduler
150
+ )
151
+ for batch in training_dataloader:
152
+ with accelerator.accumulate(model):
153
+ inputs, targets = batch
154
+ outputs = model(inputs)
155
+ loss = loss_function(outputs, targets)
156
+ accelerator.backward(loss)
157
+ optimizer.step()
158
+ scheduler.step()
159
+ optimizer.zero_grad()
160
+ ```
161
+
162
+ <Tip warning={true}>
163
+
164
+ It's important that **only one forward/backward** should be done inside the context manager `with accelerator.accumulate(model)`.
165
+
166
+ </Tip>
167
+
168
+
169
+ To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](../concept_guides/gradient_synchronization)
170
+
171
+
172
+ ## Self-contained example
173
+
174
+ Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
175
+
176
+ ```python
177
+ import torch
178
+ import copy
179
+ from accelerate import Accelerator
180
+ from accelerate.utils import set_seed
181
+ from torch.utils.data import TensorDataset, DataLoader
182
+
183
+ # seed
184
+ set_seed(0)
185
+
186
+ # define toy inputs and labels
187
+ x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
188
+ y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
189
+ gradient_accumulation_steps = 4
190
+ batch_size = len(x) // gradient_accumulation_steps
191
+
192
+ # define dataset and dataloader
193
+ dataset = TensorDataset(x, y)
194
+ dataloader = DataLoader(dataset, batch_size=batch_size)
195
+
196
+ # define model, optimizer and loss function
197
+ model = torch.zeros((1, 1), requires_grad=True)
198
+ model_clone = copy.deepcopy(model)
199
+ criterion = torch.nn.MSELoss()
200
+ model_optimizer = torch.optim.SGD([model], lr=0.02)
201
+ accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
202
+ model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
203
+ model_clone_optimizer = torch.optim.SGD([model_clone], lr=0.02)
204
+ print(f"initial model weight is {model.mean().item():.5f}")
205
+ print(f"initial model weight is {model_clone.mean().item():.5f}")
206
+ for i, (inputs, labels) in enumerate(dataloader):
207
+ with accelerator.accumulate(model):
208
+ inputs = inputs.view(-1, 1)
209
+ print(i, inputs.flatten())
210
+ labels = labels.view(-1, 1)
211
+ outputs = inputs @ model
212
+ loss = criterion(outputs, labels)
213
+ accelerator.backward(loss)
214
+ model_optimizer.step()
215
+ model_optimizer.zero_grad()
216
+ loss = criterion(x.view(-1, 1) @ model_clone, y.view(-1, 1))
217
+ model_clone_optimizer.zero_grad()
218
+ loss.backward()
219
+ model_clone_optimizer.step()
220
+ print(f"w/ accumulation, the final model weight is {model.mean().item():.5f}")
221
+ print(f"w/o accumulation, the final model weight is {model_clone.mean().item():.5f}")
222
+ ```
223
+ ```
224
+ initial model weight is 0.00000
225
+ initial model weight is 0.00000
226
+ 0 tensor([1., 2.])
227
+ 1 tensor([3., 4.])
228
+ 2 tensor([5., 6.])
229
+ 3 tensor([7., 8.])
230
+ w/ accumulation, the final model weight is 2.04000
231
+ w/o accumulation, the final model weight is 2.04000
232
+ ```
docs/source/usage_guides/ipex.md ADDED
@@ -0,0 +1,174 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Intel® Extension for PyTorch
17
+
18
+ [IPEX](https://github.com/intel/intel-extension-for-pytorch) is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. So, it is expected to bring a performance benefit for Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.
19
+
20
+ Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.
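+ 
+ For reference, outside of 🤗 Accelerate this is roughly what IPEX BF16 auto mixed precision looks like on its own; the snippet below is a rough sketch assuming IPEX's `optimize` API and a toy linear model:
+ 
+ ```python
+ import torch
+ import intel_extension_for_pytorch as ipex  # assumes IPEX is installed
+ 
+ model = torch.nn.Linear(64, 2)
+ optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
+ model.train()
+ 
+ # Applies CPU-specific optimizations; dtype=torch.bfloat16 prepares the model for BF16
+ model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
+ 
+ # BF16 auto mixed precision on CPU
+ with torch.autocast("cpu", dtype=torch.bfloat16):
+     loss = model(torch.randn(8, 64)).sum()
+ loss.backward()
+ optimizer.step()
+ ```
+ 
+ With 🤗 Accelerate, as shown below, you do not need to write any of this yourself; enabling IPEX through the config is enough.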
21
+
22
+ ## IPEX installation:
23
+
24
+ IPEX releases follow PyTorch releases. To install via pip:
25
+
26
+ | PyTorch Version | IPEX version |
27
+ | :---------------: | :----------: |
28
+ | 2.0 | 2.0.0 |
29
+ | 1.13 | 1.13.0 |
30
+ | 1.12 | 1.12.300 |
31
+ | 1.11 | 1.11.200 |
32
+ | 1.10 | 1.10.100 |
33
+
34
+ ```
35
+ pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
36
+ ```
37
+
38
+ Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
39
+
40
+
41
+ ## How it works for training optimization on CPU
42
+
43
+ 🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch); all you need to do is enable it through the config.
44
+
45
+ **Scenario 1**: Acceleration of No distributed CPU training
46
+
47
+ Run <u>accelerate config</u> on your machine:
48
+
49
+ ```bash
50
+ $ accelerate config
51
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
52
+ In which compute environment are you running?
53
+ This machine
54
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
55
+ Which type of machine are you using?
56
+ No distributed training
57
+ Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:yes
58
+ Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
59
+ Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
60
+ Do you want to use DeepSpeed? [yes/NO]: NO
61
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
62
+ Do you wish to use FP16 or BF16 (mixed precision)?
63
+ bf16
64
+ ```
65
+ This will generate a config file that will be used automatically to properly set the
66
+ default options when doing
67
+
68
+ ```bash
69
+ accelerate launch my_script.py --args_to_my_script
70
+ ```
71
+
72
+ For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled.
73
+ Below is the `default_config.yaml` that is generated after running `accelerate config`:
74
+
75
+ ```yaml
76
+ compute_environment: LOCAL_MACHINE
77
+ distributed_type: 'NO'
78
+ downcast_bf16: 'no'
79
+ ipex_config:
80
+ ipex: true
81
+ machine_rank: 0
82
+ main_training_function: main
83
+ mixed_precision: bf16
84
+ num_machines: 1
85
+ num_processes: 1
86
+ rdzv_backend: static
87
+ same_network: true
88
+ tpu_env: []
89
+ tpu_use_cluster: false
90
+ tpu_use_sudo: false
91
+ use_cpu: true
92
+ ```
93
+ ```bash
94
+ accelerate launch examples/nlp_example.py
95
+ ```
96
+
97
+ **Scenario 2**: Acceleration of distributed CPU training
98
+ We use Intel oneCCL for communication, combined with the Intel® MPI library, to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to [this guide](https://huggingface.co/docs/transformers/perf_train_cpu_many) for the installation instructions.
99
+
100
+ Run <u>accelerate config</u> on your machine (node0):
101
+
102
+ ```bash
103
+ $ accelerate config
104
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
105
+ In which compute environment are you running?
106
+ This machine
107
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
108
+ Which type of machine are you using?
109
+ multi-CPU
110
+ How many different machines will you use (use more than 1 for multi-node training)? [1]: 4
111
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
112
+ What is the rank of this machine?
113
+ 0
114
+ What is the IP address of the machine that will host the main process? 36.112.23.24
115
+ What is the port you will use to communicate with the main process? 29500
116
+ Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes
117
+ Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
118
+ Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
119
+ How many CPU(s) should be used for distributed training? [1]:16
120
+ -----------------------------------------------------------------------------------------------------------------------------------------------------------
121
+ Do you wish to use FP16 or BF16 (mixed precision)?
122
+ bf16
123
+ ```
124
+ For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled for distributed CPU training.
125
+
126
+ Below is the `default_config.yaml` that is generated after running `accelerate config`:
127
+ ```yaml
128
+ compute_environment: LOCAL_MACHINE
129
+ distributed_type: MULTI_CPU
130
+ downcast_bf16: 'no'
131
+ ipex_config:
132
+ ipex: true
133
+ machine_rank: 0
134
+ main_process_ip: 36.112.23.24
135
+ main_process_port: 29500
136
+ main_training_function: main
137
+ mixed_precision: bf16
138
+ num_machines: 4
139
+ num_processes: 16
140
+ rdzv_backend: static
141
+ same_network: true
142
+ tpu_env: []
143
+ tpu_use_cluster: false
144
+ tpu_use_sudo: false
145
+ use_cpu: true
146
+ ```
147
+
148
+ Set the following environment variables and use Intel MPI to launch the training.
149
+
150
+ On node0, create a configuration file that contains the IP addresses of each node (for example, `hostfile`) and pass that configuration file path as an argument.
151
+ ```bash
152
+ $ cat hostfile
153
+ xxx.xxx.xxx.xxx #node0 ip
154
+ xxx.xxx.xxx.xxx #node1 ip
155
+ xxx.xxx.xxx.xxx #node2 ip
156
+ xxx.xxx.xxx.xxx #node3 ip
157
+ ```
158
+ Now, run the following command on node0 and **16DDP** (16 processes in total, 4 per node as set by `-n 16 -ppn 4`) will be enabled on node0, node1, node2 and node3 with BF16 mixed precision:
159
+ ```bash
160
+ oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
161
+ source $oneccl_bindings_for_pytorch_path/env/setvars.sh
162
+ export CCL_WORKER_COUNT=1
163
+ export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
164
+ export CCL_ATL_TRANSPORT=ofi
165
+ mpirun -f hostfile -n 16 -ppn 4 accelerate launch examples/nlp_example.py
166
+ ```
167
+
168
+ ## Related Resources
169
+
170
+ - [Project's github](https://github.com/intel/intel-extension-for-pytorch)
171
+ - [API docs](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html)
172
+ - [Tuning guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html)
173
+ - [Blogs & Publications](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/blogs_publications.html)
174
+
docs/source/usage_guides/local_sgd.md ADDED
@@ -0,0 +1,108 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Using Local SGD with 🤗 Accelerate
17
+
18
+ Local SGD is a technique for distributed training where gradients are not synchronized every step. Thus, each process updates its own version of the model weights and, after a given number of steps, these weights are synchronized by averaging across all processes. This improves communication efficiency and can lead to a substantial training speed-up, especially when a computer lacks a fast interconnect such as NVLink.
19
+ Unlike gradient accumulation (where improving communication efficiency requires increasing the effective batch size), Local SGD does not require changing a batch size or a learning rate / schedule. However, if necessary, Local SGD can be combined with gradient accumulation as well.
20
+
21
+ In this tutorial you will see how to quickly set up Local SGD with 🤗 Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
22
+
23
+ This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
24
+
25
+ ```python
26
+ device = "cuda"
27
+ model.to(device)
28
+
29
+ gradient_accumulation_steps = 2
30
+
31
+ for index, batch in enumerate(training_dataloader):
32
+ inputs, targets = batch
33
+ inputs = inputs.to(device)
34
+ targets = targets.to(device)
35
+ outputs = model(inputs)
36
+ loss = loss_function(outputs, targets)
37
+ loss = loss / gradient_accumulation_steps
38
+ loss.backward()
39
+ if (index + 1) % gradient_accumulation_steps == 0:
40
+ optimizer.step()
41
+ scheduler.step()
42
+ optimizer.zero_grad()
43
+ ```
44
+
45
+ ## Converting it to 🤗 Accelerate
46
+
47
+ First, the code shown earlier will be converted to use 🤗 Accelerate without either the LocalSGD or the gradient accumulation helper:
48
+
49
+ ```diff
50
+ + from accelerate import Accelerator
51
+ + accelerator = Accelerator()
52
+
53
+ + model, optimizer, training_dataloader, scheduler = accelerator.prepare(
54
+ + model, optimizer, training_dataloader, scheduler
55
+ + )
56
+
57
+ for index, batch in enumerate(training_dataloader):
58
+ inputs, targets = batch
59
+ - inputs = inputs.to(device)
60
+ - targets = targets.to(device)
61
+ outputs = model(inputs)
62
+ loss = loss_function(outputs, targets)
63
+ loss = loss / gradient_accumulation_steps
64
+ + accelerator.backward(loss)
65
+ if (index+1) % gradient_accumulation_steps == 0:
66
+ optimizer.step()
67
+ scheduler.step()
68
+ ```
69
+
70
+ ## Letting 🤗 Accelerate handle model synchronization
71
+
72
+ All that is left now is to let 🤗 Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity let us assume we need to synchronize every 8 steps. This is
73
+ achieved by adding one `with LocalSGD` statement and one call to `local_sgd.step()` after every optimizer step:
74
+
75
+ ```diff
76
+ +from accelerate.local_sgd import LocalSGD
+ +local_sgd_steps=8
77
+
78
+ +with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=local_sgd_steps, enabled=True) as local_sgd:
79
+ for batch in training_dataloader:
80
+ with accelerator.accumulate(model):
81
+ inputs, targets = batch
82
+ outputs = model(inputs)
83
+ loss = loss_function(outputs, targets)
84
+ accelerator.backward(loss)
85
+ optimizer.step()
86
+ scheduler.step()
87
+ optimizer.zero_grad()
88
+ + local_sgd.step()
89
+ ```
90
+
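+ For reference, here is a self-contained sketch of the resulting loop (the toy model, data and hyper-parameters are illustrative placeholders), meant to be started with `accelerate launch` across several processes:
+
+ ```python
+ import torch
+ from accelerate import Accelerator
+ from accelerate.local_sgd import LocalSGD
+
+ accelerator = Accelerator(gradient_accumulation_steps=2)
+
+ # Illustrative toy model, optimizer, scheduler and data
+ model = torch.nn.Linear(16, 2)
+ optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
+ scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
+ dataset = torch.utils.data.TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
+ training_dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
+
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+     model, optimizer, training_dataloader, scheduler
+ )
+
+ with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
+     for inputs, targets in training_dataloader:
+         with accelerator.accumulate(model):
+             loss = torch.nn.functional.cross_entropy(model(inputs), targets)
+             accelerator.backward(loss)
+             optimizer.step()
+             scheduler.step()
+             optimizer.zero_grad()
+             local_sgd.step()
+ ```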
91
+ Under the hood, the Local SGD code **disables** automatic gradient synchronization (but accumulation still works as expected!). Instead it averages model parameters every `local_sgd_steps` steps (as well as at the end of the training loop).
92
+
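+ Conceptually, that periodic synchronization is nothing more than an all-reduce average of the model parameters across processes. A simplified sketch of what happens every `local_sgd_steps` steps (not the actual library implementation, which goes through the `Accelerator` utilities) could look like:
+
+ ```python
+ import torch.distributed as dist
+
+ def average_model_parameters(model):
+     """Average every parameter tensor across processes (assumes an initialized process group)."""
+     world_size = dist.get_world_size()
+     for param in model.parameters():
+         dist.all_reduce(param.data, op=dist.ReduceOp.SUM)
+         param.data /= world_size
+ ```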
93
+ ## Limitations
94
+
95
+ The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed](https://github.com/microsoft/DeepSpeed).
96
+
97
+ ## References
98
+
99
+ Although we are not aware of the true origins of this simple approach, the idea of local SGD is quite old and goes
100
+ back to at least:
101
+
102
+ Zhang, J., De Sa, C., Mitliagkas, I., & Ré, C. (2016). [Parallel SGD: When does averaging help?. arXiv preprint
103
+ arXiv:1606.07365.](https://arxiv.org/abs/1606.07365)
104
+
105
+ We credit the term Local SGD to the following paper (but there might be earlier references we are not aware of).
106
+
107
+ Stich, Sebastian Urban. ["Local SGD Converges Fast and Communicates Little." ICLR 2019-International Conference on
108
+ Learning Representations. No. CONF. 2019.](https://arxiv.org/abs/1805.09767)
docs/source/usage_guides/low_precision_training.md ADDED
@@ -0,0 +1,92 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Low Precision Training Methods
17
+
18
+ 🤗 Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
19
+
20
+ ## What training on FP8 means
21
+
22
+ To explore more of the nitty-gritty of training in FP8 with PyTorch and 🤗 Accelerate, check out the [concept guide](../concept_guides/low_precision_training.md) on why this can be difficult. Essentially, rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
23
+
24
+ This is only enabled on specific NVIDIA hardware, namely:
25
+
26
+ * Anything after the 3000 series consumer graphics cards (such as the 4090)
27
+ * Hopper-based GPU architectures (such as the `H100` and `H200`)
28
+
29
+ The result is a reduction in the memory used (as we've cut the needed memory in half for some parts of training), and an increase in throughput *should* also be seen for larger models that can replace certain layers with FP8-enabled ones.
30
+
31
+ ## Configuring the Accelerator
32
+
33
+ Currently two different backends for FP8 are supported (`TransformersEngine` and `MS-AMP`), each with different capabilities and configurations.
34
+
35
+ To use either, the same core API is used. Just pass `mixed_precision="fp8"` to either the [`Accelerator`], during `accelerate config` when prompted about mixed precision, or as part of your `config.yaml` file in the `mixed_precision` key:
36
+
37
+ ```python
38
+ from accelerate import Accelerator
39
+ accelerator = Accelerator(mixed_precision="fp8")
40
+ ```
41
+
42
+ By default, if `MS-AMP` is available in your environment, 🤗 Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize the [`utils.FP8RecipeKwargs`]:
43
+
44
+ ```python
45
+ from accelerate import Accelerator
46
+ from accelerate.utils import FP8RecipeKwargs
47
+ kwargs = [FP8RecipeKwargs(backend="msamp")]
48
+ # Or to specify the backend as `TransformersEngine` even if MS-AMP is installed
49
+ # kwargs = [FP8RecipeKwargs(backend="te")]
50
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
51
+ ```
52
+
53
+ ## Configuring MS-AMP
54
+
55
+ Of the two, `MS-AMP` is traditionally the easier one to configure as there is only a single argument: the optimization level.
56
+
57
+ Currently two levels of optimization are supported in the 🤗 Accelerate integration, `"O1"` and `"O2"` (using the letter 'o', not zero).
58
+
59
+ * `"O1"` will cast the weight gradients and `all_reduce` communications to happen in 8-bit, while the rest are done in 16 bit. This reduces the general GPU memory usage and speeds up communication bandwidths.
60
+ * `"O2"` will also cast first-order optimizer states into 8 bit, while the second order states are in FP16. (Currently just the `Adam` optimizer is supported). This tries it's best to minimize final accuracy degredation and will save the highest potential memory.
61
+
62
+ To specify an optimization level, pass it to the [`FP8RecipeKwargs`] handler by setting the `optimization_level` argument:
63
+
64
+ ```python
65
+ from accelerate import Accelerator
66
+ from accelerate.utils import FP8RecipeKwargs
67
+ kwargs = [FP8RecipeKwargs(backend="msamp", optimization_level="O2")]
68
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
69
+ ```
70
+
71
+ ## Configuring TransformersEngine
72
+
73
+ TransformersEngine has many more options for customizing how and which FP8 calculations are performed. A full list of supported arguments and what they mean is available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html); they are also restated as part of [`FP8RecipeKwargs`]'s docstring for your convenience.
74
+
75
+ 🤗 Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can potentially lead to better performance.
76
+
77
+ To use it, specify `backend="te"` and modify any of the arguments you want as part of your kwarg handler:
78
+
79
+ ```python
80
+ from accelerate import Accelerator
81
+ from accelerate.utils import FP8RecipeKwargs
82
+ kwargs = [FP8RecipeKwargs(backend="te", ...)]
83
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
84
+ ```
85
+
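+ For instance, a hedged sketch filling in a few recipe-style arguments (the names below mirror NVIDIA's `DelayedScaling` recipe options and should be double-checked against the [`FP8RecipeKwargs`] docstring):
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import FP8RecipeKwargs
+
+ # Argument names are assumptions based on the TE recipe options; verify them in the docstring
+ kwargs = [
+     FP8RecipeKwargs(
+         backend="te",
+         fp8_format="HYBRID",      # E4M3 during the forward pass, E5M2 during the backward pass
+         amax_history_len=32,      # history window used to compute the scaling factor
+         amax_compute_algo="max",
+     )
+ ]
+ accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
+ ```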
86
+ ## Further Reading
87
+
88
+ To learn more about training in FP8 please check out the following resources:
89
+
90
+ * [Our concept guide](../concept_guides/low_precision_training.md) detailing more about both TransformersEngine and MS-AMP
91
+ * [The `transformers-engine` documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html)
92
+ * [The `MS-AMP` documentation](https://azure.github.io/MS-AMP/docs/)
docs/source/usage_guides/megatron_lm.md ADDED
@@ -0,0 +1,583 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+
17
+ # Megatron-LM
18
+
19
+ [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) enables training large transformer language models at scale.
20
+ It provides efficient tensor, pipeline and sequence based model parallelism for pre-training transformer based
21
+ Language Models such as [GPT](https://arxiv.org/abs/2005.14165) (Decoder Only), [BERT](https://arxiv.org/pdf/1810.04805.pdf) (Encoder Only) and [T5](https://arxiv.org/abs/1910.10683) (Encoder-Decoder).
22
+ For detailed information and how things work behind the scenes, please refer to the GitHub [repo](https://github.com/NVIDIA/Megatron-LM).
23
+
24
+ ## What is integrated?
25
+
26
+ Accelerate integrates the following features of Megatron-LM to enable large scale pre-training/finetuning
27
+ of BERT (Encoder), GPT (Decoder) or T5 models (Encoder and Decoder):
28
+
29
+ a. **Tensor Parallelism (TP)**: Reduces memory footprint without much additional communication on intra-node ranks.
30
+ Each tensor is split into multiple chunks, with each shard residing on a separate GPU. At each step, the same mini-batch of data is processed
31
+ independently and in parallel by each shard followed by syncing across all GPUs (`all-reduce` operation).
32
+ In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path.
33
+ For more details, please refer research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
34
+ Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and
35
+ this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).
36
+
37
+
38
+ b. **Pipeline Parallelism (PP)**: Reduces memory footprint and enables large scale training via inter-node parallelization.
39
+ Reduces the bubble of naive PP via PipeDream-Flush schedule/1F1B schedule and Interleaved 1F1B schedule.
40
+ Layers are distributed uniformly across PP stages. For example, if a model has `24` layers and we have `4` GPUs for
41
+ pipeline parallelism, each GPU will have `6` layers (24/4). For more details on schedules to reduce the idle time of PP,
42
+ please refer to the research paper [Efficient Large-Scale Language Model Training on GPU Clusters
43
+ Using Megatron-LM](https://arxiv.org/pdf/2104.04473.pdf) and
44
+ this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism).
45
+
46
+ c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP.
47
+ It reduces the activation memory required, as it prevents the same copies from being kept on all tensor parallel ranks
48
+ after the `all-reduce` by replacing it with a `reduce-scatter`, and the `no-op` operation is replaced by an `all-gather`.
49
+ As `all-reduce = reduce-scatter + all-gather`, this saves a ton of activation memory at no added communication cost.
50
+ To put it simply, it shards the outputs of each transformer layer along sequence dimension, e.g.,
51
+ if the sequence length is `1024` and the TP size is `4`, each GPU will have `256` tokens (1024/4) for each sample.
52
+ This increases the batch size that can be supported for training. For more details, please refer to the research paper
53
+ [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).
54
+
55
+ d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footprint by sharding optimizer states and gradients across DP ranks
56
+ (versus the traditional method of replicating the optimizer state across data parallel ranks).
57
+ For example, when using Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory.
58
+ This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs.
59
+ For more details, please refer to the research paper [ZeRO: Memory Optimizations Toward Training Trillion
60
+ Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and following section of 🤗 blog
61
+ [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism).
62
+
63
+ e. **Selective Activation Recomputation**: Reduces the memory footprint of activations significantly via smart activation checkpointing.
64
+ It avoids storing activations that occupy large memory but are fast to recompute, thereby achieving a great tradeoff between memory and recomputation.
65
+ For example, for GPT-3, this leads to 70% reduction in required memory for activations at the expense of
66
+ only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper
67
+ [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).
68
+
69
+ f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation to weight gradient computation of linear layer.
70
+ PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.
71
+
72
+ g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format.
73
+
74
+ h. **Checkpoint reshaping and interoperability**: Utility for reshaping Megatron-LM checkpoints of variable
75
+ tensor and pipeline parallel sizes to the beloved 🤗 Transformers sharded checkpoints, which have great support in a plethora of tools
76
+ such as 🤗 Accelerate Big Model Inference, Megatron-DeepSpeed Inference etc.
77
+ Support is also available for converting 🤗 Transformers sharded checkpoints to Megatron-LM checkpoint of variable tensor and pipeline parallel sizes
78
+ for large scale training.
79
+
80
+
81
+ ## Pre-Requisites
82
+
83
+ You will need to install the latest PyTorch, CUDA, NCCL, and NVIDIA [APEX](https://github.com/NVIDIA/apex#quick-start) releases and the nltk library.
84
+ See [documentation](https://github.com/NVIDIA/Megatron-LM#setup) for more details.
85
+ Another way to setup the environment is to pull an NVIDIA PyTorch Container that comes with all the required installations from NGC.
86
+
87
+ Below is a step-by-step method to set up the conda environment:
88
+
89
+ 1. Create a virtual environment
90
+ ```
91
+ conda create --name ml
92
+ ```
93
+
94
+ 2. Assuming that the machine has CUDA 11.3 installed, install the corresponding PyTorch GPU version
95
+ ```
96
+ conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
97
+ ```
98
+
99
+ 3. Install Nvidia APEX
100
+ ```
101
+ git clone https://github.com/NVIDIA/apex
102
+ cd apex
103
+ pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
104
+ cd ..
105
+ ```
106
+
107
+ 4. Install Megatron-LM
108
+
109
+ ```
110
+ pip install git+https://github.com/huggingface/Megatron-LM.git
111
+ ```
112
+
113
+ ## Accelerate Megatron-LM Plugin
114
+
115
+ Important features are directly supported via the `accelerate config` command.
116
+ An example of the corresponding questions for using Megatron-LM features is shown below:
117
+
118
+ ```bash
119
+ :~$ accelerate config --config_file "megatron_gpt_config.yaml"
120
+ In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
121
+ Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2
122
+ How many different machines will you use (use more than 1 for multi-node training)? [1]:
123
+ Do you want to use DeepSpeed? [yes/NO]:
124
+ Do you want to use FullyShardedDataParallel? [yes/NO]:
125
+ Do you want to use Megatron-LM ? [yes/NO]: yes
126
+ What is the Tensor Parallelism degree/size? [1]:2
127
+ Do you want to enable Sequence Parallelism? [YES/no]:
128
+ What is the Pipeline Parallelism degree/size? [1]:2
129
+ What is the number of micro-batches? [1]:2
130
+ Do you want to enable selective activation recomputation? [YES/no]:
131
+ Do you want to use distributed optimizer which shards optimizer state and gradients across data parallel ranks? [YES/no]:
132
+ What is the gradient clipping value based on global L2 Norm (0 to disable)? [1.0]:
133
+ How many GPU(s) should be used for distributed training? [1]:4
134
+ Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: bf16
135
+ ```
136
+
137
+ The resulting config is shown below:
138
+
139
+ ```
140
+ ~$ cat megatron_gpt_config.yaml
141
+ compute_environment: LOCAL_MACHINE
142
+ deepspeed_config: {}
143
+ distributed_type: MEGATRON_LM
144
+ downcast_bf16: 'no'
145
+ fsdp_config: {}
146
+ machine_rank: 0
147
+ main_process_ip: null
148
+ main_process_port: null
149
+ main_training_function: main
150
+ megatron_lm_config:
151
+ megatron_lm_gradient_clipping: 1.0
152
+ megatron_lm_num_micro_batches: 2
153
+ megatron_lm_pp_degree: 2
154
+ megatron_lm_recompute_activations: true
155
+ megatron_lm_sequence_parallelism: true
156
+ megatron_lm_tp_degree: 2
157
+ megatron_lm_use_distributed_optimizer: true
158
+ mixed_precision: bf16
159
+ num_machines: 1
160
+ num_processes: 4
161
+ rdzv_backend: static
162
+ same_network: true
163
+ use_cpu: false
164
+ ```
165
+
166
+ We will take the example of GPT pre-training. The minimal changes required to the official `run_clm_no_trainer.py`
167
+ to use Megatron-LM are as follows:
168
+
169
+ 1. As Megatron-LM uses its own implementation of Optimizer, the corresponding scheduler compatible with it needs to be used.
170
+ As such, only Megatron-LM's scheduler is supported, and the user will need to create `accelerate.utils.MegatronLMDummyScheduler`.
171
+ Example is given below:
172
+
173
+ ```python
174
+ from accelerate.utils import MegatronLMDummyScheduler
175
+
176
+ if accelerator.distributed_type == DistributedType.MEGATRON_LM:
177
+ lr_scheduler = MegatronLMDummyScheduler(
178
+ optimizer=optimizer,
179
+ total_num_steps=args.max_train_steps,
180
+ warmup_num_steps=args.num_warmup_steps,
181
+ )
182
+ else:
183
+ lr_scheduler = get_scheduler(
184
+ name=args.lr_scheduler_type,
185
+ optimizer=optimizer,
186
+ num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
187
+ num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
188
+ )
189
+ ```
190
+
191
+ 2. Getting the details of the total batch size now needs to be cognizant of tensor and pipeline parallel sizes.
192
+ Example of getting the effective total batch size is shown below:
193
+
194
+ ```python
195
+ if accelerator.distributed_type == DistributedType.MEGATRON_LM:
196
+ total_batch_size = accelerator.state.megatron_lm_plugin.global_batch_size
197
+ else:
198
+ total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
199
+ ```
200
+
201
+ 3. When using Megatron-LM, the losses are already averaged across the data parallel group
202
+
203
+ ```python
204
+ if accelerator.distributed_type == DistributedType.MEGATRON_LM:
205
+ losses.append(loss)
206
+ else:
207
+ losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
208
+
209
+ if accelerator.distributed_type == DistributedType.MEGATRON_LM:
210
+ losses = torch.tensor(losses)
211
+ else:
212
+ losses = torch.cat(losses)
213
+ ```
214
+
215
+ 4. For Megatron-LM, we need to save the model using `accelerator.save_state`
216
+
217
+ ```python
218
+ if accelerator.distributed_type == DistributedType.MEGATRON_LM:
219
+ accelerator.save_state(args.output_dir)
220
+ else:
221
+ unwrapped_model = accelerator.unwrap_model(model)
222
+ unwrapped_model.save_pretrained(
223
+ args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
224
+ )
225
+ ```
226
+
227
+ That's it! We are good to go 🚀. Please find the example script in the examples folder at the path `accelerate/examples/by_feature/megatron_lm_gpt_pretraining.py`.
228
+ Let's run it for `gpt-large` model architecture using 4 A100-80GB GPUs.
229
+
230
+ ```bash
231
+ accelerate launch --config_file megatron_gpt_config.yaml \
232
+ examples/by_feature/megatron_lm_gpt_pretraining.py \
233
+ --config_name "gpt2-large" \
234
+ --tokenizer_name "gpt2-large" \
235
+ --dataset_name wikitext \
236
+ --dataset_config_name wikitext-2-raw-v1 \
237
+ --block_size 1024 \
238
+ --learning_rate 5e-5 \
239
+ --per_device_train_batch_size 24 \
240
+ --per_device_eval_batch_size 24 \
241
+ --num_train_epochs 5 \
242
+ --with_tracking \
243
+ --report_to "wandb" \
244
+ --output_dir "awesome_model"
245
+ ```
246
+
247
+ Below are some important excerpts from the output logs:
248
+
249
+ ```bash
250
+ Loading extension module fused_dense_cuda...
251
+ >>> done with compiling and loading fused kernels. Compilation time: 3.569 seconds
252
+ > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
253
+ Building gpt model in the pre-training mode.
254
+ The Megatron LM model weights are initialized at random in `accelerator.prepare`. Please use `accelerator.load_checkpoint` to load a pre-trained checkpoint matching the distributed setup.
255
+ Preparing dataloader
256
+ Preparing dataloader
257
+ Preparing model
258
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 210753280
259
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 209445120
260
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 210753280
261
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 209445120
262
+ Preparing optimizer
263
+ Preparing scheduler
264
+ > learning rate decay style: linear
265
+ 10/10/2022 22:57:22 - INFO - __main__ - ***** Running training *****
266
+ 10/10/2022 22:57:22 - INFO - __main__ - Num examples = 2318
267
+ 10/10/2022 22:57:22 - INFO - __main__ - Num Epochs = 5
268
+ 10/10/2022 22:57:22 - INFO - __main__ - Instantaneous batch size per device = 24
269
+ 10/10/2022 22:57:22 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 48
270
+ 10/10/2022 22:57:22 - INFO - __main__ - Gradient Accumulation steps = 1
271
+ 10/10/2022 22:57:22 - INFO - __main__ - Total optimization steps = 245
272
+ 20%|████████████▍ | 49/245 [01:04<04:09, 1.27s/it]
273
+ 10/10/2022 22:58:29 - INFO - __main__ - epoch 0: perplexity: 1222.1594275215962 eval_loss: 7.10837459564209
274
+ 40%|████████████████████████▊ | 98/245 [02:10<03:07, 1.28s/it]
275
+ 10/10/2022 22:59:35 - INFO - __main__ - epoch 1: perplexity: 894.5236583794557 eval_loss: 6.796291351318359
276
+ 60%|████████████████████████████████████▌ | 147/245 [03:16<02:05, 1.28s/it]
277
+ 10/10/2022 23:00:40 - INFO - __main__ - epoch 2: perplexity: 702.8458788508042 eval_loss: 6.555137634277344
278
+ 80%|████████████████████████████████████████████████▊ | 196/245 [04:22<01:02, 1.28s/it]
279
+ 10/10/2022 23:01:46 - INFO - __main__ - epoch 3: perplexity: 600.3220028695281 eval_loss: 6.39746618270874
280
+ 100%|████████████████████████████████████████████████████████████████| 245/245 [05:27<00:00,  1.28s/it]
281
+ ```
282
+
283
+ There are a large number of other options/features that one can set using `accelerate.utils.MegatronLMPlugin`.
284
+
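+ As a sketch, the plugin can also be built programmatically and handed to the [`Accelerator`] instead of relying on `accelerate config`; both the keyword names below (which mirror the `megatron_lm_*` config keys above) and the `megatron_lm_plugin` argument should be double-checked against the `MegatronLMPlugin` and [`Accelerator`] docstrings:
+
+ ```python
+ from accelerate import Accelerator
+ from accelerate.utils import MegatronLMPlugin
+
+ # Keyword names are assumptions derived from the config keys shown earlier
+ megatron_lm_plugin = MegatronLMPlugin(
+     tp_degree=2,              # tensor parallelism
+     pp_degree=2,              # pipeline parallelism
+     num_micro_batches=2,
+     gradient_clipping=1.0,
+     sequence_parallelism=True,
+     use_distributed_optimizer=True,
+ )
+ accelerator = Accelerator(mixed_precision="bf16", megatron_lm_plugin=megatron_lm_plugin)
+ ```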
285
+ ## Advanced features to leverage writing custom train step and Megatron-LM Indexed Datasets
286
+
287
+ For leveraging more features, please go through below details.
288
+
289
+ 1. Below is an example of changes required to customize the Train Step while using Megatron-LM.
290
+ You will implement the `accelerate.utils.AbstractTrainStep` or inherit from their corresponding children
291
+ `accelerate.utils.GPTTrainStep`, `accelerate.utils.BertTrainStep` or `accelerate.utils.T5TrainStep`.
292
+
293
+ ```python
294
+ from accelerate.utils import MegatronLMDummyScheduler, GPTTrainStep, avg_losses_across_data_parallel_group
295
+
296
+
297
+ # Custom loss function for the Megatron model
298
+ class GPTTrainStepWithCustomLoss(GPTTrainStep):
299
+ def __init__(self, megatron_args, **kwargs):
300
+ super().__init__(megatron_args)
301
+ self.kwargs = kwargs
302
+
303
+ def get_loss_func(self):
304
+ def loss_func(inputs, loss_mask, output_tensor):
305
+ batch_size, seq_length = output_tensor.shape
306
+ losses = output_tensor.float()
307
+ loss_mask = loss_mask.view(-1).float()
308
+ loss = losses.view(-1) * loss_mask
309
+
310
+ # Resize and average loss per sample
311
+ loss_per_sample = loss.view(batch_size, seq_length).sum(axis=1)
312
+ loss_mask_per_sample = loss_mask.view(batch_size, seq_length).sum(axis=1)
313
+ loss_per_sample = loss_per_sample / loss_mask_per_sample
314
+
315
+ # Calculate and scale weighting
316
+ weights = torch.stack([(inputs == kt).float() for kt in self.kwargs["keytoken_ids"]]).sum(axis=[0, 2])
317
+ weights = 1.0 + self.kwargs["alpha"] * weights
318
+ # Calculate weighted average
319
+ weighted_loss = (loss_per_sample * weights).mean()
320
+
321
+ # Reduce loss across data parallel groups
322
+ averaged_loss = avg_losses_across_data_parallel_group([weighted_loss])
323
+
324
+ return weighted_loss, {"lm loss": averaged_loss[0]}
325
+
326
+ return loss_func
327
+
328
+ def get_forward_step_func(self):
329
+ def forward_step(data_iterator, model):
330
+ """Forward step."""
331
+ # Get the batch.
332
+ tokens, labels, loss_mask, attention_mask, position_ids = self.get_batch(data_iterator)
333
+ output_tensor = model(tokens, position_ids, attention_mask, labels=labels)
334
+
335
+ return output_tensor, partial(self.loss_func, tokens, loss_mask)
336
+
337
+ return forward_step
338
+
339
+
340
+ def main():
341
+ # Custom loss function for the Megatron model
342
+ keytoken_ids = []
343
+ keywords = ["plt", "pd", "sk", "fit", "predict", " plt", " pd", " sk", " fit", " predict"]
344
+ for keyword in keywords:
345
+ ids = tokenizer([keyword]).input_ids[0]
346
+ if len(ids) == 1:
347
+ keytoken_ids.append(ids[0])
348
+ accelerator.print(f"Keytoken ids: {keytoken_ids}")
349
+ accelerator.state.megatron_lm_plugin.custom_train_step_class = GPTTrainStepWithCustomLoss
350
+ accelerator.state.megatron_lm_plugin.custom_train_step_kwargs = {
351
+ "keytoken_ids": keytoken_ids,
352
+ "alpha": 0.25,
353
+ }
354
+ ```
355
+
356
+ 2. For using the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets
357
+ are available only on rank 0 of each tensor parallel group. As such, there are ranks where the dataloader won't be
358
+ available and this requires tweaks to the training loop. Being able to do all this shows how
359
+ flexible and extensible 🤗 Accelerate is. The changes required are as follows.
360
+
361
+ a. For Megatron-LM indexed datasets, we need to use `MegatronLMDummyDataLoader`
362
+ and pass the required dataset args to it such as `data_path`, `seq_length` etc.
363
+ See [here](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py#L804) for the list of available args.
364
+
365
+ ```python
366
+ from accelerate.utils import MegatronLMDummyDataLoader
367
+
368
+ megatron_dataloader_config = {
369
+ "data_path": args.data_path,
370
+ "splits_string": args.splits_string,
371
+ "seq_length": args.block_size,
372
+ "micro_batch_size": args.per_device_train_batch_size,
373
+ }
374
+ megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataloader_config)
375
+ accelerator.state.megatron_lm_plugin.megatron_dataset_flag = True
376
+ ```
377
+
378
+ b. `megatron_dataloader` is repeated 3 times to get training, validation and test dataloaders
379
+ as per the `args.splits_string` proportions
380
+
381
+ ```python
382
+ model, optimizer, lr_scheduler, train_dataloader, eval_dataloader, _ = accelerator.prepare(
383
+ model, optimizer, lr_scheduler, megatron_dataloader, megatron_dataloader, megatron_dataloader
384
+ )
385
+ ```
386
+
387
+ c. Changes to the training and evaluation loops are needed, as the dataloader is only available on tensor parallel rank 0.
388
+ So, we need to iterate only if the dataloader isn't `None`, else provide an empty dict.
389
+ As such, we loop using a `while` loop and break when `completed_steps` equals `args.max_train_steps`.
390
+ This is similar to the Megatron-LM setup wherein the user has to provide `max_train_steps` when using Megatron-LM indexed datasets.
391
+ This displays how flexible and extensible 🤗 Accelerate is.
392
+
393
+ ```python
394
+ while completed_steps < args.max_train_steps:
395
+ model.train()
396
+ batch = next(train_dataloader) if train_dataloader is not None else {}
397
+ outputs = model(**batch)
398
+ loss = outputs.loss
399
+ ...
400
+
401
+ if completed_steps % eval_interval == 0:
402
+ eval_completed_steps = 0
403
+ losses = []
404
+ while eval_completed_steps < eval_iters:
405
+ model.eval()
406
+ with torch.no_grad():
407
+ batch = next(eval_dataloader) if eval_dataloader is not None else {}
408
+ outputs = model(**batch)
409
+ ```
410
+
411
+
412
+ ## Utility for Checkpoint reshaping and interoperability
413
+
414
+ 1. The scripts for these are present in the 🤗 Transformers library under the respective models.
415
+ Currently, it is available for GPT model [checkpoint_reshaping_and_interoperability.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py)
416
+
417
+ 2. Below is an example of conversion of checkpoint from Megatron-LM to universal 🤗 Transformers sharded checkpoint.
418
+ ```bash
419
+ python checkpoint_reshaping_and_interoperability.py \
420
+ --convert_checkpoint_from_megatron_to_transformers \
421
+ --load_path "gpt/iter_0005000" \
422
+ --save_path "gpt/trfs_checkpoint" \
423
+ --max_shard_size "200MB" \
424
+ --tokenizer_name "gpt2" \
425
+ --print-checkpoint-structure
426
+ ```
427
+
428
+ 3. Conversion of a checkpoint from 🤗 Transformers to Megatron-LM with `tp_size=2`, `pp_size=2` and `dp_size=2`.
429
+ ```bash
430
+ python checkpoint_utils/megatron_gpt2/checkpoint_reshaping_and_interoperability.py \
431
+ --load_path "gpt/trfs_checkpoint" \
432
+ --save_path "gpt/megatron_lm_checkpoint" \
433
+ --target_tensor_model_parallel_size 2 \
434
+ --target_pipeline_model_parallel_size 2 \
435
+ --target_data_parallel_size 2 \
436
+ --target_params_dtype "bf16" \
437
+ --make_vocab_size_divisible_by 128 \
438
+ --use_distributed_optimizer \
439
+ --print-checkpoint-structure
440
+ ```
441
+
442
+ ## Megatron-LM GPT models support returning logits and `megatron_generate` function for text generation
443
+
444
+ 1. Returning logits requires setting `return_logits=True` in `MegatronLMPlugin`, as shown below.
445
+ These would be available in the last stage of the pipeline.
446
+ ```python
447
+ megatron_lm_plugin = MegatronLMPlugin(return_logits=True)
448
+ ```
449
+
450
+ 2. `megatron_generate` method for Megatron-LM GPT model: This will use Tensor and Pipeline Parallelism to complete
451
+ generations for a batch of inputs when using greedy with/without top_k/top_p sampling and for individual prompt inputs when using beam search decoding.
452
+ Only a subset of features of transformers generate is supported. This will help in using large models via tensor and pipeline parallelism
453
+ for generation (already does key-value caching and uses fused kernels by default).
454
+ This requires data parallel size to be 1, sequence parallelism and activation checkpointing to be disabled.
455
+ It also requires specifying path to tokenizer's vocab file and merges file.
456
+ Below example shows how to configure and use `megatron_generate` method for Megatron-LM GPT model.
457
+ ```python
458
+ # specifying tokenizer's vocab and merges file
459
+ vocab_file = os.path.join(args.resume_from_checkpoint, "vocab.json")
460
+ merge_file = os.path.join(args.resume_from_checkpoint, "merges.txt")
461
+ other_megatron_args = {"vocab_file": vocab_file, "merge_file": merge_file}
462
+ megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
463
+
464
+ # inference using `megatron_generate` functionality
465
+ tokenizer.pad_token = tokenizer.eos_token
466
+ max_new_tokens = 64
467
+ batch_texts = [
468
+ "Are you human?",
469
+ "The purpose of life is",
470
+ "The arsenal was constructed at the request of",
471
+ "How are you doing these days?",
472
+ ]
473
+ batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)
474
+
475
+ # top-p sampling
476
+ generated_tokens = model.megatron_generate(
477
+ batch_encodings["input_ids"],
478
+ batch_encodings["attention_mask"],
479
+ max_new_tokens=max_new_tokens,
480
+ top_p=0.8,
481
+ top_p_decay=0.5,
482
+ temperature=0.9,
483
+ )
484
+ decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
485
+ accelerator.print(decoded_preds)
486
+
487
+ # top-k sampling
488
+ generated_tokens = model.megatron_generate(
489
+ batch_encodings["input_ids"],
490
+ batch_encodings["attention_mask"],
491
+ max_new_tokens=max_new_tokens,
492
+ top_k=50,
493
+ temperature=0.9,
494
+ )
495
+ decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
496
+ accelerator.print(decoded_preds)
497
+
498
+ # adding `bos` token at the start
499
+ generated_tokens = model.megatron_generate(
500
+ batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, add_BOS=True
501
+ )
502
+ decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
503
+ accelerator.print(decoded_preds)
504
+
505
+ # beam search => only takes single prompt
506
+ batch_texts = ["The purpose of life is"]
507
+ batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)
508
+ generated_tokens = model.megatron_generate(
509
+ batch_encodings["input_ids"],
510
+ batch_encodings["attention_mask"],
511
+ max_new_tokens=max_new_tokens,
512
+ num_beams=20,
513
+ length_penalty=1.5,
514
+ )
515
+ decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
516
+ accelerator.print(decoded_preds)
517
+ ```
518
+
519
+ 3. An end-to-end example of using `megatron_generate` method for Megatron-LM GPT model is available at
520
+ [megatron_gpt2_generation.py](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/inference/megatron_gpt2_generation.py) with
521
+ config file [megatron_lm_gpt_generate_config.yaml](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/Configs/megatron_lm_gpt_generate_config.yaml).
522
+ The bash script with accelerate launch command is available at [megatron_lm_gpt_generate.sh](https://github.com/pacman100/accelerate-megatron-test/blob/main/megatron_lm_gpt_generate.sh).
523
+ The output logs of the script are available at [megatron_lm_gpt_generate.log](https://github.com/pacman100/accelerate-megatron-test/blob/main/output_logs/megatron_lm_gpt_generate.log).
524
+
525
+ ## Support for ROPE and ALiBi Positional embeddings and Multi-Query Attention
526
+
527
+ 1. For ROPE/ALiBi attention, pass `position_embedding_type` with `("absolute" | "rotary" | "alibi")` to `MegatronLMPlugin` as shown below.
528
+ ```python
529
+ other_megatron_args = {"position_embedding_type": "alibi"}
530
+ megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
531
+ ```
532
+
533
+ 2. For Multi-Query Attention, pass `attention_head_type` with `("multihead" | "multiquery")` to `MegatronLMPlugin` as shown below.
534
+ ```python
535
+ other_megatron_args = {"attention_head_type": "multiquery"}
536
+ megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
537
+ ```
538
+
539
+ ## Caveats
540
+
541
+ 1. Supports Transformers GPT2, Megatron-BERT and T5 models.
542
+ This covers Decoder only, Encoder only and Encoder-Decoder model classes.
543
+
544
+ 2. Only loss is returned from model forward pass as
545
+ there is quite a complex interplay of pipeline, tensor and data parallelism behind the scenes.
546
+ The `model(**batch_data)` call returns loss(es) averaged across the data parallel ranks.
547
+ This is fine for most cases wherein pre-training jobs are run using Megatron-LM features and
548
+ you can easily compute the `perplexity` using the loss.
549
+ For GPT model, returning logits in addition to loss(es) is supported.
550
+ These logits aren't gathered across data parallel ranks. Use `accelerate.utils.gather_across_data_parallel_groups`
551
+ to gather logits across data parallel ranks. These logits along with labels can be used for computing various
552
+ performance metrics.
553
+
554
+ 3. The main process is the last rank as the losses/logits are available in the last stage of pipeline.
555
+ `accelerator.is_main_process` and `accelerator.is_local_main_process` return `True` for last rank when using
556
+ Megatron-LM integration.
557
+
558
+ 4. In `accelerator.prepare` call, a Megatron-LM model corresponding to a given Transformers model is created
559
+ with random weights. Please use `accelerator.load_state` to load the Megatron-LM checkpoint with matching TP, PP and DP partitions.
560
+
561
+ 5. Currently, checkpoint reshaping and interoperability support is only available for GPT.
562
+ Soon it will be extended to BERT and T5.
563
+
564
+ 6. `gradient_accumulation_steps` needs to be 1. When using Megatron-LM, micro batches in the pipeline parallelism
565
+ setting are synonymous with gradient accumulation.
566
+
567
+ 7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints.
568
+
569
+ 8. Below is the mapping from Megatron-LM model architectures to the equivalent 🤗 Transformers model architectures.
570
+ Only these 🤗 transformers model architectures are supported.
571
+
572
+ a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) :
573
+ 🤗 transformers models with `megatron-bert` in config's model type, e.g.,
574
+ [MegatronBERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)
575
+
576
+ b. Megatron-LM [GPTModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py) :
577
+ 🤗 transformers models with `gpt2` in config's model type, e.g.,
578
+ [OpenAI GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
579
+
580
+ c. Megatron-LM [T5Model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) :
581
+ 🤗 transformers models with `t5` in config's model type, e.g.,
582
+ [T5](https://huggingface.co/docs/transformers/model_doc/t5) and
583
+ [MT5](https://huggingface.co/docs/transformers/model_doc/mt5)
docs/source/usage_guides/model_size_estimator.md ADDED
@@ -0,0 +1,137 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Understanding how big of a model can fit on your machine
17
+
18
+ One very difficult aspect when exploring potential models to use on your machine is knowing just how big of a model will *fit* into memory with your current graphics card (such as loading the model onto CUDA).
19
+
20
+ To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
21
+ help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub which will
22
+ even let you post those results directly on the model repo!
23
+
24
+ Currently we support searching for models that can be used in `timm` and `transformers`.
25
+
26
+ <Tip>
27
+
28
+ This API will load the model into memory on the `meta` device, so we are not actually downloading
29
+ and loading the full weights of the model into memory, nor do we need to. As a result it's
30
+ perfectly fine to measure 8 billion parameter models (or more), without having to worry about
31
+ whether your CPU can handle it!
32
+
33
+ </Tip>
34
+
35
+ ## Gradio Demos
36
+
37
+ Below are a few gradio demos related to what was described above. The first is the official Hugging Face memory estimation space, utilizing Accelerate directly:
38
+
39
+ <div class="block dark:hidden">
40
+ <iframe
41
+ src="https://hf-accelerate-model-memory-usage.hf.space?__theme=light"
42
+ width="850"
43
+ height="1600"
44
+ ></iframe>
45
+ </div>
46
+ <div class="hidden dark:block">
47
+ <iframe
48
+ src="https://hf-accelerate-model-memory-usage.hf.space?__theme=dark"
49
+ width="850"
50
+ height="1600"
51
+ ></iframe>
52
+ </div>
53
+
54
+ A community member has taken the idea and expanded it further, allowing you to filter models directly and see if you can run a particular LLM given GPU constraints and LoRA configurations. To play with it, see [here](https://huggingface.co/spaces/Vokturz/can-it-run-llm) for more details.
55
+
56
+ ## The Command
57
+
58
+ When using `accelerate estimate-memory`, you need to pass in the name of the model you want to use, potentially the framework
59
+ that the model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.
60
+
61
+ For example, here is how we can calculate the memory footprint for `bert-base-cased`:
62
+
63
+ ```bash
64
+ accelerate estimate-memory bert-base-cased
65
+ ```
66
+
67
+ This will download the `config.json` for `bert-base-cased`, load the model on the `meta` device, and report back how much space
68
+ it will use:
69
+
70
+ Memory Usage for loading `bert-base-cased`:
71
+
72
+ | dtype | Largest Layer | Total Size | Training using Adam |
73
+ |---------|---------------|------------|---------------------|
74
+ | float32 | 84.95 MB | 418.18 MB | 1.61 GB |
75
+ | float16 | 42.47 MB | 206.59 MB | 826.36 MB |
76
+ | int8 | 21.24 MB | 103.29 MB | 413.18 MB |
77
+ | int4 | 10.62 MB | 51.65 MB | 206.59 MB |
78
+
79
+ By default it will return all the supported dtypes (`int4` through `float32`), but if you are interested in specific ones these can be filtered.
80
+
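+ The numbers above are essentially the parameter count multiplied by the bytes each dtype uses per parameter, with the "Training using Adam" column being roughly 4x the total size (weights, gradients and Adam's two optimizer states); for instance, 4 x 206.59 MB = 826.36 MB in the float16 row. A back-of-the-envelope sketch of that heuristic (the parameter count below is an illustrative approximation, not output from the tool):
+
+ ```python
+ # Rough size heuristic mirroring the table above:
+ # model size = parameters * bytes per parameter, training with Adam ~= 4x model size
+ BYTES_PER_DTYPE = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}
+
+ def estimate_sizes_mb(num_parameters: int, dtype: str = "float32"):
+     model_bytes = num_parameters * BYTES_PER_DTYPE[dtype]
+     training_bytes = 4 * model_bytes
+     return model_bytes / 2**20, training_bytes / 2**20
+
+ # ~110M parameters, roughly the size of bert-base-cased (illustrative assumption)
+ print(estimate_sizes_mb(110_000_000, "float16"))  # ≈ (209.8, 839.2) MB
+ ```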
81
+ ### Specific libraries
82
+
83
+ If the source library cannot be determined automatically (like it could in the case of `bert-base-cased`), a library name can
84
+ be passed in.
85
+
86
+ ```bash
87
+ accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers
88
+ ```
89
+
90
+ Memory Usage for loading `HuggingFaceM4/idefics-80b-instruct`:
91
+
92
+ | dtype | Largest Layer | Total Size | Training using Adam |
93
+ |---------|---------------|------------|---------------------|
94
+ | float32 | 3.02 GB | 297.12 GB | 1.16 TB |
95
+ | float16 | 1.51 GB | 148.56 GB | 594.24 GB |
96
+ | int8 | 772.52 MB | 74.28 GB | 297.12 GB |
97
+ | int4 | 386.26 MB | 37.14 GB | 148.56 GB |
98
+
99
+
100
+ ```bash
101
+ accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm
102
+ ```
103
+
104
+ Memory Usage for loading `timm/resnet50.a1_in1k`:
105
+
106
+ | dtype | Largest Layer | Total Size | Training using Adam |
107
+ |---------|---------------|------------|---------------------|
108
+ | float32 | 9.0 MB | 97.7 MB | 390.78 MB |
109
+ | float16 | 4.5 MB | 48.85 MB | 195.39 MB |
110
+ | int8 | 2.25 MB | 24.42 MB | 97.7 MB |
111
+ | int4 | 1.12 MB | 12.21 MB | 48.85 MB |
112
+
113
+ ### Specific dtypes
114
+
115
+ As mentioned earlier, while we return `int4` through `float32` by default, any dtype can be used from `float32`, `float16`, `int8`, and `int4`.
116
+
117
+ To do so, pass them in after specifying `--dtypes`:
118
+
119
+ ```bash
120
+ accelerate estimate-memory bert-base-cased --dtypes float32 float16
121
+ ```
122
+
123
+ Memory Usage for loading `bert-base-cased`:
124
+
125
+ | dtype | Largest Layer | Total Size | Training using Adam |
126
+ |---------|---------------|------------|---------------------|
127
+ | float32 | 84.95 MB | 413.18 MB | 1.61 GB |
128
+ | float16 | 42.47 MB | 206.59 MB | 826.36 MB |
129
+
130
+ ## Caveats with this calculator
131
+
132
+ This calculator will tell you how much memory is needed to purely load the model in, *not* to perform inference.
133
+
134
+ This calculation is accurate within a few percent of the actual value, so it is a very good view of just how much memory it will take. For instance, loading `bert-base-cased` actually takes `413.68 MB` when loaded on CUDA in full precision, while the calculator estimates `413.18 MB`.
135
+
136
+ When performing inference, you can expect to add up to an additional 20%, as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). We'll be conducting research into finding a more accurate estimate of these values, and will update
137
+ this calculator once done.
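+
+ Until then, here is a quick back-of-the-envelope sketch of that ~20% rule of thumb (the `413.18 MB` figure is the float32 "Total Size" reported above for `bert-base-cased`; the real overhead depends on the model and workload):
+
+ ```python
+ # Rough inference estimate: raw weight size plus ~20% overhead (EleutherAI's rule of thumb).
+ total_size_mb = 413.18  # "Total Size" for bert-base-cased in float32
+ estimated_inference_mb = total_size_mb * 1.2
+ print(f"~{estimated_inference_mb:.2f} MB")  # ~495.82 MB
+ ```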
docs/source/usage_guides/mps.md ADDED
@@ -0,0 +1,54 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Accelerated PyTorch Training on Mac
17
+
18
+ With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.
19
+ This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
20
+ Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device.
21
+ This maps computational graphs and primitives onto the MPS Graph framework and the tuned kernels provided by MPS.
22
+ For more information, please refer to the official documentation: [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/)
23
+ and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).
24
+
25
+ ### Benefits of Training and Inference using Apple Silicon Chips
26
+
27
+ 1. Enables users to train larger networks or batch sizes locally
28
+ 2. Reduces data retrieval latency and gives the GPU direct access to the full memory store thanks to the unified memory architecture,
29
+ which improves end-to-end performance.
30
+ 3. Reduces costs associated with cloud-based development or the need for additional local GPUs.
31
+
32
+ **Pre-requisites**: To install torch with mps support,
33
+ please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).
34
+
35
+
36
+ ## How it works out of the box
37
+ It is enabled by default on macOS machines with MPS-enabled Apple Silicon GPUs.
38
+ To disable it, pass the `--cpu` flag to the `accelerate launch` command or answer the corresponding question in the `accelerate config` questionnaire.
39
+
40
+ You can directly run the following script to test it out on MPS enabled Apple Silicon machines:
41
+ ```bash
42
+ accelerate launch ./examples/cv_example.py --data_dir images
43
+ ```
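+
+ If you want to sanity-check the device selection from Python, here is a minimal sketch (assuming a PyTorch build with MPS support):
+
+ ```python
+ import torch
+
+ from accelerate import Accelerator
+
+ accelerator = Accelerator()
+ print(torch.backends.mps.is_available())  # True on a supported macOS + PyTorch build
+ print(accelerator.device)                 # should report the "mps" device
+ ```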
44
+
45
+ ## A few caveats to be aware of
46
+
47
+ 1. We strongly recommend installing PyTorch >= 1.13 (nightly version at the time of writing) on your macOS machine.
48
+ It has major fixes related to model correctness and performance improvements for transformer based models.
49
+ Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
50
+ 2. Distributed setups `gloo` and `nccl` do not work with the `mps` device.
51
+ This means that currently only a single GPU of the `mps` device type can be used.
52
+
53
+ Finally, please remember that 🤗 `Accelerate` only integrates the MPS backend, so if you
54
+ have any problems or questions regarding MPS backend usage, please file an issue on [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).
docs/source/usage_guides/quantization.md ADDED
@@ -0,0 +1,136 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Quantization
17
+
18
+ ## `bitsandbytes` Integration
19
+
20
+ 🤗 Accelerate brings `bitsandbytes` quantization to your model. You can now load any PyTorch model in 8-bit or 4-bit precision with a few lines of code.
21
+
22
+ If you want to use 🤗 Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
23
+
24
+ To learn more about how the `bitsandbytes` quantization works, check out the blog posts on [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and [4-bit quantization](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
25
+
26
+ ### Pre-Requisites
27
+ You will need to install the following requirements:
28
+
29
+ - Install `bitsandbytes` library
30
+ ```bash
31
+ pip install bitsandbytes
32
+ ```
33
+ - Install latest `accelerate` from source
34
+ ```bash
35
+ pip install git+https://github.com/huggingface/accelerate.git
36
+ ```
37
+ - Install `minGPT` and `huggingface_hub` to run examples
38
+ ```bash
39
+ git clone https://github.com/karpathy/minGPT.git
40
+ pip install minGPT/
41
+ pip install huggingface_hub
42
+ ```
43
+
44
+ ### How it works
45
+
46
+ First, we need to initialize our model. To save memory, we can initialize an empty model using the context manager [`init_empty_weights`].
47
+
48
+ Let's take the GPT2 model from the minGPT library.
49
+ ```py
50
+ from accelerate import init_empty_weights
51
+ from mingpt.model import GPT
52
+
53
+ model_config = GPT.get_default_config()
54
+ model_config.model_type = 'gpt2-xl'
55
+ model_config.vocab_size = 50257
56
+ model_config.block_size = 1024
57
+
58
+ with init_empty_weights():
59
+ empty_model = GPT(model_config)
60
+ ```
61
+
62
+ Then, we need to get the path to the weights of your model. The path can be the state_dict file (e.g. "pytorch_model.bin") or a folder containing the sharded checkpoints.
63
+
64
+ ```py
65
+ from huggingface_hub import snapshot_download
66
+ weights_location = snapshot_download(repo_id="marcsun13/gpt2-xl-linear-sharded")
67
+ ```
68
+
69
+ Finally, you need to set your quantization configuration with [`~utils.BnbQuantizationConfig`].
70
+
71
+ Here's an example for 8-bit quantization:
72
+ ```py
73
+ from accelerate.utils import BnbQuantizationConfig
74
+ bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold = 6)
75
+ ```
76
+
77
+ Here's an example for 4-bit quantization:
78
+ ```py
79
+ import torch
+
+ from accelerate.utils import BnbQuantizationConfig
80
+ bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
81
+ ```
82
+
83
+ To quantize your empty model with the selected configuration, you need to use [`~utils.load_and_quantize_model`].
84
+
85
+ ```py
86
+ from accelerate.utils import load_and_quantize_model
87
+ quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
88
+ ```
89
+
90
+ ### Saving and loading 8-bit model
91
+
92
+ You can save your 8-bit model with accelerate using [`~Accelerator.save_model`].
93
+
94
+ ```py
95
+ from accelerate import Accelerator
96
+ accelerator = Accelerator()
97
+ new_weights_location = "path/to/save_directory"
98
+ accelerator.save_model(quantized_model, new_weights_location)
99
+
100
+ quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
101
+ ```
102
+
103
+ Note that 4-bit model serialization is currently not supported.
104
+
105
+ ### Offload modules to cpu and disk
106
+
107
+ You can offload some modules to CPU/disk if you don't have enough GPU memory to store the entire model.
108
+ This uses big model inference under the hood. Check this [documentation](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) for more details.
109
+
110
+ For 8-bit quantization, the selected modules will be converted to 8-bit precision.
111
+
112
+ For 4-bit quantization, the selected modules will be kept in the `torch_dtype` that the user passed in `BnbQuantizationConfig`. We will add support for converting these offloaded modules to 4-bit once 4-bit serialization becomes possible.
113
+
114
+ You just need to pass a custom `device_map` in order to offload modules to CPU/disk. The offloaded modules will be dispatched to the GPU when needed. Here's an example:
115
+
116
+ ```py
117
+ device_map = {
118
+ "transformer.wte": 0,
119
+ "transformer.wpe": 0,
120
+ "transformer.drop": 0,
121
+ "transformer.h": "cpu",
122
+ "transformer.ln_f": "disk",
123
+ "lm_head": "disk",
124
+ }
125
+ ```
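+
+ As a minimal sketch of how this map could be used (assuming the `offload_folder` argument of `load_and_quantize_model` as the destination for the modules mapped to `"disk"`; the folder name is illustrative):
+
+ ```py
+ from accelerate.utils import load_and_quantize_model
+
+ quantized_model = load_and_quantize_model(
+     empty_model,
+     weights_location=weights_location,
+     bnb_quantization_config=bnb_quantization_config,
+     device_map=device_map,  # the custom map defined above
+     offload_folder="offload_dir",  # illustrative local folder for the "disk" entries
+ )
+ ```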
126
+ ### Fine-tune a quantized model
127
+
128
+ It is not possible to perform pure 8-bit or 4-bit training on these models. However, you can train these models by leveraging parameter-efficient fine-tuning (PEFT) methods, for example by training adapters on top of them. Please have a look at the [peft](https://github.com/huggingface/peft) library for more details.
129
+
130
+ Currently, you can't add adapters on top of a model quantized with this integration. However, with the official support for adapters in 🤗 Transformers models, you can fine-tune quantized models. If you want to finetune a 🤗 Transformers model, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead. Check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit 🤗 Transformers model.
131
+
132
+ Note that you don’t need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. Please note that `device_map="auto"` should be used for inference only.
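+
+ As a short, non-authoritative sketch of the 🤗 Transformers + PEFT route described above (the model name, LoRA hyperparameters, and `target_modules` are illustrative assumptions; refer to the linked documentation for the supported options):
+
+ ```py
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ # Load a 🤗 Transformers model quantized to 4-bit via its bitsandbytes integration.
+ model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
+
+ # Attach small trainable LoRA adapters on top of the frozen quantized weights.
+ peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
+ model = get_peft_model(model, peft_config)
+ model.print_trainable_parameters()
+ ```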
133
+
134
+ ### Example demo - running GPT2 1.5b on a Google Colab
135
+
136
+ Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32, which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.
docs/source/usage_guides/sagemaker.md ADDED
@@ -0,0 +1,205 @@
1
+ <!--Copyright 2021 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Amazon SageMaker
17
+
18
+ Hugging Face and Amazon introduced new [Hugging Face Deep Learning Containers (DLCs)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) to
19
+ make it easier than ever to train Hugging Face Transformer models in [Amazon SageMaker](https://aws.amazon.com/sagemaker/).
20
+
21
+ ## Getting Started
22
+
23
+ ### Setup & Installation
24
+
25
+
26
+ Before you can run your 🤗 Accelerate scripts on Amazon SageMaker you need to sign up for an AWS account. If you do not
27
+ have an AWS account yet, learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).
28
+
29
+ After you have your AWS account, you need to install the `sagemaker` SDK for 🤗 Accelerate with:
30
+
31
+ ```bash
32
+ pip install "accelerate[sagemaker]" --upgrade
33
+ ```
34
+
35
+ 🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. 🤗
36
+ Accelerate is not in the DLC yet (it will soon be added!), so to use it within Amazon SageMaker you need to create a
37
+ `requirements.txt` in the same directory where your training script is located and add it as a dependency:
38
+
39
+ ```
40
+ accelerate
41
+ ```
42
+
43
+ You should also add any other dependencies you have to this `requirements.txt`.
44
+
45
+
46
+ ### Configure 🤗 Accelerate
47
+
48
+ You can configure the launch configuration for Amazon SageMaker the same as you do for non-SageMaker training jobs with
49
+ the 🤗 Accelerate CLI:
50
+
51
+ ```bash
52
+ accelerate config
53
+ # In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 1
54
+ ```
55
+
56
+ 🤗 Accelerate will go through a questionnaire about your Amazon SageMaker setup and create a config file you can edit.
57
+
58
+ <Tip>
59
+
60
+ 🤗 Accelerate does not save any of your credentials.
61
+
62
+ </Tip>
63
+
64
+ ### Prepare a 🤗 Accelerate fine-tuning script
65
+
66
+ The training script is very similar to a training script you might run outside of SageMaker, but to save your model
67
+ after training you need to specify either `/opt/ml/model` or use `os.environ["SM_MODEL_DIR"]` as your save
68
+ directory. After training, artifacts in this directory are uploaded to S3:
69
+
70
+
71
+ ```diff
72
+ - torch.save(model, '/opt/ml/model')
73
+ + accelerator.save(model, '/opt/ml/model')
74
+ ```
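+
+ A minimal sketch of what this can look like at the end of a training script (the `model` variable and the `model.pt` filename are placeholders for your own objects and naming):
+
+ ```python
+ import os
+
+ from accelerate import Accelerator
+
+ accelerator = Accelerator()
+ # ... training loop ...
+
+ # SageMaker uploads everything written to SM_MODEL_DIR (usually /opt/ml/model) to S3 after training.
+ save_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
+ accelerator.wait_for_everyone()
+ accelerator.save(accelerator.unwrap_model(model), os.path.join(save_dir, "model.pt"))
+ ```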
75
+
76
+ <Tip warning={true}>
77
+
78
+ SageMaker doesn’t support argparse actions. If you want to use, for example, boolean hyperparameters, you need to
79
+ specify the type as `bool` in your script and provide an explicit `True` or `False` value for this hyperparameter. [[REF]](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#prepare-a-pytorch-training-script).
80
+
81
+ </Tip>
82
+
83
+ ### Launch Training
84
+
85
+ You can launch your training with 🤗 Accelerate CLI with:
86
+
87
+ ```bash
88
+ accelerate launch path_to_script.py --args_to_the_script
89
+ ```
90
+
91
+ This will launch your training script using your configuration. The only thing you have to do is provide all the
92
+ arguments needed by your training script as named arguments.
93
+
94
+ **Examples**
95
+
96
+ <Tip>
97
+
98
+ If you run one of the example scripts, don't forget to add `accelerator.save('/opt/ml/model')` to it.
99
+
100
+ </Tip>
101
+
102
+ ```bash
103
+ accelerate launch ./examples/sagemaker_example.py
104
+ ```
105
+
106
+ Outputs:
107
+
108
+ ```
109
+ Configuring Amazon SageMaker environment
110
+ Converting Arguments to Hyperparameters
111
+ Creating Estimator
112
+ 2021-04-08 11:56:50 Starting - Starting the training job...
113
+ 2021-04-08 11:57:13 Starting - Launching requested ML instancesProfilerReport-1617883008: InProgress
114
+ .........
115
+ 2021-04-08 11:58:54 Starting - Preparing the instances for training.........
116
+ 2021-04-08 12:00:24 Downloading - Downloading input data
117
+ 2021-04-08 12:00:24 Training - Downloading the training image..................
118
+ 2021-04-08 12:03:39 Training - Training image download completed. Training in progress..
119
+ ........
120
+ epoch 0: {'accuracy': 0.7598039215686274, 'f1': 0.8178438661710037}
121
+ epoch 1: {'accuracy': 0.8357843137254902, 'f1': 0.882249560632689}
122
+ epoch 2: {'accuracy': 0.8406862745098039, 'f1': 0.8869565217391304}
123
+ ........
124
+ 2021-04-08 12:05:40 Uploading - Uploading generated training model
125
+ 2021-04-08 12:05:40 Completed - Training job completed
126
+ Training seconds: 331
127
+ Billable seconds: 331
128
+ You can find your model data at: s3://your-bucket/accelerate-sagemaker-1-2021-04-08-11-56-47-108/output/model.tar.gz
129
+ ```
130
+
131
+ ## Advanced Features
132
+
133
+ ### Distributed Training: Data Parallelism
134
+
135
+ Set up the 🤗 Accelerate config by running `accelerate config` and answering the SageMaker questions.
136
+ To use SageMaker DDP, select it when asked
137
+ `What is the distributed mode? ([0] No distributed training, [1] data parallelism):`.
138
+ Example config below:
139
+ ```yaml
140
+ base_job_name: accelerate-sagemaker-1
141
+ compute_environment: AMAZON_SAGEMAKER
142
+ distributed_type: DATA_PARALLEL
143
+ ec2_instance_type: ml.p3.16xlarge
144
+ iam_role_name: xxxxx
145
+ image_uri: null
146
+ mixed_precision: fp16
147
+ num_machines: 1
148
+ profile: xxxxx
149
+ py_version: py38
150
+ pytorch_version: 1.10.2
151
+ region: us-east-1
152
+ transformers_version: 4.17.0
153
+ use_cpu: false
154
+ ```
155
+
156
+ ### Distributed Training: Model Parallelism
157
+
158
+ *currently in development, will be supported soon.*
159
+
160
+ ### Python packages and dependencies
161
+
162
+ 🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. If you
163
+ want to use different/other Python packages you can do this by adding them to the `requirements.txt`. These packages
164
+ will be installed before your training script is started.
165
+
166
+ ### Local Training: SageMaker Local mode
167
+
168
+ The local mode in the SageMaker SDK allows you to run your training script locally inside the HuggingFace DLC (Deep Learning container)
169
+ or using your custom container image. This is useful for debugging and testing your training script inside the final container environment.
170
+ Local mode uses Docker Compose (*Note: Docker Compose V2 is not supported yet*). The SDK will handle the authentication against ECR
171
+ to pull the DLC to your local environment. You can emulate CPU (single and multi-instance) and GPU (single instance) SageMaker training jobs.
172
+
173
+ To use local mode, you need to set your `ec2_instance_type` to `local`.
174
+
175
+ ```yaml
176
+ ec2_instance_type: local
177
+ ```
178
+
179
+ ### Advanced configuration
180
+
181
+ The configuration allows you to override parameters for the [Estimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
182
+ These settings have to be applied in the config file and are not part of `accelerate config`. You can control many additional aspects of the training job, e.g. using Spot Instances, enabling network isolation, and many more.
183
+
184
+ ```yaml
185
+ additional_args:
186
+ # enable network isolation to restrict internet access for containers
187
+ enable_network_isolation: True
188
+ ```
189
+
190
+ You can find all available configuration options [here](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
191
+
192
+ ### Use Spot Instances
193
+
194
+ You can use Spot Instances, e.g. with the following settings (see [Advanced configuration](#advanced-configuration)):
195
+ ```yaml
196
+ additional_args:
197
+ use_spot_instances: True
198
+ max_wait: 86400
199
+ ```
200
+
201
+ *Note: Spot Instances can be terminated at any time, in which case training has to be resumed from a checkpoint. This is not handled by 🤗 Accelerate out of the box. Contact us if you would like this feature.*
202
+
203
+ ### Remote scripts: Use scripts located on Github
204
+
205
+ *undecided if feature is needed. Contact us if you would like this feature.*
docs/source/usage_guides/tracking.md ADDED
@@ -0,0 +1,233 @@
1
+ <!--Copyright 2022 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # Tracking
17
+
18
+ There are a large number of experiment tracking APIs available; however, getting them all to work in a multi-processing environment can often be complex.
19
+ 🤗 Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`].
20
+
21
+ ## Integrated Trackers
22
+
23
+ Currently `Accelerate` supports seven trackers out-of-the-box:
24
+
25
+ - TensorBoard
26
+ - WandB
27
+ - CometML
28
+ - Aim
29
+ - MLFlow
30
+ - ClearML
31
+ - DVCLive
32
+
33
+ To use any of them, pass in the selected type(s) to the `log_with` parameter in [`Accelerator`]:
34
+ ```python
35
+ from accelerate import Accelerator
36
+ from accelerate.utils import LoggerType
37
+
38
+ accelerator = Accelerator(log_with="all") # For all available trackers in the environment
39
+ accelerator = Accelerator(log_with="wandb")
40
+ accelerator = Accelerator(log_with=["wandb", LoggerType.TENSORBOARD])
41
+ ```
42
+
43
+ At the start of your experiment, [`Accelerator.init_trackers`] should be used to set up your project, and potentially add any experiment hyperparameters to be logged:
44
+ ```python
45
+ hps = {"num_iterations": 5, "learning_rate": 1e-2}
46
+ accelerator.init_trackers("my_project", config=hps)
47
+ ```
48
+
49
+ When you are ready to log any data, [`Accelerator.log`] should be used.
50
+ A `step` can also be passed in to correlate the data with a particular step in the training loop.
51
+ ```python
52
+ accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=1)
53
+ ```
54
+
55
+ Once you've finished training, make sure to run [`Accelerator.end_training`] so that all the trackers can run their finish functionalities if they have any.
56
+ ```python
57
+ accelerator.end_training()
58
+ ```
59
+
60
+
61
+ A full example is below:
62
+ ```python
63
+ from accelerate import Accelerator
64
+
65
+ accelerator = Accelerator(log_with="all")
66
+ config = {
67
+ "num_iterations": 5,
68
+ "learning_rate": 1e-2,
69
+ "loss_function": str(my_loss_function),
70
+ }
71
+
72
+ accelerator.init_trackers("example_project", config=config)
73
+
74
+ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
75
+ device = accelerator.device
76
+ my_model.to(device)
77
+
78
+ for iteration in range(config["num_iterations"]):
79
+ for step, batch in enumerate(my_training_dataloader):
80
+ my_optimizer.zero_grad()
81
+ inputs, targets = batch
82
+ inputs = inputs.to(device)
83
+ targets = targets.to(device)
84
+ outputs = my_model(inputs)
85
+ loss = my_loss_function(outputs, targets)
86
+ accelerator.backward(loss)
87
+ my_optimizer.step()
88
+ accelerator.log({"training_loss": loss}, step=step)
89
+ accelerator.end_training()
90
+ ```
91
+
92
+ If a tracker requires a directory to save data to, such as `TensorBoard`, then pass the directory path to `project_dir`. The `project_dir` parameter is useful
93
+ when there are other configurations to combine with it in the [`~utils.ProjectConfiguration`] data class. For example, you can save the TensorBoard data to `project_dir` while everything else is logged to the `logging_dir` parameter of [`~utils.ProjectConfiguration`]:
94
+
95
+ ```python
96
+ from accelerate.utils import ProjectConfiguration
+
+ accelerator = Accelerator(log_with="tensorboard", project_dir=".")
97
+
98
+ # use with ProjectConfiguration
99
+ config = ProjectConfiguration(project_dir=".", logging_dir="another/directory")
100
+ accelerator = Accelerator(log_with="tensorboard", project_config=config)
101
+ ```
102
+
103
+ ## Implementing Custom Trackers
104
+
105
+ To implement a new tracker to be used in `Accelerator`, create one by subclassing the [`GeneralTracker`] class.
106
+ Every tracker must implement three functions and have three properties:
107
+ - `__init__`:
108
+ - Should store a `run_name` and initialize the tracker API of the integrated library.
109
+ - If a tracker stores their data locally (such as TensorBoard), a `logging_dir` parameter can be added.
110
+ - `store_init_configuration`:
111
+ - Should take in a `values` dictionary and store them as a one-time experiment configuration
112
+ - `log`:
113
+ - Should take in a `values` dictionary and a `step`, and should log them to the run
114
+
115
+ - `name` (`str`):
116
+ - A unique string name for the tracker, such as `"wandb"` for the wandb tracker.
117
+ - This will be used for interacting with this tracker specifically
118
+ - `requires_logging_directory` (`bool`):
119
+ - Whether a `logging_dir` is needed for this particular tracker and if it uses one.
120
+ - `tracker`:
121
+ - This should be implemented as a `@property` function
122
+ - Should return the internal tracking mechanism the library uses, such as the `run` object for `wandb`.
123
+
124
+ Each method should also utilize the [`state.PartialState`] class if, for instance, the logger should only be executed on the main process.
125
+
126
+ A brief example can be seen below of an integration with Weights and Biases, containing only the relevant information and logging just on
127
+ the main process:
128
+ ```python
129
+ from accelerate.tracking import GeneralTracker, on_main_process
130
+ from typing import Optional
131
+
132
+ import wandb
133
+
134
+
135
+ class MyCustomTracker(GeneralTracker):
136
+ name = "wandb"
137
+ requires_logging_directory = False
138
+
139
+ @on_main_process
140
+ def __init__(self, run_name: str):
141
+ self.run_name = run_name
142
+ self.run = wandb.init(project=self.run_name)
143
+
144
+ @property
145
+ def tracker(self):
146
+ return self.run
147
+
148
+ @on_main_process
149
+ def store_init_configuration(self, values: dict):
150
+ wandb.config.update(values)
151
+
152
+ @on_main_process
153
+ def log(self, values: dict, step: Optional[int] = None):
154
+ wandb.log(values, step=step)
155
+ ```
156
+
157
+ When you are ready to build your `Accelerator` object, pass in an **instance** of your tracker to the `log_with` parameter of [`Accelerator`] to have it automatically
158
+ be used with the API:
159
+
160
+ ```python
161
+ tracker = MyCustomTracker("some_run_name")
162
+ accelerator = Accelerator(log_with=tracker)
163
+ ```
164
+
165
+ These also can be mixed with existing trackers, including with `"all"`:
166
+
167
+ ```python
168
+ tracker = MyCustomTracker("some_run_name")
169
+ accelerator = Accelerator(log_with=[tracker, "all"])
170
+ ```
171
+
172
+ ## Accessing the internal tracker
173
+
174
+ If you want some custom interactions with a tracker directly, you can quickly access one using the
175
+ [`Accelerator.get_tracker`] method. Just pass in the string corresponding to a tracker's `.name` attribute
176
+ and it will return that tracker on the main process.
177
+
178
+ This example shows doing so with wandb:
179
+
180
+ ```python
181
+ wandb_tracker = accelerator.get_tracker("wandb")
182
+ ```
183
+
184
+ From there you can interact with `wandb`'s `run` object like normal:
185
+
186
+ ```python
187
+ wandb_tracker.log_artifact(some_artifact_to_log)
188
+ ```
189
+
190
+ <Tip>
191
+ Trackers built in Accelerate will automatically execute on the correct process,
192
+ so if a tracker is only meant to be run on the main process it will do so
193
+ automatically.
194
+ </Tip>
195
+
196
+ If you want to truly remove Accelerate's wrapping entirely, you can
197
+ achieve the same outcome with:
198
+
199
+ ```python
200
+ wandb_tracker = accelerator.get_tracker("wandb", unwrap=True)
201
+ if accelerator.is_main_process:
202
+ wandb_tracker.log_artifact(some_artifact_to_log)
203
+ ```
204
+
205
+
206
+ ## When a wrapper cannot work
207
+
208
+ If a library has an API that does not follow a strict `.log` call with an overall dictionary, such as Neptune.AI, logging can be done manually under an `if accelerator.is_main_process` statement:
209
+ ```diff
210
+ from accelerate import Accelerator
211
+ + import neptune.new as neptune
212
+
213
+ accelerator = Accelerator()
214
+ + run = neptune.init(...)
215
+
216
+ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
217
+ device = accelerator.device
218
+ my_model.to(device)
219
+
220
+ for iteration in range(config["num_iterations"]):
221
+ for batch in my_training_dataloader:
222
+ my_optimizer.zero_grad()
223
+ inputs, targets = batch
224
+ inputs = inputs.to(device)
225
+ targets = targets.to(device)
226
+ outputs = my_model(inputs)
227
+ loss = my_loss_function(outputs, targets)
228
+ total_loss += loss
229
+ accelerator.backward(loss)
230
+ my_optimizer.step()
231
+ + if accelerator.is_main_process:
232
+ + run["logs/training/batch/loss"].log(loss)
233
+ ```