# Accelerator

The [`Accelerator`] is the main class provided by 🤗 Accelerate.
It serves as the main entry point for the API.

## Quick adaptation of your code

To quickly adapt your script to work on any kind of setup with 🤗 Accelerate, just:

1. Initialize an [`Accelerator`] object (that we will call `accelerator` throughout this page) as early as possible in your script.
2. Pass your dataloader(s), model(s), optimizer(s), and scheduler(s) to the [`~Accelerator.prepare`] method.
3. Remove all the `.cuda()` or `.to(device)` calls from your code and let the `accelerator` handle device placement for you.

<Tip>

    Step three is optional, but considered a best practice.

</Tip>

4. Replace `loss.backward()` in your code with `accelerator.backward(loss)`.
5. Gather your predictions and labels with [`~Accelerator.gather`] before storing them or using them for metric computation.

<Tip warning={true}>

    Step five is mandatory when using distributed evaluation.
    
</Tip>
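
Putting these steps together, a minimal adapted training loop might look like the sketch below (here `model`, `optimizer`, `scheduler`, `training_dataloader`, and `loss_function` stand in for your own objects):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# prepare() wraps the objects for the current setup (CPU, single or multi GPU, TPU, ...)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for inputs, labels in training_dataloader:
    optimizer.zero_grad()
    predictions = model(inputs)  # no .to(device): the prepared dataloader handles placement
    loss = loss_function(predictions, labels)
    accelerator.backward(loss)  # instead of loss.backward()
    optimizer.step()
    scheduler.step()
```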

In most cases this is all that is needed. The next sections cover a few more advanced use cases and nice features;
search your code for the corresponding patterns and replace them with the matching methods of your `accelerator`:

## Advanced recommendations

### Printing

`print` statements should be replaced by [`~Accelerator.print`] to be printed once per process:

```diff
- print("My thing I want to print!")
+ accelerator.print("My thing I want to print!")
```

### Executing processes

#### Once on a single server

For statements that should be executed once per server, use [`~Accelerator.is_local_main_process`]:

```python
if accelerator.is_local_main_process:
    do_thing_once_per_server()
```

Alternatively, a function can be wrapped with the [`~Accelerator.on_local_main_process`] decorator to achieve the same behavior:

```python
@accelerator.on_local_main_process
def do_my_thing():
    "Something done once per server"
    do_thing_once_per_server()
```

#### Only ever once across all servers

For statements that should only ever be executed once, use [`~Accelerator.is_main_process`]:

```python
if accelerator.is_main_process:
    do_thing_once()
```

Alternatively, a function can be wrapped with the [`~Accelerator.on_main_process`] decorator to achieve the same behavior:

```python
@accelerator.on_main_process
def do_my_thing():
    "Something done only once across all servers"
    do_thing_once()
```

#### On specific processes

If a function should only run on a specific overall or local process index, there are similar decorators 
for that:

```python
@accelerator.on_local_process(local_process_idx=0)
def do_my_thing():
    "Something done on process index 0 on each server"
    do_thing_on_index_zero_on_each_server()
```

```python
@accelerator.on_process(process_index=0)
def do_my_thing():
    "Something done on process index 0"
    do_thing_on_index_zero()
```

### Synchronicity control

Use [`~Accelerator.wait_for_everyone`] to make sure all processes reach that point before continuing (useful before saving a model, for instance).
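
A common pattern is to let every process finish its work, then perform a one-off action on the main process only (a minimal sketch; `do_thing_once` is a placeholder for your own code):

```python
# block until every process has reached this point
accelerator.wait_for_everyone()

# then act on the main process only, e.g. write a file the other processes will read
if accelerator.is_main_process:
    do_thing_once()
```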

### Saving and loading

Assuming your model has gone through [`~Accelerator.prepare`]:

```python
model = MyModel()
model = accelerator.prepare(model)
```

Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, retrieve the model's state_dict, and save it. The state_dict will be in the same precision as the model being trained.

```diff
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save_model(model, save_directory)
```

[`~Accelerator.save_model`] can also save a model into sharded checkpoints or with safetensors format.
Here is an example: 

```python
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```

#### 🤗 Transformers models

If you are using models from the [🤗 Transformers](https://huggingface.co/docs/transformers/) library, you can use the `.save_pretrained()` method.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
model = accelerator.prepare(model)

# ...fine-tune with PyTorch...

unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "path/to/my_model_directory",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
)
```

This will ensure your model stays compatible with other 🤗 Transformers functionality like the `.from_pretrained()` method.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("path/to/my_model_directory")
```

### Operations

Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and [`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
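
For example, to clip gradients to a maximum norm right before the optimizer step (a minimal sketch; `max_norm=1.0` is just an illustrative value):

```python
accelerator.backward(loss)
# clip before the optimizer step; the accelerator takes care of unscaling under mixed precision
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```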

### Gradient Accumulation

To perform gradient accumulation, use [`~Accelerator.accumulate`] and specify `gradient_accumulation_steps`. 
This will also automatically ensure the gradients are synced or unsynced as appropriate during 
multi-device training, check if the step should actually be performed, and auto-scale the loss:

```diff
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)

  for (inputs, labels) in training_dataloader:
+     with accelerator.accumulate(model):
          predictions = model(inputs)
          loss = loss_function(predictions, labels)
          accelerator.backward(loss)
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
```

#### GradientAccumulationPlugin
[[autodoc]] utils.GradientAccumulationPlugin


Instead of passing `gradient_accumulation_steps`, you can instantiate a `GradientAccumulationPlugin` and pass it to the [`Accelerator`]'s `__init__`
as `gradient_accumulation_plugin`. You can only pass one of `gradient_accumulation_plugin` or `gradient_accumulation_steps`; passing both will raise an error.
```diff
from accelerate.utils import GradientAccumulationPlugin

gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=2)
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
```

In addition to the number of steps, the plugin also lets you configure whether your learning rate scheduler should be adjusted to account for the change in the number of optimizer steps caused by accumulation.
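
As a sketch, assuming your learning rate scheduler already accounts for accumulation, you could turn that adjustment off (the `adjust_scheduler` field defaults to `True`):

```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# accumulate over 4 batches and leave the learning rate scheduler untouched
gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=4, adjust_scheduler=False)
accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
```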

## Overall API documentation

[[autodoc]] Accelerator