# Migrating your code to πŸ€— Accelerate

This tutorial will detail how to easily convert existing PyTorch code to use πŸ€— Accelerate!
You'll see that by just changing a few lines of code, πŸ€— Accelerate can perform its magic and get you on 
your way toward running your code on distributed systems with ease!

## The base training loop

To begin, write out a very basic PyTorch training loop. 

<Tip>

    This tutorial assumes that `training_dataloader`, `model`, `optimizer`, `scheduler`, and `loss_function` have been defined beforehand.

</Tip>
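
If you want to follow along with a self-contained example, a minimal sketch of these objects might look like the following. The network shape, hyperparameters, and random data are purely illustrative placeholders; substitute your own model, data, and settings:

```python
# Minimal, illustrative definitions of the objects assumed by this tutorial.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
loss_function = nn.CrossEntropyLoss()

# Random data standing in for a real dataset.
dataset = TensorDataset(torch.randn(256, 64), torch.randint(0, 2, (256,)))
training_dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
```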

```python
device = "cuda"
model.to(device)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

## Add in πŸ€— Accelerate

To start using πŸ€— Accelerate, first import and create an [`Accelerator`] instance:
```python
from accelerate import Accelerator

accelerator = Accelerator()
```
[`Accelerator`] is the main force behind utilizing all the possible options for distributed training!
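
For example, options controlling how training is set up can be passed directly to the constructor. The snippet below is a minimal sketch, assuming your installed version supports the `mixed_precision` and `gradient_accumulation_steps` arguments; leaving them out keeps the defaults used in the rest of this tutorial:

```python
from accelerate import Accelerator

# Both arguments are optional and shown purely for illustration.
accelerator = Accelerator(
    mixed_precision="fp16",  # train in half precision where supported
    gradient_accumulation_steps=2,  # accumulate gradients over two batches
)
```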

### Setting the right device

The [`Accelerator`] class knows the right device to move any PyTorch object to at any time, so you should
change the definition of `device` to come from [`Accelerator`]:

```diff
- device = 'cuda'
+ device = accelerator.device
  model.to(device)
```

### Preparing your objects

Next, you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. πŸ€— Accelerate will
make sure everything is set up in the current environment for you to start training:

```python
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)
```
These objects are returned in the same order they were sent in. By default, when `device_placement=True`, every object that can be moved to the right device will be.
If you need to work with data that isn't passed to [`~Accelerator.prepare`] but should be on the active device, move it yourself using the `device` you defined earlier.
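
For example, a hypothetical `class_weights` tensor created outside of [`~Accelerator.prepare`] could be placed like this (a minimal sketch, not part of the loop above):

```python
import torch

# `class_weights` is a hypothetical extra tensor that is not passed to `prepare`,
# so it must be moved to the active device manually.
class_weights = torch.tensor([1.0, 2.0]).to(device)  # `device = accelerator.device` from earlier
```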

<Tip warning={true}>

    Accelerate will only prepare objects that inherit from their respective PyTorch classes (such as `torch.optim.Optimizer`).

</Tip>

### Modifying the training loop

Finally, three lines of code need to be changed in the training loop. πŸ€— Accelerate's DataLoader classes will automatically handle the device placement by default,
and [`~Accelerator.backward`] should be used for performing the backward pass:

```diff
-   inputs = inputs.to(device)
-   targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
-   loss.backward()
+   accelerator.backward(loss)
```

With that, your training loop is now ready to use πŸ€— Accelerate!

## The finished code

Below is the final version of the converted code: 

```python
from accelerate import Accelerator

accelerator = Accelerator()

model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
```

## More Resources

To learn more ways to migrate to πŸ€— Accelerate, check out our [interactive migration tutorial](https://huggingface.co/docs/accelerate/usage_guides/explore), which showcases other items to watch for when using Accelerate and how to handle them quickly.