## Example configuration

Below is an example configuration for distributed training on a single machine with four XPUs, as it would be written by `accelerate config`:

```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_XPU
downcast_bf16: 'no'
enable_cpu_affinity: false
gpu_ids: 0,1,2,3
ipex_config:
  ipex: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
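For reference, a minimal training script that such a configuration could launch might look like the sketch below. The model, data, and hyperparameters are illustrative placeholders; the relevant parts are the `Accelerator` calls, since `accelerate launch` supplies everything device-related from the YAML above.

```python
# minimal_train.py - a minimal sketch of a script this config could launch.
# The model, dataset, and hyperparameters are placeholders, not a real workload.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator


def main():
    # Picks up the multi-XPU settings injected by `accelerate launch`
    accelerator = Accelerator()

    # Toy data and model standing in for a real training setup
    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = torch.nn.Linear(16, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    # prepare() moves everything to the right device and wraps the model
    # for distributed training across the configured processes
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()

    accelerator.print(f"finished training on {accelerator.num_processes} processes")


if __name__ == "__main__":
    main()
```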
## Launching the script

If the YAML was generated through the `accelerate config` command:

```bash
accelerate launch {script_name.py} {--arg1} {--arg2} ...
```

If the YAML is saved to a `~/config.yaml` file:

```bash
accelerate launch --config_file ~/config.yaml {script_name.py} {--arg1} {--arg2} ...
```

## Why use `accelerate launch`?

Launching on multi-XPU instances requires a different launch command than just `python myscript.py`. Accelerate wraps the proper launching script and delegates the call to it, reading how the configuration should be set from the parameters passed in. It is a passthrough to the `torchrun` command.

**Remember that you can always use the `accelerate launch` functionality, even if the code in your script does not use the `Accelerator`** (a minimal sanity-check script is sketched after the links below).

## To learn more, check out the related documentation:

- Launching distributed code
- The Command Line
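As a quick sanity check, the launch commands above can be pointed at a script like the following; the file name `check_setup.py` and the printed wording are illustrative. With the configuration shown earlier it should print one line per process, four in total.

```python
# check_setup.py - hypothetical sanity-check script for a multi-XPU launch.
# Launch it with `accelerate launch` to confirm each process sees the
# expected device, rank, and world size.
from accelerate import Accelerator


def main():
    accelerator = Accelerator()
    # Each process reports its own rank, device, and the distributed backend in use
    print(
        f"process {accelerator.process_index}/{accelerator.num_processes} "
        f"on {accelerator.device} ({accelerator.distributed_type})"
    )
    if accelerator.is_main_process:
        print("all processes launched")


if __name__ == "__main__":
    main()
```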