Habana
regisss and olszd committed
Commit bd9a027
1 Parent(s): 954e75b

Remove deprecated Habana mixed precision from README (#5)


- Remove deprecated Habana mixed precision from README (561b46eecc1239ff3011d7220f26ae01f3082040)
- Update README.md (407981cd44b36f0f18db8d482c769804397bb545)


Co-authored-by: Dominika Olszewska <[email protected]>

Files changed (1)
  1. README.md +1 -5
README.md CHANGED
@@ -13,13 +13,9 @@ This model only contains the `GaudiConfig` file for running the [GPT2](https://h
  **This model contains no model weights, only a GaudiConfig.**

  This enables to specify:
- - `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- - `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
- - `hmp_bf16_ops`: list of operators that should run in bf16
- - `hmp_fp32_ops`: list of operators that should run in fp32
- - `hmp_is_verbose`: verbosity
  - `use_fused_adam`: whether to use Habana's custom AdamW implementation
  - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
+ - `use_torch_autocast`: whether to use PyTorch's autocast mixed precision

  ## Usage
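
For context, here is a minimal sketch of how a GaudiConfig repository like this one is typically consumed through optimum-habana's `GaudiTrainer`; the model, output directory, and dataset placeholders below are illustrative assumptions and are not part of this commit, and the script only runs on a machine with the Habana (HPU) software stack installed:

```python
# Hedged sketch: fine-tuning GPT2 on Gaudi using the GaudiConfig from this repo.
# Output directory and dataset are placeholders; adjust for a real run.
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForCausalLM, AutoTokenizer

# Weights come from the upstream gpt2 checkpoint; this repo only provides the GaudiConfig.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

training_args = GaudiTrainingArguments(
    output_dir="./gpt2-gaudi",        # placeholder output path
    use_habana=True,                  # run on HPU
    use_lazy_mode=True,               # lazy-mode graph execution
    gaudi_config_name="Habana/gpt2",  # pulls the GaudiConfig from this repository
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    # train_dataset=..., eval_dataset=...  # supply tokenized datasets before calling trainer.train()
)
```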