ai-team-ori committed on
Commit 45bff28 · verified · 1 Parent(s): 82124bc

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -115,7 +115,7 @@ library_name: transformers
 
 #### Finetuning:
 - **Novel Trainer Architecture**: A custom trainer was written to ensure efficient supervised finetuning, with custom callbacks to enable higher observability during the training process.
-- **Custom Dynamic Layer Freezing**: Most active layers were identified in the model by running inference on a subset of the training data using the pre-trained models. These layers were then kept frozen during the training process while all the other layers were kept frozen. This enabled faster convergence and efficient finetuning
+- **Custom Dynamic Layer Freezing**: Most active layers were identified in the model by running inference on a subset of the training data using the pre-trained models. These layers were then kept unfrozen during the training process while all the other layers were kept frozen. This enabled faster convergence and efficient finetuning
 - **Deepspeed Integration**: Deepspeed was also utilized to speed up, and optimize the training process.
 
 ### Performance Overview
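
The layer-freezing scheme described in the corrected bullet (run inference on a small subset of the training data, keep the most active layers trainable, freeze everything else) could be sketched roughly as below. This is an illustrative outline only, not the repository's actual trainer code: the `gpt2` checkpoint, the `score_layer_activity` helper, the activation-magnitude metric, and the `top_k` value are all assumptions.

```python
# Hypothetical sketch of dynamic layer freezing: score each transformer block
# by its mean activation magnitude on a data subset, then unfreeze only the
# top-k most "active" blocks before finetuning. All names and the scoring
# metric here are assumptions, not the actual implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; the real pre-trained model is not specified here
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def score_layer_activity(model, tokenizer, texts):
    """Mean hidden-state magnitude per transformer block over a small subset."""
    scores = None
    model.eval()
    with torch.no_grad():
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt")
            hidden_states = model(**inputs).hidden_states[1:]  # skip the embedding layer output
            norms = torch.stack([h.abs().mean() for h in hidden_states])
            scores = norms if scores is None else scores + norms
    return scores / len(texts)

subset = ["example training sentence", "another sample drawn from the training data"]
activity = score_layer_activity(model, tokenizer, subset)
top_k = 4  # assumed number of layers to keep trainable
active_layers = set(activity.topk(top_k).indices.tolist())

# Freeze every parameter, then unfreeze only the most active blocks.
for param in model.parameters():
    param.requires_grad = False
for idx, block in enumerate(model.transformer.h):  # gpt2-style block list
    if idx in active_layers:
        for param in block.parameters():
            param.requires_grad = True
```

With the non-selected blocks frozen, far fewer parameters receive gradient updates, which is consistent with the faster convergence and cheaper finetuning the bullet claims.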