Update README.md
README.md
@@ -380,8 +380,7 @@ python 5.run_clm-post.py --tokenizer_name /path/to/ZymCTRL
--do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10
--logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1
--per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04
- --dataloader_drop_last True --model_name_or_path /path/to/ZymCTRL
- --gradient_accumulation_steps 4
+ --dataloader_drop_last True --model_name_or_path /path/to/ZymCTRL
```
In any case, the original HuggingFace script run_clm.py can be found here:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
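For readability, the README command affected by this hunk assembles to roughly the following after the edit (shell line continuations added here for clarity; any flags not visible in this hunk, such as dataset-file arguments, are omitted, and /path/to/ZymCTRL is a placeholder for a local checkpoint):

```bash
python 5.run_clm-post.py --tokenizer_name /path/to/ZymCTRL \
  --do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10 \
  --logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04 \
  --dataloader_drop_last True --model_name_or_path /path/to/ZymCTRL
```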