gchhablani committed
Commit 154cd62 · 1 Parent(s): 3782ba8

Update README.md

Files changed (1): README.md (+1, -17)
README.md CHANGED
@@ -48,29 +48,13 @@ More information needed
 
 ## Training procedure
 
-
 This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
 
 ```bash
 #!/usr/bin/bash
 
 
- python ../run_glue.py \
- --model_name_or_path bert-base-cased \
- --task_name rte \
- --do_train \
- --do_eval \
- --max_seq_length 512 \
- --per_device_train_batch_size 16 \
- --learning_rate 2e-5 \
- --num_train_epochs 3 \
- --output_dir bert-base-cased-finetuned-rte \
- --push_to_hub \
- --hub_strategy all_checkpoints \
- --logging_strategy epoch \
- --save_strategy epoch \
- --evaluation_strategy epoch \
- ```
+ python ../run_glue.py \\n --model_name_or_path bert-base-cased \\n --task_name rte \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir bert-base-cased-finetuned-rte \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n```
 
 ### Training hyperparameters
 
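Note: the single line added by this commit appears to be the same invocation as the removed block, with the original line continuations collapsed into literal `\n` sequences. For readability, a multi-line equivalent reconstructed from the removed lines (with the stray trailing backslash before the closing fence dropped) would look like:

```bash
#!/usr/bin/bash
# Fine-tune bert-base-cased on the GLUE RTE task and push checkpoints to the Hub.
# Multi-line form reconstructed from the lines removed in this diff; the relative
# path assumes run_glue.py sits one directory above, as in the original command.
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```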