Update README.md
README.md CHANGED
@@ -7,11 +7,11 @@ license: mit

 ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

-[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder.
+[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.

 Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

-This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size.
+This is the DeBERTa V2 xxlarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.


 ### Fine-tuning on NLU tasks
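The architecture figures quoted above (48 layers, hidden size 1536, about 1.5B parameters) can be sanity-checked locally. This is a minimal sketch, assuming `transformers` and `torch` are installed; loading the full checkpoint downloads several GB of weights on the first run.

```bash
# Hedged sketch: confirm the advertised architecture and parameter count locally.
python - <<'EOF'
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("microsoft/deberta-v2-xxlarge")
print(config.num_hidden_layers, config.hidden_size)  # expected: 48 1536

# Downloads the full checkpoint; the count should come out close to 1.5B.
model = AutoModel.from_pretrained("microsoft/deberta-v2-xxlarge")
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
EOF
```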
@@ -46,20 +46,20 @@ export TASK_NAME=mnli
 output_dir="ds_results"
 num_gpus=8
 batch_size=8
-python -m torch.distributed.launch --nproc_per_node=${num_gpus}
-run_glue.py
---model_name_or_path microsoft/deberta-v2-xxlarge
---task_name $TASK_NAME
---do_train
---do_eval
---max_seq_length 256
---per_device_train_batch_size ${batch_size}
---learning_rate 3e-6
---num_train_epochs 3
---output_dir $output_dir
---overwrite_output_dir
---logging_steps 10
---logging_dir $output_dir
+python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
+run_glue.py \
+--model_name_or_path microsoft/deberta-v2-xxlarge \
+--task_name $TASK_NAME \
+--do_train \
+--do_eval \
+--max_seq_length 256 \
+--per_device_train_batch_size ${batch_size} \
+--learning_rate 3e-6 \
+--num_train_epochs 3 \
+--output_dir $output_dir \
+--overwrite_output_dir \
+--logging_steps 10 \
+--logging_dir $output_dir \
 --deepspeed ds_config.json
 ```

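The DeepSpeed launch above reads a `ds_config.json` that the card does not include. Purely as an illustration of the expected shape, and not the exact configuration behind any reported results, a minimal ZeRO stage-2 config that defers batch-size and fp16 choices to the Trainer via the `"auto"` convention might look like this:

```bash
# Illustrative only: write a minimal DeepSpeed config for the command above.
# "auto" fields are resolved by the Hugging Face Trainer from its own arguments.
cat > ds_config.json <<'EOF'
{
  "fp16": { "enabled": "auto" },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "train_batch_size": "auto"
}
EOF
```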
@@ -67,8 +67,8 @@ You can also run with `--sharded_ddp`
 ```bash
 cd transformers/examples/text-classification/
 export TASK_NAME=mnli
-python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge
---task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8
+python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
+--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
 --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
 ```

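One practical note on the `--sharded_ddp` variant above: in the transformers versions that expose this flag, sharded DDP is backed by FairScale, which is not installed with `transformers` itself. A hedged setup sketch (package set assumed, versions unpinned):

```bash
# Assumed prerequisites for the runs above; pin versions to match your transformers release.
pip install fairscale               # backs the Trainer's --sharded_ddp mode
pip install datasets sentencepiece  # run_glue.py loads GLUE via datasets; DeBERTa V2 ships a SentencePiece tokenizer
```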