liujqian: Upload 5 files (commit 9a4c48e)
Downloading (…)lve/main/config.json: 100%|██████████| 718/718 [00:00<00:00, 180kB/s]
Downloading (…)olve/main/vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 1.34MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████| 456k/456k [00:01<00:00, 283kB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████| 1.36M/1.36M [00:03<00:00, 386kB/s]
Downloading (…)pytorch_model.bin: 100%|██████████| 1.52G/1.52G [06:17<00:00, 4.03MB/s]
Downloading (…)neration_config.json: 100%|██████████| 124/124 [00:00<00:00, 24.8kB/s]
Found cached dataset common_gen (C:/Users/Jingqian/.cache/huggingface/datasets/common_gen/default/2020.5.30/1a9e8bdc026c41ce7a9e96260debf7d2809cb7fd63fa02b017e4fac1b00c6b23)
100%|██████████| 3/3 [00:00<00:00, 749.61it/s]
100%|██████████| 68/68 [00:01<00:00, 65.32ba/s]
100%|██████████| 5/5 [00:00<00:00, 84.73ba/s]
100%|██████████| 2/2 [00:00<00:00, 133.33ba/s]
100%|██████████| 68/68 [00:03<00:00, 22.30ba/s]
100%|██████████| 5/5 [00:00<00:00, 25.37ba/s]
100%|██████████| 2/2 [00:00<00:00, 76.81ba/s]
C:\Users\Jingqian\anaconda3\lib\site-packages\transformers\optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
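The FutureWarning above comes from the deprecated `transformers.optimization.AdamW`; as the message says, the fix is to use `torch.optim.AdamW` instead (for a `Trainer` run, it can be passed in via the `optimizers` argument). A minimal sketch with a stand-in model rather than the actual GPT-2:

```python
import torch

# Stand-in model for illustration only; the actual run fine-tunes gpt2-medium.
model = torch.nn.Linear(4, 2)

# PyTorch's own AdamW: same algorithm, no FutureWarning.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One update step, as the Trainer would perform internally.
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()
```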
***** Running training *****
Num examples = 4592
Num Epochs = 5
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 2870
Number of trainable parameters = 354823168
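A `TrainingArguments` sketch consistent with the settings summarized above. All values here are inferred from this log; the original training script is not shown and may differ:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run's configuration, inferred from the log.
args = TrainingArguments(
    output_dir="gpt2-medium-finetuned-commongen",
    num_train_epochs=5,                 # "Num Epochs = 5"
    per_device_train_batch_size=8,      # "Instantaneous batch size per device = 8"
    per_device_eval_batch_size=8,       # "Batch size = 8" during evaluation
    learning_rate=5e-5,                 # consistent with the logged linear decay
    evaluation_strategy="epoch",        # evals at steps 574, 1148, ... (one per epoch)
    save_strategy="epoch",              # checkpoints saved at the same steps
    logging_steps=500,                  # loss logged at steps 500, 1000, 1500, ...
    load_best_model_at_end=True,        # log ends "Loading best model from ... checkpoint-574"
    metric_for_best_model="eval_loss",
)
```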
 17%|█▋        | 500/2870 [03:25<16:23, 2.41it/s]
{'loss': 2.4535, 'learning_rate': 4.128919860627178e-05, 'epoch': 0.87}
 20%|██        | 574/2870 [03:55<15:39, 2.44it/s]
***** Running Evaluation *****
Num examples = 297
Batch size = 8
100%|██████████| 38/38 [00:04<00:00, 8.26it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen\checkpoint-574
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-574\config.json
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-574\generation_config.json
{'eval_loss': 2.1204423904418945, 'eval_runtime': 4.495, 'eval_samples_per_second': 66.073, 'eval_steps_per_second': 8.454, 'epoch': 1.0}
Model weights saved in gpt2-medium-finetuned-commongen\checkpoint-574\pytorch_model.bin
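The step counts and learning rates logged so far are internally consistent: 4592 examples at a total batch size of 8 give 574 update steps per epoch, and 5 epochs give the 2870 total optimization steps reported; the rate at step 500 matches the Trainer's default linear decay from 5e-5 to zero (assuming no warmup). A quick check:

```python
num_examples = 4592   # "Num examples = 4592"
batch_size = 8        # total train batch size
epochs = 5            # "Num Epochs = 5"

steps_per_epoch = num_examples // batch_size   # 4592 / 8 = 574
total_steps = steps_per_epoch * epochs         # 574 * 5 = 2870

def linear_lr(step, init_lr=5e-5):
    # Trainer's default schedule: linear decay to zero, assuming no warmup.
    return init_lr * (1 - step / total_steps)

# linear_lr(500) reproduces the 4.1289e-05 logged at step 500 above.
print(steps_per_epoch, total_steps, linear_lr(500))
```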
 35%|███▍      | 1000/2870 [06:59<12:28, 2.50it/s]
{'loss': 2.1332, 'learning_rate': 3.2578397212543556e-05, 'epoch': 1.74}
 40%|████      | 1148/2870 [07:58<11:29, 2.50it/s]
***** Running Evaluation *****
Num examples = 297
Batch size = 8
100%|██████████| 38/38 [00:04<00:00, 8.18it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen\checkpoint-1148
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-1148\config.json
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-1148\generation_config.json
{'eval_loss': 2.131284475326538, 'eval_runtime': 4.5627, 'eval_samples_per_second': 65.093, 'eval_steps_per_second': 8.328, 'epoch': 2.0}
Model weights saved in gpt2-medium-finetuned-commongen\checkpoint-1148\pytorch_model.bin
 52%|█████▏    | 1500/2870 [10:29<09:09, 2.49it/s]
{'loss': 1.9859, 'learning_rate': 2.3867595818815333e-05, 'epoch': 2.61}
 60%|██████    | 1722/2870 [11:58<07:39, 2.50it/s]
***** Running Evaluation *****
Num examples = 297
Batch size = 8
{'eval_loss': 2.1727805137634277, 'eval_runtime': 4.5585, 'eval_samples_per_second': 65.152, 'eval_steps_per_second': 8.336, 'epoch': 3.0}
100%|██████████| 38/38 [00:04<00:00, 8.18it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen\checkpoint-1722
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-1722\config.json
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-1722\generation_config.json
Model weights saved in gpt2-medium-finetuned-commongen\checkpoint-1722\pytorch_model.bin
 70%|██████▉   | 2000/2870 [13:59<05:47, 2.51it/s]
{'loss': 1.885, 'learning_rate': 1.5156794425087109e-05, 'epoch': 3.48}
 80%|████████  | 2296/2870 [15:57<03:48, 2.51it/s]
***** Running Evaluation *****
Num examples = 297
Batch size = 8
{'eval_loss': 2.2184627056121826, 'eval_runtime': 4.5511, 'eval_samples_per_second': 65.259, 'eval_steps_per_second': 8.35, 'epoch': 4.0}
100%|██████████| 38/38 [00:04<00:00, 8.20it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen\checkpoint-2296
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-2296\config.json
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-2296\generation_config.json
Model weights saved in gpt2-medium-finetuned-commongen\checkpoint-2296\pytorch_model.bin
 87%|████████▋ | 2500/2870 [17:28<02:28, 2.50it/s]
{'loss': 1.8157, 'learning_rate': 6.445993031358885e-06, 'epoch': 4.36}
100%|██████████| 2870/2870 [19:56<00:00, 2.49it/s]
***** Running Evaluation *****
Num examples = 297
Batch size = 8
{'eval_loss': 2.2336463928222656, 'eval_runtime': 4.5651, 'eval_samples_per_second': 65.059, 'eval_steps_per_second': 8.324, 'epoch': 5.0}
100%|██████████| 38/38 [00:04<00:00, 8.20it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen\checkpoint-2870
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-2870\config.json
Configuration saved in gpt2-medium-finetuned-commongen\checkpoint-2870\generation_config.json
Model weights saved in gpt2-medium-finetuned-commongen\checkpoint-2870\pytorch_model.bin
Training completed. Do not forget to share your model on huggingface.co/models =)
Loading best model from gpt2-medium-finetuned-commongen\checkpoint-574 (score: 2.1204423904418945).
{'train_runtime': 1208.3557, 'train_samples_per_second': 19.001, 'train_steps_per_second': 2.375, 'train_loss': 2.0178993145347888, 'epoch': 5.0}
100%|██████████| 2870/2870 [20:08<00:00, 2.38it/s]
Saving model checkpoint to gpt2-medium-finetuned-commongen
Configuration saved in gpt2-medium-finetuned-commongen\config.json
Configuration saved in gpt2-medium-finetuned-commongen\generation_config.json
Model weights saved in gpt2-medium-finetuned-commongen\pytorch_model.bin
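Note that the eval loss rises steadily after the first epoch (2.1204, 2.1313, 2.1728, 2.2185, 2.2336), a typical sign of overfitting on a small fine-tuning set, which is why the run restores checkpoint-574 from epoch 1 at the end. With `load_best_model_at_end` and eval loss as the metric, the selection amounts to taking the minimum:

```python
# Eval losses copied from the log, keyed by checkpoint step (one per epoch).
eval_losses = {
    574: 2.1204423904418945,   # epoch 1
    1148: 2.131284475326538,   # epoch 2
    1722: 2.1727805137634277,  # epoch 3
    2296: 2.2184627056121826,  # epoch 4
    2870: 2.2336463928222656,  # epoch 5
}

# For eval_loss, lower is better, so the best checkpoint is the minimum.
best_step = min(eval_losses, key=eval_losses.get)
print(best_step)  # 574
```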