kepinsam committed on
Commit
c18d1f2
1 Parent(s): 8020877

End of training

README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ license: cc-by-nc-4.0
+ base_model: facebook/nllb-200-distilled-600M
+ tags:
+ - generated_from_trainer
+ datasets:
+ - nusatranslation_mt
+ metrics:
+ - sacrebleu
+ model-index:
+ - name: bbc-to-ind-nmt-v5
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: nusatranslation_mt
+       type: nusatranslation_mt
+       config: nusatranslation_mt_btk_ind_source
+       split: test
+       args: nusatranslation_mt_btk_ind_source
+     metrics:
+     - name: Sacrebleu
+       type: sacrebleu
+       value: 38.4814
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bbc-to-ind-nmt-v5
+
+ This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the nusatranslation_mt dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2310
+ - Sacrebleu: 38.4814
+ - Gen Len: 37.8455
+
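+ A minimal usage sketch (not part of the auto-generated card): the repo id `kepinsam/bbc-to-ind-nmt-v5` is assumed from the committer and model name, and `ind_Latn` is NLLB-200's tag for Indonesian; NLLB-200 has no dedicated Batak Toba tag, so the source-language token used during fine-tuning is not documented here.
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ model_id = "kepinsam/bbc-to-ind-nmt-v5"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)  # src_lang used in training is undocumented
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ batak_text = "Horas, songon dia kabarmu?"  # hypothetical Batak Toba input
+ inputs = tokenizer(batak_text, return_tensors="pt")
+ generated = model.generate(
+     **inputs,
+     # Force Indonesian as the first decoded token (NLLB convention).
+     forced_bos_token_id=tokenizer.convert_tokens_to_ids("ind_Latn"),
+     max_length=200,  # matches generation_config.json below
+ )
+ print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
+ ```
+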
+ ## Model description
+
+ This is the distilled 600M-parameter NLLB-200 checkpoint fine-tuned for Batak (btk) to Indonesian (ind) machine translation on the nusatranslation_mt dataset.
+
+ ## Intended uses & limitations
+
+ The cc-by-nc-4.0 license restricts use to non-commercial settings; beyond that, more information is needed.
+
+ ## Training and evaluation data
+
+ The model was trained and evaluated on the `nusatranslation_mt_btk_ind_source` configuration of nusatranslation_mt; the headline metrics above are computed on its test split.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
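+ For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` (Transformers 4.41 argument names; `output_dir`, the evaluation strategy, and `predict_with_generate` are assumptions not stated in the card, and the Adam betas and epsilon above are the Trainer defaults):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="bbc-to-ind-nmt-v5",  # assumed
+     learning_rate=5e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=16,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     num_train_epochs=10,
+     fp16=True,  # "Native AMP" mixed precision
+     eval_strategy="epoch",  # assumed from the per-epoch results below
+     predict_with_generate=True,  # assumed; required to report sacrebleu/gen_len
+ )
+ ```
+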
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss | Sacrebleu | Gen Len |
+ |:-------------:|:-----:|:-----:|:---------------:|:---------:|:-------:|
+ | 3.4154        | 1.0   | 1650  | 1.2829          | 33.2857   | 37.9245 |
+ | 1.1633        | 2.0   | 3300  | 1.1418          | 36.6342   | 37.407  |
+ | 0.9377        | 3.0   | 4950  | 1.1148          | 38.0023   | 37.17   |
+ | 0.795         | 4.0   | 6600  | 1.1197          | 38.2402   | 37.3695 |
+ | 0.6827        | 5.0   | 8250  | 1.1465          | 38.3719   | 37.315  |
+ | 0.5937        | 6.0   | 9900  | 1.1642          | 38.3424   | 37.547  |
+ | 0.5216        | 7.0   | 11550 | 1.1917          | 38.56     | 37.8515 |
+ | 0.466         | 8.0   | 13200 | 1.2079          | 38.6061   | 37.6135 |
+ | 0.425         | 9.0   | 14850 | 1.2228          | 38.4918   | 37.928  |
+ | 0.3995        | 10.0  | 16500 | 1.2310          | 38.4814   | 37.8455 |
+
+
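+ The Sacrebleu column can be reproduced with the `evaluate` wrapper that a Seq2Seq `compute_metrics` hook commonly uses; a sketch with hypothetical strings, not the card's actual evaluation code:
+
+ ```python
+ import evaluate
+
+ sacrebleu = evaluate.load("sacrebleu")
+ predictions = ["Saya pergi ke pasar pagi ini."]   # hypothetical model output
+ references = [["Saya pergi ke pasar pagi ini."]]  # one list of references per prediction
+ result = sacrebleu.compute(predictions=predictions, references=references)
+ print(round(result["score"], 4))  # corpus-level BLEU; 100.0 for this identical pair
+ ```
+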
+ ### Framework versions
+
+ - Transformers 4.41.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "eos_token_id": 2,
+   "max_length": 200,
+   "pad_token_id": 1,
+   "transformers_version": "4.41.2"
+ }
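These defaults are picked up automatically by `model.generate` once the checkpoint is loaded; a minimal sketch (repo id assumed, as above):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("kepinsam/bbc-to-ind-nmt-v5")  # assumed repo id
print(gen_config.max_length)              # 200
print(gen_config.decoder_start_token_id)  # 2 — NLLB reuses </s> as the decoder start token
```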
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2a8a918e07412bfafa61d966a672f7c858789c75e139a09f138fa91a99551921
+ oid sha256:9669f13edcfdd8de5cc9eb3045754698894acaaa0dfdec1c7a7c5b0368f29e8f
  size 2460354912
runs/Jul11_16-00-14_b5dd8609ac93/events.out.tfevents.1720713619.b5dd8609ac93.319.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9e93c04ae003202d1b1f68d0f4b1cab4905506010047a6800038a5d36c5fbf48
- size 10487
+ oid sha256:7eff501c4277ecf01a5b68520d7f25f9b26a6298c4f026e4bdb4589818b28b40
+ size 11444