ychafiqui committed
Commit 27a59be
Parent: a50f335

End of training

Files changed (4):
  1. README.md +13 -13
  2. generation_config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: cc-by-nc-4.0
-base_model: nadsoft/Faseeh-v0.1-beta
+base_model: facebook/nllb-200-distilled-600M
 tags:
 - generated_from_trainer
 model-index:
@@ -13,16 +13,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 # darija-to-english-2
 
-This model is a fine-tuned version of [nadsoft/Faseeh-v0.1-beta](https://huggingface.co/nadsoft/Faseeh-v0.1-beta) on the None dataset.
+This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.6560
-- eval_bleu: 71.4028
-- eval_gen_len: 32.4743
-- eval_runtime: 2284.4559
-- eval_samples_per_second: 8.271
-- eval_steps_per_second: 1.034
+- eval_loss: 1.4572
+- eval_bleu: 40.8673
+- eval_gen_len: 10.7995
+- eval_runtime: 228.371
+- eval_samples_per_second: 8.758
+- eval_steps_per_second: 1.095
 - epoch: 4.0
-- step: 37792
+- step: 4000
 
 ## Model description
 
@@ -47,12 +47,12 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 5
 - mixed_precision_training: Native AMP
 
 ### Framework versions
 
-- Transformers 4.37.2
-- Pytorch 2.2.0+cu121
-- Datasets 2.17.0
+- Transformers 4.37.0
+- Pytorch 2.1.2
+- Datasets 2.1.0
 - Tokenizers 0.15.1
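
The updated card now points the checkpoint at facebook/nllb-200-distilled-600M. As a minimal inference sketch of how such a fine-tune is typically used: the repository id (ychafiqui/darija-to-english-2), the Darija source tag ary_Arab, and the example sentence are assumptions not taken from this diff.

```python
# Sketch only: repo id and language codes below are assumptions, not stated in the commit.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "ychafiqui/darija-to-english-2"  # assumed from the card title and commit author

# NLLB tokenizers take a source-language tag; ary_Arab (Moroccan Arabic) is assumed here.
tokenizer = AutoTokenizer.from_pretrained(repo_id, src_lang="ary_Arab")
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = "سلام، كيداير؟"  # example Darija input ("Hi, how are you?")
inputs = tokenizer(text, return_tensors="pt")

# Force English as the target language; max_length mirrors generation_config.json.
output_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=200,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```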
generation_config.json CHANGED
@@ -4,5 +4,5 @@
   "eos_token_id": 2,
   "max_length": 200,
   "pad_token_id": 1,
-  "transformers_version": "4.37.2"
+  "transformers_version": "4.37.0"
 }
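
The only change in generation_config.json is the recorded transformers version; the file otherwise just pins the defaults that generate() falls back to (eos/pad token ids and a 200-token cap). A small sketch of how those defaults surface at inference time, again assuming the ychafiqui/darija-to-english-2 repo id:

```python
from transformers import GenerationConfig

# Assumed repo id; this reads the generation_config.json shown above.
gen_config = GenerationConfig.from_pretrained("ychafiqui/darija-to-english-2")

print(gen_config.eos_token_id)  # 2
print(gen_config.pad_token_id)  # 1
print(gen_config.max_length)    # 200 -- used when generate() gets no explicit length

# Per-call arguments override these defaults, e.g.:
# model.generate(**inputs, generation_config=gen_config, max_new_tokens=64)
```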
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9c17d10653e9a522787d5ff96a15c7a1ecccab1770bebe896ef51e2c524b1090
+oid sha256:1c27302d77cd464590e289c000ed6b781405068615d50fda8df984d51b4da100
 size 2460354912
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8726b7bcd5c212e16e4599e85876a2068be5daef3d7a0cd9cc4284b4c55061c8
+oid sha256:a2162c1c5d1612d8a96cb7b95ab4dd2c0f600e1be600665cf9776843437916bb
 size 4856
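
Both binary entries above are git-lfs pointer files: the repository stores only the object's SHA-256 (oid) and byte size, and this commit swaps in new oids for the retrained weights and training arguments. A hedged check of a local copy against the new model.safetensors pointer, assuming the file has already been downloaded (for example with huggingface_hub.hf_hub_download):

```python
import hashlib

# oid and size copied from the updated LFS pointer above.
EXPECTED_OID = "1c27302d77cd464590e289c000ed6b781405068615d50fda8df984d51b4da100"
EXPECTED_SIZE = 2460354912

path = "model.safetensors"  # assumed local copy of the weights

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "SHA-256 does not match the LFS pointer"
print(f"OK: {path} matches the pointer oid (expected size {EXPECTED_SIZE} bytes)")
```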