Brad John Pitt committed
Commit 6bf75dc · verified · 1 Parent(s): 2e2324e

Update README.md

Files changed (1)
  1. README.md +7 -13
README.md CHANGED
@@ -7,6 +7,9 @@ tags:
  model-index:
  - name: Gopal-finetuned-custom-en-to-ru
    results: []
+ language:
+ - en
+ - ru
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,11 +21,11 @@ This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggi

  ## Model description

- More information needed
+ This model was fine-tuned on my custom dataset with my own hyperparameters; see the Hugging Face tutorials for more background.

  ## Intended uses & limitations

- More information needed
+ Accuracy still needs improvement; increasing the training data to at least 20k (and up to 100k) examples should help.

  ## Training and evaluation data

@@ -32,15 +35,6 @@ More information needed

  ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 64
- - eval_batch_size: 64
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 10
- - mixed_precision_training: Native AMP

  ### Training results

@@ -51,4 +45,4 @@ The following hyperparameters were used during training:
  - Transformers 4.38.2
  - Pytorch 2.2.1+cu121
  - Datasets 2.18.0
- - Tokenizers 0.15.2
+ - Tokenizers 0.15.2
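
The third hunk removes the hyperparameter list from the card. For reference, here is a minimal sketch of how those same settings could be expressed as `transformers` `Seq2SeqTrainingArguments`; the `output_dir` value is an illustrative placeholder, and the surrounding `Seq2SeqTrainer` wiring is not shown in this commit.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters that this commit removed from the card.
# output_dir is a hypothetical placeholder, not taken from the commit.
training_args = Seq2SeqTrainingArguments(
    output_dir="Gopal-finetuned-custom-en-to-ru",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",   # linear learning-rate schedule
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # epsilon=1e-08
    fp16=True,                    # native AMP mixed-precision training
)
```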
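Since the updated card describes an English-to-Russian translation model fine-tuned from Helsinki-NLP/opus-mt-en-ru, a short usage sketch may help; the repository id below is a hypothetical placeholder (the commit does not show the model's Hub namespace), and the call uses the standard `transformers` translation pipeline.

```python
from transformers import pipeline

# "your-namespace/Gopal-finetuned-custom-en-to-ru" is a placeholder repo id;
# substitute the actual Hub path of the fine-tuned checkpoint.
translator = pipeline(
    "translation_en_to_ru",
    model="your-namespace/Gopal-finetuned-custom-en-to-ru",
)

result = translator("The weather is nice today.")
print(result[0]["translation_text"])
```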