Varsha00 committed (verified)
Commit ecc53d2 · Parent(s): 48edaa4

Update README.md

Files changed (1): README.md (+11 −4)
README.md CHANGED

@@ -6,6 +6,13 @@ tags:
 model-index:
 - name: tamil-finetuning
   results: []
+datasets:
+- ai4bharat/samanantar
+language:
+- ta
+- en
+metrics:
+- bleu
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # tamil-finetuning
 
-This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
+This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samanantar dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 0.3531
 - eval_bleu: 14.4184
@@ -26,7 +33,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+t5-small finetuned for translation in en-ta
 
 ## Intended uses & limitations
 
@@ -34,7 +41,7 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+ai4bharath/samanantar -> 80-20 split
 
 ## Training procedure
 
@@ -56,4 +63,4 @@ The following hyperparameters were used during training:
 - Transformers 4.42.3
 - Pytorch 2.1.2
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
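The "Training and evaluation data" entry added in this commit describes an 80-20 split of ai4bharat/samanantar. The commit doesn't say how the split was produced; with Hugging Face `datasets` it would typically be `dataset.train_test_split(test_size=0.2)`. A minimal stdlib sketch of the same idea (toy sentence pairs and the helper name `train_eval_split` are illustrative, not from the commit):

```python
import random

def train_eval_split(examples, eval_fraction=0.2, seed=42):
    """Shuffle a list of examples and split it into train/eval portions."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]

# Toy pairs standing in for samanantar en-ta rows (hypothetical data).
pairs = [(f"en sentence {i}", f"ta sentence {i}") for i in range(10)]
train, eval_set = train_eval_split(pairs)
print(len(train), len(eval_set))  # 8 2
```

With `datasets`, the equivalent call returns a `DatasetDict` with `train` and `test` splits instead of two lists, but the 80-20 proportion is the same.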