Update README.md
README.md CHANGED
@@ -22,13 +22,13 @@ The goal of this project was to develop a software model that could fix grammati
## Initial Testing:
-In the beginning, I experimented with several T5 models (mt5(small/base), Flan T5, and Bengali T5), but the results were not very good. I was limited by computational resources, which allowed me to train these models with only 10% of the data for 1 to 2 epochs. With this limited testing the result wasn't promising enough to invest all the available resources to the limit therefore I explored other models and found a winner model that is well suited for this task.
+In the beginning, I experimented with several T5 models (mT5 (small/base), Flan-T5, and Bengali T5), but the results were not very good. I was limited by computational resources, which allowed me to train these models on only 10% of the data for 1 to 2 epochs. With such limited testing, the results were not promising enough to justify committing all of the remaining resources, so I explored other models and found one that is well suited to this task.
I also tested causal large models like Mistral 7B and Gemma 2B, and even with optimizations like QLoRA, they were too large and costly to run.
During the initial testing, I tried training the same models with different maximum token lengths; a maximum length of 20 provided much better results than 64. The current model uses a maximum token length of 32.
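As a minimal sketch of what that setting means in practice (assuming standard Hugging Face preprocessing; the tokenizer checkpoint and the column names below are placeholders, not the project's actual ones), the maximum token length is applied at tokenization time:

```python
from transformers import AutoTokenizer

# Sketch only: the tokenizer checkpoint and the "incorrect"/"correct" column
# names are placeholders; only max_length=32 comes from the text above.
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50")

def preprocess(example):
    # Truncate both the erroneous input and the corrected target to 32 tokens.
    model_inputs = tokenizer(example["incorrect"], max_length=32, truncation=True)
    labels = tokenizer(text_target=example["correct"], max_length=32, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```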
-Beyond Seq2Seq models and
+Beyond Seq2Seq models, a few other ideas also crossed my mind. I considered using NER (Named Entity Recognition) to tag words as correct or incorrect, and masked language models that correct one wrong word at a time. Both methods require either knowing the errors in advance or making multiple passes to reach a final verdict, which was not practical. There are also solutions that don't use ML at all, such as checking each word against a reference list and replacing it when there is no hit, but that behaves more like a spell checker, which was not the goal.
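To make the rejected reference-list idea concrete, here is a rough sketch (the word list is purely hypothetical and has nothing to do with the project's data):

```python
# Rough sketch of the rejected non-ML approach: flag any word that is not in a
# reference list. This behaves like a spell checker and cannot repair grammar,
# which is why it was not pursued. The word list here is purely illustrative.
known_words = {"আপনি", "কেমন", "আছেন"}

def flag_unknown_words(sentence: str) -> list[str]:
    words = sentence.rstrip("?!।").split()
    return [word for word in words if word not in known_words]

print(flag_unknown_words("আপনি কমন আছেন?"))  # ['কমন'] -- flagged, but no grammatical fix
```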
Ultimately, mBART 50 was chosen as the best model because of its flexibility, resource efficiency, and reproducibility.
@@ -63,10 +63,10 @@ Here is a simple way to use the fine-tuned model to correct Bengali sentences:
If you are trying to use it in a script, this is how you can do it:
```python
-from transformers import AutoModelForSeq2SeqLM,
+from transformers import AutoModelForSeq2SeqLM, MBart50Tokenizer
checkpoint = "model/checkpoint"
-tokenizer =
+tokenizer = MBart50Tokenizer.from_pretrained(checkpoint, src_lang="bn_IN", tgt_lang="bn_IN", use_fast=True)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, use_safetensors=True)
incorrect_bengali_sentence = "আপনি কমন আছেন?"
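# The rest of the README's example is cut off by this hunk. As a hedged sketch
# (assumed, not the author's exact code), correction would typically finish by
# tokenizing, generating, and decoding; forced_bos_token_id keeps the output in Bengali.
inputs = tokenizer(incorrect_bengali_sentence, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=32,
    forced_bos_token_id=tokenizer.lang_code_to_id["bn_IN"],
)
corrected_sentence = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected_sentence)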
@@ -82,7 +82,7 @@ If you want to test this model from the terminal, run the `python correction.py`
# General issues faced during the entire journey:
-- Issue: The system is not printing any evaluation
+- Issue: The system is not printing any evaluation metrics.
Solution: The GPU that I am training on doesn't support FP16/BF16 precision. Commenting out `fp16=True` in the Seq2SeqTrainingArguments solved the issue (see the sketch after this list).
- Issue: Training on TPU crashes on both Colab and Kaggle.
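For reference, a minimal sketch of the FP16 workaround above (every value except the commented-out `fp16=True` is a placeholder, not the project's actual configuration):

```python
from transformers import Seq2SeqTrainingArguments

# Placeholder values throughout; only the commented-out fp16 flag reflects the
# workaround described above (the training GPU lacks FP16/BF16 support).
training_args = Seq2SeqTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    predict_with_generate=True,
    # fp16=True,  # leave this commented out on GPUs without FP16/BF16 support
)
```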