Update README.md
README.md
Post-training quantization was applied using PyTorch's built-in quantization framework.
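As an illustration, dynamic post-training quantization in PyTorch takes only a few lines. This is a sketch under the assumption that dynamic quantization of `nn.Linear` layers was used; `model` is a placeholder for the trained float32 network, not this repository's code:

```python
import torch

# Dynamic post-training quantization: weights of Linear layers are
# converted to int8 ahead of time; activations are quantized on the
# fly at inference. `model` is a placeholder for the trained network.
quantized_model = torch.quantization.quantize_dynamic(
    model,
    {torch.nn.Linear},   # layer types to quantize
    dtype=torch.qint8,
)
```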
A well-trained language model typically has a perplexity in the 10–50 range, depending on the dataset and domain; our model achieves a perplexity of 32.4.
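Perplexity is the exponential of the average per-token cross-entropy loss on held-out text. Below is a minimal sketch of computing it, assuming a Hugging Face-style causal LM that returns a `.loss` when given `labels`; the `model` and `dataloader` names are placeholders, not this repository's code:

```python
import math
import torch

@torch.no_grad()
def compute_perplexity(model, dataloader, device="cpu"):
    """Exponentiate the token-averaged cross-entropy loss."""
    model.eval()
    total_loss, total_tokens = 0.0, 0
    for batch in dataloader:
        input_ids = batch["input_ids"].to(device)
        # For causal LMs, passing labels=input_ids yields the LM loss
        # (the model shifts the labels internally).
        loss = model(input_ids=input_ids, labels=input_ids).loss
        total_loss += loss.item() * input_ids.numel()
        total_tokens += input_ids.numel()
    return math.exp(total_loss / total_tokens)
```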
## 🔧 Fine-Tuning Details

### Dataset

The **BookCorpus** dataset was used for training and evaluation. It consists of raw text from a large collection of free books, which makes it well suited to language modeling.
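For reference, BookCorpus is available on the Hugging Face Hub; here is a minimal sketch of loading it with the `datasets` library (an assumption about the data pipeline, not code from this repository):

```python
from datasets import load_dataset

# Raw book passages; each record has a single "text" field.
bookcorpus = load_dataset("bookcorpus", split="train")
print(bookcorpus[0]["text"])
```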
### Training Configuration

- **Number of epochs**: 3
- **Batch size**: 8
- **Learning rate**: 5e-5
- **Evaluation strategy**: steps
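These settings map directly onto Hugging Face `TrainingArguments`; the following is a minimal sketch assuming the `Trainer` API was used (`model`, `train_ds`, and `eval_ds` are placeholders):

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,             # Number of epochs: 3
    per_device_train_batch_size=8,  # Batch size: 8
    learning_rate=5e-5,             # Learning rate: 5e-5
    evaluation_strategy="steps",    # Evaluate every `eval_steps` steps
)

trainer = Trainer(
    model=model,             # placeholder: the causal LM being fine-tuned
    args=args,
    train_dataset=train_ds,  # placeholder train/eval datasets
    eval_dataset=eval_ds,
)
trainer.train()
```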
## 📂 Repository Structure
```