# Prediction: highly_unlikely, Score: 0.9902
```

# Intended Uses & Limitations

### Intended Use

This model is intended to be used as a backend component for the Adfluence AI platform. Its primary purpose is to analyze user comments on social media advertisements (e.g., on Instagram, Facebook, TikTok) to gauge audience purchase intent and provide campaign performance metrics.

### Limitations

* **Simulated Data:** The model is trained on a high-quality simulated dataset, not on live social media data. Although the data is designed to reflect real-world usage, performance may vary on raw, un-sanitized comments.
* **Domain Specificity:** The source data was derived from product reviews (specifically for electronics), so the model is likely strongest in the e-commerce/product domain and may require further fine-tuning for very different domains such as services, events, or fashion.
* **Language Scope:** The model handles only Amharic and English. It has not been trained on other Ethiopian languages such as Tigrinya or Oromo.
---

# Training and Evaluation Data

This model was fine-tuned on the custom `YosefA/Adflufence-ad-comments` dataset.

The dataset was created through the following process:

* **Source:** Started with ~5,000 English product reviews from an Amazon dataset.
* **Transformation:** Each review was programmatically rephrased and translated into a simulated social media comment using Google's Gemini Flash.
* **Stylization:** Comments were generated in three styles to mimic real-world Ethiopian user behavior:
  * Amharic (Ge’ez script)
  * Romanized Amharic
  * Mixed Amharic-English (code-switching)
* **Enrichment:** Comments were styled with emojis, slang, and informal sentence structures.
* **Labeling:** Each comment was assigned a purchase-intent label mapped from the original star rating of the source review.
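As a concrete illustration of the labeling step, a star-rating-to-intent mapping might look like the sketch below. Only the `highly_unlikely` label is confirmed by the prediction example earlier in this card; the other label names (`unsure`, `likely`, `highly_likely`) and the exact rating cutoffs are assumptions for illustration.

```python
# Hypothetical sketch of the star-rating -> purchase-intent mapping described
# above. Only "highly_unlikely" appears in this card's example output; the
# other label names and the cutoffs are assumed.
def rating_to_intent(stars: int) -> str:
    if stars <= 2:
        return "highly_unlikely"  # negative reviews -> low purchase intent
    if stars == 3:
        return "unsure"           # assumed middle label
    if stars == 4:
        return "likely"           # assumed label
    return "highly_likely"        # assumed label for 5-star reviews

print(rating_to_intent(1))  # highly_unlikely
```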
---

# Training Procedure

### Training Hyperparameters

The following hyperparameters were used during training:

* `learning_rate`: 2e-05
* `train_batch_size`: 16
* `eval_batch_size`: 16
* `seed`: 42
* `optimizer`: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
* `lr_scheduler_type`: linear
* `num_epochs`: 3
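For readers reproducing the setup, these settings correspond roughly to the following Hugging Face `TrainingArguments` sketch. This is a configuration sketch, not the exact training script: `output_dir` is a placeholder, and the AdamW betas/epsilon shown are the `Trainer` defaults spelled out explicitly.

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto the standard
# Transformers API. "output_dir" is a placeholder path.
training_args = TrainingArguments(
    output_dir="./results",            # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # AdamW with betas=(0.9, 0.999) and epsilon=1e-08 (Trainer defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```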
### Training Results

The model achieved its best performance at the end of Epoch 2.

| Training Loss | Epoch | Step | Validation Loss | F1 (Weighted) |
| :------------ | :---- | :--- | :-------------- | :------------ |
| No log        | 1.0   | 160  | 0.5001          | 0.7852        |
| No log        | 2.0   | 320  | 0.4316          | 0.8101        |
| No log        | 3.0   | 480  | 0.4281          | 0.8063        |
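As a quick sanity check, the step counts in the table are consistent with the batch size above: assuming one optimizer step per batch and no gradient accumulation, 160 steps per epoch at a train batch size of 16 implies a training split of roughly 2,560 comments.

```python
# Cross-check the results table against the hyperparameters. Assumes one
# optimizer step per batch of 16 and no gradient accumulation.
steps_per_epoch = 160    # from the results table (step 160 at epoch 1.0)
train_batch_size = 16    # from the hyperparameters above
approx_train_examples = steps_per_epoch * train_batch_size
print(approx_train_examples)  # 2560
```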
---

# Framework Versions

* **Transformers** 4.41.2
* **PyTorch** 2.3.0+cu121
* **Datasets** 2.19.0
* **Tokenizers** 0.19.1