Update README.md
README.md CHANGED
@@ -170,7 +170,7 @@ A comprehensive set of metrics employed to evaluate the model's performance:
 1. **Accuracy (ACC)**
 2. **Recall**
 3. **Precision**
-4. **F1
+4. **F1**
 5. **Area Under the Receiver Operating Characteristic curve (AUROC)**
 
 All metrics were evaluated using a threshold of 0.50 for binary classification.
@@ -208,7 +208,7 @@ which was trained/fine-tuned to predict based on SMILES - while ChemFIE-SA is SE
 
 Comparison with DeepSA_SmELECTRA as described in Wang et al. (2023):
 
-| Datasets | Model      | ACC   | Recall | Precision |
+| Datasets | Model      | ACC   | Recall | Precision |   F1    | AUROC | Threshold |
 | -------- | ---------- | :---: | :----: | :-------: | :-----: | :---: | :-------: |
 | TS1      | DeepSA     | 0.995 | 1.000  | 0.990     |  0.995  | 1.000 |   0.500   |
 |          | ChemFIE-SA | 0.996 | 1.000  | 0.992     |  0.996  | 1.000 |   0.500   |
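For reference, the metrics named in the list above and in the expanded table header (ACC, Recall, Precision, F1, AUROC at the 0.50 threshold) can be computed from predicted probabilities as in the minimal sketch below. It assumes scikit-learn and uses placeholder `y_true`/`y_prob` arrays; it is not code from this repository.

```python
# Minimal sketch (not from the README): evaluating binary predictions at the
# 0.50 threshold. y_true and y_prob are hypothetical placeholders standing in
# for test-set labels and model-predicted probabilities.
import numpy as np
from sklearn.metrics import (
    accuracy_score, recall_score, precision_score, f1_score, roc_auc_score
)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground-truth labels
y_prob = np.array([0.92, 0.08, 0.85, 0.60, 0.30, 0.45, 0.71, 0.12])  # model scores

y_pred = (y_prob >= 0.50).astype(int)                           # binarize at 0.50

print("ACC:      ", accuracy_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("AUROC:    ", roc_auc_score(y_true, y_prob))              # AUROC uses raw scores
```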