---
license: cc-by-nc-sa-4.0
tags:
- chemistry
- drug-design
- synthesis-accessibility
- cheminformatics
- drug-discovery
- selfies
- drugs
- molecules
- compounds
- ranger21
- madgrad
pipeline_tag: text-classification
---

# Model Card for ChemFIE-SA (Synthesis Accessibility)

This model is a BERT-like sequence classifier that predicts the synthesis accessibility of a compound from its SELFIES string. It was fine-tuned from [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm) on the DeepSA expanded training dataset (Wang et al. 2023).


### Disclaimer: For Academic Purposes Only
The information and model provided are for academic purposes only. They are intended for educational and research use and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.


[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/O4O710GFBZ)

## Model Details

### Model Description

- **Model Type:** Transformer (BertForSequenceClassification)
- **Base model:** [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm) 
- **Maximum Sequence Length:** 512 tokens
- **Number of Labels:** 2 classes (0 = ES: easy to synthesize; 1 = HS: hard to synthesize)
- **Training Dataset:** SELFIES with labels derived from DeepSA
- **Language:** SELFIES
- **License:** CC-BY-NC-SA 4.0

## Uses

If you have canonical SMILES instead of SELFIES, you can first convert them into whitespace-separated SELFIES tokens readable by the model's tokenizer:

```python
import selfies as sf

def smiles_to_selfies_sentence(smiles):
    """Convert a SMILES string into a whitespace-separated SELFIES 'sentence'."""
    try:
        selfies = sf.encoder(smiles)  # encode SMILES into SELFIES
        selfies_tokens = list(sf.split_selfies(selfies))
        
        # Join dots with the nearest next tokens
        joined_tokens = []
        i = 0
        while i < len(selfies_tokens):
            if selfies_tokens[i] == '.' and i + 1 < len(selfies_tokens):
                joined_tokens.append(f".{selfies_tokens[i+1]}")
                i += 2
            else:
                joined_tokens.append(selfies_tokens[i])
                i += 1
        
        selfies_sentence = ' '.join(joined_tokens)
        return selfies_sentence
    except sf.EncoderError as e:
        print(f"Encoder Error: {e}")
        return None

# Example usage:
in_smi = "C1CCC(CC1)(CC(=O)O)CN" # Gabapentin (CID3446)
selfies_sentence = smiles_to_selfies_sentence(in_smi)
print(selfies_sentence)

"""
[C] [C] [C] [C] [Branch1] [Branch1] [C] [C] [Ring1] [=Branch1] [Branch1] [#Branch1] [C] [C] [=Branch1] [C] [=O] [O] [C] [N]

"""

```

### Direct Use with the Classifier Pipeline

You can also use the `pipeline` API directly:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbyuvd/synthaccess-chemselfies")
classifier("[C] [C] [C] [C] [Branch1] [Branch1] [C] [C] [Ring1] [=Branch1] [Branch1] [#Branch1] [C] [C] [=Branch1] [C] [=O] [O] [C] [N]") # Gabapentin
# [{'label': 'Easy', 'score': 0.9187200665473938}]

```
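
Putting the two snippets together, here is a minimal end-to-end sketch (it assumes `smiles_to_selfies_sentence` from the first snippet is in scope; aspirin is just an illustrative input):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbyuvd/synthaccess-chemselfies")

in_smi = "CC(=O)OC1=CC=CC=C1C(=O)O"  # Aspirin (CID 2244), illustrative input
selfies_sentence = smiles_to_selfies_sentence(in_smi)  # defined in the first snippet
if selfies_sentence is not None:
    print(classifier(selfies_sentence))
```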

## Training Details

### Training Data

##### Data Sources

Training data was fetched from [DeepSA's repository](https://github.com/Shihang-Wang-58/DeepSA).

##### Data Preparation

- SMILES strings were converted into SELFIES.
- The dataset was chunked into three parts to accommodate Paperspace Gradient's 6-hour runtime limit.
- Each chunk was then split into train and validation sets at a 90:10 ratio (see the sketch below).
  - 1st chunk size: 1,197,683 (1,077,915 train : 119,768 validation)
- The data contains two labels:
  - 0: easy synthesis (requires fewer than 10 steps)
  - 1: hard synthesis (requires more than 10 steps)
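
As an illustration of the split step, here is a minimal sketch using the 🤗 `datasets` library (the toy rows and column names are placeholders, not the actual preprocessing script):

```python
from datasets import Dataset

# Toy stand-ins; the real 1st chunk holds 1,197,683 labeled SELFIES sentences.
data = {
    "text": ["[C] [C] [O]"] * 9 + ["[C] [N]"],  # whitespace-separated SELFIES tokens
    "label": [0] * 9 + [1],                     # 0 = easy (ES), 1 = hard (HS)
}
ds = Dataset.from_dict(data)

# 90:10 train:validation split, as described above
split = ds.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = split["train"], split["test"]
```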

### Training Procedure

#### Training Hyperparameters
- Epochs = 1 per chunk
- Batch size = 128
- Number of steps per chunk: 8,422

I used Ranger21 with the following configuration:

```
Ranger21 optimizer ready with following settings:

Core optimizer = [madgrad](https://arxiv.org/abs/2101.11075)
Learning rate of 5e-06

Important - num_epochs of training = ** 1 epochs **
using AdaBelief for variance computation
Warm-up: linear warmup, over 2000 iterations

Lookahead active, merging every 5 steps, with blend factor of 0.5
Norm Loss active, factor = 0.0001
Stable weight decay of 0.01
Gradient Centralization = On

Adaptive Gradient Clipping = True
	clipping value of 0.01
	steps for clipping = 0.001
```

Validation results for the 1st chunk:
| Step | Training Loss | Validation Loss | Accuracy | Precision |  Recall  |    F1    | Roc Auc  |
| :--: | :-----------: | :-------------: | :------: | :-------: | :------: | :------: | :------: |
| 8420 |   0.128700    |    0.128632     | 0.922860 | 0.975201  | 0.867836 | 0.918391 | 0.990007 |


## Model Evaluation

### Testing Data

The model (currently trained only on the 1st chunk) was evaluated on four test sets provided by DeepSA's authors (Wang et al. 2023) to ensure a comprehensive performance assessment across various scenarios:
1. **Main Expanded Test Set**

2. **Independent Test Set 1 (TS1)**
   - Characteristics: Contains ES and HS compounds with high intra-group fingerprint similarity, but significant inter-group pattern differences.

3. **Independent Test Set 2 (TS2)**
   - Characteristics: Contains a small portion of ES and HS molecules showing similar fingerprint patterns.

4. **Independent Test Set 3 (TS3)**
   - Characteristics: All compounds exhibit high fingerprint similarity, presenting the most challenging classification task.

### Evaluation Metrics

We employed a comprehensive set of metrics to evaluate our model's performance:

1. **Accuracy (ACC)**: Overall correctness of predictions
2. **Recall**: Ability to identify all relevant instances (sensitivity)
3. **Precision**: Accuracy of positive predictions
4. **F1-score**: Harmonic mean of precision and recall
5. **Area Under the Receiver Operating Characteristic curve (AUROC)**: Model's ability to distinguish between classes

All metrics were evaluated using a threshold of 0.50 for binary classification.
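
For reference, these metrics can be computed with scikit-learn; a minimal sketch with placeholder arrays (`y_prob` stands in for the model's positive-class scores):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
)

# Placeholders: true labels and predicted positive-class probabilities
y_true = np.array([0, 1, 1, 0, 1])
y_prob = np.array([0.12, 0.91, 0.45, 0.30, 0.88])

y_pred = (y_prob >= 0.50).astype(int)  # binarize at the 0.50 threshold

print("ACC:      ", accuracy_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("AUROC:    ", roc_auc_score(y_true, y_prob))  # uses raw scores, not thresholded labels
```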

### Results

Below are the detailed results of our model's performance across all test sets:

#### Expanded Test Set Results
Comparison data is sourced from Wang et al. (2023), who used various models as the encoding layer:
- bert-mini (MinBert)
- bert-tiny (TinBert)
- roberta-base (RoBERTa)
- deberta-v3-base (DeBERTa)
- Chem_GraphCodeBert (GraphCodeBert)
- electra-small-discriminator (SmELECTRA)
- ChemBERTa-77M-MTR (ChemMTR)
- ChemBERTa-77M-MLM (ChemMLM)

all of which were trained/fine-tuned to predict from SMILES, while ChemFIE-SA is SELFIES-based:

| **Model**            | **Recall** | **Precision** | **F-score** | **AUROC** |
| -------------------- | :--------: | :-----------: | :---------: | :-------: |
| DeepSA_DeBERTa       |   0.873    |     0.920     |    0.896    |   0.959   |
| DeepSA_GraphCodeBert |   0.931    |     0.944     |    0.937    |   0.987   |
| DeepSA_MinBert       |   0.933    |     0.945     |    0.939    |   0.988   |
| DeepSA_RoBERTa       |   0.940    |     0.940     |    0.940    |   0.988   |
| DeepSA_TinBert       |   0.937    |     0.947     |    0.942    |   0.990   |
| DeepSA_SmELECTRA     |   0.938    |     0.949     |    0.943    |   0.990   |
| **ChemFIE-SA**       |   0.952    |     0.942     |    0.947    |   0.990   |
| DeepSA_ChemMLM       |   0.955    |     0.967     |    0.961    |   0.995   |
| DeepSA_ChemMTR       |   0.968    |     0.974     |    0.971    |   0.997   |

#### TS1-3 Results

Comparison with DeepSA_SmELECTRA as described in Wang et al. (2023):

| Datasets | Model      |  ACC  | Recall | Precision | F-score | AUROC | Threshold |
| -------- | ---------- | :---: | :----: | :-------: | :-----: | :---: | :-------: |
| TS1      | DeepSA     | 0.995 | 1.000  |   0.990   |  0.995  | 1.000 |   0.500   |
|          | ChemFIE-SA | 0.996 | 1.000  |   0.992   |  0.996  | 1.000 |   0.500   |
| TS2      | DeepSA     | 0.838 | 0.730  |   0.871   |  0.795  | 0.913 |   0.500   |
|          | ChemFIE-SA | 0.805 | 0.775  |   0.770   |  0.773  | 0.886 |   0.500   |
| TS3      | DeepSA     | 0.817 | 0.753  |   0.864   |  0.805  | 0.896 |   0.500   |
|          | ChemFIE-SA | 0.731 | 0.642  |   0.781   |  0.705  | 0.797 |   0.500   |


## Model Examination

You can visualize its attention heads using [BertViz](https://github.com/jessevig/bertviz) and attribution weights using [Captum](https://captum.ai/), as [done in the base model](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm) in its Interpretability section.
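
As a starting point, here is a minimal BertViz sketch following the library's documented `head_view` usage (the input is the Gabapentin example from above):

```python
from transformers import AutoTokenizer, AutoModel
from bertviz import head_view

model_name = "gbyuvd/synthaccess-chemselfies"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

sentence = "[C] [C] [C] [C] [Branch1] [Branch1] [C] [C] [Ring1] [=Branch1] [Branch1] [#Branch1] [C] [C] [=Branch1] [C] [=O] [O] [C] [N]"
inputs = tokenizer.encode(sentence, return_tensors="pt")
outputs = model(inputs)

attention = outputs[-1]  # attention weights from all layers
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
head_view(attention, tokens)  # renders an interactive view in a notebook
```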

### Compute Infrastructure

#### Hardware

- Platform: Paperspace Gradient
- Compute: Free-P5000 (16 GB GPU, 30 GB RAM, 8 vCPU)

#### Software

- Python: 3.9.13
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
- Ranger21: 0.0.1
- Selfies: 2.1.2
- RDKit: 2024.3.3


## Citation

If you find this project useful in your research and wish to cite it, please use the following BibTeX entry:

```bibtex
@software{chemfie_basebertmlm,
  author = {GP Bayu},
  title = {{ChemFIE Base}: Pretraining A Lightweight BERT-like model on Molecular SELFIES},
  url = {https://huggingface.co/gbyuvd/chemselfies-base-bertmlm},
  version = {1.0},
  year = {2024},
}
```

## References
[DeepSA](https://doi.org/10.1186/s13321-023-00771-3)

```bibtex
@article{Wang2023DeepSA,
  title={DeepSA: a deep-learning driven predictor of compound synthesis accessibility},
  author={Wang, Shihang and Wang, Lin and Li, Fenglei and Bai, Fang},
  journal={Journal of Cheminformatics},
  volume={15},
  pages={103},
  year={2023},
  month={Nov},
  publisher={BioMed Central},
  doi={10.1186/s13321-023-00771-3},
}

```

[SELFIES](https://doi.org/10.1088/2632-2153/aba947)
```bibtex
@article{krenn2020selfies,
  title={Self-referencing embedded strings (SELFIES): A 100\% robust molecular string representation},
  author={Krenn, Mario and H{\"a}se, Florian and Nigam, AkshatKumar and Friederich, Pascal and Aspuru-Guzik, Alan},
  journal={Machine Learning: Science and Technology},
  volume={1},
  number={4},
  pages={045024},
  year={2020},
  doi={10.1088/2632-2153/aba947}
}
```

[Ranger21](https://arxiv.org/abs/2106.13731)
```bibtex
@article{wright2021ranger21,
      title={Ranger21: a synergistic deep learning optimizer}, 
      author={Wright, Less and Demeure, Nestor},
      year={2021},
      journal={arXiv preprint arXiv:2106.13731},
}
```

## Contact & Support My Work

G Bayu ([email protected])

This project has been quite a journey for me. I have dedicated many hours to it, and I would like to keep improving myself, this model, and future projects. However, financial and computational constraints can be challenging.

If you find my work valuable and would like to support my journey, please consider supporting me [here](https://ko-fi.com/gbyuvd). Your support will help me cover the costs of computational resources, data acquisition, and further development of this project. Any amount, big or small, is greatly appreciated and will enable me to continue learning and exploring.

Thank you for checking out this model. I am more than happy to receive any feedback so that I can improve myself and the models and projects I will work on in the future.