---
license: cc-by-nc-sa-4.0
tags:
- chemistry
- drug-design
- synthesis-accessibility
- cheminformatics
- drug-discovery
- selfies
- drugs
- molecules
- compounds
- ranger21
- madgrad
---

# Model Card for ChemFIE-SA (Synthesis Accessibility)

This model is a BERT-like binary sequence classifier for compound synthesis accessibility, fine-tuned from [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm) on a dataset with labels derived from DeepSA (Wang et al. 2023). It predicts whether a compound is easy (ES) or hard (HS) to synthesize from its chemical structure represented as SELFIES (Self-Referencing Embedded Strings).


### Disclaimer: For Academic Purposes Only
The information and model provided are for academic purposes only. They are intended for educational and research use, and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.


[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/O4O710GFBZ)

## Model Details

### Model Description

- **Model Type:** Transformer (BertForSequenceClassification)
- **Base model:** [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm)
- **Maximum Sequence Length:** 512 tokens
- **Number of Labels:** 2 classes (0 ES: easy to synthesize; 1 HS: hard to synthesize)
- **Training Dataset:** SELFIES with labels derived from DeepSA
- **Language:** SELFIES
- **License:** CC-BY-NC-SA 4.0

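These specs can be sanity-checked against the hosted checkpoint's configuration. A minimal sketch (the repo id `gbyuvd/synthaccess-chemselfies` is taken from the pipeline example below):

```python
from transformers import AutoConfig

# Read the key specs off the checkpoint's config.json
config = AutoConfig.from_pretrained("gbyuvd/synthaccess-chemselfies")
print(config.num_labels)               # expected: 2
print(config.max_position_embeddings)  # expected: 512
print(config.id2label)                 # id -> label-name mapping
```
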
## Uses

If you have canonical SMILES instead of SELFIES, you can first convert them into whitespace-separated SELFIES tokens that the model's tokenizer can read:

```python
import selfies as sf

def smiles_to_selfies_sentence(smiles):
    try:
        selfies = sf.encoder(smiles)  # Encode SMILES into SELFIES
        selfies_tokens = list(sf.split_selfies(selfies))

        # Join dots with the nearest next tokens
        joined_tokens = []
        i = 0
        while i < len(selfies_tokens):
            if selfies_tokens[i] == '.' and i + 1 < len(selfies_tokens):
                joined_tokens.append(f".{selfies_tokens[i+1]}")
                i += 2
            else:
                joined_tokens.append(selfies_tokens[i])
                i += 1

        selfies_sentence = ' '.join(joined_tokens)
        return selfies_sentence
    except sf.EncoderError as e:
        print(f"Encoder Error: {e}")
        return None

# Example usage:
in_smi = "C1CCC2=CN3C=CC4=C5C=CC=CC5=NC4=C3C=C2C1"  # Sempervirine (CID168919)
selfies_sentence = smiles_to_selfies_sentence(in_smi)
print(selfies_sentence)

"""
[C] [C] [C] [C] [=C] [N] [C] [=C] [C] [=C] [C] [=C] [C] [=C] [C] [Ring1] [=Branch1] [=N] [C] [Ring1] [=Branch2] [=C] [Ring1] [=N] [C] [=C] [Ring1] [P] [C] [Ring2] [Ring1] [Branch1]
"""
```

### Direct Use with the Classifier Pipeline

You can also use the `text-classification` pipeline:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbyuvd/synthaccess-chemselfies")
classifier("[C] [C] [C] [C] [=C] [N] [C] [=C] [C] [=C] [C] [=C] [C] [=C] [C] [Ring1] [=Branch1] [=N] [C] [Ring1] [=Branch2] [=C] [Ring1] [=N] [C] [=C] [Ring1] [P] [C] [Ring2] [Ring1] [Branch1]")  # Sempervirine (CID168919)
```

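The pipeline returns a list of dicts, each with a `label` and a `score`. As a convenience, you can map the raw labels back to the ES/HS classes listed under Model Description. A minimal sketch, assuming the checkpoint uses the default `LABEL_0`/`LABEL_1` names (check `id2label` in the repository's `config.json` for the actual mapping):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbyuvd/synthaccess-chemselfies")

# Assumed mapping based on the Model Description (0 = ES, 1 = HS);
# verify against the checkpoint's config.id2label before relying on it.
ID2CLASS = {"LABEL_0": "ES (easy to synthesize)", "LABEL_1": "HS (hard to synthesize)"}

def predict_sa(selfies_sentence):
    """Return a human-readable class name and its confidence score."""
    result = classifier(selfies_sentence)[0]
    return ID2CLASS.get(result["label"], result["label"]), result["score"]

label, score = predict_sa("[C] [C] [O]")  # ethanol, for illustration
print(f"{label}: {score:.3f}")
```
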
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## Training Details

### Training Data

#### Data Sources
#### Data Preparation

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary


## Model Examination

You can visualize its attention heads using [BertViz](https://github.com/jessevig/bertviz) and attribution weights using [Captum](https://captum.ai/), as done in the Interpretability section of [the base model](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm).

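For example, a minimal BertViz sketch (assuming a Jupyter notebook environment; `output_attentions=True` makes the model return per-layer attention tensors):

```python
import torch
from bertviz import head_view
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "gbyuvd/synthaccess-chemselfies"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, output_attentions=True)

selfies_sentence = "[C] [C] [O]"  # ethanol, for illustration
inputs = tokenizer(selfies_sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # renders an interactive view in the notebook
```
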
### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

#### Hardware

- Platform: Paperspace Gradient
- Compute: Free-P5000 (16 GB GPU, 30 GB RAM, 8 vCPU)

#### Software

- Python: 3.9.13
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
- Ranger21: 0.0.1
- Selfies: 2.1.2
- RDKit: 2024.3.3


## Citation

If you find this project useful in your research and wish to cite it, please use the following BibTeX entry:

```bibtex
@software{chemfie_basebertmlm,
  author = {GP Bayu},
  title = {{ChemFIE Base}: Pretraining A Lightweight BERT-like model on Molecular SELFIES},
  url = {https://huggingface.co/gbyuvd/chemselfies-base-bertmlm},
  version = {1.0},
  year = {2024},
}
```

## References

[DeepSA](https://doi.org/10.1186/s13321-023-00771-3)

```bibtex
@article{Wang2023DeepSA,
  title={DeepSA: a deep-learning driven predictor of compound synthesis accessibility},
  author={Wang, Shihang and Wang, Lin and Li, Fenglei and Bai, Fang},
  journal={Journal of Cheminformatics},
  volume={15},
  pages={103},
  year={2023},
  month={Nov},
  publisher={BioMed Central},
  doi={10.1186/s13321-023-00771-3},
}
```

## Contact & Support My Work

G Bayu ([email protected])

This project has been quite a journey for me. I have dedicated many hours to it, and I would like to keep improving myself, this model, and future projects. However, financial and computational constraints can be challenging.

If you find my work valuable and would like to support my journey, please consider supporting me [here](https://ko-fi.com/gbyuvd). Your support will help me cover costs for computational resources, data acquisition, and further development of this project. Any amount, big or small, is greatly appreciated and will enable me to continue learning and exploring.

Thank you for checking out this model. I am more than happy to receive any feedback so that I can improve myself and the models and projects I will work on in the future.