---
license: cc-by-nc-sa-4.0
tags:
- chemistry
- drug-design
- synthesis-accessibility
- cheminformatics
- drug-discovery
- selfies
- drugs
- molecules
- compounds
- ranger21
- madgrad
---

# Model Card for ChemFIE-SA (Synthesis Accessibility)

This model is a BERT-like binary sequence classifier for synthesis accessibility, fine-tuned from [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm) on a dataset with labels derived from DeepSA (Wang et al. 2023). It predicts whether a compound is easy (ES) or hard (HS) to synthesize from chemical structures represented as SELFIES (Self-Referencing Embedded Strings).

### Disclaimer: For Academic Purposes Only

The information and model provided are for academic purposes only. They are intended for educational and research use, and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.

[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/O4O710GFBZ)

## Model Details

### Model Description

- **Model Type:** Transformer (BertForSequenceClassification)
- **Base Model:** [gbyuvd/chemselfies-base-bertmlm](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm)
- **Maximum Sequence Length:** 512 tokens
- **Number of Labels:** 2 classes (0 ES: easy to synthesize; 1 HS: hard to synthesize)
- **Training Dataset:** SELFIES with labels derived from DeepSA
- **Language:** SELFIES
- **License:** CC-BY-NC-SA 4.0

## Uses

If you have canonical SMILES instead of SELFIES, first convert them into the whitespace-separated SELFIES format that the model's tokenizer expects:

```python
import selfies as sf

def smiles_to_selfies_sentence(smiles):
    try:
        selfies = sf.encoder(smiles)  # encode SMILES into SELFIES
        selfies_tokens = list(sf.split_selfies(selfies))

        # Join each dot (fragment separator) with the token that follows it
        joined_tokens = []
        i = 0
        while i < len(selfies_tokens):
            if selfies_tokens[i] == '.' and i + 1 < len(selfies_tokens):
                joined_tokens.append(f".{selfies_tokens[i+1]}")
                i += 2
            else:
                joined_tokens.append(selfies_tokens[i])
                i += 1

        return ' '.join(joined_tokens)
    except sf.EncoderError as e:
        print(f"Encoder Error: {e}")
        return None

# Example usage:
in_smi = "C1CCC2=CN3C=CC4=C5C=CC=CC5=NC4=C3C=C2C1"  # Sempervirine (CID 168919)
selfies_sentence = smiles_to_selfies_sentence(in_smi)
print(selfies_sentence)

"""
[C] [C] [C] [C] [=C] [N] [C] [=C] [C] [=C] [C] [=C] [C] [=C] [C] [Ring1] [=Branch1] [=N] [C] [Ring1] [=Branch2] [=C] [Ring1] [=N] [C] [=C] [Ring1] [P] [C] [Ring2] [Ring1] [Branch1]
"""
```

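Since the tokenizer splits on whitespace, each bracketed SELFIES symbol becomes one input token. A quick stdlib-only sanity check on the Sempervirine sentence above (the string is copied from the example output):

```python
# Sempervirine SELFIES sentence from the conversion example above
sentence = (
    "[C] [C] [C] [C] [=C] [N] [C] [=C] [C] [=C] [C] [=C] [C] [=C] [C] "
    "[Ring1] [=Branch1] [=N] [C] [Ring1] [=Branch2] [=C] [Ring1] [=N] "
    "[C] [=C] [Ring1] [P] [C] [Ring2] [Ring1] [Branch1]"
)

# One whitespace-separated symbol = one token for the model's tokenizer
tokens = sentence.split()
print(len(tokens))  # 32 symbols, well under the 512-token limit
```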
### Direct Use with the Classifier Pipeline

You can also use the `text-classification` pipeline directly on a whitespace-separated SELFIES string:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbyuvd/synthaccess-chemselfies")
classifier("[C] [C] [C] [C] [=C] [N] [C] [=C] [C] [=C] [C] [=C] [C] [=C] [C] [Ring1] [=Branch1] [=N] [C] [Ring1] [=Branch2] [=C] [Ring1] [=N] [C] [=C] [Ring1] [P] [C] [Ring2] [Ring1] [Branch1]")  # Sempervirine (CID 168919)
```

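The pipeline returns a list of dicts of the form `[{'label': ..., 'score': ...}]`. The exact label names depend on the model's `config.id2label`; the mapping below (`LABEL_0` = ES, `LABEL_1` = HS) is an assumption based on the two-class setup described above, so verify it against the model config before relying on it. A minimal sketch for turning the output into a readable verdict:

```python
def interpret(result, threshold=0.5):
    """Turn a text-classification pipeline output into a readable verdict.

    Assumes the standard [{'label': ..., 'score': ...}] output shape and
    that LABEL_0 = ES (easy) and LABEL_1 = HS (hard) -- check the model's
    config.id2label to confirm this mapping before relying on it.
    """
    names = {
        "LABEL_0": "ES (easy to synthesize)",
        "LABEL_1": "HS (hard to synthesize)",
    }
    top = result[0]
    verdict = names.get(top["label"], top["label"])
    return verdict, top["score"], top["score"] >= threshold

# Example with a mocked pipeline result (no model download needed):
mock_result = [{"label": "LABEL_0", "score": 0.93}]
verdict, score, confident = interpret(mock_result)
print(verdict, score, confident)  # ES (easy to synthesize) 0.93 True
```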
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## Training Details

### Training Data

##### Data Sources

##### Data Preparation

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination

You can visualize its attention heads with [BertViz](https://github.com/jessevig/bertviz) and attribution weights with [Captum](https://captum.ai/), as demonstrated in the Interpretability section of the [base model](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm).

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

#### Hardware

- Platform: Paperspace Gradient
- Compute: Free-P5000 (16 GB GPU, 30 GB RAM, 8 vCPU)

#### Software

- Python: 3.9.13
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
- Ranger21: 0.0.1
- Selfies: 2.1.2
- RDKit: 2024.3.3

## Citation

If you find this project useful in your research and wish to cite it, please use the following BibTeX entry:

```bibtex
@software{chemfie_basebertmlm,
  author = {GP Bayu},
  title = {{ChemFIE Base}: Pretraining A Lightweight BERT-like model on Molecular SELFIES},
  url = {https://huggingface.co/gbyuvd/chemselfies-base-bertmlm},
  version = {1.0},
  year = {2024},
}
```

## References

[DeepSA](https://doi.org/10.1186/s13321-023-00771-3)

```bibtex
@article{Wang2023DeepSA,
  title={DeepSA: a deep-learning driven predictor of compound synthesis accessibility},
  author={Wang, Shihang and Wang, Lin and Li, Fenglei and Bai, Fang},
  journal={Journal of Cheminformatics},
  volume={15},
  pages={103},
  year={2023},
  month={Nov},
  publisher={BioMed Central},
  doi={10.1186/s13321-023-00771-3},
}
```

## Contact & Support My Work

G Bayu ([email protected])

This project has been quite a journey for me. I have dedicated many hours to it, and I would like to keep improving myself, this model, and future projects. However, financial and computational constraints can be challenging.

If you find my work valuable and would like to support my journey, please consider supporting me [here](https://ko-fi.com/gbyuvd). Your support will help me cover costs for computational resources, data acquisition, and further development of this project. Any amount, big or small, is greatly appreciated and will enable me to continue learning and exploring.

Thank you for checking out this model. I am more than happy to receive any feedback so that I can improve this and future models and projects.