---
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- chemistry
- selfies
---
# chemfie-gpt-experiment-1

Training in progress (3 of 4 data chunks completed; see Training Logs below)

## Model Details
- **Model Type**: GPT-2
- **Architecture**: L8, A6, H384 (8 layers, 6 attention heads, hidden size 384; a config sketch follows this list)
- **Task**: Generation of SELFIES strings
- **Language**: N/A (Chemical representation)
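
For orientation, the architecture tags above roughly correspond to the following GPT-2 configuration. This is a minimal sketch: the default `vocab_size` is left in place because the custom tokenizer's vocabulary size isn't listed here, and `n_positions=512` mirrors the `model_max_length` used in the usage snippet below.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Sketch of the card's L8/A6/H384 architecture: 8 layers, 6 attention
# heads, hidden size 384 (384 / 6 = 64-dim heads).
config = GPT2Config(
    n_layer=8,        # L8
    n_head=6,         # A6
    n_embd=384,       # H384
    n_positions=512,  # matches model_max_length below (assumption)
)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```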

## Personal Intended Use
- Hands-on learning, research and experimentation in molecular generation
- Baseline for ablation studies and comparisons with more advanced models

## Use
Since this model does not ship with a standard GPT-2 tokenizer, the special tokens still need to be set up manually (the next experiment will use a proper one):

```python
from transformers import PreTrainedTokenizerFast, AutoModelForCausalLM
import torch

# Load the tokenizer from the raw tokenizer file and declare the
# special tokens explicitly, since they are not bundled with the model.
tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="gpt2_tokenizer.json",
    model_max_length=512,
    unk_token="<unk>",
    pad_token="<pad>",
    eos_token="</s>",
    bos_token="<s>",
    mask_token="<mask>",
)

model = AutoModelForCausalLM.from_pretrained("gbyuvd/chemfie-gpt-experiment-1")

# Generate some sample outputs
def generate_molecules(model, tokenizer, num_samples=5, max_length=100):
    model.eval()
    generated = []
    for _ in range(num_samples):
        # Seed each sample with just the BOS token and sample freely.
        input_ids = torch.tensor([[tokenizer.bos_token_id]]).to(model.device)
        with torch.no_grad():
            output = model.generate(
                input_ids,
                max_length=max_length,
                num_return_sequences=1,
                do_sample=True,
                pad_token_id=tokenizer.pad_token_id,
            )
        generated.append(tokenizer.decode(output[0], skip_special_tokens=True))
    return generated

sample_molecules = generate_molecules(model, tokenizer)
print("Sample generated molecules:")
for i, mol in enumerate(sample_molecules, 1):
    print(f"{i}. {mol}")
```

Decoding the tokenized SELFIES output back to SMILES:
```python
import selfies as sf

test = "[C] [Branch1] [O] [=C] [C] [C] [C] [C] [C] [C] [C] [=Branch1] [=O] [O] [=C] [C] [C] [C] [Ring1]"
test = test.replace(' ', '')  # remove the token separators before decoding
print(sf.decoder(test))
# Output: C(CCCCCCCCO)=CCC=C
```
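
Since generation can yield strings that decode to chemically invalid structures (see Limitations below), a quick validity filter with RDKit can be layered on top of the snippets above. This is a sketch: RDKit is an extra dependency, and `decode_and_validate` is a hypothetical helper, not part of this repo.

```python
import selfies as sf
from rdkit import Chem

def decode_and_validate(selfies_tokens: str):
    """Decode space-separated SELFIES tokens; return canonical SMILES,
    or None if decoding or RDKit parsing fails."""
    try:
        smiles = sf.decoder(selfies_tokens.replace(" ", ""))
    except sf.DecoderError:
        return None
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

# e.g. filter the output of generate_molecules() from the usage snippet:
# valid = [s for m in sample_molecules if (s := decode_and_validate(m))]
```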


## Training Data
- **Source**: Curated and merged from the COCONUTDB (Sorokina et al., 2021), ChEMBL34 (Zdrazil et al., 2023), and SuperNatural3 (Gallo et al., 2023) databases
- **Total**: 2,933,355 samples, split into four training chunks
- **Total Train**: 2,346,680 samples
- **Validation**: 293,336 samples
- **Per chunk**: 586,670 train, 73,334 validation, 73,334 test
- **Random seed for split**: 42 (a reproduction sketch follows this list)
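
The split above works out to 80/10/10 per chunk. Here is a toy sketch of how such a seeded split might be reproduced; the actual preprocessing script is not part of this card, and `selfies_samples` is placeholder data.

```python
from sklearn.model_selection import train_test_split

# Placeholder corpus; in practice this would be the merged, SELFIES-encoded
# samples from COCONUTDB, ChEMBL34, and SuperNatural3 for one chunk.
selfies_samples = [f"[C] [C] [O] [#Sample{i}]" for i in range(1000)]

# 80% train, then split the remaining 20% evenly into validation and test.
train, holdout = train_test_split(selfies_samples, test_size=0.2, random_state=42)
val, test = train_test_split(holdout, test_size=0.5, random_state=42)
print(len(train), len(val), len(test))  # 800, 100, 100 on this toy list
```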

## Training Procedure
- **Batch Size**: 64
- **Epochs per Chunk**: 1
- **Learning Rate**: 1.5e-5
- **Optimizer**: Ranger21 (MADGRAD-Lookahead-AdaBelief with gradient centralization, linear warm-up (22%), gradient clipping, and L2 weight decay); a setup sketch follows this list
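
A minimal sketch of wiring these hyperparameters into the `ranger21` package. Keyword names vary between releases, so check them against your installed version; the `weight_decay` value is an assumption, as the card only says L2 weight decay is used.

```python
from ranger21 import Ranger21
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gbyuvd/chemfie-gpt-experiment-1")

steps_per_epoch = 586_670 // 64  # ~9,166 steps per chunk at batch size 64

# Ranger21 schedules its warm-up/warm-down internally, so it needs the
# total step budget up front (here: one epoch over one chunk).
optimizer = Ranger21(
    model.parameters(),
    lr=1.5e-5,
    num_epochs=1,
    num_batches_per_epoch=steps_per_epoch,
    weight_decay=1e-4,  # assumed value; not stated on the card
)
```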

## Training Logs


| Chunk | Training Loss | Validation Loss | Status  |
| :---: | :-----------: | :-------------: | :-----: |
|   I   |   1.346400    |    1.065180     |  Done   |
|  II   |   1.123500    |    0.993118     |  Done   |
|  III  |   1.058300    |    0.948303     |  Done   |
|  IV   |               |                 | Ongoing |


## Evaluation Results
[To be filled after model evaluation]

## Limitations and Biases
- May generate unrealistic or synthetically inaccessible molecules
- Performance on complex, highly branched, and ring-containing molecules has yet to be evaluated

## Ethical Considerations
- Potential misuse for generating harmful or illegal substances
- May produce biased results based on training data composition
- The information and model are provided for academic purposes only: they are intended for educational and research use and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.

## Additional Information
- Part of the experimental chemfie-gpt/T5 project
- Serves as a baseline for future experiments with further curated datasets, training improvements, and architectural modifications

## Citation
### BibTeX
#### COCONUTDB
```bibtex
@article{sorokina2021coconut,
  title={COCONUT online: Collection of Open Natural Products database},
  author={Sorokina, Maria and Merseburger, Peter and Rajan, Kohulan and Yirik, Mehmet Aziz and Steinbeck, Christoph},
  journal={Journal of Cheminformatics},
  volume={13},
  number={1},
  pages={2},
  year={2021},
  doi={10.1186/s13321-020-00478-9}
}
```

#### ChEMBL34
```bibtex
@article{zdrazil2023chembl,
  title={The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods},
  author={Zdrazil, Barbara and Felix, Eloy and Hunter, Fiona and Manners, Emma J and Blackshaw, James and Corbett, Sybilla and de Veij, Marleen and Ioannidis, Harris and Lopez, David Mendez and Mosquera, Juan F and Magarinos, Maria Paula and Bosc, Nicolas and Arcila, Ricardo and Kizil{\"o}ren, Tevfik and Gaulton, Anna and Bento, A Patr{\'i}cia and Adasme, Melissa F and Monecke, Peter and Landrum, Gregory A and Leach, Andrew R},
  journal={Nucleic Acids Research},
  year={2023},
  pages={gkad1004},
  doi={10.1093/nar/gkad1004}
}

@misc{chembl34,
  title={ChEMBL34},
  year={2023},
  doi={10.6019/CHEMBL.database.34}
}
```

#### SuperNatural3
```bibtex
@article{Gallo2023,
  author = {Gallo, K and Kemmler, E and Goede, A and Becker, F and Dunkel, M and Preissner, R and Banerjee, P},
  title = {{SuperNatural 3.0-a database of natural products and natural product-based derivatives}},
  journal = {Nucleic Acids Research},
  year = {2023},
  month = jan,
  day = {6},
  volume = {51},
  number = {D1},
  pages = {D654-D659},
  doi = {10.1093/nar/gkac1008}
}
```