---
datasets:
- COCONUTDB
language:
- code
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1,183,174
- loss:CosineSimilarityLoss
- chemistry
widget:
- source_sentence: >-
[O][=C][Branch2][Branch2][Ring1][O][C][C][Branch2][Ring1][=Branch1][O][C][=Branch1][C][=O][C][C][C][C][C][C][=C][C][C][C][C][C][C][C][C][C][C][O][P][=Branch1][C][=O][Branch1][C][O][O][C][C][Branch2][Ring1][Branch1][O][C][O][C][Branch1][Ring1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][#Branch2][O][C][Branch1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring2][Ring1][Branch1][O][C][O][C][Branch1][Ring1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][#Branch2][O][C][C][C][C][C][C][C][C][C][C][C][C][C][C]
sentences:
- >-
[O][=C][Branch2][Ring1][N][N][N][=C][Branch1][N][C][=C][C][=C][Branch1][C][Cl][C][=C][Ring1][#Branch1][C][=C][C][=C][Branch1][C][Cl][C][=C][Ring1][#Branch1][C][=C][C][=C][C][=C][C][=C][Ring1][=Branch1][C][=C][Ring1][#Branch2][O]
- '[O][=C][Branch1][C][O][C][=C][Branch1][C][C][C][C][=Branch1][C][=O][O][C]'
- >-
[O][=C][Branch1][C][O][C][C][C][C][C][C][C][C][Branch1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][#C][C][Branch1][C][O][C][C][C][C]
- source_sentence: '[O][=C][Branch1][#Branch1][O][C][Branch1][C][C][C][C][C][C][C][C][C][C][C]'
sentences:
- >-
[O][=C][O][C][C][Branch1][C][O][C][C][Ring1][#Branch1][=C][C][C][C][C][C][C][C][C][C][C][C][C][C]
- '[O][=C][C][=C][C][O][C][O][C][=Ring1][Branch1][C][=C][Ring1][=Branch2][Br]'
- >-
[O][=C][Branch2][#Branch1][=C][O][C][C][=Branch1][C][=C][C][C][Branch1][#Branch1][O][C][=Branch1][C][=O][C][C][C][C][=Branch1][C][=O][C][Branch2][=Branch1][Ring1][O][C][C][Ring1][Branch2][Branch1][C][C][C][Ring1][=Branch1][Branch1][C][O][C][Branch1][#Branch1][O][C][=Branch1][C][=O][C][C][Branch1][#Branch1][O][C][=Branch1][C][=O][C][C][Ring2][Ring1][N][Branch1][#C][C][O][C][=Branch1][C][=O][C][=C][C][=C][C][=C][Ring1][=Branch1][C][Branch1][Branch2][C][O][C][=Branch1][C][=O][C][C][Ring2][Ring2][S][C][C][=C][C][=C][C][=C][C][=C][Ring1][=Branch1]
- source_sentence: >-
[O][=C][O][C][=C][Branch2][Ring1][#C][C][=C][C][O][C][N][Branch1][S][C][=C][C][=C][Branch1][=Branch1][O][C][C][C][C][C][=C][Ring1][O][C][C][Ring2][Ring1][Branch1][=Ring1][P][C][=Branch1][=C][=C][Ring2][Ring1][=Branch2][C][C][=C][C][=C][C][=C][Ring1][=Branch1][C]
sentences:
- >-
[O][=C][N][C][Branch1][S][C][=Branch1][C][=O][N][C][=C][C][=C][C][=C][Ring1][N][Ring1][=Branch1][C][C][=Branch1][C][=O][N][C][C][C][=N][C][=Branch1][Branch1][=C][S][Ring1][Branch1][C]
- >-
[O][=C][C][=C][Branch2][Branch2][O][O][C][=C][C][Branch2][Ring2][#C][O][C][C][Branch1][Ring1][C][O][C][C][C][=C][C][NH1][C][=C][C][=Ring1][Branch1][C][=C][Ring1][=Branch2][C][C][=Branch1][C][=O][C][Branch1][Ring1][C][O][C][C][=Branch1][Branch2][=C][Ring2][Ring1][#C][Ring2][Ring1][O][C][Ring1][#Branch2][=C][C][Branch2][Ring2][O][O][C][Branch1][Ring2][C][Ring1][Branch1][C][Branch1][C][O][C][Branch2][Ring1][Branch2][C][=C][C][C][Branch1][S][N][Branch1][=Branch1][C][C][C][O][C][C][C][C][Ring1][S][Ring1][O][C][C][C][C][C][O][=C][Ring2][Branch1][=Branch2][C][O][C][=Branch1][C][=O][O][C][C]
- >-
[O][C][C][O][C][Branch2][Branch2][#Branch2][O][C][C][Branch1][C][O][C][Branch1][C][O][C][Branch2][#Branch1][#Branch2][O][C][Ring1][Branch2][O][C][C][C][C][Branch2][=Branch1][=N][C][=Branch2][Branch1][P][=C][C][Branch1][C][O][C][Branch1][C][C][C][Ring1][Branch2][C][C][Branch1][C][O][C][C][Branch1][=Branch2][C][C][C][Ring1][O][Ring1][Branch1][C][C][Branch2][Ring1][Branch1][O][C][O][C][Branch1][Ring1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][#Branch2][O][Branch1][C][C][C][C][=C][C][Branch1][C][C][C][C][Ring2][Ring2][=Branch2][Branch1][C][C][C][C][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring2][Branch1][S][O]
- source_sentence: >-
[O][=C][O][C][=C][C][=C][Branch1][O][O][C][=Branch1][C][=O][N][Branch1][C][C][C][C][=C][Ring1][N][C][=Branch1][Ring2][=C][Ring1][S][C][O][C][=C][C][=C][C][=C][Ring1][=Branch1][C][=Ring1][=Branch2]
sentences:
- >-
[O][=C][C][=Branch2][#Branch1][=N][=C][C][Branch2][Ring2][N][C][NH1+1][C][=Branch2][Ring2][Ring1][=C][Branch1][#Branch2][C][=N][C][=C][C][Ring1][Branch2][Ring1][Branch1][C][C][=C][C][=Branch1][S][=C][C][=Branch1][Ring2][=C][Ring1][=Branch1][C][C][C][O][C][C][Ring1][=Branch1][C][C][C][O][C][Branch1][C][O][C][C][Branch1][C][C][C][C][C][=Branch1][C][=O][C][Branch1][C][C][Branch1][C][C][C][Ring1][#Branch2][C][C][C][Ring1][=C][Branch1][C][C][C][Ring2][Ring2][=C][Branch1][C][C][C][Ring2][Branch1][C][C][Branch1][C][C][C][C][Branch1][C][O][C][O][C][Ring1][Ring1][Branch1][C][C][C]
- >-
[O][=C][Branch1][C][O][C][Branch2][O][P][O][C][C][=Branch1][C][=O][N][C][C][Branch1][C][O][C][C][Branch2][=Branch2][=Branch2][O][C][C][=Branch1][C][=O][N][C][C][Branch1][C][O][C][C][Branch2][=Branch1][P][O][C][C][Branch1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Branch2][Branch1][#Branch2][O][P][=Branch1][C][=O][Branch1][C][O][O][C][C][Branch2][Ring1][O][N][C][=Branch1][C][=O][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Branch1][C][O][C][=C][C][C][C][C][C][C][C][C][C][C][C][C][Ring2][Branch1][#Branch1][O][Branch1][P][O][C][Ring2][Branch1][S][C][Branch1][C][O][C][Branch1][C][O][C][O][C][C][=Branch1][C][=O][O][Branch1][P][O][C][Ring2][#Branch1][=Branch1][C][Branch1][C][O][C][Branch1][C][O][C][O][C][C][=Branch1][C][=O][O][O][C][Branch1][N][C][Branch1][C][O][C][Branch1][C][O][C][O][C][C][Branch1][Branch2][N][C][=Branch1][C][=O][C][O][C][Branch1][C][O][C][Ring2][=Branch2][Branch2]
- >-
[O][=C][Branch1][C][N][C][=N][C][=C][C][=C][C][=C][Ring1][=Branch1][C][=Branch1][C][=O][N][Ring1][O][C][C][O][C]
- source_sentence: '[O][=C][Branch1][#Branch1][C][=C][C][C][C][=C][C]'
sentences:
- >-
[O][=C][Branch1][C][O][C][C][C][C][C][C][C][C][Branch1][C][O][C][=C][C][#C][C][=C][C][C][C]
- >-
[O][C][C][O][C][Branch2][=Branch2][Ring1][O][C][Branch1][C][C][C][C][C][Branch1][C][O][O][C][C][C][C][C][C][C][C][C][Branch2][Branch1][N][O][C][O][C][Branch1][Ring1][C][O][C][Branch2][Ring2][#Branch1][O][C][O][C][Branch1][Ring1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][#Branch2][O][C][O][C][Branch1][C][C][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][=Branch2][O][C][Branch1][C][O][C][Ring2][Ring1][#C][O][C][C][C][Ring2][Ring2][#Branch1][Branch1][C][C][C][Ring2][Ring2][N][C][C][C][Ring2][Ring2][S][Branch1][C][C][C][Ring2][Branch1][Ring2][C][Ring2][Branch1][Branch2][C][C][Branch1][C][O][C][Branch1][C][O][C][Ring2][=Branch1][=Branch1][O]
- >-
[O][=C][Branch1][#Branch2][C][=C][C][#C][C][#C][C][#C][C][N][C][C][C][=C][C][=C][C][=C][Ring1][=Branch1]
model-index:
- name: SentenceTransformer
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: NP isotest
type: NP-isotest
metrics:
- type: pearson_cosine
value: 0.936731178796972
name: Pearson Cosine
- type: spearman_cosine
value: 0.93027366634068
name: Spearman Cosine
- type: pearson_manhattan
value: 0.826340669261792
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.845192256146849
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.842726066770598
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.865381289346298
name: Spearman Euclidean
- type: pearson_dot
value: 0.924283770507162
name: Pearson Dot
- type: spearman_dot
value: 0.923230424410894
name: Spearman Dot
- type: pearson_max
value: 0.936731178796972
name: Pearson Max
- type: spearman_max
value: 0.93027366634068
name: Spearman Max
---
# ChEmbed v0.1 - Chemical Embeddings
This prototype is a [sentence-transformers](https://www.SBERT.net) model based on [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased), fine-tuned on around 1.2 million pairs of valid natural compounds' SELFIES [(Krenn et al. 2020)](https://github.com/aspuru-guzik-group/selfies) taken from COCONUTDB [(Sorokina et al. 2021)](https://coconut.naturalproducts.net/). It maps compounds' *Self-Referencing Embedded Strings* (SELFIES) into a 768-dimensional dense vector space and can potentially be used for chemical similarity estimation, similarity search, classification, clustering, and more.
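Since the model takes SELFIES strings as input, structures available as SMILES must be converted first. A minimal sketch using the [`selfies`](https://github.com/aspuru-guzik-group/selfies) package (the aspirin SMILES below is an illustrative example, not taken from the training data):
```python
# pip install selfies
import selfies as sf

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"   # aspirin (illustrative)
selfies_str = sf.encoder(smiles)       # SMILES -> SELFIES
print(selfies_str)

smiles_back = sf.decoder(selfies_str)  # SELFIES -> SMILES round-trip
print(smiles_back)
```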
I am planning to train this model for more epochs on the current dataset before moving on to a larger dataset of 6 million pairs generated from ChEMBL34. However, this will take some time due to computational and financial constraints. A future project of mine is to develop a custom model specifically for cheminformatics, to address the biases and optimization issues that come with repurposing an embedding model designed for NLP tasks.
### Update
This model won't be trained further on the current natural-products dataset nor on ChEMBL34, since for the past two weeks I have been working on pre-training a BERT-like base model that operates on SELFIES with a custom tokenizer. That base model was scheduled for release this week, but due to mistakes in parsing some SELFIES notations, pre-training has been halted while I work intensely to correct these issues and resume training. The base model will hopefully be released next week. Following this, I plan to fine-tune a sentence transformer and a classifier model built on top of that base model.
The timeline for these tasks depends on the availability of the compute server and my own time constraints, as I also need to finish my undergrad thesis. Thank you for checking out this model.
The base model is now available [here](https://huggingface.co/gbyuvd/chemselfies-base-bertmlm).
A new version of this model is now available [here](https://huggingface.co/gbyuvd/chemembed-chemselfies).
### Disclaimer: For Academic Purposes Only
The information and model provided are for academic purposes only. They are intended for educational and research use and should not be used for any commercial or legal purposes. The author does not guarantee the accuracy, completeness, or reliability of the information.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:** SELFIES pairs generated from COCONUTDB
- **Language:** SELFIES
- **License:** CC BY-NC 4.0
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': False})
)
```
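The pooling layer concatenates mean- and max-pooled token embeddings, which is how the transformer's 384-dimensional hidden states become a 768-dimensional sentence embedding. A toy illustration with a dummy tensor (ignoring the attention mask for brevity; this is not the library's actual pooling code):
```python
import torch

# Dummy token embeddings: (batch, seq_len, hidden) with hidden = 384
token_embeddings = torch.randn(1, 20, 384)

mean_pooled = token_embeddings.mean(dim=1)       # (1, 384)
max_pooled = token_embeddings.max(dim=1).values  # (1, 384)

# Concatenating both pooling modes doubles the dimensionality
sentence_embedding = torch.cat([mean_pooled, max_pooled], dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```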
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gbyuvd/ChemEmbed-v01")
# Run inference
sentences = [
'[O][=C][Branch1][#Branch1][C][=C][C][C][C][=C][C]',
'[O][=C][Branch1][C][O][C][C][C][C][C][C][C][C][Branch1][C][O][C][=C][C][#C][C][=C][C][C][C]',
'[O][C][C][O][C][Branch2][=Branch2][Ring1][O][C][Branch1][C][C][C][C][C][Branch1][C][O][O][C][C][C][C][C][C][C][C][C][Branch2][Branch1][N][O][C][O][C][Branch1][Ring1][C][O][C][Branch2][Ring2][#Branch1][O][C][O][C][Branch1][Ring1][C][O][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][#Branch2][O][C][O][C][Branch1][C][C][C][Branch1][C][O][C][Branch1][C][O][C][Ring1][=Branch2][O][C][Branch1][C][O][C][Ring2][Ring1][#C][O][C][C][C][Ring2][Ring2][#Branch1][Branch1][C][C][C][Ring2][Ring2][N][C][C][C][Ring2][Ring2][S][Branch1][C][C][C][Ring2][Branch1][Ring2][C][Ring2][Branch1][Branch2][C][C][Branch1][C][O][C][Branch1][C][O][C][Ring2][=Branch1][=Branch1][O]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # torch.Size([3, 3])
```
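Building on the snippet above, a short sketch of similarity search: rank a small corpus of SELFIES by cosine similarity to a query. The corpus and query strings are reused from the widget examples above and are illustrative only:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gbyuvd/ChemEmbed-v01")

corpus = [
    '[O][=C][Branch1][C][O][C][=C][Branch1][C][C][C][C][=Branch1][C][=O][O][C]',
    '[O][=C][Branch1][#Branch1][C][=C][C][C][C][=C][C]',
    '[O][=C][Branch1][C][N][C][=N][C][=C][C][=C][C][=C][Ring1][=Branch1][C][=Branch1][C][=O][N][Ring1][O][C][C][O][C]',
]
query = '[O][=C][Branch1][#Branch1][O][C][Branch1][C][C][C][C][C][C][C][C][C][C][C]'

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity of the query against every corpus entry: shape [1, len(corpus)]
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(f"Most similar compound: index {best} (score={scores[0, best].item():.4f})")
```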
## Dataset
| Dataset | Reference | Number of Pairs |
|:---------------------------|:-----------|:-----------|
| COCONUTDB (0.8:0.1:0.1 split) | [(Sorokina et al. 2021)](https://coconut.naturalproducts.net/) | 1,183,174 |
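The `loss:CosineSimilarityLoss` tag in the metadata implies the pairs were labeled with continuous similarity scores. As a minimal sketch only (the pairs and labels below are placeholders, and this is not the exact training script), a fine-tuning setup with that loss could look like:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base checkpoint; Sentence Transformers adds a default pooling head if none exists
model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Placeholder SELFIES pairs with placeholder similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["[C][C][O]", "[C][C][=Branch1][C][=O][O]"], label=0.8),
    InputExample(texts=["[C][C][O]", "[C][O]"], label=0.6),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```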
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `NP-isotest`
* Number of test pairs: 118,318
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9367 |
| **spearman_cosine** | **0.9303** |
| pearson_manhattan | 0.8263 |
| spearman_manhattan | 0.8452 |
| pearson_euclidean   | 0.8427     |
| spearman_euclidean  | 0.8654     |
| pearson_dot         | 0.9243     |
| spearman_dot        | 0.9232     |
| pearson_max         | 0.9367     |
| spearman_max        | 0.9303     |
## Limitations
For now, the model may be less effective at embedding synthetic drugs, since it has been trained only on natural products. In addition, the tokenizer is still an off-the-shelf NLP tokenizer rather than one customized for SELFIES.
## Testing Generated Embeddings' Clusters
The plots below show how the model's embeddings (at this stage) cluster different classes of compounds, compared to using MACCS fingerprints.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/667da868d653c0b02d6a2399/c8_5IWjPgbrGY0Z9-ZHop.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/667da868d653c0b02d6a2399/EHEcaSnra4lldI0LY5tGq.png)
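A rough sketch of how such a comparison could be set up (the compounds are illustrative, and this is not the exact code behind the figures): decode each SELFIES back to SMILES, compute MACCS fingerprints with RDKit, and project both representations to 2D for plotting:
```python
import numpy as np
import selfies as sf
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer

# Illustrative compounds (ethanol, acetic acid, ethyl acetate)
selfies_list = ["[C][C][O]", "[O][=C][Branch1][C][O][C]", "[C][C][=Branch1][C][=O][O][C][C]"]

model = SentenceTransformer("gbyuvd/ChemEmbed-v01")
embeddings = model.encode(selfies_list)  # (n, 768) learned embeddings

# MACCS keys: 167-bit structural fingerprints computed from the decoded SMILES
maccs = np.array([
    list(MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(sf.decoder(s))))
    for s in selfies_list
])  # (n, 167)

# Project both representations to 2D, e.g. for a scatter plot per compound class
emb_2d = PCA(n_components=2).fit_transform(embeddings)
maccs_2d = PCA(n_components=2).fit_transform(maccs)
print(emb_2d.shape, maccs_2d.shape)  # (3, 2) (3, 2)
```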
### Framework Versions
- Python: 3.9.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Contact
G Bayu ([email protected]) |