---
license: apache-2.0
---


# BioM3: Biological Multi-Modal Model for Protein Design

## Citation

If you use this code, please cite:

```bibtex
@article{BioM3_2024,
  title   = {Natural Language Prompts Guide the Design of Novel Functional Protein Sequences},
  journal = {bioRxiv},
  year    = {2024},
  note    = {bioRxiv 2024.11.11.622734},
  doi     = {10.1101/2024.11.11.622734}
}
```

[Read the paper on bioRxiv](https://www.biorxiv.org/content/10.1101/2024.11.11.622734v1)

## Software Requirements

### Required Dependencies
- Python 3.8 or later
- PyTorch (latest stable version)
- PyTorch Lightning
- pandas
- pyyaml

### Installation

Create and activate a conda environment:
```bash
conda create -n BioM3_env python=3.8
conda activate BioM3_env
```

Install the required packages:
```bash
conda install pytorch pytorch-lightning pandas pyyaml -c pytorch -c conda-forge
```
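
To verify the environment before downloading any weights, a quick import check can be run (a minimal sanity test; version numbers will vary):

```python
# Sanity check: all core dependencies should import without errors.
import torch
import pytorch_lightning as pl
import pandas as pd
import yaml

print(f"PyTorch {torch.__version__}, Lightning {pl.__version__}")
```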

## Stage 1: PenCL Inference

### Overview

This stage demonstrates how to perform inference using the **BioM3 PenCL model** for aligning protein sequences and text descriptions. The model computes latent embeddings for the given inputs and calculates **dot product scores** (similarities) with normalization.

### Model Weights

Before running the model, ensure you have:
- Configuration file: `stage1_config.json`
- Pre-trained weights: `BioM3_PenCL_epoch20.bin`

### Running the Model

1. Clone the repository:
```bash
git clone https://huggingface.co/your_username/BioM3_PenCL
cd BioM3_PenCL
```

2. Run inference:
```bash
python run_PenCL_inference.py \
    --json_path "stage1_config.json" \
    --model_path "./weights/PenCL/BioM3_PenCL_epoch20.bin" \
    --output_path "test_PenCL_embeddings.pt"
```

### Example Input Data

The script demonstrates inference using two protein-text pairs from the SwissProt dataset:

**Pair 1:**
- **Protein Sequence:** MSLEQKKGADIISKILQIQNSIGKTTSPSTLKTKLSEISRKEQENARIQSKL...
- **Text Description:** PROTEIN NAME: 2' cyclic ADP-D-ribose synthase AbTIR...

**Pair 2:**
- **Protein Sequence:** MRFQVIVAAATITMITSYIPGVASQSTSDGDDLFVPVSNFDPKSIFPEIKHP...
- **Text Description:** PROTEIN NAME: Glucan endo-1,3-beta-D-glucosidase 1...

These pairs demonstrate how the model aligns protein sequences with their corresponding functional descriptions. The model will compute embeddings for both the sequences and descriptions, then calculate their similarities using dot product scores.

### Expected Output

The script provides the following outputs:

1. **Latent Embedding Shapes**
   - `z_p`: Protein sequence embeddings
   - `z_t`: Text description embeddings

2. **Vector Magnitudes**
   - L2 norms of both embedding types

3. **Dot Product Scores**
   - Similarity matrix between embeddings

4. **Normalized Probabilities**
   - Protein-normalized (softmax over the protein axis, so each column of the score matrix sums to 1)
   - Text-normalized (softmax over the text axis, so each row sums to 1)
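
The quantities above can be recomputed from the saved embeddings. The sketch below is a minimal illustration: the dictionary keys `z_p` and `z_t` are assumptions about the saved format, so check what `run_PenCL_inference.py` actually writes before relying on these names.

```python
import torch
import torch.nn.functional as F

# Load the embeddings saved by run_PenCL_inference.py.
# NOTE: the keys "z_p" and "z_t" are assumptions about the saved format.
data = torch.load("test_PenCL_embeddings.pt")
z_p, z_t = data["z_p"], data["z_t"]  # each of shape [batch, 512]

# Dot product score matrix: scores[i, j] = <z_p[i], z_t[j]>
scores = z_p @ z_t.T

# Protein-normalized: softmax over the protein axis (columns sum to 1)
protein_probs = F.softmax(scores, dim=0)
# Text-normalized: softmax over the text axis (rows sum to 1)
text_probs = F.softmax(scores, dim=1)

# Homology matrix: pairwise dot products of L2-normalized protein embeddings
z_p_unit = F.normalize(z_p, dim=1)
homology = z_p_unit @ z_p_unit.T

print(scores, protein_probs, text_probs, homology, sep="\n")
```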

#### Sample Output
```plaintext
=== Inference Results ===
Shape of z_p (protein latent): torch.Size([2, 512])
Shape of z_t (text latent): torch.Size([2, 512])

Magnitudes of z_p vectors: tensor([5.3376, 4.8237])
Magnitudes of z_t vectors: tensor([29.6971, 27.6714])

=== Dot Product Scores Matrix ===
tensor([[ 7.3152,  1.8080],
        [ 3.3922, 16.6157]])

=== Normalized Probabilities ===
Protein-Normalized Probabilities:
tensor([[9.8060e-01, 3.7078e-07],
        [1.9398e-02, 1.0000e+00]])

Text-Normalized Probabilities:
tensor([[9.9596e-01, 4.0412e-03],
        [1.8076e-06, 1.0000e+00]])

=== Homology Matrix (Dot Product of Normalized z_p) ===
tensor([[1.0000, 0.1840],
        [0.1840, 1.0000]])
```

## Stage 2: Facilitator Sampling

### Overview

In this stage, the **Facilitator model** takes the text embeddings (z_t) computed in Stage 1 and generates **facilitated embeddings (z_c)**. The facilitated embeddings align more closely with protein embeddings (z_p) and reduce discrepancies, as demonstrated by **Mean Squared Error (MSE)** and **Maximum Mean Discrepancy (MMD)** metrics.

### Model Weights

Before running the model, ensure you have:
- Configuration file: `stage2_facilitator_config.json`
- Pre-trained weights: `BioM3_Facilitator_epoch20.bin`

### Running the Facilitator Model

1. Clone the repository:
```bash
git clone https://huggingface.co/your_username/BioM3_Facilitator
cd BioM3_Facilitator
```

2. Run inference:
```bash
python run_Facilitator_sample.py \
    --json_path "stage2_facilitator_config.json" \
    --model_path "./weights/Facilitator/BioM3_Facilitator_epoch20.bin" \
    --input_data_path "test_PenCL_embeddings.pt" \
    --output_data_path "test_Facilitator_embeddings.pt"
```

Arguments:
- **json_path**: Path to the JSON configuration file
- **model_path**: Path to the pre-trained Facilitator weights
- **input_data_path**: Path to the input embeddings (z_t and z_p) generated in Stage 1
- **output_data_path**: Path to save the facilitated embeddings (z_c)

### Expected Output

The script provides the following outputs (see the sketch after this list for recomputing the MSE and MMD metrics):

1. **Latent Embedding Shapes**
   - z_t: Text embeddings
   - z_p: Protein embeddings
   - z_c: Facilitated embeddings

2. **Vector Magnitudes**
   - L2 norms of z_t, z_p, and z_c for a given batch

3. **Mean Squared Error (MSE)**
   - MSE between facilitated embeddings (z_c) and protein embeddings (z_p)
   - MSE between text embeddings (z_t) and protein embeddings (z_p)

4. **Maximum Mean Discrepancy (MMD)**
   - MMD between facilitated embeddings (z_c) and protein embeddings (z_p)
   - MMD between text embeddings (z_t) and protein embeddings (z_p)
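
These metrics can also be recomputed offline. The sketch below is illustrative only: the dictionary keys are assumptions about the saved format, and the RBF kernel with a fixed bandwidth is a common MMD formulation, not necessarily the exact kernel used by BioM3.

```python
import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD with an RBF kernel (illustrative)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# NOTE: the keys "z_t", "z_p", "z_c" are assumptions about the saved format.
data = torch.load("test_Facilitator_embeddings.pt")
z_t, z_p, z_c = data["z_t"], data["z_p"], data["z_c"]

print("MSE(z_c, z_p):", torch.mean((z_c - z_p) ** 2).item())
print("MSE(z_t, z_p):", torch.mean((z_t - z_p) ** 2).item())
print("MMD(z_c, z_p):", mmd_rbf(z_c, z_p).item())
print("MMD(z_t, z_p):", mmd_rbf(z_t, z_p).item())
```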



### Sample Output



```plaintext
=== Facilitator Model Output ===
Shape of z_t (Text Embeddings): torch.Size([2, 512])
Shape of z_p (Protein Embeddings): torch.Size([2, 512])
Shape of z_c (Facilitated Embeddings): torch.Size([2, 512])

=== Norm (L2 Magnitude) Results for Batch Index 0 ===
Norm of z_t (Text Embedding): 29.697054
Norm of z_p (Protein Embedding): 5.337610
Norm of z_c (Facilitated Embedding): 3.244318

=== Mean Squared Error (MSE) Results ===
MSE between Facilitated Embeddings (z_c) and Protein Embeddings (z_p): 0.069909
MSE between Text Embeddings (z_t) and Protein Embeddings (z_p): 1.612812

=== Max Mean Discrepancy (MMD) Results ===
MMD between Facilitated Embeddings (z_c) and Protein Embeddings (z_p): 0.000171
MMD between Text Embeddings (z_t) and Protein Embeddings (z_p): 0.005172
```



### What the Output Means

1. **Latent Shapes**: z_c has the same shape as z_p and z_t, confirming that the facilitator preserves dimensionality.
2. **Norms**: z_c is closer in magnitude to z_p than z_t is, showing that the facilitator model effectively aligns the embeddings.
3. **MSE**: The lower MSE between z_c and z_p (compared with z_t and z_p) confirms that z_c approximates z_p more closely.
4. **MMD**: The lower MMD shows that the **distribution** of z_c is closer to that of z_p than the original z_t is.

### Saving the Output

The facilitated embeddings (z_c) are saved to the path given by `output_data_path` for use in later stages.
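
For example, a downstream stage could load the saved embeddings like this (the key `z_c` is an assumption about the saved format):

```python
import torch

# Load the Stage 2 output for use in a later stage.
z_c = torch.load("test_Facilitator_embeddings.pt")["z_c"]  # assumed key
print(z_c.shape)  # expected: torch.Size([batch_size, 512])
```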


## Stage 3: ProteoScribe

🚧 **Coming Soon** 🚧

This stage will contain scripts and models for the ProteoScribe process. Check back for:
- Configuration files
- Model weights
- Running instructions
- Output examples

## Support

For questions or issues:
- Open an issue in this repository
- Contact: [Your contact information]

---
Repository maintained by the BioM3 Team