---
tags:
- biology
---
# ADIO.Protein 16B
ADIO.Protein stands as the largest protein foundation model in the world to date, trained on 1.2 trillion amino acids sourced from UniRef90 and ColabFoldDB.

By leveraging MoE layers, ADIO.Protein efficiently scales to 16 billion parameters, delivering exceptional performance across a wide variety of tasks in protein sequence understanding and sequence generation. Remarkably, it achieves this despite being trained solely on single protein sequences: across more than 280 DMS protein fitness prediction tasks, our model outperforms previous state-of-the-art protein sequence models that do not use MSA and achieves 99% of the performance of models that do, highlighting the strength of its learned representations.
# Model Architecture Details

ADIO.Protein is a transformer encoder-only architecture in which the dense MLP layer in each transformer block is replaced by a sparse MoE layer. It uses single amino acid tokenization and is optimized with a masked language modeling (MLM) training objective. For each token, two experts are selectively activated by the top-2 routing mechanism, as sketched below.
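
A minimal sketch of a sparse MoE layer with top-2 routing as described above; the layer names, expert design, and shapes are illustrative, not the actual ADIO.Protein implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Sparse MoE MLP block: each token is processed by its top-2 experts only."""

    def __init__(self, hidden_size: int, intermediate_size: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, intermediate_size),
                nn.GELU(),
                nn.Linear(intermediate_size, hidden_size),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size)
        gate_logits = self.router(x)                        # (tokens, num_experts)
        top2_vals, top2_idx = gate_logits.topk(2, dim=-1)   # pick 2 experts per token
        top2_weights = F.softmax(top2_vals, dim=-1)         # renormalize the two gates
        out = torch.zeros_like(x)
        for k in range(2):                                  # 1st and 2nd choice
            for e, expert in enumerate(self.experts):
                routed = top2_idx[:, k] == e                # tokens whose k-th pick is e
                if routed.any():
                    out[routed] += top2_weights[routed, k, None] * expert(x[routed])
        return out
```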
More architecture details are shown below:

| Model Arch Component | Value |
| --- | --- |
| Num Attention Heads | |
| Num Hidden Layers | |
| Hidden Size | |
| Intermediate Size | |
| Num MoE Layers | |
| Vocab Size | |
| Context Length | |
# Pre-training of ADIO.Protein 16B

Here we briefly introduce the pre-training details of ADIO.Protein 16B. For more information, please refer to <our paper>.
## Data
## Training Details
## Tokenization
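
The model uses single amino acid tokenization, i.e. one token per residue. A minimal sketch of that scheme; the vocabulary ordering and special tokens below are assumptions, not the model's actual vocabulary:

```python
# Illustrative single amino-acid tokenizer: one token id per residue.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SPECIAL_TOKENS = ["[PAD]", "[CLS]", "[EOS]", "[MASK]", "[UNK]"]  # assumed, not confirmed
VOCAB = {tok: i for i, tok in enumerate(SPECIAL_TOKENS + list(AMINO_ACIDS))}

def tokenize(sequence: str) -> list[int]:
    """Map a protein sequence to token ids, one per amino acid."""
    return [VOCAB.get(residue, VOCAB["[UNK]"]) for residue in sequence.upper()]

print(tokenize("MKTAYIAK"))  # 8 residues -> 8 token ids
```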
# Evaluation of ADIO.Protein 16B
# Results
# How to Use
## Build any downstream models from this backbone
## Embedding
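
A minimal sketch of turning the backbone's per-token hidden states into a fixed-size sequence embedding via masked mean pooling; `hidden_states` stands in for the backbone output, and the pooling choice and hidden size are illustrative assumptions:

```python
import torch

def mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the hidden states of non-padding tokens into one vector per sequence."""
    # hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

hidden_states = torch.randn(2, 128, 1024)              # dummy backbone output
attention_mask = torch.ones(2, 128)                    # no padding in this toy batch
embeddings = mean_pool(hidden_states, attention_mask)  # (2, 1024)
```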
## Token Level Classification
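
A minimal sketch of a per-token classification head on top of the backbone, e.g. for per-residue labels; the class count and shapes are illustrative:

```python
import torch
import torch.nn as nn

class TokenClassificationHead(nn.Module):
    """One prediction per residue from the backbone's per-token states."""

    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, hidden) -> (batch, seq_len, num_classes)
        return self.classifier(hidden_states)

head = TokenClassificationHead(hidden_size=1024, num_classes=3)
logits = head(torch.randn(2, 128, 1024))  # one 3-way prediction per token
```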
## Sequence Level Classification
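
A minimal sketch for whole-sequence labels: pool the per-token states into one vector, then classify; the mean-pooling choice and class count are illustrative:

```python
import torch
import torch.nn as nn

class SequenceClassificationHead(nn.Module):
    """Pool per-token states, then predict one label for the whole sequence."""

    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        pooled = hidden_states.mean(dim=1)  # (batch, hidden)
        return self.classifier(pooled)      # (batch, num_classes)
```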
## Protein-Protein Interaction
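
A minimal sketch that frames protein-protein interaction as binary classification over a pair of pooled sequence embeddings, one per protein; the concatenation scheme is an assumption, not necessarily the setup used for this model:

```python
import torch
import torch.nn as nn

class PPIHead(nn.Module):
    """Score whether two proteins interact from their pooled embeddings."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([emb_a, emb_b], dim=-1)  # (batch, 2 * hidden)
        return self.mlp(pair).squeeze(-1)         # (batch,) interaction logits
```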
## Sequence Level Regression
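
A minimal sketch that regresses a scalar property, for example a fitness score, from a pooled sequence embedding; shapes and pooling are illustrative:

```python
import torch
import torch.nn as nn

class SequenceRegressionHead(nn.Module):
    """Predict one scalar per sequence from pooled per-token states."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        pooled = hidden_states.mean(dim=1)         # (batch, hidden)
        return self.regressor(pooled).squeeze(-1)  # (batch,)
```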
## Or use our one-liner CLI to finetune or evaluate any of the above
For more information, visit: Model Generator
# Citation

Please cite ADIO.Protein using the following BibTeX entry:

```
@inproceedings{Sun2024mixture,
  title={Mixture of Experts Enable Efficient and Effective Protein Understanding and Design},
  author={Ning Sun and Shuxian Zou and Tianhua Tao and Sazan Mahbub and Dian Li and Yonghao Zhuang and Hongyi Wang and Le Song and Eric P. Xing},
  booktitle={NeurIPS 2024 Workshop on AI for New Drug Modalities},
  year={2024}
}
```