POOUCAS committed
Commit 0b54afa • 1 Parent(s): b780c75

Update README.md

Files changed (1):
  1. README.md +35 -23
README.md CHANGED
@@ -1,32 +1,44 @@
- ---
- license: mit

  ---

- **Base model:** [westlake-repl/SaProt_35M_AF2](https://huggingface.co/westlake-repl/SaProt_35M_AF2)

- **Task type:** protein-level regression

- Label is the experimentally tested fitness score which records the scaled mutation effect for each mutant.

- **Dataset:** XXXX

- **Model input type:** Amino acid sequence;label in RhlA

- **Performance (on test set):** 0.812 Spearman's ρ

- **LoRA config:**
  - **r:** 8
- - **lora_dropout:** 0.1
- - **lora_alpha:** 8
- - **target_modules:** ["query", "key", "value", "intermediate.dense", "output.dense"]
- - **modules_to_save:** ["regression"]
-
- **Training config:**
-
- - **optimizer:**
- - **class:** AdamW
- - **betas:** (0.9, 0.98)
- - **weight_decay:** 0.01
- - **learning rate:** 5e-5
- - **epoch:** 5
- - **batch size:** Adaptive

+
+ # Model Information 🧬
+
+ **License:** MIT
+
  ---

+ ### 🔬 Base Model:
+ [westlake-repl/SaProt_35M_AF2](https://huggingface.co/westlake-repl/SaProt_35M_AF2)

+ ### 🧩 Task Type:
+ Protein-level regression

+ - **Label:** The experimentally tested fitness score, representing the scaled mutation effect for each mutant.

+ ### 📊 Dataset:
+ [DATASET-CAPE-RhlA-seqlabel](https://huggingface.co/datasets/SaProtHub/DATASET-CAPE-RhlA-seqlabel)

+ - Contains mutation data, including RhlA enzyme sequences and their corresponding performance metrics.
+ - **Source:** Labels derived from [CAPE](https://doi.org/10.1021/acssynbio.4c00588)
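
For orientation, the dataset above should be loadable with the standard `datasets` library. A minimal sketch; the split layout and column names are assumptions, not confirmed by this card:

```python
from datasets import load_dataset

# Pull the fitness dataset linked above from the Hugging Face Hub.
ds = load_dataset("SaProtHub/DATASET-CAPE-RhlA-seqlabel")

# Inspect the actual splits and columns; names like "sequence" and
# "label" are assumptions, not a confirmed schema.
print(ds)
```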

+ ### 🔑 Model Input Type:
+ Amino acid sequence; labels are the fitness scores of RhlA mutants
+
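A minimal sketch of running inference with this adapter on top of the stated base model, assuming a `peft`-style LoRA checkpoint; `ADAPTER_REPO` is a placeholder for this repository's id, and the regression-head wiring is an assumption:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "westlake-repl/SaProt_35M_AF2"  # stated base model
ADAPTER_REPO = "..."                   # placeholder: this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=1)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
model.eval()

# Score one hypothetical RhlA variant sequence (single regression output).
inputs = tokenizer("MEVSLQK", return_tensors="pt")
with torch.no_grad():
    fitness = model(**inputs).logits.squeeze().item()
print(fitness)
```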
+ ### 📈 Performance (best on test set):
+ **Spearman's ρ:** 0.862
+
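Spearman's ρ is a rank correlation between predicted and experimentally measured fitness. A toy illustration of the metric with made-up numbers (not the model's outputs):

```python
from scipy.stats import spearmanr

# Made-up predicted vs. measured fitness scores, for illustration only.
predicted = [0.12, 0.47, 0.33, 0.81, 0.05]
measured = [0.10, 0.52, 0.30, 0.90, 0.02]

rho, _ = spearmanr(predicted, measured)
print(f"Spearman's rho = {rho:.3f}")  # 1.000 here: the rankings match exactly
```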
+ ---

+ ## LoRA Configuration ⚙️
  - **r:** 8
+ - **LoRA dropout:** 0.1
+ - **LoRA alpha:** 8
+ - **Modules to save:** `["regression"]`
+
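Expressed as a `peft.LoraConfig`, the list above would look roughly as follows; `target_modules` is carried over from the removed lines of the previous revision, so treat it as an assumption:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.1,
    # target_modules is taken from the earlier revision of this README,
    # not from the current list above.
    target_modules=["query", "key", "value", "intermediate.dense", "output.dense"],
    modules_to_save=["regression"],
)
```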
+ ## Training Configuration 🎛️
+
+ - **Optimizer:**
+   - **Class:** AdamW
+   - **Betas:** (0.9, 0.98)
+   - **Weight decay:** 0.01
+   - **Learning rate:** 5e-5
+ - **Epochs:** 5
+ - **Batch size:** Adaptive
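
A minimal sketch of the optimizer setup described above; the stand-in `model` and the elided loop body are placeholders, and the "Adaptive" batch size is not reproduced:

```python
import torch

model = torch.nn.Linear(8, 1)  # stand-in for the LoRA-wrapped SaProt model

# Mirrors the stated hyperparameters: AdamW, betas (0.9, 0.98),
# weight decay 0.01, learning rate 5e-5, 5 epochs.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.98),
    weight_decay=0.01,
)

for epoch in range(5):
    # The card lists batch size as "Adaptive", so no fixed dataloader
    # is shown; plug in your own batching and loss computation here.
    pass
```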