Update README.md
README.md

# **ZymCTRL**

ZymCTRL ([preprint coming soon](https://huggingface.co/nferruz/ZymCTRL)) is a conditional language model trained on the BRENDA database of enzymes. Given a user-defined Enzyme Commission (EC) number, the model generates protein sequences that fulfill that catalytic reaction. The generated sequences are ordered, globular, and distant from natural ones, while their intended catalytic properties match those defined by the user.

## **Model description**

ZymCTRL is based on the CTRL Transformer architecture and contains 36 layers with a model dimensionality of 1280, totalling 738 million parameters.

ZymCTRL is a decoder-only transformer model pre-trained on the BRENDA database (version July 2022). The pre-training was done on the raw sequences without FASTA headers, with the EC classes prepended to each sequence. The databases can be found here: xx.

ZymCTRL was trained with an autoregressive objective, i.e., the model learns to predict the next token given the preceding sequence context. Because the first tokens of each sequence encode the EC number, the model learns the dependencies between EC classes and their corresponding sequences and is able to _speak_ the enzyme language.

### **How to use ZymCTRL**

ZymCTRL can be used with the HuggingFace transformers python package. Detailed installation instructions can be found here: https://huggingface.co/docs/transformers/installation

Since ZymCTRL has been trained with the classical language-modelling objective on enzyme sequences annotated with their EC numbers, it particularly excels at generating enzyme sequences given a user-defined EC class, such as '1.1.1.2'. It can also be fine-tuned on a specific catalytic reaction by providing more sequences for a given EC class, such as sequences obtained with ancestral reconstruction methods.

**Example 1: Generating glucose oxidases (EC 1.1.3.4)**

In the example below, ZymCTRL generates sequences that catalyze the reaction encoded by the EC number 1.1.3.4. Any other EC class can be prompted instead. The model will generate the most probable sequences that follow the input.

```
import torch
from transformers import GPT2LMHeadModel, AutoTokenizer

enzyme_class = '1.1.3.4'  # the EC class is passed to the tokenizer as a string
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained('/path/to/tokenizer')
model = GPT2LMHeadModel.from_pretrained('/path/to/output').to(device)
input_ids = tokenizer.encode(enzyme_class, return_tensors='pt').to(device)
# change max_length or num_return_sequences to your requirements
output = model.generate(input_ids, top_k=8, repetition_penalty=1.2, max_length=1024,
                        eos_token_id=1, pad_token_id=0, do_sample=True, num_return_sequences=100)
```
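
The call above returns a batch of token ids rather than text. As a minimal sketch (assuming the `output` and `tokenizer` objects from the snippet above; the exact clean-up of special tokens and spacing depends on how the tokenizer was built), the generations can be decoded like this:

```
# Decode the generated token ids back into text.
# `output` and `tokenizer` come from the generation snippet above; depending on the
# tokenizer, the EC prefix and special tokens (<sep>, <start>, <end>, <pad>) may still
# need to be stripped from the decoded strings.
generated_sequences = [tokenizer.decode(ids, skip_special_tokens=True) for ids in output]
print(generated_sequences[0])
```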

**Example 2: Finetuning on a set of user-defined sequences**

This alternative to zero-shot generation makes it possible to further improve the model's confidence for EC numbers with few members. User-defined training and validation files containing the sequences of interest are provided to the model. After a short update of the model's weights, ZymCTRL will generate sequences that follow the input properties. This might not be necessary in cases where the model has already seen many sequences for a given EC class.

To create the validation and training files, it is necessary to (1) remove the FASTA headers for each sequence, (2) prepare the sequences in the format: EC number<sep><start>S E Q U E N C E<end><|endoftext|>, and (3) split the originating dataset into training and validation files (this is often done with a ratio of 90/10, 80/20, or 95/5). Then, to finetune the model on the input sequences, the command below can be used. Here we show a learning rate of 1e-06, but ideally the learning rate should be optimised in separate runs. After training, the finetuned model will be stored in the ./output folder. Lastly, ZymCTRL can generate the tailored sequences as shown in Example 1:

```
python run_clm.py --model_name_or_path nferruz/ZymCTRL --train_file training.txt --validation_file validation.txt --tokenizer_name nferruz/ZymCTRL \
--do_train --do_eval --output_dir output --learning_rate 1e-06
```

The HuggingFace script run_clm.py can be found here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
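
As an illustration of step (2) above, here is a minimal sketch of how a training file could be written. The `sequences` dictionary and the file name are made-up placeholders; the tag layout should match the format string given above:

```
# Hypothetical data preparation: write records in the
# 'EC number<sep><start>S E Q U E N C E<end><|endoftext|>' format, one per line.
# `sequences` maps an EC number to a list of amino-acid strings (made-up examples).
sequences = {'1.1.3.4': ['MKLAVVTAGLLA', 'MSTNLRVAGKLP']}

with open('training.txt', 'w') as fh:
    for ec_number, seqs in sequences.items():
        for seq in seqs:
            spaced = ' '.join(seq)  # amino acids separated by spaces
            fh.write(f"{ec_number}<sep><start>{spaced}<end><|endoftext|>\n")
```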

### **How to select the best sequences**

First of all, we recommend selecting only sequences in which a padding token has been emitted. Because generation runs with a max_length parameter, the Hugging Face generation function truncates sequences that exceed that length. Among the generated sequences, keep those that end with at least one <pad> token; otherwise, you might be looking at sequences that were cut off by the length limit.
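
A minimal sketch of this filter (assuming the `output` tensor from Example 1 and the pad_token_id=0 used in that generate() call):

```
# Keep only generations that finish with a padding token, i.e. sequences that
# were not cut off by max_length. `output` comes from the Example 1 snippet and
# pad_token_id=0 matches the value passed to generate() there.
pad_token_id = 0
complete = [ids for ids in output if ids[-1].item() == pad_token_id]
print(f"{len(complete)} of {len(output)} sequences end with a <pad> token")
```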

Besides, we've observed that perplexity values correlate with AlphaFold2's pLDDT. We recommend computing the perplexity of each sequence as follows:

```
import math
import torch
from transformers import GPT2LMHeadModel, AutoTokenizer

def calculatePerplexity(sequence, model, tokenizer):
    # Perplexity of a sequence under the model: the exponential of its language-modelling loss
    input_ids = torch.tensor(tokenizer.encode(sequence)).unsqueeze(0).to(device)
    with torch.no_grad():
        outputs = model(input_ids, labels=input_ids)
    return math.exp(outputs.loss.item())

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained('/path/to/tokenizer') # replace with the actual path
model = GPT2LMHeadModel.from_pretrained('/path/to/output').to(device)
input_ids = tokenizer.encode("1.1.1.1", return_tensors='pt').to(device)
output = model.generate(input_ids, max_length=400, do_sample=True, top_k=8, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)

# Take (for example) the first sequence and compute its perplexity
sequence = tokenizer.decode(output[0])
ppl = calculatePerplexity(sequence, model, tokenizer)
```

We do not yet have a threshold as to what perplexity value gives a 'good' or 'bad' sequence, but in general, the lower the perplexity, the better the sequence.
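
Building on the note above, a minimal sketch (assuming the `output`, `model`, `tokenizer`, and `calculatePerplexity` objects defined in the block above) for ranking all generations by perplexity, lowest first:

```
# Score every generated sequence and sort by perplexity (lower is better).
# `output`, `model`, `tokenizer` and `calculatePerplexity` come from the block above.
decoded = [tokenizer.decode(ids) for ids in output]
scored = sorted((calculatePerplexity(seq, model, tokenizer), seq) for seq in decoded)

for ppl, seq in scored[:5]:  # inspect the five lowest-perplexity generations
    print(round(ppl, 2), seq[:60])
```
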
### **Training specs**

The model was trained on 48 NVIDIA A100 GPUs for 8 epochs, using a block size of 1024 and a total batch size of 768. The optimizer used was Adam (beta1 = 0.9, beta2 = 0.999) with a learning rate of 0.8e-04.