---
license: apache-2.0
---
# **ZymCTRL**

ZymCTRL ([Paper presented @ Machine Learning for Structural Biology workshop](https://www.mlsb.io/papers_2022/ZymCTRL_a_conditional_language_model_for_the_controllable_generation_of_artificial_enzymes.pdf))
is a conditional language model for the generation of artificial functional enzymes. It was trained on the BRENDA database of enzymes.
Given a user-defined Enzyme Commission (EC) number, the model generates protein sequences that fulfill that catalytic reaction.
The generated sequences are ordered, globular, and distant from natural ones, while their intended catalytic properties match those defined by the user.

If you don't know the EC number of your protein of interest, have a look at the BRENDA webpage: https://www.brenda-enzymes.org/ecexplorer.php?browser=1

See below for information about the model, how to generate sequences, and how to save and rank them by perplexity.

## **Model description**
ZymCTRL is based on the [CTRL Transformer](https://arxiv.org/abs/1909.05858) architecture (which in turn is very similar to ChatGPT) and contains 36 layers
with a model dimensionality of 1280, totalling 738 million parameters.

ZymCTRL is a decoder-only transformer model pre-trained on the BRENDA database
(version July 2022). The pre-training was done on the raw sequences without FASTA headers,
with the EC classes prepended to each sequence. The databases will be uploaded soon.

ZymCTRL was trained with an autoregressive objective, i.e., the model learns to predict
the next token given the preceding sequence context. Because the first tokens of each sequence encode the EC number,
the model learns the dependencies between EC classes and their corresponding sequences, and is able to _speak_ the enzyme language.

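Stated more formally (a standard conditional language-modelling formulation, written here for clarity rather than quoted from the paper): with the EC tokens acting as a control code `EC`, training maximizes `p(x_1, ..., x_n | EC) = Π_t p(x_t | x_<t, EC)`, so generation simply samples one amino-acid token at a time conditioned on the EC prompt.
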
There are stark differences in the number of members among EC classes, and for this reason we also tokenized the EC numbers.
In this manner, EC numbers '2.7.1.1' and '2.7.1.2' share the first three tokens (six including separators), and hence the model can infer that
the two classes are related. A quick way to inspect this is shown in the sketch below.

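As an optional sanity check (a minimal sketch, assuming you have downloaded ZymCTRL to a local folder; the path is a placeholder, and the exact token split depends on the tokenizer shipped with the model), you can print how two related EC labels are tokenized and verify that their leading tokens overlap:

```
from transformers import AutoTokenizer

# Placeholder path: point this at your local ZymCTRL download.
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')

# If the EC levels are encoded as separate tokens, related classes should
# share their leading tokens.
for ec in ['2.7.1.1', '2.7.1.2']:
    ids = tokenizer.encode(ec)
    print(ec, tokenizer.convert_ids_to_tokens(ids))
```
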
The figure below summarizes the process of training:

![plot](./github1.png)

## **How to use ZymCTRL**
ZymCTRL can be used with the HuggingFace `transformers` python package.
Detailed installation instructions can be found here: https://huggingface.co/docs/transformers/installation

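For a typical setup (a minimal sketch; see the link above for hardware- and environment-specific instructions), installing the library together with PyTorch and the other packages used by the scripts below is enough:

```
pip install transformers torch datasets tqdm
```
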
Since ZymCTRL has been trained with the classical language modelling objective on enzyme sequences with their EC annotation,
it particularly excels at generating enzyme sequences given a user-defined EC class, such as alcohol dehydrogenases ('1.1.1.2').

The model can generate in two ways: in a zero-shot fashion, i.e., directly generating from the checkpoint weights, or after fine-tuning.
Fine-tuning allows you to augment the BRENDA dataset that was used during training, for example,
if you have a curated internal dataset or a set of ancestrally reconstructed sequences. This is entirely optional. One advantage of
running the model in zero-shot mode is that it doesn't require any further training.

### **Example 1: Generating nitrilases (EC 3.5.5.1)**

The script below can be used to generate any BRENDA class in a zero-shot fashion;
here we showcase the generation of novel nitrilases.

To run this script, you should download ZymCTRL to a local folder on your workstation.
Then replace the placeholders in the script with your actual folder paths.

You can run it directly from the command line (once you have the Hugging Face `transformers` package installed)
with the following command: `python generate.py`.

The script will write each sequence to a fasta file in the folder you specify. In the fasta header,
it will store the sequence's computed perplexity value. Perplexity is a measure of the model's confidence
in that generation, with lower values being better. The sequences are ordered by perplexity before writing them out,
so those that finish in *_0.fasta and *_1.fasta will be the best ones per batch.

**Given that generation runs so fast, we recommend generating hundreds or thousands of sequences and then picking only the best 5%.
With the script below, that would mean keeping only those that finish in '_0.fasta' (a snippet that automates this selection follows the script).**

```
import torch
from transformers import GPT2LMHeadModel, AutoTokenizer
from tqdm import tqdm
import math

def remove_characters(sequence, char_list):
    "This function removes special tokens used during training"
    columns = sequence.split('<sep>')
    seq = columns[1]
    for char in char_list:
        seq = seq.replace(char, '')
    return seq

def calculatePerplexity(input_ids, model, tokenizer):
    "This function computes perplexities for the generated sequences"
    with torch.no_grad():
        outputs = model(input_ids, labels=input_ids)
    loss, logits = outputs[:2]
    return math.exp(loss)

def main(label, model, special_tokens, device, tokenizer):
    # Generating sequences
    input_ids = tokenizer.encode(label, return_tensors='pt').to(device)
    outputs = model.generate(
        input_ids,
        top_k=9,
        repetition_penalty=1.2,
        max_length=1024,
        eos_token_id=1,
        pad_token_id=0,
        do_sample=True,
        num_return_sequences=20)  # Depending on your GPU, you'll be able to generate fewer or more sequences. This runs on an A40.

    # Check sequence sanity: ensure sequences are not truncated.
    # The model will truncate sequences longer than the specified max_length (1024 above). We want to avoid those sequences.
    new_outputs = [output for output in outputs if output[-1] == 0]
    if not new_outputs:
        print("not enough sequences with short lengths!!")

    # Compute perplexity for every generated sequence in the batch
    ppls = [(tokenizer.decode(output), calculatePerplexity(output, model, tokenizer)) for output in new_outputs]

    # Sort the batch by perplexity, the lower the better
    ppls.sort(key=lambda i: i[1])

    # Final dictionary with the results
    sequences = {}
    sequences[label] = [(remove_characters(x[0], special_tokens), x[1]) for x in ppls]

    return sequences

if __name__ == '__main__':
    device = torch.device("cuda")  # Replace with 'cpu' if you don't have a GPU - but it will be slow
    print('Reading pretrained model and tokenizer')
    tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')  # change to your ZymCTRL location
    model = GPT2LMHeadModel.from_pretrained('/path/to/ZymCTRL').to(device)  # change to your ZymCTRL location
    special_tokens = ['<start>', '<end>', '<|endoftext|>', '<pad>', ' ', '<sep>']

    # change to the appropriate BRENDA EC classes
    labels = ['3.5.5.1']  # nitrilases. You can put as many labels as you want.

    for label in tqdm(labels):
        # We'll run 100 batches per label. 20 sequences will be generated per batch.
        for i in range(0, 100):
            sequences = main(label, model, special_tokens, device, tokenizer)
            for key, value in sequences.items():
                for index, val in enumerate(value):
                    # Sequences will be saved with the name of the label followed by the batch index,
                    # and the order of the sequence in that batch.
                    fn = open(f"/path/to/folder/{label}_{i}_{index}.fasta", "w")
                    fn.write(f'>{label}_{i}_{index}\t{val[1]}\n{val[0]}')
                    fn.close()
```
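To act on the 5% recommendation above without manual triage, a small helper like the following can collect all generated files, rank them by perplexity, and keep the top fraction. This is a sketch rather than part of the original scripts; it assumes the fasta headers written by the script above, where the perplexity is stored after a tab.

```
import glob

# Hypothetical helper (not part of the original scripts): rank every generated
# fasta file by the perplexity stored in its header and keep the best 5%.
records = []
for path in glob.glob("/path/to/folder/*.fasta"):  # same folder used in the script above
    with open(path) as handle:
        header = handle.readline().strip()
        sequence = handle.read().strip()
    ppl = float(header.split('\t')[1])  # header format: >label_batch_index<TAB>perplexity
    records.append((ppl, path, sequence))

records.sort(key=lambda record: record[0])   # lower perplexity is better
best = records[:max(1, len(records) // 20)]  # keep the top 5%
for ppl, path, sequence in best:
    print(f"{path}\t{ppl:.3f}")
```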

### **Example 2: Fine-tuning on a set of user-defined sequences**

This alternative to zero-shot generation allows you to update ZymCTRL's weights with new sequences.

This strategy is not strictly necessary; in fact, we have observed good generations even for EC classes with
only 1-2 representatives in nature. But you might have an internal set of sequences that you'd like to incorporate into the model,
for example, internal datasets from protein engineering efforts,
ancestrally reconstructed sets, or hits from searches against metagenomic databases. In these cases, it is advisable to fine-tune ZymCTRL,
as it will learn new properties from your dataset and potentially improve the generation quality
(especially for poorly populated EC classes).

To fine-tune ZymCTRL, you will need to process your sequences quite a bit. The scripts below do exactly that without requiring many
modifications. The only requirement is an input file 'sequences.fasta' which contains all the sequences in fasta format.
We recommend using at least 200 sequences to obtain the best results, but we've seen it work with fewer sequences, so if you don't have
that many, still give it a go.

```
import random
from transformers import AutoTokenizer

# 1. Read the source file
with open('sequences.fasta', 'r') as fn:
    data = fn.readlines()

# Put sequences into a dictionary
sequences = {}
for line in data:
    if '>' in line:
        name = line.strip()
        sequences[name] = ['2.7.3.12']  # modify with the actual EC class.
        continue
    sequences[name].append(line.strip())

# Process fasta entries into single strings - run this part only if the fastas were formatted to 60 characters per line
processed_sequences = {}
for name, sequence in sequences.items():
    processed_sequences[f"{sequence[0]};{name}"] = ''.join([x for x in sequence[1:]])

# Shuffle sequences
sequences_list = [(key, value) for key, value in processed_sequences.items()]
random.shuffle(sequences_list)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')

# The objective is to get strings that, when tokenized, will span a window length of 1024.
# For each sequence, keep its total token length and its untokenized string.

print("processing dataset")
processed_dataset = []
for i in sequences_list:
    # length of the control code
    label = i[0].split(';')[0]
    sequence = i[1].strip()
    separator = '<sep>'
    control_code_length = len(tokenizer(label + separator)['input_ids'])
    available_space = 1021 - control_code_length  # Not 1024 because of '<|endoftext|>' and the start/end tokens

    # Option 1: the sequence is larger than the available space (3-4% of sequences in BRENDA are over 1024)
    if len(sequence) > available_space:
        total_length = control_code_length + len(sequence[:available_space]) + 1
        seq = f"{label}{separator}{sequence[:available_space]}<|endoftext|>"
        processed_dataset.append((total_length, seq))

    # Option 2 & 3: the sequence fits in the block_size space with or without padding
    else:
        total_length = control_code_length + len(sequence) + 3
        # here the sequence fits in the block, so the start/end tokens are added
        seq = f"{label}{separator}<start>{sequence}<end><|endoftext|>"
        processed_dataset.append((total_length, seq))

# Helper function to group sequences into windows of up to 1024 tokens
def grouper(iterable):
    prev = None
    group = ''
    total_sum = 0
    for item in iterable:
        if prev is None or item[0] + total_sum < 1025:
            group += item[1]
            total_sum += item[0]
        else:
            total_sum = item[0]
            yield group
            group = item[1]
        prev = item
    if group:
        yield group

# Group sequences
print("grouping processed dataset")
grouped_dataset = dict(enumerate(grouper(processed_dataset), 1))

# Save the processed file out
fn = open("./2.7.3.12_processed.txt", 'w')
for key, value in grouped_dataset.items():
    fn.write(value)
    fn.write("\n")
fn.close()
```
The previous script will prepare a text file with the correct format for tokenization.
Now we can use the tokenizer to convert its contents to tokens.

```
from datasets import load_dataset
import transformers
from transformers.testing_utils import CaptureLogger
from transformers import AutoTokenizer

# Load the tokenizer again
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')  # change to your ZymCTRL location

# Load the data files
data_files = {}
dataset_args = {}
validation_split_percentage = 10  # for a 90/10 train/validation split
data_files["train"] = './2.7.3.12_processed.txt'
extension = "text"
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir='.', **dataset_args)
tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")

# Load datasets using the HF datasets library:
raw_datasets["train"] = load_dataset(extension,
                                     data_files=data_files,
                                     split=f"train[{validation_split_percentage}%:]",
                                     cache_dir='.',
                                     **dataset_args,)

raw_datasets["validation"] = load_dataset(extension,
                                          data_files=data_files,
                                          split=f"train[:{validation_split_percentage}%]",
                                          cache_dir='.',
                                          **dataset_args,)

def tokenize_function(examples):
    "This function tokenizes the input"
    with CaptureLogger(tok_logger) as cl:
        output = tokenizer(examples["text"])
    # clm input could be much longer than block_size
    if "Token indices sequence length is longer than the" in cl.out:
        tok_logger.warning(
            "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model."
        )
    return output

# Tokenize in parallel
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=32,  # adjust to the number of available CPU cores
    remove_columns=['text'],
    load_from_cache_file=False,
    desc="Running tokenizer on dataset",
)

train_dataset = tokenized_datasets["train"]
eval_dataset = tokenized_datasets["validation"]

train_dataset.save_to_disk('./dataset/train')
eval_dataset.save_to_disk('./dataset/eval')

# The tokenized datasets are now saved. Next we group them into the block size of 1024.
from datasets import load_from_disk

train_dataset = load_from_disk('./dataset/train')
eval_dataset = load_from_disk('./dataset/eval')

from datasets.dataset_dict import DatasetDict
tokenized_datasets = DatasetDict()

tokenized_datasets["train"] = train_dataset
tokenized_datasets["validation"] = eval_dataset

block_size = 1024
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding instead if the model supported it.
    # You can customize this part to your needs.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result

lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=124,  # adjust to the number of available CPU cores
    load_from_cache_file=False,
    desc=f"Grouping texts in chunks of {block_size}",
)

train_dataset = lm_datasets["train"]
eval_dataset = lm_datasets["validation"]

train_dataset.save_to_disk('./dataset/train2')
eval_dataset.save_to_disk('./dataset/eval2')
```
The processed datasets will be inside the folder dataset/, called train2 and eval2.
You could also put the two previous scripts into a single one and run it in one go (that is what we do).

Now you are ready to fine-tune the model.
To do that, you can take the trainer file that we provide in this repository (5.run_clm-post.py), or use the trainer from Hugging Face.
The command below shows an example at a specific learning rate,
but you could try other hyperparameters to obtain the best training and evaluation losses.

```
python 5.run_clm-post.py --tokenizer_name /path/to/ZymCTRL \
    --do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10 \
    --logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04 \
    --dataloader_drop_last True --model_type gpt2 --config_name /path/to/ZymCTRL \
    --gradient_accumulation_steps 4
```
In any case, the original HuggingFace script run_clm.py can be found here:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py

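Once fine-tuning has finished, generation works exactly as in Example 1; the only change is pointing the model path at the fine-tuned checkpoint. A minimal sketch, assuming the final weights were saved under the `--output_dir` given above ('output') and using the same EC class as in the preprocessing example:

```
from transformers import GPT2LMHeadModel, AutoTokenizer

# The tokenizer is unchanged; the weights come from the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')
model = GPT2LMHeadModel.from_pretrained('./output')  # assumed location of the fine-tuned checkpoint

input_ids = tokenizer.encode('2.7.3.12', return_tensors='pt')  # the EC class you fine-tuned on
outputs = model.generate(input_ids, top_k=9, repetition_penalty=1.2, max_length=1024,
                         eos_token_id=1, pad_token_id=0, do_sample=True, num_return_sequences=5)
```
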
### **Training specs**
The model was trained on 48 NVIDIA A100 GPUs for 8 epochs,
using a block size of 1024 and a total batch size of 768.
The optimizer used was Adam (beta1 = 0.9, beta2 = 0.999)
with a learning rate of 0.8e-04.

### **Contact**

We are the AI for Protein Design group at the Institute of Molecular Biology of Barcelona (https://www.aiproteindesign.com/).
For any questions, post an issue in this repository so that other people can benefit from the feedback, and we'll get back to you shortly.
We are always open to collaborations; send an email to nfccri [at] ibmb [dot] csic [dot] es.