lhallee committed
Commit bfb370f · verified · 1 Parent(s): 6fa8f1c

Update README.md

Files changed (1):
  1. README.md (+0 −17)
README.md CHANGED
@@ -8,23 +8,6 @@ FastESM is a Hugging Face-compatible plug-in version of ESM2 rewritten with a new
 
 Load any ESM2 model into a FastEsm model to dramatically speed up training and inference without **ANY** cost in performance.
 
-## Use with 🤗 transformers
-```python
-from transformers import AutoModel, AutoModelForMaskedLM, AutoModelForSequenceClassification, AutoModelForTokenClassification  # any of these work
-
-model_dict = {
-    'ESM2-8': 'facebook/esm2_t6_8M_UR50D',
-    'ESM2-35': 'facebook/esm2_t12_35M_UR50D',
-    'ESM2-150': 'facebook/esm2_t30_150M_UR50D',
-    'ESM2-650': 'facebook/esm2_t33_650M_UR50D',
-    'ESM2-3B': 'facebook/esm2_t36_3B_UR50D',
-    'ESM2-15B': 'facebook/esm2_t48_15B_UR50D',
-}
-
-model = AutoModelForMaskedLM.from_pretrained(model_dict['ESM2-8'], trust_remote_code=True)
-tokenizer = model.tokenizer
-```
-
 Outputting attention maps (or the contact prediction head) is not natively possible with SDPA. You can still pass `output_attentions` to have attention calculated manually and returned.
 Various other optimizations also make the base implementation slightly different from the one in transformers.
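For anyone landing on this commit, the removed block still documents the loading pattern the surviving text alludes to. A minimal sketch distilled from that removed snippet (the checkpoint name is taken from the diff above; whether the stock `facebook/esm2_*` checkpoints actually dispatch to FastEsm's custom modeling code on your setup is an assumption worth verifying):

```python
# Minimal sketch distilled from the snippet removed in this commit.
# Assumption: trust_remote_code=True is what routes loading through the
# repo's custom FastEsm modeling code, as the removed README text implied.
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "facebook/esm2_t6_8M_UR50D",  # smallest ESM2 checkpoint, from the diff above
    trust_remote_code=True,
)
tokenizer = model.tokenizer  # the removed snippet exposed the tokenizer via the model
```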
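The surviving text notes that passing `output_attentions` forces attention to be computed manually instead of via SDPA. A hedged sketch of that call path, assuming FastEsm follows the standard transformers output convention (`outputs.attentions` as a tuple of per-layer tensors):

```python
import torch

# Toy amino-acid sequence; any valid protein string works here.
inputs = tokenizer("MKTAYIAKQR", return_tensors="pt")

with torch.no_grad():
    # Assumption: as in stock transformers models, output_attentions=True
    # makes the model fall back to manual attention and return the maps.
    outputs = model(**inputs, output_attentions=True)

print(len(outputs.attentions))      # expected: one tensor per layer
print(outputs.attentions[0].shape)  # expected: (batch, heads, seq_len, seq_len)
```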