Noelia Ferruz committed 006ad59 (1 parent: d4c5995)

Update README.md

Files changed (1): README.md (+1 −5)
README.md CHANGED
@@ -53,11 +53,7 @@ python run_clm.py --model_name_or_path nferruz/ProtGPT2 --train_file training.tx
 The HuggingFace script run_clm.py can be found here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
 
 ### **How to select the best sequences**
-We've observed that perplexity values correlate with AlphaFold2's plddt. This plot shows perplexity vs. pldtt values for each of the 10,000 sequences in the ProtGPT2-generated dataset:
-<div align="center">
-<img src="https://huggingface.co/nferruz/ProtGPT2/blob/main/ppl-plddt.png" width="45%" />
-</div>
-
+We've observed that perplexity values correlate with AlphaFold2's plddt. This plot shows perplexity vs. plddt values for each of the 10,000 sequences in the ProtGPT2-generated dataset (see https://huggingface.co/nferruz/ProtGPT2/blob/main/ppl-plddt.png).
 
 We recommend computing perplexity for each sequence with the HuggingFace evaluate method `perplexity`:
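The diff above recommends the HuggingFace `evaluate` library's `perplexity` metric for scoring sequences. Conceptually, perplexity is just the exponential of the mean negative log-likelihood of the tokens, which the following minimal pure-Python sketch illustrates (the `perplexity` helper name here is ours, not the library's):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over a token sequence."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that assigns each token uniform probability 1/4 has, by
# definition, perplexity 4 regardless of sequence length.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))  # ≈ 4.0
```

Under this scoring, lower perplexity means the model finds the sequence more likely, which is why the README uses it as a proxy for AlphaFold2 pLDDT when ranking generated sequences.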