mnaylor committed · Commit 9787fc0 · 1 Parent(s): 4fdaf0d

add model details
---
license: mit
---

# Moving Average Gated Attention (Mega): Pretrained LM

This repo contains pretrained weights for a language model built on the Mega architecture (see [paper](https://arxiv.org/abs/2209.10655)). I used the Mega source code (namely the `MegaEncoderLayer` class) and created wrappers for token embeddings and MLM prediction. The model was pretrained for 5 epochs (11.3k gradient steps) on wikitext-103, which took roughly 5 hours on a single T4 GPU (in Colab's free tier).
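The wrapping described above can be sketched roughly as follows. This is a minimal illustration, not the actual training code: it stands in a stock `nn.TransformerEncoderLayer` where the real model uses Mega's `MegaEncoderLayer`, and the class name `MLMWrapper` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class MLMWrapper(nn.Module):
    """Sketch of wrapping an encoder layer for masked language modeling:
    token embeddings in front, a vocabulary projection head behind.
    The real model uses Mega's `MegaEncoderLayer`; a stock Transformer
    layer stands in here since the Mega source is not vendored."""

    def __init__(self, vocab_size=1000, d_model=64, nhead=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Placeholder for MegaEncoderLayer (hypothetical drop-in).
        self.encoder = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # MLM prediction head, weight-tied to the input embeddings.
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.lm_head(hidden)  # logits over the vocabulary

model = MLMWrapper()
logits = model(torch.randint(0, 1000, (2, 16)))  # (batch, seq_len)
print(tuple(logits.shape))  # (2, 16, 1000)
```

Tying the output projection to the input embedding matrix is a common choice for small MLM setups; whether the actual checkpoint does this is not stated here.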
See [the Colab notebook](https://colab.research.google.com/drive/1qfUO6o5HRdxBblWlw058HVyvaEPhPpH8?usp=sharing) for further training details and example code for reuse.