---
license: mit
---

# Moving Average Gated Attention (Mega): Pretrained LM

This repo contains pretrained weights for a masked language model based on the Mega architecture (see the [paper](https://arxiv.org/abs/2209.10655)).
I used the original Mega source code (specifically the `MegaEncoderLayer` class) and wrote wrappers around it for token embeddings and MLM prediction;
a sketch of that wrapper structure appears below. The model was pretrained for 5 epochs (11.3k gradient steps) on WikiText-103, which took roughly
5 hours on a single T4 GPU (in Colab's free tier).
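
For reference, here is a minimal sketch of what such a wrapper can look like. The commented-out import path, the fairseq-style `(seq_len, batch, dim)` calling convention, and the `make_layer` helper are assumptions for illustration, not the exact code used here; see the notebook linked below for the actual implementation.

```python
import torch
import torch.nn as nn

# from mega.modules import MegaEncoderLayer  # hypothetical import path; adapt to the Mega repo

class MegaForMaskedLM(nn.Module):
    """Sketch: token embeddings + a stack of Mega encoder layers + an MLM head."""

    def __init__(self, vocab_size, hidden_dim, num_layers, make_layer):
        super().__init__()
        # Token-embedding wrapper in front of the Mega encoder stack.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # make_layer() should construct one MegaEncoderLayer; the constructor
        # arguments depend on the Mega repo's config/args object.
        self.layers = nn.ModuleList([make_layer() for _ in range(num_layers)])
        # MLM prediction head projecting hidden states back onto the vocabulary.
        self.mlm_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, input_ids, padding_mask=None):
        # fairseq-style layers typically expect (seq_len, batch, dim) inputs,
        # so transpose from the (batch, seq_len) token IDs after embedding.
        x = self.embed(input_ids).transpose(0, 1)
        for layer in self.layers:
            x = layer(x, padding_mask)  # assumed layer signature
        # Back to (batch, seq_len, vocab_size) logits for the MLM loss.
        return self.mlm_head(x.transpose(0, 1))
```

The MLM loss can then be computed with `nn.CrossEntropyLoss` over the masked positions, as in a standard BERT-style pretraining loop.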

See [the Colab notebook](https://colab.research.google.com/drive/1qfUO6o5HRdxBblWlw058HVyvaEPhPpH8?usp=sharing) 
for further training details and example code for reuse.