---
tags:
- generated_from_trainer
model-index:
- name: baseline-roberta_pre_layer_norm-model
  results: []
license: mit
datasets:
- cambridge-climb/BabyLM
language:
- en
---

# baseline-roberta_pre_layer_norm-model

## Model description

Base model architecture: RoBERTa Pre-Layer Norm, a RoBERTa variant that applies layer normalization before (rather than after) each attention and feed-forward sublayer. A minimal inference example appears under "How to use" below.

## Training and evaluation data

BabyLM dataset (BabyLM Challenge, hosted at CoNLL 2023).

## Training procedure

The model was pretrained with a masked language modeling (MLM) objective.

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch at the end of this card):
- learning_rate: 0.001
- train_batch_size: 32 (per device)
- eval_batch_size: 8 (per device)
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100000
- training_steps: 400000

### Framework versions

- Transformers 4.26.1
- PyTorch 1.12.0+cu113
- Datasets 2.10.0
- Tokenizers 0.13.2
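
## Reproducing the training configuration

The hyperparameters listed above map onto Transformers' `TrainingArguments` roughly as follows. This is a sketch reconstructed from the list, not the original training script; the model, tokenizer, data collator, and dataset wiring are omitted:

```python
from transformers import TrainingArguments

# Sketch reconstructed from the hyperparameter list above;
# not the original training script.
training_args = TrainingArguments(
    output_dir="baseline-roberta_pre_layer_norm-model",
    learning_rate=1e-3,
    per_device_train_batch_size=32,  # x4 GPUs -> total train batch size 128
    per_device_eval_batch_size=8,    # x4 GPUs -> total eval batch size 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100_000,
    max_steps=400_000,
)
```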
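
## How to use

Since the model was trained with a masked language modeling objective, `fill-mask` is the natural inference task. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub (the `model_id` below is a placeholder; substitute the actual repository path):

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual Hub path of this checkpoint.
model_id = "baseline-roberta_pre_layer_norm-model"

# The checkpoint was trained with masked language modeling,
# so fill-mask is the matching inference task.
fill_mask = pipeline("fill-mask", model=model_id)

# RoBERTa-style tokenizers use <mask> as the mask token.
print(fill_mask("The children played in the <mask>."))
```

The same checkpoint can also be loaded with `AutoModelForMaskedLM.from_pretrained(model_id)` for fine-tuning or feature extraction.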