Lugha-Llama/Lugha-Llama-8B-wura

Lugha-Llama is an Africa-centric language model developed through continual pretraining on the WURA dataset, a large corpus of African languages consisting of sixteen low-resource African languages and four high-resource languages commonly spoken on the African continent.

To train the model, we sample as uniformly as possible across languages while limiting how often data is repeated, upsampling rare languages by at most four epochs. We combine WURA data with high-quality English documents from FineWeb-Edu and OpenWebMath, which results in the improved Lugha-Llama-Edu and Lugha-Llama-Maths models, respectively. Our models consistently achieve the best performance among similarly-sized baselines on the AfriMMLU, AfriMGSM, and AfriXNLI tasks in IrokoBench.
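The following is a minimal sketch of this kind of capped, near-uniform language sampling. The token counts, function name, and budget are hypothetical illustrations, not the authors' actual data pipeline or WURA statistics.

```python
# Illustrative sketch: allocate a per-language token budget that aims for a
# uniform share across languages, but never repeats any language's data more
# than MAX_EPOCHS times. Numbers below are hypothetical, not WURA statistics.
MAX_EPOCHS = 4  # upsample rare languages by at most four passes over their data

def sampling_allocation(tokens_per_language, target_tokens_per_language):
    """Return a token budget per language, capped at MAX_EPOCHS repetitions."""
    allocation = {}
    for lang, available in tokens_per_language.items():
        cap = MAX_EPOCHS * available
        allocation[lang] = min(target_tokens_per_language, cap)
    return allocation

if __name__ == "__main__":
    # Hypothetical corpus sizes (tokens) for a few languages.
    corpus = {"swa": 2_000_000_000, "yor": 300_000_000, "hau": 900_000_000}
    budget = sampling_allocation(corpus, target_tokens_per_language=1_000_000_000)
    for lang, toks in budget.items():
        print(lang, toks, f"({toks / corpus[lang]:.1f} epochs)")
```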

In a separate ablation experiment, we translate English educational documents into Swahili to study whether the performance gains from FineWeb-Edu data are due to its content or to its English source language: FineWeb_Edu-swahili-translated.

We present these findings in our paper (coming soon).

Authors: Happy Buzaaba*, Alexander Wettig*, David Ifeoluwa Adelani, Christiane Fellbaum (* equal contribution)

Contact {happy.buzaaba@, awettig@cs}princeton.edu

Lugha-Llama models

Our main result

Figure: Performance of Lugha-Llama models and baselines on IrokoBench (Adelani et al., 2024). Languages in italics are not present in the continual pre-training data. †: We exclude the high-resource languages English (eng) and French (fra) from the average, as they would otherwise skew the results due to strong English base models.
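For reference, a minimal generation sketch using the Hugging Face transformers library. This assumes the standard causal-LM interface for the checkpoint; the prompt is an arbitrary Swahili example, and dtype/device settings should be adjusted to your hardware.

```python
# Minimal usage sketch (assumes the standard transformers causal-LM interface).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lugha-Llama/Lugha-Llama-8B-wura"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Base (non-instruct) model, so use a plain continuation prompt.
prompt = "Habari ya leo ni"  # Swahili: "Today's news is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```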
