Q8BERTA is the first language model specifically trained on Kuwaiti dialect text. The model was pre-trained on datasets collected from various sources, including social media platforms, websites, and books.
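
As a minimal usage sketch (assuming the model is published under the Kalmundi/Q8BERTA repository referenced on this page and exposes the standard BERT masked-language-modeling interface), it can be loaded with the Hugging Face transformers library:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Assumed repository id, taken from the model tree on this page.
model_name = "Kalmundi/Q8BERTA"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill-mask example on a placeholder Kuwaiti-dialect sentence
# ("The weather today is very [MASK]").
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask(f"الجو اليوم {tokenizer.mask_token} وايد"):
    print(prediction["token_str"], prediction["score"])
```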

BibTeX: If you use the Q8BERT model in your scientific publications, or if you find the resources in this repository useful, please cite our paper with the following details (citation information to be updated):

Model size: 148M parameters (Safetensors, F32)

Model tree for Kalmundi/Q8BERTA: 1 fine-tuned model