AraElectra for Question Answering on Arabic-SQuADv2

This is the AraElectra model, fine-tuned on the Arabic-SQuADv2.0 dataset for the task of Question Answering. It was trained on question-answer pairs, including unanswerable questions, and is paired with an AraElectra classifier that predicts whether a question is answerable.

Overview

Language model: AraElectra
Language: Arabic
Downstream-task: Extractive QA
Training data: Arabic-SQuADv2.0
Eval data: Arabic-SQuADv2.0
Test data: Arabic-SQuADv2.0
Code: See More Info on Github
Infrastructure: 1x Tesla K80

Hyperparameters

batch_size = 8
n_epochs = 4
base_LM_model = "AraElectra"
learning_rate = 3e-5
optimizer = AdamW
padding = dynamic
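For reference, the hyperparameters above can be collected into a single configuration object. This is a minimal sketch; the dictionary keys are illustrative rather than the exact names used by the training script:

```python
# Illustrative fine-tuning configuration mirroring the hyperparameters above.
# The key names are hypothetical; adapt them to your training framework
# (e.g. transformers.TrainingArguments).
finetune_config = {
    "base_LM_model": "AraElectra",
    "batch_size": 8,
    "n_epochs": 4,
    "learning_rate": 3e-5,
    "optimizer": "AdamW",
    "padding": "dynamic",  # pad each batch to its longest sequence
}
```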

Online Demo on Arabic Wikipedia and User Provided Contexts

See the model in action in the demo hosted on Streamlit.

Usage

For best results, use the AraBERT preprocessor by aub-mind:

from transformers import ElectraForQuestionAnswering, ElectraForSequenceClassification, AutoTokenizer, pipeline
from arabert.preprocess import ArabertPreprocessor
prep_object = ArabertPreprocessor("araelectra-base-discriminator")
question = prep_object('ู…ุง ู‡ูŠ ุฌุงู…ุนุฉ ุงู„ุฏูˆู„ ุงู„ุนุฑุจูŠุฉ ุŸ')
context = prep_object('''
ุฌุงู…ุนุฉ ุงู„ุฏูˆู„ ุงู„ุนุฑุจูŠุฉ ู‡ูŠ ู…ู†ุธู…ุฉ ุฅู‚ู„ูŠู…ูŠุฉ ุชุถู… ุฏูˆู„ุงู‹ ุนุฑุจูŠุฉ ููŠ ุขุณูŠุง ูˆุฃูุฑูŠู‚ูŠุง.
ูŠู†ุต ู…ูŠุซุงู‚ู‡ุง ุนู„ู‰ ุงู„ุชู†ุณูŠู‚ ุจูŠู† ุงู„ุฏูˆู„ ุงู„ุฃุนุถุงุก ููŠ ุงู„ุดุคูˆู† ุงู„ุงู‚ุชุตุงุฏูŠุฉุŒ ูˆู…ู† ุถู…ู†ู‡ุง ุงู„ุนู„ุงู‚ุงุช ุงู„ุชุฌุงุฑูŠุฉ ุงู„ุงุชุตุงู„ุงุชุŒ ุงู„ุนู„ุงู‚ุงุช ุงู„ุซู‚ุงููŠุฉุŒ ุงู„ุฌู†ุณูŠุงุช ูˆูˆุซุงุฆู‚ ูˆุฃุฐูˆู†ุงุช ุงู„ุณูุฑ ูˆุงู„ุนู„ุงู‚ุงุช ุงู„ุงุฌุชู…ุงุนูŠุฉ ูˆุงู„ุตุญุฉ. ุงู„ู…ู‚ุฑ ุงู„ุฏุงุฆู… ู„ุฌุงู…ุนุฉ ุงู„ุฏูˆู„ ุงู„ุนุฑุจูŠุฉ ูŠู‚ุน ููŠ ุงู„ู‚ุงู‡ุฑุฉุŒ ุนุงุตู…ุฉ ู…ุตุฑ (ุชูˆู†ุณ ู…ู† 1979 ุฅู„ู‰ 1990). 
''')
# a) Get predictions
qa_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA'
cls_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS'
qa_pipe = pipeline('question-answering', model=qa_modelname, tokenizer=qa_modelname)
cls_pipe = pipeline('text-classification', model=cls_modelname, tokenizer=cls_modelname)
QA_input = {
    'question': question,
    'context': context
}
CLS_input = {
    'text': question,
    'text_pair': context
}
qa_res = qa_pipe(QA_input)
cls_res = cls_pipe(CLS_input)
threshold = 0.5  # hyperparameter, can be tweaked
## note: in the classification result, label0 is the probability that the question
## can be answered and label1 the probability that it cannot; if the label1
## probability > threshold, treat the answer as an empty string, otherwise take qa_res
# b) Load model & tokenizer
qa_model = ElectraForQuestionAnswering.from_pretrained(qa_modelname)
cls_model = ElectraForSequenceClassification.from_pretrained(cls_modelname)
tokenizer = AutoTokenizer.from_pretrained(qa_modelname)
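The answer/no-answer decision described in the comments above can be isolated into a small helper. This is a sketch that assumes the classifier's labels follow the default `LABEL_0` (answerable) / `LABEL_1` (unanswerable) naming; verify against the model's `config.id2label`:

```python
def select_answer(qa_res, cls_res, threshold=0.5):
    """Return the extracted span, or '' when the classifier deems the
    question unanswerable.

    qa_res  -- output of the question-answering pipeline,
               e.g. {'answer': '...', 'score': 0.9, ...}
    cls_res -- output of the text-classification pipeline,
               e.g. {'label': 'LABEL_1', 'score': 0.8}
               (LABEL_1 = unanswerable; the label names are an assumption,
               check the model's config.id2label)
    """
    # Recover the probability of "unanswerable" regardless of which label won.
    if cls_res['label'] == 'LABEL_1':
        p_unanswerable = cls_res['score']
    else:
        p_unanswerable = 1.0 - cls_res['score']
    if p_unanswerable > threshold:
        return ''
    return qa_res['answer']
```

With the pipelines above, `select_answer(qa_res, cls_res, threshold)` yields the final answer string.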

Performance

Evaluated on the Arabic-SQuADv2.0 test set with the official SQuAD v2 evaluation script, with minor preprocessing changes to fit the Arabic language (see the modified eval script).

"exact": 65.11555277951281,
"f1": 71.49042547237256,,

"total": 9606,
"HasAns_exact": 56.14535768645358,
"HasAns_f1": 67.79623803036668,
"HasAns_total": 5256,
"NoAns_exact": 75.95402298850574,
"NoAns_f1": 75.95402298850574,
"NoAns_total": 4350
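As a sanity check, the overall exact-match score is the size-weighted average of the HasAns and NoAns subset scores reported above:

```python
# Overall "exact" = size-weighted average of the HasAns / NoAns subsets.
has_exact, has_total = 56.14535768645358, 5256
no_exact, no_total = 75.95402298850574, 4350

overall_exact = (has_exact * has_total + no_exact * no_total) / (has_total + no_total)
# overall_exact matches the reported "exact": 65.11555277951281
```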