Model Card for RoBERTa-large-KazQAD-Informatics-fp16-lora

RoBERTa-large-KazQAD-Informatics-fp16-lora is an optimized variant of RoBERTa-large, fine-tuned and adapted with LoRA for question-answering tasks in the Kazakh language on the KazQAD dataset.

Model Details

Model Description

The model is designed to perform efficiently on question-answering tasks in Kazakh, showing substantial improvements in Exact Match and F1 after fine-tuning and LoRA adaptation (see Evaluation Results).

  • Developed by: Tleubayeva Arailym, Saparbek Makhambet, Bassanova Nurgul, Shomanov Aday, Sabitkhanov Askhat
  • Model type: Transformer-based (RoBERTa)
  • Language(s) (NLP): Kazakh (kk)
  • License: apache-2.0
  • Finetuned from model: nur-dev/roberta-large-kazqad

Uses

Direct Use

The model can directly answer questions posed in Kazakh and is suitable for deployment in NLP applications and platforms focused on Kazakh language understanding. A minimal usage sketch is shown below.
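
The snippet below is a minimal usage sketch, assuming the adapter is loaded on top of the base checkpoint nur-dev/roberta-large-kazqad with PEFT; the question and context are illustrative examples, not taken from KazQAD.

```python
# Minimal usage sketch: load the base checkpoint, attach this LoRA adapter with PEFT,
# and run extractive question answering. Question/context below are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from peft import PeftModel

base_id = "nur-dev/roberta-large-kazqad"
adapter_id = "Arailym-aitu/RoBERTa-large-KazQAD-Informatics-fp16-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForQuestionAnswering.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

question = "Қазақстанның астанасы қай қала?"
context = "Астана — Қазақстан Республикасының астанасы."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```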

Downstream Use

Ideal for integration into larger applications, chatbots, and information retrieval systems for enhanced user interaction in Kazakh.

Out-of-Scope Use

Not recommended for:

  • Tasks involving languages other than Kazakh without further adaptation.

  • Critical decision-making systems without additional verification processes.

Bias, Risks, and Limitations

  • Potential biases may arise from the underlying training data sources.

  • Model accuracy may degrade when handling ambiguous or complex queries outside the training domain.

Recommendations

Users should consider additional fine-tuning or bias mitigation strategies when deploying the model in sensitive contexts.

Evaluation Results

The evaluation of the model demonstrated significant improvements after fine-tuning and applying LoRA. The base model, before any modifications, showed an Exact Match (EM) score of 17.92% and an F1-score of 31.57%. These low scores indicate that the model had difficulty correctly identifying precise answers in its initial state.

After fine-tuning on the KazQAD dataset, the model's performance improved dramatically, with the EM score rising to 56.69% and the F1-score to 69.70%. The fine-tuned scores are roughly 3.2 times the base EM and 2.2 times the base F1-score (316.2% and 220.8% of the original values, respectively), confirming that fine-tuning substantially enhances the model's ability to process and understand Kazakh-language questions accurately.

With the application of the LoRA adapter in a mixed-precision (FP16) setup, the model maintained a strong improvement over the base version while being computationally more efficient. The LoRA-adapted model achieved an EM score of 37.79% and an F1-score of 56.07%, roughly 2.1 times the base EM and 1.8 times the base F1-score (210.9% and 177.6% of the original values, respectively). This adaptation balances performance and resource efficiency, making it a viable option when computational constraints are a concern.
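
For reference, the sketch below shows one common way to compute EM and F1 for extractive QA, using the SQuAD metric from the Hugging Face evaluate library; it is not the authors' evaluation script, and the prediction/reference pair is a made-up example.

```python
# Illustrative EM/F1 computation with the `evaluate` library's SQuAD metric.
# This mirrors standard extractive-QA scoring; it is not the authors' exact script.
import evaluate

squad_metric = evaluate.load("squad")

predictions = [{"id": "0", "prediction_text": "Астана"}]
references = [{"id": "0", "answers": {"text": ["Астана"], "answer_start": [0]}}]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # e.g. {'exact_match': 100.0, 'f1': 100.0}
```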

Technical Specifications

Model Architecture and Objective

RoBERTa-large architecture adapted for extractive question answering via fine-tuning and LoRA.
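
A minimal configuration sketch of how such a LoRA adapter can be attached to the base model with PEFT is shown below; the rank, alpha, dropout, and target modules are illustrative assumptions, not the authors' reported hyperparameters.

```python
# Sketch of a LoRA setup for extractive QA with PEFT. The hyperparameters below
# (r, lora_alpha, lora_dropout, target_modules) are assumptions for illustration only.
from transformers import AutoModelForQuestionAnswering
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForQuestionAnswering.from_pretrained("nur-dev/roberta-large-kazqad")

lora_config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,
    r=8,                                # low-rank update dimension (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    lora_dropout=0.1,                   # dropout on LoRA layers (assumed)
    target_modules=["query", "value"],  # RoBERTa self-attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```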

Compute Infrastructure

Hardware

GPU-based training infrastructure
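
The configuration below is a hedged sketch of mixed-precision (FP16) fine-tuning with the Hugging Face Trainer on a GPU; batch size, learning rate, and epoch count are assumptions, not reported values.

```python
# Sketch of a mixed-precision (FP16) training configuration with the Hugging Face
# Trainer. Batch size, learning rate, and epochs are assumed values for illustration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-kazqad-lora",
    per_device_train_batch_size=8,   # assumed
    learning_rate=2e-4,              # assumed; LoRA fine-tuning often uses higher LRs
    num_train_epochs=3,              # assumed
    fp16=True,                       # enable mixed-precision training on GPU
)
```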

Software

PEFT 0.14.0

Citation

Detailed citation information will be added later.

Model Card Authors

Tleubayeva Arailym

Saparbek Makhambet

Bassanova Nurgul

Sabitkhanov Askhat

Shomanov Aday

