Fine-tuning code
Hi,
Could you please share the details of how you fine-tuned ModernBERT for a question-answering task?
When I tried, I got the error: "The model 'ModernBertForMaskedLM' is not supported for question-answering."
Thanks in advance.
Hi,
Yes, the ModernBertForQuestionAnswering class is still waiting to be merged into the transformers repo.
For now, you can use my fork instead of the original transformers library.
> pip uninstall transformers -y
> git clone https://github.com/bakrianoo/transformers.git
> cd transformers && git checkout feat-ModernBert-QnA-Support && pip install -e .
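Once the fork is installed, a quick sanity check is to import and load the QA head directly. This is a minimal sketch that assumes the fork exports ModernBertForQuestionAnswering at the top level of transformers and uses the public answerdotai/ModernBERT-base checkpoint (the QA head weights will be freshly initialized until you fine-tune):

> python -c "from transformers import ModernBertForQuestionAnswering; model = ModernBertForQuestionAnswering.from_pretrained('answerdotai/ModernBERT-base'); print(type(model).__name__)"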
Then you can use the fine-tuning script as mentioned here.
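For reference, an example invocation of run_qa.py looks like the sketch below; the dataset, batch size, learning rate, and other values here are placeholders, not the exact hyperparameters from my run:

> python run_qa.py \
    --model_name_or_path answerdotai/ModernBERT-base \
    --dataset_name squad \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 16 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./modernbert-qa-output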
Could you please tell me how to use the run_qa.py file? I installed your fork as you described, then downloaded run_qa.py from https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py (not sure whether it needs to come from your fork instead) and ran it with the hyperparameters you used. However, when I do that I get this error:
Traceback (most recent call last):
File "/home/.../.../modernbert/run_qa.py", line 31, in <module>
from trainer_qa import QuestionAnsweringTrainer
ModuleNotFoundError: No module named 'trainer_qa'
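Note that run_qa.py imports QuestionAnsweringTrainer from trainer_qa.py and the post-processing helpers from utils_qa.py, both of which sit in the same examples/pytorch/question-answering directory on GitHub, so downloading run_qa.py alone leaves those imports unresolved. Placing the two companion files next to run_qa.py should resolve this error, for example:

> wget https://raw.githubusercontent.com/huggingface/transformers/main/examples/pytorch/question-answering/trainer_qa.py
> wget https://raw.githubusercontent.com/huggingface/transformers/main/examples/pytorch/question-answering/utils_qa.py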