# mBERT-hi-te-MLM-SQuAD-TyDi-MLQA Model Card
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="hapandya/mBERT-hi-te-MLM-SQuAD-TyDi-MLQA")
```
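The pipeline takes a question and a context paragraph and returns the extracted answer span with a confidence score. A minimal sketch of the call shape, with an illustrative Hindi example; the question, context, and result values below are hand-written assumptions, not real model output:

```python
# Illustrative call shape; running the commented call requires downloading the
# model, so the result below is a hand-written stand-in, not actual output.
# result = pipe(
#     question="ताजमहल कहाँ है?",
#     context="ताजमहल भारत के आगरा शहर में स्थित है।",
# )
result = {"answer": "आगरा", "score": 0.9, "start": 21, "end": 25}  # stand-in

# Every question-answering pipeline result has these four keys.
print(sorted(result.keys()))  # ['answer', 'end', 'score', 'start']
```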
Or load the tokenizer and model directly:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("hapandya/mBERT-hi-te-MLM-SQuAD-TyDi-MLQA")
model = AutoModelForQuestionAnswering.from_pretrained("hapandya/mBERT-hi-te-MLM-SQuAD-TyDi-MLQA")
```
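When the model is loaded directly, answering a question means running the tokenized question-context pair through the model and picking the highest-scoring start/end token pair from its output logits. A minimal sketch of that decoding step, using hand-written lists in place of the model's `start_logits`/`end_logits` so it runs without downloading anything:

```python
# Stand-in logits: in practice these come from
# outputs = model(**tokenizer(question, context, return_tensors="pt"))
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0, 0.2, 0.1, 0.0, 0.1]
end_logits   = [0.0, 0.1, 0.2, 0.4, 6.0, 0.1, 0.0, 0.2, 0.1, 0.0]

# Pick the most likely start, then the most likely end at or after it.
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end = max(range(start, len(end_logits)), key=end_logits.__getitem__)

print((start, end))  # (2, 4) -> decode tokens[start : end + 1] as the answer
```

Production decoding (as in the pipeline) also scores start/end pairs jointly and caps the answer length, but the argmax step above is the core of extractive QA.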
Downloads last month: 123
Inference Providers: this model is not currently available via any supported third-party Inference Provider, nor is it deployed on the HF Inference API.