# distilbert-base-uncased-distilled-squad

This is the distilbert-base-uncased-distilled-squad model converted to OpenVINO format for accelerated inference.
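Loading the model through `optimum.intel.openvino` requires Optimum with its OpenVINO extra installed (the extra name below matches the `optimum.intel.openvino` import used in the example):

```shell
pip install "optimum[openvino]"
```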

An example of how to run inference with this model:

```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline

# model_id can be a local directory or a model on the Hugging Face Hub
model_id = "helenai/distilbert-base-uncased-distilled-squad-ov-fp32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe(
    question="What is OpenVINO?",
    context="OpenVINO is a framework that accelerates deep learning inferencing",
)
print(result)
```