import gradio as gr
from transformers import pipeline
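
# Load the question-answering pipeline backed by Intel's sparse BERT-base
# model (80% weight sparsity, 1x4 block pattern, pruned with the Prune Once
# for All method and fine-tuned on SQuADv1.1).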
qa_pipeline = pipeline(
    task="question-answering",
    model="Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",
)

def predict(context, question):
    '''
    Run the QA pipeline on a context/question pair. The pipeline returns a
    dictionary of the form:
        {'score': 0.9376363158226013, 'start': 10, 'end': 15, 'answer': 'seven'}
    where:
        score  - the model's confidence that the answer is correct
        start  - the character offset in the context where the answer begins
        end    - the character offset in the context where the answer ends
        answer - the span of the context identified as the answer
    '''
    predictions = qa_pipeline(context=context, question=question)
    print(f'predictions={predictions}')
    score = predictions['score']
    start = predictions['start']
    answer = predictions['answer']
    # Return in the same order as the Interface outputs: [score, start, answer].
    return score, start, answer

md = """
Author of Hugging Face Space: Benjamin Consolvo, AI Solutions Engineer Manager at Intel \n
Date last updated: 01/05/2023

**Introduction**: If you came looking for ChatGPT, sorry to disappoint, but this is different. This model performs reading comprehension: it answers a question about a text that you provide. It does not answer open-ended questions; it only draws its answer from the given text. Accurate reading comprehension is still a very valuable task, especially if you want quick answers from a large (and maybe boring!) document.

The model is based on the Zafrir et al. (2021) paper: [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754). It was pre-trained on the English Wikipedia dataset (2,500M words) and then fine-tuned on the SQuADv1.1 dataset of 89K training examples from Rajpurkar et al. (2016): [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250).
"""
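
# A minimal smoke test of predict() outside the Gradio UI. The context and
# question strings below are hypothetical examples; uncomment to run locally.
# print(predict(
#     context="The Eiffel Tower was completed in 1889 and is located in Paris, France.",
#     question="When was the Eiffel Tower completed?",
# ))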

# Input and output components for the Gradio interface.
context = gr.Text(lines=10, label="Context")
question = gr.Text(label="Question")
score = gr.Text(label="Score")
start = gr.Text(label="Answer found at character")
answer = gr.Text(label="Answer")

iface = gr.Interface(
    fn=predict,
    inputs=[context, question],
    outputs=[score, start, answer],
    examples=[],
    title="Question & Answer with Sparse BERT using the SQuAD dataset",
    description=md,
)
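
# launch() serves the app; on Hugging Face Spaces it is invoked automatically
# when the Space starts. When running locally, Gradio can also create a
# temporary public link with iface.launch(share=True).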
iface.launch()