This is a masked language model that was trained on the IMDB dataset by fine-tuning a DistilBERT model.

# REST API Code for Testing the Masked Language Model

Inference API Python code for testing the masked language model.

``` python
import requests

API_URL = "https://api-inference.huggingface.co/models/ayoolaolafenwa/Masked-Language-Model"
# Hugging Face access token used to authenticate with the Inference API
headers = {"Authorization": "Bearer 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"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Washington DC is the [MASK] of USA.",
})
print(output[0]["sequence"])
```

Output
```
washington dc is the capital of usa.
```
It produces the correct output, *washington dc is the capital of usa.*
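
The Inference API returns more than the single best sentence: for a fill-mask model the JSON response is a ranked list of candidate tokens with their scores (the code above already indexes into this list with `output[0]`). The snippet below is a minimal sketch of how you could inspect every candidate; it assumes the standard fill-mask response fields `score` and `token_str`, and uses a placeholder for your own access token.

``` python
import requests

API_URL = "https://api-inference.huggingface.co/models/ayoolaolafenwa/Masked-Language-Model"
headers = {"Authorization": "Bearer <YOUR_HF_TOKEN>"}  # placeholder: use your own access token

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Washington DC is the [MASK] of USA."},
)

# The fill-mask endpoint returns a list of candidate completions, highest score first.
for prediction in response.json():
    print(f'{prediction["token_str"]:>10}  score={prediction["score"]:.4f}')
```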

## Load the Masked Language Model with Transformers

You can easily load the language model with transformers using this code.

``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("ayoolaolafenwa/Masked-Language-Model")

model = AutoModelForMaskedLM.from_pretrained("ayoolaolafenwa/Masked-Language-Model")

inputs = tokenizer("The internet [MASK] amazing.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# retrieve the index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]

# pick the highest-scoring token for the masked position and decode it
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
output = tokenizer.decode(predicted_token_id)
print(output)
```

Output
```
is
```

It prints out the predicted masked word *is*.
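
If you prefer a single call over the manual tokenize, forward pass, and decode steps above, the same checkpoint can also be used through the transformers `fill-mask` pipeline. This is a minimal sketch rather than part of the original workflow; `top_k` is used here only to illustrate looking at more than one candidate token.

``` python
from transformers import pipeline

# The fill-mask pipeline wraps the tokenizer and model shown above in a single call.
unmasker = pipeline("fill-mask", model="ayoolaolafenwa/Masked-Language-Model")

# top_k controls how many candidate tokens are returned for the [MASK] position.
for prediction in unmasker("The internet [MASK] amazing.", top_k=5):
    print(f'{prediction["token_str"]:>10}  score={prediction["score"]:.4f}')
```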