Fine-tuning datasets
- MAGPIE corpus: https://aclanthology.org/2020.lrec-1.35/
- EPIE corpus: https://link.springer.com/content/pdf/10.1007/978-3-030-58323-1.pdf
Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1595156286
- CO2 Emissions (in grams): 0.0422
Validation Metrics
- Loss: 0.012
- Accuracy: 0.996
- Precision: 0.000
- Recall: 0.000
- F1: 0.000
Usage
You can use cURL to access this model:
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/imranraad/autotrain-magpie-epie-combine-xlmr-metaphor-1595156286
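The same Inference API call can also be made from Python. Below is a minimal sketch using the requests library, assuming the endpoint above and a valid Hugging Face access token in place of YOUR_API_KEY:
import requests

API_URL = "https://api-inference.huggingface.co/models/imranraad/autotrain-magpie-epie-combine-xlmr-metaphor-1595156286"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # your Hugging Face access token

# POST the input text and print the parsed JSON response
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())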
Or the Transformers Python API:
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForTokenClassification.from_pretrained("imranraad/autotrain-magpie-epie-combine-xlmr-metaphor-1595156286", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("imranraad/autotrain-magpie-epie-combine-xlmr-metaphor-1595156286", use_auth_token=True)

# Tokenize an example sentence and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
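The forward pass returns per-token logits. Continuing the snippet above, a minimal sketch for turning them into labels is shown below; it assumes the label names stored in the model's config (whatever AutoTrain assigned during training):
# Pick the highest-scoring class for each token and map ids to label names
# via model.config.id2label (label names are an assumption about the AutoTrain setup)
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])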
How to extract the idioms:
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load the idiom tagger and wrap it in a token-classification pipeline
model = AutoModelForTokenClassification.from_pretrained("imranraad/idiom-xlm-roberta")
tokenizer = AutoTokenizer.from_pretrained("imranraad/idiom-xlm-roberta")
pipeline_idioms = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

text = "Why are you so bent out of shape? - Why are you so upset?"
idioms = pipeline_idioms(text)

# Entity group '1' marks spans tagged as idiomatic
for idiom in idioms:
    if idiom['entity_group'] == '1':
        print(idiom['word'])
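For the example text above, the spans tagged with group '1' should correspond to the idiomatic expression (here, "bent out of shape"), though the exact output depends on the model's predictions and the aggregation strategy.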