# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 7324788
- CO2 Emissions (in grams): 10.4354
## Validation Metrics

- Loss: 0.0899
- Accuracy: 0.9708
- Precision: 0.8998
- Recall: 0.9309
- F1: 0.9151
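As a quick sanity check, the reported F1 is the harmonic mean of the precision and recall above:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.8998421675654347
recall = 0.9309429854401959

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.9151, matching the reported F1
```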
## Usage

You can use cURL to access this model:

```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/lewtun/autotrain-acronym-identification-7324788
```
Or the Python API:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained(
    "lewtun/autotrain-acronym-identification-7324788", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "lewtun/autotrain-acronym-identification-7324788", use_auth_token=True
)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
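The model returns one row of logits per input token; to recover entity labels you take the argmax for each token and map the resulting id to its label via `model.config.id2label`. A minimal sketch of that decoding step, using hypothetical logits and an assumed BIO-style label set (the real mapping comes from the model config):

```python
# Hypothetical id2label mapping for an acronym-identification model;
# the real mapping is available as model.config.id2label.
id2label = {0: "O", 1: "B-short", 2: "B-long", 3: "I-long"}

# Fake per-token logits: one row per token, one column per label.
tokens = ["I", "love", "AutoTrain"]
logits = [
    [2.1, -0.3, -1.0, -0.8],  # "I"
    [1.8, -0.1, -0.9, -1.2],  # "love"
    [-0.5, 2.4, -0.7, -1.1],  # "AutoTrain"
]

def decode(logits, id2label):
    """Argmax each token's logits and map the winning index to its label."""
    return [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]

print(list(zip(tokens, decode(logits, id2label))))
```

With real model outputs you would apply the same argmax over `outputs.logits[0]` and also skip special tokens (e.g. `[CLS]`, `[SEP]`) using the tokenizer's offsets.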
## Evaluation Results

Self-reported on the `acronym_identification` dataset:

- Accuracy: 0.971
- Accuracy: 0.979
- Precision: 0.920
- Recall: 0.946
- F1: 0.933
- Loss: 0.064

Self-reported on the `acronym_identification` validation set:

- Accuracy: 0.976
- Precision: 0.934
- Recall: 0.916
- F1: 0.925