---
language: ar
widget:
  - text: قفا نبك من ذِكرى حبيب ومنزلِ  بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ
  - text: سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا
datasets:
  - Yah216/autotrain-data-Poem_meter_3
co2_eq_emissions: 404.66986451902227
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- CO2 Emissions (in grams): 404.66986451902227

# Validation Metrics

- Loss: 0.21315555274486542
- Accuracy: 0.9493554089595999
- Macro F1: 0.7537353091512587
- Micro F1: 0.9493554089595999
- Weighted F1: 0.9480607076301577
- Macro Precision: 0.7925160467633223
- Micro Precision: 0.9493554089595999
- Weighted Precision: 0.9477713919153736
- Macro Recall: 0.7352339804511467
- Micro Recall: 0.9493554089595999
- Weighted Recall: 0.9493554089595999

# Usage

You can use cURL to access this model:

```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "قفا نبك من ذِكرى حبيب ومنزلِ  بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"}' \
  https://api-inference.huggingface.co/models/Yah216/Arabic_poem_meter_3
```
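
If you prefer to stay in Python, the same hosted endpoint can be queried with the `requests` library. This is a minimal sketch equivalent to the cURL call above; substitute your own API token:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Yah216/Arabic_poem_meter_3"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # replace with your token

# Send the first widget example to the hosted Inference API.
payload = {"inputs": "قفا نبك من ذِكرى حبيب ومنزلِ  بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # a list of {label, score} pairs, one per meter class
```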

Or you can use the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub.
model = AutoModelForSequenceClassification.from_pretrained("Yah216/Arabic_poem_meter_3", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yah216/Arabic_poem_meter_3", use_auth_token=True)

# Tokenize a verse and run a forward pass to get the meter logits.
inputs = tokenizer("قفا نبك من ذِكرى حبيب ومنزلِ  بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ", return_tensors="pt")
outputs = model(**inputs)
```
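
The logits in `outputs` can be mapped back to a meter name through the checkpoint's label mapping. A minimal sketch, assuming the meter names are stored in the model's `id2label` config:

```python
import torch

# Pick the highest-scoring class and look up its meter name (illustrative sketch).
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```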