---
license: apache-2.0
language:
- sl
metrics:
- f1
- precision
- recall
- confusion_matrix
base_model:
- google-bert/bert-base-cased
pipeline_tag: token-classification
tags:
- NER
- medical
- symptom
- extraction
- slovenian
datasets:
- rigonsallauka/slovenian_ner_dataset
---
# Slovenian Medical NER
## Use
- Primary Use Case: This model is designed to extract medical entities such as symptoms, diagnostic tests, and treatments from clinical text in the Slovenian language.
- Applications: Suitable for healthcare professionals, clinical data analysis, and research into medical text processing.
- Supported Entity Types:
  - `PROBLEM`: Diseases, symptoms, and medical conditions.
  - `TEST`: Diagnostic procedures and laboratory tests.
  - `TREATMENT`: Medications, therapies, and other medical interventions.
## Training Data
- Data Sources: Annotated datasets, including clinical data and translations of English medical text into Slovenian.
- Data Augmentation: Data augmentation techniques were applied to the training set to improve the model's ability to generalize to different text structures.
- Dataset Split (see the sketch after this list):
- Training Set: 80%
- Validation Set: 10%
- Test Set: 10%
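A minimal sketch of reproducing this split with the `datasets` library (assuming the Hub dataset exposes a single `train` split; the seed is arbitrary):

```python
from datasets import load_dataset

# Load the companion dataset from the Hugging Face Hub
ds = load_dataset("rigonsallauka/slovenian_ner_dataset", split="train")

# Hold out 20%, then halve the holdout into validation and test (80/10/10)
split = ds.train_test_split(test_size=0.2, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)

train_ds = split["train"]    # 80%
val_ds = holdout["train"]    # 10%
test_ds = holdout["test"]    # 10%
```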
## Model Training
- Training Configuration:
- Optimizer: AdamW
- Learning Rate: 3e-5
- Batch Size: 64
- Epochs: 200
- Loss Function: Focal Loss, used to handle class imbalance (see the sketch after this list)
- Frameworks: PyTorch, Hugging Face Transformers, SimpleTransformers
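The card does not give the exact focal-loss formulation; a minimal PyTorch sketch of a common multi-class variant (the `gamma` value and `ignore_index` handling are illustrative assumptions) looks like this:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0, ignore_index=-100):
    # logits: (num_tokens, num_labels); labels: (num_tokens,)
    # Unreduced cross-entropy per token, so each term can be reweighted
    ce = F.cross_entropy(logits, labels, reduction="none", ignore_index=ignore_index)
    pt = torch.exp(-ce)                   # model probability of the true class
    focal = (1.0 - pt) ** gamma * ce      # down-weight easy, confident tokens
    return focal[labels != ignore_index].mean()
```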
## Evaluation Metrics
- eval_loss = 0.3708431158236593
- f1_score = 0.7571850298211653
- precision = 0.7577626541897065
- recall = 0.7566082854003748
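The card does not say which library produced these scores; for entity-level evaluation of BIO-tagged sequences, a common approach uses `seqeval` (the tag sequences below are illustrative, not real model output):

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# One list of gold/predicted tags per sentence (made-up example data)
y_true = [["B-PROBLEM", "I-PROBLEM", "O", "B-TEST", "O"]]
y_pred = [["B-PROBLEM", "I-PROBLEM", "O", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```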
## How to Use
You can easily use this model with the Hugging Face `transformers` library. Here's an example of how to load and use the model for inference:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

model_name = "rigonsallauka/slovenian_medical_ner"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
model.eval()

# Sample text for inference
# ("The patient complained of severe headaches and nausea that lasted two days.")
text = "Pacient se je pritoževal zaradi hudih glavobolov in slabosti, ki sta trajala dva dni."

# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt")

# Run inference and map each token's predicted label ID to its tag name
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(f"{token}\t{model.config.id2label[pred.item()]}")
```
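If you want word-level entity spans instead of per-subword tags, the `transformers` pipeline API can aggregate predictions; a brief sketch (the aggregation strategy is a suggestion, not specified by this card):

```python
from transformers import pipeline

# Group subword predictions into word-level entity spans
ner = pipeline(
    "token-classification",
    model="rigonsallauka/slovenian_medical_ner",
    aggregation_strategy="simple",
)

# ("The patient complained of severe headaches and nausea.")
for entity in ner("Pacient se je pritoževal zaradi hudih glavobolov in slabosti."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```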