
Portuguese NER - TempClinBr - BioBERTpt(bio)

Fine-tuned from BioBERTpt(bio) on the TempClinBr corpus.

Metrics:

        precision    recall  f1-score   support

           0       0.44      0.29      0.35        28
           1       0.75      0.60      0.66       420
           2       0.57      0.40      0.47        10
           3       0.57      0.36      0.44        11
           4       0.70      0.85      0.77       124
           5       0.72      0.67      0.69       291
           6       0.84      0.90      0.87      2236
           7       0.78      0.77      0.77       112
           8       0.85      0.75      0.80       503
           9       0.64      0.56      0.60        78
          10       0.81      0.82      0.81        71
          11       0.82      1.00      0.90        33

    accuracy                           0.81      3917
   macro avg       0.71      0.66      0.68      3917
weighted avg       0.81      0.81      0.80      3917

Parameters:

device = cuda (Colab)
nclasses = len(tag2id)
nepochs = 50 (early stopping triggered at epoch 16)
batch_size = 16
batch_status = 32
learning_rate = 3e-5

early_stop = 5 (patience, in epochs)
max_length = 256
write_path = 'model'
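The combination of nepochs = 50, early_stop = 5, and training halting at epoch 16 implies a standard patience-based early-stopping loop. A minimal sketch of that logic (the function name and the precomputed loss list are illustrative, not the actual training code):

```python
nepochs = 50
early_stop = 5  # patience: stop after this many epochs without improvement


def train_with_early_stopping(val_losses):
    """Return the epoch at which training stops.

    val_losses: per-epoch validation losses (a precomputed list here,
    standing in for losses measured during real training).
    """
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses[:nepochs], start=1):
        if loss < best:
            best, since_best = loss, 0  # new best: reset patience counter
        else:
            since_best += 1
        if since_best >= early_stop:
            return epoch  # patience exhausted: stop early
    return min(nepochs, len(val_losses))


# Example: validation loss stops improving after epoch 11,
# so patience runs out five epochs later, at epoch 16.
losses = [1.0 - 0.05 * i for i in range(11)] + [0.6] * 39
print(train_with_early_stopping(losses))  # 16
```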

Evaluation on the test set - TempClinBr. Note: the evaluation includes the "O" tag (label 6 in the tag2id mapping below); if needed, recompute the averages without that tag.

tag2id ={'I-Ocorrencia': 0,
 'I-Problema': 1,
 'I-DepartamentoClinico': 2,
 'B-DepartamentoClinico': 3,
 'B-Ocorrencia': 4,
 'B-Tratamento': 5,
 'O': 6,
 'B-Teste': 7,
 'B-Problema': 8,
 'I-Tratamento': 9,
 'B-Evidencia': 10,
 'I-Teste': 11,
 '<pad>': 12}
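The model outputs label ids, so the mapping above has to be inverted to decode predictions back to tag strings. A minimal sketch (the example prediction ids are illustrative):

```python
tag2id = {'I-Ocorrencia': 0, 'I-Problema': 1, 'I-DepartamentoClinico': 2,
          'B-DepartamentoClinico': 3, 'B-Ocorrencia': 4, 'B-Tratamento': 5,
          'O': 6, 'B-Teste': 7, 'B-Problema': 8, 'I-Tratamento': 9,
          'B-Evidencia': 10, 'I-Teste': 11, '<pad>': 12}

# Invert the mapping: label id -> tag string.
id2tag = {i: t for t, i in tag2id.items()}

pred_ids = [8, 1, 6, 7]  # example per-token model output (illustrative)
pred_tags = [id2tag[i] for i in pred_ids]
print(pred_tags)  # ['B-Problema', 'I-Problema', 'O', 'B-Teste']
```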

              precision    recall  f1-score   support

           0       0.59      0.20      0.29        51
           1       0.77      0.69      0.73       645
           2       0.67      0.71      0.69        14
           3       0.87      0.43      0.58        30
           4       0.71      0.80      0.75       146
           5       0.79      0.77      0.78       261
           6       0.84      0.93      0.88      2431
           7       0.80      0.66      0.73       194
           8       0.87      0.83      0.85       713
           9       0.83      0.62      0.71       146
          10       0.98      0.91      0.94       128
          11       0.54      0.21      0.30        99

    accuracy                           0.83      4858
   macro avg       0.77      0.65      0.69      4858
weighted avg       0.82      0.83      0.82      4858
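As the note above suggests, the macro average includes the majority "O" class. A quick sketch recomputing the test-set macro F1 without it, using the per-class F1 scores copied from the table:

```python
# Per-class F1 on the test set, indexed by label id (from the table above).
f1 = {0: 0.29, 1: 0.73, 2: 0.69, 3: 0.58, 4: 0.75, 5: 0.78,
      6: 0.88, 7: 0.73, 8: 0.85, 9: 0.71, 10: 0.94, 11: 0.30}

O_LABEL = 6  # id of the 'O' tag per the tag2id mapping above
scores = [s for i, s in f1.items() if i != O_LABEL]
macro_f1_without_O = sum(scores) / len(scores)
print(round(macro_f1_without_O, 2))  # 0.67
```

Dropping "O" lowers the macro F1 slightly (0.69 with "O" vs. ~0.67 without), which is the more informative number for entity-level performance.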

How to cite: coming soon.
