LayoutLM_5

This model is a fine-tuned version of microsoft/layoutlmv3-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3586
  • Precision: 0.8344
  • Recall: 0.8344
  • F1: 0.8344
  • Accuracy: 0.9343
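For reference, a minimal inference sketch is shown below. It is hypothetical usage, not part of the original card: the local checkpoint path `./LayoutLM_5`, the input image `document.png`, and running the base processor's built-in OCR (which requires `pytesseract`) are all assumptions.

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Processor from the base model; apply_ocr=True runs Tesseract OCR on the image.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("./LayoutLM_5")  # hypothetical local path

image = Image.open("document.png").convert("RGB")  # placeholder input document
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
predicted_labels = [model.config.id2label[i] for i in predicted_ids]
```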

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 2000
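A sketch of how these settings map onto `TrainingArguments` follows. It is a reconstruction from the list above, not the original training script: `output_dir` is a placeholder, and `eval_steps`/`logging_steps` are inferred from the results table below (evaluation every 100 steps, training loss logged every 500).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm_5",        # placeholder, not from the card
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    max_steps=2000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=100,                 # inferred from the results table
    logging_steps=500,              # inferred from when "No log" stops
)
```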

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 3.70  | 100  | 0.8644          | 0.0       | 0.0    | 0.0    | 0.7818   |
| No log        | 7.41  | 200  | 0.6214          | 0.7857    | 0.0728 | 0.1333 | 0.8000   |
| No log        | 11.11 | 300  | 0.4714          | 0.7303    | 0.4305 | 0.5417 | 0.8657   |
| No log        | 14.81 | 400  | 0.4046          | 0.7955    | 0.6954 | 0.7420 | 0.9189   |
| 0.6176        | 18.52 | 500  | 0.3755          | 0.8194    | 0.7815 | 0.8000 | 0.9301   |
| 0.6176        | 22.22 | 600  | 0.3611          | 0.7935    | 0.8146 | 0.8039 | 0.9245   |
| 0.6176        | 25.93 | 700  | 0.3679          | 0.7848    | 0.8212 | 0.8026 | 0.9245   |
| 0.6176        | 29.63 | 800  | 0.3292          | 0.8289    | 0.8344 | 0.8317 | 0.9357   |
| 0.6176        | 33.33 | 900  | 0.3408          | 0.8289    | 0.8344 | 0.8317 | 0.9315   |
| 0.1555        | 37.04 | 1000 | 0.3479          | 0.8141    | 0.8411 | 0.8274 | 0.9315   |
| 0.1555        | 40.74 | 1100 | 0.3491          | 0.8247    | 0.8411 | 0.8328 | 0.9357   |
| 0.1555        | 44.44 | 1200 | 0.3704          | 0.7888    | 0.8411 | 0.8141 | 0.9245   |
| 0.1555        | 48.15 | 1300 | 0.3591          | 0.8194    | 0.8411 | 0.8301 | 0.9315   |
| 0.1555        | 51.85 | 1400 | 0.3420          | 0.8344    | 0.8344 | 0.8344 | 0.9343   |
| 0.0746        | 55.56 | 1500 | 0.3546          | 0.8421    | 0.8477 | 0.8449 | 0.9357   |
| 0.0746        | 59.26 | 1600 | 0.3442          | 0.8421    | 0.8477 | 0.8449 | 0.9371   |
| 0.0746        | 62.96 | 1700 | 0.3687          | 0.8205    | 0.8477 | 0.8339 | 0.9357   |
| 0.0746        | 66.67 | 1800 | 0.3743          | 0.8258    | 0.8477 | 0.8366 | 0.9343   |
| 0.0746        | 70.37 | 1900 | 0.3626          | 0.8301    | 0.8411 | 0.8355 | 0.9343   |
| 0.0502        | 74.07 | 2000 | 0.3586          | 0.8344    | 0.8344 | 0.8344 | 0.9343   |
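Metrics of this shape (entity-level precision/recall/F1 plus token accuracy) are typically computed with `seqeval`; the sketch below shows one hypothetical `compute_metrics` implementation. The FUNSD-style label list is an assumption, since the actual label set of this run is not documented.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Hypothetical label set; the real labels for this checkpoint are unknown.
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=2)
    # Keep only positions with a real label (-100 marks ignored/special tokens).
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```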

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3