---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: IKT_classifier_netzero_best
    results: []
widget:
  - text: >-
      "We have put forth a long-term low- emissions development strategy (LEDS)
      that aspires to halve emissions from its peak to 33 MtCO2e by 2050, with a
      view to achieving net-zero emissions as soon as viable in the second half
      of the century. This will require serious and concerted efforts across our
      industry, economy and society"
    example_title: NET-ZERO
  - text: >-
      "Unconditional Contribution In the unconditional scenario, GHG emissions
      would be reduced by 27.56 Mt CO2e (6.73%) below BAU in 2030 in the
      respective sectors. 26.3 Mt CO2e (95.4%) of this emission reduction will
      be from the Energy sector while 0.64 (2.3%) and 0.6 (2.2%) Mt CO2e
      reduction will be from AFOLU (agriculture) and waste sector respectively.
      There will be no reduction in the IPPU sector. Conditional Contribution In
      the conditional scenario, GHG emissions would be reduced by 61.9 Mt CO2e
      (15.12%) below BAU in 2030 in the respective sectors."
    example_title: TARGET_FREE
  - text: >-
      "This land is buffered from the sea by the dyke and a network of drains
      and pumps will control the water levels in the polder. We have raised the
      minimum platform levels for new developments from 3m to 4m above the
      Singapore Height Datum (SHD) since 2011. Presently, critical
      infrastructure on existing coastal land, notably Changi Airport Terminal 5
      and Tuas Port, will be constructed with platform levels at least 5m above
      SHD."
    example_title: NEGATIVE
---

# IKT_classifier_netzero_best

This model is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.9526
- Precision Macro: 0.7813
- Precision Weighted: 0.8164
- Recall Macro: 0.7734
- Recall Weighted: 0.7812
- F1-score: 0.7644
- Accuracy: 0.7812

## Model description

More information needed

## Intended uses & limitations

More information needed
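
This section is still a placeholder; the following is a minimal inference sketch, not the authors' documented usage. The repo id `mtyrrell/IKT_classifier_netzero_best` is an assumption, and the label names are inferred from the widget examples above (`NET-ZERO`, `TARGET_FREE`, `NEGATIVE`):

```python
# Hedged sketch: the repo id below is an assumption. The pipeline call is
# shown in comments because it downloads the model weights:
#
#   from transformers import pipeline
#   clf = pipeline("text-classification",
#                  model="mtyrrell/IKT_classifier_netzero_best")
#   scores = clf("We aim to achieve net-zero emissions by 2050.", top_k=None)
#   # scores is a list of {"label": ..., "score": ...} dicts
#
def top_label(scores):
    """Return the highest-scoring label from a pipeline-style result list."""
    return max(scores, key=lambda s: s["score"])["label"]

# Works on any list of {"label", "score"} dicts, e.g. the pipeline output above:
example = [{"label": "NET-ZERO", "score": 0.91},
           {"label": "TARGET_FREE", "score": 0.06},
           {"label": "NEGATIVE", "score": 0.03}]
print(top_label(example))  # → NET-ZERO
```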

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 9.588722322096848e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400.0
- num_epochs: 8
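
The schedule implied by these settings (linear warmup for 400 steps, then linear decay to zero, as in transformers' `get_linear_schedule_with_warmup`) can be sketched in plain Python. The total of 912 steps comes from the training-results table (8 epochs × 114 steps/epoch):

```python
# Sketch of the learning-rate schedule implied by the hyperparameters above.
BASE_LR = 9.588722322096848e-05
WARMUP_STEPS = 400
TOTAL_STEPS = 912  # 8 epochs x 114 steps/epoch, per the training-results table

def lr_at(step, base=BASE_LR, warmup=WARMUP_STEPS, total=TOTAL_STEPS):
    """Learning rate at a given optimizer step: linear warmup, then decay."""
    if step < warmup:
        return base * step / warmup
    return base * max(0, total - step) / (total - warmup)
```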

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Macro | Precision Weighted | Recall Macro | Recall Weighted | F1-score | Accuracy |
|---------------|-------|------|-----------------|-----------------|--------------------|--------------|-----------------|----------|----------|
| No log        | 1.0   | 114  | 0.8267          | 0.8056          | 0.8151             | 0.6601       | 0.6875          | 0.6418   | 0.6875   |
| No log        | 2.0   | 228  | 0.4916          | 0.8095          | 0.8371             | 0.8290       | 0.8125          | 0.8113   | 0.8125   |
| No log        | 3.0   | 342  | 0.4784          | 0.8535          | 0.8920             | 0.8682       | 0.8750          | 0.8569   | 0.8750   |
| No log        | 4.0   | 456  | 0.8909          | 0.7813          | 0.8164             | 0.7734       | 0.7812          | 0.7644   | 0.7812   |
| 0.6167        | 5.0   | 570  | 0.6673          | 0.8242          | 0.8650             | 0.8649       | 0.8125          | 0.8260   | 0.8125   |
| 0.6167        | 6.0   | 684  | 0.7110          | 0.8413          | 0.8795             | 0.8845       | 0.8438          | 0.8505   | 0.8438   |
| 0.6167        | 7.0   | 798  | 1.3731          | 0.7778          | 0.8281             | 0.7702       | 0.7188          | 0.7380   | 0.7188   |
| 0.6167        | 8.0   | 912  | 0.9526          | 0.7813          | 0.8164             | 0.7734       | 0.7812          | 0.7644   | 0.7812   |
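
The table reports both macro and weighted averages: macro weights every class equally, while weighted scales each class by its support, so the two diverge when the classes are imbalanced. A small sketch of both averages with the standard definitions (the example counts are hypothetical, not the model's actual predictions):

```python
# Macro vs. weighted precision/recall, computed from scratch.
from collections import Counter

def precision_recall(y_true, y_pred, average):
    """Return (precision, recall) with 'macro' or 'weighted' averaging."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    n = len(y_true)
    precs, recs, weights = [], [], []
    for lab in labels:
        tp = sum(t == p == lab for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == lab for p in y_pred)
        precs.append(tp / pred_pos if pred_pos else 0.0)
        recs.append(tp / support[lab] if support[lab] else 0.0)
        weights.append(support[lab] / n)
    if average == "macro":
        w = [1 / len(labels)] * len(labels)  # every class counts equally
    else:  # "weighted": each class scaled by its support
        w = weights
    return (sum(p * wi for p, wi in zip(precs, w)),
            sum(r * wi for r, wi in zip(recs, w)))

# Hypothetical predictions over the three widget labels:
y_true = ["NET-ZERO", "NET-ZERO", "NEGATIVE", "TARGET_FREE"]
y_pred = ["NET-ZERO", "NEGATIVE", "NEGATIVE", "TARGET_FREE"]
print(precision_recall(y_true, y_pred, "macro"))
print(precision_recall(y_true, y_pred, "weighted"))
```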

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3