---
license: apache-2.0
tags:
  - vision
  - depth-estimation
  - generated_from_trainer
model-index:
  - name: glpn-nyu-finetuned-diode-230530-204740
    results: []
---

# glpn-nyu-finetuned-diode-230530-204740

This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

- Loss: 1.5139
- Mae: 3.0509
- Rmse: 3.4756
- Abs Rel: 5.7613
- Log Mae: 0.6836
- Log Rmse: 0.8048
- Delta1: 0.3028
- Delta2: 0.3079
- Delta3: 0.3096
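
The snippet below is a minimal inference sketch using the standard GLPN classes from `transformers`. The repo id `Onegafer/glpn-nyu-finetuned-diode-230530-204740` and the input image path are assumptions, not confirmed by this card.

```python
import torch
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

# Assumed repo id; substitute the actual Hub path of this checkpoint.
checkpoint = "Onegafer/glpn-nyu-finetuned-diode-230530-204740"
processor = GLPNImageProcessor.from_pretrained(checkpoint)
model = GLPNForDepthEstimation.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # (batch, H, W)

# Resize the depth map back to the input resolution for visualization.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL size is (W, H); interpolate wants (H, W)
    mode="bicubic",
    align_corners=False,
).squeeze()
```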

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
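
The exact trainer script is not published with this card, but as a rough guide, a `TrainingArguments` configuration matching the list above might look like the sketch below (`output_dir` is a placeholder).

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="glpn-nyu-finetuned-diode-230530-204740",  # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=48,
    seed=2022,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```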

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    | Rmse   | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| No log        | 1.0   | 1    | 1.5335          | 3.1427 | 3.6089 | 5.9847  | 0.6920  | 0.8173   | 0.3016 | 0.3077 | 0.3094 |
| No log        | 2.0   | 2    | 1.5297          | 3.1246 | 3.5833 | 5.9419  | 0.6903  | 0.8149   | 0.3018 | 0.3077 | 0.3094 |
| No log        | 3.0   | 3    | 1.5263          | 3.1085 | 3.5602 | 5.9033  | 0.6889  | 0.8128   | 0.3020 | 0.3078 | 0.3095 |
| No log        | 4.0   | 4    | 1.5234          | 3.0947 | 3.5400 | 5.8694  | 0.6876  | 0.8109   | 0.3022 | 0.3078 | 0.3095 |
| No log        | 5.0   | 5    | 1.5208          | 3.0825 | 3.5222 | 5.8395  | 0.6865  | 0.8092   | 0.3024 | 0.3079 | 0.3095 |
| No log        | 6.0   | 6    | 1.5185          | 3.0723 | 3.5072 | 5.8144  | 0.6856  | 0.8078   | 0.3025 | 0.3079 | 0.3095 |
| No log        | 7.0   | 7    | 1.5167          | 3.0639 | 3.4949 | 5.7937  | 0.6848  | 0.8067   | 0.3026 | 0.3079 | 0.3096 |
| No log        | 8.0   | 8    | 1.5153          | 3.0574 | 3.4852 | 5.7775  | 0.6842  | 0.8057   | 0.3027 | 0.3079 | 0.3096 |
| No log        | 9.0   | 9    | 1.5143          | 3.0531 | 3.4788 | 5.7667  | 0.6838  | 0.8051   | 0.3028 | 0.3079 | 0.3096 |
| No log        | 10.0  | 10   | 1.5139          | 3.0509 | 3.4756 | 5.7613  | 0.6836  | 0.8048   | 0.3028 | 0.3079 | 0.3096 |
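
The card does not ship its evaluation code. For reference, the sketch below computes the metric names above using the conventional monocular-depth definitions, where Delta-k is the fraction of pixels with max(pred/gt, gt/pred) < 1.25^k; treat it as an assumption about how these numbers were produced.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Conventional monocular-depth metrics (assumes positive depths)."""
    valid = target > 0  # ignore pixels without ground truth
    pred, target = pred[valid], target[valid]
    ratio = np.maximum(pred / target, target / pred)
    return {
        "mae": np.abs(pred - target).mean(),
        "rmse": np.sqrt(((pred - target) ** 2).mean()),
        "abs_rel": (np.abs(pred - target) / target).mean(),
        "log_mae": np.abs(np.log(pred) - np.log(target)).mean(),
        "log_rmse": np.sqrt(((np.log(pred) - np.log(target)) ** 2).mean()),
        "delta1": (ratio < 1.25).mean(),
        "delta2": (ratio < 1.25 ** 2).mean(),
        "delta3": (ratio < 1.25 ** 3).mean(),
    }
```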

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3