dinov2-base-finetuned-lora-EA-rank8

This model is a LoRA (rank 8) fine-tuned version of facebook/dinov2-base on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.4365
  • Accuracy: 0.8233
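
For reference, here is a minimal inference sketch using Transformers and PEFT. This is a hedged outline, not a documented usage recipe: the adapter id is taken from this card's title, while NUM_CLASSES, the input image path, and the classification-head setup are placeholders, since the training dataset is not described in this card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel

NUM_CLASSES = 10  # placeholder: the training dataset (and class count) is not documented

# Load the DINOv2 backbone with a classification head, then attach the
# LoRA adapter weights from this repository.
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
base = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-base", num_labels=NUM_CLASSES
)
model = PeftModel.from_pretrained(
    base, "namrahrehman/dinov2-base-finetuned-lora-EA-rank8"
)
model.eval()

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())  # predicted class index
```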

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 1024
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 15
  • mixed_precision_training: Native AMP
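
Since the card lists hyperparameters but not the training script, the following is a hedged reconstruction using peft and the Transformers Trainer API. The rank (8) comes from the model name and the arguments mirror the list above; lora_alpha, target_modules, and NUM_CLASSES are assumptions, and the Trainer/dataset wiring is omitted because the data is undocumented.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification, TrainingArguments

NUM_CLASSES = 10  # placeholder: the training dataset is not documented

# Rank 8 comes from the model name; alpha and target modules are assumptions
# (query/value projections are a common LoRA choice for ViT-style backbones).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,                      # assumed, not stated in the card
    target_modules=["query", "value"],  # assumed
    modules_to_save=["classifier"],     # train the new head alongside the adapters
)

model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-base", num_labels=NUM_CLASSES
)
model = get_peft_model(model, lora_config)

# Mirrors the hyperparameter list above; Adam betas/epsilon are the Trainer
# defaults, and fp16 stands in for "Native AMP".
args = TrainingArguments(
    output_dir="dinov2-base-finetuned-lora-EA-rank8",
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=16,  # 64 * 16 = effective batch size of 1024
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,
)
```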

Training results

Training Loss | Epoch  | Step | Validation Loss | Accuracy
------------- | ------ | ---- | --------------- | --------
No log        | 0.7805 | 2    | 0.5030          | 0.8142
No log        | 1.9512 | 5    | 0.4567          | 0.8215
No log        | 2.7317 | 7    | 0.4511          | 0.8215
0.4811        | 3.9024 | 10   | 0.4438          | 0.8179
0.4811        | 4.6829 | 12   | 0.4392          | 0.8215
0.4811        | 5.8537 | 15   | 0.4379          | 0.8452
0.4811        | 6.6341 | 17   | 0.4365          | 0.8233

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.1.2
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Safetensors

  • Model size: 87M params
  • Tensor type: F32
