---
library_name: transformers
tags:
  - generated_from_trainer
model-index:
  - name: SkLIP-masked
    results: []
license: mit
datasets:
  - joshuachou/SkinCAP
language:
  - en
base_model:
  - allenai/scibert_scivocab_uncased
  - openai/clip-vit-base-patch32
pipeline_tag: feature-extraction
---

# SkLIP

SkLIP (Skin CLIP) is a hybrid CLIP model fine-tuned on SkinCAP, a multimodal dermatology dataset annotated with rich medical captions. It pairs a SciBERT text encoder with the pre-trained CLIP ViT-B/32 vision encoder.
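As a quick orientation, here is a minimal feature-extraction sketch. It assumes the checkpoint is stored in the `transformers` `VisionTextDualEncoderModel` layout (the usual format for a hybrid text/vision CLIP) and uses a placeholder repo id; adjust both to match the actual repository.

```python
# Minimal feature-extraction sketch. Assumptions: the checkpoint follows the
# VisionTextDualEncoderModel layout, and "lorenzoscottb/SkLIP" is a placeholder
# repo id -- replace with the actual model repository.
import torch
from PIL import Image
from transformers import (
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

repo_id = "lorenzoscottb/SkLIP"  # placeholder, not verified
model = VisionTextDualEncoderModel.from_pretrained(repo_id)
processor = VisionTextDualEncoderProcessor.from_pretrained(repo_id)

image = Image.open("lesion.jpg")  # any dermatology image
texts = ["a pigmented lesion with irregular borders", "healthy skin"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Projected embeddings in the shared image-text space
image_embeds = outputs.image_embeds  # shape (1, dim)
text_embeds = outputs.text_embeds    # shape (2, dim)
similarity = (image_embeds @ text_embeds.T).softmax(dim=-1)
print(similarity)
```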

## Model description

SkLIP is a CLIP-style dual encoder that pairs the `allenai/scibert_scivocab_uncased` text encoder with the vision encoder of `openai/clip-vit-base-patch32`, projecting dermatology images and their medical captions into a shared embedding space.
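For context, this is how such a hybrid of the two base models is typically assembled with `transformers`; it is a sketch of the architecture, not necessarily the authors' exact initialisation code.

```python
# Sketch of assembling a SciBERT + CLIP-ViT dual encoder in transformers.
# Illustrates the architecture only; not claimed to be the authors' training code.
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32",      # pre-trained vision encoder
    "allenai/scibert_scivocab_uncased",  # pre-trained text encoder
)

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
```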

## Intended uses & limitations

The model is intended for feature extraction: producing joint image and text embeddings for dermatology images and captions (see the example above). Its limitations have not yet been documented.

## Training and evaluation data

The model was fine-tuned and evaluated on [SkinCAP](https://huggingface.co/datasets/joshuachou/SkinCAP), a multimodal dermatology dataset annotated with rich medical captions.
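The dataset can be pulled with the `datasets` library; the snippet below assumes the default configuration, and split and column names are not documented here, so inspect the loaded object.

```python
# Loading the SkinCAP dataset referenced in the metadata. Splits and columns
# are not documented in this card, so inspect the dataset after loading.
from datasets import load_dataset

ds = load_dataset("joshuachou/SkinCAP")
print(ds)  # shows available splits and column names
```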

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
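For reproducibility, the list above maps onto `transformers.TrainingArguments` roughly as follows. Only the listed values are taken from the card; the output directory and the surrounding `Trainer` wiring (model, datasets, collator) are placeholders.

```python
# TrainingArguments mirroring the hyperparameters listed above. Only the listed
# values are grounded in the card; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sklip-masked",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=10,
)
```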

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2018        | 1.0   | 57   | 4.1344          |
| 4.1697        | 2.0   | 114  | 4.1298          |
| 4.1668        | 3.0   | 171  | 4.1276          |
| 4.164         | 4.0   | 228  | 4.1263          |
| 4.158         | 5.0   | 285  | 4.1253          |
| 4.1583        | 6.0   | 342  | 4.1246          |
| 4.1569        | 7.0   | 399  | 4.1243          |
| 4.1575        | 8.0   | 456  | 4.1241          |
| 4.1564        | 9.0   | 513  | 4.1240          |
| 4.1604        | 10.0  | 570  | 4.1240          |

### Framework versions

- Transformers 4.45.2
- Pytorch 2.1.0+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1