lorenzoscottb committed
Commit 490c430
1 Parent(s): d16d317

Update README.md

Files changed (1)
  1. README.md +11 -2
README.md CHANGED
```diff
@@ -5,6 +5,15 @@ tags:
 model-index:
 - name: SkLIP-masked
   results: []
+license: mit
+datasets:
+- joshuachou/SkinCAP
+language:
+- en
+base_model:
+- allenai/scibert_scivocab_uncased
+- openai/clip-vit-base-patch32
+pipeline_tag: feature-extraction
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +22,7 @@ should probably proofread and complete it, then remove this comment. -->
 # SkLIP
 
 SkLIP (Skin CLIP) is a hybrid CLIP model finetuned on [SkinCAP](https://hf.rst.im/datasets/joshuachou/SkinCAP), a multi-modal dermatology dataset annotated
-with rich medical captions. It is built with a RoBERTa text encoder and the pre-trained CLIP vision encoder.
+with rich medical captions. It is built with a SciBERT text encoder and the pre-trained CLIP ViT-B/32 vision encoder.
 
 ## Model description
 
@@ -62,4 +71,4 @@ The following hyperparameters were used during training:
 - Transformers 4.45.2
 - Pytorch 2.1.0+cu118
 - Datasets 3.0.1
--- Tokenizers 0.20.1
+- Tokenizers 0.20.1
```
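The metadata added in this commit makes the card machine-usable: `base_model` names the two encoders (`allenai/scibert_scivocab_uncased` and `openai/clip-vit-base-patch32`) and `pipeline_tag: feature-extraction` signals the intended use. A hybrid text/vision CLIP like this is conventionally handled with the `VisionTextDualEncoder` classes in `transformers`. The sketch below is a hedged illustration, not code from the repo: the checkpoint id `lorenzoscottb/SkLIP-masked` is an assumption inferred from the committer name and the `model-index` name, and `lesion.jpg` is a placeholder image path.

```python
# Minimal sketch: extract joint image/text features from a SciBERT + CLIP
# ViT-B/32 dual encoder. Assumptions are flagged inline.
import torch
from PIL import Image
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Build a processor from the two base models named in the card metadata.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Assumed repo id (committer + model-index name); the card does not state it.
model = VisionTextDualEncoderModel.from_pretrained("lorenzoscottb/SkLIP-masked")
model.eval()

image = Image.open("lesion.jpg")  # placeholder path to an RGB skin image
captions = [
    "a melanocytic nevus on the forearm",
    "a psoriasis plaque with silvery scaling",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Feature extraction: L2-normalise the embeddings, then cosine similarity.
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
print(img @ txt.T)  # shape (1, 2): similarity of the image to each caption
```

If one were initialising such a hybrid from scratch before finetuning, `VisionTextDualEncoderModel.from_vision_text_pretrained("openai/clip-vit-base-patch32", "allenai/scibert_scivocab_uncased")` is the usual entry point, matching the two `base_model` entries added in this commit.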