### Download 3DBodyTex.v1 meshes
+ +  + > _A few examples of raw 3D scans in sports-clothing from the 3DBodyTex.v1 dataset showing a wide range of body shapes, pose, skin-tone, and gender._ + + + + The `3DBodyTex.v1` dataset can be downloaded from [here](https://cvi2.uni.lu/3dbodytexv1/). + + `3DBodyTex.v1` contains the meshes and texture images used in this work and can be downloaded from the external site linked above (after accepting a license agreement). + + **NOTE**: These textured meshes are needed to run the code to generate the data. + + We provide the non-skin texture maps annotations for 2 meshes: `006-f-run` and `221-m-u`. + Hence, to generate the data, make sure to get the `.obj` files for these two meshes and place them in `data/3dbodytex-1.1-highres` before excecuting `scripts/gen_data.py`. + + After accepting the licence, download and unzip the data in `./data/`. + ++ ++ ++ + ### Download the 3DBodyTex.v1 annotations + + +
### Download the 3DBodyTex.v1 annotations

| _Non-skin texture maps_ | _Anatomy labels_ |
|:-:|:-:|
| We provide the non-skin texture map ($T_{nonskin}$) annotations for 215 meshes from the `3DBodyTex.v1` dataset [here](https://cvi2.uni.lu/3dbodytexdermsynth/). | We provide the per-vertex labels for anatomical parts of the `3DBodyTex.v1` meshes, obtained by fitting the SCAPE template body model, [here](https://cvi2.uni.lu/3dbodytexdermsynth/). |
| _A sample texture image showing the annotations for non-skin regions._ | _A few examples of the scans showing the 7 anatomy labels._ |

The folders are organised with the same IDs as the meshes in the `3DBodyTex.v1` dataset.

**NOTE**: To download the 3DBodyTex.v1 annotations via the links above, you need to request access to the 3DBodyTex.DermSynth dataset by following the instructions on this [link](https://cvi2.uni.lu/3dbodytexdermsynth/).
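As a quick sanity check, the following sketch pairs each annotation folder with its mesh by the shared ID; the `data/annotations` unpack location is an assumption:

```python
# A minimal sketch: annotation folders are assumed to be unpacked under
# data/annotations/ and to share their IDs with the 3DBodyTex.v1 meshes.
from pathlib import Path

mesh_root = Path("data/3dbodytex-1.1-highres")
ann_root = Path("data/annotations")  # assumed unpack location

for ann_dir in sorted(p for p in ann_root.iterdir() if p.is_dir()):
    mesh_dir = mesh_root / ann_dir.name
    status = "ok" if mesh_dir.exists() else "missing mesh"
    print(f"{ann_dir.name}: {status}")
```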
+ + 
_An illustration showing lesions from the Fitzpatrick17k dataset in the top row, and their corresponding manually segmented lesion annotations in the bottom row._

We used the skin conditions from [Fitzpatrick17k](https://github.com/mattgroh/fitzpatrick17k).
See their instructions to get access to the Fitzpatrick17k images.
We provide the raw images for the Fitzpatrick17k dataset [here](https://vault.sfu.ca/index.php/s/cMuxZNzk6UUHNmX).

After downloading the dataset, unzip it:

```bash
unzip fitzpatrick17k.zip -d data/fitzpatrick17k/
```

We provide a few samples of the densely annotated lesion masks from the Fitzpatrick17k dataset within this repository under the `data` directory.

More such annotations can be downloaded from [here](https://vault.sfu.ca/index.php/s/gemdbCeoZXoCqlS).
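To inspect one of the bundled masks, here is a minimal sketch (the exact mask path under `data/` is a placeholder):

```python
# A minimal sketch for viewing a Fitzpatrick17k lesion mask; the path below
# is a placeholder, substitute one of the mask files shipped under data/.
import numpy as np
from PIL import Image

mask = np.array(Image.open("data/annotations/example_mask.png"))
print("mask shape:", mask.shape, "unique values:", np.unique(mask))
```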
+ +  + >_A few examples of the background scenes used for rendering the synthetic data._ + + + Although you can use any scenes as background for generating the random views of the lesioned-meshes, we used [SceneNet RGB-D](https://robotvault.bitbucket.io/scenenet-rgbd.html) for the background IndoorScenes. Specifically, we used [this split](https://www.doc.ic.ac.uk/~bjm113/scenenet_data/train_split/train_0.tar.gz), and sampled 3000 images from it. + + For convenience, the background scenes we used to generate the ssynthetic dataset can be downloaded from [here](https://vault.sfu.ca/index.php/s/r7nc1QHKwgW2FDk). + +
### Download the FUSeg dataset
+ +  + >_A few examples from the FUSeg dataset showing the images in the top row and, it's corresponding segmentation mask in the bottom row._ + + + The Foot Ulcer Segmentation Challenge (FUSeg) dataset is available to download from [their official repository](https://github.com/uwm-bigdata/wound-segmentation/tree/master/data/Foot%20Ulcer%20Segmentation%20Challenge). + Download and unpack the dataset at `data/FUSeg/`, maintaining the Folder Structure shown above. + + For simplicity, we mirror the FUSeg dataset [here](https://vault.sfu.ca/index.php/s/2mb8kZg8wOltptT). + ++ ++ ++ + ### Download the Pratheepan dataset + +
+ +  + >_A few examples from the Pratheepan dataset showing the images and it's corresponding segmentation mask, in the top and bottom row respectively._ + + The Pratheepan dataset is available to download from [their official website](https://web.fsktm.um.edu.my/~cschan/downloads_skin_dataset.html). + The images and the corresponding ground truth masks are available in a ZIP file hosted on Google Drive. Download and unpack the dataset at `data/Pratheepan_Dataset/`. + ++ ++ ++ + ### Download the PH2 dataset + +
+ +  + >_A few examples from the PH2 dataset showing a lesion and it's corresponding segmentation mask, in the top and bottom row respectively._ + + The PH2 dataset can be downloaded from [the official ADDI Project website](https://www.fc.up.pt/addi/ph2%20database.html). + Download and unpack the dataset at `data/ph2/`, maintaining the Folder Structure shown below. + ++ ++ ++ + ### Download the DermoFit dataset + +
+ +  + >_An illustration of a few samples from the DermoFit dataset showing the skin lesions and it's corresponding binary mask, in the top and bottom row respectively._ + + The DermoFit dataset is available through a paid perpetual academic license from the University of Edinburgh. Please access the dataset following the instructions for [the DermoFit Image Library](https://licensing.edinburgh-innovations.ed.ac.uk/product/dermofit-image-library) and unpack it at `data/dermofit/`, maintaining the Folder Structure shown above. + ++ +++ + ### Creating the Synthetic dataset + +
+ +  + >_Generated synthetic images of multiple subjects across a range of skin tones in various skin conditions, background scene, lighting, and viewpoints._ + + + For convenience, we provide the generated synthetic data we used in this work for various downstream tasks [here](https://cvi2.uni.lu/3dbodytexdermsynth/). + + If you want to train your models on a different split of the synthetic data, you can download a dataset generated by blending lesions on 26 3DBodyTex scans from [here](https://cvi2.uni.lu/3dbodytexdermsynth/). + To prepare the synthetic dataset for training. Sample the `images`, and `targets` from the path where you saved this dataset and then organise them into `train/val`. + + **NOTE**: To download the synthetic 3DBodyTex.DermSynth dataset referred in the links above, you would need to request access by following the instructions on this [link](https://cvi2.uni.lu/3dbodytexdermsynth/). + + Alternatively, you can use the provided script `scripts/prep_data.py` to create it. + + Even better, you can generate your own dataset, by following the instructions [here](./README.md#generating-synthetic-dataset). + + + +