A data generation pipeline for creating photorealistic _in-the-wild_ synthetic dermatological data with rich annotations such as semantic segmentation masks, depth maps, and bounding boxes for various skin analysis tasks.
>_The figure shows the DermSynth3D computational pipeline, where 2D segmented skin conditions are blended into the texture image of a 3D mesh at locations outside the hair and clothing regions. After blending, 2D views of the mesh are rendered with a variety of camera viewpoints and lighting conditions and combined with background images to create a synthetic dermatology dataset._
## Motivation
```
DermSynth3D/
┣ out/                # the checkpoints are saved here (auto created)
┣ data/               # directory to store the data
┃ ┣ ...               # detailed instructions in the dataset.md
┣ dermsynth3d/
┃ ┣ datasets/         # class definitions for the datasets
┃ ┣ deepblend/        # code for deep blending
┃ ┣ losses/           # loss functions
┃ ┣ models/           # model definitions
┃ ┣ tools/            # wrappers for synthetic data generation
```
- [Post-Process Renderings with Unity](#post-process-renderings-with-unity)
  - [Click to see a visual comparison of the renderings obtained from Pytorch3D and Unity](#click-to-see-the-a-visual-comparison-of-the-renderings-obtained-from-pytorch3d-and-unity)
- [Preparing Dataset for Experiments](#preparing-dataset-for-experiments)
- [Cite](#cite)
- [Demo Notebooks for Dermatology Tasks](#demo-notebooks-for-dermatology-tasks)
```bash
# Run the container in interactive mode for using DermSynth3D
# See 3. How to use DermSynth3D
docker run --gpus all -it --rm -v /path/to/downloaded/data:/data dermsynth3d
```
We provide the [pre-built docker image](https://hub.docker.com/r/sinashish/dermsynth3d), which can be used as well:
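For example (the image name comes from the Docker Hub link above; the tag and run flags are assumptions mirroring the local-build command):

```bash
# Pull the pre-built image from Docker Hub and run it with the same data mount
docker pull sinashish/dermsynth3d
docker run --gpus all -it --rm -v /path/to/downloaded/data:/data sinashish/dermsynth3d
```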
If you face any issues installing Pytorch3D, please refer to their [installation instructions](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md).
## Datasets

Follow the instructions below to download the datasets for generating the synthetic data and training models for various tasks.
All the datasets should be downloaded and placed in the `data` directory.

<a name="tree"></a>

<!-- #### The folder structure of data directory should be as follows: -->
<details>
<a name="data_tree"></a>
<summary>

### The folder structure of the data directory should be as follows:

</summary>
```
DermSynth3D/
┣ data/
┃ ┣ fitzpatrick17k/
┃ ┃ ┣ data/           # Fitzpatrick17k images
┃ ┃ ┗ annotations/    # annotations for Fitzpatrick17k lesions
┃ ┣ ph2/
┃ ┃ ┣ images/         # PH2 images
┃ ┃ ┗ labels/         # PH2 annotations
┃ ┣ dermofit/         # Dermofit dataset
┃ ┃ ┣ images/         # Dermofit images
┃ ┃ ┗ targets/        # Dermofit annotations
┃ ┣ FUSeg/
┃ ┃ ┣ train/          # training set with images/labels for FUSeg
┃ ┃ ┣ validation/     # val set with images/labels for FUSeg
```
The datasets used in this work can be broadly categorized into data required for blending and data used for training models on the downstream tasks.
<details>
<a name="blend_data"></a>
<summary>

### Data for Blending

</summary> <blockquote>
<!-- list of blending datasets -->
<details>
<a name="mesh_data"></a>
<summary>

### Download the 3DBodyTex.v1 meshes

</summary>
The `3DBodyTex.v1` dataset can be downloaded from [here](https://cvi2.uni.lu/3dbodytexv1/).

`3DBodyTex.v1` contains the meshes and texture images used in this work and can be downloaded from the external site linked above (after accepting a license agreement).

**NOTE**: These textured meshes are needed to run the code to generate the data.

We provide the non-skin texture map annotations for 2 meshes: `006-f-run` and `221-m-u`.
Hence, to generate the data, make sure to get the `.obj` files for these two meshes and place them in `data/3dbodytex-1.1-highres` before executing `scripts/gen_data.py`.

After accepting the licence, download and unzip the data in `./data/`.
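As a rough sketch of the expected placement (the archive name is an assumption; the target folder comes from the path above):

```bash
# Unpack the downloaded 3DBodyTex.v1 archive into ./data/
unzip 3dbodytex-1.1-highres.zip -d data/

# Verify the two annotated meshes are present before running scripts/gen_data.py
ls data/3dbodytex-1.1-highres | grep -E '006-f-run|221-m-u'
```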
<details>
<a name="mesh_annot_data"></a>
<summary>

### Download the 3DBodyTex.v1 annotations

</summary>
We used the skin conditions from [Fitzpatrick17k](https://github.com/mattgroh/fitzpatrick17k).
See their instructions to get access to the Fitzpatrick17k images.
We provide the raw images for the Fitzpatrick17k dataset [here](https://vault.sfu.ca/index.php/s/cMuxZNzk6UUHNmX).

After downloading, unzip the dataset:
```bash
unzip fitzpatrick17k.zip -d data/fitzpatrick17k/
```

We provide a few samples of the densely annotated lesion masks from the Fitzpatrick17k dataset within this repository under the `data` directory.

More such annotations can be downloaded from [here](https://vault.sfu.ca/index.php/s/gemdbCeoZXoCqlS).

</details>

<details>
>_A few examples of the background scenes used for rendering the synthetic data._

The Foot Ulcer Segmentation Challenge (FUSeg) dataset is available to download from [their official repository](https://github.com/uwm-bigdata/wound-segmentation/tree/master/data/Foot%20Ulcer%20Segmentation%20Challenge).
Download and unpack the dataset at `data/FUSeg/`, maintaining the folder structure shown above.
For simplicity, we mirror the FUSeg dataset [here](https://vault.sfu.ca/index.php/s/2mb8kZg8wOltptT).
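For example, assuming the mirror serves a single ZIP archive (the archive name is hypothetical):

```bash
# Unpack the mirrored archive, keeping the train/validation layout shown in the tree above
unzip FUSeg.zip -d data/FUSeg/
```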
>_A few examples from the Pratheepan dataset showing the images and their corresponding segmentation masks, in the top and bottom rows respectively._

The Pratheepan dataset is available to download from [their official website](https://web.fsktm.um.edu.my/~cschan/downloads_skin_dataset.html).
The images and the corresponding ground truth masks are available in a ZIP file hosted on Google Drive. Download and unpack the dataset at `data/Pratheepan_Dataset/`.
</details>
<details>
<a name="ph2_data"></a>
<summary>

### Download the PH2 dataset

</summary>
>_A few examples from the PH2 dataset showing a lesion and its corresponding segmentation mask, in the top and bottom rows respectively._

The PH2 dataset can be downloaded from [the official ADDI Project website](https://www.fc.up.pt/addi/ph2%20database.html).
Download and unpack the dataset at `data/ph2/`, maintaining the folder structure shown above.
</details>
### Creating the Synthetic dataset

</summary>

>_Generated synthetic images of multiple subjects across a range of skin tones, skin conditions, background scenes, lighting, and viewpoints._

If you want to train your models on a different split of the synthetic data, you can download a dataset generated by blending lesions on 26 3DBodyTex scans from [here](https://cvi2.uni.lu/3dbodytexdermsynth/).
To prepare the synthetic dataset for training, sample the `images` and `targets` from the path where you saved this dataset and then organise them into `train/val` splits.
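A minimal sketch of such a split (the paths and the 80/20 ratio are assumptions; `scripts/prep_data.py` below is the supported route):

```bash
# Hypothetical location where the downloaded synthetic dataset was saved
SRC=data/3dbodytex-dermsynth
DST=data/synth
mkdir -p "$DST"/{train,val}/{images,targets}

# Hold out every 5th sample (~20%) for validation;
# assumes images/ and targets/ share filenames.
i=0
for f in "$SRC"/images/*; do
  name=$(basename "$f")
  if [ $((i % 5)) -eq 0 ]; then split=val; else split=train; fi
  cp "$SRC/images/$name"  "$DST/$split/images/"
  cp "$SRC/targets/$name" "$DST/$split/targets/"
  i=$((i + 1))
done
```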
**NOTE**: To download the synthetic 3DBodyTex.DermSynth dataset referred to in the links above, you need to request access by following the instructions at this [link](https://cvi2.uni.lu/3dbodytexdermsynth/).
Alternatively, you can use the provided script `scripts/prep_data.py` to create it.
Even better, you can generate your own dataset by following the instructions [here](./README.md#generating-synthetic-dataset).
</details>
<a name='gen'></a>

### Generating Synthetic Dataset

> _A few examples of annotated data synthesized using DermSynth3D. The rows from top to bottom show respectively: the rendered images with blended skin conditions, bounding boxes around the lesions, GT semantic segmentation masks, grouped anatomical labels, and the monocular depth maps produced by the renderer._

<!-- ```yaml
bodytex_dir: './data/3dbodytex-1.1-highres/'          # Path to 3DBodyTex meshes
mesh_name: '006-f-run'                                # Name of the mesh to blend
fitz_dir: './data/fitzpatrick17k/data/finalfitz17k/'  # Path to FitzPatrick17k lesions
annot_dir: './data/annotations/'                      # Path to annotated Fitz17k lesions with masks
tex_dir: './data/lesions/'                            # Path to save the new texture maps
``` -->
Now, to *generate* the synthetic data with the default parameters, simply run the following command to generate 2000 views for a specified mesh:
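A plausible sketch of that command, assuming `scripts/gen_data.py` reads its parameters (mesh name, number of views) from `configs/blend.yaml`:

```bash
# Generate renderings for the mesh configured in configs/blend.yaml
python scripts/gen_data.py
```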
Feel free to play around with the other `random` parameters in `configs/blend.yaml` to control lighting, materials, and viewpoints.
<a name="post_proc_data"></a>

### Post-Process Renderings with Unity

We use Pytorch3D as our choice of differentiable renderer to generate synthetic data.
However, Pytorch3D is not a physically based renderer (PBR), and hence the renderings may not look photorealistic.
To achieve photorealistic renderings, we use Unity to post-process the renderings obtained from Pytorch3D.
<details>
<summary>
Follow the detailed instructions outlined [here](./docs/unity.md) to create photorealistic renderings using Unity.
<a name='train_prep'></a>

## Preparing Dataset for Experiments

<!--

>_Generated synthetic images of multiple subjects across a range of skin tones in various skin conditions, background scene, lighting, and viewpoints._ -->
You can look at `scripts/prep_data.py` for more details.
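A hedged sketch of its use (the script name comes from above; any behaviour beyond that is an assumption):

```bash
# Organise the synthetic data into the splits expected by the experiments
python scripts/prep_data.py
```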

## Cite
If you find this work useful or use any part of the code in this repo, please cite our paper:
```bibtex
@misc{sinha2023dermsynth3d,
  title={DermSynth3D: Synthesis of in-the-wild Annotated Dermatology Images},
  author={Ashish Sinha and Jeremy Kawahara and Arezou Pakzad and Kumar Abhishek and Matthieu Ruthven and Enjie Ghorbel and Anis Kacem and Djamila Aouada and Ghassan Hamarneh},
  year={2023},
}
```