Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
RANZCR CLiP - Catheter and Line Position Challenge
|
21st Place Solution : So close Yet So Far
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/nyleve" target="_blank">@nyleve</a> for the kind words</p>
<p>Thanks <a href="https://www.kaggle.com/sanchitvj" target="_blank">@sanchitvj</a> , I am glad you liked our approach</p>
<p>Thanks for all your contributions in this competition. I hope we meet again.</p>
<p>Hi all, first of all a big congratulations to the winners, and a big thank you to Kaggle for organizing this competition. The competition was full of ups and downs for us; we toiled hard, sometimes we thought we could be there at the top, and other times we were all discouraged because of the CV-LB gap. I would like to thank <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> and <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> for their starting baselines, there was so much new to learn from them, and also <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for sharing his ideas from time to time, almost all of which I tried.</p>
<p>Throughout this competition we tried a lot of ideas, both ambitious and simple, and all four of us were training models till the last day. Below are the things that we tried but that didn't work:</p>
<ul>
<li>Training 2nd Stage with self attention</li>
<li>Training 2nd Stage with Transformer attention from teacher</li>
<li>Training Multi-head model with Spatial attention at every head and self attention before the head and output of the last layer </li>
<li>Training 3 stages Multi-Head model </li>
<li>Training 2nd Stage with all 30k images with annotations generated by UNET</li>
<li>Training 3rd stage with heavy augs by <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a></li>
<li>TTA</li>
<li>Cropping black area preprocess<br>
and many more</li>
</ul>
<p>None of the above techniques provided a significant boost on the LB even though our CV improved, so in the end we decided to stick to simple models.</p>
<h1>Our Strategy</h1>
<p>We used multi-head models with spatial attention, fine-tuned on <em>pseudo-labelled NIH data, Kaggle SIIM competition data and test data</em>, and also a four-stage resnet200d in our ensemble.</p>
<h1>Models</h1>
<ul>
<li><p>Backbone</p>
<ul>
<li>Resnet200d</li>
<li>Effnet B5</li>
<li>Seresnet152d</li></ul></li>
<li><p>Image Size</p>
<ul>
<li>640</li>
<li>768</li></ul></li>
<li><p>Head</p>
<ul>
<li>Multi-head</li>
<li>Multi-stage</li></ul></li>
<li><p>Pseudo</p>
<ul>
<li>NIH Data</li>
<li>SIIM Data</li>
<li>Test data (3500 images)</li></ul></li>
</ul>
<p>CV strategy: we used the same folds as given by <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a>. We found it to be very stable and in sync with the LB until we introduced a leakage.</p>
<h1>Ensembling</h1>
<p>We used an ensemble of 8 models for our final submission, using Optuna and OOF predictions to optimize the weight given to each model in the ensemble.</p>
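<p>For readers who want to see what this looks like in practice, below is a minimal sketch (not the team's actual code) of tuning ensemble weights with Optuna on out-of-fold predictions; <code>oof_preds</code> and <code>y_true</code> are assumed placeholders for the per-model OOF prediction arrays and the label matrix.</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import roc_auc_score

# oof_preds: list of arrays, one per model, each of shape (n_samples, 11)
# y_true: array of shape (n_samples, 11) -- both assumed to exist already

def objective(trial):
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_preds))])
    w = w / w.sum()                                   # normalise weights to sum to 1
    blend = sum(wi * p for wi, p in zip(w, oof_preds))
    # competition metric: mean column-wise AUC over the 11 labels
    return np.mean([roc_auc_score(y_true[:, c], blend[:, c]) for c in range(y_true.shape[1])])

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
best_w = np.array([study.best_params[f"w{i}"] for i in range(len(oof_preds))])
best_w /= best_w.sum()
</code></pre>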
<h1>Few things that we missed</h1>
<ul>
<li>Training at higher resolution, as it would have brought diversity to our ensemble and also a better LB score, but we had limited hardware resources</li>
<li>Training the four-stage pipeline with our own models instead of using public pretrained models; relying on the public models might have caused a big shake-up for us, but we were lucky that blending large models kept us from shaking</li>
</ul>
<p>At the end I would like to thank all my teammates <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> , <a href="https://www.kaggle.com/adg1822" target="_blank">@adg1822</a> and <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a><br>
This was my first team with all Indian members and I enjoyed it a lot; besides learning and getting upset together, we have developed a special bond.</p>
<p>Thanks for reading</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
22nd place solution on a GitHub
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! <br>
I hope too. Meanwhile, I maintain a <a href="https://github.com/osai-ai/dokai" target="_blank">collection of Docker images</a> for ML/DL and video processing. One of the images was used in the solution ;) </p>
<p>Thank you!</p>
<p>Thanks, Khanh!</p>
<p>Thanks, Sinan!</p>
<p>Maybe there are leaks in my CV. But I didn't use external data or model weights in the selected submissions. I used stratified folds without patient id overlap in the ensemble with the highest CV score (Blend 16).<br>
Maybe it's just random ¯\_(ツ)_/¯</p>
<p>Sorry, I didn't answer the question. "Private LB rank" section is an order of my submissions sorted by private LB score. You can find your order on "My Submissions" page using "Sort by - Private Score".</p>
<p>Hello, thanks for the competition!</p>
<p>Link to solution <a href="https://github.com/lRomul/ranzcr-clip" target="_blank">https://github.com/lRomul/ranzcr-clip</a></p>
<p>Key points: </p>
<ul>
<li>EfficientNet</li>
<li>1024x1024 image resolution</li>
<li>Soft pseudo labels</li>
<li>Some MLOps for training and making a submission</li>
</ul>
<p>The progress of the solution during the competition can be seen in the laboratory journal (<a href="https://docs.google.com/spreadsheets/d/112wrfuQjNXEFyqQLVhu79Vf0uOabnZ1MaayEts2Gvto/edit?usp=sharing" target="_blank">link</a>).<br>
It describes all the single models and ensembles and shows CV, Public/Private LB scores.</p>
<p>Thanks, Usha! </p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
23rd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks.</p>
<p>I tried 768x768 with small models, but not 1024x1024.</p>
<p>I don't have time and computation resources for large models such as ResNet200d with 1024x1024 😂</p>
<p>I see. Thanks.</p>
<p>I think there is no difference between them in terms of the model architecture. But the pre-trained weights for them that we can use via <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> were trained in different ways.</p>
<p>Yes. I used the whole data for fine-tuning but did cross-validation.</p>
<p>I split the competition data in a Multi-Label Stratified Group K-Fold manner (K=5).</p>
<p>Thanks!</p>
<p>I didn't try self-attention but, IMO, calculating attention weights from neighborhood features (Spatial-Attention) may be better than from all the features (Self-Attention).</p>
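<p>For those unfamiliar with the head being discussed, here is a minimal sketch of one way a spatial-attention classification head over a CNN feature map can be written; the layer choices are assumptions for illustration, not my exact implementation.</p>
<pre><code>import torch
import torch.nn as nn

class SpatialAttentionHead(nn.Module):
    """Attention weights come from local (per-position) features, not pairwise interactions."""
    def __init__(self, in_channels, n_classes=11):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(in_channels, 1, kernel_size=1),  # one attention weight per spatial location
            nn.Sigmoid(),
        )
        self.fc = nn.Linear(in_channels, n_classes)

    def forward(self, feat):                # feat: (B, C, H, W) backbone feature map
        w = self.att(feat)                  # (B, 1, H, W) spatial weights
        pooled = (feat * w).sum(dim=(2, 3)) / (w.sum(dim=(2, 3)) + 1e-6)  # weighted average pooling
        return self.fc(pooled)              # (B, n_classes) logits
</code></pre>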
<p>Thanks! </p>
<p>I'm very happy that my concepts and baseline are helpful for participants.</p>
<p>Thanks!</p>
<blockquote>
<p>I achieved 0.972 on private lb just by using ResNet200d with Multi Spatial-Attention Head only</p>
</blockquote>
<p>That's great! </p>
<p>I don't understand "+ 1 fold of Img size 1024." You mean ensembling 6 predictions (<strong>5</strong> folds whose training and inference were done with img size 640 and <strong>1</strong> fold with img size 1024), right?</p>
<p>Thanks 😃 </p>
<p>Thanks! </p>
<p>Thanks. I'm glad that my discussions/baselines are helpful to you 🙂</p>
<p>Thanks.</p>
<p>I tried model 6 (with segmentation) in the last few days but there was not sufficient training time.</p>
<p>Thanks! </p>
<p>My GPU resource is a single TitanRTX(24GB) and I didn't use gradient accumulation/freezing BatchNorm layers.</p>
<p>The batch size I used for each model is as follows:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>image size</th>
<th>batch size</th>
</tr>
</thead>
<tbody>
<tr>
<td>ResNet200d</td>
<td>640x640</td>
<td>16</td>
</tr>
<tr>
<td>EfficientNet-B5</td>
<td>640x640</td>
<td>16</td>
</tr>
<tr>
<td>SE-ResNet152d</td>
<td>640x640</td>
<td>16</td>
</tr>
<tr>
<td>ResNeSt200e</td>
<td>640x640</td>
<td>15</td>
</tr>
</tbody>
</table>
<p>Sorry, I remember that I was trying to keep their batch sizes as similar as possible. You can increase the batch size of ResNet200d and SE-ResNet152d on a TitanRTX.</p>
<p>First of all, congrats to all the teams that got a medal and the participants who finished this competition, and thanks to the Kaggle team and the competition host.<br>
I learned many things especially teacher-student learning through this competition.</p>
<p>I'd like to say special thanks to <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> . I was about to give up in the middle of competition, but reading <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/215910" target="_blank">his interesting approach</a> made me resume.</p>
<p>My approach is not so special, so I just show a brief summary here.</p>
<h3>Summary</h3>
<p>Final submission (Public: 0.970, Private: 0.972) is an average of the following 6 models.<br>
I used image size of 640x640 for all models.</p>
<ul>
<li>model1: ResNeSt200 (Public: 0.965, Private: 0.966)<ul>
<li>pretrained model: ImageNet</li>
<li>Classification Head: Single MLP</li>
<li>training: fine-tuning on competition data<br>
<br></li></ul></li>
<li>model2: ResNet200d (Public: 0.968, Private: 0.971)<ul>
<li>pretrained model: <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a></li>
<li>Classification Head: Multi Spatial-Attention Head</li>
<li>training: fine-tuning on competition data<br>
<br></li></ul></li>
<li>model3: EfficientNetB5-NoisyStudent (Public: 0.966, Private: 0.969)<ul>
<li>pretrained model: <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a></li>
<li>Classification Head: Multi Spatial-Attention Head</li>
<li>training: fine-tuning on competition data<br>
<br></li></ul></li>
<li>model4: SE-ResNet152d (Public: 0.967, Private: 0.968)<ul>
<li>pretrained model: <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a></li>
<li>Classification Head: Multi Spatial-Attention Head</li>
<li>training: fine-tuning on competition data<br>
<br></li></ul></li>
<li>model5: ResNeSt200e (Public: 0.966, Private: 0.967)<ul>
<li>pretrained model: trained model on <a href="https://www.kaggle.com/nih-chest-xrays/data" target="_blank">NIH Chest X-rays</a> by Teacher-Student Training </li>
<li>Classification Head: Multi Spatial-Attention Head</li>
<li>training: fine-tuning on competition data<br>
<br></li></ul></li>
<li>model6: ResNet200d (Public: 0.966, Private: 0.970)<ul>
<li>pretrained model: <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a></li>
<li>Classification Head: Multi-Head Attention with <strong>Segmentation</strong> Branch</li>
<li>training:<ul>
<li>stage1: trained on only annotated data</li>
<li>stage2: trained on only <strong>non</strong>-annotated data</li></ul></li></ul></li>
</ul>
<p>I really wanted to do teacher-student training on <a href="https://www.kaggle.com/nih-chest-xrays/data" target="_blank">NIH Chest X-rays</a> for all the models. But I used <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a> due to lack of time and computing resources.</p>
<p>As a matter of fact, you can achieve Private 0.972 (silver?) by averaging models 2, 3 and 4 (<a href="https://www.kaggle.com/ttahara/ranzcr-ensemble-fine-tuned-multi-head-models" target="_blank">submission notebook</a>), which is a combination of <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/207230" target="_blank">my multi-head approach</a> and <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">@ammarali32 's starting points</a>.<br>
I wanted to exceed this score with a single model, but could not 😓</p>
<p>Thanks!</p>
<p>I think your contribution is also great. Thank you so much again.</p>
<p>I trained EfficientNet-B5 with the same pipeline as the other models. The only settings that differed among the models were batch size and learning rate, because of model size.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
36th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would really like to thank my teammates <a href="https://www.kaggle.com/yannmajewski" target="_blank">@yannmajewski</a> and <a href="https://www.kaggle.com/mo5mami" target="_blank">@mo5mami</a> for their hard work and commitment . And I would like to thank all the competitors that shared their codes, ideas, discussions.</p>
<p>Here is the overview of our pipeline:</p>
<p><img src="https://i.imgur.com/2gE74RX.png" alt=""></p>
<h3>Summary</h3>
<p>Final submission (Public: 0.969, Private: 0.972)<br>
We used different image sizes for our models: (512x512), (640x640), (736x736).</p>
<ul>
<li>M1: Multi resnet200d (Public: 0.967, Private: 0.971) :<ul>
<li>details will be provided later by <a href="https://www.kaggle.com/yannmajewski" target="_blank">@yannmajewski</a><br>
<br></li></ul></li>
<li>M2: EcaresNet269d :<ul>
<li>details will be provided later by <a href="https://www.kaggle.com/yannmajewski" target="_blank">@yannmajewski</a><br>
<br></li></ul></li>
<li>M3: <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> 4 stages Models <br>
<br></li>
<li>M4: <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> <a href="https://www.kaggle.com/ttahara/ranzcr-multi-head-model-inference" target="_blank">Multi Head Model</a> <br>
<br></li>
<li>M5: <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> <a href="https://www.kaggle.com/underwearfitting/resnet200d-public-benchmark-2xtta-lb0-965" target="_blank">Resnet200d 2xTTA</a> <br>
<br></li>
<li>M6: TF-EfficientNet B7 (Public: 0.963, Private: 0.964):<ul>
<li>pretrained model: ImageNet</li>
<li>train with different Losses</li>
<li>split: 10 folds with <a href="https://www.kaggle.com/underwearfitting/how-to-properly-split-folds" target="_blank">@sin's split method</a>, but we just used 4 folds for EfficientNet_final_sub</li></ul></li>
</ul>
<h3>Other Models</h3>
<ul>
<li>3-stage ResNet200d (Public: 0.962, Private: 0.966) </li>
</ul>
<h3>Post Processing</h3>
<ul>
<li>Thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for the <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/211194" target="_blank">magic trick</a>.</li>
</ul>
<h3>Submission Selections</h3>
<p>Our plan was to choose one submission with weights set according to LB scores, and another based on CV scores. Both scored 0.972 with some differences (the submission based on LB gave us 36th place).</p>
<p>To Be Continued …</p>
<p>Thanks a lot <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> for your contributions in almost all Kaggle competitions,<br>
and congrats on the second place!</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
3rd Place Solution [Preferred CLiP]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Congratulations to all the winners and thanks to the organizers for making this competition possible!</strong></p>
<p>Here is the solution of our team(Preferred CLiP: <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>, <a href="https://www.kaggle.com/la4laaa" target="_blank">@la4laaa</a>, <a href="https://www.kaggle.com/yhirano" target="_blank">@yhirano</a>, <a href="https://www.kaggle.com/suga93" target="_blank">@suga93</a> )</p>
<h1>Short Summary (TL; DR)</h1>
<p>Our strategy is based on the 3-stage training proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and initially implemented by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> in a public notebook. We also trained a segmenter to give pseudo annotations to unannotated or external data in stage 1 and 2. We did not use the segmenter in the final inference.</p>
<p>Our training pipeline in a nutshell:</p>
<ul>
<li>Train a segmenter that predicts by which type of catheter (or none) each pixel of the input image is occupied.</li>
<li>Make predictions on unannotated RANZCR data and external data (NIH Chest X-rays and MIMIC-CXR) with the segmenter. We excluded samples without any catheters from the external data.</li>
<li>Perform multi-stage classification.<ul>
<li>Stage 1: Superimpose ground-truth annotation (if exists) or the output of the segmenter (otherwise) on the original images and train a <strong>teacher</strong> model with them.</li>
<li>Stage 2: Train a <strong>student</strong> model using both a classification loss and a consistency loss calculated by comparing its features with those of the teacher model (see the sketch after this list).</li>
<li>Stage 3: Fine-tune the student model.</li></ul></li>
</ul>
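<p>A minimal sketch of the stage-2 objective described above, assuming both models return a (feature_map, logits) pair; the MSE consistency term and its weight are illustrative choices, not necessarily the team's exact formulation.</p>
<pre><code>import torch
import torch.nn.functional as F

def stage2_loss(student, teacher, plain_images, masked_images, targets, lambda_cons=1.0):
    with torch.no_grad():                               # the stage-1 teacher is frozen
        teacher_feat, _ = teacher(masked_images)        # teacher saw mask-superimposed images
    student_feat, student_logits = student(plain_images)
    cls_loss = F.binary_cross_entropy_with_logits(student_logits, targets)
    cons_loss = F.mse_loss(student_feat, teacher_feat)  # pull student features towards the teacher's
    return cls_loss + lambda_cons * cons_loss
</code></pre>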
<p><img src="https://media.discordapp.net/attachments/824969420731318335/824969475153199104/ranzcr-clip-overview.png?width=1674&height=936" alt="https://media.discordapp.net/attachments/824969420731318335/824969475153199104/ranzcr-clip-overview.png?width=1674&height=936"></p>
<p><strong>[CV / Public LB / Private LB]</strong><br>
Vanilla multi-stage training, TTA (resnet200d): 0.96606/0.97020/0.97328<br>
+Segmenter: 0.96742/0.97323/0.97389<br>
(+Ensemble of 4 different architectures: 0.97101/0.97430/0.97515)<br>
+Pseudo labels(NIH): 0.96918/0.97335/0.97455<br>
+Ensemble of 8 different training setups, wo TTA: 0.97215/0.97386/0.97611<br>
+Determining ensemble coefficients by logistic regression: <strong>0.97228/0.97434/0.97624</strong></p>
<h2>Data augmentation</h2>
<p>We used the data augmentation proposed by sin in the following notebook for both segmentation and classification. We used <strong>720</strong> px for image_size.<br>
<a href="https://www.kaggle.com/underwearfitting/resnet200d-public-benchmark-2xtta-lb0-965" target="_blank">https://www.kaggle.com/underwearfitting/resnet200d-public-benchmark-2xtta-lb0-965</a></p>
<h2>Segmentation</h2>
<p><strong>Problem setting</strong></p>
<ul>
<li>Pixel-wise multi-label classification</li>
<li>4 classes (ETT, NGT, CVC, Swan ganz)</li>
</ul>
<p>Architecture: DeepLabV3+ <br>
Encoder backbones:</p>
<ul>
<li>Resnet152</li>
<li>Regnety160</li>
</ul>
<p>Image size: 720x720<br>
Loss: BCEWithLogitsLoss + DiceLoss + RecallLoss</p>
<ul>
<li>Since we found that recall is important for this task, we adopted recall loss which is introduced in <a href="https://openreview.net/forum?id=SlprFTIQP3" target="_blank">Recall Loss for Imbalanced Image Classification and Semantic Segmentation</a>.</li>
</ul>
<p><strong>Training steps</strong></p>
<ol>
<li>Trained the first segmenter only with RANZCR annotated data. </li>
<li>Fine-tuned the segmenter using the NIH Chest X-rays as an external dataset in addition to RANZCR annotated data. Images of the NIH Chest X-rays were selected if any of the predicted values from the previous best classifier were over 0.95. Pseudo labels of them were generated from the previous best ensemble segmenter.</li>
</ol>
<p>Segmentation masks used in the classification task are generated by the ensemble segmenter, which is an ensemble of 10 pretrained segmenters (5 folds x 2 backbones). Although we found some noise in the ground-truth annotations, they are still more informative than our segmenter outputs. So we only used the predicted segmentation masks for unannotated and external data, and used the ground-truth annotations for annotated data.</p>
<h2>Classification</h2>
<p>For classification, we adopted almost the same strategy proposed in the following notebooks:</p>
<ul>
<li>3 stage training (by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>)<ul>
<li><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/207577" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/207577</a> </li></ul></li>
<li>use annotation without segmentation (by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>)<ul>
<li><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/205243" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/205243</a></li></ul></li>
</ul>
<p>Training procedure:</p>
<ul>
<li>batch size: 64</li>
<li>optimizer: RAdam</li>
<li>scheduler: CosineAnnealingWarmRestarts (stage1, 2) or ExponentialLR (stage 3)<ul>
<li>lr_decay_rate=0.8 for stage3</li></ul></li>
<li>epoch: 15 (stage1, 2) or 10 (stage 3)</li>
</ul>
<h2>Pseudo-labeling</h2>
<p>To leverage external data, we gave pseudo labels to both MIMIC-CXR and NIH Chest X-rays. We first trained preliminary models only with the official RANZCR dataset. Then, we filtered out images without any catheters using their prediction values, and iteratively trained both segmenters and classifiers by using the last models' outputs as additional pseudo labels. As for the NIH dataset, we also omitted duplicated images. To avoid leakage, we used different pseudo labels for each fold.</p>
<h2>Ensemble</h2>
<p>For classification, we finally trained 4 architectures: resnet200d, seresnet152d, resnest50d, and efficientnet-b5, with either pseudo-labeled MIMIC-CXR or NIH Chest X-rays as an additional dataset. We performed 5-fold CV for each training setup, yielding 40 models (=4x2x5) in total. Our final submission was an ensemble of these 40 models. We determined the ensemble coefficients of length 8 (corresponding to each training setup) by performing logistic regression on the local validation set.</p>
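<p>A rough sketch of how blend coefficients can be derived with logistic regression on a validation set, one label at a time; the post-processing (clipping and normalising the coefficients) is an assumption, not necessarily what the team did.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_blend_coefficients(setup_preds, y_val):
    """setup_preds: (n_samples, 8) predictions of the 8 training setups for one label."""
    lr = LogisticRegression(max_iter=1000)
    lr.fit(setup_preds, y_val)                 # one coefficient per setup (plus an intercept)
    coef = np.clip(lr.coef_.ravel(), 0, None)  # keep only non-negative contributions
    return coef / coef.sum()                   # normalise into usable ensemble weights

def blend(setup_preds, weights):
    return setup_preds @ weights
</code></pre>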
<p>The execution time in Kaggle notebook was about 7~8 hours. </p>
<p>We used 8 NVIDIA V100 GPUs with 32GB memory for all the training. A training of 15 epochs took roughly 4 hours. Whole 5 fold multi-stage training with resnet200d took roughly 15 GPU days.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
40th Place: A UNet only solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners and also to the hosts for such a successful competition which will hopefully benefit medical professionals in their work going forward</p>
<p>My solution uses only UNets in a 2-stage training process to leverage the annotations and help the backbone attend to the salient parts of the X-ray.</p>
<h1>CV Strategy</h1>
<p>I applied GroupKFold (k=5) separately on the annotated images and the unannotated images and concatenated the two sets.</p>
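<p>A minimal sketch of this CV setup with scikit-learn, grouping by patient so the same patient never appears in both train and validation; the column names follow the competition CSVs, but treat this as an illustration rather than the exact code used.</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

def assign_folds(df, k=5):
    df = df.copy()
    gkf = GroupKFold(n_splits=k)
    for fold, (_, val_idx) in enumerate(gkf.split(df, groups=df["PatientID"])):
        df.iloc[val_idx, df.columns.get_loc("fold")] = fold
    return df

train = pd.read_csv("train.csv")
train["fold"] = -1
annotated = set(pd.read_csv("train_annotations.csv")["StudyInstanceUID"])
is_annot = train["StudyInstanceUID"].isin(annotated)
# split annotated and unannotated images separately, then concatenate
train = pd.concat([assign_folds(train[is_annot]), assign_folds(train[~is_annot])])
</code></pre>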
<h1>Pre-processing</h1>
<p>I generated 5 channel masks for each annotated image by grouping the abnormal/borderline/normal classes and used OpenCV to convert the annotations into lines.</p>
<ul>
<li>Channel 1: ETT</li>
<li>Channel 2: NGT</li>
<li>Channel 3: CVC</li>
<li>Channel 4: Swan Ganz Catheter</li>
<li>Channel 5: <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/207183" target="_blank">Lung masks</a> from <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> </li>
</ul>
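<p>A sketch of the pre-processing described above: drawing the annotation polylines into per-catheter mask channels with OpenCV. The thickness and the channel mapping are illustrative assumptions; the lung-mask channel would come from the external dataset linked above.</p>
<pre><code>import ast
import cv2
import numpy as np

CHANNELS = {"ETT": 0, "NGT": 1, "CVC": 2, "Swan": 3}   # abnormal/borderline/normal grouped per catheter

def make_mask(annotations, height, width, thickness=15):
    """annotations: iterable of (label, points_str) rows for one image from train_annotations.csv."""
    mask = np.zeros((5, height, width), dtype=np.uint8)
    for label, points_str in annotations:
        pts = np.array(ast.literal_eval(points_str), dtype=np.int32).reshape(-1, 1, 2)
        channel = next(c for prefix, c in CHANNELS.items() if label.startswith(prefix))
        cv2.polylines(mask[channel], [pts], isClosed=False, color=255, thickness=thickness)
    # channel 4 (lung mask) is filled from the external lung-mask dataset
    return mask
</code></pre>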
<h1>Stage 1: UNet Teacher</h1>
<p>Stage 1 was a UNet that would predict the 5 channel mask, and also had a classification head from the bottleneck to predict the 11 classes. This was trained only on the annotated images:</p>
<ul>
<li>AdamW with 0.1 weight decay</li>
<li>Cosine Annealing LR with Tmax=60 epochs</li>
<li>Early stopping with patience=5</li>
<li>LR = 0.00025 with the batch size around 5-7 depending on the backbone</li>
<li>Heavy augmentation similar to what was seen in kernels/discussions</li>
<li>BCEWithLogitsLoss for segmentation</li>
<li>0.01 * BCEWithLogitsLoss for classification</li>
</ul>
<p>I didn’t generally submit these models, but for a ResNet-200D UNet, this was good for 0.953 on public/0.961 private</p>
<h1>Stage 2: UNet Student</h1>
<p>I then trained a second UNet (same architecture) but using the full class labels and the masks generated by the teacher UNet, with essentially the same hyperparameters but lighter augmentation. These pseudo masks were generated on the fly to better benefit from augmentations and were passed through a sigmoid activation before being fed to BCE. I didn't use a threshold to create binary masks, in an attempt to convey teacher uncertainty.</p>
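<p>A minimal sketch of this stage-2 step, assuming both UNets return a (mask_logits, class_logits) pair; the loss weighting is an assumption. The key detail is that BCE-with-logits happily accepts the teacher's soft (sigmoid) masks as targets.</p>
<pre><code>import torch
import torch.nn.functional as F

def student_step(student, teacher, images, labels, seg_weight=1.0):
    with torch.no_grad():
        soft_masks = torch.sigmoid(teacher(images)[0])          # soft pseudo masks, generated on the fly
    mask_logits, class_logits = student(images)
    seg_loss = F.binary_cross_entropy_with_logits(mask_logits, soft_masks)   # soft targets, no threshold
    cls_loss = F.binary_cross_entropy_with_logits(class_logits, labels)
    return cls_loss + seg_weight * seg_loss
</code></pre>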
<table>
<thead>
<tr>
<th>Backbone</th>
<th>Image Size</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>ResNet200D</td>
<td>512</td>
<td>0.9539</td>
<td>0.964</td>
<td>0.968</td>
</tr>
<tr>
<td>EfficientNet-B4</td>
<td>512</td>
<td>0.9453</td>
<td>0.962</td>
<td>0.965</td>
</tr>
<tr>
<td>ResNet200D</td>
<td>640</td>
<td>0.9568</td>
<td>0.966</td>
<td>0.970</td>
</tr>
<tr>
<td>EfficientNet-B4</td>
<td>640</td>
<td>0.9528</td>
<td>TBC</td>
<td>TBC</td>
</tr>
<tr>
<td>SEResNet152d</td>
<td>640</td>
<td>0.9556</td>
<td>0.966</td>
<td>0.971</td>
</tr>
</tbody>
</table>
<p>To use ResNet200D & SEResNet152d as UNet encoders, I forked <a href="https://www.kaggle.com/pavel92" target="_blank">@pavel92</a>'s excellent <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation models</a> and added these encoders from timm (thanks <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a>)</p>
<h1>Final submission</h1>
<p>I used 2x TTA (identity & hflip) and a hillclimb ensemble for my final submission of 0.968 public/ 0.972 private</p>
<h1>Stuff that didn't work</h1>
<ul>
<li>Pseudo label hardening/temperature - I found that a temperature (i.e. a number to divide the pseudo label logits by) of 1.0 worked fine. Tried 0.1 but didn't see any significant gain.</li>
<li>I had the idea of using distillation using the NIH dataset using my stage 2 model as the teacher, and then fine-tune that distilled model using the competition data. However, due to time and leakage, I couldn't quite get this to work. Looking forward to seeing how the top teams leveraged this</li>
<li>MixUp - not sure why that didn't work in this competition</li>
<li>Heavy backbones (B7, NFNets etc) - this was probably due to batch size limitations</li>
</ul>
<p>Thanks!! :)</p>
<p>Thank you! :)</p>
<p>Thank you! I'll have to check out the mouse brain segmentation models, sounds very interesting!</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You can just set the segmentation loss to 0 for the samples with no annotation. That works with the data here, but might lead to unstable training. A more stable variant is to track the loss of the last batch and put that as loss instead. </p>
<h2>Summary</h2>
<p>Our solution is based on a simple average of single-stage classification models with a surrogate loss for segmentation. Our solution only utilizes the training data provided by the hosts and does not resort to any other additional data. For modeling, we rely on a set of EfficientNet models with a separate Unet head for segmentation. We believe that the simplicity of the solution as well as the lack of external data usage makes this solution unique in the context of other top solutions.</p>
<h2>Pipeline</h2>
<p>We approached the competition collaboratively, with each team member working off one central pipeline. We used the following:<br>
Github: Versioning and code sharing<br>
Neptune.ai: logging and visualisation<br>
Kaggle API: dataset upload/download<br>
AWS: data storage</p>
<h2>Data setup & CV</h2>
<p>We only used training data provided by the competition hosts, and did not resort to any external data like additional Chest14 data. Our validation setup was based on a 5-fold random stratified cross validation. We observed good correlation between validation and public leaderboard while developing our solution. Also, our best selected submission was the one that achieved best local cross validation, best public leaderboard score as well as best private leaderboard score demonstrating the robustness of our solution.</p>
<h2>Models</h2>
<p>Each model in the blend is a single-stage model that consists of an EfficientNet backbone with one classification head and one Unet segmentation head. The Unet head operates as a form of regularization for the model, and only the output of the classification head is used for final predictions. That makes it possible to delete the segmentation part for inference and reduce the model to a simple EfficientNet.</p>
<p>We train models by combining the classification loss and segmentation loss and we weight the segmentation loss by 50. We only calculate the segmentation loss on samples where we also have annotations in train data and ignore samples without. For annotations, we interpolate a line between the annotation points using cv2.polylines with a certain thickness. The classification head employs Max Pooling.</p>
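<p>A sketch of the combined objective, with the segmentation term computed only for samples that have annotations (zeroed otherwise, as discussed in the reply above); the function interface is an assumption for illustration.</p>
<pre><code>import torch
import torch.nn.functional as F

def combined_loss(class_logits, mask_logits, labels, masks, has_annotation, seg_weight=50.0):
    """has_annotation: float tensor of shape (B,), 1.0 where a ground-truth mask exists."""
    cls_loss = F.binary_cross_entropy_with_logits(class_logits, labels)
    seg_per_sample = F.binary_cross_entropy_with_logits(
        mask_logits, masks, reduction="none"
    ).mean(dim=(1, 2, 3))                                        # one value per sample
    seg_loss = (seg_per_sample * has_annotation).sum() / has_annotation.sum().clamp(min=1)
    return cls_loss + seg_weight * seg_loss
</code></pre>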
<p>We fit models using Adam optimizer and cosine learning rate decay. For train augmentations, we utilize random horizontal flip, shift/scale/rotate, and random brightness. To better handle inverted images we also randomly invert images while training. We also attempt to keep the aspect ratios of images by applying LongestMaxSize and then randomly cropping parts of the image for training. For inference, we use a slightly larger image size (non-cropped) than while training, but do not apply any further TTA.</p>
<h2>Blend</h2>
<p>Our final submission includes 16 models trained on full training data. The models are based on EfficientNet B7 or B8 and trained on either 896 or 1024 squared image size. The blend was a simple average of probability outputs of each model. Our final submission constitutes our best local CV score, best public LB score, and best private LB score.</p>
<p>Yes, the weight has some impact. We tried several weights and used the ones giving the best cv score</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Apologize for the late sharing, as I was fighting in another competition.<br>
I have made figures but the attachment function is disabled, I will edit this post when the function is back.</p>
<p><strong>Overview</strong><br>
My training pipeline can be broken into three steps. In step1, the model was initially trained for some dozens of epochs with all the 11 targets, we can think of this as some type of pretraining or warmup. In step2, the trained model was split into three expert models, based on the three types of lines, namely, ETT, NGT and CVC. The “Swan Ganz Catheter Present” was merged into the CVC model for simplicity. In step3, each expert model performed independent pseudo-labeling on the NIH data and finetuned several epochs. For inference, each expert model was responsible for predicting the subset of labels it was trained on. </p>
<p><a><img src="https://i.ibb.co/vh5N9Hd/Picture1.png"></a></p>
<p><strong>Step 1 and Regularization</strong><br>
For better results, it’s important to regularize the training of the neural network and force it to focus on the lines and endpoints, especially if the input image size is not large enough. In my work, I made use of train_annotations.csv as segmentation masks. To make it more economical, rather than training a complete segmentation model, I chose to downsize the segmentation mask and let the second-to-last block of the CNN fit it as an auxiliary task. Since we know there is only one ETT for each image, it’s not necessary to draw the ETT in the segmentation mask; instead, for each image, I extracted the endpoint of the ETT from the annotation file as a single point with two values and added a regression task for guiding the ETT training. So the segmentation mask only included CVC, NGT, and SWAN, all of these lines were drawn in a single mask, and we could use a 1x1 conv layer to bridge the feature maps and the mask. Later in the competition, Dr. Konya shared his “5k trachea bifurcation annotation dataset” (<a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/221007" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/221007</a>). This could serve as an additional landmark or set of reference points, so I added an additional regression head forcing the network to predict the trachea bifurcation. In all these regularization tasks, the losses without annotations were ignored during training. The figure below shows my CNN training in general. The names layer1, layer2, … follow the naming convention of resnet implementations. The sub-figure in the upper left represents the step 1 training.</p>
<p><strong>Step 2: Expert Models</strong><br>
My initial idea for step 2 was just to fine-tune the step 1 model from different epochs, because different line types might converge at different epochs. Later, I discovered that the performance could be further improved by only using the targets of specific line types. Probably there was some interference in the optimization when all the targets were trained together. So I adjusted the targets as well as the regularization tasks and trained three expert models. The details can be found in the figure below. I crossed out the irrelevant parts of the expert models in the figure.</p>
<p><a><img src="https://i.ibb.co/vHBZMFJ/Picture2.png"></a><br>
<a><img src="https://i.ibb.co/rZKzMfr/Picture4.png"></a><br>
<a><img src="https://i.ibb.co/CBP40W9/Picture3.png"></a></p>
<p><strong>Step 3: Pseudo-labeling</strong><br>
Since we know all the private test data are in the NIH data, it’s natural to make use of them by pseudo-labeling. In step 3, I first excluded this competition's training data from the NIH data with imagehash, then I generated pseudo labels from each of the expert models on the leftover NIH data, which should contain all the test data. Since it wouldn’t be of much use to add data where the lines of interest are absent, I used the summation of the relevant predictions for filtering. For example, I summed the three CVC predictions and compared the sum with a threshold to only accept data where a CVC existed. For CVC there were around 25,000 extra images filtered for pseudo-labeling, and interestingly, there were only around 2,500 images left for ETT or NGT, which implies a much smaller ratio compared with the test:train ratio in the competition data.</p>
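<p>A rough sketch of the filtering described above; the paths, the hypothetical <code>cvc_expert_predict</code> helper, and the threshold are assumptions, but the two steps (perceptual-hash deduplication against the training set, then filtering by the summed CVC predictions) follow the text.</p>
<pre><code>import imagehash
import pandas as pd
from PIL import Image

def phash(path):
    return str(imagehash.phash(Image.open(path)))

train_hashes = {phash(p) for p in train_image_paths}              # competition training images
nih = pd.DataFrame({"path": nih_image_paths})
nih = nih[~nih["path"].map(phash).isin(train_hashes)]             # drop duplicates of train data

cvc_cols = ["CVC - Abnormal", "CVC - Borderline", "CVC - Normal"]
nih_preds = cvc_expert_predict(nih["path"])                       # hypothetical inference helper
keep = nih_preds[cvc_cols].sum(axis=1).values > 0.5               # threshold value is illustrative
pseudo_pool = nih[keep]                                           # images likely to contain a CVC
</code></pre>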
<p><strong>Models and Results</strong><br>
I created a 6-fold split and managed to complete training models on 4 of the folds. The four models were three resnet200d and one efficientnet-b7 with image input size 672x672. The results are CV: 0.9712, public LB: 0.9682, private LB: 0.9756. My public results were much lower than expected. It was a little frustrating during the competition as I saw many participants get a better score earlier and seemingly more easily. I checked my submission kernel many times and could not find any problem. I’m glad that I didn’t give up during this journey.</p>
<p>I have no idea to be honest. For CV split I simply used sklearn's kfold on patient ids.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
606th place: good CV & ensembling don't make up for mediocre individual models
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Congratulations on placing high up in the silver medals!</p>
<p>I considered whether it's worth it to write this up given my finishing position and how amazing the pipelines of some of the winning solutions are, but thought that I learnt some interesting stuff.</p>
<h1>TL;DR</h1>
<p><strong>Diverse mediocre individual models</strong> + <strong>stratified group-5-fold CV</strong> + <strong>ensembling based on CV</strong> lead to decent results that are not good enough for a medal. My ensembling was a regularized form of arithmetic, logit-scale, rank and power averages (more details below). I trained only on the main labels, with no multiple training stages, no pseudo-labelling, and no use of the extra annotations. I added two public notebooks to my final submissions, because I just could not match their public LB. Bottom line: rigorous CV and good ensembling only help so much if your basic models aren't good enough.</p>
<h1>Interesting learnings/missed opportunities</h1>
<ul>
<li><strong>Biggest missed opportunities</strong>: Self-distillation, using soft-labelled external data and using the extra annotations. I realized these were very serious options, but chose to focus on other things.</li>
<li><strong>Rigorous CV + ensembling helps</strong>: One of the things that worked well for me and that was to be expected.<ul>
<li>Despite my mediocre individual models this got me to within 0.001 of the bronze medal ranks, so that's not as bad as the rank alone sounds.</li>
<li>There were no unpleasant surprises here: What worked in my stratified group-5-fold CV also worked on the public LB and the private LB.</li>
<li>A lot of the basic ensembling techniques I researched and <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/211221" target="_blank">mentioned</a> seemed to work well, as one would expect based on previous competitions: power averaging, rank averaging, simple arithmetic averages (on probability and logit scale) etc. (see below for more).</li></ul></li>
<li><strong>Image size</strong>: With this task, large(-ish) image sizes seem to help a lot, in the end I used 380 x 380 to 750 x 750 for various models.</li>
<li><strong>GPU memory/BatchNorm freezing</strong>: I used a GTX 1080 Ti and Kaggle notebooks, so large models like ResNet-200D were a problem. I used small batch sizes (4) + gradient accumulation (to go to 32) + mixed precision training + freezing BatchNorm layers as a workaround (see the sketch after this list), which worked okay but resulted in slightly lower performance than others got <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/224085" target="_blank">with better hardware</a>. Still, I could only train the larger models at home (perhaps TPU might have worked?), because it usually took about 1 hour per epoch to train e.g. ResNet-200D.</li>
<li><strong>Single channel models</strong> (incl. without ImageNet pre-training):<ul>
<li>I had assumed that one good approach would be to train single channel networks (e.g. xse_resnext18 with Mish activation, DenseNets etc.) from scratch with only the competition data; DenseNet in particular had been heavily advocated by <a href="https://uwspace.uwaterloo.ca/bitstream/handle/10012/16290/Riasatian_Abtin.pdf?sequence=3&isAllowed=y" target="_blank">one thesis</a> that I read. I did not get good results with this (probably my fault).</li>
<li>I also failed to get anywhere with single channel Convolution - BatchNorm - ReLU (CBR) networks (see <a href="https://ai.googleblog.com/2019/12/understanding-transfer-learning-for.html" target="_blank">this Google AI blog</a> on this, it is explored in much more detail in <a href="https://maithraraghu.com/assets/files/thesis_final.pdf" target="_blank">this thesis</a>) trained from scratch.</li>
<li>Perhaps I needed to pre-train such models on large external datasets (either grayscale ImageNet or one of the larger chest X-ray datasets)?</li>
<li>By the way, why is there not a <strong>repository of grayscale ImageNet models</strong>?? It seems obvious, and there are <a href="https://openaccess.thecvf.com/content_eccv_2018_workshops/w33/html/Xie_Pre-training_on_Grayscale_ImageNet_Improves_Medical_Image_Classification_ECCVW_2018_paper.html" target="_blank">papers supporting</a> the idea that pre-training on grayscale ImageNet should help for medical image classification (besides making models smaller).</li>
<li><strong>You need time</strong>, especially if you don't have an existing good pipeline for X-ray images, yet. <ul>
<li>Certainly, in terms of competition standings entering so late was not helpful for me. However, since the main purpose of Kaggle for me is to learn, that's alright since I learnt a lot.</li>
<li>I concentrated on other competitions and only seriously started with 3 weeks to go. I guess that was not a lot of time, especially given my limited hardware.</li>
<li>I actually "entered" on day 1, when some work colleagues asked me about this new Kaggle competition. So, I tried how one of my sets of <code>fastai</code> <a href="https://www.kaggle.com/bjoernholzhauer/fastai-how-to-set-up-efficientnet-b4-0-945-lb" target="_blank">training</a> and <a href="https://www.kaggle.com/bjoernholzhauer/inference-for-trained-fastai-efficientnet-b4" target="_blank">inference</a> notebooks from the Cassava Leaf Disease competition would perform. Initially, that was the second best public notebook score and scored in the bronze area, but it very quickly dropped to the bottom of the LB. </li></ul></li>
<li><strong>Classification head</strong>: While with smaller models a more complex 2-3 layer classification head worked great (but usually required training the head for 2-3 epochs first with the rest of the model frozen), I ran into trouble with its BatchNorm layers, when using small batch sizes, so for the large models I did not use it (without it, just training the final layer with the rest frozen did not seem to be useful).</li>
<li><strong>Inference time</strong>: Mixed precision inference helped and seemed to not affect performance in a meaningful way. It was interesting to see what other teams did like only including models from some folds.</li>
</ul>
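<p>Below is a minimal sketch of the GPU-memory workaround mentioned in the list above (small batches, gradient accumulation to an effective batch of 32, mixed precision, frozen BatchNorm); <code>model</code>, <code>loader</code>, <code>criterion</code> and <code>optimizer</code> are assumed to exist, and the details are illustrative rather than my exact training code.</p>
<pre><code>import torch
import torch.nn as nn

ACCUM_STEPS = 8                                   # batch size 4 x 8 = effective batch size 32
scaler = torch.cuda.amp.GradScaler()

def freeze_batchnorm(model):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()                              # keep running statistics fixed
            for p in m.parameters():
                p.requires_grad_(False)

model.train()
freeze_batchnorm(model)                           # call after .train(), which resets BN to train mode
optimizer.zero_grad()
for step, (images, labels) in enumerate(loader):
    with torch.cuda.amp.autocast():               # mixed precision forward pass
        loss = criterion(model(images.cuda()), labels.cuda()) / ACCUM_STEPS
    scaler.scale(loss).backward()
    if (step + 1) % ACCUM_STEPS == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
</code></pre>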
<h1>Best versions of individual models (not all in final ensemble)</h1>
<p>As mentioned, my CV - public LB correlation seemed decent. This plot gives an idea and shows some of my main models (more details in the table below):<br>
<img src="https://i.imgur.com/X0i7TUt.jpg" alt="Plot of CV scores vs. public LB score"></p>
<table>
<thead>
<tr>
<th>#</th>
<th>Model</th>
<th>Pre-trained</th>
<th>Size</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>0*</td>
<td>EfficientNet-B4</td>
<td>ImageNet</td>
<td>380</td>
<td>?</td>
<td>0.945</td>
<td>0.953</td>
</tr>
<tr>
<td>1</td>
<td>ResNeXt-50-32x4d (minimal aug.)</td>
<td>ImageNet</td>
<td>600</td>
<td>0.9344</td>
<td>0.949</td>
<td>0.954</td>
</tr>
<tr>
<td>2</td>
<td>ResNeXt-50-32x4d (more aug.)</td>
<td>ImageNet</td>
<td>600</td>
<td>0.9347</td>
<td>0.950</td>
<td>0.955</td>
</tr>
<tr>
<td>3</td>
<td>EfficientNet-B2 noisy student</td>
<td>ImageNet+</td>
<td>260</td>
<td>0.9011</td>
<td>0.925</td>
<td>0.929</td>
</tr>
<tr>
<td>4</td>
<td>EfficientNet-B4 noisy student</td>
<td>ImageNet+</td>
<td>380</td>
<td>0.9207</td>
<td>0.943</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td>DenseNet-blur-121d</td>
<td>ImageNet</td>
<td>224</td>
<td>0.8717</td>
<td>?</td>
<td>?</td>
</tr>
<tr>
<td>6</td>
<td>DenseNet-blur-121d (with SWA)</td>
<td>ImageNet</td>
<td>224</td>
<td>0.8699</td>
<td>?</td>
<td>?</td>
</tr>
<tr>
<td>7</td>
<td>EfficientNet-B5</td>
<td>ImageNet</td>
<td>512</td>
<td>0.922</td>
<td>0.943**</td>
<td>0.948</td>
</tr>
<tr>
<td>8</td>
<td>Xception</td>
<td>ImageNet</td>
<td>750</td>
<td>0.936</td>
<td>0.957</td>
<td>0.958</td>
</tr>
<tr>
<td>9</td>
<td>Inception v3</td>
<td>ImageNet</td>
<td>704</td>
<td>0.936</td>
<td>0.954</td>
<td>0.956</td>
</tr>
<tr>
<td>10</td>
<td>ResNet-200D</td>
<td>ImageNet</td>
<td>640</td>
<td>0.947</td>
<td>0.958</td>
<td>0.963</td>
</tr>
<tr>
<td>11</td>
<td>SE-ResNet-152D</td>
<td>ImageNet</td>
<td>704</td>
<td>0.948</td>
<td>0.960</td>
<td>0.963</td>
</tr>
<tr>
<td>Y*</td>
<td>ResNet-200D</td>
<td>ImageNet</td>
<td>512</td>
<td>?</td>
<td>0.965</td>
<td>0.967</td>
</tr>
<tr>
<td>Z*</td>
<td>SE-ResNet-152D-320</td>
<td>ImageNet</td>
<td>640</td>
<td>?</td>
<td>0.962</td>
<td>0.964</td>
</tr>
<tr>
<td>Y+Z*</td>
<td>Blended 0.6:0.4</td>
<td>-</td>
<td>-</td>
<td>?</td>
<td>0.965</td>
<td>0.968</td>
</tr>
</tbody>
</table>
<p>The models marked with a * were not part of my cross-validation scheme. Y and Z are public notebooks that I added to one of my final submissions. For other models I got a 5-fold CV using the same stratified group-5-fold CV scheme that I could then use for ensembling. For several models TTA did not make much of a difference, for the EfficientNet-B5 marked with ** it made a difference from 0.940 to 0.943 on the public LB. </p>
<h1>Ensembling model(s)</h1>
<p>I boosted my CV by 0.008 over my best individual model and the LB by the exact same amount. <br>
The figure below shows the correlation of the public LB predictions of the different models. Unsurprisingly, my CV (and LB) scores did not improve when I used models with correlations >>0.95 together. That's how models 1 and 9 were eliminated from the final ensemble, while models 3, 5 and 6 were just too weak to add much.</p>
<p><img src="https://i.imgur.com/2MIvXcN.jpg" alt="Correlation of public LB predictions"></p>
<p>My main interesting idea was this:</p>
<ul>
<li>I saw that different models performed differently on different targets in CV. So, a simple unweighted average is likely non-ideal.</li>
<li>I did not want to simply optimize blending weights on the full set of OOF predictions in order to avoid overfitting. </li>
<li>I created an averaging model (very simple, there are just models * targets parameters that are soft-maxed to force them to sum to 1 for each target) that I fitted by fold. To avoid overfitting, I used weight decay:<ul>
<li>weight decay of about 1 was good for a batch size of 128 in terms of maximizing (mean ROC AUC - SD of ROC AUC) across folds</li>
<li>We penalize the non-softmaxed weighting coefficients ("betas") towards all being equally zero, i.e. we are basically penalizing towards a simple arithmetic average, and any deviation from that has to be robust across folds to be "accepted".</li></ul></li>
</ul>
<pre><code>import torch
import torch.nn as nn

class MyAverager(nn.Module):
    def __init__(self, n_models, n_targets):
        super(MyAverager, self).__init__()
        self.betas = nn.Parameter(torch.randn(size=(n_models, n_targets)))
        self.softmax = nn.Softmax(dim=0)

    def forward(self, inputs):
        # inputs indexed as (sample, model, target); self.betas as (model, target)
        wgts = self.softmax(self.betas)           # weights sum to 1 over models, per target
        x = torch.mul(inputs, wgts).sum(dim=1)    # weighted average over the model dimension
        return x
</code></pre>
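<p>A short usage sketch of the averager above, fitted on stacked OOF predictions with strong weight decay; the loss (BCE on the averaged probabilities), learning rate, and epoch count are illustrative assumptions.</p>
<pre><code>import torch

oof = torch.tensor(oof_stack, dtype=torch.float32)       # (n_samples, n_models, n_targets), assumed given
y = torch.tensor(oof_labels, dtype=torch.float32)        # (n_samples, n_targets)
averager = MyAverager(n_models=oof.shape[1], n_targets=oof.shape[2])
opt = torch.optim.Adam(averager.parameters(), lr=0.05, weight_decay=1.0)   # weight decay ~1, as above
loss_fn = torch.nn.BCELoss()                             # inputs are already probabilities

for epoch in range(100):
    for i in range(0, len(oof), 128):                    # batch size 128, as mentioned above
        opt.zero_grad()
        pred = averager(oof[i:i + 128]).clamp(1e-6, 1 - 1e-6)
        loss = loss_fn(pred, y[i:i + 128])
        loss.backward()
        opt.step()
</code></pre>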
<p>I did the same thing for logit transformed model predictions and other transformations.</p>
<p>The public notebooks were added with weights that I guesstimated based on their performance on the LB vs. the performance of my cross-validated models. I looked at how much weight my ensembling model gives to models with that kind of gap and picked those weights with some discounting (I should trust my CV more than a public notebook selected on the public LB).</p>
<h1>Selected submissions</h1>
<p>I managed to select my 2nd (no real difference to the best score) and 7th best scores on the private LB, so the submission selection based on my CV went well.</p>
<ul>
<li>Selection 1:<ul>
<li>First level models: 2,4,7,8,10,11,Y and Z</li>
<li>Second level (optimized using PyTorch averaging model; public NBs added with 1.5 x weight): weighted average of probabilities, weighted power (0.76) average, weighted rank averaging</li>
<li>Third level: Equally weighted rank averaging</li>
<li>Public LB 0.966, private LB 0.969</li></ul></li>
<li>Selection 2:<ul>
<li>First level models: 2,4,7,8,10,11,Y and Z</li>
<li>Second level stacking (optimized using PyTorch averaging model; public NBs added with 1 x weight): weighted average of probabilities, two types of weighted power average (power 0.57735 and 0.76), weighted rank averaging</li>
<li>Third level: Equally weighted rank averaging</li>
<li>Public LB 0.966, private LB 0.968</li></ul></li>
</ul>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
6th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>you can search the noisy student paper from Quoc Le and his team at Google Brain :)</p>
<p>No, not the epoch thing. First, I already have auxiliary heads, so my models can have up to 3 outputs from different blocks. And in the same iteration, I just call loss(output_weak, output_strong.detach())</p>
<p>sometimes 1, sometimes 2, sometimes 4 V100 and I used some cloud computing too. I have to prioritize company works so no really dedicated GPU</p>
<p>I tried but densenet can not beat big efficientnets :(</p>
<p>I will publish the kernel first then write the solution later today when I have time ;)</p>
<p>key factors: image size, additional classes, auxiliary head training, self-distillation, and noisy student.</p>
<p><a href="https://www.kaggle.com/moewie94/975-with-fixes" target="_blank">https://www.kaggle.com/moewie94/975-with-fixes</a></p>
<p>For this competition, I did not use segmentation mask information because I got quite a good local CV and public LB scores from early without it, and training with segmentation masks cost a lot of time and computation resources so I decided to leave it. I'm still wondering if I can have any boost if I dedicate time and efforts to train models with segmentation masks 😂</p>
<ol>
<li>Image size: To be honest, I doubt anyone could finish in the gold medal zone with anything less than 1024-by-1024. At first I did experiments with small image sizes (448-by-448), but after 1 month I focused on 1024-by-1024 or bigger image sizes to get high scores. My final models use a collection of sizes (1024, 1280, 1344, and 1408) to improve diversification. I also tried to fit as many models as possible into the ensemble and used big boys like EfficientNet-B6 and B7 😂</li>
<li>Additional classes: I did not train with a no-ETT class and cross-entropy loss, but I created 3 more classes: CVC present, NGT present, and ETT present. This improved the local CV score a little bit.</li>
<li>Auxiliary heads: To make training more efficient, I also added output layers to the intermediate blocks of the classification model and trained them with the same ground truth as the final output layer (some call this the deep supervision technique).</li>
<li>Self-distillation: I detach the output of deeper blocks and use them as extra ground truth to train the shallower blocks (weight 0.5); see the sketch after this list.</li>
<li>Noisy student: Every time I finished training k folds, I generated OOF prediction soft labels and used the soft labels to train the model in the next cycle. This boosted my local CV and leaderboard score quite a lot (0.004-0.005), and I think this is the reason I could finish in a high position :D</li>
</ol>
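<p>A minimal sketch of points 3 and 4 above, assuming the model returns logits from three depths; the exact head wiring is an assumption, but the pattern (same targets for every head, plus detached deeper outputs teaching shallower heads with weight 0.5) follows the description.</p>
<pre><code>import torch
import torch.nn.functional as F

def training_loss(model, images, targets):
    out_shallow, out_mid, out_final = model(images)      # auxiliary heads on intermediate blocks
    bce = F.binary_cross_entropy_with_logits
    # deep supervision: every head trained on the same ground truth
    loss = bce(out_final, targets) + bce(out_mid, targets) + bce(out_shallow, targets)
    # self-distillation: detached deeper outputs act as extra soft targets for shallower heads
    soft_final = torch.sigmoid(out_final).detach()
    soft_mid = torch.sigmoid(out_mid).detach()
    loss = loss + 0.5 * (bce(out_mid, soft_final) + bce(out_shallow, soft_mid))
    return loss
</code></pre>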
<p>P/S: Thank <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> for the quick submission tricks and <a href="https://www.kaggle.com/roydatascience" target="_blank">@roydatascience</a>, <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> for the multi head implementation. I benefitted a lot from you guys's ideas and tricks :)</p>
<p>Yes. For example: assume I first have oof_b4 as the B4 prediction. Then I train B5 with it and predict oof_b5. I take (oof_b4 + oof_b5) / 2 as the new csv and train B6, and so on.</p>
<p>I tried this from the early experiments, it improves my local validation score ~0.003 iirc (for b4 size 448)</p>
<p>First I train normally and do experiments normally. When I finish experimenting single-fold and start doing k-fold for submission, I call this cycle 0.<br>
Then I keep all configs and only change the csv file. I do it twice on my experiment arch (B4), so that can be cycles 1 and 2.<br>
Then when I move to a bigger arch or a bigger image size, I continuously do this. Each time I have done a k-fold, I blend the new OOF with the old OOF to get a new csv file.<br>
So it can be 7-8 cycles or 2-3 cycles depending on how you define a cycle :D</p>
<p>I use 1024-by-1024 b7, 1408-by-1408 b7, and 1344-by-1344 b6. So… yes 😂</p>
<p>Yes. I used the idea from that paper. Also, instead of training with KL-divergence, I just simply train with binary cross-entropy against the sigmoid-ed predictions from the last layer.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
71st Place Solution & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/faisalalsrheed" target="_blank">@faisalalsrheed</a>! I just made <a href="https://www.kaggle.com/kozodoi/71st-place-ensembling-pipeline" target="_blank">the notebook reproducing my submission</a> public, please have a look if you are interested. The power mean is implemented as <code>np.mean(np.power(preds, p) * w, axis = 1) ** (1/p)</code>, where <code>p</code> is a parameter that can be tuned and <code>w</code> are model weights.</p>
<p>I recommend using Neptune.ai regardless of the environment :) I personally do not use their JupyterLab extension for versioning notebooks and create a separate file for each experiment. But I rely on Neptune API to track CV performance and upload model weights and configurations.</p>
<h2>Summary</h2>
<p>Congrats to the winners and thanks to the organizers for hosting this competition! I was only able to join three weeks ago and did not have time to test all ideas (for instance, I did not use external data or annotations). Still, I am glad to reach top-5% and happy to share my solution. Although there are many Grandmasters among the winners, I hope my summary will provide some value for some of you :)</p>
<p>My solution is an ensemble of 5+2 CNN models; see the diagram below. All models are implemented in PyTorch and trained using Google Colab or a local machine with Quadro RTX 6000.<br>
<img src="https://i.postimg.cc/c4cPcXng/ranzcr.png" alt="ensemble"><br>
From my experience, the most important things in this competition were:</p>
<ul>
<li>large image size, which required gradient accumulation to have a decent batch size</li>
<li>careful augmentations, as heavy rotation/crop were harming CNN performance</li>
<li>ensembling with power mean to maximize AUC, which is a ranking indicator</li>
</ul>
<h2>Code</h2>
<ul>
<li><a href="https://www.kaggle.com/kozodoi/71st-place-ensembling-pipeline" target="_blank">Kaggle notebook</a> reproducing my submission and detailing the ensembling pipeline</li>
<li><a href="https://github.com/kozodoi/Kaggle_RANZCR_Challenge" target="_blank">GitHub repo</a> with the complete training codes and notebooks</li>
</ul>
<h2>Data</h2>
<ul>
<li><a href="https://www.kaggle.com/underwearfitting/how-to-properly-split-folds" target="_blank">Group stratified 5-fold CV</a> proposed by <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> </li>
<li>Augmenting training folds with up to 1% of pseudo-labeled public test data</li>
</ul>
<h2>Augmentations</h2>
<ul>
<li>Training augmentations:</li>
</ul>
<pre><code>- RandomResizedCrop(scale = (0.8, 1))
- ShiftScaleRotate(0.05, 0.05, 0.05)
- HorizontalFlip
- HueSaturationValue
- RandomBrightnessContrast
- Blur and Distortion
- Cutout
</code></pre>
<ul>
<li>TTA: averaging over 2 images with <code>HorizontalFlip</code></li>
<li>Image size: between 600 and 886</li>
</ul>
<h2>Base models</h2>
<p>The table below from Neptune.ai provides the main parameters of the 5 base models:<br>
<img src="https://i.postimg.cc/sgcZRfD2/Screen-2021-03-16-at-17-51-00.jpg" alt="models"></p>
<ul>
<li>ResNet models were initialized from the <a href="https://www.kaggle.com/underwearfitting/resnet200d-baseline-benchmark-public" target="_blank">public pretrained weights</a></li>
<li>Scheduler: 1-epoch warmup + cosine annealing afterwards</li>
</ul>
<h2>Ensembling</h2>
<p>Although all CNNs have a full 11-label head, I considered each label separately at the ensembling stage. For each of the labels, I compared two options to mix model predictions:</p>
<ul>
<li>simple blends (arithmetic, geometric, power or rank mean)</li>
<li>stacking with LightGBM (including predictions for all labels as features)</li>
</ul>
<p>Using OOF predictions, I found power mean with <code>p = 1/11</code> to perform best for 6 labels. The intuition behind such a low <code>p</code> is the desire to assign higher scores to images where at least one model suspects a positive class. For 5 other labels, stacking performed slightly better.</p>
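<p>A minimal sketch of the weighted power mean, mirroring the formula quoted earlier in this thread (<code>p</code> and the weights <code>w</code> were tuned on OOF predictions):</p>
<pre><code>import numpy as np

def power_mean(preds, w, p=1/11):
    """preds: [n_samples, n_models] probabilities; w: per-model weights."""
    # a small p pushes the blend up whenever at least one model suspects a positive class
    return np.mean(np.power(preds, p) * w, axis=1) ** (1 / p)

# example: blend three models' predictions for a single label
# blended = power_mean(np.column_stack([p1, p2, p3]), w=np.array([1.2, 0.9, 0.9]))
</code></pre>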
<p>Combining stacking and power mean allowed me to reach the local CV of <strong>0.9660</strong>. As usual, it was tempting to choose another sub with a better public LB score, but trusting CV paid off: selecting a higher-scoring sub on public LB would have kicked me out of the medal zone.</p>
<p>Finally, I blended my best CV ensemble (<code>w = 75%</code>) with the 2 public models (<code>w = 25%</code>):</p>
<ul>
<li><a href="https://www.kaggle.com/ammarali32/seresnet152d-cv9615" target="_blank">SeResNet152D</a> by <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a></li>
<li><a href="https://www.kaggle.com/ammarali32/resnet200d-public" target="_blank">ResNet200D</a> by <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a></li>
</ul>
<p>The final solution achieves <strong>0.97156</strong> on the private LB (71st place). Happy to answer any questions in the comments and see you in the next competitions! 😊</p>
<p>Thanks <a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a>!</p>
<p>Thank you <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> for all your contributions and congrats with the second place!</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
76th Place Simple Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle and the RANZCR team for hosting a such an interesting competition. This was a big step for me personally since this was my first image competition. I want to thank <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> for the amazing <a href="https://www.kaggle.com/underwearfitting/single-fold-training-of-resnet200d-lb0-965" target="_blank">notebook</a> which was a huge help for me as a beginner learning PyTorch. </p>
<h1>1. Preprocessing and Augmentations:</h1>
<p>My final solution was an ensemble of three <code>resnet200d</code> with slightly different preprocessing methods. Two of them were pretrained on the ChestX dataset and one of them was pretrained on imagenet. <br>
For augmentations, I found that two augmentations especially helped to reduce overfitting</p>
<ul>
<li>Large Cutout (coarse dropout). For one of my models, I used <code>max_h_size=int(image_size * 0.3), max_w_size=int(image_size * 0.3), num_holes=1, p=0.5</code>. For the other two, I used <code>max_h_size=int(image_size * 0.1), max_w_size=int(image_size * 0.1), num_holes=4, p=0.5</code></li>
<li>Motion Blur. It seemed that motion blur can mimic the fuzziness of some x-rays. For two of my models, I used <code>albumentations.MotionBlur(blur_limit=(7, 15), p=0.5)</code>.</li>
</ul>
<p>I found that applying CLAHE to the images as a preprocessing step seems to make the x-rays clearer. I did that for one of my models, and compared to the ones without it, it seemed to improve CV by about 0.0002.</p>
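<p>A minimal sketch (assumed, not the exact training pipeline) combining the pieces described above: always-on CLAHE as preprocessing, plus the Cutout and MotionBlur settings quoted in the list:</p>
<pre><code>import albumentations

image_size = 575  # the size used for two of the models

transforms_train = albumentations.Compose([
    albumentations.CLAHE(clip_limit=(1, 4), p=1.0),        # preprocessing: makes the x-ray clearer
    albumentations.MotionBlur(blur_limit=(7, 15), p=0.5),  # mimics the fuzziness of some x-rays
    albumentations.Cutout(                                  # CoarseDropout in newer albumentations
        max_h_size=int(image_size * 0.1),
        max_w_size=int(image_size * 0.1),
        num_holes=4,
        p=0.5,
    ),
    albumentations.Resize(image_size, image_size),
    albumentations.Normalize(),
])
</code></pre>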
<h1>2. Models and Cross-Validation:</h1>
<p>Very early on, I experimented with some TensorFlow models, and none of them were able to reach CV > 95.7 or LB > 95.9. Then I put the competition down for a while before coming back in the final week, deciding that PyTorch would be the way to go. </p>
<p>I used <code>StratifiedGroupKFold</code> for two of my models and <code>StratifiedKFold</code> for one. <code>StratifiedGroupKFold</code> got higher LB usually, but <code>StratifiedKFold</code> had more correlation between CV and LB. </p>
<ul>
<li><p>First resnet200d:</p>
<ul>
<li>This one was very similar to the one shown in this notebook with some different hyper-parameters. My cv was around 95.4~</li></ul></li>
<li><p>Second resnet200d:</p>
<ul>
<li>This one was pretrained on ChestX and with CLAHE applied to the images before feeding in the model. I used 5 fold <code>StratifiedGroupKFold</code> and the cv was 96.5~. The image size was 575x575</li></ul></li>
<li><p>Third resnet200d:</p>
<ul>
<li>This one was also pretrained on ChestX with a larger Cutout compared to the previous two: <code>max_h_size=int(image_size * 0.3), max_w_size=int(image_size * 0.3), num_holes=1, p=0.5</code>. I used 5 fold <code>StratifiedKFold</code> and the cv was 96.6~. The image size was 575x575.</li></ul></li>
</ul>
<h1>3. PostProcessing (kinda):</h1>
<p>For inference, I used a larger image size than in training (100 pixels larger); this lets the model utilize the details of the image. I learned this handy trick from <a href="https://www.kaggle.com/c/landmark-recognition-2020/discussion/188299" target="_blank">here</a>. It improved LB by about 0.0001~. I used a simple average of the three models. </p>
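<p>A minimal sketch of those inference-time pieces (variable names are illustrative): resize test images about 100 pixels larger than the training resolution, then take a simple average of the three models:</p>
<pre><code>import albumentations
import torch

train_size = 575
infer_size = train_size + 100  # inference at a larger resolution than training

transforms_test = albumentations.Compose([
    albumentations.Resize(infer_size, infer_size),
    albumentations.Normalize(),
])

@torch.no_grad()
def predict(models, batch):
    # simple average of the three resnet200d models' sigmoid outputs
    return torch.stack([m(batch).sigmoid() for m in models]).mean(dim=0)
</code></pre>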
|
RANZCR CLiP - Catheter and Line Position Challenge
|
7th Place Solution [RaddbotnaKama 200d]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code is now available at <a href="https://github.com/analokmaus/kaggle-ranzcr-clip-public" target="_blank">https://github.com/analokmaus/kaggle-ranzcr-clip-public</a> !<br>
Note that the repository is WIP, and there will be some updates.</p>
<p>First of all, I would like to express deep gratitude to organizers and all the teams for making this competition possible. Also, I would like to thank my teammates - <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> and <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> .</p>
<p>Code will be available at: <a href="https://github.com/analokmaus/kaggle-ranzcr-clip-public" target="_blank">https://github.com/analokmaus/kaggle-ranzcr-clip-public</a></p>
<p>Here is the overview of our pipeline: <br>
<img src="https://pbs.twimg.com/media/EwqG7gdVcAEOmHS?format=jpg&name=large" alt="pipeline"><br>
(n=7)</p>
<h2>Preprocessing</h2>
<p>Input image size had a good correlation with model performance. We used 768 px input for all models, considering the training / inference time. For data augmentation, we used some common transformations such as horizontal flip, affine transform, random brightness / gamma, optical distortion etc. </p>
<h2>Architectures</h2>
<p>We used a <strong>UNet-CNN</strong> architecture. The output of the UNet was a segmentation mask consisting of three channels - ETT, NGT, and CVC+SGC. The UNet was supposed to play the role of spatial supervision. Thus, compared to a normal CNN-only architecture, it showed more robust performance and a higher CV / LB correlation. </p>
<p><img src="https://pbs.twimg.com/media/EwqHtBDVgAIorDd?format=jpg&name=medium" alt="model"></p>
<p>As for the backbone, bigger models in general performed better. We used ResNet200d, EfficientNet-b7, and NFNet-F1. </p>
<p>Since there wasn’t a ResNet200d encoder for segmentation_models_pytorch, we implemented it ourselves. <a href="https://www.kaggle.com/analokamus/resnet200d-encoder-for-segmentation-models-pytorch?scriptVersionId=56975633" target="_blank">Implementation</a></p>
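<p>A minimal sketch of a UNet with an auxiliary classification head in segmentation_models_pytorch (a stock encoder name is used here for illustration; the actual models used the ResNet200D encoder linked above):</p>
<pre><code>import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet50",      # placeholder; swap in the custom resnet200d encoder
    encoder_weights="imagenet",
    in_channels=3,
    classes=3,                    # segmentation channels: ETT, NGT, CVC+SGC
    aux_params={"classes": 11, "pooling": "avg", "dropout": 0.2},  # 11-label classification head
)

# With aux_params set, the forward pass returns both outputs:
# masks, labels = model(images)
</code></pre>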
<h2>Training procedure</h2>
<p>For UNet-CNN model, we trained UNet on additional annotations provided by competition host. And then trained the whole network on all images. Since we were not able to build a stable NFNet UNet encoder, we trained NFNet in the same way as <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/207577" target="_blank">three stage training</a> by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>.</p>
<h2>External dataset</h2>
<p>Utilising external CXR datasets was another key to improving model performance. We pseudo-labelled the NIH and MIMIC CXR datasets using our best performing model, and sampled a subset with a balanced class distribution from each dataset in order to reduce data size. </p>
<p>We trained a new student model with pseudo labels on the external dataset. The student model was supposed to have a different feature extractor from the teacher model. We finetuned the student model on the original dataset and observed better performance than the teacher model (see <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/215910" target="_blank">this discussion</a> by <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> ).</p>
<p>We also tested conventional single-stage pseudo labelling; while we observed a slight uplift in performance, we gave it up because it converged quite slowly.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
Dual-Head Model with 4-stage Training, 2nd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, well done! I am grateful to my teammates <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> you all did a very good job and finally we finished in 2nd place.<br>
I would like to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for your Unet code at hubmap competition and the wonderful idea of combination of the label and the annotations. Also thanks to <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> 's 3-stage training implementation.<br>
And congratulations to VinBigData Group, their employees won 3 solo golds in this competition! <a href="https://www.kaggle.com/moewie94" target="_blank">@moewie94</a> <a href="https://www.kaggle.com/andy2709" target="_blank">@andy2709</a> <a href="https://www.kaggle.com/nguyenbadung" target="_blank">@nguyenbadung</a>. Looking forward to read your solutions</p>
<h2>Data</h2>
<ul>
<li>For training, we only used RANZCR's training set and the official annotation data. We tried other annotations like lung masks and key points, but none of them worked well in our setup.</li>
<li>We tried pseudo labeling with the ChestX dataset, but we were affected by severe leakage, so we gave up on pseudo labeling.</li>
<li>We used lighter augmentation based on sin's public notebook; cutmix didn’t improve CV or LB but introduced diversity into our final ensemble.</li>
<li>We failed to use image sizes larger than 1024; even the performance at 1024 was somewhat worse. Our best model is an <code>Efficientnetb7-unet</code> model trained at size 768. We will present it later. </li>
<li>Mask generation: we generated masks by plotting lines between the annotation points (see the sketch after this list). After training, we generated pseudo masks for the full training set by averaging the 5 folds' outputs. Since we use these pseudo masks (train set w/o annotations) only as a supervisor, we ignore the potential leakage here. You can refer to this notebook.</li>
</ul>
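<p>A minimal sketch of the mask generation described in the last point (column names follow train_annotations.csv; the line thickness is an illustrative choice, not our exact value):</p>
<pre><code>import ast
import cv2
import numpy as np

def make_mask(image_annotations, height, width, thickness=50):
    """Draw each annotated catheter as a thick polyline on a blank mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for _, row in image_annotations.iterrows():
        pts = np.array(ast.literal_eval(row["data"]), dtype=np.int32)  # [[x1, y1], [x2, y2], ...]
        cv2.polylines(mask, [pts], isClosed=False, color=255, thickness=thickness)
    return mask
</code></pre>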
<h2>Model</h2>
<p>Our model is a dual-head Unet model with both classification and segmentation outputs. Unlike other teams, we only used the masks as a supervisor, for two reasons: first, we wanted the model to learn the correct pattern via the segmentation loss; second, our data is grouped and collected from different hospitals and patients, which may lead to different imaging quality and mask output. Directly feeding these masks into the classification head may reduce the generalization performance of our model.<br>
Our final models used <code>Resnet200d</code>, <code>efficientnet-b5</code> and <code>efficientnet-b7</code> as backbones; the unet-decoder part was reduced in order to train with a decent VRAM usage.<br>
We also tried adding a consistency loss to our model, but this slowed the training procedure a lot.</p>
<p><a href="https://ibb.co/F0HYpCT"><img src="https://i.ibb.co/rcdp9Nr/image1.png" alt="image1"></a></p>
<h2>Training</h2>
<p>We borrowed stage1 and stage2 from the public notebook, only modifying a few training parameters for stage2.</p>
<h3>Mask Generation</h3>
<ul>
<li>stage3: trained <code>resnet200d-unet</code> at 512 with masks generated from the train annotations; if an image had no annotations, we calculated the classification loss only. Predicted the full train set pseudo mask v1</li>
<li>stage4: trained <code>resnet200d-unet</code> at 768 with the v1 masks scaled up to 768. Generated pseudo mask version v2.<br>
<img src="https://i.ibb.co/8MLZg37/image2.png" alt="image2"></li>
</ul>
<h3>Model training</h3>
<ul>
<li>stage3: trained with <code>efficientnet-b5-unet</code>, <code>efficient-b7-unet</code>, <code>resnet200d-unet</code> at 768 resolution</li>
<li>stage4[optional]: scaled up training for <code>resnet200d-unet</code> at 1024 resolution<br>
<a href="https://ibb.co/Z2PDZbs"><img src="https://i.ibb.co/7k5m90c/stage3-2.png" alt="stage3-2"></a></li>
</ul>
<p>Our best performing model is <code>efficientnet-b7-unet</code>, which achieved <code>0.9695</code> CV w/ hflip TTA at image size 768; <code>resnet-200d-unet</code> achieved <code>0.9685</code> CV and <code>0.972</code> Public LB w/ hflip TTA. A single <code>resnet-200d-unet</code> can achieve <code>0.974</code> on the Private LB and win a gold medal. After ensembling these models, the three-model ensemble achieved <code>0.972</code> CV, <code>0.97383</code> Public LB and <code>0.97599</code> Private LB.<br>
The six-model ensemble achieved <code>0.973</code> CV, <code>0.97412</code> Public LB and <code>0.97641</code> Private LB.</p>
<h2>Inference</h2>
<p>Our model design enables us to run inference with only the backbone and the classification head. We added one 1024 <code>resnet200d-unet</code> model, one 768 image size <code>efficientnet-b5-unet</code> model and an <code>efficientnet-b7-unet</code> model to the inference notebook, and it finished in about 8 hours and 50 minutes. We also found an image retrieval trick to reduce the inference time.</p>
<p>Roughly we do the things below:</p>
<ul>
<li>Pseudo label chestx dataset using existing models</li>
<li>Train a metric learning model with patient ID as the target</li>
<li>Use a KNN model to retrieve the top-1 similar image from chestx dataset and submit the pseudo label.<br>
<img src="https://i.ibb.co/r6WCnQw/image4.png" alt="image4"></li>
</ul>
<p>Using this method, we improved private LB to 0.97641</p>
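<p>A minimal sketch of that retrieval trick (variable names and the similarity threshold are assumptions, not our exact code):</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_pseudo_labels(test_emb, chestx_emb, chestx_pseudo, test_preds, thr=0.05):
    """Replace predictions for test images that closely match a pseudo-labelled chestx image."""
    knn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(chestx_emb)
    dist, idx = knn.kneighbors(test_emb)
    reuse = dist[:, 0] < thr                      # illustrative similarity threshold
    out = np.array(test_preds, dtype=float)
    out[reuse] = chestx_pseudo[idx[reuse, 0]]     # submit the retrieved pseudo label
    return out
</code></pre>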
<h2>Code</h2>
<p>We are organizing our code and will release our code on github later.</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
How to shake down from 28 to 80 ))
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot for the motivating comment)) and congrats</p>
<p>Welcome. Sorry for that fixed and congrats ))</p>
<p>Thanks and congrats for your high standing hope you will get gold at the end</p>
<p>Yesterday I was really torn between three submissions (HIGH LB, HIGH CV, ROBUST MODEL). I decided to choose the HIGH LB as my first submission. But what about my second choice? I was really confused by the decision to allow all external data, even NIH, which led me to the possibility that the private test set contains a dependent type of data with a different distribution of classes, remembering this discussion <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/216219" target="_blank">https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/216219</a>. In that case, the BEST CV would not help, so I went with the ROBUST MODEL choice, leading me to a bronze medal instead of a silver one. :((<br>
Well, it was a really interesting competition and I really learned a lot from all of you. I am really glad to hear that my work also helped some of you. <br>
For those who are interested in the generation process of the starting points <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a> , <a href="https://www.kaggle.com/projdev" target="_blank">@projdev</a> you can find the notebook here <a href="https://www.kaggle.com/ammarali32/starting-points-generation" target="_blank">https://www.kaggle.com/ammarali32/starting-points-generation</a> Credits to <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> implementation. <br>
UPD: Great thanks to my teammate and my girlfriend <a href="https://www.kaggle.com/walaaothman" target="_blank">@walaaothman</a> </p>
<p>Thanks a lot, I am really glad to hear that ))</p>
<p>Thanks)) that is really motivating. </p>
<p>большое спасибо за вашу поддержку. I hope so , congratulations )) </p>
<p>Thanks)) I really appreciate it. It is ok my happiness about winning first place in the MagNet competition "Driven Data" still some outcome from these three months. My fault was going on a lot of competitions during my Master's degree and a full-time job. You really did a great job and I really wanted to send you a team request but we had submissions over the limit. hope we will be in a competition together one day. Good Luck </p>
<p>Thanks a lot ))</p>
<p>Thanks. I really appreciate that and hope that competition will not be so far. </p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
In Gauss (God) we Trust! 26th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Transfer Learning</h1>
<p>As we all know, if we start from <code>imagenet</code> weights, the model may take quite a while to converge, even if we finetune it. The intuition is simple: <code>imagenet</code> was trained on many common objects in everyday life, and none of them closely resemble the image structures of X-rays, so the model may have a hard time detecting shapes and details in the X-rays. We could of course unfreeze all the layers and retrain them from scratch, using a State of the Art model's backbone, but due to limited hardware, we decided it was best to use what others have trained. After all, it is much easier to stand on the shoulders of giants like <a href="https://www.kaggle.com/ammarali32" target="_blank">ammarali</a>. Consequently, I conveniently used a set of <code>pretrained</code> weights trained specifically on this dataset as a starting point. The weights and ideas can be found <strong><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/215910" target="_blank">here</a></strong>.<br>
We used a few models and found that <code>resnet200d</code> had the best results on this set of training images. There is no deep reason why this model outperforms other State of the Art models, but using <code>gradcam</code> we can see how the model sees the images.</p>
<h2>Cross Validation Strategy</h2>
<p>Knowing how to choose a robust and leak-free cross validation strategy is extremely important. Data leakage can give you blind confidence in your model. We are also guilty of committing one, since we trained our models with the NIH pretrained weights without considering whether that pretraining data overlaps with our training and validation folds.</p>
<ul>
<li>Cross Validation Strategy: Multi-Label Stratified Group KFold(K=<strong>5</strong>) - using sin's method to split folds will ensure that there is no leakage. Quoting almost verbatim from <code>sklearn</code> :</li>
<li>Assuming that some data is Independent and Identically Distributed (i.i.d.) is making the assumption that all samples stem from the same generative process and that the generative process is assumed to have no memory of past generated samples.</li>
</ul>
<p>Therefore it is paramount to ensure that, among these 3255 <strong>unique</strong> patients, no patient's images appear in both the training and validation folds. That is to say, if patient John Doe has 100 X-ray images and, during our 5-fold split, 70 of them land in folds 1-4 while 30 land in fold 5, then training on folds 1-4 and validating on fold 5 creates potential leakage and the model will predict John Doe's images with unwarranted confidence. This is under the assumption that John Doe's data does not fulfill the i.i.d. process.</p>
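<p>A simplified sketch of the grouped split idea (group-only; the split we actually used was multi-label stratified as well, via sin's kernel):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

train = pd.read_csv("train.csv")
train["fold"] = -1
for fold, (_, val_idx) in enumerate(GroupKFold(n_splits=5).split(train, groups=train["PatientID"])):
    train.loc[val_idx, "fold"] = fold

# sanity check: every patient's images live in exactly one fold
assert (train.groupby("PatientID")["fold"].nunique() == 1).all()
</code></pre>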
<h2>Model Architectures, Training Parameters & Augmentations</h2>
<p>We built upon Tawara’s multi-head model for our best scoring models. In particular, we experimented with the activation functions and dropout rates. We found models with <code>Swish</code> activation in the <code>multi-head</code> component of the network to perform best in our experiments. Our best scoring single model is a multi-head model with a <code>resnet200d</code> backbone; in particular, one single fold of <code>resnet200d</code> gives a private score of 0.970. <br>
We started experimenting with <a href="https://www.kaggle.com/underwearfitting" target="_blank">sin's</a> <a href="https://www.kaggle.com/underwearfitting/single-fold-training-of-resnet200d-lb0-965" target="_blank">pipeline</a>, which is similar to qishen ha's pipeline from Melanoma, and used <a href="https://www.kaggle.com/ttahara" target="_blank">Tawara's</a> multi-head approach. We did not have time to experiment with the 3-4 stage training as we joined the competition late.</p>
<ul>
<li>model:<ul>
<li><strong>backbone</strong>: <code>ResNet200D</code> and <code>SeResNet152d</code></li>
<li><strong>classifier/multi-head:</strong> independent <strong>Spatial-Attention Module</strong> and MLP by Target Group(ETT(3), NGT(4), CVC(3), and Swan(1))</li>
<li><strong>NOTE: I use <a href="https://www.kaggle.com/ammarali32/startingpointschestx" target="_blank">the pre-trained model</a> shared by <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> .</strong> Thanks!</li></ul></li>
</ul>
<h2>Preprocessing</h2>
<p><a href="https://www.kaggle.com/reubenschmidt" target="_blank">Reuben Schmidt</a> mentioned in this <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/224146" target="_blank">post</a> that some images have black borders around them. I removed them during both training and inference. There was no significant increase in the LB score (even if there was, it was in the 3rd-4th decimal place), but I noticed my local CV increase, so I decided to remove them for all models. After all, if I keep this consistent in both training and inference, I reckon no surprise factor would pop up. </p>
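<p>A minimal sketch of that black-border removal (the intensity threshold is an illustrative choice):</p>
<pre><code>import numpy as np

def crop_black_borders(img, thr=10):
    """img: 2D grayscale array; keep only the rows/columns containing non-black pixels."""
    mask = img > thr
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    if len(rows) == 0 or len(cols) == 0:
        return img  # fully black image, leave it alone
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
</code></pre>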
<h2>Augmentation</h2>
<p>In particular, we made use of a different <code>Normalization</code> parameter which is better suited to the X-ray pretrained images. Thanks Tawara again! Heavy augmentations are used during <strong>training.</strong> During <strong>Test-Time Augmentation,</strong> we merely used a <code>HorizontalFlip</code> with 100% probability and only used <code>tta_steps=1</code>. </p>
<pre><code>augmentations_class: AlbumentationsAugmentation
augmentations_train:
AlbumentationsAugmentation:
- name: RandomResizedCrop
params:
height: 640
width: 640
scale: [0.9, 1.0]
p: 1.0
- name: HorizontalFlip
params:
p: 0.5
- name: ShiftScaleRotate
params:
shift_limit: 0.2
scale_limit: 0.2
rotate_limit: 20
border_mode: 0
value: 0
mask_value: 0
p: 0.5
- name: HueSaturationValue
params:
hue_shift_limit: 10
sat_shift_limit: 10
val_shift_limit: 10
p: 0.7
- name: RandomBrightnessContrast
params:
brightness_limit: [-0.2, 0.2]
contrast_limit: [-0.2, 0.2]
p: 0.7
- name: CLAHE
params:
clip_limit: [1,4]
p: 0.5
- name: JpegCompression
params:
p: 0.2
- name: IAAPiecewiseAffine
params:
p: 0.2
- name: IAASharpen
params:
p: 0.2
- name: Cutout
params:
# use int(image_size * 0.1)
max_h_size: 64
max_w_size: 64
num_holes: 5
p: 0.5
- name: Resize
params:
height: 640
width: 640
p: 1.0
- name: Normalize
params:
mean: [0.4887381077884414]
std: [0.23064819430546407]
p: 1.0
- name: ToTensorV2
params:
p: 1.0
augmentations_val:
AlbumentationsAugmentation:
- name: Resize
params:
height: 640
width: 640
p: 1.0
- name: Normalize
params:
mean: [0.4887381077884414]
std: [0.23064819430546407]
p: 1.0
- name: ToTensorV2
params:
p: 1.0
</code></pre>
<hr>
<h2>Batch Size and Tricks</h2>
<p>Due to hardware limitations, we could barely fit anything more than a <code>batch_size</code> of 8. We quote the well-known observation from <a href="https://arxiv.org/abs/1609.04836" target="_blank">this paper</a>:</p>
<blockquote>
<p>It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize […]<br>
large-batch methods tend to converge to sharp minimizers of the training and testing functions—and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation.</p>
</blockquote>
<p>The above suggests that a large batch size may <code>fit</code> the model too well: the model learns the dataset's features in fewer iterations and may memorize this particular dataset's features, leading to overfitting and poor generalization. However, too small a batch size makes convergence too slow; empirically, we took 32 or 64 as the ideal (effective) batch size in this competition.</p>
<p>We used both <code>torch.amp</code> and <code>gradient accumulation</code> to be able to fit larger effective batch sizes. We did not freeze the <code>batch_norm</code> layers, which still yielded great results. What we should have done is experiment more on how to freeze the batch norm layers properly, as I believe it may help. In the end, we used a batch size of 8 with 4 accumulation steps of <code>gradient accumulation</code> and trained a total of 20 epochs to get a local CV score of roughly 0.969.</p>
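<p>A minimal sketch of the AMP + gradient accumulation loop (loop variables such as <code>model</code>, <code>criterion</code>, <code>optimizer</code> and <code>train_loader</code> are assumed to exist; batch size 8 with 4 accumulation steps gives an effective batch size of 32):</p>
<pre><code>import torch

scaler = torch.cuda.amp.GradScaler()
iters_to_accumulate = 4

optimizer.zero_grad()
for step, (images, targets) in enumerate(train_loader):
    with torch.cuda.amp.autocast():
        loss = criterion(model(images.cuda()), targets.cuda()) / iters_to_accumulate
    scaler.scale(loss).backward()
    if (step + 1) % iters_to_accumulate == 0:
        scaler.step(optimizer)   # unscales gradients and steps the optimizer
        scaler.update()
        optimizer.zero_grad()
</code></pre>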
<h2>Optimizer, Scheduler and Loss</h2>
<p>Nothing too fancy here, although we really wanted to try out <code>Focal Loss</code> in this setting. The configuration can be seen below. Note that we incorporated <code>GradualWarmUpScheduler</code> along with <code>CosineAnnealingLR</code>.</p>
<pre><code>scheduler: CosineAnnealingLR
scheduler_params: # Note that in params we must put 1.e instead of 1e
CosineAnnealingLR:
T_max: 16
eta_min: 1.e-7
last_epoch: -1
verbose: True
train_step_scheduler: False
val_step_scheduler: False
optimizer: Adam
optimizer_params:
Adam:
lr: 0.00002
betas:
- 0.9
- 0.999
eps: 1.e-7
weight_decay: 0
amsgrad: False
criterion_train: BCEWithLogitsLoss
criterion_val: BCEWithLogitsLoss
criterion_params:
CrossEntropyLoss:
weight: null
size_average: null
ignore_index: -100
reduce: null
reduction: mean
LabelSmoothingLoss:
classes: 2
smoothing: 0.05
dim: -1
</code></pre>
<h2>Activation Function</h2>
<pre><code>import torch

sigmoid = torch.nn.Sigmoid()


class Swish(torch.autograd.Function):
    # Memory-friendly Swish: x * sigmoid(x) with a hand-written backward pass.
    @staticmethod
    def forward(ctx, i):
        result = i * sigmoid(i)
        ctx.save_for_backward(i)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        i = ctx.saved_tensors[0]  # saved_variables is deprecated in recent PyTorch
        sigmoid_i = sigmoid(i)
        return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))


class Swish_Module(torch.nn.Module):
    def forward(self, x):
        return Swish.apply(x)
</code></pre>
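<p>For illustration, this is roughly how the module slots into one of the target-group heads (layer sizes here are placeholders, not our exact architecture):</p>
<pre><code>import torch.nn as nn

head = nn.Sequential(
    nn.Linear(2048, 512),
    Swish_Module(),        # used in place of ReLU inside each target-group head
    nn.Dropout(0.3),
    nn.Linear(512, 3),     # e.g. the ETT group has 3 targets
)
</code></pre>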
<hr>
<p>Our second best performing model is also a multi-head model with Swish activation in the heads, but with a <code>SeResNet152d</code> backbone (<code>seresnet152d</code>).<br>
During training, we use gradient accumulation so that the batch size can scale up to our desired size; for the second model, it scales to 16. The training parameters for our second best performing model are:</p>
<pre><code>image_size = 768
seed = 42
warmup_epo = 1
init_lr = 1e-4
batch_size = 4
valid_batch_size = 4
n_epochs = 30
warmup_factor = 10
num_workers = 4
iters_to_accumulate = 4
use_amp = True
debug = False
early_stop = 10
</code></pre>
<p>We also used the <code>albumentations</code> library to perform augmentations on the datasets. For the second model, the augmentations for the training and validation datasets are as follows:</p>
<pre><code># assumes an older albumentations version where the IAA transforms are still exported
import albumentations
from albumentations import JpegCompression, Downscale, IAAPiecewiseAffine, IAASharpen

transforms_train = albumentations.Compose(
[
albumentations.RandomResizedCrop(image_size, image_size, scale=(0.9, 1), p=1),
albumentations.HorizontalFlip(p=0.5),
albumentations.ShiftScaleRotate(p=0.5),
albumentations.HueSaturationValue(
hue_shift_limit=10, sat_shift_limit=10, val_shift_limit=10, p=0.7
),
albumentations.RandomBrightnessContrast(
brightness_limit=(-0.2, 0.2), contrast_limit=(-0.2, 0.2), p=0.7
),
albumentations.CLAHE(clip_limit=(1, 4), p=0.5),
albumentations.OneOf(
[
albumentations.OpticalDistortion(distort_limit=1.0),
albumentations.GridDistortion(num_steps=5, distort_limit=1.0),
albumentations.ElasticTransform(alpha=3),
],
p=0.2,
),
albumentations.OneOf(
[
albumentations.GaussNoise(var_limit=[10, 50]),
albumentations.GaussianBlur(),
albumentations.MotionBlur(),
albumentations.MedianBlur(),
],
p=0.2,
),
albumentations.Resize(image_size, image_size),
albumentations.OneOf(
[
JpegCompression(),
Downscale(scale_min=0.1, scale_max=0.15),
],
p=0.2,
),
IAAPiecewiseAffine(p=0.2),
IAASharpen(p=0.2),
albumentations.Cutout(
max_h_size=int(image_size * 0.1),
max_w_size=int(image_size * 0.1),
num_holes=5,
p=0.5,
),
albumentations.Normalize(),
]
)
transforms_valid = albumentations.Compose(
[albumentations.Resize(image_size, image_size), albumentations.Normalize()]
)
</code></pre>
<h2>Selected Submissions</h2>
<p>Our best submission to the competition comprised a weighted (convex) ensemble of two models </p>
<ul>
<li><code>Multi-Head ResNet200d</code> and</li>
<li><code>Multi-head SeResNet152d</code><br>
both pretrained on NIH data with Swish activation. The weights of the ensemble were determined by forward selection, inspired by Chris Deotte’s original implementation in the Melanoma competition. In summary, the ensemble is of the form: <br>
$$w_1 \times \text{(mean of predictions for model (1))} + w_2 \times \text{(mean of predictions for model (2))}$$<br>
where $w_1 = 0.595$ and $w_2 = 0.405$.<br>
Our best submission obtained a public score of 0.968 and a private score of 0.972. The notebook that illustrates the forward selection approach can be found <a href="https://www.kaggle.com/khoongweihao/ranzcr-forward-selection-oof-ensemble?scriptVersionId=56891135" target="_blank">here</a>, but at version 5 for the weights here. The dataset containing all our OOFs and respective submission files can be found <a href="https://www.kaggle.com/khoongweihao/ranzcr-oof-and-subs" target="_blank">here</a>, where the description in the dataset contains all the models and their scores. The weights for model (1) can be found <a href="https://www.kaggle.com/reighns/ranzcrweights" target="_blank">here</a>, and the 5-folds used in inference are <code>multihead_resnet200d_fold0_best_loss.pth</code>, <code>multihead_resnet200d_fold1_best_AUC (1).pth</code>, <code>multihead_resnet200d_fold2_best_AUC.pth</code>, <code>multihead_resnet200d_fold3_best_AUC.pth</code>, and <code>multihead_resnet200d_fold4_best_AUC.pth</code>. The weights for model (2) can be found <a href="https://www.kaggle.com/khoongweihao/ranzcr-multihead-model-weights" target="_blank">here</a>, and the 5-folds used in inference are <code>grad_accum_multihead_seresnet152d_swish_fold0_best_AUC.pth</code>, <code>grad_accum_multihead_seresnet152d_swish_fold1_best_AUC.pth</code>, <code>grad_accum_multihead_seresnet152d_swish_fold2_best_AUC.pth</code>, <code>grad_accum_multihead_seresnet152d_swish_fold3_best_AUC.pth</code>, and <code>grad_accum_multihead_seresnet152d_swish_fold4_best_AUC.pth</code>.</li>
</ul>
<p>Our second selected submission obtained a public score of 0.967 and a private score of 0.971. The submission contained only model (2) above, where its 5 folds were inferenced. The weights are the same as above. Note that when inferencing the models, we used Sin’s pipeline available <a href="https://www.kaggle.com/underwearfitting/resnet200d-public-benchmark-2xtta-lb0-965" target="_blank">here</a>.</p>
<h1>Conclusion</h1>
<p>What we could have done better:</p>
<ul>
<li>Use a wider variety of <code>classifier head</code>s, like <code>GeM</code>.</li>
<li>Use a wider variety of <code>backbone</code>s; WE JUST COULD NOT MAKE <code>efficientnet</code> work. 😐</li>
<li>Use <a href="http://neptune.ai" target="_blank">Neptune.ai</a> to log our experiments as soon as things start to get messy.</li>
<li>Experiment on 3-4 stage training.</li>
<li>Pseudo Labelling</li>
<li>Knowledge Distillation</li>
<li>Experiment more on maximizing AUC during ensembles. <code>rank_pct</code> etc.<br>
EDIT: I also express my gratitude to <a href="https://www.kaggle.com/bjoernholzhauer" target="_blank">@bjoernholzhauer</a>, who provided a lot of tips and insights.<br>
Thank you to Kaggle and the community for hosting this competition. I have learned so much just by standing on the shoulder of giants. <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> to name a few. Taking their ideas and incorporating them into our own pipeline made things work. Of course, I would really like to thank my teammate and buddy <a href="https://www.kaggle.com/khoongweihao" target="_blank">@khoongweihao</a> for all the help he provided me in the past year. </li>
</ul>
<p><a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> Thanks, you are one of the giants whose shoulder we stood on :) </p>
<p>Thanks <a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> for all the great resources. Saves us a lot of time. </p>
<p>Thanks a lot! Congrats to you too! Been following you since Melanoma. Haha</p>
|
RANZCR CLiP - Catheter and Line Position Challenge
|
Solo 60th Place Solution - Forward Ensembling & Pseudo Labeling External Data
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RANZCR CLiP - Catheter and Line Position Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Usha, I really appreciate it. </p>
<p>Interesting…I look forward to reading your discussion post.</p>
<p>It was quite the surprise. They launched an ambassadorship program last year and I was asked to join, presumably based on some of my public Kaggle notebooks. (It was not something I applied for). </p>
<p>My best advice is to continue your data science journey and to share what you learn: hopefully a similar opportunity will present itself to you. I think that such programs will become more prevalent as more companies recognize the importance of hardware in data science / machine learning / deep learning. </p>
<p>I feel your pain, I have been trying to secure NVIDIA GPUs for my own personal use (shameless PC gaming) and they are very illusive these days. Hopefully this shortage is temporary and we can all get our hands on some solid GPUs in the near future.</p>
<p>Thank you. Sadly, I only got around to assigning them at the macro level. Only at the last minute did I think to do it for each column. I may play around with the column-wise approach now that the competition is over to compare performance. </p>
<p>No thank you, I could not have done it without all your contributions. Congratulations on 2nd place, very impressive indeed. </p>
<p>Thank you Amin. The lowest resolution I used in stages 1 and 2 was <code>576</code>. Sometimes I went as high as <code>768</code> for these stages, but only before I started reusing lower resolution weights to save time.</p>
<p>I used slightly heavier augmentations each time I increased the image size.</p>
<p><strong>60th Place Solution</strong></p>
<p>Thank you Kaggle and RANZCR for hosting this competition. It was the first competition I really immersed myself in and I have learned a tremendous amount. Thank you to all the competitors, especially those that so openly shared their results and techniques. Kaggle is a wonderful medium through which to learn data science, and it would not be the same without everyone sharing and communicating. </p>
<p>I am not pleased with my final standing, but I believe that all Kagglers should share their procedure, irrespective of their final position, because it is highly unlikely that anyone else did exactly what you did, so there is always something to learn from reading other's write ups. Here it goes.</p>
<p><strong>Image resolutions</strong></p>
<p>My approach was a brute force approach and not particularly elegant. I began with <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>’s kernels on 3 stage training with annotations and changed several parameters like learning rate, batch size, training augmentations, and optimizer (AdamP showed better CV results than Adam). I also generated stratified folds by coupling <a href="https://www.kaggle.com/underwearfitting" target="_blank">@sin</a>'s <a href="https://www.kaggle.com/underwearfitting/how-to-properly-split-folds" target="_blank">kernel</a> with some <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/210064" target="_blank">useful information</a> about incorrect labels / inverted images shared by <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> and others. Once I had established a CV strategy, I started to experiment with different image resolutions, all with a ResNet200D. I tried image sizes from <code>512</code> to <code>736</code> in increments of <code>16</code> (all with a batch size of <code>32</code>). I saw an increase in AUC + decrease in BCE loss each time I bumped up the image resolution. I validated all models against my first fold to save time (more on that later). </p>
<p><a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/217296" target="_blank">There was some discussion</a> as to whether or not this multi stage approach with teacher-student training on annotated images actually gave any performance boost. I continued to use it anyways for one reason: you can use the stage 2 weights to lower training time. I trained stages 1 and 2 on a lower image resolution and then trained many different stage 3 models of various image resolutions from these weights. In other words, instead of repeating all 3 stages anytime I changed image resolutions / other parameters, I simply reused stage 2 weights and skipped to stage 3. This is not ideal, but it saved me a ton of training time and allowed me to run more experiments. I figured it would work seeing as we frequently fine tune models that have been trained on ImageNet with different image resolutions than the ones they were initially trained on. </p>
<p><strong>Preprocessing</strong></p>
<p>I did minimal preprocessing. <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/224146" target="_blank">It was mentioned</a> that some images have black borders around them. I removed them during inference and saw an increase in score. I hesitated to remove them from training as these black boxes could serve as a form of regularization, but decided to remove them for the last couple models I trained. I figured if they helped with inference, they would help with training, but I did not run any real experiments to verify this.</p>
<p><strong>Forward Selection</strong></p>
<p>I then created a fold prediction correlation heatmap for these different image sizes and saw a surprisingly low correlation between the ResNet200D model when trained on different image sizes (0.85 - 0.91). This showed me that one could get good model diversity by simply changing image resolutions, which got me thinking about ensembling. I remembered reading <a href="https://www.kaggle.com/cdeotte" target="_blank">@chrisdeotte</a>’s <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">kernel on forward selection</a> from the Melanoma competition and decided to try it. I saw nice improvements when doing so and decided to experiment with different model architectures as well. I trained a SEResNet152D on <code>672</code> and <code>720</code> image resolutions and checked the fold prediction correlations between them with my previous ResNet200D models: the correlation was even lower, so I threw them into forward selection with my other ResNet200D models. </p>
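<p>A minimal sketch of greedy forward selection on OOF predictions (not Chris Deotte's exact kernel; <code>oofs</code> are same-fold prediction arrays for one label and <code>y</code> the corresponding true labels):</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def forward_select(oofs, y, max_models=20):
    """Repeatedly add (with replacement) the model whose inclusion most improves blended AUC."""
    blend, n, best, chosen = np.zeros(len(y)), 0, 0.0, []
    while n < max_models:
        scores = [roc_auc_score(y, (blend * n + o) / (n + 1)) for o in oofs]
        k = int(np.argmax(scores))
        if scores[k] <= best:
            break
        chosen.append(k)
        n += 1
        blend = (blend * (n - 1) + oofs[k]) / n
        best = scores[k]
    return chosen, best
</code></pre>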
<p>The above procedure got me to CV <code>0.966</code> and public leaderboard <code>0.968</code>. I then experimented with RegNetYs, EfficientNets, and ResNeSts. RegNetYs took too long to train and EfficientNets were never able to reach the same CV score as the ResNet200D / SEResNet152D. Only the ResNeSt’s got similar scores, but simply took too long to train. I had reached a bit of an impasse and was not sure how to proceed.</p>
<p><strong>Pseudo labels</strong></p>
<p>I then saw <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/221808" target="_blank">this post</a> on duplicate images between the NIH Chest XRays dataset and the training / public test images given to use in this competition. Seeing as I already had an ensemble of models with good diversity, I was confident I could generate fairly high quality pseudo labels from this external dataset. (Note that you must generate pseudo labels from models that have not seen the fold you are validating against otherwise you get subtle CV leakage). I first tried to remove duplicates using CNN embeddings and RAPIDS NearestNeighbors, but this approach still left around 500 duplicate images, so I combined this approach with ImageHash to remove the remaining duplicates. I then started generating pseudo labels. I decided to use soft pseudo labels where at least one catheter class is present (any column with prediction above <code>0.7</code>) and all other classes without any catheters (remaining columns with predictions below <code>0.3</code>). I added Gaussian noise to the pseudo labels in an attempt to reduce confirmation bias.</p>
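<p>A minimal sketch of that pseudo-label filtering (one possible reading of the thresholds above; the noise scale is an illustrative choice):</p>
<pre><code>import numpy as np

def select_pseudo_labels(preds, noise_std=0.02, seed=0):
    """preds: [n_images, 11] ensemble probabilities on the de-duplicated NIH images."""
    high, low = preds > 0.7, preds < 0.3
    # keep images where at least one class is confidently present and no class is ambiguous
    keep = high.any(axis=1) & (high | low).all(axis=1)
    rng = np.random.default_rng(seed)
    soft = preds[keep] + rng.normal(0, noise_std, preds[keep].shape)  # reduce confirmation bias
    return keep, np.clip(soft, 0, 1)
</code></pre>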
<p>Instead of re-training the model jointly with concatenated labeled + pseudo labels, I decided to introduce another training stage in which I train on only NIH pseudo labels. The final stage is then fine tuning on labeled training data. I also decided to train the last few epochs without any data augmentation, based on <a href="https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification/discussion/212532" target="_blank">another discussion post</a>, and saw further improvements. I carried out this procedure for my top-3 highest scoring models and threw them back into my forward selection algorithm to compare CV and public leaderboard score. It indeed increased. </p>
<p>I am still not sure if my CV was entirely leak free. I think many teams that used external data had some form of leakage. For this reason, I only included 2 NIH pseudo label pretrained models in the folds I forward selected against. Had I invested more time in examining these duplicate images found by RAPIDS NN and ImageHash, I think I would have had a more robust CV and better public / private score. </p>
<p><strong>Training / validation folds</strong></p>
<p>Up to this point, I had only validated against the first fold, so in some sense I was ‘overfitting’ my own validation set. This is not ideal; it would be much better to validate each model against all folds and use the full OOF to ensemble, as opposed to single fold predictions. That being said, I joined this competition fairly late so I needed to save time somehow while simultaneously running many experiments. I also trusted my fold splits and was not overly concerned with validating against only one fold. However, I did want my final predictions to encompass all training data, so I repeated what I did on the first fold with the second fold, so that my models (in total) had been trained on all data folds. </p>
<p>At the end, my approach was:</p>
<ul>
<li>Train teacher on annotated samples (light augmentations for 4 epochs) </li>
<li>Transfer this information to student (light augmentations for 6-7 epochs)</li>
<li>Use these student weights as a starting point for NIH pseudo labeling training (light augmentations for 7-8 epochs)</li>
<li>Train on full labeled data (heavy augmentation for 7-8 epochs)</li>
<li>Train on full labeled data (no augmentation for 1-2 epochs)</li>
<li>Repeat on different image sizes and model architectures and forward select based on validation fold predictions</li>
</ul>
<p>I had around 10 models validated against fold 1 and 10 validated against fold 2. I forward selected each fold separately and averaged their predictions. Seeing as I never got around to full KFold training, I thought that including public models would add some beneficial diversity, so I added <a href="https://www.kaggle.com/underwearfitting" target="_blank">@sin</a>’s public 5Fold ResNet200D models to inference. My final submission was a convex combination of my own models weighted <code>0.85</code> and <a href="https://www.kaggle.com/underwearfitting" target="_blank">sin</a>’s 5 models weighted at <code>0.15</code>.</p>
<p>I chose my best public leaderboard score for my first submission (seeing as I trusted CV and public correlation) and chose my best CV score for my second submission. They were both <code>0.971</code> on the private leaderboard and not surprisingly, my best CV score was also my best private leaderboard score. </p>
<p><strong>What didn’t really work</strong></p>
<ul>
<li>TTA beyond simple horizontal flipping only showed improvements when the number of steps was fairly high (>5) and this took too long to infer with</li>
<li>Models other than ResNet200D and SEResNet152D (ResNeSt200E worked as well but took too long to train)</li>
<li>Image resolutions below <code>512</code></li>
</ul>
<p><strong>What I wanted to test, but didn’t</strong></p>
<ul>
<li>Experiment with changing image resolutions during training as a means of regularization</li>
<li>Using shape priors and a segmentation model to create features (categorize catheter overlaps with HoG descriptors and intensity histograms)</li>
<li>Use masks and segmentation model outputs to categorize distance between various anatomical features and the catheters to create more features</li>
<li>Upsample images without any catheters present with NIH Chest XRay images and compare performance</li>
<li>Experiment with label smoothing / special loss functions</li>
<li>Use MixUp on pseudo labels to further reduce confirmation bias</li>
</ul>
<p><strong>Acknowledgements</strong></p>
<p>I would first like to thank <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a> and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for sharing their approach on how to use annotated images to increase CV scores. Without their generous contributions in this competition, I would not have made much progress during these past couple months. </p>
<p>I also want to thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@chrisdeotte</a> for his wonderful notebooks on forward selection, <a href="https://www.kaggle.com/cdeotte/pseudo-labeling-qda-0-969" target="_blank">pseudo-labeling</a>, and <a href="https://www.kaggle.com/cdeotte/rapids-cuml-knn-find-duplicates" target="_blank">finding duplicate images with RAPIDS</a>. Forward selection was very useful for generating high quality pseudo labels and he inspired me to experiment with ensembling models trained on different image sizes. Thank you Chris.</p>
<p>Additionally, I would like to thank <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> for sharing domain knowledge so openly during this competition and <a href="https://www.kaggle.com/underwearfitting" target="_blank">@sin</a> for sharing his use of heavy augmentations and his cross-validation strategy. In coupling their ideas, I was able to generate a robust CV strategy that informed all my decisions during this competition.</p>
<p>Lastly, I want to thank Z by HP & NVIDIA for providing me with a Z8 workstation and ZBook Studio laptop. Without these GPUs, I would not have been able to experiment with these larger networks and different image resolutions so quickly. I ran smaller experiments / debugged my code on the laptop and ran the main experiments on the workstation. GPU memory was the bottleneck in this competition so having powerful local GPUs was integral to my solution. (RTX Quadro 8000 and RTX Quadro 5000 mobile, for reference). When it comes to deep learning and GPUs, bigger seems to be better. That being said, I wish I found more elegant approaches to circumvent this problem, but I am still learning. Next time. </p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
10th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to you too, nice solo gold!<br>
We trained the 1st stage on only 235 + 280 patients, with no pseudo labeling, and used the 1490+ remaining patients in the 2nd stage. I think we share the same approach to balancing the samples.</p>
<p>Did you filter the C1-C7 area before training the fracture classification using the segmentation model? I saw that you mentioned you got AUC 0.91, but when we filtered the C1-C7 area and trained the model only on C1-C7 images, the AUC decreased to around 0.8. We took this as the correct approach, because the gap in AUC indicates that feeding non-C1-C7 images to the model gives it the extra task of classifying whether a slice is C1-C7 (meaning the fracture classification task implicitly contains the task of classifying C1-C7, resulting in the AUC of 0.91, which is overestimated; the features extracted from such a model may cause extra difficulty in the 2nd stage).</p>
<p>Thus, we think the difference in private LB is caused by the backbone and pseudo labeling.</p>
<p>Congratulations to all the winners! Thanks to Kaggle and RSNA for hosting this interesting competition.</p>
<p>This is my first gold medal and I am glad to become a Kaggle Master, though there is still a long way to go. Many thanks to my teammates <a href="https://www.kaggle.com/yokuyama" target="_blank">yokuyama</a> and <a href="https://www.kaggle.com/shotaosaki" target="_blank">sht</a> for giving me the chance to learn so much and inspiring me with new ideas which I could have never come up with as a solo player.</p>
<p>On behalf of our team, let me introduce our solution.</p>
<h1>Stage1: Image Level Model</h1>
<p>This competition shared a lot of similarity with the past RSNA competitions (2019/2020), so we spent a lot of time gathering the wisdom shared by the top teams in those competitions: aux losses, 2-stage models, EMA, etc. Special thanks to all of you for sharing such important information!</p>
<p>We also spent a lot of time reading papers and watching YouTube to learn about cervical spine fractures, and finally we found the paper that contains the key idea to win this competition: cropping!!</p>
<p><a href="https://arxiv.org/abs/2010.13336" target="_blank">Deep Sequential Learning for Cervical Spine Fracture Detection on Computed Tomography Imaging</a></p>
<h2>Pipeline</h2>
<ol>
<li>Pretrain vertebrae model using Verse2020 (79cases).</li>
<li>Train vertebrae model using Competition data (89cases).</li>
<li>Prepare training data for the fracture model: randomly sample 258 negative cases, filter the C1-C7 area, and crop the vertebrae using the vertebrae model trained above.</li>
<li>Train fracture model.</li>
</ol>
<h2>CV strategy</h2>
<ul>
<li>Not using CV strategy in stage1.</li>
</ul>
<h2>Vertebrae model</h2>
<h3>Augmentation</h3>
<ul>
<li>Reused the training pipeline implemented in <a href="https://www.kaggle.com/code/awsaf49/uwmgi-unet-train-pytorch" target="_blank">UWMGI</a>. Big thanks to <a href="https://www.kaggle.com/awsaf49" target="_blank">awsaf49</a> !</li>
</ul>
<h3>Model</h3>
<p>We chose seresnext50_32x4d pretrained on ImageNet as the backbone for a Unet. We did not use a 3D segmentation model because, in the 2nd stage, the features extracted from the vertebrae model boosted CV by about 0.02, so we needed a 2D model to obtain image-level features.</p>
<p>Thanks timm and segmentation_models_pytorch.</p>
<h3>Training strategy</h3>
<ul>
<li>Pretrain for 18 epochs on Verse2020 because, from our experience in UWMGI, it takes many epochs to get a good segmentation model.</li>
<li>Did not use 2.5D images in the vertebrae model to save computing time, since we are not competing on the Dice score.</li>
<li>Train for 1 epoch on the competition data.</li>
<li>CosineAnnealingLR</li>
</ul>
<h3>What did not work</h3>
<ul>
<li>LovaszLoss</li>
<li>long epoch training using competition data</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5688805%2F6a7e79b4096dd655bd93f71ef6a74f42%2Fstage1_vertebrae.png?generation=1667127778021148&alt=media" alt="vertebrae model"></p>
<h2>Fracture model</h2>
<h3>Augmentation</h3>
<ul>
<li>HorizontalFlip</li>
<li>VerticalFlip</li>
<li>Rotate</li>
<li>RandomBrightnessContrast</li>
</ul>
<h3>Model</h3>
<ul>
<li>seresnext50_32x4d pretrained on imagenet</li>
<li>efficientnet_b4 pretrained on noisy student</li>
</ul>
<h3>Training strategy</h3>
<ul>
<li>[Important] Mixup (beta=1.0, p=0.6) and long training (18-25 epochs), which improved model performance (mixup CV 0.263 vs. no-mixup CV 0.269). The CV of the no-mixup model got worse with long training, while the mixup model's did not. A minimal mixup sketch follows this list.</li>
<li>CosineAnnealingLR</li>
<li>EMA(decay=0.9999)</li>
<li>2.5D image</li>
</ul>
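<p>Below is a minimal sketch of a mixup step with these settings. How the two blended labels enter the loss is our assumption; the write-up only gives beta and p.</p>
<pre><code>import numpy as np
import torch

def mixup_batch(images, labels, beta=1.0, p=0.6):
    """Blend a batch with a shuffled copy of itself using a Beta-sampled weight."""
    if np.random.rand() > p:
        return images, labels, labels, 1.0
    lam = float(np.random.beta(beta, beta))
    perm = torch.randperm(images.size(0), device=images.device)
    mixed = lam * images + (1.0 - lam) * images[perm]
    return mixed, labels, labels[perm], lam

# inside the training loop:
# mixed, y_a, y_b, lam = mixup_batch(x, y)
# logits = model(mixed)
# loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)
</code></pre>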
<h3>What did not work</h3>
<ul>
<li>Randomly dropping the channel containing neighbour image</li>
<li>bone view and soft tissue view, used by radiologist to detect fracture (maybe AI has found somewhat new mechanism, hmmm..)</li>
<li>large image size (we used 640 x 640, but not useful…)</li>
</ul>
<p>We have to admit that we were lucky to get the gold medal. Our submission was an ensemble of a mixup seresnext50 (CV 0.263, LB 0.23) and a no-mixup seresnext50 (CV 0.269, LB 0.24), getting public LB 0.22 (private LB 0.28).</p>
<p>In the last two days, we experimentally trained an efficientnet_b2 that got CV 0.269 and LB 0.25, suggesting that our seresnext50 might fit the public LB well but perform poorly on the private LB. So we trained an efficientnet_b4 on the last day (CV 0.260) and replaced the no-mixup seresnext50 with the mixup efficientnet_b4. That pushed our public LB to 0.21 and our private LB to 0.27 (the gold medal zone).</p>
<p>We spent too much time improving a single model and did not try many other backbones. On the last day we finally found that efficientnet was the better choice, because the inference bottleneck was transferring the extracted features from GPU to CPU (the feature length of seresnext50 is 2048…), not the model's forward pass.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5688805%2F927d2bf7c4f17d11586c163a9d2b500b%2Fstage1_fracture.png?generation=1667128199554100&alt=media" alt="fracture model"></p>
<h1>Stage2: Study Level Model</h1>
<p>In stage2, we used several slice sequence models for study level classification.</p>
<h2>Pipeline</h2>
<ol>
<li>Extract image-level features from each of the stage 1 fracture and vertebrae models.</li>
<li>Concatenate the fracture and vertebrae features.</li>
<li>Stack the features that belong to one study series together.</li>
<li>Resize or pad the stacked features to feat_dim x 256.</li>
<li>Feed the features into the model.</li>
</ol>
<h2>CV strategy</h2>
<ul>
<li>5 folds. Stratified by patient_overall.</li>
</ul>
<h3>Augmentation</h3>
<ul>
<li>[Important] Mixup(beta=1.0, p=0.7) to the stacked features.</li>
</ul>
<h3>Model</h3>
<p>Finally, we used four RNN/Transformer models. CV scores for each model were about the same, but the RNN and Transformer ensembles improved the scores slightly.</p>
<p>The architecture of each model is based on past RSNA competitions(2019/2020). Again special thanks to all of you for sharing such important information!</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">Attention biGRU</a></li>
<li><a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">Attention biLSTM</a></li>
<li><a href="https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection/discussion/117228" target="_blank">BERT-like Transformer Encoder</a></li>
<li><a href="https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection/discussion/117228" target="_blank">Dense biLSTM</a></li>
</ul>
<h3>Post-processing</h3>
<ul>
<li>Calibrate the model's predicted patient_overall with the predicted probabilities of C1-C7.</li>
</ul>
<p>$$P_{overall} = k_{1} P_{model} + (1-k_{1})(1-\prod_{i=1}^{7}(1 - P_{\rm Ci}))$$</p>
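<p>A minimal numpy sketch of this calibration; the blend weight k1 below is a placeholder, as the write-up does not state its value:</p>
<pre><code>import numpy as np

def calibrate_overall(p_model, p_c1_to_c7, k1=0.5):
    """Blend the model's direct patient_overall output with the noisy-OR of C1-C7."""
    p_c = np.asarray(p_c1_to_c7)
    noisy_or = 1.0 - np.prod(1.0 - p_c)
    return k1 * p_model + (1.0 - k1) * noisy_or

# e.g. calibrate_overall(0.4, [0.1, 0.2, 0.1, 0.2, 0.3, 0.6, 0.7])
</code></pre>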
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5688805%2F480954ad32c9759aea1961bc933d978f%2Fstage2.png?generation=1667124473522707&alt=media" alt="stage2"></p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
12th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the kaggle staff and hosts for organizing and hosting this competition.<br>
I would also like to thank my teammate <a href="https://www.kaggle.com/yuyuki11235" target="_blank">@yuyuki11235</a>. He suggested and implemented a lot of ideas and was really helpful.<br>
We hope to try again to win the gold medal next time.</p>
<h2>Summary</h2>
<p>We used a 2-stage approach, extracting the image-level features with CNN models and inferring the patient-level fracture probabilities with sequential models.<br>
The slice images were preprocessed with YOLOX cropping, a 2.5D approach, and a windowing method.<br>
For the final submission, we used efficientnet-v2-l as the feature extractor and LSTM, GRU, and Conv1d models for the sequence stage.</p>
<h2>Preprocess</h2>
<h5>Bounding box cropping</h5>
<p>We trained a YOLOX-l model to detect vertebral positions.<br>
Bbox labels were created from the outer frame of the segmentation masks (similar to the <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362643" target="_blank">3rd place solution</a>).<br>
For cropping, we only used the highest-confidence bbox in each image, and only when its confidence was 0.7 or higher.<br>
A 20-pixel margin was added to the bbox when cropping.</p>
<h5>Use neighbor slices and windowing</h5>
<p>As in previous RSNA competitions, we used a 2.5D approach using neighboring slices and a windowing technique.<br>
For windowing, we applied the values used in <a href="https://arxiv.org/abs/2010.13336" target="_blank">this paper</a>.</p>
<ul>
<li>Standard bone window (w=500, c=1800)</li>
<li>Gross bone window (w=650, c=400)</li>
<li>Soft tissue window (w=300, c=80)</li>
</ul>
<p>We combined 2.5D and windowing as follows (s denotes a slice index).<br>
ch1 : s-1 slice with Standard bone window<br>
ch2 : s slice with Gross bone window<br>
ch3 : s+1 slice with Soft tissue window</p>
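<p>A rough sketch of this preprocessing; the function names are ours, the (width, center) values are the ones listed above, and edge slices are simply clamped:</p>
<pre><code>import numpy as np

def apply_window(hu, width, center):
    """Clip HU values to [center - width/2, center + width/2] and rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def make_25d_slice(volume, s, windows):
    """volume: (num_slices, H, W) HU array; windows: three (width, center) pairs.
    Returns an (H, W, 3) image: slices s-1, s, s+1, each under its own window."""
    idx = [max(s - 1, 0), s, min(s + 1, volume.shape[0] - 1)]
    chans = [apply_window(volume[i], w, c) for i, (w, c) in zip(idx, windows)]
    return np.stack(chans, axis=-1).astype(np.float32)
</code></pre>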
<p>An example of input images created by preprocessing is shown below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2Ff1139ecd39345e6e4992a1039aee5797%2Finput_images.png?generation=1667030306195626&alt=media" alt=""></p>
<h2>CNN Model (1st stage)</h2>
<p>Our CNN models were based on <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a>'s <a href="https://www.kaggle.com/code/vslaykovsky/train-pytorch-effnetv2-baseline-cv-0-49" target="_blank">public baseline</a>.<br>
We trained our models to infer fracture and vertebral positions.<br>
The main tips related to training are listed below.</p>
<h5>Augmentation</h5>
<ul>
<li>HorizontalFlip</li>
<li>GaussianBlur</li>
<li>GaussNoise</li>
<li>ShiftScaleRotate</li>
<li>CoarseDropout</li>
<li>RandomBrightnessContrast</li>
<li>OneOf([GridDistortion, ElasticTransform])</li>
</ul>
<h5>Label cleaning</h5>
<p>Since the per-image fracture and vertebral positions were not given in this competition, we tried to create accurate image-level labels through multiple rounds of pseudo-labeling.<br>
Starting with <a href="https://www.kaggle.com/code/vslaykovsky/train-pytorch-effnetv2-baseline-cv-0-49" target="_blank">this notebook</a>'s labels, we first made pseudo-labels of fracture and vertebral positions from the oof predictions of an efficientnet-b4 and used them as the labels for the next training round (with some post-processing such as thresholding).<br>
By repeating the pseudo-labeling with larger models, we were able to achieve a better CV.</p>
<h2>Sequential Model (2nd stage)</h2>
<p>We trained patient-level classification models using image-level embeddings.<br>
Our final ensemble were LSTM, GRU, and Conv1d models.<br>
The main processing steps are as follows:</p>
<ul>
<li>concatenate image-level embedding for each patient (NxD, N: number of slices per patient, D: embedding dimension)</li>
<li>apply the cv2.resize function to fix the length of the sequence (MxD, M: fixed sequence length; idea from <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">this notebook</a>; see the sketch after this list)</li>
<li>input above features to each model</li>
</ul>
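<p>A small sketch of the sequence-length fix, treating the NxD embedding matrix as a single-channel image and resizing its height; 224 matches the fixed length used at inference (see Final Prediction below):</p>
<pre><code>import cv2
import numpy as np

def fix_sequence_length(features, target_len=224):
    """features: (N, D) per-slice embeddings of one patient -> (target_len, D)."""
    n, d = features.shape
    # cv2.resize expects (width, height): keep D, resize N to target_len
    return cv2.resize(features.astype(np.float32), (d, target_len),
                      interpolation=cv2.INTER_LINEAR)
</code></pre>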
<p>Other tricks are as follows:</p>
<ul>
<li>Mask only the C1~C7 areas by inferring the bone position (NxD→N'xD, N': number of slices of C1~C7)</li>
<li>Attention pooling</li>
<li>Randomly skip slices during training (fixed to every 3rd slice during inference; see the Final Prediction section below)</li>
</ul>
<h2>Final Prediction</h2>
<ul>
<li>Use 1/3 slices as df[df["Slices"] % 3 == 0] to reduce inference time</li>
<li>YOLOX-l bbox prediction (~1h inference time)</li>
<li>4 out of 5 folds of efficientnet-v2-l (below 2h inference time per fold)</li>
<li>image size = (512, 512)</li>
<li>3 sequential models (LSTM, GRU, Conv1d) with difference seeds (4 seeds for each model)</li>
<li>fixed sequence length = 224</li>
</ul>
<p>Congrats on your silver medal too!<br>
In our experiments, bbox cropping improved CV by 0.05 or more.</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
13th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks.</p>
<p>We have a fracture probability for each vertebra. Suppose for one patient it is <code>[0.1, 0.2, 0.1, 0.2, 0.3, 0.6, 0.7]</code>. The three highest values are <code>[0.3, 0.6, 0.7]</code>. The patient's overall probability was calculated as follows:</p>
<pre><code>1 - ([1-0.3] * [1-0.6] * [1-0.7]) = 0.916
</code></pre>
<p>Thanks. </p>
<p>Learned this the hard way :)</p>
<p>Thanks to all the organizers for collecting and annotating this rich pool of C-spine CTs and also congratulations to all the winners for their amazing work. I should also thank my teammate <a href="https://www.kaggle.com/braderickson" target="_blank">@braderickson</a> for all his clinical and technical insights; this solution would not have been possible without him.</p>
<p>It was my first serious Kaggle competition, and I really enjoyed it while making some big mistakes, and hopefully, I can learn from them for my future competitions [more at the end].</p>
<h2>Summary</h2>
<p>From the beginning, we decided to focus on a 3D pipeline, as there are some cues in each slice (like the nutrient canals) that might throw off a 2D model (just our initial idea). Our pipeline had a very simple flow: segment the C1-C7 vertebrae, then run a 3D classifier that takes each vertebra volume and predicts whether it is fractured or not. </p>
<h2>Segmentation</h2>
<p>This was a bit challenging; only 87 samples had vertebra segmentation masks and based on the literature around VerSe challenge this was not enough to have a robust segmentation model. We used VerSe challenge dataset in addition to CTSpine1K dataset to train a 3D SwinUNETR model (from MONAI). For the segmentation task, we used patches of [128, 128, 128] with [1, 1, 1] pixel spacing and upsampled the masks to the images' original dimensions. We used the 87 annotated images as the test set to evaluate the robustness of our model. This model reached a mean DSC of ~0.90 on the test set, making it a viable option to be used as a standalone model (no CV for segmentation). </p>
<p>One thing that we did, was to rotate the vertebrae to have the anterior and posterior center of masses align in a horizontal line; this helped fix the hyper-flexed neck positioning that some patients had.</p>
<p>Mask post-processing was applied as serial closing and dilation (to get rid of stray pixels), followed by keeping the 7 largest connected components, one for each vertebra. At this stage, we were able to get a clean vertebra volume. But as others mentioned, there are some overlaps, and two consecutive vertebrae might be visible at the top or bottom of the volume. To overcome this, we blacked out the other vertebrae. For example, in the following image of a C1 vertebra, we blacked out the odontoid process of the C2 vertebra. This, in our experiments, had the best performance compared to other approaches.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5189553%2Fbe1f7efe85201de67301b964d03d1c5c%2FScreenshot_4.png?generation=1666964924580012&alt=media" alt=""></p>
<p>With all the resampling, upsampling of the mask, and post-processing this stage took ~4.5 hours of the total runtime.</p>
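<p>A rough sketch of the mask clean-up described above; the morphology iteration counts are assumptions, and the key step is keeping only the 7 largest connected components:</p>
<pre><code>import numpy as np
from scipy import ndimage

def clean_vertebra_mask(mask, n_keep=7):
    """mask: binary 3D array covering all predicted vertebrae."""
    mask = ndimage.binary_closing(mask, iterations=2)
    mask = ndimage.binary_dilation(mask, iterations=1)
    labeled, n = ndimage.label(mask)
    if n <= n_keep:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = np.argsort(sizes)[-n_keep:] + 1     # labels of the n_keep largest components
    return np.isin(labeled, keep)
</code></pre>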
<h2>Classifier</h2>
<p>We got the volume dimensions of each vertebra and found that >90% of samples had a volume less than 256x256x64, so we chose this size as the input to our 3D classifier. The most challenging part of the pipeline was training a robust 3D classifier. MONAI has a great API for creating 3D versions of famous 2D models like EfficientNets (only v1s), DenseNets and ResNeXts. We finally decided to use an ensemble of EfficientNetB4 and DenseNet209 models as they had the best performance on our CV, reaching an AUC of >0.90. We added horizontal flipping as TTA, and it helped boost the performance by 0.02.</p>
<p>For calculating the patient overall labels, we used conditional probabilities of the three highest fracture probabilities (as more than 95% of samples had <=3 fractures). Compared to setting the max value as the patient's overall label, this gave a 0.02 score improvement. We rounded the probabilities <0.05 and >0.90 to zero and one respectively, as it helped the public score by 0.02. This was our biggest mistake! After finishing the competition we ran our models without this thresholding, and we got a public score of 0.25 (a 0.03 improvement).</p>
<p>The 10 classifiers took ~2.5 hours to run, which was pretty fast.</p>
<h2>Lessons Learnt</h2>
<p>Here are some remarks about my first Kaggle competition experience, hope this might help others:</p>
<ol>
<li>Start Early: If this is your first time, starting soon helps with iterating with different ideas and models.</li>
<li>Do Your Research: Search how others have done this before and how you can improve on that. Also, search for related external datasets.</li>
<li>Implement and Use the Competition Metric: This will give you an overall idea of where you are in the leaderboard without needing to submit so often.</li>
<li>Do CV: Though it is tempting to have your hyperparameters optimized in a single fold and get the "Best" results on that before testing it on others, resist that temptation. Ensembling all folds is almost always helpful, so the sooner you reach this point, the better.</li>
<li>Don't Overfit: This is the hardest in my opinion. Try to optimize everything on your CV rather than the public test set. Don't submit just to see if changing a small parameter (like thresholds) will improve your public test score.</li>
<li>Have Fun!</li>
</ol>
<h2>Acknowledgements</h2>
<p>I used MONAI and Pytorch-Lightning for all the experiments, and it made my life much easier as iterating between models and hyperparameters was made so simple. Also, thanks to all the Kagglers who participated in this competition; I learned a lot from their codes and discussions. </p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
14th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I wasn't sure if this was worth writing up, but since we seemed to do a few things differently, and not everyone used the bounding box annotations, figured we could share some insights. </p>
<p>Hardware - 3090ti - thanks UT for this… perhaps minor compared to other hardware out there, but was a big upgrade personally over my now aging 1080tis (also huge thanks to crypto bust for making this available 😊)</p>
<p>#</p>
<h1>1st stage -</h1>
<p>3D nnUnet pretrained on totalsegmentator. I was initially hesitant about 3D, mainly because I’ve never trained a 3D model before, and my computer’s 32GB of RAM kept having issues with dataloading beyond a certain size. My teammate Yee showed some good success with 3D, and this finally inspired me to overcome this fear. I also ended up using a library called rising <a href="https://rising.readthedocs.io/en/stable/transforms.html" target="_blank">https://rising.readthedocs.io/en/stable/transforms.html</a>, which made augmentations on the GPU easier. I wrote a 3D version of cutout (3D black cube) for it, but beyond that just used the library as is with standard rotate and random crop.</p>
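<p>The 3D cutout mentioned above was implemented as a rising transform; the standalone torch version below is only a sketch, and the cube-size range and probability are assumptions:</p>
<pre><code>import numpy as np
import torch

def cutout_3d(volume, max_size=48, p=0.5):
    """Zero out a random cube inside a (..., D, H, W) tensor."""
    if np.random.rand() > p:
        return volume
    d, h, w = volume.shape[-3:]
    sz = np.random.randint(8, max_size + 1)
    z = np.random.randint(0, max(d - sz, 1))
    y = np.random.randint(0, max(h - sz, 1))
    x = np.random.randint(0, max(w - sz, 1))
    out = volume.clone()
    out[..., z:z + sz, y:y + sz, x:x + sz] = 0
    return out
</code></pre>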
<ul>
<li>trained on 87 segmentation cases to predict C1-C7</li>
<li>trained for ~12 hours (400 epochs)</li>
<li>intake 1x192x192x192</li>
<li>modified to output 7x192x192x192;</li>
<li>DICE + BCE loss; DSC of 0.95</li>
<li>this gave us the best segmentation results</li>
</ul>
<p>Example (input, ground truth segmentation mask, predicted segmentation mask) -<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1598278%2Fa32d84a2533566a7e06e5c57250acc88%2Fim_130.png?generation=1667006906874204&alt=media" alt=""></p>
<p>All 2019 cases were then pseudolabeled with the segmentations. The total volume of CSpine was cropped to just C1-C7. </p>
<p>#</p>
<h1>2nd stage -</h1>
<p>Unet-CNN 2.5D (EffB7 noisy student backbone), model from SMP. Images resized to 448x448. Axial slices were stacked in groups of 3, plus one channel providing level information (level index divided by 8, so C1 was 0.125, C2 was 0.25, etc.), so the CNN input was 4x448x448. The segmentation output is 2 channels for 2 separate tasks (below). An auxiliary embedding of dim 512 was taken from the middle of the Unet for 3 additional tasks.</p>
<p>Task 1 - One segmentation channel is to predict bounding boxes (as segmentation maps), simple BCE with pos_weight of 7. </p>
<p>Task 2 - Second segmentation channel is to predict the intersection of bounding box and vertebral segmentation mask (IF it’s positive based on the ground truth label for the case), trained with DICE + BCE (pos_weight 2) - see picture below for example. </p>
<p>Task 3 - Auxiliary embedding (512 layer) output to 1 dim linear head to predict fx vs no fx, based on whether a bounding box slice exists. Since bounding boxes were only annotated for about ~50% of positive cases, we had to exclude about 250 cases. All slices from negative cases for fx (~1000 cases), plus positive cases without bounding box on the slice, could still be used as negative for fx. We intended to train a few bounding box models to pseudo label the remaining 250 cases but ended up losing time in trying other things. BCE loss, pos_weight 2.</p>
<p>Task 4 - Auxiliary embedding (512 layer) output to 7 dim linear head to predict the ratio of vertebral bodies in the slice. We thresholded this at 1000, not sure this makes a difference. So if there are 2000 C5 pixels and 3000 C6 pixels and 500 C7 pixels, ground truth is [0,0,0,0,1,1,0.5] BCE loss, pos_weight 7.</p>
<p>Task 5 - Auxiliary embedding (512 layer) output to 7 dim linear head to predict level fx vs no fx, which was determined by bounding box presence, level presence, and also weakly thresholded by pixel values. BCE loss, pos_weight 7.</p>
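<p>To make the loss setup for the five tasks above concrete, here is a hypothetical sketch of how they could be combined. The output/target keys, the DiceLoss import from segmentation_models_pytorch, and the seg/cls weights are all assumptions; the write-up only states the pos_weights and (further below) that favoring the segmentation losses helped slightly.</p>
<pre><code>import torch
import torch.nn as nn
from segmentation_models_pytorch.losses import DiceLoss

bce7 = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(7.0))  # tasks 1, 4, 5
bce2 = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(2.0))  # tasks 2, 3
dice = DiceLoss(mode="binary")

def total_loss(out, tgt, seg_w=1.0, cls_w=0.5):
    # segmentation-style heads
    seg = bce7(out["bbox_map"], tgt["bbox_map"]) \
        + bce2(out["frac_map"], tgt["frac_map"]) + dice(out["frac_map"], tgt["frac_map"])
    # auxiliary classification heads on the 512-dim embedding
    cls = bce2(out["frac"], tgt["frac"]) \
        + bce7(out["level_ratio"], tgt["level_ratio"]) \
        + bce7(out["level_frac"], tgt["level_frac"])
    return seg_w * seg + cls_w * cls
</code></pre>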
<p>Examples (input 3 channel from 3 slices, single channel level segmentation mask, ground truth 2 channel segmentation mask, predicted 2 channel segmentation mask - red is bbox, white is intersection of bbox and segmented vert if positive) -</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1598278%2F14cb4d8a6aca61d605af8e45526ce14e%2Fim_1.jpg?generation=1667006208078177&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1598278%2F7e7e5e1a04cb25d9056155ccc6a9bb74%2Fim_19.jpg?generation=1667006224901265&alt=media" alt=""></p>
<p>For some reason the above model trained very quickly, despite aggressive regularization and augmentation, and performed worse (on the 3rd stage) if trained for more than 1 epoch (~3 hrs). I think this needed better optimization, and a lesson I learned is that multitask models are harder to optimize in general: the ideal loss weighting to balance the different tasks probably needs more hyperparameter tuning than a simpler solution. I had some slight success when favoring the segmentation losses over the classification losses. Happy to get feedback/intuition on this in general if anyone has ideas.</p>
<p>#</p>
<h1>3rd stage -</h1>
<p>RNN+Attention on the 512 dim embedding layer from the 2nd stage. Pretty standard stuff here.</p>
<p>#</p>
<p>We had high hopes of doing lots of ensembling of the above, particularly because, in addition to the RNN+Attention on the auxiliary layer, the segmentation channel outputs can be averaged and potentially used for a 3rd-stage 3D model, which some early tests showed modest success with (just on the fx channel though?). About 2 weeks from the end, I realized that the kaggle kernel will not handle even 2 folds of a simple pipeline. Oops. Lesson learned. Lots of dataloading optimization later, more than 2 folds of the above still wasn't possible. Out of 31 submissions, 6 were tests to get things working, and 19 errored out (narrowed it down to VRAM more than CPU/RAM). So EffB7ns was too big for this comp perhaps. Last-minute, incomplete testing of smaller models showed they didn't learn the level information as well though. </p>
<p>Things that also kind of worked </p>
<ul>
<li>2.5D CNN without the 1st stage 3D model, but predicting 8/9 channels (7 channels for each level, loss only calculated for the 87 segmentation cases and weighting of ~20, 1-2 channels for fx detection, weighting of ~2)</li>
<li>3D multitask model</li>
</ul>
<p>Things that didn't work</p>
<ul>
<li>Pure 3D model - we wasted so much time here haha</li>
</ul>
<p>#</p>
<p>Had a lot of fun and learned a lot as usual. Congrats to winners, and all the teams that participated!</p>
<p>Edit: Sorry misunderstood initially. </p>
<p>I think it made a difference. But not sure exactly how much. Local cv testing without any segmentation mask at all was ~0.45 vs ~0.3. Second segmentation channel alone is probably less of a difference… Never purely tested bbox channel vs both channel at the end, but earlier tests were ~0.05 difference.</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
17th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone,<br>
I’d like to thank the organizers for a fun and insightful competition and congratulations to the winners!<br>
This is my solution that ended up ranked at 17th place:</p>
<h3>Overview</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2Fa8f0c94ca26774330c3bcfada33015b1%2FScreen%20Shot%202022-10-29%20at%208.32.10%20PM.png?generation=1667100758660919&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F16083e42e2d76812e6db31e2212a1535%2FScreen%20Shot%202022-10-29%20at%208.35.36%20PM.png?generation=1667100949403148&alt=media" alt=""></p>
<h3>Stage 1.1 - trained a vertebrae number detection model</h3>
<p>Trained an EfficientNetv2-s for detecting the vertebrae number. I followed the approach outlined by <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> <a href="https://www.kaggle.com/code/vslaykovsky/pytorch-effnetv2-vertebrae-detection-acc-0-95" target="_blank">here</a> and my model ended up achieving similar 95% accuracy in vertebrae number detection.</p>
<h3>Stage 1.2- Vertebrae bounding box detection model</h3>
<p>I trained a Yolov5s model to detect bounding boxes in each dicom slice. The training set of the bounding box was generated based on the annotated pixels in the .nii files. The bounding box can surround multiple vertebrae. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F1d43777ca39152f43243f81231322a49%2FScreen%20Shot%202022-10-29%20at%208.39.08%20PM.png?generation=1667101171836481&alt=media" alt=""></p>
<h3>Stage 2- slice fracture model</h3>
<p>I followed the following steps for each slice:</p>
<ol>
<li>Apply bone windowing (400,1800)</li>
<li>Crop the slice based on the bounding box. Padding was added to retain the aspect ratio, keep some surrounding tissue, and compensate for overly tight bounding box predictions.</li>
<li>At training time, apply augmentations to the cropped images.<br>
The cropped image size is 384x384, which is the “native” resolution for EfficientNetV2-S models and also enables starting from pre-trained weights. </li>
</ol>
<p>Some cropped examples before augmentation</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F22991%2F622d2e799a6bdebb33ec0b8b3e97e973%2FScreen%20Shot%202022-10-29%20at%208.40.38%20PM.png?generation=1667101265349846&alt=media" alt=""></p>
<p>For training I used a semi-supervised approach using the following method, which gave me a 0.03 points reduction on both private and public scores:</p>
<ul>
<li>Trained a small model first on the data that had bounding box annotations. For these slices we have a clean set of labels for which slice has a fracture. I used the rest of the slices with no annotations from these studies as negative labels and also added slices from studies where patient_overall is 0. The point was to create the cleanest possible training set.</li>
<li>I used the model created above to predict fractures on the studies that had fractures but no bounding boxes, using the following heuristic together with the predictions from the stage 1.1 model: if (p(fracture)>0.5) and (the vertebra to which the slice belongs has a fracture in train.csv), then generate a positive (fracture) pseudo label for that slice. A rough sketch of this heuristic is shown below.</li>
<li>Trained a new model with the original bounding box labels and the new pseudo labels. That’s the final model for stage 2.</li>
</ul>
<p>The output of the model is whether there is a fracture or not.</p>
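<p>A hypothetical pandas sketch of the pseudo-labeling heuristic described above. All column names are assumptions: slice_df would hold one row per slice with the stage 1.1 vertebra prediction and the stage 2 fracture probability, and train_df is the competition train.csv.</p>
<pre><code>import pandas as pd

def make_pseudo_labels(slice_df, train_df, thr=0.5):
    merged = slice_df.merge(train_df, on="StudyInstanceUID", how="left")
    # train.csv has one 0/1 column per vertebra (C1..C7); look it up by the predicted vertebra
    vert_has_fracture = merged.apply(lambda r: r[r["pred_vertebra"]] == 1, axis=1)
    merged["pseudo_label"] = ((merged["pred_fracture"] > thr) & vert_has_fracture).astype(int)
    return merged
</code></pre>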
<h3>Stage 3- vertebra fracture sequence model</h3>
<p>The approach here was to create a sequence of slice embeddings per vertebra and train a sequence model. I used the following method per vertebrae v_n:</p>
<ol>
<li>Using the prediction from the model in stage 1.1, I have the probability of each vertebrae number being present in each slice. Select the first 150 slices where p(v_n) > 0.5</li>
<li>The model architecture has a fully connected layer that reduces the dimensionality of the embeddings to 64 before passing them to the GRU. I also concatenated the following extra features, both into the FC layer and via a skip connection to the GRU: all 7 vertebrae number prediction probabilities and the stage 2 slice model's final-layer logit.</li>
<li>The input to the model is the 1280 size embedding taken from the GAP layer in the Stage 2 model + 7 vertebrae number predictions from stage 1.1 model + output logit from stage 2 model, overall 1288 features.</li>
<li>The output is the probability of whether the given vertebra has a fracture.</li>
</ol>
<h3>Combining the predictions to patient_overall</h3>
<p>Some of my submissions had a model that re-calibrates and adds a patient_overall prediction based on the per-vertebra predictions from stage 3. However, what eventually gave me the best results was the simple approach of using the stage 3 per-vertebra predictions as-is and calculating patient_overall with the following formula: <br>
patient_overall = 1 - [1-p(frac|V1)] * [1-p(frac|V2)] * … * [1-p(frac|V7)]</p>
<h3>Inference</h3>
<p>Overall runtime of the submission is 4 hours. I used a single model for stages 1.1, 1.2 and 2. For stage 3 I used a 5x ensemble</p>
<p>Due to the large size of the scan data in this competition, reading the files is a big bottleneck for the execution runtime. A helpful optimization in the submission notebook was to avoid reading the DICOM files repeatedly. One trick I used was a dataloader that loads each image once and returns all the tensors needed by models 1.2 and 2, then calls the two models sequentially in the inference loop with their corresponding input tensors.</p>
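<p>A minimal sketch of this read-once dataset trick; the class name and the preprocessing callables are placeholders for whatever each model expects:</p>
<pre><code>import pydicom
from torch.utils.data import Dataset

class MultiModelSliceDataset(Dataset):
    """Load each DICOM once and return the tensors for both the bbox and fracture models."""
    def __init__(self, paths, preprocess_bbox, preprocess_frac):
        self.paths = paths
        self.preprocess_bbox = preprocess_bbox
        self.preprocess_frac = preprocess_frac

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        img = pydicom.dcmread(self.paths[i]).pixel_array.astype("float32")  # one disk read
        return self.preprocess_bbox(img), self.preprocess_frac(img)

# inference loop: both models consume their own tensor from the same batch
# for x_bbox, x_frac in loader:
#     boxes = bbox_model(x_bbox.cuda())
#     frac = frac_model(x_frac.cuda())
</code></pre>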
<h3>Hardware</h3>
<p>In this competition I leveraged AWS. My go-to EC2 instance was g5.2xlarge, and I was mostly using spot instances. These instances have 32GB of RAM, an NVIDIA A10G (24GB of GPU memory), and 8 vCPUs.</p>
<p>Thanks-<br>
Ori</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
18th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thx ! <br>
We got +0.02 CV/LB</p>
<h4>Intro</h4>
<p>This was a really interesting competition, thanks to the hosts for organizing another cool medical challenge.<br>
Thanks also to my teammates <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> and <a href="https://www.kaggle.com/tivfrvqhs5" target="_blank">@tivfrvqhs5</a> for the awesome competition. David and I joined <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a>'s team shortly before the team-merge deadline, when he was already at LB 0.4, which gave us a week to improve his already great pipeline.</p>
<p>I will keep the write-up short but feel free to ask any question in the comments.</p>
<h4>Solution</h4>
<p><a href="https://ibb.co/DDpJSM1"><img src="https://i.ibb.co/RpyFV6g/rsna.png" alt="rsna"></a></p>
<p>Our overall solution is a 3 stage pipeline with additional stuff on the side, illustrated above.</p>
<ol>
<li>Predict fractures the same way as the public kernel. We use a custom 3D-CNN model for sampling frames in the right vertebra, as well as the available fracture bounding boxes.</li>
<li>Re-use the same pipeline in 2.5D feeding 5 frames and adding a LSTM head</li>
<li>Switch to study level, feeding the predicted probabilities for all the slices to a small RNN or 1D-CNN. We also add the probability of the highest-confidence fracture box from a YOLOv5-l. </li>
</ol>
<h4>Comments</h4>
<ul>
<li>The 2nd model was quite tricky to make work, and we did not have the time to use a better cropping than the 384 center crop, but we had the csv with vertebrae centered crops ready. Joining late is tough.</li>
<li>3D models did not really work for us, but probably because they required a lot of tricks to function optimally.</li>
<li>Our models don't even look great at detecting fractures, but the study-level model does a good job exploiting the signal</li>
<li>I was optimizing the wrong metric (because of normalization) until the last day, switching to the correct implementation helped quite a lot </li>
</ul>
<p>Thanks for reading, hope to see you all on the next medical imaging comp =)</p>
<p>Yes, exactly</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
1st Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thx! Didn't try that.</p>
<p>Thx. Nothing should be made public before submission.<br>
Just upload model weight and write inference code using kernels.</p>
<p>Thx. I tried it from the very beginning and it was not as good as end2end, so I stuck with end2end.</p>
<p>Thx. I haven't tried 2d seg but I don't think 2d seg would work here because z-axis information is too important.</p>
<p>Thx. Haha just check sub page frequently…</p>
<p>Congrats to your solo gold!<br>
I knew it when I first saw the LB, well deserved!</p>
<p>Type2 make the CV of patient_overall 0.02 better, so for whole CV 0.01 or so.</p>
<p>Thx. The 0.35 submission is just kind of POC of my ideas. Nothing change in pipeline afterward.</p>
<p>Thx.<br>
Haven't tried.<br>
Yes but not working.<br>
No. Just simply norm to 0~1</p>
<p>Haven't tried no crop</p>
<p>Thx.<br>
I'm sorry I don't understand your question. What do you mean by indices</p>
<p>Thx. It's end2end.</p>
<p>I didn't use position embedding. For (1) the idea is quite simple, just use the same label for all 15 slices from a vertebrae. For (2) it's complicated, because for individual vertebrae the type1 models are better. There are probably many reasons for this.</p>
<p>Thx. Just chose to trust CV.</p>
<p>Thanks to the organizers and congrats to all the winners and those who worked hard to develop new pipelines and stuck with it until the end of the competition. </p>
<p>This is a very interesting competition, because we can think of many different ways to approach this dataset. Therefore the most important thing for this competition is to develop a reasonable pipeline, followed by optimization of the model. </p>
<h1>Code</h1>
<p><a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787" target="_blank">https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787</a></p>
<h1>Summary</h1>
<p>I designed a 2-stage pipeline to deal with this problem.</p>
<p>stage1: 3D semantic segmentation -> stage2: 2.5D w/ LSTM classification.</p>
<p>In addition, there are 2 different types of classification models in stage2.</p>
<h1>3D Semantic Segmentation</h1>
<p>For 3D semantic segmentation, we only have 87 samples w/ 3d mask in the dataset, but it's sufficient to train 3D semantic segmentation models with good performance. </p>
<p>I used 128x128x128 inputs to train resnet18d or efficientnet-v2s + Unet models for segmenting the C1-C7 vertebrae (7-channel output).</p>
<p>After the training was completed, I predicted 3D masks of each vertebra for all 2k samples in the training set.</p>
<p>Here is an example of predicted masks of C[1-7] vertebrae. Center slice of x, y, z dimension view, from left to right.<br>
<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F448347%2F6389a0864805b2f51c12dfe64c590cd5%2F1.png?generation=1666929046271861&alt=media" alt=""></p>
<h1>Prepare Data for Classification</h1>
<p>Next step is to prepare data for classification.</p>
<p>First, using the predicted 3D mask of each vertebra, we can crop out 7 vertebrae from a single original 3D image (there might be multiple vertebrae visible in a single crop, but that's fine). At this point we have 2k * 7 = 14k cropped samples, and each sample has a single binary label.</p>
<p>Then, for each vertebra sample, I extracted 15 slices evenly along the z-dimension, and for each slice I further took the +-2 adjacent slices to form an image with 5 channels. E.g., if a 3D vertebra sample has a shape of (128, 128, 30), I extracted the 0th, 2nd, 4th, 6th, …, 26th, 28th slices; then, for example, for the 2nd one I use the 0th~4th slices to form a 5-channel image.</p>
<p>In addition, I added the predicted mask of the corresponding vertebra as a 6th channel of each image, as a way to exclude the effect of having multiple vertebrae in a single crop.</p>
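<p>A sketch of this per-vertebra sampling; the function name and the edge handling are ours, while the write-up specifies 15 evenly spaced slices, +-2 neighbours per slice, and the predicted mask as a 6th channel:</p>
<pre><code>import numpy as np

def make_vertebra_inputs(volume, mask, n_slices=15):
    """volume, mask: (H, W, Z) crop of one vertebra and its predicted mask.
    Returns (n_slices, H, W, 6): 5 adjacent CT slices + the mask of the centre slice."""
    z = volume.shape[-1]
    centers = np.linspace(0, z - 1, n_slices).round().astype(int)
    images = []
    for c in centers:
        idx = np.clip(np.arange(c - 2, c + 3), 0, z - 1)   # c-2 .. c+2, clamped at the edges
        chans = [volume[..., j] for j in idx] + [mask[..., c]]
        images.append(np.stack(chans, axis=-1))
    return np.stack(images)
</code></pre>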
<p>Here is an example of one slice of a single vertebra and its predicted mask (with augmentations). We can see that the left half of the vertebra in the image does not belong to the vertebra specified by this crop.<br>
<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F448347%2Fca4e40db1dc08930b7db99416f122bb0%2F2.png?generation=1666929152961988&alt=media" alt=""></p>
<h1>2.5D + LSTM Classification</h1>
<p>We now have 14k 3D training samples of vertebrae. Theoretically the easiest way to deal with this data is to train 3D CNN on it. But unfortunately this method does not work. Training a 3D CNN on this data did not give me satisfactory results.</p>
<p>So I backed off and chose the 2.5D approach. Here 2.5D means that each 2D slice in a vertebra sample carries the information of several adjacent slices, hence the name 2.5D. But the model is a normal 2D CNN with 5-channel input.</p>
<p>The structure of this model is as follows: I first feed the 15 slices from a single sample into a 2D CNN to extract features for each slice, and then follow it with an LSTM model, so that the whole model can learn the features of the whole vertebra. I call it the type1 model ↓</p>
<p><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F448347%2F15ff7ec369c300c4f8d8f5e3df64b071%2F3.png?generation=1666929243244770&alt=media" alt=""></p>
<p>The model structure above, while able to learn fractures for a single vertebra, is not able to learn whether the patient as a whole has a fracture. So I designed another model.</p>
<p>The second classification model is basically the same as the one above, except that it treats a patient as one training sample (the model above treats a vertebra as one training sample). This model is fed 7x15 2D images at the same time, so that it has the ability to learn the patient_overall label. I call it the type2 model ↓</p>
<p><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F448347%2Feecde695d666d4be6cb27791fa813ac0%2Fv2-0c70d96faae895d030d7b7f023570598_1440w.png?generation=1666929301847778&alt=media" alt=""></p>
<p>However, the disadvantage of this model is that it takes up too much GPU memory and therefore can only use small backbones (Imagine a model with batch_size = 1 that has to be trained on 105 images at one time, it is insane). </p>
<h1>Final Submission</h1>
<p>3D Seg</p>
<ul>
<li>5fold resnet 18d unet (128x128x128)</li>
<li>5fold effv2 s (128x128x128)</li>
</ul>
<p>2.5D Cls</p>
<ul>
<li><p>Type1 5fold effv2s (512x512)</p></li>
<li><p>Type1 5fold convnext tiny (384x384) </p></li>
<li><p>Type2 5fold convnext nano (512x512)</p></li>
<li><p>Type2 2fold convnext pico (512x512)</p></li>
<li><p>Type2 2fold convnext tiny (384x384)</p></li>
<li><p>Type2 2fold nfnet l0 (384x384)</p></li>
</ul>
<p>The submission time is 7.5 hours.</p>
<p>Thanks to the timm library for having such good implementations of those models. I always use it.</p>
<h1>Acknowledge</h1>
<p>In this competition, more than half of my models are trained on Z8G4 Workstation with dual A6000 GPU from Z by HP.<br>
I would say I couldn't have achieved this without this workstation, thanks a lot!</p>
<p>I'm very interested to hear if anyone got good performance with 3D classification; if yes, please leave a comment here!</p>
<p>haha thx, I've just published my inference code with some small models and you can take a look at it.<br>
I think the type1+type2 ensemble should have made the score 0.01~0.02 better.</p>
<p>Thanks for your kind words ;)</p>
<p>Thx.<br>
Good questions!</p>
<ol>
<li>I didn't try transformers here. The sequence is not too long…</li>
<li>Final model takes 12h~24h and little experiments takes 6~12h.</li>
<li>To avoid overfitting, mixup is one of the choses.</li>
<li>Ensemble make CV score around ~0.02 better.</li>
<li>Also kind of avoiding overfit, nothing special behind it.</li>
</ol>
<p>check my inference code!</p>
<p>Good work!</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
2nd place solution : Segmentation + 2.5D CNN + GRU Attention
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and RSNA for such a great competition, we are very happy to have finished second. From this complex game, we tried to find the most concise and efficient solution, and gained a lot of knowledge. It was also one of the most hard-drive intensive competitions I've ever seen, and we wasted time loading data because we didn't have enough space to save the high-resolution pseudo-label voxel.</p>
<p>To cut to the chase, our solution consists of two stages and uses 2.5D CNNs, which we learned from <a href="https://www.kaggle.com/awsaf49" target="_blank">@Awsaf</a> in the UWMGI competition: <a href="https://www.kaggle.com/code/awsaf49/uwmgi-2-5d-train-pytorch" target="_blank">UWMGI: 2.5D Train [PyTorch] | Kaggle</a> </p>
<p>stage1: 2.5D CNN + Unet for Segmentation </p>
<p>stage2: CNN + BiGRU + Attention for Classification </p>
<h2>Stage 1</h2>
<p>First, we used the 87 studies with segmentation samples provided by the organizers. We recreated the mask labels according to the following mapping:</p>
<pre><code>0 ---> background
1 ---> C1
2 ---> C2
...
8 ---> T1 - T12
</code></pre>
<p>We used the more general 2.5D setup with 3 channels of image data, i.e., the original image i and its neighbours i-1 and i+1.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3799319%2F29389b33392e4b84d56d0d53be39c23e%2FSnipaste_2022-11-09_01-18-47.png?generation=1668028486937787&alt=media" alt=""></p>
<p>The data augmentation section here is as follows, similar to <a href="https://www.kaggle.com/code/awsaf49/uwmgi-2-5d-train-pytorch" target="_blank">UWMGI: 2.5D [Train] [PyTorch] | Kaggle</a>, without much change.</p>
<p>We also tried heavier data augmentation, but it did not work better.</p>
<pre><code>Resize(CFG.img_size, CFG.img_size, interpolation=cv2.INTER_NEAREST),
HorizontalFlip(p=0.5),
ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.05, rotate_limit=10, p=0.5),
OneOf([
GridDistortion(num_steps=5, distort_limit=0.05, p=1.0),
ElasticTransform(alpha=1, sigma=50, alpha_affine=50, p=1.0)
], p=0.25),
</code></pre>
<p>For segmentation model,we used segmentation_models_pytorch lib, backbone was efficientnet-b0, decoder was unet</p>
<p>Optimizer="AdamW" </p>
<p>Scheduler="CosineAnnealingLR" + "GradualWarmupSchedulerV3"</p>
<h2>Crop Voxel</h2>
<p>Once we had trained the segmentation model, we applied it to all 2,019 studies, using the same preprocessing as before for the input data. After the model predicted the results, we manually inspected several predicted masks and found that the accuracy was pretty good.</p>
<p>We cropped out all 7 cervical vertebrae of each study separately; each cervical vertebra maps to a fracture label from train.csv. According to our EDA, most studies contain 200-300 slices, so each vertebra covers about 30 slices on average. We chose 24 slices, which most vertebrae satisfy. For cervical vertebrae with more than 24 slices, we used a simple numpy function to sample 24 slices evenly:</p>
<pre><code>sample_index = np.linspace(0, len(one_study_cid)-1, sample_num, dtype=int)
</code></pre>
<p>One of the challenges for us was that the training images for this competition are 300GB; saving the cropped high-resolution 3D training images locally would exceed the capacity of our hard disk, so we were forced to record only the crop coordinates [x0:x1, y0:y1, z0:z1] and the corresponding slices' dcm file numbers, and to read and crop on the fly during stage 2 training.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3799319%2F4ab026464572fb40996597d4aaa424c2%2Fimage-20221110043219911.png?generation=1668028409059020&alt=media" alt=""></p>
<h2>Stage 2</h2>
<p>After cropping, the input data shape was (bs, 24, img_size, img_size): 24 channels representing 24 uniformly distributed slices, which is also the seq_len of the GRU.</p>
<p>For data sampling, we ignored the faulty studies 1.2.826.0.1.3680043.20574 and 1.2.826.0.1.3680043.29952.</p>
<p>Regarding data augmentation, we used methods similar to stage 1, with a few new augmentations.</p>
<p>For the model we used a CNN + biGRU + attention, with tf_efficientnetv2_s and resnest50d from the timm library as the CNN backbones. As for other details, we re-initialized the GRU weights, since the default GRU initialization in PyTorch does not seem very good. We also added SpatialDropout, which gave us a small additional improvement (a sketch is shown below).</p>
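<p>For reference, a common PyTorch implementation of SpatialDropout over a (bs, seq_len, dim) feature sequence looks like the sketch below. This is the standard Dropout2d trick, not necessarily the authors' exact code, and the dropout rate is an assumption.</p>
<pre><code>import torch.nn as nn

class SpatialDropout(nn.Module):
    """Drop whole embedding channels across all timesteps instead of single elements."""
    def __init__(self, p=0.2):
        super().__init__()
        self.dropout = nn.Dropout2d(p)

    def forward(self, x):              # x: (bs, seq_len, dim)
        x = x.unsqueeze(2)             # (bs, seq_len, 1, dim)
        x = x.permute(0, 3, 2, 1)      # (bs, dim, 1, seq_len)
        x = self.dropout(x)            # zeroes entire feature channels
        return x.permute(0, 3, 2, 1).squeeze(2)
</code></pre>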
<h2>Things we didn't have time to do</h2>
<ol>
<li>use bbox csv in yolo</li>
<li>Transformer for sequential model</li>
<li>buy new hard-drive :)</li>
</ol>
<h2>Code</h2>
<p><a href="https://github.com/ryanyuerong/RSNA2022RAWE" target="_blank">https://github.com/ryanyuerong/RSNA2022RAWE</a></p>
<p>thanks for catching this, it was a old version of code. I have updated the code in GitHub</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
32nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Wow, I read your solution and realized it. The solution is quite similar but the score is twice as different lol<br>
I should do some additional experiments to figure out what made the difference.</p>
<p>I used original labels. "8 4-channel images" are finally encoded into a single output via global average pooling. It is not the case that 8 images are independently used to predict (eight) labels.</p>
<p>Yes, the cropped voxels have a large overlap between them. The cropped voxels are then resized to a fixed size (32x256x256) via the F.interpolate function, for example:</p>
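<p>A minimal version of that resize step could look like this (the align_corners setting is an assumption):</p>
<pre><code>import torch
import torch.nn.functional as F

def resize_voxel(vol, size=(32, 256, 256)):
    """Resize a cropped (D, H, W) vertebra voxel to a fixed size with trilinear interpolation."""
    vol = torch.as_tensor(vol, dtype=torch.float32)[None, None]   # (1, 1, D, H, W)
    out = F.interpolate(vol, size=size, mode="trilinear", align_corners=False)
    return out[0, 0]
</code></pre>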
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2Fcf946877ffa2055047fade8fb8a8808f%2Frsna2022.png?generation=1666915657071769&alt=media" alt=""></p>
<p>Congrats to all prize and medal winners!<br>
I enjoyed this competition because there are a great variety of options for solving this task. I could not try all of them of course, but for example:</p>
<ul>
<li>classifier vs. object detection approach</li>
<li>3D model vs. 2.5D model</li>
<li>simultaneously detect vertebrae and fracture vs. first detect vertebrae and then classify them (or detect bboxes)</li>
<li>train segmentation mask to extract cervical vertebrae vs. use raw voxel (slices)</li>
<li>calculate overall probability from C1-C7 probabilities vs. train a dedicated model for overall fracture prediction</li>
</ul>
<p>My brief summary of solution is:</p>
<ul>
<li>pipeline<ul>
<li>use segmentation model to extract cervical vertebrae</li>
<li>crop C1-C7 regions as voxels</li>
<li>predict fracture probability for each cervical vertebra (C1-C7) with classification model</li>
<li>use the sum of C1-C7 fracture probabilities as a overall fracture probability (with clipping)</li></ul></li>
<li>segmentation model<ul>
<li>MONAI 3D UNet</li>
<li>surprisingly works well with a small amount of training data</li></ul></li>
<li>classification model<ul>
<li>2.5D CNN (EfficientNetV2-L) + Transformer encoder</li>
<li>use different positional embedding for each cervical vertebra (C1-C7)</li></ul></li>
</ul>
<p>What did not work for me</p>
<ul>
<li>3D CNN classifier<ul>
<li>I prefer 3D CNN approach to 2.5D approach ;(</li>
<li>same as UW-Madison GI Tract Image Segmentation competition (3D CNN works but 2.5D was better)</li></ul></li>
<li>simultaneously predict C1-C7 probabilities with attention (without segmentation)</li>
<li>dedicated overall model</li>
</ul>
<p>I look forward to seeing the other teams' solutions!</p>
<p>Thx! 4 channel 256x256 images (8 images) are fed into the same CNN backbone independently.</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats all and thank you RSNA for another great challenge. Special kudos to <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> (and <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a>) and others who showed model performance can really be pushed more here. </p>
<p>Here is my solution, I may fill out a bit more detail, or correct some of it in time. </p>
<p>Training code : <a href="https://github.com/darraghdog/RSNA22" target="_blank">https://github.com/darraghdog/RSNA22</a><br>
Inference code : <a href="https://www.kaggle.com/code/darraghdog/rsna-2022-3rd-place-solution-inference" target="_blank">https://www.kaggle.com/code/darraghdog/rsna-2022-3rd-place-solution-inference</a><br>
Slides located <a href="https://docs.google.com/presentation/d/1lS4yOTJT4EyaCjODGIO811RGex9jKQdZypzOjY_jxDA/edit?usp=sharing" target="_blank">here</a><br>
Video of solution : <a href="https://youtu.be/f-QA5MLN16Q" target="_blank">here</a></p>
<h3>TL;DR</h3>
<p>Adjacent to the problem, I hoped to get a solution which required less intense work on labelling, so this could be more easily scaled by the spine radiology specialists from the ASNR and ASSR. <br>
I did not use the fracture bounding boxes, and only used high-level data from the segmentation maps. This consisted of two things: (1) a bounding box of the C1-C7 vertebrae, taken as the outer limit of the segmentation, and (2) the ratio of the vertebra volume in each slice divided by the max vertebra volume seen in any slice. This second point is explained in a bit more detail below. If we had bounding box labels, instead of segmentations, for the vertebrae, these could also be used to calculate the ratios. <br>
For point (1) the individual slice-level bounding boxes were not used downstream. Instead, a study-level bounding box for the C1-C7 vertebrae was used, taken from the rollmean max of the individual slice-level bounding boxes. <br>
From point (2), in combination with the fracture labels in <code>train.csv</code> we can get an approximate label for fracture and vertebrae type in each slice to fit models. I only used 2.5D CNN + 1D RNN which is pretty much lifted straight from <a href="https://www.kaggle.com/wowfattie" target="_blank">@wowfattie</a> ‘s <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">first place solution</a> to RSNA two years ago. </p>
<h3>Bounding box preprocessing</h3>
<p>Again lifted from the first place approach two years ago, I found it increased performance to zoom in on the vertebrae. I used efficientnet-v2 to predict five labels - <code>x0, y0, x1, y1, has_bbox</code>: the four corners of the bounding box, and the probability that the slice contains a vertebra. In downstream models, the slices before the C1-C7 vertebrae began along the z-axis, and after they ended, were excluded from training and inference. This range was found using a z-axis-wise rollmean of the <code>has_bbox</code> probability.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F213493%2Fb811845bd8a6afd471f1453695189e65%2Fbbox.png?generation=1666948850696604&alt=media" alt=""><br>
Also in downstream models the vertebrae were cropped in preprocessing. I used the outer bbox of all boxes in the z-axis to crop the whole study in one go with a single box per study. The cropped study was then resized to 512 * 512, and the same augmentation (shiftscale, cutout, etc) was applied across the study. I also cropped and resized the dicom before converting to uint8, in the hope that interpolating the raw dicom values to a larger size would give greater resolution - not sure if this helped or not, but it gave me peace of mind 😊 </p>
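<p>A minimal sketch of how such a study-level crop could be derived from the per-slice predictions; the DataFrame and column names (<code>slice_preds</code>, <code>x0, y0, x1, y1, has_bbox</code>) are illustrative rather than the author's exact code:</p>
<pre><code>import numpy as np
import pandas as pd

def study_crop_box(slice_preds: pd.DataFrame, thresh: float = 0.5, window: int = 9):
    """slice_preds: one row per slice (ordered along z) with predicted
    x0, y0, x1, y1, has_bbox columns (illustrative names)."""
    # Smooth the per-slice vertebra probability along the z-axis.
    smoothed = slice_preds["has_bbox"].rolling(window, center=True, min_periods=1).mean()
    keep = smoothed > thresh                              # slices inside the C1-C7 range

    boxes = slice_preds.loc[keep, ["x0", "y0", "x1", "y1"]]
    # Outer box over all kept slices -> a single crop box per study.
    x0, y0 = boxes["x0"].min(), boxes["y0"].min()
    x1, y1 = boxes["x1"].max(), boxes["y1"].max()
    z_kept = np.where(keep.to_numpy())[0]                 # slice range kept downstream
    return (x0, y0, x1, y1), (z_kept.min(), z_kept.max())
</code></pre>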
<h3>Model 1 : Slice level vertebrae labels</h3>
<p>A 2.5D CNN+1d RNN (same as described below) was used to train a model using the 87 study-level segmentation maps. As mentioned, the label was the ratio of vertebrae volume in the slice divided by the max vertebrae volume seen in any slice. This was trained on z-axis windows of studies - so 32 * 3 slices per sample, aggregated to 32 2.5D images passed through the CNN, and then the RNN predicts the vertebrae ratio. RMSE loss was used. <br>
The slice level CV predictions for C1-C7 vertebrae were multiplied by the study level fracture labels to give slice level fracture labels as seen below. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F213493%2F263b4ef7609e970ee0810902307a0589%2Fpreds.png?generation=1666948962692906&alt=media" alt=""></p>
<h3>Model 2 : Initial sequential model</h3>
<p>The same architecture again (2.5D CNN+1d RNN) was used to train on z-axis windows of studies with RMSE loss on three different targets - 1) Slice level vertebrae ratio 2) Slice level fracture label (seen above) and 3) Max fracture value in the window.<br>
I trained on random 32 * 3 slice windows of studies. A lot of these windows had no fracture so I used a batchsize of 48 (using accumulation 16) to ensure some fractures are in each batch. Experiments with undersampling did not help. <br>
The model was simple enough, but the addition of the attention mechanism over the 1d RNN output helped a lot. Transformers or other architectures did not improve it. Two backbone’s were used from timm - resnest50d and seresnext50. Resnest50d was used by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> in the RSNA STR competition two years ago and performed best. The model trained for a long time - ~9 hours per fold. </p>
<h3>Model 3 : Final sequential model</h3>
<p>The same architecture again (2.5D CNN+1d RNN) was used to train over the full study on the final study-level labels found in <code>train.csv</code>. The CNN backbone was loaded from the checkpoint weights of model 2 and set to no gradients, and a new 1d RNN and attention mechanism were initialized and trained on top of this. With the final labels, the competition metric was used as the loss. The only additional difference in the model was squeezing the embedding layers in very long studies - again, similar to <a href="https://www.kaggle.com/wowfattie" target="_blank">@wowfattie</a> ‘s - if there were more than 192 * 3 slices outputted, torch functional interpolation was used to reshape them to a max sequence of 192. <br>
As mentioned before, for models 2 and 3, a single bounding box was used to crop all slices in a study, and the same augmentation was used across the study. For model 3 only, the <code>has_bbox</code> range was used to exclude slices before the vertebrae started and after they ended. And for model 3, the CNN embeddings were extracted in chunks of 32 * 3 2.5d images. </p>
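<p>The sequence squeezing for very long studies could look roughly like this (a sketch, not the author's exact code):</p>
<pre><code>import torch
import torch.nn.functional as F

def squeeze_sequence(emb: torch.Tensor, max_len: int = 192) -> torch.Tensor:
    """emb: (batch, seq_len, feat). Interpolate along the sequence axis if too long."""
    if emb.shape[1] <= max_len:
        return emb
    emb = emb.transpose(1, 2)                                  # (batch, feat, seq_len)
    emb = F.interpolate(emb, size=max_len, mode="linear", align_corners=False)
    return emb.transpose(1, 2)                                 # (batch, max_len, feat)
</code></pre>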
<h3>Final Submission</h3>
<p>Weights from the bounding box model and model 3 were used for inference: a combination of resnest50d and seresnext50, with 6 and 4 sets of weights respectively from model 3 trained over the full dataset. <br>
I found it easier on RAM to collect batches of studies in uint8 format and normalize on GPU; inference through the CNN was performed in chunks of 32 2.5d images at a time. </p>
<p>Inference time was ~ 4.5 hours. </p>
<p>Models were initialised roughly like, </p>
<pre><code>self.backbone = timm.create_model('resnest50d')
hidden_size = self.backbone.fc.in_features # 2048
self.backbone.fc = torch.nn.Identity()
self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True, bidirectional=True)
</code></pre>
<p>Then forward step (with shapes for illustration)</p>
<pre><code>batchsize, seqlen, ch, h, w = batch['image'].shape # (2,32,3,512,512)
x = batch['image'].view(-1, ch, h, w) # (64,3,512,512)
emb = self.backbone(x) # (64,2048)
emb = emb.view(batchsize, seqlen, -1) # (2, 32, 2048)
logits = self.rnn(emb)[0] # (2, 32, 4096)
</code></pre>
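<p>The attention mechanism over the RNN output is not shown above; a common formulation, which may differ from the one actually used here, is a learned softmax pooling over the time axis:</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Weighted sum over the sequence axis, with weights learned from the RNN output."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                              # x: (batch, seq_len, dim)
        w = torch.softmax(self.score(x), dim=1)        # (batch, seq_len, 1)
        return (w * x).sum(dim=1)                      # (batch, dim)

# e.g. pooled = AttentionPool(4096)(logits)            # logits from the LSTM: (2, 32, 4096)
</code></pre>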
<blockquote>
<p>was the algorithm ever thrown off by stray pixels that can appear anywhere on the image? Do you have a preprocessing step to ignore pixel islands that are less than a number? </p>
</blockquote>
<p>I was just thinking about this today, it probably was - I took a z-axis rolling mean of bounding box coordinates which would somehow counteract this, but not fully. </p>
<blockquote>
<p>I did something similar but I calculated the center of mass and standard deviation of the distribution of the mask and then use it to create bounding boxes (bounding cubes actually). But the boxes are not as tight around the spine as the images you showed above.</p>
</blockquote>
<p>Nice approach, it probably would have helped me. </p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
4th place solution, CSN is all you need for 3D
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I started working on this challenge really late, made first sub 10 days before deadline, so could not run a lot of experiments and used codebase/tricks from <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation" target="_blank">GI tract segmentation challenge</a> and NASA comet detection challenge.</p>
<h3>TLDR</h3>
<p>Two stage approach, 3D segmentation + 3D classification for each vertebra crop</p>
<h3>The main trick</h3>
<p>A lot of failed attempts to use 3D networks for classification are related to overfitting. That's the case for pure 3D convolutions, which usually are not needed; instead we can work with architectures that achieve state-of-the-art performance on video/action classification datasets. <br>
So the options are:</p>
<ul>
<li>inflate some state-of-the-art 2D nets, like effnets, but start using 3D convolutions from the 2nd or 3rd block. It works but still tends to overfit, and ImageNet pretraining is not the best starting point in this case</li>
<li>use 2D networks with LSTM/ConvLSTM/Transformer heads. It works and allows the use of a lot of architectures; the main issue here is that the 3D nature is considered only at the latest stage.</li>
<li>use pretrained networks from the ig65m/kinetics datasets. That gave the best results for me. </li>
</ul>
<p>From <a href="https://paperswithcode.com/sota/action-classification-on-kinetics-400" target="_blank">https://paperswithcode.com/sota/action-classification-on-kinetics-400</a> it is clearly seen that the best convolutional architecture for video classification in 2022 is still <a href="https://arxiv.org/abs/1904.02811" target="_blank">ir-CSN-152</a>. Even though transformers achieve higher scores on the Kinetics datasets, they don't perform as well on small datasets like RSNA's; CSNs are the best choice for small datasets and when one needs fast and accurate 3D segmentation and/or classification.</p>
<p>I used mmaction2's implementation of CSNs.</p>
<h3>Segmentation</h3>
<ul>
<li>UNet like multiclass segmentor</li>
<li>encoder: <strong>ir-CSN-50</strong></li>
<li>decoder: standard unet decoder with nn upsampling but with <a href="https://arxiv.org/abs/1711.11248v3" target="_blank">(2+1)d convolutions</a> (see the sketch after this list)</li>
<li>pure 3d convolutions in decoder lead to NaNs in amp training</li>
</ul>
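<p>For reference, a (2+1)d convolution factorises a full 3D convolution into a spatial convolution followed by a temporal one. A minimal PyTorch sketch of such a block (an illustration, not the author's decoder code):</p>
<pre><code>import torch.nn as nn

class Conv2Plus1d(nn.Module):
    """Factorised 3D conv: a (1, k, k) spatial conv followed by a (k, 1, 1) temporal conv."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, mid_ch: int = None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2), bias=False)
        self.bn = nn.BatchNorm3d(mid_ch)
        self.act = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0), bias=False)

    def forward(self, x):                              # x: (batch, ch, depth, h, w)
        return self.temporal(self.act(self.bn(self.spatial(x))))
</code></pre>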
<p><strong>Training</strong>:</p>
<ul>
<li>2x subsampling(just slice ::2) by z axis</li>
<li>2x linear downsampled images</li>
<li>memory mapping to reduce IO/CPU overhead</li>
<li>AdamW + wd, cosine LR annealing</li>
<li>Loss: focal-jaccard optimized loss for faster multiclass jaccard computation</li>
<li>2D augs, ReplayCompose in Albumentations library, lightweight geometric augs + hflip</li>
</ul>
<h3>Classification</h3>
<ul>
<li>4 folds of ir(ip)-CSN-152 with global max pooling</li>
<li>multilabel with 8 classes; if less than 30% of a vertebra is visible, set that vertebra's label to 0 and recompute the overall label, otherwise keep the labels as provided</li>
</ul>
<p><strong>Training</strong></p>
<ul>
<li>3-channel input (img, img, integer-encoded segmentation mask)</li>
<li>2x subsampling(just slice ::2) by z axis</li>
<li>using 40 slices(80 in original data) around each vertebra</li>
<li>crops were resized to 256x256</li>
<li>BCE loss</li>
<li>target metric as validation</li>
<li>AdamW + wd, cosine LR annealing</li>
<li>Augmentations: 2D augs, replay compose in Albumentations library, flips, rotations, geometric. Needed to make more augs compared to segmentation pipeline</li>
</ul>
<h3>Code</h3>
<ul>
<li>inference kernel and weights: <a href="https://www.kaggle.com/code/selimsef/rsna-csn-segmentor-classifier" target="_blank">https://www.kaggle.com/code/selimsef/rsna-csn-segmentor-classifier</a></li>
<li>github code for training: <a href="https://github.com/selimsef/rsna_cervical_fracture/" target="_blank">https://github.com/selimsef/rsna_cervical_fracture/</a></li>
</ul>
<p><a href="https://www.kaggle.com/mariuszwisniewski" target="_blank">@mariuszwisniewski</a> I used integer encoded predicted segmentation mask as an additional input. That's not the best approach but it was the first thing (combined with 3D crop around each vertebra) that worked for me. The other approach is to just use binary masks.<br>
Why not just img + mask? Was a bit lazy to recompute the first conv weights to 2 channels.</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
58th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Notebook link <br>
<a href="https://www.kaggle.com/code/arunodhayan/effib1ns/notebook?scriptVersionId=108829887" target="_blank">https://www.kaggle.com/code/arunodhayan/effib1ns/notebook?scriptVersionId=108829887</a></p>
<p>Big thanks to the organizers for the great competition!<br>
Also thanks to all competitors for sharing their experiments here</p>
<h2>Models</h2>
<p>In this competition, smaller models performed better. I used EfficientnetB1-B5_ns, EfficientnetV2s, and Seresnet101.</p>
<p>All models were trained on 3 different resolutions - 384*384, 512*512, 768*768</p>
<p>5 Fold Cross-validation</p>
<h2>Augmentations</h2>
<pre><code>transforms_train = transforms.Compose(
    [
        transforms.Resize((IMG_SIZE, IMG_SIZE)),
        transforms.RandomHorizontalFlip(p=0.3),
        transforms.RandomVerticalFlip(p=0.3),
        transforms.RandomResizedCrop(IMG_SIZE),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ]
)
</code></pre>
<h2>Best Solution</h2>
<p>3 Model Ensemble - 0.43 public 0.49 private (Without TTA) - Efficientnet B1-NS Resolution 512 <br>
9 Model Ensemble - 0.44 public 0.49 private (Without TTA) - Efficientnet B1-NS Resolution 384<br>
5 Model Ensemble - 0.45 public 0.50 private (Without TTA) - EfficientnetV2s Resolution 384</p>
<p>3 Model Ensemble - 0.43 public 0.50 private (With TTA) - Efficientnet B1-NS Resolution 512<br>
9 Model Ensemble - 0.44 public 0.50 private (With TTA) - Efficientnet B1-NS Resolution 384</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
5th place solution (Team Speedrun)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>5th place solution</h1>
<p>Thanks to kaggle and the sponsors for organizing this interesting and leak-free competition which had a lot of different angles to explore. Coming directly from the DFL and joining quite late here, we were explicitly looking for the sprint aspect and to challenge ourselves to derive a good solution in only 11 days. Hence our team name: Speedrun ( <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> ). We could build on ideas shared from previous 3D-RSNA competitions as well as competitions we did before, and we are proud of achieving 5th rank in only this short timespan. We also want to thank the authors of various github repositories which enabled us to quickly iterate and try ideas, such as <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a>, <a href="https://github.com/albumentations-team/albumentations" target="_blank">albumentations</a> and <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation models pytorch</a></p>
<h2>TL;DR</h2>
<p>Our solution is an ensemble of two quite similar approaches which follow a 3-stage paradigm:</p>
<ol>
<li>Train a 2D classification and segmentation model for vertebrae using the provided segmentation labels</li>
<li>Use the resulting model to predict which vertebrae are visible in each dicom image and multiply the result with the overall study fracture label to get 2D image-level labels for whether a certain vertebra is fractured or not.</li>
<li>Collect all image level labels per study and train an aggregation model to predict the given study level labels. </li>
</ol>
<p>The following figure gives a summary of our solution:<br>
<img src="https://i.imgur.com/2Q1RU35.jpeg" alt=""></p>
<h2>Stage 1</h2>
<p>The first stage of our approach uses only the 87 studies for which we were given 3D-segmentation masks. We used the 3D-segmentation labels to derive two types of labels for a given 2D slice. On the one hand, we created 7-dimensional binary labels for each 2D slice indicating whether a certain vertebra is visible or not, and trained an EfficientNet-B5 (model S1A). On the other hand, we used the vertebrae segmentation mask for each 2D slice to train an EfficientNet-B3-UNet (model S1B).</p>
<p>Using model S1A we predict which vertebrae are visible for all 2000 studies in the train set using a threshold of 0.5. We then map the target labels from study level to 2D by multiplying the prediction with the overall study labels of whether a vertebra is fractured. The pseudo labels created this way are used in Stage 2. Additionally, S1B is used to predict a segmentation mask of the visible vertebrae and derive a bounding box (x_min, x_max, y_min, y_max) from it for each 2D slice. We aggregate all bounding boxes per study using the 0.05 quantile of x_min and y_min and the 0.95 quantile of x_max and y_max to get a single box per study, which is used in stage 2 for cropping the region of interest.</p>
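<p>The quantile aggregation of the per-slice boxes could be written as follows (a sketch with an assumed array layout):</p>
<pre><code>import numpy as np

def study_box(boxes: np.ndarray):
    """boxes: (n_slices, 4) per-slice [x_min, y_min, x_max, y_max] from the 2D UNet.
    Aggregate to a single box per study using the 0.05 / 0.95 quantiles."""
    x_min = np.quantile(boxes[:, 0], 0.05)
    y_min = np.quantile(boxes[:, 1], 0.05)
    x_max = np.quantile(boxes[:, 2], 0.95)
    y_max = np.quantile(boxes[:, 3], 0.95)
    return x_min, y_min, x_max, y_max
</code></pre>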
<h2>Stage 2</h2>
<p>For our stage 2 models, we directly aim at predicting the probability for a fracture at each of the seven cervical vertebrae. The labels for each slice have been built by our stage 1 models.</p>
<p>All our models in the second stage follow a 2.5D + 3D schema. The input is a 2.5D slice with 3 channels, where the center channel is the z-dimension of interest, and the two neighboring slices form the other channels. This is then run through a 2D backbone, and the last layer(s) of the backbone are transformed to 3D Convolution layers and are average-pooled across the slices. For the 3D part we take a step-size of 5 and also only take two extra slices in each direction. </p>
<p>So for a sample input frame with index 15, we would first stack the channels 14,15,16, and then add the frames at position -5, and +5, which also have 3 channels stacked, so we add the stacked channels 9,10,11 and 19,20,21. The input dimension is then (3,3,height,width).</p>
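<p>A small sketch of how such a (3, 3, height, width) input could be assembled (array names and the clipping at the volume borders are assumptions):</p>
<pre><code>import numpy as np

def make_25d_3d_input(volume: np.ndarray, idx: int, step: int = 5) -> np.ndarray:
    """volume: (n_slices, H, W). Returns a (3, 3, H, W) sample for slice `idx`:
    three 2.5D images (channels idx-1, idx, idx+1) taken at offsets -step, 0, +step."""
    def stack25d(i):
        i = np.clip([i - 1, i, i + 1], 0, len(volume) - 1)
        return volume[i]                               # (3, H, W)
    return np.stack([stack25d(idx - step), stack25d(idx), stack25d(idx + step)])
</code></pre>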
<p>We train two types of stage 2 models:</p>
<ul>
<li>Full images: The input here is the full regular extracted images from the DICOM. </li>
<li>ROI cropped images: The input here is the extracted ROI from a first stage model, see above description for details.</li>
</ul>
<p>It is very easy to overfit this data, so we apply heavy regularization; some of the most important techniques:</p>
<ul>
<li>Mixup</li>
<li>Random crop + resize</li>
<li>Random shift, scale, rotate</li>
<li>Random brightness</li>
</ul>
<p>All our models are very lightweight efficientnetv2 models. For inference, it was very helpful for us to upscale the images by at least 1.125 and sometimes also apply a center crop.</p>
<h2>Stage 3</h2>
<p>As our second stage model is trained on individual slices, our models cannot fully learn the actual level of interest, which is on a study level. So we train a simple 3rd stage feed-forward neural network, that directly optimizes the competition metric, and has as input only the mean, min and maximum of all individual predictions of a study. This helps specifically to boost the overall prediction score, which we cannot optimize directly also in the second stage.</p>
<p>The final sub is a 30-70 blend of the max-aggregated output from 2nd stage and 3rd stage models and improves our scores by 2-3 LB points.</p>
<p>edit: </p>
<ul>
<li>We uploaded a video explaining our solution <a href="https://www.youtube.com/watch?v=c_YZHwhK0Jo" target="_blank">here</a></li>
<li>The inference kernel is shared <a href="https://www.kaggle.com/code/ilu000/rsna2022-5th-place-solution-inference" target="_blank">here</a></li>
<li>Github repository with training code and instructions can be found <a href="https://github.com/pascal-pfeiffer/kaggle-rsna-2022-5th-place" target="_blank">here</a></li>
</ul>
|
RSNA 2022 Cervical Spine Fracture Detection
|
7th place solution, segmentation for detection tasks
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for holding this interesting competition and congrats to all those who worked hard to develop solutions for the competition. </p>
<h2>Summary</h2>
<p>In our solution, 3D segmentation methods are utilized for the fracture detection task. Since the host does not provide segmentation labels for the fracture regions, we use data augmentations and bounding box GT to generate pseudo segmentation masks. Our final framework consists of 3 stages:</p>
<ul>
<li>Stage 1: Segment C1-C7 using 3D-UNet</li>
<li>Stage 2: Segment bone fracture region using 3D-UNet</li>
<li>Stage 3: Predict final score using outputs from Stage 1 and Stage 2</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2727680%2F9f121f7059058523df7cbc6cecf6e4dc%2Foverview.png?generation=1667922758651944&alt=media" alt=""></p>
<h2>Stage 1: Segment C1-C7 using 3D-UNet</h2>
<ul>
<li><p>Data:</p>
<ul>
<li>Competition data: 87 cases</li>
<li>TotalSegmentator (CC-BY-4.0): 160 cases (<a href="https://zenodo.org/record/6802614#.Y2nkrHYzZPY" target="_blank">https://zenodo.org/record/6802614#.Y2nkrHYzZPY</a>)</li>
<li>Verse2020(CC-BY-SA-4.0): 42 cases (<a href="https://github.com/anjany/verse" target="_blank">https://github.com/anjany/verse</a>)</li></ul>
<p>External data may not be necessary. Compared to fracture detection, segmenting C1-C7 is a relatively easy task; using only the competition data also gives accurate results.</p></li>
<li><p>Model: 3D-UNet (32, 64, 128, 256, 320, 320)</p></li>
<li><p>Loss: DICE + BCE</p></li>
<li><p>Input size: (160, 128, 128)</p></li>
<li><p>Image resolution: (1.5, 1.5, 1.5)mm</p></li>
<li><p>Batch size: 2</p></li>
<li><p>Training iterations: 250k, ~2days on 3090</p></li>
<li><p>Post-processing: keep largest connected component for C1-C7</p></li>
</ul>
<h2>Stage 2: Segment bone fracture region using 3D-UNet</h2>
<p>There are 2019 CT images and 2019 patient-level GT, but only 235 patients’ fracture bounding boxes are provided, so we use pseudo-labeling techniques to generate fracture segmentation masks.</p>
<ul>
<li><p>Step1: Initial Pseudo fracture segmentation mask generation <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2727680%2F600fed9cd3a252ce1f126f3cacd21b4f%2Ffig2.png?generation=1667922802526478&alt=media" alt=""></p>
<ul>
<li>Type1: generated using data augmentations </li>
<li>Type2: generated using the 235 bounding box labels (removing bounding boxes that fall outside the bone)</li>
<li><p>Step2: training models using the 235 Initial Pseudo fracture segmentation mask </p>
<ul>
<li>Model: 3D-UNet (32, 64, 128, 256, 320, 320)<ul>
<li>Loss: DICE + BCE</li>
<li>Input size: (96, 192, 192)</li>
<li>Image resolution: (0.8, 0.4, 0.4)mm</li>
<li>Batch size: 2</li>
<li>Training iterations: 250k, ~2days on 3090</li></ul></li></ul></li>
<li><p>Step3: Pseudo mask refinement (finally we got Pseudo masks of 823 cases )<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2727680%2F933217791478093bad656e253f904fb5%2Ffig3.png?generation=1667922829959202&alt=media" alt=""><br>
There are 2019 patient-level GT but only 235 patients’ fracture bounding boxes are provided, so we can first use the models to get predictions for all 2019 cases and then use the patient-level GT to refine these results. We modify the pseudo masks where the model predictions are inconsistent with the patient-level information. For example, if the patient-level GT indicates C1 is normal, we can directly remove all predicted fracture masks on C1.</p></li>
</ul>
<h2>Stage 3: Predict final score using outputs from Stage 1 and Stage 2</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2727680%2F869f095e6dabd72e650457b752d2828f%2Ffig4.png?generation=1667922959405919&alt=media" alt=""><br>
After getting predictions from stage 1 and stage 2, we resize all predictions to (96, 96, 96) and train a very tiny 3D-CNN to get the final score (it takes only a few minutes to train a single model). </p>
<ul>
<li>Loss: BCE</li>
<li>Input size: (96, 96, 96)</li>
<li>Batch size: 4</li>
<li>Training iterations: 20k, ~1 hour on 3090</li>
<li>Post-processing: Since the competition evaluation metric is sensitive to predictions near 0 and 1, we clip all predicted scores to a range (see the sketch after this list); here are the best parameters for CV (C1-C7):<ul>
<li>'min_score': [0.01, 0.015, 0.015, 0.01, 0.02, 0.032, 0.048],</li>
<li>'max_score': [0.999, 0.993, 0.99, 1.0, 0.943, 0.997, 0.999]</li></ul></li>
</ul>
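<p>Applying these clipping ranges is a one-liner; a small sketch using the values above (the <code>preds</code> array is a placeholder):</p>
<pre><code>import numpy as np

min_score = np.array([0.01, 0.015, 0.015, 0.01, 0.02, 0.032, 0.048])
max_score = np.array([0.999, 0.993, 0.99, 1.0, 0.943, 0.997, 0.999])

preds = np.random.rand(10, 7)                    # placeholder: (n_studies, 7) C1-C7 scores
preds = np.clip(preds, min_score, max_score)     # broadcasts the per-vertebra bounds
</code></pre>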
<h2>Final submission</h2>
<ul>
<li>Final Results: 3*stage1 + 4*stage2 + 5*stage3 models; it takes around 7 hours to finish inference. (A single model can get similar results; the ensembling did not really help.)</li>
<li>Private Leaderboard Score: 0.2634</li>
<li>Public Leaderboard Score: 0.2127</li>
</ul>
<h2>Code</h2>
<p>This solution is mainly based on nnUNet(<a href="https://github.com/MIC-DKFZ/nnUNet" target="_blank">https://github.com/MIC-DKFZ/nnUNet</a>, Apache-2.0 license) and batchgenerators (<a href="https://github.com/MIC-DKFZ/batchgenerators" target="_blank">https://github.com/MIC-DKFZ/batchgenerators</a>, Apache-2.0 license)</p>
<ul>
<li>Training: <a href="https://github.com/LSL000UD/RSNA2022-7th-Place" target="_blank">https://github.com/LSL000UD/RSNA2022-7th-Place</a></li>
<li>Inference: <a href="https://www.kaggle.com/code/lsl000ud/rsna2022-7th-place-inference" target="_blank">https://www.kaggle.com/code/lsl000ud/rsna2022-7th-place-inference</a></li>
</ul>
|
RSNA 2022 Cervical Spine Fracture Detection
|
8th Place Solution + Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for hosting RSNA 2022 with such a huge diversity of data, there are so many approaches to learn from.</p>
<p>Many Many Congratulations to the winners and everyone who completed their goals. For me, my goal was to get my first gold to be a solo gold.</p>
<h2>Summary</h2>
<p>[NOTE]: I use "vertebrae" and "bone" interchangeably</p>
<p>-Sagittal segmentation model to classify which image belongs to what vertebrae in a single go<br>
-Bone segmentation model to crop the ROI<br>
-Slice-level classification model<br>
-RNN on extracted features</p>
<h2>Sagittal Segmentation</h2>
<p>I trained a Unet-B1 binary segmentation model on sagittal view, 0 for background and 1 for bone, used this model's pretrained weights to train another model on the same data with class 0 for background class 1 for C1… class 7 for C7 and class 8 for any bone other than C1-C7</p>
<p>I predicted the whole dataset with it to assign bone to slice.</p>
<h2>Bone Segmentation</h2>
<p>I trained a Unet-B1 segmentation model (weights not taken from sagittal model) on the slices, this time in axial view with the same 8 classes.</p>
<p>I predicted the whole dataset and got ROI bounding box for every slice.</p>
<h2>Data Preparation</h2>
<p>Before we get into how I did the slice-level classification, we have to keep in mind the data trick which made this approach possible and was overlooked by most people. It is more of an assumption, as the organizers have not confirmed it:</p>
<p>Every slice which has a fracture bounding box annotation is fractured, and for that particular patient, every other slice is non-fractured…</p>
<p>Now, to balance out the dataset, long story short: I took all the slices from train_bounding_boxes.csv as fractured=1, every other slice for those patients as fractured=0, and the 280 patients which do not have any fractures, meaning that all their slices are also fractured=0</p>
<p>Then I took the liberty to clean the data by removing all the non-fractured slices whose label is either 0 or 8 (referenced earlier in the Sagittal Segmentation). </p>
<p>Now we have a dataset with 515 patients and a bit over 100,000 slices with just about 7% of them labeled as fractured.</p>
<h2>Slice-Level Classification</h2>
<p>I trained a binary-classification efficientnet b5 model with an image size of 456x456.<br>
I used 2.5D for the slice, with [slice-1, slice, slice+1] as 3 channels, cropped the bone ROI and perfectly tuned the augmentations. The interesting thing I did for augmentation is a resize that preserves the aspect ratio, with this bit of code:</p>
<pre><code>A.Compose([
    A.LongestMaxSize(CFG.SZ_H),
    A.PadIfNeeded(CFG.SZ_H, CFG.SZ_W, border_mode=0, p=1),
</code></pre>
<p>And now the input looks like this, this image is labeled fractured</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1794509%2Fd15c9728ee434337e2021690caf4d021%2Fdownload.png?generation=1666959968765797&alt=media" alt=""></p>
<p>I achieved .916 slice-level AUC doing this, and then I pseudo-labeled the rest of the dataset to reach over .94 AUC. There were many different ways I tried the pseudo-labeling, and most of them ended up a bit over .94 AUC, so I won't go into which one I finalized, as it does not need to be very specific. The point to keep in mind is that not all of the remaining ~1500 patients are going to be good as pseudo labels, so don't waste resources pseudo-labeling images which do not even have the potential of being fractured - you can know this from the competition's train.csv file.</p>
<p>Now, predicting on all of the images of a patient in inference and then using <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/193422" target="_blank">this</a> with modifications, I achieved a .26 public which scored .27 in private (most of my other .26 public scores were not this robust though).</p>
<h2>RNN</h2>
<p>The input would be BSx64x2048; if a vertebra has fewer than 64 slices then pad with zeros, else take the first 64 slices.</p>
<p>I used 2 bidirectional GRU layers, fed the output into both an attention layer and a Conv1D layer, concatenated the output of those 2 layers, reshaped to (BS/7)x7x2048, flattened to (BS/7)x(2048*7), and fed that into a linear layer to get a (BS/7)x8 output.</p>
<p>The input was arranged so that the output would be a proper study-level prediction. If it does not make sense, either I am explaining it wrong or it is completely wrong - I don't have any experience with 1D or RNN models, but with a lot of tries this is what worked for me, scoring a .23 public and .26 private.</p>
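<p>Reading the description above, the study head could be sketched roughly as follows; the hidden sizes and the assumption that the 7 vertebrae of a study are consecutive along the batch dimension are mine, and the author's exact code is in the linked notebooks:</p>
<pre><code>import torch
import torch.nn as nn

class StudyHead(nn.Module):
    """Sketch: per-vertebra feature sequences -> study-level logits.
    Input x: (BS, 64, 2048) where BS = n_studies * 7 (one sequence per vertebra)."""
    def __init__(self, feat_dim=2048, hidden=512, n_out=8):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)      # -> (BS, 64, 2*hidden)
        self.att = nn.Linear(2 * hidden, 1)                          # attention scores
        self.conv = nn.Conv1d(2 * hidden, 2 * hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(7 * 4 * hidden, n_out)                   # 7 vertebrae concatenated

    def forward(self, x):
        h, _ = self.gru(x)                                           # (BS, 64, 1024)
        w = torch.softmax(self.att(h), dim=1)                        # (BS, 64, 1)
        att_out = (w * h).sum(dim=1)                                 # (BS, 1024)
        conv_out = self.conv(h.transpose(1, 2)).amax(dim=2)          # (BS, 1024)
        feat = torch.cat([att_out, conv_out], dim=1)                 # (BS, 2048)
        feat = feat.view(-1, 7 * feat.shape[1])                      # (BS/7, 7*2048)
        return self.fc(feat)                                         # (BS/7, 8): C1-C7 + overall
</code></pre>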
<h2>What did not work:</h2>
<p>Lots of ideas but the important ones are:</p>
<p>Normally not something to put in here, but ensembling: my ensembles did not work, probably because I do not know how to make RNN models well - I think that was my major shortcoming in this competition.</p>
<p>I did get a bit better CV using ensemble, but it did not reflect on Public and Private LB.</p>
<p>Aux Loss with bounding box, like it was used in last year's RSNA</p>
<p>Training study-level with hope to guide it with slice-level predictions, I did it exactly like many top solutions have done, but in my case it did not do better than slice-level.</p>
<p>The competition was very much a rollercoaster for me, with lots of false alarms of better CV that did not follow through on the LB (and the private leaderboard consistently followed the public leaderboard in my submissions below .26 public)</p>
<p>I struggled with RNN models for quite a long time stuck on .26, got to know the importance of them for the task.</p>
<h2>Code</h2>
<p><a href="https://www.kaggle.com/code/harshitsheoran/rsna-2022-8th-place-1-sagittal" target="_blank">1. Sagittal</a><br>
<a href="https://www.kaggle.com/code/harshitsheoran/rsna-2022-8th-place-2-axial" target="_blank">2. Axial</a><br>
<a href="https://www.kaggle.com/code/harshitsheoran/rsna-2022-8th-place-3-fracture" target="_blank">3. Fracture</a><br>
<a href="https://www.kaggle.com/code/harshitsheoran/rsna-2022-8th-place-4-study" target="_blank">4. Study</a><br>
<a href="https://www.kaggle.com/code/harshitsheoran/8th-place-inference/" target="_blank">5. Inference</a></p>
<p>It does not particularly add anything, and it is not a blend; I was just experimenting with how I could get the labels (here, the vertebra number) of the axial view using the sagittal view, and then I kept it, as it was so efficient - just a single image pass to know it all</p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
Open sourcing my solution [2D Model, PyTorch]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Even though our team dropped 350 places in private LB I still wanted to share our approach (& code) in the spirit of collaboration and helping others learn. First I want to say a huge thank you to my team mate <a href="https://www.kaggle.com/jirkaborovec" target="_blank">@jirkaborovec</a>, kaggle and all the hosts - this really was a fantastic competition.</p>
<h2>Model architecture</h2>
<p><a href="https://postimg.cc/mzg7csjV" target="_blank"><img src="https://i.postimg.cc/264GKjZp/2-D-model-architecture.png" alt="2-D-model-architecture.png"></a></p>
<p>Our model was inspired by <a href="https://www.kaggle.com/code/vslaykovsky/train-pytorch-effnetv2-baseline-cv-0-49" target="_blank">Vladimir's</a> that most have seen and <a href="https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175412" target="_blank">1st place from the Melanoma Classification comp</a>. We used image data and meta data simultaneously in the hopes of helping the model better predict which vertebrae is in each image.</p>
<h2>Feature engineering</h2>
<p>We extracted metadata (see my <a href="https://www.kaggle.com/datasets/samuelcortinhas/rsna-2022-spine-fracture-detection-metadata" target="_blank">dataset here</a>) from the dicom files including slice number, slice thickness and patient position. We then performed feature engineering on this:</p>
<ul>
<li>Calculate maximum slice number</li>
<li>Calculate ratio of slice number to max slice</li>
<li>Create indicator if images are reversed in z-axis (i.e. order of vertebrae is reversed).</li>
</ul>
<h2>Learning rate scheduler</h2>
<p><img src="https://i.postimg.cc/5tbHbxHq/lrschedule.png"></p>
<p>We used CosineAnnealingWarmupRestarts from the <a href="https://github.com/katsura-jp/pytorch-cosine-annealing-with-warmup" target="_blank">github here</a>. Each cycle is one epoch. Optimisers tried included Adam and AdamW. </p>
<h2>Augmentations</h2>
<p><img src="https://i.postimg.cc/0yFzYBVr/augs.png"></p>
<ul>
<li>A.Resize(*(256,256), interpolation=cv2.INTER_LINEAR)</li>
<li>A.HorizontalFlip(p=0.35)</li>
<li>A.ShiftScaleRotate(…, p=0.4)</li>
<li>A.RandomBrightnessContrast(…, p=0.5)</li>
<li>A.OneOf([A.GridDistortion(p=1.0),A.ElasticTransform(p=1.0)], p=0.25)</li>
</ul>
<p>A=Albumentations library. See links at the bottom for code.</p>
<h2>Torch, GPU, epochs</h2>
<p>We used PyTorch. It was actually my first time using it and I really like it. We only used kaggle GPU and kaggle notebooks to train our models. 1 epoch took roughly 10 hours. Best submission used 6 epochs. </p>
<h2>Code</h2>
<p>I am open sourcing all my code for anyone that wants to look. I've tried to comment it as best as possible. Happy to answer questions if anything is unclear.</p>
<ol>
<li><a href="https://www.kaggle.com/samuelcortinhas/rnsa-2d-model-train-pytorch" target="_blank">RNSA - 2D model [Train] [PyTorch]</a> (training script)</li>
<li><a href="https://www.kaggle.com/samuelcortinhas/rnsa-2d-model-validate-pytorch" target="_blank">RNSA - 2D model [Validate] [PyTorch]</a> (evaluate on competition metric)</li>
<li><a href="https://www.kaggle.com/samuelcortinhas/rnsa-2d-model-infer-pytorch" target="_blank">RNSA - 2D model [Infer] [PyTorch]</a> (submission notebook)</li>
</ol>
<h2>Lessons learnt</h2>
<ul>
<li>If there was more time I would have liked to train a segmentation model</li>
<li>Crop images to focus on region of interest</li>
<li>Look into 2.5D</li>
</ul>
|
RSNA 2022 Cervical Spine Fracture Detection
|
[38th Place] Single Stage Single Model Efficientnetv2
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes. 3D models look important in a lot of solutions.<br>
Congratulations to you too!</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4630396%2Fddfb714d23d450f8bf5d50fcac1e8ff5%2Frsna_pipeline.png?generation=1666947796450371&alt=media" alt=""></p>
<p>Thanks to RSNA and Kaggle for conducting such an exciting competition, and congratulations to all the winners!</p>
<p>It is a modified version of the pipeline shared by <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> in <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/348462" target="_blank">this</a> discussion. </p>
<h4>CV Strategy</h4>
<p>I use a 5 fold strategy, grouped by StudyInstanceUID and stratified by the fractures in all vertebrae (A column is created concatenating all the fractures like 0_1_0_0_0_1_0 and then used to stratify)</p>
<h4>Preprocessing</h4>
<p>I use windowing to preprocess the Dicom image used in previous RSNA solutions. The three window values were derived from <a href="https://arxiv.org/abs/2010.13336" target="_blank">this</a> paper. </p>
<pre><code>def window(img, WL=50, WW=350):
    upper, lower = WL+WW//2, WL-WW//2
    X = np.clip(img.copy(), lower, upper)
    X = X - np.min(X)
    X = X / np.max(X)
    X = (X*255.0).astype('uint8')
    return X

def dicom_load(self, siuid, slic):
    path = f"data/train_images/{siuid}/{slic}.dcm"
    img = dicom.dcmread(path)
    img.PhotometricInterpretation = 'YBR_FULL'
    data = img.pixel_array
    slope = img[('0028','1053')].value
    intercept = img[('0028','1052')].value
    data = data*slope + intercept
    windowed = np.stack((window(data, WL=80, WW=300),
                         window(data, WL=500, WW=1800),
                         window(data, WL=400, WW=650)
                         ), axis=-1)/255.
    return windowed
</code></pre>
<h4>Training Data</h4>
<p>Each 3-Channel slice is concatenated with +1/-1 Neighbour slices to create a 9-channel Input. This 9 Channel input predicts the fracture probabilities and vertebrae presence of the middle slice.</p>
<p>During inference, this 3-slice window is shifted by stride 2 for faster Inference.</p>
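<p>A small sketch of the 9-channel input construction (the array layout is an assumption):</p>
<pre><code>import numpy as np

def nine_channel_input(slices: np.ndarray, i: int) -> np.ndarray:
    """slices: (n_slices, H, W, 3) windowed slices (3 channels each, as above).
    Concatenate slice i with its -1/+1 neighbours into a 9-channel input."""
    lo, hi = max(i - 1, 0), min(i + 1, len(slices) - 1)
    stacked = np.concatenate([slices[lo], slices[i], slices[hi]], axis=-1)   # (H, W, 9)
    return stacked.transpose(2, 0, 1)                                        # (9, H, W)
</code></pre>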
<h4>Model</h4>
<p>I use <code>Timm</code> library to use pretrained models and modify the first layer to take a 9-channel input.</p>
<pre><code>self.model = timm.create_model('tf_efficientnetv2_l', pretrained=True, num_classes=0)
# padding=1 instead of padding='same': PyTorch does not allow padding='same' with stride > 1
self.model.conv_stem = nn.Conv2d(9, 32, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False)
</code></pre>
<h3>What didn't work</h3>
<p>This is a huge list, but I will narrow it down to 3 approaches to predict patient_overall -></p>
<ul>
<li>In the above approach, the sequential information between the slices is only used in the +1/-1 concatenation. To leverage more context, I generated embeddings for all the slices and then tried to predict the <code>patient_overall</code> using an LSTM/Transformer.</li>
</ul>
<p>Possible Reason for not working: This Stage2 model had only 2k data points to learn from, which might not have sufficed.</p>
<ul>
<li>Uniformly/Randomly select a fixed number of slices from the patient scans and train a 3D Classifier.</li>
</ul>
<p>Possible Reason for not working: Fractures can only be seen in some slices, and those specific slices need to be filtered out.</p>
<ul>
<li>Trained an XGBoost to predict patient overall from the fracture probabilities of the C1-C7 vertebrae.</li>
</ul>
<p>Possible Reason for not working: Could not beat the prediction performance of <code>1-np.prod(1-c1c7)</code> </p>
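<p>The rule-based baseline referred to here simply treats the per-vertebra predictions as independent events:</p>
<pre><code>import numpy as np

c1c7 = np.array([0.02, 0.10, 0.65, 0.05, 0.03, 0.08, 0.01])   # example C1-C7 fracture probs
patient_overall = 1 - np.prod(1 - c1c7)    # probability that at least one vertebra is fractured
</code></pre>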
<p>I tried training resnet models as well, but they didn't perform as well as efficientnet. </p>
|
RSNA 2022 Cervical Spine Fracture Detection
|
[6th place] Solution Overview: 3D CNN + TD-CNN
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2022 Cervical Spine Fracture Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizers for putting together this interesting competition, and congratulations to all of the competitors on their efforts. I'm glad that the shakeup was kind to me on the private LB. One of my goals after becoming GM was to win a gold medal with <10 submissions, and I'm happy to accomplish that with this challenge. This is a brief overview of my solution. </p>
<p>A more detailed writeup is available here: <a href="https://docs.google.com/document/d/1GcHeUDks2dnECKmJn97nF2djFRTVVnGpORFpUH0nMuU/edit" target="_blank">https://docs.google.com/document/d/1GcHeUDks2dnECKmJn97nF2djFRTVVnGpORFpUH0nMuU/edit</a></p>
<h2>Summary</h2>
<p>-3D cervical spine segmentation<br>
-Individual vertebra extraction<br>
-Stage 1 models: 3D CNN and TD-CNN for vertebra-level fracture classification<br>
-Stage 2 models: transformers for final exam-level classification using features extracted from stage 1 models </p>
<h2>Segmentation</h2>
<p>I used a 3D DeepLabV3+ encoder-decoder architecture with X3D 3D CNN backbone (<a href="https://pytorchvideo.readthedocs.io/en/latest/api/models/x3d.html" target="_blank">https://pytorchvideo.readthedocs.io/en/latest/api/models/x3d.html</a>) trained at 192x192x192 resolution to generate segmentations of C1-C7. </p>
<p>Models were first trained on the 87 semantic segmentation labels provided by the organizers. I then pseudolabeled the remaining studies with these models and retrained on the whole dataset. </p>
<p>Using the output of the segmentation models, one can extract a cube containing the vertebra of interest. Inevitably, there is some overlap with other vertebra levels due to their orientation within the scan. </p>
<h2>Stage 1: 3D CNN</h2>
<p>Once a cube is extracted for each vertebra, I trained an X3D-L 3D CNN to perform binary fracture classification for each level. It was straightforward to map each vertebra to a binary label, as we're provided with the fractured levels for each study. The input size was 64x288x288 per vertebra. I also modified the model so that the z-stride was 1 for all layers. Thus, the first dimension (64) was not downsampled until final pooling. </p>
<p>After training the 3D classification model, features (432-D) were extracted from each vertebra. Thus, each study was represented by a 7x432 sequence. </p>
<h2>Stage 2: TD-CNN</h2>
<p>A TD-CNN is simply a 2D CNN feature extractor with a sequence model head. In my case, I used a 2-layer transformer. For a given volume NxHxW, the 2D CNN extracts a D-dimensional feature for each individual image (HxW) in the volume, which are then input (NxD) to the sequence model head. Ultimately, the model is trained end-to-end. </p>
<p>I trained this model in 3 parts. First, I trained a TF-EfficientNetV2-S model to act as the 2D CNN feature extractor. </p>
<p>In order to train this model, I needed image-level labels. While there was a subset of the dataset which did have those labels, I was able to pseudolabel the entire dataset by a method I call class activation sequence. </p>
<p>Using the 3D CNN fracture classification model, I could generate a 3D class activation map for each vertebra by removing the pooling and classification layers and then obtaining a weighted average of the 3D feature maps using the classification layer weights. For each z-axis image in the 3D feature map, the max value was taken, converting the 3D feature map into a 1D sequence. This sequence was then rescaled to [0, 1] and resampled back to the original number of slices so a single value could be corresponded to each slice.</p>
<p>e.g., (64, 288, 288) -> (432, 64, 9, 9) -> (64, 9, 9) -> (64, ) -> (original # of slices, )</p>
<p>Because one vertebra-level input usually contained more than 1 level, there was overlap between these sequences. To deal with this, I just averaged the overlapping values. </p>
<p>I thresholded the values at 0.5 to generate pseudolabels for each image in the dataset. I did not use the provided image-level labels for training. </p>
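<p>A sketch of the class activation sequence described above (the tensor shapes follow the example; the function itself is an illustration rather than the author's code):</p>
<pre><code>import torch
import torch.nn.functional as F

def class_activation_sequence(feat_map, cls_weight, n_slices):
    """feat_map: (C, D, H, W) 3D feature map before pooling, e.g. (432, 64, 9, 9).
    cls_weight: (C,) weights of the binary classification layer.
    Returns a per-slice activation sequence of length n_slices, rescaled to [0, 1]."""
    cam = (cls_weight[:, None, None, None] * feat_map).sum(dim=0)   # (D, H, W)
    seq = cam.amax(dim=(1, 2))                                      # max over H, W -> (D,)
    seq = (seq - seq.min()) / (seq.max() - seq.min() + 1e-6)        # rescale to [0, 1]
    seq = F.interpolate(seq[None, None], size=n_slices,
                        mode="linear", align_corners=False)[0, 0]   # back to original slices
    return seq
</code></pre>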
<p>Stepwise training occurred as follows:<br>
1) Train 2D CNN feature extractor as binary classifier on pseudolabels (288x288)<br>
2) Add transformer head, freeze feature extractor, train head <br>
3) Fine-tune entire model end-to-end (32x288x288)</p>
<p>Similar to the 3D CNN model, a feature was extracted for each vertebra (256-D) after training, resulting in a 7x256 sequence for each study. </p>
<h2>Stage 2: Transformers</h2>
<p>Now that we have essentially converted each study into a sequence, we can use transformers to model the final output (C1-C7 fractures and overall fracture). </p>
<p>I used a 3-layer transformer and trained on 3 separate inputs:<br>
1) 7x432 input from 3D CNN (5-fold CV: 0.3205)<br>
2) 7x256 input from TD-CNN (5-fold CV: 0.3369)<br>
3) 7x688 input by fusing the above 2 sequences (5-fold CV: 0.2962) </p>
<p>The final submission was an ensemble of the outputs from the above 3 models (0.25, 0.25, 0.5). </p>
<h2>Things that did not help:</h2>
<p>-Adding mask as separate channel in image<br>
-Masking out the non-segmented parts of the image<br>
-Extracting slice-wise features and training a 2D-CNN + sequence model on study-wise features- doing the vertebra-level approach was better</p>
<h2>Code</h2>
<p><a href="https://github.com/i-pan/kaggle-rsna-cspine" target="_blank">https://github.com/i-pan/kaggle-rsna-cspine</a></p>
<p>Thanks!</p>
<p>I tried training feature extractors using only the image-level labeled subset, but it didn't seem to work as well as when I trained on the pseudo-labeled whole dataset. </p>
<p>I didn't try using the both (e.g., provided labels when available, pseudolabels when not) since I didn't have much time, and things seemed to work well using only the pseudolabels. </p>
<p>I've had issues with 3D CNNs in the past, but the pretrained X3D models are surprisingly good for a variety of 3D tasks. </p>
<p>Thanks, and congratulations to you as well! </p>
<p>Thanks and congrats as well!</p>
|
RSNA 2023 Abdominal Trauma Detection
|
10th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all prize and medal winners! I enjoyed this competition because there are a great variety of options for solving this task as with last year's RSNA competition. We share our team's solution ( <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> + <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> ).</p>
<h1>Summary</h1>
<p>Our solution is to first segment the organs, cut out each organ region, and build a dedicated model for each organ.<br>
For the bowel and extravasation classes, for which large regions must be explored, we do not perform segmentation to cut out the regions, but instead perform simple black region removal and input the large regions into the models.<br>
The results of each model are refined by the stacking model and submitted as the final result.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2Ff5b5b27bf27dbd11ae78efca228ff017%2F1.png?generation=1697414699260439&alt=media" alt=""></p>
<h1>Segmentation Model</h1>
<p>We used the 3D SwinUNETR model provided by MONAI. It works surprisingly well with even a small amount of training data. To reduce the computational cost, the entire voxel was resized into 128x128x128 before segmentation.</p>
<h1>Organ Models</h1>
<p>For liver, spleen, kidney, and bowel cropped regions, 2.5D CNN + LSTM models are used.</p>
<h2>Liver and Spleen Models</h2>
<p>Cropped region is resized into 16x386x386, and fed into dedicated models.</p>
<h2>Kidney Model</h2>
<p>The left and right kidneys are independently cropped, resized, and concatenated along the horizontal axis. This enables horizontal flip augmentation and TTA. The concatenated region becomes 16x224x448.</p>
<h2>Bowel Model</h2>
<p>The cropped region is resized into 64x224x224. In training the bowel model, both the patient-level label and the image-level labels were used.</p>
<h1>Stacking Model</h1>
<p>The purpose of the stacking model is to directly optimize the average of the weighted logloss, which is the metric for this competition, including the logloss for any_injury. Each organ model is optimized for the weighted logloss of its own injury type, but not for any_injury, because any_injury is automatically calculated from the probabilities of the other injuries. Optimizing any_injury explicitly is essential, because its weight is relatively large.<br>
As the stacking model, we use a simple 4-layer MLP trained with the competition metric as the loss function.</p>
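<p>A simplified sketch of such a stacking setup; the layer sizes are assumptions and the loss below is a binary approximation of the actual sample-weighted multi-class metric - the key point is that any_injury is derived from the other outputs inside the loss so it can be optimized directly:</p>
<pre><code>import torch
import torch.nn as nn

class StackingMLP(nn.Module):
    """Refine first-stage injury probabilities (layer sizes are assumptions)."""
    def __init__(self, n_in, n_targets, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_targets),
        )

    def forward(self, x):
        return torch.sigmoid(self.mlp(x))                 # per-target injury probabilities

def weighted_loss(p_injury, y_injury, target_weights):
    """p_injury, y_injury: (batch, n_targets); target_weights: (n_targets,)."""
    eps = 1e-6
    p = p_injury.clamp(eps, 1 - eps)
    per_target = -(y_injury * p.log() + (1 - y_injury) * (1 - p).log())
    loss = (per_target * target_weights).mean()
    # any_injury is derived from the other outputs, so it can be optimised directly.
    p_any = (1 - (1 - p).prod(dim=1)).clamp(eps, 1 - eps)
    y_any = y_injury.amax(dim=1)
    loss_any = -(y_any * p_any.log() + (1 - y_any) * (1 - p_any).log()).mean()
    return loss + loss_any
</code></pre>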
<p>The table below shows the CV evaluation results before and after stacking (The values below are out of date because we wrote this solution before the competition was extended. The final submission's CV is 0.3686).</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
<th>overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o stacking</td>
<td>0.1186</td>
<td>0.4716</td>
<td>0.2948</td>
<td>0.4282</td>
<td>0.4301</td>
<td>0.5528</td>
<td>0.3827</td>
</tr>
<tr>
<td>with stacking</td>
<td>0.1034</td>
<td>0.4885</td>
<td>0.2861</td>
<td>0.4409</td>
<td>0.4452</td>
<td>0.4777</td>
<td>0.3736</td>
</tr>
</tbody>
</table>
<h1>tattaka's Part</h1>
<p>I was in charge of bowel and extravasation classification. Our solution followed <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">the 2-stage approach of the RSNA competition 3 years ago</a>. <br>
The basic setup is as follows</p>
<h2>Image-Level Modeling (1st stage)</h2>
<p>The input for the 1st stage is a 3-channel image including adjacent frames. <br>
1-epoch training was performed on all labeled images. </p>
<ul>
<li>backbone: resnetrs50 </li>
<li>head: <ul>
<li>Separate the head by bowel and extravasation</li></ul></li>
</ul>
<pre><code># Reconstructed head: literal values lost in formatting are filled with plausible
# defaults (the 512-d feature after Flatten matches the description below).
nn.Sequential(
    nn.Conv2d(num_features[-1], 512, kernel_size=1),
    nn.AdaptiveAvgPool2d((1, 1)),
    Flatten(),
    nn.Dropout(0.1, inplace=True),
    nn.Linear(512, 1),
)
</code></pre>
<p>In the 1st stage, the model learns bowel and extravasation at the same time. </p>
<h2>Series-Level Modeling (2nd stage)</h2>
<p>The input for the 2nd stage also followed the previous solution. <br>
Use the 512 dimensions after Flatten in the head of the 1st stage as image features.<br>
Image features are created with stride=3 instead of using all images, and the input sequence length is set to a maximum of 256 in the same way as the previous solution. <br>
The differences between adjacent features are concatenated, and the input to the model has the form (bs, 256, 1536). </p>
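<p>The series-level input could be assembled roughly like this (the exact difference scheme is an assumption based on the description above):</p>
<pre><code>import numpy as np

def build_series_input(feats: np.ndarray, stride: int = 3, max_len: int = 256) -> np.ndarray:
    """feats: (n_images, 512) image features from the 1st-stage head.
    Subsample with the given stride, then concatenate each feature with the
    differences to its previous and next neighbours -> (max_len, 1536)."""
    f = feats[::stride][:max_len]                            # (L, 512), L <= max_len
    prev_diff = np.diff(f, axis=0, prepend=f[:1])            # f[i] - f[i-1]
    next_diff = np.diff(f, axis=0, append=f[-1:])            # f[i+1] - f[i]
    x = np.concatenate([f, prev_diff, next_diff], axis=1)    # (L, 1536)
    if len(x) < max_len:                                     # pad shorter series with zeros
        x = np.pad(x, ((0, max_len - len(x)), (0, 0)))
    return x
</code></pre>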
<ul>
<li>model:<ul>
<li>Combines attention pooling and max pooling of the BiGRU outputs in one layer to create a series-level prediction.</li>
<li>The BiGRU outputs are also used to predict the image-level labels. </li>
</ul>
<p>Unlike the 1st stage, bowel and extravasation were optimized separately.</p>
<h2>Tricks for Successful Training</h2>
<p>There are a few tricks to successful learning in this competition.</p>
<ul>
<li>Because of large data imbalance, upsampling of the positive sample by a factor of 10 (high impact)<ul>
<li>In addition, focal loss is used</li></ul></li>
<li>After training stage 1 at the image level, the max of the predicted logit and the GT is used as the new image-level label and the model is trained again (high impact)<ul>
<li>repeated it twice</li>
<li>Perhaps the image label is noisy</li></ul></li>
<li>Rule-based removal of the outer black areas before the image is entered into the model<ul>
<li>Because of the longer computation time when using a larger resolution, a size of 384x384 is used after removing the outer area.</li></ul></li>
</ul>
<pre><code># Reconstructed black-area removal: the function name and literal values were lost
# in formatting and are filled with plausible defaults.
def crop_black_area(img: np.ndarray) -> np.ndarray:
    image_1ch = (img.mean(axis=-1) * 255).astype(np.uint8)
    kernel = np.ones((5, 5), np.uint8)
    image_1ch = cv2.erode(image_1ch, kernel, iterations=1)
    mask = image_1ch > 0
    if mask.sum() == 0:
        return img
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    y_min, y_max = np.where(rows)[0][[0, -1]]
    x_min, x_max = np.where(cols)[0][[0, -1]]
    if (y_max - y_min) > 0 and (x_max - x_min) > 0:
        img = img[y_min:y_max, x_min:x_max]
    return img
</code></pre>
<p>The bowel and extravasation scores before stacking are</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel logloss</th>
<th>bowel auc</th>
<th>ev logloss</th>
<th>ev auc</th>
</tr>
</thead>
<tbody>
<tr>
<td>stage1</td>
<td>0.2719</td>
<td>0.9314</td>
<td>0.4602</td>
<td>0.8287</td>
</tr>
<tr>
<td>stage2</td>
<td>0.1167</td>
<td>0.9167</td>
<td>0.4579</td>
<td>0.8264</td>
</tr>
</tbody>
</table>
<h2>Not works</h2>
<ul>
<li>label smoothing</li>
<li>resnet3dcsn<ul>
<li>Not bad, but I didn't have time to tune it.</li>
<li>scaling logit</li>
<li>GeM pooling</li>
<li>Other backbone<ul>
<li>ConvNeXt is slightly worse than resnetrs50</li>
<li>I could not get transformer backbones to work </li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
12th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, It wasn't possible without your hard work !!</p>
<p>Thanks kaggle and the hosts for this interesting competition, <br>
Big thanks to kagglers out there for their great ideas and engaging discussions.<br>
Thanks a lot as well to my great teammate <a href="https://www.kaggle.com/siwooyong" target="_blank">@siwooyong</a> </p>
<h1>Summary</h1>
<p>Our solution is an ensemble of one stage approach without segmentation and two stage approach with segmentation.</p>
<h2>One stage approach (public LB = 0.5, private LB = 0.45)</h2>
<h3>Data Pre-Processing</h3>
<p>If the dimension of the image is greater than (512, 512), we cropped the area with a higher density of pixels to get a (512, 512) image; the input is then resized to (96, 256, 256) for each series, following the same preprocessing steps that were used by hengck23 in his great <a href="https://www.kaggle.com/code/hengck23/lb0-55-2-5d-3d-sample-model" target="_blank">notebook</a></p>
<h3>Model : resnest50d + GRU Attention</h3>
<p>We tried to predict each target independently from the others so we have 13 outputs</p>
<pre><code># Reconstructed from the garbled original: class/argument names and literal values
# lost in formatting are filled with plausible defaults (3-channel 2.5D input,
# bidirectional GRU, 13 outputs as described in the text). SpatialDropout and
# MLPAttentionNetwork are the authors' custom modules.
class RSNAClassifier(nn.Module):
    def __init__(self, model_arch, hidden_dim, seq_len, pretrained=False):
        super().__init__()
        self.seq_len = seq_len
        self.model_arch = model_arch
        self.model = timm.create_model(model_arch, in_chans=3, pretrained=pretrained)
        cnn_feature = self.model.fc.in_features
        self.model.global_pool = nn.Identity()
        self.model.fc = nn.Identity()
        self.pooling = nn.AdaptiveAvgPool2d(1)
        self.spatialdropout = SpatialDropout(CFG.dropout)
        self.gru = nn.GRU(cnn_feature, hidden_dim, num_layers=2, batch_first=True, bidirectional=True)
        self.mlp_attention_layer = MLPAttentionNetwork(2 * hidden_dim)
        self.logits = nn.Sequential(
            nn.Linear(2 * hidden_dim, 13),
        )

    def forward(self, x):  # x: (bs, seq_len, h, w)
        bs = x.size(0)
        x = x.reshape(bs * self.seq_len // 3, 3, x.size(2), x.size(3))
        features = self.model(x)
        features = self.pooling(features).view(bs * self.seq_len // 3, -1)
        features = self.spatialdropout(features)
        features = features.reshape(bs, self.seq_len // 3, -1)
        features, _ = self.gru(features)
        atten_out = self.mlp_attention_layer(features)
        pred = self.logits(atten_out)
        pred = pred.view(bs, -1)
        return pred
</code></pre>
<h3>Augmentation</h3>
<ul>
<li>Mixup</li>
<li>Random crop + resize</li>
<li>Random shift, scale, rotate</li>
<li>shuffle randomly the indexes of the sequence, but respecting the same order and keeping the dependency between each three consecutive images:</li>
</ul>
<pre><code> inds = np.random.choice(np.arange(, -), , replace = )
inds.sort()
inds = np.stack([inds-, inds, inds+]).T.flatten()
image = image[inds]
</code></pre>
<p>Loss : BCEWithLogitsLoss<br>
scheduler : CosineAnnealingLR<br>
optimizer : AdamW<br>
learning rate : 5e-5</p>
<h3>Postprocessing</h3>
<p>We simply multiplied the output by the weights of the competition metric :</p>
<pre><code>preds.loc[:, [, , , ]] *=
preds.loc[:, [, , ]] *=
preds.loc[:, []] *=
</code></pre>
<h2>Two stage approach (public LB = 0.45, private LB = 0.43)</h2>
<h3>stage1 : Segmentation</h3>
<p>Model : regnety002 + unet</p>
<p>Even with only 160 of 200 data (1th fold) used as training data, the model has already shown good performance.</p>
<pre><code> (nn.Module):
():
(SegModel, self).__init__()
self.n_classes = (
[
,
,
,
,
,
])
in_chans =
self.encoder = timm.create_model(
,
pretrained=,
features_only=,
in_chans=in_chans,
)
encoder_channels = (
[in_chans]
+ [
self.encoder.feature_info[i][]
i ((self.encoder.feature_info))
]
)
self.decoder = UnetDecoder(
encoder_channels=encoder_channels,
decoder_channels=(, , , , ),
n_blocks=,
use_batchnorm=,
center=,
attention_type=,
)
self.segmentation_head = SegmentationHead(
in_channels=,
out_channels=self.n_classes,
activation=,
kernel_size=,
)
self.bce_seg = nn.BCEWithLogitsLoss()
():
enc_out = self.encoder(x_in)
decoder_out = self.decoder(*[x_in] + enc_out)
x_seg = self.segmentation_head(decoder_out)
nn.Sigmoid()(x_seg)
</code></pre>
<h3>stage2 : 2.5DCNN</h3>
<h4>Data Pre-Processing:</h4>
<p>We used the segmentation logits obtained from stage1 to crop livers, spleen, and kidney, and then resized each to (96, 224, 224). <br>
(We use 10-size padding when we crop the organs with segmentation logits)<br>
In addition, full ct data not cropped is resized to (128, 224, 224) and a total of four inputs are put into the model (full_video, crop_liver, crop_spleen, crop_kidney)</p>
<h4>Model : regnety002 + transformer</h4>
<p>We initially used a custom any_injury_loss function, but found that it did not improve the performance. For the model input channel, we experimented with different values, including 2, 3, 4, and 8. <br>
We found that a channel size of 2 performed the best, we also initially tried using a shared CNN and transformer model for all organs, but found that separate CNN and transformer models for each organ performed better. we also experimented with increasing the size of the CNN (using ConvNeXt and EfficientNet models), but this resulted in a decrease in performance. Therefore, we used the RegNet002 model, which is a smaller CNN model.</p>
<pre><code> (nn.Module):
():
(FeatureExtractor, self).__init__()
self.hidden = hidden
self.num_channel = num_channel
self.cnn = timm.create_model(model_name = ,
pretrained = ,
num_classes = ,
in_chans = num_channel)
self.fc = nn.Linear(hidden, hidden//)
():
batch_size, num_frame, h, w = x.shape
x = x.reshape(batch_size, num_frame//self.num_channel, self.num_channel, h, w)
x = x.reshape(-, self.num_channel, h, w)
x = self.cnn(x)
x = x.reshape(batch_size, num_frame//self.num_channel, self.hidden)
x = self.fc(x)
x
(nn.Module):
():
(ContextProcessor, self).__init__()
self.transformer = RobertaPreLayerNormModel(
RobertaPreLayerNormConfig(
hidden_size = hidden//,
num_hidden_layers = ,
num_attention_heads = ,
intermediate_size = hidden*,
hidden_act = ,
)
)
self.transformer.embeddings.word_embeddings
self.dense = nn.Linear(hidden, hidden)
self.activation = nn.ReLU()
():
x = self.transformer(inputs_embeds = x).last_hidden_state
apool = torch.mean(x, dim = )
mpool, _ = torch.(x, dim = )
x = torch.cat([mpool, apool], dim = -)
x = self.dense(x)
x = self.activation(x)
x
(nn.Module):
():
(Custom3DCNN, self).__init__()
self.full_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
self.kidney_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
self.liver_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
self.spleen_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
self.full_processor = ContextProcessor(hidden=hidden)
self.kidney_processor = ContextProcessor(hidden=hidden)
self.liver_processor = ContextProcessor(hidden=hidden)
self.spleen_processor = ContextProcessor(hidden=hidden)
self.bowel = nn.Linear(hidden, )
self.extravasation = nn.Linear(hidden, )
self.kidney = nn.Linear(hidden, )
self.liver = nn.Linear(hidden, )
self.spleen = nn.Linear(hidden, )
self.softmax = nn.Softmax(dim = -)
():
full_output = self.full_extractor(full_input)
kidney_output = self.kidney_extractor(crop_kidney)
liver_output = self.liver_extractor(crop_liver)
spleen_output = self.spleen_extractor(crop_spleen)
full_output2 = self.full_processor(torch.cat([full_output, kidney_output, liver_output, spleen_output], dim = ))
kidney_output2 = self.kidney_processor(torch.cat([full_output, kidney_output], dim = ))
liver_output2 = self.liver_processor(torch.cat([full_output, liver_output], dim = ))
spleen_output2 = self.spleen_processor(torch.cat([full_output, spleen_output], dim = ))
bowel = self.bowel(full_output2)
extravasation = self.extravasation(full_output2)
kidney = self.kidney(kidney_output2)
liver = self.liver(liver_output2)
spleen = self.spleen(spleen_output2)
any_injury = torch.stack([
self.softmax(bowel)[:, ],
self.softmax(extravasation)[:, ],
self.softmax(kidney)[:, ],
self.softmax(liver)[:, ],
self.softmax(spleen)[:, ]
], dim = -)
any_injury = - any_injury
any_injury, _ = any_injury.()
bowel, extravasation, kidney, liver, spleen, any_injury
</code></pre>
<h4>Augmentation</h4>
<pre><code> (nn.Module):
():
(CustomAug, self).__init__()
self.prob = prob
self.do_random_rotate = v2.RandomRotation(
degrees = (-, ),
interpolation = torchvision.transforms.InterpolationMode.BILINEAR,
expand = ,
center = ,
fill =
)
self.do_random_scale = v2.ScaleJitter(
target_size = [s, s],
scale_range = (, ),
interpolation = torchvision.transforms.InterpolationMode.BILINEAR,
antialias = )
self.do_random_crop = v2.RandomCrop(
size = [s, s],
pad_if_needed = ,
fill = ,
padding_mode =
)
self.do_horizontal_flip = v2.RandomHorizontalFlip(self.prob)
self.do_vertical_flip = v2.RandomVerticalFlip(self.prob)
():
np.random.rand() < self.prob:
x = self.do_random_rotate(x)
np.random.rand() < self.prob:
x = self.do_random_scale(x)
x = self.do_random_crop(x)
x = self.do_horizontal_flip(x)
x = self.do_vertical_flip(x)
x
</code></pre>
<p>Loss : nn.CrossEntropyLoss(no class weight)<br>
scheduler : cosine_schedule_with_warmup<br>
optimizer : AdamW<br>
learning rate :2e-4</p>
<h4>Postprocessing</h4>
<p>We multiplied by the value that maximizes the validation score for each pred_df obtained for each fold.</p>
<pre><code>weights = [
[, , , , , , , ],
[, , , , , , , ],
[, , , , , , , ],
[, , , , , , , ],
[, , , , , , , ]
]
y_pred = pred_df.copy().groupby().mean().reset_index()
w1, w2, w3, w4, w5, w6, w7, w8 = weights[i]
y_pred[] *= w1
y_pred[] *= w2
y_pred[] *= w3
y_pred[] *= w4
y_pred[] *= w5
y_pred[] *= w6
y_pred[] *= w7
y_pred[] *= w8
y_pred = y_pred **
</code></pre>
<h3>Reference</h3>
<p><a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">RSNA 2022 1st place solution</a><br>
<a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/365115" target="_blank">RSNA 2022 2nd place solution</a><br>
<a href="https://github.com/pascal-pfeiffer/kaggle-rsna-2022-5th-place" target="_blank">RSNA 2022 5th place solution</a><br>
<a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/435053" target="_blank">descussion</a></p>
<h2>Code :</h2>
<p><a href="https://github.com/siwooyong/RSNA-2023-Abdominal-Trauma-Detection" target="_blank">https://github.com/siwooyong/RSNA-2023-Abdominal-Trauma-Detection</a><br>
Inference notebook : <a href="https://www.kaggle.com/code/ahmedelfazouan/rsna-atd-channel2-512-inference-ensemble?scriptVersionId=146616538" target="_blank">https://www.kaggle.com/code/ahmedelfazouan/rsna-atd-channel2-512-inference-ensemble?scriptVersionId=146616538</a></p>
<p>Thanks for your kind words, good luck on Bengali competition !!!</p>
<p>😂😂 It's still a long way</p>
|
RSNA 2023 Abdominal Trauma Detection
|
14th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>I used an efficient preprocessing pipeline and small multi-task models in a single stage framework. I didn't use image level labels and segmentation masks because I forgot they were given 🤦♂️.</p>
<p>Kaggle Notebook: <a href="https://www.kaggle.com/code/gunesevitan/rsna-2023-abdominal-trauma-detection-inference" target="_blank">https://www.kaggle.com/code/gunesevitan/rsna-2023-abdominal-trauma-detection-inference</a><br>
Kaggle Dataset: <a href="https://www.kaggle.com/datasets/gunesevitan/rsna-2023-abdominal-trauma-detection-dataset" target="_blank">https://www.kaggle.com/datasets/gunesevitan/rsna-2023-abdominal-trauma-detection-dataset</a><br>
GitHub Repository: <a href="https://github.com/gunesevitan/rsna-2023-abdominal-trauma-detection" target="_blank">https://github.com/gunesevitan/rsna-2023-abdominal-trauma-detection</a></p>
<h2>Dataset</h2>
<h3>2D Dataset</h3>
<ul>
<li>Bit shift with DICOM's bits allocated and stored attributes</li>
<li>Linear pixel value rescale with DICOM's rescale slope and intercept attributes</li>
<li>Window with DICOM's window width and center attributes (abdominal soft tissue window; width 400, center 50)</li>
<li>Adjust minimum pixel value to 0 and scale pixel values with the new maximum</li>
<li>Invert pixel values if DICOM's photometric interpretation attribute is MONOCHROME1</li>
<li>Multiply pixel values with 255 and cast image to uint8</li>
<li>Write image in lossless png format with raw size</li>
</ul>
<p>My 2D and 3D dataset pipelines are separated because this part can run very fast in parallel because of non-blocking IO. I can export all of the training DICOMs as pngs in approximately 20 minutes.</p>
<h3>3D Dataset</h3>
<p>I saved lots of different CT scans from training set as videos and examined them. I noticed each of their start and end points were different on the z dimension. Some of them were starting from the shoulders and ending just before the legs or some of them were starting from the lungs and ending somewhere around middle femur.</p>
<p>I studied the anatomy and decided to localize ROIs. I manually annotated bounding boxes around the largest contour on axial plane. I labeled slices before the liver as "upper" and slices after the femur head as "lower". Slices between those two location are labeled as "abdominal". I trained a YOLOv8 nano model and it was reaching to 0.99x mAP@50 on all those classes easily. I dropped slices that are predicted as "upper" and "lower", and I used slices that are predicted as "abdominal" and cropped them with the predicted bounding box.<br>
<img src="https://i.ibb.co/JpNsJD1/val-batch2-pred.jpg" alt="yolo"></p>
<p>Eventually, I ditched this approach because it was too slow and it didn't improve my overall score at all. In my latest 3D pipeline, I was using a lightweight localization by simply cropping the largest contour on the axial plane and keep all slices on the z dimension.</p>
<ul>
<li>Read all images that are exported as pngs in a scan and stack them on the z-axis</li>
<li>Sort z-axis in descending order by DICOMs' image position patient z attribute</li>
<li>Flip x-axis if DICOMs' patient position attribute is HFS (head first supine)</li>
<li>Drop partial slices (some slices at the beginning or end of the scan were partially black)</li>
</ul>
<p>I dropped those slices by counting all black vertical lines and their differences on z-axis. Normal slices had 0-5 all black vertical lines. If all black vertical line count suddenly increases or decreases then that slice is partial.</p>
<pre><code>
scan.shape[] != :
scan_all_zero_vertical_line_transitions = np.diff(np.(scan == , axis=).(axis=))
slices_with_all_zero_vertical_lines = (scan_all_zero_vertical_line_transitions > ) | (scan_all_zero_vertical_line_transitions < -)
slices_with_all_zero_vertical_lines = np.append(slices_with_all_zero_vertical_lines, slices_with_all_zero_vertical_lines[-])
scan = scan[~slices_with_all_zero_vertical_lines]
scan_all_zero_vertical_line_transitions, slices_with_all_zero_vertical_lines
</code></pre>
<ul>
<li>Crop the largest contour on the axial plane</li>
</ul>
<p>I didn't do that to each image separately because it would break the alignment of slices. I calculated bounding boxes for each slice and calculate the largest bounding box by taking minimum of starting points and maximum of ending points.</p>
<pre><code>
largest_contour_bounding_boxes = np.array([dicom_utilities.get_largest_contour(image) image scan])
largest_contour_bounding_box = [
(largest_contour_bounding_boxes[:, ].()),
(largest_contour_bounding_boxes[:, ].()),
(largest_contour_bounding_boxes[:, ].()),
(largest_contour_bounding_boxes[:, ].()),
]
scan = scan[
:,
largest_contour_bounding_box[]:largest_contour_bounding_box[] + ,
largest_contour_bounding_box[]:largest_contour_bounding_box[] + ,
]
</code></pre>
<ul>
<li>Crop non-zero slices along 3 planes</li>
</ul>
<pre><code>
mmin = np.array((scan > ).nonzero()).(axis=)
mmax = np.array((scan > ).nonzero()).(axis=)
scan = scan[
mmin[]:mmax[] + ,
mmin[]:mmax[] + ,
mmin[]:mmax[] +
]
</code></pre>
<ul>
<li>Resize 3D volume into 96x256x256 with area interpolation</li>
<li>Write image as a numpy array file</li>
</ul>
<p>To conclude, those numpy arrays are used as model inputs. I wasn't able to benefit from parallel execution at this stage. </p>
<h2>Validation</h2>
<p>I used multi label stratified group kfold for cross-validation. Group functionality can be achieved by splitting at patient level. I converted one-hot encoded classes into ordinal encoded single columns and created another column for patient scan count. I split dataset into 5 folds and 5 ordinal encoded target columns + patient scan count column are used for stratification.</p>
<h2>Models</h2>
<p>I tried lots of different models, heads and necks but two simple models were the best performing ones.</p>
<h3>MIL-like 2D multi-task classification model</h3>
<p>This model is a very simple one that is similar to MIL approach and ironically this was my best performing model. The architecture is:</p>
<ol>
<li>Extract features on 2D slices</li>
<li>Average or max pooling on z dimension</li>
<li>Average, max, gem or attention pooling on x and y dimension</li>
<li>Dropout</li>
<li>5 classification heads for each target</li>
</ol>
<h3>RNN 2D multi-task classification model</h3>
<p>This model is similar to what others used in previous competitions. The architecture is:</p>
<ol>
<li>Extract features on 2D slices</li>
<li>Average, max or gem pooling on x and y dimension</li>
<li>Bidirectional LSTM or GRU max while using z dimension as a sequence </li>
<li>Dropout</li>
<li>5 classification heads for each target</li>
</ol>
<h3>Backbones, necks and heads</h3>
<ul>
<li>I tried lots of backbones from timm and monai but my best backbones were EfficientNet b0, EfficientNet v2 tiny and DenseNet121. I think I wasn't able to make large models converge.</li>
<li>I also tried lots of different pooling types including average, sum, logsumexp, max, gem, attention but average and attention worked best for the first model and max worked best for the second model.</li>
<li>I only used 5 regular classification heads for 5 targets<ul>
<li>n_features x 1 bowel head + sigmoid at inference time</li>
<li>n_features x 1 extravasation head + sigmoid at inference time</li>
<li>n_features x 3 kidney head + softmax at inference time</li>
<li>n_features x 3 liver head + softmax at inference time</li>
<li>n_features x 3 spleen head + softmax at inference time</li></ul></li>
</ul>
<h2>Training</h2>
<p>I used BCEWithLogitsLoss for bowel and extravasation heads, CrossEntropyLoss for kidney, liver and spleen weights. The only modification I did was implementing exact same sample weights like this:</p>
<pre><code> ():
():
(SampleWeightedBCEWithLogitsLoss, self).__init__(weight=weight, reduction=reduction)
self.weight = weight
self.reduction = reduction
():
loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction=, weight=self.weight)
loss = loss * sample_weights
self.reduction == :
loss = loss.mean()
self.reduction == :
loss = loss.()
loss
</code></pre>
<pre><code> ():
():
(SampleWeightedCrossEntropyLoss, self).__init__(weight=weight, reduction=reduction)
self.weight = weight
self.reduction = reduction
():
loss = F.cross_entropy(inputs, targets, reduction=, weight=self.weight)
loss = loss * sample_weights
self.reduction == :
loss = loss.mean()
self.reduction == :
loss = loss.()
loss
</code></pre>
<p>Final loss is calculated as the sum of each heads' loss and backward is called on that.</p>
<p>Training transforms are:</p>
<ul>
<li>Scale by max 8 bit pixel value</li>
<li>Random X, Y and Z flip that are independent of each other</li>
<li>Random 90 degree rotation on axial plane</li>
<li>Random 0-45 degree rotation on axial plane</li>
<li>Histogram equalization or random contrast shift</li>
<li>Random 224x224 crop on axial plane</li>
<li>3D cutout</li>
</ul>
<p>Test transforms are:</p>
<ul>
<li>Scale by max 8 bit pixel value</li>
<li>Center 224x224 crop on axial plane</li>
</ul>
<pre><code>training_transforms = T.Compose([
T.EnsureChannelFirst(channel_dim=),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandRotate90(spatial_axes=(, ), max_k=, prob=transform_parameters[]),
T.RandRotate(
range_x=transform_parameters[],
range_y=transform_parameters[],
range_z=transform_parameters[],
prob=transform_parameters[]
),
T.OneOf([
T.RandHistogramShift(num_control_points=transform_parameters[], prob=transform_parameters[]),
T.RandAdjustContrast(gamma=transform_parameters[], prob=transform_parameters[])
], weights=(, )),
T.RandSpatialCrop(roi_size=transform_parameters[], max_roi_size=, random_center=, random_size=),
T.RandCoarseDropout(
holes=transform_parameters[],
spatial_size=transform_parameters[],
dropout_holes=,
fill_value=,
max_holes=transform_parameters[],
max_spatial_size=transform_parameters[],
prob=transform_parameters[]
),
T.ToTensor(dtype=torch.float32, track_meta=)
])
inference_transforms = T.Compose([
T.EnsureChannelFirst(channel_dim=),
T.CenterSpatialCrop(roi_size=transform_parameters[]),
T.ToTensor(dtype=torch.float32, track_meta=)
])
</code></pre>
<pre><code>
[, ]
[, , ]
[, , ]
[, , ]
</code></pre>
<p>Batch size of 2 or 4 is used depending on the model size. Cosine annealing learning rate schedule is utilized to explore different regions with a small base and minimum learning rate. AMP is also used for faster training and regularization.</p>
<h2>Inference</h2>
<p>2x MIL-like model (efficientnetb0 and densenet121) and 2x RNN model (efficientnetb0 and efficientnetv2t) are used on the final ensemble. </p>
<p>Since the models were trained with random crop augmentation, inputs are center cropped at test time. 4x TTA (xyz, xy, xz and yz flip) are applied and predictions are averaged.</p>
<p>Predictions of 5 folds are averaged and then activated with sigmoid or softmax functions.</p>
<h2>Post-processing</h2>
<p>Different weights are used for 4 models for different targets. Those weights are found by minimizing the OOF score.</p>
<pre><code>mil_efficientnetb0_bowel_weight =
mil_densenet121_bowel_weight =
lstm_efficientnetb0_bowel_weight =
lstm_efficientnetv2t_bowel_weight =
mil_efficientnetb0_extravasation_weight =
mil_densenet121_extravasation_weight =
lstm_efficientnetb0_extravasation_weight =
lstm_efficientnetv2t_extravasation_weight =
mil_efficientnetb0_kidney_weight =
mil_densenet121_kidney_weight =
lstm_efficientnetb0_kidney_weight =
lstm_efficientnetv2t_kidney_weight =
mil_efficientnetb0_liver_weight =
mil_densenet121_liver_weight =
lstm_efficientnetb0_liver_weight =
lstm_efficientnetv2t_liver_weight =
mil_efficientnetb0_spleen_weight =
mil_densenet121_spleen_weight =
lstm_efficientnetb0_spleen_weight =
lstm_efficientnetv2t_spleen_weight =
</code></pre>
<p>I aggregated scan level predictions on patient_id and took the maximum prediction.</p>
<p>I also scaled injury target predictions with different multipliers and they are also set by minimizing OOF score.</p>
<pre><code>df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
</code></pre>
<p>My final ensemble score was <strong>0.3859</strong> and target scores are listed below. I really enjoyed how my OOF scores are almost perfectly correlated with LB scores. I selected the submission that had the best OOF, public and private LB scores thanks to stable cross-validation.</p>
<table>
<thead>
<tr>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.1282</td>
<td>0.5070</td>
<td>0.2831</td>
<td>0.4186</td>
<td>0.4736</td>
<td>0.5050</td>
</tr>
</tbody>
</table>
|
RSNA 2023 Abdominal Trauma Detection
|
16th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah, sorry for not being clear. </p>
<p>I take bboxes from <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441402" target="_blank">here</a>, treat bboxes as semantic segmentation masks, and train <strong>U-Net Bi-Conv-LSTM</strong> on it. </p>
<p>Because the annotations are obviously not pixel-wise perfect, the Dice score is not perfect either (about 0.2-0.3 per image), but in practice it looks more or less like this: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2755695%2Fcbe24ea20acdd591b9f831443853da31%2F2023-10-16%2016.42.48.jpg?generation=1697463778578613&alt=media" alt=""></p>
<p>Thanks to the organizers for such a great competition. Unfortunately, I couldn’t take part last year in a similar one due to the lack of hardware. However, this year is different, and I’m really happy with the results that we managed to achieve. </p>
<h2>Problem</h2>
<p>In this competition we were tasked with predicting the intensity of injuries for different abdominal organs. The available data consists of big CT images (in DICOM format) with partial supplemental segmentation annotations (in NIFTI format).</p>
<p>In ML terms, all comes down to 3D segmentation / classification models, lack of data / annotations, and heavily-penalizing metric.</p>
<h2>Summary</h2>
<ul>
<li><strong>U-Net Bi-Conv-LSTM</strong> segmentation for organs (kidney, liver, spleen, bowel) and separate model for Extravasation based on boxes from here (thanks a lot!)</li>
<li><strong>Resnet 3D CSN</strong> for 3D crops separated into 2 stages: (kidney, liver, spleen) and (bowel)</li>
</ul>
<h2>3D Semantic Segmentation</h2>
<p>The semantic segmentation part to identify organs is quite straightforward (compared to the later classification) and could be effectively performed by <strong>U-Net Bi-Conv-LSTM</strong> with Effnet_v2_b0 backbone and <strong>CE-Dice-Focal</strong> loss. </p>
<p>To make things efficient, we train semantic segmentation on <strong>96x256x256 crops</strong> and predict the whole image using crops of size 96x256x256 with <strong>overlaps of 48x256x256</strong> (later on overlaps were removed to save time and space in inference).</p>
<p>To elevate overfitting (it’s not that critical, especially compared to classification), we added <strong>geometric augmentations</strong> like ShiftScaleRotate, RandomBrightnessContrast, Flips, GridDistortion, ElasticTransform. </p>
<p>On average, about 10% of total volume was dedicated to kidney, liver, spleen, and about 20% - to bowel.</p>
<p>The <strong>macro dice score per image</strong> is around <strong>0.96</strong>. </p>
<p><strong>Training time</strong> - about <strong>12 hours</strong>.</p>
<h2>3D classification</h2>
<p>Based on extracted crops from segmentation masks, we train 2 models: one for kidney, liver, spleen, one for bowel. </p>
<p>The <strong>CSN models</strong> from mmaction proved to be very fast and accurate. In order to figure out how to deal with temporal dimension, several possibilities were explored, but in the end basic interpolation (<strong>3D resize</strong>) was used to convert crops to <strong>96x256x256 resolution</strong>.</p>
<p>To battle overfitting (which is really severe even with CSN), <strong>intensive geometric augmentations</strong> were used, including ShiftScaleRotate, RandomBrightnessContrast, and 4 different types of Blurs.</p>
<p>The mean competition loss across all folds is <strong>0.401</strong> for kidney, liver, spleen and 0<strong>.156</strong> for bowel.</p>
<p><strong>Training time</strong> - about <strong>4 hours</strong> per kidney, liver, spleen fold; and <strong>8-10 hours</strong> per bowel fold. </p>
<h2>3D classification for Extravasation</h2>
<p>In order to make predictions for Extravasation, a segmentation model was utilized. The motivation is simple: if semantic segmentation model predict anything, there is Extravasation, and it should be reflected in the probabilities. </p>
<p>To make the <strong>transition from semantic segmentation to classification</strong>, the following trick was used:</p>
<ul>
<li>Turn 3D mask to 1D </li>
<li>Sort probabilities </li>
<li>Take top_n probabilities</li>
<li>Find mean values of them. That’s the probability for positive Extravasation.</li>
</ul>
<p>In pseudo-code:<br>
<code>cls_pred = np.mean(np.sort(np.ravel(sigmoid(mask)))[::-1][:top_n])</code></p>
<p>The mean log loss across all folds is <strong>0.543</strong> for extravasation, and <strong>0.501</strong> for any_injury. </p>
<p><strong>Training time</strong> - about <strong>4 hours</strong> per fold. </p>
<h2>Validation</h2>
<p><strong>StratifiedGroupKFold</strong> (stratification based on classification labels, grouping based on patients) with 4 folds. </p>
<p>Mean log loss across all folds and all groups (kidney, liver, spleen, bowel, extravasation, any_injury), (which is the <strong>competition metric</strong>) is <strong>0.400</strong>. </p>
<h2>Additional tricks</h2>
<ul>
<li>No post-processing.</li>
<li>SWA on final checkpoints.</li>
<li>EMA during training.</li>
<li>Temporal shifting in classification to battle overfitting even more.</li>
<li>Gradient checkpointing to have bigger batches (important for classification).</li>
<li>memmap (uint8) using numpy to speed-up data reading and crop extraction. </li>
<li>2 final subs: one minimizing competition loss, one maximizing AUC</li>
</ul>
<h2>Things that didn’t work</h2>
<ul>
<li>Samplers</li>
<li>Heavier models (2+1D or Uniformer)</li>
</ul>
<h2>Final notes</h2>
<p>During the final 2 days of the competition, we managed to improve the models for kidney, liver, and spleen from <strong>0.4</strong> to roughly <strong>0.38</strong>, which brought the overall loss from <strong>0.4</strong> to <strong>0.39</strong>, but made some errors in the submission process, which made them useless. The trick is simple - increase batch size. Usually we train with the batch of 14, but could increase it to 24 (with the help of A100 cards). </p>
<p>The total <strong>training time</strong> (including all 4 folds for each stage) is around <strong>80 hours</strong> using a single RTX A6000 Ada.</p>
<p>The total <strong>submission time</strong> is around <strong>8-9 hours</strong> using a single Tesla P100. </p>
<p>We believe this solution could be pushed much further. However, we made the first sub (that isn’t sample submission or just a bunch of static predictions) 2 days before the competition ended, so that also played some role.</p>
<p>P.S. Man that sucked to mess up the models for 0.39 :) </p>
|
RSNA 2023 Abdominal Trauma Detection
|
17th Place Solution - How to Learn and Practice as a Beginner
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to RSNA for hosting this interesting competition and congrats to all the winnners for their hard work.<br>
I am a beginner who changed my major from physics to software engineering for just one year, I like to learn from practice, so I choose the kaggle competition platform which has many excellent learning resources.</p>
<p>I am very happy that I can go from having no knowledge about 3D image processing to beating the baseline and finally getting the silver medal. I hope my competition experience can bring you some inspiration, especially the beginners like me who are struggling to beat the baseline at first.</p>
<h1>Method</h1>
<h3>Summary</h3>
<p>For learning purposes, I plan to try all of the 2d classification, 3d classification, 2d segmentation, 3d segmentation in this competition. So my pipeline may be a little complicated.</p>
<h3>Stage1: 2D + 3D segmentation</h3>
<ol>
<li><code>2D UNet</code> to segment liver, spleen, kidney_left, kidney_right and bowel.</li>
<li><code>3D UNet</code> to further finely segment spleen.</li>
</ol>
<h3>Stage2: liver, spleen, kidney: 3D classification</h3>
<ol>
<li>Crop organs from segmentation and resize: liver(64, 312, 312), spleen(80, 224, 224), kidney_left(40, 128, 128), kidney_right(40, 128, 128).</li>
<li>Use 3D classifier <code>X3D_l</code> to classify liver, spleen and kidney(For kidney, only backpropagate kidney with higher probability of being positive)</li>
</ol>
<h3>Stage3: bowel: 2.5D + 1D classification</h3>
<ol>
<li>Crop bowel by using bbox of bowel's masks(only choose pixels of the mask > 1000 for each slice)</li>
<li>Sample 64 slice uniformly from the cropped bowel and resize each slice to (512, 512), each slice has 4 channels(z-1, z, z+1, mask). So the input shape is (B, N, 4, 512, 512), B is batch size and N is num of slice.</li>
<li>Use <code>convnext_tiny</code> as feature extractor. Input data(B, N, 4, 512, 512) after passing through the feature extractor will be converted into features(B, N, 768).<br>
If <code>N < 64</code>, will use zero features to padding it. so the final output features are shape of (B, 64, 768).</li>
<li>Use <code>lstm</code> + <code>attention pooling concat maxpooling</code> to fusion features.</li>
<li>Use <code>nn.Linear</code> to classify.</li>
</ol>
<h3>Stage4: extravasation: 2.5D + 1D classification</h3>
<ol>
<li>Sample 64 slice uniformly, For each slice I use 5crop(top_left, top_right, bottom_left, bottom_right, center), and then resize each crop to (512, 512), then stack them. So the input shape is (B, N, 5, 3, 512, 512), B is batch size and N is num of slice.</li>
<li>Use <code>convnext_tiny</code> as feature extractor. Input data(B, N, 5, 3, 512, 512) will be converted into features(B, N, 5, 768).<br>
If <code>N < 64</code>, will use zero features to padding it. so the final output features are shape of (B, 64, 5, 768).</li>
<li>Use <code>attention_pooling concat maxpooling</code> to fuse 5 features of each slice, so the output is shape of (B, 64, 768).</li>
<li>Use <code>lstm</code> + <code>attention pooling concat maxpooling</code> to fusion features.</li>
<li>Use <code>nn.Linear</code> to classify.</li>
</ol>
<h3>Some useful details:</h3>
<ol>
<li>When I started working on the extravasation classification, I saw <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441402" target="_blank">IAN PAN's extravasation bbox discussion</a>(Thanks!), I think it will help with the classification of extravasation.<br>
For positive sample, I use it with <code>albumentations.BBoxSafeRandomCropFixedSize(288, 288)</code>, it is a useful function of data augmentation that can random crop a part of the image around bbox. So it can help <code>convnext_tiny</code> to pay more attention to small area.<br>
For negative sample, I use <code>albumentations.RandomCrop(288, 288)</code> to random crop a part of input.</li>
<li>I count every positive bowel slice, I find all of them <code>mask.sum() >= 1000</code>(mask is generated by TotalSegmentator). So I think the slice which <code>mask.sum() < 1000</code> can be ignored, thus making the network more concentrated in the ROI area.</li>
<li>By <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/430242" target="_blank">NISCHAY DHANKHAR's 3rd solution</a>, I learned to use pseudo labeled dataset for initial training with a large learning rate and finally use a fine-labeled dataset for fine-tuning. So when I train 2D UNet, I use TotalSegmentator's prediction as pseudo label then use 206 fine segmentation to fine-tuning it.</li>
<li>Since about 5% TotalSegmentator's zero-shot segmentations have serious problems, so I use trained 2D UNet segmentation to calculate the dice with it. If it is less than 0.75, it will be reconsidered.</li>
<li>For Notebook Out of Memory:</li>
</ol>
<ul>
<li>by <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/443256" target="_blank">Shai Ronen's discussion</a>, convert 2D dicom to uint8 as soon as loaded and delete + <code>gc.collect()</code> helped a lot.</li>
<li>reduce <code>DataLoader.num_workers</code> can save RAM, set <code>DataLoader.pin_memory=False</code> can save GPU memory.</li>
</ul>
<h3>Things may work</h3>
<ol>
<li>For the task which has only small area of contrast, <code>maxpooling</code> may better than <code>avgpooling</code></li>
<li>Training <code>convnext_tiny</code> + <code>lstm</code> + <code>attention pooling concat maxpooling</code> + <code>nn.Linear</code> end-to-end.<br>
(I see local CV increased a lot, But I didn't have enough time to submit it before the end of competition)</li>
</ol>
<h3>Things may not work</h3>
<ol>
<li>Use <code>uniformer</code> instead of <code>X3D</code></li>
<li>Use <code>efficientnetv2_s</code> instead of <code>convnext_tiny</code></li>
</ol>
<h1>For beginners like me</h1>
<p>I know it will be a little hard when you first join a competition which unfamiliar to you.<br>
I will list some personal experience to help beginners take the first step.</p>
<h3>Before joining the competition</h3>
<p>Before joining the competition, you must first clarify the competition task type(3D CT multi-label classification) and estimate the required calculation and required computing resources and hard disk capacity.<br>
This is used to decide whether you should join this competition, because you may be distressed when you don't have enough hard disk capacity to store the processed data.</p>
<h3>Before coding</h3>
<p>For an unfamiliar task(3D CT classification), the best way to get started is to read solultions from similar competitions that have ended.<br>
It just so happens that RSNA has held many similar competitions in the past few years.<br>
I find <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection" target="_blank">RSNA 2022 Cervical Spine Fracture Detection (last year)</a> and <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/" target="_blank">RSNA STR Pulmonary Embolism Detection (3 years ago)</a> are highly relevant to this competition, their tasks are all 3D CT classification.<br>
You should read a lot about the winner's solution to decide the method of your own experiments.<br>
My finding are that 2d backbone to extract features(or 2.5d) + 1d rnn to fusion them tend to perform best.</p>
<h3>About coding</h3>
<p>After reading the top solutions, you have two routes to develop your own pipeline.<br>
First is to copy and edit public code, but you should understand every line of code.<br>
Second is to refer to public code and use your own programming habits or code segment to develop pipeline.<br>
I use the second method because it can greatly improve my coding ability.<br>
In my learning process of deep learning in the last year, I continuously accumulate and write a <a href="https://github.com/m1dsolo/yangDL" target="_blank">simple pytorch-based framework for multi-fold train, val, test, predict</a> framework.<br>
It can exercise my coding skills and greatly improves the speed of my coding, it feels really good to have any code in your own hands.<br>
At the same time, this accumulation of code will also be beneficial to similar tasks in the future.</p>
<h3>Design method</h3>
<p>Most people's methods can't beat the baseline propbably because they just throw 3D data to the network for training.<br>
I did this too at the beginning, but the results were poor.<br>
<a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">GUANSHUO XU's 1st solution</a> tells me that cropping ROI is really important.<br>
Maybe because it is difficult for the network to learn ROI from just a few thousand of training data in 3d classification task.<br>
So I think segmenting each organ is an important first step.</p>
<p>Since there are 2D classification labels for bowel and extravasation, so I decide to use 2d classification method for both of them and 3d classification method for liver, spleen and kidney.</p>
<p>By <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/364837" target="_blank">Selim's 4th solution (CSN)</a> and <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362651" target="_blank">IAN PAN's 6th solution (X3D)</a>, it seems that using <code>transformer</code> as backbone is not good for small amounts of training data. Finally I choose <code>X3D_l</code> as my 3d classifier.</p>
<p>Next is to design the 2d + 1d method for bowel and extravasation classification.<br>
By <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">QISHEN HA's 1st solution</a> and <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/392449" target="_blank">Đăng Nguyễn Hồng's 1st solution</a>, <code>convnext</code> should be a good choice as 2D features extractor.<br>
By <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">Guanshuo Xu's 1st solution</a>, model of <code>lstm</code> + <code>attention pooling concat maxpooling</code> is selected by me to fuse features.</p>
<p>For segmentation, because I already wrote 2D segmentation code during last competition <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature" target="_blank">hubmap-hacking-the-human-vasculature</a>, so I just decide to train a 2D UNet in early experiments.<br>
Later I discovered that 2D UNet was not very good for segmenting spleen (maybe because my bad training skill), so I use 3D UNet to further refine segment spleen by inputting only data cropped by bbox of 2D UNet's coarse segmentation (spleen dice: 0.880 -> 0.943)</p>
<h1>Further</h1>
<p>After I briefly readed other winning teams' solution, I list some tips that I might try in the future.</p>
<ol>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449" target="_blank">NISCHAY DHANKHAR's 1st solution</a>: Auxiliary Segmentation Loss, 3D segmentation, generate image level label from series level label by organ visibility.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447453" target="_blank">THEO VIEL's 2nd solution</a>: infer which organs are present each of the slice, heavy augmentation, 3D <code>resnet18</code> to crop organs, use <code>RNN</code> to aggregate information from previous model and optimize the competition metric directly.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447464" target="_blank">YUJIARIYASU's 3rd solution</a>: enlarge mask before crop, input all organs into one model.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447549" target="_blank">LLREDA's 7th solution</a>: idea of <code>Mask2Former</code>, use image level label to assist feature learning, method of crop.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447706" target="_blank">IAN PAN's 8th solution</a>: use square root to scale probabilities.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447506" target="_blank">KAPENON's 9th solution</a>: post-process to improve the optimization of any_injury.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447450" target="_blank">YU4U's 10th solution</a>: stacking model to optimize any_injury, upsample positive smaples, train <code>max(logits, gt)</code> instead of <code>gt</code> because noise in image level label, <code>region_crop()</code> to remove outer black areas.</li>
</ol>
|
RSNA 2023 Abdominal Trauma Detection
|
1st Place Solution: Team Oxygen
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/nathanbanaste" target="_blank">@nathanbanaste</a> It was more up to the experimentation. One major downside of using a single channel and 96 slices would be a large increase in computation time as we are passing 3x num images to the encoder. Performance-wise, we tried channels 1, 3, 5, 6, out of these 3 & 5 worked the best for us. </p>
<p>Haha Thank you. I guess you already mentioned secret sauce. "hard work, routine" :D </p>
<p>Firstly, Thank you RSNA for hosting another interesting competition & my teammates <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/harshitsheoran" target="_blank">@harshitsheoran</a> - formation of <strong>Team oxygen</strong> ? :)) It was amazing to be #1 on public leaderboard for almost a month. I am sharing a quick overview of our solution, we will release the entire solution soon. It was really fun competing for #1 with <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <br>
<strong>Edit: Full solution published.</strong> <br>
Here is the inference code you may refer: <a href="https://www.kaggle.com/nischaydnk/rsna-super-mega-lb-ensemble" target="_blank">link</a> <br>
Our GitHub repo w/ all preprocessing + training code: <a href="https://github.com/Nischaydnk/RSNA-2023-1st-place-solution" target="_blank">link</a><br>
Demo Inference notebook: <a href="https://www.kaggle.com/code/haqishen/rsna-2023-1st-place-best-model-infer-cleaned" target="_blank">link</a></p>
<h4><strong>Split used:</strong> 4 Fold GroupKFold ( Patient Level)</h4>
<h2><strong>Our solution is divided into three parts:</strong></h2>
<p><strong>Part 1:</strong> 3D segmentation for generating masks / crops [Stage 1]<br>
<strong>Part 2:</strong> 2D CNN + RNN based approach for Kidney, Liver, Spleen & Bowel [Stage 2]<br>
<strong>Part 3:</strong> 2D CNN + RNN based approach for Bowel + Extravasation [Stage 2]</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fee28ed8eef7827d8f2cc69601875e5c2%2FScreenshot%202023-10-22%20at%2011.54.15%20AM.png?generation=1697955883461262&alt=media" alt=""></p>
<h2><strong>Data Preprocessing:</strong></h2>
<p>Here comes the key part of our solution, we will describe it later in more depth. <strong>Note:</strong> <em>All models were trained on image size 384 x 384. We use datasets preprocessing from <a href="https://www.kaggle.com/TheoVeol" target="_blank">@TheoVeol</a> and our data which we made rescale dicoms and applying soft-tissue windowing.</em></p>
<p>We take a patient/study, we run a 3d segmentation model on it, it outputs masks for each slice, we make a study-level crop here based on boundaries of organs - liver, spleen, kidney & liver. </p>
<p>Next, we make volumes from the patient, each volume extracted with equi-distant 96 slices for a study which is then reshaped to (32, 3, image_size, image_size) in a 2.5D manner for training CNN based models.</p>
<p>3 channels are formed by using the adjacent slices.</p>
<p>All our model takes in input in shape (2, 32, 3, height, width) and outputs it as (2, 32, n_classes) as the targets are also kept in shape (2, 32, n_classes).</p>
<p>To make the targets, we need 2 things, patient-level target of each organ and how much the organ is visible compared to its maximum visibility, this data is available after normalizing segmentation model masks in 0-1 based on number of positive pixels</p>
<p>Then we multiply targets * patient-level target for each middle slice of the sequence and that is our label</p>
<p>For example if a patient has label 0 for liver-injury and the liver visibility is as follows in the slice sequence</p>
<p>[0., 0., 0., 0.01, 0.05, 0.1, 0.23, 0.5, 0.7, 0.95, 0.99, 1., 0.95, 0.8, 0.4 …. 0. ,0., 0.]</p>
<p>We multiply it with label which is currently 0 results in an all zeros list as output, but if target label for liver-injury was 1, then we use the list mentioned above as our soft labels.</p>
<h2><strong>Stage2: 2.5D Approach ( 2D CNN + RNN):</strong></h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fe8df4581839b1fa7dcadf68fe2a715a1%2FScreenshot%202023-10-22%20at%205.31.23%20AM.png?generation=1697935695484067&alt=media" alt=""></p>
<p>In stage 2, we trained our models using the volumes either based on our windowing or theo's preprocessing approach and the masks/crops generated from 3D segmentation approach. Each model is trained for multiple tasks (segmentation + classification). For all 32 sequences, we predicted slice level masks and sigmoid predictions. Further, simple maximum aggregation is applied on sigmoid predictions to fetch study level prediction used in submissions. </p>
<p>For training our models, some common settings were:</p>
<ul>
<li><strong>Learning rate:</strong> (1e-4 to 4e-4) range</li>
<li><strong>Optimizer:</strong> AdamW</li>
<li><strong>Scheduler:</strong> Cosine Annealing w/ Warmup </li>
<li><strong>Loss:</strong> BCE Loss for Classification, Dice Loss for segmentation</li>
</ul>
<h3><strong>Auxiliary Segmentation Loss:</strong></h3>
<p>One of the key things which made our training much more stable and helped in improving scores was using auxiliary losses based on segmentation. </p>
<p>Encoder was kept same for both classification & segmentation decoders, we used two types of segmentation head:</p>
<ul>
<li><strong><em>Unet based decoder</em></strong> for generating masks</li>
<li><strong><em>2D-CNN</em></strong> based head </li>
</ul>
<pre><code>nn.Sequential(
nn.Conv2d(nb_ft, 128, =3, =1),
nn.BatchNorm2d(128),
nn.ReLU(=),
nn.Conv2d(128, 128, =3, =1),
nn.BatchNorm2d(128),
nn.ReLU(=),
nn.Conv2d(128, 4, =1, =0),
)
</code></pre>
<pre><code> self = self(true_encoder)
self = self(true_encoder)
</code></pre>
<p>We used the feature maps generated mainly from last and 2nd last blocks of the backbones & apply dice loss on the predicted masks & true masks. This trick gave us around +0.01 to +0.03 boost in our models. We used similar technique in Covid 19 detection competition held few years back, you can also refer my solution for more detailed use of auxiliary loss & code snippets. <br>
<a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/266571" target="_blank">link of discussion</a></p>
<p>Here is an example code for applying aux loss:</p>
<pre><code></code></pre>
<h3><strong>Architectures used in Final ensemble:</strong></h3>
<ul>
<li>Coat Lite Medium w/ GRU - <a href="https://github.com/mlpc-ucsd/CoaT" target="_blank">original source code</a></li>
<li>Coat Lite Small w/ GRU!</li>
<li>Efficientnet v2s w/ GRU [Timm]</li>
</ul>
<h3><strong>Augmentations:</strong></h3>
<p>We couldn't come up with several augmentations to use, but these were the ones which we used in our training.</p>
<pre><code> .Perspective(p=.),
.HorizontalFlip(p=.),
.VerticalFlip(p=.),
.Rotate(p=., limit=(-, )),
</code></pre>
<h2><strong>Post Processing / Ensemble:</strong></h2>
<p>Final ensemble for all organs model includes <strong>multiple Coat medium and V2s based models</strong> trained on either 4 Folds or Full data. </p>
<p>For extravasation, We mainly used Coat Small and v2s in ensemble. <br>
<strong>No major postprocessing</strong> was applied except for tuning scaling factors based on CV scores.<br>
To get the predictions, we aggregated the model outputs at slice level and simply took the maximum value for each patient.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fd6fa2cc524588b85b82906cccb6552bf%2FScreenshot%202023-10-22%20at%206.02.32%20AM.png?generation=1697936329146043&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fb69766140607127e717966085c927bce%2FScreenshot%202023-10-22%20at%206.07.13%20AM.png?generation=1697936371125187&alt=media" alt=""></p>
<h4><strong>Ensemble:</strong></h4>
<p>Within folds of each models, we are doing slice level ensemble.<br>
For different architectures & cross data models (theo/ours), we did ensemble after the max aggregation. </p>
<h4><strong>Best Ensemble OOF CV</strong>: 0.31x</h4>
<h4><strong>Best single model 4 fold OOF CV</strong>: 0.326 [Coat lite Medium]</h4>
<p>Organ level OOF for single model looks like this:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F9143afd07ba3069f2b2259b1d8fe80eb%2FScreenshot%202023-10-16%20at%204.05.07%20AM.png?generation=1697413214073394&alt=media" alt=""></p>
<p>Thank you. </p>
<p>EDIT 1: 3D segmentation code: <a href="https://www.kaggle.com/code/haqishen/rsna-2023-1st-place-solution-train-3d-seg/notebook" target="_blank">notebook link</a></p>
<p>Thank you. We are curious about that too :))</p>
<p>Thank you, yes we used 3D segmentation models output as ground truth, while training classification models, along with linear classifier head, we used 1 / 2 separate mask heads/decoders. So, our loss function looks something like this.</p>
<pre><code></code></pre>
<p>Not much difference in performance, but it brought some diversity between the models, improving the ensemble scores. </p>
|
RSNA 2023 Abdominal Trauma Detection
|
2nd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I use Ranger for both crop and 2D models. AdamW is used for the RNN though.</p>
<p>For the scheduler I always use linear scheduling, sometimes with some warmup. I tweak the lr to maximise performance, and usually try both Ranger and AdamW. Ranger has been working well with CNNs for me lately.</p>
<p>You're welcome !</p>
<ol>
<li>RNN only sees probabilities precomputed by the CNN, so training is done in 2 stages.</li>
<li>Crop to ~600 sheets, then resize to 200 with linear interpolation.</li>
</ol>
<p>Nothing was done to fight imbalance. Strong models could handle it :)</p>
<p>Thanks ! I will release clean code during the week </p>
<p>Thanks ! I train models on 8x 32GB V100, RAM is not the bottleneck for most models. I use at most 11 frames per input though, biggest batches I am handling are of size 8x11x3x 224x224 and the GPU is not full.</p>
<p>You're welcome !</p>
<p>Yep, it's trained on the provided segmentations.</p>
<p>My solution combines knowledge acquired in participating in the previous RSNA challenges, and involves much more than the month I spent working intensively in the competition. I've always enjoyed joining RSNA challenges, and have a special affection for medical imaging because of my relatives' medical profession. </p>
<p>Although 2nd is a great finish, the conditions in which it happened (i.e. unjustified deadline extension) make it really painful. The Kaggle team still does not understand how much modifying rules last minute hurts participants, or they simply don't care. I was already burnt out by competing full time for a month, adding 2 days on top plus missing first place by nothing is too much for me.</p>
<p><strong>Updates:</strong> </p>
<ul>
<li>More details added, fixed num_classes mistake.</li>
<li>Inference code : <a href="https://www.kaggle.com/code/theoviel/rsna-abdominal-inf" target="_blank">https://www.kaggle.com/code/theoviel/rsna-abdominal-inf</a></li>
<li><strong>Training code on Github :</strong> <a href="https://github.com/TheoViel/kaggle_rsna_abdominal_trauma" target="_blank">https://github.com/TheoViel/kaggle_rsna_abdominal_trauma</a></li>
</ul>
<h2>Data</h2>
<p>I use <a href="https://www.kaggle.com/theoviel/datasets?sort=votes" target="_blank">my datasets</a>! Give them a quick upvote so I can reach 4x GM. <br>
In addition, I resize the longest edge to 512 & center crop to 384. I also use 1 frame out of 2 to speed up 2D models inference, and limit stack size to 600. For models requiring a specific input size, images were simply resized afterwards. <br>
Images are loaded with <code>dicomsdl</code> and processed on GPU. It's fast. The pipeline without ensembling runs in less than 4h. </p>
<h2>Models</h2>
<h3>Overview</h3>
<p>Pipeline is below. It has two components: </p>
<ul>
<li>2D models + RNN, where the frame-level labels are inferred using organ visibility classification when needed. </li>
<li>Crop models for kidney / liver / spleen. Results are fed to the RNN after pooling.</li>
</ul>
<p>It re-uses winning ideas from the RSNA fracture competition (main references: <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/363232" target="_blank">[1]</a>, <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362640" target="_blank">[2]</a>, <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/363232" target="_blank">[3]</a>).</p>
<p><a href="https://ibb.co/MBBh8wP"><img src="https://i.ibb.co/fDDS86r/RSNA-Abd-drawio.png" alt="RSNA-Abd-drawio"></a></p>
<h3>2D models</h3>
<p>The key to achieving good performance with 2D models is cleverly sampling frames to feed meaningful information and reduce label noise.<br>
To do so, I use a simple but fast <code>efficientnetv2_rw_t</code> to infer which organs are present on every frame. During training, frames are sampled the following way:</p>
<ul>
<li>kidney / liver / spleen / negative bowel : Pick a random frame inside the organ.</li>
<li>positive bowel / positive extravasation : Use the frame-level labels.</li>
<li>Negative extravasation : Sample anywhere</li>
</ul>
<p>This model extracts probabilities for every 1/2 frame in the stack, and a RNN is trained on top to aggregate results. </p>
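<p>A minimal sketch of this sampling scheme (function, target names and the 0.5 visibility threshold are illustrative assumptions, not the author's code):</p>
<pre><code>import random
import numpy as np

def sample_training_frame(target, organ_probs, positive_frames=None, thr=0.5):
    # organ_probs: per-frame visibility probabilities for the relevant organ,
    # precomputed by the fast efficientnetv2_rw_t classifier, shape (n_frames,)
    if target in ("kidney", "liver", "spleen", "bowel_negative"):
        candidates = np.where(organ_probs > thr)[0]      # frames inside the organ
    elif target in ("bowel_positive", "extravasation_positive"):
        candidates = np.asarray(positive_frames)         # provided frame-level labels
    else:                                                # negative extravasation
        candidates = np.arange(len(organ_probs))         # sample anywhere
    return int(random.choice(candidates))
</code></pre>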
<p><strong>Details :</strong></p>
<ul>
<li>Heavy augmentations (HFlip, ShiftScaleRotate, Color augs, Blur augs, ElasticTransform) + cutmix (<code>p=0.5</code>)</li>
<li><code>maxvit_tiny_tf_512</code> was best. <code>convnextv2_tiny</code> and <code>maxvit_tiny_tf_384</code> were also great. </li>
<li>Ranger optimizer, <code>bs=32</code>, 40 epochs, <code>lr=4e-5</code></li>
<li>Only 3D info is the 3 adjacent frames used as channels.</li>
<li>11 classes : <code>[bowel/extravasation]_injury</code>(BCE optimized). And <code>[kidney/liver/spleen]_[healthy/low/high]</code> optimized with the cross entropy.</li>
</ul>
<h3>Crop models</h3>
<p>The strategy is similar: the key is to feed the model crops where the information is located. In this case, I used a 3D <code>resnet18</code> to crop the organs, and fed the crop to a 2D CNN + RNN model. It improves performance on kidney, liver and spleen by a good margin (a rough sketch of the cropping step follows the details below). </p>
<p><strong>Details :</strong></p>
<ul>
<li>Same augmentations with more cutmix (<code>p=1.</code>)</li>
<li>Ranger optimizer, <code>bs=8</code>, 20 epochs, <code>lr=2e-5</code></li>
<li>Best model uses 11 frames sampled uniformly in the organ. I used different number of frames for ensembling.</li>
<li><code>coatnet_1_rw_224</code> + RNN was best. I used different heads (RNN + attention, transformers) and other models CoatNet variants for ensembling.</li>
<li>3 class cross-entropy loss.</li>
</ul>
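<p>A rough sketch of the cropping step referenced above (the margin and upscaling logic are assumptions; the low-resolution mask stands in for the output of the author's 3D <code>resnet18</code> segmentation):</p>
<pre><code>import numpy as np

def crop_organ(volume, organ_mask_lowres, margin=0.1):
    # volume: (Z, H, W) CT stack; organ_mask_lowres: low-resolution binary organ mask
    scale = [v / m for v, m in zip(volume.shape, organ_mask_lowres.shape)]
    zs, ys, xs = np.where(organ_mask_lowres > 0)
    lo = [int(c.min() * s) for c, s in zip((zs, ys, xs), scale)]
    hi = [int((c.max() + 1) * s) for c, s in zip((zs, ys, xs), scale)]
    pad = [int((h - l) * margin) for l, h in zip(lo, hi)]        # small context margin
    lo = [max(l - p, 0) for l, p in zip(lo, pad)]
    hi = [min(h + p, s) for h, p, s in zip(hi, pad, volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def sample_uniform_frames(organ_crop, n=11):
    # pick n slices evenly spaced inside the cropped organ volume
    idx = np.linspace(0, organ_crop.shape[0] - 1, n).astype(int)
    return organ_crop[idx]
</code></pre>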
<h3>RNN model</h3>
<p>It is trained separately. Its role is to aggregate information from previous models, and optimize the competition metric directly.</p>
<p><strong>Details :</strong></p>
<ul>
<li>Restrict stack size to 600 (for faster loading), use 1/2 frame (for faster 2D inference). Sequences are then resized to 200 for batching. </li>
<li>Heavily tweaked LSTM architecture :<ul>
<li>1x Dense + Bidi-LSTM for the 2D models probabilities. Input is the concatenation of the segmentation proba (<code>size=5</code>), the classification probas (<code>size=11 x n_models</code>), and the classification probas multiplied by the associated segmentation (<code>size=11 x n_models</code>)</li>
<li>Pool using probabilities predicted by the segmentation model to get organ-conditioned features (sketched below).</li>
<li>Use the mean and max pooling of the <code>22 x n_models</code> 2D classification features</li>
<li>Independent per organ logits, which have access to the corresponding pooled features. For instance the kidney logits sees only the crop features for the kidney (<code>3x n_crop_models</code> fts) , the RNN features pooled using the kidney segmentation, and the <code>3 x n_models</code> pooled 2D features for the kidney class.</li></ul></li>
<li>AdamW optimizer, <code>bs=64</code>, 10 epochs, <code>lr=4e-5</code></li>
</ul>
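<p>A minimal sketch of the organ-conditioned pooling step from the list above (shapes and names are illustrative, not the author's exact head):</p>
<pre><code>import torch

def organ_conditioned_pool(frame_feats, seg_probs, eps=1e-6):
    # frame_feats: (B, T, F) per-frame features / probabilities from the 2D models
    # seg_probs:   (B, T, O) per-frame organ segmentation probabilities
    # Returns (B, O, F): for each organ, an average of frame features weighted
    # by how visible that organ is on each frame.
    w = seg_probs / (seg_probs.sum(dim=1, keepdim=True) + eps)
    return torch.einsum("btf,bto->bof", frame_feats, w)
</code></pre>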
<h3>Things that did not work</h3>
<ul>
<li>YoloX + Ian Pan extravasation boxes. Tried using the data to get crops, and adding the confidence to the RNN model. It did not really help and was painful to implement.</li>
<li>Adding a sequential head to the first stage worked early on, but as my crop models got stronger I figured out 2D was enough. This allowed for a significant speed up of my inference pipeline which is nice.</li>
<li>Ensembling did not really help on private ultimately and my best sub is my strongest single model.</li>
<li>Stuff I tried during the 2 days extended deadline. I was happy with how I managed my time and knew the extension only meant more time for other teams to catch up. Thanks again Kaggle 😭</li>
</ul>
<h2>Scores :</h2>
<ul>
<li>2D Classification + RNN :<ul>
<li>Using ConvNext-v2: <strong>Public 0.41</strong> - <strong>Private 0.39</strong></li></ul></li>
<li>Add the crop model:<ul>
<li>MaxVit (instead of ConvNext) + CoatNet-RNN : <strong>Public 0.37</strong> - <strong>Private 0.35</strong> (best private)</li></ul></li>
<li>Ensemble:<ul>
<li>3x2D models, 8x 2.5D models : <strong>Public 0.35</strong> - <strong>Private 0.35</strong></li></ul></li>
</ul>
<p><em>Thanks for reading !</em></p>
<p>If you look in the master branch of the repo the code is in one of the notebooks. It's not interesting though, which is why I shared the files here : <br>
<a href="https://www.kaggle.com/datasets/theoviel/rsna-abdominal-prepro-data" target="_blank">https://www.kaggle.com/datasets/theoviel/rsna-abdominal-prepro-data</a></p>
|
RSNA 2023 Abdominal Trauma Detection
|
32nd place overview + mistakes
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I know that 32nd place is not necessarily the dream place people will aim for in future competitions but it's always important to learn from mistakes and we've made a few so I'll share our solution here and share to my future self and others how to improve for the next time.</p>
<h2>Overall solution</h2>
<p><strong>First stage: organ segmentation</strong><br>
In order to go from weak labels to strong labels it's important to be able to segment the organs of interest, so that's what we did like many other teams.</p>
<p>One potential mistake we made here was to keep the original labels of the annotated data and treat them as out-of-fold predictions for the next stages. Although it might seem like a harmless decision, the fact that all the annotated data had injuries and that ground-truth segmentations could be distinguishable by the second-stage models might have introduced a small data leakage.</p>
<p><strong>Second stage: organ injury classification</strong><br>
Like many others we used our segmentation mask to crop region of interests and feed them to a 2.5D + LSTM models (one for each of the segmented organs: bowel, liver, spleen and kidney). But we decided to also give the segmentation mask to the model as input, this is where we did something different which probably was not necessary and might have introduce a leak in our pipeline (not the only problem though but a point of concern).</p>
<p>One thing we did differently from most solutions I've read so far is to train one single model for all targets, but with slicing planes parallel to the z-axis: the CT scan is fully stacked to form a 3D image (Z, W, H) and slices are then generated across W and H. This allows the model to see all the organs within only a few frames (16 or 24), to train end to end with all organs, to augment training by switching the slicing between W and H, and to ensemble a very different approach.</p>
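<p>A small sketch of this cross-axis slicing (the axis convention and slice count are assumptions):</p>
<pre><code>import numpy as np

def cross_axis_slices(volume, n_slices=16, axis=1):
    # volume: (Z, H, W) stacked CT scan. Slicing along axis=1 (H) or axis=2 (W)
    # yields frames that each contain a cross-section of all organs.
    idx = np.linspace(0, volume.shape[axis] - 1, n_slices).astype(int)
    slices = np.take(volume, idx, axis=axis)   # e.g. (Z, n_slices, W) for axis=1
    return np.moveaxis(slices, axis, 0)        # (n_slices, Z, W)
</code></pre>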
<p>On both approaches we noticed that our models were bad for extravasation, so we simply ignored it and never had time to come back to it before end of the competition. Note to myself, joining a competition 3 weeks before the end is short.</p>
<p><strong>Third stage: MLP for ensembling and competition metric optimization</strong></p>
<p>Since the goal of a Kaggle competition is to optimize a metric it's important to have a final approach that tries to minimize that metric. So for each patient we took the predictions of our two approaches, stacked them and stacked the predictions for two series belonging to a same patient.<br>
Now we just train an MLP to minimize the official competition metric.<br>
Note that the final model is predicting extravasation class without any input information about it.</p>
<p>Individually both of our second stage approaches reached about CV 0.42 - LB 0.52, when stacked together they reached CV 0.386 - LB 0.49.</p>
<h2>Big mistake</h2>
<p>When entering a competition you are always eager to know what your first solution is worth on the public LB, so you go for a quick and dirty inference notebook. For basic image competition everything goes fine, for 3 stages solution things can get ugly pretty fast.</p>
<p>You don't care too much about your crappy inference code when your LB score is improving, so you add new stuff until your score does not improve anymore and your CV - LB gap is the worst among all participants! 😭</p>
<p>In the end I've spent the last few days (+ the 2 extra days) trying to figure out from where this huge CV - LB gap came from…</p>
<p>Score on fold 0 during CV 0.393 -> score on fold 0 out of kaggle inference notebook 0.418… Silent errors in machine learning are frequent so be careful!</p>
<p>Anyway it was a fun competition to participate in, congratulations to everyone and see you soon!</p>
|
RSNA 2023 Abdominal Trauma Detection
|
382nd Place Solution for the RSNA 2023 Abdominal Trauma Detection
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Introduction</h1>
<p>It brings me joy to be a part of the RSNA 2023 Abdominal Trauma Detection. I would like to express my gratitude to the organizers, sponsors, and Kaggle staff for their efforts, and I wish all the participants the best. I gain a lot of knowledge from this competition and other participants.</p>
<p>I want to express my gratitude to MIRENA ANGELOVA for providing the public notebook <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb</a> </p>
<h1>2. Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h1>3. Overview of the approach</h1>
<p>The solution was a copy of the public notebook (Public/Private LB of 0.66708/0.67282), <br>
with the mean multiplier for extravasation injury and any injury changed to 27 (Public/Private LB of 0.66669/0.67180).</p>
<p>No data preprocessing was used.<br>
The only algorithm employed is the training-set mean (no trained model).</p>
<ul>
<li>Algorithm and Inference: <a href="https://www.kaggle.com/code/liudacheldieva/rsna-0-66-lb-aa7716?scriptVersionId=141544801" target="_blank">https://www.kaggle.com/code/liudacheldieva/rsna-0-66-lb-aa7716?scriptVersionId=141544801</a> <br>
(copied from the code provided by MIRENA ANGELOVA in a public notebook: <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151</a>)<br>
No validation strategy was implemented.</li>
</ul>
<table>
<thead>
<tr>
<th>Target (filled with its train-set mean)</th>
<th>Public notebook: multiply mean by</th>
<th>My submission: multiply mean by</th>
</tr>
</thead>
<tbody>
<tr>
<td>bowel_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>bowel_<strong>injury</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>extravasation_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>extravasation_<strong>injury</strong></td>
<td>28</td>
<td>27</td>
</tr>
<tr>
<td>kidney_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>kidney_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>kidney_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>liver_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>liver_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>liver_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>spleen_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>spleen_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>spleen_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>any_<strong>injury</strong></td>
<td>28</td>
<td>27</td>
</tr>
</tbody>
</table>
<p>These training-set means were used to fill in the submission values.</p>
<h1>4. Method</h1>
<p>Solution: Mean.</p>
<table>
<thead>
<tr>
<th>notebook</th>
<th>score(private)</th>
</tr>
</thead>
<tbody>
<tr>
<td>MIRENA ANGELOVA public <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb</a></td>
<td>0.67282</td>
</tr>
<tr>
<td>Change feature.split('_')[1] == 'injury' and feature != 'bowel_injury': submission[feature] *= 28 <br> to submission[feature] *= 27</td>
<td>0.6718</td>
</tr>
</tbody>
</table>
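<p>A minimal sketch of this mean-multiplier baseline (file paths and column handling are assumptions based on the competition files, not the notebook's exact code):</p>
<pre><code>import pandas as pd

train = pd.read_csv("/kaggle/input/rsna-2023-abdominal-trauma-detection/train.csv")
sub = pd.read_csv("/kaggle/input/rsna-2023-abdominal-trauma-detection/sample_submission.csv")

for feature in sub.columns:
    if feature == "patient_id":
        continue
    sub[feature] = train[feature].mean()                 # fill with the training-set mean
    if feature.endswith("_low") or feature == "bowel_injury":
        sub[feature] *= 4
    elif feature.endswith("_high"):
        sub[feature] *= 6
    elif feature.split("_")[1] == "injury":              # extravasation_injury, any_injury
        sub[feature] *= 27                               # 27 instead of the notebook's 28

sub.to_csv("submission.csv", index=False)
</code></pre>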
<p><br></p>
<h1>5. Final result</h1>
<table>
<thead>
<tr>
<th>low or bowel_injury</th>
<th>high</th>
<th>injury (not equal to bowel_injury)</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 27</td>
<td><strong>0.6718</strong></td>
<td>0.66669</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 29</td>
<td>0.6739</td>
<td>0.66754</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 5</td>
<td>mean * 28</td>
<td>0.67335</td>
<td>0.66502</td>
</tr>
<tr>
<td>mean * 3</td>
<td>mean * 6</td>
<td>mean * 28</td>
<td>0.67181</td>
<td>0.66819</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 28</td>
<td>0.67282</td>
<td>0.66708</td>
</tr>
</tbody>
</table>
<h1>6. Sources</h1>
<ul>
<li><a href="url" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151</a></li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
3rd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>In my normal model, the box of one organ is the input. The number of records is 4711(= patient-series unique num).<br>
In the model that uses the box of all organs, all organs will be inputs. since there are 4 organs, the number of records and the training time will be quadrupled.<br>
Since each organ has a different size, the appropriate image size and number of images should be different. Therefore, I used a custom sampler to ensure that only boxes of the same organ are included in the same batch. (If batch_size is 4, for example, 4 liver boxes will be included in one batch.)</p>
<p>This is the part that is hard to understand, so if you have any other questions, please feel free to ask!<br>
Congratulations on your silver medal!</p>
<p>Another 1st!! Whenever I look up, I see your back haha. Congrats!</p>
<p>My CV was not bad, so I was expecting a jump:) But your CV are even more amazing! Congratulations!</p>
<p>I salute you all for fighting to the end. Thanks also to the RSNA for their support. It was another great competition.</p>
<h2>Overview</h2>
<p>1. The 3D segmentation was trained with the given masks, and each organ was cut into a cube shape using the predicted results.<br>
2. The organ cubes were each fed into various 2.5D+3D classification models and the results ensembled.</p>
<h2>Segmentation</h2>
<p>I used Qishen's 3D segmentation code. <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607</a><br>
The models used were resnet18 and resnet50. The average of the output of all models was used as a mask.</p>
<h2>Crop</h2>
<p>Since information around the organs is essential for trauma detection, the mask was slightly enlarged before the boxes were cut out. Two patterns of mask sizes were employed and two datasets were created for each organ.</p>
<h2>Classification</h2>
<p>All classification models follow a 2.5D + 3D structure. Typically, multiple 2.5D images are generated from the organ box and input into the model. The input size is (8, 15, 3, 128, 128), where 8 is the batch size, 15 the number of images, and 3 the number of channels. Each image is transformed into a feature map by a 2D CNN and fed to subsequent processes such as pooling and LSTM (a rough sketch of this forward pass is given after the list below).<br>
Because of the correlation in injuries between organs, the problem was solved as a multi-target problem, using essentially all targets. I trained a variety of model patterns, including:</p>
<p>・Multiple Image Sizes<br>
・Multiple image counts<br>
・Multiple necks (average pooling / max pooling / lstm / gru)<br>
・Multiple crop sizes<br>
・Multiple backbones (convnext / se_resnext / maxvit / caformer / xcit)<br>
・Multiple augmentation sets<br>
・Multiple epochs(without early stop)<br>
・Multiple targets (all targets / single organ targets)<br>
・Some models are pre-trained with image-level bowel / extravasation labels and used as initial values for weights.<br>
・Some models reduce noise by using box × mask (the cropped box multiplied by its mask) as input. This type of model is specific to the liver. Because of its shape, the liver has a lot of noise information if you just crop it with a box, so masking was very effective.<br>
・Some models use all organ boxes for training. Different image sizes/number of images are used for each organ to ensure the same resolution. A custom sampler was defined so that only boxes of the same organ exist in the same batch, allowing simultaneous training with different sizes/numbers.</p>
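<p>A rough sketch of the 2.5D forward pass mentioned above, with an LSTM neck (backbone name, hidden size and pooling are placeholders, not the exact configuration):</p>
<pre><code>import timm
import torch.nn as nn

class CNN25D(nn.Module):
    def __init__(self, backbone="maxvit_tiny_tf_224", n_classes=11, hidden=256):
        super().__init__()
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.lstm = nn.LSTM(self.encoder.num_features, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (B, N, 3, H, W)
        b, n = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))   # (B*N, feat_dim) per-image features
        feats = feats.view(b, n, -1)            # back to (B, N, feat_dim)
        seq, _ = self.lstm(feats)               # (B, N, 2*hidden)
        return self.head(seq.mean(dim=1))       # pool over images, then classify
</code></pre>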
<p>The following are those that have made a particularly significant contribution to accuracy:<br>
・masking for liver model<br>
・custom sampler for the all-class models (sketched below)<br>
・2 types of crops</p>
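<p>A minimal sketch of the custom sampler idea, grouping batches so that every batch contains boxes from a single organ (names are illustrative):</p>
<pre><code>import random
from collections import defaultdict
from torch.utils.data import DataLoader, Sampler

class SameOrganBatchSampler(Sampler):
    # Yields batches whose samples all come from the same organ, so different
    # image sizes / frame counts per organ can coexist in one training run.
    def __init__(self, organ_of_index, batch_size, shuffle=True):
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.groups = defaultdict(list)
        for idx, organ in enumerate(organ_of_index):
            self.groups[organ].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.groups.values():
            indices = list(indices)
            if self.shuffle:
                random.shuffle(indices)
            batches += [indices[i:i + self.batch_size]
                        for i in range(0, len(indices), self.batch_size)]
        if self.shuffle:
            random.shuffle(batches)
        return iter(batches)

    def __len__(self):
        return sum(-(-len(v) // self.batch_size) for v in self.groups.values())

# usage (illustrative):
# loader = DataLoader(dataset, batch_sampler=SameOrganBatchSampler(organ_of_index, 8))
</code></pre>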
<h2>Ensemble</h2>
<p>A simple weighted average was performed for each target.<br>
Below is a simplified weights.</p>
<pre><code>{
'bowel_injury':
{
'liver_gru_chaug_256_cropv1': ,
'liver_maxvit_224_cropv1': ,
'liver_maxvit_224_cropv2': ,
'spleen_gru_128_cropv1': ,
'spleen_maxvit_224_25epochs_cropv1': ,
'kidney_maxvit_224_cropv1': ,
'kidney_caformer_192_cropv2_pretrain': ,
'kidney_maxvit_224_cropv2': ,
'kidney_maxvit_224_25epochs_cropv2': ,
'bowel_lstm_256_n15_cropv1': ,
'bowel_288_n25_cropv1_pretrain': ,
'bowel_288_n25_25epochs_cropv1_pretrain': ,
'all_pretrain_cropv1_input_bowel': ,
'all_pretrain_cropv1_input_kidney': ,
'all_lstm_pretrain_cropv2_input_kidney': ,
},
'kidney_healthy':
{
'liver_gru_chaug_256_cropv1': ,
'liver_maxvit_224_cropv1': ,
'liver_maxvit_224_cropv2': ,
'spleen_gru_128_cropv1': ,
'spleen_maxvit_224_25epochs_cropv1': ,
'kidney_maxvit_224_cropv1': ,
'kidney_caformer_192_cropv2_pretrain': ,
'kidney_maxvit_224_cropv2': ,
'kidney_maxvit_224_25epochs_cropv2': ,
'bowel_lstm_256_n15_cropv1': ,
'bowel_288_n25_cropv1_pretrain': ,
'bowel_288_n25_25epochs_cropv1_pretrain': ,
'all_pretrain_cropv1_input_bowel': ,
'all_pretrain_cropv1_input_kidney': ,
'all_lstm_pretrain_cropv2_input_kidney': ,
},
}
</code></pre>
<p>The final oof log loss is as follows:</p>
<pre><code>{
'bowel_injury': ,
'extravasation_injury': ,
'kidney_healthy': ,
'kidney_low': ,
'kidney_high': ,
'liver_healthy': ,
'liver_low': ,
'liver_high': ,
'spleen_healthy': ,
'spleen_low': ,
'spleen_high':
}
</code></pre>
<h2>Post processing</h2>
<p>After a simple weighted average of each model, each prediction was further weighted. This is the same post-processing as in the public notebook.<br>
I also tried stacking, which directly optimizes the metric, but the results were slightly worse than the simple post-processing due to overfitting.<br>
The final CV is 0.3316 and the respective scores are as follows.</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0.095</td>
<td>0.4853</td>
<td>0.2434</td>
<td>0.3489</td>
<td>0.3751</td>
<td>0.4417</td>
</tr>
</tbody>
</table>
<p><br></p>
<p>train code: <a href="https://github.com/yujiariyasu/rsna_2023_abdominal_trauma_detection" target="_blank">https://github.com/yujiariyasu/rsna_2023_abdominal_trauma_detection</a><br>
inference code: <a href="https://www.kaggle.com/code/yujiariyasu/3rd-place-inf-code/notebook" target="_blank">https://www.kaggle.com/code/yujiariyasu/3rd-place-inf-code/notebook</a></p>
<p>Thanks everyone for your hard work!</p>
|
RSNA 2023 Abdominal Trauma Detection
|
439th-Solution-RSNA_ATD
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Place 439th Solution for the RSNA Abdominal Trauma Detection competition and Insights.</h1>
<h2><strong>Context</strong></h2>
<p>Clinical Context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></p>
<p>Data Context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></p>
<h2>Intro</h2>
<p>Firstly, thanks to the RSNA for hosting the Abdominal Trauma Detection competition. It was both challenging and well-structured. A big shout-out to our community for the insightful discussions and for demonstrating what's possible; we learn a lot not only from researching and reading papers but also from reading the community's solution notebooks. Congratulations to the winning teams; we learned a lot from the solution write-ups. Congratulations also to our team <a href="https://www.kaggle.com/diegoramirezmendoza" target="_blank">@diegoramirezmendoza</a>, <a href="https://www.kaggle.com/pedromartnezbarrn" target="_blank">@pedromartnezbarrn</a>, <a href="https://www.kaggle.com/arantzabazalda" target="_blank">@arantzabazalda</a>, <a href="https://www.kaggle.com/eliudlimon" target="_blank">@eliudlimon</a> for such an amazing collaboration and perfect teamwork!</p>
<h2>Overview</h2>
<p>We implemented 2 different pipelines for this competition: a 3D CNN and a 2D CNN combined with an LSTM architecture. These were 2 distinct approaches.</p>
<h2>EDA</h2>
<p>As usual, we couldn't start this competition without an exploratory data analysis. The aim of this stage was to understand the data, its particular distribution, getting acquainted with the labels to classify, and exploring metadata that could provide better quality data preprocessing. By doing this, we were able to build a robust strategy based on the characteristics of the data and also considering the available resources. This is something we will discuss later in this document, but for now, let's say this first step (EDA), as a data scientist's best practice, provides good insights about the data and possible strategies, but also about the computational resources required.</p>
<h2>Data Preprocessing</h2>
<p>The data preprocessing step was laborious; the objective was to write scripts that automate the generation and transformation of the competition data stored locally. Even though we developed static functions and methods to behave like image data generators, we observed that preprocessing all this data during training wasn't really efficient: when training the proposed models, the time to train a single epoch was enormously high.</p>
<p>Essentially the "standard" part of the data preprocessing was to transform the raw DICOM files into arrays, rescale them from the original shape to 128 x 128, normalize them, and fix the pixel value representations required by some DICOM storage characteristics. From here we started exploring different models and possible solutions, summarized by our 3 proposed models: the first is a 3D CNN based on the work of [], the second is a CNN + LSTM, and the third was designed to train from scratch some state-of-the-art CNNs like VGG16 and ResNet50b (which didn't give us good results).</p>
<h3>The strength of segmentations data</h3>
<p>From the EDA stage, we gained insights into the segmentation data provided for the competition. These NIfTI files could offer a better understanding of the data and help develop a more refined data preprocessing pipeline.</p>
<p>Although there is a limited amount of segmentation data, it was deemed sufficient for our purposes. The proposed methodology was to train a U-Net model from scratch using this data and integrate its predictions into the main pipeline. But why? The reason is that some of the scans are not as informative as we would like, and this is particularly important given the competition's objective to focus on abdominal trauma. Consequently, some of the scans above and below the abdomen can be considered as "noise".</p>
<p>Based on this proposition, we decided to use the segmentation data to train a U-Net for the task of segmenting the organs, as provided by the masks in the NIfTI files. We then developed a threshold function to clean the inferences, using the first appearance of the liver as the upper limit and the last segment of the bowel as the lower limit. This strategy for reducing the data was successfully implemented and integrated into the preprocessing pipeline.</p>
<p>The experiment setup for training the U-Net model consisted of several steps. First, we implemented the model by extracting data as numpy files. Second, we defined the hyperparameters for the model. Third, we evaluated the performance of the model.</p>
<p>The final model used for the data-reduction preprocessing pipeline was trained using the Adam optimizer with a learning rate of 0.000001. We used categorical cross-entropy as the loss function, MeanIoU as our tunable metric, and set the batch size to 128. The model was set to train for 128 epochs, although due to GPU resource limitations on Google Colab, training crashed somewhere around epoch 75. The total amount of data used for training was 15,520 DICOM and NIfTI files, which included both images and segmentation masks. Despite the challenges, the trained U-Net model provided valuable information for the preprocessing pipeline and allowed us to effectively reduce the data to focus on the region of interest in abdominal trauma cases.</p>
<h2>Training Results:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2Fa610792c5a9897038986087883df671c%2Fdescarga.png?generation=1699395350287888&alt=media" alt=""></p>
<h2>Inference Results:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2F7b77812541848956d381fbbf1416ed2e%2Fdescarga%20(2).png?generation=1699395316405025&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2F28d7f4e90e094f9dbd48760e27c4b037%2Fdescarga%20(3).png?generation=1699395391353207&alt=media" alt=""></p>
<h2>3D CNN - data preprocessing</h2>
<p>For the 3D CNN model, after completing our "standard" preprocessing procedure and fixing the scale to 128 x 128, the question of determining the depth of the volume arose. To answer this, we referred to the work from [] and considered the GPU capacity. Ultimately, we set the volume depth to 64.</p>
<p>To get the data into the shape of 128x128x64, we used the zoom [] function from Python's scipy module. To automate this preprocessing task and data generation, we developed a preprocessing script that maps the patient data, storing the DICOM paths in a DataFrame (which was later saved as a CSV file and also stored in an SQLite database). From these mapped paths of the original data, we then distributed the specific "maps" to correctly generate the data.</p>
<p>The data was generated and saved as numpy files, normalized to values between 0 and 1, with the final shape of 128x128x64 for each series (patients' folders store series, with some patients having only one series and others having two series). As a reminder, the total number of training data 128x128x64 data volumes generated sums up the complete number of series present in the competition's dataset.</p>
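<p>A small sketch of this volume generation step with <code>scipy.ndimage.zoom</code> (axis order and interpolation order are assumptions):</p>
<pre><code>import numpy as np
from scipy.ndimage import zoom

def make_volume(slices, target=(128, 128, 64)):
    # slices: ordered list of 2D arrays decoded from one series' DICOM files
    volume = np.stack(slices, axis=-1).astype(np.float32)        # (H, W, D)
    factors = [t / s for t, s in zip(target, volume.shape)]
    volume = zoom(volume, factors, order=1)                      # interpolate to 128x128x64
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-6)  # 0..1
    return volume                                                # later saved as .npy
</code></pre>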
<h2>CNN-LSTM - data preprocessing</h2>
<p>On this model we use the down-sampling block of the U-net previously trained for the Semantic Segmentation using the masks, to extract the Feature vector of the image, this part of the U-net is called Encoder, the pipeline was:</p>
<p>Semantic segmentation with the U-Net -> reduce volume shape to 128x128x64 (using the zoom function) -> feature extraction (pretrained encoder) -> bidirectional LSTM<br>
Note: during feature extraction we noticed there were many 0's in the feature vectors; we dropped those (they don't carry important information), and the final shape input to the model was (batch_size, 64, 624).</p>
<h2>Models</h2>
<ul>
<li>3D CNN</li>
<li>CNN-LSTM (Pretrained Encoder from U-net -> Bi-LSTM)</li>
</ul>
<h2>What did not work</h2>
<ul>
<li>The 3D CNN model had bad results; we think this kind of architecture only works with very good cropping of the volumes. The bad performance is caused by the black pixels, which tend to make the model overfit.<br>
Currently working on different approaches…</li>
</ul>
<h2>NOTEBOOKS</h2>
<p>Training: (The train of the model was on a local computer)</p>
<p>Inference: <a href="https://www.kaggle.com/alejandrolunamtz/inference-rsna-atd-fv" target="_blank">https://www.kaggle.com/alejandrolunamtz/inference-rsna-atd-fv</a></p>
|
RSNA 2023 Abdominal Trauma Detection
|
4th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to RSNA and Kaggle for hosting this competition. Congratulations to all the winners and participants.</p>
<h2>Overview</h2>
<p>I employed a 2.5D pipeline and trained for both classification and segmentation tasks.</p>
<h2>Dataset</h2>
<p>I utilized 3D masks from TotalSegmentator and subsequently retrained a 2D model for the liver, spleen, bowel, kidney, and body. The DICOM images were rescaled to (1, 1, 5) and stored with a 5-channel mask.<br>
For each epoch, I first sampled N=14 frames from the rescaled array, then used the body mask to filter out hands or other irrelevant areas.<br>
I used the organ mask to limit the Z-axis space, as slices without the target organ might contain less valuable information.</p>
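<p>A small sketch of this masking and frame-sampling step (array layout and margin are assumptions):</p>
<pre><code>import numpy as np

def select_frames(volume, body_mask, organ_mask, n=14, margin=2):
    # volume, body_mask, organ_mask: (Z, H, W) arrays on the rescaled grid
    volume = volume * body_mask                           # drop arms / table / background
    z_has_organ = np.where(organ_mask.any(axis=(1, 2)))[0]
    z0 = max(int(z_has_organ[0]) - margin, 0)
    z1 = min(int(z_has_organ[-1]) + margin, volume.shape[0] - 1)
    idx = np.linspace(z0, z1, n).astype(int)              # N frames inside the organ range
    return volume[idx]
</code></pre>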
<h2>Model</h2>
<p>I used a Unet model integrated with Pyramid Vision Transformer V2 and MaxViT encoder. Transformers outperformed the convolutional models, especially for the extravasation target.</p>
<p>Pretraining on the 2D mask facilitated convergence.<br>
6 classification heads were employed for prediction.<br>
Perhaps the lack of an RNN layer is the primary reason I didn't match the performance of the top teams.</p>
<h2>Loss</h2>
<p>I used the CE loss with weights identical to the metric.</p>
<h2>Results</h2>
<table>
<thead>
<tr>
<th>Encoder</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>pvt-b2</td>
<td>0.3783</td>
<td>0.41</td>
</tr>
<tr>
<td>pvt-b3</td>
<td>0.3750</td>
<td>0.41</td>
</tr>
<tr>
<td>pvt-b4</td>
<td>0.3786</td>
<td>0.4</td>
</tr>
<tr>
<td>maxvit_t</td>
<td>0.3810</td>
<td>0.42</td>
</tr>
<tr>
<td>ensemble</td>
<td>0.3570</td>
<td>0.4</td>
</tr>
<tr>
<td>ensemble w/ scale</td>
<td>0.3530</td>
<td>0.39</td>
</tr>
</tbody>
</table>
<h2>Code</h2>
<p><a href="https://github.com/iseekwonderful/RSNA-2023-Abdominal-Trauma-Detection-4th-Place-Code.git" target="_blank">Github link</a></p>
<p>Thank you.</p>
<p>The gap between effb0 and pvt-b2 is about 0.02. Btw, your 3D model performance is impressive; mine cannot even reach 0.5. Do you use any pretrained weights, ROI, or some other method to improve the performance?</p>
<p>Thanks, I tried several backbones and only DenseNet could converge. It seems cropping is necessary for 3D models.</p>
|
RSNA 2023 Abdominal Trauma Detection
|
51 Place Solution for the RSNA 2023 Abdominal Trauma Detection
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This solution only focused on solid organ classification because I didn't find a good solution for the bowel and extravasation. In the final submission, the bowel and extravasation predictions are the same as the mean baseline.</p>
<h1>Overview of the Approach</h1>
<ul>
<li>2D UNet Segmentation (<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">Pytorch Segmentation Model</a> with efficientnet-b0) + Bbox crop</li>
<li>2.5D CNN classification (timm EfficientNetV2-s + FC head)</li>
<li>4-fold Ensemble</li>
</ul>
<h1>Details of the submission</h1>
<h2>2D UNet Segmentation</h2>
<p>A 2D UNet is trained on the front view (coronal) slices from the given segmentations. This is because my main goal is to roughly crop the solid organs from the full height CT scans to avoid extra dataloading and remove noise for classification training.</p>
<p>The model is trained at 256x256 resolution because pixel-wise accuracy is not critical for getting the bounding box, which makes the training very fast (<1hr).</p>
<h3>GT vs predicted mask:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F07ac8b5d37ea68487242c15b67c549f3%2FScreenshot%20from%202023-11-02%2011-37-46.png?generation=1698943423363415&alt=media"></p>
<p>The bbox of each slice is retrieved from the segmentation mask. The solid-organ bbox for the entire CT scan can then be found from the union of the slice bboxes, with a small margin. </p>
<p>All the solid organ volumes are then cropped from the full-height scans and saved as 3D arrays (.npy) for classification training and inference.</p>
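<p>A minimal sketch of the bbox-union step (margin and mask layout are assumptions):</p>
<pre><code>import numpy as np

def organ_bbox_from_masks(slice_masks, margin=8):
    # slice_masks: (N, H, W) binary masks predicted for one organ across the scan.
    # The union of the per-slice regions, plus a small margin, gives the crop box.
    ys, xs = np.where(slice_masks.any(axis=0))
    h, w = slice_masks.shape[1:]
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, w)
    return y0, y1, x0, x1
</code></pre>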
<h2>2.5D CNN Classification</h2>
<p>A 2.5D CNN Classification model is trained on the cropped solid organ volume.<br>
Each volume is resized and augmented by the data loader. (Some samples have a large number of slices (tensor height); it is faster to do the interpolation before the augmentations.) </p>
<h3>DataLoader Example (Batch x 160 x 352 x 352):</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F4b5f19a8b6cb0943b748898573e777df%2FScreenshot%20from%202023-11-02%2011-38-15.png?generation=1698943501191093&alt=media" alt=""></p>
<p>The model is trained at the series level and the final prediction for each patient is calculated from the mean of the series predictions.</p>
<h1>Sources</h1>
<p><a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">Pytorch Segmentation Model</a> </p>
|
RSNA 2023 Abdominal Trauma Detection
|
514th Place Solution for the RSNA 2023 Abdominal Trauma Detection Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Radiological Society of North America and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views and notebooks.<br>
Congratulations to all the winners! Although my ranking was not good, I would like to share what I did and the results. I worked on predictions using a small number of images to conserve computational resources, and made predictions only for the liver, spleen and kidneys for a subset of patients. I set my goals small, because dealing with a lot of data was likely to make me give up along the way.</p>
<h2><strong>Context</strong></h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h2><strong>Overview of the Approach</strong></h2>
<p>Twenty input images were selected per series_id. From the selected images, the nearest neighbors model selected images that showed the target organs, excluding images that only showed lungs, legs, etc. The remaining images were used to predict damage to the spleen, liver, and kidneys, and so on.</p>
<h2><strong>Details of the submission</strong></h2>
<p>First, using the <a href="https://www.kaggle.com/code/parhammostame/construct-3d-arrays-from-dcm-nii-3-view-angles?scriptVersionId=138964121" target="_blank">notebook [1]</a> method by <a href="https://www.kaggle.com/parhammostame" target="_blank">Parham Mostame</a>, 20 images (256*256 PNG images) were selected per series_id. In this process, unusual DICOM files were standardized according to the <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">post [2]</a> by <a href="https://www.kaggle.com/huiminglin" target="_blank">Hui Ming Lin</a>. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F1afcba921294d620d637fa28cb1c681c%2Fcor.png?generation=1699094504090575&alt=media" alt=""><br>
Because some of the selected images contained mainly non-target-related items, such as lungs and legs, only the necessary images were further selected. For this, 206 sets of data with segmentation information were used to predict images containing the liver, images containing the spleen, images containing the liver and spleen, and so on. The model was from the <a href="https://www.kaggle.com/competitions/digit-recognizer" target="_blank">"Digit Recognizer" Competition</a> notebook [<a href="https://www.kaggle.com/code/t0m0ff3l/top-score-using-nearest-neighbours?scriptVersionId=12826678" target="_blank">3</a>, <a href="https://www.kaggle.com/code/shadabhussain/digit-recognition-using-nearest-neighbors" target="_blank">4</a>] and used nearest neighbors learning. It runs fast, can be used without a GPU [<a href="https://www.kaggle.com/code/hiroshisakiyama/top-score-using-nearest-neighbours-without-gpu?scriptVersionId=146578718" target="_blank">5</a>], and seems to have worked somewhat well. <br>
Then, about one-third of the data was used to make predictions for each organ. The aforementioned nearest neighbors model was used, but was planned to be changed later to a better model. For the liver, it appeared to work somewhat well, and for the spleen, some of the results seemed to work well. The kidneys were not so good, and the bowels were not good at all. This time, I stopped here and only made final predictions for the liver, spleen, and kidneys for a limited set of patients. Where no prediction was made, the average value was used.<br>
In the private score results, the score was better when only the liver and spleen were predicted, without including the kidney prediction. (Note: LS means liver and spleen; LSK means liver, spleen and kidneys.)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F061e1fe83f5391b7e5763477f98fa576%2FRSNAscore.png?generation=1699094621201857&alt=media" alt=""><br>
Next time, I would like to learn more and work with a better model appropriately.</p>
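<p>A minimal sketch of the nearest-neighbours slice selection described above (feature construction and variable names are assumptions, not the referenced notebooks' exact code):</p>
<pre><code>import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# labelled_images: (n, 256, 256) slices from the 206 segmented series
# labelled_organs: which organ combination each labelled slice shows
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(labelled_images.reshape(len(labelled_images), -1), labelled_organs)

# candidate_images: the 20 slices selected per series_id
pred = knn.predict(candidate_images.reshape(len(candidate_images), -1))
keep = np.isin(pred, ["liver", "spleen", "liver_and_spleen"])   # informative classes
selected = candidate_images[keep]
</code></pre>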
<h2><strong>Sources</strong></h2>
<p>[1] <a href="https://www.kaggle.com/parhammostame" target="_blank">Parham Mostame</a>, <a href="https://www.kaggle.com/code/parhammostame/construct-3d-arrays-from-dcm-nii-3-view-angles?scriptVersionId=138964121" target="_blank">Construct 3D arrays from DCM/NII (+ 3 view angles)</a>.<br>
[2] <a href="https://www.kaggle.com/huiminglin" target="_blank">Hui Ming Lin</a>, <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">Standardizing Unusual Dicoms</a>.<br>
[3] <a href="https://www.kaggle.com/t0m0ff3l" target="_blank">Hendrik</a>, <a href="https://www.kaggle.com/code/t0m0ff3l/top-score-using-nearest-neighbours?scriptVersionId=12826678" target="_blank">Top Score using Nearest Neighbours</a>.<br>
[4] <a href="https://www.kaggle.com/shadabhussain" target="_blank">Shadab Hussain</a>, <a href="https://www.kaggle.com/code/shadabhussain/digit-recognition-using-nearest-neighbors" target="_blank">Digit Recognition using Nearest Neighbors</a>.<br>
[5] <a href="https://www.kaggle.com/hiroshisakiyama" target="_blank">Hiroshi Sakiyama</a>, <a href="https://www.kaggle.com/code/hiroshisakiyama/top-score-using-nearest-neighbours-without-gpu?scriptVersionId=146578718" target="_blank">Top Score using Nearest Neighbours without GPU</a>.</p>
|
RSNA 2023 Abdominal Trauma Detection
|
548th Place Solution for the RSNA 2023 Abdominal Trauma Detection Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>CONTEXT SECTION</h2>
<ul>
<li><strong>Business context</strong>: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">Contest Page</a></li>
<li><strong>Data context</strong>: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">Dataset</a></li>
</ul>
<hr>
<h2>OVERVIEW OF APPROACH</h2>
<h3>DATASET PREPARATION:</h3>
<p>Dataset provided for the contest consists of 3 main data sources:</p>
<ol>
<li>Metadata for each patient</li>
<li>Dicom or CT-SCAN images for each patient</li>
<li>NII files or MRI scan images for each patient</li>
</ol>
<p>Our initial aim of this project was to leverage all 3 parts together. Before going there, let us understand the data provided as images:</p>
<p>The dataset provided consisted of 2 types of images (CT scan) - .dcm files and .nii files.</p>
<h4>Exploring ‘.dcm’ Files:</h4>
<p>.dcm is the extension for DICOM files, an abbreviation for Digital Imaging and Communications in Medicine. A CT scan is comprised of a set or sequence of such X-ray images, providing details on organ health.</p>
<p><strong>examples of DICOM images:</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2F7ffb464d82e78e29bca3dafe437a1532%2FDICOM1.png?generation=1699299484698499&alt=media" alt="Patient 10004 - record 21057 - IMG 1000.dcm"> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2F47a42e18f96eda654081f04d8eba4cc3%2FDICOM2.png?generation=1699299601100767&alt=media" alt="Patient 10004 – record 21057 – IMG 1029.dcm"><br>
Patient 10004 – record 21057 – IMG 1000.dcm, Patient 10004 – record 21057 – IMG 1029.dcm</p>
<h4>Exploring ‘.nii’ Files:</h4>
<p>For this problem, we decided to take an alternate route by converting the 3D lattice into 3 sets of lateral snapshots, each set keeping 2 axes fully spanned and leaving 1 axis available for traversal. </p>
<p>To explain it in simple terms: we stepped through values of z while keeping the full x-y plane at each step. This produced slices parallel to the x-y plane at regular intervals along the z axis, from z = 0 to z = max.</p>
<p>A visualisation of this lateral segment can be seen in the screenshot below. The screenshot is captured on a web-app available to public access via this <a href="https://socr.umich.edu/HTML5/BrainViewer/" target="_blank">link</a></p>
<p><strong>examples of NII files:</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2Fde92fe3838a9035bcb54aa3b1ec46d33%2FNII_1.jpg?generation=1699299895059718&alt=media" alt="Patient 10000 NII File"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2Fa1b52b75588e1468665c0f2c5d578695%2FNII_2.jpg?generation=1699300536859629&alt=media" alt=""></p>
<hr>
<h3>EXPLORATORY DATA ANALYSIS:</h3>
<h4>META DATA Normalisation</h4>
<p>This was done using the mathematical expression:<br>
<strong>normalised_aortic_hu</strong> = (aortic_hu − lowest_aortic_hu) / (highest_aortic_hu − lowest_aortic_hu)</p>
<h4>NII FILE ANALYSIS</h4>
<p><strong>Redundant Full Black image cleanup</strong><br>
The cleaning operation was primarily to remove the fully black images from the NII-file-generated dataset. This was particularly difficult because the number of such images was not the same across files: in some scans the first 12 images were fully black, in others the first 16. Simply removing them would have created an irregularly sized set of images. Instead, we first found the minimum count of fully black images and then trimmed that many images from both ends of every scan, reducing the dataset while keeping it uniform across all patients, in an optimised set size that reduces irregularity and makes each frame easier for the model to process.</p>
<p>So to talk numerically,<br>
We took 100 snapshots per axis of each MRI scan or each NII file.<br>
So, each NII file contributed along x = 100, y = 100, z = 100 => total 300 images.<br>
We found the first non-black image at the 9th position, i.e. we decided to remove 8 images from each side on all axes. So the new dataset is x = 84 (100 - 8 - 8), y = 84, z = 84 => total = 252 images per NII file.<br>
So the effective dataset was reduced by 16% after removal of the redundant images.</p>
<p><strong>Note:</strong> Feature Extraction could not be done well due to inaccurate slice ranges and image rotation in NII files unlike DICOM files.</p>
<h4>DICOM FILES ANALYSIS</h4>
<p><strong>Redundant Full Black image cleanup</strong><br>
The initial approach for DICOM file processing was, as with the NII files, to remove redundant images, but it turned out that nearly all of the images were significant and could not be skipped. This was great because we could utilise the full dataset and training on each individual image was meaningful.</p>
<h4>Feature Extraction on DICOM Images</h4>
<p>The second exploratory analysis process we implemented was core feature extraction from the images by localisation of organs. How exactly? The organs are localised to certain positions of the CT scan, or alternatively the DICOM images. We tried to extract the specific location of each organ with a 20% buffer border around it to adjust for any dislocation of the organ due to natural causes such as genetics or body fat layers. This buffer was also meant to account for organ movement due to diaphragm compression and relaxation during breathing.</p>
<p>We tried to localise this subsection for each organ and train each organ's health classifier individually, splitting the DICOM-based ML model into per-organ subsections. Although we successfully separated specific organs from the images, with a 20% buffer in each axis (10% on each border), there was still overlap.</p>
<p>It so happened that the considered organs overlapped in each other's specific sub-region images. For example, an image for kidney health analysis also contained a significant part of the spleen, so if we went forward with this implementation, damage to the spleen could be reflected as damage to the kidney in the model predictions, because the spleen occupies a significant portion of those images. </p>
<hr>
<h3>Validation Strategy</h3>
<p>For testing of data on the dataset, we used 2 methods of testing excluding the public dataset-based testing. </p>
<ol>
<li>In-sample testing (dataset that was a part of model training)</li>
<li>out-sample testing (dataset that model has never seen)<br>
• For out-sample testing, we split the dataset into an 80:20 ratio of train : test. We reserved the 20% dataset as out-sample testing dataset. <br>
• For in-sample testing, we used randomised selection of 25% of the training dataset (training dataset = 80% of total dataset). The specific number 25% was a result of trying to match the out-sample testing to create an effective 20% total dataset for in-sample testing as well.</li>
</ol>
<hr>
<h3>ML MODEL SECTION</h3>
<h4>MODEL LOGIC</h4>
<p><strong>Initial Approach:</strong><br>
As discussed earlier, we planned to utilise all 3 types of data together. But there was a problem with this approach: the 3 datasets showed a lot of variation. If we were to treat MRI images and CT-scan images as a single input to the model, we were bound to face training issues and plunging model accuracy. To solve this, we took an ensemble-like approach, treating each dataset separately with its own model and then combining the generated results from each model with appropriate weights to decide on the best output.</p>
<p>An illustration of the same can be seen as:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2F47fae49205d7d037c21eadfb3f126f25%2FMLDA_Model.jpg?generation=1699305429235805&alt=media" alt=""></p>
<p>The data differed between patients: not every patient had both a CT scan (aka CAT scan in the US) and an MRI done during diagnostics. In such cases, we simply changed the weights for such patients and distributed the remaining weights proportionally.</p>
<p>For example:<br>
If the decided ideal weights were: α=0.4, β=0.4, γ=0.2<br>
If a patient A only had a CT scan and not an MRI, we could simply set β = 0 and redistribute α and γ proportionally as α = α/(α + γ), γ = γ/(α + γ).<br>
Although this was our initial plan, we observed that the values predicted by the NII file model had a lot of discrepancy. Due to this, the idealistic β value would have been near 0. Thus, to save processing time, we eliminated the NII file processing segment and its ML model entirely from the final solution, keeping just DICOM and metadata.</p>
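<p>A small sketch of this weight redistribution (function name and example values are illustrative):</p>
<pre><code>def redistribute(alpha, beta, gamma, has_mri=True):
    # Drop the NII/MRI branch when a patient has no MRI and renormalise the rest.
    if not has_mri:
        beta = 0.0
        total = alpha + gamma
        alpha, gamma = alpha / total, gamma / total
    return alpha, beta, gamma

# redistribute(0.4, 0.4, 0.2, has_mri=False) -> (0.667, 0.0, 0.333)
</code></pre>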
<hr>
<h2>DETAILS OF THE SUBMISSION</h2>
<h3>MODEL ALGORITHMN</h3>
<h4>ML model trained on DICOM images</h4>
<p>To build this model, we have taken help of EfficientNet_B4 prebuilt model. </p>
<h4>Pseudo Code for DICOM - EfficientNet_b4 implementation</h4>
<pre><code>import os
import torch
import torch.nn as nn
import torch.optim as optim
from timm import create_model

weights_path = ""  # local EfficientNet-B4 weights path (value elided in the original)

def build_model(num_classes):
    # timm model id assumed from the text ("EfficientNet_B4 prebuilt model")
    model = create_model("efficientnet_b4", pretrained=False)
    if os.path.exists(weights_path):
        model.load_state_dict(torch.load(weights_path, map_location="cpu"), strict=False)
    else:
        print(f"No local weights found at {weights_path}")
    # replace the classifier head with one output per target column
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = build_model(len(train_df.columns) - 1).to(device)   # train_df defined upstream
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
num_epochs = 3

for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    for i, (inputs, labels) in enumerate(train_loader):      # train_loader defined upstream
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: loss = {running_loss / len(train_loader):.4f}")
</code></pre>
<p>In summary, this process leverages DICOM images to train a deep learning model, fine-tunes its parameters, and refines its understanding of the dataset through dynamic weight assignments. By iteratively training, testing, and adjusting the model, we aim to achieve a high level of accuracy in its predictions. This approach represents a key component of our strategy for effectively utilizing DICOM images in our machine learning workflow.</p>
<h4>Weighted Baseline condition model</h4>
<p>In the weighted baseline approach, we assign distinct weights to each of the nine target classes within the dataset. These weights reflect the relative significance of each class, taking into account the clinical importance of various medical conditions. For instance, we assign a weight of 2 to injuries like "kidney low" and "liver low," indicating their moderate impact. Conditions such as "spleen high" and "kidney high" are assigned a weight of 4, reflecting their higher clinical significance. Additionally, we assign a weight of 6 to conditions like "extravasation high" and "extravasation low," and set the base weight for no-injury cases to 0. Please refer to the detailed solution linked at the end for the code.</p>
<hr>
<h3>MODEL REASONING:</h3>
<h4>DICOM Model (EfficientNet_b4)</h4>
<p><strong>Why EfficientNet_B4 and not other models?</strong><br>
To explain this answer, it would be better to pictorially represent a study of comparable image detection ML Models like Resnet.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2F891e0be6c537b1b858773f1459a0502e%2Fefficientnetb4%20comparision%20graph.png?generation=1699306350364423&alt=media" alt="efficientNet_b4 comparision"><br>
<strong><em>NOTE:</em></strong> EfficientNet BenchMarking Data belongs to the creator - <a href="https://ai.googleblog.com/2018/08/mnasnet-towards-automating-design-of.html" target="_blank">AutoML MNAS</a>. The above image is used only for educational purposes. The original Image is in Free to use MIT license.</p>
<p>Here we can see that, on the same benchmark dataset, the EfficientNet models (all versions B0-B7) outperform other industry-standard vision models such as NASNet, ResNet and AmoebaNet. Although the graph plots accuracy against parameter count, it can be read as a proxy for computational cost and dataset-size requirements, since both are directly related to the number of parameters.</p>
<p><strong>For our model implementation, we used efficientNet_B4 prebuilt Model</strong><br>
Efficient net provides various versions ranging from B0 to B7. All these versions differ in model size, computational complexity, and accuracy. As the number of parameters increases the models become increasingly data-hungry requiring more data to train and tend to overfit to the data really fast if dataset size is inadequate for the model complexity and parameter count.</p>
<p>For our data set, B4 was most suitable. This was evident from when we compared computational complexity and accuracy, B4 was found to be more suitable for our model than B0, B7 or any other mode for same parameters when trained on the complete training set for out-sample testing. Hence, we have used <strong><em>Efficient_b4 model</em></strong> to train our model.</p>
<hr>
<h3>HYPER-PARAMETERIC TUNING</h3>
<h4>Model Parameters:</h4>
<p>We decided to go for individual weights for each organ where all values of the three model parameters α, β and γ from Result = αX + βY + γZ were different for each organ.</p>
<p>Taking Result = αX + βY + γZ we observe that the ideal values of parameters for organs are:<br>
Kidney: α = 0.43, β = 0.04, γ = 0.53<br>
Liver: α = 0.49, β = 0.03, γ = 0.48<br>
Spleen: α = 0.47, β = 0.03, γ = 0.50<br>
Extravasation: α = 0.39, β = 0.00, γ = 0.61<br>
Bowel: α = 0.46, β = 0.03, γ = 0.51<br>
Averaged: α = 0.44, β = 0.03, γ = 0.52</p>
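<p>A minimal sketch of how this per-organ blend can be applied, assuming X, Y and Z are the per-organ predictions of the DICOM model, the NII model and the weighted baseline respectively (the exact X/Y/Z-to-model mapping is our assumption; β is the NII term, as noted below):</p>
<pre><code># per-organ (alpha, beta, gamma) taken from the table above
COEFS = {
    'kidney':        (0.43, 0.04, 0.53),
    'liver':         (0.49, 0.03, 0.48),
    'spleen':        (0.47, 0.03, 0.50),
    'extravasation': (0.39, 0.00, 0.61),
    'bowel':         (0.46, 0.03, 0.51),
}

def blend(organ, x, y, z):
    a, b, c = COEFS[organ]
    return a * x + b * y + c * z
</code></pre>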
<p><strong><em>Note:</em></strong> As observed, the value of β (the weight of the NII model) is close to 0, so its contribution is insignificant. Computing the NII predictions would therefore have been a waste of computational resources, so we dropped the NII model from the final submission and kept only the weighted baseline and the DICOM model, with accuracy adjusted similarly to the values above.</p>
<p>Epoch tuning (epoch vs. score curve, where score ∝ 1/accuracy, so lower is better) on the competition public dataset<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16972845%2Fd55a58b1055ebdc1175c6eb8e7faac38%2Fepoch1.jpg?generation=1699306932413783&alt=media" alt="epoch optimisation curve"></p>
<p>From the above graph we can see that the best score is obtained at an epoch count of 3, so for our competition submission we went ahead with the 3-epoch model.</p>
<p>Apart from the score-epoch curve, we also plotted the ROC-AUC curve and efficiency vs. -log(λ) for both datasets, and inspected them to check for overfitting/underfitting and to confirm that results moved in a positive direction at each subsequent step. </p>
<p>A minimal amount of overfitting was found on the out-of-sample test set, and it shrank further when training on the full training set, so overfitting was largely self-controlled. To be doubly sure and reduce it further, we applied basic L1 regularisation and marginally adjusted batch normalisation to better fit the curve.</p>
<hr>
<h3>RESULTS [PRECONTEST SUBMISSION]</h3>
<h4>IN-SAMPLE TESTING: 20% Train segment from training Dataset</h4>
<p><strong>Note:</strong> the problem had a multi-class output, so the confusion matrix was evaluated individually for all 13 predictable classes. We counted a prediction as correct if it matched the expected value within a tolerance of 10% on either side.<br>
Following the same logic:</p>
<p><strong>TPtotal</strong> = TP1 + TP2 + …. + TP13<br>
<strong>F1 SCORE:</strong> TP/(TP+ 1/2(FP+FN)) = 0.77<br>
<strong>Testing accuracy:</strong> (TP + TN)/(TP + TN + FP + FN) = 76%</p>
<h4>OUT-SAMPLE TESTING: 20% Test Segment from training Dataset</h4>
<p>For Out-sample testing as well, the confusion matrix for all 13 parameters was evaluated similarly.<br>
<strong>F1 SCORE:</strong> = 0.73<br>
<strong>Testing accuracy:</strong> = 67%</p>
<hr>
<h3>What Went Wrong?</h3>
<ul>
<li><p>Initially, we had planned to utilise the NII file dataset, but we later found that it added no significant value: as shown in the hyper-parameter tuning section, the β coefficient is ~0, indicating a negligible contribution, so we had to let it go.</p></li>
<li><p>We attempted to localise organs in dicom images and split the single Model into multiple organ specific models for better accuracy but failed due to image overlap and occlusion. </p></li>
</ul>
<hr>
<h3>References:</h3>
<ul>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/453651" target="_blank">Detailed Soluion</a></li>
<li><a href="https://www.kaggle.com/code/datark1/what-are-dcm-and-nii-files-and-how-to-read-them" target="_blank">Guide to DICOM and NII Files</a></li>
<li><a href="https://socr.umich.edu/HTML5/BrainViewer/" target="_blank">NII viewer</a></li>
<li><a href="https://www.shutterstock.com/image-illustration/liver-gallbladder-pancreas-spleen-kidneys-annotated-1487138555" target="_blank">organ isolation hint</a></li>
<li><a href="https://www.kaggle.com/code/arjunrao2000/beginners-guide-efficientnet-with-keras" target="_blank">EfficientNet guide</a></li>
<li><a href="https://blog.research.google/2019/05/efficientnet-improving-accuracy-and.html" target="_blank">EfficientNet Performance</a></li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
59th Place Solution for the Detect and Classify Traumatic Abdominal Injuries Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, we extend our gratitude to RSNA for organizing this captivating competition, through which we gained invaluable insights. </p>
<h1>1. Context section</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h1>2. Overview of the Approach</h1>
<p>In this competition, the motivation is to contribute to the improvement of patient outcomes from traumatic injuries, which account for over 5 million global deaths each year. Blunt-force abdominal trauma, often caused by vehicle accidents, can damage organs and lead to internal bleeding. With traditional methods like physical exams and lab tests often being inconclusive, the emphasis shifts to the need for accurate medical imaging interpretations.</p>
<p>In this study, our models predict a probability for each of the different possible injury types and degrees using CT scans. While vital for evaluating abdominal trauma, CT scans can be challenging to decipher, especially when injuries are subtle or multiple. Our goal is to harness the power of AI and ML to better interpret CT scans.</p>
<h1>3. Details of the submission</h1>
<h2>3.1 Data Interpretation and Preprocessing</h2>
<p>The dataset is primarily stored in the .dcm (DICOM) format, which offers a detailed insight into traumatic injuries. The patients underwent CT scans, with each undergoing one to two scans, generally covering from the upper neck down to the region below the anus. However, the specific number of scan images per individual, per scan, remained variable. This variability was due to the unknown intervals between each scan slice and the differing heights of the individuals, introducing a level of complexity in the initial assessment of the data. </p>
<p>The scans could be classified into two types: complete and incomplete. A complete scan provides a comprehensive view, encompassing all organs within the range from the upper neck to below the anal region. In contrast, an incomplete scan is a localized examination, focusing on specific organs within this anatomical spectrum. While the training dataset came labeled with indications of completeness, such classifications were absent in the test dataset. The training dataset encompassed the CT scan images of 3,147 patients, while the test dataset was composed of approximately 1,300 patients’ scans.</p>
<p>When these multiple .dcm images from a single scan were stacked together, they coalesced to form a vivid 3D representation of the human body, encapsulating a plethora of details. Such data, especially in the field of medical imaging, is paramount to diagnose and comprehend the intricacies of internal injuries.</p>
<p>For convenience, we converted the given .dcm (DICOM) files into the more universally recognized .png format, facilitated by a <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427" target="_blank">method</a> provided by Kaggle platform, streamlining subsequent operations and analyses. </p>
<h2>3.2 Model Pipeline Design</h2>
<h3>3.2.1 2D Semantic Segmentation Model</h3>
<h4>Segmentation with U-Net</h4>
<p>Our objective with U-Net was twofold: to identify the span of CT images that contained each organ (liver, spleen, kidneys, bowels) and to pinpoint their precise locations within each image. </p>
<p>We initially employed the Total Segmentator tool (available at: <a href="https://github.com/wasserth/TotalSegmentator" target="_blank">Total Segmentator on GitHub</a>), but faced significant time constraints due to the tool’s processing speed. This constraint necessitated a shift in our approach toward a more efficient solution, leading us to use the U-Net architecture. </p>
<p>We utilized the Total Segmentator to generate marked pixel points as input data. These annotated 2D images served as training data for our U-Net model. By harnessing the efficiency of U-Net, we trained a 2D segmenter to efficiently discern the presence and location of the targeted organs across the CT scan images. The refined segmentation process not only indicated which scans contained the organs of interest but also provided their spatial coordinates, thus streamlining the subsequent stages of our analysis. </p>
<p>The following figures are an illustration of segmentation with U-Net. The black-and-white figure on the left is the original CT scan image, and the right one is the one segmented. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F79d5484239de43371fef834bdb1fd493%2Ffigure3.png?generation=1699408525140588&alt=media" alt="U-Net"></p>
<h4>Constructing 2.5D Input</h4>
<p>For this part, our objective was to create a 2.5D input data for the neural network. We implemented a strategy to select 32 evenly spaced images from each scan across the training dataset. Given that the minimum number of slices per scan was 44, this uniform selection process ensured comprehensive coverage of each individual’s scan while maintaining a focus on representation consistency for each organ. Our selection also guaranteed that every organ appeared in at least 4 separate images, thereby capturing the essential anatomical features needed for accurate analysis. </p>
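<p>A minimal sketch of the even slice selection described above (function name is ours):</p>
<pre><code>import numpy as np

def pick_indices(n_slices, n_keep=32):
    """Pick 32 evenly spaced slice indices from a scan; works because every scan has at least 44 slices."""
    return np.linspace(0, n_slices - 1, n_keep).round().astype(int)

# e.g. pick_indices(44) returns 32 indices spread across the 44 slices
</code></pre>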
<p>Then we used EfficientNet architecture. Each image was processed through the network, with the output of the penultimate layer capturing the feature representation extracted by the network after processing. Subsequently, a fully connected layer served as the final decision-making component, distinguishing between health and injury. </p>
<p>To synthesize the 32 image-derived insights into a single diagnostic outcome for each scan, we utilized Long Short-Term Memory (LSTM) implemented in TensorFlow. This approach allowed us to analyze the sequence of images capturing the spatial continuity and progression of anatomical structures. </p>
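<p>A minimal Keras sketch of this per-slice EfficientNet encoder followed by an LSTM over the 32 selected slices (the EfficientNet variant, image size and layer widths here are assumptions, not the authors' exact configuration):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_slices=32, img_size=224, n_targets=13):
    # shared per-slice feature extractor (penultimate-layer features via global pooling)
    backbone = tf.keras.applications.EfficientNetB0(include_top=False, pooling='avg',
                                                    input_shape=(img_size, img_size, 3))
    inp = layers.Input((n_slices, img_size, img_size, 3))
    x = layers.TimeDistributed(backbone)(inp)   # (batch, 32, n_features)
    x = layers.LSTM(128)(x)                     # aggregate over the slice sequence
    out = layers.Dense(n_targets, activation='sigmoid')(x)
    return tf.keras.Model(inp, out)
</code></pre>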
<p>The following is a prototype of our model.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F920be15372ec12783f6e8891e5f34336%2Ffigure2.png?generation=1699408559685906&alt=media" alt="model"></p>
<h3>3.2.2 Data Augmentation Techniques</h3>
<p>To bolster the quality and variety of our dataset, we initially planned to use the <code>albumentations</code> library for data augmentation. However, due to compatibility issues with TPU training, the library could not be directly applied. As a workaround, we drew inspiration from the augmentation techniques available in <code>albumentations</code> and replicated them using TensorFlow. Our customized augmentation pipeline included transformations such as horizontal and vertical flipping, transposition, and various blurring techniques like Gaussian blur, mean blur, and motion blur. We also introduced random Gaussian noise to further augment the dataset.</p>
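<p>A sketch of what such a TensorFlow-native replacement for the albumentations transforms could look like (only the flips, transpose and Gaussian noise are shown; the blur variants are omitted and the probability/noise level are assumptions):</p>
<pre><code>import tensorflow as tf

def augment(image):
    # image: (H, W, C) float tensor; H == W assumed so the transpose keeps the shape
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.cond(tf.random.uniform(()) > 0.5,
                    lambda: tf.image.transpose(image),   # like A.Transpose
                    lambda: image)
    image = image + tf.random.normal(tf.shape(image), stddev=0.05)  # random Gaussian noise
    return image

# typically applied inside a tf.data pipeline:
# ds = ds.map(lambda img, lab: (augment(img), lab))
</code></pre>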
<h2>3.3 Loss Function Design</h2>
<h3>3.3.1 Organ-Specific Loss Calculation</h3>
<p>For each organ (liver, spleen, kidneys, and bowels), the loss is calculated by taking the slices that correspond to the starting and ending indices of the organ within each scan. For example. If the liver is visualized from slice index 1 to 5, the model will predict the probability of injury for these slices, and the five separate loss values will be computed. The final loss for the liver will be the average of these five loss values. This process is repeated for each organ to obtain individual organ losses. </p>
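<p>A minimal PyTorch sketch of this per-organ averaging over the organ's slice range (the binary formulation and names are simplifications for illustration):</p>
<pre><code>import torch
import torch.nn.functional as F

def organ_loss(slice_logits, organ_label, organ_range):
    """Average the per-slice loss over the slices where the organ is visible.
    slice_logits: (N,) per-slice injury logits; organ_range: (start, end) slice indices."""
    s, e = organ_range
    logits = slice_logits[s:e + 1]
    target = torch.full_like(logits, float(organ_label))
    return F.binary_cross_entropy_with_logits(logits, target)  # mean over the organ's slices
</code></pre>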
<h3>3.3.2 Extravasation Loss Calculation</h3>
<p>For extravasation, the loss is computed across 32 evenly selected images from each scan. This comprehensive approach ensures that the loss calculation for extravasation is representative of the entire scan.</p>
<h3>3.3.3 Any Injury Loss Calculation</h3>
<p>The <code>any_injury</code> loss is calculated automatically on the Kaggle platform based on the previously mentioned loss values. This loss serves as an aggregate indicator of the model’s ability to detect any form of injury present across the scan.</p>
<h3>3.3.4 Loss Function Weights</h3>
<p>Different weights are applied within the loss function: a health prediction is assigned 1 point, and an injury prediction is given 2 points, emphasizing the model’s need to accurately identify injuries. </p>
<h1>4. Sources</h1>
<ul>
<li>Standardizing Unusual Dicoms: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217</a></li>
<li>Total Segmentator: <a href="https://github.com/wasserth/TotalSegmentator" target="_blank">https://github.com/wasserth/TotalSegmentator</a></li>
<li>Data in PNG Format: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427</a></li>
<li>EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling: <a href="https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html" target="_blank">https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html</a></li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
63rd Place Solution for the RSNA 2023 Abdominal Trauma Detection
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I created my solution following the advice of the <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/435053#2425570" target="_blank">discussion</a>. My solution is not the solution of the top winners, but I hope it will be of some help. When participating in this competition, I referred to various public notebooks and data. I would like to express my appreciation to everyone who was willing to post.</p>
<h2>Overall solution</h2>
<ul>
<li><p>Trauma detection for all organs is performed using a simple single-stage architecture. Then, detection for the three organs is performed using a multi-stage architecture, and finally ensembled.</p></li>
<li><p>single-stage architecture</p>
<ul>
<li>using <a href="https://arxiv.org/ftp/arxiv/papers/2002/2002.04752.pdf" target="_blank">CT-Net</a></li></ul></li>
<li><p>multi-stage architecture</p>
<ul>
<li>for kidney+liver+spleen</li>
<li>stage1 : sub-volume prediction using EfficientNet-v1b0</li>
<li>stage2 : prediction using CT-Net</li></ul></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3593902%2Fd6f113ee7831e5e087d7eeb7148640ca%2F1.PNG?generation=1698845265814490&alt=media" alt=""></p>
<p>I was doing trial and error right up until the deadline, so I ended up with a half-finished architecture configuration.<br>
In the case of only single-stage architecture, the score was LB0.61 without tuning.</p>
<h2>notebook</h2>
<ul>
<li><a href="https://www.kaggle.com/code/sho124/rsna-infer-notebook" target="_blank">inference notebook</a></li>
</ul>
<h2>Impression</h2>
<ul>
<li>I had no experience implementing recognition technology using 3D data. So, this competition was very challenging and educational for me. </li>
<li>My best solution was created during the two-day extension period. I think this two-day extension was controversial. But it was a precious two days where I was able to get the best score for me. As a result I was able to get my first medal.</li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
64th Place Solution | YOLO Extravasation Classification
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>𝐎𝐮𝐫 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡:</h1>
<h3>Kidneys, spleen, liver:</h3>
<p>3D segmentation of organs -> cutting out organs using a mask -> 3D classification of cut out organs.</p>
<h3>Extravasation:</h3>
<p>YOLO detection -> compiling a confidence series based on slices of a patient’s CT scan -> classifying the series according to features from it.</p>
<h1>𝐌𝐨𝐝𝐞𝐥𝐬:</h1>
<p>Segmentation: 3D ResNet<br>
Classification: 3D ResNet + UNet<br>
Detection: YOLOv8x<br>
Row classification: LogReg + BalancedRandomForest</p>
<h1>𝐖𝐡𝐚𝐭 𝐝𝐢𝐝𝐧'𝐭 𝐰𝐨𝐫𝐤:</h1>
<ol>
<li>Post-processing predictions under LogLoss using Optuna.</li>
<li>2D classification (they started developing the idea too late).</li>
</ol>
<h1>𝐄𝐱𝐭𝐫𝐚𝐯𝐚𝐬𝐚𝐭𝐢𝐨𝐧 𝐬𝐞𝐫𝐢𝐞𝐬 𝐜𝐥𝐚𝐬𝐬𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐨𝐧 𝐘𝐎𝐋𝐎 𝐜𝐨𝐧𝐟𝐢𝐝𝐞𝐧𝐜𝐞 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡:</h1>
<p>I made a rather interesting approach to classify extravasation and want to share it with you.</p>
<p>My idea would have been impossible to implement without the bounding box <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441402" target="_blank">dataset</a> from <a href="https://www.kaggle.com/vaillant" target="_blank">Ian Pan</a>. Thanks a lot for his work.</p>
<p>Using this dataset, I trained YOLO detection and collected my “time” series dataset. The idea was to collect confidence and area of the bounding boxes. Walking through the sorted scans of the axial plane, I collected model predictions into my dataset.</p>
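<p>A sketch of how such a per-patient confidence/area series can be collected with the ultralytics API (the weights path and helper name are hypothetical):</p>
<pre><code>from ultralytics import YOLO

model = YOLO('yolov8x_extravasation.pt')   # hypothetical trained weights

def series_from_slices(slice_paths):
    """Walk the sorted axial slices and record the top box confidence and its area."""
    confs, areas = [], []
    for path in sorted(slice_paths):
        boxes = model(path, verbose=False)[0].boxes
        if len(boxes):
            k = int(boxes.conf.argmax())
            confs.append(float(boxes.conf[k]))
            areas.append(float(boxes.xywh[k, 2] * boxes.xywh[k, 3]))  # w * h
        else:
            confs.append(0.0)
            areas.append(0.0)
    return confs, areas
</code></pre>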
<h1>𝐓𝐡𝐢𝐬 𝐢𝐬 𝐰𝐡𝐚𝐭 𝐭𝐡𝐞 “𝐭𝐢𝐦𝐞” 𝐬𝐞𝐫𝐢𝐞𝐬 𝐥𝐨𝐨𝐤𝐞𝐝 𝐥𝐢𝐤𝐞:</h1>
<h2>𝐄𝐱𝐭𝐫𝐚𝐯𝐚𝐬𝐚𝐭𝐢𝐨𝐧</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11837581%2Fd1aff4084ad3b318e2a6350a4869fca3%2FScreenshot%202023-10-16%20at%2009.55.53.png?generation=1697440072235399&alt=media" alt=""></p>
<h2>𝐍𝐨 𝐞𝐱𝐭𝐫𝐚𝐯𝐚𝐬𝐚𝐭𝐢𝐨𝐧</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11837581%2F6235a05af397f1b3e2e0e42c01c9585e%2FScreenshot%202023-10-16%20at%2010.08.53.png?generation=1697440146957921&alt=media" alt=""></p>
<h1>𝐃𝐚𝐭𝐚 𝐩𝐫𝐞𝐩𝐚𝐫𝐢𝐧𝐠:</h1>
<p>When assembling the dataset, I also experimented with TTA, but as practice has shown, this did not bring a big increase in quality, but took 4 times more time for inference.</p>
<h1>𝐅𝐞𝐚𝐭𝐮𝐫𝐞 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠:</h1>
<p>After that, I started generating features for these series. After many attempts, I came to the conclusion that the simplest features, such as std, mean, median, etc., were already enough for the optimal metric; I couldn’t separate the classes more clearly than that.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11837581%2Fc33d7d159235bb1375e68bf64315da5d%2FScreenshot%202023-10-16%20at%2010.11.35.png?generation=1697440490402173&alt=media" alt=""></p>
<h1>𝐂𝐥𝐚𝐬𝐬𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐦𝐨𝐝𝐞𝐥𝐬:</h1>
<p>Using these features, I experimented with various models, but settled on an ensemble of logistic regression and the <a href="https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedRandomForestClassifier.html" target="_blank">imblearn Balanced Random Forest Classifier</a>.</p>
<p>I made stratified cross-validation.</p>
<pre><code># Reconstructed from the garbled snippet in the post; keyword names and several values
# were stripped, so the ones below are placeholders / library defaults. `Metric` is the
# competition log-loss helper and `df`, `feature_cols`, `N_SPLITS` are defined earlier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from imblearn.ensemble import BalancedRandomForestClassifier

Score = Metric(label=5)
val_scores = []

for i in range(N_SPLITS):
    X_train = df[feature_cols][df.fold != i]
    y_train = df[df.fold != i].label
    X_val = df[feature_cols][df.fold == i]
    y_val = df[df.fold == i].label

    LR = LogisticRegression(random_state=21,
                            max_iter=1000,              # placeholder
                            class_weight={0: 1, 1: 6},
                            fit_intercept=True,         # placeholder
                            C=0.9)
    BRF = BalancedRandomForestClassifier(n_estimators=100,
                                         max_depth=None,
                                         min_samples_split=2,
                                         min_samples_leaf=1,
                                         min_weight_fraction_leaf=0.,
                                         min_impurity_decrease=0.,
                                         random_state=21,
                                         verbose=0,
                                         class_weight={0: 1, 1: 5},
                                         ccp_alpha=0.,
                                         max_samples=None)

    fit_LR = LR.fit(X_train, y_train)
    fit_BRF = BRF.fit(X_train, y_train)

    pred = np.array(0.5 * fit_BRF.predict(X_val) + 0.5 * fit_LR.predict(X_val), dtype=np.uint8)
    f1 = f1_score(y_val, pred)

    y_true = np.array(y_val)
    pred_LR = np.array(fit_LR.predict_proba(X_val))
    pred_BRF = np.array(fit_BRF.predict_proba(X_val))
    pred = 0.5 * pred_BRF + 0.5 * pred_LR

    val_score = Score.get_score(y_true, pred)
    val_scores.append(val_score)

    print(f'fold {i}: f1 = {f1:.4f}, val_score = {val_score:.4f}')

print()
print(f'mean val_score = {np.mean(val_scores):.4f}')
</code></pre>
<h1>𝐀𝐧𝐝 𝐠𝐨𝐭 𝐭𝐡𝐞 𝐟𝐨𝐥𝐥𝐨𝐰𝐢𝐧𝐠 𝐫𝐞𝐬𝐮𝐥𝐭𝐬:</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11837581%2F4c45ae6d9091e8491ed5ad6553e1a396%2FScreenshot%202023-10-16%20at%2010.44.42.png?generation=1697442310307315&alt=media" alt=""></p>
<p>This approach showed 0.02 better logloss than the best statistical approach.<br>
It seems that the idea can be improved, for example by collecting better data or generating more suitable features.<br>
Thank you for your attention, I look forward to your criticism and suggestions.<br>
I'm waiting for your questions.</p>
<p><3</p>
<p>𝐎𝐮𝐫 𝐜𝐨𝐝𝐞: <a href="https://github.com/Egorgij21/RSNA_2023_Abdominal_Trauma_Detection" target="_blank">https://github.com/Egorgij21/RSNA_2023_Abdominal_Trauma_Detection</a></p>
|
RSNA 2023 Abdominal Trauma Detection
|
6th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, Thank you to RSNA and Kaggle for hosting this competition. <br>
Congratulations to all competitors. <br>
My solution is based on my mistakes in past RSNA competitions and solutions I learned from great competitors.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2Fa4d9546ab3d62dfe93918f09bc08219c%2FRSNA-Summary.jpg?generation=1697997852740205&alt=media" alt=""></p>
<h2>Data</h2>
<p>I use the datasets from <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>. Thanks for him. <br>
I make 5 folds based on patient id (n=3147). </p>
<h2>Models</h2>
<p>I divided task based on the label type.</p>
<ul>
<li><strong>Organ Model</strong> : Seg Label(nii) + Study Label</li>
<li><strong>Bowel Model</strong> : Seg Label(nii) + Study Label + Image Label</li>
<li><strong>Extra Model</strong> : Study Label + Image Label</li>
</ul>
<h3>1) Organ Model</h3>
<p>First, I trained 3D segmentation model for generating masks. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F5c42e73d71ae8c375b2a321024122e17%2F3d-segmentation.jpg?generation=1698161673119881&alt=media"></p>
<p>I used Qishen's 3D segmentation code. <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607</a><br>
And I cropped each organ and took 15 slices from it. Because I got many ideas from previous RSNA competitions, I started by using adjacent +-2 channels. <br>
Then, just to see the difference, I tried using only 1 slice with 5 channels, and it performed better. <br>
So that is the way I finally used, although I think the original method makes more sense.</p>
<p>And then I trained CNN + sequence model With cropped volumes and study label. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F729010131b30b3f5e629ec0c66d2f764%2Forgan.jpg?generation=1698161982441147&alt=media"></p>
<h4>Model:</h4>
<ol>
<li>3D segmentation : generate masks and crop (15 slices in each organ) <ul>
<li>resnet18d</li></ul></li>
<li>CNN 2.5D + sequence : train Organ classifier with study label.<ul>
<li>efficientnetv2s + LSTM</li>
<li>seresnext50_32x4d + LSTM</li></ul></li>
</ol>
<h3>2) Bowel Model</h3>
<p>The 3D segmentation part is same with above. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F6dadd1dddf4a0a682e76e1700534da30%2Fslice.png?generation=1698161851139718&alt=media"></p>
<p>The only difference is I cropped 30 slices for bowel. </p>
<p>I trained also CNN + sequence model with cropped volumes and study and image label. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F2faf6f6bbbbecb9df6c68c94e97bc78c%2Fbowel.jpg?generation=1698161933059070&alt=media"></p>
<h4>Model:</h4>
<ol>
<li>3D segmentation : generate masks and crop (30 slices in each organ)<ul>
<li>resnet18d</li></ul></li>
<li>CNN 2.5D + sequence : train Organ classifier with study label and image label.<ul>
<li>efficientnetv2s + LSTM</li>
<li>seresnext50_32x4d + LSTM</li></ul></li>
</ol>
<h3>3) Extra Model</h3>
<p>For Extra model, I got slices with stride 5 and +-2 adjacent channels. <br>
For example, each image shape is (5, size, size) and 5 channels are [n-2, n-1, n, n+1, n+2]. <br>
Also I just resized images to 384. I tried other options, such as a 512 size and cropped images, but they did not work well. </p>
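<p>A minimal sketch of building such stride-5, 5-channel stacks from a sorted volume (names and the exact boundary handling are assumptions):</p>
<pre><code>import numpy as np

def build_stacks(volume, stride=5):
    """volume: (N, H, W) sorted slices -> (M, 5, H, W) stacks of [n-2, n-1, n, n+1, n+2]."""
    centers = range(2, len(volume) - 2, stride)
    return np.stack([volume[c - 2:c + 3] for c in centers])
</code></pre>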
<p>The Extra model is based on 2 stages. <br>
First, I trained a feature extractor and obtained feature embeddings. <br>
Second, I trained a sequence model. <br>
These alone were enough for the gold zone. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2Fe2a01ab8a35bb6b060bfa694f322a538%2Fextravasation%20feature.jpg?generation=1698162042017089&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F79ef199186ad3a5fe4f8b8463026e836%2Fextravasation%20sequence.jpg?generation=1698162053473465&alt=media"></p>
<p>In addition, thanks to Ian's bbox label, I could improve Extra model more. </p>
<p>In my experiments, training a detector with the bbox labels did not work. <br>
So I used these labels to make the model focus on the extravasation region:<br>
I added a segmentation head to the feature extractor, and it worked well.</p>
<p>This idea to add segmentation head comes from the previous Siim competition.</p>
<h4>Model:</h4>
<ol>
<li>Feature Extractor<ul>
<li>seresnext50_32x4d</li>
<li>efficientnetv2s</li></ul></li>
<li>Sequence<ul>
<li>GRU</li></ul></li>
</ol>
<h2>Things that did not work</h2>
<ul>
<li>Yolov7 + Ian Pan extravasation boxes. Training detector to crop bboxes is not working well.</li>
<li>separate organ models. </li>
</ul>
<p>I truly appreciate the many competitors who produce and share great solutions every time. <br>
Thanks to them, I was able to learn so much and become a Kaggle master. <br>
Also, Thank you to host and everyone who contributes to the best solution.</p>
<h2>Code</h2>
<p>inference code : <a href="https://www.kaggle.com/madquer/rsna-inference-6th-solution" target="_blank">https://www.kaggle.com/madquer/rsna-inference-6th-solution</a><br>
training code : <a href="https://github.com/sushi58373/RSNA_abdominal_trauma_6th_solution" target="_blank">https://github.com/sushi58373/RSNA_abdominal_trauma_6th_solution</a><br>
Demo Notebook : <a href="https://www.kaggle.com/code/madquer/rsna-inference-6th-solution-clean-version/notebook" target="_blank">https://www.kaggle.com/code/madquer/rsna-inference-6th-solution-clean-version/notebook</a></p>
|
RSNA 2023 Abdominal Trauma Detection
|
74th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I only have a 3090 graphics card; you can follow the GitHub link to my project to learn more about the specific setup.</p>
<p>Thanks to kaggle and the organizers.</p>
<p>After experiencing a month of competition, I am thrilled to have achieved a bronze medal as a beginner. Now, I would like to share my code and model with other novice kagglers for learning purposes.</p>
<p>Thank you very much for <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787" target="_blank">the first-place solution</a> of <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a> in the 2022 RSNA competition. I have completely copied their baseline and made corresponding modifications based on the content of this competition.</p>
<p>The submission time is around 180 min (mostly due to data loading). With this setup I was able to reach a public score of 0.65 and a private score of 0.59, for a final ranking of 74th place.</p>
<h1><strong>Code</strong></h1>
<p>inference code : <a href="https://www.kaggle.com/code/naocanzouyihui/rsna-2023-74th-place-solution-inference" target="_blank">https://www.kaggle.com/code/naocanzouyihui/rsna-2023-74th-place-solution-inference</a><br>
training code : <a href="https://github.com/naozouyihui/RSNA_abdominal_trauma_74th_solution/tree/main" target="_blank">https://github.com/naozouyihui/RSNA_abdominal_trauma_74th_solution/tree/main</a></p>
<h1><strong>Summary</strong></h1>
<p>Designed a 2-stage pipeline to deal with this problem.</p>
<p>stage1: 3D semantic segmentation -> stage2: 2.5D w/ LSTM classification.</p>
<h1><strong>3D Semantic Segmentation</strong></h1>
<p>I used a 128x128x128 input to train an EfficientNetV2-S + UNet model for segmenting organs (5-channel output: bowel, left kidney, right kidney, spleen and liver).</p>
<p>After the training was completed, I predicted 3d masks for each organ for all 10k samples in the training set.</p>
<h1><strong>Prepare Data for Classification</strong></h1>
<p>Next step is to prepare data for classification.</p>
<p>First, using the predicted 3D mask for each organ, we can crop out 5 parts from a single original 3D image. After combining the masks of the left and right kidney, we cropped 10k * 4 = 40k samples. </p>
<p>Then for each organ sample, I extracted 20 slices evenly by z-dimension, and for each slice, I further extracted +-2 adjacent slices to form an image with 5 channels. </p>
<p>In addition, I added the predicted mask of corresponding organ as the 6th channel to each image.</p>
<p>I chose the 2.5D approach for this work. Here, 2.5D means that each 2D slice in a sample carries the information of several adjacent slices, hence "2.5D". But the model itself is a normal 2D CNN with a multi-channel input.</p>
<p>The structure of this model is as follows: I first feed the 20 slices from a single sample into a 2D CNN to extract features for each slice, and then follow it with an LSTM, so that the whole model can learn the features of the whole organ.</p>
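<p>A minimal PyTorch sketch of this 2D-CNN-per-slice plus LSTM design (backbone and channel count follow the write-up; hidden sizes and pooling are assumptions):</p>
<pre><code>import timm
import torch
import torch.nn as nn

class OrganClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # 6-channel input: 5 adjacent slices + the predicted organ mask
        self.backbone = timm.create_model('resnet50d', pretrained=True, in_chans=6, num_classes=0)
        self.lstm = nn.LSTM(self.backbone.num_features, 256, batch_first=True, bidirectional=True)
        self.head = nn.Linear(512, n_classes)

    def forward(self, x):                         # x: (B, 20, 6, H, W)
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))    # (B*20, n_features)
        seq, _ = self.lstm(feats.view(b, t, -1))  # learn across the 20 slices
        return self.head(seq.mean(dim=1))         # organ-level logits
</code></pre>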
<h1><strong>Final Submission</strong></h1>
<p>3D Seg</p>
<ul>
<li>5fold effv2s unet (128x128x128)</li>
</ul>
<p>2.5D Cls</p>
<ul>
<li>5fold resnet 50d (224x224)</li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
79th place - beginner tutorial in applying a previous solution with minimal changes and training time
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This post is meant to show other beginners how we can take a previous solution, apply the smallest changes possible, and still achieve a bronze medal with 30 hours of kaggle gpu (silver medal with 30 additional hours of a kaggle gpu). </p>
<p>Thanks to kaggle and the organizers. <br>
Thanks to <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a> for their <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787" target="_blank">1st place solution of the rsna 2022 competition</a>, where the input data was the same, but the targets were fractures in the spinal vertebrae C1-C7. My aim was to learn how this code works and apply it to this competition. Also thanks to <a href="https://www.kaggle.com/theoviel" target="_blank">Theo Viel</a>, who pointed out Qishen’s solution in his post about <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441557" target="_blank">beating the baseline</a>, and for his notebook about <a href="https://www.kaggle.com/code/theoviel/get-started-quicker-dicom-png-conversion" target="_blank">processing the dicom files into pngs</a>.</p>
<p>I trained one model to segment all the organs, followed by one model for each organ to classify injury. For extravasation I just predicted an optimized constant value, frequency_of_extravasation x 6. </p>
<p>I only used kaggle resources (about 30 gpu hours total), scoring .615 private leaderboard. This is with only 15 epochs for the final classification models. After training those models for 45 total epochs (taking about 30 extra kaggle gpu hours), we get .548 private leaderboard, which puts us right on the edge for a silver medal at 56th place. </p>
<h2>Adapting <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a>’s approach step by step</h2>
<h3>Stage 1: Segmentation</h3>
<p>The first step is to build a model that will segment out the relevant organs. <br>
For reference, this is <a href="https://www.kaggle.com/code/haqishen/rsna-2022-1st-place-solution-train-stage1" target="_blank">Qishen’s notebook</a> which builds such a model for spinal vertebrae C1-C7. As an input into that notebook, he has built a dataset with the studies processed into 3D images of size 128x128x128 to save time. </p>
<p>Here is my <a href="https://www.kaggle.com/code/chrisrichardmiles/rsna23-dicom-to-3d-array-128x128x128-fixed" target="_blank">notebook that processes all training studies into 3D images of size 128x128x128</a>. I have combined Qishen’s original code with Theo Veil’s previously mentioned dicom processing code. The output of this notebook is an input into the next notebook, which trains the segmenter model. </p>
<p>Here is the <a href="https://www.kaggle.com/code/chrisrichardmiles/rsna-2023-train-stage1-seg-mask?scriptVersionId=145931960" target="_blank">stage 1 segmentation model training notebook</a>. I link to version 7 because linking to a notebook that timed out crashes your browser. In version 9, I train one fold for 15 epochs (which is what I get after the 12 hours of kaggle gpu). From visualizing the output masks, it seems to be good enough.<br>
Here is the <a href="https://www.kaggle.com/datasets/chrisrichardmiles/rsna23-train-stage1" target="_blank">dataset with the model output</a>,<br>
Input size: 128x128x128<br>
Epochs: 15</p>
<h4>Stage 1.5: Segmentation inference and building 2.5D image input into stage 2 model</h4>
<p>In order to make the stage 2 training efficient, we precompute the segmentation masks by using the stage 1 segmentation model to infer the segmentation mask for each study_id. After finding the mask for the entire study_id, we build 15 “2.5 dimensional” images for each organ. For each organ we find the min and max value across the z axis (spinal axis), and take 15 images evenly spaced across this z range. For each image we also include 2 images from above and 2 from below for extra information. We also include the segmentation mask so that the model knows where the region of interest is. So for each organ (liver, left_kidney, right_kidney, spleen, bowel), our final result is an array of shape (15, 6, 224, 224). We stack all organ’s outputs and save one file. So for each study_id, we save a file of shape (75, 6, 224, 224). This will be used to create 15 inputs into the stage 2 classifier model for each organ. </p>
<p>Since the output of kaggle notebooks is limited to 20GB, we use 30 notebooks to get the segmentation masks for all the training data. Here is <a href="https://www.kaggle.com/code/chrisrichardmiles/fork-of-rsna23-s1-inf-30parts-2/output" target="_blank">#2 of 30 as an example</a>. All 30 must be put as an input into the stage 2 model training notebook.</p>
<h3>Stage 2 models: [classification]</h3>
<p>For reference, here is Qishen’s <a href="https://www.kaggle.com/code/haqishen/rsna-2022-1st-place-solution-train-stage2-type1" target="_blank">stage 2 training notebook</a>.</p>
<p>Here is my <a href="https://www.kaggle.com/chrisrichardmiles/rsna23-train-stage2-final-5" target="_blank">stage 2 training notebook</a>. Note that there is code added that is used to continue training, using the best models saved from previous versions of the same notebook. This code should be commented out on the first run. </p>
<p>Here is the <a href="https://www.kaggle.com/code/chrisrichardmiles/fork-of-rsna23-final-inference-4-diff-agg?scriptVersionId=147503119" target="_blank">final inference notebook</a> which scores .548 private LB. </p>
<p><strong>Key changes in my stage 2 models, compared to Qishen's model</strong>: </p>
<ul>
<li>In Qishen’s notebook, he builds one single model to classify if a vertebra has a fracture. For each vertebrae C1-C7, he makes 105 input samples to train with. This makes sense because each vertebrae looks similar. C2 looks a lot like C5. But for this competition, each organ does not look like the other, so I chose to build 4 different models for liver, kidney, spleen, and bowel. <br>
<strong>special note about kidney</strong>: Since the segmentation data from the organizers had different labels for left and right kidney, my segmentation masks also had left and right kidney. In order to build a single model for the kidneys, I concatenated the left and right kidney. To be clear I took the left and right kidney arrays (shape (15,6,224,224)) resulting from the input building in stage 1.5, and combined them to get an array of shape (15, 6, 448, 224). </li>
</ul>
<p>Here is the dataloader for the stage 2 classifier model: </p>
<pre><code># Reconstructed from the garbled snippet in the post; ORGANS, LABELS, ABS, transforms_train
# and n_slice_per_c are defined elsewhere in the training notebook.
import numpy as np
import torch
from collections import defaultdict
from torch.utils.data import Dataset

class CLSDataset(Dataset):
    def __init__(self, df, mode, transform=None):
        self.df = df.reset_index()
        self.mode = mode
        self.transform = transform

    def __len__(self):
        return self.df.shape[0]

    def __getitem__(self, index):
        row = self.df.iloc[index]
        image_full = np.load(row.cls_inp_path)      # (75, 6, 224, 224): 15 slices x 5 organs
        out = defaultdict(dict)
        for organ, cols, (a, b) in zip(ORGANS, LABELS, ABS):
            images = []
            for image in image_full[a:b]:
                image = image.transpose(1, 2, 0)    # CHW -> HWC for albumentations
                image = transforms_train(image=image)['image']
                image = image.transpose(2, 0, 1)    # back to CHW
                images.append(image)
            images = np.stack(images, 0)
            if organ == 'kidney':
                # join left and right kidney along the height axis -> (15, 6, 448, 224)
                # (exact slicing reconstructed from the description above)
                images = np.concatenate((images[:15, :, :, :], images[15:, :, :, :]), 2)
            out[organ]['images'] = torch.tensor(images).float()
            out[organ]['labels'] = torch.tensor([row[cols]] * n_slice_per_c).float()
        return out
</code></pre>
<p><strong>Note</strong>: </p>
<ul>
<li>Even though the batch_size I use for the dataloader is 1, we get 15 training examples for each batch. So the model is treating each 6x224x224 image on its own, but it is processing all 15 images at once, as if the batch size were 15. </li>
</ul>
|
RSNA 2023 Abdominal Trauma Detection
|
7th Place Solution & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you RSNA for hosting this great competition, which gave me a nice experience and I believe this will start a wonderful journey on the Kaggel.</p>
<p>I will briefly introduce the solution I used in this competition.</p>
<h2>Dataset</h2>
<h3>sequence image data</h3>
<ol>
<li><p>My solution is a 2.5D pipeline, so each sequence has to be processed into a fixed shape, that is, **[T * 3, 512, 512]**; every series is **sampled** to a fixed length, for example T = 32. </p></li>
<li><p>Then each individual slice image is **cropped** so that it contains as much valid data as possible. This can be achieved by counting effective pixels.</p></li>
<li><p>Finally, the images are reshaped to **[256, 384]**.</p></li>
</ol>
<p>The visualization of cropping and reshaping results is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16201938%2F6ff85850d5189bfbba5eae614eb76096%2Fimage-20231016105954580.png?generation=1697456066783359&alt=media" alt=""></p>
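<p>A minimal sketch of the effective-pixel cropping described in the list above (the intensity threshold is an assumption):</p>
<pre><code>import numpy as np

def crop_effective(img, thresh=50):
    """Keep only the rows/columns that contain non-background pixels."""
    mask = img > thresh
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    if len(rows) == 0 or len(cols) == 0:
        return img
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
</code></pre>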
<h3>sequence organ mask</h3>
<p>Just use <a href="https://pubs.rsna.org/doi/10.1148/ryai.230024" target="_blank">the total segmentor model</a> to generate segmentation results for all series. The bowel_mask = colon_mask + duodenum_mask + small_bowel_mask + esophagus_mask.</p>
<p>These masks will be used as mask ground truth, to assist with classification tasks.</p>
<h2>Models</h2>
<p><strong>Backbone:</strong> </p>
<p>InternImage (base) -> out stride (8, 16, 32)</p>
<p><strong>neck:</strong></p>
<p>UnetPlusPlus -> out stride (4, 8)</p>
<p><strong>head:</strong></p>
<p>I think the head section is the most valuable and effective part of this scheme.</p>
<p>The bowel, liver, spleen, and kidneys all have specific shapes and positions, unlike extravasation. So there are two heads for classification.</p>
<p>For the first head, I referred to the decoding idea of Mask2Former, which learns to predict a mask from a query and uses it as the attention of the decoder layer. This helps each query extract effective information for its organ.</p>
<p>For the extravasation head, the image-level labels are used to assist feature learning and enable better classification.</p>
<p>The entire pipeline is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16201938%2F186012aa8c3e147e92ece5017bb14650%2Fimage-20231016183612262.png?generation=1697456103599237&alt=media" alt=""></p>
<p><strong>loss:</strong></p>
<p>All cross-entropy losses are weighted according to the status of each organ in each patient. The organ weights are consistent with the weights used in the official evaluation.</p>
<h2>Post processing</h2>
<p>I simply averaged the results of different series of the same patient id.</p>
<h2>Ensemble</h2>
<p>All ensembled models use the same model architecture, only using different sequence lengths(T=24/32/48) and different data folds.</p>
<h3>train code:</h3>
<p><a href="https://github.com/llreda/RSNA/tree/master" target="_blank">https://github.com/llreda/RSNA/tree/master</a></p>
<h3>inference code:</h3>
<p><a href="https://www.kaggle.com/code/hongx0615/rsna-2023-7th-place-solution-inference" target="_blank">https://www.kaggle.com/code/hongx0615/rsna-2023-7th-place-solution-inference</a></p>
|
RSNA 2023 Abdominal Trauma Detection
|
8th Place Solution & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all of the winners and competitors. Thank you to Kaggle and the organizers for this interesting competition.</p>
<p>I joined relatively late after releasing my <a href="https://www.kaggle.com/datasets/vaillant/rsna-abdominal-trauma-extravasation-bounding-boxes" target="_blank">extravasation bounding box labels</a>. My solution was pretty similar to my prior solutions in RSNA cross-sectional imaging challenges (pulmonary embolism, cervical spine fracture). I treated the task as 3 separate subtasks: predicting solid organ injury (liver, kidney, spleen), bowel injury, and extravasation. All models were based on CNN-transformer "2.5D" models. </p>
<h2>Data</h2>
<p>I converted all DICOMs to 3-channel PNGs, where each channel was a separate CT window. I used 3 windows (soft tissue: WL=50, WW=400, liver: WL=90, WW=150, and angiography: WL=100, WW=700).</p>
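<p>A minimal sketch of turning the Hounsfield units into such a 3-window, 3-channel image (window values follow the text above; the function name is ours):</p>
<pre><code>import numpy as np

WINDOWS = [(50, 400), (90, 150), (100, 700)]   # (WL, WW): soft tissue, liver, angiography

def window_channels(hu):
    """hu: 2D array of Hounsfield units -> (H, W, 3) uint8 image, one channel per window."""
    chans = []
    for wl, ww in WINDOWS:
        lo, hi = wl - ww / 2, wl + ww / 2
        chans.append(np.clip((hu - lo) / (hi - lo), 0, 1))
    return (np.stack(chans, axis=-1) * 255).astype(np.uint8)
</code></pre>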
<h2>Crop Model</h2>
<p>Using 3D connected components, I was able to generate a mask for each CT volume in order to train a model to eliminate the empty black space. This mask was converted into bounding box coordinates for each image, which allowed me to train a 2D CNN mobilenetv3_small_050 model on 256 x 256 images to predict the coordinates. For each CT volume, I took the union of the predicted bounding boxes for each individual slice and used this to crop each individual image.</p>
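<p>A rough sketch of how such per-slice bounding-box targets can be derived from a 3D connected-component body mask (the HU threshold and helper names are assumptions):</p>
<pre><code>import numpy as np
from scipy import ndimage

def body_bbox_targets(volume, hu_threshold=-500):
    """volume: (Z, H, W) in HU -> per-slice (x1, y1, x2, y2) crop targets."""
    labeled, n = ndimage.label(volume > hu_threshold)              # 3D connected components
    sizes = ndimage.sum(np.ones_like(labeled), labeled, range(1, n + 1))
    body = labeled == (np.argmax(sizes) + 1)                       # keep the largest component
    boxes = []
    for sl in body:
        ys, xs = np.where(sl)
        if len(xs) == 0:
            boxes.append((0, 0, sl.shape[1] - 1, sl.shape[0] - 1)) # empty slice: keep full frame
        else:
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return np.array(boxes)
</code></pre>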
<h2>Liver-Kidney-Spleen Organ Identification Model</h2>
<p>Using the segmentation output from TotalSegmentator, I trained another mobilenetv3_small_050 model on 256 x 256 images to predict presence of these organs on individual slices. </p>
<h2>Liver-Kidney-Spleen Injury Model</h2>
<p>With about a week left, I decided to label some slices with injuries to the liver, kidneys, and spleen so I could train a 2D model slice-wise model. I did not release the labels since the end of the competition was near, and I did not want to create any disruption. They are now available <a href="https://www.kaggle.com/datasets/vaillant/rsna-abd-trauma-organ-injury-slice-labels" target="_blank">here</a>.</p>
<p>I trained a ConvNeXt-tiny model on 2D slice labels with an additional linear layer to reduce the feature dimension to 256. This included the laterality of the kidney injury (i.e., left vs. right), though I am not sure how much this actually helped in the end. The model was trained on cropped images of size 288 x 384 from cropped volumes using the above models. This model was used to extract features from each slice; CT volumes were sampled to 128 images. Thus a CT series was converted to a sequence of shape 128 x 256. </p>
<p>A 3-layer transformer was trained on these sequences to predict the series-level label. A weighted binary cross-entropy loss was constructed to mimic the competition metric. The validation loss was rather unstable, so I also tracked the AUC to make sure the model was learning.</p>
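<p>A minimal PyTorch sketch of such a 3-layer transformer over the 128 x 256 slice-feature sequences, with a weighted BCE in the spirit of the metric (the head dimensions, pooling and the positive-class weight are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SeriesTransformer(nn.Module):
    def __init__(self, feat_dim=256, n_layers=3, n_targets=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           dim_feedforward=512, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_targets)

    def forward(self, x):                     # x: (B, 128, 256) per-slice features
        x = self.encoder(x)
        return self.head(x.mean(dim=1))       # pool over slices -> series-level logits

def weighted_bce(logits, targets, pos_weight=2.0):
    w = torch.where(targets > 0, torch.full_like(targets, pos_weight), torch.ones_like(targets))
    return F.binary_cross_entropy_with_logits(logits, targets, weight=w)
</code></pre>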
<h2>Bowel Injury Model</h2>
<p>I used the provided slice-wise bowel injury labels to train CNN-transformer model in the same manner as above, except I only cropped the individual images to remove black space, not the CT volume since bowel is present on more slices. Images were resized to 384 x 512. </p>
<h2>Extravasation Model</h2>
<p>Using the bounding box labels I annotated, I generated 12 nonoverlapping patches of size 128 x 128 from images of size 384 x 512 and assigned each patch with a label of injury vs. healthy. I trained a ConvNeXt-tiny model on these patches and extracted features for each patch. Thus each image was converted into a sequence of shape 12 x 256. </p>
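<p>A minimal sketch of splitting a 384 x 512 image into the 12 nonoverlapping 128 x 128 patches mentioned above:</p>
<pre><code>import numpy as np

def to_patches(img):
    """img: (384, 512) -> (12, 128, 128) patches laid out on a 3 x 4 grid."""
    h, w, p = 384, 512, 128
    patches = img.reshape(h // p, p, w // p, p)   # (3, 128, 4, 128)
    patches = patches.transpose(0, 2, 1, 3)       # (3, 4, 128, 128)
    return patches.reshape(-1, p, p)
</code></pre>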
<p>I trained a 3-layer transformer using those sequences on slice-wise labels. I then used this transformer to extract features from each image of a resampled 128-slice volume, again resulting in a sequence of shape 128 x 256. This method improved performance over simply training a 2D CNN on whole images. A second-stage transformer was trained on these sequences to predict the series-level labels, using a weighted loss similar to the above.</p>
<h2>Inference</h2>
<p>5-fold ensemble of the above was used for final inference. For patient-level prediction, predictions were averaged across the series, if there were 2. Softmax activation function was applied to each label group's predictions, so all the probabilities were already normalized to 1. All probabilities were then scaled by taking the square root, which improved the private LB loss by 0.4. OOF CV was 0.375.</p>
<h2>Additional Thoughts</h2>
<p>I tried to incorporate 3D models, but they were taking too long to train and the performance was not as high. I was also interested in training segmentation models and training on cropped organs using the segmentation masks but did not have enough time. I tried training models on images with stacked slices as channels (i.e., each channel of the "image" was a separate slice), but this resulted in similar performance (slightly worse on LB). I tried training a single transformer on the concatenation of the features from the 3 types of models above, but this resulted in worse performance. Overall, I am happy to have won my 10th gold medal. </p>
<p>Inference Notebook: <a href="https://www.kaggle.com/code/vaillant/rsna-trauma-submission-v2-1" target="_blank">https://www.kaggle.com/code/vaillant/rsna-trauma-submission-v2-1</a></p>
<p>Source Code: <a href="https://www.kaggle.com/datasets/vaillant/rsna-trauma-src" target="_blank">https://www.kaggle.com/datasets/vaillant/rsna-trauma-src</a></p>
<p>I just saved the PNGs of each series to a folder and deleted the ones that I wanted to label as positive injury. Then I could just see which images were deleted and mark them as positive.</p>
<p>Thanks! You too!</p>
<p>Thank you!</p>
|
RSNA 2023 Abdominal Trauma Detection
|
97th Place Solution for the RSNA 2023 Abdominal Trauma Detection
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Context</strong></p>
<p>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></p>
<p><strong>Overview of the Approach</strong></p>
<p>The final model is a 2.5D ResNet + LSTM (1 layer with <code>hidden_size = 128</code>), trained with <code>volume = (128, 64, 64)</code>, <code>window_step = 2</code> and <code>window_width = 3</code>.</p>
<p><strong>Details of the submission</strong></p>
<p>The <em>sigmoid</em> activation function was used for the <code>bowel_injury</code> and <code>extravasation_injury</code> outputs, while <em>softmax</em> was used for the other targets.</p>
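<p>A minimal sketch of that output activation scheme, assuming the model returns one logit for each binary target and three logits per organ (the dictionary layout is ours):</p>
<pre><code>import torch
import torch.nn.functional as F

def activate(logits):
    out = {}
    out['bowel_injury'] = torch.sigmoid(logits['bowel'])                  # binary target
    out['extravasation_injury'] = torch.sigmoid(logits['extravasation'])  # binary target
    for organ in ['kidney', 'liver', 'spleen']:
        out[organ] = F.softmax(logits[organ], dim=-1)                     # healthy / low / high
    return out
</code></pre>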
<p><strong>Sources</strong></p>
<p><a href="https://www.kaggle.com/code/ayushs9020/understanding-the-competition-rsna" target="_blank">https://www.kaggle.com/code/ayushs9020/understanding-the-competition-rsna</a> — nice EDA notebook<br>
<a href="https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-train" target="_blank">https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-train</a> — train EfficientNet on TPU<br>
<a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449</a> — 1st place solution</p>
|
RSNA 2023 Abdominal Trauma Detection
|
9th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to express my gratitude to the Kaggle staff for hosting this fantastic competition, as well as to the organizers at RSNA. I also want to extend my appreciation to all the hardworking participants who dedicated themselves to this competition. Special thanks go to my teammate <a href="https://www.kaggle.com/liushuzhi" target="_blank">@liushuzhi</a> for his tireless efforts and insightful discussions.</p>
<h1><strong>Overview</strong></h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F81780865e35219e456c375ea284989bd%2FScreenshot%202023-10-16%20at%2017.23.32.png?generation=1697444632841544&alt=media"></p>
<h1><strong>Details of Each Model</strong></h1>
<h3><strong>3D Segmentation</strong></h3>
<ul>
<li>Used the implementation from <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a>'s top solution of the previous competition. [<a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">1</a>] (Thank you!)</li>
<li>Input size: (128, 128, 128)</li>
<li>Models used: ResNet18d, UNet</li>
</ul>
<h3>Classification - <strong>Liver/Kidney/Spleen</strong></h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F2a1524549d583462f54ab94a9804204d%2FScreenshot%202023-10-19%20at%2020.36.04.png?generation=1697715592043327&alt=media"></p>
<ul>
<li>Developed a single model to classify all three organs instead of individual models for each organ due to better performance.</li>
<li>For patients with 2 series, only used the one with the lower aortic_hu for training. For patients with 1 series, used all of them. This decision was based on experimentation rather than a specific rationale.</li>
<li>Performed cuboid crop using the 3D segmentation masks, slightly expanding the cuboid to preserve edges.</li>
<li>Weighted the loss during training according to the competition metrics [1, 2, 4].</li>
<li>Model architecture: SE-ResNeXt → LSTM → Concatenation of [1D CNN, Attention]</li>
<li>Input size: (96, 4, 256, 256)<ul>
<li>Channels are (i-1, i, i+1, mask), with 'i' representing the index of images along the z-axis.</li></ul></li>
</ul>
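<p>A minimal PyTorch sketch of that class-weighted loss for one organ head; interpreting [1, 2, 4] as healthy/low/high class weights is my reading of the bullet above, and the actual implementation in the solution may differ.</p>
<pre><code>import torch
import torch.nn as nn

# Class-weighted cross-entropy for one organ head, mirroring the metric's
# healthy / low / high weights of 1 / 2 / 4.
organ_criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 4.0]))

logits = torch.randn(8, 3)            # (batch, {healthy, low, high})
targets = torch.randint(0, 3, (8,))   # integer class labels
loss = organ_criterion(logits, targets)
print(loss.item())
</code></pre>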
<h3>Classification - <strong>Bowel</strong></h3>
<p>Two-stage training was employed to leverage image-level labels.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F55260f9b200a2d93f7a51ebcda070f59%2FScreenshot%202023-10-19%20at%2020.36.18.png?generation=1697715686083231&alt=media"></p>
<p><strong>1st Stage</strong></p>
<ul>
<li>Binary classification with a 2D model using image_level_label.csv.</li>
<li>Similar to liver/kidney/spleen model, cuboid crop was done using 3D segmentation masks to create 4-channel input data. Thus, input size was (4, 384, 384).</li>
<li>Sampled data randomly to achieve a 1:10 ratio between positive and negative.</li>
<li>The loss is weighted based on the competition's metrics.</li>
<li>SE-ResNeXt was used for the model.</li>
</ul>
<p><strong>2nd Stage</strong></p>
<ul>
<li>Aggregated features obtained from the model trained in the 1st stage.</li>
<li>Model architecture: SE-ResNeXt → LSTM</li>
<li>Weights from the 1st stage were loaded and kept frozen, updating only the network beyond the LSTM during training.</li>
</ul>
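<p>A hedged PyTorch sketch of that 2nd-stage setup, with a frozen 2D encoder feeding per-slice features into an LSTM head. The use of timm, the backbone variant, the channel count and all shapes are illustrative; here the LSTM and classifier are left trainable, which may differ slightly from the exact freezing scheme used by the team.</p>
<pre><code>import torch
import torch.nn as nn
import timm  # using timm for the backbone is an assumption, not from the post

# Frozen 1st-stage 2D encoder, trainable sequence head on top of per-slice features.
encoder = timm.create_model("seresnext50_32x4d", pretrained=False,
                            in_chans=4, num_classes=0)  # variant/channels illustrative
for p in encoder.parameters():
    p.requires_grad = False                             # keep 1st-stage weights fixed

lstm = nn.LSTM(encoder.num_features, 256, batch_first=True, bidirectional=True)
head = nn.Linear(2 * 256, 1)          # series-level bowel-injury logit

x = torch.randn(1, 8, 4, 224, 224)    # (batch, slices, channels, H, W) - shapes illustrative
b, s = x.shape[:2]
with torch.no_grad():
    feats = encoder(x.flatten(0, 1)).view(b, s, -1)     # per-slice feature vectors
seq, _ = lstm(feats)
logit = head(seq.mean(dim=1))         # simple pooling over the slice axis
print(logit.shape)                    # torch.Size([1, 1])
</code></pre>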
<h3>Classification - <strong>Extravasation</strong></h3>
<p>Similar to the approach for Bowel, a two-stage training strategy was used to leverage image-level labels. The model and aggregation methods were the same, but there were some differences.</p>
<ul>
<li>Cuboid crop was not performed.</li>
<li>Since anomalies were small, its resolution was increased to (96, 3, 512, 512).</li>
<li>Used data with higher aortic_hu for patients with 2 series to highlight features.</li>
</ul>
<h1><strong>Post-Processing</strong></h1>
<p>A post-processing step was introduced to improve the optimization of any_injury. The only post-processing we performed was simply multiplying the predictions by coefficients.</p>
<p>Coefficients were searched and applied based on OOF prediction values. Although the original plan was to create a stacking model for this purpose, it was simplified due to time constraints.</p>
<pre><code># The tuned coefficient values are not given in the original post (placeholders below).
# Column names are reconstructed from the competition's target columns.
low_coef = 1.0
high_coef = 1.0
ev_coef = 1.0
bowel_coef = 1.0

df_pred["kidney_low"] *= low_coef
df_pred["kidney_high"] *= high_coef
df_pred["liver_low"] *= low_coef
df_pred["liver_high"] *= high_coef
df_pred["spleen_low"] *= low_coef
df_pred["spleen_high"] *= high_coef
df_pred["extravasation_injury"] *= ev_coef
df_pred["bowel_injury"] *= bowel_coef
</code></pre>
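<p>The coefficient values themselves are omitted above; as a hypothetical illustration, one way such a coefficient could be tuned on OOF predictions is a simple grid search over a sample-weighted log loss. All helper names and numbers below are made up and are not the team's actual search.</p>
<pre><code>import numpy as np

def weighted_logloss(y_true, p, w):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    ll = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return np.average(ll, weights=w)

def search_coef(y_true, p_oof, w, grid=np.linspace(0.5, 2.0, 31)):
    scores = [weighted_logloss(y_true, np.clip(p_oof * c, 0.0, 1.0), w) for c in grid]
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                                   # fake OOF labels
p = np.clip(0.3 * y + rng.normal(0.2, 0.1, 1000), 0.01, 0.99)  # fake OOF predictions
w = np.where(y == 1, 6.0, 1.0)                                 # e.g. extravasation-style sample weights
print(search_coef(y, p, w))
</code></pre>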
<h1><strong>CV</strong></h1>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>ev</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any_injury</th>
<th>mean</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o Post-Processing</td>
<td>0.1293</td>
<td>0.5348</td>
<td>0.3146</td>
<td>0.4192</td>
<td>0.4454</td>
<td>0.5533</td>
<td>0.3994</td>
</tr>
<tr>
<td>w/ Post-Processing</td>
<td>0.1293</td>
<td>0.5303</td>
<td>0.3141</td>
<td>0.4190</td>
<td>0.4485</td>
<td>0.4925</td>
<td>0.3889</td>
</tr>
</tbody>
</table>
<h1><strong>Source Code</strong></h1>
<table>
<thead>
<tr>
<th></th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inference notebook</td>
<td><a href="https://www.kaggle.com/code/kapenon/rsna2023atd-9th-place-inference" target="_blank">https://www.kaggle.com/code/kapenon/rsna2023atd-9th-place-inference</a></td>
</tr>
<tr>
<td>Training code</td>
<td><a href="https://github.com/kapenon/rsna2023atd_9th_solution" target="_blank">https://github.com/kapenon/rsna2023atd_9th_solution</a></td>
</tr>
</tbody>
</table>
<h1>Acknowledgments</h1>
<p>We want to extend our thanks to the Kaggle staff and hosts who organized this fantastic competition, all the dedicated participants, and Rist Inc. for their support in providing computational resources.</p>
|
RSNA 2023 Abdominal Trauma Detection
|
Awaiting private results. Results of the competition for me.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>To be precise, with this model I was able to slightly exceed the average values for kidney, liver and spleen. But as I wrote above, the model is meaningless: it does not separate classes, but simply returns values close to their average probabilities.</p>
<p>Yes, this is exactly what I do in preprocessing: after loading the DICOMs into numpy I compress the 3D array to 256x256x256 (for segmentation I use 128x128x128 for each organ). Here I am forced to use Resize due to the use of RandomCropFromBorders, since as I understand that method changes the image size somewhat; by the way, if you don't use RandomCropFromBorders you don't need to use Resize.</p>
<pre><code>!pip install volumentations-3D

from volumentations import *
import tensorflow as tf  # added: the snippet below relies on tf

def get_augmentation(patch_size):
    return Compose([
        Rotate((-15, 15), (0, 0), (0, 0), p=0.5),
        # RandomCropFromBorders(crop_value=0.1, p=0.5),
        ElasticTransform((0, 0.25), interpolation=2, p=0.1),
        Resize(patch_size, interpolation=1, resize_type=0, always_apply=True, p=1.0),
        # ColorJitter(),
        Flip(0, p=0.5),
        Flip(1, p=0.5),
        Flip(2, p=0.5),
        RandomRotate90((1, 2), p=0.5),
        GaussianNoise(var_limit=(0, 5), p=0.2),
        # RandomGamma(gamma_limit=(80, 120), p=0.2),
    ], p=1.0)

volume3D = get_augmentation((128, 128, 128))

def volume3Dfn(image):
    aug_data = volume3D(**{"image": image})
    return tf.cast(aug_data["image"], tf.float32)

def build_augmenter(with_labels=True, dim=CFG.img_size):
    def augment(img, dim=dim):
        # Wraps a python function and uses it as a TensorFlow op.
        img_shape = img.shape
        img = tf.numpy_function(func=volume3Dfn,
                                inp=[img],
                                Tout=tf.float32)
        img.set_shape((img_shape[0], img_shape[1], img_shape[2], 1))
        img = tf.reshape(img, [img_shape[0], img_shape[1], img_shape[2]])
        return img

    def augment_with_labels(img, label):
        return augment(img), label

    return augment_with_labels if with_labels else augment
</code></pre>
<p>This is a sample of how I use volumentations-3D. Note that this library runs on numpy, so you can use it in any pipeline.<br>
Also, for some reason RandomGamma does not work on TPU; it works without problems on CPU. ColorJitter sometimes generates black images, so you obviously need to adjust its parameters.</p>
<p>Hello everybody!<br>
After weeks of trying, I still couldn't get the model to actually work out and not just dance around the class frequencies, but I did gain some useful experience and will share it. I know that in theory I'm posting this a little early, but as I wrote above, my models don't learn to classify classes, so it obviously won't affect the problem solving or the allocation of places. <br>
So let's start:</p>
<ol>
<li>As a basis, I used this public notebook: <a href="https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-infer" target="_blank">https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-infer</a> and other notebooks of this branch, special thanks to the author for the work and for the errors…..</li>
<li>I used 2D, 2.5D and 3D models; none of them worked. In the end I came up with the idea of passing the 3D volume to the model and "compressing it to 2.5D". In fact, I compressed the volume using something similar to an encoder, immediately followed by the next layers of the model. This is a serious time saver compared to training a 3D model.</li>
<li>Image resolution of 256x256x256.</li>
<li>Augmentation was done using volumentations-3D. A great library, but slow; I hooked it up after some "dancing with a tambourine" to build a TensorFlow dataset. But even on TPU it seriously slowed down training.</li>
<li>After the tests I realized that I needed to try highlighting the areas where the organs are located, but it did not help; I need high-quality segmentation.</li>
<li>And today I prepared the model after segmentation, but I got a Submission Scoring Error. I don't know why: everything works on part of the training data and on the public test data. In a word, debugging on a private dataset is bad (I'm stating the fact, although I understand why it's done this way, because people would cheat as soon as you give them more freedom).</li>
</ol>
<p>In conclusion, the result is bad, but I learned a lot. I significantly improved my knowledge of working with data and with TensorFlow. The next serious competition will be with pytorch only.</p>
<p>And I think the classification models are not critical here, this problem requires proper data preprocessing, I look forward to the publication of better solutions. </p>
<p>Good luck to everyone!</p>
<p>P.S. The obvious ignoring of my letters and messages by the kaggle team when they write that "Please let me know if you have any questions!" seemed disrespectful to me. I think that if you do not have the opportunity to answer letters, it is better not to write such a sentence. In general, in the context of correspondence, it looked to me like you do not exist for us. Peace to all!</p>
|
RSNA 2023 Abdominal Trauma Detection
|
Kaggle Solution Write-Up
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv("/kaggle/input/rsna-2023-abdominal-trauma-detection/sample_submission.csv")
df1

df1.to_csv("submission.csv", index=False)
df1
</code></pre>
|
RSNA 2023 Abdominal Trauma Detection
|
Solution Write-up: Unleashing the Healing Potential: Abdominal Trauma
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Introduction</strong></p>
<p>The RSNA Abdominal Trauma Detection AI Challenge addresses the critical issue of prompt and accurate diagnosis of traumatic injuries in the abdomen using computed tomography (CT) scans. This is crucial as traumatic injuries are a leading cause of death worldwide. CT scans provide detailed cross-sectional images, but interpreting them for abdominal trauma can be complex, especially with multiple injuries or subtle bleeding.<br>
The competition aims to harness artificial intelligence and machine learning to assist medical professionals in rapidly and precisely detecting injuries and grading their severity. This will significantly enhance trauma care and improve patient outcomes on a global scale.</p>
<p><strong>Overview</strong></p>
<p>The provided dataset contains information on patients and their abdominal health status. It includes variables indicating the health and injury status of various abdominal organs (bowel, extravasation, kidney, liver, spleen). Additionally, an "any_injury" variable provides an overall count of injuries detected in a patient.</p>
<p><strong>Preprocessing</strong></p>
<p>The dataset was preprocessed to create a cleaned DataFrame. It includes information on patient ID, health, and injury status of different abdominal organs. Descriptive statistics were generated to provide a summary of the categorical variables, showcasing the prevalence of healthy and injured conditions for each organ.</p>
<p><strong>Data Exploration and Visualizations</strong></p>
<p><strong>Occurrence of Injuries in Different Organs</strong></p>
<p>A bar plot was used to visualize the occurrence of injuries in different organs. It revealed that bowel injuries were the least common, while other organs had one or more injuries.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F4221c4f134c8d908f2b5b837c805b8a2%2FOccurrence%20of%20injuries%20in%20different%20organs.png?generation=1698915704986187&alt=media" alt="image"></p>
<p><strong>Overall Prevalence of Injuries</strong></p>
<p>The "any_injury" variable indicated the overall prevalence of injuries in the dataset, with a total count of 11 injuries.</p>
<p><strong>Relationship Between Injuries in Different Organs</strong></p>
<p>A heatmap illustrated the correlation between injuries in different organs. While perfect correlation (1) was observed on the diagonal, off-diagonal elements indicated correlations between injuries in different organs. In this small dataset, strong correlations may not be present.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2Fc03e89a61bd88c6e53cf310f33f3df41%2FCorrelation%20between%20organ.png?generation=1698916278467505&alt=media" alt="image"></p>
<p><strong>Multi-Planar Reconstruction (MPR)</strong></p>
<p>MPR involves displaying slices in different planes. Libraries like matplotlib or pyvista can be used to create interactive MPR visualizations, enhancing the interpretation of CT scans.</p>
<p><strong>Analysis of Organ Health</strong></p>
<p>The prevalence of healthy, low, and high health conditions for each organ was compared. The plot highlighted that the dataset predominantly consisted of healthy organ conditions, with only a few instances of low or high health status.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F44858e883cf40a860263bb2c31f8e20a%2FOrgan%20health.png?generation=1698916380566194&alt=media" alt="image"></p>
<p><strong>Analysis of Injuries</strong></p>
<p>Occurrences of injuries in different organs were analyzed, along with the overall prevalence of injuries in the dataset. The heatmap revealed potential relationships between injuries in different organs.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F4d807a5cfa2d37a1bdb6277863b16ee6%2FCorrelation%20between%20Injuries.png?generation=1698916468169775&alt=media" alt="image"></p>
<p><strong>Relationship Between "any_injury" and Organ Health</strong></p>
<p>The relationship between the presence of "any_injury" and the health status of each organ was examined. The plot indicated a strong relationship, with certain organs showing injuries in patients with low or high health status.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F7539ee4980700fcf95f04cc8a0695aff%2FComparison%20of%20organs.png?generation=1698916646965693&alt=media" alt="image"></p>
<p><strong>Patient Profiles</strong></p>
<p>Patient profiles were identified based on organ health and injury status using K-means clustering. Two clusters were identified: one representing healthy patients and the other representing injured patients. In a larger dataset, more meaningful patient profiles may emerge.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F1e55b56c5d14a7e12826b6b89dd02a25%2FPatient%20Profile.png?generation=1698916166021267&alt=media" alt="image"></p>
<p><strong>Model Evaluation</strong></p>
<p>Three machine learning models (Random Forests, SVM, Gradient Boosting) were trained and evaluated for accuracy. The results indicated high performance across all models, with Random Forests achieving an accuracy of 96%.</p>
<p><strong>Conclusion</strong></p>
<p>The solution presented here demonstrates a comprehensive approach to addressing the RSNA Abdominal Trauma Detection AI Challenge. Through data preprocessing, exploratory data analysis, visualizations, and machine learning, this solution aims to significantly improve the diagnosis and treatment of traumatic abdominal injuries. The high model accuracy underscores the potential impact of this approach in real-world clinical settings.</p>
<p><strong>Notebook:</strong> <a href="https://www.kaggle.com/code/jocelyndumlao/unleashing-the-healing-potential-abdominal-trauma/notebook" target="_blank">https://www.kaggle.com/code/jocelyndumlao/unleashing-the-healing-potential-abdominal-trauma/notebook</a></p>
|
RSNA Intracranial Hemorrhage Detection
|
#9 Solution with CODE - Team BIG HEAD: Training model bonanza, TTA and L2 stacking.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Slice is just an "image" (DICOM) in the context of a series of slices that comprise a full brain scan.</p>
<p><strong>CODE UPDATE</strong></p>
<p>Code is here: <a href="http://github.com/antorsae/rsna-intracranial-hemorrhage-detection-team-bighead">http://github.com/antorsae/rsna-intracranial-hemorrhage-detection-team-bighead</a></p>
<p><strong>SOLUTION OVERVIEW</strong></p>
<p>Our solution consists of pretty weak models (CV 0.07x in most of them) using L2 stacking (5 folds) trained with both <em>xgboost</em> and <em>catboost</em> and ensembled via averaging. </p>
<p>We trained ~50 models (10 architectures/losses * 5 folds) in total.</p>
<p>The following table summarizes the architectures, folds, and GPUs to train each model:</p>
<p><img src="https://i.imgur.com/pNaJwFP.png" alt=""></p>
<p><strong>Fastai v1 3-slice networks: standard window and loss</strong></p>
<p>Architectures not highlighted (first five) were implemented using fastai v1 taking 3 consecutive slices (512x512) of a study and feeding them to the vanilla architecture with a fully connected head that outputs 6*3 = 18 logits. Training is done for 15 epochs using 1-cycle-policy. Batch size is allocated dynamically maximizing GPU memory usage. We use random rotations and flips as augmentation.</p>
<p>Loss function is the weighted average of the 3 slices, giving more importance to the center slice:</p>
<pre><code>W_LOSS = 0.1
GENERAL_WEIGHTS = FloatTensor([2., 1., 1., 1., 1., 1.])
general_weights_3slices = torch.cat([GENERAL_WEIGHTS * W_LOSS, GENERAL_WEIGHTS, GENERAL_WEIGHTS * W_LOSS])

def weighted_loss(pred: Tensor, targ: Tensor) -> Tensor:
    return F.binary_cross_entropy_with_logits(pred, targ.float(), general_weights_3slices.to(device=pred.device))
</code></pre>
<p><strong>Fastai v2 3-slice networks: subdural window and subdural focused loss</strong></p>
<p>We decided to use fastai v2 primarily because augmentations are done on the GPU; a few of the computers we have had CPU bottlenecks doing augmentations, which is no longer the case with fastai v2.</p>
<p>Architectures highlighted in red were implemented as above, with the following differences:</p>
<ul>
<li>Fastai v2 was used: much of a learning process and it still has rough edges (some of them we realized after stage 1 finished and we could NOT change code).</li>
<li>Window centered at 100 and width of 254 (to take advantage of the range of <code>uint8</code>).</li>
<li>Loss weighted on subdural more (10x) than other types:</li>
</ul>
<pre><code>SUBDURAL_WEIGHTS = FloatTensor([.8, .4, .4, .4, .4, 4.])
subdural_weights_3slices = torch.cat([SUBDURAL_WEIGHTS * W_LOSS, SUBDURAL_WEIGHTS, SUBDURAL_WEIGHTS * W_LOSS])

def subdural_loss(pred: Tensor, targ: Tensor) -> Tensor:
    return F.binary_cross_entropy_with_logits(pred, targ.float(), subdural_weights_3slices.to(device=pred.device))
</code></pre>
<p><strong>Input to L2 Models</strong></p>
<p>Once models are trained we run OOF predictions using TTA with 10 repetitions. And we use the mean and std of those 10 TTA predictions for each architecture as input to both <em>xgboost</em> and <em>catboost</em>, both for the central and surrounding slices.</p>
<p>Two L2 models are trained: <em>xgboost</em> and <em>catboost</em>, and then simply averaged. One submission we did with the fastai v1 models only (they finished sooner) and the other using both.</p>
<p><strong>Things we would have done differently</strong></p>
<ul>
<li>Class-aware sampling (balance dataset)</li>
<li>Pseudo-label training</li>
<li>Fastai v2 head is different than v1 for vision models, the v1 head works better.</li>
<li>We used a pretty high <em>eps</em> for <em>Adam</em> optimizer in v2, defaults (in v1) work better.</li>
<li>Learnable window</li>
<li>TTA with zoom, crops and cut-out</li>
<li>Add extra channel with distance to center (similar to coord-conv but just radius to center) to make network location-aware.</li>
<li>L2 model using lightgbm too and averaging 5 folds of L2 (we trained with 4 folds and hence we did not use all training set for L2)</li>
</ul>
<p>...and the mandatory meme as a tribute to our team name:
<img src="https://i.imgur.com/1bz50NK.png" alt=""></p>
<p>In order:<br>
L2 models<br>
TTA<br>
3 slices<br>
Subdural weight<br>
(We did not use focal loss, we just weighted the central slice more than the surrounding ones.)</p>
|
RSNA Intracranial Hemorrhage Detection
|
10th place solution (+ github code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all participants and the winners!
And I became a Kaggle Master in this competition!
We will go for Japanese BBQ (yakiniku) with the prize money from this competition, haha.</p>
<p>Following is a summary of our solutions.</p>
<p>Code : <a href="https://github.com/shimacos37/kaggle_rsna_2019_10th_solution">https://github.com/shimacos37/kaggle_rsna_2019_10th_solution</a>
We mostly used <a href="/appian">@appian</a> code. Thank you very much <a href="/appian">@appian</a> !!</p>
<h1>Pipeline</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2F393c14ac48f011e5cab030c1493eb47e%2FRSNA_pipeline%20(4" alt="">.png?generation=1574649511734185&alt=media)</p>
<h1>Summary</h1>
<h2>Stage 1</h2>
<h3>Preprocess</h3>
<ul>
<li>As most of people did, we applied three window (brain, blood/subdural, bone).</li>
<li>Delete some noisy images (images which have a small brain area).</li>
<li>PatientID based 5-fold split.</li>
</ul>
<h3>Train</h3>
<ul>
<li>We trained models by simply changing the backbone in <a href="/appian">@appian</a>'s code and applied some ideas.</li>
<li>We usually used 512x512 img_size and applied simple augmentations (flip, resize, etc...)</li>
<li>Finally, we constructed eleven models. In retrospect, I don't think we needed to construct so many models...</li>
</ul>
<p><strong>Simple CNN models</strong></p>
<ol>
<li>SeResNext-50</li>
<li>SeResNext-50 (Resize 410x410)</li>
<li>SeResNext-101 (Mixup used)</li>
<li>Efficientnetb3</li>
<li>InceptionV4</li>
<li>InceptionResNetV2</li>
<li>Xception</li>
</ol>
<p><strong>Some Ideas</strong></p>
<ul>
<li><p>We predicted the five labels without 'any', and derived 'any' from the other label probabilities as 1 - (1-p_1)*(1-p_2)*... (see the sketch below the figure)</p>
<ul><li>This does not score very high on its own, but should contribute something when stacking.</li></ul></li>
<li><p>We used adjacent images as input and predicted the center label. Please see the following figure.</p></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2Ff3dc21d1aa487195426175d6aa5f4401%2FUntitled%20Diagram%20(3" alt="">.png?generation=1574650665165165&alt=media)</p>
<ul>
<li>We applied label smoothing by taking a moving average or interpolating over the sandwiched label area.
<ul><li>This is because our EDA showed that the boundary slices of a labeled region tend to have a high log_loss.</li></ul></li>
</ul>
<h2>Stage 2</h2>
<h3>Preprocess</h3>
<ul>
<li>First, we predicted the probabilities of the labels per image.</li>
<li>Second, we sorted the probabilities by Position2 per StudyInstanceUID.</li>
<li>We extracted the features below (see the sketch after this list).
<ul><li>Aggregate feature (min, max, mean, std), pred-pred_mean, pred / pred_mean, etc</li>
<li>Moving average feature (3, 5, 7, 9 adjacent prediction), pred - moving_average_pred, pred / moving_average_pred, etc</li></ul></li>
</ul>
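<p>A small pandas sketch of that kind of stage-2 feature construction: per-study aggregates and moving averages of the stage-1 predictions sorted by Position2. Column names and values are illustrative, not taken from the team's code.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "StudyInstanceUID": ["a"] * 5 + ["b"] * 5,
    "Position2": list(range(5)) * 2,
    "pred_any": [0.1, 0.2, 0.8, 0.7, 0.1, 0.05, 0.1, 0.2, 0.1, 0.05],
}).sort_values(["StudyInstanceUID", "Position2"])

grp = df.groupby("StudyInstanceUID")["pred_any"]
df["pred_mean"] = grp.transform("mean")                 # aggregate features
df["pred_std"] = grp.transform("std")
df["pred_minus_mean"] = df["pred_any"] - df["pred_mean"]
df["pred_div_mean"] = df["pred_any"] / df["pred_mean"]
df["ma3"] = grp.transform(lambda s: s.rolling(3, center=True, min_periods=1).mean())  # moving average
df["pred_minus_ma3"] = df["pred_any"] - df["ma3"]
print(df.head())
</code></pre>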
<h3>Stacking</h3>
<ul>
<li>We simply trained LightGBM and MLP by above features.</li>
<li>And we constructed CNN stacking model like below figure.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2Fce8d6f1abe7fcd102dcdea7a1b93fb72%2Fcnn_stacking%20(3" alt="">.png?generation=1574658076805462&alt=media)</p>
<ul>
<li>We treated the above features as images.
<ul><li>height : features from different models</li>
<li>width : feature dimension</li>
<li>channel : adjacent features sorted by Position2</li></ul></li>
</ul>
<h2>Stage 3</h2>
<h3>Preprocess</h3>
<ul>
<li>We used the same method of Stage 2.</li>
</ul>
<h3>Stacking</h3>
<ul>
<li>We simply trained LightGBM</li>
<li>We clipped prediction values to [1e-6, 1 - 1e-6] and made submissions.</li>
</ul>
|
RSNA Intracranial Hemorrhage Detection
|
11th place solution (with updated code on github)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much!</p>
<ul>
<li>I simply thought 8 is better than 5. Smoothing was important for this competition metric and I wanted to smooth by averaging many predictions.</li>
<li>The last epoch actually overfits and CV got slightly worse. It's intentional, because ensembling probably benefits from overfitted predictions along with less overfitted predictions. </li>
<li>Each series has at least 20 images in 1st stage data. I didn't check 2nd stage test data but I suppose it's same.</li>
</ul>
<p>Hope this answers.</p>
<p>Thank you! I like a simple approach too and satisfied that this turned out to work nicely.</p>
<p>You are welcome!
Now I'm reading your solution/code and nice to learn from it. </p>
<p>Thank you! Yes, it's nice competing and I'd like to read more of your insightful kernels!</p>
<p>Thank you!
I did not even try using ImageNet stats because it does not make sense considering how different DICOM images are from ImageNet images. I was using min-max normalization before, and <code>{'mean': [13.197, 7.179, -78.954,], 'std': [24.509, 55.063, 113.127,]}</code> works slightly better than that.</p>
<p>Yes, making use of series information is very important. I didn't treat them as a sequence but the sequential model probably does a better job here. </p>
<p>Thank you!
I'm afraid I didn't record the number but I guess it was less than 1% of train data. </p>
<p>Thank you! I'm glad to hear that.</p>
<p>Congratulations to all.
Thank you kaggle and the host team for organizing this interesting competition.</p>
<p>The updated source code is available at <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage">https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage</a>
I will probably upload all trained models later.</p>
<h3>Windowing</h3>
<p>For this challenge, windowing is important to focus on the matter, in this case the brain and the blood. There are good kernels explaining how windowing works.</p>
<ul>
<li><a href="https://www.kaggle.com/dcstang/see-like-a-radiologist-with-systematic-windowing">See like a Radiologist with Systematic Windowing</a> by <a href="https://www.kaggle.com/dcstang">David Tang</a></li>
<li><a href="https://www.kaggle.com/allunia/rsna-ih-detection-eda">RSNA IH Detection - EDA</a> by <a href="https://www.kaggle.com/allunia">Allunia</a></li>
</ul>
<p>We used three types of windows and assigned one to each of the channels to construct images on the fly for training.</p>
<table>
<thead>
<tr>
<th>Channel</th>
<th>Matter</th>
<th>Window Center</th>
<th>Window Width</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Brain</td>
<td>40</td>
<td>80</td>
</tr>
<tr>
<td>1</td>
<td>Blood/Subdural</td>
<td>80</td>
<td>200</td>
</tr>
<tr>
<td>2</td>
<td>Soft tissues</td>
<td>40</td>
<td>380</td>
</tr>
</tbody>
</table>
<p>Here is an example before and after applying the windowing. This image is labeled as <code>any intraparenchymal</code> and you can see that windowing helps focusing on the matter. Please check <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/dicom_windowing.ipynb">windowing.ipynb</a> for the detail.</p>
<p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/windowing.png?raw=true" alt="windowing.png"></p>
<h3>Classification</h3>
<p>This step focuses on the pixel data contained in the DICOM file, not the metadata. Still, four kinds of metadata are used to apply windowing properly. <code>RescaleSlope</code> and <code>RescaleIntercept</code> are used for windowing. <code>BitsStored</code> and <code>PixelRepresentation</code> are used for fixing wrong intercept values, which is mentioned in <a href="https://www.kaggle.com/jhoward/cleaning-the-data-for-rapid-prototyping-fastai">Cleaning the data for rapid prototyping</a> written by <a href="https://www.kaggle.com/jhoward">Jeremy Howard</a>. </p>
<ul>
<li>Two architectures are used. <code>se_resnext50_32x4d</code> and <code>se_resnext101_32x4d</code>. </li>
<li>Imagenet pretrained weights from <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a></li>
<li>8 folds each. </li>
<li>Adding a random number to the windowed pixel data as augmentation led to a little better generalization performance. This idea is based on a hunch that CT scanners are probably not perfectly calibrated. </li>
<li>Test time augmentations(n=5) are used for predictions.</li>
<li>Checkpoints from 2nd and 3rd epochs are used for predictions and then averaged.</li>
<li>Final predictions are obtained from simple average of <code>se_resnext50_32x4d</code> and <code>se_resnext101_32x4d</code>. </li>
</ul>
<p><strong>The training result of 0th fold of se_resnext50_32x4d (<a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/conf/model100.py">model100.py</a>)</strong></p>
<p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/model100_fold0.png?raw=true" alt="model100_fold0.png"></p>
<p><strong>The training result of 0th fold of se_resnext101_32x4d (<a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/conf/model110.py">model110.py</a>)</strong></p>
<p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/model110_fold0.png?raw=true" alt="model110_fold0.png"></p>
<p><strong>Logloss for each of the Hemorrhage Types after emsembling (oof)</strong></p>
<p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/ensembled.png?raw=true" alt="ensembled.png"></p>
<p>This ensembled score (0.0642) is similar to the score (0.065) we got on public LB in the first stage before introducing second level model.</p>
<h3>Second Level Model</h3>
<p>The second level model focuses on a whole CT scan series, unlike the classification model which focuses on a given image (slice). The main idea is that the other slices within the same series can be useful to enhance the predictions of a given slice. For example, if both of the adjacent slices of a certain slice are inferred as <code>epidural</code>, the middle slice is most likely <code>epidural</code>. This kind of relationship can be trained using something like LightGBM. The train data can be constructed as follows,</p>
<p>For example, in case of training <code>epidural</code> based on oof predictions, you can construct a record like this,</p>
<p><code>
prediction of the given slice, left1, right1, left2, right2, left3, right3, ...,
</code></p>
<ul>
<li><code>left1</code> indicates the prediction of the first slice to the left from the given slice.</li>
<li><code>right2</code> indicates the prediction of the second slice to the right from the given slice.</li>
</ul>
<p>We included <code>left1</code> to <code>left9</code> and <code>right1</code> to <code>right9</code> for each slice. <code>left1</code> and <code>right1</code> are unsurprisingly the most useful features among all slices except the given slice, based on feature importance (lightgbm gain). Some distant slices such as <code>left9</code> or <code>right9</code> are not as important as slices closer to the given slice, but still somewhat useful.</p>
<p><img src="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/demo/image/secondlevel.png?raw=true" alt="secondlevel.png"></p>
<ul>
<li>Final predictions are obtained by simply averaging predictions from LightGBM, Catboost and XGB.</li>
<li>The 1st stage public LB score was improved from 0.65 to 0.57 by this.</li>
</ul>
<p>Thank you for reading!</p>
|
RSNA Intracranial Hemorrhage Detection
|
13th place solution with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and RSNA for hosting such an interesting competition.
Thanks to the whole team @andy2709 @moewie94 @lego1st @nguyenbadung for the great collaboration.</p>
<p>The code is publicly available at <a href="https://github.com/dattran2346/rsna-2019">https://github.com/dattran2346/rsna-2019</a></p>
<h2>1. Preprocessing</h2>
<h3>Windowing</h3>
<ul>
<li>We use various combinations of the brain, subdural, bony, and default windows and stack them to create a normal 3-channel image.</li>
</ul>
<pre><code>ct_windows = {
    'brain':    {'L': 40,  'W': 80},
    'subdural': {'L': 75,  'W': 215},
    'bony':     {'L': 600, 'W': 2800},
    'default':  {},  # taken from the DICOM metadata
}
</code></pre>
<h3>Window setting optimization</h3>
<ul>
<li>The idea is to use a 1x1 convolution and sigmoid activation to learn relevant windows; the weights are initialized to the default brain, subdural and bony windows. @andy2709 tried this method and noticed that the final learned windows are very close to the default ones. </li>
</ul>
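<p>A hedged PyTorch sketch of such a learnable window. This is one possible parameterisation (sigmoid rising across each default window); the exact initialisation and sharpness used by the team are not given in the post.</p>
<pre><code>import torch
import torch.nn as nn

class LearnableWindow(nn.Module):
    """1x1 convolution + sigmoid on the HU image; each output channel is
    initialised to rise across one of the default windows (L, W)."""
    def __init__(self, windows=((40, 80), (75, 215), (600, 2800)), sharpness=8.0):
        super().__init__()
        self.conv = nn.Conv2d(1, len(windows), kernel_size=1)
        with torch.no_grad():
            for i, (level, width) in enumerate(windows):
                w = sharpness / width            # sigmoid(w * (x - level))
                self.conv.weight[i, 0, 0, 0] = w
                self.conv.bias[i] = -w * level

    def forward(self, hu):                       # hu: (B, 1, H, W) in Hounsfield units
        return torch.sigmoid(self.conv(hu))

x = torch.randn(2, 1, 256, 256) * 500            # fake HU values
print(LearnableWindow()(x).shape)                 # torch.Size([2, 3, 256, 256])
</code></pre>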
<h3>Data split</h3>
<ul>
<li>We split the dataset by both patient id and study id. I, @andy2709 and @lego1st trained the models by patient split, while @nguyenbadung and @moewie94 trained by study split.</li>
</ul>
<h2>2. Model</h2>
<h3>2D Model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F1f61df62cbecc1d1777aeeb6069c19ae%2F2dmodel.png?generation=1574771438363883&alt=media" alt=""></p>
<p>We applied 2-stage training here:</p>
<ul>
<li>In the 1st stage, just normal CNN training; backbones are EfficientNetB2-B5, SEResNeXt50, SEResNeXt101.</li>
<li>In the 2nd stage, we use the outputs of 5 consecutive slices and apply a simple CNN to predict the center slice.</li>
</ul>
<h3>3D Model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F45cd1a578fa954caba1551680f907b69%2F3dmodels.png?generation=1574771686608008&alt=media" alt=""></p>
<ul>
<li>We use a normal backbone as the encoder and a bi-directional LSTM with an FC layer as the decoder; the model was trained end-to-end.</li>
<li>For each study, we select 10 random slices (contiguous and not) in order and put them through the network during training. For inference, all slices are considered.</li>
</ul>
<h2>3. Stacking</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F3e4aba9020f00b35b66a03f103057fc0%2Fstacking.png?generation=1574771786432271&alt=media" alt=""></p>
<ul>
<li>Concatenate predictions from all models (split by both study-id and patient-id) and build a simple MLP model:</li>
</ul>
<pre><code>model = nn.Sequential(
    nn.Linear(input_dim, 1024),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(1024, 6),
)
</code></pre>
<ul>
<li>Average predictions from the 2 types of models: study-id split and patient-id split.</li>
</ul>
<h2>4. Summary</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3342872%2F9d5b6d677fa3626ac13f6299e2b6d094%2FScreenshot%20from%202019-11-26%2019-46-08.png?generation=1574772402149756&alt=media" alt=""></p>
|
RSNA Intracranial Hemorrhage Detection
|
15th place solution (0.047) --- Close but no Cigar!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>I see that many of the top 10 have already posted their solutions, but we would still like to share ours!</strong></p>
<p><strong>One reason is</strong> that the starting point of our [best] solution really was the <a href="https://www.kaggle.com/akensert/inceptionv3-prev-resnet50-keras-baseline-model">public kernel</a>. What was added, thanks to my teammates, was albumentations augmentation (which would be put in <code>_read()</code>), as well as a bigger input size and more epochs of training (these are pretty obvious). There were also some alterations in the learning rate schedules, but that's pretty much it! </p>
<p>Then as the "next level", we would use [about] a dozen models (including B2-B6, Xception and InceptionResnetV2 in addition to InceptionV3) to ensemble. ResNe(X)ts didn't work for us (or rather, not for me), so they weren't included in this particular submission.</p>
<p>Below is the method to squeeze out as much as we could from the individually trained models that we had (scores of ~0.068-0.074), which would eventually get us to 0.058 (public LB).</p>
<p>Four (or three) levels of averages (wisdom of the crowds at work!):</p>
<p><em>We had 3 separate ensembles: (I) mix of architectures (90/10 train/val split), (II) 6-CV B4, and (III) 6-CV InceptionResNetV2.</em></p>
<p><strong>For each ensemble in ensembles:</strong><br>
<strong>(1).</strong> All models' individual predictions are a weighted average of each epoch's prediction (a.k.a. snapshot predictions). The weights are optimized via scipy.optimize.minimize, resulting in 1 prediction matrix (M x N_classes) per model.<br>
<strong>(2).</strong> Now each model's predictions are also averaged (but without optimized weights, because we don't have a validation set here). Now we are at ~0.065 on public LB.<br>
<strong>(3).</strong> Every possible validation prediction data point (from <strong>(1)</strong>) is used to train two sequence models (2-layered LSTM and 2-layered GRU; see figure (credits to my teammate <a href="/ratthachat">@ratthachat</a> :-))), which are then used to "correct"/predict the test set predictions. Target Y would be the true labels (N_classes=6) and the input would be the sequence of predictions for each StudyID's slices (sorted from low ImagePosition3 to high ImagePosition3), including the floating point value of ImagePosition3 (N_slices=60, N_features=6+1). This step really boosts our log loss: we advanced from 0.065 -> 0.059 just like that! Each sequence model (RNN) was trained for 35 epochs, and like in <strong>(1)</strong>, we used scipy.optimize.minimize to compute a weighted average of all 35 epochs' predictions (so yet again using the "average snapshot prediction method"; it seems to work really well for us).<br>
<strong>end For :-)</strong>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2183023%2F6368f8a9ed88a7bbd1dcc92469d4889a%2Fgru_2.jpg?generation=1573727710325475&alt=media" alt=""></p>
<p>We now had 3 ensemble predictions (3 submission files), which were simply averaged and submitted to Kaggle!</p>
<p><strong>So to summarize</strong>, we used pretty standard models with lots of averages and good post processing. </p>
<p>Unfortunately (for us) it didn't go all the way to the top 10, but we're still extremely grateful for this competition! You know, creating friendships and all that!</p>
<p>Also, good job everyone! :-)</p>
|
RSNA Intracranial Hemorrhage Detection
|
16th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners and thanks to kaggle and organizers for opening this learning space.</p>
<p>We started relatively late, but we made a good starting point with the code that <a href="/appian">@appian</a> shared. Great thanks to <a href="/appian">@appian</a> </p>
<p>Our overall procedure is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2276152%2F770038c6cb65c7774c82ff8b1e1c5877%2F17th_solution_figure2.png?generation=1573814036534212&alt=media" alt=""></p>
<h3>In step 1</h3>
<ul>
<li>Basic training is performed by considering an image as an independent input.
<ul><li>input shape : (batch_size, 512, 512, 3)
<ul><li>4th axis (3) means 3 channels with multiple windowing parameters</li></ul></li>
<li>output shape : (batch_size, 6)</li>
<li>CNN Architectures : SE-ResNeXt-101 and EfficientNet-B6</li>
<li>loss : weighted log loss (weights = [2/7, 1/7, 1/7, 1/7, 1/7, 1/7])</li>
<li>optimizer : Adam (with learning rate from 1e-4 to 1e-5)</li>
<li>sampling : random sampling or location based sampling (sampling middle slices more from image series in patient-level)</li>
<li>5 folds or 7 folds training</li></ul></li>
</ul>
<h3>In step 2</h3>
<ul>
<li>We wanted to calibrate the output distributions considering the relations between labels and between adjacent image slices, so we treated the outputs of a patient's images as a signal and trained a model on it.</li>
<li>Output distributions are extracted from the validation sets. (For example, 5 models from 5 folds can cover the whole training dataset.)</li>
<li>If about 640,000 images are used in step1, about 19,500 output signals (the number of patients) are used in step 2.
<ul><li>input shape : (batch_size, None, 6, 1)
<ul><li>1 axis (None) means the length of signal (the number of slices)</li>
<li>2 axis (6) means the number of labels</li></ul></li>
<li>output shape : (batch_size, None, 6, 1)</li>
<li>CNN Architecture : simple CNN model with 4 convolution layers having 5x6 matrix</li>
<li>loss : weighted log loss (weights = [2/7, 1/7, 1/7, 1/7, 1/7, 1/7])</li>
<li>optimizer : Adam (with learning rate 1e-5)</li>
<li>5 folds training</li></ul></li>
</ul>
<pre><code>_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, None, 6, 1)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, None, 6, 64)       1984
_________________________________________________________________
conv2d_2 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_3 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_4 (Conv2D)            (None, None, 6, 64)       122944
_________________________________________________________________
conv2d_5 (Conv2D)            (None, None, 6, 1)        65
=================================================================
</code></pre>
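<p>For reference, a Keras sketch that reproduces the parameter counts in the summary above, assuming 'same' padding, 5x6 kernels for the middle layers and a 1x1 final convolution; the activations are an assumption (they do not affect the parameter counts).</p>
<pre><code>from tensorflow.keras import layers, models

inp = layers.Input(shape=(None, 6, 1))                 # (n_slices, 6 labels, 1)
x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(inp)
x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(x)
x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(x)
x = layers.Conv2D(64, (5, 6), padding="same", activation="relu")(x)
out = layers.Conv2D(1, (1, 1), activation="sigmoid")(x)
model = models.Model(inp, out)
model.summary()                                        # parameter counts match the table above
</code></pre>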
<p>We also thought about handling the sequential information at the image level, but the deadline was close, so the process was split into two steps and output signals with relatively small dimensions were used as the next best thing. </p>
<p><strong>The results are as follows.</strong><br>
<strong>Step 1 result: 0.05425 (private score)</strong><br>
<strong>Step 2 result: 0.04793 (private score)</strong><br>
We think that, like other teams, the core of our processing was to exploit the sequential information.</p>
|
RSNA Intracranial Hemorrhage Detection
|
1st Place Solution. Sequential model wins
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! We have 4*V100 for the 2D CNN training. It may take 3-4 days to finish the 2D CNN models training. For the seq models, It takes a few hours on one GPU.</p>
<p>The code of seq models:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F599e346fa8572a0a77cf1e64aa03584b%2F2019-11-26%2010.58.45.png?generation=1574737169081451&alt=media" alt=""></p>
<p>Thank you! I just followed the flowcharts from your team's post :) In my implementation, the sequence model 2 did two things: 1) multi-model output stacking; 2) sequence modeling with meta info.
When we finished the 4-CNN-model ensemble (0.060 in stage 1 with naive postprocessing), I used a 1D CNN to do stacking (baseline seq model 2). It boosted the score from 0.060 to 0.057. Adding a GRU with Position2 got us 0.056 on the leaderboard. After this, I started to add the seq model 1 module into the pipeline. Adding some tricks, we got the final score of 0.054 in stage 1. Actually, I did not even submit the result of seq model 1 on its own; it is a part of my seq model :)</p>
<p>For the backbone seresnext101 model, one image inference time is ~40ms on 1 1080ti without TTA.</p>
<p>I will update the writeup and code soon😂.</p>
<p>The key module of our pipeline is a sequence model. It works well and there is no shakeup.
Code : <a href="https://github.com/SeuTao/RSNA2019_1st_place_solution">https://github.com/SeuTao/RSNA2019_1st_place_solution</a></p>
<h1>Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F550d2dfb85771f93e0f2c8f1dbc2f62c%2Fsequence%20model%20with%20big%20bar%20align.png?generation=1574528142297990&alt=media" alt=""></p>
<h1>2D CNN Modeling</h1>
<p><strong>Data pre-processing & augmentation</strong>
Our team has three 2D classifier pipelines. The three pipelines use different input settings (3 channels):
<code>
1. Single slice with 3 windows.
2. Spatially adjacent 3 slices with one window.
3. Combination of 1 and 2: spatially adjacent 3 slices with three windows.
</code>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Fd22fed8b4fba54c637627bacb41666ad%2F2019-11-24%203.51.01.png?generation=1574582336956604&alt=media" alt=""></p>
<p>The windows we use are:
<code>
Brain Window[40, 80],
Subdural Window[80, 200],
Bone Window[600, 2800]
</code>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F93bbbae12be594c19554a35956fe85be%2F1.png?generation=1574581640882170&alt=media" alt=""></p>
<p>Augmentations:
- Random ShiftScaleRotate
- Random resize crop
- Random HFlip</p>
<p>Training strategy
- Randomly sample images from different SeriesInstanceUIDs
- Each epoch was trained on 4x the SeriesInstanceUIDs
- Adam optimiser with cyclic learning rate (5e-4~1e-5)</p>
<h1>Sequence Model Development</h1>
<p><strong>Sequence model 1: MLP + LSTM</strong>
Input:
- Slice embeddings from multi models (num_models*feature dim)
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Fcdc6c8d3af12a81695dd50da3d930cb8%2Fsequence%20model%201.png?generation=1574645166546352&alt=media" alt=""></p>
<p><strong>Sequence model 2: 1d CNN + LSTM</strong>
Input:
- Logits from multi 2D CNN models (num_models*6 class output)
- Logits from sequence model 1 (6 class output)
- Meta info (Position)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2F4b20e615b4331031628e2b27f5a9ddf2%2F2.png?generation=1574582851737685&alt=media" alt=""></p>
<p>There is a little bit of domain knowledge behind this operation. Slices with different thicknesses have different image quality (noise level) due to the reconstruction settings.</p>
<p>Thank you! I just feed the first-order difference of Position2 into the NN, which effectively represents the slice thickness. This thickness information gave me a ~0.0005 boost. I also did some random sampling within the sequence during training.</p>
<p>Thank you! We have three pipelines for single 2D CNN training. The uploaded code is one of the three which only applies the 2nd preprocessing method (Spatially adjacent 3 slices with one window). I'm now combining the three pipelines into one. The pretrained models using the code on github is uploaded.</p>
|
RSNA Intracranial Hemorrhage Detection
|
2nd Place Solution - Sequential model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, and we used a mask for the padded entries, then removed the padding in the loss calculation and when saving results. </p>
<p>Code & Val/LB scores : <a href="https://github.com/darraghdog/rsna">https://github.com/darraghdog/rsna</a> <br>
Congrats to all winners, looking forward to going through your solutions. Big shout out to the competition hosts RSNA, the kaggle community, the pytorch community, albumentations and FB's work on resnext - very cool how they trained this.
We were very sad not to get a top 3 in the Recursion competition; now we are very happy 😄 </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F213493%2Fc76202951ffb4afcc5d56acb579ac552%2Frsna_nobrainer.png?generation=1573698154700507&alt=media" alt=""></p>
<p><strong>Overview</strong>
In general we just have a single image classifier: the data is split into 5 folds (we only trained on 3 of them), and then the pre-logit layer is extracted from the classifier and fed into an LSTM.
The classifier is trained for 5 epochs on each fold (480 images, with the preprocessing below). For each epoch of each fold we extract the embedding layer (using TTA and averaging the embeddings) and train a separate LSTM for 12 epochs on each of those - so 15 LSTMs (3 fold image models x 5 epochs) - and average the predictions.
We were a bit concerned the preprocessing filter might lose information, so we trained the above again without the preprocessing filter and it did worse; but averaging both pipelines did ever so slightly better. The pipeline from the first paragraph above would, for all intents and purposes, be just as good as the final solution, but as we needed to fix the documentation pre-stage 2, both pipelines are in the github repo and in the final solution. </p>
<p><strong>Preprocessing:</strong>
- Used Appian’s windowing from dicom images. <a href="https://github.com/darraghdog/rsna/blob/master/eda/window_v1_test.py#L66">Linky</a>
- Cut any black space. There were then headrest or machine artifacts in the image making the head much smaller than it could be - see the visual above. These were generally thin lines, so we used scipy.ndimage's minimum_filter to try to wipe those thin lines. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainorig.py#L159">Linky</a>
- Albumentations as mentioned in visual above. </p>
<p><strong>Image classifier</strong>
- Resnext101 - we did not spend a whole lot of time here as it ran so long. But we tested SE-ResNeXt and EfficientNet-B0 and they did not work as well.
- Extract pre logit layer (GAP layer) at inference time <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainorig.py#L387">Linky</a> </p>
<p><strong>Create Sequences</strong>
- Extract metadata from dicoms : <a href="https://github.com/darraghdog/rsna/blob/master/eda/meta_eda_v1.py">Linky</a>
- Sequence images on Patient, Study and Series - most sequences were between 24 and 60 images in length. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L200">Linky</a> </p>
<p><strong>LSTM</strong>
- Feed in the embeddings in sequence on above key - Patient, Study and Series - also concat on the deltas between current and previous/next embeddings (<code>current-previous embedding</code> and <code>current-next embedding</code>) to give the model knowledge of changes around the image. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L133">Linky</a>
- LSTM architecture lifted from the winners of the first-stage toxic competition. This is a beast - the only improvements came from making the hidden layers larger. Oh, we also added the embeddings onto the lstm output and this helped a bit too. <a href="https://github.com/darraghdog/rsna/blob/a97018a7b7ec920425189c7e37c1128dd9cb0158/scripts/resnext101v12/trainlstmdeltasum.py#L352">Linky</a>
- For sequences of different lengths, we padded them to the same length with a dummy embedding of zeros, and then threw the results of this away before calculating the loss and saving the predictions. </p>
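<p>A minimal PyTorch sketch of the delta features described above; the edge handling at the first and last slice is illustrative rather than the exact choice made in the linked code.</p>
<pre><code>import torch

emb = torch.randn(1, 40, 2048)             # (batch, slices in scan order, GAP features)
prev = torch.roll(emb, shifts=1, dims=1)
nxt = torch.roll(emb, shifts=-1, dims=1)
prev[:, 0] = emb[:, 0]                     # edge handling is illustrative
nxt[:, -1] = emb[:, -1]
lstm_input = torch.cat([emb, emb - prev, emb - nxt], dim=-1)
print(lstm_input.shape)                    # torch.Size([1, 40, 6144])
</code></pre>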
<p><strong>What did not help...</strong> <br>
Too long to do justice... mixup on image, mixup on embedding, augmentations on sequences (partial sequences, reversed sequences), 1d convolutions for sequences (although SeuTao got it working)</p>
<p><strong>Given more time</strong> <br>
Make the classifier and the lstm a single end-to-end model.
Train everything on stage-2 data; we only got to train two folds of the image model on stage-2 data.</p>
<p>Hi Manoj, the lstm was less about windowing in each individual image and more about what folks call the <code>z-axis</code> in the data... the CT scan moves over the head and takes a number of images in sequence. We put this sequence of images (or the GAP layer from it) to the LSTM, so that let the LSTM simulate the scan moving over the head... hope I understood the question right. </p>
<p>A single brain hemorrhage scan is made up of multiple images, as the scanner moves over the brain. There is one vector per image, and these vectors are sequenced according to time that image was taken within the scan. </p>
|
RSNA Intracranial Hemorrhage Detection
|
3rd place solution & become GM!! (updated with code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you😆 </p>
<p>>Well deserve for a GM!! </p>
<p>Hahaha, thank you very much. I am so glad to hear that.</p>
<p>Thank you! </p>
<p>Thank you😊 </p>
<h2>update</h2>
<p>code is <a href="https://github.com/okotaku/kaggle_rsna2019_3rd_solution">here</a>.</p>
<p>Hi, dear kagglers. First of all, thank you very much RSNA and kaggle for hosting such a fantastic competition. And congrats winners and all kagglers:)
I finally became kaggle Grandmaster. It was a super tough road but all experience made me stronger. I am very proud of it😆</p>
<p>Here is my solution. I will write details in later parts and will share my github repo after I clean up it.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2Fbba0a58dc68950ba36d51fb210b8329c%2Fimage1.png?generation=1573695201895407&alt=media" alt=""></p>
<p>Final model: private 0.045
User stacking model only: private 0.043 (I couldn't select it qq)</p>
<h2>Special Preprocessing</h2>
<h3>windowing</h3>
<p>I used 2 types of windowing.
- <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/110728#latest-659011">subdural window</a>
- <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage/blob/master/src/cnn/dataset/custom_dataset.py#L16">Appian’s 3 types windows</a></p>
<p>For me subdural was a little better.</p>
<h3>concat user slice</h3>
<p>This method gave me a big improvement. There are some images (about 20 - 40) in one SeriesInstanceUID, and when sorted by ImagePositionPatient2 you can see that the targets are continuous. I will call those images s1, s2, s3, ..., st, st+1, … in my post.
Here is the example. You can see more details in <a href="https://www.kaggle.com/takuok/eda-of-rsna">my kernel</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2Fc11506e3c3a5c81bf6277b2963f5df83%2Fimage3.png?generation=1573695232477620&alt=media" alt=""></p>
<p>So I decided to concat some images from the same SeriesInstanceUID.
- st-1, st, st+1
- st, st+1, st+2
- st-2, st-1, st
- st-2, st, st+2
- np.mean (st-3, st-2, st-1), st, np.mean(st+1, st+2, st+3)
- np.mean (st-5, st-4, st-3, st-2, st-1), st, np.mean(st+1, st+2, st+3, st+4, st+5)
- np.mean (st-X for X in all values), st, np.mean(st+X for X in all values)</p>
<p>Then predicted st’s target.</p>
<p>And I tried multi task training.
- st-1, st, st+1 then predict targets of st-1, st, st+1
- st-2, st, st+2 then predict targets of st-2, st, st+2</p>
<p>This model got 0.060~0.062 (sry I forgot) on stage1 Public. It was my best single model, and those 2 models improved my ensemble score from 0.057 to 0.056 on stage1.</p>
<h2>User Stacking</h2>
<p>I used the “concat user slice” method to show the models multiple slices of the same user, and I used this method in the ensemble part as well. I call it User Stacking.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F841938%2F8599bb8cc52e1ebac3c14203f98e5427%2Fimage2.png?generation=1573695252564269&alt=media" alt=""></p>
<h2>Other things I used</h2>
<ul>
<li>These didn’t give much improvement, but I list them anyway.</li>
<li>Appian’s 0.066 models</li>
<li>predict 5 classes and fill “any” with the max of those predictions.</li>
<li><a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/113339#latest-664918">CQ500 External Data</a></li>
<li>crop black area</li>
<li>retrain stage2 data</li>
<li><a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108065#latest-636669">generalized mean pooling</a></li>
</ul>
<h2>What didn’t work.</h2>
<ul>
<li>EfficientNet</li>
<li>Cbam Resnet</li>
</ul>
<p>*Slide design: Japanese Autumn leaves (紅葉: koyo)</p>
<p>Thank you! I will upload it later;)</p>
<blockquote>
<p>did you also avoided concating the last and first images of two consecutive patients?</p>
</blockquote>
<p>Yes I did. First I made dataframe that has slice information like patientID0, slice1ID=XXX, slice2ID=XXX, ...</p>
<blockquote>
<p>what image size you used for single best model.</p>
</blockquote>
<p>512*512</p>
<p>Congrats you too!!</p>
<p>Thank you😃 </p>
<p>Se-ResNeXt50 <br>
input=st-1, st, st+1 <br>
targets=st-1, st, st+1 <br>
1st stage Public=0.061 2nd stage private=0.048</p>
<p>Thank you! </p>
<p>Thank you!</p>
<p>Thank you! <br>
Congrats to win;)</p>
<p>I used 2 types of input. <br>
1. st-1, st, st+1->concat
2. avg(st-3, st-2, st-1), st, avg(st+1, st+2, st+3)->concat</p>
<p>And 2 types of target.
1. st's target
2. st's target, st-1's target, st-2's target->concat (so predict 18 targets)</p>
<p>Thank you;) I am looking forward to see you again😃 </p>
<p>😆</p>
<p>thank you!
I trust my 1st stage public score:)</p>
<p>thank you!
I am preparing my document now.</p>
<p>thank you!</p>
<p>haha, thank you🙄</p>
<p>thank you!</p>
<p>Thank you for your comments! <br>
<a href="https://github.com/okotaku/kaggle_rsna2019_3rd_solution/blob/master/src/dataset_concat.py#L59-L80">Here</a> is the concat parts. <br>
1. load st-1's image -> resize 512*512 -> window preprocessing
2. load st's image -> resize 512*512 -> window preprocessing
3. load st+1's image -> resize 512*512 -> window preprocessing
4. concat all</p>
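<p>For reference, a minimal sketch of that concat (not the exact code in the repo; <code>load_hu</code> is a hypothetical helper that returns the HU array of one slice):</p>
<pre><code>import numpy as np
import cv2

def apply_window(hu, center=40, width=80):
    # clip HU values to the window and scale to [0, 1]
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def concat_user_slices(load_hu, slice_ids, t, size=512):
    # slice_ids: IDs of one SeriesInstanceUID, sorted by ImagePositionPatient2
    idx = [max(t - 1, 0), t, min(t + 1, len(slice_ids) - 1)]   # clamp at the series ends
    channels = []
    for i in idx:
        img = cv2.resize(load_hu(slice_ids[i]).astype(np.float32), (size, size))
        channels.append(apply_window(img))
    return np.stack(channels, axis=-1)   # (size, size, 3); the target is slice t's label
</code></pre>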
<p>I updated with my code.</p>
<ol>
<li>I didn't use studyinstance.</li>
<li>stage1 public LB 0.066->0.060. Ensemble with ImagePositionPatient2 model.</li>
<li>sry what is the meaning of this question?</li>
</ol>
<p>I didn't use validation data. I used whole data for training so I couldn't check cv score.</p>
|
RSNA Intracranial Hemorrhage Detection
|
4th Place Solution with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Ha, we use PowerPoint to draw all the flow charts. 😊 </p>
<p>Thanks!</p>
<p>Thanks!</p>
<p>PowerPoint😄 </p>
<p>Code: <a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution">https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution</a>
Our code is based on Appian's repo: <a href="https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage">https://github.com/appian42/kaggle-rsna-intracranial-hemorrhage</a></p>
<h1>Overview of the proposed method</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F29e2a5b18ea11e273e34e3fd1ffff119%2Foverview.png?generation=1574318153545078&alt=media" alt=""></p>
<p>Our solution includes two stages. We train 2D CNN models in stage 1 for feature extraction, and 1D + 3D CNN models in stage 2 for classification.</p>
<h2>Preprocess</h2>
<ol>
<li>Two window policies:
a) use Appian’s windowing policy
i. Three windows are: [40, 80], [80, 200], [40, 380][<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/1e9b6a5bb46d1d329f4af04e9066a3a0b7fa7769/IFE_1/src/cnn/dataset/custom_dataset.py#L68">link</a>]
b) Stack three consecutive slices to a 3-channel image. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/1e9b6a5bb46d1d329f4af04e9066a3a0b7fa7769/IFE_3/src/cnn/dataset/custom_dataset.py#L97">link</a>]
i. Window: [40, 80] </li>
<li>Remove corrupted images </li>
<li>Filter out blank images by
a) Obtain the difference between maximum and minimum intensity value of each image, i.e., the intensity range, after applying a custom windowing scheme (center = 40, window = 80)
b) Remove images with intensity range < 60 from both training and test sets.
c) The removed test images will be classified as negative during post-processing.</li>
<li>Extract useful meta data from dicom files
a) Patient ID
b) StudyInstance ID
c) SeriesInstance ID
d) Position2</li>
<li>Make patient-wise stratified five folds
a) Images from one patient always belong to the same fold
b) Class distributions are roughly the same across different folds</li>
</ol>
<h2>STAGE 1: 2D Image Feature Extraction</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F69ccd1b230d567e1c473413541545b27%2FfeatureExtraction.png?generation=1574240192793088&alt=media" alt=""></p>
<h3>1. Training strategy:</h3>
<p>a) Randomly split the training dataset into 5 folds and train the model five times. Use 4 folds as training set and 1 fold as validation set each time.</p>
<h3>2. Models</h3>
a) EfficientNet B0
<p>i. ImageNet pretrained
ii. Input image size: 512x512
iii. Augmentation: random crop, random hflip, random rotate, random contrast
iv. 5-fold training
v. TTA5: random crop, random hflip , random rotate, random contrast</p>
b) ResNext50 32x4d swsl
<p>i. Semi-Supervised and Semi-Weakly Supervised ImageNet Models <a href="https://github.com/facebookresearch/semi-supervised-ImageNet1K-models">https://github.com/facebookresearch/semi-supervised-ImageNet1K-models</a>
ii. Input image size: 448x448
iii. Augmentation: random crop, random hflip , random rotate, random contrast, pixel and window jittering
iv. 5-fold training
v. Cosine learning rate scheduler
vi. TTA5: random crop, random hflip , random rotate, random contrast</p>
<h3>Summary of stage 1 models:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F9e735ab0c676de4ca8c1eb94df51404e%2FIFLmodels.png?generation=1574238657232358&alt=media" alt=""></p>
<h2>STAGE1: Meta Data Feature Engineering</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2F22895a3229aa1374353d025496988e4e%2Fmetadata.png?generation=1574245207505107&alt=media" alt=""></p>
<h2>STAGE2: Slice Sequence Model</h2>
<p>In stage2, we train 1D CNN model and 1D+3D CNN models for classification.</p>
<h3>1D CNN model:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Ff3f240c485b348322e17a5081252863c%2F1Dconv.png?generation=1574239313394661&alt=media" alt=""></p>
<h3>1. Pipeline:</h3>
<p>a) Extract 1D feature and metadata from stage 1
b) Stack the features that belong to one CT series together.
c) Pass the stacked feature to customized fully convolutional neural networks, and generate the output. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/ed1c6f59b3077e3c8226671a5d9c38c2028aab5d/cls_2/src/cnn/models/model.py#L22">link</a>]</p>
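<p>A minimal sketch of what such a 1D fully convolutional classifier over stacked per-slice features can look like (the real model also consumes the engineered metadata features; the dimensions here are illustrative):</p>
<pre><code>import torch
import torch.nn as nn

class Series1DCNN(nn.Module):
    # per-slice feature vectors in, per-slice class logits out
    def __init__(self, feat_dim=1280, hidden=256, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Conv1d(hidden, n_classes, kernel_size=1),
        )

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) - stacked features of one CT series, in z order
        x = x.transpose(1, 2)                  # -> (batch, feat_dim, seq_len)
        return self.net(x).transpose(1, 2)     # -> (batch, seq_len, n_classes)

# usage: logits = Series1DCNN()(torch.randn(2, 36, 1280))
</code></pre>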
<h3>2. Augmentation:</h3>
<p>a) No data augmentation</p>
<h3>3. Training strategy</h3>
<p>a) Follow 2D CNN’s fold split</p>
<h3>1D+3D CNN model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fb5e6427293552f7563b11f71610203ea%2F1D3D.png?generation=1574239073870529&alt=media" alt=""></p>
<h3>1. Pipeline:</h3>
<p>a) Extract metatdata, 1D and 3D features from stage 1.
b) Stack the features that belong to one CT series together.
c) Pass the stacked feature to customized fully convolutional neuronal networks, and generate the final output. [<a href="https://github.com/XUXUSSS/kaggle_rsna2019_4th_solution/blob/ed1c6f59b3077e3c8226671a5d9c38c2028aab5d/cls_1/src/cnn/models/model.py#L141">link</a>]</p>
<h3>2. Augmentation:</h3>
<p>a) No data augmentation</p>
<h3>3. Training strategy</h3>
<p>a) Follow 2D CNN’s fold split</p>
<h3>Summary of Stage 2 models:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fa93f38cf0b81ef5ec96ad0239c37b410%2Fclsmodel.png?generation=1574246559021560&alt=media" alt=""></p>
<h2>Ensemble Predictions</h2>
<p>4 x 5 x 5 = 100 Predictions from
1. 4 Models
a) Cls_1a trained on Fold_Set_a,
b) Cls_1b trained on Fold_Set_b,
c) Cls_2 trained on Fold_Set_a,
d) Cls_3 trained on Fold_Set_c
2. 5 Folds per Fold Set
3. 5 TTA</p>
<h2>Post-processing</h2>
<ol>
<li>Assign the minimum value over all predictions to the blank test images</li>
<li>Clip the predicted value to the range of [1e-6, 1-1e-6]</li>
<li>Convert the predictions to the required submission format</li>
</ol>
<h2>Score Growth Chart</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2215248%2Fba114abed56147359750a1da8a20c75a%2Fscorechart.png?generation=1574245503408955&alt=media" alt=""></p>
<p>Acknowledgement: Our code is based on Appian’s repo. <a href="/appian">@appian</a> Thank you very much for your great and beautiful work!</p>
|
RSNA Intracranial Hemorrhage Detection
|
5th place solution (with code).
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all winners, </p>
<p>Congrats to my teammate <a href="/tarobxl">@tarobxl</a> achieving GM tier and <a href="/anjum48">@anjum48</a> for his master tier. </p>
<p>On behalf of the team, I would like to make the writeup. </p>
<h1>Image Preprocessing</h1>
<p>We have three types of preprocessed data. <br>
1. Imaging with multiple windows. <br>
We use three windows to construct an RGB image. Each channel corresponds to a window.
<code>
'brain': [40, 80],
'bone': [600, 2800],
'subdual': [75, 215]
</code> </p>
<ol>
<li><p>Imaging with multiple windows, then crop. <br>
Same as (1), but we crop and keep only the informative part.</p></li>
<li><p>Imaging with spatially adjacent slices. <br>
We use only one window [40, 80] for preprocessing. To construct RGB images, we use the metadata to find the spatially adjacent slices. Say we construct the RGB image of slice St, then we take: <br>
R = St-1, G = St, B = St+1. </p>
<p>Finally, we crop and keep only the informative parts, the same as in (2).
Please refer to this kernel for more detail:
<a href="https://www.kaggle.com/anjum48/preprocessing-adjacent-images-and-cropping">https://www.kaggle.com/anjum48/preprocessing-adjacent-images-and-cropping</a></p></li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1938879%2F4b1c04d72e5d1359e7d41a3a03bac540%2Fdata_preprocessing.png?generation=1573699900426447&alt=media" alt=""></p>
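<p>A minimal sketch of how the RGB constructions above can be implemented (assuming the HU arrays are already decoded from the dicoms; the crop step is omitted):</p>
<pre><code>import numpy as np

WINDOWS = {'brain': (40, 80), 'bone': (600, 2800), 'subdual': (75, 215)}   # (center, width)

def window_image(hu, center, width):
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)        # scaled to [0, 1]

def multi_window_rgb(hu):
    # type (1): one slice, three windows -> three channels
    return np.stack([window_image(hu, c, w) for c, w in WINDOWS.values()], axis=-1)

def adjacent_rgb(hu_prev, hu_cur, hu_next):
    # type (3): spatially adjacent slices, single [40, 80] window -> three channels
    return np.stack([window_image(s, 40, 80) for s in (hu_prev, hu_cur, hu_next)], axis=-1)
</code></pre>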
<h1>Data Preprocessing</h1>
<p>First, we remove the patients overlapping between train and test. This part may be the reason for the shakeup, since we estimate that the shakeup score is in a range of 0.001 - 0.002.</p>
<p>In each fold, we do random sampling such that the number of positive patients is balanced with the number of negative patients. This step helps keep CV and LB correlated and stable. </p>
<h1>Modeling</h1>
<p>We train 5 folds split by patient. The models and their performance on stage 2: </p>
<p>|No. | Model | Data | Before PP | After PP|
|---|---------- |:-------------:|-------------------------|---:|
|1. | Resnet18 | (1) | 0.060 | 0.054 |
|2. | Resnet34 | (1) | - | - |
|3. | Resnet50 | (1) | 0.058 | 0.052 |
|4. | Resnet50 | (3) | 0.054 | 0.051 |
|5. | Densenet169 | (2) | 0.055 | 0.049 |
|6. | InceptionV3 + Deepsupervision | (1) | 0.060 | 0.053 |
|7. | EfficientNet-B0 | (3) | 0.054 | 0.051 |
|8. | EfficientNet-B3 | (2) | 0.055 | 0.050 |
|9. | EfficientNet-B5 | (3) | 0.048 | 0.048 | </p>
<p>PP =Post-processing. </p>
<p>We have three pipelines, the following is mine which is used to train the model <code>No. 1,2,3,4,5</code>. </p>
<ul>
<li>Optimizer: AdamW </li>
<li>Image size: 512x512 </li>
<li><p>Stages: </p>
<ul><li><p>Warmup: Freeze the backbone, train the FC only. </p>
<ul><li>LR: 0.001</li>
<li>num_epochs: 3 </li></ul></li>
<li><p>Warmup: Unfreeze the backbone, train all the model. </p>
<ul><li>LR: 0.0001</li>
<li>num_epochs: 20 </li>
<li>scheduler: ReduceLROnPlateau, patience = 0. </li>
<li>EarlyStoppingCallback: patience = 3. </li></ul></li></ul></li>
<li><p>Augmentations:
<code>python
Resize(*image_size),
HorizontalFlip(),
OneOf([
ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
GridDistortion(),
OpticalDistortion(distort_limit=2, shift_limit=0.5),
], p=0.3),
ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=10),
</code> </p></li>
<li><p>TTA: Normal + HFlip. </p></li>
</ul>
<p>With this pipeline, training finishes at around 8-10 epochs. The deeper models (SEResNeXt50, ResNet101, etc.) do not work well. Training for more epochs (up to 25) leads to overfitting. </p>
<h1>Post-processing</h1>
<p>We leverage metadata and use H2O to build a model for post-processing.
More details will come up by <a href="/tarobxl">@tarobxl</a>. </p>
<h1>Stacking</h1>
<p>First, we do post-processing on each prediction of each model. <br>
Second, we use the stacking pipeline designed by magician <a href="/mathormad">@mathormad</a>.
Please upvote this topic:
<a href="https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57934">https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/discussion/57934</a> </p>
<p>Update: <br>
Stacking pipeline is shared at: <br>
<a href="https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline">https://www.kaggle.com/mathormad/5th-place-solution-stacking-pipeline</a> <br>
Dont hesitate to upvote it.</p>
<h1>Code</h1>
<p>My pipeline code is published at: <br>
<a href="https://github.com/ngxbac/Kaggle-RSNA">https://github.com/ngxbac/Kaggle-RSNA</a> <br>
The model checkpoints, graph, training processes are recorded by wandb.
<a href="https://app.wandb.ai/ngxbac/Kaggle-RSNA">https://app.wandb.ai/ngxbac/Kaggle-RSNA</a> </p>
<p><a href="/anjum48">@anjum48</a> 's pipeline:
<a href="https://github.com/Anjum48/rsna-ich">https://github.com/Anjum48/rsna-ich</a> </p>
<p><a href="/mathormad">@mathormad</a>'s pipeline to train InceptionV3 + Deepsupervision: <br>
<a href="https://github.com/triducnguyentang/RSNA">https://github.com/triducnguyentang/RSNA</a></p>
<p><a href="/tarobxl">@tarobxl</a> 's post-processing code:
<a href="https://github.com/tiendzung-le/Kaggle-RSNA-5th-place-Solution">https://github.com/tiendzung-le/Kaggle-RSNA-5th-place-Solution</a></p>
|
RSNA Intracranial Hemorrhage Detection
|
6th place solution End to End Sequence to Sequence with sliding window.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>5 models equally weighted ensemble (xception, inception, EfficientnetB0, Resnet50, ResInc)
best to worst (ResInc, Xception, EfficientnetB0, inception, Resnet50)</p>
<p>We might be able to match or beat our final private score with fewer models or changing the ensemble weighting, have not checked yet.</p>
<p>Wow! Thanks James for letting me know. How do we fix this? need the credit!!! just kidding. Honored Jeremy is fighting for the little guy.</p>
<p>thx!</p>
<p>Congrats to all participants and the winners, and myself, I must say, for becoming a Kaggle GRANDMASTER! Catching that elusive fifth Gold medal that I have been chasing for some time now.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F369212%2Ff3b3bd56f8e374c317bc1fadc08a0a72%2FRSNA.png?generation=1573741035979705&alt=media" alt=""></p>
<p>Down to business. We utilized an end to end sequence to sequence model. A sliding window approach was used to select a fixed window of “n” slices from the ct volume. The FIG above shows the architecture. The architecture made things nice and simple, training was end to end, no data shuffling and gymnastics. Prediction on the slices was done by the LSTM at each time step. This conveniently also enabled some nice test time augmentation (TTA) with the sliding window approach.</p>
<p>We used various backbones; the best was a Keras pretrained ResInc. It doesn't need much GPU memory: on a 16G V100 we can train various models with image size 512 x 512, an "n" of 10 and a batch size of 2. The same goes for the 16G P100; larger batch sizes work with smaller n. Note that there is just one CNN generating the embedding vectors; the unrolled LSTM makes it look like there are many, but it is a single shared CNN.</p>
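<p>For readers who want to see that weight sharing spelled out, here is a rough PyTorch sketch of the idea (our models were in Keras; the ResNet backbone and sizes below are only illustrative):</p>
<pre><code>import torch
import torch.nn as nn
import torchvision

class CNNLSTM(nn.Module):
    # one shared CNN encodes every slice in the sliding window; the LSTM predicts per slice
    def __init__(self, hidden=256, n_classes=6):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # keep the pooled embedding vector
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_slices, 3, H, W) - one sliding window from the CT volume
        b, n = x.shape[:2]
        emb = self.cnn(x.flatten(0, 1))           # (batch * n_slices, feat_dim), single CNN
        out, _ = self.lstm(emb.view(b, n, -1))
        return self.head(out)                     # (batch, n_slices, n_classes)
</code></pre>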
|
RSNA Intracranial Hemorrhage Detection
|
7th place outline
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>From this kernel <a href="https://www.kaggle.com/marcovasquez/basic-eda-data-visualization">https://www.kaggle.com/marcovasquez/basic-eda-data-visualization</a>
(0020, 0032) Image Position (Patient) DS: ['-125', '-46', '73.7000732']</p>
<p>'73.7000732' is the z position I meant</p>
<p>I make it short because there is almost no novelty in my solution.</p>
<p><strong>Overall Strategy:</strong>
1. Train an image-level CNN and save its GAP features to the hard drive.
2. Recover the original CT sequence by sorting on the z-position in the metadata, and input the saved GAP features to train a scan(study)-level RNN model.
This strategy is inspired from <a href="https://rd.springer.com/content/pdf/10.1007%2Fs00330-019-06163-2.pdf">https://rd.springer.com/content/pdf/10.1007%2Fs00330-019-06163-2.pdf</a> </p>
<p><strong>Preprocessing for CNN:</strong>
I used Appian's windowing. Spent some efforts to tweak it but results are all similar.</p>
<p><strong>Augmentation for CNN:</strong>
Heavy augmentation including crop and resize back, affine (360 degree rotation), contrast and brightness, gamma correction, blurring and sharpening, mirroring, optical distortion, grid distortion, elastic transform ...</p>
<p><strong>CNN models:</strong>
efficientnet_b5
efficientnet_b6
inception_resnet_v2
inception_v4
senet154
seresnext50
seresnext101
Seven models in total, each trained on a different 80-20 training-validation split. The input resolution varied between 384x384 and 512x512 depending on the size of the model.</p>
<p><strong>RNN models:</strong>
Two bidirectional GRU layers. Length of sequence fixed to 72. Padding and loss masking used.</p>
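<p>A rough sketch of the scan-level part (column names and feature sizes are placeholders, not my exact code): sort the saved GAP features by the z position, pad to the fixed length of 72 with a mask, and feed them to the GRU.</p>
<pre><code>import numpy as np
import pandas as pd
import torch.nn as nn

SEQ_LEN = 72

def make_study_sequences(meta, features):
    # meta: DataFrame with hypothetical columns [sop_uid, study_uid, z_pos]
    # features: dict sop_uid -> GAP feature vector saved from one of the CNNs
    xs, masks = [], []
    for _, g in meta.groupby("study_uid"):
        g = g.sort_values("z_pos")                      # recover the original CT sequence
        feats = np.stack([features[s] for s in g["sop_uid"]])[:SEQ_LEN]
        x = np.zeros((SEQ_LEN, feats.shape[1]), dtype=np.float32)
        m = np.zeros(SEQ_LEN, dtype=np.float32)
        x[:len(feats)] = feats                          # zero padding up to length 72
        m[:len(feats)] = 1.0                            # mask so padded steps are ignored in the loss
        xs.append(x)
        masks.append(m)
    return np.stack(xs), np.stack(masks)

# two bidirectional GRU layers over the padded sequences, with a per-slice head
gru = nn.GRU(input_size=2048, hidden_size=256, num_layers=2,
             bidirectional=True, batch_first=True)
head = nn.Linear(512, 6)                                # per-slice logits for the 6 targets
</code></pre>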
|
RSNA Intracranial Hemorrhage Detection
|
8th place solution + code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I'd like to thank the whole team for the great collaboration: @meanshift, @tgilewicz, @nordberdt, @dmytropoplavskiy, it was a pleasure working with you on this competition. Following is a summary of our solution and experiments.</p>
<p>Code: <a href="https://github.com/ambrzeski/kaggle-rsna-2019">https://github.com/ambrzeski/kaggle-rsna-2019</a></p>
<h1>Preprocessing</h1>
<h2>Gantry tilt correction</h2>
<p>In order to reduce the impact of radiation on patient’s eyes during head CT, sometimes scans are performed with a tilted gantry. Such studies have slices not aligned properly in the y axis, which causes the distortion in the 3D volume, visible on the sagittal view. To correct the distortion, slices must be shifted in the y axis with a shear transform. The angle of the shear can be determined from “Image Orientation (Patient)” DICOM field.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F4d53a9260f20571c9869308cff6ff522%2Ftmp.png?generation=1574347411118888&alt=media" alt="">
<em>Saggital view before and after gantry tilt correction.</em></p>
<p>We observed that gantry tilt is quite common in this dataset, occurring in more than 50% of studies in the train set and 90% in stage1 test set. We haven’t measured the impact of gantry tilt correction on models’ performance. Supposedly it doesn’t matter too much when inputting slices as channels to 2D model, but when using wider 3D context it could make a difference, especially considering the fact that the angle of the shear varies among studies.</p>
<h2>Windowing</h2>
<p>First, HU values were clipped to [-400, 1000] range. Then, instead of using fixed size windows, we apply a non-linear transform, which stretches out most valuable HU ranges and compresses less interesting ranges. The exact shape of the transform function is manually designed by us. We’ve tried modeling this function as a cumulative distribution function of histogram of pixel values in the dataset, but setting the values by hand allowed us to enhance specific ranges containing hemorrhages, and consequently worked slightly better.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F73aee48d3ffbd4e946fddb9ffed245f6%2Fcdf.png?generation=1574347839604990&alt=media" alt="">
<em>Mapping from HU values to normalized values</em></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2Fbfd8f13a2fad296ce7b15bc421a23b0a%2Ftmp2.png?generation=1574347948527157&alt=media" alt="">
<em>From left to right: image clipped to [-400, 1000] range, brain window (L:40, W:80), non-linear transform</em></p>
<p>We’ve also tried fixed and learnable windows, but our non-linear transform gave the best results, while also being the most convenient to use.</p>
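<p>As a rough sketch, such a transform can be implemented as a monotone piecewise-linear mapping; note that the control points below are only placeholders, the actual curve was hand-tuned and is not reproduced here:</p>
<pre><code>import numpy as np

# (HU value, normalized value) control points - illustrative only, not the tuned curve
HU_POINTS   = [-400, 0, 40, 80, 200, 1000]
NORM_POINTS = [0.0, 0.15, 0.45, 0.75, 0.9, 1.0]

def nonlinear_window(hu):
    hu = np.clip(hu, -400, 1000)                   # clip to the range used above
    return np.interp(hu, HU_POINTS, NORM_POINTS)   # stretch interesting HU ranges, compress the rest
</code></pre>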
<h1>Models</h1>
<p>All of our models are various variants of 2D CNNs with consequent slices being fed to the network as image channels, predicting classes for the middle slice only. Models were trained on 5 folds with no patient overlap between folds. As an optimizer, we used RAdam. For data augmentation we used standard spatial transformations. Any transforms messing with pixel values resulted in a decrease in performance.</p>
<p>For some of the models, slices (or groups of slices) were forwarded separately through the network backbone and their feature maps concatenated and combined by a convolutional layer. As such an approach for 5 slices would increase training time 5x, for the first 4 epochs the model was trained on the current slice only, with the weights of the combining 3D convolution related to the other slices set to 0. For two more epochs, the model was trained on all 5 input slices. Combining model outputs improved results more (around 0.006-0.01) than feeding multiple inputs to a single model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F408f11eb7f3aed5077c61e42b9352bde%2Fd1.png?generation=1574348143538768&alt=media" alt=""></p>
<p>Summary of all the models in the ensemble is presented below.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2Fec45c41cb7847d4f71d7291a026948fd%2FScreenshot_2019-11-21_16-06-37.png?generation=1574348844202508&alt=media" alt="">
*in stage2 we had a bug in validation, which may cause the results to be slightly inaccurate</p>
<p>As many have noticed, scores on the final leaderboard are significantly lower than stage 1 leaderboard. We also observed a large divergence between model ranking on local cross-validation and private leaderboard. Specifically, simpler models, like resnet18, seem to perform better on private leaderboard, compared to more complex ones.</p>
<h1>Segmentation masks</h1>
<p>We hand-labeled 196 studies with segmentation masks. For the labeling we picked studies with the highest log loss on out-of-fold predictions. We’ve tried multi-task learning, using pre-computed masks as attention masks or as additional features for a classifier, but all of these approaches failed to provide any significant boost to the log loss metric. Anyway, we included one of the models trained with masks in the final ensemble, hoping to at least increase the variety within the ensemble. The segmentation-based model performed slightly better on other metrics like F1.</p>
<p>The model has been trained to do both classification and segmentation (for labeled samples), with segmentation samples oversampled for initial few epoch and switched to the original samples distribution after.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F868944%2F3e2561e87c2d3b3c4f9b9ccdc6408b0f%2Fd2.png?generation=1574349217941420&alt=media" alt=""></p>
<h1>TTA, ensembling and second-level model</h1>
<p>For each model we perform 5-fold ensembling and test-time augmentations. Then, final predictions are obtained using a L2 linear model, which takes as an input 1st level predictions for 5 consecutive slices from each model and returns final predictions for a middle slice. The L2 model gives a very slight improvement (0.0002 difference from simple averaging on LB).</p>
<h1>What didn’t work</h1>
<ul>
<li>We’ve tried transfer learning with 3D CNNs from <a href="https://github.com/Tencent/MedicalNet">https://github.com/Tencent/MedicalNet</a>, but trainings were taking too much time and we had to abandon this idea, but it still might be worth trying given more time</li>
<li>Experiments with segmentation masks</li>
<li>Class balancing</li>
<li>Multiple trainable windows</li>
</ul>
|
RSNA Intracranial Hemorrhage Detection
|
A note from a 46th place finisher (The Purpose)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This post is <strong>non-technical</strong>.
This post is <strong>in memoriam</strong> of a friend (Π.Δ) I lost 2 summers ago to an intracranial aneurysm at the age of 28.
This is a reminder of why we build cutting-edge technology, why we invest so much time here on Kaggle or in our day jobs, to create great algorithms, to learn more and more, to improve our skills. All these hours spent have to have a <strong>purpose</strong> to stand the test of time. And the greatest of them is to <strong>serve human lives</strong>. Doctors are fighting for it using medical knowledge and experience, radiologists do as well, <strong>we fight with keyboards</strong> and <strong>ML</strong> :) !</p>
<p>Thanks for reading my post,
neongen</p>
|
RSNA Intracranial Hemorrhage Detection
|
A surprise Gold, GM, and the real 12th place solution*
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First I want to thank my teammate <a href="/zaharch">@zaharch</a> for a great and successful teamwork (this is our 2nd consecutive gold). Until yesterday we had the top Silver medal - 13th place, and today, for some mysterious reason, we got promoted to 12th and Gold. For me this Gold also means GM (5 golds in the last 5 competitions).
^ real because <a href="/appian">@appian</a> was promoted to 11th</p>
<p>So let's go over our solution.</p>
<p>As most/all the top solutions we also used a two stage solution:
1. Base model for feature extraction per image
2. Shallow model - combining all the output features from a full head scan to predict per image.
The 2nd stage also included some post - processing and ensembling.</p>
<h2>Base Model:</h2>
<p>As base model we used a few different models:
* Densenet 169, 161, 201
* SE-ResNet101
* SE - ResNeXt101_32x4d
For all models we used 3 folds, for the SE models we also had 5 folds.
The SE models with 5 folds gave the best results.
The models where trained for ~4 epochs using the usual augmentations: rotation, flip, zoom, position shift, pixel intensity shift.</p>
<h3>WSO</h3>
<p>As many of the other teams do in their base solutions, we also used 3 windows to handle the large dynamic range of the CT pixels values, but instead of using fixed windows we let the network find the best windows, as described in <a href="https://arxiv.org/pdf/1812.00572.pdf">Practical Window Setting Optimization for Medical Image Deep Learning</a>.
The implementation is quite straightforward: <br>
add 3 layers in front of the model:
<code>
Conv2d(1, 3, kernel_size=(1, 1))
Sigmoid()
InstanceNorm2d(3)
</code>
The convolution layer was initialized with the soft-tissue, blood and bone window values. In the end this layer converged to values close to the usual window values.</p>
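<p>For illustration, here is a rough sketch of how such a WSO block can be initialized from conventional (center, width) window pairs; the linear initialization below is only an approximation, not necessarily the exact scheme from the paper or our code, and the default window values are illustrative:</p>
<pre><code>import torch
import torch.nn as nn

class WSO(nn.Module):
    # learnable windowing: 1x1 conv -> sigmoid -> instance norm, as described above
    def __init__(self, windows=((40, 380), (40, 80), (600, 2800))):   # (center, width), illustrative
        super().__init__()
        self.conv = nn.Conv2d(1, len(windows), kernel_size=1)
        self.act = nn.Sigmoid()
        self.norm = nn.InstanceNorm2d(len(windows))
        with torch.no_grad():
            for i, (center, width) in enumerate(windows):
                # rough init: place the window roughly in the linear region of the sigmoid
                self.conv.weight[i] = 1.0 / width
                self.conv.bias[i] = 0.5 - center / width
        # the layer stays trainable, so the network can refine the windows during training

    def forward(self, hu):
        # hu: (batch, 1, H, W) raw Hounsfield units
        return self.norm(self.act(self.conv(hu)))
</code></pre>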
<h3>Feature pooling</h3>
<p>Most of the features at the last layer were zero - we used 8 times pooling to decrease the number of features to ~ 250-300</p>
<h3>TTA</h3>
<p>We created 4 sets of features from augmented images for each train image and 8 sets for each test image.</p>
<h2>Shallow Network</h2>
<p>We used two different shallow networks. (I will describe one here and <a href="/zaharch">@zaharch</a> will describe the 2nd later)</p>
<p>One network was a FCN.</p>
<h3>Input</h3>
<p>Features from all the images of one full head scan, ordered by the Z position.</p>
<h3>Layer</h3>
<ol>
<li>9 * Num_features 2D convolution - the output is batch_size * num_images * num _channels * 1 * 6</li>
<li>Squeeze</li>
<li>1D convolution layer of size 7</li>
<li>1D convolution layer of size 5</li>
<li>1D convolution layer of size 3</li>
</ol>
<p>With batch norms and ReLUs in between.</p>
<p>We trained the shallow network with the TTAed features from the base model and for prediction we used the test features TTA.</p>
<h2>Post Processing and ensembling:</h2>
<p><a href="/zaharch">@zaharch</a> will and as a comment</p>
<h2>Results:</h2>
<p>The base models gave 0.68 - 0.66 on LB (first stage) after fold averaging and with TTA averaging. ** We started using the better models after we already had the shallow network, hence we didn't really submit a full 5-fold average of the base model, so the numbers are derived from CV.
The best single 5-fold full model (base + shallow), with TTA and fold averaging, was the SE-ResNet101, which gave LB 0.6 (first stage). </p>
<p>One drawback we had - we didn't gain much by ensembling many models, maybe we should have used one model and run it more with different seeds. </p>
<p>And as a last word, I want to thank the organizers and moderators <a href="/juliaelliott">@juliaelliott</a> <a href="/philculliton">@philculliton</a> <a href="/lechuck0">@lechuck0</a> for a great competition and for being flexible and changing the rules to let us use the metadata which helped all the top teams get really good and interesting solutions. </p>
<h3>Code</h3>
<p><a href="https://github.com/nosound2/RSNA-Hemorrhage">The full code can be found here</a></p>
<h3>More information</h3>
<p>More information about our models can also be found in the following files</p>
<p><a href="https://docs.google.com/document/d/1YFwbnmh5QDF77th01eSEqscvb4rWTKyvL0XWSWs-sMg/edit?usp=sharing">Documentation</a></p>
<p><a href="https://drive.google.com/file/d/1Kz_3mkA9volBKNau_u_jzE2YQhkPTIXp/view?usp=sharing">Presentation</a></p>
<p><a href="https://drive.google.com/file/d/1yX6WC9GysdekivPzowU685AeAuE_LHXI/view?usp=sharing">Video</a></p>
<p><a href="/zaharch">@zaharch</a> surly can!</p>
|
RSNA Intracranial Hemorrhage Detection
|
Tricks to boost from 0.66 to 0.49
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Intracranial Hemorrhage Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, 阿水老师.</p>
<p>trick1: If you sort by [patient ID, 'imageposition2'] you will find the labels are continuous.
trick2: patient IDs have an overlap in stage 1 (does not work at stage 2)</p>
<p>You can take this into feature engineering.
For example, patient ID with label encoding, groupby aggregations and count encoding.
You can extract some time-series features like lag, diff, next/last, etc.</p>
<p>All you need is LightGBM/XGBoost/CatBoost, or some postprocessing.
I performed stacking with these features and it works well. </p>
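<p>A rough sketch of what this could look like (the column names are placeholders, not my actual pipeline):</p>
<pre><code>import pandas as pd
import lightgbm as lgb

TARGETS = ["any", "epidural", "intraparenchymal",
           "intraventricular", "subarachnoid", "subdural"]

def add_sequence_features(df):
    # df: one row per slice, with hypothetical columns
    #     [patient_id, image_position2, pred_any, ..., pred_subdural]
    df = df.sort_values(["patient_id", "image_position2"]).copy()
    g = df.groupby("patient_id")
    for t in TARGETS:
        col = "pred_" + t
        df[col + "_lag1"] = g[col].shift(1)                 # previous slice prediction
        df[col + "_lead1"] = g[col].shift(-1)               # next slice prediction
        df[col + "_diff"] = df[col] - df[col + "_lag1"]     # change along the scan
        df[col + "_mean"] = g[col].transform("mean")        # patient-level aggregation
    return df

def fit_stackers(train_df, feature_cols):
    # stacking: one LightGBM model per target, trained on out-of-fold predictions
    models = {}
    for t in TARGETS:
        models[t] = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
        models[t].fit(train_df[feature_cols], train_df["label_" + t])
    return models
</code></pre>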
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F983147%2Fd33b7efc54601e16ab18433c245b8c11%2Fleak.png?generation=1573702109414798&alt=media" alt=""></p>
<p>PS: I think seutao's sequence model is the best solution. All roads lead to Rome.</p>
|
RSNA Pneumonia Detection Challenge
|
11th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A little late, but we've made our 11th place solution available in the following repo:</p>
<p><a href="https://github.com/erniechiew/kaggle_rsna_pneumonia_dancingbears">https://github.com/erniechiew/kaggle_rsna_pneumonia_dancingbears</a></p>
<p>Enjoy!</p>
|
RSNA Pneumonia Detection Challenge
|
14th place solution [6th if resized the boxes]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'll consider releasing my code if more people are interested.</p>
<p>First, I would like to thank the organizers/annotators who make this data and challenge available. It's gonna be a valuable resource to the medical imaging community. I learned a lot by going through the discussion forum, shared kernels, and most importantly through top winners' solutions.
<strong>Congratulations to you all!!</strong></p>
<p>Here I would like to share an overview of my solution (1 segmentation net and 4 detection nets) and some general thoughts. Hopefully, it can be useful to someone. All my models were trained on a machine with a single 1060 GPU.</p>
<p>Stage 1 LB score: 0.220 </p>
<p>Stage 2 LB score: 0.222</p>
<hr>
<h1>Segmentation:</h1>
<p>The first part of my solution is a semantic segmentation network. It follows a UNet-like architecture and trained with dice loss and focal loss from scratch. The training set includes all the negative and positive samples. It's basically a patch-based classifier that gives each pixel a probability value of being pneumonia. I thresholded the raw output with a value of 0.1 cause this gives the highest sensitivity. </p>
<h2>Details:</h2>
<ul>
<li><p>image size: 256x256</p></li>
<li><p>preprocessing: contrast limited adaptive histogram equalization</p></li>
<li><p>training time: ~20 hours</p></li>
<li><p>train/val: ~24000/~1200 (stratified)</p></li>
<li><p>post-processing: erosion with a disk of radius 5 (output is too large by visual inspection)</p></li>
<li><p>optimizer and scheduler: Adam/step scheduler with gamma=0.1 and step_size=20</p></li>
<li><p>batch size: 8</p></li>
<li><p>epochs: 26 </p></li>
<li><p>failures: PSPNet with different backbones; reweight pixels inside each mask according to the intensity</p></li>
</ul>
<p>I think I've spent roughly 3 weeks trying to make it work, but a single segmentation network alone didn't take me very far. The best score I got in stage 1 was 0.12 using a threshold of 0.9 (local validation score almost 0.3). At roughly the same time, people started to get better results by using Maskrcnn and yoloV3. Using thresholding on segmentation maps to create submission just doesn't seem to be as accurate as predicting bounding boxes directly given the evaluation metric of the task. So I switched to detection methods.</p>
<hr>
<h1>Detection:</h1>
<p>I ended up with 4 detection models: one from Mask R-CNN, two from Faster R-CNN and one from RetinaNet. The first two architectures are two-stage methods and the last one is a one-stage method. I picked them hoping each one would predict from a different perspective. I tried to play around with their hyper-parameters but with no success (could be because of my little experience with detection models), so I used all the default hyper-parameters to create my submission.</p>
<h2>Details:</h2>
<p>Maskrcnn (keras): take directly from Henrique Mendonça's kernel (thanks Henrique!)</p>
<p>fasterrcnn/retinanet (pytorch): trained with code from <a href="https://github.com/open-mmlab/mmdetection">https://github.com/open-mmlab/mmdetection</a></p>
<ul>
<li><p>image size: 512x512 </p></li>
<li><p>preprocessing: none</p></li>
<li><p>post-processing: NMS</p></li>
<li><p>training time: ~5 hours</p></li>
<li><p>epochs: 10 for fasterrcnn 4 for retinanet</p></li>
<li><p>batch size: 2</p></li>
<li><p>train/val:~5000/~500 (only positive samples, for retinanet as well; each one was trained on a different split)</p></li>
<li><p>backbone: resnet50</p></li>
</ul>
<p>My largest performance boost came from combining my segmentation result with the Mask R-CNN result. Simply trying the intersection gave me a score of ~0.19. The performance gain mainly results from the fact that a lot of false positives from the detection network get filtered out. It makes a lot of sense because the detection network never saw the large set of negative images. Later on, I just incorporated more detection models as mentioned above and finally got a score of 0.22 in stage 1.</p>
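<p>A rough sketch of that intersection idea (the overlap threshold below is a placeholder, not my exact value): a detection box is kept only if the segmentation network also fires inside it.</p>
<pre><code>import numpy as np

def filter_boxes_with_segmentation(boxes, scores, seg_prob,
                                   mask_thresh=0.1, min_overlap=0.25):
    # boxes: (N, 4) as [x1, y1, x2, y2] on the same grid as seg_prob (H, W)
    seg_mask = seg_prob > mask_thresh
    kept_boxes, kept_scores = [], []
    for (x1, y1, x2, y2), s in zip(boxes.astype(int), scores):
        area = max((x2 - x1) * (y2 - y1), 1)
        covered = seg_mask[y1:y2, x1:x2].sum() / area   # fraction of the box covered by the mask
        if covered >= min_overlap:
            kept_boxes.append([x1, y1, x2, y2])
            kept_scores.append(s)
    return np.array(kept_boxes), np.array(kept_scores)
</code></pre>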
<hr>
<h1>Thoughts</h1>
<p>Generally speaking, you are going to have to find ways to use all the images (both positive and negative samples) to train your network. I also believe using the NIH dataset would be beneficial in some way, but I couldn't afford to do any experiments. Below is my rating of the importance of various aspects to the success of this challenge.</p>
<ul>
<li><p><strong>ensemble</strong> ****: In cases where there's too much variability in the dataset, the ensemble is the right way to go.</p></li>
<li><p><strong>post-processing</strong> ****: Both Ian and Dmytro have mentioned resizing the box. I didn't try resizing in my submission though. If I had done it (shrinking box by a factor of 0.875), my stage2 score would have been 0.236 according to late submission.</p></li>
<li><p><strong>validation</strong> ****: Many people have lost their rank due to over-fitting to the stage 1 LB. So don't be too obsessed with the public score.</p></li>
<li><p><strong>model adoption</strong> ***: Detection models seem to be better than segmentation models in this challenge. Among detection models, retinanet is the most elegant one due to its simple pipeline and its ability to handle both positive and negative samples. I don't think there's a big difference between the other 2-stage detection models.</p></li>
<li><p><strong>backbone</strong> ***: se-resnext101 seems to be the best backbone so far on this task according to the other teams' solution share.</p></li>
<li><p><strong>augmentation</strong> ** : I only tried horizontal flip, small scale and rotation transform. The other fancier augmentation operations do not seem to help much.</p></li>
<li><p><strong>image size</strong> **: 512x512 is slightly better than 256x256 in detection but worse in segmentation. </p></li>
<li><p><strong>hyper-parameter tuning</strong> **: used default setting for my detection models</p></li>
<li><p><strong>preprocessing</strong> *: </p>
<p>If I were to do this challenge again, I would start with retinanet as Dymtro did.</p></li>
</ul>
<h1>Acknlowledgements</h1>
<p><a href="https://www.kaggle.com/chenyc15/mean-average-precision-metric">Yicheng Chen's metric kernel</a></p>
<p><a href="https://www.kaggle.com/hmendonca/mask-rcnn-and-coco-transfer-learning-lb-0-155">Henrique Mendonça's kernel</a></p>
<p><a href="https://github.com/ahrnbom/ensemble-objdet%29">detection ensemble</a> mentioned by Ian in a <a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/68739#latest-411691">post</a> </p>
|
RSNA Pneumonia Detection Challenge
|
18th solution: SENet-DeepLabV3+
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@Hader, thank you for pointing that out. I fixed the typo.</p>
<p>In my case, my model tends to predict bigger bounding boxes than the true ones (you can see it in the figure above). Maybe it is because I used class-weighted cross-entropy (background:opacity = 1:2). BTW, your resize-to-87.5% approach gave me a score of 0.234. What a clever solution! </p>
<p>No, the score 0.234 is a result of late submission.</p>
<p>Here I would like to share my solution.</p>
<p>My score is Stage 1 Public LB 0.211 (42nd)/Stage 2 Private LB 0.218 (18th)</p>
<h3>Models</h3>
<p>I tackle this competition with semantic segmentation approach because, </p>
<ul>
<li>There are a few objects and each object is well separated, so splitting objects from segmentation mask must be easy.</li>
<li>Shapes of opacity areas are ambiguous, so rough masks generated from bounding boxes are not so unnatural.</li>
<li>Training a segmentation model is easier than an object detection model, I believe.</li>
<li>A segmentation model suits for ensemble compared to an object detection model.</li>
</ul>
<p>I choose DeepLabV3+ model, the state of the art for semantic segmentation. I take the DeepLabV3 implementation from <br>
<a href="https://github.com/jfzhang95/pytorch-deeplab-xception">https://github.com/jfzhang95/pytorch-deeplab-xception</a> <br>
I change the model head from Xception to SENet or SE-ResNext101, because Xception head is not trained stably somehow. I take the SENet and SE-ResNext101 implementation from <br>
<a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a> </p>
<p>I train 3 models.</p>
<ol>
<li>SE-ResNext101-DeepLabV3+ (resized input)</li>
<li>SE-ResNext101-DeepLabV3+ (cropped input)</li>
<li>SENet-DeepLabV3+ (resized input)</li>
</ol>
<p>For resized-input models, input images are resized into 448x448. For cropped-input models, input images are cropped into 448x448 in training and original size image is used in prediction. Using 2 input patterns, I hope global and local features are learned.</p>
<h3>Preprocessing</h3>
<p>I use ellipses inscribed in the bounding boxes as true masks. Rounded masks are more natural than rectangles and work well with rotation augmentation.</p>
<h3>Training</h3>
<ul>
<li>5 fold CV</li>
<li>Adam optimizer</li>
<li>batch size: 8</li>
</ul>
<p>Learning rate is scheduled from 1e-3 to 1e-6 by cosine annealing with 3 cycles, 16 epochs per 1 cycle.
In each cycle, the training condition is modified like below,</p>
<ul>
<li>Cycle 1: train only with opacity images.</li>
<li>Cycle 2: train with opacity and no-opacity images with appearance rate 1:1.</li>
<li>Cycle 3: train with opacity and no-opacity images with appearance rate 1:1 and many augmentations.</li>
</ul>
<p>I use cross-entropy loss with class weights; background:opacity = 1:2.
I try focal loss and Lovász loss but they don't work,
maybe because the mask shapes are rough, so losses that pay more attention to mask edges are not preferable.</p>
<h3>Augmentations</h3>
<p>I use <a href="https://github.com/albu/albumentations">Albumentations</a> for augmentation.</p>
<ul>
<li>Cycle1 and 2 with cropped input: random cropping, value shifting, and horizontal flip</li>
<li>Cycle3 with cropped input: +
shifting, scaling, rotation, CLAHE, contrast, brightness, gamma, Gaussian noise, and CutOut</li>
<li>Cycle1 and 2 with resized input: shifting, scaling, rotation, value shifting, and horizontal flip</li>
<li>Cycle3 with resized input: +
CLAHE, contrast, brightness, gamma, Gaussian noise, and CutOut</li>
</ul>
<h3>TTA and ensemble</h3>
<ul>
<li>3models</li>
<li>5 fold CV</li>
<li>horizontal flip</li>
<li>Cycle 2 weight, Cycle 3 weight and Cycle 3 weight with CLAHE input</li>
</ul>
<p>In total, 3x5x2x3=90 predictions are averaged.</p>
<h3>Postprocessing</h3>
<p>I generate the predicted mask by thresholding the model output,
and bounding boxes are generated from this predicted mask.
The peak value of the model output is used as a confidence score.
A bounding box with a confidence score under a threshold is removed.
The preferable confidence thresholds are very different between local CV and stage 1 public LB.
As described <a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723">here</a>,
most of the stage 1 train labels are verified by 1 doctor, while the test data of stages 1 and 2 are verified by three doctors including a certificated radiologist.
It looks like many images regarded as no opacity under the train label criteria are regarded as opacity under the test label criteria.
For the train data, the best mask threshold = 0.50 and the best confidence threshold = 0.77.
For the stage 1 test, the best mask threshold = 0.51 and the best confidence threshold = 0.62.
During stage 1, I searched for the best threshold based on the public LB score
and selected mask threshold = 0.50 and confidence threshold = 0.60.</p>
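<p>A minimal sketch of this postprocessing (connected components via scipy; the thresholds are the ones selected above):</p>
<pre><code>import numpy as np
from scipy import ndimage

def mask_to_boxes(prob, mask_thresh=0.50, conf_thresh=0.60):
    # prob: (H, W) segmentation output in [0, 1]
    labeled, n = ndimage.label(prob > mask_thresh)       # connected components of the binary mask
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.where(labeled == i)
        conf = prob[ys, xs].max()                        # peak value used as confidence score
        if conf >= conf_thresh:                          # boxes under the threshold are removed
            x, y = xs.min(), ys.min()
            w, h = xs.max() - x + 1, ys.max() - y + 1
            boxes.append((float(conf), int(x), int(y), int(w), int(h)))
    return boxes                                         # submission format: confidence x y width height
</code></pre>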
<p>Here, I show some results.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/415221/10600/sample.png" alt="result"></p>
<h3>Things do not work</h3>
<p>I try to use classification models to improve the segmentation prediction.
For classification, the 3-class labels and the NIH Chest X-ray 14 data can be used, and it looks promising.
I use an SE-ResNeXt model.
As a modification, I try mean teacher semi-supervised learning using NIH Chest X-ray 14 images.
I also try multi-task training using the 3-class labels and the 14-class labels of the NIH Chest X-ray 14 dataset.
The AUC of each model is as below.</p>
<p>model / AUC <br>
SE-ResNeXt baseline / 0.853 <br>
SE-ResNeXt mean teacher / 0.897 <br>
SE-ResNeXt multi task / 0.880 </p>
<p>I try to use the classification predictions for selecting bounding boxes but it decreases the score.
The peak value of the segmentation prediction can classify images with about AUC 0.89,
so maybe there is no room for improvement from classification models.</p>
<p>I try to apply <a href="https://arxiv.org/abs/1703.01780">mean teacher</a> to the segmentation model.
It may be promising, as described in <a href="https://arxiv.org/abs/1807.04657">this paper</a>.
But mean teacher makes the model worse. In this competition task, prediction quality is low, so self-teaching may not work well.</p>
<p>I try to classify predicted bounding box is hit or not by LGBM as like <a href="https://www.kaggle.com/c/data-science-bowl-2018/discussion/54741">the DSB 2018 1st solution</a>.</p>
<h3>Score History</h3>
<ul>
<li>Xception-DeepLabV3+ (crop input) -> LB 0.077 </li>
<li>remove low bounding box with low confidence score -> LB 0.146 </li>
<li>change mask shape from rectangle to ellipse -> LB 0.163 </li>
<li>SE-ResNeXt-DeepLabV3+ (crop input) -> LB 0.186 </li>
<li>add SE-ResNeXt-DeepLabV3+ (resize input) -> LB 0.195 </li>
<li>cycle 2 training -> LB 0.207 </li>
<li>add SE-ResNeXt-DeepLabV3+ (resize input) and flip TTA -> LB 0.215 </li>
<li>bug fix (lol) -> LB 0.219 </li>
<li>add cycle 3 training and CLAHE TTA (final model) -> LB 0.221 -> stage 2 LB 0.219 </li>
<li>resized 87.5% (following 1st solution, late submission) -> stage 2 LB 0.234 (Wow!) </li>
</ul>
|
RSNA Pneumonia Detection Challenge
|
20th partial solution : MASKRCNN
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>sorry typo</p>
<p>LEARNING_RATE = 0.005</p>
<p>No, we just ensemble the maskrcnn csv file with the retinanet csv file.</p>
<p>When the items of the retinanet csv file are empty (nothing detected),
we remove the corresponding items (bounding boxes) from the maskrcnn csv file.</p>
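<p>Roughly like this (a sketch of the idea, assuming both files follow the standard patientId / PredictionString submission format; empty rows are read as NaN):</p>
<pre><code>import pandas as pd

def filter_by_retinanet(maskrcnn_csv, retinanet_csv, out_csv):
    mrcnn = pd.read_csv(maskrcnn_csv)
    retina = pd.read_csv(retinanet_csv)
    # patients where RetinaNet detects nothing
    empty_ids = set(retina.loc[retina["PredictionString"].isna(), "patientId"])
    # drop the corresponding Mask R-CNN boxes for those patients
    mrcnn.loc[mrcnn["patientId"].isin(empty_ids), "PredictionString"] = ""
    mrcnn.to_csv(out_csv, index=False)
</code></pre>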
<p>Keep waiting for other winners to share.....</p>
<p>I would like to share some works about MASKRCNN.</p>
<p>The based model from
<a href="https://github.com/matterport/Mask_RCNN">https://github.com/matterport/Mask_RCNN</a></p>
<p>Since I only have about one week to finetune the network before the due date,
the parameters may not be optimized.</p>
<p>The classifier of MASKRCNN performs badly , so I use <strong>retinanet model</strong> as classifier</p>
<p>(Thanks for my teammate <a href="https://www.kaggle.com/andrewwang7">andrewwang</a> )</p>
<p>Final result : Stage1 score 0.236 / Stage2 score 0.217</p>
<p><strong>Hyperparameter Setting</strong></p>
<pre><code>class DetectorConfig(Config):
"""Configuration for training pneumonia detection on the RSNA pneumonia dataset.
Overrides values in the base Config class.
"""
# Give the configuration a recognizable name
NAME = 'pneumonia'
# Train on 1 GPU and 8 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 8 #256:8
BACKBONE = 'resnet101'
NUM_CLASSES = 2 # background + 1 pneumonia classes
IMAGE_MIN_DIM = 256
IMAGE_MAX_DIM = 256
RPN_ANCHOR_SCALES = (64, 128,192)
RPN_NMS_THRESHOLD = 0.9
TRAIN_ROIS_PER_IMAGE = 16
MAX_GT_INSTANCES = 3
DETECTION_MAX_INSTANCES = 2 #ytt
DETECTION_MIN_CONFIDENCE = 0.7
DETECTION_NMS_THRESHOLD = 0.3
STEPS_PER_EPOCH = 500
TRAIN_BN =True
</code></pre>
<p>Set NMS threshold 0.9 boosts the lb score.</p>
<p><strong>Augmentation</strong></p>
<ul>
<li>many kinds of augmentation methods improve the training loss, but <strong>only
horizontal flip helps the LB score</strong>.</li>
</ul>
<p><strong>Model modification</strong></p>
<ul>
<li><p>add scSE blocks in resnet101 backbone</p></li>
<li><p>add dropout in resnet101 backbone</p></li>
</ul>
<p>The changes reduce training loss but not much.</p>
<p><strong>Training</strong></p>
<p>only positive samples (5659) used in training</p>
<p>Training on 5093 samples
,Validating on 566 samples</p>
<pre><code>LEARNING_RATE = 0.005
model.train(dataset_train, dataset_val,
learning_rate=LEARNING_RATE*2,
epochs=1, #default 2
layers='heads',
augmentation=None) ## no need to augment yet
model.train(dataset_train, dataset_val,
learning_rate=LEARNING_RATE,
epochs=6,
layers='all',
augmentation=augmentation)
model.train(dataset_train, dataset_val,
learning_rate=LEARNING_RATE/5,
epochs=9,
layers='all',
augmentation=augmentation)
model.train(dataset_train, dataset_val,
#learning_rate=LEARNING_RATE/5,
learning_rate=LEARNING_RATE/10,
epochs=12,
layers='all',
augmentation=augmentation)
</code></pre>
<p>The Mask R-CNN source code is in mrcnn/model.py; I copied the whole "mrcnn" folder into my codebase and modified it!</p>
|
RSNA Pneumonia Detection Challenge
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is based on a modified RetinaNet model. Single model, ensembled outputs of 4 folds.</p>
<p>I decided to use RetinaNet as it's much simpler compared to Faster R-CNN-like models or SSD while having comparable results; this allows much easier experimentation and debugging/tuning of the model. </p>
<p>Credits to pytorch-retinanet implementation my solution is based on: <a href="https://github.com/yhenon/pytorch-retinanet">https://github.com/yhenon/pytorch-retinanet</a></p>
<p>I scaled the original images to 512x512 resolution; at 256 resolution I saw results degrade, and using the full resolution was not as practical with heavier base models.</p>
<p>Modifications I have done to the original pytorch-retinanet implementation:</p>
<ul>
<li><p>tested different base models, se-resnext101 worked the best, se-resnext50 slightly worse</p></li>
<li><p>added an extra output for smaller anchors (level 2 pyramid layer) to handle smaller boxes</p></li>
<li><p>added another classification output predicting the class of the whole image ('No Lung Opacity / Not Normal', 'Normal', 'Lung Opacity'). I did not use this output, but making the model predict an additional related target improved the result.</p></li>
<li><p>I found that the original pytorch-retinanet implementation ignored images with no boxes; I changed it to calculate the loss on them as well.</p></li>
<li><p>As the classification outputs overfit much faster than the anchor position/size regression outputs, I added dropout to the anchor-class and whole-image-class outputs. In addition to extra regularisation, it helped the classification and regression heads reach their optimum around the same epoch.</p></li>
</ul>
<p>Augmentation used:
Mild rotations (up to 6 deg), shift, scale, shear and h_flip; for some images, a random level of blur, noise and gamma changes. I limited the amount of brightness/gamma augmentation as it was hard for me to verify that it did not invalidate labels. To reduce the impact of rotation on bounding box sizes, instead of rotating the corners I rotated two points on each edge, at 1/3 and 2/3 of the edge length from the corners (8 points in total), and calculated the new bounding box as the min/max of the rotated points.</p>
<p>[Figure: rotation box augmentation]</p>
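<p>A minimal NumPy sketch of that rotation-robust box transform (not the author's code; the function name and signature are mine):</p>
<pre><code>import numpy as np

def rotate_box_conservatively(x1, y1, x2, y2, angle_deg, cx, cy):
    """Rotate a box around (cx, cy); instead of the 4 corners, rotate 2 points
    per edge (at 1/3 and 2/3 of the edge length, 8 points total) and take the
    min/max of the rotated points, so the box does not inflate as much."""
    t = np.array([1 / 3, 2 / 3])
    xs = x1 + (x2 - x1) * t
    ys = y1 + (y2 - y1) * t
    pts = np.concatenate([
        np.stack([xs, np.full(2, y1)], axis=1),   # top edge
        np.stack([xs, np.full(2, y2)], axis=1),   # bottom edge
        np.stack([np.full(2, x1), ys], axis=1),   # left edge
        np.stack([np.full(2, x2), ys], axis=1),   # right edge
    ])
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    rotated = (pts - [cx, cy]) @ rot.T + [cx, cy]
    return rotated[:, 0].min(), rotated[:, 1].min(), rotated[:, 0].max(), rotated[:, 1].max()
</code></pre>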
<p>I used 4 folds, stratified split by class.</p>
<p>For submission, I ensembled models trained on each fold and a few checkpoints around CV loss minimum. I have averaged model outputs before applying any thresholds and NMS stages.</p>
<p>As everyone else noticed, one of the challenges of this competition was the different distribution of the train and test datasets, most likely due to different labeling methodology. Multiple radiologists labeled each test image, with the intersection used for similar labels. I'd expect this to lead to more predicted boxes, but of smaller size, especially in complex cases.
I tried to roughly simulate this process using outputs from different folds. Instead of using the mean of the predicted box sizes, I used the value at the 20th percentile and reduced it even further, proportionally to the difference between the 80th and 20th percentiles across models.</p>
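<p>A small sketch of that percentile-based combination for one box dimension (the extra shrinkage factor <code>alpha</code> is hypothetical; the write-up does not give its exact value):</p>
<pre><code>import numpy as np

def shrink_by_fold_disagreement(sizes, alpha=0.5):
    """Combine one box dimension (width or height) predicted by several folds.

    Rather than the mean, take the 20th percentile, then shrink it further in
    proportion to the spread between the 80th and 20th percentiles. `alpha`
    is a hypothetical scaling factor, not a value from the write-up.
    """
    p20, p80 = np.percentile(sizes, [20, 80])
    return max(p20 - alpha * (p80 - p20), 1.0)

# e.g. widths predicted by 4 folds for the same matched box
print(shrink_by_fold_disagreement([210, 230, 250, 300]))
</code></pre>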
<p>I have not used any other metadata from images, I felt the orientation field borderlines with being the leak.</p>
<p>I trained model for around 12 epochs, around 1 hour per epoch on 1080ti GPU.</p>
<p>What I'd like to try:
I was not able to work on this competition for the last two weeks before stage 2 started, and I missed the post about the NIH dataset being allowed. It's a bigger dataset but with lower-quality labels; it would be very interesting to check whether training the model to predict both datasets in an interleaved way would improve the result, or at least using it to pretrain the base model.</p>
<p>I'd like to thank the organisers for this competition; it was a pleasure to participate and hopefully the solutions will make an impact.</p>
<p>I'll share my code shortly, will do some cleanup etc. first</p>
|
RSNA Pneumonia Detection Challenge
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. The conversion in <code>prepare_data</code> should indeed probably be 1024, for some reason I converted to size 1000 jpg from the very start. I'm going to leave it as 1000 in the code since that's the way I converted my data. It probably doesn't make a big difference since the images are later used at size 224 resolution anyway. I assume the converted jpg images look ok?</p>
<p>I'm not sure why your outputs are all 0's, I get nonzero outputs even after the first epoch of training. Are you using python 3 (I used python 3.6.6 from Anaconda distribution)? Could you check whether you can predict bounding boxes using <code>predict.py</code> and my trained .h5 models (located on resources tab of Github repo)?</p>
<p>Hm, that shouldn't be happening. I'm assuming you ran <code>prepare_data.py</code> and are running the <code>train_50.sh</code> script? Can you send me the output?</p>
<p>Python 2 is the problem, I know I used python 3 style division in my code, so an integer divided by an integer gives a float (in python 2 you get an integer). I think you can import python 3 style division into python 2, but I don't have experience with this. I used f-strings in the code as well, which I think is a python 3 feature.</p>
<p>That's an interesting solution! I'm looking forward to studying your code. I haven't seen the problem of reduced accuracy of rotated bounding boxes addressed elsewhere.</p>
<p>Hello! This is an overview of my solution to the pneumonia detection challenge. I'm an abdominal radiologist at the Keck School of Medicine of USC in Los Angeles, CA, USA. Although I have played with convolutional neural networks for medical image classification, this is my first experience with object detection. It is also my first experience with machine learning competitions. </p>
<h2>Summary</h2>
<p>My models used <a href="https://github.com/fizyr/keras-retinanet">keras-retinanet</a> by Hans Gaiser and collaborators, based on the <a href="https://arxiv.org/abs/1708.02002">focal loss paper by Lin et al</a>. This is a single-stage convolutional neural network detection architecture, which was appealing to me for training simplicity. I optimized two RetinaNet models using Keras, with resnet-50 and resnet-101 backbones that were pretrained on ImageNet images. I used non-maximum suppression to eliminate any overlapping bounding boxes from each network. I then took weighted averages of overlapping bounding boxes from both trained neural networks. I also applied a global fixed percentage size reduction to all final bounding boxes, which appeared to significantly improve Stage 1 test scores.</p>
<p>Code is posted on <a href="https://github.com/pmcheng/rsna-pneumonia">Github</a>. Note that the two model .h5 files used in my solution are under the Releases tab of the repository, because these files are too large to be included within the repo.</p>
<h2>Feature selection (or lack thereof)</h2>
<p>I did not perform manual image feature selection or engineering. I briefly played with using the DICOM view projection (AP vs PA), specifically assigning different score thresholds based on the view, but this did not improve my results. </p>
<p>I did not make use of the “No Lung Opacity / Not Normal” labels in the training set.</p>
<p>I did not use any external training data for this competition.</p>
<h2>Training</h2>
<p>I decided early on that high image resolution was not necessary for pneumonia bounding box prediction. I used the training images at a 224 x 224 resolution, which made training much more efficient on my hardware. I used sklearn’s <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html">stratified K fold function</a> to divide the 25684 training images into a large training set (24399 images, 95%) and a small validation set (1285 images, 5%) . I had originally used a much larger validation set but later found that I had significantly better training results when I shifted more images into training. Images were augmented with rotation, translation, scaling, and horizontal flipping (shearing and vertical flipping were turned off). I also added random constants to the images, an idea I found from a <a href="https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/64633">post by @ZFTurbo</a> regarding RetinaNet in a previous competition. </p>
<p>The default training parameters were modified to match the focal loss paper (i.e., SGD with learning rate = 0.01, momentum =0.9, decay = 0.0001, nesterov = True), due to anecdotal reports about more generalizable models produced by SGD with momentum compared to Adam. I used resnet-50 and resnet-101 backbones for the final models. I did experiment with resnet-152, but the training took longer and the results were slightly worse.</p>
<p>I trained for 25 epochs with 2500 steps per epoch and batch size of 8, saving the model snapshot after each epoch. After each epoch I calculated bounding boxes on the validation set; I changed the score threshold in the filter_detections layer from keras-retinanet to 0.01 (from 0.05) so that I could evaluate lower score thresholds. At each epoch I calculated the score threshold providing the maximum Youden index on the validation set (sensitivity + specificity – 1). More specifically, I calculated sensitivity and specificity with respect to images, not bounding boxes, with the idea that from a scoring standpoint, it was important for the system to classify whether an image as a whole was positive or negative for pneumonia. I also calculated the RSNA metric as implemented in <a href="https://www.kaggle.com/chenyc15/mean-average-precision-metric">Yicheng Chen's excellent kernel</a>. I found that snapshots that performed best on the leaderboard were ones with the highest Youden index, with the score threshold lowered to give a sensitivity close to 90%. I suspect that the benefit of lowered score thresholds was due to higher prevalence of pneumonia in the test set relative to the training set. </p>
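<p>A minimal sketch of the image-level Youden-index threshold search described above (variable names and the threshold grid are mine, not from the author's code):</p>
<pre><code>import numpy as np

def best_youden_threshold(image_scores, image_labels, thresholds=np.linspace(0.0, 1.0, 101)):
    """Pick the score threshold maximizing the Youden index (sensitivity + specificity - 1)
    at the image level: an image is called 'positive' if its top box score exceeds
    the threshold. `image_scores` is the max box score per image; `image_labels`
    is 1 if the image has any ground-truth box."""
    scores = np.asarray(image_scores)
    labels = np.asarray(image_labels).astype(bool)
    best_t, best_j = 0.0, -1.0
    for t in thresholds:
        pred = scores > t
        sens = (pred & labels).sum() / max(labels.sum(), 1)
        spec = (~pred & ~labels).sum() / max((~labels).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
</code></pre>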
<p>I aggressively used non-maximum suppression to eliminate any overlapping bounding boxes from each network’s output for a given image. My idea was that physician annotators would most likely specify nonoverlapping ground truth bounding boxes. Even though Tensorflow has an NMS function, I found it more useful and instructive to modify <a href="https://www.pyimagesearch.com/2015/02/16/faster-non-maximum-suppression-python">code posted by Adrian Rosebrock on his blog</a>.</p>
<p>I then took weighted averages of overlapping bounding boxes from both trained neural networks, using the scores for the boxes from each network as the weights; I experimented with box unions and intersections, which did not work nearly as well. For bounding boxes that did not overlap between the two neural networks, I used a separate higher threshold value to decide whether the solitary box should be retained. I also applied a global fixed percentage size reduction to all final bounding boxes, which significantly improved Stage 1 test scores.</p>
<p>I did not train with Stage 1 test data for Stage 2, because I didn’t know this was an option (I did not upload automated training code). However, even if I had been aware of this option, I doubt that I would have retrained in Stage 2. I had used the Stage 1 test set scores extensively for validation, and was already worried about overfitting the Stage 1 test set. </p>
<h2>Interesting findings</h2>
<p>I think my single most important observation was that the bounding boxes from my models were systematically too large. I found that by reducing all bounding boxes by a fixed percentage (17% in each dimension), I improved my Stage 1 leaderboard score substantially. I had actually first observed this when I had a larger validation set and I manually reviewed the predicted bounding boxes superimposed on the internal validation set images, and saw that they were generally too large. Shrinking the bounding boxes improved both my internal validation set score and my Stage 1 leaderboard score. This led me to believe that perhaps the L1 loss used by RetinaNet may not be optimal for the mean average precision metric used in this competition. Alternatively, the image rotations for augmentation may have led to a slight increase in bounding box size for training, though I think this effect is small, as I limited the maximum possible rotation to about 0.05 radians.</p>
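<p>A minimal sketch of the fixed-percentage shrinkage (shrinking width and height about the box center is my assumption; the write-up only states the 17% reduction per dimension):</p>
<pre><code>import numpy as np

def shrink_boxes(boxes, factor=0.17):
    """Shrink each box's width and height by `factor` (17% here) about its
    center, keeping the center fixed. boxes: array of [x1, y1, x2, y2]."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]) * (1 - factor)
    h = (boxes[:, 3] - boxes[:, 1]) * (1 - factor)
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
</code></pre>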
<p>Interestingly, however, when I switched to the smaller validation set, shrinking the bounding boxes actually reduced my internal validation set score, but still improved my Stage 1 leaderboard score. It was then that I carefully read <a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723">Dr. Anouk Stein’s post</a> describing the annotation method for test set cases; these cases were multiread and “the intersection was used if there was at least a 50% overlap by one of the boxes”. This would suggest that bounding boxes of the test set images could quite easily be systematically smaller in the test set compared to the training set. On the assumption that Stage 2 test set bounding boxes would have statistical properties similar to the Stage 1 test set bounding boxes, I decided to treat the Stage 1 leaderboard scores as more accurate validation set scores than my own internal validation set scores.</p>
<p>At the end the only parameters I was tuning were the score thresholds for each model, the score threshold for solitary unmatched bounding boxes, and the bounding box shrinkage factor.</p>
<p>Combining two RetinaNet models in an ensemble gave a mild boost to my Stage 1 leaderboard score. My best trained resnet-101 model by itself (with the usual postprocessing steps including non-maximum suppression and bounding box shrinkage) gave a Stage 1 leaderboard score of 0.232. The ensemble had a Stage 1 leaderboard score of 0.238. I think the ensemble improved specificity by eliminating most bounding boxes proposed by only 1 of the 2 networks, and this may have helped in Stage 2. </p>
<p>My stage 2 test set score (0.239) was practically identical to my Stage 1 test set score (0.238).</p>
<h2>Concluding thoughts</h2>
<p>The competition was an exciting and educational experience. I thank the RSNA/STR organizers for all their hard work organizing and annotating the data sets for competition; large medical image data sets of sufficient size and quality for this purpose are rare. Thanks also to Kaggle and its staff for their support of this competition. </p>
<p>Given the close spacing between the scores of many of the top teams (and the virtual tie between 3rd and 4th place), I'm sure a different final test set would have produced different rankings, though Ian/Alex and Dmytro clearly set themselves apart at the top. I was surprised and lucky to finish so high in the rankings. I know that I have a lot to learn about object detection based on the varied and interesting forum posts; my solution is fairly minimalist by comparison. My congratulations and respect to all the participants! </p>
<p>Yes, I just used standard preprocessing. I agree that it's not clear this would be optimal for grayscale images. However, when using pretrained models, each channel is treated differently by the network, and I imagine we might leverage a kind of ensemble effect across the channels. I have had success in the past with classification models using similar Imagenet-style preprocessing of grayscale images. It would be interesting to experiment with other preprocessing steps, but I didn't have time.</p>
|
RSNA Pneumonia Detection Challenge
|
4th Place Solution - 16bit.ai / layer6
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Link to our winners' model report. Comments/questions are welcome.</p>
<p><a href="https://github.com/DanielEftekhari/Machine-Learning-Presentations-Blogs/blob/master/Kaggle-RSNA-pneumonia-detection-challenge/16bit_layer6_RSNA_Pneumonia_Detection_Challenge_Winner_Documentation.pdf">https://github.com/DanielEftekhari/Machine-Learning-Presentations-Blogs/blob/master/Kaggle-RSNA-pneumonia-detection-challenge/16bit_layer6_RSNA_Pneumonia_Detection_Challenge_Winner_Documentation.pdf</a></p>
|
RSNA Pneumonia Detection Challenge
|
6th place solution: U-net-like segmentation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our source codes are now available at <a href="https://github.com/pfnet-research/pfneumonia">https://github.com/pfnet-research/pfneumonia</a></p>
<p>We are software engineers at PFN -- Preferred Networks Inc., Japan. Our solution is based on semantic segmentation, using U-net-like network architecture. One key insight of ours is that it is crucial to detect the ‘edges’ of bounding-boxes. We tried to predict top, bottom, left and right edges of bounding-boxes separately.</p>
<h2>U-net-like backbone</h2>
<p>We adopted Imagenet-pretrained ResNet152 as the feature extractor. Input image is resized to 512x512, processed by the extractor to the size 16x16, then unpooled four times to get 256x256 output. For each step of unpooling, the feature vector is concatenated with the corresponding ResNet layer, then processed 3x3 convolution and ReLU twice. The shape of the final output is 64x256x256.</p>
<h2>‘seg’ and ‘edge’ predictions</h2>
<p>A 1-channel 1x1 convolution is applied to this output to get ‘seg’ layer. This layer indicates the confidence of each pixel being inside a bounding-box. Sigmoid of this layer is copied and concatenated to the 64-channel output, applied 3x3 convolution and ReLU twice, and finally applied a 1x1 convolution to produce 4-channel ‘edge’ layer. The first channel of this layer indicates the confidence of each pixel being one of the ‘topmost’ pixels of a bounding-box. The other 3 channels are for bottommost, leftmost and rightmost.</p>
<p>The loss for the ‘seg’ layer is <code>1 - f1</code>, where f1 is the differentiable F1 score of the segmentation. The loss for the ‘edge’ layer is the cross-entropy loss. The whole loss of the network is the sum of these two losses.
As most pixels of the ground truth for the ‘edge’ layers are negative (even for images that have bounding-boxes), such negative pixels are appropriately undersampled.</p>
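<p>A generic PyTorch-style sketch of such a differentiable (soft) F1 loss, not necessarily the team's implementation:</p>
<pre><code>import torch

def soft_f1_loss(logits, target, eps=1e-6):
    """Differentiable '1 - f1' loss for the 'seg' map, as described above.
    logits: raw network output of shape (B, 1, H, W); target: binary mask of the
    same shape. A generic soft-F1 sketch, not the authors' code."""
    prob = torch.sigmoid(logits)
    tp = (prob * target).sum(dim=(1, 2, 3))
    precision = tp / (prob.sum(dim=(1, 2, 3)) + eps)
    recall = tp / (target.sum(dim=(1, 2, 3)) + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return (1 - f1).mean()
</code></pre>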
<h2>Training details</h2>
<p>We used x-flip, 90-degree rotation, zoom-in/out and random contrast changes (let <code>k</code> be a random integer in [-10, 10] and add <code>k</code> to all pixels) as data augmentation. Positive samples were oversampled three times. The batch size was 10.</p>
<p>We used the Adam optimizer with weight decay 1e-4 and trained for 30 epochs. The learning rate was divided by 10 after epochs 20 and 27. We froze the weights of the extractor for the first 200 iterations, then fine-tuned them over the rest of training.</p>
<p>All training runs were done with eight Tesla P100s. A single run took about 6 hours.</p>
<h2>Inference</h2>
<p>At inference time, first we split each image into two pieces (right and left) by the line <code>x = c</code>, where c is the weighted average of x-coordinates of all pixels (weight = pixel values). For each piece of image, we examine every possible rectangle with at least 40 px height and width, and find one that maximizes <code>p = p_top * p_bottom * p_left * p_right</code> where <code>p_top</code> is the geometric mean of sigmoid of the first channel of ‘edge’ layer, over the topmost pixels of the rectangle being examined (same as <code>p_bottom</code>, <code>p_left</code> and <code>p_right</code>). Finally, if p >= 0.3, it is considered as a prediction.</p>
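<p>A sketch of how a single candidate rectangle could be scored from the 4-channel ‘edge’ map (the exhaustive search over rectangles and the left/right split are omitted; names are mine, not the team's):</p>
<pre><code>import numpy as np

def edge_box_score(edge_prob, y0, y1, x0, x1, eps=1e-6):
    """Score a candidate box [x0, x1) x [y0, y1) from the 'edge' map.

    edge_prob: (4, H, W) sigmoid probabilities in the order top/bottom/left/right.
    p_top is the geometric mean of the 'top' channel along the box's top row,
    and similarly for the other sides; the final score is their product."""
    top, bottom, left, right = edge_prob
    p_top = np.exp(np.log(top[y0, x0:x1] + eps).mean())
    p_bottom = np.exp(np.log(bottom[y1 - 1, x0:x1] + eps).mean())
    p_left = np.exp(np.log(left[y0:y1, x0] + eps).mean())
    p_right = np.exp(np.log(right[y0:y1, x1 - 1] + eps).mean())
    return p_top * p_bottom * p_left * p_right
</code></pre>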
<h2>Test-time augmentation and ensembles</h2>
<p>At test time, we perform x-flip augmentation in which ‘seg’ layers and ‘edge’ layers are both averaged over flipped and non-flipped images.
We perform 10-fold CV and ensemble them only by averaging ‘edge’ layers, as averaging ‘seg’ layers over different models turned out to score worse. We divide 10 models evenly into 2 groups, and predict with them independently to get 2 submission files. Finally, we merge those two submissions: if two boxes overlap with an IoU more than 0.5, we adopt the intersection as the final prediction. Otherwise, we preserve the whole bbox as the prediction.</p>
<h2>Things that didn’t work</h2>
<ul>
<li>Predicting the class labels of images as well as ‘seg’ and ‘edge' layers made the training unstable, and scored worse than the solution above even at the best of times.</li>
<li>Using deconvolutions instead of unpoolings scored about the same or slightly worse, probably because the final layers should be simple rectangles or lines.</li>
<li>Using cross-entropy loss instead of f1-loss for ’seg’ layer made the training longer to converge. F1-loss allows higher variance of confidence values, which might help ‘edge’ layer to guess. </li>
</ul>
<p>Thank you. We tried both segmentation based and object-detection (Faster R-CNN) based approaches in parallel. We got the better result with segmentation based approach, so we decided to dedicate ourselves to this solution.</p>
|
RSNA Pneumonia Detection Challenge
|
7 Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is hosted on Docker Hub.
You can access it with <code>nvidia-docker run -p 8888:8888 -it deepradiology/kaggle:rsna2018</code></p>
|
RSNA Pneumonia Detection Challenge
|
8th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>8th place solution:
I am pleased to share my detection solution here. In stage one / stage two my score was 0.237 (public LB) / 0.228 (private LB). My detection framework, similar to S3FD (a face-detection framework), was implemented by myself in PyTorch.</p>
<p>In the training phase I chose 5158 positive samples ('Pneumonia') as the training set. The validation set consisted of 500 'Pneumonia' images, 500 'No Lung Opacity / Not Normal' images and 500 'Lung Opacity' images. Due to time constraints, I did not use cross-validation. To reduce GPU memory while keeping a high resolution, I resized the input images to 800 x 800, and used some data augmentation techniques (RandomSampleCrop and RandomMirror) during training. At first, I designed a multi-layer detection loss that could be computed on images without ground truth, so that all images could be used to train the detection model; in practice the results were not good. After that, I cropped the box areas from the Pneumonia images and pasted them into Normal images to generate a new training set. Finally, I combined the original training set and this new training set to improve the performance of the detection model.</p>
<p><strong>Anchor</strong></p>
<p>Anchor design is very important for the detection task, so I designed two anchor configurations using k-means:</p>
<p>1: sizes: detection layer 1 [0.1, 0.1], detection layer 2 [0.2, 0.2], detection layer 3 [0.4, 0.4], detection layer 4 [0.8, 0.8]; ratios [1:1].</p>
<p>2: sizes: detection layer 1 [0.1, 0.1], detection layer 2 [0.2, 0.2], detection layer 3 [0.4, 0.4], detection layer 4 [0.8, 0.8]; ratios [1:1], [0.3:1], [0.6:1], [1.2:1].</p>
<p>The second anchor configuration achieved the better result.</p>
<p><strong>Loss Function</strong></p>
<p>In the loss function, the original cross-entropy classification loss and smooth-L1 regression loss were changed to weighted cross entropy and weighted smooth L1, respectively. The weight depends on the IoU between the GT box and the anchor. This slightly reduces the imbalance between the regression confidence and the classification confidence.</p>
<p><strong>Detection Network</strong></p>
<p>To improve the feature-extraction capability of my detection model, a Feature Pyramid Network (FPN) was also used. I also tried to implement RefineDet (a two-stage SSD, <a href="https://arxiv.org/abs/1711.06897">https://arxiv.org/abs/1711.06897</a>) to detect pneumonia, but it produced many false positives.</p>
<p><strong>Backbone</strong></p>
<p>My base model only uses a pre-trained VGG16.</p>
<p><strong>Post-Processing</strong></p>
<p>A two-stage NMS method was used at inference time: first I removed predicted boxes that overlap by IoU >= 0.7, but the xmin, ymin, xmax, ymax of each kept box is the mean over the removed boxes and itself; then ordinary NMS with an overlap threshold of 0.05 is applied as the second stage.</p>
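<p>A rough NumPy sketch of the first (merging) stage of that two-stage NMS, assuming boxes as [x1, y1, x2, y2] with scores; the implementation details are mine, not the author's:</p>
<pre><code>import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-6)

def merge_stage(boxes, scores, thr=0.7):
    """Stage 1: for each kept box (highest score first), average its coordinates
    with all boxes overlapping it by IoU >= thr, then discard those boxes."""
    order = np.argsort(scores)[::-1]
    boxes, scores = np.asarray(boxes, float)[order], np.asarray(scores)[order]
    used = np.zeros(len(boxes), dtype=bool)
    merged_boxes, merged_scores = [], []
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = [j for j in range(i, len(boxes)) if not used[j] and iou(boxes[i], boxes[j]) >= thr]
        used[group] = True
        merged_boxes.append(boxes[group].mean(axis=0))
        merged_scores.append(scores[i])
    return np.array(merged_boxes), np.array(merged_scores)

# Stage 2 would then be ordinary NMS with a low threshold (0.05 here).
</code></pre>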
<p><strong>Ensemble</strong></p>
<p>The six models I trained were ensembled into one submission, using the two-stage NMS and multi-scale testing described above.</p>
<p>Thanks to the organizers for this competition.</p>
|
RSNA Pneumonia Detection Challenge
|
The 5th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Summary:</h1>
<p>We use a classification-detection pipeline.
Detection networks: modified Faster/Mask R-CNN; classification network: DenseNet169.
Implemented in Keras / PyTorch / TensorFlow.</p>
<h1>Training Methods:</h1>
<ol>
<li>Classification:
Cross-validation training.
<ul><li>Data augmentation: horizontal flip, small angle rotate, multi-scale </li></ul></li>
<li>Detection:
<ul><li>Cross-validation training using 3 detection frameworks.</li>
<li>Data augmentation: horizontal flip, small angle rotate, multi-scale resize</li>
<li>First train on positive images only, then fine-tune the model on all images.</li>
<li>OHEM (Online Hard Example Mining) in rpn class loss.</li>
<li>(Optional) Focal loss for rpn class loss and rcnn class loss.</li>
<li>(Optional) Classification branch: giving image an extra image level attribute, then design the network as multi-tasks learning: detection and classification. It helps to reduce FP.</li></ul></li>
</ol>
<h1>Ensemble strategy</h1>
<p>Multi-scale inputs for different models</p>
<p>Intra-model NMS plus cross-model box voting in order to avoid different/unnormalized confidence scales among models</p>
<p>Average the bounding boxes across models</p>
<h1>Important and Interesting Findings:</h1>
<p>OHEM:
Online hard example mining is used to handle the inclusion of negative images (without any objects at all) as well as to balance positive/negative samples.</p>
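<p>A generic PyTorch-style sketch of OHEM for a binary classification loss (not the team's implementation; the negative:positive ratio is an assumed parameter):</p>
<pre><code>import torch
import torch.nn.functional as F

def ohem_class_loss(cls_logits, labels, neg_per_pos=3):
    """Online hard example mining for a binary classification loss:
    keep all positive anchors, but only the hardest (highest-loss) negatives,
    at a fixed negative:positive ratio."""
    loss = F.binary_cross_entropy_with_logits(cls_logits, labels.float(), reduction="none")
    pos = labels == 1
    neg = labels == 0
    num_neg = min(int(neg.sum()), max(int(pos.sum()) * neg_per_pos, 1))
    hard_neg_loss, _ = loss[neg].topk(num_neg)
    return (loss[pos].sum() + hard_neg_loss.sum()) / max(int(pos.sum()) + num_neg, 1)
</code></pre>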
<p>Classification:
A standalone classification model (DenseNet169) is trained to reduce false-positive bounding boxes at the image level.</p>
<p>Fine-tune strategy:
Freezing all layers but the heads and fine-tuning only on the stage 1 test dataset helps fit the finely annotated images provided in stage 2.</p>
<h1>The whole project source</h1>
<p>We have uploaded our whole project at <a href="https://github.com/JiYuanFeng/Kaggle_RSNA_Pneumonia-Detection-Challenge">https://github.com/JiYuanFeng/Kaggle_RSNA_Pneumonia-Detection-Challenge</a></p>
|
RSNA Pneumonia Detection Challenge
|
[10th place] Solution Overview and Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is an overview of our team solution to the pneumonia detection challenge. Our team, DASA-FIDI-IARA, is composed of computer scientists, a computer engineer, a radiologist informatician, and a neuroradiologist. We had much fun participating on this challenge, and we want to thank organizers and all people that worked hard to make it possible. Code and trained models are available on GitHub (link in the end).</p>
<h1>Summary</h1>
<p>Our solution is based on <a href="https://github.com/fizyr/keras-retinanet">retinanet</a> with resnet101 as backbone. We trained 5 folds of the training set from stage 1 and used the best single one. We experimented many ‘out-of-the-box’ ideas instead of implementing ensemble, making the stage 1 submission to be a single model. We have heavily used data augmentation, both on images and bounding boxes. After a careful analysis of width and height ratios of bounding boxes, we have tried many anchor ratios setups. Also, we played with inference hyperparameters, such as non-maximum suppression and score thresholds.</p>
<h1>Training</h1>
<p>We resized the original images to 800x800, as we got worse results for smaller sizes. We have modified the original retinanet implementation, including the following features:</p>
<ol>
<li>Adding more anchor ratios, 9 in total, ranging from 0.25 to 4 (see the sketch after this list);</li>
<li>Hyperparameter tuning: learning rate on training, non-maximum suppression and score threshold on inference;</li>
<li>Data augmentation
<ul><li>On images and bounding boxes: rotations, horizontal flips, translations, shearing and scaling</li>
<li>On images: contrast and brightness increase/decrease, gaussian noise</li>
<li>On bounding box: shifts on points (x1,y1) and (x2,y2); scaling up/down bounding boxes (this feature was not actually used in our model submission)</li></ul></li>
<li>Dropouts both for regression and classifier heads;</li>
<li>We had information from the discussion in Kaggle that the annotation was performed in a different way for the training and test sets. So, we guessed how different it could be and trained our best model with the same training set, except by progressively reducing bounding boxes. The best model was trained with 15% reduction in height and width.</li>
</ol>
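<p>The 9-ratio set mentioned in item 1 could, for example, be generated with geometric spacing (an assumption; the write-up only states the count and the range):</p>
<pre><code>import numpy as np

# 9 anchor aspect ratios spanning 0.25 to 4. Geometric spacing is an
# assumption; the write-up only gives the count and the range.
ratios = np.geomspace(0.25, 4.0, num=9)
print(np.round(ratios, 3))
# These would then be passed to the detector's anchor configuration
# together with the default scales.
</code></pre>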
<h1>Ideas that did not work</h1>
<p>We tried a lot of different things that did not work so well for us:</p>
<ol>
<li>Resnet52, resnet152, vgg, mobilenet, and even custom architectures as backbones;</li>
<li>Adding more anchor boxes;</li>
<li>Test Time Augmentation;</li>
<li>YOLO and Light-Head RCNN;</li>
<li>Freezing first layers of backbone;</li>
<li>Histogram equalization as pre-processing;</li>
<li>Two step approach: classifier + detector (pneumonia vs others)</li>
<li>Adding gender and view position as features.</li>
<li>Training specified networks for each gender (M/F) and each view position (AP/PA)</li>
<li>Manually re-annotate the training set (not-normal cases) to remove evident false positives/negatives.</li>
</ol>
<h1>Final Thoughts</h1>
<p>As mentioned, our Stage 1 submission was a single model, and we believe that ensemble could improve our score. For instance, after the final submission, we implemented an ensemble of 2 of our best models (just for curiosity), achieving 0.254 score in the Stage 1 testset. </p>
<p>The most important realization was finding that the annotation of the test set was different than the training set. Specifically, we found that targeting smaller bounding boxes yielded better results. The use of additional anchor-ratios also improved our mAP significantly. Our code for training and inference, as well as the trained models, are available on GitHub. There is also a demo where you can test our model (you need to request an account to try it). All the links can be found below.</p>
<h1>Links</h1>
<ul>
<li>Documentation: <a href="https://rsna-challenge-2018.readthedocs.io/en/latest/">https://rsna-challenge-2018.readthedocs.io/en/latest/</a></li>
<li>Training/inference code: <a href="https://github.com/alessonscap/rsna-challenge-2018.git">https://github.com/alessonscap/rsna-challenge-2018.git</a></li>
<li>Demo page: <a href="https://iarahealth.com/pneumonia/">https://iarahealth.com/pneumonia/</a></li>
</ul>
|
RSNA Pneumonia Detection Challenge
|
[1st place] Solution Overview & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Pneumonia Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We have uploaded our solution code to GitHub. Please find it here: <a href="https://www.github.com/i-pan/kaggle-rsna18">https://www.github.com/i-pan/kaggle-rsna18</a></p>
<p>If you have any issues, please let us know! </p>
<p>Thanks for the tip, we didn't address this bug and used the code as is. We will try fixing this in the code and seeing what happens.</p>
<p>Thanks! Alexandre noticed this first and experimented with resizing boxes and found a significant improvement with his early models. He anticipated that inter-rater agreement would be fairly low, so taking the intersection of boxes with 50% overlap would likely lead to a significant reduction in box size for triple-read studies. </p>
<p>We had no idea which images in the training set were triple-read, but the vast majority were single-read, so we knew that the models would learn to predict larger boxes. </p>
<p>We are very excited to have been awarded 1st place in this competition. This was our first foray into the world of object detection. I'm Ian, currently a 3rd-year medical student at Brown University, Providence, RI, USA. My teammate Alexandre is a radiologist practicing in Lanaudière, Québec, Canada. We are happy to represent the medical community in this competition. Huge congratulations to Dmytro for becoming Grandmaster. His top score during the public LB really motivated us to make successive improvements to our model. Truly an honor to be in the same ranks as him for this challenge. Congratulations to all of the other top 10 winners as well. </p>
<h1>tl;dr</h1>
<ul>
<li>We did NOT retrain when stage 1 test labels were released</li>
<li>We used a classification-detection pipeline </li>
<li>10-fold CV ensemble for classification, combination of 5 10-fold CV ensembles for detection (50 models) </li>
<li>For detection, we used:
<ol><li>RetinaNet: <a href="https://github.com/fizyr/keras-retinanet">https://github.com/fizyr/keras-retinanet</a></li>
<li>Deformable R-FCN: <a href="https://github.com/msracver/Deformable-ConvNets">https://github.com/msracver/Deformable-ConvNets</a></li>
<li>Deformable Relation Networks: <a href="https://github.com/msracver/Relation-Networks-for-Object-Detection">https://github.com/msracver/Relation-Networks-for-Object-Detection</a></li></ol></li>
<li>Boxes were ensembled using: <a href="https://github.com/ahrnbom/ensemble-objdet">https://github.com/ahrnbom/ensemble-objdet</a></li>
<li><em>We resized the lengths and widths of our final predictions by 87.5%</em></li>
<li>Code available here: <a href="https://www.github.com/i-pan/kaggle-rsna18">https://www.github.com/i-pan/kaggle-rsna18</a></li>
<li>We found that a 6-model ensemble with 1 InceptionResNetV2 for classification and 5 Deformable Relation Networks achieved 0.253 on stage 2 private LB</li>
</ul>
<h1>Classification</h1>
<p>We used Keras 2.2 for classification. Our classification model was composed of the following individual models [format: modelArchitecture (numClasses) (imgSize)]. Models were trained on either 2 classes (opacity vs. not) or 3 classes (opacity vs. not normal/no opacity vs. normal). Each model was trained on a different fold.</p>
<ul>
<li>InceptionResNetV2 (2) (256), InceptionResNetV2 (2) (320)</li>
<li>InceptionResNetV2 (3) (256), InceptionResNetV2 (3) (320)</li>
<li>Xception (2) (384), Xception (2) (448)</li>
<li>Xception (3) (384), Xception (3) (448)</li>
<li>DenseNet169 (2) (512), DenseNet169 (3) (512)</li>
</ul>
<p>First, we trained ImageNet pre-trained networks on the NIH ChestX-ray14 dataset using 15 classes (14 findings + abnormal vs. normal). Then we fine-tuned those weights on the pneumonia dataset. This improved results from training using ImageNet weights only by about 1% locally. We were getting about 0.88-0.90 AUC across our folds. </p>
<p>Models were trained with 50% probability of being color-inverted, 50% of being flipped, and 50% of being augmented with some other augmentation (e.g. contrast enhancement, crop, rotation). We used 15x TTA per model for our final predictions and averaged the 150 predictions for our final classification score. When the stage 1 test labels were released, our classification ensemble had an AUC of 0.93. </p>
<p>As many other competitors noticed, the distribution of the training (single-read) data and test (triple-read) data were quite different. We think that thoracic fellowship-trained radiologists from the STR and the increased number of readers contributed to increased sensitivity or higher clinical suspicion of opacities that led to the increase in prevalence. We tuned our threshold based on the stage 1 public LB results.</p>
<h1>Detection</h1>
<p>Please see the tl;dr for the repos we used for detection. We used a combination of 5 10-fold CV ensembles for detection. We computed the metric provided by Yicheng Chen (<a href="https://www.kaggle.com/chenyc15/mean-average-precision-metric">https://www.kaggle.com/chenyc15/mean-average-precision-metric</a>) for both positive images only and all images to inform our model selection. Unfortunately we did a poor job of keeping track of how well our experiments did on stage 1 public LB, and the results are no longer visible. </p>
<h2>Detection Ensemble 1</h2>
<p>Each fold was trained on a different resolution (224-512 by increments of 32). <em>Positive images only.</em> We found that lower resolutions did not lower performance locally and actually increased performance (0.005-0.008) on public LB, so we stuck with it. It also allowed us to spam models in our ensemble due to lower training/inference overhead. The first detection ensemble was a 10-fold CV ensemble of deformable R-FCN. We mainly used default parameters, except we unfroze the non-BN layers that are frozen in the default configs. These models use a ResNet101 ImageNet pre-trained backbone. We also changed the max number of detections per image to 5 but kept a threshold of 0.001. Only data augmentation used was flip. These models trained very quickly (~2 hours) depending on the image resolution. </p>
<h2>Detection Ensemble 2</h2>
<p>Basically the same as #1, except we used deformable relation networks (<a href="https://arxiv.org/abs/1711.11575">https://arxiv.org/abs/1711.11575</a>). Honestly, I don't really know how these work, and I stumbled upon them late into the competition. Since it was basically the same as the deformable R-FCN repo, it was easy to train these models and they performed very well. We tried the version where you attempt to learn the NMS thresholds to use, but that performed very poorly. </p>
<h2>Detection Ensemble 3</h2>
<p>Exactly the same as #2, except we used the default config (i.e. kept the backbone layers frozen). Keeping the backbone layers frozen had a slight decrease in performance, but we threw in this ensemble to decrease model correlation with the other ensembles, and it had a slight improvement in our stage 1 public LB score. </p>
<h2>Detection Ensemble 4</h2>
<p>This is a RetinaNet ensemble, also 10-fold CV, but trained only at 384 x 384 resolution because we had issues with changing anchor sizes. I believe a recent update allows you to specify a config.ini file where these changes can be made. This was probably our favorite ensemble. We trained on <em>concatenated</em> images where each negative image was randomly concatenated with a positive image from the same fold on the left or right (so final image sizes were 384 x 768). We trained for 8000 steps/epoch, batch size 1, for 8 epochs dividing learning rate by 10 after epoch 4 and 6. Validation was performed after each epoch. We selected the model with the best mAP metric over the 8 epochs. Using the <code>--random-transform</code> argument didn't really help/hurt but seemed to make training more unstable so we didn't use it (without specifying this, only data augmentation is flip). Half of the 10-fold CV ensemble was trained using ResNet101 backbone, the other half with ResNet152. ResNet101 was clearly better than ResNet50, but ResNet152 was the same as ResNet101. </p>
<p>Training on concatenated images allowed RetinaNet to better balance precision and recall. You can achieve the same results by picking the right proportion of positives/negatives as well, but this seemed more "clean" to us. Inference was performed on single images. We looked at the AUC of these models using the max box score as class prediction and it was on par with our classification networks. The mAP on positive images only went down, but the mAP on all images went up, so this trade-off was beneficial to include in our ensemble. If we wanted to use a single model, it would be this one as it does not need a classifier to achieve good performance. Interestingly, training on concatenated images did not work well for detection ensembles 1-3. This may be because they are 2-stage detectors, but we didn't have time to look at this more closely. </p>
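<p>A rough sketch of that concatenation step (not the authors' data generator; the function name and the random choice of side are mine):</p>
<pre><code>import numpy as np

def concat_pos_neg(pos_img, pos_boxes, neg_img, rng=np.random):
    """Concatenate a negative image with a positive one (left or right at random)
    and shift the positive boxes accordingly. Images are HxWxC with equal shapes;
    boxes are [x1, y1, x2, y2]."""
    h, w = pos_img.shape[:2]
    boxes = np.asarray(pos_boxes, dtype=float).copy()
    if rng.rand() < 0.5:
        out = np.concatenate([neg_img, pos_img], axis=1)   # negative on the left
        boxes[:, [0, 2]] += w                               # shift x-coordinates right
    else:
        out = np.concatenate([pos_img, neg_img], axis=1)   # positive on the left
    return out, boxes                                       # e.g. a 384 x 768 image
</code></pre>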
<h2>Detection Ensemble 5</h2>
<p>Same as #4, except trained on <em>positive images only.</em> </p>
<h1>Ensemble</h1>
<h2>Detection Ensemble 1+2+3</h2>
<p>For detection ensembles 1-3, we applied 6x TTA (original, flipped, 80%/120% for both original and flipped). For each of the 30 models, we combined the TTA predictions using (<a href="https://github.com/ahrnbom/ensemble-objdet">https://github.com/ahrnbom/ensemble-objdet</a>) with IoU threshold 0.4. Box score predictions were then adjusted by multiplying by the fraction of TTAs that contained that box (i.e. if 5/6 TTAs predicted a box with average score 0.5, it was multiplied by 5/6). This code expects <em>center coordinates</em> when computing the IoU overlap between boxes in the <code>getCoords</code> function. We didn't realize this until late in the competition (we were using top left), and when we changed this to take in top left coordinates, there was a drop in performance of about 0.01 in public LB. If anyone can help us figure out why, we haven't solved this yet. </p>
<p>Now we have 30 models worth of predictions, so we combined those again with IoU threshold 0.4. No weighting was used. Box score predictions were then adjusted by multiplying by the fraction of models that contained that box (i.e. if 24/30 models predicted a box with average score 0.5, it was multiplied by 0.8 and the score was adjusted to 0.4). </p>
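<p>A toy illustration of the score adjustment described above (the dictionary layout is hypothetical; only the arithmetic comes from the example in the write-up):</p>
<pre><code>def adjust_scores(merged_boxes, num_models):
    """After ensembling, down-weight boxes that few models agreed on: multiply the
    averaged score by (number of models that predicted the box) / (total models).
    `merged_boxes` is a hypothetical list of dicts with 'avg_score' and 'n_models'."""
    for box in merged_boxes:
        box["adj_score"] = box["avg_score"] * box["n_models"] / num_models
    return merged_boxes

# Example from the text: a box found by 24 of 30 models with average score 0.5
print(adjust_scores([{"avg_score": 0.5, "n_models": 24}], 30))  # adj_score = 0.4
</code></pre>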
<p>To incorporate the classification network, we multiplied the ensemble-averaged box score predictions by the classification score for that image. We eliminated boxes with an adjusted score of <0.225. </p>
<h2>Detection Ensemble 4+5</h2>
<p>We applied 10x TTA (resolutions 320, 352, 384, 416, 448 for both original/flipped), performing the same kind of ensembling as above, including score adjustment based on fraction of TTAs predicting that box. Top 10 (or fewer) detections per TTA were selected. However we did not combine the predictions across 20 models in both ensembles at this stage. Instead, we used a classification score threshold of 0.2 and box score threshold of 0.3 for detection ensemble 4. The multiplication method we used above did not work well for RetinaNet. Though we said detection ensemble 4 didn't necessarily need a paired classifier, it did improve the stage 1 public LB score by ~0.003 so we used it since it was readily available. For detection ensemble 5, we used a classification score threshold of 0.325 and box score threshold of 0.35. </p>
<p>After applying these thresholds, we combined boxes from ensembles 4 and 5 using the same code and IoU threshold 0.4. In this case, detection ensemble 4 was given 1.2 weight versus 0.8 for ensemble 5. This was because ensemble 4 performed slightly better (~0.004) than ensemble 5. Then we adjusted the score using the same strategy as above (in this case, if a box was only present in one model, score was divided in half). </p>
<p>For tuning these thresholds, we aimed for a prevalence of about 35-37% and experimented with what worked best on stage 1 public LB. We didn't have any real local validation because we realized early on that LB score would be the best indicator of model performance on the final test data. </p>
<h1>Final Ensemble</h1>
<p>We are now left with 2 ensembles. We combined them using equal weight and IoU threshold as described above, using the same adjustment strategy. Final box threshold used was 0.15.</p>
<h1>Post-processing</h1>
<p>We realized that the triple-read boxes were smaller than the single-read boxes. This makes sense because in the annotation process described here (<a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723">https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723</a>) the <em>intersection</em> of boxes was used as opposed to the average. We tried to mimic this intersection process by taking the average of multiple intersections of boxes across models. This gave ~10-15% (!) improvement on stage 1 public LB. As an alternative to this, we simply resized the boxes by multiplying length/width by a fraction. We found 87.5% for each was a good reduction and worked a bit better than doing the intersection. It was also much easier to implement. We discovered this early on, and didn't submit anything without resizing after that. Early on in the competition we had an improvement from 0.181->0.209 with resizing. Towards the end, I wanted to see the effect of resizing again and it was an improvement of 0.218->0.252, which is huge. For our final stage 1 submission, resize improved our score from 0.222->0.260. It seemed like other people were seeing better success with higher resolution models, so we wonder if the resizing was more complementary with lower resolution models. Maybe if we didn't resize, higher resolution models would perform better.</p>
<h1>Statistics for final submission</h1>
<pre><code>Stage 1:
NUM POSITIVES: 351
% POSITIVES: 0.351
NUM BOXES: 582
AVG # BOXES PER CASE: 1.65811965812
BOX SIZE [MEDIAN]: 60084.5
BOX SIZE [MEAN]: 66052.2405498
BOX SIZE [MIN]: 16065
BOX SIZE [MAX]: 151580
Stage 2:
NUM POSITIVES: 1106
% POSITIVES: 0.368666666667
NUM BOXES: 1830
AVG # BOXES PER CASE: 1.65461121157
BOX SIZE [MEDIAN]: 60877.5
BOX SIZE [MEAN]: 66454.8060109
BOX SIZE [MIN]: 13431
BOX SIZE [MAX]: 192648
</code></pre>
<h1>Miscellaneous thoughts</h1>
<p>Basically we had a good model that we turned into a "great" model by resizing the output. It will be interesting to see if other top teams achieved their score by improving more upon classification versus box precision and how their scores would be affected by resize if they did not apply any post-processing to their predictions. We went overboard with ensembles and probably could have achieved the same performance with <20 models, but it became so easy to train them that we just included a bunch. We could have experimented with more hyperparameters in our detection models as well. I really dislike hyperparameter tuning (never really developed a good strategy for it) and often try and compensate by ensembling different models together. </p>
<h1>Things that didn't work</h1>
<ul>
<li>We tried training another classifier on out-of-fold bounding box predictions produced by our detection models to classify into IoU >0.4 and <0.4</li>
<li>NMS of overlapping bounding boxes in our final submission: a number of images in our final submission for both stage 1 and stage 2 LBs had overlapping boxes (usually a smaller one contained in a larger one). Suppressing these actually reduced our score. </li>
<li>We tried various experiments that treated AP/PA images differently (e.g. different thresholds, different resizes), but in the end it was easier and better to be view-agnostic</li>
<li>Getting all bounding box predictions from all TTAs and models and combining them at once. This didn't work as well as the stepwise approach we described above. This may have something to do with non-standardized prediction scores (though we tried standardizing and it didn't help much). </li>
<li>SoftNMS -- this makes sense because you would not expect overlapping objects in this challenge as opposed to others like COCO. </li>
<li>For RetinaNet: other backbones. Only the ResNet backbones worked well for us. </li>
<li>For RetinaNet: pre-training detector heads. It was a lot easier to use ImageNet pre-trained backbones and then just start training the whole network from the beginning.</li>
<li>For RetinaNet: pre-training the backbone on the pneumonia dataset. No improvement. </li>
</ul>
<p>Interesting - did you ever try doing a fixed resizing of the boxes as a comparison? We also wanted to account for variance which we tried to simulate using our intersection method, but it actually performed worse. Your method might be better than that though. </p>
<p>We tried 0.125, 0.15, 0.175 for final box thresholds because they gave us a final prevalence of 35-37% positive cases. 0.15 performed best on stage 1 public LB so we stuck with that.</p>
<p>I used 2x 1080 Ti and Alexandre has 2x 1070. I also had access to a couple of Titan Vs that I used to train the classification models. </p>
|
RSNA STR Pulmonary Embolism Detection
|
10th Place Solution with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>According to the other solutions it is very possible. I did not try to run on a larger image size.</p>
<p><strong>code</strong> - <a href="https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection" target="_blank">https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection</a><br>
<strong>Full Pipeline</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1094066%2F2c42c343c91bc3fd09f089c43fd8c17d%2FRSNA.png?generation=1603799132907836&alt=media" alt=""></p>
<p><strong>Overall Strategy</strong></p>
<ol>
<li>Train an image-level 2D CNN and save its features to disk</li>
<li>Train an exam-level 3D CNN and save its features to disk</li>
<li>Feed the 2D and 3D features into a sequence model</li>
</ol>
<p><strong>2D CNN Modeling</strong></p>
<ol>
<li>Data pre-processing - based on Ian Pan: <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930</a></li>
<li>augmentation - RandomBrightnessContrast, HorizontalFlip, ElasticTransform, GridDistortion, VerticalFlip, ShiftScaleRotate, RandomCrop</li>
<li>cnn models - efficientnet-b3, efficientnet-b4, efficientnet-b5</li>
</ol>
<p><strong>3D CNN Modeling</strong><br>
based on <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a>'s pipeline <a href="https://www.kaggle.com/boliu0/monai-3d-cnn-training" target="_blank">https://www.kaggle.com/boliu0/monai-3d-cnn-training</a></p>
<p><strong>Sequence Model</strong><br>
Input: slice embeddings from multiple 2D models + exam embeddings from the 3D models<br>
Loss: the RSNA competition metric</p>
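<p>A rough PyTorch sketch of how per-slice 2D features and an exam-level 3D feature could be combined in such a sequence model (layer sizes, target counts and the class name are illustrative, not taken from the repo):</p>
<pre><code>import torch
import torch.nn as nn

class SliceSequenceModel(nn.Module):
    """Bi-LSTM over per-slice 2D-CNN embeddings, with the 3D-CNN exam embedding
    concatenated to every slice. Produces per-slice PE logits and exam-level logits."""
    def __init__(self, slice_dim=2048, exam_dim=1024, hidden=256, n_exam_targets=9):
        super().__init__()
        self.lstm = nn.LSTM(slice_dim + exam_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.slice_head = nn.Linear(2 * hidden, 1)               # per-slice PE present
        self.exam_head = nn.Linear(2 * hidden, n_exam_targets)   # exam-level labels

    def forward(self, slice_feats, exam_feat):
        # slice_feats: (B, T, slice_dim); exam_feat: (B, exam_dim)
        exam_rep = exam_feat.unsqueeze(1).expand(-1, slice_feats.size(1), -1)
        x, _ = self.lstm(torch.cat([slice_feats, exam_rep], dim=2))
        slice_logits = self.slice_head(x).squeeze(-1)        # (B, T)
        exam_logits = self.exam_head(x.mean(dim=1))          # (B, n_exam_targets)
        return slice_logits, exam_logits
</code></pre>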
<p>B5, B4, B3 - 2 folds each<br>
3D DenseNet121 - 3 folds x 3<br>
LSTM - 2 folds<br>
Note that my input size is not 512x512; my models were trained on 224x224, which is faster.</p>
<p>In fact it takes 7.5 hours.<br>
<a href="https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection/blob/main/submission.ipynb" target="_blank">submission code</a></p>
<p><a href="https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection/blob/main/submission.ipynb" target="_blank">https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection/blob/main/submission.ipynb</a></p>
|
RSNA STR Pulmonary Embolism Detection
|
12th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you :)</p>
<p>Thank you :)</p>
<p>First, we would like to thank all the organizers, Kaggle, all the medical institutions who contributed their data, and all the data annotators. As someone who previously tried to annotate an MRI scan during my internship, I know a bit about how cumbersome and sensitive it is to annotate medical data. So I appreciate it all!</p>
<p>Overall it was a great yet challenging competition in terms of data volume. We initially started training models from data shared by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a>; without his generosity the barrier of entry to this competition would have been too high for many participants. So, a special thanks to him from our team.</p>
<p>Although starting off with 256x256 images was great for prototyping, in computer vision problems you can usually get significant boosts just by training on higher-resolution images. At first we tried to create and save full-resolution images using Kaggle kernels, but it wasn't fun or easy since only 5GB of disk space is allowed, so we ended up using a cloud provider for the remaining experiments.</p>
<p>First, we created full resolution training images using the same windowing shared publicly and also leveraged great utilities from <a href="https://docs.fast.ai/medical.imaging" target="_blank">https://docs.fast.ai/medical.imaging</a>. GDCM was also a requirement, because not all images were readable without it.</p>
<p>We extracted both images and metadata.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fc18414a7c1f639ec7d082bc8632e17e3%2Fdata_prep.jpg?generation=1603759954173470&alt=media" alt=""></p>
<p>Later we trained CNN models for predicting Image level PE.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fd930773d07281b83ce33642edd90bf0e%2Fcnn_models.jpg?generation=1603760506036737&alt=media" alt=""></p>
<p>Then we used an LSTM model to predict image level PE and exam level predictions.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fdf271e5837b3de4d8a0fefad7f7cd63f%2Flstm_sigmoid.jpg?generation=1603760549832762&alt=media" alt=""></p>
<p><a href="https://www.kaggle.com/keremt/12th-place-rsna-pe-inference?scriptVersionId=45505224" target="_blank">Inference Kernel</a></p>
<p>Other Notes:</p>
<ul>
<li>5 folds validation scheme.</li>
<li>Sequence model directly optimized on competition metric.</li>
<li>Tried EfficientNet but we had problems with overfitting.</li>
<li>Didn't have time for stacking experiments.</li>
</ul>
<p>Code for this competition will be publicly available in this <a href="https://github.com/KeremTurgutlu/rsna-pulmonary-embolism" target="_blank">repo</a>. </p>
<p>Special thanks to my teammates: <a href="https://www.kaggle.com/jesucristo" target="_blank">@jesucristo</a>, <a href="https://www.kaggle.com/josealways123" target="_blank">@josealways123</a> and <a href="https://www.kaggle.com/atikahamed" target="_blank">@atikahamed</a></p>
<p>Thank you, hopefully next time :)</p>
|
RSNA STR Pulmonary Embolism Detection
|
18th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>I am actually quite surprised I was able to do this well in the competition, I joined way too late and trained very few models. I only had one selected submission that luckily didn't time out. It is a 5-fold efficientnet-b3 followed by a 5-fold sequential model based on its features.</em></p>
<h3>Introduction</h3>
<p>My solution consists of a two-step pipeline, similar to the overpowered baseline:</p>
<ul>
<li>Image level efficientnet-b3, trained to classify whether the image has PE. This model is then used to extract features for each slice of the CT scan.</li>
<li>A sequential model is trained on the features extracted by the CNN; it predicts both the image- and exam-level labels, and directly optimizes the competition metric.</li>
</ul>
<p>I joined the competition 9 days before the end, with the overall motivation of doing something similar to the winners of the previous RSNA competition. <br>
I was able to quickly build the CNN pipeline and started training a bunch of models. The issue was that those models took really long to train on my hardware (1x RTX 2080 Ti), so I had to improvise.</p>
<p>Then, it was about quickly engineering the second part of the pipeline and the inference code, which was far from easy. </p>
<p>Shortly after I joined, the overly powerful baseline was released. I did not end up using any of its components, but it was additional motivation for me to keep pushing.<br>
I was able to come up with my first submission one day before the deadline, which <em>somehow</em> scored 23rd on the public leaderboard. </p>
<h3>Data</h3>
<p>As I could not fit the 900 Gb dataset on my computer, I solely relied on the <a href="https://www.kaggle.com/vaillant/rsna-str-pe-detection-jpeg-256" target="_blank">256x256 jpgs</a> extracted by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a>.<br>
Thanks a lot for making it possible for people like me to join the competition.</p>
<p>I therefore didn't experiment at all with the windowing, and simply loading the data takes a huge part of my runtime.</p>
<h3>First level : Convolutional Neural Networks</h3>
<h4>Undersampling</h4>
<p>The issue with 2D images constructed from CT scans is that neighboring slices are very similar. Therefore it makes sense not to use every slice per patient. <br>
As there are about 400 slices per patient on average, one epoch would take ages, and this is not a path I wanted to go down. <br>
Therefore, I only used 30 images per patient at each epoch; this is done using a custom sampler. </p>
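<p>A minimal PyTorch sketch of such a sampler, mirroring the description in the author's reply further down (shuffle all rows, then keep the first 30 seen per patient); the dataframe and its 'patient' column are assumed names:</p>
<pre><code>import numpy as np
from torch.utils.data import Sampler

class PerPatientSampler(Sampler):
    """Each epoch: shuffle all images, then keep only the first n seen per patient."""
    def __init__(self, df, n_per_patient=30):
        self.df = df                       # one row per image, with an assumed 'patient' column
        self.n = n_per_patient

    def __iter__(self):
        order = np.random.permutation(len(self.df))
        patients = self.df['patient'].values
        counts, kept = {}, []
        for i in order:                    # first n_per_patient seen per patient are kept
            p = patients[i]
            if counts.get(p, 0) < self.n:
                counts[p] = counts.get(p, 0) + 1
                kept.append(int(i))
        return iter(kept)

    def __len__(self):
        return int(self.df.groupby('patient').size().clip(upper=self.n).sum())
</code></pre>
<p>The sampler is then passed to the DataLoader via its <code>sampler</code> argument, so every epoch sees a fresh random subset of slices.</p>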
<p>Once this was done, I was able to train the models for 20 epochs in approximately 9 hours, using a 5-fold grouped by patient validation.</p>
<h4>Models</h4>
<p>Models were trained as part of a classical binary classification problem, using the binary cross-entropy loss.<br>
First experiments were conducted with a ResNeXt-50 model as it is usually a reliable baseline. <br>
I then tried to switch to a bigger ResNeXt-101, but results were not better, so I quickly gave up on big architectures.<br>
The last model I trained is an efficientnet-b3, which was chosen because a batch size of 32 could fit on my GPU. <br>
It performed slightly better, so I stuck with this model; I had no time left to train other models.</p>
<p>The efficientnet was trained for 15 epochs using a linearly scheduled learning rate with a 0.05 warmup proportion. </p>
<h4>Augmentations</h4>
<pre><code>- albu.HorizontalFlip(p=0.5)
- albu.VerticalFlip(p=0.5)
- albu.ShiftScaleRotate(shift_limit=0.1, rotate_limit=45, p=0.5)
- albu.OneOf([albu.RandomGamma(always_apply=True), albu.RandomBrightnessContrast(always_apply=True),], p=0.5)
- albu.ElasticTransform(alpha=1, sigma=5, alpha_affine=10, border_mode=cv2.BORDER_CONSTANT, p=0.5)
</code></pre>
<h3>Second Level</h3>
<h4>Model</h4>
<p>The model I used is an MLP + bidirectional LSTM that predicts both the image and exam targets using the CNN-extracted features as input. <br>
Two 2-layer classifiers are plugged onto the concatenation of the MLP and LSTM outputs.<br>
I used the concatenation of average and max pooling for the exam-level targets.<br>
In addition, multi-sample dropout was used for improved convergence. </p>
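<p>A rough PyTorch sketch of this kind of two-headed sequence model; the hidden size, dropout rate and number of dropout samples are assumptions, and the feature size matches efficientnet-b3's 1536-dimensional pooled features:</p>
<pre><code>import torch
import torch.nn as nn

class SequenceModel(nn.Module):
    def __init__(self, n_feats=1536, d=256, n_img=1, n_exam=9, n_dropouts=5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_feats, d), nn.ReLU())
        self.lstm = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.dropouts = nn.ModuleList([nn.Dropout(0.3) for _ in range(n_dropouts)])
        self.img_head = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, n_img))
        self.exam_head = nn.Sequential(nn.Linear(6 * d, d), nn.ReLU(), nn.Linear(d, n_exam))

    def forward(self, x):                                   # x: (batch, n_slices, n_feats)
        h = self.mlp(x)
        lstm_out, _ = self.lstm(h)                          # (batch, n_slices, 2d)
        feat = torch.cat([h, lstm_out], dim=-1)             # per-slice features (3d)
        pooled = torch.cat([feat.mean(1), feat.max(1).values], dim=-1)   # exam features (6d)
        # multi-sample dropout: average the logits over several dropout masks
        img_logits = torch.stack([self.img_head(dp(feat)) for dp in self.dropouts]).mean(0)
        exam_logits = torch.stack([self.exam_head(dp(pooled)) for dp in self.dropouts]).mean(0)
        return img_logits, exam_logits
</code></pre>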
<h4>Training</h4>
<p>The model was trained using the loss function that matches the metric. <br>
I also used stochastic weight averaging (SWA) for the last few epochs, once again to have a bit more robustness.<br>
A single epoch took approximately a minute. </p>
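<p>For reference, stochastic weight averaging is available out of the box in recent PyTorch versions; a minimal sketch (the training loop, loader and epoch counts here are hypothetical placeholders):</p>
<pre><code>from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

swa_model = AveragedModel(model)                 # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-4)    # learning rate used during the SWA phase

for epoch in range(num_epochs):
    train_one_epoch(model, loader, optimizer)    # hypothetical training loop
    if epoch >= swa_start_epoch:                 # only average the last few epochs
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)                     # recompute BN statistics for the averaged weights
</code></pre>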
<p>The validation scheme is a normal 5-fold, and my CV scores were quite close to the 0.179 score I had on the public LB.</p>
<h3>Inference</h3>
<p>My inference code is available here : <a href="https://www.kaggle.com/theoviel/pe-inference-2" target="_blank">https://www.kaggle.com/theoviel/pe-inference-2</a><br>
I used clipping to make sure the label assignment rules were respected, which dropped my score by approximately <code>0.003</code>.</p>
<h3>Final words</h3>
<p>Congratz to the winners, I'm pretty sure my solution is nowhere near what the top 10 has come up with and I'm really glad I was able to finish 18th. I wanted to tackle a medical imaging challenge for a long time but was always hesitating because of the dataset sizes.</p>
<p>Hopefully next time I don't procrastinate too much and join a bit earlier, I'm pretty sure I'll benefit a lot from teaming up with people and spending more time experimenting.</p>
<p>Also, the code is available on <strong>GitHub</strong>, although I still have some cleaning to do, and the ReadMe to complete : <a href="https://github.com/TheoViel/kaggle_pulmonary_embolism_detection" target="_blank">https://github.com/TheoViel/kaggle_pulmonary_embolism_detection</a></p>
<p>Thanks for reading ! </p>
<p>If you plan on using attention, you might as well use transformers, as several competitors did. It seems to work well.</p>
<p>Thanks Chris ! Turns out having limited computational ressources has some plus sides </p>
<p>You're welcome ! My validation logloss is of about 0.110 on each fold.</p>
<p>Thanks to you two !</p>
<p>What happens is that before starting the epoch, the sampler does a run on all the shuffled images. It keeps only the 30 first it sees per patient to form the batches.</p>
<p><a href="https://www.kaggle.com/ademyanchuk" target="_blank">@ademyanchuk</a> Models from the 2nd stage take all the slices as input</p>
|
RSNA STR Pulmonary Embolism Detection
|
1st place solution with code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners! Thanks to Kaggle and RSNA for hosting this competition and presenting us this interesting problem. The data size is big and of high quality and there is no shakeup. I’m glad I can win this one and I have learnt a lot during this journey.<br>
Special thanks to <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> for providing the topic introduction and useful input processing code. Also, credits should go to last year’s RSNA winners, lots of their ideas are incorporated in my solution.<br>
<a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117242" target="_blank">https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117242</a></p>
<p>My solution is described below. <br>
Code: <a href="https://github.com/GuanshuoXu/RSNA-STR-Pulmonary-Embolism-Detection" target="_blank">https://github.com/GuanshuoXu/RSNA-STR-Pulmonary-Embolism-Detection</a><br>
Inference kernel: <a href="https://www.kaggle.com/wowfattie/notebook6fff7ff27a?scriptVersionId=45476524" target="_blank">https://www.kaggle.com/wowfattie/notebook6fff7ff27a?scriptVersionId=45476524</a></p>
<h1>Preprocessing</h1>
<p>Early after I joined this competition, I noticed that increasing the input image size from 512x512 to 640x640 improves the modeling performance. By browsing the training images, I further noticed that the lungs did not occupy large and consistent portions of the images. This is inefficient because we know input size matters and it is not worth wasting computing time on irrelevant parts of the images, and it could also make it unnecessarily difficult for the model to learn large scale and shift invariance. So, it's necessary to have a high-quality lung localizer. There are some existing pretrained lung localizers online; I did not try them because, according to my observation, it's easy for a CNN to accurately localize the lung area from images as long as we have bbox labels of the lungs. So, I annotated the train data and built a lung localizer with the bboxes and Efficientnet-b0 as the backbone. For simplicity I only annotated four images per study. The training and prediction process also used only four images per study to save time. Some examples of this preprocessing are given below. The localizer is very robust even in some relatively difficult conditions. The idea of preprocessing the input is partly inspired by last year's 2nd place solution.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F478989%2F51ad8deb361d5f8c50f2265970a30d3c%2FPicture1.png?generation=1604080349102371&alt=media" alt=""></p>
<h1>Training/validation split</h1>
<p>Since the provided data are big and of high quality, we don't have to do cross validation, a single training/validation split is reliable enough. In this competition, I randomly set aside 1000 studies for validation and used the rest 6200+ studies for training and hyperparameter tuning. For final LB submission I re-trained my models with the full training set and the optimized hyperparameters.</p>
<h1>Image-level modeling</h1>
<p>I used the same 2-stage training strategy as in last year's RSNA competition. For image-level modeling, the 3-channel input was the PE windows of the current image and its two direct neighbors. Using neighboring images proved to be effective in last year's 1st and 3rd place solutions. My experiments also confirmed that this input setting outperformed single images with 3 types of windows.</p>
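<p>A small numpy sketch of building such a 3-channel input from a z-sorted stack of single-window slices (clamping at the volume edges is an assumption about how boundaries were handled):</p>
<pre><code>import numpy as np

def neighbor_channels(volume: np.ndarray, i: int) -> np.ndarray:
    """volume: (n_slices, H, W) array of PE-windowed slices, sorted by z position.
    Returns an (H, W, 3) image: previous slice, current slice, next slice."""
    lo = max(i - 1, 0)                      # clamp at the volume edges
    hi = min(i + 1, len(volume) - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=-1)
</code></pre>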
<p>Apart from predicting image-level labels, this year we are given various study-level labels. At first glance it appeared to me that, because the inputs of the study-level models are image embeddings, we would need to use these study-level labels during image-level modeling so that the following study-level model could have sufficient knowledge to model and predict them. But after trying lots of combinations of them and various loss-masking tricks, the best performing model in both the image-level and study-level stages was still the one trained with the image-level labels only. I'm a little puzzled how the image embeddings come to encode the study-level information, for example the exact position labels (center, left, right) and the more refined acute and chronic, when the image-level models were not trained using any of those labels.</p>
<p>The training loss was the vanilla BCE loss with a linear LR scheduler. No special data sampling was applied. I found that a single epoch through the train data was optimal for my settings. The best augmentations were:</p>
<pre><code>albumentations.RandomContrast(limit=0.2, p=1.0),
albumentations.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=20, border_mode=cv2.BORDER_CONSTANT, p=1.0),
albumentations.Cutout(num_holes=2, max_h_size=int(0.4*image_size), max_w_size=int(0.4*image_size), fill_value=0, always_apply=True, p=1.0),
</code></pre>
<p>My final ensemble consisted of one seresnext50 and one seresnext101. Their respective validation performance for image-level PE prediction was:</p>
<pre><code> Loss AUC
seresnext101 0.079 0.964
seresnext50 0.080 0.962
</code></pre>
<p>Other good backbones are inception_resnet_v2 and efficientnets. Densenets and resnexts performed a lot worse. Inputs were resized to 576x576 after the lung localization; this was the largest size with which the models could finish running within the 9 hours.</p>
<h1>Study-level modeling</h1>
<p>Image embeddings of dimension 2048 served as the input to an RNN for both image-level and study-level modeling. </p>
<p>One thing we needed to handle was that the number of images per study could vary from 100+ to 1000+. As we don't know the characteristics of the private test data, it was hard to predefine an input sequence length for our RNN model if we want to predict all the images. Stacking all the images into a 3-D array and resizing it along the z-axis before generating image embeddings is an option, but it was not compatible with my inference pipeline. For convenience, I swapped the order of embedding generation and resizing; in other words, I chose to resize the features instead of the images. For example, given a study which has N images, the input feature shape is Nx2048. If the max sequence length limit in the RNN is M, the cv2.resize function is applied to resize features to Mx2048 if N>M; otherwise, if N<M, zero-padding is used. The image-level labels and the predictions are zoomed in and out in the same way during training and inference. To find the best M, I ran a search with a step size of 32, and M=128 gave the best performance. In the train set, the majority of Ns is in the range of 200-250, which means that downsizing across the z-position before sequence modeling improves the performance. In my final models, I actually set M=192 because I believed there might be more big Ns in the private test data.</p>
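<p>The feature-resizing trick boils down to a few lines, treating the Nx2048 embedding matrix as an image and interpolating only along the slice axis; a sketch:</p>
<pre><code>import cv2
import numpy as np

def resize_sequence(feats: np.ndarray, M: int) -> np.ndarray:
    """feats: (N, 2048) per-slice embeddings for one study, sorted by z position."""
    N, C = feats.shape
    if N > M:
        # cv2.resize expects dsize as (width, height); interpolate along the slice axis only
        return cv2.resize(feats.astype(np.float32), (C, M), interpolation=cv2.INTER_LINEAR)
    out = np.zeros((M, C), dtype=np.float32)    # zero-pad shorter studies
    out[:N] = feats
    return out
</code></pre>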
<p>Inspired by last year's 2nd place solution, I also computed the difference of embeddings between the current slice and its two direct neighbors and concatenated them with the current features, so the input size was expanded to 2048x3. </p>
<p>The exact RNN architecture is not very important; I settled on a single bidirectional GRU layer, with the study-level labels predicted from the concatenation of attention-weighted average pooling and max pooling over the sequence. My local validation loss was around 0.18; I have no idea why it is much higher than the LB scores.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F478989%2F5aabec8cb08e9bc19b65b9efd4c0dcdd%2FPicture5.png?generation=1604080838336664&alt=media" alt=""></p>
<h1>Postprocessing</h1>
<p>The main purpose of the postprocessing step is to satisfy the consistency requirement of the labels. Since this consistency requirement agrees with how the data was labeled, a careful postprocessing could improve the performance. In my case, the local validation showed a tiny improvement after postprocessing. The brief workflow of the postprocessing is:</p>
<pre><code>for each study:
if the original predictions satisfy the consistency requirement
do nothing
else
change the original predictions into consistent positive predictions, and compute loss between them
change the original predictions into consistent negative predictions, and compute loss between them
choose from the positive and negative predictions based on which causes the smaller loss
</code></pre>
<p>The weights of the loss function are almost the same as in the competition metric, except that the q_i image-loss weight is replaced by a fixed 0.005 because we don't have the ground truth of the test data. Code for this postprocessing can be found in my inference kernel.</p>
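<p>To make the template-selection mechanism concrete, here is a heavily simplified sketch. The actual consistency rules, templates and weights come from the competition's label-consistency requirements and data description, so everything below is an illustrative placeholder rather than the author's implementation:</p>
<pre><code>import numpy as np

def study_wll(y_true, y_pred, w, eps=1e-7):
    """Weighted binary log loss over one study's exam + image labels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    ll = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return float((w * ll).sum() / w.sum())

def make_consistent(preds, weights, pos_template, neg_template):
    """preds/weights: arrays over this study's labels; the templates are the nearest
    fully consistent 'positive' and 'negative' label vectors. Keep whichever template
    the raw predictions fit best under the weighted loss."""
    pos_loss = study_wll(pos_template, preds, weights)
    neg_loss = study_wll(neg_template, preds, weights)
    return pos_template if pos_loss < neg_loss else neg_template
</code></pre>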
<p>I only have a result for a naive model: with lung localization, the AUC improved from 0.945 to 0.953.</p>
<p>Thanks. Congratulations to you too.<br>
I computed the largest possible bbox from the four bboxes for each study/scan. The code from my inference kernel.</p>
<pre><code> xmin = np.round(min([bbox[0,0], bbox[1,0], bbox[2,0], bbox[3,0]])*512)
ymin = np.round(min([bbox[0,1], bbox[1,1], bbox[2,1], bbox[3,1]])*512)
xmax = np.round(max([bbox[0,2], bbox[1,2], bbox[2,2], bbox[3,2]])*512)
ymax = np.round(max([bbox[0,3], bbox[1,3], bbox[2,3], bbox[3,3]])*512)
bbox_dict[series_id] = [int(max(0, xmin)), int(max(0, ymin)), int(min(512, xmax)), int(min(512, ymax))]
</code></pre>
<p>I used both max and attention pooling. The attention pooling is similar to average pooling.</p>
<p>A sequence model may not be suitable for your problem. You can take a look at some previous Kaggle competitions, for example:<br>
<a href="https://www.kaggle.com/c/yelp-restaurant-photo-classification" target="_blank">https://www.kaggle.com/c/yelp-restaurant-photo-classification</a><br>
<a href="https://www.kaggle.com/c/cdiscount-image-classification-challenge" target="_blank">https://www.kaggle.com/c/cdiscount-image-classification-challenge</a></p>
|
RSNA STR Pulmonary Embolism Detection
|
23rd Place - Fast GPU Experimentation Pipeline!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Radiological Society of North America (RSNA®), Society of Thoracic Radiology (STR), and Kaggle for hosting this fun competition. Thank you Nvidia for providing compute resources.</p>
<p>This has been one of my favorite competitions. I enjoyed building an elaborate multi-stage pipeline of stacked models! Working with 3D images was fun and provided an additional challenge compared with 2D images. I particularly enjoyed tackling the challenge of building a fast experimentation pipeline when the training data is <code>1_000_000_000_000 bytes</code> of data! One trillion bytes! This is the largest dataset I have ever worked with.</p>
<h1>RSNA STR Pulmonary Embolism Detection</h1>
<p>In the figure below, each row is an exam (i.e. study, i.e. single patient). The row of images are CT scan "slices" from the 3D image of a patient's chest. In this competition, we need to predict 9 targets for each patient (each row) (like is pe on left side? on right side? etc) and we need to classify every image (is pe present?). If below were all the data, then we would need to predict <code>3 rows * 9 targets = 27 exam targets</code> and <code>15 images * 1 target = 15 image targets</code>. In total we would need to predict 42 targets.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F7d5ce77d471fbf8ef7e746731eab734c%2Fstudy.png?generation=1603731145650332&alt=media" alt=""></p>
<h1>Understanding the Metric</h1>
<p>On the description page, the metric seems very confusing. However it is just a weighted average of 10 log losses (the 9 types of exam predictions and the 1 type of image prediction). (I explain the metric <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/193598" target="_blank">here</a>). After computing the 10 weights, we find that 50% of our LB score is from the 9 exam predictions and 50% of our LB score is from the image predictions. Furthermore, the log loss for the image predictions is itself a weighted log loss where an image that is part of an exam without pulmonary embolism has weight zero (very important observation!) </p>
<ul>
<li>Improving image level predictions is equally important as improving exam predictions.</li>
<li>There is no penalty for false positives, so we can train our image prediction models with only the 30% of the data from positive exams!</li>
</ul>
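<p>To make that image-level weighting concrete, here is a small pandas sketch of the per-image weight q_i. The constant per-image factor is quoted from the competition's data description from memory, so treat it as a value to verify:</p>
<pre><code>import pandas as pd

IMAGE_W = 0.07361963  # per-image constant from the data description (verify before relying on it)

def image_weights(train: pd.DataFrame) -> pd.Series:
    """train: one row per image with 'StudyInstanceUID' and 'pe_present_on_image'.
    q_i is the fraction of PE-positive images in the image's exam, so every image of a
    PE-negative exam gets weight 0 and larger emboli (more positive slices) count more."""
    q = train.groupby('StudyInstanceUID').pe_present_on_image.transform('mean')
    return q * IMAGE_W
</code></pre>
<p>Up to a constant rescaling, this is the same quantity used as <code>sample_weight</code> in the training code of the next section (there it is q multiplied by 5.6222, which is explained in the comments further down).</p>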
<h1>Stage 1 - Model One - Image Level Predictions</h1>
<h1>(CNN EfficientNet B4)</h1>
<pre><code>inp = tf.keras.Input(shape=(320, 320, 1)) # INPUT IS UINT8
x = tf.keras.layers.Concatenate()([inp/255., inp/255., inp/255.])
base_model = efn.EfficientNetB4(weights='imagenet', include_top=False)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=x)
opt = tf.keras.optimizers.Adam(lr=0.000005)
model.compile(loss='binary_crossentropy', optimizer = opt)
model.fit(X, y, sample_weight = X.groupby('StudyInstanceUID')
.pe_present_on_image.transform('mean') * 5.6222 )
</code></pre>
<p>I built two models. Model one predicts image level predictions (i.e. <code>pe_present_on_image</code>) and model two predicts patient level predictions (i.e. exams i.e. studies, like <code>leftsided_pe</code> etc)</p>
<ul>
<li>EfficientNet B4 pretrained on <code>imagenet</code></li>
<li><strong>Only Mediastinal window, (ie. level=40, width=400)</strong> i.e. 1 channel <code>uint8</code></li>
<li>Random crops of 320x320 from 512x512</li>
<li>Rotation (+-8 deg) Scale (+-0.16) augmentation</li>
<li>Coarse Dropout (16 holes sized 50x50)</li>
<li>Mixup (swap slices of similar Z position with other exams)</li>
<li>Adam optimizer with constant <code>LR = 5e-6</code></li>
<li>Training sample weight equal to pe proportion in exam</li>
<li><strong>Only train on 30% of train data with pe present in exam</strong></li>
<li>40 minute epochs using 4x V100 GPU</li>
<li>Train 15 epochs with batch size 128</li>
</ul>
<p>Below illustrates my augmentations. For display purposes, we illustrate Mixup with a large yellow, green, or blue square so you can see it better. (During runtime, it was an actual second image).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F5515206e0fd5e5a7379f6272e0e56dc3%2Fmixup.png?generation=1603733206613194&alt=media" alt=""> </p>
<h1>Stage 1 - Model Two - Patient Level Predictions</h1>
<h1>(CNN EfficientNet B4)</h1>
<p>Each patient has an average of 200 images. Among those 200, if PE is present, it is usually on the middle slices. Therefore I only train my patient level model with slices <code>0.35 < z < 0.65</code>. Then to predict the 9 targets for each patient, I only infer <code>0.35 < z < 0.65</code> and then take the 9 average predictions.</p>
<ul>
<li>Most details same as model one</li>
<li><strong>Only Mediastinal window, (ie. level=40, width=400)</strong> i.e. 1 channel <code>uint8</code></li>
<li>Output layer of 9 sigmoid units</li>
<li><strong>Only train on 30% of train data with Z Position between <code>0.35 < z < 0.65</code></strong></li>
<li>Loss <code>weighted_log_loss</code></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F5ed4f1ae6291e5edc4f422b479214f80%2Fslices.png?generation=1603734372678374&alt=media" alt=""></p>
<h1>Experimentation Pipeline</h1>
<p>How do we discover the details above? All the settings above were discovered by performing dozens of experiments on <strong>smaller images and smaller backbones</strong>. For example, use 128x128 (with 80x80 crops) EfficientNetB0 and/or 256x256 (with 160x160 crops) EfficientNetB2. Using these smaller models, we can test out ideas on a single GPU in minutes! Also note that we only use 1 channel images of <code>uint8</code>. This is 33% less data than converting images to 3 channels of 3 different CT window schemes.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F60349090637a446427fc8f7c8775a70f%2Fexp2.png?generation=1603744581352698&alt=media" alt=""></p>
<p>Remember we are only training with 30% original data. Then using crops makes it 12% of data. Then using 256x256 reduces this to 3% of data. And using 128x128 reduces this to 0.75% of data! Even Kaggle notebooks P100 GPU can train quickly on 80x80 crops from 128x128 and 30% train data.</p>
<p>Once you find a configuration that works well, then run 512x512 with EfficientNetB4 overnight. Using only 2D predictions for image and 2D predictions for patient, <strong>the above two models obtain LB 0.215 and CV 0.235.</strong> </p>
<p>We will now increase our CV LB by building stage 2 models that use stage 1 predictions as input</p>
<h1>Stage 2 - Model One - Image Level Predictions</h1>
<h1>(Random Forest)</h1>
<pre><code>FEATURES = ['oof']
for k in NEIGHBORS:
tmp = train.sort_values('PosZ').groupby('StudyInstanceUID')[['oof']]
train['b%i'%k] = tmp.shift(k)
train['a%i'%k] = tmp.shift(-k)
FEATURES += ['a%i'%k, 'b%i'%k]
train.fillna(-1,inplace=True)
model = RandomForestClassifier(max_depth=9, n_estimators=100,
n_jobs=20, min_samples_leaf=50)
model.fit(train.loc[idxT,FEATURES],train.loc[idxT,'pe_present_on_image'],
sample_weight = 5.6222 * valid.loc[idxT,'weight'])
</code></pre>
<p>All images are slices from 3D images. So adjacent images (within the same exam) contain helpful information. Each plot below displays all 200 or so image level predictions from 1 study. The x axis is z position and the y axis is the prediction value (0 to 1). The blue line is the ground truth, the orange line is the prediction from the model described above. The black line is the random forest Stage 2 model.</p>
<p>For each image level prediction, a random forest model takes as input the prediction and neighbor predictions [1,2,3,4,5,6,7,8,9,10,15,20,25,30,35,40,45,50,60,70,80,90,100,150,200,250] on either side. Then the random forest model predicts a new image level prediction display in black below. Notice when the original prediction is close to 1, then the random forest pushes it up to 1 and when the original prediction is close to 0, then the random forest pushes it down to 0.</p>
<p><strong>This stage 2 image level model increased LB to 0.204 from 0.215 and CV to 0.224 from 0.235 (gain = 0.011)</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fcccdbe82e765c3e7d762031b358dd290%2Fi-smooth.png?generation=1603736540573190&alt=media" alt=""></p>
<h1>Stage 2 - Model Two - Patient Level Predictions</h1>
<h1>(GRU + 1D-CNN)</h1>
<pre><code>inp = L.Input(shape=(64, 1792))
x = L.Bidirectional(L.GRU(48, return_sequences=True,
kernel_initializer='orthogonal'))(inp)
x = L.Bidirectional(L.GRU(48, return_sequences=False,
kernel_initializer='orthogonal'))(x)
x = L.Dense(9, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=x)
opt = tf.keras.optimizers.Adam(lr=0.00005)
model.compile(loss=weighted_log_loss, optimizer = opt)
</code></pre>
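<p>The <code>weighted_log_loss</code> used above is not shown in the post; a plausible definition is a binary cross-entropy weighted per exam target with the metric's label weights. The numeric values below are the exam-label weights as I recall them from the competition's data description, and the label ordering is an assumption, so verify both against your own label columns:</p>
<pre><code>import tensorflow as tf

# Assumed label order: negative_exam_for_pe, indeterminate, chronic_pe, acute_and_chronic_pe,
# central_pe, leftsided_pe, rightsided_pe, rv_lv_ratio_gte_1, rv_lv_ratio_lt_1 (verify!)
EXAM_W = tf.constant([0.0736196319, 0.0920245399, 0.1042944785, 0.1042944785,
                      0.1877300613, 0.0625766871, 0.0625766871, 0.2346625767,
                      0.0782208589], dtype=tf.float32)

def weighted_log_loss(y_true, y_pred):
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    bce = -(y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_sum(EXAM_W * bce, axis=-1) / tf.reduce_sum(EXAM_W)
</code></pre>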
<p>Similarly we can use adjacent slice information to improve our patient level predictions. Most patients have between 160 and 310 images per study.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fa4a52476743ad1e1ada257c586c607fb%2Fhist2.png?generation=1603741964817694&alt=media" alt=""></p>
<p>Below are plots of the top view (Z position is vertical axis) of the 3D image (not the ordinary slice view of the 3D image). We notice that most of the crucial information is between 25% and 75% in top view. Therefore we extracted 64 images equally spaced between 25% and 75% Z position. Then we took those 64 images and extracted the GAP embeddings from both our stage 1 model one and stage 1 model two. We trained a stage 2 GRU model and a stage 2 1D-CNN model to produce exam-level predictions from these 64 GAP embeddings.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fe46e41fab360d847c3fadb5d6dff9002%2Ftop.png?generation=1603741873425399&alt=media" alt=""></p>
<p><strong>This stage 2 exam level model increased LB to 0.183 from 0.204 and CV to 0.203 from 0.224 (gain = 0.021)</strong></p>
<h1>Other Misc Ideas</h1>
<p>When using global average pooling 2D in your CNN, location information is lost. Therefore I tried giving my models location information in various ways to help predict targets related to location (i.e. <code>leftsided_pe</code> etc) and related to size (i.e. <code>rv_lv_ratio_gte_1</code> etc). Unfortunately, none of my ideas increased CV or LB. My favorite is below.</p>
<h2>Locating PE with Class Activation Maps CAM</h2>
<p>I extracted class activation maps from my stage 1 models and fed the location information into stage 2 exam models. (CAMs explained <a href="https://www.kaggle.com/cdeotte/unsupervised-masks-cv-0-60" target="_blank">here</a>). In the below figure, the ground truth is in the title. The green circles are the CAM of my EfficientNetB4 model. Note that CT scans are flipped so the left side of the image is "right" and the right side of the image is "left". You can see that the CAM does a good job of locating the PE.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F291dc983514a322d5ce4e56b22a9a681%2Fcam.png?generation=1603755601607192&alt=media" alt=""></p>
<h1>Thank you</h1>
<p><a href="https://www.kaggle.com/jensvannahl" target="_blank">@jensvannahl</a> We build the stage 1 model with only 1 output and train it with 1 output. After it is done training, you build a second model that has 2 outputs and transfer the trained weights from the first model to the second model. Then use the second model for inference only (i.e. <code>_, GAP = model2.predict(train)</code> )to extract the GAP layer. (We do not train any model that has 2 outputs).</p>
<p>Hi Aman. For each patient (i.e. study), I discard all slices that are outside <code>0.35 < z < 0.65</code>. So for example, if a patient has 100 slices, I sort them in Z order and then discard the first 35 and last 35 slices. I only keep the 30% middle slices from <code>0.35 < z_normalized < 0.65</code>. I put all the slices I'm keeping from all the patients in <code>X_train_exam</code>.</p>
<p>I then train an EfficientNet using on <code>X_train_exam</code> with an output layer of 9 sigmoid units and weighted log loss (based on the metric weights for each of the 9 exam targets).</p>
<p>Finally for inference, for each patient, I predict the 9 targets for each of a patient's 30% middle slices (i.e. <code>X_test_exam</code>). So if a test dataset patient originally has 100 slices, i will predict for 30 middle slices. I now have 30 predictions for each of the 9 targets. For a specific target such as <code>rv_lv_ratio_gte_1</code>, i will take the average of those 30 predictions and submit that to Kaggle for that patient.</p>
<p>==========</p>
<p>Using the above technique, I could make a submission to Kaggle using only Stage 1 models and achieved LB 0.215. After building a Stage 2 model to predict exam level predictions, I no longer used these Stage 1 predictions in my final submission. </p>
<p>For a while I ensembled my Stage 1 and Stage 2 exam predictions (and that was better than using either alone). But then my Stage 2 model became so accurate that using Stage 1 predictions only made it worse.</p>
<p>The factor <code>5.6222</code> isn't really needed. Instead I could increase the learning rate by <code>5.6222</code> or just train longer.</p>
<p>The point is this. When an NN adjusts the model weights during training, it uses the error gradient multiplied by learning rate multiplied by sample weight. So when you adjust the sample weight you effectively adjust your model's learning rate.</p>
<p>In order to keep the same learning rate that worked before using sample weight, I adjust by 5.6222. The scaling factor <code>5.6222 = train.shape[0]/train.weight.sum()</code></p>
<p>The sample weight parameter adds a new training weight to each row of the training data. Notice that if each row had default <code>sample weight = 1</code>, then <code>train.shape[0]/train.weight.sum() = 1</code> and no adjustment is needed.</p>
<p>The metric is more complicated than oversampling. Imagine we have two patients, <code>patient A</code> and <code>patient B</code>. Now say that <code>patient A</code> has 400 slices and <code>patient B</code> has 300 slices. Imagine that <code>patient A</code> has <strong>only</strong> one slice with <code>pe_present_on_image = 1</code>. Imagine <code>patient B</code> has 150 slices with <code>pe_present_on_image = 1</code>.</p>
<p>According to the competition metric, the weight associated with <code>patient A's</code> one slice should be <code>1/400</code> and the weight associated with each of <code>patient B's</code> 150 slices should be <code>150/300</code> each. Notice that we aren't just making <code>pe_present_on_image = 1</code> oversampled.</p>
<p>There is a fundamental difference between <code>patient A's</code> one positive slice and <code>patient B's</code> 150 positive slices. The pulmonary embolism in <code>patient A</code> will look different because it is smaller and only on one slice. The embolism in <code>patient B</code> will be larger and cover multiple slices. We need to weight these larger embolisms more. We cannot simply oversample and naively weight all embolisms (i.e. <code>pe_present_on_image = 1</code>) more.</p>
<p>Thanks Rob. Congrats to your team too.</p>
<p>It's my intuition and CV proved it correct. (GBM had lower CV) My stage 2 random forest has 53 features. They are <code>oof</code> and then 26 forward shifts and 26 backward shifts of oof. I wanted all these features to be treated somewhat similar. I didn't want the model searching for signal in specific shifts.</p>
<p>In Ion comp, random forest also did better than GBM for the same reason when using signal and shifted signal. (Random forest overfitted less than GBM).</p>
<p>I haven't compared all the GPUs yet myself. The main difference between the V (professional) series and the RTX (consumer) series is that the V series double checks to prevent computational errors and V series has more VRAM which is very helpful. However for building models (which use random augmentations already), an occasional error may not matter. So perhaps RTX is more bang for your buck. But maybe you will need a few of them so you have enough VRAM.</p>
<p>Thank you Redwan <a href="https://www.kaggle.com/redwankarimsony" target="_blank">@redwankarimsony</a> . Your notebooks were very helpful. I read all of them and used some of your ideas. Thank you.</p>
<p>Thanks Laura. Using only one "window" (1-channel) reduced disk and RAM storage by 66% compared with saving 3 channel images to disk or RAM. (also note that original dicom is <code>uint16</code>, so applying windowing additionally reduces data by 50%). Furthermore, all images are <code>uint8</code> outside of the model which is 75% less RAM than <code>float32</code> images. (i.e. don't divide by 255 outside the model).</p>
<p>So, when using 128x128 <code>uint8</code> 1-channel images, all the train data can fit in 8.9GB. (that's 542_769 of the 1_790_594 train images where exam has PE). You don't even need to read from hard drive after putting it all in RAM. The data loader is incredibly fast. I'll say that again, the original 1_000_000_000_000 bytes (1TB) of train data can fit in 8.9GB of memory!</p>
<p>Then the model receives the 1-channel image and converts to 3-channel <code>float32</code> before inputting into EfficientNet as below</p>
<pre><code># THIS MODEL TRAINS ON RANDOM 80X80 CROPS FROM 128X128
inp = tf.keras.Input(shape=(80, 80, 1)) # INPUT IS UINT8
x = tf.keras.layers.Concatenate()([inp/255., inp/255., inp/255.])
base_model = efn.EfficientNetB0(weights='imagenet', include_top=False)
x = base_model(x)
</code></pre>
<p>I experimented with using different windowing schemes, but nothing increased my CV LB. I even did creative stuff like making 7 channel images where the additional 6 channels were neighboring slices. But again it didn't help. I also did strange stuff like embedding location information into images. For example, put left side of image in Red channel and right side of image in Blue channel but again it didn't help.</p>
<p>Thanks Theo</p>
<p>Good insight. I elaborate further <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/193598" target="_blank">here</a></p>
<p><code>GAP embedding</code> stands for output of the global average pooling 2D layer. The model below outputs both the <code>pe_present_on_image</code> prediction and the GAP embedding. </p>
<pre><code> inp = tf.keras.Input(shape=(320, 320, 1))
x = tf.keras.layers.Concatenate()([inp/255., inp/255., inp/255.])
base_model = efn.EfficientNetB4(weights='imagenet', include_top=False)
x = base_model(x)
GAP = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(GAP)
model2 = tf.keras.Model(inputs=inp, outputs=[x, GAP] )
</code></pre>
<p>After training the <code>model</code> described in the main discussion post above, we save the weights <code>model.save_weights('model.h5')</code>, then we load these weights into <code>model2</code> and extract the GAP embeddings as follows</p>
<pre><code>model2.load_weights('model.h5')
_, GAP = model2.predict(X_train)
</code></pre>
<p>Now <code>GAP</code> is a NumPy array of dimension <code>[ len(X_train), 1792]</code> because EfficientNetB4 has 1792 features in its global average pooling 2D output. If <code>X_train</code> is organized as every 64 rows is one exam, then we reshape GAP with <code>GAP = GAP.reshape((-1,64,1792))</code>. We then train our <code>model_GRU</code> with <code>model_GRU.fit(GAP, targets)</code> where <code>targets</code> has dimension <code>[ len(X_train)/64, 9]</code> and contains the exam targets.</p>
<p>Thanks Sayid</p>
|
RSNA STR Pulmonary Embolism Detection
|
28th Place - Quick writeup - Improving the baseline
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/watzisname" target="_blank">@watzisname</a> - Congrats to you on a solid finish too!</p>
<p>If you send me your post processing function I could run it on our best sub and see what LB score we get.</p>
<p>Interesting. Does your submission pass the <code>check_label_consistency</code> function tests?</p>
<p>I want to thank my amazing team <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> <a href="https://www.kaggle.com/cateek" target="_blank">@cateek</a> and <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a>. This was a very challenging competition and I had a great time collaborating as a team and discussing ideas. Also thanks to the host and winners! I trust the top solutions will positively benefit physicians and those impacted by pulmonary embolism.</p>
<p>Even though our best submission was based off the "baseline" kernel that was released a week before the deadline, it's a little bittersweet. It's disappointing that such a high-scoring kernel was released so late; I know it frustrated many teams (ours included). Our team was working on other end-to-end and 3D-based models, and I took on the role of improving the baseline as our backup plan.</p>
<p>This turned into more of an engineering challenge, as we had to balance the limited inference time with such a large amount of data, without exceeding GPU or local memory. It was also challenging because inference time seemed to vary randomly: a submission might complete in 7 hours, but then a nearly identical submission would go over 9 hours and fail.</p>
<p>Differences between our solution and the public kernel:</p>
<ul>
<li>Trained a b6 in place of b0 for the stage 1 models.</li>
<li>Changed the inference loop to bag predictions from all 10 (5 folds x 2) stage 1 models.</li>
<li>Bagged 5x predictions from stage 2.</li>
<li>Changed the code so that it would only predict the private test during inference and used offline calculated public test predictions.</li>
<li>Modified the code to only loop through the dataloader once instead of twice for stage 1 predictions.</li>
<li>Ahmet used some magic to tweak the stage 2 model to get some added boost, including modifying the loss function.</li>
</ul>
<p>What didn't work:</p>
<ul>
<li>ResNext101 in stage 1 models, although it had a slightly better CV score.</li>
<li>Parallel GRU + LSTM for stage 2 model.</li>
</ul>
<p>Our solution also meets the required criteria and shouldn't have conflicting label predictions. Our best submission ignoring these restrictions would've given us a private LB score of 0.177 but we didn't select it to stay within the rules.</p>
<p>I'd also like to personally thank Z by HP and NVIDIA for providing me the Z8 desktop which I put to good use on this challenge.</p>
<p>Thanks Chris! Removing the rules gave us a pretty big bump on LB ~0.01 consistently. I'm not sure exactly why other teams didn't experience this. The 0.177 sub seems like it might have been extra lucky too, <a href="https://www.kaggle.com/robikscube/b0b6-cnn-gru-baseline-stage2-train-inference?scriptVersionId=45499507" target="_blank">the submission is version 24 here if you want to take a closer look</a>.</p>
|
RSNA STR Pulmonary Embolism Detection
|
30th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> our teammate Professor Bacterio is the true fastai master here.</p>
<p>thanks Nico!!! You are GM in the make!!💯</p>
<p>thanks <a href="https://www.kaggle.com/amiiiney" target="_blank">@amiiiney</a>. We will continue improving :D</p>
<p>Thanks! Wish you luck too :D</p>
<p>😬😬😬doing our best here.</p>
<p>Thanks to the Kaggle team and organizers for this challenging competition. Huge props to my teammates <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a>, they worked very hard!! We have learned a lot and now we are exhausted both computationally and physically 😃, just like many of you. This solution was developed within 15 days and it's far from the top winning solutions. But hopefully there's something helpful in it!</p>
<h2>TL; DR</h2>
<ul>
<li><p><strong>Efficientnet B1</strong> for image level, <strong>Efficientnet 3D CNN</strong> + <strong>Resnet 3D</strong> for study level. <strong>CNN + Transformer</strong> for image and study level.</p></li>
<li><p>Used TFRecords by the organizers and windows function by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a>.</p></li>
<li><p>Applied mask to hide left/right side of the image</p></li>
<li><p>Optimized blending weights based on OOF calculation.</p></li>
</ul>
<h2>Timeline</h2>
<ul>
<li><p>We started the competition about 15 days ago. </p></li>
<li><p>Trained <strong>Efficientnet B0</strong> at the image level and blended it with the mean predictions by <a href="https://www.kaggle.com/osciiart" target="_blank">@osciiart</a> (the public notebook that scored 0.325): 0.4 * mean prediction + 0.6 * Efficientnet B0 for the image level, along with the mean prediction for the study level. This gave us LB <code>0.292</code>. </p></li>
<li><p>Developed a <strong>Resnet3D</strong> that scored <code>0.370</code> LB. We ran a local validation for this model and found that it did well on the study level but poorly on the image level. Hence, for the study level, we replaced the mean prediction with our Resnet3D and achieved <code>0.253</code> LB.</p></li>
<li><p>Improved <strong>Efficientnet B0</strong> with masks(masking one half of the image). And achieved <code>0.248</code> LB. </p></li>
<li><p>The amazing public baseline came out and we were frustrated because it scored <code>0.233</code> and it made us panic. We tried to run inference with it and it took forever, so we decided not to incorporate it into our pipeline.</p></li>
<li><p>We added <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> to our team, blended his predictions with our existing results (taking the mean), and got <code>0.226</code> LB. <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> had a <strong>CNN + Transformer</strong> at that time, which is completely different from our approach. It calmed us down a bit; we knew that if we continued improving both the image level and the study level, we would get better.</p></li>
<li><p>We improved further by training three <strong>Efficientnet B1</strong>s with similar techniques, and <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> came up with a new architecture, <strong>Efficientnet 3D CNN</strong>; together we got <code>0.212</code> LB. </p></li>
<li><p>Oh! We only had <strong>two days left</strong>. What should we do? We realized that it was quite unrealistic to add another architecture, so we worked on upgrading the existing models. We added TTAs to <strong>Efficientnet 3D CNN</strong> and fine-tuned <strong>Efficientnet B1</strong>, <strong>Resnet 3D</strong> and <strong>Efficientnet 3D CNN</strong>, and we got <code>0.204</code> LB.</p></li>
</ul>
<h2>Modeling</h2>
<p>In this section I will show our final models.</p>
<h3>Efficientnet B1(TF)</h3>
<ul>
<li>Trained with TPU using TFRecords provided by the organizer</li>
<li>3 windows as 3 channels by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a></li>
<li>Configurations:<ul>
<li>Batchsize: 768</li>
<li>Epochs: 17</li>
<li>Scheduler: 9 epochs <strong>1e-3</strong>, 2 epochs <strong>1e-4</strong> and 6 epochs <strong>1e-5</strong>. Three phases, pick the best model at each phase and continue with a new learning rate next phase.</li></ul></li>
<li>Augmentations:<ul>
<li>rotation, shear, hzoom, wzoom, hshift, wshift by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. <a href="https://www.kaggle.com/cdeotte/rotation-augmentation-gpu-tpu-0-96" target="_blank">https://www.kaggle.com/cdeotte/rotation-augmentation-gpu-tpu-0-96</a></li>
<li>mask, basically a cutout; this <strong>significantly improved our validation loss</strong>. Implementation by <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> shown in the screenshots below (a rough sketch is also given after this list).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F27df2eb5da63345c737175b23062be4f%2Fmask.png?generation=1603764000748568&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Fb03deaf567cceb73532a54fcdca4d53d%2FScreenshot_2020-10-26%20b1_fold3%20Kaggle.png?generation=1603764244357127&alt=media" alt=""></li></ul></li>
<li>Results:<ul>
<li>Around <code>0.22</code> image level weighted loss</li></ul></li>
</ul>
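<p>The screenshots above show the actual implementation; as a rough guess at the idea, the sketch below zeroes out either the left or the right half of the image at random inside a tf.data pipeline (the probability and the exact region are assumptions):</p>
<pre><code>import tensorflow as tf

def random_half_mask(image, p=0.5):
    """image: (H, W, C) float tensor; with probability p, zero out the left or the right half."""
    shape = tf.shape(image)
    h, w, c = shape[0], shape[1], shape[2]
    def masked():
        keep = tf.concat([tf.ones(tf.stack([h, w // 2, c])),
                          tf.zeros(tf.stack([h, w - w // 2, c]))], axis=1)
        # flip the mask half of the time so either side can be hidden
        keep = tf.cond(tf.random.uniform([]) < 0.5, lambda: keep, lambda: 1.0 - keep)
        return image * tf.cast(keep, image.dtype)
    return tf.cond(tf.random.uniform([]) < p, masked, lambda: image)
</code></pre>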
<h3>ResNet 3D(TF)</h3>
<ul>
<li>Randomly take 256 images from each study distributed by position; if the study does not contain 256 images, tile</li>
<li>Input shape: [256, 256, 256, 3], output shape: 256(image level) + 9(study level)</li>
<li>ResNet 50 Backbone</li>
<li>Change Conv2D to Conv3D</li>
<li>Keep z dim as input and use it for image level label</li>
<li>It can predict image-level labels as well, but the performance was not as good as we expected</li>
<li>Results:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F1ea1b91bcae244d0c00abda661385556%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta.png?generation=1603765140289343&alt=media" alt=""></li>
</ul>
<h3>EfficientNet 3D CNN(TF)</h3>
<ul>
<li>Connect 256 <strong>EfficientNet B0</strong>s with a <strong>TimeDistributed</strong> layer (see the sketch after this list)</li>
<li>Use the same identity_block and conv_block as in ResNet on top</li>
<li>It can also predict image-level labels, but the performance was not as good as we expected</li>
<li>Results:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F617169bcd0724022099c5bf12568dbdb%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta(1).png?generation=1603765158757751&alt=media" alt=""></li>
</ul>
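<p>A minimal Keras sketch of the TimeDistributed idea referenced in the list above: one shared EfficientNet B0 applied to every slice of a study. The head here is a plain bidirectional GRU stand-in rather than the ResNet-style identity/conv blocks the team used, and the image size is an assumption:</p>
<pre><code>import tensorflow as tf
import efficientnet.tfkeras as efn

N_SLICES, SIZE = 256, 256
backbone = efn.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg')

inp = tf.keras.Input(shape=(N_SLICES, SIZE, SIZE, 3))
x = tf.keras.layers.TimeDistributed(backbone)(inp)            # (batch, 256, 1280) slice embeddings
img_out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(1, activation='sigmoid'))(x)         # 256 image-level predictions
# Stand-in head for the ResNet-style identity/conv blocks described in the write-up.
h = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128))(x)
exam_out = tf.keras.layers.Dense(9, activation='sigmoid')(h)    # 9 study-level predictions
model = tf.keras.Model(inp, [img_out, exam_out])
</code></pre>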
<h3>CNN + Transformer(Fastai v2 + Pytorch)</h3>
<ul>
<li>Resnet34 Backbone</li>
<li>6 Stacked Reformers head </li>
<li>Trained with 256x256 images with a <strong>single window</strong> but stacking 5 <strong>consecutive slices as channels</strong></li>
<li>Affine Augmentations</li>
<li>Results: around <code>0.245</code> on image level and <code>0.270</code> on exam level with 5xTTAs</li>
</ul>
<h3>Blending</h3>
<ul>
<li>Blending <strong>EfficientNet 3D CNN</strong> and <strong>ResNet3D</strong> for study level based on OOF:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F6c30952a10b542c019b5745f10c213dd%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta(2).png?generation=1603765267593556&alt=media" alt=""><br>
Here the exam-level loss is around <code>0.19</code> with weights <code>[0.71, 0.05, 0.07, 0.38, 0.28, 0.78, 0.71, 0.13, 0.45]</code> for the respective labels (a simple per-label weight search is sketched after this list).</li>
<li>Blended these with the <strong>CNN + Transformer</strong> predictions</li>
<li>Along with our existing image-level models (<strong>EfficientNet B1</strong>), we get around a 0.2 CV</li>
</ul>
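<p>The per-label blend weights mentioned above can be found from OOF predictions with a simple grid search per label; a sketch (the column layout of the arrays is an assumption):</p>
<pre><code>import numpy as np
from sklearn.metrics import log_loss

def per_label_blend_weights(y_true, oof_a, oof_b, n_steps=101):
    """y_true, oof_a, oof_b: (n_studies, 9) arrays of targets and OOF predictions
    from the two models. Returns the weight on model A for each label."""
    weights = []
    grid = np.linspace(0.0, 1.0, n_steps)
    for j in range(y_true.shape[1]):
        losses = [log_loss(y_true[:, j], w * oof_a[:, j] + (1 - w) * oof_b[:, j], labels=[0, 1])
                  for w in grid]
        weights.append(grid[int(np.argmin(losses))])
    return np.array(weights)
</code></pre>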
<h2>Hardware</h2>
<p>Yes, this competition is really heavy on hardware. We also ran into an IO bottleneck, i.e. the GPU cannot run at full power, as shown below. Here is a list of the hardware we primarily used:</p>
<ul>
<li>Kaggle TPU, GCP TPU for Tensorflow development</li>
<li>V100 (32 GB) x 2 for Fastai and Pytorch development</li>
<li>RTX Titan (24 GB) for casual testing<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F6872f49c2ad156431fdceab9512fbae1%2FScreenshot_2020-10-26%20Slack%20general%20KLLEE.png?generation=1603766768077129&alt=media" alt=""></li>
</ul>
<h2>Additional Tips</h2>
<ul>
<li>When <strong>blending Tensorflow and Pytorch</strong> models, it's often good practice to write them as <strong>scripts</strong>, as shown below.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Fd91e7eea26f12e581cf6bf8109853b47%2FScreenshot_2020-10-26%20rsna%20sped%20submit%201026%20tta.png?generation=1603766413299332&alt=media" alt=""></li>
</ul>
<h2>Fin</h2>
<p>Thanks for reading this solution. Happy kaggling!</p>
|