Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
RSNA STR Pulmonary Embolism Detection
|
3rd place 0.156 public lb, 0.148 private lb
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>downvoted =)</p>
<p>Edit: Github added.</p>
<p>First of all, congrats to all the winners.</p>
<p>Our solution is quite straightforward. We mostly use the same tricks from last year's RSNA-IHD with some updated modules for performance improvements, label consistency, and exam-level prediction.</p>
<p>The solution can be divided into 3 main parts:</p>
<p>Part one is single-image training. We used a CNN to predict the <code>pe_present_on_image</code> probability for each slice and modified the CNN by adding an FC layer with 7 units before the final binary prediction layer. This layer acts as an embedding-generator layer and the final binary prediction is just a linear function of these 7 inputs. We chose 7 because they are actually fine-grained labels of <code>pe_present_on_image</code>, formed as combinations of slice-level and exam-level labels (<code>pe_present_on_image</code> and <code>rv_lv_ratio_gte_1</code>, or <code>pe_present_on_image</code> and <code>central pe</code>, for example). We also tried making the labels even finer-grained, but with no luck.</p>
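<p>As a rough illustration of this idea (my own sketch, not the authors' exact code; the backbone name and sizes are assumptions), a CNN backbone can be followed by a 7-unit FC "embedding" layer whose linear combination gives the final binary logit:</p>
<pre><code>import torch.nn as nn
import timm

class SliceModel(nn.Module):
    """Hypothetical sketch: backbone -> 7-d embedding -> 1 binary logit."""
    def __init__(self, backbone_name="tf_efficientnet_b0"):
        super().__init__()
        self.backbone = timm.create_model(backbone_name, pretrained=True, num_classes=0)
        self.embed = nn.Linear(self.backbone.num_features, 7)  # 7 fine-grained labels
        self.head = nn.Linear(7, 1)                            # pe_present_on_image logit

    def forward(self, x):
        feat = self.backbone(x)   # (B, num_features) pooled features
        emb = self.embed(feat)    # (B, 7), kept as the slice embedding for part two
        return emb, self.head(emb)
</code></pre>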
<p>Part two is the sequential slice-level models. From part one, each slice gives a 7-d embedding vector; concatenating 31 consecutive slices' embedding vectors gives a 31-by-7 image as the embedding feature map of the center slice (in other words, for each slice we also look at the 15 slices before it and the 15 slices after it). For edge cases, padding is used. The part-two model is a module combining a simple shallow CNN with no pooling and a sequential model with two bi-directional LSTM layers. We take the output of this model as the final prediction for each slice, and with reversing augmentation and two models (CNN and LSTM) we have 4 outputs. Concatenating them all gives a final 32-d embedding vector for each slice.</p>
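<p>To make the sliding-window construction concrete, here is a minimal sketch (my own illustration; the write-up does not specify the padding mode, so zero padding is assumed) that turns per-slice 7-d embeddings into a 31-by-7 feature map per slice:</p>
<pre><code>import numpy as np

def make_windows(embeddings, half_window=15):
    """embeddings: (num_slices, 7) array ordered by z position.
    Returns (num_slices, 31, 7): each slice with 15 neighbours on each side."""
    pad = half_window
    padded = np.pad(embeddings, ((pad, pad), (0, 0)), mode="constant")
    return np.stack([padded[i:i + 2 * pad + 1] for i in range(len(embeddings))])

windows = make_windows(np.random.rand(200, 7))  # e.g. a 200-slice exam -> (200, 31, 7)
</code></pre>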
<p>Part three is the exam-level CNN models. We simply stack the 32-d embedding vectors of all the slices in an exam and chose 1024-by-32 as the common image size. Exams with fewer than 1024 slices are zero-padded and exams with more than 1024 slices are truncated. This model predicts the 9 exam-level labels.</p>
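<p>The pad-or-truncate step for the exam-level input can be sketched as follows (a minimal illustration, assuming zero padding at the end):</p>
<pre><code>import numpy as np

def to_fixed_length(exam_embeddings, target_len=1024):
    """exam_embeddings: (num_slices, 32) -> (1024, 32) by zero padding or truncation."""
    out = np.zeros((target_len, exam_embeddings.shape[1]), dtype=exam_embeddings.dtype)
    n = min(len(exam_embeddings), target_len)
    out[:n] = exam_embeddings[:n]
    return out
</code></pre>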
<p>My training source code:<br>
<a href="https://github.com/moewiee/RSNA2020-Team-VinBDI-MedicalImaging" target="_blank">https://github.com/moewiee/RSNA2020-Team-VinBDI-MedicalImaging</a></p>
<p>You guys can also take a look at my inference kernel:<br>
<a href="https://www.kaggle.com/moewie94/rsna-2020-inference" target="_blank">https://www.kaggle.com/moewie94/rsna-2020-inference</a></p>
<p>Are you planning to apply for round 2 of the 200AI program at your place? :D</p>
<p>thank you <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> :) failed bengali, didn't have a chance to team up on herbarium but I still hope we can collab in the future :)</p>
<p>thank you <a href="https://www.kaggle.com/underwearfitting" target="_blank">@underwearfitting</a> :)</p>
<p>actually, sometimes while scrolling through CTs I'm quite annoyed by the reversed slice order, so I made it an augmentation =)</p>
<p>thank you <a href="https://www.kaggle.com/imoore" target="_blank">@imoore</a> :)</p>
<p>Thanks! While solving this problem I actually split my screen, coding on one side and browsing your post and GitHub from last year on the other side to learn :)</p>
<p>for stage 1 I used pretrained EfficientNets. I should have tried a ResNet-based architecture to enrich my ensemble :( And no, I don't use any weighting for stages 1/2, just simple binary cross-entropy.</p>
|
RSNA STR Pulmonary Embolism Detection
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, Thank you to the organizers and Congrats to all the participants and winners !</p>
<p>My solution is very simple.<br>
・train CNN backbone (Stage-1)<br>
・extract embedding<br>
・train LSTMs (Stage-2)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2422331%2F37a328ab2727b4b5c02b459c978f80fb%2FRSNA2020%20solution.png?generation=1604015018732684&alt=media" alt=""></p>
<p><strong>Stage-1:</strong><br>
Split into 3 folds and train a CNN which predicts the pe_present_on_image label for each image.<br>
As a result of exploring several backbones, I decided to use rexnet200. EfficientNet-B4 and B5 got NaN loss with mixed precision so I gave up on using them.<br>
Full-size (512x512) jpeg images preprocessed by Ian Pan's windowing are fed into the CNN.<br>
For augmentation, I used the following:<br>
・Horizontal Flip<br>
・ShiftScaleRotate<br>
・One of (Cutout, GridDropout)<br>
Since the ratio of pe_present_on_image positives was quite small, I also used focal loss.<br>
In Stage-1, I created 6 models. (3-fold BCE and 3-fold Focal) </p>
<p><strong>Stage-2:</strong><br>
The embeddings were dumped, sorted in z-axis order and stored on disk, and then the LSTMs were trained on them. I implemented mini-batch training of variable-length series since the number of images differs per study. Each series is padded with invalid values up to the maximum series length (1083) of the training data, and the invalid part is ignored when calculating the loss.<br>
For each study, the embeddings were put into BiLSTMx2 and each LSTM's outputs went into two branches: one to predict pe_present_on_image and the other to predict the exam-level labels. In the pe_present_on_image branch, the two outputs were simply added and transformed by a Linear layer. In the exam-level branch, features were aggregated using an attention layer and transformed by a Linear layer.<br>
For Loss, I used simple BCE.</p>
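<p>A minimal sketch of the padding-and-masking idea described above (my own reconstruction; shapes and names are assumptions, see the repository linked below for the real code):</p>
<pre><code>import torch
import torch.nn as nn

def masked_image_bce(logits, targets, lengths, max_len=1083):
    """logits, targets: (B, max_len); lengths: (B,) true series lengths.
    Padded positions are excluded from the loss."""
    mask = torch.arange(max_len, device=logits.device)[None, :] < lengths[:, None]
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum()
</code></pre>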
<p><strong>Inference:</strong><br>
I implemented a model that connects the backbone and LSTMs for inference without dumping embeddings to disk. Inference is done with batch_size=1. There was not enough time to apply TTA.</p>
<p><strong>Submit model:</strong><br>
・6-model average ensemble. <br>
3-fold with backbone trained by BCE Loss + 3-fold with backbone trained by Focal Loss.<br>
public LB: 0.159 private LB: 0.152<br>
・3-model average ensemble.<br>
3-fold with backbone trained by BCE Loss<br>
public LB: 0.158 private LB: 0.153</p>
<p><strong>Thank you !</strong><br>
This is my first gold medal. I'm super glad to finally be a Kaggle Master!<br>
And this is also my first time posting a discussion. <br>
I'm sorry if my English is not good enough to convey my solution.</p>
<p>The code link will appear here once I clean it up.</p>
<p>Update:<br>
<a href="https://github.com/piwafp0720/RSNA-STR-Pulmonary-Embolism-Detection" target="_blank">repository</a><br>
<a href="https://www.kaggle.com/kazumax0720/rsna-str-pulmonary-embolism-detection-inference" target="_blank">inference kernel</a></p>
<p>I used NVIDIA DGX station.<br>
For submission, please check my inference kernel.</p>
<p>for focal loss, I used this <a href="https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/65938" target="_blank">discussion</a>. Kaggle community helps me learn a lot. thanks !</p>
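<p>For reference, a common binary focal loss formulation looks like this (a generic sketch of the technique, not necessarily the exact variant used here):</p>
<pre><code>import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Down-weights easy examples, which helps when positives are rare."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                   # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
</code></pre>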
|
RSNA STR Pulmonary Embolism Detection
|
5th Place Solution Overview
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Good question… the next submission I was adding included two layers, but it failed the consistency check on submission. I'm running it now to see how it would have done. <br>
I tried 4 layers and it was heavy on memory. </p>
<p>Thanks <a href="https://www.kaggle.com/Giba" target="_blank">@Giba</a>, and yes, it was the first time I tried an A100 - compute constraints more or less disappear.<br>
Link <a href="https://github.com/darraghdog/rsnastr/blob/70fd97dbba23278232aa02b7b73826b5d04d1cdb/training/pipeline/train_image_mask_classifier.py#L229-L242" target="_blank">here</a> to how the image loss was set up. For each step, I got:</p>
<p>1) Image loss : was just <code>pe_present_on_image</code> loss using BCE for each image. <br>
2) Study loss: for each image in the batch, get the weighted BCE study loss as the hosts weighted them, but excluding <code>negative_exam_for_pe</code> as it is constant for +ve images. Before taking the mean study loss, I masked it to only include +ve images in the batch.</p>
<p>Weights for the above are <a href="https://github.com/darraghdog/rsnastr/blob/70fd97dbba23278232aa02b7b73826b5d04d1cdb/configs/512/effnetb5_lr5e4_multi.json#L28-L31" target="_blank">here</a>. I tuned the weights for each loss so that the image and study losses would be roughly equal when converging. The final loss was the sum of 1) and 2).</p>
<p>This was not ideal, as I got a different number of positives each batch, and some batches had no +ve images (I skipped those steps - no backprop). Ideally the sampler would have sampled an equal number of positives each batch - but time was running out, so I just made the batch size large enough (48 images) with a sampling ratio of 4:1 for -ve to +ve. </p>
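<p>Roughly, the combined per-step loss can be sketched like this (my reconstruction for illustration; the exact weights and handling are in the code linked above):</p>
<pre><code>import torch
import torch.nn.functional as F

def step_loss(img_logits, img_targets, study_logits, study_targets, study_weights):
    """img_*: (B,); study_*: (B, 8) exam labels excluding negative_exam_for_pe."""
    image_loss = F.binary_cross_entropy_with_logits(img_logits, img_targets)
    study_bce = F.binary_cross_entropy_with_logits(
        study_logits, study_targets, weight=study_weights, reduction="none")
    pos_mask = img_targets > 0.5          # study loss only on +ve images
    if pos_mask.sum() == 0:
        return image_loss                 # the write-up skipped such steps entirely
    return image_loss + study_bce[pos_mask].mean()
</code></pre>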
<p><img src="https://raw.githubusercontent.com/darraghdog/rsnastr/main/docs/architecture.jpg" alt=""><br>
<strong>Code base</strong> : <a href="https://github.com/darraghdog/rsnastr" target="_blank">https://github.com/darraghdog/rsnastr</a> <br>
<strong>Kaggle submission</strong> : <a href="https://www.kaggle.com/darraghdog/rsnastr2020-prediction" target="_blank">https://www.kaggle.com/darraghdog/rsnastr2020-prediction</a></p>
<p>Thanks to all the organisers for hosting this competition. And congratulations to all the participants and winners - these competitions and solutions are getting more and more advanced 😁 </p>
<p>My compute was a v100 on AWS initially with 320X320 images, with prototyping on Macbook (a lot can be checked on CPU). Then in the last 3 weeks I moved to an A100 to run 512X512 (thanks DoubleYard) - 40GB GPU memory per card - one card was enough. <br>
Overall the solution pipeline ended up being pretty similar to last year, but the journey to get there was a lot different. The metric used by the competition was difficult to simulate, and all parts of the pipeline needed to be finalised/integrated to get feedback on the competition benchmark. </p>
<p><strong>Preprocessing</strong><br>
Use the <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/preprocessing/dicom_to_jpeg.py#L59-L75" target="_blank">windowing</a> from Ian Pan's <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">post</a> pretty much as is, just <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/preprocessing/dicom_to_jpeg.py#L77-L118" target="_blank">parallelised</a> it to speed it up. Some dicom's fell out which I saw later was due to the way pydicom was used, but I think the majority were good. <br>
Light <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L72-L74" target="_blank">augmentations</a>, maybe more would have helped but did not want to lose information which side the PE lay on. </p>
<p><strong>Image Level</strong><br>
EfficientNet-B5 seemed to be better than anything else I tried, both in terms of speed and loss. Given the time and compute limits on submission, speed was important. <br>
The datasampler took at least <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L112-L117" target="_blank">two images from each study, in each epoch, and positive images were oversampled</a> to give a rate of 4:1 -ve:+ve, with an image batch size of 48 (using half precision - amp). <br>
<a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L132-L138" target="_blank">Weighted study loss</a> was calculated at each step on the positive samples only (weights as per competition metric weights excl. <code>negative_exam_for_pe</code>), image loss was calculated on all samples. Final loss per step was <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L200-L209" target="_blank">sum of both averages</a>; image loss was tuned so that image loss and study loss would be roughly equal. <br>
Each epoch took about 5 mins and ran ~15 epochs, final solution used three of five folds.</p>
<p><strong>Study Level</strong><br>
Extracted gap layer of each image to disk. These were fed into <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_sequence_classifier.py#L113-L124" target="_blank">three independent sequence models</a>. <br>
1) 2X Bi-LSTM, more or less <a href="https://github.com/darraghdog/rsna" target="_blank">same architecture</a> as last year.<br>
2) Bert transformer model with 1 layer. <br>
3) Bert transformer model with 2 layers. <br>
Each of these used the same loss <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_sequence_classifier.py#L150-L170" target="_blank">(or at least as close as I could get)</a> as the competition metric on each step, study batchsize 64, no accumulation (again with amp).<br>
Finally both models and all folds were averaged. <br>
I submitted a few more transformer models on the last day which I had high hopes for, but most broke the consistency check - I only allowed a submission if it passed - and then time ran out. (Thanks <a href="https://www.kaggle.com/yuval6967" target="_blank">@yuval6967</a> for the tip to only drop to disk if it passed). Actually I think 7 of my last 10 submissions failed. </p>
<p><strong>Inference</strong><br>
A few tricks to help speed up and bring RAM and GPU memory down - <a href="https://www.kaggle.com/darraghdog/rsna512-effnetb5-fold-all-exam-xfrmr-validated?scriptVersionId=45523025" target="_blank">final sub</a>.<br>
Batchsize of 1. Feed sequences of images into image level model in chunks of 128 images, but only normalize (uint8->float32) when that chunk is to be passed into image model. Then concat all output (GAP layer) chunks to make the sequence. <br>
Convert all models and input data to half precision <code>model=model.half()</code> - this halved the memory, but results were just as good. <br>
With the above, I managed to squeeze three folds out of the pipeline - four folds also worked, but it failed the consistency check on the last day. Interested to see optimisations from other competitors. </p>
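<p>The chunked half-precision feature extraction can be sketched like this (illustrative only; the chunk size comes from the description above and the model's forward is assumed to return the pooled GAP features):</p>
<pre><code>import torch

@torch.no_grad()
def extract_features(model, uint8_slices, chunk=128, device="cuda"):
    """uint8_slices: (N, 3, H, W) uint8 tensor for one study."""
    model = model.half().to(device).eval()
    feats = []
    for i in range(0, len(uint8_slices), chunk):
        x = uint8_slices[i:i + chunk].to(device)
        x = x.half().div_(255.0)     # normalize only when the chunk is needed
        feats.append(model(x).float().cpu())
    return torch.cat(feats)          # sequence of GAP features for the study
</code></pre>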
<p>A final note: about two weeks ago, I was stuck - I could not get the CV metric working - and considered dropping the competition. Luckily I stuck with it to try a first submission, which got in at LB 0.181 with a single fold :) For such a challenging metric and large dataset the timeline was tight; I think with another couple of weeks the community could have got to 0.12x or so on the leaderboard - but maybe that would overfit the dataset. </p>
<p>FYI - slight bump by adding a 2-layer transformer in addition to the above - it would get to 0.154 private and public which would not change LB positions. But I'll probably put it in the final solution. </p>
<p>Thanks Theo, congrats on your solution also! I found that handling the metric was the brain cruncher here - but it's good that we are being forced to build a solution which makes sense clinically. </p>
|
RSNA STR Pulmonary Embolism Detection
|
6th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners. Thank you to Kaggle and RSNA for hosting this competition.<br>
I'm happy about my result because I'm a medical doctor and therefore RSNA competitions are the most important competitions for me. In RSNA 2018, I did my best but failed to get gold. I couldn't participate in RSNA 2019 because I had to prepare for the national exam for medical doctors. In RSNA 2020, I finally got the solo gold. I'm sad that I can't attend the RSNA conference in person because it has become an online conference.</p>
<p>Here I describe my solution. The overview is shown below. Actually, there is nothing special; 2D-CNN for image-level feature extraction and 1D-CNN for exam-level classification.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/1072220/17414/figure1.png" alt="Figure 1"></p>
<h1>Stage 1: 2D-CNN for feature extraction</h1>
<p>First, I trained a 2D-CNN (EfficientNet B0 or B2) with 3 trainable windows (WSO, <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117480" target="_blank">following Yuval's solution</a>) on 2D images with the conditions shown below. To save memory and time, I trained the models with mixed precision.</p>
<ul>
<li>Loss: BCE with weights reflecting the competition metric weights (see the sketch after this list)</li>
<li>Window: initialized with <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">Ian's windows</a></li>
<li>Image size: 512 (raw image)</li>
<li>BatchSize: 80 for B0/50 for B2</li>
<li>Training Steps: 8192 (= 0.5 epoch)</li>
<li>Optimizer: Adam</li>
<li>LR: 1e-3 decline to 1e-4 with cosine annealing</li>
<li>Augmentation: ShiftScaleRotate, BrightnessContrast, Crop (448 x 448), CutOut</li>
</ul>
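<p>The weighted BCE mentioned in the list above can be sketched as follows (a minimal illustration; the placeholder weights must be replaced with the official competition metric weights):</p>
<pre><code>import torch
import torch.nn.functional as F

# label order: negative_exam_for_pe, indeterminate, chronic_pe, acute_and_chronic_pe,
# central_pe, leftsided_pe, rightsided_pe, rv_lv_ratio_gte_1, rv_lv_ratio_lt_1
LABEL_WEIGHTS = torch.full((9,), 1.0 / 9)  # placeholder values only

def weighted_bce(logits, targets):
    return F.binary_cross_entropy_with_logits(
        logits, targets, weight=LABEL_WEIGHTS.to(logits.device))
</code></pre>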
<p>Training takes 3 hours x 5 folds with a P100. I extracted the 2D-CNN's features, the output of the global average pooling layer, and used them as input for stage 2. Feature extraction takes 2.5 hours x 5 folds.</p>
<h1>Stage 2: 1D-CNN for exam-level classification</h1>
<p>Next, I trained 1D-CNN for exam-level classification. As input, I used feature sequences extracted by 1st-stage 2D-CNN. Feature pooling, Skip connection, and SE-module are employed. For image-level prediction, I used U-Net-like upconv architectures. The training conditions are shown below.</p>
<ul>
<li>Exam-level loss: BCE with weights reflecting the competition metric weights</li>
<li>Image-level loss: BCE and BCE with the q_i weights</li>
<li>BatchSize: 64</li>
<li>Epoch: 16</li>
<li>Optimizer: Adam</li>
<li>LR: 1e-4 decline to 1e-5 with cosine annealing</li>
<li>Augmentation: Crop (128 slices), Flip<br>
Training takes 6 minutes x 5 fold.</li>
</ul>
<h1>Postprocessing</h1>
<p>I averaged the predictions of B0 and B2. I did postprocessing to solve the conflict of label consistency with minimal modification. The 5-fold CV score is shown below.</p>
<pre><code>negative_exam_for_pe bce: 0.345601, auc: 0.895471
indeterminate bce: 0.084615, auc: 0.812207
chronic_pe bce: 0.158846, auc: 0.682470
acute_and_chronic_pe bce: 0.081380, auc: 0.842780
central_pe bce: 0.111196, auc: 0.949153
leftsided_pe bce: 0.287363, auc: 0.900652
rightsided_pe bce: 0.298690, auc: 0.911001
rv_lv_ratio_gte_1 bce: 0.228942, auc: 0.902246
rv_lv_ratio_lt_1 bce: 0.342175, auc: 0.835369
exam-level score bce: 0.196321
q_i weighted_image_bce bce: 0.206749, auc: 0.965775
total_score bce: 0.201473
</code></pre>
<h1>Final submission</h1>
<p>As final submissions, I selected</p>
<ol>
<li>B0 model with 5-fold averaging: public: 0.161, private: 0.157</li>
<li>B0 and B2 model with 5-fold averaging and model averaging: public: 0.160, private: 0.156</li>
</ol>
<h1>Comparison test</h1>
<p>Stage 1</p>
<pre><code> exam BCE exam AUC image BCE image AUC
B0 final model 0.314768 0.709304 0.101278 0.951922
num step X2 0.316794 0.708131 0.100873 0.951224
initial LR=1e-4 0.321354 0.698805 0.113893 0.938055
BatchSize=16 0.320763 0.687397 0.101027 0.950439
w/o CutOut 0.316906 0.703339 0.100654 0.950882
InputSize=256 0.328765 0.681211 0.134463 0.917054
w/o WSO 0.315638 0.703576 0.103514 0.947023
with MixUp 0.317532 0.693032 0.103800 0.944957
InputSize=640 0.321643 0.703420 0.108685 0.953675
B2 final model 0.315517 0.707378 0.099449 0.950943
</code></pre>
<p>I did comparison tests after the deadline. Scores are calculated without any weights. 0.5 epoch is enough to converge. InputSize=512 is better than InputSize=256. InputSize=640 may be better than InputSize=512. LR=1e-3 is better than LR=1e-4. BatchSize=80 may be better than BatchSize=16. Cutout, WSO, and MixUp may not be necessary.</p>
<p>Stage 2</p>
<pre><code> exam BCE exam AUC image BCE image AUC
B0 final model 0.200486 0.867666 0.222240 0.964814
w/o pred-2 0.200840 0.867577 0.221665 0.963847
LSTM 0.203616 0.864214 0.216788 0.963465
GRU 0.195599 0.873768 0.219781 0.963367
B2 final model 0.196107 0.871995 0.220244 0.962613
</code></pre>
<p>Scores are calculated with weights of competition metrics. Training with pred-2 may not be necessary. LSTM and GRU may be able to achieve the same performance as CNN. </p>
<p>What I had to do in this competition was obvious because it is very similar to last year's, and Yuval, last year's winner, was at the top of the LB. Actually, what I did was just implement last year's solution and make a custom loss that minimizes the competition metric directly. It is a very baseline model. I think my solution would sit in the middle of the silver medals in usual competitions. This competition is a little bit harder than usual because of the large dataset, complicated metric, short timeline, and notebook format. Maybe that's why this baseline model could get gold.</p>
<p><a href="https://github.com/OsciiArt/Kaggle_RSNA2020_6th_Solution" target="_blank">All the codes are available here.</a></p>
<p>I did rule-based postprocessing to solve the conflicts. The 14th cell in <a href="https://github.com/OsciiArt/Kaggle_RSNA2020_6th_Solution/blob/main/notebook/postprocess.ipynb" target="_blank">https://github.com/OsciiArt/Kaggle_RSNA2020_6th_Solution/blob/main/notebook/postprocess.ipynb</a> </p>
|
RSNA STR Pulmonary Embolism Detection
|
7th place - EfficientNet, Transformer and a 2nd opinion [edited]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I used a two stage model.</p>
<ol>
<li>A EfficientNet (B5, B3) used for feature extraction per image</li>
<li>A transformer used per series to predict the series related classes and the 'PE Present on Image' per image</li>
</ol>
<p>This is the 2nd competition I use such network and an extensive description of this model can be found <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/181830" target="_blank">here</a> </p>
<p>The targets for the EfficientNet were: </p>
<ul>
<li>The original targets for images where PE Present on Image = 1</li>
<li>0 for every other image, except the indeterminate target, which remained the same.</li>
</ul>
<p>The loss was weighted BCE - the weights reflecting the competition metric weights.<br>
I used flip, rotate, random resize/crop, and mean/std shift as augmentation.<br>
I also used 3 trainable windows to convert the CT image to jpeg (<a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117480" target="_blank">WSO</a>)</p>
<p>The transformer was a 4-layer encoder (using PyTorch's transformer encoder module). The relative and absolute positions of the images in the series were embedded and added to the feature vectors (as is done for positional embedding in NLP transformers such as BERT).<br>
The loss function reflected the competition's metric.</p>
<p>This model gave an <strong>LB of 0.166</strong> </p>
<p>Ensembling improved the <strong>LB to 0.162</strong>, but I could only ensemble 2 models in the time frame (I didn't use the public/private LB trick that can add 25%). To improve this I used a <strong>2nd opinion mechanism</strong> - instead of using a 2nd model to run inference on all the data again, I did what an MD would do: I chose only the series where the results were most uncertain (near 0.5), ran inference on them with another model and ensembled. I did this 3 times, for ~30-40% of the data each time, gaining the equivalent of ensembling 4 models => <strong>LB 0.157</strong></p>
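<p>The selection step of this 2nd opinion mechanism can be sketched as follows (my illustration; the uncertainty band is an assumed parameter):</p>
<pre><code>import numpy as np

def second_opinion_ids(preds, band=0.15):
    """preds: dict {series_id: probability from model 1}.
    Returns the series whose prediction is closest to 0.5, to re-run with model 2."""
    return [sid for sid, p in preds.items() if abs(p - 0.5) < band]

# The selected series are inferred again with another model and the two probabilities
# averaged, approximating a larger ensemble at a fraction of the inference cost.
</code></pre>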
<p>As a last step, I checked whether each prediction met the competition's <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/183473" target="_blank">Label Consistency Requirement</a> and, if it didn't, made the minimal changes needed to meet the requirements.</p>
<ul>
<li><p>The full code can be found in <a href="https://github.com/yuval6957/RSNA2020_final.git" target="_blank">git</a></p></li>
<li><p>The inference code can be found <a href="https://www.kaggle.com/yuval6967/rsna2020-inference-2nd-op-final" target="_blank">in this notebook</a></p></li>
<li><p>The models' weights are <a href="https://www.kaggle.com/yuval6967/rsna2020-models" target="_blank">in this</a> public dataset</p></li>
<li><p>A more detailed description can be found <a href="https://github.com/yuval6957/RSNA2020_final/blob/master/Documentation.md" target="_blank">in this documentation</a> and <a href="https://github.com/yuval6957/RSNA2020_final/blob/master/RSNA2020%20presentation.pdf" target="_blank">this presentation</a></p></li>
<li><p><a href="https://www.youtube.com/watch?v=hVgIawktZgs" target="_blank">This is a video</a> which present this solution</p></li>
</ul>
|
RSNA STR Pulmonary Embolism Detection
|
9th place solution ( + github code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners !<br>
This competition was hard on me in many ways.</p>
<h1>Solution Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2Fb7889dcd4b8229c53e2103cfb622a8e1%2FRSNA%202020%20solution%20(1).png?generation=1603760751579416&alt=media" alt=""></p>
<h1>Preprocess</h1>
<ul>
<li><p>In the train data, no CT image has PE after the 400th image. So, we used only images before the 400th image.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2Fc51f745506070907901a148f396ae5d7%2Fdownload-20.png?generation=1603800720901371&alt=media" alt=""></p></li>
<li><p>For stage 1 training, we preprocessed image-level labels as shown in the following images.</p>
<ul>
<li>Before<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2F41ffbb129012a32e592556611c4a155d%2Fdownload-21.png?generation=1603760930759265&alt=media" alt=""></li>
<li>After<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2F0a0fa0aacd34ca23e8b7a00e385d25b0%2Fdownload-22.png?generation=1603760956442464&alt=media" alt=""></li></ul></li>
</ul>
<h1>Stage 1 training</h1>
<ul>
<li>We used 512 x 512 images + efficientnet-b5 and 384 x 384 images + efficientnet-b3, using the preprocessed labels.</li>
</ul>
<h1>Stage 2 training</h1>
<ul>
<li>Inference time was a severe constraint because we used 512 x 512 images + efficientnet-b5. So, we subsampled the 400 sequences to 200 sequences and used a Deconvolution module.<ul>
<li>We got the same CV score when using 400 sequences.</li></ul></li>
<li>We were not able to use various models in stage 1 because of resource constraints. Therefore, we trained various models in stage 2.<ul>
<li>Input: b5-feature only, b3-feature only, b5-feature + b3-feature</li>
<li>model: Conv1D, LSTM, GRU, Conv1D + LSTM</li>
<li>output: 3 x 4 = 12 predictions</li></ul></li>
</ul>
<h1>Stacking</h1>
<ul>
<li>We trained LGBM, Conv1D and GRU.<ul>
<li>We used only PE exams when training pe_present_on_image with lgbm because images from negative-PE exams don't affect the competition metric.</li></ul></li>
</ul>
<h1>Postprocess</h1>
<ul>
<li>We've implemented a heuristic post process.<ul>
<li>This postprocess increased CV and public score 0.002 (Private 0.160 -> 0.162)</li>
<li>Main idea<ul>
<li>replace <code>pe_present_on_image</code> with <code>1 - negative_exam_for_pe</code> when <code>1 - negative_exam_for_pe <= pe_present_on_image</code></li>
<li>repeat sigmoid -> logit -> logit += s -> sigmoid until satisfying label consistency</li></ul></li></ul></li>
</ul>
<pre><code>import numpy as np
from scipy.special import softmax  # consistency_check is defined elsewhere in our code

label_cols = [
"pe_present_on_image",
"negative_exam_for_pe",
"indeterminate",
"chronic_pe",
"acute_and_chronic_pe",
"central_pe",
"leftsided_pe",
"rightsided_pe",
"rv_lv_ratio_gte_1",
"rv_lv_ratio_lt_1",
]
def postprocess(x, s=2.0):
logit = np.log(x/(1 - x))
logit = logit + s
sigmoid = 1 / (1 + np.exp(-logit))
return sigmoid
def satisfy_label_consistency(df):
rule_breaks = consistency_check(df).index
print(rule_breaks)
if len(rule_breaks) > 0:
df["positive_exam_for_pe"] = 1 - df["negative_exam_for_pe"]
df.loc[
df.query("positive_exam_for_pe <= pe_present_on_image").index,
"pe_present_on_image",
] = df.loc[
df.query("positive_exam_for_pe <= pe_present_on_image").index,
"positive_exam_for_pe",
]
rule_breaks = consistency_check(df).index
df["positive_images_in_exam"] = df["StudyInstanceUID"].map(
df.groupby(["StudyInstanceUID"])["pe_present_on_image"].max()
)
df_pos = df.query("positive_images_in_exam > 0.5")
df_neg = df.query("positive_images_in_exam <= 0.5")
if "1a" in rule_breaks:
rv_filter = "rv_lv_ratio_gte_1 > 0.5 & rv_lv_ratio_lt_1 > 0.5"
while len(df_pos.query(rv_filter)) > 0:
df_pos.loc[df_pos.query(rv_filter).index, "rv_min"] = df_pos.query(
rv_filter
)[label_cols[8:]].min(1)
for rv_col in label_cols[8:]:
df_pos.loc[
df_pos.query(rv_filter + f" & {rv_col} == rv_min").index, rv_col
] = postprocess(
df_pos.query(rv_filter + f" & {rv_col} == rv_min")[
rv_col
].values,
s=-0.1,
)
rv_filter = "rv_lv_ratio_gte_1 <= 0.5 & rv_lv_ratio_lt_1 <= 0.5"
while len(df_pos.query(rv_filter)) > 0:
df_pos.loc[df_pos.query(rv_filter).index, "rv_max"] = df_pos.query(
rv_filter
)[label_cols[8:]].max(1)
for rv_col in label_cols[8:]:
df_pos.loc[
df_pos.query(rv_filter + f" & {rv_col} == rv_max").index, rv_col
] = postprocess(
df_pos.query(rv_filter + f" & {rv_col} == rv_max")[
rv_col
].values,
s=0.1,
)
df.loc[df_pos.index, label_cols[8:]] = df_pos[label_cols[8:]]
if "1b" in rule_breaks:
pe_filter = " & ".join([f"{col} <= 0.5" for col in label_cols[5:8]])
while "1b" in consistency_check(df).index:
for col in label_cols[5:8]:
df_pos.loc[df_pos.query(pe_filter).index, col] = postprocess(
df_pos.loc[df_pos.query(pe_filter).index, col], s=0.1
)
df.loc[df_pos.index, label_cols[5:8]] = df_pos[label_cols[5:8]].values
if "1c" in rule_breaks:
chronic_filter = "chronic_pe > 0.5 & acute_and_chronic_pe > 0.5"
df_pos.loc[df_pos.query(chronic_filter).index, label_cols[3:5]] = softmax(
df_pos.query(chronic_filter)[label_cols[3:5]].values, axis=1
)
df.loc[df_pos.index, label_cols[3:5]] = df_pos[label_cols[3:5]]
if "1d" in rule_breaks:
neg_filter = "negative_exam_for_pe > 0.5 | indeterminate > 0.5"
while "1d" in consistency_check(df).index:
for col in label_cols[1:3]:
df_pos.loc[df_pos.query(neg_filter).index, col] = postprocess(
df_pos.loc[df_pos.query(neg_filter).index, col], s=-0.1
)
df.loc[df_pos.index, label_cols[1:3]] = df_pos[label_cols[1:3]].values
if "2a" in rule_breaks:
neg_filter = "negative_exam_for_pe > 0.5 & indeterminate > 0.5"
while len(df_neg.query(neg_filter)) > 0:
df_neg.loc[df_neg.query(neg_filter).index, "neg_min"] = df_neg.query(
neg_filter
)[label_cols[1:3]].min(1)
for neg_col in label_cols[1:3]:
df_neg.loc[
df_neg.query(neg_filter + f" & {neg_col} == neg_min").index,
neg_col,
] = postprocess(
df_neg.query(neg_filter + f" & {neg_col} == neg_min")[
neg_col
].values,
s=-0.1,
)
neg_filter = "negative_exam_for_pe <= 0.5 & indeterminate <= 0.5"
while len(df_neg.query(neg_filter)) > 0:
df_neg.loc[df_neg.query(neg_filter).index, "neg_max"] = df_neg.query(
neg_filter
)[label_cols[1:3]].max(1)
for neg_col in label_cols[1:3]:
df_neg.loc[
df_neg.query(neg_filter + f" & {neg_col} == neg_max").index,
neg_col,
] = postprocess(
df_neg.query(neg_filter + f" & {neg_col} == neg_max")[
neg_col
].values,
s=0.1,
)
df.loc[df_neg.index, label_cols[1:3]] = df_neg[label_cols[1:3]]
if "2b" in rule_breaks:
while "2b" in consistency_check(df).index:
for col in label_cols[3:]:
df_neg.loc[df_neg.query(f"{col} > 0.5").index, col] = postprocess(
df_neg.loc[df_neg.query(f"{col} > 0.5").index, col], s=-0.1
)
df.loc[df_neg.index, label_cols[3:]] = df_neg[label_cols[3:]].values
return df
</code></pre>
<p>Updated: <br>
We uploaded the code on github (<a href="https://github.com/shimacos37/kaggle-rsna-2020-9th-solution" target="_blank">https://github.com/shimacos37/kaggle-rsna-2020-9th-solution</a>).</p>
|
RSNA STR Pulmonary Embolism Detection
|
A CNN Post-Processing method for Image level.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the participants and the winners. And I also want to thank my teammates <a href="https://www.kaggle.com/haqishen" target="_blank">qishen</a>, <a href="https://www.kaggle.com/boliu0" target="_blank">Bo</a> for their efforts.<br>
I'd like to share a trick used by our team here, which may not be as good as the top team's solution, but I hope this can help you.<br>
And this trick is a post-processing method for image level.</p>
<h2>Extract the probability prediction</h2>
<p>Extract the probability predictions from a 2D-CNN (like efficientnet-b0), which was trained on 2D images, and save the predictions for the next step.</p>
<h2>Pooling Post-Processing</h2>
<p>From the previous step, we have the 2D-CNN probabilities, and then we sort them by <strong>ImagePositionPatient_z</strong>. As we all know, contiguous images from the same person should have similar labels/probabilities after sorting. So we used 1D pooling to adjust the contiguous probabilities, which improved our image level CV by about 0.01+.<br>
The code below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F4ee0cb9c54a362ff8f2cc4ff7d5088dc%2Fpoolingpp.png?generation=1603765054719087&alt=media" alt=""></p>
<h2>CNN Post-Processing</h2>
<p>Meanwhile, we also designed a simple CNN as a stage-2 model to predict the contiguous probabilities, hoping that the CNN can learn more context information from them. The training set is also the 2D-CNN probabilities sorted by <strong>ImagePositionPatient_z</strong>. In training, we sample 80 probabilities to train the CNN, as shown below.<br>
The CNN Post-Processing improves our image level CV by about 0.01+.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F099c16938208c0c881f7c53440b56475%2Fsampling.jpg?generation=1603765599705036&alt=media" alt=""><br>
And the shape of the model's output is the same as the input. Finally, we interpolated the probability predictions back based on the total number of the person's original image samples.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2Feca02ccfc3151804652067a84bd9cde5%2Finter.jpg?generation=1603765870690625&alt=media" alt=""></p>
<p>and the cnn model<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F9e0cca527bed91c0d0ff03bd340ffcbc%2Fcnn.jpg?generation=1603765956961639&alt=media" alt=""></p>
<h2>Combined the Pooling Post-Processing and CNN Post-Processing</h2>
<p>Finally, we just combined the above two post-processing results directly. It improved our image level CV by about 0.02+.</p>
<p>That's all I want to share, and I hope it helps.</p>
|
RSNA STR Pulmonary Embolism Detection
|
Provisional 8th Place Solution - Monai x EfficientNets x LGBM
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Chris<br>
I learned so much from you since starting Kaggle. Thank you for your contributions to the platform!</p>
<p>Github: <a href="https://github.com/lisosia/kaggle-rsna-str" target="_blank">https://github.com/lisosia/kaggle-rsna-str</a></p>
<h3>What a competition!</h3>
<p>First of all Congratulations to everyone and especially to <strong>Yuji-san</strong> ( <a href="https://www.kaggle.com/yujiariyasu" target="_blank">@yujiariyasu</a> ) and <strong>Yama-san</strong> <br>
( <a href="https://www.kaggle.com/lisosia" target="_blank">@lisosia</a> ) for hopefully -fingers crossed- archiving <strong>Master status</strong>. <br>
And a heartfelt Thank you to the organizers and the Kaggle Team for an awesome competition.</p>
<h1>Solution Overview</h1>
<p>In the big scope our solution is split into (1) image and (2) exam level predictions, which then are (3) ensembled in a Decision Tree. </p>
<p>On the image level we predict PE present on image, plus left, central and right. <br>
On the exam level we predict left/central/right, RV/LV ratio and acute/chronic. <br>
We feed the predictions into multiple decision trees, each finetuned with specific features, which then output the final predictions. <br>
We skipped predicting Indeterminate for inference time reasons.</p>
<p><img src="https://storage.cloud.google.com/kaggleimages/kagglersna.JPG?generation=1603760751579416&alt=media" alt=""></p>
<h2>PreProcess</h2>
<p>Ian Pan's ( <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> ) dataset was a great start, but very early on we created a 512x512 jpg image dataset without jpg compression; with jpg compression our score was always worse. For this we used Ian Pan's preprocessing script with the same window sizes. </p>
<h2>Image Level</h2>
<p>In our final solution we used 2 EfficientNet models for image level prediction.<br>
They were trained on all images to predict PE present on image, and the EfficientNet-B0 also predicted left/central/right. </p>
<p>Important Edit: <br>
Our image level models' predictions were transformed by calibrating the predicted probability of pe_present_on_image using:</p>
<pre><code>def calib_p(arr, factor): # set factor>1 to enhance positive prob
return arr * factor / (arr * factor + (1-arr))
</code></pre>
<p>It is conducted to equalize each fold's pe_present_on_image predictions before stacking with LGBM. The factor for each fold is determined so that the per-fold validation weighted log loss is minimized. Yama-san came up with this idea and it boosted our image level predictions together with LGBM by a lot (see below).</p>
<h2>Exam Level</h2>
<p>Our pipeline here is very much based on the awesome kernel of <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> .<br>
Images are center cropped, the first and last 20% of images in the z axis removed, and resized to 100x100x100 spatial size. We now see some clever approaches for determining the heart level (for example by Ian Pan).<br>
The 3D model is very bad at predicting positive_exams. Therefore rv_lv_ratio gte was combined to one target. <br>
We have trained the exam level model only on exams with PE. The main target for us initially was to predict RV/LV ratio but the model is also surprisingly good at predicting left/central/right and acute/chronic, which also gave us a boost for these features. </p>
<h2>LGBM</h2>
<h5>Image Level LGBM</h5>
<p>For image level predictions we used an LGBM which, for each image, got the previous 10 and next 10 images as input to predict that image. This essentially simulates the CNN+LSTM method many competitors used, but in our case CNN + LGBM always outperformed CNN + LSTM.</p>
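<p>Building such lag/lead features can be sketched like this (my own illustration; column names are assumptions):</p>
<pre><code>import pandas as pd

def add_context_features(df, prob_col="pe_prob", n_neighbors=10):
    """df: one row per image; adds the previous/next 10 images' probabilities as columns."""
    df = df.sort_values(["StudyInstanceUID", "z_pos"]).copy()
    g = df.groupby("StudyInstanceUID")[prob_col]
    for k in range(1, n_neighbors + 1):
        df[f"{prob_col}_prev{k}"] = g.shift(k)
        df[f"{prob_col}_next{k}"] = g.shift(-k)
    return df
</code></pre>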
<p>CV pe present on image: 0.12 (raw prediction ) -> 0.105 (with LGBM + calibration)</p>
<h5>Exam Level LGBM</h5>
<p>We used the raw predictions of the Monai model and derived features from the image level models. These features mostly consist of percentiles [30, 50, 70, 80, 90, 95, 99], the number of images over certain thresholds [0.1, 0.2, ..., 0.9], and the mean and max of the image level models' predictions. </p>
<h2>Post Process.</h2>
<ol>
<li>We clipped right/left/center predictions to average(right) < ave(pe_present)</li>
<li>We simply set Indeterminate to pos_exam * MEAN_WHEN_POS + (1 - pos_exam) * MEAN_WHEN_NOT_POS, as we had no time left in inference and it had a low weight assigned to it.</li>
<li>Consistency Requirement: we have adjusted the exam level predictions to fit the consistency requirement with the lowest weighted difference (using the official metric weights).</li>
</ol>
<h2>Final Score</h2>
<p>5 Fold CV:<br>
(exam level unweighted)</p>
<table>
<thead>
<tr>
<th>Target</th>
<th>LogLoss</th>
</tr>
</thead>
<tbody>
<tr>
<td>pe_present_on_image</td>
<td>0.105</td>
</tr>
<tr>
<td>rv_lv_ratio_gte_1</td>
<td>0.231</td>
</tr>
<tr>
<td>rv_lv_ratio_lt_1</td>
<td>0.334</td>
</tr>
<tr>
<td>leftsided_pe</td>
<td>0.282</td>
</tr>
<tr>
<td>central_pe</td>
<td>0.114</td>
</tr>
<tr>
<td>rightsided_pe</td>
<td>0.285</td>
</tr>
<tr>
<td>acute_and_chronic_pe</td>
<td>0.087</td>
</tr>
<tr>
<td>chronic_pe</td>
<td>0.160</td>
</tr>
</tbody>
</table>
<h2>Things that did not work</h2>
<p>We have tried many different architectures. Sequence models always yielded worse results for us compared to image level prediction ensembles with LGBM. I suppose this may be due to the very inconsistent number of images per exam compared to last year's challenge. </p>
<h2>Thank you!</h2>
<p>2 days before the submission deadline we did not have a final submission, which made things really close to the end, with Kaggle commits being slower to start and commit. I have never witnessed this issue before but it is good to keep in mind for future competitions. Our final best submission finished just in time. </p>
<p>Thank you!</p>
<p>Thanks Theo. </p>
<p>Yeah, Kaggle would be boring without the emotional roller coasters. :D</p>
|
RSNA STR Pulmonary Embolism Detection
|
[2nd place] Solution Overview & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Update: code available at <a href="https://github.com/i-pan/kaggle-rsna-pe" target="_blank">https://github.com/i-pan/kaggle-rsna-pe</a></p>
<p>Congratulations to all the participants and the winners. Special congrats to prize winners <a href="https://www.kaggle.com/osciiart" target="_blank">@osciiart</a> and <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> who are on track to be physician GMs. This was a tough, compute-heavy challenge given the large amount of data and short timeframe. My setup was 4 24 GB Quadro RTX 6000 GPUs. I always feel guilty during competitions like these since I have the luxury of strong compute. Models were trained using DDP in PyTorch 1.6 with automatic mixed precision. </p>
<p>Even though the results aren't final yet and my submission may be removed for violating the label consistency requirements (hopefully my heuristic for fixing those worked!), I still wanted to share my solution.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F281652%2Ff6a15edbc81d867c0faecd0894d6aa27%2Fpe.png?generation=1603756913344030&alt=media" alt=""></p>
<p>Here is a schematic outlining my solution. It has a lot of moving parts, so I apologize for the lengthy summary. </p>
<h1>Step 1: Feature Extraction</h1>
<p>Last year's RSNA Intracranial Hemorrhage Detection challenge shared a lot of similarities with this year's challenge. Most of the top solutions combined 2D CNN feature extraction with sequence modeling. The backbone of my solution also relied on a similar setup. </p>
<p>I first trained ResNeSt50 on 2D images. Images were windowed using the PE-specific window (WL=100, WW=700) that I mentioned in one of my initial posts on DICOM processing. Each "image" was 3 channels, with each channel representing an individual slice. Thus the 2D image was a stack of 3 continuous slices. The targets were the 7 PE-related labels (i.e., excluding the RV/LV ratio labels). Note: most of the PE-related labels were exam-level labels. However, I just assigned the exam labels to each slice positive for PE (negative slices had all zeros), with the understanding that there would be label noise. Also, I predicted the labels for the middle slice among the 3 slices in the image. Models were trained using vanilla binary cross-entropy loss (<code>BCEWithLogitsLoss</code> in PyTorch), 512x512 with 448x448 random crops (single center crop during inference), RandAugment data augmentation, batch size 128, 5 epochs, 2500 steps per epoch, RAdam optimizer with cosine annealing learning rate scheduler. I used generalized mean pooling and reduced the final feature vector to 512-D. Mean loss was around 0.06 (AUC 0.95-0.96 for slice-wise prediction of PE vs. no PE). Features were then extracted for all slices. </p>
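<p>The 3-channel stacking of contiguous slices can be sketched as follows (my illustration, not the author's code):</p>
<pre><code>import numpy as np

def make_3slice_image(volume, i):
    """volume: (num_slices, H, W) windowed study.
    Returns a 3-channel image centred on slice i; labels apply to the middle slice."""
    lo, hi = max(i - 1, 0), min(i + 1, len(volume) - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=0)  # (3, H, W)
</code></pre>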
<h1>Step 2: Sequence Modeling</h1>
<p>Many of last year's solutions used LSTMs/GRUs as the sequence model of choice. For this competition, I used <code>huggingface</code> transformers (specifically, the <code>Transformer</code> class from <code>transformers.modeling_distilbert</code>). I used one 4-layer transformer to produce slice-wise <code>pe_present_on_image</code> predictions and another transformer to predict exam-wise PE-labels. Sequence length was 512 during training, padded/truncated as necessary. During inference, I used the sequence without modifications. </p>
<p>Important point: <strong>Images from non-PE exams do not contribute to the loss.</strong> At first, I was training the slice-wise transformer on all exams. Then, I decided to train these models on positive exams only. This lowered my CV by about 0.01-0.02. I used a custom weighted loss where I weighted the loss from each example by the proportion of positive PE slices (as described in the metric), though I'm still not sure I wrote it correctly. </p>
<p>The exam-level transformer was trained using a weighted BCE loss based on the competition label weights. Exam-level validation losses ranged from 0.15-0.17. </p>
<h1>Step 3: Time-Distributed CNN</h1>
<p>To add some variety into my modeling, I then trained a time-distributed CNN, which is just another way of saying I stacked a transformer on top of a CNN feature extraction backbone and trained end-to-end. </p>
<p>But before doing that, I performed inference using the slice-wise transformer model to get PE scores for every slice (5-fold OOF predictions). Then, when training the TD-CNN, I only trained on the top 30% of slices from each exam, sorted by PE score. These were trained on 3D volumes of size 32x416x416 cropped to 32x364x364 using the same windowing strategy (WL=100, WW=700) in batches of 16. </p>
<p>I initialized the CNN backbone and the transformer head with trained models from steps 1 and 2 to help with convergence. I forced all the batch normalization layers in the backbone to <code>eval</code> mode as well- this prevents the running mean and variance in each layer from updating and only trains the coefficients. Exam-level validation losses ranged from 0.15-0.17, similar to step 2. </p>
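<p>Forcing the batch normalization layers to eval mode, so the running statistics stop updating while the affine parameters keep training, can be done like this:</p>
<pre><code>import torch.nn as nn

def freeze_bn_stats(model):
    """Call after model.train(); BN weight/bias still receive gradients."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()
    return model
</code></pre>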
<h1>Step 4: Heart Slice Prediction</h1>
<p>RV/LV ratio is a significant portion of the loss. I hand-labeled slices with heart in 1,000 CT scans and trained a model (EfficientNet-B1 pruned, AUC 0.998, 256x256->224x224 crops) to classify heart slices in each CT scan. I did this because I felt that by focusing a model on the heart, I could get better, more consistent results across scans. It actually wasn't hard to label 1,000 scans- probably a full day's worth of work. I just needed to find the top and bottom heart slices; everything in between thus must also contain the heart.</p>
<h1>Step 5: RV/LV 3D CNN</h1>
<p>I trained a 3D CNN to classify RV/LV ratio. Specifically, I used a 101-layer channel separated network, pretrained on 65 million Instagram videos (<a href="https://arxiv.org/abs/1904.02811" target="_blank">https://arxiv.org/abs/1904.02811</a>, <a href="https://github.com/facebookresearch/VMZ" target="_blank">https://github.com/facebookresearch/VMZ</a>). </p>
<p>This model was trained only on heart slices from each exam. Also, it was only trained on positive exams. This is because RV/LV ratio was not labeled for negative exams- both RV/LV labels are 0. Thus, it didn't make sense to me to try and train my model on the entire dataset's labels directly. Models were trained using a weighted BCE loss using the competition weights. I resized the input to 64x256x256->64x224x224 crops and used mediastinal window (WL=50, WW=350). AUC for RV/LV ratio > 1 was about 0.85. The validation loss ranged from 0.44-0.48. I then used these models to extract 2048-D features from each exam.</p>
<p>The challenge here was that the likelihood of an exam being PE-positive greatly influenced the RV/LV labels. From step 2, I calculated 5-fold OOF exam-level predictions for all exams. Then, I trained a linear model that took as input the concatenation of the 2048-D 3D CNN feature and the 7 PE exam labels. This linear model was trained across <strong>all exams</strong> using the labels directly. This way, the model could take into account the imaging features from the scan but also adjust the predictions based on the likelihood of PE. Validation losses ranged from 0.22-0.25. </p>
<p>I didn't validate my entire pipeline that often due to time constraints, instead choosing to focus on the individual component losses and optimizing each one as well as I could. I did periodically do a sanity-check on 200 single-fold exams to make sure that my entire pipeline was working, and my final validation loss was 0.183 for my 0.150 public LB submission. I'm still not confident that I implemented the metric correctly, but I was seeing good correlation (0.159/0.195->0.156/0.188-> 0.150/0.183).</p>
<p>At the end, I applied a function to enforce label consistency requirements for each exam. There was about 0.001 change in CV after applying this function. I have some thoughts about this requirement which maybe I'll save for another post. </p>
<h1>Final Models</h1>
<p>Overall, I used these models:<br>
-2x ResNeSt50 feature extractors<br>
-6x exam transformers (3 for each extractor)<br>
-6x slice transformers<br>
-5x ResNeSt50 TD-CNN<br>
-1x EfficientNet-B1 pruned heart slice classifier<br>
-5x ip-CSN-101 3D CNN RV/LV feature extractor<br>
-5x RV/LV linear model </p>
<p>Inference took about 7-8 hours for the entire test set. </p>
<p>Code to follow once I clean it up. </p>
<p>Things that didn't work as well:<br>
-3D CNN for PE exam-level prediction<br>
-Pseudolabeling RV/LV ratio for negative exams<br>
-Using logits/probabilities instead of features for second-stage model<br>
-TD-CNN for RV/LV ratio prediction<br>
-Training feature extractor on positive exams only to improve slice-wise PE prediction </p>
<p>Thank you! When I started out on Kaggle, I felt like my medical imaging domain knowledge would give me a surefire edge in these competitions. Kagglers are too smart though, so at best it's only a slight advantage. </p>
<p>The feature extractors took about 5.5 hours to train. TD-CNNs took about 7.5-8 hours to train. The 3D CNN took about 30 minutes, and the transformers were very fast- just a few minutes. </p>
<p>I tried initially with EfficientNet-B4 but in the past I've found that ResNet-like architectures worked better for tasks relying on feature extraction, so I tried ResNeSt which worked better (0.01 decrease in loss). </p>
<p>I also tried EfficientNet-B3 pruned at lower resolution to increase inference speed but this was significantly worse (0.02 increase in loss). </p>
<p>Thanks, congrats on your strong finish as well! After the initial setup, it's just a matter of running experiments, most of which is just waiting for models to finish training. At the beginning of the challenge, I laid out my plan for what I was going to do and just focused on executing it. I didn't deviate from that initial plan very much. Fortunately it worked out. </p>
<p>Seems like it didn't make too much of a difference. 😅</p>
|
RSNA STR Pulmonary Embolism Detection
|
[54th solution] Baseline inference optimization with TTA (No additional training)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Don't worry about downvotes, it's a great question. I would not quantify 0.014 log loss as "a lot", but in the scheme of this competition, it's the difference between 150th and 56th, so to me, it matters a lot :)</p>
<p>This is the solution for my team, "Speed is all you need". We were about 40th before the public baseline. TLDR at bottom.</p>
<p>First of all, thanks to my team, especially <a href="https://www.kaggle.com/Neurallmonk" target="_blank">@Neurallmonk</a>, who has been exceedingly helpful, especially considering it is his first competition. Thanks to Ian Pan for the dataset, upon which our first month of experimenting was based, and thank you to Kun for the baseline; though I was frustrated when it was initially released, I have learned so much from the code, and a medal is only a medal, but learning the code lasts forever.</p>
<p>Our solution is nothing special - just hours and hours of optimizing Kun's baseline inference. We did not have the compute resources in order to retrain the first stage models, and a few of our teammates were busy during the last week of the competition, myself included as I had exams throughout. Otherwise, we would have liked to replace GRU with LSTM and ensemble.</p>
<p>I optimized Kun's inference by moving model loading outside of for loops, replacing zoom with albumentations' resize, and hundreds of other small changes which combined together to make the inference over 3x faster. This results in a score of 0.229 public, 0.230 private LB in 2.5 hours (original notebook was 0.233 public, 0.232 private in 9 hours). Admittedly, I have no clue why this notebook has a slightly better public score, but the main goal was to be faster. This 2.5 hour long notebook is <a href="https://www.kaggle.com/stanleyjzheng/fast-baseline-with-tta?scriptVersionId=45225265" target="_blank">Here</a></p>
<p>Then, we added TTA. We used TTAx3 with flips, CLAHE, brightness/contrast/hue. TTAx3 was the maximum we could fit in 9 hours, and these augmentations were found with trial and error on the public LB. Our final notebook with TTA is <a href="https://www.kaggle.com/stanleyjzheng/fast-baseline-with-tta-after-gru-8e13cc?scriptVersionId=45424927" target="_blank">Here</a></p>
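<p>As a rough sketch of how that kind of TTA can be wired around an existing prediction function (the transform choices mirror the ones listed above, but the parameter values and the <code>predict_proba</code> callable are placeholders, not the actual baseline code):</p>
<pre><code>import numpy as np
import albumentations as A

# Three illustrative TTA variants: flip, CLAHE, brightness/contrast.
tta_transforms = [
    A.Compose([A.HorizontalFlip(p=1.0)]),
    A.Compose([A.CLAHE(p=1.0)]),
    A.Compose([A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=1.0)]),
]

def predict_with_tta(predict_proba, image):
    # predict_proba(image) -> np.ndarray of per-label probabilities (placeholder callable).
    preds = [predict_proba(image)]
    for t in tta_transforms:
        preds.append(predict_proba(t(image=image)["image"]))
    return np.mean(preds, axis=0)
</code></pre>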
<p>TLDR: We used the original models trained in the public baseline, and changed purely inference. After optimization, our notebook runs in 2.5 hours (0.229 public, 0.230 private), and after TTAx3, we achieve a score of 0.221 public, 0.216 private.</p>
<p>Thanks Parker! </p>
<p>Thank you so much Chris!</p>
<p>Thank you!</p>
<p>Thank you! Congratulations on your medal too.</p>
<p>Thanks so much, never thought I'd get complimented by a GM when I started! We predicted on both public and private test on all submissions - we considered private only, but we only got TTA working 2 days before the end, so with only 10 submissions to figure out which augmentations were most helpful, we needed the public LB scores.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
10th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!<br>
Certainly, shape information may be important for multi-input approaches such as multi-view/laterality models, and it could be used to improve results.<br>
I had an idea for multi-input, but lacked the implementation time and computing resources😅</p>
<p>First, I would like to thank the competition hosts for organizing this competition and also the participants.</p>
<p>I’m very happy to have won my first gold medal at Kaggle!</p>
<h2>Overview</h2>
<p>My model is a very simple single input model, with a simple pipeline. (So I didn't expect to win a gold medal.)</p>
<p>My final model is a simple average ensemble of the following 3 models. (model names are from timm)</p>
<ul>
<li>tf_efficientnetv2_s (no aux loss)</li>
<li>tf_efficientnetv2_s (using aux loss)</li>
<li>maxvit_tiny_tf_384.in1k (using aux loss)</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2F6c141d2897b20fac001918828caaef9b%2Fpipeline.png?generation=1677661238257982&alt=media" alt="pipeline"></p>
<p>I built the Baseline myself, but it is similar to <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a>'s (<a href="https://www.kaggle.com/code/vslaykovsky/train-pytorch-aux-targets-weighted-loss-thres" target="_blank">https://www.kaggle.com/code/vslaykovsky/train-pytorch-aux-targets-weighted-loss-thres</a>) and I actually used it as a reference.</p>
<h2>Solution</h2>
<ul>
<li>CV Strategy<ul>
<li>StratifiedGroupKFold (n=4, groups=patient_id)</li></ul></li>
<li>Data & Preprocessing<ul>
<li>No external data was used.</li>
<li>ROI extraction was performed using rule-based method (reference: <a href="https://www.kaggle.com/code/vslaykovsky/rsna-cut-off-empty-space-from-images" target="_blank">https://www.kaggle.com/code/vslaykovsky/rsna-cut-off-empty-space-from-images</a>).</li>
<li>Fast loading was achieved using nvJPEG2000 (reference: <a href="https://www.kaggle.com/code/snaker/easy-load-the-image-with-nvjpeg2000" target="_blank">https://www.kaggle.com/code/snaker/easy-load-the-image-with-nvjpeg2000</a>).</li>
<li>The resolution was set to 1536x960 for efficientnet and 1536x768 for maxvit.</li>
<li>The channel number was set to 3 (to use pretrained models).</li>
<li>Min-max scaling (-1.0 ~ 1.0) was applied.</li>
<li>VOI LUT was applied.</li>
<li>8 bit</li>
<li>Care was taken to ensure that the preprocessing pipeline remains unchanged between training and inference.</li></ul></li>
<li>Dealing with Imbalance Data<ul>
<li>Batch size of 8. Adjusted to have a majority (Not Cancer) to minority (Cancer) ratio of 7:1 for each batch. This is essentially oversampling.</li></ul></li>
<li>Data Augmentation<ul>
<li>I used albumentations (a data augmentation library).</li></ul></li>
</ul>
<pre><code>import albumentations as A
A.HorizontalFlip(p=)
A.VerticalFlip(p=)
A.ShiftScaleRotate(shift_limit=, scale_limit=, rotate_limit=, p=)
A.OneOf([
A.RandomGamma(gamma_limit=(, ), p=),
A.RandomBrightnessContrast(brightness_limit=, contrast_limit=, p=)
], p=)
A.CoarseDropout(max_height=, max_width=, p=)
</code></pre>
<ul>
<li>Model Parameters<ul>
<li>drop_rate: 0.8</li>
<li>drop_path_rate: 0.2</li>
<li>criterion: BCEWithLogitsLoss</li>
<li>optimizer: Adam (lr: 1.0e-4)</li>
<li>scheduler: OneCycleLR (pct_start: 0.1, div_factor: 1.0e+3, max_lr: 1.0e-4)</li>
<li>epoch: 5</li>
<li>batch_size: 8 (and accumulate_grad_batches=4, so 8*4=32)</li>
<li>fp16 (training and inference)</li>
<li>aux loss<ul>
<li>aux target: site_id, laterality, view, implant, biopsy, invasive, BIRADS, density, difficult_negative_case, age</li>
<li>aux loss weight: 1.0</li>
<li>From the discussion (<a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/370341" target="_blank">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/370341</a>), there was information that there were machine_id in the LB that were not in the training data, so the machine_id was not used.</li>
<li>However, I have a feeling that using site_id creates a model too specific to this competition. It is questionable whether this model can be applied to mammography images from other facilities…</li></ul></li>
<li>Inference<ul>
<li>top-3 mean aggregation (average of the top 3 predictions per prediction_id)</li>
<li>To reduce inference time, when doing the ensemble, the 4 models from the 4-fold cross-validation were not used for inference, but a single model trained on all data was used.<br>
(cross-validation was used to check CV scores and to find thresholds.)</li>
<li>The ensemble is a simple average; I also tried optimising the weights using the Nelder-Mead method, but as the weights were calculated almost identically across all models, I opted for a simple average.</li></ul></li>
<li>Results</li>
</ul>
<table>
<thead>
<tr>
<th>Model</th>
<th>AUROC</th>
<th>local cv pF1</th>
<th>public pF1</th>
<th>private pF1</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. tf_efficientnetv2_s (no aux loss)</td>
<td>0.897</td>
<td>0.424</td>
<td>0.48※</td>
<td>0.41※</td>
</tr>
<tr>
<td>2. tf_efficientnetv2_s (using aux loss)</td>
<td>0.894</td>
<td>0.435</td>
<td>0.48※</td>
<td>0.45※</td>
</tr>
<tr>
<td>3. maxvit_tiny_tf_384.in1k (using aux loss)</td>
<td>0.896</td>
<td>0.443</td>
<td>Not submitted</td>
<td>Not submitted</td>
</tr>
<tr>
<td>ensemble (1+2+3 prob mean)</td>
<td>0.917</td>
<td>0.482</td>
<td>0.54</td>
<td>0.49</td>
</tr>
</tbody>
</table>
<p>※ simple mean aggregation (not top3 mean aggregation)</p>
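<p>For reference, a minimal pandas sketch of the top-3 mean aggregation used above (assuming one row per image with <code>prediction_id</code> and a predicted <code>cancer</code> probability; column names are assumptions):</p>
<pre><code>import pandas as pd

def top3_mean_aggregate(df: pd.DataFrame) -> pd.DataFrame:
    # df has one row per image; aggregate to one probability per prediction_id.
    return (
        df.groupby("prediction_id")["cancer"]
        .apply(lambda s: s.nlargest(3).mean())  # average of the top 3 image predictions
        .reset_index()
    )
</code></pre>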
<h2>What Worked</h2>
<ul>
<li>Large image size</li>
<li>Strong dropout (dropout rate=0.8)</li>
<li>Stochastic depth rate (timm's drop_path_rate)</li>
<li>Strong augmentation<ul>
<li>By default, the border_mode of ShiftScaleRotate in Albumentations is cv2.BORDER_REFLECT_101, which produced better results than cv2.BORDER_CONSTANT.</li>
<li>If border_mode = cv2.BORDER_REFLECT_101, the overflowing image is reflected by the affine transformation and a weird image is generated. ↓<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2F3bc241b5353f842f03bdecab1625886a%2Freflect2.png?generation=1677661658109905&alt=media" alt="reflect2"></li>
<li>However, as the results were better when the image was reflected, I considered that the task of this competition was more about texture than about the shape of the image.</li></ul></li>
<li>Dealing with imbalances in mini-batches.</li>
<li>ROI extraction</li>
<li>Aux Loss</li>
<li>VOI-LUT</li>
<li>Anomaly image processing<ul>
<li>Some images were black and full of noise, which were removed. (reference: <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/373208" target="_blank">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/373208</a>)</li></ul></li>
<li>Ensemble</li>
</ul>
<h2>What Didn’t Work</h2>
<ul>
<li>Dealing with imbalances by loss functions<ul>
<li>Focal Loss, Weighted Cross Entropy</li></ul></li>
<li>Metric learning</li>
<li>Center-crop to maintain aspect ratio</li>
<li>TTA<ul>
<li>Not adopted as results did not change much and inference time increased</li></ul></li>
<li>Large parameter models<ul>
<li>Only tried efficientnetv2_l, but it is only slightly better than s. Not adopted due to increased inference time.</li></ul></li>
<li>Some data augmentations<ul>
<li>CutMix, MixUp, Grid distortion, elastic transformation, CLAHE</li></ul></li>
</ul>
<h2>Code</h2>
<ul>
<li>GitHub -> <a href="https://github.com/moritake04/rsna-breast-cancer-detection" target="_blank">https://github.com/moritake04/rsna-breast-cancer-detection</a></li>
<li>inference notebook -> <a href="https://www.kaggle.com/code/moritake04/private10th-final-sub" target="_blank">https://www.kaggle.com/code/moritake04/private10th-final-sub</a></li>
</ul>
<p>(Sorry for the dirty code.) </p>
<h2>Thanks and Acknowledgements</h2>
<p>Again, thank you to the organizers and participants of this competition, and I will continue to work hard to become a kaggle master. </p>
<p>Thank you!<br>
I did a grid search with drop_rate of 0.2, 0.5 and 0.8 and the higher the drop_rate the better the auc and pf1.</p>
<p>Here is a graph of my mid-experiment (yellow: 0.8, grey: 0.5, blue: 0.2).<br>
This is the result for only one fold during the 4-fold cross-validation, with a difference of about 0.05 between drop_rate = 0.2 and 0.8 in pf1 (per image).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2Fc291d279c3d26c8eda338b0e6d876daf%2F2023-03-01%20192035.png?generation=1677666089756731&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2F18dbadb9e5900e5d300829e0f8f5e9a2%2F2023-03-01%20192054.png?generation=1677666104255578&alt=media" alt=""></p>
<p>Maybe this is because the python version of the kaggle notebook has recently been upgraded from 3.7 to 3.10.<br>
If you set the python version of the notebook to 3.7, you should be able to run it. (Sorry, I don't know the specific solution right now…)</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
16th place solution : segmentation & meta-classifier
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Q) <code>pre-trained model at this stage cannot predict correct region ~</code></p>
<p>Yes, I agree with you! But I think that although the (predicted) region is not perfect, it can still guide the model better than having no guidance at all. Also, I checked that segmentation boosts the CV score (I forgot the exact score), so I decided to use it.</p>
<p>I don't remember the exact number of RoI images in the cbis & vindr datasets; maybe about 20K images, I guess.</p>
<p>Q) <code>Which stage did you switch ~</code></p>
<p>The model is a segmentation model (which also has a classification head), but the auxiliary head is added at the main stage!</p>
<ul>
<li>In the pre-training stage, the model has two heads (cls, seg)</li>
<li>In the main stage (training with competition data), the model has three heads (cls, seg, aux)</li>
</ul>
<p>Q) <code>means the outputs from classification head?</code></p>
<p>Yes, you are right, and sorry for the confusion. To be more specific, the model predicts at image level while the meta-classifier is trained at breast level (combination of patient_id & laterality). So, when I aggregate the predictions (to make a breast-level probability), I compute the statistics (mean, max, min, std) instead of only the mean.</p>
<p>I hope this helps : ) Thank you!</p>
<p>I initially thought that training with the RoI (Region of Interest) images (of the external datasets) would help convergence more than training only with the label (cancer). In my experiments, a model trained only with the label tends to fit a wrong region (e.g. an edge or something else) in a few cases.</p>
<p>Congrats to you too on getting a medal! Thank you : )</p>
<p>You use all predicted negative/cancer images? -> Yep, I just used them all. As I remember, maybe 6~70% of the negative samples have an empty segment. Also, I had planned to clean & remove those segments, but I didn't have enough time & resources to do it : (</p>
<p>for the brief settings,</p>
<ul>
<li>arch : effnet-v3 + u-net++</li>
<li>input : 1x1536x960 (grayscale image)</li>
<li>output : 1x1536x960 (segment), (1,) (label)</li>
<li>bs : 16</li>
<li>opt : adamw</li>
<li>lr : cosine annealing, 1e-4 ~ 1e-6</li>
<li>loss : dice + cross entropy</li>
<li>mixed precision</li>
</ul>
<p>Hello everyone!</p>
<p>First, thanks to Kaggle team and the organizers for hosting a competition! Also, congratulations to all the winners!</p>
<h2>Data</h2>
<h3>Preprocessing</h3>
<p>My preprocessing code heavily depends on the public notebooks (eg. remove letters, crop breast via contour).</p>
<ol>
<li>decode <code>.jpeg</code> with <code>dicomsdl</code> & <code>nvjpeg2000</code>.</li>
<li>crop edge (margin pixel 10)</li>
<li>extract breast with <code>opencv2</code> (contour based)</li>
<li>resize to 1536x960. (I roughly guess that resizing into a 1.5 ~ 2.0 aspect ratio is fine.)</li>
</ol>
<p>In my experiments, windowing didn't affect the score positively, so I decided not to use it.</p>
<h3>Augmentation</h3>
<p>Heavy augmentation works well. Light augmentation tends to overfit.</p>
<ul>
<li>v/hflip</li>
<li>scale / rotate</li>
<li>brightness / contrast</li>
<li>cutout (coarse dropout with large patch size)</li>
<li>mixup</li>
</ul>
<h2>Architecture</h2>
<p>I couldn't spend much time running various experiments due to a lack of time & computing resources. So, I only tested a few backbones & training recipes. (About 70% of my submissions were runtime errors & mistakes lol)</p>
<p>Here's a full pipeline.</p>
<ol>
<li>pre-train segmentation model with the <code>cbis-ddsm</code> & <code>vindr</code> datasets.<ul>
<li>segment: provided RoI image.</li>
<li>label: <code>malignant</code> to cancer / <code>BIRADS 5</code> to cancer.<br>
% Of course, the label doesn't perfectly correlate with the competition standards. But, I roughly think that maybe it could help train the model in some ways.</li></ul></li>
<li>train with competition data (initialize the weight with the pre-trained model)<ul>
<li>segment: inferred with the pre-trained model.</li>
<li>auxiliary: given meta-features (total 11 features).</li></ul></li>
<li>re-label the external data with the <code>step 2</code> model.</li>
<li>re-train with competition data (initialize with <code>step 3</code> model)</li>
<li>train a meta-classifier (oof + meta-features (e.g. laterality, age, …))</li>
</ol>
<p>For a baseline, I ran steps 1~2 and 5 and achieved CV 0.4885, LB 0.59 (PB 0.46). Also, I tested pre-training with only the <code>cbis-ddsm</code> dataset, and there was about a 0.02 drop on CV & LB, but the same score on PB (CV 0.4656, LB 0.57, PB 0.46).</p>
<p>A week before the deadline, I finished up to step 5 and got CV 0.5012, LB 0.55 (PB 0.51). Sadly, I didn't choose it as a final submission : (</p>
<p>On the last day of the competition, I ensembled an <code>effnet_v2_s</code> backbone and got CV 0.5063, LB 0.56 (PB 0.49).</p>
<p>Lastly, I chose the best LB & CV for the final submission.</p>
<h3>Meta-Classifier</h3>
<p>I built a meta-classifier with meta-features like age, laterality, and the (per-breast) statistics of the predictions. Usually, it gives a ~0.02 improvement on CV & LB (also PB).</p>
<p>I was worried about overfitting to some meta-features (e.g. machine id, (predicted) density, …), so I decided to use only 3 auxiliary features (age, site_id, laterality) to train the model.</p>
<ul>
<li>feature: age, site_id, laterality, (mean, std, min, max) of the predictions.</li>
<li>cv: stratified k fold (5 folds)</li>
<li>model: CatBoost</li>
</ul>
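<p>A rough sketch of this meta-classifier stage (the CatBoost hyperparameters and column names below are assumptions for illustration, not my exact setup):</p>
<pre><code>import pandas as pd
from catboost import CatBoostClassifier

def build_breast_level_features(oof: pd.DataFrame) -> pd.DataFrame:
    """oof: per-image OOF predictions with columns
    patient_id, laterality, age, site_id, pred, cancer (breast-level target)."""
    grp = oof.groupby(["patient_id", "laterality"])
    feats = grp["pred"].agg(["mean", "std", "min", "max"]).reset_index()
    meta = grp[["age", "site_id", "cancer"]].first().reset_index(drop=True)
    return pd.concat([feats, meta], axis=1)

def train_meta_classifier(oof: pd.DataFrame) -> CatBoostClassifier:
    df = build_breast_level_features(oof)
    X = df[["mean", "std", "min", "max", "age", "site_id", "laterality"]]
    y = df["cancer"]
    model = CatBoostClassifier(iterations=500, depth=4, verbose=0,
                               cat_features=["site_id", "laterality"])
    model.fit(X, y)
    return model
</code></pre>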
<h2>Works</h2>
<ul>
<li>higher resolution (1536x768 ~ 1024) is good.</li>
<li>external data<ul>
<li>it gives about +0.02 boosts.</li></ul></li>
<li>architecture<ul>
<li>encoder backbone: <code>effnet-b3</code> works best</li>
<li>decoder: u-net++</li></ul></li>
<li>augmentation</li>
<li>mixup (alpha 1.0)</li>
<li>loss<ul>
<li>0.6 * cls_loss (cross_entropy) + 0.4 * seg_loss (dice) + 0.1 * aux_loss (cross-entropy)</li></ul></li>
<li>stratified group k fold (4 folds)</li>
<li>meta-classifier</li>
<li>TTA</li>
</ul>
<p>thanks for reading! I hope this could help you :)</p>
<p>thank you! my experiment codes are messy and unorganized, maybe it'll take some time to clean up. I'll try!</p>
<blockquote>
<p>Did you have your own model for the predicted density?</p>
</blockquote>
<p>Yes! The model is also trained to predict the auxiliary features (e.g. BIRADS, density, age, …). But I didn't check the accuracy of each feature.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
18th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle, the hosts and competitors for this meaningful competition.</p>
<p>In the following, I want to provide a brief summary of my solution.</p>
<h2>Overview</h2>
<p>Similar to many public codes, my pipeline is as follows.</p>
<ol>
<li>Detect a breast area for each image and crop that area</li>
<li>Predict cancer image-wise using various backbones</li>
<li>Aggregate image-wise predictions and apply thresholding to get final prediction for each target</li>
</ol>
<h2>Preprocessing</h2>
<p>My preprocessing depends on many public codes. I am grateful to the authors of those codes.</p>
<p>Sigmoid/linear windowing is applied based on <code>VOILUTFunction</code>, <code>WindowCenter</code> and <code>WindowWidth</code> in dicom data. After windowing, images are processed with min-max scaling and treated as 8-bit images.</p>
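<p>A minimal sketch of this windowing step, following the standard DICOM linear/sigmoid VOI LUT formulas (multi-window tags and edge cases omitted; not my exact preprocessing code):</p>
<pre><code>import numpy as np

def apply_windowing(pixels: np.ndarray, center: float, width: float,
                    voi_lut_function: str = "LINEAR") -> np.ndarray:
    x = pixels.astype(np.float32)
    if voi_lut_function.upper() == "SIGMOID":
        y = 1.0 / (1.0 + np.exp(-4.0 * (x - center) / width))
    else:  # LINEAR window
        y = (x - (center - 0.5)) / (width - 1.0) + 0.5
        y = np.clip(y, 0.0, 1.0)
    # Min-max scale and convert to 8-bit as described above.
    y = (y - y.min()) / (y.max() - y.min() + 1e-6)
    return (y * 255).astype(np.uint8)
</code></pre>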
<h2>Breast detector</h2>
<p>I annotated breast bounding boxes for about 1000 images. In addition to those labels, I also used labels provided by <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> (about 500 images) in <a href="https://www.kaggle.com/code/remekkinas/breast-cancer-roi-brest-extractor/notebook" target="_blank">this code</a> to train a single YOLOv5n6 with the input size of 1024. mAP_0.5:0.95 of a validation split is 0.952.</p>
<p>Given the detections, affine transformation is applied to obtain fixed size cropped images.</p>
<p>At that time, expanding the bboxes so that the aspect ratio and the size of the bbox relative to the original images did not change too much improved local CV somewhat.</p>
<p>However, the submission with the highest private LB did not use a detector…</p>
<h2>Cancer model</h2>
<p>I used <code>tf_efficientnet_b3.ns_jft_in1k</code>, <code>tf_efficientnetv2_s.in21k</code>, <code>eca_nfnet_l1</code> and <code>dm_nfnet_f0</code> from the timm library for ensembling in my final submission. Each model is trained with a different input size (800×1200 to 1024×1536), learning rate and number of training epochs.</p>
<p>GeM pooling with fixed p is used as global pooling.</p>
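<p>For reference, a standard GeM pooling layer with fixed p looks roughly like this (the exact p value is not restated here):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized Mean pooling with a fixed exponent p."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = p
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map -> (B, C) pooled descriptor
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return x.flatten(1)
</code></pre>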
<h2>Augmentations</h2>
<p>I used the following list of augmentations implemented with imgaug.</p>
<pre><code>iaa.Sometimes(
,
iaa.Affine(
rotate=(-, ),
shear=(-, ),
scale={: scale_x, : scale_y},
translate_percent={: shift_x, : shift_y},
),
),
iaa.Resize({: image_size[], : image_size[]},
interpolation=cv2.INTER_LINEAR),
iaa.Fliplr(),
iaa.Flipud( row[] == ),
iaa.Sequential([
iaa.Sometimes(, iaa.SomeOf(, [
iaa.GaussianBlur(sigma=(, )),
iaa.AdditiveGaussianNoise(scale=(, )),
])),
iaa.Sometimes(, iaa.Multiply((, ))),
iaa.Sometimes(, iaa.LinearContrast((, ))),
], random_order=),
</code></pre>
<h2>Training settings</h2>
<p>I applied bce loss for positive samples and focal loss for negative samples, since I thought that the correct identification of hard negative cases would contribute to the improvement of pF1.</p>
<p>As an auxiliary loss, focal loss to classify invasive is adopted.</p>
<p>To mitigate overfitting, I employed an exponential moving average (EMA) of model weights with warmup, where the decay is (1 + t) / (10 + t) and t is the training iteration.</p>
<p>The final model weights are obtained by averaging the normally trained weights and the EMA weights that had the highest pF1 on a validation set.</p>
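<p>A simplified PyTorch-style sketch of EMA with this warmup schedule (not the exact code used):</p>
<pre><code>import copy
import torch

class EmaModel:
    """Keeps an exponential moving average of model weights with warmup."""
    def __init__(self, model: torch.nn.Module, max_decay: float = 0.999):
        self.ema = copy.deepcopy(model).eval()
        for p in self.ema.parameters():
            p.requires_grad_(False)
        self.max_decay = max_decay
        self.t = 0  # training iteration

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        self.t += 1
        # Warmup: decay ramps up as (1 + t) / (10 + t), capped at max_decay.
        d = min(self.max_decay, (1 + self.t) / (10 + self.t))
        for ema_p, p in zip(self.ema.state_dict().values(), model.state_dict().values()):
            if ema_p.dtype.is_floating_point:
                ema_p.mul_(d).add_(p.detach(), alpha=1 - d)
</code></pre>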
<p>Other training settings are as follows:</p>
<ul>
<li>Optimizer: AdamW with weight decay 1e-3</li>
<li>Scheduler: OneCycleLR</li>
<li>6~8 epochs of training depending on backbones</li>
</ul>
<h2>Post-processing</h2>
<ul>
<li>Flip test is used as TTA</li>
<li>Image-wise predictions were aggregated by LP pooling, where p is determined based on the validation score</li>
<li>Ensemble is performed by simple averaging</li>
<li>Referring to <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">2nd place solution of BirdCLEF 2021</a>, percentile based thresholding is adopted</li>
</ul>
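<p>As a sketch of the percentile-based thresholding in the last bullet above: instead of fixing a probability cutoff, the threshold is chosen so that a target fraction of samples ends up positive (the target rate itself is an assumption to be tuned on validation data):</p>
<pre><code>import numpy as np

def percentile_threshold(probs: np.ndarray, expected_positive_rate: float) -> float:
    # Threshold such that ~expected_positive_rate of samples are predicted positive.
    return float(np.quantile(probs, 1.0 - expected_positive_rate))

# Example usage on aggregated breast-level probabilities:
# thr = percentile_threshold(probs, expected_positive_rate=0.02)
# labels = (probs >= thr).astype(int)
</code></pre>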
<h2>Submissions</h2>
<p>My final submission and the best submission are as follows. Simpler method performed better on private data in my case.</p>
<table>
<thead>
<tr>
<th>Submission</th>
<th>Backbones</th>
<th>W/ detector</th>
<th>Agg. method</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Final submission</td>
<td>tf_efficientnet_b3.ns_jft_in1k, tf_efficientnetv2_s.in21k, eca_nfnet_l1, dm_nfnet_f0</td>
<td>✔︎</td>
<td>LP pooling</td>
<td>0.65</td>
<td>0.48</td>
</tr>
<tr>
<td>Best private LB</td>
<td>tf_efficientnet_b3.ns_jft_in1k, tf_efficientnetv2_s.in21k, 2x nfnet_l0</td>
<td>×</td>
<td>mean</td>
<td>0.56</td>
<td>0.50</td>
</tr>
</tbody>
</table>
<p>Thank you for reading.</p>
<p>Thanks! I'm happy to hear that.</p>
<p>Thanks! I only experimented with this scheduler using different pct_start values. 0.2 is used in my solution, but there wasn't a big difference.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
19th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>19th Place Solution</strong></p>
<p>Thanks to the team at RSNA and Kaggle for putting this competition together, and thanks to my teammates <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> and <a href="https://www.kaggle.com/harshitsheoran" target="_blank">@harshitsheoran</a>.</p>
<p>The final submission consisted of 2 CNN models : eca_nfnet_l0 and tf_efficientnet_b3_ns.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Best Public</th>
<th>Best Private</th>
<th>Used in Ensemble</th>
</tr>
</thead>
<tbody>
<tr>
<td>eca_nfnet_l0 @ 1536 (Harshit)</td>
<td>.488</td>
<td>.6</td>
<td>.47</td>
<td>0</td>
</tr>
<tr>
<td>eca_nfnet_l0 @ 1536 (Ivan)</td>
<td>.4688</td>
<td>.61</td>
<td>.46</td>
<td>1</td>
</tr>
<tr>
<td>tf_efficientnet_b3_ns @ 1536 (Ivan)</td>
<td>.491</td>
<td>.63</td>
<td>.48</td>
<td>1</td>
</tr>
<tr>
<td>tf_efficientnet_b3_ns @ 1920 by 1536 (Martin)</td>
<td>.464</td>
<td>.6</td>
<td>.51</td>
<td>0</td>
</tr>
</tbody>
</table>
<p><br></p>
<p>When combined, these two models had a best Public LB of .65 and a best Private LB of .5. Unfortunately there was no correlation between public and private or CV and private. This led to us picking the wrong submission. In the end we had 12 submissions that would have gotten us into gold but had no way of telling if they were the correct ones. </p>
<p><strong>What Worked?</strong></p>
<ul>
<li>Label Smoothing</li>
<li>Auxiliary Classes (BIRADS 2, Benign, Invasive, Biopsy)</li>
<li>Weighted BCELoss</li>
<li>Mosaic (with class max as target)</li>
<li>Mixup (with class max as target)</li>
<li>ROI Cropping (during training)</li>
</ul>
<p>For inference we were able to run more folds over the test data by first running 2 folds, checking whether a sample scored above a threshold (~0.03), and only running the rest of the folds if it did. That trick worked well and allowed us to spend more folds on the samples that were relevant.</p>
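<p>A minimal sketch of this gating trick (the model callables and exact gate value are placeholders):</p>
<pre><code>import numpy as np

def gated_fold_inference(image, fold_models, gate_threshold=0.03, n_gate_folds=2):
    """Run a cheap 2-fold prediction first; only spend the remaining folds
    on samples whose early prediction clears the gate threshold."""
    gate_preds = [m(image) for m in fold_models[:n_gate_folds]]
    if np.mean(gate_preds) < gate_threshold:
        return float(np.mean(gate_preds))
    all_preds = gate_preds + [m(image) for m in fold_models[n_gate_folds:]]
    return float(np.mean(all_preds))
</code></pre>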
<p>Another trick was using the Auxiliary class "Difficult Negative Case" to smooth out the Cancer class by using it in a weighted average. This gave a slight improvement on the CV, but nothing game changing.</p>
<p>Thanks for reading, and good luck on your future Kaggle endeavors 👍</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Thanks for your kind words !</p>
<p>I'm sorry, but the notebook you mentioned should be moved to the <code>notebooks/3rd/</code> directory instead, since I just downloaded <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> 's <a href="https://www.kaggle.com/code/remekkinas/breast-cancer-roi-brest-extractor" target="_blank">notebook</a>. Giving credit to his awesome discussions (<a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/371630" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/369749" target="_blank">here</a>), <a href="https://www.kaggle.com/code/remekkinas/roi-detector-yolov5-training-and-annotations/notebook" target="_blank">yolov5 training notebook</a>, <a href="https://www.kaggle.com/code/remekkinas/breast-cancer-roi-brest-extractor" target="_blank">yolov5 inference notebook</a> and <a href="https://www.kaggle.com/datasets/remekkinas/rsna-roi-detector-annotations-yolo" target="_blank">annotated dataset</a>. </p>
<p>Thank you! I guess that the training code should be available in next 2 days.</p>
<p>Hi, I just tried single-view models. Multi-view models have potential, so I'll give them a try later 😊</p>
<p><strong><em>Update 10/04/2023</em></strong></p>
<p>Ablation study: despite a large improvement on a particular fold (below), the soft positive label trick does not show a clear improvement in performance over the standard label smoothing technique. The use of external datasets improves the F1-score by about 0.02 on local OOF validation and the Private Leaderboard (with exactly the same training pipeline + hyper-params).</p>
<table>
<thead>
<tr>
<th>External data</th>
<th>Loss</th>
<th>OOF F1</th>
<th>LB</th>
<th>PL</th>
</tr>
</thead>
<tbody>
<tr>
<td>x</td>
<td>label smoothing=0.1</td>
<td>0.4921</td>
<td>0.60</td>
<td>0.53</td>
</tr>
<tr>
<td>x</td>
<td>soft positive label=0.8</td>
<td>0.4853</td>
<td>0.60</td>
<td>0.53</td>
</tr>
<tr>
<td>✓</td>
<td>label smoothing = 0.1</td>
<td>0.5161</td>
<td>0.58</td>
<td><strong>0.56</strong></td>
</tr>
<tr>
<td>✓</td>
<td>soft positive label = 0.9</td>
<td><strong>0.5182</strong></td>
<td><strong>0.61</strong></td>
<td>0.55</td>
</tr>
</tbody>
</table>
<hr>
<p>First of all, I would like to thank Kaggle and the competition host for such an amazing challenge, a lofty goal with high data quality. Thank you to all participants/kagglers for the many active and helpful discussions/codes. My solution was just built up from all the pieces you kindly shared. I learned a lot and I really appreciate that.<br>
I'm also very happy and surprised with the 1st place. This is my first gold medal and I'm writing my first writeup. It was such a great journey for me.<br>
For the solution, I use a very simple pipeline which can be described in just a few lines:</p>
<ul>
<li>Use some external datasets: VinDr-Mammo, MiniDDSM, CMMD, CDD-CESM, BMCD.</li>
<li>4 x Convnextv1-small 2048x1024, validated on 4-folds splits of competition data.</li>
<li>Soft positive label</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F15d5d89bcd9cd18e45b6da133d5f5ad7%2Fpreprocess.jpg?generation=1685726162522472&alt=media" alt="inference pipeline"></p>
<p>Now I want to share some experiments and my thoughts about them. Many of them can be found in other discussions by excellent kagglers, and many of them seem obvious. Hope this helps some newcomers get started in the future. Kindly note that these are just my own opinions/thoughts based on very limited experiments and knowledge. I appreciate your discussions, and feel free to correct me if something is wrong.</p>
<h1>1. ROI crop</h1>
<p>ROI cropping was performed since it effectively helps keep more texture/detail given a fixed resolution. I use YOLOX-nano 416x416 as the ROI detector. The advantage of a DL detector vs rule-based methods is that the obtained bbox is smaller, the aspect ratio is more stable, and it focuses on the breast region.</p>
<ol>
<li>Train a YOLOX on <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> 's <a href="https://www.kaggle.com/datasets/remekkinas/rsna-roi-detector-annotations-yolo" target="_blank">dataset</a> (472 bbox-annotated images)</li>
<li>Run inference on all available training data with low <code>conf_thres</code> and high <code>iou_thres</code>. Only 3 images were miss-detected (all contain noise) and over 100 images had 2 boxes (almost completely overlapping). I manually selected and labeled 99 of those images. Therefore, I have 571 annotated images in total.</li>
<li>Retrain YOLOX on the new images: 521 for train, 50 for val. Note that these 50 val images include all 47 val images of the original <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> 's dataset. The new dataset version contains original-resolution images (same as the original dicoms), preprocessed with simple min-max normalization. I tried various model/image sizes and finally chose YOLOX-nano 416x416 as the final model due to its consistent results and small overhead.</li>
</ol>
<table>
<thead>
<tr>
<th><strong>model size</strong></th>
<th><strong>image size</strong></th>
<th><strong>interpolation</strong></th>
<th><strong>AP_new_val</strong></th>
<th><strong>AP_remek_val</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><em>nano (selection)</em></td>
<td><em>416</em></td>
<td><em>LINEAR</em></td>
<td><em>96.26</em></td>
<td><em>94.21</em></td>
</tr>
<tr>
<td>nano</td>
<td>416</td>
<td>AREA</td>
<td>94.09</td>
<td>91.60</td>
</tr>
<tr>
<td>nano</td>
<td>640</td>
<td>LINEAR</td>
<td>95.85</td>
<td>88.40</td>
</tr>
<tr>
<td>nano</td>
<td>768</td>
<td>LINEAR</td>
<td>96.22</td>
<td>82.09</td>
</tr>
<tr>
<td>nano</td>
<td>1024</td>
<td>LINEAR</td>
<td>94.92</td>
<td>89.40</td>
</tr>
<tr>
<td>tiny</td>
<td>416</td>
<td>LINEAR</td>
<td>94.23</td>
<td>90.20</td>
</tr>
<tr>
<td>tiny</td>
<td>640</td>
<td>LINEAR</td>
<td>94.95</td>
<td>89.84</td>
</tr>
<tr>
<td>tiny</td>
<td>768</td>
<td>AREA</td>
<td>96.21</td>
<td>68.03</td>
</tr>
<tr>
<td>tiny</td>
<td>1024</td>
<td>AREA</td>
<td>93.69</td>
<td>73.70</td>
</tr>
<tr>
<td>s</td>
<td>416</td>
<td>LINEAR</td>
<td>95.03</td>
<td>0.86</td>
</tr>
<tr>
<td>s</td>
<td>640</td>
<td>LINEAR</td>
<td>96.10</td>
<td>70.80</td>
</tr>
<tr>
<td>s</td>
<td>768</td>
<td>LINEAR</td>
<td>96.79</td>
<td>78.70</td>
</tr>
</tbody>
</table>
<p>AP@0.5 is 1.0 in all experiments. We see a large gap in AP between the two validation sets. Some possible reasons for that:</p>
<ul>
<li>New version add more 3/50 typical hard cases.</li>
<li>Inconsistent processing pipeline: val images in Remek's val set were resized 2 times (original --> 1024 --> 416) </li>
<li>Training images are annotated with personal bias (no standard way/consensus on how to annotate the breast boxes correctly). So a higher AP may not indicate a better model.</li>
<li>The validation size is also not large enough to judge</li>
<li>No hyper parameters tuning</li>
</ul>
<p>Did these things lead to the large gap, particularly with stronger models and larger image sizes?<br>
All these efforts are just to ensure an "as good as possible" ROI detection model. I think <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> 's <a href="https://www.kaggle.com/datasets/remekkinas/rsna-roi-detector-annotations-yolo" target="_blank">dataset</a> is enough to train good YOLOX models and they could perform equally well on the hidden test set.</p>
<p>Simpler Otsu thresholding + findContours(), slightly modified from <a href="https://www.kaggle.com/code/snnclsr/roi-extraction-using-opencv" target="_blank">this notebook</a>, is used to find the breast bbox as a fallback in case of a YOLOX miss-detection.<br>
Or, if both miss the breast box, just use the whole image without any cropping.</p>
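<p>The rule-based fallback can be sketched roughly as follows (8-bit grayscale input assumed; the exact modifications to the referenced notebook are not reproduced):</p>
<pre><code>import cv2
import numpy as np

def breast_bbox_otsu(img_u8: np.ndarray):
    """Return (x, y, w, h) of the largest foreground contour, or None if not found."""
    blur = cv2.GaussianBlur(img_u8, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # fall back to using the whole image
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
</code></pre>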
<h1>2. The inference pipeline</h1>
<p>Operations on large arrays take time, so I try to move the computation to the GPU as much as possible.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F9de52f41201b4a92774f5fee331cb879%2Fwriteup-Page-1.drawio.png?generation=1678001361222187&alt=media" alt=""></p>
<h1>3. Early experiments</h1>
<p>My final solution uses external datasets, but I stuck with using only competition data for most of the time (until "7 days to go"). Hence most of my experiments were done on competition data only: 5-fold splits with StratifiedGroupKFold based on <code>patient_id</code>. Training hyperparams used for the final solution are mostly inherited from these early experiments.</p>
<h2>3.1. About the metric</h2>
<p>The competition pF1 score is not stable and hard to track for me. Therefore, I mainly track my experiments based on multiple metrics: <code>{ PR_AUC, ROC_AUC, best_PF1 (binarized), best_threshold }</code> instead of just one.</p>
<ul>
<li><strong>PR_AUC</strong>: correlated with but more stable than best_PF1. It focuses on positive cases, and is strongly affected by prior data distribution (% of positive).</li>
<li><strong>ROC_AUC</strong>: less affected by prior data distribution. Much more stable, but seem to be over optimistic which led to just a small gap between a good model and a bad model.</li>
<li>To get a high binarized pF1, the model should not predict too many positives, which usually leads to many FPs --> dramatically reduces best_PF1. A model with a good pF1 score tends to prioritize Precision over Recall. I personally don't like this behaviour, especially for real-life applications.</li>
</ul>
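<p>For reference, the competition's probabilistic F1 is commonly computed as in the sketch below (based on the widely shared implementation):</p>
<pre><code>import numpy as np

def pfbeta(labels: np.ndarray, preds: np.ndarray, beta: float = 1.0) -> float:
    preds = np.clip(preds, 0.0, 1.0)
    ctp = preds[labels == 1].sum()   # "probabilistic" true positives
    cfp = preds[labels == 0].sum()   # "probabilistic" false positives
    c_precision = ctp / (ctp + cfp + 1e-9)
    c_recall = ctp / max(labels.sum(), 1)
    if c_precision + c_recall == 0:
        return 0.0
    beta2 = beta ** 2
    return (1 + beta2) * c_precision * c_recall / (beta2 * c_precision + c_recall)
</code></pre>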
<h2>3.2. Augmentations</h2>
<p>I stuck with this augmentation pipeline for all experiments, no tuning at all:</p>
<pre><code>A.Compose([
custom_augs.CustomRandomSizedCropNoResize(scale=(, ), ratio=(, ), p=),
A.HorizontalFlip(p=),
A.VerticalFlip(p=),
A.OneOf([
A.Downscale(scale_min=, scale_max=, interpolation=(upscale=cv2.INTER_LINEAR, downscale=cv2.INTER_AREA), p=),
A.Downscale(scale_min=, scale_max=, interpolation=(upscale=cv2.INTER_LANCZOS4, downscale=cv2.INTER_AREA), p=),
A.Downscale(scale_min=, scale_max=, interpolation=(upscale=cv2.INTER_LINEAR, downscale=cv2.INTER_LINEAR), p=),
], p=),
A.OneOf([
A.RandomToneCurve(scale=, p=),
A.RandomBrightnessContrast(brightness_limit=(-, ), contrast_limit=(-, ), brightness_by_max=, always_apply=, p=)
], p=),
A.OneOf(
[
A.ShiftScaleRotate(shift_limit=, scale_limit=[-, ], rotate_limit=[-, ], interpolation=cv2.INTER_LINEAR,
border_mode=cv2.BORDER_CONSTANT, value=, mask_value=, shift_limit_x=[-, ],
shift_limit_y=[-, ], rotate_method=, p=),
A.ElasticTransform(alpha=, sigma=, alpha_affine=, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT,
value=, mask_value=, approximate=, same_dxdy=, p=),
A.GridDistortion(num_steps=, distort_limit=, interpolation=cv2.INTER_LINEAR, border_mode=cv2.BORDER_CONSTANT,
value=, mask_value=, normalized=, p=),
], p=),
A.CoarseDropout(max_holes=, max_height=, max_width=, min_holes=, min_height=, min_width=,
fill_value=, mask_fill_value=, p=),
], p=)
</code></pre>
<p>For the random crop choice: the real breast size/ratio varies a lot between images --> the popular pipeline of <code>longest resize + padding</code> introduces a multi-scale problem. Of course, random cropping also introduces a higher risk of wrong positive labels.</p>
<p><em>Example batch</em><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F4f149f8617c07c9a8e0d8e571f6a68fa%2Fexample_batch.jpg?generation=1685726233222765&alt=media" alt="example batch"></p>
<h2>3.3. Up/down sampling</h2>
<p>I upsample pos cases in each epoch for all of my experiments.</p>
<ul>
<li>Ensuring at least 1 pos in a batch/iteration is pretty important to stabilize training. I found training difficult when set to 0.5 pos per batch.</li>
<li>The upsampling ratio can largely affect the CV score and prediction distribution. It also varies between backbones and other hyperparam choices. However, I prefer the smallest pos/neg ratio possible (since it's closer to the real data distribution) while still ensuring at least 1 pos/batch.</li>
<li>A large pos/neg ratio helps training go faster in early epochs. I tried linearly increasing/decreasing the pos/neg ratio between epochs to deal with some problems of prediction distribution/threshold (especially EffB4). But in the end, I got no improvement in CV. A minimal sampler sketch is shown right after this list.</li>
</ul>
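<p>A minimal sketch of such an epoch sampler (an illustration, not my exact sampler), which uses every negative once and re-samples positives so that each batch contains at least one positive:</p>
<pre><code>import numpy as np

def make_epoch_indices(labels: np.ndarray, batch_size: int = 8, pos_per_batch: int = 1):
    """Indices for one epoch: negatives are used once, positives are re-sampled
    so every batch contains at least `pos_per_batch` positives
    (batch_size=8 with 1 positive gives a 1:7 pos/neg ratio per batch)."""
    pos_idx = np.where(labels == 1)[0]
    neg_idx = np.where(labels == 0)[0]
    np.random.shuffle(neg_idx)

    neg_per_batch = batch_size - pos_per_batch
    batches = []
    for b in range(len(neg_idx) // neg_per_batch):
        batch = np.concatenate([
            np.random.choice(pos_idx, size=pos_per_batch, replace=True),
            neg_idx[b * neg_per_batch:(b + 1) * neg_per_batch],
        ])
        np.random.shuffle(batch)
        batches.append(batch)
    return np.concatenate(batches)
</code></pre>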
<h2>3.4. Model/backbone</h2>
<p>I tried <code>Eff-B2</code>, <code>Eff-B4</code>, <code>Effv2-s</code> and <code>Convnextv1-small</code></p>
<ul>
<li>Each model has its own characteristic and training phenomenon.</li>
<li>All models could perform equally well in local CV, except that <code>Convnextv1-small</code> gives a higher CV score.</li>
<li><code>EfficientNet</code> (no model EMA) tends to overfit quickly after a few epochs with a high pos/neg ratio: longer training reduces AUC a lot and may slightly increase best_pf1 --> the model tends to predict fewer positives. A small pos/neg ratio makes training more stable but reduces CV. Linearly increasing the pos/neg ratio between epochs (by a sampler) did not help much.</li>
<li><code>Convnext-small</code> (with/without EMA) shows both stable training and better CV.</li>
</ul>
<h2>3.5. drop_rate, drop_path_rate</h2>
<p>Playing with drop_rate and drop_path_rate:</p>
<ul>
<li>We can use a large dropout rate of >= 0.5 to regularize training and reduce overfitting.</li>
<li>Even with a very large drop_rate = 0.9 or drop_path_rate = 0.5, I could still get a model whose CV score was not as bad as expected. The following results are for Eff-B4, pos/neg = 1/3 on fold 0:</li>
</ul>
<table>
<thead>
<tr>
<th>drop_rate</th>
<th>drop_path_rate</th>
<th>auc</th>
<th>best_pf1</th>
<th>best_thres</th>
<th>epoch</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.9</td>
<td>0.2</td>
<td>91.90</td>
<td>47.73</td>
<td>0.78</td>
<td>4</td>
</tr>
<tr>
<td>0.7</td>
<td>0.2</td>
<td>90.36</td>
<td>52.27</td>
<td>0.55</td>
<td>3</td>
</tr>
<tr>
<td>0.5</td>
<td>0.2</td>
<td>90.02</td>
<td>50.00</td>
<td>0.72</td>
<td>4</td>
</tr>
<tr>
<td>0.3</td>
<td>0.2</td>
<td>91.23</td>
<td>48.24</td>
<td>0.82</td>
<td>4</td>
</tr>
<tr>
<td>0.5</td>
<td>0.5</td>
<td>90.45</td>
<td>46.33</td>
<td>0.95</td>
<td>4</td>
</tr>
</tbody>
</table>
<p>However, I set drop_rate = 0.5 and drop_path_rate = 0.2 for most of my experiments, including the final ones.</p>
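<p>In timm, these two knobs can be passed straight to <code>create_model</code>, e.g. for the ConvNeXt backbone used later:</p>
<pre><code>import timm

# drop_rate = classifier dropout, drop_path_rate = stochastic depth.
model = timm.create_model(
    "convnext_small.fb_in22k_ft_in1k_384",
    pretrained=True,
    num_classes=1,
    drop_rate=0.5,
    drop_path_rate=0.2,
)
</code></pre>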
<h2>3.6. Global pooling</h2>
<p>I stuck with max pooling for almost all my experiments, following my inductive bias:</p>
<ul>
<li>Max pooling is suitable and seems to be effective for anomaly detection or "needle in the haystack" tasks in the literature. In this case, cancer may appear in a very small region while the rest is all normal.</li>
<li><code>max()</code> provides a stronger learning signal, but is less stable than mean() in terms of gradients.</li>
<li>My guess: <code>gem</code> > <code>max</code> > <code>mean</code> when enough data is provided.</li>
</ul>
<h2>3.7. Soft positive label/ Positive label smoothing</h2>
<p>Convnextv1-small looks good in CV scores with stable AUC, PR_AUC and best PF1 across epochs. But there are differences in behaviour between Effv2-s and Convnextv1-small, especially in the best threshold at image/breast level.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F6771d0c6e87067873e53021742e7bf6b%2Fwriteup_smoothing.png?generation=1678006236367845&alt=media" alt=""><br>
<em>The two lower green ones belong to <code>Effv2-s</code> and the other belong to <code>Convnextv1-small</code></em></p>
<p>Some discussions suggest a smaller best threshold (<0.55) may indicate a better model. For single images, Convnext shows a very high threshold of > 0.92, which could indicate an over-confidence problem. Stronger models with a larger number of parameters are more prone to over-confidence or overfitting, especially in this highly imbalanced dataset scenario. For the above figure, <code>label_smoothing = 0.1</code> was used but it seems like it was not enough.<br>
So, just add harder label smoothing to regularize training, or use a positive weight < 1.0 to reduce the priority of positive samples.</p>
<table>
<thead>
<tr>
<th>loss</th>
<th>num_logits</th>
<th>target {neg, pos}</th>
<th>pr_auc</th>
<th>roc_auc</th>
<th>best_pf1</th>
<th>best_thres</th>
<th>epoch</th>
</tr>
</thead>
<tbody>
<tr>
<td><em>(baseline) bce_smooth 0.1</em></td>
<td>2</td>
<td>{ [0.95, 0.05], [0.05, 0.95] }</td>
<td>0.4755</td>
<td>0.9278</td>
<td>0.497</td>
<td>0.66</td>
<td>14</td>
</tr>
<tr>
<td>bce_smooth 0.4</td>
<td>2</td>
<td>{ [0.8, 0.2], [0.2, 0.8] }</td>
<td>0.4749</td>
<td>0.9248</td>
<td>0.5</td>
<td>0.6</td>
<td>25</td>
</tr>
<tr>
<td>bce_pos_smooth 0.4</td>
<td>2</td>
<td>{ [1.0, 0.0], [0.2, 0.8] }</td>
<td>0.5191</td>
<td>0.9153</td>
<td>0.5488</td>
<td>0.53</td>
<td>13.5</td>
</tr>
<tr>
<td><em>(best) bce_pos_smooth 0.2</em></td>
<td>1</td>
<td>{ 0.0, 0.8 }</td>
<td><strong>0.5401</strong></td>
<td>0.9281</td>
<td><strong>0.5714</strong></td>
<td>0.49</td>
<td>20</td>
</tr>
<tr>
<td>bce_pos_smooth 0.3</td>
<td>1</td>
<td>{ 0.0, 0.7 }</td>
<td>0.522</td>
<td><strong>0.933</strong></td>
<td>0.517</td>
<td>0.5</td>
<td>17</td>
</tr>
<tr>
<td>bce_smooth 0.1 + pos_weight 0.4</td>
<td>1</td>
<td>{ 0.05, 0.95 }</td>
<td>0.4946</td>
<td>0.9146</td>
<td>0.5393</td>
<td>0.39</td>
<td>19</td>
</tr>
</tbody>
</table>
<p><em>Note:</em></p>
<ul>
<li><em>Table above is fold 0 CV results</em></li>
<li><em><code>num_logits = 2</code> means using <code>sigmoid</code> (BCEWithLogitsLoss) for training and <code>softmax</code> for inference. Refer <a href="https://github.com/ultralytics/yolov5/issues/5401" target="_blank">here</a>.</em></li>
</ul>
<p>Soft positive labeling looks reasonable: we have per-breast labels and not per-image labels. For images belonging to the same patient, the cancer signal may not appear clearly in some images, or even in all images (MG alone is not enough to judge cancer/non-cancer) --> the positive label should not be the maximum bound value of 1.0, but something less confident.</p>
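<p>Concretely, the trick only changes the positive training target (a minimal sketch; the 0.8/0.9 values match those discussed in this write-up):</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_positive_bce(logits: torch.Tensor, labels: torch.Tensor,
                      soft_pos: float = 0.9) -> torch.Tensor:
    # Negatives keep a hard 0.0 target; positives are softened to `soft_pos`
    # (unlike symmetric label smoothing, which also lifts the negative target).
    targets = labels.float() * soft_pos
    return F.binary_cross_entropy_with_logits(logits, targets)
</code></pre>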
<p>The soft positive label trick improves CV and makes the threshold look much better.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2Fc36afde0a06afbeb3dbd38a76753276c%2Fpos_smooth_thres.png?generation=1678007271357249&alt=media" alt=""></p>
<p>As noted in some discussions, a very sharp prediction distribution may indicate worse results/generalization.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F7caf77b112791cd16eb20de6f6d443bd%2Fdistribution.png?generation=1678007391776163&alt=media" alt=""></p>
<hr>
<h1>4. Final experiments</h1>
<h2>4.1. External datasets</h2>
<p>With a week left until the competition deadline, I was thinking about running the final experiments for the final submission, trying not to make any mistakes or miss anything. I read some discussions again and realized I was missing a big part: external data. In particular, external data contains a large number of positive cases, which are valuable.<br>
A summary of these external datasets:</p>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>num_patients*</th>
<th>num_samples*</th>
<th>num_pos_samples*</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://physionet.org/content/vindr-mammo/1.0.0/" target="_blank">VinDr-Mammo</a></td>
<td>5000</td>
<td>20000</td>
<td>226 (1.13 %)</td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/datasets/cheddad/miniddsm2" target="_blank">MiniDDSM</a></td>
<td>1952</td>
<td>7808</td>
<td>1480 (18.95 %)</td>
</tr>
<tr>
<td><a href="https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230508" target="_blank">CMMD</a></td>
<td>1775</td>
<td>5202</td>
<td>2632 (50.6%)</td>
</tr>
<tr>
<td><a href="https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=109379611" target="_blank">CDD-CESM</a></td>
<td>326</td>
<td>1003</td>
<td>331 (33 %)</td>
</tr>
<tr>
<td><a href="https://zenodo.org/record/5036062" target="_blank">BMCD</a></td>
<td>82</td>
<td>328</td>
<td>22 (6.71 %)</td>
</tr>
<tr>
<td>All</td>
<td>9135</td>
<td>34341</td>
<td>4691 (13.66 %)</td>
</tr>
</tbody>
</table>
<p><strong>*</strong> The numbers may not reflect the original dataset characteristics, but rather the processed data I used for this competition.</p>
<p><strong>Some details:</strong></p>
<ol>
<li><p><strong>VinDr-Mammo</strong>: contains a BIRADS score of 0-5 for each image. I treated BIRADS 5 as cancer (1) and all others as normal (0). With only digital mammograms and BIRADS categories, one can't confirm 100% whether a case is cancer or not. BIRADS 4 indicates a 30% chance of cancer, yet I treated it as normal. That decision happened in just a few seconds, as I recall. Reading other posts, I feel my decision was not that good, except that it helps reduce sensitivity and can "improve" the pF1 (I don't want to see it that way). Maybe I made a huge mistake here. A better solution would be to use soft/uncertain labels or pseudo labeling for these ambiguous (BIRADS-4) cases instead. Some images have a LUTDescriptor. The images look over-exposed when applying VOILUT (voi + windowing), so I just apply windowing on this dataset, equivalent to pydicom's <code>apply_voi_lut(prefer_lut = False)</code></p></li>
<li><p><strong>MiniDDSM</strong>: I used all 7808 samples. I found that the status label of <code>{Cancer, Benign, Normal}</code> is per patient_id, not per laterality. So I treat a patient-laterality as cancer if and only if status == 'Cancer' and at least 1 suspicious region annotation (segmentation map) for that laterality is available. This ends up with 1480 positives and the remaining 6318 negatives. I use the 16-bit png part for less information loss. No windowing parameters are provided. Since there is watermark noise with very high pixel intensity in the ROI crop, percentile min-max scaling was performed instead of plain min-max scaling for normalization.</p></li>
<li><p><strong>CMMD</strong>: A total of 5202 breast images belonging to 1872 patient ids. Note that patient ids starting with 'D2' are almost all malignant and usually had a label for one laterality only. For those cases, I treated the other laterality (no laterality-level label specified in the csv file, but images still available) as normal (EDA on the competition data shows that cancer only appears in one laterality). Original dicom images are 8-bit with windowing parameters available.</p></li>
<li><p><strong>CDD-CESM</strong>: consists of contrast-enhanced spectral mammography (CESM) images. This dataset contains labels of <code>{Normal, Malignant, Benign}</code>. I treated Malignant as cancer, Normal or Benign as normal, and only use the <strong>low-energy images</strong> as they are comparable to digital mammograms (MG), or at least they look pretty similar to me. Low-energy images are 8-bit jpeg with no windowing information.</p></li>
<li><p><strong>BMCD</strong>: contains 100 patients (50 normal + 50 suspicious cases) with 82 biopsy-confirmed cases of <code>{'NORMAL', 'BENIGN', 'DCIS', 'MALIGNANT'}</code>, and mammogram images of them at the time of screening and on average 2.2 years before. I treat the last screening images of 'DCIS' or 'MALIGNANT' patients as cancer and all the rest as normal. Original dicom images are 16-bit and windowing parameters are available.</p></li>
</ol>
<h2>4.2. Validation strategy</h2>
<p>I found inconsistency in CV between the 5-fold splits of competition data, probably because the number of positives is not sufficient. Still, the hidden test set should have a distribution/properties closer to the competition data, so I changed the validation strategy to 4 splits as follows:</p>
<ul>
<li>Do 4-folds splitting on competition data</li>
<li>Use 1 fold for validation and the remaining 3 folds + all external data for training.<br>
Then, for each split, the training data contains about 5560/75400 positive cases (~7.38 %).</li>
</ul>
<h2>4.3. Training</h2>
<p>I managed to get 4 x Convnextv1-small corresponding to the above 4 splits. Some unexpected results were found during training, so the training stages were changed and in short consist of:</p>
<ol>
<li>Train 2 models on fold 0 and fold 1 with <code>soft_pos_label = 0.8</code></li>
<li>Train 2 models on fold 2 and fold 3 with <code>soft_pos_label = 0.9</code></li>
<li>Finetune 2 models obtained from stage 1 on fold 0 and fold 1 with <code>soft_pos_label = 0.9</code></li>
</ol>
<p>In detail, I started training on the first two folds, fold 0 and fold 1, with the following config:</p>
<ul>
<li>Model: timm's convnext_small.fb_in22k_ft_in1k_384</li>
<li>Input size: 2048x1024</li>
<li>Loss: vanila BCE (no class weight)</li>
<li>Sampler: upsampling pos samples per epoch to pos/neg = 1/7, ensure each batch contains at least 1 pos sample.</li>
<li>Batchsize: 8</li>
<li>Automatic Mixed Precision (AMP): enable</li>
<li>Model EMA: enable</li>
<li>Global pooling: max</li>
<li>Soft positive label = 0.8</li>
<li>Optimizer: SGD with momemtum=0.9</li>
<li>Scheduler: Cosine lr decay(epoch = 24, lr = 1e-3, min_lr = 1e-5) + linear warmup(warmup_lr = 1e-5, warmup_epoch = 4)</li>
<li>Drop_rate = 0.5, drop_path_rate = 0.2</li>
</ul>
<p>Once training finished, the results on fold 0 were not what I expected:</p>
<ul>
<li>CV results were not good</li>
<li>The threshold was much smaller. I expected it to be in the range [0.35, 0.5], but it was just around 0.25±0.02</li>
<li>Training had not converged yet. I guessed that CV could be improved with additional training epochs.</li>
</ul>
<p>So I started training fold 2 and fold 3 with a few changes: longer training with a larger learning rate, and a higher soft positive label value (less smoothing).</p>
<ul>
<li>Scheduler: Cosine lr decay(epoch = 30, lr = 3e-3, min_lr = 5e-5) + linear warmup(warmup_lr = 3e-5, warmup_epoch = 4)</li>
<li>Soft positive label: 0.9</li>
</ul>
<p>Results on fold 2 and fold 3 seemed to be better, so I decided to finetune fold 0 and fold 1 with the same value of soft_positive_label = 0.9, starting from their previous last checkpoints.</p>
<h2>4.4. Checkpoints selection</h2>
<ol>
<li>For each fold, I manually select the 3-7 best checkpoints by looking at multiple metrics.</li>
<li>Grid-search over all possible combinations of each fold's checkpoints (3x5x7x5 = 525 combinations in my case) and compute Out-Of-Fold (OOF) results for each combination (a small sketch follows this list).</li>
<li>Select the combination with the highest binarized pF1. The best combination had a local OOF pF1 of 0.5187, while the worst had 0.4951.</li>
</ol>
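<p>The grid search in step 2 can be sketched as follows (per-checkpoint OOF predictions and a pF1-style scorer are assumed to be available):</p>
<pre><code>import itertools
import numpy as np

def search_checkpoint_combos(fold_preds, fold_labels, score_fn,
                             thresholds=np.arange(0.2, 0.6, 0.01)):
    """fold_preds[i] is a list of prediction arrays (one per candidate checkpoint of fold i);
    fold_labels[i] is the matching label array. Returns the best combo and its OOF score."""
    best_score, best_combo = -1.0, None
    labels = np.concatenate(fold_labels)
    for combo in itertools.product(*fold_preds):
        oof = np.concatenate(combo)
        # Binarized pF1: take the best threshold for this combination.
        score = max(score_fn(labels, (oof >= t).astype(float)) for t in thresholds)
        if score > best_score:
            best_score, best_combo = score, combo
    return best_combo, best_score
</code></pre>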
<p>Final results:</p>
<table>
<thead>
<tr>
<th>name</th>
<th>soft_positive_label</th>
<th>pr_auc</th>
<th>roc_auc</th>
<th>best_pf1</th>
<th>best_thres</th>
<th>epoch</th>
</tr>
</thead>
<tbody>
<tr>
<td>fold 0</td>
<td>0.8</td>
<td>0.3983</td>
<td>0.9142</td>
<td>0.4716</td>
<td>0.25</td>
<td>24</td>
</tr>
<tr>
<td><strong>(selected)</strong> fold 0 + fine-tune</td>
<td>0.9</td>
<td>0.4363</td>
<td>0.9119</td>
<td>0.4785</td>
<td>0.35</td>
<td>11 (24 + 11)</td>
</tr>
<tr>
<td>fold 1</td>
<td>0.8</td>
<td>0.5151</td>
<td>0.9202</td>
<td>0.5291</td>
<td>0.34</td>
<td>18</td>
</tr>
<tr>
<td><strong>(selected)</strong> fold 1 + fine-tune</td>
<td>0.9</td>
<td>0.5381</td>
<td>0.9149</td>
<td>0.5381</td>
<td>0.34</td>
<td>8 (24 + 8)</td>
</tr>
<tr>
<td><strong>(selected)</strong> fold 2</td>
<td>0.9</td>
<td>0.4946</td>
<td>0.9234</td>
<td>0.5185</td>
<td>0.34</td>
<td>26</td>
</tr>
<tr>
<td><strong>(selected)</strong> fold 3</td>
<td>0.9</td>
<td>0.5088</td>
<td>0.9401</td>
<td>0.5455</td>
<td>0.31</td>
<td>19</td>
</tr>
</tbody>
</table>
<p>Some thoughts:</p>
<ul>
<li>Fold 0's result looks weird; maybe I need more time to inspect it.</li>
<li>With much more data, especially positive cases, my earlier guesses for the best hyperparams became obsolete. Maybe I could drop the soft positive label trick and get a better result?</li>
</ul>
<p>OOF validation was done to determine the best threshold value of 0.34</p>
<pre><code> auc @th f1 | prec recall | sens spec
single image [0] 0.87296 0.40000 0.41907 | 0.48365 0.37047 | 0.37047 0.99145
grouby mean() [0] 0.92043 0.34000 0.51820 | 0.60989 0.45122 | 0.45122 0.99391
grouby max() [0] 0.91939 0.61000 0.50913 | 0.57545 0.45732 | 0.45732 0.99289
--------------
single image [1] 0.84866 0.40000 0.33649 | 0.38241 0.30120 | 0.30120 0.98881
grouby mean() [1] 0.89225 0.34000 0.39587 | 0.47027 0.34252 | 0.34252 0.99139
grouby max() [1] 0.88917 0.61000 0.39424 | 0.44554 0.35433 | 0.35433 0.99016
--------------
single image [2] 0.89611 0.40000 0.53331 | 0.62912 0.46356 | 0.46356 0.99453
grouby mean() [2] 0.94288 0.34000 0.64699 | 0.75419 0.56722 | 0.56723 0.99632
grouby max() [2] 0.94329 0.61000 0.63182 | 0.71428 0.56722 | 0.56723 0.99548
--------------
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2Feeb0cc23fad1497f249613e0d4f87307%2Ffinal_sub_plot.png?generation=1678010502203196&alt=media" alt=""></p>
<p><em>The results was generated by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> ’s <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/378521" target="_blank">script</a></em></p>
<h2>4.5. Submission</h2>
<p>My training was only finished on the last day of the competition.<br>
5 final submissions include:</p>
<ul>
<li>1 submission of models from earlier epochs while waiting for training to finish. The main purpose was to ensure the pipeline was correct and no exceptions occurred; it gave me 0.56 LB.</li>
<li>4 submissions of the same final (best) model with different thresholds: 0.31, 0.34 (best oof), 0.37, 0.40. I guessed the LB's best threshold > CV's best threshold, but I was totally wrong (I did not probe the LB). It was fortunate that threshold=0.31 was the one at the peak :D</li>
</ul>
<p>Those submissions brought me from LB 600th to LB 22nd in one day. The PL 0.55 submission finished successfully with ~30 minutes left before the deadline.</p>
<table>
<thead>
<tr>
<th>Threshold</th>
<th>OOF</th>
<th>LB</th>
<th>PL</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.27 (late sub)</td>
<td>0.4877</td>
<td>0.60</td>
<td>0.53</td>
</tr>
<tr>
<td>0.28 (late sub)</td>
<td>0.4917</td>
<td>0.60</td>
<td>0.54</td>
</tr>
<tr>
<td>0.29 (late sub)</td>
<td>0.4973</td>
<td>0.60</td>
<td>0.54</td>
</tr>
<tr>
<td>0.30 (late sub)</td>
<td>0.5027</td>
<td>0.60</td>
<td>0.55</td>
</tr>
<tr>
<td><strong>(selection) 0.31</strong></td>
<td><strong>0.5049</strong></td>
<td><strong>0.61</strong></td>
<td><strong>0.55</strong></td>
</tr>
<tr>
<td><strong>(selection) 0.34</strong></td>
<td><strong>0.5187</strong></td>
<td><strong>0.58</strong></td>
<td><strong>0.53</strong></td>
</tr>
<tr>
<td>0.37</td>
<td>0.5000</td>
<td>0.55</td>
<td>0.52</td>
</tr>
<tr>
<td>0.40</td>
<td>0.4896</td>
<td>0.54</td>
<td>0.50</td>
</tr>
</tbody>
</table>
<hr>
<h1>5. Code</h1>
<ul>
<li>Submission notebook: <a href="https://www.kaggle.com/dangnh0611/1st-place-submission-code" target="_blank">https://www.kaggle.com/dangnh0611/1st-place-submission-code</a> </li>
<li>Training code: <a href="https://github.com/dangnh0611/kaggle_rsna_breast_cancer" target="_blank">https://github.com/dangnh0611/kaggle_rsna_breast_cancer</a></li>
</ul>
<hr>
<p>I just got lucky with a simple pipeline and simple decisions. Many teams had much better models but did not select them for their final submissions, as far as I could see.<br>
Thanks for your attention.</p>
<p>Hi, I get the bounding box output from YOLOX; the coordinates are then used to crop the original-resolution image, as described <a href="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10254700%2F9de52f41201b4a92774f5fee331cb879%2Fwriteup-Page-1.drawio.png?generation=1678001361222187&alt=media" target="_blank">here</a></p>
<p>Thanks for your kind words and suggestions 😄 </p>
<blockquote>
<p>unless of course you already have one</p>
</blockquote>
<p>I did not even have a Master's degree 😂</p>
<p>Thank you. Hope it helps</p>
<p>Thanks, I also learned a lot from you 💯</p>
<p>Yes. And I have updated <a href="https://github.com/dangnh0611/kaggle_rsna_breast_cancer" target="_blank">the code</a>. Thanks for your attention 😄</p>
<blockquote>
<p>What is GEM?</p>
</blockquote>
<p>Generalized Mean Pooling</p>
<p>Yes, it's raw pixel array from the original dicom image, e.g <code>pydicom.dcmread(dcm_path).pixel_array</code></p>
<p>ah sorry, just a small note that during inference, windowing is applied to the ROI-cropped patch only, which is then resized to a smaller fixed size, e.g. 2048x1024. But this only saves a little computation 👀</p>
<p>Yes, it's simply copying samples using a custom batch sampler</p>
<p>Because that's all I had at that time :D. The winning submission was created and submitted on the last day of this competition and I had no other, stronger models (e.g. with different architectures or training strategies) at that time --> no more complex ensemble could be tried.<br>
4 folds almost always give better stabilization/generalization/score than 1. In this particular case, a 0.02 LB and 0.01 PB difference is not a negligible improvement.</p>
<p>In my quick test without threshold tuning, a single model on a single fold could achieve LB 0.59 and PB 0.54</p>
<p>Hi, an insufficient number of val samples and personal labeling bias can make the mAP@0.5 unstable. <br>
For YOLOX, I use <a href="https://github.com/dangnh0611/kaggle_rsna_breast_cancer/blob/reproduce/src/roi_det/YOLOX/exps/projects/rsna/yolox_nano_bre_416.py" target="_blank">this hyperparams config</a></p>
<p>Hi,<br>
It's just the baseline I started this competition with. I had experienced slightly better results with BCE instead of CCE for binary classification tasks in the past. Another reason is that it makes it easy to integrate custom auxiliary losses (usually BCEs too) and balance their loss weights, e.g. so all losses have the same scale. Also, 2 logits --> double the number of params in the linear head.</p>
<p>As you can see from the results (table above), sigmoid with num_logits=1 was used in my final submission since “sigmoid for training + softmax for inference” worsened the result (did not help)</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
22th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!<br>
I submitted the ensemble model many times with different threshold values:</p>
<table>
<thead>
<tr>
<th>threshold</th>
<th>0.61</th>
<th>0.57</th>
<th>0.54</th>
<th>0.52</th>
<th>0.46</th>
</tr>
</thead>
<tbody>
<tr>
<td>Public</td>
<td>0.54</td>
<td>0.59</td>
<td>0.58</td>
<td>0.59</td>
<td>0.57</td>
</tr>
<tr>
<td>Private</td>
<td>0.45</td>
<td>0.48</td>
<td>0.48</td>
<td>0.47</td>
<td>0.46</td>
</tr>
</tbody>
</table>
<p>Thank you! <br>
I use two 3090 GPUs to train the model. Upsampling duplicates positive images many times in one epoch, which makes it easy to overfit on positive cases.</p>
<p>First of all, many thanks to everyone who contributed to this competition and I've learnt a lot from this comp, thanks very much!<br>
My solution doesn't have much innovation, but I want to share some tricks that improved performance in my experiments.</p>
<h2>Preprocessing</h2>
<ul>
<li>dicom2png<br>
Mostly refers to <br>
<a href="https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example" target="_blank">3hr tensorRT NextVIT example</a><br>
<a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/372275" target="_blank">Easy load the image with nvJPEG2000(5x faster)</a><br>
<a href="https://www.kaggle.com/code/radek1/how-to-process-dicom-images-to-pngs" target="_blank">how to process DICOM images to PNGs</a></li>
<li>roi<br>
Use opencv findContours() to extract roi following <a href="https://www.kaggle.com/code/snnclsr/roi-extraction-using-opencv" target="_blank">ROI Extraction using OpenCV</a></li>
</ul>
<h2>Augmentation</h2>
<ul>
<li>The augmentations are hflip, vflip, translation(shift), scale</li>
<li>We keep aspect ratio the same and only resize down once to avoid distortion or blur. (Images before transformation are png files with original resolution. )</li>
<li>Mix up (with the class-wise max as target; a sketch follows this list)</li>
</ul>
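<p>A minimal sketch of mixup where the mixed target is the element-wise max of the two labels rather than their convex combination, assuming binary image-level labels; this is illustrative, not the author's exact implementation.</p>
<pre><code>import numpy as np
import torch

def mixup_max_target(images: torch.Tensor, targets: torch.Tensor, alpha: float = 1.0):
    # images: (B, C, H, W); targets: (B,) binary labels
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0), device=images.device)
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    # "class max as target": a mixed image is positive if either source image is positive
    mixed_targets = torch.maximum(targets, targets[perm])
    return mixed_images, mixed_targets
</code></pre>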
<h2>Training setting</h2>
<ul>
<li><p>Data<br>
resolution 1536x960<br>
batch size = 8<br>
positive images upsample x6 (use high dropout rate due to upsampling)</p></li>
<li><p>Model</p></li>
</ul>
<pre><code># keywords were stripped from the original post; elided values are marked with ...
from timm.models import efficientnet
backbone = efficientnet.tf_efficientnetv2_b2(drop_rate=..., drop_path_rate=...)
</code></pre>
<ul>
<li>Optimizer</li>
</ul>
<pre><code># values elided in the original post are marked with ...
optimizer = AdamW(param_group, lr=..., betas=(..., ...), weight_decay=...)
</code></pre>
<ul>
<li><p>Loss Function<br>
BCEWithLogitsLoss()</p></li>
<li><p>Soft target (or self distillation?)<br>
May help model pay less attention to dataset noise, i.e., hard positive and negative samples</p></li>
</ul>
<pre><code>logits = ddp_model(images)
with torch.no_grad():
    lam = ...  # smoothing coefficient; the value was elided in the original post
    targets = lam * targets + (1 - lam) * logits.sigmoid()
loss = loss_func(logits, targets)
</code></pre>
<ul>
<li>SWA<br>
Perform swa following this blog <a href="https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/" target="_blank">Stochastic Weight Averaging in PyTorch</a></li>
</ul>
<pre><code>from torchcontrib.optim import SWA
opt = SWA(optimizer, swa_start=..., swa_freq=..., swa_lr=...)  # values elided in the original post
</code></pre>
<h2>Submission</h2>
<p>The final submission is an ensemble of 8 tf_efficientnetv2_b2 models trained with different settings, e.g. whether to use mixup, resolution 1536x960 or 1024x640, and whether to use focal loss.<br>
The pF1 score curve on the LB is quite stable and has a wide flat region around the maximum value</p>
<h2>Code for Augmentations</h2>
<pre><code>
# Reconstructed: Python keywords, builtins and literal values were stripped from the
# original post; values that cannot be recovered are marked with ...
import numpy as np
import torch
from torchvision import transforms
from torchvision.transforms import functional as F
from torchvision.transforms.functional import InterpolationMode

class ScaleTransform(torch.nn.Module):
    def __init__(self, size, scale, transl, train=...):
        super(ScaleTransform, self).__init__()
        self.size = size
        self.scale = scale
        self.transl = transl
        self.train = train
        self.interpolation = InterpolationMode.BILINEAR

    def forward(self, im):
        _, h, w = im.shape
        scale_ = np.random.uniform(self.scale[0], self.scale[1])
        height, width = scale_ * self.size, scale_ * self.size / ...  # target aspect ratio elided
        s = min(height / h, width / w)  # assumed to be min; the builtin name was stripped
        im = F.resize(im, [int(h * s), int(w * s)], interpolation=self.interpolation)
        ms = ...
        _, h, w = im.shape
        height, width = int(ms * self.size), int(ms * (self.size / ...))
        top, left = int((height - h) / 2), int((width - w) / 2)
        bottom, right = height - h - top, width - w - left
        pad_value = ...
        padding = [left, top, right, bottom]
        im = F.pad(im, padding, fill=pad_value, padding_mode=...)
        y0 = height / 2 - self.size / 2 + np.random.uniform(-self.transl * self.size, self.transl * self.size)
        x0 = width / 2 - self.size / 2 / ... + np.random.uniform(-self.transl * (self.size / ...), self.transl * (self.size / ...))
        im = im[:, int(y0):int(y0 + self.size), int(x0):int(x0 + self.size / ...)]
        return im

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomHorizontalFlip(p=...),
    transforms.RandomVerticalFlip(p=...),
    ScaleTransform(size=..., scale=(..., ...), transl=...),
])
</code></pre>
|
RSNA Screening Mammography Breast Cancer Detection
|
26th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>huh, I missed something here.. one epoch for me does not mean the whole dataset; instead, it is <code>number_of_positives * (negative_ratios + 1)</code>. I normally chose negative_ratios as 4 to 6. So here one epoch means all the positive samples and <strong>some</strong> of the negative samples, where <strong>some</strong> = <code>number_of_positives * negative_ratios</code></p>
<p>"Do you mean you drop false negative in train data, and train another model from scratch without train data without false negative"</p>
<p>that's correct: train the model without those mislabeled positives; validation should use all the data.</p>
<p>The removal is based on a fact: the label is tagged per <code>laterality + patient_id</code>, not per image. If one of the images with the same <code>laterality + patient_id</code> is positive, then all images belonging to that <code>laterality + patient_id</code> are marked as positive. So there must be many falsely labeled positive images in the dataset. </p>
<p>Glad I can help… Some of the terms you listed are unfamiliar to me; I'll have a look at these new techniques, thanks for sharing.</p>
<pre><code>image = utils.crop_roi(image)
long_edge = max(image.shape[:2])  # pad to a square with the longest edge ("max" and "2" were stripped from the original post)
pad_fn = A.PadIfNeeded(long_edge, long_edge, border_mode=cv2.BORDER_CONSTANT,
                       position=... if self.training else ...,  # string values elided in the original post
                       value=..., always_apply=..., p=...)
</code></pre>
<p>for <code>utils.crop_roi</code>, you can find the source here: <a href="https://github.com/louis-she/rsna-2022-public/blob/03e9c542b756cb16b8ee6de2bb79d1b2bd346e1f/utils.py#L24" target="_blank">https://github.com/louis-she/rsna-2022-public/blob/03e9c542b756cb16b8ee6de2bb79d1b2bd346e1f/utils.py#L24</a></p>
<p>Training source code: <a href="https://github.com/louis-she/rsna-2022-public" target="_blank">https://github.com/louis-she/rsna-2022-public</a><br>
Infernece notebook: <a href="https://www.kaggle.com/code/snaker/rsna-infer-t4-threshold?scriptVersionId=120550686" target="_blank">https://www.kaggle.com/code/snaker/rsna-infer-t4-threshold?scriptVersionId=120550686</a></p>
<p>Thank everyone who contributed to the competition and thanks Kaggle for hosting.</p>
<p>My solution didn't involve a complicated pipeline; like most others, it is single-image classification. Here are some settings you may be interested in:</p>
<ul>
<li><strong>Augmentation</strong> RandomRotate90, HorizontalFlip, VerticalFlip</li>
<li><strong>Models and Resolutions</strong> <ol>
<li>ConvnextV2_nano @ 1536</li>
<li>ConvnextV2_nano_another_seed @ 1536</li>
<li>ConvnextV2_nano @ 2048</li>
<li>EfficientnetV2_s @ 1536</li></ol></li>
<li><strong>Optimizer and Scheduler</strong> Adam with 3e-5 + OneCycleLR</li>
<li><strong>Epochs</strong> 33</li>
<li><strong>Agg</strong> by maximum</li>
<li>Aux Losses: no</li>
<li>Tabular Features: no</li>
<li>SWA: no</li>
<li>External Dataset: no</li>
</ul>
<h2>The only trick: drop false positives</h2>
<p>I realized that all labels tagged for the same <code>laterality + patient_id</code> are identical. That means the label is actually for <code>patient + laterality</code>, not for each image. So the easiest way to deal with this is to train out-of-fold (OOF) and drop the confidently mislabeled positives (e.g. the label is 1 but the model's predicted probability is around 0.1 or 0.01). The CV score got a huge boost after this was done (about 0.03 to 0.04).</p>
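<p>A minimal sketch of this OOF-based label cleaning, assuming a train DataFrame with a binary <code>cancer</code> column and an aligned series of OOF probabilities; the names and the 0.1 floor are illustrative, not the author's exact code.</p>
<pre><code>import pandas as pd

def drop_confident_label_noise(train_df: pd.DataFrame, oof_prob: pd.Series, prob_floor: float = 0.1):
    # drop images labeled positive whose out-of-fold predicted probability is very low
    noisy = (train_df['cancer'] == 1) & (oof_prob < prob_floor)
    return train_df[~noisy]
</code></pre>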
<h2>Things I have learned</h2>
<p>Performance matters a lot in this competition. I'm happy that I took almost one month to optimize the inference pipeline, and I really learned something, like open-sourcing <a href="https://github.com/louis-she/nvjpeg2k-python" target="_blank">nvjpeg2k-python</a> (first time writing some CUDA code, it feels good). To utilize the hardware to the maximum, I use the 2 x T4 kernel and some multiprocessing tricks. You can see that in the inference notebook.</p>
<p>I used the percentile threshold trick I mentioned in the thread, because it boosted my LB score by about 0.015. I knew that this way I would have a very high chance of overfitting the LB, but I still chose it as one of my final selected subs because it was just too good on the LB… The result of a manual threshold without the percentile trick was 0.61; the 0.63 LB score was so attractive that I left everything behind.</p>
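<p>My reading of the percentile trick, as a minimal sketch (this is an interpretation, not necessarily the author's exact rule): instead of a fixed probability cutoff, mark the top-p% highest-scoring breasts as positive; <code>positive_rate</code> is an assumed parameter.</p>
<pre><code>import numpy as np

def percentile_threshold(probs: np.ndarray, positive_rate: float = 0.02):
    # binarize so that roughly the top `positive_rate` fraction of predictions become 1
    cutoff = np.quantile(probs, 1.0 - positive_rate)
    return (probs >= cutoff).astype(int)
</code></pre>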
<hr>
<p>Congrats to all the winners! And thanks for the great community, let's meet in another competition.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>2nd place solution</h1>
<p>I would like to express my gratitude to Kaggle for hosting this meaningful competition, and to my teammates, particularly <a href="https://www.kaggle.com/kapenon" target="_blank">@kapenon</a>, who persevered alongside me throughout the entire competition.</p>
<p>I would like to extend my gratitude to <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for providing the fast DALI inference notebook, which greatly aided in the completion of this competition. Additionally, I would like to thank <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a> for generously sharing the external data, which contained valuable positive case data that contributed to the success of our final solution.</p>
<p>Fortunately, our team was able to get 2nd place, and I am excited to share our approach.</p>
<h2>Summary of our approach</h2>
<h3>Stages</h3>
<ol>
<li>Pretrain a single view model in 1280x1280 resolution with external dataset (Thanks to <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a>, <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/377790" target="_blank">Dataset</a>)</li>
<li>Fine-tune the single view model in 1536x1536 resolution without external dataset</li>
<li>Use the fine-tuned single view model to further fine-tune a dual view model and a four view model</li>
</ol>
<h3>Model</h3>
<ul>
<li>ConvnextV1 small (from mmclassification)</li>
</ul>
<h2>1. Data preparation</h2>
<ul>
<li>We performed manual annotation of the bounding box for the target breast in approximately 300 images. Subsequently, we trained a basic Faster R-CNN model to crop all the breast regions for the subsequent stages of training.<ul>
<li>When annotating the bounding box, our aim was to refine the size of the box, with the intention of directing the focus of the subsequent stage model more precisely on the breast region. More specifically, our approach involved minimizing the bounding box to exclude the nipple and other extraneous body parts</li></ul></li>
<li>Use trained Faster R-CNN to crop external dataset</li>
</ul>
<h2>2. Data augmentation</h2>
<ol>
<li><code>ShiftScaleRotate</code> from albumentation<ul>
<li>We assumed that the model detects cancer based on the tissue or texture of the image. Therefore, it should be safe to rotate the image at any angle.</li></ul></li>
<li>RandomFlip from mmcls</li>
<li>RandAugment from mmcls</li>
<li>RandomErasing from mmcls</li>
</ol>
<h2>3. Model</h2>
<p>Backbone: ConvnextV1 small</p>
<h3>Single view model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2Fbc8e8c9d3b6bb90135c6319d97115fbf%2Fsingle_view.png?generation=1677747094568388&alt=media" alt="singleview"></p>
<h4>Loss</h4>
<ol>
<li>Cancer: EQL loss <a href="https://github.com/Ezra-Yu/ACCV2022_FGIA_1st" target="_blank">link</a></li>
<li>Aux Loss with weight 0.1:<ol>
<li>BIRADS: EQL loss</li>
<li>Density: EQL loss</li>
<li>Difficult_negative_case: EQL loss, only for negative case</li>
<li>View: CE loss</li>
<li>Invasive: CE loss, only for positive case</li></ol></li>
</ol>
<h3>Dual view model</h3>
<p>Meta Info: age, implant, machine_id<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2Fb134019ffa72690bb6136e4e393d22ed%2Fdual_view.png?generation=1677747161738296&alt=media" alt="dualview">)</p>
<h3>Multi laterality dual view model</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F38ea4a585c5f3e5954aa60c9867a60e1%2Fmulti_laterality_dual_view.png?generation=1677747196689304&alt=media" alt="fourview"></p>
<h2>4. Train</h2>
<ul>
<li>Optimizer: AdamW</li>
<li>lr: 0.00015</li>
<li>Scheduler: CosineAnnealingLR</li>
<li>Epochs: 24</li>
<li>Batch size: 192 (gradient accumulation)<ul>
<li>Large batchsize is important in our experiments to get better performance and stable training results</li></ul></li>
<li>EMA</li>
</ul>
<h2>5. Submission</h2>
<h3>Ensemble (Not used)</h3>
<p>During our discussion on final submission methods, two approaches were considered:</p>
<ol>
<li>Utilizing a single model with high resolution, without the use of an ensemble.</li>
<li>Using ensemble with lower resolution.</li>
</ol>
<p>Ultimately, our team concluded that resolution plays a crucial role in detecting cancer, leading us to choose the first option of using a single model with high resolution.</p>
<h3>About cropping</h3>
<p>After fine-tuning the model using the original images without cropping, there was a slight improvement in the score and it remained stable. This suggests that the size of the breast may play a role in cancer detection. Taking this into account, our team decided not to crop the images for final submission.</p>
<h3>TTA</h3>
<ul>
<li>Diagonal flip</li>
</ul>
<h3>Best submission results of each model</h3>
<p>Our team has limited computational resources, and we only trained with five folds at the beginning of the competition. However, upon discovering that the scores obtained from fold 0 exhibit a strong correlation with both cross-validation and leaderboard results, we decided to solely focus our experimentation on this particular fold. Therefore, we only have pf1 score for fold 0 locally.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>External Data</th>
<th>Fold 0 score</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>SingleView(2nd stage fine-tuned)</td>
<td>No</td>
<td>0.42~0.44 (calculated by each image)</td>
<td>0.57</td>
<td>0.51</td>
</tr>
<tr>
<td>SingleView(2nd stage fine-tuned)</td>
<td>Yes</td>
<td>0.44~0.45(calculated by each image)</td>
<td>0.58</td>
<td>0.53</td>
</tr>
<tr>
<td>DualView</td>
<td>Yes</td>
<td>0.55~0.56(calculated by each prediction_id)</td>
<td>0.57</td>
<td>0.52</td>
</tr>
<tr>
<td>MultiLateralityDualView</td>
<td>Yes</td>
<td>-</td>
<td>0.52</td>
<td>0.53</td>
</tr>
</tbody>
</table>
<p>After careful evaluation, our team has selected the second SingleView model and the DualView model due to their higher Public LB score and Fold 0 score.</p>
<p>The MultiLateralityDualView model, on the other hand, aims to compare the left and right breasts when detecting cancer. We noticed that the image style varies significantly with machine IDs, but for a particular patient, the image style remains the same. Additionally, most patients only have cancer on one side of the breast. Therefore, we believed it would be more logical to enable the model to compare the left and right sides to predict cancer.</p>
<p>Unfortunately, we came up with this idea towards the end of the competition, and there was insufficient time to optimize and tune the model. As a result, we trained a model with the complete dataset and submitted it. However, we believe that the MultiLateralityDualView model still holds potential for future research and development.</p>
<h2>What works</h2>
<ul>
<li>ConvnextV1</li>
<li>EQL loss</li>
<li>High resolution</li>
<li>Large batchsize</li>
<li>Auxiliary loss</li>
<li>More training epochs</li>
<li>External dataset for 1st stage pretraining</li>
<li>MultiLateralityDualView?(Holds potential in our opinion)</li>
</ul>
<h2>Not work for us</h2>
<ul>
<li>Train models with external dataset at 2nd stage</li>
<li>Concat cropped image horizontally</li>
<li>Rule-based crop with cv2</li>
<li>Upsample dataset</li>
<li>EfficientNet, SE-ResNeXt, SwinTransformer</li>
<li>Mixup augmentation</li>
<li>Max pooling</li>
<li>Lion optimizer</li>
<li>BIRADS, density pseudo label</li>
<li>Train models by machine id</li>
<li>SWA? (Neither a reduction in the local score nor any improvement)</li>
<li>DualView? (Minor reduction in the Public LB score)</li>
</ul>
<h2>Solution code</h2>
<p>We have made our code publicly available.</p>
<ul>
<li><a href="https://github.com/ShuzhiLiu/RSNABreast2ndPlace" target="_blank">Code</a></li>
<li><a href="https://www.kaggle.com/code/liushuzhi/2ndplacesolutionsingleviewinfer" target="_blank">Inference notebook</a></li>
<li><a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/377790" target="_blank">External data</a></li>
</ul>
<p>The code includes a single view model, which has demonstrated strong performance with a Private Leaderboard score of 0.53. Furthermore, the results obtained with this model can be reliably reproduced using a single RTX 3090.</p>
<h2>Acknowledgement</h2>
<p>We would like to express our gratitude to the Kaggle support system and the emotional support of Rist inc. </p>
<p>Thank you!</p>
<p>Thank you! The dataset helps a lot!</p>
<p>I built wheels for the mmclassification-related packages in another notebook and added them to the inference notebook to install them. Building the wheels takes time, but installation is fast in my case.</p>
<p>Yes, we were also looking forward to it very much, but we ran out of time.</p>
<p>Hope it helps!</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
31st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! Yes it took some time and effort but it's an important area and AI in medical is interesting even though I don't work with it, but maybe some day :)</p>
<p><strong>Summary</strong></p>
<p>The solutions are based on the Tensorflow framework with Pytorch image preprocessing and XGB GPU/Cuda classifier.</p>
<p>Best selected submission - Pr.L. 0.46 Pu.L. 0.59 - 2x4 fold ensemble (100 fold split ~ all data)<br>
Best private submission with room for improvements - Pr.L 0.48 Pu.L 0.52 - 2x4 fold ensemble ( 5 fold split) + 2 XGB ensemble. ( CV 0.55 - agg. method median w/ threshold 0.243 )</p>
<p><strong>Dataset</strong></p>
<p>Used <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> croped 2048-1024 ds from the original competition dataset aswell as the training code as a base.<br>
<a href="https://www.kaggle.com/datasets/awsaf49/rsna-bcd-roi-2048x1024-png-v2-dataset" target="_blank">https://www.kaggle.com/datasets/awsaf49/rsna-bcd-roi-2048x1024-png-v2-dataset</a><br>
<a href="https://www.kaggle.com/code/awsaf49/rsna-bcd-efficientnet-tf-tpu-1vm-train" target="_blank">https://www.kaggle.com/code/awsaf49/rsna-bcd-efficientnet-tf-tpu-1vm-train</a></p>
<p><strong>Training and Models</strong></p>
<p>Upsample positive class by x 10<br>
Heavy augmentation<br>
SigmoidFocalCrossEntropy</p>
<p>Models used are B5 and Convnext_base_384_in22ft1k (v1) from the tfimm library.<br>
Added a classifier head with a 32 vs 64 unit SiLU Dense layer.<br>
AdamW with the SWA optimizer and an 8-epoch training setup was used.</p>
<p><strong>Inference</strong> - T4 x2 in mixed precision.</p>
<p>Here is where I put the majority of my competition time and effort.</p>
<p>I used the faster inference with NVIDIA Dali for speeding up the image handling.<br>
<a href="https://www.kaggle.com/code/theoviel/rsna-breast-baseline-faster-inference-with-dali" target="_blank">https://www.kaggle.com/code/theoviel/rsna-breast-baseline-faster-inference-with-dali</a><br>
Made some changes for handling the cropped dataset from training and the configurations. As PyTorch and TF don't run well in the same GPU process memory, I ran this image processing in a separate kernel/notebook.</p>
<p>The above was common to all the solutions, but the rest below is where it gets interesting and where the score increases happen.</p>
<p><strong>Solution 1:</strong></p>
<p>Used more data for the B5 and ConvNext training and less for validation. It gave a boost vs. a 5-fold split but restricted the options for solution no. 2. Nevertheless, I used it as solution/sub no. 1 in case the second, more complex solution failed.<br>
I increased the image dim size for inference vs. training, which often gives a boost in score, and it did here as well.</p>
<p><strong>Solution 2:</strong></p>
<p>The idea here was to make use of all the other information (META etc.) and, together with the vision models, feed features to an XGB model, which is strong at handling imbalanced data and mixed feature information. We also had information about the negative class that could be used to smooth the imbalanced situation, e.g. by creating more classes.<br>
For creating extra features and extra classes I took inspiration from the 1st place solution in the SIIM-ISIC Melanoma Classification<br>
<a href="https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175412" target="_blank">https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175412</a></p>
<p>The features for the XGB training and inference were:<br>
'site_id', 'laterality', 'view', 'age', 'implant', 'n_images', 'image_size' + information extracted from the last dense layer of every model (instead of embeddings), to shrink the vision information down to 32 vs 64 features.<br>
I created a total of 9 classes from the information in the train data: "neg", "pos", "diffneg", "rneg", "negA", "negB" etc. These classes are not seen in the test data, but it doesn't matter, as you only need the XGB to classify them instead, which helps sort the total information within the negative-class region, leaving it less imbalanced.</p>
<p>For the training of the two XGB models per vision model I used a custom version of the AutoXGB framework as a base to have a solid standard. I changed many things in it to fit the problem: updated it with pruning for HPO speed, added the competition metric to XGB validation at image level, and made the Optuna metric mimic the final non-image-level aggregation post-processing for every CV split in each Optuna trial. I also added all XGB parameters to the Optuna HPO, plus other things like upsampling only the cancer class among the 9 classes.</p>
<p>For inference I used the best parameters and setup from the HPO and merged all 4 folds of vision-model data into one big OOF train set; it was better for the XGB to train on all features, not only the single fold/model information that it would predict on later. It also sped up the training, as all vision folds used the same single pretrained XGB for prediction.</p>
<p>So the solution did the following end-to-end at test/inference time for the 2 vision models (B5 and ConvNeXt base), 4 folds each (a simplified sketch follows the list):</p>
<ul>
<li>Extracted the information from the vision inference of the images to features.</li>
<li>Created the complete OOF train data with extra features + vision features + 9 classes.</li>
<li>Trained the 2 XGB in CV 5 fold each.</li>
<li>Extract the prob. for the cancer class in the 9 classes.</li>
<li>Run several of optimizations searches for agg image threshold and math method (mean,max and median) together with weighted ensemble threshold for the two XGB.</li>
<li>Sort the searches for the best setup and use it in the final submission post-processing.</li>
</ul>
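<p>A simplified sketch of the second-stage XGB idea described above; the feature handling, class encoding and hyperparameters here are assumptions (the real pipeline used a customized AutoXGB with Optuna HPO), so treat this as an illustration rather than the author's implementation.</p>
<pre><code>import numpy as np
import xgboost as xgb

def train_second_stage(meta_feats, vision_feats, class_labels):
    # meta_feats: (N, n_meta) tabular features; vision_feats: (N, 32 or 64) dense-layer outputs
    # class_labels: integer-encoded labels in [0, 9) for the 9 custom classes
    X = np.concatenate([meta_feats, vision_feats], axis=1)
    clf = xgb.XGBClassifier(tree_method='hist', n_estimators=500)  # multi-class is inferred from labels
    clf.fit(X, class_labels)
    return clf

def predict_cancer_prob(clf, meta_feats, vision_feats, cancer_label=1):
    X = np.concatenate([meta_feats, vision_feats], axis=1)
    col = list(clf.classes_).index(cancer_label)  # column of the "pos"/cancer class
    return clf.predict_proba(X)[:, col]
</code></pre>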
<p>This solution increased the private score from 0.44 with image-only inference to 0.48.</p>
<p>So what could have been improved now looking at the results: </p>
<ul>
<li>Run above with higher inference dim size vs train. </li>
<li>Use more "different models architectures"- fold to inference instead of using all the CV folds. <br>
Looking at the result from other tests that would have increased the score.</li>
<li>More Feature Engineering.</li>
</ul>
<hr>
<p>That's it! </p>
|
RSNA Screening Mammography Breast Cancer Detection
|
3rd Place Solution (Part of data processing and Image-level model)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your suggestions and congratulations.</p>
<h2>Introduction</h2>
<p>Thank you to all the participants for your hard work in the competition. We are honored to have achieved a good result, coming in third place in this competition. We also want to express our deepest gratitude to the organizers for putting together such a fantastic event. Thank you very much.<br>
Finally, I want to thank my excellent teammates <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> , <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> , <a href="https://www.kaggle.com/kevin1742064161" target="_blank">@kevin1742064161</a> . On behalf of my teammates, I would like to introduce part of our solution, and another part is presented by <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/391779" target="_blank">in another thread</a>.</p>
<h2>1. Overview of the pipeline</h2>
<ul>
<li>Extract ROI with a fixed aspect ratio(1.6:1) using YOLOX</li>
<li>Feed the ROI into different classification models</li>
<li>Average weighting fusion of the results from the classification models</li>
</ul>
<h2>2. External Data</h2>
<p>We used 4 external datasets in total. Not all models used all the external data: some models only used CBIS-DDSM + CMMD, while the remaining models used all four external datasets. Although these external datasets appear to be different from the competition data, they can improve the CV and significantly enhance the stability of training.</p>
<p>1) <a href="https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=22516629" target="_blank">CBIS-DDSM</a><br>
The classification labels of CBIS-DDSM are: MALIGNANT, BENIGN WITHOUT CALLBACK, BENIGN. We consider MALIGNANT as positive and the others as negative, resulting in 1,350 positive and 1,753 negative samples.</p>
<p>2) <a href="https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230508" target="_blank">CMMD</a><br>
The classification labels of CMMD are: MALIGNANT, BENIGN. We consider MALIGNANT as positive and BENIGN as negative, resulting in 4,094 positive and 1,108 negative samples.</p>
<p>3) <a href="https://physionet.org/content/vindr-mammo/1.0.0/" target="_blank">Vindr</a><br>
Vindr did not provide classification labels, but provided BI-RADS indices. The detailed explanation of the indices can be found <a href="https://radiopaedia.org/articles/breast-imaging-reporting-and-data-system-bi-rads" target="_blank">here</a>: <br>
We finally chose to consider BI-RADS-4 and BI-RADS-5 as positive, BI-RADS-2 and BI-RADS-3 as negative, and discard other categories. Finally, we obtained 988 positive and 5,606 negative samples.</p>
<p>4) <a href="https://www.kaggle.com/datasets/cheddad/miniddsm2" target="_blank">Mini-DDSM</a><br>
The classification labels of Mini-DDSM are: Benign, Cancer, Normal. Since we found that the detection model had a large number of wrong bboxes in the Normal category, we finally chose to discard all Normal images, consider Cancer as positive, and Benign as negative. Finally, we obtained 2,716 positive and 2,684 negative samples.</p>
<h2>3. Preprocessing</h2>
<p>1) Firstly, we use 1,000 annotated images, 500 of which are labeled using the open-source annotations provided by <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> and 500 of which are manually annotated by us (referred to as BCD_1k), to train YOLOX-m. No validation set is used, and all data is used for both training and validation. <br>
2) Using the model from step 1, we predict the whole dataset using YOLOX-m, which will be used as the ROI crop for the official dataset. Meanwhile, the detection boxes will be saved and a pseudo-label dataset (referred to as BCD_all) will be created. <br>
3) Using the BCD_all dataset obtained in step 2, YOLOX-nano and YOLOX-x are trained, with no validation set. YOLOX-nano will be used for final online submissions, while YOLOX-x will be used for ROI crop on external datasets.<br>
4) Images were cropped according to the detection boxes and resized to a 1.6:1 aspect ratio (1536x960 or 1280x800) with padding.</p>
<p>The link to the BCD_1k and BCD_all datasets:<br>
<a href="https://www.kaggle.com/datasets/kevin1742064161/bcd-dataset" target="_blank">https://www.kaggle.com/datasets/kevin1742064161/bcd-dataset</a><br>
The link to the YOLOX code:<br>
<a href="https://www.kaggle.com/datasets/kevin1742064161/yolo-x" target="_blank">https://www.kaggle.com/datasets/kevin1742064161/yolo-x</a><br>
The link to the bboxes of official datasets and external datasets:<br>
<a href="https://www.kaggle.com/datasets/kevin1742064161/yolox-bboxes" target="_blank">https://www.kaggle.com/datasets/kevin1742064161/yolox-bboxes</a></p>
<h2>4. Data augmentation</h2>
<p>HorizontalFlip, VerticalFlip, RandomBrightnessContrast, ShiftScaleRotate, MedianBlur, GaussianBlur, GaussNoise, ElasticTransform, GridDistortion, OpticalDistortion, CoarseDropout, Mixup</p>
<h2>5. Model</h2>
<p>We used two types of classification models. <br>
One type is a CNN (EfficientNet or ConvNeXt) that integrates metadata at the image level and uses mean aggregation to generate prediction scores. <br>
The other is a multi-view model based on CNN+LSTM, which applies the idea of multiple instance learning. (We will call these CNN+LSTM models <code>LSTM</code> for convenience in the following). <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> has provided a <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/391779" target="_blank">more detailed explanation of this model</a>. </p>
<h3>5.1 Meta data</h3>
<h4>Motivation of meta data</h4>
<p>There is a significant score difference between the two sites, which is essentially caused by different machines.</p>
<h4>How to encode and insert meta data</h4>
<ul>
<li>Considering that there are machines in the test set that have not appeared in the training set, <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> suggests to use one-hot encoding for machines. Since there are 10 machines in the training set, the encoding is a 1x10 vector. If it is external data or a machine that has not appeared in the training set, it is a vector of all zeros.</li>
<li>Site and view are encoded in the same way, as a 1x2 vector.</li>
<li>The machine or machine+site+view vectors are sent to an MLP for encoding, and the encoded features are concatenated with the CNN backbone features and sent to another MLP for classification (a sketch follows this list).</li>
</ul>
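<p>A minimal sketch of this metadata branch, assuming the dimensions in the text (10 machines + 2 sites + 2 views as one-hot vectors); the hidden sizes and the head are assumptions, not the authors' exact configuration.</p>
<pre><code>import torch
import torch.nn as nn

class MetaCNN(nn.Module):
    def __init__(self, backbone: nn.Module, backbone_dim: int, meta_dim: int = 10 + 2 + 2, hidden: int = 64):
        super().__init__()
        self.backbone = backbone  # any CNN returning (B, backbone_dim) features
        self.meta_mlp = nn.Sequential(nn.Linear(meta_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(backbone_dim + hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, image, meta_onehot):
        # meta_onehot: machine one-hot (all zeros for unseen machines) + site + view one-hots
        img_feat = self.backbone(image)
        meta_feat = self.meta_mlp(meta_onehot)
        return self.head(torch.cat([img_feat, meta_feat], dim=1))
</code></pre>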
<h2>6. Ensemble</h2>
<ul>
<li>Considering the significant risk of shake in this competition, we adopted a relatively conservative strategy for our final submissions. One submission focused on the Mean model and included 7 image-level models and 4 LSTM models. The other submission focused on LSTM and included 7 LSTM models and 4 image-level models. The threshold for combining the two types of models was optimized using the best cross-validation (CV) threshold obtained from the out-of-fold (OOF) data. </li>
<li>The image-level models were trained using a 4-fold split, while the LSTM models were trained using a 5-fold split. When searching for the threshold, the entire OOF was used. If a model utilized <code>n</code> folds in the submitted code, it was given a weight of <code>n</code>. Avoiding overemphasis on the weight of individual models was a key strategy for avoiding overfitting.</li>
<li>In the end, the LSTM-focused model achieved a higher score in both submissions, which was consistent with our CV and simulated private scores. For more details about our simulated private score, please check <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/390958" target="_blank">here</a>. We are not sure that we simulated in the right way, just a funny story.</li>
</ul>
<h3>6.1 First submission: 7 Image-level(Mean) + 4 LSTM</h3>
<p>Private LB: 0.51 <br>
Public LB: 0.67 <br>
CV: 0.549<br>
Local Private score: 0.521</p>
<p><a href="https://postimg.cc/Y4BNyqKk" target="_blank"><img src="https://i.postimg.cc/nLCdrC3s/sub1.png" alt="sub1.png"></a></p>
<h2>6.2 Second submission: 7 LSTM + 4 Image-level(Mean)</h2>
<p>Private LB: 0.53<br>
Public LB: 0.67<br>
CV: 0.554<br>
Local private Score: 0.524</p>
<p><a href="https://postimg.cc/9RMDLvMC" target="_blank"><img src="https://i.postimg.cc/63VCpKx2/sub2.png" alt="sub2.png"></a></p>
<h2>7. Some interesting thoughts</h2>
<p>1) LSTM has a much more stable threshold in CV, but performs slightly worse than Mean in LB. For the Mean model, a threshold change of ±0.1 can cause a fluctuation of about 0.05 in pF1. For example, if the optimal threshold is 0.5, the worst pF1 can be around 0.45 when the threshold is 0.4~0.6. However, for the LSTM model, a threshold change of 0.3 may only cause a fluctuation of 0.05.</p>
<p>2) We had 14 submissions that scored above 0.55 in private leaderboard, including two submissions that ranked highest in the Public Leaderboard (0.68). However, their CV and simulated Private scores were not as good as our final two submissions, so we didn't choose them. </p>
<p>3) We believe that the LSTM model is the better model (than Image-level models). Although the local pF1 and AUC are similar, using LSTM is obviously a better choice from a theoretical standpoint. Moreover, the LSTM model has a more stable threshold.</p>
<p><strong>Inference Code is public!</strong><br>
The code of the two submissions has been publicly available in <a href="https://www.kaggle.com/code/forcewithme/0226-yoloxnano-yoloxs-mean2" target="_blank">here</a> and <a href="https://www.kaggle.com/code/forcewithme/final-lstm2" target="_blank">here</a></p>
|
RSNA Screening Mammography Breast Cancer Detection
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Introduction</h2>
<p>Congrats all and thank you RSNA for another great challenge. Our solution is below. Code and slides will be posted over the coming weeks. </p>
<h2>Preprocessing</h2>
<p>We used the same preprocessing and resizing for all models, which enabled us to use more models.</p>
<p>Pixel values for each image were windowed using the width and center from the DICOM metadata. A linear window was used regardless of the function specified in the metadata, to decrease processing time. We applied a coarse CNN with a minimum filter to crop the image to the breast area and, hopefully, remove any noise/text from the image. </p>
<p>The cancer was often small and the images large, so when resizing it was important to lose as little detail as possible. <a href="https://arxiv.org/pdf/2104.11222.pdf" target="_blank">This paper</a> concludes that the best resize method is PIL with Lanczos, or TensorFlow with antialias. There is a good example (Fig. 1 in the paper) of how resizing can lose information, comparing cv2, PIL, PyTorch and others. We used PIL, which was slow, but the results outperformed any cv2 method. Using the <code>.thumbnail()</code> function seemed to help speed this up. The trade-off was that we did not generate any different-sized images for modeling, but only did one downsize of the image. <br>
All images were reduced to 1152 dim. In addition, during training we applied augmentations prior to downsizing, as they would lose more information if applied to a small image. The augmentations are mainly cv2 based. Also, we kept the original filtered image aspect ratio and padded to a square image. </p>
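<p>A minimal sketch of such a high-quality downsize with PIL's Lanczos filter via <code>.thumbnail()</code>, using the 1152 long side mentioned above; the function name and the omitted padding step are not from the original write-up.</p>
<pre><code>import numpy as np
from PIL import Image

def downsize_lanczos(arr: np.ndarray, max_dim: int = 1152) -> np.ndarray:
    img = Image.fromarray(arr)
    # thumbnail resizes in place, keeps the aspect ratio and never upsizes
    img.thumbnail((max_dim, max_dim), resample=Image.Resampling.LANCZOS)
    return np.asarray(img)
</code></pre>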
<h2>Augmentations</h2>
<p>Train augmentations before downsizing were vflip, hflip, transpose, shift, scale, rotate, grid distortion & affine. After downsizing, we used one of random grid shuffle & coarse dropout. In train, we random cropped to 1024, and in val we center cropped. <br>
Example batch below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fc28321c365bb1d679c2c35eb1c03e8f7%2FScreenshot%202023-02-28%20at%2018.09.06.png?generation=1677604161886568&alt=media" alt=""></p>
<h2>Models</h2>
<h3>Model type 1: Breast level feature combination 1D-CNN with CNN backbone</h3>
<p>Auxiliary loss improved time to convergence a lot. For each backbone, the stage 1 model trained for 9 epochs and the stage 2 model trained for 2 epochs with a frozen backbone. Heavy dropout (0.5) was applied on the linear output. <br>
The 1D-CNN was set up with a filter size of 2 and no padding, so we get a (2, n_features)-shaped input and a (1, n_features) output (a sketch of this head appears below the figure). </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2F59667ff80e38d33611d00b97752a6aaa%2FScreenshot%202023-03-01%20at%2010.33.04.png?generation=1677663219589043&alt=media" alt=""></p>
<p>In our final ensemble we ran the above architecture with efficientnet b3 b4 b5 v2s and v2m.</p>
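<p>A minimal sketch of the breast-level 1D-CNN head described above (a Conv1d with kernel size 2 and no padding applied over the two image feature vectors of one breast); the dropout placement and the single-logit head are assumptions.</p>
<pre><code>import torch
import torch.nn as nn

class BreastHead(nn.Module):
    def __init__(self, n_features: int, dropout: float = 0.5):
        super().__init__()
        self.combine = nn.Conv1d(n_features, n_features, kernel_size=2)  # (B, F, 2) -> (B, F, 1)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(n_features, 1)

    def forward(self, feats_two_images: torch.Tensor):
        # feats_two_images: (B, 2, n_features) -- CNN features of two images of one breast
        x = feats_two_images.transpose(1, 2)   # (B, n_features, 2)
        x = self.combine(x).squeeze(-1)        # (B, n_features)
        return self.fc(self.dropout(x))
</code></pre>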
<p>Some of the solution was inspired by Bo’s great write up on the <a href="https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175412" target="_blank">SIIM melanoma competition 1st place solution</a>.</p>
<h3>Model type 2: Patient-level multi-view-multi-lateral transformer with CNN backbone</h3>
<p>Inspired by the paper <a href="https://arxiv.org/pdf/2206.10096.pdf" target="_blank">Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms</a>, we designed another 2nd stage approach, but on a patient level. </p>
<h4>Stage 1:</h4>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2F809528c5c906c90e6752f2f6729f1f5c%2FScreenshot%202023-02-28%20at%2013.38.19.png?generation=1677604933375806&alt=media" alt=""></p>
<p>In the first stage we train an image level CNN which is trained with an auxiliary segmentation loss by predicting masks we got from training a Yolo_v7 on CBIS dataset.</p>
<h4>Stage 2:</h4>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fa908b5aad5f9bf7b0b1dc8ce77461eab%2FScreenshot%202023-02-28%20at%2013.38.10.png?generation=1677605002279914&alt=media" alt=""></p>
<p>We take the CNN trained in stage 1 and aggregate to patient level by considering four images per patient. Two views for each breast, i.e L-CC, L-MLO, R-CC, R-MLO. Using a 2-view/ 2-lateral input, the transformer can hopefully learn: </p>
<ul>
<li>consistency between views </li>
<li>differences in laterals</li>
</ul>
<p>We extract the output feature maps of the image-level CNN for each view, ending up with a tensor of size (4, out_channels, 32, 32), which is projected to a predefined grid size (4, hidden_dim, 16, 16) using a conv layer. Then the 4x16x16 features, which can be seen as patch tokens, are spatially flattened and concatenated. The result is a sequence of 1024 tokens of dimension hidden_dim, which is put into a transformer. The outputs of that transformer are two cancer predictions, one for each breast.</p>
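<p>A minimal sketch of this patch-token preparation, following the shapes in the text (4 views, 32x32 feature maps projected to a 16x16 grid, i.e. 4 x 16 x 16 = 1024 tokens); the projection layer and hidden size are assumptions, and the transformer with its two breast-level heads is omitted.</p>
<pre><code>import torch
import torch.nn as nn

class MultiViewTokens(nn.Module):
    def __init__(self, out_channels: int, hidden_dim: int = 192):
        super().__init__()
        # project each (out_channels, 32, 32) view feature map to a (hidden_dim, 16, 16) grid
        self.project = nn.Conv2d(out_channels, hidden_dim, kernel_size=2, stride=2)

    def forward(self, view_feats: torch.Tensor):
        # view_feats: (B, 4, out_channels, 32, 32) for L-CC, L-MLO, R-CC, R-MLO
        b, v, c, h, w = view_feats.shape
        x = self.project(view_feats.flatten(0, 1))  # (B*4, hidden_dim, 16, 16)
        x = x.flatten(2).transpose(1, 2)            # (B*4, 256, hidden_dim) patch tokens
        x = x.reshape(b, v * x.shape[1], -1)        # (B, 1024, hidden_dim) token sequence
        return x                                    # fed to a transformer with two breast-level outputs
</code></pre>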
<p>We first froze the CNN backbone for a few epochs to let the pretrained vision transformer "adjust" before fine-tuning the whole 2-stage model end2end. <br>
If a patient has more than one image per view we randomly sample one image per view for training and use multiple 4-view combinations for inference which are then averaged. <br>
Backbone-wise we used pretrained seresnext50 and convnext tiny as CNN and pretrained deit-tiny-patch16-224 as transformer.</p>
<h2>Ensembling</h2>
<p>So in total we used single seed fullfits of the following models</p>
<ul>
<li>Effnet v2_s + 1D-CNN</li>
<li>Effnet v2_m + 1D-CNN</li>
<li>Effnet b3 + 1D-CNN</li>
<li>Effnet b4 + 1D-CNN</li>
<li>Effnet b5 + 1D-CNN</li>
<li>SE-ResNext50 + deit-tiny-patch16-224</li>
<li>ConvNext_tiny + deit-tiny-patch16-224</li>
</ul>
<p>A threshold to convert to binary output was selected based on the CV of the blend. <br>
Our two highest submissions according to Public LB were also our two highest blends on CV. They were not the highest private, but close to it. For the shake up, it probably helped that we did not start blending the models until the last few days of the competition, so did not get distracted by high Public LB scores. </p>
<h2>Things that did not work/were ineffective:</h2>
<ul>
<li>Pretraining on DDSM/VinDr-Mammo</li>
<li>Training a ROI extractor and training subsequent models on focused ROIs rather than whole images</li>
<li>Higher resolution images (similar or worse performance, and longer inference times)</li>
</ul>
|
RSNA Screening Mammography Breast Cancer Detection
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/namgalielei" target="_blank">@namgalielei</a> We might release training code after LECR ends 😅</p>
<p>Congratulations to all the winners and new competition GMs. Thank you to Kaggle team, the competition hosts and all other Kagglers who actively participated in the forum discussions. Part of our team solution was inspired by reading your generously shared ideas 💯</p>
<p>I'm really impressed with other winning teams' solutions, in particular ones using multi-image, multi-view models. Our team failed to obtain good score with those architectures and just went with a simpler route.<br>
Our final submission was mean-aggregation of 10 single-image models (5 folds x 2 backbones, efficientnetv2_s and efficientnet_b5_ns).</p>
<p>There were 2 stages in the training process.</p>
<ol>
<li>Pre-train on VinDR data</li>
<li>Finetune on Kaggle + DDSM</li>
</ol>
<p><strong>Pre-processing</strong></p>
<p>All Kaggle and external data were pre-processed similarly.</p>
<ul>
<li>Original images resized to 1536 longest edge</li>
<li>YOLOX trained at 640x640 to infer breast bounding boxes</li>
<li>Cropped breast regions, then resized to 1536x1024</li>
</ul>
<p><strong>Pre-training</strong></p>
<p>We converted VinDR lesion bounding box annotations into multiple lesion labels per image while discarding the coordinates. The two backbones were then multi-task pre-trained with BIRADS classification, breast density classification and lesion classification for 20 epochs.</p>
<pre><code>model = timm.create_model(backbone_name)
model.reset_classifier(0, "")
model.birads_fc = nn.Linear(model.num_features, 5)
model.density_fc = nn.Linear(model.num_features, 4)
model.lesions_fc = nn.Linear(model.num_features, 11)
...
loss = cross_entropy(birads_logits, birads_labels) + cross_entropy(density_logits, density_labels) + binary_cross_entropy(lesions_logits, lesions_labels)
</code></pre>
<p><strong>Finetune</strong></p>
<p>We loaded the last epoch checkpoints from first stage and continued fine-tuning the models for 10 epochs on Kaggle+DDSM data. The exact architecture can be found in our inference notebook ( <a href="https://www.kaggle.com/code/hoanganhpham/fork-of-rsna-bcd-ensemble" target="_blank">https://www.kaggle.com/code/hoanganhpham/fork-of-rsna-bcd-ensemble</a> ). It was a deep-supervision model with auxiliary losses on early blocks of effnet.</p>
<p>There were many important tricks which greatly helped the fine-tuning part since the data was extremely imbalanced.</p>
<ul>
<li><p>Balanced batch sampler (a sketch of such a sampler follows this list). Optimal ratios also varied greatly between models.</p>
<ul>
<li>1 positive - 7 negatives for effnetv2_s.</li>
<li>1 positive - 3 negatives for effnet_b5.</li></ul></li>
<li><p>Data augmentation</p>
<ul>
<li>Shift scale rotate breast regions</li>
<li>Hflip/ Vflip/ BrightnessContrast</li>
<li>CoarseDropout</li></ul></li>
<li><p>Simple BCE loss worked best. Weighted BCE and focal loss were much worse or led to divergence.</p></li>
</ul>
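<p>A minimal sketch of a balanced batch sampler with a fixed positive:negative ratio; this is illustrative, not the authors' exact sampler. Each epoch it yields all positives plus <code>neg_ratio</code> randomly drawn negatives per positive.</p>
<pre><code>import random
from torch.utils.data import Sampler

class BalancedSampler(Sampler):
    def __init__(self, labels, neg_ratio: int = 7):
        self.pos_idx = [i for i, y in enumerate(labels) if y == 1]
        self.neg_idx = [i for i, y in enumerate(labels) if y == 0]
        self.neg_ratio = neg_ratio

    def __iter__(self):
        # all positives + a fresh random subset of negatives every epoch
        negs = random.sample(self.neg_idx, len(self.pos_idx) * self.neg_ratio)
        idx = self.pos_idx + negs
        random.shuffle(idx)
        return iter(idx)

    def __len__(self):
        return len(self.pos_idx) * (self.neg_ratio + 1)
</code></pre>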
<p><strong>Things that didn't work for us</strong></p>
<ul>
<li>Per breast side, we aggregated the image probabilities from different models and computed mean, min and max probs, then concatenated those with the image embeddings and fed them to xgboost/MLP.</li>
<li>Multi-image transformer</li>
<li>MVCCL model (<a href="https://www.kaggle.com/code/hengck23/mvccl-model-for-admani-dataset" target="_blank">https://www.kaggle.com/code/hengck23/mvccl-model-for-admani-dataset</a><br>
)</li>
</ul>
<p><strong>Links</strong><br>
Training: <a href="https://github.com/nhannguyen2709/rsna-breast" target="_blank">https://github.com/nhannguyen2709/rsna-breast</a><br>
Inference: <a href="https://www.kaggle.com/code/hoanganhpham/fork-of-rsna-bcd-ensemble" target="_blank">https://www.kaggle.com/code/hoanganhpham/fork-of-rsna-bcd-ensemble</a> </p>
<p>Hey Remek, I put the model + loss compute code in my post. You can take a look at Vindr dataset. It has 5000 patient ids, each patient has 4 images (left and right MLO/ CC), and each image has BIRADS labels (from 1-5), density labels ( A-D) and bounding boxes of lesions. After pre-training, I only kept the backbone for fine-tuning and removed the three Linear heads.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
66th (LB 10th) solution: k-means for background noise reduction
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/ludditep" target="_blank">@ludditep</a> <br>
Sorry for my late response.<br>
There was a little improvement (less than 0.01) on the PB.</p>
<p>Thank you to everyone involved in this competition.<br>
This is my first image competition, so I learned a lot of things.<br>
(To be honest, I am so disappointed to have dropped so far from the Gold Zone. However, this is what I am capable of now.)</p>
<p>I want to give a brief summary of my solution.</p>
<h1>Overview</h1>
<ol>
<li>Crop a breast area.</li>
<li>Background noise reduction using k-means.</li>
<li>Predict cancer for each image using efficientnet_v2</li>
<li>Aggregate (mean aggregation)</li>
</ol>
<h1>Preprocess</h1>
<h2>Windowing</h2>
<p>I referred to <a href="https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example" target="_blank">this windowing code</a>. In this comp I learned a lot from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>; I really appreciate it.<br>
So windowing is just applied based on VOILUTFunction.<br>
Then I output images of size 2048xXXXX.</p>
<h2>Background noise reduction (kmeans)</h2>
<p>I apply k-means clustering to each image using the pixel values; then the pixel values of the cluster with the lowest summation are set to zero.<br>
To save time, I used the <a href="https://github.com/subhadarship/kmeans_pytorch" target="_blank">kmeans-pytorch</a><br>
Here is <a href="https://www.kaggle.com/code/taruto1215/66th-lb-10th-simple-image-crop-kmeans/notebook" target="_blank">my code</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3830852%2F02777b71ba9b4886858ab554550f9a72%2Fkmeans.png?generation=1677580535294417&alt=media" alt=""></p>
<p>This method gave me almost the same CV score, but it boosted my LB (~0.58 --> ~0.63).</p>
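<p>A minimal sketch of this k-means background suppression, using scikit-learn for brevity (the author used kmeans-pytorch for speed, so this is not the author's exact code): cluster the pixel values and zero out the cluster with the lowest pixel-value sum.</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans

def suppress_background(img: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    pixels = img.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(pixels)
    # zero out the cluster whose pixel-value sum is lowest (treated as background noise)
    sums = [pixels[labels == k].sum() for k in range(n_clusters)]
    background = int(np.argmin(sums))
    out = img.reshape(-1).copy()
    out[labels == background] = 0
    return out.reshape(img.shape)
</code></pre>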
<h2>Image crop</h2>
<p>I cropped images using the code below.<br>
This approach gave me slightly better CV and LB scores than cropping using YOLOv5 (ref <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>'s code, thank you very much).<br>
It then outputs images (resized to 1536x960).</p>
<pre><code>import copy
import numpy as np
# frame: the 2D image array; h, w = frame.shape (assumed from context)
frame_org = copy.copy(frame)
thres1 = np.min(frame)+68 #Adjustments were made while viewing the crop image.
np.place(frame, frame < thres1, 0)
thres2 = frame_org.sum() / (h*w)
vertical_not_zero = [True if frame[:,idx].sum() > thres2 else False for idx in range(w)]
horizontal_not_zero = [True if frame[idx,:].sum() > thres2 else False for idx in range(h)]
crop = frame_org[horizontal_not_zero,:]
crop = crop[:,vertical_not_zero]
</code></pre>
<h1>Model</h1>
<ul>
<li>I used EfficientNet_V2_s (final sub: an ensemble of 3 models with different hyper-parameters, etc.)</li>
<li>I train the model for cancer and aux targets with a loss weight of cancer:aux = 1:2; the multi-task learning gave me better CV and LB results.</li>
<li>Dropout_rate = 0.30, drop_path_rate = 0.20</li>
<li>Multi-sampled dropout (drop_rate=0.2, num_drop=5); it gave me better CV and LB scores.</li>
</ul>
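<p>As referenced above, here is a minimal sketch of a multi-sample dropout head with the quoted rates (drop_rate=0.2, num_drop=5). The class name and the way it attaches to the backbone are assumptions, not the author's exact code.</p>
<pre><code># Minimal sketch of multi-sample dropout: average logits over several dropout samples.
import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, in_features: int, num_classes: int = 1,
                 drop_rate: float = 0.2, num_drop: int = 5):
        super().__init__()
        self.dropouts = nn.ModuleList([nn.Dropout(drop_rate) for _ in range(num_drop)])
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: pooled backbone features of shape (B, in_features)
        return torch.stack([self.fc(d(x)) for d in self.dropouts], dim=0).mean(dim=0)
</code></pre>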
<h1>Training</h1>
<ul>
<li>Optimizer: AdamW</li>
<li>Scheduler: CosineAnnealing (warmup=0.1)</li>
<li>epoch = 5</li>
<li>batch_size=36</li>
</ul>
<h2>Upsampling</h2>
<p>I used <a href="https://github.com/louis-she/exhaustive-weighted-random-sampler" target="_blank">ExhaustiveWeightedRandomSampler</a> (weight: Pos:Neg = 7:1).<br>
This sampler also worked well for me.</p>
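<p>A minimal sketch of the 7:1 positive upsampling, using PyTorch's built-in <code>WeightedRandomSampler</code> as a stand-in (the write-up uses <code>ExhaustiveWeightedRandomSampler</code>, which is assumed to be a drop-in replacement); <code>train_df</code> and <code>train_dataset</code> are hypothetical names.</p>
<pre><code># Minimal sketch: oversample positives with weight 7, negatives with weight 1.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

labels = torch.tensor(train_df["cancer"].values)     # hypothetical dataframe column
weights = labels.float() * 6.0 + 1.0                 # Pos:Neg weights = 7:1
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
loader = DataLoader(train_dataset, batch_size=36, sampler=sampler)
</code></pre>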
<h2>Loss function</h2>
<p>I used weighted binary cross-entropy (weight: Pos:Neg = 2:1).</p>
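<p>In PyTorch this Pos:Neg = 2:1 weighting can be expressed with the <code>pos_weight</code> argument; a minimal sketch (variable names are assumptions):</p>
<pre><code># Minimal sketch of weighted BCE: positives count twice as much as negatives.
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([2.0]))
loss = criterion(logits, targets.float())   # logits/targets: model outputs and 0/1 labels
</code></pre>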
<h2>Data augmentation</h2>
<p>I used the code below with Kornia (it also referred to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s code, thanks).</p>
<pre><code># Assumed imports (not in the original snippet): import torch.nn as nn;
# import kornia.augmentation as K; from kornia.augmentation import ImageSequential
        self.flip = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
K.RandomVerticalFlip(p=0.5),
)
p=0.8
self.transform_geometry = ImageSequential(
K.RandomAffine(degrees=20, translate=0.1, scale=[0.8,1.2], shear=20, p=p),
K.RandomThinPlateSpline(scale=0.25, p=p),
random_apply=1, #choose 1
)
p=0.5
self.transform_intensity = ImageSequential(
K.RandomGamma(gamma=(0.5, 1.5), gain=(0.5, 1.2), p=p),
K.RandomContrast(contrast=(0.8,1.2), p=p),
K.RandomBrightness(brightness=(0.8,1.2), p=p),
random_apply=1, #choose 1
)
</code></pre>
<h2>Post-processing.</h2>
<ul>
<li>I tried several aggregation methods, but simple mean aggregation was the best for my final submission.</li>
</ul>
<p>Thank you.</p>
<p>Thanks for the encouragement!</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
6th Place Solution: Multi-view Multi-lateral Multi-stage Approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Using mixup usually makes training converge very slowly, and the final results are not always better than without mixup.</p>
<p>Thank you! Please refer to this thread: <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/390974#2162251" target="_blank">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/390974#2162251</a></p>
<p>First of all, I would like to express deep gratitude to the competition organisers and the Kaggle team. The competition dataset is of very high quality, and we would be happy if our solution could contribute to improve the quality of breast cancer screening. I would also like to thank my teammates who struggled together with me throughout the competition.<br>
Finally, big congratulations to all the winners (and to ourselves, three new competitions Grandmasters)! </p>
<h1>Preprocessing</h1>
<ul>
<li>Original image arrays were resized into 2048 x 2048 x 1</li>
<li>VOI-LUT was applied to all images</li>
<li>Images were then cropped to exclude blank space<ul>
<li>YOLOX model was trained to generate breast bbox</li>
<li>Compared to simple rule-based breast extraction, YOLOX cropped images usually have a smaller region, which seemed to prevent our models from overfitting</li>
<li>In order to shorten inference time, we used simple rule-based crop during inference</li>
<li>Images were cropped with a random margin during training as augmentation</li>
<li>Bbox mix (YOLOX crop + rule-based crop) were also used as augmentation</li>
<li>Cropped images are resized to an aspect ratio of 2:1 (1024 x 512 or 1536 x 768)</li></ul></li>
<li>A wide range of augmentations were used <ul>
<li>Affine transform, V/H flip, brightness/contrast, blur, CLAHE, distortion, dropout</li></ul></li>
</ul>
<h1>Model architectures</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fd4438a7bcbaf552a03df881ade23a0de%2FRSNA-Mammo-3.svg?generation=1677551759021366&alt=media" alt=""><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2F5702736cffb022faafcd9fe8b04b9edc%2FRSNA-Mammo.svg?generation=1677551773315924&alt=media" alt=""><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2Fbe83f5153c2966c8ad8a8a986fdbc569%2FRSNA-Mammo-2.svg?generation=1677551796605732&alt=media" alt=""></p>
<h2>Some key results</h2>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Description</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Aug07lr0</td>
<td>MV, 1024x512</td>
<td>0.493</td>
<td>0.64</td>
<td>0.46</td>
</tr>
<tr>
<td>Res02lr0</td>
<td>MV, 1536x768</td>
<td>0.488</td>
<td>0.59</td>
<td>0.46</td>
</tr>
<tr>
<td>Res02mod2</td>
<td>MVF, 1536x768</td>
<td>0.516</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Res02mod3</td>
<td>MVF, 1536x768</td>
<td>0.525</td>
<td>0.63</td>
<td>0.48</td>
</tr>
<tr>
<td>charm_convnext_small_multi_lat</td>
<td>MVL, 1024x512</td>
<td>0.498</td>
<td>0.60</td>
<td>0.50</td>
</tr>
</tbody>
</table>
<h1>Some tricks for training</h1>
<ul>
<li>Due to the unstable nature of the pF1 metric, area under the precision-recall curve (AUCPR) worked well as a surrogate metric (a minimal sketch follows this list)<ul>
<li>AUCPRLoss (<a href="https://github.com/Shlomix/global_objectives_pytorch" target="_blank">https://github.com/Shlomix/global_objectives_pytorch</a>)</li></ul></li>
<li>The model performance is still unstable! We used exponential moving average (EMA) and simple weight averaging</li>
<li>Using auxiliary loss (age, biopsy, etc.) improved the performance</li>
<li>Convnext (v1) was the best backbone for this task</li>
</ul>
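<p>As mentioned in the list above, AUCPR can be tracked as a smoother surrogate for the thresholded pF1; a minimal sketch using sklearn (<code>val_targets</code> / <code>val_probs</code> are assumed out-of-fold arrays):</p>
<pre><code># Minimal sketch: average_precision_score approximates area under the PR curve (AUCPR),
# which is less noisy than the competition pF1 when used for model selection.
from sklearn.metrics import average_precision_score

aucpr = average_precision_score(val_targets, val_probs)
print(f"surrogate AUCPR: {aucpr:.4f}")
</code></pre>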
<h1>Ensemble</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2F355889cbe156ce03999e4ea891ba30d4%2FRSNA-Mammo-4.svg?generation=1677551811042501&alt=media" alt=""><br>
Also, our best submission scored private LB 0.53, but it was not chosen because it has lower CV and public LB than our selected submissions.</p>
<h1>Things did not work</h1>
<ul>
<li>Positive weight, oversampling, weighted samplers, etc. did not work<ul>
<li>They seemed to work when you have a weak model, but no longer once you have a good enough model</li></ul></li>
<li>Focal loss and label smoothing did not improve CV</li>
<li>Vanilla BCE is all you need</li>
<li>Mixup did not work</li>
<li>We tried pseudo-labeling and pretraining on external datasets (VINDR and DDSM), but there was no clear improvement in both CV and LB</li>
</ul>
<h1>Code</h1>
<ul>
<li>Submission notebook -> <a href="https://www.kaggle.com/code/amanatsu/rsna-mammo-rejection-ensemble-v2-ishikei?scriptVersionId=120432251" target="_blank">https://www.kaggle.com/code/amanatsu/rsna-mammo-rejection-ensemble-v2-ishikei?scriptVersionId=120432251</a></li>
<li>RabotniKuma part -> <a href="https://github.com/analokmaus/kaggle-rsna-breast-cancer" target="_blank">https://github.com/analokmaus/kaggle-rsna-breast-cancer</a></li>
<li>YOLOX(ishikei) part -> <a href="https://github.com/ishikei14k/RSNA_Screening_Mammography_Breast_Cancer_Detection" target="_blank">https://github.com/ishikei14k/RSNA_Screening_Mammography_Breast_Cancer_Detection</a></li>
<li>Charmq part -> <a href="https://github.com/tyamaguchi17/rsna_mammo" target="_blank">https://github.com/tyamaguchi17/rsna_mammo</a></li>
</ul>
<p><a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> Good question! We randomly sampled an image per view during training, and sampled the image with the most valid pixels (a valid pixel is defined as one with value ∈ [16, 160]) during inference. So actually we didn't use all images in the test dataset.</p>
<p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> Thank you for the question! We learnt a lot from your discussion topics (as usual 😊).<br>
With regard to multi-view training, we tried efficientnet, convnext, seresnext, nextvit, and some other backbones, and none of them significantly overfitted. Compared to fusion models with stronger spatial regularization, normal multi-view models usually were slightly overfitted after the same training epochs, but we did not observe big differences.</p>
<p>Thank you for the question, let me clarify:</p>
<pre><code>[1st stage] [100% test data] x [model 1, model 2]
-> bottom 25% were removed and labelled as 0
[2nd stage] [75% test data] x ([model 1, model 2] from 1st stage + [model 3])
-> bottom 40% (x 75%) were removed and labelled as 0
[3rd stage] [45% test data] x ([model 1, model 2, model 3] from 2nd stage + [model 4])
</code></pre>
<p>In this way, additional models are run only on suspicious data.</p>
<p>I used AdamW optimizer, cosine annealing with warm restarts scheduler, and initial learning rate of 1e-5 for batch size 16. </p>
|
RSNA Screening Mammography Breast Cancer Detection
|
7th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Of course I considered discarding those weaker models, but I thought the lower covariance between breast- and laterality-level predictions would surely benefit the ensemble, and I was reassured by the fact that the scores went up on the LB😏</p>
<p>First of all, we would like to express our respect to all the participants and thank the organizers for making this competition possible. This competition was tough for us due to the volatility of the metrics and the fact that we had to complete our inference in less than 9 hours. In the end, by choosing our best local cv model (CV0.534), we were able to win 7th place.<br>
Here we will describe how we increased our CV score and how we made our entire inference faster.</p>
<h2>Summary</h2>
<ul>
<li>Images: Kaggle train data and VinDr-Mammo as the external data</li>
<li>Preprocess: ROI cropping in a rule-based way and sigmoid windowing</li>
<li>Resolution: 1520x912</li>
<li>Model: EfficientNetV2S (and EfficientNet B5) with GeM pooling (p=3)</li>
<li>CV: 4-fold, grouped by patient and stratified by cancer, BIRADS, density, age, biopsy, implant, and machine_id</li>
<li>Train:<ul>
<li>Augmentation: V/H Flip, Geometric transformation (Affine and Elastic)</li>
<li>Loss function: BCE</li>
<li>Optimizer: Adam</li>
<li>Scheduler: Cosine decay (starting from 5e-5)</li></ul></li>
<li>Inference: 2xTTA (vertical flip)</li>
<li>Ensemble: seed averaging (2 seeds) and 2 level ensemble (breast- and laterality-level)</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3706526%2Ff93bb633bb5bcfeffde33cb6931f5bd3%2Fsolution.png?generation=1677589173238328&alt=media" alt=""></p>
<h2>Preprocessing</h2>
<p>We obtained the breast ROI in a rule-based way. In brief, after converting values below 40 to 0, uniform columns and rows were removed because they were assumed to be background. This process was simple yet effective and fast. After this preprocessing, we obtained images with an aspect ratio of 1:1.6~2 on average, which were then resized to 1520x912. This resolution was determined after comparing four options: ①640x640, ②1024x1024, ③1520x912, and ④2689x1569. The order of CV score was ① < ② < ③ > ④. Additionally, we also generated images with sigmoid windowing applied, which did not have a significant effect on improving the score, even in the ensemble. However, we trained some models with these images and included them in the ensemble to make our predictions more robust.<br>
The most important thing was to generate 2 types of input images, namely breast-level and laterality-level. The breast-level consisted of one breast image per input, while the laterality-level had two or more breast views per input, made by simply concatenating them in columns. The models trained on the laterality-level were much weaker (-0.04) than those on the breast-level, but the ensemble of the two levels was very effective.</p>
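<p>A minimal sketch of sigmoid windowing, assuming the standard DICOM sigmoid VOI LUT formula and scalar WindowCenter/WindowWidth tags (<code>ds</code> is a pydicom dataset; not the authors' exact code):</p>
<pre><code># Minimal sketch of sigmoid windowing for mammograms.
import numpy as np

def sigmoid_window(arr: np.ndarray, center: float, width: float) -> np.ndarray:
    # DICOM sigmoid VOI LUT: y = 1 / (1 + exp(-4 * (x - center) / width)), rescaled to 8 bit
    out = 1.0 / (1.0 + np.exp(-4.0 * (arr.astype(np.float32) - center) / width))
    return (out * 255.0).astype(np.uint8)

img = sigmoid_window(ds.pixel_array, float(ds.WindowCenter), float(ds.WindowWidth))
</code></pre>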
<h2>Model Architectures</h2>
<p>We found that larger models did not necessarily score better (EfficientNetB2 < EfficientNetB5 > EfficientNetB7). Out of several models, including EfficientNetV2, EfficientNetB5, SeResNeXt50, ConvNeXt tiny/small, and NextViT base, EfficientNetB5 performed the best (CV0.47 on 1520x912), and EfficientNetV2S came in second-best (CV0.45 on 1520x912). Additionally, we replaced the 'average' pooling layer with 'generalized mean' (GeM) pooling, which resulted in a slight improvement in score (0.005~0.01). Note that we ultimately used EfficientNetV2S in inference because EfficientNetB5 takes longer to infer than EfficientNetV2S. However, the difference in metrics between the two was negligible (CV0.49, single model) when using B5's predictions on external data to train V2S, as described later.</p>
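<p>A minimal sketch of a GeM pooling layer (p=3) replacing global average pooling after the backbone, following the usual formulation from Radenović et al.; this is an illustration, not the authors' exact module:</p>
<pre><code># Minimal sketch of generalized-mean (GeM) pooling: ((1/N) * sum(x^p))^(1/p), p=3.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p: float = 3.0, eps: float = 1e-6, trainable: bool = False):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p)) if trainable else p
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map -> (B, C) pooled descriptor
        pooled = F.avg_pool2d(x.clamp(min=self.eps).pow(self.p), kernel_size=x.shape[-2:])
        return pooled.pow(1.0 / self.p).flatten(1)
</code></pre>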
<h2>External Data</h2>
<p>We used VinDr-Mammo dataset as external data, whose labels were defined by breast-level predictions after aggregating them into laterality-level. We found a significant improvement in the CV score by 0.02 using this dataset.</p>
<h2>Inference Speed Up</h2>
<p>The inference time limit was very tight, so we made efforts to ensure inference completed in time. First, we used DALI to decode images and preprocessed most of them on GPU. This significantly increased processing speed. However, as mentioned <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/384231" target="_blank">here</a>, some images could not be processed using DALI, so we used dicomSDL and cupy as a fallback method for decoding and preprocessing images. Additionally, we implemented a 2-stage method as shown in the following image. Through data analysis, we observed that the data contained many explicit negative samples which did not require prediction by a strong model or ensemble. By filtering out those data with the threshold of 0.01, we were able to reduce the number of data (to 25%) which need to be saved to disk and predicted without losing accuracy.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3706526%2Fcfd0afe964fcafd12ca8460914bd6816%2Fsolution1.png?generation=1677589474115470&alt=media" alt=""><br>
To speed up prediction, we compiled our models with TensorRT. Through our experiments, we observed that the fp16 model was 1.3 times faster than fp32, but its numerical error was not negligible. In the end, we used models compiled in fp32.</p>
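<p>A minimal sketch of the 2-stage filtering logic described above; <code>light_scores</code> (first-stage probabilities for all images) and <code>heavy_ensemble</code> are assumed names:</p>
<pre><code># Minimal sketch: confident negatives (score < 0.01) keep their cheap first-stage score,
# only the remaining ~25% of images are re-scored by the heavy ensemble.
import numpy as np

final_scores = light_scores.copy()
survivors = np.where(light_scores >= 0.01)[0]        # ~25% of the data
final_scores[survivors] = heavy_ensemble(survivors)  # heavy models run only on survivors
</code></pre>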
<h2>Code</h2>
<p>train: <a href="https://github.com/Masaaaato/RSNABreast7thPlace" target="_blank">https://github.com/Masaaaato/RSNABreast7thPlace</a><br>
inference: <a href="https://www.kaggle.com/code/masato114/2stage-ensemble/notebook" target="_blank">https://www.kaggle.com/code/masato114/2stage-ensemble/notebook</a></p>
<p>Thanks. Once images were loaded with DALI and saved in storage, DALI was no longer used, even for augmentation. Elastic deformation was implemented by <code>albumentations.augmentations.geometric.transforms.ElasticTransform</code> with alpha=10 and sigma=15.</p>
<p>Try the following.</p>
<pre><code># NOTE: the function names and several literal constants were lost when this snippet
# was pasted; the structure below is a best-effort reconstruction with the missing
# values marked as placeholders — see the linked notebook for the exact code.
def ExtractBreast(img):
    img_copy = img.copy()
    img = np.where(img <= 40, 0, img)            # values below ~40 treated as background
    height, _ = img.shape
    BAND = 0.4                                   # placeholder: original band fraction lost
    y_a = height // 2 + int(height * BAND)
    y_b = height // 2 - int(height * BAND)
    b_arr = img[y_b:y_a].std(axis=0) != 0        # columns that vary inside the central band
    continuing_ones = CountUpContinuingOnes(b_arr)   # helper defined in the linked notebook
    col_ind = np.where(continuing_ones == continuing_ones.max())[0]
    img = img[:, col_ind]
    _, width = img.shape
    x_a = width // 2 + int(width * BAND)
    x_b = width // 2 - int(width * BAND)
    b_arr = img[:, x_b:x_a].std(axis=1) != 0     # rows that vary inside the central band
    continuing_ones = CountUpContinuingOnes(b_arr)
    row_ind = np.where(continuing_ones == continuing_ones.max())[0]
    return img_copy[row_ind][:, col_ind]

def dicom_to_png(in_path, out_path):             # placeholder name: original lost
    dicom = dicomsdl.open(in_path)
    data = dicom.pixelData()
    data = data[MARGIN:-MARGIN, MARGIN:-MARGIN]  # placeholder: edge margin lost
    if dicom.getPixelDataInfo()['PhotometricInterpretation'] == 'MONOCHROME1':  # key/value reconstructed
        data = np.amax(data) - data
    data = data - np.min(data)
    data = data / np.max(data)
    data = (data * 255).astype(np.uint8)
    img = ExtractBreast(data)
    img = cv2.resize(img, SIZE, interpolation=cv2.INTER_AREA)
    cv2.imwrite(out_path, img)
</code></pre>
<p>You can find it in <a href="https://www.kaggle.com/code/masato114/rsna-generate-train-images/notebook" target="_blank">https://www.kaggle.com/code/masato114/rsna-generate-train-images/notebook</a>.</p>
<p>Yes, I did the same process as what is applied to test images in inference notebook.</p>
<p>In short,</p>
<ol>
<li>Low resolution images were used for ver049. Since there is nothing special about either the model architecture or the training procedure, we did not go into detail on that. They should be generated with reference to the inference code. However, if you simply want to keep the inference time under 9 hours, use the pre-trained models provided.</li>
<li>That is because it was better out of several initial conditions.</li>
<li>To shorten the inference time. Effb5 is heavier than Effv2s.</li>
</ol>
<p>Hi, thanks for your comment.</p>
<p>Yes, you can use train.py with the configs you indicated for step 3. Once you successfully generated pseudolabels on vindr, which are labeled for each breast of each image, you can make lateral-level labels by aggregating them on both patient_id and laterality. This should be done before proceeding to step 3.</p>
<p>As you are saying, you can finetune them in any order and ensemble later.</p>
<p>Hi, I apologize for the confusion.<br>
There are no extra steps between steps 2 and 3. But perhaps you need to rename the columns of the VinDr annotation file so that they match train.csv. I may have overlooked that the VinDr patient column is named 'study_id' rather than 'patient_id', I guess.<br>
If you have hit errors, please kindly share the concrete problems.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
8th place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>(1536, 896), I've updated it now.</p>
<p>Light or heavy, both include mixup. When we set the mixup rate to 0.3~0.5, CV is better.</p>
<p>Before everything starts, I would like to say a big thank you to kaggle and the contest organizers for creating such an admirable contest.</p>
<p>My heartfelt gratitude goes to my teammates <a href="https://www.kaggle.com/zephyruszx" target="_blank">@zephyruszx</a> and <a href="https://www.kaggle.com/calvchen" target="_blank">@calvchen</a> for their incredible support throughout the competition. </p>
<h2>DataSet</h2>
<p>We chose NVIDIA DALI, which can decode DICOM image files on the GPU, and generated 16-bit PNG files. We did not use more complex algorithms for cropping, just <code>cv2.connectedComponentsWithStats</code>, which works very well and is fast.</p>
<p>image size: 1536*896</p>
<p>Cross-validation strategy:<code>StratifiedGroupKFold</code> 5-Folds</p>
<p>negative samples strategy : 35%~50% of negative samples downsampled</p>
<p>Data augmentation strategies with different levels of LIGHT and HEAVY:</p>
<pre><code># NOTE: the function names, guard conditions, and augmentation hyper-parameters were
# lost when this snippet was pasted; the structure is reconstructed and the missing
# literals are left as `...` — see the linked repository for the exact values.
def mixup(x, yc, alpha):
    # original guards (alpha > 0 and batch size > 1) reconstructed from the stripped lines
    lam = np.random.beta(alpha, alpha)
    rand_idx = torch.randperm(x.shape[0])
    mixed_x = lam * x + (1 - lam) * x[rand_idx, :]
    yc_j, yc_k = yc, yc[rand_idx]
    return mixed_x, yc_j, yc_k, lam

def get_transforms(data):
    if data == 'train':
        return Compose([
            ToFloat(max_value=...),
            RandomResizedCrop(img_size[0], img_size[1], scale=(...), ratio=(...), p=...),
            HorizontalFlip(p=...),
            VerticalFlip(p=...),
            ShiftScaleRotate(rotate_limit=(...), p=...),
            RandomBrightnessContrast(brightness_limit=(...), contrast_limit=(...), p=...),
            JpegCompression(quality_lower=..., quality_upper=..., p=...),
            Affine(p=...),
            ToTensorV2(),
        ])
    elif data == 'valid':
        return Compose([
            ToFloat(max_value=...),
            Resize(img_size[0], img_size[1]),
            ToTensorV2(),
        ])
</code></pre>
<p>Pseudo label: We simply used the Vindr data as an external dataset, using almost the same data processing approach.</p>
<h2>Models</h2>
<p>Because of limited training time, we chose the smallest version of each major model so we could try as many as possible, and ended up ensembling three models with between 15M and 21M parameters. The training parameters were broadly similar across models, with some tuning; according to our experiments, the results are very sensitive to the learning rate.</p>
<ul>
<li>tf_efficientnetv2_s,lr: 1e-4</li>
<li>convnext_nano, lr: 7e-6</li>
<li>eca_nfnet_l0,lr: 3e-5</li>
</ul>
<p>After the backbone, we use <code>GeM Pooling</code> with p_trainable=True, and add <code>dropout</code> before the FC layer.</p>
<h2>Training</h2>
<ul>
<li>3 Stage Training<ul>
<li>1. Training with competition data</li>
<li>2. Training with pseudo data</li>
<li>3. Finetune with competition data</li></ul></li>
<li>Params (see the sketch after this list):<ul>
<li>AdamW, weight_decay = 0.01</li>
<li>Loss: BCEWithLogitsLoss</li>
<li>Scheduler: OneCycleLR</li></ul></li>
</ul>
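<p>As referenced in the list above, a minimal sketch of this optimizer/scheduler/loss setup (learning rate shown for tf_efficientnetv2_s; <code>model</code>, <code>train_loader</code> and <code>num_epochs</code> are assumed names):</p>
<pre><code># Minimal sketch: AdamW + OneCycleLR + BCEWithLogitsLoss, as listed above.
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-4, epochs=num_epochs, steps_per_epoch=len(train_loader))
criterion = torch.nn.BCEWithLogitsLoss()
</code></pre>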
<h2>Inference</h2>
<ul>
<li>Horizontal flip tta</li>
<li>Binarization post-processing</li>
</ul>
<h2>Doesn't work or doesn't do</h2>
<ol>
<li>RCNN/Yolo to crop</li>
<li>Larger sizes such as 2048</li>
<li>Focal Loss</li>
<li>More external Data</li>
<li>site1 and site2 Threshold</li>
</ol>
<h2>Code</h2>
<p><a href="https://github.com/chqwer2/RSNA_Solutions/tree/main/RSNA_2023_Screening%20Mammography%20Breast%20Cancer%20Detection" target="_blank">RSNA_Solutions · GitHub</a></p>
<p>We haven't experimented with it in the final version, but JpegCompression brought a CV boost in earlier experiments.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
Multi-GPU Inference with TensorRT in Kaggle Notebooks
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition organizers, the Kaggle team, and all the participants. I have learned a lot from this competition. </p>
<p>In this topic, I will share how to run multiple GPUs simultaneously using Python's "threading" and "queue" modules.</p>
<p>In Kaggle Notebook, we can use two Tesla T4 GPUs. Moreover, since T4 GPUs are equipped with units of Tensor Cores, we can accelerate inference using TensorRT (discussed <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/375881" target="_blank">here</a>). However, TensorRT does not seem to be able to perform multi-GPU processing like PyTorch's DataParallel (<a href="https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#faq" target="_blank">reference</a>). I wanted to efficiently use TensorRT models on two GPUs simultaneously.</p>
<p>Here is the notebook that we used for submission.</p>
<p><a href="https://www.kaggle.com/code/tmyok1984/rsna-inference-lightgbm" target="_blank">https://www.kaggle.com/code/tmyok1984/rsna-inference-lightgbm</a></p>
<p>Below is a summary of the process</p>
<p>Implement pre-processing, GPU0 processing, and GPU1 processing, respectively.</p>
<pre><code>def preprocess(args, params):
    # placeholder: decode and preprocess one image
    return result

def GPU0_process(args, params):
    # placeholder: run the model(s) assigned to GPU 0
    return result

def GPU1_process(args, params):
    # placeholder: run the model(s) assigned to GPU 1
    return result
</code></pre>
<p>Implement queues and threads.</p>
<pre><code>import threading
import queue
def wrap_func_for_mt(func, params):
def wrap_func(queue_input, queue_output):
while True:
input = queue_input.get()
if input is None:
queue_output.put(None)
continue
result = func(input, params)
queue_output.put(result)
return wrap_func
def prepare_multithreading(params):
# funcs to proc in pipeline
func_params = [
(preprocess, (params)),
(GPU0_process, (params)),
(GPU1_process, (params)),
]
wrap_funcs = list(map(lambda func_param: wrap_func_for_mt(func_param[0], func_param[1]), func_params))
# prepare queues
queues_input = [queue.Queue() for _ in range(len(wrap_funcs))]
queues_output = [queue.Queue() for _ in range(len(wrap_funcs))]
# create Threads
threads = []
for wrap_func, queue_input, queue_output in zip(wrap_funcs, queues_input, queues_output):
t = threading.Thread(target=wrap_func, args=(queue_input, queue_output), daemon=True)
threads.append(t)
for t in threads:
t.start()
return queues_input, queues_output, len(wrap_funcs)
def loop_proc(queues_input, queues_output, inputs):
for queue_input, input in zip(queues_input, inputs):
queue_input.put(input)
outputs = []
for queue_output in queues_output:
output = queue_output.get()
outputs.append(output)
return outputs
</code></pre>
<p>Execute the process.</p>
<pre><code>queues_input, queues_output, len_wrap_funcs = prepare_multithreading(params)
prediction_id_list, cancer_list = [], []  # result accumulators (initialization added for completeness)
idx = 0                                   # index into prediction_ids (initialization added)
while len(prediction_id_list) < len(prediction_ids):
if idx >= len(prediction_ids):
args = None
else:
args = {
"prediction_id": prediction_ids[idx],
}
if idx == 0:
init_inputs = [args] + [None]*(len_wrap_funcs - 1) # [[], None, None, ...]
inputs = init_inputs
else:
inputs = [args] + outputs[:-1]
outputs = loop_proc(queues_input, queues_output, inputs)
result = outputs[-1]
if result is not None:
prediction_id_list.append(result["prediction_id"])
cancer_list.append(result["cancer"])
idx = idx + 1
</code></pre>
<p>Since the three stages run simultaneously in a pipeline, ideally the shorter stages are hidden behind the longest one, so the total time is bounded by the slowest stage. By using this technique, it is expected that inference on the T4 GPUs can be executed more efficiently. If you have any other good ideas, please let me know.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
[FR Team] 20th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Kagglers,</p>
<p>Thanks for this nice competition!</p>
<p>First, I would like to thank <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> and <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> who helped me to make my dream true: Make a French team for one Kaggle competition. My expectation was to finish top #2 like in the FIFA world cup 🙂 but I’m quite happy with the current result. Some insights of our solution:</p>
<h2>Theo’s Part</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F858b18feec74c7a59e178265eebcd519%2Ftheo.png?generation=1677617629716416&alt=media" alt=""></p>
<ul>
<li>Pretrain for 5 epochs using VinDr data and the BIRADS target</li>
<li>Finetune for 5 epochs on the competition data + external data<ul>
<li>bs=8 (6 for v2-s), lr=4e-4 (3e-4 for v2-s), Ranger, Linear Schedule with no warm up</li></ul></li>
<li>External data varies among models, some only use CBIS. I also trained models with pseudo-labels on VinDr that are used in the final ensemble.</li>
<li>BCE loss with <strong>no class weight</strong>. Some models use BIRADS as an auxiliary target</li>
<li>No over/under-sampling</li>
</ul>
<p>This is my best submitted ensemble. Models were changed a bit in the final ensemble to maximize CV.</p>
<h2>Optimo’s Part</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F845fc6e58654ca5ef2987f5b4a9d79eb%2Foptimo.png?generation=1677617643849717&alt=media" alt=""></p>
<p>Likewise my timm backbones were pretrained on VinDr by predicting BIRADS. I used CBIS, Vindr PL as external data (boosted my CV by ~0.02).</p>
<p>As Theo had better results than mine on ‘standard models’ I tried to provide as much diversity as possible with ideas coming from research papers. I ended up using two different architectures :</p>
<ul>
<li>a modified version of GMIC: inspired by this code I implemented a version which allows any timm network as global network and/or local network. I also changed the crop normalization in order to fully use the local brightness. As I was at first only changing the local network I used the public pretrained weights on NYU datasets. Which ended up being forbidden 2 days before the end of the competition. So I trained from scratch a model with 2xeffnet-b0 networks as local and global networks with input size (1472, 960), crop size 384 and 4 extracted patches. CV : <strong>43.69</strong></li>
<li>a modified version of MVCCL (BiView): inspired by <a href="https://www.kaggle.com/heng" target="_blank">@heng</a> code <a href="https://www.kaggle.com/code/hengck23/mvccl-model-for-admani-dataset" target="_blank">https://www.kaggle.com/code/hengck23/mvccl-model-for-admani-dataset</a>. The main changes were to allow any timm model as the main network and to use the representations from both the main and auxiliary views in the final summation. I used two models in the final submission with input size (1536, 768): one b0 with 16 attention heads (<strong>CV: 43.45</strong>) and one b2 with 8 attention heads (<strong>CV 44.99</strong>)</li>
<li>ensembling those 3 models at laterality level gave <strong>CV 46.94</strong></li>
</ul>
<h2>MPWARE’s Part</h2>
<ul>
<li>DALI decoder + YoloX ROI followed by crop and resize with aspect ratio = 1 to generate images with height=1024. No windowing.</li>
<li>External data included: CBIS-DDSM (mass + calcification full images) + PASM</li>
<li>Training pipeline with limited class oversampling and weighted CrossEntropyLoss on positive labels.</li>
<li>Augmentations: Random crop, H/V flips, minor RotateShiftResize, Noise/Blur, Random BrightnessContrast and Coarse Dropout</li>
<li>Backbones: NFNet + NextViT</li>
<li>No GeM but regular adaptive average pooling.</li>
<li>Max aggregation for laterality.</li>
<li>Spent some time to get TensorRT 1.3 working to compile more backbones.</li>
<li>CV comparable to Optimo’s ensemble at the end (but big CV/LB gap, got LB=0.61)</li>
</ul>
<p>What did not work for me:</p>
<ul>
<li>Wavelets additional layer</li>
<li>Age as additional input feature</li>
<li>Mixup augmentation</li>
<li>VinDr as external data (no boost compared to CBIS)</li>
<li>High Resolution with stride=1, it worked at the beginning with EffNet but becomes useless when moving to some different backbones</li>
<li>Level 2 model based on embeddings.</li>
</ul>
<p>Takeaway: learned a lot again, great teammates, nice competition, bad metric; hoping to do another one with more FR teammates in the coming months.</p>
<h2>Final submissions & Randomness</h2>
<p>Our two selected submissions were determined on CV :</p>
<ul>
<li><p>A blend of 7 models, with CV 0.555</p>
<ul>
<li>Public 0.61, private 0.48</li>
<li>For this submission we used fullfit models except for MPWARE’s models which were 4 folds. We had a similar submission with a lower public LB that used only fullfit models and that would have ranked #10.</li></ul></li>
<li><p>A vote of our 3 pipelines with CV 0.547</p>
<ul>
<li>Public 0.6, private 0.48</li>
<li>This vote was updated on the last day, we had a very similar vote with Public 0.61 private 0.52 before that</li></ul></li>
</ul>
<p>We got unlucky with submissions selection, our CV 0.5+ blends <strong>all scored 0.48 private or above</strong> (best 0.52, avg 0.495). Unfortunately we chose two subs on the lower end of the gaussian. Even simply selecting our best public LB would’ve put us in the top 8. The metric was too random, next time we’re staying away from F1-score competitions :)</p>
<p>Thanks! Not tried to plot GradCAMs on my side.</p>
|
RSNA Screening Mammography Breast Cancer Detection
|
⭐️ Remek & Andrij - #9 solution ⭐️
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you <a href="https://www.kaggle.com/michau96" target="_blank">@michau96</a> - and as a bonus: our team did not use any pixel from external dataset 😁</p>
<p><a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> thank you very much. I am really, really happy. Still reading solution and working on some improvements. Still learning and have fun during competition. See you soon.</p>
<p>Hi, thank you. Code will be available soon - I am working on some improvements (testing ideas provided by rest of participants). </p>
<p>0.5 score is from unfrozen model layers (during training). We tried to increase batch size and decrease resources needed for training. This is why we decided to freeze some layers (we used this method only for effnet and resnet - first month of competition). Strategy was simple:</p>
<ul>
<li>test unfrozen model </li>
<li>test frozen backbone - head unfrozen </li>
<li>test 1/3 unfrozen backbone (only low level layers were trainable) - head unfrozen </li>
<li>increase to 2/3 unfrozen …</li>
</ul>
<p>The best in our case was the fully unfrozen setup. 75% unfrozen layers was slightly worse but gave us a larger batch size. For the first month of the competition I used the frozen setup for all experiments to speed things up.</p>
<p>For more information about Effnet training procedure please see this tutorial: <a href="https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/" target="_blank">https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/</a></p>
<p>We had the same problem. I read a paper about augmentation techniques in breast cancer detection/classification and tested most of them. The setup I described worked best for us … but I am waiting for the participants' solution descriptions and will apply new things to see if we can progress.</p>
<p>Now I can see that we had "partial" CV/LB consistency <=> no consistency 😂</p>
<p>I am open to:</p>
<ul>
<li>learn</li>
<li>share</li>
</ul>
<p>AI is not only job for me but passion 😍 I really love do it … although it can devastate and frustrate me 😂 It's a complicated relationship.</p>
<p>First of all congratulations to all participants. Congratulations to dream teams from gold zone. I’m impressed by your consistency in winning Kaggle competition. Waiting to learn from your solution.</p>
<p>Thank you my team mate Andrij <a href="https://www.kaggle.com/aikhmelnytskyy" target="_blank">@aikhmelnytskyy</a> We had great collaboration 👍👍👍 - I feel that from first minute we played in one team having one goal - find better solution.</p>
<p>Gold in a competition was a dream for me. Last year we (with <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>) were #1 in silver (#12 solution in Image Matching Challenge 2021). This year I decided to work hard to experience the gold zone and finally become a Competitions Master. Even though there is no official LB finalized yet …. we are #9 and in gold! :) and I am …… extremely happy! 😁😁😍😜</p>
<p>This competition was really great for testing many different computer vision techniques. Three months passed very quickly. The first phase of the competition was difficult. We performed a large number of different tests that did not give us results higher than 0.3 (LB score). It was very frustrating. We were unable to find any correlation between local CV and LB. Then we set up a good training pipeline – our main success points were:</p>
<ul>
<li>Sampling strategy and positive class balancing</li>
<li>Augmentation</li>
<li>Model selection</li>
<li>Postprocessing</li>
</ul>
<p>The last two weeks of the competition were hard for me – I caught covid and had to pause (during recovery I coded using an iPad). But we cooperated all the time and finally managed to jump to the TOP10 and 0.63 (public LB). We finished at #13 on the public LB, just outside the gold zone, and at #9 on the private LB!</p>
<p><strong>Models score summary</strong><br>
• best public LB: 0.63 (ensemble) / 0.57 (single model) / local CV (0.48)<br>
• private lb: 0.50 (max: 0.50)</p>
<p><strong>Competition achievements</strong><br>
• new experience in Kaggle competition - a lot of good discussion <br>
• 1x gold medal - dataset (I am very happy - my first one)<br>
• 2x gold medals - notebook <br>
• 1x gold medal - competition-> extremely happy! 😁😁😍😜</p>
<p><strong>Our final selection</strong><br>
We selected two different solutions based on the same model setup.<br>
• 3 models - ensemble average model prediction probabilities -> LB: 0.63 (PL: 0.47)<br>
• 3 models - voting strategy and then score averaging (on vote scores) -> LB: 0.62 (PL: 0.5)<br>
After many tests we had a strong feeling that our second choice (even though its LB score was lower than many of our other solutions) was more stable (less sensitive to the threshold) than the other solutions. So we closed our eyes and trusted our tests rather than the LB score.</p>
<p><strong>Solution description in 4 steps</strong><br>
Our solution is very simple. We tried different ways to predict breast cancer but finally it appeared that simples solution wors for us the best (both local CV and LB).</p>
<ol>
<li>Process dicom files to png (windowing).</li>
<li>Inference – 3 convnext (v1) models with TTA</li>
<li>Ensemble – probabilities average or voting</li>
<li>Thresholding – th ~0.5 -> final prediction result 0|1</li>
</ol>
<p><strong>Dataset</strong><br>
• Image resolution: 1536x768<br>
• ROI cropped - cv2.connectedComponentsWithStats method (we started with yolov5 for prototyping phase but then we used cv2 - since licence regulations)<br>
• 4-GroupFold: on patient_id<br>
• image pixel scaling: div by 255.<br>
• dicom2png – proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> (<a href="https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example" target="_blank">https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example</a>)<br>
• we did not use any external dataset – all models were trained on competition data<br>
• we processed the data in two steps (to avoid problems with file storage capacity – during the tests we exported images at different resolutions – max 2400px):</p>
<ol>
<li>process j2k files format (dicom2png and crop to 1536xW - we do not resize it in this step to final resolution)->inference->delete files </li>
<li>process nonj2k files->inference->delete files</li>
</ol>
<p>A fast and reliable crop ROI function we used during the competition (credits to <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a>: <a href="https://www.kaggle.com/code/vslaykovsky/rsna-cut-off-empty-space-from-images" target="_blank">"RSNA: Cut Off Empty Space from Images"</a>)</p>
<pre><code>def crop_roi(img, photometric_interpretation):
# it can be improved
Y = img
xmin = Y.min()
xmax = Y.max()
norm = np.empty_like(Y, dtype=np.uint8)
dicomsdl.util.convert_to_uint8(Y, norm, xmin, xmax)
if photometric_interpretation == 'MONOCHROME1':
norm = 255 - norm
X = norm
X = X[5:-5, 5:-5]
output= cv2.connectedComponentsWithStats((X > 10).astype(np.uint8)[:, :], 8, cv2.CV_32S) #
stats = output[2]
idx = stats[1:, 4].argmax() + 1
x1, y1, w, h = stats[idx][:4]
x2 = x1 + w
y2 = y1 + h
X_out = Y[y1: y2, x1: x2]
return X_out
</code></pre>
<p><strong>Augmentation</strong> <br>
• Albumentations (we tried Kornia and it works great, but we needed more time to rewrite the functions we used in the competition).</p>
<pre><code>transformation = [
A.OneOf([
A.RandomBrightnessContrast(always_apply=False, p=.5, brightness_limit=(-1, 1.0), contrast_limit=(-1, 1.0), brightness_by_max=True),
A.RandomGamma(always_apply=False, p=.5, gamma_limit=(60, 120), eps=None),
], p = 0.5),
A.Rotate(limit=5, p=0.5),
A.Affine(rotate = 5, translate_percent=0.1, scale=[0.9,1.5], shear=0, p=0.5),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.Resize(im_size[0], im_size[1]),
A.ShiftScaleRotate(always_apply=False, p=.2,
shift_limit_x=(-1.0, 1.0),
shift_limit_y=(-1.0, 1.0),
scale_limit=(-0.1, 0.1),
rotate_limit=(-5, 5),
interpolation=0,
border_mode=3,
value=(0, 0, 0),
mask_value=None,
rotate_method='largest_box'),
A.OneOf([
GridDropoutv2(always_apply=False, p=.2, ratio = .25, unit_size_min = 100, unit_size_max = 400, holes_number_x=100, holes_number_y=100),
CoarseDropoutv2(always_apply=False, p=.2, max_holes=12, max_height=250, max_width=100, min_holes=3, min_height=50, min_width=50, mask_fill_value=None)], p=0.25),
ToTensorV2()
]
</code></pre>
<p>GridDropoutv2 and CoarseDropoutv2 are our modifications of the Albumentations functions. They generate cutouts filled with random greyscale values.</p>
<p><img src="https://i.ibb.co/6WtqnNv/batch-images-ep7.jpg" alt=""></p>
<p><strong>Training</strong><br>
• Framework: Pytorch scripts (multi gpu support: DDP)<br>
• Optimizer: RAdam <br>
• Lookahead: on<br>
• Scheduler: OneCycle - no warm up<br>
• Weight decay: 1e-2<br>
• The dataset was re-seeded with a different seed each epoch.<br>
• Batch size: 16<br>
• Mixed precision (AMP): on<br>
• Gradient clipping: on<br>
• Loss function: BCEWithLogitsLoss with pos_weight = 1.0 - 1.25<br>
• Epochs: 7 (best models are from 4-5 epochs) - we decided to take early stopping (best probf1score) models instead of the last one.<br>
• Sampler (thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>): custom sequential sampler (sequence -> positive sample / negative sample = 8) – small changes compare to the original one.</p>
<pre><code>class Balancer(torch.utils.data.Sampler):
def __init__(self, pos_cases, neg_cases, ratio = 3):
self.r = ratio - 1
self.pos_index = pos_cases
self.neg_index = neg_cases
self.length = self.r * int(np.floor(len(self.neg_index)/self.r))
self.ds_len = self.length + (self.length // self.r)
def __iter__(self):
pos_index = self.pos_index
neg_index = self.neg_index
np.random.shuffle(pos_index)
np.random.shuffle(neg_index)
neg_index = neg_index[:self.length].reshape(-1,self.r)
#pos_index = np.random.choice(pos_index, self.length//self.r).reshape(-1,1)
pos_index_len = len(pos_index)
pos_index = np.tile(pos_index, ((len(neg_index) // pos_index_len) + 1, 1))
pos_index = np.apply_along_axis(np.random.permutation, 1, pos_index)
pos_index = pos_index.reshape(-1,1)[:len(neg_index)]
index = np.concatenate([pos_index,neg_index], -1).reshape(-1)
return iter(index)
def __len__(self):
return self.ds_len
</code></pre>
<p><strong>Models</strong><br>
ConvNext_v1 small (timm - checkpoint: convnext_small.fb_in22k_ft_in1k_384). We tried different architectures but this one worked best in our solution.<br>
• avg pooling or <br>
• GEM pooling (trainable parameters set to True)<br>
• drop_path_rate=0.2 and drop_rate=0.05 (we tested different settings but these works the best for our setup)<br>
• input size: (1536, 768), 3 channels (b&w images)</p>
<p><strong>Local validation</strong><br>
• Metrics: probf1, ROC_AUC, prec/recall and MCC (Matthews’s correlation coefficient)<br>
• For local validation we used tool provided by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/378521" target="_blank">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/378521</a><br>
• For each training epoch we additionally plot: prediction dynamics and the confusion matrix.<br>
• We looked into the predictions and performed a visual analysis of missed or incorrectly classified samples.</p>
<p><strong>Inference</strong><br>
• Three models – ensemble (voting or model probabilities averaging):</p>
<ol>
<li>avg pooling - fold 1 (trained on 1536 x 768) -> probf1: 0.52 (local CV) / 0.57 (LB)</li>
<li>GEM pooling - fold 2 (trained on 1536 x 768) -> probf1: 0.44 (local CV) (we do not test it on LB)</li>
<li>GEM pooling - fold 3 (trained on 1536 x 768) -> probf1: 0.46 (local CV) (-)<br>
• TTA: h_flip only (TTA weighted - 0.6 original image + 0.4 flipped image) -> it increased our score by 0.03 (see the sketch below)</li>
</ol>
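<p>A minimal sketch of the weighted horizontal-flip TTA (0.6 original + 0.4 flipped), assuming <code>model</code> returns sigmoid probabilities for a batch <code>x</code> of shape (B, C, H, W):</p>
<pre><code># Minimal sketch of weighted h-flip TTA.
import torch

with torch.no_grad():
    probs = 0.6 * model(x) + 0.4 * model(torch.flip(x, dims=[-1]))  # flip along the width axis
</code></pre>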
<p><strong>Ensembling</strong><br>
• Model probabilities ensembling using better then median function</p>
<pre><code>def better_than_median(inputs, axis):
    """Compute the mean of the predictions if there are no outliers,
    or the median if there are outliers.
    Parameter: inputs = ndarray of shape (n_samples, n_folds)"""
    spread = inputs.max(axis=axis) - inputs.min(axis=axis)
    spread_lim = 0.6
    # small spread = folds agree (inliers) -> mean; large spread = outliers -> median
    print(f"Inliers: {(spread < spread_lim).sum():7} -> compute mean")
    print(f"Outliers: {(spread >= spread_lim).sum():7} -> compute median")
    print(f"Total: {len(inputs):7}")
    return np.where(spread < spread_lim,
                    np.mean(inputs, axis=axis),
                    np.median(inputs, axis=axis))
</code></pre>
<p>• Voting and then averaging</p>
<pre><code>def voting_confidence(x):
if x<=0.25:
return 0.0
elif x<=0.45:
return 0.35
elif x<=0.55:
return x
elif x<=0.75:
return 0.65
else:
return 1.0
</code></pre>
<p><strong>CPU/GPU</strong><br>
• Training: 1-2xA6000 or A100 GPU-80GB on Cloud </p>
<p><strong>Experiment tracking</strong><br>
• Weights & Biases </p>
<p><strong>Things we tested during competition and did not improve our score (probably more tests are needed)</strong><br>
• Full dataset training, site_1/site_2 separate models, up-sampling (by sample frac), down-sampling.<br>
• Stochastic Weight Averaging and Exponential Moving Average<br>
• Optimizers – AdamW, SGD, NAdam, Lion, Lamb<br>
• Training with layer freeze (different levels)<br>
• Loss function: Focal loss, LDAM, LMFLoss, Label smoothing<br>
• Models: Effnet, NextVit (similar score but harder to train and slower), DenseNet, Inception_v3, maxvit<br>
• Effnet + pixel-wise self attention, Effnet + cross attention<br>
• Convnextv1 + max and avg concatenated pooling<br>
• Synthetic dataset: synthetic breast cancer generator (experienced radiologist is required to evaluate solution).<br>
• PatchGD - <a href="https://arxiv.org/pdf/2301.13817.pdf" target="_blank">https://arxiv.org/pdf/2301.13817.pdf</a><br>
• Training attitudes - Pair training – MLO / CC – patient_id and side (L/R) <br>
• Metamodel - SVM and XGBoost on image embeddings.<br>
• Pseudolabeling - using new samples at submission time.<br>
• Selecting the best samples for cases where a patient has more than 2 views </p>
<pre><code>sub_s1 = sub_s1[['prediction_id', 'site_id', 'patient_id', 'laterality', 'cancer']].\
groupby(['patient_id','laterality']).\
apply(lambda x: x.nlargest(3,'cancer')).reset_index(drop=True)
</code></pre>
<p><strong>Special thanks to</strong><br>
• <a href="https://www.kaggle.com/Andrij" target="_blank">@Andrij</a> - for great cooperation during the competition. For very good and substantive talks aimed at solving the problem and improving results.<br>
• <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - Thanks for the great activity on the forum and sharing knowledge. I implemented many of your ideas, many of them improved our score. Thank you!</p>
<p>Source code will be available soon (within a week) on <a href="https://github.com/rkinas" target="_blank">Remek's GitHub</a>.</p>
<p>Thank you very much. The source of inspiration was from Trivial Augument: <a href="https://github.com/automl/trivialaugment" target="_blank">https://github.com/automl/trivialaugment</a> </p>
<p><a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> I would like to thank you very much for your inspiration. This is a true story: I was really devastated by this competition :) The more I wanted to give up, the harder I worked thanks to this message:</p>
<p><img src="https://i.ibb.co/Bn4m9L1/twitt.jpg" alt=""></p>
<p>Thank you very much! 👍</p>
<p>Thank you!</p>
<p>0.5 (priv LB)</p>
<ul>
<li>3 convnext (v1) models with TTA (model *0.6 + model_tta * 0.4) - model1=fold_1, model2=fold_2, model3=fold_3 -> as a result we get 3 model predictions (probabilities)</li>
<li>each model's prediction was binned by the voting_confidence function (see solution description) -> we get 3 model predictions, but as confidence values 0, 0.35, (0.45-0.55), 0.75 and 1.0</li>
<li>we took (model_1 + model_2 + model_3) / 3</li>
<li>as a final step: aggregate (avg) and apply th = 0.489</li>
</ul>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
12th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. We didn't use any handcrafted features in final solution because they didn't work.</p>
<p>To start with, I want to congratulate my teammate <a href="https://www.kaggle.com/authman" target="_blank">@authman</a>. Mad man finally did it. He just became competitions master after countless high silver medals.</p>
<p>I wanted to spend more time in this competition but unfortunately I was stuck at Optiver Realized Volatility Prediction and <a href="https://www.kaggle.com/authman" target="_blank">@authman</a> was working on G2Net Gravitational Wave Detection. We started this competition when 2.5 weeks left so we couldn't try every idea we had but at least some of them worked and we landed in 12th place.</p>
<p>After starting the competition, I quickly read through discussions and notebooks, but the quality of shared content was really poor. I only learned dicom preprocessing from <a href="https://www.kaggle.com/davidbroberts" target="_blank">@davidbroberts</a>' notebooks.</p>
<h2>Task 1 Data Preparation</h2>
<p>We thought 144^3 spatial dimensions is good enough for both segmentation and classification so we decided to use MRIs and masks in that shape. We removed empty slices along X-Y, X-Z and Y-Z axes then resized them from the longest axis. We converted ground-truth segmentation masks to non-mutually exclusive one-hot encoded masks with this code.</p>
<pre><code>def convert_labels(mask):
"""
Convert 3D spatial segmentation mask to 4D one-hot encoded segmentation mask
Parameters
----------
mask [np.ndarray of shape (depth, height, width)]: Array of 3D segmentation mask
Returns
-------
mask [np.ndarray of shape (channel, depth, height, width)]: Array of 4D one-hot encoded segmentation mask
"""
one_hot_encoded_mask = np.stack([
        np.isin(mask, [1, 2, 3]), # Whole tumor (fix: np.logical_or accepts only two arrays)
np.logical_or(mask == 2, mask == 3), # Tumor core
(mask == 3), # Enhancing tumor
]).astype(np.uint8)
return one_hot_encoded_mask
</code></pre>
<p>Finally, we saved MRIs and masks as npy files for training faster. Preprocessing code for nii files can be found <a href="https://github.com/gunesevitan/rsna-miccai-brain-tumor-radiogenomic-classification/blob/master/src/nii_utils.py" target="_blank">here</a>.</p>
<h2>Task 1 Validation</h2>
<p>A single shuffled train/test split with an 80/20 ratio was used for validation. Case directories were split, so even when every modality was included, it worked like a group split without leakage.</p>
<h2>Task 1 Preprocessing</h2>
<p>We used random rotation on X, Y and Z axes between -90 and 90 radians with 25% chance because task 1 MRIs were resampled and registered. We had to break the perfect alignment because task 2 wasn't resampled and registered. We also standardized every MRI like this <code>mri = (mri - mri.mean()) / mri.std()</code> which probably wasn't a correct way to do it.</p>
<h2>Task 1 Models</h2>
<p>This was my first segmentation experience, so I used a high-level library since we had no time to write our models from scratch. We used the <a href="https://docs.monai.io/en/latest/networks.html#segresnet" target="_blank">SegResNet</a> model from MONAI for segmentation. We trained 5 single-split models: for FLAIR, T1w, T1wCE, T2w, and all modalities combined. All of them were trained with dice loss and converged at 0.2-0.3 validation loss. We used a dynamic threshold for converting the sigmoided output to labels; the cutoff point is found dynamically as <code>(mean prediction of positives + mean prediction of negatives) / 2</code> (a minimal sketch follows).</p>
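<p>A minimal sketch of that dynamic cutoff, assuming <code>probs</code> are sigmoided predictions and <code>labels</code> the corresponding ground-truth 0/1 values on the validation split (names are assumptions):</p>
<pre><code># Minimal sketch: threshold = midpoint of the mean prediction on positives and on negatives.
import numpy as np

threshold = 0.5 * (probs[labels == 1].mean() + probs[labels == 0].mean())
binary_preds = (probs >= threshold).astype(np.uint8)
</code></pre>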
<h2>Task 2 Data Preparation</h2>
<p>We used 144^3 spatial dimensions in task 2 MRIs as well. We removed empty slices along X-Y, X-Z and Y-Z axes then resized them from the longest axis. We didn't use any voi lut or downcasted MRIs to 8 bit because it was lossy and redundant. We saved MRIs to npy files as 16 bit numpy arrays with default values for training faster. Preprocessing code for dicom files can be found <a href="https://github.com/gunesevitan/rsna-miccai-brain-tumor-radiogenomic-classification/blob/master/src/dicom_utils.py" target="_blank">here</a>.</p>
<h2>Task 2 Segmentation and Feature Extraction</h2>
<p>We validated our segmentation models by visualizing predictions on task 2 MRIs. They were "good enough", but neither the extracted features nor using the predicted masks as additional channels helped our models score better. It was really sad for us to abandon the segmentation idea because we dedicated our first week to it.</p>
<h2>Task 2 Validation</h2>
<p>We used 5 shuffled stratified folds for cross-validation. Folds are stratified on MGMT_value.</p>
<h2>Task 2 Preprocessing</h2>
<p>We used random rotation on X, Y and Z axes between -15 and 15 radians and random 3d elastic deformations with 100% chance. We standardized every MRI like this <code>mri = (mri - mri.mean()) / mri.std()</code> here as well.</p>
<h2>Task 2 Models</h2>
<p>We used 3D DenseNet121 and DenseNet169 models with a classification head, trained with BCE-with-logits loss. We noticed that our models could consistently reach 0.66-0.67 validation loss with a cosine annealing scheduler if it was a lucky run. Loss was correlated with the ROC AUC score only after reaching that point, so we ran our models countless times until we got lucky and the validation loss reached 0.66-0.67. When that happened, the validation ROC AUC score was always > 0.6. That's when I thought we had a solid chance to grab a gold medal. We trained DenseNet121 and DenseNet169 models for every modality separately, so we had 4 (modalities) * 5 (folds) * 2 (DenseNet121 + DenseNet169) models at the end. Their scores can be seen below.</p>
<pre><code>------------------------------
Evaluating densenet121
------------------------------
FLAIR
-----
Fold 1 - ROC AUC Score: 0.625220
Fold 2 - ROC AUC Score: 0.579333
Fold 3 - ROC AUC Score: 0.678390
Fold 4 - ROC AUC Score: 0.612221
Fold 5 - ROC AUC Score: 0.605067
------------------------------
OOF ROC AUC Score: 0.624692
------------------------------
T1w
---
Fold 1 - ROC AUC Score: 0.595601
Fold 2 - ROC AUC Score: 0.656616
Fold 3 - ROC AUC Score: 0.563338
Fold 4 - ROC AUC Score: 0.521013
Fold 5 - ROC AUC Score: 0.591654
------------------------------
OOF ROC AUC Score: 0.576726
------------------------------
T1wCE
-----
Fold 1 - ROC AUC Score: 0.626393
Fold 2 - ROC AUC Score: 0.595433
Fold 3 - ROC AUC Score: 0.639046
Fold 4 - ROC AUC Score: 0.615201
Fold 5 - ROC AUC Score: 0.518629
------------------------------
OOF ROC AUC Score: 0.598063
------------------------------
T2w
---
Fold 1 - ROC AUC Score: 0.664516
Fold 2 - ROC AUC Score: 0.566745
Fold 3 - ROC AUC Score: 0.621162
Fold 4 - ROC AUC Score: 0.664978
Fold 5 - ROC AUC Score: 0.666170
------------------------------
OOF ROC AUC Score: 0.63008
------------------------------
</code></pre>
<pre><code>------------------------------
Evaluating densenet169
------------------------------
FLAIR
-----
Fold 1 - ROC AUC Score: 0.621114
Fold 2 - ROC AUC Score: 0.584309
Fold 3 - ROC AUC Score: 0.619672
Fold 4 - ROC AUC Score: 0.619672
Fold 5 - ROC AUC Score: 0.625633
------------------------------
OOF ROC AUC Score: 0.608471
------------------------------
T1w
---
Fold 1 - ROC AUC Score: 0.595894
Fold 2 - ROC AUC Score: 0.556499
Fold 3 - ROC AUC Score: 0.558569
Fold 4 - ROC AUC Score: 0.581818
Fold 5 - ROC AUC Score: 0.620566
------------------------------
OOF ROC AUC Score: 0.579769
------------------------------
T1wCE
-----
Fold 1 - ROC AUC Score: 0.563343
Fold 2 - ROC AUC Score: 0.569379
Fold 3 - ROC AUC Score: 0.613711
Fold 4 - ROC AUC Score: 0.616990
Fold 5 - ROC AUC Score: 0.559762
------------------------------
OOF ROC AUC Score: 0.57746
------------------------------
T2w
---
Fold 1 - ROC AUC Score: 0.673314
Fold 2 - ROC AUC Score: 0.645492
Fold 3 - ROC AUC Score: 0.616692
Fold 4 - ROC AUC Score: 0.561550
Fold 5 - ROC AUC Score: 0.585097
------------------------------
OOF ROC AUC Score: 0.603427
------------------------------
</code></pre>
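<p>For reference, a minimal sketch of the per-modality 3D classifier training setup described above (MONAI's 3D DenseNet, the learning rate and the loader construction are assumptions; the loss and scheduler follow the description):</p>
<pre><code>import torch
from monai.networks.nets import DenseNet121

def train_one_modality(train_loader, epochs=50, lr=1e-4, device='cuda'):
    # One binary classifier per modality: BCE-with-logits loss, cosine annealing LR.
    model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=1).to(device)
    criterion = torch.nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

    for epoch in range(epochs):
        model.train()
        for volumes, labels in train_loader:        # volumes: (B, 1, 144, 144, 144)
            volumes, labels = volumes.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(volumes), labels.float().view(-1, 1))
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
</code></pre>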
<h2>Task 2 Post-processing, Blending and Submission</h2>
<p>We didn't apply any post-processing to the logits or sigmoided outputs. We only used average blending and submitted those predictions. We tried assigning different weights to different models, but it wasn't better than average blending. I think that was because the predictions' correlations were really low, so all of them contributed to the final blend score regardless of their OOF scores.</p>
<p><img src="https://i.ibb.co/GpJVrM3/Screenshot-from-2021-10-15-14-03-36.png" alt="corrs"></p>
<p>Our final average blending score was</p>
<pre><code>Blend
-----
Fold 1 - ROC AUC Score: 0.707038
Fold 2 - ROC AUC Score: 0.682377
Fold 3 - ROC AUC Score: 0.712668
Fold 4 - ROC AUC Score: 0.698957
Fold 5 - ROC AUC Score: 0.709687
------------------------------
OOF ROC AUC Score: 0.692443
------------------------------
</code></pre>
<h2>What didn't work</h2>
<ul>
<li>2D segmentation by randomly selecting 3 slices from X-Y, X-Z or Y-Z axes</li>
<li>Using U-Net or U-Net variations with 144^3 spatial dimensions, it wasn't possible to fit enough data into memory</li>
<li>SegResNet with variational auto-encoder</li>
<li>Features extracted from segmentations</li>
<li>ResNet or EfficientNet classification models</li>
<li>Weighted blending</li>
<li>Only using folds with > 0.6 ROC AUC score</li>
<li>Only using FLAIR</li>
<li>Only using FLAIR and T2w</li>
</ul>
<h2>Links</h2>
<p>Here are the links of everything I made for this competition.</p>
<p>Kaggle Notebooks:</p>
<ul>
<li><a href="https://www.kaggle.com/gunesevitan/rsna-miccai-btrc-inference-and-blend" target="_blank">RSNA-MICCAI BTRC - Inference and Blend</a></li>
</ul>
<p>Kaggle Datasets:</p>
<ul>
<li><a href="https://www.kaggle.com/gunesevitan/rsnamiccai-btrc-dataset" target="_blank">https://www.kaggle.com/gunesevitan/rsnamiccai-btrc-dataset</a></li>
</ul>
<p>GitHub Repository</p>
<ul>
<li><a href="https://github.com/gunesevitan/rsna-miccai-brain-tumor-radiogenomic-classification" target="_blank">RSNA-MICCAI Brain Tumor Radiogenomic Classification</a></li>
</ul>
<h2>References</h2>
<p><a href="https://arxiv.org/abs/1505.04597" target="_blank">U-Net: Convolutional Networks for Biomedical Image Segmentation</a><br>
<a href="https://arxiv.org/abs/1606.06650" target="_blank">3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation</a><br>
<a href="https://www.researchgate.net/publication/350381125_Trialing_U-Net_Training_Modifications_for_Segmenting_Gliomas_Using_Open_Source_Deep_Learning_Framework" target="_blank">Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework</a><br>
<a href="https://arxiv.org/abs/1606.06650" target="_blank">3D MRI brain tumor segmentation using autoencoder regularization</a></p>
<p>Sorry for not being clear. Ground-truth segmentation mask labels were: 1 for the necrotic center and non-enhancing tumor, 2 for edema, and 4 for enhancing tumor. Our segmentation model's output contained 3 non-mutually exclusive classes derived from the ground-truth segmentation masks. Our classes were:</p>
<ul>
<li>Whole tumor (voxels corresponding to class 1, 2 or 4)</li>
<li>Tumor core (voxels corresponding to class 1 or 2)</li>
<li>Enhancing tumor (class 4 by itself)</li>
</ul>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
15th (VIT) Vision Transformer 3D on MRI
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't expect I could get rank 15 in this competition. I just wanted to try ViT on 3D images and compare it to EfficientNet3D; after I built the ViT3D and submitted successfully, I stopped working on this competition.</p>
<p>I just want to share my model (ViT on 3D images) with everyone, and many thanks to <a href="https://www.kaggle.com/rluethy" target="_blank">@rluethy</a> for the great notebook.</p>
<p><a href="https://www.kaggle.com/super13579/vit-vision-transformer-3d-with-one-mri-type/" target="_blank">https://www.kaggle.com/super13579/vit-vision-transformer-3d-with-one-mri-type/</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
1st place solution with very simple code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all congratulations to all the winners! Thanks as well to Kaggle and RSNA for hosting this competition and giving us the chance to work on such an interesting problem.</p>
<p>On the contrary of what top solutions look like, my final solution was one of the very first baselines I started with. There was no model ensembling, no complex/big models, and no sophisticated training techniques.</p>
<h3><strong>Team name secret:</strong></h3>
<p>As many of you have seen, the team name was set to “I hate this competition” during the last month and I will explain below the reasons that made me hate spending time on this competition.</p>
<ul>
<li>In every competition, I always start with a very simple model and then submit it. After this, I try to use different validation strategies in order to find the best strategy that really reflects the model performance. What made me upset about this competition is that the CV score was almost random in my early models and that no strategy worked to validate the performance of the model.</li>
<li>Bigger models were giving noisy CV scores (almost random) and I couldn't fine-tune any of them. I limited my models to ResNet10-50 and B0-B3 during all my experiments (large ResNets, DenseNets and EfficientNets failed).</li>
<li>The batch norm layer in this Efficientnet implementation <a href="https://github.com/lukemelas/EfficientNet-PyTorch" target="_blank">here</a> was giving nonsense output when I used 3 spatial inputs for the CNN. After searching about this issue, I found that other people had suffered from the same problem, and I felt disappointed since many of my experiments were just a total failure. I switched to the <strong>monai</strong> library at that time.</li>
<li>Ensembling was not really improving the score. Once I change the folds, the scores completely change and the std of the scores was so high.</li>
<li>Seeing people getting 0.8+ on the public LB while my models were stuck at around 0.5-0.6 was annoying for me.</li>
<li>I was certain and 100% sure that most of the public LB scores were just random scores and that all of the scores will drop in the private LB.</li>
<li>I saw someone in the discussion forum who shared that he/she got 0.7+ in the public LB by just using random predictions.<br>
I was reading the discussions every day and I was really amazed by how many people were targeting 0.7 and 0.8+ scores. This seemed like an impossible mission for me. My utmost ambition was getting a model that scores 0.6+ on validation and public LB (which I failed to get even after the competition ended). </li>
</ul>
<h3><strong>What I concluded after a whole week doing EDA:</strong></h3>
<ul>
<li>Public LB does make sense if you don’t overfit your validation set.</li>
<li>Most of the public models fail to learn any useful patterns.</li>
<li>Many teams are just focusing on improving their Public scores and didn’t even take into consideration the shakeup that could happen at the end of the competition.</li>
<li>There will be no escape from the shakeup.</li>
<li>It will be very easy to get a top100 position in the private LB with a simple model because many many competitors only focus on the public score.</li>
<li>Teaming up (in order to ensemble different models) won’t help to improve any score.</li>
<li>I should not invest more than 2 weeks working on this competition because the risk is so high and I cannot guarantee to get any medal. </li>
<li>The chances to be in the top 1% or in the middle of the LB (500-700) are almost the same.</li>
</ul>
<h3><strong>Very slow validation strategy:</strong></h3>
<p>Training the same model with the same everything (parameters, hardware, fixing all random seeds), with the same training data and the same validation set will give different AUC-ROC scores. I remember that when I trained an EfficientNet-B0 more than 100 times (same everything) I got CV scores between 0.53 and 0.62 with a high std (can't remember it). Training 5 folds 100 times drastically reduced the std of the scores, and I got scores (after averaging the 5 folds) between 0.52 and 0.56.<br>
Guess what? To judge any model/idea/approach I used to train 100 models (every model is trained 20 times x 5 folds). I ranked the experiments based on the average of the scores. Then, I selected the top 5 ideas+models and ran 250 models per idea (every model is trained 50 times x 5 folds; the folds in the second stage are different from the first-stage folds). After this, I re-ranked the ideas based on the average score of the 250 models (I compute the OOF of every idea/model and average the 50 OOFs).<br>
Please note that some of what I called models in this section represent 4 trained models (on "FLAIR", "T1w", "T1wCE", "T2w"). Whenever I wanted to try an idea, I applied it to the CNN model using all 4 types of data, trained 4 different models and averaged them. I excluded the "T2w" data in most of the final models (I think that more than 50% of the models were not using the "T2w" data). Anyway, let's skip this part because the ideas and models I was trying were totally random. In fact, most of the ideas I tried were just ideas I got when I closed my eyes to sleep. I have to be honest, this competition ruined many of my nights and made me feel like a stupid loser.</p>
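<p>A minimal sketch of this repeated-training, two-stage evaluation idea (the helper, its signature and the use of scikit-learn are illustrative assumptions, not the actual code):</p>
<pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def evaluate_idea(train_fn, X, y, n_repeats=20, n_folds=5, seed=0):
    """Score an idea by training it many times and averaging the fold AUCs.
    `train_fn(X_tr, y_tr, X_va)` must return validation probabilities;
    X and y are numpy arrays."""
    scores = []
    for repeat in range(n_repeats):
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed + repeat)
        for tr_idx, va_idx in skf.split(X, y):
            preds = train_fn(X[tr_idx], y[tr_idx], X[va_idx])
            scores.append(roc_auc_score(y[va_idx], preds))
    return np.mean(scores), np.std(scores)
</code></pre>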
<h3><strong>What was the top one idea after the 2 stage ranking?</strong></h3>
<p>I am not sure but I remember that I started with 8 simple ideas and then I tried some sophisticated and customized models (around 20).<br>
The top 2 models after applying the 2 stage validation filtering were using the same model and same training techniques but the top 1 was using all of the 4 different structural multi-parametric MRI data (FLAIR, t1w, t1wce, t2w) while the second top model was just using the “T1wCE” data.</p>
<h3><strong>The final model:</strong></h3>
<ul>
<li>3D CNN</li>
<li>Resnet10</li>
<li>BCE loss</li>
<li>Adam optimizer</li>
<li>15 epochs</li>
<li>LR: epoch 1->10; lr = 0.0001 | epoch 10 to 15 lr=0.00005</li>
<li>Image size: 256x256</li>
<li>Batch size: 8 (the bigger the batch size I used, the worse the CV I got; I alternated between bs=4 and bs=8)</li>
<li>No mixed-precision is used.</li>
<li>Used a small trick to build the 3D images. Let’s call it “The best central image trick”.</li>
<li>One epoch takes around 1minute and 20 seconds using an RTX 3090.</li>
</ul>
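<p>A minimal sketch of this final setup (MONAI's 3D ResNet10 factory, the BCE-with-logits formulation and the loader construction are assumptions for the exact implementation; the schedule follows the list above):</p>
<pre><code>import torch
from monai.networks.nets import resnet10

def train_final_model(train_loader, device='cuda'):
    model = resnet10(spatial_dims=3, n_input_channels=1, num_classes=1).to(device)
    criterion = torch.nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(1, 16):                       # 15 epochs
        lr = 1e-4 if epoch <= 10 else 5e-5           # step down after epoch 10
        for g in optimizer.param_groups:
            g['lr'] = lr
        for volumes, labels in train_loader:         # volumes: (B, 1, depth, 256, 256)
            volumes, labels = volumes.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(volumes), labels.float().view(-1, 1))
            loss.backward()
            optimizer.step()
    return model
</code></pre>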
<h3><strong>The best central image trick:</strong></h3>
<p>Each independent case has a different number of images for each of the MRI scans. Using all the scans will confuse the model when it tries to learn the spatial dependence of the brain pixels.<br>
For example, let's assume that case_1 has 80 T1wCE scans, case_2 has 500 T1wCE scans, and that we will use 40 images as input for the model.<br>
What people did is take the central image (aka image number 40 (80//2) for case_1 and image number 250 (500//2) for case_2) and then build the 3d images this way:<br>
case_1: from image number 20 to image number 60<br>
case_2: from image number 230 to image number 270.<br>
In this example, we end up with two 3d images that don't represent the same portion of the brain. Thus, the model will not only fail to learn the tumor pattern but will also start learning some spatial patterns that are not useful in our case.<br>
What I wanted to do and failed is to select a fixed starting and ending point for all the brains and train with the same information for all the cases. But I couldn't find a way to successfully make it work.<br>
After many failures, I found that using the biggest image as the central image (the image that contains the largest brain cutaway view) slightly improved my local CV (improvements were between 0.01 and 0.02). I think that this was the only 100% successful experiment I did in this competition. It is not exactly what I wanted, but it kinda worked.</p>
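<p>A minimal sketch of the trick (measuring the brain area by simple thresholding is an assumption):</p>
<pre><code>import numpy as np

def best_central_window(volume, num_slices=40, threshold=0):
    """Pick the slice with the largest brain area as the center and take a
    fixed-size window around it. Assumes volume has at least num_slices slices."""
    areas = (volume > threshold).sum(axis=(1, 2))      # brain pixels per slice
    center = int(np.argmax(areas))
    half = num_slices // 2
    start = max(0, min(center - half, volume.shape[0] - num_slices))
    return volume[start:start + num_slices]
</code></pre>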
<h3><strong>What did not work:</strong></h3>
<ul>
<li>2d CNNs</li>
<li>4 backbones 3D CNNs (each for one structural multi-parametric MRI folder)</li>
<li>Ensembling</li>
<li>Pretrained on brain images</li>
<li>Using the metadata from the DCIM images</li>
<li>Stacking the outputs from the different CNNs using tree-based models.</li>
<li>Deep CNNs</li>
<li>Some tricks I did to normalize the voxels in some consistent ways. (It has been more than a month since I did that and I now think that what I did seems completely stupid lol)</li>
</ul>
<h3><strong>One funny fact:</strong></h3>
<p>I forgot to select my best submissions since I am on vacation and I forgot about the last day of the competition. The cool thing is that my best 2 models had the best public LB score and they were automatically selected as my final submissions :D</p>
<h3><strong>One quick final thought:</strong></h3>
<p>I did not enjoy making submissions in this competition as much as I enjoyed the private LB results which explains the fact that the team name was changed from “I hate this competition” to “I love this competition”. The data was too small and the models were barely learning, I am sure that if everyone re-runs his best model again we will still have a small shakeup in the top teams. Finally, I feel super happy to have had my first first-place finish and my first solo gold and I apologize for not interacting with the community in the discussion forum. See you in the next competitions :D </p>
<h3><strong>Code</strong></h3>
<p>Training code is available <a href="https://github.com/FirasBaba/rsna-resnet10" target="_blank">here</a><br>
The winner inference notebook can be found <a href="https://www.kaggle.com/rinnqd/monai-simple-prediction-from-flair" target="_blank">here</a></p>
<p>I didn't try to explain the model to be honest. I think that explaining the output from a 3D matrix will take forever which is not worth it. </p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
255th place solution - Using 3D Resnet50 model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Cheers mate, you too</p>
<p>Although my model didn't generalise, I'd still like to summarise my approach, this being my first Kaggle competition.</p>
<h1>Data preprocessing steps</h1>
<ul>
<li>Load T2w Images</li>
<li>Removed Blank slices </li>
<li>Crop slices </li>
<li>SIZ [1] to a fixed depth of 64 (see the sketch after this list)</li>
<li>Normalise</li>
<li>Zero centre</li>
</ul>
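<p>A minimal sketch of the SIZ step from the list above (assuming the slice axis is last and cubic-spline interpolation as in [1]):</p>
<pre><code>from scipy.ndimage import zoom

def spline_interpolated_zoom(volume, target_depth=64):
    # Resample only the slice (depth) axis to a fixed length of 64.
    depth_factor = target_depth / volume.shape[-1]
    return zoom(volume, (1, 1, depth_factor), order=3)  # order=3 -> cubic spline
</code></pre>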
<h1>Training</h1>
<ul>
<li>Trained a 3D ResNet50 with an LR scheduler (factor 10, tolerance 5), initial LR=1e-2, batch size = 16</li>
<li>Single-fold cross-validation<br>
Generally the results appeared like:<br>
<a href="https://postimg.cc/QHzXL2c8" target="_blank">Screenshot-from-2021-10-20-12-20-00.png</a><br>
Chose the best 5 models from each training cycle and created a final ensemble for submission</li>
</ul>
<p>[1] <a href="https://link.springer.com/chapter/10.1007%2F978-3-030-59354-4_15" target="_blank">https://link.springer.com/chapter/10.1007%2F978-3-030-59354-4_15</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/pranshu15" target="_blank">@pranshu15</a>! I'm sorry, I don't have code to generate Grad-CAM maps since I don't have experience in generating activation maps yet</p>
<p>Thank you <a href="https://www.kaggle.com/a11t01n3" target="_blank">@a11t01n3</a>!</p>
<p>Thank you <a href="https://www.kaggle.com/yeeseng" target="_blank">@yeeseng</a>. I have seen ideas that people orient sequences from all studies to be on the same axis before training and inferencing, which makes a lot of sense in my opinion. Shamefully I didn't do it in this pipeline. </p>
<p>Thank you <a href="https://www.kaggle.com/ivansps" target="_blank">@ivansps</a>! At least on my part, I got lucky when training the model. I recently ran each of my 5-fold models individually, only 2/5 models has 'good' scores, and one is really close to my private LB, the others are around 0.5 private AUC. <br>
Personally I think our models did learn something, but it did not outshine the noise of the given data, which made us not convinced of all of our results. I am truly sorry to all the teams who spent months in this competition but did not their expected results.</p>
<p>First of all, I would like to thank Kaggle and the competition host for hosting this challenge, and also all competitors who participated in it. My main reason for joining this competition was to learn from the community, and I sure did learn a lot from you. <br>
The second place came as a surprise to me and I was not expecting it at all, since my public LB score was not high (0.65-0.67).<br>
Anyway, I will share my approach to this competition. My approach is simple: a CNN-LSTM architecture for the classification task. For the CNN part I used EfficientNet, and I trained the LSTM part from scratch.<br>
All 4 types of MRI image sequences (FLAIR, T1w, T1wCE, T2w) are used as inputs. </p>
<h1>Generate fused MRI sequences</h1>
<ul>
<li>I obtained images from <a href="https://www.kaggle.com/jonathanbesomi" target="_blank">@jonathanbesomi</a> 's dataset <a href="https://www.kaggle.com/jonathanbesomi/rsna-miccai-png" target="_blank">https://www.kaggle.com/jonathanbesomi/rsna-miccai-png</a>, who converted all images into PNG format and removed empty DICOM images</li>
<li>For each MRI image sequence, a number of T=10 frames is selected using uniform temporal subsampling. For example, with a video containing 91 frames, the frames 1, 11, 21, …, 91 are selected with this sampling strategy. </li>
</ul>
<pre><code>import torch

def uniform_temporal_subsample(x, num_samples):
    '''
    Modified from https://github.com/facebookresearch/pytorchvideo/blob/d7874f788bc00a7badfb4310a912f6e531ffd6d3/pytorchvideo/transforms/functional.py#L19
    Select num_samples evenly spaced frames from the sequence x.
    '''
    t = len(x)
    indices = torch.linspace(0, t - 1, num_samples)
    indices = torch.clamp(indices, 0, t - 1).long()
    return [x[i] for i in indices]
</code></pre>
<ul>
<li>In each selected time frame, the 4 single-channel MRI images are concatenated into one 4-channel feature image before proceeding to the training/inference phase. If one (or more) MRI image types are missing for a patient, those image channels are filled with 0.<br>
<img src="https://drive.google.com/file/d/1ZhzxfxWtIsSoiEXkDj61HdQIEBufJnQS/view?usp=sharing" alt=""></li>
</ul>
<h1>Model</h1>
<p><img src="https://drive.google.com/file/d/1ovsY6s9TlhlORjYl332B-34hEZy7gVY5/view?usp=sharing" alt=""><br>
The chosen CNN model for the image feature extraction task is a pre-trained EfficientNet-B0. Since the input image has 4 channels, each corresponding to an MRI image type, a 2D convolution is applied to map the 4-channel image into a 3-channel feature map to fit the input shape of the pre-trained EfficientNet model. The classification head of the pre-trained model is also replaced by a fully-connected layer of size 256.<br>
After obtaining the embeddings from all 10 selected frames, the embeddings are passed to 2 LSTM layers with hidden size 32 for both layers. Following that is a prediction layer with 1 node that outputs the score.<br>
The model is trained for 15 epochs using binary cross-entropy loss and the Adam optimizer with learning rate = 1e-4.</p>
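<p>A minimal sketch of this CNN-LSTM (using <code>timm</code> for the EfficientNet-B0 backbone and taking the last LSTM step for the prediction are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
import timm

class CNNLSTM(nn.Module):
    def __init__(self, embed_dim=256, lstm_hidden=32):
        super().__init__()
        self.to_rgb = nn.Conv2d(4, 3, kernel_size=1)          # 4 MRI channels -> 3
        self.backbone = timm.create_model('efficientnet_b0',
                                          pretrained=True, num_classes=embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)

    def forward(self, x):                  # x: (B, T=10, 4, 256, 256)
        b, t = x.shape[:2]
        x = x.flatten(0, 1)                # (B*T, 4, 256, 256)
        feats = self.backbone(self.to_rgb(x)).view(b, t, -1)
        out, _ = self.lstm(feats)          # (B, T, lstm_hidden)
        return self.head(out[:, -1])       # score from the last time step (one common choice)
</code></pre>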
<h1>Preprocessing and augmentation</h1>
<p>All input images are normalized and resized to the size of (256, 256). Images from one MRI type from one patient are randomly augmented with the same parameters.<br>
The list of augmentations are as follow:</p>
<ul>
<li>Horizontal Flip</li>
<li>ShiftScaleRotate</li>
<li>RandomBrightnessContrast</li>
</ul>
<p>I found that using too many and/or heavy augmentation methods did not help with the training process, therefore I only used simple augmentations when preparing the training data.</p>
<h1>Cross validation.</h1>
<p>Stratified K-fold cross validation is used with K=5 on MGMT value. During inference time, the mean prediction value from all 5 models is used as the ensemble’s prediction value.</p>
<h1>Notebooks</h1>
<p>All training and inference steps were done with Kaggle notebooks with GPUs:</p>
<ul>
<li>Training notebook: <a href="https://www.kaggle.com/minhnhatphan/rnsa-21-cnn-lstm-train/notebook" target="_blank">https://www.kaggle.com/minhnhatphan/rnsa-21-cnn-lstm-train/notebook</a></li>
<li>Inference notebook: <a href="https://www.kaggle.com/minhnhatphan/rnsa-21-cnn-lstm-inference" target="_blank">https://www.kaggle.com/minhnhatphan/rnsa-21-cnn-lstm-inference</a></li>
<li>Refactored Github <a href="https://github.com/minhnhatphan/rnsa21-cnn-lstm" target="_blank">link</a> and notebook <a href="https://www.kaggle.com/minhnhatphan/rnsa21-cnn-lstm-refactored/notebook" target="_blank">link</a></li>
</ul>
<p>I agree with other competitors that the results are not clinically useful. I found myself lucky to be in the top positions in the challenge. However, I think everyone can get something out of this, and the knowledge from this competition can benefit future competitions as well as our understanding of similar problems. Again, I'd like to thank all participants and the Kaggle community for your insights and knowledge.</p>
<p>Thank you!</p>
<p>Thank you so much! Your dataset helped me out a lot 🔥 Best wishes for you!</p>
<p>Thank you!</p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank Kaggle, RSNA and MICCAI for hosting the competition. I would also like to thank <a href="https://www.kaggle.com/billqi" target="_blank">@billqi</a> for his kernel which I used as a work base for mine. </p>
<h1>Approach</h1>
<ul>
<li>I used stratified split based on patient ids and class on train dataset to sample a validation dataset</li>
<li>I trained four EfficientNet-B3 models, one for each kind of MRI scan (FLAIR, T1w, T1wCE, T2w) </li>
<li>I aggregated results by patient ID and compared the difference between the maximum prediction and the average of predictions with the difference between the average and the minimum of predictions, keeping the prediction linked to the highest difference. </li>
</ul>
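<p>A sketch of one possible reading of this aggregation rule (per patient, keep the maximum prediction when max−mean exceeds mean−min, otherwise the minimum; the column names and the exact rule are assumptions):</p>
<pre><code>import pandas as pd

def aggregate_by_patient(df):
    # df has columns ['patient_id', 'pred'] with one row per image-level prediction.
    def pick(preds):
        mx, mn, avg = preds.max(), preds.min(), preds.mean()
        return mx if (mx - avg) >= (avg - mn) else mn
    return df.groupby('patient_id')['pred'].apply(pick)
</code></pre>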
<h1>Notebooks</h1>
<p>Notebook is available on Kaggle : <a href="https://www.kaggle.com/cedricsoares/tf-efficientnet-transfer-learning-strat-split" target="_blank">https://www.kaggle.com/cedricsoares/tf-efficientnet-transfer-learning-strat-split</a><br>
And Github (with some comments on french) : <a href="https://github.com/cedricsoares/kaggle-rsna-miccai-brain-tumor-radiogenomic-classification" target="_blank">https://github.com/cedricsoares/kaggle-rsna-miccai-brain-tumor-radiogenomic-classification</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We’d like to thank Kaggle and the competition host for this challenge. We’d also like to thank all the other competitors who shared their wisdom, knowledge, code and critiques. We appreciate you.</p>
<p>Thanks to my partner <a href="https://www.kaggle.com/mrinath" target="_blank">@mrinath</a> for his expertise and hard work in putting the whole project together.</p>
<p>Our approach is simple and consisted of Object Detection and Classification. We used YOLOv5 for OD and EfficientNet 2D and 3D backbones for classification.</p>
<p>We used only the T1wCE series in the axial plane for object detection and 2D networks. The basic strategy was:</p>
<h3>1. Extract all images into the same plane</h3>
<ul>
<li>Resample coronal and sagittals to axial. Thanks to <a href="https://www.kaggle.com/boojum" target="_blank">@boojum</a> for his nifty resampling method. -> <a href="https://www.kaggle.com/boojum/connecting-voxel-spaces" target="_blank">https://www.kaggle.com/boojum/connecting-voxel-spaces</a></li>
<li>Eliminate empty or mostly empty images (mean pix value < 40)</li>
<li>Choose seven equally spaced slices out of the series.</li>
<li>Find the center slice, two slices inferior and three slices superior to center (since most of the brain is above the center slice in the axial plane).</li>
</ul>
<h3>2. Find slices with tumors.</h3>
<ul>
<li>We trained a YOLO Object Detection model on ~400 hand-annotated images from the RSNA-MICCAI training dataset and used it to find slices with tumors.<br>
OD Train Notebook -> <a href="https://www.kaggle.com/davidbroberts/brain-tumor-yolo-od-train" target="_blank">https://www.kaggle.com/davidbroberts/brain-tumor-yolo-od-train</a><br>
OD Train Dataset -> <a href="https://www.kaggle.com/davidbroberts/brain-tumor-object-detection-datasets" target="_blank">https://www.kaggle.com/davidbroberts/brain-tumor-object-detection-datasets</a></li>
</ul>
<p>We tried OD on sagittal and coronal planes but got a better mAP with axials. Our axial mAP was around .68.</p>
<h3>3. 2D Classification</h3>
<p>Training:</p>
<p>We exported all images into each plane for a total of 12 datasets. Next, we used the YOLO OD models to find images with tumors, which were added to the training datasets. The idea was to eliminate duplicate and non-tumor images. We tried about 20 different backbones on each 'sampled' dataset. We found the best results overall (which still weren't very good) were on T1wCE axials with EffNet B3, so we stuck with that combination with the intent to ensemble the lesser performing models later.</p>
<ul>
<li>Train-val random split with 25% of the data in the val set. </li>
<li>AdamW optimizer with cosine_schedule_with_warmup as the LR scheduler.</li>
<li>Augmentations include: CenterCrop, CLAHE, RandomRotate90, HorizontalFlip, VerticalFlip, RandomContrast and CutMix</li>
</ul>
<p>Prediction:</p>
<ul>
<li>Images that YOLO detected tumours on were passed to EffNet classifiers (1-7 images per study).</li>
<li>If multiple slices from the same case are detected, the first and the last slice is used.</li>
<li>The best validation AUC was 0.6829.</li>
<li>We used 4x TTA and power ensembling for the final prediction, it was better than plain averaging.</li>
<li>If after TTA we had 4 prediction sets y1, y2, y3, y4 then y_final = (y1² + y2² + y3² + y4²) / 4</li>
</ul>
<p>For studies with no usable T1wCE images or no tumours detected by OD, we globbed all the series together and passed them to the 3D network.</p>
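<p>A minimal sketch of the power ensembling formula above:</p>
<pre><code>import numpy as np

def power_ensemble(pred_sets):
    # pred_sets: list of TTA prediction arrays y1..yN; y_final = mean of squared predictions
    preds = np.stack(pred_sets, axis=0)
    return np.mean(preds ** 2, axis=0)

# y_final = power_ensemble([y1, y2, y3, y4])
</code></pre>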
<h3>4. 3D Classification</h3>
<p>Training:</p>
<p>For the 3D model, we knew that having pretrained weights, as we use in other competitions, would be beneficial. But most of the 3D models we were training from MONAI had no pre-trained weights. So we decided to use ImageNet pre-trained models for 3D classification. We knew these models are for 2D images, so how can we classify 3D data? In most 2D challenges we have seen people change the last layers, adding their own fully-connected layer at the end of a pretrained model and adapting it to their task. We changed the front part of the model instead.</p>
<ul>
<li><p>EffNet B1 was the best backbone model for us.</p></li>
<li><p>Augmentations include CentreCropping and adding noise to the 3D data.</p></li>
<li><p>SAM optimizer with base Adam optimizer with cosine_with_hard_restarts_schedule_with_warmup</p></li>
<li><p>5-fold stratified k-fold.</p></li>
</ul>
<p>Prediction:</p>
<ul>
<li><p>We used our best performing fold for predicting the test; the best validation AUC was 0.6936.</p></li>
</ul>
<h3>Tools/Frameworks:</h3>
<ul>
<li>Hardware used: RTX 3090, Google Colab Pro and of course, Kaggle.</li>
<li>Model metric tracking: Weights & Biases (<a href="https://wandb.ai" target="_blank">https://wandb.ai</a>)</li>
<li>Framework: PyTorch</li>
<li>Pretrained models: Timm</li>
<li>Object detection annotation: <a href="https://makesense.ai" target="_blank">https://makesense.ai</a></li>
</ul>
<h3>Things that didn't work (or help):</h3>
<ul>
<li>Object detection on T2w series</li>
<li>Mixup augmentation</li>
<li>YOLO classification (likely because of a small dataset of ~400 images)</li>
<li>Contrast filtering ( hist EQ, manual LUT etc)</li>
<li>Aux Loss<br>
We were inspired by the COVID-19 Detection winning solutions where many people worked with aux loss. We created an extra head in our 2D models to segment the tumour. Although it was promising during training, the scores were not good on the Public LB.</li>
<li>Using 3D models with no pretrained weights</li>
</ul>
<h3>Notebook:</h3>
<p>Here's a cleaned-up, quick-save version of our final infer notebook. The fully documented notebook will be shared when it's complete.</p>
<p>Infer Notebook -> <a href="https://www.kaggle.com/mrinath/infer-refactored" target="_blank">https://www.kaggle.com/mrinath/infer-refactored</a><br>
Model dataset -> <a href="https://www.kaggle.com/mrinath/rsna-winning-models" target="_blank">https://www.kaggle.com/mrinath/rsna-winning-models</a></p>
<p>Our CV metrics and LB scores were not improving after a month of experiments. We started to realize the task isn't really possible. Other teams started reporting the same issue, which confirmed the difficulty for us.</p>
<p>Our overall assessment is that due to the small dataset, we probably trained on noise or features unrelated to MGMT status .. and some amount of randomness happened to put us in the medal zone. The result isn't clinically useful itself. But we do think the collective knowledge gained in this competition will benefit the community. This is how we learn things.</p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank Kaggle and the competition hosts for this learning opportunity. </p>
<p>Link to the <a href="https://github.com/bhaveshtangirala786/RSNA" target="_blank">Github</a> code.</p>
<h1>Dataset</h1>
<p>This kaggle <a href="https://www.kaggle.com/jonathanbesomi/rsna-miccai-png" target="_blank">dataset</a> was used for training the models. I thank <a href="https://www.kaggle.com/jonathanbesomi" target="_blank">@jonathanbesomi</a> for providing us this compact dataset</p>
<h1>Approach</h1>
<ul>
<li>I sampled 10 images for each type of mpMRI scan (namely FLAIR, T1w, T1Gd, T2) and calculated the mean for each type to get 4 2D images.</li>
<li>I then concatenated these 4 images to get a 4 channel (4 x h x w) image which I passed through a 1x1 convolution bottleneck to get a 3 channel (3 x h x w) feature map.</li>
<li>This feature map is passed through a CNN (Efficientnet) to predict the MGMT value</li>
<li>I used Taylor Cross Entropy loss for training as I thought this dataset might be a bit noisy due to the small number of samples.</li>
</ul>
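<p>A minimal sketch of this input pipeline and 1x1-convolution bottleneck (using <code>timm</code> and EfficientNet-B0 here is an assumption; the write-up only says "Efficientnet"):</p>
<pre><code>import torch
import torch.nn as nn
import timm

class MeanImageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.bottleneck = nn.Conv2d(4, 3, kernel_size=1)   # 4 modality channels -> 3
        self.backbone = timm.create_model('efficientnet_b0', pretrained=True, num_classes=1)

    def forward(self, x):          # x: (B, 4, 10, H, W) = 10 sampled slices per modality
        x = x.mean(dim=2)          # mean over the 10 slices -> (B, 4, H, W)
        return self.backbone(self.bottleneck(x))
</code></pre>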
<h1>Notebook</h1>
<p>Training notebook : <a href="https://www.kaggle.com/abhimanyukarshni/rsna-training/notebook" target="_blank">https://www.kaggle.com/abhimanyukarshni/rsna-training/notebook</a><br>
Inference notebook : <a href="https://www.kaggle.com/abhimanyukarshni/rsna-inference/notebook" target="_blank">https://www.kaggle.com/abhimanyukarshni/rsna-inference/notebook</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
6 place solution (2 stage models)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I want to have a special thank to the hosts, who have created this challenge, all of my team mates in Train4Ever <a href="https://www.kaggle.com/tungvs" target="_blank">@tungvs</a> <a href="https://www.kaggle.com/researchbntz" target="_blank">@researchbntz</a> <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a> <a href="https://www.kaggle.com/damtrongtuyen" target="_blank">@damtrongtuyen</a> and all the great Kagglers in this competition. Below is my team's brief solution summary.</p>
<p>You can also find all the code and detailed guiding on this Github repo: <a href="https://github.com/gallegi/T4E_MICCAI_BrainTumor" target="_blank">https://github.com/gallegi/T4E_MICCAI_BrainTumor</a></p>
<h2>1. Solution overview</h2>
<h3>1.1. Architecture:</h3>
<p>Our best model on the private leaderboard is the one that combined a 2 stage training and inference. </p>
<ul>
<li>The first stage was independently training a <strong>segmentation model</strong> that could segment the tumor masks properly (We called this segmentation stage).</li>
<li>The second stage was the classification stage, where a <strong>classification model</strong> was trained. We used the trained segmentation model to generate the tumor masks, <em>combining them with the original images to form 3-channel inputs</em> that were fed to the classification model for training and inference.</li>
<li>All the models approached the problem using 2D images. The segmentation stage used a UNet++ model with a DenseNet121 backbone, and the classification stage used a Long Short-Term Memory architecture with an Eca-NFNet-L0 backbone as the feature extractor.</li>
</ul>
<p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1662331/2727201/Stage1.png?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20211021%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20211021T105808Z&X-Goog-Expires=345599&X-Goog-SignedHeaders=host&X-Goog-Signature=7f92a21977b240e9851f220ec9fea1ce9beea68ba6ddddc7a75460d0399c5c44ce5f4979719bcce25dd25a3b6c78f31c5736cf9919a6e956e8d73f5d487b2dedbefaf996f126d55dde09f18250e0fe70582c70d5c693a24a26c041d24c228afa4d3ee307b843ff79586acbca05f3b53f9e912636f60ac700c2cb9b0ad2264eae57b1948bdc7c04520b0ce20c6f66d7344fbea9c14b6bd3a09801f1baf7f6179265b0083961d3091e95518c01992982dddf9db7cf5b4b7514f86906bf7cd382f344a6cd5fd5073c54c03fd10ffdb2a3d450b80931e2cb16d9b7167d3f83ad0fa470bc79235c6d413ca46e9cf067b666a8f0bf02e2000d59fe7548f3279f4ba2ae" alt="Stage 1 Image"></p>
<p><img src="https://storage.googleapis.com/kagglesdsdata/datasets/1662331/2727201/Stage2.png?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20211021%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20211021T105850Z&X-Goog-Expires=345599&X-Goog-SignedHeaders=host&X-Goog-Signature=39b298f22548d22bee9e4db02159ebd3416c6eed332dca8c1e0639ca8d4561f71aebbbce1a4e5f10b9930d91c08cb1fe944065f6fe918e7f3d1881613140c20580c80386e48b81c68e5d2dacb80b1d51bfc35e9f8580b50cffaa26c9e72a8daf67704febd9c44859e40ef41de67c105b3c41edc5e002f8629771639553e909981763b3abb3215dd7481241e27074da7f3131ab6036bfc120571e88fe251a2e6b97dd8b1773190c9562dedda8a67f707881ae9ae0116ca3316b6d328a872f38b3ffafc7cbcdc91d6bde56a9f6f58e43a08b98573e45c3e5cedc0e5faaec05864ecb7bd44186c464a8a822e6031b527d15f32e3b4e859b67ac0b47742cd27eff5d" alt="stage 2"></p>
<h3>1.2. Stage 1 detail</h3>
<ul>
<li>There is a parallel track held by the same host, called Task 1, besides Task 2 which was hosted on Kaggle. Its dataset is provided as 3D arrays with shape (240,240,155), stored in nii.gz files.</li>
<li>Remove data of patients overlapping with those in Task 2 data: Task 1 data has 1251 samples corresponding to 1251 unique patients. However, there are about 574 patients with IDs overlapping with Task 2 data. In order to prevent data leakage, it was safer to remove data of overlapping patients. Thus, only the non-overlapping part was kept, which had about 677 3D samples.</li>
<li>Data preparation: with each 3D sample, we extracted 2D slices and their corresponding masks from each plane: Coronal, Sagital and Axial. To illustrate, if there are 600 3D samples, 4 MRI types each, 3 different plane views, and within each 3D sample we were able to extract 100 slices, then the total 2D images received is 600x4x3x100. </li>
<li>Sampling: Because nearby slices in the same plane are usually very similar to each other, we used a sampling method to keep only relatively distinct images. We believed that it did no harm for training the model, and certainly reduced the training time.</li>
<li>Filtering: Only images with a tumor-area-to-brain-area ratio over 0.01 were kept. We think that made the model more stable during training</li>
<li>Image size: 224x224x3</li>
<li>Mask: Referring to the competition paper [1], we constructed 2 types of masks for each 2D image: Whole Tumor (WT) and Enhancing Tumor (ET). WT represents the morphology of the whole tumor, while ET is the middle layer wrapping the necrotic part to form the tumor core.</li>
<li>Model: Densenet121 backbone with a linear classifier layer at top, pretrained on imagenet</li>
<li>Training: All images, regardless of MRI types and planes, are used after the filtering step above. Note that a set of 20% of patients was held out to validate the model during training. At first, the backbone was frozen to warm up for 10 epochs; then we unfroze it and trained for more than 180 epochs, until the improvements were minor. </li>
</ul>
<h3>1.3. Stage 2 detail</h3>
<ul>
<li>Data preparation: Using the trained segmentation model on Task 2 data to generate 2 types of mentioned masks, we concatenated them with the original image to create 3-channel 2D images as the input for this second stage.</li>
<li>Filtering: We used the predicted masks to determine which images should be kept during training. Only images with a predicted tumor-area-to-brain-area ratio larger than 0.025 were considered informative. Besides, we also decided to remove ones with more than 5 separate tumor contours to avoid noise, because it was unlikely to have a brain with multiple tumors.</li>
<li>Chunking: Because we input a series of 3-channel images into the LSTM at the same time, we needed to determine how many <em>time-steps</em> per series. By viewing the distribution of the number of images in each series and after doing some tuning, we decided that the sequence length was 35. This may not be optimal, but we found the result acceptable. A larger sequence length might lead to unstable training and much more resource consumption. So, for each series, we created consecutive 35-step chunks with stride 5 and treated them as independent samples when training (see the sketch after this list).</li>
<li>Image: 224x224x3</li>
<li>Model: Bidirectional LSTM with an Eca-NFNet-L0 backbone extractor. The backbone is shared between time steps and outputs an embedding of size 2034. The hidden size of the LSTM cell is 64. All the embeddings from all the time steps are concatenated before going to a linear classifier. </li>
<li>Training: We trained a model for each MRI type separately; data from 20% of patients was held out for validation. The backbone was frozen and warmed up for 5 epochs before unfreezing and continuing training. </li>
<li>Inference: Because we split one patient's series into many chunks, the final prediction for a patient was the average of all the probability outputs of those chunks. We observed that this kind of <em>ensembling</em> makes the model more robust. </li>
<li>TTA: Specifically for the test set, we noticed that the ratio of tumor area over brain area was smaller in some cases, so we decided to use TTA Zoom In 1.2 as a post-processing step.</li>
</ul>
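<p>A minimal sketch of the chunking step referenced in the list above (the handling of series shorter than the sequence length is an assumption):</p>
<pre><code>def make_chunks(slice_paths, seq_len=35, stride=5):
    # Split one patient's ordered, filtered slices into overlapping 35-step chunks (stride 5).
    if len(slice_paths) <= seq_len:
        return [slice_paths]                 # short series handled separately (assumption)
    return [slice_paths[i:i + seq_len]
            for i in range(0, len(slice_paths) - seq_len + 1, stride)]
</code></pre>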
<h2>2. Result and observation:</h2>
<h3>Stage 1 result</h3>
<table>
<thead>
<tr>
<th></th>
<th>Validation Dice Loss</th>
<th>Validation IOU Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Segmentation model</td>
<td>0.077</td>
<td>0.856</td>
</tr>
</tbody>
</table>
<p>We were pretty confident with the segmentation model because the results it outputted were good, and the training and validation loss perfectly correlated. </p>
<h3>Stage 2 result</h3>
<table>
<thead>
<tr>
<th></th>
<th>Valid AUC (patient)</th>
<th>Public LB AUC</th>
<th>Private LB AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Classification model</td>
<td>0.685</td>
<td>0.678</td>
<td>0.60696</td>
</tr>
</tbody>
</table>
<p>While training, we found that the classification model could quickly overfit. We still think that this task needs more data to be trained on before we can conclude whether or not it is feasible in practice.</p>
<p><strong>Note that the AUC is calculated across patients, which requires averaging the predictions of all chunks belonging to each patient to obtain that patient's prediction.</strong></p>
<h2>3. Final submission notebook</h2>
<ul>
<li>Original submission notebook at: <a href="https://www.kaggle.com/namgalielei/ensemble-brainlstm-v6-tta-zoomin/notebook" target="_blank">T4E Final Submission</a></li>
<li>Notebook after updating the dataset version: <a href="https://www.kaggle.com/namgalielei/fix-deleted-data-ensemble-brainlstm-v6-tta-zoomin" target="_blank">T4E Final Submission Updated DS</a></li>
</ul>
<h2>4. What did not work</h2>
<ul>
<li>We spent a lot of time on many different approaches that did not work, or at least we were not able to make them work.</li>
<li>At first we tried 3D models because we believed that the information represented in 3D space was much more rational and the data was easier to manage. But the point was how to normalize the 3D arrays of voxels in some consistent way. We used both simple and complex methods, from rotating the brain onto a similar plane and direction then resizing to a fixed shape, to registering to SRI24 as done in the Task 1 data by the host. We found that the registration data was good: it not only preserved the spacing of the voxels in space but also ensured all the brains lay consistently in one direction. However, no matter how good the registration data looked, the model still performed badly on the public leaderboard. Besides, the lack of pretrained weights for 3D models was another factor that could affect model performance, especially when the number of training samples was small.</li>
<li>In exploring external datasets, we found some that could be relevant to the problem: the IXI dataset, the fastMRI dataset and the TCIA dataset. We tried to use them for self-supervised training and pseudo labeling, but we failed to get a better result. </li>
</ul>
<h2>5. References</h2>
<p>[1] The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification: <a href="https://arxiv.org/abs/2107.02314" target="_blank">https://arxiv.org/abs/2107.02314</a></p>
<p>[2] UNet++: A Nested U-Net Architecture for Medical Image Segmentation: <a href="https://arxiv.org/abs/1807.10165" target="_blank">https://arxiv.org/abs/1807.10165</a></p>
<p>[3] Long Short-term Memory: <a href="https://www.researchgate.net/publication/13853244_Long_Short-term_Memory" target="_blank">https://www.researchgate.net/publication/13853244_Long_Short-term_Memory</a></p>
<p>[4] segmentation model pytorch: <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">https://github.com/qubvel/segmentation_models.pytorch</a> </p>
<p>[5] timm: <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a></p>
<h2>6. Some of our intuition:</h2>
<p>I won't jump to a conclusion about whether this task is feasible in practice. Due to the small amount of data in this competition, you might see an extreme shakeup, and I admit that no one who survived the shake could do it without any luck. However, it does not mean there is no data science in this competition, at least to my limited knowledge. What we tried to do was to recognize the instability of our models early and to balance the AUC score between the local and public leaderboards. In addition, when viewing some of the models' Grad-CAMs, we saw that the model was biased toward zones other than the tumor, even when the AUC was high on the validation set. We then decided to make the model focus on the tumor features by adding predicted masks as channels of the input images in the classification phase. Thus, we believed it might help stabilize the model. Another point is that we average all the predictions of all chunks for each patient, which also makes the prediction less prone to noise.</p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
662nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for sharing your approach.</p>
<p>Thank you! We are sad that we did not generalize to the private LB. I was rooting for you and your team because of how much you contributed to the Discussions forum. Congrats on an awesome finish!</p>
<p>Thank you Gunes, and big congrats on your and authman's finish. I was actually relying a lot on your TRENDS Neuroimaging solution for some of the work we did in this competition.</p>
<p>For segmentation, we used the pretrained BraTS 2020 segmentation model provided by MONAI. <a href="https://github.com/Project-MONAI/MONAI" target="_blank">https://github.com/Project-MONAI/MONAI</a></p>
<p>Sadly we didn't choose the submission using the techniques inspired by you. Nor did we choose the "PCA" submission inspired by 1st place from Trends. Maybe a lesson for next time!</p>
<p>Our submission is a 50/50 average between a model based on Pyradiomics, and a model based on 3d neural networks.</p>
<p>Knowing that the train sample size was very small and that we had a very low signal-to-noise ratio, we sought to build a robust approach.</p>
<ul>
<li>Only focus on the tumor instead of the whole brain's image</li>
<li>Use information from all 4 modalities</li>
<li>Use Log Loss in local Cross-Validation</li>
<li>Use pretrained models where possible, which have seen many more brains than our dataset size</li>
<li>Remove features where train and test set distributions differ</li>
<li>LightGBM parameters are very shallow / "dumb". Also it uses only 6 features</li>
<li>Neural networks blended across multiple different architectures</li>
<li>Models are bagged / models trained with augmentations to reduce volatility</li>
<li>Combine Pyradiomics submission with Neural Network submission to get the best of both worlds, try to be as robust as possible</li>
</ul>
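<p>A minimal sketch of the Pyradiomics + LightGBM branch described above (file paths, the feature selection and the "shallow" LightGBM parameters are illustrative assumptions):</p>
<pre><code>from radiomics import featureextractor
import lightgbm as lgb

# Extract radiomics features from the tumor region of one registered modality.
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute('flair.nii.gz', 'predicted_tumor_mask.nii.gz')
row = {k: float(v) for k, v in features.items() if k.startswith('original_')}  # one row per scan

# A very shallow LightGBM classifier trained on a handful of selected features.
clf = lgb.LGBMClassifier(max_depth=2, num_leaves=3, n_estimators=100, learning_rate=0.05)
# clf.fit(X_train[selected_features], y_train)
# test_pred = clf.predict_proba(X_test[selected_features])[:, 1]
</code></pre>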
<table>
<thead>
<tr>
<th>Model</th>
<th>Local CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pyradiomics LGBM</td>
<td>0.65531</td>
<td>0.62420</td>
<td>0.52228</td>
</tr>
<tr>
<td>3d Neural Networks</td>
<td>0.6474</td>
<td>0.62315</td>
<td>???</td>
</tr>
<tr>
<td>50/50 Blend</td>
<td>???</td>
<td>0.65380</td>
<td>0.53221</td>
</tr>
</tbody>
</table>
<p>See image:<br>
<img src="https://raw.githubusercontent.com/Quetzalcohuatl/rsnamiccaikaggle2021/main/rsna_miccai_brain_tumor_mgmt_kaggle_solution_2021.png" alt="662nd solution"></p>
<p>Sincere thanks to Anil for his hard work and Luminide for access to data storage and servers! I thought working on their platform, tracking experiments, and having the instance detach after training was complete was quite useful.</p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
691nd place solution. (Code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Big congrats to all the winners. Though we couldn't survive the shake-up and were way too far behind, we thought our approach might be useful for some readers. From the beginning, our main aim was to get hands-on experience with the 3D dataset and a 3D modeling approach for the first time with <code>TensorFlow.Keras</code>, along with 3D augmentation.</p>
<p><strong>3D modeling</strong>: We only used <a href="https://github.com/ZFTurbo/efficientnet_3D" target="_blank"><code>3D-EfficientNet-B0</code> </a> in most cases due to limited resources. We trained the model on a 5-fold stratified dataset, considering all modalities. In most of the fold trainings, we used <a href="https://github.com/ZFTurbo/volumentations" target="_blank">volumentations</a> for augmenting the dataset. Basically, most of the training part was taken from my <a href="https://www.kaggle.com/ipythonx/tf-3d-2d-model-for-brain-tumor-classification" target="_blank">public notebook</a>. Each of the 5 folds was trained with the same configuration (same seed, lr, etc); the local CV for each fold varied between 0.49 and 0.61. Next, we used Bayesian optimization to get the optimal linear blend weights, using my other <a href="https://www.kaggle.com/ipythonx/optimizing-metrics-out-of-fold-weights-ensemble/notebook" target="_blank">notebook here</a>. </p>
<table>
<thead>
<tr>
<th>-</th>
<th>3D-EfficientNet-B0</th>
<th>SimpleAvg</th>
<th>BaysianOpt</th>
<th>TTA (Volumentation: Step 5)</th>
</tr>
</thead>
<tbody>
<tr>
<td>CV</td>
<td>(0.49~0.62)</td>
<td>0.55</td>
<td>0.57</td>
<td>0.601</td>
</tr>
<tr>
<td>Public LB</td>
<td>--</td>
<td>0.54</td>
<td>0.59</td>
<td>0.635</td>
</tr>
</tbody>
</table>
<hr>
<p><strong>2D Ensemble Modeling</strong>. We built a large model for end-to-end training consisting of 4 backbones, i.e. <code>EfficientNetB2</code>, <code>DenseNet121</code>, <code>ResNet18</code> and <code>SeResNeXt50</code>, where each modality is passed as the input to one of the backbones, but <strong>randomly</strong> switched during training. </p>
<pre><code>from tensorflow.keras import layers, Model, applications
from classification_models.tfkeras import Classifiers

def build_model(base_net, input_shape=(224, 224, 3)):
    # One ImageNet-pretrained backbone per modality, each ending in global pooling.
    if base_net == 'a':    # FLAIR -> EfficientNetB2
        base = applications.EfficientNetB2(include_top=False, input_shape=input_shape)
        x = layers.GlobalAveragePooling2D()(base.output)
        return Model(inputs=base.input, outputs=x, name='flair')
    elif base_net == 'b':  # T1 -> DenseNet121
        base = applications.DenseNet121(include_top=False, input_shape=input_shape)
        x = layers.GlobalAveragePooling2D()(base.output)
        return Model(inputs=base.input, outputs=x, name='t1')
    elif base_net == 'c':  # T1wCE -> ResNet18
        ResNet18, _ = Classifiers.get('resnet18')
        base = ResNet18(input_shape=input_shape, include_top=False, weights='imagenet')
        x = layers.GlobalAveragePooling2D()(base.output)
        return Model(inputs=base.input, outputs=x, name='t1w')
    elif base_net == 'd':  # T2 -> SeResNeXt50
        SeResNeXt50, _ = Classifiers.get('seresnext50')
        base = SeResNeXt50(input_shape=input_shape, include_top=False, weights='imagenet')
        x = layers.GlobalAveragePooling2D()(base.output)
        return Model(inputs=base.input, outputs=x, name='t2')

# (TPU detection / distribution strategy scope omitted here)
model_a = build_model('a')  # for flair
model_b = build_model('b')  # for t1
model_c = build_model('c')  # for t1w
model_d = build_model('d')  # for t2

# Project each modality embedding to a common size, average them and classify.
x_a = layers.Dense(764, activation='relu')(model_a.output)
x_b = layers.Dense(764, activation='relu')(model_b.output)
x_c = layers.Dense(764, activation='relu')(model_c.output)
x_d = layers.Dense(764, activation='relu')(model_d.output)
output = layers.average([x_a, x_b, x_c, x_d])
head_layer = layers.Dense(512, activation='relu')(output)
head_layer = layers.Dropout(0.5)(head_layer)
final_output = layers.Dense(1, activation='sigmoid')(head_layer)
model = Model(
    inputs=[model_a.input, model_b.input,
            model_c.input, model_d.input],
    outputs=[final_output]
)
</code></pre>
<p>We also had a <code>Swin-Transformer</code> as one of the four models, but it didn't perform well (or we were too impatient). Additionally, during training we randomly switch the modality inputs in the dataloader, on the assumption that each of the four models should get exposure to several modalities. </p>
<pre><code> if self.split == 'train':
if np.random.rand() < 0.2:
flair_x, t1_x = t1_x, flair_x
elif np.random.rand() < 0.4:
t1w_x, t2_x = t2_x, t1w_x
elif np.random.rand() < 0.6:
t2_x, flair_x = flair_x, t2_x
else:
pass
# dictionary mapping for corresponding models
return {
'flair' : flair_x,
't1' : t1_x,
't1w' : t1w_x,
't2' : t2_x
}
</code></pre>
<p>For 2D modeling, we've used built-in <a href="https://keras.io/api/layers/preprocessing_layers/image_augmentation/" target="_blank"><code>keras augmentation</code></a></p>
<table>
<thead>
<tr>
<th>-</th>
<th>SimpleAvg</th>
<th>TTA (Keras: Step 5)</th>
</tr>
</thead>
<tbody>
<tr>
<td>CV</td>
<td>0.59</td>
<td>0.61</td>
</tr>
<tr>
<td>Public LB</td>
<td>0.57</td>
<td>0.621</td>
</tr>
</tbody>
</table>
<p>Final Ensemble </p>
<table>
<thead>
<tr>
<th>-</th>
<th>3D</th>
<th>2D</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0.635</td>
<td>0.621</td>
<td>0.68340</td>
<td>0.52730</td>
</tr>
</tbody>
</table>
<hr>
<p><strong>Final Note</strong></p>
<p>It's not a very robust model and it has many issues; we'll continue working on such problems. We've made all the code publicly accessible at <a href="https://github.com/innat/BraTS-MGMT-Classification" target="_blank">BraTS-MGMT-Classification</a> in a structured format, and we are actively updating it. And here is the minimal <a href="https://www.kaggle.com/ipythonx/tf-3d-2d-model-for-brain-tumor-classification" target="_blank">notebook view</a>.</p>
<h2>Code: <a href="https://github.com/innat/BraTS-MGMT-Classification" target="_blank">BraTS-MGMT-Classification</a></h2>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
7th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I’d like to thank Kaggle, RSNA and the MICCAI Society for hosting this competition, as well has everyone here who shared great discussions and notebooks. For me, they were really valuable and knowledgeable! My congratulations to all the winners!</p>
<p>Actually, I had time constraints: just one week to get started with this competition and make a submission. Given the time limitations and the provided competition dataset, I initially came up with a simple first solution of constructing a CNN model from scratch; no transfer learning was used here. There is no model ensembling, no big models, and no complex training techniques. No external datasets are used in this solution either. </p>
<p>I see that the other competitors spent much time and effort improving their solutions. There is a certain amount of luck in my managing to get to the top. This is not my first experience participating in Kaggle competitions, but it is the first award for my previous efforts.</p>
<p>The train/predict notebook is available here:<br>
<a href="https://www.kaggle.com/igorlashkov/rsna-miccai-btumor-classification-finished" target="_blank">https://www.kaggle.com/igorlashkov/rsna-miccai-btumor-classification-finished</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
8th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Well, that was unexpected. First of all I'd like to thank Kaggle and the host to make this competition. I'd also like to thank everyone who discussed leaderboard shakeup on the forum, because it really helped me understand what just happened. This was my first competition ever and I was participating as part of an assignment from a ML class I'm taking, so I got really confused with the results.</p>
<p>Nonetheless, I'm here to document and share my solution. But before that, I need to mention that the code is messy and all in just one notebook (<a href="https://www.kaggle.com/arturhcpereira/rsna-miccai-submission/notebook" target="_blank">here</a>), since I had one week to make the submission. Also, the idea of this assignment was to exercise what we had learned in class so far, so I just went with what we had most recently seen (we were covering feature extraction and the bag-of-words approach at the time).</p>
<h2>1. Extracting features</h2>
<p>The first step was to extract the features of all images of the training set. I used the ORB feature detector from OpenCV because it would take too long to run the solution if I used SIFT. I should have preprocessed all the data just once and serialized it to use as input afterwards, but I was still figuring Kaggle out at the time, so I didn't think of it.</p>
<p>So in this step, I iterate over the training data one sample at a time. For each sample, I iterate over every image (FLAIR, T1w, T1Gd, T2), extract its features and stack them, making a feature matrix of <em>n_features</em> by <em>descriptor_size</em> dimensions. At the end, I have a list of <em>train_size</em> matrices with varying numbers of rows.</p>
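<p>A minimal sketch of this per-sample feature extraction with OpenCV's ORB (the detector settings and the image-loading side are assumptions, not the exact code used):</p>
<pre><code>import cv2
import numpy as np

orb = cv2.ORB_create()  # default settings; the actual parameters are an assumption

def extract_features(images):
    """Stack the ORB descriptors of all images of one sample into an (n_features, 32) matrix."""
    descriptors = []
    for img in images:  # img: 2D uint8 array (a slice from FLAIR/T1w/T1Gd/T2)
        _, des = orb.detectAndCompute(img, None)
        if des is not None:
            descriptors.append(des)
    return np.vstack(descriptors) if descriptors else np.empty((0, 32), dtype=np.uint8)
</code></pre>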
<h2>2. Creating the visual vocabulary</h2>
<p>Once I have all the features for all the samples, I need to establish what I will consider a word for my bag-of-words. So in this step I stack all the feature matrices of the training set making a <em>n_features0</em> + <em>n_features1</em> + … + <em>n_featuresN</em> by <em>descriptor_size</em> matrix (<em>all_features</em>).</p>
<p>I run a clustering algorithm over <em>all_features</em>, and the clusters of the fitted model are my vocabulary. I used MiniBatchKMeans from scikit-learn because I was running into memory issues with the number of descriptors used to fit the model.</p>
<h2>3. Word frequency histograms</h2>
<p>With the vocabulary in hand, I just needed to run through each sample's feature matrix to convert it to a histogram of word frequency. For each feature in a matrix I used the clustering model to predict its cluster and incremented the corresponding position in the histogram by 1/<em>n_features</em>. This process is later repeated for the test set in order to get the predictions.</p>
<h2>4. The classifier and the predictions</h2>
<p>Here I just used an SVC from scikit-learn with all the parameters set to default. I trained the model in the training set histograms and did the predictions on the test set histograms. I set probability to true when making the predictions.</p>
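<p>Putting steps 2-4 together, a minimal sketch of the bag-of-visual-words pipeline could look like this (the vocabulary size and the variables <code>train_feature_matrices</code>, <code>test_feature_matrices</code> and <code>y_train</code> are hypothetical placeholders for the per-sample descriptor matrices and labels):</p>
<pre><code>import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

n_words = 256  # vocabulary size is an assumption

# 2. vocabulary: cluster all stacked training descriptors
all_features = np.vstack(train_feature_matrices)
kmeans = MiniBatchKMeans(n_clusters=n_words).fit(all_features)

# 3. word-frequency histogram for one sample
def to_histogram(features):
    hist = np.zeros(n_words)
    for w in kmeans.predict(features):
        hist[w] += 1.0 / len(features)
    return hist

X_train = np.stack([to_histogram(f) for f in train_feature_matrices])
X_test = np.stack([to_histogram(f) for f in test_feature_matrices])

# 4. default SVC with probability outputs
clf = SVC(probability=True).fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
</code></pre>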
<h2>Final considerations</h2>
<p>I could see with this competition that the problem posed is really complex and, more likely than not, my rather simple bag-of-words approach is not suited for this task. One thing I found interesting when looking at the probabilities of the predictions is that no test case had a split more pronounced than 0.6/0.4, which I think means the model is not confident in what it is predicting.</p>
<p>I know that I was really lucky with the private leaderboard score, since I did a couple of late submissions to see if the performance was consistent and they got 0.58 and 0.59 in the private leaderboard, instead of the previous 0.6. But it seems that no model was able to perform well in the task, so I think the results serve the purpose of showing that there is a long way to go in order to solve the task at hand.</p>
<h2>Notebook</h2>
<p><a href="https://www.kaggle.com/arturhcpereira/rsna-miccai-submission/notebook" target="_blank">https://www.kaggle.com/arturhcpereira/rsna-miccai-submission/notebook</a></p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
Top 30 finish with 1 submission
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I checked the results this morning and was surprised to see a top 30 finish. TBH I did not expect it because my public score was only 0.54 and going by the early public scores of 0.7+ I was not expecting any better results. I want to highlight that this wasn't a fluke and I really did spend a lot of time on this but ended up submitting only one version. I wanted to try combining traditional models with CNNs but there wasn't enough time in the end. For the only submission that I was able to put in I used decision trees based models (RF, XGBoost etc.) and used Radiomics for extracting features related to tumor shapes. I am sharing my approach here hoping it will be of help to some:</p>
<p>Use <strong>Task 1 Dataset</strong> to create segmentation model (thank you <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> for creating the dataset)</p>
<ol>
<li>Convert Task 1 Nifti files to Dicom files using plastimatch. This was done to align with Task 2 file formats. I only used FLAIR and Segmentation files. </li>
<li>Use pydicom to return 3D voxels from Dicom files. Randomly select one of the three axes for a subject and convert to TFRecords for training a VGG-based U-Net segmentation model with node aggregation. I did not predict on multiple classes… the model only predicts tumorous cells. I also did not refine the model too much and was satisfied with a CV score of about 0.92.</li>
</ol>
<p><strong>Task 2 Dataset:</strong></p>
<ol>
<li>Resample Task 2 dataset to correct for inconsistent spacing between slices & the image shapes.</li>
<li>Predict tumor segments using model trained in step 1. Choose 154 slices for each subject with the middle slice having the max tumor area.</li>
<li>Use Pyradiomics to extract features by passing the predicted masks and the corresponding slices from #2. I only extracted the first-order, shape, and grey-level run length features… totaling 58 features. Use these features to train an XGBoost model (see the sketch after this list).</li>
<li>Pass Test images from the same pipeline and predict prob of MGMT class.</li>
</ol>
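<p>A minimal sketch of the radiomics-plus-XGBoost step (the enabled feature classes follow the description above; the path lists, extractor settings and XGBoost parameters are assumptions):</p>
<pre><code>import pandas as pd
import xgboost as xgb
from radiomics import featureextractor

# enable only first-order, shape and grey-level run length (GLRLM) features
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
for cls in ('firstorder', 'shape', 'glrlm'):
    extractor.enableFeatureClassByName(cls)

rows = []
for image_path, mask_path in zip(image_paths, mask_paths):  # hypothetical per-subject paths
    result = extractor.execute(image_path, mask_path)
    rows.append({k: v for k, v in result.items() if not k.startswith('diagnostics')})
features = pd.DataFrame(rows)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=3)  # illustrative parameters
clf.fit(features, labels)                               # labels: MGMT targets per subject
probs = clf.predict_proba(features_test)[:, 1]          # same pipeline applied to the test set
</code></pre>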
<p>As you can see there is a lot of scope for improvement in this approach. By using true masks from Task 1 dataset I was getting CV scores of 0.64+. The competition hosts can use true masks for Task 2 dataset, and also use T1/T2 type scans. I had hoped to also extract some features using a simple CNN model using the predicted masks but did not get time. </p>
<p>Hopefully this explanation was useful to some. I did learn a lot from this competition so a shout out to the competition hosts.</p>
|
RSNA-MICCAI Brain Tumor Radiogenomic Classification
|
[9th] Simple and kinda stable approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA-MICCAI Brain Tumor Radiogenomic Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone! First of all, I would like to thank the organizers of the competition, as a result of it I became much closer to my goal of getting the Kaggle Competition Master rank at the age of 17.</p>
<p>I started solving this competition right after the previous one, two weeks before the end, so I didn't have time to write my pipeline from scratch as usual. Instead, I picked <a href="https://www.kaggle.com/rluethy/efficientnet3d-with-one-mri-type" target="_blank">this</a> notebook as a base. I spent 2 weeks trying different experimental approaches: I tried to solve this problem as a segmentation or detection one, and I took pretrained models and replaced their heads with a classification head. None of these ideas ever produced an acceptable leaderboard score. </p>
<p>At the same time, I was training EfficientNet3D. Since I didn't have a lot of hardware and time, I used a small image size - 224. Instead of modeling or applying augmentations as usual, I decided to spend the time checking how stable my solution is. As an experiment, I decided to run my solution 10 times with different seeds (the training takes less than 3 hours on an RTX 3080); this was the first competition where I did something like this. After 5 such trainings the CV range was [0.5, 0.54], so I stopped there. There were a few days left until the end of the competition, and I decided to train the model on a larger image size and check the stability of this solution in the same way, so I switched the image size from 224 to 384. This time the CV range after 5 trainings was around [0.52, 0.55], which I found quite interesting, because it suggests that a larger image size produces better stability. </p>
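<p>The stability check itself is just the same training repeated with different seeds while recording the spread of CV scores. A minimal sketch (the <code>train_cv</code> function is a hypothetical stand-in for the actual training pipeline):</p>
<pre><code>import random
import numpy as np
import torch

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

cv_scores = []
for seed in range(5):
    set_seed(seed)
    cv_scores.append(train_cv(seed=seed, img_size=384))  # hypothetical: returns mean 5-fold CV score
print(f"CV range: [{min(cv_scores):.2f}, {max(cv_scores):.2f}]")
</code></pre>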
<p>Finally, I picked an ensemble of these 5 trainings (5 folds each) as my final submission. It scored 0.66701 on public and 0.60186 on private, which is better than the previous submission (based on a single training at 224 image size) by 0.07 and 0.03 respectively. Maybe an even greater image size would produce a higher and more stable score, but I didn't have time to train it. I only watched the validation and didn't look at the leaderboard at all, and I submitted only the two best validation solutions so that I didn't have to choose. </p>
<p>The reason for my personal shake-up lies in the fact that my best validation solution was also the most stable one (as tested). I made it as stable as I could, but I clearly understand that it's possible to get the same score with purely random predictions.</p>
<p>For those who were competing but in the end didn't receive a medal, firstly I express my great respect to you, and secondly I ask you not to get upset. I know how it feels: I once worked on a competition for 3 months, but I didn't even get a medal because of a noisy leaderboard. You can check that solution <a href="https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/220586#1209522" target="_blank">here</a>. I wrote it when I was 16 years old and at that time it was my best solution.</p>
<p>I wish you good luck at the next competitions, see you there!</p>
<p>They have almost the same range</p>
<p>No, it wasn't. If the overall range equals 0.14, a single CV range equals 0.09-0.12.</p>
|
RTA Freeway Travel Time Prediction
|
Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RTA Freeway Travel Time Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|>For anybody interested, here's the actual solution.
|
Rainforest Connection Species Audio Detection
|
#6 Solution 🐼Tropic Thunder🐼 🚫 No hand labels🚫
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Very nice question. I have been thinking about that I am not sure about the answer, I would need to test. But one thing for sure is that in that case the injection of coord matrixes should <em>also</em> be tested at the bottleneck, i.e. it makes sense to either insert positional not compressed and narrow (2ch) spatial encodings (similar to coordconv) or positional dense (512/1024/2048ch) encodings right before final global average pooling. My suspicion the latter would be better but I have not tested (yet).</p>
<p>Thanks!!! Long time since we did Avito!</p>
<p>Significantly, all other params being equal we burned a sub testing impact:</p>
<p><img src="https://i.imgur.com/PCyPUfA.png" alt=""></p>
<p>It was a great competition. I'll document our journey primarily from my perspective, my teammates <a href="https://www.kaggle.com/jpison" target="_blank">@jpison</a> <a href="https://www.kaggle.com/amezet" target="_blank">@amezet</a> <a href="https://www.kaggle.com/pavelgonchar" target="_blank">@pavelgonchar</a> may chime in to add extra color… </p>
<p>Our solution is a rank ensemble of two types of models: primarily (97%) the architecture described below, and 3% an ensemble of SED models. The architecture below was built in the last 7 days… I made my first (pretty bad) sub 8 days ago, I joined the team just before the team merge deadline, and all the ideas were done/implemented in ~7 days…</p>
<p>A single 5-fold model achieves 0.961 private, 0.956 public; details:</p>
<p><strong>Input representation.</strong> This is probably the key: we take each TP or FP and, after computing the spectrogram (not MEL, b/c MEL was intended to model human audition and the rainforest species have not evolved like our human audition), crop it time- and frequency-wise with a fixed size on a per-class basis. E.g. for class 0: the minimum freq is 5906.25 Hz and the max is 8250 Hz, and time-wise we take the longest time length among TPs, i.e. for class 0: 1.29s. </p>
<p>With the above we sample the spectrogram to yield an image, e.g.:<br>
<img src="https://i.imgur.com/bTw0aeQ.png" alt=""></p>
<p>The GT for that TP would be: </p>
<p><code>tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 1., nan, nan, nan,nan, nan, nan, nan, nan, nan, nan, nan]))</code></p>
<p>Like other competitors, we expand the classes from 24 to 26 to split the species that have two song types. </p>
<p>Other sample:</p>
<p><img src="https://i.imgur.com/JST401j.png" alt=""></p>
<p><code>tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 1., nan, nan, nan, nan, nan,nan, nan, nan, nan, nan, nan, nan, nan]))</code></p>
<p>Note the image size is the same, so time and frequency are effectively stretched wrt what the net will see, but I believe this is fine as long as the net has enough receptive field (which it has). </p>
<p>The reason for doing it this way is that we need to inject the time and frequency restrictions as an inductive bias somehow, and this looks like a nice way to do it.</p>
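<p>A minimal sketch of such a per-class crop (the helper below is hypothetical; the per-class frequency/time limits would come from aggregating train_tp.csv, and the STFT parameters are assumptions):</p>
<pre><code>import librosa
import numpy as np

SR, N_FFT, HOP = 48000, 2048, 512  # assumed STFT parameters

def class_crop(y, t_start, cls_limits):
    """Crop a fixed per-class frequency band and time window from a linear spectrogram."""
    f_min, f_max, duration = cls_limits  # e.g. (5906.25, 8250.0, 1.29) for class 0
    S = np.abs(librosa.stft(y, n_fft=N_FFT, hop_length=HOP))
    freqs = librosa.fft_frequencies(sr=SR, n_fft=N_FFT)
    f_lo, f_hi = np.searchsorted(freqs, [f_min, f_max])
    t_lo = librosa.time_to_frames(t_start, sr=SR, hop_length=HOP)
    t_hi = librosa.time_to_frames(t_start + duration, sr=SR, hop_length=HOP)
    return S[f_lo:f_hi, t_lo:t_hi]
</code></pre>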
<p><strong>Model architecture.</strong> This is just an image classifier outputting 26 logits, that's it. The only bell and whistle is adding relative positional information on the freq axis (à la <a href="https://arxiv.org/abs/1807.03247" target="_blank">coordconv</a>), so the model is embarrassingly simple:</p>
<pre><code>import timm
import torch
from torch import nn
from fastai.vision.all import Module  # fastai's Module needs no explicit super().__init__()

# `a` is the experiment config (arch name, dropout rate, coord flag); n_species = 26
class TropicModel(Module):
    def __init__(self):
        self.trunk = timm.create_model(a.arch,pretrained=True,num_classes=n_species,in_chans=1+a.coord)
        self.do = nn.Dropout2d(a.do)
    def forward(self,x):
        bs,_,freq_bins,time_bins = x.size()
        # coordconv-style extra channel: linear gradient along the frequency axis
        coord = torch.linspace(-1,1,freq_bins,dtype=x.dtype,device=x.device).view(1,1,-1,1).expand(bs,1,-1,time_bins)
        if a.coord: x = torch.cat((x,coord),dim=1)
        x = self.do(x)
        return self.trunk(x)
</code></pre>
<p><strong>Loss function.</strong> Just masked Focal loss. Actually this was a mistake b/c Focal loss was a remnant of a dead test and I (accidentally) left it there; I thought (until I checked the code now to write this writeup) that BCE was being used. Since we are doing balancing, BCE should work better.</p>
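<p>The "masked" part just means that NaN targets (classes we know nothing about) are excluded from the loss. A minimal sketch with plain BCE (the focal variant would weight the same masked terms; this helper is illustrative, not the exact code used):</p>
<pre><code>import torch
import torch.nn.functional as F

def masked_bce(logits, targets):
    """BCE over the entries whose label is known (1 for TP, 0 for FP); NaN entries are ignored."""
    mask = ~torch.isnan(targets)
    return F.binary_cross_entropy_with_logits(logits[mask], targets[mask])

logits = torch.randn(4, 26)
targets = torch.full((4, 26), float('nan'))
targets[0, 14] = 1.  # TP for class 14
targets[1, 3] = 0.   # FP, i.e. a known negative, for class 3
loss = masked_bce(logits, targets)
</code></pre>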
<p><strong>Mixover.</strong> Inspired by mixup, mixover takes a bunch of (unbalanced) TPs and FPs (which strictly speaking are TNs) and creates combinations of them so that the resulting labels can be supervised (1+NaN=1, 0+NaN=NaN, 0+0=0), making sure linear interpolation is not destructive (Beta with alpha=4; clip to 0.2,0.8); then it samples from the resulting mixed items, computing the class distribution so that the resulting samples are balanced. The code is a bit tricky, but still:</p>
<pre><code>class MixOver(MixHandler):
"Inspired by implementation of https://arxiv.org/abs/1710.09412"
    def __init__(self, alpha=4.): super().__init__(alpha)  # alpha=4 per the Beta parameters mentioned in the text
def before_batch(self):
ny_dims,nx_dims = len(self.y.size()),len(self.x.size())
bs=find_bs(self.xb)
all_combinations = L(itertools.combinations(range(find_bs(self.xb)), 2))
lam = self.distrib.sample((len(all_combinations),)).squeeze().to(self.x.device).clip(0.2,0.8)
lam = torch.stack([lam, 1-lam], 1)
self.lam = lam.max(1)[0]
comb = all_combinations
yb0,yb1 = L(self.yb).itemgot(comb.itemgot(0))[0],L(self.yb).itemgot(comb.itemgot(1))[0]
yb_one = torch.full_like(yb0,np.nan)
yb_one[yb0>0.5] = yb0[yb0>0.5]
yb_one[yb1>0.5] = yb1[yb1>0.5]
yb_two = torch.clip(yb0+yb1,0,1.)
yb_com = yb_one.clone()
yb_com[~torch.isnan(yb_two)] = yb_two[~torch.isnan(yb_two)]
n_ones_or_zeros=(~torch.isnan(yb_com)).sum()
ones=torch.sum(yb_com>=0.5,dim=1)
zeros=torch.sum(yb_com<0.5,dim=1)
p_ones = (n_ones_or_zeros/(2*( ones.sum())))/ones
p_zeros= (n_ones_or_zeros/(2*(zeros.sum())))/zeros
p_zeros[torch.isinf(p_zeros)],p_ones[torch.isinf(p_ones)]=0,0
p=(p_ones+p_zeros).cpu().numpy()/(p_ones+p_zeros).sum().item()
shuffle=torch.from_numpy(np.random.choice(yb_com.size(0),size=bs,replace=True,p=p)).to(self.x.device)
comb = all_combinations[shuffle]
xb0,xb1 = tuple(L(self.xb).itemgot(comb.itemgot(0))),tuple(L(self.xb).itemgot(comb.itemgot(1)))
self.learn.xb = tuple(L(xb0,xb1).map_zip(torch.lerp,weight=unsqueeze(self.lam[shuffle], n=nx_dims-1)))
self.learn.yb = (yb_com[shuffle],)
</code></pre>
<p><strong>Augmentations.</strong> Time jitter (10% of time length), white noise (3 dB).</p>
<p><strong>External data.</strong> We used the <a href="https://www.kaggle.com/rohanrao/xeno-canto-bird-recordings-extended-a-m" target="_blank">Xeno Canto A-M and N-Z</a> recordings, and since they have 264 species we made the wild assumption that, with 24 species of ours against 10X of theirs, random sampling would give a "right" label (TN) 90% of the time… the goal was to add jungle/rainforest diversity at the expense of a few noisy labels, which we accounted for by setting these weak labels to 0.1 (vs 0).</p>
<p><strong>Pseudolabeling.</strong> We also pseudolabeled the unlabeled parts of the training data using OOF models to mine more TPs, and manually balanced the resulting pseudo-labeled TPs. </p>
<p><strong>Code.</strong> Pytorch, Fastai.</p>
<p><strong>Final thoughts.</strong> It was a very fun competition and I am very glad of achieving a gold medal with a very short time, I want to thank Kaggle, sponsor and competitors for the competition and discussions; and of course my teammates <a href="https://www.kaggle.com/jpison" target="_blank">@jpison</a> <a href="https://www.kaggle.com/amezet" target="_blank">@amezet</a> <a href="https://www.kaggle.com/pavelgonchar" target="_blank">@pavelgonchar</a> for inviting me giving me the small nudge I needed to join 😉</p>
<p>Edit: I've added no hand labels to the title because this competition was VERY unique in that Kaggle effectively allowed hand labeling and that's very unusual. (Re: hand labeling vs external data, it was also very uncommon not having to disclose which external data you used during competition).</p>
<p>Hi Alex,</p>
<ol>
<li>Yes.</li>
<li>Yes.</li>
<li>Rationale is the same: TP means species is present, we don't know anything else. FP means species is not present, we don't know anything else either; so everything is init as nan and populated w/ 1s and 0s with the information we know for sure.</li>
</ol>
|
Rainforest Connection Species Audio Detection
|
10th-place solution - Agnostic loss for semi-supervised learning
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>To clarify, those are the frames which I kept - the scores indicated that the model was very confident a class was/wasn't present in it. I would then use this as a definite label.</p>
<p>Cheers Corey!</p>
<p>I believe it was very important, but I've been through some of my predictions and the pseudolabels were indeed poor for some classes - species 20 looks to be the worst. The most common classes in the data seem to be ones that were quite easy for models to identify, which makes me wonder if this competition would've been more interesting if we were evaluated on class-averaged metrics.</p>
<p>Good question about the input channels - I never even tested your alternative. Since non-SED models were a significant part of my submission, I just assumed it would be important to include some temporal information via delta and/or delta-delta.</p>
<p>Thanks! Using ResNeSt gave me substantially better results. If you want the SED variant, it's <a href="https://www.kaggle.com/bigironsphere/single-channel-resnest-for-panns-sed-architecture" target="_blank">here</a></p>
<p>Thank you to the organisers, Kaggle, and to everyone who shared ideas and code for this competition. I learned a lot, as I'm sure many of you have, and I thought I would break down my approach since I know many competitors couldn't find a way to use the False Positive labels. I'm thrilled to have secured my first gold (and solo gold) in a competition, so it's the least I could do!</p>
<h3>Summary</h3>
<p>On a high-level, my approach was as follows:</p>
<ul>
<li>train models using existing labels</li>
<li>generate pseudo-labels on train/test</li>
<li>isolate the frames which had a very high/low prob across an ensemble of models</li>
<li>conservatively threshold these and use as new TP/FP values for the relevant class</li>
<li>repeat</li>
</ul>
<p>This gradually increased the amount of data I had to work with, until I had at least 2 frames from each recording with an identified TP and/or FP. The growing data diversity allowed successive models to generalise better.</p>
<h3>What Worked</h3>
<p><strong>Fixed Windows</strong></p>
<p>I used a window of 5 seconds centred on the TP/FP. For predicting on test, I used overlapping windows with a step of 2.5 seconds. This ensured that train/test had identical preprocessing for their inputs. The maximum value was taken for each class across all of these windows. For some submissions, I used the average of the top 3 predictions but this didn't seem to notably change the LB score.</p>
<p><strong>Agnostic Loss</strong></p>
<p>Perhaps a better term already exists for this, but this was what I called my method for using both the TPs and FPs in a semi-supervised fashion. The problem we face with unlabelled data is that any spectrogram can contain multiple classes, so setting the target as 0 for everything apart from the <em>given</em> label will penalise true positives for other classes. We can only know for sure that one class is present or absent, and the loss needs to reflect this. So I excluded all non-definite targets from the loss calculation. In the target tensor, a TP is 1 while an FP is 0. Unlabelled classes are given as 0.5. These values are then excluded from the loss calculation. So if we had 5 classes (we have 24, but I'm saving room here) and this time window contained a TP for class 0 and an FP for class 3:</p>
<pre><code>y = torch.Tensor([1., 0.5, 0.5, 0., 0.5])
</code></pre>
<p>And in the loss calculation:</p>
<pre><code>preds = model(inputs)
preds[targets==0.5] = 0
loss = BCEWithLogitsLoss(preds, targets)
loss.backward()
</code></pre>
<p>Thus the model is 'agnostic' to the majority of the potential labels. This allows the model to build a guided feature representation of the different classes without being inadvertently given false negatives. This approach gave me substantially better LB scores.</p>
<p>The figure of 0.5 is arbitrary and could've been any value apart from 0 or 1: the salient point is that the loss resulting from unlabelled classes is always constant. Note that this kind of inplace operation is incompatible with <code>nn.Sigmoid</code> or its functional equivalent when performing backprop so you need to use the raw logits via <code>torch.nn.BCEWithLogitsLoss()</code>.</p>
<p><strong>ResNeSt</strong></p>
<p>I found EfficientNet to be surprisingly poor in this competition, and all of my best scores came from using variants of ResNeSt (<a href="https://github.com/zhanghang1989/ResNeSt" target="_blank">https://github.com/zhanghang1989/ResNeSt</a>) paper available <a href="https://arxiv.org/abs/2004.08955" target="_blank">here</a>.</p>
<p>For 3-channel input I used the <code>librosa</code> mel-spectrogram with power 1, power 2 and the <code>delta</code> function to capture temporal information. With some models I experimented with a single power-1 spectrogram, delta and delta-delta features instead. While quicker to preprocess, I noticed no impact on scores.</p>
<p>I also incorporated it into the SED architecture as the encoder. This showed very promising metrics during training, and while sadly I didn't have time to run a fully-convergent example its inclusion still helped my score. In future competitions this could be a very useful model. ResNeSt itself only takes a 3-channel input and has no inbuilt function to extract features, so I had to rejig it to work properly: I'll be uploading a script with that model shortly in case anyone is interested.</p>
<p><strong>Augmentations</strong></p>
<p>From Hidehisa Arai's excellent kernel <a href="https://www.kaggle.com/hidehisaarai1213/rfcx-audio-data-augmentation-japanese-english" target="_blank">here</a>, I selected <code>GaussianNoiseSNR()</code>, <code>PinkNoiseSNR()</code>, <code>TimeShift()</code> and <code>VolumeControl()</code>. I was wary of augmentation methods that blank out time windows or frequency bands like <a href="https://arxiv.org/abs/1904.08779" target="_blank">SpecAugment</a>. Some of the sounds occur in a very narrow frequency range (e.g. species 21) or in a very narrow time window (e.g. species 9) and I didn't want to make 'empty', positive samples that would coerce the model into learning spurious features. I also added some trivial augmentations of my own:</p>
<ul>
<li>swapping the first and last half of the audio vector</li>
<li>adding a random constant before spectrogram normalisation (occluding the relevant features)</li>
<li>'jiggling' the time window around the centre of <code>t_mid</code>, at a maximum of 1 second offset in either direction</li>
</ul>
<p><strong>Eliminating species 19</strong></p>
<p>A minor point, but this class was so rare that setting all of its predictions to zero usually improved the LB score by 0.001. There were many unrelated sounds that would presumably cause the model to produce false positives. My best submission didn't do this however; it was a simple blend of models including ResNeSt-50, ResNeSt-101, EfficientNet-b1 and the SED architecture described above. I used weighted averaging roughly in proportion to the individual models' performance.</p>
<h3>What didn't work</h3>
<ul>
<li>Using separate models for different frequency ranges (these models never gained an adequate feature representation, and produced many false positives).</li>
<li>EfficientNet alone gave poor results, but helped as part of an ensemble.</li>
<li>Larger models (ResNeSt101, EfficientNet b3) didn't improve scores.</li>
<li>TP-only training.</li>
<li>Models that worked on all windows for a single clip - these were slow and produced inferior results.</li>
</ul>
<p>Otherwise I was quite lucky - I thought about my methodology for a while and most of what I tried worked well on the first attempt. If I'd had more time, I would have liked to try:</p>
<ul>
<li>automatically labelling some species as an FP of a similar class (e.g species 9 & 17)</li>
<li>probing the LB for class distribution (I suspect you could get +0.9 by only predicting the most common half of the classes and ignoring everything else) - I realised the importance of this too close to the deadline</li>
<li>experimenting with different encoders for the SED architecture.</li>
<li>using a smaller window size (<=3 seconds) for greater fidelity. </li>
</ul>
<p>The overall class prediction histograms for my final submission were as follows:</p>
<p><img src="https://i.imgur.com/GmJ9BkI.png" alt=""></p>
<p>Some classes gave me particular trouble. I used my own, simple scoring metric during training that recorded the proportion of positive cases that were predicted above a certain threshold. I never satisfactorily made a model that could detect the rarer classes like 6, 19 or 20 in a reliable fashion.</p>
<p>Overall I had an interesting time exploring how to work with the application of CNNs to spectrograms, and with large amounts of unlabelled data. In the next audio competition, perhaps I'll aim a little higher! I'm looking forward to seeing how those who scored > 0.97 managed to achieve their results.</p>
<p>If you have any questions I'll do my best to answer them!</p>
<p>I've provided the code for single-channel ResNeSt and its incorporation into the PANNs SED model here:</p>
<p><a href="https://www.kaggle.com/bigironsphere/single-channel-resnest-for-panns-sed-architecture" target="_blank">https://www.kaggle.com/bigironsphere/single-channel-resnest-for-panns-sed-architecture</a></p>
<p>I hope someone finds it useful!</p>
|
Rainforest Connection Species Audio Detection
|
11th place, The 0.931 Magic Explained: Image Classification
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I didn't think of post processing but I did think of pseudo labeling during the competition, yet could not make it work. After reading all the writeups where teams who passed me successfully used PL I revisited what i did. The issue was that instead of randomly sample crops for PL, I imposed a distribution that matches training samples, i.e. same number of positive pseudo labels per class. When I remove this bias then pseudo labeling works. In my first experiment, a single model gets a 0.01 boost on public and private LB. With tuning and iterations I now see how I could have moved higher. </p>
<p>I am not sure why I imposed this sampling bias. I'll try to be more careful next time.</p>
<blockquote>
<p>Do you mean for each class slide for x axis(time) and fixed crop for y axis (as we know frequency)? So we move along x and take max?</p>
</blockquote>
<p>yes.</p>
<ol>
<li><p>I cropped the spectrogram after generating it. I used fmin=90 and fmax=14000 for generating spectrogram. To find which pixels to crop, librosa has utilities functions that map frequencies to the number of mel.</p></li>
<li><p>I used FP and TP the same way. They give a 0 or a 1 label for an image.</p></li>
</ol>
<p>What is wrong with FP labels? They were good enough to get me a gold medal ;)</p>
<p><a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> Thanks, and thanks for addressing my concerns about oversharing ;)</p>
<p>It was a bit stressful to wake up every morning and see someone pass me while I was stuck. But the story has a happy ending, everything is good.</p>
<p>Congrats on your nth win in a row, this is amazing.</p>
<p>Tx, will check your writeup next.</p>
<p>For transformers I refer to a speech to text model that was released recently by facebook I think. I haven't looked in detail, maybe it is not applicable because ground truth is too sparse here.</p>
<blockquote>
<p>they are actually our best 0's. </p>
</blockquote>
<p>Indeed. I'd even say they are our ONLY 0's.</p>
<p>Thanks. We all tried stuff that failed for us but worked for others apparently. Mine was pseudo labeling. </p>
<p>I just checked, assuming all other targets are 0 instead of masking costs 0.01 on LB. I tried both BCE and softmax in that case.</p>
<blockquote>
<p>I thought Coereba flaveola was species_id=1 but I was not sure.</p>
</blockquote>
<p>Based on what data?</p>
<blockquote>
<p>Instead of cropping though I simply masked out the regions outside of the frequency band and time that were irrelevant. So I would crop around the region of the time and then I would mask out the frequencies that were irrelevant to that prediction. </p>
</blockquote>
<p>That's what I did. Crop then pad. Sorry if this is not clear enough in my post.</p>
<p>It looks like you had the same idea as me ;)</p>
<p>I train one model, but crops are different for each class. I need to apply the model to each crop separately.</p>
<blockquote>
<p>doesn't that mean the image generated per species is not comparable anymore </p>
</blockquote>
<p>Yes, this is why I need to perform inference separately for each class. </p>
<p>Thanks for sharing your codes. Mine is different.</p>
<p>Given that each crop has a single class target, I just use a single target overall. I know what class I am predicting when I crop, hence I don't need to give it to the model. My code is then very simple:</p>
<pre><code> loss_fct = nn.BCEWithLogitsLoss()
    logits = self.head(x)
    mask = input_dict['mask']          # one-hot mask: 1 for the class the crop belongs to, 0 elsewhere
    preds = (logits * mask).sum(-1, keepdim=True)  # keep only the logit of that class
    mono_targets = input_dict['mono_target']       # single 0/1 target for the crop
    loss = loss_fct(preds, mono_targets)
</code></pre>
<p>This does not work:</p>
<pre><code> with torch.no_grad():
loss[target<1] = 0
</code></pre>
<p>because FP crops have a target of 0 and you want the model to learn that. You must mask only the loss for other classes.</p>
<p>How is this</p>
<pre><code> p = torch.sigmoid(l)
logp = - torch.log(torch.clamp(p, 1e-4, 1-1e-4))
logn = - torch.log(torch.clamp(1-p, 1e-4, 1-1e-4))
loss = t*logp +(1-t)*logn
loss = loss.mean()
</code></pre>
<p>different from </p>
<pre><code> loss_fct = nn.BCEWithLogitsLoss()
loss = loss_fct(l, t)
</code></pre>
<p>I did ask you for how your method differs from mine. I hope you will explain.</p>
<p>Tiny crops of code is not the same as a written explanation. </p>
<p>From what I see, you crop by f_min and f_max, hence I don't see how it is different from what I did.</p>
<blockquote>
<p>It turns out that CNN knows position of image. Because of effects of padding</p>
</blockquote>
<p>This is what i thought as well. But I explored, for instance I read the paper you quote and others on positional encoding. It turns out that adding some positional encoding helps.</p>
<p>I used 5 fold CV on my crops. And I used roc-auc.</p>
<p>Good question!</p>
<p>Yes I tried it at some point and it was a bit worse than vanilla bce.</p>
<p>Focal loss only worked for me in segmentation tasks. This is the original use case of focal loss unless mistaken.</p>
<p>I only tried 24 classes once, and it was a bit worse.</p>
<p>Congrats on your strong finish. Looking forward to your writeup. </p>
<p>I wish I had waited two more week before submitting ;)</p>
<p>Just kidding, I am sure others would have submitted high scores soon.</p>
<p>Grid may be more effective actually, I'll try it next time. Congrats on your strong silver.</p>
<p>First of all, thanks to the host, for providing yet another very interesting audio challenge. The fact that it was only partly labelled was a significant difference with previous bird song competition.</p>
<p>Second, congrats to all those who managed to pass the 0.95 bar on public LB. I couldn't, and, as I write this before competition deadline, I don't know why. I tried a lot of things, and it looks like my approach has some fundamental limit.</p>
<p>Yet it was good enough to produce a first submission at 0.931, placing directly at 3rd spot while the competition had started two months earlier.</p>
<p>This looked great to me. In hindsight, if my first sub had been weaker, then I would not have stuck to its model and would probably have explored other models, like SED or Transformers.</p>
<p>Anyway, no need to complain, I learned a lot of stuff along the way, like how to efficiently implement teacher student training of all sorts. I hope this knowledge will be useful in the future.</p>
<p>Back to the topic, my approach was extremely simple: each row of train data, TP or FP, gives us a label for a crop in the (log mel) spectrogram of the corresponding recording. If time is x axis and frequency the y axis, as is generally the case, then t_min, t_max gives bounds on x axis, and f_min, f_max gives bounds on the y axis.</p>
<p>We then have a 26 multi label classification problem (24 species but two species have 2 song types. I treated each species/song type as a different class). This is easily handled with BCE Loss.</p>
<p>The only little caveat is that we are given 26 classes (species + song type) but we get only one class label, 0 or 1 per image. We only have to mask the loss for other classes and that's it!</p>
<p>I didn't know it when I did it, but a similar way has been used by some of the host of the competition, in this paper (not the one shared in the forum): <a href="https://www.sciencedirect.com/science/article/abs/pii/S0003682X20304795" target="_blank">https://www.sciencedirect.com/science/article/abs/pii/S0003682X20304795</a></p>
<p>The other caveat is that the competition metric works with a label for every class. Which we don't have in train data. However the competition metric is very similar to a roc auc score per recording: when a pair of predictions is in the wrong order, i.e. a positive label has a prediction lower than another negative label prediction, then the metric is lowered. As a proxy I decided to use roc-auc on my multi label classification problem. Correlation with public LB is noisy, but it was good enough to let me make progress without submitting for a while.</p>
<p>What worked best for me was to not resize the crops. It means my model had to learn from sometimes tiny images. To make it work by batch I pad all images to 4 seconds on the x axis. Crops longer than that were resized on the x axis; shorter ones were padded with 0. One thing that helped was to add a positional encoding on the frequency axis. Indeed, CNNs are good at learning translation-independent representations, and here we don't want the model to be frequency-independent. I simply added a linear gradient on the frequency axis to all my crops.</p>
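<p>A minimal sketch of that frequency-axis positional hint, adding a linear gradient directly to each crop (the scale of the gradient is an assumption):</p>
<pre><code>import numpy as np

def add_freq_gradient(crop, scale=1.0):
    """crop: (freq_bins, time_frames) spectrogram patch; add a linear ramp along the frequency axis."""
    ramp = np.linspace(0.0, 1.0, crop.shape[0], dtype=crop.dtype)[:, None]  # one value per frequency row
    return crop + scale * ramp
</code></pre>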
<p>For the rest my model is exactly what I used and shared in the previous bird song competition: <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183219" target="_blank">https://www.kaggle.com/c/birdsong-recognition/discussion/183219</a> Just using the code I shared there was almost good enough to get 0.931. The only differences are that I add noise as in the first solution in that competition. I also did not use a no call class here, nor secondary labels.</p>
<p>For prediction I predict on sliding crops of each test recording and take the overall max prediction. This is slow, as I need to do each of the 26 classes separately. This is also maybe where I lost against others: my model cannot learn long-range temporal patterns, nor class interactions.</p>
<p>With the above I entered high with an EfficientNet B0 model, and moved to 0.945 in a few submissions with EfficientNet B3. Then I got stuck for the remainder of the competition. </p>
<p>I was convinced that semi supervised learning was the key, and I implemented all sorts of methods, from Google (noisy student), Facebook, others (mean student). They all improved weaker models but could not improve my best models.</p>
<p>In the last days I looked for external data with the hope that it would make a difference. Curating all this and identifying which species correspond to the species_id we have took some time, and I only submitted models trained with it today. They are in the same range as previous ones, unfortunately. With a bit more time I am sure it could improve the score, but I doubt it would be significant.</p>
<p>For matching species to species id I used my best model and predicted the external data. It would be interesting to see if I got this mapping right. Here is what I converged to :</p>
<p>0 E. gryllus<br>
1 Eleutherodactylus brittoni<br>
2 Leptodactylus albilabris<br>
3 E. coqui<br>
4 E. hedricki<br>
5 Setophaga angelae<br>
6 Melanerpes portoricensis<br>
7 Coereba flaveola<br>
8 E. locustus<br>
9 Margarops fuscatus<br>
10 Loxigilla portoricensis<br>
11 Vireo altiloquus<br>
12 E. portoricensis<br>
13 Megascops nudipes<br>
14 E. richmondi<br>
15 Patagioenas squamosa<br>
16 Eleutherodactylus antillensis<br>
17 Turdus plumbeus<br>
18 E. unicolor<br>
19 Coccyzus vieilloti<br>
20 Todus mexicanus<br>
21 E. wightmanae<br>
22 Nesospingus speculiferus<br>
23 Spindalis portoricensis</p>
<p>The picture in the paper shared in the forum helped to disambiguate few cases: <a href="https://reader.elsevier.com/reader/sd/pii/S1574954120300637" target="_blank">https://reader.elsevier.com/reader/sd/pii/S1574954120300637</a> The paper also gives the list of species. My final selected subs did not include models trained on external data, given they were not improving.</p>
<p>This concludes my experience in this competition. I am looking forward to see how so many teams passed me during the competition. There is certainly a lot to be learned.</p>
<p>Edit. I am very pleased to get a solo gold in a deep learning competition., This is a first for me, and it was my goal here.</p>
<p>Edit 2: The models I trained on the last day with external data are actually better than the ones without. The best one has a private LB of 0.950 (5 folds). However, they are way better on private LB but not on public LB, so selecting them would have been an act of faith. And late submissions show they are not good enough to change my rank. No regrets then.</p>
<p>Edit 3: Using Chris Deotte's post processing, my best selected sub gets 0.7390 on private LB. It means that PP was what I missed and that my modeling approach was probably good enough. I'll definitely look at the test prediction distribution from now on!</p>
<p>Thanks. You finished strongly too. I am sure that if your studies had left you more time then you would have passed me. Also, I guess that most teams reused your SED model from last competition. This is also something to be proud of.</p>
<p>Sorry, I did not use it here. My bad. Let me fix the writeup.</p>
<p>I am not sure about what is not clear. You compute the BCE loss for all targets (no reduction), then multiply by the mask (1 for the song class to be predicted, 0 for all other classes), then take the mean for the batch.</p>
<p>Yes. I cropped by f_min and f_max.</p>
<p>Tx. I didn't look at public notebooks at all. Maybe I should have… It also looks like others used the same overall method, but got better results than me. </p>
<p>Tx. My limited skills in computer vision may be what prevented me from getting a higher score. Looking forward to your writeup. I am sure I'll learn stuff. And congrats as well on your result.</p>
<p>tx. Not only did I show it, but I shared I was using an image classification model. Next time I'll share less maybe ;)</p>
<p>Congrats on your solo gold too.</p>
|
Rainforest Connection Species Audio Detection
|
13th Place Solution – Mean Co-Teachers and Noisy Students
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey Everybody, I wanted to dump my solution real quick in case anyone was interested. </p>
<p>It seemed to me that the critical issue is that there are a <strong>TON</strong> of missing labels. The provided positive examples data (train_tp.csv) has ~1.2k labels. The LB probing that <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> did suggests 4-5 labels per clip on average. If the train data follows the same distribution we should expect ~21k labels, and that's just at the <em>clip</em> level. We'd expect to see multiple calls from the same bird per clip, i.e. multiple frame labels per clip label. My best models seemed to think there were closer to 40k labels.</p>
<p>So my idea was to do something along the lines of <a href="https://arxiv.org/abs/1911.04252" target="_blank">Noisy Student</a>, where the general idea is to do progressive pseudo labeling where each successive model is larger and there's more noise applied to the training data. On its own, Noisy Student doesn't work very well, so I used a few other tricks. </p>
<h3>1. Mean Teacher</h3>
<p>My first setup looks super similar to what's going on in <a href="https://www.kaggle.com/reppic/mean-teachers-find-more-birds" target="_blank">Mean Teachers Find More Birds</a>. I train on a combo of centered positive examples and random unlabeled samples using consistency and BCE loss. Here, I'm using SED + resnet34 and some light augmentation: gaussian noise, frame/frequency dropout. This gets me to <strong>0.865</strong>on the public lb.</p>
<p>Using 5-fold mean-teacher models, I do OOF prediction to get pseudo labels over the entire training dataset. </p>
<h3>2. Co-Teaching</h3>
<p>Now I want to train on my pseudo labels, but it's safe to assume they're pretty noisy. To deal with the bias introduced by my new, noisy labels, I do something along the lines of <a href="https://arxiv.org/abs/1804.06872" target="_blank">Co-Teaching</a>. Briefly, the idea is to train 2 models simultaneously on the same data, but with different augmentations applied to each. Then the samples with the highest loss from Model A are ignored when doing backprop in Model B and vice versa. The % of ignored samples gets ramped up slowly. The theory is that the models will learn the correct labels early in training and start to overfit to noise later on. By dropping potentially noisy labels, we avoid introducing a bad bias from our pseudo labels. </p>
<p>I modified the authors' idea slightly for the competition. In my setup, it's impossible for either model to ignore the good labels from train_tp or train_fp; only pseudo labels can be ignored. I believe this helps with class imbalance issues. </p>
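<p>A minimal sketch of that co-teaching loss selection (a hypothetical training-step helper; the forget-rate schedule and the rule that real TP/FP labels are never dropped follow the description above):</p>
<pre><code>import torch
import torch.nn.functional as F

def co_teaching_step(model_a, model_b, xa, xb, y, is_pseudo, forget_rate):
    """xa/xb: the same batch under different augmentations; is_pseudo marks pseudo-labelled samples."""
    loss_a = F.binary_cross_entropy_with_logits(model_a(xa), y, reduction='none').mean(dim=1)
    loss_b = F.binary_cross_entropy_with_logits(model_b(xb), y, reduction='none').mean(dim=1)
    n_keep = int((1.0 - forget_rate) * len(y))

    def keep_idx(peer_loss):
        order = torch.argsort(peer_loss)                 # lowest-loss (cleanest-looking) samples first
        keep = order[:n_keep]
        trusted = torch.nonzero(~is_pseudo).squeeze(1)   # real TP/FP labels can never be dropped
        return torch.unique(torch.cat([keep, trusted]))

    # each model is updated only on the samples its peer considers clean
    loss_for_a = loss_a[keep_idx(loss_b)].mean()
    loss_for_b = loss_b[keep_idx(loss_a)].mean()
    return loss_for_a, loss_for_b
</code></pre>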
<p>Using this setup with more aggressive augmentation and densenet 121, I'm able to get to <strong>0.906</strong> on the public lb. </p>
<h3>3. Heavy Mixup</h3>
<p>Finally, using my second round of pseudo labels, I train on randomly sampled segments from all the training data. Here I apply even more aggressive augmentations and add mixup 60% of the time with a mixing weight sampled from <code>Beta(5,5)</code> (typically around 0.5). For mixup, any label present in either clip gets set to 1.0. I run this for 80 epochs; the previous 2 models were run for around 32 epochs. A 5 fold ensemble with this setup using densenet 121 gets me up to <strong>0.940</strong> on the public lb.</p>
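<p>A minimal sketch of that mixup variant on log mel spectrograms with union labels (an illustrative helper, not the exact implementation):</p>
<pre><code>import numpy as np

def mixup_union(spec1, y1, spec2, y2, alpha=5.0):
    """Mix two log mel spectrograms; any class present in either clip keeps label 1.0."""
    lam = np.random.beta(alpha, alpha)        # Beta(5,5) concentrates around 0.5
    spec = lam * spec1 + (1.0 - lam) * spec2
    y = np.maximum(y1, y2)                    # union of the two label vectors
    return spec, y
</code></pre>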
<p>I’m able to get to 0.943 by ensembling ~90 models taking the geometric mean.</p>
<h3>Other Tricks</h3>
<ul>
<li>Centering the labels from train_tp in the sampled clip segment early on seemed to help.</li>
<li>When making predictions I’m averaging 4 metrics: average and max clip-wise and frame-wise predictions.</li>
<li>Mixup only worked for me when it was done on the log mel spectrograms. Doing it on the audio didn't work.</li>
<li>Augmentations (intensities varied) (excluding mixup): </li>
</ul>
<pre><code>import audiomentations as A  # assumption: A refers to the audiomentations library

augmenter = A.Compose([
A.AddGaussianNoise(p=0.5, max_amplitude=0.033),
A.AddGaussianSNR(p=0.5),
A.FrequencyMask(min_frequency_band=0.01, max_frequency_band=0.5, p=0.5),
A.TimeMask(min_band_part=0.01, max_band_part=0.5, p=0.5),
A.Gain(p=0.5)
])
</code></pre>
<p>Let me know if you have any questions! </p>
|
Rainforest Connection Species Audio Detection
|
14th Place Solution - Binary Classification on cropped frequency x time
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>There is only one classifier and for that classifier, all tp-samples are one and all fp-sample are zero.</p>
<p>During inference, if the model says one, then we track back using the frequency region info of the crop, for which class it is saying one.</p>
<p>First of all thanks to Kaggle and Hosts for organizing this competition.</p>
<p>We (me & <a href="https://www.kaggle.com/ks2019" target="_blank">@ks2019</a>) were initially working on Cassava Leaf Disease Classification, but this <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/212610" target="_blank">post</a> by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> gave us the direction that something different needs to be tried than what is going on in public notebooks. On closer inspection, we found something similar to what he found. The frequency-time crops in the audios for a specie_id are almost constant (e.g. for specie_id 23, in most of the recordings the audio frequency lies between 6459 and 11628 Hz, and its duration lasts for about 16 seconds). This gave us the idea of cropping out all the potential regions from the spectrogram and performing binary classification on them.</p>
<p>Our approach can be summarized as -</p>
<ol>
<li>Crop images from spectrogram with frequency ranging between max-min frequency observed for a specie_id and with time duration 2 times the max duration observed for a specie_id </li>
<li>Pre-Process: resize crops to size 128 x 256, scale between 0 and 1, and perform augmentation</li>
<li>Train a B0 binary classifier detecting the presence of a species (a single binary classifier - we track back, using the frequency information of the crop, which ID the classifier is being asked to detect)</li>
<li>Generate Pseudo-labels</li>
<li>Retrain</li>
<li>Perform inference on the test set, and take the mean of the top n (in our case 3) probabilities observed for a specie_id in a recording as the probability of that specie_id (see the sketch after this list)</li>
</ol>
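<p>A minimal sketch of the per-recording aggregation in step 6 (a hypothetical helper; <code>window_probs</code> holds the binary-classifier outputs for all crops of one recording and one specie_id):</p>
<pre><code>import numpy as np

def recording_score(window_probs, n=3):
    """Mean of the top-n window probabilities = recording-level probability for one specie_id."""
    top = np.sort(np.asarray(window_probs))[::-1][:n]
    return float(np.mean(top))
</code></pre>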
<p>Note: on the very first submission, a single model with the above approach gave us 0.921 on the public LB (0.927 on the private LB); then pseudo labeling and a little bit of blending took the private LB to 0.948.</p>
<h2>Cropping</h2>
<p>From each spectrogram, for each specie_id x songtype, we cropped out image sequences with the frequency range between min and max frequency observed for that specie_id x songtype, and then created image sequences with duration 2 times the max time interval the audio lasted for that specie_id x songtype in the train.<br>
<img src="https://github.com/PRATEEKKUMARAGNIHOTRI/CMS-trigger/blob/master/images/RFCX1.png" alt="Img"><br>
<a href="https://github.com/PRATEEKKUMARAGNIHOTRI/CMS-trigger/blob/master/images/RFCX1.png" target="_blank">Here</a> - In case img not visible</p>
<h2>Augmentation</h2>
<p>Along with adding random noise, we took a false positive sample of the same specie_id and added that to the audio sample. After this augmentation, the label of the recording_id x specie_id pair remained the same (i.e. a false negative remained a false negative and a true positive remained a true positive).</p>
<p>Yes, your understanding is correct!<br>
Only instead of T frames per sequence, we cropped along the time axis too with <code>duration 2 times the max time interval the audio lasted for that specie_id x songtype in the train</code>. And the hop length was half of the sequence cropping length.</p>
|
Rainforest Connection Species Audio Detection
|
17th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That is addressed by this</p>
<blockquote>
<p>When on inference time, we crop 24 patches (each of them corresponds to each species) from one time window, and put that 24 patches into CNN, so it takes 24x longer than usual CNN forward path.</p>
</blockquote>
<p>Congratulations to all the winners and gold getters, I guess those teams that broke the 0.950 wall have found the essence of this competition, which we couldn't. <br>
<br><br>
Firstly, thanks to the host for holding quite an interesting competition. Partly labeled classification is a challenging task, which made this competition more interesting than a simple bioacoustics audio tagging competition.<br>
<br><br>
Our solution is a ranking average of image classification models and SED models. <a href="https://www.kaggle.com/yasufuminakama" target="_blank">Y.Nakama</a>, <a href="https://www.kaggle.com/kaerunantoka" target="_blank">kaerururu</a>, and I worked a lot on SED models but couldn't break 0.90 until we merge with <a href="https://www.kaggle.com/thiraiwa" target="_blank">Taku Hiraiwa</a>. His model was based on image classification similar to <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220304" target="_blank">@cpmpml's model</a>. We found that quite good , so we focused on improving image classification models in the remained days.</p>
<h2>Image classification models</h2>
<p>It was <a href="https://www.kaggle.com/thiraiwa" target="_blank">Taku Hiraiwa</a>'s idea to only use the <em>annotated</em> part of the train data. To do so, we crop image patches from the log-melspectrogram of the train data based on the <code>t_min</code>, <code>t_max</code>, <code>f_min</code>, <code>f_max</code> information in train_tp.csv and train_fp.csv, and resize each patch to a fixed shape (say, 320 x 320 or so). The cropping is performed on the fly during training, so the part we crop out is randomized along the time axis.<br>
With these image patches we trained EfficientNet models, and monitored the F1 score with a threshold of 0.5. <br>
Here are the other details of the image classification models.</p>
<ul>
<li>image size: varies from (244, 244) to (456, 456) between models</li>
<li>backbone: EfficientNetB0 - B5 (used <a href="https://rwightman.github.io/pytorch-image-models/" target="_blank"><code>timm</code></a> and used <code>tf_efficientnet_b<0-5>_ns</code> weights).</li>
<li>augmentation: GaussianNoise, Gain, PitchShift from <a href="https://github.com/iver56/audiomentations" target="_blank">audiomentations</a> on the raw waveform. HorizontalFlip also had a slightly positive impact on the LB, so we used it (though we don't know why it worked).</li>
<li>AdamW optimizer with linear warmup scheduler</li>
<li>BCEFocalLoss</li>
</ul>
<p>In the end, we trained a stacking model that takes the output of the models below which achieve public 0.942:</p>
<ol>
<li><code>tf_efficientnet_b0_ns</code> image size 244</li>
<li><code>tf_efficientnet_b0_ns</code> image size 320</li>
<li><code>tf_efficientnet_b0_ns</code> image size 456</li>
<li><code>tf_efficientnet_b1_ns</code> image size 456</li>
<li><code>tf_efficientnet_b2_ns</code> image size 456</li>
<li><code>tf_efficientnet_b3_ns</code> image size 456</li>
<li><code>tf_efficientnet_b4_ns</code> image size 456</li>
<li><code>tf_efficientnet_b5_ns</code> image size 456</li>
</ol>
<h2>SED models</h2>
<p>All of our SED models use the head architecture introduced in the <a href="https://github.com/qiuqiangkong/audioset_tagging_cnn" target="_blank">PANNs repository</a>. The CNN encoder is either EfficientNet or ResNeSt, and they are trained with weak/strong supervision. We tried a lot of things here, but couldn't find factors that consistently worked well - the LB scores varied quite randomly w.r.t. the CV score.<br>
Our best SED model is a rank average of the 11 models (public: 0.901, private: 0.911) below - each of them differs slightly, so we describe the differences briefly.</p>
<h3>2 x kaerururu's model (public: 0.882, 0.873)</h3>
<ul>
<li>Based on public starter SED (EffnB0) notebook (<a href="https://www.kaggle.com/gopidurgaprasad/rfcx-sed-model-stater" target="_blank">https://www.kaggle.com/gopidurgaprasad/rfcx-sed-model-stater</a>)</li>
<li>3ch input</li>
<li>10sec clip</li>
<li>waveform mixup</li>
<li>some augmentation (audiomentations)</li>
<li>pseudo-labeled datasets (add labels on tp data)</li>
<li>trained with tp and fp dataset (1st training)</li>
<li>trained with pseudo-labeled tp (2nd training)</li>
<li>tta=10</li>
</ul>
<h3>5 x arai's model (public: 0.879,0.880, 0.868, 0.874, 0.870)</h3>
<ul>
<li>Based on Birdcall's challenge 6th place (<a href="https://github.com/koukyo1994/kaggle-birdcall-6th-place" target="_blank">https://github.com/koukyo1994/kaggle-birdcall-6th-place</a>)</li>
<li>ResNeSt50 encoder or EfficientNetB3 encoder</li>
<li>AddPinkNoiseSNR / VolumeControl / PitchShift from <a href="https://www.kaggle.com/hidehisaarai1213/rfcx-audio-data-augmentation-japanese-english" target="_blank">My Notebook</a></li>
<li>tp only</li>
</ul>
<h3>4 x Y.Nakama's model (public: 0.871, 0.863, 0.866, 0.870)</h3>
<ul>
<li>Based on Birdcall's challenge 6th place (<a href="https://github.com/koukyo1994/kaggle-birdcall-6th-place" target="_blank">https://github.com/koukyo1994/kaggle-birdcall-6th-place</a>)</li>
<li>ResNeSt50 encoder or ResNet200D encoder</li>
<li>mixup & some augmentations</li>
<li>2nd stage training<ul>
<li>1st stage: weighted loss for framewise_logit & logit</li>
<li>2nd stage: loss for logit</li></ul></li>
<li>tp only</li>
</ul>
<p>Unfortunately not.<br>
At first I thought it was worth trying because I thought the key to this competition was few-shot learning, but after a while I found this competition to be more like a PU (positive-unlabeled) learning competition, so I deprioritized COLA and other audio representation learning methods.</p>
<p>Thank you! </p>
<blockquote>
<p>I am glad you mentioned my Cornell model :)</p>
</blockquote>
<p>Oh, I meant that it turned out our image classification model was similar to your model in this competition (I found that after we read your solution), but yes, we were also helped a lot by your solution in Cornell's competition. <br>
I saw that many top-performing teams used the loss masking technique, which was also used in your Cornell solution. I'm not sure whether they came up with that idea on their own or were inspired by yours, but I'm quite sure your solution had a big impact on this competition.</p>
<p>Thanks!</p>
<blockquote>
<p>In Image classification models, can models detect multi label?</p>
</blockquote>
<p>No, it cannot. To be precise, the image classification model only looks at the region around where a specific species exists. We created dicts that give the frequency range of each species, calculated from the train data. When we crop an image patch, we only take the region inside that frequency range. Of course, the frequency range differs between species, so the crops have different shapes, and we resize them to a fixed shape. <br>
At inference time, we crop 24 patches (one per species) from one time window and put those 24 patches into the CNN, so it takes 24x longer than the usual CNN forward pass.</p>
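<p>As a rough sketch of what that inference loop could look like (the frequency-range dict, the single-channel/single-output model and the resize shape are assumptions for illustration):</p>
<pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_window(model, mel_window: torch.Tensor, freq_ranges: dict, out_size=(320, 320)):
    """mel_window: (n_mels, time) log-mel patch for one time window.

    For each of the 24 species we crop its frequency band, resize to a fixed shape
    and run one batched forward pass - hence the 24x cost of a normal forward pass."""
    patches = []
    for species_id in range(24):
        lo, hi = freq_ranges[species_id]                    # mel-bin range of this species
        band = mel_window[lo:hi].unsqueeze(0).unsqueeze(0)  # (1, 1, bins, time)
        patches.append(F.interpolate(band, size=out_size, mode='bilinear', align_corners=False))
    batch = torch.cat(patches, dim=0)                       # (24, 1, H, W)
    logits = model(batch)                                   # assumes a 1-channel, 1-output model
    return torch.sigmoid(logits).squeeze(1)                 # presence probability per species
</code></pre>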
<p>Thank you😊😊😊</p>
<p>Same to you! It's also good to see many competitors from the Birdcall competition come to this one. At first it seemed similar to Birdcall, but it turned out it wasn't - both are interesting in different ways</p>
<p>Thank you! You also finished in a good place!</p>
|
Rainforest Connection Species Audio Detection
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yep, I think it is 3 on average.</p>
<p>Thanks! We spent only little time on it, maybe we should have explored it further. I think it has some potential, maybe also in combination with masking or something similar.</p>
<p>Thanks to Kaggle and hosts for this very interesting competition with a tricky setup. This has been as always a great collaborative effort and please also give your upvotes to <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>. In the following, we want to give a rough overview of our winning solution.</p>
<h3>TLDR</h3>
<p>Our solution is an ensemble of several CNNs, which take a mel spectrogram representation of the recording as input and predict on recording level using “weak labels” or on a more granular time level using “hard labels”. Key in our modeling is masking as part of the loss function to only account for provided annotations. In order to account for the large amount of missing annotations and the inconsistent way how train and test data was labeled we apply a sophisticated scaling of model predictions.</p>
<h3>Data setup & CV</h3>
<p>As most participants know, the training data was labeled substantially differently from the test data, and the training labels were sparse. Hence, it was really tricky, nearly impossible, to get a proper validation setup going. We tried quite a few things, such as treating all top-3 predicted labels as TPs when calculating the LWLRAP (because we know that on average a recording has 3 TPs), or calculating AUC only on segments where we know the labels (masked AUC), but in the end there was no good correlation to the public LB that we could find. This meant that we had to fully rely on the public LB as feedback for choosing our models and submissions. Thankfully, it was a random split from the full test population; anything else would most likely not have made much sense anyway.</p>
<h3>Models</h3>
<p>It's worth noting that for most models we also performed the mel spec transformation and augmentations like mixup or coarse dropout on the GPU, using the implementation that can be found in torchlibrosa (<a href="https://github.com/qiuqiangkong/torchlibrosa/blob/master/torchlibrosa/stft.py" target="_blank">https://github.com/qiuqiangkong/torchlibrosa/blob/master/torchlibrosa/stft.py</a>).</p>
<p>Our final models incorporate both hard and weak label models as explained next.</p>
<h4>Hard label models</h4>
<p>We refer to hard labels as labels that have hard time boundaries inside the recordings. Our hard label models were trained on the provided TP (target = 1) and FP (target = 0) labels with time-aware loss evaluation. We used a log-spectrogram tensor of variable time length as input to an EfficientNet backbone and restricted the pooling layer to only mean-pool over the frequency axis. After pooling, the output has 24 channels (one per species) and a time dimension. </p>
<p>We then map the time axis from the model to the time labels from the TPs and FPs and evaluate the BCE loss <strong>only</strong> for the parts with provided labels. For all other segments (which is actually the majority) the loss is ignored, as we have no prior knowledge about the presence or absence of species there. In the figure below we show what a masked label looks like: yellow means target=1, green is target=0 and purple is ignored.<br>
<img src="https://i.imgur.com/y9pBxlU.png" alt=""></p>
<p>For some models we added hand labeled parts of the train set but saw diminishing returns when labeling species that were missed by the TP/FP detector, which makes us wonder how the test labeling was done. Also, we wonder where the cut was made for background songs (e.g. species 2 had some calls in the background of several recordings, but the parts were labeled as FP). Most notably, adding TP labels for species 18 gave a substantial boost to LB score, and we believe that adding some hand labels to the mix of models in the blend helped with diversity and generalization. </p>
<p>For some models, similar to other top performing teams, we trained a second stage in which we replaced the masked part of the label with pseudo predictions of the first stage, but downweighted with factor 0.5. The main difference here to other teams is that we scaled the pseudo predictions in the same way we scale test predictions.</p>
<p>As augmentation we used mixup with lambda=3, SpecAugment and gaussian noise.</p>
<h4>Weak label models</h4>
<p>The models in this part of the blend are based on weak labels. The input is the log-spectrogram of the full 60 seconds of an audio recording, including all the labels for that clip, so it directly fits the format in which the final predictions need to be made. Due to missing labels, just fitting on the known TPs does not work too well, as we incorporate wrong labels by nature. We also cannot use the FPs directly, because even though an FP might be present in one part of the recording, that does not mean there is no TP at another position.</p>
<p>Hence, the models fit here include pseudo labels from our hard label models (see above) as well as some partial hand labels. For the pseudo labels, we take the raw output from the hard label models, but scale it to our expected true distribution (see post-processing). For the hand labels, we only pick the TPs as well as FPs that span over a 60-second period, so that we are sure the species is not part of that recording. In the loss, we weight the pseudo labels between 0.3-0.5 and the original labels and hand labels as 1.</p>
<p>If we just fit on the raw pseudo outputs, we would not learn anything new, so we employ concepts from noisy-student training. That means we not only use simple augmentations and mixup, but also randomly sample pseudo labels for each recording each time we train on it, based on a pool of stage-1 hard label models. For example, you fit 10 hard label models and then randomly sample one of them each time in the dataloader. This introduces randomness and gives a further boost on top of the stage-1 models.</p>
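<p>A minimal sketch of how such a dataset could look (the names and the exact label bookkeeping are assumptions): each draw of a recording picks one teacher from the pool of stage-1 models, and trusted TP/FP/hand labels override the pseudo labels and get full weight.</p>
<pre><code>import random
import numpy as np
from torch.utils.data import Dataset

class WeakLabelPseudoDataset(Dataset):
    def __init__(self, recording_ids, features, pseudo_label_pool, hard_labels, pseudo_weight=0.4):
        self.ids = recording_ids
        self.features = features           # e.g. precomputed log-mel specs per recording
        self.pool = pseudo_label_pool      # list of {recording_id: (24,) soft labels}, one per teacher
        self.hard = hard_labels            # {recording_id: ((24,) targets, (24,) mask)} from TP/FP/hand labels
        self.pseudo_weight = pseudo_weight # 0.3-0.5 in the write-up

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        rid = self.ids[idx]
        teacher = random.choice(self.pool)                # fresh teacher every time -> extra randomness
        target = teacher[rid].astype(np.float32)
        weight = np.full(24, self.pseudo_weight, dtype=np.float32)
        hard_target, hard_mask = self.hard[rid]
        target = np.where(hard_mask > 0, hard_target, target).astype(np.float32)
        weight = np.where(hard_mask > 0, 1.0, weight).astype(np.float32)
        return self.features[rid], target, weight         # weight multiplies the per-class BCE
</code></pre>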
<p>Additionally, we fit several backbones (efnetb0, efnetb3, seresnext26, mobilenetv2_120d) where each is trained on the full data (no folds) with several seeds. In the end this part of the blend is a bag of around 120 models, where some also have additional TTA (horizontal flip).</p>
<h3>How we are blending</h3>
<p>We are blending different model types described above as depicted by the following graphic:<br>
<img src="https://i.imgur.com/liB2Sic.png" alt=""></p>
<h3>Post processing</h3>
<p>We noticed that the test distribution of the target labels is substantially different from the provided train labels. Due to this fact, the models assume an unreasonably low or high probability when they are uncertain (Chris has already started a great thread about it <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220389" target="_blank">here</a>). To tackle this, we used several pieces of a priori information about the test distribution and scaled our predictions accordingly: by probing the public leaderboard we extracted a test label distribution which aligned well with a previous research paper from the hosts. With additional prior knowledge about the average number of labels per row (3) -- also confirmed by LB probing, as well as the research paper -- we applied either a linear (species_probas *= factor) or a power scaling (species_probas **= factor) per species to our predicted probabilities to match the top3 predictions distribution (orange) with the previously mentioned estimated test distribution (blue). But we didn't stop there, as we know that the number of labels per row is not always 3 but can be as low as 1 or as high as 8 (stated in the paper). Based on the sum of our probas in each row, we estimated the most likely topX (with a minimum count of 1) distribution (green) of the test set, and optimized the scaling factors by minimizing the total sum of the errors. <br>
<img src="https://i.imgur.com/Fbi6eQn.png" alt=""></p>
<h3>What did not work</h3>
<p>I think in the end quite a few of the things we tried ended up in the blend, fostering its diversity. But naturally, there were also many things that did not work; after all, we ran close to 2,000 experiments throughout the course of this competition. One noteworthy thing we tried was object detection based on the bounding boxes available in training. It worked reasonably well in a simple CV setting, reaching >0.7 LWLRAP on full 60-second recordings, but we never continued working on it with smaller crops or other settings.</p>
<p>We explored quite a few architectures in the hope of improving our ensemble, e.g. models that work on the raw waveform like Res1DNet or the just-released wav2vec, but none performed sufficiently well.</p>
<p>Thanks for reading. Questions are very welcome. <br>
Christof, Pascal & Philipp</p>
<p>Thanks to my amazing teammates and congrats on top 3 and top 10! <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> </p>
<p>I agree; specifically, the fact that the paper exists is a bit weird. It's not the first time this has happened in research competitions.</p>
<p>Without the PP it is not really possible for us, as we already incorporate our pseudos this way and the final distribution is already biased towards that. I think in the end the metric needs some form of scaling. For example, as the data contains 90% S3 labels, if you do not predict these high enough, the metric is hurt a lot. But the scaling can be achieved in different ways: for example, if you hand-label all the data as some did, you automatically move towards the test distribution, as the populations are roughly similar. I think a ranking loss may have some potential, but we did not find time to explore it.</p>
<p>After all, I see the public dataset as a validation set here. And the validation set is a fair sample from the test set. So naturally you will try to fit the validation set better, which includes properly moving the TPs to the top.</p>
<p>The weak label models fit really quickly since the data is quite small; I fitted ~40 of them in 24 hours on 3 GPUs.</p>
<p>Thanks Guanshuo! </p>
|
Rainforest Connection Species Audio Detection
|
21st place solution - FP co-teaching with loss improvements
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the top finishers!</p>
<p>The whole my solution is here: <a href="https://github.com/MPGek/mpgek-rfcx-species-audio-detection-public" target="_blank">https://github.com/MPGek/mpgek-rfcx-species-audio-detection-public</a>.<br>
I left the best configs that were used in the final submission.</p>
<h2>Summary</h2>
<p>Summary of my solution:</p>
<ul>
<li>General augmentations with frequency band augmentations:<ul>
<li>Frequency band filtering</li>
<li>Frequency band mixup</li></ul></li>
<li>TP training with a combined loss: BCE for confident samples and LSoft (beta 0.7) for noisy samples</li>
<li>FP training with BCE for confident samples and the losses for noisy samples ignored</li>
<li>FP co-teaching training with the loss as described in the co-teaching paper, but with an extra loss term for the high-loss samples</li>
<li>Ensemble of TP, FP, FP co-teaching results.</li>
</ul>
<h2>Spectrogram and random crop</h2>
<p>For training, I took a 10-second window centered on the annotated sample and then a random 6-second crop from it.<br>
For validation and prediction, I used 6-second crops with a stride of 2 seconds, taking the maximum over the outputs.<br>
For the mel spectrogram, I used the following parameters:</p>
<ul>
<li>mels count: 380</li>
<li>FFT size: 4096</li>
<li>window length: 1536</li>
<li>hop length: 400</li>
<li>fmin: 50</li>
<li>fmax: 15000</li>
</ul>
<p>So one 6-second sample produced an image of size 720 x 380 pixels (6 s x 48,000 Hz / 400 hop length = 720 frames, and 380 mel bins)</p>
<h2>Augmentations</h2>
<p><strong>Augmentation that improves LB:</strong></p>
<ul>
<li>Gaussian noise</li>
<li>Random crop + resize with size reduction in 40 and 20 pixels</li>
<li>Frequency band filtering - based on the f_min and f_max of the sample, I set all mel bins below f_min or above f_max to 0, with a sigmoid transition to avoid sharp edges (a minimal sketch is given after this list)<ul>
<li>Frequency band mixup - for some samples I applied frequency band filtering and then mixed the result with other samples, using one sample for the band above f_max and another for the band below f_min, so I obtained a single image with 3 mixed samples.</li></ul></li>
</ul>
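<p>A minimal sketch of the frequency band filtering mentioned in the list above, assuming the f_min/f_max values have already been converted to mel-bin indices (the transition width is an arbitrary choice):</p>
<pre><code>import numpy as np

def band_filter(mel_spec: np.ndarray, f_min_bin: int, f_max_bin: int, transition: float = 8.0):
    """Zero out mel bins outside [f_min_bin, f_max_bin] using a sigmoid transition
    instead of a hard cut, so the filtered spectrogram has no sharp edges."""
    bins = np.arange(mel_spec.shape[0], dtype=np.float32)
    lower = 1.0 / (1.0 + np.exp(-(bins - f_min_bin) / transition))   # ramps 0 -> 1 around f_min
    upper = 1.0 / (1.0 + np.exp(-(f_max_bin - bins) / transition))   # ramps 1 -> 0 around f_max
    return mel_spec * (lower * upper)[:, None]                       # mask shape: (n_mels, 1)
</code></pre>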
<p><strong>Example of the Frequency band filtering (top - original sample, bottom - sample after filtering):</strong><br>
<img src="https://github.com/MPGek/mpgek-rfcx-species-audio-detection-public/raw/main/img/Band%20filtering.png" alt=""><br>
<strong>Example of the Frequency band mixup (top - original sample, bottom - sample after mixup):</strong><br>
<img src="https://raw.githubusercontent.com/MPGek/mpgek-rfcx-species-audio-detection-public/main/img/Band%20mixup.png" alt=""><br>
<strong>Augmentation that doesn't improve LB:</strong></p>
<ul>
<li>SpecAugment</li>
<li>Sum mixup</li>
</ul>
<h2>Network topology</h2>
<p>I got the best results on EfficientNetB2, B4, B7 (noisy students weights) with a simple FC to 24 classes after the adaptive average pool.</p>
<p>I tried different heads, but all of them gave the same or worse results:</p>
<ul>
<li>the hyper column</li>
<li>the hyper column with auxiliary losses after each pooling</li>
<li>extra spatial and channel attentions blocks - CBAM</li>
<li>dense convolutions</li>
</ul>
<h2>TP training</h2>
<p>Based on the post describing that every sound file can contain unlabeled samples, I had to deal with noisy labels.</p>
<p>I split all samples into confident samples and noisy samples:</p>
<ul>
<li>confident samples - all sigmoid outputs for classes which are present in the train_tp.csv file (1 in the targets tensor)</li>
<li>noisy samples - all sigmoid outputs for classes which are not described in the train_tp.csv file (0 in the targets tensor)</li>
</ul>
<p>For the confident samples, I used simple BCE. For the noisy samples, I used LSoft with beta 0.7 (<a href="https://arxiv.org/pdf/1901.01189.pdf" target="_blank">https://arxiv.org/pdf/1901.01189.pdf</a>).<br>
LSoft:</p>
<pre><code># Inside an nn.Module whose __init__ sets self.beta (0.7) and self.reduction
def forward(self, input: torch.Tensor, target: torch.Tensor):
    with torch.no_grad():
        pred = torch.sigmoid(input)
        # soften the (possibly wrong) target towards the model's own prediction
        target_update = self.beta * target + (1 - self.beta) * pred
    loss = F.binary_cross_entropy_with_logits(input, target_update, reduction=self.reduction)
    return loss
</code></pre>
<p>In the loss function, I flatten all outputs (including the batch dim) into a linear array and split the items into 2 arrays: one where the targets are 1 and one where they are 0.</p>
<p>With LSoft I got 0.912-0.915 LB with EfficientNetB7.<br>
Without LSoft (BCE for all samples) I got only about 0.895-0.900.</p>
<h2>FP training</h2>
<p>For the FP training, I used a dataset with undersampling of the FP samples. Each epoch had all TP samples and the same count of the FP samples.<br>
I used batch sampling to provide balanced batches of TP/FP samples - after each TP I added an FP with the same species id.</p>
<p>In the loss function, I calculate the loss only for those sigmoid outputs that are present in train_tp.csv or train_fp.csv, so all noisy samples are ignored.</p>
<h2>FP co-teaching training</h2>
<p>I tried to find a way to use FOCI or SELFIE to deal with noisy data, but both of them use historical predictions of each sample. With my random crop and frequency band mixup that is almost impossible: even a shift of 0.5-1 seconds can add a new species to the sample, so the historical data would be incorrect.</p>
<p>I tried co-teaching training because it doesn't require historical data.<br>
Paper: <a href="https://arxiv.org/pdf/1804.06872.pdf" target="_blank">https://arxiv.org/pdf/1804.06872.pdf</a><br>
Code sample: <a href="https://github.com/bhanML/Co-teaching" target="_blank">https://github.com/bhanML/Co-teaching</a></p>
<p>When I implemented co-teaching training I had only 5 days before the deadline.<br>
The first experiments with co-teaching gave 0.830 LB for TP and 0.880 LB for FP, so it looked like a bad experiment.</p>
<p>I tried to improve the loss function by adding high-loss samples with changed targets (by default, co-teaching ignores high-loss samples as wrong samples).</p>
<p>The final loss function consists of the following parts (a rough sketch is given after the list):</p>
<ul>
<li>50% lowest-loss samples (all confident samples are always added to this part of the loss, with scale factor 2)</li>
<li>45% ignored losses</li>
<li>5% highest-loss samples with a changed target (1 for predictions with sigmoid >= 0.5 and 0 for sigmoid < 0.5)<br>
The loss implementation is here: <a href="https://github.com/MPGek/mpgek-rfcx-species-audio-detection-public/blob/main/model/forward_passes_coteaching.py" target="_blank">https://github.com/MPGek/mpgek-rfcx-species-audio-detection-public/blob/main/model/forward_passes_coteaching.py</a></li>
</ul>
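<p>A rough, single-network sketch of that loss is shown below. The real co-teaching setup selects the small-loss samples with the peer network, which is omitted here; the fractions, the scale factor 2 for confident samples and the relabeling rule follow the description above, while everything else is an assumption.</p>
<pre><code>import torch
import torch.nn.functional as F

def coteaching_style_loss(logits, targets, confident_mask, keep_frac=0.50, relabel_frac=0.05):
    """confident_mask: 1 where the class is listed in train_tp/train_fp, 0 otherwise."""
    logits, targets = logits.flatten(), targets.flatten()
    confident = confident_mask.flatten() > 0
    losses = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')

    order = torch.argsort(losses)                     # ascending per-element loss
    n = losses.numel()
    keep = torch.zeros(n, dtype=torch.bool, device=losses.device)
    keep[order[:int(keep_frac * n)]] = True           # 50% lowest-loss elements
    keep |= confident                                 # confident elements are always kept ...
    weights = torch.ones_like(losses)
    weights[confident] = 2.0                          # ... and weighted by 2
    low_part = (losses[keep] * weights[keep]).mean()

    high_idx = order[int((1.0 - relabel_frac) * n):]  # 5% highest-loss elements
    with torch.no_grad():                             # relabel them with the model's own prediction
        new_targets = (torch.sigmoid(logits[high_idx]) >= 0.5).float()
    high_part = F.binary_cross_entropy_with_logits(logits[high_idx], new_targets)

    return low_part + high_part                       # the remaining ~45% is simply ignored
</code></pre>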
<p>When I came up with this loss I had only 3 days before the deadline.<br>
This loss has good potential for future experiments. I ran only 2 experiments with it, both with the same hyperparameters. The first one had a bug, so it produced unrealistically high metrics and the wrong epochs were used for submission - however, even with the bug it produced a good LB score compared to the original FP training.</p>
<h2>Folds and best epoch metric</h2>
<p>To choose the best epoch in all trainings I used the BCE calculated only on the confident samples.<br>
In some experiments I used a 5-fold stratified KFold; in others, I used 7 folds with a stratified shuffle split with test size 0.3.</p>
<h2>Ensembles</h2>
<p>The TP training with EfficientNetB7 gave me only 0.912-0.915 on the public LB.<br>
The FP training with EfficientNetB2-B4 gave only 0.887-0.893 LB.<br>
The ensemble of the TP and FP gave 0.929 LB.</p>
<p>The FP co-teaching training on simple EfficientNetB2 gave me 0.925 LB (a quite good improvement from the original FP with 0.893)</p>
<p><strong>The final ensemble consists of all best experiments (0.941 public LB and 0.944 private LB):</strong></p>
<ul>
<li>TP EfficientNetB7 with 0.915</li>
<li>FP EfficientNetB2-B4 with 0.893</li>
<li>FP co-teaching EfficientNetB2 with 0.925</li>
</ul>
|
Rainforest Connection Species Audio Detection
|
23rd Place Solution: Supervised Contrastive Learning Meets Domain Generalization (with TF code)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Introduction</h3>
<p>Thanks Kaggle for this exciting competition and our team ( <a href="https://www.kaggle.com/dathudeptrai" target="_blank">@dathudeptrai</a> <a href="https://www.kaggle.com/mcggood" target="_blank">@mcggood</a> <a href="https://www.kaggle.com/akensert" target="_blank">@akensert</a> <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <a href="https://www.kaggle.com/ratthachat" target="_blank">@ratthachat</a> ) congratulate all winners -- we have learned a lot from this competition and all winners’ solutions !!</p>
<p>From the winner solutions, it turns out there are mainly 4 ways to break 0.95x</p>
<ol>
<li>Masked loss</li>
<li>Post-processing</li>
<li>Pseudo-labelling</li>
<li>Extra labeling</li>
</ol>
<p>We actually tried the first three but unfortunately could not really make them work effectively.<br>
Here, as an alternative solution, we would love to share our own solution which is able to reach 0.943 Private. </p>
<h3>Training pipeline (improve from 0.80x baseline to 0.92x)</h3>
<h4>Baseline</h4>
<p>Our baseline models score slightly above 0.8. We adopt an audio tagging approach using a densenet121 backbone and the BCE loss. </p>
<p>Our training pipeline includes the following tricks :</p>
<ul>
<li>Class-balanced sampling, using 8 classes per batch. Batch sizes were usually 64, and 32 for bigger models.</li>
<li>Cyclical learning rate with a min_lr of 0.0001, a max_lr of 0.001, and a step size of 3 epochs. We train models for 100 epochs with early stopping. </li>
<li>LookAhead with Adam optimizer (sync_period is 10 and slow_step_size is 0.5)</li>
</ul>
<h4>Pretraining with Supervised contrastive Learning (SCL) [0.81x -> 0.85x]</h4>
<p>Because of the small amount of data, models overfit quickly. To solve this problem, two options were using external data and cleverly pretraining our models. Unlike a lot of competitors, we focused on self-pretraining techniques: <a href="https://www.kaggle.com/dathudeptrai" target="_blank">@dathudeptrai</a> tried auto-encoders, GANs, SimCLR, COLA, and <a href="https://arxiv.org/abs/2004.11362" target="_blank">Supervised Contrastive Learning</a>, which ultimately was the only thing that worked. </p>
<h4>Non-overlap time Cutmix [0.85x -> 0.88x]</h4>
<p><img src="https://i.imgur.com/gH2ONQh.png" alt=""></p>
<p>Our sampling strategy consists of randomly selecting a crop containing the label. Most of the time, crops are bigger than labels which introduces false positives. One idea to make full use of our windows was to adapt cutmix to concatenate samples such that labels are entirely kept (when possible). </p>
<h4>Domain Generalization with MixStyle [0.88x -> 0.89x]</h4>
<p>Domain shift always exists in deep learning, in both practice and kaggle challenges, especially for small data. Therefore, domain generalization techniques should help with robustness. We applied a simple yet effective technique called <a href="https://openreview.net/pdf?id=6xHJ37MVxxp" target="_blank">Mixstyle</a>.</p>
<h4>Multi Scale inference (MSI) [0.89x -> 0.91x]</h4>
<p>Duration of species’ call varies quite a lot. For example, for class 3 it is around 0.7 seconds while for class 23 is around 8 seconds. To use this prior information, we use multiple window sizes (instead of using a single one). For each class, we choose the one that yields the best CV. In case we have multiple window sizes reaching the maximum, we take the largest window. Although our CV setup which consists of crops centered around the labels did not correlate really well with LB, the 2% CV improvement reflected on LB quite well.</p>
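<p>A simple sketch of this multi-scale inference idea (the window sizes and the per-class mapping are hypothetical; in practice the mapping is chosen on CV):</p>
<pre><code>import numpy as np

# Hypothetical mapping chosen on CV: preferred inference window (in seconds) per class.
BEST_WINDOW = {3: 2.0, 23: 10.0}   # short calls -> short windows, long calls -> long windows

def multi_scale_predict(predict_fn, audio, window_sizes=(2.0, 5.0, 10.0), num_classes=24):
    """predict_fn(audio, window) -> (num_classes,) clip-level probabilities obtained with
    that sliding-window size. For each class, keep the probability produced with its
    preferred window; fall back to the largest window when no preference is recorded."""
    per_window = {w: predict_fn(audio, w) for w in window_sizes}
    out = np.zeros(num_classes, dtype=np.float32)
    for c in range(num_classes):
        w = BEST_WINDOW.get(c, max(window_sizes))
        out[c] = per_window[w][c]
    return out
</code></pre>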
<h4>Positive learning and Negative learning [0.91x -> 0.92x]</h4>
<p>We used the following assumption to improve the training of our models:<br>
for a given recording, if a species has an FP and no TP, then it is not present in that recording. Our BCE was then updated to make sure the model predicts 0 for such species. </p>
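<p>A sketch of how such recording-level targets could be built from train_tp.csv and train_fp.csv under that assumption (the masking of the remaining, unsupervised classes is my addition for illustration):</p>
<pre><code>import numpy as np
import pandas as pd

def recording_level_targets(train_tp: pd.DataFrame, train_fp: pd.DataFrame, num_classes=24):
    """A species with at least one TP in a recording is positive; a species with FPs but
    no TP in that recording is forced to 0; every other class is left unsupervised."""
    tp_sets = train_tp.groupby('recording_id')['species_id'].agg(set).to_dict()
    fp_sets = train_fp.groupby('recording_id')['species_id'].agg(set).to_dict()
    targets, masks = {}, {}
    for rid in set(tp_sets) | set(fp_sets):
        t = np.zeros(num_classes, dtype=np.float32)
        m = np.zeros(num_classes, dtype=np.float32)
        for s in tp_sets.get(rid, set()):
            t[s], m[s] = 1.0, 1.0
        for s in fp_sets.get(rid, set()) - tp_sets.get(rid, set()):
            t[s], m[s] = 0.0, 1.0        # FP and no TP -> assumed absent ("negative learning")
        targets[rid], masks[rid] = t, m
    return targets, masks
</code></pre>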
<h3>Ensembling</h3>
<p>Our best single model densenet121 scores around 0.92 public and 0.93 private. Averaging some models with different backbones, we were able to reach 0.937. We tried many different ensembling, scale fixing and post-processing ideas, and were able to improve our score a bit, but unfortunately we could not find the real magic.</p>
<p>In the end, we empirically analyzed the most uncertain class predictions from our best models, and averaged predictions with other (weaker) models. We relied on diversity to make our submission more robust. Our final ensemble scored public 0.942 and private 0.943.</p>
<h5>Thanks for reading !</h5>
<p><a href="https://github.com/dathudeptrai/rfcx-kaggle" target="_blank">TensorFlow Code Here</a></p>
|
Rainforest Connection Species Audio Detection
|
27th place simple solution (0.932 public, 0.940 private) - dl_pipeline
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You are correct <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, I considered 0 for all other species. I worked on this as a single-class classification problem using cross-entropy loss for training. Initially I was using smaller audio crops (128x128 or 128x256) and the rationale was that there is probably not much overlap of classes in the small crops around the TPs (i.e. if I know class "A" is observed in that small crop, it's quite likely that few, if any, of the other 23 will be in the same crop). And indeed with cross-entropy loss the model converges quite well. I now see that the idea of using BCE with TP as 1 and FP as 0 and masking all other values is what I was missing and a great way to incorporate the FP. Nevertheless, with Chris Deotte's post-processing my best single model gets 0.950 on the private LB (resnest50 and resnest101), generating the predictions over crops of 128 (steps 64) and 256 (steps 128). </p>
<p><strong>In summary:</strong></p>
<ul>
<li>Best single model (0.925 public lb): densenet121 features + fastai head</li>
<li>Loss function: cross entropy</li>
<li>Sampling: 128x1024 crops around true positive samples</li>
<li>Spectrogram parameters: n mels 128, hop length 640, sample rate 32000</li>
<li>Augmentations: clipping distortion, pitch shift and mixup</li>
<li>Inference: Predict on crops with a small width (e.g. 128x128 instead of 128x1024 used for training) and calculate the max probability for each of the 24 classes.</li>
</ul>
<p><strong>Introductory monologue</strong><br>
First and foremost, this was an interesting competition and a good learning opportunity, as is often the case on Kaggle! One "problem" of this competition is that the test data was labeled with a different method and no samples of labeled test data were provided. This makes it difficult to get a sense of the validation score and increases the danger of overfitting to the public test results. In fact, I almost gave up on this competition when I realized this was the case. But eventually I decided to get back to it and work on a simple solution and on a python library – <strong>dl_pipeline</strong> ( <a href="https://github.com/mnpinto/dl_pipeline" target="_blank">https://github.com/mnpinto/dl_pipeline</a>) – that I will use as a general framework for future kaggle competitions. Initially, the idea for dl_pipeline was just to keep my code more organized and more reusable, but I figured that maybe there's also some value in sharing it. </p>
<p><strong>Data preprocessing</strong><br>
Save all wave files in npy files with sample rate of 32000 Hz to save time.</p>
<pre><code>import librosa
import numpy as np
from pathlib import Path

def audio2npy(file, path_save:Path, sample_rate=32_000):
    path_save.mkdir(exist_ok=True, parents=True)
    wave, _ = librosa.load(file, sr=sample_rate)
    np.save(path_save/f'{file.stem}.npy', wave)
</code></pre>
<p>I didn't convert the audio to spectrograms right away since I still want the ability to use audio augmentation on the waveforms.</p>
<p><strong>Augmentations and Spectrograms</strong></p>
<ul>
<li>First I create crops on the waveform including the true positive labels with a number of samples calculated so that the spectrogram will have a width of 1024. </li>
</ul>
<p><strong>Note:</strong> Cropping before applying the augmentations is much faster than the other way around.</p>
<ul>
<li>Then for the waveform augmentations I used the <strong>audiomentations</strong> library (<a href="https://github.com/iver56/audiomentations" target="_blank">https://github.com/iver56/audiomentations</a>). I ended up using just the following augmentations, as based on the public LB I didn't find that the others were helping, although this would require proper validation to draw any conclusions. </li>
</ul>
<pre><code>def audio_augment(sample_rate, p=0.25):
return Pipeline([
ClippingDistortion(sample_rate, max_percentile_threshold=10, p=p),
PitchShift(sample_rate, min_semitones=-8, max_semitones=8, p=p),
])
</code></pre>
<p><strong>Note:</strong> Some augmentations are much slower, for example, pitch shift and time stretch. When using those augmentations the probability of use makes a big difference in how long the training takes. </p>
<ul>
<li>Then I searched for the fastest way to convert the audio to spectrograms on the GPU and ended up using <strong>nnAudio</strong> (<a href="https://github.com/KinWaiCheuk/nnAudio" target="_blank">https://github.com/KinWaiCheuk/nnAudio</a>). Again, converting to a spectrogram after the waveform is cropped is a nice gain in processing time.</li>
</ul>
<p><strong>Model</strong><br>
I tried several models, but the one that got me the best result on the public leaderboard was densenet121, with ResNeSt50 second-best. One particularity is that for all models I use the fastai head with strong dropout.</p>
<p>The fastai head (using <code>create_head(num_features*2, num_classes, ps=0.8)</code>).</p>
<pre><code>(1): Sequential(
(0): AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=1)
(mp): AdaptiveMaxPool2d(output_size=1)
)
(1): Flatten(full=False)
(2): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.4, inplace=False)
(4): Linear(in_features=2048, out_features=512, bias=False)
(5): ReLU(inplace=True)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Dropout(p=0.8, inplace=False)
(8): Linear(in_features=512, out_features=24, bias=False)
)
</code></pre>
<p><strong>Training</strong><br>
I guess code speaks more than words, particularly for those familiar with fastai:</p>
<pre><code>bs = 32
learn = Learner(dls, model, loss_func=cross_entropy, metrics=[accuracy, lrap], cbs=cbs)
learn.to_fp16(clip=0.5);
learn.fit_one_cycle(30, 1e-3, wd=3e-2, div_final=10, div=10)
</code></pre>
<p>So in English, this is a one-cycle learning rate schedule over <strong>30 epochs</strong>, starting at lr=1e-4, increasing to 1e-3, and then decreasing back to 1e-4 following a cosine annealing schedule. The <strong>loss function</strong> is the good old cross-entropy. Also, a <strong>weight decay</strong> of 3e-2 was used, a <strong>gradient clip</strong> of 0.5, and training was done with <strong>mixed precision</strong> so that my GTX 1080 can handle a <strong>batch size</strong> of 32 with a 128x1024 image size.</p>
<p>One training epoch takes about 1 minute on my GTX 1080, I guess it's not bad considering that I'm doing waveform augmentations on CPU that even with p=0.25 take some time.</p>
<p><strong>Inference</strong><br>
This is the fun part because it was almost by mistake that I realised that making inference with smaller tiles is way better. I presume that this is the case because I'm training with cross-entropy for a single label problem but the test data is labelled with multiple labels. By using smaller crops the predictions are more multilabel friendly. The reason I've been using cross-entropy instead of binary cross-entropy and sigmoid for the typical multilabel problem is that for me the convergence was much faster using the cross-entropy approach and with better results. Maybe I made a mistake somewhere I don't know, I didn't investigate it in much detail.</p>
<ul>
<li><p>Run predictions on crops of the spectrogram with widths of 64, 128 and 256 (remember training was done with 1024), calculate the max probability for each class for each width, and average the 3 cases (a sketch is given after this list). The average of the 3 gave me public LB 0.928 with the best single model described above, compared to 0.925 for the 128-width inference alone.</p></li>
<li><p>The final solution with public LB 0.932 and private LB 0.940 is an ensemble of a few training iterations with some modifications. (I will update this tomorrow with more information).</p></li>
</ul>
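<p>A minimal sketch of the multi-width inference referenced above (the stride of half the crop width matches what was used for the 128 and 256 crops; the names are illustrative):</p>
<pre><code>import numpy as np

def predict_clip(model_predict, spec, widths=(64, 128, 256)):
    """model_predict(batch_of_crops) -> (n_crops, 24) probabilities.

    Slide over the spectrogram with each crop width (stride = width // 2), take the max
    probability per class over the crops, then average the per-width results. Training
    used width 1024; inferring on smaller crops makes the single-label model behave in
    a more multilabel-friendly way."""
    per_width = []
    for w in widths:
        starts = range(0, spec.shape[1] - w + 1, w // 2)
        crops = np.stack([spec[:, s:s + w] for s in starts])
        probs = model_predict(crops)          # (n_crops, 24)
        per_width.append(probs.max(axis=0))   # best evidence per class at this width
    return np.mean(per_width, axis=0)         # average over the 3 widths
</code></pre>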
<p><strong>dl_pipeline</strong><br>
And again the code for this solution is now public on this repo: <a href="https://github.com/mnpinto/dl_pipeline" target="_blank">https://github.com/mnpinto/dl_pipeline</a></p>
<p>The following code should correspond to the best single model solution but I need to check if I didn't mess up anything when cleaning the code:</p>
<pre><code>#!/bin/bash
arch='densenet121'
model_name='model_0'
sample_rate=32000
n_mels=128
hop_length=640
for fold in 0 1 2 3 4
do
echo "Training $model for fold $fold"
kaggle_rainforest2021 --fold $fold --model_name $model_name \
--model $arch --sample_rate $sample_rate --n_mels $n_mels \
--hop_length $hop_length --bs 32 --head_ps 0.8 \
--tile_width 1024 --mixup true >> log.train
done
for tw in 64 128 256
do
echo "Generate predictions for $model with tile_width of $tw"
kaggle_rainforest2021 --run_test true --model_name $model_name \
--model $arch --sample_rate $sample_rate --n_mels $n_mels \
--hop_length $hop_length --tile_width $tw \
--save_preds true >> log.predict
done
</code></pre>
<hr>
<p>Thanks for reading!</p>
|
Rainforest Connection Species Audio Detection
|
2nd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>With one input channel instead of RGB it is quite easy to use pretrained weights - just sum the weights over the 3 channels. With 2 channels it's a bit different; the timm library supports that out of the box, just pass in_chans=2.</p>
<p>982-983 depending on submission</p>
<h2>Overview</h2>
<p>I trained simple classification models (24 binary classes) with logmel spectrograms :</p>
<ol>
<li>bootstrap stage: models are trained on TP/FP with masked BCE loss</li>
<li>generate soft pseudo labels with 0.5 second sliding window </li>
<li>train models with pseudo labels and also sample (with p=0.5) places with TP/FP - this partially solves confirmation bias problem. </li>
</ol>
<p>Rounds of pseudo labeling and retraining (points 2, 3) were repeated until the score on the public LB stopped improving. Depending on the settings it took around 4-10 rounds to converge.</p>
<p>My initial models, which gave 0.86 with TP/FP alone, easily reached 0.96x with pseudo labeling. After this success I gave this challenge a 5-week break, as I lost any motivation to improve my score :)<br>
Later, to my surprise, it was extremely hard to beat 0.97 even with improved first-stage models.</p>
<h2>Melspectrogram parameters</h2>
<ul>
<li>256 mel bins</li>
<li>512 hop length</li>
<li>original SR</li>
<li>4096 nfft</li>
</ul>
<h2>FreqConv (CoordConv for frequency)</h2>
<p>After my first successful experiment with pseudo labeling that reached 0.969 on public LB I tried to just swap encoders and blend models but this did not bring any improvements. <br>
So I visualised the data for different classes and understood that when working with mel spectrograms for this task we don’t need translation invariance and classes really depend on both frequency and patterns.<br>
I added a channel to the CNN input which contains the mel-bin index scaled to 0-1. This significantly improved validation metrics: after this change, the log loss on crops around TPs after the first round of training with pseudo labels was around 0.04 (same for crops around FPs), though it only slightly improved results on the LB.</p>
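<p>A minimal sketch of this "FreqConv" input (shapes assumed; as noted in the reply above, timm models can then be created with in_chans=2):</p>
<pre><code>import torch

def add_freq_coord_channel(spec: torch.Tensor) -> torch.Tensor:
    """spec: (B, 1, n_mels, time) log-mel spectrogram.

    Appends a channel holding the mel-bin index scaled to 0-1, so the CNN knows the
    absolute frequency position (no translation invariance along the frequency axis)."""
    b, _, n_mels, t = spec.shape
    coord = torch.linspace(0.0, 1.0, n_mels, device=spec.device, dtype=spec.dtype)
    coord = coord.view(1, 1, n_mels, 1).expand(b, 1, n_mels, t)
    return torch.cat([spec, coord], dim=1)      # (B, 2, n_mels, time)
</code></pre>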
<h2>First stage</h2>
<p>For the first stage I used all tp/fp information without any sampling and made crops around the center of the signal.</p>
<p><strong>Augmentations</strong></p>
<ul>
<li>time warping</li>
<li>random frequency masking below TP/FP signal</li>
<li>random frequency masking above TP/FP signal</li>
<li>gaussian noise</li>
<li>volume gain</li>
<li>mixup on spectrograms</li>
</ul>
<p>For mixup on spectrograms - I used constant alpha (0.5) and hard labels with clipping (0,1). Masks were also added. </p>
<h2>Pseudolabeling stages</h2>
<p>I sampled TP/FP with p=0.5; otherwise I made a random crop from the full spectrogram. </p>
<p>Without TP/FP sampling the labels can become very soft and the score decreases after 2 or 3 rounds.<br>
After training 4 folds of effnet/rexnet I generated OOF labels and ensembled their predictions. Then the training was repeated from scratch.</p>
<p><strong>Augmentations</strong></p>
<ul>
<li>gaussian noise</li>
<li>volume gain</li>
<li>mixup </li>
<li>time warping</li>
<li>spec augment</li>
<li>mixup on spectrograms</li>
</ul>
<p><strong>Mixup</strong></p>
<p>I used constant alpha (0.5) and added soft labels from two samples. This hurts logloss on FP a bit but at the same time significantly increases recall on TP.</p>
<h2>Validation</h2>
<p>Local validation did not have a high correlation with the public leaderboard. Log loss on TP was somewhat correlated, but still not robust. <br>
So without proper validation I decided not to select the best checkpoints and just trained 60 epochs (around 200 batches per epoch) with CosineLR and the AdamW optimizer.</p>
<p>My best models on validation (AUC 0.999, log loss 0.03) did not produce great results (0.95). After the competition, though, it turned out that they could easily be improved with postprocessing into the 0.97x-0.98x range.</p>
<h2>Final ensemble</h2>
<p>I used 4 models with 4 folds from Effnet and Rexnet (<a href="https://arxiv.org/abs/2007.00992" target="_blank">https://arxiv.org/abs/2007.00992</a> lightweight models with great performance) families:</p>
<ul>
<li>Rexnet-200 (4 sec training/inference), EffnetB3 (4 sec training/inference)</li>
<li>Rexnet-150 (8 sec training/inference), EffnetB1 (8 sec training/inference)</li>
</ul>
<p>Rexnet was much better than EfficientNet alone (less overfitting), but in ensemble they worked great.</p>
<p>During inference I just used 0.5 second sliding window and took max probabilities for the full clip and then averaged predictions from different models.</p>
<h2>Lessons learned</h2>
<p>I did not know about the paper and lacked this useful information about the dataset.</p>
<p>In my solutions I often rely on models alone but don’t explore the data deeply. <br>
In this case I understood that the relabeled train set has similar class distribution to the test set and decided that models would easily learn that. I was wrong and simple post-processing could significantly improve results (though this happened due to severe class imbalance). </p>
<p>Example - an FP for s0:<br>
mask=[1, 0, 0, …, 0], targets=[0, 0, 0, …, 0] - only the first element of the targets and outputs will be considered by the loss function</p>
<p>Masks for the loss function have the same shape as the labels: 1 for the classes we know (TP/FP), 0 for the others</p>
<pre><code>import torch.nn as nn
from torch.nn.functional import binary_cross_entropy_with_logits

class BCEMasked(nn.Module):
    def forward(self, inputs, targets, mask=None):
        # per-element BCE; keep only the positions where the label is known (mask > 0)
        bce_loss = binary_cross_entropy_with_logits(inputs, targets, reduction='none')
        if mask is not None:
            bce_loss = bce_loss[mask > 0]
        return bce_loss.mean()
</code></pre>
<p>Thanks! Actually pseudo-labeling did not work that well for SED models (tried framewise/clipwise pseudo labels) and I switched to standard image classification</p>
<p><a href="https://arxiv.org/abs/2007.00992" target="_blank">https://arxiv.org/abs/2007.00992</a> Rexnet-200 (200 = 2.0 scale, 150 = 1.5) is a lightweight model. It is not related to resnet, it is a modification of MobileNet that performs like EffNet. <br>
So it was around 1 minute per epoch </p>
<p>max distance = 0.25 seconds, so it should not have big impact on model's quality. More frames would be better but it would take much more time to generate pseudolabels.</p>
<p>Good point! Just checked some checkpoint</p>
<pre><code>With proper frequencies
neg_logloss: 0.1588028629548308
pos_logloss: 0.05171160377776966
With zeros
neg_logloss: 0.19649322897329777
pos_logloss: 0.3455986050609499
</code></pre>
<p>So my models really use it somehow</p>
<blockquote>
<p>a question on this, do you train on the same random sampled audio crop and pseudo labels after or you re-sample tp/fp or made a random crop again in stage 2?</p>
</blockquote>
<p>My pseudolabels (for 4 second models) had 113 frames per audio clip (0.5 sec sliding window). <br>
If TP/FP is sampled - I made a random crop around the center and then found nearest frame from pseudolabels. Pseudo labels were fixed using tp/fp. <br>
Otherwise I just took random frame from 113 and postprocessed labels if they overlap with tp/fp data.</p>
<p>Thanks! I was exactly there some time ago😄Looks like I need to participate in a few competitions in a row (it is not possible to get high global rank otherwise due to Kaggle points decay)</p>
<p>No, single 4 fold model scores 0.97 on public LB. Single checkpoint something like 0.968 (Public). Where did you see 24 models?) </p>
<p>You are using only TP samples; of course, if all your targets are 1, the NN will predict 1.0</p>
<p>Actually I tried different SSL approaches; pseudo labeling with mining of confident labels and loss masking was much worse than soft labels. Postprocessing could fix that, I guess. The main reason for that is the competition metric: something like per-class F1 would better assess predictive performance and would not be affected by class imbalance.</p>
|
Rainforest Connection Species Audio Detection
|
32nd place solution - what worked for me
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. Yeah, I see solutions are quite diverse in this competition😃</p>
<p>Yeah, it worked better than taking max of crop predictions in my early experiments, but I might have done something wrong 🤔</p>
<p>Thanks for your reply! I also agree that more negative samples will improve the performance. But from the solutions I've seen so far, external data is not used and I used mixup. Did you get performance boost from creating more negative samples by using external data or mixing negative samples?</p>
<p>Thanks again for kindly sharing your code😀</p>
<p>Thanks! Nice to hear that LSep worked for others too.</p>
<p>Congratulations to the top finishers!</p>
<p>This was my first encounter with an audio competition, so I tried a lot of possibly implausible ideas and learned a lot. In particular, solutions from Cornell Birdcall Identification and Freesound Audio Tagging 2019 were helpful.</p>
<p>My finish is not strong, but I wanted to share some of the things that I <em>believe</em> (not sure, since my score is not sufficiently high) worked for me (increased CV or public score), and hear what other kagglers experienced.</p>
<ul>
<li><p>Frequency Crop</p>
<ul>
<li>For one audio clip, crop 24 different crops according to fmin&fmax of each species.</li>
<li>I believe it is similar to what <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> did and <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> did(without repeating convolution computations)</li></ul></li>
<li><p>LSoft</p>
<ul>
<li><p>I used only TP and FP crops as labels. For example, for each row in TP or FP, only 1 out of 24 labels is present.</p></li>
<li><p>Based on BCELoss, I used 'lsoft' for unknown labels. LSoft is introduced kindly by <a href="https://www.kaggle.com/romul0212" target="_blank">@romul0212</a> at <a href="https://github.com/lRomul/argus-freesound/blob/master/src/losses.py" target="_blank">https://github.com/lRomul/argus-freesound/blob/master/src/losses.py</a></p></li>
<li><p>This is my loss computation with LSoft. <code>mask</code> indicates where the label is known. <code>true</code> for unknown labels is initialized with 0.</p></li></ul>
<pre><code># masks == 1 where the label is known (TP/FP), 0 for unknown labels
tmp_true = (1 - lsoft) * true + lsoft * torch.sigmoid(pred)   # soften targets towards predictions
true = torch.where(masks == 0, tmp_true, true)                # only for the unknown labels
loss = nn.BCEWithLogitsLoss()(pred, true)
</code></pre></li>
<li><p>Iterative Pseudo Training</p>
<ul>
<li>Since the train set is only very sparsely annotated, I thought that re-labeling with the model and then re-training would help, and it indeed did. I pseudo-trained for 3 stages.</li>
<li>When pseudo training, I didn't use LSoft and used vanilla BCE.</li></ul></li>
<li><p>LSEPLoss</p>
<ul>
<li><p>Our metric is LWLRAP, so it is important to focus on the ranking between labels within each row. I used LSEPLoss, which fits this purpose and was kindly introduced by <a href="https://www.kaggle.com/ddanevskyi" target="_blank">@ddanevskyi</a> at <a href="https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/97926" target="_blank">https://www.kaggle.com/c/freesound-audio-tagging-2019/discussion/97926</a></p></li>
<li><p>After stage 3 of BCE pseudo training, I pseudo-trained 2 extra stages with LSEPLoss</p></li>
<li><p>I modified the original code a bit to allow soft labels.</p></li></ul>
<pre><code>def lsep_loss(input, target):
input_differences = input.unsqueeze(1) - input.unsqueeze(2)
target_differences = target.unsqueeze(2) - target.unsqueeze(1)
target_differences = torch.maximum(torch.tensor(0).to(input.device), target_differences)
exps = input_differences.exp() * target_differences
lsep = torch.log(1 + exps.sum(2).sum(1))
return lsep.mean()
</code></pre></li>
<li><p>Global Average Pooling on only positive values</p>
<ul>
<li><p>We need to know whether the species is present or not; we don't care how frequently it appears. I thought that doing global average pooling over the whole last feature map of the CNN would yield high probabilities for frequent occurrences of a call in one clip and low probabilities for infrequent occurrences, which doesn't match our goal. So I took the mean of only the positive values of the last feature map of the CNN.</p></li>
<li><p>The following code is applied at the end, to the CNN's extracted feature map</p></li></ul>
<pre><code>mask = (x > 0).float()
features = (x*mask).sum(dim=(2, 3))/(torch.maximum(mask.sum(dim=(2, 3)), torch.tensor(1e-8).to(mask.device)))
</code></pre></li>
<li><p>Augmentations</p>
<ul>
<li>Gaussian/Pink NoiseSNR, PitchShift, TimeStretch, TimeShift, VolumeControl, Mixup(take union of the labels), SpecAugment</li>
<li>Thanks to <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a> for kindly sharing <a href="https://www.kaggle.com/hidehisaarai1213/rfcx-audio-data-augmentation-japanese-english" target="_blank">https://www.kaggle.com/hidehisaarai1213/rfcx-audio-data-augmentation-japanese-english</a></li></ul></li>
<li><p>One inference on full clip</p>
<ul>
<li>I didn't resize the spectrogram, so I was able to train on crops and infer on full image.</li>
<li>When we don't resize, due to the property of CNN, I believe doing sliding windows prediction on small crops is just an approximation for doing one inference on the full image.</li></ul></li>
<li><p>Validation - only use known labels</p>
<ul>
<li>I did validation clip-wise, only on TP and FP labels. From the prediction, I removed all values corresponding to unknown labels, flattened, then calculated LWLRAP. It correlated with LB quite well on my fold0</li></ul></li>
</ul>
<p>My baseline was not so strong (~0.8), so I might have had fundamental mistakes in it.<br>
I achieved 0.927 public with an efficientnet-b0 fold0 3-seed average, but my score got worse when doing 5-fold ensembling. I tried averaging, rank mean, scaling on axis 1 and then taking the mean, and taking the mean of pairwise differences, but it didn't help.<br>
I'm planning to study the top solutions to find out what I missed</p>
<p>I'd really appreciate it if you could share some opinions on my approaches and on the things that I missed.</p>
|
Rainforest Connection Species Audio Detection
|
33rd place solution - SED model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners.<br>
I would like to thank the organizers for hosting a fun competition.</p>
<p>Our team's solution is an ensemble of seven models.<br>
The ensemble includes three SED models and four non-SED models.<br>
I will discuss the single model with the best score among them. (Public: 0.919/Private: 0.927)</p>
<h3>Model</h3>
<p>My model is based on the SED model described in <a href="https://www.kaggle.com/shinmura0" target="_blank">@shinmura0</a>'s discussion.<br>
<a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/211007" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/211007</a></p>
<ul>
<li>Feature Extractor:EfficientNet-B3</li>
<li>Loss Function:BCELoss</li>
<li>Optimizer:SGD</li>
<li>Scheduler:CosineAnnealingLR</li>
<li>LR:0.15</li>
<li>Data Augmentation:denoise</li>
<li>CV: 4Fold multilabel-stratifiedkfold</li>
</ul>
<p>My model is characterized by a very large learning rate, and as I reduced the learning rate from 0.15, the accuracy decreased.<br>
This is contrary to my experience.</p>
<p>I thought that data augmentation would be very effective, and I tried various augmentations, but most of them did not work.</p>
<p>The only augmentation that worked was the denoising that <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a> introduced in the discussion.<br>
In the discussion, he de-noises all the data, but for me it worked when applied as an augmentation with p=0.1.<br>
<a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/214019" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/214019</a></p>
<h3>Training data</h3>
<p>The training data was randomly cropped from the mel-spectrogram in 60/9-second windows.<br>
I experimented with crop sizes ranging from 60/2 seconds to 60/20 seconds, and found that 60/9 and 60/10 gave good results.<br>
Most of the discussions used a 60/10 second crop, but I think a smaller size would have reduced the probability of including missing labels.</p>
<h3>Validation/Test data</h3>
<p>For the validation and test data, I used the same 60/9 seconds as the training data.</p>
<p>I create a total of 17 windows per clip by sliding the 60/9-second window by half its length.<br>
I make a prediction for each of the 17 windows and use the max probability of each label over the windows as the prediction value.</p>
<p><img src="https://i.imgur.com/XBc6x3R.png" alt=""></p>
<p>The validation was very time consuming, as we needed to infer 17 images to validate 1 clip.</p>
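<p>A minimal sketch of the sliding-window inference described above, assuming the spectrogram covers the full 60-second clip; <code>predict_window</code> is a hypothetical stand-in for the trained model:</p>
<pre><code>import numpy as np

WIN_SEC = 60.0 / 9.0        # same crop length as training
HOP_SEC = WIN_SEC / 2.0     # slide by half a window -> 17 windows per clip

def predict_clip(spectrogram, sec_per_frame, predict_window):
    # spectrogram: (n_mels, n_frames) for one 60-second clip
    n_frames = spectrogram.shape[-1]
    win = int(round(WIN_SEC / sec_per_frame))
    hop = int(round(HOP_SEC / sec_per_frame))
    window_probs = []
    for start in range(0, n_frames - win + 1, hop):
        crop = spectrogram[:, start:start + win]
        window_probs.append(predict_window(crop))   # (num_species,) probabilities
    # clip-level prediction = max over the sliding windows for each species
    return np.max(np.stack(window_probs, axis=0), axis=0)
</code></pre>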
<h3>Didn't work.</h3>
<ul>
<li>Pseudo labels (train/test, soft label/hard label)</li>
<li>A prediction model dedicated to class 3: the accuracy on species 3 was extremely low, so I tried to create a model specific to it, but it didn't work.</li>
</ul>
|
Rainforest Connection Species Audio Detection
|
38 place writeup(image classification on TPU/Colab only with s3 trick)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all.<br>
First, thanks for a great competition. That's my favorite type of competition: incomplete or noisy labels where you have to think about how to deal with them.<br>
Second, thanks to those who shared code. In particular to the authors of the following kernels:</p>
<ol>
<li><a href="https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2" target="_blank">https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2</a> - that was a great starter and I was just doing edits of that kernel to move on</li>
<li><a href="https://www.kaggle.com/aikhmelnytskyy/resnet-tpu-on-colab-and-kaggle" target="_blank">https://www.kaggle.com/aikhmelnytskyy/resnet-tpu-on-colab-and-kaggle</a> - that showed how you can train on colab as well</li>
</ol>
<p>My code is in the following notebook - <a href="https://www.kaggle.com/vzaguskin/training-rfcx-tensorflow-tpu-effnet-b2-with-fp" target="_blank">https://www.kaggle.com/vzaguskin/training-rfcx-tensorflow-tpu-effnet-b2-with-fp</a>. The final submission is a merge of several versions of submissions from that code(and similar code on colab) plus s3 trick. </p>
<p>Now, here is how I got from the initial 0.80+ of the starter notebook to 0.90+ and the silver zone.</p>
<ol>
<li>A 5-second cut worked better than the initial 10 seconds</li>
<li>Added FP data with a masked BCE/BCEFocal loss. I simply calculate the BCE loss only on the labels whose value I know, i.e. the falsely-detected species (and just the usual BCE with label smoothing 0.2 / the usual BCEFocal on TP data) - see the sketch after this list</li>
<li>Use heavier model(B4)</li>
<li>Added mixup/cutmix</li>
</ol>
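<p>A minimal PyTorch sketch of the masked-loss idea from point 2 above (the author's actual code was TensorFlow/Keras on TPU; tensor names and the mask convention are illustrative). <code>known_mask</code> is 1 for every label of a TP clip and 1 only for the falsely-detected species of an FP clip:</p>
<pre><code>import torch
import torch.nn.functional as F

def masked_bce(logits, targets, known_mask):
    # element-wise BCE, then zero out the labels we know nothing about
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    loss = loss * known_mask
    return loss.sum() / known_mask.sum().clamp(min=1.0)
</code></pre>
<p>The label smoothing of 0.2 mentioned above would be applied to <code>targets</code> of the TP clips before calling this function.</p>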
<p>The best version of that code got 0.89+ on the private LB.<br>
Then comes the ensembling - I just collected all the well-scoring submissions (0.87+ public) and averaged them. Rank averaging seemed to work slightly better than a simple average (by approximately 0.001).<br>
The best private score I could get with that approach is 0.912.</p>
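<p>A hypothetical sketch of that rank averaging (file names and the per-column rank normalisation are my assumptions about the exact recipe):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.stats import rankdata

def rank_average(csv_paths):
    subs = [pd.read_csv(p) for p in csv_paths]
    species_cols = [c for c in subs[0].columns if c != "recording_id"]
    blended = np.zeros((len(subs[0]), len(species_cols)))
    for sub in subs:
        # rank each species column separately, normalised to (0, 1]
        ranks = np.column_stack([rankdata(sub[c]) / len(sub) for c in species_cols])
        blended += ranks / len(subs)
    out = pd.DataFrame(blended, columns=species_cols)
    out.insert(0, "recording_id", subs[0]["recording_id"])
    return out

# rank_average(["sub_model_a.csv", "sub_model_b.csv"]).to_csv("blend.csv", index=False)
</code></pre>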
<p>My version of the s3 trick is that I multiply s3 by 2.5 and s7 by 2. That gave me 0.93 on the private LB (0.918 public). The version I selected for the final had the same 0.918 public and 0.929 private, which is pretty much the same.</p>
<p>Again, thanks a lot for the competition. Learned many things and had a lot of fun.</p>
<p><strong>Upd:</strong> I've added postprocessing from Chris and now <a href="https://www.kaggle.com/vzaguskin/training-rfcx-tensorflow-tpu-effnet-b2-with-fp" target="_blank">this kernel</a> scores 0.95467 private (gold zone) - which means complete training and inference on Kaggle TPU only within less than 3 hours and gold level score.</p>
<p>First I noticed that my CV score for several classes was much lower than average. S3 was the most obvious. I tried submitting all 0.5 for s3 and it gave me a score boost of close to 0.01.<br>
I started investigating and found that I was actually underpredicting s3 (and probably s7 as well).<br>
So I searched for the upscaling of those two classes' predictions that would give the best boost in local CV on one of the folds and got those values.<br>
I guess one could do much better than that(like <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220389" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220389</a>)</p>
|
Rainforest Connection Species Audio Detection
|
3rd Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! no custom tool was used, I just used a notebook to visualize the 60 seconds spectrogram(with an adequate amount of xticks and yticks along with original tp and fp bounding box annotations) and to play the audio file while referring to an unaggregated visualization of the 5-second ensembled model pseudo labels</p>
<p>thank you!</p>
<p>forgot to mention that the initial few epochs with learning rate warmup helped too</p>
<h1>3rd Place Solution</h1>
<p><strong>TLDR</strong></p>
<p>Our solution is a mean blend of 8 models trained with heavy augmentations on true-positive labels of all recording ids in train_tp.csv (given + hand-labeled labels) and also of some recording ids in train_fp.csv (hand-labeled labels). Additionally, some models are also trained on pseudo labels and a hand-labeled external dataset. We also post-processed the blended results by thresholding species 3.</p>
<p>From 308 submissions it is obvious that we have tested a lot of techniques and I will share more detailed information by category below.</p>
<h2>Data Preparation</h2>
<p>I couldn't get a proper validation framework setup after trying out many techniques and decided at one point to start digging into the data and figured out that there are many unlabeled samples both in and out of the range of t_min and t_max labels given in train_tp.csv. In <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/197735" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/197735</a> it was mentioned that using hand-labeled species was allowed, so I started labeling the data manually and after labeling 100 recording ids I could already get a > 0.9 public lb score and pretty consistent local CV scores that somewhat correlates with the public lb. Naturally, I continued to label the entire train_tp.csv seeing that it has only around 1.3k recording ids. Further labeling of train_fp.csv helped the score but only minimally so I stopped at one point. As I grew more familiar with the data I could label 300 recording ids in a day :), referring to pseudo labels helped a lot too. I also went through the train_tp.csv a few more rounds to make sure I have quality data. I used both spectrograms and listening strategy to analyze and label the data, some species are easy to spot with spectrograms and some are easier to spot by listening, and in some cases, both listening and visual inspection of the spectrograms can act as a multi verification technique to get more quality labels, especially when birds/frogs are very distant away from the recorder or there are strong noises like waterfall sounds. By labeling and analyzing the data I also figured out the kinds of sounds/noises that would appear and inspired me to try out a few augmentation methods which I will share below. Along with true positive labels, I also added noisy and non-noisy labels based on my confidence in the completeness of labels in a specific recording id. I am not a perfect labeler so I wanted to handle complete and non-complete labeled recording ids differently, which I will share below too.</p>
<p>I also removed some labels from train_tp.csv as I found some true positives suspicious, I didn't test not removing the labels before so not sure how much this helped.</p>
<p>Additionally, after finding out the paper from the organizers I searched for suitably licensed datasets with those species and found one dataset with species in this competition with a proper license. But there weren't any labels so I labeled it manually too with the same format as train_tp.csv. <a href="https://datadryad.org/stash/dataset/doi:10.5061/dryad.c0g2t" target="_blank">https://datadryad.org/stash/dataset/doi:10.5061/dryad.c0g2t</a> . I reuploaded the dataset <a href="https://www.kaggle.com/dicksonchin93/eleutherodactylus-frogs" target="_blank">HERE</a> with my manual labels.</p>
<p>I uploaded the extra labels as a dataset <a href="https://www.kaggle.com/dicksonchin93/extra-labels-for-rcfx-competition-data" target="_blank">https://www.kaggle.com/dicksonchin93/extra-labels-for-rcfx-competition-data</a>, feel free to use it and see if you can get a better single model score! mine was 0.970 on public lb</p>
<h2>Modeling / Data Pre-processing</h2>
<p>I used mel spectrograms with the following parameters: 32kHz sampling rate, a hop size of 716, a window size of 1366, and 224 or 128 mel bins. I tried a bunch of methods, but plainly using 3 layers of standardized mel spectrograms worked the best. The image dimensions were (num_mel_bins, 750). </p>
<p>Using train_tp.csv to create folds will potentially leak some training data into your validation data so I treated the problem as a multilabel target and used <a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a> to stratify the data into 5 partitions using unique recording ids and its multilabel targets. I had two different 5 fold partitions using different versions of the multi labels and used a mix of both in the final submission. </p>
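<p>A rough sketch of that fold construction, assuming the recording-level multilabel matrix is built by aggregating <code>train_tp.csv</code> (the exact aggregation is my assumption):</p>
<pre><code>import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

train_tp = pd.read_csv("train_tp.csv")

# one row per recording_id, one 0/1 column per species
rec_labels = pd.crosstab(train_tp["recording_id"], train_tp["species_id"]).clip(upper=1)
recording_ids = rec_labels.index.values
y = rec_labels.values

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_of_recording = np.zeros(len(recording_ids), dtype=int)
for fold, (_, valid_idx) in enumerate(mskf.split(recording_ids, y)):
    fold_of_recording[valid_idx] = fold
</code></pre>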
<p>I used multiple different audio durations during the competition, and at different stages the best duration varied in my implementation, but in the end I used 5 seconds of audio for training and prediction as the LWLRAP score was better on both the public LB and local validation. </p>
<p>The 5-second audio was randomly sampled during training and in prediction time a 5-second sliding window was used with overlap and the max of predictions was used. How the 5-second audio is randomly sampled is considered to be an augmentation method in my opinion and so I will explain it in the heavy augmentations category below</p>
<h6>Augmentations</h6>
<ul>
<li>Random 5-second audio samples: <br>
a starting point was chosen randomly on values between reference t_mins and t_maxes obtained from </li>
</ul>
<pre><code>def get_ref_tmin_tmax_and_species_ids(
    self, all_tp_events, label_column_key="species_id"
):
    all_tp_events["t_min_ref"] = all_tp_events["t_min"].apply(
        lambda x: max(x - (self.period / 2.0), 0)
    )

    def get_tmax_ref(row, period=self.period):
        tmin_x = row["t_min"]
        tmax_x = row["t_max"]
        tmax_ref = tmax_x - (period / 4.0)
        if tmax_ref < tmin_x:
            tmax_ref = (tmax_x - tmin_x) / 2.0 + tmin_x
        return tmax_ref

    all_tp_events["t_max_ref"] = all_tp_events[
        ["t_max", "t_min"]
    ].apply(get_tmax_ref, axis=1)
    t_min_maxes = all_tp_events[
        ["t_min_ref", "t_max_ref"]
    ].values.tolist()
    species_ids = all_tp_events[label_column_key].values.tolist()
    return t_min_maxes, species_ids
</code></pre>
<p>Labels were also assigned based on the chosen starting time and ending time with t_min and t_max labels.</p>
<ul>
<li>audio based pink noise</li>
<li>audio based white noise</li>
<li>reverberation</li>
<li>time stretch</li>
<li>use one of 16kHz or 48kHz sample rate data and resample it to 32kHz sample rate using randomly chosen resampling methods <code>['kaiser_best', 'kaiser_fast', 'fft', 'polyphase']</code></li>
<li>use different window types to compute spectrograms at train time <code>['flattop', 'hamming', ('kaiser', 4.0), 'blackman', 'hann']</code> , hann window is used at test and validation time</li>
<li>masking out non labeled chunks of the audio with a 10% chance</li>
<li>one of spectrogram <a href="https://arxiv.org/abs/2002.12047" target="_blank">FMix</a> and audio based mixup with the max of labels instead of using the blend from the beta parameter</li>
<li><a href="http://dcase.community/documents/challenge2019/technical_reports/DCASE2019_Bouteillon_27_t2.pdf" target="_blank">spec mix </a>: <br>
only one strip was used for each axis, for the horizontal axis when the chosen frequency range to mask out completely covers a specific species minimum f_min and maximum f_max , that species label will be dropped. Specmix is also using the max of labels instead of using the blend from the beta parameter. The code below shows how I obtain the function that can output frequency axis spectrogram positions from frequency</li>
</ul>
<pre><code>def get_mel_scaled_hz_to_y_axis_func(fmin=0, fmax=16000, n_mels=128):
    hz_points = librosa.core.mel_frequencies(n_mels=n_mels, fmin=fmin, fmax=fmax)
    hz_to_y_axis = interp1d(hz_points, np.arange(n_mels)[::-1])
    # reversed because first index is at the top left in an image array
    return hz_to_y_axis
</code></pre>
<ul>
<li>bandpass noise </li>
<li>Water from <a href="https://zenodo.org/record/4060432#.YC6IJ3VKjCI" target="_blank">Freesound50k</a> removing samples that have license to prevent derivative work</li>
<li>Engine and Motor Sounds from <a href="https://zenodo.org/record/4060432#.YC6IJ3VKjCI" target="_blank">Freesound50k</a> removing samples that have license to prevent derivative work</li>
<li>Honk, Traffic and Horn sounds from <a href="https://zenodo.org/record/4060432#.YC6IJ3VKjCI" target="_blank">Freesound50k</a> removing samples that have license to prevent derivative work</li>
<li>Speech sounds from <a href="https://zenodo.org/record/4060432#.YC6IJ3VKjCI" target="_blank">Freesound50k</a> removing samples that have license to prevent derivative work</li>
<li>Bark sounds from <a href="https://zenodo.org/record/4060432#.YC6IJ3VKjCI" target="_blank">Freesound50k</a> removing samples that have license to prevent derivative work</li>
</ul>
<p>checkout recording_id b8d1e4865 to find dogs barking and some human speech :D</p>
<h6>Architectures used</h6>
<p>No SED just plain classifier models with GEM pooling for CNN based models</p>
<ul>
<li>Efficientnet-b7</li>
<li>Efficientnet-b8</li>
<li>HRNet w64</li>
<li>deitbase224</li>
<li>vit_large_patch16_224</li>
<li>ecaresnet50</li>
<li>2x resnest50 from <a href="https://www.kaggle.com/meaninglesslives" target="_blank">https://www.kaggle.com/meaninglesslives</a>, checkout his writeup in a minimal notebook <a href="https://www.kaggle.com/meaninglesslives/rfcx-minimal" target="_blank">HERE</a>!</li>
</ul>
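<p>The GEM pooling mentioned above is generalized-mean (GeM) pooling; a standard PyTorch sketch of such a layer is below (the exact head used on top of each backbone is not specified in the write-up):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)   # learnable pooling exponent
        self.eps = eps

    def forward(self, x):                          # x: (B, C, H, W) feature map
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
        return x.flatten(1)                        # (B, C) pooled features
</code></pre>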
<h6>Loss</h6>
<p>The main loss strategy used for the final submission was to use different loss functions for samples whose labels I am confident are complete and for samples whose labels I am not confident are complete. BCE was used for the non-noisy/confident samples and a modified Lsoft loss was used on the noisy/non-confident ones. The Lsoft loss was modified to be applied only to non-positive labels, as I was confident in my manual labels. It looks like this: </p>
<pre><code>def l_soft_on_negative_samples(y_pred, y_true, beta, eps=1e-7):
    y_pred = torch.clamp(y_pred, eps, 1.0)
    # (1) dynamically update the targets based on the current state of the model:
    #     bootstrapped target tensor
    #     use predicted class proba directly to generate regression targets
    with torch.no_grad():
        negative_indexes = (y_true == 0).nonzero().squeeze(1)
        y_true_update = y_true
        y_true_update[negative_indexes[:, 0], negative_indexes[:, 1]] = (
            y_true_update[negative_indexes[:, 0], negative_indexes[:, 1]] * beta +
            (1 - beta) * y_pred[negative_indexes[:, 0], negative_indexes[:, 1]]
        )
    # (2) compute loss as always
    loss = F.binary_cross_entropy(y_pred, y_true_update)
    return loss
</code></pre>
<p>This was inspired by the first placed winner in the Freesound competition <a href="https://github.com/lRomul/argus-freesound" target="_blank">https://github.com/lRomul/argus-freesound</a> but I noticed that it doesn't make sense if it is used with mixup since audio will be mixed up anyways. So I also obtain the max of noisy binary labels so that noisy labels mixed with clean labels are considered to be noisy labels.</p>
<h6>Pseudo Labels</h6>
<p>I didn't get much boost from pseudo labels, maybe I did something wrong but nonetheless, it was used in some models. I used a 0.8 threshold for labels generated with 5-second windows and utilized the same window positions during training. Using raw predictions didn't help the model at all on lb.</p>
<h6>Post processing</h6>
<p>We set the species 3 labels to be 1 with a 0.95 threshold and it boosted the score slightly</p>
<h6>Other stuff</h6>
<ul>
<li>Early stopping of 20 epochs with a minimum learning rate of 9e-6 to start counting these 20 epochs</li>
<li>Reduce learning rate on Plateau with a factor of 0.6 and start with a few warmup epochs, when LR is reduced the best model weights was loaded back again</li>
<li>Catalyst was used</li>
</ul>
<h2>Things that failed</h2>
<ul>
<li>using models without pre-trained weights</li>
<li>timeshift</li>
<li>using species from the Cornell Competition that are confused with species in this competition as a distractor noise, for example, moudov is similar to species 15, reevir is similar to species 11, rebwoo is similar to species 6, bkbwar is similar to species 7, cacwre is similar to species 19, every is similar to species 17 and nrwswa is similar to species 20</li>
<li>using plane sounds from Freesound50k data</li>
<li>using PCEN, deltas or CQT</li>
<li><a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183269" target="_blank">Random Power</a></li>
<li>TTA with different window types</li>
<li>Manifold mixup with resnest50</li>
<li>using trainable Switchnorm as an initial layer replacing normal standardization</li>
<li>using trainable Examplar norm as an initial layer replacing normal standardization</li>
<li>Context Gating</li>
<li>split audio intro three equal-length chunks and concat as 3 layer image</li>
<li>lsep and <a href="https://arxiv.org/abs/2009.14119" target="_blank">Assymetric loss </a></li>
<li>using rain sounds from Freesound50k data</li>
<li>using a fixed validation mask similar to how I used random training mask</li>
<li>use <a href="https://vsitzmann.github.io/siren/" target="_blank">SIREN</a> layer</li>
<li>Tried to separate some confusing patterns as separate manual labels but didn't get the chance to test them</li>
</ul>
<p>Hopefully, I didn't miss anything. Oh, we were holding off submitting a good model until <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> came along :)</p>
<p>nope, only hand labeling test set is not allowed <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/197735" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/197735</a></p>
<p>Thanks! and yeah, imagine the stress when you did so much work and someone came along and said they got top 10 in one submission 😨</p>
<p>Thanks Shinmura-san :)</p>
<p>Thanks! I couldn't run vit_large_patch16_224 with a large batch size because of all my techniques, I only used batch size 10 but I don't think batch size is the problem, I couldn't get it to converge initially too, and decided to try a lower learning rate (0.0001 instead of 0.001) and it could slowly converge nicely. </p>
|
Rainforest Connection Species Audio Detection
|
43rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment. </p>
<blockquote>
<p>would you end up with some empty bands?</p>
</blockquote>
<p>Yes. With 850 mel bands, some empty bands exist, but 850 is still better than 512 or 751.<br>
It is strange; I cannot explain the reason.</p>
<p>Maybe small features are more critical. In paper [2] (state-of-the-art on AudioSet), EfficientNet (pretrained on ImageNet) is used, but the resolution is not square.</p>
<p>First of all, many thanks to my fellow Kagglers.<br>
I got a lot of ideas from Kagglers in the discussions, and it was fun.</p>
<p>My solution summary is below.</p>
<ul>
<li>The high resolution of spectrogram</li>
<li>Post-processing with moving average</li>
<li>Teacher-student model for missing labels</li>
</ul>
<h1>1st stage: SED</h1>
<p>I started from a basic experiment with <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/211007" target="_blank">SED</a>. Why SED? Because I think SED is strong for multi-label tasks. </p>
<p>I used the log mel-spectrogram as the input of SED. The basic experiments involved data augmentation (Gaussian noise, SpecAugment and MixUp), backbone model choice and adjusting the resolution of the log mel-spectrogram. As a result, the conditions below were the best for me.</p>
<ul>
<li>No noise injection</li>
<li>MixUp</li>
<li>The best model architecture is EfficientNet</li>
<li><strong>The higher the resolution</strong> of log mel-spectrogram, the better the result.</li>
</ul>
<h3>The resolution</h3>
<p>The most important one is the resolution. Recently, in Kaggle computer vision solutions, the higher the image resolution, the better the result. The same phenomenon may happen with spectrograms. In a mel-spectrogram, the resolution can be changed by adjusting "hop_size" and "mel_bins". The following results are from changing the resolution with ResNest50 (single model).</p>
<table>
<thead>
<tr>
<th>Resolution(Width-Height)</th>
<th>public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1501-64(PANNs default)</td>
<td>0.692</td>
</tr>
<tr>
<td>3001-128</td>
<td>0.805</td>
</tr>
<tr>
<td>6001-64</td>
<td>0.725</td>
</tr>
<tr>
<td>1501-256</td>
<td>0.761</td>
</tr>
<tr>
<td>751-512</td>
<td><strong>0.823</strong></td>
</tr>
<tr>
<td>1501-512</td>
<td>0.821</td>
</tr>
</tbody>
</table>
<p>The resolution was critical! According to the experimental results, a good resolution was "high" and close to square; 751-512 looked good. In the end, I chose 858-850. This configuration is as follows.</p>
<pre><code>model_config = {
"sample_rate": 48000,
"window_size": 1024,
"hop_size": 560,
"mel_bins": 850,
"fmin": 50,
"fmax": 14000,
"classes_num": 24
}
</code></pre>
<h3>Post-processing</h3>
<p>I used the <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/211007" target="_blank">framewise output</a> for the submission. It contains time and class information, but there are a lot of false positives in the framewise output, because it is not processed with long-term information. Therefore <strong>short events in the framewise output should be deleted.</strong> I prepared post-processing for the framewise output: a moving average.</p>
<p><img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/264781/2d0ad5b8-f046-64da-a98a-96f0ae79ba2e.png" alt="image.png"></p>
<p>By taking a moving average in the time direction for each class, we can delete short events. This idea is based on <a href="http://dcase.community/documents/challenge2020/technical_reports/DCASE2020_Chan_6.pdf" target="_blank">the paper</a>[1]. The sample code is as follows.</p>
<pre><code>def post_processing(data):  # data.shape = (24, 600)  # (classes, time)
    result = []
    for i in range(len(data)):
        result.append(cv2.blur(data[i], (1, 31)))
    return result
</code></pre>
<p>I improved the LB by using the moving average. The following results compare post-processing with EfficientNetB3 (single model).</p>
<table>
<thead>
<tr>
<th></th>
<th>public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o post-processing</td>
<td>0.785</td>
</tr>
<tr>
<td>w/ post-processing</td>
<td><strong>0.840</strong></td>
</tr>
</tbody>
</table>
<h3>Summary</h3>
<ul>
<li>MixUp(alpha=0.1)</li>
<li>Epoch 30</li>
<li>Adam(lr=0.001) + CosineAnnealing(T=10)</li>
<li>Batchsize 6</li>
<li>Use only tp label</li>
<li>Get random tp clip 10 sec</li>
<li>The resolution of log mel-spectrogram: 858-850</li>
<li>Loss function: BCE</li>
<li><a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/211007" target="_blank">Weak label training</a></li>
<li>Post-processing: moving average</li>
</ul>
<p>Then I got <strong>0.916 public LB</strong> with EfficientNetB0(5-folds average ensemble).</p>
<h1>2nd stage: missing labels and the ensemble</h1>
<p>I reported <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/208830#1139171" target="_blank">discovering missing labels and re-training</a>, and it didn't work. After that, I thought about missing labels again. My conclusion is that the model is not accurate enough for discovering missing labels: there are a lot of missing labels around the TP regions, and therefore the model itself is not accurate. </p>
<p><img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/264781/0e6009db-c1cf-7297-3ac4-df92dd3420c5.png" alt="train.png"></p>
<p>To solve this issue, I used teacher-student model. </p>
<p><img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/264781/79d8e1c8-6814-68c6-0a9d-d41d830fcdfb.png" alt="geretation.png"></p>
<p>The 1st generation is similar to the 1st stage. I gradually increased the model prediction ratio. By using the teacher-student model, I could discover missing labels. Especially in strong-label training, the teacher-student model was effective. The following result is the teacher-student model score with EfficientNetB0.</p>
<p><img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/264781/ddded273-c2b1-92c1-3272-1e19377a5b77.png" alt="image.png"></p>
<p>"MixUp rate" is probabilistic MixUp. This method is based on <a href="https://arxiv.org/abs/2102.01243" target="_blank">the paper</a>[2]. </p>
<p>Finally, I made the ensemble of 1st stage model and 2nd stage model. Ensemble procedure is simple average. Then I got <strong>0.924 public LB.</strong></p>
<h1>References</h1>
<p>[1] Teck Kai Chan, Cheng Siong Chin1 and Ye Li, "SEMI-SUPERVISED NMF-CNN FOR SOUND EVENT DETECTION".<br>
[2] Yuan Gong, Yu-An Chung, and James Glass, "PSLA: Improving Audio Event Classification with<br>
Pretraining, Sampling, Labeling, and Aggregation".</p>
<h1>Appendix: the resolution and EfficientNet</h1>
<p>Finally, I show interesting result. It is relationship between EfficientNet and the resolution. The following result is public LB(5-folds average ensemble).</p>
<table>
<thead>
<tr>
<th>Resolution(W-H)</th>
<th>751-512</th>
<th>751-751</th>
<th>858-850</th>
</tr>
</thead>
<tbody>
<tr>
<td>EfficientNetB0</td>
<td>0.893</td>
<td>0.904</td>
<td><strong>0.916</strong></td>
</tr>
<tr>
<td>EfficientNetB3</td>
<td><strong>0.913</strong></td>
<td>0.912</td>
<td>0.900</td>
</tr>
</tbody>
</table>
<p>For B0, the higher the resolution, the better the result, but for B3 it was the other way around. Usually, the larger the EfficientNet, the better it is at <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/154683" target="_blank">high resolution</a>, but the above is the reverse. Why?</p>
<p>Maybe <strong>domain shift</strong> (train: noisy sound -> test: clean sound) is involved. B3 has learned <strong>train-domain features</strong> (noisy sound). On the other hand, B0 has less representational capacity than B3, so at high resolution B0 has learned the <strong>common features</strong> of the train and test domains. Without domain shift, B3 would probably also have shown good results at high resolution.</p>
|
Rainforest Connection Species Audio Detection
|
4th place solution (5th public LB)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah, always a 5-second region cropped around the center with some random shifts, regardless of the length of the event. </p>
<p>First of all, I'd like to thank my teammates, Rainforest Connection and Kaggle for this interesting and tricky challenge!</p>
<p>The major issue in this competition was obviously the labelling quality. The true-positive and false-positive audios contain lots of unlabelled regions, which adds too much noise for the models. As a consequence, until the very end of the competition we hadn't managed to establish a reliable local validation strategy and were mostly relying on the public LB scores.</p>
<p>Moreover, the labeled regions in the TP audios were balanced, i.e. each class had an equal number of labels. However, we noticed that the test predictions contain mostly the 3rd class as the top-1 probability, and with a higher percentage of the 3rd class, the LB score tends to be better. The situation was similar for some other classes (e.g. top-2 was mostly the 18th class). This gave us the idea that the test files probably have a completely different class distribution compared to the TP data. That's why we applied additional multipliers to the 3rd and 18th classes to artificially increase their probabilities (we call this class balancing); see the sketch below.</p>
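<p>A minimal sketch of that class balancing, assuming the standard <code>s0…s23</code> submission columns (the multiplier values are placeholders; the team tuned theirs against the public LB). Since the metric is rank-based, simple per-class multipliers are enough to push a species up the ranking:</p>
<pre><code>import pandas as pd

sub = pd.read_csv("submission.csv")
multipliers = {"s3": 2.0, "s18": 1.5}   # illustrative values only
for col, mult in multipliers.items():
    sub[col] = sub[col] * mult
sub.to_csv("submission_balanced.csv", index=False)
</code></pre>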
<p>Our final solution consists of 3 stages.</p>
<h2>1st Stage</h2>
<ul>
<li>Data: only TP labels on 26 classes (for each song_type).</li>
<li>Models: SED-classifiers (EfficientNet-B1 and EfficientNet-B3)</li>
<li>Cropping strategy: Random crops around TP regions</li>
<li>Loss: BCE</li>
<li>Augmentations: spectrogram augmentations (SpecAugment, Noise) and CutMix: cutting the TP regions and pasting them into the random time regions in the other TP and FP audios.</li>
<li>Public LB score: 0.909 -> 0.940 (after balancing)</li>
<li>Private LB score: 0.915 -> 0.938</li>
</ul>
<h2>2nd Stage</h2>
<p>Taking the models from the 1st stage, we made a set of pseudolabels for the TP (OOF), FP and test data. The pseudolabels were generated using the SED framewise output. At this point, the audio files have many more labeled regions compared to the initial TP data, and at this stage the models are trained on the pseudolabels only. We applied two approaches:</p>
<h5>SED-classification</h5>
<ul>
<li>Data: TP pseudolabels + random 2000 samples from FP pseudolabels for each fold. Use soft labels (0.9) for the pseudolabels</li>
<li>Models: SED-classifiers (EfficientNet-B0, EfficientNet-B1, MobileNetV2, DenseNet121)</li>
<li>Cropping strategy: Random 5 seconds crops around pseudolabeled regions</li>
<li>Loss: modified LSEP loss</li>
<li>Augmentations: raw audio augmentations, such as: GaussianNoiseSNR, PinkNoiseSNR, PitchShift, TimeShift, VolumeControl </li>
<li>TTA: 6 different crop sizes are used during the inference: 2, 5, 10, 20, 30 and 60 seconds</li>
<li>Best single model (5 fold) public LB score: 0.957 (after balancing)</li>
<li>Private LB score: 0.963</li>
</ul>
<h5>Usual classification</h5>
<ul>
<li>Data: TP + FP pseudolabels. Pre-train models on the test pseudolabels</li>
<li>Models: Usual classifiers (EfficientNet-B1, ResNet34, SE-ResNeXt50)</li>
<li>Cropping strategy: Random crops around pseudolabeled regions</li>
<li>Loss: BCE</li>
<li>Augmentations: spectrogram augmentations (SpecAugment, Noise) and CutMix</li>
<li>Best single model (5 fold) public LB score: 0.952 (after balancing)</li>
<li>Private LB score: 0.959</li>
</ul>
<h2>3rd Stage</h2>
<p>Taking the overall ensemble from the 2nd Stage allows to get the Public LB score of 0.965 (Private LB: 0.969). To achieve our best 0.969 Public LB (Private LB: 0.971) we're applying single class semantic segmentation models for 3rd, 11th and 18th classes (other classes didn't give any score improvements on the Public LB).</p>
<p>The segmentation polish is done in the following manner:<br>
<code>class_score = class_score * (1 + 0.1 * num_instances) if num_instances > 0 else class_score * 0.9</code>, where <code>num_instances</code> is the number of instances predicted by the semantic segmentation model for each recording.</p>
<h2>What didn`t work</h2>
<ol>
<li>PANN pretrained weights (or other audio pretrained models) - imagenet performs best</li>
<li>Using "fat" encoders</li>
<li>Focal loss with soft penalty (But as we see It works for the other participants)</li>
<li>Multiclass segmentation</li>
<li>Raw audio classification with 1d convolutions </li>
</ol>
<p>We tried to train binary semantic segmentation for every class, but most of them trained poorly with a low IoU score on validation (e.g. classes 1 and 12), and others just decreased the LB score (e.g. classes 7 and 15). So we decided to use only the most frequent classes, to decrease the number of false positives.</p>
<blockquote>
<p>tp annotation can have actually multiple labels in the time interval tmin to tmax.</p>
</blockquote>
<p>Yeah, so we found all labels that appear in each crop and added them to the ground-truth labels for that crop.</p>
|
Rainforest Connection Species Audio Detection
|
5th place solution (Training Strategy)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for your comment, and sorry for the late reply.</p>
<p>I treat the original labels and the pseudo labels separately.</p>
<p>My first 3rd-stage loss function was this:</p>
<pre><code>def rfcx_3rd_criterion(outputs, targets):
    clipwise_preds_att_ti = outputs["clipwise_preds_att_ti"]
    posi_label = ((targets == 1).sum(2) > 0).float().to(device)
    soft_posi_label = ((targets == 2).sum(2) > 0).float().to(device)
    nega_label = ((targets == -1).sum(2) > 0).float().to(device)
    soft_nega_label = ((targets == -2).sum(2) > 0).float().to(device)
    posi_y = torch.ones(clipwise_preds_att_ti.shape).to(device)
    nega_y = torch.zeros(clipwise_preds_att_ti.shape).to(device)
    posi_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, posi_y)
    nega_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, nega_y)
    soft_posi_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, posi_y)
    soft_nega_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, nega_y)
    posi_loss = (posi_loss * posi_label).sum()
    nega_loss = (nega_loss * nega_label).sum()
    soft_posi_loss = (soft_posi_loss * soft_posi_label).sum()
    soft_nega_loss = (soft_nega_loss * soft_nega_label).sum()
    loss = posi_loss + nega_loss + soft_posi_loss * 0.5 + soft_nega_loss * 0.5
    return loss
</code></pre>
<p>But soft_nega did not work well, so I removed it.</p>
<p>Thank you for your comment, and sorry for the late reply.</p>
<p>I calculated LWLRAP on the 3rd stage's re-labeled data and it was 0.9621, which is near the LB.<br>
I had tried precision, recall, and AUC, but I could not get CV and LB to correlate completely.<br>
So we trusted the LB and used ensembles of various models for robustness.</p>
<h1>5th place solution (Training Strategy)</h1>
<p>Congratulations to all the participants, and thanks a lot to the organizers for this competition! This has been a very difficult but fun competition:)</p>
<p>In this thread, I introduce our approach about training strategy.<br>
About ensemble part will be written by my team member.</p>
<p>Our team ensemble each best model.</p>
<p>My model is Resnet18 which has a SED header. This model's LWLRAP is Public LB=0.949 /Private LB=0.951, and I trained by google colab using <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/198048" target="_blank">Theo Viel's npz dataset</a>(32 kHz, 128 mels). Thank you, Theo Viel!!</p>
<p>Our approach has 3 stages.<br>
The other team members differ in some things, like the base model and hyperparameters, but the default strategy is about the same.</p>
<h2>1st stage: pre-train</h2>
<p><em>※I think this part is not important. Team member Ahmet skips this part.</em></p>
<p>This stage transfers learning from ImageNet to spectrograms.</p>
<p>Theo Viel's npz dataset can be regarded as a 128x3751 image.<br>
I cut 512-wide crops from this image around the sound region between t_min and t_max.<br>
I train on these crops using tp_train and 30 sampled fp_train clips.</p>
<p>Parameters:</p>
<ul>
<li>Adam</li>
<li>learning_rate=1e-3</li>
<li>CosineAnnealingLR(max_T=10)</li>
<li>epoch=50</li>
</ul>
<p>Continue 2nd and 3rd stage use this trained weight.</p>
<h2>2nd stage: pseudo label re-labeling</h2>
<p>The purpose of stage 2 is to improve the model and to make pseudo labels with it.</p>
<p>It uses the 1st-stage trained weights.</p>
<p>The key point, I think, is to calculate the loss only on labeled frames. The positive labels were sampled from tp_train.csv only and the negative labels were sampled from fp_train.csv only.<br>
I put 1 for positive labels and -1 for negative labels.</p>
<pre><code>tp_dict = {}
for recording_id, df in train_tp.groupby("recording_id"):
    tp_dict[recording_id + "_posi"] = df.values[:, [1, 3, 4, 5, 6]]

fp_dict = {}
for recording_id, df in train_fp.groupby("recording_id"):
    fp_dict[recording_id + "_nega"] = df.values[:, [1, 3, 4, 5, 6]]

def extract_seq_label(label, value):
    seq_label = np.zeros((24, 3751))  # (label, sequence)
    middle = np.ones(24) * -1
    for species_id, t_min, f_min, t_max, f_max in label:
        h, t = int(3751 * (t_min / 60)), int(3751 * (t_max / 60))
        m = (t + h) // 2
        middle[species_id] = m
        seq_label[species_id, h:t] = value
    return seq_label, middle.astype(int)

# extract positive label and middle point
fname = "00204008d" + "_posi"
posi_label, posi_middle = extract_seq_label(tp_dict[fname], 1)

# extract negative label and middle point
fname = "00204008d" + "_nega"
nega_label, nega_middle = extract_seq_label(fp_dict[fname], -1)
</code></pre>
<p>The loss function is:</p>
<pre><code>def rfcx_2nd_criterion(outputs, targets):
    clipwise_preds_att_ti = outputs["clipwise_preds_att_ti"]
    posi_label = ((targets == 1).sum(2) > 0).float().to(device)
    nega_label = ((targets == -1).sum(2) > 0).float().to(device)
    posi_y = torch.ones(clipwise_preds_att_ti.shape).to(device)
    nega_y = torch.zeros(clipwise_preds_att_ti.shape).to(device)
    posi_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, posi_y)
    nega_loss = nn.BCEWithLogitsLoss(reduction="none")(clipwise_preds_att_ti, nega_y)
    posi_loss = (posi_loss * posi_label).sum()
    nega_loss = (nega_loss * nega_label).sum()
    loss = posi_loss + nega_loss
    return loss
</code></pre>
<p>The images are cut and stacked with a sliding window.<br>
I set the window size to 512 and cut out the entire range of the 60-second audio with slightly overlapping windows. Each window overlaps the previous one by 49 pixels, considering that important sounds may be located at the boundaries of the split.</p>
<pre><code>N_SPLIT_IMG = 8
WINDOW = 512
COVER = 49

slide_img_pos = [[0, WINDOW]]
for idx in range(1, N_SPLIT_IMG):
    h, t = slide_img_pos[idx - 1][0], slide_img_pos[idx - 1][1]
    h = t - COVER
    t = h + WINDOW
    slide_img_pos.append([h, t])
print(slide_img_pos)
# [[0, 512], [463, 975], [926, 1438], [1389, 1901], [1852, 2364], [2315, 2827], [2778, 3290], [3241, 3753]]
</code></pre>
<p>I predict on each sliding window and assign pseudo labels, so I get 8 windows per 60-second recording.</p>
<table>
<thead>
<tr>
<th>patch idx</th>
<th>pixcel</th>
<th>time(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0〜512</td>
<td>0〜8</td>
</tr>
<tr>
<td>1</td>
<td>463〜975</td>
<td>7〜15</td>
</tr>
<tr>
<td>2</td>
<td>926〜1438</td>
<td>14〜23</td>
</tr>
<tr>
<td>3</td>
<td>1389〜1901</td>
<td>22〜30</td>
</tr>
<tr>
<td>4</td>
<td>1852〜2364</td>
<td>29〜37</td>
</tr>
<tr>
<td>5</td>
<td>2315〜2827</td>
<td>37〜45</td>
</tr>
<tr>
<td>6</td>
<td>2778〜3290</td>
<td>44〜52</td>
</tr>
<tr>
<td>7</td>
<td>3241〜3753</td>
<td>51〜60</td>
</tr>
</tbody>
</table>
<p>Parameters:</p>
<ul>
<li>Adam</li>
<li>learning_rate=3e-4</li>
<li>CosineAnnealingLR(max_T=5)</li>
<li>epoch=5</li>
</ul>
<h2>3rd stage: train by label re-labeled</h2>
<p>This stage trains on the new labels produced by the 2nd-stage model's re-labeling.</p>
<p>It uses the 1st-stage trained weights.</p>
<p>The new labels are an ensemble of our team's outputs, produced in the same way as my 2nd stage.</p>
<ul>
<li>our prediction average value is<ul>
<li><code>>0.5</code>: soft positive = 2</li>
<li><code><0.01</code>: soft negative = -2</li></ul></li>
</ul>
<p>In this stage, I calculate the loss only on labeled frames, as in the 2nd stage. </p>
<p>Parameters:</p>
<ul>
<li>Adam</li>
<li>learning_rate=3e-4</li>
<li>CosineAnnealingLR(max_T=5)</li>
<li>epoch=5</li>
</ul>
<p>Some of my tips:</p>
<ul>
<li>Don't use soft negative.</li>
<li>The re-label's loss(soft positive) is weighted 0.5.</li>
<li>last layer mixup(from <a href="https://medium.com/analytics-vidhya/better-result-with-mixup-at-final-layer-e9ba3a4a0c41" target="_blank">this blog</a>)</li>
</ul>
<h2>CV</h2>
<p>I use <a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a>'s MultilabelStratifiedKFold. Validation data is made from tp_train only and fp_train data is used training in all fold.</p>
<p>Each stage LWLRAP is that:</p>
<table>
<thead>
<tr>
<th>stage</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>1st</td>
<td>0.7889</td>
<td>0.842</td>
<td>0.865</td>
</tr>
<tr>
<td>2nd</td>
<td>0.7766</td>
<td>0.874</td>
<td>0.878</td>
</tr>
<tr>
<td>3rd</td>
<td>0.7887</td>
<td>0.949</td>
<td>0.951</td>
</tr>
</tbody>
</table>
<p>The 3rd stage's re-labeled LWLRAP is 0.9621.</p>
<h2>predict</h2>
<p>At test time, I increase COVER to 256, so I get 14 windows per 60-second recording.<br>
The prediction is max-pooled over the patches.</p>
<p>I use clipwise_output in training, and I use framewise_output in prediction. This approach came from <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/209684" target="_blank">shinmura0's discussion thread</a>. Thank you shinmura0:)</p>
<h2>did not work for my model</h2>
<ul>
<li>TTA</li>
<li>26 classes (divide song_type)</li>
<li>label weight loss</li>
<li>label smoothing (although it improved team member kuto's models)</li>
</ul>
<hr>
<p>Finally, I would like to thank the team members.<br>
If I was alone, I couldn't get these result.<br>
kuto, Ahmet, thank you very much.</p>
<p>My code:<br>
<a href="https://github.com/trtd56/RFCX" target="_blank">https://github.com/trtd56/RFCX</a></p>
<p>Thank you!</p>
<p>Yes, I did.<br>
By <code>clipwise_output in training</code> I mean weak-label training.<br>
I train with a clipwise label for each sliding window.<br>
I called it soft framewise training.</p>
<p>Your dataset was very useful for someone who only has a laptop PC, like me. It made experiments fast. Thank you!</p>
|
Rainforest Connection Species Audio Detection
|
7th place solution - Beluga & Peter
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> We predicted both the train and the test set using our best (non-pseudo) ensemble.<br>
We had predictions (sigmoid) for 3-second chunks using a 1-second sliding window. For all of the labels, of course.<br>
I used 0.7 as a threshold for selecting positive chunks. For such a chunk, only the species with a 0.7+ prediction were considered positive.<br>
It is a kind of weak pseudo-labeling.</p>
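<p>A sketch of that weak pseudo-labeling step, assuming one row of sigmoid outputs per 3-second chunk (array and column names are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd

THRESHOLD = 0.7

def chunks_to_pseudo_labels(chunk_preds, recording_ids, chunk_starts):
    # chunk_preds: (num_chunks, num_species) sigmoid outputs
    rows = []
    for preds, rec_id, start in zip(chunk_preds, recording_ids, chunk_starts):
        for species_id in np.where(preds >= THRESHOLD)[0]:
            rows.append({
                "recording_id": rec_id,
                "species_id": int(species_id),
                "t_min": start,
                "t_max": start + 3.0,
                "confidence": float(preds[species_id]),
            })
    return pd.DataFrame(rows)
</code></pre>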
<p>My results for one model (Efficientnet B0 - 4 folds avg)<br>
With manual labels, I got 0.933 (0.939 private)<br>
With manual labels + train only pseudo: 0.949 (0.961 private)<br>
With manual labels + test only pseudo: 0.950 (0.956 private)</p>
<p>The manual+train was better, but it was 4x slower, so I chose the manual + test-only version.</p>
<p><a href="https://www.kaggle.com/gaborfodor" target="_blank">@gaborfodor</a> used different thresholds/selection.</p>
<p>Thanks, Kaggle and RFCx, for this audio competition, and special thanks to my teammate <a href="https://www.kaggle.com/gaborfodor" target="_blank">@gaborfodor</a> Without him, I probably would have given up a long time ago, somewhere at 0.8xx.</p>
<h2>Data preparation</h2>
<p>We resampled everything to 32kHz and split the audio files into 3-second chunks. We used a sliding window with a 1-second step.</p>
<h2>Collecting more label</h2>
<p>The key to our result was that Beluga collected tons of training samples manually. He created an awesome annotation application; you can find the details <a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220305" target="_blank">here</a>. <em>Source code included</em>.<br>
After the first batch of manually labeled examples, we quickly achieved 0.93x with an ensemble of a varying number of PANN (cnn14) models. </p>
<h2>Input</h2>
<p>We used mel-spectrograms as inputs with various <code>n_bin</code> (128, 192, 256, 288). Beluga trained PANN-cnn14 models with one input channel. For the other backbones (effnets, resnets, etc) I used three input channels with a simple trick:</p>
<ul>
<li>I used different <code>n_mel</code> and <code>n_fft</code> settings for every channel. E.g. n_mel=(128, 192, 256), n_fft=(1024, 1594, 2048). This results in different height images, so resizing to the same value is necessary.<br>
We both used <code>torchlibrosa</code> to generate the mel-spectrograms (see the sketch after this list).</li></ul>
</ul>
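<p>A sketch of that multi-resolution three-channel input, written with librosa/cv2 for brevity (the team computed the mel-spectrograms with <code>torchlibrosa</code> inside the model; the hop length and target height here are my assumptions):</p>
<pre><code>import cv2
import librosa
import numpy as np

N_MELS = (128, 192, 256)
N_FFT = (1024, 1594, 2048)
TARGET_HEIGHT = 256

def three_channel_melspec(y, sr=32000, hop_length=512):
    channels = []
    for n_mels, n_fft in zip(N_MELS, N_FFT):
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
        )
        mel = librosa.power_to_db(mel).astype(np.float32)
        # different n_mels give different heights -> resize to a common size
        mel = cv2.resize(mel, (mel.shape[1], TARGET_HEIGHT))
        channels.append(mel)
    return np.stack(channels, axis=0)   # (3, TARGET_HEIGHT, time)
</code></pre>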
<h2>Augmentation</h2>
<p>We used three simple augmentations with different probability:</p>
<h5>Roll</h5>
<pre><code>np.roll(y, shift=np.random.randint(0, len(y)))
</code></pre>
<h5>Audio mixup</h5>
<pre><code>w = np.random.uniform(0.3, 0.7)
mixed = (audio_chunk + rnd_audio_chunk * w) / (1 + w)
label = (label + rnd_label).clip(0, 1)
</code></pre>
<h5>Spec augment</h5>
<pre><code>SpecAugmentation(time_drop_width=16, time_stripes_num=2, freq_drop_width=16, freq_stripes_num=2)
</code></pre>
<h2>Architectures</h2>
<ul>
<li>PANN - cnn14</li>
<li>EfficientNet B0, B1, B2</li>
<li>Densenet 121</li>
<li>Resnet-50</li>
<li>Resnest-50</li>
<li>MobileNet V3 Large 100</li>
</ul>
<p>We trained many versions of these models with different augmentation settings and training data. Beluga used a PANN - cnn14 model (I think it is the same as the original) from his <a href="https://www.kaggle.com/c/birdsong-recognition/discussion/183407" target="_blank">Cornell solution</a>.<br>
I trained a very similar architecture with different backbones, and I used an attention head from SED:</p>
<pre><code>x = ...generate mel-spectrogram...
x = self.backbone.forward_features(x)
x = torch.mean(x, dim=2)
x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
x = x1 + x2
x = F.dropout(x, p=0.5, training=self.training)
x = x.transpose(1, 2)
x = F.relu_(self.fc1(x))
x = x.transpose(1, 2)
x = F.dropout(x, p=0.5, training=self.training)
(clipwise_output, norm_att, segmentwise_output) = self.att_block(x)
segmentwise_output = segmentwise_output.transpose(1, 2)
framewise_output = interpolate(segmentwise_output, self.interpolate_ratio)
output_dict = {
"framewise_output": framewise_output,
"clipwise_output": clipwise_output,
}
</code></pre>
<h2>Training</h2>
<p>Nothing special. Our training method was the same for all of the models:</p>
<ul>
<li>4 folds</li>
<li>10 epochs (15 with a higher probability of mixup)</li>
<li>Adam (1e-3; *0.9 after 5 epochs)</li>
<li>BCE loss (PANN version)</li>
<li>We used the weights of the best validation LWLRAP epoch for inference.</li>
</ul>
<h2>Pseudo labeling</h2>
<p>After we had an excellent ensembled score on the public LB (0.950), we started to add pseudo labels to our dataset. The result after we re-trained everything was 0.96x.</p>
<h2>Final Ensemble</h2>
<p>Our final ensemble had 80+ models (all of them trained with 4-folds) </p>
|
Rainforest Connection Species Audio Detection
|
8th Place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!<br>
I saw your discussion.<br>
Great insights and hard work!<br>
I'll see you at another competition somewhere!</p>
<p>I am always amazed at your discussions and solutions!<br>
You are one of the kaggler I personally respect!</p>
<p>Sorry, there is a mistake in my text!<br>
I tried hand labels => pseudo labels.<br>
I did not try pseudo labels from the beginning.</p>
<blockquote>
<p>how much difference is there between the 2 set?</p>
</blockquote>
<p>So I don't have a set of pseudo labels with me.<br>
I was not able to compare them.<br>
I can't say that the pseudo labels did not work. I will fix it.</p>
<blockquote>
<p>can I modify so that the 2 distribution are more similar?</p>
</blockquote>
<p>I've looked at all the train_data.<br>
s3,s7,s11,s12,s18 seem to be particularly frequent.<br>
In Chris's notebook<br>
<a href="https://www.kaggle.com/cdeotte/rainforest-post-process-lb-0-970" target="_blank">https://www.kaggle.com/cdeotte/rainforest-post-process-lb-0-970</a><br>
the estimated test distribution is [113,204,44,923,53,41,3,213,44,23,26,149,255,14,123,222,46,6,474,4,17,18,23,72],<br>
and the distribution is similar.</p>
<p>If we do a Pseudo label using all the train_data, we may get a distribution exactly like test_data.</p>
<p>The hand label=> Pseudo label I tried mainly upsampled the tail class to eliminate the class imbalance, so it did not work well because it was far from the test_data distribution.</p>
<p>Thank you for opening this competition<br>
Also the notebooks and discussions have helped me a lot, thank you all!</p>
<p>My solution was similar to Beluga & Peter in 7th Place.<br>
<a href="https://www.kaggle.com/https" target="_blank">@https</a>://<a href="http://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220443" target="_blank">www.kaggle.com/c/rfcx-species-audio-detection/discussion/220443</a>.</p>
<ul>
<li>Multi-class multi-label problem</li>
<li>Data cleaning train_data with hand labels</li>
<li>SED model</li>
</ul>
<p></p>
<p><strong>[hand labels]</strong><br>
While observing the data around t_min and t_max in the given train_tp.csv, I found that there are many kinds of bird calls mixed together.<br>
So I decided to treat it as a multi-class multi-label problem.<br>
It was also mentioned in the discussion that the test labels were eventually carefully labeled by humans.<br>
The TPs in the given t_min, t_max range are all easy to understand, but there are many TPs in the 60s clip that are difficult to understand and not labeled.<br>
I thought it would be better to label them carefully by myself to make the condition as close to test as possible in case such incomprehensible calls are also labeled in test.<br>
And I was thinking of doing Pseudo labeling after the accuracy of the model improves.</p>
<p>I trimmed roughly 5 s around t_min and t_max in train_tp.csv.<br>
Hand labels took about a week.<br>
As a result, a total of 2428 clips and 5s chunks were used as train_data.<br>
The distribution of the train_data classes looks like this<br>
(I couldn't upload the image, so I'll post it later)</p>
<table>
<thead>
<tr>
<th>class</th>
<th>nb</th>
</tr>
</thead>
<tbody>
<tr>
<td>s3</td>
<td>1257</td>
</tr>
<tr>
<td>s12</td>
<td>520</td>
</tr>
<tr>
<td>s18</td>
<td>512</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
</tr>
<tr>
<td>s16</td>
<td>100</td>
</tr>
<tr>
<td>s17</td>
<td>100</td>
</tr>
<tr>
<td>s6</td>
<td>97</td>
</tr>
</tbody>
</table>
<p>I can see that there is a label imbalance, especially for s3, s12, and s18, because their labels co-occur among the other classes of clips.</p>
<p>In particular, s3 is dominant, so it tends to get high output probabilities, while a few classes get low probabilities; I thought this was a bad situation for this evaluation metric.<br>
Therefore, in order to achieve a more balanced distribution, I oversampled the minority classes and undersampled the majority classes.<br>
However, the LB became worse.<br>
Looking back, I didn't think of approaching the test distribution, as Chris pointed out.<br>
<a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220389" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220389</a></p>
<p>Eventually, I ensembled 15 models with various losses (FocalLoss, FbetaLoss+FocalLoss, ResampleLoss).</p>
<p><strong>[training]</strong><br>
Example single model:<br>
PANNsDense161 (public_LB 0.95548, private_LB 0.96300)</p>
<p>I also tried EfficientNet_b0, Dense121, etc., but Dense161 worked well.</p>
<p>train_data(sr=48000,5s)<br>
window_size=2048,hop_size=512,mel_bins=256<br>
MultilabelStratifiedKFold 5fold<br>
BCEFocalLoss(α=0.25,β=2)<br>
GradualWarmupScheduler,CosineAnnealingLR(lr = 0.001,multiplier=10,epo35)</p>
<p><strong>Augmentation</strong><br>
GaussianNoise(p=0.5)<br>
GaussianSNR(p=0.5)<br>
FrequencyMask(min_frequency_band=0.0, max_frequency_band=0.2, p=0.3)<br>
TimeMask(min_band_part=0.0, max_band_part=0.2, p=0.8)<br>
PitchShift(min_semitones=-0.5, max_semitones=0.5, p=0.1)<br>
Shift(p=0.1)<br>
Gain(p=0.2)</p>
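<p>These parameter names match the <code>audiomentations</code> library, so the pipeline presumably looked roughly like the sketch below (library defaults are used wherever a value is not listed above):</p>
<pre><code>from audiomentations import (AddGaussianNoise, AddGaussianSNR, Compose,
                             FrequencyMask, Gain, PitchShift, Shift, TimeMask)

augment = Compose([
    AddGaussianNoise(p=0.5),
    AddGaussianSNR(p=0.5),
    FrequencyMask(min_frequency_band=0.0, max_frequency_band=0.2, p=0.3),
    TimeMask(min_band_part=0.0, max_band_part=0.2, p=0.8),
    PitchShift(min_semitones=-0.5, max_semitones=0.5, p=0.1),
    Shift(p=0.1),
    Gain(p=0.2),
])

# usage: augmented = augment(samples=waveform, sample_rate=48000)
</code></pre>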
<p><strong>[inference]</strong><br>
stride=1<br>
framewise_output max<br>
No TTA (I used it in the final ensemble model)</p>
<p>Finally, I've uploaded the train_data_wav (sr=48000) and csv that I used. <br>
<a href="https://www.kaggle.com/shinoda18/rainforest-data" target="_blank">https://www.kaggle.com/shinoda18/rainforest-data</a></p>
|
Rainforest Connection Species Audio Detection
|
9th Place - Post Process LB 0.926 to LB 0.963!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Giba</p>
<p>That's awesome <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> . You have a very accurate model ! </p>
<p>When I use <code>MODE=1</code> with the code posted here including the <code>d2</code> vector from the paper, my result is private LB 0.970. During the comp I computed my own <code>d2</code> array without using the paper and achieved private LB 0.963, so it seems the paper's test distribution is the correct one and better than I could calculate from my test predictions.</p>
<p>Interesting. Nice analysis and discovery!</p>
<p>You are correct that probing public LB and using the result to post process for private LB is risky. However in many comps, public and private have a similar distribution and public is large enough to compute accurate enough means, so it works. In this comp, the 1st place team said they probed public LB and it worked on private LB.</p>
<p>Yes, I trained <code>2:1</code> for ease of PP. By controlling the train distribution carefully, I could use Bayes Rule to adjust these odds for the private test dataset based on the distribution observed from making predictions on the full test dataset.</p>
<p>This PP doesn't require that public and private are similar. This PP requires that we calculate <code>FACTORS</code>. When we calculate the <code>FACTORS</code> we need to make sure that they pertain to private LB.</p>
<p>I list 3 ways to calculate <code>FACTORS</code> above. If we compute <code>FACTORS</code> from probing public LB, that is risky. If we use the entire test data to compute <code>FACTORS</code> that is safest. The last method is using the <code>FACTORS</code> from the paper. That is risky too because we are not sure if they pertain to private.</p>
<p><a href="https://www.kaggle.com/fffrrt" target="_blank">@fffrrt</a> That is interesting. For me, i gain a lot of benefit by post processing all species. Perhaps you already downsample the other species in your training process. </p>
<p>I train all species with 50% true positive and 50% false positive. So even my rare species get predicted with high probabilities and thus I need to PP them to make them smaller.</p>
<p>Wow, simple adjustment for big gain. I see your predictions are logits, so adding a constant to them is like scaling, i.e. multiplying, the odds by the exponential of that constant (for example, adding <code>1.6 = log(5)</code> multiplies the odds by 5). </p>
<p>This simple trick works in many comps. Whenever the metric is AUC comparing different targets, and this comp metric was AUC in disguise, then we can improve LB by making sure each group of predicted targets is properly calibrated against other groups. This same trick was used in Jigsaw Toxic Comp <a href="https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/discussion/160980" target="_blank">here</a>, described in section post process.</p>
<p>And other metrics benefit from PP adjustments too. For example, Log Loss and MSE are sensitive to the mean as shown <a href="https://www.kaggle.com/cdeotte/moa-post-process-lb-1777" target="_blank">here</a></p>
<p><a href="https://www.kaggle.com/vlomme" target="_blank">@vlomme</a> That's great Kramarenko. What <code>MODE</code> worked best for you?</p>
<p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> CPMP, try the above PP and see if it helps your submission.csv file. Make sure your file contains probabilities, not logits. And follow the instructions above which suggests different modes and fudges in case it doesn't work.</p>
<p><a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> Thanks Theo!</p>
<p>Try the following. Make sure your <code>submission.csv</code> are probabilities between and including 0 and 1. Do not use logits. First try <code>MODE=1</code>, then if that isn't good, try <code>MODE=2</code>. If that isn't good try <code>MODE=3</code>. If that isn't good, try <code>MODE=1</code> with a different <code>FUDGE</code>. Perhaps try 0.5, 1, and 3.</p>
<pre><code># USE MODE 1, 2, or 3
MODE = 1

# LOAD SUBMISSION
import pandas as pd, numpy as np
FUDGE = 2.0
FILE = 'submission.csv'
df = pd.read_csv(FILE)
for k in range(24):
    df.iloc[:,1+k] -= df.iloc[:,1+k].min()
    df.iloc[:,1+k] /= df.iloc[:,1+k].max()

# CONVERT PROBS TO ODDS, APPLY MULTIPLIER, CONVERT BACK TO PROBS
def scale(probs, factor):
    probs = probs.copy()
    idx = np.where(probs!=1)[0]
    odds = factor * probs[idx] / (1-probs[idx])
    probs[idx] = odds/(1+odds)
    return probs

# DIFFERENT DISTRIBUTIONS
d1 = df.iloc[:,1:].mean().values
d2 = np.array([113, 204, 44, 923, 53, 41, 3, 213, 44, 23, 26, 149, 255,
               14, 123, 222, 46, 6, 474, 4, 17, 18, 23, 72])/1000.

for k in range(24):
    if MODE==1: d = FUDGE
    if MODE==2: d = d1[k]/(1-d1[k])
    if MODE==3: s = d2[k] / d1[k]
    else: s = (d2[k]/(1-d2[k]))/d
    df.iloc[:,k+1] = scale(df.iloc[:,k+1].values,s)

df.to_csv('submission_with_pp.csv',index=False)
</code></pre>
<p>The paper says they stored all their audio at website ARBIMON <a href="https://arbimon.rfcx.org/" target="_blank">https://arbimon.rfcx.org/</a> . I didn't search nor use any external data in this comp. I wonder whether the paper's 512471 audio clips are publicly available on ARBIMON? That's a half million audio clips!</p>
<p>I just post processed my final submission using the test distribution from Table 2 in the paper <a href="https://www.sciencedirect.com/science/article/pii/S1574954120300637" target="_blank">here</a>. It would have achieved private LB 0.970. So it looks like the paper's distribution is better than the one I calculate from my test predictions.</p>
<p><img src="http://playagricola.com/Kaggle/paper_LB.png" alt="image"></p>
<p>Congratulations tugstugi. Great job achieving solo gold ! I saw you climbing quickly the last few days. Well done!</p>
<p>It's interesting how PP increased your private LB score more than you public LB score. Do you have any idea why?</p>
<p>Thanks CPMP. Yes, I think you can use logits. Then you "move" the mean with addition instead of multiplication as in <br>
$$\text{new logits} = \text{adjust} + \text{old logits}$$<br>
where <code>adjust = log(FACTOR)</code> in my formula.</p>
<p>Note that I simplified my PP a bit. I actually did it repeatedly. I would start with my original submission.csv, calculate FACTOR, then apply it to scale the submission.csv. Then I would calculate a new FACTOR from the modified submission.csv. Next, I would start with my original submission.csv again and apply the new FACTOR, then calculate another FACTOR. I would do this repeatedly until the FACTOR did not change anymore. Then I would use that FACTOR to create my final submission from the original submission.</p>
<p><code>FACTOR = test odds / train odds</code>, but this needs a subtle adjustment too, because we train crops with <code>odds = x</code> but we must estimate full clip odds from crop odds. For example, if we train crops with 50 / 50 true positive and false positive, that is <code>1:1</code> crop odds. But the odds of a species being present in the entire 60 seconds after taking the max over sliding crops is more like <code>2:1</code> train odds, because there is a higher probability of seeing a species in 30 crops versus 1 crop. So when computing <code>FACTOR</code>, we use the higher <code>2:1</code> for <code>train odds</code>.</p>
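<p>One way to read that iterative procedure as code (a hypothetical sketch for a single species column; <code>train_odds</code> stands for the effective full-clip training odds, e.g. the ~2:1 discussed above):</p>
<pre><code>import numpy as np

def scale(probs, factor):
    probs = probs.copy()
    idx = np.where(probs != 1)[0]
    odds = factor * probs[idx] / (1 - probs[idx])
    probs[idx] = odds / (1 + odds)
    return probs

def estimate_factor(original_probs, train_odds=2.0, n_iter=20):
    # Re-estimate test odds from the rescaled predictions and repeat,
    # always rescaling the ORIGINAL predictions, until the factor settles.
    factor = 1.0
    for _ in range(n_iter):
        p = scale(original_probs, factor).mean()
        new_factor = (p / (1 - p)) / train_odds
        if abs(new_factor - factor) < 1e-6:
            break
        factor = new_factor
    return factor
</code></pre>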
<p>Thanks Kaggle and RFCx for a fun competition. My final submission without post process achieves private LB 0.926 and with post process achieves <strong>private LB 0.963</strong>! That's +0.037 with post process!</p>
<h1>How To Score LB 0.950+</h1>
<p>The metric in this competition is different than other competitions. We are asked to provide <code>submission.csv</code> where each row is a test sample, and each column has a species prediction.</p>
<p>In other competitions, the metric computes <strong>column wise AUC</strong>. In this competition, the metric is essentially <strong>row wise AUC</strong>. Therefore we need each column to represent probabilities. The columns of common species need to be large values and the columns of rare species need to be small values.</p>
<h1>Train Distribution</h1>
<p>The distribution of the train data has roughly 6x the number of false positives versus true positives for each species. If you train your model with that data, then when your model is unsure about a prediction, it will predict the mean that it observed in the train data which is 1/7. Therefore if it is unsure about species 3, it will predict 1/7 and if it is unsure about species 19, it will predict 1/7.</p>
<p>This is a problem because species 3 appears in roughly 90% of test samples whereas species 19 appears in roughly 1% of test samples. Therefore when your model is unsure, it should predict 90% for species 3 and 1% for species 19.</p>
<h1>Test Distribution - Post Process</h1>
<p>In order to correct our model's predictions we scale the odds. Note that scaling odds doesn't affect predictions of 0 and 1. It only affects the unsure middle predictions. </p>
<p>First we convert the <code>submission.csv</code> column of probabilities into odds with the formula<br>
$$ \text{odds} = \frac{p}{1-p}$$<br>
Then we scale the odds with<br>
$$ \text{new odds} = \text{factor} * \text{old odds}$$<br>
And lastly we convert back to probabilities<br>
$$ \text{prob} = \frac{\text{new odds}}{1 + \text{new odds}}$$</p>
<h1>Sample Code</h1>
<pre><code># CONVERT PROB TO ODDS, APPLY MULTIPLIER, CONVERT BACK TO PROB
def scale(probs, factor):
    probs = probs.copy()
    idx = np.where(probs!=1)[0]
    odds = factor * probs[idx] / (1-probs[idx])
    probs[idx] = odds/(1+odds)
    return probs

for k in range(24):
    sub.iloc[:,1+k] = scale(sub.iloc[:,1+k].values, FACTORS[k])
</code></pre>
<h1>Increase LB by +0.040!</h1>
<p>The only detail remaining is how to calculate the <code>FACTORS</code> above. There are at least 3 ways.</p>
<ul>
<li>Create a model with output layer sigmoid, not softmax. Train with BCE loss using both true and false positives. Predict the test data. Compute the mean of each column. Convert to odds and divide by the odds of training data.</li>
<li>Probe the LB with <code>submission.csv</code> of all zeros and one column of ones. Then use math to compute <code>factor</code> for that species. UPDATE use random numbers less than 1 instead of 0s to avoid sorting uncertainty.</li>
<li>Use the factors listed in RFCx's paper <a href="https://www.sciencedirect.com/science/article/pii/S1574954120300637" target="_blank">here</a>, Table 2 in Section 2.4</li>
</ul>
<p>Personally, I used the first option listed above. I didn't have 5 days to probe the LB 24 times for the second option. And I didn't find the paper until the last day of the competition for the third option.</p>
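<p>A minimal sketch of that first option (an illustration, not the exact code used): average the sigmoid outputs over the test clips, convert to odds, and divide by the effective training odds.</p>
<pre><code>import numpy as np

def compute_factors(test_preds, train_odds=2.0):
    # test_preds: (n_test_clips, 24) sigmoid outputs of the sigmoid/BCE model.
    # train_odds: effective full-clip odds seen in training (~2:1 as discussed).
    col_mean = test_preds.mean(axis=0)      # estimated P(species present) on test
    test_odds = col_mean / (1 - col_mean)
    return test_odds / train_odds           # per-species multipliers for scale()
</code></pre>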
<p>My single model scores private LB 0.921 without post process and scores private LB 0.958 with post process. Ensembling a variety of image sizes and backbones increased the LBs to 0.926 and LB 0.963 respectively.</p>
<h1>Model Details</h1>
<p>I converted each audio file into Mel Spectrogram with Librosa <code>feature.melspectrogram</code> and <code>power_to_db</code> using sampling rate <code>32_000</code>, <code>n_mels = 384</code>, <code>n_fft=2048</code>, <code>hop_length=512</code>, <code>win_length=2048</code>. This produced NumPy arrays of size <code>(384, 3751)</code>. I later normalized them with <code>img = (img+87)/128</code> and trained with random crops of <code>384x384</code> which had frequency range 20Hz to 16_000Hz and time range 6.14 seconds. Each crop contained at least 75% of a true positive or false positive.</p>
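<p>A sketch of that spectrogram preprocessing (librosa calls as named in the text; function and variable names here are illustrative):</p>
<pre><code>import librosa

def audio_to_image(path):
    y, sr = librosa.load(path, sr=32_000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_mels=384, n_fft=2048, hop_length=512, win_length=2048)
    img = librosa.power_to_db(mel)      # roughly (384, 3751) for a 60 s clip
    return (img + 87) / 128             # normalization quoted above
</code></pre>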
<p>I concatenated the true positive and false positive CSV files from Kaggle. My dataloader provided labels, masks, and single-channel images. The labels and masks were contained in the vector <code>y</code>, which was 48 zeros where 2 were potentially altered. For species <code>k</code>, the <code>kth</code> element was 0 or 1 corresponding to false positive or true positive respectively. And the <code>k+24th</code> element was 1, indicating mask true to calculate loss for the <code>kth</code> species. I trained the TF model with the following loss</p>
<pre><code>def masked_loss(y_true, y_pred):
    mask = y_true[:,24:]
    y_true = y_true[:,:24]
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    mask = tf.cast(mask, y_pred.dtype)
    y_pred = tf.math.multiply(mask, y_pred, name=None)
    return K.mean( K.binary_crossentropy(y_true, y_pred), axis=-1 )*24.0
</code></pre>
<p>I used EfficientNetB2 with <code>albu.CoarseDropout</code> and <code>albu.RandomBrightnessContrast</code>. The optimizer was Adam with learning rate <code>1e-3</code> and reduce on plateau <code>factor=0.3</code>, <code>patience=3</code>. The final layer of the model was <code>GlobalAveragePooling2D()</code> and <code>Dense(24, activation='sigmoid')</code>. I monitored <code>val_loss</code> to tune hyperparameters.</p>
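<p>A minimal Keras sketch consistent with that description (EfficientNetB2 backbone, GAP head, 24 sigmoid outputs, Adam 1e-3); stacking the one-channel spectrogram crop to three channels is an assumption:</p>
<pre><code>import tensorflow as tf

base = tf.keras.applications.EfficientNetB2(
    include_top=False, weights='imagenet', input_shape=(384, 384, 3))

inp = tf.keras.Input((384, 384, 3))
x = base(inp)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(24, activation='sigmoid')(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=masked_loss)  # masked_loss from above
</code></pre>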
<h1>Try PP on Your Sub</h1>
<p>If you want to try post process on your <code>submission.csv</code> file, I posted a Kaggle notebook <a href="https://www.kaggle.com/cdeotte/rainforest-post-process-lb-0-970" target="_blank">here</a>. It uses the 3rd method above for computing <code>FACTORS</code>. It also has 3 <code>MODE</code> you can try to account for different ways that you may have used to train your models.</p>
<p>Thanks Psi. Congrats on another amazing 1st place finish. So impressive.</p>
<p>Good point about the sorting algorithm. I probed one species and the result was a little weird, I think you just explained why. I will update my post with your idea.</p>
|
Rainforest Connection Species Audio Detection
|
[completed] rank 19th solution in 13 days - LB 0.937/0.939 (public/private) 5 fold model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/CPMP" target="_blank">@CPMP</a> Thanks for the comment, Your first 0.930 submission inspires me to enter the competition and race against time. Thanks a lot and congrats on your solo gold!</p>
<p>here are what I think are the same and differences of our approaches:</p>
<p>the same: high level objectives</p>
<ul>
<li>to use limited frequency fmin and fmax per class</li>
<li>basically, train a classifier on the "crop spectrograms"</li>
<li>make the CNN <strong>variant</strong> to the frequency information</li>
</ul>
<p>the differences: implementation details</p>
<ul>
<li><p>you crop the spectrograms but I didn't. I have a larger receptive field (could be more signal or more noise). I use a sliding ROI window instead. There is no resizing of the ROI to a fixed size like 224x224, hence I preserved the same scale across all classes (which may or may not be a good thing??)</p></li>
<li><p>most CNN do average pooling at the last layer before classification. Here I use a conv kernel to do aggregation instead. In this way I make the CNN <strong>variant</strong> to the frequency information.</p></li>
</ul>
<p>Apart from these, I think the rest are the same. I can see that the use of TP and FP and loss function are the same. There will be some differences in augmentation, etc.</p>
<p>Can anyone suggest a good free image hosting website? The one I am using may delete the images after some time.</p>
<p>[summary]</p>
<ul>
<li><p>Design a sliding window conv classifier net. Trained with tp and fp annotations (no pseudo labels) on resnet34, this architecture achieved LB 0.937/0.939 (public/private) as a 5 fold model. A single fold model gives 0.881/0.888 (public/private). The first max pooling of resnet34 is removed to make the stride of the backbone 16 (instead of 32)</p></li>
<li><p>The final submission, LB 0.945/0.943 (public rank 15/private rank 19), is an ensemble of this network with different CNN backbones.</p></li>
</ul>
<hr>
<p>I am grateful for Kaggle and the host Rainforest Connection (RFCx) for organizing the competition. </p>
<p>As a Z by HP & NVidia global data science ambassador under <a href="https://datascience.hp.com/us/en.html" target="_blank">https://datascience.hp.com/us/en.html</a>, HP & Nvidia have kindly provided a Z8 data science workstation for my use in this competition. Without this powerful workstation, it would not be possible for me to develop my idea from scratch within 13 days. </p>
<p>Because I can finish experiments at a very great speed (the Z8 has Quadro RTX 8000 GPUs with NVLink, 96GB), I gained a lot of insights into model training and model design when the experimental feedback is almost instantaneous. I want to share these insights with the kaggle community. </p>
<p>Hence I started another thread to guide and supervise kagglers who want to improve their training skills. This is targeted at bringing them from silver to gold levels. You can refer to:<br>
<a href="https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220379" target="_blank">https://www.kaggle.com/c/rfcx-species-audio-detection/discussion/220379</a><br>
<a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/217238" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/217238</a></p>
<hr>
<p>data preprocessing</p>
<ul>
<li>log mel spectrogram with modified PCEN denoising (per- channel energy normalization). A 10sec clip has melspec of size 128x938<br>
n_fft = 2048<br>
win_length = 2048<br>
hop_length = 512 <br>
num_freq = 128</li>
</ul>
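<p>A rough sketch of that front end (the author's "modified PCEN" details are not given, so plain librosa.pcen is used here as a stand-in):</p>
<pre><code>import librosa

def clip_to_pcen_mel(y, sr=48_000):
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=2048, win_length=2048, hop_length=512, n_mels=128)
    # scaling by 2**31 follows the librosa.pcen docs for float input;
    # a 10 s clip gives roughly a 128 x 938 output, as stated above
    return librosa.pcen(mel * (2 ** 31), sr=sr, hop_length=512)
</code></pre>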
<hr>
<p>augmentation</p>
<ul>
<li>I haven't tried much augmentation yet. For TP annotations, random shift by 0.02 sec. For FP annotations, random shift by the width of the annotation and then random flip, mixup, cut-and-paste of the original annotation and its shifted version.</li>
<li>Augmentation is done in the time domain because of how my code is structured. (this is not the best solution)</li>
<li>It can be observed that if I increased the FP augmentation, the bce loss of the validation FP decreases and the public LB score improved.</li>
<li>heavy dropout in model</li>
</ul>
<hr>
<p>model and loss</p>
<ul>
<li>please see the below images</li>
<li>during training, I monitor the separate log loss of validation TP and FP. I also use the LRAP assuming one label (i.e. top-1 correctness). This is to decide when to early stop or decrease the learning rate. These are not the best approach.</li>
</ul>
<p><img src="https://i.ibb.co/txpjSWk/Selection-041.png" alt=""><br>
<img src="https://i.ibb.co/C9fQh7P/Selection-042.png" alt=""><br>
<img src="https://i.ibb.co/wymW9fB/Selection-040.png" alt=""><br>
<img src="https://i.ibb.co/9hhCfrw/Selection-039.png" alt=""></p>
<p>daily progress:<br>
<img src="https://i.ibb.co/7kGcfpj/Selection-050.png" alt=""><br>
I entered the competition after reading CPMP's post, saying that he has 0.930 LB on the first submission. WOW! Reading between the lines, it means:</p>
<ol>
<li>the public kernels are doing it "wrong". It could be the problem setting, the network architecture, the data, or some magic features. There is something very fundamental that can do better than the public kernel.</li>
<li>0.930 is quite a high score (in the gold region). With 2 more weeks to the deadline, I decided to give it a try. If I solved this unobvious puzzle, I might end up in the gold region as well.</li>
<li>the first step is to go through the public kernel (some PANN SED) and the baseline image classifier. I want to see what most people are doing (and could be improved)</li>
<li>after working for a week, I realized that there are two major "potential flaws":<ul>
<li>treating it as a multi-instance learning SED. Well, MIL could be a solution, but the problem is that we don't have bag labels (clip labels). Most MIL requires bag level labels but lacks instance level labels (segment labels). Here we have the opposite.</li>
<li>not using FP in training. Most public kernels use only TP for training. </li></ul></li>
<li>hence I started to design my own network architecture. The first step is an efficient way to do crop classification based on the mean annotation box. Hence I designed the sliding window network.</li>
<li>The next step is to use TP+FP in training</li>
<li>The last step is to use pseudo labels, but I didn't have time to complete this. But I do have some initial experimental results on this. Top-1 (max over time) pseudo labels are about 95% accurate for an LB score of 0.94. This is good enough for distillation.</li>
<li>Because pseudo labeling requires a lot of probing to prevent error propagation, I could not do it because I didn't have sufficient slots in the last days. Worse still, there is no train data at clip level at all. This makes local testing impossible.</li>
</ol>
<p>I was able to train 5-fold EfficientNet-B0 and resnet50 on the last day. Because of the large GPU cards of the HP workstation I am using, I can train large models with a large batch size. When I compared training the same model with a smaller batch size on my old machines, I found that the results are different and inferior, even if I use gradient accumulation.</p>
<p>I strongly feel that we have reached a new era. I think this is also why the latest RTX3090 has larger GPU memory than the previous cards. The future is transformers…, meaning more GPU memory. That's how fast deep learning is moving!</p>
|
Rainforest Connection Species Audio Detection
|
s3 class postprocessing (+0.008 on private LB)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Rainforest Connection Species Audio Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As it was mentioned by others, s3 was a very interesting class. Looking more into this, our team figured out that we should scale up predictions for s3 class. We checked a number of ways to do this, but eventually a simple scale of 1.5x worked best for us (our predictions were all positive numbers, that scaling probably wouldn't work if your predictions are in logits).</p>
<p>This scaling lifted our final ensemble from 0.933 to 0.941 (+0.008) on private LB.</p>
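<p>In submission terms the adjustment is essentially a one-liner; a sketch, assuming the columns are named s0…s23 and hold non-negative scores (not logits):</p>
<pre><code>import pandas as pd

sub = pd.read_csv('submission.csv')
sub['s3'] = sub['s3'] * 1.5      # scale up the ubiquitous s3 class
sub.to_csv('submission_pp.csv', index=False)
</code></pre>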
<p>The intuition to convince ourselves that this was not a small public LB fluke (because it similarly helped on public LB) was as follows:</p>
<ul>
<li>Each class roughly has similar number of TP labels </li>
<li>But you can see on OOF train predictions and on test prediction that classes are very imbalanced (let's say looking at frequencies of different classes being top1 or top3 predictions). And s3 class is actually clearly the most ubiquitous. </li>
<li>Now because of specific labeling of this competition, we are in a situation where s3 is very common class, and hence very often present next to other labels, and model gets 0 target for s3 class (if you did not do any corrections, like loss masking)</li>
<li>So model learns to be extra cautious at predicting s3, when in fact it should be quite aggressive in predicting it</li>
</ul>
<p>Hence we applied post-processing here. <br>
We could not leverage this on other classes, but s3 really stood out as a clear outlier for us. </p>
|
Raising Money to Fund an Organizational Mission
|
Post-Mortem
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Raising Money to Fund an Organizational Mission
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I've been looking at the demographic data and have a question about how it's organized particularly around the STOTPOP or total population. If I take all of the rows with a given city name and sum the STOTPOP columns, the resulting sum is a multiple of
what the actual population is (according to wikipedia). I was wondering if you could tell me how the data is organized as it doesn't appear to be by zip / zip9 / city. In other words if I wanted to find out the population of a given city, how would I do
so?</p>
<p>Anybody do or find anything interesting in this dataset? Or find any good tools for working with large data sets?</p>
<p>I found it barely manageable given the size. It also took me a while to wrap my head around what was in each of the files. Also the database restrictions (1 and 2 but not 3, 1 and 2 and 3, 2 and 3 but not 1, etc.) made a really difficult task much harder
and considerably less enjoyable. I ended up having to write 12 large files (and accompanying SQL code) for each model to compensate for this.
</p>
<p>I started out with a ~2% sample of the training data in R but even this was rough. I tried using the
<em>ff</em> package without much luck. I ended up doing most of the data manipulation using SQL Server 2008 R2 Express and the SSMS which I found to be a bright spot in the whole process as it performed really well given the data size. I especially appreciated
the data import manager which helps with wide data sets.</p>
<p>My best model ended up just being prior averages for prospectid, zip5, and packageid with linear regression. I predicted donation amount (not amount2) and response rate using separate models. I'd then do predictedGift^1.15 * predictedResponseRate for a
final prediction.</p>
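<p>The blend quoted above, as a tiny sketch (variable names are illustrative, not from the original code):</p>
<pre><code>def expected_donation(predicted_gift: float, predicted_response_rate: float) -> float:
    # combine the two separate models described above; the 1.15 exponent is from the post
    return predicted_gift ** 1.15 * predicted_response_rate
</code></pre>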
<p>I tried to use some of the demographic data but had a hard time as I was using zip5 as the key to get state abbreviations as a factor, but some of the zip codes cross state lines which leads to duplicates and zip9, even when indexed, just took too long.</p>
<p>I think the contest was a cool idea but would have been much better on just one of the 3 databases. Without that restriction I would have had more time to explore the demographic and historical data. The one thing I would have really liked to explore is
people's giving before the training period especially as much of the mailings seemed to be political and the test data is sitting within 12 months of a presidential election.
</p>
|
Recruit Restaurant Visitor Forecasting
|
12th place insight/detailed approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>By the way, I read your discussion on WTF and favorita a LOT. <br>
I probably spent more time reading them than the time you took writing it. <br>
Your discussions are always inspiring. Keep it up :)</p>
<p>Thank you for your kind words! </p>
<p>Hey, thanks! Congrats to you too. <br>
I definitely could have handled the holidays better, I never expected it would turn out to be this important!</p>
<p>First, I want to thank Recruit and Kaggle for hosting this exciting competition. As a relatively new Kaggler, it was a really enjoyable competition. <br>
Secondly, I want to encourage all other new Kagglers. I have no ML/stats/math background. If I can get a gold medal, you can, too :) </p>
<p>Now, I want to present some of my findings which got me a gold medal, and then my detailed approach. </p>
<h1>Findings</h1>
<h2>Next day is holidays</h2>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279665/8521/holiday_difference.PNG" alt="distribution of visitors"> <br>
You can see from this image, that the number of visitors on holidays are, <br>
(a) Close to weekend mean. <br>
(b) If you divide the holidays by whether the next day is a holiday or not, it will become quite close to a Saturday or Sunday. <br>
I dug further into this finding, and I found that it was a very consistent pattern across the year. </p>
<p>Features from this finding: <br>
- whether the next day is a holiday or not <br>
- "dows" which converts all holidays into Saturdays(5) or Sundays(6), depending on the next day </p>
<p>Score improvement:
0.003, securing my gold medal. <br>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279665/8520/dows_difference.PNG" alt="Score difference with the feature"></p>
<p>What I missed:
- I should have done a "dowsf" feature, converting every day before a holiday into Friday(4) </p>
<h2>Reserve data</h2>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279665/8525/reserve_most_in_a_week.PNG" alt="most_reserve-visit_datediff_is_within_1week">
As you can see from the image, most of the reserves are made within one week from visit. <br>
That means that if you train your model based on the raw reserve data, your model will predict that visitors in May will be really low, because it has low reserve numbers (which should not be the case, because the reserve numbers are low just because we don't know yet on 2017-04-22)</p>
<p>Features from this finding: (I trained 5 models, split based on the week number it was going to predict) <br>
- shifted(lagging) reserve data: <br>
shift the reserve data by n weeks, n being the number of weeks the model was trying to predict into the future. <br>
- shrinked reserve data: <br>
only use the reserve data that is available within the model's prediction horizon. That means that I would use reservations with "reserve_and_visit_date_diff > 7" for my week2 (2017-04-29 to 2017-05-06) model.</p>
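<p>A sketch of the "shrinked" variant above (column names follow air_reserve.csv; the aggregation choice is illustrative): for the model predicting week n, keep only reservations made at least 7*n days before the visit.</p>
<pre><code>import pandas as pd

def reserve_sum_for_week(reserve: pd.DataFrame, n_weeks: int) -> pd.DataFrame:
    r = reserve.copy()
    visit = pd.to_datetime(r['visit_datetime']).dt.normalize()
    made = pd.to_datetime(r['reserve_datetime']).dt.normalize()
    known_in_time = (visit - made).dt.days >= 7 * n_weeks   # visible when predicting week n
    r = r[known_in_time].assign(visit_date=visit[known_in_time].dt.date)
    return (r.groupby(['air_store_id', 'visit_date'])['reserve_visitors']
             .sum().rename(f'reserve_sum_wk{n_weeks}').reset_index())
</code></pre>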
<p>Score improvement: <br>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/279665/8526/huge_overfit.PNG" alt="massive_overfit">
- avoided a massive overfit, which let me climb 20 places in the private leaderboard <br>
- roughly 0.001 to 0.004 improvement with reserve data, depending on the week </p>
<h1>Detailed approach</h1>
<p>Before my detailed approach, I would like to thank @huntermcgushion for his weather data, @maxhalford for a lot of idea sharing, and @gertjac for his wonderful documentation of the Rossman competition. </p>
<h2>General Approach</h2>
<p>As a software engineer, I assumed that my strength would be in implementation skills. <br>
Basically, I tried to implement any idea that I could come up with, and then delete ones that would make my model overfit.
I am also able to read English (never underestimate what you can do with that ability), so I read a lot of winner's solution on Kaggle. I based my initial model on the 1st place solution in the Rossman competition. </p>
<h2>Basic model idea</h2>
<p>Based on lightgbm (best library in 2018 for liberally throwing in data), treated the problem as a time series problem. <br>
I made two submissions: one was a simpler version, and one was a complex ensembled variation using the same features. <br>
The simple version was a weekly model, which means I had 5 models in total. <br>
I also had a daily model for week2. The reason is that the number of reserve decays exponentially (see above image), so the reserve data is important on closer-to-public-leaderboard days. </p>
<p>In the complex version, I trained stores that have data before 2016-03-01 (about 300/820 of the stores do) separately.
Those "good" store's prediction was based on the separate model, and the other "bad" stores were trained all together with the "good stores. I also trained my whole model without outliers (thanks @maxhalford). Finally averaged by equal weight, which is my final submission. <br>
The complex version improves my score by >0.001. </p>
<h2>Features</h2>
<ul>
<li><p>basic features <br>
air_store_id, air_genre, air_area </p></li>
<li><p>rolling features <br>
groupedby (air_store_id), and (air_store_id, day_of_week) <br>
for 1, 5, 15, 55 weeks. <br>
mean, median, max, skewness <br>
shifted by n weeks (n being the week number that the model was predicting); see the sketch after this list </p></li>
<li><p>temporal features <br>
is_holiday, is before holiday, dowh(dow(0-6), holiday as 7), dows(described in "findings" section), <br>
day of week, month, year, holidays in week, next week number of holidays, prev week number of holidays. </p></li>
<li><p>reserves <br>
air/hpg reserve sum, shifted by n weeks (n being the week number that model was predicting) <br>
air/hpg reserve sum, cut by number of weeks (so for week3, I only used reserve done 21days before visit) </p></li>
<li><p>regression <br>
Got this idea from the Rossman competition. <br>
For extrapolation, I blended in regression(Lasso and Ridge) <br>
Divide the whole data by months, and then for each month, do a regression grouped by air_store_id, using the previous 3months, previous 13months. Only use two features for regression: (visitors) is y and (day-delta, day of week) is x </p></li>
<li><p>extra ideas <br>
rolling medians divided by another rolling median <br>
exponential weighted mean <br>
weather data: percipitation and avg_temperature (improved my model roughly 0.001, thanks @huntermcgushion) </p></li>
</ul>
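<p>A sketch of the shifted rolling features from the list above (per store and day of week, shifted by the number of weeks being predicted so that no future target information leaks in; names are illustrative):</p>
<pre><code>import pandas as pd

def rolling_dow_features(visits: pd.DataFrame, n_weeks: int, windows=(1, 5, 15, 55)):
    df = visits.sort_values(['air_store_id', 'visit_date']).copy()
    df['dow'] = pd.to_datetime(df['visit_date']).dt.dayofweek
    g = df.groupby(['air_store_id', 'dow'])['visitors']
    for w in windows:
        # each (store, dow) group has roughly one row per week, so shift(n_weeks)
        # hides the weeks that are still unknown at prediction time
        df[f'mean_{w}w'] = g.transform(
            lambda s, w=w: s.shift(n_weeks).rolling(w, min_periods=1).mean())
    return df
</code></pre>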
<h2>Features that didn't work</h2>
<p>I did a manual greedy backward elimination of features. Yep, I just invented that term :)
That means that I commented out some of my features and tested if it would make my score better. If removing them made my model better, that would mean that the features were bad, and they would go to the garbage. </p>
<p>Week of year, day of month were very overfitty. <br>
Rolling-kurtosis and log-tranformed features were slightly bad. <br>
Rolling-min and a lot of my dow-grouped features were slightly bad to no effect.(I dropped them for simplicity) <br>
To my surprise, the "prefecture" and "city" that I extracted from the area name gave me a worse score. <br>
The total of air + hpg reserve was also bad for my model. </p>
<h2>Validation</h2>
<p>I had 4 validation periods, basically training with data till day n, and testing with days n+1 to n+7.
I made minor tweaks, like testing with n+8 to n+14, but the results were pretty consistent, so I stuck with this setup in the end.
train:(2016-01-16 to 2017-03-11) / test:(2017-03-12 to 2017-03-19) <br>
train:(2016-01-16 to 2017-04-01) / test:(2017-04-02 to 2017-04-08) <br>
train:(2016-01-16 to 2017-04-08) / test:(2017-04-09 to 2017-04-15) <br>
train:(2016-01-16 to 2017-04-15) / test:(2017-04-16 to 2017-04-22) <br>
I started my train period from 2016-01-16, because a lot of my rolling features were null in the beginning of 2016. <br>
I also want to note that I was extremely careful about information leaks. <br>
I am really surprised that the public kernels performed so well, after having so much information leak from the future.</p>
<p>I also did some manual holdout-validation for the golden week. <br>
Training with the same periods, but holding-out a few goldenweek-days from the train data. <br>
example: omit 2016-05-03, train with the 4 weeks mentioned above, then validate using 2016-05-03. <br>
The more golden-week train days I included, the more my score improved, so I was kind of confident about my golden week. <br>
However, I did not make any features based on this validation, because we only have one year worth of golden-week, and I feared overfitting. </p>
<p>My final submission was trained on the (2016-01-16 to 2017-04-15) / (2017-04-16 to 2017-04-22) split. <br>
The reason being that there were stores that appeared only from 2017-03, and I wanted to squeeze in any available data for that store. </p>
<h2>What I missed</h2>
<p>I should have had "air_genre" grouped rolling features as well. How the <em>*</em>* did I not think about that? </p>
<h1>Final words</h1>
<p>Question: Why are there so many national holidays in Japan? <br>
Answer: The government is upsampling holidays, to make kagglers better predict the number of restaurant visitors on holidays. <br>
(Joke from twitter)</p>
<p>Done for the day, will write more on Saturday <br>
Edit: Finally done writing. <br>
Somehow my first image doesn't show up though...</p>
<p>lol, wait, I haven't even finished writing half of it. <br>
You are so fast to react! Care to read when I finish writing it all? <br>
Edit: wrote all my important(ish) findings. Hope you will read my detailed approach as well :) <br>
Edit: Done!</p>
|
Recruit Restaurant Visitor Forecasting
|
20th place solution based on custom sample_weight and data augmentation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>@CPMP Thanks! And you always give us insight and stimuli.</p>
<p>@Oscar Takeshita
I defined the getting sick index as a combined feature of temperature and humidity. So it was not directly given by Japan Meteorological Agency. Yes, I measured the effect of augmentation in my validation system. It was effective especially for pseudo-Private part (without:0.495, with: 0.492).</p>
<p>Thank you for my teammate Taka, Kaggle Admin, Recruit company, and all participants of this exciting competition. We would like to share our 20th solution, which is based on target encoding, customized sample_weight, data imputation and augmentation.</p>
<h2>1. Problem setting</h2>
<p>We were required to predict Japanese restaurant visitors from the restaurants' attribute information and historical information. Although the training data covers the dates from 2016 until April 2017, most of the stores are not observed in January 2016. In the worst case, a store's first observation is in March 2017, which is just before the test period.</p>
<h2>2. Validation method</h2>
<p>We split the last one month of the train data as validation set before conducting any feature engineering. The validation set was split to first 6 days as pseudo-Public and the rest as pseudo-Private. This method was really important for avoiding overfitting to Public Leaderboard and to know genuine performance of our models.</p>
<h2>3. Feature engineering</h2>
<p>We mainly conducted two feature engineering methods. One is target encodings and the other is creation of weather-based features. In addition, we used some ideas from public kernel carefully.</p>
<h3>a. Target encodings</h3>
<p>We conducted target encodings with combinations of "attribute information of the store" and "time-based features". Various statistics were created by this encoding as below.</p>
<ul>
<li><p>attribute information of the store
air_store_id, air_genre_name, air_area_name, prefecture</p></li>
<li><p>time-based features
year, month, day, day_of_week, is_holiday (including Sat and Sun), is_yesterday_holiday, is_tomorrow_holiday, rolling mean of is_holiday (forward), rolling mean of is_holiday (backward) etc.</p></li>
<li><p>statistics
mean, min, max, sum, var</p></li>
</ul>
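<p>A minimal sketch of one such encoding (statistics of visitors by genre x day-of-week, computed on training rows only and merged back; the real solution combined many attribute/time pairs):</p>
<pre><code>import pandas as pd

def target_encode(train: pd.DataFrame, test: pd.DataFrame,
                  keys=('air_genre_name', 'day_of_week')):
    stats = (train.groupby(list(keys))['visitors']
                  .agg(['mean', 'min', 'max', 'sum', 'var'])
                  .add_prefix('_'.join(keys) + '_visitors_')
                  .reset_index())
    return (train.merge(stats, on=list(keys), how='left'),
            test.merge(stats, on=list(keys), how='left'))
</code></pre>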
<h3>b. Weather features</h3>
<ul>
<li>Daily weather (precipitation, daylight, temperature)</li>
<li>Inner product of "hourly weather vector" and "hourly visitor vector".</li>
<li>Getting sick index, which is distance from optimal temperature and humidity for virus. We conducted 7 day rolling mean of the index considering virus latency period. This feature was more effective than we expected.</li>
</ul>
<h2>4. Sample weight</h2>
<p>We designed the sample weight as the sum of two sample_weights (a + 0.3*b), considering the importance of recent events and annual periodicity. All parameters such as the half-life were determined experimentally by validation scores. For validation and test, we used sample_weights with different values produced by the same logic, rather than sample_weights with the same values. </p>
<h3>a. Exponentially decayed weight (sample_weight_a)</h3>
<p>We created an exponentially decayed weight with half-life of 60 days.</p>
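<p>A minimal sketch of this component (the definition of "days before the end of the training period" is an assumption for illustration):</p>
<pre><code>import numpy as np

def decayed_weight(days_before_cutoff: np.ndarray, half_life: float = 60.0) -> np.ndarray:
    # weight halves every `half_life` days as we move further into the past
    return 0.5 ** (days_before_cutoff / half_life)
</code></pre>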
<h3>b. similarity-based weight (sample_weight_b)</h3>
<p>We defined similarity of each months to the target period as predictability by LightGBM.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/280160/8536/custom_weight.jpg" alt="sample_weight">
Fig 1. Customized sample_weight designed as sum of two components</p>
<h2>5. Data imputation and augmentation</h2>
<h3>a. Imputation</h3>
<p>Although month-based engineered features were really effective for the public and pseudo-public scores, they caused a catastrophic result on the pseudo-private score. Therefore, we imputed the missing values of these statistics for May with an ensemble of various regressors. We used the attribute features of the stores and the trend of 2016 as training data and those of 2017 as test data for this imputation.</p>
<h3>b. Augmentation</h3>
<p>Our hypothesis is that the first observation date of a store is simply the date of registration and not the date of grand opening. Therefore, it is an arbitrary decision by the store owner and it should not change the visitors substantially. Based on this hypothesis, we randomly dropped past observations from both the train and test data. After that, the obtained predictions were averaged. We expected this processing to increase the amount of training data and make our model more robust. It should be noted that this processing is different from simple subsampling, because target encoding is conducted after augmentation and therefore the created features change as well.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/280160/8537/drop_out.jpg" alt="drop_out past obs">
Fig 2. Stochastic drop of past observations as data augmentation</p>
<h2>6. Result</h2>
<p>20th/2158.
We couldn't get gold medals! Will try next time again!</p>
|
Recruit Restaurant Visitor Forecasting
|
7th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>As has been widely discussed, I had trouble with CV schemes, engineered features, and with how to use the reservation data. In the end I built many models based on a variety of ways of treating both of these. </p>
<p>For time-based CV I used 6-week blocks starting at week 14 of the data. So I trained on weeks 1-13, evaluated on weeks 14-19; trained on weeks 1-20, evaluated on weeks 21-26, etc. This gave me a fold that had a golden week. Doing this meant I had sets of training data developed only on what preceded the evaluation fold. But then I also used a random 5-fold CV for some models.</p>
<p>For the reservation data, I ended up using it "indirectly". I calculated the standard deviation of the reservation counts and reservation visitors per store for things like each city & weekday, or each genre and weekday, or each ward, etc. I used these to calculate an "expected visitor" number for each store based on the store's mean/stdev visitors, but using the std of the reservation counts or reservation visitors. So if the store was in city X and that city had a stdev of 1.5, I got a expected visitors for every store in city X using (store mean visitors)+1.5(store stdev visitors). For a single expected visitors value I averaged all of the estimates for each store. The reservation count (i.e. number of reservations for a day) ended up being a better estimator than the reservation visitors.</p>
<p>Engineered Features. I didn't use very many. The best ones were: reservation-based expected visitors; rolling averages (I used a simple linear interpolation to fill in missing days; I also used these for an LSTM that didn't work out so well); and store competitor counts, i.e. counts of stores with similar genres that were within 1 mile, or within "walking distance" (800 meters). I also set my "golden_week" flag to 1 during the Christmas week.</p>
<p>Target feature. My weekday rolling averages, after adjusting for holidays, were an almost overpowering feature in LGBM and XGB, so for a couple of models I changed the target feature to be visitors minus the rolling average. That really mixed up the feature importances, and performed a bit worse than the visitors target, but good enough to include in my ensembling.</p>
<p>Ensemble. I used a Neural Network (one hidden layer) to combine the various models.</p>
<p>Thanks for reading through all that! That solution didn't fare so well. Because I lost the last two lottery competitions by sticking with my CV, I decided I had better choose an "overfit the public LB" model just in case. So I averaged the .479 SurpriseMe! kernel with my best single public LB model, which used a 5-fold CV and built all features using all of the training data (future included). And that model got me 7th place :-)</p>
|
Recruit Restaurant Visitor Forecasting
|
8th place solution write-up
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone. I put my solution on GitHub inside a notebook. Feel free to check it out <a href="https://github.com/MaxHalford/kaggle-recruit-restaurant">here</a>.</p>
<blockquote>
<p>thanks for sharing. A minor remark, I see you use 5**2 - 1 as the number of leaves, which looks like a typo, given lgb users usually use 2^n - 1 with various n. So is it a (lucky) typo? ;)</p>
</blockquote>
<p>LOL indeed that's a typo... I copy/pasted the code from somewhere else.</p>
<blockquote>
<p>What is most noticeable in your approach AFAIK is that you did not use any time based CV. Seems that other top teams did the same, while those who tried it dropped in LB.</p>
</blockquote>
<p>I tried a time based CV but it simply didn't fare as well. I think that it doesn't matter as long as the behaviour of the timeseries doesn't change over time, which I guess is a reasonable assumption for restaurants.</p>
<blockquote>
<p>Also, thanks for you ewm code, I reused it quite heavily.</p>
</blockquote>
<p>Glad it helped. I put the code in my <a href="https://github.com/MaxHalford/xam/blob/master/docs/tsa.md">personal toolkit</a> for future use.</p>
<blockquote>
<p>Last, but not least, congrats on your gold medal!</p>
</blockquote>
<p>Cheers! It's a nice confidence boost.</p>
<p>Thanks!</p>
<p>I don't think so. I make sure that the features are calculated on past values; this ensures that there is no leakage.</p>
|
Recruit Restaurant Visitor Forecasting
|
A Deep Learning Approach
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'm more than happy to share my code, but I think you would learn more from the following repositories- (I got most of my ideas from these respective repositories, and I modified my wavenet straight from SJV):</p>
<p>1) For the wavenet, see @SJV: <a href="https://github.com/sjvasquez/web-traffic-forecasting">https://github.com/sjvasquez/web-traffic-forecasting</a>. It's one of the best repositories a Kaggler has created to date. </p>
<p>2) For the RNN, check out @Arthur Suilin: <a href="https://github.com/Arturus/kaggle-web-traffic">https://github.com/Arturus/kaggle-web-traffic</a>. It's a little bit cumbersome, but it contains all of the critical ideas to make an RNN work well. </p>
<p>The best single wavenet was .519, a bag of 8 with different training methods brought it down to .514.
The best RNN single model was .525, a bag of 8 with different training methods brought it down to .520. With both ensembled together and the LGB added it was .511, the LGB scored .518 single model.
I used weightings of .4 wavenet/.3 RNN/.3 LGB for ensemble. </p>
<p>It looks like a lot of the top teams, based on the solution sharing, used gradient boosting to achieve their scores, whereas mine was primarily deep learning. @CPMP and @China also mentioned that methods that worked well on Favorita and WTF didn't work as well here, but I found that to not be the case. Because of this, I thought I would share my ideas.</p>
<p>I primarily used two models: 1) I used an encoder-decoder wavenet architecture similar to @SJV's web traffic forecasting, but instead of a single decoder, I used 3 separate decoders to create predictions and then averaged them for the final predictions. The first decoder was the classic generative decoder similar to @SJV's. The second was a dense layer fully connected to the encoder output, which generated predictions for all timesteps simultaneously. The third was a rolling decoder which generated seven days at a time in parallel and then used those as generated inputs for the remaining time period. </p>
<p>The second was an encoder-decoder RNN architecture with a short timeframe (96 days) and 2 time-steps worth of features from a convolutional attention mechanism appended to the time series at each timestep during decoding. This model scored about .003 worse than the best wavenet, but it was very diverse and gave .004 additional CV score when ensembled at 50%/50%. </p>
<p>Training was hard because we didn't have a lot of data, so I augmented by starting and ending at random dates during the training period and by scaling each series AFTER zero normalization by a factor between (.8,1.2) to preserve patterns but make the model more robust to variations of scale. I'm not sure how much this helped; I didn't seriously start working on this competition until a couple of days before so I never trained models without that adjustment.
I also helped account for poor training/lots of variance in training by bagging 7-8 runs of each model with different sets of features, different hold-out sets, and different CV strategies (look forward, in parallel). Because the dataset was so small, model training was extremely fast- <1hr for both of my models to complete, so bagging runs wasn't cumbersome at all computationally.</p>
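<p>A sketch of that augmentation (random sub-window plus a random scale factor in [0.8, 1.2] after zero normalisation; names and the minimum window length are illustrative):</p>
<pre><code>import numpy as np

def augment_series(series: np.ndarray, min_len: int = 60) -> np.ndarray:
    n = len(series)
    start = np.random.randint(0, n - min_len + 1)
    end = np.random.randint(start + min_len, n + 1)
    window = series[start:end]                     # random start/end dates
    return window * np.random.uniform(0.8, 1.2)    # preserve the pattern, vary the scale
</code></pre>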
<p>My best private score was then the XGB model from the "surprise me !" kernel modified to have non-leaky reservation data, a few additional features, and appropriate cross-validation and a more robust training period ensembled with the above models ( I actually used lightGBM because it is faster, but generally the same idea).</p>
<p>After retraining the wavenet from my best weights with the mentioned weekend/holiday flags added it scored about .008 better on local CV, so I think there is the potential for this approach to compare to the winning approaches if it had been handled better. </p>
<p>I think the key to getting the wavenet to score better was adding the fully connected dense decoder in addition to the generative decoder-- it essentially makes the network treat the problem more like a regression problem from the discovered encoder features than a true timeseries problem. Without that change, I saw similar scores to the RNN.</p>
|
Recruit Restaurant Visitor Forecasting
|
A simple method to get 0.009 booster in private
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for confirming !</p>
<p>The detail rules:</p>
<p>Treat holiday as Saturday</p>
<p>If the day before holiday is weekday ,treat the day before holiday as Friday</p>
<p>If the day after holiday is weekday ,treat the day after holiday as Monday</p>
<p>It works not only for golden week but also for a lot of other holidays.</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/278784/8505/2018-02-07%2008-17-11%E5%B1%8F%E5%B9%95%E6%88%AA%E5%9B%BE.png" alt="trick"></p>
<p>Treat 5.3 ~ 5.5 as Saturday</p>
<p>Treat 5.2 as Friday</p>
<p>Update:</p>
<p>The rules:</p>
<p>Treat holiday as Saturday</p>
<p>If the day before holiday is weekday ,treat the day before holiday as Friday</p>
<p>If the day after holiday is weekday ,treat the day after holiday as Monday</p>
<p>It works not only for golden week but also for a lot of other holidays.</p>
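<p>A sketch of those three rules applied to date_info.csv (column names calendar_date, day_of_week, holiday_flg; the string day names are an assumption, and the script linked below is the author's actual implementation):</p>
<pre><code>import pandas as pd

def apply_holiday_trick(date_info: pd.DataFrame) -> pd.DataFrame:
    df = date_info.sort_values('calendar_date').copy()
    weekday = ~df['day_of_week'].isin(['Saturday', 'Sunday'])
    hol = df['holiday_flg'] == 1
    next_hol = df['holiday_flg'].shift(-1) == 1
    prev_hol = df['holiday_flg'].shift(1) == 1

    df.loc[weekday & ~hol & next_hol, 'day_of_week'] = 'Friday'   # weekday before a holiday
    df.loc[weekday & ~hol & prev_hol, 'day_of_week'] = 'Monday'   # weekday after a holiday
    df.loc[hol, 'day_of_week'] = 'Saturday'                       # the holiday itself
    return df
</code></pre>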
<p>So the trick is from careful EDA and CV rather than luck </p>
<p>simple script:</p>
<p><a href="https://www.kaggle.com/h4211819/holiday-trick">https://www.kaggle.com/h4211819/holiday-trick</a></p>
|
Recruit Restaurant Visitor Forecasting
|
My 7 day journey in this competition and solution sharing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the feedback, Danijel^_^</p>
<p>The total computation time for my XGB (in total 39 xgbs for 39 days) is about 5 hrs on a GCP 64 core instance, which sounds ok for me. NN would take a little bit more time.</p>
<p>Based on my experience, applying feature selection (and thus feature reduction) usually reduces the training time quite a bit, but usually only helps the accuracy marginally in the best case (sometimes it even hurts the accuracy if you do the feature reduction too aggressively). Therefore, since my time was a bit limited in this competition, I didn't spend time on feature selection/reduction--it just had lower priority on my to-do list. If I had more time in this competition, then eventually I might perform some feature selection/reduction. I expect it to provide some improvements, but nothing very significant.</p>
<p>Hope this helps.</p>
<p><strong>Update:</strong> It seems that the "golden week" holiday adjustment is indeed the key missed in my solution. By simply applying the holiday effect adjustment by 30CrmnSiA <a href="https://www.kaggle.com/h4211819/holiday-trick">https://www.kaggle.com/h4211819/holiday-trick</a>, my best single xgb model scored 0.508 on private LB (without golden week holiday adjustment, it scored 0.470 on public LB and 0.516 on private LB).</p>
<hr>
<hr>
<p>Congratulations to the winners, and thanks to Kaggle and the sponsor for organizing this competition^_^</p>
<p>I entered this competition when there were only 7 days left. Those shared awesome kernels (especially this one <a href="https://www.kaggle.com/nitinsurya/surprise-me-2-neural-networks-keras">https://www.kaggle.com/nitinsurya/surprise-me-2-neural-networks-keras</a>) gave me a great starting point (I climbed to 4th place on the public LB with a score of 0.469 on the 3rd day), so here I am briefly sharing my approach to give back to the community. </p>
<p>I finished at 4th place on the public LB, but unfortunately shaked down quite a bit to 48th place on the private LB. My best single xgb scored 0.470 on public LB and 0.516 on private LB, my NN scored 0.476 on public LB and 0.528 on private LB. 0.7-0.3 ensemble of xgb+nn scored 0.468 on public LB and 0.515 on private LB. I used last 6 weeks of train set as validation set (or more precisely, used 39 days starting from Mar 12 (Sunday) 2017, to be made more similar to LB period). My best public LB scored submission is also my best private LB scored submission (excluding my last one 0.467 public LB submission, which was made just for fun (I didn't even use valid model predictions for private LB period for that submission), where I blended my solution with LB0.480 h2o surprise-me public kernel to see public LB score).</p>
<p>For now, I am not sure what exactly contributed to my larger public-private LB score gap (compared with other top teams on the private LB), but one possible main reason might be that my time was a bit limited in this competition and I didn't spend time investigating how to deal with the "golden week" (/holiday effects). Maybe later (if I have time) I could add the simple holiday trick by 30CrMnSiA (<a href="https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49100">https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49100</a> ) to my script and see whether that is the key point I missed.</p>
<p>I spent most of my time on feature engineering, mostly with file "air_visit_data.csv" (which only include 3 columns, 'air_store_id', 'visit_date' and 'visitors"). In total I generated 300+ engineered features from this single file "air_visit_data.csv", and they are the most important features in my solution based on my xgb/lgb feature importance output. These 300+ engineered features are basically the lagging visitors features 1-14 (by 'air_store_id' and by dayOfWeek), lagging visitors features 1-14 (by 'air_store_id' only), the lagging-difference visitors features 1-13, the lagging-difference-delta4 visitors features 1-10, WeightedMovingAverage for the lagging visitors features, mean/median/min/max/(percentile10,30,70,90)/sum/count visitor stats features for past 14 days, 28 days, 60 days, 90 days, 120 days, 180 days, 364 days (by 'air_store_id' and by dayOfWeek), mean/median/min/max/(percentile10,30,70,90)/sum/count visitor stats features for past 14 days, 28 days, 60 days, 90 days, 120 days, 180 days, 364 days (by 'air_store_id' only). </p>
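<p>As a rough illustration of this kind of feature engineering, the sketch below (column names follow air_visit_data.csv; the real solution generated 300+ such features over many more windows) shows lagged visitors and rolling statistics grouped by store and by (store, day-of-week) in pandas:</p>
<pre><code>import pandas as pd

# Sketch only: column names follow air_visit_data.csv; the real feature set was much larger.
visits = pd.read_csv("air_visit_data.csv", parse_dates=["visit_date"])
visits = visits.sort_values(["air_store_id", "visit_date"])
visits["dow"] = visits["visit_date"].dt.dayofweek

g_store = visits.groupby("air_store_id")["visitors"]
g_store_dow = visits.groupby(["air_store_id", "dow"])["visitors"]

# lagged visitors, by store and by (store, same weekday)
for lag in range(1, 15):
    visits[f"lag_{lag}"] = g_store.shift(lag)
    visits[f"lag_dow_{lag}"] = g_store_dow.shift(lag)

# rolling statistics over past windows (row-based here for simplicity);
# shift(1) makes sure only strictly past days are used
for window in (14, 28, 60, 90):
    visits[f"mean_{window}"] = g_store.transform(
        lambda s, w=window: s.shift(1).rolling(w, min_periods=1).mean()
    )
</code></pre>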
<p>At high level, my feature engineering ideas in this competition are similar to our 1st place solution to the most recent master only kaggle competition (that's also one main reason for me to make good progress within only one week). One essential tip for the above feature engineering is that, we need to generate these features separately for each day of the 39 days of the LB period, so in total I have generated 39 feature sets (about 10+GB csv files), and I train the xgb/lgb/nn model separately for each of the 39 days. The reason is that, when we want to make prediction for day n (n=1, 2, ..39) of the LB period, the past visitor (target) info for the past 1, past 2, past 3,.., past (n-1) days are unavailable on test data, and thus we need to also exclude the corresponding days visitor information in the train data when preparing feature sets for the model used for day n prediction. Similar thing for using the reservation data. Namely, when we want to make prediction for day n (n=1, 2, ..39) of the LB period, we should only use those reservation data whose reserve_visit-dateDifference >= n days. In this way, we would avoid the overfitting (to public LB and/or CV) issue caused by utilizing unavailable future information (including both reservation data, or visitor (target) data) for feature generation. Such a tip is actually essential for our 1st place winning in the most recent master-only kaggle competition.</p>
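<p>A minimal sketch of the horizon-aware idea above, assuming a <code>visits</code> DataFrame like in the previous sketch: for the model that predicts day n of the test period, the smallest usable lag is n days, so each of the 39 models gets its own feature set (the helper name is hypothetical):</p>
<pre><code>import pandas as pd

def build_lag_features(visits: pd.DataFrame, horizon: int, max_lag: int = 14) -> pd.DataFrame:
    """Lag features usable when predicting `horizon` days ahead (hypothetical helper)."""
    out = visits.sort_values(["air_store_id", "visit_date"]).copy()
    g = out.groupby("air_store_id")["visitors"]
    # the smallest lag we are allowed to use is exactly `horizon`:
    # anything closer to the target day is unknown at prediction time
    for lag in range(horizon, horizon + max_lag):
        out[f"visitors_lag_{lag}"] = g.shift(lag)
    return out

# one feature set (and later one model) per day of the 39-day test period
feature_sets = {n: build_lag_features(visits, horizon=n) for n in range(1, 40)}
</code></pre>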
<p>Above engineered features (300+ features from "air_visit_data.csv", and 39 separate models for each day of the LB period) might be basically the main difference between my approach and other teams' approach. All my other engineered features are basically almost the same as the public kernel <a href="https://www.kaggle.com/nitinsurya/surprise-me-2-neural-networks-keras">https://www.kaggle.com/nitinsurya/surprise-me-2-neural-networks-keras</a> (except address the time gap issue for reservation data). Based on such a feature set, my single xgb (but trained for each day of the 39 days period separately) scored 0.470 on public LB and 0.516 on private LB, my NN scored 0.476 on public LB and 0.528 on private LB, 0.7-0.3 average of xgb and NN scored 0.468 on public LB and 0.515 on private LB. </p>
<p>That's all, and see you in some other future competition. Have a great day!</p>
<p>(P.S.: It seems that 7 days are still a bit insufficient for me to extensively explore a competition^_^ Next time if I want to play a competition seriously, I will enter it when there are at least half a month left~ )</p>
<p>Best regards,</p>
<p>Shize</p>
<p>I haven't had time to carefully go through his solution yet, but I guess he might have done some off-line tuning (including the choice of 14/28/42, as well as other parameters) and just didn't include that part of the CV tuning code in that kernel, i.e., that kernel might just be the code for generating his final submission after identifying those optimal settings/parameters. Just my two cents^_^</p>
<p>Given the significant variance of the data across time in this competition (and the golden week in the private LB), it is just very tricky and difficult to come up with a perfect validation set that is very similar to the LB period data. And since no kind of validation performed sufficiently well, the impact of the validation mechanism is somewhat dominated by other factors (such as holiday adjustment, etc.) when it comes to the ranking on the private LB. That's why you got the impression that all kinds of CV mechanisms seem OK in this competition--it is not that all CV mechanisms work well; it is that none of them works very well (and thus the difference between CV mechanisms is not that essential). But generally speaking, I would say a random k-fold CV split is not a good CV option for a time series problem. Just my two cents on this^_^</p>
<p>Hope this helps.</p>
<p>@feng: In this competition's data, given the significant variance of the visitors across time, it is not that uncommon/weird that the validation error is smaller than the training error. If the data distribution across time were much more similar/stationary, that would be a completely different story, and we would usually see a significantly larger validation error than training error.</p>
<p>To quickly identify whether your training nround is large enough, you could simply use early stopping (available in both lgb and xgb, etc.), or just check whether further increasing nround reduces the validation error. If further increasing nround does reduce the validation error, then it implies that the nround you used is indeed not large enough. If further increasing nround does not reduce the validation error, then the issue is not that nround is too small, and it probably just results from the data variance across time (i.e., the validation data are significantly different from the train data).</p>
<p>Hope this helps.</p>
<p>Thank you Danijel for sharing your solution^_^ Great job!</p>
<p>Btw, did you create any holiday features like "days-from-holiday" or "days-to-holiday" as in this post <a href="https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49177">https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49177</a>, or treat a holiday as Saturday (and the day before it as Friday, if it is a weekday, and the day after it as Monday) as in this post <a href="https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49100">https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting/discussion/49100</a>, or create a holiday_flg2 which treats holiday and weekend the same, as in this 1st place lgb kernel <a href="https://www.kaggle.com/pureheart/1st-place-lgb-model-public-0-470-private-0-502/comments">https://www.kaggle.com/pureheart/1st-place-lgb-model-public-0-470-private-0-502/comments</a>? If you already have such features, then the golden week holiday effects might have been at least partially captured by them and the improvement might not be that large. On the other hand, if you did not do any extra engineering on holidays, I guess the golden week holiday adjustment would probably also give you a very nice improvement. I am looking forward to hearing your feedback when you have a chance to apply the holiday adjustment to your solution^_^</p>
<p>On my side, I didn't do any additional feature engineering about holiday (except using the raw holiday_flg given in the raw date_info.csv file), and I guess that might be one main reason why the holiday adjustment gave me nice improvement on private LB.</p>
<p>Finally, have a safe and wonderful trip, my friend^_^</p>
<p>Best regards,</p>
<p>Shize</p>
<p>Thanks, 30CrmnSiA^_^ It seems that holiday effect adjustment is indeed the key issue missed in my solution. </p>
<p>Without retraining my model and simply applying your holiday trick post processing adjustment script above <a href="https://www.kaggle.com/h4211819/holiday-trick">https://www.kaggle.com/h4211819/holiday-trick</a>, it improved my single xgb model private LB score from 0.516 to 0.508. By retraining the model, I guess the improvement might be even larger.</p>
<p>Thanks, and congrats for the winning!</p>
<p>Thanks for following^_^</p>
<p>>The reason for the big gap between the public and private score you mentioned above might be retaining the "reservation" info. Models might have over-weighted the reservation features for the teams whose scores shook up a lot.</p>
<p>See my update at the beginning of my post. It seems that it is actually the holiday effect that I didn't address mainly contributes to the larger public/private score gap for me. After that holiday fix, my best single xgb scored 0.470 on public LB and 0.508 on private LB, somewhat similar to other top finishers in this competition. On the other hand, if some other top teams didn't fully utilize the reservation data (in a legitimate way without future leak), then of course my utilization of reservation data could also possibly somewhat explain the larger public-private score gap.</p>
<p>Have a great day!</p>
|
Recruit Restaurant Visitor Forecasting
|
My first kaggle and 11th place solution sharing
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That's a good question. I have thought about it for a while. The rolling mean I used for the training set is a 1-week lagged rolling mean. For the test set, I have the 1-week lagged rolling mean for the first week. For the weeks beyond, I used the 2-week lagged rolling mean, 3-week lagged rolling mean, and so on as this lagged rolling mean feature.
After reading some terrific solutions, I think it would be a better idea if I trained 5 models, one for each week in the test set, and used the same n-week lagged rolling mean for the training set and test set.</p>
<p>Congrats to all the winners! Thanks to Kaggle and Recruit Holdings!
It's my first Kaggle competition, and I learnt a lot from the discussions and public kernels. It's a great community! Thanks all ^_^</p>
<p>I used time series CV for my local CV. I mainly tuned my parameters based on the CV results for the 33 days starting from 12/29/2016 and the 33 days ending on 04/22/2017. My thought was that the New Year period should share some similarity with Golden Week, and I also wanted to look at the CV result for the most recent period.</p>
<p>For feature engineering, I borrowed some features from 'surprise me'. One thing I found was that the features ['min_visitors', 'max_visitors', 'median_visitors', 'mean_visitors', 'count_observations'] brought my local CV score up significantly (~0.1); I'm not sure why.
I excluded reservation data made for visits within 7 days from the training set to avoid overfitting.</p>
<p>I used the rolling weighted mean (similar to hklee's method; I adjusted some parameters based on CV) as a feature, which turned out to be a very important feature in my models.
I also used 'day to nearest holiday' and 'day from nearest holiday' as features, since I noticed some after-holiday effect in the data.
I added weather data (rain and avg temp) for 2 of my models. It improved the local CV by ~0.02.</p>
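<p>For illustration, a minimal sketch of a lagged, decaying weighted rolling mean of visitors per store (the decay parameter is an assumption; hklee's kernel tunes its own weights):</p>
<pre><code>import pandas as pd

# Sketch only: alpha is an assumption, hklee's kernel tunes its own weights.
visits = pd.read_csv("air_visit_data.csv", parse_dates=["visit_date"])
visits = visits.sort_values(["air_store_id", "visit_date"])
visits["visitors_wmean"] = visits.groupby("air_store_id")["visitors"].transform(
    lambda s: s.shift(1).ewm(alpha=0.1, min_periods=1).mean()   # shift(1): only past days
)
</code></pre>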
<p>I trained LGBM, XGB, RF and used the mean of the 3. I also added the prediction by prophet to my final submission. Adding prediction by prophet improved my private LB score from 0.511 to 0.509.</p>
<p>Happy Kaggling!</p>
<p>hahaha, I wanted to test it in my local CV, but I didn't have time. To play it safe, I only gave it a 10% weight and added it to one of the submissions.</p>
<p>Thank you!</p>
<p>oh..It's 0.535</p>
<p>It was 0.499, </p>
<p>Sure, I'm working on organizing my code now</p>
<p>Thanks! Congrats to you too! :)</p>
<p>For the New Year, I just wanted to see how my model performs on a similarly troublesome period. I used a simple mean for the LGBM, RF, XGB stacking. The Prophet prediction was combined with a weighted mean; I only gave it a 0.1 weight.</p>
|
Recruit Restaurant Visitor Forecasting
|
Solution(public:0.471,private:0.505 )
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>plantsgo approach summary:</strong></p>
<p><strong>Models</strong></p>
<p>I have three same models with the same features:</p>
<p>model_1 (step=14): use 14 days for the target and slide 30 times, so I have 14*30 samples.</p>
<p>model_2 (step=28): use 28 days for the target and slide 15 times, so I have 28*15 samples.</p>
<p>model_3 (step=42): use 42 days for the target and slide 10 times, so I have 42*10 samples.</p>
<p>The reason is that I don't want my model to focus only on the nearest days or only on the furthest days.</p>
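<p>For illustration, a minimal sketch (dates and the helper name are assumptions) of this sliding-window construction: each slide produces one block of target days plus a feature cutoff so that features only use earlier days:</p>
<pre><code>import pandas as pd

def make_windows(last_train_date: str, step: int, n_slides: int):
    """(feature_cutoff, target_start, target_end) for each slide -- hypothetical helper."""
    end = pd.Timestamp(last_train_date)
    windows = []
    for i in range(n_slides):
        target_end = end - pd.Timedelta(days=i * step)
        target_start = target_end - pd.Timedelta(days=step - 1)
        feature_cutoff = target_start - pd.Timedelta(days=1)   # features use only earlier days
        windows.append((feature_cutoff, target_start, target_end))
    return windows

# model_1: 14-day target slid 30 times; model_2: 28 x 15; model_3: 42 x 10
windows_14 = make_windows("2017-04-22", step=14, n_slides=30)
windows_28 = make_windows("2017-04-22", step=28, n_slides=15)
windows_42 = make_windows("2017-04-22", step=42, n_slides=10)
</code></pre>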
<p><strong>Features:</strong></p>
<p>1. visit_info: (21, 35, 63, 140, 280, 350, 420) days before, groupby:</p>
<p>air_store_id, weekday, holiday, air_area_name, air_genre_name, like:</p>
<p>(air_store_id, weekday), (air_store_id, holiday), (air_area_name, air_genre_name, holiday) and so on.</p>
<p>2. reserve info: (35, 63, 140) days before, groupby:</p>
<p>air_store_id, weekday, holiday, air_area_name, air_genre_name</p>
<p><strong>Ensemble:</strong></p>
<p>Using (xgb, lgb, nn) with 0.7*lgb + 0.2*xgb + 0.1*nn: only improved 0.0002 offline.</p>
<p>0.334*model_1 + 0.333*model_2 + 0.333*model_3: improved 0.002 offline.</p>
<p><strong>Code:</strong></p>
<p><a href="https://www.kaggle.com/plantsgo/solution-public-0-471-private-0-505">https://www.kaggle.com/plantsgo/solution-public-0-471-private-0-505</a></p>
<p>Average of all members</p>
|
Recruit Restaurant Visitor Forecasting
|
Unexpected 10th place model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I propose my model (<a href="https://www.kaggle.com/emorej/unexpected-10th-place-model">Kernel</a>) hoping that it can be useful to someone.</p>
<p>I used 3 models with the same features (XGB, LGBM, CatBoost).
Evaluation of the different models was done with classical 4-fold cross-validation, plus a 2017-03-15 to 2017-04-22 test set.
I retained a new feature/group of features each time it improved both the CV score and the test score.
Ensembling with weights 0.35 / 0.45 / 0.2 after testing all combinations on the test set.</p>
<p>This approach worked for me since my best offline model corresponds to my best private leaderboard score.</p>
<p>Thanks to Kaggle, Recruit and you all for the instructive discussions and kernels!</p>
<p>Here is the graph of XGB features importance of my final choice (cf kernel for details on features):</p>
<p><img src="http://www.jeansdenim.fr/images/feature_importance.png" alt="Feature importance"></p>
<p>Thanks, compatriot!
Sure, that was nonsense on my part! 39 days from 2017-03-15 to 2017-04-22, of course :) I've corrected it.</p>
<p>P.S.: XGB, LGBM and CatBoost single-model scores: 0.512, 0.510 & 0.510 respectively, so the ensembling was effective!</p>
|
Recruit Restaurant Visitor Forecasting
|
What is your actual best model?
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recruit Restaurant Visitor Forecasting
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We had a single model with private LB 0.508 but we didn't select it as the final solution since it did slightly worse in our own last 6 week CV. So we missed the gold medal. Perhaps, we were just overfitting our last 6 weeks CV :)</p>
<p>We also observed that our best submissions on the public LB (those that scored about 0.472) ended up with terrible private LB scores (~0.6-0.9). Fortunately, we didn't select them since they didn't do well in the last 6 weeks CV. If you split the last 6 weeks CV into the first 6 days (corresponding to the public LB) and the last 33 days (corresponding to the private LB), you can actually see that doing too well on the public LB makes the private LB score worse. This is the point where we stopped looking at the public LB.</p>
<p>Another interesting boost to our score was the use of reservation data. We found that if you remove reservation data that is too close to the visit date (roughly < 10 days), it can actually boost your score by 0.002 on the private LB. This works because it prevents the model from relying too much on precise reservation data, and we know that in the actual test period the amount of reservation data decays due to the cutoff of the reservation date.</p>
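<p>A short sketch of that filter (column names follow air_reserve.csv; the 10-day threshold is the rough value mentioned above):</p>
<pre><code>import pandas as pd

# Column names follow air_reserve.csv; the 10-day threshold is the rough value from the text.
reserve = pd.read_csv("air_reserve.csv", parse_dates=["visit_datetime", "reserve_datetime"])
lead_days = (reserve["visit_datetime"] - reserve["reserve_datetime"]).dt.days

min_lead_days = 10
reserve_filtered = reserve[lead_days >= min_lead_days].copy()   # keep only "early" reservations
</code></pre>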
<p>What are your hidden best solutions? How did you do your CV?</p>
|
Recursion Cellular Image Classification
|
10th place - Pure Magic Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners and thanks to the host and Kaggle hosted such an interesting competition. Huge thank to my teammates you are the greatest!</p>
<p><strong>Overview:</strong>
All our models were trained with 512x512x6 images. We sampled a random site for each sample per epoch. All our models were 2-headed.
First head: classification, with 1139 output neurons.
Second head: embeddings.</p>
<p><strong>Challenges:</strong>
- Validation
- Specific experiments like U2OS-4 failed</p>
<p><strong>Validation for CNNs:</strong>
We noticed that specific experiment types perform very differently from others. We chose the hardest experiment for our model and built the validation based on it. </p>
<p><strong>Training Augmentations:</strong>
- flips (horizontal, vertical)
- rot90
- transpose
- shift (0.25) with 101 reflections
- rot360 (-180, 180)
- cutout (32x32, 2 holes)
- noise (gaussian, localvar, poisson, salt&pepper, speckle)
- clahe
- gamma (0.9,1.1)</p>
<p><strong>Data Pre-Processing:</strong>
per channel (img - img.mean()) / (img.std() + 1e-6)
We noticed that, due to the heavy augmentations, we got a true_division warning from near-constant channels. This made the batch norms unhappy, so we added the epsilon to prevent division by zero.</p>
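<p>A minimal sketch of this per-channel normalization with the zero-division safeguard:</p>
<pre><code>import numpy as np

def normalize_per_channel(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """img: HxWx6 array; zero-mean, unit-std per channel, eps avoids division by zero."""
    out = img.astype(np.float32).copy()
    for c in range(out.shape[-1]):
        channel = out[..., c]
        out[..., c] = (channel - channel.mean()) / (channel.std() + eps)
    return out
</code></pre>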
<p><strong>Model training:</strong>
We train our models with NVidia Apex and Pytorch 1.2 on 8x1080ti GPU server for 110 epochs. The training takes 1-3 days depending on the model.
We did not use oversample.
Because of the long iteration, we didn’t use any K-Fold training. We wanted to train it closer to the end of the competition, but we didn’t have much time. </p>
<p><strong>Optimizer:</strong> SGD</p>
<p><strong>Scheduler:</strong> init LR 0.1,
5 epochs warmup,
drop LR every 40 epochs with factor 0.1</p>
<p><strong>Loss Functions:</strong>
Smooth CrossEntropy for classification, Center Loss for embeddings. </p>
<p><strong>Model:</strong>
Our final model is Senet154 trained on train data + pseudo-labels.
To produce pseudo-labels were using an ensemble SeNet154, SeResnext50, SeResnext101, Polynet, EfficientNet-b6, ResNeXt101-wsl. </p>
<p><strong>Test time augmentations:</strong>
output = (model(img1) + model(img2) + model(img1.flip(2)) + model(img1.flip(3)) + model(img2.flip(2)) + model(img2.flip(3))) / 6.</p>
<p><strong>Post-processing:</strong>
Hungry experiment re-calibration and plate re-calibration
<a href="https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270">https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270</a> </p>
<p><strong>What didn’t work or had about the same performance:</strong>
- Mix-up, manifold mixup
- ArcFace
- Last stride 1
- Focal Loss
- XGBoost ensembling
- Metric search based on the embeddings distances
- GeM
- Different optimizers like Adam, Radam, Ranger, etc.
- ResNeXt101-wsl performed worse than expected. </p>
<p>We wanted to find the nearest neighbor based on these embeddings (and maybe combine it with the classification part), but we didn’t succeed with re-calibration (1 class per experiment) based on the distances.</p>
<p>So center loss was just a regularizer for the model’s embeddings.</p>
<p>We used only logits from model’s linear output. </p>
<p>Regarding pseudo-labels, we didn't apply any method to select them.
We added all of them, after re-calibration, to our train set and trained with the Smooth-CE loss.</p>
|
Recursion Cellular Image Classification
|
117th place solution [0.718] using PyTorch & fastai
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><em>A big thank you goes out to the competition organizers and the great community!</em></p>
<p><em>This was my first kaggle competition I joined from the beginning and I learned a lot! :-D</em></p>
<p><strong>Final setup for highest score:</strong>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1686473%2F0b3b7757f33b40bd54f75953d3d49659%2Frcic_network.png?generation=1570045807295295&alt=media" alt="">
- <a href="https://github.com/lukemelas/EfficientNet-PyTorch">ImageNet-pretained EfficientNet-B3</a> with 6D input stage (2x 3D weights copied to get 6D input stage at the beginning of training).
- <a href="https://arxiv.org/abs/1905.00292">AdaCos metric learning approach</a> was implemented, which should be more stable without hyperparameter tuning when compared to other cosine-based softmax losses.
- Standard data augmentation (flip vertical/horizontal, rotation, zoom, lighting, warp) with custom implementation of <a href="https://arxiv.org/abs/1707.06183">„Color Augmentation for histopathology images”</a>. Data visualization function displaying the 6D images as 3D+3D image:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1686473%2F83c27c07dc7a61366fe8bc1a9f3bfbbe%2Frcic_data.png?generation=1570045826942943&alt=media" alt="">
- Embedding layers for meta data to let the model learn from celltype, plategroup, and experiment data (i.e., data leak). (However, there was no time in the end to optimize the embeddings used for the full training and test predictions. Something like a similarity search for the test experiments would have been an interesting approach.)
- <a href="https://github.com/oguiza/fastai_extensions/tree/master/shared/0_image_data_augmentation">CutMix</a> was used for the image data for some parts of the training. In addition, a linear interpolation of the embeddings was used during CutMix training.
- Predictions were carried out with 3x TTA to obtain the feature embeddings of the image and meta data input.
- Finally, <a href="https://github.com/facebookresearch/faiss">FAISS</a> was used for cosine similarity search of the extracted feature embeddings.</p>
<p><strong>General training setup:</strong>
1. Train new network output stage/head with frozen pretrained feature extractor network.
1. Retrain pretrained extractor network with new head (approx. 60 epochs).
1. Add meta data embeddings, freeze feature extractor network and train embeddings and output stage (approx. 50 epochs).
1. Train entire network (approx. 150 epochs).</p>
<p>For all training stages the <a href="https://sgugger.github.io/the-1cycle-policy.html">1cycle policy</a> was used for several cycles. EfficientNet training was carried out with AdamW. CutMix was used for the late training stages.</p>
<p><strong><a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/README.md">Notebooks from the GitHub repo</a>:</strong>
- EfficientNet-B3 training:
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v20_efficientnet-b3.ipynb">Pretraining</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v32_efficientnet-b3_celltype_plate-group-exp_Full.ipynb">Complex network</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v41_efficientnet-b3_celltype_plate-group-exp_12D.ipynb">Complex network 12D</a> (2x 6 channel images)
- EfficientNet-B4 (training not finished):
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v43_efficientnet-b4_Mish_Ranger_LabelSmoothingLoss.ipynb">Pretraining</a>
- <a href="https://github.com/MicPie/recursion-cellular-image-classification/blob/master/rcic_v44_efficientnet-b4_celltype_plate-group-exp_12D_model-exchange.ipynb">Complex network 12D</a> (2x 6 channel images)</p>
<p><strong>What didn’t work out or had no impact over a more basic setup:</strong>
- Separate classifiers for celltypes and/or plategroups (i.e., siRNA groups of 277).
- Some domain adaption techniques.
- Mean or k-means clustering of siRNA feature embeddings for cosine similarity search.</p>
<p><strong>Learnings:</strong>
- A lot about PyTorch, fastai, and adjacent DL/ML topics.
- Stay longer on the smaller model side, do more A/B or CV testing of improvements, then first really over-fit, and then try to reduce it.
- EfficientNet-B3 retraining took forever and training of EfficientNet-B4 was not finished at the competition deadline (however, this looked very promising).</p>
<p>If you have any questions or are interested in further information, please feel free to contact me! :-)</p>
<p>Thank you very much!</p>
|
Recursion Cellular Image Classification
|
11th place solution - AttentionHeads
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>AttentionHeads</h1>
<p>First of all, many thanks to my teammates, <a href="/dempton">@dempton</a>, <a href="/ddanevskyi">@ddanevskyi</a>, <a href="/orgunova">@orgunova</a> and <a href="/cutlass90">@cutlass90</a>. Also our congratulations to <a href="/dempton">@dempton</a> for getting his Grandmaster badge.
So, here is our (pretty simple) approach.</p>
<h1>Model</h1>
<ul>
<li>Ensemble of 2 <strong>EfficientNets</strong>: B0, B5</li>
<li><strong>6 channel</strong> input with first Conv layer replaced with 6 channel version</li>
<li><strong>Batch Normalization</strong> before first Conv layer</li>
<li>We normalize each image using mean/std <strong>computed on all images in experiment</strong></li>
</ul>
<h1>Training</h1>
<ul>
<li>Simplest approach possible: <strong>3-fold split, stratified by cell type</strong>, as well as by <strong>visual appearance</strong>: We made simple visualizations of experiments within cell type and tried to split similarly looking experiments into different folds, so to have all kinds of images in every fold.</li>
<li>No smart sampling or special losses, just basic <strong>Cross Entropy</strong>, without label smoothing or metric learning</li>
<li><strong>SGD</strong> with <a href="https://arxiv.org/abs/1907.08610">LookAhead optimizer</a> (a=0.5, k=5) and 4-step <strong>gradient accumulation</strong></li>
<li><a href="https://sgugger.github.io/the-1cycle-policy.html">1cycle policy and LR range test</a> to find best learning rate for training</li>
<li><strong>Polyak averaging</strong> (exponentially weighted version). When evaluating the model we used an exponentially weighted average of the model parameters instead of the last/best parameters. This does not improve the final model's performance, but it has a very pleasant <strong>smoothing effect on metric/loss curves</strong> and gives huge performance improvements at almost all training steps except for the very last, where it reaches the same performance as the model without averaging
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F726978%2F6d0d5effe9c9f35f100cde2c0ec5bf2e%2FScreen%20Shot%202019-09-27%20at%2012.44.21%20PM.png?generation=1569577526910469&alt=media" alt="polyak averaging effect"></li>
<li>Augmentations: randomly sample a site and apply flip and transpose, as well as random <strong>channel reweighting</strong> (just multiply each channel by positive values constrained to sum to 6; see the sketch after this list)</li>
<li><strong>Progressive resize</strong> during training: starting from random crops of size 224 we linearly scale crop size to 512. This gives <strong>2x faster training</strong> without drop in performance</li>
<li>1 round of <strong>pseudo labeling</strong>. We just selected top-K samples with most confident predictions, added them to each fold's train set and finetuned our models for several additional epochs</li>
</ul>
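<p>For illustration, a sketch of that random channel reweighting; a Dirichlet draw scaled by 6 is one simple way to get positive factors summing to 6 (the team's exact sampling scheme is not specified):</p>
<pre><code>import numpy as np
import torch

def channel_reweight(img: torch.Tensor) -> torch.Tensor:
    """img: (6, H, W). Multiply each channel by a positive factor; factors sum to 6."""
    weights = np.random.dirichlet(np.ones(6)) * 6.0
    return img * torch.tensor(weights, dtype=img.dtype, device=img.device).view(6, 1, 1)
</code></pre>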
<h1>Post-processing</h1>
<ul>
<li><strong>LAP solver</strong> within each experiment to assign classes to images</li>
<li><strong>Softmax Temperature</strong>. Just multiply the logits by some positive number before taking the softmax <code>(logits * t).softmax()</code>; this sharpens or softens the distribution, which has a huge impact on performance when used with LAP. The temperature value can be picked on the validation set (see the sketch after this list).</li>
<li>The "277 classes per plate" trick. It might be different from what other participant were doing, but basically we did the following:
<ol><li>Search for best <strong>temperature</strong> that maximizes metric.</li>
<li>Use <strong>LAP</strong> within experiment to get model predictions.</li>
<li>Now we need to decide, within each experiment, which sirna group should be assigned to each plate. The next step is to just <strong>zero out probabilities</strong> of classes which <strong>do not belong to this group</strong>.</li>
<li>Run <strong>LAP</strong> again.</li></ol></li>
</ul>
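<p>A minimal sketch of the temperature + assignment step above, using scipy's LAP solver (the team used the <code>lap</code> package):</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.special import softmax

def assign_classes(logits: np.ndarray, t: float = 2.0) -> np.ndarray:
    """logits: (n_images, n_classes) for one experiment; returns one unique class per image."""
    probs = softmax(logits * t, axis=1)               # temperature-scaled probabilities
    row_ind, col_ind = linear_sum_assignment(-probs)  # maximize the total assigned probability
    return col_ind                                    # rows come back in sorted order
</code></pre>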
<h1>TTA</h1>
<ul>
<li><strong>None</strong>, just average of 2 sites</li>
</ul>
<h1>Tools and hardware</h1>
<ul>
<li><strong>PyTorch</strong></li>
<li>1-2 1080ti most of the time, and about 8 GPUs in last 2-3 days.</li>
<li><a href="https://github.com/gatagat/lap">LAP solver</a></li>
</ul>
<p>Usually when you normalize images for a conv net you compute the per-channel mean and std over all images in your train set and use these stats during training/evaluation/inference. We did the same, but with statistics computed only on images from the same experiment, i.e. we normalize each image from HUVEC-05 using stats computed on all images in HUVEC-05. I can't remember all the numbers, but it gave about a 0.02 improvement.</p>
|
Recursion Cellular Image Classification
|
14th place solution, [0.989] Colab and Kaggle kernels
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I want to thank Recursion Pharmaceuticals and Kaggle for organizing this competition. I also want to thank everyone who participated and especially <a href="https://www.kaggle.com/zaharch">nosound</a> for explaining and providing sample code for leak exploitation.</p>
<p>My solution was developed using only freely available computational resources. I used Colab at the start of the competition, then briefly switched to Kaggle kernels up to the point when Kaggle enforced GPU time limitations and then Colab again. My final submission was a single densenet201-based model.</p>
<p><strong>Tools</strong>
- PyTorch with Apex and a bit of Ignite.</p>
<p><strong>Data</strong>
- 6x512x512. At first I tried to use smaller resolutions, but understood fairly quickly that full-size images are necessary for maximum performance.
- For augmentation I've used rotations, horizontal flip, cutout and random brightness/contrast. I've also used 4 images rotated by 90 degrees for TTA.</p>
<p><strong>Model</strong>
- densenet201 with ArcFace loss.
- separate 1108 classes for every cell type.</p>
<p><strong>Training process</strong>
- AdamW and cyclical learning rate with restarts, triangular schedule.
- validating on 10% of train dataset to find a point where model begins to overfit. Training on the whole dataset up to the cycle that showed improvement on validation.
- using the last checkpoint in a cycle for prediction.
- averaging distances for two sites.</p>
<p><strong>Post-processing</strong>
- using leak. Getting maximum prediction combination using lapjv for every plate.</p>
<p>Pseudo-labeling and blending different models' predictions would have likely boosted my score, but unfortunately there was not enough time and computational resources.</p>
<p><strong>Congratulations to the winners!</strong></p>
<p>Unfortunately I currently don't have time to bring my codebase to a level acceptable for public release. My implementation of ArcFace does not deviate from its description in the paper. You can refer to <a href="https://github.com/ronghuaiyang/arcface-pytorch/blob/master/models/metrics.py">this</a> for details. My implementation is even less numerically stable.</p>
<p>The training itself did not take a prohibitively long time, actually. About 30 to 40 epochs were enough and it took about 3 days on a T4. But making decisions about the pipeline/model/loss was not a straightforward process at all. I experimented with lighter and heavier architectures, different normalization schemes, different resolutions, losses, validation schemes, etc. The bottleneck was Colab's instability, its overall performance, and the fact that it does not allow training for an acceptable amount of time. This significantly hinders iterating over ideas and is actually a painful process. Unfortunately my Kaggle account was locked during the competition for almost two months and I was not able to use Kaggle kernels at all. After it was unlocked there was not much time left, and then the GPU limits came.</p>
<p>You can refer to this <a href="https://colab.research.google.com/drive/17MsV4f8Bap8y2MU8BcKBNXRdTNE2ftP3">notebook</a>.</p>
|
Recursion Cellular Image Classification
|
16th place solution [0.988 private LB]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p></p>
<p>First, I would like to thank Recursion Pharmaceuticals and Kaggle for organizing such an interesting competition.
Then, I deeply appreciate <a href="https://www.kaggle.com/zaharch">nosound</a> and <a href="https://www.kaggle.com/giuliasavorgnan">Giulia Savorgnan</a> for reporting the plate leak. </p>
<p>My solution is pretty simple.
The overview is shown in the figure above. (<strong>SORRY for my messy handwriting..</strong>)</p>
<h2>Setup</h2>
<ul>
<li>I used cloud instances with some V100s.</li>
<li>PyTorch</li>
</ul>
<h2>Data</h2>
<ul>
<li>6 channel, 512x512 input</li>
<li>Contrast limited adaptive histogram equalization (CLAHE) is applied to some models</li>
<li>Typical augmentation</li>
</ul>
<h2>Model</h2>
<ul>
<li>Basically cosFace with various backbones</li>
<li>RAdam optimizer and cyclic learning rate</li>
<li>Each site is treated separately</li>
<li>2 stage training from <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/100414#latest-634062">this discussion</a></li>
<li>Cross validation in 2nd stage training</li>
</ul>
<h2>Prediction</h2>
<ul>
<li>No <code>model.eval()</code> (this is due to the differences between experiments)</li>
<li>Test time augmentation (TTA) is carried out 8 times for each image</li>
<li>Predictions from both sites are averaged</li>
<li>Soft voting</li>
</ul>
<h2>Post processing</h2>
<ul>
<li>Raw prediction is corrected in the same way <a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak">this kernel</a> does</li>
<li>Hungarian algorithm is used to remove duplicated prediction in each plate </li>
</ul>
<h2>What didn’t work</h2>
<ul>
<li>Mixup augmentation</li>
<li>Control images (I tried a two-head model in which control image features are subtracted from the experiment image features, but it performed worse)</li>
</ul>
<p>I really wanted to try pseudo labeling, which was likely to boost the score, but I didn’t manage to do that due to lack of time.</p>
<p><strong>Finally, congratulations to the winners!</strong></p>
<p>P.S.
I work at <a href="https://aillis.jp">Aillis Inc.</a>, a Japanese medical device startup developing advanced diagnosis device using throat images. We are hiring! Please contact me if you are interested.</p>
|
Recursion Cellular Image Classification
|
1st Place Solution Write-up & Code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li>zoom transform -- I observed a significant boost in score when I introduced it, because back then I used only flips + 90 deg rotation, and didn't use mixup or cutmix (the model overfitted to the training set very fast). I don't know what difference it makes with mixup/cutmix</li>
<li>mixup, cutmix -- significant boost in score. A few teams stated that mixup didn't work for them -- probably they trained for too few epochs (mixup requires many more epochs to converge)</li>
<li>relatively long training time -- I suspect that for mixup it was a necessity, but for cutmix I could reduce training epochs maybe by ~30% (I didn't verify it)</li>
<li>randomness on normalization -- I think it didn't change anything at the end, as I didn't observe any change in score on local validation. I used it out of sentiment because it helped me realize that normalization over channels helps a lot</li>
<li>5:1 split -- That was a stupid choice. Actually, I never trained full CV for any configuration. There were too many ideas to test, and larger models took much more time than I expected (more than 1 GPU hour per epoch). However, looking at the validation results I had, and the confidence on the validation and test sets, I estimated that I would get ~0.998 private, which is not very distant from how I scored</li>
<li>progressive PL -- I don't think that was a key for the success. Without any PL I have very good public-private score ratio. For example:
<ul><li>single model 90 epochs: 0.98997 private (13 teams above it) and 0.95802 public (20 teams above it)</li>
<li>2 model ensemble 90 epochs: 0.99199 private (11 teams) and 0.96795 public (15 teams)</li>
<li>ensemble of 90 epochs models: 0.99540 private (3 teams) and 0.98262 public (8 teams)</li></ul></li>
<li>metric loss -- I used a different approach than bestfitting's solution with gamma=0.2 (and also than many gold solutions from this competition). The difference between my solution and bestfitting's is that bestfitting used only the arc margin product instead of 2 layers (head and arc margin product). The difference between the arc margin product and my head is that the arc margin product normalizes the embedding and my head doesn't -- this gave me better results back when I had ~0.8 public score</li>
<li>What is also unique compared to other solutions is concatenating the cell type one-hot vector to the GAP output -- after using this, fine-tuning the model for each cell type stopped giving me any boost in score (but I introduced it when I had ~0.6 public score, IIRC)</li>
</ul>
<p>To sum up, I think that advantage over others may be because of:
1. zoom augmentation,
2. cutmix / long training time,
3. arc margin product / head approach, or
4. concatenating cell type to the neck input.</p>
<p>I used 2x Titan V, PyTorch, and trained in mixed precision</p>
<p>Thanks to Recursion and Kaggle for hosting such an interesting competition. It was really fun to participate.</p>
<p><strong>UPDATE: source code is here: <a href="https://github.com/maciej-sypetkowski/kaggle-rcic-1st">https://github.com/maciej-sypetkowski/kaggle-rcic-1st</a></strong></p>
<h2>Data pre-processing & augmentation</h2>
<ul>
<li>Loading original images (512x512)</li>
<li>HUVEC-18 is moved to the training set (known leak)</li>
<li>For training, all control images (also these from the test set) are used in the same way as non-control images</li>
<li>Training augmentations
<ul><li>Random resized crop preserving aspect with scale ~ uniform(0.5, 1) using nearest-neighbor interpolation</li>
<li>Random horizontal and vertical flip, and 90 degrees rotation</li>
<li>Normalizing each image channel to N(0, 1)</li>
<li>For each channel: channel = channel * a + b, where a ~ N(1, 0.1), b ~ N(0, 0.1) (see the sketch after this list)</li></ul></li>
<li>Test-time augmentations
<ul><li>Horizontal and vertical flip, and 90 degrees rotation</li></ul></li>
</ul>
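<p>For illustration, a short sketch of the last two training-augmentation steps above (per-channel normalization followed by the random a/b jitter):</p>
<pre><code>import torch

def normalize_and_jitter(img: torch.Tensor) -> torch.Tensor:
    """img: (6, H, W). Per-channel N(0, 1) normalization, then channel * a + b."""
    mean = img.mean(dim=(1, 2), keepdim=True)
    std = img.std(dim=(1, 2), keepdim=True).clamp_min(1e-6)
    img = (img - mean) / std
    a = torch.normal(1.0, 0.1, size=(img.shape[0], 1, 1)).to(img)
    b = torch.normal(0.0, 0.1, size=(img.shape[0], 1, 1)).to(img)
    return img * a + b
</code></pre>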
<h2>Model</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2631834%2F21c9fd3c9f61e1ea9970a395ed046111%2Fmodel.png?generation=1569708962482420&alt=media" alt="">
* Backbone is pre-trained on ImageNet and first convolution is replaced with 6 input channel convolution
* Neck: BN + FC + ReLU + BN + FC + BN
* Head: FC</p>
<p>I found it important to not normalize input for the head, and that's the reason why the head and arc margin product are separate layers with different weights for fully connected layer (contrary to <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">what bestfitting did in Human Protein Atlas Image Classification</a>).</p>
<h2>Training</h2>
<ul>
<li>Batch size: 24 (48 with gradient accumulation)</li>
<li>Optimizer: Adam</li>
<li>Weight decay: 1e-5</li>
<li><a href="https://arxiv.org/abs/1905.04899">Cutmix</a></li>
<li>Loss = ArcFaceLoss / 2 * 0.2 + SoftmaxCrossEntropyLoss * 0.8
(ArcFaceLoss is divided by 2 to more or less preserve magnitude between losses)</li>
<li>90 epochs</li>
<li>Learning rate: 1.5e-4 with cosine scheduling</li>
</ul>
<h2>Post-processing</h2>
<ol>
<li>Predictions from different site images and different test-time augmentations are combined by taking mean of logits</li>
<li>Predictions for control classes are ignored</li>
<li>831 classes that can't be on the given plate are marked as impossible</li>
<li>Linear Sum Assignment (LSA) is applied</li>
</ol>
<p>With such a configuration (training on the whole labeled part of the dataset -- no validation), I got a 0.98997 private score and a 0.95802 public score (single model).
Ensembling it with models trained in the same or very similar way (most of them with train/val split 5:1) (3x DenseNet161, 2x DenseNet161 with mixup (instead of cutmix), 5x DenseNet201 also with mixup, 3x ResNeXt50 also with mixup) gave me 0.99540 private and 0.98262 public (between 3rd-4th place on private LB).</p>
<p>To reach score of 0.997 private with single model I needed to add one more trick, which I would call:</p>
<h2>Progressive pseudo-labeling</h2>
<p>In all write-ups I've read so far, pseudo-labeling methods consist of iteratively training new model(s) and enlarging the training set using them. In my method, a small amount of the most confident predictions is pseudo-labeled and added to the training set <strong>each epoch</strong>. Precisely, for each epoch:
1. Predict all test and validation examples that weren't added to the training set yet (without TTA, only with combining over sites -- predicting with TTA could probably lead to a small improvement, but it would take more time to compute)
2. For each prediction, mark as impossible:
* control classes,
* classes that can't be on the plate,
* classes that are already assigned to any image for the plate in the training set.
3. Select the K most confident predictions (confidence = difference between the greatest and second greatest class prediction)
4. Add new examples to the training set. If at least two examples are on the same plate and have the same class pseudo-labeled, add to the training set only the most confident one (to preserve uniqueness of classes on the plate)</p>
<p>During class assignment, I use a greedy-like approach instead of LSA. However, to generate the final predictions, the same post-processing as earlier (without pseudo-labeling) is applied (with LSA there). Using LSA increases the score because examples that were added later are more difficult and have smaller confidence (the network saw them fewer times, and the learning rate was smaller due to the decreasing learning rate policy).</p>
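<p>A minimal sketch (array names are assumptions) of the per-epoch selection step described above: mask impossible classes, score confidence as the top-2 margin, and pseudo-label the K most confident images:</p>
<pre><code>import numpy as np

def select_pseudo_labels(probs: np.ndarray, impossible: np.ndarray, k: int):
    """probs, impossible: (n_images, n_classes); assumes >= 2 possible classes per image."""
    masked = probs.copy()
    masked[impossible] = -np.inf
    top2 = np.sort(masked, axis=1)[:, -2:]       # two largest values per image
    margin = top2[:, 1] - top2[:, 0]             # confidence = top1 - top2
    chosen = np.argsort(-margin)[:k]             # indices of the K most confident images
    return chosen, masked[chosen].argmax(axis=1)
</code></pre>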
<h2>Training previous single model further</h2>
<ul>
<li>for additional 40 epochs,</li>
<li>pseudo-labeling 40% of the test set at the start, and then adding 1.5% each epoch,</li>
<li>cosine learning rate schedule with initial learning rate = 6e-5 scheduled for 60 epochs (i.e. at the end of training it is 1.5e-5),</li>
</ul>
<p>gave me 0.99700 private and 0.99029 public, which already puts me on the 1st position.</p>
<p>Ensembling it with another model trained in the same way, but with train/val split 5:1, I got 0.99749 private and 0.99187 public. Adding one more model to the ensemble (also trained in the same way, but on the different split) gave me 0.99763 private and 0.99187 public, which matches my private score.</p>
<p>I noticed that around the 120th epoch (30th epoch of pseudo-labeling -- 85% of the test set already added) the first pseudo-label misclassifications on the validation set started to occur, hence I tried to fine-tune the model even further (taking the checkpoint from the 120th epoch), starting with 80% of the test set and again incrementally adding new images for 30 epochs. And then again taking the checkpoint 5 epochs before the end, and starting from 95% for 20 epochs. However, by doing this I was able to correctly classify only one more private example (0.99707) and increase the public score (i.e. probably only the U2OS-04 experiment) to 0.99232.</p>
<p>My final submission is an ensemble of 11 models (6x DenseNet161, 5x DenseNet201), each of them with pseudo-labeling, with more TTA (also predicting on crop-resized images with scales 0.75 and 0.85). It didn't give me any boost on the private test set (0.99763), but it helped on the public test set (probably only on U2OS-04) (0.99480).</p>
<h2>Other insights</h2>
<ul>
<li>Mixup performs a little better than cutmix on the part without pseudo-labeling, but it converges slower. On the contrary, with pseudo-labeling, cutmix was a little better (probably because of faster convergence)</li>
<li>Larger architecture is better: DenseNet121 < DenseNet169 < DenseNet201 < DenseNet161 -- for people without knowledge about DenseNets: DenseNet161 has less layers but more parameters than DenseNet201</li>
<li>EfficientNets and ResNeXts didn't work for me</li>
</ul>
<h2>Attempt to use control images in a smarter way</h2>
<p>I want to share an approach I tried; however, it didn't give me any boost in score, and I didn't use it in the final submission. But I think it's very valuable information, especially for further research.
My idea was, instead of feeding only the embedding to the head, to also feed some information about a reference image from the same plate/experiment (e.g. its embedding and one-hot label).
Using only control images as a reference would lead to overfitting. To tackle that problem, I also used non-control images as references -- during training, control and non-control images are treated in the same way; during inference, only control images are used as references (obviously, non-control images in the test set are not labeled).
We have 1139 classes per experiment (or 277 + 31 = 308 per plate), so the head would see each pair of classes once per 1139 * 1139 = 1297321 images (once per 15 epochs) or 308 * 308 = 94864 images (once per epoch). To solve this, I ensure that every batch contains a constant number of images from each of a few randomly chosen experiments/plates. For example, for batch size = 48, I can have 8 (number of experiments/plates) x 6 (number of images from a given experiment/plate), and run the head on each pair within each experiment/plate in the batch. That gives 8 * 6 * 5 = 240 pairs in one batch (I forbid the image and the reference to be the same image), and it doesn't increase training time (the embedding of each image is calculated only once, and the head consists of a few fully connected layers).</p>
<p>However, it didn't work any better than normal classification, and sometimes even worse. I tried to add or modify features for the head, for example:
* concatenate difference between / multiplication of image and reference embedding,
* concatenate corresponding vector from the arc margin product layer to the reference label,
* normalize / not normalize embeddings,
* detaching some of the features (not computing gradient through them).</p>
<p>I also tried:
* add more layers to the head,
* heavy-augment all images in a batch for the same experiment/plate in the same way -- all images in the batch belonging to the same experiment/plate would have the same brightness, contrast, gamma correction, the same scale applied, and so on -- the idea was to artificially simulate other cell types to direct model toward using references more effectively,
* use mixup only within images from the same experiment/plate.</p>
<p>However, no luck.</p>
<p>What's more, after training such a model and feeding random noise as the reference, the network still inferred very similar predictions with almost the same validation score (and not always worse). So the network didn't learn how to use the references properly.
That would imply that creating a model that performs well on different cell types (not seen during training) using control images may be a very hard and challenging problem.</p>
<ol>
<li>I wrote everything from scratch for this competition. It is important to fully understand and be aware of what's in your code when you run an experiment.</li>
<li>I didn't use any special approach for testing, just printed the validation score for each experiment separately to get a rough sense of the variance and accuracy per cell type. Also, if you use the public LB for validation, do it wisely. In this competition we had only 4 experiments there, and among them U2OS-04 -- the most difficult one in the entire test set.</li>
<li>I stated my results without progressive pseudo-labeling in the write-up and also in <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543#636393">this comment</a>. (0.99540 private and 0.98262 public)</li>
<li>And my tip for beginners: don't waste too much time on reading tips from others :) It can help you only in the short term, because if you want to reach top positions you must learn how to think out of the box. Everyone thinks differently, so the best way is to develop your own way instead of copying others -- especially because it's not easy to transfer such knowledge/experience.</li>
</ol>
<p>I've looked at the logs again, and it seems like "1 GPU hour per epoch" is not entirely true (my bad...). I think I accidentally took the last epoch, where training was performed on the entire training + test set (pseudo-labeling), and that one took 1 hour. The described schedule (130 epochs) took 98 hours, where the first epoch (no pseudo-labeling) took 40 minutes.</p>
<p>I trained one job on one GPU, 2 jobs in parallel, so I didn't check how it scales on 2 GPUs.</p>
<p>There may be a few other things you're not considering:
* Training is performed on the entire training set together with HUVEC-18 (no validation)
* Inferring the remaining part of the test set while pseudo-labeling every epoch
* Training on a larger and larger set (pseudo-labeling)
* Using deterministic CUDA kernels (to ensure reproducibility), which are slower
* Memory-efficient implementation of DenseNet, which is slower but allows training with larger batches.
* PyTorch supports only the NCHW tensor layout, but the fastest FP16 CUDA convolution kernels take NHWC input. If I'm not mistaken, in PyTorch this is done by transposing the tensor before and after calling each FP16 convolution kernel. But we can hope it will be improved someday as part of <code>torch.jit</code></p>
<p>I'm almost 100% sure I wasn't bottleneck on I/O and CPU. </p>
<p>To give you more comparable performance metrics:
* training speed -- 37 img/sec
* inference speed (w/o TTA) -- 123 img/sec</p>
<p>Good catch! For the first linear layer there's no reason for it, just a typo. Probably it doesn't matter because it's the only such place in the entire model. For the second linear layer -- it's directly before the batch norm which would remove the bias.</p>
|
Recursion Cellular Image Classification
|
3rd place solution.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1025985%2Ffade7b8de4750f88c191d2d23fc6dfb8%2F2019-09-29%2012.13.24.png?generation=1569687562171949&alt=media" alt=""></p>
<p><a href="https://github.com/SeuTao/CellSignal-Cell-Image-Analysis">https://github.com/SeuTao/CellSignal-Cell-Image-Analysis</a></p>
|
Recursion Cellular Image Classification
|
41st place pytorch solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>1. Data</strong>
- 6 channels</p>
<p><strong>2. Augmentation</strong>
- RandomScale, Rotate, HorizontalFlip, VerticalFlip, Resize, RandomBrightnessContrast, RandomGamma, Normalize</p>
<p><strong>3. Model design</strong></p>
<ul>
<li>Backbone: DenseNet201 pretrained on ImageNet</li>
<li>Head: 2 linear layers with batch normalization</li>
</ul>
<p><strong>4. Loss</strong>
- Binary Cross Entropy Loss</p>
<p><strong>5. Training</strong></p>
<ul>
<li>Optimizer: Adam</li>
<li>Different learning rates for different layers</li>
<li>Image size: 512</li>
<li>Batch size: 64</li>
<li>Epochs: 75</li>
<li>Finetuning for each cell type</li>
<li>Mixed precision</li>
</ul>
<p><strong>6. Prediction</strong></p>
<ul>
<li>TTA: 10</li>
<li>Use embeddings instead of final probability scores</li>
<li>Run k-Nearest Neighbors for each cell type separately</li>
<li><strong>Hungarian algorithm is used to match cell types with plates and wells with siRNAs</strong> (a sketch follows this list)</li>
</ul>
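<p>As referenced above, the last two points can be illustrated with a short sketch: probabilities from a k-NN classifier fitted on the embeddings are fed into the Hungarian algorithm so that, within one plate group, each siRNA is used exactly once. This is not the author's code; the function, k, and the metric are assumptions.</p>
<pre><code>import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from scipy.optimize import linear_sum_assignment

def predict_plate_group(train_emb, train_sirna, test_emb, n_neighbors=5):
    """k-NN on embeddings + Hungarian matching for one cell type / plate group.
    train_emb: (n_train, d), train_sirna: (n_train,), test_emb: (n_test, d)."""
    knn = KNeighborsClassifier(n_neighbors=n_neighbors, metric="cosine")
    knn.fit(train_emb, train_sirna)
    proba = knn.predict_proba(test_emb)          # (n_test, n_classes)
    # maximise total probability under the "each siRNA appears once" constraint
    row_ind, col_ind = linear_sum_assignment(-proba)
    pred = np.empty(len(test_emb), dtype=knn.classes_.dtype)
    pred[row_ind] = knn.classes_[col_ind]
    return pred
</code></pre>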
<p><strong>7. Result</strong></p>
<ul>
<li>Public LB: 0.701</li>
<li>Private LB: 0.959</li>
</ul>
<p><strong>8. Observations</strong></p>
<ul>
<li>I didn't manage to leverage ArcFace :(</li>
<li><strong>Hungarian algorithm boosted score a lot</strong></li>
<li>TTA helps too</li>
</ul>
<p>GitHub link: <a href="https://github.com/rebryk/kaggle/tree/master/recursion-cellular">https://github.com/rebryk/kaggle/tree/master/recursion-cellular</a></p>
|
Recursion Cellular Image Classification
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all prize and medal winners!
Our brief solution summary:</p>
<p>1) heavy model ensemble
- 1108-way classification
- seresnext50, 101, densenet, efficientnet x 6C6, 5C6, 4C6 channel selection x cross validation
- 512x512 input, 90rot+clip aug</p>
<p>2) solve linear sum assignment problem to make the most of 'the groups of 277 per plate restriction'
<a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905#latest-624588">https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905#latest-624588</a></p>
<p><code>
from scipy.optimize import linear_sum_assignment
plate_prob = plate_prob / plate_prob.sum(axis=0, keepdims=True)
row_ind, col_ind = linear_sum_assignment(1 - plate_prob)
</code></p>
<p>3) make pseudo labels and go back to 1) (with a few selected models like efficientnet)</p>
<p>The code can be found in <a href="https://github.com/ngxbac/Kaggle-Recursion-Cellular">https://github.com/ngxbac/Kaggle-Recursion-Cellular</a>
(mainly the 1) part)</p>
|
Recursion Cellular Image Classification
|
5th place solution. AdaBN[domain==plate]
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I had the idea of conditional BN gamma/beta for different domains, but this would not work for unseen domains.
I've also tried some structures of "BN gamma/beta from negative controls". I believe that could be a real solution to this challenge. But no luck with that.</p>
<p>In practice, I simply set <code>track_running_stats</code> of all BN layers to <code>False</code> and use large batch size (111) in val/test. Prediction is stable if batch size is large enough. I tried to fit a whole plate into a batch with FP16 and SyncBN, but the accuracy did not change compared to batch size 111.</p>
<p>You could estimate domain statistics with an extra forward, but that may not be accurate (think about several sequential BN layers). There is a good example in the repo of <a href="https://github.com/timgaripov/swa/blob/master/utils.py">SWA</a>. Actually, I noticed the domain shift when trying SWA.</p>
<p>First, I would like to thank Recursion and Kaggle for organizing this interesting challenge, and thank team Double strand for their contribution.</p>
<p>Here's a summary of my solution.</p>
<h3>In this competition we have a multi-source and multi-target domain dataset. And we have domain labels.</h3>
<h3>What is "domain" in this dataset?</h3>
<p>cell, experiment, or plate</p>
<p>My EDA suggests [domain==experiment]. But [domain==plate] worked better in practice. This is reasonable since batch effects and plate effects exist.</p>
<h3>How to use domain labels? Simply <a href="https://arxiv.org/abs/1603.04779">AdaBN</a>.</h3>
<p>In short, do not challenge your model(with BN layers) with cross-domain batches.</p>
<p>In training, use domain(plate) aware batch sampling.</p>
<p>In testing, use domain batch statistics in BN layers.</p>
<p>With batch norm done right, this competition <strong>IMMEDIATELY</strong> becomes a regular classification challenge: training converges smoothly and I had consistent validation/LB scores (except for HUVEC-05).</p>
<p>Some results of my early experiments, ResNet50, 224x224 input, same hyperparameters
- random batch sampling, val acc 40+%
- sample batches in the same cell type, val acc 50+%
- sample batches in the same experiment, val acc 60+%
- sample batches in the same plate, val acc 70+%</p>
<h3>Model</h3>
<p>Sequential(BatchNorm2d(6), backbone, neck, head)</p>
<p>backbone: DenseNet201, ResNeXt101_32x8d, HRNet-W18, HRNet-W30</p>
<p>neck: gap or gap+bn</p>
<p>head: 5 fc layers (1 shared and 4 for different cells)</p>
<h3>Loss:</h3>
<p>ArcFaceLoss(s=64, m=0.5) for gap+bn neck</p>
<p>ArcFaceLoss(s=64, m=0.3) for gap neck</p>
<h3>Exemplar Memory</h3>
<p>In this dataset, we could get more supervision than siRNA labels.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F381412%2F27dd6f8424d2b8e2e3d48a55be5bf12a%2Fdataset.png?generation=1569561415969234&alt=media" alt=""></p>
<p><a href="https://arxiv.org/abs/1904.01990">Exemplar Memory</a> fits this structure perfectly.</p>
<p>Fine tuning with 19 exemplar memory modules (HUVEC-05 and 18 test experiments) gave me ~1% LB boost, and HUVEC-05 validation accuracy increased from ~35% to ~46%(~65% with 277 linear assignment)</p>
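<p>For readers unfamiliar with the technique, the linked paper has the full details; the central idea is a momentum-updated bank of L2-normalised features, one slot per unlabeled sample, used as a non-parametric classifier. A minimal PyTorch sketch of that idea (names, temperature and momentum values are illustrative assumptions, not the author's code):</p>
<pre><code>import torch
import torch.nn.functional as F

class ExemplarMemory(torch.nn.Module):
    """Sketch: feature bank with one slot per unlabeled sample."""
    def __init__(self, num_samples, feat_dim, temperature=0.05, momentum=0.01):
        super().__init__()
        self.register_buffer("bank", torch.zeros(num_samples, feat_dim))
        self.t = temperature
        self.m = momentum

    def forward(self, feats, indices):
        feats = F.normalize(feats, dim=1)
        # similarity of the batch to every stored exemplar acts as class logits
        logits = feats @ self.bank.t() / self.t
        # momentum update of the slots belonging to the current batch
        with torch.no_grad():
            self.bank[indices] = F.normalize(
                (1 - self.m) * self.bank[indices] + self.m * feats, dim=1)
        return logits

# exemplar-invariance term, 'mem' being an ExemplarMemory instance:
# loss = F.cross_entropy(mem(feats, indices), indices)
</code></pre>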
<h3>Training</h3>
<p>1108-way classifier with treatment only.
input 512 -> random crop 384 -> random rot90 -> random hflip
loss = 0.5 * loss_fc_cell + 0.5 * loss_fc_shared</p>
<p>No pseudo labeling was used.</p>
<h3>Prediction</h3>
<p>input 512
Use fc_cell.
No TTA.
Averaging two sites.
lapjv for linear assignment.</p>
<h3>DenseNet201 results</h3>
<p>| | Public | Private | Public (leak) | Private (leak) |
| --- | --- | --- | --- | --- |
| train data only | 0.92620 | 0.97325 | 0.98307 | 0.99303 |
| + exemplar memory fine tune | 0.95531 | 0.98321 | 0.98691 | 0.99394 |</p>
<h3>Some interesting finding</h3>
<p>HUVEC-05 prediction of fc_shared is always better than fc_cell. What's wrong with this experiment?</p>
<p>If you want to try AdaBN in pytorch,</p>
<p>```
from torch.nn.modules.batchnorm import _BatchNorm

for m in net.modules():
    if isinstance(m, _BatchNorm):
        m.track_running_stats = False
```</p>
<p>```
import random
import itertools
from torch.utils.data.sampler import BatchSampler

class TrainBatchSampler(BatchSampler):
    def __init__(self, dataframe, batch_size):
        self.batch_size = batch_size
        dataframe = dataframe.copy().reset_index(drop=True)
        index_groups = []
        for _, df in dataframe.groupby(['experiment', 'plate']):
            index_groups.append(df.index.values)
        self.group_sizes = [len(g) // self.batch_size for g in index_groups]
        self.index_groups = [
            self._take_every(self._cycle_with_shuffle(g), self.batch_size)
            for g in index_groups
        ]
        self.length = sum(self.group_sizes)

    def __len__(self):
        return self.length

    def __iter__(self):
        batches = []
        for size, group in zip(self.group_sizes, self.index_groups):
            for _ in range(size):
                batches.append(next(group))
        random.shuffle(batches)
        return iter(batches)

    def _cycle_with_shuffle(self, xs):
        while True:
            random.shuffle(xs)
            yield from xs

    def _take_every(self, it, n):
        while True:
            chunk = []
            for _ in range(n):
                chunk.append(next(it))
            yield chunk


class TestBatchSampler(BatchSampler):
    def __init__(self, dataframe, batch_size):
        self.batch_size = batch_size
        dataframe = dataframe.copy().reset_index(drop=True)
        index_groups = []
        for _, df in dataframe.groupby(['experiment', 'plate']):
            index_groups.append(df.index.values)
        self.batches = []
        for g in index_groups:
            self.batches.extend(self._split_every(g, self.batch_size))

    def __len__(self):
        return len(self.batches)

    def __iter__(self):
        return iter(self.batches)

    def _split_every(self, xs, n):
        ret = []
        chunk = []
        for x in xs:
            chunk.append(x)
            if len(chunk) == n:
                ret.append(chunk)
                chunk = []
        if chunk:
            ret.append(chunk)
        return ret
```</p>
<p>Sorry I forgot to add some important details. </p>
<p>Actually, I ended up with 1108-way classification models. Positive controls did not contribute to accuracy. Once negative controls were added to batches, training became unstable.</p>
|
Recursion Cellular Image Classification
|
6th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi kagglers! Nice one, congratulations to all the winners and all who've learned something new. Congratulations and many thanks my team Andriy ( <a href="/ayaroshevskiy">@ayaroshevskiy</a> ), Sasha ( <a href="/pajari">@pajari</a> ), Vitaliy ( <a href="/ladler0320">@ladler0320</a> ). Short summary of our very simple and straightforward solution.</p>
<p>About "<strong>leak</strong>". I won't argue about the definition. But actually, that was simply target class co-occurance and I recommend to check this in every EDA of every multiclassification challenge same as you check class imbalance. EDA is must.
For example:
<code>df.groupby(["experiment", "plate"]).sirna.apply(lambda x: x.sort_values().unique())</code>
So why do they co-occur that way? Why are experiments shuffled only within the plates? I guess it's hard by design to reshuffle them for every single experiment with 1100+ classes, but anyway that knowledge reduced the error significantly.</p>
<p>Also, as you already know - crucial was to solve <strong>assignment problem</strong> having raw/softmax predicts from neural network. So we've used hungarian algorithm as well. </p>
<ul>
<li><p>Validation. I used 30/33 experiments for validation, split into <strong>3 folds</strong> (the 3 other experiments went to all the folds). Iteratively, I manually swapped experiments to align their scores. HUVEC-18 went to the train set too. We found that the public LB contains U2OS-04 and it's hard to achieve a good score for that particular one. I removed it from most of our submissions to see the score for the other three cell types. But that also gave a chance that someone would overfit to the public LB. Private U2OS-05 was alright and looked much more like U2OS-03/U2OS-01.</p></li>
<li><p>Image <strong>normalization</strong>. The organizers gave us pixel stats, so we used them as-is - scaled all experiments by their mean/std. By the end of the competition I found that scaling U2OS-04 with the mean/std of U2OS-02 increased its score by 10%. But who cares about public LB experiments ¯\_(ツ)_/¯</p></li>

<li><p>Networks. We trained our first classification model with softmax - efficientnet-b0 - in Kaggle kernels before the GPU quotas, then on my 2x1080Ti devbox. I didn't have much time for modeling because of the lack of GPUs, but I found EfficientNets work comparatively better than DenseNets and ResNets/SE-ResNets. One fold of b0 takes about 9 hours, but the CPU was the bottleneck.</p></li>
<li><p>Hard <strong>augmentations</strong>. All flips/rotates. Hard Brightness and Contrast with prob=0.8. Gamma, ChannelDropout. One thing I've found - uint8 augmentations work worse than float because of distribution discretization. </p></li>
<li><p>Training process. I train everything with <strong>Adam</strong>, ReduceLROnPlateau and a few warmup iterations. I've spent so much time trying to understand why people claim SGD works better, but no luck - Adam performed better as usual.</p></li>
<li><p><strong>Controls</strong>. We've simply added train and test (!) controls as additional classes 1108:1139. That boosted score significantly. </p></li>
</ul>
<p><strong>First iteration</strong>. Pretrained <strong>EfficientNets</strong> b0, b1 and b2 achieve almost the same score, but blending their raw predictions and then solving the assignment problem boosted the score nicely. Averaging these 3 models scores 0.744/0.750 (3 cells, U2OS-04 excluded) on public and 0.990 on private LB (13th).</p>
<p>I saw a few strategies for splitting the test experiments for <strong>pseudo-labeling</strong>, and I actually tried around 6 of them with one or two iterations of PL. In the end I came up with a very careful strategy - soft labels and a balanced split of experiments between folds:
<code>
pl_all_folds = ["HEPG2-10", "HUVEC-19", "RPE-10", "U2OS-05"]
pl_folds = [["HEPG2-08", "HUVEC-17", "HUVEC-23", "RPE-08"],
["HEPG2-09", "HUVEC-20", "HUVEC-21", "RPE-09"],
["HEPG2-11", "HUVEC-22", "HUVEC-24", "RPE-11"]]
</code>
As you see U2OS-04 was reduced.
<strong>Soft</strong> pseudo-labelling seemed to work better than hard. Now I understand that the best strategy might be to add them iteratively based on some threshold after solving assignment problem.</p>
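<p>"Soft" here means the target of the loss is the predicted probability vector itself rather than its argmax. A minimal PyTorch sketch of such a loss (an illustration, not the exact training code):</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, soft_targets):
    """Cross-entropy against a probability distribution (soft pseudo-labels)."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()

# the "hard" alternative would simply be:
# F.cross_entropy(logits, soft_targets.argmax(dim=1))
</code></pre>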
<p><strong>Second iteration</strong>. Same b0, b1, b2, now with pseudo-labels. People don't usually train the same architectures in the PL stage, so I added b4 and b6 as well. EfficientNet-b4 scored the best validation and LB score, and unfortunately EfficientNet-b6 didn't converge well in time. For big models we used <strong>gradient accumulation</strong> up to x12. TTA: 2 sites with 8 combinations of flip/rotate each.</p>
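<p>Gradient accumulation simply sums gradients over several small batches before each optimizer step, emulating a batch roughly x12 larger. A generic PyTorch sketch (not the team's code; the function and its arguments are placeholders):</p>
<pre><code>def train_with_accumulation(model, loader, optimizer, criterion, accum_steps=12):
    """Emulate a batch ~accum_steps times larger than what fits in GPU memory."""
    model.train()
    optimizer.zero_grad()
    for step, (images, targets) in enumerate(loader):
        loss = criterion(model(images), targets) / accum_steps  # scale so grads average
        loss.backward()                                          # gradients accumulate
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
</code></pre>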
<p>What didn't work:
- SGD with CLRs (triangles, cosine etc)
- Lookahead and Ranger
- ArcFace (achieved almost the same score as Softmax)
- My favourite se-resnext50</p>
|
Recursion Cellular Image Classification
|
7th place solution summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, I think I will share my TPU model notebook later today. To the question on before/after integer programming, the original MIP (not LB probing) formulation is just a way to solve the assignment problem, one of many ways that give the same solution. And it does improve the score reliably above argmax() and another more involved approach that we tried. In the kernel of Christopher Berner that I mentioned he also gives the score how much it helped him.</p>
<p>Thank you Recursion and Kaggle for organizing this unique and challenging competition! When I decided to participate in it, I was impressed by the quality of data and organization, <a href="https://www.rxrx.ai/">this beautiful site</a> alone gave me a lot of motivation. GCP and TPU credits were also indispensable for me, hope to see more in the future. The leaks that were found were handled smoothly by Kaggle, specifically thanks to <a href="https://www.kaggle.com/sohier">Sohier Dane</a>. And credits to <a href="https://www.kaggle.com/giuliasavorgnan">Giulia Savorgnan</a> for reporting the second leak. </p>
<p>I was very fortunate to team up with Yuval on this one. His models scored and maintained top10 for very long time, and I was able to contribute meaningfully only much later in the competition. I still can't wrap my head around how it was possible for him to put so much work in this competition, work full-time at a day job, but also prepare for and run UTMB 171k ultra-marathon race at the end of August. He finished after 50k due to injury, but in my eyes it is already a super-human level of toughness. </p>
<h1>Setup</h1>
<p>All my models I trained with pytorch and TPUs, and used exactly all 600$ of the free GCP credit that I had. My setup I described <a href="https://towardsdatascience.com/running-pytorch-on-tpu-a-bag-of-tricks-b6d0130bddd4">in this post</a>. Despite being able to get overall good training speeds, the experience using TPUs with pytorch was rough. The main problem was that it hangs unexpectedly after a few hours of training. At times it was so annoying that I questioned myself if I lived this life correctly. Pytorch/XLA guys tried to help on the forum, but at the moment I think Pytorch/XLA is just not quite production ready (but I will still be happy to receive free TPUs for the next competition!).</p>
<h1>The model</h1>
<p>From the beginning I started with resnet50 and didn’t have an opportunity to successfully try anything else. In hindsight, it was a good choice for this competition. Yuval has used other backbones, I believe he will describe his work in a separate post. </p>
<ul>
<li>Resnet50 backbone</li>
<li>ArcFace (m=0.2, s=30, 512 features), used “as is” from the beginning of training, no adjustments. A vanilla implementation <a href="https://arxiv.org/abs/1801.07698">from the original article</a>.</li>
<li>384x384 images cropped from original 512x512 with random shifts, flips and all angles rotations (albumentations package)</li>
<li>Normalization of images per experiment and channel, with small randomization</li>
<li>Not using well, plate and experiment meta info.</li>
<li>3 folds each containing full experiments</li>
<li>Training for about 10 epochs together and then a separate model per cell type.</li>
<li>Each site is treated as a separate sample</li>
<li>Adam optimizer, decreasing learning rate</li>
<li>HUVEC-05 is removed, as it is too different from HUVEC test experiments</li>
<li>HUVEC-18 is moved to the train set</li>
</ul>
<p>Besides this standard structure described above, I had 3 special features in the model. I have not done any serious comparisons with and without them, but my feeling is that the first gives a major boost, and the second and third some additional smaller improvements (but a fair comparison vs regular pseudo-labeling is definitely missing).</p>
<ol>
<li><strong>Training with test</strong>. The model is trained on all train, test and control images together, predicting 1139 classes (1108 + 31 controls). For the test images I took their softmax output as a target for the log-loss. In this case it can be shown that the gradient of the features before the softmax is zero, that is, these samples have no effect. It makes sense intuitively, I take the predicted probability distribution as a target, - this is already the perfect prediction according to log-loss. But then I modify this target probabilities vector for each sample according to the rich structural information that we hold, - and this is quite unique for this competition. First, - <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/102905">the plates leak</a>, all components besides 277 are zeroed out. Second, - each set of 277 samples contains every sirna only once. This second one was not easy to enforce, but as a soft constraint in the loss it worked reasonably well. Additional note here, - validation set data was in the train as well, with hidden sirnas, to simulate similar setting to test. One more thing, - ArcFace didn’t work well here, so it was turned off for test/validation samples. And the last point, - I think this approach in general can be thought of as a dynamic type of pseudo-labelling.</li>
<li><strong>Mean normalization of ArcFace features</strong>, kudos to Yuval who pioneered this idea. I save an EMA of the 512 features before ArcFace for each sample, calculate their mean per experiment and the overall mean (vectors of length 512 again), and on the forward pass I add the overall mean to each sample's features and subtract its experiment mean. This way the features entering ArcFace don’t have a per-experiment bias and we improve this side of the domain adaptation problem. Theoretically. (A rough sketch follows after this list.)</li>
<li><strong>Incremental hard pseudo-labelling (PL)</strong>. The PL in point 1 above is a probabilistic PL, in the sense that the target is a probabilities vector of size 277, but not forced to be a specific value, anything is good as long as it is in the set. In this addition to the model at the end of the training I try to push all reasonably confident test and validation samples to lock into one sirna, which becomes its target. I can then zero this sirna for all other samples in the 277-set, which helps them to converge. Once locked, this sample also gets into ArcFace (remember, it is turned off for the test). In practice, judging by validation experiments, first 200 samples out of 277 can be easily locked this way without a single error for all cell types. They were confident anyway, so probably not a huge benefit, but helpful. </li>
</ol>
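<p>The rough sketch of the mean normalization from point 2, with the EMA bookkeeping omitted (tensor names and shapes are assumptions used for illustration only, not the author's code):</p>
<pre><code>import torch

def remove_experiment_bias(feats, exp_ids, bank, bank_exp_ids):
    """feats:        (B, 512) features of the current batch, entering ArcFace
       exp_ids:      (B,)     experiment id of each sample in the batch
       bank:         (N, 512) EMA of pre-ArcFace features for all samples
       bank_exp_ids: (N,)     experiment id of each sample in the bank"""
    overall_mean = bank.mean(dim=0)
    out = feats.clone()
    for e in exp_ids.unique():
        exp_mean = bank[bank_exp_ids == e].mean(dim=0)
        mask = exp_ids == e
        # add the overall mean, subtract the per-experiment mean
        out[mask] = feats[mask] - exp_mean + overall_mean
    return out
</code></pre>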
<h1>Test-time-augmentation (TTA) and ensemble</h1>
<p>For each sample and each site I run 16 predefined transformations, aggregating those 16 by quantile 75, and then aggregating by gmean between the sites. I then aggregated also by gmean between the folds, and again by gmean between different runs (both Yuval and I had 2 runs). Guess what aggregation function we used to merge my and Yuval’s probabilities? That’s right, gmean again. It just worked very well everywhere, despite me trying 7-8 competitors. We selected aggregation weights for all of this by CV and public LB, different per cell type.</p>
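<p>A small numpy/scipy sketch of this aggregation chain, assuming the TTA outputs are stacked into one array (the layout and names are illustrative, not the actual code):</p>
<pre><code>import numpy as np
from scipy.stats import gmean

def aggregate_tta(probs):
    """probs: (n_transforms, n_sites, n_samples, n_classes) stacked TTA outputs."""
    per_site = np.quantile(probs, 0.75, axis=0)   # quantile 75 over the 16 transforms
    return gmean(per_site, axis=0)                # gmean over the two sites

# folds, runs and team predictions are then merged with gmean again, e.g.
# final = gmean(np.stack([probs_a, probs_b]), axis=0)
</code></pre>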
<h1>Mixed integer programming (MIP)</h1>
<p>The output of the above procedure is a set of <code>18*4=72</code> matrices 277 by 277 where each row sums up to 1 (samples) but column sums range from 0.5 to 3 (sirnas). We want columns to sum up to one as well, as we know that each set contains every sirna only once. To achieve this we divided the matrices iteratively by sum of columns and then sum of rows several times (e.g. 10), to force it into desired <a href="https://en.wikipedia.org/wiki/Stochastic_matrix">stochastic matrix structure</a>. This procedure does reliably improve score. It doesn’t sound ideal, but we couldn’t do any better. </p>
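<p>The iterative rescaling is essentially a Sinkhorn-style normalization. A small numpy sketch of what is described above (an illustration, not the exact code):</p>
<pre><code>import numpy as np

def balance_rows_and_columns(mat, n_iter=10):
    """Alternately rescale a 277x277 probability matrix so that columns (siRNAs)
    and rows (samples) both sum to ~1 before running the assignment."""
    mat = mat.astype(float)
    for _ in range(n_iter):
        mat /= mat.sum(axis=0, keepdims=True)  # columns sum to 1
        mat /= mat.sum(axis=1, keepdims=True)  # rows sum to 1
    return mat
</code></pre>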
<p>And then the Hungarian algorithm, to get the solution. The name Hungarian algorithm was introduced to us by <a href="https://www.kaggle.com/christopherberner">Christopher Berner</a> in <a href="https://www.kaggle.com/christopherberner/hungarian-algorithm-to-optimize-sirna-prediction">this great kernel</a>. But we solve this matching problem with pulp package with Cplex solver, with a mixed integer programming formulation so short that I can paste it here. Cplex solves each such matrix in a couple of seconds.</p>
<p>```
from pulp import LpProblem, LpVariable, LpMaximize, LpInteger, lpSum

prob = LpProblem("Recursion", LpMaximize)
p_vars = LpVariable.dicts("match", (range(L), range(L)), 0, 1, LpInteger)

# objective
prob += lpSum(lpSum(p_vars[d][i] * mat[d, i] for i in range(L)) for d in range(L))

# constraints
for d in range(L):
    prob += lpSum(p_vars[d][i] for i in range(L)) == 1, "OneSelected_%i" % d
for i in range(L):
    prob += lpSum(p_vars[d][i] for d in range(L)) <= 1, "NoDuplication_%i" % i
```</p>
<h1>LB probing (1st place public explained)</h1>
<p>The public set is comprised of experiments HUVEC-17, HEPG2-08, RPE-08, U2OS-04, as can be easily verified and initially <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/98075">discussed here</a>.</p>
<p>The LB probing idea stems from the observation that the above MIP formulation does not solve the problem with all the available information. Specifically, we know the scores of our own submissions, so we can decrease the feasible set with two constraints per submission:</p>
<p><code>
for s in range(S):
    prob += lpSum(lpSum(p_vars[d][i] for i in range(L) if subs_sel[s, d, i])
                  for d in range(D)) / D &gt;= vals[s], "SubLow_%i" % s
    prob += lpSum(lpSum(p_vars[d][i] for i in range(L) if subs_sel[s, d, i])
                  for d in range(D)) / D &lt;= vals[s] + 0.001, "SubHigh_%i" % s
</code></p>
<p>Note that here we need to solve for 16 sets, 4431 public samples simultaneously, because the constraints use all of them together. The LB probing procedure is simple, we generate a new solution with this formulation, it is a feasible solution in the sense that this solution can be the actual one given all the constraints. After submitting it we get some non-perfect score, which makes the found solution to be infeasible now, and we start from the beginning again.</p>
<p>This is the same approach that I used in LANL earthquake prediction competition, <a href="https://towardsdatascience.com/how-to-lb-probe-on-kaggle-c0aa21458bfe">this is a post about it</a>. It contains some interesting insights about the approach, and it is the same as in this case, so take a look if you are interested in more details.</p>
<p>Besides the constraints and the feasibility question, there is of course still the objective to maximize the likelihood of the matching, same as in initial MIP formulation. It helps to look for a solution in the right region, as this combinatoric problem is huge. Without good objective function which already solves most of the problem by itself (gives high public LB scores), it is hopeless to advance anywhere with this approach. For good and for bad, the problem here is easy enough for the approach to work. When we got to un-constrained score 0.979 (about 93 errors) the probing got traction and it solved the puzzle in 18 submissions from that. Of course building also on all 200+ submissions that we did before, both probing and regular ones, but getting a high score by itself is when it really clicked. </p>
<p>Two comments on the LB probing
1. It would not be possible to get anything with the LB probing, if not for the plates leak. It just simplified the problem too much, so it worked. Otherwise it would have been searching in the infinite feasible set forever. Therefore I think that the original design of the competition was correct, - probing not possible.
2. We corrected 7 HEPG2, 8 RPE, and 45 U2OS predictions thanks to probing and added it all to the training. Note that this is very few. U2OS-04 is one of the hardest, while U2OS-05 is easy, and it is the only U2OS test experiment, so the benefit for it is small. Overall, my estimation is that it gave us private LB boost of 0.001 at the absolute maximum. But it was fun!</p>
<p>Finally, congratulations to the winners and kudos to all kagglers who participated in the kernels and forums discussions!</p>
<p>And I will drop this random but nice image from my notebook here, to remember what it was all about</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2243477%2F44cc7ebcc0b45cbd684aae0ef7fdb019%2Fhuvec05.png?generation=1569543490620681&alt=media" alt=""></p>
|
Recursion Cellular Image Classification
|
8th place solution summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Code will be shared later in this discussion thread.</p>
<p>Congrats to all on a great competition, and especially to top3. Please do share the NeurIPS content you present. </p>
<p>Shout out to Catalyst team <code>Reproducible and fast DL & RL</code>
<a href="https://catalyst-team.github.io/catalyst">Catalyst</a>
Huge thanks to Albumentations:
<a href="https://github.com/albu/albumentations">Albumentations</a>
And kudos to pytorch-toolbelt, for TTA on GPU:
<a href="https://github.com/BloodAxe/pytorch-toolbelt">PyTorch-toolbelt</a></p>
<h3>Main points are …</h3>
Preprocessing
<ul>
<li>Concat 6 channels</li>
<li>Controls from train and test added as additional classes</li>
<li>Treat each siRNA site separately for loading</li>
<li>Normalize images by experiment/plate/channel</li>
<li>Augmentations from albumentation - filp, rotate, transpose, cutout holes, shiftscale rotate</li>
<li>Apply mixup/cutout by batch (big help; see the sketch after this list)</li>
<li>5-fold CV</li>
<li>Built 256x256 and 512x512 based models</li>
</ul>
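<p>As referenced in the list above, batch-wise mixup blends each batch with a shuffled copy of itself. A minimal PyTorch sketch (alpha and names are assumptions, not the team's exact settings):</p>
<pre><code>import numpy as np
import torch

def mixup_batch(images, targets, alpha=0.4):
    """Mix a batch with a shuffled copy of itself; lambda ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, targets, targets[perm], lam

# the loss is blended with the same lambda:
# loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)
</code></pre>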
Modelling
<ul>
<li>Denesent121 was workhorse, also used EfficientNet-B1, SE-Restnet101 and Densenet169</li>
<li>Cosine LR policy and sawtooth policy worked pretty well, incl. Ralamb + Lookahead.</li>
<li>Apex mixed precision helped speed up training</li>
<li>Pseudo labels for test</li>
<li>Label smoothing of ~ 0.1</li>
<li>Continue training per individual experiment group (only worked without pseudo)</li>
</ul>
Post processing
<ul>
<li>No arcface - simply average train LOGITs on experiment level, and get cosine similarity of test LOGITs to nearest sirna</li>
<li>TTA - flip, rotate, transpose</li>
<li>Apply Plate Leak</li>
<li>Prediction balancing from Doodle @Pavel <a href="https://github.com/PavelOstyakov/predictions_balancing">https://github.com/PavelOstyakov/predictions_balancing</a></li>
</ul>
|
Recursion Cellular Image Classification
|
Low Budget LB 0.9759 kaggle-kernels-only solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Good question
I wanted to leave some room for the model to learn further features, as the validation score was about 80% at that point.
I did try to slowly reduce the number of features (to 180 and then 150), but it didn't seem to help.
After starting to use pseudo labels, I actually increased it to 320 to allow the models to [over]fit to the test data. We saw some improvements but didn't have time to test anything else...</p>
<p>I will try to summarise here my part of the 2 months battle with Kaggle kernels in this comp.
Our team's solution was basically entirely run on kaggle GPU's (mostly prior to the new quotas).
The final kernel is here: <a href="https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759">https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759</a></p>
<p><strong>Quick outline in chronological order:</strong>
1. CV split by experiments, about ⅓ of the data per fold, 3 folds.
2. Trained a quick and dirty classifier (left it training while analysing the data) with all 1108 classes, 6 channels and no augmentation. OK: <strong>LB 0.10-0.25</strong> and established a reasonable learning rate schedule which was used in the rest of the comp.
3. Normalised the data by experiment mean and std, added random crops: <strong>LB 0.32</strong>
Added more aug: no improvement.
4. Added ArcNet head with 512 features and default params, using pre-trained model from above, eventually <strong>LB 0.35</strong>
5. Started training with each cell type as a different class 1108 * 4 = 4432 classes: <strong>LB 0.44</strong>
6. Cell type balancing in training data: <strong>LB 0.45</strong>
7. Added the 31 controls to the training data (but not to validation) 1139 * 4 = 4556 classes: LB 0.46
8. Added a custom cosine distance loss to the ArcNet features (same class should be close to each other, different classes far...) <strong>LB 0.47</strong>
9. PCA showed that 100 dimensions explained all the variance in features, so I reduced the ArcNet features to 200 (double, just in case ;) <strong>LB 0.52</strong>
10. Applied plates leak, loads of normalisation and assigned single treatment per plate LB 0.82 !!
More normalisation, and lots of multi-fold ensembles later: <strong>LB 0.88</strong> (Thanks to <a href="/aharless">@aharless</a> and <a href="/giuliasavorgnan">@giuliasavorgnan</a>)
11. Used the former predictions as pseudo labels and trained further with train + test data, eventually <strong>LB 0.93</strong> (this process was repeated a few times, always training the latest models with pseudo labels from the best possible submission/ensemble in an amazing team effort! Thanks <a href="/aharless">@aharless</a> <a href="/giuliasavorgnan">@giuliasavorgnan</a> <a href="/stillsut">@stillsut</a> <a href="/lastlegion">@lastlegion</a>)
12. Failed badly trying to blend the new best models (single models had better LB than the blends) probably due to models overfitting to the pseudo labels?
Reduce the amount of pseudo labels in the training data (from 50/50% to 33/66% now) and improved feature distance metric: <strong>LB 0.943</strong>
13. Out of gas, $$ and quota, but I have never learned so much in a comp before! Super happy with the result</p>
<p>The code for all that is in the <a href="https://www.kaggle.com/hmendonca/fold1h4r3-arcenetb4-2-256px-rcic-lb-0-9759">kernel linked above</a> (including some visualisations).</p>
<p><strong>In the beginning</strong>
I basically started looking at the data and decided to just try a normal classifier without any fanciness. I ported an EfficientNetB4 kernel from another comp and watched the batch loss for a few different learning rates.
The loss during cosine annealing with warm restarts is very informative, almost as much as the LR finder scheduler used in fast.ai, so after a couple of runs you can work out what is too high and what is too low, and also how much you need to decrease it from epoch to epoch.
I was training 3 folds and later 3 architectures (EfficientNet B4, B5 and seResNet-xt101). However, the journey described above is actually mainly from the initial B4 I used, and that also ended up being the best model and submission. Its weights were carved slowly by training several different heads one after another, with several branches used to experiment with different ideas.
Unfortunately, we didn't have the budget to do clean experiments and try each idea with their optimal hyper-params and initial conditions. Therefore, some changes used here might not have been optimal but still ended up improving the score, so I had to keep them in the pipeline for the lack of a better reference.</p>
<p><strong>Normalisation and Post-processing:</strong>
The experiment setup gave away a lot of information, and thanks to the public kernels that was made clear to everyone. It played a huge role in improving our submission score and creating the pseudo labels. Cheers <a href="/zaharch">@zaharch</a> and <a href="/christopherberner">@christopherberner</a> !
Looking at the raw model predictions we could see that the initial models were obviously preferring some treatments over others. The predictions weren't balanced across the different classes, but we knew they should be, so a basic idea was to normalise across all treatments. Each treatment should appear exactly the same number of times in each experiment. Normalising the prediction probabilities across the siRNAs already helped increase the validation and LB scores.
Andy and Giulia went further and created a normalisation loop, iteratively normalising the probabilities vertically and horizontally until convergence.
This process showed great improvements in both validation and LB scores and was crucial to our solution. </p>
<p><strong>Platform</strong>
As I mentioned before, the solution was run on GPU only.
TPU's would have been amazing but I had a look initially and wasn't super excited about its pytorch support. I have also trained a ResNet101 model using Colab and the provided notebook in the comp's intro, but I found it a bit too annoying to work with all the deprecated TF code (hopefully it will get better soon as they're fixing the new TF2.0), so I gave up after a few days and went back to PyTorch. Maybe that was a mistake?
We had a few GCP credits but $400 goes really quick, and we only made a few experiments in the beginning. I also used some more (including a bit of my own pocket) to train another few models improving the pseudo labels a bit. However, our best model was trained exclusively in kaggle in about 13 runs in total (100hrs approximately, excluding all abandoned branches)</p>
<p>I tried Colab again after the kaggle quotas were introduced, but I found their GPU notebook quite flaky, as it keeps disconnecting my runs during training :(
Maybe I should buy a GPU lol</p>
<p>Many thanks to all the organisers and the amazing community of kagglers always sharing some much!</p>
<p><a href="/bioin4">@bioin4</a> Many top solutions used ArcNet, which is considered metric learning (including us, please see the topic above). </p>
<p>I believe it doesn't work very well from scratch if you have a large ArcNet margin ('m' param), as it probably needs good features to start with, and ImageNet weights weren't good enough here. But it worked fine for me with margin 0 and ImageNet weights.</p>
<p>I also found that hard mining didn't help the validation score much, as it tends to quickly overfit to the training data. Hard mining is used in most metric learning pipelines if I'm not wrong.</p>
<p>This the PCA explained variance of the ArcNet features as I described above:
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F451025%2F3a5e5eb0aa2f983a663cc5d63e2fee3d%2Fvar_components.png?generation=1569596328000292&alt=media" alt="ArcNet features PCA explained variance"></p>
<p><code>
pca.fit(arcnet_features)
plt.plot(pca.explained_variance_.cumsum())
</code></p>
<p><a href="/zaharch">@zaharch</a> Cheers and congrats for your team's gold! ;) I also want to try using XLA next!</p>
<p>I found normal jupyter notebooks quite reliable actually, if it disconnects it should reconnect fine and sync the outputs, but I don't really want to leave the browser open the whole time...
Now Colab and kaggle on interactive/edit mode can be very frustrating!!
I generally just run it for like 30min to check if things are going fine, then either commit (in kaggle) or generate the py script and run it with 'ipython -i' on top of 'screen' (great tool!)
Using 'ipython -i' allows you to still go in and debug it in case it crashes...</p>
|
Recursion Cellular Image Classification
|
Our solution: 9th place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="/backaggle">@backaggle</a>, the [Name]Face block outputs probabilities that then go to the cross-entropy loss. So you can use either the embeddings or the probabilities from the [Name]Face block.</p>
<p>Dear all, thank you for excellent spirit of competition and astonishing results! Without such favorable pressure we wouldn’t be able to achieve such scores.</p>
<p>I will split description into two main parts: pipeline and prediction with post-processing.</p>
<p><strong>Pipeline:</strong>
- <strong>6-channel images.</strong>
- <strong>Super Resolution.</strong> During exploratory data analysis we realized that the size of individual cells is rather small and convolutional layers might have a hard time learning all the patterns. So we used a simple bicubic super-resolution technique, which really boosted our scores. We checked resolutions up to 960 * 960. We also tried GANs (ESRGAN in particular) for this task, but their performance was worse compared to vanilla bicubic interpolation.
- <strong>3Fold by experiments.</strong>
- <strong>Add all controls to train.</strong> We tried to add controls only from train experiments, but it didn’t work.
- <strong>Mixed Precision:</strong> Due to the large size of images we decided to use only Mixed Precision learning because of inevitable graphics card’s memory shortage.
- <strong>Gradual layer unfreezing:</strong> We found that gradual layer unfreezing was a crucial condition for our models not to diverge. During the first 5 epochs we unfreeze another 20% of the layers, starting from the head, each epoch. Presumably this was needed because of mixed precision - some sort of gradient explosion or something similar.
- <strong>Metric learning - CosFace loss:</strong> We took my favorite metric learning loss. We also tried ArcLoss and AdaCos, but they failed to converge or achieved worse results in mixed precision mode.
- <strong>Cyclic Linear Lr Annealing.</strong>
- <strong>Cyclic Linear Scale annealing:</strong> This idea is inspired by AdaCos. <a href="https://arxiv.org/abs/1905.00292">https://arxiv.org/abs/1905.00292</a> where authors decrease scale as model achieve high scores. S_max = 64, S_min = 24.
- <strong>Classes for distinct cell types are distinct as well:</strong> So we predict a vector of 1108 * 4 + control classes. This improved convergence drastically.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F362676%2F318492d44eeea2d6e21b5ec53a34ffac%2FScreenshot%20from%202019-09-24%2020-27-00.png?generation=1569525109617780&alt=media" alt=""></p>
<ul>
<li><p><strong>Unsupervised Domain Adaptation by Backpropagation:</strong> <a href="https://arxiv.org/abs/1409.7495">https://arxiv.org/abs/1409.7495</a>. We took plate and cell type as domain label and schedule gradient reversal layer coefficient as lr and Scale for CosFace.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F362676%2F81d59df47d3bf1faf271f11d74245223%2FScreenshot%20from%202019-09-24%2020-55-48.png?generation=1569525156026046&alt=media" alt=""></p></li>
<li><p><strong>Backbones:</strong> seresnet50 (img_size up to 960), senet154 (img_size up to 672), xception (img_size up to 860), inceptionresnetv2 (img_size up to 512), densenet161 (img_size up to 840), densenet201 (img_size up to 720). </p></li>
<li>SGD optimizer with momentum. We tried Adam, Lookahead and Ranger but they didn’t improve validation score.</li>
<li>Gradient normalization.</li>
<li>Batch size = 16 with accumulation up to 32.</li>
<li>Focal loss: gamma = 32.</li>
</ul>
<p><strong>Prediction:</strong>
- <strong>Prediction balancing.</strong> This technique can be used when classes are equally balanced. Excellent implementation: <a href="https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270">https://www.kaggle.com/c/quickdraw-doodle-recognition/discussion/73803#latest-438270</a>.
- <strong>Progressive prediction.</strong> The idea is simple: within every experiment each class can be present only once. So if we predict some class with high probability, there is little chance that this class should also be predicted later with a lower probability. We take the 50% most probable classes, remove them from consideration, predict the next samples, and so on (a greedy sketch of the idea follows below).
- <strong>Leak.</strong></p>
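<p>A fully greedy variant of the progressive prediction idea, as a sketch (the write-up takes the top 50% in chunks rather than one sample at a time; this is illustrative only):</p>
<pre><code>import numpy as np

def progressive_predict(probs):
    """probs: (n_samples, n_classes) float matrix for one experiment,
    with n_samples no larger than n_classes.
    Repeatedly fix the most confident (sample, class) pair and remove both."""
    probs = probs.astype(float)
    pred = np.full(probs.shape[0], -1)
    for _ in range(probs.shape[0]):
        s, c = np.unravel_index(np.argmax(probs), probs.shape)
        pred[s] = c
        probs[s, :] = -np.inf   # this sample is assigned
        probs[:, c] = -np.inf   # this class cannot be used again
    return pred
</code></pre>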
<p><strong>What didn’t work:</strong>
- Mix-up
- ArcFace
- AdaCos
- Deformable ConvNet - <a href="https://arxiv.org/pdf/1811.11168.pdf">https://arxiv.org/pdf/1811.11168.pdf</a>
- Deep Supervision
- Weight Standardisation
- Concatenation of two sites together to make 12-channel image</p>
|
Recursion Cellular Image Classification
|
Quick and dumb 87'th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I think it's probably the most effortless and dumbest solution posted. I lazily started doing something 5 days before the deadline mostly because I got triggered by the word "Arcface" and wanted to try it.</p>
<p><strong>Data Loading</strong>: For each image in my training dataset I randomly chose between site 1 and site 2, created 6-channel image from .png's and normalized with np.mean() and np.std()
<strong>Model</strong>: Single 512x512 resnet50 from torchvision, adapted to work with 6 channels as in this <a href="https://www.kaggle.com/yhn112/resnet18-baseline-pytorch-ignite">kernel</a>
<strong>Training</strong>: 50 epochs, same starting parameters as in the kernel above. it took maybe 8-9 hours with single 1080Ti.
<strong>Augmentations</strong>: I used D4 group augmentations from <a href="https://github.com/albu/albumentations">albumentations</a> library (HorizotalFlip, VerticalFlip, Transpose, RandomRotate90)
<strong>Validation</strong>: stratified split by sirna's with 0.05 left for validation (very smart)
<strong>Prediction</strong>: d4_tta from <a href="https://github.com/BloodAxe/pytorch-toolbelt">pytorch toolbelt</a> for each site, then average the sites and apply the leak from <a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/">this kernel</a> (~40 min test prediction time on a single 1080Ti).</p>
<p>And that's it. I obtained my LB solution on the first day (it was supposed to be just a baseline) and for the remaining 4 days tried to do something smart with ArcFace, but it simply didn't work (even after applying batch norm at the beginning I could not get >10% training accuracy).</p>
<p>Without leak and TTA, my model scores 0.52. Now I wonder if I could achieve higher results with the same straightforward approach. </p>
<p>I experienced the problem when my model accuracy was ~0 during model.eval(), but it happened simply because I forgot to normalize input</p>
|
Recursion Cellular Image Classification
|
Second Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was 5 different backbones trained on all the data. </p>
<p>This is Kaggle, we all inspire each other to get better. To be honest, if we were to work on the problem by ourselves, we probably will stop at 0.6 and be satisfied. Team OrdenaLenina and others motivated us to move higher early on. </p>
<p>Yes, we do plan to open source the code. </p>
<p>Thanks to Recursion and Kaggle for sponsoring this very interesting competition and congratulations to all that went through the journey. We learned a lot and enjoyed the entire process. Looking forward to learning from other teams’ solutions.</p>
<h3>Input and preprocessing:</h3>
<ul>
<li>6 channel input, </li>
<li>per image standardization (minus the mean and divide by the standard deviation), </li>
<li>random crop 384x384,</li>
<li>random flip, </li>
<li>random rotation multiple of 90 degrees. </li>
</ul>
<h3>Modeling:</h3>
<p>We modified ResNet to limit the receptive field size of the output, as we suspect it is the individual cells and their immediate neighbors that contain the most discriminating information. Here are the list of things we modified from the vanilla ResNet:
- Fewer blocks. ResNet typically has 4 chunks of blocks, some of our models only has 2 chunks.
- More 1x1 conv blocks
- Average pooling from lower blocks, concatenated with average pooling from higher blocks
- Remove the immediate max-pooling after the first convolution
- Replace the 1x1 convolution with 3x3 convolution in the shortcut layer. This increased the smoothness of the test accuracy during training, but only increased the final testing accuracy slightly. </p>
<p>We used all the negative and positive controls (including those in the test plates) as part of the training set. The output is 1139x4 logits.</p>
<h3>Loss:</h3>
<p>We used the [ArcFace loss posted by bestfitting] (<a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109</a>). We made the <code>gamma</code> parameter adjustable. We were struggling to get ArcFace to converge initially until we tuned the gamma parameter. Our settings are <code>gamma = 0.2, m = 0.4, s = 32</code></p>
<h3>Optimizer:</h3>
<p>Adam optimizer with the following schedules:
<code>
0-20:    linear warm up to 3e-3
20-60:   3e-3
60-80:   9e-4
80-90:   3e-4
90-100:  3e-5
100-120: 3e-6
</code></p>
<h3>Post-processing:</h3>
<ul>
<li>Center the embeddings by plate.</li>
<li>Average the embeddings from both sites to obtain per-well embedding.</li>
<li>For each cell line, obtain train center embeddings by averaging together the siRNA embeddings.</li>
<li>Compute cosine-similarity of each well’s embedding to train center embeddings. </li>
<li>Use LSA (linear sum assignment) to compute the label assignment based on the 277-per-plate leak (see the sketch after this list).</li>
</ul>
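<p>A rough numpy sketch of this post-processing, as referenced in the list (shapes, centering details and names are simplifying assumptions, not the exact implementation):</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_plate(test_emb, plate_mean, class_centers):
    """test_emb:      (277, d) per-well embeddings of one test plate (sites averaged)
       plate_mean:    (d,)     mean embedding of that plate, used for centering
       class_centers: (277, d) train center embeddings of the 277 siRNAs allowed
                      on this plate by the leak (assumed already centered)"""
    emb = test_emb - plate_mean
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cen = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    sim = emb @ cen.T                          # cosine similarity, (277, 277)
    rows, cols = linear_sum_assignment(-sim)   # LSA maximises total similarity
    return cols                                # index into the 277 allowed siRNAs
</code></pre>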
<h3>Pseudo-labeling:</h3>
<p>An ensemble of 5 models achieved a public LB score of 0.993 and a private LB score of 0.9957 without pseudo-labeling (single model 0.990 and 0.9947). We then collected all our public LB 0.990+ predictions and identified 327 examples that were not consistent. All the test predictions not in this set of 327 were then used as pseudo-labels. An ensemble of models trained on this pseudo-label set achieved a public LB of 0.997, and a private LB of 0.9967. We also tried iteratively adding more pseudo-labels to the training set (500, 700, 900 per experiment), but it did not improve our public LB score.</p>
|
Recursion Cellular Image Classification
|
[Boring] 131 place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi every one and congratulations to all the participants!
Thanks organizer for a great opportunity to learn!</p>
<p>We rolled into this competition very late (6 days before the competition end) and tried to squeeze something from the pipeline we used in APTOS 2019 Blindness Detection.
<a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108007">https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108007</a></p>
<p>Actually, this performed not as bad as we expected!</p>
<p><strong>Models</strong>
We had time to train only two models, they were:
- 1-fold ResNet34, 512x512
- 3-fold SEResNeXt50, 320x320</p>
<p>A single ResNet34 was better than the ensemble of SEResNeXt50s, probably due to the higher resolution.
As the final solution, we ensembled all 4 models.</p>
<p><strong>Preprocessing</strong>
We used 6 channel images, normalized by channel with mean and std of non-black parts.
To make them compatible with ImageNet-pretrained models, we just added a single convolution layer mapping 6 channels to 3 (a sketch follows below).</p>
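<p>A minimal sketch of such an adapter in PyTorch (the backbone choice and kernel size here are assumptions for illustration):</p>
<pre><code>import torch.nn as nn
from torchvision import models

class SixChannelNet(nn.Module):
    """Project 6-channel input down to 3 channels so an ImageNet-pretrained
    backbone can be reused unchanged."""
    def __init__(self, num_classes=1108):
        super().__init__()
        self.adapter = nn.Conv2d(6, 3, kernel_size=1)   # learned 6 -> 3 mapping
        self.backbone = models.resnet34(pretrained=True)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.adapter(x))
</code></pre>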
<p><strong>Augmentations</strong>
We used standard augmentation, not that hard. All from Albumentations library:
CLAHE, HorizontalFlip, VerticalFlip, RandomRotate90, ShiftScaleRotate, RGBShift, RandomBrightnessContrast, Blur, Sharpen, RandomGamma.</p>
<p><strong>Training</strong>
Just usual finetuning from ImageNet.
SGD+Nesterov+Cosine Annealing LR for 70 epochs.</p>
<p>For folds, we used just a random split with no stratification.
In hindsight, we should have split by experiment...</p>
<p><strong>Postprocessing</strong>
We used only the leak with 277 siRNA per plate, thanks <a href="/zaharch">@zaharch</a> for his awesome kernel!
<a href="https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/">https://www.kaggle.com/zaharch/keras-model-boosted-with-plates-leak/</a></p>
<p><strong>Hardware</strong>
This time we used server with a single V100 from FastGPU.net. Thanks them!</p>
<p><strong>Conclusion</strong>
Overall, I'm glad that we tried. Probably, with more time and dedication, we could have achieved better results. However, it was cool to see that a pipeline from a different competition can score this high with almost no understanding of the subject area and very little understanding of the competition itself.</p>
<p>Happy kaggling!</p>
|
Recursion Cellular Image Classification
|
[private 0.977]: solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Recursion Cellular Image Classification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our first Kaggle competition, and we will keep going:
model: densenet121
classification: 1108 classes
input: resolution 512*512, rotate, flip, aug (lightness, contrast), normalize...
ensemble: avg</p>
<p>training method: Adam optimizer, decreasing learning rate, iteratively train with the test set,
plate leak, 31 controls, rearrange</p>
<p>loss: CE, center loss, arc</p>
|
Restaurant Revenue Prediction
|
Winning Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Restaurant Revenue Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all, greetings from Arsenal. I am sure you guys did great work on this problem. I wouldn't be surprised that my 2 submissions ended in private top 100, but I was quite surprised that mine was the 1st. I would say for this problem, top 5% by skill, top one by luck.</p>
<p>I am still waiting for the final validation. I need to write a document anyway to claim the prize and I plan to share it. So be patient. (I will not share my code for this one since I am not good at version control right now.)</p>
<p>What I can tell you for now is that the strategy I selected my 2 submissions were simple. One is the ensemble of several models with the highest public LB score. Another is a single (winning) model that the pre-process approach I thought was logical and reasonable yet not that complicated. The underlying model is just gbm. The public LB score for this one is ~1725, and the private LB score is ~1727. It looks consistent to me and could have certain degree of generality.</p>
<p>So how useful is my solution? I mentioned in another thread that age shouldn't be included in this problem if TFI's only intention was to predict future revenues for new restaurants. Age is influential only because long-standing restaurants tend to have higher revenues. For new restaurants, age is just useless. I encountered very similar problems in my work, and I excluded age at the first stage. Back to TFI, I would say the $30,000 is totally worth it for them. I've seen million-dollar data warehouse projects requiring only a few analysts working on them for one or two years. TFI's limited prize attracted thousands of brilliant people working on this particular problem for over a month, and it brings potential hires for them.</p>
<p>Regarding my solution, if it's valuable, I would say it accounts for less than 10% of the total value. But do I deserve the prize? I say yes. But you tell me. Anyway, I will update this thread with more details of my solution once my winning-solution document is ready.</p>
<p>[quote=Yuanjun Gao;79912]</p>
<p>Thanks Arsenal for the post! I am curious about the feature engineering part. Square-root transforming revenue is definitely helpful, but it seems to me that if you use gbm, then pre-processing the obfuscated P variables with a square root transformation is not necessary, right? (since gbm is invariant to monotone transformations of covariates)</p>
<p>[/quote]</p>
<p>It matters for the imputation.</p>
<p>Thanks a lot!</p>
<p>Here are the important steps of my solution:</p>
<h3><strong>1) Feature Engineering</strong></h3>
<p><strong>i) A square root transformation was applied to the obfuscated P variables with maximum value >= 10, as well as to the target variable “revenue”, to bring them onto the same scale.</strong></p>
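<p>A rough pandas sketch of this step (the author worked in R; the file name here is an assumption): transform the P variables whose maximum is at least 10, plus the target.</p>
<pre><code>import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name
p_cols = [c for c in train.columns if c.startswith("P")]

# square-root transform only the P variables on a large scale (max >= 10)
for col in p_cols:
    if train[col].max() >= 10:
        train[col] = np.sqrt(train[col])

# the target is transformed too; predictions are squared back at the end
train["revenue"] = np.sqrt(train["revenue"])
</code></pre>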
<p><strong>ii) Uncommon city levels were randomly reassigned to the common city levels in both the training and test sets, which, I believe, diversified the geo-location information contained in the city variable and in some of the obfuscated P variables.</strong></p>
<p>Note: I discovered this to be helpful <strong>by chance</strong>. My intention was to assign uncommon city levels to their nearest common city levels, but the code read the city levels differently on my laptop and on the server, and it performed significantly better on the server. I am not 100% sure, but my explanation is given above.</p>
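<p>One way to implement ii), sketched in pandas: the "common" cutoff and the random seed are guesses, and train/test are assumed to be already-loaded DataFrames with a City column.</p>
<pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# levels seen often enough in the training set count as "common" (cutoff assumed)
counts = train["City"].value_counts()
common = counts[counts >= 10].index.to_numpy()

def reassign_rare_cities(series: pd.Series) -> pd.Series:
    s = series.copy()
    rare = ~s.isin(common)
    # every rare level gets a randomly chosen common level
    s.loc[rare] = rng.choice(common, size=int(rare.sum()))
    return s

train["City"] = reassign_rare_cities(train["City"])
test["City"] = reassign_rare_cities(test["City"])
</code></pre>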
<p><strong>iii) A missing-value indicator over multiple P variables, i.e. P14 to P18, P24 to P27, and P30 to P37, was created to help differentiate synthetic from real test data.</strong></p>
<p>Note: These variables were all zero on 88 out of 137 rows in the training set. The proportion was much lower in the test set, i.e. rows on which these variables were not all zero at the same time had a higher probability of being synthetic.</p>
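<p>One reading of iii), as a single flag marking the rows where that whole block of P variables is zero at once (a per-variable set of indicators would be the other natural reading); train/test are assumed DataFrames:</p>
<pre><code># P14-P18, P24-P27 and P30-P37, as listed in the write-up
zero_block = [f"P{i}" for i in
              list(range(14, 19)) + list(range(24, 28)) + list(range(30, 38))]

for df in (train, test):
    df["p_block_missing"] = (df[zero_block] == 0).all(axis=1).astype(int)
</code></pre>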
<p>iv) Type “MB”, which did not occur in the training set, was changed to Type “DT” in the test set.</p>
<p>v) Time/age-related information was also extracted, including opening day, week and month, and the number of years and days since opening.</p>
<p>vi) Zeroes were treated as missing values, and mice imputation was applied to the training and test sets separately.</p>
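<p>Step vi) in spirit: zeros in the P variables become NaN and are then imputed. The author used R's mice package; scikit-learn's IterativeImputer is only a rough stand-in here, applied to train and test separately as described.</p>
<pre><code>import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

for df in (train, test):  # assumed DataFrames
    p_cols = [c for c in df.columns if c.startswith("P")]
    df[p_cols] = df[p_cols].replace(0, np.nan)
    df[p_cols] = IterativeImputer(random_state=0).fit_transform(df[p_cols])
</code></pre>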
<h3><strong>2) Modelling and Selection Criteria</strong></h3>
<p>i) Gradient boosting models were trained on the feature-engineered training set. I used the R caret package with 10-fold CV repeated 10 times (the default setting) to train the gbm models. The parameter grid was kept simple to reduce over-fitting:</p>
<p>gbmGrid <- expand.grid(interaction.depth = c(6, 7, 8, 9),<br> n.trees = (3:7) * 10,<br> shrinkage = 0.05)</p>
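<p>For readers more at home in Python, a roughly equivalent search with scikit-learn; this mirrors rather than reproduces the caret setup above, and gbm's interaction.depth maps only loosely onto max_depth.</p>
<pre><code>from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold

param_grid = {
    "max_depth": [6, 7, 8, 9],             # ~ interaction.depth
    "n_estimators": [30, 40, 50, 60, 70],  # ~ n.trees
    "learning_rate": [0.05],               # ~ shrinkage
}

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid,
    cv=RepeatedKFold(n_splits=10, n_repeats=10, random_state=0),
    scoring="neg_root_mean_squared_error",
    n_jobs=-1,
)
# search.fit(X_train, y_train_sqrt)  # features/target from the steps above
</code></pre>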
<p><strong>ii) Two statistics were used to determine the model(s) to choose: training error, and training error with outliers removed. The error limits were 3.7 * 10^12 and 1.4 * 10^12, respectively.</strong></p>
<p>Note: I tested this strategy <strong>post-deadline</strong>, and it was very effective at choosing the "right" models. Around one in 15-20 models trained in step i) satisfied the two constraints: I trained about 200 models (using different seeds) and 11 of them had both training error and training error with outliers removed below the limits I set. I randomly averaged 4 of them to make the final model more robust. These final models scored ~1718 to ~1735 privately and ~1675 to ~1707 publicly. (I guess taking the average was more effective on the public data.)</p>
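<p>A sketch of the selection rule in ii): the error metric (mean squared error on the untransformed revenue) and the outlier definition are assumptions, the two limits are the ones quoted above, and candidates / y_train / outlier_mask / X_test are hypothetical names standing in for the author's fitted models and data.</p>
<pre><code>import numpy as np

LIMIT_ALL, LIMIT_NO_OUTLIERS = 3.7e12, 1.4e12  # limits quoted in the write-up

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def passes(y_true, y_pred, outlier_mask):
    return (mse(y_true, y_pred) < LIMIT_ALL and
            mse(y_true[~outlier_mask], y_pred[~outlier_mask]) < LIMIT_NO_OUTLIERS)

# candidates: list of (fitted_model, train_predictions) from differently seeded runs
survivors = [m for m, preds in candidates if passes(y_train, preds, outlier_mask)]

# average a handful of the surviving models for robustness
rng = np.random.default_rng(0)
picked = rng.choice(len(survivors), size=4, replace=False)
final_pred = np.mean([survivors[i].predict(X_test) for i in picked], axis=0)
</code></pre>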
<h3><strong>3) Possible Improvements</strong></h3>
<p><strong>I read on the forum that dealing with outliers properly could improve scores, although I did not try it out myself. In that case, my strategy in 2) ii) might need modification.</strong></p>
<h3><strong>4) Conclusion</strong></h3>
<p><strong>I got lucky in this competition; my true intention was to stabilize my performance in the top 5%. As you guys can see, I am trying hard to stay in the top 5% in two other competitions nearing their end. The techniques or methods I used here might not be a good strategy for other problems/competitions, and vice versa. I am learning a lot from the Kaggle forum and from participating in Kaggle competitions, gathering experience along the way. Cheers!</strong></p>
<p>PS: I am also from a mathematical background, and I personally am very impressed by team BAYZ's work. It seemed to me that hundreds of submissions would be needed to do the job, but they did it in 110; it's just amazing, and it must have involved really smart strategies.</p>
<p>PS2: I wrote about my story from this competition and submitted it to Kaggle's winners blog. This post already covers all the details of my solution, so reading that blog will mainly help you get to know a little bit about me. I will post the link here once it's ready. You guys are welcome to take a look.</p>
<p>[quote=Agamemnon;79577]</p>
<p>Thanks Arsenal for the clarification. Just out of curiosity, is not code sharing a requirement for claiming a prize? Or do you just have to share it with the competition organizer?</p>
<p>Cheers,</p>
<p>[/quote]</p>
<p>I believe I must share my code with the competition organizer to claim the prize, but I am not obligated to share it with the public.</p>
<p>[quote=gardener;79493]</p>
<p>I'm curious about "random assignments of uncommon city levels to the common city levels in both training and test set" - the permutation loses information and is similar to what random forests use to assess variable importance, but why not just drop the city levels altogether?</p>
<p>[/quote]</p>
<p>Thank you for raising this question.</p>
<p>First of all, the random reassignment of uncommon city levels improved all four scores significantly: training error, training error with outliers removed, public LB score and private LB score.</p>
<p>Second, I discovered this by chance. What I can do is try to explain the phenomenon I noticed. My explanation is that the random reassignment diversified the geo-location information contained in the city variable and in some of the obfuscated P variables. We know that some of the obfuscated P variables contain geo-location information. If we used the city-level information directly or 'correctly', you can imagine that these pieces of information would be highly correlated; you would be better off dropping the city variable.</p>
<p>Third and last, I suggest we not put too much weight on one data set and one problem. I believe what you learn most from a competition, though not necessarily this particular one, is ideas that <strong>might</strong> be useful when dealing with new problems. You add some new weapons to your arsenal.</p>
<p>PS: I will also address the code-sharing concerns here. I personally value ideas more than implementation. But I will not make that an excuse: the most important reason I am not sharing my code is that I am not good at version control at this time. I would love to share my code if I finish in the top 3 in future competitions.</p>
<p>[quote=Anna Montoya;78987]</p>
<p>We just published a blog from Wei (aka Arsenal) on his background and winning solution. You can find it on our blog, no free hunch, <a href="http://blog.kaggle.com/2015/05/18/tab-food-winners-interview-1st-place-wei-yang-aka-arsenal/" target="_blank">here</a>.</p>
<p>Thanks for the great write-up, Wei!</p>
<p>[/quote]</p>
<p>Thanks Anna, for putting this together!</p>
|