Human Protein Atlas - Single Cell Classification
43rd: Positive-Unlabeled Learning based Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for the comment. It took around 1.2 hours.<br> Machine: P100.<br> The total was 2 days…</p> <p>Congratulations to the winners, and thanks to all the competitors and the great hosts!🎉<br> We are members of the Harada Laboratory at the University of Tokyo.</p> <p>This competition was very interesting in that classification had to be done with weakly supervised labels.<br> And I thought that even a solution without a high score is worth making public.</p> <p><strong>Digging and saying "here was nothing" is also important.</strong></p> <p>I'll share our Positive-Unlabeled learning based solution.</p> <hr> <h1>Overview</h1> <ul> <li>cut images into cells, treated the labels as Negative-Unlabeled, then optimized AUC</li> <li>optimizing AUC through a sigmoid was hard, and the score didn't improve enough</li> <li>treating an image's cells as i.i.d. samples can make the loss insufficiently tight</li> </ul> <p>We are members of the same laboratory. We participated in this competition to search for our master's course themes. m0ka took care of the image-level classifier solution and I explored the cell-level classifier solution. Considering public leaderboard scores, we decided to use the image-level classifier as our final model.</p> <h1>Pipeline</h1> <ul> <li>ensemble of 10 ResNet34 models</li> <li>AUC optimization under the Negative-Unlabeled label setting</li> </ul> <p><img src="https://user-images.githubusercontent.com/45588624/120889912-7350a580-c63a-11eb-92a1-14f3d193b7ed.png" alt="image"></p> <h1>Purpose</h1> <ul> <li>train models without label noise</li> <li>optimize so that PR-AUC improves</li> <li>validate models even under label noise</li> </ul> <p>We had image-level labels, but we didn't have cell-level labels. If we classify cell images in this situation, we suffer from falsely added labels. And even after training, we can't validate our models with these noisy labels.</p> <p>Even in this situation, we can train our models without such bias if we use statistical machine learning techniques. Below I will explain our solution.</p> <h1>Assumption and Setting</h1> <ul> <li>treat the labels as Negative-Unlabeled.</li> </ul> <p>Let's consider cell images with class labels 0-17.</p> <p>Because these labels were added to the original whole image, there are many false-positively added labels. We can then assume that</p> <ol> <li>Added labels can actually be negative.</li> <li>Not-added labels are always negative.</li> </ol> <p>From this point of view, we can see this competition's setting as a Negative-Unlabeled setting: a negative label is always truly negative, while a positive label is unlabeled and may actually be negative.</p> <p><strong>[Edit]</strong> After this competition, I found a paper, <a href="https://arxiv.org/abs/1905.12226" target="_blank">[Peng and Zhang, 2019]</a>, which treats this setting😅. However, this paper's method requires the class prior P(y_{k}) for unbiased estimation, which can't be accessed. To resolve this issue and optimize AUC rather than the Bayes risk, we introduce an AUC-based solution. 
</p> <h1>Optimization of AUC</h1> <ul> <li>optimize AUC so that PR-AUC improves</li> <li>we don't have to know the class prior</li> </ul> <p>In the normal setting, mAP is improved by optimizing losses like BCE loss. However, because the labels are noisy, such a loss can be hard to optimize.</p> <p>Instead, I decided to optimize AUC (i.e., ROC-AUC).</p> <p>Denote sample x's class-i score output as f(x). P is the positive samples' distribution and N is the negative one. The AUC of class i is calculated as</p> <p><img src="https://user-images.githubusercontent.com/45588624/120889964-db06f080-c63a-11eb-8975-b3882c50be9d.png" alt="image"></p> <p>I won't write out the precise theory here, but in the PU learning setting it is known that</p> <ul> <li>using a symmetric loss, i.e. one satisfying l(x) + l(-x) = const., can remove the bias caused by falsely added labels.</li> </ul> <p>Combining AUC optimization with a symmetric loss leads to an objective function for which we don't have to use the class prior (a small code sketch is given at the end of this post). We can write it as</p> <p><img src="https://user-images.githubusercontent.com/45588624/120889998-0f7aac80-c63b-11eb-8a6a-5ce819864a4b.png" alt="image"></p> <h1>Good points &amp; Bad points</h1> <ul> <li>Good points<ul> <li>we can use all the images!<ul> <li>images with only one label are limited in number.</li></ul></li> <li>the validation score correlated relatively well with the LB.<ul> <li>this wasn't true when I used BCE loss, and switching improved the score by +0.06pt, which was large in this competition.</li></ul></li></ul></li> <li>Bad points<ul> <li>optimizing through the sigmoid was hard.</li> <li>taking an image's bag of cells apart can make the loss loose.</li></ul></li> </ul> <p>After training with this method, I couldn't improve our model much. I tried hard to solve the issue caused by the sigmoid, which is difficult to optimize, but ended up with essentially the first three weeks' solution.</p> <p>The 8th and 9th place solutions also used image-based classifiers, but they treated image-level labels as noise-free. This can give a tighter loss compared with this solution.</p> <h1>Others</h1> <p>If we could have, we wanted to make some contribution to the HPA community like bestfitting did in the last competition. We couldn't, and he did it again, congrats!🎉</p> <p>We can't say our solution worked well enough, but I hope you enjoyed it. Thanks!</p> <h2>Appendix</h2> <h3>Solution's background</h3> <p>A positive label is described as y=1, a negative one as y=0.<br> Unlike the PU setting, we can't assume P(y=1) = P(y=1|s=1), so I changed some assumptions.</p> <p><img src="https://user-images.githubusercontent.com/45588624/117967161-b41d0d80-b35f-11eb-9eee-21f4bbaee9c7.png" alt="image"></p>
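<p>To make the objective concrete, here is a minimal PyTorch sketch of a pairwise AUC surrogate using the sigmoid loss, which is symmetric in the sense above (l(m) + l(-m) = 1 for l(m) = sigmoid(-m)). This illustrates the idea rather than reproducing the authors' exact code; the function name and batching are assumptions.</p> <pre><code>import torch

def sigmoid_auc_loss(scores_pos, scores_neg):
    # Pairwise AUC surrogate for one class.
    # scores_pos: (P,) scores f(x) of positively-labeled cells
    # scores_neg: (N,) scores f(x) of negative cells
    # sigmoid(-m) is a symmetric loss: sigmoid(-m) + sigmoid(m) = 1,
    # which is what removes the bias from falsely added positive labels.
    margins = scores_pos.unsqueeze(1) - scores_neg.unsqueeze(0)  # (P, N) pairs
    return torch.sigmoid(-margins).mean()
</code></pre>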
Human Protein Atlas - Single Cell Classification
46th place solution - Simple Image Level Multilabel Classifier
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Congrats to all winners! Thanks to the organizers of this competition.</p> <p>And special thanks to <a href="https://www.kaggle.com/rdizzl3" target="_blank">@rdizzl3</a>, <a href="https://www.kaggle.com/phalanx" target="_blank">@phalanx</a>, <a href="https://www.kaggle.com/alexanderriedel" target="_blank">@alexanderriedel</a>, <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, <a href="https://www.kaggle.com/linshokaku" target="_blank">@linshokaku</a>, <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>, <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a>, <a href="https://www.kaggle.com/samusram" target="_blank">@samusram</a> for sharing knowledge and datasets. I have learned a lot from you.</p> <h2>Summary</h2> <p>"Simple Image Level Multilabel Classifier"</p> <p>Labels were determined by applying a classifier to the single-cell masks obtained from HPA-Cell-Segmentation.</p> <p>The classifier was trained using the full dataset.</p> <h2>Tools</h2> <ul> <li>Colab Pro, GCE, Tesla V100 16GB single GPU</li> <li>GCS</li> <li>PyTorch Lightning</li> <li>Neptune</li> <li>Kaggle API</li> </ul> <h2>Dataset</h2> <p>I used both the competition's default dataset and the extra dataset.</p> <p><a href="https://www.kaggle.com/phalanx/hpa-512512" target="_blank">HPA 512 PNG Dataset</a> by <a href="https://www.kaggle.com/phalanx" target="_blank">@phalanx</a></p> <p><a href="https://www.kaggle.com/phalanx/hpa-768768" target="_blank">HPA 768 PNG Dataset</a> by <a href="https://www.kaggle.com/phalanx" target="_blank">@phalanx</a></p> <p><a href="https://www.kaggle.com/sunghyunjun/hpa-1024-png-dataset" target="_blank">HPA 1024 PNG Dataset</a></p> <p><a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/223822" target="_blank">HPA Public Data 768x768 "rare classes" dataset</a> by <a href="https://www.kaggle.com/alexanderriedel" target="_blank">@Alexander Riedel</a></p> <p>The extra dataset was downloaded by referring to the public notebook below.<br> Images were saved as 768px PNGs. 
The size is approximately 200 GB.</p> <p><a href="https://www.kaggle.com/lnhtrang/hpa-public-data-download-and-hpacellseg" target="_blank">HPA public data download and HPACellSeg</a></p> <h2>Validation</h2> <p>MultilabelStratifiedKFold with a 5-fold split was used (a minimal example is given at the end of this post).</p> <p>The performance of the multilabel classifier was verified with Macro-F1 and Micro-F1 scores.</p> <p><a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a></p> <h2>Model training</h2> <p>3-channel RGB images<br> The image size is 1024px, trained with the following datasets and settings.</p> <ul> <li>1024px competition default dataset + 768px rare classes dataset (resized to 1024)</li> <li>1024px competition default dataset + 768px extra dataset (resized to 1024)</li> <li>AdamW</li> <li>CosineAnnealingLR</li> <li>epochs = 5 for full, 10 for rare</li> <li>BCE and focal loss were used</li> </ul> <table> <thead> <tr> <th>model</th> <th>dataset</th> <th>folds</th> <th>loss</th> <th>batch_size</th> <th>init_lr</th> <th>weight_decay</th> <th>macro F1</th> <th>micro F1</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b0</td> <td>full</td> <td>2 of 5</td> <td>bce</td> <td>16</td> <td>6.0e-4</td> <td>1.0e-5</td> <td>0.7663</td> <td>0.8171</td> <td>0.454</td> <td>0.429</td> </tr> <tr> <td>efficientnet_b0</td> <td>rare classes</td> <td>single</td> <td>bce</td> <td>16</td> <td>6.0e-4</td> <td>1.0e-5</td> <td>0.8154</td> <td>0.8368</td> <td>0.394</td> <td>0.360</td> </tr> <tr> <td>seresnext26d_32x4d</td> <td>full</td> <td>single</td> <td>focal (alpha=0.75, gamma=0.0)</td> <td>14</td> <td>6.5e-5</td> <td>1.0e-5</td> <td>0.7317</td> <td>0.7956</td> <td>0.381</td> <td>0.335</td> </tr> <tr> <td><strong>final ensemble</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td><strong>0.471</strong></td> <td><strong>0.433</strong></td> </tr> </tbody> </table> <h2>Segmentation</h2> <p>HPA-Cell-Segmentation was used, and the speed was improved by referring to <a href="https://www.kaggle.com/linshokaku" target="_blank">@linshokaku</a>'s notebook.</p> <p>The input image was resized by 1/4, with the CellSegmentator scale_factor=1.0.</p> <p>The related values of the label_cell function were adjusted to 1/4 accordingly.</p> <p><a href="https://github.com/CellProfiling/HPA-Cell-Segmentation" target="_blank">HPA-Cell-Segmentation</a></p> <p><a href="https://www.kaggle.com/linshokaku/faster-hpa-cell-segmentation" target="_blank">Faster HPA Cell Segmentation</a><br> by <a href="https://www.kaggle.com/linshokaku" target="_blank">@linshokaku</a></p> <h2>Augmentation</h2> <pre><code>A.Compose(
    [
        A.Resize(height=resize_height, width=resize_width),
        A.RandomScale(scale_limit=(-0.2, 0.2), p=1.0),
        A.PadIfNeeded(
            min_height=resize_height,
            min_width=resize_width,
            border_mode=cv2.BORDER_CONSTANT,
            value=0,
            p=1.0,
        ),
        A.RandomCrop(height=resize_height, width=resize_width, p=1.0),
        A.RandomBrightnessContrast(p=0.8),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(border_mode=cv2.BORDER_CONSTANT, value=0, p=0.5),
        A.Normalize(mean=norm_mean, std=norm_std),
        ToTensorV2(),
    ]
)
</code></pre> <h2>TTA 4x</h2> <p>HorizontalFlip, VerticalFlip, Resize 0.8, Resize 1.2</p> <h2>What did not work</h2> <ul> <li><p>Label Smoothing</p></li> <li><p>pos/neg balanced weighted loss</p> <p>X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers (Dec. 2017). "ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases." (p. 5) <a href="https://arxiv.org/abs/1705.02315" target="_blank">https://arxiv.org/abs/1705.02315</a></p></li> </ul> <h2>Using GCS and errors</h2> <p>The dataset size of this competition is really huge. I had some difficulty downloading the extra public data, and it took a lot of time. Multiprocessing did not help because no more than two files could be downloaded at the same time.</p> <p>I use Colab Pro. I usually download datasets to the Colab VM for convenience, but to train on the huge extra dataset I loaded data directly from my GCS bucket.</p> <p>I referred to the article <a href="https://medium.com/pytorch/training-faster-with-large-datasets-using-scale-and-pytorch-946dfe774d8c" target="_blank">Training Faster With Large Datasets using Scale and PyTorch</a>, but I didn't implement asynchronous data loading. In my case, the multiprocessing of torch.utils.data.DataLoader was enough for latency hiding.</p> <p>But training from GCS hit some rare errors (504 GatewayTimeout, 104 Connection reset by peer).</p> <p>I don't know the exact reason, but it seems related to the following.</p> <ul> <li><p>opencv multithreading deadlock with pytorch DataLoader (num_workers&gt;0, pin_memory=True)<br> <a href="https://stackoverflow.com/questions/54013846/pytorch-dataloader-stucked-if-using-opencv-resize-method" target="_blank">https://stackoverflow.com/questions/54013846/pytorch-dataloader-stucked-if-using-opencv-resize-method</a><br> <a href="https://github.com/pytorch/pytorch/issues/1355#issuecomment-675018985" target="_blank">https://github.com/pytorch/pytorch/issues/1355#issuecomment-675018985</a><br> solution: cv2.setNumThreads(0)</p></li> <li><p>CPU memory leaks of copy-on-write<br> <a href="https://github.com/pytorch/pytorch/issues/13246#issuecomment-737442812" target="_blank">https://github.com/pytorch/pytorch/issues/13246#issuecomment-737442812</a><br> solution:<br> <a href="https://gist.github.com/vadimkantorov/86c3a46bf25bed3ad45d043ae86fff57" target="_blank">https://gist.github.com/vadimkantorov/86c3a46bf25bed3ad45d043ae86fff57</a></p></li> </ul> <h2>Source Code</h2> <p>Source code is available at <a href="https://github.com/sunghyunjun/kaggle-hpa" target="_blank">https://github.com/sunghyunjun/kaggle-hpa</a></p> <p>The submission notebook is <a href="https://www.kaggle.com/sunghyunjun/hpa-faster-final-ensemble-w-o-rot-exp-2" target="_blank">HPA faster final ensemble w/o rot exp 2</a></p>
Human Protein Atlas - Single Cell Classification
4th Place Solution: MILIMED
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>And if you want to use the results of the functions:</p> <pre><code>import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
result = []
for x in mt:
    pool.apply_async(solve_cells, args=(x,), callback=result.append)
pool.close()
pool.join()
</code></pre> <p>Absolutely :)</p> <p>This is the main idea:</p> <pre><code>import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
for x in mt:
    pool.apply_async(solve, args=(x,))
pool.close()
pool.join()
</code></pre> <p>And this is the exact code for creating masks:</p> <pre><code>def create_masks(cell_segmentation, nuc_segmentation, name):
    nuclei_mask, cell_mask = label_cell(nuc_segmentation, cell_segmentation)
    ID = name.replace('_red.png', '').split('/')[-1]
    np.save(test_nuclei + ID + '.npy', nuclei_mask)
    np.save(test_cells + ID + '.npy', cell_mask)
    return

batch = 32
pos = 0
mt = glob.glob(test_dir + '*_red.png')
er = [f.replace('red', 'yellow') for f in mt]
nu = [f.replace('red', 'blue') for f in mt]

while pos &lt; len(mt):
    images = [mt[pos:pos+batch], er[pos:pos+batch], nu[pos:pos+batch]]
    pos += batch
    nuc_segmentations = segmentator.pred_nuclei(images[2])
    cell_segmentations = segmentator.pred_cells(images)
    pool = mp.Pool(mp.cpu_count())
    for i in range(len(cell_segmentations)):
        name = images[0][i]
        pool.apply_async(create_masks, args=(cell_segmentations[i], nuc_segmentations[i], name))
    pool.close()
    pool.join()
</code></pre> <p>We are a very diverse team of computer scientists and medical doctors/students. It was our great pleasure to participate in this demanding challenge. We hope some of you find this solution useful and/or interesting.</p> <h1>Solution overview</h1> <ol> <li>Segmentation -&gt; HPA-Cell-Segmentation</li> <li>Dataset -&gt; 512x512 cell images (20% removed)</li> <li>Parallelization -&gt; speed-up -&gt; 3h left for inference</li> <li>Manual Labeling -&gt; smaller classes &amp; validation (soft labels)</li> <li>Pseudo-Labeling -&gt; negative labeling (&amp; positive for mitotic spindle)</li> <li>EfficientNetB0 Ensemble + semi-balanced data sampling</li> <li>Fine-tuning -&gt; on manually labeled &amp; non-labeled validation data</li> <li>Cell/Image Weighting -&gt; final confidence = 0.7 * cell_confidence + 0.3 * image_confidence</li> </ol> <h1>1. HPA-Cell-Segmentation</h1> <p>The test set was based on this segmentator, so it made no sense to spend a lot of time creating a custom segmentator, which could have made the IoU worse. The hosts of the contest said that only about 10% of the segmentator's outputs were modified.</p> <h1>2. Dataset</h1> <p>Our dataset was created from the Train &amp; PublicHPA 16-bit images. It seems most teams used 8-bit images in the end.</p> <p>Each image in the final dataset is a 512x512 image of a cell based on the cell masks from the segmentator. Padding (to square) was used to retain the original height/width ratio. No surrounding pixels were used (non-cell-mask pixels). 
My feeling is that it might be better to use a slightly larger surrounding region, but we did not have time to test this (it might be good for some classes such as plasma membrane).</p> <p>We decided to go with large images (512x512) since some labels/organelles required higher resolution and their size varied a lot based on the cell size &amp; re-scaling. E.g. sometimes the nucleus was very small and sometimes it was as big as the whole image. We even tried to train nuclear organelles on images based on nuclei masks, but since the deadline was too close, we decided not to invest more time in this approach.</p> <p>I am eager to find out whether dividing the problem into nuclear and cytosolic organelle classification would yield better results. I think it would be easier to classify organelles inside the nucleus, since the nucleus would be approximately the same size in each cell image this way.</p> <p>We used a simple heuristic to determine how much of the nucleus was outside of the image and decreased the final predicted confidence accordingly. All images whose nuclei were not almost completely present in the cropped image were removed from the train set.</p> <p>Similarly, we tried to detect false positive segmentations by finding outliers based on the red channel and the product of the blue and yellow channels. Outliers at inference time got their confidence decreased dramatically. Outliers in the train dataset were removed completely. I assume the accuracy of this heuristic was around 50%. Since false positives were a big score crusher, this seemed acceptable.</p> <p>We lost around 20% of the images from the trainset.</p> <h1>3. Parallelization</h1> <p>HPA-Cell-Segmentation took quite some time, so we decided to parallelize most things. Even with only two cores we got a boost in the submission time.</p> <p>The first boost came from running the <code>label_cell</code> function in parallel. The second came from running all the previously mentioned heuristics and image cropping in parallel as well.</p> <p>This left us with more than 3 hours for inference.</p> <h1>4. Manual labeling</h1> <p>We manually labeled smaller classes or classes with a smaller % of occurrence in the initial images (e.g. mitotic spindle, aggresome, intermediate &amp; actin filaments …). We made a simple GUI and relabeled only one label at a time for an image.</p> <p>Mostly we would give a score from 1 to 5 on how confident we were that the given cell image contained the image-level label. We converted these scores to soft labels. Each mapping was different (e.g. 1:0.0, 2:0.2, 3:0.7, 4:0.9, 5:1.0).</p> <p>In the end, we tried to create a validation set in the same way with high-quality labeling. We managed to get a few thousand examples for most classes.</p> <h1>5. Pseudo-labeling</h1> <p>Inspired by the Meta Pseudo Labels paper, we wanted to get rid of some false positives and help our models avoid overfitting. A cut-off of 0.3 seemed to remove approx. 15% of images with high accuracy. Here we used an underfit ResNet18.</p> <p>Later we used a better model to find more examples of mitotic spindles in a similar way, but within the images that did not have mitotic spindle assigned. I think we found around 100 extra mitotic spindles, compared to around 250 that we found in the labeled images.</p> <p>In the end, we did not do this for other classes. I think we found a few aggresomes and quickly decided to skip positive labeling.</p> <h1>6. EfficientNetB0</h1> <p>This network is just awesome :) I am a big fan of solving problems with simple models, so I was quite happy that EfficientNetB0 seemed to be good enough for this challenge. We tried using B4, but it was slower to train and the results did not impress enough to continue playing with it. We had a solution that ensembled some B4s, but it gave no boost on the private LB.</p> <p>We had a 3-part ensemble with weights [0.2, 0.4, 0.4]. All EfficientNetB0s, but trained with different augmentation and loss function combinations.</p> <ol> <li><p>Single B0 - 0.2 ensemble weight<br> Augmentation: Flipping &amp; Rotation<br> Loss: FocalLoss<br> We did not have time to test whether this network actually helped much.</p></li> <li><p>2 Checkpoint Ensemble B0s - 0.4 final ensemble weight<br> Augmentation: RandomResizer (40% chance), Flipping &amp; Rotation<br> Loss: FocalLoss<br> *RandomResizer -&gt; Resize to (RSIZE, RSIZE) + Resize back to (512, 512), where RSIZE is a random number between 256 and 384</p></li> <li><p>4 Checkpoint Ensemble B0s - 0.4 final ensemble weight<br> Augmentation: RandomResizer (30% chance), RandomPad (30% chance), Flipping, Rotation, &amp; Resize(512,512)<br> Loss: BCELoss<br> *RandomPad -&gt; pad each side (independently) with a random length between 0 and 200</p></li> </ol> <p>Since it was hard to estimate the "best" model on the local validation set, we used the idea of checkpoint ensembling to try to avoid overfitting &amp; boost our score.</p> <p>We oversampled classes with fewer examples or fewer true positives and tried to avoid overusing images with more assigned labels.</p> <h1>7. Fine-tuning</h1> <p>We fine-tuned all networks for one "epoch" on the validation set. Unlabeled images were used once, while images that we labeled (soft labels) were used multiple times in the "epoch". The "epoch" was around 200-250k images. We used the same augmentations &amp; loss function for each B0 as were used during its training.</p> <h1>8. Cell/Image weighting</h1> <p>For extreme outliers, we weighted the final confidences with the mean of the confidences of all other valid cells (excluding border cells and outliers). The final confidence was 0.7 * cell_confidence + 0.3 * image_confidence (see the sketch at the end of this post).</p> <p>It seems that even extreme values such as 0.6/0.4 tend to work well here. We did not test whether smaller weighting gave a better score.</p> <h1>Conclusion</h1> <p>512x512 images, re-labeling, a simple network (B0), fine-tuning &amp; final confidence weighting seem to work well enough for this problem.</p> <h1>Thank you</h1> <p><a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> &amp; <a href="https://www.kaggle.com/cwinsnes" target="_blank">@cwinsnes</a> -&gt; <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/229284#1258324" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/229284#1258324</a><br> We were not aware of this at that time. 
These valuable responses made a huge impact on our approach/results.</p> <p><a href="https://www.kaggle.com/h053473666" target="_blank">@h053473666</a> -&gt; <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/230940" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/230940</a><br> Thank you for the motivation for the final weighting.</p> <p><a href="https://www.kaggle.com/emmalumpan" target="_blank">@emmalumpan</a>, <a href="https://www.kaggle.com/lnhtrang" target="_blank">@lnhtrang</a> &amp; <a href="https://www.kaggle.com/cwinsnes" target="_blank">@cwinsnes</a> I hope there will be more opportunities to participate in your journey in the future (Kaggle or non-Kaggle related). Thank you for the ride! :)</p>
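<p>For clarity, the final weighting step from section 8 amounts to something like the following sketch. It is illustrative only: the write-up does not say how <code>image_confidence</code> was computed, so taking the per-class max over the image's cells is an assumption here, and the function name is mine.</p> <pre><code>import numpy as np

def weight_confidences(cell_probs, w_cell=0.7):
    # cell_probs: (n_cells, n_classes) per-cell predictions for one image
    # image confidence per class: assumed here to be the max over its cells
    image_conf = cell_probs.max(axis=0, keepdims=True)
    return w_cell * cell_probs + (1.0 - w_cell) * image_conf
</code></pre>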
Human Protein Atlas - Single Cell Classification
5th place solution - my part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> !</p> <p>Thanks! It was a super challenging and interesting problem to work on.</p> <p>Thanks Sasza! You know where the next stop is :)</p> <p>Thanks <a href="https://www.kaggle.com/corochann" target="_blank">@corochann</a> Congrats on your result too!</p> <p>Thanks Raman! I enjoy discussions on Kaggle - the community here is simply amazing.</p> <p>Thanks <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> ! Congrats on your amazing result too!</p> <p>First of all, I would like to thank my fantastic teammates <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/tivfrvqhs5" target="_blank">@tivfrvqhs5</a>. We realized 5 days before the deadline that we had to recalculate everything, and we managed to do so, selecting our submissions 1 hour before the deadline. The fact that it worked is a miracle.</p> <p>Big congrats to <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>, who once again has shown his greatness, to the surprise of nobody :) We suspected you were #1 already for a long time, even when we were higher on the public LB.</p> <p>Congrats to all other teams - it was great to compete with you.</p> <p>Finally, I would like to thank the hosts for creating such an interesting problem for us to tackle.</p> <p>I will share the key parts of my solution, which brought the largest score boost. Other components of my models are fairly standard:</p> <p><strong>1. Model on cell level and progressive pseudo-labeling</strong></p> <p>I started with models trained at the whole-image level, then I moved to models trained at the single-cell level. When assigning labels to single-cell images, I used the following approach:</p> <pre><code>threshold_std_above_mean = 0.5
threshold_pred = 0.9

for i in range(num_classes):
    cell_level_df[f'cell_label_class{i}'] = ((cell_level_df[f'gt_class_{i}'] == 1) &amp; (
        (cell_level_df[f'img_pred_rank_{i}'] == 1) |
        (cell_level_df[f'std_from_mean_{i}'] &gt; threshold_std_above_mean) |
        (cell_level_df[f'pred_class_{i}'] &gt; threshold_pred)
    )).astype(int)
</code></pre> <p>The logic behind the above formula is the following:<br> I set the label for a single-cell image to 1 for a given class only if:</p> <ul> <li>The whole image has label 1 for this class</li> <li>This particular cell has the highest prediction for this class among all cells in the image, or its prediction is above 0.9, or it is 0.5 standard deviations higher than the mean prediction for this class on this image</li> </ul> <p>Those parameters were tuned using feedback from the LB. I did 3 iterations of models -&gt; preds -&gt; labels. This was the single biggest source of boost for my models.</p> <p><strong>2. Filtering out cells detected by the segmentation model but invisible to humans</strong></p> <p>When the blue channel is very weak, the official segmentation model provided by the hosts sometimes detects a cell even when it is nearly invisible to the human eye and would surely have been removed by the manual labelers. 
This is the simple condition I used, which brought around a 0.04 improvement on the LB (blue is assumed to be at channel index 2; see the sketch at the end of this post):</p> <pre><code>cell_img[2,:,:][cell_img[2,:,:]&gt;5] &lt; 25
</code></pre> <p>Such cells were removed from the predictions.</p> <p><strong>3. Manual review of mitotic spindle</strong></p> <p>I could not resist :) I spent a couple of evenings manually reviewing all images with mitotic spindle (label for class_11 == 1) and some high predictions for class 0. This improved the mitotic spindle score from 0.024 to 0.032. I can release this dataset if anyone is interested.</p>
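<p>Reading the condition above: among blue-channel pixels above a small noise floor (5), all must stay below 25, i.e. the nucleus is never convincingly lit. A minimal sketch of how such a filter might be applied, with hypothetical names (<code>is_invisible_cell</code>, <code>floor</code>, <code>thresh</code>) and under the same channel-index assumption:</p> <pre><code>import numpy as np

def is_invisible_cell(cell_img, floor=5, thresh=25):
    # cell_img: (C, H, W) uint8 crop; blue channel assumed at index 2
    blue = cell_img[2]
    lit = blue[blue &gt; floor]          # pixels above the noise floor
    return lit.size == 0 or bool((lit &lt; thresh).all())
</code></pre>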
Human Protein Atlas - Single Cell Classification
6th place Solution Summary (0.549)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We would like to thank the competition host(s) and Kaggle for organizing the competition, and congratulate all the winners and anyone who benefited in some way from the competition. Special thanks to my teammates <a href="https://www.kaggle.com/zehuigong" target="_blank">@zehuigong</a> <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> <a href="https://www.kaggle.com/felipebihaiek" target="_blank">@felipebihaiek</a></p> <p>I am sorry for the late share; because I was busy with two competitions at the same time, I only spent the last 3 weeks preparing for this one. Had I started earlier, I could have realized many more of my ideas.</p> <h1>Solution Components:</h1> <ul> <li>Cell-level models</li> <li>Image-level model</li> <li>Gridify and Gapmask Inference</li> </ul> <h1>1. Cell-level models</h1> <p>Baseline solution: the settings of our baseline are as follows:<br> • Backbone: EfficientNet-b3;<br> • Data: only training data, cell-level classification.<br> • Optimizer: Adam, BS = 128, LR = 0.001, warm-up + constant learning rate, 16 epochs.<br> • Image size: 320 for training and inference.<br> • Augmentations: the optimized transform policy of the previous third-place solution.<br> • Focal loss (gamma=2).<br> • Using offline SWA to generate the inference weights.<br> The above model achieved a 0.351 LB score.</p> <h4>Tricks on the baseline:</h4> <p>(1) Label smoothing with Gaussian random noise, eps=0.05; the smoothed labels are generated as follows (0.351 -&gt; 0.412; see the sketch at the end of this post):<br> <img src="https://github.com/yangsenwxy/png_githu1/blob/main/2.png?raw=true" alt="random"><br> (2) Mixup augmentation (0.412 -&gt; 0.428);<br> (3) Attention class head (boost ~0.002);<br> (4) Train + public (except for classes 0 and 16) + label smoothing + mixup: 0.507;<br> (5) Deep supervision: add supervision on the middle layers of the backbone network, 2-layer supervision (boost ~0.004, 0.507 -&gt; 0.511), 3-layer supervision (0.511 -&gt; 0.513);<br> (6) Merge the cell-level prediction with the image-level (RGBY+G) prediction.</p> <h1>2. Image-level Model</h1> <p>• Backbone: EfficientNet-b7;<br> • Data: training + public data (RGBY+G), image-level classification.<br> • Image size: 600 for training and inference.<br> • Focal loss (gamma=2).</p> <h4>Inference</h4> <p>We use 8x TTA for inference, and the inference image size is the same as the training image size, e.g., 320. The segmentation model is the original HPA Segmentator.</p> <h4>Post-processing</h4> <p>We use a threshold of 0.001 to filter the predictions, and merge the image-level and cell-level classification results, which achieves an improvement of 0.04 mAP, with weights of 0.7 for cell-level and 0.3 for image-level predictions, respectively (0.513 -&gt; 0.553).</p> <h1>3. Gridify and Gapmask Inference</h1> <p>A detailed breakdown of this step has been posted in a separate <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238365" target="_blank">discussion</a>.</p> <h2>Final Results</h2> <ol> <li>Original HPA Segmentator + 2-layer + 3-layer supervision cell-level + image-level + Gridify and Gapmask Inference (Public: 0.5771, Private: 0.5499)</li> <li>Faster HPA Segmentator + 2-layer + 3-layer supervision cell-level + image-level + Gridify and Gapmask Inference (Public: 0.5724, Private: 0.5410)</li> </ol> <h2>Ideas in mind</h2> <ul> <li>OOF models.</li> <li>Use metric learning techniques, such as ranking loss (paper: Improving Pairwise Ranking for Multi-label Image Classification).</li> </ul> <h2>Things that didn’t work for us</h2> <p>(1) Using GAP and GMP (global max pooling) and concatenating the two outputs; before the final classification layer, we add two more FC layers.<br> (2) Currently, our cell-level images are all multiplied by the corresponding cell mask; we trained a model with cell-level images without multiplying by the cell mask, adding the cell mask as an additional input channel instead;<br> (3) BCE + Focal;<br> (4) Class-aware training sampling: we assign the sampling ratio for each of the training images according to the class frequency. This lowers the sampling ratio for major-class images while raising the ratios for minor-class images.</p>
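<p>Since the formula image for trick (1) may not render, here is one plausible reading of "label smoothing with Gaussian random noise" as a sketch. The construction (jittering eps-smoothed multi-hot targets with small Gaussian noise and clamping to [0, 1]) is an assumption, not the authors' confirmed formula.</p> <pre><code>import torch

def noisy_smooth_labels(targets, eps=0.05):
    # targets: (B, C) multi-hot 0/1 labels
    smoothed = targets * (1.0 - eps) + (1.0 - targets) * eps  # standard smoothing
    noise = torch.randn_like(smoothed) * eps                  # gaussian jitter
    return (smoothed + noise).clamp(0.0, 1.0)
</code></pre>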
Human Protein Atlas - Single Cell Classification
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> !</p> <p>Thank you Mathurin!</p> <p>Thanks for the question.</p> <ol> <li>Yes, Figure 2 is the training process for the image-level classifier. We trained all CNN (ImageNet-pretrained) weights with this.</li> <li>Let me explain CAM. A normal image classifier obtains the class logits by applying Global Average Pooling -&gt; Linear to the feature map, in this order. Since GAP and Linear are linear operations, the values of the logits do not change even if Linear -&gt; GAP is applied in this order. When Linear is applied first, the tensor immediately after the Linear operation is called the CAM. Since both Linear and GAP are differentiable operations, the overall process is also differentiable.</li> <li>We implemented it as follows.</li> </ol> <pre><code># CutMix
rand_index = torch.randperm(x.size()[0]).cuda()
bbx1, bby1, bbx2, bby2 = rand_bbox(x.size(), lam)
x[:, :, bbx1:bbx2, bby1:bby2] = x[rand_index, :, bbx1:bbx2, bby1:bby2]

# Obtain a CAM
cam = compute_cam(model, x)

# Resize the CAM to the original resolution
cam = F.interpolate(cam, x.shape[-2:], mode="bilinear", align_corners=False)

# Inverse CutMix
inv_rand_index = torch.argsort(rand_index)
mixed_mask = torch.zeros(cam.shape, dtype=torch.bool, device=x.device)
mixed_mask[:, :, bbx1:bbx2, bby1:bby2] = True
cam = torch.where(mixed_mask, cam[inv_rand_index], cam)
</code></pre> <p>We are planning to release the code at a later date, so it would be better to check the detailed implementation there. I'll ping you in this thread when I publish it!</p> <p>Thanks! <a href="https://www.kaggle.com/duykhanh99" target="_blank">@duykhanh99</a></p> <p>First of all, I would like to thank the host for organizing such an exciting competition. I learned a lot from challenging tasks that I had never dealt with before, and I was helped by the host's many kind contributions to the Discussion forum.</p> <h2>Solution Overview</h2> <p>Our team gave up on improving the segmentation masks from HPA-Cell-Segmentation early on, and concentrated on improving the accuracy of multi-label classification for each cell.</p> <p>We mainly experimented with the following two approaches:</p> <ol> <li>Predict each cell from a Class Activation Map (CAM) of image-level classifiers, as is commonly done in Weakly Supervised Semantic Segmentation.</li> <li>Crop the image for each cell and predict cells one by one.</li> </ol> <p>In approach 1, the image-level label is cleaner (compared to using it as a cell-level label), so it is easier to train the classifier. On the other hand, it is at a disadvantage for images with high single-cell variation (SCV), because the prediction for an individual cell is easily affected by neighboring cells. In approach 2, it is difficult to perform well by simply using the image-level label, but it is less affected by SCV because prediction is done for each cell separately. 
To take advantage of these two complementary approaches, we created models with both approaches and used them as an ensemble.</p> <p>The training pipeline is shown in the following image.<br> <img src="https://user-images.githubusercontent.com/8179588/117969069-eaf42300-b361-11eb-8bdc-719658486cc0.png" alt=""><br> We repeated the offline pseudo-labeling process twice, each time training a new model using pseudo labels from an ensemble of multiple models and TTA. All pseudo labels are soft labels after applying the sigmoid function. All ensembles were done with a simple average, and the TTA used all D4 augmentations. The image-level classifier was trained using 768 x 768 images, except for the 1536 one, and the cell-level classifier was trained using 192 x 192 images. The cosine classifier uses the cosine similarity between the feature vector and the linear layer's weights instead of a linear transformation of the feature map (we follow equation 3 of <a href="https://arxiv.org/abs/2103.16370" target="_blank">https://arxiv.org/abs/2103.16370</a>).</p> <h2>Data</h2> <p>We used all the training data from this competition and the public HPA data.</p> <h2>Validation Strategy</h2> <p>To split the data, we used MultilabelStratifiedKFold from <a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a> with 5 folds. We mainly monitored image-level mAP, focal loss, and binary cross entropy, but we could not find any metric that correlated with the public LB, so we relied on feedback from the public LB.</p> <h2>Image Level Classifier</h2> <p><img src="https://user-images.githubusercontent.com/8179588/117969094-f6474e80-b361-11eb-89e9-ab9af1e65923.png" alt=""></p> <p>For the image-level classifier, in addition to the image-level focal loss, we used a consistency loss such that the cell-level prediction under weak augmentation matches the cell-level prediction under strong augmentation (CutMix). For the cell-level prediction, we used the average of the CAM over the region occupied by each cell (since the number of channels in the CAMs is small, this worked reasonably fast even with ops such as scatter_add; see the sketch at the end of this write-up). The idea of the consistency loss is based on <a href="https://arxiv.org/abs/2010.09713" target="_blank">PseudoSeg</a> and <a href="https://arxiv.org/abs/2101.11253" target="_blank">PuzzleCAM</a> (I think the reconstruction loss in PuzzleCAM can be regarded as a consistency loss using a variant of Cutout).</p> <p>We mainly used EfficientNet-B2 as the image-level classifier. This is because other architectures (we tried ResNet and ResNeSt) or larger EfficientNets would improve the local image-level mAP, but not the public LB. (This choice may have overfit the public LB.)</p> <p>When using a pseudo label from an ensemble of other models, "Cell Level Pseudo Label" in the figure is replaced with the pseudo label from the ensemble.</p> <h2>Cell level Classifier</h2> <p><img src="https://user-images.githubusercontent.com/8179588/117969124-fe9f8980-b361-11eb-9704-859ff2ada179.png" alt=""></p> <p>For the cell-level classifier, in order to feed both the shape of the entire cell and the size of the cell into the CNN at a somewhat small resolution, we concatenated a fixed-scale, nucleus-centered crop and a variable-scale, whole-cell crop as input to the CNN. 
We did not use a model trained with simple image-level labels because it did not perform well.</p> <h2>Post Processing</h2> <p>From the following comment in the <a href="https://www.kaggle.com/lnhtrang/single-cell-patterns#18.-Negative" target="_blank">single-cell-patterns notebook</a>:</p> <blockquote> <p>Please also note that border cells where most of the cells are out of the field of view and to cells that have been damaged or suffer from staining artifacts. A good rule of thumb (that our annotators used in generating ground truth) is if more than half of the cell is not present, don't predict it!</p> </blockquote> <p>We scaled the confidence with a value based on the area of the cell (shown as “edge scale” in the below image) so that the confidence of small cells at the edges of the image would be small. We also scaled the confidence of cells that are not at the edge of the image by a value (shown as "non-edge scale"), assuming that smaller cells are harder to predict.</p> <p><img src="https://user-images.githubusercontent.com/8179588/117969457-61912080-b362-11eb-8e3d-40b8d5274a1f.png" alt="image"></p> <h2>Scores</h2> <table> <thead> <tr> <th>Training</th> <th>Architecture</th> <th>Pseudo Label</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Image level classification</td> <td>EfficientNet-B2</td> <td>-</td> <td>0.531</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B5</td> <td>-</td> <td>0.526</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B7</td> <td>-</td> <td>0.502</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B2 (1536)</td> <td>-</td> <td>0.526</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B2</td> <td>1st</td> <td>0.554</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B2-cos</td> <td>2nd</td> <td>0.554</td> <td>-</td> </tr> <tr> <td>Image level classification</td> <td>EfficientNet-B2-cos</td> <td>2nd</td> <td>0.566</td> <td>-</td> </tr> <tr> <td>Cell level classification</td> <td>ResNeSt50</td> <td>1st</td> <td>0.551</td> <td>-</td> </tr> <tr> <td>Cell level classification</td> <td>ResNeSt50</td> <td>2nd</td> <td>0.571</td> <td>-</td> </tr> <tr> <td>Cell level classification</td> <td>ResNeSt50</td> <td>2nd</td> <td>0.569</td> <td>-</td> </tr> <tr> <td>-</td> <td>Final Ensemble</td> <td>-</td> <td>0.580</td> <td>-</td> </tr> <tr> <td>-</td> <td>Final Ensemble-postprocess</td> <td>-</td> <td>0.594</td> <td>0.540</td> </tr> </tbody> </table> <h2>Code</h2> <p>(Added on May 28, 2021) We have published the code.<br> <a href="https://github.com/pfnet-research/kaggle-hpa-2021-7th-place-solution" target="_blank">https://github.com/pfnet-research/kaggle-hpa-2021-7th-place-solution</a></p> <p><a href="https://www.kaggle.com/yuvaramsingh" target="_blank">@yuvaramsingh</a> The code is now available! Sorry for the delay.<br> <a href="https://github.com/pfnet-research/kaggle-hpa-2021-7th-place-solution" target="_blank">https://github.com/pfnet-research/kaggle-hpa-2021-7th-place-solution</a></p>
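<p>As referenced above, averaging a CAM over each cell's mask region can be done in one pass with <code>scatter_add_</code>. This is a minimal sketch under assumed shapes, not the team's exact code:</p> <pre><code>import torch

def cell_level_scores(cam, cell_mask, num_cells):
    # cam: (C, H, W) class activation map with C classes
    # cell_mask: (H, W) long tensor of cell ids, 0 = background
    C = cam.shape[0]
    flat_cam = cam.reshape(C, -1)                        # (C, H*W)
    flat_ids = cell_mask.reshape(-1)                     # (H*W,)
    sums = torch.zeros(C, num_cells + 1, device=cam.device)
    sums.scatter_add_(1, flat_ids.unsqueeze(0).expand(C, -1), flat_cam)
    counts = torch.zeros(num_cells + 1, device=cam.device)
    counts.scatter_add_(0, flat_ids, torch.ones_like(flat_ids, dtype=torch.float))
    means = sums / counts.clamp(min=1)                   # per-cell CAM average
    return means[:, 1:]                                  # drop background: (C, num_cells)
</code></pre>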
Human Protein Atlas - Single Cell Classification
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, but I'm not sure which public kernel has a similar approach. I shared some code snippets below for better understanding.</p> <p>For example, this is my dataset. I added some comments for better understanding.</p> <pre><code>class HPADataset(Dataset):
    # image_list -&gt; list of names of full images
    # label_dict -&gt; dict of image-level labels
    # target_size -&gt; output size of cell images (patches)
    # num_cell_per_image=8
    # transform -&gt; augmentations
    def __init__(self, image_list, label_dict, target_size, num_cell_per_image, transform):
        self.image_list = image_list
        self.label_dict = label_dict
        self.target_size = target_size
        self.num_cell_per_image = num_cell_per_image
        self.transform = transform

    def __len__(self):
        return len(self.image_list)

    def __getitem__(self, index):
        # Initialize memory for output of a single full image composed of num_cell_per_image cells
        x_image = np.zeros((self.num_cell_per_image, self.target_size, self.target_size, 4), dtype=np.uint8)
        # Load all filenames of pre-saved cells within a single full image
        image_files = sorted(glob.glob('../../../cell_seg/cell_segmentations/cells/'+self.image_list[index]+'/rgb/'+'/*.png'))
        # permutation
        perm = torch.randperm(len(image_files))
        # read four channels, pad to square, and apply augmentations
        for i in range(min(self.num_cell_per_image, len(image_files))):
            x_rgb = cv2.imread(image_files[perm[i]])
            x_y = np.expand_dims(cv2.imread(image_files[perm[i]].replace('rgb', 'y'), cv2.IMREAD_GRAYSCALE), axis=-1)
            x = np.concatenate((x_rgb, x_y), axis=-1)
            x = pad_to_square(x)
            x = self.transform(image=x)['image']
            x_image[i] = x
        # channel first
        x_image = x_image.transpose(0, 3, 1, 2)
        # image-level labels
        y = self.label_dict[self.image_list[index]]
        return x_image, y
</code></pre> <p>And this is the model class for training.</p> <pre><code>class HPAImageResnet200d(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = timm.create_model('resnet200d', drop_path_rate=0.2, pretrained=True, in_chans=4)
        self.avg_pool_cell = nn.AdaptiveAvgPool2d(1)   # GAP within each cell image
        self.avg_pool_image = nn.AdaptiveAvgPool1d(1)  # image-level GAP
        self.last_linear = nn.Linear(2048, 18, bias=False)

    def forward(self, x, num_cell_per_image):
        # input size 80x4x256x256, assuming we sample 8 cells from each of the 10 full images
        bs = x.size(0)
        x = self.net.forward_features(x)  # output shape before GAP 80x2048x8x8
        x = self.avg_pool_cell(x)         # output shape 80x2048x1x1
        x = x.view(x.size(0), -1)         # output shape 80x2048
        x = x.view(bs//num_cell_per_image, num_cell_per_image, -1)  # output shape 10x8x2048
        x = x.permute(0, 2, 1)            # 10x2048x8
        x = self.avg_pool_image(x)        # 10x2048x1
        x = x.view(bs//num_cell_per_image, -1)  # output shape 10x2048
        x = self.last_linear(x)           # 10x18
        return x
</code></pre> <p>The output of the dataloader has shape 10x8x4x256x256, so we make it 80x4x256x256 before inputting it to the model:<br> <code>logits = model(images.view(-1, images.size(2), images.size(3), images.size(4)), num_cell_per_image)</code></p> <p>Model class for inference. We only need to add a wrapper for cell prediction. 
We can see this wrapper just removes a few lines from the training model, and we directly get cell-level predictions.</p> <pre><code># This part is the same as the training code
class HPAImageResnet200d(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = timm.create_model('resnet200d', drop_path_rate=0.2, pretrained=False, in_chans=4)
        self.avg_pool_cell = nn.AdaptiveAvgPool2d(1)
        self.avg_pool_image = nn.AdaptiveAvgPool1d(1)
        self.last_linear = nn.Linear(2048, 18, bias=False)

    def forward(self, x, num_cell_per_image):
        bs = x.size(0)
        x = self.net.forward_features(x)
        x = self.avg_pool_cell(x)
        x = x.view(x.size(0), -1)
        x = x.view(bs//num_cell_per_image, num_cell_per_image, -1)
        x = x.permute(0, 2, 1)
        x = self.avg_pool_image(x)
        x = x.view(bs//num_cell_per_image, -1)
        x = self.last_linear(x)
        return x

# Add a wrapper for cell-level prediction for convenience.
class HPAImageResnet200dCellPredict(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = HPAImageResnet200d()

    def forward(self, x):
        x = self.net.net.forward_features(x)  # output shape 80x2048x8x8
        x = self.net.avg_pool_cell(x)         # output shape 80x2048x1x1
        x = x.view(x.size(0), -1)             # output shape 80x2048
        x = self.net.last_linear(x)           # cell-level predictions, output shape 80x18
        return x
</code></pre> <p>Hi Darek, I think this sheet summarizes pretty well the pros and cons of the major approaches to this problem. Yes, my approach looks closest to number 4, and it solves all the weaknesses of numbers 2 and 3.</p> <p>But I'm not sure if your #4 is the same as mine.<br> First, did you do cell-level global pooling before image-level global pooling? It is important to do global pooling twice.<br> Second, it might be wrong to use global max pooling for the image-level pooling; global average pooling may be the only choice here. But you are free to choose other types of global pooling for the cell-level pooling.</p> <p>Yeah, I'm just not sure if max pooling is proper here, because this method is still in essence CAM-based, and for CAM I think the default is mean pooling.<br> For augmentation, I used RandomResizedCrop, rotate90, flip, cutout, and mixup. When using mixup, I mixed cells from one whole image with cells from another whole image.</p> <p>Since we are given image-level labels and cell segmentations, we can train on image labels and predict cell labels directly.</p> <p>Let's say during training we sample B images, and within each image we sample M cells. Let's further assume input channels=4, input_size=256x256, feature_dim=2048.</p> <ol> <li>Input shape to CNN -&gt; BMx4x256x256</li> <li>CNN feature extraction -&gt; BMx2048x8x8</li> <li>GAP -&gt; BMx2048</li> <li>Reshape and permute -&gt; Bx2048xM</li> <li>Another GAP -&gt; Bx2048</li> <li>Last linear -&gt; Bx18 (compute loss with image labels)</li> </ol> <p>During inference, to predict cell labels, we remove steps 4 and 5:</p> <ol> <li>Input shape to CNN -&gt; BMx4x256x256</li> <li>CNN feature extraction -&gt; BMx2048x8x8</li> <li>GAP -&gt; BMx2048</li> <li>Last linear -&gt; BMx18 (cell label prediction completed)</li> </ol> <p>This approach is partly inspired by the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">lafoss concat tile pooling</a> and class activation mapping. 
</p> <p>Postprocessings that brought some improvement, ranked roughly by their importance:<br> 1) Downscale border cells' predictions.<br> 2) Cell-level fine-tuning with OOF predictions in CV.<br> 3) 0.9 x cell predictions + 0.1 x average of cell predictions within the image, to make use of label correlations within an image. It would probably be better to train some independent image-level models, though (<a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/230940" target="_blank">example</a>); here I just reused cell predictions (a small sketch follows at the end of this post).</p> <p>I pad with zero cell images if there are not enough cells. I used resnet200d and efficientnetb5, input size 288x288. All cells are pre-saved on disk for faster loading in the dataset.</p> <p>I randomly sample M=8 cell patches from each full image; if the number of cells &lt; 8, I pad with zero patches. So assuming B=10, the input to the network is 80x4x256x256.<br> This approach doesn't use the original full images; it only accepts cell patches as input.<br> I'll share some example code later.</p> <p>I tried GeM, but there was no big difference in score.</p>
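<p>Postprocessing (3) above is a one-liner per image; here is a minimal numpy sketch (function and variable names are mine, not the author's):</p> <pre><code>import numpy as np

def blend_with_image_mean(cell_preds, image_ids, w_cell=0.9):
    # cell_preds: (n_cells, 18) sigmoid outputs
    # image_ids: (n_cells,) id of the parent image of each cell
    out = cell_preds.copy()
    for img in np.unique(image_ids):
        idx = image_ids == img
        out[idx] = w_cell * cell_preds[idx] + (1 - w_cell) * cell_preds[idx].mean(axis=0)
    return out
</code></pre>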
Human Protein Atlas - Single Cell Classification
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>We had a tough competition. Congratulations to all the kagglers who persevered to the end, and many thanks to the organizers and my wonderful teammates <a href="https://www.kaggle.com/daishu" target="_blank">@daishu</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a></p> <h1>Summary</h1> <p>We had two kinds of pipelines in this competition:</p> <ul> <li>Pipeline 1: train on full images, test on single cells (masking out other cells)</li> <li>Pipeline 2: train on cropped cells, test on cropped cells as well</li> </ul> <p>Finally, we trained ~20 folds of models with these two pipelines and ensembled them by taking the mean value.</p> <p><strong><em>Notice that we do not use image-level predictions.</em></strong></p> <h1>Methods</h1> <h3>Pipeline 1</h3> <p>We used two methods to train and test this pipeline.</p> <p>The first one is to train with 512 images, where the test input is also 512. We loop n times for each image (n is the number of cells in the image), each time leaving only one cell and masking out the other cells, to get single-cell predictions.</p> <p>The second one is trained on 768 images randomly cropped to 512, and then tested almost the same way as the first one; we not only mask out the other cells, but also move the remaining cell to the center of the image.</p> <h3>Pipeline 2</h3> <p>We pre-crop all the cells of each image and save them locally. Then during training, for each image we randomly select 16 cells. We set bs=32, so for each batch we have 32x16=512 cells in total.</p> <p>We resize each cell to 128x128, so the returned data shape from the dataloader is <code>(32, 16, 4, 128, 128)</code>. Next we reshape it into <code>(512, 4, 128, 128)</code> and forward it through a very common CNN; the output shape is <code>(512, 19)</code>.</p> <p>In the prediction phase, we directly take this output and use it as the predicted value for each cell.</p> <p>But during the training process, we reshape this <code>(512, 19)</code> prediction back into <code>(32, 16, 19)</code>. Then the loss is calculated for each cell against the image-level GT label (a short sketch is given at the end of this post).</p> <h1>Acknowledge</h1> <p>Special thanks to Z by HP &amp; NVIDIA for sponsoring me a Z8G4 Workstation with dual RTX6000 GPUs and a ZBook with an RTX5000 GPU!</p> <p>Since I got the Z8G4 workstation and the ZBook last December, this is the third gold medal I've won on Kaggle ;)</p>
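<p>The Pipeline 2 reshaping is compact enough to sketch end to end. This is an illustration under the shapes stated above, not the team's code: <code>cnn</code> and <code>batch</code> are placeholders, and binary cross entropy with the image label broadcast to every cell is one reading consistent with "the loss is calculated for each cell against the image-level GT label".</p> <pre><code>import torch
import torch.nn.functional as F

cells, y_img = batch                     # cells: (32, 16, 4, 128, 128), y_img: (32, 19)
B, M = cells.shape[:2]
logits = cnn(cells.view(B * M, 4, 128, 128))        # (512, 19): one row per cell
loss = F.binary_cross_entropy_with_logits(
    logits.view(B, M, 19),                          # back to (32, 16, 19)
    y_img.unsqueeze(1).expand(B, M, 19).float(),    # image label copied to each cell
)
</code></pre>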
Human Protein Atlas - Single Cell Classification
Fair Cell Activation Network and Swin Transformer, the 1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/micheomaano" target="_blank">@micheomaano</a>,<br> Sorry for the late reply, I do not open Kaggle pages often.<br> We can feed the images to the network in batches… As one training image becomes 5 input images for FCAN, we can feed 2, 3, or 4 training images to the network at the same time, and the real batch size will be 10, 15, or 20.</p> <p>Hi <a href="https://www.kaggle.com/sameedhusain" target="_blank">@sameedhusain</a>,<br> Sorry for the late reply!<br> It's not easy to describe in detail, but the rule is quite easy to understand: if the cell prediction from FCAN is high and the image-level prediction is also high, then the cell-level label will be high… We can set the confidence of the image-level and cell-level predictions to 2-3 levels and then set the cell-level label accordingly. What's more, we should assign a label of at least 0.25 if the label exists in the image-level labels.</p> <p>Thanks, glad to find that our solutions are so similar. :)</p> <p>Thanks, I am sure you or guanshuo <a href="https://www.kaggle.com/wowfattie" target="_blank">@wowfattie</a> will be back at #1 very soon, and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> is also on the way to this position :)</p> <p>Hi <a href="https://www.kaggle.com/sgalib" target="_blank">@sgalib</a>,<br> Thanks!<br> As to your questions:</p> <ol> <li>Using OOF and the training set performs similarly, but we should set different thresholds; I used the training set.</li> <li>I trust the image-level labels.</li> </ol> <p>Hi <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a>,</p> <p>There is a link <a href="https://www.proteinatlas.org/about/download" target="_blank">https://www.proteinatlas.org/about/download</a>; you can download <a href="https://www.proteinatlas.org/download/subcellular_location.tsv.zip" target="_blank">https://www.proteinatlas.org/download/subcellular_location.tsv.zip</a></p> <p>The first column is Gene; we can download the related XML for every gene, for example:<br> <a href="https://www.proteinatlas.org/ENSG00000134057.xml" target="_blank">https://www.proteinatlas.org/ENSG00000134057.xml</a></p> <p>There is a lot of information, and we need background knowledge to understand it all. But if you search for antibody id= you will find one or more antibodies for this gene. 
For example:<br> <strong>antibody id="CAB000115"</strong><br> For every antibody there are images; search for imageUrl, for example, <strong>http://images.proteinatlas.org/115/672_E2_1_blue_red_green.jpg</strong><br> You will find the antibody-id (115) in the link, and you can also find the image_id (672_E2_1); then we can add an antibody-id attribute to this image. You can also use other information, such as cell line, age… </p> <p>Perhaps you did not enter the last HPA competition; there are discussions on how to use this information at <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/69984#430860" target="_blank">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/69984#430860</a><br> for example:<br> <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/430860/10777/Parce%20XML%20and%20Download%20HPAv18%20Image.html" target="_blank">https://storage.googleapis.com/kaggle-forum-message-attachments/430860/10777/Parce%20XML%20and%20Download%20HPAv18%20Image.html</a></p> <p>Congrats to all the winners, and thanks to the Human Protein Atlas team and Kaggle for hosting such an interesting competition.</p> <p><strong>1. Introduction</strong></p> <p>The main challenge of this competition is to find a way to label every cell in a labeled image. It is a new type of weakly supervised challenge: since we are provided with a cell segmentation model, this is not a problem widely discussed like weakly supervised object detection or segmentation.</p> <p>The common method for this kind of problem is to find CAMs or attention on cells, but the activations of a CNN focus on the most discriminative parts of an image, which leads to a low recall rate. To solve this problem I developed a network called Fair Cell Activation Network (FCAN) based on Puzzle-CAM.</p> <p>After getting the prediction of each cell from FCAN, I relabeled the cells to 5 levels with labels [1.0, 0.75, 0.5, 0.25, 0] by rule and trained a Swin Transformer model to predict the cell label. </p> <p>An ensemble of these two models plus post-processing that reduces the confidence of the cells on the image border achieves first place with 0.555 on the private LB; a more complex ensemble solution with 6 models reaches 0.566 on the private LB.</p> <p><strong>2. Methods</strong></p> <p><strong>2.1 Fair Cell Activation Network</strong><br> The activations of a CNN on the feature map of an image focus on the most discriminative instance of a class even though many instances may exist. I call this phenomenon unfair activation; to address it, a network was proposed based on Puzzle-CAM.</p> <p><strong>Training</strong><br> <img src="https://bestfitting.github.io/kaggle/hpa2021/figures/FCAN_train.png" alt="FCAN train"><br> <strong>Inference</strong><br> <img src="https://bestfitting.github.io/kaggle/hpa2021/figures/FCAN_inference.png" alt="FCAN inference"><br> The main difference from Puzzle-CAM in the training part of this model is that we select cells instead of splitting the image into a grid.<br> The confidence of a cell is the product of the image-level prediction and the cell-level prediction. 
</p> <p><strong>Details</strong></p> <p>Images are resized to 512x512 px</p> <p><strong>Backbone</strong>: EfficientNet-B0.</p> <p><strong>Losses</strong><br> Lcls is FocalLoss + SymmetricLovaszLoss + HardLogLoss<br> Lml is an ArcFaceLoss metric-learning loss supervised by antibody-id.<br> Lre is MSELoss</p> <p><strong>Augmentation</strong><br> flip, transpose, scale, rotate, crop<br> Adding mitotic spindles with high confidence to other images to generate more positive samples of this type (leads to a boost of 0.02)<br> Test-time augmentation: default, flipud, fliplr, transpose.</p> <p><strong>Validation</strong><br> Select 433 images in the public test set which can be found in the public-hpa dataset, and remove them from the training set of the model.</p> <p><strong>Comparing model scores</strong><br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/hpa2021/figures/CompareModels.png" alt="FCAN compare results"><br> <strong>Comparing the models on real images</strong><br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/hpa2021/figures/Label1_compare.png" alt="FCAN demo label1"><br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/hpa2021/figures/Label2_compare.png" alt="FCAN demo label2"><br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/hpa2021/figures/Label15_compare.png" alt="FCAN demo label15"><br> The left part of these images shows the results from a traditional CNN model, the middle shows the results from Puzzle-CAM, and the right part shows the results from FCAN.<br> The top part of every figure is an image with a positive label; the bottom is a negative image.<br> The number on each cell is the confidence of this cell.</p> <p><strong>2.2 Swin Transformer based cell classification model</strong></p> <p><strong>Data</strong><br> Crop the cells in an image by using the cell segmentation model.<br> The cells were labeled to 5 levels with labels [1.0, 0.75, 0.5, 0.25, 0]. This is a rule-based procedure: after getting the outputs of all cells of the train set from the FCAN introduced above, we give a higher label value if the image probability and cell probability are high, and the cells from an image with label A are given at least 0.25 for this label A. The thresholds of the rule were not sensitive according to my experiments.</p> <p><strong>Model</strong><br> Swin Transformer with pretrained weights small_patch4_window7. <br> The cells are resized to 128x128 px to feed into the network</p> <p><strong>Loss</strong><br> FocalLoss</p> <p><strong>Augmentation</strong><br> Flip, transpose, scale, rotate, crop<br> Test-time augmentation: default, flipud, fliplr, transpose.</p> <p><strong>Validation Strategy</strong><br> Get the max cell confidence in an image, use it as the image confidence, and calculate the image-level mAP. <br> Although this strategy is not always consistent with the public LB, it can reflect the capability of the model to some extent. </p> <p><strong>Inference</strong><br> The confidence of a cell is the product of the FCAN image-level prediction and the cell-level Swin Transformer prediction. </p> <p><strong>2.3 Ensemble</strong><br> Weighted average of the predictions of FCAN and the Swin Transformer. </p> <p><strong>2.4 Post-Processing</strong><br> As the host did not label some cells on the border, if we give such cells high confidence, the False Positive cells will increase, so I trained a model to predict the completeness of a cell. 
If the probability of being a whole cell is very low, the confidence of this cell is multiplied by a low value such as 0.3.</p> <p><img src="https://bestfitting.github.io/kaggle/hpa2021/figures/Border_cell.png" alt="FCAN train"><br> The data to train this model is generated by randomly cutting out some area on the border of a cell, and the target is the area of the remaining part of the cell.</p> <p>The backbone of this model is EfficientNet-B0; 3 epochs is enough to get a quite good model.</p> <p>The score can improve by 0.007 to 0.01 after this step. </p> <p><strong>3. Results</strong></p> <p><strong>Results of the simple solution</strong><br> <img src="https://bestfitting.github.io/kaggle/hpa2021/figures/Simple.png" alt="Simple-Solution"></p> <p><strong>Results of the final submission</strong><br> <img src="https://bestfitting.github.io/kaggle/hpa2021/figures/Complex.png" alt="Complex Solution"></p> <p><strong>4. Conclusion</strong></p> <p>4.1 The Fair Cell Activation Network (FCAN) can increase cell-level recall, which is very important in this competition.</p> <p>4.2 The vision transformer models have shown promising capability.</p> <p>4.3 A larger model does not always mean a better result, as most pre-trained models are designed for ImageNet; our models should find relationships of relative pixel positions instead of abstract semantics. </p> <p>4.4 I found little difference among JPEG, PNG, 8-bit and 16-bit formats.</p> <p>Hi <a href="https://www.kaggle.com/jackchungchiehyu" target="_blank">@jackchungchiehyu</a>, yes, the output is between 0 and 1; if the value &gt; 0.98, it's a border cell, and if the value &lt; 0.1 or &lt; 0.2, we should decrease the confidence of the cell.</p> <p>Hi thedrcat,</p> <p>The activation maps/CAMs cannot find all the cells; they will focus on part of the cells, and this is the key problem with a CNN model. What Puzzle-CAM and my network try to do is force the network not to focus only on the most discriminative part.</p> <p>You can visualize the activations of an image, and you will find out the truth.</p> <p>Hi <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>,<br> Nice to discuss with you!<br> I updated my post with some visualizations; to get these cell probabilities, I forwarded every cell to the network. The confidences were ranked and then min-max normalized to [0, 1] for each class.</p> <p>By the way, there is a problem with grad-cam and other similar methods: the activations or the CAM cannot be compared between images. </p> <p>And we need not compare the probability with other classes; the order of the confidences within a class is what matters.</p> <p>As to the regularization, I think it plays some role, as we force the network to activate the correct part of each cell.</p>
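<p>To make the rule-based 5-level relabeling discussed above concrete, here is a hedged sketch; the author reported binning confidences into 2-3 levels with thresholds that were "not sensitive", so the cut points below are assumptions, not the actual values:</p> <pre><code>import numpy as np

def relabel_cells(img_prob, cell_prob, in_image_labels):
    """img_prob, cell_prob: per-class probabilities of shape (19,);
    in_image_labels: boolean mask of classes present in the image-level
    labels. Returns soft labels drawn from [1.0, 0.75, 0.5, 0.25, 0]."""
    labels = np.zeros_like(cell_prob)
    high_img, high_cell = img_prob > 0.7, cell_prob > 0.7   # assumed cut points
    mid_cell = cell_prob > 0.4                              # assumed cut point
    labels[high_img & high_cell] = 1.0
    labels[high_img & mid_cell & ~high_cell] = 0.75
    labels[~high_img & high_cell] = 0.5
    # any class in the image-level labels gets at least 0.25
    labels[in_image_labels] = np.maximum(labels[in_image_labels], 0.25)
    return labels
</code></pre>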
Human Protein Atlas - Single Cell Classification
HPA 2nd Place Solution [red.ai]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Preface</h2> <p>As promised, we will share our detailed solution within 24 hours. We would like to thank the organizers for this awesome competition: all of us had no experience in dealing with weakly-supervised classification problems, and we have learned a lot from the kind sharing by other kagglers and from our own discoveries. The organizers are very active in this competition; huge props to all of you. I am also grateful to my teammates for making my journey to <strong>Kaggle Competition Grandmaster</strong> smooth and gratifying. To say I am excited is a huge understatement. Without further ado, let's dive into our solution.</p> <h2>TLDR</h2> <p>Our solution consists of a total of 3 simple pipelines. We did not use any advanced techniques from any paper, but we tried to understand the data well and build our model architecture w.r.t. the problem statement. Here is a diagram of our final pipeline:</p> <p><img src="http://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fdf998cf9-f273-4eef-9659-2875d8726a03%2FScreen_Shot_2021-05-12_at_4.53.39_PM.png?table=block&amp;id=d1c0c9a3-db31-4f09-80bf-aa4ffc10eab7&amp;width=2610&amp;userId=053c66e4-923c-48af-b90f-a2fe3ed3608c&amp;cache=v2" alt=""></p> <h2>Pipeline 1: Duo-Branch Cell Model</h2> <ul> <li>Motivation: A duo-branch (duo-head) cell model was designed in a way that it takes cell tiles as input but has the ability to predict at both the cell level and the image level. Multi-tasking has been shown to be effective in improving model learning. A strong champion in dota2 called Jakiro also has two heads.</li> <li>Loss formulation: since the output is cell-level and image-level, we need two losses, one for each output. The final loss is the weighted sum of the cell-level loss and the image-level loss. We used basic <strong>BCE</strong> loss for both. For cell-level, the labels are not certain, so it's intuitive to assign a lower weight (=0.1): l = 0.1*loss_cell + loss_image</li> <li>Data: we used the original data, the external data shared by Phil, as well as some rare-class samples obtained by using the API. The input size for a single cell is 256.</li> <li>Training details: we take 4-channel images and crop&amp;resize the cells first; then we randomly sample N (=16) cells as input to our network. The cells are flattened into a large batch, then we feed them into a CNN and backprop. For data augmentations we used dihedral, shift, rotate, scale, distortions, brightness contrast and cutout. The heavy data augmentations allow the model to generalize better, as cells can be in any form in reality. We train 5 folds and 20 epochs each. A single model (b3, 256) takes about 30 hours to train on a single RTX3090.</li> <li>Inference: for each image, the pooled features are concatenated and fed into the last linear layer to predict at the cell level. 
We generate an image-level prediction and a cell-level prediction and calculate their product as our final prediction.</li> <li>Result: with 16xTTA (scale, rotate, flip at random), 256-size cell tiles, N=16, and an ensemble of efficientnet B3, B5, resnet200d and se_resnext50 backbones, our model scores <code>0.550</code> on the public leaderboard and <code>0.550</code> on the private leaderboard. This single architecture alone could achieve second place in this competition.</li> </ul> <p><img src="http://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F62cb83d5-d1d5-4c3c-85f5-c6d931998873%2FScreen_Shot_2021-05-12_at_12.51.45_PM.png?table=block&amp;id=be18cfce-9ac3-401e-8c9b-ebd63fc4ff38&amp;width=2610&amp;userId=053c66e4-923c-48af-b90f-a2fe3ed3608c&amp;cache=v2" alt=""></p> <h2>Pipeline 2: Image-level Model</h2> <ul> <li><strong>Misc:</strong> The image-level model is similar to the last HPA competition. We feed in whole images and perform multi-label classification. It may surprise you that this pipeline was developed in fastai. We found many of fastai's implementations to be extremely slow (for instance, resize) and went through many days of debugging during the final inference phase; we spent the whole last week attempting to figure out how to submit. In the end, we heavily optimized the fast.ai inference code, which helped us cut the inference time almost in half. Luckily, hard work paid off.</li> <li><strong>Motivation:</strong> we can train at an image level but predict at a cell level (with other cells masked), and the result is very promising. We decided to add this to our pipeline.</li> <li><strong>Data:</strong> we used all data available on the HPA official website and resized it to 512 using only the RGB 3 channels.</li> <li><strong>Training details:</strong> we train 20 epochs with class weights [0.1, 1., 0.5, 1., 1., 1., 1., 0.5, 1., 1., 1., 10., 1., 0.5, 0.5, 5, 0.2, 0.5, 1.] and BCE loss for 2 folds only. We used the average precision score for checkpointing. For data augmentations, we used fastai's <code>aug_transforms(flip_vert=True, max_lighting=0.1, max_warp=0.1, p_affine=0.5, p_lighting=0.5)</code></li> <li><strong>Result:</strong> we had 10 (5x2folds) models and we took the mean of the final output. We use them to predict at both the cell level and the image level and take the mean as our final output.</li> </ul> <p><img src="https://lh4.googleusercontent.com/HB9VR9q024ZJ2meKMjNz1cE_BEi07d4k2hEujYn1rv7-vNkBYsAHj46S1y0kIPaHb6C71x6WZ7DNNw2vpQ2nHi3MXitVIh1Ut20C0NtYog3GdDB0tkM8dTneY94NFq7dtl3VQOjC" alt=""></p> <h2>Pipeline 3: Cell-level Model</h2> <ul> <li><strong>Motivation:</strong> we can train at the cell level using the image-level labels, but it's a bit counter-intuitive. Since this will introduce lots of noise, as image-level labels are not ground truth for cells, we think it's beneficial to train fewer epochs. We ended up training only 2 epochs (1 with the backbone frozen and 1 with the backbone unfrozen).</li> <li><strong>Data:</strong> we used all data available on the HPA official website, used the cell segmentor to crop the cells, and resized the cells to 168 using only the RGB 3 channels. There are a total of 1620178 cropped cells.</li> <li><strong>Training details:</strong> we used fastai's built-in <code>finetune</code> and fastai's learning rate finder to train only 2 epochs with the same class weights [0.1, 1., 0.5, 1., 1., 1., 1., 0.5, 1., 1., 1., 10., 1., 0.5, 0.5, 5, 0.2, 0.5, 1.] and BCE loss. 
We did not use anything for validation.</li> <li><strong>Result:</strong> we had 10 (10x1folds) models. We predicted at the cell level and simply took the mean of the final output.</li> </ul> <p><img src="https://lh6.googleusercontent.com/Y2bRKz-YpUF9MDtGrkBai9DRWtRhHfhmOOsXx57GXomcTma8d5J2oChHXk71ljKZaDOxyGs8s72ZrIYki3dyIldBsWx3Q34oKWiYd1ntJdD-Vfakss6aSB82AZ1z2UBPa2VMDCXE" alt=""></p> <h2>Segmentation Model</h2> <p>Inspired by <a href="https://www.kaggle.com/samusram" target="_blank">@samusram</a>'s Even Faster HPA Cell Segmentation and <a href="https://www.kaggle.com/alexanderriedel" target="_blank">@alexanderriedel</a>'s Segmentation with a Scaling Factor, we modified the original HPA Segmentator to gain speed but keep the segmentation quality.</p> <ul> <li>Post-processing: We slightly changed the label_cell function from the original implementation. We found that in many cases border cells are segmented in a wrong way: some of them are combined together with border cells that have no nuclei (or whose nucleus is outside of the image). We tweaked the watershed distance threshold in order to separate cell masks a little bit further from each other than they were before, then we ignored the masks on the border that became separated from the main cell. Furthermore, we removed the border cells with nuclei whose area was less than half of the median area of the non-border nuclei in the image. And we also removed the cells that did not have corresponding nuclei. Below is an example of the difference between the original label_cell implementation (left) and ours (right):</li> </ul> <p><img src="http://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa924aedb-fdb4-4cc2-bbda-f7dcf7749e4c%2FUntitled.png?table=block&amp;id=6e5fb18d-9550-4b31-9b4e-09a3f2f28cae&amp;width=2610&amp;userId=053c66e4-923c-48af-b90f-a2fe3ed3608c&amp;cache=v2" alt=""></p> <p><img src="http://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fabc60835-6045-45ba-bb9a-923bd4b0d7e9%2FUntitled.png?table=block&amp;id=214638be-245a-4780-81a1-7866fc81081b&amp;width=2610&amp;userId=053c66e4-923c-48af-b90f-a2fe3ed3608c&amp;cache=v2" alt=""></p> <h2>Arcface Model</h2> <p>We also trained an arcface model with an eca_nfnet_l0 backbone to classify antibody_id. Antibody_id can be found on the HPA official website in an XML file. There are a total of 11582 antibody_ids and they are extremely difficult to train on. We used arc_margin_product and BCE loss to train for 15 epochs, then extracted the feature embeddings for the whole dataset. We used the faiss_gpu library for cosine similarity search during inference. It worked well on the public leaderboard but it didn't quite work on the private leaderboard.</p> <h2>Duplicate samples</h2> <p>We found about ~400 images in the public test set duplicated either within the train set or the external data. You can check the csv file at <a href="https://www.kaggle.com/steamedsheep/hpa-2021-duplicated-sample" target="_blank">https://www.kaggle.com/steamedsheep/hpa-2021-duplicated-sample</a>. Our public leaderboard score excluding the boost from duplicates is about 0.58; we have a relatively consistent gap w.r.t. 
the 1st place in both the public and private leaderboards.</p> <h2>Things that didn't work</h2> <ul> <li>Segmentation post-processing on scaled-up outputs of the segmentator led to a slight decrease in the score</li> <li>Tiling a plot with a single cell and classifying such cells with the image-level models.</li> </ul> <h2>Solution code</h2> <ul> <li>Pipeline 1's code is now available on <a href="https://github.com/iseekwonderful/HPA-singlecell-2nd-dual-head-pipeline" target="_blank">github</a></li> </ul> <p>I just input all-zero tiles to the model if the cell number is less than 16. <br> Maximum is the easiest way to get a cell-count-independent feature for the image, and it also makes sense: in many situations an image contains only one or two positive cells of a certain type.</p> <p>out_features=81313 is just a default value; 19 is used here. After pooling, every cell in the batch has 1280 features. Here I calculate the max over the 16 cells and feed it into the image head. </p> <p>The median number of cells in an image is 17; due to the limit of GPU memory, we sample 16 cells.</p> <p>It's pretty simple, please check the code below.</p> <pre><code>import torch
from torch import nn
# Conv2dSame and GeM (generalized-mean pooling) come from the authors' repository.

class EfficinetNet(nn.Module):
    def __init__(self, name='efficientnet_b0', pretrained='imagenet',
                 out_features=81313, dropout=0.5, feature_dim=512):
        super().__init__()
        self.model = torch.hub.load('rwightman/gen-efficientnet-pytorch', name,
                                    pretrained=(pretrained == 'imagenet'))
        # swap the stem so the network accepts 4-channel (RGBY) input
        self.model.conv_stem = Conv2dSame(4, self.model.conv_stem.out_channels,
                                          kernel_size=(3, 3), stride=(2, 2), bias=False)
        # two heads: one cell-level, one image-level
        self.last_linear = nn.Linear(in_features=self.model.classifier.in_features,
                                     out_features=out_features)
        self.last_linear2 = nn.Linear(in_features=self.model.classifier.in_features,
                                      out_features=out_features)
        self.pool = GeM()
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, cnt=16):
        x = self.model.features(x)
        pooled = nn.Flatten()(self.pool(x))                     # per-cell features
        viewed_pooled = pooled.view(-1, cnt, pooled.shape[-1])  # (images, cnt, feat)
        viewed_pooled = viewed_pooled.max(1)[0]                 # max over cells = image feature
        return self.last_linear(self.dropout(pooled)), self.last_linear2(self.dropout(viewed_pooled))
</code></pre>
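<p>A short usage sketch for the dual-head model above, assuming the helper classes <code>Conv2dSame</code> and <code>GeM</code> from the authors' repository are importable; the shapes follow the write-up (16 cell tiles per image):</p> <pre><code># 2 images x 16 cell tiles each -> a flat batch of 32 tiles
model = EfficinetNet(out_features=19)
tiles = torch.randn(2 * 16, 4, 256, 256)
cell_logits, image_logits = model(tiles, cnt=16)
print(cell_logits.shape, image_logits.shape)  # torch.Size([32, 19]) torch.Size([2, 19])
</code></pre>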
Human Protein Atlas - Single Cell Classification
HPA 36th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all we would like to thank the host for this really interesting competition, and congrats to all the winners.</p> <h1>Overview</h1> <p>These are the issues and solutions we had.</p> <p><strong>Issue</strong><br> ① Learning with weakly supervised labels<br> ② The nucleus protruding from the image<br> ③ The problem of imbalance in the number of labels</p> <p><strong>Solution</strong><br> For ③, the problem of imbalance in the number of labels could be mitigated by using external data (downsampling was performed so that each label would have about 10,000 samples). This resulted in an LB score of +0.01. In past competitions with unbalanced labels, focal loss worked well, but this time it didn't.</p> <p>As a measure against the weak supervision in ①, we used differences in pixel values. If there were cells with different labels in an image, we assumed that the staining intensities differ, so that when the image is cropped into single cells, the average pixel values of the crops will vary. Therefore, we improved the accuracy of the model by using a dataset from which cell crops whose mean pixel value fell below 20% of the overall average were removed (the thresholding rule is made precise in the reply below). In addition, by setting this threshold we were able to eliminate a certain number of cases of problem ②, in which the nucleus protrudes from the image. (Unfortunately, we couldn't compare different thresholds because we didn't have enough time.) As a result, the LB score was +0.03~0.04.</p> <h1>Models</h1> <p>The models are divided into two types, a single cell model and an image level model.</p> <p><strong>Single cell model</strong><br> Model:ResNet50+EfficientNet B4<br> Image size:128x128<br> Loss:BCEWithLogitsLoss<br> Augmentation:<br> Flip<br> TTA (n=3)<br> Dataset:<br> Train:179.2k<br> Validation:44.7k</p> <p><strong>Image level model</strong><br> Model:SEResNeXt50 32×4d+EfficientNet B7<br> Image size:640x640<br> Loss:<br> BCEWithLogitsLoss (EfficientNet B7)<br> FocalLoss (SEResNeXt50 32×4d)<br> Augmentation:HorizontalFlip(p=0.5)<br> Dataset:Use only green channel<br> Train: 17.4k<br> Validation: 4.4k</p> <p>That's how I was able to win the silver medal.<br> And with this medal, I was able to be promoted to Kaggle Master!!<br> Very glad!</p> <p>A year before I joined kaggle, I didn't understand Python and machine learning at all, but I feel that by aiming for medals, I've gradually become able to do what I couldn't do. 
In addition, I think that I was able to grow a lot mentally by experiencing a lot of shake-ups and shakedowns XD</p> <p>Kaggle is the best data science learning platform for me.<br> Thanks to kaggle and all kagglers.</p> <p>I will continue to take on challenges.</p> <p>I appreciate your celebration message.</p> <p>I would like to explain the content you pointed out below.</p> <p><em>U</em>: Mean value of all single cell images<br> <em>μ_i</em>: Mean value of the <em>i</em>-th single cell image<br> <em>thr</em>: Coefficient for the threshold (0.2 here)<br> <em>thr</em> × <em>U</em>: Threshold</p> <p>If <em>μ_i</em> &lt; <em>thr</em> × <em>U</em>, the <em>i</em>-th single cell image is removed from the dataset.</p> <p>However, I have not been able to verify whether <em>thr</em> = 0.2 is better, <em>thr</em> = 0.3 is better, or something else is.</p>
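<p>For concreteness, here is a hedged sketch of the filtering rule just defined, assuming the cell crops are given as a list of arrays; the function and variable names are illustrative:</p> <pre><code>import numpy as np

def filter_dim_cells(cell_images, thr=0.2):
    """Drop the i-th cell crop when mu_i < thr * U, as defined above.
    U is approximated here as the mean of the per-crop means."""
    means = np.array([img.mean() for img in cell_images])  # mu_i per crop
    U = means.mean()                                       # global mean value
    return [img for img, m in zip(cell_images, means) if m >= thr * U]
</code></pre>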
Human Protein Atlas - Single Cell Classification
Private 0.458 with 3 single cell classifiers (37th place solution) - a data centric approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Kaggle and HPA for hosting this competition, and congrats to all the winners. </p> <p>I've taken this competition more seriously than any other competition in my Kaggle journey. This post will have a brief description of my approach, which is heavily data-centric. Although my standing isn't among the very best, I'm hoping this post will offer a unique perspective.</p> <p><strong>Overview</strong></p> <p>Since we are expected to predict labels per cell, I focused all my efforts entirely on building the best cell-level image classifiers. Given the size of the HPA dataset, we can extract a lot of cell images. But assigning the image-level label to cell-level images results in a lot of cell-level images that are mislabeled. I initially trained models on images with a single label since they would have the least amount of noise. I was able to push my public score to 0.376 at most. </p> <p>However, it soon became clear that to increase my score even further, only images with single labels would not be enough. So, I started using images with 2 or 3 different labels. As mentioned before, assigning the image-level label to cell-level images results in a noisy dataset. And re-labeling hundreds of thousands of cell-level images by hand is tedious. So, I came up with various heuristics per class which I could use to assess whether an image belongs to a particular class. These heuristics helped me select the best images from a lot of label combinations and make a less noisy dataset. Using this dataset, I trained 3 models: inception v3, mobilenet, and densenet121. The ensemble of these models gave a private score of 0.458 and a public score of 0.464.</p> <p><strong>Selecting the best images using heuristics:</strong></p> <p>The best part about this dataset is that they are all from the same domain. By that I mean that they are all images of cells. And, as a bonus, we have cell segmentation tools. From the images and HPACellSeg, we can get the following important locations of the cell: Cell mask (from segmentor), Nucleus (from segmentor), Cytosol (Cell mask XOR Nucleus mask), Microtubules (red) and Endoplasmic reticulum (yellow). By looking at the intensities of the green channel in these areas, we can judge whether an image belongs to a specific class with high confidence. The following are some of my best heuristics:</p> <p><strong>Nucleoplasm:</strong> High green average in the nucleus, whose location can be found using the segmentor.<br> <strong>Nuclear membrane:</strong> High green average in the border of the nucleus mask.<br> <strong>Nucleoli and Nuclear Fibrillar Center:</strong> High green average in the nucleolus, which usually occurs as a dark spot inside the nucleus.<br> <strong>Microtubules and Endoplasmic reticulum:</strong> High similarity between the green channel and the red/yellow channel. 
Using a hashing algorithm like PHash works well.<br> <strong>Cytosol:</strong> High green average in the cytosol (Cell mask XOR Nucleus mask).<br> <strong>Mitochondria:</strong> High green intensities in the microtubules.</p> <p>These heuristics are in no way perfect, but they do help in removing incorrectly labeled images and outliers. As an example, consider the label '2|16'. Good quality cell-level images of this label should have high green staining in the nucleoli and cytosol. Therefore, we can discard images that don't have either of those. To give a more specific example, <a href="https://drive.google.com/file/d/14Hs2mnJz2KeoW6rUU1LIln5f3aYGQLGO/view?usp=sharing" target="_blank">this</a> image has higher green staining in both these locations compared to <a href="https://drive.google.com/file/d/15xMAkD9xXGbEoGa6JVcTbs7vkgc9EW_c/view?usp=sharing" target="_blank">this</a> one. So, we can keep images like the first and discard those that are similar to the second.</p> <p>I wasn't able to come up with amazing heuristics for all classes. For rarer classes like Mitotic spindle, I had to rely on hand labeling. I ended up with a dataset of ~250k cell-level images consisting of around 144 unique label combinations. I used this to train an inception v3, mobilenet, and densenet121. The images were resized to 448 x 448 and simple augmentations (rotations, flips) were used. The ensemble of these models gave a private score of 0.458 and a public score of 0.464. </p> <p><strong>Conclusion</strong></p> <p>Due to time and resource constraints, I wasn't able to experiment with too many other models. Combining image-level and cell-level predictions seemed to be an approach used by almost everyone, but I got the above score without image-level models. My cell-level model submissions only improved very little (and once decreased). Clearly, I did not have too much success with combining image-level predictions. But it seems like those who were able to use image-level and cell-level images got the best scores. </p> <p>You may say that this is an extreme focus on the data, and it is. But I felt it was necessary for me if I wanted a good score, as I do not have access to expensive hardware (I had to rely on my HP laptop, kaggle GPUs and Colab). Filtering out images improves the quality of the dataset while using a lower quantity of images without sacrificing model performance. </p> <p>Compared to other top solutions, my models are simpler (no transformers, no segmentation models, no multiple losses, etc). If you've had success with combining image-level and cell-level predictions without focusing on data preprocessing, I encourage you to try training on <a href="https://www.kaggle.com/novice03/clean-data" target="_blank">my dataset</a>. I cannot guarantee that all the images are correctly labelled, but I assure you that there is a high proportion of correctly labelled images and also images from rare classes from the external dataset.</p> <p>Finally, I'd like to thank <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> and <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> for their notebooks and <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/rdizzl3" target="_blank">@rdizzl3</a> for their datasets and the competition hosts for helping all of us.</p>
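<p>To make the green-channel heuristics above concrete, here is a hedged sketch of three of them, assuming boolean masks from HPACellSeg and a float green-channel image; the function names and the erosion width are illustrative, not the author's code:</p> <pre><code>import numpy as np
from scipy.ndimage import binary_erosion

def nucleoplasm_score(green, nucleus_mask):
    # "High green average in the nucleus"
    return green[nucleus_mask].mean()

def nuclear_membrane_score(green, nucleus_mask):
    # border of the nucleus mask = mask minus its eroded version
    border = nucleus_mask & ~binary_erosion(nucleus_mask, iterations=3)
    return green[border].mean()

def cytosol_score(green, cell_mask, nucleus_mask):
    # Cytosol = Cell mask XOR Nucleus mask
    cytosol = cell_mask ^ nucleus_mask
    return green[cytosol].mean()
</code></pre> <p>Cell crops scoring below a per-class cutoff for every class in their label set would then be discarded from the training data.</p>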
Human Protein Atlas - Single Cell Classification
Private 22nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!<br> I didn't submit an NFNet-only inference, but the LB score increased by 0.003 when it was added to the ensemble.</p> <p>Thank you <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a><br> Hope to work together in a future competition soon.</p> <p>I tried the models above plus ResNet200D and SEResNeXt101.<br> Those didn't help the score in my case.</p> <p>Congratulations to all the winners, and thanks so much for hosting such an interesting competition!!<br> This task was really challenging mainly in two points: weak labels and class imbalance.<br> I spent a hard time solving them, and learned a lot along the way.</p> <h1>Summary</h1> <p>・I tackled this competition as a classification task (I didn't use any segmentation models other than the HPA Cell Segmentator).<br> ・Environment: Kaggle Notebook and Datasets, TPU training, GPU inference, PyTorch<br> ・Cell tiles: 'nucleus BBox center' chosen as the tile center, 'cell BBox short side' chosen as the tile side length → score improved!! (see the sketch at the end of this post)<br> ・2-Stage Training Pipeline (For the 2nd stage, pseudo-labels, thresholding and sampling methods were used.)<br> ・Green Image-Level Label prediction further added, shared by <a href="https://www.kaggle.com/h053473666" target="_blank">@h053473666</a> </p> <h1>Training</h1> <p>My pipeline is the following.<br> CV: multilabel stratified group kfold (grouped by image id)<br> augmentation: flip, random rotate, shift scale rotate<br> loss: BCEWithLogitsLoss<br> optimizer: Adam<br> scheduler: cosine annealing<br> number of cell tiles used as input: about 70000 (1st stage), about 75000 (2nd stage)<br> epochs: 5 eps w/o early stopping<br> training time: 1~2 hrs per model</p> <p><img alt="Screen Shot 2021-05-12 at 12 54 37" src="https://user-images.githubusercontent.com/63890401/117921313-7f8f5e80-b32b-11eb-85cd-aa79dee1314d.png"></p> <h1>Inference</h1> <p>I used <a href="https://www.kaggle.com/samusram" target="_blank">@samusram</a>'s fast segmentator with a few modifications.<br> Classification predictions by the models above were combined with the segmentator's instance segmentation results.<br> <a href="https://www.kaggle.com/drtausamaru/hpa-ct-ill-inference-private" target="_blank">https://www.kaggle.com/drtausamaru/hpa-ct-ill-inference-private</a><br> <img alt="Screen Shot 2021-05-12 at 12 48 40" src="https://user-images.githubusercontent.com/63890401/117918045-4b18a400-b325-11eb-84b6-b53930e576b1.png"></p> <h1>What didn't work for me (score dropped)</h1> <p>・Cell tiles other than my approach (whole-cell tiles, cell BBox long side length, setting all values outside the targeted cell to 0, only the green signal used…etc)<br> ・focal loss<br> ・BCEWithLogitsLoss with pos-weight argument &gt; 1.0<br> ・label smoothing<br> ・pseudo hard labels (0/1)<br> ・For the 1st stage, using &gt;=4 cell tiles at once<br> ・MLSKF (not grouped)<br> ・Models: ResNet200D, SEResNeXt101<br> ・cell-line classification model</p> <h1>What I didn't try</h1> <p>・RGBY<br> ・cell tiles: &gt;256x256, uint16 <br> ・other segmentation models (object detection)<br> ・other augmentations (mixup, brightness modification, cutout…etc)</p> <h1>In the end</h1> 
<p>I really enjoyed this competition because the task itself is interesting and challenging, there were only a few public kernels of just ensembling or forking, and I was able to compete with top Kagglers.<br> Thanks for reading :)</p> <p>Thanks for the comment.<br> Pseudo-labels are created by averaging only cell-level models, as shown in the solution image above.<br> I used XLA for TPU.</p> <p>You mean, model backbone or pipeline?</p>
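<p>A hedged sketch of the cell-tile rule from the Summary (tile centered on the nucleus BBox center, side length equal to the short side of the cell BBox), assuming boolean masks from the segmentator; border handling is simplified to clipping, and all names are illustrative:</p> <pre><code>import numpy as np

def crop_cell_tile(image, cell_mask, nucleus_mask):
    ys, xs = np.where(nucleus_mask)
    cy = (ys.min() + ys.max()) // 2          # nucleus BBox center
    cx = (xs.min() + xs.max()) // 2
    ys, xs = np.where(cell_mask)
    side = min(ys.max() - ys.min(), xs.max() - xs.min())  # cell BBox short side
    half = side // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    return image[y0:y0 + side, x0:x0 + side]  # may be truncated at the image border
</code></pre>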
Human Protein Atlas - Single Cell Classification
💥💥Human Protein Atlas - Solution Compilation 💥💥
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>2 nd Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238645" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238645</a><br> 5 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238527" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238527</a><br> <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238487" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238487</a><br> 6 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238365" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238365</a><br> 7 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238507" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238507</a><br> 8 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238361" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238361</a><br> 9 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238678" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238678</a><br> 10 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238364" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238364</a><br> 12 th Place :<a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238343" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238343</a><br> 13 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238333" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238333</a><br> 16 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238371" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238371</a><br> <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238552" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238552</a><br> 18 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238385" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238385</a><br> 21 st Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238474" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238474</a><br> 22 nd Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238387" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238387</a><br> 23 rd Place : <a 
href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238512" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238512</a><br> 24 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238401" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238401</a><br> 27 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238439" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238439</a><br> 37 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238380" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238380</a><br> 41 st Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238624" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238624</a><br> 43 rd Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238504" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238504</a><br> 46 th Place : <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238373" target="_blank">https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/238373</a></p>
Human Protein Atlas Image Classification
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations too and you are welcome.</p> <p>Yep, Thank you. The third stage was funny.</p> <p>Congratulations to each of the kagglers, it was a very interesting game, and thanks to each of the selfless kagglers on the discussion. In the meantime, I really appreciate the hard work of each of my teammates; without their experiments and their GPUs, I don't think we could have gotten the gold medal.</p> <h2>The following is our experiment:</h2> <p><strong>[Update]</strong></p> <p><a href="https://github.com/Gary-Deeplearning/Human_Protein">The code of our solution </a></p> <h3>The first stage experiments</h3> <p>&gt; We used the external HPA data in Gray format (512 size)</p> <p><strong>Models</strong></p> <ul> <li>res18 (batchsize=64)</li> <li>res34 (batchsize=32)</li> <li>bninception (batchsize=32)</li> <li>inceptionv3 (batchsize=32)</li> <li>xception (batchsize=24, P40-24G) </li> <li>Se-resnext50 (batchsize=24, P40-24G)</li> </ul> <p><strong>Data Augmentation</strong></p> <ul> <li><p>train &gt; Add/Multiply/Crop/Affine/Fliplr/Flipud</p></li> <li><p>12 TTA</p></li> </ul> <p><strong>Optimizer</strong></p> <ul> <li>NAdam with different LR for different layers (see the sketch at the end of this post)</li> </ul> <p><strong>Loss Function</strong></p> <ul> <li>bce</li> </ul> <p><strong>Threshold</strong> &gt; We tried searching for thresholds, but it did not work, so in the end we simply chose 0.205 as the threshold.</p> <p><strong>Result</strong></p> <ul> <li>The best score from a single model with 5 folds was 0.597 (public)</li> </ul> <h3>The second stage experiments</h3> <p>&gt; we changed the format of the external HPA data (you can find this in the discussion)</p> <p><strong>Ensemble</strong> we ensembled the first and second stage models, which gets a 0.62+ score (public)</p> <h3>The third stage experiments</h3> <p>&gt; <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77289">My teammate, shisu's method</a></p> <p>Yep</p> <p>I've uploaded our team's code to GitHub, thanks to all kagglers.</p>
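<p>A hedged sketch of "NAdam with different LR for different layers" via optimizer parameter groups; the backbone split and the learning rates here are assumptions for illustration, not the team's settings:</p> <pre><code>import torch
from torchvision.models import resnet34

model = resnet34(num_classes=28)
optimizer = torch.optim.NAdam([
    {'params': list(model.conv1.parameters()) + list(model.bn1.parameters()), 'lr': 1e-4},
    {'params': model.layer1.parameters(), 'lr': 1e-4},
    {'params': model.layer2.parameters(), 'lr': 1e-4},
    {'params': model.layer3.parameters(), 'lr': 3e-4},
    {'params': model.layer4.parameters(), 'lr': 3e-4},
    {'params': model.fc.parameters(), 'lr': 1e-3},  # freshly initialized head learns fastest
])
</code></pre>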
Human Protein Atlas Image Classification
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Great idea. Though what I would really like is to compare my per-class F1-Scores with the ones of the top winners. I guess when the top winners upload their code I can generate their submissions and extract theirs as well. Nice.</p> <p>Hey kagglers, you know what they say: Be careful of overfitting, rare classes and bestfitting! Hah.</p> <p>Jokes apart, congratulations to all the winners, this was a really tough competition. Thanks to the Kaggle organization and Human Protein Atlas for providing such a great challenge, and also thanks to @brian @heng and @tilii for all the useful posts, I have learnt a lot from you. </p> <p>I am going to briefly detail my solution:</p> <h2>Hardware</h2> <p>First I had a 980Ti GPU, 2600k CPU and 14GB RAM. Four weeks ago I bought a 1080Ti.</p> <h2>Preprocessing</h2> <p>Removing duplicates was essential for improving your validation set and consequently for finding your class thresholds. To do so I compared image channels using the imagehash library proposed by @tilii</p> <h2>Base model</h2> <p>ResNet50 trained with float16 precision, RGB, 1024x1024, lots of data augmentation, weight decay, sgd, gradient clipping, bce loss and multi-scale resolution images. All in PyTorch. Best single model [0.628 public, 0.553 private]</p> <h2>Gamma correction</h2> <p>Due to the dark exposure of some images, half of my models were trained with an extra initial layer to correct the gamma. The goal was to find the best gamma per channel for all the images. I tried to make a layer that extracted a custom gamma per image, taking the image into account as context, but without success. I tried things like i) using statistical image values as features and ii) applying a small convolution to the image then an fc; all of them were very unstable. The approach that worked best was the simplest: learn independent gammas (γ_1, γ_2, γ_3) per channel and only use one parameter initialized at 1 without an activation. This reduced my global val_loss and increased my internal F1_score on some classes. <img src="https://i.postimg.cc/rmh5h58T/Captura-de-pantalla-2019-01-11-a-les-12-34-16.png" alt="Gamma correction formula"> Amusingly all the learnt gammas were quite similar, in the range [0.6-0.65], meaning that the images were being brightened. I guess dark images benefitted from this layer but others, especially noisy images, didn't; that's why I also trained networks without it.</p> <h2>What should I trust?</h2> <p>Is my model overfitting? This was the big question of the whole competition. It was hard answering it for the rare classes, but for the other ones I found a pattern that I believe was quite useful. When comparing models, most of the time the models that were overfitting had the largest F1-Score in a zone where Precision was extremely dominating over Recall. See the first figure below for an example. My hypothesis is that such models are learning things that are only present in the training set (similar cell/protein types, microscope features, who knows) and then become very confident on some samples, increasing the Precision. 
When this happened I lowered thresholds or I discarded the model and chose another one (second figure). <img src="https://i.postimg.cc/SR3cW84K/Captura-de-pantalla-2019-01-11-a-les-13-11-08.png" alt="F1 curves"></p> <h2>Model variety</h2> <p>I trained 11 models, 2 DenseNets121 and 9 ResNets50. They were fast to train and provided the best results on my val_set. Each model had different settings, such as: excluding the data leak, including it, changing the model seed, using yellow images, different class balance, full-sized images, crops...</p> <h2>Rare classes</h2> <p>I did not know how to tackle them properly and I guess my fall has been mainly due to these classes (it's a pity that we cannot see the F1-Score per class). I was about to try some Few-Shot Learning approaches, but the image size was a limiting factor. Apart from this, I assume that the data leak eliminated most of the rare samples, so detecting even just one of them was crucial (maybe too much). I hope I can learn from the top solutions about this.</p> <h2>Ensembling</h2> <p>I tried Logistic Regression, Averaging, Stacking and XGBoosting. On abundant classes it worked, but with others it overfitted and I thought it was not reliable. In the end I did the following:</p> <p><em>Expert models</em> I decided which models were the best for specific classes, for instance: the yellow-image model for classes 6, 7; the cropped-image model for large classes such as 0, 25; full for 16; etc. These decisions were made taking into account hpa web info data, F1-curves, validation data, and the public leaderboard.</p> <p><em>One vote ensembling with top predictions</em> Apart from expert models, I increased the recall of my predictions by adding the top predictions from other models. Most of my models scored similar F1-scores per class, so it made sense. I increased the thresholds of non-expert models using the heuristic formula T_opt = T_opt + (1 - T_opt) * 0.3 and included them on a one-vote-is-enough basis.</p> <h2>Things that did not work</h2> <ul> <li>Training a Multi-Head Attention module (Transformer paper) on the predictions of crops of an image in order to focus the attention in the spatial domain. </li> <li>Training GapNet-style/Feature Pyramid networks with ResNet18 as the backbone did not provide any benefits in my setup (I guess I did not play enough with it, because @Dieter was able to make it work).</li> <li>Training with 2048x2048 images; I guess the batch size was too small and the extra resolution was not that important for the majority of the samples.</li> </ul> <p>I had some models with only comp data because it was cleaner. Nevertheless, for rare classes I trained with HPA leaks.</p> <p>Two days. The key is using float16 in PyTorch; if you do that you can double the size of your batch.</p> <p>I made a new layer in PyTorch. It's just a layer that raises each channel to a learnable scalar power.</p>
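<p>A hedged sketch of that per-channel gamma layer: one learnable parameter per channel, initialized at 1, no activation. The clamping is my addition for numerical safety, and inputs are assumed to be scaled to [0, 1]:</p> <pre><code>import torch
import torch.nn as nn

class GammaCorrection(nn.Module):
    """Learnable per-channel gamma: out_c = in_c ** gamma_c."""
    def __init__(self, channels=3):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))  # initialized at 1

    def forward(self, x):
        # x: (B, C, H, W); raise each channel to its own gamma
        g = self.gamma.clamp(min=1e-3).view(1, -1, 1, 1)
        return x.clamp(min=1e-6) ** g

# usage sketch: prepend it to any backbone, e.g.
# model = nn.Sequential(GammaCorrection(3), torchvision.models.resnet50())
</code></pre> <p>With gammas learnt in the [0.6-0.65] range, values in [0, 1] are raised, i.e. the images are brightened.</p>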
Human Protein Atlas Image Classification
15th place solution with Deep Supervision
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is a writeup on my team's approach to this challenge</p> <p>Data: Initially we split the Kaggle dataset into 6 folds. After the external data was discovered, we incorporated it directly into the train part in every fold (so we had roughly 100k train images and 5k valid. images). We trained and predicted using resized 512x512 RGB/RGBY images.</p> <p>Models: SE_ResNext50/101 and Inceptionv3/v4. We realized that multi-scale predictions are really crucial, since the trace of a protein may appear in a very small region of the image. We solved this problem by applying the deep supervision trick. The chosen networks consist of multiple blocks of gradually decreasing resolutions; on top of these blocks, a small auxiliary classification network (Global Pooling + BN + Dense) was built. The final loss is an equal sum of these auxiliary losses. At inference time, we averaged all the auxiliary blocks' predictions and the final Dense layer predictions. For Inception networks, we picked 6 mixed blocks. For ResNext networks, blocks 3, 4, 5 and feature pyramid network outputs were used.</p> <p>Loss: Pure binary cross entropy, no upsampling/downsampling of rare classes. Focal loss didn't work well for us.</p> <p>Threshold: 0.5 for several popular classes, 0.2 for the rest</p> <p>The final submission is a weighted average of the 4 models above. Stacking gave a strong 5% boost on the local F1 score yet sucked both on the public and private LB. My teammate also did extra postprocessing on the submission file, but we didn't go with it in the end since it gave a slightly lower public LB score. Sadly, this post-processed submission would have given us another gold medal XD</p> <p>Finally, huge congrats to all the winning teams and gold medalists XD</p>
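<p>A hedged sketch of the deep supervision trick described above, assuming a generic backbone that exposes intermediate feature maps; the head layout follows the stated Global Pooling + BN + Dense recipe, while everything else (names, class count of 28) is illustrative:</p> <pre><code>import torch
import torch.nn as nn

class AuxHead(nn.Module):
    """Auxiliary classifier on top of one intermediate block:
    Global Pooling + BN + Dense."""
    def __init__(self, in_ch, num_classes=28):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.bn = nn.BatchNorm1d(in_ch)
        self.fc = nn.Linear(in_ch, num_classes)

    def forward(self, feat):
        x = self.pool(feat).flatten(1)
        return self.fc(self.bn(x))

def deep_supervision_loss(heads, feats, target):
    """Equal sum of per-head BCE losses; `feats` are the chosen intermediate
    feature maps of the backbone. At inference, average the heads instead."""
    criterion = nn.BCEWithLogitsLoss()
    logits = [head(f) for head, f in zip(heads, feats)]
    return sum(criterion(l, target) for l in logits)
</code></pre>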
Human Protein Atlas Image Classification
231st place keras solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners and great thanks to the discussion participants!</p> <p>We started the competition using Keras because we were familiar with it, and then preferred to stick to it. </p> <p>At first, we were just trying different models (Inception, ResNets, Xception) and some small architectures (MobileNet, SqueezeNet) and chose to use a pretrained ResNet50 from Keras applications. After a long time with it and no result above 0.47 LB, we switched to Inception networks. </p> <p>All models were trained on a single fold: 85% train and 15% validation. We split the data using the iterative stratification package from the discussion. From the external data we used only rare classes and excluded all duplicates from training.</p> <h2>Things that worked</h2> <ul> <li>F1 loss and BCE with weighted positive predictions</li> <li>SGD with warm restarts (cosine learning rate, initial lr=0.1)</li> <li>Log-dampened class weights (see the sketch at the end of this post)</li> <li>Oversampling of rare classes by 5x</li> <li>Undersampling classes 0 and 25 by 0.5x</li> <li>"Standard" augmentations (flip rl/ud, shift, rotate, shear, brightness)</li> <li>Squeeze-Excitation module</li> <li>Weight decay</li> <li>Randomly dropping the green channel and all classes in 1% of images led to +0.04 LB</li> <li>BatchNorm running mean/variance update on the test set</li> </ul> <h2>Things that did not work</h2> <ul> <li>Imagenet weights (we still wonder why)</li> <li>Focal loss</li> <li>Threshold fitting</li> <li>GapNet-like things</li> <li>Convolutional block attention module</li> <li>Filling empty predictions with the most popular class</li> <li>Mixup, probably because of the specialty of the green channel</li> <li>Heavy augmentations (blur, various types of noise, dropout, contrast, elastic transformations, sharpen/emboss)</li> <li>Small crops (less than 40% of the number of pixels)</li> <li>AdamW</li> <li>Global average + global max pooling</li> <li>TTA (but it didn't hurt)</li> <li>BatchNorm in dense layers reduced performance dramatically</li> </ul> <h2>Things that we regret</h2> <ul> <li>Realized the way we overfit (high recall, low precision) too late (2 days before the end of the competition)</li> <li>Not using cross-validation</li> <li>Not ensembling different architectures</li> <li>Closer to the final: using Keras</li> </ul> <h2>Top solution</h2> <p>Our top-performing submission scored 0.505 on the private LB. We didn't choose it because it performed worse both in local validation and on the public LB. It was an ensemble of all the best SE-BNInception models (6 models) with a fixed threshold; 4 of them were RGBY and 2 were RGB.</p> <h2>Hardware</h2> <p>We primarily used a server with one 1080Ti and 64GB of RAM. To reduce the disk bottleneck, we put all training data into a ramdisk with plenty of swap (~3x load speedup). Occasionally we used standard workstations with a 1060 and 8-16GB RAM.</p>
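<p>The exact log-dampening formula is not given in the write-up, so here is a hedged sketch of one common variant: rarer classes get larger weights that grow only logarithmically with rarity; the <code>mu</code> constant is an assumption:</p> <pre><code>import numpy as np

def log_dampened_weights(class_counts, mu=0.5):
    """One common 'log-dampened' class-weight heuristic: weight grows with
    log(total / count), floored at 1 so frequent classes are not down-weighted."""
    counts = np.asarray(class_counts, dtype=float)
    total = counts.sum()
    w = np.log(mu * total / counts)
    return np.maximum(w, 1.0)
</code></pre>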
Human Protein Atlas Image Classification
25th solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Data Processing: simple rotate and flip on 512x512/1024x1024 RGBY images</p> <p>Loss: I used focal loss (a sketch follows below) and took lots of time to optimize gamma (I used 2 and 2.5 in the final version); it seems alpha=1, gamma=2.5 actually works better on the public leaderboard but not so well on the private leaderboard.</p> <p>Model: I used se-resnext50 on 512x512/1024x1024 images as the baseline model and resnet34 on 512 size as a low-capacity model. I made 5-fold cross-validation models (24 models in total) and weighted-averaged them (I gave more weight to the gamma=2.5 models). Due to the limitation of time and resources, I couldn't build the full planned ensemble.</p> <p>Thoughts: Although I exhausted myself making a 24-model ensemble and it seemed to work well on the public leaderboard, my hidden best private score came from an ensemble where I simply OR'ed all the output labels of two single models with relatively low public scores, (0.571/0.538) and (0.561/0.525) respectively (※ (public LB/private LB)). It turned out that this reached 0.550 on the private LB, slightly better than my final ensemble model at 0.547. The two models were a resnet34 and an se-resnext50, both on 512-size images, so maybe 24 models was too much for an ensemble.</p>
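<p>A hedged sketch of a standard multi-label focal loss with the settings reported above (alpha=1, gamma=2.5); this is the textbook formulation, not the author's actual code:</p> <pre><code>import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=1.0, gamma=2.5):
    """Per-element BCE reweighted by (1 - p_t)^gamma, so easy examples
    contribute less and hard examples dominate the gradient."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)  # probability assigned to the true label
    return (alpha * (1 - p_t) ** gamma * bce).mean()
</code></pre>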
Human Protein Atlas Image Classification
30 place solution writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, we used pytorch with pretrained SEResnext50</p> <p>First of all, thanks to Kaggle and the HPA team for this interesting competition! This is the most difficult and memorable competition since I started my journey as a Kaggle competitor. I learned a lot from this.</p> <p>Overall, our solution is quite basic, but there are some interesting points inside it. I hope they will be useful for you. </p> <ol> <li>Data <br> We use RGBY plus the additional HPAv18 data. </li> <li>Preprocessing <br> <ul><li>We were only able to train with 512x512 images. To save disk space and loading time, we resize the HPAv18 images to 512x512. </li> <li>Upsampling rare classes: for any class with fewer than 450 images, we upsample to 450 using random rotations in (0, 360). </li></ul></li> <li>Model <br> We use the following models for the final submission: <ul><li>SEResnext50 (public: 0.604, private: 0.534)</li> <li>SEResnet50 (public: 0.599, private: 0.534)</li> <li>InceptionV3 (public: 0.585, private: 0.521) </li> <li>Resnet34 (public: 0.580, private: 0.508) </li></ul></li> <li>Loss <br> In our experiments, weighted BCE loss performs best. Other options such as FocalLoss and F1 loss did not work; we were stuck at public 0.577 LB for a long time when using them. <br> After that, we tried a tricky loss schedule: train the model with weighted BCE loss on even epochs and F1 loss on odd epochs (a sketch follows this post). This gained us an additional 0.04 for each model and the ensemble. </li> <li>Augmentation <ul><li>Rotation (0, 360) </li> <li>HFlip, VFlip </li> <li>Random rotate 90</li> <li>Affine </li></ul></li> <li>Threshold <br> Fixed threshold: 0.2 </li> <li>Ensemble <br> Weighted average: (1 * resnet34 + 1 * inceptionv3 + 1 * seresnet50 + 3 * seresnext50) / 6 </li> </ol> <p>Cheers,</p> <p>Edit: We use 3 TTAs: Flip, VFlip and Normal.</p> <p>The test set is divided into two parts, called public and private. We don't know which test case belongs to which part, but we do know the percentage split. When you submit, the system gives you a public score evaluated on the public test set until the deadline. The private score is used for the final ranking.</p> <p>We use the stratified k-fold approach mentioned in the discussion. </p>
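<p>The alternating-loss trick in point 4 is the most unusual part of this write-up; here is a hedged PyTorch sketch of how it could look (the <code>pos_weight</code> values are assumptions).</p> <pre><code>import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss(pos_weight=torch.full((28,), 5.0))  # weighted BCE

def soft_f1_loss(logits, targets, eps=1e-8):
    # differentiable surrogate: probabilities stand in for hard 0/1 predictions
    p = torch.sigmoid(logits)
    tp = (p * targets).sum(dim=0)
    f1 = 2 * tp / (p.sum(dim=0) + targets.sum(dim=0) + eps)
    return 1 - f1.mean()

def epoch_loss(epoch, logits, targets):
    # weighted BCE on even epochs, soft F1 on odd epochs
    if epoch % 2 == 0:
        return bce(logits, targets)
    return soft_f1_loss(logits, targets)
</code></pre>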
Human Protein Atlas Image Classification
31st place solution (16th in public lb)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the HPA team for holding this interesting competition. And thanks to my teammates: <a href="https://www.kaggle.com/xf1994">xf1994</a>, <a href="https://www.kaggle.com/cheninghan">hanhan chen</a>, <a href="https://www.kaggle.com/zhouyanghust">T-mac</a>, ...</p> <p>Below are some points of our solution.</p> <p><strong>Input</strong>: we trained our models using three different input sizes, 1024*1024, 786*786 and 512*512, with RGBY images. We initialize the conv weights of the Y channel from the red channel of the pretrained models. Besides, we tried a self-designed dual-path network using two pathways of inputs (one for RGB, one for Y).</p> <p><strong>Models</strong>: we trained around 20+ models including se-resnet, se-resnext, resnet, resnext, dpn, inceptions, inception-resnet and some self-designed networks. Generally these networks show similar performance except DPN, which performed poorly in this task. Further, we reproduced the so-called MLFN network from: <a href="https://arxiv.org/abs/1803.09132">https://arxiv.org/abs/1803.09132</a>; it did give us some improvement compared with its counterpart ResNext50.</p> <p>Some good single models include: seresnext101: public 0.572, private 0.515; self-designed dualpath: public 0.574, private 0.502.</p> <p><strong>Data augmentation</strong>: random horizontal flip and random vertical flip.</p> <p><strong>Loss</strong>: weighted BCE loss, thanks to <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/74065">Tilii</a>.</p> <p><strong>Thresholds</strong>: one trick that was useful for us on the public LB was finding thresholds for each class separately, set according to the frequency of each class (see the sketch after this post). Adjusting the thresholds according to the public LB helped a lot. Finally, for most classes we set the thresholds around 0.21; for some scarce classes, around 0.1.</p> <p><strong>Ensemble</strong>: the most useful trick was ensembling. We ensembled our models purely according to public LB performance. This gave us 16th on the public LB; however, it dropped to 31st on the private LB. We checked our submissions on the private LB: our best model there scored 0.554, with a public LB score of only 0.608. That model averaged its members more evenly compared with our best model on the public LB.</p> <p><strong>Tried but didn't work</strong>: focal loss, oversampling, some augmentations (scale, rotate, shift)...</p> <p>We will release our code soon...</p>
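<p>A hedged sketch of the per-class thresholding described above: roughly 0.21 for frequent classes and roughly 0.1 for scarce ones. The frequency cutoff is an assumption added for illustration.</p> <pre><code>import numpy as np

def frequency_thresholds(class_counts, common_th=0.21, rare_th=0.10, cutoff=500):
    counts = np.asarray(class_counts)
    return np.where(counts &gt;= cutoff, common_th, rare_th)

# probs: (n_images, n_classes) sigmoid outputs; thresholds broadcast per class
# preds = (probs &gt; frequency_thresholds(train_class_counts)).astype(int)
</code></pre>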
Human Protein Atlas Image Classification
33rd Place Algorithm on Private LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I participated in this competition as a kind of test. I wanted to develop an algorithm that generates reasonable models without much human intervention.</p> <p>Since data preprocessing and augmentation are very labor-intensive, I focused on cross-validated, robust single models across various architectures and ensembled them, with a very clean and basic script.</p> <p>My single models scored around 0.55~0.57 on the public LB. Unfortunately, contrary to my expectations, ensembling LOTS of models wasn't that effective (0.601 on the public LB). I learned that the basic single model should perform better than this. If I had more time (I joined late), I would definitely try to improve my single models by changing input size, data augmentations, network architectures, ROI cropping, and ensembling with other known hand-crafted features for this dataset.</p> <p>Here is the github repository; I hope this is helpful for many people. <a href="https://github.com/ildoonet/kaggle-human-protein-atlas-image-classification">https://github.com/ildoonet/kaggle-human-protein-atlas-image-classification</a></p> <h3>Models</h3> <ul> <li>vgg16</li> <li>resnet50, resnet101, ...</li> <li>densenet121, densenet169 *</li> <li>inception v3, inception v4 *</li> <li>se152</li> <li>polynet</li> <li>NASNet, PNASNet</li> </ul> <h3>Implementations</h3> <ul> <li>Data Loader for External Data and Merger</li> <li>Basic data augmentations <ul><li>Rotation, Flip *</li> <li>Channel drops</li></ul></li> <li>16 Test-Time Augmentation</li> <li>5-Folds Cross Validation</li> <li>Simple Threshold Search Algorithm</li> <li>Ensembles <ul><li>Test-Time Augmentation Averaging *</li> <li>Majority Voting *</li> <li>Fully-Connected Neural Network</li> <li>logits -&gt; output</li> <li>logits + features -&gt; output</li> <li>XGBoost</li></ul></li> <li>Loss <ul><li>Soft F1 Loss *</li> <li>Binary Cross Entropy *</li> <li>Focal Loss</li> <li>MultiLabelMarginLoss</li></ul></li> </ul>
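<p>Majority voting is listed among the ensembling methods above; a minimal numpy sketch, assuming a stack of binary predictions from K folds/models:</p> <pre><code>import numpy as np

def majority_vote(all_preds):
    """all_preds: (K, n_images, n_classes) array of 0/1 predictions."""
    votes = all_preds.sum(axis=0)                    # yes-votes per label
    return (votes * 2 &gt; len(all_preds)).astype(int)  # strict majority wins
</code></pre>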
Human Protein Atlas Image Classification
39th solution - Attention Gated Resnet18 (single model without cv)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry. I used HPA external data. Already edited.</p> <p>Our solution is based on the Attention Gated Network (AGN). In our model, Resnet18 is used as the backbone, and the feature maps of the last 3 blocks are used to generate attention gates. Given the attended features at the last 3 blocks of Resnet18, we combine them for the final prediction by averaging. We randomly crop the original image (512x512) into 3 different sizes (256, 384, 512) to fit 3 different AGN models, and finally ensemble their predicted probabilities on the full size (512x512) with threshold 0.2 as the final result (see the sketch after this post). Our solution used only a single model type and 512x512 PNG files with HPA external data, and didn't use cross-validation.</p> <p>Attention Gated Network (AGN): <a href="https://arxiv.org/pdf/1804.05338.pdf">https://arxiv.org/pdf/1804.05338.pdf</a> <img src="https://drive.google.com/file/d/18lVqM3YEI2Z6u-b3LFT5zqr6gKR8_Dm_/view" alt="Attention Gated Network"> <img src="https://drive.google.com/file/d/1zSJ1KZOIn-ngS3LPGL-kOROL7VOcUAG3/view" alt="Attention Unit"></p> <p><strong>Dataset</strong> Kaggle data and HPA external data (512x512 RGBY), split with the Multilabel Stratification Python package; we did not use the TIFF images. </p> <p><strong>Training methods</strong> Simple training: SGD with momentum, learning rate = 0.1, ReduceLROnPlateau LR scheduler.</p> <p><strong>Loss functions</strong> The sum of soft F1 loss and focal loss</p> <p><strong>Data augmentation</strong> Random flip and random crop.</p> <p><strong>TTA</strong> Random flip</p> <p><strong>Result</strong> Ensemble of three image sizes: 0.604 | 0.540. Ensemble of three image sizes plus oversampling (size 256): 0.601 | 0.547 (public | private).</p>
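<p>A hedged sketch of the final ensembling step described above: average the probabilities of the three crop-size AGN models on the full 512x512 input, then apply the fixed 0.2 threshold.</p> <pre><code>import numpy as np

def ensemble_scales(prob_list, threshold=0.2):
    """prob_list: list of per-model (n_images, n_classes) probability arrays."""
    mean_probs = np.mean(prob_list, axis=0)
    return (mean_probs &gt; threshold).astype(int)
</code></pre>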
Human Protein Atlas Image Classification
3rd place solution with code.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>UPDATE: code available on github</h2> <p><a href="https://github.com/pudae/kaggle-hpa">https://github.com/pudae/kaggle-hpa</a></p> <hr> <p>Congrats to all the winners and thanks to all kagglers who posted great discussions. They were very helpful to me.</p> <p>Thanks to Kaggle and the HPA team for an interesting competition.</p> <p>Here is an overview of my solution.</p> <h2>Dataset Preparation</h2> <p>Like almost all the other competitors, I also used official + <a href="http://www.proteinatlas.org">external</a> data. (Thanks to <a href="https://www.kaggle.com/tomomimoriyama">TomomiMoriyama</a> and <a href="https://www.kaggle.com/dr1t10">David Silva</a>)</p> <p>I split the dataset as follows:</p> <ul> <li>1/10 holdout set for the ensemble.</li> <li>building 5-fold cross-validation sets using the remaining 9/10.</li> <li>using phash and ahash to prevent duplicate images in different splits. If the labels did not match between official and external, I used the official one. (Thanks to <a href="https://www.kaggle.com/tilii7">Tilii</a>)</li> </ul> <h2>Input Preprocessing</h2> <p>I found that the distributions of image mean and stddev are very different between official and external data. So, I used the mean and stddev of individual images for input normalization.</p> <h2>Augmentation</h2> <p>I searched for suitable data augmentations following <a href="https://arxiv.org/pdf/1805.09501.pdf">AutoAugment</a>. For simplicity, I used random search instead of RL.</p> <h2>Loss</h2> <p>Focal loss with gamma 2.</p> <h2>Training</h2> <ul> <li>Adam optimizer, learning rate 0.0005.</li> <li>no learning rate scheduling.</li> <li>For the large model with 1024x1024 images, I used gradient accumulation so that weights are updated every 32 examples.</li> <li>Early stopping <ul><li>If I choose checkpoints that record the best macro F1 score on the validation set, LB scores are poor.</li> <li>After analyzing the F1 scores of each class, I found that while the macro F1 score was increasing, the F1 scores of high-proportion classes (like 0, 1) were decreasing. Because relying on rare-class scores is risky, I decided to stop training when the F1 score of class 0 started decreasing.</li></ul></li> </ul> <h2>Inference</h2> <ul> <li>Averaging the weights of the last 10 checkpoints.</li> <li>8 test-time augmentations</li> <li>weighted averaging ensemble</li> </ul> <h2>Thresholds</h2> <p>Because of the rare classes, the macro F1 score is very sensitive to thresholds. I tested various methods for finding good thresholds, but almost all attempts failed.</p> <p>My final method is the following:</p> <ul> <li>For each class, I choose the threshold that makes the proportion of positive predictions in the validation set close to the proportion of positive examples. 
(Thanks to <a href="https://www.kaggle.com/iafoss">lafoss</a> for the LB probing)</li> </ul> <h2>Models</h2> <p><strong>512x512</strong></p> <ul> <li>resnet34: 5-fold ensemble with TTA: Public LB 0.574 / Private LB 0.500</li> </ul> <p><strong>1024x1024</strong></p> <ul> <li>inceptionv3: single fold with TTA: Public LB 0.583 / Private LB 0.549</li> <li>se_resnext50: single fold with TTA: Public LB 0.601 / Private LB 0.531</li> <li>For 1024x1024, the mean and stddev of individual images are used.</li> <li>In the case of 1024x1024 input, global average pooling did not perform well in my case... maybe. So, I modified the last layer as follows (a sketch is given after this post): <ul><li>remove global average pooling.</li> <li>compute MxM logits using a 1x1 convolution.</li> <li>compute weight maps using a 1x1 convolution followed by softmax.</li> <li>using the weight maps, compute weighted-averaged logits.</li></ul></li> <li>The final submission is an ensemble of the above three predictions.</li> <li>Additional models were trained, but the ensemble results were not good.</li> </ul> <p>Because I failed to build a stable CV, I can't be sure that the methods described above were effective. Finding good methods without stable CV was a painful process. So, I hope to learn from <a href="https://www.kaggle.com/bestfitting">bestfitting</a>'s solution, which always produces stable results.</p>
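<p>The modified last layer for 1024x1024 inputs (per-location logits with a learned spatial weight map) translates naturally to code. Here is a hedged PyTorch sketch; the module name is mine, not the author's.</p> <pre><code>import torch.nn as nn
import torch.nn.functional as F

class WeightedLogitPool(nn.Module):
    def __init__(self, in_channels, n_classes):
        super().__init__()
        self.logit_conv = nn.Conv2d(in_channels, n_classes, kernel_size=1)
        self.weight_conv = nn.Conv2d(in_channels, n_classes, kernel_size=1)

    def forward(self, feat):                # feat: (B, C, M, M)
        logits = self.logit_conv(feat)      # MxM logits per class
        w = self.weight_conv(feat)
        w = F.softmax(w.flatten(2), dim=-1).view_as(logits)  # spatial softmax
        return (logits * w).flatten(2).sum(-1)  # weighted-average logits (B, K)
</code></pre> <p>Compared with plain global average pooling, this plausibly lets the network focus each class logit on the regions that actually express the pattern.</p>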
Human Protein Atlas Image Classification
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li>Scale means the size of the input image (base size is 512x512).</li> <li>I didn't encounter the situation you describe. (BN and dropout after the fully connected layer, SGD, BCE or MultiLabelSoftMarginLoss, and a reasonable learning rate and LR scheduler may be helpful)</li> </ol> <p>First of all, congratulations to all the winners! Thanks to Kaggle and the HPA team for hosting such an interesting competition, and thanks to <a href="https://www.kaggle.com/tomomimoriyama">TomomiMoriyama</a>, <a href="https://www.kaggle.com/hengck23">Heng CherKeng</a>, <a href="https://www.kaggle.com/manyfoldcv">ManyFoldCV</a> and <a href="https://www.kaggle.com/spytensor">Spytensor</a>. </p> <p>Here is a brief summary of our solution.</p> <h3>DataSet</h3> <p>Like most other competitors, we used both official (both PNG and TIFF) and external data. To deal with class imbalance, we used WeightedRandomSampler (a method in pytorch) during training and <a href="https://github.com/trent-b/iterative-stratification">MultilabelStratifiedShuffleSplit</a> to split the data into training and validation. We constructed 10-fold cross-validation sets with 8% for validation.</p> <h3>Image Preprocessing</h3> <p>The HPA dataset has four staining modes, each of which is an RGB image of its own, so we took only one channel from each (r=r, g=g, b=b, y=b) to form a 4-channel input for training.</p> <p>All PNG images are kept at their original 512 size, whereas the TIFF images are resized to 1024.</p> <h3>Augmentation</h3> <p>Rotation, Flip, and Shear.</p> <p>We didn't use random cropping. Instead we trained 5 models using crop5 (a method in pytorch) and found it to be more effective.</p> <h3>Models</h3> <p>For our base networks, we mainly used Inception-v3, -v4, and Xception. We also tried DenseNet, SENet and ResNet, but the results were suboptimal.</p> <p>We used three different scales during training (512 for PNG images and 650, 800 for TIFF images) with different random seeds for the 10-fold CV.</p> <p>Modifications</p> <ol> <li>Changed the last pooling layer to global pooling.</li> <li>Appended an additional fully connected layer with output dimension 128 after the global pooling.</li> <li>We also divided the training process into two stages, where the first stage used size 512 with a model pretrained on ImageNet, and the second stage used size 650 or 800 with the model trained in the first stage. 
We found this to be slightly better than training with a fixed size all the way.</li> </ol> <h3>Training</h3> <ul> <li>loss: <a href="https://pytorch.org/docs/stable/nn.html?highlight=multilabelsoftmarginloss#torch.nn.MultiLabelSoftMarginLoss">MultiLabelSoftMarginLoss</a></li> <li>lr: 0.05 (for size 512, pretrained on ImageNet), 0.01 (for sizes 650 and 800, pretrained using size 512); lr scheduler: steplr(gamma=0.1, step=6)</li> <li>optimizer: SGD</li> <li>epochs: 25, early stopping for training with size 650 or 800 (around 15 epochs), model selected based on loss (instead of F1 score)</li> <li>sampling weights for different classes: [1.0, 5.97, 2.89, 5.75, 4.64, 4.27, 5.46, 3.2, 14.48, 14.84, 15.14, 6.92, 6.86, 8.12, 6.32, 19.24, 8.48, 11.93, 7.32, 5.48, 11.99, 2.39, 6.3, 3.0, 12.06, 1.0, 10.39, 16.5]</li> </ul> <h3>Multi-Thresholds</h3> <p>We used the validation sets to search for a threshold for each class by optimizing the F1 score, beginning at 0.15 for all classes (a sketch is given at the end of this post).</p> <h3>Test</h3> <p>(with multi-thresholds)</p> <p><img src="https://s2.ax1x.com/2019/01/17/kpZ7lV.png" alt=""></p> <h3>Ensembling</h3> <p>The final prediction is an ensemble of the above methods: Size 800, 10-fold for Inception-v3; Sizes 650 and 800, 10-fold for Inception-v4; Size 800, 10-fold, Size 650, 1-fold, and Size 512, 5-fold for Xception (the reason for 5-fold instead of 10 was simply that we didn't have enough submissions to check the performance of all models, so we simply took the best ones).</p> <h3>Things that did not work for us</h3> <ul> <li>Training with a larger input size (&gt;= 1024), which forced us to reduce the batch size.</li> <li>3-channel input</li> <li>focal loss</li> <li>C3D</li> <li>TTA: unlike a lot of other competitors, TTA at test time actually didn't work for us.</li> <li>Other traditional machine learning methods such as DecisionTree, RandomForest, and SVM.</li> </ul> <p>HA HA! We had p100x4; the batch size was 10 (inception-v4, 800, single GPU)</p> <ol> <li>We simply took a weighted arithmetic mean of the sigmoids. </li> <li>Yes, we used the external data. The following is our script. 
<br> <pre><code>import os
import pandas as pd
import cv2
import multiprocessing
import tqdm
import requests

colors = ['red', 'green', 'blue', 'yellow']
BASE_DATASET_PATH = './external_data/HPAv18/'
DIR = BASE_DATASET_PATH + "jpg/"
Gray_DIR = BASE_DATASET_PATH + "rgby_1024_png/"
v18_url = 'http://v18.proteinatlas.org/images/'

if not os.path.exists(Gray_DIR):
    os.mkdir(Gray_DIR)

def download_img(item_name):
    # download the four stain channels of one sample as jpg files
    img = item_name.split('_')
    for color in colors:
        img_path = img[0] + '/' + "_".join(img[1:]) + "_" + color + ".jpg"
        img_name = item_name + "_" + color + ".jpg"
        img_url = v18_url + img_path
        try:
            r = requests.get(img_url, allow_redirects=True)
            open(DIR + img_name, 'wb').write(r.content)
        except Exception as e:
            print(e)
            print('Error,{},{}'.format(img_url, img_name))

def rgb_to_gray(item_name):
    # keep only the informative channel of each stain and save as 1024 png
    for color in colors:
        img_name = item_name + "_" + color + ".jpg"
        img_path = DIR + img_name
        save_path = Gray_DIR + img_name[:-4] + '.png'
        if os.path.exists(save_path):
            continue
        img = cv2.imread(img_path)
        index = 0
        if color == 'blue':
            index = 0
        elif color == 'green':
            index = 1
        elif color == 'red':
            index = 2
        elif color == 'yellow':
            index = 1
        img_gray = img[..., index]
        if img_gray.shape[0] != 1024 or img_gray.shape[1] != 1024:
            img_gray = cv2.resize(img_gray, (1024, 1024))
        cv2.imwrite(save_path, img_gray)

if __name__ == '__main__':
    imgList = pd.read_csv(BASE_DATASET_PATH + "HPAv18RBGY_wodpl.csv")
    pool = multiprocessing.Pool(processes=50)
    pool.map(download_img, imgList['Id'])
    pBar = tqdm.tqdm(total=len(imgList))
    for i, item_name in enumerate(imgList['Id']):
        rgb_to_gray(item_name)
        if i % 100 == 0:
            pBar.update(100)
    pBar.close()
</code></pre></li> </ol> <p>Hi jaideep, <br> during the competition we merged the HPAv18 and official datasets directly. Did you try the parameters in my solution? Especially the loss, optimizer and learning rate.</p> <p>I just used this file, <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/432870/10816/HPAv18RBGY_wodpl.csv">HPAv18RBGY_wodpl.csv</a>, and added it to the training set. No other operation was applied.</p>
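<p>A hedged sketch of the multi-threshold search described above (optimizing per-class F1 on the validation set, starting from 0.15); the search grid is an assumption.</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score

def search_thresholds(probs, y_true, start=0.15):
    """probs, y_true: (n_samples, n_classes) arrays from the validation set."""
    th = np.full(probs.shape[1], start)
    for k in range(probs.shape[1]):
        best, best_t = -1.0, start
        for t in np.arange(0.05, 0.95, 0.01):
            score = f1_score(y_true[:, k], (probs[:, k] &gt; t).astype(int))
            if score &gt; best:
                best, best_t = score, t
        th[k] = best_t
    return th
</code></pre>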
Human Protein Atlas Image Classification
64th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I joined this competition in its early stages with a personal goal to apply all my current knowledge and to learn a lot of new things. And hopefully achieve a nice score ;-)</p> <p>Here are some things I tried in the first weeks that worked... but not well enough. Self-designed CNNs trained from scratch... too slow and quick to overfit. Fully convolutional networks trained from scratch didn't seem to overfit but maxed out at a certain level of loss and F1. </p> <p>I then started trying transfer learning with a Resnet50/VGG16 setup using Keras with the Tensorflow backend. This looked really good, but since I couldn't use 4 separate channels as input with Keras pretrained models (has anybody found a way to do that?), I decided to look beyond Keras.</p> <p>Seeing that there were various kernels using Pytorch, where it takes only a few lines of python code to modify the number of input channels, I decided to give Pytorch a try. Also, the list of pretrained models for Pytorch is really impressive.</p> <p>My final solution:</p> <ul> <li>Pretrained models: BN-Inception and NASNET Large (only used for 1 model... good results but way too heavy for my 1070 Ti)</li> <li>6-fold CV</li> <li>Multiple runs with variations in batch size, seed, learning rate and pretrained model.</li> <li>Epochs 20-25</li> <li>Adam optimizer with learning rate either 0.001 or 0.0005. I tried some stepping schedules but didn't notice any significant difference.</li> <li>Batch sizes varying between 24 - 36.</li> <li>Image size: mostly 512 pixels but also some runs with 448 pixels.</li> <li>Image augmentations: Rotation, Flip and Shear.</li> <li>Binary Cross Entropy loss</li> <li>Oversampling of the minority classes.</li> <li>No Test-Time Augmentation</li> <li>Optimal threshold search at each epoch.</li> <li>I generated a full probs file and an optimal threshold file after each epoch.</li> </ul> <p>For my final submission I selected multiple good folds from the various runs. From each selected fold I then used between 3 and 6 files with the probabilities and between 8 and 12 files with the optimal thresholds. I ended up using 53 probability files and 127 optimal threshold files, and used simple averaging to generate the final values (see the sketch at the end of this post). Being unsure whether I should use a fixed threshold for all classes or the average for each class instead, I did multiple submissions for both.</p> <p>The submission with a fixed threshold of 0.2 was my personal best. However, the other submissions with an average threshold for each class are almost just as good on the private leaderboard, and multiple ones are even better than the fixed threshold used for that same submission. </p> <p>Yes, it could be that with more epochs and different parameters rn50 would be better... however, the time needed to train is also an important factor. With the time available in a day I could sometimes try 3 different things... with more epochs that would be a lot less. And yes, deep learning is a dark art.. 
I like that one ;-)</p> <p>Hey Moshel, so I did some experiments where I lined up a Resnet50 model to be as close as possible to the BN-Inception approach. Training 2 epochs head only, then 20 epochs full model. The outcomes vary a little bit each run... but I get about a 0.050 - 0.070 difference between Resnet50 and BN-Inception. I'm not sure if that all comes down to the differences between the 2 networks, but it is quite a difference. I presume that further optimizing and tuning parameters for the Resnet50 would make the difference much smaller.</p> <p>On my 1070 Ti I could fit a max batch size of 8 for 512x512... which makes training a Resnet50 also consume more time.</p> <p>Interesting... seems like merging a 4th channel into the other 3 channels loses any benefit if you don't do it the right way. Maybe I will give it a try... thanks for the info anyway.</p> <p>Yes, I have read those... interesting, because I noticed a very slight improvement in my setup, all other things being equal. It wasn't much, but enough to keep me using all channels. I guess that also depends on the complete model setup. I think I'll have some time in the coming week to try 2 runs with my Resnet50 setup... will let you know here when I have my results.</p> <p>On my personal data science PC I have an NVidia GTX 1070 Ti card. With a Pytorch BN-Inception pretrained model I could load a batch size of 26 images of size 512 * 512. A batch size of 28 or higher would consistently give me a Cuda out-of-memory message. With 448 * 448 pixels I could do batch sizes of 32. I also ran the model on an Azure VM with a K80. I could use batch sizes of 44 with the memory available. However, a full fold with 20 epochs needed about 32 hours to run... so a very impressive batch size there, but the speed was horrible.</p>
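<p>A hedged sketch of the final averaging step described in this post: average many saved probability files and many saved per-class threshold files, then build both submission variants. The file layout is hypothetical.</p> <pre><code>import glob
import numpy as np

prob_files = glob.glob('probs/*.npy')      # the 53 probability files
th_files = glob.glob('thresholds/*.npy')   # the 127 optimal-threshold files

mean_probs = np.mean([np.load(f) for f in prob_files], axis=0)
mean_th = np.mean([np.load(f) for f in th_files], axis=0)

labels_class_th = (mean_probs &gt; mean_th).astype(int)  # averaged per-class thresholds
labels_fixed_th = (mean_probs &gt; 0.2).astype(int)      # fixed 0.2 threshold
</code></pre>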
Human Protein Atlas Image Classification
73th solution, only resnet18, with pytorch code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>thanks for the reply! I agree with you. Using both will break the balance for the loss. I think using one of them is the correct choice, but maybe the weights could be adjusted more.</p> <p>thanks for sharing! btw, in my experiments, removing dropouts led to overfitting; although it is said that using dropout with batchnorm could be harmful, I still kept it in my code. (I forgot the source of this rumor.)</p> <p>I entered this competition to test my config-based pytorch pipeline borrowed from FAIR's <a href="https://github.com/facebookresearch/maskrcnn-benchmark/tree/master/maskrcnn_benchmark">maskrcnn-benchmark</a>, which has proven to be a nice architecture for fast prototyping using config files while maintaining a simple &amp; clear, reusable and easy-to-scale code base. I'm going to share some tools which might be helpful for building CV projects with pytorch 1.0.</p> <p>I did a lot of exploration of different networks, loss functions, optimizers, LR schedulers, etc., but it turned out <strong>a simple 4-fold resnet18</strong> ensemble could achieve <strong>0.530</strong> (I failed to select my best submission). The keys to training a good-to-go single model (about <strong>0.580</strong> LB) are:</p> <ol> <li>Train/valid set split using <a href="https://github.com/trent-b/iterative-stratification">Multilabel Stratification</a> (a usage sketch follows this post); you can find my implementation <a href="https://github.com/shawnau/kaggle-HPA/blob/d6071ff37d5db7612b2f380323a7d48309cc13fe/tools/preprocess.py#L32">here</a></li> <li><strong>Weighted sampler</strong> to tackle unbalanced data in each batch, which can be achieved with <code>torch.utils.data.WeightedRandomSampler</code>; the weights can be generated by <a href="https://github.com/shawnau/kaggle-HPA/blob/d6071ff37d5db7612b2f380323a7d48309cc13fe/tools/preprocess.py#L79">this method here</a></li> <li><strong>Train augmentation</strong> by random crop and resize. You can find the code <a href="https://github.com/shawnau/kaggle-HPA/blob/d6071ff37d5db7612b2f380323a7d48309cc13fe/dl_backbone/data/transforms/build.py#L5">here</a>; I'd say <a href="https://github.com/aleju/imgaug">imgaug</a> is a really useful tool for image augmentation. I use a 288~448 crop size at a step of 32, then resize to 512</li> <li><strong>External data</strong>. Remember to use correct preprocessing to match mean/std between train/external; this has been mentioned in the external data thread. I'd even have liked to do a histogram match between train and external data if I had had time</li> <li><strong>Macro F1 loss</strong>. It seems that weighted BCE works fine as well, but I used macro F1 directly since it had better performance than BCE in my experiments. You can find my pytorch implementation <a href="https://github.com/shawnau/kaggle-HPA/blob/d6071ff37d5db7612b2f380323a7d48309cc13fe/dl_backbone/model/loss.py#L25">here</a>, which was inspired by <a href="https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric">this kernel</a></li> <li>After all these preparations, I used a simple <strong>resnet-18 with a global average pooling + global max pooling concat</strong> as the final pooling layer, 
though merely using avgpool works fine in my experiments (only a slight loss in performance in my observation). I also tried res34, res50, bn-inception and gapnet, but none of them outperformed my res18. You can find all my implementations <a href="https://github.com/shawnau/kaggle-HPA/tree/master/dl_backbone/model/base">here</a>. Pretrained models accelerate training, but training from scratch should achieve similar performance with enough epochs.</li> </ol> <hr> <ol> <li><strong>TTA</strong> didn't help, but didn't hurt either. I didn't try to submit with TTA due to limited time; I just put my implementation <a href="https://github.com/shawnau/kaggle-HPA/blob/2f58e4b7a4739b29f74e988c4b554774fdff1cd4/dl_backbone/data/transforms/build.py#L74">here</a> for reference</li> <li>RGB has similar performance to RGBY; I just ensembled RGB with RGBY, but it's not a must for achieving 0.530 PB. It seems the <strong>Y channel is not very useful</strong>, as many other kagglers reported</li> <li><strong>Threshold optimization</strong> always hurt my performance... I think it's due to the inconsistency between the external+train dataset and the test set. I used a brute-force method to pick hand-made thresholds for the top-5 most frequent classes, which boosted the public LB by 0.005 but didn't work on the private LB.</li> </ol> <p>All the configurations of the different settings (network, loss function, lr_scheduler, optimizer, sampler, tta, etc.) can easily be achieved with <a href="https://github.com/shawnau/kaggle-HPA/tree/master/tools/config">the different config files here</a>; you can see how convenient it is to build a config-based experiment pipeline.</p> <p>Finally, thanks Kaggle for this competition. I hope my code might be helpful to the kagglers who fight on Kaggle with pytorch. I almost switched to fastai at the beginning of the competition after seeing <a href="https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb">lafoss' kernel</a>... it's just too easy to use haha</p> <p><a href="https://github.com/shawnau/kaggle-HPA">My contest code is here</a></p> <p>Good question. I was concerned about this issue when using crop-and-resize as augmentation as well, so I just used a straightforward method: shrink the crop size step by step (-16 pixels) and watch the performance of each experiment. I stopped at 256*256; it did have better performance on my local CV. But I didn't try smaller crops due to limited resources... thank you!</p>
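<p>For point 1 above, a hedged usage sketch of the iterative-stratification package for the 4-fold split; the file name and target parsing follow the competition's Id/Target format and are assumptions here.</p> <pre><code>import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

df = pd.read_csv('train.csv')
y = np.zeros((len(df), 28), dtype=int)
for i, t in enumerate(df['Target']):
    y[i, [int(k) for k in t.split()]] = 1   # multi-hot encode the labels

mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=42)
for fold, (tr_idx, va_idx) in enumerate(mskf.split(df, y)):
    df.iloc[va_idx].to_csv('fold{}_valid.csv'.format(fold), index=False)
</code></pre>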
Human Protein Atlas Image Classification
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone,</p> <p>Below are the key points of my 7th place solution:</p> <p><strong>network input:</strong> RGB with HPA external data</p> <p><strong>network architecture:</strong> replaced the last global average pooling with concat([attention weighted average pooling, global max pooling, global std pooling]); all the statistics were calculated 2D-plane-wise, followed by two fc layers.</p> <p><strong>data augmentation:</strong> contrast (applied independently on each color channel), plus rotate, scale, shear, shift</p> <p><strong>ensemble:</strong> weighted average of ten models (3 x resnet18 on 1024x1024, 3 x resnet34 on 1024x1024, 2 x resnet34 on 768x768, 2 x inceptionv3 on 768x768)</p> <p><strong>a trick:</strong> There are tons of duplicates in the test set. I managed to find some of the easier duplicates using pair-wise correlation on RGBY separately. Averaging the output probabilities of the duplicates added around 0.04-0.05 LB. I believe the treasure lies in the duplicates that are harder to find, because their probability outputs should differ more.</p> <p><strong>thresholds:</strong> I still need more work on this. For me, the best seems to be 0.2 with some high-occurrence classes set to 0.3, but I really believe the optimal thresholds depend on the models, and on luck. Since we know there are leaks for the rare classes on the public LB, I gambled by lowering the thresholds of the five rarest classes to 0.1 and 0.05 (got a slightly worse public score), used those two as my final two, and in the end the private LB score dropped too.</p> <p><strong>Correction:</strong> Averaging the output probabilities of the duplicates boosts 0.004-0.005 LB, not 0.04-0.05</p> <p>Sorry, my complete pipeline is messy and I don't have time to get it into good shape. But I'm happy to answer any questions from you. I'll do my best.</p> <p>I adapted the attention layer in the link below to 2D ... 
<a href="https://github.com/bfelbo/DeepMoji/blob/master/deepmoji/attlayer.py">https://github.com/bfelbo/DeepMoji/blob/master/deepmoji/attlayer.py</a></p> <pre><code>from __future__ import absolute_import, division import sys from os.path import dirname sys.path.append(dirname(dirname(__file__))) from keras import initializers from keras.engine import InputSpec, Layer from keras import backend as K class AttentionWeightedAverage2D(Layer): def __init__(self, **kwargs): self.init = initializers.get('uniform') super(AttentionWeightedAverage2D, self).__init__(** kwargs) def build(self, input_shape): self.input_spec = [InputSpec(ndim=4)] assert len(input_shape) == 4 self.W = self.add_weight(shape=(input_shape[3], 1), name='{}_W'.format(self.name), initializer=self.init) self.trainable_weights = [self.W] super(AttentionWeightedAverage2D, self).build(input_shape) def call(self, x): logits = K.dot(x, self.W) x_shape = K.shape(x) logits = K.reshape(logits, (x_shape[0], x_shape[1], x_shape[2])) ai = K.exp(logits - K.max(logits, axis=[1,2], keepdims=True)) att_weights = ai / (K.sum(ai, axis=[1,2], keepdims=True) + K.epsilon()) weighted_input = x * K.expand_dims(att_weights) result = K.sum(weighted_input, axis=[1,2]) return result def get_output_shape_for(self, input_shape): return self.compute_output_shape(input_shape) def compute_output_shape(self, input_shape): output_len = input_shape[3] return (input_shape[0], output_len) </code></pre> <p>I did not use any special sampling during training. I used plain BCE loss.</p> <p>Global std pooling is same as global avg pooling, except I calculate standard deviation instead of mean. The weights of "weighted average of the ten models" are determined mostly by my understanding of the strength of each type of model, plus some LB feedback.</p>
Human Protein Atlas Image Classification
8 place solution writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Actually we used the Fast.ai framework with many tricks from Jeremy's sleeves and pretrained models from the amazing <a href="https://github.com/Cadene/pretrained-models.pytorch">https://github.com/Cadene/pretrained-models.pytorch</a>. Our pipeline was strictly inspired by the first and third lessons of the new version of the fast.ai course.</p> <p>To be honest, I don't really think a general reference exists. </p> <p>We treated the probabilities of each class from each model as features, with some meta information about the images (brightness, blurriness, size and so on). At this point we had a standard tabular problem.</p> <p>Stacking models with F1 or ROC-AUC is a bit tricky. The main idea of our solution is to split the holdout into folds, then split the training part of each fold into folds again, train models on the inner folds, vote between them, and validate on the validation part of the outer fold. </p> <p>First of all, thanks to Kaggle and the HPA team for this interesting competition! Even the leak couldn’t spoil it! </p> <p>Our solution is an ensemble of 14 models. Most of them were trained on 512x512 RGB with additional data.</p> <p>Many thanks to Jeremy Howard for the great fast.ai framework! It’s nice how easy it is to patch it. Fast.ai has been changed 40 times since the competition began, so it’d have hurt if it were not open sourced.</p> <p>Our models are: Se-ResNext-50 trained on 256x256, 512x512 and 768x768 sizes, InceptionV4, BN-Inception and Xception (all trained on 512x512). <br> We didn’t have enough resources to train models on high resolution images. There were two ways to deal with it: resizing and crops, but training on crops is risky since some organelles occur only once per image (e.g. cytokinetic bridge), and it is difficult to select proper crops, so we didn’t do it.</p> <p>Things that worked: 1. Learning rate finder and cyclic learning rate (long cycles at the beginning, short cycles at the end). 2. Differential learning rates with gradual reduction (as described here: <a href="https://blog.slavv.com/differential-learning-rates-59eff5209a4f">https://blog.slavv.com/differential-learning-rates-59eff5209a4f</a> ) helped to preserve the weights from ImageNet. 3. Focal loss with default gamma, LSEP loss ( <a href="https://arxiv.org/pdf/1704.03135.pdf">https://arxiv.org/pdf/1704.03135.pdf</a> ; a sketch is given at the end of this post). 4. Simple one-layer network head. 5. Brightness augmentations, D4 and wrap transforms. 6. Average of 32 TTAs (we used the same augmentations as during training).</p> <p>Things that didn’t work: 1. Training on RGBY. 2. Training with sample pairing ( <a href="https://arxiv.org/abs/1801.02929">https://arxiv.org/abs/1801.02929</a> ). 3. Mixup, which probably didn’t work because the green channel is too important ( <a href="https://arxiv.org/abs/1710.09412">https://arxiv.org/abs/1710.09412</a> ). 4. Complex network head. 5. Large architectures such as Nasnet or Senet-154 (they would have probably worked, if we had more GPUs). 6. Training one-vs-all models and training on subsets of similar classes. 7. 
Training a classifier over bottleneck features of the networks (we tried lots of approaches here, but unfortunately all of them proved to be worse than our models). 8. Complex augmentations, such as the green channel modifications discussed here: <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/75768">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/75768</a></p> <p>External data was acquired from the Human Protein Atlas site. After trying to find duplicates between train and external data, we found out that the labels sometimes didn't match. First of all, they shouldn't be taken from subcellular_location.tsv, as those locations are related to the gene/protein, not to the sample. But parsing labels from the xml files also wasn't quite correct. The more correct labels are actually on the website, so we took them. And it looks like they merged some rare labels that weren't present in our task into others, so we did the same.</p> <p>Our resources: 4x1080ti. It seems that the full training cycle consumed approximately two weeks of computing time.</p> <p>Findings: 1. The yellow channel wasn’t very helpful, as many participants noticed on the forum. 2. The labeling was probably quite noisy (we found several contradictions between the HPAv18 data and the data provided in this competition). 3. Several classes are extremely similar visually. It is almost impossible to distinguish between lysosomes and endosomes, for example (no wonder: endosomes are literally reborn into lysosomes at some point of their lifecycle). So it is not surprising that models don’t perform well enough on these classes either.</p> <p>Our validation is a kind of Adversarial Validation. We ranked the train set by similarity to the test set using an NN with a simple architecture and took 8K samples as a holdout. We used this holdout to fit the thresholds for single models and check the scores. We tried our best to avoid duplicates between train and our holdout. All leaked images from HPA were added to the validation set too.</p> <p>The most challenging part of the competition was how to deal with “small” classes with only a few positive cases. We couldn’t handle it better than ensembling models with linear models and voting between those stacks. </p> <p>The other classes were stacked with a LightGBM model. It was stable enough on the validation set because we followed the “folds-inside-folds” scheme (similar to Strategy C from <a href="https://www.kaggle.com/general/18793">https://www.kaggle.com/general/18793</a> ). The gap between local validation and the leaderboard was stable to within 0.01. It wasn’t as accurate as we wanted it to be, but it was still OK to trust our validation.</p> <p>P.S. Some stats: our group chat contains 1325 screenshots, 321 files, 428 links and thousands of messages.</p>
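<p>The LSEP loss cited in "things that worked" is compact enough to sketch; a hedged PyTorch version of the pairwise formulation from arXiv:1704.03135, assuming multi-hot targets:</p> <pre><code>import torch

def lsep_loss(logits, targets):
    """logits, targets: (B, K); targets are multi-hot {0, 1}."""
    pos = targets.bool()
    diff = logits.unsqueeze(2) - logits.unsqueeze(1)  # diff[b, u, v] = f_u - f_v
    mask = (~pos).unsqueeze(2) &amp; pos.unsqueeze(1)     # u over negatives, v over positives
    exp_sum = (diff.exp() * mask.float()).sum(dim=(1, 2))
    return torch.log1p(exp_sum).mean()  # log(1 + sum exp(f_neg - f_pos))
</code></pre>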
Human Protein Atlas Image Classification
92nd place pytorch solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are welcome!</p> <p>Could you please share your code snippets? </p> <p>Good question. I started with this and didn't change it later.</p> <p>Sure! I use log weights and choose the label with the maximum weight.<br> Here is my code:</p> <pre><code>import numpy as np
import pandas as pd
from torch.utils.data import Sampler, WeightedRandomSampler


def parse_target(target: str) -&gt; np.ndarray:
    y = np.zeros(28, dtype=np.int)
    indices = [int(it) for it in target.split()]
    y[indices] = 1
    return y


def get_sampler(df: pd.DataFrame, alpha: float = 0.5) -&gt; Sampler:
    y = np.array([parse_target(target) for target in df.Target])
    class_weights = np.round(np.log(alpha * y.sum() / y.sum(axis=0)), 2)
    class_weights[class_weights &lt; 1.0] = 1.0
    weights = np.zeros(len(df))
    for i, target in enumerate(y):
        weights[i] = class_weights[target == 1].max()
    return WeightedRandomSampler(weights, len(df))
</code></pre> <p><strong>1. Data</strong></p> <ul> <li>Channels: RGB</li> <li>Oversampling</li> <li>External data: <a href="http://v18.proteinatlas.org">http://v18.proteinatlas.org</a></li> </ul> <p><strong>2. Augmentation</strong></p> <ul> <li>Resize, Rotate, RandomRotate90, HorizontalFlip, RandomBrightnessContrast, Normalize</li> </ul> <p><strong>3. Model design</strong></p> <ul> <li>Backbone: Resnet50 pretrained on ImageNet</li> <li>Head: 2 linear layers with batch normalization and dropout</li> </ul> <p><strong>4. Loss</strong></p> <ul> <li>Binary Cross Entropy</li> </ul> <p><strong>5. Training</strong></p> <ul> <li>5-fold CV</li> <li>Optimizer: Adam</li> <li>Different learning rates for different layers</li> <li>Head fine-tuning with frozen backbone (1 epoch)</li> <li>Scheduler: Cyclical Learning Rates</li> </ul> <p>Stage 1:</p> <ul> <li>Image size: 256</li> <li>Batch size: 128</li> <li>Epochs: 16</li> </ul> <p>Stage 2:</p> <ul> <li>Image size: 512</li> <li>Batch size: 32</li> <li>Epochs: 6</li> </ul> <p><strong>6. Prediction</strong></p> <ul> <li>TTA: 8</li> <li>TTA augmentation: Resize, Rotate, RandomRotate90, HorizontalFlip, Normalize</li> <li>The mean of the predictions</li> <li>Threshold: 0.2</li> </ul> <p><strong>7. Result</strong></p> <ul> <li>Training takes ~35 hours on a Tesla v100</li> <li>Public LB: 0.595</li> <li>Private LB: 0.523</li> </ul> <p><strong>8. Observations</strong></p> <ul> <li>Mixed precision works poorly</li> <li>External data helps a lot</li> <li>BCE loss with oversampling is much better than Focal Loss</li> <li>Resnet50 outperforms Resnet18 and Resnet34</li> <li>5 folds improve the score by 0.024</li> <li>TTA helps too</li> </ul> <p>GitHub link: <a href="https://github.com/rebryk/kaggle/tree/master/human-protein">https://github.com/rebryk/kaggle/tree/master/human-protein</a></p> <p>Some people used this to find thresholds. I tried too. It didn't help me.</p>
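<p>A hypothetical usage example of the sampler above: plug it into a DataLoader via the <code>sampler</code> argument (which is mutually exclusive with <code>shuffle</code>). <code>ProteinDataset</code> here is a placeholder that would normally load and augment the images.</p> <pre><code>import pandas as pd
from torch.utils.data import DataLoader, Dataset

class ProteinDataset(Dataset):  # minimal placeholder dataset
    def __init__(self, df):
        self.df = df
    def __len__(self):
        return len(self.df)
    def __getitem__(self, i):
        row = self.df.iloc[i]   # real code would load/augment the image here
        return row.Id, parse_target(row.Target)

train_df = pd.read_csv('train.csv')  # hypothetical path
loader = DataLoader(ProteinDataset(train_df), batch_size=32,
                    sampler=get_sampler(train_df), num_workers=4)
</code></pre>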
Human Protein Atlas Image Classification
A CNN classifier and a Metric Learning model, 1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sorry for the late reply.</p> <p><strong>I was wondering how many crops you did per image? And why not resize instead of crop?</strong> I randomly cropped during training and predicting; using cropping instead of resizing was meant to add diversity to the data, and this step was similar to the way the images were captured, I guess.</p> <p><strong>I was also wondering about what research papers you tried that didn't actually work.</strong> I read a lot of papers about multi-label classification, but they did not work in this one since most of them were about ImageNet. This task was a little challenging.</p> <p><strong>You said you split your validation set using stratification but I thought that stratification was for building train and test sets.</strong> Yes, it's for building train and test sets.</p> <p>Sorry for the late reply, I did not read this post until recently. The antibody ids are from the hpav18 dataset; I remember that in the External data threads there is a csv file containing information on every hpa v18 image. And if you want to test metric learning, I suggest you use the Whale competition.</p> <p>We don't need to use the above code before feeding images into our models, as we have fine-tuned the model.</p> <p>512x256 means width=512, height=256. Please refer to the code of the top solutions of the Whale competition.</p> <pre>def fit_test_th(x, y):
    p = []
    for idx in tqdm(range(len(y))):
        _y = y[idx]
        _x = x[:, idx]
        min_error = np.inf
        min_p = 0
        for _p in np.linspace(0, 1, 10000):
            error = np.abs((_x &gt; _p).mean() - _y)
            if error &lt; min_error:
                min_error = error
                min_p = _p
        p.append(min_p)
    return p
</pre> <p>Hi, I referred to the implementation here, <a href="https://github.com/ronghuaiyang/arcface-pytorch/blob/master/models/metrics.py">https://github.com/ronghuaiyang/arcface-pytorch/blob/master/models/metrics.py</a>, and I found the result was good, so I used it. I also noticed the paper did not explain the not-easy_margin branch of the code. I did not find an explanation on the internet, and as I have been very busy in recent months, I am sorry that I cannot give you the answer now. Perhaps I will try to understand it when I am not so busy. If you can contact the author of ARCFACE and find the reason behind it, it would be helpful if you shared it with us.</p> <p>I used lovasz loss to let the network balance precision and recall. Since we use sigmoid and binary classification on every pixel in mask prediction tasks, we can treat the 28 labels as 28 pixels and then we can use lovasz loss on them. </p> <p>Hi, just get the mean and std of the dataset and apply them to the images before forwarding to the network; we don't need to apply the normalization twice.</p> <p>Same as you</p> <p>Hi, the densenet121's head is in my post above. I split the data into 5 folds, so the val set was 20% of all the data. The focal loss on my val set was about 0.57 without TTA. No other augmentation; randomly cropping from 768x768 and then resizing to 512x512 was enough, and we should use external data.</p> <p>It is case by case; it depends on the competition and the free time I have during a competition. I tried to run experiments automatically and re-use the source code from previous competitions and github, so I can work or study at the same time. 
You can also refer to my earlier reply. <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109#472761">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109#472761</a></p> <p>Hi oldufo, moshel, the quickest way to get the details of my metric learning is to read the <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/82484">3rd place solution </a> from <a href="/pudae81">@pudae81</a> in the Whale competition. I am happy to find that it is quite similar to mine, although I missed the flip-whale tricks. I have been very busy with my everyday job recently; I just used the same model structure, hyper-parameters, training methods... in both competitions (this is not an excuse; I might not have found those good tricks either, even with a lot of time). I also found that many others' solutions are very good, and I suggest you refer to those beautiful ones. When I have time, I will study and experiment with some of their great ideas. For example, <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/82366">the first place solution</a> from <a href="/qiaojian">@qiaojian</a> is very creative in making full use of the data and in its loss function design.</p> <p>I usually spend more time after I enter a competition, then do all kinds of experiments on the server as automatically as possible, and then work hard in the last week of the competition, so my GPU server works all the time, and I can finish other jobs or go out for a run at the same time. I think if we want to get top 10, we all want to have as much time as possible :) </p> <p>Same fold, several crops, max probability from the predictions.</p> <p>I remember I used Batch Normalization in the Carvana competition; I accumulated several iterations and then updated the parameters. And since I have 4 GPUs on my server, when I trained my densenet121 model with image size 1024 the batch_size was 36, which is big enough to use Batch Normalization.</p> <p>Yes, exactly.</p> <p>From a model pretrained on ImageNet, not HPA data. </p> <p>I used gradient accumulation in the Carvana competition and tried it in this one, but it did not work.</p> <p>Hi, this is a common practice when we want to pool the feature map to feed to fc layers, as you can find in CBAM-related papers; Max and AVG are both important signals retrieved by the network.</p> <p>The HPA website provided all the information needed.</p> <p>1. Heavy augmentation; please refer to the winners' source code. 2. 512x256 size may help.</p> <p>@Moshel, <a href="/jaideepvalani">@jaideepvalani</a>, I have updated the main post and added some details. Thanks.</p> <p>Metric Learning is based on the resnet50 model, so the ... ... means the resnet50 backbone used to get the feature map. And I used CrossEntropyLoss to help converge quickly, but you may not need it, as I find it is also OK if I remove it. </p> <p>Perhaps I did not understand your question correctly; the zip files containing the tiff files can be downloaded directly using the links above.</p> <p>According to my experiments, this was a key augmentation to prevent overfitting to the train set, and adding some label noise is not harmful here. 
And for the test set, max probs can solve the problem of a missing region of a target label.</p> <p>Hi, I referred to this link, :) <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/68678">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/68678</a></p> <p>I downloaded the zip files from the link on the data page of this competition; it is the same as your link.</p> <p>Oh, I see, but I am afraid I cannot help you as I have never used the GCP service.</p> <p>If the batch size is b, the probabilities from 4 crops will be bx4x28; max(probs, 1) will return a bx28 vector, and after applying thresholds to the result, we get the labels for each sample.</p> <p>I usually start from a simple network, such as resnet18, and do almost all the experiments on it. Then I transfer the model structure, parameters, augmentations... to resnet34 and resnet50. Then I try densenet and inception v3 if there is enough time. I like resnet; the result of resnet50 is also good, but densenet is a little better in this competition. As for selecting a model, if the dataset is large I will choose a larger and deeper model, but I still start from resnet18/resnet34.</p> <p>Please refer to the 'Post-processing' part of the solution; I select each threshold to make sure there is a certain number of samples in the class, so I have 28 thresholds.</p> <p>If I use FocalLoss, the model converges rapidly; the only problem is overfitting to the train set, so I added crop augmentation and 512x512 image size. I paste the focal loss code I used here for your reference.</p> <pre>class FocalLoss(nn.Module):
    def __init__(self, gamma=2):
        super().__init__()
        self.gamma = gamma

    def forward(self, logit, target):
        target = target.float()
        max_val = (-logit).clamp(min=0)
        loss = logit - logit * target + max_val + \
            ((-max_val).exp() + (-logit - max_val).exp()).log()
        invprobs = F.logsigmoid(-logit * (target * 2.0 - 1.0))
        loss = (invprobs * self.gamma).exp() * loss
        if len(loss.size()) == 2:
            loss = loss.sum(dim=1)
        return loss.mean()
</pre> <p>Hi Tim, thank you! Andrew Ng's Deep Learning Course on Coursera introduces Siamese networks and Triplet Loss (I remember it's Part 4.2-4.5); it's a good start, although these cannot guarantee a good position in a related competition. Since you are interested in metric learning, I invite you to the Whale Identification challenge; you will find some useful kernels <a href="https://www.kaggle.com/c/humpback-whale-identification/kernels">https://www.kaggle.com/c/humpback-whale-identification/kernels</a>. As I have been busy recently I have not read them carefully, but the most upvoted ones are always useful. 
The Whale competition is a suitable one, with not so many images; I think you can get a very good understanding of metric learning after a month and use it in a lot of scenarios.<br></p> <p>Congrats to all the winners, and thanks to the host and Kaggle for hosting such an interesting competition.<br><br> I am sorry for the late share; I have worked hard to prepare it in recent days, trying to verify my solution and to make sure it is reproducible, stable, efficient, as well as interpretable.</p> <p><strong>Overview</strong><br> <img src="https://bestfitting.github.io/kaggle/protein/images/001_pipeline.png" alt="enter image description here"><br> <strong>Challenges:</strong><br><br> <em>Extreme imbalance: rare classes are hard to train and predict, but play an important role in the score.</em><br><br> <em>The data distribution is not consistent across the train set, test set, and HPA v18 external data.</em><br><br> <em>The images are of high quality, but we must find a balance between model efficiency and accuracy.</em><br></p> <p><strong>Validation for CNNs:</strong><br><br> I split the val set according to <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/67819" target="_blank">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/67819</a>; great thanks to <a href="https://www.kaggle.com/trentb" target="_blank">@trentb</a></p> <p>I found Focal Loss on the whole val set to be a relatively good metric of model capability. F1 is not a good metric, as it is sensitive to the threshold, and the threshold depends on the distribution of the train and val sets.</p> <p>I tried to evaluate the capability of a model by setting the ratio of each class to be the same as in the train set. I did so because I thought I should not adjust the thresholds according to the public LB; but if I keep the ratio of the predictions stable and the model is stronger, the score will improve.
That's to say, I used the public LB as another validation set.</p> <p><strong>Training Time Augmentations:</strong><br><br> Rotate 90, flip, and randomly crop 512x512 patches from 768x768 images (or crop 1024x1024 patches from 1536x1536 images)</p> <p><strong>Data Pre-Processing:</strong><br> Remove about 6000 duplicate samples from the v18 external data, using the hash method which had been used to find the test set leak.</p> <p>Calculate the mean and std using train+test, and use them before feeding images to the model.</p> <p><strong>Model training:</strong><br><br> <strong>Optimizer</strong>: Adam<br><br> <strong>Scheduler</strong>:</p><pre>lr = 30e-5
if epoch &gt; 25:
    lr = 15e-5
if epoch &gt; 30:
    lr = 7.5e-5
if epoch &gt; 35:
    lr = 3e-5
if epoch &gt; 40:
    lr = 1e-5
</pre><br> <strong>Loss Functions</strong>: FocalLoss + Lovasz. I did not use a macro F1 soft loss, because the batch size is small and some classes are rare, so I think it is not suitable for this competition. I used the lovasz loss function because I thought that, although IOU and F1 are not the same, it can balance Recall and Precision to some extent.<p></p> <p><strong>I did not use oversampling.</strong><br><br> <strong>Model structure:</strong><br> My best model is a densenet121 model, which is very simple; the head of the model is almost the same as in the public kernel <a href="https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb" target="_blank">https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb</a> by <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>.</p> <pre>(1): AdaptiveConcatPool2d(
  (ap): AdaptiveAvgPool2d(output_size=(1, 1))
  (mp): AdaptiveMaxPool2d(output_size=(1, 1))
)
(2): Flatten()
(3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(4): Dropout(p=0.5)
(5): Linear(in_features=2048, out_features=1024, bias=True)
(6): ReLU()
(7): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): Dropout(p=0.5)
(9): Linear(in_features=1024, out_features=28, bias=True)
</pre> <p>I tried all kinds of network structures from the multi-label classification papers; the results were not improved, despite their beautiful structures and the theory behind them. :)<br><br> <strong>Prediction time augmentations:</strong><br> I predicted the test set using the best focal-loss epoch with 4 seeds to randomly crop 512x512 patches from 768x768 images, and took the max probs from the predictions.</p> <p><strong>Post-processing:</strong><br> At the final stage of the competition, I decided to generate two submissions:<br> 1. The first one kept the ratio of the labels the same as in the public test set; since we did not know the ratio of the rare classes, I set them to the ratio of the train set.<br> 2. The second one kept the ratio of the labels at the average of the train set and public test set ratios.</p> <p>Why? Although the public LB could improve when I added or reduced the count of rare classes by 2-5 samples, this was a dangerous way; I only used it to evaluate the possible shakeup.<br></p>
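<p>A minimal numpy sketch of this ratio-based post-processing: for each class, pick the threshold so that a target fraction of test samples is predicted positive. The target ratios and all names are illustrative assumptions, not the author's actual code.</p> <pre>import numpy as np

def thresholds_from_ratios(probs, target_ratios):
    """probs: (n_samples, 28) predicted probabilities.
    target_ratios: (28,) desired fraction of positives per class
    (e.g. the class frequencies observed in the train set).
    Returns one threshold per class so that roughly that fraction
    of samples is predicted positive."""
    n = probs.shape[0]
    thresholds = np.zeros(probs.shape[1])
    for c in range(probs.shape[1]):
        k = max(1, int(round(target_ratios[c] * n)))
        # threshold at the k-th highest probability for this class
        thresholds[c] = np.sort(probs[:, c])[::-1][k - 1]
    return thresholds

# preds = (probs &gt;= thresholds_from_ratios(probs, train_ratios)).astype(int)
</pre>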
<hr><br> Metric Learning:<br> I took part in the landmark recognition challenge in May 2018 (<a href="https://www.kaggle.com/c/landmark-recognition-challenge" target="_blank">https://www.kaggle.com/c/landmark-recognition-challenge</a>), and I had planned to use metric learning in that competition, but time was limited after I finished the TalkingData competition. Still, I read many related papers and did many experiments after that.</p> <p>When I analyzed the predictions of my models, I wanted to find the nearest samples to compare. I first used the features from the CNN model, but I found they were not so good, so I decided to try metric learning.</p> <p>I found it very hard to train in this competition; it took me a lot of time and the result was not so good. I found that the same algorithm could work very well in the Whale identification competition, but I did not give up, and I finally found a good model in the last two days.</p> <p>Using the model, I could find the nearest sample on the validation set, <strong>with top1 accuracy &gt; 0.9</strong><br> Here are some demos:<br><br> Correct sample with a single label<br> <img src="https://bestfitting.github.io/kaggle/protein/images/002_Sample%20with%20single%20Label.jpg" alt=""><br> Correct sample with multiple labels<br> <img src="https://bestfitting.github.io/kaggle/protein/images/003_Sampe%20with%20multi%20labels.jpg" alt=""><br> Correct sample with a rare label: Lipid droplets<br> <img src="https://bestfitting.github.io/kaggle/protein/images/004_Rare%20Label_Lipid%20droplets.jpg" alt=""><br> Correct sample with a rare label: Rods &amp; rings<br> <img src="https://bestfitting.github.io/kaggle/protein/images/005_Rare%20Label_Rods%20%26%20rings.jpg" alt=""><br> Missed a label<br> <img src="https://bestfitting.github.io/kaggle/protein/images/006_Missed%20a%20label.jpg" alt=""><br> Incorrectly added a label<br> <img src="https://bestfitting.github.io/kaggle/protein/images/007_incorrectly%20add%20a%20label.jpg" alt=""></p> <p>Since the top1 accuracy &gt; 0.9, I thought I could just use the metric learning result to set the labels of the test set. But I found that the test set is a little different from V18, and for some samples no nearest neighbor can be found in the train set or V18. So I set a distance threshold and replaced the labels with the found sample's labels. Fortunately, the result is not sensitive to the threshold: replacing 1000 samples in the test set gives almost the same score as replacing 1300 samples. By doing so, my score improved by 0.03+, which was a huge improvement in this competition.</p>
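<p>A hedged sketch of that replacement step: if a test sample's nearest train/V18 neighbor is closer than a distance threshold, take the neighbor's labels; otherwise keep the CNN prediction. All names are illustrative placeholders, not the author's code; the 0.35 threshold comes from the Q&amp;A below.</p> <pre>import numpy as np

def replace_labels_by_neighbor(cnn_labels, nn_dist, nn_labels, thresh=0.35):
    """cnn_labels: (n_test, 28) labels from the classification model.
    nn_dist: (n_test,) distance to the nearest train/V18 sample.
    nn_labels: (n_test, 28) labels of that nearest sample.
    Samples whose nearest neighbor is within `thresh` get the
    neighbor's labels; the rest keep the CNN prediction."""
    out = cnn_labels.copy()
    close = nn_dist &lt; thresh
    out[close] = nn_labels[close]
    return out
</pre>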
<p>I think my method is important not only for improving the score; it can help HPA and their users in the following ways:<br><br> <em>1. When someone wants to label (or learn to label) an image, or check its quality, he can get the nearest images to refer to.<br> 2. We can cluster the images by the metric, find the label noise, and then improve the quality of the labels.<br> 3. We can explain why the model is good by visualizing the predictions.<br></em></p> <p><strong>Ensemble:</strong><br> To keep the solution simple, I don't discuss the ensemble here; a single model, or even a single fold + the metric learning result, is good enough to get first place.</p> <p><strong>The scores on LB:</strong><br> <img src="https://bestfitting.github.io/kaggle/protein/images/008_scores.png" alt=""><br> <br><br> I am sorry I can not describe the details of this part now; as I mentioned before, the whale identification competition is still on-going.</p> <p><strong>Introspection</strong>:<br> Before I entered this competition, I never expected I could find a way out; it is very hard to build a stable CV, and the score is sensitive to the distribution of the rare classes. A gold medal was my maximum expectation.<br><br> I feel Kaggle competitions are becoming harder and harder. In all honesty, there are no secrets, only hard work. I treat every competition as a force to push me forward. I force myself not to learn and use too many competition skills, but rather knowledge to solve real problems.<br><br> It is quite lucky that I found a relatively good solution in this competition, as I failed to find a Reinforcement Learning algorithm in TrackML, and failed to finish a good CNN-RNN model in the Quick Draw competition in time. But anyway, if we compete only to win, we may lose; if we compete for learning and for providing a useful solution to the host, there is nothing to lose.</p> <p><strong>Update, Metric Learning Part:</strong></p> <p>Sorry for the late update!</p> <p>As I noticed that samples with the same antibody-id have almost the same labels, I thought I might treat the antibody-id as a face id, and use face-recognition algorithms on the HPA v18 dataset.</p> <p>When training, I used the V18 antibody IDs to split the samples, keeping one sample per ID in the validation set and putting the other samples with the same ID in the train set. I used top1-acc as the validation metric.</p> <p><strong>Metric Learning Model:</strong><br> Network: resnet50<br> Augmentations: Rotate 90, flip<br> Loss Functions: ArcFaceLoss<br> Optimizer: Adam<br> Scheduler: lr = 10e-5, 50 epochs.</p> <p><strong>Model details:</strong></p> <pre>class ArcFaceLoss(nn.modules.Module):
    def __init__(self, s=30.0, m=0.5):
        super(ArcFaceLoss, self).__init__()
        self.classify_loss = nn.CrossEntropyLoss()
        self.s = s
        self.easy_margin = False
        self.cos_m = math.cos(m)
        self.sin_m = math.sin(m)
        self.th = math.cos(math.pi - m)
        self.mm = math.sin(math.pi - m) * m

    def forward(self, logits, labels, epoch=0):
        cosine = logits
        sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
        # cos(theta + m) = cos(theta)cos(m) - sin(theta)sin(m)
        phi = cosine * self.cos_m - sine * self.sin_m
        if self.easy_margin:
            phi = torch.where(cosine &gt; 0, phi, cosine)
        else:
            phi = torch.where(cosine &gt; self.th, phi, cosine - self.mm)

        one_hot = torch.zeros(cosine.size(), device='cuda')
        one_hot.scatter_(1, labels.view(-1, 1).long(), 1)
        # torch.where(out_i = x_i if condition_i else y_i)
        output = (one_hot * phi) + ((1.0 - one_hot) * cosine)
        output *= self.s
        loss1 = self.classify_loss(output, labels)
        loss2 = self.classify_loss(cosine, labels)
        gamma = 1
        loss = (loss1 + gamma * loss2) / (1 + gamma)
        return loss

class ArcMarginProduct(nn.Module):
    r"""Implement of large margin arc distance:
    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        s: norm of input feature
        m: margin cos(theta + m)
    """
    def __init__(self, in_features, out_features):
        super(ArcMarginProduct, self).__init__()
        self.weight = Parameter(torch.FloatTensor(out_features, in_features))
        # nn.init.xavier_uniform_(self.weight)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)

    def forward(self, features):
        cosine = F.linear(F.normalize(features), F.normalize(self.weight.cuda()))
        return cosine

# backbone class definition elided in the original post
def __init__(self, ...):
    ... ...
    self.avgpool = nn.AdaptiveAvgPool2d(1)
    self.arc_margin_product = ArcMarginProduct(512, num_classes)
    self.bn1 = nn.BatchNorm1d(1024 * self.EX)
    self.fc1 = nn.Linear(1024 * self.EX, 512 * self.EX)
    self.bn2 = nn.BatchNorm1d(512 * self.EX)
    self.relu = nn.ReLU(inplace=True)
    self.fc2 = nn.Linear(512 * self.EX, 512)
    self.bn3 = nn.BatchNorm1d(512)

def forward(self, x):
    ... ...
    # concat of global average and max pooling, as in the CNN head above
    x = torch.cat((nn.AdaptiveAvgPool2d(1)(e5), nn.AdaptiveMaxPool2d(1)(e5)), dim=1)
    x = x.view(x.size(0), -1)
    x = self.bn1(x)
    x = F.dropout(x, p=0.25)
    x = self.fc1(x)
    x = self.relu(x)
    x = self.bn2(x)
    x = F.dropout(x, p=0.5)
    x = x.view(x.size(0), -1)
    x = self.fc2(x)
    feature = self.bn3(x)
    cosine = self.arc_margin_product(feature)
    if self.extract_feature:
        return cosine, feature
    else:
        return cosine
</pre> <p>Please refer to the papers:<br> ArcFace: Additive Angular Margin Loss for Deep Face Recognition<br> <a href="https://arxiv.org/pdf/1801.07698v1.pdf" target="_blank">https://arxiv.org/pdf/1801.07698v1.pdf</a><br> Deep Face Recognition: A Survey<br> <a href="https://arxiv.org/pdf/1804.06655.pdf" target="_blank">https://arxiv.org/pdf/1804.06655.pdf</a></p> <p>As I was very busy after this competition (and will be for a while longer), I used almost the same model to finish the Whale competition, and the winners' models are very good, so I think I need not write a summary of that competition. I think the person re-identification related papers and solutions are a good starting point for the Whale competition.</p> <p>Thanks for your patience!</p> <p>1. According to my experience, oversampling by adding samples will change the probabilities of some classes, but will not change the model's capability a lot; the real problem in this competition is not the probability of a class, but the order of the probabilities. But I must say I did not run experiments to verify this. Perhaps I will use oversampling in other competitions; for example, there is only one image for many categories in the whale competition. And oversampling is time consuming. A competition is a series of decisions under limited time and resources, so we must try to select the most promising methods. 2. To keep the whale challenge from being disturbed by my post, I think you can understand that I can not say too much; there are so many clever Kagglers here, and it would be unfair to the leading teams. (Perhaps I have said too much already.) 3. I must decide which samples should have their labels replaced, so I set a distance, say 0.35: if the distance &lt; 0.35, then I replace the labels with the found sample's in V18. If I set the threshold to 0.3, the score on the LB does not change too much, so we can say it is not sensitive to the threshold. As I did not try to overfit the public LB when selecting the thresholds of the CNN model, I can say my solution is not sensitive to thresholds as a whole.</p> <p>Yes, I used a pre-trained model; the LR schedule is OK. I did not try too many other options; I usually pay more attention to the data and the train-val loss relationship.</p> <p>I trained a model to generate a vector for each sample (with 4 RGBY channels), then calculated the distance between every val/test set sample and every train/V18 sample, and sorted the distances; that way I can get the nearest sample for each val/test sample.</p>
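<p>A minimal sketch of that nearest-neighbor step, assuming the 512-d embeddings are stacked into numpy arrays; all names are placeholders, not the author's code.</p> <pre>import numpy as np

def nearest_neighbors(test_feats, gallery_feats):
    """test_feats: (n_test, d), gallery_feats: (n_gallery, d).
    Returns, for each test sample, the index of and Euclidean
    distance to its nearest train/V18 sample."""
    # squared distance via |a-b|^2 = |a|^2 - 2ab + |b|^2
    d2 = (np.sum(test_feats**2, axis=1, keepdims=True)
          - 2.0 * test_feats @ gallery_feats.T
          + np.sum(gallery_feats**2, axis=1))
    idx = d2.argmin(axis=1)
    dist = np.sqrt(np.maximum(d2[np.arange(len(idx)), idx], 0.0))
    return idx, dist
</pre>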
Human Protein Atlas Image Classification
Meditation on 74th place.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is the first time we have managed to reach 74th place. I can not say that I am completely happy with the results, and we definitely have a million points for improvement, but there are some philosophical thoughts that I would like to share:</p> <hr> <ol> <li><p>Our models were much worse than what most competition participants are reporting, but it seems that our way of blending models worked pretty well <a href="https://github.com/petrochenko-pavel-a/proteins/blob/master/proteins.py">Final Submission Code</a></p></li> <li><p>Having a <a href="https://github.com/petrochenko-pavel-a/classification_training_pipeline">declarative pipeline</a> that allowed us to manage experiment settings quickly helped a lot, but there is still a lot of room for improvement, especially in the areas of experiment sharing and post-processing code.</p></li> </ol> <p><strong>Networks:</strong> xception [256,512], NasNet (256) - only one model, one fold <strong>Augmentations:</strong> Flips, Rot90 <strong>Test time augmentations:</strong> Flip, Rot90 <strong>Loss functions:</strong> BCE, Focal Loss <strong>Learning:</strong> Adam, LR Finder, CLR (triangular2) - see the sketch after this post</p> <p>Fail points:</p> <ol> <li><p><strong>Economics and merging</strong>: after merging we gathered a lot of compute power, but all models except one that was part of our final submission were trained on my 2x1080 Ti machine, so it seems that we were not able to utilize our compute effectively. My current feeling is that this is mostly the result of merging late (just a few hours before the merger deadline), so we were not able to align our thoughts quickly enough to make use of our unified compute and brain power.</p></li> <li><p><strong>Augmentations</strong>: we used very conservative augmentations, and obviously this was one of the pain points of our models. Next time I will use less conservative augmentations and more TTA.</p></li> <li><p><strong>Exploitation vs Exploration</strong>. Balancing these two things was critical, and I have a feeling that having multiple different branches of research after the merger deadline did not allow us to fully exploit the potential of our best approach. Next time, I will try to switch to exploitation a little bit earlier.</p></li> </ol>
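<p>Since the post mentions CLR (triangular2) without detail, here is a minimal framework-agnostic sketch of that schedule from Leslie Smith's cyclical learning rate paper; the parameter values are illustrative, not the team's actual settings.</p> <pre>import math

def clr_triangular2(iteration, step_size=2000, base_lr=1e-4, max_lr=1e-3):
    """Cyclical LR, 'triangular2' policy: the LR oscillates between
    base_lr and max_lr, and the amplitude is halved every cycle."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x) / (2 ** (cycle - 1))
</pre>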
Human Protein Atlas Image Classification
part of 4th place solution: GAPNet & dual loss ResNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, a big thanks to all my team-members. They all made this competition an awesome experience. I also want to thank @brian, @heng and @lafoss for their fruitful input.</p> <p>Since a major ingredient of our solution was diversity, and in the end we used around 25 different models, I want to give a few separate notes on two of my main contributions.</p> <p>I want to split my notes into strategy and architecture, as I see each as equally important.</p> <p><strong>Strategy:</strong></p> <p><a href="/tunguz">@tunguz</a>, <a href="/sasrdw">@sasrdw</a> and myself teamed up quite early, which enabled us to work in different directions right from the beginning. We always had the diversity of our models in mind, so I concentrated on models that seemed a bit different. Diversity with respect to my team-members was also the main reason I stuck to Keras, although in my opinion PyTorch would have been more suitable for this competition due to its flexibility. We also used different cross-validation schemes for the sake of diversity.</p> <p>After some trouble in the beginning with getting the cross-validation right, I started exploring different architectures as posted by <a href="/hengck23">@hengck23</a>. I found it quite efficient to only use 256x256 RGB images in the beginning because it allows fast iteration over different ideas.</p> <p><strong>Architectures:</strong></p> <p><em>GAPNet</em></p> <p>Immediately upon reading the GAPNet paper, I had the idea to change the illustrated architecture to use a pretrained backbone instead.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/113660/a95f150c7153a17538b074def2255e21/GAP.png" alt="original GAPNet"></p> <p>I think one important advantage of the GAPNet architecture is its ability for multiscale. So I tried different backbones and ended up with ResNet18, which also enabled the use of a batch size of 32 on a GTX1080Ti. I also saw minor improvements from adding SE-Blocks before the Average Pooling layers with nearly no computational cost, so I added those. I saw no improvement in using RGBY images. I used a weighted bce and f1 loss and a cosine annealing lr schedule, and only trained for 20 epochs. After applying our thresholding method to the predictions, GAPNet trained on 512x512 RGB images (also using the HPA external data) as a single 5-fold model was able to achieve 0.602 Public LB. I also experimented with different internal/external data proportions, RGBY, and 512-cropping from 1024 images, so I had four 5-fold GAPNet models which I could ensemble, resulting in LB 0.609</p> <p><em>Dual Loss ResNet</em></p> <p>Following another post from <a href="/hengck23">@hengck23</a> I implemented a ResNet34 with a dual loss:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/415080/10599/attention%20is%20what%20you%20needq.png" alt="enter image description here"></p> <p>Additionally to the "normal" classification loss, I used the output of the last 32x32x128 layer within ResNet34, did a Conv2D to 32x32x28, and then used a downsampling of the green channel with the corresponding labels as a ground-truth mask to have a segmentation loss. This segmentation loss works like a regularizer that ensures that the activations of the 32x32x128 layer are "nice". The additional supervised attention added quite some benefit as regularization, and the computational cost was bearable.</p>
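<p>A hedged PyTorch sketch of that dual-loss head. The mask construction (broadcasting the image-level labels over a downsampled green-channel foreground map) and all names are my reading of the description above, not the author's code.</p> <pre>import torch
import torch.nn as nn
import torch.nn.functional as F

class DualLossHead(nn.Module):
    """Auxiliary 'segmentation' head on the 32x32x128 feature map,
    used purely as a regularizer next to the classification loss."""
    def __init__(self, n_classes=28):
        super().__init__()
        self.seg_conv = nn.Conv2d(128, n_classes, kernel_size=1)  # 32x32x128 -&gt; 32x32x28

    def forward(self, feat_32x32):                  # (b, 128, 32, 32)
        return self.seg_conv(feat_32x32)            # (b, 28, 32, 32)

def dual_loss(cls_logits, seg_logits, labels, green):
    """labels: (b, 28) image-level targets; green: (b, 1, H, W) green channel.
    The pseudo-mask spreads the downsampled green channel over the
    image-level labels."""
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, labels)
    fg = F.adaptive_avg_pool2d(green, (32, 32))     # (b, 1, 32, 32)
    pseudo_mask = fg * labels[:, :, None, None]     # (b, 28, 32, 32)
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, pseudo_mask)
    return cls_loss + seg_loss
</pre>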
<p>I trained 2 variants, 5-fold each, scoring LB 0.603, and added them to the GAPNet ensemble -&gt; LB 0.618</p> <p>I guess <a href="/tunguz">@tunguz</a> will write an overall summary where he explains how my models were then incorporated into our overall ensemble.</p> <p>I attached the GAP blocks to the last layers with 16, 32, 64, 128, 256 filters.</p> <p>Even if you use different cross-validation schemes (say 5-fold vs 4-fold) you can still stack models using the oof predictions. Nevertheless, we used a voting scheme as the very last ensembling layer, as we did not have oof predictions for all our models, and it worked quite well</p> <p>Congratulations on your result, Sven :D</p> <p>You mean for the dual loss ResNet? I actually did not implement an autoencoder (I wanted to, but it was really computationally expensive, and a quick test showed, at least in my case, that it was not promising). Additionally to the "normal" classification loss, I used the output of the last 32x32x128 layer within ResNet34, did a Conv2D to 32x32x28, and then used a downsampling of the green channel with the corresponding labels as a ground-truth mask to have a segmentation loss. This segmentation loss works like a regularizer that ensures that the activations of the 32x32x128 layer are "nice".</p> <p>I mainly used a stratified multilabel split as posted in one of the discussions. But we (me included) also used a 4-fold cv based on clusters of images with similar statistics. For the last day we also ran last-minute 2-fold models.</p> <p>thanks, yes that's the paper</p> <p>Here I only address a (small) part of our solution. Since our whole solution consists of many models and tricks, we have not finished writing the complete summary yet</p>
Human Protein Atlas Image Classification
~0.5 fastai solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas Image Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Fastai V1 solution</h1> <p>I started this competition training custom models (darknet, wideresnets, etc.) with little success, around ~0.45 LB. I had limited computing power and just wanted to test these fast architectures. We teamed up with David and got the extra data from HPA v18 (99k images in total were used). We ended up using just simple, pretrained Resnets.</p> <ul> <li>RGB worked better for us; only png images were used.</li> <li>Oversampling was done with an imbalanced sampler, with weights for rare classes (a simple pytorch sampler that computes the distribution of classes and assigns a value to each sample; we used 1/samples and a log-sample variant; see the sketch at the end of this post).</li> <li>Very late in the competition (1 week ago) we realised that simple Resnet18, 34 and 50 worked better, and thanks to fastai one_cycle we could train them very fast (a couple of hours).</li> <li>Threshold optimization did not work for us.</li> <li>I was a bit sad that without external data it was impossible to get good results.</li> <li>Vast.ai is cheap and works very well; I have a bash script that installs, copies and sets up everything in a couple of minutes. RTX cards are very fast.</li> <li>Mixup and fp16 were used all the time.</li> <li>We should have chosen better models to submit; I forgot to click the better ones.</li> <li>The model's f1 score was almost identical to the LB score.</li> <li>We used BCE, focal loss and a soft F1 loss.</li> <li>Only single-fold models; we did not have time to do CV.</li> <li>A 90% train, 10% validation split was used.</li> <li>The fastai library changed a lot during the competition; we started with V1.06 and ended on V1.39 (using the datablock API). It was a personal choice to use V1 and not V0.7, thanks @sgugger</li> <li>The highest Private Leaderboard model was a mix of Resnet18, Resnet34 and Resnet50 (0.509)</li> </ul> <h1>Hardware</h1> <p>I used mostly GCP with P4 GPUs and some P100s at the end. It was the first time I used GCP; I was mostly an AWS user, but discovered that we get $300 of free credits, so I decided to try it. The only thing I would say is that the P4 cards are not very fast, roughly equivalent to a GTX 1070, but you can use up to 4 of them and they are cheap at $0.24/hour. The SSD in the VM is not very fast, and was expensive ($1/day for 200GB) for a hobbyist like me. The RTX cards available at vast.ai are very fast for fp16 training; I found that the RTX 2080 (not Ti) was the best value for the money, almost 3x faster than the P4. I would love for Google to add some value cards, like Titans or even RTX 2080s... - Total personal cost: $250 credits on GCP and $50 on vast.ai</p> <p>Thank you to everyone; the forums at fastai and the Discussion here were very inspiring.</p>
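<p>A minimal sketch of the imbalanced sampler mentioned in the list above, using PyTorch's WeightedRandomSampler with inverse-frequency weights. The 1/count and log variants follow the post; everything else (names, the single-label simplification) is an illustrative assumption.</p> <pre>import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def make_imbalanced_sampler(labels, scheme='inverse'):
    """labels: (n_samples,) class index per sample (single-label
    simplification). Rare classes get larger sampling weights."""
    counts = np.bincount(labels)
    if scheme == 'inverse':          # weight = 1 / class count
        class_w = 1.0 / counts
    else:                            # softer: weight = 1 / log(count)
        class_w = 1.0 / np.log1p(counts)
    sample_w = class_w[labels]
    return WeightedRandomSampler(torch.as_tensor(sample_w, dtype=torch.double),
                                 num_samples=len(labels), replacement=True)

# loader = DataLoader(dataset, batch_size=64, sampler=make_imbalanced_sampler(y))
</pre>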
Humpback Whale Identification
10th Place Solution: Metric Learning, Classification, Siamese, LGBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>TL;DR:</p> <ul> <li>Siamese architecture</li> <li>Metric learning featuring a brand-new CVPR 2019 method (will be published soon)</li> <li>Classification on features</li> <li>Large blend for new whale/not new whale binary classification</li> </ul> <p>First of all, I would like to thank the competition hosts for such an amazing competition. Also, special thanks to all of my teammates for the hard work during the competition.</p> <p>Our solution features three main approaches.</p> <p>The first one is a Siamese Net, heavily based on Martin Piotte's kernel in the beginning. However, as was noted in the Kernels, Martin's solution out of the box could have gotten you about 0.822 LB.</p> <p>We added lots of augmentations and tried a lot of different branches (mainly ResNet-18, ResNet-34, SE-ResNeXt-50, ResNet-50). We also pursued a smart flipping strategy, which makes the model differentiate between the left and the right part of the fluke. Specifically, during training, for each pair X, Y of the same whale, we get one more training pair by flipping <strong>both</strong> of them. On the other hand, if X and Y are different whales, then we can flip either of the two images, which therefore gives us 3 more training pairs (see the sketch at the end of this post). Also, we added hard positive mining (basically, LAP solving for the positive pairs as well). The LAP strategy was also modified to run on multiple threads. The NN was trained progressively 299-&gt;384-&gt;512.</p> <p>The best-performing model could get us 0.929 LB, and the ensemble of all gave 0.940.</p> <p>Another solution will be explained later in detail by @asanakoy. In two words, it is metric learning with multiple branches and margin loss, trained on multiple-resolution crops using bboxes, with grayscale and RGB input images. He also used his brand-new method from CVPR, which allowed a 1-2% score boost.</p> <p>Note that new whales were removed from training in our approach.</p> <p>Our third approach is classification on features. We concatenated all of the features generated by our branch models and trained a <strong>classification</strong> model on top of them. The head of the classification was two dense layers with a little dropout. This model allowed us to achieve 0.924 MAP@5.</p> <p>Finally, we decided to use gradient boosting to decide whether or not a whale is a new whale. To do this, we took our top-performing models and ensembles and took their TOP-4 predictions for each whale. Then, for all of our models, we took their predictions on this set of classes. We used a blend of LogReg, SVM, several KNN models, and LightGBM to solve the binary classification problem. <a href="/ppleskov">@ppleskov</a> did this very well, which allowed us to discover hard cases of new whales and further boost the score.</p> <p>Now, a couple of words on duplicates. As was noted previously in the discussions, there are lots (over 46 pairs) of duplicate whale ids. However, my team managed to find over 106 pairs of duplicates, which affected <strong>more than a thousand images</strong>.
Now, I present three strategies that could have been used to tackle this problem:</p> <p><strong>Strategy 1:</strong> compare the ids within the same group and always put the whale id with the larger count in front.</p> <p><strong>Strategy 2:</strong> place in front whatever your network predicts to be first, and then just put the rest afterwards.</p> <p><strong>Strategy 3:</strong> just let the net decide and don't modify the submission files.</p> <p>Note that in the first two strategies, we always put the whale ids that belong to the same group one after another, which is completely logical. For us, strategy 2 worked the best and gave a 0.0002 LB improvement. Also, when we obtained a new set of duplicate images, we modified our training labels so as not to confuse the network.</p> <p>Finally, I want to congratulate all the people who achieved what they wanted, and thank the Kaggle and ODS.ai community for one more amazing experience!</p> <p>Great idea! I've shared them in another thread: <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/82557">https://www.kaggle.com/c/humpback-whale-identification/discussion/82557</a>. Also, I'm attaching it to the original post.</p>
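<p>A small sketch of the flip-based pair augmentation described earlier in this post; <code>flip(x)</code> stands for a horizontal flip, and the tuple format (img_a, img_b, same_whale) is an illustrative assumption.</p> <pre>def expand_pairs_with_flips(x, y, same_whale, flip):
    """Siamese pair augmentation that keeps left/right fluke identity:
    - same whale:       flipping both images together -&gt; 1 extra pair
    - different whales: flipping either or both images -&gt; 3 extra pairs"""
    pairs = [(x, y, same_whale)]
    if same_whale:
        pairs.append((flip(x), flip(y), True))     # flip both together
    else:
        pairs.append((flip(x), y, False))
        pairs.append((x, flip(y), False))
        pairs.append((flip(x), flip(y), False))
    return pairs
</pre>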
Humpback Whale Identification
11th golden place. Special thanks
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks)</p> <p>Special thanks to Nestlogic for support (especially to CEO David Gruzman), Martin Piotte for his generous kernels and tips, and Andrew Ng for his amazing Deep Learning Specialization :)</p> <p>Thanks)</p> <p>Thank you!</p>
Humpback Whale Identification
1st solution(classification) && code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My number of horizontal stripes is 8. I created the local features by horizontal avg pooling.</p> <p><a href="https://arxiv.org/pdf/1711.09349.pdf">https://arxiv.org/pdf/1711.09349.pdf</a></p> <p>I trained triplet loss and BCE loss together.</p> <p>Step 1: Train on all labels with &gt;10 samples (this step helps to converge faster and easier). Step 2: Train with all samples, with all of the network fixed except the last two layers.</p> <p>The global feature is the avg of the local features.</p> <ol> <li>more than 2 images</li> <li>BCE loss is used for every class; my network output size is (batch_size, 5004)</li> <li>I didn't treat the dataset as new_whale and not new_whale; I treat them as 5004 classes plus new_whale.</li> <li>I used bbox</li> </ol> <p>thx :)</p> <p>For example, the raw image output is output1 with size [1, 10008], and the fliplr image output is output2 with size [1, 10008]:</p> <pre>final_out = torch.sigmoid(output1)[0, :5004] + torch.sigmoid(output2)[0, 5004:]
final_out = final_out / 2
</pre> <p>I haven't tried vertical stripes. I think horizontal stripes make the distance between the fliplr image feature and the raw image feature larger! Then we can get better "TTA" results.</p> <p>hahah, thanks~</p> <p>Yes, classification with a metric loss is right.</p> <p>10008 classes in total</p> <p>Hard triplet loss is a metric learning loss and a variant of triplet loss: <a href="https://github.com/Yuol96/pytorch-triplet-loss">https://github.com/Yuol96/pytorch-triplet-loss</a></p> <p>For few-shot learning, my method is from <a href="https://arxiv.org/abs/1707.05574">https://arxiv.org/abs/1707.05574</a>. I used heavy augmentation and a class-balanced sampler.</p> <p>The training procedure is from <a href="https://arxiv.org/abs/1707.05574">https://arxiv.org/abs/1707.05574</a>. For the local and global features, I have uploaded a code screenshot. It is a common trick for Person Retrieval.</p> <p>Cascade net looks good. Thank you a lot for your tips.</p> <p>Yes, after cleaning up the code.</p> <p>thanks!</p> <p>Batch size is 40 with 5 Titan X Pascal. I trained for 8 hours and 40 epochs. The local feature is a part-level feature, like this: <a href="http://link.zhihu.com/?target=https%3A//arxiv.org/abs/1711.09349">http://link.zhihu.com/?target=https%3A//arxiv.org/abs/1711.09349</a></p> <p>First of all, thanks to all of my teammates, Venn, Tom and Alex.</p> <p><strong>- Overview</strong> At the very beginning, we utilized softmax + a fixed threshold to train the model but didn't get a good result (&lt;0.9). In order to use new_whale images in our network, we decided to do 2-class classification for each whale class. After several weeks of experiments, senet154 performed the best and we got a 0.96 (both public &amp; private) result (single model). For further improvements, we added some tricks (discussed later) to get 0.969, and added 4-fold cross validation with class-balance post-processing to achieve 0.973.
We also tried to ensemble our se154 with other networks like seresnext101 and dpn131, but didn't get any boost.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/481042/11474/network_.png" alt="enter image description here"> <strong>- Network input and training steps</strong> Input size is (512, 256). We use 4 channels, RGB + masks (trained with 450 open-source labels), as our input. Step 1: Train on all labels with &gt;10 samples (this step helps to converge faster and easier). Step 2: Train with all samples, with all of the network fixed except the last two layers. <strong>- Flip images (+0.006)</strong> Thanks to Heng's idea, we flip images and consider flipped id-whales as different whales, while keeping new whales the same. <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/481042/11466/hengck.png" alt="enter image description here"></p> <p><strong>- Pseudo labels (+0.001)</strong> We added around 2000 test images (with confidence &gt; 0.96) into our training set. <strong>- Class balance (+0.001 ~ 0.002)</strong> During our continuous improvements (from 0.8+ to 0.96), we found that the number of labels is correlated with the score. Thus we use the following strategy to further balance our predictions: for the top-5 predictions class1 to class5, if conf class1 - conf class2 &lt; 0.3, and class2 is not used in any top-1 prediction, and class1 has already been used many times in top-2 predictions, we switch class1's and class2's positions (see the sketch at the end of this post).</p> <p>Finally, congrats to all participants, especially Heng and Dene. Congrats to the 3 new GMs: SeuTao, David and Weimin!</p> <p>code of model <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/481042/11472/1551411492(1).png" alt="enter image description here"></p> <p>--<strong>code</strong> <a href="https://github.com/earhian/Humpback-Whale-Identification-1st-">https://github.com/earhian/Humpback-Whale-Identification-1st-</a></p> <p>batchsize, 2048, 8, 16</p> <p><a href="https://drive.google.com/file/d/1hfOu3_JR0vWJkNlRhKwhqJDaF3ID2vRs/view?usp=sharing">https://drive.google.com/file/d/1hfOu3_JR0vWJkNlRhKwhqJDaF3ID2vRs/view?usp=sharing</a></p> <p>Q: For new_whale images, is the target a (10008,) tensor with all-zero values? A: Yes. Q: Do you think that, without triplet loss, your model can achieve similar accuracy, or is severe degradation expected? A: I haven't tried this, but I think it would get 0.96 at least.</p> <p>For most whales, the patterns are not horizontally symmetrical. :)</p> <p>triplet_loss and lovasz are used. By the way, why lovasz loss? Refer to <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109</a>. I think the method of <a href="/bestfitting">@bestfitting</a> is 2-class classification for each whale too. :)</p> <p>A simple U-net is enough.</p> <p>In the error loss, the target is always zero. I used nonew_loss to balance positive and negative samples.</p> <p>Refer to <a href="https://github.com/L1aoXingyu/reid_baseline/blob/master/modeling/baseline.py">https://github.com/L1aoXingyu/reid_baseline/blob/master/modeling/baseline.py</a>. Batchnorm without bias is a common trick of reid.
Playground images are from <a href="https://www.kaggle.com/c/whale-categorization-playground">https://www.kaggle.com/c/whale-categorization-playground</a></p>
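<p>A rough sketch of the class-balance post-processing described in the main post. The counters and the exact conditions are my paraphrase of the write-up, with placeholder thresholds, not the authors' code.</p> <pre>from collections import Counter

def rebalance_top5(top5_ids, top5_conf, swap_gap=0.3, overuse=5):
    """top5_ids/top5_conf: per test image, the 5 predicted classes and
    confidences, sorted descending. Swap ranks 1 and 2 when rank 1 is
    over-used, rank 2 never appears at rank 1, and the confidence gap
    is small."""
    top1_counts = Counter(ids[0] for ids in top5_ids)
    for ids, conf in zip(top5_ids, top5_conf):
        c1, c2 = ids[0], ids[1]
        if (conf[0] - conf[1] &lt; swap_gap
                and top1_counts[c2] == 0
                and top1_counts[c1] &gt; overuse):
            ids[0], ids[1] = c2, c1
    return top5_ids
</pre>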
Humpback Whale Identification
24th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks eagle!</p> <p>The mpiotte siamese single models did indeed stall out around 0.9; the highest I got from any one of them was 0.907, which was a 512x512 RGB model. After the initial 500 epochs with 384 grayscale, I continued training it with the different image parameters for between 5 and 30 epochs at a time, then saved the score array and made a submission; it got to about 1400 epochs before I gave up on it. The 0.935 score was from an average of the highest two dozen or so scores (I can check the exact number if you like) with a threshold of 0.65. Adding more that scored below 0.9 did not help, but I didn't play with the threshold that much.</p> <p>Thanks! I guess it's 23rd now lol</p> <p>My solution was mostly based on this <a href="https://www.kaggle.com/seesee/siamese-pretrained-0-822">version</a> of <a href="/martinpiotte">@martinpiotte</a>'s amazing work in the previous whale competition. I noticed many comments mentioning the variability of results obtained from Siamese networks, which I thought might be advantageous.</p> <p>At first I made no modifications to Martin's network, training it from scratch for 500 epochs. At that point it would consistently get between 0.895-0.905 on the LB. Training for another 5-10 epochs resulted in a similar score, but the distribution of predicted whales was much more variable even though the scores remained very close. I figured this was due to randomness in how the augmentations were applied, so I trained several dozen versions of this model with minor variations, changing the image size from 224 up to 600, both grayscale and RGB. I would have added TTA, but I do not know how to do it in Keras. After this I had about 40 sets of predictions, which I used for a hard-voting scheme at each whale position (see the sketch after this post). A simple average of the prediction scores from each model could get 0.935 with the right threshold, and adding this to the voting resulted in 0.941 on the public LB.</p> <p>At this point the scores were not increasing, and it seemed I had reached the limit of what this network was capable of distinguishing, so I added classification and prototype results to the voting. I find the prototype approach really interesting, so I wish I had more time to work on it; I was able to get a single prototype model up to only 0.872. Adding several of these other models to the vote got the final score.</p> <p>Great work by everyone, I learned so much and I can't wait to see how you all approached this.</p>
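<p>A minimal sketch of position-wise hard voting over many prediction sets, as described above; the rank-weighting and tie-breaking are assumptions, not the author's exact scheme.</p> <pre>from collections import Counter

def hard_vote(predictions_per_model, top_k=5):
    """predictions_per_model: list over models, each a list of ranked
    whale ids for one test image. A candidate earns more votes the
    higher it is ranked; the top_k candidates by votes are returned."""
    votes = Counter()
    for ranked_ids in predictions_per_model:
        for rank, whale_id in enumerate(ranked_ids[:top_k]):
            votes[whale_id] += top_k - rank      # rank-weighted vote
    return [w for w, _ in votes.most_common(top_k)]
</pre>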
Humpback Whale Identification
25-th place solution: CosFace + ProtoNets
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was using a normal fully-connected layer. Exactly, it looks like this in PyTorch:</p> <pre><code>class NormLinear(nn.Module):
    def __init__(self, in_features, out_features, temperature=0.05, temperature_trainable=False):
        super(NormLinear, self).__init__()
        self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        self.scale = 1 / temperature
        if temperature_trainable:
            self.scale = nn.Parameter(torch.Tensor(1))
            nn.init.constant_(self.scale, 1 / temperature)

    def forward(self, x):
        # cosine similarity between L2-normed features and L2-normed weights
        x_norm = F.normalize(x)
        w_norm = F.normalize(self.weight)
        cosine = F.linear(x_norm, w_norm, None)
        out = cosine  # * self.scale (scaling disabled in the posted version)
        return out

# model preparation
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = 'se_resnext101_32x4d'
model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
model.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
model.last_linear = nn.Sequential(*[nn.LayerNorm(model.last_linear.in_features, elementwise_affine=False),
                                    NormLinear(model.last_linear.in_features, 5004)])
</code></pre> <p>Looks like you predict one or two images better than us :) <a href="/daisukelab">@daisukelab</a> and I were observing you for the last week, worrying that you would jump over us. And the story came true.... :P</p> <p>Here I will describe my part of the solution: the CosFace approach. The ProtoNets and ensembling will be described by <a href="/daisukelab">@daisukelab</a></p> <p><strong>1. Preprocessing</strong> First of all, I used BBs from @radek (thank you!). I just trained a model on his annotated data and that's all; I did not invest time in this part of the competition. In a later stage I also used updated BBs from @radek (I call them v2), but the difference in the final results was very small.</p> <p><strong>2. Model and Data-Loading</strong> My final model was se-resnext101 (I also tried se154, but it did not work well). In fact, my model and augmentation were exactly the same as in this kernel: <a href="https://www.kaggle.com/stalkermustang/pytorch-pretraiedmodels-se-resnext101-baseline">https://www.kaggle.com/stalkermustang/pytorch-pretraiedmodels-se-resnext101-baseline</a> Other stuff which I tried:</p> <ul> <li>CutOut -&gt; fail</li> <li>MixUp -&gt; fail</li> <li>OverSample -&gt; fail</li> <li>Cluster-Based Sampling: here I tried to sample batches so that they contained similar classes (based on cosine similarity) -&gt; fail, overfitting</li> </ul> <p><strong>3. Loss Function</strong> In the past, I was developing a face-recognition system, and the main purpose of approaching the whale problem was testing such technology on whales :) So in general I was testing ArcFace and CosFace. Both of them work pretty well, but CosFace was slightly better. For CosFace I use a very high margin of 0.6 (in the original paper it was 0.35).
In fact, this step took me the longest time (3 weeks), where I was trying:</p> <ul> <li>BatchNorm vs LayerNorm before L2 normalization: LayerNorm better</li> <li>AlphaDropout vs DropOut: Alpha better, but with no influence on the debugging model (resnet50), so I did not use either of them, which now I think was one of the biggest mistakes I made</li> <li>CosFace vs ArcFace vs SphereFace: CosFace with m=0.6 was the clear winner (I also like the idea of CosFace the most)</li> </ul> <p><strong>4. Optimization</strong> Here I used AdamW (with fixed weight decay). I also tried OneCycle, but it was not working (I think that code was wrong). In general I train the model for 30 epochs, so it was pretty quick.</p> <ul> <li>New-whale: I did not use 'new-whale' for training. In the submission I just want to have ~27% of 'new_whale' (see the sketch at the end of this post). I had two approaches for this problem which did not work:</li> <li>Each new-whale as a different class: in general it was OK, the accuracy on the validation set was just 0.2% less. But it does not work well on the LB.</li> <li>Use a second loss function (KL-Divergence) which would force the output distribution after SoftMax of 'new-whale' to be uniform (so exactly the same probability for each class). It also works fine on the validation set, but not on the LB.</li> </ul> <p>Looks like I would need more time for this approach (especially the second one), because I really like it :)</p> <p>The final models were se101resnext trained on:</p> <ul> <li>gray and 448x448</li> <li>gray and 256x748</li> <li>rgb and 448x448</li> <li>rgb and 256x748</li> </ul> <p>In general, the approach was pretty simple and worked moderately well. But it looks like @pudae had a similar idea, so I'm now looking into his approach :)</p> <p>The ProtoNet and Ensemble parts will be explained by <a href="/daisukelab">@daisukelab</a>.</p> <p>Code: <a href="https://github.com/melgor/kaggle-whale-tail">https://github.com/melgor/kaggle-whale-tail</a> This is minimal code for training a single model and creating a submission without 'new-whale'</p> <p>I will release the full code later, but this is the CosFace implementation:</p> <pre><code>class CosineMarginCrossEntropy(nn.Module):
    def __init__(self, m=0.60, s=30.0):
        super(CosineMarginCrossEntropy, self).__init__()
        self.m = m
        self.s = s
        self.ce = torch.nn.CrossEntropyLoss()

    def forward(self, input, target):
        one_hot = torch.zeros_like(input)
        one_hot.scatter_(1, target.view(-1, 1), 1.0)
        # subtract the margin m from the target-class cosine, then scale by s
        output = self.s * (input - one_hot * self.m)
        loss = self.ce(output, target)
        return loss

criterion = CosineMarginCrossEntropy().cuda()
</code></pre> <p>The code which I posted there is the main idea behind all of my score.</p> <p>I just squashed it into a square, changing the original aspect ratio. That is why I also trained a second model with image size 256x748, to keep the ratio better. However, the accuracy of both models is comparable.</p> <p>Did you also use CosFace/ArcFace? What model? What augmentation? I'm also comparing my solution to Pudae's, and it looks like the mean feature per class gives him a nice boost. I need to check it as well. Also, I was using just AvgPool (whereas Pudae used flatten-&gt;Dropout-&gt;BN-&gt;FC-&gt;BN-&gt;FC); what did you use?</p> <p>Currently I'm mainly focused on training longer with Res50: my best model on 224x224, with my standard augmentation and pure classification (no center for every class), gets 0.656. The result is not the best one, but based on all my history, I'm able to get a 0.02 better score using the same model just by training longer (I use 100 epochs), while not training on all data (1k images for validation).
And Res50 because it learns fast :)</p> <p>About LayerNorm vs BatchNorm, I will test it again to confirm whether there is any difference. Then I will also try pudae's augmentation. And finally, the center of every class.</p> <p>About BBs, I'm not sure if training a better detector would help a lot. I trained one (using radek's code) and compared it to the second release of BBs. In both cases the results were pretty much the same.</p> <p>Also, the best model within the competition could get 0.682 (se101, 448x448) -&gt; 0.93. But I did not use center features, so maybe you would get 0.94 :)</p> <p>If you get a score &gt; 0.95 with a single model, let me know :)</p> <p>Hi, I added the code: <a href="https://github.com/melgor/kaggle-whale-tail">https://github.com/melgor/kaggle-whale-tail</a> But this is a minimal example, without creating a submission with new-whale; it trains a single model. My code is rather a mess, as I did not spend much time on this competition (mostly weekends).</p> <p>In general, I also did not use the playground data. Also, the 3rd place solution trains for a very long time (500 epochs vs my 40). I did some experiments, and it looks like training res50 longer can match se101 using the same input size (so it is interesting). Then I decided to try again, removing avg-pool but training longer.</p> <p>How do you use the 512 FC? Are you adding this layer after pooling? (Also with DropOut?)</p> <p>In fact, I'm continuing my experiments with whales; that is why I did not release the code. I will try to find some time tomorrow to release the code.</p>
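<p>A small sketch of one way to hit the '~27% new_whale' target mentioned above: choose the confidence threshold as a quantile of the top-1 cosine scores and place 'new_whale' ahead of the weak predictions. This is my reading of the idea, not code from the repo; names and placement are assumptions.</p> <pre>import numpy as np

def insert_new_whale(ranked_ids, top1_scores, new_whale_frac=0.27):
    """ranked_ids: (n_test, 5) predicted whale ids per image;
    top1_scores: (n_test,) best cosine similarity per image.
    Puts 'new_whale' first for the weakest new_whale_frac of images,
    otherwise inserts it in second position."""
    thresh = np.quantile(top1_scores, new_whale_frac)
    out = []
    for ids, s in zip(ranked_ids, top1_scores):
        ids = list(ids)
        pos = 0 if s &lt; thresh else 1
        out.append(ids[:pos] + ['new_whale'] + ids[pos:4])
    return out
</pre>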
Humpback Whale Identification
27th Place solution (pure metric, pure C++)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, my congrats to the winners! And thanks to the organizers! Also, thanks to all the people who make great tools for deep machine learning in C++, such as Dlib (have you ever heard of it?) and OpenCV!</p> <p>So, to be brief. My final training setup can be found <a href="https://github.com/pi-null-mezon/Kaggle/blob/master/Whales/Dlib/Learner/main.cpp">here</a>. My best single-model architecture definition (a ResNet variation with a 512x192x1 input and a 128-dimensional feature vector output) can be found <a href="https://github.com/pi-null-mezon/Kaggle/blob/master/Whales/Dlib/Learner/customnetwork.h">here</a>. I started from public LB 0.748 and finished at private LB 0.948. Here is how this progress was made:</p> <p>1) My training pipeline had been assembled during the earlier playground competition. So, as a first attempt, I just retrained my old model on the new data (the "new_whales" set was removed) and got 0.748. By playing with the identification threshold I improved the score to 0.78. Already good, thanks to the great <a href="http://dlib.net/dlib/dnn/loss_abstract.h.html#loss_metric_">metric_loss</a> layer from Dlib.</p> <p>2) I took into account Martin Piotte's <a href="https://www.kaggle.com/martinpiotte/whale-recognition-model-with-score-0-78563">findings</a>. In particular, I replaced the RGB input with grayscale (with centering and normalization), and removed all whales with a single sample from the training set. The public score grew to 0.84-0.85.</p> <p>3) OK, what to do next? Let's experiment with data augmentation (scale, shift, rotation, perspective distortion, but not horizontal flips yet) and the train/validation division. A few models/iterations later, by voting among the best single submissions (the voting code, a Matlab script, can be found <a href="https://github.com/pi-null-mezon/Kaggle/blob/master/Whales/Matlab/Postprocess/Script.m">here</a>), I finally came to a 0.86 score.</p> <p>4) What else did Martin recommend? Fluke cropping. But I am not familiar with Python, so I did not want to use any of the fluke detection models that competitors had made. OK, I took this as a challenge. And maybe it is not an original way, but I wrote <a href="https://github.com/pi-null-mezon/Kaggle/blob/master/Whales/Dlib/Imgtransformer/main.cpp">code</a> that generates an attention heatmap of my model for a picture (by sliding a rectangle over the picture), then binarizes this heatmap, aligns the binary mask by PCA (over the mask's pixel coordinates), and finally crops and resizes the image. When I generated such crops (heatmaps from my best model at that time) and retrained the best model on the cropped data, I got a 0.89 score. Interestingly, when I trained a new model from scratch on cropped flukes, I sometimes got only around 0.85. So a two-step training (first on original pictures, then on cropped ones) proved important for me.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/481336/11475/Cropping.png" alt="enter image description here"></p> <p>5) What to do next? Somehow increase the size of the training set?
So it was time for horizontal flipping (by that time I had already seen many of the training examples with my own eyes, and I had come to the understanding that almost all whales were photographed without mirroring; this is good, because we can mirror them and add these mirrored whales as new classes). This effectively doubled the training set size. I checked the score and got an improvement from 0.86 to 0.88 (this is without cropping, because I already knew that cropping would definitely improve the result, so I performed the experiments without it). I also made some experiments with image negatives (multiplied by -1.0) and with new class compositions made from the left and right parts of different whales. But none of them (except the horizontal flip) improved the score.</p> <p>6) Where else can we get more training examples? From the playground competition, of course. I collected the training set from the playground competition (again, only classes with more than one sample per whale) and added it to my training set. The LB score grew from 0.88 to 0.907 (single model, still without picture cropping). Maybe some test examples reside in the playground training set?? I did not check it. But I found that there is a lot of noise in the training data, and it became even worse after the union with the playground data. This noise can be partially filtered out by a good model and some hand labeling (note that this is not prohibited, as we label the training data, not the test data). So, I wrote another <a href="https://github.com/pi-null-mezon/Kaggle/blob/master/Whales/Dlib/Traincleaner/main.cpp">tool to clean up training data</a> in a semi-automatic manner. This allowed me to boost my score to 0.928.</p> <p>7) It was time to tune the prediction procedure. Until that moment I had only predicted the top-1 label. My classifier produces a 128-dimensional vector of floats for a picture (so each whale can be represented as a biometric template with a size of 512 bytes). After each model was trained, I generated enrollment templates, one template for each picture in the training set (for each "new_whale" too, as it improves the results). At the submission generation step, for each test picture I compared the Euclidean distances between the test picture's identification template and all of the enrollment templates, then sorted all of the distances and made a decision about that test sample. If the minimum distance between templates was lower than a particular threshold, the sample got a label from the enrollment set; otherwise the sample got the "new_whale" label. But the competition's metric also rewards you if the right prediction is 2nd, 3rd, or even 5th. So we can improve the score by submitting all predictions in ascending order of distance (see the sketch at the end of this post). That way I improved my LB from 0.928 to 0.938 (my best single model).</p> <p>8) What's next? Ensembling, of course! Even the first ensemble (consisting of four different networks: 0.938, 0.935, 0.854, 0.876) allowed me to get a 0.943 LB score. I ensembled them by simple concatenation of templates (I also tried to train a head on these features, but without any luck). At this moment I started to train different modifications of my base solution with different numbers of filters and other hyperparameters. At the final point I was able to stack 25 different models and got public LB 0.948. Some combinations work better than others. My final submission was made by a 9-model ResNet ensemble, and its public LB was 0.953.</p>
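<p>The author's pipeline is C++/Dlib; here is a language-agnostic numpy sketch of the ranking step in point 7: sort enrollment templates by Euclidean distance and emit up to five labels, falling back to "new_whale" when even the nearest template is too far. Names and the threshold value are placeholders.</p> <pre>import numpy as np

def rank_labels(query, templates, labels, thresh=0.6, top_k=5):
    """query: (128,) identification template of one test image;
    templates: (n, 128) enrollment templates; labels: list of n whale ids.
    Returns up to top_k distinct labels in ascending distance order,
    with 'new_whale' first if even the closest match is above thresh."""
    dist = np.linalg.norm(templates - query, axis=1)
    order = np.argsort(dist)
    out = [] if dist[order[0]] &lt; thresh else ['new_whale']
    for i in order:
        if labels[i] not in out:
            out.append(labels[i])
        if len(out) == top_k:
            break
    return out
</pre>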
Humpback Whale Identification
2nd place code, end to end whale Identification model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is the code of my solution: [https://github.com/SeuTao/Kaggle_Whale2019_2nd_palce_solution]</p> <p>Wow, thanks for your information! </p> <p>Yes, I also use a classification-based method, but I think my network is simpler and lighter: only a 3-channel input and a one-stage lr schedule are used. Actually, the margin-based softmax loss is the key in my pipeline; removing the triplet loss does no harm to the final score. </p> <p>My baseline model is trained with ArcFace only (on non-new_whale images). For inference, I remove the margin m and directly use the softmax result of the last fc. The L2-normed weight matrix of the last fc is close to the center-of-class feature. My ArcFace-only model can achieve around 0.930~0.940 on the public LB. The further improvement comes from the added binary head with focal loss (on both non-new_whale and new_whale images). The ID flipping trick is important and gave me the final 0.01 boost.</p>
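<p>For readers new to margin-based softmax: the sketch below shows, in PyTorch, how an ArcFace-style head applies the angular margin during training and how the margin can be dropped at inference so that the L2-normed weight rows act as class centers, as described above. This is a minimal illustration, not the author's actual code; the hyperparameter values are assumptions.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Sketch of an ArcFace-style margin head; s and m are illustrative defaults."""
    def __init__(self, emb_size, n_classes, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_size))
        self.s, self.m = s, m

    def forward(self, emb, labels=None):
        # cosine similarity between L2-normed embeddings and L2-normed class weights;
        # each weight row approximates the center-of-class feature
        cosine = F.linear(F.normalize(emb), F.normalize(self.weight))
        if labels is None:
            return self.s * cosine                    # inference: margin removed
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return self.s * logits                        # feed to nn.CrossEntropyLoss
</code></pre>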
Humpback Whale Identification
31st place solution + source code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Source code: <a href="https://github.com/suicao/Siamese-Whale-Identification">https://github.com/suicao/Siamese-Whale-Identification</a>. I'll try to update this repo later, but it should be simple enough to follow for now.</p> <p>Our approach was simple. We took the amazing solution by <a href="/martinpiotte">@martinpiotte</a> and added a few twists:</p> <ul> <li>Using RGB instead of grayscale images.</li> <li>Replacing the feature extraction CNN with ImageNet-pretrained models. My teammate <a href="/iafoss">@iafoss</a> was able to achieve a 0.937 single model with a DenseNet121 encoder. In the last few weeks he also noticed a severe bug in my code where I froze the branch model instead of the feature extractor in the first few epochs; fixing it helped boost the score.</li> <li>Simply training on a bigger image size worked, but we didn't have the resources needed to try anything bigger than 512x512.</li> <li>Adding TTA made the results worse; we haven't had time to investigate this just yet.</li> </ul> <p>That's it. At the end I made an ensemble of a few high-scoring models trained by my teammates using <a href="https://www.kaggle.com/matthewa313/ensembling-algorithm-for-average-precision-metric">this method</a>. Big thanks to <a href="/matthewa313">@matthewa313</a>.</p> <p>I didn't even have access to GPUs for the majority of this competition, and one of my other teammates couldn't compete either due to hardware problems, so this was quite unfortunate for us. </p> <p>Anyway, I'm happy with the final standings. Congrats everyone, I wish you do <em>whale</em> in future competitions.</p>
Humpback Whale Identification
3rd place solution with code: ArcFace
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>When you say center, you mean that for each individual you run the net for all images of that individual, L2 normalize it and take the mean? </p> </blockquote> <ul> <li>After taking the mean, I normalized it again to locate it on the hypersphere.</li> </ul> <blockquote> <p>Then during test you compare (dot product) the test image's embedding with each class/individual center?</p> </blockquote> <ul> <li>You're right</li> </ul> <p>The image size was 320x320.</p> <h2>UPDATE: code available on github</h2> <p><a href="https://github.com/pudae/kaggle-humpback">https://github.com/pudae/kaggle-humpback</a></p> <hr> <p>Congrats to all the winners. Thanks to Kaggle and the hosting team for an interesting competition.</p> <p>Here is my solution summary.</p> <h1>Solution Summary</h1> <h2>Dataset</h2> <ul> <li><strong>Validation set</strong>: randomly sampled 400 identities that have 2 images + 110 new whales (= 400 * 0.276).</li> <li><strong>Training set</strong>: all images except new whales.</li> <li>I doubled up the identities by horizontal flip.</li> </ul> <h2>Model</h2> <p><strong>bounding box &amp; landmark</strong></p> <ul> <li>I used annotations by <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/78699">Paul Johnson</a> and <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/76281">Radek Osmulski</a>. (Thanks to Paul and Radek.
Without your contribution, I couldn't have achieved such a high score.)</li> <li>I made a 5-fold CV and trained 5 models using them.</li> <li>IOU: 0.93</li> </ul> <p><strong>whale identifier</strong></p> <ul> <li>The <a href="https://arxiv.org/pdf/1801.07698.pdf">ArcFace</a> approach is used.</li> <li>Following the paper, the layers after the last convolution were replaced with flattening -&gt; BN -&gt; dropout -&gt; FC -&gt; BN.</li> <li>densenet121</li> <li>m 0.5 (the default value from the paper)</li> <li>weight decay 0.0005, dropout 0.5</li> </ul> <h2>Augmentation</h2> <ul> <li>average blur, motion blur</li> <li>add, multiply, grayscale</li> <li>scale, translate, shear, rotate</li> <li>align or no-align</li> </ul> <h2>Training</h2> <ul> <li>adam optimizer</li> <li>learning rate of 0.00025 -&gt; 0.000125 -&gt; 0.0000625</li> </ul> <h2>Inference</h2> <p><strong>getting the embedding feature for an identity</strong></p> <ul> <li>For each image, I got multiple feature vectors by using 5 bounding boxes and landmarks.</li> <li>For each identity, the center of all feature vectors was used as the final embedding feature.</li> </ul> <p><strong>getting the embedding feature for a test image</strong></p> <ul> <li>For each image, multiple feature vectors were generated and the center of the feature vectors was used.</li> </ul> <p><strong>computing similarity</strong></p> <ul> <li>The cosine similarity of the above two feature vectors was used as the measure of similarity.</li> </ul> <p><strong>selecting the threshold</strong></p> <ul> <li>The threshold for new whale was selected so that the proportion of new whale is about 0.276.</li> </ul> <h1>The process to the final method</h1> <p>The following is the process that led to the final method.</p> <p><strong>without landmarks</strong></p> <p>At first, I excluded the identities having only one image, as well as new whales, from the training set. For inference, the identity of the most similar image in the training set was used as the predicted identity.</p> <p>&gt; Public LB: 0.90, Private LB: 0.90 </p> <p>After using the center of all feature vectors in the same identity, I got</p> <p>&gt; Public LB: 0.942 / Private LB: 0.939</p> <p>After using weight decay 0.0005</p> <p>&gt; Public LB: 0.946 / Private LB: 0.946</p> <p>After including the identities having one image in the training set</p> <p>&gt; Public LB: 0.963 / Private LB: 0.961</p> <p><strong>with landmarks</strong></p> <p>When I used aligned images, the network trained faster but the score did not improve.</p> <p>&gt; Public LB: 0.962 / Private LB: 0.959</p> <p>The bounding boxes and landmarks of some images are very poor, and this seemed to prevent the score from improving. So I also used non-aligned images.</p> <p>&gt; Public LB: 0.965 / Private LB: 0.961</p> <p>Finally, I doubled up the identities by horizontal flip. Flipped images have different identities but are visually very similar, so I set the logit value of the flipped identity to zero to prevent gradient flow.</p> <p>&gt; Public LB: 0.968 ~ 0.971 / Private LB: 0.965 ~ 0.968</p> <p>Congrats to the winners again. Thanks.</p> <p>The kernel size for blurring was between 3 and 5. Thank you~ :)</p> <p>Thank you again very much!!</p> <p>Thanks :)</p> <blockquote> <p>The FC you mentioned is before the 5004-dimensional normalized FC of the paper, right?</p> </blockquote> <p>Yes, you're right.</p> <blockquote> <p>If yes, how many dimensions did you use?</p> </blockquote> <p>The dimension of the feature vector was 512.</p> <blockquote> <p>So now you have ~10008 identities, right?
</p> </blockquote> <p>Yes~</p> <blockquote> <p>But I didn't understand what you did with the logit. Can you elaborate?</p> </blockquote> <p>Without the zero setting, the network would be trained so that the flipped and non-flipped versions of an identity are pushed apart from each other. This seemed to make training difficult in my case. Setting the logit value of the flipped label to zero was helpful here.</p> <p>Thank you!!</p> <p>Yes. The LB &gt; 0.965 models used aligned and non-aligned images in training and inference. </p> <blockquote> <p>1) so your solution is a pure classifier, right? I wonder why ArcFace can tackle such a class-skew problem. Did you use any oversampling or heavy augmentation for the imbalance problem?</p> </blockquote> <ul> <li>I tried nothing for the imbalance problem.</li> </ul> <blockquote> <p>2) I noticed that using the mean vector of an identity for its representation gives a dramatic boost to your prediction. Would you please share the motivation for this idea? </p> </blockquote> <ul> <li><p>From the <a href="https://arxiv.org/pdf/1801.07698.pdf">paper</a>: </p> <blockquote> <p>To get the embedding features for templates (e.g. IJB-B and IJB-C) or videos (e.g. YTF and iQIYI-VID), we simply calculate the feature centre of all images from the template or all frames from the video.</p> </blockquote></li> </ul> <blockquote> <p>3) what's the hyperparameter s in your ArcFace metric and how did you tune it?</p> </blockquote> <ul> <li>I used s (=65) and m (=0.5), which are the default values from the paper. I tried several other values, but they didn't improve the scores.</li> </ul> <p>Thank you!! </p> <p>After cleaning up the code, I will share it. :)</p> <p>I trained from scratch.</p> <p>For the submissions over LB 0.968, that method was used.</p> <p>The method using 5 bounding boxes came to mind too late to use it in training.</p>
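<p>The center-of-identity inference described in this thread fits in a few lines. The sketch below is an assumed minimal reconstruction, not the author's code: embeddings are L2-normalized, per-identity centers are re-normalized onto the hypersphere, and the new_whale threshold is chosen so that about 27.6% of test images fall below it.</p> <pre><code>import numpy as np

def class_centers(embs, labels):
    """Mean of L2-normed embeddings per identity, re-normalized onto the hypersphere."""
    labels = np.asarray(labels)
    names = sorted(set(labels))
    centers = []
    for c in names:
        v = embs[labels == c].mean(axis=0)
        centers.append(v / np.linalg.norm(v))
    return np.stack(centers), names

def predict_top1(test_embs, centers, names, new_whale_frac=0.276):
    sims = test_embs @ centers.T                   # cosine similarity (rows are L2-normed)
    best_idx = sims.argmax(axis=1)
    best_sim = sims.max(axis=1)
    thr = np.quantile(best_sim, new_whale_frac)    # ~27.6% of test becomes new_whale
    return ["new_whale" if s &lt; thr else names[i]
            for s, i in zip(best_sim, best_idx)]
</code></pre>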
Humpback Whale Identification
4th Place Solution: SIFT + Siamese
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My goal in this competition was to learn more about low-shot learning problems and to try to get to GM, so I'll share what I learned.</p> <p>I find it useful to try to think like the sponsor and ask why they would host the competition and, if I were them, what I would want to get out of it. There was already a playground competition, so why release it again? My thoughts were that 1. maybe Kaggle wanted to show the difference in quality between solutions for a free playground competition vs a prize-based one, or 2. the sponsor wanted to get more out of the really challenging part of the problem, namely how to identify new_whale (N=0) and N=1 samples. So my focus was on the latter, and specifically on how to identify as many N=1 samples as possible.</p> <p>There are three main components to my pipeline:</p> <ul> <li><strong>Keypoint matching</strong> – old school approach with a few new school tricks</li> <li><strong>Siamese network</strong> – like many, <a href="https://www.kaggle.com/martinpiotte/whale-recognition-model-with-score-0-78563">Martin's previous work</a> formed the basis here</li> <li><strong>Post-processing</strong> – to give low-sample classes a fair shake</li> </ul> <p><strong>Keypoint matching</strong> This accounted for &gt;80% of my final predictions and was classic keypoint matching, one of the original low-shot methods. I tried SIFT, ROOTSIFT, and a host of binary descriptors and matchers; there wasn't a lot of difference between the techniques.</p> <p>The dataset here was in the sweet spot where brute-force keypoint matching came into play: 7960 test images vs 15,697 train images is within the realm of reason. I chose the pure brute-force method at full image resolution, all test images vs all train images, no bag-of-words or knn clustering of the keypoints. There were a couple of big challenges I had to overcome:</p> <ol> <li><em>Speed</em>. Keypoint description/matching can take up to 1-2s per image depending on your HW setup, but I used several tricks like indexing all keypoints into an hdf5 file, storing all keypoints in RAM during matching, and using the great <a href="https://github.com/facebookresearch/faiss">faiss library</a>. Across two systems I could finish a brute-force run of the full dataset in ~12 hours.</li> <li><em>False positives</em>. The main issue with keypoint matching on this dataset was the false positives, which had two sources: the background ocean and many of the bright points on the whale flukes. I addressed this by using a unet to segment only the whale tail, and a custom xgboost model of the homography matrix to classify the final homography between image pairs as valid or not.</li> </ol> <p>Final keypoint matching pipeline:</p> <ul> <li>Extract all keypoints from train and test (raw images, full scale) into hdf5 files, restricting keypoints to the unet-predicted mask area of the whale fluke. Extracting from CLAHE-preprocessed images worked best.</li> <li>Matching: <ol> <li>Loop through all test/train pairs</li> <li>Match keypoints using faiss</li> <li>Double homography filtering of keypoints (LMEDS followed by RANSAC)</li> <li>xgboost prediction to validate the homography matrix</li> <li>If the number of matches &gt; threshold, use the prediction</li> </ol></li> </ul>
<p><strong>Siamese network</strong> This is the weakest part of my pipeline; there are other posts describing much stronger networks than what I used. I just adapted Martin's code a bit and used an ensemble of InceptionResNetV2, InceptionV3, and ResNet50. I didn't add any augmentations and stuck with grayscale images, nothing fancy. To help training move along a little quicker, I did a fair amount of pretraining of the backbone network before feeding it into the Siamese network, which seemed to help. My pretraining pipeline was:</p> <ul> <li>train classification on the top 200 classes</li> <li>fine-tune on all classes where N&gt;8 (~576 classes)</li> <li>fine-tune on all classes</li> <li>fine-tune on all classes + mixup + image size 384x384</li> </ul> <p><strong>Post processing</strong> I found some similarities in the data between this competition and the Statoil Iceberg challenge, so I was able to use some of the same tricks from Weimin's and my <a href="https://www.kaggle.com/c/statoil-iceberg-classifier-challenge">winning solution</a> there, mainly that there were insights from test predictions that could be used to further enhance the test predictions. </p> <p>When analyzing the resulting prediction matrix from the Siamese network, I noticed that there was always a handful of the same train images that disproportionately dominated the top-5 positions. This got me thinking that I needed to find a way to either suppress the dominant predictions or figure out how to give the N=1 classes a fairer chance to rise to the top of the prediction pool.</p> <p>The idea I came up with was pretty simple: instead of looking at the prediction matrix in the traditional way of "which train image is closest to my test image", I transposed the matrix to instead ask "which test image is closest to my train images". When I limited the transposed matrix to the N=1 samples, I found that I could use a new threshold along the train axis for the N=1 train samples. This was highly effective at getting many more of the correct N=1 samples into my top-1 prediction. I'm sure there are better ways of accomplishing the same goal.</p> <p>I was surprised by the number of mislabels other competitors found, and thanks to <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/81885">Alex Mokin and the contributors to this post</a> I made sure the redundant classes were accounted for appropriately.</p> <p><strong>Pipeline weaknesses:</strong> Again, thinking like the sponsor, they may not love my solution for a couple of reasons: 1. it is very computationally expensive, especially the keypoint matching pipeline, and 2. the post-processing makes it difficult to convert the pipeline into easy single-image inference.</p> <p>I would probably take the solution of someone with a strong Siamese network and drop it into my pipeline as a pure replacement. This would require retuning the post-processing pipeline, but it's possible.</p> <p><strong>Pipeline strengths:</strong> I think the keypoint pipeline is pretty strong, without a lot of opportunity to squeeze out more using traditional keypoint algorithms. Incorporating the unet and xgb models into the pipeline really helps cut the false positives to a negligible level.</p> <p>Thanks Miha! I didn't really explore keypoints outside of the traditional descriptors provided in OpenCV.
I did a single run using DELF descriptors but the results were a bit worse, so I just stuck to the basics.</p> <p>Overall, keypoint matching gave me better top-1 performance than the Siamese networks for the images it classified. The problem with keypoints was on the lower-resolution images, where not many keypoints are detected. I definitely needed the Siamese networks to complement the keypoint detection.</p> <p>Yes, you have the idea right. Keypoint matching was used for top-1 replacement only and complemented a full prediction matrix from a Siamese network.</p> <p>new_whale was only predicted from the Siamese network. Keypoint matches via SIFT became top-1 predictions where a match exists; otherwise the default is the Siamese prediction.</p> <p>SIFT detector and RootSIFT extractor.</p> <p>Posted: <a href="https://github.com/daustingm1/humpback-whale-4th-place">https://github.com/daustingm1/humpback-whale-4th-place</a></p>
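<p>The transposed-matrix post-processing lends itself to a short sketch. The version below is an assumed reconstruction from the description above, not the author's code: for each N=1 train image we scan across the test axis and, when one test image is similar beyond a separate threshold, promote that train label into its top-1 slot. The threshold and the exact promotion rule are illustrative.</p> <pre><code>import numpy as np

def promote_n1(scores, train_labels, n1_mask, top1, thr=0.99):
    """scores: (n_test, n_train) similarity matrix; n1_mask marks N=1 train samples;
    top1: current top-1 predictions per test image (modified in place)."""
    t = scores.T                          # rows now ask: which test image is closest
    for j in np.where(n1_mask)[0]:        # to this train image?
        i = int(t[j].argmax())
        if t[j, i] &gt; thr:                 # separate threshold along the train axis
            top1[i] = train_labels[j]     # give the N=1 class the top-1 slot
    return top1
</code></pre>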
Humpback Whale Identification
57th place solution, SoftTripletLoss, 256x512 image size, fastai v1
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>When I arrived at this competition I had no idea how to solve this kind of problem. It took me a while just to find papers about it and to understand the intuition behind the concepts involved, like episodes, n-way k-shot, metric learning, few-shot learning, meta learning and so on. I implemented the whole framework in fastai v1.</p> <h3>Loss Function</h3> <p>I tried several approaches like center loss [1] and prototypical networks [2], but I then found in the discussions these two papers [3], [4] where a variation of triplet loss is used in which the hard margin is replaced by a soft margin using the softplus function (that's why I'm referring to it as Soft Triplet Loss). They also use a Batch Hard strategy (BH) in which, for each anchor image, only the hardest positive and the hardest negative in the mini-batch are used in the loss. This is my implementation of Soft Triplet Loss with Batch Hard and L2 regularization (as suggested by @Iafoss [5]):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTripletLoss(nn.Module):
    def __init__(self, fsc, wd=1e-4):
        super().__init__()
        self.k_shot = fsc.k_shot
        self.new_class_number = fsc.new_class_number
        self.wd = wd

    def forward(self, x, y):  # x (64, 128)
        self.n_way = x.size()[0]//self.k_shot
        emb_sz = x.size()[-1]  # (128)
        x = x.view(-1, self.k_shot, emb_sz)  # (16, 4, 128)
        L = 0; EPS = 1e-6
        for i in range(self.n_way-self.new_class_number):
            for j in range(self.k_shot):
                I = torch.zeros(self.n_way).long()
                I[i] = 1
                J = torch.zeros(self.k_shot).long()
                J[j] = 1
                xa = x[I==1, J==1, :].view(1, -1)     # anchor (1, 128)
                xp = x[I==1, J==0, :]                 # positives (3, 128)
                xn = x[I==0, :, :].view(-1, emb_sz)   # negatives (15, 4, 128) -&gt; (60, 128)
                Dp = F.relu((xa-xp).pow_(2).sum(1)+EPS).sqrt_()  # (3)
                Dn = F.relu((xa-xn).pow_(2).sum(1)+EPS).sqrt_()  # (60)
                L += F.softplus(Dp.max(0)[0] - Dn.min(0)[0])     # hardest pos vs hardest neg
                L += self.wd*((Dp**2).mean() + (Dn**2).mean())   # L2 regularization
        return L
</code></pre> <h3>Hard samples mining</h3> <p>The batch hard strategy allowed for a good improvement, but it was not enough alone. The next main step was implementing a technique for mining hard samples, hereafter referred to as Sample Hard (SH). At the end of each training epoch I compute the distance matrix between all train samples; then, to build the mini-batches for the next epoch, I follow these steps (I'm using 10-way, 4-shot episodes):</p> <ol> <li>Select an image at random [A1]</li> <li>Select the 3 hardest images from the same class (hard positives, largest distances) [A1, A2, A3, A4]</li> <li>Select the single hardest image from a different class (hard negative, closest distance) [A1, A2, A3, A4, B1]</li> <li>Repeat from 2. until the mini-batch is constructed [A1, A2, A3, A4, B1, B2, B3, B4, C1, C2, C3, C4, ...]</li> <li>For the last 4 images in each mini-batch I select 4 random <em>new whales</em>.</li> </ol> <p>So the distance matrix and mini-batches for the next epoch are only computed at the end of each epoch. This SH strategy is then used together with BH; a rough sketch of the episode construction is shown below.</p>
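<p>Here is a rough numpy sketch of that episode construction. It is an assumed reconstruction of the numbered steps, not the actual fastai implementation, and it assumes every identity class has at least k_shot images and that D/labels cover identity images only (new_whale handled separately via new_whale_idx).</p> <pre><code>import numpy as np

def build_episode(D, labels, new_whale_idx, n_way=10, k_shot=4):
    """D: (n, n) pairwise distance matrix computed at the end of the last epoch."""
    labels = np.asarray(labels)
    batch, used = [], set()
    anchor = np.random.randint(len(labels))                  # step 1: random first anchor
    while len(batch) &lt; (n_way - 1) * k_shot:
        cls = labels[anchor]
        used.add(cls)
        same = np.where(labels == cls)[0]
        same = same[same != anchor]
        hard_pos = same[np.argsort(-D[anchor][same])][:k_shot - 1]  # step 2: largest distances
        batch += [anchor] + list(hard_pos)
        # step 3: next anchor is the closest image from a class not yet in the batch
        order = np.argsort(D[anchor])
        anchor = next(i for i in order if labels[i] not in used)
    batch += list(np.random.choice(new_whale_idx, k_shot))   # step 5: random new_whales
    return batch
</code></pre>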
<h3>Model</h3> <p>The model I used is a <strong>Densenet121</strong> with the following head:</p> <pre><code>class Head(nn.Module):
    def __init__(self, in_channels=1024, emb_sz=128):
        super().__init__()
        self.flat = nn.Sequential(AdaptiveConcatPool2d(1))
        self.flatten = Flatten()
        self.bn0 = nn.BatchNorm1d(4*in_channels)
        self.lin0 = nn.Linear(4*in_channels, in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.bn1 = nn.BatchNorm1d(in_channels)
        self.lin1 = nn.Linear(in_channels, emb_sz)

    def forward(self, x):
        cut = x.size()[-1]//2
        x0 = self.flat(x[...,:cut])   # left half of the feature map
        x1 = self.flat(x[...,cut:])   # right half
        x = torch.cat((self.flatten(x0), self.flatten(x1)), dim=1)
        x = self.relu(self.lin0(self.bn0(x)))
        return self.lin1(self.bn1(x))
</code></pre> <p>Since I'm using images with a 1:2 ratio and there is some horizontal symmetry, I divide the images in two (left and right parts; x0 and x1 in the code), apply the pooling as usual (using AdaptiveConcatPool2d from fastai) and finally concatenate the two. The activation maps shown by Heng [6] in another discussion topic show that in some images there are two modes in the activation map, one for the left and one for the right part; that is the intuition for why this may help. I didn't check, however, whether the increase in performance is just due to the increase in parameters.</p> <h3>Image Augmentations</h3> <p>I cropped the images with bounding boxes and applied the following augmentations (fastai):</p> <pre><code>from torchvision.transforms import ColorJitter, ToPILImage, ToTensor
from fastai.vision.transform import *

def _colorjitter(x, brightness=0, contrast=0, saturation=0, hue=0):
    topill = ToPILImage()
    totensor = ToTensor()
    xmin, xmax = x.min(), x.max()
    x = (x-xmin)/(xmax-xmin)
    cj = ColorJitter(brightness, contrast, saturation, hue)
    x = topill(x)
    x = cj(x)
    x = totensor(x)
    x = x*(xmax-xmin) + xmin
    return x

colorjitter = TfmLighting(_colorjitter)

def _cutout(x, n_holes:uniform_int=1, length:uniform_int=40):
    "Cut out `n_holes` number of square holes of size `length` in image at random locations."
    h,w = x.shape[1:]
    for n in range(n_holes):
        h_y = np.random.randint(0, h)
        h_x = np.random.randint(0, w)
        y1 = int(np.clip(h_y - length / 2, 0, h))
        y2 = int(np.clip(h_y + length / 2, 0, h))
        x1 = int(np.clip(h_x - length / 2, 0, w))
        x2 = int(np.clip(h_x + length / 2, 0, w))
        x[:, y1:y2, x1:x2] = 0
    return x

cutout = TfmPixel(_cutout, order=20)

# rcutout is a scaling factor defined elsewhere in the original code
tfms = get_transforms(do_flip=False,
                      xtra_tfms=[colorjitter(saturation=1.1, hue=0.05),
                                 cutout(n_holes=(1, max(3, int(3*rcutout))),
                                        length=(10*rcutout, 40*rcutout), p=0.5)])
</code></pre> <h3>Training</h3> <p>I train only the Head part of the model with BH for a few epochs (the Densenet121 uses ImageNet weights) and then unfreeze the model and train for longer with SH + BH (the first epoch uses only BH, however). Both before and after unfreezing I use the one cycle policy (<a href="https://sgugger.github.io/the-1cycle-policy.html">https://sgugger.github.io/the-1cycle-policy.html</a>) implemented in fastai v1: just one cycle before unfreezing and one cycle after.
I found this worked better than multiple cycles in my experiments.</p> <h3>Best single model + TTA</h3> <table> <thead> <tr> <th>Image size</th> <th>Public LB</th> <th>Approx. train time (1 NVIDIA 1080)</th> </tr> </thead> <tbody> <tr> <td>64x128</td> <td>0.788</td> <td>3.5h</td> </tr> <tr> <td>128x256</td> <td>0.867</td> <td>6h</td> </tr> <tr> <td>256x512</td> <td>0.904</td> <td>15h</td> </tr> </tbody> </table> <p>The final score is an ensemble of 13 submissions with public LB &gt; 0.860, reaching ~<strong>0.93</strong>.</p> <h3>References</h3> <p>[1] <a href="https://ydwen.github.io/papers/WenECCV16.pdf">https://ydwen.github.io/papers/WenECCV16.pdf</a></p> <p>[2] <a href="https://arxiv.org/pdf/1703.05175.pdf">https://arxiv.org/pdf/1703.05175.pdf</a></p> <p>[3] <a href="https://arxiv.org/pdf/1703.07737.pdf">https://arxiv.org/pdf/1703.07737.pdf</a></p> <p>[4] <a href="https://arxiv.org/pdf/1901.03662.pdf">https://arxiv.org/pdf/1901.03662.pdf</a></p> <p>[5] <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/79086">https://www.kaggle.com/c/humpback-whale-identification/discussion/79086</a></p> <p>[6] <a href="https://www.kaggle.com/c/humpback-whale-identification/discussion/79524">https://www.kaggle.com/c/humpback-whale-identification/discussion/79524</a></p>
Humpback Whale Identification
5th solution blog post + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just BCE. As I understand it, the difference between contrastive loss and BCE is square vs log. I believe in this case log will do better. </p> <p>Congrats to the winners! </p> <p>The updated github code is <a href="https://github.com/aaxwaz/Humpback-whale-identification-challenge">here</a>. </p> <p>A blog post sharing the solution is <a href="https://weiminwang.blog/2019/03/01/whale-identification-5th-place-approach-using-siamese-networks-with-adversarial-training/">here</a>. </p> <ol> <li>It ends with comparing two images: match (1) or unmatch (0)</li> <li>Not sure what you asked, but I don't think we used it</li> </ol> <p>Yes, we shuffle rows every few epochs. </p> <p>1. Unfortunately stacking didn't show much improvement on the LB for this comp. We ensembled the stacking in our final submission, so it helped a bit overall (like a different model). 2. We started early, so we slowly trained those models with very limited GPU resources; the batch size used was between 32 and 64. 3. We used lap to solve the assignment problem and cut the matrix into 3~4 sub-blocks to approximate it and reduce computation time. 4. We chose the threshold that gives us around 2200 new whales on the test dataset. So the single-model threshold was 0.99, whereas the final ensemble threshold was down to around 0.5.</p>
Humpback Whale Identification
7th place Pure Magic thanks Radek solution: classification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The link to that paper is actually in my comment above - <a href="https://arxiv.org/pdf/1711.09349.pdf">https://arxiv.org/pdf/1711.09349.pdf</a></p> <p>I am not a fast.ai student, but I admire the community there :) I will put some code online soon.</p> <p><a href="https://medium.com/@ducha.aiki/thanks-radek-7th-place-solution-to-hwi-2019-competition-738624e4c885">https://medium.com/@ducha.aiki/thanks-radek-7th-place-solution-to-hwi-2019-competition-738624e4c885</a></p> <p>A cleaned-up version of the code is here: <a href="https://github.com/ducha-aiki/whale-identification-2018">https://github.com/ducha-aiki/whale-identification-2018</a></p> <p>Metric learning part by <a href="/geneva">@geneva</a> and <a href="/igorkrashenyi">@igorkrashenyi</a>: <a href="https://medium.com/@anastasiya.mishchuk/thanks-radek-7th-place-solution-to-hwi-2019-competition-metric-learning-story-c94b74a3eaa2">https://medium.com/@anastasiya.mishchuk/thanks-radek-7th-place-solution-to-hwi-2019-competition-metric-learning-story-c94b74a3eaa2</a> Congrats to the winners!</p> <p>Thanks! Well, query expansion and diffusion improve mAP. So if and only if you have a good top-1 or top-2, you can use them to get other, more difficult images from the database. In this competition it doesn't matter, because you need to get the first one correct. </p> <p>I selected those which are easy to implement AND give nice results in publications/competitions. For example, center loss is very popular in metric learning and takes a couple of lines of code. Temperature scaling is one coefficient; again, you can code it in 5 minutes. <a href="/martinpiotte">@martinpiotte</a>'s solution was too hard to implement fast, so I postponed it until classification stopped helping me. VGG16 was advice from a friend of mine, actually :)</p> <p>You are welcome. I like it when people share their solutions, so I have to share mine to be fair :)</p> <p>It is available</p> <p>I uploaded a minimalistic == clean version of the code: <a href="https://github.com/ducha-aiki/whale-identification-2018">https://github.com/ducha-aiki/whale-identification-2018</a></p> <p>I calculate descriptors for all images. Descriptors are just the L2-normalized activations of the pre-last layer. For each test (and val) image I calculate the distance to all train + all new_whale images.</p> <p>This gives a val x train distance matrix. Then I pick the minimum distance among the images of each class as a representative. This gives me a val_size x 5005 matrix. Then I convert distance to similarity. I can directly do a weighted sum of this similarity and the predictions from the classifier. In addition, I clone the similarity matrix and replace column 5004 with the constant which gives the best results on val, and add it to the weighted sum. </p> <p>It may be better to look at the code:</p> <p><a href="https://github.com/ducha-aiki/whale-identification-2018/blob/master/train_VGG16.py#L133">https://github.com/ducha-aiki/whale-identification-2018/blob/master/train_VGG16.py#L133</a></p> <p>"Change backbone to VGG16-BN — 0.942 lb Change pooling to constant GeM(3.74) pooling — 0.944 lb.
These are the best results I was able to get from an ImageNet-pretrained single network."</p> <p>This is that network.</p> <p>It corresponds to the number of vertical stripes. If 'num_clf' == 1, then it is just a global classifier. If == 2, then there are two heads: one for the left part of the image and one for the right, and so on. See the splits of red just before the green part in the image below. But in this image they are horizontal, while I used vertical ones.</p> <p>This figure describes the architecture which inspired my PCBRingHead2. I am too lazy to draw anything if I can find something similar enough. PCB in the name is an artifact of my previous experiments with the PCB architecture <a href="https://arxiv.org/pdf/1711.09349.pdf">https://arxiv.org/pdf/1711.09349.pdf</a> <img src="http://cmp.felk.cvut.cz/~mishkdmy/aux/pcb.png" alt="PCB architecture"></p>
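<p>The similarity/classifier blending described in this thread can be sketched in a few lines of numpy. This is an assumed reconstruction: the distance-to-similarity conversion, the blend weight, and the constant for the new_whale column (index 5004) are illustrative and would be tuned on validation, as the author did.</p> <pre><code>import numpy as np

def blend(clf_probs, dist, train_labels, n_classes=5005, w=0.5, nw_const=0.3):
    """clf_probs: (n, 5005) classifier softmax; dist: (n, n_train) descriptor distances."""
    train_labels = np.asarray(train_labels)
    sim = np.zeros((dist.shape[0], n_classes))
    for c in range(n_classes - 1):                 # class 5004 is new_whale
        cols = np.where(train_labels == c)[0]
        if len(cols):
            # class representative = minimum distance within the class, mapped to similarity
            sim[:, c] = 1.0 - dist[:, cols].min(axis=1)
    sim[:, 5004] = nw_const                        # constant new_whale column, tuned on val
    return w * clf_probs + (1.0 - w) * sim
</code></pre>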
Humpback Whale Identification
87 place solution - thanks Martin
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! Thanks all for the meaningful discussions!</p> <p>At first, I tried to use Martin's awesome kernel as is, just to replicate the results, but did not succeed in outperforming the pretrained model. After that, I experimented with triplet loss and semi-hard negative mining. This gave me a score of ~0.5 along with very long training times (Keras), so I abandoned this idea. Later on, I used Martin's kernel as the approach's backbone and modified it.</p> <p>To reduce training time, I trained the encoder as a classification model first. The embedding layer was L2-normalized and then scaled with a learnable parameter. To make embeddings more compact, I added L1L2 regularization to the scale parameter. The head was taken from Martin's kernel but with a bigger internal dimension. In Martin's kernel I modified the schedule of random values to decrease linearly from 0.75 to 0.15 over 50 epochs.</p> <p>All models were trained without local validation :D</p> <h2>Things that worked</h2> <ul> <li>Ensembling</li> <li>BCE loss</li> <li>SGD with warm restarts</li> <li>Progressive resizing while training the classification model</li> <li>Heavy augmentations (almost like in the imgaug demo)</li> <li>SEResNet34/50</li> <li>Weight decay</li> <li>Label smoothing while finetuning</li> <li>Threshold selection based on a 30% new whales fraction</li> <li>Linear layer after GAP</li> </ul> <h2>Things that did not work</h2> <ul> <li>DenseNets (were too memory-hungry)</li> <li>Focal loss</li> <li>Global Max Pooling (-0.05 LB)</li> <li>Convolutional block attention module</li> <li>Triplet loss (I was not trying that hard)</li> <li>Mixup (again)</li> <li>Light augmentations led to underfitting (but why?!)</li> <li>Half-fluke crops</li> <li>Images larger than 512x512</li> </ul> <h2>Top solution</h2> <p>The top-performing solution scored 0.922 on private. The top individual model scored 0.904 on private. It was an ensemble of the 4 best models: 2x SEResNet34 with 512x512 resolution and 2x SEResNet50 with 384x384 resolution.</p> <h2>Hardware</h2> <p>We primarily used a server with one 1080Ti and 64GB of RAM. To reduce the disk bottleneck, we put all training data into a ramdisk with plenty of swap (~3x load speedup).</p> <p>Martin's kernel is really awesome idea-wise. Unfortunately, its code is not that good at all. Also, its original version takes a lot of time to train from scratch and (probably) highly depends on the random seed. My top result from just running it as-is was something like ~0.81 on the public LB in 500 epochs (4 days on a 1080 Ti!).</p> <p>Some people probably took substitutions for quick LAP from the discussions. And these substitutions are not a complete alternative to the original solution, so proper modifications to the epoch schedule are also needed...</p> <p>The devil is in the details.</p>
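<p>The L2-normalized embedding with a learnable, regularized scale can be written compactly. Below is a PyTorch-style sketch of the idea (the original was Keras); the initial scale value and penalty weights are assumptions.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledL2Embedding(nn.Module):
    """L2-normalize the embedding, then multiply by a learnable scale.
    Penalizing the scale (add scale_penalty() to the loss) keeps embeddings compact."""
    def __init__(self, in_features, emb_size=256):
        super().__init__()
        self.fc = nn.Linear(in_features, emb_size)
        self.scale = nn.Parameter(torch.tensor(10.0))   # learnable scale, assumed init

    def forward(self, x):
        return self.scale * F.normalize(self.fc(x), p=2, dim=1)

    def scale_penalty(self, l1=1e-4, l2=1e-4):          # L1L2 regularization on the scale
        return l1 * self.scale.abs() + l2 * self.scale.pow(2)
</code></pre>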
Humpback Whale Identification
9th place solution or how we spent last one and a half month
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>TL;DR</strong> Adam, Cosine with restarts, CosFace, ArcFace, High-resolution images, Weighted sampling, new_whale distillation, Pseudo-labeled test, Resnet34, BNInception, Densenet121, AutoAugment, CoordConv, GAPNet </p> <p>We'd like to share our solution as a story of how we gradually improved our models. </p> <p>But first of all I'd like to thank my teammate <a href="https://www.kaggle.com/vlad0922">Vladislav</a> for the fruitful collaboration, the kaggle community for motivation and ods.ai for kind support:)</p> <p>To start with, it was an obvious idea to treat whales' flukes like human faces. Fortunately there are tons of papers on face identification, re-identification and verification. </p> <p>So in the beginning of this competition this paper <a href="https://arxiv.org/abs/1804.06655">https://arxiv.org/abs/1804.06655</a> helped us a lot. It features a comprehensive survey of state-of-the-art face identification techniques. According to it, softmax-based losses look really promising. Due to their classification nature and the fact that we already had a classification pipeline from the Protein Atlas and Draw challenges, we decided to focus on them. </p> <p>Among others, <strong>Cosface</strong> and <strong>Arcface</strong> stand out as the newly discovered SOTA for the face recognition task. The main idea is to bring examples of the same class close to each other in cosine similarity space and to pull distinct classes apart. Training with cosface or arcface is essentially classification, so the final loss was CrossEntropy. One can read more details in their papers: <a href="https://arxiv.org/abs/1801.09414">https://arxiv.org/abs/1801.09414</a>, <a href="https://arxiv.org/abs/1801.07698">https://arxiv.org/abs/1801.07698</a>. After training a cosface or arcface net, we took the embeddings and calculated the cosine similarity between train and test images, then averaged the similarities for each class in train and took the 5 most similar.</p> <p>At the beginning of every competition you should always devise a robust validation procedure. We did this poorly. Nevertheless, we selected about 1000 samples from classes with more than 3 instances, one example for each class. We also chose about the same number of new_whale images. This setup showed good correlation between the local score and the public LB score. Unfortunately, the threshold for new_whale derived from local validation was slightly biased. That was really bad, because the threshold was unreliable. Another way to determine the threshold was to adjust it so that the top-1 new_whale percentage is around 30%. </p> <p>To keep up with the kaggle community in conquering the LB, we decided to construct our pipeline in the following way:</p> <ol> <li>Decrease the training time of a model as much as possible</li> <li>Test as many hypotheses as possible</li> </ol> <p>To do that we restricted the image size to 256x256 and the number of epochs to at most 64. These restrictions gave us a model that can be trained in 2 hours on a 1080ti, or even faster on a 2080ti. This setting let us iterate quickly in testing new hypotheses and optimizing hyperparameters.
After we established our general training setup, we started an endless array of experiments on low-res images. </p> <p>Let's divide our experiments into two broad groups:</p> <ol> <li>Model engineering: what to train</li> <li>Training engineering: how to train</li> </ol> <p>Here comes model engineering. We started with some heavy encoders such as inceptionv4, seresnext50, etc. But it appeared that, for us, they overfit a lot in the classification task. Then we decided to focus on some light networks such as resnet34, bninception and densenet121. After several competitions I have begun to realize that sometimes, when you don't have much data, light encoders may really boost your score: they don't tend to overfit much to rare classes and label noise. This is just a hypothesis that needs to be carefully verified.</p> <p>To get the final models, after the initial 64 epochs on 256x256 images we increased the image size up to 1024 for resnet34, up to 512 for bninception and up to 640 for densenet121, and trained for 64 epochs more. </p> <p>To boost model performance we tried a lot of modifications. According to our findings, <strong>CoordConv</strong> <a href="https://arxiv.org/abs/1807.03247">https://arxiv.org/abs/1807.03247</a> and the <strong>GapNet</strong> architecture <a href="https://openreview.net/forum?id=ryl5khRcKm">https://openreview.net/forum?id=ryl5khRcKm</a> helped to improve the resnet34 score. Unfortunately we didn't have time to test these mods on bninception and densenet121. Adding some sophisticated convolution blocks to our nets didn't help either: Squeeze-and-Excitation and the Convolutional Block Attention Module didn't help. That was a sad story, because a lot of time was spent trying to optimize the model architecture instead of optimizing the training itself. </p> <p>When it comes to training, one of the first things that comes to mind is how not to overfit to the training data, especially when one is working with zero- and few-shot learning. Inspired by the AutoAugment paper <a href="https://arxiv.org/abs/1805.09501">https://arxiv.org/abs/1805.09501</a>, we searched the augmentation space by random sampling and came up with the following augmentations:</p> <ol> <li>HorizontalFlip</li> <li>Rotate with 16 degree limit</li> <li>ShiftScaleRotate with 16 degree limit</li> <li>RandomBrightnessContrast</li> <li>RandomGamma</li> <li>Blur</li> <li>Perspective transform: tile left, right and corner</li> <li>Shear</li> <li>MotionBlur</li> <li>GridDistortion</li> <li>ElasticTransform</li> <li>Cutout</li> </ol> <p>We took those augmentations from the albumentations <a href="https://github.com/albu/albumentations">https://github.com/albu/albumentations</a> and Augmentor <a href="https://github.com/mdbloice/Augmentor">https://github.com/mdbloice/Augmentor</a> modules. At first we thought this was too much for our networks, but for our models it turned out to be an essential part of not overfitting to the training data.</p> <p>The Cosface and Arcface parameters were optimized as well. Cosface: S = 32.0, M = 0.35. Arcface: M1 = 1.0, M2 = 0.4, M3 = 0.15.</p> <p>We experimented a lot with optimizers and their hyperparameters: Adam, AdamW, SGD, SGDW. But the best optimizer for us turned out to be good old <strong>Adam with cosine annealing</strong>.</p> <p>In the end we tried different kinds of TTA, but they didn't help to improve the score. Mixed precision training didn't show good results either. </p> <p>From the beginning we realised that it is essential to do something with the new_whales in order to incorporate them into the training process.
A simple solution was to assign each new_whale a probability of 1 / 5004 for each class. With the help of a weighted sampling technique this gave us some boost. But then we realised: why don't we use softmax predictions for new_whales derived from a trained ensemble? So we came up with <strong>distillation</strong>. We chose distillation instead of pseudo labels because a new_whale is considered to have a label different from the train labels, though this might not really be true. </p> <p>To further boost model capabilities we added test images with <strong>pseudo labels</strong> into train. Eventually our single model could hit 0.958 with snapshot ensembling. Unfortunately, ensembling models trained in this way didn't improve the score, maybe due to less variety because of the pseudo labels and distillation. </p> <p>In the end I should mention that this competition was really interesting and gave us an opportunity to develop face/fluke recognition skills. Thank you, Kaggle!</p> <p>4 years in deep learning seems like an eternity:)</p> <p>Thank you! What kaggle has taught me is to iterate as fast as possible and fully utilize your hardware. </p>
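<p>As a reference for the loss mentioned above, here is a minimal PyTorch sketch of a CosFace-style head with the parameters quoted in the post (S = 32.0, M = 0.35). It is an illustrative implementation, not the team's code.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class CosFaceHead(nn.Module):
    """Large-margin cosine head: subtract margin m from the true-class cosine."""
    def __init__(self, emb_size, n_classes, s=32.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_size))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        cosine = F.linear(F.normalize(emb), F.normalize(self.weight))
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, cosine - self.m, cosine)  # margin on the true class only
        return self.s * logits        # then apply nn.CrossEntropyLoss
</code></pre>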
Humpback Whale Identification
Approximation solution for LAP, 30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers and Kaggle for this very interesting competition, and congratulations to all winners. And thanks to <a href="https://www.kaggle.com/martinpiotte">@martinpiotte</a> for his great work; we learned so much from it. </p> <p>Our approach was based on Martin's awesome solution. One of the issues with Martin's original solution that he acknowledged was the computation time for solving the LAP problem, which is an O(n^3) problem and the limiting step in training time. Furthermore, since we are solving LAP for a score matrix with added randomness, we don't really need to solve it precisely. A greedy approximation to the LAP problem would be more than sufficient, as discussed in <a href="https://antimatroid.wordpress.com/2017/03/21/a-greedy-approximation-algorithm-for-the-linear-assignment-problem/">this link</a>. That greedy approach runs in O(n^2 log n), but we don't even need to be that precise. We ended up using a completely random approach where we search for the minimum on a random permutation of rows and, after each row's search, remove the corresponding row/column from the matrix. This results in an O(n^2) approximation that can be calculated in a few seconds.</p> <p>With bootstrap we were able to get a single model to 0.915 public LB, and with an ensemble we got to 0.944 public LB. That seems to be the limit of Martin's approach, as also observed by @interneuron in the 24th place write-up. We should have diversified our model structures more. </p> <p>One thing we noticed after the private LB came out is that ensembling through voting can overfit the public LB quite easily, whereas ensembling through score outputs results in much less separation between public and private LB.</p>
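<p>A minimal numpy sketch of the randomized O(n^2) greedy approximation described above (an illustration, not the team's exact code): rows are visited in random order and each takes its best remaining column.</p> <pre><code>import numpy as np

def random_greedy_lap(cost):
    """Randomized greedy approximation to the linear assignment problem."""
    n = cost.shape[0]
    col_free = np.ones(n, dtype=bool)
    assignment = np.empty(n, dtype=int)
    for r in np.random.permutation(n):          # random row order
        cols = np.where(col_free)[0]
        best = cols[cost[r, cols].argmin()]     # cheapest remaining column
        assignment[r] = best
        col_free[best] = False                  # remove the column from play
    return assignment
</code></pre>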
Humpback Whale Identification
Half fluke trick (+0.02 LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to share a simple trick: <br> 1. extract cropped images using bounding boxes <br> 2. split each image in half and horizontally flip the right part <br> 3. [your training procedure here] <br> 4. at test time compare the corresponding halves and combine the results <br></p> <p>Pros: <br> - doubles the training dataset <br> - higher effective image resolution <br></p> <p>Cons: <br> - bounding box dependency <br> - the center of the object is split <br></p> <p>In my case the private / public LB scores were: <br> - full-fluke 0.93686 / 0.93642 <br> - half-fluke 0.95398 / 0.95214 <br> - ensemble 0.95652 / 0.95722 <br></p> <p>Has anyone else used this strategy? Please share your experience.</p> <p>Yes, only one side is flipped. I also didn't use any flip augmentation during training.</p> <p>That's exactly what I meant. And thanks, Radek, for sharing your bounding box solution. I also had one based on a TensorFlow Object Detection model with about 83% IoU for your 400 annotated samples. Finally, different bounding boxes helped in TTA.</p>
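<p>A tiny numpy sketch of steps 1-2 of the trick (the array layout and box format, H x W x C and (x0, y0, x1, y1), are assumptions):</p> <pre><code>import numpy as np

def half_fluke(img, box):
    """Crop by bounding box, split in half, flip the right half so both halves
    share the same orientation."""
    x0, y0, x1, y1 = box
    crop = img[y0:y1, x0:x1]
    mid = crop.shape[1] // 2
    left = crop[:, :mid]
    right = crop[:, mid:][:, ::-1]   # horizontal flip of the right part
    return left, right
</code></pre>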
Humpback Whale Identification
[143rd place] single model trained with cross entropy / contrastive loss using fastai
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Humpback Whale Identification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, extremely big congratulations to the competition winners! And big congrats to everyone who participated in the competition. The level seemed extremely high and the conversations on the forums were great!</p> <p>Here is the <a href="https://github.com/radekosmulski/whale/blob/master/classification_and_metric_learning.ipynb">final addition</a> to my <a href="https://github.com/radekosmulski/whale">whale repository</a>, a notebook for constructing and training the model that served as the basis for my submission.</p> <p>There is a lot of text in the notebook, so, not to duplicate things, here is a very short tl;dr:</p> <ul> <li>resnet50 cnn pretrained on imagenet</li> <li>custom loss (a combination of cross entropy and contrastive loss)</li> <li>training with the one cycle policy, progressive image resizing, gradual unfreezing, discriminative lrs, Adam</li> <li>some data augmentation but nothing too extreme</li> <li>generating progressively harder datasets as training progressed while balancing classes to some extent</li> <li>no cleaning up of train data; prediction based on Euclidean similarity between feature vectors</li> <li>model trained on bounding boxes extracted in one of the earlier notebooks</li> <li>all code for training, predicting and generating the submission in this single notebook</li> </ul> <p>And last but not least, thanks to the organizers for a really fun competition!</p> <p>Thank you Giba! 🙂 </p>
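<p>For readers curious how cross entropy and contrastive loss can be combined into one objective, here is a hedged PyTorch sketch of the general idea (not the notebook's actual loss; the margin and weighting are assumptions, and it assumes each batch contains same-class pairs):</p> <pre><code>import torch
import torch.nn.functional as F

def ce_plus_contrastive(logits, emb, labels, margin=1.0, alpha=0.5):
    """Cross entropy on class logits plus a pairwise contrastive term on embeddings."""
    ce = F.cross_entropy(logits, labels)
    d = torch.cdist(emb, emb)                                  # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = d[same &amp; ~eye].pow(2).mean()                         # pull same-class pairs together
    neg = F.relu(margin - d[~same]).pow(2).mean()              # push others past the margin
    return ce + alpha * (pos + neg)
</code></pre>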
ICDM 2015: Drawbridge Cross-Device Connections
1st place solution summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICDM 2015: Drawbridge Cross-Device Connections <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, if device_1 and cookie_1 appear on both ip_1 and ip_2 then our training data has two entries:</p> <p>device_1, cookie_1, ip_1, ...</p> <p>device_1, cookie_1, ip_2, ...</p> <p>Our initial models were built on only non-cellular ip addresses, and entries were weighted by the reciprocal of the number of non-cellular ip addresses shared by a cookie and device. We weren't certain how to generalize the notion when including cellular ip addresses and ended up weighting everything equally in later models.</p> <p>First off, I'd like to thank the competition hosts at drawbridge for all their work putting this contest together. Next, I'll thank <a href="http://www.datalabusa.com/">DataLab USA</a> for providing the computing resources for our solution. Finally, thanks to all the competitors who kept things interesting. I'm looking forward to seeing your approaches.</p> <ul> <li>We generated a training set from the data as a collection of triples (device_id, cookie_id, ip) for each instance where a device and cookie appeared on the same ip.</li> <li>To this collection of triples, we joined all the basic information about the device, cookie, and ip address.</li> <li>In addition, we generated a few hundred features based on the interaction between device, cookie, and ip. The most influential of these were ranks of each cookie, partitioned by device and ordered by a basic attribute, say idxip_anonymous_c3 or ip_anonymous_c2 for instance.</li> <li>Cookie properties and any categorical features were one-hot encoded and also joined to the training set. We winnowed the list down substantially through cross-validation and were left with a few hundred in our final model.</li> <li>To this we added out-of-sample predictions of a few xgboost models built on cookie_all_basic to predict whether a cookie was matched to any device in the train set.</li> <li>To reduce the size of the training set we downsampled the cellular ip addresses quite heavily and also dropped any -1 drawbridge handle cookies.</li> <li>On this reduced dataset we built a learning-to-rank model, which was a modified version of xgboost's &quot;rank:pairwise&quot; partitioned by device; see the sketch below.</li> <li>For each device, we took the drawbridge handle of the highest-scoring cookie and submitted all the cookie_ids for that drawbridge handle.</li> </ul> <p>We did attempt some ensembling for a final solution. It seems this was ill-fated, as the private leaderboard score for our strongest model was higher than the ensemble's.</p>
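<p>To illustrate the learning-to-rank setup, here is a sketch with synthetic data (not the winners' features): xgboost's "rank:pairwise" objective needs rows grouped contiguously by the ranking key, here the device.</p> <pre><code>import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # stand-in device/cookie/ip features
y = rng.integers(0, 2, size=1000)            # 1 = cookie actually belongs to the device
device_ids = rng.integers(0, 100, size=1000)

order = np.argsort(device_ids)               # rows must be contiguous per group
X, y, device_ids = X[order], y[order], device_ids[order]

dtrain = xgb.DMatrix(X, label=y)
_, counts = np.unique(device_ids, return_counts=True)
dtrain.set_group(counts)                     # one ranking group per device

params = {"objective": "rank:pairwise", "eta": 0.1, "max_depth": 8}
model = xgb.train(params, dtrain, num_boost_round=100)
</code></pre>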
ICDM 2015: Drawbridge Cross-Device Connections
Solution from Team Monkeys
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICDM 2015: Drawbridge Cross-Device Connections <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition host and all competitors for making this competition interesting and fun. Our team placed 14th on the leaderboard (the most heartbreaking thing is that we could have improved our prediction significantly after removing all cookies with unknown handles: 0.863723... ==). Below are our approaches:</p> <ol> <li>To reduce the dataset, we find the device and cookie id pairs which share at least one IP. The IP must be a non-cellular IP, or a cellular IP which appears with fewer than 10 different cookie ids. Using this filtering method, we can reduce the dataset to approximately 7M samples while still covering around 97.5% of all the positive samples.</li> <li>Instead of forming device-cookie pairs, we construct the dataset using a device-handle mapping. For handles with more than one cookie, we aggregate the data by taking the average value for integer-valued features. For categorical data, we take the mode (known values have higher priority than unknown values).</li> <li>For categorical features, we estimate the lower bound of the conditional probability to capture the likelihood between different device types/OSes and computer OSes/browser versions, and between device country and cookie country.</li> <li>Most of our features are generated from IP-related data. We found that the sum of IP frequencies in id_all_ip.csv is higher than the frequency in ipagg_all for some IPs. Our guess is that there might be some hash collisions for the IP values, therefore we removed those IPs for some of our IP-related features. We calculated the total ids shared by the overlapping IPs, the total device/cookie ids shared by the overlapping IPs, the number of non-cellular IPs, the device id to cookie id ratio, etc. Another important feature is the (sum of IP frequency)/(IP frequency in ipagg_all.csv). </li> <li>For the property features, we multiply the property frequency with the corresponding property category vector, and then calculate the vector sum for the corresponding cookie_id and device_id. After that, we divide the sum of the element-wise min over the sum of the element-wise max (a sketch is shown below). We found this feature to be quite important, and it is the only property-related feature in our model.</li> <li>There are 171 features and we use xgboost as our model. We thought it might be useful to use a cookie-cookie model &amp; device-device model to improve our performance, but there was not much success.</li> </ol> <p>We are looking forward to seeing other competitors share their approaches.</p>
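<p>Point 5 describes what amounts to a weighted-Jaccard-style similarity between frequency-weighted property vectors. A small numpy sketch of that idea (the vector construction is assumed):</p> <pre><code>import numpy as np

def property_similarity(device_vec, cookie_vec):
    """Sum of element-wise min over sum of element-wise max, as in point 5.
    Each vector: property frequencies multiplied by property category vectors, summed."""
    return np.minimum(device_vec, cookie_vec).sum() / np.maximum(device_vec, cookie_vec).sum()

# identical vectors give 1.0, disjoint support gives 0.0
print(property_similarity(np.array([2., 0., 1.]), np.array([1., 1., 1.])))  # 0.5
</code></pre>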
ICR - Identifying Age-Related Conditions
"Simple" and "Easy" Time Series approach but still have a medal for ICR-2023
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone,</p> <p>In the ICR competition, we're tackling the overfitting challenge. To ensure robust models, rigorous validation is a must. While cross-validation is common, I used time-series validation for a fresh perspective and a better defense against overfitting.</p> <p>I also tried cross-validation and nested CV, but I chose the time-series scheme in the end (explanation below).</p> <h1><strong>Time-Series Validation Approach:</strong></h1> <p>I had a <strong>4078-place climb</strong> from the public leaderboard to the private leaderboard with only <strong>3 entries</strong>.<br> This is how I did it!</p> <h2>1. Data process</h2> <ul> <li>Standard Scaler for the input data (this might not matter much when using tree-based models)</li> <li>Feature selection </li> <li>Add Epsilon as a feature</li> </ul> <h2>2. Model</h2> <ul> <li>Single LGBM with tuned parameters</li> </ul> <h2>3. Validation</h2> <p>The parts above are not much different from most solutions. Here is the time-series approach. <br> I split the train data into 2 folds:</p> <ul> <li><p>Fold 1: </p> <ul> <li>train-&gt;2019/01/03 </li> <li>test: 2019/01/04 -&gt; 2019/11/12<br> =&gt; This is <code>model_1</code> with <code>balance_log_loss_1</code> and <code>weight_1</code>. </li></ul></li> <li><p>Fold 2: </p> <ul> <li>train-&gt;2019/11/12</li> <li>test: 2019/11/13 -&gt; 2020/09/29<br> =&gt; This is <code>model_2</code> with <code>balance_log_loss_2</code> and <code>weight_2</code>. </li></ul></li> </ul> <p><strong>Note</strong>: <code>weight_{i} = 1 / balance_log_loss_{i}</code>; these weights are used, I think, to reduce the imbalance between the two folds.<br> Here are the validation losses of the alternative schemes:</p> <table> <thead> <tr> <th></th> <th>Balance Log Loss</th> </tr> </thead> <tbody> <tr> <td>CV</td> <td>0.21180</td> </tr> <tr> <td>Nested-CV</td> <td>0.13583</td> </tr> <tr> <td>Time Series</td> <td>0.43275</td> </tr> </tbody> </table> <p>=&gt; CV and Nested-CV seem to overfit, so I chose the time-series scheme instead! </p> <h2>4. Output predict</h2> <p>To produce the final output, I calculate the loss-weighted average of the <code>pred_{i}</code> of the 2 models:<br> $$<br> output =\frac{weight_1 * pred_1 + weight_2 * pred_2}{weight_1 + weight_2}<br> $$</p> <h1>Conclusion</h1> <p>This is the approach behind my first model. Beginners can use it to achieve their first medals, like me.<br> Please have an upvote if you found it interesting. This is my <a href="https://www.kaggle.com/code/hwhuynguyen/icr-base-line" target="_blank">code</a>.</p>
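<p>A minimal sketch of the two-fold scheme above, assuming an ISO-formatted <code>Epsilon</code> date column, a <code>balanced_log_loss</code> helper, and a feature list <code>features</code> (all placeholder names):</p>
<pre><code>import lightgbm as lgb

folds = [('2019-01-03', '2019-11-12'), ('2019-11-12', '2020-09-29')]
models, weights = [], []
for train_end, test_end in folds:
    tr = df[df['Epsilon'] &lt;= train_end]
    te = df[(df['Epsilon'] &gt; train_end) &amp; (df['Epsilon'] &lt;= test_end)]
    model = lgb.LGBMClassifier().fit(tr[features], tr['Class'])
    loss = balanced_log_loss(te['Class'], model.predict_proba(te[features])[:, 1])
    models.append(model)
    weights.append(1.0 / loss)  # weight_i = 1 / balance_log_loss_i

# final output: loss-weighted average of the two fold models
pred = sum(w * m.predict_proba(test[features])[:, 1]
           for w, m in zip(weights, models)) / sum(weights)
</code></pre>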
ICR - Identifying Age-Related Conditions
(Our solution) CV 0.12 - Public 0.11 - Private 0.62 🥲
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone!<br> Well, after this shakeup I just want to show some of the ideas we used to build our solution. Although we didn't succeed in this competition, it was really a pleasure to team up with <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> <a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a> <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> </p> <p>Actually, our best in private was 0.38, which was a very simple lgbm with almost no processing. It was our first sub in the competition :) For us, the better the cv and public, the worse the private.</p> <p>We used StratifiedKFold based on Alpha with 20 folds and 20 bags.<br> Each of the following represents an experiment we did to improve our cv to 0.12 and public to 0.11:</p> <ul> <li>Using LGBM with some tuning + undersampling with ratio 0.7 (cv 0.2 - public 0.18 - private 0.38 (the best in private))</li> <li>Using a quite small feature_fraction (0.2) and a special loss function in lgbm called "cross_entropy_lambda" + some more tuning of hyperparams (cv 0.17 - public 0.14 - private 0.45)</li> <li>Using Date as a feature + adding some ideas for date + some manipulation of the features (cv 0.144 - public 0.14 - private 0.43)</li> <li>Treating the years as groups (3 groups: &lt;=2018 , ==2019, ==2020), then trying to predict the group for the private (I thought this might help in test because maybe the model would find some patterns that are similar to a specific year) (cv 0.128 - public 0.16 - private 0.50)</li> <li>Some feature engineering using nearest neighbors (e.g. nearest neighbor target - average of the 5 nearest targets… etc.) (we didn't submit this, because it had worse cv).</li> <li>Using a separate model for each disease. Actually this brought a really good improvement in cv, and we did feature selection based on each model, which brought some quite interesting insights (e.g. one of the diseases uses specific features that are not important at all for the others) (cv 0.113 - public 0.17 - private 0.58).</li> <li>I found a really interesting thing in the feature "EJ": it has two values, either 0 or 1. We found that building a model trained on only EJ==0 and another one for EJ==1, then concatenating the results, improved the cv (see the sketch after this post) (cv 0.13 - public 0.16 - private 0.44).</li> <li>Started using stacking with lgbm + svm +… etc (cv 0.125 - public 0.11 (our best public) - private 0.62). This is our 1st selected submission.</li> <li>Using the EJ separation model in the stacking (cv 0.10 - public 0.13 - private 0.61)</li> <li>Pseudo Labeling (cv 0.125 - public 0.14 - private 0.73)</li> <li>Using our best public submission (lb 0.11) + adding the models separated by disease mentioned above<br> (cv 0.114 - public 0.13 - private 0.62)</li> <li>Using a 3rd stage stacking (cv 0.09 - public 0.14 - private 0.63)</li> <li>Using a 3rd stage stacking + using the EJ separation models + using the disease-separated models + adding catboost to the stacking (cv 0.10 - public 0.15 - private 0.48). 
This is our 2nd chosen submission.</li> </ul> <p>Well after all, it was really fun to try all these ideas along with my brilliant teammates. Maybe next time we can secure a gold.</p> <p>Here is the notebook of our 0.11:<br> <a href="https://www.kaggle.com/mohammad2012191/public-0-11-stacking" target="_blank">https://www.kaggle.com/mohammad2012191/public-0-11-stacking</a></p> <p>Here is the notebook of our 2nd sub:<br> <a href="https://www.kaggle.com/mohammad2012191/stacking-3rd-stage-sep-ej" target="_blank">https://www.kaggle.com/mohammad2012191/stacking-3rd-stage-sep-ej</a></p> <p>Sorry they are a little messy.</p>
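<p>For the "EJ separation" experiment mentioned in the list above, a hedged sketch of the idea; the fold count and plain LGBM settings are placeholders, not the team's exact configuration, and a default RangeIndex on <code>train</code> is assumed:</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

oof = np.zeros(len(train))
for ej_value in (0, 1):
    subset = train[train['EJ'] == ej_value]
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for tr_idx, va_idx in skf.split(subset, subset['Class']):
        tr, va = subset.iloc[tr_idx], subset.iloc[va_idx]
        model = lgb.LGBMClassifier().fit(tr[features], tr['Class'])
        # write predictions back at the subset's original positions,
        # so the two per-EJ models' outputs are concatenated into one oof array
        oof[subset.index[va_idx]] = model.predict_proba(va[features])[:, 1]
</code></pre>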
ICR - Identifying Age-Related Conditions
103rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this competition, and congratulations to the final winners. </p> <p>I actually stopped working on this competition a few months ago due to the unreliable cv-lb relationship. This decision, in hindsight, was certainly the reason I got on the lucky side of the shake-up. My solution was quite simple and used a weighted ensemble of three boosting models. Here is a <a href="https://www.kaggle.com/code/ryanbarretto/103rd-place-solution" target="_blank">link</a> to the code. </p>
ICR - Identifying Age-Related Conditions
13th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your questions. Below are the answers.</p> <p>1. Cross-validation was performed using StratifiedKFold(n_splits=10) instead of GroupKFold.</p> <p>2. The data ratio after downsampling (seed=42) was as follows, maintaining a 1:1 ratio:<br> Original data -&gt; Class0: 509, Class1: 108<br> Downsampled data -&gt; Class0: 108, Class1: 108</p> <p>3. The ensemble ratio was honestly determined intuitively; there might have been room for further exploration.</p> <p>As per your comments, it was anticipated that this competition would be a shake-up, but honestly, I am surprised by this result. Just to make sure, I will provide my solution below.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Overview of the Approach section</h1> <h2>Data processing</h2> <p>・Missing Values: Create missing value flags and fill with the mean<br> ・Column "EJ": label encoding<br> ・Addition of Group Features: For variables other than the target, add ['min', 'max', 'mean', 'std'] of each feature grouped by the "EJ" column (a sketch is given below).</p> <h2>Training</h2> <p>Based on the discussion by Chris below, I created three models with downsampling and added class weights during training. Validation was performed using StratifiedKFold with n_splits=10 for all models, and then the results were aggregated using seed averaging.</p> <p>・Model1: LGBMClassifier(features=all)<br> ・Model2: LGBMClassifier(trained with the top 20 most impactful features of Model 1)<br> ・Model3: CatBoostClassifier(features=all)</p> <h1>Details of the submission</h1> <p>・submission["class_1"] = Model1*0.2 + Model2*0.2 + Model3*0.6<br> ・result: Public = 0.22, Private = 0.36</p> <h1>Sources</h1> <p>・<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507</a></p>
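<p>A small sketch of the group-feature step above, assuming <code>train</code> holds the raw columns with <code>EJ</code> already label-encoded:</p>
<pre><code>num_cols = [c for c in train.columns if c not in ('Id', 'Class', 'EJ')]

# per-EJ-group statistics, broadcast back onto every row of that group
aggs = train.groupby('EJ')[num_cols].agg(['min', 'max', 'mean', 'std'])
aggs.columns = [f'{c}_{stat}_by_EJ' for c, stat in aggs.columns]
train = train.join(aggs, on='EJ')
</code></pre>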
ICR - Identifying Age-Related Conditions
147th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First off, I would like to thank the Competition hosts and the Kaggle team for hosting such an interesting Competition. I jumped from rank 3169 in public lb -&gt; 147 in private lb</p> <h2>context:</h2> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2>Overview of the approach:</h2> <p>This competition probably witnessed the biggest shake-up ever on Kaggle. Due to such a small dataset, it was easy to overfit, so keeping the approach simple was important.<br> In my solution, I only used a single LightGBM model and did feature engineering. </p> <h2>Details of the submission</h2> <h3>Cross-Validation Strategy:</h3> <p>I used a 10-fold StratifiedKFold for the validation strategy. Like others, I also noticed a bad correlation between cv and public lb, but what's interesting is that during the start of the competition, when my models were weaker (~0.3 LB), they correlated well with cv; as I improved beyond about 0.2 LB, it stopped correlating, which I think can be considered one way of identifying overfitting. I wonder if someone else also noticed a similar trend?</p> <p>Here is a non-exhaustive table of cv vs lb:</p> <table> <thead> <tr> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.61</td> <td>0.3</td> <td>0.55</td> </tr> <tr> <td>0.57</td> <td>0.28</td> <td>0.56</td> </tr> <tr> <td>0.31</td> <td>0.22</td> <td>0.43</td> </tr> <tr> <td><strong>0.18</strong></td> <td><strong>0.20</strong></td> <td><strong>0.39</strong></td> </tr> <tr> <td>0.16</td> <td>0.2</td> <td>0.46</td> </tr> </tbody> </table> <h3>Feature Engineering:</h3> <p>Typically, there are two ways to generate new features:<br> 1.) Use domain knowledge<br> 2.) Brute-force different feature combinations based on cv</p> <p>Due to the nature of the data (anonymized and few samples), both were difficult, so instead I grouped different features based on correlation and tried different combinations between them (mean, std, a few other transformations, etc.) and evaluated using cv (a rough sketch is given below). My assumption in this approach was that correlated features would likely be related. As a very simple example, consider two features, width and length: depending on context they should appear somewhat correlated, and by simply multiplying them we can get a new feature, area, which can provide more information to the model.</p> <h2>Sources:</h2> <p>I used the following notebook to find feature groups: <a href="https://www.kaggle.com/code/aerdem4/icr-lofo-feature-importance" target="_blank">https://www.kaggle.com/code/aerdem4/icr-lofo-feature-importance</a></p>
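<p>A rough sketch of the correlation-based grouping described above; the 0.7 threshold and the chosen combinations are illustrative assumptions, with each candidate feature kept or dropped based on cv:</p>
<pre><code>corr = train[num_cols].corr().abs()

groups, seen = [], set()
for col in num_cols:
    if col in seen:
        continue
    members = [c for c in num_cols if corr.loc[col, c] &gt; 0.7]
    seen.update(members)
    groups.append(members)

# simple within-group combinations, each evaluated with cv before keeping
for i, members in enumerate(groups):
    if len(members) &gt; 1:
        train[f'grp{i}_mean'] = train[members].mean(axis=1)
        train[f'grp{i}_std'] = train[members].std(axis=1)
</code></pre>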
ICR - Identifying Age-Related Conditions
149th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>149th Place Solution for the ICR - Identifying Age-Related Conditions Competition</h3> <p>I was ranked 2859th on the public LB, but after a 2710-place shake-up I finished 149th on the private LB. This is the biggest shake-up I have ever experienced. Since the dataset was very small, I tried to keep the solution simple.</p> <h3>Context</h3> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h3>Overview of the approach</h3> <p>Our final model (Public/Private LB of 0.19743/0.39377) was a combination of 10 single models. The ensembles used were CatBoost (5-seed ensemble, Public/Private LB 0.22466/0.41235) and LGBM (5-seed ensemble, Public/Private LB 0.18788/0.4098).</p> <p>One final submission was an average ensemble of 10 models from CatBoost (5 models) and LGBM (5 models).</p> <h3>Details of the submission</h3> <p>After trying 5-fold, 10-fold, and 20-fold, 20-fold had the best CV, so it was chosen. The best fold scored CV~0.05, the worst fold CV~0.27, and the out-of-fold CV across all folds was ~0.126, indicating that the variation among folds was very large. Therefore, I tried to keep the approach as simple as possible with the best generalization performance, and by applying a seed ensemble I tried to minimize the variation as much as possible. I also tackled the competition by trusting the behavior of the CVs without much consideration of the Public LB and selected the final sub that way (the so-called trust-CV approach).</p> <p>Below is a summary of CV, Public LB, and Private LB for each model (all 20 fold).</p> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>LGBM</td> <td>0.12614</td> <td>0.18521</td> <td>0.41131</td> </tr> <tr> <td>LGBM (5 seeds ensemble)</td> <td>0.12456</td> <td>0.18788</td> <td>0.4098</td> </tr> <tr> <td>CatBoost (5 seeds ensemble)</td> <td>0.12545</td> <td>0.22466</td> <td>0.41235</td> </tr> <tr> <td>LGBM (5 seeds ensemble) + CatBoost (5 seeds ensemble)</td> <td>0.12087</td> <td>0.19743</td> <td>0.39377</td> </tr> </tbody> </table> <h3>Preventing overfitting</h3> <p>To prevent overfitting, I used early stopping for both LGBM and CatBoost. (I set early_stopping_rounds = 100 for iterations = 50500, but in practice iterations were not needed to this extent because training converged within 1000 iterations.) I also adjusted the parameters related to regularization and overfitting for both LightGBM and CatBoost. 
The main LGBM and CatBoost parameters were as follows.</p> <ul> <li>LGBM parameters</li> </ul> <pre><code>lgbm_params = {
    'objective': 'binary',
    'random_state': CFG.random_seed,
    'class_weight': 'balanced',   # 'balanced' vs. None was compared
    'n_estimators': 50500,
    'early_stopping_round': 100,
}
</code></pre> <ul> <li>CatBoost parameters</li> </ul> <pre><code>catboost_params = {
    'iterations': 50500,
    'early_stopping_rounds': 100,
    'random_seed': CFG.random_seed,
}
</code></pre> <h3>EDA</h3> <p>When I was doing EDA, I was thinking of adding information from greeks.csv to the feature set, but there were so many missing Epsilon values that I wondered how to handle them. Therefore, for the moment, I created models to predict Alpha, Beta, Gamma, and Delta, and tried to utilize Alpha, Beta, Gamma, and Delta as features by using the oof values in train.csv and the respective pred values in test.csv. However, this approach resulted in a lower CV. (Consequently, it was also detrimental from the Private LB perspective.) I thought about this result, and I believe the reason may be that Alpha is practically the Class (0, 1) target of this competition, and generalization performance was impaired because the Alpha feature is too strong. In fact, the feature importance of LGBM at this time showed that the importance of Alpha was quite high. (In addition, the code of <a href="https://www.guruguru.science/competitions/21/discussions/ab028e86-d011-485e-8844-45d15717fec4/" target="_blank">https://www.guruguru.science/competitions/21/discussions/ab028e86-d011-485e-8844-45d15717fec4/</a> was used to illustrate the feature importance. Thank you very much. I really like the fact that the fold-to-fold variation of the CV can be expressed as a boxen plot.) Therefore, I decided to stop using the information in greeks.csv as features and use the features present in the original train.csv. (Instead, as mentioned in the Validation Strategy part, I utilize the information from greeks.csv in the Validation Strategy.)</p> <ul> <li><p>Feature importance of LGBM when predictions of Alpha, Beta, Gamma, and Delta are used as features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6820505%2Fdff60ef0172acd231afbf6e80f194679%2F__results___35_1.png?generation=1692284555088960&amp;alt=media" alt=""></p></li> <li><p>Feature importance of LGBM when predictions of Alpha, Beta, Gamma, and Delta are not used as features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6820505%2F23ad8cf3eaea53711dd1a5bdd034a5a7%2F__results___26_1.png?generation=1692284604740016&amp;alt=media" alt=""></p></li> </ul> <h3>Feature Selection</h3> <p>I used all features except for Id, which appears in train.csv.</p> <h3>Validation Strategy</h3> <p>As mentioned above, I wanted to make the validation strategy as unbiased as possible for the training data, since I expected a great deal of variability. 
Therefore, I decided to combine train.csv with greeks.csv and perform fold partitioning with MultilabelStratifiedKFold for 'Alpha', 'Beta', 'Gamma', 'Delta' and 'Class'.</p> <pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

train_df = pd.read_csv('train.csv')
greeks_df = pd.read_csv('greeks.csv')
train_df = pd.merge(train_df, greeks_df, on='Id', how='left')

mskf = MultilabelStratifiedKFold(n_splits=CFG.n_splits, shuffle=True, random_state=CFG.random_seed)
labels = train_df[['Alpha', 'Beta', 'Gamma', 'Delta', 'Class']]
train_df['fold'] = -1
for fold, (_, valid_index) in enumerate(mskf.split(train_df, labels)):
    train_df.loc[valid_index, 'fold'] = fold
</code></pre> <h3>Sources</h3> <ul> <li><a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507</a></li> </ul>
ICR - Identifying Age-Related Conditions
157th (preliminary) silver medal solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had limited time for this competition, and my previous submission was over 2 months ago. Despite that, the method I'm presenting here won a silver medal. I had additional submissions that could have performed equally as well, indicating the overall approach was sound. With more time, this approach might have yielded even better results.</p> <p>A public version of my code is available <a href="https://www.kaggle.com/code/wisawesome/icr-identity-157th-place-prelim" target="_blank">here</a>.</p> <p>Things that worked for me:</p> <ul> <li>CatBoost</li> <li><code>auto_class_weights = "Balanced"</code></li> <li><code>logloss</code> with <code>sample_weights</code> set to account for the competition metric (see my code <a href="https://www.kaggle.com/code/wisawesome/icr-identity-157th-place-prelim" target="_blank">here</a>).</li> <li>ordered boosting. This dataset is small and as such prediction shift is a real concern. In these cases ordered boosting seems to work well</li> <li>Stochastic Gradient Langevin Boosting, see <a href="https://arxiv.org/abs/2001.07248" target="_blank">this paper by Aleksei Ustimenko and Liudmila Prokhorenkova</a> for details.</li> <li>RepeatedStratifiedKFold with 5 folds and 5 repeats resulting in 25 CatBoost models total.</li> <li><code>bootstrap_type= "Bernoulli"</code> with <code>subsampling</code> and <code>colsample_bylevel</code> because otherwise it would take forever to train.</li> <li>parameter tuning using Optuna.</li> <li>Ensembling predictions from all 25 CatBoost models (a minimal sketch appears at the end of this post).</li> </ul> <p>Things that did not work for me:</p> <ul> <li>post-processing and thresholding. Very quickly I realized that making as few as one mistake would knock me out of the competition with this competition metric. I decided that it was not worth the risk.</li> <li>XGBoost and LightGBM. Could not get scores that were anywhere close (in terms of cross validation) to the ones obtained by CatBoost. Probably ordered boosting had something to do with it, but there are also other design differences (symmetric trees etc.)</li> <li>Ensembling with TabPFN. I really like the idea behind TabPFN and the authors claim that ensembling with other methods may improve results; however, in this particular instance I was not able to do so in CV. Perhaps TabPFN is more suited to a different type of dataset. I really like TabPFN's execution speed and I would have loved it if it worked as well as CatBoost.</li> <li>Dropping features. Did not spend much time on feature selection, but the selection that I did do based on CatBoost feature importances did not yield better results in CV.</li> <li>Greeks. Not available in test so I could not find a way to make them work for me. I did use them for EDA but I could not see how they would be useful if absent from test.</li> </ul> <p>Things that I did not try but I wish I had:</p> <ul> <li>Using the time domain. Lots of other notebooks used time. I just ignored it. 
It could have been the case that the data was not coming from the same distribution over time due to the way it was collected, but there was no guarantee that these hypothetical data collection issues would persist into the testing data. In hindsight I should have spent some time to explore this.</li> <li>Dropping unusual observations. We had very little data to work with anyway, so I was hesitant to do this because valuable information might be lost in the process.</li> </ul> <p>I hope you found my solution interesting and good luck in the next competition!</p> <p>Cheers,<br> Wojtek (Victor)</p>
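<p>Putting the pieces from the "things that worked" list together, a minimal sketch; only the named options follow the write-up, while the hyperparameter values and data names are placeholders:</p>
<pre><code>import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import RepeatedStratifiedKFold

# sample weights that mirror the balanced log loss metric
w = np.where(y == 1, 1.0 / (y == 1).sum(), 1.0 / (y == 0).sum())

rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
models = []
for tr_idx, va_idx in rskf.split(X, y):
    model = CatBoostClassifier(
        loss_function='Logloss',
        boosting_type='Ordered',      # ordered boosting against prediction shift
        bootstrap_type='Bernoulli',
        subsample=0.8,
        colsample_bylevel=0.5,
        verbose=False,
    )
    model.fit(X.iloc[tr_idx], y.iloc[tr_idx],
              sample_weight=w[tr_idx],
              eval_set=(X.iloc[va_idx], y.iloc[va_idx]))
    models.append(model)

# ensemble all 25 fold models by averaging probabilities
pred = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
</code></pre>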
ICR - Identifying Age-Related Conditions
158th solution🥈
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your comment!!🙌</p> <h1><a href="https://www.kaggle.com/code/nynyny67/icr-submission?scriptVersionId=139202837" target="_blank">Kaggle ICR - Identifying Age-Related Conditions</a></h1> <h1>Best solution among final selection (cv: 0.185, public: 0.17, private: 0.39)</h1> <p>ensemble: (XGB, NN, TabPFN)<br> weights: (19, 1, 10)</p> <p>The weight combination was optimized by distributing a total of 30 among the models in the ensemble, using grid search based on cv score.</p> <h2>Out of fold result</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8363906%2F175ce142039d556b282b859afe31ad53%2Fcv.png?generation=1691739845457339&amp;alt=media" alt=""></p> <h1>best model (cv: 0.235, public: 0.19, private: 0.34, not in final selection)</h1> <p>20 XGB bagging.<br> Bagging was used with datasets in which negatives were downsampled (learned from: <a href="https://www.kaggle.com/code/cdeotte/rapids-cuml-svc-baseline-lb-0-27-cv-0-35" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-cuml-svc-baseline-lb-0-27-cv-0-35</a> ).<br> 3% of labels were flipped in each bagging dataset so that the diversity and robustness of the ensemble increase.<br> I didn't choose this model for the final selection because its cv score was not so good.</p> <h2>Out of fold result</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8363906%2Fecd886fa266dcd780b6a2d9b9c0010c0%2Fsingle_xgb.png?generation=1691739879738216&amp;alt=media" alt=""></p> <h1>optimization to competition metric</h1> <p>I think it was important to optimize model outputs for the competition metric. These are the methods I tried.</p> <h2>Based on the model output.</h2> <p>The pp_prob function from this notebook was useful.<br> <a href="https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17/notebook" target="_blank">https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17/notebook</a></p> <p>I think this method is robust against data distribution change because it does not use the counts of positives and negatives in the training set, but the sum of the model outputs.<br> It is not guaranteed that the private dataset has an identical ratio of positive and negative samples.</p> <h2>analytical approach</h2> <p>Given the probability $$p_i$$ that each sample is positive, what value $$\hat{y_i}$$ should we submit to get a lower metric value?<br> Let's think about the i-th sample's expected (negated) loss contribution, where $$n_1$$ and $$n_0$$ are the positive and negative counts.</p> <p>$$l_i = \frac{p_i}{n_1} \log{\hat{y_i}} + \frac{1-p_i}{n_0} \log{(1-\hat{y_i})} $$</p> <p>Finding the $$\hat{y_i}$$ that makes this expression largest (equivalently, the expected loss smallest) minimizes the metric.</p> <p>$$\frac{\partial l_i}{\partial \hat{y_i} }= \frac{p_i}{n_1} \frac{1}{\hat{y_i}} - \frac{1-p_i}{n_0} \frac{1}{1 - \hat{y_i}} = 0$$</p> <p>$$\hat{y_i} = \frac{1}{1-\frac{n_1}{n_0} (1 - \frac{1}{p_i})}$$</p> <p>Then you can estimate the $$p_i$$ values with a machine learning model and calculate the $$\hat{y_i}$$ value for each sample. 
This method worked for me, but I didn't use this approach in final selection because the former method was better in cv score.</p> <h2>code</h2> <p><a href="https://github.com/NYNYNY67/KaggleIcr" target="_blank">https://github.com/NYNYNY67/KaggleIcr</a></p> <p>Thank you!<br> I fixed the url!</p>
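<p>For reference, the analytical adjustment derived above fits in a few lines; here <code>n0</code> and <code>n1</code> are assumed to be the negative/positive counts used for the estimate:</p>
<pre><code>import numpy as np

def adjust(p, n0, n1, eps=1e-15):
    # optimal submission value given an estimated P(y=1) = p
    p = np.clip(p, eps, 1 - eps)
    return 1.0 / (1.0 + (n1 / n0) * (1.0 / p - 1.0))
</code></pre>
<p>This is the same expression as in the derivation, rewritten using -(1 - 1/p) = (1/p - 1).</p>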
ICR - Identifying Age-Related Conditions
15th Place Solution for the "ICR - Identifying Age-Related Conditions"
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I feel incredibly fortunate to have reached 15th place in the competition. I started by referencing the familiar baseline (<a href="https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models)" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models)</a>. By delving into the comprehensive notebooks and discussions, I learned and adapted my approach.</p> <p>What did not work for me:</p> <ul> <li>Avoided using Epsilon from greeks due to potential data drift.</li> <li>De-anonymizing (but it's still a magic method for me, as someone not good at math; thanks!) and feature derivation.</li> <li>Feature selection using target permutation; I found that the features filtered by "gain" and "split" are not in good agreement.</li> <li>Optuna, post-processing, and oversampling; they weren't effective for me.</li> </ul> <p>What did work for me:</p> <ul> <li>Simulating a local validation process through nested k-folds with StratifiedKFold, keeping the full set of cv_outer models rather than only the best_model (sketched below).</li> <li>A diverse ensemble model with probability reweighting.</li> <li>Incorporating the greeks.Alpha into the training.</li> </ul> <p>I'm very lucky this time (contrary to real life…orz). My deepest gratitude goes out to the members of the Kaggle community who generously and selflessly share their knowledge. I also hold immense respect for those inquisitive minds who are never hesitant to ask questions and challenge the status quo. Kindly bear with any oversights or shortcomings!!</p>
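<p>A hedged sketch of the nested-CV point above: keep every outer-fold model for inference instead of a single best one. <code>tune_with_inner_cv</code> is a hypothetical helper standing in for the inner tuning loop:</p>
<pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_models = []
for tr_idx, va_idx in outer.split(X, y):
    # hyperparameters are chosen on an inner CV of the outer-train split
    model = tune_with_inner_cv(X.iloc[tr_idx], y.iloc[tr_idx])
    outer_models.append(model)

# average over the full set of cv_outer models, not just the best one
pred = np.mean([m.predict_proba(X_test)[:, 1] for m in outer_models], axis=0)
</code></pre>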
ICR - Identifying Age-Related Conditions
1699th Place Solution (partial) for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your message. I also tried to predict using only cosine similarity at the beginning. The heatmap is a great idea to show the relationship.</p> <p>I think the Euclidean distance works just as well. Since we want to focus on similarities, I think the inverse of the distance, etc. would be better than the distance itself. Here, I think the Cosine Similarity is easier to use because its value is normalized.</p> <p>Yes, exactly.<br> So the method is suitable only for small test sets.</p> <p>Thanks for sharing your thoughts. I want to develop a good model that will work well with any small data set, no matter how large the diversity.</p> <p>Invertedness may not be necessary. I feel that it is a matter of preference.😀</p> <p>Thank you for your kind message. I made some mistakes and also my explanation was not sufficient.<br> In calculating similarity, fillna(-1) was used for both train and test datasets, and in TabPFN calculation, SimpleImputer(median) was used.<br> I had noticed that there were 9 feature columns with missing values in the training set, but made a mistake when removing them. I forgot to remove 'DU', 'FC', and 'FL', and now realize that I mistakenly removed 'CU' instead.<br> I will correct the description.</p> <h1><strong>Gold zone solution (privateLB=0.351) using Cosine Similarity</strong></h1> <p>First of all, I would like to thank InVitro Cell Research, LLC (ICR) and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views.<br> Congratulations to all the winners! Although I am not a winner, I would like to share one of the models used in the ensemble in my selected submission. In the model, the Cosine Similarity Method was used, and the scores for the model alone were as follows: local 0.275, public 0.16, and private 0.35. (In fact, the selected notebooks were an ensemble (bagging) of three models, but by ensembling with more overfitted models, the results were not so good. Therefore, in order not to waste the reader's time, I dared to present only the model that worked well.)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fc4f3a498f373170febd2dd6407994542%2F035.png?generation=1691893371996007&amp;alt=media" alt=""></p> <h2><strong>Context</strong></h2> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2><strong>Overview of the Approach</strong></h2> <p><strong>Cosine Similarity Method</strong><br> The Cosine Similarity Method was developed to improve prediction performance using small training sets with large diversity [1]. In this method, for each test sample, the model is trained on the 80% of patients whose features are most similar to that sample. 
The procedure is shown below.</p> <ul> <li>Feature 'EJ' was replaced as ('A': 0, 'B': 1).</li> <li>Missing values were treated with fillna(-1).</li> <li>Of the 56 features, 49 features were used. (Note that the original statement is incorrect. I forgot to remove 'DU', 'FC', and 'FL', and mistakenly removed 'CU' instead.)</li> <li>The cosine similarity between the 49-dimensional feature vectors was calculated using the function below. However, 0.001 was added to each feature value to avoid errors with zero feature values.</li> </ul> <pre><code>def cosine_similarity(v1, v2):
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
</code></pre> <ul> <li>For each test sample, the training data were sorted in order of similarity and the top 494 (80%) were used for training. (In the TabPFN calculation, SimpleImputer(median) was used.)</li> <li><a href="https://www.kaggle.com/datasets/carlmcbrideellis/tabpfn-019-whl" target="_blank">TabPFN (0.1.9) whl</a> was used for the model [2]. Epsilon was not used, nor was any post-processing such as p0 [p0 &gt; 0.89]. (The notebooks used for reference are <a href="https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min" target="_blank">this</a> and <a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">this</a>.)</li> </ul> <h2><strong>Details of the submission</strong></h2> <p>When making predictions on the training data, the 494 most similar samples other than the target sample itself were used for training. The predictions are shown in Figure: train['class_1'] - train['Class'].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F9a0339001e1fee3a96029e5a53774c2e%2Fpredictions.png?generation=1691895047137531&amp;alt=media" alt=""><br> The <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/426536" target="_blank">proportion plot</a> is shown below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fdd949e45980b86b39dca8e5219d19f45%2Fproportion.png?generation=1691895185645021&amp;alt=media" alt=""></p> <p>Because the data set was small, I felt that cross-validation did not work well and neglected local validation. In addition, the two figures above seemed to have room for improvement, so I implemented an ensemble that relied on the public LB score. These are the points I reflected on.<br> On the other hand, the cosine similarity method itself was later found to perform well. In this method, only data with high similarity to others are used for training, and high generalization performance can be expected by choosing an appropriate selection rate.<br> The example above shows an 80% selection rate, but comparable results were obtained even with a 75% selection rate.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fefcab20d6fb3d0acb9b05edf95d54c39%2Fcomparison.png?generation=1692819217934261&amp;alt=media" alt=""></p> <h2><strong>Sources</strong></h2> <p>[1] H. Sakiyama, R. Motoki, T. Okuno, J.-Q. Liu, Improvement of Blood-Brain Barrier Permeability Prediction Using Cosine Similarity, J. Comput. Chem., Jpn. in press.<br> [2] N. Hollmann, S. Müller, K. Eggensperger, F. Hutter, TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second, <a href="https://arxiv.org/abs/2207.01848" target="_blank">arXiv:2207.01848</a>.</p> <p>Thank you for your message, I appreciate it. 
The ensemble I submitted consists of three models, all using TabPFN, but only one of them uses the Cosine Similarity method I introduced. However, the other two models used the Cosine Similarity Split. In this splitting method, the cosine similarities between all pairs of training samples are calculated, summed per sample, and sorted (see figure). The data was split 8:1:1, corresponding to 80% common data, 10% rather rare data and 10% unusual data. Only the 80% common data were used for training, and the others were used for validation.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F3ac5d3dd1c099e3dcfd7eefbf072262a%2Fsim_split.png?generation=1691903282315318&amp;alt=media" alt=""></p> <p>I made my code public <a href="https://www.kaggle.com/code/hiroshisakiyama/icr-80-sim-tabpfn-privatelb-0-35?scriptVersionId=134986804" target="_blank">here</a>.</p> <p>The similarity was used only for choosing the similar data, and not used as a feature in training. <br> I apologize that my notebook is filled with junk code, because I didn't have enough time.<br> The cosine similarity method was originally developed for blood-brain barrier permeability (bbbp) prediction [H. Sakiyama et al., in press], and that code was reused in the notebook, so there is a lot of unnecessary code left over. <br> Any further questions will be answered.</p>
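<p>A condensed sketch of the per-test-sample loop described above; TabPFN usage follows its public scikit-style API, and for brevity the fillna(-1) matrix is reused for fitting, whereas the write-up used median imputation for the model input:</p>
<pre><code>import numpy as np
from tabpfn import TabPFNClassifier

X_tr = train[features].fillna(-1).values + 0.001
y_tr = train['Class'].values
X_te = test[features].fillna(-1).values + 0.001
k = int(0.8 * len(X_tr))  # top 494 of 617 at an 80% selection rate

preds = []
for x in X_te:
    sims = X_tr @ x / (np.linalg.norm(X_tr, axis=1) * np.linalg.norm(x))
    top = np.argsort(-sims)[:k]          # most similar training rows
    clf = TabPFNClassifier(device='cpu')
    clf.fit(X_tr[top], y_tr[top])
    preds.append(clf.predict_proba(x.reshape(1, -1))[0, 1])
</code></pre>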
ICR - Identifying Age-Related Conditions
17th place!🏆🏆
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>At the beginning of the competition, I immediately set two constraints: no greeks and no complex ensembles. My solution is feature generation by the multiplication method plus an SVC model<br> <a href="https://www.kaggle.com/code/markdjadchenko/public-krni-pdi-with-pseudo-labelling" target="_blank">https://www.kaggle.com/code/markdjadchenko/public-krni-pdi-with-pseudo-labelling</a></p> <p>I just went through all the possible feature options: if the accuracy increased, I went further, and if not, I left that option out, and so on until the accuracy started to deteriorate (a sketch is given below).</p> <p>Private = 0.36 and Public = 0.46, CV = 0.1599</p>
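<p>A rough sketch of that greedy search, assuming pairwise products as the "multiplication method" and 5-fold SVC accuracy as the acceptance criterion (all names are illustrative):</p>
<pre><code>from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cv_score(X, y):
    return cross_val_score(SVC(), X, y, cv=5).mean()

kept = list(features)
best = cv_score(train[kept], y)
for a, b in combinations(features, 2):
    name = f'{a}_x_{b}'
    train[name] = train[a] * train[b]
    score = cv_score(train[kept + [name]], y)
    if score &gt; best:
        best = score
        kept.append(name)                      # keep the product feature
    else:
        train.drop(columns=name, inplace=True)  # discard it
</code></pre>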
ICR - Identifying Age-Related Conditions
182nd Place Solution for the "ICR - Identifying Age-Related Conditions" Competition (plus the better one)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My first silver medal, thank you everyone at Kaggle. I will try to explain my model that won the medal (which is actually my 4th best model) and 3 other models which scored better on the Private Leaderboard.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Model</h1> <h2>1. Submitted Model (my 4th best, Private Score 0.3974, Public Score 0.22606)</h2> <p><strong>1.1. Model</strong><br> One CatBoostClassifier model predicting "Class" as target</p> <p><strong>1.2. Features</strong><br> Selected features from the train.csv file based on an (absolute) correlation to Alpha higher than 0.1 </p> <p><strong>1.3. Categorical Features</strong><br> EJ</p> <p><strong>1.4. Validation</strong><br> KFold 5 split</p> <p><strong>1.5. Training</strong><br> CatBoostClassifier multiclass training on a dataset with Alpha as the target, weighted by each of the 4 classes (Alpha, Beta, Gamma, Delta).<br> Validation on a dataset with Class as the target, weighted by each of the 2 classes.<br> The weight function is copied and modified from <a href="https://www.kaggle.com/maverickss26" target="_blank">@maverickss26</a> notebook: <a href="https://www.kaggle.com/code/maverickss26/icr-age-related-conditions-eda-modelling" target="_blank">https://www.kaggle.com/code/maverickss26/icr-age-related-conditions-eda-modelling</a></p> <pre><code>import numpy as np
from catboost import Pool

X = train[features]
y = train['Class']
y_ros = train['Alpha']

def calc_log_loss_weight(y_true):
    nc = np.bincount(y_true)
    w0, w1 = 1/(nc[0]/y_true.shape[0]), 1/(nc[1]/y_true.shape[0])
    return w0, w1

def calc_log_loss_weight_meta(y_true):
    # class counts taken in value_counts() order
    nc0 = y_true.value_counts().iloc[0]
    nc1 = y_true.value_counts().iloc[1]
    nc2 = y_true.value_counts().iloc[2]
    nc3 = y_true.value_counts().iloc[3]
    w0 = 1/(nc0/y_true.shape[0])
    w1 = 1/(nc1/y_true.shape[0])
    w2 = 1/(nc2/y_true.shape[0])
    w3 = 1/(nc3/y_true.shape[0])
    return w0, w1, w2, w3

class_1 = y_ros.value_counts().index[0]
class_2 = y_ros.value_counts().index[1]
class_3 = y_ros.value_counts().index[2]
class_4 = y_ros.value_counts().index[3]

train_w0, train_w1, train_w2, train_w3 = calc_log_loss_weight_meta(y_ros)
valid_w0, valid_w1 = calc_log_loss_weight(y)

train_dataset = Pool(data=X_train, label=y_train,
                     weight=y_train.map({class_1: train_w0, class_2: train_w1,
                                         class_3: train_w2, class_4: train_w3}),
                     cat_features=cat_features)
eval_dataset = Pool(data=X_val, label=y_val,
                    weight=y_val.map({0: valid_w0, 1: valid_w1}),
                    cat_features=cat_features)
</code></pre> <p><strong>1.6. Prediction</strong><br> Predict probability for target "Class"</p> <h2>2. 
My 3rd best Model (Private Score 0.39569, Public Score 0.38332)</h2> <p>Inspired by <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> notebook <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings" target="_blank">https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings</a>, which is also my model with the closest Private and Public scores</p> <p><strong>2.1. Model</strong> <br> Four CatBoostClassifier models, one for each of the 4 Alpha classes (4 binary classification models for Alpha, Gamma, Beta and Delta)</p> <p><strong>2.2. Features</strong> <br> Selected features from the train.csv file based on an (absolute) correlation to the Alpha class higher than 0.1 for the Delta and Gamma models. <br> Full features from the train.csv file for the Alpha and Beta models</p> <p><strong>2.3. Categorical Features</strong><br> EJ as a categorical feature for the Alpha and Beta models, and no categorical features for the Delta and Gamma models </p> <p><strong>2.4. Upsampling</strong><br> Upsampling data for the Gamma model only</p> <p><strong>2.5. Validation</strong><br> train_test_split with stratification for each of the 4 models</p> <p><strong>2.6. Prediction</strong><br> Predict the probability for each model's target and use np.argmax to find the largest probability; if the largest probability is from model A then the Class 0 probability is taken from model A, else take the largest probability as Class 1:</p> <pre><code>import numpy as np

# (probability column names below are reconstructed placeholders)
subs_All_df['pred_A'] = cat_model_A.predict_proba(subs_All_df[features_A])[:, 1]
subs_All_df['pred_B'] = cat_model_B.predict_proba(subs_All_df[features_B])[:, 1]
subs_All_df['pred_D'] = cat_model_D.predict_proba(subs_All_df[features_D])[:, 1]
subs_All_df['pred_G'] = cat_model_G.predict_proba(subs_All_df[features_G])[:, 1]

PRED_TARGET = ['pred_A', 'pred_B', 'pred_D', 'pred_G']
pred_list = list(np.argmax(np.asarray(subs_All_df[PRED_TARGET]), axis=1))
for i, v in enumerate(pred_list):
    if v == 0:
        # largest probability comes from model A (no condition) -&gt; Class 0
        subs_df.loc[i, 'class_0'] = subs_All_df.loc[i, PRED_TARGET[v]]
        subs_df.loc[i, 'class_1'] = 1 - subs_All_df.loc[i, PRED_TARGET[v]]
    else:
        # largest probability comes from B/D/G (condition) -&gt; Class 1
        subs_df.loc[i, 'class_1'] = subs_All_df.loc[i, PRED_TARGET[v]]
        subs_df.loc[i, 'class_0'] = 1 - subs_All_df.loc[i, PRED_TARGET[v]]
</code></pre> <h2>3. My 2nd best Model (Private Score 0.37928, Public Score 0.2409)</h2> <p><strong>3.1. Model</strong><br> 1 CatBoostClassifier model predicting "Class" as target</p> <p><strong>3.2. Features</strong><br> All features from the train.csv file</p> <p><strong>3.3. Categorical Features</strong><br> EJ</p> <p><strong>3.4. Validation</strong><br> KFold 5 split</p> <p><strong>3.5. Training and Prediction (same as the 4th best model above):</strong> <br> CatBoostClassifier multiclass training on a dataset with Alpha as the target, weighted by each of the 4 classes (Alpha, Beta, Gamma, Delta)<br> Validation on a dataset with Class as the target, weighted by each of the 2 classes</p> <h2>4. My 1st best Model (Private Score 0.36871, Public Score 0.22484)</h2> <p>Same as the 2nd best model with different hyperparameter tuning</p> <h1>Things that did not work / did not improve my model:</h1> <ul> <li>Stacking prediction: predict Beta, Gamma, Delta then use them to predict Alpha</li> <li>StratifiedKFold compared to KFold</li> <li>Scaling using MinMaxScaling</li> <li>Upsampling using RandomSampling for all classes: Alpha, Beta, Gamma, Delta</li> <li>Adding Epsilon as a feature</li> </ul>
ICR - Identifying Age-Related Conditions
188th place solution for ICR - Identifying Age-Related Conditions competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. I think that the mindset actually is really important in every competition, since then you make the correct decisions. </p> <p>Greetings everyone! The gambling competition has ended and I would like to share my thoughts and the solution. I also want to really thank <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> for the EDA and insights notebooks, because I took ideas from there, such as the epsilon, missing values and data drift over the years.</p> <h1>Context section</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Overview of the Approach</h1> <p>My main approach is a robust ensemble of XGBoost models. I chose features based on their importance, first training H2O AutoML boostings to obtain it. I used the epsilon feature to stratify the data and to train models, deleted samples with a null epsilon, and used only the 2018+ years. I applied undersampling with various random seeds to train diverse models.</p> <h1>Details of the submission</h1> <h3>Important and impactful detail about the submission</h3> <p>I focused on stabilizing the validation score and not overfitting on the public leaderboard. Many models' scores depended too much on the random seed and folds. In my case, my validation strategy was stable; I got low variance in my validation scores for different models (from 0.28 to 0.33).</p> <h3>Feature choice</h3> <p>Trained the H2O AutoML XGBoost and got data about the most important features. I think I could just do the same with plain XGBoost, but this was faster and I didn't want to spend too much time. I think I chose about 25 features in total. </p> <h3>Epsilon feature</h3> <p>I used the epsilon feature, extracting only the year from it. I used only the data from 2018 onward, since it was more relevant. I also dropped all samples with a nan epsilon, since Raddar found that they all have the same class. That is, the model would just learn to guess them based on the epsilon; moreover, I thought that these samples might not be correct at all. So I did this:<br> <code>df = df[(~df["Epsilon"].isna()) &amp; (df["Epsilon"] &gt;= 2018)]</code></p> <h3>Validation strategy and undersampling technique</h3> <p>When I tested the models, I split the data into 85%/15% using a stratification <code>stratify=df[["Class", "Epsilon"]]</code>. Then I trained 30 XGBoosts on that data, splitting it again using the same stratification and undersampling, so there would be the same number of each class in the new train and validation sets. For each of these 30 models I used a different random_state to split the data (so each one trained on different data). Then I changed the random seed again, split the data again and trained another 30 models. 
I did this for 20 random seeds and stopped, since beyond that point adding more small ensembles did not improve the performance (a compact sketch is given at the end of this post).<br> The really good thing was that the models actually were robust: on almost every validation set, each 30-model ensemble (the first one) scored about 0.28-0.31. When I tried the same strategy without stratification on the epsilon, my score depended on the seed; that is, scores ranged from 0.1 to 0.4, which was really sad. </p> <p>Finally, when I was confident in my training process, I stopped using the first validation set in that pipeline and just retrained the same ensemble on the whole data. But of course I was still undersampling the data with different seeds when training each individual model. I didn't change hyperparameters, since tuning made the score worse; I only set <code>["max_depth"] = 5</code> and <code>["eta"] = 0.285</code>.</p> <h3>Another finding about the epsilon impact on the leaderboard score</h3> <p>When I submitted, I tried using epsilon=2020 or 2021 for the whole lb data, but for me it didn't actually make a difference. </p> <h3>What didn't work in my case</h3> <p>My other submission was a TabPFN, trained using the same process. It scored 0.38 on the public, so I thought that it might be fitted to the private lb; I was excited and wanted to gamble with it. I was wrong, <strong>it was just bad.</strong></p> <h3>My other thoughts and the mindset</h3> <p>When I saw the data and the description of this competition, I understood that it would be hard to win here: the probability of overfitting is very high. I tried to minimize the time I spent on this competition and "maximize the earnings". So I spent maybe 6-10 hours in total and stopped myself. I thought that there are 3 main types of submissions that would be able to win something:</p> <ol> <li>A robust submission of some gradient boostings. It scores ok on the public lb, and it will score ok on the private lb. &lt;- <strong>My case</strong></li> <li>A submission overfitted to the private lb. It should score badly on the public lb, but really well on the private lb. I suppose it's pure luck here; it's hard to create such a model. </li> <li>Some simple model (maybe KNN, logistic regression) that will not overfit and will show nice results after the shake-up. </li> </ol> <p>So I focused on trying to create such solutions. In my experience with similar data, oversampling the data and scale_pos_weight have never worked, so I focused only on the undersampling technique. </p> <h3>Competition metric</h3> <p>Another important thing here is the competition metric. It is a balanced log loss, and the main thing that you should remember about it is that it punishes you really badly when you confidently mispredict EVEN 1 SAMPLE. That's why I decided not to do any post-processing of predicted values; I understood that I would just make the score even worse. When I submitted, I was also somewhere around 3500th place on the lb. However, I thought that it was a good sign of not overfitting and stayed calm. Didn't take unnecessary actions.</p> <h3>Sources</h3> <ul> <li><a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings" target="_blank">https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings</a></li> <li><a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/420342" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/420342</a></li> </ul>
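<p>A compact sketch of the training scheme above (30 undersampled models per seed, 20 seeds); only <code>max_depth</code> and <code>eta</code> follow the write-up, while <code>df</code>, <code>features</code>, and the seed arithmetic are assumptions:</p>
<pre><code>import pandas as pd
import xgboost as xgb

models = []
for seed in range(20):
    for i in range(30):
        pos = df[df['Class'] == 1]
        neg = df[df['Class'] == 0].sample(len(pos), random_state=seed * 100 + i)
        sub = pd.concat([pos, neg])  # 1:1 undersampled training set
        model = xgb.XGBClassifier(max_depth=5, learning_rate=0.285)
        model.fit(sub[features], sub['Class'])
        models.append(model)

# prediction: plain average of predicted probabilities over all models
</code></pre>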
ICR - Identifying Age-Related Conditions
19th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, I simply averaged the probabilities given by each model.<br> Thanks <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> </p> <p>Hey Kagglers,<br> I guess mine is one of the simplest approaches for this competition.</p> <p><br></p> <p><strong>Solution:</strong> <br> <a href="https://github.com/SunilGolden/Kaggle-ICR" target="_blank">https://github.com/SunilGolden/Kaggle-ICR</a></p> <p><br></p> <p><strong>Approach</strong></p> <ul> <li>Undersampled the training data</li> <li>Imputed null values with zero</li> <li>Encoded the categorical column using an Ordinal Encoder</li> <li>Scaled the other columns using a Min-Max Scaler</li> <li>Used k-fold cross validation to evaluate TabPFN, XGBoost, CatBoost, HGBoost, Light GBM, Random Forest, AdaBoost, GBM, SVM models and a few versions of their ensembles with balanced log loss</li> <li>Finally, I trained XGBoost, CatBoost, HGBoost, Light GBM, Random Forest, and GBM models and then ensembled them (a sketch follows below).</li> </ul> <p><br></p>
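<p>A minimal sketch of that recipe with a trimmed model list; the frames <code>X</code>, <code>X_under</code>/<code>y_under</code> (the undersampled train), <code>X_test</code>, and <code>num_cols</code> are assumed names:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import OrdinalEncoder, MinMaxScaler
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

X['EJ'] = OrdinalEncoder().fit_transform(X[['EJ']]).ravel()
X = X.fillna(0)                                   # impute nulls with zero
X[num_cols] = MinMaxScaler().fit_transform(X[num_cols])

models = [XGBClassifier(), LGBMClassifier(), CatBoostClassifier(verbose=False)]
probs = np.mean([m.fit(X_under, y_under).predict_proba(X_test)[:, 1]
                 for m in models], axis=0)        # simple probability average
</code></pre>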
ICR - Identifying Age-Related Conditions
28th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>To begin, I'd like to express my gratitude to Kaggle for hosting this competition, and extend my heartfelt congratulations to the winners!</p> <p>Here is the main takeaway from my solution. After reviewing forum discussions on the dangers of post-processing and on the data distribution, and after conducting several experiments, I realized that I should emphasize preventing overfitting to the small dataset rather than complicating my model. I allocated a significant amount of time to feature engineering while keeping the model building simple.</p> <p>I used stratified cross-validation and built an XGBoost model with parameters obtained through RandomizedSearchCV. A number of standard feature engineering methods were used. </p> <p>The public score for my submission was 0.18.</p>
ICR - Identifying Age-Related Conditions
30th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>On the competition's closing day, I was surprised to see a message from my friend and, upon checking the leaderboard, I found that I had come in 30th place.</p> <p>After submitting a few times early in the competition, I participated in HuBMAP, so I didn't expect to win a medal.</p> <h3><strong>Context</strong></h3> <ul> <li>Business context:&nbsp;<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context:&nbsp;<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2>Solution</h2> <p>I haven't done anything special.</p> <p>I used this excellent notebook: <a href="https://www.kaggle.com/code/datafan07/icr-simple-eda-baseline" target="_blank">https://www.kaggle.com/code/datafan07/icr-simple-eda-baseline</a> and I am grateful to <a href="https://www.kaggle.com/datafan07" target="_blank">https://www.kaggle.com/datafan07</a>.</p> <p>The private score of this notebook was 0.37816, which is equivalent to a silver medal-worthy score.</p> <p>I only made a modification in the stratified k-fold section of this notebook to perform a split that also considered EJ.</p> <p>After reading this discussion (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/411632" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/411632</a>), I realized that gender should also be considered during fold splitting.</p> <pre><code># The merge key and the exact column slice are assumptions; the original
# string literals were not preserved. skf is assumed to be a stratifier
# that accepts multi-column labels (e.g. MultilabelStratifiedKFold).
greeks = pd.merge(train[["Id", "EJ"]], greeks, on="Id")
for train_index, val_index in skf.split(train, greeks.iloc[:, 1:-1]):
    ...
</code></pre> <p>Due to this modification, the private score improved from 0.37816 to 0.37426.</p> <p>Thank you!</p>
ICR - Identifying Age-Related Conditions
322nd place solution for the "ICR - Identifying Age-Related Conditions" competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Really didn't think I would make it into the Top 6% with such a simple solution. </p> <h1>Context section</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h1>Overview of the Approach</h1> <p>My main concern was that the dataset was imbalanced, so the final model might be biased towards the label with more instances. My main focus was therefore to keep an eye on recall and to use ensemble models to get good results.</p> <h1>Details of the submission</h1> <h2>Data Preprocessing</h2> <p>I did basic data preprocessing: standardization for numerical features and label encoding for categorical features. I did not try to impute the null values or drop the outliers, since I had no information about the features, as they were anonymized.</p> <ul> <li>Used label encoding for categorical features</li> <li>Used standardization for numerical features</li> <li>Dropped NaN values and kept outliers</li> </ul> <h2>Model Selection</h2> <p>I wanted to go with an ensemble model over 5 folds, so I selected the XGBoost model, as it has a large number of hyperparameters to optimize, which gave me much better chances of getting good results.</p> <ul> <li>Used XGBoost</li> <li>Used 5 folds of data</li> </ul> <h2>Hyperparameter Optimization</h2> <p>I decided to go for Bayesian optimization using Optuna, since I had a large number of parameters to search through. My hyperparameter search space was as follows:</p> <pre><code># The parameter names below are assumed (typical XGBoost hyperparameters);
# only the search ranges survive from the original post.
params = {
    "n_estimators": trial.suggest_int("n_estimators", 50, 300),
    "max_depth": trial.suggest_int("max_depth", 2, 25),
    "learning_rate": trial.suggest_float("learning_rate", 0.001, 0.5, log=True),
    "colsample_bytree": trial.suggest_float("colsample_bytree", 0.1, 1, log=True),
    "subsample": trial.suggest_float("subsample", 0.5, 1, log=True),
    "gamma": trial.suggest_float("gamma", 0.001, 1, log=True),
    "min_child_weight": trial.suggest_float("min_child_weight", 0.001, 10, log=True),
    "reg_alpha": trial.suggest_float("reg_alpha", 0.001, 10.0, log=True),
    "reg_lambda": trial.suggest_float("reg_lambda", 0.001, 10.0, log=True),
    "scale_pos_weight": trial.suggest_float("scale_pos_weight", 0.001, 10, log=True),
}
</code></pre> <ul> <li>Used Optuna optimization</li> </ul> <h2>Final Results</h2> <p>After getting the best parameters, I trained 5 different models using the 5 folds of data. I selected the best model from those folds on the basis of F1-score and used it to generate the predictions.</p> <ul> <li>Selected the best of the 5 fold models on the basis of F1-score</li> </ul> <h2>What didn't work</h2> <p>I tried to use balanced weights for the different models, as the dataset was imbalanced, but I didn't get good results with that.</p> <h1>Sources</h1> <p>I learned about class weights from <a href="https://www.kaggle.com/code/gusthema/identifying-age-related-conditions-w-tfdf" target="_blank">gusthema's notebook</a>.</p>
ICR - Identifying Age-Related Conditions
3rd Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I was really surprised to achieve this result. In fact, after joining the competition and implementing a simple baseline, I rarely paid attention to it again, because I was usually working overtime at my company😅.<br> The final submission was that baseline based on the CatBoost model, and its public score was 0.21.<br> Given the anonymized data features and the medical context, my initial idea was to construct new features from the ratios between different features, just as some indicators in a medical examination report are computed as ratios of other indicators.<br> Before that, I planned to filter some anonymized features via correlation, so as not to construct too many invalid features. However, I didn't pursue this further, and the final code simply crosses all features.<br> The two most effective elements in this competition were:<br> 1. the cross calculation of features (a small sketch of ratio-feature crossing appears at the end of this post);<br> 2. the CatBoost model.<br> My LightGBM model with the same features got 0.22 on the public score and 0.38 on the private score.<br> Here's my solution:<br> <a href="https://www.kaggle.com/code/junyang680/icr-lightgbmbaseline" target="_blank">https://www.kaggle.com/code/junyang680/icr-lightgbmbaseline</a><br> The parameters of the LightGBM and CatBoost models were based on the contents of some public notebooks, but I'm sorry, I've forgotten which ones.</p> <p>I wonder whether the shake-up in this competition was so severe because everyone did too many operations on the small dataset and overfit, or because people submitted post-processing operations fitted to the public LB that did not carry over to the private LB.</p>
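<p>A minimal sketch of the ratio-based feature crossing described above, assuming a pandas DataFrame of numeric columns; the helper name and the small epsilon guard are illustrative, not the author's code:</p> <pre><code>import itertools
import pandas as pd

def add_ratio_features(df: pd.DataFrame, cols: list) -&gt; pd.DataFrame:
    """Pairwise ratios between numeric features, mimicking how medical
    indicators are often ratios of raw measurements."""
    out = df.copy()
    for a, b in itertools.combinations(cols, 2):
        out[f"{a}_div_{b}"] = df[a] / (df[b] + 1e-8)  # epsilon guard is an assumption
    return out
</code></pre>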
ICR - Identifying Age-Related Conditions
40th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm really surprised by my position on the private leaderboard, having gained over 3283 positions :D</p> <p><strong>Context section</strong></p> <ul> <li><p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></p></li> <li><p>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p></li> </ul> <p><strong>Overview of the approach</strong></p> <p>The difficulty of this challenge arose from the very limited number of available rows. <br> This also posed significant challenges in defining a suitable cross-validation scheme.</p> <p>As a cross-validation strategy, a stratified scheme was chosen based on the Alpha column.</p> <p>The main component of my solution was a contrastive learning approach. The reason for opting for contrastive learning was that the scarcity of available observations could be compensated by a large number of simulated pair rows.</p> <p>A substantial portion of the approach is derived from SetFit (source link at the end).</p> <p>The model was an LGBM which, starting from the absolute differences of all features, aimed to classify whether two observations belonged to the same class or not. In addition to the initial features on which the absolute difference was computed, the following additional features were added:</p> <ul> <li>number_zero: % of features with zero difference</li> <li>mean_diff: mean of the absolute difference</li> <li>std_diff: standard deviation of the absolute difference</li> <li>median_diff: median of the absolute difference</li> <li>diff_mean: absolute difference between the means of all initial values</li> <li>diff_std: absolute difference between the standard deviations of all initial values</li> <li>diff_median: absolute difference between the medians of all initial values</li> </ul> <p><strong>Details of the submission</strong></p> <p>For training the model, pairs were sampled in a 1:5 ratio (class 0 vs. 1) to ensure a balanced dataset. For each observation, a certain number of random examples from the same class and from different classes were selected. Any duplicate combinations were removed.</p> <p>The metric used to determine the appropriate number of rounds was AUC.</p> <p>During the inference phase, the following post-processing steps were performed:</p> <ul> <li>Predict the probability that a new observation belongs to class 0 (by comparing it with all class 0 observations).</li> <li>Calculate the probability that it belongs to class 1 (by comparing it with all class 1 observations).</li> </ul> <p>With prob_0 and prob_1 (each calculated as the mean of the corresponding pairwise probabilities), calculate prob = prob_1 / (prob_0 + prob_1). 
This functions as a kind of ensemble.</p> <p>This way, for each individual observation to be predicted, 617 different predictions need to be made.</p> <p>The function that pairs each new observation with the training observations is the following (a sketch of the inference-time aggregation appears at the end of this post):</p> <pre><code>def get_retrieval_dataset(
    test: pd.DataFrame,
    target_example: pd.DataFrame,
    feature_list: list,
) -&gt; pd.DataFrame:
    test_shape = test.shape[0]
    target_example_shape = target_example.shape[0]
    test_x = test[feature_list].to_numpy()

    # stack one copy of the reference examples for every test row
    target_example = np.concatenate(
        [target_example for _ in range(test_shape)], axis=0
    )
    test_x = np.repeat(test_x, target_example_shape, axis=0)
    index_test = np.repeat(test.index.values, target_example_shape, axis=0)

    # fe_pipeline computes the absolute differences and the aggregate
    # features listed above; its keyword names are assumed here
    retrieval_dataset = fe_pipeline(
        target_example=target_example,
        test_x=test_x,
        feature_list=feature_list,
    )
    retrieval_dataset["index"] = index_test
    return retrieval_dataset
</code></pre> <p>What didn't work:</p> <ul> <li>Metric learning using neural networks, both DNN and TabNet, did not work regardless of the chosen metric (cosine similarity, MSE, contrastive loss, etc.). Unfortunately, I couldn't achieve better CV results than with the LGBM, likely because the extremely limited number of available observations played a key role.</li> <li>Contrastive learning on the Alpha column</li> <li>Using a weight for each training observation</li> </ul> <p><strong>Sources</strong><br> <a href="https://www.kaggle.com/code/stenford23/icr-inference-contrastive-retriever/notebook?scriptVersionId=130640859" target="_blank">Inference Notebook</a></p> <p><a href="https://huggingface.co/blog/setfit" target="_blank">SetFit</a></p>
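<p>A hedged sketch of the inference-time aggregation described above: <code>model</code>, the pair-feature frames, and the column list are assumptions about the surrounding pipeline, not the author's exact code.</p> <pre><code>def contrastive_predict(model, pairs_vs_class0, pairs_vs_class1, feature_cols):
    """prob_k = mean predicted 'same class' probability of the test row
    paired against every class-k training row; then normalize as described."""
    prob_0 = model.predict_proba(pairs_vs_class0[feature_cols])[:, 1].mean()
    prob_1 = model.predict_proba(pairs_vs_class1[feature_cols])[:, 1].mean()
    return prob_1 / (prob_0 + prob_1)
</code></pre>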
ICR - Identifying Age-Related Conditions
436th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a Bronze solution using CatBoost with zero feature engineering, default hyperparameters, and essentially 3 lines of code. "greeks.csv" is not used.</p> <pre><code>X_train, X_val, y_train, y_val = train_test_split(
    train_X_raw, train_y, test_size=0.2,   # split ratio assumed; value not shown
    stratify=train_y, random_state=0,      # seed assumed
)
clf = CatBoostClassifier(cat_features=[CAT_COL], auto_class_weights="Balanced")
clf.fit(X_train, y_train)
</code></pre> <p><strong>Context</strong></p> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <p><strong>Overview of the Approach</strong></p> <ol> <li>Evaluation metric and loss function. To use balanced logarithmic loss in mainstream classifiers, a simple way is to set class_weight (it may appear under a slightly different name) to "balanced". One can show that the balanced log loss is mathematically equivalent to a regular log loss in which each sample is weighted inversely to its class frequency (a short derivation appears at the end of this post).</li> <li>Preprocessing. Fill NA with -1.</li> <li>Validation strategy. A stratified train-test split using the labels. </li> </ol> <p><strong>Details of the submission</strong><br> Nothing special here.</p> <p><strong>Sources</strong><br> None.</p>
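<p>A short derivation of the equivalence claimed above, written in the usual notation for this competition's metric (standard algebra, not taken from the original post):</p> <pre><code>\text{Balanced log loss:}\quad
L = -\frac{1}{2}\left(
      \frac{1}{N_0}\sum_{i:\,y_i=0}\log(1-p_i)
    + \frac{1}{N_1}\sum_{i:\,y_i=1}\log p_i \right).

\text{Weighted log loss with } w_i = 1/N_{y_i}:\quad
L_w = -\frac{1}{\sum_i w_i}\sum_i w_i\bigl(y_i\log p_i + (1-y_i)\log(1-p_i)\bigr).

\text{Since } \sum_i w_i = N_0\cdot\tfrac{1}{N_0} + N_1\cdot\tfrac{1}{N_1} = 2,
\text{ it follows that } L_w = L.
</code></pre> <p>Setting <code>class_weight="balanced"</code> assigns weights proportional to the inverse class frequency, and since the normalized weighted loss is invariant to rescaling the weights, optimizing it optimizes exactly the balanced log loss.</p>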
ICR - Identifying Age-Related Conditions
4978th Place Solution - everything was known from the beginning🤣
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Success in CV</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fe11dcd5434589a9376a4a960be4766cb%2F1.png?generation=1691715420918168&amp;alt=media" alt=""></p> <h1>Success in Public LB (52nd)</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fe11dcd5434589a9376a4a960be4766cb%2F1.png?generation=1691715420918168&amp;alt=media" alt=""></p> <h1>Failure in Private LB (4978th)</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fbed2a8d069cdc8ce49610455a83eb60c%2F2.png?generation=1691716135808111&amp;alt=media" alt=""><br> Everyone, see you all in the next competition😉</p>
ICR - Identifying Age-Related Conditions
4th Place Solution for the "ICR - Identifying Age-Related Conditions"
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm pleasantly surprised to see such a strong shake-up on the leaderboard, which carried me to the very top. No doubt, when I sent my 3rd attempt 2 months ago (0.16 on public), I certainly did not suspect that it would end up in 4th place on private (0.34). Since then, I watched the public scores decrease rapidly and realized that those solutions were heavily overfitted, which is something I've tried to avoid in every solution I've made. </p> <p>Now for the key features of my solution.</p> <p>1) Recursive filling of gaps in features using regression with CatBoostRegressor (default hyperparameters); a sketch appears at the end of this post,<br> 2) greeks['Epsilon'] Unknown values were filled with greeks['Epsilon'].min(),<br> 3) row_id - row number in train and in test when sorting by Epsilon,<br> 4) Created features by training a CatBoostClassifier for each value in 'Alpha', 'Beta', 'Gamma', 'Delta' - the probabilities of the corresponding values of these categories, similarly to <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430907" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430907</a>. To avoid overfitting the test predictions, I used 5-fold cross-validation and then simple averaging,<br> 5) Final model - CatBoostClassifier without any hyperparameter tuning or feature elimination.</p> <p>In subsequent attempts I tried to expand the feature space by inventing various new features and more elaborate gap filling; for example, I tried to predict epsilon and row_id, but I did not get any improvement on cross-validation. Moreover, the results became more unstable, so I concluded that these complications only lead to overfitting and stopped these attempts.</p> <p>My solution is <a href="https://www.kaggle.com/code/andrejvetrov/third?scriptVersionId=131958512" target="_blank">https://www.kaggle.com/code/andrejvetrov/third?scriptVersionId=131958512</a></p> <p>Thank you very much Arvind (Yetirajan) Narayanan Iyengar</p> <p>Thank you so much Murugesan Narayanaswamy</p> <p>Yes, you are right. I added this simple feature intuitively, for the reason that although the dataset is small, it has a time component that must somehow be taken into account. According to the CatBoost feature importances, the feature is in 4th position. I was getting 0.2 (0.37) without it, compared to 0.16 (0.34) with it. Unfortunately, some other features I tried to extract from epsilon didn't give any gain.</p>
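<p>A minimal sketch of the recursive gap filling with CatBoostRegressor mentioned in point 1; the column ordering, the temporary <code>-1</code> fill, and the function name are assumptions:</p> <pre><code>import pandas as pd
from catboost import CatBoostRegressor

def recursive_impute(df: pd.DataFrame, num_cols: list) -&gt; pd.DataFrame:
    """Regress each gappy column on the remaining features and fill its gaps,
    starting from the columns with the fewest missing values."""
    df = df.copy()
    for col in sorted(num_cols, key=lambda c: df[c].isna().sum()):
        mask = df[col].isna()
        if not mask.any():
            continue
        feats = [c for c in num_cols if c != col]
        known = df[~mask]
        model = CatBoostRegressor(verbose=0)  # default hyperparameters, as stated
        model.fit(known[feats].fillna(-1), known[col])  # temporary fill is assumed
        df.loc[mask, col] = model.predict(df.loc[mask, feats].fillna(-1))
    return df
</code></pre>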
ICR - Identifying Age-Related Conditions
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've just tried it without stacking the greeks, and the private score was 0.4. Stacking definitely helped.</p> <p>Knowing the inevitable shake-up, I submitted a couple of basic solutions months ago and forgot about the competition. I was surprised when my friends congratulated me in the morning.</p> <p><a href="https://www.kaggle.com/code/celiker/icr-5-place-solution/notebook" target="_blank">Here's the code</a></p> <p>Main points:</p> <ol> <li>Trained models for each of Alpha, Beta, Gamma, Delta and stacked these probabilities to be used as features (a sketch of this stacking appears at the end of this post).</li> <li>Created LGBM imputer models for every feature, even those with no missing values in the train data.</li> <li>Used RepeatedStratifiedKFold(n_splits=5, n_repeats=5) with a basic CatBoost model.</li> </ol> <p>Note: removing the imputers didn't affect the score, so the main strength is stacking the greeks.</p> <p>After this solution, I tried brute-force feature engineering and other modeling approaches, but they didn't help on the public LB, so I left this as my final submission.</p>
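<p>A hedged sketch of the greek-stacking idea above: out-of-fold class probabilities for one greek column are produced and appended as features for the final Class model. Function and variable names, the fold seed, and the CatBoost settings are assumptions:</p> <pre><code>import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold

def greek_oof_features(X, greek_target, n_splits=5, seed=0):
    """Out-of-fold probabilities for one greek column (e.g. Alpha).
    Assumes every class appears in each training fold."""
    oof = np.zeros((len(X), greek_target.nunique()))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr, va in skf.split(X, greek_target):
        model = CatBoostClassifier(verbose=0)
        model.fit(X.iloc[tr], greek_target.iloc[tr])
        oof[va] = model.predict_proba(X.iloc[va])
    return oof  # stack these columns onto X for the final Class model
</code></pre>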
ICR - Identifying Age-Related Conditions
520th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello there, kagglers!<br> This is my second competition medal, and it gave me the Kaggle Competitions Expert badge. </p> <h2>Context section</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">Competition overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">Data overview</a></p> <h2>Overview of the Approach</h2> <p>First of all, I usually expect a big lottery in most medical competitions. The problems are typical: big differences in distribution between the train/test or public/private datasets. </p> <p>Secondly, the lack of information about the dataset's columns in this competition meant there was only room for standard EDA and FE. </p> <p>Finally, once a metric score of 0 is reached on the public LB, it means the top competitors will fall heavily. </p> <p>All of this happened in this competition, and as a result +2733 places on the private LB were easily gained. </p> <p>Thus, the bet was placed on basic preprocessing of the kind taught in every DS/MLE course, consisting of:</p> <ul> <li>Missing value handling</li> <li>Correlation check</li> <li>Standardization</li> <li>Simple blending of boosting models </li> </ul> <h2>About data</h2> <p>As mentioned above, the data provided was poorly suited to EDA and FE. As a result, I applied only a few additional standard processing steps to the datasets. </p> <h2>Details of the Submission</h2> <h3>My notes</h3> <p>To tell the truth, I made only one submission to the competition; the second failed with an error. As the base, I took another public notebook (link in Sources).</p> <h3>My ideas</h3> <p>For processing, I added a StandardScaler to the original notebook and checked for correlation.<br> As a result, the BC and DU features were deleted as correlated with BZ and CL respectively.</p> <p>I took a weighted blend of the gradient boosting implementations (CatBoost, XGBoost and LightGBM), following the original notebook. </p> <p>I also reduced the number of training folds from 20 in the original notebook to 10.</p> <p>I rebalanced the blending according to CV scores (better CV = bigger weight; a small sketch of this weighting appears at the end of this post).</p> <p>Starting from this point, I didn't pay much attention to fitting the LB, because I felt a shake-up coming. I bet on a more stable and general solution and was rewarded for it.</p> <p>I was completely right this time…</p> <h3>Code run and inference</h3> <p>The code uses a simple config in the upper cells, then moves on to feature processing and training. Inference is done in the same notebook, since training isn't heavy enough to split out. Anyone can read the code easily, so there's no need to describe it further. </p> <p>Enabling the GPU in the config dramatically improves performance, but at the moment of submission I was out of GPU quota; that's the reason it ran on CPU. </p> <h2>Some statistics</h2> <p>I published this solution as a public kernel, which was up for several months. <br> You can watch it <a href="https://www.kaggle.com/code/dzisandy/icr-first-version/notebook?scriptVersionId=129525404" target="_blank">here</a>. 
It gave me a silver in Notebooks, heh.<br> This solution was seen by more than 2300 people and was taken as a final submission by 8 people (including me, from 520th to 527th place). Same digits after the decimal point.</p> <h2>My feedback</h2> <p>This competition was fairly random in my opinion; the shake-up is proof of that. But luck was on my side. My advice: stay humble and keep climbing the hill. The top is closer than it appears😀</p> <p>P.S. I manage a Telegram channel about DS called <a href="https://t.me/dzis_science" target="_blank">Dzis_Science</a>, follow me for more!</p> <h3>Sources</h3> <ul> <li><p>Originally, the great and <a href="https://www.kaggle.com/code/takaito/icr-iarc-gradient-boosting-ensemble-lb-0-22" target="_blank">simple notebook</a> was taken as the base. </p></li> <li><p>My public kernel is <a href="https://www.kaggle.com/code/dzisandy/icr-first-version/notebook?scriptVersionId=129525404" target="_blank">here</a>.</p></li> </ul> <p><strong>Feel free to ask any questions in comments/DM.</strong></p>
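<p>A minimal sketch of the CV-score-based blend weighting mentioned above (better CV = bigger weight). The inverse-loss weighting rule and all names are assumptions, since the exact scheme wasn't given:</p> <pre><code>import numpy as np

def blend_by_cv(preds: dict, cv_loss: dict) -&gt; np.ndarray:
    """preds: model name to test predictions; cv_loss: model name to CV log loss.
    Lower loss gets a larger weight; weights are normalized to sum to 1."""
    weights = {m: 1.0 / cv_loss[m] for m in preds}
    total = sum(weights.values())
    return sum((weights[m] / total) * preds[m] for m in preds)
</code></pre>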
ICR - Identifying Age-Related Conditions
554th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h1>Overview of the Approach</h1> <p>I like tree-based algorithms like XGBoost and LightGBM; that's why in this competition I built a super easy pipeline by ensembling these two models trained with cross-validation. <br> I didn't use any normalization. I only split the columns into categorical and the rest, because the models need to know which features are categorical.<br> Thanks to the shake-up I got another competition medal.</p> <pre><code># The numeric values in this snippet were stripped from the original post;
# the ones shown are placeholders, not the author's settings.
model_lgbm = lgbm.LGBMClassifier(
    random_state=42,
    class_weight="balanced",
)
model_lgbm = GridSearchCV(
    model_lgbm,
    parameters,  # the search grid itself is not shown in the post
    cv=5,
    scoring=make_scorer(balanced_log_loss, greater_is_better=False, needs_proba=True),
)
</code></pre> <pre><code>model_xgb = xgb.XGBClassifier(
    scale_pos_weight=(len(y_train) - sum(y_train)) / sum(y_train),  # neg/pos ratio
    random_state=42,
    tree_method="hist",          # value assumed
    enable_categorical=True,
    learning_rate=0.05,          # placeholder values below; originals not shown
    max_depth=4,
    colsample_bytree=0.8,
    subsample=0.8,
    gamma=1.0,
    objective="binary:logistic",
)
model_xgb = GridSearchCV(
    model_xgb,
    parameters,
    cv=5,
    scoring=make_scorer(balanced_log_loss, greater_is_better=False, needs_proba=True),
)
</code></pre> <h1>Details of the submission</h1> <p>I submitted the mean prediction value from the two models described above:<br> <code>(y_pred_xgb[:, 0] + y_pred_lgbm[:, 0]) / 2</code></p> <h1>Sources</h1> <ul> <li>LightGBM: <a href="https://lightgbm.readthedocs.io/en/stable/" target="_blank">https://lightgbm.readthedocs.io/en/stable/</a></li> <li>XGBoost: <a href="https://xgboost.readthedocs.io/en/stable/index.html#" target="_blank">https://xgboost.readthedocs.io/en/stable/index.html#</a></li> <li>CV vs LB comparison: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/410474" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/410474</a></li> </ul> <p><a href="https://www.kaggle.com/jakelj" target="_blank">@jakelj</a> in a simple way:</p> <pre><code>tmp = (y_pred_xgb[:, 0] + y_pred_lgbm[:, 0]) / 2
df_sb["class_0"] = tmp      # column names follow the ICR submission format
df_sb["class_1"] = 1 - tmp
</code></pre>
ICR - Identifying Age-Related Conditions
585th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone!</p> <p>I'm excited to share my achievement with you – I've secured a bronze medal with my solution. This marks a significant milestone for me, as it's my first medal earned on Kaggle! 🥉 :DD</p> <p>TL;DR:<br> Solution notebook is <a href="https://www.kaggle.com/lukaszsztukiewicz/bronze-medal-solution-11-cells-clean" target="_blank">here</a> (cleaned and commented, only 11 cells!)</p> <p>I am grateful for all the valuable discussions, notebooks, and insights shared publicly. This journey has been an incredible learning experience, and I want to extend special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> ! I learned a lot from you! I'd also like to acknowledge <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/senkin13" target="_blank">@senkin13</a> , fellow competitors whom I was just next to on the public leaderboard – being among such talented individuals reassured me that my solutions were in good company hahaha ;)) Lastly, thanks to the Kaggle team and ICR for this uniquely challenging but enjoyable competition!</p> <h2>Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h2>Approach</h2> <h3>Data preprocessing</h3> <p><strong>Missing values:</strong> KNN Imputer with k=3<br> <strong>Feature selection:</strong> dropped "EJ" column <br> <strong>Feature scaling:</strong> RobustScaler (25-75 IQR)</p> <h3>Model(s):</h3> <p>LGBM with handpicked hyperparameters (predictions were made inside 5 x 10 fold CV and then averaged) </p> <h3>Cross Validation</h3> <p>Simple stratified 10-fold CV averaged across 5 random seeds (a sketch of this setup appears at the end of this post).</p> <h2>Things that worked (improved my CV):</h2> <p>RobustScaler improves CV a little over StandardScaler and no scaling<br> KNNImputer improves the CV score a little over median/mean imputation<br> Best LGBM regularization: max depth + feature sampling by tree </p> <h2>Things that did not work (for me):</h2> <p><strong>Feature selection:</strong> I analyzed the feature importance of XGB, LGBM, and CatBoost and tried recursive feature elimination with those<br> <strong>OVR classification:</strong> Tried to build models to predict the exact Class instead of the binarized version<br> <strong>Other models:</strong> such as RandomForest, ExtraTrees, SVM, LR, TabPFN</p> <h2>Things I wanted to try but didn't have time for:</h2> <p>Probability calibration<br> Binned epsilon feature </p> <p><strong>Second submission:</strong> Stacking Ensemble: 2x LGBM, TabPFN, CatBoost, and a highly regularized ElasticNet as meta-model (CV 0.19 | Public LB 0.202 | Private LB 0.473)</p> <p>See you in the next competitions!</p>
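<p>A minimal sketch of the preprocessing and 5-seed x 10-fold averaging described above, assuming prepared arrays <code>X</code>, <code>y</code>, <code>X_test</code>; the handpicked LGBM hyperparameters are not shown in the post, so defaults are used here:</p> <pre><code>import numpy as np
from lightgbm import LGBMClassifier
from sklearn.impute import KNNImputer
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import RobustScaler

imputer = KNNImputer(n_neighbors=3)
scaler = RobustScaler(quantile_range=(25.0, 75.0))
X_proc = scaler.fit_transform(imputer.fit_transform(X))
X_test_proc = scaler.transform(imputer.transform(X_test))

test_preds = []
for seed in range(5):                                   # 5 random seeds ...
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for tr, va in skf.split(X_proc, y):                 # ... x 10 folds each
        model = LGBMClassifier(random_state=seed)
        model.fit(X_proc[tr], y[tr])
        test_preds.append(model.predict_proba(X_test_proc)[:, 1])

final = np.mean(test_preds, axis=0)                     # averaged prediction
</code></pre>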
ICR - Identifying Age-Related Conditions
6th Position winner solution for the ICR - Identifying Age-Related Conditions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The issue is that for certain parameters, such as "n_estimators", Bayesian optimization treats them as floats. So I had to round these floats to integers, which may reduce the accuracy of the model. I therefore conducted a new round of optimization using GridSearch for the learning_rate, to try to compensate for this issue (a sketch of the float-to-int handling appears at the end of this post). </p> <p>My solution is divided into 7 big steps:<br> (Code version 10)</p> <ol> <li>Interpolate the missing data using a linear method with the 'interpolate' method from pandas.</li> <li>Use a random forest classifier to find the most important features in the dataset using Gini importance.</li> <li>Use Bayesian optimization to find the optimal parameters of the XGBoost classifier.</li> <li>Repeat step 3 multiple times to gather many sets of optimal parameters for the XGBoost classifier.</li> <li>Make an ensemble of XGBoost classifiers using the optimal parameters.</li> <li>Fine-tune the XGBoost classifiers again using GridSearchCV (because Bayesian optimization is just an estimation of the parameters).</li> <li>Use a voting classifier (the mean of the probabilities of each XGBoost) to classify the test set.</li> </ol> <p>Here is my code (I performed step 2 on my personal computer):<br> <a href="https://www.kaggle.com/code/diegosilvadefrana/notebooke87ef51e7e/notebook" target="_blank">https://www.kaggle.com/code/diegosilvadefrana/notebooke87ef51e7e/notebook</a></p> <p>Since the data was a little imbalanced and it was binary classification, I figured that ROC_AUC was robust for selecting the optimal parameters. Also, the dataset was too small, so any dev set (in the case of using a neural network) would be too biased. </p> <p>Thank you very much for your kind words. Your notebooks are beacons of knowledge for me. 😀</p>
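<p>A hedged sketch of the float-to-int issue in Bayesian optimization described above, using the <code>bayes_opt</code> package with ROC AUC as the objective (as the author mentions); the bounds, data names, and CV setup are assumptions:</p> <pre><code>from bayes_opt import BayesianOptimization
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def xgb_cv(n_estimators, max_depth, learning_rate):
    model = XGBClassifier(
        n_estimators=int(round(n_estimators)),  # the optimizer proposes floats,
        max_depth=int(round(max_depth)),        # so integer params must be rounded
        learning_rate=learning_rate,
    )
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

opt = BayesianOptimization(xgb_cv, {
    "n_estimators": (50, 500),
    "max_depth": (2, 10),
    "learning_rate": (0.01, 0.3),
})
opt.maximize(init_points=5, n_iter=25)
</code></pre> <p>Rounding inside the objective keeps the search space continuous for the optimizer while the model always receives valid integer values; the follow-up GridSearch over learning_rate then refines around the returned optimum.</p>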
ICR - Identifying Age-Related Conditions
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After making a few submissions in the initial part of the competition, I didn't work on it further, as a heavy shake-up was expected.<br> But it's a positive shake-up for me now (with a solo gold) 😅.</p> <p>Here are my submission details:</p> <ul> <li>Filled NaN data with 0</li> <li>5-fold multi-label stratified split using the Greeks values</li> <li>EJ was categorical, so I used label encoding</li> <li>Also label encoding for Beta, Gamma and Delta</li> <li>Used a multiclass CatBoost classifier for all the models</li> </ul> <p>I saw that Beta, Gamma and Delta had very high predictive capability but were given only for the training data, so I used all the other features to predict the encoded Beta, Gamma and Delta with a 5-fold strategy, and used those predictions along with the given features to predict the multiclass Alpha. I then converted the Alpha probabilities to binary probabilities by summing the B, D and G probabilities for class 1 and using the A probability for class 0 (a sketch of this conversion appears at the end of this post).</p> <p>That's it.</p> <p>Thank you</p> <p>PS <a href="https://www.kaggle.com/code/manthanbhagat/simple-baseline-add-greeks-features/notebook" target="_blank">here is my submission code</a></p> <p>Thank you <a href="https://www.kaggle.com/pardeep19singh" target="_blank">@pardeep19singh</a> </p> <p>Yes, that's exactly what I did. It's not about how much more information we give the model; it's about how we represent the information and what the predictive power of each feature is. Since Beta, Gamma and Delta had high predictive power, I used them.</p>
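<p>A minimal sketch of the multiclass-to-binary conversion described above; the data names are assumptions, while the class mapping (B, D, G sum to class 1; A gives class 0) follows the write-up:</p> <pre><code>from catboost import CatBoostClassifier

model = CatBoostClassifier(loss_function="MultiClass", verbose=0)
model.fit(X_train, alpha_labels)          # Alpha in {'A', 'B', 'D', 'G'}

proba = model.predict_proba(X_test)       # columns follow model.classes_
cls = list(model.classes_)
class_1 = (proba[:, cls.index("B")]
           + proba[:, cls.index("D")]
           + proba[:, cls.index("G")])
class_0 = proba[:, cls.index("A")]
</code></pre>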
ICR - Identifying Age-Related Conditions
8th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I ran some submissions after the fact to try out CatBoost in the mix. +0.02 (worse) including it with XGBoost and TabPFN. Still testing out some other combinations.</p> <p>It seems many people tried combinations of XGBoost, TabPFN, LGBM, and CatBoost. I'm kind of keen to know if there's an alternative configuration that would have made a significant difference. </p> <p>Thank you.</p> <p>Frankly, I was somewhat shocked by the result. I had a pretty solid idea that there would be a lot of shake-up, but I certainly wasn't expecting to be in the top 10. </p> <p>The public/private scores for this submission are 0.19/0.34 respectively.</p> <h1>Context:</h1> <p>Business context:<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank"> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Overview of the approach</h1> <p>I focused first on predicting if someone had a specific Age-Related Condition rather than the class. This proved more effective than predicting class alone. Effectively, I create an Ensemble of Ensemble Predictors focused on specific conditions. The models in the primary ensembles were XGBoost &amp; TabPFN. 
</p> <h1>Details of Submission</h1> <h2>Imputing Strategy</h2> <p>My primary imputing strategy utilized XGBoost to predict most of the missing values rather than dropping them or filling with 0, the mean, the mode, etc.</p> <p>Compared to median imputing, this resulted in an improvement of 0.04 in the public score (with no change in the private score). </p> <p>In this submission, there were two fields that contained the majority of the NaN values in the training set. As I couldn't be sure this would hold true in any other set, I created a function to impute these values in a brief loop, dropping additional columns from subsequent impute runs when multiple columns in the target were NaN. </p> <pre><code>def impute_field(df, field, dl):
    # Function and argument names are reconstructed; dl is assumed to be
    # the list of columns to exclude from the imputation model.
    drop_list = set(df.columns).intersection(set(dl))
    imputeDF = df.copy()
    imputeDF.drop(list(drop_list), axis=1, inplace=True)

    impute_X = imputeDF.drop(field, axis=1).dropna()
    impute_y = imputeDF[field]
    imputeDF = impute_X.join(impute_y)

    trainDF = imputeDF[imputeDF[field].notna()].dropna()
    train_X = pd.get_dummies(trainDF.drop(field, axis=1))
    train_y = trainDF[field]

    testDF = imputeDF[imputeDF[field].isna()]
    test_X = testDF.drop(field, axis=1)
    test_y = testDF[field]
    test_X.dropna(inplace=True)
    testDF = test_X.join(test_y)
    test_y = testDF[field]

    if len(test_X) == 0:
        return df  # nothing left to impute for this field

    colsToUse = set(train_X.columns).intersection(set(test_X.columns))
    train_X = train_X[list(colsToUse)]
    test_X = test_X[list(colsToUse)]

    modelOne = XGBRegressor(n_estimators=100).fit(train_X, train_y)  # count assumed
    preds = modelOne.predict(pd.get_dummies(test_X))

    predictedDF = test_X
    predictedDF[field] = preds.astype(float)
    for i, row in predictedDF.iterrows():
        df.loc[i, field] = row[field]

    remainingToSolve = df[df[field].isna()].count()
    print(remainingToSolve)  # progress logging; original messages not preserved
    return df
</code></pre> <h2>Feature Engineering</h2> <p>Aside from the imputing step above and converting categoricals into ints, I did no manipulation of the float data. There are quite a few fields with outliers that could perhaps be addressed, and at least one where the values appear to max out. In this case, I didn't have time to play around.</p> <h2>Ensemble Models Used</h2> <p>XGBoost's Classifier w/o fine tuning (fine tuning overfitted)<br> TabPFN</p> <p>As a metric (for XGBoost) I used balanced log loss, with the binary logistic objective. </p> <p>Each model was fit using cross-validation, scored by balanced log loss, and the best-performing model from each run was chosen. </p> <h2>Greeks &amp; Class</h2> <p>The greeks file contains the Alpha field that differentiates the specific conditions that might result in the Class value being equal to 1. The remainder of the Greeks file was ignored. </p> <h2>Ensemble of Ensemble Strategy</h2> <p>Instead of a single ensemble predicting Class, we train 4. We train ensemble models to predict</p> <ol> <li>Class</li> <li>Alpha "B"</li> <li>Alpha "D"</li> <li>Alpha "G"</li> </ol> <p>For positive predictions, we take the max of the positive predictions for B, D, G &amp; average the result with the positive prediction of Class.<br> For negative predictions, we take the min of the negative predictions for B, D, G &amp; average the result with the negative prediction of Class.</p> <p>Note that in the code below, I refer to the predictors for B, D, and G as Beta, Delta, and Gamma. This is not to be confused with the fields of the same name (which are not used). 
</p> <pre><code>from statistics import mean  # imports assumed; not shown in the original
import numpy as np

def ensemble_predict_proba(train_X, postProcess=False):
    # Function name and signature are reconstructed; the original
    # identifiers were stripped from the post.
    alphaPreds = modelAlpha.predict_proba(train_X)
    betaPreds = modelBeta.predict_proba(train_X)
    deltaPreds = modelDelta.predict_proba(train_X)
    gammaPreds = modelGamma.predict_proba(train_X)

    ensemblePreds = []
    for i, pred in enumerate(betaPreds):
        # positive side: max over B/D/G positives, averaged with the Class positive
        truthy = mean([max([betaPreds[i][1], deltaPreds[i][1], gammaPreds[i][1]]),
                       alphaPreds[i][1]])
        # negative side: min over B/D/G negatives, averaged with the Class negative
        falsey = mean([min([betaPreds[i][0], deltaPreds[i][0], gammaPreds[i][0]]),
                       alphaPreds[i][0]])
        ensemblePreds.append([falsey, truthy])

    if postProcess:
        ensemblePreds = modelAlpha.post_process_proba(ensemblePreds)
    return np.array(ensemblePreds)
</code></pre> <h3>Special Note on Balanced Log Loss &amp; The Greek Strategy</h3> <p>For this competition, we did not need to ensure that predicted probabilities sum to 1. There are quite a few public notebooks utilizing a version of this function that simply inverts the positive predictions. This would obviously be incorrect with my prediction strategy described above, as I can have total probabilities both greater and less than 1. For this notebook, I chose to ensure the balanced log loss function I used took the full prediction probabilities and dealt with them independently rather than forcing the probabilities to sum to 1. Both should presumably be equivalent. </p> <h3>Caveats from a better performing notebook.</h3> <p>A better performing notebook (Private 0.33) ignored the Class model altogether.</p> <h2>Prediction Balancing</h2> <p>I balanced the prediction probabilities based on the weight of the predictions by the ensembled models on a per-model basis during training, and in aggregate when making predictions. In limited testing, I found this performed better than providing an up-front class weighting to XGBoost based on the ratio in the training set. </p> <h2>Picking Features</h2> <p>For each of the ensemble models (Class, B, G, D), I ran a loop removing features with feature importance scores &lt;=0 to check which set had the best performance in training. I kept the feature set with the best scoring run.</p> <h3>Caveats from a better performing notebook.</h3> <p>A better performing notebook (Private 0.33) used all features.</p> <h2>Hyperparameter Tuning</h2> <p>I ran 200 trials with Optuna on each model (Class, B, D, G) to tune hyperparameters. Ultimately, I found that tuning resulted in worse scores on both the public and private sets (0.01 to 0.05 worse). However, given the time limitation, I did not re-run this for the reduced feature set of each model, which leaves room for re-running hyperparameter tuning on the reduced feature sets. I still suspect that at this point we would already be overfitting against my core solution.</p> <h2>A note on Post Processing</h2> <p>I tried simple prediction post-processing strategies where I pushed high values towards 1 and low values towards 0. I also tried ratio-oriented approaches. Both tended to improve the training score significantly (~0.5 all the way down to the ~0.04 range), but this resulted in significantly worse scores on both the public and private datasets. I observed this strategy in the highest scoring public notebook (<a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble</a>) that seemed to have been the source of many high scores. I'm curious now whether this harmed those notebooks in the same fashion as it did mine. 
</p> <h1>Sources</h1> <p>(<a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble</a>)<br> Inspired my failed attempts at post-processing &amp; was the source of my swap to balancing predictions at prediction time rather than via the XGBoost class weighting parameter (TabPFN didn't seem to have this, and attempts to mix and match were unnecessarily complex).</p>
ICR - Identifying Age-Related Conditions
98th Place Solution for ICR Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach to the modelling in this competition was a multistep underbagged ensemble model. Firstly, a model is trained to predict variables from the Greek dataset, which is only available for the training data. This model is used to augment the measurement features (i.e. train/test datasets) with predictions of the Greek variables. This augmented data is used to train the main model, which is an ensemble of (mostly) underbagged models. I published <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">a notebook</a>, which is not the actual submission but a cleaned-up version of very similar code used in the submission.</p> <h2>Context section</h2> <ul> <li><p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></p></li> <li><p>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p></li> </ul> <h2>Overview of the approach</h2> <p>We were provided with a range of anonymized measurement data (train/test), and several categorical variables (greeks) that were only provided for the training data. The predictand (the occurrence of one or more conditions: <code>Class</code>) was imbalanced, with most (&gt;80%) rows in the training data having no condition (i.e. value of 0). </p> <p>The greeks dataset consisted of two variables that were refinements of <code>Class</code>. <code>Alpha</code> separated positive <code>Class</code> occurrences into three different conditions, labelled 'B', 'D', and 'G'. The <code>Gamma</code> column further refined <code>Alpha</code>. The meaning of <code>Beta</code> was unclear, but showed a <a href="https://www.kaggle.com/code/nnjjpp/epsilon-eda-for-icr-competition#Annual-analysis" target="_blank">large shift</a> from 'C' to 'B' over the years of the training data. <code>Epsilon</code> appeared to be date of testing, and had a large number (approximately 1/4) of missing values.</p> <p>These considerations suggested that the greeks dataset held quite a bit of predictive information, and in order to use this data directly for predictions, I developed a two-step model: Firstly, we use the training data to model the encoded greeks variables, and use this model to predict greek probabilities. Secondly, the greek probabilities are used to augment the dataset, which is then used to predict the <code>Class</code> variable. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1158024%2F3361fddbc4c59b58e049789258088d94%2Fgreek_schematic.png?generation=1692367050270534&amp;alt=media" alt="Greek preprocessing schematic"></p> <p>To deal with class imbalance, I used <a href="https://www.kaggle.com/code/nnjjpp/underbagging-undersampling-and-bagging-estimators" target="_blank">underbagging</a>:</p> <blockquote> <p>In order to improve model performance, we create an undersampled dataset, i.e. 
use all the positive cases, but take only a small subset of the negative cases to get a balanced dataset with a roughly 50:50 split of cases. This works to improve classification accuracy, but we also lose a lot since almost all of the data is thrown away. One way to use this data is to create a bagging classifier (i.e. a similar rationale to a random forest classifier), where we fit many copies of the base estimator on different samples of the dataset, but using the undersampling procedure to generate the balanced datasets.</p> </blockquote> <p>I also used the class-imbalance parameters of the models, where they existed (a small sketch of the underbagging procedure appears at the end of this post).</p> <p>Finally, mean predictions from an ensemble of individually tuned models (various configurations of XGBoost, logistic regression, CatBoost and LightGBM) were used to provide the final predictions.</p> <h2>Details of the submission</h2> <p>The general consensus was that the main difficulty in this competition was avoiding overfitting (especially to the public leaderboard). In early attempts, I found that increasing the number of hyperparameter tuning iterations made predictions worse, and this was because CV evaluation was very variable across folds, with hyperparameters being tuned solely to reduce the loss function in difficult folds. Overall performance of a model across all folds drops when this occurs. Experimenting with CV schemes and avoiding data leakage were critical to avoid overfitting.</p> <p>To avoid data leakage, the greek probability data augmentation needed to be wrapped up in the cross-validation scheme. To this end I wrote the encoding, fitting and prediction as the class <code>GreekPreprocessing</code>. I couldn't interface this properly with the scikit-learn <code>RandomizedSearchCV</code> class, as the greeks dataset needs to be passed through to the <code>GreekPreprocessing</code> class constructor, so a hand-coded randomized search function was needed.</p> <ul> <li><p>Preprocessing was minimal: the Greek dataset and the <code>EJ</code> column were encoded, simple imputation for missing data and scaling were applied, and an option for dropping columns with <code>SelectKBest</code> was included in the hyperparameter search.</p></li> <li><p>Hyperparameter tuning: A hand-coded randomized search CV function was used for each model in the ensemble (40 iterations). </p></li> <li><p>Cross-validation: As we saw, getting a robust CV scheme was extremely difficult in this competition due to the heterogeneity of the training data, public test and private test datasets. I tried a number of different approaches to cross-validation, including <code>KFold</code>, <code>StratifiedKFold</code> (on <code>Class</code>, <code>Alpha</code> and other combinations of variables from the greeks dataset), random sampling of rows, and a time-based <code>GroupKFold</code> strategy on the <code>Epsilon</code> column from the greeks dataset. Ultimately it was not possible to tell which CV strategy was most appropriate, as we had few clues about what the private test data would look like. I ended up using Alpha-stratification.</p></li> <li><p>Model ensemble: models from the ensemble with CV score greater than 0.4 were excluded, and probabilities from the remaining models were blended using the mean. Models without balancing parameters produced lower probability predictions (i.e. were more likely to predict class 0). 
In contrast, the underbagged models produced higher probability predictions, and the unbagged models balanced with internal parameters produced probability predictions between these two extremes. <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2#5.-Blend-ensemble-probabilities" target="_blank">See the figure in the notebook</a>.</p></li> <li><p>Submission evaluation: Actual submission: Public 0.14, Private 0.38, <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">published notebook</a>: Public 0.22, Private 0.36, Validation 0.31, Cross-validated (estimated from average of ensemble member CV scores) 0.35.</p></li> </ul> <h2>Sources</h2> <ul> <li><p>I think we can all agree that the public leaderboard for this competition was downright dangerous.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">My published notebook</a> was not the actual submission but is a cleaned-up, very similar version.</p></li> <li><p><a href="https://www.kaggle.com/code/janiobachmann/credit-fraud-dealing-with-imbalanced-datasets" target="_blank">Undersampling described in gold medal notebook by janiobachmann</a>.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/underbagging-undersampling-and-bagging-estimators" target="_blank">My underbagging notebook</a> contains classes for underbagging scikit-learn estimators.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/epsilon-eda-for-icr-competition" target="_blank">My Epsilon EDA</a>.</p></li> <li><p>There were many great ideas on <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion" target="_blank">the competition discussion forum</a>. I found posts by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a>, <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>, and <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> particularly helpful.</p></li> </ul>
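<p>To make the underbagging pattern above concrete, here is a minimal sketch (not the classes from the underbagging notebook; the function name and parameters are illustrative):</p> <pre><code>import numpy as np
from sklearn.base import clone
from imblearn.under_sampling import RandomUnderSampler

def underbag_predict(base_estimator, X_train, y_train, X_test, n_bags=50, seed=0):
    """Fit clones of base_estimator on balanced undersamples, average probabilities."""
    probs = []
    for i in range(n_bags):
        sampler = RandomUnderSampler(random_state=seed + i)
        X_bal, y_bal = sampler.fit_resample(X_train, y_train)
        est = clone(base_estimator)
        est.fit(X_bal, y_bal)
        probs.append(est.predict_proba(X_test)[:, 1])
    # bagged prediction: mean of the per-sample probabilities
    return np.mean(probs, axis=0)
</code></pre> <p>Each bag sees all of the positives but a different random subset of negatives, so across the whole ensemble almost none of the majority-class data is actually thrown away.</p>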
ICR - Identifying Age-Related Conditions
9th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello! This was unexpected, I'm really happy! </p> <h1>Context section</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h1>Overview of the Approach</h1> <p>I was very concerned about overfitting my model. With so many different opinions in the discussions and the difficulty of accurately gauging technique effectiveness due to the small dataset, I was unsure about the best approach to take. </p> <p>As a solution, I decided to integrate a mix of different models along with a variety of data preprocessing techniques.</p> <h1>Details of the submission</h1> <h3>Part one:</h3> <ul> <li>Data Preprocessing: <ul> <li>Leveraged an oversampler to balance the data distribution.</li> <li>Use of Greeks' Epsilon with <code>Epsilon.max() + 1</code> for the test set.</li> <li>Employing the SimpleImputer with the strategy set to 'constant'.</li></ul></li> <li>Models:<ul> <li>Ensemble of two XGBClassifiers and two TabPFNClassifiers.</li> <li>Employed 5-fold cross-validation, picking the best model.</li></ul></li> </ul> <h3>Part two:</h3> <ul> <li>Data Preprocessing: <ul> <li>No Greeks' Epsilon.</li> <li>Implemented feature scaling.</li> <li>Used feature selection, opting for a subset of 40.</li></ul></li> <li>Models: <ul> <li>XGBClassifier and LGBMClassifier.</li> <li>Employed 15-fold cross-validation, culminating in an ensemble mean.</li></ul></li> </ul> <h3>Conclusion</h3> <p>I found that the best score came from a weighted mean, assigning a weight of 3 to the first ensemble while each of the other two models held a weight of 1.</p> <h3>What did not work for me:</h3> <ul> <li>postprocessing (obviously! 😉).</li> <li>Other models such as Tree Classifiers and Neural Networks…</li> <li>Using other fields (apart from Epsilon) in Greeks</li> </ul> <h1>Sources:</h1> <ul> <li>I drew inspiration from a publicly shared notebook for the first part: <a href="https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models</a></li> </ul> <p>I'd also like to thank everyone who actively participated in the discussion forum, I believe that this allowed me to learn a lot!</p> <p>Hello, </p> <ul> <li><p>I employed feature scaling primarily for consistency as I was anticipating the use of other models.</p></li> <li><p>I utilized a few feature selection methods to identify the most informative features. The choice of 40 features was somewhat arbitrary and it ended up like that because of other thresholds I used. </p></li> </ul> <p>Thank you for your congratulations!</p>
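<p>A minimal sketch of the final 3:1:1 blend described in the conclusion (array names are illustrative, not from the actual notebook):</p> <pre><code>import numpy as np

def blend_3_1_1(p_part1, p_xgb, p_lgbm):
    """Weighted mean of predicted P(Class=1): part-one ensemble gets weight 3,
    the two part-two models get weight 1 each."""
    w = np.array([3.0, 1.0, 1.0])
    stacked = np.vstack([p_part1, p_xgb, p_lgbm])
    return np.average(stacked, axis=0, weights=w)
</code></pre>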
ICR - Identifying Age-Related Conditions
As expected the shake-up is the biggest I've seen, or trust your CV.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Great, thanks for sharing. <br> For me, the main problem with using CV was using RandomOverSampler and similar techniques, because their use worsens the "credibility" of the CV. <br> Damn, I knew about Sliding Time Cross Validation, and why it should be used. If I'm not mistaken, you already demonstrated it a few years ago.</p> <p>Hello everyone! <br> First of all, I want to thank the organizers for a wonderful competition.<br> When I worked with the data and saw the number of participants, I imagined the end of this competition looking something like this. <br> That's why I published my best result in the hope of getting at least some kind of medal (thanks to everyone who voted). <br> As it turned out, it was a good thing that I did this, because I did not have the opportunity to seriously engage in Kaggle competitions for the last two months. <br> Now back to CV: this notebook shows how I did the hyperparameter search <a href="https://www.kaggle.com/code/aikhmelnytskyy/lb-0-11-cv-0-2-tabpfn-xgb-model/notebook" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/lb-0-11-cv-0-2-tabpfn-xgb-model/notebook</a>. <br> Most interesting of all is my best notebook on the private LB.<br> After 2 months, I had forgotten about this notebook when choosing models😝. By the way, I have a rule: if I enter a competition somewhere in the middle and after several attempts I am very high (in the silver, or especially the gold zone), for me this is a sign that there will be a shake-up and I will be thrown to the back of the peloton. This rule has only failed a few times in 100+ events. <br> Good luck to everyone and peace!</p> <p>I agree here. This influenced my choice. But I left this with a positive attitude; the experience gained will help in the future.</p> <p>This too. But 5000 positions between the best CV and the best public LB on the private LB speaks in favor of CV. But you are right: given the small sample, we "guessed on coffee grounds".</p> <p>Haha. I always thought it was against the rules. Although I really wonder what the actual number of people, not accounts, accepted in this competition was. <br> 6700 members and 1 sub per day is clearly suspicious 😀</p> <p>I think the problem here is the small sample of data. On a larger sample, the model would work better.</p>
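<p>On the RandomOverSampler point above: the usual way it destroys CV "credibility" is resampling before splitting, which leaks duplicated positives into the validation folds. A minimal sketch of the safe pattern, resampling only inside each training fold (model and parameters are illustrative):</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from imblearn.over_sampling import RandomOverSampler
from xgboost import XGBClassifier

def oof_with_fold_level_oversampling(X, y, n_splits=5, seed=0):
    """X, y as numpy arrays; validation folds stay at the true class ratio."""
    oof = np.zeros(len(y))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr_idx, va_idx in skf.split(X, y):
        # resample the training fold only, never the validation fold
        X_tr, y_tr = RandomOverSampler(random_state=seed).fit_resample(X[tr_idx], y[tr_idx])
        model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
        model.fit(X_tr, y_tr)
        oof[va_idx] = model.predict_proba(X[va_idx])[:, 1]
    return oof
</code></pre>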
ICR - Identifying Age-Related Conditions
Ensembling works.. unless you pick the (only) overfit one
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Ensembles or simple, under-tuned models?</h1> <p>I noticed several people reported that single, under-tuned models performed very well for them on the private leaderboard of this competition, better than complex ensembles of highly tuned models. Possible reasons why this might happen are: 1) Overfitting the "highly tuned" models; 2) Some degree of survivorship bias with many simple models being submitted; and/or 3) Under-tuned models outputting less confident predictions when compared to highly tuned models, thus making them inherently less "risky" for the balanced log loss metric. </p> <p>As we know, the metric punishes confident and wrong predictions very badly, which makes the score very vulnerable to even 1 or 2 misclassified "hard cases" (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430475" target="_blank">link</a>, <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/427187" target="_blank">link</a>, <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings/notebook" target="_blank">link</a>), especially on the false negative side. Given the data drift we have with time (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/418398" target="_blank">link</a>, <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings/notebook" target="_blank">link</a>, <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">link</a>), and also the variability in the number of "hard cases" present between the train, public and private splits (with the public dataset being the "easiest" and the private seemingly the "hardest"), it is perhaps not a surprise that we've seen such a dramatic <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430752" target="_blank">shake-up</a> (probably one of the <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430716" target="_blank">worst</a> in recent history of Kaggle competitions), and it is also clear that using more confident models would generally be riskier.</p> <p>However, proper ensembling should in theory mitigate both the issues of over-confidence / calibration and overfitting, and if done properly (e.g. using good models that complement each other well - or even weak learners that complement each other <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/427446#2389281" target="_blank">well</a>; using proper weights, etc), it usually is a superior solution to merely using less confident models, since it is likely that different predicted data points are also pushed in the "right" direction, i.e. towards their ground truth, by different ensemble members (e.g. by weighted averaging or stacking their predictions).</p> <p>Was this still true for this competition? 
Does using simple under-tuned models work better than ensembling well-tuned models in the particular setting we have here (very small data regime, sensitive metric, data drift over time and between train/public/private LBs)? Can we also achieve good, stable scores with diverse ensembles of well tuned models?</p> <p>I want to share my ensembling (and overall model building) approach below, which pretty consistently outperformed single models for me, on both CV, public, and eventually private leaderboards (see snapshot below). In fact, out of a total of 11 ensemble submissions I did using this pipeline, 5 would have given silver medals (the best one would have ranked 105 on the private LB, score 0.38435), 3 are in the bronze area, and only 3 are out of the medal area. Even so, I managed to pick as final submissions 2 out-of-medal ensembles - both variants of the same ensemble actually, which was quite unlucky, albeit defensible in terms of CV + public LB scores (see below).<br> On the other hand, for single models, I have only 2 (xgboost) models in the medal area (one silver and one bronze), both scoring worse than many ensembles. See the screenshot below for the top 8 submissions, out of which 6 are ensembles. Admittedly, very few of my single models were particularly "under-tuned" (except of course TabPFN ones), which might be a key difference for the purposes of this comparison.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3220715%2F262174be2909788750871680f0725d72%2Fsubs.png?generation=1692098551418633&amp;alt=media" alt=""></p> <h1>Model creation and ensembling pipeline:</h1> <p>Model-wise, my ensembles were a mix of:</p> <ul> <li>Models with different feature sets (of size 39, 38 and 35 features). Each feature set was derived using different FS approaches and selected by cross validation. I did not use any models with the complete feature set, and the best performing models used 39 features (for both tree models and TabPFN).</li> <li>Models with different targets (I used Alpha and Class only in final ensembles; Alpha performed better than Class overall)</li> <li>Models with different hyperparameters (found through Bayesian optimization) </li> <li>LGBM, Xgboost and TabPFN models</li> <li>Initial selection of models to use for ensembling was done using individual CV scores and techniques like correlation analysis of predictions. Each model configuration (type + features + targets + hyperparams combo) was then bagged (I used either 10- or 20-fold splits)</li> <li>For predicting on the test set, some of the "extremely bad" folds (e.g. CV score &gt; 0.4) were dropped from the ensemble as "bad folds". As we know, this usually occurred when "hard examples" made it to the validation, rather than training split of that fold. This step is probably debatable, but my thinking was that 1) I already mitigated the risk of overfitting to only good folds to some degree by using all of the folds (no matter how bad) when doing initial model selection / hyperparam tuning to find the individual models in the first place. 2) We arguably don't want folds that are missing important "hard points" from the training sets and provably (badly) underperform. 3) While the threshold couldn't be tuned by CV (obviously), it was set high enough that hopefully only extremely bad folds were eliminated (in general, it caused all of the TabPFN|LightGBM|XGboost models on that fold to be dropped, but not always). 
</li> <li>We then ensemble the remaining models via either stacking (using logistic regression as meta model), or per-fold minimization of scores assuming linear weights.</li> <li>The best ensemble (score 0.38435, which would have earned silver if selected, rank 105 on the private LB) used 20-fold bagging, and the above pipeline selected (via either filtering or weights being set close to 0) 26 / 120 bagged models, with (aggregated) model weights of: {'TabPFN': 0.099, 'xgboost_0': 0.249, 'xgboost_1': 0.125, 'xgboost_2': 0.311, 'lgbm_0': 0.178, 'lgbm_1': 0.162}, where xgboost_* and lgbm_* are different configs of Xgboost and LGBM. To compute the above aggregated weights, we add weights from all surviving folds of that particular model type and re-normalize. The target for all models in this ensemble was Alpha.</li> <li>The ensemble I actually submitted (unfortunately) was a supposedly more "robust" variation of the above, where I added an extra model configuration with target==Class, weighted binary cross entropy loss and model type LGBM to the ensemble, and removed some other models. Optimal weights for this ensemble were {'tabpfn_0': 0.092, 'lgbm_0': 0.268, 'lgbm_1': 0.083, 'lgbm_2': 0.550, 'xgboost_0': 0.100}, where 'lgbm_2' is the new model. Notice that a lot of weight was placed on the new binary target model. This new ensemble outperformed all others in both CV and public LB. CDFs of the predictions for each of the component models, as well as the final ensemble, are plotted below. Note that, in line with <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>'s findings <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/426536" target="_blank">here</a>, TabPFN (blue line) performs better (or at least is more confident) for predicting negatives, whereas most tree models are better with positives, so they complement each other quite nicely. Btw, I found this to be even more the case after feature elimination to the 39 features set. Also note that the resulting ensemble (light grey line) gets the "best of both worlds", i.e. is quite confident - much more so than any individual model! - in both the positive and negative regions (thus also quite risky when encountering hard cases!)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3220715%2F4862b88e863e0f59dc61fe8ec64054a8%2Fcdfs_sub2.png?generation=1692096814224138&amp;alt=media" alt="Alt text"></p> <p>A few other details on modelling and other things I have tried (with variable success): </p> <h1>Handling imbalance / Loss:</h1> <p>Like many people, I used weighted binary loss for Class models and weighted softmax for Alpha models, with weights given by inverse class frequencies. For the Class models, I also attempted to use a custom-implemented focal loss for LGBM, and hypertune the alpha and gamma parameters, but surprisingly this consistently underperformed basic weighted cross entropy in CV. I did not use any undersampling, oversampling or SMOTE-like approaches (all felt risky given the dataset size at hand).</p> <h1>CV fold splitting:</h1> <p>I eventually used stratified splits on {Class, Alpha, Beta, Gamma} for most models. I also tried a more clever approach whereby I assign "hardness classes" to individual points and additionally stratify based on hardness class. 
To compute hardness classes, unlike <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>'s very interesting approach <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430781" target="_blank">here</a>, based on an ensemble of supervised models, I took an unsupervised approach: the hardness score of each point is calculated as the residual of a kNN classifier (n_neighbors=20), applied over the (UMAP) dimensionality-reduced space of the points, using an input space of 35 selected features. Then, points are binned into either 2 or 3 classes based on their hardness score. This folding scheme performed worse in CV (but this was expected, as all folds are now forced to validate on hard examples). However, it also performed (slightly) worse on the public LB, for all of TabPFN|LightGBM|Xgboost, thus I decided not to use it for ensembles. However, on the private LB, the best solo (non-ensemble) model I have turns out to be an xgboost folded with this approach and N=2 hardness classes. So perhaps there was something to it (in retrospect, we know that the public LB has far fewer hard cases than the private LB, so perhaps this is expected).</p> <h1>TabPFN optimization:</h1> <p>As we know, TabPFN cannot be tuned much, so what I tried is the following approach: rather than feed it the complete dataset as training data (within a fold), only restrict the training data to the K nearest neighbors of each validation (or test) example. Note that this way we have to predict all test/validation points one by one, each with a different training set. Nearest neighbors were calculated in a dimensionality-reduced space, similarly to the hardness classes. This approach performed worse in CV, public, and ultimately on the private LB compared to including all examples, at least for the feature sets I tried.</p> <h1>Using Epsilon:</h1> <p>As we have seen, there is clearly time <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/418398" target="_blank">drift</a> in this dataset, and as <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> showed <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430963" target="_blank">here</a> in post-analysis, there was considerable benefit to using Epsilon for stratified folding, and also to dropping NaN Epsilon rows (control cohort?). In terms of actually using Epsilon as a feature, my plan was to build a time-independent model as the main submission, with an Epsilon-based model as a backup. To me, using Epsilon as a feature felt quite risky because, although there is a clear trend over time for targets in the training set (so adding Epsilon as a feature might help the model learn it), it is unclear with so little data (essentially only 3 years if we ignore the very sparse data before 2017, and only 617 data points overall) if this trend will continue, reverse, if it was due to COVID, to patient population sampling, treatment interventions, whether there is seasonality for some of the targets etc. 
Similarly, it is hard IMO with this amount of data to rely on or forecast any trends regarding changes in feature distributions.<br> I thus opted for a "safer" version of using Epsilon, generated from the exact same ensembling pipeline above, where I only included the year as a feature alongside the other selected features. I felt using anything more granular than that (e.g. month) would be quite prone to overfitting the models (especially since we don't have that granularity level for Epsilon in the test sets). Year in the test set was set to Epsilon.year.max() + 1. That ensemble actually performed OK on both the private LB (0.41) and public LB (0.16), although I had several ensembles <em>without</em> Epsilon.year outperforming it on both.<br> I also briefly considered using a time-based validation scheme (similar to what <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> demonstrated <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a>), but eventually decided against it, mainly because I was wary of drawing any conclusions about trends, time periods and "data regimes" based on the very limited data and time span we have available.<br> In the end, I was obviously proven wrong on this, and it seems like using Epsilon in CV was a game changer.</p> <h1>Picking submissions</h1> <p>Since the last ensemble I described (including the Class target model) was beating all other ensembles on both CV and public LB, and I had achieved reasonable CV-public LB correlation with ensemble submissions overall, I decided to give up the original plan (which was to submit a no-Epsilon and an Epsilon-based ensemble), and to go for a "go big or go home" approach. So I submitted 2 variants of the same ensemble, the exact one above and a variant of it which was slightly more confident / risky in the positive area of the predictions CDF, and less confident in the (more dangerous) negative area. Obviously this was the wrong call, and, as mentioned above, had I picked the Epsilon ensemble (or almost any other ensemble I had really), the submission would have been in the medal area (likely silver). In any case, picking a less confident model overall as a backup submission would probably have been a much better bet.</p> <h1>Conclusions and directions of further improvement</h1> <p>Ensembling overall gave good and stable results in my case, consistently better than the component solo models. Modulo the submission picking blunder above, ensembling achieved good scores and was even relatively stable in terms of CV-public (and eventually, to some degree, -private LB) correlation, which was notoriously difficult in this competition.</p> <p>However, ensembling is obviously still vulnerable to overfitting, especially in the very low data regime we are in. In my particular case, I think the newly added Class-based model was actually overfit, and since the ensembling pipeline gave that model a very high weight (0.55), the entire ensemble became overfit (at least this is my hypothesis, I did not verify it yet via late submissions).<br> Another source of overfitting would be in the ensemble weights. Had we had more data, we could have used separate holdout sets for the validation and calibration of the stacked meta model, for instance, to mitigate this. 
But I felt there was barely enough data to reliably train the base models in this case.</p> <p>Of course, these particular ensemble submissions would still only get silver medals, and the base models themselves can be further improved. For instance, with hindsight, I probably wouldn't shy away as much from using Epsilon and would actually use some form of time-based validation scheme, where we train and validate on different periods / "regimes" of data (something along the lines of what <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> did <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a>). I was discouraged from taking this path by the fact that we had very little data and a short time span (thus any trends or "data regimes" observed seemed questionable IMO), and also by the fact that the organizers explicitly excluded Epsilon from the test sets (thus building models without making use of it seemed to be suggested). However, it is ultimately a matter of balancing capturing actual data drift signal in the model versus simply noise, and it appears, based on final results, there was clearly enough signal to make using Epsilon worth it.<br> Also, since we did have access to features (but not ground truth labels) for the test dataset upon submission, perhaps other more involved approaches using time were possible, for instance estimating similarity between the test dataset and particular time periods in our training set (using e.g. KL divergences, distances between embeddings, etc.) and using features built upon these similar time periods.<br> Another direction of improvement would probably be recalibrating confident predictions on the test set to account for uncertainty due to data drifts. In general, if this were a real life model deployment, we would probably recalibrate and/or retrain our models with new data to account for drifts. In this Kaggle competition setting, we cannot do that (no ground truth labels for newer data - unless we probe the public LB :) ). Lacking that option, if we can somehow build an estimate of the uncertainty we should expect for our models on the <em>private</em> test set given the observed data drift (again just based on the test set features only, not the labels), we can perhaps find ways to recalibrate the predictions output by our well-tuned models / ensembles to be less confident on the test set, while still maintaining good performance overall. There are several approaches to recalibration possible, e.g. some basic temperature scaling on the output probabilities, or perhaps some more involved approaches taking into account the probability of "hard cases". But the harder problem is estimating calibration parameters like the optimal temperature, or other calibration parameters if using other methods - since again, we do not have access to a labeled calibration set from the time period of the private test set.</p>
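<p>For reference, temperature scaling as mentioned in the conclusion is a one-parameter recalibration; a minimal sketch (the hard part, estimating T without labels from the private period, is exactly the open problem described above):</p> <pre><code>import numpy as np

def temperature_scale(p, T):
    """Soften binary probabilities by temperature T (T &gt; 1 means less confident)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    logits = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-logits / T))

# T = 1 leaves predictions unchanged; T = 2 pulls 0.95 back to roughly 0.81
</code></pre>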
ICR - Identifying Age-Related Conditions
Extreme Bagging + Stacking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My conservative solution for this competition consisted of a bagging strategy with 100 balanced holdout sets. <br> It consistently scored 100-holdout CV: 0.1617 / Public: 0.22 / Private: 0.34 (top 4). <br> It consists of a weighted average stacking of 5 models: </p> <ul> <li>Logistic Regression CV: 0.2869</li> <li>MLP CV: 0.2257</li> <li>XGBoost CV: 0.201 / 0.186 / 0.315</li> <li>LightGBM CV: 0.222</li> <li>CatBoost CV: 0.167</li> </ul> <p>Unfortunately it wasn't one of my final selections. For those I selected the same notebook but using test pseudo labels at different amounts to add more data to train. Despite the fact that it worked in my local experiments, it didn't work on the private LB.</p> <p>Stacking Notebook <a href="https://www.kaggle.com/code/titericz/blend-v12-extreme-bagging?scriptVersionId=139715250" target="_blank">here</a></p> <p>Top 3 Single XGB <a href="https://www.kaggle.com/code/titericz/top3-xgboost/notebook" target="_blank">here</a></p> <p>Top 2 Single XGB <a href="https://www.kaggle.com/code/titericz/xgboost-late-sub" target="_blank">here</a> - adding dates</p>
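<p>A minimal sketch of one plausible reading of the 100-balanced-holdout scheme (my reconstruction, not the code from the stacking notebook; model choice and sizes are illustrative, and the weighted-average stacking of the 5 model families would sit on top of predictions produced this way):</p> <pre><code>import numpy as np
from lightgbm import LGBMClassifier

def bagged_balanced_holdouts(X, y, X_test, n_bags=100, n_per_class=50, seed=0):
    """Each bag holds out a 50:50 class-balanced sample and trains on the rest."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_bags):
        idx0 = rng.choice(np.where(y == 0)[0], n_per_class, replace=False)
        idx1 = rng.choice(np.where(y == 1)[0], n_per_class, replace=False)
        holdout = np.concatenate([idx0, idx1])
        train = np.setdiff1d(np.arange(len(y)), holdout)
        model = LGBMClassifier(n_estimators=300, learning_rate=0.03, random_state=seed)
        model.fit(X[train], y[train], eval_set=[(X[holdout], y[holdout])])
        preds.append(model.predict_proba(X_test)[:, 1])
    return np.mean(preds, axis=0)
</code></pre>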
ICR - Identifying Age-Related Conditions
How on Earth did I win this competition?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you, now I understand this better)</p> <p>10 hours, if I remember it correctly.</p> <p>Thank you very much! It scored 0.14.</p> <p>It is literally the model with 2 VariableSelectionFlow layers instead of 3 in the final version.</p> <p>I made it public just yesterday)</p> <p>Thanks! <br> Maybe, but I didn't experiment with this.</p> <p>No, it's just my wrong naming, I mean "tgt2") I didn't override the original target; tgt2 was used only for better splits during cross-validation. Step 6 improved the score by 0.02.</p> <p>Thank you! In my experiments standardization of the data deteriorated the cv-score, though I don't fully understand why.</p> <p>Thank you! You see the prediction only on "toy" data, where all features are zeros. Which is a complete mess for a model trained on real data, so it is not surprising that it predicts 0.5. And regarding balanced logloss - as I understand this, reweighting just stretches the probabilities to one side or the other, but if they are in the middle (0.5) they stay there) </p> <p>Thank you very much!<br> No, I didn't)</p> <p>Oh, I forgot to write some important details about cv! Will edit my post now.</p> <p>Hello there! This was really unexpected. I hoped to be in the top 10%, but never dreamed of something more. Thanks to everyone who participated in the competition and especially to those who discussed various ideas and shared their code! And big thanks to SAMUEL, who introduced reweighting the probabilities in this notebook: <a href="https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min" target="_blank">https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min</a></p> <p>My solution was purely based on a somewhat sophisticated DNN: <a href="https://www.kaggle.com/room722/icr-adv-model" target="_blank">https://www.kaggle.com/room722/icr-adv-model</a><br> UPDATE: Training notebook attached.</p> <p>What did not work for me:</p> <ol> <li>Gradient boosting was obviously overfitting, although I spent just a little time on it and didn't do much fine-tuning.</li> <li>The "greeks" were useless. I think because we have no greeks for the test data.</li> <li>FE led to overfitting.</li> </ol> <p>What did work:</p> <ol> <li>DNN based on a Variable Selection Network. [1]</li> <li>No "casual" normalization of data like MinMaxScaler or StandardScaler, but instead a linear projection with 8 neurons for each feature.</li> <li>Huge values of dropout: 0.75-&gt;0.5-&gt;0.25 for the 3 main layers.</li> <li>Reweighting the probabilities at the end worked really well.</li> <li>10-fold CV, repeated 10-30 times for each fold, selecting the 2 best models per fold based on CV (yes, CV somehow worked in this competition!). The training was so unstable that the cv-scores could vary from 0.25 to 0.05 for a single fold, partially due to large dropout values, partially due to the small amount of train data. That's why I picked the 2 best models for each fold.</li> <li>The cv was some kind of Multi-label. 
At first I trained some baseline DNN, gathered all validation data and labeled it as follows: (y_true = 1 and y_pred &lt; 0.2) or (y_true = 0 and y_pred &gt; 0.8) -&gt; label 1, otherwise label 0. So, this label was something like "hardness to predict". And the other label was, of course, the target itself.</li> </ol> <p>To be honest, I was lucky to win, but for me personally it also means that the DNN wins, and probably not just by luck. And as a big fan of DNNs this makes me proud and happy))</p> <p>[1] The idea of this network was taken from here <a href="https://arxiv.org/abs/1912.09363" target="_blank">https://arxiv.org/abs/1912.09363</a></p> <p>It was 0.14.</p> <p>I think it is good. There are several submissions with 0.32-0.34 private score.</p>
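<p>For readers unfamiliar with the probability reweighting credited above: one common form of the idea (my paraphrase, not the exact notebook code) rescales each predicted probability by the inverse training frequency of its class, which matches how the balanced log loss weights the two classes:</p> <pre><code>import numpy as np

def reweight_probs(p1, n0, n1):
    """Rescale P(class=1) as if both classes were equally frequent in training."""
    p0 = 1.0 - p1
    q0, q1 = p0 / n0, p1 / n1   # divide by class counts, i.e. by class priors
    return q1 / (q0 + q1)

# with class counts like n0 = 509, n1 = 108, a raw 0.2 becomes about 0.54
</code></pre>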
ICR - Identifying Age-Related Conditions
ICR by symbolic regression (162th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this competition I tested my symbolic regression algorithm. I built three mathematical models, and used their average in the result. In two cases, the symbolic regression found a mathematical equation of the form tanh(f(x)), which is equivalent to a logistic function. </p> <p><code>tanh(((x33/x6)+(((x13/(((x24/x1)*x45)*x37))*x1)*(0.02532214485108852386))))</code></p> <p><code>tanh(((((((0.04220944643020629883)*x33)*x48)*(((0.04220944643020629883)*x33)*x48))*((((0.04220944643020629883)*x33)*x48)*(((0.04220944643020629883)*x33)*x48)))+((x1*x1)*(((-0.05945727974176406860)+(x1*x1))*x9))))</code></p> <p><code>((((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))/(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))+(((x31*((x32*x22)+x24))*x17)*((x32*x22)+x24))))*(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))/(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))+(((x31*((x32*x22)+x24))*x17)*((x32*x22)+x24)))))</code></p> <p>The notebook is available here <a href="https://www.kaggle.com/code/jano123/icr-by-symbolic-regression" target="_blank">https://www.kaggle.com/code/jano123/icr-by-symbolic-regression</a>. </p> <p>It was a long time ago, but I remember wanting to use the highest clip possible. With such a small dataset, there is a high probability of misclassifying an unknown data sample. Log(0.97) = -0.03045, and an error of 0.03 for correctly classified data seemed acceptable to me for this case.</p> <p>Hm, gplearn is very slow; HROCH is perhaps the fastest symbolic regression available. Benchmark <a href="https://github.com/janoPig/HROCH/blob/main/benchmarks/SRBench.md" target="_blank">https://github.com/janoPig/HROCH/blob/main/benchmarks/SRBench.md</a> or shortened kaggle version <a href="https://www.kaggle.com/code/jano123/feynman-equations-by-symbolic-regression" target="_blank">https://www.kaggle.com/code/jano123/feynman-equations-by-symbolic-regression</a></p> <p>Other interesting methods are<br> PySR with a slightly more complicated installation <a href="https://www.kaggle.com/code/jano123/pysr-install" target="_blank">https://www.kaggle.com/code/jano123/pysr-install</a></p> <p>or RILS-ROLS <br> <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-023-00743-2" target="_blank">https://journalofbigdata.springeropen.com/articles/10.1186/s40537-023-00743-2</a> <br> <a href="https://pypi.org/project/rils-rols/" target="_blank">https://pypi.org/project/rils-rols/</a></p>
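<p>To show how such an equation turns into a submission, here is a sketch of scoring the first model (how the tanh output is mapped to a probability and the exact clip handling are my assumptions; see the notebook for the real code):</p> <pre><code>import numpy as np

def predict_eq1(x1, x6, x13, x24, x33, x37, x45, clip=0.97):
    """First discovered equation; tanh acts as the logistic-like link."""
    f = (x33 / x6) + ((x13 / (((x24 / x1) * x45) * x37)) * x1) * 0.02532214485108852386
    p = np.tanh(f)
    # cap confidence: log(0.97) ~= -0.0305, an acceptable cost on correct rows
    return np.clip(p, 1.0 - clip, clip)
</code></pre>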
ICR - Identifying Age-Related Conditions
Large and simple GBDT ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow, this is the craziest shakeup I've seen so far. Makes me feel lucky and grateful for 329th place!</p> <h1>Solution</h1> <p>It's a simple solution created during the last week.</p> <ul> <li>XGBoost, CatBoost and LGBM</li> <li>Use (almost) default hyperparameters</li> <li>RandomUnderSampler to balance classes</li> <li>Train models using 1000 different 5-fold seeds and RandomUnderSampler seeds</li> <li>Simple average ensemble</li> </ul> <h1>Potential improvement</h1> <p>One of many potential improvements is perhaps to also randomly sample different GBDT hyperparameters.</p> <h1>Code</h1> <p>The code can be seen <a href="https://www.kaggle.com/code/erijoel/icr-solution" target="_blank">here</a></p>
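<p>A compact sketch of the seed-averaging loop (my condensation, showing only the XGBoost branch with near-default parameters; the linked notebook has the real code and also cycles CatBoost and LGBM):</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from imblearn.under_sampling import RandomUnderSampler
from xgboost import XGBClassifier

def seed_averaged_test_preds(X, y, X_test, n_seeds=1000):
    """X, y as numpy arrays; vary both the undersampling seed and the
    5-fold seed, then take a simple average over all models."""
    preds = []
    for seed in range(n_seeds):
        X_bal, y_bal = RandomUnderSampler(random_state=seed).fit_resample(X, y)
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        for tr_idx, _ in skf.split(X_bal, y_bal):
            model = XGBClassifier(random_state=seed)
            model.fit(X_bal[tr_idx], y_bal[tr_idx])
            preds.append(model.predict_proba(X_test)[:, 1])
    return np.mean(preds, axis=0)
</code></pre>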
ICR - Identifying Age-Related Conditions
Megatron for synthetic data generation + simple LGBM model - Bronze medal with 0.42/0.19 Pr/Pu scores
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank ICR and Kaggle for hosting such an interesting competition. Many thanks also to all the participants for the great discussions and useful insights. As another newbie with a surprising result, I would like to share my approach, which despite the obvious gaps might be found insightful. </p> <p>Inspired by a training lab during NVIDIA GTC 2023 on Synthetic Tabular Data Generation Using Transformers (<a href="https://www.nvidia.com/en-us/on-demand/session/gtcspring23-dlit52224/" target="_blank">https://www.nvidia.com/en-us/on-demand/session/gtcspring23-dlit52224/</a>), I decided this approach would be great to augment the small dataset and help the high scoring public notebooks generalize better. And it was a great opportunity to test the approach of using synthetic data, as I am very keen to use it in my professional field. Even for a beginner like me, it was relatively straightforward to set up NVIDIA's Megatron using the NeMo framework and train it on a single-GPU local machine. The preprocessing, training and inference of Megatron is an adaptation of this tutorial: <a href="https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Megatron_Synthetic_Tabular_Data_Generation.ipynb" target="_blank">https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Megatron_Synthetic_Tabular_Data_Generation.ipynb</a> </p> <p>My approach: </p> <ol> <li>Train data preprocessing</li> </ol> <ul> <li>Use of greeks except Epsilon (date) – I decided to use all the records but drop the features that cannot be replaced simply, in order to have a larger and more diverse dataset</li> <li>NaN replacement – replaced by the median value for the groups by categorical features. Where unavailable – median value of the group by Alpha.</li> <li>Tokenization – following the guide above with adjustments for column types, hasnan and token length</li> </ul> <ol start="2"> <li>Model configuration and training</li> </ol> <ul> <li>NUM_LAYERS = 8</li> <li>NUM_GPUS = 1</li> <li>HIDDEN_SIZE = 1024</li> <li>NUM_ATTENTION_HEADS = 16</li> <li>SEQ_LENGTH = 1024<br> This model runs for around 3 hours on a local machine on a single RTX 3060 12GB.</li> </ul> <ol start="3"> <li>Inference </li> </ol> <ul> <li>20000 empty prompts</li> <li>Temperature = 1.0<br> With this temperature, only around 800 of the generated records are non-duplicates whose feature values fall within the ranges defined by the train data. The generated dataset follows the same distribution as the train dataset.</li> </ul> <ol start="4"> <li>Model definition and training</li> </ol> <ul> <li>Based on <a href="https://www.kaggle.com/code/nomuraryota/icr-identify-age" target="_blank">https://www.kaggle.com/code/nomuraryota/icr-identify-age</a> </li> <li>Except for slightly different hyperparameters, the main difference is that the original train dataset is used as validation only while the LGBM classifier is trained on the generated records only. 
The assumption is that the model will generalize better</li> <li>100 models trained on this setup; the validation loss ranged between 0.19 and 0.09; the mean of the predictions was taken</li> </ul> <p>The trained model ideally could have been used directly for submission, but my main focus was on the exploration of the LLM training and the data generation. I've tested a lot of configurations for data generation, including prompting the model to generate specific combinations of categorical data as a constraint, with different temperatures, etc. Prompting the model with different categorical data was the approach I bet should work best, so that you can generate whatever records you need. However, my test of this didn't work, perhaps because I prompted the trained LLM with every available combination of observed categorical values (probably too ambitious to generate many records when only 1 or 2 observed rows share the same categorical combination).</p> <p>Comments, critiques and recommendations are more than welcome.</p>
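<p>A minimal sketch of the training setup in step 4 (names and parameters are illustrative; the point is the role reversal — synthetic rows train, real rows only validate):</p> <pre><code>import numpy as np
from lightgbm import LGBMClassifier

def train_on_synthetic(X_syn, y_syn, X_real, y_real, n_models=100):
    """Fit each model on generated records only; real data is validation only."""
    models = []
    for seed in range(n_models):
        model = LGBMClassifier(n_estimators=500, learning_rate=0.03, random_state=seed)
        model.fit(X_syn, y_syn, eval_set=[(X_real, y_real)])
        models.append(model)
    return models

def predict_mean(models, X_test):
    return np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
</code></pre>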
ICR - Identifying Age-Related Conditions
My Summary Notes on the 'ICR - Identifying Age-Related Conditions' Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>My Summary Notes on the 'ICR - Identifying Age-Related Conditions' Competition</h1> <h3>About the Competition</h3> <ul> <li>see <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">ICR - Identifying Age Related Conditions</a></li> </ul> <hr> <h3>Particularities of the Competition</h3> <h5>Test Sets (1) [aka 'public'] &amp; (2) [aka 'private'] are unknown</h5> <ul> <li>During the competition, the test set could not be displayed.</li> </ul> <h5>Final results computed from Test Set (2)</h5> <ul> <li>During the competition, test runs were possible on the 1st half of the entire Test Set.</li> <li>The final results were computed on the 2nd half of the entire Test Set.</li> </ul> <h5>Testing is scarce</h5> <ul> <li>The competition participants were allowed to run/test only 1 model per day. (Other competitions usually allow for up to 5 trials per day.)</li> </ul> <h5>Test metric penalizes confident, wrong predictions</h5> <ul> <li>A test run only returns one metric. Here, the evaluation metric is computed by a '<em>balanced</em> log loss' function (see competition page).</li> <li>The predictions were to be made as probabilities (e.g. 0.89, 0.11), not as class labels ('0' or '1').</li> <li>A confident prediction for a class label ('The predicted probability is high.') that is wrong leads to a huge penalty in the final evaluation metric.</li> <li>A model should take that into account, and predict approx. (0.5, 0.5) (coin flip) when in doubt.</li> </ul> <h5>Models trained on the Training Set perform (far) worse on Test Set (1) &amp; Test Set (2)</h5> <ul> <li>With all that, models trained &amp; cross-validated on the Training Set generally perform badly on the Test Set. Overfitting was a general issue to deal with.</li> <li><em>A low CV mean training error &amp; low CV training error spread do not say a thing, unless the model is run on the Test Set giving similar results.</em></li> </ul> <hr> <h3>Strategic Approach</h3> <h5>Get an overview</h5> <ul> <li>With a limited number of test trials, an unknown test set, and overfitting as a main general issue, the task of classifying data points gets even harder.</li> </ul> <h5>Test many different models</h5> <ul> <li>In order to counter overfitting, a possible strategy is to test many single ML models one-by-one. A drawback of this approach is that it wastes a lot of trials. However, it's beneficial to establish which models perform at all &amp; which are useless.</li> </ul> <h5>Decide on which models to test further</h5> <ul> <li>Based on the single tests, one can decide on which model to work on further. It's also possible to combine/ensemble different models.</li> </ul> <h5>Do not shy away from trying specialized/smooth models</h5> <ul> <li>Many approaches tried to find specialized models as far as the training &amp; test set allowed.</li> <li>Many approaches used probing techniques on the test set. Though not helpful for the question of which model describes the data best, these approaches were allowed. 
Taking a look at the difference between the performance of these approaches on Test Set (1) &amp; Test Set (2), it becomes clear that they backfired.</li> </ul> <h5>Generalize/regularize well</h5> <ul> <li>The only strategy in this case (strong overfitting) is choosing generalized/regularized approaches.</li> <li>That is, the model is allowed to make more errors so as not to fit the data as perfectly as before.</li> <li>One accepts a larger test error in return for greater consistency in predictions.</li> </ul> <h5>Choose your submissions</h5> <ul> <li>During the competition, the best performing approaches reached eval metrics of 0.06 and below. Many approaches favored post-processing. A large part of very well performing models centered around approaches with an eval metric of about 0.11. Better performing models in general were around 0.20 or smaller.</li> <li>A model that performed with an eval metric of around 0.69 made (0.50, 0.50) predictions for all data points, basically.</li> <li>I found the approaches with a test balanced log loss &gt; 0.11 and &lt; 0.3 the most convincing. Though in doubt, I decided on submitting one approach in the middle, and one more generalized approach.</li> </ul> <h5>Final Results</h5> <ul> <li>Due to the particularities of the competition, a (huge) shake-up from Test Set (1) evaluation to the final evaluation on Test Set (2) was to be expected. - (No) Surprise!: There was indeed a huge shake-up among all the participants!</li> <li>I should have decided on, and worked more on, approaches that generalize well!</li> </ul>
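<p>A quick worked example of why the metric behaves as described — a coin-flip model scores ln 2 ≈ 0.693, while one confidently missed positive dominates everything else:</p> <pre><code>import numpy as np

def balanced_log_loss(y_true, p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    l0 = -np.mean(np.log(1 - p[y_true == 0]))
    l1 = -np.mean(np.log(p[y_true == 1]))
    return (l0 + l1) / 2

y = np.array([0, 0, 0, 1])
print(balanced_log_loss(y, np.full(4, 0.5)))                  # 0.693: the coin flip
print(balanced_log_loss(y, np.array([0.1, 0.1, 0.1, 0.99])))  # ~0.06: confident, right
print(balanced_log_loss(y, np.array([0.1, 0.1, 0.1, 0.01])))  # ~2.36: confident, wrong
</code></pre>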
ICR - Identifying Age-Related Conditions
Naive undersampler for the gold medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a strange competition with a huge leaderboard shakeup. This is my solution, which would have resulted in a gold medal had I chosen <strong>a random seed of 42 (public score 0.22408, private score 0.3669)</strong> rather than <strong>a random seed of 19 (public score 0.20461, private score 0.41544)</strong> which resulted in a bronze medal.</p> <pre><code>from pathlib import Path

import catboost as cb
import numpy as np
import pandas as pd
from imblearn.under_sampling import RandomUnderSampler
from sklearn.model_selection import StratifiedKFold, train_test_split

data_path = Path("/kaggle/input/icr-identify-age-related-conditions")
seed = 19  # the submitted run; seed 42 would have scored 0.3669 private


def balanced_log_loss(y_true, y_pred):
    # competition metric: log loss with both classes weighted equally
    y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
    n0, n1 = np.bincount(y_true.astype(int))
    w0 = 1 / (n0 / len(y_true))
    w1 = 1 / (n1 / len(y_true))
    l0 = -w0 / n0 * np.sum(np.where(y_true == 0, 1, 0) * np.log(1 - y_pred))
    l1 = -w1 / n1 * np.sum(np.where(y_true != 0, 1, 0) * np.log(y_pred))
    return (l0 + l1) / (w0 + w1)


train_df = pd.read_csv(data_path / "train.csv")
test_df = pd.read_csv(data_path / "test.csv")
greeks_df = pd.read_csv(data_path / "greeks.csv")

feature_cols = train_df.columns.tolist()[1:-1]  # drop Id and Class

# hold out a stratified validation split before any resampling
train_df, val_df = train_test_split(
    train_df, test_size=0.2, stratify=train_df["Class"], random_state=seed
)
greeks_df = greeks_df.loc[train_df.index]
train_df = train_df.reset_index(drop=True)
greeks_df = greeks_df.reset_index(drop=True)
train_df.drop(columns=["Id"], inplace=True)

# naive undersampling: balance the classes before the CV loop
under_sampler = RandomUnderSampler(random_state=seed)
train_df, _ = under_sampler.fit_resample(train_df, train_df["Class"])

oof = np.zeros(len(train_df))
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
test_probs = []
val_preds = []
fold = 0
for train_idx, val_idx in skf.split(train_df, train_df["Class"]):
    print(f" fold {fold} ".center(40, "="))
    fold += 1
    X_train, y_train = (
        train_df.loc[train_idx, feature_cols],
        train_df.loc[train_idx, "Class"],
    )
    X_val, y_val = (
        train_df.loc[val_idx, feature_cols],
        train_df.loc[val_idx, "Class"],
    )
    # values below are plausible stand-ins: the original literals were lost in formatting
    params = {
        "iterations": 1000,
        "learning_rate": 0.05,
        "eval_metric": "Logloss",
        "random_seed": seed,
    }
    model = cb.CatBoostClassifier(**params)
    model.fit(
        X_train, y_train, eval_set=[(X_val, y_val)], cat_features=["EJ"], verbose=0
    )
    oof[val_idx] = model.predict_proba(X_val)[:, 1]
    val_preds.append(model.predict_proba(val_df[feature_cols])[:, 1])
    test_preds = model.predict_proba(test_df.iloc[:, 1:])
    test_probs.append(test_preds)

# report out-of-fold and holdout scores
print(balanced_log_loss(train_df["Class"].values, oof))
print(balanced_log_loss(val_df["Class"].values, np.mean(val_preds, axis=0)))

test_probs = np.mean(test_probs, axis=0)
sub_df = pd.DataFrame(
    {"Id": test_df.Id, "class_0": test_probs[:, 0], "class_1": test_probs[:, 1]}
)
sub_df.to_csv("submission.csv", index=False)
</code></pre>
ICR - Identifying Age-Related Conditions
Shakeup was expected: trust your work and CV! (one submission - silver medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/pedram" target="_blank">@pedram</a>, yes, I also think epsilon is a key factor. I regret that I entered so late. Good week!</p> <p>I earned a silver medal with only one submission, and there are some ideas behind it that I want to share. Having so many good scores and so much probing on the public LB was not good motivation for me at all, but still, with the pieces of work done so far, I decided to put the pieces together, learn something by testing some ideas and have some fun. I had a submission ready just two hours before the deadline. It was definitely a good choice to participate <strong>(Rank=201, CV=0.227, Public LB=0.21, Private LB=0.39)</strong>. </p> <p>During my work in the competition, my goal was to get a robust model; a huge shakeup was expected (new data coming with different conditions): CV works, and with so much probing there was no way to guide my submissions by others' results. The following is a summary of things I did: </p> <p><strong>1. Models:</strong> Many models (starting with linear discriminant) might be good candidates when robust models are needed: combined with XGBoost, nice results could be obtained. I tried some simple ones whose R packages were already installed by Kaggle (few). To gain robustness, I explored the CV asynchronously.<br> <strong>2. Additional information:</strong> Use of the greeks dataset<br> a. CV folds stratification with greeks subgroups<br> b. Targeting Alpha/ Split Alpha instead of Class<br> c. Dealing with imbalance by splitting Class 0 with greeks subgroups<br> <strong>3. Missing values:</strong> Deciding how to handle missing values, particularly trying to gain some insight into the nature of the 60 missing values from BQ and EL; for me, the only ones of serious concern. Since I reached no conclusion, I used the preProcess tool from the caret package that uses bagged trees for imputation.<br> <strong>4. Influential observations:</strong> Applying the empirical distribution function Fn(x) to preprocess the RHS. <br> <strong>5. Objective and metric:</strong> I did not build one for the multiclass classification cases.<br> <strong>6. Target imbalance:</strong> See the splendid post by Chris Deotte (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">here</a>)</p> <p><strong>Submission</strong><br> The submission was a weighted blend of four settings: </p> <ul> <li>Two models (XGBoost and SVM with radial basis) </li> <li>Two transformed datasets (one with standard scaling, the other applying the empirical Fn(x)). </li> </ul> <p><strong>CV</strong><br> Five stratified CV folds (16 strata) with all data. I could only do a test with one holdout sample (to really test blending and data transformations).<br> The interesting part, at least for me, is the target. The purpose of it was to (partially) handle imbalance. 
There are 9 classes in this experiment:</p> <table> <thead> <tr> <th>Target</th> <th>A</th> <th>B</th> <th>D</th> <th>G</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>51</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>97</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>2</td> <td>125</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>3</td> <td>117</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>4</td> <td>55</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>5</td> <td>64</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>6</td> <td>0</td> <td>61</td> <td>0</td> <td>0</td> </tr> <tr> <td>7</td> <td>0</td> <td>0</td> <td>18</td> <td>0</td> </tr> <tr> <td>8</td> <td>0</td> <td>0</td> <td>0</td> <td>29</td> </tr> </tbody> </table> <p>After prediction, the columns corresponding to each <strong>Class</strong> must be collapsed. <br> Note that there are 6 for class_0 and 3 for class_1. A good choice (why?) is the mean of class_0 and the sum of class_1. Blending of models was done via an exhaustive grid search.</p> <p>In summary, this was my submission; I hope you liked it.</p> <p>Sure! CV (mean and std) are key together.</p> <p><a href="https://www.kaggle.com/pedram" target="_blank">@pedram</a> In this case we have to extrapolate from few patients, since <em>"medical conditions requires a long and intrusive process to collect information from patients" (from the goal of the competition)</em> and the need to predict confidently is high. In other areas we really are interpolating when an inference is done. So, yes; I partially agree with you about it being a lottery for winning the competition, but hopefully some models can help to get better predictions. </p> <p>Thanks, Chris! I wish I had the time to read more interesting posts!</p>
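<p>A sketch of the collapsing step for the 9-class target above (the renormalization at the end is my addition to turn the collapsed scores back into a probability; the original applied blending via grid search on top):</p> <pre><code>import numpy as np

def collapse_nine_classes(probs):
    """probs: (n, 9) array ordered as in the table; columns 0-5 are the
    Class-0 strata, columns 6-8 are the B/D/G conditions (Class 1)."""
    p0 = probs[:, :6].mean(axis=1)  # mean over the six Class-0 strata
    p1 = probs[:, 6:].sum(axis=1)   # sum over B, D and G
    return p1 / (p0 + p1)           # renormalize to P(Class = 1)
</code></pre>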
ICR - Identifying Age-Related Conditions
Silver Medal - LB=0.39 - Sliding Time Cross Validation!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you ls </p> <p>Thanks Nam</p> <p><a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> I have 25 folds. Each fold has validation data size 120 and train size 490. Each fold is represented by a tick mark on the x-axis with a date. The corresponding validation data is all rows up to and including that date when rows are sorted by date. The train data is everything else. Therefore it is bi-directional in time, and the train data is always size 490.</p> <p>We compute 25 CV scores. Then we take the mean of these scores. Instead of taking the mean, we could take the mean plus 1 std OR the mean plus 2 std. Using a different "overall CV score" would motivate different decisions.</p> <p>I simulated my "LB probing sub" on my validation setup and it only worked on 20 folds (i.e. 80%). So if private were random it would have worked in 80% of cases. But the host did say this problem was hard, so maybe we could have assumed that private LB would be like one of the difficult folds.</p> <p>So perhaps I could have selected my 3rd submission, which did best on those 20% difficult folds.</p> <p>Great point <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a>, but no, I did not optimize hyperparameters for the worst-case scenario but rather the average-case scenario. I built my model based on the <strong>mean</strong> behavior over the 25 time windows. <strong>In retrospect it seems obvious</strong> (because the host created this competition since they were having difficulty with prediction) that I should have assumed private would be a difficult time period, and I should have tuned everything to the 25% most difficult time windows.</p> <p>This is what <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> says <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431532" target="_blank">here</a>. If we knew ahead of time that private test would be the worst case, then for CV we could have used <code>mean fold +1 or 2 std folds</code>. This would tune the model to the hardest-case scenario. Instead of this, I tuned to <code>mean fold +0 std folds</code>, the average case.</p> <p>When I say "tune", I refer to all decisions. Which models to use, which hyperparameters, which features to eliminate, which "conservative" threshold to use, which data to probe, which post-processing to use to balance classes, which up/down sampling strategy to use. Etc etc. All of my decisions were based on the average-case scenario.</p> <p>From the beginning of this competition, I decided that I would use an LB probing strategy, which is a risk in itself. So I designed my model for the "average case" to avoid more risk.</p> <p>I would love to repeat this competition and attempt it without LB probing. I think there are many mysteries in this data that have not been discovered yet nor taken advantage of, such as my discussion about time <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431519" target="_blank">here</a>. </p> <p>It is quite possible that we can teach a model to use time correctly and boost performance. 
Since time is so important, I'm surprised that the host did not help us and provide more info about why time affects this dataset. Solving time might be why the 1st place solution <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430843" target="_blank">here</a> using a Temporal Fusion Transformer works so well. </p> <p>You are correct. This made a big difference regarding submission errors. I will update my comment to be more clear about the "difference" I was speaking about (namely model accuracy).</p> <p>For example, a poor private LB score could have been caused by an entire column being all NAN. Imagine if train has data in the GL column and then private test has 100% NAN in the GL column. This would really hurt model accuracy. But I did not detect any weird NAN problems affecting accuracy.</p> <p>UPDATE: I added a post-comp fun fact to my solution writeup above</p> <blockquote> <p>This builds directly into the CV score any gambles and any hedging you are doing with two complementary submissions.</p> </blockquote> <p><a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> That is a very smart approach to hedge for both cases. Both of my final submissions hedged to the same 84%. My sub 1 was LB probe and my sub 2 was LB probe+PP. My simulations showed they both worked on the same 84% of folds (i.e. folds 1 thru 20). </p> <p>So if the private LB was like the other 16% of difficult folds (which are folds 21, 22, 23, 24, far right in my discussion post plot), then both my risks failed. Which is what happened 😀</p> <blockquote> <p>it's actually reallllly interesting that public LB didn't 'work' on predicting the newest train data</p> </blockquote> <p>Yes, this is <strong>very</strong> surprising, especially because this seems to be time-dependent prediction and public test is the most recent data available!</p> <p>Yes, I'm very impressed with default CatBoost parameters. I will try default CatBoost in future comps.</p> <p>Thanks YaGana</p> <p>Great suggestion. It seems that private LB is like the two tails. So if I had optimized my models for the tails, that would most likely have improved my private LB. Thanks.</p> <p>Thanks Ankur</p> <p>Exciting competition! Thanks Kaggle and ICR. I had three final submissions to choose from:</p> <ul> <li>Time-CV Model with no funny business - Code published <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a> and achieves Silver Medal</li> <li>LB Probing Pseudo Label model 😀 (probing explained <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430764#2386580" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431275" target="_blank">here</a>. And shown <a href="https://www.kaggle.com/code/cdeotte/use-public-lb-probing-data-lb-0-0" target="_blank">here</a>)</li> <li>LB Probing Pseudo Label with conservative PP Threshold model 😀 (threshold analysis shown below)</li> </ul> <p>What do they say? "Go Big or Go Home!" For my final two submissions, I did not pick my first bullet point above (which I suspected would win a medal). Instead I chose the last 2 bullet points in an attempt to win 1st place! Note that using probed positive targets from public LB actually hurt my private LB score. And even a very conservative PP threshold (<code>p[p&lt;0.1]=0</code>) hurt my private LB score. 
Both risks failed 😀</p> <p>Let's discuss my solution, which did not include any funny business.</p> <h1>Sliding Time CV Scheme</h1> <p>First we sort all the train data by date. Next we assign a random date to the train samples without a date. Afterward we slide a window with indexes <code>START</code> and <code>END</code>. Then our validation data each fold is all rows from <code>START</code> to <code>END</code> (reconstructed sketch below; the original snippet was garbled in extraction):</p> <pre><code># sliding-window split; STEP and VAL_SIZE are window constants,
# START/END as described above, train each fold is everything else
for fold in range(25):
    START = fold * STEP
    END = fold * STEP + VAL_SIZE
    X_valid = train.iloc[START:END]
</code></pre> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/time-cv.png" alt=""></p> <h1>Why Use Time CV?</h1> <p>The purpose of a local validation is to mimic the relationship between train data and test data. In this competition the host tells us that all the test data occurs after (in time) the train data. In other words, we must train a model on data from one time period and predict data from another time period.</p> <p>Normally I would not think that time would influence what the model learns about medical diagnosis; however, in this competition, if you train an XGB model with time as a feature column it becomes the most important feature, even after fillna() with random dates for the train data without dates. </p> <p>This means that the data is changing. Perhaps the data is collected in one location on one set of machines during one time period and later collected by different machines that are calibrated differently. For some reason, the relationship between features and targets is changing over time!</p> <h1>OOF Validation Scores</h1> <p>We observe that different time periods have different difficulty when predicting. Notably, easy time periods have an OOF validation balanced log loss of <code>0.15</code> whereas difficult time periods have <code>0.40</code>. Since the host said this was a difficult problem, we can deduce that public test is an easy time and private test is a hard time:</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/val-date.png" alt=""></p> <h1>CV vs Public LB vs Private LB</h1> <p>My time-CV model has an average local <code>CV = 0.27</code> and <code>public LB = 0.17</code> and <code>private LB = 0.39</code>. See notebook <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a>. When using time CV, improvements in CV lead to improvements in public and private LB.</p> <h1>Analyze "Risky" Post Processing</h1> <p>Once we have a local validation that mimics the LB, we can analyze PP. For each of the 25 OOF folds, we can find the best Top and Bottom thresholds and plot them (a sketch of this search appears at the end of this write-up). We notice that during some time periods thresholding works and during others it does not. If we pick the median bottom threshold (i.e. <code>pred[pred&lt;median_bottom]=0</code>), then CV informs us that we have a 50% chance of it working if private test is a "random time period". </p> <p>Note: to find thresholds for private LB, we must redo the plot below using the size of private LB, which is around 200 samples. The plot below is for 120 samples per validation fold and comes from my notebook <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a>. The Yellow line is the best top threshold for each fold (i.e. time period). The Green is the bottom. The Blue line is the validation fold balanced log loss metric. And the Red line is how much better each fold can become using the optimal thresholds. 
We note that the best validation fold with PP achieves <code>0.06</code>, just like the public LB. And we note that the worst validation fold does not improve from PP, just like private LB.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/time-thresholds2.png" alt=""></p> <h1>Best Fold OOF Predictions - (Balanced Log Loss like Public LB)</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/best-fold.png" alt=""></p> <h1>Worst Fold OOF Predictions - (Balanced Log Loss like Private LB)</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/worst-fold.png" alt=""></p> <h1>Post Competition Fun Fact</h1> <p>After the competition ended, I was curious why the private test is harder to predict than the public test and train data. All three have approximately 17.5% <code>target=1</code>, which makes them seem similar. </p> <p>First I checked the categorical feature <code>EJ</code> (which equals <code>A</code> or <code>B</code>) and found that train, public, and private have 64%, 62%, and 50% of <code>EJ=B</code> respectively. Next I checked NAN and found that two additional test columns, <code>EH</code> and <code>FD</code>, have a few NAN whereas train has no NAN in these columns. This NAN discrepancy most likely caused many submission errors, but neither of these two discoveries makes a big difference in model accuracy, I think. (i.e. I did not find any weird test columns with 100% NAN or something crazy like that.)</p> <p>Next, using the probing techniques in the notebook <a href="https://www.kaggle.com/code/cdeotte/probe-alpha-type-of-class-1-in-public-private" target="_blank">here</a>, we see that private test has more of the difficult <code>Alpha = D</code> type of <code>target=1</code> Class 1. I think this helps explain what changed over time to make private test more difficult than public test and train data.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/class1.png" alt=""></p> <p>Thanks Tilii and Kaggleqrdl</p> <p>I tried CatBoost. It's weird: if I add 10% CatBoost to the ensemble it does not improve the CV score. I tried a dozen models; the only combination that improved the ensemble score was TabPFN + XGB + LGBM.</p> <p><a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> I was not able to probe the private LB because I did not identify all samples (both pos and neg) from public LB. I only identified all the positive samples from public LB. (Then to score LB=0, we set 26 predictions to <code>pred=1</code> and everything else to <code>pred=0</code>.)</p> <p>Identifying all 124 negative samples is much more difficult and requires many more submissions than identifying all 26 positive samples. (And we cannot probe private test until we identify all of public test.)</p> <p>The reason I use bi-directional is that I assumed the reason time affects the data is that it was collected from different sources at different time periods. If that is the reason, then using a forward and backward time CV mimics having one data collection in validation and another data collection in train.</p> <p>If time has a forward effect, then you are correct that the CV should only use validation which is forward of train. Now that the competition has ended, I am believing more that forward is the way to go. 
See post <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431519" target="_blank">here</a></p> <blockquote> <p>if you had a rough idea of class balance, wouldn't it help scale your classifiers more accurately?</p> </blockquote> <p>I don't think so. Since the metric is balanced log loss, we scale to 0.5 regardless of the Class mean. If you want to try some experiments, the facts are: private test has 210 samples with 37 positives, which is 17.6% positive. (And public has 150 samples with 26 positives, which is 17.3% positive.)</p> <p>What may be helpful is trying to estimate the distribution of <code>Alpha</code>. We could train a multiclass CatBoost to predict it. I wonder what the proportions of the 3 diseases are in private test versus public test. It might be interesting to discover whether the host moved certain diseases into private and certain into public.</p> <p><a href="https://www.kaggle.com/wisawesome" target="_blank">@wisawesome</a> thanks. I think there could be many reasons. The data can come from different sources that used different collection techniques, etc. Another theory is from <a href="https://www.kaggle.com/lucasmorin" target="_blank">@lucasmorin</a>'s post <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431149" target="_blank">here</a>. </p> <p>It is possible that the data contains multiple samples from the same person which share similar features but different targets. In this case it is important to put some of these samples in train and some of these samples in validation. Time-CV does this for us.</p> <p>Because it might be the case that train data contains samples from <code>person A</code> and then test data also contains data from <code>person A</code>. If <code>person A</code> has target=0 in train data, the model cannot be too confident, because <code>person A</code> might have target=1 in test data. And a person's features may not change much over time. This may be why we are seeing less confident models like default-parameter CatBoost doing better on private test.</p> <p><a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> I published a notebook <a href="https://www.kaggle.com/code/cdeotte/use-public-lb-probing-data-lb-0-0" target="_blank">here</a> with all the test index ids of the public test LB. Therefore, if you want, you can now probe the private LB by removing these rows from test data and computing statistics on the other test rows (and then returning results via different public LB scores). And test your theories about using private distribution information to improve private score.</p> <p>Post-comp I have only checked NAN in test. The test data contains two additional columns which contain NAN, which are <code>EH</code> and <code>FD</code>. Each has a few, and I don't think their presence affects the LB score very much.</p> <p>Personally I don't think the public test or private test is that weird or manipulated. Based on my sliding time CV, it appears that public test is just an easy time period and private test is a hard time period. In other words, they don't appear to be easier or more difficult than periods in train data. And I suspect that each is a different time interval, with <code>train &lt; public &lt; private</code>.</p> <p>If I had gotten first place in this competition with my crazy LB probe idea, I would have gained 60,000 points and become Kaggle Overall #1 in the Global Competition ranking 😀</p>
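<p>To make the post-processing analysis above concrete, here is a minimal sketch (my own illustration, not the author's published code) of the balanced log loss metric as I understand it, a per-fold search for the best Bottom/Top clipping thresholds, and the mean-plus-k-std "overall CV score" mentioned earlier. The threshold grid and the dummy fold scores are assumptions.</p> <pre><code>import numpy as np

def balanced_log_loss(y, p, eps=1e-15):
    # average of the per-class mean log losses
    p = np.clip(p, eps, 1 - eps)
    l0 = -np.mean(np.log(1 - p[y == 0]))
    l1 = -np.mean(np.log(p[y == 1]))
    return (l0 + l1) / 2

def best_thresholds(y, p, grid=np.linspace(0.0, 0.5, 51)):
    """Clip low predictions to 0 (Bottom) and high ones to 1 (Top),
    keeping whatever pair most improves the metric on this fold."""
    best = (0.0, 1.0, balanced_log_loss(y, p))
    for bottom in grid:
        for top in 1 - grid:
            q = p.copy()
            q[q &lt; bottom] = 0.0
            q[q &gt; top] = 1.0
            score = balanced_log_loss(y, q)
            if score &lt; best[2]:
                best = (bottom, top, score)
    return best  # (bottom, top, score) for this fold

# "overall CV score" variants over the per-fold scores
fold_scores = np.array([0.15, 0.27, 0.40])  # dummy values for illustration
for k in (0, 1, 2):
    print(k, fold_scores.mean() + k * fold_scores.std())
</code></pre> <p>Tuning to <code>mean + 1 or 2 std</code> rather than the plain mean is exactly the "hardest-case" aggregation discussed above: it penalizes models whose fold scores vary wildly across time windows.</p>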
ICR - Identifying Age-Related Conditions
Silver Medal Solution Write-up (Top 3%)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you so much, Chris! You really helped me out. </p> <p>What is your opinion on the feature engineering aspect of my solution? It can be argued that it was risky and could've led to overfitting to those particular CV splits even with multiple seeds, especially since my score is so far away from my mean. Then again, as you said, a lot of competitors foresaw the testing distribution being very different from the training one.</p> <p>Howdy! </p> <p><strong>Notebook:</strong> <a href="https://www.kaggle.com/code/conradkrueger/silver-medal-catboost-no-postprocess-or" target="_blank">https://www.kaggle.com/code/conradkrueger/silver-medal-catboost-no-postprocess-or</a></p> <p><strong>Model:</strong> My model is just an ensemble of 10 binary CatBoostClassifiers from CV.</p> <h5>Feature Engineering</h5> <p>Baseline GBT models on the default dataset were not it for me. So I turned to feature engineering to generate new features via arithmetic operations.<br> Huge shout-out to <a href="https://www.kaggle.com/tatudoug" target="_blank">@tatudoug</a>'s Logistic Regression Baseline notebook (<a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline" target="_blank">https://www.kaggle.com/code/tatudoug/logistic-regression-baseline</a>).<br> That notebook inspired me to try other arithmetic operations and see if GBTs could find any discriminating features.</p> <h5>Feature Selection</h5> <p>From this point, I tried to be as robust as possible when doing feature selection.<br> <code>RepeatedStratifiedKFold(10, 10)</code> was used to pick features (RFE gave me great results); a sketch of this setup follows this write-up. I did nested CV with multiple seeds to see if ensembling the models in the inner CV loop would lead to better results. Shout-out to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for talking about nested CV in discussion posts.<br> My final decision was to select my best 2 CVs.</p> <h5>Results</h5> <ul> <li>CV: 0.157, STDDEV: 0.67</li> <li>Public LB: 0.21</li> <li>Private LB: 0.39</li> </ul> <p>My best submission (same strategy, but different features were generated) scored 0.37 on private and 0.21 on public. </p> <p>Looking at the standard deviation, it was not surprising to see my model score 0.21 on the public LB. My logic was to hope my model was robust enough and that many Kagglers had overfitted to the public LB (and, of course, that there was definitely, totally zero luck involved).</p> <p>Let me know what y'all think! And congrats to those who had the guts to include Epsilon in their final submission.</p>
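<p>As an illustration of the pipeline described above, here is a minimal sketch of pairwise arithmetic feature generation followed by RFE inside <code>RepeatedStratifiedKFold(10, 10)</code>. The operation set, the vote-counting aggregation, and the CatBoost settings are my assumptions, not the author's exact configuration.</p> <pre><code>import pandas as pd
from itertools import combinations
from catboost import CatBoostClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import RepeatedStratifiedKFold

def make_arith_features(df, cols):
    """Generate new features via arithmetic operations on column pairs."""
    out = df[cols].copy()
    for a, b in combinations(cols, 2):
        out[f"{a}_div_{b}"] = df[a] / (df[b] + 1e-9)
        out[f"{a}_mul_{b}"] = df[a] * df[b]
    return out

def select_features(X, y, n_keep=30, seed=0):
    """Run RFE on each train split and count how often each feature survives."""
    votes = pd.Series(0, index=X.columns)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=seed)
    for tr, _ in cv.split(X, y):
        rfe = RFE(CatBoostClassifier(iterations=200, verbose=0),
                  n_features_to_select=n_keep)
        rfe.fit(X.iloc[tr], y.iloc[tr])
        votes[rfe.support_] += 1
    return votes.sort_values(ascending=False)  # keep the top vote-getters
</code></pre> <p>Note this is 100 RFE fits, so in practice you would prune the candidate features or the repeats first; the point is that selection happens inside the CV loop, never on the full data.</p>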
ICR - Identifying Age-Related Conditions
Silver medal Solution 0.3995 with CatBoost and Optuna
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm very excited with the results. This is my first Kaggle competition and the result is very good for me.<br> I had several tries with other similar models like XGBoost and LightGBM, but the best result was achieved using CatBoost, optimizing the hyperparameters with Optuna.<br> It is a simple solution; although the result is not the best, it is very good for my level of knowledge.<br> The solution link is:</p> <p><a href="https://www.kaggle.com/code/andrsvercesi/icr-eda-and-modeling-silver-medal" target="_blank">https://www.kaggle.com/code/andrsvercesi/icr-eda-and-modeling-silver-medal</a></p> <p>Greetings and see you in the next competition</p>
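<p>For readers who want to reproduce this kind of setup, here is a minimal sketch of tuning a CatBoostClassifier with Optuna. The search space, the CV scoring, and the placeholder data are my assumptions, not the author's exact notebook.</p> <pre><code>import optuna
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# placeholder data shaped roughly like the ICR train set
X, y = make_classification(n_samples=617, n_features=56, random_state=0)

def objective(trial):
    params = {
        "iterations": trial.suggest_int("iterations", 200, 1000),
        "depth": trial.suggest_int("depth", 3, 8),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "l2_leaf_reg": trial.suggest_float("l2_leaf_reg", 1.0, 10.0),
        "verbose": 0,
    }
    model = CatBoostClassifier(**params)
    # maximize negative log loss == minimize log loss
    return cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>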
ICR - Identifying Age-Related Conditions
Silver medal for a First timer, Thank you for your inputs
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>🙇<strong>Credits to the users who contributed on the Code and Discussion boards; you are my extended team and you helped me learn a lot of new things. Thanks a ton 🙇</strong></p> <p><em>What did I do?</em><br> Data Analysis and Feature Engineering:<br> Identified non-numeric columns and converted them using label encoding.</p> <p>Initially I normalized all the columns, but later realized a few of the models, like TabPFN, don't need normalization.</p> <p>Identified null values and filled them with the median.<br> Identified correlations between features, including the greeks Alpha, Beta, Gamma, Delta; dropped Epsilon (a difference from the toppers).<br> Observed that Alpha and Gamma have a strong correlation (100%) with the Class output. </p> <p>So I got the idea that Alpha and Gamma can be used as target variables, making it a multi-class classification problem besides Class.</p> <p>Boxplots of each feature showed that some features have extreme outliers, so I removed outliers more than 3 standard deviations away (again, this is a somewhat arbitrary decision, as we don't know the features' meaning and significance).</p> <p>The Class distribution is highly imbalanced, with 509 records of Class 0 and 108 records of Class 1. So I used a RandomOverSampler to balance the data, one version based on Class and the other based on Gamma.</p> <p><em>Model Selection:</em><br> As it is a classification problem, first I tried an ensemble of Logistic Regression, Random Forest, Decision Tree, and neural networks, averaging the output probabilities; it didn't work that great. I observed Decision Trees were doing well on this set. <br> Later I came to know about TabPFN, which works well with tabular data; with Gamma as the output, TabPFN was giving 0.965 accuracy. <br> As Random Forest was doing well, I thought I should use XGB for Class, and it gave a mean accuracy of 0.931.<br> For XGB, I took a few of the hyperparameters from other published notebooks in the competition and tuned them further. </p> <p><em>Evaluation:</em><br> Used K-fold cross-validation for XGB with k=5. With the initial XGB parameters the spread of CV scores was high; after some hyperparameter tuning I found the best parameters, with good mean accuracy and low standard deviation. <br> For TabPFN I used a 0.33 train/test split and verified accuracy against the output predictions (I should have adopted better training and evaluation here).</p> <p><em>Submission:</em><br> Here is the tricky part: we have 3 potential output features, Class, Alpha, and Gamma, and I needed to see which combination of them gives better output. After trying a few of these combinations, I figured Class and Gamma were giving good results during submission, so I picked that for the final submission (a sketch of the overall setup appears at the end of this write-up). </p> <p>*** I was pretty disappointed to be very low on the leaderboard; people were scoring 0.06, 0.03, and finally 0.00. One of them published a 0.06 notebook; after looking at it, I understood he was, in a way, cheating: adjusting the probabilities to align with the binary log loss calculation to make a better score. In the end, I made the choice to pick my own work instead of their published notebooks. 
If you see the final leaderboard, none of these guys even made the top 200. So please trust your own work; don't copy anything you don't understand. </p> <p>*** I did not use the Epsilon time from the greeks, but I see that other teams used it. Looking at the values, I didn't feel like using it, especially as the final data wouldn't have those values. Similarly for some other features: as I didn't understand what they meant, I didn't use them. Maybe I am wrong. </p> <p>*** Keep following the code shared by other teams and take insights from it, but don't lose your strategy because of them; experiment with those points and evolve your approach to be much better. </p>
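<p>To make the preprocessing and two-target setup above concrete, here is a minimal sketch. It is my own illustration under assumptions (column names, a shared row index between train and greeks, and a per-feature 3-standard-deviation cutoff), not the author's notebook.</p> <pre><code>import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder

def preprocess(train: pd.DataFrame, greeks: pd.DataFrame):
    df = train.drop(columns=["Id"], errors="ignore").copy()
    # label-encode non-numeric columns (ICR has one, "EJ")
    for col in df.select_dtypes(exclude="number").columns:
        df[col] = LabelEncoder().fit_transform(df[col].astype(str))
    # fill nulls with the median
    df = df.fillna(df.median(numeric_only=True))
    # drop rows with any feature beyond 3 standard deviations
    feats = df.drop(columns=["Class"])
    z = (feats - feats.mean()) / feats.std()
    df = df[(z.abs() &lt;= 3).all(axis=1)]
    gamma = greeks.loc[df.index, "Gamma"]  # assumes aligned row index
    # oversample twice: one copy balanced on Class, another on Gamma
    X = df.drop(columns=["Class"])
    X_cls, y_cls = RandomOverSampler(random_state=0).fit_resample(X, df["Class"])
    X_gam, y_gam = RandomOverSampler(random_state=0).fit_resample(X, gamma)
    return (X_cls, y_cls), (X_gam, y_gam)
</code></pre> <p>The Class copy would feed the XGB classifier and the Gamma copy the TabPFN multi-class model; the two predictions can then be combined along the lines described above.</p>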
ICR - Identifying Age-Related Conditions
This is my solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was very unexpected; I didn't even think about getting into the top 30%, let alone the top 10%. Well, my solution is mostly ensemble algorithms and fine-tuning; the details are in the following link. Thanks. PLEASE upvote, I want the swag!</p> <p><a href="https://www.kaggle.com/code/quangnhatbui/icr-eda-and-model-selection-xgb-randomforest" target="_blank">https://www.kaggle.com/code/quangnhatbui/icr-eda-and-model-selection-xgb-randomforest</a></p>