Google - American Sign Language Fingerspelling Recognition
17th Solution: Conformer + CTCLoss + 500 epoch training
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>TL; DR</h1> <p>Fixed-length input (220 frames), padding shorter inputs and resizing longer inputs.<br> 2-layer MLP landmark encoder + 6-layer 384-dim Conformer + 1-layer GRU and 500 epochs of training (takes ~8 hours on Kaggle TPUs).<br> Post-processing (+0.003 in CV and LB) by:</p> <pre><code>if len(prediction) &lt;= 5:
    prediction = prediction + " -aero"
</code></pre> <p>My notebook is available at <a href="https://www.kaggle.com/code/nightsh4de/ctc-transformer/notebook" target="_blank">https://www.kaggle.com/code/nightsh4de/ctc-transformer/notebook</a><br> It is primarily based on the previous 1st place solution <a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">https://www.kaggle.com/code/hoyso48/1st-place-solution-training</a> and some public notebooks from this competition:<br> <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference#Landmark-Embedding" target="_blank">https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference#Landmark-Embedding</a><br> <a href="https://www.kaggle.com/code/irohith/aslfr-transformer" target="_blank">https://www.kaggle.com/code/irohith/aslfr-transformer</a> <br> <a href="https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu" target="_blank">https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu</a>. Many thanks to them.</p> <h1>Data Preprocessing and Augmentation</h1> <p>Basically the same as the previous 1st place solution, but I found that using 3D positions (i.e. including depth) and pose landmarks gives better CV and LB scores.<br> My input: left and right hand, eye, nose, lips and pose landmarks.</p> <h1>Model</h1> <p>I believed this task would be quite similar to Automatic Speech Recognition (ASR), so I used a Conformer <a href="https://arxiv.org/abs/2005.08100" target="_blank">https://arxiv.org/abs/2005.08100</a></p> <h1>Post-processing</h1> <ol> <li>I checked my worst predictions in the validation set and found that shorter predictions are worse, and most very short predictions (length less than 5) are basically predicting nothing, e.g. single characters like "a" or a space " ".</li> <li>I found that only a few labels have a length less than or equal to 5. </li> <li>So adding some make-up phrases to very short predictions might be a good idea.</li> <li>I picked the most common chars in the training set: "a", "e", "r", "o", "-", " ".</li> <li>I tested all combinations of "a", "e", "r", "o", "-", " " on the validation set, and " -aero" was the best.</li> </ol> <h1>What didn't work</h1> <ol> <li>Autoregressive transformer models. <br> I spent most of my time on transformers, but they strongly overfit the phrases and I couldn't find a way to solve it.<br> The problem is: for an input X[1…n] with phrase "123456789", we would expect our model to predict "12345" for the first half of the input X[1…n/2]. <br> For CTC models, this holds. But my autoregressive model always makes a half-correct prediction "12345" plus a half random incorrect prediction, e.g. 
some random 5-digit numbers.</li> <li>Masking for variable-length input<br> I tried to use the same masking techniques as the previous 1st place solution <a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">https://www.kaggle.com/code/hoyso48/1st-place-solution-training</a> to support longer input frames.<br> However, masked models always produced lower CV and LB scores (-0.02). I believe this is because masking limits the output length, while shorter inputs may still require a longer output space. Although, unfortunately, I didn't have time to verify it, I believe it might enable much better solutions.</li> <li>Empty embedding for fixed-length input<br> I tried to use learnable constant weights for the padded empty frames and a 2-layer MLP for encoding the landmarks in the original frames. This seemed to make more sense to me, but the CV score was worse than directly encoding all frames. </li> </ol>
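<p>To make the fixed-length input from the TL;DR concrete, below is a minimal TensorFlow sketch of padding shorter sequences and resizing longer ones to 220 frames. The function name and the eager-style control flow are illustrative assumptions, not the exact preprocessing code from the notebook.</p> <pre><code>import tensorflow as tf

MAX_FRAMES = 220  # fixed input length from the TL;DR

def to_fixed_length(frames, max_frames=MAX_FRAMES):
    # frames: float32 tensor of shape [T, F] (landmark coordinates flattened per frame).
    # Runs eagerly; inside a tf.data pipeline tf.cond would replace the Python if.
    t = tf.shape(frames)[0]
    if t &lt;= max_frames:
        # pad shorter sequences with zeros at the end
        return tf.pad(frames, [[0, max_frames - t], [0, 0]])
    # resize longer sequences along the time axis
    resized = tf.image.resize(frames[..., tf.newaxis], (max_frames, frames.shape[1]))
    return resized[..., 0]

x = tf.random.normal([300, 184])   # toy example: 300 frames, 184 features
print(to_fixed_length(x).shape)    # (220, 184)
</code></pre>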
ICR - Identifying Age-Related Conditions
520th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello there, kagglers!<br> This is my second effective competition, and it gave me the Kaggle Competition Expert badge. </p> <h2>Context section</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">Competition overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">Data overview</a></p> <h2>Overview of the Approach</h2> <p>First of all, I usually expect a big lottery in most medical competitions. The problems are typical: a big difference in distribution between the train/test or public/private datasets. </p> <p>Secondly, the lack of information about the dataset columns in this competition left an opportunity to handle only standard things in EDA and FE. </p> <p>Finally, when a score of 0 on the metric is reached, it means the top guys will fall heavily. </p> <p>All of this occurred in this competition, and as a result +2733 places on the private LB were easily overtaken. </p> <p>Thus, the bet was placed on basic preprocessing, as taught in every DS/MLE course, consisting of:</p> <ul> <li>Missing value handling</li> <li>Correlation check</li> <li>Standardization</li> <li>Simple boosting model blending </li> </ul> <h2>About data</h2> <p>As I mentioned above, the data provided was poorly suited for EDA and FE. As a result I applied only a few additional standard processing steps to the datasets. </p> <h2>Details of the Submission</h2> <h3>My notes</h3> <p>To tell the truth, I made only one submission to the competition; the second failed with an error. As the base I took another public notebook (link in Sources).</p> <h3>My ideas</h3> <p>For processing, I added StandardScaler to the original notebook and checked for correlation.<br> As a result, the BC and DU features were deleted as correlated with BZ and CL respectively.</p> <p>I took a weighted blend of the gradient boosting implementations (CatBoost, XGBoost and LightGBM), following the original notebook. </p> <p>I also reduced the number of training folds from 20 to 10 relative to the original notebook.</p> <p>I rebalanced the blending according to CV scores (better CV = bigger weight; a small sketch of this weighting is included at the end of this post).</p> <p>Starting from this point, I didn't pay much attention to overfitting to the LB, because I felt a shake-up coming. I bet on a more stable and general solution and got my prize for that.</p> <p>I was completely right this time…</p> <h3>Code run and inference</h3> <p>The code uses a simple config in the upper cells, then moves on to feature processing and training. Inference is done in the same notebook, because training isn't heavy enough to split out. Anyone here can read the code easily, so there is no need to describe it further. </p> <p>Enabling the GPU in the config greatly improves performance, but at the moment of submission I was out of GPU quota; that's the reason it is off. </p> <h2>Some statistics</h2> <p>I published this solution as a public kernel, which had been available for several months. <br> You can watch it <a href="https://www.kaggle.com/code/dzisandy/icr-first-version/notebook?scriptVersionId=129525404" target="_blank">here</a>. 
It gave me silver in notebooks, heh.<br> This solution was seen by more than 2300 people and was taken as a final submission by 8 people (including me, from 520th to 527th place), all with the same digits after the decimal point.</p> <h2>My feedback</h2> <p>This competition is fairly random in my opinion; the shake-up is proof of that. But luck was on my side. My advice: stay humble and continue climbing up the hill. The top is closer than it appears😀</p> <p>P.S. I manage a Telegram channel about DS, called <a href="https://t.me/dzis_science" target="_blank">Dzis_Science</a>, follow me for more!</p> <h3>Sources</h3> <ul> <li><p>Originally, this great and <a href="https://www.kaggle.com/code/takaito/icr-iarc-gradient-boosting-ensemble-lb-0-22" target="_blank">simple notebook</a> was taken as the base. </p></li> <li><p>My public kernel is <a href="https://www.kaggle.com/code/dzisandy/icr-first-version/notebook?scriptVersionId=129525404" target="_blank">here</a>.</p></li> </ul> <p><strong>Feel free to ask any questions in comments/DM.</strong></p>
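<p>As a small illustration of the "better CV = bigger weight" blending mentioned above, here is a minimal sketch with made-up CV scores and predictions; the exact weighting rule in the original notebook may differ.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)

# Hypothetical out-of-fold CV scores (balanced log loss, lower is better)
# and test predictions for the three boosting models.
cv_scores = {"catboost": 0.21, "lightgbm": 0.23, "xgboost": 0.24}
test_preds = {name: rng.uniform(0, 1, size=100) for name in cv_scores}

# Better CV (lower loss) means a bigger weight; weights are normalised to sum to 1.
inv = {name: 1.0 / score for name, score in cv_scores.items()}
total = sum(inv.values())
weights = {name: v / total for name, v in inv.items()}

blend = sum(weights[name] * test_preds[name] for name in cv_scores)
print(weights)  # roughly {'catboost': 0.36, 'lightgbm': 0.33, 'xgboost': 0.31}
</code></pre>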
Google - American Sign Language Fingerspelling Recognition
[4th Place Solution] Conformer Encoder-Decoder Ensemble with beam search and edit_dist optimization
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the organizer for running this competition! It was quite a unique challenge and even after three months of optimizing it feels like there are still so many things to improve, which is quite special in my opinion.</p> <h2>TLDR</h2> <p>My solution is an ensemble of two encoder-decoder models. The encoder is a 12 layer adapted conformer and the decoder is a two layer regular transformer. I added various augmentations and training techniques to align the training objective with the edit_distance competition metric. For decoding I implemented a (cached) beam search for TFLite.</p> <h2>Challenge and plan</h2> <p>The goal of the competition was to translate sign language spelling from videos that were preprocessed with human pose recognition. Submissions were made as TFLite models with a limited OPs set and evaluated using the Levenshtein Edit Distance.</p> <p>These choices had a couple of implications for modelling:</p> <ul> <li>"honest" predictions are often not edit-distance-optimal, esp. when the model recognizes no characters in a phrase,<br> the honest prediction "" achieves a score of 0.0 while "2 a-e -aroe" scores 0.16,<br> see <a href="https://www.kaggle.com/code/anokas/static-greedy-baseline-0-157-lb" target="_blank">Anoka's Static Greedy Baseline</a></li> <li>TFLite models in this competition only allowed a very restrictive ops-set, which meant for example that all the native<br> implementations of beam_search and similar algorithms were not supported</li> <li>there were time and size limitations on the model (40MB), so where to "spend" your parameters became a major<br> design decision</li> <li>the datasets contained quite a few samples where most or all data was missing</li> </ul> <p>With these things in mind, I assumed "making things up" would be a significant part of good predictions and<br> encoder-decoder architectures seemed naturally aligned to this. Additionally, having a decoder abstracts away one of the time dimensions, which made developing downstream algorithms like beam search or ensembling easier. My early tests also suggested encoder-decoders to work slightly better than CTC, so I went with that architecture.</p> <h2>Data</h2> <p>My model used 214 inputs: 21 LHand, 21 RHand, 25 Pose, 1 Nose and 40 Lips points, each using x- and y-coordinates. Data was normalized, NaNs zero-filled and the deltas to t-1 and t-2 values were used as additional features. During training a maximum of 500 frames was used with longer sequences being resized.</p> <p>I applied quite a few data augmentations:</p> <ul> <li>Flip left-right</li> <li>Resample along the time dimension</li> <li>Scale / Translate / Rotate</li> <li>Mask up to 60% of all frames (worked better than masking sequences)</li> <li>Spatial cutout (similar to <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">Hoyso48's 1st place solution</a>) <br></li> </ul> <p>All of these made a significant impact.</p> <p>On top of that I transformed the tokens too. 
I used the fact that when replacing a single token in a phrase with a random token, the ground truth is still an edit-distance-optimal target. This change gave a quite nice boost of +0.008 (using smaller models). I also added single token deletions and insertions but they had a minor impact (if at all).</p> <p>I split the data five-fold and most of my experiments used only one fold to train with smaller models due to compute restrictions.</p> <h2>Model</h2> <p>The base of my model was a deep <a href="https://arxiv.org/pdf/2005.08100.pdf" target="_blank">Conformer</a> encoder followed by a two-layer transformer decoder. The encoder used twelve layers with dimension of 144. The MHSA had four heads with dim-per-head of 64 and the Convolution used a kernel of size 65.<br> Like the original formulation the model used two macaron-style feed forwards with an expansion factor of four. I made some additional small changes, like changing the position of the BatchNorm and adding DropPath to each Submodule of the Conformer. The model used a drop rate of 0.1 almost everywhere, only before the final classifier it used 0.3. Instead of causal padding I used same padding and explicitly zeroed-out the padded parts. </p> <p>On the decoder side I tested many different configurations but ended up using a very slim, two layer transformer<br> decoder. It used four attention heads with dim-per-head of 32 and a feed forward with expansion factor of only two. This was the smallest configuration that I could train without significant performance drop off. Using a small decoder was important since the autoregressive decoding is very performance intensive.</p> <h2>Training</h2> <p>A full training run for a single twelve layer encoder model took around two days on my local 3090. To experiment with different architectures, augmentations etc., I only trained shallower models on ~20% of the data for most of the competition.<br></p> <p>The final training used a cross entropy loss and RAdam optimizer (but AdamW with warmup worked pretty much the same) with a peak lr of 1e-3 and cosine schedule. Weight decay of 2e-6 and label smoothing 0.2 (very minor effect) were used for regularization in combination with light gaussian weight noise (had similar effect as AWP in my test, but lower overhead). I trained for 300 epochs, the first 100 of which used the supplemental data.<br></p> <p>I used minimum word error rate training after the model finished training. Where character-based edit distance is used as "word error rate". The method starts with a converged model and uses beam search to generate say the top four predictions. It then calculates each prediction's edit distance and uses this as a weight for the model's predicted probabilities. See for example <a href="https://arxiv.org/abs/1712.01818" target="_blank">Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models</a>. The method's results are unstable even after optimizing it quite a bit. However, short training runs of 1-5 epochs gave very considerable gains in early testing. Unfortunately on the final large, ensembled model it was a rather modest improvement of 0.001-0.002.</p> <h2>Beam search and inference-time optimizations</h2> <p>Using an ensemble of two models, it was easy to reach the 40MB model size limit. To max out the run time dimension too, I implemented a beam search algorithm that is compatible with the restricted TFLite ops set of this competition. 
Using it with cached autoregressive decoding allowed me to use beam sizes of five to six (with six sometimes failing the 5h limit). This resulted in +0.005 on the final ensemble (and even more on earlier, weaker models). The implementation was a bit tricky as there are a few edge cases like having to reorder the decoding caches when beams are changed etc. To prevent the early termination problem when decoding with beam search I used a linear length penalty of 0.15.<br></p> <p>On top of this I realized my model achieved a score of 0.0 on low information samples (e.g. &lt; 50 frames and &lt; 5 frames with any hand showing). But we knew that a greedy prediction of e.g. "2 a-e -aroe" gets a score of 0.16. Since most of these low information samples seem entirely corrupted, I simply replaced the model's predictions on these with a constant prediction. I used " a-e -are", which is slightly different from the greedy one mentioned before as I optimized it towards shorter, low information sequences.<br><br> In the end, adding this one line:</p> <p><code>x = tf.cond(num_frames &lt; 50 and num_hand_frames &lt;= 3, lambda: tf.constant([[59, 0, 32, 12, 36, 0, 12, 32, 49, 36, 60]]), lambda: tf.identity(x))</code></p> <p>gave an improvement of +0.005 across the board (local eval, private and public LB for all models), which is as much as the whole beam search …</p> <h2>What worked and didn't</h2> <ul> <li>Beam search gave a decent +0.005 improvement</li> <li>Replacing the model's prediction on corrupt data samples with a constant default prediction gave +0.005</li> <li>Deeper models worked better than wider ones</li> <li>MWER-training gave a small improvement (+0.001 - +0.002) - however, this was with beam search k=5; with greedy decoding gains were larger (+0.005 in local eval)</li> <li>Replacing a single input token with a random one was a decent augmentation</li> <li>CTC didn't help as an auxiliary loss</li> <li>masking decoder input did not help (when random token replacement was used)</li> <li>z-coordinates did not help</li> </ul> <p>As always: really looking forward to reading everyone's solutions. Let me know if there are any questions. Code coming <em>soon</em>.</p>
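<p>For the single-token replacement augmentation listed under "What worked and didn't", here is a minimal TensorFlow sketch of the idea; the vocabulary size, padding id and function name are assumptions for illustration. Because only one token changes, the original phrase remains an edit-distance-optimal target, which is exactly the property the write-up exploits.</p> <pre><code>import tensorflow as tf

VOCAB_SIZE = 59   # assumed size of the character vocabulary
PAD_ID = 0        # assumed padding id

def replace_one_token(tokens):
    # tokens: int32 tensor [L], right-padded with PAD_ID
    length = tf.reduce_sum(tf.cast(tokens != PAD_ID, tf.int32))
    pos = tf.random.uniform([], 0, length, dtype=tf.int32)          # position to corrupt
    new_tok = tf.random.uniform([], 1, VOCAB_SIZE, dtype=tf.int32)  # random replacement
    return tf.tensor_scatter_nd_update(tokens,
                                       tf.reshape(pos, [1, 1]),
                                       tf.reshape(new_tok, [1]))

phrase = tf.constant([5, 12, 7, 3, 0, 0], dtype=tf.int32)  # toy phrase with 2 pad slots
print(replace_one_token(phrase).numpy())
</code></pre>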
ICR - Identifying Age-Related Conditions
Megatron for synthetic data generation + simple LGBM model - Bronze medal with 0.42/0.19 Pr/Pu scores
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank ICR and Kaggle for hosting such an interesting competition. Many thanks also to all the participants for the great discussions and useful insights. As another newbie with a surprising result, I would like to share my approach which, despite the obvious gaps, might be found insightful. </p> <p>Inspired by a training lab during NVIDIA GTC 2023 for Synthetic Tabular Data Generation Using Transformers (<a href="https://www.nvidia.com/en-us/on-demand/session/gtcspring23-dlit52224/" target="_blank">https://www.nvidia.com/en-us/on-demand/session/gtcspring23-dlit52224/</a>) I decided this approach would be great to augment the small dataset and help the high scoring public notebooks generalize better. And, it was a great opportunity to test the approach of using synthetic data, as I am very keen to use it in my professional field. Even for a beginner like me, it was relatively straightforward to set up NVIDIA's Megatron using the NeMo framework and train it on a single-GPU local machine. The preprocessing, training and inference of Megatron is an adaptation of this tutorial: <a href="https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Megatron_Synthetic_Tabular_Data_Generation.ipynb" target="_blank">https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Megatron_Synthetic_Tabular_Data_Generation.ipynb</a> </p> <p>My approach: </p> <ol> <li>Train data preprocessing</li> </ol> <ul> <li>Use of greeks except Epsilon (date) – I decided to use all the records but drop the features that cannot be replaced simply, in order to have a larger and more diverse dataset</li> <li>NaN replacement – replaced by the median value of the groups defined by the categorical features. Where unavailable – the median value of the group by Alpha.</li> <li>Tokenization – following the guide above with adjustments for column types, hasnan and token length</li> </ul> <ol> <li>Model configuration and training</li> </ol> <ul> <li>NUM_LAYERS = 8</li> <li>NUM_GPUS = 1</li> <li>HIDDEN_SIZE = 1024</li> <li>NUM_ATTENTION_HEADS = 16</li> <li>SEQ_LENGTH = 1024<br> This model runs for around 3 hours on a local machine on a single RTX 3060 12GB.</li> </ul> <ol> <li>Inference </li> </ol> <ul> <li>20000 empty prompts</li> <li>Temperature = 1.0<br> With this temperature, only around 800 of the generated records are not duplicates and have feature values that fall within the ranges defined by the train data. The generated dataset follows the same distribution as the train dataset.</li> </ul> <ol> <li>Model definition and training</li> </ol> <ul> <li>Based on <a href="https://www.kaggle.com/code/nomuraryota/icr-identify-age" target="_blank">https://www.kaggle.com/code/nomuraryota/icr-identify-age</a> </li> <li>Apart from slightly different hyperparameters, the main difference is that the original train dataset is used as validation only, while the LGBM classifier is trained on the generated records only. 
The assumption is that the model will generalize better</li> <li>100 models trained on this setup; the validation loss ranged between 0.19 and 0.09; the mean of the predictions was taken</li> </ul> <p>The trained model ideally could have been used directly for submission, but my main focus was on the exploration of the LLM training and the data generation. I've tested a lot of configurations for data generation, including prompting the model to generate specific combinations of categorical data as a constraint, with different temperatures, etc. Prompting the model with different categorical data was the approach I bet on working best, since you can then generate exactly the records you need. However, my tests on this didn't work, perhaps because I prompted the trained LLM with every available combination of observed categorical values (probably too ambitious to generate many records for combinations with only 1 or 2 observed examples).</p> <p>Comments, critiques and recommendations are more than welcome.</p>
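<p>To illustrate the filtering step described in the inference section (keeping only generated records that are not duplicates and whose values fall within the train ranges), here is a minimal pandas sketch; the column names and sizes are made up.</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy stand-ins: 'train' is the real table, 'generated' holds rows decoded from
# the LLM output (column names and sizes here are made up for illustration).
train = pd.DataFrame(rng.normal(size=(617, 3)), columns=["AB", "AF", "AH"])
generated = pd.DataFrame(rng.normal(scale=2.0, size=(20000, 3)),
                         columns=["AB", "AF", "AH"])

# Keep only unique generated rows whose values fall inside the train ranges.
lo, hi = train.min(), train.max()
in_range = (generated.ge(lo) &amp; generated.le(hi)).all(axis=1)
synthetic = generated[in_range].drop_duplicates().reset_index(drop=True)
print(len(synthetic))
</code></pre>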
ICR - Identifying Age-Related Conditions
13th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As per your comments, it was anticipated that this competition would be a shake-up, but honestly, I am surprised by this result. Just to make sure, I will provide my solution below.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Overview of the Approach section</h1> <h2>Data processing</h2> <p>・Missing Values: Create missing value flags and fill with the mean<br> ・Column "EJ": label encoding<br> ・Addition of Group Features: For variables other than the target, add ['min', 'max', 'mean', 'std'] of each feature grouped by the "EJ" column.</p> <h2>Training</h2> <p>Based on the discussion by Chris below, I created three models with downsampling and added class weights during training. Validation was performed using StratifiedKFold with n_splits=10 for all models, and then the results were aggregated using seed averaging.</p> <p>・Model1: LGBMClassifier(features=all)<br> ・Model2: LGBMClassifier(trained with the top 20 impactful features of Model 1)<br> ・Model3: CatBoostClassifier(features=all)</p> <h1>Details of the submission</h1> <p>・submission["class_1"] = Model1*0.2 + Model2*0.2 + Model3*0.6<br> ・result: Public=0.22, Private=0.36</p> <h1>Sources</h1> <p>・<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507</a></p>
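<p>A minimal pandas sketch of the "Addition of Group Features" step described above (min/max/mean/std of each feature grouped by the "EJ" column); the toy column names stand in for the anonymised competition features.</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy frame standing in for the competition data (all columns except EJ are made up).
train = pd.DataFrame({"EJ": rng.integers(0, 2, 100),
                      "AB": rng.normal(size=100),
                      "AF": rng.normal(size=100)})

# For every numeric feature, add its min / max / mean / std within the EJ group.
num_cols = ["AB", "AF"]
for col in num_cols:
    grouped = train.groupby("EJ")[col]
    for stat in ["min", "max", "mean", "std"]:
        train[f"{col}_EJ_{stat}"] = grouped.transform(stat)

print(train.head())
</code></pre>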
ICR - Identifying Age-Related Conditions
automl pipeline: featurewiz, multilabel k-fold and optuna
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My second competition concludes with a confident bronze at position 361; however, a wild shakeup of thousands of places and the quality of the data leave more questions than answers.</p> <p>Over the course of the competition, I dedicated myself to improving my skills in AutoML, and I am pleased to share some key insights from my solution:</p> <p>🛠 Feature Engineering:<br> A crucial step in my strategy was the automatic generation and selection of features (+0.02 on both CV and LB). I developed new features from the existing data, extending my understanding of the information. I then used correlation analysis to systematically identify and select the most influential features, significantly improving the robustness of my models.</p> <p>📊 Multilabel Stratification:<br> I also employed a technique called iterative stratification to cope with the data imbalance and skewness. Multilabel stratification by label and age bin allowed me to handle multiple variables simultaneously, ensuring more accurate and stable cross-validation results. This approach was critical in helping me navigate the challenging terrain of the competition dataset.</p> <p>🕯 Ensemble+Optuna:<br> One of the cornerstones of my success was the construction of an ensemble of models using Optuna. Optuna is an open-source hyperparameter optimization framework that enabled me to automatically fine-tune the weights and thresholds of my ensemble models. The optimized ensemble not only boosted my predictive performance but also improved the consistency of my results across multiple test sets.</p> <p>Overall, this competition has been an exhilarating journey. I faced setbacks, questioned my approaches, and ultimately learned a lot. The experience has been invaluable, and I am excited to apply the knowledge gained to future challenges.</p> <p>code: <a href="https://www.kaggle.com/code/sirtimio/bronze-solution-0-4-lb?scriptVersionId=139617098" target="_blank">https://www.kaggle.com/code/sirtimio/bronze-solution-0-4-lb?scriptVersionId=139617098</a></p>
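<p>To make the Ensemble+Optuna idea concrete, here is a minimal sketch of tuning ensemble weights with Optuna under a balanced log loss, using made-up out-of-fold predictions; the actual search space (e.g. decision thresholds) in the linked notebook may differ.</p> <pre><code>import numpy as np
import optuna
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Toy out-of-fold predictions from three hypothetical models plus true labels.
y_true = rng.integers(0, 2, 500)
oof = {m: np.clip(y_true + rng.normal(0, 0.4, 500), 0.01, 0.99)
       for m in ["lgbm", "xgb", "cat"]}

def balanced_log_loss(y, p):
    # each class contributes equally, as in the competition metric
    w = np.where(y == 1, 1.0 / (y == 1).sum(), 1.0 / (y == 0).sum())
    return log_loss(y, p, sample_weight=w)

def objective(trial):
    w = np.array([trial.suggest_float(m, 0.0, 1.0) for m in oof])
    w = w / (w.sum() + 1e-9)
    blend = sum(wi * oof[m] for wi, m in zip(w, oof))
    return balanced_log_loss(y_true, blend)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>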
ICR - Identifying Age-Related Conditions
182nd Place Solution for the "ICR - Identifying Age-Related Conditions" Competition (plus the better ones)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My first silver medal, thank you everyone at Kaggle. I will try to explain my model that won the medal (which is actually my 4th best model) and 3 other models which scored better on the Private Leaderboard.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Model</h1> <h2>1. Submitted Model (my 4th best, Private Score 0.3974, Public Score 0.22606)</h2> <p><strong>1.1. Model</strong><br> One CatBoostClassifier model predicting "Class" as the target</p> <p><strong>1.2. Features</strong><br> Selected features from the train.csv file based on a correlation to Alpha higher than 0.1 (absolute) </p> <p><strong>1.3. Categorical Features</strong><br> EJ</p> <p><strong>1.4. Validation</strong><br> KFold 5 split</p> <p><strong>1.5. Training</strong><br> A CatBoostClassifier multiclass model trained on the dataset with Alpha as the target, weighted by each of the 4 Alpha classes (Alpha, Beta, Gamma, Delta).<br> Validated on the dataset with Class as the target, weighted by each of the 2 classes.<br> The weight function is copied and modified from <a href="https://www.kaggle.com/maverickss26" target="_blank">@maverickss26</a>'s notebook: <a href="https://www.kaggle.com/code/maverickss26/icr-age-related-conditions-eda-modelling" target="_blank">https://www.kaggle.com/code/maverickss26/icr-age-related-conditions-eda-modelling</a></p> <pre><code>X = train[features]
y = train['Class']        # binary target used for validation
y_ros = train['Alpha']    # multiclass Alpha target used for training

def calc_log_loss_weight(y_true):
    nc = np.bincount(y_true)
    w0, w1 = 1/(nc[0]/y_true.shape[0]), 1/(nc[1]/y_true.shape[0])
    return w0, w1

def calc_log_loss_weight_meta(y_true):
    # counts per Alpha class, taken in value_counts (frequency) order
    nc0 = y_true.value_counts().iloc[0]
    nc1 = y_true.value_counts().iloc[1]
    nc2 = y_true.value_counts().iloc[2]
    nc3 = y_true.value_counts().iloc[3]
    w0 = 1/(nc0/y_true.shape[0])
    w1 = 1/(nc1/y_true.shape[0])
    w2 = 1/(nc2/y_true.shape[0])
    w3 = 1/(nc3/y_true.shape[0])
    return w0, w1, w2, w3

class_1 = y_ros.value_counts().index[0]
class_2 = y_ros.value_counts().index[1]
class_3 = y_ros.value_counts().index[2]
class_4 = y_ros.value_counts().index[3]

train_w0, train_w1, train_w2, train_w3 = calc_log_loss_weight_meta(y_ros)
valid_w0, valid_w1 = calc_log_loss_weight(y)

train_dataset = Pool(data=X_train, label=y_train,
                     weight=y_train.map({class_1: train_w0, class_2: train_w1,
                                         class_3: train_w2, class_4: train_w3}),
                     cat_features=cat_features)
eval_dataset = Pool(data=X_val, label=y_val,
                    weight=y_val.map({0: valid_w0, 1: valid_w1}),
                    cat_features=cat_features)
</code></pre> <p><strong>1.6. Prediction</strong><br> Predict the probability for the target "Class"</p> <h2>2. 
My 3rd best Model (Private Score 0.39569, Public Score 0.38332)</h2> <p>Inspired by <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>'s notebook <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings" target="_blank">https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings</a>; this is also my model with the closest score between Private and Public.</p> <p><strong>2.1. Model</strong> <br> Four CatBoostClassifier models, one for each of the 4 Alpha classes (4 binary classification models for Alpha, Gamma, Beta and Delta)</p> <p><strong>2.2. Features</strong> <br> Selected features from the train.csv file based on a correlation to the Alpha class higher than 0.1 (absolute) for the Delta and Gamma models. <br> Full features from the train.csv file for the Alpha and Beta models</p> <p><strong>2.3. Categorical Features</strong><br> EJ as a categorical feature for the Alpha and Beta models, and no categorical features for the Delta and Gamma models </p> <p><strong>2.4. Upsampling</strong><br> Upsampled data for the Gamma model only</p> <p><strong>2.5. Validation</strong><br> Stratified train-test split for each of the 4 models</p> <p><strong>2.6. Prediction</strong><br> Predict the probability for each model's target, then use np.argmax to find the largest probability. If the largest probability comes from model A, the Class 0 probability is taken from model A; otherwise the largest probability is taken as Class 1:</p> <pre><code># prediction column names below are placeholders for the four per-greek models
subs_All_df['pred_A'] = cat_model_A.predict_proba(subs_All_df[features_A])[:, 1]
subs_All_df['pred_B'] = cat_model_B.predict_proba(subs_All_df[features_B])[:, 1]
subs_All_df['pred_D'] = cat_model_D.predict_proba(subs_All_df[features_D])[:, 1]
subs_All_df['pred_G'] = cat_model_G.predict_proba(subs_All_df[features_G])[:, 1]

PRED_TARGET = ['pred_A', 'pred_B', 'pred_D', 'pred_G']
pred_list = list(np.argmax(np.asarray(subs_All_df[PRED_TARGET]), axis=1))

for i, v in enumerate(pred_list):
    if v == 0:
        # model A (no condition) gives the largest probability, so use it for class_0
        subs_df.loc[i, 'class_0'] = subs_All_df.loc[i, PRED_TARGET[v]]
        subs_df.loc[i, 'class_1'] = 1 - subs_All_df.loc[i, PRED_TARGET[v]]
    else:
        # otherwise the largest probability is taken as class_1
        subs_df.loc[i, 'class_1'] = subs_All_df.loc[i, PRED_TARGET[v]]
        subs_df.loc[i, 'class_0'] = 1 - subs_All_df.loc[i, PRED_TARGET[v]]
</code></pre> <h2>3. My 2nd best Model (Private Score 0.37928, Public Score 0.2409)</h2> <p><strong>3.1. Model</strong><br> 1 CatBoostClassifier model predicting "Class" as the target</p> <p><strong>3.2. Features</strong><br> All features from the train.csv file</p> <p><strong>3.3. Categorical Features</strong><br> EJ</p> <p><strong>3.4. Validation</strong><br> KFold 5 split</p> <p><strong>3.5. Training and Prediction (same as the 4th best model above):</strong> <br> A CatBoostClassifier multiclass model trained on the dataset with Alpha as the target, weighted by each of the 4 Alpha classes (Alpha, Beta, Gamma, Delta)<br> Validated on the dataset with Class as the target, weighted by each of the 2 classes</p> <h2>4. My 1st best Model (Private Score 0.36871, Public Score 0.22484)</h2> <p>Same as the 2nd best model with different hyperparameter tuning</p> <h1>Things that did not work / did not improve my model:</h1> <ul> <li>Stacking predictions: predict Beta, Gamma, Delta and then use them to predict Alpha</li> <li>StratifiedKFold compared to KFold</li> <li>Scaling using MinMaxScaling</li> <li>Upsampling using random sampling for all classes: Alpha, Beta, Gamma, Delta</li> <li>Adding Epsilon as a feature</li> </ul>
ICR - Identifying Age-Related Conditions
(Our solution) CV 0.12 - Public 0.11 - Private 0.62 🥲
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone<br> Well, after this shakeup I just want to show some of the ideas we used to build our solution. Although we didn't succeed in this competition, it was really a pleasure to team up with <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> <a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a> <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> </p> <p>Actually our best in private was 0.38, which was a very simple lgbm with almost no processing. It was our first sub in the competition :) For us, the better the cv and public, the worse the private.</p> <p>We used StratifiedKFold based on Alpha with 20 folds and 20 bags.<br> Each one of the following represents the experiments we did to improve our cv to 0.12 and public to 0.11:</p> <ul> <li>Using LGBM with some tuning + undersampling with ratio 0.7 (cv 0.2 - public 0.18 - private 0.38 (the best in private))</li> <li>Using a quite small feature_fraction (0.2) and using a special loss function in lgbm called "cross_entropy_lambda" + some more tuning for hyperparams (cv 0.17 - public 0.14 - private 0.45)</li> <li>Using Date as a feature + adding some ideas for the date + some manipulation of the features (cv 0.144 - public 0.14 - private 0.43)</li> <li>Treating the years as groups (3 groups: &lt;=2018 , ==2019, ==2020), then trying to predict the group for the private test set (I thought this might help in test because maybe the model will find some patterns that are similar to a specific year) (cv 0.128 - public 0.16 - private 0.50)</li> <li>Some feature engineering using nearest neighbors (e.g. nearest neighbor target, average of the 5 nearest targets, etc.) (we didn't submit this, because it had a worse cv).</li> <li>Using a separate model for each disease. Actually this brought a really good improvement in cv, and we did feature selection based on each model, which brought some quite interesting insights (e.g. one of the diseases uses specific features that are not important at all for the others) (cv 0.113 - public 0.17 - private 0.58).</li> <li>I found a really interesting thing about the feature "EJ". It has two values, either 0 or 1. We found that building a model trained only on EJ==0 and another one for EJ==1 and then concatenating the results improved the cv (cv 0.13 - public 0.16 - private 0.44); see the sketch at the end of this post.</li> <li>Started using stacking with lgbm + svm +… etc. (cv 0.125 - public 0.11 (our best public) - private 0.62). This is our 1st selected submission.</li> <li>Using the EJ separation model in the stacking (cv 0.10 - public 0.13 - private 0.61)</li> <li>Pseudo Labeling (cv 0.125 - public 0.14 - private 0.73)</li> <li>Using our best public submission (lb 0.11) + adding the models separated based on the diseases mentioned above<br> (cv 0.114 - public 0.13 - private 0.62)</li> <li>Using a 3rd stage stacking (cv 0.09 - public 0.14 - private 0.63)</li> <li>Using a 3rd stage stacking + using the EJ separation models + using the disease-separated models + adding catboost to the stacking (cv 0.10 - public 0.15 - private 0.48). 
This is our 2nd chosen submission.</li> </ul> <p>Well after all, it was really fun to try all these ideas along with my brilliant teammates. Maybe next time we can secure a gold.</p> <p>Here is the notebook of our 0.11:<br> <a href="https://www.kaggle.com/mohammad2012191/public-0-11-stacking" target="_blank">https://www.kaggle.com/mohammad2012191/public-0-11-stacking</a></p> <p>Here is the notebook of our 2nd sub:<br> <a href="https://www.kaggle.com/mohammad2012191/stacking-3rd-stage-sep-ej" target="_blank">https://www.kaggle.com/mohammad2012191/stacking-3rd-stage-sep-ej</a></p> <p>Sorry they are a little messy.</p>
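<p>A minimal sketch of the EJ-separation idea from the list above (one model trained on EJ==0, another on EJ==1, predictions concatenated); the data, features and hyperparameters here are toy placeholders.</p> <pre><code>import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for the anonymised competition data.
features = list("ABCDE")
train = pd.DataFrame(rng.normal(size=(600, 5)), columns=features)
train["EJ"] = rng.integers(0, 2, 600)
train["Class"] = rng.integers(0, 2, 600)
test = pd.DataFrame(rng.normal(size=(100, 5)), columns=features)
test["EJ"] = rng.integers(0, 2, 100)

preds = pd.Series(index=test.index, dtype=float)

# One model per EJ value; the two sets of predictions are concatenated back together.
for ej in (0, 1):
    part = train[train["EJ"] == ej]
    model = LGBMClassifier(n_estimators=200, learning_rate=0.05)  # hypothetical params
    model.fit(part[features], part["Class"])
    mask = test["EJ"] == ej
    preds[mask] = model.predict_proba(test.loc[mask, features])[:, 1]

print(preds.head())
</code></pre>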
ICR - Identifying Age-Related Conditions
[ICR] 439th Position's Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update: linked notebooks have been successfully published.</p> <h1>I. Foreword</h1> <p>I want to thank ICR and Kaggle for hosting such an interesting competition. It's still surprising, even for a newbie like me, to see a competition with such a small dataset. With so many moving pieces, even tiny changes can cause huge impacts, which is the source of the fun.</p> <p>I initially didn't have high hopes due to the comp's uncertain nature, but ended up spending a lot more time than I thought. Due to the dataset's mini size, I believed simple solutions were more likely to win, so I decided to go that way and expected a shake-up. The final result is a 2200+ rank jump, not enough for a silver, but still a good learning experience!</p> <p>Below is a short summary of my workflow. I think my biggest takeaways are the different options for feature selection, which I have shared in another post <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/432072" target="_blank">linked here</a>.</p> <h1>II. Overall workflow</h1> <h2>1. Workflow</h2> <p>I tried several models, and most of them went through the same workflow described below. Final submissions are ensembles of individual models which, together, have the highest OOF CVs.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982488%2Fc6745cd5c92b9e78724253559906a0e8%2FScreenshot_1.png?generation=1692088143683642&amp;alt=media" alt="ICR Overall Workflow"></p> <h3>(a) Feature engineering</h3> <p>Ideas tested include</p> <ul> <li>Feature integerization (credit: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/413198" target="_blank">link to discussion</a>)</li> <li>Feature first-order interaction (<code>X * Y</code> and <code>X / Y</code>) (credit: <a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline" target="_blank">link to notebook</a>)</li> <li><code>Epsilon</code>-based features (year, month, week, days, etc.)</li> <li><code>NaN</code> imputation: <code>mean</code>, <code>median</code>, constant</li> <li><code>EJ</code> encoding/left out</li> <li>Multi-class label</li> </ul> <h3>(b) Feature selection</h3> <p>With a dataset this small, I believed using fewer features is crucial to avoid overfitting, and focused a lot on finding the right features. 
I used 3 different feature selection methods and a few combinations of them, in addition to correlation values to remove redundant features.</p> <ul> <li>Information Value (IV) &amp; Weight of Evidence (WoE) (<a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline" target="_blank">notebook source</a>)</li> <li>(For tree-based models) Target Permutation Importance (TPI - <a href="https://github.com/kingychiu/target-permutation-importances" target="_blank">github repo</a> - thanks to <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/424124" target="_blank">this discussion</a> by <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>) </li> <li><code>featurewiz</code> (FW - <a href="https://www.kaggle.com/code/nemanjagojkovic/using-advanced-feature-engineering-with-featurewiz" target="_blank">notebook source</a>)</li> </ul> <h3>(c) Training parameter search</h3> <p>I also tested different training set-up combinations of the following configs</p> <ul> <li>Feature selection's suggestions</li> <li>Imbalanced data treatment: sample weighting, oversampling</li> <li><code>StratifiedKFold</code> strategy: target only; target concatenated with Epsilon truncated to year (<code>target_epsilon</code>); target concatenated with Experimental columns (<code>target_experiment</code>)</li> <li>Multiclass/Binary</li> <li>Model-specific hyper-parameters</li> </ul> <h3>(d) Model training</h3> <ul> <li>I selected the best training setups based solely on CV, often limiting to 100 or fewer features.</li> <li>My techniques to avoid overfitting when training include<ul> <li>Limiting model complexity by using hyper-parameters as close to default as possible</li> <li>Randomly selecting hyper-parameters within a close range of the initially chosen set of hyper-parameters.</li> <li>Using 5 or 10 folds of <code>StratifiedKFold</code>, often repeated 10 times.</li></ul></li> </ul> <h3>(e) Model selection</h3> <p>Using the above workflow (a)-(d), I tested several model options, including: LGBM, Logistic Regression (LogReg), XGB, CatBoost, TabPFN. I then tried ensembling, and selected models based on CV scores. The best combination is LGBM + LogReg, which was selected for the final submissions.</p> <h3>(f) Inference</h3> <p>Each single model instance during training is weighted with <code>1 / balanced_log_loss(y_val_prediction)</code> where <code>y_val_prediction</code> is the model's prediction for the validation set.</p> <h2>2. Comments</h2> <ul> <li>Feature engineering<ul> <li>Feature integerization seemed useful for EDA but was not needed for my final model selection.</li> <li>Many of the first-order interaction features turned out to be powerful.</li> <li>Things that boosted CV but hurt private LB<ul> <li>Keeping the <code>EJ</code> column</li> <li>Binary class</li> <li>CV-strat using target only</li></ul></li></ul></li> <li>Feature selection<ul> <li>IV &amp; WoE consistently provides useful features. Its selection is also often smaller than the other methods'.</li> <li>When combined with TPI, IV &amp; TPI provide even better features for tree-based models.</li></ul></li> <li>Training config<ul> <li>Weighting is consistently better than oversampling. This could be explained by the theory that the <code>Null</code> Epsilon samples are of bad quality/different from the private LB, so any effort in reducing them might have a chance to boost private LB. 
In other words, undersampling &gt; not doing anything &gt; oversampling.</li></ul></li> <li>Training and inference<ul> <li>LGBM often has high CV and public LB but low private LB.</li> <li>LogReg, CatBoost and XGB are the other way around - medium-low CV, medium-low public LB but good private LB.</li></ul></li> </ul> <h1>III. Submission selections &amp; result</h1> <p>My final submission is a combination of LGBM and Logistic Regression, which I have found to have best CV (0.175) and public LB (0.17). The good correlation made me choose this submission. </p> <p>However, at the end they don't perform well on private LB (0.41). Instead, my private LB-best, not-selected submission is an XGB with CV-public-private LB of 0.258 - 0.25 - 0.367. This submission would have achieved rank 20th and gold medal, but at the time it was impossible to choose it over the ensemble above.</p> <h3>Submission scores</h3> <table> <thead> <tr> <th>Submission</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>Selection 1 - LGBM</td> <td>0.1923</td> <td>0.1891</td> <td>0.4997</td> </tr> <tr> <td>Selection 1 - LogReg</td> <td>0.1908</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td><em>Selection 1</em></td> <td><em>0.1752</em></td> <td><strong><em>0.1717</em></strong></td> <td><em>0.4197</em></td> </tr> <tr> <td>Selection 2 - LGBM</td> <td>0.1545</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Selection 2 - LogReg</td> <td>0.1685</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td><em>Selection 2</em></td> <td><strong><em>0.1478</em></strong></td> <td><em>0.3553</em></td> <td><em>0.6248</em></td> </tr> <tr> <td><em>Best (not selected) - XGB</em></td> <td><em>0.2575</em></td> <td><em>0.2490</em></td> <td><strong><em>0.3679</em></strong></td> </tr> </tbody> </table> <h3>Best submissions' setups</h3> <table> <thead> <tr> <th>Idea</th> <th>Select 1 - LGBM</th> <th>Select 1 - LogReg</th> <th>Best overall (XGB - not selected)</th> </tr> </thead> <tbody> <tr> <td><code>NaN</code> imputing</td> <td>N/A</td> <td><code>mean</code></td> <td>N/A</td> </tr> <tr> <td><code>X * Y</code></td> <td>True</td> <td>True</td> <td>True</td> </tr> <tr> <td><code>X / Y</code></td> <td>True</td> <td>True</td> <td>False</td> </tr> <tr> <td><code>Epsilon</code> features</td> <td>False</td> <td>False</td> <td>False</td> </tr> <tr> <td><code>EJ</code> included</td> <td>True</td> <td>True</td> <td>False</td> </tr> <tr> <td>Multiclass/Binary</td> <td>Binary</td> <td>Binary</td> <td>Multiclass</td> </tr> <tr> <td>CV strat</td> <td><code>target</code></td> <td><code>target</code></td> <td><code>target_experiment</code></td> </tr> <tr> <td>Imbalance treatment</td> <td><code>weighting</code></td> <td><code>weighting</code></td> <td><code>weighting</code></td> </tr> <tr> <td>Feature selection</td> <td>IV &amp; TPI</td> <td>IV</td> <td>TPI</td> </tr> <tr> <td>N-features</td> <td>52</td> <td>59</td> <td>340</td> </tr> <tr> <td>CV</td> <td>0.1923</td> <td>0.1908</td> <td>0.2575</td> </tr> <tr> <td>Public LB</td> <td>0.1891</td> <td>N/A</td> <td>0.2490</td> </tr> <tr> <td>Private LB</td> <td>0.4997</td> <td>N/A</td> <td>0.3679</td> </tr> </tbody> </table> <h1>IV. 
References</h1> <ul> <li>Submission Selection 1<ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-v3" target="_blank">FE</a></li> <li>LGBM: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d3-lgbm-v1" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lgbm-d3-fs1-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lgbm-d3-fs1-v0" target="_blank">Training</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-lgbm-d3-fs1-train0?scriptVersionId=139110044" target="_blank">Inference</a></li> <li>LogReg: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d3-iv-woe-v1" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lr-d3-fs-ivwoe1-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lr-d3-fs1-v0" target="_blank">Training</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-stack-d3-v1/notebook?scriptVersionId=139399123" target="_blank">Ensemble submission</a></li></ul></li> <li>Submission Selection 2<ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-v4" target="_blank">FE</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-iv-woe-v5" target="_blank">IV &amp; WoE Result</a></li> <li>LGBM: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d4-lgbm-v0" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lgbm-d4-fs0-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lgbm-d4-fs0-v0" target="_blank">Training</a></li> <li>LogReg: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d4-iv-woe5" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lr-d4-fs-ivwoe0-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lr-d4-fs0-v0" target="_blank">Training</a></li> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-stack-d4-v0?scriptVersionId=139547779" target="_blank">Ensemble submission</a></li></ul></li> <li>Best submission (XGB)<ul> <li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-fs-xgb-v1" target="_blank">FE &amp; Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-xgb-d1-param-search/notebook" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-xgb-v9?scriptVersionId=137025656" target="_blank">Training &amp; Inference Submission</a></li></ul></li> </ul>
Google - American Sign Language Fingerspelling Recognition
29th Place Solution - LB 0.762
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>TL;DR</h2> <p>We used a 1D convolutional network with 122-dimensional features as input and Self Conditional CTC, along with a greedy search as the decoder.</p> <h2>Preprocess</h2> <p>We used xy feature points of the hand performing the gesture and xy feature points of the lips, for a total of 61 xy feature points (= 122-dimensional vector).</p> <h2>Data Augmentation</h2> <p>Overfitting was suppressed by randomly dropping hand feature points. The percentage of dropping was 40% of the total, and at least 5 consecutive frames were dropped.</p> <h2>Model</h2> <p>We used a 1DCNN consisting of 9 Residual Blocks of kernel width 3 and 12 Residual Blocks of kernel width 31.</p> <h2>Loss Function</h2> <p>We used Self Conditional CTC, a variant of CTC, for the loss function. By using Self Conditional CTC, a significant performance improvement was seen without increasing the amount of calculation.<br> <a href="https://arxiv.org/abs/2104.02724" target="_blank">Read more about Self Conditional CTC here</a></p> <h2>Training</h2> <ul> <li>60 epochs</li> <li>CosDecay learning rate</li> <li>AdamW optimizer</li> <li>AWP</li> </ul> <h2>Convert Model</h2> <p>All implementation was done in PyTorch and converted to 16-bit quantized tflite by onnx2tf. Conversion using nobuco was also tried, but was not adopted because, contrary to expectations, it increased processing time.</p> <h2>Not Worked</h2> <ul> <li>Stochastic Weight Averaging was very effective in the last competition but did not contribute to performance in this competition.</li> <li>We thought that Levenshtein OCR could be used as a decoder and tried to implement it, but it did not work.</li> </ul>
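<p>A minimal NumPy sketch of the hand-landmark dropping augmentation described above (roughly 40% of frames dropped, in runs of at least 5 consecutive frames); the exact sampling scheme used by the team may differ.</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)

def drop_hand_frames(hand, drop_frac=0.4, min_run=5):
    # hand: array [T, n_points, 2] of hand landmark xy coordinates.
    # Zero out roughly drop_frac of the frames, in runs of at least min_run frames.
    t = len(hand)
    dropped = np.zeros(t, dtype=bool)
    target = int(drop_frac * t)
    for _ in range(t):                      # hard cap so the loop always terminates
        if dropped.sum() &gt;= target:
            break
        run = int(rng.integers(min_run, 2 * min_run + 1))
        start = int(rng.integers(0, max(t - run, 1)))
        dropped[start:start + run] = True
    out = hand.copy()
    out[dropped] = 0.0
    return out

x = rng.normal(size=(100, 21, 2))           # toy sequence: 100 frames, 21 hand points
aug = drop_hand_frames(x)
print((aug == 0).all(axis=(1, 2)).sum(), "frames dropped")
</code></pre>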
ICR - Identifying Age-Related Conditions
147th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First off, I would like to thank the Competition hosts and the Kaggle team for hosting such an interesting Competition. I jumped from rank 3169 in public lb -&gt; 147 in private lb</p> <h2>context:</h2> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h2>Overview of the approach:</h2> <p>This competition probably witnessed the biggest shake-up ever on Kaggle. Due to such a small dataset, it was easy to overfit, so keeping the approach simple was important.<br> In my solution, I only used a single LightGBM model and did feature engineering. </p> <h2>Details of the submission</h2> <h3>Cross-Validation Strategy:</h3> <p>I used 10-fold StratifiedKFold as the validation strategy. Like others, I also noticed a bad correlation between cv and public lb, but what's interesting is that at the start of the competition, when my models were weaker (~0.3 LB), they correlated well with cv. After I improved to about 0.2 LB, it stopped correlating, which I think can be considered one way of identifying overfitting. I wonder if someone else also noticed a similar trend?</p> <p>Here is a non-exhaustive table of cv vs lb:</p> <table> <thead> <tr> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.61</td> <td>0.3</td> <td>0.55</td> </tr> <tr> <td>0.57</td> <td>0.28</td> <td>0.56</td> </tr> <tr> <td>0.31</td> <td>0.22</td> <td>0.43</td> </tr> <tr> <td><strong>0.18</strong></td> <td><strong>0.20</strong></td> <td><strong>0.39</strong></td> </tr> <tr> <td>0.16</td> <td>0.2</td> <td>0.46</td> </tr> </tbody> </table> <h3>Feature Engineering:</h3> <p>Typically, there are two ways to generate new features:<br> 1.) Use domain knowledge<br> 2.) Brute-force different feature combinations based on cv</p> <p>Due to the nature of the data (anonymized and low samples), both were difficult, so instead I grouped different features based on correlation and tried different combinations between them, like mean, std, and a few other transformations, and evaluated using cv. My assumption in this approach was that correlated features would likely be related. As a very simple example, consider two features, width and length; depending on context they should appear somewhat correlated, and by simply multiplying them we get a new feature, area, which can provide more information to the model.</p> <h2>Sources:</h2> <p>I used the following notebook to find feature groups: <a href="https://www.kaggle.com/code/aerdem4/icr-lofo-feature-importance" target="_blank">https://www.kaggle.com/code/aerdem4/icr-lofo-feature-importance</a></p>
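<p>A minimal pandas sketch of the feature-grouping idea described above: correlated features are grouped and combined with simple aggregations. The groups and column names here are hypothetical; the real groups were found via correlation and LOFO importance.</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy anonymised-style data standing in for the competition features.
train = pd.DataFrame(rng.normal(size=(200, 4)), columns=["AB", "AF", "AH", "AM"])

# Hypothetical groups of correlated features.
groups = {"grp1": ["AB", "AF"], "grp2": ["AH", "AM"]}

for name, cols in groups.items():
    train[f"{name}_mean"] = train[cols].mean(axis=1)
    train[f"{name}_std"] = train[cols].std(axis=1)
    train[f"{name}_prod"] = train[cols].prod(axis=1)  # e.g. width * length giving an area

print(train.head())
</code></pre>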
ICR - Identifying Age-Related Conditions
ICR by symbolic regression (162nd)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this competition I tested my symbolic regression algorithm. I built three mathematical models, and used their average in the result. In two cases, the symbolic regression found a mathematical equation of the form tanh(f(x)), which is equivalent to a logistic function.</p> <pre><code>tanh(((x33/x6)+(((x13/(((x24/x1)*x45)*x37))*x1)*(0.02532214485108852386))))

tanh(((((((0.04220944643020629883)*x33)*x48)*(((0.04220944643020629883)*x33)*x48))*((((0.04220944643020629883)*x33)*x48)*(((0.04220944643020629883)*x33)*x48)))+((x1*x1)*(((-0.05945727974176406860)+(x1*x1))*x9))))

((((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))/(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))+(((x31*((x32*x22)+x24))*x17)*((x32*x22)+x24))))*(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))/(((((x33*x33)*(x33*x33))+x9)*((x56+x13)*(x1*x30)))+(((x31*((x32*x22)+x24))*x17)*((x32*x22)+x24)))))
</code></pre> <p>The notebook is available here: <a href="https://www.kaggle.com/code/jano123/icr-by-symbolic-regression" target="_blank">https://www.kaggle.com/code/jano123/icr-by-symbolic-regression</a>.</p>
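<p>The write-up does not include the author's own symbolic-regression code, so as a rough, hypothetical illustration of the general idea, here is a sketch using the open-source gplearn library (not the author's algorithm). X_train, y_train and X_test are assumed to be prepared elsewhere.</p> <pre><code>import numpy as np
from gplearn.genetic import SymbolicRegressor

est = SymbolicRegressor(
    population_size=2000,
    generations=50,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,  # penalize overly long expressions
    random_state=0,
)
est.fit(X_train, y_train)
print(est._program)  # the evolved closed-form expression

score = est.predict(X_test)
# For expressions of the form tanh(g(x)), the output lies in [-1, 1] and can be
# mapped to a probability via p = (score + 1) / 2, since tanh(z) = 2*sigmoid(2z) - 1.
p = (np.asarray(score) + 1) / 2
</code></pre> <p>Averaging the probabilities of several independently evolved expressions, as the author did with three models, then gives the final prediction.</p>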
ICR - Identifying Age-Related Conditions
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Knowing the inevitable shakeup, I submitted a couple of basic solutions months ago and forgot about the competition. I was surprised when my friends congratulated me in the morning.</p> <p><a href="https://www.kaggle.com/code/celiker/icr-5-place-solution/notebook" target="_blank">Here's the code</a></p> <p>Main points:</p> <ol> <li>Trained models for each of Alpha, Beta, Gamma and Delta, and stacked these probabilities to be used as features (a sketch of this step follows at the end of this write-up).</li> <li>Created LGBM imputer models for every feature, even if it has no missing values in the train data.</li> <li>Used RepeatedStratifiedKFold(n_splits=5, n_repeats=5) with a basic CatBoost model.</li> </ol> <p>Note: removing the imputers didn't affect the score, so the main strength is stacking the greeks.</p> <p>After this solution, I tried brute-force feature engineering and other modeling approaches, but they didn't help on the public LB, so I left this as my final submission.</p>
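<p>To make the greek-stacking step concrete, here is a minimal sketch of how out-of-fold probabilities for one greek target (e.g. Alpha) could be generated and appended as features. The column names, CatBoost settings and helper objects are assumptions, not the author's exact code.</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from catboost import CatBoostClassifier

def oof_greek_probas(X, greek_target, n_splits=5, seed=0):
    # Out-of-fold class probabilities for one greek column (e.g. Alpha).
    classes = np.unique(greek_target)
    oof = np.zeros((len(X), len(classes)))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for trn_idx, val_idx in skf.split(X, greek_target):
        model = CatBoostClassifier(verbose=0, random_seed=seed)
        model.fit(X.iloc[trn_idx], greek_target[trn_idx])
        oof[val_idx] = model.predict_proba(X.iloc[val_idx])
    return oof

# Stack the probabilities of each greek as extra features for the final model.
for col in ["Alpha", "Beta", "Gamma", "Delta"]:
    probas = oof_greek_probas(train_features, greeks[col].values)
    for i in range(probas.shape[1]):
        train_features[f"{col}_proba_{i}"] = probas[:, i]
</code></pre> <p>At inference time, the same greek models, refit on the full training data, would generate the corresponding features for the test rows.</p>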
ICR - Identifying Age-Related Conditions
How on Earth did I win this competition?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello there! This was really unexpected. I hoped to be in the top 10%, but never dreamed of anything more. Thanks to everyone who participated in the competition and especially to those who discussed various ideas and shared their code! And big thanks to SAMUEL, who introduced reweighting the probabilities in this notebook: <a href="https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min" target="_blank">https://www.kaggle.com/code/muelsamu/simple-tabpfn-approach-for-score-of-15-in-1-min</a></p> <p>My solution was purely based on a somewhat sophisticated DNN: <a href="https://www.kaggle.com/room722/icr-adv-model" target="_blank">https://www.kaggle.com/room722/icr-adv-model</a><br> UPDATE: Training notebook attached.</p> <p>What did not work for me:</p> <ol> <li>Gradient boosting was obviously overfitting, although I spent little time on it and didn't do much fine-tuning.</li> <li>The "greeks" were useless, I think because we have no greeks for the test data.</li> <li>FE led to overfitting.</li> </ol> <p>What did work:</p> <ol> <li>A DNN based on a Variable Selection Network. [1]</li> <li>No "casual" normalization of the data like MinMaxScaler or StandardScaler, but instead a linear projection with 8 neurons for each feature (a sketch of this idea follows at the end of this write-up).</li> <li>Huge dropout values: 0.75-&gt;0.5-&gt;0.25 for the 3 main layers.</li> <li>Reweighting the probabilities at the end worked really well.</li> <li>10-fold CV, repeated 10-30 times for each fold, selecting the 2 best models per fold based on CV (yes, CV somehow worked in this competition!). Training was so unstable that the CV scores could vary from 0.25 to 0.05 for a single fold, partly due to the large dropout values and partly due to the small amount of training data. That's why I picked the 2 best models for each fold.</li> <li>The CV was a kind of multi-label setup. First I trained a baseline DNN, gathered all the validation data and labeled it as follows: (y_true = 1 and y_pred &lt; 0.2) or (y_true = 0 and y_pred &gt; 0.8) -&gt; label 1, otherwise label 0. So this label was something like "hardness to predict". The other label was, of course, the target itself.</li> </ol> <p>It is honest to say that I was lucky to win, but for me personally it also means that the DNN wins, and probably not just by luck. And as a big fan of DNNs, this makes me proud and happy))</p> <p>[1] The idea of this network was taken from here: <a href="https://arxiv.org/abs/1912.09363" target="_blank">https://arxiv.org/abs/1912.09363</a></p>
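<p>As a rough illustration of the "linear projection with 8 neurons per feature instead of a scaler" idea plus the aggressive dropout schedule described above, here is a minimal Keras sketch. It is not the author's actual architecture; the feature count and layer sizes are assumptions, and only the 8-neuron projections and the 0.75/0.5/0.25 dropout values follow the write-up.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

n_features = 56  # assumed number of anonymized ICR features
inp = layers.Input(shape=(n_features,))

# Each scalar feature gets its own learned 8-dimensional projection,
# replacing "casual" normalization such as MinMaxScaler / StandardScaler.
projections = []
for i in range(n_features):
    feat = layers.Lambda(lambda t, idx=i: t[:, idx:idx + 1])(inp)
    projections.append(layers.Dense(8)(feat))
x = layers.Concatenate()(projections)

# Three main blocks with heavy dropout (0.75 -&gt; 0.5 -&gt; 0.25), as in the write-up.
for units, drop in [(256, 0.75), (128, 0.5), (64, 0.25)]:
    x = layers.Dense(units, activation="relu")(x)
    x = layers.Dropout(drop)(x)

out = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
</code></pre>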
ICR - Identifying Age-Related Conditions
554th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h1>Overview of the Approach</h1> <p>I like tree-based algorithms like XGBoost and LightGBM, which is why in this competition I built a very simple pipeline that ensembles these two models trained with cross-validation.<br> I didn't use any normalization; I only split the columns into categorical and non-categorical, because the models need to know which features are categorical.<br> Thanks to the shakeup, I got another competition medal.</p> <pre><code>import lightgbm as lgbm
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer

# NOTE: the concrete hyperparameter values were lost when this write-up was
# extracted; the values below are illustrative placeholders.
# parameters: the GridSearchCV parameter grid, defined elsewhere.
model_lgbm = lgbm.LGBMClassifier(
    random_state=42,
    class_weight="balanced",
)
model_lgbm = GridSearchCV(
    model_lgbm,
    parameters,
    cv=5,
    scoring=make_scorer(balanced_log_loss, greater_is_better=False, needs_proba=True),
)
</code></pre> <pre><code>import xgboost as xgb

# NOTE: numeric values below are placeholders; the originals were not preserved.
model_xgb = xgb.XGBClassifier(
    scale_pos_weight=(len(y_train) - sum(y_train)) / sum(y_train),
    random_state=42,
    tree_method="hist",
    enable_categorical=True,
    learning_rate=0.05,
    max_depth=4,
    colsample_bytree=0.8,
    subsample=0.8,
    eta=0.05,  # eta is an alias of learning_rate; kept as in the original listing
    gamma=1.0,
    objective="binary:logistic",
)
model_xgb = GridSearchCV(
    model_xgb,
    parameters,
    cv=5,
    scoring=make_scorer(balanced_log_loss, greater_is_better=False, needs_proba=True),
)
</code></pre> <h1>Details of the submission</h1> <p>I submitted the mean prediction of the two models described above:<br> <code>(y_pred_xgb[:, 0] + y_pred_lgbm[:, 0]) / 2</code></p> <h1>Sources</h1> <ul> <li>LightGBM: <a href="https://lightgbm.readthedocs.io/en/stable/" target="_blank">https://lightgbm.readthedocs.io/en/stable/</a></li> <li>XGBoost: <a href="https://xgboost.readthedocs.io/en/stable/index.html#" target="_blank">https://xgboost.readthedocs.io/en/stable/index.html#</a></li> <li>CV vs LB comparison: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/410474" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/410474</a></li> </ul>
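<p>The scorer above references a <code>balanced_log_loss</code> function that is not shown in the write-up. A minimal reconstruction of the competition's balanced log loss metric (my own sketch of the standard formula, not the author's code) could look like this; with <code>needs_proba=True</code> and a binary target, scikit-learn passes the positive-class probabilities to the scorer.</p> <pre><code>import numpy as np

def balanced_log_loss(y_true, y_pred_proba):
    # Average of the per-class mean log losses; y_pred_proba is P(class 1).
    y_true = np.asarray(y_true)
    p1 = np.clip(np.asarray(y_pred_proba), 1e-15, 1 - 1e-15)
    p0 = 1 - p1
    loss_0 = -np.mean(np.log(p0[y_true == 0]))
    loss_1 = -np.mean(np.log(p1[y_true == 1]))
    return (loss_0 + loss_1) / 2
</code></pre>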
Google - American Sign Language Fingerspelling Recognition
449th Position Solution for Google ASLFR Competition - Adversarial Regularization, Quantization Aware Training, and KD Code for Tensorflow!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone! I'm just posting my solution here, in case it could help someone! I used a lot of techniques, but didn't have the computing power to push the boundaries required for a very high rank. Nevertheless, I learnt a lot of techniques, and am writing this solution here to provide useful code!</p> <h1>Competition Context</h1> <p>The context of this solution is the Google - American Sign Language Fingerspelling Recognition Competition:<br> <a href="https://www.kaggle.com/competitions/asl-fingerspelling/overview" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/overview</a></p> <p>The data of this competition can be found here:<br> <a href="https://www.kaggle.com/competitions/asl-fingerspelling/data" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/data</a></p> <h1>My Approach</h1> <p>My approach was similar to most people's; I referred to the really awesome notebook by Rohit Ingilela here: <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place</a></p> <p>I made several additions and changes, namely adding Quantization Aware Training, AWP, and Knowledge Distillation (not necessarily in the same model). I also tested a lot of parameters and built a lot of models.</p> <p>My idea was to use quantization so that a larger model would still fit within the size limit. I found that beyond 24 million parameters, even though the model size was small enough, inference became too slow to run within the 5 hours allocated for testing. I found that KD didn't improve my performance much, so I didn't explore it a lot. I tried AWP only at the end, and even though it improved performance by around 0.003, I didn't get to test it a lot.</p> <p>Nevertheless, here is some code for each idea:</p> <p><strong>Quantization:</strong></p> <pre><code>!pip install -q tensorflow-model-optimization

import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras import layers

# Annotate layers that should be trained quantization-aware.
x = tfmot.quantization.keras.quantize_annotate_layer(
    layers.Dense(channels, activation=activation, kernel_regularizer=regularizer)
)(inputs)

# After training, convert the TFLite wrapper model with default optimization.
tflitemodel_base = TFLiteModel(model)
tflitemodel_base(frames)[0].shape
keras_model_converter = tf.lite.TFLiteConverter.from_keras_model(tflitemodel_base)
keras_model_converter.optimizations = [tf.lite.Optimize.DEFAULT]
keras_model_converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = keras_model_converter.convert()
with open("model.tflite", "wb") as f:  # output filename is a placeholder
    f.write(tflite_model)
</code></pre> <p><strong>Adversarial Regularization in Training</strong></p> <pre><code>!pip install -q neural-structured-learning

import neural_structured_learning as nsl
import tensorflow_addons as tfa

# NOTE: numeric values below are placeholders; the originals were not preserved.
adv_config = nsl.configs.make_adv_reg_config(
    multiplier=0.2,
    adv_step_size=0.05,
    adv_grad_norm="infinity",
)

def convert_to_dictionaries(image, label):
    # AdversarialRegularization expects dictionary-style inputs;
    # the key names here are placeholders.
    return {"image": image, "label": label}

train_dataset = train_dataset.map(convert_to_dictionaries)
val_dataset = val_dataset.map(convert_to_dictionaries)

with strategy.scope():
    model = nsl.keras.AdversarialRegularization(
        model,
        label_keys=["label"],
        adv_config=adv_config,
    )
    loss = CTCLoss
    optimizer = tfa.optimizers.RectifiedAdam(sma_threshold=4)       # placeholder value
    optimizer = tfa.optimizers.Lookahead(optimizer, sync_period=5)  # placeholder value
    model.compile(loss=loss, optimizer=optimizer)
</code></pre> <p><strong>Knowledge Distillation</strong></p> <pre><code>class Distiller(keras.Model):  # class name restored as a placeholder
    def __init__(self, student, teacher):
        super().__init__()
        self.student = student
        self.teacher = teacher

    def compile(self, optimizer, alpha, temperature):
        super().compile(optimizer=optimizer)
        self.loss = CTCLoss
        self.distiller_loss_fn = keras.losses.KLDivergence()
        self.alpha = alpha
        self.temp = temperature

    def train_step(self, data):
        x, y = data
        # Skip partially filled batches (e.g. the last batch of an epoch).
        is_batch_small = tf.math.less(tf.shape(x)[0], train_batch_size)
        is_batch_small = tf.math.reduce_all(is_batch_small)

        def pass_batch():
            return tf.constant(0.0), tf.constant(0.0)

        def normal():
            teacher_pred = self.teacher(x, training=False)
            with tf.GradientTape() as tape:
                student_pred = self.student(x, training=True)
                student_loss = self.loss(y, student_pred)
                distilled_loss = self.distiller_loss_fn(
                    tf.nn.softmax(student_pred / self.temp),
                    tf.nn.softmax(teacher_pred / self.temp)
                ) * (self.temp ** 2)
                loss = self.alpha * student_loss + (1 - self.alpha) * distilled_loss
            trainable_vars = self.student.trainable_variables
            gradients = tape.gradient(loss, trainable_vars)
            self.optimizer.apply_gradients(zip(gradients, trainable_vars))
            self.compiled_metrics.update_state(y, student_pred)
            return student_loss, distilled_loss

        student_loss, distilled_loss = tf.cond(is_batch_small, pass_batch, normal)
        results = {m.name: m.result() for m in self.metrics}
        results.update(
            {"student_loss": student_loss, "distillation_loss": distilled_loss}
        )
        return results

    def test_step(self, data):
        x, y = data
        is_batch_small = tf.less(tf.shape(x)[0], val_batch_size)
        is_batch_small = tf.reduce_all(is_batch_small)

        def pass_batch():
            return tf.constant(0.0), tf.constant(0.0)

        def normal():
            y_pred = self.student(x, training=False)
            teacher_pred = self.teacher(x, training=False)
            student_loss = self.loss(y, y_pred)
            distilled_loss = self.distiller_loss_fn(
                tf.nn.softmax(y_pred / self.temp),
                tf.nn.softmax(teacher_pred / self.temp)
            ) * (self.temp ** 2)
            self.compiled_metrics.update_state(y, y_pred)
            return student_loss, distilled_loss

        student_loss, distilled_loss = tf.cond(is_batch_small, pass_batch, normal)
        results = {m.name: m.result() for m in self.metrics}
        results.update(
            {"student_loss": student_loss, "distillation_loss": distilled_loss}
        )
        return results
</code></pre> <p>While I didn't get a super high rank, I did learn a lot and used a plethora of techniques!
I wrote most things from scratch to learn them myself, and referred to others' work only to get a better understanding of what they did and to improve my own code as a result.</p> <p><em>Validation</em><br> Quite standard: I kept a holdout set and computed the normalized Levenshtein distance on it after every epoch.<br> No CV due to low computational resources.</p> <h1>Improvements</h1> <p>I am quite disappointed that the things I missed were fairly trivial, such as not training for more than 300 epochs (I could only manage about 100 in order to try enough models and variations) and not realizing that keeping some frames with missing hands was important.</p> <p>I also didn't use enough augmentation, which was important, even though I used all the augmentations used by top submissions except cutout.</p> <p>I also read about Squeezeformer, but got demotivated because I saw high-scoring public submissions without Squeezeformer :(</p> <p>To further improve, I'll correct all the points above.</p> <h1>Final Note</h1> <p>I am quite happy to have participated in this fun and awesome contest. I really learnt a lot, and experimented a lot. I am certain that what I did here will help me in future competitions, research and work.</p> <p>If you could give me any suggestions/feedback on my code above, please do so. It will help me improve and learn, and I will be very grateful for your help!</p>
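<p>For reference, here is a minimal sketch of a normalized Levenshtein distance of the kind used for validation above (my own implementation of the standard metric, not the author's code):</p> <pre><code>def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_levenshtein(references, predictions):
    # Competition-style score: 1 - total edit distance / total reference length.
    total_len = sum(len(r) for r in references)
    total_dist = sum(levenshtein(r, p) for r, p in zip(references, predictions))
    return (total_len - total_dist) / total_len
</code></pre>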
Google - American Sign Language Fingerspelling Recognition
33rd place solution - LB 0.758
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>TL;DR</h2> <p>We used a conformer-like model consisting of a transformer encoder, 1D convolution with CBAM, and a bi-LSTM.<br> The overall model size is about 16 MB after INT8 quantization.<br> The training objective is CTC with an InterCTC loss.</p> <h2>Data preprocess</h2> <ul> <li>Used landmarks<ul> <li>1 for nose, 21 for dominant hand, 40 for lips</li> <li>x, y coordinates</li></ul></li> <li>Normalization<ul> <li>Standardized distance from nose coordinates</li></ul></li> <li>Feature engineering<ul> <li>Concatenation of normalized locations, frame-to-frame differences, and joint distances of the hand</li> <li>Total 582 dims</li></ul></li> <li>Removing samples whose input length is shorter than 2 times the target phrase length</li> </ul> <h2>Data augmentation</h2> <ul> <li>Horizontal flip of landmarks</li> <li>Interpolation</li> <li>Affine transform</li> </ul> <h2>Model</h2> <ul> <li>2 stacked encoders, each with a Transformer, 1D convolution with CBAM and a Bi-LSTM<ul> <li>hidden dim: 352</li></ul></li> <li>CTC loss, plus an InterCTC loss after the first encoder (see the sketch after this write-up)</li> <li>17M parameters and INT8 quantization</li> </ul> <h2>Train</h2> <ul> <li>2-stage training (300 + 200 epochs)<ul> <li>Supplemental and train data for the first 300 epochs</li> <li>Train data only for the last 200 epochs</li></ul></li> <li>Ranger optimizer</li> <li>Cosine decay scheduler with 12 epochs of warmup</li> <li>AWP<ul> <li>It prevents the validation loss from diverging, but it doesn't seem to improve the edit distance.</li> <li>Used to allow training for many epochs.</li></ul></li> </ul> <h2>What didn't work</h2> <ul> <li>Augmentation<ul> <li>Time, spatial, landmark masking</li> <li>Time reverse</li></ul></li> <li>Autoregressive decoder<ul> <li>Joint loss: CTC loss for the encoder and cross-entropy for the decoder</li> <li>Inference time is longer and the number of parameters is larger than for the CTC encoder alone, but the performance improvement was not clear, so we did not use it.</li></ul></li> </ul>
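<p>To illustrate the InterCTC idea from the model section above (an auxiliary CTC loss attached to an intermediate encoder output, as in the intermediate-CTC literature), here is a minimal PyTorch-style sketch. The encoder split, loss weight and tensor shapes are assumptions, not the team's actual code.</p> <pre><code>import torch
import torch.nn as nn

class InterCTCEncoder(nn.Module):
    def __init__(self, encoder1, encoder2, hidden_dim, vocab_size, inter_weight=0.3):
        super().__init__()
        self.encoder1, self.encoder2 = encoder1, encoder2    # two stacked encoder blocks
        self.inter_head = nn.Linear(hidden_dim, vocab_size)  # auxiliary CTC head
        self.final_head = nn.Linear(hidden_dim, vocab_size)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)
        self.inter_weight = inter_weight

    def forward(self, x, targets, input_lengths, target_lengths):
        h1 = self.encoder1(x)   # (B, T, H) after the first encoder stack
        h2 = self.encoder2(h1)  # (B, T, H) after the second encoder stack
        inter_logp = self.inter_head(h1).log_softmax(-1).transpose(0, 1)  # (T, B, V)
        final_logp = self.final_head(h2).log_softmax(-1).transpose(0, 1)
        loss_inter = self.ctc(inter_logp, targets, input_lengths, target_lengths)
        loss_final = self.ctc(final_logp, targets, input_lengths, target_lengths)
        return (1 - self.inter_weight) * loss_final + self.inter_weight * loss_inter
</code></pre>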
ICR - Identifying Age-Related Conditions
17th place!🏆🏆
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>At the beginning of the competition, I immediately set two goals: no greeks and no complex ensembles. My solution is feature generation by the multiplication method plus an SVC model (a rough sketch follows below):<br> <a href="https://www.kaggle.com/code/markdjadchenko/public-krni-pdi-with-pseudo-labelling" target="_blank">https://www.kaggle.com/code/markdjadchenko/public-krni-pdi-with-pseudo-labelling</a></p>
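<p>As a rough illustration of "feature generation by the multiplication method" combined with an SVC, here is a minimal sketch under my own assumption that the method means pairwise products of numeric features; see the linked notebook for the actual implementation. <code>X</code>, <code>y</code> and <code>numeric_columns</code> are assumed to be defined elsewhere.</p> <pre><code>from itertools import combinations

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def add_pairwise_products(df, columns):
    out = df.copy()
    for a, b in combinations(columns, 2):
        out[f"{a}_x_{b}"] = df[a] * df[b]  # multiplicative interaction feature
    return out

X_aug = add_pairwise_products(X, numeric_columns)
clf = make_pipeline(StandardScaler(), SVC(probability=True, class_weight="balanced"))
clf.fit(X_aug, y)
proba = clf.predict_proba(X_aug)[:, 1]
</code></pre>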
ICR - Identifying Age-Related Conditions
585th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone!</p> <p>I'm excited to share my achievement with you – I've secured a bronze medal with my solution. This marks a significant milestone for me, as it's my first medal earned on Kaggle! 🥉 :DD</p> <p>TL;DR:<br> Solution notebook is <a href="https://www.kaggle.com/lukaszsztukiewicz/bronze-medal-solution-11-cells-clean" target="_blank">here</a> (cleaned and commented, only 11 cells!)</p> <p>I am grateful for all the valuable discussions, notebooks, and insights shared publicly. This journey has been an incredible learning experience, and I want to extend special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> ! I learned a lot from you! I'd also like to acknowledge <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/senkin13" target="_blank">@senkin13</a> , fellow competitors that I was just next to on the public leaderboard – being among such talented individuals reassured me that my solutions were in good company hahaha ;)) Lastly, thanks to the Kaggle team and ICR for this uniquely challenging but enjoyable competition!</p> <h2>Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h2>Approach</h2> <h3>Data preprocessing</h3> <p><strong>Missing values:</strong> KNN Imputer with k=3<br> <strong>Feature selection:</strong> dropped "EJ" column <br> <strong>Feature scaling:</strong> RobustScaler (25-75 IQR)</p> <h3>Model(s):</h3> <p>LGBM with handpicked hyperparameters (predictions were made inside 5 x 10 fold CV and then averaged) </p> <h3>Cross Validation</h3> <p>Simple Stratified 10-fold CV averaged across 5 random seeds.</p> <h2>Things that worked (improved my CV):</h2> <p>Robust Scaler improves CV a little over Standard Scaler and no scaling<br> KNNImputer improves CV score a little over the median/mean imputation<br> Best LBGM regularization: max depth + feature sampling by tree </p> <h2>Things that did not work (for me):</h2> <p><strong>Feature selection:</strong> I analyzed the feature importance of XGB, LGBM, and Catboost and tried recursive feature elimination with those<br> <strong>OVR classification:</strong> Tried to build models to predict exact Class instead of binarized version<br> <strong>Other models:</strong> such as RandomForest, ExtraTrees, SVM, LR, TabPFN</p> <h2>Things I wanted to try but didn't have time for:</h2> <p>Probability calibration<br> Binned epsilon feature </p> <p><strong>Second submission:</strong> Stacking Ensemble: 2x LGBM, TabPFN, Catboost, and highly regularized ElasticNet as meta-model ( CV 0.19 | Public LB 0.202 | Private LB 0.473)</p> <p>See you in next competitions!</p>
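<p>A compact sketch of the preprocessing and CV setup described above (KNN imputation with k=3, robust scaling, LightGBM inside a 10-fold CV repeated over 5 seeds). The LightGBM parameters and variable names are assumptions beyond what the write-up states.</p> <pre><code>import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import StratifiedKFold
from lightgbm import LGBMClassifier

test_preds = []
for seed in range(5):                                  # 5 random seeds
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for trn_idx, _ in skf.split(X, y):
        imputer = KNNImputer(n_neighbors=3)
        scaler = RobustScaler()                        # IQR-based scaling
        X_trn = scaler.fit_transform(imputer.fit_transform(X.iloc[trn_idx]))
        X_tst = scaler.transform(imputer.transform(X_test))
        model = LGBMClassifier(max_depth=4, colsample_bytree=0.5, random_state=seed)
        model.fit(X_trn, y.iloc[trn_idx])
        test_preds.append(model.predict_proba(X_tst)[:, 1])

final_pred = np.mean(test_preds, axis=0)               # average over 5 x 10 = 50 models
</code></pre>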
Google Research - Identify Contrails to Reduce Global Warming
11th Place Solution Write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Summary [TLDR]</strong></h2> <ul> <li>Weighted ensemble (optimal weights by Optuna)</li> <li>Multi-stage long training runs (70-100 epochs)</li> <li>Pseudo-labeling (time slices = 1-4, 6-8)</li> <li>SWA [3]</li> <li>TTA d4 / FlipTransforms [4]</li> <li>Image size x2 </li> </ul> <h2><strong>Modelling Approach</strong></h2> <p>We used <code>human_pixel_masks</code> as ground truth. We tried to incorporate <code>human_individual_labels</code> in different ways, but couldn’t improve using them. We didn’t try to average them and train on the soft labels. Our ensemble includes a model that calculates the losses on all individual masks and includes the one with the largest loss as an aux loss.</p> <p>We tried to improve a single model using temporal information and also couldn’t improve over 2d setup, so we moved to create a diverse ensemble using diff encoders and decoders. We used different CNNs and transformers as backbones and we also had models with a transformer decoder (UNetFormer).</p> <p>Many experiments conducted to utilise the additional data in the form of additional time periods so we incorporate that information in at least some way if not directly modelling for it. What ended up working for the pseudo setup in terms of both improving the single model and adding to the ensemble was the following:</p> <ul> <li>Balance the batch to continue ½ pseudo ½ real examples.</li> <li>Apply heavy augmentation on the pseudo data and light to normal augmentation on real data.</li> <li>Mixup the pseudo examples with the real examples.</li> </ul> <h2><strong>Training setup</strong></h2> <ul> <li>Optimizer: AdamW </li> <li>LR: 1.0e-3 or 1.0e-4</li> <li>Scheduler: Cosine Decay with 2 epochs warmup.</li> <li>Augs: Flips / RandomResizedCrop / RandAugment / Mixup / Mosaic</li> <li>Mixed loss: BCE / Dice / Focal </li> </ul> <h2><strong>CV strategy</strong></h2> <p>For most of our experiments we used the split provided by the hosts, i.e. train on full train data and evaluate on <code>validation.csv</code>. We did earlier some experiments with KFold but OOF CV was overestimated and the CV scores on the holdout (validation.csv) weren’t good. 
</p> <h4>Evaluation</h4> <p>We used 2 variants for evaluation against ground truth labels (256x256): <br> i) Model outputs x2 size --&gt; <code>interpolate(256, mode='nearest')</code> --&gt; calculate Dice score <br> ii) Model outputs directly 256 size </p> <p>PS: Most of our experiments used <code>(i)</code>, although we believe 2nd setup without interpolation is more reliable </p> <h2><strong>Architectures / Backbones</strong></h2> <p>Final ensemble </p> <h4><code>CV: 0.7026</code> / <code>Public LB: 0.71868</code> / <code>Private LB: 0.71059</code></h4> <table> <thead> <tr> <th>Backbone</th> <th>Architecture</th> <th>CV</th> <th>TTA</th> <th>Pseudo</th> <th>Mixup</th> </tr> </thead> <tbody> <tr> <td>efficientnet-b7</td> <td>Unet (SMP)</td> <td>0.6428</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b8</td> <td>Unet++ (Timm)</td> <td>0.6704</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b8</td> <td>Unet++ (Timm)</td> <td>0.6752</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>maxvit_small_tf_512</td> <td>Unet (Timm)</td> <td>0.6677</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>convnext_base</td> <td>Unet++ (Timm)</td> <td>0.6618</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>convnext_large</td> <td>Unet (Timm)</td> <td>0.6670</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6869</td> <td>Yes</td> <td>No</td> <td>No</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6853</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>timm-resnest200e</td> <td>Unet</td> <td>0.6885</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>convnext_large_384_in22ft1k</td> <td>UnetFormer</td> <td>0.6788</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>tf_efficientnet_b6</td> <td>UnetFormer</td> <td>0.6532</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <td>eva02_large_patch14_448</td> <td>Unet</td> <td>0.6542</td> <td>Yes</td> <td>No</td> <td>No</td> </tr> <tr> <td>convnext_large_384_in22ft1k</td> <td>UnetFormer</td> <td>0.6841</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> </tbody> </table> <h4>The selected ensemble was our <strong>best CV</strong> / <strong>best Public LB (6th place)</strong> and the <strong>best PVT LB (11th place)</strong></h4> <h2>CV-LB Plot</h2> <p><img src="https://raw.githubusercontent.com/i-mein/Kaggle_images/main/2023-08/CV-PVT.png" alt=""></p> <p><img src="https://raw.githubusercontent.com/i-mein/Kaggle_images/main/2023-08/CV-Public.png" alt=""></p> <h2><strong>Things didn't work</strong></h2> <ul> <li>Use of ConvLSTM, UTAE, 3D-Unets to leverage information from other time slices </li> <li>Post-processing with t-2, t-1, t slices </li> <li>diff image resolutions (384, 768, 1024) </li> </ul> <h2><strong>Acknowledgements</strong></h2> <p>We would like to thank hosts and Kaggle team for organizing such an interesting research competition. We are also grateful to all kagglers that share their ideas in discussions and notebooks. 
</p> <h2>Libraries</h2> <p>[1] <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">Timm</a> <br> [2] <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch (SMP)</a> <br> [3] <a href="https://github.com/izmailovpavel/contrib_swa_examples" target="_blank">SWA implementation</a><br> [4] <a href="https://github.com/qubvel/ttach" target="_blank">ttach</a></p> <h2><strong>Team GIRYN</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2596066%2Fc292aef766c287db0c594c6ed07ababb%2FScreenshot%202023-08-18%20at%2015.39.37.png?generation=1692362680181936&amp;alt=media" alt=""></p> <ul> <li><a href="https://www.kaggle.com/rohitsingh9990" target="_blank">Rohit Singh (Dracarys)</a></li> <li><a href="https://www.kaggle.com/phoenix9032" target="_blank">Nihjar (doomsday)</a> </li> <li><a href="https://www.kaggle.com/imeintanis" target="_blank">Ioannis Meintanis</a></li> <li><a href="https://www.kaggle.com/yousof9" target="_blank">Yousef Rabi</a></li> <li><a href="https://www.kaggle.com/titericz" target="_blank">Giba</a></li> </ul>
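<p>As a supplement to the "optimal weights by Optuna" point in the summary above, here is a minimal sketch of tuning per-model ensemble weights against the validation Dice score. The variable names and the Dice implementation are assumptions, not the team's code.</p> <pre><code>import numpy as np
import optuna

def dice_score(pred_mask, true_mask, eps=1e-7):
    inter = (pred_mask * true_mask).sum()
    return (2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

# val_probs: list of per-model predicted probability maps on the holdout split
# true_masks: the corresponding ground-truth masks
def objective(trial):
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(val_probs))])
    w /= w.sum()
    blended = sum(wi * p for wi, p in zip(w, val_probs))
    return dice_score((blended &gt; 0.5).astype(np.float32), true_masks)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
best_weights = study.best_params
</code></pre>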
ICR - Identifying Age-Related Conditions
Extreme Bagging + Stacking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My conservative solution for this competition consisted of a bagging strategy with 100 balanced holdout sets (see the sketch at the end of this write-up).<br> It consistently scored 100-holdout CV: 0.1617 / Public: 0.22 / Private: 0.34 (top 4).<br> It consists of a weighted-average stacking of 5 models:</p> <ul> <li>Logistic Regression CV: 0.2869</li> <li>MLP CV: 0.2257</li> <li>XGBoost CV: 0.201 / 0.186 / 0.315</li> <li>LightGBM CV: 0.222</li> <li>CatBoost CV: 0.167</li> </ul> <p>Unfortunately, it wasn't one of my final selections. Instead, I selected the same notebook but with test pseudo-labels added in different amounts as extra training data. Although that worked in my local experiments, it didn't work on the private LB.</p> <p>Stacking Notebook <a href="https://www.kaggle.com/code/titericz/blend-v12-extreme-bagging?scriptVersionId=139715250" target="_blank">here</a></p> <p>Top 3 Single XGB <a href="https://www.kaggle.com/code/titericz/top3-xgboost/notebook" target="_blank">here</a></p> <p>Top 2 Single XGB <a href="https://www.kaggle.com/code/titericz/xgboost-late-sub" target="_blank">here</a> - adding dates</p>
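<p>A minimal sketch of the balanced-holdout bagging idea (many class-balanced holdout splits, with the holdout scores averaged for the CV estimate and the per-bag test predictions averaged for the submission). The base model, split size and helper functions are assumptions, not the notebook's exact settings; <code>balanced_log_loss</code>, <code>X</code>, <code>y</code> and <code>X_test</code> are assumed to exist.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_holdout_indices(y, n_per_class, rng):
    # Sample an equal number of rows from each class for the holdout set.
    idx0 = rng.choice(np.where(y == 0)[0], n_per_class, replace=False)
    idx1 = rng.choice(np.where(y == 1)[0], n_per_class, replace=False)
    return np.concatenate([idx0, idx1])

rng = np.random.default_rng(0)
test_preds, holdout_scores = [], []
for bag in range(100):                                   # 100 balanced holdouts
    hold_idx = balanced_holdout_indices(y, n_per_class=20, rng=rng)
    train_idx = np.setdiff1d(np.arange(len(y)), hold_idx)
    model = LogisticRegression(max_iter=1000, class_weight="balanced")
    model.fit(X[train_idx], y[train_idx])
    holdout_scores.append(
        balanced_log_loss(y[hold_idx], model.predict_proba(X[hold_idx])[:, 1])
    )
    test_preds.append(model.predict_proba(X_test)[:, 1])

print("100-holdout CV:", np.mean(holdout_scores))
final_pred = np.mean(test_preds, axis=0)
</code></pre>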
Google Research - Identify Contrails to Reduce Global Warming
346th Rank on the Leaderboard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>My Approach and Achievements</h1> <h2>Leveraging Image Channels and False Color</h2> <p>I tackled the challenge of detecting contrails using an innovative strategy. The provided images were captured in 8 different wavelengths, and annotators used a false color image for labeling contrails. I decided to train my model using this false color representation, as it seemed to improve the visibility of contrails and enhance their detectability.</p> <h2>Effective Interpolation for Boundary Annotations</h2> <p>A significant hurdle I encountered was accurately labeling contrail boundaries. Since contrails suddenly appear or enter from the sides of the image, capturing their true extent was crucial. To address this, I employed an interpolation technique. I first upscaled the image to 384 x 384 and then downscaled it to 256 x 256 using bilinear interpolation. This innovative approach proved successful in obtaining precise boundary labels.</p> <h2>Iterative Model Refinement</h2> <p>I pursued an iterative model refinement strategy that gradually improved my results. I initially trained a U-Net model from scratch using PyTorch. While this yielded a decent dice score of around 0.53, I knew I could do better. I then turned to the <code>segmentation_models_pytorch</code> library and selected a U-Net architecture with a pretrained <code>resnet26</code> encoder. This choice pushed my dice score up to 0.59 on the Kaggle Public leaderboard, showcasing noticeable progress. My determination to enhance performance led me to the final step: training a U-Net model with a pretrained <code>resnet50</code> encoder. This decision paid off with a remarkable dice score of 0.64985 on the Kaggle Public leaderboard and 0.64455 on the Private leaderboard.</p> <h2>Harnessing PyTorch Lightning for Efficiency</h2> <p>To accelerate my training process and streamline metric tracking and hyperparameter management, I adopted the PyTorch Lightning framework. This choice enabled me to harness the power of multiple GPUs efficiently and focus more on experimenting with various strategies rather than dealing with technical intricacies.</p> <h1>Things that Didn't Work</h1> <h2>Time Dependency and LSTMConv Layer</h2> <p>Understanding that contrails must either suddenly appear or be visible in at least 2 images, I attempted to incorporate the temporal aspect by using an LSTMConv layer in my model. Unfortunately, this approach didn't yield the anticipated results. It appears that the temporal aspect might not play a significant role in this problem, or perhaps the architecture I selected wasn't suited for effectively utilizing this information.</p> <h2>Exploring Additional Convolutional Layer</h2> <p>In an effort to enhance the model's input capabilities, I added an extra convolutional layer before the input entered the encoder. This adjustment was aimed at allowing the pretrained model to take a 3-channel input while utilizing data from all 8 wavelengths. However, this change didn't lead to significant performance improvements. 
It appeared that the resnet50 encoder could already effectively extract relevant features from the multi-channel data. As a result, further alterations at this stage in the architecture didn't seem to provide the anticipated benefits.</p> <h1>Opportunities for Improvement</h1> <h2>Soft Labels for More Accurate Annotations</h2> <p>In hindsight, I realized that using the mean of the annotations from the four annotators might have provided a more accurate representation of contrail regions. This could have potentially reduced noise caused by differing interpretations and improved overall model performance.</p> <h2>Hybrid Loss Functions</h2> <p>Initially starting with Binary Cross-Entropy and then shifting to Dice Loss to align with evaluation metrics, I now recognize the value of potentially combining both loss functions. This hybrid approach could strike a better balance between precision and recall during training, leading to improved overall results.</p> <p>My journey in tackling this complex contrail detection problem has taught me a great deal about <code>model architecture</code>, <code>data preprocessing</code>, and the importance of learning from both successes and failures. Armed with these insights, I am excited to continue refining my strategies and exploring new avenues for even better performance.</p>
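<p>A small sketch of the upscale-then-downscale handling described above (train at 384x384, score against the 256x256 ground-truth grid), using bilinear interpolation; the tensor names and shapes are assumptions.</p> <pre><code>import torch
import torch.nn.functional as F

# images: (B, C, 256, 256) inputs; masks: (B, 1, 256, 256) annotations
images_384 = F.interpolate(images, size=(384, 384), mode="bilinear", align_corners=False)
masks_384 = F.interpolate(masks.float(), size=(384, 384), mode="bilinear", align_corners=False)

# ... train the segmentation model at 384x384 ...

# Bring predictions back to the 256x256 grid used by the ground-truth masks.
logits_384 = model(images_384)  # hypothetical model output at 384x384
logits_256 = F.interpolate(logits_384, size=(256, 256), mode="bilinear", align_corners=False)
</code></pre>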
ICR - Identifying Age-Related Conditions
40th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm really surprised by my position in the private leadrboard having gained over 3283 position :D</p> <p><strong>Context section</strong></p> <ul> <li><p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></p></li> <li><p>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p></li> </ul> <p><strong>Overview of the approach</strong></p> <p>The difficulty of this challenge arose from the very limited number of available rows. <br> This also posed significant challenges in defining a suitable cross-validation scheme.</p> <p>As a cross-validation strategy, a stratified scheme was chosen based on the Alpha column.</p> <p>The main component of my solution was to employ a contrastive learning approach. The reason for opting for contrastive learning was that this way, the scarcity of available observations would be compensated by a large number of simulated rows.</p> <p>A substantial portion of the approach is derived from Setfit (source link at the end).</p> <p>The model utilized was an LGBM, which, starting from the absolute difference of all features, aimed to classify whether two features belonged to the same class or not. In addition to the initial features on which the absolute difference was computed, the following additional features were added:</p> <ul> <li>number_zero: % of features with zero difference</li> <li>mean_diff: mean of the absolute difference</li> <li>std_diff: standard deviation of the absolute difference</li> <li>median_diff: median of the absolute difference</li> <li>diff_mean: absolute difference between the means of all initial values</li> <li>diff_std: absolute difference between the standard deviations of all initial values</li> <li>diff_median: absolute difference between the medians of all initial values</li> </ul> <p><strong>Details of the submission</strong></p> <p>For training the model, pairs were sampled in a 1:5 ratio (class 0 vs. 1) to ensure a balanced dataset. For each observation, a certain number of random examples from the same class and from different classes were selected. Any duplicate combinations were removed.</p> <p>The metric used to determine the appropriate number of rounds was AUC.</p> <p>During the inference phase, the following post-processing steps were performed:</p> <ul> <li>Predict the probability that a new observation belongs to class 0 (by comparing it with all class 0 observations).</li> <li>Calculate the probability that it belongs to class 1 (by comparing it with all class 1 observations).</li> </ul> <p>With prob_0 and prob_1 (calculated as the mean of each previous calculated probabilities), calculate prob = prob_1 / (prob_0 + prob_1). 
This functions as a kind of ensemble.</p> <p>This way, for each individual observation to be predicted, 617 different predictions need to be made.</p> <p>The function that compares each new observation with the training observations is given below:</p> <pre><code>def get_retrieval_dataset(
    test: pd.DataFrame,
    target_example: pd.DataFrame,
    feature_list: list
) -&gt; pd.DataFrame:
    test_shape = test.shape[0]
    target_example_shape = target_example.shape[0]
    test_x = test[feature_list].to_numpy()
    # Tile the class examples once per test row, and repeat each test row once
    # per class example, so the two arrays are aligned pairwise.
    target_example = np.concatenate(
        [target_example for _ in range(test_shape)], axis=0
    )
    test_x = np.repeat(test_x, target_example_shape, axis=0)
    index_test = np.repeat(test.index.values, target_example_shape, axis=0)
    # fe_pipeline builds the pairwise features (absolute differences, etc.);
    # the keyword argument names below are placeholders.
    retrieval_dataset = fe_pipeline(
        target_example=target_example,
        test_x=test_x,
        feature_list=feature_list,
    )
    retrieval_dataset["index_test"] = index_test  # column name is a placeholder
    return retrieval_dataset
</code></pre> <p>What didn't work:</p> <ul> <li>Metric Learning using Neural Networks, both DNN and TabNet, did not work regardless of the chosen metric (cosine similarity, mse, contrastive loss, etc.). Unfortunately, I couldn't achieve better CV results than the LGBM, likely due to the extremely limited number of available observations playing a key role.</li> <li>Contrastive learning on the Alpha column</li> <li>Using a weight for each training observation</li> </ul> <p><strong>Sources</strong><br> <a href="https://www.kaggle.com/code/stenford23/icr-inference-contrastive-retriever/notebook?scriptVersionId=130640859" target="_blank">Inference Notebook</a></p> <p><a href="https://huggingface.co/blog/setfit" target="_blank">SetFit</a></p>
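<p>The prob_0 / prob_1 aggregation described above can be sketched as follows (a minimal illustration; the variable names are assumptions):</p> <pre><code>import numpy as np

# pair_preds: model probabilities that the new observation belongs to the SAME
# class as each training row; train_y: the class (0/1) of those training rows.
prob_0 = pair_preds[train_y == 0].mean()  # mean "same-class" score vs class-0 rows
prob_1 = pair_preds[train_y == 1].mean()  # mean "same-class" score vs class-1 rows
prob = prob_1 / (prob_0 + prob_1)         # final probability that Class = 1
</code></pre>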
Google - American Sign Language Fingerspelling Recognition
[5th place solution] Vanilla Transformer, Data2vec Pretraining, CutMix, and KD
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and competition hosts for this incredible and meaningful competition. Here is my solution.</p> <p>The code is available at <a href="https://github.com/affjljoo3581/Google-American-Sign-Language-Fingerspelling-Recognition" target="_blank">the github repository</a>.</p> <h2>TL;DR</h2> <ul> <li>Using hands, pose, and lips landmarks with 3D-based strong augmentation</li> <li>Vanilla transformer with conv stem and RoPE</li> <li>Data2vec 2.0 pretraining</li> <li>CTC segmentation and CutMix</li> <li>Knowledge distillation</li> </ul> <h2>Data Processing</h2> <p>I used 3d landmark points instead of 2d coordinate because it looks like applying rotation augmentation on 3d space makes models to be more robust. According to <a href="https://developers.google.com/mediapipe/api/solutions/java/com/google/mediapipe/tasks/components/containers/NormalizedLandmark" target="_blank">the mediapipe documentation</a>, the magnitude of z uses roughly the same scale as x. Since x and y are normalized by width and height of camera canvas and they are recorded on smartphone devices, I have to denormalize the landmarks with original aspect ratio to apply correct rotation transform. I simply estimate the aspect ratio by solving affine matrix that maps to the normalized hands at each frame to the standard hand landmarks and get scale factors from the affine matrix. The average aspect ratio is <code>0.5268970670149133</code> and I simply multiply <code>1.8979039030629028</code> to the normalized y values, i.e., <code>points *= np.array([1, 1.8979039030629028, 1])</code>.</p> <p>Correct Rotation Transform:<br> $$X' = SRX$$</p> <p>Wrong Rotation Transform:<br> $$X' = RSX$$</p> <p>It's important to note that applying rotation after normalizing coordinates is incorrect. Denormalize, rotate, and then normalize again. Actually, I did not normalize again because it is not necessary.</p> <p>For inputs, I utilized landmarks from the left hand, right hand, pose, and lips. Initially, I focused solely on hand landmarks for the first few weeks and achieved a public LB score of 0.757. I believed fingerspelling is totally related to hand gestures only. Surprisingly, incorporating auxiliary landmarks such as pose and lips leads better performance and helped mitigate overfitting. The inclusion of additional input points contributed to better generalization.</p> <p>Here's a snippet of the data augmentation code I employed for both pretraining and finetuning:</p> <pre><code>Sequential( LandmarkGroups(Normalize(), lengths=(, , )), TimeFlip(p=), RandomResample(limit=, p=), Truncate(max_length), AlignCTCLabel(), LandmarkGroups( transforms=( FrameBlockMask(ratio=, block_size=, p=), FrameBlockMask(ratio=, block_size=, p=), FrameBlockMask(ratio=, block_size=, p=), ), lengths=(, , ), ), FrameNoise(ratio=, noise_stdev=, p=), FeatureMask(ratio=, p=), LandmarkGroups( Sequential( HorizontalFlip(p=), RandomInterpolatedRotation(, np.pi / , p=), RandomShear(limit=), RandomScale(limit=), RandomShift(stdev=), ), lengths=(, , ), ), Pad(max_length), ) </code></pre> <p>The total number of input points is 75. 
Each component consists of 21, 21, 14, and 40 points, respectively. The hand with more <code>NaN</code> values is discarded and only the dominant hand is selected. As seen in the snippet above, spatial transformations are applied separately to the components. The landmark groups are first centered and normalized by maximum x-y values of each group. Note that z-values can exceed 1.</p> <h2>Model Architecture</h2> <p>I employed a simple Transformer encoder model similar to ViT. I used PreLN and rotary position embeddings. I replaced ViT's stem linear patch projection with a single convolutional layer. This alternation aimed to enable the model to capture relative positional differences, such as motion vectors, from the first convolutional layer. With the application of rotary embeddings, there is no length limit and also I didn't truncate input sequences at inference time. Similar to other ViT variants, I also integrated LayerDrop to mitigate overfitting.</p> <h2>Data2vec 2.0 Pretraining</h2> <p>Given that the inputs consist of 3d points and I used a normal Transformer architecture which has low inductive bias toward data attributes, I guessed it is necessary to pretrain the model to learn all about data properties. The <a href="https://arxiv.org/abs/2212.07525" target="_blank">Data2vec 2.0</a> method, known for its remarkable performance and efficiency across various domain, seemed promising for adaptation to landmark datasets.</p> <p><img src="https://i.ibb.co/KLM8LYh/FigureA.png" alt="FigureA"></p> <p>According to the paper, using multiple different masks within the same batch helps convergence and efficiency. I set $M = 8$ and $R = 0.5$, which means 50% of the input sequences are masked and there are 8 different masking patterns. After I experimented many various models, and I arrived at the following final models:</p> <ul> <li>Transformer Large (24L 1024d): 109 epochs (872 effective epochs)</li> <li>Transformer Small (24L 256d): 437 epochs (3496 effective epochs)</li> </ul> <p>Termination of the overall trainings was determined by training steps, not epochs, resulting in epochs that are not multiples of 10. After pretraining the model, the student parameters are used for finetuning.</p> <h2>CTC Segmentation and CutMix</h2> <p>Before explaining the finetuning part, it is essential to discuss CTC segmentation and CutMix augmentation. Check out <a href="https://github.com/lumaku/ctc-segmentation" target="_blank">this repository</a> and <a href="https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html" target="_blank">this documentation</a> which provide information about CTC segmentation. To summarize, a CTC-trained model can detect the position of character appearances, enabling the inference of time alignment between phrases and landmark videos.</p> <p>Initially, I trained a Transformer Large model and created pseudo aligned labels. Using the alignments, I applied temporal CutMix augmentation which cuts random part of the original sequence and inserts a part from another random sequence at the cutting point. This technique significantly reduces overfitting and improves the performance approximately +0.02. 
Furthermore, I retrained the Transformer Large model with CutMix and pseudo-aligned labels like noisy student, and it achieved better performance on CV set.</p> <p>Moreover, I observed that while supplemental datasets without CutMix degrades the performance, they provided substantial enhancement when used with CutMix, resulting in an improvement of about +0.01 on both CV and LB.</p> <h2>Finetuning and Knowledge Distillation</h2> <p>The finetuning phase followed a standard approach. I simply utilized the CTC loss with augmentations mentioned above. Given the constraints of 40MB and 5 hours, the Transformer Large model was too extensive to be accommodated. I explored various combinations and parameter sizes, eventually setting on the Transformer Small (24L 256d) architecture. To compress the Large model into the Small model, I used knowledge distillation like DeiT to predict the hard prediction label from teacher model. I observed sharing same head for KD and CTC adversely affected the performance, so I used distinct heads. I also experimented with RNN-like heads (especially stacked BiLSTM) but there is no performance gain. It seems the model converges fast, but final performance is not changed.</p> <p><img src="https://i.ibb.co/qp1R3pZ/FigureB.png" alt="FigureB"></p> <p>The training epochs were as follows:</p> <ul> <li>Transformer Large (24L 1024d): 65 epochs</li> <li>Transformer Small (24L 256d): 830 epochs</li> </ul> <h2>CV vs LB</h2> <p>Contrary to the standard validation strategy (GroupKFold) that many people used, I simply split 5% of the training set to validate the model performance.</p> <pre><code>train_test_split(train_labels, test_size=, random_state=) </code></pre> <p>While I initially employed group kfold based on participant IDs, I encountered incongruities between cross-validation and public LB scores. The approach outlined above led to consistent improvements in both cross-validation and public LB performance.</p> <p><img src="https://i.ibb.co/DWVG0qF/FigureC.png" alt="FigureC"></p> <h2>What Didn't Work &amp; Future Work</h2> <ul> <li>Using external datasets did not work. Actually, I realized competition dataset is quite large and it was really hard to find large scale fingerspelling set as this competition one.</li> <li>Although prefix beam search showed a modest enhancement of +0.002 even with a small beam size, my implementation in tflite version is too slow, and I didn't use it finally.</li> <li>I experimented with various head architectures, but a single linear layer was sufficient.</li> <li>Conformer and transformer encoder-decoder models were not better than vanilla transformer.</li> <li>I also tried RandAugment and TrivialAugment but hand-crafted strong augmentations worked well.</li> </ul>
Google - American Sign Language Fingerspelling Recognition
[3rd place solution with code] 17 layers squeezeformer with timerduce and ROPE
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is an intesting game, I learned a lot from </p> <p><a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">https://www.kaggle.com/code/hoyso48/1st-place-solution-training</a><br> <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place</a><br> <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference</a></p> <h1><strong>Preprocss</strong></h1> <ol> <li><strong>Do not throw away frames witout hands!</strong></li> <li>Use as much info as possible, (hands, lips, eye, nose, pose) , use (x,y,z)</li> <li>Use more frames, resize to 320 frames for model training.</li> <li>I used <strong>original + normalized + abs position(/1000.)</strong> as input feats(total 384 * 2 + 1 = 769). <br> normalized feats help converge much faster but I found original feats also helps.</li> </ol> <h1><strong>Aug</strong></h1> <p>I followed last competion 1st solution, and found the most important aug are </p> <ol> <li><strong>time scale (interp1d)</strong></li> <li><strong>time dim mask</strong><br> I used heavy mask here which will mask some time seq and aslo will <strong>randomly mask 50% frames</strong></li> <li>affine <br> follow preve 1st solution with prob 0.75, for left-right flip only with prob 0.25</li> </ol> <h1><strong>Training</strong></h1> <ol> <li>I used <strong>train data + sup data combined training with sup data weight set to 0.1</strong> for 400 epochs.</li> <li>batch size 128, max lr 2e-3, lr scheduler using linear decay with 0.1 epochs warmup , adam optimizer</li> <li>Awp training started from epochs * 0.15, with adv_lr 0.2 and adv_eps 0.</li> <li><strong>Fintune with train data only for 10 epochs</strong> with max lr 1e-4, awp start from epoch 2. </li> </ol> <h1><strong>Postdeal</strong></h1> <p>I used <strong>rule for blank index(0)</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2F54e452eca19b2ee1610a7ecef4ea7cdc%2F1.png?generation=1692937017778775&amp;alt=media" alt=""></p> <h1><strong>Model</strong></h1> <ol> <li><strong>Squeezeformer</strong> works best, it is a bit faster then conformer. (Code from NEMO)</li> <li>In order to make net deeper I used <strong>1 down sampling layer to reduce frames from 320 to 160</strong>.<br> This is mainly helpful for adding more encoder layers, and also can allow using complex encoder layers.</li> <li><strong>Relative pos encoding</strong> affect the performance so much, I found <strong>ROPE(by Jianlin Su)</strong> perform best and super fast. (code from huggingface implementation)</li> <li><strong>Dropout is super important, I used final cls_drop=0.1 only, no dropout for other layers.</strong></li> <li>I learned from 1st-place-solution-training that <strong>stochastic path</strong> is <strong>super important to avoid overfit for deep network</strong>. 
So I used it in squeeeze former block for each layer like below(notice the InstDropout and 0.5 skip factor all super important for final performance)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2Ffb17306d174a4bd7a4b81ad0acdc0ade%2F2.png?generation=1692937044317556&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2F4834f8f3d02efab8d8f0bbe32a79f9a4%2F3.png?generation=1692937055628577&amp;alt=media" alt=""><br> Overall model arch below:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F42245%2F68866c3f59e3e6b2f308a54fc8b94248%2F4.png?generation=1692937099435075&amp;alt=media" alt=""></li> </ol> <h1><strong>Others</strong></h1> <ol> <li>For me (using 4090), torch much faster then tf (3-4 times faster) </li> <li>I used tf at first and swich to torch in the last month which help to speedup experiments a lot. <br> Speed is only one factor, another important thing is I could easily try opensource code of ASR like NEMO or espnet.</li> <li>I used <strong>nobuco</strong> to convert torch model to keras, it works like a charm. </li> <li>I still used tf for prorcess/aug and post process and also use tfrecord as input format, I wrote a torch iterable dataset which wrap tfrecord reader.</li> </ol> <h1><strong>TODOS</strong></h1> <p>Due to time limit, I could not finish more experiments at last days, but some possible improvements might be</p> <ol> <li>Model can be deeper up to <strong>20 layers</strong><br> 20 layer model perform better then 17 layers but I only trained 300 epochs which perform not as good as 17 layers + 400 epochs.</li> <li>More epochs training, maybe 500 or 600 ? <br> For 17 layer model from 300 to 400 improve LB 4 points and PB 2 points, so might sitll could train more epochs and might use 20 layer model for more epochs help even more:)</li> <li>From what I learned from other solutions, it seems I missed some major points here</li> </ol> <ul> <li>cutmix<br> this is a pity, I planned to do this at the begging but did not try it, as I found hard mask of frames(50%+) worked very well, I should have realized that cut mix might help even more. <br> simple concat 2 instances <br> concat with some ratio like 0.7 and 0.3 <br> concat with using ctc segmentation </li> <li>seq2seq method and ctc+attention decode method <br> I tried seq2seq using tf at the begging of this competition but not give good results, should have tried it after changing to use torch with squeezeformer encoder. 
</li> <li>input len mask to speedup infer</li> <li>try even more max input frames from 320 to 384 or 512(with input len mask infer)</li> </ul> <h1><strong>Code</strong></h1> <p>Opensource all codes here: <br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step1-gen-tfrecords-for-train" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step1-gen-tfrecords-for-train</a><br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step2-gen-tfrecords-for-supplement" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step2-gen-tfrecords-for-supplement</a><br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step-3-gen-mean-and-std" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step-3-gen-mean-and-std</a><br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step-4-train-squeezeformer" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step-4-train-squeezeformer</a><br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step-5-torch2keras-using-nobuco" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step-5-torch2keras-using-nobuco</a><br> <a href="https://www.kaggle.com/code/goldenlock/3rd-place-step-6-inference" target="_blank">https://www.kaggle.com/code/goldenlock/3rd-place-step-6-inference</a> <br> Notice I did not reproduce my final results using kaggle notebooks, so if you want to reproduce or want to find the original code you could find it here:<br> <a href="https://github.com/chenghuige/Google-American_Sign_Language_Fingerspelling_Recognition" target="_blank">https://github.com/chenghuige/Google-American_Sign_Language_Fingerspelling_Recognition</a> </p>
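<p>To make the "stochastic path" idea from the Model section above more concrete, here is a minimal PyTorch sketch of a residual block whose branch is randomly dropped per sample during training and whose skip connection is scaled by 0.5. It is an illustration only: the class and argument names are mine, and the actual Squeezeformer blocks (with InstDropout) live in the linked repository.</p> <pre><code>import torch
import torch.nn as nn

class StochasticResidual(nn.Module):
    """Sketch of a stochastic path: the residual branch is kept with
    probability survival_prob (per sample), and the skip is scaled by 0.5."""
    def __init__(self, block, survival_prob=0.8, skip_factor=0.5):
        super().__init__()
        self.block = block
        self.survival_prob = survival_prob
        self.skip_factor = skip_factor

    def forward(self, x):                      # x: (batch, time, dim)
        residual = self.block(x)
        if self.training:
            # one Bernoulli draw per sample, rescaled so the expectation is unchanged
            keep = torch.rand(x.shape[0], 1, 1, device=x.device).lt(self.survival_prob).float()
            residual = residual * keep / self.survival_prob
        return self.skip_factor * x + residual
</code></pre>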
ICR - Identifying Age-Related Conditions
98th Place Solution for ICR Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach to the modelling in this competition was a multistep underbagged ensemble model. Firstly, a model is trained to predict variables from the Greek dataset, which is only available for the training data. This model is used to augment the measurement features (i.e. train/test datasets) with predictions of the Greek variables. This augmented data is used to train the main model, which is an ensemble of (mostly) underbagged models. I published <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">a notebook</a>, which is not the actual submission but a cleaned-up version of very similar code used in the submission.</p> <h2>Context section</h2> <ul> <li><p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></p></li> <li><p>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p></li> </ul> <h2>Overview of the approach</h2> <p>We were provided with a range of anonymized measurement data (train/test), and several categorical variables (greeks) that were only provided for the training data. The predictand (the occurrence of one or more conditions: <code>Class</code>) was imbalanced, with most (&gt;80%) rows in the training data having no condition (i.e. value of 0). </p> <p>The greeks dataset consisted of two variables that were refinements of <code>Class</code>. <code>Alpha</code> separated positive <code>Class</code> occurrences into three different conditions, labelled 'B', 'D', and 'G'. The <code>Gamma</code> column further refined <code>Alpha</code>. The meaning of <code>Beta</code> was unclear, but showed a <a href="https://www.kaggle.com/code/nnjjpp/epsilon-eda-for-icr-competition#Annual-analysis" target="_blank">large shift</a> from 'C' to 'B' over the years of the training data. <code>Epsilon</code> appeared to be date of testing, and had a large number (approximately 1/4) of missing values.</p> <p>These considerations suggested that the greeks dataset held quite a bit of predictive information, and in order to use this data directly for predictions, I developed a two-step model: Firstly, we use the training data to model the encoded greeks variables, and use this model to predict greek probabilities. Secondly, the greek probabilities are used to augment the dataset, which is then used to predict the <code>Class</code> variable. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1158024%2F3361fddbc4c59b58e049789258088d94%2Fgreek_schematic.png?generation=1692367050270534&amp;alt=media" alt="Greek preprocessing schematic"></p> <p>To deal with class imbalance, I used <a href="https://www.kaggle.com/code/nnjjpp/underbagging-undersampling-and-bagging-estimators" target="_blank">underbagging</a>:</p> <blockquote> <p>In order to improve model performance, we create an undersampled dataset, i.e. 
use all the positive cases, but take only a small subset of the negative cases to get a balanced dataset with a roughly 50:50 split of cases. This works to improve classification accuracy, but we also lose a lot since almost all of the data is thrown away. One way to use this data is to create a bagging classifier (i.e. a similar rationale to a random forest classifier), where we fit many copies of the base estimator on different samples of the dataset, but using the undersampling procedure to generate the balanced datasets.</p> </blockquote> <p>And also the parameters of the models (where they existed) to deal with class imbalance.</p> <p>Finally, mean predictions from an ensemble of individually tuned models (various configurations of XGBoost, logistic regression, Catboost and lightGBM) was used to provide the final predictions.</p> <h2>Details of the submission</h2> <p>The general consensus was that the main difficulty in this competition was avoiding overfitting (especially to the public leaderboard). In early attempts, I found that increasing the number of hyperparameter tuning iterations made predictions worse, and this was because CV evaluation was very variable across folds, with hyperparameters being tuned solely to reduce the loss function in difficult folds. Overall performance of a model across all folds drops when this occurs. Experimenting with CV schemes, and avoiding data leakage was critical to avoid overfitting.</p> <p>To avoid data leakage, the greek probability data augmentation needed to be wrapped up in the cross-validation scheme. To this end I wrote the encoding, fitting and prediction as class <code>GreekPreprocessing</code>. I couldn't interface this properly with the scikit-learn <code>RandomizedSearchCV</code> class, as the greeks dataset needs to be passed through to the <code>GreekPreprocessing</code> class constructor, so a hand-coded randomized search function was needed.</p> <ul> <li><p>Preprocessing was minimal: the Greek dataset, and column <code>EJ</code> was encoded, simple imputation for missing data, and scaling was applied. Also an option for dropping columns with <code>SelectKBest</code> was included in the hyperparameter search.</p></li> <li><p>Hyperparameter tuning: A hand-coded randomized search CV function was used for each model in the ensemble (40 iterations). </p></li> <li><p>Cross-validation: As we saw, getting a robust CV scheme was extremely difficult in this competition due to the heterogeneity of the training data, public test and private test datasets. I tried a number of different approaches to cross-validation, including <code>KFold</code>, <code>StratifiedKFold</code> (on <code>Class</code>, <code>Alpha</code> and other combinations of variables from the greeks dataset), random sampling of rows, and a time-based <code>GroupKFold</code> strategy on the <code>Epsilon</code> column from the greeks dataset. Ultimately it was not possible to tell which CV strategy was most appropriate as we had little clues as to what the private test data would look like. I ended up using Alpha-stratification.</p></li> <li><p>Model ensemble: models from the ensemble with CV score greater than 0.4 were excluded, probabilities from the remaining models were blended using the mean. Models without balancing parameters produced lower probability predictions (i.e. were more likely to predict class 0). 
In contrast, the underbagged models produced higher probability predictions, and the unbagged models balanced with internal parameters produced probability predictions between these two extremes. <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2#5.-Blend-ensemble-probabilities" target="_blank">See the figure in the notebook</a>.</p></li> <li><p>Submission evaluation: Actual submission: Public 0.14, Private 0.38, <a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">published notebook</a>: Public 0.22, Private 0.36, Validation 0.31, Cross-validated (estimated from average of ensemble member CV scores) 0.35.</p></li> </ul> <h2>Sources</h2> <ul> <li><p>I think we can all agree that the public leaderboard for this competition was downright dangerous.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/icr-silver-medal-98th-place-top-2" target="_blank">My published notebook</a> was not the actual submission but is a cleaned-up very similar version.</p></li> <li><p><a href="https://www.kaggle.com/code/janiobachmann/credit-fraud-dealing-with-imbalanced-datasets" target="_blank">Undersampling described in gold medal note book by janiobachmann</a>.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/underbagging-undersampling-and-bagging-estimators" target="_blank">My underbagging notebook</a> contains classes for underbagging scikit-learn estimators.</p></li> <li><p><a href="https://www.kaggle.com/code/nnjjpp/epsilon-eda-for-icr-competition" target="_blank">My Epsilon EDA</a>.</p></li> <li><p>There were many great ideas on <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion" target="_blank">the competition discussion forum</a>. I found posts by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a>, <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>, and <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> particularly helpful.</p></li> </ul>
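<p>For readers unfamiliar with underbagging, here is a minimal sketch of the idea described above: each bag keeps all positive rows plus an equally sized random subset of negative rows, and the bag predictions are averaged. The base estimator and bag count are placeholders, not the exact configuration used in the submission.</p> <pre><code>import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def fit_underbagged(X, y, base_estimator=None, n_bags=25, seed=0):
    """Train one model per balanced undersample of the data (X, y as numpy arrays)."""
    base_estimator = base_estimator or LogisticRegression(max_iter=1000)
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    models = []
    for _ in range(n_bags):
        neg_sample = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, neg_sample])
        models.append(clone(base_estimator).fit(X[idx], y[idx]))
    return models

def predict_underbagged(models, X):
    """Average the positive-class probabilities across all bags."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
</code></pre>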
Google - American Sign Language Fingerspelling Recognition
95th Bronze solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>95th Bronze solutions</h1> <p>Thanks to Kaggle for hosting this interesting competition!!!!<br> This competition was diffucult for us…<br> There were many differences from last competition which made it challenging. I enjoyed learning about transcription task.</p> <h1>Summary</h1> <p>We utilized the best public notebook with some modifications.</p> <ul> <li>base notebook: <a href="https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-place</a></li> </ul> <h1>What changes were made from the public notebook?</h1> <ul> <li><p>Normalization (+0.002 LB)</p> <ul> <li>Adjusted all coordinates so that the center of numbers 11 and 12 is set to 0.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F6d424a3976cee8f769e3e90ba54a9ac2%2Finbox_2930242_fa394dbd96e33fb875aef6ed1f25c757_1.png?generation=1692922021894426&amp;alt=media" alt=""></li></ul></li> <li><p>Added more features (+0.01 LB)</p> <ul> <li>Motion, Shape Feature, <ul> <li>Shape is distance of this point<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F3f7fdc56a208576b7664408fd807e422%2Finbox_2930242_b739316822c577ef9b4be204325f569e_2.png?generation=1692922063719858&amp;alt=media" alt=""></li></ul></li></ul></li> <li><p>Feature Augmentation (+0.01 LB)</p> <ul> <li>Framerate, randomDrop, random padding, random crop.</li></ul></li> <li><p>Changed Model (+0.005 LB)</p> <ul> <li>dim : 384 =&gt; 192</li> <li>conv trans Block num : 3=&gt;10</li> <li>Add aux Loss</li></ul></li> <li><p>Pretraining with supplementay data (+0.01 LB)</p></li> <li><p>Training params</p> <ul> <li>epoch 100</li> <li>batchsize 256</li> <li>learningrate 1e-2</li></ul></li> </ul> <h1>Things I couldn't do</h1> <ul> <li>Error correction Network.<ul> <li>I read the following paper and tried to incorporate voice recognition technology, but I couldn't make it in time.</li> <li><a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwjAkrKRvvaAAxWBO3AKHbDkB4MQFnoECBMQAQ&amp;url=https%3A%2F%2Farxiv.org%2Fabs%2F2111.01690&amp;usg=AOvVaw3B--e8pJqXTaYolWYXIBOf&amp;opi=89978449" target="_blank">https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwjAkrKRvvaAAxWBO3AKHbDkB4MQFnoECBMQAQ&amp;url=https%3A%2F%2Farxiv.org%2Fabs%2F2111.01690&amp;usg=AOvVaw3B--e8pJqXTaYolWYXIBOf&amp;opi=89978449</a></li></ul></li> <li>Change to SOTA OCR model.<ul> <li><a href="https://arxiv.org/pdf/2205.00159.pdf" target="_blank">https://arxiv.org/pdf/2205.00159.pdf</a></li></ul></li> </ul> <h1>Appendix: Model code</h1> <pre><code>def get: inp = tf.keras. 
x = tf.keras.layers.(inp) x = tf.keras.layers.(x) + positional x = tf.keras.layers.(x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x_aux = (x) x = (x_aux) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) x = (x) # main x = tf.keras.layers.(x) x = tf.keras.layers.(x) x = tf.keras.layers.,name='out')(x) #aux x_aux = tf.keras.layers.(x_aux) x_aux = tf.keras.layers.(x_aux) x_aux = tf.keras.layers.,name='aux_out')(x_aux) model = tf.keras. loss = { 'out':CTCLoss, 'aux_out':CTCLoss } # Adam Optimizer optimizer = tfa.optimizers. optimizer = tfa.optimizers. model.compile(loss=loss, optimizer=optimizer) return model </code></pre>
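<p>As an illustration of the normalization step described above (centering all coordinates on the midpoint of landmarks 11 and 12), here is a small hedged sketch. It assumes the landmarks are stacked as a (frames, landmarks, xyz) array and that indices 11 and 12 refer to the pose points shown in the figure; the helper name and array layout are mine.</p> <pre><code>import numpy as np

def center_on_landmarks(frames, idx_a=11, idx_b=12):
    """Shift every frame so the midpoint of two reference landmarks becomes 0.
    frames: float array of shape (n_frames, n_landmarks, 3), NaNs allowed."""
    center = np.nanmean(frames[:, [idx_a, idx_b], :], axis=1, keepdims=True)  # (n_frames, 1, 3)
    return frames - center
</code></pre>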
ICR - Identifying Age-Related Conditions
Silver medal Solution 0.3995 with CatBoost and Optuna
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm very excited about the results. This is my first Kaggle competition and the result is very good for me.<br> I made several attempts with other similar models like XGBoost and LightGBM, but the best result was achieved using CatBoost, optimizing the hyperparameters with Optuna.<br> It is a simple solution; although the result is not the best, it is very good given my level of experience.<br> The solution link is:</p> <p><a href="https://www.kaggle.com/code/andrsvercesi/icr-eda-and-modeling-silver-medal" target="_blank">https://www.kaggle.com/code/andrsvercesi/icr-eda-and-modeling-silver-medal</a></p> <p>Greetings and see you in the next competition</p>
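<p>The write-up does not include code, so here is a hedged sketch of what tuning CatBoost hyperparameters with Optuna typically looks like. The search space, trial count and the plain <code>neg_log_loss</code> objective are illustrative placeholders (the competition itself scored a class-balanced log loss), and <code>X</code>/<code>y</code> stand for the prepared training data.</p> <pre><code>import optuna
from catboost import CatBoostClassifier
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        "iterations": trial.suggest_int("iterations", 200, 2000),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "depth": trial.suggest_int("depth", 3, 8),
        "l2_leaf_reg": trial.suggest_float("l2_leaf_reg", 1.0, 30.0, log=True),
        "auto_class_weights": "Balanced",   # helps with the imbalanced target
        "verbose": 0,
    }
    model = CatBoostClassifier(**params)
    # neg_log_loss is "higher is better", so the study maximizes it
    return cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>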
Benetech - Making Graphs Accessible
20th place solution (YOLOv7+EasyOCR)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the prize/medal winners of the competition🎉🎉. It was a great opportunity to learn and grow.I would like to share my approach here.</p> <p>My approach to the problem involves two main steps: object detection and Optical Character Recognition(OCR).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2F7cf3386a43e8f1fa047c53892e9ab40f%2F247040237-f349dd62-6dbb-4792-8654-65d870c1b6f1.png?generation=1687256357459108&amp;alt=media" alt=""></p> <p><strong>Object Detection(Yolov7-X):</strong><br> I trained a yolov7 model to detect the x-axis labels, y-axis labels, the chart bounding box, and the data points on the chart. The coordinates of the data-points were not provided in the dataset. I was able to accurately compute the position of the datapoints on the chart images by linear interpolation of the x-axis and y-axis tick coordinates with respect to the x-axis/y-axis labels (values). <a href="https://www.kaggle.com/samratthapa/data-prep-for-yolov7-data-point-x-y-label-chart" target="_blank">Here</a> is a notebook to create a dataset using this approach. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2Fc68d1abc59f45a4aef8b8c6628d2fb23%2FScreenshot%202023-06-20%20at%2019.25.55.png?generation=1687256785660301&amp;alt=media" alt=""><br> During inference, I inverted this process by calulating the data-series from the linear interpolation of the values of x-axis and y-axis labels with respect to the co-ordinates of the data points.</p> <p>This approach also works relatively well for scatter plots, compared to other approaches like Donut.</p> <p>Notes:</p> <ul> <li>This object detection model was also used as the chart-type classification model.</li> <li>There are overlapping bounxing boxes for some x-axis labels like the image below. However, my OCR model was able to extract the correct text despite the input image including text from neighbour bboxes.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4461928%2F5dff7698a985cbbdbd1b333700b5537f%2F247041714-1ab49a89-1db7-48a1-9b3e-429217142b45.png?generation=1687256377327384&amp;alt=media" alt=""></li> </ul> <p><strong>OCR model:</strong><br> Using the EasyOCR libary, I trained a ResNet(feature extractor)+BidirectionalLSTM model with Connectionist Temporal Classification(CTC) loss. The additional dataset(ICPR 2022 CHART) improved the accuracy of the OCR model by about 5% from 84% to 89%. 
I used <a href="https://www.kaggle.com/code/nadhirhasan/let-s-finetune-easyocr-boom" target="_blank">this notebook </a>as a starting point to train the OCR model (Thank you <a href="https://www.kaggle.com/nadhirhasan" target="_blank">@nadhirhasan</a> ).</p> <p><strong>Post processing:</strong><br> After receiving the bounding boxes from the model, I performed some post-processing based on some simple heuristics like: removing the data points that lie outside the chart bbox, restricting x-labels(y-labels for horizontal-bar) to lie under the chart bbox, and restricting y-labels(x-labels for horizontal-bar) to the left side of the chart bbox. Also, the x/y axis tick coordinates are calculated using the x/y-axis bbox, and the chart bbox. I use the nearest point that lies on the chart bbox from the center of the x/y label bbox as the respective x/y tick coordinate. I chose this approach because the precision and recall of the x/y labels was higher than the x/y axis ticks in an older version of the model.</p> <p>I participated in this competition only for the last 4 weeks. So, due to lack of time, I wasnt able to try out other approaches like Donut . I think there is a lot of room for improvement for this model. For example,about 25% of the predictions made by the model automatically get scored 0, because of mismatching number of predictions. This mismatch is due to only 1 or 2 points for charts besides scatter plot.</p> <p>Submission Notebook:<br> <a href="https://www.kaggle.com/code/samratthapa/20th-place-submission-yolov7-easyocr/notebook?scriptVersionId=134338733" target="_blank">https://www.kaggle.com/code/samratthapa/20th-place-submission-yolov7-easyocr/notebook?scriptVersionId=134338733</a></p> <p>For model checkpoints, and training scripts: <br> <a href="https://github.com/SamratThapa120/benetech-making-graphs-accessible" target="_blank">https://github.com/SamratThapa120/benetech-making-graphs-accessible</a></p>
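<p>The key trick above is mapping between pixel coordinates and data values using the detected axis labels and ticks. A minimal sketch of that inversion for a numeric axis (plus a nearest-tick lookup for categorical axes) could look like the following; the function names and the assumption of a linear axis are mine.</p> <pre><code>import numpy as np

def pixels_to_values(point_px, tick_px, tick_values):
    """Fit a linear map from tick pixel positions to tick values, then apply
    it to the detected data-point pixel coordinates (works for linear axes)."""
    slope, intercept = np.polyfit(tick_px, tick_values, deg=1)
    return slope * np.asarray(point_px, dtype=float) + intercept

def nearest_tick_label(point_px, tick_px, tick_labels):
    """Categorical axis: assign each data point the label of the closest tick."""
    tick_px = np.asarray(tick_px, dtype=float)
    return [tick_labels[int(np.argmin(np.abs(tick_px - p)))] for p in point_px]
</code></pre>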
ICR - Identifying Age-Related Conditions
157th (preliminary) silver medal solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had limited time for this competition, and my previous submission was over 2 months ago. Despite that, the method I'm presenting here won a silver medal. I had additional submissions that could have performed equally as well, indicating the overall approach was sound. With more time, this approach might have yielded even better results if I could have devoted more effort to the competition.</p> <p>A public version of my code is available <a href="https://www.kaggle.com/code/wisawesome/icr-identity-157th-place-prelim" target="_blank">here</a>.</p> <p>Things that worked for me:</p> <ul> <li>CatBoost</li> <li><code>auto_class_weights = "Balanced"</code></li> <li><code>logloss</code> with <code>sample_weights</code> set to account for the competition metric (see my code <a href="https://www.kaggle.com/code/wisawesome/icr-identity-157th-place-prelim" target="_blank">here</a>).</li> <li>ordered boosting. This dataset is small and as such prediction shift is a real concern. In these cases ordered boosting seems to work well</li> <li>Stochastic Gradient Langevin Boosting, see <a href="https://arxiv.org/abs/2001.07248" target="_blank">this paper by Aleksei Ustimenko and Liudmila Prokhorenkova</a> for details.</li> <li>RepeatedStratifiedKFold with 5 folds and 5 repeats resulting in 25 CatBoost models total.</li> <li><code>bootstrap_type= "Bernoulli"</code> with <code>subsampling</code> and <code>colsample_bylevel</code> because otherwise it would take forever to train.</li> <li>parameter tuning using Optuna.</li> <li>Ensembling predictions from all 25 CatBoost models.</li> </ul> <p>Things that did not work for me:</p> <ul> <li>post-processing and thresholding. Very quickly I realized that making as little as 1 mistake will knock me out of the competition with this competition metric. I decided that it was not worth the risk.</li> <li>XGBoost and LightGBM. Could not get scores that were anywhere close (in terms of cross validation) to the ones obtained by CatBoost. Probably ordered boosting had something to do with it, but there are also other design differences (symmetric trees etc.)</li> <li>Ensembling with TabPFN. I really like the idea behind TabPFN and the authors claim that ensembling with other methods may improve results however in this particular instance I was not able to do so in CV. Perhaps TabPFN is more suited for a different type of dataset. I really like TabPFN's execution speed and I would have loved it if it worked as well as CatBoost.</li> <li>Dropping features. Did not spend much time to select features but the feature selection that I did do based on CatBoost feature importances did not yield better results in CV.</li> <li>Greeks. Not available in test so I could not find a way to make them work for me. I did use them for EDA but I could not see how they would be useful if absent from test.</li> </ul> <p>Things that I did not try but I wish I had:</p> <ul> <li>Using the time domain. Lots of other notebooks used time. I just ignored it. 
It could have been the case that the data is not coming from the same distribution over time due to the way data is collected but there was no guarantee that these hypothetical data collection issues would persist into the testing data. In hindsight I should have spent some time to explore this.</li> <li>Dropping unusual observations. We had very little data to work with anyway so I was hesitant to do this because valuable information might be lost in the process.</li> </ul> <p>I hope you found my solution interesting and good luck in the next competition!</p> <p>Cheers,<br> Wojtek (Victor)</p>
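<p>Two of the ingredients above are easy to sketch: per-row sample weights that make plain logloss behave like the competition's balanced log loss, and a CatBoost configuration with ordered boosting, Langevin boosting and Bernoulli bootstrapping. The parameter values and <code>X_train</code>/<code>y_train</code> below are placeholders, not the tuned settings from the actual submission.</p> <pre><code>import numpy as np
from catboost import CatBoostClassifier

def balanced_sample_weights(y):
    """Weight rows so each class contributes equally to the logloss."""
    y = np.asarray(y)
    n0, n1 = (y == 0).sum(), (y == 1).sum()
    return np.where(y == 1, 0.5 / n1, 0.5 / n0) * len(y)

model = CatBoostClassifier(
    loss_function="Logloss",
    boosting_type="Ordered",        # ordered boosting for small datasets
    langevin=True,                  # stochastic gradient Langevin boosting
    bootstrap_type="Bernoulli",
    subsample=0.8,
    colsample_bylevel=0.8,
    verbose=0,
)
model.fit(X_train, y_train, sample_weight=balanced_sample_weights(y_train))
</code></pre>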
ICR - Identifying Age-Related Conditions
Silver Medal - LB=0.39 - Sliding Time Cross Validation!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Exciting competition! Thanks Kaggle and ICR. I had three final submissions to choose from</p> <ul> <li>Time-CV Model with no funny business - Code published <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a> and achieves Silver Medal</li> <li>LB Probing Pseudo Label model 😀 (probing explained <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430764#2386580" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431275" target="_blank">here</a>. And shown <a href="https://www.kaggle.com/code/cdeotte/use-public-lb-probing-data-lb-0-0" target="_blank">here</a>)</li> <li>LB Probing Pseudo Label with conservative PP Threshold model 😀 (threshold analysis shown below)</li> </ul> <p>What do they say? "Go Big or Go Home!" For my final two submissions, I did not pick my first bullet point above (which I suspected would win a medal). Instead I chose the last 2 bullet points in an attempt to win 1st place! Note that using probed positive targets from public LB actually hurt my private LB score. And even a very conservative PP threshold (<code>p[p&lt;0.1]=0</code>) hurt my private LB score. Both risks failed 😀</p> <p>Let's discuss my solution which did not include any funny business.</p> <h1>Sliding Time CV Scheme</h1> <p>First we sort all the train data by date. Next we assign a random date to the train samples without a date. Afterward we slide a window with indexes <code>START</code> and <code>END</code>. Then our validation data for each fold are all the rows from <code>START</code> to <code>END</code>:</p> <pre><code>for fold in range(FOLDS):
    START = fold * STEP
    END = fold * STEP + VAL_SIZE
    X_valid = train.iloc[START:END]
</code></pre> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/time-cv.png" alt=""></p> <h1>Why Use Time CV?</h1> <p>The purpose of a local validation is to mimic the relationship between train data and test data. In this competition the host tells us that all the test data occurs after (in time) the train data. In other words, we must train a model on data from one time period and predict data from another time period.</p> <p>Normally I would not think that time would influence what the model learns about medical diagnosis; however, in this competition if you train an XGB model with time as a feature column it becomes the most important feature even after fillna() with random dates for the train data without dates. </p> <p>This means that the data is changing. Perhaps the data is collected in one location on one set of machines during one time period and later collected by different machines that are calibrated differently. For some reason, the relationship between features and targets is changing over time!</p> <h1>OOF Validation Scores</h1> <p>We observe that different time periods have different difficulty when predicting. Notably, easy time periods have an OOF validation balanced log loss of <code>0.15</code> whereas difficult time periods have <code>0.40</code>. 
Since the host said this was a difficult problem, we can deduce that public test is easy time and private test is hard time:</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/val-date.png" alt=""></p> <h1>CV vs Public LB vs Private LB</h1> <p>My time-CV model has an average local <code>CV = 0.27</code> and <code>public LB = 0.17</code> and <code>private LB = 0.39</code>. See notebook <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a>. When using time CV improvements in CV lead to improvements in public and private LB.</p> <h1>Analyze "Risky" Post Processing</h1> <p>Once we have a local validation that mimics the LB, we can analyze PP. For each of the 25 OOF folds, we can find the best Top and Bottom thresholds and plot them. We notice that during some time periods, thresholding works and during others it does not. If we pick the median bottom threshold (i.e. <code>pred[pred&lt;median_bottom]=0</code>), then CV informs us that we have a 50% chance of it working if private test is a "random time period". </p> <p>Note to find thresholds for private LB, we must redo the plot below using the size of private LB which is around 200 samples. Plot below is for 120 samples per validation fold and comes from my notebook <a href="https://www.kaggle.com/code/cdeotte/silver-medal-using-sliding-time-cv" target="_blank">here</a>. The Yellow line is the best top threshold for each fold (i.e. time period). The Green is bottom. The Blue line is the validation fold balanced log loss metric. And the Red line is how much better each fold can become using the optimal thresholds. We note that the best validation fold with PP achieves <code>0.06</code> just like the public LB. And we note that the worst validation fold does not improve from PP just like private LB.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/time-thresholds2.png" alt=""></p> <h1>Best Fold OOF Predictions - (Balanced Log Loss like Public LB)</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/best-fold.png" alt=""></p> <h1>Worst Fold OOF Predictions - (Balanced Log Loss like Private LB)</h1> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/worst-fold.png" alt=""></p> <h1>Post Competition Fun Fact</h1> <p>After competition ended, I was curious why the private test is harder than public test and train data to predict. All three have approximately 17.5% <code>target=1</code> which makes them seem similar. </p> <p>First I checked categorical feature <code>EJ</code> (which equals <code>A</code> or <code>B</code>) and found that train, public, private have 64%, 62%, and 50% of <code>EJ=B</code> respectively. Next I checked NAN and found that two additional test columns <code>EH</code> and <code>FD</code> have a few NAN whereas train has no NAN in these columns. This NAN discrepency most likely caused many submission errors but neither of these two discoveries makes a big difference in model accuracy i think. (i.e. I did not find any weird test columns with 100% NAN or something crazy like that).</p> <p>Next using probing techniques in notebook <a href="https://www.kaggle.com/code/cdeotte/probe-alpha-type-of-class-1-in-public-private" target="_blank">here</a> we see that private test has more of the difficult <code>Alpha = D</code> type of <code>target=1</code> Class 1. 
I think this helps explain what changed over time to make the private test more difficult than the public test and train data.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/class1.png" alt=""></p>
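<p>To reproduce the threshold analysis described above on your own OOF predictions, a per-fold search over candidate bottom thresholds can be sketched as follows. The metric implementation and grid are mine; the idea is simply to measure how much <code>pred[pred&lt;t]=0</code> helps (or hurts) each time period.</p> <pre><code>import numpy as np

def balanced_log_loss(y_true, p, eps=1e-15):
    p = np.clip(p, eps, 1 - eps)
    loss0 = -np.log(1 - p[y_true == 0]).mean()
    loss1 = -np.log(p[y_true == 1]).mean()
    return (loss0 + loss1) / 2

def best_bottom_threshold(y_true, p, grid=np.linspace(0.0, 0.3, 61)):
    """Find the bottom threshold t minimizing the fold metric when pred&lt;t is set to 0."""
    best_t, best_score = 0.0, balanced_log_loss(y_true, p)
    for t in grid:
        q = np.where(p &lt; t, 0.0, p)
        score = balanced_log_loss(y_true, q)
        if score &lt; best_score:
            best_t, best_score = t, score
    return best_t, best_score
</code></pre>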
Predict Student Performance from Game Play
Top 0.5% Efficiency Leaderboard with datatable
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before jumping into details, I would like to say thank you to the organizers, Kaggle team and community.</p> <p>This was actually my first competition, though I had some limited Kaggle experience. Several years&nbsp;ago I benchmarked my newly developed models here, but at that time I didn't really look into the data and compete.&nbsp;</p> <p>Surprisingly, with the first attempt my solution was ranked 7th out of 2051 on <a href="https://www.kaggle.com/code/philculliton/student-performance-efficiency-leaderboard/notebook" target="_blank">the Efficiency Leaderboard</a>.</p> <h2>Background</h2> <p>When I joined about a month ago I was impressed by the fact, that simply <a href="https://www.kaggle.com/code/cpmpml/random-submission/" target="_blank">submitting mean values</a> could bring your score to&nbsp;<code>0.659</code> LB, while the most advanced models were at the level of ~<code>0.7</code>.&nbsp;</p> <p>So I decided to compete on the efficiency LB only and at the same&nbsp;time give a try to <a href="https://datatable.readthedocs.io/en/latest/api/models/linear_model.html" target="_blank">LinearModel</a>&nbsp;GLM I've recently developed. </p> <p>For data munging and feature engineering I've been using Python <a href="https://datatable.readthedocs.io/en/latest/index.html" target="_blank">datatable</a>,&nbsp;a package similar to pandas, but with a specific emphasis on speed and big data support.</p> <p>First, I have designed and shared <a href="https://www.kaggle.com/code/kononenko/datatable-linearmodel-0-676lb-in-6-seconds" target="_blank">a simple baseline</a>, that scored <code>0.676</code> and was pretty high on the efficiency LB.&nbsp;My next goal was to improve feature engineering and the overall code performance.</p> <h2>Feature engineering</h2> <p>Moving&nbsp;forward I've ended up with the following numeric features</p> <ul> <li>number of events per session, i.e. <code>sessions_id.count()</code>;</li> <li>session duration, i.e. <code>elapsed_time.max()</code>;</li> <li>mean level, i.e. <code>level.mean()</code>;</li> <li>screen x/y range, i.e. <code>screen_coor_x/y.max() - screen_coor_x/y.min()</code>.</li> </ul> <p>For categorical columns, I started with a number of unique values per a&nbsp;column. Then, created additional&nbsp;features for each of the values. Even though it worked locally on my CV, it didn't work on the public LB, so I had to employ feature selection based on the importances and also picked different features for different level groups.</p> <h2>Performance tuning</h2> <p>I have also&nbsp;employed some overall code tuning</p> <ul> <li>since <code>LinearModel</code> is fully&nbsp;parallel, both <code>.fit()</code> and <code>.predict()</code> methods, I have adjusted the number of threads to <code>2</code> to match the number of CPUs. By default, 4 threads were detected that could lead to over-parallelization;</li> <li>avoided <code>!pip install</code> in the inference code by pre-installing packages in a separate&nbsp;notebook. 
This saved me at least 30 seconds per submission;</li> <li>disabled "Persistence", so that no additional time is spent when the inference notebook is starting.</li> </ul> <h2>Final model</h2> <p><strong>Pros</strong></p> <ul> <li>a small number of features;</li> <li>less than one minute to be trained;</li> <li>highly interpretable;</li> <li>pretty robust: no errors due to the API changes, no submission errors, public LB scores are exactly the same as the private scores.</li> </ul> <p><strong>Cons</strong></p> <ul> <li>since at the end it is just a logistic regression, I don't think there is a huge room to further improve the model's score.</li> </ul> <h2>Conclusions</h2> <p>The best submission, I have selected, scores <code>0.687</code> LB with the scoring time of around 2-3 minutes. Notebook, that includes the training and the inference parts, is available <a href="https://www.kaggle.com/code/kononenko/top-1-public-efficiency-lb-with-linearmodel" target="_blank">here</a>.&nbsp;</p>
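<p>For readers new to <code>datatable</code>, the session-level features listed above can be computed with a single grouped aggregation. This is only a sketch (the column names follow the competition's raw log format and the file path is hypothetical; I have not re-run the author's exact pipeline):</p> <pre><code>import datatable as dt
from datatable import f, by

train = dt.fread("train.csv")   # hypothetical path to the game-log data
feats = train[:, {
    "n_events":   dt.count(),                                      # events per session
    "duration":   dt.max(f.elapsed_time),                          # session duration
    "mean_level": dt.mean(f.level),                                # mean level
    "x_range":    dt.max(f.screen_coor_x) - dt.min(f.screen_coor_x),
    "y_range":    dt.max(f.screen_coor_y) - dt.min(f.screen_coor_y),
}, by(f.session_id)]
</code></pre>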
ICR - Identifying Age-Related Conditions
As expected the shake-up is the biggest I've seen, or trust your CV.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone! <br> First of all, I want to thank the organizers for a wonderful competition.<br> When I worked with the data and saw the number of participants, I imagined the end of this competition would look something like this. <br> That's why I published my best result in the hope of getting at least some kind of medal (thanks to everyone who voted). <br> As it turned out, it was a good thing that I did this, because I did not have the opportunity to seriously engage in Kaggle competitions for the last two months. <br> Now back to CV: this notebook shows how I did the hyperparameter search: <a href="https://www.kaggle.com/code/aikhmelnytskyy/lb-0-11-cv-0-2-tabpfn-xgb-model/notebook" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/lb-0-11-cv-0-2-tabpfn-xgb-model/notebook</a>. <br> What is most interesting is that this was my best notebook on the private LB.<br> After 2 months, I had forgotten about this notebook when choosing my final submissions 😝. By the way, I have a rule that if I enter a competition somewhere in the middle and after several attempts I am very high (in the silver, or especially in the gold zone), for me this is a sign that there will be a shake-up and I will be thrown to the end of the peloton. This rule has only failed a few times in 100+ competitions. <br> Good luck to everyone and peace!</p>
Google Research - Identify Contrails to Reduce Global Warming
8th place solution (OneFormer works)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>Summary</strong></h1> <ul> <li>Models:&nbsp;<strong>OneFormer</strong>, effnet, MaxViT, resnetrs, nfnetf5</li> <li>Train on <strong>soft labels and pseudo labels</strong></li> <li>Optimizing ensemble weights</li> <li>Loss: BCE</li> <li>Only random image cropping, without TTA and without mask shift</li> </ul> <h1><strong>Introduction</strong></h1> <p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates&nbsp;<a href="https://www.kaggle.com/cnumber" target="_blank">@cnumber</a> and&nbsp;<a href="https://www.kaggle.com/yukiokumura1" target="_blank">@yukiokumura1</a> for their incredible contribution toward our final result.</p> <h1><strong>Details</strong></h1> <p>Our main strategy is the adoption and optimization of various models and the use of soft and pseudo labels.</p> <ul> <li><strong>Models:</strong></li> </ul> <p>In the beginning of the competition, we mainly experimented with a combination of smp and timm, but from the middle of the competition, we also started using OneFormer.</p> <p>OneFormer demonstrated the best results. <br> <strong>Private LB score of a single OneFormer (dinat-l): 0.70204</strong><br> <strong>Holdout dice CV scores: effnet b7: 0.671, oneformer (dinat-l): 0.693</strong>.</p> <p>Model resolutions (inference settings) are below:&nbsp;</p> <table> <thead> <tr> <th>OneFormer</th> <th>effnetb7, b8</th> <th>MaxViT-Tiny, Base</th> <th>resnetrs</th> <th>nfnetf5</th> </tr> </thead> <tbody> <tr> <td>1024</td> <td>640</td> <td>512</td> <td>512</td> <td>640</td> </tr> </tbody> </table> <ul> <li><strong>Data: Soft and Pseudo Labels:</strong></li> </ul> <p>We used soft labels (the average of all annotator labels) during training. We also introduced pseudo labels for some models. When both are used together, training randomly samples images from the 0th through 7th time frames, with the soft label used only for the 4th time frame and the pseudo label for all other frames.</p> <ul> <li><strong>Optimizing Ensemble Weights:</strong></li> </ul> <p>To maximize the performance of ensembling, we incorporated optimization using Optuna within the CV environment. This facilitated the efficient verification of the best model combinations.</p> <h1>Points open for consideration</h1> <ul> <li><strong>Interplay Between a Strong Backbone and Pseudo Labels:</strong></li> </ul> <p>The impact of using pseudo labels: <strong>when trained with pseudo labels*, effnet b7: 0.685, oneformer (dinat-l): 0.695</strong>. *The average of each model trained on the holdout split is used as the pseudo label. </p> <p>However, the enhancements were relatively modest with OneFormer.<br> One hypothesis suggests that pseudo labels might bring about a knowledge distillation effect. The impact of this might reach a limit when the backbone is of a certain strength.</p>
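<p>As a concrete illustration of the ensemble-weight optimization mentioned above, here is a hedged sketch: Optuna searches blending weights for the out-of-fold probability maps and scores the blend with dice at a fixed 0.5 threshold. <code>oof_probs</code> and <code>oof_target</code> are placeholders for your stacked validation predictions and labels.</p> <pre><code>import numpy as np
import optuna

def dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def objective(trial):
    # one weight per model's predicted probability map
    w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_probs))])
    w = w / (w.sum() + 1e-9)
    blended = sum(wi * p for wi, p in zip(w, oof_probs))
    return dice((blended &gt; 0.5).astype(np.float32), oof_target)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
</code></pre>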
Vesuvius Challenge - Ink Detection
55th silver solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Vesuvius Challenge - Ink Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for organizing interesting competition.<br> I joined this competition 3 weeks before close, so I couldn't do much trial and error.<br> However, reading many discussions and public notebooks helped me to make baseline faster.<br> Thank you all kagglers for sharing nice contents.</p> <h1><strong>Reference</strong></h1> <ol> <li><a href="https://www.kaggle.com/tanakar" target="_blank">@tanakar</a> 's 2.5d segmentation <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-training" target="_blank">training</a>, <a href="https://www.kaggle.com/code/tanakar/2-5d-segmentaion-baseline-inference" target="_blank">inference</a></li> <li><a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> 's <a href="https://www.kaggle.com/code/yoyobar/3d-resnet-baseline-inference" target="_blank">3D ResNet baseline [inference]</a></li> <li><a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> 's <a href="https://www.kaggle.com/code/samfc10/vesuvius-challenge-3d-resnet-training" target="_blank">Vesuvius Challenge - 3D ResNet Training</a></li> <li><a href="https://scrollprize.org/tutorial4" target="_blank">official tutorial page</a></li> <li><a href="https://youtu.be/g-7-Xg75CCI?t=6013" target="_blank">youtube in official tutorial page</a></li> </ol> <h1><strong>Solution</strong></h1> <ul> <li>3D ResNet (resnet34) ensemble<br> I used 3fold Group fold CV(fragment_id based), chose best model for each fold, and made ensemble model of them.<br> The base model is based on Ref 3, (192, 192, 16) as input size.<br> Why I use 3D ResNet is to capture the feature spreading in z direction, which is caused by ink bleeding.<br> As we can see Ref 4, the information about spatial frequency may be important for model, so not resizing the input image was very very important, I think.🤔 I should have tried <a href="https://pytorch.org/docs/stable/fft.html" target="_blank">fast fourier transformation</a>..😑<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F1c0effbbab9a491994d11cfdb89ffeaa%2Fsem-alpha.png?generation=1686965040652713&amp;alt=media" alt="Ref 4"></li> <li>Data Augmentation based on Ref 1.<br> To my surprise, blur augmentation was important. Without blur related augmentation, CV goes down by 1~2%.<br> Though it is said that adding blur makes more difficult to distinguish for us in Ref 5, some blur made important role.🤔<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F9da9c905c7e480e5ac5bb32665d0a748%2Fink_blured.png?generation=1686963393693952&amp;alt=media" alt="Ref 5"></li> <li>TTA based on Ref 2.</li> <li>th=0.50<br> I couldn't understand how to optimize for ensemble model, so fixed.</li> </ul> <h1><strong>environment for computing</strong></h1> <ul> <li>Google Colab Pro+<br> We can run at most 3 sessions in it, so I ran fold1~3 at the same time.<br> 1 epoch for batchsize=16 took 30 min with A100.</li> </ul> <h1><strong>not worked for me</strong></h1> <ul> <li>larger input<br> I tried from 256 to 512. 
</li> <li>reducing label near edge by cv2.erode<br> the picture shown in the most right is obtained by subtracting real label and erosion(shrinked) label<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F3b75e6ea3bc351085102683f77432eb3%2Ferosion1.png?generation=1686964831123283&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4250230%2F27141862c0e134018f485096ae191500%2Ferosion2.png?generation=1686964854760693&amp;alt=media" alt=""></li> <li>various loss function</li> </ul>
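<p>The solution combines a 3-fold 3D ResNet ensemble with flip-based TTA at a fixed 0.5 threshold. A minimal sketch of that inference loop (assuming each model maps a (batch, channels, depth, height, width) volume of z-slices to a 2D logit map) might look like this; the exact TTA set in the referenced notebook may differ.</p> <pre><code>import torch

@torch.no_grad()
def predict_with_tta(models, volume):
    """Average fold models over horizontal/vertical flip TTA."""
    flip_sets = [[], [-1], [-2], [-1, -2]]      # none, W flip, H flip, both
    preds = []
    for model in models:
        model.eval()
        for dims in flip_sets:
            x = torch.flip(volume, dims=dims) if dims else volume
            p = torch.sigmoid(model(x))         # (batch, 1, height, width)
            preds.append(torch.flip(p, dims=dims) if dims else p)
    return torch.stack(preds).mean(dim=0)

# ink_mask = predict_with_tta(fold_models, batch_volume) &gt; 0.5
</code></pre>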
Google Research - Identify Contrails to Reduce Global Warming
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a fantastic competition - thanks to organizers and my team. Past 3 months have not been easy, but I’m glad that in the end it was worth it.</p> <h3>Overview</h3> <ul> <li>U-Net models with a diverse set of encoders (both convolution-based, and transformer-based).</li> <li>Creating pseudo-labels for the unlabelled part of the data (7/8 of all the data) using folds and ensembles.</li> <li>Composite loss (CE, Dice, Focal).</li> <li>EMA + SWA.</li> <li>High resolution (512 + 768).</li> <li>4TTA (hflip, rot90, rot270).</li> </ul> <h3>Base models</h3> <p>Quite early we realized that heavy models and long training work well here, so we mostly ran experiments with backbones like effnet_v2_large and heavier. The final solution includes:</p> <ul> <li>effnet_v2_l.</li> <li>effnet_v2_xl.</li> <li>effnet_l2.</li> <li>maxvit.</li> </ul> <p>It takes about a week to train with 4xA6000 (most of the training time goes into creation of good pseudo-labels).</p> <h3>Validation</h3> <p>After we figured out that some geometric augmentations don’t really work (for example, vertical flips), we removed them from the augmentation pipeline. Additionally, the 4TTA validation and regular one didn’t have a perfect correlation, so the validation during training was performed with 4TTA, which gave a significant boost early in the competition. </p> <p>In terms of split - the splitting proposed by organizers was used. </p> <h3>Pseudo labels</h3> <p>The original dataset (of 20519 records, each of 8 images) is already quite big for U-Net like models, however we wanted to expand it further. That’s why we created pseudo-labels for the whole dataset by training 3 models with 4 folds for 2 rounds. The following scheme illustrates the idea:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2755695%2F0bf39e12f3004d2d843bea9aa2944dee%2F2023-08-20%2000.42.48.jpg?generation=1692481385008583&amp;alt=media" alt=""></p> <p>Then we train final models with a sampler: 50% original data, 50% soft pseudo labels.</p> <h3>Additional tricks</h3> <p>Not a lot of things worked well here, so we added some simple tricks to improve the pipeline a bit. Those things include:</p> <ul> <li>Weighted loss between CE, Dice, and Focal. </li> <li>EMA during training.</li> <li>SWA on checkpoints after training.</li> <li>Remove BN from the decoder in U-Net. </li> <li>Add different resolutions in final ensemble (512 + 768).</li> </ul> <h3>Things that didn’t work</h3> <ul> <li>Twersky loss (with focus on either FN or FP).</li> <li>Post-processing to remove FP. </li> <li>3D models (including conv-lstm).</li> <li>Training on individual annotations (shame on us for not trying to train with mean of annotations).</li> <li>Figuring-out why geometric augmentations don’t work (our guess was annotators’ bias. Turns out, it was conversion bias that other top teams found).</li> <li>Heavy augmentations.</li> <li>Adding classifier.</li> <li>Predicting additional frames or using additional channels during training.</li> <li>Validation based on geography. 
</li> <li>Training only with positive data.</li> </ul> <h3>Links</h3> <ul> <li>Inference kernel: <a href="https://www.kaggle.com/code/selimsef/kdl-unet-768-inference-contrails" target="_blank">link</a>.</li> <li>GitHub repo with training code: <a href="https://github.com/selimsef/kaggle-identify-contrails-4th/" target="_blank">link</a>.</li> </ul>
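<p>The composite loss mentioned above (a weighted blend of CE, Dice and Focal) is easy to sketch. The weights and the focal gamma below are illustrative defaults, not the values used by the team; note that BCE-with-logits also accepts the soft pseudo-label targets described earlier.</p> <pre><code>import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1.0):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + targets.sum() + eps)

def focal_loss(logits, targets, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = torch.exp(-bce)                       # probability assigned to the true class
    return ((1 - pt) ** gamma * bce).mean()

def composite_loss(logits, targets, w_ce=1.0, w_dice=1.0, w_focal=1.0):
    ce = F.binary_cross_entropy_with_logits(logits, targets)
    return w_ce * ce + w_dice * soft_dice_loss(logits, targets) + w_focal * focal_loss(logits, targets)
</code></pre>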
HuBMAP - Hacking the Human Vasculature
public 9th / private 295th achieved private 0.575 with single modification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Despite achieving 9th place in the public leaderboard, I experienced a frightening shakeup, ending up at 295th place in the private leaderboard. However, many of the techniques I used proved effective in the private leaderboard, enabling me to achieve a score of 0.575 (equivalent to 4th place) by making just one modification to the 295th place code. Below, I will share the methods used and my reflections.</p> <h3>Method Overview:</h3> <p>I primarily utilized "mmdet" and employed ensemble learning with numerous models. I had hoped to use YOLO v8 and v7 as well, but the host did not disclose their availability, so I had to abandon the idea.</p> <p>For the ensemble, I made modifications to the "WBF" (Weighted Box Fusion) to use it with masks. Additionally, the major innovation was not limited to instance segmentation models but also involved using object detection models. I performed WBF based on bounding boxes and made mask predictions solely using the instance segmentation models. I used the object detection models only to calculate confidence scores. As you can see from the predictions, the models made far more predictions than needed. Given the nature of the evaluation metric, mAP, it was evident that properly aligning confidence scores could significantly improve the overall score.</p> <h3>Models</h3> <p>The following is a list of the models used:</p> <p>Instance Segmentation Models:</p> <ul> <li>Mask2Former</li> <li>MaskRCNN + various ConvNext v1 and v2</li> <li>Cascade RCNN + various ConvNext v1 and v2</li> <li>MaskRCNN + Swin Transformer<br> Other: MaskDino doesn’t work well.</li> </ul> <p>Object Detection Models:</p> <ul> <li>DINO</li> <li>Mask2Former (used only for object detection part)</li> <li>Diffusion Det</li> <li>ViT Det<br> Other: yolox doesn’t work well</li> </ul> <h3>Major Flaw:</h3> <p>I combined the outputs of the mask models using addition during post-processing. This resulted in improvements in ds1 (average IoU) and the public cross-validation (CV) score. However, as many of you may have noticed, when applying post-processing, the score on the private dataset degrades significantly. This indicates that the ds1 and private datasets have entirely different annotation tendencies. The annotations provided for ds1 were quite rough, and the dilation technique seemed effective due to this. One of the hosts mentioned that annotations were done in the same manner for both public and private datasets, but this information turned out to be inaccurate.</p> <h3>Single modification to achieve private 0.575:</h3> <p>Anticipating that the annotations for the private dataset might be more substantial, I submitted two versions: one with post-processing and another without. However, as mentioned earlier, during the ensemble mask generation, I was merely adding the masks instead of using averaging or voting. This method worked well for ds1 and the public dataset. 
On the other hand, I should have easily foreseen that if the private dataset had accurate and precise annotations, this approach could significantly reduce the score (which I overlooked).<br> Therefore, instead of adding the masks, I changed to using voting. This simple change dramatically improved the score from 0.397 to 0.575.</p> <div> <img src="https://pbs.twimg.com/media/F2avc0eaEAEmdvL.jpg" alt="score"> </div> <h3>Reflections:</h3> <p>One of the points I regret is being content with just having two submissions: one with post-processing and one without. Since I could have diversified the risks, I should have prepared submissions that could adapt to both small and accurate annotation masks.</p>
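<p>The single modification described above, replacing the additive (union-like) mask merge with majority voting, can be sketched as follows; the helper name and the default majority rule are mine.</p> <pre><code>import numpy as np

def fuse_masks_by_voting(masks, min_votes=None):
    """masks: list of (H, W) binary arrays predicted by the ensemble members
    for one matched instance. Keep only pixels that most models agree on."""
    votes = np.sum([m.astype(np.uint8) for m in masks], axis=0)
    if min_votes is None:
        min_votes = len(masks) // 2 + 1     # simple majority
    return (votes &gt;= min_votes).astype(np.uint8)
</code></pre>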
HuBMAP - Hacking the Human Vasculature
14th Place solution: Yolov8
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratuations to all the winner and thank the organizers for hosting such an interesting competition!</p> <ul> <li>My solution is a simple ensemble of Yolov8l and Yolov8m trained on image size 640 and 1024</li> <li>All of my submission using Dilation with Kernel size = 3 and Iters = 1 because on my local CV, dilation don't hurt (and sometimes improve) <strong>dataset1</strong> but decrease mAP on <strong>dataset2</strong> significantly.</li> </ul> <p>Things worked for me:</p> <ul> <li>Yolov8l-seg and Yolov8m-seg</li> <li>Copy-paste and Mixup augmentation (0.5 probability)</li> <li>Balance sampling between dataset1, dataset2 and unlabeled data (pseudo label) </li> <li>Pseudo-labeling on Unlabeled data</li> <li>Larger image size (Increase CV but not LB/PB 😭)</li> <li>Pseudo labeling on Test data (i.e: Retrain during submission)</li> <li>Ensemble using Weighted mask fusion from 8th place Sartorius competition <a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998</a></li> </ul> <p>Things not worked for me:</p> <ul> <li>Pseudo label more than one round</li> <li>Ensemble on different fold</li> </ul> <h1>1. Validation strategy</h1> <ul> <li>I split the data into train/valid by WSI (4 folds), during the competition, I mostly use WSI 1 and 2 as validation data and models were trained on WSI 1, 3, 4 because WSI1,2 also contains <code>Dataset1</code></li> <li>I belive because of the split, my solution can survive when Private leaderboard released. My Validation scores on WSI 2 are nearly correlated to PB/LB</li> </ul> <h1>2. Modeling</h1> <ul> <li>Step 1: Train 8 models Yolov8l and Yolov8m with image size of 640 on 4 folds for 100 epochs</li> <li>Step 2: Use WMF (Weighted mask fusion) to generate pseudo label on unlabeled data</li> <li>Step 3: Retrain models on Labeled data + Pseudo data for 70 epochs, Image size are 640 and 1024</li> <li>Step 4 (During submission) I use my ensemble models to predict hidden test data and train new model Yolov8l x 640 on Labeled data + Private pseudo data, then append that model to ensemble and run inference again.</li> </ul> <p>All models are trained on the following hyper parameters, I only show the param that I change compared to default values:</p> <table> <thead> <tr> <th>Param</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>mixup</td> <td>0.5</td> </tr> <tr> <td>copy-paste</td> <td>0.5</td> </tr> <tr> <td>flipud</td> <td>0.5</td> </tr> <tr> <td>degrees</td> <td>45</td> </tr> <tr> <td>lr0</td> <td>0.001</td> </tr> </tbody> </table> <p>I also modified the dataloader of Yolov8 to sampling uniformly <code>dataset1</code>, <code>dataset2</code> and <code>pseudo data</code>.</p> <h1>3. 
The effect of pseudo labeling</h1> <ul> <li>I used pseudo labeling twice: once locally with unlabeled data (<strong>Stage1</strong>) and once during submission on the hidden test data (<strong>Stage2</strong>), as mentioned in <strong>2.</strong></li> <li>Pseudo labeling <strong>Stage1</strong> improved my mask mAP50 on WSI 1 and 2 by about ~0.04</li> <li><strong>Stage2</strong> pseudo labeling improved both my LB/PB by about 0.005; I think if the base models were better, it could improve more.</li> </ul> <p>Comparison of Stage2 pseudo labeling</p> <table> <thead> <tr> <th>Sub</th> <th>LB</th> <th>PB</th> </tr> </thead> <tbody> <tr> <td>ENS1 + Stage2 pseudo</td> <td>0.537</td> <td>0.505</td> </tr> <tr> <td>ENS1</td> <td>0.532</td> <td>0.500</td> </tr> </tbody> </table> <h1>4. Choice of submission</h1> <ul> <li>I selected as my two final submissions two sets of my best CV models on different folds; all models were trained with pseudo labeling <strong>Stage1</strong>. Called ENS1 and ENS2.</li> <li>My best private score is 0.544 (LB 0.551), which is an ensemble of 8 models on <code>Fold 2</code></li> </ul> <p>Submission:</p> <table> <thead> <tr> <th>Sub</th> <th>LB</th> <th>PB</th> </tr> </thead> <tbody> <tr> <td>ENS1 + Stage2 pseudo</td> <td>0.537</td> <td>0.505</td> </tr> <tr> <td>ENS2 + Stage2 pseudo</td> <td>0.549</td> <td>0.532</td> </tr> </tbody> </table> <h1>5. Why the shake-up (just my guess)</h1> <ul> <li>Clearly because the test data is a completely different WSI; this is why my validation strategy was to split the data by WSI</li> <li>I think the private test data is small and very sensitive (only one WSI), and not large enough to choose the best model for real-world data (I understand that the annotation process is very costly). I also observed that my local validation score oscillated during training.</li> <li>Many people reported that dilation also caused the shake-up; maybe because my dilation iters was only 1, the effect was minimized</li> </ul>
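<p>The dilation setting used in the submissions above (kernel size 3, 1 iteration) can be reproduced with OpenCV. A minimal sketch, assuming binary uint8 instance masks; the helper name is mine:</p> <pre><code>import cv2
import numpy as np

def dilate_mask(mask, kernel_size=3, iterations=1):
    """Slightly grow a binary instance mask before submission."""
    kernel = np.ones((kernel_size, kernel_size), dtype=np.uint8)
    return cv2.dilate(mask.astype(np.uint8), kernel, iterations=iterations)
</code></pre>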
HuBMAP - Hacking the Human Vasculature
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and HuBMAP who organized another very interesting competition. This is a joint writeup with my long-time teammate <a href="https://www.kaggle.com/phamvanlinh143" target="_blank">@phamvanlinh143</a> , who did most of the hard work in this one.</p> <p><strong>Summary</strong></p> <p>This competition is an especially tricky one with several problems that needed to be addressed:</p> <ul> <li>Public and private test set are structured differently, with private set coming from an unseen WSI. Public test set size is also quite small. With these two reasons combined we have a completely unreliable LB.</li> <li>The strange effect of dilation on public LB result.</li> </ul> <p>Fortunately, we came up with a strategy that we believed can deal with each of these issues accordingly:</p> <ul> <li>Build a reliable CV and trust it.</li> <li>Train a large and diverse ensemble for stability.</li> <li>Submit the ensemble with dilate and without dilate for our two submissions.</li> </ul> <p><strong>Cross-validation</strong></p> <p>To build a trustworthy CV, it must follow as closely as possible to how private test set is created. That means:</p> <ul> <li>Validation must be done on dataset 1 labels.</li> <li>Train and validation set must not contain the same WSI.</li> </ul> <p>A simple solution for this is to use Dataset1-WSI1 as one validation fold and Dataset1-WSI2 as the other, but this would leave out a lot of training samples, especially if we want to train using only Dataset 1. </p> <p>What we did is using metadata to split the Dataset1 tiles from each WSI into left and right side, resulting in 4 folds:</p> <ul> <li>Dataset1 – WSI1 - Left</li> <li>Dataset1 – WSI1 - Right</li> <li>Dataset1 – WSI2 - Left</li> <li>Dataset1 – WSI2 – Right</li> </ul> <p>Next, for each tile in training data we use <code>staintools</code> to generate 9 according tiles transformed into the style of the 9 additional WSIs in Dataset 3. In training, when the sampled tile come from the same WSI of the validation set, one of the 9 generated tiles are sampled instead. While this method doesn’t completely remove the characteristics of the original WSI, we felt that it is a good enough compromise to what we wanted to achieve.</p> <p>An example of an original tile and its 9 variations:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F0e621e649f5c50cba0d6c9d51a15889a%2Ftest2.jpg?generation=1691163031076471&amp;alt=media" alt="“”"></p> <p>Splitting the dataset in this manner also allowed us to train models using tile concatenation with minimal leak, as we only had to handle the middle tile column that separated left and right side of the WSI.</p> <p><strong>Model training</strong></p> <p><strong>Input data:</strong> we used one of these two data types:</p> <ul> <li>original tiles</li> <li>padded tiles similar to what <a href="https://www.kaggle.com/hengck" target="_blank">@hengck</a> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/419143#2316842" target="_blank">proposed</a>. 
We padded 128 pixels around the original tile using available neighboring tiles. Instance labels are modified accordingly. At inference time we predicted on padded region and then center crop.</li> </ul> <p>An example of original tile (left) and padded tile (right):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F07696cca441ed581263843a0a919752d%2Ftest.jpg?generation=1691162170897200&amp;alt=media" alt="“”"></p> <p><strong>Augmentation:</strong> we applied strong augmentation:</p> <ul> <li>stain augmentation: p=1.0 for tiles from same WSI as validation set, p=0.5 otherwise</li> <li>RandomRotate90, RandomFlip, ElasticTransform, ShiftScaleRotate, RandomBrightnessContrast, HueSaturationValue, ImageCompression, GaussNoise, GaussianBlur…</li> <li>AutoAugment similar to DETR <a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/detr/detr_r50_8x2_150e_coco.py" target="_blank">training config</a>.</li> </ul> <p><strong>Training details:</strong> we trained the models in 2 stages, using only blood vessel class. </p> <ul> <li>Stage 1: train for 30 epochs using 3 folds dataset 1 + all dataset 2</li> <li>Stage 2: finetune for 15 epochs using only 3 folds dataset 1</li> </ul> <p>Then we picked 5 checkpoints from finetune stage and do SWA to provide the final model.</p> <p><strong>Models:</strong> Cascade Mask-RCNN models with <code>swin-t</code>, <code>coat-small</code>, <code>convnext-t</code> and <code>convnext-s</code> backbones. We used mmdet 2.x to train our models.</p> <p>We experimented with different combination of backbones and input data types (original or padded). We selected the models with best CV and trained a version using full data. The final ensemble consists of both fold models and full data models.</p> <p><strong>Postprocessing:</strong> The masks that met following criteria are filtered:</p> <ul> <li>Glomeruli filter: remove masks with more than 60% area inside glomeruli regions.</li> <li>Confidence filter: remove masks with confidence &lt;0.01.</li> <li>Ensemble filter by pixel: count for each pixels the number of blood vessel prediction by the ensemble (how many models predicted positive for that pixel). A threshold is then calculated from the resulting list of pixel counts with quantile q=0.05 (ignore zero value). Masks of which all pixels had count smaller than this threshold are removed.</li> <li>Ensemble filter by instance: perform nms with iou_thresh=0.65. For each selected mask we save the number of masks that satisfied iou_thresh with it. Similar to pixel counts filter, we remove masks with small number of overlapping masks using quantile q=0.075.</li> <li>Small mask filter: remove masks with fewer than 64 pixels.</li> </ul> <p><strong>To dilate or not to dilate</strong></p> <p>As others have reported, we observed significant change of score on Public LB with and without dilation. As adding dilation didn’t work at all on our cross validation, we knew we cannot trust it. However there exists also the probability that private set would present the same label patterns as public set, as the host have given confirmation that they were verified using the same procedure. 
Fortunately, we had two submissions, so this is where we decided to put it to good use.</p> <p>Our final ensemble scored: </p> <ul> <li>With dilation: 0.551 public, 0.526 private.</li> <li>Without dilation: 0.465 public, 0.588 private.</li> </ul> <p>Our late submissions showed that submitting any single backbone in our ensemble without dilation would have landed us in the private gold zone anyway while scoring 0.4x on public LB. Even though we chose correctly by trusting our CV, this public LB behavior is still a mystery to us. Hopefully we can shed some light into it by reading the solutions of other top teams.</p> <p>Thank you very much for reading and let us know if you have any questions.</p> <p>Edit: </p> <ul> <li>training code: <a href="https://github.com/phamvanlinh143/HubMap_2023_2nd_Place_Solution" target="_blank">https://github.com/phamvanlinh143/HubMap_2023_2nd_Place_Solution</a></li> <li>inference notebook: <a href="https://www.kaggle.com/code/phamvanlinh143/hubmap-2nd-place-inference" target="_blank">https://www.kaggle.com/code/phamvanlinh143/hubmap-2nd-place-inference</a></li> </ul>
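<p>To illustrate the "ensemble filter by pixel" step from the post-processing list above, here is a minimal sketch of how such a filter could be implemented (names, shapes and the keep rule are my reading of the description, not the authors' code): count how many ensemble masks cover each pixel, derive a threshold from the non-zero counts at quantile 0.05, and drop masks whose pixels all fall below that threshold.</p> <pre><code>import numpy as np

def pixel_count_filter(masks, q=0.05):
    """masks: list of binary HxW arrays predicted by the whole ensemble."""
    counts = np.zeros(masks[0].shape, dtype=np.int32)
    for m in masks:
        counts += m.astype(bool)
    nonzero = counts[counts &gt; 0]
    if nonzero.size == 0:
        return masks
    thr = np.quantile(nonzero, q)
    kept = []
    for m in masks:
        support = counts[m.astype(bool)]
        if support.size and support.max() &gt;= thr:   # keep masks with at least one well-supported pixel
            kept.append(m)
    return kept
</code></pre>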
HuBMAP - Hacking the Human Vasculature
30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Overview</h2> <p><a href="https://www.kaggle.com/ayberkmir" target="_blank">@ayberkmir</a> and I joined the competition very late. I wasted my first week because I didn't read the metric properly and submitting not accordingly. We didn't have time to try so many diverse things so it will be quite simple.</p> <h2>Validation</h2> <p>Ayberk was using stratified single train/test split (stratified on blood vessel count) and I was using 4 fold leave-one-WSI-out cross-validation. Surprisingly both of them worked on private leaderboard.</p> <h2>Models</h2> <p>I mainly worked on Mask R-CNN and Cascade Mask R-CNN with ResNeXt101 backbone and Ayberk worked on YOLOv7 and v8 models.</p> <h2>Training</h2> <p>I dropped duplicate annotations by simply suppressing overlapping boxes with 0.99 IoU when I was creating datasets. I also included glomeruli annotations and used unsure annotations as blood vessels. I thought if I treat unsure as blood vessels, training would be more stable. Ayberk didn't use glomeruli or unsure annotations and YOLO was already handling duplicate annotations.</p> <p>Since the WSIs were kinda similar, I decided not to use stain augmentations. I used multi scale training and scales are arranged between 1.5 and 2x raw scale i.e.</p> <pre><code>train_scales = [ (, ), (, ), (, ), (, ), (, ), (, ), (, ), (, ), (, ) ] </code></pre> <p>Other training augmentatios were random horizontal, vertical and diagonal flip, random brightness, contrast, hue and saturation.</p> <p>Ayberk was using fixed size of 1024x1024 because we thought built-in augmentations in YOLO are already strong enough.</p> <p>We used almost default values for optimizer and schedulers. I only added another learning rate multiplier step.</p> <h2>Post-processing</h2> <p>Both of us thought post-processing was the most important in this competition because of the weird metric. We dedicated most of our time on this part rather than modelling.</p> <p>First of all, we used 3x TTA (horizontal, vertical and diagonal flip) when we were predicting with our models. TTA predictions were merged using weighted boxes fusion. Masks that belong to fused boxes are averaged. Unfortunately, I couldn't find a way to retrieve soft mask predictions in mmdetection and I had to average binary masks. We didn't use multi scale TTA. YOLO model were taking 1024x1024 and Mask R-CNN models were taking 1280x1280 inputs.</p> <p>Since I had 4 folds, I also had to merge their predictions and I used weighted boxes fusion with same configurations. We also tried to merge YOLO and Mask R-CNN predictions at this stage. It was working as good as other predictions on public leaderboard but always failed on private leaderboard for some reason.</p> <p>We also tried the same submissions with and without dilation and we found that TTA was reducing the effect of dilation A LOT on public leaderboard. 
It was a really good sign because TTA is safe and dilation isn't.</p> <p>We had different submissions with combinations of models, folds, TTA, dilation, score multiplication, WBF tuning, etc.</p> <p>When we sort our submissions by public score, the top ones are submissions with dilation or ensembles.<br> <img src="https://i.ibb.co/8PpVPxX/Screenshot-from-2023-08-01-09-38-13.png" alt="public"></p> <p>When we sort our submissions by private score, the top ones are submissions with TTA, without dilation, and single models, but I also have one submission with dilation that scores 0.50x on private, which is weird.<br> <img src="https://i.ibb.co/Ss8WSFy/Screenshot-from-2023-08-01-09-41-36.png" alt="private"></p> <p>Our best submission was 14th place on the private leaderboard, but in the end Ayberk was able to select a good enough submission that made us finish 30th.</p>
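<p>The duplicate-annotation cleanup mentioned above (suppressing boxes that overlap at 0.99 IoU) can be sketched as follows; this is my own minimal illustration, not the team's code:</p> <pre><code>def box_iou(a, b):
    """a, b: [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def drop_duplicate_boxes(boxes, iou_thr=0.99):
    """Keep the first of any pair of near-identical boxes."""
    kept = []
    for b in boxes:
        if all(box_iou(b, k) &lt; iou_thr for k in kept):
            kept.append(b)
    return kept
</code></pre>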
HuBMAP - Hacking the Human Vasculature
254th Place Solution for HuBMAP - Hacking the Human Vasculature Competition (Greedy Model Soup Code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone! I'm just posting my solution here, in case it helps someone, and I can also learn to write good solutions!</p> <p><strong>Competition Context</strong><br> The context of this solution is the recent HuBMAP - Hacking the Human Vasculature Competition:<br> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview</a></p> <p>The data of this competition can be found here:<br> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></p> <p><strong>My Approach</strong><br> In short, I implemented 2 models:</p> <ol> <li>A plain Mask-RCNN using PyTorch</li> <li>A model soup of Mask-RCNNs using PyTorch</li> </ol> <p>Both scored around the same amount (surprisingly, the model soup scored 0.001 less).</p> <p>The models I used for Mask-RCNN were taken from PyTorch.<br> To implement a Model Soup, I implemented a Greedy Soup (Hill-Climbing method).<br> Essentially, in this, we start with some model as our original model, and a pool of candidate models. Usually, our first model chosen is our best model. We then pick models from the candidate models. <br> The parameters of the selected models are averaged, and we check validation scores.<br> If the score improves, we keep all the current models for our final model soup. If it doesn't, we discard this model from the candidate list.</p> <p>For my candidate models, I simply took the models that were saved after each epoch (I had 10 epochs only).<br> My first chosen model was the model from the final epoch.</p> <p>My further polished code can be found below:</p> <pre><code>import os
import torch

# Checkpoints are assumed to be ordered so that the starting (best) model comes first.
greedy_soup = {}
num_souped = 0
best_score = 0.0
for e, val_loss in enumerate(validation_mask_losses):
    # The original path construction was lost in formatting; this layout is hypothetical.
    model_chk = os.path.join(checkpoint_dir, f"model_epoch_{e}.pth")
    model = get_model(len(cell_type_dict), model_chk)
    curr_state_dict = torch.load(model_chk)
    temp_soup = {}
    curr_score = 0.0
    if e == 0:
        # Start the soup with the first (best) checkpoint.
        greedy_soup = {k: v for k, v in curr_state_dict.items()}
        num_souped = 1
        model.load_state_dict(greedy_soup)
        model.to(device)
        curr_score = get_score(ds_val, model)
        best_score = curr_score
    else:
        # Running average of the current soup with the candidate checkpoint.
        temp_soup = {k: (v + greedy_soup[k] * num_souped) / (num_souped + 1)
                     for k, v in curr_state_dict.items()}
        model.load_state_dict(temp_soup)
        model = model.to(device)
        curr_score = get_score(ds_val, model)
        if e &gt; 0 and curr_score &gt; best_score:
            print(e, curr_score)
            greedy_soup = temp_soup
            num_souped += 1
            best_score = curr_score
</code></pre> <p>When this competition started, I had little experience in this field. As a result, I didn't participate too much. I knew of Mask-RCNN as a model, and just decided to implement that to improve my knowledge and understanding.</p> <p>To go a few steps beyond, I wanted to implement Weighted Box Fusion for making an ensemble, but I couldn't get to it in time.</p> <p>I had many ideas and could have tried a lot more. But I got disappointed by my low public leaderboard score. I thought that I could work harder at other competitions. 
</p> <p><em>Validation</em><br> Due to my inexperience in larger Kaggle competitions, I did not implement any good sort of validation. I've learnt better now, seeing that validation is one of the main components of any Kaggle competition.</p> <p><strong>Further Improvements</strong><br> I could've tried larger models, such as OneFormer and MaskFormer. I could've tried YOLOv8 as well. <br> I could also try more sophisticated ensembles using Weighted Box Fusion.</p> <p>Honestly, I wish this competition was held now, and not earlier. I definitely have a lot more confidence and experience, and would hopefully have tried a lot more!!!</p> <p>In any case, I really appreciate any feedback! I'm just a student and I want to learn a lot from all the experts here!</p> <p>If you want to further support me, please upvote this notebook!<br> (Ok, enough with the YouTuber outro)</p> <p>Thank you for reading till here!</p>
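<p>As a pointer for the Weighted Box Fusion idea listed under further improvements: the widely used <code>ensemble_boxes</code> package exposes it directly. A minimal usage sketch with made-up boxes and scores (coordinates must be normalized to [0, 1]); this is generic library usage, not code from this solution:</p> <pre><code>from ensemble_boxes import weighted_boxes_fusion

# one list per model: normalized [x1, y1, x2, y2] boxes, scores and integer labels
boxes_list = [[[0.10, 0.10, 0.40, 0.40]], [[0.12, 0.11, 0.42, 0.41]]]
scores_list = [[0.90], [0.80]]
labels_list = [[0], [0]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1, 1], iou_thr=0.55, skip_box_thr=0.0)
print(boxes, scores, labels)
</code></pre>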
Parkinson's Freezing of Gait Prediction
5th Place Training and Inference
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Overall Pipeline</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F651378%2F699965e32e4940ab317dbe0f7cbe95d1%2Fpiplinev1.PNG?generation=1687193496974817&amp;alt=media" alt=""></p> <h2><strong>Training Methodology</strong></h2> <p><strong><em>DataSet</em></strong></p> <ul> <li>For pretraining, divide each unlabeled series into segments of length 100000</li> <li>For training</li> <li>- Divide each time series into windows of 2000 observations with an overlap of 500</li> <li>- For defog, randomly select 4 windows from the above set. </li> <li>- For tdcsfog, select 1 window from the above set. tdcsfog data is resampled to 100 Hz using librosa (v0.9.2). When I ported my training to Kaggle code, I found that training did not converge fast enough when I resampled using librosa (v0.10.0). Maybe the difference is due to the default resampling technique.</li> <li>- Set the length of the dataloader to 8 times the number of time series in each fold<br> <strong><em>Folds</em></strong></li> <li>Create 5 folds using GroupKFold with Subject as groups</li> <li>- This could be improved by carefully selecting subjects so that there is a similar representation of each target type in each fold</li> </ul> <p><strong><em>Network Architecture</em></strong><br> All models have the following architecture:</p> <pre><code>import torch
import torch.nn as nn


class Wave_Block(nn.Module):
    def __init__(self, in_channels, out_channels, dilation_rates, kernel_size):
        super(Wave_Block, self).__init__()
        self.num_rates = dilation_rates
        self.convs = nn.ModuleList()
        self.filter_convs = nn.ModuleList()
        self.gate_convs = nn.ModuleList()

        self.convs.append(nn.Conv1d(in_channels, out_channels, kernel_size=1))
        dilation_rates = [2 ** i for i in range(dilation_rates)]
        for dilation_rate in dilation_rates:
            self.filter_convs.append(
                nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size,
                          padding=int((dilation_rate * (kernel_size - 1)) / 2),
                          dilation=dilation_rate))
            self.gate_convs.append(
                nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size,
                          padding=int((dilation_rate * (kernel_size - 1)) / 2),
                          dilation=dilation_rate))
            self.convs.append(nn.Conv1d(out_channels, out_channels, kernel_size=1))

    def forward(self, x):
        x = self.convs[0](x)
        res = x
        for i in range(self.num_rates):
            x = torch.tanh(self.filter_convs[i](x)) * torch.sigmoid(self.gate_convs[i](x))
            x = self.convs[i + 1](x)
            res = res + x
        return res
</code></pre> <pre><code>class Classifier(nn.Module):
    def __init__(self, inch=3, kernel_size=3):
        super().__init__()
        self.LSTM = nn.GRU(input_size=128, hidden_size=128, num_layers=4,
                           batch_first=True, bidirectional=True)

        # Channel/dilation settings below were garbled in the original post and are filled in
        # with typical values; the first block stays commented out as in the source.
        # self.wave_block1 = Wave_Block(inch, 16, 12, kernel_size)
        self.wave_block2 = Wave_Block(inch, 32, 8, kernel_size)
        self.wave_block3 = Wave_Block(32, 64, 4, kernel_size)
        self.wave_block4 = Wave_Block(64, 128, 1, kernel_size)
        self.fc1 = nn.Linear(256, 3)   # 3 event classes assumed; the exact value was not shown

    def forward(self, x):
        x = x.permute(0, 2, 1)
        # x = self.wave_block1(x)
        x = self.wave_block2(x)
        x = self.wave_block3(x)
        x = self.wave_block4(x)
        x = x.permute(0, 2, 1)
        x, h = self.LSTM(x)
        x = self.fc1(x)
        return x
</code></pre> <h3>Different Models</h3> <h4>WaveNet-GRU-v1</h4> <p>The training notebook can be found at the following link</p> <ul> <li><a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-4096-v6/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-4096-v6/notebook</a><br> The model uses all available data in training and validation, irrespective of whether the <em>Valid</em> column value is True or False, and only the best weights are saved. 
Last 2 best weights for each fold are used for inference.</li> </ul> <h4>WaveNet-GRU-v2</h4> <p>Training Notebook is found at following link</p> <ul> <li><a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-v6-public/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-v6-public/notebook</a><br> The model is using all available data in training and for validation data <em>Valid</em> column value as True is selected.<br> All the weights with average precision score &gt; 0.25 are saved. Best 2 weights, based on average precision score, are selected.</li> </ul> <h4>WaveNet-GRU-v3</h4> <p>This notebook is based on pre-training on unlabeled data. In pretraining target is to predict next value in the time series. For data creation each unlabeled series is divided into segment of length 100000.</p> <p><strong>Data Creation Notebook is available at</strong> : <a href="https://www.kaggle.com/code/adityakumarsinha/unlabeled-data-creation/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/unlabeled-data-creation/notebook</a>. <br> <em>Note</em>: This notebook will fail in Kaggle kernel as it requires more disk space as default available for kaggle notebooks. Please run on different PC/ Server/ VM<br> <strong>PreTraining Notebook is available at</strong> : <a href="https://www.kaggle.com/code/adityakumarsinha/pretrain-wavenet-4096-v1/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/pretrain-wavenet-4096-v1/notebook</a><br> The training will be executed for single fold and best weight will be used as initial weights (without LSTM layer) <br> for WaveNet-GRU-v3<br> <em>Note</em>: Singe epoch takes around 1-1:30 hours on RTX 3090 so the kernel will timeout. Please run on different PC/ Server/ VM<br> <strong>WaveNet-GRU-v3</strong>: **Training notebook is available at **: <a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-from-pretrain/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-2000-from-pretrain/notebook</a>.<br> Use best weight for each fold in final inference. <br> CV score for this notebook is low as compared to WaveNet-GRU-v1 and WaveNet-GRU-v2 but it improves the final ensemble ( During competition time it improved CV score but due to some bug in inferencing code the final private leader-board score as come down. I will explain this in inferencing section.</p> <h2>** Inference Methodology**</h2> <ul> <li>Each series is predicted independently</li> <li>For inference, each series are divided into segments of size 16000 or 20000 and the last segment is comprised of last 16000/20000 data points of the series. It is possible that with this size complete tdcsfog series is predicted in single step.</li> <li>tdcsfog data are resampled at 100 Hz and prediction are restored back to 128 Hz.</li> <li>librosa 0.10.0, is used for resampling. After competition, I found that librosa 0.9.2 is improves score a bit. This is miss from my side (as i did training using librosa 0.9.2) but it has not much impact on the final score.</li> <li>Prediction of all the models are ensembled using simple mean.</li> </ul> <h3><strong>CPU based Inference Methodology</strong></h3> <p>As during last week my GPU quota has been exhausted so i need to use CPU for inference. Simple CPU based pytorch inference was exceeding the time limit of 9 hours. So I need to convert pytorch models into ONNX model. 
<em>Please refer to the following notebook for model conversion</em>: <a href="https://www.kaggle.com/code/adityakumarsinha/openvino-model-converter-all-models-v3/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/openvino-model-converter-all-models-v3/notebook</a></p> <p>The converted models are used in the final inference. One of the final inference notebooks is available at:<br> <a href="https://www.kaggle.com/adityakumarsinha/gait-openvion-bunch-v2" target="_blank">https://www.kaggle.com/adityakumarsinha/gait-openvion-bunch-v2</a>.</p> <p>After the competition I found that, in the ensemble, WaveNet-GRU-v3 (the model that uses pretrained weights) was overfitting on the public leaderboard, and on the private leaderboard its inclusion decreased the score, while in local CV its inclusion was increasing the CV score.</p> <p>So I debugged more and found that with GPU based inference WaveNet-GRU-v3 does indeed increase the score. In fact, a simple ensemble of WaveNet-GRU-v1 and WaveNet-GRU-v3 has a private leaderboard score of 0.437, better than the third-place score.</p> <p>The best GPU based inference notebook is available at<br> <a href="https://www.kaggle.com/code/adityakumarsinha/wavenet-subm-focal-v2/notebook" target="_blank">https://www.kaggle.com/code/adityakumarsinha/wavenet-subm-focal-v2/notebook</a></p> <p>Regards<br> Aditya</p>
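<p>For readers who want to reproduce the 128 Hz to 100 Hz resampling of the tdcsfog accelerometer channels mentioned above, here is a minimal sketch (the column names follow the competition data; the helper itself is mine, and as noted above the librosa version can slightly change the result):</p> <pre><code>import numpy as np
import librosa

def resample_tdcsfog(df, orig_sr=128, target_sr=100):
    """Resample the three accelerometer channels from 128 Hz to 100 Hz."""
    out = {}
    for col in ["AccV", "AccML", "AccAP"]:
        y = df[col].to_numpy(dtype=np.float32)
        # keyword arguments are required from librosa 0.10 onward
        out[col] = librosa.resample(y, orig_sr=orig_sr, target_sr=target_sr)
    return out
</code></pre>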
Image Matching Challenge 2023
6th Place Solution - [ --- ]AffNetHardNet8 + AdaLAM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to express my thanks to the competition organizers and Kaggle staff for hosting this amazing competition. I would also like to thank the competition hosts (@oldufo , <a href="https://www.kaggle.com/eduardtrulls" target="_blank">@eduardtrulls</a>) for providing helpful materials and a great example submission, which allowed me to quickly get up to speed with the competition.<br> Participating in this competition and IMC2022 has taught me a lot about image matching, Kornia, and SfM. </p> <h1>1. Overview</h1> <p>My final solution was based on the submission example, but I pushed the limits of KeyNetAffNetHardNet + AdaLAM matcher with Colmap. </p> <ul> <li>I implemented four local feature detectors that are similar to KeynetAffnetHardnet (<a href="https://kornia.readthedocs.io/en/latest/feature.html#kornia.feature.GFTTAffNetHardNet" target="_blank">Example from Kornia</a>), and each detector extracted local features for all the images (with limited size) in a scene. See more info about other non-learning based keypoint detectors <a href="https://kornia.readthedocs.io/en/latest/feature.html" target="_blank">here</a>. </li> <li>Using HardNet8 rather than HardNet although I didn't see a performance difference.</li> <li>AdaLAM matcher was used to match all the pairs and get the matches for the scene. Only pair with an average matching distance &lt; 0.5 were kept. 'force_seed_mnn' was set to True and 'ransac_iters' was increased to 256 compared with the submission example.</li> <li>The matches from the four detectors were then combined.</li> <li>I used USAC_MAGSAC to get the fundamental matrix for each pair and only kept the inlier matches. The results are written to the database as two-view geometry. <code>incremental_mapping</code> is then performed without <code>match_exhaustive</code>.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2Fbb53c3f1e7ebad9394a673a421b47d11%2Fdetector.png?generation=1686695228576401&amp;alt=media"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F051e9413d0ce9cd8357723ba7efab1be%2Ffull_pipeline.png?generation=1686695522836099&amp;alt=media"></p> <h2>1.1. How I reached this solution</h2> <p>After running the submission example a few times, I thought that the key to finding a good solution was to identify a good shortlist of image pairs and apply similar solutions from IMC2022. I noticed that the KeyNetAffNetHardNet solution only needed to run the slower local feature detection once for each image, and the matching using AdaLAM matcher was fast enough to match each pair. Therefore, I decided to use KeyNetAffNetHardNet and the matching distance of AdaLAM to find the matching/pairing shortlist. However, with some optimizations, it ended up being the final solution in the last week.</p> <h2>1.2. KeyNetAffNetHardNet</h2> <p>I was surprised by the performance of KeyNetAffNetHardNet after using AdaLAM to match all possible pairs. 
With an increased number of features to 8000 and a maximum longer edge of 1600, I was able to achieve a score of 0.414/0.457 (Public/Private).</p> <p>During my experimentation, I discovered that the matching distance can be used to determine whether two images have overlapping areas or not. See my <a href="https://www.kaggle.com/code/maxchen303/imc2023-test-notes" target="_blank">Test Notebook</a> for some experiments using KeyNetAffNetHardNet + Adalam for pairing.<br> Disabling the Upright option (enabling OriNet) can make the Adalam matching more robust in handling rotated images.</p> <h1>2. Implementation details with Colmap</h1> <h2>2.1. Using focal length</h2> <p>In the submission example, there is a section that extracts the focal length from image exif. However, the "FocalLengthIn35mmFilm" property may not be found using the <code>image.get_exif()</code> method. In some cases, the focal length information may exist in the <code>exif_ifd</code> (as described in this <a href="https://stackoverflow.com/questions/68033479/how-to-show-all-the-metadata-about-images" target="_blank">Stack Overflow post</a>). <br> To extract the focal length from the <code>exif_ifd</code>, I used the following code:</p> <pre><code>exif = image.getexif() exif_ifd = exif.get_ifd() exif.update(exif_ifd) </code></pre> <p>If the focal length is found in the exif, I also set the "prior_focal_length" flag to true when adding the camera to the database. This improved the mAA for some scenes (mAA of urban / kyiv-puppet-theater 0.764 -&gt; 0.812).</p> <p>According to the <a href="https://colmap.github.io/tutorial.html#database-management" target="_blank">Colmap tutorial</a>:</p> <blockquote> <p>By setting the prior_focal_length flag to 0 or 1, you can give a hint whether the reconstruction algorithm should trust the focal length value.</p> </blockquote> <h2>2.2. Handling Randomness: Bypass match_exhaustive()</h2> <p></p> <p></p> <h2>2.3. Multiprocessing</h2> <p>I discovered that adding matches from other AffNetHardNet detectors could increase the overall mAA score. To fit more similar detectors into the process, I used multiprocessing to ensure that the second CPU core was fully utilized. The final solution takes about 7.5 hours to run using four detectors with edges no longer than 1600.</p> <p>To optimize the process, I sorted all the scenes by the number of images before processing, from greater to smaller. For each scene, there were two stages: matching to generate the database and reconstruction from the database. The reconstruction of different scenes is allowed to run in parallel with the matchings. Ideally, the pipeline should look like the following:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2Fc9cbe4d2f30c6874e597dd5949c2b62e%2Fmultiprocessing.png?generation=1686696885412256&amp;alt=media"></p> <h1>3. Other things I tried</h1> <ul> <li>After discovering that KeyNetAffNetHardNet was good at finding matching/pairing shortlists, I spent a lot of time exploring pair-wise matching such as LoFTR, SE-LoFTR, and DKMv3. However, these methods were slow even just running on the selected pairs and did not improve the mAAs in my implementations.</li> <li>Use AdaLAM matcher with other keypoint detectors such as DISK, SiLK, and ALIKE: I also used OriNet and AffNet to convert the keypoints from these detectors to Lafs and get the HardNet8 descriptors. I then tried matching on the native descriptors, HardNet8 descriptors, or concatenated descriptors. 
This approach seemed to perform better than matching on the native descriptors without Lafs. If I had more time, I would like to explore this direction further.</li> <li>Running local feature detection without resizing by splitting large images into smaller images that could fit into the GPU memory. However, this approach was extremely slow and did not improve performance. I found that the mAAs did not improve beyond a certain resolution. Allowing the longer edge to be 1600 or 2048 yielded similar scores.</li> <li>Tuning Colmap mapping options: Too many options and difficult to evaluate the outcomes.</li> </ul> <h1>4. Local Validation Score</h1> <pre><code> / kyiv-puppet-theater ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. -&gt; mAA=. / dioscuri ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. / cyprus ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. / wall ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. -&gt; mAA=. / bike ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. / chairs ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. / fountain ( images, pairs) -&gt; mAA=., mAA_q=., mAA_t=. -&gt; mAA=. metric -&gt; mAA=. (t: . sec.) </code></pre>
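<p>As a companion to section 2.1 above, here is a minimal sketch of turning the EXIF "FocalLengthIn35mmFilm" value into a pixel focal length plus a prior flag, roughly following the formula used in the example submission (the function name and the exact fallback ratio are assumptions on my part):</p> <pre><code>def focal_from_exif_35mm(focal_35mm, width, height, default_ratio=1.2):
    """Convert a 35mm-equivalent focal length to pixels; fall back to a generic prior."""
    max_size = max(width, height)
    if focal_35mm is not None and focal_35mm &gt; 0:
        focal_px = focal_35mm / 35.0 * max_size
        prior_focal_length = True       # tell COLMAP it can trust this value
    else:
        focal_px = default_ratio * max_size
        prior_focal_length = False
    return focal_px, prior_focal_length
</code></pre>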
ICR - Identifying Age-Related Conditions
4th Place Solution for the "ICR - Identifying Age-Related Conditions"
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm pleasantly surprised to see such a strong shake-up on the leaderboard, which pushed me to the very top. No doubt, when I made my 3rd submission 2 months ago (0.16 on public), I certainly did not suspect that it would end up in 4th place on private (0.34). Since then, I've watched the public score decrease rapidly and realized that all those solutions were heavily overfitted, which is something I've tried to avoid in every solution I've made.</p> <p>Now for the key features of my solution:</p> <p>1) Recursive filling of gaps in features using regression with CatBoostRegressor (default hyperparameters),<br> 2) Unknown values in greeks['Epsilon'] were filled with greeks['Epsilon'].min(),<br> 3) row_id - the row number in train and in test when sorting by Epsilon,<br> 4) Created features by training a CatBoostClassifier for each value in 'Alpha', 'Beta', 'Gamma', 'Delta' - probabilities for the corresponding values of these categories, similarly to <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430907" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430907</a>. To avoid overfitting when predicting on the test set, I used 5-fold cross validation and then simple averaging (a rough sketch is shown below),<br> 5) Final model - CatBoostClassifier without any hyperparameter tuning or feature elimination.</p> <p>In subsequent attempts I tried to expand the feature space by inventing various new features and more elaborate gap filling (for example, I tried to predict Epsilon and row_id), but I did not get any improvement on cross validation; moreover, the results became more unstable, so I realized that these complications only lead to overfitting and I stopped these attempts.</p> <p>My solution is <a href="https://www.kaggle.com/code/andrejvetrov/third?scriptVersionId=131958512" target="_blank">https://www.kaggle.com/code/andrejvetrov/third?scriptVersionId=131958512</a></p>
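<p>A minimal sketch of the out-of-fold probability features from point 4 (column names other than the greek columns, and the CatBoost settings, are placeholders rather than the author's exact code):</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from catboost import CatBoostClassifier

def add_greek_probability_features(X, greeks, X_test, col="Alpha", n_splits=5):
    """Out-of-fold class probabilities for one greek column; the test set gets the fold average."""
    y = greeks[col]
    classes = sorted(y.unique())
    oof = np.zeros((len(X), len(classes)))
    test_pred = np.zeros((len(X_test), len(classes)))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for trn_idx, val_idx in skf.split(X, y):
        model = CatBoostClassifier(verbose=0)
        model.fit(X.iloc[trn_idx], y.iloc[trn_idx])
        oof[val_idx] = model.predict_proba(X.iloc[val_idx])
        test_pred += model.predict_proba(X_test) / n_splits
    return oof, test_pred
</code></pre>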
Benetech - Making Graphs Accessible
51st Place Solution - Single Matcha Model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Benetech - Making Graphs Accessible <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone,</p> <p>First of all, huge congratulations to the winners. I would like to share our 51st-place solution briefly as well. Our team is <a href="https://www.kaggle.com/batuhanm" target="_blank">@batuhanm</a> and <a href="https://www.kaggle.com/emirkocak" target="_blank">@emirkocak</a> </p> <hr> <p>We started working on this competition just two weeks prior to the deadline and dived into the discussions immediately. After having an initial understanding we found these two baseline notebooks:</p> <ul> <li><a href="https://www.kaggle.com/code/nbroad/donut-train-benetech" target="_blank">https://www.kaggle.com/code/nbroad/donut-train-benetech</a> </li> <li><a href="https://www.kaggle.com/code/heyytanay/training-pix2struct-pytorch-amp-w-b" target="_blank">https://www.kaggle.com/code/heyytanay/training-pix2struct-pytorch-amp-w-b</a> </li> </ul> <p>huge thanks to <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> <a href="https://www.kaggle.com/heyytanay" target="_blank">@heyytanay</a> for their contributions. I merged these two code baselines and trained an end-to-end Matcha model for two epochs. The result of this pipeline was .43 - .25. After some investigations, I realized that the model was underfitting so we trained the same model for 8 more epochs (so 10 in total). This experiment reached .66 in the public LB and .37 in the private. This whole process took a week or so.</p> <p>Then after having lots of debates on using the ICDAR dataset and discussions, it was banned at first so we didn't focus on that part. Instead, we included this dataset:</p> <ul> <li><a href="https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data" target="_blank">https://www.kaggle.com/datasets/brendanartley/benetech-extra-generated-data</a></li> </ul> <p>from <a href="https://www.kaggle.com/brendanartley" target="_blank">@brendanartley</a> (again huge thanks here). With around 300k~ images we trained 3 more epochs in total and reached out to .73 in the public LB.</p> <p>Since the matcha is a generative model, for final post-processing, we did some cleaning with the numbers and got a slight boost (+0.001).</p> <h4>Things didn't work</h4> <ul> <li>We couldn't manage deplot to work as matcha. Our goal was to combine these two models and improve on confused examples, but 1 week of effort was not enough and it just didn't work.</li> </ul> <p>We also trained a separate scatter model but due to time limit, we couldn't manage it to incorporate our inference pipeline. After reading couple of solutions, it looks this would improve the results a lot.</p> <p>Well, thanks for reading, and again congrats to the winners!</p>
Parkinson's Freezing of Gait Prediction
Eighth Place Solution: 5-Fold CV 1D-ResNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Parkinson's Freezing of Gait Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>EDIT BELOW!</p> <p>First of all thank you all for organising and participating in this wonderful challenge, I learned a lot about Parkinson and the techniques I used in my solution and especially used by others!</p> <p>The biggest thank you goes out to <a href="https://www.kaggle.com/mayukh18" target="_blank">@mayukh18</a> for an incredible starter notebook.<br> Please give it an upvote if you haven't already!</p> <p><a href="https://www.kaggle.com/code/mayukh18/pytorch-fog-end-to-end-baseline-lb-0-254" target="_blank">Mayukh's Baseline Notebook</a></p> <p>I modified this notebook's dataset class to be compatible with with a 1D-ResNet that has three channels, one for each of the sensors ('AccV', 'AccML', 'AccAP'). I also removed the wx-parameter and used 1000ms cuts with a future of 50ms.</p> <p>Additionally, I implemented a learning rate scheduler (ReduceLROnPlateau) and started with a learning rate of 0.001.</p> <p>While the original notebook used only one train-validation split, I calculated five models based on the original folds in the notebook using each split.</p> <p>I calculated the models using my RTX 3060 which took about an hour per model, including validation. CV was around .30 if I remember correctly.</p> <p>My submission was the ensemble of the resulting five models and resulted in scores of : Public - .351 / Private - .356.</p> <p>If there are any questions I will gladly try to answer them in the upcoming days but as of now this is all I can do (moved to a new city just yesterday!).</p> <p>Best,<br> Jan</p> <p>EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: <br> I took the time to generate a little write-up and publish the clean code, both available <a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/422477" target="_blank">here</a>.<br> EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: EDIT: </p>
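<p>For anyone curious what the optimizer and ReduceLROnPlateau wiring described above can look like, here is a minimal sketch (the model is a stand-in placeholder, not the actual 1D-ResNet, and the loop only shows where the scheduler step goes):</p> <pre><code>import torch
import torch.nn as nn

model = nn.Sequential(                      # placeholder for the 3-channel 1D-ResNet
    nn.Conv1d(3, 64, kernel_size=7, padding=3),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=2)

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)            # placeholder for the real validation loss
    scheduler.step(val_loss)                # ReduceLROnPlateau steps on the monitored metric
</code></pre>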
ICR - Identifying Age-Related Conditions
Very simple solution (Private 800 | Public 3800)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think this competition primarily shows two things:<br> 1) Be careful of overfitting the public LB<br> 2) Kaggle competitions <em>can</em> be random, and high-scoring notebooks might not be helpful for the hosts after the competition</p> <p>My 'best' solution was meant as a baseline model and thus very simple:</p> <ul> <li>Impute NaNs using regression (for some reason I didn't impute the test set; forgetting it was of course not intentional)</li> <li>Optimize hyperparameters with Optuna (a small sketch of this step follows below)</li> <li>Predict and submit</li> </ul> <p>You can see the code <a href="https://www.kaggle.com/lucasboesen/xgb-cv-0-37-private-0-43-public-0-23" target="_blank">here</a></p>
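<p>A minimal sketch of the Optuna step mentioned in the list above (the search space, data and metric here are synthetic placeholders, not the notebook's exact setup):</p> <pre><code>import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=20, weights=[0.85], random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 800),
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = XGBClassifier(**params, eval_metric="logloss")
    return cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>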
Google - American Sign Language Fingerspelling Recognition
[109th Place Solution] 1DConv+Transformer+Augmentation using train+supplemental data split by types
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/asl-fingerspelling/overview" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/asl-fingerspelling/data" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/data</a></li> </ul> <h1>Overview of the approach</h1> <p>I mainly followed the solutions from <a href="https://www.kaggle.com/code/irohith/aslfr-transformer" target="_blank">ROHITH INGILELA</a>, <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">Mark Wijkhuizen</a> and <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">hoyso48</a>. First, I modified Mark's solution to make it work in Colab notebook with TPU, which saved me a lot of training time. To use TPU, I secondly converted the dataset to tfrecords files. I managed to get these two steps done by learning from hoyso48's notebooks with a lot of trials and errors. When I converted data, I processed the data following Rohith's dominant hand algorithm. At this point, I got LB: 0.659. Third, I added hoyso48's 1DConv architecture to the transformer model, which gave the LB a 0.015 boost. Fourth, I added most of hoyso48's augmentation methods with some parameter-tuning, which gave the LB a solid 0.033 boost. <br> I also tried out some of my ideas. 1) Added supplemental data with some pre-processing. Together with train/valid dataset splitting based on sequence types and participant ids, it showed slight LB improvement. 2) Labelled the dataset with types, i.e. phone, url, address, name, sentence. Performed multitasking training, i.e. predicting the types of the sequences and predicting the phrases as the same time, which actually dropped the LB. 3) Added a penalty in the loss function of Mark's to penalize the length difference between the true phrase and predicted phrase, which didn't improve the LB. More attempts will be discussed later. </p> <h1>Details of the submission</h1> <h2>Data processing:</h2> <ol> <li>Loaded original train_landmarks and supplemental_landmarks dataset to my google drive.</li> <li>Split each original parquet file (contains about 1000 sequences) to multiple parquet files by sequence_id. Removed all the frames with no hand landmarks. It turned out the LB will be increased by a lot if leaving half of the missing-hand frames in the data according to <a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434353" target="_blank">CHRIS DEOTTE's solution</a>. 
Calculated the frame-to-phrase ratios using the number of frames that have hand landmarks, and saved them into the train_ratio.csv and supple_ratio.csv files.</li> <li>Created a <a href="https://www.kaggle.com/datasets/joqueen/aslfr-all-landmarks-nonanhand" target="_blank">new Kaggle dataset</a> with the new parquet files (one parquet means one sequence) and new csv files.</li> <li>Created tfrecords files from the new dataset in step 3 using the <a href="https://www.kaggle.com/joqueen/yu-aslfr-train-supple-cleaned-landmarks-tfr" target="_blank">Kaggle notebook</a>. <br> 4.1. Removed the samples in supplemental dataset that have phrase length longer than 31, as the max phrase length of train dataset is 31. <br> 4.2 Removed the duplicates in supplemental dataset that has the same participant_id and phrase, as there are a lot duplicates in phrase and in participant_id in supplemental dataset.<br> 4.3 Removed the samples with frame-to-phrase ratios smaller than 0.5, 1, or 2. It turned out when ratio equals to 1, it gave me the best LB.<br> 4.4 Split the data to 5 folds based on the phrase type and participant_id. Saved the data as tfrecord files as a <a href="https://www.kaggle.com/datasets/joqueen/aslfr-all-5fold-grpsplit-cln-1p0-31phrlen-dedup" target="_blank">new Kaggle dataset</a>.</li> </ol> <h2>Training:</h2> <ol> <li>As described in the overview, the main body of my solution is from <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">Mark Wijkhuizen</a> and <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">hoyso48</a>, i.e. 1DConv+Transformer+Augmentation. As I trained my model using TPU in Colab, here is my <a href="https://colab.research.google.com/drive/1IOkmBN558p8hB_bfF-g4KO7SrvQiFshC?usp=sharing" target="_blank">Colab notebook</a> for training. I'm not going to repeat the details again here, but I think there were a few learnings/observations worth mentioning:</li> <li>Mark padded the phrase length to 128 on the decoder side, the same as the frame length on the encoder side. We don't have to match the dimension. The max phrase length is only 31 and already padded to 32. Instead of padding phrase further to 128, I modified the MHA block to be able to take two different dimensions, i.e. q_len and kv_len. To be honest, it didn't improve or decrease my LB, so Mark's MHA is perfectly fine, but it did save me some training time. 
</li> </ol> <pre><code>import tensorflow as tf

class MultiHeadAttention(tf.keras.layers.Layer):
    # Values lost in formatting (use_bias flags, permute orders, masking flag) are filled in
    # with the most common choices and may differ slightly from the original notebook.
    def __init__(self, d_model, n_heads, q_len, kv_len, dropout):
        super(MultiHeadAttention, self).__init__()
        self.d_model = d_model
        self.n_heads = n_heads
        self.kv_len = kv_len
        self.q_len = q_len
        self.depth = d_model // n_heads
        self.scale = 1 / tf.math.sqrt(tf.cast(self.depth, tf.float32))
        self.wq = self.fused_mha(self.depth, q_len)
        self.wk = self.fused_mha(self.depth, kv_len)
        self.wv = self.fused_mha(self.depth, kv_len)
        self.wo = tf.keras.layers.Dense(d_model, use_bias=False)
        self.softmax = tf.keras.layers.Softmax()
        self.reshape = tf.keras.Sequential([
            tf.keras.layers.Permute([2, 1, 3]),
            tf.keras.layers.Reshape([self.q_len, self.depth]),
        ])
        self.do = tf.keras.layers.Dropout(dropout)
        self.supports_masking = True

    def fused_mha(self, dim, seq_len):
        # Dense projection reshaped into [heads, seq_len, dim_per_head] in one go.
        return tf.keras.Sequential([
            tf.keras.layers.Dense(dim, use_bias=False),
            tf.keras.layers.Reshape([seq_len, self.n_heads, dim // self.n_heads]),
            tf.keras.layers.Permute([2, 1, 3]),
        ])

    def call(self, q, k, v, attention_mask=None, training=False):
        Q = self.wq(q)
        K = self.wk(k)
        V = self.wv(v)
        x = tf.matmul(Q, K, transpose_b=True) * self.scale
        x = self.softmax(x, mask=attention_mask) @ V
        x = self.reshape(x)
        x = self.wo(x)
        x = self.do(x, training=training)
        return x
</code></pre> <ol> <li>I increased the mha_dropout_ratio and mlp_dropout_ratio of the decoder by 1.5 times and the clf_dropout_ratio of the classifier by 2 times, to heavily regularize the decoder and prevent overfitting. It improved my LB by 0.006. This idea was inspired by a discussion comment by hoyso48.</li> </ol> <pre><code># The multipliers follow the text above (decoder dropout x1.5, classifier dropout x2);
# other values elided by formatting are filled in as assumptions.
x = Decoder(n_dec_blocks, dec_dim, n_mha_heads,
            mha_dropout_ratio * 1.5,
            mlp_ratio,
            mlp_dropout_ratio * 1.5,
            frame_len, phrase_len,
            )(x, phrase_inp, frames_inp)
x = tf.keras.Sequential([
    tf.keras.layers.Dropout(clf_dropout_ratio * 2),
    tf.keras.layers.Dense(N_UNIQUE_CHARACTERS,
                          activation=tf.keras.activations.linear,
                          kernel_initializer=INIT_HE_UNIFORM,
                          use_bias=False),
], name='classifier')(x)
</code></pre> <h2>Inference</h2> <p>My inference followed Mark's inference method. Here is the <a href="https://www.kaggle.com/joqueen/m12-yu-aslfr-inference" target="_blank">inference Kaggle notebook</a>, which reported both my highest public score 0.713 and my highest private score 0.665.</p> <h1>What I tried that didn't work (well)</h1> <h2>Loss penalization</h2> <p>In Mark's customized loss function, y_true and y_pred get truncated to the length of y_true. Since the length of y_true is known, the point at which y_pred is truncated does not come from the training process itself. 
I was thinking: what if y_pred is very long but gets truncated in the loss function? The loss may then not really represent the quality of the prediction.</p> <pre><code># The function name and some constant values were lost in formatting; names here are reconstructed.
def loss_w_ls(y_true, y_pred):
    idxs = tf.where(y_true != PAD_IDX)
    y_true = tf.gather_nd(y_true, idxs)
    y_pred = tf.gather_nd(y_pred, idxs)
    y_true = tf.cast(y_true, tf.int32)
    y_true = tf.one_hot(y_true, N_UNIQUE_CHARACTERS, axis=1)
    loss = tf.keras.losses.categorical_crossentropy(
        y_true, y_pred, label_smoothing=LABEL_SMOOTHING, from_logits=True)
    loss = tf.math.reduce_mean(loss)
    return loss
</code></pre> <p>So I added a penalty on the length difference between y_true and y_pred, to force the model to generate a y_pred with a similar length to y_true while keeping the categorical crossentropy optimal.</p> <pre><code># Axis values and the abs() call were reconstructed from the tensor shapes; EOS_IDX marks end-of-sequence.
def loss_with_length_penalty(y_true, y_pred):
    y_true_len = tf.cast(tf.argmax(tf.cast(tf.math.equal(y_true, PAD_IDX), tf.int32), axis=1), tf.int32)
    _idx = tf.argmax(y_pred, axis=2)
    _eos = tf.math.equal(_idx, EOS_IDX)
    _len1 = tf.cast(tf.argmax(tf.cast(_eos, tf.int32), axis=1), tf.int32)
    _no_eos = tf.math.logical_not(tf.reduce_any(_eos, axis=1))
    _len2 = tf.cast(_no_eos, tf.int32) * CFG.phrase_len   # no EOS predicted: count the full phrase length
    y_pred_len = _len1 + _len2
    penal = tf.cast(tf.abs(y_true_len - y_pred_len), tf.float32)
    penal = tf.math.reduce_mean(penal)

    idxs = tf.where(y_true != PAD_IDX)
    y_true = tf.gather_nd(y_true, idxs)
    y_pred = tf.gather_nd(y_pred, idxs)
    y_true = tf.cast(y_true, tf.int32)
    y_true = tf.one_hot(y_true, N_UNIQUE_CHARACTERS, axis=1)
    loss = tf.keras.losses.categorical_crossentropy(
        y_true, y_pred, label_smoothing=LABEL_SMOOTHING, from_logits=True)
    loss = tf.math.reduce_mean(loss)
    return loss + CFG.loss_coeff * penal
</code></pre> <p>I was quite excited about this idea and managed to code and run it through. But it still reduced my LB by 0.001 even after some fine-tuning. (sad face)</p> <h2>multi-tasking training</h2> <p>Inspired by Mark's solution, I categorized the train+supplemental data into five types: phone_number, url, address, name_like, sentence. Each type is quite different from the others in terms of content, length and structure. I was thinking that maybe I could do multi-tasking - let the model do type classification and phrase prediction at the same time, which might force it to figure out the different patterns for different types of samples. But this actually reduced my LB by about 0.1, which is a lot to me.</p> <pre><code># from_logits/label_smoothing values were lost in formatting and are filled in to match the loss above.
model.compile(
    optimizer=opt,
    loss=[tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=LABEL_SMOOTHING),
          loss_w_ls],
    metrics=[[tf.keras.metrics.CategoricalAccuracy()], [TopKAccuracy()]],
    steps_per_execution=steps_per_epoch,
)
</code></pre> <h2>positional encoding</h2> <p>Mark's method used trainable positional encoding. I switched it to sin/cos positional encoding for the encoder only, and it actually reduced my LB by 0.005. I didn't try switching it for the decoder.</p> <h2>a validation strategy</h2> <p>I struggled a lot with designing a good validation strategy whose CV correlates well with the LB. Out of desperation, I even kept all the validation data in its original form (before removing the missing-hand frames and the samples with low frame-to-phrase ratios) and calculated the Levenshtein distance on the raw valid data. However, it was very time consuming: my model training only took 45 min to 1.5 hours, depending on the parameters, but the Levenshtein distance calculation for validation took about 3 hours. And even with all that effort, I still didn't see a strong correlation. 
</p> <h2>CTC</h2> <p>I tried out CTC a bit following the notebooks from <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">ROHITH INGILELA</a> and <a href="https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu" target="_blank">GREYSNOW</a>. But due to time limitation, I didn't really figure it out in time. And it looks like the majority of the top-tier solutions used CTC.</p> <h1>Sources</h1> <p><a href="https://www.kaggle.com/code/irohith/aslfr-transformer" target="_blank">https://www.kaggle.com/code/irohith/aslfr-transformer</a><br> <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference</a>) <br> <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">https://www.kaggle.com/competitions/asl-signs/discussion/406684</a><br> <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place</a><br> <a href="https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu" target="_blank">https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu</a><br> <a href="https://www.coursera.org/learn/nlp-sequence-models?specialization=deep-learning" target="_blank">https://www.coursera.org/learn/nlp-sequence-models?specialization=deep-learning</a> (transformer network)</p> <p>This is my first competition and first solution write-up. Please let me know if I missed anything or if any part of my explanation was unclear. Suggestions and discussions are welcome. Thank you.</p>
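<p>Since CTC is mentioned above but not shown, here is a minimal illustration of TensorFlow's built-in CTC loss on dummy shapes; this is generic usage of <code>tf.nn.ctc_loss</code>, not code from the referenced notebooks:</p> <pre><code>import tensorflow as tf

batch, time_steps, n_chars = 2, 50, 60
logits = tf.random.normal([batch, time_steps, n_chars])       # per-frame model outputs
labels = tf.constant([[1, 5, 9, 0], [2, 3, 0, 0]], tf.int32)  # padded target sequences
label_length = tf.constant([3, 2], tf.int32)
logit_length = tf.constant([time_steps, time_steps], tf.int32)

loss = tf.nn.ctc_loss(
    labels=labels,
    logits=logits,
    label_length=label_length,
    logit_length=logit_length,
    logits_time_major=False,
    blank_index=0,
)
print(tf.reduce_mean(loss))
</code></pre>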
HuBMAP - Hacking the Human Vasculature
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to host and congrats to winners!<br> This competition gave me a lot of experience about instance segmentation. and a silver medal :)</p> <p>inference code : <a href="https://www.kaggle.com/code/kiinngdom7/hubmap-hhv-13th-place-inference" target="_blank">https://www.kaggle.com/code/kiinngdom7/hubmap-hhv-13th-place-inference</a></p> <p><strong>Summary</strong></p> <ul> <li>mmdetection</li> <li>2-fold validation</li> <li>cascade mask rcnn with convnext backbone</li> <li>multi scale tta &amp; wbf ensemble</li> </ul> <p><strong>Validation Strategy</strong></p> <ul> <li>fold 1 <ul> <li>train : wsi 2,3,4 / val : wsi1_ds1</li></ul></li> <li>fold 2<ul> <li>train : wsi 1,3,4 / val : wsi2_ds1</li></ul></li> </ul> <p><strong>Training</strong></p> <ul> <li>MMdet 3.0</li> <li>Only blood vessel class was used</li> <li>Augmentations<ul> <li>HorizontalFlip</li> <li>RandomRotate90</li> <li>RandomBrightnessContrast</li> <li>ShiftScaleRotate</li> <li>MotionBlur</li> <li>GaussianBlur</li></ul></li> <li>Models<ul> <li>Finally four models were used for ensemble</li> <li>model 1 : cascade mask rcnn with convnext_tiny backbone, fold1</li> <li>model 2 : cascade mask rcnn with convnext_tiny backbone, fold1</li> <li>model 3 : cascade mask rcnn with convnextv2_base backbone, fold2</li> <li>model 4 : cascade mask rcnn with convnextv2_base backbone, fold2</li></ul></li> </ul> <p><strong>TTA</strong></p> <ul> <li>Multiscale TTA<ul> <li>scales = [(544, 544), (576, 576), (608, 608), (640, 640), (672, 672), (704, 704)]</li></ul></li> </ul> <p><strong>Ensemble</strong></p> <ol> <li>Fuse boxes using WBF.</li> <li>Hard vote masks.</li> <li>Remove the area outside the fused box. (Not used in final submission but private score improved slightly)</li> </ol> <p><strong>Result</strong></p> <table> <thead> <tr> <th></th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>model 1</td> <td>no_dil : 0.46 / dil : 0.526</td> <td>no_dil : 0.463 / dil : 0.41</td> </tr> <tr> <td>model 2</td> <td>no_dil : 0.48 / dil : 0.527</td> <td>no_dil : 0.466 / dil : 0.415</td> </tr> <tr> <td>model 3</td> <td>no_dil : 0.437 / dil : 0.519</td> <td>no_dil : 0.514 / dil : 0.395</td> </tr> <tr> <td>model 4</td> <td>no_dil : 0.452 / dil : 0.527</td> <td>no_dil : 0.489 / dil : 0.358</td> </tr> <tr> <td>ensemble</td> <td>no_dil : 0.479 / dil : 0.55</td> <td>no_dil : 0.541 / dil : 0.462</td> </tr> </tbody> </table>
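<p>As a rough illustration of the ensemble steps described above (hard-voting the masks and removing the area outside the WBF-fused box), here is a minimal NumPy sketch; the function names, array shapes and the 0.5 vote threshold are illustrative assumptions, not the author's code.</p> <pre><code>import numpy as np

def hard_vote_masks(masks, vote_thr=0.5):
    # majority vote over binary (H, W) masks predicted for the same instance
    stack = np.stack(masks, axis=0).astype(np.float32)
    return (stack.mean(axis=0) &gt;= vote_thr).astype(np.uint8)

def clip_mask_to_box(mask, box):
    # zero out mask pixels outside the fused (x1, y1, x2, y2) box
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    clipped = np.zeros_like(mask)
    clipped[y1:y2, x1:x2] = mask[y1:y2, x1:x2]
    return clipped

# toy usage: three 512x512 masks from different models / TTA runs
preds = [np.zeros((512, 512), np.uint8) for _ in range(3)]
for m in preds:
    m[100:150, 200:260] = 1
fused_box = (195, 95, 265, 155)  # e.g. the WBF-fused box for this instance
final_mask = clip_mask_to_box(hard_vote_masks(preds), fused_box)
</code></pre>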
ICR - Identifying Age-Related Conditions
188th place solution for ICR - Identifying Age-Related Conditions competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Greetings everyone! The gambling competition has ended and I would like to share my thoughts and the solution. I also want to really thank <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> for the EDA and insights notebooks, because I took ideas from there, such as the epsilon, missing values and data drift during the years.</p> <h1>Context section</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p> <h1>Overview of the Approach</h1> <p>My main approach is the robust ensemble of XGBoost models. I chose features, based on their importance, firstly trained h2o automl boostings to get them. Used the epsilon feature to stratify the data and to train models. Deleted samples with a null epsilon and used 2018+ year. I applied undersampling with various random seeds to train diverse models.</p> <h1>Details of the submission</h1> <h3>Important and impactful detail about the submission</h3> <p>I focused on the stabilizing validation score and not overfitting on the public leaderboard. Many models score depended too much on the random seed and folds. In my case, my validation strategy was stable, I got a low variance in my validation scores for different models (from 0.28 to 0.33).</p> <h3>Feature choice</h3> <p>Trained the h2o automl XGBoost, got data about the most important features. I think I could just do the same with XGBoost, but it was faster and I wanted to not spend too much time. I think, I have chosen about 25 features in total. </p> <h3>Epsilon feature</h3> <p>I used the epsilon feature, extracted only the year from it. I used only the data from the 2018+ year, since it was more relevant. I also dropped all samples with nan epsilon, since Raddar found, that they all have the same class. That is, the model will just learn to guess them based on the epsilon, moreover, I thought that these samples might be not correct at all. So I did this:<br> <code>df = df[(~df["Epsilon"].isna()) &amp; (df["Epsilon"] &gt;= 2018)]</code></p> <h3>Validation strategy and undersampling technique</h3> <p>When I tested the models, I split the data on 85%/15% using a stratification <code>stratify=df[["Class", "Epsilon"]]</code>. Then I trained 30 XGBoosts on that data, splitting it again using the same stratification and undersampling, so there will be the same number of classes in the new train and validation. For the each model of these 30 I used a different random_state to split the data (so each one train on the different data). Then I changed the random seed again, split the data again and trained another 30 models. This way I did this for 20 random seeds and stopped, since from that moment more small ensembles did not make the performance better.<br> The really good thing was that models actually were robust. 
On almost every validation set, the score of each 30-model ensemble (the first one) was about 0.28-0.31. When I tried the same strategy without stratifying on the epsilon, my score depended on the seed: scores ranged from 0.1 to 0.4, which was really sad. </p> <p>Finally, when I was confident in my training process, I stopped using the first validation set in that pipeline and just retrained the same ensemble on the whole data. Of course, I was still undersampling the data with different seeds when training each individual model. I didn't change the hyperparameters, since tuning them made the score worse; I only set <code>["max_depth"] = 5</code> and <code>["eta"] = 0.285</code>.</p> <h3>Another finding about the epsilon impact on the leaderboard score</h3> <p>When I submitted, I tried using epsilon=2020 or 2021 for the whole LB data, but it didn't actually make a difference for me. </p> <h3>What didn't work in my case</h3> <p>My other submission was a TabPFN, trained using the same process. It scored 0.38 on the public LB, so I thought it might actually do better on the private LB, got excited and wanted to gamble with it. I was wrong, <strong>it was just bad.</strong></p> <h3>My other thoughts and the mindset</h3> <p>When I saw the data and the description of this competition, I understood that it would be hard to win here: the probability of overfitting is very high. I tried to minimize the time I spent on this competition and "maximize the earnings". So I spent maybe 6-10 hours in total and then stopped myself. I thought there were 3 main types of submissions that could win something:</p> <ol> <li>A robust ensemble of gradient boostings: it scores OK on the public LB and will score OK on the private LB. &lt;- <strong>My case</strong></li> <li>A submission that happens to fit the private LB: it should score badly on the public LB but really well on the private LB. I suppose it's pure luck here; it is hard to create such a model on purpose. </li> <li>Some simple model (maybe KNN or logistic regression) that will not overfit and will show nice results after the shake-up. </li> </ol> <p>So I focused on trying to create such solutions. In my experience with similar data, oversampling and scale_pos_weight have never worked, so I focused only on the undersampling technique. </p> <h3>Competition metric</h3> <p>Another important thing here is the competition metric, a balanced log loss. The main thing to remember about it is that it punishes you really badly when you are confidently wrong on EVEN 1 SAMPLE. That's why I decided not to do any postprocessing of the predicted values; I understood that I would just make the score even worse. When I submitted, I was also somewhere around 3500th place on the LB. However, I took that as a good sign of not overfitting, stayed calm and didn't take unnecessary actions.</p> <h3>Sources</h3> <ul> <li><a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings" target="_blank">https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings</a></li> <li><a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/420342" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/420342</a></li> </ul>
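<p>To make the validation scheme above concrete, here is a minimal sketch of the joint stratification on Class and the Epsilon year combined with per-model undersampling; the column names and the 85%/15% split follow the write-up, while the helper names and the undersampling routine are illustrative.</p> <pre><code>from sklearn.model_selection import train_test_split

def undersample(df, seed, target="Class"):
    # randomly drop majority-class rows so both classes are equally frequent
    n_min = df[target].value_counts().min()
    return (df.groupby(target, group_keys=False)
              .apply(lambda g: g.sample(n_min, random_state=seed)))

def make_split(df, seed):
    # df holds the features plus "Class" and the extracted "Epsilon" year
    train, valid = train_test_split(
        df, test_size=0.15, random_state=seed,
        stratify=df[["Class", "Epsilon"]])  # joint stratification as in the write-up
    return undersample(train, seed), valid
</code></pre>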
Google - American Sign Language Fingerspelling Recognition
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle, Google and the other organizers for hosting this exciting competition. I hope this kind of competition opens on Kaggle more often.</p> <p><strong>TLDR</strong></p> <p>My solution is a single large model (39 MB) trained with CTC loss, with no ensemble. The output shape of the model is (batch, sentence length, class).</p> <p><strong>Data Preprocessing and Augmentation</strong></p> <p>Basically the same as the previous 1st place solution, but I found that using (x, y, z) coordinates and pose landmarks gives a better score. I used the following landmarks: left hand, right hand, eyes, lips and pose.</p> <p><strong>Model</strong></p> <p>I started from the previous 1st place solution, but used a Branchformer, and changed the multi-head attention to relative multi-head attention. My model consists of 1 dense layer, 4 Conv1D layers and 6 Branchformer layers. In addition, I used stochastic depth.</p> <p><strong>Training</strong></p> <p>Epoch = 150<br> bs = 64<br> Lr = 8e-4<br> AWP = Epoch * 0.1<br> Schedule = CosineDecay with warmup ratio 0.1<br> Optimizer = Adam with Lookahead<br> Loss = Inter CTC loss (<a href="https://arxiv.org/abs/2102.03216" target="_blank">https://arxiv.org/abs/2102.03216</a>)</p>
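<p>The write-up mentions stochastic depth but does not show it; below is a minimal hedged sketch of a stochastic-depth (drop-path) residual wrapper in Keras, assuming rank-3 (batch, time, dim) tensors. The layer name and survival probability are illustrative, not taken from the author's model.</p> <pre><code>import tensorflow as tf

class StochasticDepth(tf.keras.layers.Layer):
    """Randomly drops the whole residual branch per sample during training."""
    def __init__(self, survival_prob=0.9, **kwargs):
        super().__init__(**kwargs)
        self.survival_prob = survival_prob

    def call(self, inputs, training=None):
        shortcut, residual = inputs
        if not training:
            # inference: scale the branch by its expected keep probability
            return shortcut + self.survival_prob * residual
        batch = tf.shape(shortcut)[0]
        mask = tf.cast(
            tf.random.uniform([batch, 1, 1]) &lt; self.survival_prob, residual.dtype)
        return shortcut + mask * residual

# usage inside a block: x = StochasticDepth(0.9)([x, branch_output])
</code></pre>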
Google Research - Identify Contrails to Reduce Global Warming
Kaggle solution write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv("/kaggle/input/google-research-identify-contrails-reduce-global-warming/sample_submission.csv")
df1

df1.to_csv("submission.csv", index=False)
</code></pre>
Google Research - Identify Contrails to Reduce Global Warming
26th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I would like to thank the organisers and Kaggle for this great competition. It was an amazing learning experience and I got to learn a lot of things. Congratulations to all the winners!</p> <h1>Summary</h1> <ul> <li>Data: Ash Color Images and Soft Labels (average of the individual labels)</li> <li>Image Size: 512</li> <li>Cross Validation: Split the train set into 5 folds after sorting by time.</li> <li>Ensemble of 3 Models</li> <li>Encoders: efficientnet-b5, regnety_120, seresnextaa101d_32x8d</li> <li>Decoder: Unet</li> </ul> <h2>Data</h2> <p>I trained on false color images and used soft labels. This gave me a big boost in both CV and LB score compared to training on the pixel masks. I think this was one of the most important tricks for this competition. It was very surprising for me to see that the model was able to capture the uncertainty depicted by the pixel probabilities. As individual labels were not available for the validation set, I didn't include it in training; I used it as a hold-out set.</p> <h2>Cross Validation</h2> <p>This was a bit tricky due to the presence of many duplicates in the training set. So, to build a robust local validation, I first sorted the training set by date and time, which was available in train.json, and then split the sorted data into 5 folds without shuffling. After this, I clipped the valid folds during training like this:</p> <pre><code># clip the edges of each validation fold so near-duplicates do not leak
# across the train/valid boundary (the exact fold numbers and clip size were
# lost from the original post; n_clip and the fold indices are placeholders)
if self.fold == 0:                 # first fold: clip only the right edge
    df = df[:-n_clip]
elif self.fold == 4:               # last fold: clip only the left edge
    df = df[n_clip:]
else:                              # middle folds: clip both edges
    df = df[n_clip:-n_clip]
</code></pre> <p>The CV score was not at all correlated with the leaderboard score, but I still trusted my CV in the end to select my submissions.</p> <h2>Models</h2> <p>I tried many encoders but used efficientnet-b5, regnety_120 and seresnextaa101d_32x8d in the end, as they gave the best CV score. I only used Unet for all the encoders.</p> <h2>Training</h2> <ul> <li>Epochs: 30</li> <li>Learning Rate: between 3e-4 and 3e-3 (different for different encoders)</li> <li>Optimizer: Adam</li> <li>Scheduler: CosineAnnealingLR (I saved the epoch with the best valid fold score)</li> <li>Loss: BCE Loss (as I was training with soft labels)</li> <li>Augmentations: RandomResizedCrop, ShiftScaleRotate (HFlip, VFlip and RandomRotate90 also didn't work for me, but at the time I had no idea that this could be due to some problem in the masks)</li> </ul> <h3>Trick to reduce training time</h3> <p>To reduce training time, I tried to remove some images during training. Removing all the non-contrail images reduced the local CV score. Then I predicted the train set with one of my best models and made a column with the highest pixel probability predicted by the model for each image.
Then I removed the images which had a max pixel probability lower than 0.05 and had no contrail.</p> <p><code>df = df[~((df['max_prob_pred'] &lt; 0.05) &amp; (df['contrail'] == 0))]</code></p> <p>Using this I was able to remove about 4000 images from training, which reduced the training time a lot and didn't affect the CV score at all.</p> <h2>Ensemble</h2> <p>I used my CV folds to get the ensemble weights for my final models.<br> Weighted ensembling: 0.3172[eb5] + 0.2928[regty120] + 0.39[sr101].</p> <p>For the final prediction, I first ensembled the different models separately for each fold. Then I converted the 5 fold predictions to binary predictions using the thresholds I got from my valid sets. The thresholds were 0.49, 0.47, 0.47, 0.48, 0.46 for folds 0 to 4. Then I took the average of the 5 folds and used 0.3 as the final threshold (which I got using the valid hold-out set). </p> <p>My teammate's account was deleted just after I teamed up with him. I don't know whether he did anything wrong or not. Whatever the case, I wish him the best of luck in the future.</p> <p>I really hope I won't be removed from the leaderboard after verification. Let's see what happens ☺️</p>
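<p>A small NumPy sketch of the two-stage thresholding described above: per-fold weighted blending, per-fold binarization with the listed thresholds, then averaging the binary maps and applying the final 0.3 cut-off. The weights and thresholds are taken from the write-up; the data layout and names are illustrative.</p> <pre><code>import numpy as np

weights = {"eb5": 0.3172, "regty120": 0.2928, "sr101": 0.39}
fold_thresholds = [0.49, 0.47, 0.47, 0.48, 0.46]

def predict_image(probs):
    # probs[fold][model_name] is an (H, W) probability map for one image
    binary_folds = []
    for fold, thr in enumerate(fold_thresholds):
        blended = sum(w * probs[fold][name] for name, w in weights.items())
        binary_folds.append((blended &gt; thr).astype(np.float32))
    # average the five binary maps, then apply the final hold-out threshold
    return (np.mean(binary_folds, axis=0) &gt; 0.3).astype(np.uint8)
</code></pre>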
Google Research - Identify Contrails to Reduce Global Warming
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary:</h1> <ul> <li>512x512 input resolution, single-frame models <strong>(no 2.5d segmentation, no output sequence fusion)</strong>;</li> <li>Non-binary targets (average of human_individual_masks.npy files);</li> <li>Augmentations: Rotate90 + Flip + ShiftScaleRotate;</li> <li><strong>Orthogonalized 9-channels input color space instead of ASH-RGB;</strong></li> <li>Weighted ensemble of 5 UNet+EfficientNet models, diversified by loss and encoder;</li> <li>TTA x8 mean.</li> </ul> <h3>Input preprocessing:</h3> <p>There was a lot of evidence that higher resolution helps, and after some experiments were conducted I decided that 512x512 input resolution seemed a sweet spot between quality and resources consumption.</p> <p>I haven't managed to detect target pixel shift that other participants are reporting about, but noticed, that something is wrong with seemingly "rotation-agnostic" data domain, because any augmentations were making smaller models (especially if trained on 256x256 resolution) perform worse.<br> But the same augmentations in combination with larger models/resolutions were quite helpful: even adding slight ShiftScaleRotate to already applied [rot90 + flip] increased generalization in terms of both the difference between train-sample/out-of-sample performance and validation score alone. So, I stopped on the combination mentioned above.</p> <p>Grid distortions/CutMix/Heavy affine transforms etc. seemed not appropriate considering the nature of labels and strict guidance for labelers, so I didn't even experiment on that.</p> <h3>Color Space:</h3> <p>The most interesing finding is that ASH-RGB color space actually is not the best one (well, at least in my pipeline), despite the fact that it was used to annotate the data, which is still unclear to me and quite confusing.<br> Firstly I decided that using different color spaces can help me to make final ensemble more diversified and robust, so I tried several combinations, including the original 9-channels band_{08-16} spectrum (which gives ~the same quality as ASH-RGB or even worse) and found that by orthogonalizing (and then std. scaling) this spectrum (e.g. 
using singular value decomposition to previously gathered stats for pixel values from the train data), one can squeeze about 0.007-0.008 global dice score "for free":</p> <pre><code>def transform(image, components=list(range())): """ image is mean-std scaled input image in original spectrum shape of (H, W, С=) components is list of singular values to preserve, all of them by default """ # U, S, V^T = scipy.linalg.svd((features - features.mean(axis=)) / features.std(axis=), full_matrices=False) # where features variable is numpy matrix shape of (H * W * #number_of_train_frames, ) # representing train sample of -channels pixel values # H = W = / , since [, ] spatial slices of images were used to save RAM V = np.array([[ , , -, - , -, , -, , -], [ , , , , , - , , -, ], [ , , , , , , -, , -], [ , -, , -, , , , -, -], [ , -, -, , , -, -, , ], [ , -, , -, , , , , ], [ , -, , -, -, - , -, , -], [ , -, , -, -, -, - , -, ], [ , -, , , - , , , -, -]]).astype(np.float32) # to normalize final model input: std = np.array([ , , , , , , , , ]).astype(np.float32) output = (np.dot(image, V[:, components]) / std[components]).astype(np.float32) return output </code></pre> <p>Since I discovered this, I've been using only this color space for all my models.</p> <h3>Models:</h3> <p>The more complexity the decoder had (UNet++/DeepLabV3/UperNet etc.), the worse results I obtained, so, only UNet models were used in the final ensemble. EfficientNet encoders performed significantly better than ResNeXt/ResNeSt/ConvNeXt (the last one especially: after a series of experiments with different encoders/custom dilations for the first layers I came up with a hypothesis that lower resolution of the first feature map before the first skip connection to the decoder leads to lower out-of-sample scores, which sounds convincing given the requirement for high pixel-level accuracy).</p> <h3>Losses:</h3> <p>Due to the lack of time and resources, I haven't investigated this matter properly, especially since the results for lighter 256x256 experiments don't match the resutls for heavier 512x512 ones. But overall, BCE alone performed better than any superposition of BCE+Dice or BCE+Focal. 
Nevertheless small additive infusion of Dice loss to BCE can be helpful: although it hurts global dice score but also makes ensemble with models trained with BCE better.<br> Increasing pos_weight and weighting pixels that are closer to label borders higher for BCE objective makes models converge faster, but to ~same score (pixel weighting mask example below):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F938698%2F43526e6712c6845e3ff10ad11a322f62%2FScreenshot_20230810_180025.png?generation=1691690381689822&amp;alt=media" alt=""></p> <h3>Optimization:</h3> <p>AdamW, lr schedule: 1 epoch warmup from 2e-4 to 5e-4, CosineAnnealing till the end.<br> All final models were trained for 50 or 64 epochs with pytorch-lightning using bf16 mixed precision.</p> <h3>Final submission:</h3> <p>I didn't have much time to train models for building a proper ensemble at the end of the competition, so I had to improvise and include most successful models trained during earlier experiments (which checkpoints are also not lost), which means that the final mix is a bit awkward and all "survived" models were trained on the default train/val split (no checkpoints trained on n-fold CV splits):</p> <table> <thead> <tr> <th>weight*</th> <th>encoder</th> <th>loss</th> <th>augs</th> <th>validation folder only</th> <th>LB (private)</th> </tr> </thead> <tbody> <tr> <td>0.45</td> <td>efficientnet-b7 (smp)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.70030</td> <td>0.70755</td> </tr> <tr> <td>0.27</td> <td>efficientnet-b7 (smp)</td> <td>0.98 BCE + 0.02 DICE</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.69794</td> <td>-</td> </tr> <tr> <td>0.12</td> <td>efficientnet-b7 (smp)</td> <td>BCE**</td> <td>Rot90 + Flip</td> <td>0.69524</td> <td>0.69823</td> </tr> <tr> <td>0.08</td> <td>tf_efficientnet_b7 (timm)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.67948</td> <td>0.67705</td> </tr> <tr> <td>0.08</td> <td>tf_efficientnet_b8 (timm)</td> <td>BCE**</td> <td>Rot90 + Flip + ShiftScaleRotate</td> <td>0.68611</td> <td>0.69415</td> </tr> <tr> <td></td> <td><strong>ensemble</strong></td> <td></td> <td></td> <td><strong>0.70749</strong></td> <td><strong>0.71337</strong></td> </tr> </tbody> </table> <p>All threshold were tuned on validation folder in logits space rather than in probability space, weighted sum of predictions was also calculated in logits space before transforming to probabilities.</p> <ul> <li>* Tuned manually on validation folder;</li> <li>** All these experiments have slightly different pixel weighting parameters in BCE loss.</li> </ul>
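<p>As a complement to the orthogonalized colour-space idea described earlier in this write-up, here is a hedged sketch of how such a transform can be fitted with an SVD over a sample of training pixels; the matrix and scales it produces depend on the pixel sample and will not reproduce the author's exact numbers.</p> <pre><code>import numpy as np
from scipy.linalg import svd

def fit_orthogonal_colorspace(pixels):
    # pixels: (N, 9) raw band_08..band_16 values sampled from training frames
    mean, std = pixels.mean(axis=0), pixels.std(axis=0)
    scaled = (pixels - mean) / std
    _, _, vt = svd(scaled, full_matrices=False)
    V = vt.T                              # (9, 9) orthogonal basis for the channels
    out_std = (scaled @ V).std(axis=0)    # per-component scale in the new space
    return mean, std, V, out_std

def transform(image, mean, std, V, out_std):
    # image: (H, W, 9) raw bands -&gt; (H, W, 9) orthogonalized, unit-variance input
    return (((image - mean) / std) @ V) / out_std
</code></pre>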
ICR - Identifying Age-Related Conditions
4978th Place Solution - everything was known from the beginning🤣
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Success in CV</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fe11dcd5434589a9376a4a960be4766cb%2F1.png?generation=1691715420918168&amp;alt=media" alt=""></p> <h1>Success in Public LB (52nd)</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fe11dcd5434589a9376a4a960be4766cb%2F1.png?generation=1691715420918168&amp;alt=media" alt=""></p> <h1>Failure in Private LB (4978th)</h1> <p>Overwrite predictions with [0.0, 1.0].<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3190389%2Fbed2a8d069cdc8ce49610455a83eb60c%2F2.png?generation=1691716135808111&amp;alt=media" alt=""><br> Everyone, see you all in the next competition😉</p>
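<p>For context on why hard 0.0/1.0 predictions are such a gamble under this metric, here is a small worked sketch of the balanced log loss showing how a single confidently wrong sample dominates the score; the clipping value is illustrative.</p> <pre><code>import numpy as np

def balanced_log_loss(y_true, p, eps=1e-15):
    p = np.clip(p, eps, 1 - eps)
    loss0 = -np.mean(np.log(1 - p[y_true == 0]))  # average over class-0 samples
    loss1 = -np.mean(np.log(p[y_true == 1]))      # average over class-1 samples
    return (loss0 + loss1) / 2

y = np.array([0] * 99 + [1])
p_soft = np.where(y == 1, 0.9, 0.1)   # reasonably calibrated probabilities
p_hard = np.zeros(100)                # hard 0.0 everywhere: wrong on the single positive
print(balanced_log_loss(y, p_soft))   # ~0.105
print(balanced_log_loss(y, p_hard))   # ~17, driven entirely by the one clipped mistake
</code></pre>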
Google - American Sign Language Fingerspelling Recognition
25th Place Solution (time warpping + relative positional encoding + intermediate CTC)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Google for hosting this competition. This is my first Kaggle competition experence and I have learnt a lot from the previous ASL competition and other participants. I start my model from <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a>'s solution from the last competition and added some components:</p> <p><strong>Data Augmentation</strong><br> Applied random time warpping (LB +0.01)</p> <pre><code>def temporal: n = tf.shape(x) n &lt; : return x w = n t1 = tf.clip, , n-) t2 = tf.clip + t1, , n-) new_x1 = interp1d new_x2 = interp1d return tf.concat(, axis=) def augment: tf.random.uniform()&lt; always: x = resample(x, (,)) tf.random.uniform()&lt; always: x = flip max_len is not None: x = temporal tf.random.uniform()&lt; always: x = spatial tf.random.uniform()&lt; always: x = temporal tf.random.uniform()&lt; always: x = temporal tf.random.uniform()&lt; always: x = spatial return x </code></pre> <p><strong>Data Preprocessing</strong><br> Truncate <code>phrase</code> to have smaller length than the input</p> <pre><code> inp_len &lt;= label_len: offset = tf((), , label_len-inp_len+, dtype=tf.int32) = label_len = inp_len- </code></pre> <p><strong>Multi-headed attention with relative positional encoding</strong> (LB +0.02)</p> <pre><code> : def : super. self.dim = dim self.head_dim = dim self.scale = self.head_dim- self.num_heads = num_heads self.num_nbr = num_nbr self.qkv = tf.keras.layers. self.drop1 = tf.keras.layers. self.proj = tf.keras.layers. self.supports_masking = True self.wgt_v = self.add, ='glorot_uniform',trainable=True, name=+str(tf.keras.backend.get)) self.wgt_k = self.add, ='glorot_uniform',trainable=True, name=+str(tf.keras.backend.get)) def : idx_mat = tf.reshape(tf.tile(tf.range(length), ), ) idx_mat = idx_mat - tf.transpose(idx_mat) return tf.clip + self.num_nbr def : xy = tf.matmul(x, y, transpose_b=transpose) x = tf.keras.layers.)(x) mul = tf.matmul(x, z, transpose_b=transpose) mul = tf.keras.layers.)(mul) return xy + mul def call(self, inputs, mask=None): mask is not None: mask = mask length = inputs.shape idx_mat = self. rpe_k = tf.gather(self.wgt_k, idx_mat) rpe_v = tf.gather(self.wgt_v, idx_mat) qkv = self.qkv(inputs) qkv = tf.keras.layers.)(tf.keras.layers.)(qkv)) q, k, v = tf.split(qkv, , axis=-) logits = self.self.scale attn = tf.keras.layers.(logits, mask=mask) attn = self.drop1(attn) x = self. 
x = tf.keras.layers.)(tf.keras.layers.)(x)) x = self.proj(x) return x </code></pre> <p><strong>Intermediate CTC</strong></p> <pre><code>def get_base_model(=384, =384): inp = tf.keras.Input((max_len,CHANNELS), =) x = tf.keras.layers.Masking(=PAD,input_shape=(max_len,CHANNELS))(inp) x = tf.keras.layers.Dense(dim, =,name='stem_conv')(x) x = tf.keras.layers.BatchNormalization(=0.95,name='stem_bn')(x) ksize = 17 drop_rate = 0.2 x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) inter = tf.keras.layers.Dense(dim, =None)(x) inter = tf.keras.layers.Dense(NUM_CLASSES+1)(inter) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = Conv1DBlock(dim,ksize,=drop_rate)(x) x = TransformerBlock(dim,=2)(x) x = tf.keras.layers.Dense(dim,=None,name='top_conv')(x) x = tf.keras.layers.Dense(NUM_CLASSES+1, =)(x) return tf.keras.Model(inp, [inter, x]) def ctc_loss(args): y_pred, inp_len, label, label_len = args return tf.nn.ctc_loss(label, y_pred, label_len, inp_len, =, =NUM_CLASSES) def get_model(base_model): label = tf.keras.Input(shape=(MAX_LEN_OUTPUT,), =, =) label_len = tf.keras.Input(shape=(), =, =) inp_len = tf.keras.Input(shape=(), =, =) loss1 = tf.keras.layers.Lambda(ctc_loss, output_shape=())( [base_model.output[0], inp_len, label, label_len]) loss2 = tf.keras.layers.Lambda(ctc_loss, output_shape=())( [base_model.output[1], inp_len, label, label_len]) loss = 0.3 * loss1 + 0.7 * loss2 #loss = 0.0 * loss1 + 1.0 * loss2 model = tf.keras.Model( inputs=[base_model.input, inp_len, label, label_len], =loss) return model </code></pre> <p><strong>Fine tuning</strong> (LB +0.005)<br> The model was first trained with both training data and supplementary data with intermediate CTC (0.3 and 0.7) for 300 epochs. Then the pre-trained model was fine tuned only on the training data without intermediate CTC (0.0 and 1.0)</p> <p>I will post the training notebook soon.</p> <p><a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a>'s solution<br> <a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">https://www.kaggle.com/code/hoyso48/1st-place-solution-training</a></p>
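<p>As a companion to the relative multi-head attention above, here is a small standalone sketch of the clipped relative-position index matrix that such a layer typically uses to gather its learned position biases; the names and clipping width are illustrative, not the author's exact code.</p> <pre><code>import numpy as np

def relative_position_index(length, num_nbr):
    # (length, length) matrix of offsets j - i, clipped to [-num_nbr, num_nbr]
    # and shifted to [0, 2 * num_nbr] so it can index an embedding table
    pos = np.arange(length)
    rel = pos[None, :] - pos[:, None]
    return np.clip(rel, -num_nbr, num_nbr) + num_nbr

idx = relative_position_index(length=6, num_nbr=2)
# a (2 * num_nbr + 1, head_dim) table of learned vectors is gathered with `idx`
print(idx)
</code></pre>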
HuBMAP - Hacking the Human Vasculature
Private 47th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution is an ensemble of yolov8m-seg models with different inputs: 512, 640, 1024. We pretrained our models using "unknown" and "glomerulus" as blood vessels since they have similar structures with the target. Then, we retrained our models with the correct blood vessels without any "unknown" or "glomerulus" labels.</p> <p>For test time augmentations (TTAs), we used rotation (90), horizontal flip, vertical flip, and diagonal flip. Additionally, we lowered the pixel threshold to 0.45, allowing the model to decide where and how to dilate. However, it didn't perform as well alone as dilation on the public leaderboard, so we kept dilation.</p> <p>To ensemble masks, we used the code from <a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a>.</p> <p>We also utilized DBSCAN to crop images where major predictions are located. This technique was used in the <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/329131" target="_blank">image_matching comp 2022</a>. The aim here is to fine-tune the predicted masks with their confidence( random cropping should also work).</p> <p>Additionally, we performed fine-tuning of the predicted masks using SAM.</p> <p>What did not work:</p> <ul> <li>TTAs of scaling didn't work at all.</li> <li>Normalization using staintools didn't work in the public leaderboard (resulted in a drop of about 0.008), while it worked in the private leaderboard (resulted in a gain of +0.01).<br> Our best private score was 0.561, an ensemble of yolox(512) + yolom(1024) with a pixel threshold of 0.45, no_dilation.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13672841%2F5ef92801ea5f6dc3f411a37cb1b365b1%2Fresult.png?generation=1690961885406247&amp;alt=media" alt=""></p>
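<p>A minimal sketch of the DBSCAN-based cropping idea mentioned above - clustering the centres of predicted boxes and cropping a padded region around the densest cluster for re-prediction; all parameters and helper names here are illustrative assumptions.</p> <pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def crop_region_from_boxes(boxes, tile_size=512, eps=64, min_samples=3, margin=32):
    # boxes: (N, 4) array of (x1, y1, x2, y2) predictions on one tile
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)
    if (labels &gt;= 0).sum() == 0:          # no dense cluster found, keep the full tile
        return 0, 0, tile_size, tile_size
    main = np.bincount(labels[labels &gt;= 0]).argmax()
    pts = centers[labels == main]
    x1, y1 = np.maximum(pts.min(axis=0) - margin, 0).astype(int)
    x2, y2 = np.minimum(pts.max(axis=0) + margin, tile_size).astype(int)
    return x1, y1, x2, y2
</code></pre>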
Google Research - Identify Contrails to Reduce Global Warming
28th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Overview</strong></p> <ul> <li><p>Emsemble 14models</p></li> <li><p>Model architecture: Unet, Unet++</p></li> <li><p>Encoder: resnest200e, resnest101e, efficientnet-b7</p></li> <li><p>The region that over 1/2 labelers considered contrail is GT, but I created labels for the regions that over 0, 1/4, 1/2, and 3/4 labelers considered contrail.I trained the model to predict these four labels simultaneously.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2Fe7023ad79299578a50b17c99fa6a4d71%2F.png?generation=1691633654216697&amp;alt=media" alt=""><br> (<a href="https://www.kaggle.com/code/welshonionman/dataset-ashcolor-4labels" target="_blank">dataset notebook</a>, <a href="https://www.kaggle.com/code/welshonionman/gr-icrgw-training-with-ashcolor-4labels" target="_blank">training notebook</a> )</p></li> <li><p>pseudo labeling</p></li> </ul> <p><strong>Augmentation</strong></p> <ul> <li>A.RandomRotate90</li> <li>A.RandomBrightnessContrast</li> <li>A.CoarseDropout</li> <li>A.RandomGridShuffle</li> </ul>
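<p>To make the four-threshold labelling above concrete, here is a short NumPy sketch that turns the per-annotator masks into the four targets used for multi-label training; variable names are illustrative, and the (H, W, 1, R) shape is the one used by the competition's human_individual_masks.npy files.</p> <pre><code>import numpy as np

def build_targets(individual_masks):
    # individual_masks: (H, W, 1, R) binary masks, one per labeler
    votes = individual_masks.squeeze(2).mean(axis=-1)   # fraction of labelers per pixel
    thresholds = [0.0, 0.25, 0.5, 0.75]                 # &gt;0, &gt;1/4, &gt;1/2 (the GT), &gt;3/4
    return np.stack([(votes &gt; t).astype(np.float32) for t in thresholds], axis=0)

# masks = np.load(".../human_individual_masks.npy")
# targets = build_targets(masks)   # shape (4, H, W), one channel per threshold
</code></pre>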
HuBMAP - Hacking the Human Vasculature
4th Place Solution [SDSRV.AI] GoN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this compelling competition. We present the 4th place solution from the GoN team at SDSRV.AI. It emphasizes the importance of dataset, validation, ensemble and post processing method. Special thanks to <a href="https://www.kaggle.com/ducmanvo" target="_blank">@ducmanvo</a>, an amazing teammate.</p> <h1>Summary</h1> <p>Our solution employs a combination of 4 instance segmentation models comprising CascadeRCNN (ResNeXt, Regnet), MaskRCNN (swint), HybridTaskCascade (Re2Net), and 1 object detection model, Yolov6m. Each model is trained with a dataset consisting of 1549 images (ds1+ds2), with 84 images from dataset1 utilized for validation. During the inference process, the Weighted Boxes Fusion (WBF) technique is employed to ensemble RPN boxes and ROI boxes, while the masks generated by the ensemble of models utilize the mean average approach. For post-processing, we filter out small instances and refine instance score using mask scores.</p> <h1>Cross-Validation and Preprocessing</h1> <p><strong>Train-test split</strong><br> - Training: 1549 images (1211 images from ds2 + 338 images from ds1)<br> - Validation: 84 images from ds1<br> - During the initial stages of approaching the problem (in the final month of the competition), we initially utilized K-fold cross-validation. However, upon recognizing significant differences in distribution, label assignment methodologies between dataset1 and dataset2, as well as variations in label assignments among Whole Slide Images (WSI), we decided to split the data into two train-validation sets. The validation data was exclusively taken from dataset1 (20% of dataset1). Another reason for this data split was the scarcity of dataset1 images in the training set, which was a concern given that the private test data was exclusively sourced from dataset1 (with only 422 images available).<br> - To determine the dissimilarities in the label distribution between dataset1 and dataset2, as well as variations among different Whole Slide Images (WSI), we trained a base model, MaskRCNN R50, on the training data from dataset1 (fold1) and validated it on dataset1, dataset2 (fold1). Our observations revealed that the model's performance scored considerably higher on dataset1 compared to dataset2. Similar procedures were applied when comparing performance across various Whole Slide Images (WSI).<br> <strong>Preprocessing</strong>: remove duplicate annotations<br> - Approximately 4.5% of training data was duplicated. We directly removed these duplicate labels to ensure data integrity and improve model performance.</p> <h1>Training &amp; valid with blood_vessel &amp; unsure only</h1> <p>During training and validation, we specifically focused on the "blood_vessel" and "unsure" classes. This decision was influenced by the significant size difference between glomerulus and the other two classes, with glomerulus being much larger. Surprisingly, when training with all three classes, the model's performance on glomerulus remained exceptionally high. 
Additionally, the competition organizers informed us that during the testing phase, we could utilize glomerulus labels to eliminate false positives. Consequently, we concluded that training with additional glomerulus data was unnecessary. After removing glomerulus labels from the training dataset, we observed an improvement in the performance score for the "blood_vessel" class. Therefore, we made the decision to exclude glomerulus labels from all subsequent experiments.</p> <h1>Models</h1> <p><strong>2 stages model</strong>: MaskRCNN, CascadeRCNN, HybridTaskCascade</p> <ul> <li>When training with the base model (MaskRCNN-R50), we noticed that the model struggled to converge using the default mmdet configurations. Consequently, in subsequent experiments, we reduced the utilization of augmentation methods, retaining only multiscale training. Additionally, we adopted a larger backbone (ResNeXt-101) and replaced SGD with AdamW optimizer. To further enhance model diversity during ensemble, we incorporated CascadeRCNN and HybridTaskCascade into our approach. These modifications aimed to improve convergence and overall performance of the models.</li> </ul> <p><strong>Yolov6m</strong>: Two-stage models like MaskRCNN, CascadeRCNN, and HybridTaskCascade exhibit good Mean Average Precision (MAP) at a certain intersection-over-union (IoU) threshold range [0.5:0.95]. However, their overall MAP [0.5:0.95] scores may not reach high values. To address this limitation, YOLO series models offer a solution. Specifically, YOLOv6, with its default training configurations, achieves a significantly higher Mean Average Recall (MAR) of 55.1, compared to the 44.x obtained by the two-stage models. This improvement in MAR highlights the effectiveness of YOLO series models in overcoming the mentioned drawback.<br> <strong>Detail training config</strong></p> <ul> <li>Optimizer: AdamW, warmup 3 epoch</li> <li>Select best model use MAP[0.5:0.95] blood_vessel class</li> <li>Data augmentation: Multi scale training: [(512x512), (640,640), (768, 768), (896, 896), (1024, 1024)]</li> </ul> <h1>Training: 2 stages</h1> <ul> <li>Stage 1: training all 1549 images, 2 classes</li> <li>Stage2: training 338 images ds1, 2 classes</li> </ul> <p>Focusing on optimizing the validation set with dataset1 exclusively, fine-tuning the model for a few epochs using dataset1 in the training set improved performance on the validation data.</p> <div> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2Fdd8e2019007298da32e43ffe92e1fe32%2Fsingle_model.png?generation=1691082591623984&amp;alt=media"> </div> <h1>Ensemble &amp; Post processing</h1> <p><strong>Ensemble</strong></p> <ul> <li>The ensemble process is as depicted in the following diagram; We utilize Weighted Boxes Fusion (WBF) When ensembling bounding boxes, as it produces superior results compared to NMS, SoftNMS, or NMW. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2F5a2d33de596e95e08f227e514cc49350%2Fensemble.png?generation=1691081285365346&amp;alt=media" alt="Ensemble diagram"><ul> <li>For WBF, equal weights are assigned to each model during the ensembling process.</li></ul></li> </ul> <p><strong>PostProcessing</strong></p> <ul> <li>Remove instances with an area &lt; 80 pixels.</li> <li>Refine instance scores using the formula: <code>score_instance = score_bbox * mean(mask[mask &gt; 0.5])</code></li> </ul> <p>Some detail experiments:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2Ffabaef83a462c8b1d1f54d8ad11389ea%2Ffinal%20experiments.png?generation=1691081498124640&amp;alt=media" alt=""><br> <em>We observed a strong correlation between the local validation and the public leaderboard (LB) scores. Therefore, we only made submissions when there was a significant improvement in the validation score.</em></p> <h1>Conclusion</h1> <ul> <li>Several techniques in our approach significantly influenced the private score: using only dataset1 for validation and fine-tuning, employing light augmentation, multi-stage ensemble, utilizing YOLOv6 for higher MAR, refining prediction scores, and, undoubtedly, placing strong trust in the validation score - no dilated!!!</li> <li>Adopting larger models and training with larger image sizes can potentially enhance model quality. However, due to resource limitations, we were unable to implement this strategy.</li> </ul> <p><strong>Keywords</strong>: Instance segmentation, ensemble instance segmentation, re-ranking instance, refine instance score.</p>
HuBMAP - Hacking the Human Vasculature
(Best private 0.602) 297th Place Solution for the HuBMAP - Hacking the Human Vasculature Competition. How I lost the gold medal.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is the first time I have solved a segmentation task.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></li> </ul> <h1>Overview of the approach</h1> <p>My final model was a combination of 13 single models: 4 MaskDINO Swin-L models trained on different data, 4 YOLOv7-seg models, 1 MaskDINO ResNet-50 model and 4 Cascade Mask R-CNN InternImage-L models. For validation, I split the data into 4 folds: 3 WSIs for training and the remaining one for testing. Inference was run at multiple sizes: MaskDINO Swin-L used 3 image sizes [512, 1024, 1440], YOLOv7 used 2 image sizes [512, 1024], MaskDINO ResNet-50 used 2 image sizes [1024, 1440] and Cascade Mask R-CNN InternImage used 1 image size [1440]. Weighted Mask Fusion was used for the ensemble. No external data or pseudo labels were used.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F0df50177802f0231fdb3a944e6808cd4%2FScreenshot%20from%202023-08-05%2019-33-28.png?generation=1691238892230812&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F576a5d3446d03ac0951a64c898d75ec4%2Fhubmap.drawio.png?generation=1691237415800097&amp;alt=media" alt=""></p> <h1>Details of the submission</h1> <h2>Cross validation experiments</h2> <p>MaskDINO Swin-L, image size 1440:</p> <ul> <li>Fold 0 (val on wsi1): bbox mAP@0.5 = 34.60</li> <li>Fold 1 (val on wsi2): bbox mAP@0.5 = 14.66</li> <li>Fold 2 (val on wsi3): bbox mAP@0.5 = 40.42</li> <li>Fold 3 (val on wsi4): bbox mAP@0.5 = 28.32<br> The results on fold 0 and fold 1 are very strange, which confused me: should I train on dataset 1 only or on all the data? These results are the reason why I didn't choose only dataset 1 for the final submission.</li> </ul> <h2>Experiments on dataset 1</h2> <ul> <li>MaskDINO Swin-L, image size 1408, trained only on all of dataset 1. One model without dilation: Public 0.51, Private 0.543.</li> <li>YOLOv7-seg, image size 1024, trained only on all of dataset 1. One model without dilation: Public 0.3, Private 0.446.</li> <li>InternImage-L, image size 1440, trained only on all of dataset 1. One model without dilation: Public 0.453, Private 0.502.</li> <li>MaskDINO ResNet-50, image size 1024, trained only on all of dataset 1. One model without dilation: Public 0.46, Private 0.464.</li> <li>InternImage-L, image sizes [(1024, 1024), (1440, 1440), (1536, 1536)], trained only on all of wsi2. One model without dilation: Public 0.439, Private 0.469.</li> </ul> <h2>Importance of image size</h2> <p>The dataset has a lot of small masks. I found that increasing the image size boosted both CV and the public leaderboard.
However, CV did not increase when going from image size 1024 to 1440.</p> <h2>My best private score is 0.597</h2> <p>Public notebook: <a href="https://www.kaggle.com/code/quan0095/maskdino-yolov7-internimage-private-0-597" target="_blank">https://www.kaggle.com/code/quan0095/maskdino-yolov7-internimage-private-0-597</a></p> <h2>My final submission</h2> <p>Final submission 1 (Version 8), mask threshold &gt; 0.6, with dilation: Private 0.334, Public 0.551<br> Final submission 2 (Version 1), mask threshold &gt; 0.03, without dilation: Private 0.394, Public 0.539<br> Late submission: modified from <strong>Final submission 1 (Version 8)</strong> without dilation, which achieved a private leaderboard score of 0.602<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F7aa3482c9ca8b4a2830348cf3468d942%2FScreenshot%20from%202023-08-01%2022-41-23.png?generation=1690904619445688&amp;alt=media" alt=""></p> <h2>Code optimization</h2> <ul> <li>To ensemble many models, I cropped each mask to its bounding box instead of keeping the full 512x512 mask, which reduced CPU RAM usage, and used multi-threading to reduce the notebook run time.</li> <li>Modified the MaskDINO source code to add EMA.</li> <li>Modified WBF to work with masks.</li> <li>Fixed a bug so that MaskDINO runs in a Kaggle notebook.</li> <li>Fixed a bug to train InternImage with CUDA 12.0.</li> </ul> <h2>What didn't work</h2> <ul> <li>Self-supervised learning using DINOv1 with dataset 3 and the HuBMAP kidney dataset.</li> <li>Using SAM.</li> </ul> <h2>Conclusion</h2> <ul> <li>I missed the gold medal because of a mistake: I submitted two submissions, with and without dilation, but I also changed the mask threshold for the no-dilation one, so the mask sizes barely changed between the two submissions.</li> <li>IoU is very sensitive for small masks: comparing with and without dilation, IoU changed by 40% for instances with bbox area &lt; 1000.</li> <li>Using only dataset 1 was good for both public and private.</li> <li>When training with dataset 2, dilation was good for public but not for private.</li> </ul> <h1>Sources</h1> <h2>My two submission notebooks:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/dialate-version" target="_blank">https://www.kaggle.com/code/quan0095/dialate-version</a></li> <li><a href="https://www.kaggle.com/code/quan0095/not-dialate-version-7" target="_blank">https://www.kaggle.com/code/quan0095/not-dialate-version-7</a></li> </ul> <h2>Late submission notebook (Private 0.602):</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/dialate-version?scriptVersionId=138564303" target="_blank">https://www.kaggle.com/code/quan0095/dialate-version?scriptVersionId=138564303</a></li> </ul> <h2>Maskdino inference:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/inference-of-maskdino-p100" target="_blank">https://www.kaggle.com/code/quan0095/inference-of-maskdino-p100</a></li> </ul> <h2>Cascade maskrcnn internimage:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/internimage-mmdet-2-26-inference" target="_blank">https://www.kaggle.com/code/quan0095/internimage-mmdet-2-26-inference</a></li> </ul>
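<p>As a rough sketch of the memory-saving trick from the code-optimization list above - storing each 512x512 instance mask only inside its bounding box and restoring it later for ensembling; the helper names are illustrative.</p> <pre><code>import numpy as np

def compress_mask(mask):
    # store a full-tile binary mask as (bbox, cropped patch); assumes the mask is non-empty
    ys, xs = np.where(mask)
    y1, y2, x1, x2 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return (y1, x1, y2, x2), mask[y1:y2, x1:x2].copy()

def decompress_mask(bbox, patch, shape=(512, 512)):
    y1, x1, y2, x2 = bbox
    full = np.zeros(shape, dtype=patch.dtype)
    full[y1:y2, x1:x2] = patch
    return full
</code></pre>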
ICR - Identifying Age-Related Conditions
Large and simple GBDT ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Wow, this is the craziest shakeup I've seen so far. Makes me feel lucky and grateful for 329th place!</p> <h1>Solution</h1> <p>It's a simple solution created during the last week.</p> <ul> <li>XGBoost, Catboost and LGBM</li> <li>Use (almost) default hyperparameters</li> <li>RandomUnderSampler to balance classes</li> <li>Train models using 1000 different 5-folds-seeds and RandomUnderSampler seeds</li> <li>Simple average ensemble</li> </ul> <h1>Potential improvement</h1> <p>One of many potential improvements is perhaps to also random sample different GBDT hyperparameters</p> <h1>Code</h1> <p>The code can be seen <a href="https://www.kaggle.com/code/erijoel/icr-solution" target="_blank">here</a></p>
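<p>A condensed sketch of the recipe above - many seeds, random undersampling, near-default GBDTs and a simple average. This is an illustration under my own assumptions about the loop structure, not the linked notebook; X and y are assumed to be pandas objects.</p> <pre><code>import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

def seed_ensemble_predict(X, y, X_test, n_seeds=1000):
    preds = []
    for seed in range(n_seeds):
        # rebalance the classes with a fresh undersampling seed each time
        Xr, yr = RandomUnderSampler(random_state=seed).fit_resample(X, y)
        kf = StratifiedKFold(5, shuffle=True, random_state=seed)
        for tr, _ in kf.split(Xr, yr):
            for Model in (XGBClassifier, LGBMClassifier, CatBoostClassifier):
                model = Model(random_state=seed)   # near-default hyperparameters
                model.fit(Xr.iloc[tr], yr.iloc[tr])
                preds.append(model.predict_proba(X_test)[:, 1])
    return np.mean(preds, axis=0)                  # simple average ensemble
</code></pre>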
HuBMAP - Hacking the Human Vasculature
Public 12th / Private 26th Place Solution for the HuBMAP - Hacking the Human Vasculature Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much for hosting this competition. We weren't able to win a gold medal, but participating in our first instance segmentation competition was a very educational experience.</p> <h2><strong>Context section</strong></h2> <ul> <li><strong>Business context</strong>: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature</a></li> <li><strong>Data context</strong>: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></li> </ul> <h2><strong>Overview of the Approach</strong></h2> <p>Our model is an ensemble of Mask R-CNNs using Swin Transformer-S and Swin Transformer-T as backbones. </p> <h4>Our solution:</h4> <ul> <li><strong>Models:</strong><ul> <li>4 MMdet-based models, each trained on different folds (2 variations for each model):</li> <li>2x Mask R-CNN using Swin Transformer-S Backbone</li> <li>2x Mask R-CNN using Swin Transformer-T Backbone</li></ul></li> <li><strong>Training Method: 2-stage approach:</strong><ul> <li><strong>Stage 1 :</strong> Using dataset2 for initial training</li> <li><strong>Stage 2 :</strong> Using dataset1 to further train and fine-tune the models</li></ul></li> <li><strong>Ensemble Method:</strong><ul> <li><strong><a href="https://arxiv.org/pdf/1910.13302.pdf" target="_blank">Weighted Box Fusion(WBF)</a>:</strong> Applied to the Regions of Interest (ROI) for the final bounding boxes</li> <li><strong><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">Weighted Masks Fusion(WMF)</a>:</strong> Applied to the final output for both bounding boxes and masks</li></ul></li> <li><strong>Test Time Augmentation</strong><br> Below is an overview of our solution, as illustrated in the following diagram.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7135891%2F05b266ee749cc16baf27278421f93d1a%2Foverview_figure.svg?generation=1691593014306642&amp;alt=media" alt=""></li> </ul> <h2><strong>Details of the Submission</strong></h2> <h3><strong>Two-Stage Training</strong></h3> <p>The public leaderboard was composed of wsi3 and wsi4 (wsis included in the training data of dataset2), and the private leaderboard was composed of wsi5, not included in the training data. This setup meant that the public leaderboard might not necessarily be reliable.<br> Because of this, it was necessary to build a validation method that could mimic the evaluation on the private leaderboard, and also to effectively utilize the wsi from dataset2 for training. 
Therefore, we adopted a CV strategy that divided the data by wsi and implemented a two-stage training process.</p> <h4>1st stage: Pretraining on dataset2</h4> <ul> <li>fold1:<ul> <li>train: wsi1, wsi3, wsi4(dataset2)</li> <li>validation: wsi2(dataset2)</li></ul></li> <li>fold2:<ul> <li>train: wsi2, wsi3, wsi4(dataset2)</li> <li>validation: wsi1(dataset2)</li></ul></li> </ul> <h4>2nd stage: Fine-tuning on dataset1</h4> <ul> <li>fold1:<ul> <li>train: wsi1(dataset1)</li> <li>validation: wsi2(dataset1)</li></ul></li> <li>fold2:<ul> <li>train: wsi2(dataset1)</li> <li>validation: wsi1(dataset1)</li></ul></li> </ul> <p>By evaluating with such CV splits, we were aided in judging whether the Public Leaderboard's score was reasonable or not.<br> However, the existence of unexpected cases was also confirmed.Within these CV splits, we optimized parameters such as mask probability thresholds and types of TTA with the intention of improving validation mAP, expecting an improvement on the private leaderboard. Contrary to our expectations, there were cases where these adjustments had an unexpected negative impact on both the Public Leaderboard and Private Leaderboard scores.</p> <h3><strong>Training Settings</strong></h3> <h4><strong>Model Training Configuration</strong></h4> <ul> <li>Our models were initialized using weights pretrained on the COCO dataset.</li> <li>Training was conducted with the AdamW optimizer.</li> <li>The learning rate was set at 0.0003 for Stage 1 (pretraining) and 0.00001 for Stage 2 (fine-tuning).</li> </ul> <h4><strong>Annotations in Training</strong></h4> <ul> <li>During the training step, annotations labeled as "blood_vessel" and "unsure" were both treated as "blood_vessel."</li> <li>The "glomerulus" annotation was not used in our training step.</li> </ul> <h4><strong>Data Augmentation</strong></h4> <p>We adopted the following data augmentations:</p> <ul> <li>RGB Shift</li> <li>RandomBrightnessContrast(with brightness and contrast limits of 0.1 and 0.4, respectively)</li> <li>RandomRotate90</li> <li>RandomFlip</li> <li>RandomChoiceResize(with sizes ranging from 480 to 1200)</li> <li>RandomCrop</li> </ul> <h3><strong>Ensemble Methodology</strong></h3> <p>We adopted an ensemble method that integrates the Regions of Interest (ROI) output by the model through Weighted Box Fusion (WBF). The ensemble of ROIs was conducted only for the outputs within the same fold, and that employed the same Test Time Augmentation (TTA) methods (2 models each). Subsequently, we integrated all the output masks and boxes within the same fold using Weighted Mask Fusion (WMF). 
Finally, we ensembled the outputs for each fold through WMF.</p> <h3><strong>Test Time Augmentation</strong></h3> <p>We adopted Horizontal Flip and Vertical Flip as our Test Time Augmentation (TTA) techniques.</p> <h3><strong>Postprocessing</strong></h3> <p>Masks that overlapped with glomerulus by more than 90% were removed.</p> <h3><strong>What Didn't Work</strong></h3> <ul> <li><strong>Sliding Window</strong>: We initially tried a sliding window technique but found it ineffective.</li> <li><strong>Using Context from Neighbor Tile</strong>: This approach did not bring the desired results.</li> <li><strong>Rotate TTA</strong>: Using rotation in multiples of 90 degrees increased validation mAP but did not improve the leaderboard score.</li> </ul> <h2><strong>Sources</strong></h2> <ul> <li><a href="https://arxiv.org/pdf/1910.13302.pdf" target="_blank">https://arxiv.org/pdf/1910.13302.pdf</a></li> <li><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998</a></li> </ul>
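<p>As a small illustration of the post-processing step described above (removing masks that overlap glomerulus regions by more than 90%), here is a minimal numpy sketch. The function name and the binary-mask representation are my own assumptions, not code from this solution.</p>
<pre><code>import numpy as np

def drop_glomerulus_overlaps(pred_masks, glomerulus_mask, max_overlap=0.9):
    """Drop predicted vessel masks lying almost entirely inside glomerulus regions."""
    glom = glomerulus_mask.astype(bool)
    kept = []
    for m in pred_masks:          # each mask: binary HxW numpy array
        m = m.astype(bool)
        area = m.sum()
        if area == 0:
            continue
        overlap = np.logical_and(m, glom).sum() / area
        if overlap &lt;= max_overlap:
            kept.append(m)
    return kept
</code></pre>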
HuBMAP - Hacking the Human Vasculature
19th Place Solution: Repeated Pseudo Labeling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank the competition hosts for organizing this competition and also the participants.</p> <p>I was disappointed to be kicked out of the gold zone after the shakedown, but the competition was a great learning experience for instance segmentation.</p> <h2>Overview</h2> <p>My final submission was a single model of Cascade Mask R-CNN (Backbone: ConvNeXt-tiny).</p> <p>However, the training process had multiple stages. Figure 1 outlines the training and submission pipeline.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2F210e2b0ce449c03a2c77e93ae50c41df%2Fhubmap_train-sumarry.drawio%20(3).png?generation=1690860001136772&amp;alt=media" alt=""><br> Figure 1: Training and Submission Pipeline</p> <ul> <li>1st Training Phase:<br> Cascade Mask R-CNN was trained using Dataset 1 and 2. Three different backbones (ConvNext-tiny, ResNext-101, Resnet-50) were used for Cascade Mask R-CNN.<br> Using the three models, we performed pseudo labeling on dataset 3, extracting only labels with a confidence greater than 0.5. We also performed TTA and NMS during pseudo labeling.</li> <li>2nd Training Phase:<br> We used Dataset 1 and 2, and the pseudo labeled dataset 3 created in the 1st phase to train the model whose backbone is convnext. In this phase, the model trained in the 1st phase was fine tuned.</li> <li>3rd Training Phase:<br> Fine tuning the model trained in the 2nd phase using only Dataset 1. This was done to bring the output of the model closer to the annotation quality of dataset 1.<br> Using the model created here, we performed pseudo labeling on dataset 2 and 3, extracting only labels with a confidence greater than or equal to 0.95. We also performed TTA and NMS during pseudo labeling.</li> <li>Final Training Phase:<br> Dataset 1, and pseudo labeled dataset 2 and 3 created in the 3rd phase were used to train the model whose backbone is convnext. Here, the model trained in the 2nd phase was used for fine tuning.</li> <li>Submission:<br> The model developed in the Final Training Phase was used to infer the test dataset; it is a single model of Cascade Mask R-CNN (Backbone: ConvNeXt-tiny), but we have performed TTA and NMS.</li> </ul> <h2>My Approach</h2> <p>I proceeded to train the model, focusing on the quality of the annotations in each dataset. The following is a quote from the description of the datasets in this competition.</p> <blockquote> <p>The competition data comprises tiles extracted from five Whole Slide Images (WSI) split into two datasets. Tiles from Dataset 1 have annotations that have been expert reviewed. 
Dataset 2 comprises the remaining tiles from these same WSIs and contain sparse annotations that have not been expert reviewed.</p> <p>All of the test set tiles are from Dataset 1.</p> <p>Two of the WSIs make up the training set, two WSIs make up the public test set, and one WSI makes up the private test set.</p> <p>The training data includes Dataset 2 tiles from the&nbsp;<em>public</em>&nbsp;test WSI, but&nbsp;<em>not</em>&nbsp;from the&nbsp;<em>private</em>&nbsp;test WSI.</p> <p>We also include, as Dataset 3, tiles extracted from an additional nine WSIs. These tiles have not been annotated. You may wish to apply semi- or self-supervised learning techniques on this data to support your predictions.</p> </blockquote> <p>This means,</p> <ul> <li>Dataset 1 → accurate annotations and the same annotation quality as the test dataset</li> <li>Dataset 2 → not so accurate annotations</li> <li>Dataset 3 → no annotations</li> </ul> <p>Therefore, I decided to trust Dataset 1 and tried to improve the quality of the annotations output by the model to be closer to Dataset 1. I also tried to make good use of the remaining Dataset 2 and 3. Based on this policy, I proceeded with my experiments.</p> <ul> <li>Model<ul> <li>convnext-tiny mainly used.</li> <li>resnet, resnext were also used, but finally not used because they did not give good results when used with pseudo label.<ul> <li>resnet and resnext were used for the first pseudo labeling.</li></ul></li> <li>mmdetection 3.x was used.</li></ul></li> <li>Annotations Type<ul> <li>Only blood_vessel was used.</li> <li>unsure, glomerulus not used at all.</li></ul></li> <li>Data Augmentation<ul> <li>The images were randomly augmented to multiple scales during training as shown below.<ul> <li>scales=[(640, 640), (768, 768), (896, 896), (1024, 1024), (1152, 1152), (1280, 1280), (1408, 1408), (1536, 1536)]</li></ul></li></ul></li> <li>Test Time Augmentation (TTA)<ul> <li>The ensemble of output results from multiple scales was used.<ul> <li>scales=[(1024, 1024), (1280, 1280), (1536, 1536)]</li></ul></li></ul></li> <li>Post-Processing<ul> <li>Non-Maximum Suppression (NMS) was used during TTA.</li> <li>No dilation.</li></ul></li> <li>Pseudo Labeling<ul> <li>Pseudo labeling was performed in two parts as shown in Figure 1. TTA was performed for three different image sizes, scales=[(1024, 1024), (1280, 1280), (1536, 1536)], and NMS was used for the ensemble.</li></ul></li> <li>Cross Validation (CV) Strategy<ul> <li>CV was set to be similar to private LB.<ul> <li>Source WSI == 1 &amp; dataset == 1 → valid</li> <li>Source WSI == 1 &amp; dataset != 1 → It was not used for cv.</li> <li>Others → train</li></ul></li> <li>At the time of submission, all data was used for training.</li> <li>CV was not always correlated with Public LB. 
However, the correlation improved in the later stages of experimentation.</li></ul></li> </ul> <h2>What Worked</h2> <ul> <li>Pseudo Labeling</li> <li>Trust the quality of the annotations in Dataset 1</li> <li>TTA</li> <li>NMS</li> <li>Scale Up Images</li> </ul> <h2>What Didn’t Work</h2> <ul> <li>Models with large parameter sizes<ul> <li>mask2former, convnext-small</li> <li>Maybe because of lack of parameter tuning…?</li></ul></li> <li>Use unsure and glomerulus for training</li> <li>Dilation<ul> <li>It improved LB when raw dataset 2 was used as training data</li> <li>It decreased LB when using only dataset 1 as training data</li></ul></li> <li>Flip Data Augmentation</li> <li>Weighted Boxes Fusion (WBF) (my implementation may have been suspect…)</li> <li>Smooth inference around edges (my implementation may have been suspect…)<ul> <li>Reference: <a href="https://www.kaggle.com/competitions/hubmap-kidney-segmentation/discussion/238013" target="_blank">https://www.kaggle.com/competitions/hubmap-kidney-segmentation/discussion/238013</a></li></ul></li> <li>Ensemble with other models</li> </ul> <h2>Code</h2> <ul> <li>GitHub (My training code. I'm sorry but it's not well maintained) -&gt; <a href="https://github.com/moritake04/hubmap-2023" target="_blank">https://github.com/moritake04/hubmap-2023</a></li> <li>Submitted notebook -&gt; <a href="https://www.kaggle.com/code/moritake04/private19th-final-sub" target="_blank">https://www.kaggle.com/code/moritake04/private19th-final-sub</a></li> </ul>
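<p>For readers new to pseudo labeling, the confidence filtering used in the pipeline above boils down to something like the following sketch. The prediction format and variable names are hypothetical; the write-up used MMDetection outputs after TTA and NMS.</p>
<pre><code>def make_pseudo_labels(predictions, score_thr):
    # Keep only confident predictions as pseudo labels.
    return [p for p in predictions if p["score"] &gt;= score_thr]

# Toy predictions; in practice each dict would also carry a bbox and a mask.
preds_dataset3 = [
    {"image_id": "tile_001", "score": 0.92},
    {"image_id": "tile_002", "score": 0.31},
]
round1 = make_pseudo_labels(preds_dataset3, score_thr=0.5)    # 1st round (dataset 3)
round2 = make_pseudo_labels(preds_dataset3, score_thr=0.95)   # 2nd round used 0.95 (datasets 2 and 3)
print(len(round1), len(round2))
</code></pre>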
ICR - Identifying Age-Related Conditions
This is my solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It was very unexpected; I didn't even think about getting into the top 30%, let alone the top 10%. My solution is mostly ensemble algorithms and fine-tuning; the details are in the following link. Thanks. PLEASE upvote, I want the swag!</p> <p><a href="https://www.kaggle.com/code/quangnhatbui/icr-eda-and-model-selection-xgb-randomforest" target="_blank">https://www.kaggle.com/code/quangnhatbui/icr-eda-and-model-selection-xgb-randomforest</a></p>
Google - American Sign Language Fingerspelling Recognition
Public notebook, cosmetic changes, 190 position
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi to all!<br> I haven't been able to really compete in the last while, but I'll share the changes to the public notebook: <a href="https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-" target="_blank">https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-</a> place (special thanks to the author). Only two changes: I reduced the drop_rate and added another Dropout. Here's what the function looks like:</p> <p>def get_model(dim = 384,num_blocks = 6,drop_rate = 0.35):<br> inp = tf.keras.Input(INPUT_SHAPE)<br> x = tf.keras.layers.Masking(mask_value=0.0)(inp)<br> x = tf.keras.layers.Dense(dim, use_bias=False, name='stem_conv')(x)<br> pe = tf.cast(positional_encoding(INPUT_SHAPE[0], dim), dtype=x.dtype)<br> x = x + pe<br> x = tf.keras.layers.BatchNormalization(momentum=0.95,name='stem_bn')(x)</p> <pre><code> i range(num_blocks): x = (x) x = (x) x = (x) x = (x) x = tf.keras.layers.(x) x = tf.keras.layers.(x) x = tf.keras.layers.(x) x = tf.keras.layers.,name='classifier')(x) model = tf.keras. loss = CTCLoss # Adam Optimizer optimizer = tfa.optimizers. optimizer = tfa.optimizers. model.compile(loss=loss, optimizer=optimizer) return model </code></pre> <p>The changes are really minor, but they worked because the base drop rate of 0.4 was too big and slowed learning too much, and I knew from the previous competition that increasing the number of epochs could improve the result. Here, my logic was to simplify learning instead of increasing the number of epochs.<br> Good luck to all of you and a peaceful sky above your head!</p>
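<p>Since the layer names inside the loop were lost in the copy above, here is a hedged sketch of the kind of head change being described: keeping drop_rate at 0.35 and inserting one extra Dropout before the classifier. The layer sizes and placement are my assumptions, not the author's exact code.</p>
<pre><code>import tensorflow as tf

def classifier_head(x, num_classes, drop_rate=0.35):
    # Hypothetical head: the write-up only states that drop_rate went from 0.4 to 0.35
    # and that one additional Dropout layer was added.
    x = tf.keras.layers.Dropout(drop_rate)(x)
    x = tf.keras.layers.Dense(2 * 384, activation="relu", name="top_dense")(x)
    x = tf.keras.layers.Dropout(drop_rate)(x)   # the extra Dropout
    return tf.keras.layers.Dense(num_classes, name="classifier")(x)
</code></pre>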
HuBMAP - Hacking the Human Vasculature
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and congrats to all the winners!</p> <h2>Overview</h2> <p>From the top 1&amp;2 solutions of the <a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation" target="_blank">Sartorius competition</a>, I decided to go with a two-stage pipeline of object detection and semantic segmentation instead of using a one-stage model (e.g. Mask-RCNN) in the early in the competition. I think that the main advantage of the two-stage pipeline is the ease of the ensemble and TTA.</p> <p><img src="https://i.postimg.cc/3xDzz1G0/hubmap1.png" alt=""></p> <h2>Detection part</h2> <h3>Update of dataset2 bbox-level annotation</h3> <p>As been pointed out in the discussion, the dilation significantly improved LB score in models trained from both dataset1 and dataset2. However, models trained only from dataset1 did not show this effect. This led me to believe that the annotations of dataset2 were made smaller than those of dataset1. Considering that there are several types of blood vessels and the possibility that the annotations of them are not uniformly small, I took the approach of updating the bbox-level annotations of dataset2 to dilate them by models trained only from dataset1.</p> <p><img src="https://i.postimg.cc/MK36b97C/hubmap2.png" alt=""></p> <p>The update was performed by replacing the original annotations with a predicted ones that met the following conditions. Basically, the updated bboxes is larger than the original ones. The total number of bboxes in dataset2 does not change by the updating.</p> <ol> <li>IoU &gt; 0.4</li> <li>FP / (TP + FP) &gt; 0.1</li> </ol> <p><img src="https://i.postimg.cc/jSGRBCkv/hubmap3.png" alt=""></p> <p>In the model trained from the updated dataset2 together with dataset1, the LB improvement by the dilation has almost disappeared and the LB score above 0.5 was achieved without the dilation.</p> <h3>CV strategy</h3> <p>The CV was carried out by the following special two-fold division.</p> <p><img src="https://i.postimg.cc/jSWs8yvL/hubmap4.png" alt=""></p> <h3>Model training</h3> <p>I trained 2-class (blood_vessel, glomerulus) detection models. "unsure" label was ignored. I modified the source code of YOLOv5/v7/v8 and add a 90 degree random rotation augmentation. The following 5 models x 2 folds (total 10 models) were used in the final submission.</p> <table> <thead> <tr> <th>Model</th> <th>Input size</th> </tr> </thead> <tbody> <tr> <td>YOLOv5x6</td> <td>512</td> </tr> <tr> <td>YOLOv7x</td> <td>512</td> </tr> <tr> <td>YOLOv8l</td> <td>512</td> </tr> <tr> <td>YOLOv8x</td> <td>512</td> </tr> <tr> <td>YOLOv8l</td> <td>768</td> </tr> </tbody> </table> <h3>Inference</h3> <p>I made a huge ensemble of 10 models x 16 TTAs since I considered the accuracy of detection to be more important than one of the segmentation. The small number of test images made it possible. 
10 x 16 = 160 detection results were merged by WBF.</p> <ul> <li>16 TTAs: 8 for combinations of h-flip, v-flip, 90deg rotation, 2 for 2 scales (base size, base size + 64px), 8 x 2 = 16</li> <li>IoU threshold of each model's NMS: 0.6</li> <li>IoU threshold of WBF: 0.7</li> </ul> <h2>Segmentation part</h2> <h3>CV strategy</h3> <p>Almost the same as for the detection models, except that dataset2 is not updated.</p> <h3>Model training</h3> <p>For the bbox inputs to the segmentation models, I did not use the prediction results of the detection models but the bboxes obtained from the original annotations. Unlike the detection models, the "unsure" label was also used for training. In the final submission, EfficientNetB1-Unet and EfficientNetB2-Unet were used (2 folds each, 4 models in total).</p> <h3>Inference</h3> <p>TTA was not used because of run-time constraints. A score threshold of 0.5 was used for binarization.</p> <h2>Strategy of final submission</h2> <p>After updating dataset2 the effect became smaller, but a small dilation still improved the LB score slightly. I implemented the dilation not with cv2.dilate on the final masks, but by increasing the size of the bboxes by a percentage. In my final submission, a 3% bbox dilation increased my LB score by about 0.005. I used 3% dilation in one of the two final submissions and not in the other (there are other differences besides the dilation).</p> <table> <thead> <tr> <th>Submission</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>w/ 3% dilation</td> <td>0.580</td> <td>0.549</td> </tr> <tr> <td>w/o 3% dilation</td> <td>0.572</td> <td>0.560</td> </tr> </tbody> </table> <h2>Things tried that did not work</h2> <ul> <li>Semi-supervised learning on dataset3</li> </ul>
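<p>The bbox-percentage dilation mentioned in the final-submission strategy can be sketched as below. Whether the 3% is applied per side or in total is not specified in the write-up, so treat the split here as an assumption.</p>
<pre><code>def dilate_bbox(bbox, ratio=0.03, img_w=512, img_h=512):
    # Grow an (x1, y1, x2, y2) box by `ratio` of its width/height, clamped to the tile.
    x1, y1, x2, y2 = bbox
    dw = (x2 - x1) * ratio / 2.0
    dh = (y2 - y1) * ratio / 2.0
    return (max(0.0, x1 - dw), max(0.0, y1 - dh),
            min(float(img_w), x2 + dw), min(float(img_h), y2 + dh))

print(dilate_bbox((100, 100, 200, 180), ratio=0.03))
</code></pre>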
Google - American Sign Language Fingerspelling Recognition
[28th Place Solution] Pre Training on Supplemental Data gives 0.01 LB Improvement
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Acknowledgements</h2> <p>Firstly, I'd like to extend my deepest gratitude to my teammates:</p> <ul> <li><a href="https://www.kaggle.com/chellyfan" target="_blank">@chellyfan</a></li> <li><a href="https://www.kaggle.com/chuansj" target="_blank">@chuansj</a></li> </ul> <p>Their enormous effort has been instrumental in this competition. </p> <p>I'd like also thanks to the authors of following notebooks. Our success wouldn't have been possible without the insights and techniques we adopted from these notebooks. A big thanks to you.</p> <ul> <li><p><strong>@irohith</strong> for his excellent preprocessing and training notebooks:</p> <ul> <li><a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">ASLFR CTC based on prev comp 1st place</a></li> <li><a href="https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset-tfrecords-mean-std" target="_blank">ASLFR preprocess dataset TFRecords mean std</a></li></ul></li> <li><p><strong>@hoyso48</strong> for his 1st place solution from a previous competition:</p> <ul> <li><a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">1st Place Solution Training</a></li></ul></li> <li><p><strong>@greysnow</strong> for the CTC on TPU notebook:</p> <ul> <li><a href="https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu" target="_blank">ASLFR CTC on TPU</a></li></ul></li> <li><p><strong>@royalacecat</strong> for the notebook on more blocks with quantization:</p> <ul> <li><a href="https://www.kaggle.com/code/royalacecat/the-deeper-the-better" target="_blank">The deeper, the better</a></li></ul></li> </ul> <h2>TL;DR</h2> <p>Our solution is based on the CTC open source notebook. Here are the key improvements we introduced (all scores are public LB):</p> <ul> <li>Only filter <strong>full NaN frames</strong> when generating tfrecord files on data preprocessing. (0.687-&gt;0.705)</li> <li>Adding more numbers of block in the model as described in <a href="https://www.kaggle.com/code/royalacecat/the-deeper-the-better" target="_blank">The deeper, the better</a>. (0.705-&gt;0.716)</li> <li><strong>Increased frame length</strong> from 128 to 256. (0.716-&gt;0.746)</li> <li><strong>Enhanced feature selection</strong> by adding more face and pose indices, increase from 96 to 128 indices. (0.746-&gt;0.763)</li> <li>Incorporated <strong>CTC loss with label smoothing</strong> for added regularization. (0.763 -&gt;0.766)</li> <li><strong>Pretrained our model</strong> on the supplemental dataset. Pre train 80 epochs on supply set then 80 epochs on main dataset. (0.766-&gt;0.777)</li> <li>Adopted the <strong>motion feature</strong> technique from a previous competition. (0.777-&gt;0.781)</li> <li>Increase training epochs on main dataset from 80 to 120 (0.781-&gt;0.782)</li> </ul> <h2>Data Preprocessing</h2> <p>Referring to <a href="https://www.kaggle.com/irohith" target="_blank">@irohith</a>'s preprocessing notebook, we made changes by adding more face and pose indices. 
We selected the new indices based on their variance — higher variance indicates more information.</p> <p>Initially, the indices were:</p> <pre><code>LIP = [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] LPOSE = [, , , , ] RPOSE = [, , , , ] POSE = LPOSE + RPOSE </code></pre> <p>We then updated them to:</p> <pre><code>LIP = [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] face_id = [,,,,,,,,,,, ,,,,,,,,,,,, ,,,,,,,,,] k face_id: k LIP: LIP.append(k) l = (LIP) LIP = LIP[:(l - l/)] LPOSE = [, , , , , , , , , , , , , , , ] RPOSE = [, , , , , , , , , , , , , , , ] POSE = LPOSE + RPOSE </code></pre> <p>filter out full Nan frame</p> <p>origin notebook:</p> <pre><code>hand = tf.concat([rhand, lhand], axis=) hand = tf.where(tf.math.is_nan(hand), , hand) mask = tf.math.not_equal(tf.reduce_sum(hand, axis=[, ]), ) </code></pre> <p>update to </p> <pre><code>hand = tf.concat([rhand, lhand,lip,rpose,lpose], axis=) hand = tf.where(tf.math.is_nan(hand), , hand) mask = tf.math.not_equal(tf.reduce_sum(hand, axis=[, ]), ) </code></pre> <p>Those new indices along with 256 frame length gives us about 0.02 LB improvement!</p> <h2>Motion feature</h2> <p>see <a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training" target="_blank">1st Place Solution Training</a></p> <h2>CTC Loss with Label Smoothing</h2> <p>To enhance our model's stability during training, we implemented a CTC loss with an added regularization component. By combining the CTC loss with a Kullback-Leibler (KL) divergence, we introduced label smoothing to the model. Smooth weight = 0.7 works best for our model</p> <pre><code> (): label_length = tf.reduce_sum(tf.cast(labels != pad_token_idx, tf.int32), axis=-) logit_length = tf.ones(tf.shape(logits)[], dtype=tf.int32) * tf.shape(logits)[] ctc_loss = tf.nn.ctc_loss( labels=labels, logits=logits, label_length=label_length, logit_length=logit_length, blank_index=pad_token_idx, logits_time_major= ) ctc_loss = tf.reduce_mean(ctc_loss) kl_inp = tf.nn.softmax(logits) kl_tar = tf.fill(tf.shape(logits), / num_classes) kldiv_loss = (tf.keras.losses.KLDivergence(tf.keras.losses.Reduction.NONE)(kl_tar, kl_inp) + tf.keras.losses.KLDivergence(tf.keras.losses.Reduction.NONE)(kl_inp, kl_tar))/ kldiv_loss = tf.reduce_mean(kldiv_loss) loss = ( - weight) * ctc_loss + weight * kldiv_loss loss </code></pre> <h2>Pre-training the Model on the Supplemental Dataset</h2> <p>During our experimentation, we found a significant insight: training solely on the <strong>supplemental dataset</strong> yielded a LB score of 0.367. This highlighted the potential value embedded within the supplemental data.</p> <p>We decided to first pre-train our model on the supplemental dataset, then loaded these pre-trained weights to further train the model on the main dataset. We tested multiple epochs: 40, 60, and 80 for pre-train. 80 epochs works best and give a improvement of 0.013 on the LB score.</p> <h2>What We Tried, But Didn't Work</h2> <ol> <li><p><strong>Augmentation</strong>:</p> <ul> <li><strong>Random Affine Transformation</strong></li> <li><strong>Random Part Removal</strong>: We tried dropping certain parts (like lip, lpose, rpose, lhand, rhand) by setting them to NaN with a 5% probability. 
</li> <li><strong>Feature Switch for Phrases</strong>: switch features for phrases within the same category.</li></ul> <p>In the 2nd place solution, <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a> mentions that some augmentations didn't improve the model with short training runs but would have an effect with longer training. This could apply to our case as well, but we haven't verified it yet.</p></li> <li><p><strong>Switching to GIC-CTC</strong>:</p> <ul> <li><strong>Reference Paper</strong>: <a href="https://ieeexplore.ieee.org/document/10094820" target="_blank">Improving CTC-Based ASR Models With Gated Interlayer Collaboration</a></li> <li>Increasing parameters led to our model exceeding the 5-hour inference time limit.</li></ul></li> </ol> <h2>What We Could Improve</h2> <ol> <li>Longer training. We only trained 120 epochs for the final submission, which is much less than other top solutions (usually 300-500 epochs). We already saw a 0.001 LB improvement from 80 to 120 epochs on the final day, but it was too late.</li> <li>Adjust the learning rate and weight decay more carefully. We simply used the cosine decay lr scheduler from the public notebook and set weight decay to 0.05.</li> <li>Utilize AWP for longer training runs.</li> </ol> <h2>Notebook:</h2> <p><a href="https://www.kaggle.com/code/wuhongrui/aslfr-gcs-path" target="_blank">https://www.kaggle.com/code/wuhongrui/aslfr-gcs-path</a><br> <a href="https://www.kaggle.com/wuhongrui/aslfr-training" target="_blank">https://www.kaggle.com/wuhongrui/aslfr-training</a><br> <a href="https://www.kaggle.com/wuhongrui/ctc-final-inference-notebook" target="_blank">https://www.kaggle.com/wuhongrui/ctc-final-inference-notebook</a></p> <p>The training notebook is written specifically for training on Google Colab. First use the gcs-path notebook to check the paths in the training notebook. Set USE_SUPPLY = True first to train on the supplemental set, then set USE_VAL = True to load the pre-trained weights and train on the main dataset.</p> <p>To use it on a Kaggle TPU, change the file paths to the corresponding Kaggle input paths, and you NEED to modify the CTC loss. Check <a href="https://www.kaggle.com/code/shlomoron/aslfr-ctc-on-tpu" target="_blank">ASLFR CTC on TPU</a></p>
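<p>To make the two-phase schedule (pre-train on the supplemental set, then fine-tune on the main set) concrete, here is a minimal Keras sketch with a toy model and toy data standing in for the real architecture and TFRecord pipelines.</p>
<pre><code>import numpy as np
import tensorflow as tf

def get_model():
    # Placeholder architecture; the real model is the CTC encoder from the notebooks above.
    inp = tf.keras.Input(shape=(8,))
    x = tf.keras.layers.Dense(16, activation="relu")(inp)
    return tf.keras.Model(inp, tf.keras.layers.Dense(1)(x))

def toy_ds(n):
    x = np.random.rand(n, 8).astype("float32")
    y = np.random.rand(n, 1).astype("float32")
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

# Phase 1: pre-train on the supplemental data, then save the weights.
model = get_model()
model.compile(optimizer="adam", loss="mse")
model.fit(toy_ds(256), epochs=2, verbose=0)
model.save_weights("pretrain_supplemental.weights.h5")

# Phase 2: rebuild the same architecture, warm-start from the saved weights,
# and continue training on the main dataset.
model = get_model()
model.compile(optimizer="adam", loss="mse")
model.load_weights("pretrain_supplemental.weights.h5")
model.fit(toy_ds(256), epochs=2, verbose=0)
</code></pre>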
ICR - Identifying Age-Related Conditions
Top 3%: Feeding wrong predictions from initial model to training set - Boosted the Scores
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>To our bad luck 🤦‍♂️, we did not select the best scoring notebook for submission, so we ended up in the top 9%.</h3> <h3>Here is the notebook that could have easily taken us to the top 3% 🔥.</h3> <p><a href="https://www.kaggle.com/code/ajaysadhu/feeding-wrong-predictions-to-training-data" target="_blank">Notebook</a></p> <h2>🏃🤗 Approach followed:</h2> <p>This is a data-bias problem; we tried different sampling approaches and finally settled on:</p> <blockquote> <ul> <li>Feature selection and transformation using a Pipeline with log/sqrt (based on tails), KNNImputer and standard scaling.</li> <li>Removed outliers based on IsolationForest.</li> <li>TensorFlow with quite a lot of Dropout layers on randomly under-sampled data.</li> <li>LightGBM with the is_unbalanced parameter set to true and class_weight "balanced" on randomly under-sampled data.</li> <li>CatBoost with custom class weights on 80% of the training data.</li> </ul> </blockquote> <h2>🧐🤓Post Processing:</h2> <blockquote> <ul> <li>For each model's predictions, we chose the threshold that gave the best confusion matrix.</li> <li>0.78 for the neural net: above this threshold we set the probability to 0.9995 and to 0.0005 otherwise, to avoid the log-loss penalty. Similarly, 0.7 for LightGBM and 0.65 for CatBoost.</li> <li>The average of all 3 models' predictions was used to create the final submission probabilities.</li> </ul> </blockquote> <h1>🤯😬 <strong>Unique idea that gave the score a boost:</strong></h1> <blockquote> <ol> <li>After the first run, we found the misclassified data points in the confusion matrix, got their indexes, and added them to the training dataframe with a 30x duplication factor (30 helped with validation loss reduction).</li> <li>So you will find the wrong_preds dataframe with the indexes of those rows in the notebook.</li> <li>With the inflated training data, the notebook was re-run and the results were submitted.</li> </ol> </blockquote> <p>This idea helped our score move up to .😎</p> <p>Do let us know what you think about our approach.</p>
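<p>The threshold-and-clip post-processing above can be written as a few lines of numpy. The helper name and the toy probabilities are mine; the thresholds and the 0.9995/0.0005 clipping values are taken from the write-up.</p>
<pre><code>import numpy as np

def sharpen(probs, threshold, hi=0.9995, lo=0.0005):
    # Push predictions to near-0/1 around a model-specific threshold.
    return np.where(np.asarray(probs, dtype=float) &gt;= threshold, hi, lo)

p_nn, p_lgbm, p_cat = np.array([0.81, 0.12]), np.array([0.75, 0.20]), np.array([0.60, 0.70])
final = (sharpen(p_nn, 0.78) + sharpen(p_lgbm, 0.70) + sharpen(p_cat, 0.65)) / 3.0
print(final)
</code></pre>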
Google - American Sign Language Fingerspelling Recognition
[12th solution] Full, reproducible solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I already wrote the gist of my solution <a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/434363" target="_blank">here</a>. Here, I will give a more comprehensive summary that follows official writeup guidelines. I also finished cleaning and arranging my work <a href="https://github.com/shlomoron/Google---American-Sign-Language-Fingerspelling-Recognition-12th-place-solution" target="_blank">in my github</a>. I made efforts to make my solution as easy to follow and reproduce as possible. I saved everything as Colab or Kaggle notebooks that you should be able to run as is. I kept all the data in public Kaggle datasets. Please let me know if I missed anything; I will do my best to fix it. If anything needs to be clarified, ask me. </p> <h3>Context section</h3> <p>This is a summary of my 12th place solution to <a href="https://www.kaggle.com/competitions/asl-fingerspelling/overview" target="_blank">Google - American Sign Language Fingerspelling Recognition competition</a>.<br> The data page is <a href="https://wwww.kaggle.com/competitions/asl-fingerspelling/data" target="_blank">here</a>.</p> <h3>Overview of the Approach</h3> <p>My model is a CTC encoder with transformers and convolution layers, the same as the one used in the previous competition <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">1st solution</a> with the CTC modification that was introduced <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">by Rohith</a>. My only modification to the model (besides changing the size) is adding positional encoding right before the first transformer layer.<br> For the features, I used a similar approach as the solution of the previous competition and took the same lips, nose, and hands landmarks. I did not use the eyes and used slightly different pose landmarks (added some). I used X and Y (without Z) and also used difference features with a skip of 1 and 2, also the same as the previous competition solution.<br> For augmentation, I used the same augmentations as the previous competition solution, with a modification to all augmentations based on modifying/deleting data in a window. The original solution used a random window, but a random window give a lower probability of modifications in the edges. I changed it to a random circular/rolling window, i.e., instead of ignoring the window section that goes out of the frames length, I rolled it to the beginning of the sequencs and modified it too.<br> For filtering the bad data, I used two-step filtering. In the first step, I filtered the data according to (number of frames with non-nan hands landmarks) &gt; 2*(length of phrase) and trained a basic model. In the second step, I used the basic model to calculate the normalized Levenshtein distance scores of all the samples and filtered them according to the score with a threshold of &gt;0.2. 
This filtering allowed me to add to my training a lot of samples that I had filtered out in the first step.<br> A 9,497,577-parameter model with the previous competition solution's augmentations trained for ~300 epochs got a public LB of 0.779. Making it larger (15,471,112 parameters) and training for 500 epochs got a public LB score of 0.79. Adding the modifications to the augmentations and the second-step filtering, and training for 1500 epochs, got public LB 0.794 and private LB 0.779 (the final, 12th-place solution).<br> For validation, I used the first 3,000 samples. Toward the end of the competition, I feared that I might be overfitting to existing signers, so I did one experiment with 1000 epochs and separated the validation fold by signer ID (I used five unique signers for validation with training on all the rest). Even after 1000 epochs, the scores on the validation fold did not suffer due to overfitting. Thus, I continued with the original validation since I preferred to train on samples from all the signers.<br> I did most of my training on Colab TPUs. They are cheap and easy to use with Kaggle datasets (no storage or egress costs).</p> <h3>Details of the submission</h3> <p>Besides what I wrote in the previous section, I had to change the maximum frame number after finishing the training to 320 since my original 340-frame model could not complete the inference in time. Of course, I had run prior experiments and expected the 340-max-frames model to complete the inference in time. I suspect that with more epochs, the complexity of the parameters is higher, leading to more complex quantization and, hence, more inference time. Of course, I had to quantize to 16-bit for my model to fit the 40MB limit. Also, I could not train it in 16-bit since I used TPUs with their bfloats. However, the quantization did not have a significant effect on my scores. If it had any impact, it was less than 0.001.<br> What did not work: mix-up augmentation and AWP, but with more time, I probably could have made them work, too. I just made some blunders at the beginning, and when I understood how to do things right, there was already not enough time to train new models.</p> <h3>Sources</h3> <p>As I wrote above, a lot of my work was based on:</p> <ol> <li><a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">previous competition 1st solution</a></li> <li><a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">Rohith's notebook</a> (in particular, the CTC modification and TFlite submission code) </li> </ol> <p>My cleaned and reproducible solution can be found <a href="https://github.com/shlomoron/Google---American-Sign-Language-Fingerspelling-Recognition-12th-place-solution" target="_blank">on my GitHub</a>.</p> <p>It was a great competition, and I enjoyed it a lot. Thank you, Google, and everyone who published code or participated in forum and code section discussions. You made this competition as approachable and enjoyable as it was.</p>
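<p>For the second-step filtering, a self-contained sketch of scoring each training sample with a normalized Levenshtein distance is shown below. The exact normalization the author used is not stated; here the score follows the competition metric style, (len(phrase) - distance) / len(phrase), and samples scoring above 0.2 are kept.</p>
<pre><code>def levenshtein(a, b):
    # Classic single-row DP edit distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev = dp[0]
        dp[0] = i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
            prev = cur
    return dp[-1]

def keep_sample(pred, target, min_score=0.2):
    score = 1.0 - levenshtein(pred, target) / max(len(target), 1)
    return score &gt; min_score

print(keep_sample("2 roses avenue", "2 rose avenue"))   # True: close prediction
print(keep_sample("a", "747 east drive"))               # False: the basic model predicts almost nothing
</code></pre>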
Image Matching Challenge 2023
3rd Place Solution - Significantly Reduced the Fluctuations caused by Randomness!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are delighted to participate in this competition and would like to express gratitude to all the Kaggle staffs and Sponsors. Congratulations to all the participants. <br> The team members include 陈鹏、陈建国、阮志伟 and 李伟. I would like to express my sincere gratitude to everyone for the excellent teamwork over the past month. I have thoroughly enjoyed working with all of you, and I am delighted to be a part of this team.</p> <h1>1 Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Fc8338666681788720689a7ead39a92ab%2F.png?generation=1686757921566293&amp;alt=media" alt="pipeline"></p> <h1>2 Main pipeline</h1> <h2>2.1 SP/SG</h2> <h3>2.1.1 Rotation</h3> <p>Rotating the image has a significant effect, since SG is lack of the rotation invariance. Therefore, for each image pair A-B, we fixed image A and rotated image B four times (0, 90, 180, 270). After performing four times SG matching, we selected the rotation angle with the most matches for the next stage. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F33aa1271502732f84ca6420949ed19a6%2F.png?generation=1686758077155244&amp;alt=media" alt="Rotation"></p> <h3>2.1.2 Resize image</h3> <p>In the early stage of the competition, we used the original image for extracting keypoints and matching. And we noticed that the mAA score in the heritage cyprus scene was only 0.1. After experiments, we found that if we scaled the images in cyprus scene to 1920 on the longer side, the mAA score was improved to 0.6. Specifically, we didn't rescale the image size after SP keypoints extraction. Finally, in our application, we scaled the image size if the longer side was larger than 1920, otherwise kept the original image size for the following SP + SG inference.</p> <h3>2.1.3 SP/SG setting</h3> <p>We increased the NMS value of SP from 3 to 8 and set the maximum number of keypoints to 4000.</p> <h2>2.2 GeoVerification(RANSAC)</h2> <p>We used USAC_MAGSAC for geometric verification with the following configuration.<br> cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 2, 0.99999, 100000)</p> <h2>2.3 Setting Camera Params(Randomness)</h2> <p>In our experiments, we wanted to eliminate the randomness effect in our final mAA, since we observed that the same notebook could result in fluctuations of approximately 0.03 in the LB. We found that the randomness come mainly from the Ceres (<a href="https://github.com/colmap/colmap/issues/404)" target="_blank">https://github.com/colmap/colmap/issues/404)</a>) optimization. 
<br> After analysis, the initial values of camera parameters, such as the focal length, had a significant impact on the stability of optimization results.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F9411c508f380417b835bb253b06eae7b%2Fwall_random.png?generation=1686758232496900&amp;alt=media" alt="randomness"><br> Since the camera focal length information could be extracted from the image EXIF data, we provided the prior camera focal length to the camera and shared the same camera settings among images captured by the same device. Although there were still some fluctuations in the metrics, the majority of the experimental results were consistent. <br> The figure below shows four submissions of the same final notebook. <strong>On the public LB, our metric fluctuates by no more than 0.004.</strong> Moreover, compared to many other teams, <strong>we do not have significant metric fluctuations between public LB and private LB.</strong> Our public LB score was amazingly close to private score. This might indicated that our method <strong>Significantly Reduced the Fluctuations caused by Randomness!</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Ffd18321289d3745de78691742d0ca28d%2Fonline_maa.png?generation=1686758332640650&amp;alt=media" alt="mAA"></p> <h2>2.4 Mapper</h2> <p>In the COLMAP reconstruction process, we revised some default parameters in incremental mapping. After first trail, if the best model register ratio was below 1, we relaxed the mapper configuration (abs_pose_min_num_inliers=15) and re-run incremental mapping.</p> <h1>3 Conclusion</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F47522f20ba1cf5b608a1806f0cf761f4%2Fconclution.png?generation=1686760502421586&amp;alt=media" alt="Conclusion"></p> <h1>4 Not fully tested</h1> <h2>4.1 Select image pairs</h2> <p>Our final solution took nearly 9 hours, since we generated the image matching pairs by exhaustive method. In order to speed up our solution, we tried Efficientnet_b7, Convnextv2_huge, and DINOv2 to extract image features and generate image matching pairs by feature similarity. In offline experiments, we selected the top N/2 most similar images (N being the total number of images) for each image to form image pairs. Compared to other methods, DINOv2 performed the best. By incorporating DINOv2 into the image pairs selection, we could control the processing time to 7 hours, and achieved 0.485 in LB (compared with exhaustive 0.507).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Fbfdb729a111a6633b27e573788e09263%2Fdinov2.png?generation=1686760519524067&amp;alt=media" alt="dinov2"></p> <h2>4.2 Pixel-Perfect Structure-from-Motion</h2> <p>PixSFM had a good performance improvement in our local validation. However, during online testing, it consistently times out even we run it on scenes less than 50 images. Moreover, it took a lot time for us to install the environment and run it on kaggle. Times out Sad! </p> <h1>5 Ideas that did not work well</h1> <h2>5.1 Different detectors and matchers</h2> <p>We tested DKMv3, GlueStick, and SILK, but neither was able to surpass SPSG. In our experiments, we observed that DKMv3 performed better than SPSG in challenging scenes, such as those with large viewpoint differences or rotations. 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Ffae472d5266e521f18f667592eaf3454%2Fdkm.png?generation=1686758763195086&amp;alt=media" alt="dkm"><br> However, the overall metrics of DKMv3 were not as good as SPSG, which may be due to the way we sampled its dense matches.</p> <h2>5.2 Crop image</h2> <p>Just like last year's winning solution, we took a crop after completing the first stage of matching. In the second stage, we performed matching again on the crop and merged all the matching results together.</p> <h2>5.3 Half precision</h2> <p>It confused our team a lot. In the Kaggle notebook, we found that half precision improved the results, but on the public LB it returned lower results.</p> <h2>5.4 Merge matches in four directions</h2> <p>In our method, we rotated the image (0, 90, 180, 270) and performed matching four times. Instead of keeping all matches from the four directions, we kept only the matches from the best angle, because keeping all matches didn't lead to any improvement in the results.</p> <h2>5.5 findHomography</h2> <p>When feature points lie on the same plane (e.g., in a wall scene) or when the camera undergoes pure rotation, the fundamental matrix degenerates. Therefore, in RANSAC, we simultaneously used findFundamentalMat and findHomography to calculate the inliers. However, this approach didn't lead to an improvement in the metrics.</p> <h2>5.6 3D Model Refinement</h2> <p>After the first trial of mapping, we tried to filter noisy 3D points with large reprojection errors or short track lengths, and then re-run bundle adjustment on the model. Empirically, this could help export better poses, but… that's life.</p>
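<p>The write-up does not show how the EXIF focal length was turned into a camera prior (Section 2.3), so the snippet below is only one plausible way to do it with Pillow; the /35.0 conversion mirrors the heuristic COLMAP itself applies to the 35mm-equivalent focal length, and the 1.2 fallback factor is likewise my assumption.</p>
<pre><code>from PIL import Image

def focal_prior_px(path, default_factor=1.2):
    img = Image.open(path)
    w, h = img.size
    exif = img.getexif()
    ifd = exif.get_ifd(0x8769)          # Exif sub-IFD
    f35 = ifd.get(41989)                # FocalLengthIn35mmFilm
    if f35:
        return float(f35) / 35.0 * max(w, h)
    return default_factor * max(w, h)   # fallback when EXIF is missing
</code></pre>
<p>Images reporting the same camera model in EXIF can then share a single camera entry in the reconstruction, which is what sharing "the same camera settings among images captured by the same device" refers to.</p>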
Google Research - Identify Contrails to Reduce Global Warming
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>13th Place Solution</h1> <p>First of all, we would like to thank the organizers and the Kaggle team for hosting the competition.</p> <h2>Overview</h2> <p>We ensembled a total of 20 models, including 2.5D and 2D models.</p> <p>We used the Nelder-Mead approach to determine the ensemble weights with a constant threshold.</p> <p>I think taking an average of the individual labels is the best choice for the score.</p> <p>The following shows Masaya's part in brief.</p> <h2>Masaya part</h2> <h3>Data preprocessing</h3> <p>I prepared 2 evaluation setups: the provided validation data only, and cross-validation.<br> This is because I also wanted to use the validation data as training data, even though it was very correlated with the Public LB.</p> <p>The 2 most important things for raising my score were:</p> <ul> <li>label data averaged over the individual labels.</li> <li>image resolution set to 512px or higher.</li> </ul> <h3>Model</h3> <ul> <li>EfficientNetb7-UNet</li> <li>EfficientNetb7-Unet++</li> <li>MiTb5-UNet</li> </ul> <h3>Training</h3> <ul> <li>amp (Sometimes the loss gets nan. Why?)</li> <li>Optimizer: AdamW<ul> <li>No weight decay</li></ul></li> <li>Scheduler: CosineAnnealing with warm up<ul> <li>initial lr: 3e-4 ~ 5e-4</li></ul></li> <li>Loss: SoftBCE</li> <li>CV strategy<ul> <li>hold-out validation</li> <li>StratifiedKFold<ul> <li>train data or validation data</li> <li>including mask</li></ul></li></ul></li> <li>Augmentation<ul> <li>hflip and vflip (worked with long epochs)</li> <li>only ShiftScaleRotate</li> <li>none</li></ul></li> </ul> <h3>Postprocess</h3> <p>The best threshold varied a lot depending on how the CV was cut, <br> so a simple average ensemble became less accurate. <br> Therefore, we fixed the threshold and used Nelder-Mead to determine the weights that maximized the validation score.</p> <h3>Did not work for me</h3> <ul> <li>Pseudo labels</li> <li>training on only positive images</li> <li>screening for positive labels</li> <li>2nd stage model with segmentation prediction of 1st stage model</li> <li>tta</li> <li>another band image</li> </ul>
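<p>Below is a minimal sketch of the Nelder-Mead weight search described above, using scipy and a global Dice objective at a fixed threshold. The objective and the weight parametrization are my assumptions; the write-up only says the threshold was fixed and the weights were optimized on validation.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def find_weights(probs_per_model, target, threshold=0.5):
    def neg_score(w):
        w = np.abs(w) / (np.abs(w).sum() + 1e-9)    # positive weights summing to 1
        blend = sum(wi * p for wi, p in zip(w, probs_per_model))
        return -dice((blend &gt; threshold).astype(float), target)
    x0 = np.ones(len(probs_per_model)) / len(probs_per_model)
    res = minimize(neg_score, x0, method="Nelder-Mead")
    return np.abs(res.x) / np.abs(res.x).sum()

# Toy data standing in for per-pixel validation probabilities of each model.
rng = np.random.default_rng(0)
target = (rng.random(1000) &gt; 0.8).astype(float)
probs = [np.clip(target * 0.7 + rng.random(1000) * 0.3, 0, 1) for _ in range(3)]
print(find_weights(probs, target))
</code></pre>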
HuBMAP - Hacking the Human Vasculature
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to HuBMAP for hosting the exciting competition, and congrats to all prize and medal winners!</p> <ul> <li>Summary<ul> <li>Mask R-CNN model trained with dataset1, 2, 3 (pseudo labels)</li></ul></li> <li>Training pipeline<ul> <li>Train models with dataset 1 (5 folds)</li> <li>Create pseudo labels for dataset 2, 3 using the above models (for each fold)</li> <li>Train models with dataset 1, 2, 3 (5 folds)</li> <li>For dataset 2, both original (dilated) annotations and pseudo labels were used</li></ul></li> <li>Model<ul> <li>Mask R-CNN (Swin Transformer backbone, HTC RoI head)</li></ul></li> <li>Augmentation<ul> <li>Random resize (768-1536), flip, Rot90, RandomBrightnessContrast, HueSaturationValue</li></ul></li> <li>TTA<ul> <li>Resize (1024, 1536), hvflip</li></ul></li> <li>Ensemble<ul> <li>Ensemble on both region proposal and RoI head</li> <li>See "ensemble detection model" part of this solution</li> <li><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/298146" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/298146</a></li></ul></li> <li>Post-processing<ul> <li>Dilation</li> <li>Remove small masks</li> <li>Remove masks that contain glomerulus regions</li></ul></li> <li>Does not work for me<ul> <li>Train with test images' pseudo labels (train in submission)</li> <li>External dataset <a href="https://data.mendeley.com/datasets/m2t49zf6xr/1" target="_blank">https://data.mendeley.com/datasets/m2t49zf6xr/1</a></li> <li>YOLOv8</li> <li>Puzzle in submission</li> <li><a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/417314" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/417314</a></li></ul></li> <li>Dilate or not dilate<ul> <li>I have found experimentally that when using only dataset1 for training, the score is higher without dilation than with dilation. Therefore, I suspected that the success of dilation was brought from noisy dataset2 and was an overfitting method to LB. Thus I have tried to minimize the difference in score with and without dilation by using pseudo labels and dilated annotation masks for dataset2.</li> <li>In the first submission, the dilation score was 0.1 better than without dilation, but in the final submission, the difference was reduced to 0.02. However, the submission with dilation was still better for both public and private LBs.</li></ul></li> </ul>
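<p>As a small companion to the post-processing bullets above, here is a hedged sketch of mask dilation and small-mask removal with OpenCV; the kernel size, iteration count, and area threshold are placeholders, not the values used in this solution.</p>
<pre><code>import cv2
import numpy as np

def postprocess_masks(masks, min_area=20, dilate_iters=1):
    kernel = np.ones((3, 3), np.uint8)
    out = []
    for m in masks:                                   # binary HxW uint8 masks
        m = cv2.dilate(m.astype(np.uint8), kernel, iterations=dilate_iters)
        if m.sum() &gt;= min_area:                       # drop tiny masks
            out.append(m)
    return out

toy = [np.zeros((16, 16), np.uint8)]
toy[0][4:10, 4:10] = 1
print(len(postprocess_masks(toy)), postprocess_masks(toy)[0].sum())
</code></pre>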
Google Research - Identify Contrails to Reduce Global Warming
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Kaggle and Google Research for hosting this competition. </p> <p>Here's our solution in brief. </p> <p><strong>Preprocessing / Dataset</strong></p> <ul> <li><p>384 and 512 Image size - Bilinear resize images </p></li> <li><p>Augmentations (Following two slightly different augmentations were part of our selected ensemble:) </p></li> </ul> <pre><code>def (): transform = albu.( [ albu.([ albu.(p = ), albu.(p = ), ], p = ), albu.(limit = , p = ), albu.(p = ), albu.( mean = (, , ), std = (, , ), max_pixel_value = ), (transpose_mask = True), ], p = , is_check_shapes=False ) return transform </code></pre> <pre><code>self.transform = { : A.Compose([ A.ShiftScaleRotate(=0.20, =0, =0.1, =0.25, =cv2.BORDER_CONSTANT, =0), A.GridDistortion(=0.25), A.HorizontalFlip(=0.25), A.VerticalFlip(=0.25), A.Resize(G.img_size), A.RandomCrop(G.crop_size), A.Normalize(=CFG.pp_params[],=CFG.pp_params[]), A.pytorch.transforms.ToTensorV2(=) ], =1.0), } </code></pre> <p><strong>Validation</strong> (Two different CV schemes in ensemble) </p> <ul> <li>4 fold CV with geographical binning i.e similar images were kept in same fold<br> Also sampling with no duplicates (excerpt from dataset class)</li> </ul> <pre><code>.dup_ids = .df.dup_id.unique() (): dup_idx = .dup_ids[idx] rows = .df[.df.dup_id == dup_idx] … </code></pre> <ul> <li>Random 5 fold split on training and given validation dataset. </li> </ul> <p><strong>Things that worked</strong></p> <ul> <li>Soft labels - Instead of using human_pixel_masks, we used average of human_individual_masks from the training set and for validation we kept human_pixel_masks. </li> <li>Loss function - We used SoftBCEWithLogitsLoss with soft labels. Also worked combination of dice loss and BCE loss </li> <li>Usual Ash color images</li> </ul> <p><strong>Models (UNet)</strong> <br> <a href="https://postimg.cc/rKVRrBfX" target="_blank"><img src="https://i.postimg.cc/CLDGWYD1/models-table.jpg" alt="models-table.jpg"></a><br> Cosine scheduler with warm up = 0.02 or 0.03 for all models</p> <p><strong>Thresholds</strong><br> A threshold change could have landed us in the money zone. After the deadline submission, threshold of 0.45 was best with pvt. 0.713x and public 0.712x and we selected 0.5 threshold. </p> <p><strong>Things that didn’t work for us</strong></p> <p>Pseudo labels (PL)</p> <p>Test time augmentation</p> <p>Training with other bands, frames</p> <p>Weighted loss</p> <p>Adjusting decoder channels </p> <p>Various other backbones </p> <p><strong>Partial Code available here</strong> <br> <a href="url" target="_blank"></a><a href="https://github.com/furu-kaggle/ICRGW" target="_blank">https://github.com/furu-kaggle/ICRGW</a></p> <hr> <p>A big thank you to my amazing team mates <a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> and <a href="https://www.kaggle.com/kunihikofurugori" target="_blank">@kunihikofurugori</a> 🙏</p>
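<p>The soft-label trick (training on the average of human_individual_masks instead of the binary human_pixel_masks) amounts to one line of numpy. The (H, W, R) layout assumed here is a simplification; the competition files store the individual masks with an extra singleton axis.</p>
<pre><code>import numpy as np

def soft_label(individual_masks):
    # Average R annotator masks into a soft target in [0, 1].
    return individual_masks.astype(np.float32).mean(axis=-1)

# Toy example: 4 annotators on a 2x2 tile.
masks = np.stack([np.array([[1, 0], [0, 0]]),
                  np.array([[1, 0], [1, 0]]),
                  np.array([[1, 0], [0, 0]]),
                  np.array([[0, 0], [0, 0]])], axis=-1)
print(soft_label(masks))   # [[0.75 0.  ] [0.25 0.  ]]
</code></pre>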
ICR - Identifying Age-Related Conditions
Ensembling works.. unless you pick the (only) overfit one
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Ensembles or simple, under-tuned models?</h1> <p>I noticed several people reported that single, under-tuned models performed very well for them on the private leaderboard of this competition, better than complex ensembles of highly tuned models. Possible reasons why this might happen are: 1) Overfitting the "highly tuned" models; 2) Some degree of survivorship bias with many simple models being submitted; and/or 3) Under-tuned models outputing less confident predictions when compared to highly tuned models, thus making them inherently less "risky" for the balanced log loss metric. </p> <p>As we know, the metric punishes confident and wrong predictions very badly, which makes the score very vulnerable to even 1 or 2 misclassified "hard cases" (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430475" target="_blank">link</a>, <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/427187" target="_blank">link</a>, <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings/notebook" target="_blank">link</a>), especially on the false negative side. Given the data drift we have with time (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/418398" target="_blank">link</a>, <a href="https://www.kaggle.com/code/raddar/icr-competition-analysis-and-findings/notebook" target="_blank">link</a>, <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">link</a>), and also the variability in number of "hard cases" present between the train, public and private splits (with the public dataset being the "easiest" and the private seemingly the "hardest"), it is perhaps not a surprise that we've seen such a dramatic <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430752" target="_blank">shake-up</a> (probably one of the <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430716" target="_blank">worst</a> in recent history of Kaggle competitions), and it is also clear that using more confident models would generally be riskier.</p> <p>However, proper ensembling should in theory mitigate both the issues of over-confidence / calibration and overfitting, and if done properly (e.g. using good models that complement each other well - or even weak learners that complement each other <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/427446#2389281" target="_blank">well</a>; using proper weights, etc), it usually is a superior solution to merely using less confident models, since it is likely that different predicted data points are also pushed in the "right" direction, i.e. towards their ground truth, by different ensemble members (e.g. by weighted averaging or stacking their predictions).</p> <p>Was this still true for this competition? 
Does using simple under-tuned models work better than ensembling well-tuned models in the particular setting we have here (very small data regime, sensitive metric, data drift over time and between train/public/private LBs)? Can we also achieve good, stable scores with diverse ensembles of well-tuned models?</p> <p>I want to share my ensembling (and overall model building) approach below, which pretty consistently outperformed single models for me, on both CV, public, and eventually private leaderboards (see snapshot below). In fact, out of a total of 11 ensemble submissions I did using this pipeline, 5 would have given silver medals (the best one would've ranked 105 on the private LB, score 0.38435), 3 are in the bronze area, and only 3 are out of the medal area. Even so, I managed to pick as final submissions 2 out-of-medal ensembles - both variants of the same ensemble actually, which was quite unlucky, albeit defensible in terms of CV + public LB scores (see below).<br> On the other hand, for single models, I have only 2 (xgboost) models in the medal area (one silver and one bronze), both scoring worse than many ensembles. See the screenshot below for the top 8 submissions, out of which 6 are ensembles. Admittedly, very few of my single models were particularly "under-tuned" (except of course the TabPFN ones), which might be a key difference for the purposes of this comparison.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3220715%2F262174be2909788750871680f0725d72%2Fsubs.png?generation=1692098551418633&amp;alt=media" alt=""></p> <h1>Model creation and ensembling pipeline:</h1> <p>Model-wise, my ensembles were a mix of:</p> <ul> <li>Models with different feature sets (of size 39, 38 and 35 features). Each feature set was derived using different FS approaches and selected by cross validation. I did not use any models with the complete feature set, and the best performing models used 39 features (for both tree models and TabPFN).</li> <li>Models with different targets (I used Alpha and Class only in final ensembles; Alpha performed better than Class overall)</li> <li>Models with different hyperparameters (found through Bayesian optimization) </li> <li>LGBM, Xgboost and TabPFN models</li> <li>Initial selection of models to use for ensembling was done using individual CV scores and techniques like correlation analysis of predictions. Each model configuration (type + features + targets + hyperparams combo) was then bagged (I used either 10- or 20-fold splits)</li> <li>For predicting on the test set, some of the "extremely bad" folds (e.g. CV score &gt; 0.4) were kicked out of the ensemble, as "bad folds". As we know, this usually occurred when "hard examples" made it to the validation rather than the training split of that fold. This step is probably debatable, but my thinking was that 1) I already mitigated the risk of overfitting to only good folds to some degree by using all of the folds (no matter how bad) when doing initial model selection / hyperparam tuning to find the individual models in the first place. 2) We arguably don't want folds that are missing important "hard points" from the training sets and provably (badly) underperform. 3) While the threshold couldn't be tuned by CV (obviously), it was set high enough such that hopefully only extremely bad folds were eliminated (in general, it caused all of the TabPFN|LightGBM|XGboost models on that fold to be dropped, but not always). 
</li> <li>We then ensemble the remaining models via either stacking (using logistic regression as the meta model), or per-fold minimization of scores assuming linear weights.</li> <li>The best ensemble (score 0.38435, would have gotten silver if selected, rank 105 on the private LB) used 20-fold bagging, and the above pipeline selected (via either filtering or weights being set close to 0) 26 / 120 bagged models, with (aggregated) model weights of: {'TabPFN': 0.099, 'xgboost_0': 0.249, 'xgboost_1': 0.125, 'xgboost_2': 0.311, 'lgbm_0': 0.178, 'lgbm_1': 0.162}, where xgboost_* and lgbm_* are different configs of Xgboost and LGBM. To compute the above aggregated weights, we add weights from all surviving folds of that particular model type and re-normalize. The target for all models in this ensemble was Alpha.</li> <li>The ensemble I actually submitted (unfortunately) was a supposedly more "robust" variation of the above, where I added an extra model configuration with target==Class, weighted binary cross entropy loss and model type LGBM to the ensemble, and removed some other models. Optimal weights for this ensemble were {'tabpfn_0': 0.092, 'lgbm_0': 0.268, 'lgbm_1': 0.083, 'lgbm_2': 0.550, 'xgboost_0': 0.100}, where 'lgbm_2' is the new model. Notice that a lot of weight was placed on the new binary target model. This new ensemble outperformed all others in both CV and public LB. CDFs of the predictions for each of the component models, as well as the final ensemble, are plotted below. Note that, in line with <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>'s findings <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/426536" target="_blank">here</a>, TabPFN (blue line) performs better (or at least is more confident) for predicting negatives, whereas most tree models are better with positives, so they complement each other quite nicely. Btw, I found this to be even more the case after feature elimination to the 39-feature set. Also note that the resulting ensemble (light grey line) gets the "best of both worlds", i.e. is quite confident - much more so than any individual model! - in both the positive and negative regions (thus also quite risky when encountering hard cases!)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3220715%2F4862b88e863e0f59dc61fe8ec64054a8%2Fcdfs_sub2.png?generation=1692096814224138&amp;alt=media" alt="Alt text"></p> <p>A few other details on modelling and other things I have tried (with variable success): </p> <h1>Handling imbalance / Loss:</h1> <p>Like many people, I used weighted binary loss for Class models and weighted softmax for Alpha models, with weights given by inverse class frequencies. For the Class models, I also attempted to use a custom-implemented focal loss for LGBM, and tune the alpha and gamma parameters, but surprisingly this consistently underperformed basic weighted cross entropy in CV. I did not use any undersampling, oversampling or SMOTE-like approaches (all felt risky given the dataset size at hand).</p> <h1>CV fold splitting:</h1> <p>I eventually used stratified splits on {Class, Alpha, Beta, Gamma} for most models. I also tried a more clever approach whereby I assign "hardness classes" to individual points and additionally stratify based on hardness class. 
To compute hardness classes, unlike <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>'s very interesting approach <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430781" target="_blank">here</a>, based on an ensemble of supervised models, I took an unsupervised approach: the hardness score of each point is calculated as the residual of a kNN classifier (n_neighbors=20), applied over the (UMAP) dimensionality-reduced space of the points, using an input space of 35 selected features. Then, points are binned into either 2 or 3 classes based on their hardness score. This folding scheme performed worse in CV (but this was expected, as all folds are now forced to validate on hard examples). However, it also performed (slightly) worse on the public LB, for all of TabPFN|LightGBM|Xgboost, thus I decided not to use it for ensembles. However, on the private LB, the best solo (non-ensemble) model I have turns out to be an xgboost folded with this approach and N=2 hardness classes. So perhaps there was something to it (in retrospect, we know that the public LB has much fewer hard cases than the private LB, so perhaps this is expected).</p> <h1>TabPFN optimization:</h1> <p>As we know, TabPFN cannot be tuned much, so what I tried is the following approach: rather than feed it the complete dataset as training data (within a fold), only restrict the training data to the K nearest neighbors of each validation (or test) example. Note that this way we have to predict all test/validation points one by one, each with a different training set. Nearest neighbors were calculated in a dimensionality-reduced space, similarly to the hardness classes. This approach performed worse in both CV, public, and ultimately on private LB compared to including all examples, at least for the feature sets I tried.</p> <h1>Using Epsilon:</h1> <p>As we have seen, there is clearly time <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/418398" target="_blank">drift</a> in this dataset, and as <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> showed <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/430963" target="_blank">here</a> in post-analysis, there was considerable benefit to using Epsilon for stratified folding, and also to dropping NaN Epsilon rows (control cohort?). In terms of actually using Epsilon as a feature, my plan was to build a time-independent model as the main submission, with an Epsilon-based model as a backup. To me, using Epsilon as a feature felt quite risky because, although there is a clear trend over time for targets in the training set (so adding Epsilon as a feature might help the model learn it), it is unclear with so little data (essentially only 3 years if we ignore the very sparse data before 2017, and only 617 data points overall) if this trend will continue, reverse, if it was due to COVID, to patient population sampling, treatment interventions, whether there is seasonality for some of the targets etc. 
Similarly, it is hard IMO with this amount of data to rely on or forecast any trends regarding changes in feature distributions.<br> I thus opted for a "safer" version of using Epsilon, generated from the exact same ensembling pipeline above, where I only included the year as a feature along the other selected features. I felt using anything more granular than that (e.g. month), would be quite prone to overfitting the models (especially since we don't have that granularity level for Epsilon in the test sets). Year in the test set was set to Epsilon.year.max() + 1. That ensemble actually performed OK on both the private LB (0.41) and public LB (0.16), although I had several ensembles <em>without</em> Epsilon.year outperforming it on both.<br> I also briefly considered using a time-based validation scheme (similar to what <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> demonstrated <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a>), but eventually decided against it, mainly because I was wary to draw any conclusions about trends, time periods and "data regimes" based on the very limited data and time span we have available.<br> In the end, I was obviously proven wrong on this, and it seems like using Epsilon in CV was a game changer.</p> <h1>Picking submissions</h1> <p>Since the last ensemble I described (including the Class target model) was beating all other ensembles on both CV and public LB, and I had achieved reasonable CV-public LB correlation with ensemble submissions overall, I decided to give up the original plan (which was to submit a no-Epsilon and an Epsilon-based ensemble), and to go for a "go big or go home" approach. So I submitted 2 variants of the same ensemble, the exact one above and a variant of it which was slightly more confident / risky in the positive area of the predictions CDF, and less confident in the (more dangerous) negative area. Obviously this was the wrong call, and, as mentioned above, had I picked the Epsilon ensemble (or almost any other ensemble I had really), the submission would have been in the medal area (likely silver). In any case, picking a less confident model overall as a backup submission would probably have been a much better bet.</p> <h1>Conclusions and directions of further improvement</h1> <p>Ensembling overall gave good and stable results in my case, consistently better than the component solo models. Modulo the submission picking blunder above, ensembling achieved good scores and was even relatively stable in terms of CV-public (and eventually, to some degree -private LB) correlation, which was notoriously difficult in this competition.</p> <p>However, ensembling is obviously still vulnerable to overfitting, especially in the very low data regime we are in. In my particular case, I think the newly added Class-based model was actually overfit, and since the ensembling pipeline gave that model a very high weight (0.55), the entire ensemble became overfit (at least this is my hypothesis, I did not verify it yet via late submissions).<br> Another source of overfitting would be in the ensemble weights. Had we had more data, we could have used separate holdout sets for the validation and calibration of the stacked meta model for instance to mitigate this. 
But I felt there was barely enough data to reliably train the base models in this case.</p> <p>Of course, these particular ensemble submissions would still only get silver medals and the base models themselves can be further improved. For instance, with hindsight, I probably wouldn't shy away as much from using Epsilon and actually use some form of time-based validation scheme, where we train and validate on different periods / "regimes" of data (something along the lines of what <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> did <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/431067" target="_blank">here</a>). I was discouraged in taking this path by the fact that we had very little data and a short time span (thus any trends or "data regimes" observed seemed questionable IMO), and also by the fact that the organizers explicitly excluded Epsilon from the test sets (thus building models without making use of it seemed to be suggested). However, it is ultimately a matter of balancing capturing actual data drift signal in the model versus simply noise, and it appears, based on final results, there was clearly enough signal to make using Epsilon worth it .<br> Also, since we did have access to features (but not ground truth labels) for the test dataset upon submission, perhaps other more involved approaches using time were possible, for instance estimating similarity between the test dataset and particular time periods in our training set (using e.g KL divergences, distances between embeddings etc) and using features built upon these similar time periods.<br> Another direction of improvement would probably be recalibrating confident predictions on the test set to account for uncertainty due to data drifts. In general, if this was a real life model deployment, we would probably recalibrate and/or retrain our models with new data to account for drifts. In this Kaggle competition setting, we cannot do that (no ground truth labels for newer data - unless we probe the public LB :) ). Lacking that option, if we can somehow build an estimate of the uncertainty we should expect for our models on the <em>private</em> test set given the observed data drift (again just based on the test set features only, not the labels), we can perhaps find ways to recalibrate the predictions output by our well-tuned models / ensembles to be less confident on the test set, while still maintaining good performance overall. There are several approaches to recalibration possible, e.g. some basic temperature scaling on the output probabilities, or perhaps some more involved approaches taking into account the probability of "hard cases". But the harder problem is estimating calibration parameters like the optimal temperature, or other calibration parameters if using other methods - since again, we do not have access to a labeled calibration set from the time period of the private test set.</p>
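<p>To make the recalibration idea above concrete, here is a minimal sketch of temperature scaling applied directly to predicted probabilities. This is illustrative only, not code from the actual pipeline; the temperature T is a hypothetical parameter that would normally be fit on a labeled calibration set, which, as discussed, we do not have for the private test period.</p> <pre><code>import numpy as np

def temperature_scale(p, T=2.0):
    # Convert probabilities to logits, soften them by T, and map back.
    # T above 1 pulls predictions toward 0.5 (less confident); T=1 is a no-op.
    p = np.clip(p, 1e-15, 1 - 1e-15)
    logits = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-logits / T))
</code></pre>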
Google Research - Identify Contrails to Reduce Global Warming
15th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for hosting the competition. I also appreciate my teammates' cooperation.</p> <h1>Summary</h1> <p>Our solution has nothing special; it's just an ensemble of 4 EfficientNetV2 models with different image sizes.</p> <ul> <li>2x l with 768x768 image size (different seeds)</li> <li>1x l with 1024x1024 image size</li> <li>1x s with 1536x1536 image size</li> </ul> <p>The first 3 models give similar scores of around 0.690 ~ 0.692 on <code>validation</code> folder images, and the last gives 0.685. The ensemble score is 0.6936.</p> <h1>What worked</h1> <ul> <li>Augmentations<ul> <li>HV Flip</li> <li>Rotate</li> <li>Random scale then crop</li></ul></li> <li>TTA<ul> <li>HV Flip</li> <li>Rotate</li></ul></li> <li>Setting <code>"drop_path_rate": 0.2</code> and <code>"drop_rate": 0.2</code></li> <li>Adding time steps 3 and 5 as pseudo labels</li> <li>Focal loss</li> </ul> <p>But I guess the improvements from the above are pretty small compared with addressing label misalignment and using the individual human-annotated masks as new labels, as mentioned in other discussions (I plan to experiment with this later).</p> <h1>What did not work</h1> <ul> <li>Other augmentations<ul> <li>ColorJitter</li> <li>HueSaturationValue</li> <li>RandomBrightnessContrast</li> <li>RandomGamma</li> <li>RandomFog</li> <li>RandomShadow</li> <li>CoarseDropout</li> <li>MixUp</li></ul></li> <li>Adding more time steps as pseudo labels</li> <li>Adding pseudo label rounds</li> <li>I focused heavily on training 2.5D and 3D models, but nothing worked at all.<ul> <li>Just a 3D model (e.g. Resnet3D)</li> <li>3D encoder + 2.5D decoder</li> <li>Inputting other time frames' predictions as additional channels</li> <li>Stacking 2 Unets: the first predicts segmentation masks for multiple time frames, the second takes those masks as input by stacking them along the channel dimension</li></ul></li> </ul>
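<p>As an illustration of the flip/rotate TTA listed under "What worked", here is a minimal PyTorch sketch (illustrative only, not the actual competition code; the exact flip/rotation combination used may differ): the de-augmented sigmoid outputs are averaged over horizontal/vertical flips and 90-degree rotations.</p> <pre><code>import torch

def tta_predict(model, x):
    # x: (N, C, H, W) batch; returns the averaged probability map.
    preds = []
    for k in range(4):                          # 0/90/180/270 degree rotations
        xr = torch.rot90(x, k, dims=(2, 3))
        for flip_dim in (None, 3, 2):           # no flip, horizontal, vertical
            xf = xr if flip_dim is None else torch.flip(xr, dims=(flip_dim,))
            p = torch.sigmoid(model(xf))
            if flip_dim is not None:            # undo the flip on the prediction
                p = torch.flip(p, dims=(flip_dim,))
            preds.append(torch.rot90(p, -k, dims=(2, 3)))  # undo the rotation
    return torch.stack(preds).mean(dim=0)
</code></pre>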
ICR - Identifying Age-Related Conditions
Shakeup was expected: trust your work and CV! (one submission - silver medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I earned a silver medal with only one submission, and there are some ideas behind it that I want to share. Having so many good scores and so much probing on the Public LB was not a good motivation for me at all, but still, with the pieces of work done so far, I decided to put the pieces together, learn something by testing some ideas, and have some fun. I had a submission ready just two hours before the deadline. It was definitely a good choice to participate <strong>(Rank=201, CV=0.227, Public LB=0.21, Private LB=0.39)</strong>. </p> <p>During my work on the competition, my goal was to get a robust model; a huge shakeup was expected (new data coming with different conditions): CV works, and with so much probing, there was no way to guide personal submissions with others' results. The following is a summary of things I did: </p> <p><strong>1. Models:</strong> Many models (starting with linear discriminant) might be good candidates when robust models are needed: combined with XGboost, nice results could be obtained. I tried some simple ones whose R packages were already installed by Kaggle (few). To gain robustness, I explored the CV asynchronously.<br> <strong>2. Additional information:</strong> Use of the greeks dataset<br> a. CV folds stratification with greeks subgroups<br> b. Targeting Alpha / split Alpha instead of Class<br> c. Dealing with imbalance by splitting Class 0 with greeks subgroups<br> <strong>3. Missing values:</strong> Deciding how to handle missing values, particularly trying to gain some insight into the nature of the 60 missing values from BQ and EL; for me, the only ones of serious concern. Since I reached no conclusion, I used the preProcess tool from the caret package, which uses bagged trees for imputation.<br> <strong>4. Influential observations:</strong> Applying the empirical distribution function Fn(x) to preprocess the RHS. <br> <strong>5. Objective and metric:</strong> I did not build one for the multiclass classification cases.<br> <strong>6. Target imbalance:</strong> See the splendid post by Chris Deotte (<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">here</a>)</p> <p><strong>Submission</strong><br> The submission was a weighted blend of four settings: </p> <ul> <li>Two models (XGboost and SVM with radial basis) </li> <li>Two transformed datasets (one with standard scaling, the other applying the empirical Fn(x)). </li> </ul> <p><strong>CV</strong><br> Five stratified CV folds (16 strata) with all data. I could only do a test with one holdout sample (to really test blending and data transformations).<br> The interesting part, at least for me, is the target. The purpose of it was to (partially) handle imbalance. 
There are 9 classes in this experiment:</p> <table> <thead> <tr> <th>Target</th> <th>A</th> <th>B</th> <th>D</th> <th>G</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>51</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>97</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>2</td> <td>125</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>3</td> <td>117</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>4</td> <td>55</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>5</td> <td>64</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>6</td> <td>0</td> <td>61</td> <td>0</td> <td>0</td> </tr> <tr> <td>7</td> <td>0</td> <td>0</td> <td>18</td> <td>0</td> </tr> <tr> <td>8</td> <td>0</td> <td>0</td> <td>0</td> <td>29</td> </tr> </tbody> </table> <p>After prediction, the columns corresponding to each <strong>Class</strong> must be collapsed. <br> Note that there are 6 for class_0 and 3 for class_1. A good choice (why?) is the mean of class_0 and the sum of class_1. Blending of models was done via an exhaustive grid search.</p> <p>In summary, this was my submission; I hope you liked it.</p>
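<p>A minimal sketch of the collapse step described above (illustrative only, not the actual submission code): assuming the 9 columns are ordered as in the table, the six Class-0 strata are averaged and the three Class-1 strata are summed; the final renormalization so the two results sum to 1 is an assumption, since the post does not spell that step out.</p> <pre><code>import numpy as np

def collapse_to_binary(proba9):
    # proba9: (n_samples, 9) predicted probabilities for the 9 strata above.
    p0 = proba9[:, :6].mean(axis=1)   # mean over the six Class-0 strata
    p1 = proba9[:, 6:].sum(axis=1)    # sum over the three Class-1 strata
    total = p0 + p1
    return np.stack([p0 / total, p1 / total], axis=1)
</code></pre>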
Predict Student Performance from Game Play
240th place solution. Or, how difficult it is to earn your first Kaggle medal.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, Kagglers!<br> This is my 240th place solution (which earned a bronze medal on the public leaderboard).<br> It's a simple blend of CatBoost and XGBoost with hyperparameter tuning.</p> <p><a href="https://www.kaggle.com/code/lyalindmitriy/catboost-and-xgb-blend" target="_blank">https://www.kaggle.com/code/lyalindmitriy/catboost-and-xgb-blend</a></p> <p>P.S. Thanks to the organizers for such a competition!<br> P.P.S. How do you feel, guys, when you work hard, carefully select final submissions, but then realize that a silver medal was among your other submissions? Arrgh!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11108995%2F6aa114e1f5b5e6b6a9daf0df7e1d1aad%2F2023-06-29%20%2020.30.24.png?generation=1688041861940054&amp;alt=media" alt="silver"></p>
Google - American Sign Language Fingerspelling Recognition
[The 11th place] Shallow encoder-decoder model also works
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Acknowledgments for the great public notebooks:<br> [1] <a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-eda-preprocessing-dataset" target="_blank">MARK WIJKHUIZEN's data preprocessing notebook </a><br> [2] <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">HOYSO48's first place notebook from ISLR</a></p> <p>Reproducible Codes:</p> <ol> <li><a href="https://www.kaggle.com/code/baohaoliao/aslfr-data-preprocessing" target="_blank">Data preprocessing notebook</a></li> <li><a href="https://github.com/BaohaoLiao/aslfrv4" target="_blank">Training codes</a>: I need some time to clean it up and will walk you through it later; it is linked here in case someone can't wait to check it.</li> </ol> <h1>Data preprocessing</h1> <p>My data preprocessing and split notebook is based on [1]. The differences are:</p> <ol> <li>I used the same landmarks (only x and y) from [2].</li> <li>I split the data based on unique phrases into 10 folds, and used the first fold as the validation set. There are no shared phrases in different folds. For data splitting, I found unique phrase split = random split &gt; participant id split.</li> <li>I preprocessed the supplementary data in the same way.</li> </ol> <h1>Data augmentation (<a href="https://github.com/BaohaoLiao/aslfrv4/blob/main/mask/datav2_distributed.py" target="_blank">file</a>)</h1> <p>I almost used the same data augmentation and hyper-parameters as [2]. The main differences are:</p> <ol> <li>I randomly concatenated two samples with a probability of 50%. I found it useful for reducing overfitting.</li> <li>For the input to the decoder, I randomly (20%) replaced some ground-truth characters with random characters from the vocab. I apply this trick to avoid exposure bias.</li> </ol> <h1>Model architecture (<a href="https://github.com/BaohaoLiao/aslfrv4/blob/main/auto_ctc/conformerencoder_transformerdecoder_mask_droppath_ctc.py" target="_blank">file</a>)</h1> <p>I use Conformer as the encoder and the vanilla Transformer decoder. Conformer is better than the vanilla Transformer encoder, since it focuses on both local and global relations. The main highlights here are:</p> <ol> <li><p>I use both CTC loss and cross-entropy loss. After the encoder, the CTC loss is applied to the encoder output, and the cross-entropy loss is applied to the decoder output. The interpolation weights for these two losses are 0.2 for CTC loss and 0.8 for cross-entropy loss. The benefits of this setting are: (1) One trained model can be used in two ways, either non-autoregressive generation with the encoder or autoregressive generation with the decoder. In the end, both generations achieved the same 0.791 public LB; (2) One loss can act as a regularization term for the other. When you only want to do non-autoregressive generation, you can throw away the decoder parameters, which allows you to use more parameters during training.</p></li> <li><p>The hyper-parameters are: 5-layer encoder with hidden_dim=384, mlp_dim=1024, conv_dim=768, num_heads=6, 3-layer decoder with hidden_dim=256, mlp_dim=512, num_heads=4. 
</p></li> </ol> <h1>Three-phase training (<a href="https://github.com/BaohaoLiao/aslfrv4/blob/main/train_cnnencoder_transformerdecoder_mask_ctc_distributed.py" target="_blank">file0</a> and <a href="https://github.com/BaohaoLiao/aslfrv4/blob/main/train_cnnencoder_transformerdecoder_mask_ctc_awp_distributed.py" target="_blank">file1</a>)</h1> <p>All training uses AdamW, an inverse square root schedule, weight decay=0.001 and max norm=5, lr=5e-4, batch size=512, warmup ratio=0.2, label smoothing=0.1, frame_length=368.</p> <ol> <li>For the first phase, I only train on the training set and exclude the validation set, with #epoch=100.</li> <li>For the second phase, I include all training data and supplemental data for another 150 epochs.</li> <li>For the third phase, I use AWP with awp_delta=0.2 and awp_eps=0 on all training data for 300 epochs. AWP is good for generalization, better than R-Drop in my case.</li> </ol> <h1>What doesn't work</h1> <ol> <li>BPE: I tried to use subwords rather than characters, but it didn't work.</li> <li>Too wide but shallow model: For the abovementioned model, I quantize the model in FP16. The model has about 18M parameters (37MB). We can use about 40M parameters if we use dynamic quantization. But dynamic quantization slows down the inference speed. For your reference, my FP16 CTC model runs in 2h30m, while the dynamically quantized one runs about 4h. There is only a 0.001 LB performance drop for the dynamically quantized one. Nothing drops from FP32 to FP16. In the last week, I tried to use an 8-layer encoder and a 4-layer decoder (same dimensions as above). But I had to use frame_length=100 to keep the inference time within 5 hours. The results were not good. I think there might be some overfitting with such a large model.</li> <li>Ensemble between encoder and decoder: My model can do both autoregressive and non-autoregressive generation at the same time. I first generate the output with the encoder, and average the probabilities of the autoregressive and non-autoregressive output at each time frame. But the result stays the same.</li> <li>Other data: I used <a href="https://home.ttic.edu/~klivescu/ChicagoFSWild.htm#download" target="_blank">ChicagoFSWild and ChicagoFSWild+</a>; it doesn't help. Domain shift is the main reason.</li> </ol> <h1>What could make my model better</h1> <ol> <li>Inspired by other top-rank methods, including pose landmarks and the z coordinate might be better.</li> <li>Use a deeper but narrower model.</li> </ol>
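<p>A minimal PyTorch-style sketch of the joint CTC + cross-entropy objective described above (the actual training code linked earlier is TensorFlow-based; tensor shapes, argument names and the use of separate CTC/decoder targets here are illustrative assumptions):</p> <pre><code>import torch.nn.functional as F

def joint_loss(enc_logits, dec_logits, ctc_targets, dec_targets,
               input_lengths, target_lengths, blank=0, ctc_weight=0.2):
    # CTC on the encoder output: F.ctc_loss expects (T, N, C) log-probabilities.
    log_probs = enc_logits.log_softmax(-1).transpose(0, 1)
    ctc = F.ctc_loss(log_probs, ctc_targets, input_lengths, target_lengths,
                     blank=blank, zero_infinity=True)
    # Cross-entropy on the decoder output (teacher forcing), with label smoothing.
    ce = F.cross_entropy(dec_logits.reshape(-1, dec_logits.size(-1)),
                         dec_targets.reshape(-1), label_smoothing=0.1)
    # 0.2 / 0.8 interpolation as described in the write-up.
    return ctc_weight * ctc + (1.0 - ctc_weight) * ce
</code></pre>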
ICR - Identifying Age-Related Conditions
Top 20% (not far from Bronze) using automl
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi,</p> <p>Now it all comes to an end. It has been an interesting competition for sure. My personal mission was to use my automl library BlueCast, see how it feels for an end user, and improve it along the way.</p> <p>I finished top 20% (score of 0.45… Bronze ranks ended at 0.43…😅…so close), but I am still happy about the result.</p> <p>In the <a href="https://www.kaggle.com/code/thomasmeiner/icr-bluecast-automl-almost-bronze-ranks?scriptVersionId=139135413" target="_blank">BlueCast notebook</a> you can see its EDA capabilities and its open API (here I tested using a custom model and custom preprocessing, but commented out using the model).</p> <p>I hope you enjoy the notebook and maybe another time you can try it out yourself. :-) </p>
HuBMAP - Hacking the Human Vasculature
3rd Place Solution - How to properly utilise noisy annotations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, sorry for taking a bit longer to publish the complete solution. Thanks to the Kaggle team and HubMap team for hosting the competition. In this post I'm going to explain my winning solution in detail. Again, I am really happy that I became one of the youngest GrandMasters after this competition. This competition also taught me how important it is to keep yourself updated and try out the recent research in the field. </p> <p>Also, we have already made our inference notebook along with model weights public: you may visit it with this <a href="https://www.kaggle.com/code/nischaydnk/cv-wala-mega-ensemble-hubmap-2023" target="_blank">notebook link</a></p> <p>All training and data preprocessing code is also made public (In Progress): <a href="https://github.com/Nischaydnk/HubMap-2023-3rd-Place-Solution" target="_blank">https://github.com/Nischaydnk/HubMap-2023-3rd-Place-Solution</a> </p> <p>You may find most of the configs I used in the all_configs folder. I will clean up the repo in the upcoming days.</p> <p>Here you can refer to the COCO annotations used for training the model: <a href="https://www.kaggle.com/datasets/nischaydnk/hubmap-coco-datasets" target="_blank">dataset link</a></p> <h2>Overview</h2> <p><strong>The winning solution consists of:</strong></p> <p><strong>5</strong> MMdet based models with different architectures.<br> <strong>2x ViT-Adapter-L</strong> (<a href="https://github.com/czczup/ViT-Adapter/tree/main/detection" target="_blank">https://github.com/czczup/ViT-Adapter/tree/main/detection</a>)<br> <strong>1x CBNetV2 Base</strong> (<a href="https://github.com/VDIGPKU/CBNetV2" target="_blank">https://github.com/VDIGPKU/CBNetV2</a>)<br> <strong>1x Detectors ResNeXt-101-32x4d</strong> (<a href="https://github.com/joe-siyuan-qiao/DetectoRS" target="_blank">https://github.com/joe-siyuan-qiao/DetectoRS</a>)<br> <strong>1x Detectors Resnet 50</strong> </p> <p>I also had a few ViT-Adapter based single models which could have placed me at 1st/2nd rank, but I didn't select them. No regrets :))</p> <h2>Image Data Used</h2> <p>I only used competition data for training models. <em>No external image data was used.</em> </p> <h2>How to use dataset 2??</h2> <p>Making the best use of dataset 2 was one of the key things to figure out in the competition. For me, a multi-stage approach turned out to give the highest boost. Basically, during stage 1, a COCO-pretrained model is loaded and pretrained on all the WSIs present in the noisy annotations (dataset 2) for a small number of epochs (~10), using a really high learning rate (0.02+), a cosine scheduler with minimum lr around 0.01, and light augmentations. </p> <p>In stage 2, we load the pretrained model from stage 1 and fine-tune it on dataset 1 with a higher number of epochs (15-25), heavy augmentations, higher image resolution (for some models), a slightly lower starting learning rate and a minimum LR of 1e-7. 
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fdfbae6a6f634c755d9ea39ca2daa9751%2FScreenshot%202023-08-09%20at%203.36.59%20AM.png?generation=1691532480119818&amp;alt=media" alt=""></p> <p><em>I have used Pseudo labels using dataset 3 in training few models of final ensemble solution, although I didn't find any boost using them in the leaderboard scores, I will still talk about it as they were used in final solution</em></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Facf9ee081989a328c9f150458b4fce95%2FScreenshot%202023-08-09%20at%204.01.34%20AM.png?generation=1691533922010932&amp;alt=media" alt=""></p> <p>Using multistage approach gave around consistent 2-3% boost on validation and 4-6% improvement in leaderboard scores which is quite huge. The gap between cv &amp; lb scores was quite obvious as models were now learning on WSI 3 &amp; 4. Also, I never had to worry about using dilation or not as my later stage was just fine-tuned on dataset 1 (noise free annotations), so dilation doesn't help if applied directly on the masks.</p> <h1>Models Summary</h1> <h2>Vit Adapter: These models were published in recent ICLR 2023 &amp; turned out to be highest scoring architectures.</h2> <ul> <li>Pretrained coco weights were used. </li> <li>1400 x 1400 Image size (dataset fold-1 with pseudo threshold 0.5) &amp; (full data dataset 1 with pseudo threshold 0.6)</li> <li>Loss fnc: Mask Head loss multiplied by 2x in decoder.</li> <li>1200 x 1200 Image size used in stage 1.</li> <li>Cosine Scheduler with warmup were used.</li> <li>SGD optimizer for fold 1 model &amp; AdamW for full data model</li> <li>Higher Image Size + Multi Scale Inference (1600x1600, 1400x1400)</li> </ul> <p><strong>Best Public Leaderboard single model: 0.600</strong><br> <strong>Best Private Leaderboard single model: 0.589</strong></p> <h2>CBNetV2: Another popular set of architectures based on Swin transformers.</h2> <ul> <li>Pretrained coco weights were used. </li> <li>1600 x 1600 Image size (dataset 1 fold-5 without Pseudo)</li> <li>1400 x 1400 Image size used in stage 1.</li> <li>Cosine Scheduler with warmup were used.</li> <li>Higher Image Size during Inference (2048x2048)</li> <li>SGD optimizer </li> </ul> <p><strong>Best Public Leaderboard single model: 0.567</strong></p> <h2>Detectors HTC based models: CNN based encoders for more diversity</h2> <ul> <li>Pretrained coco weights were used. </li> <li>2048 x 2048 image size (Resnet50 fold 1 w/ pseudo threshold 0.5 , Resnext101d without pseudo)</li> <li>Loss fnc: Mask Head loss 4x for Resnext101, 1x for Resnet50 </li> <li>Cosine Scheduler with warmup were used.</li> <li>SGD optimizer </li> </ul> <p><strong>Public Leaderboard single model: 0.573 ( resnext 101) , 0.558 (resnet50)</strong></p> <h2><strong>Techniques which provided consistent boost:</strong></h2> <ol> <li>Multi Stage Training</li> <li>Flip based Test Time Augmentation</li> <li>Higher weights to Mask head in HTC based models</li> <li>SGD optimizer </li> <li>Weighted Box Fusion for Ensemble</li> <li>Post Processing</li> </ol> <h2>Post Processing &amp; Ensemble</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F0fb5299755c5bfa0d6da43263a8be223%2FScreenshot%202023-08-09%20at%204.38.49%20AM.png?generation=1691536182102653&amp;alt=media" alt=""></p> <p>As mentioned earlier, I used WBF to do ensemble. 
To increase the diversity, I kept NMS for TTA and WBF for ensemble. Also, using both CNN / Transformer based encoders helped in increasing higher diversity and hence more impactful ensemble. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F764ea7995470a76b07549107c4b531a2%2FScreenshot%202023-08-09%20at%204.26.47%20AM.png?generation=1691536206638981&amp;alt=media" alt=""></p> <p>After the ensemble, I think some of my mask predictions got a little distorted. Therefore, I applied erosion followed by single iteration of dilation. This Post-processing gave me a decent amount of boost in both cross validation score as well as on leaderboard (+ 0.005)</p> <h2>Light Augmentations</h2> <pre><code>dict( =, direction=[, ], =0.5), dict( =, policies=[[{ : , : 0.4, : 0 }], [{ : , : 0.4, : 5 }], [{ : , : 1.0, : 6 }, { : }]]), dict( =, transforms=[ dict( =, =0.0625, =0.15, =15, =0.4) ], =dict( =, =, label_fields=[], =0.0, =) </code></pre> <h2>Heavy Augmentations</h2> <pre><code>dict( =, direction=[, ], =0.5), dict( =, policies=[[{ : , : 0.4, : 0 }], [{ : , : 0.4, : 5 }], [{ : , : 0.6, : 10 }, { : , : 0.6, : 3 }], [{ : , : 0.6, : 10 }, { : , : 0.6, : 5 }], [{ : , : 32, : (0.5, 1.5), : 15 }], [{ : , : (0.4, 0.5, 0.6, 0.7, 0.8, 0.9), : 0.2 }], [{ : , : (3, 8), : [(4, 4), (4, 8), (8, 4), (8, 8), (16, 32), (32, 16), (32, 32), (32, 48), (48, 32), (48, 48)] }], [{ : , : 0.6 }, { : , : 0.6, : 3 }], [{ : , : 32, : (0.5, 1.5), : 18 }], [{ : , : (0.4, 0.5, 0.6, 0.7, 0.8, 0.9), : 0.3 }], [{ : , : (5, 10), : [(4, 4), (4, 8), (8, 4), (8, 8), (16, 32), (32, 16), (32, 32), (32, 48), (48, 32), (48, 48)] }], [{ : , : 0.6, : 4 }, { : , : 0.6, : 6 }, { : , : 0.6, : 10 }], [{ : , : 1.0, : 6 }, { : }]]), dict( =, transforms=[ dict( =, =0.0625, =0.15, =15, =0.5), dict(=, =0.5), dict( =, transforms=[ dict( =, =120, =6.0, =3.5999999999999996, =1), dict(=, =1), dict( =, =2, =0.5, =1) ], =0.3) ], =dict( =, =, label_fields=[], =0.0, =) </code></pre> <p>Thank you all, I've tried my best to cover most part of my solution. Again, I am super happy to win the solo gold, feel free to reach out in case you find difficulty understanding any part of it.</p>
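<p>A minimal sketch of the erosion-then-dilation clean-up mentioned above (illustrative only; the kernel size is an assumption, not necessarily the value used in the actual submission):</p> <pre><code>import cv2
import numpy as np

def clean_mask(mask, kernel_size=3):
    # One erosion followed by one dilation (a morphological opening)
    # removes thin artifacts left after averaging/ensembling the masks.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    m = (mask != 0).astype(np.uint8)
    m = cv2.erode(m, kernel, iterations=1)
    m = cv2.dilate(m, kernel, iterations=1)
    return m
</code></pre>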
ICR - Identifying Age-Related Conditions
Silver medal for a First timer, Thank you for your inputs
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>🙇<strong>Credits to the users who contributed on the Code and Discussion boards; you are my extended team and you helped me learn a lot of new things. Thanks a ton 🙇</strong></p> <p><em>What did I do?</em><br> Data Analysis and Feature Engineering: <br> Identified non-numeric columns and converted them using label encoding</p> <h2>Initially normalized all the columns, but later realized a few of the models like TabPFN don't need normalization.</h2> <p>Identified null values and filled them with the median <br> Identified correlations between features, including Greeks Alpha, Beta, Gamma, Delta; dropped Epsilon (a difference from the toppers)<br> Observed that Alpha and Gamma have a strong correlation (100%) with the Class output. </p> <h2>So I got the idea that Alpha and Gamma can be used as target variables, making it a multi-class classification problem besides Class.</h2> <h2>Boxplots on each feature showed that some features have high outliers, so I removed outliers more than 3 standard deviations away (again, this is a somewhat arbitrary decision, as we don't know the features' meaning and significance)</h2> <p>The Class distribution is highly imbalanced, with 509 Class 0 records and 108 Class 1 records. So I used a random oversampler to balance the data, once based on Class and once based on Gamma.</p> <p><em>Model Selection :</em><br> As it is a classification problem, I first tried an ensemble of Logistic Regression, Random Forest, Decision Tree, and Neural Networks, averaging the output probabilities; it didn't work that great. I observed Decision Trees were doing well on this data. <br> Later I came to know about TabPFN, which works well with tabular data, and used it; with Gamma as the output, TabPFN was giving 0.965 accuracy. <br> As Random Forest was doing well, I thought I should use XGB for Class, and it gave a mean accuracy of 0.931.<br> For XGB, a few of the hyperparameters were taken from other published notebooks in the competition and tuned further. </p> <p><em>Evaluation:</em><br> Used K-fold cross validation for XGB with k=5; the spread of CV scores was wide at first, and after some hyperparameter tuning I found the best parameters with good mean accuracy and low standard deviation. <br> For TabPFN I used a 0.33 train/test split and verified accuracy on the predictions (I should have adopted better training and evaluation here)</p> <p><em>Submission :</em><br> Here is the tricky part: we have 3 potential output targets (Class, Alpha, and Gamma), and I needed to see which combination of them gives better output. After trying a few of these combinations, I found Class and Gamma gave good results during submission, so I picked that for the final submission. </p> <p>*** I was pretty disappointed to be very low on the leaderboard; people were scoring 0.06, 0.03, and finally 0.00. One of them published a 0.06 notebook, and after looking at it I understood he was essentially gaming the metric by adjusting the probabilities to align with the binary log loss calculation and make a better score. In the end, I made a choice to pick my own work instead of their published notebooks. 
If you look at the final leaderboard, none of those notebooks even made the top 200. So please trust your own work; don't copy anything which you don't understand. </p> <p>*** I did not use the Epsilon time from Greeks, but I see other teams have used it; looking at the values, I didn't feel like using it, especially as the final data wouldn't have those values. As with other features whose meaning I don't understand, I didn't use it. Maybe I am wrong. </p> <p>*** Keep following the code shared by other teams and take insights from it, but don't lose your own strategy because of them; experiment with those ideas and evolve your approach to be much better. </p>
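<p>A minimal sketch of two of the preprocessing steps above - dropping rows more than 3 standard deviations away on any numeric feature, then rebalancing with a random oversampler (illustrative only; train_X and train_y are assumed pandas objects, and the actual notebook may differ):</p> <pre><code>from imblearn.over_sampling import RandomOverSampler

def drop_outliers(X, y, n_std=3):
    # Keep only rows whose z-score is within n_std on every numeric column.
    z = (X - X.mean()) / X.std()
    keep = z.abs().le(n_std).all(axis=1)
    return X[keep], y[keep]

X_clean, y_clean = drop_outliers(train_X, train_y)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_clean, y_clean)
</code></pre>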
HuBMAP - Hacking the Human Vasculature
5th place solution. Higher Resolution and Dataset1 is all you need.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for hosting this competition!<br> Thank you to all the people who helped me! I have learned a lot from you!<br> This is my first time winning a gold medal! I am finally a master!</p> <h1>Data</h1> <p>I only used dataset 1 in my final submission, with COCO pretraining.<br> 5-fold splits were used in local CV.</p> <h1>Training</h1> <h2>Model</h2> <p>I tried DetectoRS-r101 and htc-db-b. Htc-db-b performs better in both local CV (score 0.439) and LB (score 0.570).</p> <h2>Training Strategy</h2> <p>I trained with mmdet2, using the default 1x schedule and 1x SWA training. <br> Image scales were set as (512,512)-(1536,1536).<br> RandomFlip and Rotate90 augmentations were used.</p> <h1>Inference</h1> <p>TTA was used in inference. The TTA scales were [(640,640),(800,800),(1024,1024),(1408,1408),(1696,1696),(1920,1920)] while the TTA flip_direction was [horizontal, vertical].<br> The RCNN NMS was replaced by weighted cluster-NMS with DIoU.</p> <h1>Final Submission</h1> <p>My final submission was a single fold of htc-db-b, trained on dataset 1.</p> <h1>Others</h1> <h2>Ensemble</h2> <p>I tried an ensemble of different folds, which decreased both CV and public scores, so I didn't choose it as my final submission, but it turned out to increase the private score. Even a 2-fold ensemble increased the private score from 0.57 to 0.573.</p> <h2>Larger Model</h2> <p>I tried htc-db-l, whose local CV and public scores were both lower than htc-db-b, but which had a higher private score (0.573).</p> <h2>Tried but did not work</h2> <p>Data augmentation: copy-paste, mixup, HSV, cutout…<br> dataset23: I tried pretraining on dataset 2 then finetuning on dataset 1, which decreased the score in both CV and LB. I tried pseudo labels, which also didn't work for me.</p>
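<p>As a reference for the weighted cluster-NMS step above, here is a minimal sketch of the DIoU term it is based on (illustrative only; the full cluster-NMS procedure is not reproduced here): IoU minus the squared centre distance, normalised by the squared diagonal of the smallest enclosing box.</p> <pre><code>def diou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2).
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    iou = inter / max(union, 1e-9)
    # Squared distance between the two box centres.
    d2 = ((xa1 + xa2 - xb1 - xb2) ** 2 + (ya1 + ya2 - yb1 - yb2) ** 2) / 4.0
    # Squared diagonal of the smallest box enclosing both.
    c2 = (max(xa2, xb2) - min(xa1, xb1)) ** 2 + (max(ya2, yb2) - min(ya1, yb1)) ** 2
    return iou - d2 / max(c2, 1e-9)
</code></pre>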
Google Research - Identify Contrails to Reduce Global Warming
152nd solution (single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations, all winners.</p> <p>And thank you for holding a competition with a lot of learning. This was my first time doing a segmentation task.</p> <p>Not great results, but public notebooks helped me learn the basics.</p> <p>I made various models with <a href="https://www.kaggle.com/stgkrtua" target="_blank">@stgkrtua</a> , but I made a mistake in the ensemble code, so the result was a single model :)</p> <p>A simple solution is given below.</p> <ul> <li>model<ul> <li><code>Unet</code> (segmentation_models_pytorch)</li> <li>backbone<ul> <li><code>'timm-resnest26d'</code></li></ul></li></ul></li> <li>image size<ul> <li>512*512</li></ul></li> <li>fold<ul> <li>5</li></ul></li> <li>use validation data on training</li> <li>Ash color</li> <li>Use only one target frame</li> <li>threshold<ul> <li>0.3</li></ul></li> <li>CV 0.688, Public0.66273, Private0.66321</li> </ul> <h3>If the ensemble was successful</h3> <ul> <li>Public 0.67199, Private 0.67366(119th not medal…)</li> </ul> <table> <thead> <tr> <th>model</th> <th>backbone</th> <th>img_size</th> <th>weight</th> </tr> </thead> <tbody> <tr> <td>Unet</td> <td>timm-resnest26d</td> <td>512</td> <td>0.4</td> </tr> <tr> <td>UnetPlusPlus</td> <td>timm-efficientnet-b2</td> <td>512</td> <td>0.1</td> </tr> <tr> <td>DeepLabV3Plus</td> <td>timm-efficientnet-b4</td> <td>512</td> <td>0.15</td> </tr> <tr> <td>Unet</td> <td>tu-tf_efficientnetv2_s</td> <td>512</td> <td>0.15</td> </tr> <tr> <td>SegModel (Details below)</td> <td>tf_efficientnet_b6</td> <td>512</td> <td>0.2</td> </tr> </tbody> </table> <pre><code> (nn.Module): (): ().__init__() self.encoder = Encoder(CFG) self.model_name = CFG.model_name self.out_indices = CFG.out_indices channel_nums = self.get_channel_nums() self.decoder = Decoder(CFG, channel_nums) self.head = nn.Sequential( nn.Conv2d( channel_nums[-], CFG.out_channels, kernel_size=, stride=, padding=, ), ) (): self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , , ] self.model_name == : channel_nums = [, , , ] self.model_name == : channel_nums = [, , , ] self.model_name == : channel_nums = [, , , ] channel_nums = channel_nums[-(self.out_indices):] channel_nums (): skip_connection_list = self.encoder(img) emb = self.decoder(skip_connection_list) output = self.head(emb) output </code></pre> <h3>not work</h3> <ul> <li>Input a total of 9 channels including the front and rear frames</li> <li>Input 9ch as 3ch using 1x1Conv2d</li> </ul>
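<p>A minimal sketch of the single submitted model described above, using segmentation_models_pytorch (illustrative only; training details, loss and encoder weights beyond what is stated in the table are assumptions):</p> <pre><code>import torch
import segmentation_models_pytorch as smp

# Unet with a timm-resnest26d encoder, 3-channel Ash-color input, 1 output mask.
model = smp.Unet(encoder_name="timm-resnest26d", in_channels=3, classes=1)

x = torch.randn(1, 3, 512, 512)            # one 512x512 ash-color image
prob = torch.sigmoid(model(x))
mask = prob.ge(0.3).float()                # threshold of 0.3, as used above
</code></pre>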
ICR - Identifying Age-Related Conditions
3rd Place Solution for the "ICR - Identifying Age-Related Conditions" Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I was really surprised to be able to achieve this result. In fact, after participating in the competition and simply implementing a baseline code, I rarely paid attention to this competition again because I usually had to work overtime at my company 😅.<br> The final result was a baseline code based on the catboost model at that time, and its public score was 0.21.<br> Given the anonymous data features and medical-related health features, my initial idea was to construct new features through the ratio between different features, just as some indicators in a medical examination report are calculated as ratios of other indicators.<br> Before this, I planned to filter some anonymous features through correlation, so as not to construct too many invalid features. However, I did not attempt this further, and the final code still uses the cross calculation of all features.<br> The more effective operations in this competition were the following two points:<br> 1. One is the cross calculation of features<br> 2. The other is the catboost model<br> (My lightgbm model with the same features got 0.22 on the public score and 0.38 on the private score.)<br> Here's my solution:<br> <a href="https://www.kaggle.com/code/junyang680/icr-lightgbmbaseline" target="_blank">https://www.kaggle.com/code/junyang680/icr-lightgbmbaseline</a><br> Regarding the parameter selection of the lightgbm model and the catboost model, it was based on the contents of some public notebooks, but sorry, I have forgotten which ones.</p>
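<p>A minimal sketch of the feature "cross calculation" described above - adding the pairwise ratio of numeric columns before fitting CatBoost (illustrative only; the column names, the eps constant and the CatBoost parameters are assumptions, not the values from the linked notebook):</p> <pre><code>from itertools import combinations
from catboost import CatBoostClassifier

def add_ratio_features(df, cols, eps=1e-9):
    # Add one ratio column for every pair of numeric features.
    df = df.copy()
    for a, b in combinations(cols, 2):
        df[f"{a}_div_{b}"] = df[a] / (df[b] + eps)
    return df

num_cols = [c for c in train.columns if c not in ("Id", "Class", "EJ")]  # assumed names
X = add_ratio_features(train.drop(columns=["Id", "Class"]), num_cols)
model = CatBoostClassifier(verbose=0)  # placeholder parameters
model.fit(X, train["Class"], cat_features=["EJ"])
</code></pre>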
ICR - Identifying Age-Related Conditions
436th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a Bronze solution using CatBoost with zero feature engineering, default hyperparameters, and essentially 3 lines of code. "greeks.csv" is not used.</p> <pre><code>X_train, X_val, y_train, y_val = train_test_split(train_X_raw, train_y, test_size=, stratify=train_y, random_state=) clf = CatBoostClassifier(cat_features=[CAT_COL], auto_class_weights=) clf.fit(X_train, y_train) </code></pre> <p><strong>Context</strong></p> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <p><strong>Overview of the Approach</strong></p> <ol> <li>Evaluation metric and loss function. To use balanced logarithmic loss in mainstream classifiers, a simple way is to set class_weight (they may be under slightly different names) to "balanced". We can show that the weight-adjusted balanced log loss is mathematically equivalent to the regular log loss.</li> <li>Preprocessing. Fill NA with -1.</li> <li>Validation strategy. A stratified train-test split using the labels. </li> </ol> <p><strong>Details of the submission</strong><br> Nothing special here.</p> <p><strong>Sources</strong><br> None.</p>
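<p>For reference, here is a minimal sketch of the competition metric that the class weighting targets (my own implementation, not organizer code): the log loss of each class is computed separately and the two are averaged, so both classes count equally no matter how many samples they have.</p> <pre><code>import numpy as np

def balanced_log_loss(y_true, p1):
    # y_true: 0/1 labels; p1: predicted probability of class 1.
    p1 = np.clip(np.asarray(p1, dtype=float), 1e-15, 1 - 1e-15)
    y_true = np.asarray(y_true)
    loss_0 = -np.mean(np.log(1 - p1[y_true == 0]))
    loss_1 = -np.mean(np.log(p1[y_true == 1]))
    return (loss_0 + loss_1) / 2
</code></pre>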
ICR - Identifying Age-Related Conditions
[🥈 | 292/6431 | Top 5%] ICR — Identifying Age-Related Conditions, Kaggle Challenge Writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is my solution for the recently concluded Kaggle Competition titled “ICR — Identifying Age-Related Conditions.” Below, I will be emphasizing the key points as follows:</p> <ul> <li><strong>Introduction to the Challenge</strong>: In this section, I will provide an overview of the challenge itself.</li> <li><strong>Introduction to the Data</strong>: This part will encompass an introduction to the dataset that was used in the competition.</li> <li><strong>Approach to the Problem:</strong> Here, I’ll outline the methodology I employed to tackle the given problem.</li> <li><strong>Successful Strategies</strong>: I will discuss the techniques and strategies that proved effective for me.</li> <li><strong>Challenges Faced:</strong>This section will delve into the obstacles I encountered during the competition.</li> <li><strong>Missed Opportunities:</strong> I will highlight concepts I considered but regrettably didn’t attempt.</li> <li><strong>New Knowledge Acquired:</strong> In this segment, I’ll share the new insights I gained throughout this challenge.<br> By exploring this write-up, you’ll gain valuable insights into the rewards of participating in Kaggle Competitions, and uncover the lessons I learned from the “ICR — Identifying Age-Related Conditions” challenge. Join me on this journey of discovery and growth!</li> </ul> <h2><strong>Introduction to the Challenge</strong></h2> <p>The goal of this competition is to predict if a person has any of three medical conditions. You are being asked to predict if the person has one or more of any of the three medical conditions (Class 1), or none of the three medical conditions (Class 0). You will create a model trained on measurements of health characteristics.<br> For the full context of the challenge you can refer the Official <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">Overview </a>Section.<br> The evaluation metric for the competition was <strong>balanced logarithmic loss</strong>:</p> <h2><strong>Introduction to the Data</strong></h2> <p>Now comes the most important part of the competition. The competition data comprises over fifty anonymized health characteristics linked to three age-related conditions. Our goal is to predict whether a subject has or has not been diagnosed with one of these conditions — a <strong>binary classification</strong> problem.<br> There was also some supplemental anonymized data provided, which would only be available for the training set. What makes this modelling challenging is that there were only 617 rows of data provided. It was very easy to overfit any kind of solution here. Also, there was imbalance in the dataset making the job even harder.</p> <h2><strong>Approach to the Problem</strong></h2> <p>The problem was simple enough to understand, although there was not much we could do to learn more about the domain as the data was anonymized, a few columns were distinguishable like age, sex etc. 
Apart from that, most parameters were unknown.</p> <h3><strong>Step 1: EDA on the data</strong></h3> <p>Tried minimal EDA on the data regarding the correlations and the features’ central tendency. There were a few features which were well correlated with the target value.</p> <h3><strong>Step 2: Create a baseline</strong></h3> <p>As the features were not really making sense, I did not try to normalize the data on my own and built a simple decision tree classifier. I thought of using it as the baseline for my further iterations. Unfortunately, it did not work very well, so I used everyone’s favorite XGBoost as the baseline.</p> <h3><strong>Step 3: Iterate Iterate Iterate</strong></h3> <p>After the baseline is ready, we keep adding new ideas in each iteration and keep building on the insights we gain from each submission. In the section below I will highlight the ideas that helped me climb up the public leaderboard.</p> <h2><strong>Successful Strategies</strong></h2> <ol> <li><strong>A baseline XGBoost/CatBoost model:</strong> For my final version of the baseline, I opted for an ensemble of CatBoost and XGBoost models. This choice stemmed from CatBoost’s inherent conservatism (because of its ordered boosting mechanism), setting it apart from more aggressive alternatives like LGBM. (Although, in subsequent iterations, I did introduce LGBM into the equation.)</li> <li><strong>Mode imputing of missing values:</strong> After trying out different central tendency imputation methods, mode imputation gave me slightly better results. Also, I did not want outliers affecting my score as the features’ nature was still unknown (the limited data did not help either), so I avoided mean filling.</li> <li><strong>A Stratified K-Fold CV was important:</strong> This competition highlights the importance of cross-validation. The error fluctuated heavily without a K-Fold CV, and even with it, the per-fold error was not very consistent. This made it very difficult, as the score was improving on the public LB, but the average CV score was not improving much.</li> <li><strong>Sampling techniques:</strong> I mainly used undersampling to balance the dataset to the minority class, which at first seemed wasteful given the already small dataset. So I tried upsampling and using class weights in the prediction, but those did not yield good results for me.</li> <li><strong>The Game Changer TabPFN:</strong> A Transformer That Solves Small Tabular Classification Problems in a Second. Everyone who participated here used this transformer-based model in their solution, and I did too. It was fast, self-normalizing, and only works for small datasets, which makes it very appealing to use here. (Although people have won even without it.) If you want to know more about it, you can refer to the original <a href="https://arxiv.org/abs/2207.01848" target="_blank">paper</a>. A small sketch combining TabPFN and XGBoost with the Stratified K-Fold CV above appears at the end of this write-up.</li> </ol> <h2><strong>Challenges Faced</strong></h2> <ol> <li>The small dataset did not help at all.</li> <li>Using the supplementary data was improving the score a bit, but I was unsure about using it in my final solution as it won’t be there for the test set.</li> <li>Class imbalance hampered the models’ performance.</li> <li>It was an either <strong>Trust your work/CV or trust the Public LB</strong> kind of competition. As the public LB error was 0 for the top spot towards the end, it made sense to trust my work and CV.</li> </ol> <h2><strong>Missed Opportunities</strong></h2> <ol> <li>Hybrid Sampling (upsample the minority and downsample the majority to a threshold). 
I don’t know how it would have gone, but I am sure it would’ve been interesting.</li> <li>A custom NN for the task, maybe an ensemble of XGBoost, TabPFN and a custom NN with a lot of dropout (which is actually a part of the winning solution).</li> <li>Using the supplementary data provided to train the model.</li> <li>Using age and sex as the splitting criteria for the K-Folds (might have been interesting).</li> <li>Most of the feature engineering process, to which I was not able to dedicate much time.</li> <li>Thresholding probability values to imitate the public test set: most top kernels were doing this, but I did not like this approach, so I skipped it (and it saved me from DOOM, as all notebooks at the top fell very hard on the private LB).</li> <li>Using Optuna was on my list for hyperparameter tuning.</li> </ol> <h2><strong>New Knowledge Acquired</strong></h2> <p>I maintain this section in all my write-ups as it helps me reflect on my progress and remember when and how I acquired specific skills, tricks, or techniques. Here, I’m compiling insights that could prove valuable to anyone who peruses this:</p> <ol> <li><strong>Believe in Your Work’s Validity:</strong> Even if your ranking places you at the tail end, remember that correctness in your process and strong conviction can yield improvement. As evidence, I witnessed a leap of over 2500 places on the private leaderboard, securing a silver medal.</li> <li><strong>TabPFN — A Gem for Small Datasets:</strong> For those dealing with smaller datasets, TabPFN stands out as a remarkable AutoML technique to consider. While its parameters might not be fine-tuned extensively, its utility remains unparalleled.</li> <li><strong>XGBoost’s Enduring Strength:</strong> It’s evident that XGBoost continues to be a robust contender. Many participants achieved higher rankings by solely employing tuned XGBoost models. However, my personal inclination leans towards CatBoost for such scenarios due to its advantageous attributes.</li> <li><strong>Take the time to actually build the whole iteration pipeline yourself</strong> rather than forking someone’s work. Forking could save you some time initially, but it would restrict your flexibility to make changes and build upon someone else’s code.<br> That is it for the write-up this time; you can find the notebook that won me the silver (I have made it public even before the competition ended) <a href="https://www.kaggle.com/code/mohneesh7/tabpfn-xgb-with-stratifiedkf-cv/notebook?scriptVersionId=135526306" target="_blank">here</a>.<br> If you have any feedback or suggestions, please do reach out to me!<br> Cheers!!</li> </ol>
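<p>To make the ideas above concrete, here is a minimal sketch of a Stratified K-Fold CV loop that averages XGBoost and TabPFN probabilities and scores them with the competition's balanced logarithmic loss. It is an illustration only: the hyperparameters and the 5 splits are placeholders rather than the settings from the notebook linked above, <code>X</code> is assumed to be a fully numeric, mode-imputed feature frame with <code>y</code> the binary Class target, and the <code>TabPFNClassifier</code> call assumes the public <code>tabpfn</code> package.</p>
<pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
from tabpfn import TabPFNClassifier  # assumes the public tabpfn package

def balanced_log_loss(y_true, p1, eps=1e-15):
    # Mean of the per-class average log losses (the competition metric).
    p1 = np.clip(p1, eps, 1 - eps)
    loss_0 = -np.mean(np.log(1 - p1[y_true == 0]))
    loss_1 = -np.mean(np.log(p1[y_true == 1]))
    return (loss_0 + loss_1) / 2

def run_cv(X, y, n_splits=5, seed=42):
    # X: fully numeric, mode-imputed feature DataFrame; y: the binary Class target.
    oof = np.zeros(len(X))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, valid_idx in skf.split(X, y):
        xgb = XGBClassifier(n_estimators=500, learning_rate=0.03, max_depth=4)
        xgb.fit(X.iloc[train_idx], y.iloc[train_idx])
        pfn = TabPFNClassifier(device="cpu")
        pfn.fit(X.iloc[train_idx], y.iloc[train_idx])
        # Simple average of the two models' positive-class probabilities.
        oof[valid_idx] = (xgb.predict_proba(X.iloc[valid_idx])[:, 1]
                          + pfn.predict_proba(X.iloc[valid_idx])[:, 1]) / 2
    return oof, balanced_log_loss(np.asarray(y), oof)
</code></pre>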
HuBMAP - Hacking the Human Vasculature
10th place solutions!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host and the competitors.<br> It was an interesting competition for me.</p> <p>I'm happy to become a Kaggle master!!</p> <h2>Team Members</h2> <p><a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a>, <a href="https://www.kaggle.com/hatakee" target="_blank">@hatakee</a>, <a href="https://www.kaggle.com/kfuji" target="_blank">@kfuji</a><br> co-workers!!</p> <h2>Solution overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2Fec3da742a7a4a7c70bd8d56634204885%2Fsol.png?generation=1690850953870523&amp;alt=media" alt=""></p> <p>Thank you <a href="https://www.kaggle.com/fnands" target="_blank">@fnands</a>!!<br> Base notebook: <a href="https://www.kaggle.com/code/fnands/a-quick-yolov7-baseline" target="_blank">https://www.kaggle.com/code/fnands/a-quick-yolov7-baseline</a></p> <p>YOLO's segmentation head output is 1/4 of the input size (512 =&gt; 128), so I changed the input resolution (640 =&gt; 160).</p> <h3>Unique idea (maybe)</h3> <p>I did the fine-tuning with ds1 last, because <strong>I didn't want to use dilate!!</strong></p> <h2>Inference code</h2> <p><a href="https://www.kaggle.com/sugupoko/10th-place-inference-yolov7ensemble-pseudo-res640" target="_blank">https://www.kaggle.com/sugupoko/10th-place-inference-yolov7ensemble-pseudo-res640</a></p> <h2>Reference: previous competitions</h2> <p>Many of our experiments came from these two previous competitions:</p> <ul> <li><a href="https://www.kaggle.com/competitions/global-wheat-detection" target="_blank">https://www.kaggle.com/competitions/global-wheat-detection</a></li> <li><a href="https://www.kaggle.com/c/sartorius-cell-instance-segmentation/overview" target="_blank">https://www.kaggle.com/c/sartorius-cell-instance-segmentation/overview</a></li> </ul> <h2>Things that didn't go well:</h2> <ul> <li>Applied pseudo-labeling twice.</li> <li>YOLOv8 (training and inference).</li> <li>mmdetection (difficult for us…).</li> <li>U-Net prediction in the detected BBOX.</li> <li>NMS =&gt; WSF <ul> <li><a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a></li></ul></li> </ul> <h2>Things I couldn't implement well:</h2> <ul> <li>Training with data regenerated by combining tiles.</li> </ul> <h2>Things I couldn't do:</h2> <ul> <li>Augmentation with the stain tool.</li> <li>Learning using extra data.</li> </ul> <h2>Other</h2> <p>I also trained at a higher resolution (640 =&gt; 800). That model's LB score is the highest of our models.<br> LB: 0.473, PB: 0.563</p>
HuBMAP - Hacking the Human Vasculature
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for hosting this competition.</p> <p>Thanks to <a href="https://www.kaggle.com/nghiahoangtrung" target="_blank">@nghiahoangtrung</a> for sharing the <a href="https://www.kaggle.com/code/nghiahoangtrung/hubmap-mmdet3-1-single-fold-inference/notebook" target="_blank">notebook and dataset</a>, which helped me make submissions within a short time.</p> <p>Training code of my best single model is shared on <a href="https://github.com/tascj/kaggle-hubmap-hacking-the-human-vasculature" target="_blank">GitHub</a>. (Update 2023/08/12)</p> <p>Inference notebook is public <a href="https://www.kaggle.com/tascj0/hubmap-2023-release" target="_blank">here</a>.</p> <h2>Overal strategy</h2> <p>For most instance segmentation models, the AP mainly relies on bounding box (bbox) prediction, while the precision of mask prediction has a minor impact. Additionally, mask prediction can be handled by other models (like the mask head of Mask R-CNN or any semantic segmentation model). Therefore, when dealing with instance segmentation tasks, I usually focus my efforts on optimizing bbox accuracy.</p> <p>After trying out several models, I found RTMDet to perform exceptionally well and train fast. Hence, I chose RTMDet as the primary model for my experiments.</p> <h2>EMA</h2> <p>I highlighted this because it's the most crucial technique I found in my experiments. The Exponential Moving Average (EMA) models not only demonstrates higher accuracy on the validation set but also achieves higher accuracy on the training set. In most of my experiments, I utilized EMA along with a fixed learning rate. This approach provided me with a stable foundation for conducting various experiments.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F381412%2Fc7fe742970beb68fdeca794c40919644%2Fema.png?generation=1691111824367027&amp;alt=media" alt="ema"></p> <h2>Dataset</h2> <p>I trained the final submitted models using both dataset1 and dataset2. dataset1 was split into train and val sets based on 'i' location.<br> Splitting based on 'i' is more difficult than random splitting. The validation score is approximately (0.47 bbox_mAP, 0.72 segm_mAP60) for random splitting and around (0.43 bbox_mAP, 0.68 segm_mAP60) for 'i' splitting.</p> <h2>Modeling</h2> <p>RTMDet-x is my baseline. I trained models with all 3 classes. Input size is 768.</p> <h3>add mask supervision</h3> <p>In the early experiments, I compared Faster R-CNN and Mask R-CNN, using the same Faster R-CNN pre-trained weights. The latter exhibited higher bbox mAP, indicating the helpfulness of mask annotation.</p> <p>Without using mask supervision, RTMDet-x already achieves a higher bbox mAP compared to models like Mask R-CNN. The addition of mask supervision further improves performance.</p> <p>There are two ways to utilize mask annotation:</p> <ol> <li>Indirectly use mask information through random rotation. 
Recalculate bboxes after rotation.</li> <li>Add a mask head to the model.</li> </ol> <p>Both worked well.</p> <table> <thead> <tr> <th>random rotate</th> <th>mask head</th> <th>bbox_mAP</th> <th>segm_mAP60</th> </tr> </thead> <tbody> <tr> <td>N</td> <td>N</td> <td>0.424</td> <td></td> </tr> <tr> <td>Y</td> <td>N</td> <td>0.434</td> <td></td> </tr> <tr> <td>N</td> <td>Y</td> <td>0.43</td> <td>0.68</td> </tr> <tr> <td>Y</td> <td>Y</td> <td>0.432</td> <td>0.688</td> </tr> </tbody> </table> <p>Due to minimal instance overlap, mask prediction is much simpler compared to typical instance segmentation scenarios.<br> I added a naive mask head in training to enhance bbox prediction. Surprisingly, the predictions from the mask head turned out not bad. Using the same bboxes, segm_mAP60 is slightly high than Mask R-CNN, while segm_mAP75 is slightly lower than Mask R-CNN. A single fold model without tta scored 0.565+ on the private LB, which is sufficient for a gold medal.</p> <h3>augmentations</h3> <p>I used strong geometric augmentations.</p> <pre><code> dict(=, img_scale=(768, 768), angle_range=(-180, 180), scale_range=(0.1, 2.0), border_value=(114, 114, 114), =0.5, =1.0, =0.5, =1.0, ), </code></pre> <h3>training</h3> <p>I trained with a batch size of 8, comprising 3 images from dataset1 and 5 images from dataset2. This approach led to a slight improvement.</p> <h3>ensemble</h3> <p>My final model is an ensemble, where bbox prediction is a WBF of 3 RTMDet models, a YOLOX-x (with mask supervision), and a Mask R-CNN. All models has two different weights (2 of 5 folds). Mask prediction is generated by mask head of Mask R-CNN using input size 1440. TTA was not used.</p> <p>The ensemble scored 0.589 private and 0.317 public.</p> <h2>dilation</h2> <p>In fact, I forgot about dilation until I saw my public score on the final day of the competition.</p> <p>In my local validation, adding wsi3dataset2 to the training set improved the mAP of wsi1dataset1 (all wsi1 holdout) and wsi2dataset2 (all wsi2 holdout). So I choose to trust wsi1dataset1 and wsi2dataset2.</p> <p>This was mostly a stroke of luck. If I had made the submission earlier, I might have made different choices in experiments. Anyway, I would definitely keep one final submission without dilation in the end.</p> <h2>A few tips</h2> <ol> <li>keep multiple EMA models in a single training run.</li> <li>use <code>torch._foreach_lerp_</code> to calculate EMA. It could save some time.</li> <li>use a fast COCOeval implementation. Since evaluation needs to be conducted on multiple EMA models with different momentums, faster evaluation can also save some time.</li> </ol>
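<p>As a footnote to the EMA tips above, here is a minimal sketch of an EMA tracker built around <code>torch._foreach_lerp_</code>. It is an illustration rather than the training code used here, and the decay value is a placeholder; keeping several EMA models with different momenta in one run would simply mean instantiating this class several times.</p>
<pre><code>import copy
import torch

class EMAModel:
    """Tracks an exponential moving average of a model's parameters and buffers."""

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.ema = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        ema_params = list(self.ema.parameters())
        model_params = list(model.parameters())
        # lerp_(a, b, w) computes a + w * (b - a), so a weight of (1 - decay)
        # moves the EMA weights a small step towards the current weights.
        torch._foreach_lerp_(ema_params, model_params, 1.0 - self.decay)
        for ema_buf, buf in zip(self.ema.buffers(), model.buffers()):
            ema_buf.copy_(buf)
</code></pre>
<p>Calling <code>update(model)</code> after every optimizer step keeps an averaged copy for validation, which matches the observation above that the EMA model scores higher on both the training and validation sets.</p>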
ICR - Identifying Age-Related Conditions
Wow (and our solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>To put it mildly, I'm in complete shock. I woke up today to my teammate's congratulations and got intrigued as to what place we got. I burst out laughing when I saw it was 2nd place, and the funniest thing is that my solution is just CV: no probing, no nothing.</p> <p>Here's my solution:</p> <p><a href="https://www.kaggle.com/code/opamusora/main-notebook" target="_blank">https://www.kaggle.com/code/opamusora/main-notebook</a></p> <p>What we did:</p> <ul> <li>Just like many people, we used time and max(time)+1 for test.</li> <li>Removed rows where time is None. I noticed a weird cluster that was far away from all other data when playing with UMAP, and it was the rows with absent time.</li> <li>Filled NaN values with -100; it probably doesn't matter whether it's the median or some other low number.</li> <li>Used a technique I saw in some other competition: reducing dimensions with UMAP and then labeling clusters with k-means. It didn't bring a lot of score, but it's there (a sketch of the idea is at the end of this write-up).</li> <li>Did feature permutation manually and dropped any columns that made the score even slightly worse.</li> <li>For models I used CatBoost, XGBoost with parameters from some public notebook, and TabPFN. LGBM didn't seem to work for me as it always dropped my CV.</li> <li>Then we just averaged our predictions for the test set, and that's it.</li> </ul> <p>Also, I want to mention that we wanted to try editing TabPFN to get embeddings, and we had an idea to try fine-tuning TabPFN, but it didn't work out.</p> <p>I also tried Optuna for optimizing my models, but it didn't work out.</p> <p>Stacking didn't help my score much either.</p> <p>Edit: my last 2 submissions in this competition are the highest-scoring ones 😅</p>
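<p>The UMAP + k-means feature mentioned in the list above can be sketched as follows. This is only the idea, assuming pandas DataFrames after the -100 NaN filling; the number of UMAP components, the number of clusters, and the column name are placeholders rather than the values used in the notebook.</p>
<pre><code>import umap  # umap-learn
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def add_cluster_feature(X_train, X_test, n_components=5, n_clusters=8, seed=42):
    # Fit the scaler, reducer and clusterer on train only, then apply them to test.
    scaler = StandardScaler().fit(X_train)
    reducer = umap.UMAP(n_components=n_components, random_state=seed)
    emb_train = reducer.fit_transform(scaler.transform(X_train))
    emb_test = reducer.transform(scaler.transform(X_test))

    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    X_train = X_train.assign(cluster=km.fit_predict(emb_train))
    X_test = X_test.assign(cluster=km.predict(emb_test))
    return X_train, X_test
</code></pre>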
ICR - Identifying Age-Related Conditions
149th Place Solution for the ICR - Identifying Age-Related Conditions Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>149th Place Solution for the ICR - Identifying Age-Related Conditions Competition</h3> <p>Public LB was ranked 2859th, but 2710th shake up and private LB was ranked 149th. This is the first time I have ever done so much SHAKE. Since the number of data was very small, I tried to keep the solution simple.</p> <h3>Context</h3> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li> </ul> <h3>Overview of the approach</h3> <p>Our final model (Public/Private LB of 0.19743/0.39377) was a combination of 10 single models. The ensembles used were CatBoost (5-seeds ensemble Public/Private LB 0.22466/0.41235) and LGBM (5-seeds ensemble Public/Private LB 0.18788/0.4098).</p> <p>One final submission was an average ensemble of 10models from CatBoost (5 models) and LGBM (5 models).</p> <h3>Details of the submission</h3> <p>After trying 5fold, 10fold, and 20fold, 20fold had the best CV, so 20fold was chosen. The CV of the best fold was CV~0.05, but the CV of the worst fold was CV~0.27, and the CV of (out of fold) of all folds was CV~0.126, indicating that the variation among folds was very large. Therefore, I tried to keep the approach as simple and with the best generalization performance as possible, and by applying a seed ensemble to the approach, I tried to minimize the variation as much as possible. I also tackled the competition by trusting the behavior of the CVs without much consideration of Public LBs and selected the final sub (the so-called trust CV).</p> <p>Below is a summary of CV, Public LB, and Private LB for each model (all 20 fold).</p> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>LGBM</td> <td>0.12614</td> <td>0.18521</td> <td>0.41131</td> </tr> <tr> <td>LGBM (5 seeds ensemble)</td> <td>0.12456</td> <td>0.18788</td> <td>0.4098</td> </tr> <tr> <td>CatBoost (5 seeds ensemble)</td> <td>0.12545</td> <td>0.22466</td> <td>0.41235</td> </tr> <tr> <td>LGBM (5 seeds ensemble) + CatBoost (5 seeds ensemble)</td> <td>0.12087</td> <td>0.19743</td> <td>0.39377</td> </tr> </tbody> </table> <h3>Preventing overfitting</h3> <p>To prevent overfitting, both LGBM and CatBoost performed early stopping. (I set early_stopping_rounds = 100 for iterations = 50500, but in practice iterations were not needed to this extent because they converged within 1000 iterations.) I also adjusted the parameters related to regularization and overlearning for the LightGBM and CatBoost parameters. 
The specific LGBM and CatBoost parameters are as follows.</p> <ul> <li>LGBM parameters</li> </ul> <pre><code>lgbm_params = { ': ', ': ', ': CFG.random_seed, ': , ': , ': , ': , ': , ': -, ': , ': , ': , ': , balanced', none', ': } </code></pre> <ul> <li>CatBoost parameters</li> </ul> <pre><code>catboost_params = { : , : , : , : , : , : , : .random_seed, : } </code></pre> <h3>EDA</h3> <p>When I was doing EDA, I was thinking of adding information from greeks.csv to the feature set, but there were so many missing Epsilon values that I wondered how to handle them. Therefore, for the moment, I created models to predict Alpha, Beta, Gamma, and Delta, and tried to utilize Alpha, Beta, Gamma, and Delta as features by using the oof value in train.csv and the respective pred values in test.csv. However, this approach resulted in a lower CV. (Consequently, it was also detrimental from the Private LB perspective.) I thought about this result, and I believe that the reason may be that Alpha is practically Class(0,1), which is the target of this study, and the generalization performance was impaired because the feature called Alpha is too strong. In fact, the feature importance of LGBM at this time showed that the importance of Alpha was quite high.(In addition, the code of <a href="https://www.guruguru.science/competitions/21/discussions/ab028e86-d011-485e-8844-45d15717fec4/" target="_blank">https://www.guruguru.science/competitions/21/discussions/ab028e86-d011-485e-8844-45d15717fec4/</a> was used to illustrate the feature importance. Thank you very much. I really like the fact that the blurring of each CV can be expressed as a boxen plot.) Therefore, I decided to stop using the information in greeks.csv as features and use the features present in the original train.csv. (Instead, as mentioned in the Validation Strategy part, I utilize the information from greeks.csv in the Validation Strategy.)</p> <ul> <li><p>Feature importance of LGBM when predictions of Alpha, Beta, Gamma, and Delta are used as features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6820505%2Fdff60ef0172acd231afbf6e80f194679%2F__results___35_1.png?generation=1692284555088960&amp;alt=media" alt=""></p></li> <li><p>Feature importance of LGBM when predictions of Alpha, Beta, Gamma, and Delta are not used as features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6820505%2F23ad8cf3eaea53711dd1a5bdd034a5a7%2F__results___26_1.png?generation=1692284604740016&amp;alt=media" alt=""></p></li> </ul> <h3>Feature Selection</h3> <p>I used all features except for Id, which appears in train.csv.</p> <h3>Validation Strategy</h3> <p>As mentioned above, I wanted to make the validation strategy as unbiased as possible for the training data, since I expected a great deal of variability. 
Therefore, I decided to combine train.csv with greeks.csv and perform the fold partitioning with MultilabelStratifiedKFold on 'Alpha', 'Beta', 'Gamma', 'Delta' and 'Class'.</p> <pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

train_df = pd.read_csv("train.csv")
greeks_df = pd.read_csv("greeks.csv")
train_df = pd.merge(train_df, greeks_df, on="Id", how="left")

mskf = MultilabelStratifiedKFold(n_splits=CFG.n_splits,  # 20 folds, as described above
                                 shuffle=True, random_state=CFG.random_seed)
labels = train_df[["Alpha", "Beta", "Gamma", "Delta", "Class"]]

train_df["fold"] = -1
for fold, (_, valid_index) in enumerate(mskf.split(train_df, labels)):
    train_df.loc[valid_index, "fold"] = fold
train_df = train_df.reset_index(drop=True)
</code></pre> <h3>Sources</h3> <ul> <li><a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/412507</a></li> </ul>
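<p>As a footnote to the "Preventing overfitting" section above, here is a minimal sketch of the early-stopping setup it describes (early_stopping_rounds = 100 with a large iteration budget), shown for LightGBM only. Apart from those two numbers, the parameters are placeholders; the exact regularization settings are not reproduced here, and the seed is taken from the write-up's CFG object.</p>
<pre><code>import lightgbm as lgb

def fit_lgbm_with_early_stopping(X_train, y_train, X_valid, y_valid):
    model = lgb.LGBMClassifier(
        objective="binary",
        n_estimators=50500,      # large budget; early stopping ends training much sooner
        learning_rate=0.01,      # placeholder
        random_state=CFG.random_seed,
    )
    model.fit(
        X_train, y_train,
        eval_set=[(X_valid, y_valid)],
        callbacks=[lgb.early_stopping(stopping_rounds=100), lgb.log_evaluation(period=100)],
    )
    return model
</code></pre>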
HuBMAP - Hacking the Human Vasculature
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am so happy to have won my first gold medal, and even a solo medal, in this competition. Also, thanks to OpenMMLab, SenseTime and The Chinese University of Hong Kong for making a really great library, mmdetection.</p> <p><strong>Summary</strong><br> I used the pseudo labels of Dataset 3 to train a Cascade Mask R-CNN + ConvNeXt V2 Large.</p> <p><strong>Model</strong></p> <ul> <li>Detector: Cascade Mask R-CNN</li> <li>Backbone: ConvNeXt V2 Large</li> <li>Loss: FocalLoss</li> </ul> <p><strong>Augmentation</strong></p> <ul> <li>Albumentations<ul> <li>Distortion worked very well for me (a sketch of a typical distortion block appears at the end of this write-up).</li></ul></li> </ul> <pre><code> ( =, transforms=[ (=, p=), (=, p=), (=, p=), ], p=), </code></pre> <ul> <li>mmdet augmentation<ul> <li>AutoAugment, MixUp, Mosaic, RandomErasing</li></ul></li> </ul> <p><strong>Training process</strong></p> <p>First, weighted segment fusion was performed on the outputs of the 4 models trained on the 4 types of datasets, to create pseudo labels for Dataset 3.</p> <p>I did WSF by referring to the guide below: <a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9334354%2F74ad73b605892f8de8d3bc401dae6d0a%2Fmake%20pseudo%20labels.png?generation=1690855159125845&amp;alt=media" alt=""></p> <p>Second, the model was pre-trained using the pseudo labels of Dataset 3 and the human labels of Dataset 2.</p> <p>Finally, the model was fine-tuned using only Dataset 1 data and submitted.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9334354%2F49ef424a4d828f38c7e44d374a2021c2%2Ffinetune.png?generation=1690855729462161&amp;alt=media" alt=""></p>
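<p>The albumentations distortion block above is abbreviated, so for reference, a typical distortion group looks like the sketch below. The specific transforms and probabilities are common choices and should be read as assumptions, not necessarily the exact ones used in this solution.</p>
<pre><code>import albumentations as A

# One distortion transform is picked per call; all probabilities are placeholders.
distortion = A.OneOf(
    [
        A.ElasticTransform(p=1.0),
        A.GridDistortion(p=1.0),
        A.OpticalDistortion(p=1.0),
    ],
    p=0.5,
)
</code></pre>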
Google Research - Identify Contrails to Reduce Global Warming
25th Place Solution for the Google Research Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></p> <h2>Overview of the Approach</h2> <p>My approach is a simplified version of the 3D model found in the authors' paper [1].</p> <ul> <li>extract feature maps for each time frame with a 2D backbone</li> <li>modulate the current frame's feature map with the feature maps of neighboring time frames to obtain modulated 2D feature maps (<strong>Temporal Feature Modulator</strong>)</li> <li>pass the 2D feature maps to a 2D decoder (U-Net)</li> </ul> <h2>Details of the submission</h2> <h3>Temporal Feature Modulator</h3> <p>First, I hypothesized that features in earlier layers do not contribute much to the final prediction, since contrails drift from frame to frame. So I came up with the idea of applying temporal modulation only in the later feature-map layers.</p> <p>I ran some experiments applying temporal modulation to earlier layers as well, and found that applying it only to the last layer gives the best performance. Note that the rest of the feature maps are simply sliced at the current time frame (T=4). (A rough sketch of this module appears at the end of this write-up.)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2F99cce57c10157369fcc5c263ebfff4be%2Fcontrail.drawio.png?generation=1691723146332719&amp;alt=media" alt=""></p> <h3>Data processing &amp; Augmentations</h3> <p>For faster loading and to save disk space, I quantized the false-color images to <code>uint8</code>.</p> <p>I did this based on the assumption that 'if humans label at <code>uint8</code> resolution, then training an AI model at that resolution would be sufficient'.</p> <p>As other participants have already discussed publicly, any kind of geometric augmentation fails to improve the models' performance (after the competition, it turned out to be because of the label misalignment).</p> <p>So I just applied light augmentations of <code>RandomResizedCrop</code> and <code>HorizontalFlip</code>.</p> <pre><code>cfg.geometric_transform = A.Compose( [ A.RandomResizedCrop(height=, width=, scale=(, ), p=), A.HorizontalFlip(p=), ] ) </code></pre> <h3>CV strategy</h3> <p>My CV strategy is very naive. 
I only trained by images on <code>training</code> frames and evaluated on <code>validation</code> images.</p> <p>In the final submission, it might have been advisable to use all the images for training or to take the average across folds, but considering the remaining time until the competition's end, it was not realistic, so I gave up.</p> <h3>Ensemble &amp; TTA</h3> <p>The ensemble policy is:</p> <ul> <li>TTA: <code>hflip</code> (x2)</li> <li>seed ensemble: x4</li> <li>different backbones and input resolutions: x3</li> </ul> <p>I tested several backbones including recently published ones, and find EfficientNet and PVTv2 as ensemble seeds have good training time &amp; performance tradeoff.</p> <p>The final submission was composed of 24 models in total.</p> <table> <thead> <tr> <th>backbone</th> <th>used time frames</th> <th>input resolution</th> <th>TTA</th> <th>random seed</th> </tr> </thead> <tbody> <tr> <td>efficientnet_b3</td> <td>[1, 2, 3, 4]</td> <td>768</td> <td>hflip (x2)</td> <td>x4</td> </tr> <tr> <td>pvt_v2_b3</td> <td>[1, 2, 3, 4]</td> <td>640</td> <td>hflip (x2)</td> <td>x4</td> </tr> <tr> <td>pvt_v2_b5</td> <td>[1, 2, 3, 4]</td> <td>512</td> <td>hflip (x2)</td> <td>x4</td> </tr> </tbody> </table> <p>The CV score on validation set is <strong>0.692</strong>, and private LB score is <strong>0.700</strong>.</p> <h3>Other findings to be noted</h3> <h4>Technical tips of training with larger resolutions</h4> <p>In my moderate machine environment (RTX 3090 Ti x1), Training on higher resolution tend to get CUDA memory overflow error, or result in too small batch sizes. So I used gradient check-pointing and FP16 training to overcome this issue.</p> <h4>Geometric Distribution</h4> <p>Using metadata, I found the geometric distribution of contrails are very different between training and validation sets.</p> <p>As the authors state in paper [1], I believe there are geographical observation points that is only appeared in the training data, and not included in the validation. I also implemented a fold split based on geographical distribution, but I was unable to effectively utilize this information.</p> <blockquote> <p>To further boost the number of positives in the dataset, we also included some GOES-16 ABI imagery at locations in the US where Google Street View images of the sky contained contrails.</p> </blockquote> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2Fca513333ca52eca20a62c9ed0a5cc082%2Fgeometric_distributions.png?generation=1691723168388936&amp;alt=media" alt=""></p> <h3>Things that did not worked</h3> <ul> <li>PL</li> <li>randomly chose individual labels</li> <li>DeepLabv3 as a decoder</li> </ul> <h2>Acknowledgements</h2> <p>Thank you for holding this competition. In addition to tackling an interesting challenge, I was able to review and recap all the methods I used in previous CV competitions.</p> <h2>Sources</h2> <ul> <li>[1] <a href="https://arxiv.org/abs/2304.02122" target="_blank">Joe et.al., OpenContrails: Benchmarking Contrail Detection on GOES-16 ABI, 2023</a></li> <li>[2] <a href="https://github.com/opencv/opencv/issues/11784" target="_blank">warpAffine: correct coordinate system, documentation and incorrect usage</a></li> </ul> <h2>Appendix</h2> <h3>A. 
Effect of misalignment correction and soft labels</h3> <p>In late submission, I tested how misalignment of labels affect the score.<br> I also tested the effect of soft labels as some participants said effective.</p> <h4>Baseline model</h4> <p>To reduce training cost, I simply tested on the basic 2D U-Net architecture where only current frame is used, and no temporal feature modulator.<br> Other setups are like these:</p> <table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>backend</td> <td>efficientnet_b3</td> </tr> <tr> <td>crop_size</td> <td>512x512</td> </tr> <tr> <td>loss</td> <td>BCE(pos_weight=8) + Dice</td> </tr> <tr> <td>augmentation</td> <td>ShiftScaleRotate(p=0.6), hflip</td> </tr> <tr> <td>ensemble</td> <td>4x seed ensemble</td> </tr> <tr> <td>TTA</td> <td>hflip</td> </tr> </tbody> </table> <h4>Misalignment correction</h4> <p>Since label was misaligned by (0.5, 0.5) pixels, I upscale the input image by 2x, and shifted (1, 1) pixels. It is simpler than correcting labels on training time and inverse transform them on inference time.<br> Correction was implemented by the below algorithm.<br> Note that additional shift coefficient (0.5, 0.5) is added to transformation matrix as a workaround of the open issue of OpenCV's <code>warpAffine</code> function[2].</p> <pre><code>M = np.array([[, , + ], [, , + ]], dtype=np.float32) img = cv2.warpAffine( img, M, ( * W, * H), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT ) </code></pre> <h4>Soft Label</h4> <p>Soft labels are generated from <code>human_individual_masks.npy</code>.<br> Since the data tab on the competition page saids:</p> <blockquote> <p>Pixels were considered a contrail when &gt;50% of the labelers annotated it as such. Individual annotations (<code>human_individual_masks.npy</code>) as well as the aggregated ground truth annotations (<code>human_pixel_masks.npy</code>) are included in the training data.</p> </blockquote> <p>I created the soft labels in the below processing:</p> <ul> <li>take the mean of individual annotations</li> <li>multiplied by 2</li> <li>clip by (0, 1)</li> </ul> <pre><code>human_pixel_mask = np.clip(individual_pixel_mask.mean(-) * , , ) </code></pre> <h4>Result</h4> <p>I trained three types of models and tested on validation data (CV) and leader board(Public and Private):</p> <ol> <li>baseline</li> <li>baseline with misalignment correction (MC)</li> <li>baseline with MC and soft labels (SL)</li> </ol> <p>The result shows both MC and SL has independent positive gains by 0.59% and 0.70% respectively on the private LBs.</p> <table> <thead> <tr> <th>description</th> <th>ensemble seeds</th> <th>crop_size</th> <th>CV</th> <th>Public</th> <th>Private</th> <th>gain</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>4x</td> <td>512</td> <td>0.6672</td> <td>0.67818</td> <td>0.67669</td> <td>-</td> </tr> <tr> <td>+MC</td> <td>4x</td> <td>512</td> <td>0.6708</td> <td>0.68131</td> <td>0.68259</td> <td>0.59%</td> </tr> <tr> <td>+MC +SL</td> <td>4x</td> <td>512</td> <td>0.6766</td> <td>0.69623</td> <td>0.68956</td> <td>1.29%</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2F85d08acac7f886c3270dd1da47992811%2FEffect%20of%20misalignment%20correction%20%20soft%20labels.png?generation=1691897900692345&amp;alt=media" alt=""></p>
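<p>Returning to the Temporal Feature Modulator from the approach section, below is a rough sketch of the idea: the per-frame 2D feature maps of the deepest encoder level are fused into the current frame before going to the decoder, while shallower levels are simply sliced at T=4. The fusion via a single 3D convolution is an assumption for illustration (the write-up does not spell out the exact operation), and the channel and frame counts are placeholders.</p>
<pre><code>import torch
import torch.nn as nn

class TemporalFeatureModulator(nn.Module):
    """Fuses per-frame features of the deepest level into the current-frame map."""

    def __init__(self, channels: int, n_frames: int = 4):
        super().__init__()
        self.n_frames = n_frames
        # Collapse the frame axis so that the output is a single 2D feature map.
        self.fuse = nn.Conv3d(channels, channels,
                              kernel_size=(n_frames, 3, 3), padding=(0, 1, 1))

    def forward(self, feats: torch.Tensor) -&gt; torch.Tensor:
        # feats: (batch * n_frames, C, H, W), frames stacked along the batch axis.
        bt, c, h, w = feats.shape
        x = feats.view(bt // self.n_frames, self.n_frames, c, h, w)
        x = x.permute(0, 2, 1, 3, 4)       # (B, C, T, H, W)
        return self.fuse(x).squeeze(2)     # (B, C, H, W), passed to the decoder

# Shallower feature maps are just sliced at the current frame, e.g.
# feat.view(b, n_frames, c, h, w)[:, -1], before being fed to the U-Net decoder.
</code></pre>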
ICR - Identifying Age-Related Conditions
Why everybody uses `MultilabelStratifiedKFold` WRONG: unveiling the DECEPTION
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello! I'm thrilled to share an exciting breakthrough I had while working on this competition.</p> <p>I invested considerable effort into exploring and experimenting with validation techniques while cracking the challenge. I observed a trend where many participants were applying <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/413099" target="_blank">the iterative stratification technique</a> on multiple columns. Personally, I used <code>Class</code>, <code>Alpha</code>, <code>Beta</code>, <code>Gamma</code>, <code>Delta</code>. It was also suggested to utilize <code>MultilabelStratifiedKFold</code> from the <a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a> library. <a href="https://www.kaggle.com/code/tauilabdelilah/icr-create-folds" target="_blank">Here</a> is an example of how folds can be created using it. However, as I delved deeper into my research, a startling revelation hit me like a ton of bricks.</p> <p>Ladies and gentlemen, brace yourselves: <strong>The MultilabelStratifiedKFold technique is nothing short of a colossal DECEPTION!</strong></p> <p>Here is the kicker. Turns out, when dealing with categorical string variables, the stratification achieved by <code>MultilabelStratifiedKFold</code> falls alarmingly short of its intended goal. The distribution of data points within groups across different folds is far from balanced. But we can improve it. Two game-changing ideas:</p> <ol> <li><strong>Convert those string labels into numbers before stratification.</strong> Take a look at <a href="https://github.com/trent-b/iterative-stratification/blob/master/iterstrat/ml_stratifiers.py#LL49C5-L49C19" target="_blank">this</a> line of code in the <code>iterative-stratification</code> library. Picture what happens when we perform <code>labels.sum(axis=0)</code>. If <code>labels = [0, 0, 0, 0, 1, 1, 1, 1]</code>, the result is 4. However, when <code>labels = [a, a, a, a, b, b, b, b]</code>, the result is <code>aaaabbbb</code> (some really weird stuff).</li> <li><strong>Use one-hot-encoding</strong>, just as <a href="https://github.com/trent-b/iterative-stratification/issues/9#issuecomment-646868889" target="_blank">suggested</a> by the creator of the iterative-stratification library.</li> </ol> <p>I vividly demonstrate what is happening – in my notebook. Please take a look: <a href="https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook" target="_blank">https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook</a></p> <p>To conclude, I made the submission using <a href="https://www.kaggle.com/code/thelastsmilodon/lb-0-06-cv-0-2-tabpfn-xgb-model" target="_blank">this</a> public kernel (without post-processing). It scored 0.49 on private. Then I added my validation approach, presented in <a href="https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook" target="_blank">the notebook</a> (with OHE before stratification). 
This gave a remarkable leap to a private score of 0.41 and a bronze medal!</p> <p>Once again:</p> <p><strong>0.49 (top 35-38%) -&gt; 0.41 (top 6%) just by adding one-hot encoding of the labels before stratification</strong></p> <p>What do you think? Did anyone notice something similar during the competition? I would appreciate your feedback!</p>
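<p>For completeness, here is a minimal sketch of the fold creation with one-hot encoded labels; the column names follow this write-up, while the number of splits and the merge key are ordinary choices rather than prescriptions.</p>
<pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

train = pd.read_csv("train.csv").merge(pd.read_csv("greeks.csv"), on="Id")

# One-hot encode the categorical targets so the stratifier counts occurrences
# instead of concatenating strings when it sums the label matrix.
labels = pd.get_dummies(train[["Class", "Alpha", "Beta", "Gamma", "Delta"]].astype(str))

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
train["fold"] = -1
for fold, (_, valid_idx) in enumerate(mskf.split(train, labels)):
    train.loc[valid_idx, "fold"] = fold
</code></pre>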
Google Research - Identify Contrails to Reduce Global Warming
58th Place Solution (My first write-up!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Overview</h3> <p>First, thanks to Google Research for hosting an interesting and well-organized competition. I am always happy to try and build solutions that help reduce our environmental impact. Second, thanks to everyone who participated and shared ideas throughout this competition. I always learn a lot from the code and discussion tabs.</p> <p>My solution is an ensemble of Unet models with Timm backbones. The optimal threshold for each model was tuned using an out-of-fold validation set. No augmentation was applied to the images during training.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2F98f1fa7711e8116a4b40933f13209bbd%2Ficrgw-v1.drawio.png?generation=1691625764325229&amp;alt=media" alt="Ensemble Image"></p> <p>In order of highest to lowest validation dice scores, the backbones I used were maxxvitv2_rmlp_base_rw_384, resnest269, maxvit_base_tf_512, and mit_b4. I also used stochastic weight averaging on the last 5 epochs of training for the maxvit models, and no SWA for the other models. All of my experiments were done on small models and image sizes so that I could experiment quickly. I only scaled up the image sizes and model parameters in the last week of the competition for the final submission.</p> <p>Please leave any criticisms/suggestions/questions. The more the better :)</p> <h3>Other Attempts</h3> <p>Each item on this list either did not improve performance or was too computationally expensive for me to pursue.</p> <ul> <li>Losses: Tversky, LogCoshDice, BCE</li> <li>Downsampling/Upsampling Interpolation Methods</li> <li>Removing Islands </li> <li>Openmmlab (struggled to get this working)</li> <li>Deep supervision</li> <li>MANet, Unet++, Unet+++, and others..</li> </ul> <h3>Frameworks</h3> <p>Models: <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch</a>,<a href="https://github.com/huggingface/pytorch-image-models/tree/main/timm" target="_blank">Timm</a><br> Training: <a href="https://github.com/Lightning-AI/lightning" target="_blank">Lightning-AI</a><br> Logging: <a href="https://wandb.ai/" target="_blank">WandB</a></p> <hr> <p>Finally, congrats to <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> for 1 submission gold. Very impressive!</p>
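<p>The stochastic weight averaging over the last few epochs mentioned above can be sketched with PyTorch's built-in <code>swa_utils</code> as below. This is a generic illustration rather than the training loop used here; the epoch counts and the loader format are assumptions.</p>
<pre><code>import torch
from torch.optim.swa_utils import AveragedModel, update_bn

def train_with_swa(model, loader, optimizer, loss_fn, n_epochs=30, swa_last=5, device="cuda"):
    swa_model = AveragedModel(model)
    for epoch in range(n_epochs):
        model.train()
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
        # Average the weights only over the last `swa_last` epochs.
        if epoch &gt;= n_epochs - swa_last:
            swa_model.update_parameters(model)
    update_bn(loader, swa_model, device=device)  # refresh BatchNorm running statistics
    return swa_model
</code></pre>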
Google Research - Identify Contrails to Reduce Global Warming
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I thank the organizer and Kaggle for hosting this interesting competition. I also thank many Kagglers who wrote solutions in past competitions, without which I could not compete for segmentation tasks.</p> <h2>Overview</h2> <ul> <li>U-Net with MaxViT encoder</li> <li>Input is the standard ash color image</li> <li>Target y is soft; the average of individual masks</li> <li>BCE (binary cross entropy) loss</li> <li>U-Net trained for symmetrized label, y_sym, which is the label shifted by 0.5 pixels</li> <li>Shift-scale-rotate augmentation</li> <li>Additional tiny convolution trained to map y_sym to y</li> </ul> <h2>Rotation augmentation</h2> <p>Augmentation is extremely important in this competition to suppress overfit and train longer. With rotation augmentation I could train 40-50 epochs, compared to 10-20 epochs without augmentation. The test-time augmentation (TTA) is also very effective, adding ~0.006 to the score for free.</p> <p>This is basic but does not work as usual in this competition because the label is shifted 0.5 pixels to the right and bottom with respect to the contrails. Random rotation augmentation would shift the label to random directions and make the model impossible to learn the right-bottom aligned ground-truth labels.</p> <h2>Discover the 0.5-pixel shift</h2> <p>I was very confused when the score dropped with flip and rot90 (multiples of 90° rotation) augmentations. In physics, what symmetry the system has is the first thing to consider, and although westerly winds or Coriolis force could make the physics asymmetric under flip or rotation, I could not believe that those affect the contrail detection. I visualized a prediction with a model trained without augmentation and applied it to a 180°-rotated input image to see why the augmentation did not work.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2Ffe56c0bee942eb6591adabafbe0daf39%2Fcontrail_label_shift.png?generation=1691669316963209&amp;alt=media" alt=""> </p> <p>The blue-green-red stripe pattern from top to bottom shows false negative, true positive, and false positive, which means that the rotated label is shifted up compared to the predicted label; that is, the original labels are shifted down compared to the contrails. 
I also observed the same pattern for left and right.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2Fbb92508e8a6b1d8392819ac63cb36cd6%2Fcontrail_fish.png?generation=1691669701057157&amp;alt=media" alt=""></p> <p>If the labels are shifted 0.5 pixels to top left (right panel in the figure above), then the rotated label is consistent with the original label in terms of the contrail-label offset.</p> <ul> <li>I first shift the label by 0.5 pixels and create y_sym</li> <li>y_sym has size 512×512 and is sampled from y on a shifted regular grid with bilinear interpolation</li> <li>U-Net is trained with y_sym </li> <li>Additional tiny convolution of 5x5 with stride 2 is trained to learn the mapping from y_sym to right-bottom aligned y, using 5% of the data when the augmentations are not applied randomly (note that y cannot be augmented).</li> <li>At inference time, y_sym is averaged for 8 patterns of rot90 and flip TTAs. The conv5x5 is applied after the y_sym.</li> </ul> <p>The augmentation is,</p> <pre><code>import albumentations as A A.Compose([ A.RandomRotate90(=1), A.HorizontalFlip(=0.5), A.ShiftScaleRotate(=30, =0.2) ]) </code></pre> <p>I refer to my public notebook for the details:</p> <p><a href="https://www.kaggle.com/junkoda/base-unet-model-for-the-1st-place" target="_blank">https://www.kaggle.com/junkoda/base-unet-model-for-the-1st-place</a></p> <h2>Models</h2> <p>The final prediction is the weighted mean of two models with a threshold ~0.45. I tuned the threshold and the weights using the validation set. Both models are U-Net using maxvit_tiny_tf_512.in1k as the encoder, but one uses single time t=4, and the other use four times t = 1 - 4. The input image size is 1024×1024 for both models.</p> <h2>Single-time model</h2> <ul> <li>Upscale the input image to 1024×1024</li> <li>Drop the final decoder layer of upscale and output 512×512</li> </ul> <h2>4-panel model</h2> <p>In order to use the time information, I tried 3D ResNet and ConvLSTM, but I was not able to make them work at all. The only thing I could do was to pack four 512×512 image at t=1,2,3,4 into one 1024×1024 image and hope that self attention in MaxViT look at different times.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F954117%2F291ed2fb89c1e4b492f0d5890ad06868%2Fimg4.png?generation=1691671228909061&amp;alt=media" alt=""></p> <p>In the U-Net, I only pass the quarter of the features (H/2, W/2) to the decoder, which corresponds to the t=4 quarter. The rest is the same as the single-time model. Concatenating images in the spacial direction appear in Kaggle once in a while. I remember <a href="https://www.kaggle.com/competitions/g2net-gravitational-wave-detection/discussion/275433" target="_blank">CPMP's solution</a> for the G2Net blackhole merger competition, in which he stacked 3 detector data horizontally rather than stacking them as 3 channels. 
</p> <p>Training details:</p> <ul> <li>Train 5 out of k=10 folds, using the out-of-fold to monitor the validation loss</li> <li>0.5 epochs of linear warmup to learning rate 8e-4, followed by cosine annealing</li> <li>batch size 4 × gradient accumulation 2</li> <li>AdamW with weight_decay = 0.01</li> <li>35 or 40 epochs of training</li> </ul> <pre><code>Scores CV Public Private input size Single time 512 0.697 0.707 0.712 Single time 1024 0.703 0.719 0.716 4-panel 1024 0.704 0.719 0.722 Ensemble 0.706 0.725 0.724 </code></pre> <p>All scores are for 5-fold × 8 TTA mean.</p> <h2>Things I was not able to do</h2> <ul> <li>I did not use pseudo labeling. I tried pseudo labeling at an early stage. It boosted the single model score a lot, but the benefit was unimpressive after 5-fold mean.</li> <li>I was not able to train large models. I tried MaxViT small and base, and many other models, but the improvement was unclear. I increased the input image size instead. This failure must be due to my insufficient experience. </li> <li>I used only positive data (dropping data with no positive labels) for model evaluation because it takes 1/2 time to train. However, I was not able to use it in the real prediction; it performs very bad for negative samples, and my classifier was not good enough to exclude such false negatives.</li> <li>Removing small masks is a basic technique in segmentation competition, but here, removing even 1-pixel clusters (connected components) was not a good idea. I tried to remove false positive clusters based on some cluster statistics (size, density, max density, etc), but U-Net was cleverer than my post processing.</li> </ul> <h2>Codes</h2> <ul> <li>Training: <a href="https://github.com/junkoda/kaggle_contrails_solution" target="_blank">https://github.com/junkoda/kaggle_contrails_solution</a></li> <li>Inference notebook: <a href="https://www.kaggle.com/code/junkoda/contrails-submit" target="_blank">https://www.kaggle.com/code/junkoda/contrails-submit</a></li> </ul> <p>Update 2023-08-20: Add links to the codes and minor english corrections.</p>
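<p>The 8-pattern rot90/flip test-time augmentation described above can be written compactly as in the sketch below. It assumes a generic segmentation model whose output has the same spatial layout as its input; it illustrates only the TTA averaging, so the 0.5-pixel handling with the small extra convolution is omitted.</p>
<pre><code>import torch

@torch.no_grad()
def dihedral_tta(model, x):
    """Average sigmoid predictions over the 8 flip/rot90 symmetries of the input."""
    preds = []
    for flip in (False, True):
        xf = torch.flip(x, dims=[-1]) if flip else x
        for k in range(4):
            xr = torch.rot90(xf, k, dims=[-2, -1])
            p = model(xr).sigmoid()
            # Undo the transforms so every prediction is aligned with the input.
            p = torch.rot90(p, -k, dims=[-2, -1])
            if flip:
                p = torch.flip(p, dims=[-1])
            preds.append(p)
    return torch.stack(preds).mean(dim=0)
</code></pre>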
Google Research - Identify Contrails to Reduce Global Warming
3rd Place Solution: 2.5D U-Net
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host and Kaggle staff for holding the competition, and congratulations to the winners! I also appreciate my teammates ( <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>) a lot.</p> <h2>Overview</h2> <p>Our solution is an ensemble of three pipelines by each member. Here, let me mainly explain my pipeline, which was used as a main solution.<br> My solution is based on a simple 2.5D U-Net (described below). I used 512x512 (256x256 for experimental phases) ash color images of the <a href="https://www.kaggle.com/code/inversion/visualizing-contrails" target="_blank">official notebook</a> as input and predicted a mean value of human individual masks. I trained a model for 25 epochs by AdamW with a cosine annealing scheduler with warmup. The loss function I used was <code>(-dice_coefficient + binary_cross_entropy) / 2</code>.</p> <h2>Validation</h2> <p>At the early stage of the competition, I was doing 5-fold cross validation. The trends of out-of-fold scores and validation scores differed a little possibly because of the difference in positive-pixel ratios.<br> After the training get computationally heavy, I decided to check only the validation score of the training of the entire training data. The validation score seemed correlated with a public LB with a small noise.</p> <h2>Architecture</h2> <p>Our main approach was so-called 2.5D: making 3D input 2D by stacking frames to a batch dimension and input to 2D backbones.<br> I first tried 2.5D U-Net architecture with 3D convolutions after the whole U-Net and got a small gain compared to 2D models (around +0.01 in validation).<br> Then I tried to move 3D convolutions to the middle of U-Net's skip connection layers, which have richer information of each downsampled feature. We used frames 2, 3, and 4 (0-indexed). In 3D convolution, we reduced the frame dimension from 3 to 1 by stacking two convolutions with kernel_size=2 and padding=0. With this, we got a big gain (around +0.02 in validation).<br> The pseudo-code is as follows (depends heavily on segmentation_models.pytorch library):</p> <pre><code> (torch.nn.Sequential): (): ().__init__( torch.nn.Conv3d(in_channels, out_channels, kernel_size, padding=padding, padding_mode=), torch.nn.BatchNorm3d(out_channels), torch.nn.LeakyReLU(), ) (torch.nn.Module): (): ().__init__() self.n_frames = self.backbone = smp.Unet(...) 
conv3ds = [ torch.nn.Sequential( Conv3dBlock(ch, ch, (, , ), (, , )), Conv3dBlock(ch, ch, (, , ), (, , )) ) ch self.backbone.encoder.out_channels[:] ] self.conv3ds = torch.nn.ModuleList(conv3ds) () -&gt; torch.Tensor: total_batch, ch, H, W = feature.shape feat_3d = feature.reshape(total_batch // self.n_frames, self.n_frames, ch, H, W).transpose(, ) conv3d_block(feat_3d).squeeze() () -&gt; torch.Tensor: n_batch, in_ch, n_frame, H, W = x.shape x = x.transpose(, ).reshape(n_batch * n_frame, in_ch, H, W) self.backbone.check_input_shape(x) features = self.backbone.encoder(x) features[:] = [self._to2d(conv3d, feature) conv3d, feature (self.conv3ds, features[:])] decoder_output = self.backbone.decoder(*features) masks = self.backbone.segmentation_head(decoder_output) masks </code></pre> <h2>Data augmentation</h2> <p>While full flip and rotation augmentation did not work because of the pixel-shift issues pointed out by <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430618" target="_blank">1st place solution</a> and <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430479" target="_blank">9th place solution</a>, applying them by a small ratio enhanced the performance a little. The whole augmentation I used was as follows:</p> <pre><code> albumentations A augments = [ A.HorizontalFlip(p=), A.VerticalFlip(p=), A.RandomRotate90(p=), A.ShiftScaleRotate(, , , p=), A.RandomResizedCrop(, , scale=(, ), ratio=(, ), p=), ] </code></pre> <h2>Pseudo label</h2> <p>We discretized by 0.25 the models' predictions for 2, 3, 5, 6, and 7 frames and used them as pseudo labels. Since there may be a distribution shift from the original training data, we used pseudo labels for pretraining and the original training data for finetuning.</p> <h2>Threshold</h2> <p>Since we used validation data for training some models, optimizing the threshold by validation data was not easy. We adopted a percentile threshold. We confirmed by validation data that the optimal percentile was almost equal to the ratio of positive pixels (=0.18%). Therefore, we identified the percentile in test data of the best threshold of some models in validation data by LB probing using submission time. It was about 0.16% and I hope it is a correct ratio.</p> <h2>Other tips</h2> <ul> <li>Setting grad_checkpointing saved more than half of memory usage.</li> <li>Increasing the number of decoder channels of U-Net slightly enhanced the performance.</li> <li>The mean of the batch-wise dice coefficient does not correspond to the global dice coefficient and is also unstable with small batch sizes. To mitigate this, I heuristically added 700000 to the numerator and 1000000 to the denominator.</li> </ul> <h2>Final submission</h2> <p>Our best single model was 0.706/0.71770/0.71629 (validation/private/public) by maxvit_large. This could still win 3rd place!<br> By ensembling 18 2.5D models with different backbones (maxvit_large_tf_512, tf_efficientnet_l2, resnest269e, maxvit_base_tf_512, maxvit_xlarge_tf_512, tf_efficientnetv2_xl) and slightly different setups (use pseudo label, include validation data for training, finetune lr, etc), we achieved 0.72233 of private LB. 
Also, adding <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a> and <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>'s models increased the private score to 0.72305, which could have won 2nd place by 0.00001!<br> Unfortunately, we couldn't select that submission. On the last day, we added a model trained with full flip and rotation augmentation and test-time augmentation, which didn't change the validation score a lot. This lowered the generalization to private data probably because of the pixel-shift issue (or just a random fluctuation).<br> Anyway, being unable to find out the pixel-shift issue was the reason for the loss, and I learned a lot.</p> <h2>What did not work</h2> <ul> <li>Double U-Net architecture (training was unstable…)</li> <li>Increasing image size by finetune</li> <li>Adding conv2d layers between conv3d layers</li> </ul> <h2>Yiemon773 Part</h2> <p>Results of my 2d models' ensemble<br> Private LB: 0.709+ w/ pseudo labels<br> Private LB: 0.705+ w/o pseudo labels</p> <h3>Preprocess</h3> <p>I changed the normalization process from <code>normalize_range</code> to <code>normalize_mean_std</code>. The mean and std of each band are calculated using the training data.</p> <h3>backbone</h3> <p>resnest269e, maxvit_base, maxvit_large, efficientnetv2_l</p> <h3>Loss function</h3> <p>Weighted mean of following losses: Hard-label Dice, Hard-label BCE, Soft-label Dice, Soft-label BCE</p> <h3>What did not work</h3> <ul> <li>Many augmentations <ul> <li>Mixup</li> <li>frame shuffle</li> <li>channel shuffle</li> <li>…</li></ul></li> <li>Using bands other than 11, 13, 14, and 15 </li> </ul> <h2>charmq Part</h2> <h3>Architecture</h3> <p>2.5d models which are mentioned above and 2d models. 2.5d models performed better than 2d models.</p> <h3>backbone</h3> <p>resnest269e, resnetrs420, maxvit_base, maxvit_large, efficientnetv2_l</p> <h3>image size</h3> <p>2.5d models and 2d maxvit models were trained with (512, 512). (1024, 1024) was better than (512, 512) with 2d resnest269 but didn't work with other models.</p> <h3>Loss function</h3> <p>Weighted mean of following losses: Hard-label (pixel mask) Dice, Hard-label (1-(pixel mask))Dice, Soft-label (mean of individual mask) Dice, and Hard-label (min and max of individual mask) Dice</p> <h3>What did not work</h3> <ul> <li>augmentation cropping the area around the positive pixels</li> <li>UPerNet with convnext and swin transformer (mmsegmentation)</li> </ul> <h3>Findings</h3> <ul> <li>large architectures such as resnest269e, resnetrs420 and maxvit works well</li> <li>the training of maxvit with amp is unstable<ul> <li>we turned off amp with maxvit</li> <li>gradient checkpointing contributes to reduce memory </li></ul></li> <li>efficientnet_l2 works well but sometimes output nan<ul> <li>we simply replaced the nan output to 0</li></ul></li> <li>ensemble boosts score<ul> <li>e.g. 0.68 model + 0.68 model -&gt; 0.695</li> <li>even ensemble of 5 fold model boosts public score</li></ul></li> </ul> <h2>Acknowledgment</h2> <p>We would like to appreciate Preferred Networks, Inc for allowing us to use computational resources.</p>
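<p>To make the percentile-threshold idea above concrete, here is a minimal sketch of how a fixed positive-pixel ratio can be turned into a binary mask. The function name, the use of NumPy, and the usage lines are illustrative assumptions rather than the team's actual code; only the ~0.16% figure comes from the write-up.</p>
<pre><code>import numpy as np


def percentile_threshold(probs, positive_ratio=0.0016):
    """Keep the top `positive_ratio` fraction of pixels of a probability map."""
    # threshold at the (1 - ratio) quantile, computed over all pixels passed in
    # (ideally the concatenated predictions for the whole test set)
    cutoff = np.quantile(probs, 1.0 - positive_ratio)
    return (probs &gt; cutoff).astype(np.uint8)


# usage sketch: ensemble_probs = weighted average of the member models' probability maps
# mask = percentile_threshold(ensemble_probs)
</code></pre>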
Google - American Sign Language Fingerspelling Recognition
🥈27th Place Solution🥈
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>TLDR</h1> <p>This competition is very similar to speech recognition, so I focused my new ideas on techniques used in speech recognition. Among them, the most effective ones were the <strong>Conformer architecture</strong>, the <strong>interCTC technique</strong>, <strong>strong augmentation</strong> and <strong>no masking</strong>.<br> <br></p> <h1>Data Processing</h1> <p>Out of the provided 543 landmarks, I utilized 42 hand, 33 pose, and 40 lip landmarks. Input features were constructed by concatenating the xy coordinates (dropping z) and the motion (xy[1:] - xy[:-1]).</p> <ul> <li><p>preprocess</p>
<pre><code>import tensorflow as tf

# column layout assumed: [lhand, rhand, pose, face] blocks, each stored as
# all x coordinates followed by all y coordinates
def pre_process(x):
    x = tf.where(tf.math.is_nan(x), tf.zeros_like(x), x)
    n_frames = tf.shape(x)[0]
    n_lh = 2 * args.n_hand_landmarks
    n_ps = 2 * args.n_pose_landmarks
    lhand = tf.transpose(tf.reshape(x[:, :n_lh], (n_frames, 2, args.n_hand_landmarks)), (0, 2, 1))
    rhand = tf.transpose(tf.reshape(x[:, n_lh:2 * n_lh], (n_frames, 2, args.n_hand_landmarks)), (0, 2, 1))
    pose = tf.transpose(tf.reshape(x[:, 2 * n_lh:2 * n_lh + n_ps], (n_frames, 2, args.n_pose_landmarks)), (0, 2, 1))
    face = tf.transpose(tf.reshape(x[:, 2 * n_lh + n_ps:], (n_frames, 2, args.n_face_landmarks)), (0, 2, 1))
    x = tf.concat([lhand, rhand, pose, face], axis=1)
    x = tf.reshape(x, (n_frames, args.input_dim))
    return x


def decode_fn(record_bytes, augment=False):  # function/feature names partly assumed
    schema = {
        'coordinates': tf.io.VarLenFeature(tf.float32),
        'phrase': tf.io.VarLenFeature(tf.int64),
    }
    x = tf.io.parse_single_example(record_bytes, schema)
    coordinates = tf.reshape(tf.sparse.to_dense(x['coordinates']), (-1, args.input_dim))
    phrase = tf.sparse.to_dense(x['phrase'])
    if augment:
        coordinates, phrase = augment_fn(coordinates, phrase)
    # motion features: xy[1:] - xy[:-1], padded back to the original length
    dx = tf.cond(tf.shape(coordinates)[0] &gt; 1,
                 lambda: tf.pad(coordinates[1:] - coordinates[:-1], [[0, 1], [0, 0]]),
                 lambda: tf.zeros_like(coordinates))
    coordinates = tf.concat([coordinates, dx], axis=-1)
    return coordinates, phrase
</code></pre></li> </ul> <p><br></p> <h1>Augmentation</h1> <p>Three types of augmentations were applied, each of which significantly influenced CV.</p> <ul> <li><p>flip hand (CV improvement: ~0.003)</p>
<pre><code>def flip_hand(video):
    video = tf.reshape(video, shape=(-1, args.n_landmarks, 2))
    hands = video[:, :int(2 * args.n_hand_landmarks)]
    other = video[:, int(2 * args.n_hand_landmarks):]
    lhand = hands[:, :args.n_hand_landmarks]
    rhand = hands[:, args.n_hand_landmarks:]
    lhand_x, rhand_x = lhand[:, :, 0], rhand[:, :, 0]
    # mirror each hand's x coordinates around its own mean
    lhand_x = tf.negative(lhand_x) + 2 * tf.reduce_mean(lhand_x, axis=1, keepdims=True)
    rhand_x = tf.negative(rhand_x) + 2 * tf.reduce_mean(rhand_x, axis=1, keepdims=True)
    lhand = tf.concat([tf.expand_dims(lhand_x, axis=-1), lhand[:, :, 1:]], axis=-1)
    rhand = tf.concat([tf.expand_dims(rhand_x, axis=-1), rhand[:, :, 1:]], axis=-1)
    flipped_hands = tf.concat([rhand, lhand, other], axis=1)
    flipped_hands = tf.reshape(flipped_hands, shape=(-1, args.input_dim))
    return flipped_hands
</code></pre></li> <li><p>flip video and phrase (CV improvement: ~0.005)</p>
<pre><code># reverse the frames and the phrase together (function name assumed)
def flip_video_and_phrase(x, y):
    x = x[::-1]
    y = y[::-1]
    return x, y
</code></pre></li> <li><p>concat 2 videos and 2 phrases (CV improvement: ~0.01)</p>
<pre><code># concat 2 videos and 2 phrases
def cat_augment(inputs, inputs2):
    x, y = inputs
    x2, y2 = inputs2
    x_shape = tf.shape(x)
    x2_shape = tf.shape(x2)
    should_concat = tf.random.uniform(()) &lt; 0.5  # concat probability assumed
    x_condition = should_concat &amp; (x_shape[0] + x2_shape[0] &lt; args.max_frame)
    x = tf.cond(x_condition, lambda: tf.concat([x, x2], axis=0), lambda: x)
    y = tf.cond(x_condition, lambda: tf.concat([y, y2], axis=0), lambda: y)
    return x, y
</code></pre></li> </ul> <p><br></p> <h1>Model</h1> <p>The model was constructed using the architecture proposed in the Conformer paper. 
At first, I employed the Conformer code from <a href="https://github.com/TensorSpeech/TensorFlowASR" target="_blank">https://github.com/TensorSpeech/TensorFlowASR</a>. However, by making specific modifications to the code used by <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a>, I successfully implemented the Conformer architecture, which led to an improvement of 0.008 in the LB score. This improvement is likely due to the removal of the residual blocks between Conformer blocks, which allowed more layers to be stacked and improved robustness against overfitting. Furthermore, incorporating interCTC (<a href="https://arxiv.org/abs/2102.03216" target="_blank">paper</a>) contributed roughly +0.005 to the LB score.</p> <p>Also, since the model's output was computed without masking and a maximum length of 384 was used for the CTC loss "input_length" variable, a larger kernel size (=31) effectively propagated frame information across the full 384-frame range, resulting in improved performance.</p> <ul> <li><p>The overall structure of the model is as follows.</p>
<pre><code># layer choices and constants marked "assumed" were not preserved in the write-up
def get_model(max_len=384, dim=256, num_layers=6):  # hyperparameter defaults assumed
    NUM_CLASSES = 60  # character set + CTC blank (value assumed)
    PAD = 59          # padding token id (value assumed)
    inp = tf.keras.Input((max_len, NUM_FEATURES))  # NUM_FEATURES = coordinate + motion channels
    x = inp
    x = tf.keras.layers.Dense(dim, use_bias=False)(x)  # landmark embedding (layer types assumed)
    x = tf.keras.layers.BatchNormalization()(x)
    xs = []
    for i in range(num_layers):
        # block types and arguments assumed; no residual connections between blocks
        x = Conv1DBlock(dim, kernel_size=31)(x)
        x = ConformerBlock(dim, kernel_size=31)(x)
        xs.append(x)
    classifier = tf.keras.layers.Dense(NUM_CLASSES)
    # interCTC: one head on the intermediate block outputs, one on the later blocks;
    # the exact aggregation of xs is assumed
    x1 = tf.keras.layers.Average()(xs[:num_layers // 2])
    x1 = classifier(x1)
    x2 = tf.keras.layers.Average()(xs[num_layers // 2:])
    x2 = classifier(x2)
    return x1, x2
</code></pre> <p><br></p></li> </ul> <h1>Training</h1> <ul> <li>Scheduler: lr_warmup_cosine_decay</li> <li>Warmup Ratio: 0.1</li> <li>Optimizer: AdamW</li> <li>Weight Decay: 0.01</li> <li>Epochs: 300 (the last 100 epochs on the train data only, without the supplemental data)</li> <li>Learning Rate: 1e-3</li> <li>Loss Function: CTCLoss</li> </ul> <p><br></p> <h1>Didn't Work</h1> <ul> <li>Masking</li> <li>Adding distance features</li> <li>MLM pretraining <br> <br></li> </ul> <h1>Code</h1> <p><a href="https://github.com/siwooyong/Google-American-Sign-Language-Fingerspelling-Recognition" target="_blank">https://github.com/siwooyong/Google-American-Sign-Language-Fingerspelling-Recognition</a><br> <br></p>
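<p>Because the model above returns two CTC heads (an intermediate one and a final one), the interCTC objective reduces to a weighted sum of two CTC losses. The sketch below shows one way to compute it with dense labels; the function names, the 0.3 weight, and the blank index are assumptions, not the author's exact code.</p>
<pre><code>import tensorflow as tf


def ctc_loss(labels, logits, label_length, logit_length, blank_index=0):
    # logits are (batch, time, classes); labels are dense int tensors
    loss = tf.nn.ctc_loss(labels=labels, logits=logits,
                          label_length=label_length, logit_length=logit_length,
                          logits_time_major=False, blank_index=blank_index)
    return tf.reduce_mean(loss)


def inter_ctc_loss(labels, inter_logits, final_logits,
                   label_length, logit_length, inter_weight=0.3):
    # interCTC: apply the same CTC objective to an intermediate representation
    # and mix it with the final-layer CTC loss
    loss_final = ctc_loss(labels, final_logits, label_length, logit_length)
    loss_inter = ctc_loss(labels, inter_logits, label_length, logit_length)
    return (1.0 - inter_weight) * loss_final + inter_weight * loss_inter
</code></pre>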
Google - American Sign Language Fingerspelling Recognition
22nd place solution: CTC Loss, Strong augmentations, CNN+MHSA
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://github.com/SamratThapa120/sign-language-finger-spelling/blob/master/signet/models/feature_extractor_downsampled.py" target="_blank">My best-performing model</a> (<a href="https://github.com/SamratThapa120/sign-language-finger-spelling/blob/master/signet/configs/ctc_loss_with_downsampled_deploy_concataug.py" target="_blank">config</a>)is based on the <a href="https://www.kaggle.com/code/hoyso48/1st-place-solution-training/notebook" target="_blank">1st place solution of the previous competition</a> by <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a>. I trained the model using vanilla CTC Loss, and used greedy decoding for inference. </p> <h3>Things that worked for me:</h3> <ul> <li><p><strong>Longer input frames length:</strong> A longer input frame length of 384 performed better than shorter length of 256. </p></li> <li><p><strong>Deeper/larger model:</strong> My best-performing model has 7 blocks with hidden dimension of 256. It has 9.5M parameters.</p></li> <li><p><strong>Pose keypoints:</strong> Adding pose information was helpful. Best-performing model uses pose+hands+lips+eyes.</p></li> <li><p><strong>CNN+MHSA:</strong> CNN+MHSA model &gt; Only CNN Model&gt;Only MHSA model</p></li> <li><p><strong>Strong augmentation:</strong> My model was performing well based on local CV, and public LB, with strong correlations between the two.But when I tested the model using this <a href="https://github.com/SamratThapa120/gradio-ASL-fingerspelling-recognition" target="_blank">Gradio app</a> using my webcam, it could not recognize my signs, so I had to use strong data augmentations to get the model to work. Especially temporal mask helped because I sign slower than the pros. These augmentations also boosted the public LB score by +0.006.</p> <p>flip_lr_probability=0.5<br> random_affine_probability=0.75<br> freeze_probability=0.5<br> temporal_mask_probability=0.75<br> temporal_mask_range=(0.2,0.4)</p></li> <li><p><strong>concat augmentation:</strong> Randomly concatenate two short landmark sequences, as well as their labels. This improved public LB by +0.008. I applied this augmentation to 40% of all training samples.</p></li> </ul> <h3>Here are other things I tried, that did not contribute to the best-performing model:</h3> <ul> <li><p><strong>transformer-encoder+decoder:</strong> Transformer endoder-decoder model with cross-entropy loss.</p></li> <li><p><strong>transformer-decoder:</strong> CNN+MHSA model and Transformer-like decoder with cross-entropy loss.</p></li> <li><p><strong>Causal-masking in self-attention:</strong> removing causal masking performed better than using causal mask.</p></li> <li><p><strong>Attention-span in self-attention:</strong> I thought that there would not be long-term dependency between frames for this task, so I tried to reduce the attention span of self-attention. 
Although there was no performance degradation, it did not boost performance either.</p></li> <li><p><strong>Focal loss</strong>: I tried the CTC focal loss based on <a href="https://github.com/TeaPoly/CTC-OptimizedLoss/blob/main/ctc_focal_loss.py" target="_blank">this</a> repo, but there were no gains.</p></li> <li><p><strong>Erase landmarks augmentation</strong>: Randomly erase landmarks other than the hand landmarks, to make the model more robust to MediaPipe's detection errors.</p></li> </ul> <p>Originally I was using <a href="https://github.com/SamratThapa120/sign-language-finger-spelling/tree/pytorch" target="_blank">PyTorch</a>, but I switched to TensorFlow because I faced several issues when converting the PyTorch model to TFLite. However, training with CTC loss was up to 10x slower in TensorFlow than in PyTorch. Looking back, I think I should have put more effort into fixing the model conversion issue, as I would have been able to perform more experiments.</p> <p>Congratulations to the winning teams. I would also like to thank Google for hosting this competition; it was a valuable opportunity to learn many new things. Also, I would like to thank everyone who shared their notebooks and unique ideas.</p>
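<p>As a small illustration of the temporal masking that mattered most in the augmentation list above, the sketch below hides a random contiguous 20-40% span of frames with probability 0.75, matching the listed parameters; the function name and the choice to fill with zeros are assumptions.</p>
<pre><code>import numpy as np


def temporal_mask(frames, p=0.75, mask_range=(0.2, 0.4), fill_value=0.0):
    """frames: (n_frames, n_features) landmark sequence."""
    if np.random.rand() &gt;= p:
        return frames
    n = frames.shape[0]
    span = int(n * np.random.uniform(*mask_range))
    if span == 0:
        return frames
    start = np.random.randint(0, n - span + 1)
    out = frames.copy()
    out[start:start + span] = fill_value  # masked frames look like missing landmarks
    return out
</code></pre>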
Google Research - Identify Contrails to Reduce Global Warming
30th Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank Kaggle and the host for such a funny (and <em>nothing-works-well-for-us</em>) competition. This is not a fancy solution but we feel proud of taking the right steps to generalize well to the private test set (it was indeed, 42 positions shake up 😃).</p> <h1>1. <strong>Context</strong></h1> <ul> <li><em>Business context</em>: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/overview</a></li> <li><em>Data context</em>: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></li> </ul> <h1>2. <strong>Overview of the Approach</strong></h1> <p>Our final solution was an ensemble of 8 models which combines different encoders and image sizes. Each model configuration was trained in turn 3 times with different random seeds (24 models in total) and always with all available data (train+validation).</p> <h3><strong>Data preprocessing</strong></h3> <p>ASH false color images, only 2D models with labeled frame and hard labels.</p> <h3><strong>Validation schema</strong></h3> <p>Provided train/validation split.</p> <h1>3. <strong>Overview of the Approach</strong></h1> <h3>3.1. <strong>Individual models and final ensemble</strong></h3> <p>All models are based on UNet architecture from SMP. We combine them using a weighted average of the probabilities dropped by single models, where weights were determined optimizing the global dice score on the validation set using Optuna. 
The final CV score was <strong>0.6825</strong> (which may be a little biased because as we mentioned we optimized the weights directly using the validation set).</p> <table> <thead> <tr> <th>Weight</th> <th>Encoder</th> <th>Image size</th> <th>LR</th> <th>Epochs</th> <th>BS</th> <th>CV score</th> </tr> </thead> <tbody> <tr> <td>0.02</td> <td>timm-resnest101e</td> <td>384</td> <td>5e-4</td> <td>20</td> <td>48</td> <td>0.64397</td> </tr> <tr> <td>0.07</td> <td>maxvit_base_tf_384.in21k_ft_in1k</td> <td>384</td> <td>1e-4</td> <td>20</td> <td>16</td> <td>0.65699</td> </tr> <tr> <td>0.18</td> <td>maxvit_base_tf_512</td> <td>512</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66414</td> </tr> <tr> <td>0.02</td> <td>tf_efficientnetv2_b3.in1k</td> <td>768</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.65178</td> </tr> <tr> <td>0.23</td> <td>tf_efficientnetv2_m.in21k_ft_in1k</td> <td>512</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.6525</td> </tr> <tr> <td>0.14</td> <td>tf_efficientnetv2_l.in21k_ft_in1k</td> <td>384</td> <td>5e-4</td> <td>35</td> <td>24*</td> <td>0.64538</td> </tr> <tr> <td>0.11</td> <td>maxvit_large_tf_224.in1k</td> <td>448</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66407</td> </tr> <tr> <td>0.23</td> <td>maxvit_large_tf_384.in1k</td> <td>384</td> <td>1e-4</td> <td>40</td> <td>8*</td> <td>0.66782</td> </tr> </tbody> </table> <p>*gradient accumulation of 2.</p> <h3>3.2. <strong>Threshold selection</strong></h3> <p>The optimal threshold for the ensemble was 0.43, which was determined running a for loop varying the threshold from 0 to 1 with 0.1 increments and calculating the corresponding global dice. We observed a relatively wide maximum peaking at 0.43 which give us enough confidence to choose that threshold.</p> <h3>3.3. <strong>Training procedure</strong></h3> <p>We used CosineAnnealingLR and AdamW. Maybe it is important to note that we always tried to choose the number of epochs such that the last epoch is the best one. This generalizes better than choosing the best checkpoint during training, especially in our case, where we observed an irregular behavior of the training curves. Finally, once a model was validated, we took the same configuration and train it again using all the available data.</p> <h3>3.4. <strong>Augmentations</strong></h3> <p>We didn't notice the mask shift, so flipping and rotation didn't work for us. 
Other augmentations that worked were:</p>
<pre><code># image size and probabilities are placeholders; the original values were not preserved
A.Compose([
    A.RandomResizedCrop(height=512, width=512, p=0.5),
    A.RandomBrightnessContrast(p=0.5)
])
# CutMix / MixUp refer to the custom implementations below
CutMix(p=0.5)
MixUp(p=0.5)
</code></pre>
<p>These are our implementations of CutMix and MixUp:</p>
<pre><code>import numpy as np
import torch


def rand_bbox(size, lam):
    # size is (N, C, H, W)
    W = size[3]
    H = size[2]
    cut_rat = np.sqrt(1.0 - lam)
    cut_w = int(W * cut_rat)
    cut_h = int(H * cut_rat)

    cx = np.random.randint(W)
    cy = np.random.randint(H)

    bbx1 = np.clip(cx - cut_w // 2, 0, W)
    bby1 = np.clip(cy - cut_h // 2, 0, H)
    bbx2 = np.clip(cx + cut_w // 2, 0, W)
    bby2 = np.clip(cy + cut_h // 2, 0, H)

    return bbx1, bby1, bbx2, bby2


def cutmix(data, target, alpha):
    indices = torch.randperm(data.size(0))
    shuffled_target = target[indices]

    lam = np.clip(np.random.beta(alpha, alpha), 0.3, 0.7)  # clip bounds assumed
    bbx1, bby1, bbx2, bby2 = rand_bbox(data.size(), lam)
    new_data = data.clone()
    new_data[:, :, bby1:bby2, bbx1:bbx2] = data[indices, :, bby1:bby2, bbx1:bbx2]
    # adjust lambda to the exact area ratio of the pasted box
    lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (data.size()[-1] * data.size()[-2]))

    return new_data, target, shuffled_target, lam


def mixup(x, y, alpha):
    assert alpha &gt; 0, "alpha should be larger than 0"
    assert x.size(0) &gt; 1, "need more than one sample to apply mixup"

    lam = np.random.beta(alpha, alpha)
    rand_index = torch.randperm(x.size()[0])
    mixed_x = lam * x + (1 - lam) * x[rand_index, :]
    target_a, target_b = y, y[rand_index]

    return mixed_x, target_a, target_b, lam
</code></pre>
<p>And the losses are calculated as:</p>
<pre><code>if self.cfg.mixup and torch.rand(1)[0] &lt; self.cfg.mixup_p:
    mix_images, target_a, target_b, lam = mixup(images, masks, alpha=self.cfg.mixup_alpha)
    logits = self(mix_images)
    loss = self.loss_fn(logits, target_a) * lam + (1 - lam) * self.loss_fn(logits, target_b)
elif self.cfg.cutmix and torch.rand(1)[0] &lt; self.cfg.cutmix_p:
    mix_images, target_a, target_b, lam = cutmix(images, masks, alpha=self.cfg.cutmix_alpha)
    logits = self(mix_images)
    loss = self.loss_fn(logits, target_a) * lam + (1 - lam) * self.loss_fn(logits, target_b)
</code></pre>
<h3>3.5. <strong>What didn't work</strong></h3> <p>A ton of things:</p> <ul> <li>Using other bands / other data schemas.</li> <li>TTA.</li> <li>We tried to combine the earlier and later frames with the labeled one at each stage of the U-Net, without success (we only tried concatenation).</li> <li>At the end of the competition we discovered that pseudo labels give a big boost for some models (not the strongest ones), but we didn't have enough time to properly experiment with them.</li> <li>…</li> </ul> <h1><strong>References</strong></h1> <ul> <li><a href="https://www.kaggle.com/code/inversion/visualizing-contrails" target="_blank">ASH color scheme</a></li> <li><a href="https://arxiv.org/abs/2304.02122" target="_blank">OpenContrails: Benchmarking Contrail Detection on GOES-16 ABI</a></li> <li><a href="https://www.kaggle.com/egortrushin" target="_blank">@egortrushin</a> <a href="https://www.kaggle.com/code/egortrushin/gr-icrgw-training-with-4-folds" target="_blank">public notebook</a> was our starting point.</li> </ul> <h1><strong>Acknowledgments</strong></h1> <p>To my wonderful teammates <a href="https://www.kaggle.com/maxdiazbattan" target="_blank">@maxdiazbattan</a> <a href="https://www.kaggle.com/edomingo" target="_blank">@edomingo</a> <a href="https://www.kaggle.com/juansensio" target="_blank">@juansensio</a> (if I forgot something please feel free to write it in the comments).</p>
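<p>For reference, the global dice computation and the threshold sweep described in 3.2 can be written in a few lines. This is a hedged sketch: the helper names and the step size are ours, not the team's code.</p>
<pre><code>import numpy as np


def global_dice(preds, targets):
    # dice over all pixels of the validation set at once, as in the competition metric
    intersection = (preds * targets).sum()
    return 2.0 * intersection / (preds.sum() + targets.sum() + 1e-7)


def best_threshold(probs, targets, step=0.01):
    thresholds = np.arange(0.0, 1.0, step)
    scores = [global_dice((probs &gt; t).astype(np.float32), targets) for t in thresholds]
    best = int(np.argmax(scores))
    return thresholds[best], scores[best]


# usage sketch: probs = weighted average of the ensembled probability maps on validation
# threshold, score = best_threshold(probs, val_masks)
</code></pre>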
Google - American Sign Language Fingerspelling Recognition
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The below is the result of a team effort by <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a>, <a href="https://www.kaggle.com/group16" target="_blank">@group16</a>, and myself.</p> <p>We want to thank the hosts for putting together a great competition; it was really fun and we were glad to end up in the gold zone! We've already enjoyed reading some of the top teams' solutions. Congrats to all who participated.</p> <h2>tl;dr</h2> <p>We used a similar architecture to <a href="https://www.kaggle.com/hoyso48" target="_blank">@hoyso48</a>'s solution from the ASL-sign competition. Our model had six Conv1D/Transformer blocks, was trained with CTC loss over 500+ epochs, and utilized heavy augmentations.</p> <p>This competition posed challenges beyond achieving model accuracy. We had to meet model constraints (5-hour submission limit and 40mb model size). We also needed to make sure all of our code converted nicely to tflite, which at times was frustrating. This required us to find a good balance between model size and inference time with minimal postprocessing.</p> <h2>Data and Preprocessing:</h2> <p>We used the training data as the base dataset, but also trained using the supplemental and external (ChicagoWild/Plus) data during fine-tuning. Preprocessing was the same for all datasets:</p> <ul> <li>Standard scaling using the mean and standard deviation for each point.</li> <li>x, y, z coordinates for each point.</li> <li>Points used included: RHAND, LHAND, LIP, POSE, REYE, and LEYE.</li> <li>We tried different settings for the max frame length since it had the most impact on our preprocessing, and ended up using 356.</li> <li>Downsampling non-hand frames: To handle samples that had more than 356 frames, we first removed frames with missing hands at even intervals. This improved performance when we added it to our inference, so eventually we added it to our training pipeline as well.</li> <li>Resize: Examples exceeding our max frame length post-downsampling were resized.</li> </ul> <h2>Augmentations:</h2> <p>Heavy augmentations helped us train for a long time without the risk of overfitting.</p> <ul> <li>Mirror/Flip</li> <li>Random resampling</li> <li>Random rotation</li> <li>Random spatial and temporal masking</li> <li>Temporal cropping</li> <li>Minimal random noise</li> <li>Random scaling</li> </ul> <h2>Training</h2> <p>Given that training each model took a very long time (days running on Colab TPUs), we resumed from the best checkpoints to experiment with different augmentations, data ratios, and learning rate schedules. 
It’s hard to exactly retrace the steps we took to get to our final model, but the general idea was:</p> <ul> <li>Base training without augmentations at a constant learning rate until convergence (~150 epochs).</li> <li>Introducing heavy augmentations and continuing until the validation score plateaued (200-400+ epochs).</li> </ul> <p>Example of how our validation score improved from different experiments over the final week of the competition:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F644036%2F19aa07e39c404ae7599d66184997d351%2FScreenshot%20from%202023-08-26%2016-58-18.png?generation=1693083542531397&amp;alt=media" alt=""></p> <p>Even up until the end of the competition we were able to get small improvements in our score from more training.</p> <h2>Model Architecture:</h2> <p>Initially, I tried an EfficientNet approach similar to the 2nd place solution in the signs competition. Later, I pivoted to the 1st place architecture, similar to the public notebooks. After merging teams we found that both of us had similar architectures, but <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a> made some modifications that improved its performance on this task:</p> <ul> <li>Switched from 3 to 2 Conv1D layers per Conv/Transformer block.</li> <li>Increased to 6 blocks (resulting in a 10.5M parameter model).</li> <li>Parameters: 384 max frames, 320 dimensions, 8 heads per transformer.</li> <li>Did not use a final MaxPooling layer.</li> </ul> <p>We also had experiments that used DebertaV2 instead of the transformer block, but the final model didn't use them.</p> <h2>Increasing model size without retraining:</h2> <p>Towards the competition's end, realizing we didn't have a great way to ensemble models, we wanted to get the biggest model possible without having to retrain. We found that instead of retraining from scratch we could simply expand the number of Conv1D/Transformer blocks and copy the weights over from our previous best model, repeating the final layer weights. This allowed the model to converge very fast and squeeze out some additional accuracy while staying below the 5-hour inference limit.</p> <h2>Attempts not in our final solution:</h2> <ul> <li><strong>Cutmix based on pseudo labels:</strong> The idea was to use model predictions to pinpoint the frames displaying each letter in a video, then apply cutmix at the frame level during training. It worked initially, but it required us to recreate the tfrecords each time we modified the preprocessing or model architecture, so it wasn't used in our final solution. When it was used, we only applied cutmix across similar phrase types (phone numbers mixed only with phone numbers, etc.), but it was interesting to read that the top team found it worked best mixing only within the same signer.</li> <li><strong>Ensembling techniques:</strong> Couldn't find a good solution for this with CTC loss in tflite.</li> <li><strong>Beam search</strong></li> <li><strong>Efficientnet/Transformer</strong></li> <li><strong>DebertaV2</strong></li> </ul> <h2>Postprocessing / Ensemble:</h2> <p><strong>Fill word</strong>: As others have noted, there were a significant number of examples in the dataset with very few frames. These had little or no signal in them related to the target. 
We found it was better to just replace them with a fill word ('+w1- ea-or') We also found that it worked best to apply these to examples that had less than 30 frames AND a prediction phrase that was &lt;= 6 characters.<br> <strong>Model weight averaging</strong> Instead of using the final epoch of each training run, averaging the weights from the final few epochs really helped the CV and LB score. This became less powerful later in the competition when our models were stronger.</p> <h2>Strong Validation/LB Correlation:</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F644036%2F2a8e047f8cb571c018f477396c321a91%2Fsub-lb.png?generation=1693083215072772&amp;alt=media" alt=""></p> <p>We tracked each submission's execution time and CV/LB correlation. This competition had a really strong correlation between our validation set (one parquet file) and the LB. Our best private LB score was 0.788 but we didn't select it because it did not perform as well on the public LB.</p>
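<p>Checkpoint weight averaging, mentioned in the postprocessing above, is straightforward in Keras: load each of the last few checkpoints, average the weight tensors elementwise, and write the result back into the model. The sketch below is illustrative; the paths and the function name are placeholders, not our actual training code.</p>
<pre><code>import numpy as np
import tensorflow as tf


def average_checkpoints(model, checkpoint_paths):
    summed = None
    for path in checkpoint_paths:
        model.load_weights(path)
        weights = model.get_weights()
        if summed is None:
            summed = [np.asarray(w, dtype=np.float64) for w in weights]
        else:
            summed = [s + w for s, w in zip(summed, weights)]
    averaged = [(s / len(checkpoint_paths)).astype(np.float32) for s in summed]
    model.set_weights(averaged)
    return model


# usage sketch (paths are hypothetical):
# model = average_checkpoints(model, ["ckpt_497.h5", "ckpt_498.h5", "ckpt_499.h5"])
</code></pre>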