Columns: Title of Competition | Title of Writeup | User | Writeup
Heritage Health Prize
video of HHP presentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Heritage Health Prize <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just for completeness, here is the presentation...</p> <p>http://www.youtube.com/watch?v=ILdDE6mXU_8&amp;feature=youtu.be</p> <p></p>
Higgs Boson Machine Learning Challenge
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Higgs Boson Machine Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I just posted my code + documentation at&nbsp;<a href="https://github.com/TimSalimans/HiggsML">https://github.com/TimSalimans/HiggsML</a></p>
Higgs Boson Machine Learning Challenge
Code release
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Higgs Boson Machine Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As required by the rules, the code for the winning submission has been released. Model documentation is at:</p> <p>https://github.com/melisgl/higgsml/blob/master/doc/model.md</p> <p>Precompiled binary (with sources of all library dependencies) for x86-64 linux:</p> <p>http://quotenil.com/higgsml/gabor-melis.zip (35MiB)</p> <p>The git repo's readme has (README.md in the zip above) has information on how to run or recompile it (read the readme, you need a cuda card):</p> <p>https://github.com/melisgl/higgsml</p> <p>There is a note about this on my blog:</p> <p>http://quotenil.com/Higgs-Boson-Machine-Learning-Challenge-Post-Mortem.html</p> <p>and another one about updates to my Common Lisp libraries for those who care:</p> <p>http://quotenil.com/Higgs-Boson-Machine-Learning-Challenge-Bits-and-Pieces.html</p>
Higgs Boson Machine Learning Challenge
Third place Model Documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Higgs Boson Machine Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Name: Courtiol Pierre</p> <p>Country: France<br> Postal code: 94270<br> City: Kremlin-Bicêtre</p> <p>Email: [email protected]</p> <p>Competition: Higgs Boson Machine Learning Challenge</p> <h3>## Summary</h3> <p>The model is an ensemble of 108 neural networks.</p> <h3>## Feature Selection / Extraction</h3> <p>All features were normalized to zero mean and unit standard deviation. The training data was used as given.<br>After a GA search, several attributes did not seem relevant (no CV score improvement), but I finally decided to keep them all.<br>Cake features did not significantly improve the CV score.</p> <h3>## Missing Data Imputation</h3> <p>Simple imputation rules (min, median, mean, max) were used in equal proportions.<br>Regression imputation gave worse results.</p> <h3>## Modeling Techniques and Training</h3> <p>Every neural network in the ensemble predicts the probability of an example being a signal. The probabilities predicted by the networks in the ensemble were simply averaged. All predictions are in the [0, 1] range, and those above 0.5575 were predicted to be signal.<br>This cutoff value was hand-selected based on the public leaderboard and CV.</p> <p>The ensemble is composed of these neural network architectures in equal proportions:</p> <p>- 30x50x1<br>- 30x50x25x1<br>- 30x50x50x25x1</p> <p>The networks were trained with cross-entropy loss and backpropagation; hidden and output layers had sigmoid activations.</p> <p>Signal weights were updated as follows:<br>signal_weights = signal_weights * max(background_weights) / max(signal_weights) / 10</p> <p>The first epochs were trained without weights.<br>The last epochs were trained with weights.</p> <p>A very small batch size (10) was used.</p> <h3>## How To Compile</h3> <p>In the source directory, run the MATLAB compiler as follows:</p> <p>- mcc -m main.m (to create the binary which runs the training)<br>- mcc -m testing.m (to create the binary which runs the prediction)</p> <p>Then move:</p> <p>- main.exe into the higgsml-train directory<br>- testing.exe into the higgsml-run directory</p> <h3>## How To Generate the Solution</h3> <p>$ higgsml-train.bat &lt;training.csv&gt; &lt;trained.dat&gt;</p> <p>to create a trained.dat file from the training.csv sample.</p> <p>$ higgsml-run.bat &lt;test.csv&gt; &lt;trained.dat&gt; &lt;submission.csv&gt;</p> <p>to create the submission.csv file from the test sample and the training parameters.</p> <h3>## Computation Time</h3> <p>The code was designed for a quad-core processor.</p> <p>On my two-year-old laptop:</p> <p>- 3 hours for the training step<br>- 20 minutes for the running step</p> <p>The score can be slightly improved by increasing the ensemble size.</p> <h3>## Additional Comments and Observations</h3> <p>There were many other things that did not work for me, among them:</p> <p>- Negative correlation learning to improve the diversity of the ensemble</p> <p>- Attribute selection with GA</p> <p>- Boosting (AdaBoost, ModestBoost, ...)</p> <p>- Adding noise to reduce overfitting</p> <p>- Bagging</p> <p>- Semi-supervised learning with K-means, EM</p> <p>- DropConnect</p> <p>- DropOut</p> <p>- Pseudo-labeling</p> <p>- Stacking</p> <p>and many other things ...</p> <h3>## Dependencies</h3> <p>DeepLearn Toolbox was my starting point: https://github.com/rasmusbergpalm/DeepLearnToolbox</p> <p>MATLAB Parallel Computing Toolbox: http://www.mathworks.com/products/parallel-computing/</p> <p>MATLAB Compiler Runtime (MCR): http://www.mathworks.com/products/compiler/mcr/index.html?s_tid=gn_loc_drop</p>
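<p>For completeness, here is a minimal sketch in Python/NumPy (the original solution was MATLAB) of the two numerical steps above: averaging the per-network signal probabilities with the hand-picked 0.5575 cutoff, and the signal-weight rescaling rule. Array names such as member_probs are hypothetical.</p> <pre><code>import numpy as np

# member_probs: array of shape (n_networks, n_examples) holding each network's signal probability.
def ensemble_predict(member_probs, cutoff=0.5575):
    """Average the per-network probabilities and threshold at the chosen cutoff."""
    mean_prob = np.mean(member_probs, axis=0)
    return mean_prob, (mean_prob > cutoff).astype(int)

def rescale_signal_weights(signal_weights, background_weights):
    """Rescale signal weights as described: s * max(b) / max(s) / 10."""
    return signal_weights * background_weights.max() / signal_weights.max() / 10
</code></pre>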
Higgs Boson Machine Learning Challenge
Winning Methodology Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Higgs Boson Machine Learning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[quote=AD Lav;56910]</p> <p>[quote=Log0;53891]</p> <p>&quot;... With many examples, I grid search and cross-validate to use min_samples_split = 100 for min_samples_leaf = 100, which reduces variances a bit...&quot;</p> <p>[/quote]</p> <p>This may be obvious but how do these values (standards are 2 and 1, respectively) reduce variances? And thank you for sharing your methods!</p> <p>[/quote]</p> <p>It depends on the data a lot ultimately. My understanding is that if the data leads to your tree to create many small leaf nodes, the tree will be less generalizable since you're overfitting to specific examples. So, you want to tell the tree only create a leaf node or a split if it has enough observations in the dataset (and not just 1 or 2). In the case of having more training data (250K, if&nbsp;memory&nbsp;serves me right), tuning this up a bit can help reduce variances so you don't fit to some outlier&nbsp;observations.</p> <p>[quote=phunter;53931]</p> <p>[quote=Log0;53910]</p> <p>[quote=Apophenia;53909]</p> <p>Could you elaborate how to correctly do those log transformation? I tried log transformation of non-negative numeric features with power-law like distributions, but it turns out very destructive when using GBC and RF, decrease score from ~3.7 to ~2.5.</p> <p>Thanks!</p> <p>[/quote]</p> <p>I think it's easier if you look <a href="https://github.com/log0/higgs_boson/blob/master/cleaned_model.py#L116" target="_blank">in the code</a>.</p> <p>[/quote]</p> <p>Maybe another dumb question: I see in the code you had imputer for filling up missing values with most_frequent</p> <p>imputer = Imputer(missing_values = -999.0, strategy = 'most_frequent')</p> <p>These missing values here are intrinsically&nbsp;missing, e.g. when there is no jet, the PRI_jet_leading_pt is surely non-sense, so filling-up can cause some confusions I guess. Have you tested with/without imputer? By the way, I just left it -999.0 and let xgboost handle the missing value: it did a great job.</p> <p>[/quote]</p> <p>What values does xgboost&nbsp;use in imputing the values?</p> <p>Not using the imputer seems to hurt the model a little bit, not a lot.&nbsp;I am not sure if that's the same case for others. I did not&nbsp;try not using it in my last 30+ models, so not using it might have helped better.</p> <p>I also understand that&nbsp;some values are better left at -999.0 because it is just meaningful to be a -999.0. However, I think&nbsp;from a model&nbsp;perspective, e.g. &quot;PRI_jet_leading_phi&quot;,&nbsp;a -999.0 values looks even more odd to the model, like a ton of&nbsp;outliers, when most values are&nbsp;in single digits. Anyway, this is something I&nbsp;picked by treating it blackbox and cross-validating.</p> <p><code>&gt; summary(X$PRI_jet_leading_phi)<br> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's <br> -3.14 -1.58 -0.03 -0.01 1.56 3.14 99913 </code></p> <p>[quote=Apophenia;53920]</p> <p>[quote=Log0;53911]</p> <p>I am not sure what could have caused your drop in scores without knowing exactly what you're trying to do.&nbsp;You would like to re-visit and check your CV scores. 
It should not cause such a decrease even adding in bad features - ensemble methods are especially resistant to such problems.</p> <p>[/quote]</p> <p>Thanks for your reply.</p> <p>Almost half of the features were transformed. So a bad transformation might harm a lot.&nbsp;My approach might be totally wrong: I took logarithms of those values(with a tiny epsilon added) then scaled them&nbsp;into [0,1] using MinMaxScaler.</p> <p>Could you please elaborate the logic behind an inverse log? Or the best practice&nbsp;to deal with features with wide range and power-law like histogram? Thanks in advance.</p> <p>[/quote]</p> <p>For your score drop with adding more features I think that&nbsp;if&nbsp;the features are not helpful, using more features for each tree should alleviate the problem. Still, I don't know why there is such a big degrade. How many features do you use per tree in the&nbsp;ensemble?</p> <p>I'm not an expert in feature engineering,&nbsp;but this is what I think: the motivation&nbsp;of adding such seemingly meaningless &quot;features&quot;, such as an inverse log, or just a log, is&nbsp;similar to what <a href="http://blog.bigml.com/2013/02/21/everything-you-wanted-to-know-about-machine-learning-but-were-too-afraid-to-ask-part-two/" target="_blank">this post</a>&nbsp;explains:&nbsp;help the model&nbsp;understand complex relationships in the data. Yes, models can find hidden relationships, but why make it harder for them?</p> <p>In the case of&nbsp;a log / inverse log, it's like&nbsp;telling the model that: &quot;Hey, I know that you understand the numbers&nbsp;1, 10, 100, 1000, 10000, 100000, 1000000&nbsp;by computing their differences, so 1000000 is very very different from 1000 than is 10000 to 10. However, I want you to try seeing them as less different things by&nbsp;treating&nbsp;them as log(1), log(10), log(100), log(1000), log(10000), log(100000), log(1000000) instead. They are really less pronounced than you think&quot;.</p> <p>So, you can understand as log / inverse log are very similar in what they are doing. I do not know how different they are though. If you plot the graphs, they are similar. In fact, if I have both kinds of features (log and inv log), even increasing the number of features in each tree, the score does not raise&nbsp;at all, except slowing down the model and even hurting it.</p> <p>I am not sure what could have caused your drop in scores without knowing exactly what you're trying to do.&nbsp;You would like to re-visit and check your CV scores. It should not cause such a decrease even adding in bad features - ensemble methods are especially resistant to such problems.</p> <p>[quote=Apophenia;53909]</p> <p>Could you elaborate how to correctly do those log transformation? I tried log transformation of non-negative numeric features with power-law like distributions, but it turns out very destructive when using GBC and RF, decrease score from ~3.7 to ~2.5.</p> <p>Thanks!</p> <p>[/quote]</p> <p>I think it's easier if you look <a href="https://github.com/log0/higgs_boson/blob/master/cleaned_model.py#L116" target="_blank">in the code</a>.</p> <p>Code to my solution is available at the top post.</p> <p>[quote=Mark Vicuna;53902]</p> <p>[quote=Log0;53901]</p> <p>Yes. The output is an averaged CV score of the 10 runs, each with 4 folds. I also did not&nbsp;use a StratifiedKFold, but I just use a KFold.&nbsp;It might have helped to add in some stability.&nbsp;</p> <p>[/quote]</p> <p>Thanks again. 
&nbsp;This is my first competition and my first exposure to ML so there was a lot to learn and I still think I have a lot more to learn.</p> <p>[/quote]</p> <p>I thank you for pointing out the re-weighting problem in the&nbsp;CV loop again.&nbsp;I couldn't wrap my head around that at that time.&nbsp;Kaggle is really a great place to learn machine learning, and practical machine learning skills. Do join another one ;)</p> <p>And I'm only learning too.</p> <p>[quote=Mark Vicuna;53899]</p> <p>So psuedo python:</p> <p>skf = cross_validation.StratifiedKFold(Y_data,n_folds=4, shuffle=False)</p> <p>for iteration in range(10):</p> <p>&nbsp; for train_index, test_index in skf:</p> <p>&nbsp; &nbsp; &nbsp; something.train(X_data[train_index],Y_data[train_idx])</p> <p>&nbsp; &nbsp; &nbsp; something.predict(X_data[test_index])</p> <p>&nbsp; &nbsp;# average folds&nbsp;results</p> <p>#average runs results</p> <p>Thanks.</p> <p>[/quote]</p> <p>Yes. The output is an averaged CV score of the 10 runs, each with 4 folds. I also did not&nbsp;use a StratifiedKFold, but I just use a KFold.&nbsp;It might have helped to add in some stability.&nbsp;</p> <p>[quote=Mark Vicuna;53893]</p> <p>I have a dumb question. &nbsp;What do you mean by 'run the CV 10 times'? &nbsp;Use 10 folds? Do 10 runs with different seeds and shuffle the data before making folds?</p> <p>Thanks.</p> <p>[/quote]</p> <p>Hi Mark, far from a dumb question. A very good question to ask.</p> <p>I split&nbsp;the&nbsp;training data into 4 folds to get a single CV score. Then,&nbsp;I re-run this CV loop again&nbsp;10 times.&nbsp;All of the 10 CV loops have exactly the same folds (saving the indices before hand).&nbsp;I get the mean of the 10 CV loops CV score.</p> <p>The reason is because each&nbsp;CV&nbsp;score of a CV loop fluctuates with a 0.01 AMS score. That is, running *exactly* the same code could give a local CV AMS score of 0.91, 0.92, 0.93, 0.92 (just as an example). With such fluctuations,&nbsp;I cannot compare two solutions I have reliably. However, if I run the CV loops 10 times, the mean score&nbsp;is&nbsp;more stable, and I can discard solutions more confidently.</p> <p>This has helped me a lot to throwing away some solutions. However, due to the&nbsp;slowness, I really regret only running this until&nbsp;the very last. I should have done it earlier.</p> <p>Congratulations to the winners and especially&nbsp;G&#225;bor Melis the champion! It has been a wonderful competition and I learnt so so so many&nbsp;things in the competition. How about let's share our experiences and what we did right/wrong?</p> <p>I'll start with myself first!</p> <p>I use the python Scikit-Learn&nbsp;library. My approach is very simple:</p> <ul> <li><span style="line-height: 1.4">I use&nbsp;an AdaBoostClassifier on ExtraTreesClassifier. The reason is that because since there are many examples, it benefits from subsampling. GradientBoostingClassifier is too slow and RandomForestClassifier does not&nbsp;take advantage of the subsampling. Subsampling is good when training data is abundant.</span></li> <li><span style="line-height: 1.4">With many examples, I grid search and cross-validate to use min_samples_split = 100 for min_samples_leaf = 100, which reduces variances a bit.</span></li> <li>Grid search for optimal parameters for the AdaBoostClassifier&nbsp;and ExtraTreesClassifier.</li> <li><span style="line-height: 1.4">I use inverse log features on some of the features which has no negatives. I expected the model to pick up the rules even without this though. 
This is actually very easy to overfit with, if done wrongly.<br></span></li> <li>I used the weights to train the solution. This is effectively telling the classifiers: Hey, look at these, seriously, these are more important. Not all predictions are equal!</li> <li><span style="line-height: 1.4">I picked the 83rd percentile as the cutoff for the signals in my solution.</span></li> </ul> <p>This put me at rank 23.</p> <p>I'd like to bring up something that is not mentioned a lot in methodology-sharing threads: <em>How to pick your solution for Kaggle</em>.</p> <ul> <li><span style="line-height: 1.4"><strong>Trust your CV score.</strong> If the CV score is not stable, run the CV 10 times. You should adopt this methodology, or something similar. Do things systematically, and not according to the LB score.</span></li> <ul> <li>Here's the painful lesson I learned: my best solution had a rather poor public LB score of 3.71* (which I submitted 20 minutes before the deadline), but it had the 3rd best local CV score, just outside the two solutions I picked; picking it should have given me a rank of 4th.</li> </ul> <li><strong>Pick the N solutions with the most difference, subject to your CV score.</strong> Picking two solutions with very similar methodologies (as in my case) means that your solutions either fail together or win together. Had I picked the solution above, it would have put me at <strong>rank 4</strong>. A very painful lesson.</li> <li><strong>Don't trust your LB score.</strong> Don't trust your LB score. I repeat. It is not a CV score and it is NOT representative of what you'll get. Optimize for the LB score, and prepare for a painful and unnecessary lesson. I did not trust the LB score, but I was still burned a bit by not picking my best solution because its LB score was only 3.71.</li> </ul> <p>I cleaned up the code a bit, which was a complete mess. Here it is: <a href="https://github.com/log0/higgs_boson/" target="_blank">https://github.com/log0/higgs_boson/</a></p> <p>[edit: I added the step and explanation for using weights in training. Added code.]</p> <p>YB18, I saw the notification. Since you removed the content, I suppose you have resolved the problem?</p>
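<p>A minimal sketch of the repeated-CV procedure described above (not the author's code): keep the fold indices fixed, repeat the whole CV loop several times so that only the model's randomness varies, and compare solutions on the mean score. AUC is used here as a stand-in for the competition's AMS metric, and all parameters are illustrative.</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score

def repeated_cv_score(X, y, n_folds=4, n_repeats=10, base_seed=0):
    """Average a fixed-fold CV score over several repeated runs. X, y are NumPy arrays."""
    folds = list(KFold(n_splits=n_folds, shuffle=True, random_state=42).split(X))  # same folds every run
    run_scores = []
    for r in range(n_repeats):
        fold_scores = []
        for train_idx, valid_idx in folds:
            model = ExtraTreesClassifier(n_estimators=200, min_samples_leaf=100,
                                         min_samples_split=100, random_state=base_seed + r)
            model.fit(X[train_idx], y[train_idx])
            pred = model.predict_proba(X[valid_idx])[:, 1]
            fold_scores.append(roc_auc_score(y[valid_idx], pred))
        run_scores.append(np.mean(fold_scores))
    # compare solutions on the mean; the std shows how much a single CV run fluctuates
    return np.mean(run_scores), np.std(run_scores)
</code></pre>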
Home Credit Default Risk
# 19th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone! Here's a brief summary of my solution.</p> <ol> <li><p>Feature Engineering: When merging features from the BUREAU/CREDIT/INSTALL/POS_CASH/PREV_APP tables into the application table, use subsets of these tables. For example, select only the most recent 1/2/3 years of data, or split these tables into Approved/Not Approved and Active/Closed parts.</p></li> <li><p>Feature Selection: After my feature engineering, I got 10000+ features in total, which is too many for a single model. So I built LGB models using part of these features and saved the top features according to LGB feature importance. For each LGB model, I chose the top 50 features and randomly chose 50 features from the remaining ones; in this way I got about 500 features for a single model. My best single model has 0.8 CV, 0.8 public LB, 0.8 private LB.</p></li> <li><p>Stacking: I have about 200 oofs for stacking. In this part, I stack them with a 2-layer stack net. Layer 1 includes LGB, XGB, RF, Adaboost, LOGREG and ExtraTree; layer 2 is a LGB model. After stacking, the CV score boosted to 0.803.</p></li> <li><p>LGB metric: In my LGB models, using the binary_logloss metric always gives a higher score than using the auc metric.</p></li> </ol> <p>Never mind :)</p>
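<p>To make the feature-selection step above concrete, here is a minimal sketch (not the author's code): rank a large feature pool by LightGBM importance, then assemble a model's feature set from the top features plus a random draw from the remainder. Parameters and counts are illustrative.</p> <pre><code>import numpy as np
import lightgbm as lgb

def select_features(X, y, feature_names, n_top=50, n_random=50, seed=0):
    """Return top-importance features plus a random sample of the remaining ones."""
    train_set = lgb.Dataset(X, label=y, feature_name=list(feature_names))
    booster = lgb.train({'objective': 'binary', 'learning_rate': 0.05, 'verbosity': -1},
                        train_set, num_boost_round=200)
    importance = booster.feature_importance(importance_type='gain')
    order = np.argsort(importance)[::-1]                      # most important first
    top = [feature_names[i] for i in order[:n_top]]
    rest = [feature_names[i] for i in order[n_top:]]
    rng = np.random.default_rng(seed)
    extra = list(rng.choice(rest, size=min(n_random, len(rest)), replace=False))
    return top + extra
</code></pre>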
Home Credit Default Risk
#12 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>@M Hendra Herviawan </p> <p>We train 6 RNNs and 6 CNNs on every history loan table (bureau, previous_application, bureau_balance, POS_CASH_balance, credit_card_balance, installments_payments). </p> <p>For the missing value, we just replace the nan with median of that column.</p> <p>Thank you, <a href="/olivier">@olivier</a>. Your awesome kernels always help us most!</p> <p>Thanks, for the NN part you can refer to this <a href="https://www.kaggle.com/CVxTz/keras-bidirectional-lstm-baseline-lb-0-069">kernel</a> or <a href="https://www.kaggle.com/larryfreeman/toxic-comments-code-for-alexander-s-9872-model">kernel</a>, our NN codes are very similar to this one. </p> <p>Our NN structure is SpatialDropout(0.1) -&gt; GRU(40) -&gt; GRU(40) -&gt; Concat(Max_pool, Mean_pool) -&gt; Dense(256) -&gt; Dense(128) -&gt; Dense(64)</p> <p>Thank Kaggle and HomeCredit for hosting such an amazing competition. And also thanks for my awesome teammates <a href="https://www.kaggle.com/cxl923cc">@cxlcc</a> <a href="https://www.kaggle.com/oyxuan">@ouyangxuan</a> <a href="https://www.kaggle.com/yifanxie">@Yifan Xie</a> <a href="https://www.kaggle.com/yl1202">@YL</a> <a href="https://www.kaggle.com/zhiqiangzhong">@Zhiqiang Zhong</a>. We are the second time to team up and have a wonderful time. Our best result's local cv is 0.8047, it is an average of our blending results and stacking results.</p> <h1><strong>Feature Engineer</strong></h1> <p>In this part, I mainly describe features of <a href="https://www.kaggle.com/oyxuan">@ouyangxuan</a>’s and my part, and <a href="https://www.kaggle.com/yifanxie">@Yifan Xie</a>’s group has many features from other different points. Our best single model uses a combination of features from every teammate, and the best local cv of this single model is 0.8021.</p> <h2>1. Statistical Features</h2> <p>In this part, we mainly use groupby SK_ID_CURR to generate features. </p> <p>1) First we do feature engineer for every single table: we use +-*/ or groupby to generate simple features, such as interest ratio, time interval, days repay late, and etc. To evaluate users' recent behavior, we also groupby the last-k loans of the user to extract features.</p> <p>2) Then we merge all the features from every table into application_train table, numerical features (count, max, min, mean, median, std, skew, last, trend, weightd mean), category features (nunique, one-hot count, last). We also try groupby [‘SK_ID_CURR’ + category] features from every table. </p> <p>Start with this part of feature engineer, our local cv is about 0.798 – 0.799.</p> <h2>2. Model Features</h2> <p>After above feature engineer, we think we might have missed some patterns in history loan sequence, so we decide to use models to extract features. We use NNs and LGBMs to generate features from every history table. </p> <p>1) NN features: we use RNNs and CNNs to predict the next loan default prob for every user from his loan history. 
We believe these NN models can find some time-series pattern from the loan sequence or application sequence.</p> <p>2) LGBM features: we use LGBMs to predict whether a history loan is from a default user or not in every table. We treat every history loan from the default user as target 1, then we train LGBM to get the prob, groupby these probs by SK_ID_CURR and merge these into the train table. We think these LGBM models can find some tree split pattern.</p> <p>After adding these model features, our local cv improves to 0.800-0.801. </p> <h1><strong>Model Ensemble</strong></h1> <h2>1. Blending</h2> <p>In the blending part, we mainly concern the diversity from different train input data. We combine all features from every teammate and obtain about 2k+ features. Then we use random 0.5 fraction of these features to generate models, with random kf seeds and random model parameters. We use this strategy to generate about dozens of LGBM models, and the best single subsample model has the cv of about 0.8014. We average the predictions from these models and cv boosting to 0.8037. Then we blend these results with our best single model and the final cv is 0.8040.</p> <h2>2. Stacking</h2> <p>In the stacking part, we mainly concern the diversity from different models. This part is mainly operated by <a href="https://www.kaggle.com/yifanxie">@YiFan Xie</a> and we have dozens of different models such as lgb, xgb, rf, linear models, and etc. These stacking models give us a cv about 0.8034. In the last day, we fit the blending model oofs from above into the stacking net, and then the stacking cv boosting to 0.8046.</p> <p>Our best result is an average of stacking and blending results, it has a cv of about 0.8047, PB 0.803.</p> <p>We don't pay much attention on feature selection, we just use tree model importance and drop null importance features to speed up model training. </p> <p>I think low feature_fraction params in tree models or bagging strategy can handle redundant features well.</p> <p>Great idea, we have not tried using linear model to extract time patterns. It may find some different patterns which will improve the model performance.</p> <p>thank you</p>
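<p>A minimal sketch of the sequence model quoted above (SpatialDropout(0.1) -&gt; GRU(40) -&gt; GRU(40) -&gt; concat of max/mean pooling -&gt; Dense(256/128/64)), written in Keras. The input shape, ReLU activations, sigmoid output head, and the adam/binary-crossentropy setup are assumptions; the team's preprocessing of the history tables is not shown.</p> <pre><code>from tensorflow.keras import layers, models

def build_history_model(seq_len, n_features):
    # one input row per historical loan record, padded/truncated to seq_len
    inp = layers.Input(shape=(seq_len, n_features))
    x = layers.SpatialDropout1D(0.1)(inp)
    x = layers.GRU(40, return_sequences=True)(x)
    x = layers.GRU(40, return_sequences=True)(x)
    pooled = layers.Concatenate()([layers.GlobalMaxPooling1D()(x),
                                   layers.GlobalAveragePooling1D()(x)])
    x = layers.Dense(256, activation='relu')(pooled)
    x = layers.Dense(128, activation='relu')(x)
    x = layers.Dense(64, activation='relu')(x)
    out = layers.Dense(1, activation='sigmoid')(x)   # probability of default for the next loan
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['AUC'])
    return model
</code></pre>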
Home Credit Default Risk
#14 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh, you definitely should use more operators. I checked with feature importance. Many MAX STD features contribute a lot to the final result.</p> <ol> <li><p>ElasticNet is to combine L1 and L2 penalty in linear regression. See <a href="https://en.wikipedia.org/wiki/Elastic_net_regularization">here</a> and <a href="https://web.stanford.edu/~hastie/TALKS/enet_talk.pdf">here</a></p></li> <li><p>Because we each have different feature group, we just run the model independently and get different oof results.</p></li> </ol> <p>Thanks! My great teammate!</p> <p>Hi everyone,</p> <p>First of all, never imagine that I can write something as a recipient of gold medal. Thanks to my great teammates <a href="https://www.kaggle.com/tita1708" target="_blank">@Adri</a> <a href="https://www.kaggle.com/guziye" target="_blank">@Tomoyo</a>.</p> <p>I just want to share some key features in our solution.</p> <p>My model has local cv 799+. I used kernel from <a href="https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features/code" target="_blank">@Aguiar</a> as a baseline. </p> <p>The features boosted my cv:</p> <p>TARGET ENCODING: I used target encoding for the categorical features in Application, Bureau and Prev_application tables. In Bureau and Prev_application tables, the first thing you need to do is that creat a map between SK_ID_CURR and TARGET, then map it to Bureau and Prev_application tables. Since one SK_ID_CURR can correspond to many SK_ID_BUREAU or SK_ID_PREV, I used max, min and mean to aggregate the features after target encoding.</p> <p>P.S. I didn't bother to choose between TARGET ENCODING and ONE-HOT ENCODING. Actually, I used both of them. I think they have information from different perspectives.</p> <p>TIME WINDOW: I used this feature in every table except Application. For example, in Bureau table, I used 'DAYS_CREDIT' as a threshold for selecting rows. After selecting rows, I aggregated those features again. This approach will generate hundreds of features. I used lightGBM feature importance to select 60% of them(DON'T ASK ME WHY. I DON'T KNOW. XP ). And this feature selection approach can boost my cv further.</p> <p>One key feature: </p> <p>bureau['AMT_CREDIT_DEBT_RATE'] = bureau['AMT_CREDIT_SUM_DEBT']/(1 + bureau['AMT_CREDIT_SUM'])</p> <p>My teammate, Tomoyo, told me about this feature. It boosted my cv from 798 to 799+</p> <p>About stacking:</p> <p>We generated many models(dart, goss, catboost, xgb) with different features. When making meta-model, I used lightGBM and ElasticNet to make 30 oof predictions with different random seeds. Averaging these 30 predictions has our best private score. But we didn't select it. XD <br> When using lightGBM, I also selected 90 features by feature importance from first layer model and added them to meta-model as features.</p> <p>Best,</p> <p>seaguII</p> <p>I used max, min, mean, std on them and I selected them on each feature manually. However, I think it's fine if you do it in a brute-force way.</p> <p>np</p> <p>Congrats, Kain. See you a little bit. :)</p> <p>We exchanged our ideas of each other's features. Maybe my teammates used my features. 
But for me, after combination, the cv dropped. So, I didn't use my teammates' features. As a result, the feature groups are different from each other.</p> <p>It's a good point. I didn't use oof. My way might cause overfitting but it turned out that it didn't overfit.</p>
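<p>A minimal sketch of the target-encoding step described above, under my own assumptions (CREDIT_TYPE as the example categorical; plain, non-out-of-fold encoding): map each SK_ID_CURR's TARGET onto the bureau rows, encode the categorical by its category's mean target, then aggregate back to SK_ID_CURR with max/min/mean. As the author notes, doing this without out-of-fold encoding risks overfitting, even though it worked here.</p> <pre><code>import pandas as pd

def bureau_target_encoding(app_train, bureau, cat_col='CREDIT_TYPE'):
    # map SK_ID_CURR -> TARGET onto the bureau rows
    target_map = app_train.set_index('SK_ID_CURR')['TARGET']
    b = bureau.copy()
    b['TARGET'] = b['SK_ID_CURR'].map(target_map)

    # category -> mean target, computed only on rows whose parent application is in train
    enc = b.dropna(subset=['TARGET']).groupby(cat_col)['TARGET'].mean()
    b[cat_col + '_TE'] = b[cat_col].map(enc)

    # one SK_ID_CURR maps to many SK_ID_BUREAU rows, so aggregate the encoded values
    agg = b.groupby('SK_ID_CURR')[cat_col + '_TE'].agg(['max', 'min', 'mean'])
    agg.columns = [f'BUREAU_{cat_col}_TE_{c.upper()}' for c in agg.columns]
    return agg.reset_index()
</code></pre>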
Home Credit Default Risk
#9 Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I tried to make it as random as I could yes. If you do anything to try to control your randomness then I can't see how it is really random.</p> <p>Firstly, I would like to say congratulations to all my team mates and to thank them for all their great efforts.</p> <p>Second, here is a brief write-up of our approach. This entry was born out of my own personal disappointment at suffering in the shakeup on Porto Seguro. I dropped 800+ places. That brought into very clear focus for me the importance of a rigorous CV approach. It also taught me how misleading the PLB can be. This whole team was based around making sure that did not happen again. We tried to do things properly this time. When we reached out to potential new team mates it was explained that they would have to fit into our CV approach. The team benefited from the diverse sets of feature which different members had put together. We tried lots of different types of models including xgb, lgbm, catboost, logistic regression, random forest, extra trees and a few others. We were feeding many base models into our stacker. Although a lot of those were relatively weak models it did seem to give us quite a stable basis for the relationship between the cv score and the plb score. Although we tried to do things properly I didn't imagine the outcome would be as good as it was.</p> <p>What was good? Having a stable relationship between our cv score and the plb score, seeing them move up together and the absolute numbers very close together.</p> <p>What was a challenge? Using Scirpus’ features messed with that close relationship between cv and plb. We decided it wasn't worth the risk of chasing the higher cv they offered us because it didn't translate to the plb. At one stage we rebuilt our approach without them. We felt that, from what we understood of the way the features were trained, it likely did not completely fit in with our cv approach. We did have lengthy debates about that though. Maybe we could have found better ways forward on that.</p> <p>What else could we have done? We did not run rNN's on the time series data. I think this could have added something to our approach.</p> <p>Thanks to all those who have written about the importance of cv, and, the right way to do it. CPMP and Tilii are two names that come to mind, both in this comp and previous ones, on that topic.</p> <p>Finally, thanks to all my team mates, great work guys - it has been a pleasure.</p> <p>We used StratifiedKFold with 5 folds and a fixed seed. We chose 5 fold partly due to the run time savings vs 10 folds and partly due to the variance we saw between folds which would have likely been magnified at 10 folds. The idea was to prevent stacking models predicting using underlying models that had already seen the SK_ID_CURR which the stacker was trying to predict. At times during the process we tested with other seeds as a sanity check that we were not overfitting too much to our chosen seed. Other approaches may have been workable as well but this worked pretty well for us. 
Our success wasn't just due to cv though, the guys came up with some great features, and, some very nice single models. Sharing of ideas within the team was also helpful and informative.</p> <p>Thank you for all the congratulations guys.</p> <p>Thank you - and congratulations to you.</p>
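<p>A minimal sketch of the out-of-fold discipline described above (not the team's code): one StratifiedKFold with 5 folds and a fixed seed shared by every base model, so the stacker only ever sees predictions for an SK_ID_CURR made by models that did not train on it. The helper assumes an sklearn-style classifier and NumPy arrays.</p> <pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold

def out_of_fold_predictions(model, X, y, X_test, seed=42):
    """Return OOF predictions for the train set and fold-averaged predictions for the test set."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    oof = np.zeros(len(X))
    test_pred = np.zeros(len(X_test))
    for train_idx, valid_idx in skf.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        oof[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]
        test_pred += model.predict_proba(X_test)[:, 1] / skf.get_n_splits()
    # oof becomes a feature column for the stacker; test_pred is the matching test-time feature
    return oof, test_pred
</code></pre>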
Home Credit Default Risk
10th place writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Thank you</p> <p>I also learned a lot from other teams' solutions :D</p> <p>I tried the interval, but to be honest I didn't spend too much time trying different month combinations.</p> <h1>Our Team Members:</h1> <p><a href="/fatihozturk">@fatihozturk</a> <a href="/lingchensun">@lingchensun</a> <a href="/shivrajp">@shivrajp</a> <a href="/a31314431">@a31314431</a> <a href="/kingychiu">@kingychiu</a></p> <p>Congrats everyone and thanks Kaggle and Home Credit for hosting such an amazing competition! </p> <h1>My Top Features and Some Final Words <a href="/fatihozturk">@fatihozturk</a></h1> <p>I want to talk about my part first, followed by some final words. I also used lightgbm as my best single model. During the competition, I read papers and blogs about existing credit scoring methods and factors signalling loan default. I got some insights and generated features based on them. Some didn't work, but most of them did. I won't go into much detail but will mention my top features. Because we have a sort of time-series data, it made sense to generate features using dates close to the current application.</p> <p>LATE_PAYMENT feature: I think most of you used installment['DAYS_INSTALMENT']-installment['DAYS_ENTRY_PAYMENT'] as a new feature since it shows the number of days of delay. It was already a good feature, but I realized that when you take only the last 365 days of installment data and aggregate this delay as a new feature, it adds much more value to the model. Here is a simple code:</p> <pre><code>installment_temp = installment[installment.DAYS_ENTRY_PAYMENT &gt;= -365] installment_temp['LATE_PAYMENT'] = installment_temp['DAYS_INSTALMENT']-installment_temp['DAYS_ENTRY_PAYMENT'] late_payment_feature = installment_temp.groupby('SK_ID_CURR')[['LATE_PAYMENT']].min().reset_index() </code></pre> <p>CREDIT_UTILIZATION feature: In the blogs and papers I've read, most said credit utilization is a strong indicator of a risky customer, and you can calculate it simply by dividing the credit card balance by the credit card limit. It made sense but didn't work when I calculated it on the full credit card dataset. Again I tried checking results on more recent credit card data. Finally I found that the last 2 months' data was more valuable for my model. You can see the simple code and check yourself with different months:</p> <pre><code>month = -2 cred_temp = cred_card_bal[cred_card_bal.MONTHS_BALANCE &gt;= month] cred_temp['CRED_UTIL'] = cred_temp['AMT_BALANCE'] / cred_temp['AMT_CREDIT_LIMIT_ACTUAL'] cred_util_feature = cred_temp.groupby('SK_ID_CURR')['CRED_UTIL'].max().reset_index().rename(columns={'CRED_UTIL':'CRED_UTIL_'+str(month*-1)}) </code></pre> <p>Most important bureau feature: Debt ratio was the most important feature I found in the bureau file. I also tried using more recent bureau data for this feature, but it was best when I used all the data.</p> <pre><code>buro['DEBT_RATIO']=buro['AMT_CREDIT_SUM_DEBT']/buro['AMT_CREDIT_SUM'] debt_ratio_feature = buro.groupby('SK_ID_CURR')['DEBT_RATIO'].max() </code></pre> <p>I was also in the Santander competition and had a very intense 3.5 months. For me, our team was amazing. 
Now I have 4 new friends on different continents. I recommend teaming up with people who not only have a good single-model score but also have full time for the competition and nice 128-core machines ;)</p> <h1>Interest Rate Features &amp; Cash Loan Model <a href="/a31314431">@a31314431</a> (FuLin)</h1> <p>Again, I want to say thank you to my teammates, the sponsors and everyone for making this competition a great experience for me. Here are my two cents:</p> <p><strong>Interest Rate Features</strong></p> <p>Interest rate seems to be an important feature in predicting loan default. (You can see this from the Lending Club dataset.) We were not given the interest rate directly in either the Main Application data or the Previous Application data, but we could approximate it: </p> <p>Previous Application - The key thing here is that AMT_ANNUITY includes interest. Based on AMT_CREDIT, AMT_ANNUITY, and CNT_PAYMENT we can derive the interest rate.</p> <pre><code>prev_app['INTEREST'] = prev_app['CNT_PAYMENT']*prev_app['AMT_ANNUITY'] - prev_app['AMT_CREDIT'] prev_app['INTEREST_RATE'] = 2*12*prev_app['INTEREST']/(prev_app['AMT_CREDIT']*(prev_app['CNT_PAYMENT']+1)) prev_app['INTEREST_SHARE'] = prev_app['INTEREST']/prev_app['AMT_CREDIT'] </code></pre> <p>Then, by calculating the max, min and mean of the features above for each customer, I got several top features for the Previous Application table.</p> <p>Main Application - We were not given CNT_PAYMENT in the Main Application data, but we could predict it from the Previous Application table. I built a lightgbm model on the Previous Application table using AMT_CREDIT, AMT_ANNUITY and AMT_CREDIT/AMT_ANNUITY to predict CNT_PAYMENT. The model got an RMSE of ~2.78. After applying the model to the Main Application table, I got a predicted CNT_PAYMENT (EXP_TERM) for each record and then created exactly the same features as I did for the previous application table.</p> <pre><code>main_app['INTEREST'] = main_app['EXP_TERM']*main_app['AMT_ANNUITY'] - main_app['AMT_CREDIT'] main_app['INTEREST_RATE'] = 24*main_app['INTEREST']/(main_app['AMT_CREDIT']*(main_app['EXP_TERM']+1)) main_app['INTEREST_SHARE'] = main_app['INTEREST']/main_app['AMT_CREDIT'] </code></pre> <p>Overall, several of these interest-rate-related features joined my top 10 lgb features and gave a ~0.0006 boost in CV.</p> <p><strong>Cash Loan Model</strong></p> <p>Looking at NAME_CONTRACT_TYPE in the Main Application table, we have ~90% cash loans and ~10% revolving loans. I had a feeling that the default behavior may vary across these two types of loans, so why not build a separate model only for cash loans and use this new model to correct the cash loan predictions from the big model? By blending this new cash loan model with the original model, a single lgb model could get a ~0.0006 boost on CV, and ~0.0004 on LB and PB.</p> <h1>Records Filtering and Division <a href="/kingychiu">@kingychiu</a> (nlgn)</h1> <p>Basically, I used nested for loops; for tables other than application.csv, they were filtered into sub-tables:</p> <pre><code>active_bureau_df = bureau_df[(bureau_df['CREDIT_ACTIVE']=='Active')] closed_bureau_df = bureau_df[(bureau_df['CREDIT_ACTIVE']=='Closed')] ... 
</code></pre> <p>Each sub-table was then divided by “time”; 2 different “time” definitions were used:</p> <ol> <li>n months before the application (3, 6, 18, 30, 42, 54, 66 months before)</li> <li>The latest k records and the earliest k records, where k was the ratio (0.1, 0.2, 0.3, 0.4)</li> </ol> <p>Then the tables were grouped by [‘SK_ID_CURR’] and [‘SK_ID_CURR’, ‘SK_ID_PREV’] and aggregated mainly by mean and max, sometimes sum and min, and seldom median. Trend aggregation was also used. It was inspired by the Open Solution.</p> <p>My model: <a href="https://github.com/kingychiu/Kaggle-Home-Credit-Default-Risk">https://github.com/kingychiu/Kaggle-Home-Credit-Default-Risk</a></p> <h1>Predicting Missing Values <a href="/lingchensun">@lingchensun</a> (Chad)</h1> <p>Thanks again for my teammates’ excellent work! The only thing I want to share is how to improve CV by predicting the EXT sources. As we know, the EXT features are the most important features in this game; however, they are filled with too many NAs. </p> <p>So, I built a lightGBM model to predict EXT 1 and 3 with features similar to the default model's. My predictions helped all of my teammates boost CV by at least 0.0005. I also attached the prediction files, feel free to use them to boost your model too! Enjoy boosting!</p> <h1>Boosting Best Single Model <a href="/shivrajp">@shivrajp</a> (Shivraj)</h1> <p>As our single model was pivotal to us finishing so well when the dust settled, my two-cent contribution was fixing the folds with lower CV: using the same best params but changing max_depth = -1, and blending the results together, always helped fix our CV on the folds scoring low. PS: Our single model could have won Gold by itself ;) </p> <h1>Feature Selection</h1> <ul> <li><p>Lightgbm's gain importance was used initially to cut down features; at that time our best single model performance was: CV: 0.80368, Public lb: 0.80386, Private lb: 0.80048.</p></li> <li><p>Olivier's script was used later, and the best single model became: CV: 0.80500, Public lb: 0.80533, Private lb: 0.80207.</p></li> </ul> <h1>Blending/Stacking</h1> <p>For a long time, we only used weighted blending of predictions based on out-of-fold files. Our CV and LB were highly correlated. However, after we hit a 0.806 CV score, our LB score started coming in at 0.805, and it stayed in the lower 0.805 range with each improvement. This part was kind of annoying. Even with 0.807 CV we were getting 0.805. </p> <p>Then, on the last day and actually in the last hours, <a href="/fatihozturk">@fatihozturk</a> suggested stacking with a simple tree model and adding that model to the blend as well. Thank God, it gave us 0.806 LB, very close to 0.807, with 0.807+ CV. This submission happened to be our best private score.</p> <p><a href="/fatihozturk">@fatihozturk</a>'s quick notes for stacking: Having different folds and seed numbers for different models is not a problem, and never use early stopping during stacking; it ruins the oof score of the 2nd-level model.</p> <p>We didn't share our top features among us and we all had single 0.80+ CV models. This resulted in a good blending score of 0.807+ since we had diverse models. However, our private score came in at 0.803+ and we don't know what caused this (I think chance took its part here).</p> <h1>Final Submissions:</h1> <ol> <li>Blend (stack + best public score blend): CV: 0.807420, Public lb: 0.80674, Private lb: 0.80354</li> <li>Blend of 12 models: CV: 0.807640, Public lb: 0.80590, Private lb: 0.80346</li> </ol> <p>Thanks for letting us know, but how do we find out the source of the problem?</p>
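<p>A side note on FuLin's interest-rate approximation above (my own reading, not stated in the write-up): if the principal is repaid roughly linearly over n = CNT_PAYMENT monthly instalments, the average outstanding balance is about AMT_CREDIT*(n+1)/(2n), which is where the 2*12/(n+1) factor in the code comes from. In LaTeX, with A = AMT_ANNUITY, C = AMT_CREDIT, I = INTEREST:</p> <pre><code>I = nA - C, \qquad
\bar{B} \approx \frac{C(n+1)}{2n}, \qquad
r_{\text{annual}} \approx 12 \cdot \frac{I/n}{\bar{B}} = \frac{2 \cdot 12\, I}{C\,(n+1)}
</code></pre>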
Home Credit Default Risk
13th place - time series features
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congrats to the winners and the organises! This was a fun competition and a very popular one with lots of activity on forums and kernels. </p> <p>Before I start, you will notice that when I type a link, I post the whole URL - that is because for some reason the <code>hyperlink</code> option is not working - not sure why. So I apologise for not being neat :/</p> <p>In many ways this competition reminded me of porto (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction</a>) where the target variable was quite sparse and cv was not in line with LB for quite some time. At some point our cv was <strong>0.7959</strong> and public LB was <strong>0.806</strong>. Our latest (stacking) model had cv of <strong>0.803</strong> and public LB again <strong>0.806</strong> (slightly worse than the one above!) .I had a hunch that Michael's technique would work well here too :) . Maybe your 1st-pace thread here will be even more popular than that one (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629</a>) which keeps receiving new posts everyday almost!</p> <p>In our solution we generated over 10,000 features. About 1,000 are of similar nature to the ones included in nenputne-ml solution (<a href="https://github.com/neptune-ml/open-solution-home-credit">https://github.com/neptune-ml/open-solution-home-credit</a>) - thank you very much for sharing. </p> <p>The remaining are mostly based on time series features. Its been a while since I last used that much SQL! We constructed various 96-lag views for various attributes. for example:</p> <ul> <li>Sum bureau status by month</li> <li>Sum credit by month</li> <li>Sum Credit to Balance by month (from credit cards) </li> <li><p>Sum Instalments per month</p> <p>We created around 60 such time series. This is an example of SQL:</p></li> </ul> <p>```</p> <pre><code>create table home.cash_active_cumulative as select a.SK_ID_CURR, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-96 then 1 else 0 end ) as Active_cumulative_count_96, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-95 then 1 else 0 end ) as Active_cumulative_count_95, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-94 then 1 else 0 end ) as Active_cumulative_count_94, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-93 then 1 else 0 end ) as Active_cumulative_count_93, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-92 then 1 else 0 end ) as Active_cumulative_count_92, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-91 then 1 else 0 end ) as Active_cumulative_count_91, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-90 then 1 else 0 end ) as Active_cumulative_count_90, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-89 then 1 else 0 end ) as Active_cumulative_count_89, ........ ........ ........ 
sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-6 then 1 else 0 end ) as Active_cumulative_count_6, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-5 then 1 else 0 end ) as Active_cumulative_count_5, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-4 then 1 else 0 end ) as Active_cumulative_count_4, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-3 then 1 else 0 end ) as Active_cumulative_count_3, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-2 then 1 else 0 end ) as Active_cumulative_count_2, sum(case when a.NAME_CONTRACT_STATUS='Active' and MONTHS_BALANCE=-1 then 1 else 0 end ) as Active_cumulative_count_1 from home.cash as a group by a.SK_ID_CURR order by a.SK_ID_CURR </code></pre> <p>``` </p> <p>I have uploaded a small sample file so that you can get the idea. In that file ( <code>cash_active_fcumulative_sample.csv</code> ) , for each <code>SK_ID_CURR</code> and for each past <code>MONTHS_BALANCE</code> , for when <code>NAME_CONTRACT_STATUS='Active'</code> , it sums the <code>CNT_INSTALMENT_FUTURE</code> . So at any point in time, out of all accounts, how many future instalments are expected to be made?</p> <p>Based on these 96-lag time-series we generated thousands of time series features. These included:</p> <ul> <li>Moving averages on different lags</li> <li>Other aggregated measures like stds, medians,kurtosis skewness, max, min</li> <li>Exponential smoothing</li> <li>Correlations and regressions with time</li> </ul> <p>and many more. This feature engineering is heavily based on the work we do with Driverless ai (<a href="https://www.h2o.ai/products/h2o-driverless-ai/">https://www.h2o.ai/products/h2o-driverless-ai/</a> ) . it is more analytically explained in the following video-talk if someone is interested. (<a href="https://www.brighttalk.com/webcast/16463/330616/time-series-in-driverless-ai">https://www.brighttalk.com/webcast/16463/330616/time-series-in-driverless-ai</a>) </p> <p>Modelling-wise, apart from the usual suspects ( Lightgbm, xgboost, vanilla nns) what worked well was stacking (vertically) for each user all these time series and running CNNs or LSTMs . Even with these tricks though, our nns did not exceed 0.794-5 on (public) LB. </p> <p>We stacked around 100 models built on 2 validation schemas , one where we average predictions for the test data for each fold (e.g. without retraining on whole training data) and one where we did retrain on full training data to generate predictions for the test data. </p> <p>Thank you @zr . Congrats for your results too. </p> <p>Best single model had cv of 0.8002 (a lightgbm) . so We got around +0.003 from stacking. For feature selection I used permutations - very close to what is described here : <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">https://www.kaggle.com/ogrellier/feature-selection-with-null-importances</a></p> <p>sure</p>
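<p>A minimal sketch of how features can be derived from the 96-lag views described above (my own illustration, not the team's pipeline): moving averages over recent windows, a dispersion statistic, and a regression slope against time. Column names follow the sample file (Active_cumulative_count_96 ... Active_cumulative_count_1, with 1 being the most recent month); the team's actual feature set was far larger.</p> <pre><code>import numpy as np
import pandas as pd

def lag_features(df, prefix='Active_cumulative_count_', n_lags=96):
    lag_cols = [f'{prefix}{i}' for i in range(n_lags, 0, -1)]      # oldest -> newest
    series = df[lag_cols].to_numpy(dtype=float)
    out = pd.DataFrame({'SK_ID_CURR': df['SK_ID_CURR']})
    # moving averages over the most recent months
    for window in (3, 6, 12, 24):
        out[f'{prefix}ma_{window}'] = series[:, -window:].mean(axis=1)
    # dispersion across the whole history
    out[f'{prefix}std'] = series.std(axis=1)
    # least-squares slope against time: sum((t - tbar) * (y - ybar)) / sum((t - tbar)^2)
    t = np.arange(n_lags)
    t_centered = t - t.mean()
    out[f'{prefix}slope'] = (series - series.mean(axis=1, keepdims=True)) @ t_centered / (t_centered ** 2).sum()
    return out
</code></pre>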
Home Credit Default Risk
17th place mini writeup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to you too! And thank you for suggesting trying small colsample_bytree, that helps my model :)</p> <p>I experience similar score change when I add those features to open solution.</p> <p>I would like to thank fellow Kagglers for all discussions, ideas and kernels. This is my first competition and a wonderful one. I definitely learned a lot from it.</p> <p>For my solution, I just want to share a bit about the feature engineering that gives me a significant boost:</p> <p>I trained a model where training samples are each previous applications, and targets are the corresponding current targets, so I have a training set that looks like the following:</p> <p><code>SK_ID_CURR SK_ID_PREV AMT_ANNUITY DAYS_DECISION ... TARGET prediction</code></p> <p><code>1 1 .... .... 0 0.1</code></p> <p><code>1 2 .... .... 0 0.2</code></p> <p><code>1 3 .... .... 0 0.1</code></p> <p><code>2 4 .... .... 1 0.3</code></p> <p><code>2 5 .... .... 1 0.4</code></p> <p><code>3 6 .... .... 0 0.2</code></p> <p><code>4 7 .... .... 1 0.1</code></p> <p><code>4 8 .... .... 1 0.3</code></p> <p><code>4 9 .... .... 1 0.5</code></p> <p>....</p> <p>(assuming SK_ID_CURR = 1,3 have TARGET=0 and 2,4 have TARGET=1)</p> <p>The features are those describing previous applications, so we can use the previous_application table directly, and for credit card, pos cash and installment tables, we do groupby('SK_ID_PREV') instead of groupby('SK_ID_CURR'). We don't need to worry about if it is optimal to average over previous applications of different time or different amount. This way, the model will find the correlation between a specific previous application and current probability of defaulting, or "what is the probability of a certain previous application belongs to someone who has defaulted loan currently". After we get the prediction for each previous application, we can do:</p> <p>agg_prev_score = df.groupby('SK_ID_CURR')['prediction'].agg({'mean','max','sum'...})</p> <p>The aggregated predictions are pretty good features, and we can merge it to our regular training set by 'SK_ID_CURR'. So instead of doing aggregations like mean/sum on previous applications as people normally do, we look at each previous application separately and aggregate later, I guess this would give some complementary view on the dataset.</p> <p>And we can do the same thing on each bureau record as well. These generated features boost my CV from .798 to ~ .801.</p> <p>==============</p> <p>Edit: Github <a href="https://github.com/NoxMoon/home-credit-default-risk">link</a> to my solution.</p> <p>I just use sklearn's 5 StratifiedKFold, and I have early_stopping = 100 and learning_rate =0.03.</p> <p>Yeah, I tried to use things like whether there is DPD as the 'pseudo target ', and even tried to extract 'previous of previous applications' as features. Didn't work well..</p> <p>In the first step I train a model for each previous application and we need features describing previous applications, so I groupby 'SK_ID_PREV'. After we got the prediction for that model, we then groupby 'SK_ID_CURR' to generate features that can merge to the main table.</p>
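<p>A minimal sketch of the approach described above (not the author's exact code): label every previous application with its parent application's TARGET, predict that target out of fold, and aggregate the per-loan predictions back to SK_ID_CURR. GroupKFold on SK_ID_CURR is used here to keep a customer's own loans out of their training folds; the author reports using a plain 5-fold StratifiedKFold with early stopping and learning_rate=0.03.</p> <pre><code>import numpy as np
from sklearn.model_selection import GroupKFold
from lightgbm import LGBMClassifier

def previous_application_score(prev_df, feature_cols):
    """prev_df: one row per SK_ID_PREV, with SK_ID_CURR, the parent TARGET, and loan-level features."""
    X = prev_df[feature_cols].to_numpy()
    y = prev_df['TARGET'].to_numpy()
    groups = prev_df['SK_ID_CURR'].to_numpy()
    oof = np.zeros(len(prev_df))
    for train_idx, valid_idx in GroupKFold(n_splits=5).split(X, y, groups):
        model = LGBMClassifier(n_estimators=500, learning_rate=0.03)
        model.fit(X[train_idx], y[train_idx])
        oof[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]
    scored = prev_df.assign(prediction=oof)
    agg = scored.groupby('SK_ID_CURR')['prediction'].agg(['mean', 'max', 'sum'])
    agg.columns = [f'PREV_SCORE_{c.upper()}' for c in agg.columns]
    return agg.reset_index()   # merge onto the main training table by SK_ID_CURR
</code></pre>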
Home Credit Default Risk
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Well, yes and no. :-) We (mostly I) did use blending during the competition, and as new teammates joined, we tried to blend their best work with the rest of ours. We also blended several of our new stackings with it. Our top blend was our second submission, and would have been in the money as well. Blending can be a useful indicator of which features are compatible and complimentary, and then try to incorporate them into a singe feature set. In that sense, our final feature set is a blend of five different feature sets.</p> <p>Don't know all the details (this was done by Silogram), but the idea was that the three external sources were among the most predictive features, and relatively stable between train and test.</p> <p>Here we'll try to make a more or less "technical" presentation of our work. For more personal and other insights, please refer to <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64541">Olivier</a>'s <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64490">Ryan</a>'s and <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64480">mine</a> threads, as well as to <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64480#378401">Phil</a>'s comment.</p> <p><strong>Introduction.</strong></p> <p>From my prior experience with credit underwriting, I’ve come to appreciate that this is one of the most complex problems for applying machine learning to. Data in this domain tends to be very heterogeneous, collected over different time frames, and coming from many different sources that may change and alter in midst of the data collection process. Coming up with a proper target variable is also a very tricky process, that requires deep domain knowledge and refined business analysis skills. I want, again, to commend Home Credit and Kaggle for coming up with such a great dataset, that was leak-free and very amenable to Machine Learning techniques. </p> <p>Based on what is known about Credit Underwriting, and in general these kinds of Machine Learning problems, it has been clear all along that there would be two things crucial for building a good model for this competition: 1. Good set of smart features. 2. Diverse set of base algorithms. We were able to have four main sources of feature diversity, and a few minor additional ones. </p> <p><strong>Data and Feature Engineering</strong></p> <p>The first big set of features that we used were the features that can be found in many kernels, including excellent one by Olivier. These features employed various forms of aggregation on many-to-one tables, and joined them with application features. There were about 700 of features total that were used.</p> <p>Feature Engineering (Ryan)</p> <p>Like many people I started my base models with simple aggregates over SK_ID_PREV &amp; SK_ID_BUREAU features for each SK_ID_CURR. I also created a lot of features based on division and subtraction from the application_train.csv. The most notable division was by EXT_SOURCE_3 which gave me small boosts to my CV and translated positively to the LB. 
I also found pretty decent boosts by using a label encoder for categorical variables. I did this for all the categorical variables on application_train.csv and all the categorical variables for the LAST application in the previous_application.csv. Aside from aggregate features over SK_ID_PREV &amp; SK_ID_BUREAU, I also used difference slices of data to compute aggregates. </p> <p>Previous_application.csv - aggregates for last 3, 5 and first 2, 4 applications. Each of these choices were tested against my CV and gave the best boost. Installment_payments.csv - aggregates for the last 2, 3, 5 payments. Aggregates over NUM_INSTALMENT_NUMBER at 1, 2, 3, 4 installments. Aggregates over last 60, 90, 180 &amp; 365 days filtered on DAYS_INSTALMENT. I also did aggregates over all installments that were past due. Where past due was defined as a 1 if the difference between DAYS_ENTRY_PAYMENT and DAYS_INSTALMENT was positive and 0 if not. POS_CASH_balance.csv, credit_card_balance.csv - similar methodology as installment_payments.csv I also used lag features from previous_application.csv. I used lag features up to the last 5 applications for each SK_ID_CURR. </p> <p>(Olivier) I suppose most of the competitors know my features and dataset :) Out of the public kernel I tried to compute a yearly interest rate that became one of the highest scoring feature on my models. I also had a try at creating predictions on a few tables (bureau and previous application) but for some reason I did not get the boost reported by other teams. With new skilled members entering the team with features/dataset that gave a real boost to our CV/LB I stopped working on features and focused on stacking.</p> <p>(Phil) The most important features that I engineered, in descending order of importance (measured by gain in the LGBM model), were the following: neighbors_target_mean_500: The mean TARGET value of the 500 closest neighbors of each row, where each neighborhood was defined by the three external sources and the credit/annuity ratio. region_id: The REGION_ID_POPULATION field treated as a categorical rather than as a numeric feature. debt_credit_ratio_None: grouped by SK_ID_CURR, the sum of all credit debt (AMT_CREDIT_SUM_DEBT) over the sum of all credit (AM_CREDIT_SUM). credit_annuity_ratio: AMT_CREDIT / AMT_ANNUITY prev_PRODUCT_COMBINATION: PRODUCT_COMBINATION value form most recent previous application. DAYS_CREDIT_mean: grouped by SK_ID_CURR, the mean CREDIT_DAYS value from the bureau table. credit_goods_price_ratio: AMT_CREDIT / AMT_GOODS_PRICE last_active_DAYS_CREDIT: From the active loans in bureau, the most recent DAYS CREDIT value, grouped by SK_ID_CURR. credit_downpayment: AMT_GOOD_PRICE - AMT_CREDIT AGE_INT: int(DAYS_BIRTH / -365) installment_payment_ratio_1000_mean_mean: Looking only at installment payments where DAYS_INSTALLMENT&gt;-1000, take the mean of AMT_PAYMENT - AMT_INSTALMENT, grouped first by SK_ID_PREV and then by SK_ID_CURR. annuity_to_max_installment_ratio: AMT_ANNUITY / (maximum installment from the installments_payments table, grouped by SK_ID_CURR).</p> <p>(Yang)</p> <p>My idea for the special features: some come from the open solution last 3,5,10 credit card, installment, and pos. But I modified the time period to include more variance in the features. Also, I apply the weighted mean( use the time period as the weight) to create some features related to the annuity, credit, and payment. I think these features are quite useful to extract the person credit habits. 
The last part I think that is interesting and useful: I generate some KPI constructed by income, payment and time which have some good impact on the CV.</p> <p>Feature selection and reduction (Bojan)</p> <p>One of the things that proved very useful for this competition was a good way of restricting the feature set. Various ways of aggregating features would usually result in feature sets that numbered in thousands, and it’s very likely that many, if not most, of those features were redundant, noisy, or both. I tried some very simple ways of reducing the number of features using frequency-encoded categorical features with the numerical features, and then running very simple forward feature selection using just Ridge regression. I’ve used this technique in the past for metafeatures, but this was the first time I tried it on raw features. Surprisingly enough, it worked remarkably well. I was able to reduce the “big” original set of features that numbered over 1600 features to just 240 features or so. Later on when Olivier added more features to the “base set”, I just directly added these features to my 240, and ended up with 287. These 287 features were able to give us models with CV of 0.7985 or so, with LB scores of 0.802-0.803. As we were adding more members to the team, it became important to try to combine their features with ours. It took some heroic effort on Olivier’s part to discern which one of Phil’s features were complementary to our own, and the combined set of features numbered in about 700+. When Ryan and Yang joined the team, it became impossible to repeat most of that effort. We’d try to just roughly see which ones of our features were different, and add them to theirs. </p> <p>Towards the end, it became obvious that Yang’s base set of features was perhaps overall the best. One of his Kernels alone was able to get 0.802x private / 0.803x public, thus placing it in the top 20 on the final LB. We would combine his base set with all the “special” features from the other feature sets, but due to time constraints we had to do this in a crude and probably very inefficient way. Our final “supersets” numbered in 1800-2000 features. It’s hard to put down the exact numbers - we had some “joint” sets, but we were also individually combining them for our own models. </p> <p><strong>Base Models</strong></p> <p>All of our base models were trained on StratifiedKFold, with 5-folds. I choose that setup for one of my models early on, for no particular reason, and that become the default. I know there has been lots of back and forth about relative merits of Stratified vs. “regular” KFold, but in the end I don’t think that choice makes much of a difference, at least not for this competition and this kind of problems. </p> <p>(Olivier) I used LightGBM, FastRGF and had a try at FFM but CV results were really below expectations (0.76 AUC).</p> <p>(Bojan) I used XGBoost, LightGBM, CatBoost, and even a simple Linear Regression. I essentially used just one set of hyperparameters for XGB (a pretty “standard” one, that you can find in most kernels, and about three different sets for LightGBMs - one that you can get in the kernels, one that the Neptune guys used, and one that’s more or less “standard” set of hyperparameters. Most of my XGB models were trained with gpu_hist on a GPU, while LightGBMs were trained on CPU. CatBoost models were not that good, and took forever to train, but I think they helped a bit with metafeature diversity. 
</p> <p>(Ryan and Yang) Trained several LightGBM models on their engineered datasets, as well as on some combination of their data with the rest of the datasets. Ryan also tried his luck with FFM, but no luck there either. </p> <p>Neural Nets etc. (Michael Jahrer) As many of you have read in the discussion forum neural net models are behind boosted trees (lgbm,xgb) in terms of AUC. This is my outcome too, at the beginning it was not possible for me to reach 0.785 CV with nn, while lgbm was above 0.79. NN is always a hot candidate for blending when it comes to a certain level of accuracy, so I tried my best. My features at the beginning was very bad, that's not my strength. After teaming up I had access to excellent feature sets. Kudos to all 5 remaining team members. Results were similar across all datasets, DAE+NN was consistently better than plain NN (maybe +0.005 in AUC), till end I never tried plain nn alone. DAE means denoising autoencoder preprocessing as input to nn. An unsupervised technique for better data representation. Same technique as I described in porto seguro. The number of raw features are in the 100..5000 range, this means the DAE need to be very big to avoid information loss and keep the overcomplete/relaxed representation. First DAE models was always with 10000-10000-10000 topology, this means I blow up number of features to 30k. Supervised nn had always 1000-1000 topology, that's my standard recommendation, works here as well. Tried more neurons, AUC goes down. DAE: swapnoise=0.2, high start learnrate in DAE (close to divergence), 1000 epochs. Supervised nn: lRate=2.5e-5, dropout=0.5, 50 epochs approx (until overfitting), logloss optimized. All hidden units are ReLU, Optimizer SGD, minibatchsize=128, small lRateDecay. One complete run with 5CV is about 1 day on a GTX1080Ti, DAE dominates runtime. Raw data normalization is still rankGauss, missing values are replaced by 0. I added reference code for this normalization to my github repo. Later in the competition I tried fewer+larger hidden layers in DAE and I guess its better here. Best AUC I got with a DAE with just 1 hidden layer with 50k neurons, followed by a 1000-1000 supervised nn (CV=0.794961 public=0.80059 private=0.79433) on a joint featureset from all 6 of us. Apart from all the neural net optimizations a simple lgbm with small learning_rate beat them all. Best lgbm was 0.8039 on CV, nearly 0.01 higher AUC than the nn. Anyways, I think neural nets here plays a minor role. My guess is that data normalization is still the biggest issue why they come not to same AUC level compared to lgbm. But they were needed at the end to fight for the last 0.0001 boost, that is what kaggle is about.</p> <p><strong>Ensembling</strong> For ensembling we used 3 levels with a complete pipeline of csv generation for easy sharing of meta files across team members and centralize the concatenation work.</p> <p>Everyday as we got new base predictions, L1 dense matrices were produced and input to 1st level stackers : NN (guess who), XGBoost, LightGBM and a Hill Climber linear model. We ended up with 90+ base predictions there. As the number of base predictions grew, LB and CV became more correlated. I strongly believe Michael’s NNs even if they individually did not perform as good as boosters di help a lot in getting CV and LB relation more stable. </p> <p>After a few days our CV/LB stopped progressing as much as we wanted and it became clear we had to go to Level 2. </p> <p>The L2 layer contained a NN, an ExtraTree and a Hill Climber. 
This is where Silogram suggested we should add a bit of restacking with raw feature.</p> <p>The final prediction was an equal weighted blend of these 3 predictions.</p> <p>(Michael)</p> <p>We had a neural net also as successful stacker. Here I used plain supervised nn with one hidden layer 500 ReLU units. The trick here was to find proper start lRate=1e-3 and good lRateDecay=0.96 (the lRate multiplier after each epoch) values to make a smooth run. Also dropout=0.3 was important here.</p> <p>(Phil) The ExtraTrees L3 model was a very shallow (max_depth=4), highly regularized (min_samples_leaf=1000) model that used just 7 of the L2 models plus a single raw feature, AMT_INCOME_TOTAL. It scored CV: 0.80665, LB:80842, PB: 80565.</p> <p><strong>Other things that we tried, some interesting insights, and a few recommendations</strong></p> <p>As briefly mentioned above, overall as a team we did not invest much time in hyperparamter tuning. I tried running one optimization script that Olivier had prepared, on both the XGB parameters as well as on LightGBM, but local results were somewhat discouraging. By then we were relying heavily on our ensmebles, so in our mind having several different models trained on different hyperparameters made up for not having a single, highly optimized model. </p> <p>One thing that Phil discovered, and I think there has also been some mention of this in the forums, is that it’s possible to predict train/test with over 0.98 AUC! We discovered this towards the end, but were not able to make any use of it. (Some of us were convinced that some of the top teams were in fact exploiting some kind of adversarial validation/ pseudolabeling.)</p> <p>One of the things that we tried was to develop a predictive model to impute the EXT_* features. The imputed features had AUC of 0.78x, but they did not help our base models, either for CV or on public LB. Based on our analysis, we believe that might be because these features are amongst the most consistent between the train and test sets, thus not really contributing much to the “lift” that the public LB had over the local CV.</p> <p>After the competition ended, we took a closer look at some of our base models. It turns out that our top three base models would all have placed in top 10, and their simple average would also been sufficient for first place! This, to me at least, suggests that for this kind of competition and this kind of problem, feature engineering and feature selection are still the most important step. Since we have not optimized our best feature set, or the hyperparametes for models trained on it, it is conceivable that it’s possible to create a single model that would outperform all of our ensembles. </p> <p>Yup. Turns out, for AUC it doesn't matter if you are using it as continuous or discrete. You can use Logistic Regression instead, but I usually find that Ridge has better score and is easier to work with.</p> <p>Sure. </p> <ol> <li>Train a ridge on a subset of features.</li> <li>Calculate AUC.</li> <li>Add features, one at a time. Train ridge. If AUC improves, keep the feature. If not, discard the feature.</li> </ol> <p>We don't have any plans to make most of our code available. We have used one repo for some of the feature engineering stuff, but otherwise we have all worked independently on most of the modeling. </p> <p>Congrats to you too! You guys kept us motivated throughout the competition. :)</p> <p>Thank you!</p> <p>No problem, don't mind at all. 
Thanks for creating that resource!</p> <p>We used EXT_* features as target for predictive model, based on all other features. We used the same k-fold as for all the other models, but had to restrict training rows to just those that had actual values of EXT_*. Then I just trained simple XGBoost models, with RMSE loss objective.</p> <p>I just wanted to reiterate that I used Ridge because it was a "quick and dirty" approach, not necessarily because it was the best. It's a good "first approximation" to relative feature importances. You can use other methods (XGBoost in linear or even nonlinear mode) to probably get even better results. </p> <p>We did some imputations of the missing nan values. Usually setting them to zeros or means for the given feature. Again, this is a very approximate method, so no need to go too fancy with that.</p> <p>I initially tried just using train/validation set for forward feature selection, but that ended up getting stuck in what I believe was a local optimum, and the results were pretty bad. Then I decided to use the full k-fold estimation for the Ridge, and the results were much better.</p> <p>I would actually start with a single feature, and then add all of them, one at the time. :) I tried going through the features as they are listed in the original table, and I also tried going through them based on their importance which I got from XGBoost. The latter works better. </p> <p><a href="/terenceflow">@terenceflow</a> I basically just left alpha to be zero for each feature. It is possible to fine tune alpha during the selection process, but in my experience it doesn't add much in terms of accuracy, but it significantly lengthens the feature selection time.</p> <p><a href="/dearll">@dearll</a> Yes, it too a really long time. Was running one of my computers the entire weekend. :)</p> <p><a href="/fabiendaniel">@fabiendaniel</a> What I meant to say by "reducing the number of features using frequency-encoded categorical features with the numerical features" is that the combination of features we used were the frequency-encoded categorical features and the original numerical features. In order for something like Ridge to create meaningful results we needed to turn all features into meaningful numerical ones. We chose frequency encoding for categorical, since it's pretty simple and straightforward to implement, and usually gives pretty decent results.</p>
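<p>To make the forward-selection discussion above concrete, here is a minimal sketch of greedy forward feature selection scored by out-of-fold Ridge AUC. This is an illustration, not Home Aloan's actual code: the helper names, the 5-fold setup and the default Ridge regularization are assumptions; the description above only fixes the overall procedure (frequency-encoded categoricals plus numeric features with simple imputation, add one feature at a time, keep it only if the k-fold AUC improves).</p> <pre><code># Hypothetical sketch of greedy forward feature selection with Ridge,
# scored by out-of-fold AUC. X: DataFrame of numeric features, y: Series target.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def oof_auc(X, y, cols, folds):
    """OOF AUC of a Ridge model restricted to the given columns."""
    oof = np.zeros(len(y))
    for trn_idx, val_idx in folds.split(X, y):
        model = Ridge()  # treating the 0/1 target as continuous is fine for AUC
        model.fit(X.iloc[trn_idx][cols], y.iloc[trn_idx])
        oof[val_idx] = model.predict(X.iloc[val_idx][cols])
    return roc_auc_score(y, oof)

def forward_selection(X, y, candidate_cols, n_splits=5, seed=42):
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    selected = [candidate_cols[0]]           # start from a single feature
    best = oof_auc(X, y, selected, folds)
    for col in candidate_cols[1:]:           # e.g. ordered by XGBoost importance
        score = oof_auc(X, y, selected + [col], folds)
        if score > best:                     # keep the feature only if OOF AUC improves
            selected.append(col)
            best = score
    return selected, best
</code></pre> <p>As mentioned above, iterating over the candidates in descending XGBoost-importance order worked better for the author than the original column order, and using the full k-fold OOF score (rather than a single holdout) avoided getting stuck in a poor subset.</p>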
Home Credit Default Risk
24th place - Simple Solution with 7 Models.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Merci beaucoup !</p> <p>Thanks, made a little typo, CV is 0.793 - Sure, when I have time ! :)</p> <p>First of all I want to thank my wonderful teammate Abdel for all his fabulous work, many thanks to all contributors to the discussion during the entire competition (Oliver, Tilii, Bojan...), and last but not least, thanks to all the grandmasters (Giba, Kaz, Silogram ...) who push us to get beyond our limits. </p> <p>This is a brief summary, which will be completed soon for the FE part (when the Google AI competitions are over) !</p> <p>Our solution is based on a diversity of feature engineered datasets (Aggregated, Binarized, Stats, Lags ...). We constructed seven different datasets, each with a lot of different features and one unique model for each one. </p> <p>The main reason for this approach is that we first tried one big feature engineered dataset and created diversity with the models, but CV was not really increasing. That's why we tried to tackle the problem this way.</p> <p>The seven models were really basic :</p> <ul> <li>6 Boosting models with CV in the range [0.792-0.80] and LB in [0.790-0.802]. Each one was optimized with Bayesian search for hyperparameters. And we brute-forced one with stump trees on a small dataset.</li> <li>1 NeuralNet with CV 0.793 and LB 0.797. (PReLU/BatchNorm/High DropOut) (giving a +0.02 boost)</li> </ul> <p>We stacked them with a LogReg (CV 0.804, LB 0.806) and a LinearModel (CV 0.805, LB 0.807).</p> <p>So, as you can see, nothing really fancy, just do some feature engineering and <strong><em>trust your CV</em></strong> !</p> <p>Thank you, and congrats to you too ! Sure, we just fixed the maximum depth of the trees to two - a simple binary tree with 1 node - and ran a brute-force search to find the other parameters. It led us to the same LB performance as full-depth boosting, but with a better CV for a particular dataset. I have found that, sometimes, when the CV-LB gap is huge, this technique is useful to reduce it ! </p> <p>And if I remember well, Hastie talks about boosted stump trees in a paper or course, but I don't remember which one; if it comes back to me I'll let you know !</p> <p>Hello, in fact they are not totally independent. For example, one dataframe is created by aggregating features from all the csv files, one is created from several statistics, etc. In fact, you could concatenate them all to obtain one strong dataset, but we couldn't achieve good performance with the big one, so we split it up in a natural way :)</p> <p>Thanks ! You can find a good explanation here : <a href="http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/">http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/</a> and <a href="https://www.coursera.org/lecture/competitive-data-science/stacking-Qdtt6">https://www.coursera.org/lecture/competitive-data-science/stacking-Qdtt6</a> ! Have fun :)</p> <p>Thanks Giba, congrats to you too ! :)</p>
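<p>For readers curious what a "stump tree" booster might look like in practice, below is a rough LightGBM sketch. The depth-limited idea comes from the post above, but every hyperparameter value here is an illustrative guess (the authors say they brute-forced the remaining parameters), so treat it as a starting point rather than their configuration.</p> <pre><code># Illustrative "stump-like" LightGBM: very shallow trees and many boosting
# rounds, so the model is dominated by additive main effects.
import lightgbm as lgb

stump_params = {
    "objective": "binary",
    "max_depth": 2,        # very shallow trees, as described above
    "num_leaves": 3,
    "learning_rate": 0.02,
    "n_estimators": 8000,  # many rounds to compensate for the weak learners
    "subsample": 0.9,
    "colsample_bytree": 0.5,
}

stump_model = lgb.LGBMClassifier(**stump_params)
# stump_model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], eval_metric="auc")
</code></pre>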
Home Credit Default Risk
2nd place solution ( team ikiri_DS )
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="/janmejaya">@janmejaya</a> <br> Thanks for your some questions. </p> <hr> <blockquote> <p>Is there reason/thought process behind using CNN and RNN? </p> </blockquote> <p>A good question. <br> Some tables (e.i. bureau_balance, POS_CASH_balance, .. ) have <code>time series</code> information. <br> So <a href="/tereka">@tereka</a> tried to catch those information via both RNN and CNN. </p> <hr> <blockquote> <p>And How does they behave individually on this data? </p> </blockquote> <p>Could you check the below link ? <br> Thanks. <br> <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379880">tereka solution</a></p> <p><a href="/alosmn">@alosmn</a> <br> Thanks and congrats for your silver medal ! </p> <p><a href="/pragyanbo">@pragyanbo</a> </p> <p>Thnaks for your comments. <br> For example, <a href="/nejumi">@nejumi</a> made some interesting features. <br> He explained about his fe in following link. <br> <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64784">nejumi's findings</a></p> <p>Thanks @YaGana and congrats your medal.</p> <blockquote> <p>So you have an additional feature. And what happens next? I cannot wrap my mind around the steps you mentioned.</p> </blockquote> <p>Yes. Additional feature is how train is <strong>test-like</strong>. <br> Then I used those probabilistic values as the weights to select samples from train data. <br> For example, when you select 30% of train data, you will select those with that <strong>test-like</strong> values. <br> After you have selected 30% of train data, you can optimize blending coefficients to maximize AUC with that selected data. <br> In this stage, I used OOF ( 5fold ) predictions of models not to overfit. <br> ( blended OOF pred = c_1 * model1_OOF_pred + c_2 * model2_OOF_pred + ... ) <br> Hope it would help your wrapping up. </p> <p>Hi <a href="https://www.kaggle.com/adamsfei">@Adams</a> Good question. Could you read <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379799">this</a>?</p> <p>Addition to linked explanation, I will refer naming of this method.</p> <p>Adversarial : This means just using adversarial validation as sampling weights. <br> Stochastic : We iterated sampling process for 350 times and averaged weights of each model to get converged. This procedure is Monet Carlo like method. </p> <p>Thanks @zr ! Congrats for your 12th place !</p> <p>@Banerjee Thank you!</p> <p>@Arthur Thank you for letting me know. Sure, go ahead.</p> <p><a href="/pocket">@pocket</a> thanks for many questions. </p> <p><strong>1) What is "stochastic adversarial blending"?</strong> <br> As @andy refered in <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/61273">this post</a>, we can build a model which can distinguish train or test data. <br> We use that prediction ( adversarial validation ) when blending. <br> The process is as follows, </p> <ol> <li>Sample data with a sampling ratio by using adversarial validation as a sampling weight.</li> <li>Optimize the weights with train prediction. </li> <li>Iterate 1~2 process to get converged weights. 
</li> </ol> <p>This technique propelled our score about 0.0002.</p> <p><strong>2) Did you all share the features for each of your models?</strong> <br> Some were shared, others were not. </p> <p><strong>3) What features did you use for the final blend?</strong> <br> For the final blend, we did not use any features. <br> Attached image shows the features used in each model. <br> ( Cardinal numbers were allocated in each feature. ) </p> <p><strong>4) Were you looking at the CV score to tune your blend?</strong> <br> Yes. We monitored both CV and Public LB. <br> We submitted the predictions which have nice score in CV and public LB, both.</p> <p><strong>5) Did you all choose the same cv-split for the blend?</strong> <br> Actually at first we tried to keep same cv-split. <br> But after some experiments of stacking we did not care cv-split any more. <br> ( Stacking tended to overfit. CV &gt;&gt; LB. So we decided to use only blending. )</p> <p><a href="/pacificactuary">@pacificactuary</a> thanks!</p> <p>@Soyan Many thanks. For blender, to get the picture of this complex model pipeline was the first task.</p> <p>@nlgn Thanks ! Huge team, a lot of models, a lot of features.</p> <p>Congratulations to the winner Home Aloan and to all participants in this competition!</p> <p>This competition was very tough for us and the biggest competition ever. <br> Thank you Home Credit and kaggle for organizing this great competition.</p> <p>ikiri_DS consists of 12 kagglers and we can not write up detail of each solution at once. <br> So we decided to post our model pipeline and short description of individual approaches at first. <br> We will add more descriptions with requests in your comments of this discussion. <br> Please feel free to ask us questions and comments, thanks. </p> <hr> <p><strong>Member comments</strong> </p> <ul> <li><p><strong><a href="https://www.kaggle.com/onodera">ONODERA</a></strong> <br> I'm in charge of feature engineering. I don't have any my models. Here is my <a href="https://github.com/KazukiOnodera/Home-Credit-Default-Risk">git</a>. </p></li> <li><p><strong><a href="https://www.kaggle.com/ryuichi0704">RK</a></strong> <br> Feature <br> ・Various dimension reductions (PCA, UMAP, T-SNE, LDA...) <br> ・Genetic programming application (from kernel) <br> ・Brute force feature search from ~1TB feature pools <br> Model <br> ・Algorithm comparison (Catboost, LightGBM with dart, and so on) <br> ・Parameter tuning <br> Blending <br> ・Algorithm comparison (Direct AUC maximization with modified Powell algorithm is good for making my 2nd level model) </p></li> <li><p><strong><a href="https://www.kaggle.com/nejumi">Yuya Yamamoto</a></strong> <br> Interest rate feature, finding about difference between train and test data and residual correction of Neural Network predictions. I have posted my finding and others in <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64784">https://www.kaggle.com/c/home-credit-default-risk/discussion/64784</a>. </p></li> <li><p><strong><a href="https://www.kaggle.com/ktts1031">tosh</a></strong> <br> I'm in charge of Neural Network(NN) in this competiton. 
<a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#380224">https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#380224</a></p></li> <li><p><strong><a href="https://www.kaggle.com/ireko8">ireko8</a></strong> <br> DAE based on porto seguro 1st solution <br> <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379891">https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379891</a> </p></li> <li><p><strong><a href="https://www.kaggle.com/tereka">tereka</a></strong> <br> CNN and RNN models. <br> <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379880">https://www.kaggle.com/c/home-credit-default-risk/discussion/64722#379880</a> </p></li> <li><p><strong><a href="https://www.kaggle.com/brandenkmurray">branden</a></strong> </p></li> <li><p><strong><a href="https://www.kaggle.com/takuok">takuoko</a></strong> <br> I worked with angus and branden. <br> Our task in team ikiri_DS is to generate diversity, and we could do it! <br> I made LGBM ( private 0.79966 / public 0.80328 ). </p></li> <li><p><strong><a href="https://www.kaggle.com/andrew60909">Angus</a></strong> <br> Hello ~ I am Shuo-Jen, Chang - a kaggler from Taiwan. you can just call me Angus. My job in ikiri_DS is to generate diversity by modeling with my feature set. see the attachment below ( HC - Brief solution from Shuo-Jen, Chang.html )!</p></li> <li><p><strong><a href="https://www.kaggle.com/nthanhtam">Tam</a></strong> </p></li> <li><p><strong><a href="https://www.kaggle.com/titericz">Giba</a></strong> <br> Post Processing <br> discussion <br> <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64485">https://www.kaggle.com/c/home-credit-default-risk/discussion/64485</a> <br> kernel <br> <a href="https://www.kaggle.com/titericz/giba-post-processing-user-id-boost">https://www.kaggle.com/titericz/giba-post-processing-user-id-boost</a> </p></li> <li><p><strong>Maxwell</strong> <br> Meta features on train_app and bureau( gain : public +0.002 / private +0.003 ), some models( LGBM, ExtraTree ) for model diversity and blending using adversarial validation( gain : private +0.0002 ) </p></li> </ul> <p><img src="https://cdn-ak.f.st-hatena.com/images/fotolife/g/greenwind120170/20180901/20180901083809.png" alt="model pipeline of team ikiri_DS"></p> <p>Thanks @Shourov !</p> <p>Thanks <a href="/senkin13">@senkin13</a>! and congrats your 8th position!</p> <p>Many thanks for sending me back. I'm happy you are interested in our solutions. </p> <blockquote> <p>Let me check if I understood you correctly :) So you tried to use more data from train if it is indistinguishable from the test set? Something like weight = (1-adversarialAUC), and you subset the train set with the weight?</p> </blockquote> <p>Yes. You are right. We sampled test-like data from train data when blending. </p> <hr> <blockquote> <p>When you say blending, I expect that you used a holdout set. If that is correct, you first trained the adversarial model which chose your holdout set?</p> </blockquote> <p>Yes. I used same fold to predict adversarial validation. </p> <p>Yes! <br> You are right!</p> <p><a href="/ziychen">@ziychen</a> <br> Thank you for your questions. </p> <hr> <blockquote> <p>If I understand correctly, the new test-like feature is only used as sampling weight to blend multiples models. Repeat sampling-blending procedures k times then take the average. I am curious how this Adversarial Stochastic Blending improves your results. 
Do you have some comparison result on simple blending and Adversarial Stochastic Blending?</p> </blockquote> <p>Yes. <br> I chose the <code>30%</code> sampling ratio based on a simple experiment. <br> <code>30%</code> sampling improved the blended score by about 0.0002 (not much) in comparison to 100% sampling. <br> The improvement could be observed on both public and private. <br> I also tested 20%, but the score became worse. </p> <blockquote> <p>I am also curious whether it is possible to use this test-like feature directly in those classifiers. For example in lightgbm, instead of uniformly sampling with bagging_fraction to grow each tree, use the test-like feature as the sampling weight.</p> </blockquote> <p>To be honest, at first I tried to use the test-like feature exactly as you said. <br> But those trials failed. <br> So at the point when I gave up on using the test-like feature inside the models, I hit on using it in blending. </p> <p>Thanks.</p>
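<p>Based on the description above (adversarial-validation probabilities used as sampling weights, roughly 30% of train sampled per iteration, blending weights optimized on each sample and then averaged over about 350 iterations), a rough reconstruction of "adversarial stochastic blending" could look like the sketch below. It is not the team's code: the Powell optimizer, the non-negativity handling and all variable names are assumptions.</p> <pre><code># Rough reconstruction of "adversarial stochastic blending".
# adv_prob : P(row looks like test) from an adversarial-validation model, per train row
# oof      : (n_train, n_models) out-of-fold predictions; y : train target (numpy arrays)
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def blend_weights(oof, y, adv_prob, sample_frac=0.3, n_iter=350, seed=0):
    rng = np.random.default_rng(seed)
    n, m = oof.shape
    p = adv_prob / adv_prob.sum()                  # test-likeness as sampling probability
    all_w = []
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False, p=p)

        def neg_auc(w):
            w = np.abs(w)
            blend = oof[idx] @ (w / w.sum())
            return -roc_auc_score(y[idx], blend)

        res = minimize(neg_auc, x0=np.full(m, 1.0 / m), method="Powell")
        w = np.abs(res.x)
        all_w.append(w / w.sum())
    return np.mean(all_w, axis=0)                  # averaged ("stochastic") weights
</code></pre>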
Home Credit Default Risk
31st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi guys!</p> <p>It is a little bit late, but I want to share my simple solution. This 31st place is the best result I have had so far - just 7 places short of the gold zone.</p> <p>My model is a simple stack of 4 models:</p> <ul> <li>open solution with seed 0 from <a href="https://github.com/neptune-ml/open-solution-home-credit">https://github.com/neptune-ml/open-solution-home-credit</a></li> <li>open solution with seed 90210 from <a href="https://github.com/neptune-ml/open-solution-home-credit">https://github.com/neptune-ml/open-solution-home-credit</a></li> <li>kaggle kernel <a href="https://www.kaggle.com/aantonova/797-lgbm-and-bayesian-optimization">https://www.kaggle.com/aantonova/797-lgbm-and-bayesian-optimization</a></li> <li>my own XGBoost model on mean / min / max / delta features with CV 0.796</li> </ul> <p>The model weights found by regression were: 0.23 / 0.12 / 0.3 / 0.35</p> <p>Also, I added the little data leak that was found by <a href="/raddar">@raddar</a> <a href="https://www.kaggle.com/raddar/a-competition-without-a-leak-or-is-it">https://www.kaggle.com/raddar/a-competition-without-a-leak-or-is-it</a></p> <p>For stacking I used logistic regression and it gave me Public LB 0.80651 / Private LB 0.80047 / Local CV 0.801.</p> <p>I didn't spend too much time on this competition, so a gold medal would not have been well-deserved for me.</p> <p>So, a silver medal and top 1% is a good result for me. Thank you all for the great ideas in your kernels. I learned a lot. Happy new kaggling!</p> <p><a href="https://github.com/paveltr/home_credit_default_risk">https://github.com/paveltr/home_credit_default_risk</a></p> <p>I tried to build a NN on Deep Autoencoder features. It showed Public LB 0.767 and worsened the stacking results, so I decided not to use it.</p> <p>@John I just figured out that the CV score of the open solution is not stable - when I changed the seed, the score changed significantly. Then I thought it would be good to combine the runs to get diversity of predictions.</p>
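<p>As a minimal illustration of the logistic-regression stacking step described above (not the author's actual code; the fold setup and regularization strength are assumptions), stacking a handful of out-of-fold predictions can be done roughly like this:</p> <pre><code># Minimal stacking sketch: logistic regression trained on the out-of-fold
# predictions of a few base models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# oof_train: (n_train, n_models) OOF predictions; preds_test: (n_test, n_models)
def stack_with_logreg(oof_train, y, preds_test, n_splits=5, seed=0):
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    oof_stack = np.zeros(len(y))
    test_stack = np.zeros(len(preds_test))
    for trn_idx, val_idx in folds.split(oof_train, y):
        clf = LogisticRegression(C=1.0, max_iter=1000)
        clf.fit(oof_train[trn_idx], y[trn_idx])
        oof_stack[val_idx] = clf.predict_proba(oof_train[val_idx])[:, 1]
        test_stack += clf.predict_proba(preds_test)[:, 1] / n_splits
    print("stacked CV AUC:", roc_auc_score(y, oof_stack))
    return test_stack
</code></pre>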
Home Credit Default Risk
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>What I did here is the following:</p> <ul> <li>First I run stackers without any raw features, only with first-level model predictions to get baseline. </li> <li>Afterwards I run a loop through all raw features checking CV for first-level models +1 feature and kept the one with biggest CV improvement - took this +1 raw feature score as a new baseline and run a loop again through all other raw features. </li> <li>Repeated this until CV was not improving reasonably any more.</li> </ul> <p>As my initial stacker had only about 15 features it was trained very fast and each loop wasn't taking too long.</p> <p>I filled missing values with mean for all stackers except LightGBM.</p> <p>For now I guess I'll take some short break from competitions as 2 last weeks were really tiring. But sure I'll be back - Kaggle is very addictive :)</p> <p>Congratulations to teams in first 2 places and to everyone else who took part in this competition and learned something new!</p> <p>This was the biggest competition by number of teams participating so far and definately was not an easy one. I think it was also very interesting competition with rich dataset which gave wonderful possibilities for improving ML and feature engineering skills - thanks to Kaggle and Home Credit for this!</p> <p>We didn't use any leaks, duplicate rows or other kind of post-processing tricks. Only pure feature engineering and ML models.</p> <p>Next we'll give some short description of our individual approaches.</p> <h2>Evgeny approach</h2> <p>I built model based on my previous experience in credit scoring (last year I worked 8 months in a bank). Features were created according to my understanding of scoring process and the data. I tried to create a model not very far from a real process. It was interesting how far I could reach with this approach against Kaggle without stacking As I had limited time, my model not perfect but good for further improvements - 6th place now alone. </p> <p>My main model was a mix of features and metafeatures. I built separate models for each block of data - base application, last application, bureau, credit cards, installment. I used predicts and also some features from sub models (or created for those models but not used) in the main models. At the end my main model had 124 features. Together in all models there were near 250 features from more than 1000 created.</p> <p>I selected features by local cross validation one by one. My models were compact and each check iteration was very fast. During one night it was possible to check which features could be added or deleted with cv improvement. This way helped me to control my model better. Near the end of the competition I had a divergence between CV and LB and tried to find better subset for validation, but without success and return to full train.</p> <p>Models for bureau and prev application were build unusual way - I used data without aggregation by sk_id_curr and used target from main application (same for all data for one client). I was very weak model Predicts were averaged by sk_id_curr after. 
This approach performed better in main model compared to model with aggregated data with better individual score.</p> <p>I also built models to predict ext_sources features. I use main model for that. Predicts and differences with original were useful features. I also tried to built model to predict income, but without success.</p> <p>First month I developed model locally without submissions and I missed gap between CV and LB. When I submitted first time, I expected some gap as many reported, but LB score was in line with CV (hear .800). It was little bit disappointed but I trusted my CV. Before merge I reached .804 CV &amp; .805 LB and still not realized what I did wrong as I know that my model far better than public.</p> <p>I offered to alijs try to reach the prize together as he had model based on public scripts but much better. We had good effect from join.</p> <p>After merge I took some top features from alijs models and he took some mine and our models became much better.</p> <h2>alijs approach</h2> <p>I started this competition quite long ago and got quite high (about 10th if I remember correctly), but switched to Santander competition at some point and was slowly dropping down the LB here. When Evgeny offered to create a team about a week before deadline, I instantly agreed, as we had already worked together in one team previously with very good success.</p> <p>As I don't have any experience with credit scoring, my features were mostly created from statistical point of view (different aggregates, etc). </p> <p>There were a lot of teams in top of LB with many members so they probably had a lot of diverse models to choose from. As we were only 2 members in the team, we had to find alternative ways to get as many diverse models as possible to be competitive. </p> <p>So for example I took about 50 different model runs of my experiments regardless of their CV scores, selected 7 most uncorrelated ones and made average of them - got quite solid CV score literally out of nothing. My take-away from this - don't throw out anything until the competition ends, maybe it will be useful at some point later.</p> <p>For combining our very different models together I made 4 second-level stackers (actually more, but others were dropped) and our best submission (Private LB: 0.80511, Public LB: 0.80969, CV: 0.80855) was simple average of those 4:</p> <ul> <li>LightGBM (CV: 0.808419)</li> <li>Random Forest (CV: 0.807978)</li> <li>Extra Trees (CV: 0.807539)</li> <li>Linear Regression (CV: 0.807291) </li> </ul> <p>Each stacker was trained on slightly different subset of about 15 first-level model predictions (mainly LightGBMs, but also some XGBoost, NN) and some selected raw features.</p> <p>For the final submissions we selected one submission with highest LB score and the one with highest CV. The latter turned out also to be the one with highest private LB score. That just confirms one more time - trust your CV.</p> <p>Good luck!</p> <p>Neural networks didn't work well for me in this competition, I used them only to give some diversity for stackers to choose from. </p> <p>Best one was similar to this one: <a href="https://www.kaggle.com/ezornow/simple-ffnn-with-rank-gauss-and-early-auc-stopping">https://www.kaggle.com/ezornow/simple-ffnn-with-rank-gauss-and-early-auc-stopping</a></p> <p>Best CV I got with Neural network was 0.786, didn't test it alone against LB.</p>
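<p>The "unaggregated" sub-model idea described above (training directly on bureau rows with the parent application's target, then averaging the weak row-level predictions per SK_ID_CURR) can be sketched roughly as follows. The column handling, the LightGBM parameters and the in-sample prediction shortcut are simplifying assumptions; as noted in the comments, a real pipeline should produce out-of-fold predictions with folds grouped by SK_ID_CURR to avoid leakage.</p> <pre><code># Sketch of a row-level sub-model on bureau (no aggregation by SK_ID_CURR):
# each bureau record inherits the target of its application, and the weak
# row-level predictions are averaged per applicant afterwards.
import pandas as pd
import lightgbm as lgb

app = pd.read_csv("application_train.csv", usecols=["SK_ID_CURR", "TARGET"])
bureau = pd.read_csv("bureau.csv")
data = bureau.merge(app, on="SK_ID_CURR", how="inner")    # broadcast the target

feats = [c for c in data.columns
         if c not in ("SK_ID_CURR", "SK_ID_BUREAU", "TARGET")
         and data[c].dtype != "object"]                    # keep it simple: numeric only

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05, num_leaves=31)
model.fit(data[feats], data["TARGET"])                     # in practice: out-of-fold by SK_ID_CURR

row_pred = model.predict_proba(data[feats])[:, 1]
bureau_score = (pd.DataFrame({"SK_ID_CURR": data["SK_ID_CURR"], "pred": row_pred})
                .groupby("SK_ID_CURR")["pred"].mean()      # average weak predictions per applicant
                .rename("bureau_rowlevel_score"))
</code></pre>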
Home Credit Default Risk
4th place sharing and tips about having a good teamwork experience
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>hey max, the goal here is to generate diverse oof for second layer modeling, i.e stacking, and not to improve the single model. the problem i usually have is that i might sometimes add lots of features and it does not help. no worry, keep those oof predictions and use them for stacking. like i mentioned, applying some feature selection method on the Layer 2 modeling to make sure you select the right one</p> <p>Congrats to all the winners. we know we will move up in LB, but 4th place is probably the worst to be, however, our submission is our best cv and best lb. nothing to regret. </p> <p>my teammate will share more details and i will share what i learn. i believe here are the few tricks that help us.</p> <ol> <li>trend features, the last 1,3,5,10 in installments and pos,bureau features</li> <li>train diverse models on subset of features and on overall big feature set we had trained over 200 models on different parts with different models to make sure we completely squeeze all the values from our feature. Our final best single model has CV 0.8022 LB: 0.806 PB: 0.801 (gold medal!) the stacking CV is 0.8055 LB: 0.808 PB: 0.804</li> <li>one trick we did was when we were running Bayesian to optimize the parameters, we save the predictions and use that as part of the oof</li> <li><p>when build stacking layers, instead of using the 100+ oof, we first run a feature selection and then run the lightgbm, it helps improves the cv by about 0.0005</p></li> <li><p>we applied a "manual" magic correction on our final results. this is an insight from my team mate greatdatanalyst that if you correct your prediction for revolving loan that is over 0.4 by 0.8, it will boost your auc. maybe revolving loan has lower probability in the testset</p></li> </ol> <p>tips about teamwork: i think we did really well on teamwork and everyone contributes and no one burns out. </p> <ol> <li><p>we have one person kain focusing on just running the stacking models and keep track of our progress. we have to create 2 slack pages because we are running out of space sharing files!</p></li> <li><p>rest of the people keep creating features and generating oof, creating features and generating oof. towards the last week, everyone regroups and combine all the features and then run the models on the big dataset to squeeze the last bit of value</p></li> </ol> <p>i have attached our best submission and feel free to ensemble and see what we can get :) keep kaggling.</p> <p>0.8 is tuning based on cv and it also worked on lb when we submit so we know this trick will definitely work. how we found this i believe is that we analyze our predictions for revolving loan and cash loan. the average prediction for revolving loan is higher than cash based on cv. when we set prediction greater than 0.4, the false positive rate is too high based on our model something like 65% range. therefore, we know our model did really poor here. in addition. in training set, average default rate for revolving loan is lower so we just give a try. it is dangerous in many other scenarios. </p> <p>hey @sasakama, we applied bayesian in the first layer not the second. 
the goal is just to generate more diverse OOF predictions; it is less about tuning. What we usually found is that if you train a particular type of model 20 times with different parameter setups and aggregate the results, it will be close to the performance you would get through careful tuning. This is just more of a brute-force way. </p> <p>Nothing really fancy here. One thing we realized helps a little bit is the weighting: we used 3 to 1 or 5 to 1.</p> <p>What I mean is: lower it by 20%, i.e. use the original value times 0.8 :)</p>
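<p>The "manual" correction discussed above can be expressed as a small post-processing step. The 0.4 threshold and the 0.8 factor come from the post; the file names and the use of NAME_CONTRACT_TYPE to identify revolving loans are assumptions about how one would implement it.</p> <pre><code># Sketch of the post-processing described above: shrink high predictions
# for revolving loans by 20% (threshold 0.4, factor 0.8 as mentioned).
import pandas as pd

sub = pd.read_csv("submission.csv")                        # SK_ID_CURR, TARGET
test = pd.read_csv("application_test.csv",
                   usecols=["SK_ID_CURR", "NAME_CONTRACT_TYPE"])
sub = sub.merge(test, on="SK_ID_CURR", how="left")

mask = (sub["NAME_CONTRACT_TYPE"] == "Revolving loans") & (sub["TARGET"] > 0.4)
sub.loc[mask, "TARGET"] *= 0.8                             # tuned on CV, verified on LB

sub[["SK_ID_CURR", "TARGET"]].to_csv("submission_corrected.csv", index=False)
</code></pre>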
Home Credit Default Risk
7th solution - **Not great things but small things in a great way.**
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you ! Congratulations to you too )</p> <p>Great question ! The thing is, whenever we found new useful features we applied them in a new model with different parameters (found by Bayesian search).</p> <p>Thank you, congratulations to you too guys ))</p> <p>I'm glad to work with you bro )</p> <p>We have four diverse LGBM models over 0.8 in CV ! How we did it: whenever we found great new features, we ran a new model and followed these steps : Feature Engineering - Clean Data - Bayesian search for hyperparameters. <a href="/jsaguiar">@jsaguiar</a> will share one of our best models !</p> <p>Thank you @Giba, waiting for your amazing solution ) Btw, congratulations on your #1 worldwide competitions rank )</p> <p>Yeah, I agree ! Congratulations to you and thank you )</p> <p>Thank you ! Because the Logistic Regression gave us : 0.8049 CV !</p> <p>Here's our 7th place solution.</p> <p>First of all I want to thank <strong>my great teammate</strong> <a href="/jsaguiar">@jsaguiar</a> , I'm glad to work with him ! Many thanks to all contributors to the discussion during the entire competition (<a href="/dromosys">@dromosys</a>, <a href="/ogrellier">@ogrellier</a>, Neptune.ml, ....). Congratulations to all winners and new Experts, Masters and Grandmasters!</p> <p><strong>Summary :</strong></p> <p>1). We started the competition with feature engineered datasets, more than 500 new useful features (aggregation features) (took about 10 days), especially from four features which gave roughly a +0.04 boost : [NEW_CREDIT_TO_ANNUITY_RATIO , EXT_SOURCE_3 , INSTAL_DAYS_ENTRY_PAYMENT_MAX , Days_Birth]. In total, we had more than 1200 features to run our models on.</p> <p>2). 11 diverse LGBM models, with different data representations &amp; parameters, gave CV in the range [0.794-0.8036] and LB in [0.796-0.803] (took about 10 days).</p> <p>We found that using LGBM models with different data representations was not going to be enough for a gold medal, so we then chose to work on NNs.</p> <p>3). Two NeuralNets with CV [0.792,0.793] and [0.792,0.79X]. I’m not so proud of my NNs, but we didn’t have much time to work on them (the last 4 days); because of that I want to <strong>thank</strong> <a href="/aharless">@aharless</a> for sharing his amazing public NN kernel.</p> <p>4). We stacked the 13 models with a LinearRegression : <strong>CV 0.8062, Public 0.80871 and Private 0.80396</strong>.</p> <p>We never cared about the Public LB and just focused on CV.</p> <p>And as a PhD student, I want to thank <strong>my great and unique professor</strong> <a href="/mchahhou">@mchahhou</a> too! He is the one who introduced us to Kaggle for the first time and taught us Machine Learning and everything !</p> <p>Thank you )</p>
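<p>Two of the four high-impact features named above can be reproduced roughly as follows. The post only gives the feature names, so the exact definitions (credit-to-annuity ratio, and the maximum of DAYS_ENTRY_PAYMENT per applicant) are reasonable guesses rather than the team's exact code.</p> <pre><code># Sketch of two of the features named above; definitions are assumptions.
import pandas as pd

app = pd.read_csv("application_train.csv")
inst = pd.read_csv("installments_payments.csv")

# NEW_CREDIT_TO_ANNUITY_RATIO: loan amount relative to the annuity
app["NEW_CREDIT_TO_ANNUITY_RATIO"] = app["AMT_CREDIT"] / app["AMT_ANNUITY"]

# INSTAL_DAYS_ENTRY_PAYMENT_MAX: most recent payment date across all installments
instal_max = (inst.groupby("SK_ID_CURR")["DAYS_ENTRY_PAYMENT"].max()
              .rename("INSTAL_DAYS_ENTRY_PAYMENT_MAX"))
app = app.merge(instal_max, on="SK_ID_CURR", how="left")
</code></pre>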
Home Credit Default Risk
8th Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Rank percentage can be generated using either Pandas or numpy. Let's say y is the prediction you get from a LGB model, rank percentage can be got using,</p> <p>y = pd.Series(y).rank(pct=True).values </p> <p>Then you can ensemble it with other model predictions. </p> <p>As for the reference, I believe some threads in the discussion have mentioned, but I can not find it now. You can do an experiment to verify it: </p> <ol> <li><p>Get the area under the ROC curve for your prediction over the train data.</p></li> <li><p>Do the rank percentage over the prediction from Step 1, then calculate the area under the ROC curve. </p></li> </ol> <p>I believe you will find the result from the above two steps are identical to each other. </p> <p>Stratified 10-fold, no secret ingredient. </p> <p>For each model predictions, do a rank percentage, then put it into the ensemble pool. You can get it by using 'pd.Series(x).rank(pct=True).values'. </p> <p>We realize this when trying to figure out the reason for the bad performance of stacking. The single models in stacking pool have low correlation, but stacking results are kind of bad. After rank percentage, the correlation increases to 0.9+, and the stacking model behaves normally. </p> <p>Thanks, CPMP! </p> <p>Thanks! </p> <p>Yes. The first thing to do for a new joined teammate is re-run his model using our k-fold split seed.</p> <p>First of all, I would like to thank Home Credit and Kaggle for hosting such an interesting competition! The data is very rich and the quality is good, which give lots of ways to do feature engineering. </p> <p>I would also like to thank my awesome teammates, <a href="https://www.kaggle.com/h4211819">30CrMnSiA</a>, <a href="https://www.kaggle.com/wangking">Jin Wang</a>, <a href="https://www.kaggle.com/pureheart">piupiu</a>, <a href="https://www.kaggle.com/plantsgo">plantsgo</a>, <a href="https://www.kaggle.com/senkin13">senkin13</a> and <a href="https://www.kaggle.com/wuyhbb">Yuanhao</a> (ordered alphabetically)! You guys are truly talented and amazing kagglers! Our team name “七上八下” means butterflies in my stomach, this reflects our feeling in the last days. </p> <p><strong>Summary</strong></p> <p>I think our approach is not different than what other top teams did. Our single models are all lgb models. Personally, I felt the most difficult thing in this competition is the gap between local CV and LB scores. Only 20% data is used for public leaderboard, which makes the LB scores unreliable. On the other hand, the difference between train and test data makes people to hold doubt regarding the CV scores. Hence, we decide to build two finale models: one for best local CV and one for best LB. </p> <p>I think the key for our team is that everyone brings a good model that is different from others. We have different approaches to do the feature engineering, which provides lots of diverse models for stacking. I will share my experience in implementing my model (lgb, best public score: 0.809) in the following. The final results show that golden rule still holds: <strong>trust your local CV</strong>. We are using stratified 10-fold as the cross validation method. 
The local CV is surprisingly stable, the gap (CV - PB) is from <strong>-0.0005</strong> to <strong>0.0003</strong>.</p> <p><strong>My solo approach</strong></p> <p>I built my baseline model by taking all the insights from the public kernels. This model has a local CV of 0.794x, LB 0.802. I was checking my credit report one day and realized the bureau calculates the total amount of credit of a person over all the credit cards. Inspired by this, I grouped all the tables with transaction data (credit card, pos cash, instal payments and bureau balance) by both time and ‘SK_ID_CURR’. This gives an overall description about the applicants for every month/installment. One more point inspired by the credit report is that I realize the industry buckets DPD into 30/60/90/120, this is also implied in the description of bureau balance. I applied similar approach to all the DPD data. <br> I realized the overlap between credit card and application is small, a simple merge would leave lots of NaN values. I decided to fill this NaN by ‘0’. This strategy is applied to fill all the NaN values generated during the merge process. The above 3 steps are my 2nd round feature engineering. After these, my model reaches a local CV of 0.797x, LB 0.804. </p> <p><strong>After teaming up</strong></p> <p>After we formed the team, we realize that the best way to improve scores is adding groups of engineered features together. Since our feature engineering approaches are different from each other, we start to build ‘big’ single models using <strong>combined features brought by everyone</strong>. Interestingly, we found <strong>treating the OOFs from single tables as meta features</strong> and putting back to the lgb model could improve both CV and LB significantly. By putting my teammates’ features and single table OOFS, I got two good models: one with CV 0.800x, LB 0.808; the other one with CV 0.800x, LB 0.809.</p> <p><strong>Stacking/blending</strong></p> <p>The <strong>trick</strong> about stacking/blending in this competition is that all single model predictions need to do a rank percentage before putting them into the ensemble. We think this is because the evaluation metric for this competition is area under the ROC curve, which is sensitive to the relative rank. Before realizing this, my staking/blending CV and LB are always worse than the best single model put into the ensemble. </p> <p>With this gold medal, congrats to my teammate <a href="https://www.kaggle.com/pureheart">piupiu</a> to become a grandmaster! </p> <p>By introducing the single table oofs as features into my model, I get around 0.003 boost in both CV and public LB. </p> <p>Thanks Giba! Congtrats to you as well. </p> <p>The two submitted predictions:</p> <ol> <li><p>Stacking: CV 0.8045, Public 0.808, Private 0.803</p></li> <li><p>Blending: CV ~0.8036, Public 0.810. Private 0.802</p></li> </ol> <p>Only use features generated from single table to train a model, get the OOF predictions and predictions on test set, then used them as a feature. </p> <p>I didn't apply too many things to handle missing values, since lightGBM could handle NaN itself. The only thing related to NaN, is that I filled NaN with 0 for all the NaNs generated during table merge. The reason is that if NaN is generated during the merge, it means that the applicant has no corresponding record. </p> <p>Not exactly. Stacking refers to put all oofs together. Here we treat the oofs as meta features and put them into the base model with all the features from feature engineering. 
You can consider it as a semi-stacking. </p> <p>We don't need to handle NaN in the LR or RF stackers, since the inputs are all OOF predictions, which have no NaN values. </p> <p>As you can see from my teammate's <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64474#378074">post</a>, we used both LGB and RF as stackers. Since our features are different from each other, the predictions from different models vary a lot. For one sample, the prediction from Model A may be 10 times larger than from Model B, but the relative ranks within each model's predictions are the same. Hence, you can think of the rank percentage as a normalization method. </p>
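<p>A minimal illustration of the rank-percentage normalization described above; the transform itself is exactly the pandas call quoted in the post, and the quick sanity check simply demonstrates that AUC is unchanged by it.</p> <pre><code># Rank-percentage normalization before blending: AUC is rank-based, so each
# model's score is unchanged while all predictions land on the same [0, 1] scale.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def rank_pct(x):
    """Map raw scores to their rank percentile; AUC is invariant to this."""
    return pd.Series(x).rank(pct=True).values

# sanity check on random data: AUC is identical before and after the transform
y = np.random.randint(0, 2, 1000)
p = np.random.rand(1000)
assert np.isclose(roc_auc_score(y, p), roc_auc_score(y, rank_pct(p)))

# blending: average the rank-transformed predictions of several models, e.g.
# blend = (rank_pct(pred_model_a) + rank_pct(pred_model_b)) / 2.0
</code></pre>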
Home Credit Default Risk
Kernel for my (relative) beginner's 561st place solo submission
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first Kaggle competition, and through several months of practice, research, trial and error, as well as extensive exploration of forum advice and kernels written by expert Kagglers, I was able to build a featureset/model that earned a final private leaderboard score of 0.79506 as a solo submission, which translated to final rank of 561 out of 7,198 -- just inside the top 8%.</p> <p>Please feel free to check out my kernel at: <a href="https://www.kaggle.com/jamesdellinger/home-credit-putting-all-the-steps-together/notebook">https://www.kaggle.com/jamesdellinger/home-credit-putting-all-the-steps-together/notebook</a></p> <p>Along the way, I had to figure out from scratch several things that seasoned Kagglers probably take for granted. These included things like:</p> <ul> <li>Are my Pandas operations as efficient as can be?</li> <li>How do I keep my memory use from ballooning out of control?</li> <li>What kind of cross validation should I use? And how do I implement it?</li> <li>Choosing a learning rate and tuning hyperparameters.</li> <li>Deciding how many boosting rounds to use during training.</li> <li>Should I use only one, or multiple training rounds?</li> <li>If more than one, how do I blend predictions from each training round?</li> <li>And finally, exactly how do I create that CSV file that I'll need to submit?</li> </ul> <p>I wanted to publish this kernel in order create an example (both for my future self and for other beginners) of what it looks like to implement all the steps (preprocessing --&gt; feature engineering --&gt; cross validation --&gt; training --&gt; prediction generation --&gt; post processing) that are necessary to build a (somewhat) competitive prediction algorithm. In other words, I wanted to demonstrate what it looks like to "put it all together."</p> <p>This is just single model LightGBM kernel, without any stacking. I'm planning to dive much deeper into stacking/blending/etc. in my next competition : )</p> <p>I am indebted to the generosity of the community at large, and in particular, to the wisdom and techniques shared by folks like <a href="https://www.kaggle.com/psilogram">Silogram</a>, <a href="https://www.kaggle.com/ogrellier">olivier</a>, and <a href="https://www.kaggle.com/laurae2">Laurae</a>.</p> <p>Although I still consider myself to be a relative beginner who has a lot more to learn, these folks and their generosity have inspired me to do what little I can to give back. I hope that certain aspects of my code below, such as how my cross-validation method includes target encoding in a way that (I believe) prevents data leak, will be useful to others in the community.</p> <p>Finally, I welcome any tips and or feedback on my approach and implementation that follows below. Having one's blindspots pointed out to them is the surest way to growth and improvement : )</p> <p><a href="https://www.kaggle.com/jamesdellinger/home-credit-putting-all-the-steps-together/notebook">https://www.kaggle.com/jamesdellinger/home-credit-putting-all-the-steps-together/notebook</a></p> <p>Thanks @Arthur! That's wonderful. I'm honored to be included.</p>
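<p>The kernel linked above mentions doing target encoding inside the cross-validation loop so that no label information leaks into the validation fold. One common way to implement that is sketched below; the smoothing constant, fold setup and example column are assumptions, not necessarily what the kernel does.</p> <pre><code># Minimal out-of-fold target encoding sketch: encode a categorical column with
# the target mean computed on the training folds only, so the validation fold
# never sees its own labels.
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

def oof_target_encode(df, col, target, n_splits=5, seed=42, smoothing=10):
    encoded = pd.Series(np.nan, index=df.index)
    prior = df[target].mean()
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for trn_idx, val_idx in folds.split(df, df[target]):
        trn = df.iloc[trn_idx]
        stats = trn.groupby(col)[target].agg(["mean", "count"])
        smooth = (stats["mean"] * stats["count"] + prior * smoothing) / (stats["count"] + smoothing)
        encoded.iloc[val_idx] = df.iloc[val_idx][col].map(smooth).values
    return encoded.fillna(prior)     # unseen categories fall back to the global prior

# usage (column and target names from the competition data):
# train["ORGANIZATION_TYPE_te"] = oof_target_encode(train, "ORGANIZATION_TYPE", "TARGET")
</code></pre>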
Home Credit Default Risk
Overview of the 5th solution [+0.004 to CV/Private LB of your model]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Thank You!</p> <p>Thanks so much!</p> <p>Thanks Michal!</p> <p>Thanks so much!</p> <p>Great, thank You for testing and sharing this!</p> <p>Thanks!</p> <p>Thank You so much!</p> <p>Thank You!</p> <p>Thanks for testing and sharing! </p> <p>Thank you Pascal!</p> <p>It is exactly the same target as in the main application train table, because we created a "user image" at the SK_ID_CURR level.</p> <p>Thank you Olivier! I learned a lot from you in this competition. Great work</p> <p>I think that, after all, the reason is the very small size of the public LB compared to the size of the train set and the private test set. As NxGTR calculated, the difference between 817 and 810 on the public LB is around 30 cases out of around 10,000 cases. So we were, as I like to say, fooled by randomness :)</p> <h2>Acknowledgements</h2> <p>We would like to thank:</p> <ul> <li>Kaggle/Homecredit - for providing such an interesting and non-anonymized dataset</li> <li>Neptune guys - who generously shared their solutions and magnificent <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/57175">code</a></li> <li>Olivier for sharing his knowledge and kernels that helped us with the feature optimization problem. <a href="https://www.kaggle.com/ogrellier/feature-selection-with-null-importances">Your ‘Null Importance’ - improved model</a> made it less time consuming and gave a slight improvement on CV (the first time it was a feature reduction from 4500 to 920)</li> <li>Silogram for your educational and inspiring <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/58332">'few notes'</a> </li> <li>All competitors. One man’s dream come true is another man’s nightmare, but you are “the salt of this earth” called Kaggle, and your presence and pressure make us all better in Data Science. Your chasing made us dig deeper into the data and find more and more interesting features.</li> </ul> <h2>Overview of the solution</h2> <p>Our best single model scored 0.80550 on the Private LB, which would have been enough for 3rd place. Unfortunately, it was rather impossible to select it among our solutions - it had neither the best CV (0.8064) nor LB (0.8150). Our best submission was a weighted average of 3 models (0.25, 0.25, 0.5), but scored the same on the LB as one of the models in the mix. Maybe the fact that we didn’t pay much attention to stacking (and that stacking gave us no improvement, and was much worse than the best single model) was our biggest mistake. We invested everything into feature engineering, because the data was so interesting to analyze. </p> <p>We did a lot of hand-crafted features (around 8,000), out of which we selected around 3,000 using a mechanism based on Olivier’s work. </p> <p>There are a few main ideas that we think we could have done differently vs. 
others, and which gave us the lead on the public LB.</p> <ol> <li>Using deep learning to extract interactions among different data sources</li> <li>Using “nested models”, with (2.1) a different approach to credit card balance features</li> <li>Using a model to predict the interest rate/duration of loans in current applications</li> <li>Using a scoring model based on banking mechanisms taken from articles and meetings</li> <li>Some additional ideas</li> </ol> <p>The first 3 points (apart from 2.1, which caused an overfit) resulted in ~60 features that gave us a 0.005 boost on cv / 0.010 boost on LB. We share them in krakowlublinzhabinka_feats.csv attached to this post so that you can check if they improve your models too. Their format is the following: the first 307511 rows belong to train cases, and the following rows belong to test cases. The order of IDs is preserved within the train and test subsets, so that you can use pd.concat([your_df, krakowlublinzhabinka_feats], axis = 1).</p> <p>Below we elaborate on all of the points. </p> <h2>1. Using deep learning to extract interactions among different data sources</h2> <p>We wondered how we could capture the interactions between signals coming from different data sources. For example, what if 20 months ago someone was rejected in the external Bureau, had a late payment in installment payments, and applied for a loan at Homecredit? These types of interactions are very hard for humans to capture because of the number of possible options. So we turned to deep learning and recast this problem as an image classification problem. How? Attached is the sample “user image” that we fed to the neural network (user_100006_image.xlsx ). </p> <p>You can see that we created a single vector of user characteristics coming from different data sources for every month of user history, going as far as 96 months into the past (8 years was a cutoff in most data sources). Then we stacked those vectors and created a very sparse “user image”. </p> <p>The network architecture was as follows: </p> <ul> <li>Normalization - division by the global max in every row</li> <li>Input (the “user image” in the format n_characteristics x 96 months - we looked 8 years into the past)</li> <li>1-D convolution spanning 2 consecutive months (to see the change between periods)</li> <li>Bidirectional LSTM</li> <li>Dense</li> <li>Output</li> </ul> <p>This model scored 0.72 AUC on cv, without any features from the current application. It trained rather quickly - around 30 mins on a GTX 1080. We put the oof predictions from this model as a feature into our LGBM model, which gave us around 0.001 on CV (an improvement on an already very strong model with &gt;3000 features) and 0.001 on LB. This means that the network was able to extract some information on top of &gt;3000 hand-crafted features.</p> <h2>2. Using nested models</h2> <p>One of the things that bothered us throughout the competition was the somewhat arbitrary nature of the various group-bys that we performed on the data. For example, we supposed that an overdue installment 5 years ago is less important than one 1 month ago, but what is the exact relationship? The traditional way is to test different thresholds using the cv score, but there is also a way for the model to figure it out. </p> <p>In order to explain our approach, I will use the installment_payments data source as an example. What we did: </p> <ol> <li>Add a “TARGET” column to installment_payments, by merging on SK_ID_CURR with application_train</li> <li>Run a lgbm model to predict which records have a 0 or 1 target. 
This step allows a model to abstract “what type of behavior in installment payments generally leads to default on the loan in current_application”. You can see in the attached example (2. installment_payments_example.xlsx) that the same “fraudulent” behavior (like missing money = installment amount - payment amount) a long time ago receives a lower score compared to more recent similar behavior. We concluded that the model learned to identify fraudulent behavior at the “behavior” level (without the need to aggregate or set time thresholds).</li> <li>From step 2 we receive a bunch of oof predictions for every SK_ID_CURR at row level in installment payments. Then we can aggregate: min, max, mean, median, etc., and attach them as features to the main model.</li> </ol> <p>We ran the same procedure on all data sources (previous application, credit card balance, pos cash balance, installment payments, bureau, bureau balance). OOF aggregated features on all of them added value, apart from Bureau Balance, which actually decreased cv, so we didn’t use it in the end. We received very low auc on those “nested models”:</p> <ul> <li>previous application: 0.63</li> <li>credit card balance: 0.58</li> <li>pos cash balance: 0.54</li> <li>installment payments: 0.58</li> <li>bureau: 0.61</li> <li>bureau balance: 0.55</li> </ul> <p>The low auc scores for these models were hardly surprising, as they carry an enormous amount of noise. Even for defaulting clients, the majority of their behaviors are OK. The point is to identify those few behaviors that are common across defaulters. </p> <p>This gave us a 0.002 improvement on CV / 0.004 improvement on LB, on our strong model with &gt;3000 features.</p> <h2>2.1 Another approach to credit card balance features</h2> <p>We observed that there is a significant difference between the train and test sets in ‘loan types’. The cash_loans to revolving_loans ratio in the train and test sets is ~90/10 and ~99/1 respectively. That is why we decided to use features generated on the credit card balance set in only one final submission. </p> <p>Our observation was that the AUC of the ‘nested feature’ generated on the credit card balance set was 0.58 with one model, while in another case (using a different Light GBM approach) the AUC score was ~0.64. Despite the fact that only one third of the train and test sets were labeled by this feature, we trained a model with just this one additional feature and the improvement on CV was shocking! From 0.805 to 0.811! This corresponds with the ratio we mentioned before. Our final result appears to confirm that this was an overfit caused by the cash_loans / revolving_loans ratio.</p> <h2>3. Model to predict the interest rate</h2> <p>We were wondering for a long time why the organizers did not share cnt_payments for current_application. And then it was clear - if we know the amount of credit, the amount of annuity and the number of payments, we can calculate the interest rate from the following formula: Annuity x CNT_payments / Amount of Credit = (1 + ir) ^ (CNT_payments / 12). This formula assumes annual payments, while in reality the payments were made monthly, but it was good enough as a proxy.</p> <p>The interest rate is the measure of risk that the current Homecredit model assigns to a customer (especially within a given loan duration). Hence, knowing the interest rate means to some extent knowing the risk assessment from the current HomeCredit model. 
From that moment on we set out on a journey to guess the interest rate of the loans in the train and test sets.</p> <p><strong>3.1 First iteration: prediction of the interest rate based on business understanding of credit products</strong> The key to predicting the interest rate is to understand that credit duration (or cnt payment) is not a continuous variable - it belongs to a very specific set of values. When you look at cnt_payments from previous application, you see that the majority of loans have a duration that is a multiple of 6. It makes sense - you don’t take a loan for 92.3 days, rather you take it for half a year, one year, 2 years etc. So, at the very beginning I assumed that the duration can belong to the following set of values (in months): [6, 12, 18, 24, 30, 36, 42, 48, 54, 60]. </p> <p>What can you do now? Let me explain it using a loan with a 35,000 credit value and a 1,000 annuity (both of these values are known for applications in train and test). What you can do is iterate over different durations and check whether the interest rate makes sense. </p> <p>Have a look at the attached file 'ir_table.jpg'. From this table you can see that only two interest rate values make sense: 10% or 11% (I assumed, based on previous applications, that around 8.5% is the lowest interest rate that a loan may get at Homecredit). Which one is correct? We don’t know, so we can compute various characteristics of the “possible interest rates”, like min, max, median, etc., and let LGBM find the best correlation with TARGET. It turned out that the minimum value of such a “possible” set of interest rates was a very good predictor of the actual interest rate, until…</p> <p><strong>3.2 ... we understood that you can actually use previous applications to build a model to predict the interest rate.</strong></p> <p>For the majority of previous applications, you can calculate the interest rate, because you have cnt_payment, credit amount and annuity. Then, you can treat the interest rate calculated in this way as a target, and use features common between current application and previous application as features to explain it. But still, the most important features are those from the table above: min, max, mean, median over the possible set of interest rates. Features like credit amount, annuity amount etc. are only ‘helper’ features, letting the model choose properly between different possible values of the interest rate.</p> <p>We achieved 0.02 rmse on the model predicting the interest rate and 0.50 rmse on the model predicting the duration, both based on oof predictions tested on previous application. These models used only features common between previous application and current application, so that we could use them to predict the interest rate and duration for loans in the current application.</p> <p>Interest rate prediction gave us a 0.002 improvement on CV / 0.004 improvement on LB, on our strong model with &gt;3000 features.</p> <h2>4. Using a scoring model based on banking mechanisms taken from articles and meetings</h2> <p>Another issue that we explored during the competition was the broad idea of currently used credit-risk-scoring banking models. We read some articles and conducted a series of meetings with people involved in credit risk scoring in banks. 
Thus, we established that what we could create are basic scoring models – logit and probit.</p> <p>First we prepared a list of features from the application_train and previous_applications sets and aggregated them into groups.</p> <p>Then we created two models: logit and probit.</p> <p>The scores of those models are 0.56 and 0.59 respectively. Nevertheless, features created from those models (OOF) improved the CV score by 0.0013 -&gt; 0.001 LB</p> <h2>5. Some additional ideas</h2> <ul> <li>Dealing with categorical features as numerical. For example, app_data.replace({'NAME_EDUCATION_TYPE': {'Lower secondary': 0, 'Secondary / secondary special': 1, 'Incomplete higher': 2, 'Higher education': 3, 'Academic degree': 4}}), prev_app_df.replace({'NAME_YIELD_GROUP': {'XNA': 0, 'low_action': 1, 'low_normal': 1, 'middle': 3, 'high': 4}})</li> <li>While making aggregation features, GroupBy the data not only by one feature but also by a combination of features. For example, by (‘CREDIT_TYPE’+’CREDIT_ACTIVE’).</li> <li>Using grouping by ‘time windows’. This allows you, for example, to follow the change of the credit card balance in different periods of time.</li> </ul> <h2>Summary</h2> <p>Again, we encourage you to try out the attached features in krakowlublinzhabinka_feats.csv and check out their impact on your score.</p> <p>That’s it - it’s over now. We must admit, it was quite a journey for us, and we feel we moved mountains to get these features and scores. Thank you, and see you again in the following competitions. We will be back, stronger with this experience.</p> <p>We would like to finish with this quotation from Theodore Roosevelt and dedicate it to everyone in this competition:</p> <p><em>It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.</em> </p> <p>Pawel, Michal and Aliaksandr, aka “Kraków, Lublin and Zhabinka”</p> <p>Thank you!</p> <p>Thanks Evgeny and congrats on your Grandmaster status!</p> <p>Thank You so much! Congrats to you and your team - this is an amazing win. </p> <p>True. We used oof predictions from some models and fed them to the best model. But for me this was feature extraction from different data sources. I referred to 'stacking' in its more conventional form - when you take your final set of features, run different models/different seeds/different feature groups on that final set, and then use some higher level models to make the results more stable and robust.</p> <p>Thanks! We even did nested-within-nested models, by appending oof predictions from installment payments and cc_balance and pos_balance to previous applications, and then doing a model on previous application :) It also gave some improvement, but I don't remember how much exactly.</p> <p>Thank You!</p> <p>Now that I think about it, you are 100% correct. 
I was wondering why AMT_installment was a much more important feature than, for example, missing money. I think it is mitigated by the fact that the majority of AMT_installment values were to some extent standard - for example, have a look at attachment no. 2 (2. ip_oof_preds_SK_ID_PREV_2791456.xlsx (21.46 KB)). For the same loan, at the beginning of its lifetime, the target is low, and only closer to the end does it become high.</p> <p>Sure, thank you</p> <p>@Chad - great that you bothered to test it. I will put it in the topic title now :)</p> <p>Thank You! Indeed - the pre-selection of possible interest rates is crucial - LGBM can just focus on choosing between 3-4 options, rather than estimating it from scratch</p> <p>Here is the link to the article we wrote: <a href="https://deepsense.ai/wait-so-loans-need-to-be-repaid-the-home-credit-risk-prediction-competition-on-kaggle/">https://deepsense.ai/wait-so-loans-need-to-be-repaid-the-home-credit-risk-prediction-competition-on-kaggle/</a></p> <p>Hi Corey and Gates - great to hear you are curious! We are just now writing a blog post about it, with more details on the NN architecture. Stay tuned - I will post it here once it is ready.</p> <p>Worked for us, too. Have you tried using the attached features in your model? The interest rate is among the strongest features in this set.</p>
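<p>To make section 3 above concrete, here is a small illustrative sketch (not our actual code): iterate over the candidate durations, invert Annuity x CNT_payments / Credit = (1 + ir) ^ (CNT_payments / 12), and keep only rates in a sensible range. The 8.5% floor and the duration grid come from the write-up; the upper cap is an arbitrary assumption added for illustration.</p> <pre><code>import numpy as np

DURATIONS = [6, 12, 18, 24, 30, 36, 42, 48, 54, 60]  # candidate loan durations in months

def possible_rates(credit, annuity, min_rate=0.085, max_rate=0.70):
    # Solve annuity * n / credit = (1 + ir) ** (n / 12) for ir at each candidate duration n.
    rates = []
    for n in DURATIONS:
        total_paid = annuity * n
        if total_paid &lt;= credit:          # duration too short to even repay the principal
            continue
        ir = (total_paid / credit) ** (12.0 / n) - 1.0
        if min_rate &lt;= ir &lt;= max_rate:
            rates.append(ir)
    return rates

# Example from the write-up: credit 35,000 and annuity 1,000 leaves roughly 10% and 11% as candidates.
cands = possible_rates(35000, 1000)
features = {'ir_min': min(cands), 'ir_max': max(cands), 'ir_mean': float(np.mean(cands))} if cands else {}
</code></pre>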
Home Credit Default Risk
Pseudo Data Augmentation (27th place short writeup)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to thank Kaggle and Home Credit for hosting such a great competition. Congratulations to everyone, especially congrats on your solo gold medals @noxmoon, @emotionevil, and @rterada ! - it's hard for me to compete in this competition alone :)</p> <p>You can see my approach in <a href="https://github.com/nyanp/kaggle-homecredit">my repository</a>. Most of my features are not so unique, so I just want to share my "magic" feature here. </p> <h2>Approach</h2> <p>From the column description provided by Home Credit, you can see the definition of the target variable:</p> <blockquote> <p>Target variable (1 - client with payment difficulties: he/she had late payment more than <strong>X</strong> days on at least one of the first <strong>Y</strong> installments of the loan in our sample, 0 - all other cases)</p> </blockquote> <p>So if we can estimate <strong>X</strong> and <strong>Y</strong>, we can create target variables for all previous loans from installment_payments.</p> <pre><code>import numpy as np

def make_target_variables(install, X, Y):
    # DPD (days past due) is assumed to be precomputed, e.g. DAYS_ENTRY_PAYMENT - DAYS_INSTALMENT clipped at 0
    install_first_Y = install[install.NUM_INSTALMENT_NUMBER &lt;= Y].copy()
    install_first_Y['TARGET'] = (install_first_Y['DPD'] &gt; X).astype(np.int32)
    return install_first_Y.groupby('SK_ID_PREV')['TARGET'].max().reset_index()
</code></pre> <p>Once we get this pseudo-label for previous loans, we can train on this variable on previous loans (prev), and make predictions on current loans (application). I added this predicted value as a new feature. By trying various X &amp; Y in a brute-force way, I decided to use <strong>X == 3</strong> and <strong>Y == inf (no limit)</strong> in the final model.</p> <pre><code>import pandas as pd
from lightgbm import LGBMClassifier

def make_explanatory_variables(prev, application):
    common_columns = [
        'AMT_ANNUITY', 'AMT_GOODS_PRICE', 'NAME_TYPE_SUITE',
        'AMT_CREDIT', 'NAME_CONTRACT_TYPE'
    ]
    # replace common columns with previous loan's values
    return pd.merge(prev[common_columns + ['SK_ID_CURR', 'SK_ID_PREV']],
                    application.drop(common_columns, axis=1),
                    on='SK_ID_CURR', how='left')

# train "pseudo-label" on previous loans,
# then add prediction output on current loans as a new feature
X_train = make_explanatory_variables(prev, application)
Y_train = make_target_variables(install, 3, 10)   # Y=10 shown here; the final model used Y = inf (no limit)
X_test = application.drop('TARGET', axis=1)

model = LGBMClassifier(**param)   # param: dict of LGBM hyperparameters
model.fit(X_train, Y_train)
application['PREDICTED_DPD'] = model.predict_proba(X_test)[:, 1]
</code></pre> <p>We can do the same thing with a different target variable. I also used mean(DPD) as the target, and trained an LGBMRegressor to learn an "estimated mean DPD". These 2 variables boosted my CV by ~0.003.</p> <p>I guess this feature captures similar information to the "nested models" described in the <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64625">5th</a> and <a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/64503">17th</a> solutions, but maybe we can use both and see what we can get.</p> <p>Thank you!</p> <p>Thank you for the great summary!</p>
Home Credit Default Risk
Reflections from the biggest fall on Leaderboard (among gold medal contenders)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had held a position within the gold medal range throughout a big part of the competition. I fell from 16th on the Public LB to 127th on the Private LB. It is pretty embarrassing to admit and talk about this in public, especially for a LONG-time Kaggle member. However, I do hope my reflections and the discussion of my thought process may generate some useful discussion and others may find them useful. Also, this reflection or confession can be some form of therapy for me :-).</p> <p>Usually with such a drop in LB, we will hear the usual talk of: overfitting/over-probing the public LB, or "Trust only CV" etc. These are in general correct and good advice. However, sometimes it is not the whole story.</p> <p>I did probe the LB a lot as a shortcut to blending using OOF predictions. You may think my probing may have caused the significant drop in LB positions. However, what I chose for my final submission was among the top of the Private LB scores in ALL my submissions. (The best submission would have only advanced me 3 more positions.) So in that sense, my probing of the Leaderboard didn't hurt me. Actually, to the contrary, the top choice was against my CV results. (Will discuss this further later.)</p> <p>The only unique insight/trick I had that is worth mentioning, and that kept me in the Gold medal range on the Public LB, is the following: I had a base model which was a mixture of a variation of the neptune.ml open solution kernel (Thanks neptune.ml for the excellent kernel and impressive platform.) and some of my other models. But it was dominated by the neptune-inspired model. The base model had Public LB 0.805 (Private 0.796, CV around 0.795(6)). I also created another model that excluded all features that are related to EXT_SOURCE_{1,2,3}. Even though this second model had very low Public and Private scores (Public 0.79375, Private 0.78431), blending it with the base model with 70/30% weights gave me a public score of 0.80826 (Private 0.79824); both got a lift of around 0.003, which is pretty significant. Notice that even though the 30% weight was obtained by probing the Public leaderboard, the improvement on the Private leaderboard is similar in magnitude. If I had optimized the blending weights by CV using oof predictions, the weights would have been 91%/9%. I thought the blending weight assumption was my weakest link (it turns out it was not), so my final 2 submissions were one with 70/30% weights and the other with 91/9%. If I had "Trusted CV", I would have gotten Public 0.80680, Private 0.79713, which would have dropped me another 90+ positions.</p> <p>The rationale behind trying a model without EXT_SOURCEs is that they are the most important features in most of the models. When these features are correct, they are important. But maybe when they are wrong or missing, the results can sometimes be very wrong. So blending with a model without these features may smooth out the results. 
And the reason why the blending weight probed from the public leaderboard worked better than the CV weights is that the statistics of missing EXT_SOURCEs are clearly different between the training data set and the testing data set.</p> <p>So what caused my big drop on the private leaderboard? The base model was not good enough. I should have trusted the CV score in this case. Even though there were many pretty high CV scores reported on the forum, I justified it to myself by thinking CV scores depend on the division of folds. Maybe they were just lucky with their CV folds. I did try to make sure that the base model was a blend of different seeds and different parameters. And they seemed to be stable. This gave a false sense of security.</p> <p>I do have significant domain knowledge and could and should have done better in this competition. It is the <strong>mentality</strong> of how I approach my Kaggle competitions that prevented me from doing better. With different responsibilities and priorities in life, the time that can be devoted to Kaggle is inconsistent and hard to come by. My goal in this competition was to get a gold medal. But I just wanted to put in enough effort to get a gold medal, not more than that. So when I saw I was pretty well positioned for a gold medal, I did not dig deeper to improve the model. I shot down at least two ideas that were similar to some of the ideas presented by some of the top winners, because of the effort that would have been required. If I had not been that well positioned on the public leaderboard, or had not watched the public leaderboard, I might have tried harder to improve the model.</p> <p>The lessons learned: <strong>There is no shortcut</strong>. The Kaggle competitions have become so popular and attracted so many talented people that one or two original ideas/tricks will not be enough to pull away from the pack. Reading through the approaches the top winners used and all the work they put in, my work definitely didn't deserve a gold. Also, as most people emphasized: don't trust the LB, trust CV. But I would like to add a twist to that: sometimes a well-informed hypothesis may be worth a bet.</p> <p>Hope this is helpful.</p> <p>Wei</p> <p>Thanks @Arthur for including.</p>
Home Credit Default Risk
Simple feature that made public kernels top 50 (and thanks!)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks so much to Oliver, Will Koehrsen, Scripus, Aguiar, Ann Antonova, Andy Harless and every one of the teachers and contributors on Kaggle. It's been an informative, challenging, and rewarding experience, and your contributions really make a difference in people's lives. I didn't know what cross validation meant 2 months ago- Learn by doing!</p> <p>This small kernel predicts the loan term length (easy, since loan terms are binned) and computes real interest rates on the loans. Obviously, that's a good indicator of risk. I stacked once with this and a couple derivative features, and rank averaged. </p> <p>I'm not sure (but very curious) if other solutions used this idea - Home Credit clearly intended for people <em>not</em> to, since they removed loan term lengths from their application data. <br> <a href="https://www.kaggle.com/lorentztrans/final-cnt-generator/">https://www.kaggle.com/lorentztrans/final-cnt-generator/</a></p>
Home Credit Default Risk
Story behind the 32nd place (top 1%) with 2 submissions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi and thanks. I'm happy to share my guideline:</p> <p>1) LGBM has many hyperparameters. But when the dataset is large, it makes sense to focus only on the most important ones. For me these were: num_boost_round, learning_rate, num_leaves, max_depth, min_data_in_leaf, feature_fraction, bagging_fraction, bagging_freq</p> <p>2) I decided to use early_stopping_rounds=200 in lgb.train. This allows me to exclude num_boost_round from the tuning procedure by always setting it to 10000.</p> <p>3) setting bagging_freq=1 is a safe bet</p> <p>4) So now we are left with the 6 most important LGBM hyper-parameters.</p> <p>For hyper-parameter tuning I always use random search with manually prepared, short, discrete sampling spaces, like:<br> <code>'learning_rate': random.choice([0.005, 0.007, 0.01, 0.02, 0.05, 0.1]),</code></p> <p>Every single CV run creates a unique folder where various artifacts are dumped. One of them is a json with the selected hyperparameters and the CV score.</p> <p>At any point in time I can read all these jsons, do some quick analysis and adjust the sampling space.</p> <p>I often stop the random search and manually specify a list of points to be evaluated, before going back to random search.</p> <p>I also make sure that the same hyperparameters aren't evaluated more than once.</p> <p>Yes, you're right.</p> <p>I used <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html">http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html</a></p> <p>It's a very fast, stable and easy-to-use method to ensemble predictions.</p> <p>I'm learning here from the best in the world ;)</p> <p>Thanks!</p> <p>I went through all the tables and from each column I hand-crafted reasonable features based on my understanding of the data (HomeCredit_columns_description.csv was very helpful). At the end I had 1003 features in total. Then I focused on training LGBM models in a CV fashion using a random split with 5 folds. After I got the hyperparameters right, I launched many CV runs with random seeds and different random splits (10 folds mostly).</p> <p>My final submissions are:<br> 1) Simple average of top30 runs (CV:0.80001 Public:0.79924 Private:0.80028)<br> 2) Ridge regression applied on top60 runs (CV:0.80084 Public:0.80014 Private:0.80043)</p>
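<p>A minimal sketch of the random-search loop described above (not my actual code; the sampling grids other than learning_rate, the binary/auc objective and the folder layout are assumptions): sample one point from the discrete spaces, train with a large num_boost_round plus early stopping, and dump the chosen hyperparameters and score to a json so they can be analysed later.</p> <pre><code>import json, os, random, uuid
import lightgbm as lgb

SPACE = {
    'learning_rate':    [0.005, 0.007, 0.01, 0.02, 0.05, 0.1],
    'num_leaves':       [31, 63, 127, 255],
    'max_depth':        [-1, 6, 8, 10],
    'min_data_in_leaf': [20, 50, 100, 200],
    'feature_fraction': [0.1, 0.2, 0.3, 0.5],
    'bagging_fraction': [0.7, 0.8, 0.9, 1.0],
}

def run_one_trial(dtrain, dvalid, out_dir='runs'):
    # One random-search trial; dtrain/dvalid are lgb.Dataset objects for a single CV fold.
    params = {k: random.choice(v) for k, v in SPACE.items()}
    params.update({'objective': 'binary', 'metric': 'auc', 'bagging_freq': 1})
    booster = lgb.train(params, dtrain, num_boost_round=10000, valid_sets=[dvalid],
                        callbacks=[lgb.early_stopping(200)])
    score = booster.best_score['valid_0']['auc']
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, uuid.uuid4().hex[:8] + '.json'), 'w') as f:
        json.dump({'params': params, 'auc': score}, f)   # one artifact dumped per run
    return params, score
</code></pre>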
Home Credit Default Risk
The 16th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Credit Default Risk <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello to everyone,</p> <p>So my solution rested on two parts. First of all, boosting the CV score. In my first attempt, I tried to aggregate the features one by one but had a very low CV score ~ 0.778. Then, in my second attempt, I tried to brute-force create the features with all aggregations that were possible. This surely helped me to get a boost in my CV score to ~ 0.785. But the problem was that I had a very large number of features. And trust me, I didn't use any algorithm to select them, I selected them by hand, one by one, and I created them in my code manually. Then, according to Silogram's notes, the most recent data are more accurate. I then decided to filter the features I selected to the last 6 months and the last 12 months. This surely helped me boost my CV score to 0.795. Then, I created a new feature that calculated, for each 'SK_ID_CURR' in installments payments, the time difference between payments. That gave another boost to my features. In the end, I fed my features to xgboost, then I selected the top ~400 most important features and that gave me a CV score of 0.798. As I saw that a lot of teams could do blending to boost their CV score, I had to think of a way to make models that are different from each other. Here is where I mostly used Scirpus' features that were generated by GP. But one problem with those features is that they easily overfit the training set. So I had to think of a way to limit that overfitting the most. For that, I took my 400 features that I trust the most, and I duplicated them into 4 different training sets. I did the same with my colleague's features. Then, in each training dataset, I added a bunch of features generated by Scirpus. So why did I do that? GP features have a high chance of overfitting the training set; splitting them into parts and adding them to the 8 training sets let me use each of those features just once or twice, so I thought that the patterns those features would introduce into the dataset would be limited. Indeed, the features that were created by feature engineering were the ones that were used in all models, so their weights were more important. That's what, in my opinion, limited the overfitting of GP the most. Then I trained the 8 datasets that I created with xgboost and lightgbm. That gave me 16 different models. Stacking them all got me a CV score of 0.810 and a private LB of 0.802. Thanks for everything, <a href="https://www.kaggle.com/scirpus">Scirpus</a>, without you I would never have done all of that. </p> <p>I think that your gp works well. I am not sure how it works but I think that reducing the number of individuals in a feature and the number of its appearances in that feature would result in less overfitting of the training data. </p> <p>For 1: I corrected it; brute-force aggregations gave a score of 0.785 rather than 0.7985. 2: I used two things: the importance of the features given by lightgbm, and whether they had a real business meaning or were just noise. 3: 
I filtered rows with MONTHS_BALANCE &gt;= -6 and DAYS_DECISION &gt;= -183 and DAYS_CREDIT &gt;= -183 to have data from the last 6 months, and did the same for 12 months. 4: Yes, that's exactly what I did. People who have difficulties with payments will always try to postpone their payments. 5: The top 400 features given by the feature importance of xgboost. It gave me a better CV score than the feature importance of lightgbm. 6: We work in the same company, but he gave up the competition after 1 month. So I took his notebook and added him to the team, but since we were working together we were sharing ideas, so I must say that those features were almost the same as mine, just the method of making them was different. He had a CV score of 0.7976 with ~720 features.</p>
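<p>As a rough sketch of the recency windows and the between-payments gap described above (not my exact code; the specific columns and aggregations here are illustrative, applied to the installments_payments table):</p> <pre><code>import pandas as pd

def recent_installment_feats(install, days=183, suffix='_6m'):
    # Keep only payments from the last `days` days, then aggregate per client.
    recent = install[install['DAYS_ENTRY_PAYMENT'] &gt;= -days].copy()
    recent['DPD'] = (recent['DAYS_ENTRY_PAYMENT'] - recent['DAYS_INSTALMENT']).clip(lower=0)
    agg = recent.groupby('SK_ID_CURR').agg(
        n_payments=('AMT_PAYMENT', 'count'),
        paid_total=('AMT_PAYMENT', 'sum'),
        dpd_max=('DPD', 'max'),
        dpd_mean=('DPD', 'mean'))
    return agg.add_suffix(suffix)

def payment_gap_feats(install):
    # Time difference between consecutive payments of the same client.
    inst = install.sort_values(['SK_ID_CURR', 'DAYS_ENTRY_PAYMENT'])
    inst = inst.assign(PAY_GAP=inst.groupby('SK_ID_CURR')['DAYS_ENTRY_PAYMENT'].diff())
    return inst.groupby('SK_ID_CURR')['PAY_GAP'].agg(['mean', 'max', 'std']).add_prefix('PAY_GAP_')

# usage sketch: feats = recent_installment_feats(ip, 183, '_6m').join(recent_installment_feats(ip, 365, '_12m'))
</code></pre>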
Home Depot Product Search Relevance
Congrats to the winners and insights sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Home Depot Product Search Relevance <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For those teams who didn't figure out this specific public/private split and used the original product_uid or original id as a feature in their model, the model predictions probably had some issues on the last 20k records, which all belong to the private LB and are not reflected on the public LB at all. (Model predictions that include the raw id or product_uid as a feature seem to have a rolling mean of about &lt;=0.43 on the last 20k records, while model predictions without the raw id or product_uid as a feature have a rolling mean of about &gt;=0.48 on the last 20k records.) The predictions for the other 146k records, i.e. everything except the last 20k, are similar to each other with or without the original product_uid/original id. Thus, without observing this issue, it doesn't affect the public LB score at all, but it will hurt the private LB score to some extent for the last 20k records.</p> <p>Models trained without the original id or product_uid as a feature don't have such issues. Models that used an adjusted id or adjusted product_uid also produce an OK prediction file for the last 20k records (with a rolling mean also close to &gt;=0.48), where the adjustment was just to move the low-rolling-mean record ids from the middle to the left with some code like df_X['adj_id'] = source_df['id'].apply(lambda x: x-300000 if x &gt; 160000 and x &lt; 220000 else x).</p> <p>Congrats to the winners and all other top finishing teams! This competition is indeed a very challenging and interesting one, and you guys did a really amazing job.</p> <p>And thanks to Home Depot and Kaggle for hosting such an awesome competition, I learned a lot!</p> <p>Thanks to all my team collaborators for all your hard work and creative ideas!</p> <p>Finally, did any other top teams figure out the funny public/private LB split? </p> <p>Inspired by Ben's post <a href="https://www.kaggle.com/c/home-depot-product-search-relevance/forums/t/20288/trends-in-relevances-by-row-ids">https://www.kaggle.com/c/home-depot-product-search-relevance/forums/t/20288/trends-in-relevances-by-row-ids</a>, and after some public/private LB split submission tests, we found that the public/private split seems to be: the last 20k records of the test set (id range [220k-240k]) all belong to the private LB, and all other records seem to be split randomly between public/private. Such a split is a bit funny (I haven't seen a split like this in other past competitions) and I am not sure why Kaggle or the sponsor did it this way.</p> <p>I guess this public/private split contributed to the mild private LB shaking among the top teams, and without this kind of split, where the last 20k records all belong to the private LB, the shaking would probably have been much less (since local cv also correlates with the public LB score very well).</p> <p>Anyway, congrats, and see you in the next competition!</p> <p>Best regards,</p> <p>Shize</p>
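<p>The rolling-mean check described above takes only a few lines of pandas (a sketch, assuming the standard id/relevance submission columns; the window size is arbitrary):</p> <pre><code>import pandas as pd

sub = pd.read_csv('submission.csv').sort_values('id')
sub['rolling_pred'] = sub['relevance'].rolling(2000, min_periods=1).mean()
# Inspect the region that turned out to be private-only (ids in [220k, 240k]).
print(sub.loc[sub['id'] &gt; 220000, 'rolling_pred'].describe())
</code></pre>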
Hotel-ID to Combat Human Trafficking 2021 - FGVC8
14th place solution - fastai ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, but it seems I didn't reach your benchmark (<a href="https://sites.google.com/view/fgvc8/papers" target="_blank">https://sites.google.com/view/fgvc8/papers</a>) with the ResNet50 🙄 in direct comparison. <br> I hope that some more of the top teams will share their solutions with you.</p> <p>Congrats to the winners! 🎉 Score &gt; 0.8, phew, I'm curious about your solutions.</p> <p>Thank you to the host for setting up this competition! I'm wondering where your benchmark was before the competition?</p> <h1>14th place solution</h1> <h2>Overview</h2> <p>The best scoring submission (private score 0.616) was an ensemble of six models.</p> <ul> <li>Ensembling notebook: <a href="https://www.kaggle.com/joatom/fgvc8-hotel-ensemble-inference" target="_blank">https://www.kaggle.com/joatom/fgvc8-hotel-ensemble-inference</a></li> <li>Trained models: <ul> <li><a href="https://www.kaggle.com/joatom/hotel-train-fastai-densnet161" target="_blank">https://www.kaggle.com/joatom/hotel-train-fastai-densnet161</a></li> <li><a href="https://www.kaggle.com/joatom/fgvc8hotel" target="_blank">https://www.kaggle.com/joatom/fgvc8hotel</a></li></ul></li> <li>Training and inference notebooks of my best single model (private score 0.574): <ul> <li><a href="https://www.kaggle.com/joatom/hotel-train-fastai-densnet161" target="_blank">https://www.kaggle.com/joatom/hotel-train-fastai-densnet161</a></li> <li><a href="https://www.kaggle.com/joatom/hotel-inference-fastai/" target="_blank">https://www.kaggle.com/joatom/hotel-inference-fastai/</a></li></ul></li> </ul> <h2>Data</h2> <p>I restricted training to the data provided in this competition. <br> I used pretrained models from pytorch (<a href="https://pytorch.org/vision/stable/models.html" target="_blank">https://pytorch.org/vision/stable/models.html</a>) via the fastai library.<br> No external data.</p> <h3>Data preparation</h3> <p>Downscaling of the images to a max size of 512 for faster training. The scaled images and the code to downscale can be found here: <a href="https://www.kaggle.com/joatom/fgvc8hoteltrain512" target="_blank">https://www.kaggle.com/joatom/fgvc8hoteltrain512</a></p> <h3>Augmentation</h3> <p>Fastai default augmentation with some additional scaling and <em>reflection</em> padding:<br> <code> item_tfms=Resize(448, method='pad', pad_mode='reflection') batch_tfms=aug_transforms(size=224) </code></p> <h2>Training</h2> <p>All the models used for the final submission were trained in a similar way using the fastai library. Five of the models I trained at home with an RTX2070S and 4 CPUs in mixed precision mode. The sixth model was trained on kaggle (see <a href="https://www.kaggle.com/joatom/hotel-train-fastai-densnet161" target="_blank">notebook</a>).</p> <ul> <li>Opt. 
function: QHAdam</li> <li>Scheduler: OneCycle; on model v1 linear decay of max LR per round </li> </ul> <p>Single models that were used for the best ensemble:</p> <table> <thead> <tr> <th>Model</th> <th>Arch</th> <th>Loss</th> <th>(freeze)Epochs</th> <th>public LB</th> <th>private LB</th> <th>Duration</th> </tr> </thead> <tbody> <tr> <td>v3</td> <td>resnet50</td> <td>CE</td> <td>(1)10</td> <td>0.4746</td> <td>0.4863</td> <td>80 min</td> </tr> <tr> <td>v5</td> <td>resnet101</td> <td>CE</td> <td>(1)10</td> <td>0.5339</td> <td>0.5422</td> <td>140 min</td> </tr> <tr> <td>v7</td> <td>densenet161</td> <td>CE</td> <td>(1)10</td> <td>0.5484</td> <td>0.5631</td> <td>270 min</td> </tr> <tr> <td>v8</td> <td>densenet161</td> <td>FocalLoss</td> <td>(1)10</td> <td>0.5388</td> <td>0.5574</td> <td>270 min</td> </tr> <tr> <td>v11</td> <td>resnet101</td> <td>CE</td> <td>(6)4+3*1+4*10</td> <td>0.5451</td> <td>0.5576</td> <td>670 min</td> </tr> <tr> <td>vk2</td> <td>densenet161</td> <td>CE</td> <td>(3)12</td> <td>0.5594</td> <td>0.5748</td> <td>480 min</td> </tr> </tbody> </table> <h2>Inference</h2> <p>The probabilities of all six models were equally weighted. <br> All models were run with 4xTTA.</p> <p>Scores of some ensembles:</p> <table> <thead> <tr> <th>Models</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>3, 5, 7, 8, 11, k2</td> <td>0.6046</td> <td>0.6164</td> </tr> <tr> <td>7, 8, 11, k2</td> <td>0.5987</td> <td>0.6120</td> </tr> </tbody> </table> <h2>What didn't work this time</h2> <ul> <li>I tried ArcFace, but failed</li> <li>Hotel chain as auxiliary variable</li> <li>Attention blocks in head or body</li> <li>ResNexts, SEResnet</li> <li>EfficientNets, NFNet (see lessons learned)</li> </ul> <h2>Lessons learned</h2> <p>Some of the experiments (chain as aux variable or NFNet) were done in plain pytorch.<br> I had a hard time achieving fastai performance in plain pytorch in 10 epochs. I only managed to achieve comparable performance for the <em>freezing</em> epochs (where only the head is updated). That's probably the reason why I didn't get EffNets and NFNets to fly. Some valuable insights I discovered when reinventing the wheel were:</p> <ul> <li>augmentation on the GPU is very helpful with only a few slow CPUs</li> <li>Fastai doubles the features before they enter the head (MaxPool AND AvgPool)</li> <li>discriminative learning rates seem to be the secret sauce on this data. Tried param_groups on Adam but didn't get it to work effectively.</li> </ul> <p>Sometimes simple is better. So I ended up with rather common models.</p> <p>Thanks! No, I tried to implement ArcFace, but screwed it up. Besides regular CE I also used FocalLoss in some models, which performed about the same as LabelSmoothing.</p>
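<p>For reference, a minimal fastai sketch of the training setup above (the linked notebooks contain the real code; the dataframe and column names here are only illustrative):</p> <pre><code>from fastai.vision.all import *

# Illustrative only: `df` is assumed to hold image file names and hotel_id labels.
dls = ImageDataLoaders.from_df(
    df, path='.', fn_col='image', label_col='hotel_id', valid_pct=0.1,
    item_tfms=Resize(448, method='pad', pad_mode='reflection'),
    batch_tfms=aug_transforms(size=224))

learn = vision_learner(dls, densenet161, opt_func=QHAdam, metrics=accuracy).to_fp16()
learn.fine_tune(10, freeze_epochs=1)   # the "(1)10" schedule: 1 frozen epoch, then 10 unfrozen epochs
</code></pre>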
Hotel-ID to Combat Human Trafficking 2021 - FGVC8
1st place solution: Swin + Arcface + Label-constrained DBA
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this interesting challenge! I am very proud to have participated in such a socially meaningful competition.</p> <p>Code: <a href="https://github.com/smly/hotelid-2021-first-place-solution" target="_blank">https://github.com/smly/hotelid-2021-first-place-solution</a></p> <h2>Preprocessing: Rotation Correction</h2> <p>Many Traffickcam images do not have the proper rotation angle. I created a classification model for correcting the rotation angle and applied it to all traffickcam images. This model was trained on the Hotels50k dataset. In the end, the rotation angles of 4859 out of 97554 training images were corrected.</p> <h2>Modeling</h2> <ul> <li>arch: Arcface (backbone: Swin, RegNetY, ResNeSt)</li> <li>optimizer: AdamW, scheduler=CosineAnnealing</li> <li>data augmentation: RandomResizedCrop, Resize, HorizontalFlip, RandomBrightness, RandomContrast, RandomGamma, ShiftScaleRotate and Cutout</li> </ul> <p>I solved this contest with metric learning and nearest neighbor search. The image representations were trained with Arcface. For the backbone, I used ResNeSt101e, RegNetY120, and Swin Transformer (Swin-L). All the models I trained score almost the same on the public LB.</p> <p>To deal with class imbalance, in the nearest neighbor search only the TOP1 nearest neighbor similarity score for each hotel was used as the confidence. When aggregating with the sum of top-k scores instead, the result was worse.</p> <h2>Label-constrained Database Augmentation</h2> <p>By updating the image representations with Database Augmentation (k=5), the LB score can be improved. However, DBA is easily affected by image representations from other classes, because there are few similar images. Thus, I added the following label constraint to DBA. (<a href="https://gist.github.com/smly/ce2457841941789cc4b9dcc670765dca" target="_blank">code</a>)</p> <p><img src="https://cdn-ak.f.st-hatena.com/images/fotolife/s/smly/20210527/20210527212650.png" alt=""></p> <h2>Database Expansion</h2> <p>I performed a search using Hotels50k as the index with Hotel-ID as the query and mapped the hotels. Adding H50k images to the training set and to the index set in the nearest neighbor search further improved the classification accuracy.</p> <p><img src="https://cdn-ak.f.st-hatena.com/images/fotolife/s/smly/20210527/20210527212607.png" alt=""></p> <h2>Ablation Study</h2> <h3>Single model performance</h3> <p>I used three backbones, ResNeSt101e, RegNetY120, and Swin Transformer, to create models with various combinations of training and index sets. The major improvement came from the difference in the training set and index set, not from the modeling part.</p> <p>HID+H50k(a) indicates the image set with the traffickcam images from H50k added. HID+H50k(b) indicates the image set with the traffickcam &amp; travel site images from H50k added. 
HID+H50k(a) and HID+H50k(b) correspond to <code>train_hotelidv3.csv</code> and <code>train_hotelidv4.csv</code> in <a href="https://www.kaggle.com/confirm/hotelid-models" target="_blank">https://www.kaggle.com/confirm/hotelid-models</a> respectively.</p> <table> <thead> <tr> <th>Backbone</th> <th>input size</th> <th>train set</th> <th>index set</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>ResNeSt101e</td> <td>512</td> <td>H50k-&gt;HID</td> <td>HID only</td> <td>0.7852</td> <td>0.8027</td> </tr> <tr> <td>ResNeSt101e</td> <td>512</td> <td>H50k→HID+H50k(a)</td> <td>HID+H50k(a)</td> <td>0.8197</td> <td>0.8250</td> </tr> <tr> <td>ResNeSt101e</td> <td>512</td> <td>H50k→HID+H50k(a)</td> <td>HID+H50k(b)</td> <td>tbd</td> <td>tbd</td> </tr> <tr> <td>RegNetY120</td> <td>512</td> <td>H50k→HID+H50k(a)</td> <td>HID+H50k(a)</td> <td>tbd</td> <td>tbd</td> </tr> <tr> <td>RegNetY120</td> <td>512</td> <td>H50k→HID+H50k(b)</td> <td>HID+H50k(a)</td> <td>best</td> <td>best</td> </tr> <tr> <td>Swin</td> <td>384</td> <td>H50k→HID+H50k(b)</td> <td>HID+H50k(a)</td> <td>0.8126</td> <td>0.8218</td> </tr> <tr> <td>Swin</td> <td>384</td> <td>H50k→HID+H50k(b)</td> <td>HID+H50k(b)</td> <td>0.8181</td> <td>0.8259</td> </tr> </tbody> </table> <h3>Ensemble</h3> <p>I checked the performance contribution on the ensemble by removing one of the four models I created. The lower score indicates that the model is more important to the ensemble. It can be seen that Swin makes a significant contribution to the ensemble.</p> <table> <thead> <tr> <th>Backbone</th> <th>input size</th> <th>train set</th> <th>index set</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>- ResNeSt101e</td> <td>512</td> <td>H50k-&gt;HID+H50k(a)</td> <td>HID+H50k(b)</td> <td>0.8502</td> <td>0.8551</td> </tr> <tr> <td>- RegNetY120</td> <td>512</td> <td>H50k→HID+H50k(a)</td> <td>HID+H50k(b)</td> <td>0.8525</td> <td>0.8594</td> </tr> <tr> <td>- RegNetY120</td> <td>512</td> <td>H50k→HID+H50k(b)</td> <td>HID+H50k(b)</td> <td>0.8523</td> <td>0.8589</td> </tr> <tr> <td>- Swin</td> <td>384</td> <td>H50k→HID+H50k(b)</td> <td>HID+H50k(b)</td> <td><strong>0.8442</strong></td> <td><strong>0.8508</strong></td> </tr> </tbody> </table> <h3>Database Augmentation</h3> <p>The table below compares the effect of DBA after ensemble.</p> <p>HID+H50k(b) gives higher accuracy, but the number of images in the index set is large. This causes an out-of-memory error when trying to do DBA.</p> <table> <thead> <tr> <th>No</th> <th>Index set</th> <th>DBA method</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>#1</td> <td>HID+H50k(a)</td> <td>No QE&amp;DBA</td> <td>0.8514</td> <td>0.8557</td> </tr> <tr> <td>#2</td> <td>HID+H50k(a)</td> <td>QE&amp;DBA*2</td> <td>0.8527</td> <td>0.8570</td> </tr> <tr> <td>#3</td> <td>HID+H50k(a)</td> <td>QE&amp;LC-DBA*2</td> <td><strong>0.8576</strong></td> <td><strong>0.8596</strong></td> </tr> <tr> <td>#4</td> <td>HID+H50k(b)</td> <td>No QE&amp;DBA</td> <td>0.8541</td> <td>0.8595</td> </tr> <tr> <td>#5</td> <td>HID+H50k(b)</td> <td>QE&amp;DBA*2</td> <td>OOM Error</td> <td>OOM Error</td> </tr> <tr> <td>#6</td> <td>HID+H50k(b)</td> <td>QE&amp;LC-DBA*2</td> <td>OOM Error</td> <td>OOM Error</td> </tr> </tbody> </table> <p>Finally, my best submission is the simple average of (#3) and (#4). It scores 0.859 in public LB.</p> <h2>Things that did not work well</h2> <p>I tried to add a chain classifier head, but it didn't work well. 
Since the number of images for many hotels is very small, I assumed that the information about the same chain would be useful in training the model. I think the reason why it did not work was that the diversity of images in the same chain was too large.</p> <ul> <li>OSME block</li> <li>chain_id Head</li> <li>DELG descriptors</li> <li>Reranking Transformer (My code might be wrong.)</li> <li>Scoring similarities with using local feature matching</li> <li>TransReID</li> <li>Graph NN</li> <li>Graph-based SSL</li> </ul>
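<p>The linked gist has my exact implementation; the snippet below is only a rough sketch of the label-constrained DBA idea (it assumes L2-normalized embeddings and an index small enough to hold the full pairwise similarity matrix in memory):</p> <pre><code>import numpy as np

def label_constrained_dba(index_emb, index_labels, k=5):
    # Re-express every index vector as a similarity-weighted sum of its top-k neighbours,
    # but only neighbours with the same hotel label are allowed to contribute.
    sims = index_emb @ index_emb.T                    # cosine similarity (inputs are normalized)
    topk = np.argsort(-sims, axis=1)[:, :k + 1]       # +1 because each vector is its own neighbour
    new_emb = np.zeros_like(index_emb)
    for i, nbrs in enumerate(topk):
        nbrs = nbrs[index_labels[nbrs] == index_labels[i]]   # the label constraint
        w = sims[i, nbrs]
        new_emb[i] = (w[:, None] * index_emb[nbrs]).sum(axis=0)
    norms = np.linalg.norm(new_emb, axis=1, keepdims=True)
    return new_emb / np.clip(norms, 1e-12, None)
</code></pre>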
Hotel-ID to Combat Human Trafficking 2021 - FGVC8
8th place solution: arcface + cosface + classification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to the organizers for hosting such an interesting challenge, and I hope some of the solutions will help to improve the current methods used in trafficking investigations.</p> <p>Congrats to the winners, the final scores are pretty impressive and I can't wait to learn what the secret sauce of your solutions was.</p> <h2>Motivation</h2> <p>I've never worked on a reverse image search or image similarity problem before, so I wanted to try out some methods and learn something new. On top of that, this competition is very interesting and the solutions might have a real-world impact, which made it very tempting to join.</p> <h2>Overview</h2> <p>My solution is nothing special. I trained 3 types of models:<br> ArcMargin model: <a href="https://www.kaggle.com/michaln/hotel-id-arcmargin-training" target="_blank">https://www.kaggle.com/michaln/hotel-id-arcmargin-training</a><br> CosFace model: <a href="https://www.kaggle.com/michaln/hotel-id-cosface-training" target="_blank">https://www.kaggle.com/michaln/hotel-id-cosface-training</a><br> Simple classification model: <a href="https://www.kaggle.com/michaln/hotel-id-classification-training" target="_blank">https://www.kaggle.com/michaln/hotel-id-classification-training</a></p> <p>All with the same parameters: Lookahead + AdamW optimizer, OneCycleLR scheduler, CrossEntropyLoss/CosFace loss</p> <p>Models used as backbones: eca_nfnet_l0, efficientnet_b1, ecaresnet50d_pruned</p> <p>These models were then used to generate embeddings for the images, which were then used to calculate the cosine similarity of the test images to the train dataset. To ensemble, I just calculated the product of the similarities from the different models and then found the top 5 most similar images from different hotels.</p> <p>I planned to train more models but it was painfully slow (2-3 hours for a single epoch on Colab using a T4 GPU) and I ran out of time. 
So in the end I used only 4 models in my final ensemble.<br> Inference notebook: <a href="https://www.kaggle.com/michaln/hotel-id-inference" target="_blank">https://www.kaggle.com/michaln/hotel-id-inference</a><br> Dataset with trained models: <a href="https://www.kaggle.com/michaln/hotelid-trained-models" target="_blank">https://www.kaggle.com/michaln/hotelid-trained-models</a></p> <table> <thead> <tr> <th>Type</th> <th>Backbone</th> <th>Embed size</th> <th>Public LB</th> <th>Private LB</th> <th>Epochs</th> </tr> </thead> <tbody> <tr> <td>ArcMargin</td> <td>eca_nfnet_l0</td> <td>1024</td> <td>0.6564</td> <td>0.6704</td> <td>6/6</td> </tr> <tr> <td>ArcMargin</td> <td>efficientnet_b1</td> <td>4096</td> <td>0.6780</td> <td>0.6962</td> <td>9/9</td> </tr> <tr> <td>Classification</td> <td>eca_nfnet_l0</td> <td>4096</td> <td>0.6691</td> <td>0.6875</td> <td>6/9</td> </tr> <tr> <td>CosFace</td> <td>ecaresnet50d_pruned</td> <td>4096</td> <td>0.6702</td> <td>0.6796</td> <td>9/9</td> </tr> <tr> <td>Ensemble</td> <td></td> <td></td> <td>0.7273</td> <td>0.7446</td> <td></td> </tr> </tbody> </table> <p>git: <a href="https://github.com/michal-nahlik/kaggle-hotel-id-2021" target="_blank">https://github.com/michal-nahlik/kaggle-hotel-id-2021</a></p> <h2>Data</h2> <p>I used only the competition data, as it was never really confirmed that we could use external datasets like Hotels50k. I rescaled images to 512x512 and padded them when needed.</p> <p>Image preprocessing notebook: <a href="https://www.kaggle.com/michaln/hotel-id-preprocess-images" target="_blank">https://www.kaggle.com/michaln/hotel-id-preprocess-images</a><br> 512x512 dataset: <a href="https://www.kaggle.com/michaln/hotelid-images-512x512-padded" target="_blank">https://www.kaggle.com/michaln/hotelid-images-512x512-padded</a><br> 256x256 dataset: <a href="https://www.kaggle.com/michaln/hotelid-images-256x256-padded" target="_blank">https://www.kaggle.com/michaln/hotelid-images-256x256-padded</a></p> <h2>What worked</h2> <ul> <li>heavy augmentations</li> <li>cosine similarity (I tried euclidean distance, SNR distance and others, but cosine worked the best)</li> <li>lookahead + AdamW optimizer</li> <li>bigger embedding layer - in general I saw the best results with an embedding layer at least half the size of the number of final targets (used 4096 for 7770 targets)</li> <li>bigger images (512x512 was better than 256x256, 1024 was even better but too slow to train)</li> <li>eca_nfnet_l0 - this model got the best score of all; it was a hidden gem discovered in the Shopee competition for me</li> </ul> <h2>What didn't work</h2> <ul> <li>TTA - I tried to use TTA to rotate the image and use the maximum similarity (or minimum distance) over the TTAs, but in the end it did not improve the score</li> <li>clustering + voting - I tried clustering to get the predictions based on embeddings and then used voting to ensemble different models, but a simple search for the most similar image and the product of similarities worked better</li> <li>triplet loss - couldn't get a decent score; it got much better when I added a classification head, but then pure classification got better results by itself so I dropped triplet completely</li> <li>pretraining on smaller images + pretraining and freezing some layers - full training was just much better</li> </ul> <h2>What worked but I didn't use</h2> <ul> <li>More data - I downloaded parts of the Hotels 50k dataset and added them to the competition data and it did improve the score. 
The training was pretty slow already and it was never confirmed that we can use it so I decided to stick with the competition data.</li> </ul>
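<p>To make the retrieval step above concrete, here is a minimal sketch (not the author's actual code; variable names are illustrative) of the cosine-similarity ensemble: embeddings are L2-normalized, per-model similarities are multiplied together, and the top 5 most similar images from distinct hotels are kept.</p> <pre><code>import numpy as np

def top5_hotels(test_embs_per_model, train_embs_per_model, train_hotel_ids):
    """test/train_embs_per_model: lists of (N_test x D) / (N_train x D) arrays, one pair per model;
    train_hotel_ids: hotel id for every train image."""
    combined = None
    for test_e, train_e in zip(test_embs_per_model, train_embs_per_model):
        # L2-normalize so the dot product equals cosine similarity
        test_e = test_e / np.linalg.norm(test_e, axis=1, keepdims=True)
        train_e = train_e / np.linalg.norm(train_e, axis=1, keepdims=True)
        sim = test_e @ train_e.T                                   # (N_test, N_train) similarities
        combined = sim if combined is None else combined * sim     # product of per-model similarities
    predictions = []
    for row in combined:
        order = np.argsort(-row)                                   # most similar train images first
        hotels = []
        for idx in order:
            hotel = train_hotel_ids[idx]
            if hotel not in hotels:
                hotels.append(hotel)
            if len(hotels) == 5:
                break
        predictions.append(hotels)
    return predictions
</code></pre>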
Hotel-ID to Combat Human Trafficking 2022 - FGVC9
1st place solution - New augmentation method + 5 model ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2022 - FGVC9 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>Would you mind sharing the effect of PCA here?</p> </blockquote> <p>Performing PCA gives about a 0.01 mAP improvement (vs simple concat of embeddings) while reducing dimensionality at the same time.</p> <blockquote> <p>I can't see in the description if you used any external data, does it mean you managed to get the results with only competition data?</p> </blockquote> <p>I used a very small amount of data from FGVC8, only from classes where &lt;8 samples existed in FGVC9.</p> <p>Thanks to the sponsors and Kaggle for organizing this important research challenge.</p> <p><strong><em>TLDR</em></strong>: Solution architecture includes 5 model ensemble at two different image sizes with embedding dimension reduced by PCA. Secret sauce is a newly developed augmentation method designed to deal with large occlusions while maintaining semantic cohesion of the overall scene.</p> <h2>Augmentation - Introducing BlendFlip:</h2> <p>The core problem in this challenge is dealing with large masked occlusions in test/query images. For this my goal was to inpaint the masked region while attempting to:</p> <ul> <li>maintain semantic cohesion with the overall scene</li> <li>maintain continuity around masked edges</li> <li>fill in visually realistic content</li> </ul> <p>When looking at hotel room images, a common pattern is many vertical and horizontal features against repeating backgrounds. For instance wall corners that run vertical, door frames that run vertical and horizontal, pictures on walls that are square, etc. This observation gave me the idea to apply image patches from the same image directly adjacent to the masked region and essentially flip them into the masked region, and then blend the flips from the vertical and horizontal direction, hence the name BlendFlip.</p> <p><img src="https://i.imgur.com/q8ZAdPK.jpg" alt="BlendFlip"></p> <p>BlendFlip algorithm:</p> <ol> <li>Determine max distance between masked region edges in the vertical direction (image top to mask top, mask bottom to image bottom). Do same for horizontal. Max direction becomes the regions to be used for inpainting.</li> <li>Flip max vertical region into masked region. If initial flip doesn’t fill entire masked region, keep flipping in vertical region until masked region is filled, cut off at edge of masked region.</li> <li>Repeat step 2 for horizontal region.</li> <li>Take 50/50 blend of step2 and step3 (cv2.addWeighted) as final inpainted region.</li> </ol> <p>BlendFlip helps address 2 1/2 of the 3 problems listed above. It helps maintain semantic cohesion with visually realistic content. It maintains continuity on 2 of the 4 sides of the masked edges. 
I tried GAN's, traditional CV inpainting methods, variations of CutMix, but BlendFlip significantly outperformed them all.</p> <p><img src="https://i.imgur.com/sl6d9IK.jpg" alt="Example 1"><br> <img src="https://i.imgur.com/i18vrXt.jpg" alt="Example 2"><br> <img src="https://i.imgur.com/BxLju54.jpg" alt="Example 3"></p> <p>The goal of blendflip is NOT to predict what might be in the occluded region, but rather maintain the integrity of the image embedding from the non-occluded regions.</p> <h2>Solution architecture</h2> <p>The overall solution architecture is shown in the diagram below. Images of sizes 1024x1024 (longest max side scaled to 1024, aspect ratio maintained) or 384x384 are fed into 5 models of various architecture. Each model is trained with ArcFace loss and the embedding size of each is 1536D. The embeddings are then concatenated and reduced to a total of 3072D via PCA (size determined by maintaining 99% of variance of embedding concat), and then KNN performed. Each model is trained with a 50% probability of BlendFlip on each image. BlendFlip is performed on all test images prior to inference. No post processing or re-ranking was applied. </p> <p><img src="https://i.imgur.com/0vbOz3Z.jpg" alt="Solution architecture"></p> <p>For the CVPR paper I will include proper ablation studies and relative impact of the solution ingredients. BlendFlip accounted for ~0.03-0.04 mAP.</p>
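<p>Below is a rough sketch of the BlendFlip idea for an axis-aligned rectangular occlusion (not the competition code; the coordinates, helper names and the assumption that the mask does not span the whole image are mine): the larger region on each axis is mirrored repeatedly into the mask, and the vertical and horizontal fills are blended 50/50 with cv2.addWeighted.</p> <pre><code>import cv2
import numpy as np

def fill_by_flipping(strip, size, axis, strip_is_before_mask):
    """Mirror `strip` repeatedly along `axis` until `size` pixels are covered,
    keeping the edge adjacent to the mask continuous."""
    if not strip_is_before_mask:
        strip = np.flip(strip, axis=axis)       # put the mask-adjacent edge last
    pieces, covered, cur = [], 0, strip
    while covered &lt; size:
        cur = np.flip(cur, axis=axis)           # mirror so the adjacent edge leads
        pieces.append(cur)
        covered += cur.shape[axis]
    fill = np.concatenate(pieces, axis=axis).take(np.arange(size), axis=axis)
    if not strip_is_before_mask:
        fill = np.flip(fill, axis=axis)         # move the adjacent edge back next to the mask
    return fill

def blend_flip(img, y0, y1, x0, x1):
    """Inpaint the rectangular masked region img[y0:y1, x0:x1]."""
    h, w = img.shape[:2]
    # Vertical fill: use the taller of the regions above/below the mask.
    use_top = y0 &gt;= (h - y1)
    v_strip = img[:y0, x0:x1] if use_top else img[y1:, x0:x1]
    v_fill = fill_by_flipping(v_strip, y1 - y0, axis=0, strip_is_before_mask=use_top)
    # Horizontal fill: use the wider of the regions left/right of the mask.
    use_left = x0 &gt;= (w - x1)
    h_strip = img[y0:y1, :x0] if use_left else img[y0:y1, x1:]
    h_fill = fill_by_flipping(h_strip, x1 - x0, axis=1, strip_is_before_mask=use_left)
    out = img.copy()
    out[y0:y1, x0:x1] = cv2.addWeighted(v_fill, 0.5, h_fill, 0.5, 0)  # 50/50 blend of the two fills
    return out
</code></pre>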
Hotel-ID to Combat Human Trafficking 2022 - FGVC9
2nd solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2022 - FGVC9 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host for organizing this challenge.</p> <h3>Mask</h3> <p>We computed statistics (mean and variance) of the size and position of the masks in the test set, and generated masks with these statistical values during training.</p> <h3>data preprocess</h3> <p>50k+fgvc9<br></p> <ul> <li><p>data clean<br><br> Calculate the md5 of all the data, and delete the data with the same md5 but different categories, keeping the data in fgvc9 (a total of 45,769 categories remain). </p></li> <li><p>direction<br><br> Using the direction model, rotate the image to face upward.</p></li> </ul> <h3>train step</h3> <p>step 1. <br><br> train on all data (10~20 epochs)<br><br> step 2. <br><br> fine-tune on the data of the 3116 categories in fgvc9 (40 epochs) (improves ~0.03)</p> <h3>loss function</h3> <p>Because images with the same hotel id vary greatly (bedrooms, bathrooms, etc.), we use sub-center ArcFace (k=3, Dynamic Margin) as the loss function.</p> <h3>predict</h3> <p>There is little difference between using logits and retrieval in my experiments, so I chose the more convenient logits for prediction.</p> <h3>Model</h3> <table> <thead> <tr> <th>Backbone</th> <th>input size</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>swin-base-384</td> <td>384</td> <td>0.707</td> <td>0.688</td> </tr> <tr> <td>swin-large-384</td> <td>384</td> <td>0.704</td> <td>0.692</td> </tr> <tr> <td>eca_nfnet_l1</td> <td>576</td> <td>0.700</td> <td>0.688</td> </tr> <tr> <td>efficientNetV2-l</td> <td>480</td> <td>/</td> <td>/</td> </tr> <tr> <td>efficientNet-b5</td> <td>576</td> <td>/</td> <td>/</td> </tr> <tr> <td>ensemble</td> <td>/</td> <td>0.732</td> <td>0.717</td> </tr> </tbody> </table> <h3>did not work</h3> <p>Pseudo Labeling of fgvc8</p>
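<p>To make the Mask step above concrete, here is a minimal sketch (the statistics below are placeholders; in practice they would be measured on the test-set masks) of painting a test-like occlusion onto a training image.</p> <pre><code>import numpy as np

# Placeholder statistics -- in practice, measure mean/std of mask size and position on the test set.
MASK_STATS = {"h_mean": 0.35, "h_std": 0.10, "w_mean": 0.35, "w_std": 0.10,
              "cy_mean": 0.55, "cy_std": 0.15, "cx_mean": 0.50, "cx_std": 0.15}

def random_test_like_mask(img, stats=MASK_STATS, fill_value=255):
    """Occlude a training image with a rectangle whose relative size and position
    are sampled from the test-mask statistics."""
    h, w = img.shape[:2]
    mh = int(np.clip(np.random.normal(stats["h_mean"], stats["h_std"]), 0.05, 0.9) * h)
    mw = int(np.clip(np.random.normal(stats["w_mean"], stats["w_std"]), 0.05, 0.9) * w)
    cy = int(np.clip(np.random.normal(stats["cy_mean"], stats["cy_std"]), 0.0, 1.0) * h)
    cx = int(np.clip(np.random.normal(stats["cx_mean"], stats["cx_std"]), 0.0, 1.0) * w)
    y0, x0 = max(cy - mh // 2, 0), max(cx - mw // 2, 0)
    out = img.copy()
    out[y0:y0 + mh, x0:x0 + mw] = fill_value   # constant fill standing in for the test occlusion
    return out
</code></pre>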
Hotel-ID to Combat Human Trafficking 2022 - FGVC9
Public/Private 3rd Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Hotel-ID to Combat Human Trafficking 2022 - FGVC9 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!<br> I read your solution that is also great. (Blind Flip etc..)</p> <blockquote> <p>You were able to squeeze a lot of performance out of ConvNeXt, I ended up dropping it late because it didn't perform as well as others for me</p> </blockquote> <p>heavier model is better(ConxNeXt XLarge)<br> If I have more resouces, I tried many models ensemble.</p> <blockquote> <p>I also like you got subcenter + dynamic margin </p> </blockquote> <p>I experiment both ArcFace vanilla and ArcFace Subcenter + Dynamic margin, better is ArcFace Subcenter + Dynamic margin. </p> <blockquote> <p>due to the different scenes within the same hotel (ie bathroom vs bedroom)</p> </blockquote> <p>I agree, Hotel have difference scenes.</p> <p>Thanks!</p> <blockquote> <p>Did you handle the occlusions in some special way? Did you generate them for training or remove them in test using the provided masks?</p> </blockquote> <p>I create mask using albumentations.</p> <blockquote> <p>Do you plan to release the source code?</p> </blockquote> <p>No. </p> <blockquote> <p>I am curious about the ArcFaceSubcenter + Dynamic Margin :-)</p> </blockquote> <p>ArcFaceSubcenter + Dynamic Margin is used for GLR 2020 3rd place.<br> <a href="https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757" target="_blank">https://www.kaggle.com/competitions/landmark-recognition-2020/discussion/187757</a><br> it have source code.</p> <blockquote> <p>What is DOLG?</p> </blockquote> <p>DOLG is used for GLR2021 1st Solution<br> I refer to it code.<br> <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place</a><br> <a href="https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/blob/main/models/ch_mdl_dolg_efficientnet.py#L248" target="_blank">https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/blob/main/models/ch_mdl_dolg_efficientnet.py#L248</a></p> <p>Thank you for host,competitors,team member <a href="https://www.kaggle.com/ks2019" target="_blank">@ks2019</a> <br> It is a great competiton!<br> here is our team summary.</p> <h1>Summary</h1> <ul> <li>ArcFaceSubcenter + Dynamic Margin</li> <li>Many Backbones(Swin,ConvNeXt, ResNet200D, EfficientNet etc..)</li> <li>use External Data(FGVC8)</li> <li>use Logits</li> </ul> <h2>Modeling</h2> <table> <thead> <tr> <th>Model</th> <th>Data</th> <th>ImageSize</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>Swin Large</td> <td>Comp</td> <td>512</td> <td>0.587</td> <td>0.584</td> </tr> <tr> <td>ConvNeXt XLarge</td> <td>Comp + External</td> <td>384</td> <td>0.662</td> <td>0.656</td> </tr> <tr> <td>Swin Large</td> <td>Comp+ External</td> <td>384</td> <td>0.659</td> <td>0.657</td> </tr> <tr> <td>EfficientNetB7 + DOLG</td> <td>Comp + External</td> <td>448(stride1)</td> <td>0.644</td> <td>0.639</td> </tr> <tr> <td>ConvNeXt XLarge</td> <td>Comp + External</td> <td>512</td> <td>0.677</td> <td>0.672</td> </tr> <tr> <td>ResNet200D + DOLG</td> <td>Comp + External</td> <td>640</td> <td>0.651</td> <td>0.654</td> </tr> <tr> <td>EfficientNetB6 + 
DOLG</td> <td>Comp + External</td> <td>640</td> <td>0.635</td> <td>0.642</td> </tr> <tr> <td>EfficientNetV2S + DOLG</td> <td>Comp + External</td> <td>1024</td> <td>0.645</td> <td>0.642</td> </tr> <tr> <td>EfficientNetV2M + DOLG</td> <td>Comp + External</td> <td>896</td> <td>0.666</td> <td>0.666</td> </tr> </tbody> </table> <h2>Pseudo Labeling</h2> <p>We used FGVC9 and FGVC8 competition data, but FGVC8 doesn't have competition labels.<br> We annotated labels for these data using pseudo labeling.</p> <p>First, we used KNN to match the FGVC8 dataset against the FGVC9 training data.<br> If the distance was below the 0.5 threshold, these data were used for training.</p> <p>Next, we adjusted the labeling. FGVC8 shares the same hotel list.<br> I used a mean aggregation method.</p> <h2>Prediction</h2> <p>I used logits because the test images have a mask but the training images do not.<br> I thought it would be difficult to match training and test, so I decided to compare KNN vs logits.<br> Logits were better than KNN in my experiments.</p> <p>My inference time is about 2 hours.</p> <h2>did not work</h2> <ul> <li>used Hotel50K</li> <li>gradient checkpointing</li> </ul> <p>Thanks to <a href="https://www.kaggle.com/picekl" target="_blank">@picekl</a> and <a href="https://www.kaggle.com/michaln" target="_blank">@michaln</a>. I totally agree with Michal.<br> I found the host statement below and then decided to use external data. I understand that the host allowed external data. </p> <blockquote> <p>However, I will simply say that there are no points awarded for this competition or monetary awards -- the goal of this competition is to help in the development of the best approaches to recognizing hotels to combat human trafficking. If an external dataset fits that goal, then by all means, use it.</p> </blockquote>
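<p>The KNN-based pseudo labeling described above could look roughly like the sketch below (illustrative names; the 0.5 value plays the role of the distance cutoff mentioned in the write-up, and the mean aggregation over FGVC8's hotel list is left out).</p> <pre><code>import numpy as np

def pseudo_label(unlabeled_embs, labeled_embs, labels, max_dist=0.5):
    """Assign each unlabeled embedding the label of its nearest class centroid,
    keeping only matches whose cosine distance is below max_dist."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    labeled_embs, unlabeled_embs = l2norm(labeled_embs), l2norm(unlabeled_embs)
    classes = np.unique(labels)
    centroids = np.stack([l2norm(labeled_embs[labels == c].mean(0, keepdims=True))[0] for c in classes])
    sim = unlabeled_embs @ centroids.T                   # cosine similarity to each class centroid
    best = sim.argmax(1)
    dist = 1.0 - sim[np.arange(len(sim)), best]          # cosine distance of the best match
    keep = dist &lt; max_dist
    return classes[best[keep]], np.where(keep)[0]        # pseudo labels and indices of kept images
</code></pre>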
How Much Did It Rain?
First place code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: How Much Did It Rain? <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is code for anyone interested (I'll put it on GitHub when I figure out how to do that). I simplified things a bit and it should run in about 10 hours. The included documentation has instructions to run.</p> <p>Scores: public 0.00751, private 0.007485</p>
HuBMAP + HPA - Hacking the Human Body
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and congrats to all the winners!</p> <h3>Overview</h3> <p>In this competition, since CV based on the HPA labelled data is not reliable for the private LB score improvement, I made as many submissions as possible to confirm the effectiveness of my experiments by the Hubmap only LB score. Finally, the following three points were the key item of my solution.</p> <ul> <li>Use GTEx data</li> <li>Ensemble and TTA as many as possible (12 models x 20 TTAs)</li> <li>Post-process</li> </ul> <h3>Models</h3> <p>Total 12 models were used for ensemble.</p> <ul> <li>EfficientNet B4/B5/B6/B7 - Unet</li> <li>EfficientNetV2 S/M/L - Unet</li> <li>ConvNeXt T/S/B - DeeplabV3+</li> <li>Swin B/L - Unet</li> </ul> <h3>Training</h3> <ul> <li>Data were resized to x0.25 scale (for GTEx data, after rescaled to 0.4 um pixel size) and random cropped to 512x512 size</li> <li>120 epochs</li> <li>Loss: BCE loss + Dice loss</li> <li>Optimizer: Adam</li> <li>LR schedule: Cosine decay with warmup</li> <li>Augmentation stronger than usual competitons was used</li> <li>ImageNet pretrained backbone</li> <li>Trained on TPU of Kaggle kernel or Colab Pro</li> <li>All models were trained with TensorFlow</li> <li>All data were used for training (Not using kFold-CV)</li> </ul> <h3>External data</h3> <p>I downloaded external data from GTEx Portal, and made the pseudo labels of them. Since the GTEx .svs images have too large resolution, they were cropped as tile before making the pseudo labels. If the pseudo label's mask area was too small or there was no mask, this tile was excluded. The pseudo labels were manually checked and modified or removed if it was obviously wrong. These data were used for training by mixing with the original labeled data. Finally, total 650 tiles from 40 samples were used for training. It improved my Hubmap only LB score ~0.02.</p> <h3>TTA</h3> <p>Although I could use maximum 8 combination patterns, I reduced them to four because of the 9 hours run-time limitation. The scales were chosen based on x0.25 training scale.</p> <ul> <li>5 scales x 4 patterns = 20 TTAs per model</li> <li>5 scales: 0.19, 0.21, 0.23, 0.25, 0.28</li> <li>4 patterns: original, lr-flip, ud-flip, rot90</li> </ul> <h3>Post-process</h3> <p>Although there is no clear evidence, especially for "spleen" and "lung", the boundary of the masks is fuzzy, so I tried to enlarge or shrink the masks and confirmed the effect in LB score. As a result, shrinking spleen and lung prediction masks improved my LB score. I did not include post-processing in one of the final submissions because I thought there was a possibility of overfitting to the public data. However, the resulting private LB score was better if I include the post-processing. 
The post-processing was implemented as the following code.</p> <pre><code>SHRINK = {'spleen': 10, 'lung': 10}

def post_proc(preds, organ):
    # preds: predicted mask after binarization
    if organ in SHRINK.keys():
        contours, _ = cv2.findContours(preds, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        preds = preds.astype(np.int8)
        preds_cont = np.zeros_like(preds)
        preds_cont = cv2.drawContours(preds_cont, contours, -1, 1, SHRINK[organ]*2)
        preds = (preds - preds_cont).clip(0, 1).astype(np.uint8)
    return preds
</code></pre> <p>I used <a href="https://github.com/qubvel/segmentation_models" target="_blank">this repository</a> and all the models' losses were configured like the following code (TensorFlow code).</p> <pre><code>model.compile(
    loss=sm.losses.bce_dice_loss,
)
</code></pre>
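<p>Going back to the GTEx preparation described above (tile the huge slides, pseudo-label each tile, and drop tiles whose mask is too small or empty), a rough sketch could look like this; the tile size, the area threshold and the predict_mask function are assumptions rather than the author's code, and the manual check of the pseudo labels would happen afterwards.</p> <pre><code>import numpy as np

def make_pseudo_tiles(slide, predict_mask, tile=3000, min_mask_ratio=0.01):
    """Crop a large slide into tiles, pseudo-label each tile with `predict_mask`
    (an ensemble prediction function returning a binary mask), and keep only
    tiles whose mask area is large enough."""
    h, w = slide.shape[:2]
    kept = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            crop = slide[y:y + tile, x:x + tile]
            mask = predict_mask(crop)                    # pseudo label for this tile
            if mask.mean() &gt;= min_mask_ratio:            # drop tiles with (almost) no mask
                kept.append((crop, mask))
    return kept
</code></pre>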
HuBMAP + HPA - Hacking the Human Body
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>Loss and Acc is normal.<br> The evaluation mDice with MMSeg is about 16</p> </blockquote> <p>is mDice 0.16? Do you mean Acc is decode.acc_seg?</p> <blockquote> <p>Yes. I didn't take 255 class into account.</p> </blockquote> <p>I feel like test accuracy is also low. Did you check it?</p> <blockquote> <p>Could you share your method?</p> </blockquote> <p>Actually I don't recommend to use it.<br> I couldn't modify the code to work also in multi-class and the code was written in messy way. <br> Also, CV wasn't useful in this competition:)</p> <p>Thanks:)</p> <p><strong>norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)</strong><br> model = dict(<br> type='EncoderDecoder',<br> backbone=dict(<br> type='MixVisionTransformer',<br> …<br> decode_head=dict(<br> type='SegformerHead',<br> …</p> <p>SegFormer is composed of 'MixVisionTransformer' and 'SegformerHead'.</p> <p>For MixVisionTransformer, Layer Normalization was used as default.<br> <a href="https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/backbones/mit.py#L329-L330" target="_blank">https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/backbones/mit.py#L329-L330</a></p> <p>For SegformerHead, default settings is SyncBN.<br> <a href="https://github.com/open-mmlab/mmsegmentation/blob/master/configs/_base_/models/segformer_mit-b0.py#L2" target="_blank">https://github.com/open-mmlab/mmsegmentation/blob/master/configs/_base_/models/segformer_mit-b0.py#L2</a><br> So you can change SyncBN to 'GN'.</p> <p>Supported normalizations would be<br> <a href="https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/bricks/norm.py#L11-L21" target="_blank">https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/bricks/norm.py#L11-L21</a></p> <p>The boost of external spleen was 0.00594 and lung was 0.00622.<br> In simple calculation, 0.83562(final score) - (0.00594+0.00622)=0.82346</p> <p>It was a good opportunity to check my submission history :)</p> <p>Hi, how degree is it low? Although I used mDice, I haven't compared the metrics of 2 class with multi class because I customized mDice calculation for 2 class, from "calculation from total area" to "average of each image".<br> I recommend you to submit the model.</p> <p>By the way, did you take "class 255" into account?<br> Class 255 means 'ignore' and would be converted to "class 1" in your code depending on pipeline position.<br> For example, RandomRotate and Pad would create class 255.<br> Here is an example. </p> <pre><code>@PIPELINES.register_module() class MaskTo2Class: def __call__(self, data): mask = data['gt_semantic_seg'] mask_ignore = (mask == 255) mask = np.where((mask &gt;= 1) &amp; (mask &lt; 255), 1, 0) mask[mask_ignore] = 255 data['gt_semantic_seg'] = mask return data </code></pre> <p>Anyway, creating custom pipeline for visualization would be helpful for future works:)</p> <p>First of all, I’d like to express my gratitude to kaggle and the competition hosts for holding such an amazing competition.<br> I’d like to thank MMSegmentation developers too. MMSegmentation is really nice and useful tool. 
Also great thanks to <a href="https://www.kaggle.com/ayuraj" target="_blank">@ayuraj</a> who incorporated a hook for Weights &amp; Biases into MMSegmentation, which helped a lot to manage training history.</p> <p>In the contest, I learned a lot from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. Thank you so much for sharing knowledge with us:)</p> <h3><strong>Overview</strong></h3> <p>I used the following methods.<br> ・ End-to-end MMSegmentation (with customization)<br> ・ Ensemble of SegFormers<br> ・ Used group normalization and trained with batch_size=1 in Colab<br> ・ Hand-labeling of lung from scratch<br> ・ External dataset and pseudo labeling.</p> <h3><strong>DataSet</strong></h3> <p>Used dataset:<br> ・ HPA images [351 images]<br> ・ HPA images stained with H&amp;E or PAS [351 images x 4 stain patterns]<br> ・ external spleen [39 images + 39 images x 4 stain patterns] *from a single image<br> ・ external lung [73 images] *from a single image</p> <p><strong>(Lung)</strong> When I noticed that the unstable results came from the lung predictions, I figured hand labeling would be better than predicting lung with a low threshold, which might lead to unstable and unconfident results. So I researched what lung alveoli look like and hand-labeled lung images from scratch. Since there is no information anywhere on how to annotate alveoli, I made about 7 annotation sets in which potential alveoli were gradually annotated, and probed which one was the best. Then I created an external dataset with pseudo labels and with the best threshold. After training with it, the best threshold for lung increased to 0.60, which gave the best result ever. I guess good modeling is better than hand labeling since there are those who got 0.11/0.22 in public/private. <br> <strong>(Spleen)</strong> Used pseudo labels for an external dataset<br> <strong>(Prostate/Largeintestine)</strong> Used pseudo labels for the HPA dataset.<br> I had a feeling that my models underfit the HPA dataset because the added datasets didn't put much weight on HPA colors. torchstain was used as the staining tool.</p> <h3><strong>Models</strong></h3> <p>Ensemble of the following models:<br> ・ 1x SegFormer mit-b3, image_size: 1024x1024<br> ・ 2x SegFormer mit-b4, image_size: 960x960<br> ・ 1x SegFormer mit-b5, image_size: 928x928<br> ・ 2x SegFormer mit-b5, image_size: 960x960<br> Different pretrained models, seeds and an additional stain dataset were used in some models. The reason all models are SegFormer is that I chose models only from MMSegmentation and it worked well.<br> *Best single model was mit-b4 (private: 0.82821).</p> <h3><strong>Training</strong></h3> <p>I used Google Colaboratory. Since GPU resources are limited, I trained SegFormer with batch_size=1 with group normalization. In my experiments, it was better than batch_size=4 with batch normalization in both CV and LB. I chose 2-class rather than 6-class segmentation because it showed better results consistently, especially in lung.
In 6 classes, the area of the lung tends to shrink and even if the area was equivalent to 2 classes, LB lung score was worse.</p> <p>settings:<br> ・ norm: group normalization(num_groups=32) for decoder<br> ・ num_classes:2<br> ・ loss_decode: CE:LovaszLoss=1:3<br> ・ steps: 45000<br> ・ optimization: AdamW with lr=0.00006 </p> <p>augmentations:<br> ・ Dataset image size: 2000x2000<br> ・ Downscale for prostate<br> ・ Crop/ Flip/ RandomRotate90/ HueSaturationValue/ RandomGamma/ RandomBrightness<br> ・ (mmseg)Resize to (480~1600, 480~1600) * such as (480, 1600), (1024,1024), (1500,480)<br> ・ (mmseg)RandomCrop(crop_size=model input size)</p> <h3><strong>Inference</strong></h3> <p>・ Whole image and ignored pixel_size<br> ・ TTA: horizontal flip and different scales (approximately x0.8, x1.0, x1.2) .<br> ・ Thresholds for each organ were kidney: 0.3 | large intestine: 0.2 | lung: 0.6 | prostate: 0.3 | spleen: 0.5</p> <h3><strong>Results</strong></h3> <p>all private: 0.83562 public: 0.82716 <br> kidney private: 0.16940 public: 0.12175<br> large intestine private: 0.09005 public: 0.05730<br> lung private: 0.21586 public: 0.10963<br> prostate private: 0.17784 public: 0.14772<br> spleen private: 0.18246 public: 0.17515</p> <h3><strong>About license</strong></h3> <p>Pretrained SegFormer and its encoder “mit” are unavailable for commercial use. It seems that other winners also use this model. However, I got an answer from the host, which permits the use of SegFormer in this competiion. The reasons are:</p> <ol> <li>Since competition hosts are non-profit, the commercial restriction isn’t applied.</li> <li>The model was Apache 2.0 license at the competition launch and the license was changed during the competition.</li> </ol> <p>Did you downscale prostate images? The image size for prostate is said to be 160x160.</p> <p>I used train_pipeline like below.</p> <hr> <p>crop_size = (960, 960)<br> train_pipeline = [<br> dict(type='LoadImageFromFile'),<br> dict(type='LoadAnnotations'),<br> dict(type='CustomAug'), # Customized for each organs<br> dict(type='Resize', img_scale=[(480,480),(1600, 1600)], keep_ratio=False, ratio_range=None),<br> dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.5), <br> dict(type='RandomFlip', direction='horizontal', prob=0),<br> dict(type='SaveOverlay', save_root_dir='/content', save_num=500, no_overlay=True), # Customized for visualization<br> dict(type='Convert2Class1'), # Customized for converting 6 class masks to 2 class one.<br> dict(type='Normalize', **img_norm_cfg),<br> dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),<br> dict(type='DefaultFormatBundle'),<br> dict(type='Collect', keys=['img', 'gt_semantic_seg']),<br> ]</p> <p>Prostate were processed with Albumentations in CustomAug:<br> A.Downscale(scale_min=0.08, scale_max=0.4, interpolation=cv2.INTER_LINEAR, p=0.9)<br> I used 2000x2000 images. 
So, downscale to from 160x160 to 800x800 and upscale to 2000x2000 in this case.</p> <blockquote> <p>How to set thresholds for each organ?</p> </blockquote> <p>You need to customize encorder_decoder.py<br> I created a notebook.<br> <a href="https://www.kaggle.com/code/opusen/customize-mmseg" target="_blank">https://www.kaggle.com/code/opusen/customize-mmseg</a></p> <p>Then you can use sigmoid raw outputs of each class for thresholds or ensemble.<br> I ensembled 'cell class' sigmoid of several models.</p> <blockquote> <p>Can you share your test config?</p> </blockquote> <p>Not much different from the others.</p> <p>dataset_type = 'xxx'<br> data_root = 'xxx'<br> img_norm_cfg = dict(<br> mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)<br> crop_size = (960, 960)<br> train_pipeline = []<br> val_pipeline = []<br> test_pipeline = [<br> dict(type='LoadImageFromFile'),<br> dict(<br> type='MultiScaleFlipAug',<br> img_scale=crop_size,<br> img_ratios=[0.8, 1.0, 1.2], #[0.8125, 1.0, 1.1875] for 1024x1024.<br> flip=True,<br> transforms=[<br> dict(type='Resize', keep_ratio=True),<br> dict(type='RandomFlip'),<br> dict(type='Normalize', **img_norm_cfg),<br> dict(type='ImageToTensor', keys=['img']),<br> dict(type='Collect', keys=['img']),<br> ])<br> ]</p>
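<p>Putting the inference pieces above together (ensembling the 'cell class' sigmoid of several models and applying the per-organ thresholds), a bare-bones sketch outside of MMSegmentation could look like this; model_sigmoids is an assumed list of (H, W) probability maps, one per model.</p> <pre><code>import numpy as np

THRESHOLDS = {"kidney": 0.3, "largeintestine": 0.2, "lung": 0.6, "prostate": 0.3, "spleen": 0.5}

def ensemble_and_binarize(model_sigmoids, organ):
    """Average the per-model 'cell class' sigmoid maps and apply the organ-specific threshold."""
    prob = np.mean(np.stack(model_sigmoids), axis=0)     # (H, W) ensemble probability
    return (prob &gt; THRESHOLDS[organ]).astype(np.uint8)
</code></pre>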
HuBMAP + HPA - Hacking the Human Body
25th Simple Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the organizers for hosting the great competition.<br> Also, I would like to express my gratitude to our teammate <a href="https://www.kaggle.com/hwigeon" target="_blank">@hwigeon</a> and new teammate <a href="https://www.kaggle.com/methyl" target="_blank">@methyl</a> for their dedication to the competition.<br> I also want to express my greatest gratitude to the frog <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. Without his comments through the open discussions, I would have given up this competition. </p> <h1>Augmentations</h1> <p>I couldn't cross even 0.6 without insane augmentations. <br> After I applied very strong augmentations with a lot of epochs, I could stably score over 0.75. </p> <p>I applied HSV, BRIGHTNESS, RANDOM CROP AND RESIZE, STAIN NORMALIZATION, and AGGRESSIVE SIZE CHANGE OF PROSTATE, with 500 epochs of patch training + 400 epochs of whole-region finetuning.<br> After these augmentations, we could score over 0.75 with most of our models.</p> <h1>Models</h1> <p>Kaggler : ( Patch + Finetune ) Efficientnet7+Deeplabv3plus, Efficientnet6+Deeplabv3plus, Efficientnetv2l+Deeplabv3Plus, Coat-small + Coat-small-pl [768,1536, 1024,1536,1536]<br> Hwigeon : (Only Patch) swin transformer_Unet + PvT + Coat<br> Methyl : (Only whole image) PVT + Coat [1024,1024,1024,1024]</p> <h1>Validation</h1> <p>We failed at validation. When we teamed up, although I knew validation was very important, with time constrained and the performance of some models not fully reproduced, we couldn't validate our CV properly. We just depended on the Hubmap LB. </p> <h1>Threshold</h1> <p>organ_threshold = {<br> 'Hubmap': {<br> 'kidney' : 0.40,<br> 'prostate' : 0.40,<br> 'largeintestine': 0.40,<br> 'spleen' : 0.40,<br> 'lung' : 0.10,<br> }}</p> <p>We failed to get the gold medal, but there is no regret since I tried my best. <br> Thank you for reading this!</p> <p><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> which one is better, increasing thresholds or lowering thresholds? Isn't it the same situation when more and more models are being ensembled?</p>
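<p>As a rough illustration of the augmentation recipe above (HSV, brightness, random crop-and-resize, an optional stain transform, and more aggressive rescaling for prostate), a hypothetical albumentations setup could look like this; all parameters are placeholders, not the team's settings.</p> <pre><code>import albumentations as A

def build_transform(organ, stain_augment=None):
    """Per-organ augmentation: common color/geometry augs for everyone, aggressive
    size change for prostate, and an extra stain-style transform when provided."""
    scale = (0.3, 1.0) if organ == "prostate" else (0.7, 1.0)   # aggressive size change for prostate
    augs = [
        A.HueSaturationValue(p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.RandomResizedCrop(height=768, width=768, scale=scale, p=1.0),
    ]
    if stain_augment is not None:          # e.g. a stain-normalization transform
        augs.append(stain_augment)
    return A.Compose(augs)
</code></pre>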
HuBMAP + HPA - Hacking the Human Body
26th place solution | Recursive Gated Convolutions + Lung Tiling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>Dedicated to the Armed Forces of Ukraine!</em><br> <em>Glory to the Armed Forces of Ukraine for the opportunity to provide science and education for the benefit of Ukraine!</em></p> <hr> <p>Hello everyone from the complex team! <br> First of all, I would like to give my huge thank you to the Kaggle team for hosting such an amazing competition. <br> Every time, I feel honored to be a part of such outstanding research.<br> I want to give a big thank you from our team to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. We have learned so much from your notebooks :)<br> It is always a pleasure to have this great of a competitor from whom you can learn!</p> <p>Our submission is a result of a few stages. We don't mainly do any pre-processing of the data. Although, at the very end of the competition, we found a lack in annotations that could have caused a slight weakening of masks and added noise to the training [read about it at the end of the overview]</p> <h1>Overview</h1> <p>After many experiments with training all organs with one model, we gave up this idea. The reason was we didn't see any improvement with it after 0.77. <br> Thus, we decided that the model needed contextual information about the organ type. The first experiments were set up with additional aux classification head for classifying each organ type. However, this became a false path after all; the models couldn't converge, and scores were not getting better.</p> <p>Then, we decided to train separate models for the lung and the rest of the organs together. We adopted this strategy and proceeded with training the models in different conditions:</p> <ul> <li>we use full masks resized to 768 for ['kidney', 'largeintestine', 'lungs', 'prostate', 'spleen'], + some models were trained with lung excluded from the train set</li> <li>for lungs, we chose only type1 masked lungs [Lung Segmentation Section] and pre-processed the mask batch by resizing it first to 1024 and then tiling with kernel size 512 and stride 2 </li> </ul> <p>We ensemble 6 models with different weights applied (5 folds per model) based on the best local cv scores. Plus, only one model for the lungs.</p> <h1>Segmentation</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2F8a4a0c4ea0e4edbedc48368eb8c66c0a%2Fimg_10.png?generation=1663923823988013&amp;alt=media" alt=""></p> <p>Before training, we resized tissue slices to 768 and trained using strong augmentation. <br> We find that intense color shifts and dropout noise produce good results. 
</p> <pre><code>_transforms = A.Compose([ A.VerticalFlip(p=0.5), A.HorizontalFlip(p=0.5), A.RandomRotate90(p=0.5), A.RandomBrightnessContrast(brightness_limit=(-0.02, 0.02), contrast_limit=(-0.02, 0.02), p=0.25), A.RGBShift(r_shift_limit=(-25, 25), g_shift_limit=(-20, 20), b_shift_limit=(-20, 20), p=0.5), A.HueSaturationValue(hue_shift_limit=(-15, 15), sat_shift_limit=(-15, 15), val_shift_limit=(-15, 15), p=0.5), A.CoarseDropout(max_holes=20, min_holes=7, p=0.5), A.ShiftScaleRotate(shift_limit=0, scale_limit=(-0.2, 0.2), rotate_limit=0, p=0.5), A.OneOf([ A.GridDistortion(num_steps=5, distort_limit=0.05, p=1.0), A.ElasticTransform(alpha=1, sigma=50, alpha_affine=50, p=1.0) ], p=0.5), ]) </code></pre> <hr> <p><strong>The resulting pipeline is a model zoo :)</strong></p> <p>We use pretrained ConvNext and HorNet backbone architectures. For the decoder block, we only use UPerNet.</p> <ul> <li><em>Inside: After the competition, we find that HorNet with recursive gated convolutions for high-order spatial interactions and large receptive field (the 7x7 conv and global filter) gave significantly stable results for all organs!</em></li> </ul> <h3>ConvNext vs HorNet</h3> <p>Best HorNet Single Model Submissions<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2F5e2fb6e9d794947a77280a957f79a427%2Fimg_1.png?generation=1663921043762859&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2Fe32e4bd32acecc9868ebf968843561cf%2Fimg_2.png?generation=1663921060337152&amp;alt=media" alt=""></p> <p>Best ConvNext Single Model Submissions<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2F2f57c0eda913e6ce17a655fb6a06161f%2Fimg_4.png?generation=1663921074450105&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2Fe795dcd4552bf563af59928437e28706%2Fimg_3.png?generation=1663921086356023&amp;alt=media" alt=""></p> <h1>Lung Segmentation | Tiles</h1> <p>We used the tiling approach for lung training, which we found to be very robust. The images were first resized to 1024 size. After, the image batch got tiled the convolution way with 0.5 overlaps between neighboring tiles. <br> We also try randomly sampling 6 out of 9 tiles for data variability per mini-batch. Thus, we result in a bigger batch size with smaller amount of tiles per tissue slice. However, the LB standing didn't change. Indeed, it stayed the same.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2F889525186f8cb99f1b7cfe2b4f6ab13a%2Fimg_7.png?generation=1663921988920443&amp;alt=media" alt=""></p> <p>For the inference, we rescale the images normalized to tissue slice thickness of size 1024 and use full images for predictions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2F006a4459b0c93626ddaa4078da8550f5%2Fimg_8.png?generation=1663921968542680&amp;alt=media" alt=""></p> <h3>type1 masked lungs</h3> <p>After reviewing all the lung masks and corresponding images, we concluded that several annotators might have done annotations. </p> <p>Thus, we decide to cluster the lung set into two groups with similar annotations styles. 
We find the following:</p> <p><strong><em>type1_anns</em></strong> = [10488, 11064, 11629, 1220, 12827, 15067, 15329, 1731, 1878,<br> 20563, 24782, 26480, 27232, 2793, 28052, 28189, 28429, 30084,<br> 31800, 32151, 4301, 4412, 4776, 5086, 5552, 8231, 8343, 9387,<br> 9450, 12452, 14388, 25516, 30394, 5777, 686]</p> <p><strong><em>type2_anns</em></strong> = [12476, 15124, 23252, 25945, 30500, 31571,<br> 7359, 8151, 127, 13189, 16564, 29610, 31139]</p> <p><strong><em>type3_anomaly_anns</em></strong> = [30084, 31800]</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2Fda5003ee2d2c568ead11f7da1fbbe0f0%2Fimg.png?generation=1663920807141538&amp;alt=media" alt=""></p> <h1>Post-Processing | Refine Mask</h1> <p>Here we tried fixing the morphological structure for kidney masks. Instances of kidney masks are pretty separate from each other. Thus, we expected not to have many problems with watershed segmentation. If the instances result in touching points at the inference stage, we apply binary opening with disk size 5 and opening with disk size 10 operations to remove the extra connected parts if present. </p> <h1>Adaptive Thresholding</h1> <p>This is one of the essential parts of the prediction side. We notice that hand-picking thresholds might result in some inconsistencies for some predictions.<br> Thus, we present the adaptive way for thresholding a probability map. We find that models indeed can produce an accuracy measure: </p> <ul> <li>We calculate the mean of the models' prediction mask and set it as our threshold. We additionally divide it by 1.75 constant to lower the threshold step. We find that this constant is as good for kidney predictions as it is suitable for lungs.</li> <li>Additionally, assuming every mask can't be empty, we utilize a linear descent in the threshold by the factor of 0.2 if any segmentation is found to be empty.</li> </ul> <h1>LB Scores</h1> <p>Private/Public: 0.81/0.81 (many models ensemble + thresholding strategy introduce good metric stability :))</p> <h3>Ideas we didn't try:</h3> <ul> <li>Pseudo-labeling/SSL (unfortunately, we didn't have time to try this approach; it seems to be a game changer for many teams)</li> <li>Cleaning masks before training (hole filling)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6468176%2Ff57edc4ec940697aed0303a5dd7a1570%2Fimg_6.png?generation=1663921344885286&amp;alt=media" alt=""></p> <h1><strong>Glory to Ukraine! Glory to Heroes! 🇺🇦</strong></h1>
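<p>As an appendix to the Adaptive Thresholding section above, here is a minimal sketch (names are illustrative; the "descent by the factor of 0.2" is interpreted as a multiplicative decrease): the threshold starts at the mean of the ensemble probability map divided by 1.75 and is lowered while the predicted mask stays empty.</p> <pre><code>import numpy as np

def adaptive_threshold(prob, shrink=1.75, step=0.2, min_thr=0.01):
    """Adaptive thresholding of an ensemble probability map, assuming no mask should be empty."""
    thr = prob.mean() / shrink
    mask = prob &gt; thr
    while not mask.any() and thr &gt; min_thr:
        thr -= step * thr            # lower the threshold by a factor of 0.2 and retry
        mask = prob &gt; thr
    return mask.astype(np.uint8), thr
</code></pre>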
HuBMAP + HPA - Hacking the Human Body
27th Simple Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we thank the organizers for hosting this competition. <br> We also thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for his discussions and notebook.</p> <p><strong>Models</strong><br> Coat_small+daformer,size=768,folds=5,swa<br> Coat_small+daformer,size=768,folds=5,External data , swa <br> Coat_lite_medium+daformer,size=768,folds=5,swa<br> Coat_lite_medium+daformer,size=768,all data,swa<br> Coat_small(level5)+daformer_conv1*1,size=1152, folds=5, swa<br> pvt_v2_b4+daformer_conv3x3,size=768,folds=5,swa<br> pvt_v2_b4+daformer_conv3x3,size=768,all data,swa<br> pvt_v2_b4+smp_unet,size=768,folds=5,swa<br> pvt_v2_b4+smp_unet,size=768,all data,swa</p> <p><strong>Augmentations</strong><br> Random flip, rotate, noise, HueSaturationValue, affine..</p> <p><strong>Data Processing</strong><br> We used three ways of preparing the training data:<br> The original data is split into training and validation data for training. <br> The full data is used for training. <br> The external data is labeled and then added to the original data for training.</p> <p><strong>Inference</strong><br> Ensemble<br> Coat_small+daformer:fold0 fold3<br> Coat_lite_medium+daformer: fold2 fold3 fold4 all_data<br> Coat_small(level5)+daformer_conv1*1 :fold0_1152<br> pvt_v2_b4+daformer_conv3x3:fold1,fold2,fold3,fold4,all_data,<br> fold3_384,fold2_384,fold3_1024<br> pvt_v2_b4+smp_unet:fold0,fold2,all_data</p> <p>TTA<br> Flip, Rotate</p> <p>Organ Threshold<br> 'Hubmap': {<br> 'kidney' : 0.35,<br> 'prostate' : 0.25,<br> 'largeintestine': 0.3,<br> 'spleen' : 0.3,<br> 'lung' : 0.1,<br> },</p> <p><strong>Work</strong><br> SWA, TTA, Adjusting the threshold, model fusion, Multi-input size fusion, Augmentation (Random flip, rotate, noise, HueSaturationValue, affine)<br> <strong>Not Work</strong><br> Augmentation - rgbtogbr and cutout,<br> External data and Stain normalization.</p>
HuBMAP + HPA - Hacking the Human Body
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! Your stain normalization for test on inference is also interesting!</p> <p>Big thanks to the organizers for the great competition!<br> Also thanks to all competitors for sharing their experiments here, especially <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a></p> <h1>Models</h1> <p>In this competition the heavy encoders and larger resolution worked better. I've used 3 CNN encoders (efficientnet_b7, convnext_large, tf_efficientnetv2_l) and 1 transformer (coat_lite_medium). Coat performed the best as single model, but ensemble with CNNs scored more. Also tried few versions of swin v1 and v2, but it performed worse.</p> <p>All models trained on 3 input resolutions: 768 * 768, 1024 * 1024, 1472 * 1472 with 5 folds.</p> <p>Models also trained to predict organ and pixel_size. I think these aux outputs help to train more robust model. pixel_size calculated for resized input resolution and changed during training augmentations.</p> <h1>Augmentations</h1> <p>random cropping/padding<br> scaling<br> rotating<br> flipping<br> color changing<br> blur/noise<br> saturation/brightness/contrast<br> elastic</p> <h1>External data</h1> <p>External data helped a lot here. I've download some HPA data with this notebook <a href="https://www.kaggle.com/code/carnozhao/hpa-data-download" target="_blank">https://www.kaggle.com/code/carnozhao/hpa-data-download</a> and picked some images manually from sources posted here and previous Hubmap and Panda competitions. </p> <h1>Pseudo labeled</h1> <p>All external data pseudo-labeled using ensemble of initial models. <br> Also training data was pseudo-labeled and in 30% used as ground truth for training.</p> <h1>Color transfering</h1> <p>Training images recolored using this great notebook <a href="https://www.kaggle.com/code/gray98/stain-normalization-color-transfer" target="_blank">https://www.kaggle.com/code/gray98/stain-normalization-color-transfer</a> with 3 different target images and used with 15% chance instead of original during training.</p> <h1>Validation</h1> <p>Test-time augmentations used on validation (flip, crop, padding) + external data also separated on folds and used as validation to get best checkpoints. </p> <p>Inference notebook: <a href="https://www.kaggle.com/code/victorsd/2nd-place-inference/notebook?scriptVersionId=106240458" target="_blank">https://www.kaggle.com/code/victorsd/2nd-place-inference/notebook?scriptVersionId=106240458</a></p> <p>2*a6000 (48gb). Really good GPU for larger models and input size.</p> <p>I'll check this with the organizers when will prepare my solution. It is a mess of files/experiments now, not notebooks.</p> <p>dice+focal for segmentation, bce for organ classification, mse for pixel_size</p> <p>lr = 1e-4, ReduceLROnPlateau</p>
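<p>A compact PyTorch sketch of the multi-task objective described above (dice + focal for the mask, BCE for the organ head, MSE for the pixel_size head); the focal implementation and the auxiliary weights are illustrative, not the author's exact code.</p> <pre><code>import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, target, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)                                  # probability of the true class
    return ((1 - pt) ** gamma * bce).mean()

def multitask_loss(seg_logits, seg_target, organ_logits, organ_onehot, px_pred, px_target):
    """Segmentation loss plus the two auxiliary heads: organ classification and pixel_size regression."""
    seg = dice_loss(seg_logits, seg_target) + focal_loss(seg_logits, seg_target)
    organ = F.binary_cross_entropy_with_logits(organ_logits, organ_onehot)
    px = F.mse_loss(px_pred, px_target)
    return seg + 0.1 * organ + 0.1 * px                   # aux weights are placeholders
</code></pre>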
HuBMAP + HPA - Hacking the Human Body
3d place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>First, we would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not stop and move forward.</em></p> <p>Also thanks to my teammates (@sakvaua , <a href="https://www.kaggle.com/igorkrashenyi" target="_blank">@igorkrashenyi</a>, <a href="https://www.kaggle.com/alexkirnas" target="_blank">@alexkirnas</a>), the Kaggle team and the competition organizers for this year's HuBMAP + HPA event. It was a great pleasure to compete in order to bring benefit to the biomedical sphere!<br> According to the hosts, the main challenge was:</p> <pre><code>Adapting models to function properly when presented with data that was prepared using a different protocol will be one of the core challenges of this competition. While this is expected to make the problem more difficult, developing models that generalize is a key goal of this endeavor. </code></pre> <p><br> And it was really the key point to Hacking the Human Body this year :) So here is our approach.</p> <h1>Data</h1> <ul> <li>Drop some (to our point of view) completely mislabeled lungs (IDs below). Later the organizers clarified that those ids are not mislabeled but rather represent a different way to section alveoli. Anyway there were too few of them to be useful and they confused our models without providing any meaningful new data. So we decided to focus on the horizontally sectioned alveoli that look like bubbles.</li> </ul> <pre><code>, , , , , , , , ,,, ,, , </code></pre> <p>Later we pseudo-labeled them and added them back to training.</p> <ul> <li>One of the key points was adapting to wildly varying pixel sizes. The image scales ranged from 6.3um for prostate down to 0.2um/pixel for large intestine. We tackled this issue by rescaling our train dataset to the target HuBMAP resolution. Though to increase the model’s receptive field we applied additional downscalers for larger images and upscalers for the prostate. <br> We used one dataset rescaled to HuBMAP scales and another dataset with the original HPA scales. The latter one was not only important for HPA predictions (absent in the private LB) but also to provide some additional scaling information to the model. <br> Here is our additional rescaling dict (original scale / additional scale). </li> </ul> <pre><code>: * , : * , : * , : * , : * , </code></pre> <ul> <li>One important aug was CutMix. We have used pretty aggressive CutMix ("prob": 0.5, "alpha": 1.0), though the trick here was to apply CutMix only within a single organ class (a sketch follows at the end of this write-up).</li> <li>We have trained CNN-based models on 512 crops and segformer on 1024 crops. For segformer the bigger training crop was pretty important, because results on 512 crops were much worse, while increasing the training crop size for CNNs increased local validation but decreased the HubMap LB score. Also we have sampled non-empty masks with probability 0.5.</li> <li>Also we have used pretty heavy augs - Geometric, Color, Distortions, Scales. The overall pipeline is really long :).
It might be the key point for model performance and stability. The main idea behind the color augmentation was to suggest to the model that color is not important and that it had to look for other clues</li> <li>To deal with the color shift between the DAB and H&amp;E stains we used Histogram Matching of training pictures to H&amp;E stained GTEX and HubMap images. We used Histogram Matching for all the images.</li> </ul> <h2>Pseudo/Additional Data</h2> <p>We used additional data from the GTEX and HPA portals to complement the initial training data. The GTEX data was especially important here because it was stained, similar to HuBMAP slides, with H&amp;E. From GTEX we downloaded prostate, large intestine, kidneys and spleen data for patients with no apparent pathologies. We ignored lungs from GTEX as we couldn’t figure out how to segment those (and neither did our model). We were progressively adding GTEX images to our pipeline, ending up with around 140 at the end of the competition, and pseudo-labeled them with the same ensemble as the HPA data.<br> From the HPA site we used a plethora of DAB-stained slides very similar to those provided by the organizers. Overall, we added between 57K and 61K additional HPA images for each organ. <br> We pseudo-labeled all the external data with our ensembles, which scored 0.59 on the HubMap LB and 0.81 on the HPA+HubMap LB.<br> For selecting particular pseudo images for labeling we were inspired by <a href="https://arxiv.org/pdf/1904.04445.pdf" target="_blank">Semi-Supervised Segmentation of Salt Bodies in Seismic Images using an Ensemble of Convolutional Neural Networks</a>. We did not select the best labels but just randomly sampled a dataset of the same size as the training one from the HPA pseudo data, plus the whole GTEX pseudo dataset, for training.<br> We repeated pseudo labeling twice. </p> <h2>Composed training dataset</h2> <p>Our whole training dataset was composed of 3 subsets:</p> <ol> <li>HPA train fold + HPA pseudo with Pixel Size rescale and Hist Matching - repeated X8</li> <li>HPA train fold + HPA pseudo - repeated X4</li> <li>GTEX pseudo with Pixel Size rescale - repeated X16</li> </ol> <h1>Models</h1> <p>We have used <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">SMP</a>. The final ensemble:</p> <ul> <li>Unet ++. Encoder Effnet B7 (noisy student pretrain). First HPA pseudo iteration. X5 - folds</li> <li>Unet. Encoder Mit B5 (imagenet pretrain). Second HPA pseudo iteration. X5 - folds</li> <li>Unet. Encoder Effnet B7 (noisy student pretrain) + PointRend only for training. First HPA pseudo iteration. X5 - folds </li> <li>Unet. Encoder Effnet B7 (noisy student pretrain) + PointRend only for training. Second HPA pseudo iteration. X5 - folds <br> Public Hubmap: 0.61453<br> Public Hubmap+HPA: 0.83983<br> Private: 0.83266<br> Local OOF: 0.8493729371691843<br> Local organ scores:<br> kidney: 0.954695<br> largeintestine: 0.913645<br> lung: 0.488490<br> prostate: 0.833822<br> spleen: 0.834292<br> As for <a href="https://arxiv.org/abs/1912.08193" target="_blank">PointRend</a>, we used it only as an additional loss for regularization while training </li> </ul> <p>An interesting point is that CNNs showed better performance on the public HubMap LB (~0.61) compared to SegFormers (~0.60), while SegFormers outperformed CNNs on local validation (~0.85-0.86 vs ~0.83-0.84 for CNNs). BUT on private, SegFormers outperformed CNNs, and our best private submit (not selected) includes:</p> <ul> <li>Unet ++. Encoder Effnet B7 (noisy student pretrain). First HPA pseudo iteration.
X5 - folds</li> <li>Unet. Encoder Mit B5 (imagenet pretrain). Second HPA pseudo iteration. X5 - folds</li> <li>Unet. Encoder Mit B3 (imagenet pretrain). First HPA pseudo iteration. X5 - folds<br> Public Hubmap: 0.60931<br> Public Hubmap+HPA: ???<br> Private: 0.83419<br> Local OOF: 0.8542834156960896<br> Local organ scores:<br> kidney: 0.956776<br> largeintestine: 0.917554<br> lung: 0.488331<br> prostate: 0.838744<br> spleen: 0.848721<br> It might have granted us second place :) </li> </ul> <h1>Training</h1> <ul> <li>We used a one-channel output</li> <li>Adam 1e-3</li> <li>ReduceLROnPlateau : patience=3; factor=0.5; min_lr=1e-7, by valid dice</li> <li>losses: softbce + tversky + focal + jaccard + (for some models) PointRend loss * 2</li> <li>fp16</li> </ul> <h1>Validation</h1> <ul> <li>5-fold CV with stratification by organs </li> <li>Predict image by image (batch size 1) in full scale with rescale to the HubMap scale</li> </ul> <h1>Inference</h1> <ul> <li>Average the 3 best checkpoints for each fold - just average model weights</li> <li>Mean fold prediction</li> <li>4-flips TTA</li> <li>Remove small regions - compute the relative area on HPA data and take the &lt;0.5 quantile as threshold</li> <li>Predict the image in full scale. We only used a sliding window for Segformers (1024 and 0.75 overlap), just because of CUDA out-of-memory problems</li> </ul> <p><strong>Inference Kernel</strong> : <a href="https://www.kaggle.com/code/vladimirsydor/hubmap-2021-inference-v1/notebook?scriptVersionId=106283407" target="_blank">https://www.kaggle.com/code/vladimirsydor/hubmap-2021-inference-v1/notebook?scriptVersionId=106283407</a><br> <strong>GitHub</strong> : <a href="https://github.com/VSydorskyy/hubmap_2022_htt_solution" target="_blank">https://github.com/VSydorskyy/hubmap_2022_htt_solution</a> <br> <strong>Paper</strong> : <a href="https://arxiv.org/abs/2305.02148" target="_blank">https://arxiv.org/abs/2305.02148</a></p> <p>We just randomly select a part from another example.</p> <pre><code>We have trained CNN based models on 512 crops and segformer on 1024 crop. As for segformer bigger training crop it was pretty important, because results on 512 crop were much worse. While increasing training crop size for CNN - increased local validation but decreased HubMap LB score. Also we have sampled Non Empty Mask with probability 0.5 </code></pre>
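<p>The organ-restricted CutMix mentioned in the Data section could look roughly like this sketch (alpha=1.0 as above, the 0.5 probability handled outside; img_b/mask_b are assumed to come from another sample of the same organ).</p> <pre><code>import numpy as np

def same_organ_cutmix(img_a, mask_a, img_b, mask_b, alpha=1.0):
    """CutMix restricted to pairs from the same organ class: paste a random
    rectangle from sample b (image and mask) into sample a."""
    h, w = img_a.shape[:2]
    lam = np.random.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = np.random.randint(0, h - cut_h + 1)
    x = np.random.randint(0, w - cut_w + 1)
    out_img, out_mask = img_a.copy(), mask_a.copy()
    out_img[y:y + cut_h, x:x + cut_w] = img_b[y:y + cut_h, x:x + cut_w]
    out_mask[y:y + cut_h, x:x + cut_w] = mask_b[y:y + cut_h, x:x + cut_w]
    return out_img, out_mask
</code></pre>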
HuBMAP + HPA - Hacking the Human Body
48th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for hosting such a great competition. <br> I want to express my greatest gratitude to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. Without his comments through the discussions, I wouldn't have learned as much as I did in this competition!</p> <h1>Augmentations</h1> <ul> <li>Because I applied a different ensemble model to each organ, there were a few different augmentation setups.</li> </ul> <h2>Used augmentation:</h2> <ul> <li>5 simple augmentations: HSV, RANDOM FLIPS, Rotate 90 DEGREES, RANDOM NOISES, RANDOM CONTRAST</li> <li>Stain augmentation for specific organs (spleen, lung); I also tried this augmentation for prostate, but it never got a better score.</li> </ul> <h2>Different augmentation for different organs</h2> <ul> <li>5 simple augmentations for 3 organs (prostate, largeintestine, kidney)</li> <li>5 simple augmentations and Stain augmentation for 2 organs (spleen, lung) </li> </ul> <h1>Models (different ensemble models for each organ)</h1> <ul> <li>Training image size: 768 * 768</li> </ul> <h3>kidney, largeintestine</h3> <ul> <li>pvtv2-b4 backbone with conv3x3 decoder, 3 of 5 folds (fold1, fold2, fold3), with weights 0.05, 0.3, 0.65 (a weighted-averaging sketch follows at the end of this write-up).</li> </ul> <h3>prostate (using ensemble models didn't give a better public score)</h3> <ul> <li>pvtv2-b4 backbone with conv3x3 decoder, 1 of 5 folds (fold3)</li> </ul> <h3>spleen</h3> <ul> <li>pvtv2-b4 backbone with conv3x3 decoder, 3 of 5 folds (fold1, fold2, fold3) + 1 of 4 folds for spleen only (fold3), and just averaged all model predictions</li> </ul> <h3>lung</h3> <ul> <li>pvtv2-b4 backbone with conv3x3 decoder, 1 of 5 folds (fold2, fold3) + </li> <li>pvtv2-b4 backbone with DAformer decoder, 2 of 5 folds, lung only (fold2, fold3) +</li> <li>pvtv2-b4 backbone with conv3x3 decoder, 1 of 5 folds, lung only, all training images stain-normalized (fold3)</li> <li>weights: 0.28 for the pvtv2-b4 backbone with conv3x3 decoder of fold3, 0.18 for the others</li> </ul> <h1>Validation</h1> <ul> <li>Used 25% of the training data for validation when training on one organ only</li> <li>Used 20% of the training data for validation when training on all organs</li> </ul> <h1>Threshold</h1> <p>organ_threshold = {</p> <pre><code>'Hubmap': {
    'kidney' : 0.45,
    'prostate' : 0.40,
    'largeintestine': 0.30,
    'spleen' : 0.30,
    'lung' : 0.07,
},
'HPA': {
    'kidney' : 0.50,
    'prostate' : 0.50,
    'largeintestine': 0.50,
    'spleen' : 0.50,
    'lung' : 0.10,
},
</code></pre> <p>}</p> <h1>Detailed notebook link</h1> <p><a href="https://www.kaggle.com/code/chris666/48th-place-inference" target="_blank">https://www.kaggle.com/code/chris666/48th-place-inference</a></p>
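<p>A tiny sketch of the weighted fold averaging used above (e.g. the 0.05 / 0.30 / 0.65 fold weights for kidney and large intestine); names are illustrative.</p> <pre><code>import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Weighted average of per-fold probability maps before thresholding."""
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()
    return np.tensordot(w, np.stack(prob_maps), axes=1)   # (H, W) weighted mean
</code></pre>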
HuBMAP + HPA - Hacking the Human Body
4th place solution: Stain Normalization is all you need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, I just use the single HuBMAP test image for normalization.<br> For resolution, I just resize these images to a fixed size, for example 1024x1024.</p> <p>Thank you!</p> <p>First, we thank the organizers for their effort in this amazing competition. We also thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>; we learnt a lot from your code and notebooks.</p> <p>This is my second competition on Kaggle and my first competition on image segmentation. I learnt a lot from it, thanks to all of you.</p> <p>In this competition we mainly relied on a two-stream model and some tricks for model ensembling and stain normalization at inference, without using any pseudo labels or external data; maybe that's why we missed a prize hahaha :)</p> <h2>Models</h2> <p>At first, we followed the code shared by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, which is an amazing baseline. Besides that, in our experience we found that the knowledge learned from other organs may influence the performance on lung images, so we proposed a two-stream model: one stream for lung images and the other for the remaining organs. We then tried some other encoders and decoders, and the final ensemble models are as follows:</p> <ul> <li>Encoder: Coat-lite-medium; Decoder: daformer+unet</li> <li>Encoder: mit (Segformer); Decoder: daformer+unet</li> <li>Encoder: mpvit; Decoder: daformer+unet</li> </ul> <h2>Stain Normalization</h2> <p>This is the most important part of our training and inference pipeline, as we think the biggest challenge in this competition is how to bridge the gap between HPA and HuBMAP images. The stain-normalization methods we used come from StainTools.</p> <h3>Train</h3> <p>First, in the training step, we normalize the HPA images to the single HuBMAP test image, randomly using Reinhard and Vahadane normalization, so that the model can learn features in both the HPA feature space and the HuBMAP feature space.</p> <h3>Inference</h3> <p>However, using stain normalization only during training is not enough, since stain normalization still can't transfer the feature space perfectly, so we also need to apply stain normalization at inference to better bridge the gap between the two datasets.</p> <p>In the inference step, we select a representative image from the training images for each of the five organs. For all HuBMAP images, we normalize them towards the HPA images according to their organ labels, using the Vahadane and Reinhard methods, and then average the predictions to get the final predicted mask.</p> <p>The final code will be released in a few days, thanks for reading!</p> <p>I just use staintools: <a href="https://github.com/Peter554/StainTools" target="_blank">https://github.com/Peter554/StainTools</a></p> <p>Sorry for the clerical error, I wrote it too hastily and didn't notice it, thanks for pointing it out.</p> <p>My teammates are all my classmates who wanted to take part in a deep learning competition and learn from it. I could not have achieved this without them, and you can see that our number of submissions was not that high :)</p>
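<p>The Reinhard and Vahadane transforms described above come straight from the StainTools package linked in the write-up. A minimal sketch of fitting both normalizers to a reference tile and transforming another image; the file names are placeholders and both images are assumed to be RGB uint8 arrays:</p> <pre><code>import staintools

# Reference tile whose stain style we want to copy (e.g. the single HuBMAP test image)
target_img = staintools.read_image("reference_tile.png")
src_img = staintools.read_image("image_to_normalize.png")

# Vahadane stain normalization
vahadane = staintools.StainNormalizer(method="vahadane")
vahadane.fit(target_img)
img_vahadane = vahadane.transform(src_img)

# Reinhard color normalization
reinhard = staintools.ReinhardColorNormalizer()
reinhard.fit(target_img)
img_reinhard = reinhard.transform(src_img)
</code></pre>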
HuBMAP + HPA - Hacking the Human Body
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>heuristic and trial and error.<br> The keys of this competition were color shift and pixel size, so I focused on tuning the hyper-parameters related to them.</p> <p>Thanks to the competition host, Kaggle, and all participants. <br> And congrats to all the winners!</p> <h1>Overview</h1> <p>The key points of my solution are:</p> <ul> <li>Strong data augmentation</li> <li>Using whole and tile images simultaneously</li> </ul> <h1>Model &amp; Training</h1> <p>I used 9 CNN models.</p> <ul> <li>efficientnet b5 x 7</li> <li>convnext base x 2<ul> <li>1-class segmentation : 7 models</li> <li>Multi class segmentation : 2 models</li></ul></li> </ul> <h5>Best single model training settings</h5> <ul> <li>Encoder <ul> <li>efficientnet b5 advprop pre-trained</li></ul></li> <li>Decoder<ul> <li>UNet-based model that I modified</li></ul></li> <li>Data augmentation<ul> <li>Resize<ul> <li>e.g.) prostate : <code>cv2.resize(img, dsize=None, fx= 0.4/6.263, fy= 0.4/6.263, interpolation=cv2.INTER_LINEAR)</code></li></ul></li> <li>RandomCrop (p=0.5) <ul> <li>If cropped : tile image</li> <li>Else : whole image</li></ul></li> <li>ShiftScaleRotate</li> <li>GaussNoise</li> <li>GaussianBlur or MotionBlur</li> <li>HorizontalFlip </li> <li>ColorJitter</li> <li>RGBShift</li></ul></li> <li>Other detailed training settings<ul> <li>Input image size : 800 x 800</li> <li>65 epochs</li> <li>Loss: BCE loss + Tversky loss</li> <li>Optimizer: RAdam</li> <li>Using external data from GTEx Portal and HuBMAP - Hacking the Kidney<ul> <li>GTEx Portal : spleen, prostate, largeintestine<ul> <li>I only used 1 slide per organ because I couldn't find out that they helped my scores increased.</li></ul></li></ul></li> <li>All images were used for training </li> <li>1-class segmentation</li></ul></li> </ul> <h5>Best single model scores</h5> <ul> <li>Public HuBMAP : 0.596</li> <li>Private : 0.810</li> </ul> <h1>Inference</h1> <h5>TTA</h5> <ul> <li>Whole image<ul> <li>Rotate (0, 90, 180, 270) </li> <li>HorizontalFlip + rotate (0, 90, 180, 270) </li></ul></li> <li>Tile Image<ul> <li>Crop size : 1900 x 1900<ul> <li>If image size &lt; 1900 : no crop</li> <li>If 1900 &lt;= image size &lt; 3000 : 4 crops (2 x 2 tiles)</li> <li>If 3000 &lt;= image size : 9 crops (3 x 3 tiles)</li></ul></li></ul></li> </ul> <h5>Ensemble</h5> <ul> <li>Each model prediction : ((mean whole images) &gt; threshold_1) + ((mean tile images) &gt; threshold_1)</li> <li>Final prediction : Sum of each model prediction &gt; threshold_2<ul> <li>lung threshold_2 : 0</li> <li>kidney threshold_2 : 4</li> <li>the others : 2</li></ul></li> </ul> <h3>Tips</h3> <p>My scores increased a little using this trick.</p> <pre><code>img = tifffile.imread("test_image.tiff").astype(np.float32) img = np.clip(img+15, 0, 255).astype(np.uint8) </code></pre> <hr> <p>P.S.<br> I only used kaggle kernel for training :) </p> <p>Thank you!<br> These models are slightly different seed, hyper parameter, and the way of pre-training.</p> <p>Thank you!<br> And I completely agree with you.</p>
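<p>The Ensemble rule in this write-up (per-model votes from the whole-image and tile predictions, then an organ-specific vote threshold) can be written as a short voting function. A minimal numpy sketch; <code>threshold_1=0.5</code> is a placeholder value, and <code>whole_preds</code>/<code>tile_preds</code> are assumed to be the per-model probability maps already averaged over TTA:</p> <pre><code>import numpy as np

def ensemble_mask(whole_preds, tile_preds, threshold_1=0.5, threshold_2=2):
    """Each model casts up to 2 votes per pixel (whole-image + tiled prediction);
    the final mask keeps pixels whose total vote count exceeds threshold_2."""
    votes = np.zeros_like(whole_preds[0], dtype=np.int32)
    for whole, tile in zip(whole_preds, tile_preds):
        votes += (whole &gt; threshold_1).astype(np.int32)
        votes += (tile &gt; threshold_1).astype(np.int32)
    return (votes &gt; threshold_2).astype(np.uint8)

# Dummy example with 9 models on an 800x800 image (threshold_2=2 as for most organs)
whole = [np.random.rand(800, 800) for _ in range(9)]
tile = [np.random.rand(800, 800) for _ in range(9)]
mask = ensemble_mask(whole, tile)
</code></pre>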
HuBMAP + HPA - Hacking the Human Body
[171st solution] How far could I go with the UNeXt50 baseline?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP + HPA - Hacking the Human Body <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers and Kaggle team for hosting this competition, and congratulate to all the winners! I also want to thank <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a>, <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>, <a href="https://www.kaggle.com/jamesphoward" target="_blank">@jamesphoward</a>, <a href="https://www.kaggle.com/gunesevitan" target="_blank">@gunesevitan</a>, <a href="https://www.kaggle.com/kingjohnson" target="_blank">@kingjohnson</a> and many other Kagglers for their discussions and contributions. I've been inspired a lot by their insightful discussions and valuable contributions to this competition.</p> <hr> <p>I've seen many discussions related to training on tiles vs non-tiles and how it affects the public LB. So, I was curious and decided to do some experiments based on these data settings. I used the <a href="https://www.kaggle.com/code/thedevastator/training-fastai-baseline" target="_blank">UNeXt50 baseline</a> provided by <a href="https://www.kaggle.com/thedevastator" target="_blank">@thedevastator</a> for all my experiments. </p> <p>Here is my methodology in detail: <a href="https://www.kaggle.com/code/nghihuynh/hubmap-hpa-methodology-submission" target="_blank">HuBMAP_HPA methodology</a></p> <hr> <p><strong>Data</strong>: </p> <ul> <li>Prepare tile dataset using <a href="https://www.kaggle.com/code/nghihuynh/wsi-preprocessing-tiling-tissue-segmentation" target="_blank">Otsu and Triangle thresholding</a> to remove a large portion of background vs non-FTUs tissue.</li> <li>Prepare <a href="https://www.kaggle.com/code/nghihuynh/wsi-hubmap-dataset-creation-256x256" target="_blank">non-tile dataset</a> by cropping the tissue area and rescaling to 256x256, and 512x512</li> </ul> <p><strong>Data augmentation</strong>: heavy data augmentation pipeline including morphology and color changes</p> <p><strong>Model architecture</strong>: UNeXt50 baseline</p> <p><strong>Training setup</strong>: </p> <ul> <li>Cross-validation: Stratified K-Fold to select training images based on organs. (4 folds)</li> <li>Loss function: asymmetric Lovasz Hinge with the weight of 0.55 for positive prediction, and 0.45 for negative prediction.</li> <li>Number of epochs: at least 20 epochs, at most 80 epochs. 
Most models converged after 60 epochs.</li> </ul> <p><em>All trainings were done using GPU quota from Kaggle.</em></p> <p><strong>Results</strong>:</p> <ul> <li><p>Worst case prediction: Lung (Average DSC: 0.1925)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F5a67f6179ac3e0ae91b57c3682a268ec%2Flung_3_wsi_worst.png?generation=1663951851101765&amp;alt=media" alt=""></p></li> <li><p>Best case prediction: Prostate (Average DSC: 0.8017)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2Fe188e36765bf2c76404a52eac0e9ade0%2Fprostate_0_wsi_best.png?generation=1663951873035576&amp;alt=media" alt=""></p></li> </ul> <p><strong>Inference</strong>:</p> <ul> <li>Tiles: 0.64 (averaging 4 folds)</li> <li>Non-tiles: 0.74 (averaging 4 folds)-for experiments only. I used more ensemble models for submitting.</li> </ul> <p>My public and private LB : <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F686924ea25aae33c0840f7fc678e24ec%2FScreen%20Shot%202022-09-23%20at%2012.58.19%20PM.png?generation=1663952329198584&amp;alt=media" alt=""></p> <hr> <p>Other things I tried out but failed:</p> <ul> <li>Multi-class segmentation: I'm still not quite sure why it gave poorer results, even though the dice soft was much higher than in binary segmentation models. Any insights are really appreciated!</li> <li>Fine-tuning on synthetic dataset using Gaussian-Laplacian blending: overfitting might be an issue. </li> </ul> <hr> <p><strong>Conclusion</strong>:<br> With some tweaks, I boosted the UNeXt50 baseline from 0.51322 to 0.73040 in private LB. Training on non-tiles performed better in this case. </p> <p>Thanks for reading! </p>
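<p>The "Stratified K-Fold to select training images based on organs" split from the training setup above can be reproduced with scikit-learn. A small sketch, assuming <code>train.csv</code> is the competition metadata with one row per image and an <code>organ</code> column:</p> <pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv("train.csv")  # one row per training image, with an 'organ' column

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(skf.split(df, df["organ"])):
    df.loc[val_idx, "fold"] = fold

# Each fold keeps roughly the same organ proportions
print(df.groupby(["fold", "organ"]).size())
</code></pre>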
HuBMAP - Hacking the Human Vasculature
(Best private 0.602) 297th Place Solution for the HuBMAP - Hacking the Human Vasculature Competition. How I lost the gold medal.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was the first time I solved a segmentation task.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></li> </ul> <h1>Overview of the approach</h1> <p>My final model was a combination of 13 single models: 4 MaskDINO Swin-L models trained on different data, 4 YOLOv7-seg models, 1 MaskDINO ResNet-50 model, and 4 Cascade Mask R-CNN InternImage-L models. For validation, I split the data into 4 folds: 3 WSIs for training and the remaining one for testing. Inference was run at multiple sizes: MaskDINO Swin-L used 3 image sizes [512, 1024, 1440], YOLOv7 used 2 image sizes [512, 1024], MaskDINO ResNet-50 used 2 image sizes [1024, 1440], and Cascade Mask R-CNN InternImage used 1 image size [1440]. Weighted Mask Fusion was used for the ensemble. No external data or pseudo labels were used.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F0df50177802f0231fdb3a944e6808cd4%2FScreenshot%20from%202023-08-05%2019-33-28.png?generation=1691238892230812&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F576a5d3446d03ac0951a64c898d75ec4%2Fhubmap.drawio.png?generation=1691237415800097&amp;alt=media" alt=""></p> <h1>Details of the submission</h1> <h2>Cross validation experiments</h2> <p>MaskDINO Swin-L, image size 1440:</p> <ul> <li>Fold 0 (val on wsi1): bbox mAP=34.60</li> <li>Fold 1 (val on wsi2): bbox mAP=14.66</li> <li>Fold 2 (val on wsi3): bbox mAP=40.42</li> <li>Fold 3 (val on wsi4): bbox mAP=28.32<br> The results on fold 0 and fold 1 are very strange, which confused me. Should I train on dataset 1 only, or on all datasets? These results are the reason why I did not choose only dataset 1 for the final submission.</li> </ul> <h2>Experiments on dataset 1</h2> <ul> <li>MaskDINO Swin-L, image size 1408, trained only on all of dataset 1. One model without dilation: Public 0.51, Private 0.543.</li> <li>YOLOv7-seg, image size 1024, trained only on all of dataset 1. One model without dilation: Public 0.3, Private 0.446.</li> <li>InternImage-L, image size 1440, trained only on all of dataset 1. One model without dilation: Public 0.453, Private 0.502.</li> <li>MaskDINO ResNet-50, image size 1024, trained only on all of dataset 1. One model without dilation: Public 0.46, Private 0.464.</li> <li>InternImage-L, image sizes [(1024, 1024), (1440, 1440), (1536, 1536)], trained only on all of wsi2. One model without dilation: Public 0.439, Private 0.469.</li> </ul> <h2>Importance of image size</h2> <p>The dataset has a lot of small masks. I found that increasing the image size boosted both CV and the public leaderboard. 
However, CV did not increase when going from image size 1024 to 1440.</p> <h2>My best private score is 0.597</h2> <p>Public notebook: <a href="https://www.kaggle.com/code/quan0095/maskdino-yolov7-internimage-private-0-597" target="_blank">https://www.kaggle.com/code/quan0095/maskdino-yolov7-internimage-private-0-597</a></p> <h2>My final submission</h2> <p>Final submission 1 (Version 8), mask threshold &gt; 0.6, with dilation: Private 0.334, Public: 0.551<br> Final submission 2 (Version 1), mask threshold &gt; 0.03, without dilation: Private 0.394, Public: 0.539<br> Late submission: modified from <strong>Final submission 1 (Version 8)</strong> without dilation, which achieves 0.602 on the private leaderboard<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2832287%2F7aa3482c9ca8b4a2830348cf3468d942%2FScreenshot%20from%202023-08-01%2022-41-23.png?generation=1690904619445688&amp;alt=media" alt=""></p> <h2>Code optimization</h2> <ul> <li>To ensemble many models, I cropped each mask to its bbox instead of keeping the full 512x512 mask, to reduce CPU RAM usage, and used multi-threading to reduce the notebook run time.</li> <li>Modified the MaskDINO source code to add EMA.</li> <li>Modified WBF to work with masks.</li> <li>Fixed a bug to run MaskDINO in a Kaggle notebook.</li> <li>Fixed a bug to train InternImage with CUDA 12.0.</li> </ul> <h2>What didn't work</h2> <ul> <li>Self-supervised learning using DINOv1 with dataset 3 and the HuBMAP kidney dataset.</li> <li>Using SAM.</li> </ul> <h2>Conclusion</h2> <ul> <li>I missed the gold medal because of a mistake: I submitted two submissions, with and without dilation, but I also changed the mask threshold in the no-dilation one, so the mask sizes did not really change between the two submissions.</li> <li>IoU is very sensitive for small masks. Comparing with and without dilation, IoU changes by 40% for bboxes with area &lt; 1000.</li> <li>Using only dataset 1 is good for both public and private.</li> <li>When training with dataset 2, dilation is good for public but not for private.</li> </ul> <h1>Sources</h1> <h2>My two submission notebooks:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/dialate-version" target="_blank">https://www.kaggle.com/code/quan0095/dialate-version</a></li> <li><a href="https://www.kaggle.com/code/quan0095/not-dialate-version-7" target="_blank">https://www.kaggle.com/code/quan0095/not-dialate-version-7</a></li> </ul> <h2>Late submission notebook (Private 0.602):</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/dialate-version?scriptVersionId=138564303" target="_blank">https://www.kaggle.com/code/quan0095/dialate-version?scriptVersionId=138564303</a></li> </ul> <h2>MaskDINO inference:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/inference-of-maskdino-p100" target="_blank">https://www.kaggle.com/code/quan0095/inference-of-maskdino-p100</a></li> </ul> <h2>Cascade Mask R-CNN InternImage:</h2> <ul> <li><a href="https://www.kaggle.com/code/quan0095/internimage-mmdet-2-26-inference" target="_blank">https://www.kaggle.com/code/quan0095/internimage-mmdet-2-26-inference</a></li> </ul>
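<p>The dilation discussed throughout this write-up is ordinary morphological dilation of each binary instance mask before encoding. A minimal OpenCV sketch; the 3x3 kernel and single iteration are illustrative values, not necessarily the ones used here:</p> <pre><code>import cv2
import numpy as np

def dilate_mask(mask, kernel_size=3, iterations=1):
    """Grow a binary (0/1) mask outward by a few pixels."""
    kernel = np.ones((kernel_size, kernel_size), dtype=np.uint8)
    return cv2.dilate(mask.astype(np.uint8), kernel, iterations=iterations)

mask = np.zeros((512, 512), dtype=np.uint8)
mask[200:220, 200:220] = 1
dilated = dilate_mask(mask)
print(mask.sum(), dilated.sum())  # the dilated mask covers more pixels
</code></pre>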
HuBMAP - Hacking the Human Vasculature
10th place solutions!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The effect was quite minimal, about +0.001 …</p> <p>Thank you for your suggestion. I don't know that prize! I'll try!</p> <p>Thank you for host and competitors.<br> It is interesting competition for me.</p> <p>I'm happy to become a Kaggle master!!</p> <h2>Team Member</h2> <p><a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a>, <a href="https://www.kaggle.com/hatakee" target="_blank">@hatakee</a>, <a href="https://www.kaggle.com/kfuji" target="_blank">@kfuji</a><br> co-workers!!</p> <h2>Solution overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2Fec3da742a7a4a7c70bd8d56634204885%2Fsol.png?generation=1690850953870523&amp;alt=media" alt=""></p> <p>Thank you <a href="https://www.kaggle.com/fnands" target="_blank">@fnands</a> !!<br> base notebook : <a href="https://www.kaggle.com/code/fnands/a-quick-yolov7-baseline" target="_blank">https://www.kaggle.com/code/fnands/a-quick-yolov7-baseline</a></p> <p>Yolo's segmentation head size is 1/4 (512 =&gt; 128). So I changed the input resolution( 640 =&gt; 160).</p> <h3>Unique idea (maybe)</h3> <p>I did fine-tuning with ds1 in the last, because <strong>I didn't want to use dilate !!</strong></p> <h2>Inference code</h2> <p><a href="https://www.kaggle.com/sugupoko/10th-place-inference-yolov7ensemble-pseudo-res640" target="_blank">https://www.kaggle.com/sugupoko/10th-place-inference-yolov7ensemble-pseudo-res640</a></p> <h2>Reference previous competition</h2> <p>Our many experiments are from the previous two competitions</p> <ul> <li><a href="https://www.kaggle.com/competitions/global-wheat-detection" target="_blank">https://www.kaggle.com/competitions/global-wheat-detection</a></li> <li><a href="https://www.kaggle.com/c/sartorius-cell-instance-segmentation/overview" target="_blank">https://www.kaggle.com/c/sartorius-cell-instance-segmentation/overview</a></li> </ul> <h2>Things that didn't go well:</h2> <ul> <li>Applied pseudo-labeling twice.</li> <li>Yolov8 (training and inference).</li> <li>mmdetection. (difficult for us…)</li> <li>Unet prediction in the detected BBOX.</li> <li>NMS =&gt; WSF <ul> <li><a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a></li></ul></li> </ul> <h2>Things I couldn't implement well:</h2> <ul> <li>Training with data regenerated by combining tiles.</li> </ul> <h2>Things I couldn't do:</h2> <ul> <li>Augmentation with the stain tool.</li> <li>Learning using extra data.</li> </ul> <h2>Other</h2> <p>I trained higher resolution(640=&gt;800). That LB is the highest of our models.<br> LB: 0.473, PB0.563</p> <p>I did fine-tuning with ds1 because I didn't want to use dilate.</p>
HuBMAP - Hacking the Human Vasculature
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>there is two step</p> <ol> <li>After WSF, only masks with a confidence score of 0.5 or higher were used.</li> <li>And I filtered the mask of area smaller than 100.</li> </ol> <p>Only blood vessel, Thank you</p> <p>Hello, thank you for your kind words.</p> <p>There are a total of 4 steps.</p> <ol> <li><p>Create a pre-trained model using Dataset2 + Dataset3 data. (24 epoch was the best result for me here.)</p></li> <li><p>Train Dataset1 data from 24 epoch(from pre-train model). Here, it was verified using train (80%) and validation (20%). (This step is to find out which epoch is best.)</p></li> <li><p>I found that training up to 28 epochs in step 2 is the best mAP60.</p></li> <li><p>Finally, submit the model trained on 28 epochs of 100% of Dataset1 data.</p></li> </ol> <p>If there is something I don't understand, I will answer again. thank you</p> <p>I am so happy to have won my first gold medal and even a solo medal in this competition. Also thanks to the OpenMMLab, SenseTime and The Chinese University of Hong Kong for making a really great library mmdetection.</p> <p><strong>Summary</strong><br> I used the pseudo labels of Dataset3 to train the Cascade Mask RCNN + Convnext v2 large.</p> <p><strong>Model</strong></p> <ul> <li>Detector : Cascade Mask RCNN </li> <li>Backbone : Convnext v2 Large</li> <li>Loss : FocalLoss</li> </ul> <p><strong>Augmentation</strong></p> <ul> <li>Albumentations<ul> <li>Distortion worked very well for me.</li></ul></li> </ul> <pre><code> ( =, transforms=[ (=, p=), (=, p=), (=, p=), ], p=), </code></pre> <ul> <li>mmdet augmentation<ul> <li>AutoAugment, MixUp, Mosaic, RandomErasing</li></ul></li> </ul> <p><strong>Training process</strong></p> <p>First, weighted segment fusion was performed on the output values ​​from the 4 models made of 4 types of data sets to create pseudo labels for Dataset3.</p> <p>I did WSF by referring to the guide below -&gt; <a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9334354%2F74ad73b605892f8de8d3bc401dae6d0a%2Fmake%20pseudo%20labels.png?generation=1690855159125845&amp;alt=media" alt=""></p> <p>second. 
The model was pre-trained using the pseudo label of Dataset 3 and the human label of Dataset 2.</p> <p>Finally, the model was fine-tuned using only Dataset1 data and submitted.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9334354%2F49ef424a4d828f38c7e44d374a2021c2%2Ffinetune.png?generation=1690855729462161&amp;alt=media" alt=""></p> <p>Yes, I thought it was difficult to trust the validation score because Dataset1 was used together when pseudo labeling was performed.</p> <p>Therefore, this idea was not applied right away, but was applied after finding the best hyperparameters and epochs through CV until just before the end of the competition.</p> <p>Since I am also a beginner, I am cautious about giving advice, but in my case, participating in various competitions was helpful. I also read a lot of META research papers.</p> <p>Hi What is the meaning of the concept here?</p> <p>I thought that there would be differences for each WSI, so I created 4 folds for each WSI. The predicted values ​​of the four models thus created were WSFed and used as pseudo labels. It is.</p> <p>Sorry My mistake, Thanks for the advice.</p> <ol> <li>Yes, In the pre-training step, I select the best epoch model pth and resume fine-tuning in the same configs</li> <li>First, find the epoch with the best valid loss using 20% ​​of Dataset1 as validatoion. After that, fine-tuning was performed on all Dataset1 data under the same conditions.</li> </ol> <ol> <li>yes, "best epoch model pth" with Train (dataset 2 of wsi 1,2 and dataset 3) valid (only dataset 1)</li> <li>no i didn't freeze layer</li> </ol> <p>Sorry for not being able to explain clearly.</p>
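<p>The filtering mentioned in the replies above (keep only WSF-fused masks with confidence of at least 0.5 and drop masks smaller than 100 pixels; 0.95 was used for the second pseudo-labeling round) is a simple post-processing step. A minimal sketch, assuming <code>masks</code> is a list of binary arrays and <code>scores</code> the matching confidences:</p> <pre><code>import numpy as np

def filter_pseudo_labels(masks, scores, min_conf=0.5, min_area=100):
    """Keep only confident and sufficiently large masks for pseudo labeling."""
    kept_masks, kept_scores = [], []
    for mask, score in zip(masks, scores):
        if score &lt; min_conf or int(mask.sum()) &lt; min_area:
            continue
        kept_masks.append(mask)
        kept_scores.append(score)
    return kept_masks, kept_scores

masks = [np.ones((512, 512), np.uint8), np.ones((512, 512), np.uint8)]
scores = [0.9, 0.4]
kept, _ = filter_pseudo_labels(masks, scores)  # only the first mask survives
</code></pre>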
HuBMAP - Hacking the Human Vasculature
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to host and congrats to winners!<br> This competition gave me a lot of experience about instance segmentation. and a silver medal :)</p> <p>inference code : <a href="https://www.kaggle.com/code/kiinngdom7/hubmap-hhv-13th-place-inference" target="_blank">https://www.kaggle.com/code/kiinngdom7/hubmap-hhv-13th-place-inference</a></p> <p><strong>Summary</strong></p> <ul> <li>mmdetection</li> <li>2-fold validation</li> <li>cascade mask rcnn with convnext backbone</li> <li>multi scale tta &amp; wbf ensemble</li> </ul> <p><strong>Validation Strategy</strong></p> <ul> <li>fold 1 <ul> <li>train : wsi 2,3,4 / val : wsi1_ds1</li></ul></li> <li>fold 2<ul> <li>train : wsi 1,3,4 / val : wsi2_ds1</li></ul></li> </ul> <p><strong>Training</strong></p> <ul> <li>MMdet 3.0</li> <li>Only blood vessel class was used</li> <li>Augmentations<ul> <li>HorizontalFlip</li> <li>RandomRotate90</li> <li>RandomBrightnessContrast</li> <li>ShiftScaleRotate</li> <li>MotionBlur</li> <li>GaussianBlur</li></ul></li> <li>Models<ul> <li>Finally four models were used for ensemble</li> <li>model 1 : cascade mask rcnn with convnext_tiny backbone, fold1</li> <li>model 2 : cascade mask rcnn with convnext_tiny backbone, fold1</li> <li>model 3 : cascade mask rcnn with convnextv2_base backbone, fold2</li> <li>model 4 : cascade mask rcnn with convnextv2_base backbone, fold2</li></ul></li> </ul> <p><strong>TTA</strong></p> <ul> <li>Multiscale TTA<ul> <li>scales = [(544, 544), (576, 576), (608, 608), (640, 640), (672, 672), (704, 704)]</li></ul></li> </ul> <p><strong>Ensemble</strong></p> <ol> <li>Fuse boxes using WBF.</li> <li>Hard vote masks.</li> <li>Remove the area outside the fused box. (Not used in final submission but private score improved slightly)</li> </ol> <p><strong>Result</strong></p> <table> <thead> <tr> <th></th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>model 1</td> <td>no_dil : 0.46 / dil : 0.526</td> <td>no_dil : 0.463 / dil : 0.41</td> </tr> <tr> <td>model 2</td> <td>no_dil : 0.48 / dil : 0.527</td> <td>no_dil : 0.466 / dil : 0.415</td> </tr> <tr> <td>model 3</td> <td>no_dil : 0.437 / dil : 0.519</td> <td>no_dil : 0.514 / dil : 0.395</td> </tr> <tr> <td>model 4</td> <td>no_dil : 0.452 / dil : 0.527</td> <td>no_dil : 0.489 / dil : 0.358</td> </tr> <tr> <td>ensemble</td> <td>no_dil : 0.479 / dil : 0.55</td> <td>no_dil : 0.541 / dil : 0.462</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10800512%2Fb5b1e6464b906d849c2bd7ebf6ace89a%2FWB%20Chart.png?generation=1691971360361404&amp;alt=media" alt=""></p> <table> <thead> <tr> <th></th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>model 1</td> <td>0.35</td> </tr> <tr> <td>model 2</td> <td>0.353</td> </tr> <tr> <td>model 3</td> <td>0.549</td> </tr> <tr> <td>model 4</td> <td>0.58</td> </tr> </tbody> </table> <p>Metric : [email protected]</p>
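<p>Steps 2 and 3 of the ensemble above (hard-voting the member masks and removing the area outside the WBF-fused box) can be sketched in a few lines. The vote threshold and the (x1, y1, x2, y2) box format are assumptions for illustration, not the exact implementation:</p> <pre><code>import numpy as np

def hard_vote_mask(member_masks, min_votes=2):
    """member_masks: binary masks (one per model) matched to the same fused box."""
    votes = np.sum(np.stack(member_masks, axis=0), axis=0)
    return (votes &gt;= min_votes).astype(np.uint8)

def clip_to_box(mask, box):
    """Zero out everything outside the fused (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    clipped = np.zeros_like(mask)
    clipped[y1:y2, x1:x2] = mask[y1:y2, x1:x2]
    return clipped

members = [(np.random.rand(512, 512) &gt; 0.5).astype(np.uint8) for _ in range(4)]
fused = clip_to_box(hard_vote_mask(members), (100, 100, 300, 300))
</code></pre>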
HuBMAP - Hacking the Human Vasculature
14th Place solution: Yolov8
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratuations to all the winner and thank the organizers for hosting such an interesting competition!</p> <ul> <li>My solution is a simple ensemble of Yolov8l and Yolov8m trained on image size 640 and 1024</li> <li>All of my submission using Dilation with Kernel size = 3 and Iters = 1 because on my local CV, dilation don't hurt (and sometimes improve) <strong>dataset1</strong> but decrease mAP on <strong>dataset2</strong> significantly.</li> </ul> <p>Things worked for me:</p> <ul> <li>Yolov8l-seg and Yolov8m-seg</li> <li>Copy-paste and Mixup augmentation (0.5 probability)</li> <li>Balance sampling between dataset1, dataset2 and unlabeled data (pseudo label) </li> <li>Pseudo-labeling on Unlabeled data</li> <li>Larger image size (Increase CV but not LB/PB 😭)</li> <li>Pseudo labeling on Test data (i.e: Retrain during submission)</li> <li>Ensemble using Weighted mask fusion from 8th place Sartorius competition <a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998</a></li> </ul> <p>Things not worked for me:</p> <ul> <li>Pseudo label more than one round</li> <li>Ensemble on different fold</li> </ul> <h1>1. Validation strategy</h1> <ul> <li>I split the data into train/valid by WSI (4 folds), during the competition, I mostly use WSI 1 and 2 as validation data and models were trained on WSI 1, 3, 4 because WSI1,2 also contains <code>Dataset1</code></li> <li>I belive because of the split, my solution can survive when Private leaderboard released. My Validation scores on WSI 2 are nearly correlated to PB/LB</li> </ul> <h1>2. Modeling</h1> <ul> <li>Step 1: Train 8 models Yolov8l and Yolov8m with image size of 640 on 4 folds for 100 epochs</li> <li>Step 2: Use WMF (Weighted mask fusion) to generate pseudo label on unlabeled data</li> <li>Step 3: Retrain models on Labeled data + Pseudo data for 70 epochs, Image size are 640 and 1024</li> <li>Step 4 (During submission) I use my ensemble models to predict hidden test data and train new model Yolov8l x 640 on Labeled data + Private pseudo data, then append that model to ensemble and run inference again.</li> </ul> <p>All models are trained on the following hyper parameters, I only show the param that I change compared to default values:</p> <table> <thead> <tr> <th>Param</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>mixup</td> <td>0.5</td> </tr> <tr> <td>copy-paste</td> <td>0.5</td> </tr> <tr> <td>flipud</td> <td>0.5</td> </tr> <tr> <td>degrees</td> <td>45</td> </tr> <tr> <td>lr0</td> <td>0.001</td> </tr> </tbody> </table> <p>I also modified the dataloader of Yolov8 to sampling uniformly <code>dataset1</code>, <code>dataset2</code> and <code>pseudo data</code>.</p> <h1>3. 
The effect of pseudo labeling</h1> <ul> <li>I used pseudo labeling two times: once locally with the unlabeled data (<strong>Stage1</strong>) and once during submission on the hidden test data (<strong>Stage2</strong>), as mentioned in <strong>2.</strong></li> <li>Pseudo labeling <strong>Stage1</strong> improved my mask mAP50 on WSI 1 and 2 by about 0.04</li> <li><strong>Stage2</strong> pseudo labeling improved both my LB/PB by about 0.005; I think that with better base models it could improve more.</li> </ul> <p>Comparison for Stage2 pseudo labeling</p> <table> <thead> <tr> <th>Sub</th> <th>LB</th> <th>PB</th> </tr> </thead> <tbody> <tr> <td>ENS1 + Stage2 pseudo</td> <td>0.537</td> <td>0.505</td> </tr> <tr> <td>ENS1</td> <td>0.532</td> <td>0.500</td> </tr> </tbody> </table> <h1>4. Choice of submission</h1> <ul> <li>I selected as my two final submissions two sets of my best-CV models on different folds; all models were trained with <strong>Stage1</strong> pseudo labeling. They are called ENS1 and ENS2.</li> <li>My best private score is 0.544 (LB 0.551), which is an ensemble of 8 models on <code>Fold 2</code></li> </ul> <p>Submission:</p> <table> <thead> <tr> <th>Sub</th> <th>LB</th> <th>PB</th> </tr> </thead> <tbody> <tr> <td>ENS1 + Stage2 pseudo</td> <td>0.537</td> <td>0.505</td> </tr> <tr> <td>ENS2 + Stage2 pseudo</td> <td>0.549</td> <td>0.532</td> </tr> </tbody> </table> <h1>5. Why the shake-up (just my guess)</h1> <ul> <li>Clearly because the test data comes from a completely different WSI; this is why my validation strategy was to split the data by WSI</li> <li>I think the private test data is small and very sensitive (only one WSI); it is not large enough to choose the best model for real-world data (I understand that the annotation process is very costly). I also observed that my local validation score oscillated during training.</li> <li>Many people reported that dilation also caused the shake-up; maybe because my dilation used only 1 iteration, the effect was minimized</li> </ul> <p>Thanks bro</p> <p>Thank you</p> <ul> <li><p>For the hidden pseudo-label part, I could not evaluate the exact performance of the new model. I just trusted it, because I used the best hyperparameters and number of epochs found during K-fold training. In my experiments, pseudo labels on external data improved my results, so I believed it would work just as well when training on the hidden dataset. Besides that, it is very hard to evaluate performance when training on hidden data, because we have no access to the validation mAP during submission.</p></li> <li><p>Yes, I trained 4 YOLOv8m models and 4 YOLOv8l models on WSIs from dataset 1 and 2, then ran inference on dataset 3 using the 8 models. I then thresholded the confidence scores, added the results to the training set, and retrained and evaluated again on 4 folds as I did without pseudo labels.</p></li> </ul>
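<p>The training configuration listed in section 2 (YOLOv8l/YOLOv8m segmentation models with mixup 0.5, copy-paste 0.5, flipud 0.5, 45-degree rotations and lr0 0.001) maps directly onto the Ultralytics training API. A minimal sketch; <code>hubmap.yaml</code> is a hypothetical dataset config in YOLO segmentation format, and all other settings are left at their defaults:</p> <pre><code>from ultralytics import YOLO

model = YOLO("yolov8l-seg.pt")  # pretrained segmentation checkpoint

model.train(
    data="hubmap.yaml",  # hypothetical dataset config (image paths + class names)
    imgsz=640,
    epochs=100,
    mixup=0.5,
    copy_paste=0.5,
    flipud=0.5,
    degrees=45,
    lr0=0.001,
)
</code></pre>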
HuBMAP - Hacking the Human Vasculature
19th Place Solution: Repeated Pseudo Labeling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank the competition hosts for organizing this competition and also the participants.</p> <p>I was disappointed to be kicked out of the gold zone after the shakedown, but the competition was a great learning experience for instance segmentation.</p> <h2>Overview</h2> <p>My final submission was a single model of Cascade Mask R-CNN (Backbone: ConvNeXt-tiny).</p> <p>However, the training process had multiple stages. Figure 1 outlines the training and submission pipeline.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9058219%2F210e2b0ce449c03a2c77e93ae50c41df%2Fhubmap_train-sumarry.drawio%20(3).png?generation=1690860001136772&amp;alt=media" alt=""><br> Figure 1: Training and Submission Pipeline</p> <ul> <li>1st Training Phase:<br> Cascade Mask R-CNN was trained using Dataset 1 and 2. Three different backbones (ConvNext-tiny, ResNext-101, Resnet-50) were used for Cascade Mask R-CNN.<br> Using the three models, we performed pseudo labeling on dataset 3, extracting only labels with a confidence greater than 0.5. We also performed TTA and NMS during pseudo labeling.</li> <li>2nd Training Phase:<br> We used Dataset 1 and 2, and the pseudo labeled dataset 3 created in the 1st phase to train the model whose backbone is convnext. In this phase, the model trained in the 1st phase was fine tuned.</li> <li>3rd Training Phase:<br> Fine tuning the model trained in the 2nd phase using only Dataset 1. This was done to bring the output of the model closer to the annotation quality of dataset 1.<br> Using the model created here, we performed pseudo labeling on dataset 2 and 3, extracting only labels with a confidence greater than or equal to 0.95. We also performed TTA and NMS during pseudo labeling.</li> <li>Final Training Phase:<br> Dataset 1, and pseudo labeled dataset 2 and 3 created in the 3rd phase were used to train the model whose backbone is convnext. Here, the model trained in the 2nd phase was used for fine tuning.</li> <li>Submission:<br> The model developed in the Final Training Phase was used to infer the test dataset; it is a single model of Cascade Mask R-CNN (Backbone: ConvNeXt-tiny), but we have performed TTA and NMS.</li> </ul> <h2>My Approach</h2> <p>I proceeded to train the model, focusing on the quality of the annotations in each dataset. The following is a quote from the description of the datasets in this competition.</p> <blockquote> <p>The competition data comprises tiles extracted from five Whole Slide Images (WSI) split into two datasets. Tiles from Dataset 1 have annotations that have been expert reviewed. 
Dataset 2 comprises the remaining tiles from these same WSIs and contain sparse annotations that have not been expert reviewed.</p> <p>All of the test set tiles are from Dataset 1.</p> <p>Two of the WSIs make up the training set, two WSIs make up the public test set, and one WSI makes up the private test set.</p> <p>The training data includes Dataset 2 tiles from the&nbsp;<em>public</em>&nbsp;test WSI, but&nbsp;<em>not</em>&nbsp;from the&nbsp;<em>private</em>&nbsp;test WSI.</p> <p>We also include, as Dataset 3, tiles extracted from an additional nine WSIs. These tiles have not been annotated. You may wish to apply semi- or self-supervised learning techniques on this data to support your predictions.</p> </blockquote> <p>This means,</p> <ul> <li>Dataset 1 → accurate annotations and the same annotation quality as the test dataset</li> <li>Dataset 2 → not so accurate annotations</li> <li>Dataset 3 → no annotations</li> </ul> <p>Therefore, I decided to trust Dataset 1 and tried to improve the quality of the annotations output by the model to be closer to Dataset 1. I also tried to make good use of the remaining Dataset 2 and 3. Based on this policy, I proceeded with my experiments.</p> <ul> <li>Model<ul> <li>convnext-tiny mainly used.</li> <li>resnet, resnext were also used, but finally not used because they did not give good results when used with pseudo label.<ul> <li>resnet and resnext were used for the first pseudo labeling.</li></ul></li> <li>mmdetection 3.x was used.</li></ul></li> <li>Annotations Type<ul> <li>Only blood_vessel was used.</li> <li>unsure, glomerulus not used at all.</li></ul></li> <li>Data Augmentation<ul> <li>The images were randomly augmented to multiple scales during training as shown below.<ul> <li>scales=[(640, 640), (768, 768), (896, 896), (1024, 1024), (1152, 1152), (1280, 1280), (1408, 1408), (1536, 1536)]</li></ul></li></ul></li> <li>Test Time Augmentation (TTA)<ul> <li>The ensemble of output results from multiple scales was used.<ul> <li>scales=[(1024, 1024), (1280, 1280), (1536, 1536)]</li></ul></li></ul></li> <li>Post-Processing<ul> <li>Non-Maximum Suppression (NMS) was used during TTA.</li> <li>No dilation.</li></ul></li> <li>Pseudo Labeling<ul> <li>Pseudo labeling was performed in two parts as shown in Figure 1. TTA was performed for three different image sizes, scales=[(1024, 1024), (1280, 1280), (1536, 1536)], and NMS was used for the ensemble.</li></ul></li> <li>Cross Validation (CV) Strategy<ul> <li>CV was set to be similar to private LB.<ul> <li>Source WSI == 1 &amp; dataset == 1 → valid</li> <li>Source WSI == 1 &amp; dataset != 1 → It was not used for cv.</li> <li>Others → train</li></ul></li> <li>At the time of submission, all data was used for training.</li> <li>CV was not always correlated with Public LB. 
However, it became more correlated in the latter part of the experiments.</li></ul></li> </ul> <h2>What Worked</h2> <ul> <li>Pseudo Labeling</li> <li>Trusting the quality of the annotations in Dataset 1</li> <li>TTA</li> <li>NMS</li> <li>Scaling Up Images</li> </ul> <h2>What Didn’t Work</h2> <ul> <li>Models with large parameter sizes<ul> <li>mask2former, convnext-small</li> <li>Maybe because of a lack of parameter tuning…?</li></ul></li> <li>Using unsure and glomerulus for training</li> <li>Dilation<ul> <li>Effective on the LB when raw dataset 2 was used as training data</li> <li>Decreased the LB when using only dataset 1 as training data</li></ul></li> <li>Flip Data Augmentation</li> <li>Weighted Boxes Fusion (WBF) (my implementation may have been suspect…)</li> <li>Smooth inference around edges (my implementation may have been suspect…)<ul> <li>Reference: <a href="https://www.kaggle.com/competitions/hubmap-kidney-segmentation/discussion/238013" target="_blank">https://www.kaggle.com/competitions/hubmap-kidney-segmentation/discussion/238013</a></li></ul></li> <li>Ensemble with other models</li> </ul> <h2>Code</h2> <ul> <li>GitHub (my training code; I'm sorry, but it's not well maintained) -&gt; <a href="https://github.com/moritake04/hubmap-2023" target="_blank">https://github.com/moritake04/hubmap-2023</a></li> <li>Submitted notebook -&gt; <a href="https://www.kaggle.com/code/moritake04/private19th-final-sub" target="_blank">https://www.kaggle.com/code/moritake04/private19th-final-sub</a></li> </ul>
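<p>The CV strategy above (validate only on Dataset 1 tiles from a held-out WSI and never share a WSI between train and validation) is easy to express on the competition's tile metadata. A small pandas sketch, assuming <code>tile_meta.csv</code> exposes <code>source_wsi</code> and <code>dataset</code> columns:</p> <pre><code>import pandas as pd

meta = pd.read_csv("tile_meta.csv")  # per-tile metadata (id, source_wsi, dataset, ...)

val_wsi = 1  # the WSI held out in this fold

# Validation: only expert-reviewed (Dataset 1) tiles from the held-out WSI
valid = meta[(meta["source_wsi"] == val_wsi) &amp; (meta["dataset"] == 1)]

# Training: all tiles from the remaining WSIs, so the held-out WSI never leaks in
train = meta[meta["source_wsi"] != val_wsi]

print(len(train), len(valid))
</code></pre>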
HuBMAP - Hacking the Human Vasculature
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for hosting this competition.</p> <p>Thanks to <a href="https://www.kaggle.com/nghiahoangtrung" target="_blank">@nghiahoangtrung</a> for sharing the <a href="https://www.kaggle.com/code/nghiahoangtrung/hubmap-mmdet3-1-single-fold-inference/notebook" target="_blank">notebook and dataset</a>, which helped me make submissions within a short time.</p> <p>Training code of my best single model is shared on <a href="https://github.com/tascj/kaggle-hubmap-hacking-the-human-vasculature" target="_blank">GitHub</a>. (Update 2023/08/12)</p> <p>Inference notebook is public <a href="https://www.kaggle.com/tascj0/hubmap-2023-release" target="_blank">here</a>.</p> <h2>Overal strategy</h2> <p>For most instance segmentation models, the AP mainly relies on bounding box (bbox) prediction, while the precision of mask prediction has a minor impact. Additionally, mask prediction can be handled by other models (like the mask head of Mask R-CNN or any semantic segmentation model). Therefore, when dealing with instance segmentation tasks, I usually focus my efforts on optimizing bbox accuracy.</p> <p>After trying out several models, I found RTMDet to perform exceptionally well and train fast. Hence, I chose RTMDet as the primary model for my experiments.</p> <h2>EMA</h2> <p>I highlighted this because it's the most crucial technique I found in my experiments. The Exponential Moving Average (EMA) models not only demonstrates higher accuracy on the validation set but also achieves higher accuracy on the training set. In most of my experiments, I utilized EMA along with a fixed learning rate. This approach provided me with a stable foundation for conducting various experiments.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F381412%2Fc7fe742970beb68fdeca794c40919644%2Fema.png?generation=1691111824367027&amp;alt=media" alt="ema"></p> <h2>Dataset</h2> <p>I trained the final submitted models using both dataset1 and dataset2. dataset1 was split into train and val sets based on 'i' location.<br> Splitting based on 'i' is more difficult than random splitting. The validation score is approximately (0.47 bbox_mAP, 0.72 segm_mAP60) for random splitting and around (0.43 bbox_mAP, 0.68 segm_mAP60) for 'i' splitting.</p> <h2>Modeling</h2> <p>RTMDet-x is my baseline. I trained models with all 3 classes. Input size is 768.</p> <h3>add mask supervision</h3> <p>In the early experiments, I compared Faster R-CNN and Mask R-CNN, using the same Faster R-CNN pre-trained weights. The latter exhibited higher bbox mAP, indicating the helpfulness of mask annotation.</p> <p>Without using mask supervision, RTMDet-x already achieves a higher bbox mAP compared to models like Mask R-CNN. The addition of mask supervision further improves performance.</p> <p>There are two ways to utilize mask annotation:</p> <ol> <li>Indirectly use mask information through random rotation. 
Recalculate bboxes after rotation.</li> <li>Add a mask head to the model.</li> </ol> <p>Both worked well.</p> <table> <thead> <tr> <th>random rotate</th> <th>mask head</th> <th>bbox_mAP</th> <th>segm_mAP60</th> </tr> </thead> <tbody> <tr> <td>N</td> <td>N</td> <td>0.424</td> <td></td> </tr> <tr> <td>Y</td> <td>N</td> <td>0.434</td> <td></td> </tr> <tr> <td>N</td> <td>Y</td> <td>0.43</td> <td>0.68</td> </tr> <tr> <td>Y</td> <td>Y</td> <td>0.432</td> <td>0.688</td> </tr> </tbody> </table> <p>Due to minimal instance overlap, mask prediction is much simpler compared to typical instance segmentation scenarios.<br> I added a naive mask head in training to enhance bbox prediction. Surprisingly, the predictions from the mask head turned out not bad. Using the same bboxes, segm_mAP60 is slightly high than Mask R-CNN, while segm_mAP75 is slightly lower than Mask R-CNN. A single fold model without tta scored 0.565+ on the private LB, which is sufficient for a gold medal.</p> <h3>augmentations</h3> <p>I used strong geometric augmentations.</p> <pre><code> dict(=, img_scale=(768, 768), angle_range=(-180, 180), scale_range=(0.1, 2.0), border_value=(114, 114, 114), =0.5, =1.0, =0.5, =1.0, ), </code></pre> <h3>training</h3> <p>I trained with a batch size of 8, comprising 3 images from dataset1 and 5 images from dataset2. This approach led to a slight improvement.</p> <h3>ensemble</h3> <p>My final model is an ensemble, where bbox prediction is a WBF of 3 RTMDet models, a YOLOX-x (with mask supervision), and a Mask R-CNN. All models has two different weights (2 of 5 folds). Mask prediction is generated by mask head of Mask R-CNN using input size 1440. TTA was not used.</p> <p>The ensemble scored 0.589 private and 0.317 public.</p> <h2>dilation</h2> <p>In fact, I forgot about dilation until I saw my public score on the final day of the competition.</p> <p>In my local validation, adding wsi3dataset2 to the training set improved the mAP of wsi1dataset1 (all wsi1 holdout) and wsi2dataset2 (all wsi2 holdout). So I choose to trust wsi1dataset1 and wsi2dataset2.</p> <p>This was mostly a stroke of luck. If I had made the submission earlier, I might have made different choices in experiments. Anyway, I would definitely keep one final submission without dilation in the end.</p> <h2>A few tips</h2> <ol> <li>keep multiple EMA models in a single training run.</li> <li>use <code>torch._foreach_lerp_</code> to calculate EMA. It could save some time.</li> <li>use a fast COCOeval implementation. Since evaluation needs to be conducted on multiple EMA models with different momentums, faster evaluation can also save some time.</li> </ol>
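<p>The EMA tips at the end (keep several EMA copies in a single run and update them with <code>torch._foreach_lerp_</code>) can be illustrated with a few lines of PyTorch. This is only a sketch, not the author's implementation; <code>torch._foreach_lerp_</code> is a private API, so its availability depends on the PyTorch version:</p> <pre><code>import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 1, 1))

# Keep multiple EMA copies with different momentums in one training run
momentums = [0.999, 0.9995, 0.9999]
ema_models = [copy.deepcopy(model) for _ in momentums]

@torch.no_grad()
def update_emas(model, ema_models, momentums):
    src = list(model.state_dict().values())
    for ema, m in zip(ema_models, momentums):
        dst = list(ema.state_dict().values())
        # dst = dst + (1 - m) * (src - dst), i.e. dst = m * dst + (1 - m) * src,
        # done in one fused call over all tensors
        torch._foreach_lerp_(dst, src, 1.0 - m)

# call update_emas(model, ema_models, momentums) after every optimizer step
</code></pre>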
HuBMAP - Hacking the Human Vasculature
254th Place Solution for HuBMAP - Hacking the Human Vasculature Competition (Greedy Model Soup Code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone! I'm just posting my solution here, in case it helps someone, and I can also learn to write good solutions!</p> <p><strong>Competition Context</strong><br> The context of this solution is the recent HuBMAP - Hacking the Human Vasculature Competition:<br> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/overview</a></p> <p>The data of this competition can be found here:<br> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></p> <p><strong>My Approach</strong><br> In short, I implemented 2 models:</p> <ol> <li>A plain Mask-RCNN using PyTorch</li> <li>A model soup of Mask-RCNNs using PyTorch</li> </ol> <p>Both scored around the same amount (surprisingly, the model soup scored 0.001 less).</p> <p>The models I used for Mask-RCNN were taken from PyTorch.<br> To implement a Model Soup, I implemented a Greedy Soup (hill-climbing method).<br> Essentially, in this, we start with some model as our original model, and a pool of candidate models. Usually, our first chosen model is our best model. We then pick models from the candidates one by one.<br> The parameters of the selected models are averaged, and we check the validation score.<br> If the score improves, we keep all the current models for our final model soup. If it doesn't, we discard this model from the candidate list.</p> <p>For my candidate models, I simply took the models that were saved after each epoch (I had 10 epochs only).<br> My first chosen model was the model from the final epoch.</p> <p>My further polished code can be found below (helpers such as <code>get_model</code>, <code>get_score</code>, <code>ds_val</code>, <code>cell_type_dict</code> and <code>device</code> are defined elsewhere in the notebook, and the checkpoint path construction shown here is just one reasonable choice):</p> <pre><code>import os
import torch

greedy_soup = {}
num_souped = 1
best_score = 0.0

for e, val_loss in enumerate(validation_mask_losses):
    # checkpoint saved after epoch e (adjust the naming scheme to your own setup)
    model_chk = os.path.join(checkpoint_dir, f"model_epoch_{e}.pth")
    model = get_model(len(cell_type_dict), model_chk)
    curr_state_dict = torch.load(model_chk)
    temp_soup = {}
    curr_score = 0.0
    if e == 0:
        # first candidate: start the soup from this checkpoint as-is
        greedy_soup = {k: v for k, v in curr_state_dict.items()}
        num_souped = 1
        model.load_state_dict(greedy_soup)
        model.to(device)
        curr_score = get_score(ds_val, model)
        best_score = curr_score
    else:
        # running average of the current soup and the new candidate
        temp_soup = {k: (v + greedy_soup[k] * num_souped) / (num_souped + 1)
                     for k, v in curr_state_dict.items()}
        model.load_state_dict(temp_soup)
        model = model.to(device)
        curr_score = get_score(ds_val, model)
    if (e &gt; 0) and (curr_score &gt; best_score):
        # the candidate improved validation, so keep it in the soup
        greedy_soup = temp_soup
        num_souped += 1
        best_score = curr_score
</code></pre> <p>When this competition started, I had little experience in this field. As a result, I didn't participate too much. I knew of Mask-RCNN as a model, and just decided to implement that to improve my knowledge and understanding.</p> <p>To go a few steps beyond, I wanted to implement Weighted Box Fusion for making an ensemble, but I couldn't get to it in time.</p> <p>I had many ideas and could have tried a lot more. But I got disappointed in my low public leaderboard score. I thought that I could work harder at other competitions. 
</p> <p><em>Validation</em><br> Due to my inexperience in larger Kaggle competitions, I did not implement any good sort of validation. I've learnt better now, seeing that validation is one of the main components of any Kaggle competition.</p> <p><strong>Further Improvements</strong><br> I could've tried larger models, such as OneFormer and MaskFormer. I could've tried YOLOv8 as well. <br> I could also try more sophisticated ensembles using Weighted Box Fusion.</p> <p>Honestly, I wish this competition was held now, and not earlier. I definitely have a lot more confidence and experience, and would hopefully have tried a lot more!!!</p> <p>In any case, I really appreciate any feedback! I'm just a student and I want to learn a lot from all the experts here!</p> <p>If you want to further support me, please upvote this notebook!<br> (Ok, enough with the YouTuber outro)</p> <p>Thank you for reading till here!</p>
HuBMAP - Hacking the Human Vasculature
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and HuBMAP who organized another very interesting competition. This is a joint writeup with my long-time teammate <a href="https://www.kaggle.com/phamvanlinh143" target="_blank">@phamvanlinh143</a> , who did most of the hard work in this one.</p> <p><strong>Summary</strong></p> <p>This competition is an especially tricky one with several problems that needed to be addressed:</p> <ul> <li>Public and private test set are structured differently, with private set coming from an unseen WSI. Public test set size is also quite small. With these two reasons combined we have a completely unreliable LB.</li> <li>The strange effect of dilation on public LB result.</li> </ul> <p>Fortunately, we came up with a strategy that we believed can deal with each of these issues accordingly:</p> <ul> <li>Build a reliable CV and trust it.</li> <li>Train a large and diverse ensemble for stability.</li> <li>Submit the ensemble with dilate and without dilate for our two submissions.</li> </ul> <p><strong>Cross-validation</strong></p> <p>To build a trustworthy CV, it must follow as closely as possible to how private test set is created. That means:</p> <ul> <li>Validation must be done on dataset 1 labels.</li> <li>Train and validation set must not contain the same WSI.</li> </ul> <p>A simple solution for this is to use Dataset1-WSI1 as one validation fold and Dataset1-WSI2 as the other, but this would leave out a lot of training samples, especially if we want to train using only Dataset 1. </p> <p>What we did is using metadata to split the Dataset1 tiles from each WSI into left and right side, resulting in 4 folds:</p> <ul> <li>Dataset1 – WSI1 - Left</li> <li>Dataset1 – WSI1 - Right</li> <li>Dataset1 – WSI2 - Left</li> <li>Dataset1 – WSI2 – Right</li> </ul> <p>Next, for each tile in training data we use <code>staintools</code> to generate 9 according tiles transformed into the style of the 9 additional WSIs in Dataset 3. In training, when the sampled tile come from the same WSI of the validation set, one of the 9 generated tiles are sampled instead. While this method doesn’t completely remove the characteristics of the original WSI, we felt that it is a good enough compromise to what we wanted to achieve.</p> <p>An example of an original tile and its 9 variations:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F0e621e649f5c50cba0d6c9d51a15889a%2Ftest2.jpg?generation=1691163031076471&amp;alt=media" alt="“”"></p> <p>Splitting the dataset in this manner also allowed us to train models using tile concatenation with minimal leak, as we only had to handle the middle tile column that separated left and right side of the WSI.</p> <p><strong>Model training</strong></p> <p><strong>Input data:</strong> we used one of these two data types:</p> <ul> <li>original tiles</li> <li>padded tiles similar to what <a href="https://www.kaggle.com/hengck" target="_blank">@hengck</a> <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/419143#2316842" target="_blank">proposed</a>. 
We padded 128 pixels around the original tile using available neighboring tiles. Instance labels are modified accordingly. At inference time we predicted on padded region and then center crop.</li> </ul> <p>An example of original tile (left) and padded tile (right):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9250575%2F07696cca441ed581263843a0a919752d%2Ftest.jpg?generation=1691162170897200&amp;alt=media" alt="“”"></p> <p><strong>Augmentation:</strong> we applied strong augmentation:</p> <ul> <li>stain augmentation: p=1.0 for tiles from same WSI as validation set, p=0.5 otherwise</li> <li>RandomRotate90, RandomFlip, ElasticTransform, ShiftScaleRotate, RandomBrightnessContrast, HueSaturationValue, ImageCompression, GaussNoise, GaussianBlur…</li> <li>AutoAugment similar to DETR <a href="https://github.com/open-mmlab/mmdetection/blob/master/configs/detr/detr_r50_8x2_150e_coco.py" target="_blank">training config</a>.</li> </ul> <p><strong>Training details:</strong> we trained the models in 2 stages, using only blood vessel class. </p> <ul> <li>Stage 1: train for 30 epochs using 3 folds dataset 1 + all dataset 2</li> <li>Stage 2: finetune for 15 epochs using only 3 folds dataset 1</li> </ul> <p>Then we picked 5 checkpoints from finetune stage and do SWA to provide the final model.</p> <p><strong>Models:</strong> Cascade Mask-RCNN models with <code>swin-t</code>, <code>coat-small</code>, <code>convnext-t</code> and <code>convnext-s</code> backbones. We used mmdet 2.x to train our models.</p> <p>We experimented with different combination of backbones and input data types (original or padded). We selected the models with best CV and trained a version using full data. The final ensemble consists of both fold models and full data models.</p> <p><strong>Postprocessing:</strong> The masks that met following criteria are filtered:</p> <ul> <li>Glomeruli filter: remove masks with more than 60% area inside glomeruli regions.</li> <li>Confidence filter: remove masks with confidence &lt;0.01.</li> <li>Ensemble filter by pixel: count for each pixels the number of blood vessel prediction by the ensemble (how many models predicted positive for that pixel). A threshold is then calculated from the resulting list of pixel counts with quantile q=0.05 (ignore zero value). Masks of which all pixels had count smaller than this threshold are removed.</li> <li>Ensemble filter by instance: perform nms with iou_thresh=0.65. For each selected mask we save the number of masks that satisfied iou_thresh with it. Similar to pixel counts filter, we remove masks with small number of overlapping masks using quantile q=0.075.</li> <li>Small mask filter: remove masks with fewer than 64 pixels.</li> </ul> <p><strong>To dilate or not to dilate</strong></p> <p>As others have reported, we observed significant change of score on Public LB with and without dilation. As adding dilation didn’t work at all on our cross validation, we knew we cannot trust it. However there exists also the probability that private set would present the same label patterns as public set, as the host have given confirmation that they were verified using the same procedure. 
Fortunately, we had two submissions, so this is where we decided to put it to good use.</p> <p>Our final ensemble scored: </p> <ul> <li>With dilation: 0.551 public, 0.526 private.</li> <li>Without dilation: 0.465 public, 0.588 private.</li> </ul> <p>Our late submissions showed that submitting any single backbone in our ensemble without dilation would have landed us in the private gold zone anyway while scoring 0.4x on public LB. Even though we chose correctly by trusting our CV, this public LB behavior is still a mystery to us. Hopefully we can shed some light into it by reading the solutions of other top teams.</p> <p>Thank you very much for reading and let us know if you have any questions.</p> <p>Edit: </p> <ul> <li>training code: <a href="https://github.com/phamvanlinh143/HubMap_2023_2nd_Place_Solution" target="_blank">https://github.com/phamvanlinh143/HubMap_2023_2nd_Place_Solution</a></li> <li>inference notebook: <a href="https://www.kaggle.com/code/phamvanlinh143/hubmap-2nd-place-inference" target="_blank">https://www.kaggle.com/code/phamvanlinh143/hubmap-2nd-place-inference</a></li> </ul>
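<p>As an aside for readers who want to reproduce the stain-transfer idea described in the cross-validation section above: below is a minimal sketch of generating one style-transferred variant of a tile with <code>staintools</code>. This is not the authors' exact code; the file names are hypothetical and the choice of the Vahadane method is an assumption.</p> <pre><code>import staintools

# Hypothetical paths: one training tile and one reference tile from a target WSI
source = staintools.read_image("train_tile.png")
target = staintools.read_image("target_wsi_tile.png")

# Standardize brightness first, as recommended by staintools
source = staintools.LuminosityStandardizer.standardize(source)
target = staintools.LuminosityStandardizer.standardize(target)

# Fit a stain normalizer on the reference tile, then re-render the training tile
# in that WSI's staining style
normalizer = staintools.StainNormalizer(method="vahadane")  # assumption: Vahadane method
normalizer.fit(target)
styled = normalizer.transform(source)
</code></pre> <p>Repeating this against one reference tile from each of the 9 additional WSIs would give the 9 style variants per training tile described above.</p>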
HuBMAP - Hacking the Human Vasculature
30th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Overview</h2> <p><a href="https://www.kaggle.com/ayberkmir" target="_blank">@ayberkmir</a> and I joined the competition very late. I wasted my first week because I didn't read the metric properly and wasn't submitting accordingly. We didn't have time to try many diverse things, so this will be quite simple.</p> <h2>Validation</h2> <p>Ayberk was using a stratified single train/test split (stratified on blood vessel count) and I was using 4-fold leave-one-WSI-out cross-validation. Surprisingly, both of them worked on the private leaderboard.</p> <h2>Models</h2> <p>I mainly worked on Mask R-CNN and Cascade Mask R-CNN with a ResNeXt101 backbone and Ayberk worked on YOLOv7 and v8 models.</p> <h2>Training</h2> <p>I dropped duplicate annotations by simply suppressing overlapping boxes with 0.99 IoU when I was creating datasets. I also included glomeruli annotations and used unsure annotations as blood vessels. I thought that if I treated unsure as blood vessels, training would be more stable. Ayberk didn't use glomeruli or unsure annotations, and YOLO was already handling duplicate annotations.</p> <p>Since the WSIs were kinda similar, I decided not to use stain augmentations. I used multi-scale training, with scales arranged between 1.5 and 2x the raw scale, i.e.</p> <pre><code>train_scales = [ (, ), (, ), (, ), (, ), (, ), (, ), (, ), (, ), (, ) ] </code></pre> <p>Other training augmentations were random horizontal, vertical and diagonal flip, random brightness, contrast, hue and saturation.</p> <p>Ayberk was using a fixed size of 1024x1024 because we thought the built-in augmentations in YOLO are already strong enough.</p> <p>We used almost default values for optimizer and schedulers. I only added another learning rate multiplier step.</p> <h2>Post-processing</h2> <p>Both of us thought post-processing was the most important part in this competition because of the weird metric. We dedicated most of our time to this part rather than modelling.</p> <p>First of all, we used 3x TTA (horizontal, vertical and diagonal flip) when we were predicting with our models. TTA predictions were merged using weighted boxes fusion. Masks that belong to fused boxes were averaged. Unfortunately, I couldn't find a way to retrieve soft mask predictions in mmdetection and I had to average binary masks. We didn't use multi-scale TTA. YOLO models were taking 1024x1024 and Mask R-CNN models were taking 1280x1280 inputs.</p> <p>Since I had 4 folds, I also had to merge their predictions and I used weighted boxes fusion with the same configurations. We also tried to merge YOLO and Mask R-CNN predictions at this stage. It was working as well as other predictions on the public leaderboard but always failed on the private leaderboard for some reason.</p> <p>We also tried the same submissions with and without dilation and we found that TTA was reducing the effect of dilation A LOT on the public leaderboard. 
It was a really good sign because TTA is safe and dilation isn't.</p> <p>We had different submissions with combinations of models, folds, TTA, dilation, score multiplication, WBF tuning and etc.</p> <p>When we sort our submissions by public score, we get submission with dilation or ensemble.<br> <img src="https://i.ibb.co/8PpVPxX/Screenshot-from-2023-08-01-09-38-13.png" alt="public"></p> <p>When we sort our submission by private score, we get submissions with TTA, without dilation and single models but I also have one submission with dilation that scores 0.50x on private which is weird.<br> <img src="https://i.ibb.co/Ss8WSFy/Screenshot-from-2023-08-01-09-41-36.png" alt="private"></p> <p>Our best submission was 14th place on private leaderboard but at the end Ayberk was able select a good enough submission that made us finish 30th.</p>
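<p>For readers who want a concrete picture of the multi-scale training setup mentioned above: assuming the raw 512x512 tiles and the stated 1.5-2x range, a set of nine evenly spaced square scales could look like the sketch below. These values are illustrative assumptions, not necessarily the authors' exact list.</p> <pre><code># Plausible multi-scale training sizes between 1.5x and 2x of the raw 512px tile
train_scales = [
    (768, 768), (800, 800), (832, 832),
    (864, 864), (896, 896), (928, 928),
    (960, 960), (992, 992), (1024, 1024),
]
</code></pre>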
HuBMAP - Hacking the Human Vasculature
3rd Place Solution - How to properly utilise noisy annotations
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you 😀</p> <p>Thanks for the reminder about the split, I will update the solution. I simply used kfold on ds1, so basically in stage 1 the training data is dataset 2 (complete) and the validation split is a single fold from ds1. In stage 2, 4 folds from ds1 are used in training, keeping the same validation fold.</p> <p>Hi everyone, sorry for taking a bit longer to publish the complete solution. Thanks to the Kaggle team and HubMap team for hosting the competition. In this post I'm going to explain my winning solution in detail. Again, I am really happy that I became one of the youngest GrandMasters after this competition. This competition also taught me how important it is to keep yourself updated and to try out the recent research in the field. </p> <p>Also, we have already made our inference notebook along with model weights public: you may visit it with this <a href="https://www.kaggle.com/code/nischaydnk/cv-wala-mega-ensemble-hubmap-2023" target="_blank">notebook link</a></p> <p>All code related to training and data preprocessing is also made public (In Progress): <a href="https://github.com/Nischaydnk/HubMap-2023-3rd-Place-Solution" target="_blank">https://github.com/Nischaydnk/HubMap-2023-3rd-Place-Solution</a> </p> <p>You may find most of the configs I used in the all_configs folder. I will clean up the repo in the upcoming days.</p> <p>Here you can refer to the coco annotations used for training the model: <a href="https://www.kaggle.com/datasets/nischaydnk/hubmap-coco-datasets" target="_blank">dataset link</a></p> <h2>Overview</h2> <p><strong>Winning solution consists of :</strong></p> <p><strong>5</strong> MMdet based models with different architectures.<br> <strong>2x ViT-Adapter-L</strong> (<a href="https://github.com/czczup/ViT-Adapter/tree/main/detection" target="_blank">https://github.com/czczup/ViT-Adapter/tree/main/detection</a>)<br> <strong>1x CBNetV2 Base</strong> (<a href="https://github.com/VDIGPKU/CBNetV2" target="_blank">https://github.com/VDIGPKU/CBNetV2</a>)<br> <strong>1x Detectors ResNeXt-101-32x4d</strong> (<a href="https://github.com/joe-siyuan-qiao/DetectoRS" target="_blank">https://github.com/joe-siyuan-qiao/DetectoRS</a>)<br> <strong>1x Detectors Resnet 50</strong> </p> <p>I also had a few ViT-Adapter based single models which could have placed me in the 1st/2nd ranks, but I didn't select them. No regrets :))</p> <h2>Image Data Used</h2> <p>I only used competition data for training models. <em>No external image data was used.</em> </p> <h2>How to use dataset 2??</h2> <p>Making the best use of dataset 2 was one of the key things to figure out in the competition. For me, the multi-stage approach turned out to give the highest boost. Basically, during stage 1, a coco-pretrained model is loaded and pretrained on all the WSIs present in the noisy annotations (dataset 2) for fewer epochs (~10), using a really high learning rate (0.02+), with a cosine scheduler with minimum lr around (0.01), and light augmentations. 
</p> <p>In stage 2, we will load the pretained model from stage 1 and fine-tune it on dataset 1 with higher number of epochs (15-25), heavy augmentations, higher image resolution (for some models), slightly lower starting learning rate and minimum LR till 1e-7. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fdfbae6a6f634c755d9ea39ca2daa9751%2FScreenshot%202023-08-09%20at%203.36.59%20AM.png?generation=1691532480119818&amp;alt=media" alt=""></p> <p><em>I have used Pseudo labels using dataset 3 in training few models of final ensemble solution, although I didn't find any boost using them in the leaderboard scores, I will still talk about it as they were used in final solution</em></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Facf9ee081989a328c9f150458b4fce95%2FScreenshot%202023-08-09%20at%204.01.34%20AM.png?generation=1691533922010932&amp;alt=media" alt=""></p> <p>Using multistage approach gave around consistent 2-3% boost on validation and 4-6% improvement in leaderboard scores which is quite huge. The gap between cv &amp; lb scores was quite obvious as models were now learning on WSI 3 &amp; 4. Also, I never had to worry about using dilation or not as my later stage was just fine-tuned on dataset 1 (noise free annotations), so dilation doesn't help if applied directly on the masks.</p> <h1>Models Summary</h1> <h2>Vit Adapter: These models were published in recent ICLR 2023 &amp; turned out to be highest scoring architectures.</h2> <ul> <li>Pretrained coco weights were used. </li> <li>1400 x 1400 Image size (dataset fold-1 with pseudo threshold 0.5) &amp; (full data dataset 1 with pseudo threshold 0.6)</li> <li>Loss fnc: Mask Head loss multiplied by 2x in decoder.</li> <li>1200 x 1200 Image size used in stage 1.</li> <li>Cosine Scheduler with warmup were used.</li> <li>SGD optimizer for fold 1 model &amp; AdamW for full data model</li> <li>Higher Image Size + Multi Scale Inference (1600x1600, 1400x1400)</li> </ul> <p><strong>Best Public Leaderboard single model: 0.600</strong><br> <strong>Best Private Leaderboard single model: 0.589</strong></p> <h2>CBNetV2: Another popular set of architectures based on Swin transformers.</h2> <ul> <li>Pretrained coco weights were used. </li> <li>1600 x 1600 Image size (dataset 1 fold-5 without Pseudo)</li> <li>1400 x 1400 Image size used in stage 1.</li> <li>Cosine Scheduler with warmup were used.</li> <li>Higher Image Size during Inference (2048x2048)</li> <li>SGD optimizer </li> </ul> <p><strong>Best Public Leaderboard single model: 0.567</strong></p> <h2>Detectors HTC based models: CNN based encoders for more diversity</h2> <ul> <li>Pretrained coco weights were used. 
</li> <li>2048 x 2048 image size (Resnet50 fold 1 w/ pseudo threshold 0.5 , Resnext101d without pseudo)</li> <li>Loss fnc: Mask Head loss 4x for Resnext101, 1x for Resnet50 </li> <li>Cosine Scheduler with warmup were used.</li> <li>SGD optimizer </li> </ul> <p><strong>Public Leaderboard single model: 0.573 ( resnext 101) , 0.558 (resnet50)</strong></p> <h2><strong>Techniques which provided consistent boost:</strong></h2> <ol> <li>Multi Stage Training</li> <li>Flip based Test Time Augmentation</li> <li>Higher weights to Mask head in HTC based models</li> <li>SGD optimizer </li> <li>Weighted Box Fusion for Ensemble</li> <li>Post Processing</li> </ol> <h2>Post Processing &amp; Ensemble</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F0fb5299755c5bfa0d6da43263a8be223%2FScreenshot%202023-08-09%20at%204.38.49%20AM.png?generation=1691536182102653&amp;alt=media" alt=""></p> <p>As mentioned earlier, I used WBF to do ensemble. To increase the diversity, I kept NMS for TTA and WBF for ensemble. Also, using both CNN / Transformer based encoders helped in increasing higher diversity and hence more impactful ensemble. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F764ea7995470a76b07549107c4b531a2%2FScreenshot%202023-08-09%20at%204.26.47%20AM.png?generation=1691536206638981&amp;alt=media" alt=""></p> <p>After the ensemble, I think some of my mask predictions got a little distorted. Therefore, I applied erosion followed by single iteration of dilation. This Post-processing gave me a decent amount of boost in both cross validation score as well as on leaderboard (+ 0.005)</p> <h2>Light Augmentations</h2> <pre><code>dict( =, direction=[, ], =0.5), dict( =, policies=[[{ : , : 0.4, : 0 }], [{ : , : 0.4, : 5 }], [{ : , : 1.0, : 6 }, { : }]]), dict( =, transforms=[ dict( =, =0.0625, =0.15, =15, =0.4) ], =dict( =, =, label_fields=[], =0.0, =) </code></pre> <h2>Heavy Augmentations</h2> <pre><code>dict( =, direction=[, ], =0.5), dict( =, policies=[[{ : , : 0.4, : 0 }], [{ : , : 0.4, : 5 }], [{ : , : 0.6, : 10 }, { : , : 0.6, : 3 }], [{ : , : 0.6, : 10 }, { : , : 0.6, : 5 }], [{ : , : 32, : (0.5, 1.5), : 15 }], [{ : , : (0.4, 0.5, 0.6, 0.7, 0.8, 0.9), : 0.2 }], [{ : , : (3, 8), : [(4, 4), (4, 8), (8, 4), (8, 8), (16, 32), (32, 16), (32, 32), (32, 48), (48, 32), (48, 48)] }], [{ : , : 0.6 }, { : , : 0.6, : 3 }], [{ : , : 32, : (0.5, 1.5), : 18 }], [{ : , : (0.4, 0.5, 0.6, 0.7, 0.8, 0.9), : 0.3 }], [{ : , : (5, 10), : [(4, 4), (4, 8), (8, 4), (8, 8), (16, 32), (32, 16), (32, 32), (32, 48), (48, 32), (48, 48)] }], [{ : , : 0.6, : 4 }, { : , : 0.6, : 6 }, { : , : 0.6, : 10 }], [{ : , : 1.0, : 6 }, { : }]]), dict( =, transforms=[ dict( =, =0.0625, =0.15, =15, =0.5), dict(=, =0.5), dict( =, transforms=[ dict( =, =120, =6.0, =3.5999999999999996, =1), dict(=, =1), dict( =, =2, =0.5, =1) ], =0.3) ], =dict( =, =, label_fields=[], =0.0, =) </code></pre> <p>Thank you all, I've tried my best to cover most part of my solution. Again, I am super happy to win the solo gold, feel free to reach out in case you find difficulty understanding any part of it.</p>
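<p>For readers unfamiliar with how such augmentation pipelines are expressed in mmdet 2.x, here is a small illustrative sketch in that config style. It is not the author's exact configuration (the real configs are in the linked GitHub repo); the transform names and most values below are assumptions chosen only to show the structure.</p> <pre><code># Illustrative mmdet 2.x-style "light" augmentation block (assumed values)
light_aug = [
    dict(type='RandomFlip', direction=['horizontal', 'vertical'], flip_ratio=0.5),
    dict(
        type='AutoAugment',
        policies=[
            [dict(type='Rotate', prob=0.4, level=0)],
            [dict(type='Rotate', prob=0.4, level=5)],
        ]),
    dict(
        type='Albu',
        transforms=[
            dict(
                type='ShiftScaleRotate',
                shift_limit=0.0625,
                scale_limit=0.15,
                rotate_limit=15,
                p=0.4),
        ],
        bbox_params=dict(
            type='BboxParams',
            format='pascal_voc',
            label_fields=['gt_labels'],
            min_visibility=0.0,
            filter_lost_elements=True)),
]
</code></pre> <p>The heavy pipeline follows the same pattern, only with more AutoAugment policies and an additional group of Albu transforms.</p>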
HuBMAP - Hacking the Human Vasculature
4th Place Solution [SDSRV.AI] GoN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code training <a href="https://www.kaggle.com/code/damtrongtuyen/training-hubmap-yolov6m" target="_blank">yolov6m</a> and <a href="https://www.kaggle.com/damtrongtuyen/training-hubmap-2-stages-model" target="_blank">2 stages model </a>: mmdet maskrcnn, cascade, htc</p> <p>Thanks to the organizers for this compelling competition. We present the 4th place solution from the GoN team at SDSRV.AI. It emphasizes the importance of dataset, validation, ensemble and post processing method. Special thanks to <a href="https://www.kaggle.com/ducmanvo" target="_blank">@ducmanvo</a>, an amazing teammate.</p> <h1>Summary</h1> <p>Our solution employs a combination of 4 instance segmentation models comprising CascadeRCNN (ResNeXt, Regnet), MaskRCNN (swint), HybridTaskCascade (Re2Net), and 1 object detection model, Yolov6m. Each model is trained with a dataset consisting of 1549 images (ds1+ds2), with 84 images from dataset1 utilized for validation. During the inference process, the Weighted Boxes Fusion (WBF) technique is employed to ensemble RPN boxes and ROI boxes, while the masks generated by the ensemble of models utilize the mean average approach. For post-processing, we filter out small instances and refine instance score using mask scores.</p> <h1>Cross-Validation and Preprocessing</h1> <p><strong>Train-test split</strong><br> - Training: 1549 images (1211 images from ds2 + 338 images from ds1)<br> - Validation: 84 images from ds1<br> - During the initial stages of approaching the problem (in the final month of the competition), we initially utilized K-fold cross-validation. However, upon recognizing significant differences in distribution, label assignment methodologies between dataset1 and dataset2, as well as variations in label assignments among Whole Slide Images (WSI), we decided to split the data into two train-validation sets. The validation data was exclusively taken from dataset1 (20% of dataset1). Another reason for this data split was the scarcity of dataset1 images in the training set, which was a concern given that the private test data was exclusively sourced from dataset1 (with only 422 images available).<br> - To determine the dissimilarities in the label distribution between dataset1 and dataset2, as well as variations among different Whole Slide Images (WSI), we trained a base model, MaskRCNN R50, on the training data from dataset1 (fold1) and validated it on dataset1, dataset2 (fold1). Our observations revealed that the model's performance scored considerably higher on dataset1 compared to dataset2. Similar procedures were applied when comparing performance across various Whole Slide Images (WSI).<br> <strong>Preprocessing</strong>: remove duplicate annotations<br> - Approximately 4.5% of training data was duplicated. We directly removed these duplicate labels to ensure data integrity and improve model performance.</p> <h1>Training &amp; valid with blood_vessel &amp; unsure only</h1> <p>During training and validation, we specifically focused on the "blood_vessel" and "unsure" classes. 
This decision was influenced by the significant size difference between glomerulus and the other two classes, with glomerulus being much larger. Surprisingly, when training with all three classes, the model's performance on glomerulus remained exceptionally high. Additionally, the competition organizers informed us that during the testing phase, we could utilize glomerulus labels to eliminate false positives. Consequently, we concluded that training with additional glomerulus data was unnecessary. After removing glomerulus labels from the training dataset, we observed an improvement in the performance score for the "blood_vessel" class. Therefore, we made the decision to exclude glomerulus labels from all subsequent experiments.</p> <h1>Models</h1> <p><strong>2 stages model</strong>: MaskRCNN, CascadeRCNN, HybridTaskCascade</p> <ul> <li>When training with the base model (MaskRCNN-R50), we noticed that the model struggled to converge using the default mmdet configurations. Consequently, in subsequent experiments, we reduced the utilization of augmentation methods, retaining only multiscale training. Additionally, we adopted a larger backbone (ResNeXt-101) and replaced SGD with AdamW optimizer. To further enhance model diversity during ensemble, we incorporated CascadeRCNN and HybridTaskCascade into our approach. These modifications aimed to improve convergence and overall performance of the models.</li> </ul> <p><strong>Yolov6m</strong>: Two-stage models like MaskRCNN, CascadeRCNN, and HybridTaskCascade exhibit good Mean Average Precision (MAP) at a certain intersection-over-union (IoU) threshold range [0.5:0.95]. However, their overall MAP [0.5:0.95] scores may not reach high values. To address this limitation, YOLO series models offer a solution. Specifically, YOLOv6, with its default training configurations, achieves a significantly higher Mean Average Recall (MAR) of 55.1, compared to the 44.x obtained by the two-stage models. This improvement in MAR highlights the effectiveness of YOLO series models in overcoming the mentioned drawback.<br> <strong>Detail training config</strong></p> <ul> <li>Optimizer: AdamW, warmup 3 epoch</li> <li>Select best model use MAP[0.5:0.95] blood_vessel class</li> <li>Data augmentation: Multi scale training: [(512x512), (640,640), (768, 768), (896, 896), (1024, 1024)]</li> </ul> <h1>Training: 2 stages</h1> <ul> <li>Stage 1: training all 1549 images, 2 classes</li> <li>Stage2: training 338 images ds1, 2 classes</li> </ul> <p>Focusing on optimizing the validation set with dataset1 exclusively, fine-tuning the model for a few epochs using dataset1 in the training set improved performance on the validation data.</p> <div> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2Fdd8e2019007298da32e43ffe92e1fe32%2Fsingle_model.png?generation=1691082591623984&amp;alt=media"> </div> <h1>Ensemble &amp; Post processing</h1> <p><strong>Ensemble</strong></p> <ul> <li>The ensemble process is as depicted in the following diagram; We utilize Weighted Boxes Fusion (WBF) When ensembling bounding boxes, as it produces superior results compared to NMS, SoftNMS, or NMW. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2F5a2d33de596e95e08f227e514cc49350%2Fensemble.png?generation=1691081285365346&amp;alt=media" alt="Ensemble diagram"><ul> <li>For WBF, equal weights are assigned to each model during the ensembling process.</li></ul></li> </ul> <p><strong>PostProcessing</strong></p> <ul> <li>Remove instances with an area &lt; 80 pixels.</li> <li>Refine instance scores using the formula: <code>score_instance = score_bbox * mean(mask[mask &gt; 0.5])</code></li> </ul> <p>Some detail experiments:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6439446%2Ffabaef83a462c8b1d1f54d8ad11389ea%2Ffinal%20experiments.png?generation=1691081498124640&amp;alt=media" alt=""><br> <em>We observed a strong correlation between the local validation and the public leaderboard (LB) scores. Therefore, we only made submissions when there was a significant improvement in the validation score.</em></p> <h1>Conclusion</h1> <ul> <li>Several techniques in our approach significantly influenced the private score: using only dataset1 for validation and fine-tuning, employing light augmentation, multi-stage ensemble, utilizing YOLOv6 for higher MAR, refining prediction scores, and, undoubtedly, placing strong trust in the validation score - no dilated!!!</li> <li>Adopting larger models and training with larger image sizes can potentially enhance model quality. However, due to resource limitations, we were unable to implement this strategy.</li> </ul> <p><strong>Keywords</strong>: Instance segmentation, ensemble instance segmentation, re-ranking instance, refine instance score.</p> <p>The bbox output of rpn head and box head are different. Yes, we only use mask head from 2 stage models.</p> <p>Great finding! I drew the wrong arrow endpoint. It should be the ensemble bounding box after the boxhead, instead of the rpnhead! Thanks</p> <p>I think there is currently no public code that does that. You need to customize your inference flow. My training and inference code will be prepared and made public soon!</p> <p><a href="https://github.com/amirassov/kaggle-imaterialist/blob/f1ae37100801203500d20119b9de7e19b0d89a1c/mmdetection/mmdet/models/detectors/ensemble_htc.py" target="_blank">Inference code for ensemble both rpn head and roi head(boxhead)</a></p>
HuBMAP - Hacking the Human Vasculature
54th place solution YOLOV8x + WBF + TTA ensemble train and inference code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Summary:<br> MultilabelStratifiedKFolds on dataset, source, num_cells with yolov8 on one class, drop unsure, TTA, WBF, dilation</p> <p>Inference code: <a href="https://www.kaggle.com/code/sirtimio/infer-yolov8-tta-wbf-55th-place" target="_blank">https://www.kaggle.com/code/sirtimio/infer-yolov8-tta-wbf-55th-place</a></p> <p>Train code: <a href="https://github.com/sir-timio/HuBMAP-2023" target="_blank">https://github.com/sir-timio/HuBMAP-2023</a></p> <p>Best private: 0.552 without dilation and trained mostly on dataset 1</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7215366%2Fab482c349804007b4699e3250f68f6db%2F2023-08-01%20%2010.30.49.jpg?generation=1690875066944109&amp;alt=media" alt=""></p>
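<p>A short sketch of the fold split mentioned in the summary, using the iterative-stratification package. The toy dataframe below stands in for the competition's tile metadata; the column names and the number of splits are assumptions for illustration.</p> <pre><code>import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# Toy stand-in for the per-tile metadata (in practice built from tile_meta.csv
# plus a per-tile blood-vessel count)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dataset": rng.integers(1, 3, 200),      # dataset 1 or 2
    "source_wsi": rng.integers(1, 5, 200),   # source WSI id
    "num_cells": rng.integers(0, 30, 200),   # instance count per tile
})

# Stratify jointly on dataset, source and instance count, as described above
y = df[["dataset", "source_wsi", "num_cells"]].values
mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df["fold"] = -1
for fold, (_, val_idx) in enumerate(mskf.split(df, y)):
    df.loc[val_idx, "fold"] = fold
</code></pre>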
HuBMAP - Hacking the Human Vasculature
5th place solution. Higher Resolution and Dataset1 is all you need.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for hosting this competition!<br> Thank you to all the people who helped me! I have learned a lot from you!<br> This is my first time winning a gold medal! I am finally a master!</p> <h1>data</h1> <p>I only used dataset1 in my final submission, with coco pretrain.<br> 5-fold splits were used in local cv</p> <h1>Training</h1> <h2>Model</h2> <p>I tried detectoRS-r101 and htc-db-b. Htc-db-b performed better in both local cv (score 0.439) and lb (score 0.570).</p> <h2>Training Strategy</h2> <p>I trained with mmdet2, the default 1x schedule and 1x swa training. <br> Image scales were set as (512,512)-(1536,1536).<br> RandomFlip and Rotate90 aug were used.</p> <h1>Inference</h1> <p>TTA was used in inference. TTA scales were [(640,640),(800,800),(1024,1024),(1408,1408),(1696,1696),(1920,1920)] while TTA flip_direction was [horizontal,vertical].<br> The rcnn nms was replaced by weighted cluster-nms with DIoU.</p> <h1>Final Submission</h1> <p>My final submission was 1 fold htc-db-b, trained on dataset1.</p> <h1>Others</h1> <h2>Ensemble</h2> <p>I tried an ensemble of different folds, which decreased both cv and public score, so I didn't choose it as my final submission, but it increased the private score in the end. Even a 2-fold ensemble increased private from 0.57 to 0.573.</p> <h2>Larger Model</h2> <p>I tried htc-db-l, whose local cv and public score were both lower than htc-db-b, but which had a higher score (0.573) in private.</p> <h2>Tried but did not work</h2> <p>data augmentation: copy-paste, mixup, HSV, cutout…<br> dataset23: I tried pretraining on dataset2 then fine-tuning on dataset1, which decreased the score both in cv and lb. I tried pseudo labels, which also didn't work for me.</p> <p>inference code: <a href="https://www.kaggle.com/code/wushaodong/5th-solution-htc-db-b-single-fold-submission" target="_blank">https://www.kaggle.com/code/wushaodong/5th-solution-htc-db-b-single-fold-submission</a></p> <p>both not </p>
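<p>For reference, the TTA described above would typically be expressed through an mmdet 2.x test pipeline like the sketch below. The scales and flip directions are taken from the post; the normalization values are the usual ImageNet defaults and are an assumption, not necessarily the author's setting.</p> <pre><code>test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=[(640, 640), (800, 800), (1024, 1024),
                   (1408, 1408), (1696, 1696), (1920, 1920)],
        flip=True,
        flip_direction=['horizontal', 'vertical'],
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375], to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
</code></pre>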
HuBMAP - Hacking the Human Vasculature
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><ol> <li>Is there a reference or reason why only HTC Roi Head was used in Mask R-CNN?<ul> <li>Although not compared in this competition, I empirically believe that HTC is more accurate than Cascade Mask R-CNN.<br> I did not utilize multiple architectures for ensemble as I prefer a simple solution.</li></ul></li> <li>how did you set the threshold for the small mask? (Area or Width * height?)<ul> <li>Area (mask.sum() &lt; 100). From EDA, there were very few masks with area &lt; 100. I did not optimize this, but when I changed the threshold to 1 in late submission, both private/public LBs were decreased. </li></ul></li> </ol> <p>Good question! That's exactly the question I'd like to ask the other participants, especially to <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> !</p> <blockquote> <p>I thought it would be better to divide the folds by WSI in this competition, why did you use random K-fold? Which score did you trust?, Your CV, LB or something else?</p> </blockquote> <ul> <li>For me, I splitted folds according to WSI first. However, training was not stable and models were quickly overfitted. Instead, I evaluated my models trained with only dataset 1 (= without WSI 3, 4) on public LB. This evaluation will allow me to see if my model is generalized to a WSI I have not seen before.</li> <li>I got a public LB score of 0.549 using models <strong>without dilation and dataset 2</strong>. This gave me the confidence to not shake down.</li> </ul> <p>Thx!</p> <ol> <li>How many rounds pseudo labeling have you done?<ul> <li>Only once. Repeating the procedure does not work for me.</li></ul></li> <li>Have you used all instances in your pseudo labels? If not, how you filtered your pseudo labels?<ul> <li>I used detection results with a score of 0.7 or higher as pseudo labels.</li></ul></li> <li>After pseudo labeling, did the 5 folds split only on dataset 3 or the hole mixed dataset?<ul> <li>Pseudo labels were created for each pretrained model individually, and used for each training of corresponding fold to avoid leakage.</li></ul></li> <li>How you mixed the labels for dataset 2?<ul> <li>Simply add pseudo labels to ground truth annotations. Thus there would be many similar masks but I did not think this brings a negative impact.</li></ul></li> </ol> <p>Thx!</p> <ol> <li>What is the CV of the model trained on dataset 1?<ul> <li>Average of best val scores (segm_mAP_60) is 0.7004. This score is with resize TTA (1024, 1536) and without hvflip TTA.</li></ul></li> <li>Why did you train the model only datasets 1 to create pseudo labels for dataset 3? The number of dataset 1 is very small, so I think adding dataset 2 will improve the accuracy of the pseudo-labels.<ul> <li>I thought that the poor quality (in terms of missing masks and inaccurate mask regions) of dataset 2 would result in poor quality pseudo labels.</li></ul></li> <li>Did you use all of dataset 2, dataset 2 with pseudo-labels, and dataset 3 with pseudo-labels? The number of data is about 400 (dataset 1), 1200 (dataset 2), and 5500 (dataset 3). 
If all data are used, the ratio of training data would be 320 : 1200+1200 : 5500.<ul> <li>I used dataset 1 with gt labels, dataset 2 with pseudo labels + gt labels, and dataset 3 with pseudo labels. In training, I oversampled dataset 1 and 2 for four times for balance.</li></ul></li> <li>Did you use all of the instances output from each model to create pseudo label?<ul> <li>Partially Yes. I used only detection results with higher scores but I did not care about overlapping. In model inference, NMS is already performed to some extent.</li></ul></li> </ol> <ol> <li>Does 0.562 private correspond to 0.575 public?<ul> <li>No. I selected a submissioin with public LB score of 0.574 considering local CV scores and my intuition.</li></ul></li> <li>"the difference was reduced to 0.02" means 0.560 and 0.562 private for non-dilatation and dilation, doesn't it?<ul> <li>No. The selected two submissions are:</li></ul><ol> <li>with dilation: public 0.574, private 0.562</li> <li>without dilation: public 0.559, private 0.562</li></ol><ul> <li>I intended the difference in scores for public LB (because we could not see private scores during competition)</li></ul></li> </ol> <p>Thx!</p> <ul> <li>2.x. I have not yet migrated configs from 2.x to 3.x.</li> <li>random kfold</li> <li>I tried mask2former, but failed to make them learn well.</li> </ul> <p>The implementation is somewhat complex but you can simply write ensemble detector class in notebook, and there is no need to modify mmdet itself.</p> <p>The implentation is something like this: <a href="https://github.com/amirassov/kaggle-imaterialist/blob/f1ae37100801203500d20119b9de7e19b0d89a1c/mmdetection/mmdet/models/detectors/ensemble_htc.py#L7" target="_blank">https://github.com/amirassov/kaggle-imaterialist/blob/f1ae37100801203500d20119b9de7e19b0d89a1c/mmdetection/mmdet/models/detectors/ensemble_htc.py#L7</a></p> <blockquote> <p>So the folds are unchanged once you splitting the dataset 1, and use a single fold-model to create fold-specific pseudo labels?</p> </blockquote> <p>Exactly.</p> <p>Thanks to HuBMAP for hosting the exciting competition, and congrats to all prize and medal winners!</p> <ul> <li>Summary<ul> <li>Mask R-CNN model trained with dataset1, 2, 3 (pseudo labels)</li></ul></li> <li>Training pipeline<ul> <li>Train models with dataset 1 (5 folds)</li> <li>Create pseudo labels for dataset 2, 3 using the above models (for each fold)</li> <li>Train models with dataset 1, 2, 3 (5 folds)</li> <li>For dataset 2, both original (dilated) annotations and pseudo labels were used</li></ul></li> <li>Model<ul> <li>Mask R-CNN (Swin Transformer backbone, HTC RoI head)</li></ul></li> <li>Augmentation<ul> <li>Random resize (768-1536), flip, Rot90, RandomBrightnessContrast, HueSaturationValue</li></ul></li> <li>TTA<ul> <li>Resize (1024, 1536), hvflip</li></ul></li> <li>Ensemble<ul> <li>Ensemble on both region proposal and RoI head</li> <li>See "ensemble detection model" part of this solution</li> <li><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/298146" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/298146</a></li></ul></li> <li>Post-processing<ul> <li>Dilation</li> <li>Remove small masks</li> <li>Remove masks that contain glomerulus regions</li></ul></li> <li>Does not work for me<ul> <li>Train with test images' pseudo labels (train in submission)</li> <li>External dataset <a href="https://data.mendeley.com/datasets/m2t49zf6xr/1" 
target="_blank">https://data.mendeley.com/datasets/m2t49zf6xr/1</a></li> <li>YOLOv8</li> <li>Puzzle in submission</li> <li><a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/417314" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/417314</a></li></ul></li> <li>Dilate or not dilate<ul> <li>I have found experimentally that when using only dataset1 for training, the score is higher without dilation than with dilation. Therefore, I suspected that the success of dilation was brought from noisy dataset2 and was an overfitting method to LB. Thus I have tried to minimize the difference in score with and without dilation by using pseudo labels and dilated annotation masks for dataset2.</li> <li>In the first submission, the dilation score was 0.1 better than without dilation, but in the final submission, the difference was reduced to 0.02. However, the submission with dilation was still better for both public and private LBs.</li></ul></li> </ul>
HuBMAP - Hacking the Human Vasculature
8th Place Solution: single model with heavy augmentation + tuned threshold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Hubmap for hosting such a nice competition and to the many helpful notebooks and discussions.</p> <ul> <li>Summary<ul> <li>Train using all train data with the yolov8x-seg model with strong augmentation</li> <li>The threshold for converting mask prob to binary seems to be a key for scores</li></ul></li> <li>Training<ul> <li>yolov8x-seg model (with settings below) with all train data</li> <li>Trained all 3 classes </li> <li>2-fold validation <ul> <li>fold1: wsi1, 3</li> <li>fold2: wsi2, 4</li> <li>The key insight for me here was that the optimal value of the mask threshold varies considerably between fold1 and fold2. For example, in one experiment, the following scores were obtained:<table> <thead> <tr> <th>mask_threshold</th> <th>fold1 val</th> <th>fold2 val</th> </tr> </thead> <tbody> <tr> <td>0.2</td> <td><strong>0.349</strong></td> <td>0.35</td> </tr> <tr> <td>0.5</td> <td>0.211</td> <td><strong>0.481</strong></td> </tr> </tbody> </table></li> <li>Therefore, for the final 2 submissions, I thought it was a safe bet to choose different thresholds for masks</li></ul></li> <li>The yolov8 setup is as follows</li></ul></li> </ul> <pre><code> </code></pre> <ul> <li>Inference<ul> <li>image size: 768 (better than 512, why?)</li> <li>as noted above, two different thresholds were used<ul> <li>thresh 0.5: private <strong>0.56</strong>, public: 0.391</li> <li>thresh 0.2: private: 0.369, public: 0.506</li></ul></li></ul></li> <li>Did not work for me<ul> <li>combined with semantic segmentation</li> <li>Training on large image sizes (768, 1024)</li> <li>tta(rot90) (a bug in my implementation?)</li></ul></li> <li>Did not try<ul> <li>Pseudo labeling in instance segmentation model</li> <li>Stain augmentation</li> <li>WBF</li> <li>And many more models</li></ul></li> </ul>
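<p>As a hedged illustration of a yolov8x-seg training setup of the kind described above: every value below (epochs, batch size, augmentation strengths) and the dataset yaml name are assumptions, not the author's configuration.</p> <pre><code>from ultralytics import YOLO

model = YOLO("yolov8x-seg.pt")
model.train(
    data="hubmap.yaml",  # hypothetical dataset yaml listing the 3 classes
    imgsz=512,
    epochs=50,
    batch=8,
    degrees=90.0,        # illustrative geometric augmentation
    flipud=0.5,
    fliplr=0.5,
    mosaic=1.0,
)
</code></pre>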
HuBMAP - Hacking the Human Vasculature
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Basically, the segmentation model outputs mask predictions of the same size (same width and same height) as the input bbox. Therefore, by dilating the bbox, the mask prediction is also dilated compared to the original bbox input.</p> <p>Thank you for the question.</p> <blockquote> <p>"As a mask of bboxes, I did not use the prediction results of the detection models, but used the bboxes obtained from the original annotations." Have you tested both options and settled on this one?</p> </blockquote> <p>No, only this option was tested. The 1st place solution in the satorius competition used this method, so I was confident in my choice.</p> <blockquote> <p>"Unlike the detection models, "unsure" label was also used for training." Was it a result of testing both options or you had some prior intuition about it?</p> </blockquote> <p>No, only this option was tested, too. I thought that the benefit of increasing the amount of data would be greater than the loss of data quality due to using "unsure" in the segmentation task.</p> <blockquote> <p>"A score threshold of 0.5 was used for binarization.". Another option would be to not have binarization at all. Is binarization always better or you tested both options?</p> </blockquote> <p>You may have misunderstood something… I don't think there is an option to not binarize the mask, since the final submission mask must be binarized for RLE in this competition.</p> <p>Thank you for the question!<br> Yes, that is the correct understanding.</p> <p>Thanks to the organizers and congrats to all the winners!</p> <h2>Overview</h2> <p>From the top 1&amp;2 solutions of the <a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation" target="_blank">Sartorius competition</a>, I decided to go with a two-stage pipeline of object detection and semantic segmentation instead of using a one-stage model (e.g. Mask-RCNN) in the early in the competition. I think that the main advantage of the two-stage pipeline is the ease of the ensemble and TTA.</p> <p><img src="https://i.postimg.cc/3xDzz1G0/hubmap1.png" alt=""></p> <h2>Detection part</h2> <h3>Update of dataset2 bbox-level annotation</h3> <p>As been pointed out in the discussion, the dilation significantly improved LB score in models trained from both dataset1 and dataset2. However, models trained only from dataset1 did not show this effect. This led me to believe that the annotations of dataset2 were made smaller than those of dataset1. Considering that there are several types of blood vessels and the possibility that the annotations of them are not uniformly small, I took the approach of updating the bbox-level annotations of dataset2 to dilate them by models trained only from dataset1.</p> <p><img src="https://i.postimg.cc/MK36b97C/hubmap2.png" alt=""></p> <p>The update was performed by replacing the original annotations with a predicted ones that met the following conditions. Basically, the updated bboxes is larger than the original ones. 
The total number of bboxes in dataset2 does not change by the updating.</p> <ol> <li>IoU &gt; 0.4</li> <li>FP / (TP + FP) &gt; 0.1</li> </ol> <p><img src="https://i.postimg.cc/jSGRBCkv/hubmap3.png" alt=""></p> <p>In the model trained from the updated dataset2 together with dataset1, the LB improvement by the dilation has almost disappeared and the LB score above 0.5 was achieved without the dilation.</p> <h3>CV strategy</h3> <p>The CV was carried out by the following special two-fold division.</p> <p><img src="https://i.postimg.cc/jSWs8yvL/hubmap4.png" alt=""></p> <h3>Model training</h3> <p>I trained 2-class (blood_vessel, glomerulus) detection models. "unsure" label was ignored. I modified the source code of YOLOv5/v7/v8 and add a 90 degree random rotation augmentation. The following 5 models x 2 folds (total 10 models) were used in the final submission.</p> <table> <thead> <tr> <th>Model</th> <th>Input size</th> </tr> </thead> <tbody> <tr> <td>YOLOv5x6</td> <td>512</td> </tr> <tr> <td>YOLOv7x</td> <td>512</td> </tr> <tr> <td>YOLOv8l</td> <td>512</td> </tr> <tr> <td>YOLOv8x</td> <td>512</td> </tr> <tr> <td>YOLOv8l</td> <td>768</td> </tr> </tbody> </table> <h3>Inference</h3> <p>I made a huge ensemble of 10 models x 16 TTAs since I considered the accuracy of detection to be more important than one of the segmentation. The small number of test images made it possible. 10 x 16 = 160 detection results were merged by WBF.</p> <ul> <li>16 TTAs: 8 for combinations of h-flip, v-flip, 90deg rotation, 2 for 2 scales (base size, base size + 64px), 8 x 2 = 16</li> <li>IoU threshold of each model's NMS: 0.6</li> <li>IoU threshold of WBF: 0.7</li> </ul> <h2>Segmentation part</h2> <h3>CV strategy</h3> <p>Almost the same as detection models, except that dataset2 is not updated.</p> <h3>Model training</h3> <p>As a mask of bboxes, I did not use the prediction results of the detection models, but used the bboxes obtained from the original annotations. Unlike the detection models, "unsure" label was also used for training. In the final submission, EfficientNetB1-Unet and EfficientNetB2-Unet was used (each 2 folds, total 4 models).</p> <h3>Inference</h3> <p>TTA was not used because of run-time constraints. A score threshold of 0.5 was used for binarization.</p> <h2>Strategy of final submission</h2> <p>The effect was smaller by updating dataset2, but the small dilation improved the LB score slightly. I implemented the dilation not by cv2.dilate for the final masks, but by increasing the size of bboxes by a percentage. In my final submission, 3% of bbox dilation increased my LB score about 0.005. I used 3% dilation in one of the two final submissions and not in the other (there are other differences besides the dilation).</p> <table> <thead> <tr> <th>Submission</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>w/ 3% dilation</td> <td>0.580</td> <td>0.549</td> </tr> <tr> <td>w/o 3% dilation</td> <td>0.572</td> <td>0.560</td> </tr> </tbody> </table> <h2>Things tried but not worked</h2> <ul> <li>Semi-supervised learning on dataset3</li> </ul>
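<p>The bbox-percentage dilation mentioned at the end can be sketched as below: each detected box is grown by a small fraction around its centre before the segmentation model predicts its mask, which effectively enlarges the final mask. Whether the 3% is applied per side or in total is not specified, so the exact formula here is an assumption.</p> <pre><code>def dilate_bbox(x0, y0, x1, y1, ratio=0.03, img_size=512):
    """Grow a box by `ratio` of its size around its centre (sketch)."""
    dx = (x1 - x0) * ratio / 2.0
    dy = (y1 - y0) * ratio / 2.0
    return (max(0.0, x0 - dx), max(0.0, y0 - dy),
            min(float(img_size), x1 + dx), min(float(img_size), y1 + dy))
</code></pre>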
HuBMAP - Hacking the Human Vasculature
Enhancing Kidney Microvascular Structure Segmentation with Morphological Operations - Private LB .451
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Summary:</strong><br> In this competition, the goal is to segment microvascular structures in healthy human kidney tissue slides. To achieve this, a model is trained on 2D PAS-stained histology images. I used morphological operations to significantly boost accuracy, however I didn't select this model but thought I'd share the approach for posterity. </p> <p>I forked a popular notebook and made minor changes: <br> <a href="https://www.kaggle.com/code/skeller/hubmap-mmdet3-1-single-fold-inference/notebook" target="_blank">https://www.kaggle.com/code/skeller/hubmap-mmdet3-1-single-fold-inference/notebook</a></p> <p><strong>Introduction:</strong><br> Segmenting microvascular structures, such as capillaries, arterioles, and venules, in kidney tissue slides is a challenging task due to their intricate nature and complex patterns. While deep learning models can achieve impressive results, incorporating domain-specific knowledge can further refine the predictions. In this approach, morphological operations are applied to the binary masks generated by the model to improve the accuracy of segmenting these delicate structures.</p> <p><strong>Morphological Operations for Microvascular Segmentation:</strong><br> Morphological operations are image processing techniques that alter the shape and structure of objects in an image. In the context of this competition, binary masks are used to represent the presence or absence of microvascular structures in the kidney tissue slides.</p> <p><strong>Morphological Gradient:</strong><br> The morphological gradient is calculated by taking the difference between the dilated and eroded versions of the binary mask. It highlights the boundaries and edges of the structures. This operation helps in refining the masks and enhancing the edges of the segmented structures.</p> <p><strong>Combining Multiple Operations:</strong><br> To further improve the segmentation, a combination of morphological operations is applied to the binary masks. Specifically, the code uses erosion, opening, and closing operations iteratively. These operations help remove noise, close small gaps, and smooth the masks, resulting in cleaner and more accurate segmentations.</p> <p><strong>Weighting Glomerulus Class:</strong><br> Glomerulus structures are of particular importance in kidney tissue analysis. To ensure accurate segmentation of these structures, the code assigns a higher score (weight) to the glomerulus class during the final prediction. This weighting ensures that glomeruli are given more significance in the final segmentation.</p> <p><strong>Conclusion:</strong><br> I didn't spend much time on this contest (last 5 days) but saw some decent results with morphological operations with a deep learning model to accurately segment microvascular structures in kidney tissue slides. By enhancing the binary masks through morphological gradient calculation and applying a combination of erosion, opening, and closing operations, the approach significantly boosted accuracy. Additionally, the approach effectively assigns a higher weight to the glomerulus class, further improving the segmentation performance. 
This combination of deep learning and domain-specific image processing techniques showcases the power of leveraging morphological operations for microvascular structure segmentation in histology images.</p> <p><strong>Methodology's Final Private Leaderboard Score: 0.451</strong></p>
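<p>A minimal sketch of the morphological post-processing described above, using OpenCV. The kernel size and iteration counts are illustrative assumptions rather than the exact values used in the submission.</p> <pre><code>import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)

def refine_mask(binary_mask):
    """binary_mask: HxW array of 0/1 values for one predicted structure."""
    m = binary_mask.astype(np.uint8)
    # Morphological gradient = dilation minus erosion, highlighting boundaries
    gradient = cv2.morphologyEx(m, cv2.MORPH_GRADIENT, kernel)
    # Erosion, opening and closing to remove noise and close small gaps
    m = cv2.erode(m, kernel, iterations=1)
    m = cv2.morphologyEx(m, cv2.MORPH_OPEN, kernel, iterations=1)
    m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel, iterations=1)
    # Add the boundary back so the refined mask keeps sharp edges
    return np.clip(m + gradient, 0, 1).astype(np.uint8)
</code></pre>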
HuBMAP - Hacking the Human Vasculature
Private 47th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our solution is an ensemble of yolov8m-seg models with different inputs: 512, 640, 1024. We pretrained our models using "unknown" and "glomerulus" as blood vessels since they have similar structures with the target. Then, we retrained our models with the correct blood vessels without any "unknown" or "glomerulus" labels.</p> <p>For test time augmentations (TTAs), we used rotation (90), horizontal flip, vertical flip, and diagonal flip. Additionally, we lowered the pixel threshold to 0.45, allowing the model to decide where and how to dilate. However, it didn't perform as well alone as dilation on the public leaderboard, so we kept dilation.</p> <p>To ensemble masks, we used the code from <a href="https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion" target="_blank">https://www.kaggle.com/code/mistag/sartorius-tta-with-weighted-segments-fusion</a>.</p> <p>We also utilized DBSCAN to crop images where major predictions are located. This technique was used in the <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/329131" target="_blank">image_matching comp 2022</a>. The aim here is to fine-tune the predicted masks with their confidence( random cropping should also work).</p> <p>Additionally, we performed fine-tuning of the predicted masks using SAM.</p> <p>What did not work:</p> <ul> <li>TTAs of scaling didn't work at all.</li> <li>Normalization using staintools didn't work in the public leaderboard (resulted in a drop of about 0.008), while it worked in the private leaderboard (resulted in a gain of +0.01).<br> Our best private score was 0.561, an ensemble of yolox(512) + yolom(1024) with a pixel threshold of 0.45, no_dilation.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13672841%2F5ef92801ea5f6dc3f411a37cb1b365b1%2Fresult.png?generation=1690961885406247&amp;alt=media" alt=""></p> <p>Yes, unfortunately, we didn't select it. To be honest, there was no clue that we should select it.</p>
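<p>A sketch of the DBSCAN-based cropping idea mentioned above: predicted box centres are clustered, and the tile is cropped around the densest cluster so the masks there can be re-predicted in more detail. The eps/min_samples values and the padding are illustrative assumptions.</p> <pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def crop_around_predictions(boxes_xyxy, img_size=512, pad=32):
    """boxes_xyxy: Nx4 array of predicted boxes on one tile."""
    centres = np.stack([(boxes_xyxy[:, 0] + boxes_xyxy[:, 2]) / 2.0,
                        (boxes_xyxy[:, 1] + boxes_xyxy[:, 3]) / 2.0], axis=1)
    labels = DBSCAN(eps=64, min_samples=3).fit(centres).labels_
    valid = labels != -1
    if valid.sum() == 0:
        return 0, 0, img_size, img_size  # no dense cluster, keep the full tile
    main = np.bincount(labels[valid]).argmax()
    member = boxes_xyxy[labels == main]
    x0 = max(0, int(member[:, 0].min()) - pad)
    y0 = max(0, int(member[:, 1].min()) - pad)
    x1 = min(img_size, int(member[:, 2].max()) + pad)
    y1 = min(img_size, int(member[:, 3].max()) + pad)
    return x0, y0, x1, y1
</code></pre>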
HuBMAP - Hacking the Human Vasculature
Public 12th / Private 26th Place Solution for the HuBMAP - Hacking the Human Vasculature Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much for hosting this competition. We weren't able to win a gold medal, but participating in our first instance segmentation competition was a very educational experience.</p> <h2><strong>Context section</strong></h2> <ul> <li><strong>Business context</strong>: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature</a></li> <li><strong>Data context</strong>: <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data" target="_blank">https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/data</a></li> </ul> <h2><strong>Overview of the Approach</strong></h2> <p>Our model is an ensemble of Mask R-CNNs using Swin Transformer-S and Swin Transformer-T as backbones. </p> <h4>Our solution:</h4> <ul> <li><strong>Models:</strong><ul> <li>4 MMdet-based models, each trained on different folds (2 variations for each model):</li> <li>2x Mask R-CNN using Swin Transformer-S Backbone</li> <li>2x Mask R-CNN using Swin Transformer-T Backbone</li></ul></li> <li><strong>Training Method: 2-stage approach:</strong><ul> <li><strong>Stage 1 :</strong> Using dataset2 for initial training</li> <li><strong>Stage 2 :</strong> Using dataset1 to further train and fine-tune the models</li></ul></li> <li><strong>Ensemble Method:</strong><ul> <li><strong><a href="https://arxiv.org/pdf/1910.13302.pdf" target="_blank">Weighted Box Fusion(WBF)</a>:</strong> Applied to the Regions of Interest (ROI) for the final bounding boxes</li> <li><strong><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">Weighted Masks Fusion(WMF)</a>:</strong> Applied to the final output for both bounding boxes and masks</li></ul></li> <li><strong>Test Time Augmentation</strong><br> Below is an overview of our solution, as illustrated in the following diagram.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7135891%2F05b266ee749cc16baf27278421f93d1a%2Foverview_figure.svg?generation=1691593014306642&amp;alt=media" alt=""></li> </ul> <h2><strong>Details of the Submission</strong></h2> <h3><strong>Two-Stage Training</strong></h3> <p>The public leaderboard was composed of wsi3 and wsi4 (wsis included in the training data of dataset2), and the private leaderboard was composed of wsi5, not included in the training data. This setup meant that the public leaderboard might not necessarily be reliable.<br> Because of this, it was necessary to build a validation method that could mimic the evaluation on the private leaderboard, and also to effectively utilize the wsi from dataset2 for training. 
Therefore, we adopted a CV strategy that divided the data by wsi and implemented a two-stage training process.</p> <h4>1st stage: Pretraining on dataset2</h4> <ul> <li>fold1:<ul> <li>train: wsi1, wsi3, wsi4(dataset2)</li> <li>validation: wsi2(dataset2)</li></ul></li> <li>fold2:<ul> <li>train: wsi2, wsi3, wsi4(dataset2)</li> <li>validation: wsi1(dataset2)</li></ul></li> </ul> <h4>2nd stage: Fine-tuning on dataset1</h4> <ul> <li>fold1:<ul> <li>train: wsi1(dataset1)</li> <li>validation: wsi2(dataset1)</li></ul></li> <li>fold2:<ul> <li>train: wsi2(dataset1)</li> <li>validation: wsi1(dataset1)</li></ul></li> </ul> <p>Evaluating with these CV splits helped us judge whether the public leaderboard score was reasonable.<br> However, we also encountered unexpected cases. Within these CV splits, we optimized parameters such as mask probability thresholds and types of TTA with the intention of improving validation mAP, expecting an improvement on the private leaderboard. Contrary to our expectations, there were cases where these adjustments had a negative impact on both the public and private leaderboard scores.</p> <h3><strong>Training Settings</strong></h3> <h4><strong>Model Training Configuration</strong></h4> <ul> <li>Our models were initialized using weights pretrained on the COCO dataset.</li> <li>Training was conducted with the AdamW optimizer.</li> <li>The learning rate was set at 0.0003 for Stage 1 (pretraining) and 0.00001 for Stage 2 (fine-tuning).</li> </ul> <h4><strong>Annotations in Training</strong></h4> <ul> <li>During the training step, annotations labeled as "blood_vessel" and "unsure" were both treated as "blood_vessel."</li> <li>The "glomerulus" annotation was not used in our training step.</li> </ul> <h4><strong>Data Augmentation</strong></h4> <p>We adopted the following data augmentations:</p> <ul> <li>RGB Shift</li> <li>RandomBrightnessContrast (with brightness and contrast limits of 0.1 and 0.4, respectively)</li> <li>RandomRotate90</li> <li>RandomFlip</li> <li>RandomChoiceResize (with sizes ranging from 480 to 1200)</li> <li>RandomCrop</li> </ul> <h3><strong>Ensemble Methodology</strong></h3> <p>We adopted an ensemble method that integrates the Regions of Interest (ROI) output by the models through Weighted Box Fusion (WBF). The ensemble of ROIs was conducted only across outputs from the same fold that employed the same Test Time Augmentation (TTA) method (2 models each). Subsequently, we integrated all the output masks and boxes within the same fold using Weighted Mask Fusion (WMF). 
Finally, we ensembled the outputs for each fold through WMF.</p> <h3><strong>Test Time Augmentation</strong></h3> <p>We adopted Horizontal Flip and Vertical Flip as our Test Time Augmentation (TTA) techniques.</p> <h3><strong>Postprocessing</strong></h3> <p>Masks that overlapped with glomerulus by more than 90% were removed.</p> <h3><strong>What Didn't Work</strong></h3> <ul> <li><strong>Sliding Window</strong>: We initially tried a sliding window technique but found it ineffective.</li> <li><strong>Using Context from Neighbor Tile</strong>: This approach did not bring the desired results.</li> <li><strong>Rotate TTA</strong>: Using rotation in multiples of 90 degrees increased validation mAP but did not improve the leaderboard score.</li> </ul> <h2><strong>Sources</strong></h2> <ul> <li><a href="https://arxiv.org/pdf/1910.13302.pdf" target="_blank">https://arxiv.org/pdf/1910.13302.pdf</a></li> <li><a href="https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998" target="_blank">https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/discussion/297998</a></li> </ul>
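<p>As a concrete illustration of the postprocessing step above (removing masks that overlap a glomerulus by more than 90%), a minimal sketch might look like the following; the boolean-mask inputs and the helper name are assumptions, not the team's actual code.</p> <pre><code>import numpy as np

def filter_by_glomerulus(pred_masks, glomerulus_mask, max_overlap=0.9):
    """pred_masks: list of HxW boolean arrays; glomerulus_mask: HxW boolean array."""
    kept = []
    for m in pred_masks:
        area = m.sum()
        if area == 0:
            continue  # drop empty masks outright
        overlap = np.logical_and(m, glomerulus_mask).sum() / area
        if overlap &lt;= max_overlap:
            kept.append(m)
    return kept
</code></pre>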
HuBMAP - Hacking the Human Vasculature
public 9th / private 295th: achieved private 0.575 with a single modification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for your question!</p> <p>The "voting" here means that each model's mask is first binarized, and a pixel is kept in the final binary mask only if the majority of models predict it.<br> So the former of the two pieces of code is my method.</p> <p>After looking at the revised comments, I understood what you meant. Indeed, if we assume the process happens before loss calculation, it becomes almost the same argument as max-pooling, and I don't know which one is more suitable.</p> <p>That's right, in cases where mAP is used as an evaluation metric (requiring confidence scores), one can say that WBF is a superior process to NMS. If we aggregated scores from multiple models like stacking and used gradient boosting, we might be able to predict even better scores, but the process becomes too complicated. So calculating mean values is reasonable.</p> <p>About WBF.<br> This is the same as my understanding.<br> Yes, it is similar to max and mean pooling lol.<br> However, mAP can be improved by ordering the predictions correctly. Therefore, I think it makes little sense to use max to aggregate predictions. The implication is the same as taking the max when ensembling many models in image classification. In such cases, taking the mean or median is the most basic and robust method, isn't it?<br> In object detection, as in image classification, it seems to be a good idea to calculate the confidence score using the mean or a weighted mean.</p> <p>You are right about the second point. <br> Furthermore, we were informed in advance that only ds1 was used in the test data, and as ds1 and ds2 are annotated differently, it was more efficient to calculate CVs according to ds1 only. So I should have spent more time considering how to fit ds1, as in the top solutions…</p> <p>Thank you for your question!</p> <p>The base is the WBF library.<br> When clustering boxes, the boxes and scores are processed together, but in the original code, masks could not be used as inputs.<br> Therefore, when acquiring the combinations, I made it so that masks are clustered along with them.<br> I thought it might be good to calculate the IoU using masks, but like WBF, I calculated the IoU using boxes.<br> Usually, a single model will output multiple overlapping boxes. Generally, after a model's output, techniques like NMS are used to keep only the high-scoring ones before performing WBF.<br> At the time, I didn't think to apply NMS, so I decided to use the same number of top-scoring boxes as the number of models.<br> I believe this is essentially similar, but I think this approach might result in lower confidence scores when less accurate masks are output in some cases. I have not been able to examine its superiority compared to using NMS. I would like to try it when I have time, but it seems like it will be difficult for a while.</p> <p>CV strategy. 
This is my biggest failure.<br> Basically, I divided the fold into six (although I intended to divide the folds into five, due to typo, it was divided into six…), and essentially calculated only with fold0.<br> I mixed up dataset1 and dataset2 in the CV, and I now think this was a significant mistake.<br> Furthermore, I also conducted studies with a small number of ds1 only in fold0.<br> This was because I had much work privately, and I had to choose whether to experiment with fold0-5 or reduce the number of experiments.<br> In retrospect, I should have prioritized robustness even if reducing experiments, carried out training with all folds in all experiments, and calculated the CV. <br> That would have prevented this situation…</p> <p>Despite achieving 9th place in the public leaderboard, I experienced a frightening shakeup, ending up at 295th place in the private leaderboard. However, many of the techniques I used proved effective in the private leaderboard, enabling me to achieve a score of 0.575 (equivalent to 4th place) by making just one modification to the 295th place code. Below, I will share the methods used and my reflections.</p> <h3>Method Overview:</h3> <p>I primarily utilized "mmdet" and employed ensemble learning with numerous models. I had hoped to use YOLO v8 and v7 as well, but the host did not disclose their availability, so I had to abandon the idea.</p> <p>For the ensemble, I made modifications to the "WBF" (Weighted Box Fusion) to use it with masks. Additionally, the major innovation was not limited to instance segmentation models but also involved using object detection models. I performed WBF based on bounding boxes and made mask predictions solely using the instance segmentation models. I used the object detection models only to calculate confidence scores. As you can see from the predictions, the models made far more predictions than needed. Given the nature of the evaluation metric, mAP, it was evident that properly aligning confidence scores could significantly improve the overall score.</p> <h3>Models</h3> <p>The following is a list of the models used:</p> <p>Instance Segmentation Models:</p> <ul> <li>Mask2Former</li> <li>MaskRCNN + various ConvNext v1 and v2</li> <li>Cascade RCNN + various ConvNext v1 and v2</li> <li>MaskRCNN + Swin Transformer<br> Other: MaskDino doesn’t work well.</li> </ul> <p>Object Detection Models:</p> <ul> <li>DINO</li> <li>Mask2Former (used only for object detection part)</li> <li>Diffusion Det</li> <li>ViT Det<br> Other: yolox doesn’t work well</li> </ul> <h3>Major Flaw:</h3> <p>I combined the outputs of the mask models using addition during post-processing. This resulted in improvements in ds1 (average IoU) and the public cross-validation (CV) score. However, as many of you may have noticed, when applying post-processing, the score on the private dataset degrades significantly. This indicates that the ds1 and private datasets have entirely different annotation tendencies. The annotations provided for ds1 were quite rough, and the dilation technique seemed effective due to this. One of the hosts mentioned that annotations were done in the same manner for both public and private datasets, but this information turned out to be inaccurate.</p> <h3>Single modification to achieve private 0.575:</h3> <p>Anticipating that the annotations for the private dataset might be more substantial, I submitted two versions: one with post-processing and another without. 
However, as mentioned earlier, during the ensemble mask generation, I was merely adding the masks instead of using averaging or voting. This method worked well for ds1 and the public dataset. On the other hand, I should have easily foreseen that if the private dataset had accurate and precise annotations, this approach could significantly reduce the score (which I overlooked).<br> Therefore, instead of adding the masks, I changed to using voting. This simple change dramatically improved the score from 0.397 to 0.575.</p> <div> <img src="https://pbs.twimg.com/media/F2avc0eaEAEmdvL.jpg" alt="score"> </div> <h3>Reflections:</h3> <p>One of the points I regret is being content with just having two submissions: one with post-processing and one without. Since I could have diversified the risks, I should have prepared submissions that could adapt to both small and accurate annotation masks.</p> <p>Of course it's OK!</p> <p>However, this was written by me when I was a mmdetection beginner, so there are many mistakes. For example, the size of the inference is improperly 1333x800 because it is taken from the base, it should be 1024x1024.<br> The scheduler is also unchanged.</p> <p>However, the basic setting is this one, so I think it will give some good results!</p> <p><a href="https://www.kaggle.com/code/yosukeyama/hubmap23-mmdet-train-mask2former-1024/notebook" target="_blank">https://www.kaggle.com/code/yosukeyama/hubmap23-mmdet-train-mask2former-1024/notebook</a></p> <p>EDIT: I have checked past scores. Using this training note, even a single-fold and improperly size inference can earn a silver medal.</p>
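<p>To make the "voting" fix described above concrete, here is a minimal sketch (an illustration, not the author's exact code): instead of simply summing the binary masks from several models, keep only the pixels that a majority of the models predict.</p> <pre><code>import numpy as np

def vote_masks(binary_masks):
    """binary_masks: list of HxW {0,1} arrays, one per model."""
    stack = np.stack(binary_masks).astype(np.uint8)
    votes = stack.sum(axis=0)  # per-pixel number of models predicting foreground
    return (votes &gt; len(binary_masks) / 2).astype(np.uint8)
</code></pre> <p>According to the write-up, this single change in how the ensemble masks were aggregated is what moved the private score from 0.397 to 0.575.</p>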
HuBMAP - Hacking the Kidney
12th place solution (0.948/0.949 strategies)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used a simple ensemble of three types of models: three b7 models<strong>[1]</strong> from 5 folds, one b7 model<strong>[2]</strong> with the best score among 5 folds trained with other seeds, and four b5 models<strong>[3]</strong> trained with d488 labels.<br> In this competition, pseudo labels had a big effect on our submissions. In the following I will introduce my strategies.</p> <p>First, we trained a b7 model using Zhao's labels and the LB reached 0.934. Then I combined this model with models<strong>[2]</strong> and reached 0.935. I drew some missed targets based on the results for d488 until we reached 0.937, which became the pseudo labels used in models<strong>[3]</strong>.</p> <p>According to the hosts' comments, I strongly believed in the usage of pseudo labels, but in our experiments on the public LB, I found that using d488 could hurt the prediction for 575. Thus I had to deal with it carefully, since TN is more fatal than FN. I liked it but also feared it, so I used them (models[3]) as part of the model ensembles and tuned their weight to lessen their influence, and divided the submission into two parts to reduce marginal mistakes (overlap = 230, overlap = 32).</p> <p>Three models<strong>[1]</strong> are ensembled in the first part. Due to technical limitations, I chose the TTA strategies as TTA[0] -&gt; model[0], TTA[1] -&gt; model[1], TTA[2] -&gt; model[2]. Then I combined their results using a weighted average to make the results of the first part more stable (a small weighted-average sketch appears at the end of this post).</p> <p>Five models (models<strong>[2]</strong>, models<strong>[3]</strong>) are ensembled in the second part. I used models<strong>[2]</strong> as a guide in this part. The TTA strategies are TTA[0] -&gt; model[0] (Guider), TTA[1] -&gt; [model[1], model[2]], TTA[2] -&gt; [model[3], model[4]]. Since the overlap is very small, I averaged these results with the results of the first part.</p> <p><strong>Compared to not using pseudo labels, the private score increased by about +0.003 ~ +0.004 with our strategies (0.945 - 0.949). The parameters of my models weren't well tuned, so a single model didn't get a good score (0.942).</strong></p> <p>This is my first competition in CV and my second on Kaggle. My teammates were busy with graduation and had no time in the later stage of this long-term competition. Many thanks to <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/wrrosa" target="_blank">@wrrosa</a> and other competitors for their excellent notebooks, which made me familiar with the process of a CV competition in a few months. Some of my logic and ideas might look funny since the strategies are sometimes based on my intuition and guesses, and I still need to learn and improve.</p> <p>Notebook link: <a href="https://www.kaggle.com/puyuzhou/test-xx?scriptVersionId=62471723" target="_blank">https://www.kaggle.com/puyuzhou/test-xx?scriptVersionId=62471723</a><br> Augmentation link: <a href="https://www.kaggle.com/southsakura/kaggle" target="_blank">https://www.kaggle.com/southsakura/kaggle</a></p> <p>You are very welcome!</p>
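<p>As a concrete illustration of the weighted averaging used to combine the per-model TTA predictions above, a minimal sketch could look like this; the weights and the variable names in the usage comment are placeholders, not the tuned values from the write-up.</p> <pre><code>import numpy as np

def weighted_ensemble(prob_maps, weights):
    """prob_maps: list of HxW probability arrays; weights: one weight per map."""
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()  # normalize so the blended output stays in [0, 1]
    return np.tensordot(w, np.stack(prob_maps), axes=1)

# e.g. down-weighting the d488-pseudo-label models to lessen their influence
# blended = weighted_ensemble([p_guider, p_b5_a, p_b5_b], [0.5, 0.25, 0.25])
</code></pre>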
HuBMAP - Hacking the Kidney
17th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners and all participants who finished this competition.<br> A number of problems arose during the competition, and the competition was held for six months. I have to admit that it was quite a challenge for me to keep participating until the end.<br> My ranking was not so great, but I would like to share it with you.</p> <hr> <p>The resolution is not very good, so you may not be able to see the fine details, but the model I created is shown in the following figure.<br> <img src="https://f.easyuploader.app/20210511095415_66516e4b.jpg" alt=""></p> <p>I would like to explain the main points.</p> <p><strong>1. Cutting out patches</strong><br> Early in the competition, I recognized through my experiments that the difference in image size (original size to be cut out and image size after downscaling) might affect the score. Therefore, I chose several image sizes that were large enough to ensure a sufficient batch size for training, and eventually ensembled 2 models for 352 x 4 and 512 x 2 image sizes. <br> Also, I did not include the black and white regions, which are not very informative regions, in the training patches, referring to <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> kernel. <br> If we simply shifted the image by augmentation for the patch, we would have to do some kind of interpolation such as reflection for the missing regions. Since my experimental results showed that this was not a good idea, I shifted the image by a quarter of the patch size when cutting out the patches from the WSIs.</p> <p><strong>2. Preprocessing / Augmentation</strong><br> The images were simply normalized by 255, and then I trained the models on 5 folds, where each fold consists of 3 WSIs.<br> I don't know about the private WSIs, but at least some of the public images were darker than the training WSIs, or had some kind of dirt on them, so I applied a very strong augmentation as shown below.</p> <ul> <li>Horizontal/Vertical Flip (p=0.5)</li> <li>RandomRotate90 (p=0.5)</li> <li>Rotate (-40/+40, reflect101, p=0.5)</li> <li>ShiftScaleRotate (scale: -0.2/0.1, p=0.5)</li> <li>OneOf (p=0.5)<br> RandomBrightnessContrast (B: 0.5, C: 0.1)<br> HueSaturationValue (H: +-20, S: +-100, V: +-80)</li> <li>One of (p=0.5)<br> Cutout (holes: 100, size: 1/64, white RGB)<br> GaussianNoise</li> <li>One of (p=0.5)<br> ElasticTransform<br> GridDistortion (steps: 5, limit: 0.3)<br> OpticalDistortion (distort: 0.5, shift: 0.0)</li> </ul> <p>As a result, I think that the evaluation of the model with 5 folds by WSI and this strong augmentation contributed to the stable correlation between local CV and LB.</p> <p><strong>3. Training</strong><br> My training was very simple. <br> I used BCE + Adam to train ImageNet Pretrained EfficientNetB3 backbone U-Net. As mentioned in some discussions, the complexity of the model did not seem to have any effect, probably because it is a rather simple task. I also tried some losses such as Focal loss and Combo loss, but there was no noticeable gain, probably because I excluded the black and white parts beforehand. 
In addition, I don't use losses based on discrete metrics such as Dice loss for ensembles, so I thought that BCE would be sufficient for this project when there was no gain from focal loss. <br> Also, since I used the pseudo label in the latter half like many of the participants, I trained the image size combinations (group A and group B in the model pipeline figure) alternately to avoid overtraining. I used the pseudo label of the model trained in group A for training group B to keep the model from becoming overly dependent on the public data. However, since the pseudo label was almost meaningless for the private score, I think it was almost a meaningless attempt.<br> (I used the d48 hand label at the end of the competition, but I did not do any hand labeling by myself. I think that the d48 hand label actually contributed nothing to the improvement of the private score.)</p> <p>The following graph shows the correlation between the local CV and the public LB, and since the public LB is in increments of 0.001, I think the correlation is actually greater than what can be seen in the graph. There are three clusters, starting from the bottom one: no pseudo at all, pseudo without the d48 hand label, and pseudo based on the d48 hand label.<br> <img src="https://f.easyuploader.app/20210511104838_62546152.jpg" alt=""></p> <p><strong>4-5. Prediction + Tiling predicted patches</strong><br> I think the prediction was the same method as many of you used.</p> <ul> <li>TTA: Raw, HorizontalFlip, VerticalFlip</li> <li>Overlapping by 1/16 image size for tiling</li> <li>Naive averaging for ensemble (folds and models)</li> <li>Threshold optimization with train images (th: 0.40 for public LB, 0.44-0.48 for local CV; a small threshold-search sketch appears at the end of this post)</li> </ul> <p>In my experiments, I found that there was no significant difference in the scores if the overlap was 1/16 or more, so I decided to use that value to save inference time.<br> The only regret I have is that I set the threshold in the direction of a better public score. As I will show later, if I had set the threshold to the theoretical value determined by the train data, I would have been sure to get a gold medal.</p> <p><strong>6. Submission</strong></p> <ul> <li>Approximately 8 hours running (5 folds, 3 TTA, 2 models)</li> <li>Good correlation between local CV and public LB<br> (Although there are deviations…, maybe due to the notorious <code>d48...</code> image)</li> <li>Local CV: 0.9429<br> Public LB: 0.927<br> Private LB: 0.947</li> </ul> <p><strong>Finally…</strong><br> I think this competition was about how much you could believe in your local CV. If there is a domain shift between the training data and the test data, then <code>trust CV</code> is not always correct, so I don't think the participants who tuned their training more toward the public data are wrong. It would be nice if the host would mention the presence or absence of a domain shift at least a little in advance, but there is a limit to how much can be said, and as long as the amount of data is finite, it may be a permanent problem in data science.</p> <p>By the way, my submissions in descending order of private score were as follows. It would have been better if I had used the correct threshold value, but it was a good lesson for me that I didn't fully trust my local CV.<br> <img src="https://f.easyuploader.app/20210511105807_7a4f4f78.jpg" alt=""></p> <p>See you at the next competition!</p>
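<p>Here is a minimal sketch of the threshold optimization on train images mentioned in the prediction section above (an illustration, not the author's code): sweep candidate thresholds and keep the one that maximizes the global Dice score.</p> <pre><code>import numpy as np

def best_threshold(prob, target, candidates=np.arange(0.30, 0.61, 0.01)):
    """prob: flattened predicted probabilities; target: flattened {0,1} ground truth."""
    best_t, best_dice = 0.5, -1.0
    for t in candidates:
        pred = prob &gt; t
        inter = np.logical_and(pred, target).sum()
        dice = 2 * inter / (pred.sum() + target.sum() + 1e-8)
        if dice &gt; best_dice:
            best_t, best_dice = t, dice
    return best_t, best_dice
</code></pre>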
HuBMAP - Hacking the Kidney
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>That line of code is used to remove background only images. I found std can be used for such purpose.</p> <p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> ! I entered this competition early. So, in the other point of view, my last submission was made 5 months after I joined this competition. </p> <p>When we feed a tile into U-net, U-net cannot precisely predict the edge part of the tile compared with the center part of the tile. We can remove this edge part from inference by the trick like this:<br> <img src="https://i.postimg.cc/ncHpwJZY/avoiding-edge-effect.png" alt="image"></p> <p>I used hypercolumns and CBAM for all of my experiments in this competition. So I didn't do the abbreviation study. From my experience of the cloud competition, these modules helped.<br> I'm not sure if these are always helpful. Maybe the devil's in the details here…</p> <p>Thanks! That's good to know. The trick can be used in any segmentation problem !</p> <p>Thank you <a href="https://www.kaggle.com/mpware" target="_blank">@mpware</a> !</p> <p>my second time to have big surprise. Let's see the next time :)</p> <p>My loss function looks like this:</p> <pre><code>criterion = nn.BCEWithLogitsLoss().to(device) criterion_clf = nn.BCEWithLogitsLoss().to(device) def criterion_lovasz_hinge_non_empty(criterion, logits_deep, y): batch,c,h,w = y.size() y2 = y.view(batch*c,-1) logits_deep2 = logits_deep.view(batch*c,-1) y_sum = torch.sum(y2, dim=1) non_empty_idx = (y_sum!=0) if non_empty_idx.sum()==0: return torch.tensor(0) else: loss = criterion(logits_deep2[non_empty_idx], y2[non_empty_idx]) loss += lovasz_hinge(logits_deep2[non_empty_idx].view(-1,h,w), y2[non_empty_idx].view(-1,h,w)) return loss loss = criterion(logits,y_true) loss += lovasz_hinge(logits.view(-1,h,w), y_true.view(-1,h,w)) if deepsupervision: for logits_deep in logits_deeps: loss += 0.1 * criterion_lovasz_hinge_non_empty(criterion, logits_deep, y_true) if clfhead: loss += criterion_clf(logits_clf.squeeze(-1),y_clf) </code></pre> <p>lovasz_hinge loss is from <a href="https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py" target="_blank">https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py</a></p> <p>Masked area is balanced. Here is an example : </p> <pre><code>total # of tiles = 33114 # of masked_tiles = 17292 # of non_masked_tiles = 15822 among masked tiles # of tiles in bin 1 : 7322 (tiles have very small masked area) # of tiles in bin 2 : 5547 (tiles have small masked area) # of tiles in bin 3 : 2796 (tiles have some masked area) # of tiles in bin 4 : 1627 (tiles have large masked area) </code></pre> <p>I sampled tiles from these bin1,2,3,4 equally.</p> <p>EDIT : looks like this<br> <img src="https://i.postimg.cc/XqxstSbp/balanced-tile-sampling.png" alt="image"></p> <p>Thanks <a href="https://www.kaggle.com/lhagiimn" target="_blank">@lhagiimn</a> ! I almost forgot because I was busy on HPA comp. 
:)</p> <p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> !</p> <p>First of all, congratulations to the winners and thank you Kaggle and the host team for this tough competition! I entered this competition early and spent a lot of time on it until the data update. I could not keep up my motivation for hand-labeling, so I left a month ago. Luckily I got 1st place, so I would like to share my approach.</p> <h1>Summary of my approach</h1> <ul> <li>single model Unet se_resnext101_32x4d (4 folds)</li> <li>some techniques from previous segmentation competitions (mainly from the cloud comp.) </li> <li>balanced tile sampling for training (EDIT : masked area is balanced)</li> <li>pseudo-label for the public test data and external data</li> <li>a trick for avoiding the edge effect</li> <li>My pipeline was developed with the old dataset (i.e., before the data update).</li> </ul> <h1>1. Data preparation</h1> <p>make 1024x1024 tiles + shifted 1024x1024 tiles (I shifted the tiles by (512,512); a small coordinate sketch appears at the end of this post)</p> <h1>2. Validation</h1> <p>I selected the validation data so that the same patient number is in the same group</p> <pre><code>val_patient_numbers_list = [ [63921], # fold0 [68250], # fold1 [65631], # fold2 [67177], # fold3 ] </code></pre> <h1>3. Balanced tile sampling for training</h1> <p>First I binned the tile data with respect to the masked area (number of bins = 4 for masked tiles). Then I applied the following procedure for balanced sampling.</p> <pre><code>n_sample = trn_df['is_masked'].value_counts().min() trn_df_0 = trn_df[trn_df['is_masked']==False].sample(n_sample, replace=True) trn_df_1 = trn_df[trn_df['is_masked']==True].sample(n_sample, replace=True) n_bin = int(trn_df_1['binned'].value_counts().mean()) trn_df_list = [] for bin_size in trn_df_1['binned'].unique(): trn_df_list.append(trn_df_1[trn_df_1['binned']==bin_size].sample(n_bin, replace=True)) trn_df_1 = pd.concat(trn_df_list, axis=0) trn_df_balanced = pd.concat([trn_df_1, trn_df_0], axis=0).reset_index(drop=True) </code></pre> <h1>4. 
Model</h1> <p>U-Net SeResNext101 + CBAM + hypercolumns + deepsupervision<br> In my case, larger model gave better CV and LB.<br> I resized 1024x1024 to 320x320 for input tiles.<br> here is the code snippet:</p> <pre><code>class CenterBlock(nn.Module): def __init__(self, in_channel, out_channel): super().__init__() self.conv = conv3x3(in_channel, out_channel).apply(init_weight) def forward(self, inputs): x = self.conv(inputs) return x class DecodeBlock(nn.Module): def __init__(self, in_channel, out_channel, upsample): super().__init__() self.bn1 = nn.BatchNorm2d(in_channel).apply(init_weight) self.upsample = nn.Sequential() if upsample: self.upsample.add_module('upsample',nn.Upsample(scale_factor=2, mode='nearest')) self.conv3x3_1 = conv3x3(in_channel, in_channel).apply(init_weight) self.bn2 = nn.BatchNorm2d(in_channel).apply(init_weight) self.conv3x3_2 = conv3x3(in_channel, out_channel).apply(init_weight) self.cbam = CBAM(out_channel, reduction=16) self.conv1x1 = conv1x1(in_channel, out_channel).apply(init_weight) def forward(self, inputs): x = F.relu(self.bn1(inputs)) x = self.upsample(x) x = self.conv3x3_1(x) x = self.conv3x3_2(F.relu(self.bn2(x))) x = self.cbam(x) x += self.conv1x1(self.upsample(inputs)) #shortcut return x class UNET_SERESNEXT101(nn.Module): def __init__(self, resolution, deepsupervision, clfhead, load_weights=True): super().__init__() h,w = resolution self.deepsupervision = deepsupervision self.clfhead = clfhead #encoder model_name = 'se_resnext101_32x4d' seresnext101 = pretrainedmodels.__dict__[model_name](pretrained=None) if load_weights: seresnext101.load_state_dict(torch.load(f'{model_name}.pth')) self.encoder0 = nn.Sequential( seresnext101.layer0.conv1, #(*,3,h,w)-&gt;(*,64,h/2,w/2) seresnext101.layer0.bn1, seresnext101.layer0.relu1, ) self.encoder1 = nn.Sequential( seresnext101.layer0.pool, #-&gt;(*,64,h/4,w/4) seresnext101.layer1 #-&gt;(*,256,h/4,w/4) ) self.encoder2 = seresnext101.layer2 #-&gt;(*,512,h/8,w/8) self.encoder3 = seresnext101.layer3 #-&gt;(*,1024,h/16,w/16) self.encoder4 = seresnext101.layer4 #-&gt;(*,2048,h/32,w/32) #center self.center = CenterBlock(2048,512) #-&gt;(*,512,h/32,w/32) #decoder self.decoder4 = DecodeBlock(512+2048,64,upsample=True) #-&gt;(*,64,h/16,w/16) self.decoder3 = DecodeBlock(64+1024,64, upsample=True) #-&gt;(*,64,h/8,w/8) self.decoder2 = DecodeBlock(64+512,64, upsample=True) #-&gt;(*,64,h/4,w/4) self.decoder1 = DecodeBlock(64+256,64, upsample=True) #-&gt;(*,64,h/2,w/2) self.decoder0 = DecodeBlock(64,64, upsample=True) #-&gt;(*,64,h,w) #upsample self.upsample4 = nn.Upsample(scale_factor=16, mode='bilinear', align_corners=True) self.upsample3 = nn.Upsample(scale_factor=8, mode='bilinear', align_corners=True) self.upsample2 = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=True) self.upsample1 = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) #deep supervision self.deep4 = conv1x1(64,1).apply(init_weight) self.deep3 = conv1x1(64,1).apply(init_weight) self.deep2 = conv1x1(64,1).apply(init_weight) self.deep1 = conv1x1(64,1).apply(init_weight) #final conv self.final_conv = nn.Sequential( conv3x3(320,64).apply(init_weight), nn.ELU(True), conv1x1(64,1).apply(init_weight) ) #clf head self.avgpool = nn.AdaptiveAvgPool2d(1) self.clf = nn.Sequential( nn.BatchNorm1d(2048).apply(init_weight), nn.Linear(2048,512).apply(init_weight), nn.ELU(True), nn.BatchNorm1d(512).apply(init_weight), nn.Linear(512,1).apply(init_weight) ) def forward(self, inputs): #encoder x0 = self.encoder0(inputs) #-&gt;(*,64,h/2,w/2) x1 = 
self.encoder1(x0) #-&gt;(*,256,h/4,w/4) x2 = self.encoder2(x1) #-&gt;(*,512,h/8,w/8) x3 = self.encoder3(x2) #-&gt;(*,1024,h/16,w/16) x4 = self.encoder4(x3) #-&gt;(*,2048,h/32,w/32) #center y5 = self.center(x4) #-&gt;(*,320,h/32,w/32) #decoder y4 = self.decoder4(torch.cat([x4,y5], dim=1)) #-&gt;(*,64,h/16,w/16) y3 = self.decoder3(torch.cat([x3,y4], dim=1)) #-&gt;(*,64,h/8,w/8) y2 = self.decoder2(torch.cat([x2,y3], dim=1)) #-&gt;(*,64,h/4,w/4) y1 = self.decoder1(torch.cat([x1,y2], dim=1)) #-&gt;(*,64,h/2,w/2) y0 = self.decoder0(y1) #-&gt;(*,64,h,w) #hypercolumns y4 = self.upsample4(y4) #-&gt;(*,64,h,w) y3 = self.upsample3(y3) #-&gt;(*,64,h,w) y2 = self.upsample2(y2) #-&gt;(*,64,h,w) y1 = self.upsample1(y1) #-&gt;(*,64,h,w) hypercol = torch.cat([y0,y1,y2,y3,y4], dim=1) #final conv logits = self.final_conv(hypercol) #-&gt;(*,1,h,w) #clf head logits_clf = self.clf(self.avgpool(x4).squeeze(-1).squeeze(-1)) #-&gt;(*,1) if self.clfhead: if self.deepsupervision: s4 = self.deep4(y4) s3 = self.deep3(y3) s2 = self.deep2(y2) s1 = self.deep1(y1) logits_deeps = [s4,s3,s2,s1] return logits, logits_deeps, logits_clf else: return logits, logits_clf else: if self.deepsupervision: s4 = self.deep4(y4) s3 = self.deep3(y3) s2 = self.deep2(y2) s1 = self.deep1(y1) logits_deeps = [s4,s3,s2,s1] return logits, logits_deeps else: return logits </code></pre> <h1>5. Loss</h1> <p>I used bce loss + lovasz-hinge loss, on top oh that I used deep supervision with bce loss + lovasz-hinge loss (for only non-empty masks) multiplied by 0.1. For classification head, I used bce loss</p> <h1>6. External data and Pseudo-label</h1> <p>I generated pseudo-labels for (EDIT) train data, public test data, the hubmap-portal (<a href="https://portal.hubmapconsortium.org/search?entity_type[0]=Dataset" target="_blank">https://portal.hubmapconsortium.org/search?entity_type[0]=Dataset</a>)<br> and dataset_a_dib (<a href="https://data.mendeley.com/datasets/k7nvtgn2x6/3)" target="_blank">https://data.mendeley.com/datasets/k7nvtgn2x6/3)</a>. For the data d488c759a, I used Carno Zhao's pseudo-label (thanks <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> !) for one of my final model and my model's pseudo-label for the other one. I checked my private scores and found that these models' performances are not so different. But my 1st place model is the former one with Carno Zhao's pseudo-label. I think it's not the hand-labeling but the indirectly ensemble effect that boosted my score since my model is single and diversity should contribute.</p> <h1>7. A trick for inference</h1> <p>I found that avoiding edge effect consistently boosts CV and LB. My trick is to use only the center part of tiles for prediction (I notice that 3rd place solution by <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a> used the same idea). I found that using smaller part as prediction gave better CV and LB, but it is time-consuming. So I end up with using 512x512 center part from original 1024x1024 as prediction. I needed x4 time for inference, but using classification head prediction was helpful to save inference time.</p> <h1>8. LB scores</h1> <p>public LB=0.936 / private LB=0.951</p> <h1>9. Some thoughts</h1> <p>I used what I learned from the previous competitions and I used the same model architecture for most of my experiments. This saved my time and I focused on the data preparation and how to avoid the edge effect. 
I think I did something different from the other competitors but I'm not sure these were the keys in the final position since I've not experimented a lot for the new dataset.</p> <p>EDIT<br> training code : <a href="https://github.com/tikutikutiku" target="_blank">https://github.com/tikutikutiku</a><br> inference code : <a href="https://www.kaggle.com/tikutiku/hubmap-tilespadded-inference-v2?scriptVersionId=59475269" target="_blank">https://www.kaggle.com/tikutiku/hubmap-tilespadded-inference-v2?scriptVersionId=59475269</a></p> <p>For all of my experiments, I used the same head architecture which was based on my model in the cloud competition. I didn't do the abbreviation study, but other activation functions should also work.</p> <p>I think this is a bit complex (sorry if it is confusing). I used it to implement the trick for avoiding the edge effect. Here I added comments to each line :</p> <pre><code># height and width of the slide (no resize) self.h, self.w = self.data.height, self.data.width # input image size for U-net self.input_sz = config['input_resolution'] # tile size (no resize) self.sz = config['resolution'] # I used a trick to avoid the edge effect and this pad size determines the size of the neglected part # (see the self.pred_sz below). self.pad_sz = config['pad_size'] # This part is a bit tricky. # self.pred_sz is the size used for prediction which is cut from the output (with self.sz) of U-net # For example, # self.sz = 1024, self.pad_sz = 256, self.input_sz = 320 # then I first resize 1024x1024 tile into 320x320 and feed it into U-net. # The output of U-net is 320x320 and I resize it to 1024x1024. # Since the self.pad_sz=256 here, I extract the center part 512x512 (512=1024-2*256) from the 1024x1024. self.pred_sz = self.sz - 2*self.pad_sz # pad size for the slide # Since the prediction size is self.pred_sz (not self.sz) here, I used the equation below self.pad_h = self.pred_sz - self.h % self.pred_sz self.pad_w = self.pred_sz - self.w % self.pred_sz # number of tiles self.num_h = (self.h + self.pad_h) // self.pred_sz self.num_w = (self.w + self.pad_w) // self.pred_sz </code></pre>
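<p>As a small illustration of the data preparation in section 1 (a regular grid of 1024x1024 tiles plus a second grid shifted by (512, 512)), the tile coordinates could be generated as below; this is a sketch under those assumptions, not the actual pipeline code.</p> <pre><code>def tile_coords(h, w, sz=1024, shift=512):
    """Return (y0, x0, y1, x1) for a regular tile grid plus a shifted grid."""
    coords = []
    for off in (0, shift):
        for y in range(off, h - sz + 1, sz):
            for x in range(off, w - sz + 1, sz):
                coords.append((y, x, y + sz, x + sz))
    return coords
</code></pre>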
HuBMAP - Hacking the Kidney
23rd Place Silver - Balanced Stochastic Data Loader
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Training was fast because i stored all the complete train images in memory, I did not read from disk during training which is slow. </p> <p>At full resolution, the 15 train images combined is <strong>41.6GB</strong>. At 2x scale reduction, all 15 images combined is <strong>10.4GB</strong>. At 3x scale reduction, all 15 images are <strong>4.6GB</strong>. At 4x reduction, all 15 train images are <strong>2.6GB</strong>.</p> <p>To further clarify, if the private test data contained one image like public test d488c759a it <strong>would not matter</strong> if our model had a good or bad CV. The winning team would be the team with the best labeled d488c759a image and used it to train with.</p> <p>Those are great points and will most likely improve the data loader.</p> <p>In my case, i didn't try to improve my model. I spent my entire time hand labeling, pseudo labeling, and hill climbing ensembling to maximize public LB score and acquire the best public LB labels I could. When building ensembles, I downloaded every public notebook <code>submission.csv</code> and mixed those in using genetic algorithms to maximize public LB. I also randomly added and removed labeled gloms and found many annotator pattens and errors that were not discussed in the forums.</p> <p>As a result, i had very accurate public LB labels that contained many patterns that were not present in the train data. I was hoping that private LB contained the same new patterns that public LB had. If that were the case then only a model trained with overfitted public LB labels would find them.</p> <p>As it turns out, the private LB consisted of only easy images similar to the easiest train images. I'm surprised that my untuned model did so well. One submission of my model without training on public test labels actually achieved 8th place Gold private LB 9487. In retrospect, i should have just improved my model CV and ignored acquiring public LB labels.</p> <p>Wow, private score 0.9499, great model</p> <p>It's one that was posted in Flower Comp <a href="https://www.kaggle.com/mgornergoogle/custom-training-loop-with-100-flowers-on-tpu" target="_blank">here</a> then modified by me:</p> <pre><code>LR_START = 1e-5 LR_MAX = 1e-3 LR_RAMPUP_EPOCHS = 2 LR_SUSTAIN_EPOCHS = 0 LR_STEP_DECAY = 0.316 LR_STEP_EPOCHS = 5 def lrfn(epoch): if epoch &lt; LR_RAMPUP_EPOCHS: lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START elif epoch &lt; LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS: lr = LR_MAX else: lr = LR_MAX * LR_STEP_DECAY**((epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS)// LR_STEP_EPOCHS) return lr lr = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose = True) plt.figure(figsize=(20,5)) rng = [i for i in range(50)] y = [lrfn(x) for x in rng] plt.plot(rng, np.log10(y),'-o'); plt.xlabel('epoch',size=14); plt.ylabel('log10 learning rate',size=14) plt.title('Training Schedule',size=16); plt.show() </code></pre> <p>Thank you Kaggle and HuBMAP for hosting a fun competition. 
The challenges in this competition were designing a train data loader to work with large images and writing a successful inference notebook to work with large images.</p> <h1>Train Data Loader</h1> <p>My train data loader each epoch selects 128 <strong>random</strong> crops from each of the 15 train images. Therefore it <strong>balances</strong> the contribution from each train image (it does not give more attention to larger train images) since we don't know what the images in private test look like.</p> <p>Additionally, the train image is first reduced by either 2x, 3x, or 4x using a Numpy trick, <code>image = image[::2, ::2, ]</code>, <code>image = image[::3, ::3, ]</code> etc. Lastly, the data loader guarantees that each crop of 1024x1024 contains <strong>at least 1 segmentation label</strong>. (If not, randomly select again).</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jun-2022/hub1.png" alt="image"></p> <p>Afterward, Albumentations augmentations are applied on the 1024x1024 crop</p> <pre><code>composition = albu.Compose([ albu.HorizontalFlip(p=0.5), albu.VerticalFlip(p=0.5), albu.ShiftScaleRotate(rotate_limit=25, scale_limit=0.15, shift_limit=0, p=0.75), albu.CoarseDropout(max_holes=16, max_height=64 ,max_width=64 ,p=0.5), albu.ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.25, p=0.75), albu.GridDistortion(num_steps=5, distort_limit=0.3, interpolation=1, p=0.5), albu.RandomCrop(height=768, width=768, p=1.0) ]) </code></pre> <p>And finally, the data loader extracts a 768x768 random crop from the augmented 1024x1024 random crop.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jun-2022/hub2.png" alt="image"></p> <p>The model is FPN with backbone EfficientNetB2 and bce jaccard loss:</p> <pre><code>import segmentation_models as sm def build_model(): inp = tf.keras.Input(shape=(None,None,3)) base = sm.FPN('efficientnetb2', encoder_weights='imagenet', classes=1, activation='sigmoid') x = base(inp) opt = tf.keras.optimizers.Adam() model = tf.keras.Model(inputs=inp, outputs=x) model.compile( optimizer=opt, loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.f1_score] ) return model </code></pre> <h1>Nvidia 4xV100 32GB</h1> <p>Using four Nvidia V100 32GB, the model is trained 50 epochs with batch size 24, image size 768x768. </p> <pre><code>model.fit(train_gen(batch_size=24, image_size=(768,768)), epochs=50, callbacks=[lr], use_multiprocessing=True, workers=4) model.save_weights('model_weights.h5') </code></pre> <p>Each epoch, the data loader randomly selects 128 crops from each of the train images. Training uses the following learning rate schedule. Training is a few minutes per epoch.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jun-2022/hub3.png" alt="image"></p> <h1>Ensemble</h1> <p>There are no folds. The model is trained with 100% train data. One model is trained with 2x reduce images, one model is trained with 3x reduce images, and one model with 4x reduce images. Afterward this is repeated using EfficientNetB0 backbone (image size 1280x1280 reduced to 1024x1024 and batch size 16). The 6 models are ensembled and achieve <strong>public LB 0.920 and private LB 0.947</strong>! (and an unseletected submission had <strong>8th place Gold private</strong> LB 0.9487)</p> <p>We can check CV score by training 5 folds of 12 train images and inferring the other out-of-fold 3 images. 
When doing this, <strong>CV score averages 0.945</strong> on 14 train images and gets a low 0.890 on train image <code>c68fe75ea</code> due to very light colored image in bottom right.</p> <blockquote> <p>In the Ensemble section, you mention that you don't use folds.</p> </blockquote> <p>I joined the competition late and my plan was to find the best masks for the infamous d488c759a image. So once i got my model running and it outperformed public notebooks, i didn't spend time computing CV nor optimizing my model anymore.</p> <p>Instead, i iIteratively trained my model on hand label and pseudo labels. Each time it predicted better masks for d488c759a. I also used genetic algorithms to ensemble my model with public <code>submission.csv</code> files and made random changes to the mask. Eventually, i got my public LB up to 0.944.</p> <p>For my final submission, i added the 5 test images with their 0.944 public test masks to my <code>balanced stochastic data loader</code>. My final submission had public LB 0.9431 and private LB 0.9476. If private test contained a strange image like d488c759a, then my private test score would still be high and other teams would be lower.</p> <p>Next, here is my TF Keras dataloader</p> <pre><code>IMG_SIZE = 768 IMG_SIZE2 = 1024 SHRINK_SIZE = 3 class DataGenerator(tf.keras.utils.Sequence): 'Generates data for Keras' def __init__(self, imgs, masks, batch_size=24, shuffle=False, augment=False, crops=128, size=IMG_SIZE, size2=IMG_SIZE2, shrink=SHRINK_SIZE): self.imgs = imgs self.masks = masks self.batch_size = batch_size self.crops = crops self.size = size self.size2 = size2 self.shrink = shrink self.shuffle = shuffle self.augment = augment self.on_epoch_end() def __len__(self): 'Denotes the number of batches per epoch' ct = int( np.ceil(self.crops * len( self.imgs ) / self.batch_size ) ) return ct def __getitem__(self, index): 'Generate one batch of data' indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size] X, y = self.__data_generation(indexes) if self.augment: X2 = np.zeros((len(indexes),self.size,self.size,3),dtype='float32') y2 = np.zeros((len(indexes),self.size,self.size,1),dtype='float32') X2, y2 = self.__augment_batch(X, y) else: X2 = X y2 = y return X2, y2 def on_epoch_end(self): 'Updates indexes after each epoch' self.indexes = np.arange( self.crops * len( self.imgs ) ) if self.shuffle: np.random.shuffle(self.indexes) def __data_generation(self, indexes): 'Generates data containing batch_size samples' X = np.zeros((len(indexes),self.size2,self.size2,3),dtype='float32') y = np.zeros((len(indexes),self.size2,self.size2,1),dtype='float32') for k in range(len(indexes)): i = np.random.randint(0,len(self.imgs)) img = self.imgs[i] mask = self.masks[i] sm = 0; ct = 0 while (sm==0)&amp;(ct&lt;25): a = np.random.randint(0,img.shape[0]-self.size2*self.shrink) b = np.random.randint(0,img.shape[1]-self.size2*self.shrink) sm = np.sum(mask[a:a+self.size*self.shrink,b:b+self.size*self.shrink]) ct += 1 X[k,] = img[a:a+self.size2*self.shrink,b:b+self.size2*self.shrink,][::self.shrink,::self.shrink,]/255. 
y[k,:,:,0] = mask[a:a+self.size2*self.shrink,b:b+self.size2*self.shrink][::self.shrink,::self.shrink] return X,y def __random_transform(self, img, msk): composition = albu.Compose([ albu.HorizontalFlip(p=0.5), albu.VerticalFlip(p=0.5), albu.ShiftScaleRotate(rotate_limit=25, scale_limit=0.15, shift_limit=0, p=0.75), albu.CoarseDropout(max_holes=16, max_height=64 ,max_width=64 ,p=0.5), albu.ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.25, p=0.75), albu.GridDistortion(num_steps=5, distort_limit=0.3, interpolation=1, p=0.5), albu.RandomCrop(height=self.size, width=self.size, p=1.0) ]) tmp = composition(image=img, mask=msk) return tmp['image'], tmp['mask'] def __augment_batch(self, img_batch, mask_batch): img_batch2 = np.zeros((len(img_batch),self.size,self.size,3)) mask_batch2 = np.zeros((len(mask_batch),self.size,self.size,1)) for i in range(img_batch.shape[0]): img_batch2[i, ],mask_batch2[i, ] = self.__random_transform(img_batch[i, ],mask_batch[i, ]) return img_batch2,mask_batch2 </code></pre> <p>Thank you. First i read all the train images and masks into memory</p> <pre><code>train_images = [] train_masks = [] train_sizes = [] for k in range(15): name = train.iloc[k,0] print(name,', ',end='') img = np.squeeze( tiff.imread('/raid/Kaggle/hubmap/train/'+name+'.tiff') ) if img.shape[0]==3: img = np.transpose(img,[1,2,0]) train_images.append(img) train_sizes.append(img.shape[:2]) rle = train.iloc[k,1] mask = rle2mask(rle, shape=(img.shape[1],img.shape[0])) train_masks.append(mask)` </code></pre>
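<p>The "hill climbing ensembling" mentioned earlier in this thread can be sketched roughly as follows (an illustration only; <code>score</code> is a hypothetical scoring callback, e.g. Dice against held-out labels, and the step size and iteration count are assumptions, not the values actually used):</p> <pre><code>import numpy as np

def hill_climb_weights(preds, score, iters=200, step=0.05, seed=0):
    """preds: list of equally shaped probability maps; score: callable(blend) -&gt; float."""
    rng = np.random.default_rng(seed)
    w = np.ones(len(preds)) / len(preds)
    best = score(np.tensordot(w, np.stack(preds), axes=1))
    for _ in range(iters):
        cand = np.clip(w + rng.normal(0, step, size=len(w)), 0, None)
        if cand.sum() == 0:
            continue
        cand = cand / cand.sum()
        s = score(np.tensordot(cand, np.stack(preds), axes=1))
        if s &gt; best:
            w, best = cand, s  # keep the perturbation only if it improves the score
    return w, best
</code></pre>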
HuBMAP - Hacking the Kidney
28th Place Solution with inference code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I tried ResNet-200D, Densenet-121, EfficientNet-B0, but the mighty ResNet-34D beat them all.</p> <p>I also tried different decoder architectures like PAN, MANet, DeepNetV3+, UNet++ but the base UNet seemed to work best for me.</p> <p>Congratulations to all the winners and also to the hosts for such a successful competition which will hopefully benefit medical professionals in their work going forward. Commiserations for those who were stung by the shakeup, I'm sure you'll all be at the top of another LB again soon :)</p> <h1>CV Strategy</h1> <p>I knew early on that <code>d488c759a</code> was causing the public LB to be misaligned with CV, so I generally ignored the public LB other than loosely treating it as a 6th fold. My other 5 folds were simply set up like this to prevent leakage between images during training.</p> <pre><code>folds = [ ["aaa6a05cc", "26dc41664", "b9a3865fc"], ["2f6ecfcdf", "c68fe75ea", "e79de561c"], ["1e2425f28", "b2dc8411c", "afa5e8098"], ["0486052bb", "cb2d976f4", "54f2eec69"], ["4ef6695ce", "8242609fa", "095bf7a1f"], ] </code></pre> <p>Although I did track per-tile dice during training, I calculated my final CV score by calculating the dice on the full resolution out-of-fold predictions (exactly as it's done in the submission notebook).</p> <h1>Preprocessing</h1> <p>I used a tile size of 2048 which was resized to 1024 before feeding to the model. Overlap of 32 (although I don't think this made much difference).</p> <p>I also used the JSON files to create a polygon defining the edge of the mask (since getting the edges right was important in this task). This allowed me to treat this as a two-channel segmentation problem, even though I'd only be using one of them for prediction</p> <h1>Augmentation</h1> <pre><code>trfm = A.Compose( [ A.Resize(img_size_model, img_size_model), A.Flip(), A.RandomRotate90(), A.ColorJitter(p=1), A.OneOf( [ A.ElasticTransform(), A.GridDistortion(), A.Blur(blur_limit=(3, 5)), ], p=0.5, ), A.ShiftScaleRotate(), A.Normalize(), ToTensorV2(), ] ) </code></pre> <h1>Model</h1> <p>UNet with ResNet34d &amp; ResNet50d. That's it! Deeper architectures didn't seem to bring many gains.</p> <ul> <li>Loss = 0.9 Focal Loss and 0.1 Lovasz</li> <li>MixUp with alpha=0.25</li> <li>No CutMix. When I was looking at predictions when using CutOut augmentation, the cut boundary between a glomerulus and a non-glom area would cause some mask bleeding, which made me uncomfortable using CutMix. CV changes were in the noise.</li> <li>Stochastic weight averaging</li> <li>AdamW(lr=0.0001)</li> </ul> <h1>Submission/postprocessing</h1> <p>For the predictions, I used an overlap of 256. This meant that in the corners of some images, you could potentially get 4 predictions which needed to be weighted accordingly. I did this by creating a weight matrix and dividing the full prediction by this. The RAM in the notebooks was too small to keep a prediction array and a weight array in memory, so I did this on the GPU. The mask threshold was 0.5. 
For TTA I only used flip transforms.</p> <table> <thead> <tr> <th>Backbone</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>ResNet34D</td> <td>0.94037</td> <td>0.917</td> <td>0.947</td> </tr> <tr> <td>ResNet50D</td> <td>0.93859</td> <td>0.916</td> <td>0.946</td> </tr> </tbody> </table> <p>0.7:0.3 weighting = 0.947</p> <p>I did have some pseudo-labelled models using the provided test images which got into 0.948, but the CV wasn't as good so I didn't select those given how important trusting your CV was in this competition.</p> <h1>Inference code</h1> <p><a href="https://www.kaggle.com/anjum48/28th-place-inference-code" target="_blank">https://www.kaggle.com/anjum48/28th-place-inference-code</a></p>
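<p>To make the overlap-weighting idea in the submission section concrete, here is a minimal CPU sketch (the write-up does this on the GPU to fit in memory): accumulate tile predictions and a count map, then divide so that overlapping regions are averaged.</p> <pre><code>import numpy as np

def stitch(tile_probs, coords, full_shape, tile=2048):
    """tile_probs: list of tile x tile arrays; coords: list of (y, x) top-left offsets."""
    acc = np.zeros(full_shape, dtype=np.float32)
    cnt = np.zeros(full_shape, dtype=np.float32)
    for p, (y, x) in zip(tile_probs, coords):
        acc[y:y + tile, x:x + tile] += p
        cnt[y:y + tile, x:x + tile] += 1
    return acc / np.maximum(cnt, 1)  # average wherever tiles overlap
</code></pre>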
HuBMAP - Hacking the Kidney
35th solution, seg&yolo
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Public LB 0.933, private LB 0.946<br> Inference code: <a href="https://www.kaggle.com/kevin1742064161/seg-yolo-final-inference?scriptVersionId=62406590" target="_blank">https://www.kaggle.com/kevin1742064161/seg-yolo-final-inference?scriptVersionId=62406590</a><br> Ensemble of 4 models: unet+b2, unet+b4, unetplusplus+b2, manet+b4<br> Window: 1024<br> Overlap: 100<br> Size: 512<br> YOLOv5 is used to detect cells. By comparing the area of the bounding box with the area of the segmentation, we can judge whether the segmentation result needs to be filled (a small sketch of this check follows below).<br> Just trust your CV!</p>
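<p>A minimal sketch of the YOLO-assisted check described above (an illustration, not the author's code; the 0.5 ratio is an assumed value): if the segmented area inside a detected box is much smaller than the box itself, flag that box as a candidate for filling.</p> <pre><code>import numpy as np

def boxes_needing_fill(mask, boxes, min_ratio=0.5):
    """mask: HxW {0,1} segmentation; boxes: list of integer (x0, y0, x1, y1) detections."""
    flagged = []
    for x0, y0, x1, y1 in boxes:
        box_area = max((x1 - x0) * (y1 - y0), 1)
        seg_area = mask[y0:y1, x0:x1].sum()
        if seg_area / box_area &lt; min_ratio:
            flagged.append((x0, y0, x1, y1))  # detection with too little segmented area
    return flagged
</code></pre>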
HuBMAP - Hacking the Kidney
39th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you kaggle and the host team for such a interesting competition and thanks for everyone to share their brilliant mind! So happy to win my first medal in my first medical segmentation competition, and i have learned a lot from this.<br> My notebook is based on <a href="https://www.kaggle.com/wrrosa/hubmap-tf-with-tpu-efficientunet-512x512-train" target="_blank">hubmap-tf-with-tpu-efficientunet-512x512-train</a>, <a href="https://www.kaggle.com/wrrosa/hubmap-tf-with-tpu-efficientunet-512x512-subm" target="_blank">hubmap-tf-with-tpu-efficientunet-512x512-subm</a> and <a href="https://www.kaggle.com/wrrosa/hubmap-tf-with-tpu-efficientunet-512x512-tfrecs" target="_blank">hubmap-tf-with-tpu-efficientunet-512x512-tfrecs</a>, and besides I've made several improvements and explorations by myself. (thank you for sharing these excellent notebooks <a href="https://www.kaggle.com/wrrosa" target="_blank">@wrrosa</a>)<br> Here is a brief description about my solution:</p> <h1>Dataset</h1> <p>Split origin tiff images with window size of 1024x1024 and overlap size of 512, then resize to 320x320 images.<br> Split dataset to 5-fold and make every fold have nearly the same sum of training examples.<br> Use <a href="https://www.kaggle.com/carnozhao/d48-hand-labelled" target="_blank">d48-hand-labelled</a> as external data (thanks <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a>!)</p> <p><strong>Note:</strong><br> I've also tried other combinations of window size and overlap size, such as 1024-256, 1536-512 and 1560-320, but haven't get better CV and LB.<br> I haven't tried 1024-512, because training time is so long for me.<br> Equal k-fold split give slightly better CV and public LB than random split.<br> Using <a href="https://www.kaggle.com/carnozhao/d48-hand-labelled" target="_blank">d48-hand-labelled</a> helps get better public LB on both d488c759a and other public test images (about 0.001).<br> I've also tried to add <a href="https://www.kaggle.com/baesiann" target="_blank">@baesiann</a>'s <a href="https://www.kaggle.com/baesiann/glomeruli-hubmap-external-1024x1024" target="_blank">external dataset</a>, but achieved low public LB.</p> <h1>Model</h1> <p>U-Net with EfficientB3 encoder. 
<br> <strong>Note:</strong><br> I also tried LinkNet as the segmentation framework and other encoders (such as EfficientNet-B0~B2, ResNet50 and Swin-Transformer), but didn't get better CV or public LB.</p> <h1>Loss</h1> <p>Cross entropy, then fine-tune with Lovasz softmax.<br> For evaluation, compute the global dice score rather than the mean of per-tile dice scores.<br> <strong>Note:</strong><br> Fine-tuning with Lovasz softmax gives a 0.003+ improvement in public LB.<br> I also tried other loss functions, such as Tversky loss, focal Tversky loss and BCE-Jaccard loss, but didn't get better CV or public LB.<br> With the global dice score, better CV often implies better public LB.</p> <h1>Optimizer</h1> <p>Adam+SAM (Sharpness-Aware Minimization)<br> <strong>Note:</strong><br> For me, SAM gives more stable CV and public LB.<br> I also tried Ranger as the optimizer, but got better CV and worse public LB.</p> <h1>Shuffle Training</h1> <p>Shuffle the training files in every fold between epochs.<br> <strong>Note:</strong><br> TFRecordDataset has a fixed order of input files; I believe a more thorough shuffle during training is beneficial.</p> <h1>Center Crop Inference</h1> <p>Split the test images the same way as the train dataset. Center-crop 1/2 of every test tile's predicted mask and stitch the tile masks together as the final mask.<br> <strong>Note:</strong><br> Center-crop inference is the key idea for achieving a high LB score. It gives about a 0.005+ improvement.<br> I also tried cropping 1/4 of the predicted mask and got almost the same public LB, although with a higher private LB (0.948).</p> <h1>Ensemble</h1> <p>Because model ensembling didn't give better CV or LB scores, I mostly submitted single models.</p> <h1>Code</h1> <p>1) <a href="https://www.kaggle.com/snowballball/hubmap-submit1" target="_blank">inference notebook</a><br> 2) I have modified <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation_models.pytorch</a> and <a href="https://github.com/microsoft/Swin-Transformer" target="_blank">Swin-Transformer</a> to support a Swin-Transformer encoder, and here is the modification code (hope it helps in other competitions):<br> <a href="https://github.com/cersar/segmentation_models.pytorch" target="_blank">https://github.com/cersar/segmentation_models.pytorch</a><br> <a href="https://github.com/cersar/Swin-Transformer" target="_blank">https://github.com/cersar/Swin-Transformer</a></p>
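<p>The global dice evaluation mentioned in the Loss section is not spelled out in code in the write-up; below is a minimal sketch of the idea, assuming binary numpy masks per tile: accumulate the intersection and mask sums over all tiles before forming the ratio, instead of averaging per-tile dice scores.</p>
<pre><code>import numpy as np

def global_dice(pred_tiles, true_tiles, eps=1e-7):
    """Dice computed jointly over all tiles (e.g. of one image or one fold),
    rather than the mean of per-tile dice scores."""
    inter, total = 0.0, 0.0
    for pred, true in zip(pred_tiles, true_tiles):
        inter += (pred * true).sum()
        total += pred.sum() + true.sum()
    return (2.0 * inter + eps) / (total + eps)
</code></pre>
<p>With this metric an empty tile no longer contributes a misleading per-tile score, which may explain why the author found that better CV implied better public LB under it.</p>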
HuBMAP - Hacking the Kidney
3rd place solution, the simplest way is the best way (full training/inference code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Thanks, I expect this trick will work well for segmentation tasks, since you will need to classify every pixel essentially. It might also be good for object detection, but it won't do much for classification.</p> <p>That's quite interesting! The main reason we used it is because I thought that although iafoss's tile method is very efficient and effective, the edge effect was the only downside I could think of. I'm not sure exactly how much improvement I got. On public lb it was maybe 0.001. But on private probably at least 0.002, so enough to push me from silver to the prize zone.</p> <p>Thank you!</p> <p>Thanks! Congrats on your finish as well! Honestly, I'm not sure how much high resolution helped; when I was doing experiments with PL, resolution (reduce=2 as opposed to 4) helped public lb by 0.001. I did it in the final subs because I have seen many times that higher res tends to work better. However, most of the improvements in the private lb came from expansion tiles and a neutral threshold I think. Many public notebooks use a low threshold because of the d4 image. A single model with reduce=2, sz=2048, and th=0.4 without expansion tiles only gives a 0.945 private lb score.</p> <p>Congrats to everyone who participated in this competition! It's been a roller coaster of emotions for me. Thanks to the organizers for hosting an interesting competition with meaningful data. I would also like to especially thank <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>; without his starter notebooks, I would never have been able to do anything here, since I've never worked on segmentation before. BTW, congrats on getting one step closer to gm! It seems like we have both been doing ok in this shakeup business.</p> <p>Near the end of the competition, I was getting really tilted by the labeling issues, thinking nothing could be done. But I didn't want to give up either, so I took the path of just doing the simplest things which I think would generalize as well as possible. Here I will discuss the most important details and also release all code used to produce the final result. </p> <p>Full training code: <a href="https://github.com/Shujun-He/Hubmap-3rd-place-solution/" target="_blank">https://github.com/Shujun-He/Hubmap-3rd-place-solution/</a><br> Inference code (forked from <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>) <a href="https://www.kaggle.com/shujun717/hubmap-3rd-place-inference?scriptVersionId=62198789" target="_blank">https://www.kaggle.com/shujun717/hubmap-3rd-place-inference?scriptVersionId=62198789</a></p> <h1>Preprocessing</h1> <p>We used reduce=2 and sz=1024 in our final ensemble.
The notebook we used was originally released by iafoss: <a href="https://www.kaggle.com/iafoss/256x256-images" target="_blank">https://www.kaggle.com/iafoss/256x256-images</a></p> <h1>Augmentation</h1> <p>We used cutout in all our training:</p> <pre><code>import numpy as np

def cutout(tensor, alpha=0.5):
    # Zero out a random rectangle covering a fraction alpha of each spatial
    # dimension, applied to the whole batch (tensor shape: B x C x H x W).
    x = int(alpha * tensor.shape[2])
    y = int(alpha * tensor.shape[3])
    center = np.random.randint(0, tensor.shape[2], size=(2))
    cut_tensor = tensor.clone()
    cut_tensor[:, :, center[0]-x//2:center[0]+x//2, center[1]-y//2:center[1]+y//2] = 0
    return cut_tensor
</code></pre> <p>In addition, we used the following albumentation augmentations:</p> <pre><code>import cv2
from albumentations import (
    Compose, HorizontalFlip, VerticalFlip, RandomRotate90, ShiftScaleRotate,
    OneOf, ElasticTransform, GaussianBlur, GaussNoise, OpticalDistortion,
    GridDistortion, IAAPiecewiseAffine, HueSaturationValue, CLAHE,
    RandomBrightnessContrast)

def get_aug(p=1.0):
    return Compose([
        HorizontalFlip(),
        VerticalFlip(),
        RandomRotate90(),
        ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=15,
                         p=0.9, border_mode=cv2.BORDER_REFLECT),
        OneOf([
            ElasticTransform(p=.3),
            GaussianBlur(p=.3),
            GaussNoise(p=.3),
            OpticalDistortion(p=0.3),
            GridDistortion(p=.1),
            IAAPiecewiseAffine(p=0.3),
        ], p=0.3),
        OneOf([
            HueSaturationValue(15, 25, 0),
            CLAHE(clip_limit=2),
            RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3),
        ], p=0.3),
    ], p=p)
</code></pre> <h1>Models</h1> <p>Starting from iafoss's starter notebooks and changing them to pure pytorch with heavier augmentation, I ensembled 2 sets of 5-fold models: one with resnext50 and one with resnext101. Effnets have never worked for me, and here that trend continues. Not much more could be said here tbh.</p> <h1>Inference</h1> <p>We used something we called expansion tiles but only during inference, which my teammate jj originally came up with in the PANDA competition. You can see how this is done in our inference notebook and also in the graphic below. The basic idea is to eliminate edge effects. To achieve this, for each tile we run the model on, we expand the tile by a certain number of pixels in all 4 directions, but only use the center region's predictions, which do not have edges. Effectively there are no edge effects. For the threshold, I decided to simply use 0.5 to minimize any sort of bias the training data might have.</p> <p><img src="https://raw.githubusercontent.com/Shujun-He/Hubmap-3rd-place-solution/main/expansion_tiles.PNG" alt=""></p> <p>I wrote this very quickly since I'm eager to share my results/code, so lmk if you have any questions.</p> <p>Thanks! I picked my best score, which was also my last sub without using hand labels or pseudo labels.</p> <p>It's very surprising effnet worked for you. Honestly I did not try it that much, I think just a few experiments and I just gave up, which is partly also due to the bad impression effnets have left on me. idk… there's something about resnexts; the way the res blocks are split is like multiheaded self-attention. The arch is so simple and it just generalizes really really well. I remember you mentioned to me that you thought effnet overfitted to imagenet, and I think that's true. I mean, how can it not when you use RL to do architecture search to fit to imagenet? That completely goes against the idea of agnostic model design.</p> <p>As for the degradation the pl labels cause, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> talked about it in this post: <a href="https://www.kaggle.com/c/hubmap-kidney-segmentation/discussion/237999" target="_blank">https://www.kaggle.com/c/hubmap-kidney-segmentation/discussion/237999</a> and it did not affect him that much. I think this depends a lot on the way pl is done during training.
For me, it really degraded model performance since I used it without consideration of leakage, but really when you start doing pl, there's always leakage.</p> <p>On a side note, I also noticed my local CV was always much higher than LB, and when I changed the validation scheme so that validation was done on full resolution images (by upsampling before evaluating, just like how inference is done), my CV was pushing 0.96. That also gave me confidence to just train models without much consideration for other stuff. Reading your solution I think your approach is much more systematic than mine, but I think placings at the top probably came down to luck as places 2-4 have the same score 3 digits past the decimal.</p> <p>Swin transformers look really interesting and the design is quite intuitive as well. It looks more akin to how humans actually perceive than a traditional CNN. I guess long range dependencies can actually be captured by swin transformers whereas CNNs really can't. It sounds like I need to give madgrad a try, but mostly adam just works well enough for me.</p> <p>And you're right, leak-free PL is actually possible; but I guess I was more talking about the hand labels I used, which also included pseudo labels from someone else's model, so I couldn't really separate it like that. Overall though, PL is really risky business: sometimes it works well, sometimes it doesn't.</p> <p>Congrats to you as well! You're one gold medal away from GM and I think you will get it soon. As for the private lb result, I'd like to think it's not dumb luck as well haha. In fact, by the end of the competition, I saw only two ways the private lb could turn out. The first one is that the d4 image was more or less inconsistently labeled compared to the private set, which I think turned out to be the case. The second, which imo would be very bizarre, is that the private set score would somehow not differ too much from the public. I have no idea how the second could be true, but it was not impossible. So I chose my subs accordingly based on these two scenarios, one sub without training on pl/hl and one with. By design you also have 2 sub selections, so it worked out perfectly. In the end, my 2 subs got scores:</p> <p>no hand/pseudo label: 0.918/0.95<br> with d4 hand/pseudo labels: 0.943/0.936.</p> <p>The first one being my best private score out of all subs. On another note, the hand labels did not catastrophically swing the model in a bad direction, which I expected as well because of my previous experiments with hand labels. In short, I found that resnext101 worked significantly better than resnext50, which makes sense because a larger network should be better at dealing with noise and contradictions in the data. </p>
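<p>To make the expansion-tile inference described above concrete, here is a rough, hypothetical sketch rather than the authors' code: every tile is read with an extra margin on all four sides, the model predicts on the expanded patch, and only the central, margin-free region is written to the output, so no prediction near a patch edge ever reaches the final mask. Names such as <code>predict_fn</code> and the margin value are illustrative assumptions.</p>
<pre><code>import numpy as np

def predict_with_expansion(image, predict_fn, tile=1024, margin=128):
    """image: (H, W, C) array; predict_fn maps an (h, w, C) patch to an (h, w) probability map."""
    H, W = image.shape[:2]
    out = np.zeros((H, W), dtype=np.float32)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            # Expanded window, clipped to the image borders.
            y0, y1 = max(y - margin, 0), min(y + tile + margin, H)
            x0, x1 = max(x - margin, 0), min(x + tile + margin, W)
            pred = predict_fn(image[y0:y1, x0:x1])
            # Keep only the central tile region; edge predictions are discarded.
            ty1, tx1 = min(y + tile, H), min(x + tile, W)
            out[y:ty1, x:tx1] = pred[y - y0:ty1 - y0, x - x0:tx1 - x0]
    return out
</code></pre>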
HuBMAP - Hacking the Kidney
44th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I give here a brief overview of my solution, mostly to share the methodology I used to tackle this competition: I used chunks of code that track the evolution of the different experiments through git, which turned out to be quite useful for monitoring progress across the various attempts. </p> <p>Since I had no previous experience in computer vision, I entered HubMap using the code shared by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> (I take this opportunity to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for all the material and knowledge he shares along his Kaggle journey).</p> <p>The code used for training and inference is available on <a href="https://github.com/FabienDaniel/HubMap/" target="_blank">git</a>. <br> The inference code on Kaggle is <a href="https://www.kaggle.com/fabiendaniel/submit-hubmap" target="_blank">here</a>. (Utility scripts are used inside this script.) </p> <p>My solution consists of a two-stage model (see the sketch after this write-up):</p> <ul> <li>a first stage with 256 x 256 tiles, with a reduction in resolution by a factor of 4: this network is trained with emphasis on false negatives, using images that cover the entire .tiff images.</li> <li>the output of the previous network is used to locate the potential glomeruli on the image. A second network is then trained using images that are mostly located around known glomeruli. The train images have sizes of 515 x 512 with a scale factor of 2. </li> </ul> <p>For each of these two networks, I used Unet models with Efficientnet b0 backbones. I quickly checked other backbones but didn't see any noticeable impact on the results. It quickly appeared that selecting the dataset used to train the model was the key to model performance.</p> <h1>CV scheme</h1> <p>As mentioned many times, finding a correlation between local CV and LB was difficult because of the <code>d48</code> image. During the course of this competition, I tried various CV schemes:</p> <ol> <li>K-fold on whole tiles</li> <li>a random sampling of images with a random 20% of each tile used as validation</li> <li>a whole part of each tile kept as validation.</li> </ol> <p>A posteriori, the two latter schemes give good results, with scores well aligned with the private LB. During the competition, it was not clear whether any of these schemes would be reliable, but I convinced myself that the 3rd one should be and used it to select the details of the models.</p> <h1>Pseudo-labelling</h1> <p>In the last days, I started to include the test tiles in order to train some models. For these images, the masks were generated using the best model I had so far. I subsequently checked "visually" that the glomeruli selected that way and the corresponding masks were reliable with respect to what I expected.</p> <p>[TO COMPLETE]</p>
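<p>As a rough illustration of the two-stage idea above (purely a sketch under assumed names such as <code>stage1_predict</code> and <code>stage2_predict</code>, not the author's code): the low-resolution first-stage heatmap is thresholded to find candidate glomeruli, and the second-stage model is only run on higher-resolution crops centred on those candidates.</p>
<pre><code>import numpy as np
from scipy import ndimage

def two_stage_inference(image_lowres, image_highres, stage1_predict, stage2_predict,
                        scale=4, crop=512, thresh=0.3):
    # Stage 1: coarse heatmap on the low-resolution image (tuned for recall).
    heatmap = stage1_predict(image_lowres)
    labels, n = ndimage.label(heatmap &gt; thresh)
    mask = np.zeros(image_highres.shape[:2], dtype=np.uint8)
    # Stage 2: refine each candidate with a high-resolution crop around its centre.
    for cy, cx in ndimage.center_of_mass(heatmap, labels, range(1, n + 1)):
        y, x = int(cy * scale), int(cx * scale)
        y0, x0 = max(y - crop // 2, 0), max(x - crop // 2, 0)
        patch = image_highres[y0:y0 + crop, x0:x0 + crop]
        pred = stage2_predict(patch)
        mask[y0:y0 + crop, x0:x0 + crop] = np.maximum(
            mask[y0:y0 + crop, x0:x0 + crop], (pred &gt; 0.5).astype(np.uint8))
    return mask
</code></pre>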
HuBMAP - Hacking the Kidney
46th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>46th Place Solution</h2> <p>I would like to thank the organizers and management for organizing the competition.<br> It was a fun way to deepen my knowledge.<br> Also, if you are going to make this more of a competition, I would like to see a fuller description of the data generation process.<br> We would also like to thank those who provided us with easy-to-understand kernels.</p> <hr> <h2>Pre-processing:</h2> <ul> <li>resize tile images to (256, 256)</li> <li>pseudo label</li> <li>using d48_handlabel (see the discussion)</li> </ul> <h2>Augmentations:</h2> <p>I used albumentations.</p> <ul> <li>HorizontalFlip</li> <li>VerticalFlip</li> <li>ShiftScaleRotate, GridDistortion</li> <li>Blur, MedianBlur, GaussianBlur</li> <li>CLAHE, RandomBrightnessContrast, HueSaturationValue, IAASharpen</li> </ul> <h2>Post-processing:</h2> <ul> <li>TTA (HorizontalFlip, VerticalFlip)</li> </ul> <h2>Model-1:</h2> <ul> <li>efficientnetb0 Unet</li> <li>image size : 256x256</li> <li>bce_dice(bce_weight:0.75)</li> <li>5FOLD</li> </ul> <h2>Model-2:</h2> <ul> <li>efficientnetb0 Linknet</li> <li>image size : 256x256</li> <li>bce_dice(bce_weight:0.75)</li> <li>5FOLD</li> </ul> <h2>Model-3:</h2> <ul> <li>seresnext50 Unet</li> <li>image size : 256x256</li> <li>bce_dice(bce_weight:0.75)</li> <li>5FOLD</li> </ul> <h2>Model-4:</h2> <ul> <li>efficientnetb0 FPN</li> <li>image size : 256x256</li> <li>bce_dice(bce_weight:0.75)</li> <li>5FOLD</li> </ul> <h2>sub1:</h2> <ul> <li>(Model-1 + Model-2 + Model-3 + Model-4) / 4</li> </ul> <p>I look forward to seeing you at another competition!</p>
HuBMAP - Hacking the Kidney
470th Position Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all those who have landed themselves medals and prize money. Unfortunately, I could not get any. However, I have added the link to my code and submission notebook. </p> <p>Feel free to have a look at the code and I would love to hear constructive criticism in the form of feedback.</p> <p>Code: <a href="https://github.com/nauyan/HuBMAP-HackingtheKidney" target="_blank">https://github.com/nauyan/HuBMAP-HackingtheKidney</a></p>
HuBMAP - Hacking the Kidney
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, this competition was quite confusing because of the incorrect annotation of 'd488c759a', and the host never clarified it. There are a number of other images where FC instances are not annotated, so we assumed that 'd488c759a' is just an exception annotated by some external person who didn't know what to include. We also had a plan B, though, where we used PL and hand annotation of FC for 'd488c759a' (but not as elaborate as yours), in case the private test set had the same kind of mistakes as train.</p> <p>Thank you so much, I'm very happy.</p> <p>Thanks <a href="https://www.kaggle.com/rosuluc" target="_blank">@rosuluc</a>, quite an interesting observation. My best explanation could be that tissue is extremely different from one patient to another (I spent really a lot of time annotating and looking into the images). It may happen that for a specific subset of images the impact of adding FC to d488c759a is not significant; for example, there may be no images with FC in the private test set (adding FC to d488c759a does not generate FP on the private set). It is difficult to say without looking into the images and the actual predictions. Meanwhile, there could be another private subset of images where the effect would be game changing.</p> <p>In our work we tried to minimize the risks and create a model that generalizes well on diverse images. It is obvious that there may be some randomness in the LB, but we tried to follow a methodology-driven approach rather than, if lucky enough at the end, writing about a success story.</p> <p>Thanks so much</p> <p>Thanks </p> <p><strong>Please consider the following links:</strong> <a href="https://www.kaggle.com/theoviel/hubmap-final-methodology-submission" target="_blank"><strong>writeup</strong></a>, <a href="https://github.com/Optimox/HubMap" target="_blank"><strong>code</strong></a>. Below I provide just a short description.</p> <h3>The key:</h3> <ul> <li>proper consideration of healthy and unhealthy glomeruli and trust in CV</li> <li>2 class approach</li> <li>proper annotation of the data</li> <li>be careful with d488c759a when checking LB</li> </ul> <h3>Introduction</h3> <p>Our team would like to thank the organizers and Kaggle for making this competition possible. And a really big thanks to my amazing teammates <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> and <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> for their incredible efforts in working on the competition and putting everything into the writeup for the judge prize. Also <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> received his first gold medal in this competition, big congratulations to him! Finally, congratulations to all participants of this challenge.</p> <p>It was quite a long journey for some of us, since the competition started back in 2020. There were a number of challenges we needed to overcome, and there were many learning opportunities. For me, as well as for some participants, it was the opportunity to <strong>learn labeling medical data</strong>.
It may sound funny, but doing this job I realized the amount of work the organizers put into this challenge.</p> <h3>Our approach</h3> <p><strong>At the core of our approach is the proper treatment of healthy and unhealthy glomeruli.</strong><br> In short, we quickly realized that d488c759a with FC annotated was destabilizing the LB and creating a gap between CV and LB. Use of external data with <a href="https://www.kaggle.com/c/hubmap-kidney-segmentation/discussion/208972" target="_blank">this annotation</a> improved the LB but did not affect, or even degraded, CV. The explanation is simple: it contained annotation of both healthy and unhealthy glomeruli (at least a large portion of them), while the train set contained only annotation of healthy glomeruli. So <strong>how can we make a model predict FC? Train it on images with FC… But that was not the right thing</strong>: the organizers were looking for healthy glomeruli only, and all images except d488c759a did not include FC in the annotation. So using the external data with the shared hand labeling was dangerous, and some people, I expect, were affected by the shake-up at the end of the competition because of this.</p> <p><strong>Our strategy was proper hand annotation of</strong> <a href="https://data.mendeley.com/datasets/k7nvtgn2x6/3" target="_blank"><strong>this external dataset</strong></a> <strong>with 2 classes</strong>: healthy and unhealthy glomeruli. There are many kinds of unhealthy glomeruli, but we decided to keep things simple, combining them into a single class. We also hand annotated the unhealthy glomeruli class in the test images, which, combined with PL for healthy glomeruli, was used in our training pipeline. We also used the <a href="https://zenodo.org/record/4299694" target="_blank">Zenodo dataset</a>, which naturally includes two classes. However, there were quite a number of missed glomeruli, especially at the image edges, and we hand annotated them. Finally, we used 2 images from the HuBMAP portal, not included in the train/test datasets, with PL (both healthy and unhealthy) generated by our model. Using multiple data sources helped us ensure that the model is able to generalize well on new cases. We also used a combined human+AI guided method to add missing annotation to the train data (there were just a few instances though). </p> <p><strong>Secret sauce: ignore d488c759a from LB evaluation.</strong> The LB with fixed predictions on this image correlated quite well with CV.</p> <p>From the model perspective, there is nothing really special: EfficientNet, Unet… We did try more advanced things, like a BoT middle layer, a Swin transformer backbone, and FPN skip connections, but they did not give any noticeable benefit. This competition really was not about that. The thing that helped was using 2 classes during training. The level of injury of glomeruli is on a continuous scale. So instead of confusing the model by providing nearly identical glomeruli included/excluded from masks, we made the model learn that unhealthy glomeruli are also glomeruli, and in the end the model decides whether a particular instance is healthy enough. To be honest, this competition was not really about segmentation but rather about classification. The uncertainty of including a particular glomerulus in the mask predicted by the model is the major factor degrading the model performance.</p> <p>Our best submission (ensemble of two models) got 0.9439 CV, 0.941 LB (including hand annotation of unhealthy glomeruli for the d488c759a image), and 0.950 private LB.
There were some technical difficulties with making submissions on the last day, so we could not make a submission with all our final models :(</p> <h4>Links: <a href="https://www.kaggle.com/theoviel/hubmap-final-methodology-submission" target="_blank"><strong>writeup</strong></a> &lt;- please take a look (it provides additional details and results of the analysis we made), <a href="https://github.com/Optimox/HubMap" target="_blank"><strong>code</strong></a></h4> <p><a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a> Thank you so much</p> <p>Thanks, you are very welcome</p> <p>Thank you so much!</p> <p><a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> Thanks so much, and you are very welcome.</p> <p><a href="https://www.kaggle.com/nauyan" target="_blank">@nauyan</a> thank you, I'm glad that you found my kernels helpful</p> <p>You are very welcome</p> <p>Yes, the augmentation can sometimes be quite tricky, and there could be a trade-off between CV and the ability of the model to work on completely new data. Doing some analysis based on activation and/or attention maps is often quite helpful to get an idea of whether the model is really good, or just pretending to be good by capturing some irrelevant features of the particular dataset… And based on that one can adjust the augmentation.</p> <p>Thank you so much, and congratulations on completing your first competition.</p>
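<p>A minimal sketch of the two-class idea described in this write-up, under assumed names and settings (this is not the team's actual code; the encoder choice, loss and threshold are illustrative): the network predicts two channels, healthy and unhealthy glomeruli, both receive supervision during training, and only the healthy channel is thresholded for the submission mask.</p>
<pre><code>import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

# Channel 0 = healthy glomeruli (submitted), channel 1 = unhealthy glomeruli (auxiliary).
model = smp.Unet(encoder_name="efficientnet-b3", encoder_weights="imagenet",
                 in_channels=3, classes=2)
criterion = nn.BCEWithLogitsLoss()

def training_step(images, masks_2ch):
    # masks_2ch: (B, 2, H, W) float tensor with separate healthy / unhealthy annotations.
    return criterion(model(images), masks_2ch)

def predict_submission_mask(images, threshold=0.5):
    with torch.no_grad():
        probs = torch.sigmoid(model(images))
    # Only the healthy-glomeruli channel goes into the submission.
    return (probs[:, 0] &gt; threshold).float()
</code></pre>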
HuBMAP - Hacking the Kidney
54th Place Solution Pytorch Single_5fold_UNet50 private_lb:0.946
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Model: UNet aspp with Resnext50 backbone</p> <p>Keys:</p> <ul> <li><strong>aspp_stride</strong>: 1024x1024 tiles rescaled to 480x480 to feed the model. With 480x480 model input, backbone output feature map is 15x15, so set aspp_stride to 2 to get dilations of 2,4,6,8 which gives the desired coverage of the 15x15 output feature map of the backbone.</li> <li><strong>batch_size</strong>: Chose 480x480 instead of say 512x512 for model input, so as to be able to train with a batch size of at least 32 on a P100 GPU in mixed precision, as the model has batchNorm layers. </li> <li><strong>edge_ignore</strong>: During inference, when making predictions for a tile, ignore the predictions at tile edges. Let overlap take care of ignored tile edges.</li> <li>Tuned the number of training epochs on the validation holdout.</li> <li>Don't overfit to the public test set. </li> </ul> <p>I went through and built on <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> public notebook: <a href="https://www.kaggle.com/iafoss/hubmap-pytorch-fast-ai-starter" target="_blank">https://www.kaggle.com/iafoss/hubmap-pytorch-fast-ai-starter</a></p> <p>Generating the training dataset: <br> <a href="https://www.kaggle.com/pratikkumar/hubmapdataset-sz480-window1024" target="_blank">https://www.kaggle.com/pratikkumar/hubmapdataset-sz480-window1024</a><br> Tile Size of 1024x1024 rescaled to 480x480 with 20% overlap.</p> <p>Training:<br> <a href="https://colab.research.google.com/drive/1u9NXkjRbgI9_3Mwc1FGRLciyXcMFo3Rv?usp=sharing" target="_blank">https://colab.research.google.com/drive/1u9NXkjRbgI9_3Mwc1FGRLciyXcMFo3Rv?usp=sharing</a><br> Model: Pytorch/Fastai UNet50 (from <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>) with aspp_stride of 2.<br> Data Augmentation: </p> <blockquote> <p>HorizontalFlip(),<br> VerticalFlip(),<br> RandomRotate90(),<br> ShiftScaleRotate(p=0.5, rotate_limit=15),<br> OneOf([<br> OpticalDistortion(),<br> GridDistortion(),<br> IAAPiecewiseAffine(scale=(0.02, 0.03)),<br> ], p=0.2),<br> OneOf([<br> HueSaturationValue(10,15,10),<br> RandomBrightnessContrast(0.1,0.1),<br> ], p=0.5),</p> </blockquote> <p>Reduced the intensity of some augmentations and removed clahe.<br> 5 fold training.</p> <p>Inference:<br> <a href="https://www.kaggle.com/pratikkumar/hubmap-win1024-sz480-inference-batched-subds" target="_blank">https://www.kaggle.com/pratikkumar/hubmap-win1024-sz480-inference-batched-subds</a><br> Overlap of 30% and Edge Ignore of slightly less than 15%.</p>
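<p>The exact ASPP head from the notebook is not shown here; below is a generic, assumed sketch of an ASPP block using the dilations (2, 4, 6, 8) mentioned above, with channel sizes chosen only for illustration.</p>
<pre><code>import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Generic Atrous Spatial Pyramid Pooling block with the dilations discussed above."""
    def __init__(self, in_ch, out_ch, dilations=(2, 4, 6, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for d in dilations])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, x):
        # Each branch sees the same feature map with a different effective receptive field.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
</code></pre>
<p>Applied to the 15x15 backbone feature map, each branch covers a different effective receptive field, which is what the aspp_stride setting above controls.</p>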
HuBMAP - Hacking the Kidney
Private submission topping 1st place in leaderboard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I noticed that one of my private submissions topped the first-place submission on the private leaderboard with a score of 0.9516. Just out of curiosity, how many submissions are there that outperformed the winner of the competition?<br> Obviously this 0.9516 submission was not taken into account for my final ranking and I understand the rules, so I am not disputing anything here. I fully accept the results and congratulate the winner.<br> I am just curious about other participants who outperformed the winner and are not visible.</p> <p>You are right.</p> <p>Here it is:</p> <p>This submission topped the first place solution on the private leaderboard (score: <em>0.9516</em>), even though it fared quite modestly on the public leaderboard (score: <em>0.9166</em>).</p> <p>The approach taken in the inference notebook is:</p> <ol> <li>Use one single model trained with FPN and an efficientnet-b7 backbone (Pytorch, using <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">segmentation models Pytorch</a>), with a window size of <em>1536</em> and a tile size of <em>768</em>. </li> <li>Perform inference on three grids, all with a window size of <em>1536</em> and a tile size of <em>768</em>, but with different overlaps: <em>[32, 128, 256]</em></li> <li>Take the predicted sets of masks, check if their average is above <em>0.49</em>, and produce the final set of prediction masks from that.</li> </ol> <p>You are right; we did not use any pseudo- or hand-labelling.</p> <p>The batch size was 12 using four RTX6000 GPUs (<a href="https://cloud.jarvislabs.ai/" target="_blank">https://cloud.jarvislabs.ai/</a>). Also trained for 18 epochs, and the time per epoch was around six minutes.</p>
HuBMAP - Hacking the Kidney
Single mode 0.945 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this competition I learned a set of training techniques and a pseudo-label fine-tuning recipe that got my single model to a Private Score of 0.945. Using this method we brought both efficientnet-b3 + Unet and efficientnet-b3 + Linknet to the same score; however, we finally submitted the merged version, whose Private Score only reached 0.943.</p> <p>Our backbone uses segmentation_models_pytorch's <strong>efficientnet-b3</strong>, and the architectures are <strong>Linknet and Unet</strong>.</p> <p>dataset: 1. The image is downscaled to 0.25 of the original size, then cropped to 512-pixel tiles with a stride of 256.<br> input: resized to 320.<br> lr: 0.001, batch size: 16, iterations: 8000.<br> Things that clearly raised the score:<br> Use dice loss + bce with weights 0.75 * dice_loss + 0.25 * bce_loss.<br> Train the base model with the lookahead + CosineAnnealingWarmRestarts combination,<br> then fine-tune the model with the highest score on the validation set.</p> <p>For fine-tuning, lr is changed to 0.0003.<br> data: use pseudo-labels; run the model on the test set and take the higher-confidence predictions as labels.<br> Fine-tune with adam + dice + bce.</p> <p>Summary:<br> The LB gains from pseudo-labels were mainly concentrated on the d48 image, which blinded us: we could not analyze or compare the scores on the other images. Offline validation showed that although pseudo-labels increase the dice score, they also increase false detections, probably because the model predictions used to build the pseudo-labels are not accurate. So in future competitions, when some images or classes look peculiar and give a conspicuous score boost late in the game, it may be a trap.</p> <p>Pseudo-labels themselves are neutral and do not cause major problems on their own, but once different models are trained on them, the effect of wrong classes is amplified and other classes get ignored, which leads to large jitter when merging.</p>
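<p>A minimal, generic sketch of the 0.75/0.25 dice + BCE loss described above (not this team's exact code; the smoothing constant is an assumption):</p>
<pre><code>import torch
import torch.nn.functional as F

def dice_bce_loss(logits, targets, dice_weight=0.75, bce_weight=0.25, smooth=1.0):
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = (2.0 * intersection + smooth) / (probs.sum() + targets.sum() + smooth)
    dice_loss = 1.0 - dice
    bce_loss = F.binary_cross_entropy_with_logits(logits, targets)
    return dice_weight * dice_loss + bce_weight * bce_loss
</code></pre>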
HuBMAP - Hacking the Kidney
gold medal public kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I’m not very sure, but I guess..they did? Since the public score only 0.926</p> <p>Turns out you can get a gold medal by a simple public kernel!<br> This is the most interesting part of kaggle😆<br> Good job deepflash2 team, and congrats to all the winners!</p> <p><a href="https://www.kaggle.com/matjes/hubmap-deepflash2-scaled-ensemble-submission?scriptVersionId=62380986" target="_blank">https://www.kaggle.com/matjes/hubmap-deepflash2-scaled-ensemble-submission?scriptVersionId=62380986</a></p>
HuBMAP - Hacking the Kidney
single model gold medal(private 948) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Kidney <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's hard to say it's a happy journey. <br> We missed the private 948 solution submitted last night.😄<br> <img src="https://i.loli.net/2021/05/11/Zfyk6rX7MHCRdv4.png" alt="微信截图_20210511090244.png"><br> And I found that an experiment with LB 912 that I had trained a month ago got private 948.<br> <img src="https://i.loli.net/2021/05/11/Em2K3QYlXrOTwjs.png" alt="微信截图_20210511090219.png"><br> I share the train, inference and tiles-making code of exp007 here.<br> train: <a href="https://www.kaggle.com/tiandaye/hubmap-exp007-train?scriptVersionId=62523292" target="_blank">https://www.kaggle.com/tiandaye/hubmap-exp007-train?scriptVersionId=62523292</a><br> inference: <a href="https://www.kaggle.com/tiandaye/hubmap-exp007-5folds?scriptVersionId=58430396" target="_blank">https://www.kaggle.com/tiandaye/hubmap-exp007-5folds?scriptVersionId=58430396</a><br> tiles making: <a href="https://www.kaggle.com/tiandaye/make-tiles" target="_blank">https://www.kaggle.com/tiandaye/make-tiles</a></p> <p>Unet with encoder='timm-efficientnet-b4', weights='noisy-student', trained with label smoothing and no PL.</p>
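<p>For readers unfamiliar with segmentation_models_pytorch, a model like the one above is typically constructed roughly as follows (a sketch; the number of input channels and output classes are assumptions):</p>
<pre><code>import segmentation_models_pytorch as smp

# EfficientNet-B4 encoder (timm implementation) initialised with noisy-student weights.
model = smp.Unet(
    encoder_name="timm-efficientnet-b4",
    encoder_weights="noisy-student",
    in_channels=3,   # RGB tiles (assumption)
    classes=1,       # single binary glomeruli mask (assumption)
)
</code></pre>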
Human Protein Atlas - Single Cell Classification
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Pipeline</h2> <p><img src="https://pbs.twimg.com/media/E1JXDBBUcAItGu9?format=jpg&amp;name=large" alt="pipeline"></p> <h2>Filtering</h2> <pre><code># relabel and remove unmatched cells (train_single / train_multi, COLS_TARGET,
# COLS_RELABEL and COLS_PRED come from earlier steps of the pipeline)
import numpy as np
import pandas as pd

## single
gr = train_single.groupby("image_id")
cell_size = gr.cell_id.transform(max)
for c1, c2, c3 in zip(COLS_TARGET, COLS_RELABEL, COLS_PRED):
    # 1: keep the inherited positive when the cell ranks high within its image or is confident
    re1 = (
        train_single[c1] * ((gr[c3].rank() / cell_size) &gt; 0.5)
        | train_single[c1] * (train_single[c3] &gt;= 0.5)
    )
    re1.loc[re1==0] = np.nan
    # 0: relabel clearly low-confidence cells as negative
    re0 = (
        train_single[c1] * ((gr[c3].rank() / cell_size) &lt; 0.1)
        &amp; train_single[c1] * (train_single[c3] &lt; 0.1)
    )
    re0.loc[re0==0] = np.nan
    re0.loc[re0==1] = 0
    train_single[c2] = pd.concat([re0, re1], axis=1).max(1)
    train_single.loc[train_single[c1]==0, c2] = 0
# ambiguous cells (still NaN) are dropped
train_single = train_single.dropna().reset_index(drop=True)

## multi
import cudf
cdf = cudf.DataFrame(train_multi[['image_id'] + COLS_PRED])
pred_quantile1 = cdf.groupby("image_id").agg(lambda x: x.quantile(0.50)).to_pandas()
pred_quantile1 = train_multi[['image_id']].merge(pred_quantile1, on='image_id', how='left')
pred_quantile0 = cdf.groupby("image_id").agg(lambda x: x.quantile(0.50)).to_pandas()
pred_quantile0 = train_multi[['image_id']].merge(pred_quantile0, on='image_id', how='left')
for c1, c2, c3 in zip(COLS_TARGET, COLS_RELABEL, COLS_PRED):
    # 1: keep the positive if the cell beats the per-image median or 0.5
    re1 = (
        train_multi[c1] * (train_multi[c3] &gt;= pred_quantile1[c3])
        | train_multi[c1] * (train_multi[c3] &gt;= 0.5)
    )
    re1.loc[re1==0] = np.nan
    # 0: relabel as negative when below the per-image median and below 0.1
    re0 = (
        train_multi[c1] * (train_multi[c3] &lt; pred_quantile0[c3])
        &amp; train_multi[c1] * (train_multi[c3] &lt; 0.1)
    )
    re0.loc[re0==0] = np.nan
    re0.loc[re0==1] = 0
    train_multi[c2] = pd.concat([re0, re1], axis=1).max(1)
    train_multi.loc[train_multi[c1]==0, c2] = 0
train_multi = train_multi.dropna().reset_index(drop=True)
</code></pre> <h2>Final Submission</h2> <p><a href="https://www.kaggle.com/inoueu1/hpa2021-p-all-1stx8-2ndx16-imgx16/notebook?scriptVersionId=62576779" target="_blank">https://www.kaggle.com/inoueu1/hpa2021-p-all-1stx8-2ndx16-imgx16/notebook?scriptVersionId=62576779</a></p> <p>Thanks, our solution came from a quite simple pipeline.<br> Now we understand it's not necessary to build complex networks to get 0.53~0.54 on private LB.</p>
Human Protein Atlas - Single Cell Classification
11th Place Solution Summary: Active learning for cell label mining and Ensemble of different models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Kaggle and the organizers for hosting this really interesting competition. Thanks to my teammates <a href="https://www.kaggle.com/silversh" target="_blank">@silversh</a> and <a href="https://www.kaggle.com/jianguoguo" target="_blank">@jianguoguo</a> for the hard work and collaboration. </p> <p>Congratulations to all the winners. This is my first Kaggle competition, and I really learned a lot of nice ideas and solutions in the forum.</p> <p><strong>1. Overview</strong><br> Our final solution includes active learning for cell label mining and an ensemble of different models.</p> <p><strong>2. Active learning for cell label mining</strong><br> Because I had not participated in a Kaggle competition before, I was not sure whether data label mining was allowed. Therefore, we only started using active learning to mine cell labels relatively late. Due to time limitations, we only applied it to a few classes, i.e., 9, 11, 12 and 15.</p> <p>First, we chose the cells in single-label images as the initial trainset, and the cell labels directly inherit the image labels. For classes 11 and 15, which have few images, we manually selected about 200 cells with ground-truth labels and added them to the initial trainset. After the model is trained on the initial trainset, it is used to make predictions for cells in all images. We use the following strategies to prepare the trainset for the next round.</p> <p>If an image contains original label X and the prediction confidence of a cell in this image is greater than 0.25, the label X of the cell is retained, otherwise the cell is removed from the next round's trainset. If an image does not contain original label X, and the predictive confidence of a cell in this image is greater than 0.5, this cell will be assigned a true label X. </p> <p>After several rounds, the noise of most cell labels can be suppressed. For example, we finally retrieved about 2,000 cells for class 11 after 4 rounds. On the final public test set, the AP score for class 11 improved from 0.001 to 0.035.</p> <p>The specific principles of active learning can be found in my previous paper <a href="https://doi.org/10.1016/j.future.2019.07.013" target="_blank">https://doi.org/10.1016/j.future.2019.07.013</a>.</p> <p><strong>3. Ensemble of different models</strong><br> We use an ensemble method to fuse Cell-level models and Image-level models. The weights are 0.75 and 0.25, respectively. The Cell-level models employ an EfficientNet-B3 network, and the cell score is averaged after 5-fold cross-validation. For the Image-level models, we train EfficientNet-B7 networks with RGBY 4-channel image input and Green 1-channel image input, respectively, and the image score is averaged from the two Image-level models. Specifically, when the Cell-level models and the Image-level models are fused, the prediction result of class 11 only uses the prediction score of the Cell-level model.</p> <p><strong>4. 
Configuration of training</strong></p> <ul> <li>Image-level model:</li> </ul> <ol> <li>EfficientNet-B7, concat-pooling + 2-layer FC head, input size 600 × 600</li> <li>Data augmentation: flip, rotation</li> <li>Focal Loss</li> <li>SWA (nearly useless)</li> <li>Fusion of RGBY 4-channel model and Green 1-channel model</li> </ol> <ul> <li>Cell-level model:</li> </ul> <ol> <li>EfficientNet-B3, concat-pooling + 2-layer FC head, input size 300 × 300</li> <li>Data augmentation: flip, rotation, mixup</li> <li>Focal Loss, label smoothing [0.1, 0.9]</li> <li>SWA (nearly useless)</li> <li>5-fold cross-validation</li> </ol> <p><strong>5. Some results</strong></p> <p>Cell-level model only:<br> | Active learning for classes | Public LB | Private LB |<br> | 11, 15 (1st round) | 0.446 | 0.465 |<br> | 11, 15 (2nd round) | 0.462 | 0.494 |<br> | 11, 15 (4th round) | 0.502 | - |</p> <p>Ensemble model:<br> | Active learning for classes | Public LB | Private LB |<br> | 11, 15 | 0.558 | 0.528 |<br> | 9, 11, 12, 15 | 0.558 | 0.532 | (final result on the LB)<br> | 9, 11, 12, 15 (4th round) + 4 (1st round) | 0.561 | 0.537 | (notebook timed out)</p> <p><strong>6. Others</strong><br> TTA is helpful for prediction, with an increase of 0.001~0.003, but it tended to make our submissions time out.</p> <p>Early in the competition, we tried to solve this problem by using multi-instance learning with transformer encoder-based attention, but the effect was not good.</p> <p>As shown in the results above, we additionally ran a round of active learning for cell label mining on class 4. That submission timed out and we resubmitted it after the deadline of the competition; it finally obtained a public LB of 0.561 and a private LB of 0.537, indicating that active-learning label mining could have continued to improve performance if extended to other classes. However, we adopted this strategy too late…</p> <p>Please forgive my typesetting; I encountered problems inserting images and tables…</p>
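<p>The relabeling rules from section 2 can be written down compactly. Below is a hypothetical sketch: the column names (<code>image_has_label</code>, <code>pred</code>, <code>label</code>) are made up for illustration, while the 0.25 and 0.5 thresholds come from the description above.</p>
<pre><code>import pandas as pd

def relabel_round(cells, cls):
    """One relabeling round for a single class, over a per-cell dataframe."""
    keep = pd.Series(True, index=cells.index)
    pos = cells["image_has_label"] == 1
    # Image contains the label: keep the inherited positive only if confidence is high enough.
    cells.loc[pos &amp; (cells["pred"] &gt; 0.25), "label"] = 1
    keep[pos &amp; (cells["pred"] &lt;= 0.25)] = False   # drop uncertain cells from the next round
    # Image does not contain the label: promote only very confident cells to positives.
    cells.loc[~pos &amp; (cells["pred"] &gt; 0.5), "label"] = 1
    cells.loc[~pos &amp; (cells["pred"] &lt;= 0.5), "label"] = 0
    return cells[keep]
</code></pre>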
Human Protein Atlas - Single Cell Classification
12th place solution summary (0.530 LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Introduction</strong><br> First of all, thank you to the organizers for hosting a fun (and very intense) competition. This is the second computer vision project I’ve attempted and I learned a tremendous amount along the way.</p> <p><strong>Overall observations and approach</strong><br> I began the project by reading twenty or thirty papers on the state-of-the-art for weakly-labeled image classification and segmentation, including the papers shared on the forum as well as quite a few others that I came across. I kept a daily journal of my progress and maintained a separate “to-do” list with ideas I had as I went along. Whenever I felt stuck I could always go back and find new directions to pursue.</p> <p>When I entered the competition, <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> had a commanding lead over the rest of the entrants. Since <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> won the first HPA competition, I realized that he was probably re-using a substantial amount of his prior solution to achieve this result. His code and models were published and open sourced after the prior competition, so I resolved to use this as my starting point as well rather than forking any of the public kernels.</p> <p>My first (working) baseline was to take one of <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>’s pretrained models and simply re-map all of the category predictions from HPAv1 to HPAv2. I applied the same category to every cell in each image equally without attempting to localize. There was no category equivalent to “Negative” in the HPAv1 set so I copied these labels from one of the public kernels. Without doing any training whatsoever this baseline put me in front of all of the public kernels available at that time.</p> <p>Despite the organizers’ attempts to pick high single-cell variation, the same-slide category correlation between cells remains extremely high. As such, I viewed the deciding factors in this competition as likely to be, in descending order:</p> <ol> <li>Classification</li> <li>Localization (i.e. differentiating cells on a single image)</li> <li>Segmentation</li> </ol> <p>Segmentation is last for two reasons: 1. The HPA cell segmentator is already quite robust and any significant deviation from its output is likely to be severely penalized and 2. The lack of any cell-level ground truth labels doomed, in my opinion, any attempt to use Mask-RCNN or other segmentation models to define cell boundaries. I’ll be surprised if any top teams had success with this.</p> <p>Accordingly, I spent a significant amount of time in the early stages of the competition refining my image-level classifier, which I suspect was the opposite of what most teams were doing at that point. 
Once <a href="https://www.kaggle.com/h053473666" target="_blank">@h053473666</a>’s kernel was published this changed the competition dynamic significantly as many other teams started integrating image-level classifiers into their ensembles.</p> <p><strong>My top solution</strong></p> <p><strong>Input:</strong></p> <p>4-channel RGBY images from both train and external. I used a multi-threaded file downloader to get all of the external images early on (even so, this took several days of downloading). No attempt to remove any duplicates, etc.</p> <p>My image classifier resized all images to 768x768 with no additional preprocessing.</p> <p>My cell classifier used HPA cell segmentation first then padded each cell to square before masking and cropping each cell to 768x768.</p> <p><strong>Train augmentation:</strong></p> <p>Fairly standard flipping horizontal, vertical, and transpose along with random cropping. This seemed more than adequate for my purposes.</p> <p><strong>Segmentation</strong></p> <p>I used an implementation of HPA segmentator which allowed batch processing through the network, as the default implementation seems to process images one at a time. I modified the labeling function to run on 25% scaled masks as well to achieve significant time savings, albeit at a very slight hit to accuracy.</p> <p>I implemented heuristics to exclude edge cells based on the size of the cell and nucleus relative to the average size of the cells and nuclei in the image. The discussion forums suggested this wasn’t very important but I found it to make a very significant difference in my score. </p> <p><strong>Architecture:</strong></p> <p>I used <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>’s architecture from the HPAv1 competition with only minor tweaks. This uses a DenseNet121 backbone followed by:</p> <ol> <li>Average Pooling and Max Pooling concatenated together (Side note: the model appeared to rely mostly on the Max Pooling and used Average Pooling to regularize the output)</li> <li>Batch Norm and 50% Dropout</li> <li>A FC layer reducing from 2048 to 1024</li> <li>ReLU, BN, Dropout 50%</li> <li>A second FC layer reducing from 1024 -&gt; 19</li> </ol> <p>I also used one of <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>’s pretrained models as a starting point for fine-tuning, which undoubtedly reduced my energy bill a bit. 😊</p> <p>I used an identical architecture for both the image and cell classifiers and used the same pretrained model for both as well.</p> <p><strong>Loss functions</strong></p> <p>I experimented with various combinations of loss functions. Whereas <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> combined three loss functions into a single aggregate loss for training, I tried training models based on a single loss then ensembling them together. </p> <p><strong>Training</strong></p> <p>Mid-way through the competition I switched to using mixed-precision training, which gave me a huge speed boost with very little coding required. I’ll be doing this for sure in all future competitions. I trained on a Quadro RTX 5000 and RTX 2080S. Running in parallel I could only use half of the Quadro RTX’s memory due to their mismatched memory sizes. A single epoch of my cell classifier takes 12-13 hours on this setup, and I wasn’t able to run as many epochs as I would have liked. 
(It took about 20 hours at full precision).</p> <p><strong>Test augmentation</strong></p> <p>I used similar test-time augmentation to my train augmentation, including random cropping. This proved to be very useful, so I did as much as I could, limited by Kaggle’s 9 hour restriction. I wasn’t very diligent in setting random seeds so this introduced some variation from run to run. While the variation caused me some uncertainty in picking models it also made me less likely to cherry-pick models based on a single “lucky roll.”</p> <p><strong>Heuristics</strong></p> <p>Right before the end of the competition I realized that multiplying my image-level predictions against my cell-level predictions gave better output versus averaging them. The fact that all of the confidences shrink to near zero is no problem, since the relative ranking is the only important factor for the mAP metric. This allowed my model to more effectively down-weight many of the spurious single-cell predictions which didn’t align with the image-level prediction.</p> <p>For the Negative category I used the max of the cell-level or image-level, which I think was better than the average (but I didn’t spend submissions to prove this for sure).</p> <p>I also implemented a heuristic to penalize predictions from cells with a high Negative probability, applying a penalty to all of the non-Negative labels for this cell. This gave me a small score boost, largely because I believe my Negative label predictions are quite accurate.</p> <p><strong>Ensemble</strong></p> <p>768x768 4-channel Cell classifier trained 6 epochs with BCE (5x random crop and augment) 40% weight<br> 768x768 4-channel Cell classifier trained 4 epochs with BCE then 1 epoch with Focal Loss (5x random crop and augment) 60% weight</p> <p>768x768 4-channel Image classifier trained 25 epochs with BCE (8x random crop and augment)</p> <p>Negative category based on maximum of Cell or Image classifier. Other categories are Cell x Image result, downweighted by Negative class prediction.</p> <p><strong>Results:</strong></p> <p>Public LB: 0.547 (equivalent to 14th place on the leaderboard)<br> Private LB: 0.530 (12th place)</p> <p>My best public LB of 0.552 was based on a quick submit which I couldn’t reproduce on Kaggle. I think this 0.005 score difference from my next best submission was due to the reduced scaling of the segmentator labeling that I had to adopt for speed reasons (or perhaps a lucky combination of crops during my test augmentation).</p> <p><strong>What worked:</strong></p> <p>• The model architecture was quite robust to overfitting and required very little additional hyperparameter tuning to get good results.<br> • Using <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>’s pretrained weights saved me a lot of computation time<br> • FocalLoss generally gave superior results for image classification compared to other loss functions<br> • My heuristics (edge cell pruning, multiplicative ensembling, and penalizing Negative cells’ other class predictions) all gave me meaningful score boosts<br> • Mixed precision training was a life-saver<br> • Test augmentation – I never realized how important this was prior to this competition!<br> • Getting an early start on private LB submissions – I missed out on a silver medal in a previous competition because of submission-related problems and wanted to make sure that never happens again. 
I probably spent ten days or more focused on performance-tuning and bug-fixing my code to where it can consistently submit without problems. It also made me realize that going in with a crazy ensemble with everything under the sun wasn’t going to be a viable solution and I should focus more on ensembling together a few high-performing models.</p> <p><strong>Other things I tried which (kind of) worked but I abandoned along the way:</strong></p> <p>• Focal Loss for cell-level classification seemed to quickly overfit. Not really sure why. This was particularly the case in early training. As a result, I pretrained my model using BinaryCrossEntropy to first reach a fairly stable point and then used Focal loss to fine tune further.<br> • Manual labelling of mitotic spindle cell instances was important to ensure quality labels for this class as most images only included 1-3 examples.<br> • L2 loss was surprisingly effective, particularly when ensembled with other losses. It didn’t make my final ensemble, however. (Actually, one of the models which I didn’t pick for the final submission used L2 and scored slightly higher, so I wish I’d kept this in).<br> • Increasing resolution to 1536x1536 for the image classifier. This showed some promise but the accuracy gains were minimal (probably some overfitting going on). It was also unacceptably slow during inference so I reverted back to 768x768 resolution.<br> • I built a model using single-channel green 224x224 nuclei crops focused just on nuclear categories (plus mitotic spindle). This worked but its predictions weren’t as strong as my final cell-level model’s. I think it might have struggled to differentiate nucleoplasm from other “diffuse field” categories, lacking broader context. Didn’t want to waste my precious remaining submissions to see if it was helping for any of the other categories.<br> • PuzzleCAM – I built a fully-functioning PuzzleCAM implementation on top of a modified version of the architecture I’m using. (Had to remove MaxPooling, etc. for this to function). </p> <ul> <li>I used raw un-normalized feature outputs from the PuzzleCAM rather than true “CAMs” and used the feature overlaps with individual cell masks to make cell-level predictions.</li> <li>I really liked this approach since the PuzzleCAM model is smart enough to make highly localized predictions while also being aware of the overall image context.</li> <li>However after transitioning to a pure cell-level image classifier I was able to surpass my PuzzleCAM score significantly and abandoned this approach. I think with additional work this would have been an extremely powerful addition to my ensemble, but I just ran out of time.</li> <li>The main downside of PuzzleCAM is it requires some additional heuristics to generate class predictions, so there’s a bit of fine-tuning involved.</li> <li>I thought about building a custom loss function for PuzzleCAM which could take masks and images as input and generate losses at the cell level. This seemed like a novel approach to the problem but I didn’t get time to implement it.</li> </ul> <p>• Rank-based ensembling – since the mAP metric is scale-invariant, the specific confidence scores output by a model are irrelevant. When combining loss functions with very different confidence ranges (e.g. like combining L2 together with Focal Loss) it sometimes seemed to help to ensemble together based on rank rather than confidence scores. 
However this caused slight performance degradation in my final ensemble so I deactivated it.<br> • EfficientNet backbone as a swap in replacement – I got the backbone working but probably didn’t have enough time remaining in the competition to train it sufficiently well.</p> <p><strong>Other things I tried which didn’t work</strong></p> <p>• Applying weights to different categories generally seemed to destabilize my training<br> • ROCStar loss function (<a href="https://github.com/iridiumblue/roc-star" target="_blank">https://github.com/iridiumblue/roc-star</a>) trying to optimize directly for AUC. Seemed to work for the first epoch but overall was less effective than other loss functions.<br> • I spent a lot of time building an artificial dataset by hand-curating cell examples from different categories and combining them using random copy-paste into completely artificial images. I thought this could substitute for the lack of ground truth labels since I could “create my own.” Ultimately, though, this was a recipe for rapid, massive overfitting.</p> <ul> <li>It did, however, provide an interesting dataset for cross-validation of my image classifiers as I could estimate my mAP breakdown for each category. </li> <li>However, trying to use this information to optimize my ensemble on a category level didn’t seem to give any meaningful improvement.</li> </ul> <p>• I tried running PCA and an SVM to create a metric model using the cell images as well as their spectral domain counterparts, but my initial tests didn’t show promising results so I abandoned this approach.<br> • Multiprocessing in Kaggle kernels was extremely frustrating. Wasted lots of GPU quota trying to get this to work consistently. After one too many random crashes I quit doing this. </p> <p><strong>Other things I wanted to do but didn’t have time</strong></p> <p>• I wanted to build a metric learning model looking at cell-level similarity, along the lines of what <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> did for the HPAv1 competition but on the cell-level rather than image-level. Ultimately though I decided to spend my last few days of GPU processing time doing more cell-level epochs instead (probably a mistake?).<br> • Further refinement of the segmentation output – finding accidental blue stains, cells with multiple nuclei, mitotic spindles which are incorrectly segmented as two cells, etc.<br> • Mixup or copy/paste augmentation to increase frequency of rare class examples in the training dataset.<br> • CLAHE or other contrast-enhancement on the green channel.<br> • Teaming: I debated a lot about whether to team or not. However when the deadline approached I was struggling a lot with performance issues, multiprocessing instability, Kaggle errors, etc. and was stressed out just about getting my models working in the 9 hour time limit. There’s no doubt in my mind that if I had teamed up, even if just to ensemble my final output, my ranking would have improved. Lesson learned!</p> <p><strong>Thank you!</strong></p>
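<p>As a concrete reference for the classifier head described in the Architecture section above, here is a minimal PyTorch sketch. The dimensions follow the write-up (2048 comes from concatenating average and max pooling of DenseNet121's 1024-channel feature maps); the class name, variable names, and the dummy feature tensor are my own, and the adaptation of the backbone's first convolution to 4-channel RGBY input is not shown.</p> <pre><code>import torch
import torch.nn as nn

class AvgMaxHead(nn.Module):
    """Concat of average and max pooling, then BN, Dropout, FC 2048 to 1024, ReLU, BN, Dropout, FC 1024 to 19."""
    def __init__(self, in_ch=1024, mid=1024, n_classes=19, p=0.5):
        super().__init__()
        self.bn0, self.drop0 = nn.BatchNorm1d(in_ch * 2), nn.Dropout(p)
        self.fc1 = nn.Linear(in_ch * 2, mid)
        self.bn1, self.drop1 = nn.BatchNorm1d(mid), nn.Dropout(p)
        self.fc2 = nn.Linear(mid, n_classes)

    def forward(self, feat):
        # average pooling and max pooling concatenated together: (B, 2048)
        x = torch.cat([feat.mean(dim=(2, 3)), feat.amax(dim=(2, 3))], dim=1)
        x = self.drop0(self.bn0(x))
        x = self.drop1(self.bn1(torch.relu(self.fc1(x))))
        return self.fc2(x)  # one logit per protein pattern

# DenseNet121 feature maps for a 768x768 image are roughly (B, 1024, 24, 24)
feat = torch.randn(2, 1024, 24, 24)
logits = AvgMaxHead()(feat)  # torch.Size([2, 19])
</code></pre>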
Human Protein Atlas - Single Cell Classification
15th Place Solution: CAM & Semantic Segmentation approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the organizers for hosting a nice competition. </p> <h2>Overview</h2> <p>I followed a weakly supervised semantic segmentation approach. First I trained an image-level classifier and prepared CAMs, then trained a segmentation model with pseudo labels from the CAMs. </p> <h2>Pipeline</h2> <p>Image level classification &amp; semantic segmentation &amp; hpa cell segmentator.<br> <img src="https://raw.githubusercontent.com/Fkaneko/kaggle-hpa-single-cell-image-classification/main/images/hpa_inference.png" alt="inference pipeline"></p> <h2>Image Level Classification</h2> <ul> <li><p>model: resnest50, resnet50, resnet200d, effb3</p></li> <li><p>Input size: 512x512 ~ 1280x1280</p></li> <li><p>Loss: focal loss</p></li> <li><p>Class Mask: normal CAM and SC-CAM</p> <ul> <li><p><a href="https://arxiv.org/abs/2008.01183" target="_blank">SC-CAM, arxiv: 2008.01183</a><br> Sub-category CAM: to improve CAM localization, this method introduces sub-categories generated by clustering CNN features. The sub-categories help the model differentiate intra-class variation, so the area covered by the CAM is extended, as shown below. This figure is taken from <a href="https://github.com/Juliachang/SC-CAM" target="_blank">https://github.com/Juliachang/SC-CAM</a><br> <img src="https://raw.githubusercontent.com/Juliachang/SC-CAM/master/teaser.png" alt="SC-CAM"> </p></li> <li><p>SC-CAM improved the score by 0.005~0.02, but it only worked for 512x512.<br>  + SC-CAM x Puzzle CAM did not work.</p></li></ul></li> <li><p>Best single model: resnest50 768x768, public 0.445, private 0.441 </p></li> </ul> <h2>Semantic Segmentation</h2> <ul> <li>model: Unet{resnest50, resnet50, resnext50}</li> <li>Input size: 512x512 ~ 768x768</li> <li>Label: pseudo labels from the class mask (CAM &amp; SC-CAM) and the instance mask from the hpa cell segmentator</li> <li>Best single model: resnet50 512x512, public 0.491</li> </ul> <h2>Other Tips</h2> <ul> <li>Training speed: NGC container x channels-last PyTorch memory format<br> Following <a href="https://gist.github.com/rwightman/bb59f9e245162cee0e38bd66bd8cd77f" target="_blank">the benchmark result at timm</a>, my resnet training time was reduced by 40% from my initial configuration. EfficientNet training time was also reduced by 20-30%.</li> <li>TTA<br> h/vflip and image scaling {x1.2, x1.4} improved the CAM model score by 0.01~0.03</li> </ul> <p><a href="https://github.com/Fkaneko/kaggle-hpa-single-cell-image-classification" target="_blank">Code is available</a> </p>
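<p>To make the "pseudo labels from CAM plus instance mask" step above concrete, here is a rough NumPy sketch of one common way to do it: average the CAM activation inside each cell mask and threshold. The function name, the 0.5 threshold, and the mean aggregation are placeholders of mine, not necessarily the exact rule used in this solution.</p> <pre><code>import numpy as np

def cell_pseudo_labels(cam, inst_mask, thresh=0.5):
    """cam: (n_classes, H, W) class activation maps scaled to [0, 1].
    inst_mask: (H, W) instance mask from the HPA cell segmentator (0 = background).
    Returns per-cell binary pseudo labels and the matching cell ids."""
    cell_ids = [i for i in np.unique(inst_mask) if i != 0]
    scores = np.zeros((len(cell_ids), cam.shape[0]), dtype=np.float32)
    for row, cid in enumerate(cell_ids):
        inside = inst_mask == cid
        scores[row] = cam[:, inside].mean(axis=1)  # mean activation per class inside the cell
    return (scores &gt;= thresh).astype(np.float32), cell_ids

# these per-cell labels can then supervise a Unet-style semantic segmentation model
</code></pre>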
Human Protein Atlas - Single Cell Classification
16th Place Solution: CNN, Yolov5, and use image-wise model to predict the cell
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone! We are glad to get 16th in this competition. We want to say thank to our team members <br> <a href="https://www.kaggle.com/louieshao" target="_blank">@louieshao</a> <a href="https://www.kaggle.com/lucamtb" target="_blank">@lucamtb</a> <a href="https://www.kaggle.com/joven1997" target="_blank">@joven1997</a> <a href="https://www.kaggle.com/morizin" target="_blank">@morizin</a> and all participants in this competitions! Especially thank host to celebrate such an amazing competition! <a href="https://www.kaggle.com/emmalumpan" target="_blank">@emmalumpan</a> <a href="https://www.kaggle.com/lnhtrang" target="_blank">@lnhtrang</a> <a href="https://www.kaggle.com/cwinsnes" target="_blank">@cwinsnes</a> <a href="https://www.kaggle.com/philculliton" target="_blank">@philculliton</a> <a href="https://www.kaggle.com/maggiemd" target="_blank">@maggiemd</a> It’s my first time to see hosts are so nice to help people facing issues in discussion, also post some great kernels to help us dive into this competition! Great job everyone!</p> <p>Pipeline:<br> 5 cell-wise model, 3 image-wise model, 1 yolov5 model. All image-wise model receive green channels only. </p> <p><img src="https://imgur.com/rkpVAAi.jpg" alt=""></p> <p>Weights: <br> 0.5 cell-wise model + 0.3 image-wise model + 0.1 yolov5 model + 0.1 image-wise model(predict the cell)</p> <p>Cell-wise models:<br> 1.Efficientnet b0, external data included, pseudo labeling, epoch 10;<br> 2.Efficientnet b0, external data included, pseudo labeling, epoch 1;<br> 3.Efficientnet b7, external data included, epoch 1;<br> 4.Efficientnet b7, epoch 1;<br> 5.Efficientnet b7, external data included, green channel only, epoch 1.</p> <p>Image-wise models:<br> 1.Efficientnet b7, external data included, green channel only, epoch 20;<br> 2.Efficientnet b7, green channel only, epoch 20; <br> My <a href="https://www.kaggle.com/h053473666/hpa-classification-efnb7-train" target="_blank">public kernel</a><br> 3.Efficientnet b7, green channel only, input size 720, epoch 20; <br> <a href="https://www.kaggle.com/aristotelisch" target="_blank">@aristotelisch</a> ‘s <a href="https://www.kaggle.com/aristotelisch/hpa-classification-efnb7-train-13cc0d" target="_blank">public kernel</a></p> <p>Cell tile resize trick:<br> Add padding to keep the cell tile’s width equals height, then resize to the input size, which can keep the cell tile’s width-height ratio unchanged, highly boosting our score. </p> <p>Pseudo Labeling:<br> Considering the cell-wise tile has unreliable class, we think pseudo labeling on cell may help. 
Here is our strategy:<br> 1)Split datasets to two part;<br> 2)Use one predict the other;<br> 3)For the possibility of each class:<br> i)If confidence value&gt;0.8, put this class into class list;<br> ii)If confidence value0.2, if the answer is YES, we will put class 18 into class list;<br> v)Drop cell tiles without any class.</p> <p>Some postscripts by me:<br> I have posted my public kernel<a href="https://www.kaggle.com/h053473666/0-354-efnb7-classification-weights-0-4-0-6" target="_blank">https://kaggle.com/h053473666/0-354-efnb7-classification-weights-0-4-0-6</a> about the ensemble of cell-wise and image-wise models. I’m glad to see that it helps. Nevertheless, in some cases, adding image-wise models may not improve the performance. Basically, if it does not work, you can try to tune down the weights of image-wise models. I joined VinBigData competition before, in which i use the weight of 0.95<em>detection + 0.05</em>image-wise model. Even the score went down when i tuned the weight of detection to 0.9. In most cases, more powerful the cell-wise model is, less improvement you can get by ensemble with image-wise models. It’s normal because the mAP focus on the cell-wise confidence anyway. Basically, if one want to do ensemble of cell-wise and image-wise models. Maybe you want to use multiplication first. Nevertheless, I think weighted average may help and easy to conduct. This idea is from my combine of detection and CNN. I think detection is to get bbox, but CNN may predict more accurate confidence value than detection. However, False positives becomes a main issue here. We will expect that image-wise model can help the performance a little bit and that is enough. In this competition, however, we do not have reliable cell-wise labels, so image-wise models may help a lot. If a person can solve the weak-label issue, they will get less improvement from image-level models, but the upper limit of score is relatively high</p> <p>Here comes the interesting part--</p> <p>Yolov5 model:<br> Considering we don’t have the GT of the mask( they can only be obtained by CellSegmentator ), we think it is not easy to make our segmentation models more accurate than CellSegmentator . Nevertheless, yolov5 can not only detect the image into cell bbox but also give each bbox a confidence value. Considering all of these, we select yolov5 to our model zoo. However, considering the mAP is calculated by the sorting of confidence value, different location of cells in images may influence the confidence value for yolov5. So it is necessary to train other models to predict cell-tile data to do the ensemble.</p> <p>Training: Green channel only, no external data, 20 epochs for basic training, 20 epochs for fine tuning.<br> Prediction: <br> 1)Get the mid points of each cell by Cellsegmentator;<br> 2)Do the KNN(N=1) with mid points predicted by yolov5, making cells and confidence values compared. When a cell mask has two bbox confidence values, choose the larger one. (like NMS)<br> 3)Assign each cell with the confidence value given by yolov5. If KNN does not find bbox of yolov5 to compare, set the confidence value to 0. Finally, weighted average with our cell-level model. (Like WBF)</p> <p>New prediction approach:<br> In this competition, the labels of cell tiles are not quite stable. Meanwhile, the image-wise model seems more robust because the training data of image-wise label is reliable. So, we mainly want to find a way to use image-wise model to predict the cell. 
</p> <p>So we came up with a way, and it goes like this:</p> <p>1) Label a whole image using the host’s Segmentator, so we get the different cells in the image (1, 2, 3, …);<br> 2) Choose a cell of interest and set all areas except that cell to 0;<br> 3) Resize the image to the size the image-wise model needs;<br> 4) Predict with the image-wise models (my public kernel).<br> img example<br> <img src="https://imgur.com/kj8vk8t.png" alt="img"></p> <p>I think it is the ‘magic’ part of our solution.</p> <p>Thanks to everyone, from participants to hosts. It’s such an amazing journey for us. Hope to see you guys in HPAv3 :)</p> <p>Thanks! It’s also fun to work with you! I learned a lot from your kernel. 👍</p> <p>0.005~0.009. There is an advantage that yolo can increase robustness.</p>
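<p>The cell tile resize trick described earlier in this post (add padding to make the tile square, then resize, so the width-height ratio is preserved) is easy to sketch; here is a minimal version with OpenCV. The target size, zero padding, and centering details are my own assumptions.</p> <pre><code>import numpy as np
import cv2

def pad_to_square_and_resize(cell_tile, size=512):
    """Pad a cell crop to a square with zeros, then resize,
    so the width-height ratio of the cell stays unchanged."""
    h, w = cell_tile.shape[:2]
    side = max(h, w)
    top = (side - h) // 2
    left = (side - w) // 2
    padded = np.zeros((side, side) + cell_tile.shape[2:], dtype=cell_tile.dtype)
    padded[top:top + h, left:left + w] = cell_tile
    return cv2.resize(padded, (size, size), interpolation=cv2.INTER_LINEAR)
</code></pre>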
Human Protein Atlas - Single Cell Classification
18th place solution private 0.515 public 0.514
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and the organizers for putting together a competition on a weakly supervised learning approach. I learned a lot from this experience.</p> <h2>Solution overview</h2> <p>Since this is a weakly supervised learning task and the HPA cell segmentation model was able to give instance masks, I chose to use a Multiple Instance Learning method here. This made sense as the image-level labels are not precise, and through discussion it was clear that there could be cells of the negative class that are not mentioned in the image-level label. </p> <h2>Stage 0 model</h2> <p>Here I used Effb4 and resnest50 as the CNN backbone. The training setup consists of extracting individual cells from an image and selecting 16 cells (max, else resampling to match 16 count) at random to be used as a bag of images. The target is the same as the image-level label. I use 16 cells here just to make sure I have at least some instances of cells corresponding to the label (in stage 0 we can't be sure of the cell labels). </p> <p>CNN Feature Extractor : Effb4, resnest 50<br> cells used per image : 16<br> Pooling layer (per cell): GAP<br> Attention pooling layer (per image, 16 cells)<br> image level label<br> Focal loss<br> Time : roughly 1hr per epoch </p> <p>The magic of this approach is the attention pooling layer, which weights the 16 cell feature embeddings by providing an importance score based on the contribution of individual cells to the image-level label. This enabled me to train a model that can find the appropriate cell-level label by looking only at the image-level label. </p> <h2>Stage 1 model</h2> <p>After training the stage 0 model, we have a model capable of providing cell-level protein scores that can be submitted to the competition. Along with this, we also have a well-trained attention layer that can serve as a module to score individual cells. This can be used as a pseudo-label generator. I used this approach to find the missing cell labels in an image and append them to the image-level label. This helped with finding images which contain Negative-class cells but were not labeled as such. It also helped with reducing the cell count per image, as we are now more sure about the class of cells present in the image.</p> <p>CNN Feature Extractor : Effb4, resnest 50<br> cells used per image : 8<br> Pooling layer (per cell): Transformer based attention pooling<br> Attention pooling layer (per image, 8 cells)<br> image level label + pseudo label<br> Focal loss<br> Time : roughly 20min per epoch </p> <p>We can clearly see how the training time dropped because of reducing the number of cells. </p> <h2>Final inference</h2> <p>My final inference consists of 5 models, where 3 stage 0 and 2 stage 1 models were used. I used 4x TTA to make my model predictions robust.</p> <p>I am happy that my public and private scores for this submission did not change a lot: private 0.515, public 0.514. I did suffer from an accuracy drop due to a GPU hardware change. I trained my model on a Volta-architecture GPU while Kaggle uses the P100 (Pascal). I got a score of 0.524 on the public dataset by generating the submission file on the Volta GPU. This was a surprise for me. 
I did try to calculate the mean difference between the GPU submission files and used it to correct my final submission score. It did help a bit, but I lost a lot of score just because of the hardware difference. </p> <h2>Final thoughts</h2> <p>Overall, I got good experience working on a weakly supervised learning problem. Thanks again</p>
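<p>The key piece of this solution is the attention pooling over a bag of cell embeddings. Below is a minimal PyTorch sketch of that idea; the embedding dimension, layer names, and the exact attention form are my assumptions, and the transformer-based per-cell pooling used in stage 1 is not shown.</p> <pre><code>import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Score each cell embedding, softmax over the bag, then classify the
    attention-weighted image embedding. The per-cell weights can later be
    reused to score individual cells (pseudo-label generation)."""
    def __init__(self, emb_dim=1792, n_classes=19, attn_dim=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(emb_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, bag):                              # bag: (B, n_cells, emb_dim) per-cell CNN features
        weights = torch.softmax(self.attn(bag), dim=1)   # (B, n_cells, 1) attention weights
        image_emb = (weights * bag).sum(dim=1)           # (B, emb_dim) image-level embedding
        return self.classifier(image_emb), weights.squeeze(-1)

head = AttentionMILHead()
logits, cell_weights = head(torch.randn(4, 16, 1792))   # 16 cells per image, as in stage 0
</code></pre>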
Human Protein Atlas - Single Cell Classification
20th Place Solution - Two stage model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the competition host for such an interesting competition. I had quite some fun and learnt a lot from this competition. Thank you fellow HPA competitors for sharing your ideas and solution. To the winners of this competitions, congratulations!</p> <h1>Introduction</h1> <p>Like most competitors I used the segmentation algorithms provided by the organizers to segment the cells . The data provides weak positive labels, but an abundance of strong negative labels. The converse is true for the negative class. So I trained a two-stage model, the first on image level labels and the second on cell level labels predicted by the first stage.</p> <h1>Stage 1</h1> <p>An ensemble of 6 efficientnet-b0 models to predict image level labels, trained with different random seeds but with the full datasets.</p> <h2>Augmentation</h2> <ul> <li>Random flip</li> <li>Random rotate 90</li> <li>Random brightness (independent in each RGBY channel)</li> <li>Random crop</li> </ul> <h2>Training settings</h2> <ul> <li>A slightly modified <a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf" target="_blank">balanced focal loss</a>. Instead of assigning a weight to each class I calculated a different alpha for each class assuming a one-vs-all classification.</li> <li>cosine annealing with warmup</li> <li>1e-2 learning rate</li> <li>10 warmup epochs</li> <li>90 epochs</li> <li>AdamW with 1e-3 weight decay</li> </ul> <h2>Results on LB</h2> <p>The stage 1 model has a LB score of 0.47455 private, 0.48021 public with 8x TTA.</p> <h1>Stage 2</h1> <p>In stage 2, I made two assumptions on the image level labels provided by the organizers.</p> <ol> <li>Positive labels indicates at least one cell in the image with the target class.</li> <li>Negative labels indicates the all the cells in the image do not contain the target class.</li> </ol> <p>So I used the stage 1 ensemble to predict individual cell labels but I only keep the cell-wise labels for classes where their ground truth image label is positive. The cell labels for classes which have negative ground truth image label are set to 0.</p> <p>For the negative classes, I predict negative_prob = 1 - max(other classes). I downloaded 1123 negative images from the extra HPA dataset and assign a negative class label of 1.</p> <p>I then trained this stage 2 model with the following setting.</p> <h2>Augmentation</h2> <ul> <li>Random crop 10% off height and width</li> <li>Random flip</li> <li>Random rotate 90</li> <li>Random brightness (independent in each RGBY channel)</li> <li>With 50% probability, I multiply the red, blue and yellow channels with a random number between 0 and 1. 
This is motivated by the observation that some duplicate images have the same structure but different intensities in different channels.</li> <li>Crop or pad images to 224x224</li> </ul> <h2>Hyperparameters</h2> <p>I didn't have time to try different hyperparameters, but the settings I used for my solution are</p> <ul> <li>Cosine annealing with warmup</li> <li>learning rate 1e-3</li> <li>10 warmup epochs, 30 epochs</li> <li>AdamW optimizer with 1e-4 weight decay (this is quite important as the cell-level models overtrain very quickly into NaNs on mixed precision without weight decay).</li> </ul> <p>This model is very expensive to train as I had more than 500,000 examples. One epoch took 6 minutes on the latest gen GPU.</p> <h2>Results of stage 2 training</h2> <p>The stage 2 training improved my score by about 0.03 in my best solution; others vary from 0.01-0.08 based on what I tried in stage 1 models. My best submission has a score of 0.50626 private, 0.50738 public.</p> <h1>Extra Data</h1> <p>I downloaded the extra public dataset provided by the organizers, but I only selected from classes which had image-level mAP scores of less than 0.8 based on the validation set of my stage 1 model. I also downloaded 1123 negative (no target class) images.</p> <h1>Choice of model</h1> <p>I noticed efficientnet-b0 performed the best in stage 1. Other models I tried were mobilenet v3, resnet-50, and resnet-101. Larger efficientnet models (I tried b1 and b2) didn't perform as well as b0.</p> <h1>Post-processing</h1> <p>Unfortunately, I ran out of time towards the end of the competition and wasn't able to try combining image level predictions with cell level predictions or trying something to deal with edge cells. (One thing I learnt was that time management is quite important in competitions like this.)</p> <h1>Things that did not work</h1> <ul> <li>Concatenating a global max pool and global average pool in the last layer of the stage 1 (image level) models.</li> <li><a href="https://arxiv.org/abs/2103.07246" target="_blank">DRS</a> in the global average pooling layer. Maybe there is some hyperparameter setting or something I got wrong.</li> </ul> <h1>Lack of plots or images</h1> <p>It seems I cannot upload images to kaggle at the moment. I hope to update this post with plots or images in the future when that is possible.</p>
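<p>As a small illustration of the stage 2 label construction described above (keep stage 1 cell predictions only for classes that are positive at image level, zero out the rest, and derive the Negative class as 1 - max of the other classes), here is a NumPy sketch; the function name and the assumption that Negative is class index 18 are mine.</p> <pre><code>import numpy as np

def stage2_targets(cell_probs, image_labels, neg_idx=18):
    """cell_probs: (n_cells, 19) stage 1 ensemble predictions for one image.
    image_labels: (19,) binary image-level ground truth.
    Returns soft targets for training the stage 2 cell-level model."""
    targets = cell_probs * image_labels[None, :]       # zero out classes with a negative image label
    others = np.delete(targets, neg_idx, axis=1)
    targets[:, neg_idx] = 1.0 - others.max(axis=1)     # negative_prob = 1 - max(other classes)
    return targets
</code></pre>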
Human Protein Atlas - Single Cell Classification
21st Place Solution: You don't need cell tiles
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Raman!</p> <p>Thanks Martin!<br> 1) It brought some stability to the CAM*mask-values because I had some very large class-activations for classes with very low probability (that presumeably evened out in the last fully connected layer of the model?)<br> 2) The bottom half are image-level labels</p> <p>Thanks Darek! That's really nice to hear, you're also an awesome contributor to Kaggle!</p> <p>Hey everyone, thanks for that awesome challenge! Congrats to everyone :)<br> I'm verry happy to get my first silver medal on Kaggle and want to share my approach with you.<br> And thanks to phalanx and his <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/217395" target="_blank">post </a>on Puzzle-CAM for good inspiration</p> <p><strong>General Approach</strong><br> Like many of you, I read a lot a weakly-labeled instance segmentation and eventually wanted to go for a image-level training and inferencing method. For this to achieve, a model producing good Class-Activation-Maps was needed so I decided to try <a href="https://arxiv.org/abs/2101.11253" target="_blank">Puzzle-CAM</a> and do some mapping magic for inferencing to get probabilities from my CAMs. </p> <p><strong>Training</strong><br> I trained according to the Puzzle-CAM paper with each images being tiled to four single images and considering the full-image CAMs versus the tiled-image CAMs in a loss function. I used a ResNest-101 and an EfficientNet-B4 with the according GAP Layers added and Focal Loss function. </p> <p><strong>Inferencing</strong><br> Here's the interesting part. I'm simply multiplying the CAM of each class with the cell mask of each cell and the class probability the model produces <em>(using a Swish-Activation to obtain the CAMs gives slightly better results than raw CAMs or ReLU)</em>. This gives very large class activated values for each class for each cell, which have to be mapped to real class probabilities and I used two approaches for this:</p> <ol> <li>standardize the values of each image using <code>sklearn.preprocessing.StandardScaler</code> and applying a sigmoid function to these values (works surprisingly good)</li> <li>Do the inferencing on the single-class labeled train data to get the raw values and train a gradient boosting regressor to learn the according label (0..1) for each class (to make sure, that the right mapping function, that might be different from the sigmoid function, is found)</li> </ol> <p>In the end I combined both approaches. </p> <p>Now enjoy some nice graphics showing my approaches (click links for higher res) :)</p> <p><img src="https://images2.imgbox.com/e7/85/HVh20eFe_o.jpg" alt=""><br> <a href="https://images2.imgbox.com/e7/85/HVh20eFe_o.jpg" target="_blank">TRAIN</a></p> <p><img src="https://images2.imgbox.com/fd/b1/4bEYABtz_o.jpg" alt=""><br> <a href="https://images2.imgbox.com/fd/b1/4bEYABtz_o.jpg" target="_blank">INFERENCE</a></p> <p>Thanks Alex! 
Yes I used Lucidchart: <a href="https://www.lucidchart.com/" target="_blank">https://www.lucidchart.com/</a><br> I made my inferencing code public: <a href="https://www.kaggle.com/alexanderriedel/hpa-inferencing?scriptVersionId=62555565" target="_blank">https://www.kaggle.com/alexanderriedel/hpa-inferencing?scriptVersionId=62555565</a></p> <p>The training is basically Puzzle-CAM</p>
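<p>For reference, a minimal sketch of the first mapping approach described in the inference section above: standardize the raw per-image CAM-times-mask values with <code>StandardScaler</code>, then squash with a sigmoid. The aggregation that produces <code>raw_values</code> (summing the class-activated values over each cell area) and the variable names are assumptions on my side.</p> <pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler

def map_to_probabilities(raw_values):
    """raw_values: (n_cells, n_classes) large class-activated values for one image,
    e.g. CAM x cell mask x class probability aggregated over the cell area.
    Standardize per image, then apply a sigmoid to get pseudo-probabilities."""
    z = StandardScaler().fit_transform(raw_values)
    return 1.0 / (1.0 + np.exp(-z))
</code></pre>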
Human Protein Atlas - Single Cell Classification
23rd Place Solution: You don't need heuristics or expensive GPU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>seems like… thanks for sharing those additional details!</p> <p>Thanks for the kind words, Darek! :) That'd be awesome to meet :)</p> <p>Thank you for the kind words, <a href="https://www.kaggle.com/mpware" target="_blank">@mpware</a> ! It means a lot to me!</p> <blockquote> <p>Is there the same boost on private LB?</p> </blockquote> <p>unfortunately, I can't say for sure as I didn't run every submission on the private test. <br> I happen to have a private LB score for the submission without the border cell removal and with a different small cell removal threshold (the threshold was already median-based; there were no major differences between this scored submission and the final submission). Based on the scored submission, fine-tuning of the median-based threshold and removal of the border cells improved my scores by 0.003 on the public LB and by 0.002 on the private LB. </p> <p>Thank you, <a href="https://www.kaggle.com/usharengaraju" target="_blank">@usharengaraju</a> !</p> <p>[<strong>UPDATE</strong>]:<br> For the latest version of the solution write-up, please check <a href="https://github.com/SamusRam/hpa-single-cell/blob/main/README.md" target="_blank"><strong>the GitHub repo README</strong></a>. There, all the graphics should be present, but I am not maintaining external hosting links for graphics in the current post. Please, feel free to drop me a personal email via Kaggle or raise a GitHub issue in case of any questions. Thanks!</p> <hr> <p>Hello, everyone!</p> <p>First of all, I'd like to congratulate&nbsp;the winners! It's a privilege to learn from your solutions!&nbsp;</p> <p>Secondly, I'd like to thank the organizers of the competition! Thank you for your kind, attentive and responsive approach on forums!<br> And thank you, all my fellow HPA competitors! 😊Thanks to all of you it felt like an awesome Team of amazing like-minded enthusiastic colleagues. As Darek <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> has put it nicely, I wish we would meet and celebrate. Hopefully, it'd happen during some offline KaggleDays hackathon in the future 😉</p> <h1>Solution</h1> <h2>Intro</h2> <p>Analogously to <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/232396" target="_blank">the post</a> by Paweł <a href="https://www.kaggle.com/narsil" target="_blank">@narsil</a> , my approach was <strong>very much data-centric</strong> and <strong>theoretically rigorous</strong>.</p> <p>The core of my solution was <strong>de-noising</strong>, including de-noising based on graph Laplacian regularization theory. 
De-noising itself allowed me to achieve a competitive position <strong>without compute-intensive large models</strong>, <strong>without heuristics</strong> like ranking, multiplication of predictions, etc.</p> <h2>De-noising</h2> <p><a href="https://drive.google.com/file/d/1RDrcZ9_boh4u6gTX6O5ObETK7TpSl3ns/view?usp=sharing" target="_blank">IMAGE</a>:<br> <img src="https://svgshare.com/i/XB5.svg" alt=""></p> <p>I managed to implement the computationally efficient graph signal denoising from <a href="https://ieeexplore.ieee.org/document/7450177" target="_blank">this paper</a>. My Python implementation can be found <a href="https://github.com/SamusRam/hpa-single-cell/blob/main/src/denoising/graph_denoising.py" target="_blank">here</a>. The algorithm from the paper enforces label sparsity so that each object would have a single confident label after the de-noising. Therefore, outputs of the de-noising were not used directly. I focused on the group of cells having the highest mode in the de-noised soft labels. E.g., I've added around 500 cells from the highest de-noised labels:<br> IMAGE:<br> <img src="https://i.imgur.com/3GWvpWn.png" alt=""></p> <h2>Negative label</h2> <p><br> $$ P_{neg} = \prod_{c\ in\ patterns} (1 - P_{c}) $$.</p> <p></p> <p><em>Update</em>: I noticed that by mistake I did not include the independence-based estimation into my final submission. In the submission corresponding to my private LB position I estimated negative label probability as <br> $$ P_{neg} = \min_{c\ in\ patterns} (1 - P_{c}) $$. </p> <p>Such a way to estimate P_{neg} was worse compared to the independence-based estimation by 0.0007 on public LB. But it turned out to give a result by 0.0003 better on the private LB, luckily.</p> <h2>Segmentation</h2> <p>I implemented the removal of border cells. I also modified the removal of small nuclei, doing it based on medium nucleus size instead of the hardcoded threshold. These modifications boosted my public LB score from 0.523 to 0.527.</p> <h2>Simple model</h2> <p>5 folds, DenseNet121 by&nbsp;Shubin <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>, referenced in <a href="https://www.nature.com/articles/s41592-019-0658-6" target="_blank">the Nature paper</a> and <a href="https://www.kaggle.com/c/hpa-single-cell-image-classification/discussion/214925" target="_blank">on forums</a>. Thank you for your awesome work, Shubin <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> !</p> <p>Data-related challenged seemed to be of higher priority, I focused on those and had no time/hardware to switch from DenseNet121.&nbsp;</p> <p><em>For reference I checked that on ImageNet DenseNet121 is currently ranked 325th with top-1 accuracy of 75%, while state-of-the-art is 90% (<a href="https://paperswithcode.com/sota/image-classification-on-imagenet" target="_blank">source</a>). Other successful teams used EfficientNets, Swin transformers which achieve&nbsp;up to 87% on ImageNet.</em></p> <h2>Strict avoidance of heuristics</h2> <p>The hosts mentioned that the hidden test set was designed to have&nbsp;high single-cell variation. For this reason, I fought the temptation of heuristical re-ranking based on image-level labels, etc., even though the community reported that it helped on the public test set. Regarding the private LB placement, it seems that I've been unnecessarily cautious.</p> <h2>Splitting cells</h2> <p>I've invested quite some time into modifications of the HPA-Cell-Segmentation to split crowded&nbsp;cells, e.g. 
in tissues, but it didn't provide a public LB boost. The work-in-progress splitting algorithm can be found&nbsp;<a href="https://github.com/SamusRam/HPA-Cell-Segmentation/tree/closer_to_orig" target="_blank">in this branch on GitHub</a></p> <h2>Masking cells</h2> <p>When predicting a single cell, I masked all surrounding cells out, so that in the high SCV scenario predictions would not be influenced by neighboring cells in the bounding box. I later realized that it might make it harder for the model to spot later phases of mitosis. Yesterday I quickly tried to create a separate binary classifier for the mitotic spindle, where I included the surrounding cells in the extended bounding box. But the quick experiment with including the mitotic spindle classifier didn't provide a score improvement.</p> <h2>Source code</h2> <p>.. is in <a href="https://github.com/SamusRam/hpa-single-cell" target="_blank">this GitHub repo</a>. The high-level entry point would be <code>orchestration scripts</code>. </p> <blockquote> <p>BTW, at the last part, you wrote you did all surrounding cells out.<br> Does this mean all pixel values other than the targeted cell convert to 0?</p> </blockquote> <p>Yes, exactly.</p> <blockquote> <p>I also tried this, but LB dropped. Have you compared the LB score with or without it??</p> </blockquote> <p>I did not, on purpose. </p> <p>I thought that the private test set would have significantly higher SCV compared to the public test set.</p> <p>So, if a prediction based on the cells with its surrounding led to a better public LB score, I'd explain it with the fact that surrounding cells supported correct predictions (because all the cells are often similar in a sample with low SCV). But for the higher SCV samples, I expected surrounding cells to actually poison the single-cell prediction. Again, it seems I was overcautious and the private test SCV is not as severe as I expected.</p> <p>Thank you, Dave!! 😊</p> <p>Darien, thank you so much for the kind words! 😊 I enjoyed our discussions in notebooks and on the forum enormously! </p> <p><em>Post-deadline experiments</em>:<br> I checked that the label de-noising could have been used to remove labels as well. As described in the write-up, during the competition I used de-noising only to get additional mitotic spindle labels.</p> <p>I computed candidates for label removal as follows (without strong candidates for removal though, please see the normalized de-noised labels in the Fig.):</p> <pre><code># Y_init contains labels before application of de-noising # Y_opt contains labels after de-noising class_i = class_names.index('Mitotic spindle') positive_init_label_threshold = 0.95 positive_denoised_threshold = Y_opt[:, class_i].max()*0.92 removed_bool_idx = ((Y_init[:, class_i] &gt;= positive_init_label_threshold) &amp; (Y_opt[:, class_i] &lt; positive_denoised_threshold)) </code></pre> <p>The computed post-deadline candidates for removal are depicted in the following figure:<br> <img src="https://i.imgur.com/ZWlROr8.png" alt=""></p> <p>I believe that the outcomes should be viewed as candidates for data cleaning which help an annotator to focus on the most important samples for potential label correction. 
<br> The outcomes of label de-noising depend on:</p> <ul> <li>the quality of appearance embedding (how well the extracted embeddings encode cell appearance)</li> <li>noise level of the initial labels (how well the labels cover various mitotic spindle patterns)</li> </ul> <p><strong>Credits, once again</strong> <br> The label de-noising procedure, which I took the effort to implement, is from the paper <a href="https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/12661/imparsing_final.pdf?sequence=1&amp;isAllowed=y" target="_blank">Learning from Weak and Noisy Labels for Semantic Segmentation</a></p> <p>The visualization above suggests that removing labels via de-noising is not effective for the problem at hand, as false-positive mitotic spindle labels do not seem to be the main issue. False-negative mitotic spindle cells seem to be a more significant data issue, based on the cells observed among results of de-noising via</p> <pre><code># Y_init contains labels before application of de-noising # Y_opt contains labels after de-noising class_i = class_names.index('Mitotic spindle') negative_init_label_threshold = 0.4 positive_denoised_threshold = Y_opt[:, class_i].max()*0.92 addition_bool_idx = ((Y_init[:, class_i] &lt; negative_init_label_threshold) &amp; (Y_opt[:, class_i] &gt;= positive_denoised_threshold)) </code></pre> <p><img src="https://i.imgur.com/AxKJ2t2.png" alt=""></p>
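<p>Returning to the Segmentation section above: the post does not show the exact rule for removing border cells, but a straightforward, purely illustrative way to drop them from the segmentator's instance mask looks roughly like this.</p> <pre><code>import numpy as np

def remove_border_cells(inst_mask):
    """Zero out cells whose mask touches the image border.
    inst_mask: (H, W) integer instance mask (0 = background)."""
    border_ids = np.unique(np.concatenate([
        inst_mask[0, :], inst_mask[-1, :], inst_mask[:, 0], inst_mask[:, -1]
    ]))
    out = inst_mask.copy()
    out[np.isin(out, border_ids[border_ids != 0])] = 0
    return out
</code></pre>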
Human Protein Atlas - Single Cell Classification
24th place (private) solution - Single model approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You are most welcome <a href="https://www.kaggle.com/mathurinache" target="_blank">@mathurinache</a> ! Someone forgot to predict on private! 😬😬</p> <p>You are most welcome <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> !! Your solo gold in shopee is yet another display of your sorcery 😉. Congrats for that!</p> <p>I will try to share the code soon. It's a simple code. Training is mostly plug and play puzzle CAM. But the inference has a few lines more!</p> <p>Thanks and congratulations on your gold!</p> <p>Thanks and congrats to you too!</p> <p>You are most welcome! I appreciate your contribution to the community. Thanks!</p> <p>I have uploaded a cleaned version of the inference <a href="https://www.kaggle.com/sgalib/0-495-private-single-model-puzzlecam" target="_blank">code</a>. Training code may take a little more time. Thanks!</p> <p>In this competition, multiple different approaches like cell level classification or image level heat-map prediction both seemed possible. During the early phase of the competition, I tried detection models (mmdetection and yolo) using the raw image level labels. But failed get a good score. This made me assume that image level labels are far to represent labels for each cell.</p> <p>Moving forward, I wanted to try CAM based models. Puzzle-CAM paper and their implementation was easy to follow. My final model was based on Puzzle-CAM without much modification of training schema or model architecture.</p> <h1>Hyperparameters</h1> <p><strong>Augmentation:</strong><br> Random crop of ~80% of input image size<br> Random flip/transpose<br> Random brightness/contrast<br> Random cutout<br> <strong>Optimizer:</strong> AdamW<br> <strong>Epoch:</strong> 10<br> <strong>Loss:</strong> Multilabel soft-margin loss for classification + L1 loss for reconstruction<br> <strong>Learning rate:</strong> linearly decreasing from 1e-4 to 1e-6</p> <h1>Inference</h1> <p>Probability for each cell was calculated by multiplying image level probability with normalized CAM logits. Sliding window was used to predict on input image.<br> TTA: 2 times flip and 3 scales (0.75, 1.00, 1.50)</p> <h1>Ablation study</h1> <p>Model: Densenet121 <br> Input resolution: 800x800 -&gt; 512x512 crop<br> Dataset: Only competition <br> LB: 0.457 (public)<br> Dataset: Competition + External data <br> LB: 0.481 (public)<br> Input resolution: 1280x1280 -&gt; 1024x1024 crop<br> LB: 0.503(public)</p> <p>Model: Xception<br> LB: 0.527(public)<br> LB: 0.495(private)</p> <p>I have tried cell level classifier by cropping individual cells and training with the labels found from CAM model. But the labels were still noisy and didn't produce better results than CAM models. So cell level models were not used in the final solution.</p> <p>I have also tried to ensemble CAM based models. Trained Densenet121, Resnest50 and Xception models (each single fold). Xception performed better than others and ensemble provided a little gain. So decided to go with single model (xception) and TTA.</p> <p>I did experiment with a few loss functions. 
Focal Loss, Asymmetric Loss For Multi-Label Classification and Sharpness-Aware Minimization are some of them. They are all good fit for this problem, but probably needed more hyperparameter tuning.</p> <p>Inference code: <a href="https://www.kaggle.com/sgalib/0-495-private-single-model-puzzlecam" target="_blank">Xception-0.495 private</a></p> <p>Thanks for reading!</p>
Human Protein Atlas - Single Cell Classification
27th place solution (0.483LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks and congratulations to you too!)</p> <p>Thank you! Congratulations on the gold medal!)</p> <p>Thanks, I see you also did great in the Shopee competition) Congratulations on the Expert tier)</p> <p>Hi all! It was a very long and intensive 3 months)<br> First of all, many thanks to the organizers for this very challenging and interesting competition!</p> <p>This was my first "real" competition at kaggle, so I'm glad to get a medal :-)</p> <p>My solution consists of several parts:</p> <p>1) <strong>Cell segmentation</strong>: I trained MaskRCNN on the HPASegmentator predictions. The only reason for this was inference speed: MaskRCNN runs much faster. A few experiments showed that the solution score stayed the same when swapping HPASegmentator for MaskRCNN.</p> <p>2) <strong>Image Level classifiers</strong>:</p> <ul> <li>EfficientNetB3 (1024x1024)</li> <li>EfficientNetB3 (only green channel) (1024x1024) (thanks <a href="https://www.kaggle.com/h053473666" target="_blank">@h053473666</a> for this idea)</li> </ul> <p>3) <strong>Cell level classifiers</strong>:<br> I used the <a href="https://arxiv.org/abs/2101.11253" target="_blank">PuzzleCAM paper</a> approach for creating CAMs, and then used them to create pseudo labels (at this point the labels were smooth - floats in [0.0, 1.0]) for each cell within the images.<br> The next step was to manually set up thresholds for each class and get final pseudo labels for further model training.</p> <ul> <li>EfficientNetB3 with dropout (256x256)</li> <li>EfficientNetB3 with dropout (only green channel) (256x256)</li> </ul> <p>4) <strong>Final submission</strong> I combined predictions from all 4 networks to get final predictions for each cell. The best result was with just an average of the predictions from the 4 models.</p> <p>5) <strong>Training details</strong><br> All classifiers were trained with focal_loss + lovasz_loss (like in the winning solution of <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a> in the previous HPA competition).<br> Optimizer: Adam<br> Scheduler: ReduceOnPlateau<br> I also used oversampling for rare classes and undersampling for overly common classes.</p> <p>Here is a summary:</p> <p><img src="https://i.ibb.co/VCns6cw/submission.png" alt="Summary"></p> <p>Thanks for reading and good luck in future competitions!)</p>
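<p>Since focal loss appears in many of these solutions (here combined with a Lovasz term), a minimal multi-label focal loss in PyTorch looks roughly as follows. This is one common formulation, not necessarily the exact variant used in this solution.</p> <pre><code>import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Multi-label focal loss: down-weights easy examples via (1 - p_t) ** gamma."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

loss = binary_focal_loss(torch.randn(8, 19), torch.randint(0, 2, (8, 19)).float())
</code></pre>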
Human Protein Atlas - Single Cell Classification
28th solution/ with self-supervised learning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this very interesting competition; we have learned a lot during this competition.</p> <p>Code link: <a href="https://github.com/tuotuo-1997/HPA" target="_blank">https://github.com/tuotuo-1997/HPA</a></p> <p><strong>Method Overview</strong></p> <p>We use the Kaggle-provided dataset and the public dataset to train and evaluate different model architectures. The public tools used include Fastai, Opencv, CellSegmentator, Cleanlab, etc.</p> <p>We train 12 classification models, working from both the whole image and individual cells. The 8 different model structures used for cell classification are: Resnet18, 34, 50, 101, Densenet121, Efficientnet B0, B1, and B2. The cell label assignment is consistent with the whole image. The four different model structures used for the whole image are: Efficientnet B0, B1, B2, and B3.<br> We use Adam as the optimizer. Each cell model trains for a total of 4 epochs, and each whole-image model trains for 6 epochs. The initial learning rate is set to 3e-2. The predicted probabilities of the cells and the whole image are averaged to obtain the final result.</p> <p>The improvement of results in our method mainly comes from the following 6 points:<br> 1) Self-supervised learning (SSL).<br> 2) The HPA Public data set.<br> 3) The combination of cell-level and whole-image-level results.<br> 4) Asymmetric Loss.<br> 5) Setting the label of cells whose maximum green-channel value is less than 60 to class 18.<br> 6) Ensemble.<br> The training time of the entire 12 models on 8*1080Ti is about 48h. It takes about 1.5 hours with a single 2080Ti to prepare a submission to Kaggle (for the public test dataset). After the code is submitted, the prediction time of the Kaggle kernel is about 9 hours (for all test datasets).</p> <p><strong>Conclusions</strong></p> <p>Asymmetric Loss is important as it can alleviate category imbalance, while SSL is helpful for improving performance. Combining the cell-level and whole-image-level results can also greatly improve the experimental results.</p>
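<p>Point 5 above is a simple, fully specified rule, so a sketch is easy to give; the array layout, names, and the assumption that each cell carries a list of class labels are mine.</p> <pre><code>import numpy as np

NEGATIVE_CLASS = 18

def relabel_dim_cells(cell_crops_green, cell_labels, threshold=60):
    """If the maximum green-channel intensity of a cell crop is below the threshold,
    replace its label with class 18 (Negative)."""
    out = []
    for green, labels in zip(cell_crops_green, cell_labels):
        out.append([NEGATIVE_CLASS] if green.max() &lt; threshold else labels)
    return out
</code></pre>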
Human Protein Atlas - Single Cell Classification
31st Place Solution: CAM only.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I came up with the pipeline myself ; )</p> <p>Thanks to Kaggle for hosting this incredible competition, I really learned a lot from it. And thanks to the Kagglers for the awesome ideas. This is my highest rank in my Kaggle journey, so I decided to write it down.</p> <h1>Summary</h1> <p>My final solution only uses CAM (Class Activation Map) because other tricks did not work well for me. So it's a simple, end-to-end solution rather than a complex pipeline. And it surprised me that a CAM-only pipeline could give me a silver place.</p> <p>I used Knowledge Distillation for the CAM layer because I thought it would help the model generate more solid CAMs; well, it does.</p> <p>I changed the model downsampling factor to 16 (2**4), so for a 512 x 512 input I get a 32 x 32 CAM. And I use the outputs of 3 layers to get 3 CAMs at once.</p> <p>The only working backbone for me was ResNest and all its variants (Thank You Zhang Hang).</p> <h1>Training</h1> <p>The picture below describes the whole training pipeline:</p> <p><img src="http://github.com/louis-she/static/blob/main/hpasolution.png?raw=true" alt=""></p> <h5>CAM Extractor</h5> <p>The DRS and Dropout layers force the model to focus on the whole image rather than just one or two cells. DRS can be found here: <a href="https://github.com/qjadud1994/DRS" target="_blank">https://github.com/qjadud1994/DRS</a></p> <p><img src="https://github.com/qjadud1994/DRS/raw/main/docs/DRS_CAM.png" alt=""></p> <h5>Training Details</h5> <p>Input: 512<br> Optimizer: Adam <br> Loss: BCE<br> CV: 1 fold and then fine-tune on the whole dataset (5 folds is too expensive for me)<br> Augmentation: flip, random contrast<br> Model: ResNest and all its variants</p> <h1>Inference</h1> <p>I finally ensembled 5 models together (all from timm); every model's input is 512 x 512.</p> <ol> <li>resnest50d_1s4x24d with DRS</li> <li>resnest101e</li> <li>resnest50d_1s4x24d</li> <li>resnest50d_4s2x40d with DRS</li> <li>resnest50d with DRS</li> </ol> <p>TTA: flip both vertically and horizontally; for every spatially transformed image the contrast is changed too, using factors 0.64, 0.88, 1.12, 1.36.</p> <p>Cell Segment: original segmentator provided by the host.</p> <p>Cell Scoring: Sum the pixel values of the CAM in the cell area and divide by the number of nonzero pixels in that cell.</p> <p><strong>Thanks for reading.</strong></p>
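<p>The cell scoring rule above is concrete enough to sketch directly; interpreting "nonzero pixels" as nonzero CAM pixels inside the cell is my assumption, as are the names.</p> <pre><code>import numpy as np

def score_cells(cam, inst_mask):
    """cam: (n_classes, H, W) class activation maps, inst_mask: (H, W) cell instance mask.
    Score = sum of CAM values in the cell area / number of nonzero pixels in that cell."""
    scores = {}
    for cid in np.unique(inst_mask):
        if cid == 0:
            continue
        region = cam[:, inst_mask == cid]                  # (n_classes, n_pixels_in_cell)
        nonzero = np.maximum((region &gt; 0).sum(axis=1), 1)  # avoid division by zero
        scores[cid] = region.sum(axis=1) / nonzero
    return scores
</code></pre>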
Human Protein Atlas - Single Cell Classification
32nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had a great experience in the last months attempting to solve an interesting problem. Thank you all the participants for your great ideas and notebooks that helped me achieve my final results. I have learnt a lot from you. Thanks to the host team for always being supportive during the competition.</p> <p>Inference Notebook: <a href="https://www.kaggle.com/quochungto/final-submission-hpa" target="_blank">https://www.kaggle.com/quochungto/final-submission-hpa</a></p> <h3><strong>OVERVIEW</strong></h3> <p><strong>Main components of my solution:</strong></p> <ul> <li>2 stages pipeline - Cell level &amp; Image level predictions</li> <li>Eliminating duplicate images - more reliable validation</li> <li>Faster version of cell segmentations</li> <li>External dataset</li> <li>Increasing diversity of models for ensembling: models trained on Green/RGB, 8bit/16bit images</li> </ul> <p>As the competition stated, we are dealing with a weakly supervised semantic segmentation problem and only have labels for image level, not cell level. The task is to predict the label of each cell for an image with multiple cells.</p> <p>Training single stage models to predict each cell’s label is difficult because we do not have the true target and using image labels as target will make it noisy to train. The idea is that each cell’s label inherits part of the image label, so using image prediction as context to correct each cell prediction will be effective compared to just predicting each cell independently.</p> <h3><strong>TRAINING DETAILS</strong></h3> <h4><strong>Dataset</strong></h4> <p>Competition dataset + External dataset</p> <h4><strong>Preprocessing</strong></h4> <p>As pointed out from other participants, the training dataset contained some duplicated images just like the last competition. Using this dataset directly in training will make validation less reliable. So the first thing I do is to try to remove all the duplicate images in the full dataset.</p> <h4><strong>Cell Segmentation</strong></h4> <p>All cell masks are extracted using the host's HPA Cell Segmentator as it does quite well in segmenting cells. Then each R, G, B channel is combined into a single RGB image and saved as a new dataset. This reduces the problem to just weakly supervised multi-label classification.</p> <h4><strong>Models</strong></h4> <p>In order to maximize the effectiveness of ensembles, different backbones and inputs are used. 
This increases the diversity of my models.</p> <p><strong>Cell level models</strong></p> <ul> <li>Input: Green/RGB cell images extracted with the HPA Cell Segmentator</li> <li>Data augmentation: Horizontal &amp; vertical flip, random crop</li> <li>Dataset: Competition &amp; external datasets</li> <li>Epochs: 2 to 4</li> <li>Backbones: Resnet50, Efficientnet-b5</li> <li>Loss: BCEWithLogitsLoss</li> <li>Optimizer: Adam</li> <li>TTA: 1 on original data + 4x</li> </ul> <p><strong>Image level models</strong></p> <ul> <li>Input: Green/RGB images, 8-bit/16-bit images</li> <li>Data augmentation: Horizontal &amp; vertical flip, rotate, shear, shift, zoom, random brightness</li> <li>Dataset: Competition &amp; external datasets</li> <li>Epochs: 40 to 50</li> <li>Backbones: Resnet50, Densenet121, Efficientnet-b0, b1, b2, b3, b5, b7</li> <li>Loss: SigmoidFocalCrossEntropy</li> <li>Optimizer: Adam</li> <li>Learning rate scheduler: One-cycle</li> <li>TTA: 1 on original data + 4x</li> </ul> <h4><strong>Ensemble</strong></h4> <ul> <li>Weighted-average ensemble for each of the cell-level and image-level predictions</li> <li>Final prediction: <code>0.25 * cell_prediction + 0.25 * image_prediction + 0.5 * sqrt(cell_prediction * image_prediction)</code></li> </ul> <h4><strong>Others</strong></h4> <p>Negative class:</p> <ul> <li>By replacing the negative-class prediction with <code>P(neg) = prod(1 - P(cls_i))</code>, my public LB increased ~0.003</li> </ul> <h4><strong>Things that did not work</strong></h4> <ul> <li>Hand-labeling negative cells by calculating mean pixel intensity (public LB dropped)</li> <li>Eliminating border cells (no significant effect on public LB)</li> </ul>
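<p>A tiny numeric illustration of the blending rule above; the probability values are made up. The geometric-mean term rewards classes on which the cell-level and image-level stages agree.</p>
<pre><code>
import numpy as np

# hypothetical per-class probabilities for one cell
cell_prediction = np.array([0.80, 0.10, 0.40])    # from the cell-level ensemble
image_prediction = np.array([0.60, 0.30, 0.35])   # from the image-level ensemble

final_prediction = (
    0.25 * cell_prediction
    + 0.25 * image_prediction
    + 0.50 * np.sqrt(cell_prediction * image_prediction)
)
print(final_prediction)   # approx [0.70, 0.19, 0.37]
</code></pre>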
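<p>And a sketch of the negative-class trick, assuming the competition's label map where index 18 is Negative; the function name is illustrative.</p>
<pre><code>
import numpy as np


def fill_negative_class(probs, negative_index=18):
    """probs: (num_cells, 19) sigmoid outputs; overwrite the Negative column
    with P(neg) = product over the other classes of (1 - P(cls_i))."""
    probs = probs.copy()
    others = np.delete(probs, negative_index, axis=1)
    probs[:, negative_index] = np.prod(1.0 - others, axis=1)
    return probs
</code></pre>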
Human Protein Atlas - Single Cell Classification
3rd place solution - MPWARE part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a>, you're on the right track to become a GM too.</p> <p>Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>!</p> <p>First of all I would like to thank the host for this really interesting competition and also for the accurate answers to questions in the forum. The high quality of the data provided is something I would like to highlight too. I would like to warmly thank my teammates <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> for the great collaboration, the firework of ideas and their patience with me.</p> <p>Our solution is an ensemble of different independent models. Below is my part.</p> <h1>Introduction</h1> <p>Early in this competition I decided to start with WSSS (Weakly-Supervised Semantic Segmentation) <a href="https://paperswithcode.com/sota/weakly-supervised-semantic-segmentation-on" target="_blank">SOA</a> approaches, to learn something new: train on full images with multiple labels and try to build a segmentation from that. Many solutions are based on CAMs; in short, at the end of the pipeline the CAM is made to grow and spread over the regions of interest in the image. However, as cell masks were already available, I didn't need the full WSSS process and could simply intersect the CAM with the cells at some point to get the predicted labels. I experimented with both the <a href="https://arxiv.org/pdf/2101.11253.pdf" target="_blank">Puzzle-CAM</a> [1] and <a href="https://arxiv.org/pdf/2103.07246.pdf" target="_blank">DRS</a> (Discriminative Region Suppression) [2] approaches. I finally got the best results with a modified Puzzle-CAM trained on RGBY images and a two-stage inference (full image and per cell, combined in a basic ensemble). </p> <p>I teamed up early with ZFTurbo, who used a different approach, and both solutions ensembled quite well. The next step was to generate OOF predictions (per-cell prediction for each class) from some of my best model variants, which could benefit ZFTurbo's models and then our ensemble.</p> <p>Close to the end of the competition we had room left on inference (our pipeline was running in only around 6h30) and we agreed that integrating new models should help, so we merged with Dieter, who brought new models and ideas that made the difference to finish in the money zone. </p> <h1>Training</h1> <p>A Siamese network with a CNN backbone followed by a classifier builds activation maps per class. The GAP is moved to the end to get full-image predictions. A combo loss takes into account both the full and puzzled/recomposed features, as well as the distance between the activation maps. 
</p> <p><strong>Pipeline:</strong></p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/21051307014745767.png" alt="Model"></p> <p><em>Notice the classifier/GAP swap used in WSSS compared to regular classifiers, and the L1 loss for overall consistency.</em></p> <p><strong>Augmentations:</strong></p> <p>Simple augmentations that apply to RGBY images:</p> <ul> <li>Noise: GaussNoise, CoarseDropout, IAAAdditiveGaussianNoise</li> <li>Flips/Rotations: HorizontalFlip, RandomRotate90</li> <li>Rotate/Distorsion: GridDistortion, ShiftScaleRotate, ElasticTransform, OpticalDistortion, IAAAffine/Shear</li> <li>Blurs: GaussianBlur, MotionBlur, MedianBlur</li> <li>Colors: RandomGamma, RandomBrightnessContrast</li> </ul> <p><strong>Sampling strategy:</strong></p> <p>The class distribution is depicted below. Classes are quite imbalanced, especially Aggresome (class 15) and Mitotic spindle (class 11), while Nucleoplasm (class 0) and Cytosol (class 16) are over-represented.</p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/210513070340273015.png" alt="Distribution"></p> <p>Weighted random sampling was applied to balance the classes during training: basically, less weight for Nucleoplasm and more weight for Mitotic spindle. However, to limit the heavy oversampling of the rare classes 11 and 15 that this strategy induces, we clipped the weight values to the one computed for Intermediate filaments (class 8).</p> <p><strong>Data:</strong></p> <p>The train set and external data were used to train different model variants:</p> <ul> <li>89k images: Train set + External public data shared by host</li> <li>98k images: Train set + External public data shared by host + HPA 2018</li> </ul> <p>The single-channel images were merged and resized to create 512x512 RGBY images (protein + context). A few similar images were found during the sanity check but not removed.</p> <p>External data was key in this competition to increase volume and make the models better.</p> <p><strong>Cross validation:</strong></p> <p>The train dataset is split into 4 folds with a <a href="https://github.com/trent-b/iterative-stratification" target="_blank">MultilabelStratifiedKFold</a> strategy.</p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/210513070514669583.png" alt="CV"></p> <p><strong>Training procedure:</strong></p> <p>4 main models were trained with different <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">backbones</a> (but only variants of #1 made the final ensemble).</p> <ol> <li>seresnext50_32x4d</li> <li>gluon_seresnext101_32x4d</li> <li>cspresnext50</li> <li>regnety_64</li> </ol> <p>Weighted ComboLoss:</p> <ul> <li>BCEWithLogitsLoss (each with weight=1.0)</li> <li>L1Loss (weight from 0.25 to 0.5 got good results)</li> </ul> <p>LR and hyperparameters:</p> <ul> <li>Optimizer: Adam, LR= 0.0003, beta1=0.9</li> <li>LR scheduler: ReduceLROnPlateau, factor = 0.3, patience = 8</li> <li>Epochs: 48</li> <li>Batch size: From 32 to 36</li> <li>FP16 enabled</li> <li>Single cycle</li> </ul> <p>The criterion for saving the model's weights is the best ComboLoss only; F1 and mAP scores are just monitored:</p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/21051307055127008.png" alt="CV"></p> <h1>Inference</h1> <p>Inference is based on two stages ensembled at the end. </p> <p><strong>Stage #1:</strong></p> <p>It is the inference of the trained model, with the output activation maps normalized to the [0, 1] range and resized to the input image size. 
The HPA segmentation pipeline is executed in parallel to get the cell mask instances. Each cell is intersected with the CAM (overlap + magnitude score) and weighted by the per-class probability output by the sigmoid.</p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/210513070630451026.png" alt="stage1"></p> <p>Note: the HPA segmentation pipeline provided by the host was modified to make it faster (around x6). As the host recommended a scale factor of 0.25, which roughly matches 512x512 images, we modified the morphology post-processing to detect instances at 512x512 rather than at the original image size. The gain is in speed, but we lose some quality: with this optimization some cells that should be split remain merged. The impact on the score was estimated at around -0.002, which is an acceptable trade-off.</p> <p><strong>Stage #2:</strong></p> <p>It is another inference that comes for free by simply re-using the model, fed with a crop of each cell instance (from stage #1).</p> <p><img src="https://nsa40.casimages.com/img/2021/05/13/210513070737945441.png" alt="stage2"></p> <p>Stage #1 and stage #2 predictions are different by design: stage #1 predictions are sparse whereas stage #2 predictions are flatter. Both acting together gave some regularization to the results.</p> <p>From a performance point of view, this single model (both stages + HPA segmentation) ran on the 559 images in around 2h with one P100 GPU. Additional averaging of different model variants (different input datasets, backbones, seeds) got a public LB around 0.52/0.53.</p> <h1>Misc</h1> <p><strong>What could be improved?</strong></p> <p>The DRS approach also gave interesting results; the single model (with a vgg16 backbone) was slow to train and got a lower (but acceptable) score compared to Puzzle-CAM. Inference was slow too, which is why we did not get a chance to ensemble it at the end, but it might be worth assessing as the CAMs it generates are different.</p> <p><strong>What did not work (or not better than this solution)?</strong></p> <ul> <li>YoloV5 model based on OOF and RGB images</li> <li>Post-processing probabilities into rank probabilities for the ensemble</li> <li>TTA (it did work, but gave only a tiny improvement) </li> <li>Larger full images</li> </ul> <p><strong>Update</strong>: Source code available: <a href="https://github.com/MPWARE-TEAM/HPA2021" target="_blank">https://github.com/MPWARE-TEAM/HPA2021</a></p> <p>Thanks <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>! So true.</p> <p>Training code (my part) open sourced on GitHub:<br> <a href="https://github.com/MPWARE-TEAM/HPA2021" target="_blank">https://github.com/MPWARE-TEAM/HPA2021</a></p> <p>Model weights shared (even models not used in our ensemble):</p> <ul> <li>SEResNeXt: <a href="https://www.kaggle.com/mpware/hpa-models" target="_blank">https://www.kaggle.com/mpware/hpa-models</a></li> <li>DRS: <a href="https://www.kaggle.com/mpware/hpa-drs" target="_blank">https://www.kaggle.com/mpware/hpa-drs</a></li> </ul>
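<p>To illustrate the training objective described above (full-image and puzzled/recomposed branches plus an L1 consistency term), here is a simplified Puzzle-CAM-style sketch in PyTorch. It assumes the model returns per-class activation maps, uses a fixed 2x2 tiling, and is not the author's implementation.</p>
<pre><code>
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()


def puzzle_step(model, images, targets, l1_weight=0.4):
    """model(x) is assumed to return per-class activation maps of shape (B, C, h, w)."""
    # full-image branch: GAP over the activation maps gives image-level logits
    full_cams = model(images)
    full_logits = full_cams.mean(dim=(2, 3))

    # puzzle branch: split the image into 2x2 tiles, run them as one bigger batch,
    # then re-assemble the tile CAMs into one map (assumed to match full_cams in size)
    h, w = images.shape[2], images.shape[3]
    tiles = torch.cat([
        images[..., : h // 2, : w // 2], images[..., : h // 2, w // 2 :],
        images[..., h // 2 :, : w // 2], images[..., h // 2 :, w // 2 :],
    ], dim=0)
    tl, tr, bl, br = model(tiles).chunk(4, dim=0)
    re_cams = torch.cat([torch.cat([tl, tr], dim=3), torch.cat([bl, br], dim=3)], dim=2)
    re_logits = re_cams.mean(dim=(2, 3))

    # combo loss: BCE on both branches (weight 1.0 each) + L1 consistency between maps
    loss = bce(full_logits, targets) + bce(re_logits, targets)
    loss = loss + l1_weight * F.l1_loss(re_cams, full_cams)
    return loss
</code></pre>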
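<p>A sketch of the data setup described above: a 4-fold MultilabelStratifiedKFold split and a weighted random sampler whose class weights are clipped at the value computed for class 8. How per-class weights are turned into one weight per image is my assumption.</p>
<pre><code>
import numpy as np
import torch
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
from torch.utils.data import WeightedRandomSampler

# labels: (num_images, num_classes) multi-hot numpy array of image-level targets


def make_folds(labels, n_splits=4, seed=42):
    mskf = MultilabelStratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    folds = np.zeros(len(labels), dtype=int)
    for fold, (_, valid_idx) in enumerate(mskf.split(labels, labels)):
        folds[valid_idx] = fold
    return folds


def make_sampler(labels, clip_class=8):
    counts = labels.sum(axis=0)                        # images per class
    class_weights = 1.0 / np.maximum(counts, 1.0)      # rarer class, larger weight
    # clip so rare classes (11, 15) are not oversampled beyond the weight
    # computed for Intermediate filaments (class 8)
    class_weights = np.minimum(class_weights, class_weights[clip_class])
    # one weight per image: the largest weight among its positive classes (assumption)
    sample_weights = (labels * class_weights).max(axis=1)
    return WeightedRandomSampler(
        torch.as_tensor(sample_weights, dtype=torch.double),
        num_samples=len(labels), replacement=True,
    )
</code></pre>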
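<p>Finally, a sketch of the stage #1 scoring, where each cell is intersected with the CAM and weighted by the full-image sigmoid probability. The exact overlap/magnitude combination (and the 0.5 threshold) is not specified in the write-up, so this particular formula is only a guess.</p>
<pre><code>
import numpy as np


def stage1_scores(cams, image_probs, cell_mask):
    """
    cams        : (num_classes, H, W) activation maps, normalized to [0, 1]
    image_probs : (num_classes,) sigmoid probabilities from the full-image head
    cell_mask   : (H, W) int array of cell instance ids (0 = background)
    returns     : (num_cells, num_classes) per-cell, per-class scores
    """
    num_classes = cams.shape[0]
    num_cells = int(cell_mask.max())
    scores = np.zeros((num_cells, num_classes), dtype=np.float32)
    for cell_id in range(1, num_cells + 1):
        region = cell_mask == cell_id
        for c in range(num_classes):
            inside = cams[c][region]
            overlap = (inside > 0.5).mean()   # fraction of the cell covered by the CAM
            magnitude = inside.mean()         # average activation inside the cell
            scores[cell_id - 1, c] = overlap * magnitude * image_probs[c]
    return scores
</code></pre>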
Human Protein Atlas - Single Cell Classification
41st Place Solution: Cell-based RoI Pooling + Transformer Encoder
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Human Protein Atlas - Single Cell Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hey fellow Kaggle competitors! </p> <p>I had a lot of fun with this unique competition and I'm looking forward to reading your own solutions to this problem. Here is an overview of my final approach + a link to my code (open images in a new tab for higher res):</p> <p><a href="https://github.com/martin-chobanyan/hpa-single-cell" target="_blank">Github Repo</a></p> <p><img src="https://raw.githubusercontent.com/martin-chobanyan/hpa-single-cell/main/resources/cell-transformer-overview.png" alt="cell transformer"></p> <p>First off, in all of my models I used the pre-trained backbone CNN of <a href="https://www.kaggle.com/bestfitting" target="_blank">@bestfitting</a>'s winning DenseNet-121 models from the previous competition (3 different pre-trained versions). I stacked all of the color stains into 4-channel images, resized them to 1536x1536, and fed them through the backbone CNN, resulting in 1024 feature maps.</p> <p>I then extracted a feature vector for each cell in the image by performing Region-of-Interest (RoI) pooling over the feature maps using the cell segmentation masks from the HPA Cell Segmentator. The pooling layer consisted of a concatenation of both avg-pool and max-pool features (resulting in a 2048-dim vector per cell). I also applied an adaptive average pool over the cell masks to reduce them to 8x8, which I then flattened. These downsampled, flattened masks served as position encoding for the cell regions (see diagram below):</p> <p><img src="https://raw.githubusercontent.com/martin-chobanyan/hpa-single-cell/main/resources/cell-roi-pool-overview.png" alt="roi pool"></p> <p>At this stage, we have a sequence of cell vectors, which allows us to explore the use of a Transformer model. The cell feature vectors are passed through a feed-forward layer which maps them to a 1024-dim embedding. Then, four encoder layers are applied as they were defined in the original Transformer paper. A final feed-forward layer then maps each cell to an 18-dimensional logit vector.</p> <p>To predict the image-level labels, a log-sum-exponential layer is applied, defined as f(x) = (1/r) * ln(mean(e^(r*x))) with r=5 in our case. This serves as a mechanism to interpolate between average pooling and max pooling for different values of r.</p> <p>For training I used Focal Loss with sigmoid activation and flip/rotate/crop augmentations. During inference, I extracted the cell-level logits and applied a sigmoid activation before their reduction to image-level labels. The final class prediction for a cell was averaged across all three models (three different pre-trained backbones).</p> <p>I also explored the use of <a href="https://arxiv.org/abs/1804.00880" target="_blank">Peak Response Maps</a> as a CAM-based approach to the problem. 
Though the localizations looked good, the model was a bit too sensitive to classes which were not present in the image.</p> <p>Things I would have tried with more time:</p> <ul> <li>Ensembling with a cell-level classifier</li> <li>Explore more ways of merging the peak-response-map results with the cell transformer model</li> <li>Domain adaptation to the public test set (since we know the test set has higher variability in protein locations within a given image compared to the training set).</li> </ul> <p>Thank you!</p> <p>Hi <a href="https://www.kaggle.com/nguyenhuudat" target="_blank">@nguyenhuudat</a>,</p> <p>I just made my submission kernel public: <a href="https://www.kaggle.com/mch123/ensemble-cell-transformer-2" target="_blank">https://www.kaggle.com/mch123/ensemble-cell-transformer-2</a></p> <p>You'll also have to use the model states in this dataset I made: <a href="https://www.kaggle.com/mch123/hparoimodels" target="_blank">https://www.kaggle.com/mch123/hparoimodels</a></p>
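<p>For readers who want to see the cell-level RoI pooling from the write-up above in code, here is a rough PyTorch sketch of masked average/max pooling plus the 8x8 downsampled-mask position encoding. Concatenating the position encoding to the pooled features is an assumption; the write-up does not say exactly how the two are combined.</p>
<pre><code>
import torch
import torch.nn.functional as F


def cell_roi_pool(feature_maps, cell_masks):
    """
    feature_maps : (C, H, W) backbone output (1024 maps in the write-up)
    cell_masks   : (N, H_img, W_img) binary mask per cell at image resolution
    returns      : (N, 2*C + 64) = [masked avg-pool | masked max-pool | 8x8 mask encoding]
    """
    c, h, w = feature_maps.shape
    masks = F.interpolate(cell_masks[:, None].float(), size=(h, w), mode="nearest")
    feats = feature_maps[None] * masks                   # (N, C, h, w), zero outside each cell
    area = masks.sum(dim=(2, 3)).clamp(min=1.0)          # (N, 1) pixels per cell
    avg_pool = feats.sum(dim=(2, 3)) / area              # masked average pooling
    max_pool = feats.amax(dim=(2, 3))                    # masked max pooling (background is 0)
    # 8x8 adaptive average pool of the full-resolution mask, flattened -> position encoding
    pos_enc = F.adaptive_avg_pool2d(cell_masks[:, None].float(), 8).flatten(1)
    return torch.cat([avg_pool, max_pool, pos_enc], dim=1)
</code></pre>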
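<p>And the log-sum-exponential pooling layer, written as a direct transcription of the formula above with the stated r = 5:</p>
<pre><code>
import math
import torch


def lse_pool(cell_logits, r=5.0):
    """f(x) = (1/r) * ln(mean(exp(r * x))) over the cell dimension.

    Small r behaves like average pooling, large r like max pooling.
    cell_logits: (num_cells, num_classes) -> returns (num_classes,) image-level logits.
    """
    n = cell_logits.shape[0]
    return (torch.logsumexp(r * cell_logits, dim=0) - math.log(n)) / r
</code></pre>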