Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
ICR - Identifying Age-Related Conditions
|
Top 20% (not far from Bronze) using automl
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi,</p>
<p>now it all comes to an end. It has been an interesting competition for sure. My personal mission was to use my AutoML library BlueCast, see how it feels for an end user, and improve it along the way.</p>
<p>I finished top 20% (score of 0.45… Bronze ranks ended at 0.43…😅…so close), but I am still happy with the result.</p>
<p>In the <a href="https://www.kaggle.com/code/thomasmeiner/icr-bluecast-automl-almost-bronze-ranks?scriptVersionId=139135413" target="_blank">BlueCast notebook</a> you can see its EDA capabilities and its open API (here I tested a custom model and custom preprocessing, but commented the custom model out).</p>
<p>I hope you enjoy the notebook, and maybe you can try it out yourself another time. :-) </p>
|
ICR - Identifying Age-Related Conditions
|
Top 3%: Feeding wrong predictions from initial model to training set - Boosted the Scores
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>To our bad luck 🤦♂️, we did not select our best-scoring notebook for submission, so we finished in the top 9%.</h3>
<h3>Here is the notebook that could have easily taken us to the top 3% 🔥.</h3>
<p><a href="https://www.kaggle.com/code/ajaysadhu/feeding-wrong-predictions-to-training-data" target="_blank">Notebook</a></p>
<h2>🏃🤗 Approach followed:</h2>
<p>This is a data-imbalance problem; we tried different sampling approaches and finally settled on the following (a rough sketch of the preprocessing follows this list):</p>
<blockquote>
<ul>
<li>Feature selection and transformation using a Pipeline with log/sqrt transforms (chosen based on the tails), KNNImputer and standard scaling.</li>
<li>Removed outliers based on IsolationForest.</li>
<li>A TensorFlow neural network with quite a lot of Dropout layers on randomly under-sampled data.</li>
<li>LightGBM with the <code>is_unbalance</code> parameter set to true and <code>class_weight</code> set to "balanced", on randomly under-sampled data.</li>
<li>CatBoost with custom class weights on 80% of the training data.</li>
</ul>
</blockquote>
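<p>For reference, a minimal sketch of this kind of preprocessing pipeline with scikit-learn (not the notebook's exact code; the <code>train</code> dataframe, <code>num_cols</code> and parameter values are illustrative):</p>
<pre><code>import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.impute import KNNImputer
from sklearn.ensemble import IsolationForest

# num_cols is a hypothetical list of the numeric feature columns
preproc = Pipeline(steps=[
    ("log", FunctionTransformer(np.log1p)),   # log/sqrt chosen per column based on tail heaviness
    ("impute", KNNImputer(n_neighbors=5)),
    ("scale", StandardScaler()),
])
X = preproc.fit_transform(train[num_cols])

# drop rows flagged as outliers by IsolationForest
keep = IsolationForest(contamination=0.02, random_state=42).fit_predict(X) == 1
X, y = X[keep], train["Class"].values[keep]
</code></pre>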
<h2>🧐🤓Post Processing:</h2>
<blockquote>
<ul>
<li>For each model's predictions, we picked the threshold that gave the best confusion matrix.</li>
<li>0.78 for the neural net: above this threshold the probability was set to 0.9995, and to 0.0005 otherwise, to avoid heavy log-loss penalties. Similarly, 0.7 for LightGBM and 0.65 for CatBoost (see the sketch after this list).</li>
<li>The average of the 3 models' predictions was used to create the final submission probabilities.</li>
</ul>
</blockquote>
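<p>A minimal sketch of the thresholding described above, assuming <code>nn_probs</code>, <code>lgb_probs</code> and <code>cat_probs</code> hold each model's predicted probabilities (illustrative names, not the notebook's code):</p>
<pre><code>import numpy as np

def sharpen(probs, threshold):
    # push predictions to near-certain values above/below the per-model threshold
    return np.where(probs >= threshold, 0.9995, 0.0005)

p_nn  = sharpen(nn_probs, 0.78)   # neural net
p_lgb = sharpen(lgb_probs, 0.70)  # LightGBM
p_cat = sharpen(cat_probs, 0.65)  # CatBoost

final_probs = (p_nn + p_lgb + p_cat) / 3.0  # simple average of the three models
</code></pre>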
<h1>🤯😬 <strong>Unique Idea that gave the boost to the scoring:</strong></h1>
<blockquote>
<ol>
<li>After the first run, we identified the misclassified data points from the confusion matrix, took their indexes, and added them to the training dataframe with a duplication factor of 30 (30 helped reduce the validation loss); a sketch follows below.</li>
<li>In the notebook you will find the wrong_preds dataframe holding the indexes of these rows.</li>
<li>The notebook was then re-run with the inflated training data and the results were submitted.</li>
</ol>
</blockquote>
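<p>A rough sketch of the duplication idea, assuming out-of-fold probabilities <code>oof_probs</code> from the first run and a 0.5 decision boundary (variable names are illustrative, not the notebook's code):</p>
<pre><code>import pandas as pd

# rows the first-pass model got wrong
wrong_idx = train.index[(oof_probs >= 0.5).astype(int) != train["Class"]]
wrong_preds = train.loc[wrong_idx]

# duplicate the hard rows 30 times and append them to the training dataframe
train_inflated = pd.concat([train] + [wrong_preds] * 30, ignore_index=True)
</code></pre>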
<p>This idea helped our score move up to … 😎</p>
<p>Do let us know what you think about our approach.</p>
|
ICR - Identifying Age-Related Conditions
|
Unveiling a Potential Silver Lining: The Power of a Public Notebook 🥈
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Greetings Kagglers,</p>
<p>I'm excited to share a revelation from our recent competition that underscores the influence of a well-crafted public notebook. Allow me to introduce you to my public notebook, "ICR | MultiHeads Ensemble Baseline [CV 0.23] & EDA" which holds the potential to secure a silver medal for those who made the strategic choice to incorporate it into their approach.</p>
<p>Link to the notebook: <a href="https://www.kaggle.com/code/elcaiseri/icr-multiheads-ensemble-baseline-cv-0-23-eda" target="_blank">https://www.kaggle.com/code/elcaiseri/icr-multiheads-ensemble-baseline-cv-0-23-eda</a></p>
<p>In the ever-evolving landscape of Kaggle competitions, decisions can often play a pivotal role in shaping our outcomes. While my public notebook was designed with meticulous care, it's important to note that the impact of certain choices becomes more evident as we transition to the private leaderboard.</p>
<p>I'm hopeful that some of you recognized the promise of this notebook and opted to integrate it into your strategy. This choice could have translated into an impressive silver medal on your profile, and I genuinely hope that it worked out in your favor.</p>
<p>As we navigate the complexities of Kaggle, let's continue to appreciate the power of sharing knowledge and solutions within our community. Every notebook, every insight, and every decision contributes to our collective growth and understanding.</p>
<p>Thank you for being a part of this journey. I eagerly await your thoughts and insights on the notebook, and I'm looking forward to witnessing the diverse impacts it might have had.</p>
<p>Wishing you all happy coding and successful insights!</p>
|
ICR - Identifying Age-Related Conditions
|
Very simple solution (Private 800 | Public 3800)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I think this competition primarily shows two things:<br>
1) Be careful of overfitting public LB<br>
2) Kaggle Competitions <em>can</em> be random and high-scoring notebooks might not be helpful for the hosts after the competition</p>
<p>My 'best' solution was meant as a baseline model and thus very simple:</p>
<ul>
<li>Impute NaNs using regression (for some reason I didn't impute the test set, which of course was not intentional)</li>
<li>Optimize hyperparameters with Optuna (a rough sketch follows this list)</li>
<li>Predict and submit</li>
</ul>
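<p>A minimal sketch of Optuna-based tuning for an XGBoost classifier, assuming imputed features <code>X</code> and labels <code>y</code> (an illustration, not the notebook's exact search space):</p>
<pre><code>import optuna
import xgboost as xgb
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss")
    # maximize the (negative) log loss across folds
    return cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
</code></pre>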
<p>You can see the code <a href="https://www.kaggle.com/lucasboesen/xgb-cv-0-37-private-0-43-public-0-23" target="_blank">here</a></p>
|
ICR - Identifying Age-Related Conditions
|
Why everybody uses `MultilabelStratifiedKFold` WRONG: unveiling the DECEPTION
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That's a really nice approach, thank you for sharing! I guess this is a universal way to stay safe from bugs compared with using a library. Using open-source code might be easier, but in my post I show that it can lead to unexpected outcomes. So if it's relatively simple to code it yourself, that's really great.</p>
<p>Thank you! I'm sure it is also luck, though, considering the shakeup. But I'm also sure that my approach with OHE before stratification increased my chances to end up with a high rank :)</p>
<p>Hello! I'm thrilled to share an exciting breakthrough I had while working on this competition.</p>
<p>I invested considerable effort into exploring and experimenting with validation techniques while cracking the challenge. I observed a trend where many participants were applying <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/413099" target="_blank">the iterative stratification technique</a> on multiple columns. Personally, I used <code>Class</code>, <code>Alpha</code>, <code>Beta</code>, <code>Gamma</code>, <code>Delta</code>. It was also suggested to utilize <code>MultilabelStratifiedKFold</code> from the <a href="https://github.com/trent-b/iterative-stratification" target="_blank">iterative-stratification</a> library. <a href="https://www.kaggle.com/code/tauilabdelilah/icr-create-folds" target="_blank">Here</a> is an example of how folds can be created using it. However, as I delved deeper into my research, a startling revelation hit me like a ton of bricks.</p>
<p>Ladies and gentlemen, brace yourselves: <strong>The MultilabelStratifiedKFold technique is nothing short of a colossal DECEPTION!</strong></p>
<p>Here is the kicker: it turns out that, when dealing with categorical string variables, the stratification achieved by <code>MultilabelStratifiedKFold</code> falls alarmingly short of its intended goal. The distribution of data points within groups across different folds is far from balanced. But we can improve it. Two game-changing ideas:</p>
<ol>
<li><strong>Convert those string labels into numbers before stratification.</strong> Take a look at <a href="https://github.com/trent-b/iterative-stratification/blob/master/iterstrat/ml_stratifiers.py#LL49C5-L49C19" target="_blank">this</a> line of code in the <code>iterative-stratification</code> library. Picture what happens when we perform <code>labels.sum(axis=0)</code>. If <code>labels = [0, 0, 0, 0, 1, 1, 1, 1]</code>, the result is 4. However, when <code>labels = [a, a, a, a, b, b, b, b]</code>, the result is <code>aaaabbbb</code> (some really weird stuff).</li>
<li><strong>Use one-hot-encoding</strong>, just as <a href="https://github.com/trent-b/iterative-stratification/issues/9#issuecomment-646868889" target="_blank">suggested</a> by the creator of the iterative-stratification library (a minimal sketch follows this list).</li>
</ol>
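<p>A minimal sketch of the fix, assuming a <code>train</code> dataframe with the string label columns listed above (fold count and seed are illustrative):</p>
<pre><code>import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

label_cols = ["Class", "Alpha", "Beta", "Gamma", "Delta"]
# one-hot-encode the (string) label columns so that labels.sum(axis=0) counts occurrences
labels_ohe = pd.get_dummies(train[label_cols].astype(str)).values

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
folds = np.zeros(len(train), dtype=int)
for fold, (_, val_idx) in enumerate(mskf.split(train, labels_ohe)):
    folds[val_idx] = fold
train["fold"] = folds
</code></pre>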
<p>I vividly demonstrate what is happening in my notebook. Please take a look: <a href="https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook" target="_blank">https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook</a></p>
<p>To conclude, I made a submission using <a href="https://www.kaggle.com/code/thelastsmilodon/lb-0-06-cv-0-2-tabpfn-xgb-model" target="_blank">this</a> public kernel (without post-processing). It scored 0.49 on private. Then I added my validation approach, presented in <a href="https://www.kaggle.com/code/blackitten13/multilabelstratifiedkfold-scam-you-need-ohe/notebook" target="_blank">the notebook</a> (with OHE before stratification), and saw a remarkable leap to a private score of 0.41 and a bronze medal!</p>
<p>Once again:</p>
<p><strong>0.49 (top 35-38%) -> 0.41 (top 6%) just by adding one-hot-encoding of labels before stratification</strong></p>
<p>What do you think? Did anyone notice something similar during the competition? I would appreciate your feedback!</p>
|
ICR - Identifying Age-Related Conditions
|
Wow (and our solution)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>3 years. By the way, I freaked out a bit when I saw Kevin Levrone pfp, since I got one on my media accounts 😂</p>
<p>Thanks. Around 20 features removed if I'm not mistaken.</p>
<p>CatBoost has standard parameters, and everything else I took from public notebooks; it worked best like that anyway and didn't need changing.</p>
<p>TabPFN improved my ensemble, and overall it's a model I will add to my arsenal from now on. From what I understood it's still in development, so I'm eager to find out what it's capable of.</p>
<p>To put it mildly, I'm in complete shock. I woke up today to my teammate's congratulations and got intrigued as to what place we got. I burst out laughing when I saw it was 2nd place, and the funniest thing is that my solution is just CV: no probing, no nothing.</p>
<p>here's my solution:</p>
<p><a href="https://www.kaggle.com/code/opamusora/main-notebook" target="_blank">https://www.kaggle.com/code/opamusora/main-notebook</a></p>
<p>What we did:</p>
<ul>
<li>just like many people, we used time and max(time)+1 for test</li>
<li>removed rows where time is None; when playing with UMAP I noticed a weird cluster far away from all other data, and it turned out to be the rows with absent time</li>
<li>filled NaN values with -100; it probably doesn't matter whether it's the median or some other low number</li>
<li>used a technique I saw in some other competition: reducing dimensions with UMAP and then labeling clusters with k-means (sketched after this list); it didn't bring a lot of score, but it's there</li>
<li>did feature permutation manually and dropped any columns that made the score even slightly worse</li>
<li>for models I used CatBoost, XGB with parameters from some public notebook, and TabPFN. LGBM didn't seem to work for me, as it always dropped my CV</li>
<li>then we just averaged our predictions for test and that's it</li>
</ul>
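<p>A rough sketch of the UMAP + k-means cluster-labeling step (parameter values are illustrative; <code>X</code> is assumed to be the feature dataframe with NaNs already filled with -100):</p>
<pre><code>import umap
from sklearn.cluster import KMeans

# reduce to a low-dimensional embedding, then use cluster ids as an extra categorical feature
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
X["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(embedding)
</code></pre>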
<p>Also, I want to mention that we wanted to try editing TabPFN to get embeddings, and we had an idea to try fine-tuning TabPFN, but it didn't work out.</p>
<p>I also tried Optuna for optimizing my models, but it didn't work out.</p>
<p>Stacking didn't help my score much either.</p>
<p>edit: my last 2 submissions in this competition are the highest-scoring ones 😅</p>
<p>Yeah, I was hesitant to do it and it obviously dropped some score (since we remove a lot of easy class-0 samples), but eventually I decided to keep it because it would be better for the general case.</p>
<p>Well, sometimes LB probing can do wonders; to be honest, I was sure until the very end that the winners would be mostly LB probers.</p>
<p>Yeah, it also dropped my CV score, but the data points were really far from the main cluster, so I decided to remove them anyway.</p>
<p>A simple 4-split StratifiedKFold</p>
|
ICR - Identifying Age-Related Conditions
|
[475 Post 0.37649] Place Solution for the ICR - Identifying Age-Related Conditions Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p></p>
<h2> Context </h2>
<div><p>
<b>Business context:</b> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview
<br>
<b>Data context: </b> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data
</p></div>
<h2> Overview of the Approach</h2>
<div><p>
<b>Handling Missing Values: </b> <br>
Median values were computed on the training set grouped by EJ and Alpha.
Missing values (NaNs) in train columns were filled with the median for that row's EJ and Alpha.
Median values were then computed on the training set grouped by EJ only.
Missing values (NaNs) in test columns were filled with the train median for that row's EJ.</p>
<p>
<b>Decision to Split: </b> <br>
The data was split into EJ A and EJ B subsets, used in separate models for A and B.
This was based on a review of some of the EDA notebooks and discussions and on differences between the columns,
e.g. some columns are static for A ('DU','EH','FD','FL','GL') and there is no Alpha B for A,
while those columns were relevant only for Alpha B and EJ B,
plus an analysis of differences in distributions and statistics between A and B for other columns, especially correlated ones. </p>
<p>
<b>Correlations:</b> <br>
Separate correlations for EJ A (Alpha D and G, and EJ A overall) for analysis<br>
Separate correlations for EJ B (Alpha B, D and G, and EJ B overall) for analysis</p>
<p>
<b>Feature columns:</b> <br>
np.log1p for all columns excluding EJ for train and test. (EJ column was not used in data for models) <br>
Combination columns via addition for AB_CF, CD_DI, BC_FR, CR_DA for EJ A data and model<br>
Combination columns via addition as for EJ A with DU_FL specific for Alpha B, for EJ B data and model.</p>
<p>
<b>Weighted PCA features:</b> <br>
Separate weighted PCA for EJ A and EJ B, using weight columns max-normed for correlated groupings of columns and default weights for the remaining columns.
Used wpca.EMPCA based on the algorithm presented in Bailey (2012) to handle weights on each input
value and heteroscedastic noise.<br>
Considering correlations in different ways gave a view for which columns influence other columns as well as Class and which might be combined as features and/or used for weights or in groupings for weighted PCA.</p>
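<p>A hedged sketch of the weighted-PCA feature step, assuming the wpca package's EMPCA estimator accepts a per-value weights array in <code>fit</code> (as shown in its README); the column groupings, weights and component count here are illustrative:</p>
<pre><code>import numpy as np
from wpca import EMPCA

# X_a: hypothetical feature matrix for the EJ A subset (already np.log1p-transformed)
# corr_group_idx / group_weights: illustrative correlated column group and its weights
W = np.ones_like(X_a)
W[:, corr_group_idx] = group_weights / group_weights.max()  # max-normed weights

empca = EMPCA(n_components=8).fit(X_a, weights=W)  # assumption: fit accepts a weights array
wpca_features = empca.transform(X_a)
</code></pre>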
<p>
<b>Percentiles Differences:</b> <br>
Separate percentiles 10-90 were computed for EJ A and EJ B for all columns excluding EJ.<br>
Differences between column values and these percentiles were calculated; EJ A used percentiles 10, 40, 60, 90 and EJ B used 20, 40, 60, 80.<br>
Although the column meanings were unknown, the percentile differences were included as a proxy for
what could perhaps be considered in or out of a normal range for test results, what may indicate
hyper- or hypo- conditions, etc. </p>
<p><b>Training and Models:</b> <br>
Two sets of features were prepared, one transformed with QuantileTransformer and one with MinMaxScaler to 0-255;
the percentile differences were transformed with MinMaxScaler to 0-255, and a subset of np.log1p columns with the most correlation (different for A and B) was stacked together with them. The result was then transformed using Table-to-Image into a 2D, 16x16 image-style representation as input for a CNN model. <br>
The model was a simple Convolutional Neural Network (CNN) with a stack of 3 Conv2D, MaxPooling2D and BatchNormalization blocks, with Dropout on the last, swish activation on the first and ReLU on the second and third.
Then Flatten and 2 Dense layers: the first outputs 4 units (like the Alphas) with swish activation, the second outputs 1 unit for Class with sigmoid activation.<br>
Trained with the AdamW optimizer, a CyclicalLearningRate schedule and batch size 8.<br>
Cross-validation used MultilabelStratifiedKFold split on Class, Alpha and Gamma, with 8 folds for EJ A and 10 folds for EJ B (post-competition, 6 folds for EJ B was best)<br>
A proof-of-concept XGB/LGBM/TabPFN model was trialled to evaluate the weighted PCA and combination-column features. It was based on a notebook using Epsilon, modified to use KNNImputer(n_neighbors=8, weights='distance') for the unknown Epsilon values in train.
</p></div>
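<p>A hedged Keras sketch of the table-to-image CNN described above, assuming 16x16x1 image tensors <code>X_img_train</code> / <code>X_img_val</code> (layer widths, dropout rate and training details are illustrative; the author additionally used AdamW with a cyclical learning-rate schedule):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(16, 16, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="swish"),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(4, activation="swish"),    # 4 units, loosely mirroring the Alphas
        layers.Dense(1, activation="sigmoid"),  # Class probability
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="binary_crossentropy")
    return model

model = build_cnn()
model.fit(X_img_train, y_train, validation_data=(X_img_val, y_val), epochs=100, batch_size=8)
</code></pre>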
<h2>Details of the submission</h2>
<div><p>
The competition Overview stated "Currently, models like XGBoost and random forest are used to predict medical conditions yet the models' performance is not good enough." <br></p><p>
<b>What was special, creative, important:</b>
What was creative here was to try and see if a CNN would work with this data and how features could be used in different ways to that end.
Separate EJ A and EJ B data and models were important and allowed for insights gained from A which was easier to develop B. Using weighted PCA was special in this competition (first proposed in the IceCube - Neutrinos in Deep Ice competition (<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/overview"> IceCube - Neutrinos</a>) by bowaka). Using percentiles differences was another feature idea that leveraged the weighted PCA columns for potential additional information the model could use.<br>
Table to image proposed in this competition by miyawakiyoshifum,was a novel approach for NN solutions here.
Initially it did not seem as if it would be able to get good results but input size was small 5x5. To consider how features could be expanded and augmented, a variation of ulrich07 notebook XGB, LGBM, TabFPN was used.
(After competition end and more decimals revealed, the original notebook Private Score was 0.48347 and the variation including weighted PCA Private Score was 0.42157 so these features did work.) <br>
Returning to a CNN with table to image, initial versions of combination columns, weighted PCA, percentiles
for the 16x16 input and looking good CV 0.138855 seemed not to fare well with LB 0.36. After competition end and more decimals revealed this had <b>Private Score 0.40326 Public Score 0.36356 </b> but regrettably it was not selected for one of the final submissions. <br></p>
<p><b>What did not work:</b> Using Epsilon in the data for the CNN models or for the cross-validation splits did not help. Nor did anything tried to improve some of the hard-to-classify points, like the notorious Id cf5439add02c, an Alpha D, which seemed to seesaw against another Alpha D, Id 3924089e4396: a case of robbing Peter to pay Paul. Various ideas were tried to add or remove features, use new combinations, more percentiles, other seeds. Attempts to create larger inputs, e.g. 25x25 or 28x28, did not work even with more Dropout.<br></p>
<p><b>Post competition</b><br>
For the most part, improvements in CV for the CNN model translated into improved private LB even when the public LB was not great, and the spread between them was smaller than for many other models. This is perhaps relevant for future consideration of CNN models with this type of dataset. <br>
Post competition, I wanted to improve further on the best CNN table-to-image notebook (private score 0.40326). The EJ A model always performed best, so I focused on the EJ B correlations and a rework of its weighted PCA. Together with reducing the number of folds to 6 and the batch size from 16 to 8, this obtained the best result so far: <br>
<b>Private Score 0.37649, Public Score 0.32110</b> with CV 0.16837328202944316
see notebook <a href="https://www.kaggle.com/code/something4kag/icr-split-a-b-wpca-pctiles-table2img-cnn">ICR - Split A B WPCA Pctiles Table2Img CNN</a>
<br></p><p>There are no doubt ways to improve further, especially with knowledge of the true features' meanings. <br>
Maybe a 2nd ICR competition can find out!
</p></div>
<h2>Sources </h2>
<p></p><div><p><br>
<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381279" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381279</a><br>
<a href="https://github.com/jakevdp/wpca" target="_blank">https://github.com/jakevdp/wpca</a><br>
<a href="http://arxiv.org/abs/1412.4533" target="_blank">http://arxiv.org/abs/1412.4533</a> Delchambre (2014) algorithm<br>
<a href="http://arxiv.org/abs/1208.4122" target="_blank">http://arxiv.org/abs/1208.4122</a> co Bailey (2012) algorithm<br>
<a href="https://www.kaggle.com/datasets/something4kag/wpca-01-py3-whl" target="_blank">https://www.kaggle.com/datasets/something4kag/wpca-01-py3-whl</a><br>
<a href="https://www.kaggle.com/code/something4kag/example-weighted-pca-no-internet" target="_blank">https://www.kaggle.com/code/something4kag/example-weighted-pca-no-internet</a><br>
<a href="https://www.kaggle.com/datasets/miyawakiyoshifumi/offline-tab2img" target="_blank">https://www.kaggle.com/datasets/miyawakiyoshifumi/offline-tab2img</a><br>
<a href="https://www.kaggle.com/code/miyawakiyoshifumi/table-to-image" target="_blank">https://www.kaggle.com/code/miyawakiyoshifumi/table-to-image</a><br>
<a href="https://www.tensorflow.org/tutorials/images/cnn" target="_blank">https://www.tensorflow.org/tutorials/images/cnn</a><br>
<a href="https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17" target="_blank">https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17</a><br>
<a href="https://www.kaggle.com/code/propriyam/icr-tabpfn-lgb-xgb-lb-0-17-cv-0-16" target="_blank">https://www.kaggle.com/code/propriyam/icr-tabpfn-lgb-xgb-lb-0-17-cv-0-16</a><br>
<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/429029" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/429029</a><br>
<a href="https://www.kaggle.com/tilii7/data-splitting" target="_blank">https://www.kaggle.com/tilii7/data-splitting</a><br>
<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/428997" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/428997</a><br>
</p></div>
|
ICR - Identifying Age-Related Conditions
|
[ICR] 439th Position's Solution Summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Chris!</p>
<p>Appreciate your work in this competition. As always your insights and knowledge are a huge influence that I have greatly benefited from. </p>
<p>Update: linked notebooks have been successfully published.</p>
<h1>I. Foreword</h1>
<p>I want to thank ICR and Kaggle for hosting such an interesting competition. It's still surprising, even for a newbie like me, to see a competition with such a small dataset. With so many moving pieces, even tiny changes can cause huge impacts, which is the source of the fun.</p>
<p>I initially didn't have high hopes due to the competition's uncertain nature, but ended up spending a lot more time than I thought. Given the dataset's tiny size, I believed simple solutions were more likely to win, decided to go that way, and expected a shake-up. The final result is a 2200+ rank jump, not enough for a silver, but still a good learning experience!</p>
<p>Below is a short summary of my workflow. I think my biggest takeaway is the different options for feature selection, which I have shared in another post <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/432072" target="_blank">linked here</a>.</p>
<h1>II. Overall workflow</h1>
<h2>1. Workflow</h2>
<p>I tried several models, and most of them went through the same workflow described below. The final submissions are ensembles of the individual models which, together, have the highest OOF CV.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982488%2Fc6745cd5c92b9e78724253559906a0e8%2FScreenshot_1.png?generation=1692088143683642&alt=media" alt="ICR Overall Workflow"></p>
<h3>(a) Feature engineering</h3>
<p>Ideas tested include</p>
<ul>
<li>Feature integerization (credit: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/413198" target="_blank">link to discussion</a>)</li>
<li>Feature first-order interaction (<code>X * Y</code> and <code>X / Y</code>) (credit: <a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline" target="_blank">link to notebook</a>)</li>
<li><code>Epsilon</code>-based features (year, month, week, days, etc.)</li>
<li><code>NaN</code> imputation: <code>mean</code>, <code>median</code>, constant</li>
<li><code>EJ</code> encoding/left out</li>
<li>Multi-class label</li>
</ul>
<h3>(b) Feature selection</h3>
<p>With a dataset this small, I believed using fewer features was crucial to avoid overfitting, so I focused a lot on finding the right features. I used 3 different feature selection methods and a few combinations of them, in addition to correlation values to remove redundant features.</p>
<ul>
<li>Information Value (IV) & Weight of Evidence (WoE) (<a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline" target="_blank">notebook source</a>)</li>
<li>(For tree-based models) Target Permutation Importance (TPI - <a href="https://github.com/kingychiu/target-permutation-importances" target="_blank">github repo</a> - thanks to <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/424124" target="_blank">this discussion</a> by <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>) </li>
<li><code>featurewiz</code> (FW - <a href="https://www.kaggle.com/code/nemanjagojkovic/using-advanced-feature-engineering-with-featurewiz" target="_blank">notebook source</a>)</li>
</ul>
<h3>(c) Training parameter search</h3>
<p>I also tested different training set-up combinations of the following configs</p>
<ul>
<li>Feature selection's suggestions</li>
<li>Imbalance data treatment: sample weighting, oversampling</li>
<li><code>StratifiedKFold</code> strategy: target only; target concatenated with Epsilon truncated to year (<code>target_epsilon</code>); target concatenated with Experimental columns (<code>target_experiment</code>)</li>
<li>Multiclass/Binary</li>
<li>Model-specific hyper-parameters</li>
</ul>
<h3>(d) Model training</h3>
<ul>
<li>I selected the best training setups based solely on CV, often limiting to 100 or fewer features.</li>
<li>My techniques to avoid overfitting when training include<ul>
<li>Limiting model complexity by using hyper-parameters as close to default as possible</li>
<li>Randomly selecting hyper-parameters within a close-range of the initially chosen set of hyper-parameters.</li>
<li>Using 5 or 10 folds of <code>StratifiedKFold</code>, often repeated by 10 times.</li></ul></li>
</ul>
<h3>(e) Model selection</h3>
<p>Using the above workflow (a)-(d), I tested several model options, including: LGBM, Logistic Regression (LogReg), XGB, CatBoost, TabFPN. I then tried ensembling, and selected models based on CV scores. The best combination is LGBM + LogReg, which was selected for final submissions.</p>
<h3>(f) Inference</h3>
<p>Each single model instance during training is weighted with <code>1 / balanced_log_loss(y_val_prediction)</code> where <code>y_val_prediction</code> is the model's prediction for the validation set.</p>
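<p>For reference, a minimal sketch of the competition's balanced log loss and this inverse-loss weighting (variable names such as <code>val_sets</code> and <code>test_preds</code> are illustrative):</p>
<pre><code>import numpy as np

def balanced_log_loss(y_true, y_prob, eps=1e-15):
    # average of the per-class mean log losses (the competition metric)
    y_true = np.asarray(y_true)
    p = np.clip(y_prob, eps, 1 - eps)
    loss_0 = -np.mean(np.log(1 - p[y_true == 0]))
    loss_1 = -np.mean(np.log(p[y_true == 1]))
    return (loss_0 + loss_1) / 2

# val_sets: list of (y_val, y_val_prediction) per trained instance; test_preds: matching test predictions
weights = np.array([1.0 / balanced_log_loss(y_val, pred) for y_val, pred in val_sets])
weights /= weights.sum()
final_pred = np.average(np.stack(test_preds), axis=0, weights=weights)
</code></pre>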
<h2>2. Comments</h2>
<ul>
<li>Feature engineering<ul>
<li>Feature integerization seemed useful for EDA but was not needed for my final model selection.</li>
<li>Many of the first-order interaction features turned out to be powerful.</li>
<li>Things that boosted CV but hurt private LB<ul>
<li>Keeping <code>EJ</code> column</li>
<li>Binary class</li>
<li>CV-strat using target only</li></ul></li></ul></li>
<li>Feature selection<ul>
<li>IV & WoE consistently provide useful features, and they often select fewer features than the other methods.</li>
<li>When combined, IV & TPI provide even better features for tree-based models.</li></ul></li>
<li>Training config<ul>
<li>Weighting was consistently better than oversampling. This could be explained by the theory that the <code>Null</code>-Epsilon samples are of bad quality / different from the private LB, so any effort to reduce their influence has a chance to boost the private LB. In other words, undersampling > doing nothing > oversampling.</li></ul></li>
<li>Training and inference<ul>
<li>LGBM often had a good CV and public LB but a poor private LB.</li>
<li>LogReg, CatBoost and XGB were the other way around: medium-low CV, medium-low public LB, but good private LB.</li></ul></li>
</ul>
<h1>III. Submission selections & result</h1>
<p>My final submission is a combination of LGBM and Logistic Regression, which I found to have the best CV (0.175) and public LB (0.17). The good correlation between them made me choose this submission. </p>
<p>However, in the end it did not perform well on the private LB (0.41). Instead, my best private-LB submission, which was not selected, was an XGB with CV / public / private LB of 0.258 / 0.25 / 0.367. This submission would have achieved rank 20 and a gold medal, but at the time it was impossible to choose it over the ensemble above.</p>
<h3>Submission scores</h3>
<table>
<thead>
<tr>
<th>Submission</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Selection 1 - LGBM</td>
<td>0.1923</td>
<td>0.1891</td>
<td>0.4997</td>
</tr>
<tr>
<td>Selection 1 - LogReg</td>
<td>0.1908</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td><em>Selection 1</em></td>
<td><em>0.1752</em></td>
<td><strong><em>0.1717</em></strong></td>
<td><em>0.4197</em></td>
</tr>
<tr>
<td>Selection 2 - LGBM</td>
<td>0.1545</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>Selection 2 - LogReg</td>
<td>0.1685</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td><em>Selection 2</em></td>
<td><strong><em>0.1478</em></strong></td>
<td><em>0.3553</em></td>
<td><em>0.6248</em></td>
</tr>
<tr>
<td><em>Best (not selected) - XGB</em></td>
<td><em>0.2575</em></td>
<td><em>0.2490</em></td>
<td><strong><em>0.3679</em></strong></td>
</tr>
</tbody>
</table>
<h3>Best submissions' setups</h3>
<table>
<thead>
<tr>
<th>Idea</th>
<th>Select 1 - LGBM</th>
<th>Select 1 - LogReg</th>
<th>Best overall (XGB - not selected)</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>NaN</code> imputing</td>
<td>N/A</td>
<td><code>mean</code></td>
<td>N/A</td>
</tr>
<tr>
<td><code>X * Y</code></td>
<td>True</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td><code>X / Y</code></td>
<td>True</td>
<td>True</td>
<td>False</td>
</tr>
<tr>
<td><code>Epsilon</code> features</td>
<td>False</td>
<td>False</td>
<td>False</td>
</tr>
<tr>
<td><code>EJ</code> included</td>
<td>True</td>
<td>True</td>
<td>False</td>
</tr>
<tr>
<td>Multiclass/Binary</td>
<td>Binary</td>
<td>Binary</td>
<td>Multiclass</td>
</tr>
<tr>
<td>CV strat</td>
<td><code>target</code></td>
<td><code>target</code></td>
<td><code>target_experiment</code></td>
</tr>
<tr>
<td>Imbalance treatment</td>
<td><code>weighting</code></td>
<td><code>weighting</code></td>
<td><code>weighting</code></td>
</tr>
<tr>
<td>Feature selection</td>
<td>IV & TPI</td>
<td>IV</td>
<td>TPI</td>
</tr>
<tr>
<td>N-features</td>
<td>52</td>
<td>59</td>
<td>340</td>
</tr>
<tr>
<td>CV</td>
<td>0.1923</td>
<td>0.1908</td>
<td>0.2575</td>
</tr>
<tr>
<td>Public LB</td>
<td>0.1891</td>
<td>N/A</td>
<td>0.2490</td>
</tr>
<tr>
<td>Private LB</td>
<td>0.4997</td>
<td>N/A</td>
<td>0.3679</td>
</tr>
</tbody>
</table>
<h1>IV. References</h1>
<ul>
<li>Submission Selection 1<ul>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-v3" target="_blank">FE</a></li>
<li>LGBM: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d3-lgbm-v1" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lgbm-d3-fs1-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lgbm-d3-fs1-v0" target="_blank">Training</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-lgbm-d3-fs1-train0?scriptVersionId=139110044" target="_blank">Inference</a></li>
<li>LogReg: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d3-iv-woe-v1" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lr-d3-fs-ivwoe1-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lr-d3-fs1-v0" target="_blank">Training</a></li>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-stack-d3-v1/notebook?scriptVersionId=139399123" target="_blank">Ensemble submission</a></li></ul></li>
<li>Submission Selection 2<ul>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-v4" target="_blank">FE</a></li>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-iv-woe-v5" target="_blank">IV & WoE Result</a></li>
<li>LGBM: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d4-lgbm-v0" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lgbm-d4-fs0-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lgbm-d4-fs0-v0" target="_blank">Training</a></li>
<li>LogReg: <a href="https://www.kaggle.com/code/hoangnguyen719/icr-fs-d4-iv-woe5" target="_blank">Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-ps-lr-d4-fs-ivwoe0-v0" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-lr-d4-fs0-v0" target="_blank">Training</a></li>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-infer-stack-d4-v0?scriptVersionId=139547779" target="_blank">Ensemble submission</a></li></ul></li>
<li>Best submission (XGB)<ul>
<li><a href="https://www.kaggle.com/code/hoangnguyen719/icr-fe-fs-xgb-v1" target="_blank">FE & Feature Selection</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-xgb-d1-param-search/notebook" target="_blank">Parameter Search</a>, <a href="https://www.kaggle.com/code/hoangnguyen719/icr-train-xgb-v9?scriptVersionId=137025656" target="_blank">Training & Inference Submission</a></li></ul></li>
</ul>
<p>Thank you! Appreciate you sharing your experience and insights in the PSPFGP competition.</p>
<p>And thanks again for letting me know about the notebooks - it kept saying <code>An error occurred while saving changes to your notebook</code> when I tried to make the notebooks public. Do you happen to know what the cause might be?</p>
<p>Thanks - congratulations on your silver!</p>
|
ICR - Identifying Age-Related Conditions
|
[🥈 | 292/6431 | Top 5%] ICR — Identifying Age-Related Conditions, Kaggle Challenge Writeup
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my solution for the recently concluded Kaggle Competition titled “ICR — Identifying Age-Related Conditions.” Below, I will be emphasizing the key points as follows:</p>
<ul>
<li><strong>Introduction to the Challenge</strong>: In this section, I will provide an overview of the challenge itself.</li>
<li><strong>Introduction to the Data</strong>: This part will encompass an introduction to the dataset that was used in the competition.</li>
<li><strong>Approach to the Problem:</strong> Here, I’ll outline the methodology I employed to tackle the given problem.</li>
<li><strong>Successful Strategies</strong>: I will discuss the techniques and strategies that proved effective for me.</li>
<li><strong>Challenges Faced:</strong> This section will delve into the obstacles I encountered during the competition.</li>
<li><strong>Missed Opportunities:</strong> I will highlight concepts I considered but regrettably didn’t attempt.</li>
<li><strong>New Knowledge Acquired:</strong> In this segment, I’ll share the new insights I gained throughout this challenge.<br>
By exploring this write-up, you’ll gain valuable insights into the rewards of participating in Kaggle Competitions, and uncover the lessons I learned from the “ICR — Identifying Age-Related Conditions” challenge. Join me on this journey of discovery and growth!</li>
</ul>
<h2><strong>Introduction to the Challenge</strong></h2>
<p>The goal of this competition is to predict if a person has any of three medical conditions. You are being asked to predict if the person has one or more of any of the three medical conditions (Class 1), or none of the three medical conditions (Class 0). You will create a model trained on measurements of health characteristics.<br>
For the full context of the challenge you can refer the Official <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank">Overview </a>Section.<br>
The evaluation metric for the competition was <strong>balanced logarithmic loss</strong> (the average of the per-class log losses).</p>
<h2><strong>Introduction to the Data</strong></h2>
<p>Now comes the most important part of the competition. The competition data comprises over fifty anonymized health characteristics linked to three age-related conditions. Our goal is to predict whether a subject has or has not been diagnosed with one of these conditions — a <strong>binary classification</strong> problem.<br>
There was also some supplemental anonymized data provided, which was only available for the training set. What makes the modelling challenging is that only 617 rows of data were provided; it was very easy to overfit any kind of solution here. Also, there was class imbalance in the dataset, making the job even harder.</p>
<h2><strong>Approach to the Problem</strong></h2>
<p>The problem was simple enough to understand, although there was not much we could do to learn more about the domain, as the data was anonymized; a few columns were distinguishable, like age, sex, etc. Apart from that, most parameters were unknown.</p>
<h3><strong>Step 1 : EDA on the data</strong></h3>
<p>I tried minimal EDA on the data, looking at correlations and the features' central tendencies. There were a few features that were well correlated with the target value.</p>
<h3><strong>Step 2 : Create a baseline</strong></h3>
<p>As the features did not really make sense, I did not try to normalize the data on my own and built a simple decision tree classifier. I thought of using it as the baseline for my further iterations. Unfortunately, it did not work very well, so I used everyone's favorite XGBoost as the baseline.</p>
<h3><strong>Step 3: Iterate Iterate Iterate</strong></h3>
<p>After the baseline is ready, we keep adding new ideas in each iteration and keep building on the insights we gain from each submission. In the section below I will highlight the ideas that helped me climb up the public leaderboard.</p>
<h2><strong>Successful Strategies</strong></h2>
<ol>
<li><strong>A baseline XGBoost/CatBoost model:</strong> For my final version of the baseline, I opted for an ensemble of CatBoost and XGBoost models. This choice stemmed from CatBoost's inherent conservatism (because of its ordered boosting mechanism), setting it apart from more aggressive alternatives like LGBM. (Although, in subsequent iterations, I did introduce LGBM into the equation.)</li>
<li><strong>Mode imputation of missing values:</strong> After trying out different central-tendency imputation methods, mode imputation gave me slightly better results. I also did not want outliers affecting my score, as the features' nature was still unknown (the small dataset did not help either), so I avoided mean filling.</li>
<li><strong>A Stratified K-Fold CV was important:</strong> This competition highlights the importance of cross-validation. The error shot up heavily without a K-Fold CV, and even with it the fold errors were not very consistent. This made things difficult, as the score was improving on the public LB while the average CV score was not improving much.</li>
<li><strong>Sampling techniques:</strong> I mainly used undersampling to balance the dataset to the minority class, which at first seemed wasteful of the already small dataset. So I tried upsampling and class weights in the prediction, but those did not yield good results for me.</li>
<li><strong>The game changer, TabPFN:</strong> a Transformer that solves small tabular classification problems in a second. Many participants used this transformer-based model in their solution, and I did too (a minimal usage sketch follows this list). It is fast, self-normalizing, and designed for small datasets, which makes it very appealing to use (although people have won even without it). If you want to know more about it, you can refer to the original <a href="https://arxiv.org/abs/2207.01848" target="_blank">paper</a>. </li>
</ol>
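<p>A minimal usage sketch of TabPFN, assuming the original (v1) interface that was available during this competition; the ensemble-configuration count and variable names are illustrative:</p>
<pre><code>from tabpfn import TabPFNClassifier

# no scaling needed; TabPFN is designed for small tabular classification problems
clf = TabPFNClassifier(device="cpu", N_ensemble_configurations=32)
clf.fit(X_train, y_train)
val_proba = clf.predict_proba(X_valid)[:, 1]
</code></pre>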
<h2><strong>Challenges Faced</strong></h2>
<ol>
<li>Small dataset did not help at all.</li>
<li>Using the supplementary data improved the score a bit, but I was unsure about using it in my final solution as it wouldn't be available for the test set.</li>
<li>Class imbalance hampered the models' performance.</li>
<li>It was a <strong>trust your work/CV or trust the public LB</strong> kind of competition. As the public LB error was 0 for the top spot towards the end, it made sense to trust my work and CV.</li>
</ol>
<h2><strong>Missed Opportunities</strong></h2>
<ol>
<li>Hybrid sampling (upsample the minority and downsample the majority to a threshold). I don't know how it would have gone, but I am sure it would've been interesting.</li>
<li>A custom NN for the task, maybe an ensemble of XGBoost, TabPFN and a custom NN with a lot of dropout (which is actually part of the winning solution).</li>
<li>Using the supplementary data provided to train the model. </li>
<li>Using the age, sex as the splitting criteria for the K-Folds. (might have been interesting).</li>
<li>Most of the feature engineering process, I was not able to dedicate much time.</li>
<li>Thresholding probability values to imitate the public test set. Most top kernels were doing this; I did not like the approach, so I skipped it (and that saved me from doom, as all the notebooks at the top fell very hard on the private LB).</li>
<li>Using Optuna was on my list for hyperparameter tuning.</li>
</ol>
<h2><strong>New Knowledge Acquired</strong></h2>
<p>I maintain this section in all my write-ups as it helps me reflect on my progress and remember when and how I acquired specific skills, tricks, or techniques. Here, I’m compiling insights that could prove valuable to anyone who peruses this:</p>
<ol>
<li><strong>Believe in Your Work’s Validity:</strong> Even if your ranking places you at the tail end, remember that correctness in your process and strong conviction can yield improvement. As evidence, I witnessed a leap of over 2500 places on the private leaderboard, securing a silver medal.</li>
<li><strong>TabPFN — A Gem for Small Datasets:</strong> For those dealing with smaller datasets, TabPFN stands out as a remarkable AutoML technique to consider. While its parameters might not be fine-tuned extensively, its utility remains unparalleled.</li>
<li><strong>XGBoost’s Enduring Strength:</strong> It’s evident that XGBoost continues to be a robust contender. Many participants achieved higher rankings by solely employing tuned XGBoost models. However, my personal inclination leans towards CatBoost for such scenarios due to its advantageous attributes.</li>
<li><strong>Take the time to actually build the whole iteration pipeline yourself,</strong> rather than forking someone's work. Forking could save you some time initially, but it restricts your flexibility to make changes and build upon someone else's code.<br>
That is it for the Writeup this time, you can find the Notebook that won me the silver (I have made it public even before the competition ended) <a href="https://www.kaggle.com/code/mohneesh7/tabpfn-xgb-with-stratifiedkf-cv/notebook?scriptVersionId=135526306" target="_blank">Here</a><br>
If you have any feedback or suggestions please do reach out to me!<br>
Cheers!!</li>
</ol>
|
ICR - Identifying Age-Related Conditions
|
automl pipeline: featurewiz, multilabel k-fold and optuna
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My second competition concludes with a confident bronze at position 361; however, a wild shake-up of thousands of places and the quality of the data leave more questions than answers.</p>
<p>Over the course of the competition, I dedicated myself to improving my skills in AutoML, and I am pleased to share some key insights from my solution:</p>
<p>🛠 Feature Engineering:<br>
A crucial step in my strategy was the automatic generation and selection of features (+0.02 on both CV and LB). I developed new features from the existing data, extending my understanding of the information. I then used correlation analysis to systematically identify and select the most influential features, significantly improving the robustness of my models.</p>
<p>📊 Multilabel Stratification:<br>
I also employed a technique called iterative stratification to cope with the data imbalance and skewness. Multilabel stratification by label and age bin allowed me to handle multiple variables simultaneously, ensuring more accurate and stable cross-validation results. This approach was critical in helping me navigate the challenging terrain of the competition dataset.</p>
<p>🕯 Ensemble+Optuna:<br>
One of the cornerstones of my success was the construction of an ensemble of models using Optuna. Optuna is an open-source hyperparameter optimization framework that enabled me to automatically fine-tune the weights and thresholds of my ensemble models. The optimized ensemble not only boosted my predictive performance but also improved the consistency of my results across multiple test sets.</p>
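<p>A rough sketch of how ensemble weights can be tuned with Optuna against out-of-fold predictions (names and the loss function are illustrative, not the notebook's exact setup):</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import log_loss

# oof_preds: dict of model name -> out-of-fold probability array; y: true labels
def objective(trial):
    raw = np.array([trial.suggest_float(name, 0.0, 1.0) for name in oof_preds])
    w = raw / (raw.sum() + 1e-9)
    blend = sum(w_i * p for w_i, p in zip(w, oof_preds.values()))
    return log_loss(y, blend)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
best_weights = study.best_params
</code></pre>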
<p>Overall, this competition has been an exhilarating journey. I faced setbacks, questioned my approaches, and ultimately learned a lot. The experience has been invaluable, and I am excited to apply the knowledge gained to future challenges.</p>
<p>code: <a href="https://www.kaggle.com/code/sirtimio/bronze-solution-0-4-lb?scriptVersionId=139617098" target="_blank">https://www.kaggle.com/code/sirtimio/bronze-solution-0-4-lb?scriptVersionId=139617098</a></p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
10th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Those were from our best single DenseNet201 that was scoring 0.963. When we teamed up, we all retrained our best models with the same pseudolabels, and all of them showed varying but substantial improvements. We all felt pretty confident that the pseudolabels were working. </p>
<p>Here is a brief overview of our modeling and solution. Please feel free to ask any questions.</p>
<ol>
<li><p>We only used the data provided by Kaggle and Gleb's links. We had some concerns about the license of use for some of the images in Gleb's dataset, to be honest, but most of them seem to be fine. </p></li>
<li><p>We first tried some CNN models from scratch, but discovered that the well-known pretrained models converged faster and gave better accuracy. Our best single model was a DenseNet201 trained on patches of size 336x336, scoring 0.963/0.967 (public/private LB). As the accuracy was pretty high, we decided to give pseudo-labeled images from the test set a try, using the predictions from this model.</p></li>
<li><p>We applied a threshold of p>=0.999 and used only unaltered images. This gave us 1116 new images to train with.</p></li>
<li><p>The same DenseNet201 model, trained now with this extra dataset, scored 0.975/0.973. After intense TTA, it gave us 0.980/0.975.</p></li>
<li><p>We trained a few more models (another DenseNet201, InceptionResNetV2, etc.), but since we couldn't improve much by just blending their predictions (and we were also worried about overfitting to the LB), we decided to try stacking predictions from different models and training a second-layer model on them.</p></li>
<li><p>Since we only had about a week to train them, and since we didn't have many GPU resources, we decided to do stacking with just 2 folds.</p></li>
<li><p>While we were training NN models, we tried to extract image features to train a non-NN model. We basically followed the ideas from this paper:</p></li>
</ol>
<p><a href="https://www.semanticscholar.org/paper/Using-sensor-pattern-noise-for-camera-model-identi-Filler-Fridrich/e9d9d89d81e49c8fcbb8b2cc960efa4c91434319">"Using sensor pattern noise for camera model identification" Thomas Filler, Jessica Fridrich, Miroslav Goljan</a></p>
<ol>
<li><p>We extracted approx. 3000 features from the image's linear pattern and noise pattern. We trained XGB, LightGBM and a MultilayerPerceptron after applying PCA, scoring 0.898/0.881. (We found that PCA reduced the models' training times, but the raw dataset was trainable as well, achieving similar performance, if not better.)</p></li>
<li><p>Our final model is an XGB model stacking the predictions from 9 L1 models (a rough sketch of this stacking step follows the list):
- From images: DenseNet201, VGG16, Xception, DenseNet121, InceptionV3, ResNet50 (we tried others but discarded them)
- From extracted features: XGB, LightGBM, MultilayerPerceptron</p></li>
<li><p>This model (A13) scored 0.9877/0.9866. We also trained the same model excluding the pseudolabeled data, since we thought those images could introduce a strong bias (after all, they had already been used for all our L1 models). This second model (A13a) scored 0.9798/0.9847, and we made the somewhat random decision to select this submission for the final, finishing 10th instead of 5th. (Our 2nd final submission was an optimized weighted average of several submissions scoring 0.9885/0.9835, but we weren't very confident about it.)</p></li>
</ol>
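<p>A hedged sketch of the 2-fold stacking step described in the list above, assuming out-of-fold probabilities from the L1 models are already collected (array names and hyperparameters are illustrative):</p>
<pre><code>import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold

# oof_l1: (n_train, n_models * n_classes) out-of-fold L1 probabilities; test_l1: same for the test set
skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
test_pred = np.zeros((test_l1.shape[0], 10))  # 10 camera models
for tr_idx, _ in skf.split(oof_l1, y):
    l2 = xgb.XGBClassifier(max_depth=3, n_estimators=500, learning_rate=0.05)
    l2.fit(oof_l1[tr_idx], y[tr_idx])
    test_pred += l2.predict_proba(test_l1) / skf.get_n_splits()

final_labels = test_pred.argmax(axis=1)
</code></pre>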
|
IEEE's Signal Processing Society - Camera Model Identification
|
1st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For XGBoost it is critical to have a lot of data points, so prediction time matters. The faster you predict, the more test-time augmentations you can produce and the better you can fit models. </p>
<p>We used it for model predictions on the hold-out set. Each model predicts probabilities for the 10 classes, and these become the input for XGBoost etc.</p>
<h2>Models</h2>
<p>Here is the list of our best models. Half of them are based on Andres' code (forked before any restrictions on usage were added) and half on its PyTorch implementation:</p>
<ul>
<li>984_densenet201_antorsaegen_29_0.98624</li>
<li>976_densenet201_antorsaegen_62_0.98271</li>
<li>977_resnet50_antorsaegen_119_val_0.9815</li>
<li>976_DenseNet201_do0.3_doc0.0_avg-epoch072-val_acc0.981250</li>
<li>967_InceptionResNetV2_do0.1_avg-epoch154-val_acc0.965625</li>
<li>962_Xception_do0.3_avg-epoch079-val_acc0.991667 (leaky validation, pls ignore)</li>
</ul>
<p>All of the models had a full-size crop (512) + TTA 8. Choosing this crop was our biggest flaw since smaller crops allow faster training, more TTA options and produce almost the same accuracy. </p>
<h2>Data</h2>
<p>In total, we collected 300 GB of photos from Flickr and Yandex.Foto. But due to the huge crop size, we could not fully utilize the entire dataset and used only 20,000 photos for training. The remaining 50,000 were used for blending.
Data was filtered by resolution and camera type; additionally, all photos with non-default software in EXIF (i.e. processed somehow) or with poor quality (lower than 95 as identified by ImageMagick) were excluded.</p>
<h2>Hardware</h2>
<p>Nothing special for a team with five members: 5 x 1080ti + 1070</p>
<h2>Submissions</h2>
<p>We had 2 submission strategies:</p>
<ul>
<li>Averaging all TTA predictions from all models with a power mean (powers of 1, 2, 4 as a parameter). Additionally, we explicitly made all classes equally distributed (132 manip and 132 unalt photos per class) by running the Hungarian algorithm on the probabilities; a sketch of this balancing step follows this list. As expected, this resulted in a huge LB overfit: 0.991 score on public and 0.985 on private.</li>
<li>Blending models by using hold-out predictions. There were 52 models with slightly different parameters and equal weights: XGBoost (20) + LightGBM (20) + Keras (12). This gave us 0.986 score on public and 0.989 on private. No class equalization was used with this approach except some weight tuning for Nexus 5 during training (all models had a smaller number of predictions for this phone). </li>
</ul>
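<p>A minimal sketch of the Hungarian-style class balancing mentioned above, assuming per-image class probabilities are available and the balancing is run separately for the manip and unalt subsets (the cost design and the epsilon are my assumptions):</p>

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_assignment(probs, per_class):
    """Assign each image to a class so that every class receives exactly
    `per_class` images, maximizing the total assigned probability."""
    n_images, n_classes = probs.shape
    assert n_images == n_classes * per_class
    # Replicate each class column `per_class` times so the problem becomes a
    # square one-to-one assignment, then minimize the negative log-probability.
    cost = -np.log(np.repeat(probs, per_class, axis=1) + 1e-12)
    row_ind, col_ind = linear_sum_assignment(cost)
    labels = np.empty(n_images, dtype=int)
    labels[row_ind] = col_ind // per_class   # map replicated columns back to class ids
    return labels
```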
<h2>Key takeaways</h2>
<ul>
<li>Collect as much data as possible but do not forget to clean it (GIGO)</li>
<li>Try smaller crops first and all architecture types</li>
<li>LB probing is evil, trust your CV</li>
</ul>
<p>We were lucky that predictions did not take a lot of time to make in this competition. It is not always the case though.</p>
<p>We used <a href="http://www.seleniumhq.org/">http://www.seleniumhq.org/</a> for scraping image preview links, then extracted links to the original files from it</p>
<p>We did not touch architecture at all, except for dropout rate</p>
<p>To produce a bunch of XGBoost models we used different combinations of training parameters, i.e. number of folds, optimizer, patience</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
2nd place solution with GPU muscles
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>tldr</strong>:
We downloaded about 500+ GB of photos, trained 9+ ImageNet-like models with 3 versions of the pipeline. Finally, we averaged 27 checkpoints with a geometric mean.</p>
<p><strong>Key components of a good solution, by priorities:</strong></p>
<ol>
<li>Large and clean external dataset </li>
<li>Consistent local validation</li>
<li>Classic competitive approach to learning models</li>
<li>Diverse models</li>
</ol>
<p><strong>Data mining</strong>: We downloaded 500+ Gb photos from various resources: Flickr, Yandex.Fotki, Wikipedia Commons, mobile reviews. In addition, on the last night we downloaded 22k more photos with urls from Flickr gathered by <a href="https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/49064">Andres Torrubia</a>. </p>
<p><strong>Filtering data</strong>: Lightroom/Photoshop/etc. processing could eliminate all the information about the camera. We filtered on: model, resolution, quality of JPEG compression, and processing software.
After filtering the training and validation data, the datasets looked like this:
<img src="https://pp.userapi.com/c824701/v824701072/a95a8/Hgl68WyLzyU.jpg" alt="enter image description here">
We took the validation set from <a href="https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/47235">Gleb’s post</a>, but replaced the iPhone 6 Plus pictures in it with iPhone 6 ones.</p>
<p><strong>Training models</strong>: Our code is based on pytorch version of <a href="https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/48679">Andres solution</a>.
For all the models, the binary flag is_manip was used as an additional input for the classifier head of the net. All models had an input size of 480. For the majority of our models, five crops + a photo-orientation flip (10 TTA) were applied to the pictures; for D4 models, five crops + the whole D4 group were applied (40 TTA); then the geometric mean was taken over the predictions.
Useful tricks:
1. Adam, reducing LR on plateau with patience 2-4
2. Cyclic LR with SGD
3. Pseudo-labeling
4. Averaging 3 checkpoints with the best loss for validation</p>
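<p>A minimal PyTorch sketch of the D4 test-time augmentation combined with a geometric mean described above (the softmax placement and batch layout are my assumptions, not the team's exact code):</p>

```python
import torch

@torch.no_grad()
def d4_tta_predict(model, x):
    """Geometric mean of softmax predictions over the 8 dihedral (D4) views
    of a batch x with shape (B, C, H, W)."""
    preds = []
    for k in range(4):                              # 0/90/180/270 degree rotations
        xr = torch.rot90(x, k, dims=(2, 3))
        for flip in (False, True):                  # without / with horizontal flip
            xi = torch.flip(xr, dims=(3,)) if flip else xr
            preds.append(torch.softmax(model(xi), dim=1))
    log_p = torch.log(torch.stack(preds) + 1e-12)
    return torch.exp(log_p.mean(dim=0))             # geometric mean over the 8 views
```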
<p>Also, on the last day we trained several models with D4 augmentations, fine-tuned from the best checkpoint of the previous pipeline. We didn’t have enough submissions left to check all of these models on the LB, but the result of the one we did check was impressive.</p>
<p>The final ensemble of models looked like this:
<img src="https://pp.userapi.com/c824701/v824701072/a95b9/dg36oZu6_B8.jpg" alt="enter image description here">
Training logs of the models:
<img src="https://pp.userapi.com/c824701/v824701072/a95b2/aMQ2rKhWAI4.jpg" alt="enter image description here">
<strong>Averaging</strong>: We tried different approaches with class balancing and with dropping the missing classes according to the probabilities. But in the end, a geometric average of 27 checkpoints of different models was the best.</p>
<p><strong>What did not work?</strong>
Demosaicing features
<img src="https://pp.userapi.com/c824701/v824701072/a95c0/sJpJNsYtDcg.jpg" alt="enter image description here">
A camera sensor records the intensity of only one color per pixel, and the remaining colors are interpolated from neighboring pixels; this reconstruction (demosaicing) algorithm differs between camera manufacturers.
The idea was to compute the difference between the original image and the image obtained with one of the standard demosaicing algorithms,
and to train the neural network on these difference maps.</p>
<p><strong>Hardware:</strong></p>
<ul>
<li>i7 7700k, 64gb, 2x 1080 (just for development) </li>
<li>i7 6700k, 32gb, 2x Titan X (Maxwell)</li>
<li>i7 5930K, 32GB, 3x1080Ti</li>
<li>Xeon 2696v3, 64gb, 4x1080Ti</li>
<li>i7 3770k, 16gb, 2x 1080Ti</li>
</ul>
<ol>
<li>Loss curves seem jumpy due to the scale and the small number of samples in
the validation set.</li>
<li>Almost all validation accuracies are around 0.98, which is pretty
high in my view.</li>
<li>As far as I remember, the typical batch size was 24-40 for 4x
1080 Ti.</li>
<li>We tried a weighted mean and small-class sorting; they didn't work for
us.</li>
<li>DPN is fantastic! Our best single model: dpn92 with pseudo-labeling,
TTA and averaging of 3 checkpoints: 0.987 (private) and 0.984 (public).</li>
<li>About D4: answered above</li>
</ol>
<p>About D4: <a href="https://en.wikipedia.org/wiki/Dihedral_group">https://en.wikipedia.org/wiki/Dihedral_group</a></p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
3rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Data</strong></p>
<p>The training set has been augmented with both private images and images downloaded from Flickr.
All images were first sifted based on the EXIF header meta-data,
keeping only images whose meta-data was coherent with the corresponding camera of the training set.
All images that had been processed by editing software or carried a restrictive license were removed. </p>
<p>Finally we obtained a dataset of 450 images per model (about 15 GB). 400 images have been used for training, 50 images for validation.</p>
<p><strong>Training</strong></p>
<p>During training, patches were randomly extracted from the central part of each image (1024x1024 pixels) and randomly rotated in steps of 90 degrees. </p>
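<p>A minimal sketch of this patch extraction for a NumPy image in HWC layout (the patch size shown is purely illustrative, not the one used by the team):</p>

```python
import numpy as np

def random_training_patch(img, patch_size=299, central=1024):
    """Random patch from the central `central`x`central` region,
    rotated by a random multiple of 90 degrees."""
    h, w = img.shape[:2]
    top, left = max((h - central) // 2, 0), max((w - central) // 2, 0)
    region = img[top:top + min(central, h), left:left + min(central, w)]
    y = np.random.randint(0, region.shape[0] - patch_size + 1)
    x = np.random.randint(0, region.shape[1] - patch_size + 1)
    crop = region[y:y + patch_size, x:x + patch_size]
    return np.rot90(crop, k=np.random.randint(4)).copy()
```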
<p>We trained separate networks for the model identification of unaltered and manipulated images.
For the first problem, we included only unaltered images in the training set.
On the contrary, the networks for manipulated images were trained using both unaltered and manipulated images, as we found this solution to provide consistently better results.</p>
<p><strong>Solution overview</strong></p>
<p>Fusion of different CNNs models (arithmetic mean of the scores). </p>
<p>For unaltered images we considered 5 networks trained at different patch size: XceptionNet (96), XceptionNet (299), Inception v3 (139), InceptionResNet v2 (299), DenseNet121 (224). </p>
<p>For manipulated images we considered the same networks above but used various input patch sizes, for a total of 14 models.
We tested many alternative solutions to improve performance, two of which were eventually applied: 1) network training on small patches followed by fine-tuning on larger ones, 2) identification of problematic classes (JPEG 70 and resizing 0.5) and design of dedicated detectors.</p>
<p><strong>Hardware</strong></p>
<p>NVIDIA Tesla P100 GPU with 16GB of RAM.</p>
<p><strong>Special mention</strong> </p>
<p>It is interesting that using only a single model for unaltered (XceptionNet on patches of dimension 96) and manipulated images (XceptionNet on patches of dimension 96 fine tuned on 299) gave a public score = 0.986666 and a private score = 0.981309. </p>
<p><strong>Lessons learnt</strong></p>
<p>Do not trust the public LB too much. Use a better validation set!</p>
<p>This was a great experience for us (our first time in a Kaggle competition) and we really learnt a lot. Many thanks to the organization and to all the teams sharing their work.</p>
<p>We noticed on our internal validation set that the accuracy was different based on the type of manipulations.
For example, with reference to XceptionNet working on patches of dimension 96, we obtained the following results:</p>
<table>
<thead>
<tr><th>Manipulation</th><th>Accuracy</th></tr>
</thead>
<tbody>
<tr><td>gamma = 0.8</td><td>0.982</td></tr>
<tr><td>gamma = 1.2</td><td>0.982</td></tr>
<tr><td>JPEG 90</td><td>0.978</td></tr>
<tr><td>JPEG 70</td><td>0.940</td></tr>
<tr><td>resize = 0.8</td><td>0.982</td></tr>
<tr><td>resize = 0.5</td><td>0.966</td></tr>
<tr><td>resize = 1.5</td><td>0.969</td></tr>
<tr><td>resize = 2.0</td><td>0.954</td></tr>
</tbody>
</table>
<p>Hence, we decided to train specific networks for the classes achieving the worst results:
compression (JPEG = 70) and resizing (0.5 and 2).
This means that the training set for such a detector contains only images manipulated with that specific class.
When fusing with the other detectors, only two of them helped us increase the score on the public LB,
hence we discarded the one designed for detecting resizing by a factor of 2.</p>
<p>No. We tried to detect the type of manipulations, but we were not able to achieve good results.
So we decided to give up on this direction.
What we did was only to verify that the accuracy of detecting the different manipulations was not equally good
and to add specific detectors for these problematic classes.</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Right, I did iterate this process three times, but did not observe any concrete improvement.</p>
<p>I forgot to mention that I only used the 80% central portion of the train data, which should be in agreement with your idea because the centers are generally more textured. I have updated the original post. Thank you for reminding me.</p>
<p>The crop sizes: inception_v3 (480x480), inception_resnet_v2 (299x299), inception_v4 (352x352), xception (320x320).</p>
<p>Yes, similar to others works, I augmented the random training crops by the 8 manips with a probability. In other words, one model for all the unalt and manip test data.</p>
<p>In step 1, I used the original train data provided by the organizer plus Gleb's data (good jpgs) for training, and I used Gleb's validation data for validation. In step2, I used the original training data provided by the organizer plus Gleb's data (good jpgs) plus the 'unalt' part of the test data for training, and I still used Gleb's validation data for validation</p>
<p>During competition, I did submit an ensemble result in which I averaged predictions using four inception models (inception_v3, inception_resnet_v2, inception_v4 xception) trained with various crop sizes. The LB result was public0.981/private0.981 (after step 2 of my solution). I don't know how much the improvement would transfer to after step 3. I feared that I would drop out of the 'gold' zone so I did not choose to continue with the ensemble result. The leading teams were just giving me too much pressure. </p>
<p>"Determining Image Origin and Integrity Using Sensor Noise"</p>
<p>I just ran step3 with the ensemble model - Private 0.988452 and Public 0.983541. Still cannot beat the 1st team. This confirms the importance of more data.</p>
<p>Here is a brief summary of my steps.</p>
<p>Edit: I only used the central 80% crop of the train data because the boundaries are often statistically very different from the test data. For example, if the original image size is 1000x1000, only the central 800x800 crop is used. This center-cropping applies to train data only, and it gave around 1% higher accuracy than training on the original size.</p>
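<p>A small sketch of this central 80% crop for a NumPy image in HWC layout (a hypothetical helper, not the author's code):</p>

```python
def central_crop(img, frac=0.8):
    """Keep only the central `frac` fraction of the image along each dimension,
    e.g. a 1000x1000 image becomes its central 800x800 crop with frac=0.8."""
    h, w = img.shape[:2]
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return img[dh:h - dh, dw:w - dw]
```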
<ol>
<li><p>Fine-tune a pretrained inception_v3 with random 480x480 crops. The provided training set and Gleb's data were used. My data augmentation included the eight possible manipulations but no transpose, rotation or flipping, as I believe they should not help in theory. JPEG compression is always aligned (to the 8x8 grid) as I bet re-compressions were done before cropping. This achieved Public LB 0.976 and Private LB 0.972. </p></li>
<li><p>Predict the test set ('unalt' images only) using the finetuned model. Use the predicted probabilities as pseudo-labels for test data and merge the test data with the training set. Continue tuning with the merged set. After the pseudo-labeling, the performance improved to Public LB 0.983 and Private LB 0.976.</p></li>
<li><p>Group the 'unalt' images in test set by predicted labels and estimate the sensor noise patterns for each camera in test set (totally ten reference patterns). Then match each of the 'unalt' images with the ten reference patterns, and correct the predictions when the correlation between an image and a reference pattern is larger than a certain threshold. I also corrected the 'manip' part by matching their sensor noises with the augmented (by the eight manipulations) reference patterns. The last step gave the largest boost: Public LB 0.986 and Private LB 0.987.</p></li>
</ol>
<p>Thanks to Kaggle and IEEE SPS for hosting this interesting competition.
Thanks to everyone who generously shared their data and ideas.</p>
<p>I gathered all the test data whose labels disagreed between the two approaches (DL based and sensor noise based). The corrections are done by choosing the label predicted by the sensor-noise-based method when the correlation value (between the noise estimated from a test image and a reference pattern) is larger than a threshold; otherwise I keep the DL-produced label. To correct the 'manip' part, I first processed the 'unalt' test set with the eight manipulations. For each manipulation and for each camera, one reference pattern was estimated, so we have 10 classes x 8 manips reference patterns. Then, I matched each 'manip' image with the 80 reference patterns and chose the camera label with the largest correlation.</p>
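<p>A minimal sketch of the sensor-noise matching idea, using a simple Gaussian denoiser and plain averaging in place of the MLE estimator from the referenced paper (all function names are assumptions; as noted later in this thread, a test image's own residual should be excluded from its reference pattern before matching):</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Noise residual: the image minus a smoothed (denoised) version of itself.
    A Gaussian filter stands in here for the denoiser used in the PRNU literature."""
    img = img.astype(np.float32)
    return img - gaussian_filter(img, sigma=1.0)

def reference_pattern(residuals):
    """Average the residuals of all images predicted for one camera."""
    return np.mean(residuals, axis=0)

def correlation(a, b):
    """Normalized correlation between a test residual and a reference pattern."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```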
<p>In multi-class classification, labels should be (internally) one-hot encoded. Assume we have three classes, the input (data, label) pairs should be (data1, [0 1 0]), (data2, [1 0 0]), (data3, [1 0 0]) ... It's true that we can use single integers to represent the labels but they will be ultimately treated as one-hot encoded internally. So the pseudo-labeled test data, assume three classes, should look like (data1, [0.1 0.7 0.2]), (data2, [0.4 0.3 0.3]) ... You can also use 'hard' predictions instead of the probabilities. Honestly I don't know the best way to do the pseudo-labeling either.</p>
<p>1 and 2: In this competition, I always finetune all the layers of the CNN in step 1 and step 2, not just the FC layers.</p>
<p>3: I used the softmax probabilities as the labels to NOT retrain but continue finetune the saved weights from step 1. And yes, the probabilities were obtained by the saved net from step 1.</p>
<p>4: I used ALL of the 'unalt' test set and I did not use the 'manip' set. The main intuition is to use the 'unalt' part to predict the 'manip' part of the test data, because they share the same noise patterns and I assume they have similar image content. What contradicts my intuition is that the 'unalt' performance also improved a little bit, and I don't understand why.</p>
<p>5.1: That's pretty close. I actually used the MLE estimator proposed in "Determining Image Origin and Integrity Using Sensor Noise". But direct averaging should be okay too.</p>
<p>5.2: Correct.</p>
<p>Hi Anselmo,</p>
<p>In the second step, I was actually doing training using the test data. Since we need the labels for optimization (training), what we can do is predicting the labels (for the test data) use the model obtained from the first step. In my case, I used the soft probabilities as labels. </p>
<p>jeandebleau is accurate about the sensor noise part. Again, I used the predicted labels for grouping the test data into ten clusters and estimating reference patterns. There are about 1% wrong labels in the predictions for the 'unalt' test data in my case, and fortunately this small 'labeling noise' does not have a huge negative impact. Note that a minor increase in the 'labeling noise' could totally collapse this method. One key point I would like to add: when matching the sensor noise of each test image with the ten reference patterns, there is an extra step to exclude the contribution of the given test image from the reference pattern, otherwise the method won't work well.</p>
<p>You are right that the sensor noise based method might only work in this specific scenario. I don't believe it will bring improvement if the test data are obtained from a larger number of devices for each class, and if we dont have enough data for each device, and also if the DL method is less accurate.</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
5th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is pretty simple.</p>
<p>We trained several different CNN models and blended them. We were using imagenet weights for initialization and manipulations described on the Data page for augmentation.</p>
<p><strong>Key features:</strong></p>
<ul>
<li>224x224 random crops</li>
<li>TTA 11: all manipulations and rotates with geometric mean</li>
<li>Additional data (40k images from flickr which were filtered only by resolution)</li>
<li>Good predictions postprocessing (+3% accuracy)</li>
</ul>
<p>One epoch takes only 8 minutes because of the small crops. This allowed us to train several different models: Resnet50, Resnet101, Densenet121, Densenet201, Xception, Resnext101. Each one achieves 0.975-0.982 on the private LB. Our final submission is just a mode blend of these models :)</p>
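<p>As a generic illustration (not the team's code), a mode blend over per-model predicted labels might look like this:</p>

```python
import numpy as np

def mode_blend(per_model_labels):
    """Majority vote over the class labels predicted by each model;
    per_model_labels has shape (n_models, n_images)."""
    per_model_labels = np.asarray(per_model_labels)
    return np.array([np.bincount(col).argmax() for col in per_model_labels.T])
```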
<p>I uploaded all code to github repo: <a href="https://github.com/PavelOstyakov/camera_identification">https://github.com/PavelOstyakov/camera_identification</a></p>
<p>There you can find an instruction how to train a model. I also uploaded weights for Resnet50. You can easily generate a submission which gives 0.98 on private LB.</p>
<p>Thanks to all participants and congratulations to the winners! It was a funny competition :)</p>
<p>Good luck in next competitions!</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
9th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>One epoch was 4 full passes through the training set. It took about 50 such epochs at the first stage and about 10 at the second.</p>
<p>As pseudo-labels we've been using only confident predictions from all the models, so there were about 2400 test images. I was heavily overfitting to these images to make my model replicate the confident pseudolabels, hoping that such a model would also learn how to classify the remaining 240 test images.</p>
<p>First of all, congratulations to all of my teammates: <a href="https://www.kaggle.com/iglovikov">ternaus</a> for getting Grand Master title; <a href="https://www.kaggle.com/arsenyinfo">arsenyinfo</a> and <a href="https://www.kaggle.com/nesterov">mephistopheies</a> for Master titles and <a href="https://www.kaggle.com/cortwave">cortwave</a> for the first gold medal. Well-deserved guys!</p>
<p>Here is a brief overview of our solution:</p>
<ol>
<li>Each of us created a solid single model earlier in the competition with the initial data and Gleb's data. There were MobileNets, VGGs, custom ResNets, DenseNets (about 6-7 models) with different crop sizes, augmentations and TTAs. We constructed majority voting out of these models and put test images with the maximum votes to pseudolabels.</li>
<li>Got more data from flickr: about 40k images. They have been filtered by camera model, resolution, quality and any software changes. Constructed validation from Gleb's and new flickr images.</li>
<li>The major idea of our models was to make training process iterative. In the first stage we've used train+flickr data (Gleb's and ours). Adding pseudolabels and resetting LR in the second stage. In particular, my own approach included: in the first stage, training VGG-16 on initial images + some flickr (overall 7k images). In stage 2, keep only pseudolabels and highly overfit to them reaching 100% accuracy on the train set. Such model gave 0.985 Private LB.</li>
<li>Occasionally, we noticed that distribution of test photos is quite uniform in the test set and decided to force it to be exactly uniform. <a href="https://www.kaggle.com/nesterov">mephistopheies</a> made some Analysis magic and gave a formula for such a normalization (it's better to ask him directly what he's done :) )</li>
<li>Our final submission was a blend of 10 models (with and without stages) and subsequent classes balancing.</li>
</ol>
<p>Repo of our solution:
<a href="https://github.com/cortwave/camera-model-identification">https://github.com/cortwave/camera-model-identification</a></p>
<p>TL; DR:</p>
<p>Did Work:</p>
<ul>
<li>Pseudolabels</li>
<li>External data and data cleaning</li>
<li>Different crop sizes </li>
<li>Picking the argmax of probabilities during TTA</li>
<li>Balancing helped in the Public LB, but now we see that is has been overfitting</li>
</ul>
<p>Did Not Work:</p>
<ul>
<li>Averaging model checkpoints</li>
<li>Training on denoised images</li>
<li>Non-standard loss functions like hinge loss</li>
<li>KNN on pseudolabel embeddings</li>
<li>GAN and Siamese architectures</li>
</ul>
<p>P.S. Late submission of the blend on top-3 models scored 0.989 Private LB. Unfortunately, we haven't tried this one due to the lack of submissions..</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
My solution: Single model with a few quirks.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all! I had a lot of fun participating in this competition and in retrospect I've learned a few lessons which I'll share. Although I didn't score as high as I intended (my personal goal was to get 0.98 single model, gold medal) I am happy of having helped a lot of teams, both with the code, ideas and dataset.</p>
<p>My latest code is @ <a href="https://github.com/antorsae/sp-society-camera-model-identification">Github repo</a> and you'll see that while I added a BUNCH of features, I didn't have time to even test them all (as most of us I'm limited by GPUs).</p>
<h1>Solution overview</h1>
<ul>
<li>Single model (no ensemble)</li>
<li>Inputs: 512x512 crop size, whether the image is manipulated or not,
and relative location of crop wrt canonical (i.e. vertically
oriented) sensor resolution (e.g. center is 0,0; top left -1,1,
bottom right 1,1). Random crops are displaced in multiples of 2 since most CFA arrays are 2x2, so the crops stay aligned.</li>
<li>Outputs: Class and type of manipulation performed, manipulation prediction is fed to main (class) FC head, with the expectation that knowing the manipulation will help the main classifier.</li>
<li>Feature extractor: DenseNet201 using avg pooling</li>
<li>Two fully connected layers 512, and 256; with 0.3 dropout each for main </li>
<li>Class-aware sampling</li>
<li>Test-time augmentation using 28 total augmentations (for <code>unalt</code> images; fewer for <code>manip</code> ones)</li>
<li>Loss function: x-entropy of class + 0.1 x-entropy of manipulation</li>
<li>Test probabilities equalization (instead of performing old-school
inference)</li>
<li>Used 2 x 1080 Tis and 1 x 1070 Ti</li>
<li>Flickr CC dataset + organization dataset (I wasn't 100% positive Gleb's dataset was clean re: organization rules, so I didn't risk it).</li>
</ul>
<h1>What worked</h1>
<ul>
<li>Class-aware sampling: it got slightly better results than weighted class weights.</li>
<li>Fine-tuning FC classes (both architecture and weights) freezing classifier.</li>
<li>Flickr CC dataset :-)</li>
<li>Equalizing test prediction probabilities: instead of just taking the predicted classes, use the probabilities to equalize the predicted class distribution over the test set.</li>
</ul>
<h1>What kind-of-worked</h1>
<ul>
<li>Feeding the relative crop location to the main FC head. The motivation was that without information about where the crop is, the net may not learn lens characteristics as easily (focal length, chromatic aberration, etc.). </li>
<li>Validation set: my <code>val_acc</code> oscillated 0.01 points during epochs and then I saw a detrimental shakeup on private LB. I had a poor validation set.</li>
</ul>
<h1>What didn't work</h1>
<ul>
<li>Mixup: I had a lot of faith in mixup, however it didn't improve training accuracy beyond 0.97. The reason is that I was feeding random crops taken at different locations with different manipulations, and the dual-output net was expected to classify the mixed-up class PLUS the mixed-up augmentation type. I knew it was far-fetched; I should have used the same manipulation type for a given mixup pair so the net would have focused much more on learning the main objective.</li>
<li>Experimental loss function to "force" a flat target distribution. I thought this was so cool: adding a regularization term to the loss function which is basically the mean of the variance of target predictions within a batch. I needed to fine-tune the weight of the regularization term but didn't have time to test it properly. I could do this b/c freezing the classifier allowed me to focus on the FC layers, which had ~10% of the net's expressive power, and to use big batch sizes (128+) so the variance calculation was meaningful. Anyway, not sure if it even makes sense.</li>
<li>Ensembles: At the end I had 6+ models with ~0.97ish LB. I only had >ONE< submission left and I coded a too far-fetched ensemble which attempted to reject outliers (softprobs below a certain threshold) while also equalizing the target distribution. I must have introduced a bug b/c the 6-model ensemble only got 0.95 LB... so I didn't use it, and I didn't have any extra submissions left.</li>
</ul>
<h1>Lessons learnt</h1>
<ul>
<li>Creating a good validation set is critical.</li>
<li>Adding extra samples works.</li>
<li>Should have used faux-labels</li>
<li>Don't get too excited if you can add more features that the resources/time you have to test and compare them.</li>
<li>Yes, I know, ensembles rule the LB. ;-)</li>
<li>Pytorch is faster than Keras. </li>
</ul>
<p>Best and congratulations to all teams, it's been a great learning experience!</p>
<p>-Andres</p>
<p>Thanks. Not sad at all about sharing the code, as you say everybody wins.</p>
<p>I'm too positively surprised that an almost vanilla NN can distinguish camera sensors even after manipulations (and my net could identify <em>which</em> manipulation was performed with 0.97+ validation accuracy) ... it kind of blows your mind... :-)</p>
<p>Update: I submit all my CSVs to see what model was really the best... it turns out:</p>
<p><code>submission_DenseNet201_cs512_fc512,512,512_doc0.0_do0.5_dol0.3_avg_xx_cas-epoch057-val_acc0.978009.csv</code></p>
<p>It's a single-model, 512crop DenseNet201 w/ 2 heads of 3 FC connected layers, class-aware sampling and Flickr CC dataset... but importantly <em>no TTA</em> and <em>no test probability distribution equalization</em>. </p>
<p>This model gets 0.977 in the private LB and 0.966 in the public one... so I think my biggest mistake was to have a poor validation set.</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
Optimization or Magic features
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Kaggle for organizing this challenge, a big thanks to everyone, especially Andres Torrubia, and of course many thanks to my awesome teammates. In this thread I would like to share one idea/improvement that was not mentioned by the top teams.</p>
<p>The idea is to apply a post-processing step to balance number of images for each camera type. The input is the actual probabilities, the number of images for each group and the output is the final prediction. The optimal objective is the sum of all (probability * assignment), where assignment can be zero or one.</p>
<p>What is the number of images for each camera? We discovered that the proportions of the different camera types are almost equal. Even more, the numbers of unalt images and manip images are the same as well. And we also discovered that the public and private test sets are split by file timestamp.</p>
<p>The optimization code is quite simple and it runs within a second, you can have a look at one of our first version at <a href="https://www.kaggle.com/golubev/simple-example-mcf-balance-improve-0-976-to-0-984">https://www.kaggle.com/golubev/simple-example-mcf-balance-improve-0-976-to-0-984</a></p>
<p>@Andres. I probed the LB quite a lot during early phase of the competition to discover the distribution of images as I already knew that it was possible to optimize the predictions. Thanks to your code we had a pretty good result and we decided to probe the LB in the other way to get submissions with zero accuracy.</p>
<p>On the subject of the LB split, I sorted the test files by timestamp and made an accumulate count for both manip and unalt images. The plot already shows it clearly. <img src="https://image.ibb.co/b0G73x/unalt_manip.png" alt="Number of manip/unalt images by timestamp "> </p>
<p>We even verified that by giving the best predictions to the first 36% images and the wrong predictions to the rest. And the public score for this new submission did not change.</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
Our approach - 47th place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello, we are sharing our approach for this competition. Getting a silver medal in this competition would not have been possible without the ideas and code from @andreas and @Ivan Romanov, and the additional data from @Gleb. A big thanks to them and my teammates.</p>
<p>We obtained this score with our last-day submission (public LB 0.962, private 0.969) because we could not run our code fast enough (Keras, Theano backend) due to limited resources. We decided to switch to PyTorch on the second-to-last day of this competition and were able to train a DenseNet-type network in less than 8 hours (single GPU; Titan X).</p>
<p>Approach:</p>
<ol>
<li>We used roughly 7000 images for training, removed bad images (low resolution images, low light images, non unique images).</li>
<li>Trained three networks: one DenseNet201 with 448 crop size, and two DenseNet201 with 512 image size but modified FC layers.</li>
<li>For 448 crop size, we did test time augmentation (random crop and flip). This model alone gave 0.968 on private leaderboard. For others no TTA.</li>
<li>Final results were obtained from max voting of all the three models.</li>
</ol>
<p>My take:</p>
<ul>
<li>Pytorch is relatively faster than keras (theano backend at least). I was hesitant to use it but given the speed, I would like to invest some time learning it. </li>
</ul>
<p>It would be great if others could share their ideas and help everyone benefit from them.</p>
<p>We trained roughly 120-140 epochs for each model with a batch size of 8. Validation loss was 0.04 and score was between 0.97-0.98</p>
|
IEEE's Signal Processing Society - Camera Model Identification
|
gold solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE's Signal Processing Society - Camera Model Identification
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>64</p>
<p>For some reason, random patches didn't yield any accuracy improvement for me, so I went for slicing instead. It also prevents overlapping, streamlining the training process.</p>
<p>Each test image is sliced into 4 256 x 256 images and augmented into 12 (vertical + horizontal flip). A model outputs raw probabilities for each image slice, which are then averaged together with those from other models. Then, the class for each slice is determined with np.argmax, and the mode is taken as the final prediction.
The validation set is just like the training set - with manipulations - just with fewer images. The original images are randomly assigned to training and validation sets so that validation crops do not come from the same image as training crops.</p>
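<p>A minimal sketch of this slicing + flip TTA + model averaging + mode voting, assuming 512x512 test images in NumPy HWC layout and Keras-style models exposing <code>predict()</code> (all names here are illustrative, not the author's code):</p>

```python
import numpy as np

def predict_image(models, img, patch=256):
    """Slice an image into non-overlapping patches, add flipped copies,
    average class probabilities over models, then majority-vote the per-view argmax."""
    h, w = img.shape[:2]
    patches = [img[i:i + patch, j:j + patch]
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch)]
    views = []
    for p in patches:
        views += [p, p[:, ::-1], p[::-1, :]]        # original + horizontal + vertical flip
    x = np.stack(views).astype(np.float32)
    probs = np.mean([m.predict(x) for m in models], axis=0)
    per_view_class = probs.argmax(axis=1)
    return np.bincount(per_view_class).argmax()     # mode over all views
```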
<p>My resizing was more like slicing an image into 256 x 256 patches, as opposed to merely using a center crop; this significantly increases the training data. I also initialized models from keras applications with imagenet weights, so that could have made a difference.</p>
<p>First, congratulations to all the winners and those that did well. Thanks to Kaggle, IEEE, and everyone who shared their ideas. This is my first gold medal.</p>
<p>Convolutional neural networks were able to learn meaningful relationships, which is quite interesting. As a result of this competition, it is evident that deep learning is a powerful tool, generalizable to many tasks. The final model ensemble obtains a private LB score of 0.986 (Public: 0.983), using ImageNet models ResNet50, DenseNet121, DenseNet169, Xception, and InceptionV3.</p>
<h2>Data Preparation:</h2>
<ul>
<li><p>Slice each image into 256 x 256 non-overlapping patches and discard any images with only one unique pixel, since they don't help with camera identification; Gleb's additional dataset was used. Each patch retains the label of the original image. My data augmentation included the eight possible manipulations. </p>
<ol><li>JPEG compression with quality factor = 70</li>
<li>JPEG compression with quality factor = 90</li>
<li>resizing (via bicubic interpolation) by a factor of 0.5</li>
<li>resizing (via bicubic interpolation) by a factor of 0.8</li>
<li>resizing (via bicubic interpolation) by a factor of 1.5</li>
<li>resizing (via bicubic interpolation) by a factor of 2.0</li>
<li>gamma correction using gamma = 0.8</li>
<li>gamma correction using gamma = 1.2</li></ol></li>
<li><p>Trained on unaltered patches and set each manipulation probability to 5%. No rotation or flipping was used, though in retrospect, those could have improved my score slightly.</p></li>
<li>Validation patches are not from training images to ensure no possibility of data leakage.</li>
</ul>
<h2>Models:</h2>
<ul>
<li>ResNet50, DenseNet121, DenseNet169, Xception, and InceptionV3, all pretrained on ImageNet.</li>
<li>All models use Global Average Pooling after the final convolution followed by a softmax layer. I experimented with fully-connected layers with dropout and regularization, but they didn't seem to improve the score.</li>
</ul>
<h2>Training:</h2>
<ul>
<li>Adam optimizer, learning rate: 0.0001 </li>
<li>Each model achieves between 0.955 and 0.975 validation accuracy after the 2nd epoch; in most cases, the 3rd epoch actually lowered accuracy. I tried retraining with a learning rate of 0.00005 for another epoch, and managed to boost validation accuracies of most models to 0.975. </li>
</ul>
<h2>Prediction:</h2>
<ul>
<li>Each test image is sliced into 4 256 x 256 images and augmented into 12 (vertical + horizontal flip). A model outputs raw probabilities for each image slice, which are then averaged together with those from other models. Then, the class for each slice is determined with np.argmax, and the mode is taken as the final prediction. The validation set is just like the training set - with manipulations - just with fewer images. The original images are randomly assigned to training and validation sets so that validation crops do not come from the same image as training crops.</li>
</ul>
<p>Adding some checkpoint models helped raise private LB score from 0.983 to 0.986. The additional models helped with corner cases. </p>
<p>The results could have been improved by adding more models like DenseNet201, NASNet, etc., and training for more epochs. But I guess at this point its for future competitions.</p>
|
IEEE-CIS Fraud Detection
|
10th solution and code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks my teammates for their efforts. </p>
<h3><strong>Feature Engineering</strong></h3>
<p>Find the openCardDay (day - D1) by EDA (card = card_cols + addr_cols + email_cols, day = TransactionDT // 86400):
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F666a266df4f34904073bf5f25b9432a0%2Feda.png?generation=1571312507671223&alt=media" alt="">
The distribution of day - D1 differs between isFraud=0 and isFraud=1.</p>
<ul>
<li><p>Encoding different uids with other columns---useful for old uids</p></li>

<li><p>Counting the number of transactions and the types of products per uid over a period of time---useful for new uids</p></li>
</ul>
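<p>A minimal pandas sketch of building such a uid and a few of the aggregations listed above (the exact card/addr/email columns and aggregation choices are assumptions, not the author's code):</p>

```python
import pandas as pd

# Hypothetical column choices; the exact set the author used may differ.
ID_COLS = ['card1', 'card2', 'card3', 'card4', 'card5', 'card6',
           'addr1', 'addr2', 'P_emaildomain']

def add_uid_features(df):
    df = df.copy()
    df['day'] = df['TransactionDT'] // 86400
    df['open_card_day'] = df['day'] - df['D1']                 # the "day - D1" reference date
    df['uid'] = df[ID_COLS + ['open_card_day']].astype(str).agg('_'.join, axis=1)
    # Aggregation features over the reconstructed uid
    df['uid_tx_count'] = df.groupby('uid')['TransactionID'].transform('count')
    df['uid_product_nunique'] = df.groupby('uid')['ProductCD'].transform('nunique')
    df['uid_amt_mean'] = df.groupby('uid')['TransactionAmt'].transform('mean')
    return df
```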
<h3><strong>How to infer there is no shake up in this competition</strong></h3>
<p>Although the number of old uids in the test dataset decreased with time, the number of old uids with isFraud=1 remained largely unchanged.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F3c00d5b0df96223745627793161f13d1%2FScreenshot%20from%202019-10-17%2019-47-06.png?generation=1571312848792088&alt=media" alt=""></p>
<h3><strong>How to prevent overfitting</strong></h3>
<p>Because the major improvement came from old-uid encoding, more types of uid encoding meant a higher local CV. But not every uid in the train dataset also appears in the test dataset. The trick to prevent overfitting is to set any uid that appears only in train with count < k to NaN. Then you can add more types of uids and encodings; a minimal sketch of this filtering is shown below.</p>
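<p>A small sketch of that filtering step (the threshold <code>k</code> and the column name are illustrative):</p>

```python
import numpy as np
import pandas as pd

def drop_rare_train_only_uids(train, test, uid_col='uid', k=3):
    """Set uids that appear only in train and fewer than k times overall to NaN,
    so encodings built on them cannot overfit."""
    counts = pd.concat([train[uid_col], test[uid_col]]).value_counts()
    train_only = ~train[uid_col].isin(set(test[uid_col]))
    rare = train[uid_col].map(counts) < k
    train.loc[train_only & rare, uid_col] = np.nan
    return train
```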
<p>My three models:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1098915%2F8f40cc584044d9e56a8105d24589154f%2FScreenshot%20from%202019-10-17%2019-52-18.png?generation=1571313166608110&alt=media" alt=""></p>
<p>submission:<a href="https://www.kaggle.com/daishu/10th-submission-part-daishu">https://www.kaggle.com/daishu/10th-submission-part-daishu</a>
code:<a href="https://github.com/jxzly/Kaggle-IEEE-CIS-Fraud-Detection-2019">https://github.com/jxzly/Kaggle-IEEE-CIS-Fraud-Detection-2019</a></p>
|
IEEE-CIS Fraud Detection
|
11th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I trained models on randomly selected percentages of all features. For example, if we have 100 features, we can train many different subsets containing 20 features (20 percent): one model with features 1 to 20, another with features 2 to 21, and so on. These are all different feature sets, and if we have a lot of features (say hundreds or thousands), this can be used to feed the information contained in those features into another model through a couple of oofs. It also adds diversity to stacking.</p>
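<p>A tiny sketch of drawing such random feature subsets (the subset count and fraction are illustrative):</p>

```python
import numpy as np

def sample_feature_subsets(all_features, n_subsets=50, frac=0.2, seed=42):
    """Draw random feature subsets; one L1 model is trained per subset and its
    out-of-fold predictions join the stacking pool."""
    rng = np.random.default_rng(seed)
    k = max(1, int(len(all_features) * frac))
    return [sorted(rng.choice(all_features, size=k, replace=False)) for _ in range(n_subsets)]
```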
<p>Congrats to all the top teams, especially the people in the gold zone. This has been a tough competition for us (at least for me 😎).
Our solution was based on stacking. I personally started this competition by stacking about 20 models with Roman's KFold(k=5, shuffle=False) validation scheme (LB~0.9419),
and then continued by adding oofs from:</p>
<ul>
<li>most of the public kernel models,</li>
<li>LGBM based on data created using categorical features + PCA on numerical features,</li>
<li>XGB trained trained on different encodings (target, catboost, weight of evidence, ... encodings) of categorical features + numerical features,</li>
<li>Lasso trained on different subsets of numerical features (mean and median imputations),</li>
<li>Ridge trained on different subsets of numerical features (mean and median imputations),</li>
<li>LGBM/Ridge trained on different subsets of count features (each feature count encoded),</li>
<li>LGBM/Ridge trained on count features (each feature count encoded) + onehot encoding of counts,</li>
<li>LGBM/Ridge trained on count features (each feature count encoded) + onehot encoding of counts (dimension reduced using truncated SVD),</li>
<li>Gaussian Naive Bayes trained on different subsets of count features (each feature count encoded),</li>
<li>Multinomial Naive Bayes trained on a subset of count features,</li>
<li>Univariate Logistic Regression trained on a subset of Vs picked using Spearman correlation between them and the target,</li>
<li>LGBM/XGB trained on categorical features + some numerical features (picked using the Kolmogorov-Smirnov test),</li>
<li>a couple of NFFM, FFM models trained on raw categorical features + numerical features using <a href="https://github.com/guoday/ctrNet-tool">this</a>,</li>
<li>LibFM trained on a subset of categorical features using <a href="https://github.com/jfpuget/LibFM_in_Keras">this code</a>,</li>
<li>CatBoost trained on different subsets of categorical + numerical features (all features treated as categorical and fed to CatBoost encoding API),</li>
<li>NN trained on raw data (embedding of categorical data + different imputation methods for the numerical data) with different architectures, </li>
<li>LGBM/XGB trained on a subset of 2d, 3d, 4d interactions of categorical features,</li>
<li>Extra Tree classifier trained on different subsets of PCAed data set (numerical data),</li>
<li>and KNN trained on a small subset of numerical features. </li>
</ul>
<p>After this, I ended up with about 900+ oofs. By selecting about 120 oofs out of these oofs using RFE (Recursive Feature Elimination) with LGBM as the classifier, I was able to reach 0.9552 on LB. After that I teamed up with my current teammates where we pooled our features and oofs. ynktk had an XGB trained on about 2000 features that scored 0.9528 and he had created lots of oofs with the same validation method as mine as he worked on his single model. ynktk had been using KFold + brute force feature engineering (various aggregations of the data) from the start and my other teammates were mostly doing FE based on validation using GroupKFold on months so in terms of diversity, it was very helpful. Luckily after a day or two we reached 0.9579. </p>
<p>I started to search for the m*gic (😎) after reading Konstantin's post about <code>making them do friendship</code> and his use of CatBoost (to me, his magic was most probably some form of an interaction). First I focused on the interactions of the categorical features, especially card1-card6 and the addresses, then I added Cs and Ds to my interactions. After a lot of experiments with CatBoost I found the feature <code>card1 + date.day - D1</code> (treated as categorical) and shared it with my teammates (later Youngsoo Lee found out that <code>card1 + card6 + addr1 + addr2 + date.day - D1</code> was a better option). After focusing on this feature and adding new oofs based on it, we reached AUC=0.9649.</p>
<p>Our best stacked model was a 5-times-bagged XGB (it was a bit better than LGBM) trained on about 190 oofs (selected from a pool of about 1100+ oofs) that scored 0.9650 on the LB with CV=0.9644. </p>
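<p>A minimal sketch of the RFE-based oof selection mentioned earlier in this write-up (<code>oof_df</code>, <code>y</code> and the estimator parameters are assumptions):</p>

```python
import lightgbm as lgb
from sklearn.feature_selection import RFE

def select_oofs(oof_df, y, n_keep=190):
    """Recursively eliminate the least useful out-of-fold prediction columns,
    using an LGBM classifier's feature importances."""
    selector = RFE(estimator=lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05),
                   n_features_to_select=n_keep, step=10)
    selector.fit(oof_df, y)
    return list(oof_df.columns[selector.support_])   # the oofs kept for the final stacker
```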
<p>Thanks Fatih, also congrats to you for becoming a GM. well deserved!</p>
<p>Hey Corey, <code>stack until we bleed!</code> 😈 </p>
<p>If you are new to Kaggle, you can create your own simple stacking pipeline. It is a very good exercise to get familiar with stacking.</p>
<p>I used my own code for stacking with bagging in Python 3, available <a href="https://github.com/Cirice/4th-place-Home-Credit-Default-Risk/tree/master/stackers-examples">here</a>. You can also use <a href="https://github.com/kaz-Anova/StackNet">StackNet</a>.</p>
<p>Honestly, I have not checked whether they increased it or not, but I had them in my oof pool and let my CV + RFE + a specific model (I tried LGBM, Lasso, Ridge and NN, each with its own selected set of features, where XGB was the best scoring) decide whether to keep or drop those features. </p>
<ul>
<li><code>On what basis did you select only these specific models?</code> Mostly experience from past competitions. If you spend some time on a couple of Kaggle comps, you will get a decent sense of what would work and what would not in a specific problem.
<ul><li><code>There are some which I am completely unaware of and would you like to publish your code?</code> My code base is a bit messy ;) so it would take a bit of time for me to tidy it up and publish it on github.
BTW here are a few resources I highly recommend for learning DS through Kaggle competitions: </li></ul></li>
<li><a href="https://www.slideshare.net/DariusBaruauskas/tips-and-tricks-to-win-kaggle-data-science-competitions">https://www.slideshare.net/DariusBaruauskas/tips-and-tricks-to-win-kaggle-data-science-competitions</a> (This gives you a really good and concise description on how to compete at Kaggle) </li>
<li><a href="https://www.coursera.org/learn/competitive-data-science">https://www.coursera.org/learn/competitive-data-science</a> (I think this course is one of the best stuff there is for learning DS, it is based on real world problems people solved at Kaggle and it is taught by top Kagglers)</li>
</ul>
<p>Thanks. I usually try logistic regression as one of my stacking models, but here, when I tried to use logistic regression with RFE for stacking feature (oof) selection, I was not able to get it to run in a reasonable amount of time (the implementation in scikit-learn was too slow even for a couple of hundred oofs). For the sake of comparison between linear and non-linear models:</p>
<ul>
<li>my LGBM + RFE took about 5 hours to run for selecting the final 190 oofs; then I trained XGB on those 190 oofs;</li>
<li>I also had a ridge regression stacker that could be run with 1100+ oofs and RFE in about one hour; it had a CV of 0.9588 (40 oofs) and LB of 0.9615;</li>
<li>a lasso stacker without RFE (bagged with different values of alpha) used about 40-65 oofs, CV=0.9556, LB=0.9604.</li>
</ul>
|
IEEE-CIS Fraud Detection
|
13th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. As for the procedure, please see my response to <a href="/cdeotte">@cdeotte</a>.</p>
<p><a href="https://github.com/RGF-team/rgf">RGF</a> is a bit slow but it is often good for ensemble.</p>
<p>Thank you for all participants, Kaggle Admin, IEEE-CIS and Vesta corporation who realized this exciting competition. I would like to briefly share my 13th solution, which gave me my first solo gold medal. </p>
<h2>1. Validation Scheme</h2>
<p>First I used 5-fold CV without shuffling. Then, in the later stage of the competition, I switched to a simple time split and used roughly the last 100k records as the validation set. This is because 5-fold CV took too much time; in addition, some of my important features were generated based on a time split, so 5-fold CV might have caused leakage for them. </p>
<h2>2. Reconstructing user_id</h2>
<p>When I closely examined df[df.isFraud==1], I found that there exist many clusters of very similar records. Because I had experienced a similar case in the Home Credit Default Risk competition, I could quickly understand what it meant and what I should do. I generated a wide variety of user_ids from the D features and card information with a brute-force approach. They covered a wide spectrum, from strict and deterministic ones to slightly ambiguous ones that are robust to changes in physical address/e-mail address. This combination boosted my score greatly. The strict versions of the user_ids alone gave me ~0.9620 on the public LB, but both of them together resulted in >0.9640. </p>
<h2>3. Feature Engineerings</h2>
<p>I conducted various feature engineering using the user_ids. Out of the many generated features, Seq2Dec and TargetEncoding on past records showed significant improvement in both CV and LB. I think Seq2Dec is a good method for encoding individual transactions because it prevents the loss of order information and can handle series of variable length. In addition, I added typical features such as count, share, length of the series for the user_id, since_first_obs_user_id, to_last_obs_user_id, etc.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F570506%2F5b565b73c57cc0195047f06e30aac7de%2Fseq2dec.png?generation=1570335898830916&alt=media" alt="">
Fig. Schematic illustration of Seq2Dec</p>
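<p>One plausible reading of the Seq2Dec idea sketched in the figure above is to concatenate an ordered sequence of values as digits behind a decimal point; the snippet below only illustrates that reading and is not the author's implementation:</p>

```python
def seq2dec(seq, max_len=10):
    """Encode an ordered, variable-length sequence of small integers (e.g. a user's
    past transaction flags) as one decimal fraction: [1, 0, 2] -> 0.102."""
    digits = ''.join(str(int(v)) for v in list(seq)[:max_len])
    return float('0.' + digits) if digits else float('nan')
```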
<h2>4. Modeling and Blending</h2>
<p>I mainly used LightGBM and added Regularized Greedy Forest as small portion of flavor. I added model diversity by seed averaging and bagging models with different n_folds for Seq2Dec/TargetEncoding.</p>
<h2>5. Result</h2>
<p>13th out of 6381 teams, Public: 0.965086, Private: 0.941338</p>
<p>Thank you!</p>
<p>Thank you for your explanation!</p>
<p>Thank you and congrats on your 29th place.</p>
<p>And yes, Seq2Dec is interesting for me, too. I learned this method from <a href="/onodera">@onodera</a> when I teamed up with him in HomeCredit competition. Good teaming is a good teacher. I believe you got something valuable in this competition, too.</p>
<p>Thank you and Congrats on your 2nd place! I was a member of Sashimi Power in Malware. I learned a lot from you and so could get gold this time. Learning is always more than everything.</p>
<p>Thank you!</p>
<p>Thank you, Chris and congrats your 1st place in this tough competition! I'm always surprised at your findings and solutions.</p>
<p>I split the training data into n time-ordered folds. For the kth fold, Seq2Dec and TargetEncoding were computed using only the records from the 0th fold to the (k-1)th fold. As n_folds I used [5, 7, 13, 17, 19] and then blended these models (here, I intentionally selected prime numbers to maximise model diversity). For the test data, all training data were used for the encodings.</p>
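<p>A minimal sketch of this leak-free, time-split encoding scheme, shown here for target encoding on a generic key (the column names and fold construction are assumptions):</p>

```python
import numpy as np
import pandas as pd

def time_split_target_encode(df, key, target='isFraud', n_folds=5):
    """Encode `key` for time-ordered fold k using only folds 0..k-1, so no
    future target information leaks into the encoding."""
    df = df.sort_values('TransactionDT')
    fold_id = pd.Series(pd.qcut(np.arange(len(df)), n_folds, labels=False), index=df.index)
    enc = pd.Series(np.nan, index=df.index)          # aligned with the (sorted) index
    for k in range(1, n_folds):
        past_means = df[fold_id < k].groupby(key)[target].mean()
        mask = fold_id == k
        enc.loc[mask] = df.loc[mask, key].map(past_means)
    return enc
```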
|
IEEE-CIS Fraud Detection
|
15th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="/cpmpml">@cpmpml</a> ! I can't wait to team up again :) I actually did not spend time on the ways apart from hardcoding. Just for the userid formed by <strong>['card1','card2','card3', 'card4','card5','card6','ReferenceDate','DeviceInfo']</strong>, I added previosisfraud feature in model. Since this key was covering more transactions rows in test, it has much lower null ratio than the key I mentioned in the post above. Adding this previousisfraud feature into model gave me more boost rather than using it in post_processing. Boost was around 0.002 for a single model.</p>
<p>It looks you have a robust method indeed. However, I guess you only worked for transactions with same transactionamt. Am I right?</p>
<p>The key I mentioned above was the most accurate key. But it was giving limited shifted info for test set because the key has so many features in it. I also found same user with different transactionamts. </p>
<p>Look at this:
<code>key_new_cols = ['card1','card2','card3',
'card4','card5','card6','ReferenceDate','DeviceInfo']
</code></p>
<p>For me this one gives the most generalized user id. It has 0.90 target mean for previousisfraud == 1. Quite lower than 0.99 mean but since this key gave me much more shifted isFraud info than the key mentioned above, this new previous_isfraud gave me more boost.</p>
<p>Here is something different that I did not see in others' solutions:
<code>X_train['new_ReferenceDate'] = X_train['DayNumber'] - X_train['D15']</code>
Actually there is another reference date for transactions, and it can be calculated using D15.</p>
<p>Creating another previous_isfraud by using ['card1','card2','card3','card4','card5','card6','new_ReferenceDate','DeviceInfo'] gave me another boost too.</p>
<p>Here is one more thing I forgot to mention: after applying .shift(1) within the train set, you also need to propagate the value to the other test rows with the same userid ;)
I mean this:
<code>test.groupby(['userid'])['previous_isfraud'].transform('first')</code></p>
<p>With this final code, I was almost fully covering all the info coming from train :)</p>
<p>Thank you so much and congrats on your great finish as well! I did give CatBoost a try with the same feature set. I also tried assigning categorical_features, but somehow, for the same validation scheme, it scored way lower than my LightGBM. I have quite little experience with it, to be honest; I should obviously invest some time in it :)</p>
<p>Thank you!
Let me try to explain post processing. Let's assume that you run your model and get raw predictions for test set. Let's assign them first.
<code>test['Predictions'] = raw_predictions</code></p>
<p>Now let's create a 'userid':</p>
<pre><code>train['ReferenceDate'] = train.DayNumber - train.D1
test['ReferenceDate'] = test.DayNumber - test.D1

userid_cols = ['TransactionAmt','ProductCD','card1','card2','card3', 'card4','card5','card6','addr1','addr2','ReferenceDate','P_emaildomain']

train['userid'] = train[userid_cols].apply(tuple,1)
test['userid'] = test[userid_cols].apply(tuple,1)
</code></pre>
<p>Now let's create the 'previous_isfraud' feature I mentioned above.</p>
<pre><code>train['is_train'] = 1
test['is_train'] = 0
temp_g = pd.concat([train,test],axis=0)

temp_g['previous_isfraud'] = temp_g.groupby(['userid'])['isFraud'].shift(1)

train = temp_g[temp_g.is_train==1]
test = temp_g[temp_g.is_train==0]
</code></pre>
<p>Now you can postprocess your test predictions with the following code:</p>
<pre><code>filter_ = test['previous_isfraud']==1
test.loc[filter_,'Predictions']=test.loc[filter_,'Predictions']*10

filter_ = test['previous_isfraud']==0
test.loc[filter_,'Predictions']=test.loc[filter_,'Predictions']*0.5
</code></pre>
<p>You should find the constant you used for multiplication based on your validation set of course.</p>
<hr>
<p>Why don't we use previous_isfraud as a feature instead of only for post-processing? Because the test set has a much higher null ratio for it than the train set. Since this is an enormously important feature, the model would base most of its predictions on it, and that would most probably lower test performance given the very high null ratio in test. So I used it simply through post-processing.</p>
<p>I needed diversity indeed.. Thank you so much! It was great to compete with you (on public LB at least:d). </p>
<p>Merci beaucoup!</p>
<p>As a first thing, I want to thank both Kaggle and IEEE for letting us to compete in this challenge. It was a quite exhausting but also educative process for me.</p>
<p>I've already read all of the top solutions and I know that I don't have super different tricks from theirs. I just wanted to tell how my process shaped during the competition for those who are wondering.</p>
<p>Let's start.</p>
<p>My final submission has following main points:
- Generated my predictions by using unshuffled KFold. Since the test set was time split and I was using unshuffled KFold, I mimicked all these for my local validation scheme. I separated last 20% as hold-out and got predictions for it by setting another unshuffled KFold with the first 80% of the train data. Tested all operations like feature generation, parameter tuning, post processing etc. with this strategy.
- Only used LightGBM. I was neither able to generate sufficiently good CatBoost predictions nor NN predictions. Final sub is 4-5 different lightgbms' weighted blend.
- I couldn't perform any feature selection because all the methods I tried worsened my CV.
- Found different userid's based on different feature combinations. And tested their quality by calculating target means after shifting isFraud by 1 within train data.
- Example:
</p>
<pre><code>train['ReferenceDate'] = train.DayNumber - train.D1

key_cols = ['TransactionAmt','ProductCD','card1','card2','card3', 'card4','card5','card6','addr1','addr2','ReferenceDate','P_emaildomain']

train['key'] = train[key_cols].apply(tuple,1)

train['previous_isfraud'] = train.groupby(['key'])['isFraud'].shift(1)

print(train.groupby('previous_isfraud')['isFraud'].mean())
# previous_isfraud
# 0.0    0.000528
# 1.0    0.991961
</code></pre>
<ul>
<li>Shifting isFraud info from train to test rows for the same userids and applying post-processing was the main boost, of course. Attention: the post-processing did not directly shift isFraud info from train to test. It multiplied the predictions by a constant factor which I found using the validation dataset. Since I had multiple different userids, I also had multiple post-processing steps.</li>
<li>Creating many aggregated features based on userids.</li>
<li>Assigning the mean of the predictions to all of a userid's transactions. This gives a significant boost indeed. (Applied only to userids with high accuracy.)</li>
<li>For blending, I found that LGBM with the 'dart' boosting method was quite useful.</li>
<li>Applied final blending on <strong>only new users</strong> in the test. </li>
</ul>
<p><strong>What were my milestones?</strong>
- Analyzed the dataset in Excel by sorting TransactionAmt and card info, and saw the high correlation of the shifted isFraud with the real isFraud in the sorted data. (I find it quite useful to look at data in Excel most of the time.) </p>
<ul>
<li><p>Realizing the real meaning of D1 and D3, then creating the 'ReferenceDate' feature by simply calculating data['DayNumber'] - data['D1'].</p></li>

<li><p>Starting to create userids from different feature combinations and using the shifted isFraud info for post-processing of the final predictions.</p></li>

<li><p>Adding test rows with shifted isFraud info coming from the highest-accuracy userids as pseudo-labeled data during training.</p></li>
<li><p>Creating aggregation features based on userids.</p></li>
<li><p>Realizing that our models were actually learning by overfitting. Isn't it weird that num_leaves >= 200 was performing quite well? It was because the models were actually finding the same users in the test set. So we had a real problem with new users who only exist in the test set. You can also check this for your models. Take the last 20% of the train data as a hold-out and make predictions. If it has around 0.940 AUC, also check the AUC for users that also exist in train and for users that only exist in the hold-out. You'll see that old users have an AUC around 0.977 and new users have an AUC around 0.900.</p></li>
<li><p>In the last week I also focused on improving new users in the test set, because the private LB has many more new users than the public LB. I found out that some of the features I generated were improving old users' predictions but lowering new users' predictions. Also, for the same model, increasing regularization and lowering num_leaves led to better new-user predictions. So I reran my main model for new users with those features dropped and new model parameters, and also moved from unshuffled KFold to GroupKFold grouped by userid.</p></li>

<li><p>Finally, I used the new model's predictions to blend into my best submission, but only for new users; I did not touch old users' predictions. I think this process gave me a boost of around 0.001. I wish I had worked on modeling with the two types of users in mind from the very beginning.</p></li>
</ul>
<p>P.S.
As a solo competitor, staying silent for a long time to keep a competitive advantage and to avoid private sharing was hard indeed. I like to talk immediately when I find something :d.
Also I know that the forum and kernels are actually quite beneficial, however; when you found something before others you just pray everyday not to see another post saying 'there is something in V columns.. How to find true users..' etc. etc. 😆 😆 </p>
<p>Have a nice weekend and
See you in the next competition!</p>
<p>Thanks Chris!</p>
<p>1- Since I did not run GroupKFold with month, I can not do the comparison at the moment. GroupKFold based on uid helped me watch the behavior of only different users in a better way. If you try, you'll see that the CV mean will also be around 0.900 ;).
2- I think there is a misunderstanding about my previous feature as well. I did not lag previous predictions but only 'isFraud' itself. I also did what you said, by the way: lagged the predictions and ran another model with them, but it always worsened the score.
3- The userid I showed above is just an example. I have 5 different userids (or cardids, whatever) in total, and as you realized this one is the most restrictive one. My main boosts came from the other ids.
4- Yes, I tried card1 + addr1 + D1n as well, but since it has a low target mean for previous_isfraud==1, it was useless during post-processing. But I used this combination during aggregated feature engineering.</p>
<p>P.S. If you missed, you can give a look at the comments I replied to <a href="/tpthegreat">@tpthegreat</a> and <a href="/cpmpml">@cpmpml</a> below . There are other details which you may find useful.</p>
|
IEEE-CIS Fraud Detection
|
17th place solution summary - there is nothing special
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Thank you Chris( <a href="/cdeotte">@cdeotte</a> ), and congratulations on the first place!
I've been your big fan since instant gratification competition.
I am happy with your kind words!</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>First of all, I would like to express my sincere gratitude to Kaggle, IEEE-CIS and Vesta for hosting the competition.
This competition was really amazing and fun and I learned a lot.</p>
<h2>Retrospection</h2>
<p>When I submitted my last submission on the last day of the competition, I saw that my public ranking was 32nd place, and I fell asleep hoping that the shake-up wouldn't be too bad, because 32nd place was already good enough for me.
When I woke up, I couldn't believe that my final ranking had gone up 15 places and I had won a gold medal in 17th place.
In fact, I had a nightmare of barely winning a bronze medal because of the shake-up. 😂 </p>
<p>I have two main goals when joining this competition.
One is to be a competition expert by winning a bronze medal or more.
The other was that I played as a team in the previous instant gratification competition, so this time I wanted to experience the competition alone, end-to-end.
It was enough to achieve these goals and I never expected winning a solo gold medal.</p>
<p>I know how hard it is to win a gold medal through the past few competitions, and I think a lot of other teams are better than me, so I doubt if I deserve a gold medal. All I can think is that I was really lucky in this competition.</p>
<p>As you can see the summary and code I shared below, what I did was just a good use of reading the public kernels and the discussions.</p>
<h2>Solution Summary</h2>
<h3>Validation Scheme</h3>
<p>I thought setting up a reliable validation scheme was the most crucial part. Many people used group k-fold with month as the group, or k-fold without shuffle, but I felt uncomfortable about predicting the past with future data.</p>
<p>Instead, reading various kernels and discussions, my strategy was to get an idea from Chris( <a href="/cdeotte">@cdeotte</a> )'s discussion( <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108571#625045">https://www.kaggle.com/c/ieee-fraud-detection/discussion/108571#625045</a> ).
For the 5 folds, I used the last 20% of the train data as a hold-out, and measured the average AUC (as the validation score) and the average number of iterations across 3 random seeds, to be used when predicting the test data. When predicting the test data, I used stratified 5-fold without early stopping.</p>
<p>After this validation scheme was established, the validation score and the public lb trend were almost perfectly correlated. I think this is the part where I did the best in this competition.</p>
<p>Below is the validation score, public lb, private lb trend after my validation scheme is set up.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F940278%2Fa35a75a05b442d2cdc6faf1291459c1d%2Fieee_val_score.png?generation=1570513675066873&alt=media" alt=""></p>
<h3>Pipeline</h3>
<p>After establishing the validation scheme, I followed Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s discussion( <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/107697">https://www.kaggle.com/c/ieee-fraud-detection/discussion/107697</a> ) and created a pipeline like this:
1. Feature engineering based on discussions, public kernels, or features I thought myself
2. Evaluate the model performance using validation score
3. For adversarial validation, filter out features using Roman( <a href="/nroman">@nroman</a> )'s covariate shift
4. Re-evaluate the model performance and get final validation score
5. Predict test data and submit</p>
<h3>Preprocessing</h3>
<ul>
<li>Memory reduction : <a href="https://www.kaggle.com/alexeykupershtokh/safe-memory-reduction">https://www.kaggle.com/alexeykupershtokh/safe-memory-reduction</a></li>
<li>Filling C nans with 0 in test</li>
<li>Filling card nans : <a href="https://www.kaggle.com/grazder/filling-card-nans">https://www.kaggle.com/grazder/filling-card-nans</a></li>
</ul>
<h3>Modeling</h3>
<p>I used two models: LightGBM and CatBoost. Both are based on Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s public kernel.</p>
<p>I used LightGBM until ensemble based on <a href="https://www.kaggle.com/kyakovlev/ieee-fe-with-some-eda">https://www.kaggle.com/kyakovlev/ieee-fe-with-some-eda</a></p>
<p>After reaching public lb 0.9592 with LightGBM, I created a CatBoost model for ensemble based on <a href="https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv">https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv</a></p>
<p>The score of the final model is:
- LightGBM validation score : 0.94838, public lb : 0.959666, private lb : 0.937376
- CatBoost validation score : 0.95212, public lb : 0.960571, private lb : 0.936798</p>
<h3>Feature Engineering</h3>
<ul>
<li>Frequency encoding</li>
<li>Feature combining
<ul><li>card1_addr1, ProductCD_card1, TransactionAmt_dist2, ProductCD_TransactionAmt, ProductCD_cents</li></ul></li>
<li>Finding uid
<ul><li>Hint from <a href="https://www.kaggle.com/akasyanama13/eda-what-s-behind-d-features"></a><a href="https://www.kaggle.com/akasyanama13/eda-what-s-behind-d-features">https://www.kaggle.com/akasyanama13/eda-what-s-behind-d-features</a></li>
<li>uid : combine of ProductCD, card1-6, addr1, D1-TransactionDay</li></ul></li>
<li>uid aggregation
<ul><li>TransactionAmt</li>
<li>dist1</li>
<li>D features</li>
<li>C features</li>
<li>Some identity features : id_01, id_02, id_05, id_06, id_09, id_14</li>
<li>Some V features : V258, V294, V306, V307, V308</li></ul></li>
<li>C, V features nan group pattern</li>
</ul>
<h3>Feature Selection</h3>
<ul>
<li>Covariate shift from <a href="https://www.kaggle.com/nroman/eda-for-cis-fraud-detection">https://www.kaggle.com/nroman/eda-for-cis-fraud-detection</a> (a rough sketch of the idea follows this list)</li>
<li>Drop all V features</li>
</ul>
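<p>A minimal sketch of the per-feature covariate-shift check referenced above: train a small classifier to separate train rows from test rows using a single feature, and drop features it separates too easily. The threshold and model settings here are my own assumptions, and the columns are assumed to be numeric or label-encoded.</p>
<pre><code>import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def covariate_shift_auc(train, test, col):
    # label train rows 0 and test rows 1, then see how separable they are on this column
    df = pd.concat([train[[col]].assign(is_test=0),
                    test[[col]].assign(is_test=1)], ignore_index=True)
    clf = lgb.LGBMClassifier(n_estimators=100, num_leaves=16)
    return cross_val_score(clf, df[[col]], df['is_test'],
                           cv=3, scoring='roc_auc').mean()

# drop features whose train-vs-test AUC is far from 0.5, e.g.
# shifted = [c for c in feature_cols if covariate_shift_auc(train, test, c) > 0.7]
</code></pre>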
<h3>Ensemble</h3>
<p>The final ensemble is a weighted geometric mean of LightGBM and CatBoost.
- LightGBM + CatBoost ensemble validation score : 0.95541, public lb : 0.963251, private lb : 0.940934</p>
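<p>For reference, a weighted geometric mean of two prediction vectors can be computed as below (the equal weights are placeholders, not the weights actually used):</p>
<pre><code>import numpy as np

def weighted_geometric_mean(preds, weights):
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    # clip to avoid log(0), then exponentiate the weighted average of logs
    logs = [w * np.log(np.clip(p, 1e-6, 1 - 1e-6)) for w, p in zip(weights, preds)]
    return np.exp(np.sum(logs, axis=0))

# blend = weighted_geometric_mean([lgb_pred, cat_pred], [0.5, 0.5])
</code></pre>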
<h3>Code Link</h3>
<p>I wanted to share the final code as a Kaggle kernel, but because of the memory limitations of the Kaggle kernel I failed to load the code, so I shared the code on GitHub.
- Preprocessing : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-Preprocessing-uid-memory-reduction.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-Preprocessing-uid-memory-reduction.ipynb</a>
- LightGBM : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-LightGBM.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-LightGBM.ipynb</a>
- CatBoost, Ensemble : <a href="https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-CatBoost-Ensemble.ipynb">https://nbviewer.jupyter.org/github/tmheo/IEEE-Fraud-Detection-17th-Place-Solution/blob/master/notebook/IEEE-17th-Place-Solution-CatBoost-Ensemble.ipynb</a></p>
<p>As you can see from my solution, there is nothing really special or new. I just followed Konstantin( <a href="/kyakovlev">@kyakovlev</a> )'s kernels and discussions. It was something like standing on the shoulders of giants and reaching out a bit more. </p>
<p>Thank you and congrats to all the winners!</p>
<p>Thank you!</p>
<p>Thanks!
Since I found the magic feature, I think I would have had at least a silver medal. 🤔 </p>
<p>Thanks!</p>
<p>Thanks!</p>
<p>Thank you!</p>
<p>Thank you!
I hope team up with you again next time!</p>
<p>Thank you!</p>
<p>Thank you! I respect your passion!</p>
<p>Thank you!</p>
<p>Thank you, your words are very helpful to me!</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>Thanks!</p>
|
IEEE-CIS Fraud Detection
|
19th Place Solution summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Thanks!</p>
<h2><strong>Introduction</strong></h2>
<p>First of all, I would like to thank Kaggle and host for hosting such an interesting competition!
I would also like to thank my awesome teammates, <a href="https://www.kaggle.com/masatomatsui">e-toppo</a>, <a href="https://www.kaggle.com/kurupical">kuripical</a>, <a href="https://www.kaggle.com/shogonagano">pondel</a>.</p>
<p>Our team name ‘クソザコやねん’ means ‘We are very very very weak’ in Japanese. But, in the end of the competition, our team turned out to be a very strong team, and finally got a gold medal. That’s why we changed the team name to ‘クソザコちゃうねん’ which means ‘We are not weak’.</p>
<p>By the way, we met each other at the on-site competition called <a href="https://atma-cup.atma.co.jp/">atmaCup</a>, and formed a team at a social gathering for this competition. The next atmaCup will be held in Osaka, Japan on November 23, 2019. Come and participate with us!</p>
<h2><strong>Model pipeline</strong></h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F623370%2F1c5ab565e368ae60b9c59d9456d95b8e%2F2019-10-06%206.02.12.png?generation=1570338484162909&alt=media" alt=""></p>
<h2><strong>Feature Engineering</strong></h2>
<ul>
<li>UserID (Combining card features with the date D1 days before the TransactionDT)</li>
<li>Aggregaion by UserID (count,mean,var)
- C1-C14
- D1-D15
- TransactionAmt etc.</li>
<li>max/min values, max/min columns between similar C,V features
- example:
V212, V274, V317
V213, V275, V318
V167~V171, C2</li>
<li>decompose D, C, V feature (use PCA)</li>
<li>nan pattern(D, C, V, M)</li>
<li>isDecember Flag</li>
</ul>
<h2><strong>Validation Strategy</strong></h2>
<p>We first used stratified 5-fold with no shuffle for validation (<a href="https://www.kaggle.com/kurupical">kurupical</a> used KFold instead of StratifiedKFold).
Then we started using time series splits to avoid a huge shake. Our time series split has 7 folds.
The | indicates the train/val split.
1) 0 1 2 3 | 4
2) 0 1 2 3 4 | 5
3) 0 1 2 3 4 5 | 6
4) train all data using 1.1x number of rounds early stopped in 3
Blend: using weighted average 1) * 0.1 + 2) * 0.1 + 3) * 0.25 + 4) * 0.55.</p>
<h2><strong>Models and ensemble</strong></h2>
<p>In this competition, we decided to choose our final submissions with the following 2 different strategies in order to hedge against the risk of a shake (which didn't occur).
- submission1: High LB model (Full model in the slide.)
- submission2: Conservative model (submission of the k2_samp model in the slide)</p>
<p>In total, we made 10 models (counting models that differ only in validation method or k2_samp as the same model), including ProductCD=W-only models, models without the MAGIC features, and so on. Our best single model, an LGBM, scored LB 0.9588.</p>
<p>For the optimization of the blending weights, we used SciPy's Nelder-Mead, which was used in the <a href="https://speakerdeck.com/hoxomaxwell/home-credit-default-risk-2nd-place-solutions?slide=55">2nd place solution of Home Credit Default Risk</a>. We decided not to optimize the weights between the time-series-based and stratified-based models, and chose the following weights for the final submission: 0.4*timeseries + 0.4*stratified + 0.2*timeseries(k2_samp).</p>
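<p>A hedged sketch of blend-weight optimization with SciPy's Nelder-Mead (the AUC objective, the constraint handling and the starting point are my own assumptions, not necessarily what was done here):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def fit_blend_weights(oof_preds, y_true):
    # oof_preds: list of out-of-fold prediction arrays, one per model
    def neg_auc(w):
        w = np.abs(w) / np.abs(w).sum()            # keep weights positive, sum to 1
        blend = sum(wi * p for wi, p in zip(w, oof_preds))
        return -roc_auc_score(y_true, blend)

    x0 = np.ones(len(oof_preds)) / len(oof_preds)  # start from equal weights
    res = minimize(neg_auc, x0, method='Nelder-Mead')
    return np.abs(res.x) / np.abs(res.x).sum()
</code></pre>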
<h2><strong>Post Process</strong></h2>
<p>We made a unique user ID which is a concatenation of card1-card6 and the differences Day-D1 and Day-D10, and checked the record count and the mean of isFraud.
If all of a user's transactions in the train dataset are “Fraud”, we replaced the submitted prediction with 1; if all of a user's transactions in the train dataset are “not Fraud”, we replaced the submitted prediction with 0. This post-process boosted the score by +0.0007 on Public and +0.0004 on Private. If you want to know the details, please refer to my kernel: <a href="https://www.kaggle.com/kurupical/postprocess-19th-place-solution-team">https://www.kaggle.com/kurupical/postprocess-19th-place-solution-team</a></p>
<h2><strong>Conclusion</strong></h2>
<p>In this post, we briefly summarised our model pipeline. Please feel free to ask us for any questions you have for our solutions. </p>
<h2><strong>Appendix (Methods that didin’t work for us)</strong></h2>
<ul>
<li>Neural network</li>
<li>Pseudo labeling</li>
</ul>
<p>Thank you!</p>
<p>Thanks!</p>
<p>Thank you and congratulations on winning 1st place! I have learned a lot from you and as a result, I got a gold medal.
Regarding post process, you are right. I corrected it.</p>
|
IEEE-CIS Fraud Detection
|
1st Place Solution - Part 2
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Great questions. Let me share my opinions. Doing group aggregations with standard deviations (specifically of normalized D columns) allows your model to <strong>find</strong> clients. And doing group aggregation with means (and/or std) allows your model to <strong>classify</strong> clients. Let me explain. </p>
<p>Consider a group that all have the same <code>uid = card1_addr1_D1n</code> where <code>D1n = day - D1</code>. This group may contain multiple clients (credit cards). The features <code>D4n, D10n, and D15n</code> are more specific than <code>D1n</code> and better at finding individual clients. Therefore many times a group of <code>card1_addr1_D1n</code> will have more than 1 unique value of <code>D15n</code> inside, suggesting multiple clients. But some groups have only 1 unique value of <code>D15n</code> inside, suggesting a single client. When you do <code>df.groupby('uid').D15n.agg(['std'])</code> you will get <code>std=0</code> if there is only one <code>D15n</code> inside, and your model will be more confident that that <code>uid</code> is a single client (credit card). </p>
<p>The M columns are very predictive features. For example, if you train a model on the first month of data from train using just <code>M4</code> it will reach train AUC = 0.7, and it can predict the last month of train with AUC = 0.7. So when you use <code>df.groupby('uid').M4.agg(['mean'])</code> (after mapping M4 categories to integers), it allows your model to use this feature to classify clients. Now all <code>uids</code> with <code>M4_mean = 2</code> will be split one way in your tree and all with <code>M4_mean = 0</code> another way.</p>
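<p>In code, those two aggregation ideas look roughly like this (a sketch on an assumed transactions frame <code>df</code>; <code>day</code> is the day index derived from TransactionDT, and M4 takes the values M0/M1/M2 in this data):</p>
<pre><code>df['D1n']  = df['day'] - df['D1']
df['D15n'] = df['day'] - df['D15']
df['uid']  = df['card1'].astype(str) + '_' + df['addr1'].astype(str) + '_' + df['D1n'].astype(str)

# std of a more specific column: std == 0 hints that the uid is a single client
df['uid_D15n_std'] = df.groupby('uid')['D15n'].transform('std')

# mean of a predictive categorical (mapped to integers first): helps classify clients
df['M4_int'] = df['M4'].map({'M0': 0, 'M1': 1, 'M2': 2})
df['uid_M4_mean'] = df.groupby('uid')['M4_int'].transform('mean')
</code></pre>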
<p>Thanks link is fixed. </p>
<p>I fixed the link. Thanks for pointing that out. I fixed it once but somehow it reverted to the old link.</p>
<p>An actual client group is smaller than <code>uid1 = card1+addr1+D1n</code> where <code>D1n=day-D1</code>. But as the architect we shouldn't force our model to use a more precise uid such as <code>uid2 = card1+addr1+D1n+D15n</code>. Instead we should give it <code>uid1</code> and <code>D15n</code> separately and let the model decide when it wants to combine the two. Furthermore we can give it columns dist1, D4, D10, D15, C13, C14, V127, V136, V307, V309, V320 as various aggregations over uid1. </p>
<p>For example, C14 should stay the same for a single client (card holder) most of the time. In our script we enforce that it does all the time. But we give our model <code>df.groupby('uid').C14.agg(['std'])</code>. When <code>std=0</code>, the model knows that C14 is the same for the entire uid. When <code>std!=0</code>, the model knows that that uid contains more than 1 client. Next the model will use another column to separate that uid into 2 or more clients. You see, the model will figure out how to use C14 better than the rules we enforce in our script.</p>
<p>In "1st Place Solution - Part 1" posted <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111284">here</a>, we discussed the benefits of classifying <strong>clients</strong> (credit cards) instead of <strong>transactions</strong> in Kaggle's Fraud competition. Here we will discuss the technical details. </p>
<h1>Final Model</h1>
<p>Our final model was a combination of 3 high scoring single models. CatBoost (Public/Private LB of 0.9639/0.9408), LGBM (0.9617/0.9384), and XGB (0.9602/0.9324). These models were diversified because Konstantin built the CAT and LGB while I built the XGB and NN. And we engineered features independently. (In the end we didn't use the NN which had LB 0.9432). XGB notebook posted <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a>.</p>
<p>One final submission was a stack where LGBM was trained on top of the predictions of CAT and XGB and the other final submission was an ensemble with equal weights. Both submissions were post processed by taking all predictions from a single client (credit card) and replacing them with that client's average prediction. This PP increased LB by 0.001.</p>
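<p>That post-process is essentially a one-liner once a uid is available; finding a reliable uid is the hard part. In this sketch, <code>sub</code> stands for a hypothetical frame holding the test predictions plus the uid column:</p>
<pre><code># replace each prediction with the average prediction of its client (uid)
sub['isFraud'] = sub.groupby('uid')['isFraud'].transform('mean')
</code></pre>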
<h1>How to Find UIDs</h1>
<p>We found UIDs in two different ways. (Specific details <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111510">here</a>).
* Wrote a script that finds UIDs <a href="https://www.kaggle.com/kyakovlev/ieee-uid-detection-v6">here</a>
* Train our models to find UIDs <a href="https://www.kaggle.com/kyakovlev/ieee-basic-fe-part-1">here</a> and <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a> </p>
<p>If you remember, Konstantin's original public FE kernel <a href="https://www.kaggle.com/kyakovlev/ieee-fe-for-local-test">here</a> without UIDs achieves local validation AUC = 0.9245 and public LB 0.9485. His new FE kernel <a href="https://www.kaggle.com/kyakovlev/ieee-basic-fe-part-1">here</a> achieves local validation AUC = 0.9377 and public LB 0.9617 by finding and using UIDs. Soon I will post my XGB kernel which finds UIDs with even less human assistance and proves to beat all other methods of finding UIDs. (XGB posted <a href="https://www.kaggle.com/cdeotte/xgb-fraud-with-magic-0-9600">here</a>). The purpose of producing UIDs by a script was for EDA, special validation tests, and post process. We did not add the script's UIDs to our models. Machine learning did better finding them on its own.</p>
<h1>EDA</h1>
<p>EDA was daunting in this competition. There were so many columns to analyze and their meanings were obscured. For the first 150 columns, we used Alijs's great EDA <a href="https://www.kaggle.com/alijs1/ieee-transaction-columns-reference">here</a>. For the remaining 300 columns, we used my V and ID EDA <a href="https://www.kaggle.com/cdeotte/eda-for-columns-v-and-id">here</a>. We reduced the number of V columns with 3 tricks. First, groups of V columns that shared a similar NaN structure were found; next we used 1 of 3 methods:
* We applied PCA on each group individually
* We selected a maximum sized subset of uncorrelated columns from each group
* We replaced the entire group with all columns averaged.</p>
<p>Afterward, these reduced groups were further evaluated using feature selection techniques below. For example, the block <code>V322-V339</code> failed "time consistency" and was removed from our models.</p>
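<p>A rough sketch of the first two steps, grouping V columns by their NaN count as a proxy for shared NaN structure and reducing each group with PCA (the fill value, scaling and number of components are my own placeholders):</p>
<pre><code>import pandas as pd
from sklearn.decomposition import PCA

v_cols = [c for c in df.columns if c.startswith('V')]

# group V columns that share the same number of missing values
nan_counts = df[v_cols].isna().sum()
groups = {}
for c in v_cols:
    groups.setdefault(nan_counts[c], []).append(c)

# reduce each group with PCA after filling NaNs and scaling
reduced = []
for i, cols in enumerate(groups.values()):
    block = df[cols].fillna(-1)
    block = (block - block.mean()) / (block.std() + 1e-9)
    n_comp = min(3, len(cols))
    pcs = PCA(n_components=n_comp).fit_transform(block)
    reduced.append(pd.DataFrame(pcs, index=df.index,
                                columns=['V_grp%d_pc%d' % (i, j) for j in range(n_comp)]))
reduced = pd.concat(reduced, axis=1)
</code></pre>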
<h1>Feature Selection</h1>
<p>Feature selection was important because we had many columns and preferred to keep our models efficient. My XGB had 250 features and would train 6 folds in 10 minutes. Konstantin will need to say what his models had. We used every trick we knew to select our features:
* forward feature selection (using single or groups of features)
* recursive feature elimination (using single or groups of features)
* permutation importance
* adversarial validation
* correlation analysis
* time consistency
* client consistency <br>
* train/test distribution analysis</p>
<p>One interesting trick called "time consistency" is to train a single model using a single feature (or small group of features) on the first month of the train dataset and predict <code>isFraud</code> for the last month of the train dataset. This evaluates whether a feature by itself is consistent over time. 95% were, but we found that 5% of the columns hurt our models. They had training AUC around 0.60 and validation AUC 0.40. In other words, some features found patterns in the present that did not exist in the future. Of course the possibility of interactions complicates things, but we double checked every test with other tests.</p>
<h1>Validation Strategy</h1>
<p>We never trusted a single validation strategy so we used lots of validation strategies. Train on first 4 months of train, skip a month, predict last month. We also did train 2, skip 2, predict 2. We did train 1 skip 4 predict 1. We reviewed LB scores (which is just train 6, skip 1, predict 1 and no less valid than other holdouts). We did a CV GroupKFold using month as the group. We also analyzed models by how well they classified known versus unknown clients using our script's UIDs. </p>
<p>For example when training on the first 5 months and predicting the last month, we found that our
* XGB model did best predicting known UIDs with AUC = 0.99723
* LGBM model did best predicting unknown UIDs with AUC = 0.92117
* CAT model did best predicting questionable UIDs with AUC = 0.98834</p>
<p>Questionable UIDs are transactions that our script could not confidently link to other transactions. When we ensembled and/or stacked our models we found that the resultant model excelled in all three categories. It could predict known, unknown, and questionable UIDs forward in time with great accuracy !!</p>
<p>Thank you. <a href="/kyakovlev">@kyakovlev</a> will need to answer that. He made lots of new creative categorical features that didn't help LGBM nor XGB but helped CatBoost. </p>
<p>My local validation scheme was to train a model on the first 75% rows and predict <code>isFraud</code> on the last 25% rows. (This is approximately train first 4.5 months and predict last 1.5 month). Konstantin used train on first 4 months, skip 1 month, and predict last month. </p>
<p>When I engineer a new feature (or group of features), I evaluate whether it (they) increased this local validation AUC. Other tests like train.csv/test.csv distribution, time consistency, correlation redundancy would indicate possible "bad" features. I would then remove these features and evaluate whether local validation AUC increased or decreased.</p>
<p>When local validation AUC increased, the Group k Fold CV AUC usually increased too. When they did not agree, I trusted local holdout AUC more because it was a forward in time prediction whereas Group K Fold CV includes some backwards in time folds.</p>
<p>I will be posting my XGB kernel shortly so you can see the code. The kernel I will post trains two XGB models, each with Group K=6 Fold CV, and runs two local validations. All four combined train in under half an hour.</p>
<p>"train/test distribution consistency" is comparing the distribution (histogram) of a feature in the training set with its distribution (histogram) in the test set. For example <code>TransactionDT</code> has low values in train and high values in test. And the values do not overlap. This feature is bad (obviously) so we drop it. You can either inspect features manually or use a library like <code>from scipy.stats import ks_2samp</code>. Documentation <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ks_2samp.html">here</a>, drop features with p=0.</p>
<p>"client consistency" is similar to "time consistency". Split your train dataset by UIDs. Put 80% of UIDs (clients i.e. credit cards) into a train subset and 20% UIDs into a holdout (validation set). Make sure that no UID (client/credit card) is in both sets. Then train models on the 80% train and predict the 20% unseen clients. This evaluates how well features can predict unseen clients (credit cards). </p>
<p>CatBoost is great. In Santander Comp it was my best performing GBM</p>
<p>Maybe. I need to clean the code first. I do some novel things like category encode V columns and statistically reduce cardinality of categorical features. I also use my favorite TanH activation function instead of ReLU</p>
<p>I'll tell you what's interesting. There are only 2 submissions better than the final sub we selected. They are the two submissions where we add 5% of my NN. Both submissions have a Private LB over 0.9460 !! The problem is that the public score is only LB 0.9660 :P</p>
<p>I also have a version of NN which produces credit scores for every client to be used as a meta feature!</p>
<p><a href="/kyakovlev">@kyakovlev</a> Did you notice that our two best private submissions include NN?</p>
<p><a href="/eoldridge">@eoldridge</a> TanH is just a personal preference. ReLU usually outperforms it in most situations but I always try to make TanH work. For this dataset, I got it to perform better than ReLU.</p>
<p>The reason I like TanH is because I coded an NN on my website <a href="http://www.ccom.ucsd.edu/~cdeotte/programs/neuralnetwork.html">here</a>. I have another private website that lets me changes activation functions, add more layers, and provides an interactive console. I like watching the TanH learn more than ReLU. It's more continuous and smoother to watch. </p>
<p>Hmm... I guess it wouldn't be a bad thing if you gave your model <code>uid2 = card1+addr1+D1n+D15n</code> <strong>in addition</strong> to <code>uid</code>. That's what Konstantin does. And it works great. I think it would be a bad thing if we created a very precise uid and <strong>only</strong> gave it that. Then it would have no choice but to use the uid that a human created.</p>
<p>So I prefer giving it <code>uid</code> and additional "building blocks". The feature <code>D15n = day - D15</code> helps because transactions that have the same <code>D15n</code> are more likely to belong to the same client. Thus <code>D15n</code> is a building block. Another building block is the feature <code>X_train['outsider15'] = (np.abs(X_train.D1-X_train.D15)>3).astype('int8')</code>. This boolean helps your model compare <code>D15n</code> to <code>D1n</code>. If these two numbers are within plus or minus one of each other, that adds confidence that <code>uid</code> is correct. Additionally, give your model <code>df.groupby('uid').D15n.agg(['mean','std'])</code>. Another example is that C10 is usually increasing, so you can give it the change from the previous value via <code>df.groupby(['uid'])['C10'].shift()</code>, which should always be zero or one. If it is negative then <code>uid</code> contains a C10 that decreases in time.</p>
<p>So in conclusion, I think it is important to know which columns help identify clients and then transform them and/or interact them with the others in a way that LGBM can utilize to create some "black box" rule about which transactions are from the same client (credit card).</p>
<p>Here's some code</p>
<pre><code># ADD MONTH FEATURE
import datetime
import numpy as np
import lightgbm as lgb

START_DATE = datetime.datetime.strptime('2017-11-30', '%Y-%m-%d')
train['DT_M'] = train['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds = x)))
train['DT_M'] = (train['DT_M'].dt.year-2017)*12 + train['DT_M'].dt.month
# SPLIT DATA INTO FIRST MONTH AND LAST MONTH
# (take the validation month before filtering train, otherwise it would be empty)
validate = train[train.DT_M==17].copy()
train = train[train.DT_M==12].copy()
# TRAIN AND VALIDATE ON A SINGLE FEATURE col
lgbm = lgb.LGBMClassifier(n_estimators=500, objective='binary',
        num_leaves=8, learning_rate=0.02)
h = lgbm.fit(train[[col]], train.isFraud, eval_metric='auc',
        eval_set=[(train[[col]],train.isFraud),(validate[[col]],validate.isFraud)])
auc_train = np.round(h.best_score_['valid_0']['auc'],4)
auc_val = np.round(h.best_score_['valid_1']['auc'],4)
</code></pre>
<p>Run this with variable <code>col</code> equal to the column you wish to test. Here are some results. Variable C3 has <code>auc_train = 0.5</code> and <code>auc_val = 0.5</code>. Variable C7 has <code>auc_train = 0.65</code> and <code>auc_val = 0.67</code>. And Variable <code>id_38</code> has <code>auc_train = 0.61</code> and <code>auc_val = 0.36</code>. The next step is to remove weak variables from your model and then evaluate your entire model with your normal local validation to see if AUC increases or decreases. Using this, we found and removed 19 features and resultantly improved our model.</p>
<p>Ah yes, it is unclear in my explanation. Thanks for pointing that out. I'll update it.</p>
<p>I did try that. It helped my CatBoost models using hyperparameter <code>class_weights=[1,2.5]</code> but over/under sampling didn't help my LGBM nor XGB models. </p>
<p>I don't have any great insights but I think the proportion of <code>isFraud</code> in the test.csv file makes a difference. For example if the <code>train.csv</code> had 3% <code>isFraud</code> but the <code>test.csv</code> had 10% <code>isFraud</code> then it would be important to upsample so that your predictor would predict more <code>isFraud</code> than 3% in the <code>test.csv</code>.</p>
<p>Good observation. Anonymity most likely is correlated with <code>isFraud</code> but there is no need to add booleans such as "is_protonmail". Your model can already do that for you. Especially if you make the parent feature a categorical variable, i.e. <code>df['P_emaildomain'] = df['P_emaildomain'].astype('category')</code>.</p>
<p>Both types are useful to review. "split" refers to quantity and "gain" refers to quality. LGBM is lots of trees and trees are lots of nodes with splits. The importance type "split" refers to how often a feature is used to make splits and "gain" refers to how beneficial splits are when they occur.</p>
<p>Note that regarding "gain", you can use "average gain" or "total gain" where average is "average per split" and total is "total over all splits". Gain is a measure of the model becoming more accurate. Every time a tree makes a split the model will always increase or stay at the same accuracy. (The bottom plot below is "split" and the image is cropped so we can read the feature names).
<img src="http://playagricola.com/Kaggle/f1b.png" alt="image">
<img src="http://playagricola.com/Kaggle/f2b.png" alt="image">
<img src="http://playagricola.com/Kaggle/f3b.png" alt="image"></p>
<p>Thanks</p>
|
IEEE-CIS Fraud Detection
|
25th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you. Congratulations on the outstanding solo gold!</p>
<p>Just out of curiosity:
<a href="/kyakovlev">@kyakovlev</a> made their submission sets public <a href="https://www.kaggle.com/kyakovlev/ieee-submissions-and-uids">here</a>. You can take their submission sets, do a simple average ensemble, and check the result :D
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1231967%2F0351b59084299ecbe70427c6c3d36812%2F2019-10-06%20%2010.01.19.png?generation=1570367011626457&alt=media" alt=""></p>
<p>Just do it for fun if you want :P</p>
<p>Thank you to Kaggle, Vesta and IEEE-CIS for hosting this competition. And before writing the summary of our team's approach, I really want to thank my team-mates ( @yeonmin, @chocozzz, @soonhwankwon , @sunghun); without your hard work it wouldn't have been possible to finish this well. All of my team members needed a gold medal to become a competition master. That's why we chose "borabori" (Tinky Winky in the Teletubbies) as our team name, but we missed it by 2 positions. Let's promise it for next time :D</p>
<h1>Overview</h1>
<p>Our team built 2 independent pipelines. The two approaches were quite different, so they gave us a huge boost when ensembling (stacking didn't work for us). I'm going to write down the main concepts of each pipeline below.</p>
<h1>First model</h1>
<ul>
<li>Specify the Client_id : At the very first stage, we found out the real meaning of the D1 column, so we could specify a client_id by (card1, addr1, addr2, make_account_date_D1), though it did not match perfectly. </li>
<li>Again Client_id, Card_id : After roughly finding the combination for discerning clients, we thought it would be great if we could figure out how to specify the following two groups: (1) who is the owner of each transaction (=client_id), and (2) who is the owner of the card that was used for each transaction (=card_id). Users can have more than one card. If specifying these two were possible, we thought grouping the transactions by card_groups or client_groups and then making aggregation features from them would be the best way to solve this problem (by implementing GroupKFold based on card_groups or client_groups).</li>
<li>Failure : By the last day of the competition, we still couldn't find the perfect condition for specifying the groups. So we generated diverse client_id and card_id combinations to approximate the real client_id and card_id.</li>
<li>FE : Created aggregated, lag and lead features based on those "imitated combinations", such as how many unique items (=ProductCD +'_'+TransactionAmt) are purchased by each client and how large the time gap was between the previous transaction and the current one. Plus frequency encoding for some categorical variables.</li>
<li>Feature Selection : First of all, we only kept the values common to the train and test sets for categorical variables, and replaced the uncommon values with NaN. Then we removed some columns with an adversarial validation check.</li>
<li>Validation Strategy : As I mentioned above, we would have gone with GroupKFold based on client_groups or card_groups. But we couldn't. Instead, we chose GroupKFold(K=6) based on the month_block as our validation strategy. </li>
<li>Model : LGBM (goss, dart, gbdt), CatBoost, XGBoost. For model performance, CAT > LGBM > XGB. A single model's score varied a lot depending on the seed, so we trained each model with at least 5 different seeds.</li>
</ul>
<h1>Second Model</h1>
<ul>
<li>Specify the Client_id ,Card_id </li>
<li>LDA, NMF : First cast the columns to strings and concatenate them, then applied LDA to two combinations (['card_1','addr_1'], ['card1','id_20']) and NMF to one combination (['card4','DeviceInfo']); a rough sketch of the LDA part follows this list.</li>
<li>FE : Similar work to the first approach, plus target mean encoding for some categorical variables</li>
<li>Validation Strategy : First, split the dataset into two sub-groups: one where ProductCD is "W", and the other where ProductCD is not "W". Then implemented StratifiedKFold(K=5) for each sub-group. </li>
<li>Model : LGBM (goss, dart, gbdt), CatBoost, XGBoost. For the score, CATBOOST > LGBM > XGB. </li>
</ul>
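<p>A hedged sketch of one way to build such LDA features for a pair like ['card1', 'addr1'], treating each card1 value as a "document" made of the addr1 values it co-occurs with (the number of topics and feature names are placeholders):</p>
<pre><code>import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

df['card1_str'] = df['card1'].astype(str)
df['addr1_str'] = df['addr1'].astype(str)

# one document per card1: the space-joined addr1 values seen with it
docs = df.groupby('card1_str')['addr1_str'].apply(' '.join)

counts = CountVectorizer(token_pattern=r'[^ ]+').fit_transform(docs)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

lda_feats = pd.DataFrame(topics, index=docs.index,
                         columns=['lda_card1_addr1_%d' % i for i in range(5)])
df = df.merge(lda_feats, left_on='card1_str', right_index=True, how='left')
</code></pre>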
<h1>The things didn't work for us.</h1>
<ul>
<li>NN : @soonhwankwon really tried hard for this part. But it was hard to make a robust, good-scoring model with NN. And also, ensembling/stacking the result with tree-based boosting model didn't work.</li>
<li>DAE </li>
<li>Stacking</li>
</ul>
|
IEEE-CIS Fraud Detection
|
29th place - Mini write up about automation and groupings
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>i'll search for some bugs on Kaggle and then send dozens of support emails demanding for equality between all participants.
and then ask them to increase the gold zone by 7 more teams, it's for the greater good you see..?
=)</p>
<p>Sure, this is the link, it's free to use in its desktop version</p>
<p><a href="https://powerbi.microsoft.com/en-us/">https://powerbi.microsoft.com/en-us/</a></p>
<p>Well, that was a blast!</p>
<p>First i want to thank Kaggle of course for setting up the great competition, and my awesome teammate <a href="/psilogram">@psilogram</a> ,It's been a real honor working with you, i learned a lot!
Thank you for deciding to work with me.</p>
<p>This has been an incredible competition.
I'll put the emphasis on the non-model processes I ran in order to support and enhance Silogram's models. There were so many iterations of my solution that I will only mention the last one. Also, none of the things written here are facts, just my personal findings.
The main thing that guided me was Lynn's note:
<code>The logic of our labeling is define reported chargeback on the card as fraud transaction (isFraud=1) and transactions posterior to it with either user account, email address or billing address directly linked to these attributes as fraud too.</code></p>
<p>After some EDA I found that once a customer commits fraud once, most of the time all of their future transactions will also be fraud. My goal was to create groups of cards such that if we find one of them to be fraud, we can mark them all as fraud, and also to retain the frauds we have in the train set and mark those customers as fraud in the future. </p>
<p>We could use grouping by card, date-D1, addr1+2 and dist1+2 to identify customers, but there are lots of NAs: if only addr1 is different it could be a different customer, and when it's NA we can't know who it is. So I took a different approach. </p>
<p>I'll start with the groupings that I used and continue with how I came about finding them; all of this was done using an iterative script. Within each of these groupings there are multiple sanity checks to account for information lost due to missing data. For example, V307 is not a simple cumsum of TransactionAmt, it covers only the last 30 days: if a card has more than 1 consecutive month of data, sometimes V307 won't be equal to the previous V307 + the previous transaction amount, and if a customer had more than 30 days between transactions, V307 loses its signal and becomes 0. I'm not able to list all the checks and rounds, but I'll give the general guidelines. </p>
<p><strong>1) Customer\Card:</strong></p>
<p>Use unique card1-6 and date-D1; identify the next transaction by V307 = cumsum(TransactionAmt) over the last 30 days (TransactionDT - 2592000), and either date-D3 = date of the previous transaction or V322 = previous V322 + 1 (when D3 is NA).</p>
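<p>A very rough sketch of the kind of linkage check this implies, within a candidate card group sorted by time. It ignores the 30-day window and the D3/V322 fallbacks described above, and <code>D1_reference</code> is my stand-in for day minus D1:</p>
<pre><code>import numpy as np

cand_key = ['card1','card2','card3','card4','card5','card6','D1_reference']
df = df.sort_values(cand_key + ['TransactionDT'])

grp = df.groupby(cand_key)
prev_v307 = grp['V307'].shift(1)
prev_amt  = grp['TransactionAmt'].shift(1)

# the next transaction of the same card should carry the running amount forward
df['links_to_prev'] = np.isclose(df['V307'], prev_v307 + prev_amt, atol=0.01)
</code></pre>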
<p><strong>2) First type of user grouping</strong></p>
<p>Group by id_19+id_20+id_31+DeviceInfo; the next transaction is identified by V264 = cumsum(TransactionAmt) over the last 30 days (TransactionDT - 2592000), V218 = previous transaction's V218 + 1, and sharing the same D1 or D4.</p>
<p><strong>3) Another type of user grouping</strong>
Group by id_19+id_20+id_31+DeviceInfo; the next transaction is identified by V203 = cumsum(TransactionAmt) over the last 30 days (TransactionDT - 2592000), V168 = previous transaction's V168 + 1, and sharing the same D8.</p>
<p>The "user grouping" ones span across different cards.
Some of them are relevant only for specific ProductCDs. This allowed me to fill many NAs in the data: addr, dist and card features. </p>
<p>My output for the ensemble was:
1) a 50-feature, 0.9492 LGBM that eventually wasn't used. It was created before the groupings.</p>
<p>2) for each transaction, its card group ID, user1 group ID and user2 group ID. I also grouped together all groups that are related: if we have group X with 8 transactions and one of them is also part of group Y, create 1 grouping for all of them (as a new type).
3) in order to retain customer frauds into the future, I joined all test transactions with all fraud transactions in train, and created features such as "is same addr", "is dist NA", "does the past transaction's group have a matching addr within it", "do their groups share any transaction amounts", etc. I then trained a model to identify the probability that the transaction at hand is from the same customer as any of the frauds we know from the past.</p>
<p>Using these, Silogram was able to add features in the ensemble that enabled the model to take a transaction it found to be fraud and mark the transaction's entire group with a higher fraud probability. </p>
<p>I'm sure there was a lot more to do, both Silogram and i had breakthroughs on the last day that allowed us to boost our score significantly, but not enough time to explore further. </p>
<p>Now, after reading the other write-ups, I think I might have taken too conservative an approach to the groupings, or picked logic that generated groups of lower quality than others did. I did lots and lots of EDA, but I probably got too tired from lack of sleep, and sometimes I would just stare at the data while my brain melted. It might have been worth revisiting all of my assumptions from scratch with a fresh perspective from time to time.</p>
<p>...........
As a side note, I used a Power BI model at the start of the competition to give myself drag-and-drop and object-interaction functionality in a dashboard. It was of great value because, on the fly, I could drag card, addr, D1, and have a measure that shows me all the groups that have rows in both train and test, have frauds, and that after the 1st fraud have only frauds onwards. Clicking on one of the groups would bring up its transactions. This allows quick and intuitive hands-on EDA that you code once and then analyze on the fly from then on. </p>
<p>I do it in almost every competition and it's an awesome tool for immediate, interactive, hands-on interaction with the data
......... </p>
<p>Thank you all for reading,</p>
<p>Thank you for being such an awesome community.</p>
<p>You're all truly a unique group of people.</p>
<p>Hi <a href="/cdeotte">@cdeotte</a> ,
Just to add to what <a href="/psilogram">@psilogram</a> wrote.
At some iterations of the ensemble (not sure if the final one), we would simply take the mean prediction from previous model\s per user\card\master group.</p>
<p>the model that <a href="/psilogram">@psilogram</a> mentioned is a model i used to retain card frauds from train into the test set.</p>
<hr>
<p>I took all transactions and joined them with all isFraud==1 transactions that came before them and have the same Card + D1Reference.
I then added features such as "is Addr1 the same", "is Dist1 the same", "is there a similar TransactionAmt within the transaction's group", "is P_emaildomain NA", and more.</p>
<p>I then trained a model that predicts, for each transaction, given these relationships with past frauds, how likely this transaction is to be fraud (or actually, the same customer/card),
and we used it as a feature in the lvl2/lvl3 ensemble.</p>
<p>Because we had the scripted groupings, I could also keep this from overfitting or leaking, because I could exclude past transactions within the same group from the join.
So basically rows in train also have a hard time going backwards in time, just like the test set.</p>
<hr>
<p>The biggest value in this model is accounting for NAs in the addr1, dist1 and email columns, and finding the future frauds despite the missing information.
This is something that probably won't be of great value in your model, as you already had strong features and aggregations within the model, so it could find this information.
When we eventually did start using better features in the models themselves, the value this gave decreased, as the model was finding this information by itself.</p>
<p><em><strong>Updated Sketch</strong></em>
Hi <a href="/cdeotte">@cdeotte</a>
Here's a sketch of how it looks like.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1697090%2F1b3b6e73de02a3dadef2b09183643bcc%2F1sModel.JPG?generation=1570393393585337&alt=media" alt=""></p>
<p>It got to be 359 features (types of relationships) based on about 20 different interactions</p>
|
IEEE-CIS Fraud Detection
|
2nd Solution, CPMP View
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When Giba and I joined the team, <a href="/sggpls">@sggpls</a> <a href="/tony321">@tony321</a> and <a href="/krivoship">@krivoship</a> already had a solid position in top 10 and there were less than 10 days left. It is hard to add value in this case, and I decided to focus on adding diversity to the models they were blending. For that I started my own dataset and borrowed bits of theirs. Let me describe the steps I took.</p>
<p><strong>Data Cleaning</strong>
The data was a bit similar to Malware, with time-stamped observations and many features with lots of values present in test but not in train. I screened all features by looking at train vs test distributions, for the raw feature and also for a frequency encoding of it. Here is an example for <code>card1</code>. <br>
<img src="https://storage.googleapis.com/kaggle-forum-message-attachments/641506/14236/card1.png" alt="card1 plot">
Raw data is used in the left column, and a frequency encoding of it in the right column. Red is train, blue is test. The left column's x axis is <code>card1</code>. The top-left y axis is frequency: the top-left plot shows the frequency of each card1 value in train (red) and test (blue). The bottom-left y axis is the mean of the target with a log transform; 0 is the average for train. The right column is the same, except card1 is replaced by the log of its frequency.</p>
<p>We see that lots of card1 values only appear in test. If we use card1 directly it will lead to a major drop on the private LB. We also see that the frequency encoding of card1 looks more balanced across train and test. For this feature I discarded the original one and kept the frequency encoding.</p>
<p>I did not merge values as I did in Malware, for lack of time. I am not sure it would have helped a lot, in hindsight.</p>
<p><strong>Cross Validation</strong></p>
<p>Cross validation for time series is always tricky. Here is what I ended up with. I tried to mimic the fact that there is a significant time gap between train and test, and also that test can be several months away from train. </p>
<p>I run these folds, using months starting with 0. The | indicates the train/val split:
0 | 2 3 4 5 6
0 1 | 3 4 5 6
0 1 2 | 4 5 6
0 1 2 3 | 5 6
This makes me train 4 models for a given model type. I then average the auc score of each of these models on the corresponding validation data. This scheme or some variants were reused by my team mates for validating their models. The advantage is that we have folds that test long term forecast.</p>
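<p>A minimal sketch of these folds, assuming a <code>month</code> column (0 to 6) has already been derived from <code>TransactionDT</code>; this is illustrative, not the original code:</p>
<pre><code>folds = [
    ([0],          [2, 3, 4, 5, 6]),
    ([0, 1],       [3, 4, 5, 6]),
    ([0, 1, 2],    [4, 5, 6]),
    ([0, 1, 2, 3], [5, 6]),
]

for train_months, val_months in folds:
    trn_idx = data.index[data['month'].isin(train_months)]
    val_idx = data.index[data['month'].isin(val_months)]
    # train one model on trn_idx, compute AUC on val_idx,
    # then average the 4 validation AUCs to score a feature set
</code></pre>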
<p>I used this CV to evaluate features, to evaluate HPO, and for feature selection. For submission I tried two ways. One, to extrapolate the number of trees from the CV runs. I just ran a linear regression on the log of trees, and predicted the value for full train. I also tried to use an unshuffled 8 fold CV and average all fold model predictions for test. Both ways were giving similar LB scores. I probably should have used both and blended the results, but I didn't...</p>
<p><strong>Feature Engineering</strong></p>
<p>I started by computing frequency encoding of all features, then used lightgbm with the above cross validation to select features using <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/107877#latest-635386">permutation importance</a> with some modification: I kept a feature only if permuting it did not improve any of the 4 model predictions. This removed some features. I repeated the process, using permutation importance to remove features, until the feature list did not shrink anymore. With this I got 0.942 on public LB.</p>
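<p>A rough sketch of the permutation check, assuming <code>model</code> is a trained lightgbm Booster whose <code>predict</code> returns probabilities (simplified relative to the linked recipe):</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_gain(model, X_val, y_val, feature, seed=0):
    rng = np.random.RandomState(seed)
    base_auc = roc_auc_score(y_val, model.predict(X_val))
    X_perm = X_val.copy()
    X_perm[feature] = rng.permutation(X_perm[feature].values)
    perm_auc = roc_auc_score(y_val, model.predict(X_perm))
    return perm_auc - base_auc  # a positive gain means the model did better without the real values

# a feature is kept only if permuting it does not improve
# the prediction of any of the 4 fold models
</code></pre>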
<p>The next level was to look for user id, and this is when I teamed. I was hoping my new team mates had sorted out the user id puzzle. They did. They actually had a number of different uids. I selected two of them defined as follows:</p>
<pre><code>data['uid1'] = (data.day - data.D1).astype(str) +'_' + \
data.P_emaildomain.astype(str)
data['uid2'] = (data.card1.astype(str) +'_' + \
data.addr1.astype(str) +'_' + \
(data.day - data.D1).astype(str) +'_' + \
data.P_emaildomain.astype(str))
</code></pre>
<p>where <code>data.day</code> is the number of days since the beginning, computed from <code>TransactionDT</code>.</p>
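<p>One plausible way to get <code>data.day</code> (shown as an assumption, since <code>TransactionDT</code> is expressed in seconds):</p>
<pre><code>data['day'] = data['TransactionDT'] // (24 * 60 * 60)
</code></pre>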
<p>I did not want to use these user ids directly for fear of overfitting (in hindsight this was a mistake, and in my last sub I just ran catboost using these as features and it led to my best sub). I rather used these uids to compute aggregate features from the sequence of time intervals between transactions for a given uid, and also from the transaction amounts. Here is one way I used:</p>
<pre><code>def add_gr(data, col):
cols = data.columns
gr = data.groupby(col)
data[col+'_count'] = gr.TransactionID.transform('count').astype('int32')
data[col+'_next_dt'] = gr.TransactionDT.shift(-1)
data[col+'_next_dt'] -= data.TransactionDT
data[col+'_mean_dt'] = gr[col+'_next_dt'].transform('mean').astype('float32')
data[col+'_std_dt'] = gr[col+'_next_dt'].transform('std').astype('float32')
data[col+'_median_dt'] = gr[col+'_next_dt'].transform('median').astype('float32')
data[col+'_next_amt'] = gr.TransactionAmt.shift(-1)
data[col+'_mean_amt'] = gr.TransactionAmt.transform('mean').astype('float32')
data[col+'_std_amt'] = gr.TransactionAmt.transform('std').astype('float32')
data[col+'_median_amt'] = gr.TransactionAmt.transform('median').astype('float32')
new_cols = list(set(data.columns) - set(cols) -set([col+'_next_dt']))
return new_cols
</code></pre>
<p>where col is one of <code>uid1</code> or <code>uid2</code>.</p>
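<p>A possible way to apply the function above to both uids (illustrative usage, not from the original post):</p>
<pre><code>new_cols = []
for uid_col in ['uid1', 'uid2']:
    new_cols += add_gr(data, uid_col)
</code></pre>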
<p>With these I got 0.952 public LB.</p>
<p>Another significant improvement was to use target encoding of these uids and of a set of transaction chains that my team mates derived by combining uids with additional feature columns that represent successive transaction amounts, e.g. <code>V307</code>. This moved my lgb model to 0.9606 public LB, unless I am mistaken.</p>
<p>I added few other features that improved a bit, but nothing very important.</p>
<p><strong>Final Blend</strong></p>
<p>I mostly used lightgbm, but I also tried xgboost and catboost. XGboost was giving models extremely close to lgb, with both very high correlation in predictions and similar LB scores. My features were added to my team mates' already very good models, yielding even higher scores: Giba had one at 0.9644, and my Russian team mates had models up to 0.9626. Giba then used his blending magic to reach 0.9662 public LB the day of the original deadline. We then wondered about what could yield us the extra 0.001 we needed to reach the prize zone. We realized we were not using the main property of the dataset: if a card is a fraud at some point, then it is a fraud in the future of that point. Well Giba captured that partially with some lag variables, but that was it. We then added some post processing that overrides predictions if a card is a fraud in the past, and this moved us to 0.9672 at the last minute.</p>
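<p>A heavily simplified sketch of that override; the 0.99 value and the column names are illustrative assumptions (the post does not give the exact value), and it relies on the submission frame being aligned with the test rows:</p>
<pre><code># any uid2 seen with isFraud=1 in train gets its test predictions pushed up
fraud_uids = set(train.loc[train['isFraud'] == 1, 'uid2'])
sub.loc[test['uid2'].isin(fraud_uids).values, 'isFraud'] = 0.99
</code></pre>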
<p><strong>What did not work</strong></p>
<p>Many things, but the one I spent most time on was trying to leverage the fact that if a card is a fraud at some point, then it is a fraud in the future of that point. I didn't want to use post processing as it can lead to overfitting if the card ids are not perfect. Indeed, if one of our ids actually covers several cards, then one fraud isn't necessarily followed by all frauds. I therefore tried this:
Create one submission. Then for each uid create a lag variable equal to the average of predictions before the current point, and train again. This is like stacking, but with a temporal shift.
This did not work, CV skyrocketed, but LB dropped by 0.01. I still think there is something to be done to make it work, but I could not find how.</p>
<p><strong>What worked</strong></p>
<p>Teaming. See <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111159">here</a> for more details.</p>
<p>It is clear that 3 + 1 + 1 > 5 here. If you have never teamed then try, you will not regret it.</p>
<p>Edit: Additional information about our solution can be found in these two posts:</p>
<p>How to find card ids: <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111559">https://www.kaggle.com/c/ieee-fraud-detection/discussion/111559</a>
The whole team story: <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111554">https://www.kaggle.com/c/ieee-fraud-detection/discussion/111554</a></p>
<p>Yes, I use a blend with overall target mean. The smaller the frequency, the closer to train average. Values that only appear in test get the average train value. This is key.</p>
<p><a href="/longyin2">@longyin2</a> No, I haven looked at that one.</p>
<p><a href="/tpthegreat">@tpthegreat</a> exactly this indeed, but with nested fold in the train fold. I used out of nested fold target encoding in the training fold, and the average of nested fold TE for validation fold and test. Key is to never use target encoding on the data used to compute it.</p>
<blockquote>
<p>However even if we finished with a public score of LB 0.9668 (using no stack) which would be lower than CPMP and team on public LB, our corresponding private LB score still wins the competition.</p>
</blockquote>
<p>What is the score of your weighted average without post processing? I didn't see it mentioned in your writeup unless mistaken.</p>
<p>Ours is 0.941815 private, and that's our second selected sub. It would have been 10th.</p>
<p>Not really: in stacking you train a model. If you use a linear regression, then indeed you end up with a weighted average, but the weights are found from training data rather than set by hand.</p>
<p>Indeed, from reading your writeups our solutions are extremely similar (no surprise I guess). I was also surprised to see that our individual models look a bit better than yours on public LB. </p>
<p>We had different datasets and different validation schemes. My Russian team mates had their own, with way more features than mine. And Giba started from theirs, selected features aggressively and added his own. This ensured some diversity but I reused the exact same card ids. The top models in the blend are:</p>
<p>lgb 0.9644
Catboost 0.9626
xgb/lgb blend 0.9623</p>
<p>I think the difference comes from stacking. We only used a weighted average. We teamed too late to have consistent oof predictions from our models.</p>
<p>It was 0.966171</p>
<p>Answering 2 is easy: I tune alpha using cross validation, like any other hyper parameter. I ended up using 50 here.</p>
<p>1 is trickier. Let me use a simplistic example for the sake of discussion. Assume two folds A and B, and T for test. Let's look at a new feature TE which is the target encoding for a given feature.</p>
<p>In A, TE values are computed from B target.
In B, TE values are computed from A target.
in T, TE values are computed from A target and B target.</p>
<p>If you use cross validation with early stopping models to compute test prediction you do:</p>
<p>Train a model MA on A, use early stopping on B, then predict on T.
Train a model MB on B, use early stopping on A, then predict on T.</p>
<p>MA training data includes TE computed from B target, and MA training uses B for early stopping. This is where there is a leak that can lead to overconfidence in TE value. Also, there is a lack of symmetry with test, because MA does not use TE computed from T.</p>
<p>Nested fold TE work this way. A is split in A1 and A2, and B is split in B1 and B2</p>
<p>When training MA:
In A1, TE values are computed from A2 target.
In A2, TE values are computed from A1 target.
In B, TE values are computed from A1 target and A2 target.
In T, TE values are computed from A1 target, and A2 target.</p>
<p>There is no leak from B target to MA training data.</p>
<p>The difference between the two ways may matter, or not, depending on the dataset. </p>
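<p>A compact sketch of the nested fold scheme for one outer model (trained on A, validated on B), assuming the count-weighted smoothing towards the overall mean with alpha=50 mentioned above; this is an illustration, not the code actually used:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def nested_te_for_one_model(trn, val, test, col, target, alpha=50, n_inner=2):
    prior = trn[target].mean()

    def encode(fit_df, keys):
        stats = fit_df.groupby(col)[target].agg(['mean', 'count'])
        smoothed = (stats['mean'] * stats['count'] + prior * alpha) / (stats['count'] + alpha)
        return keys.map(smoothed).fillna(prior)

    # nested TE inside the training fold: each inner part is encoded
    # with the other inner part's targets only
    te_trn = pd.Series(np.nan, index=trn.index)
    for in_fit, in_enc in KFold(n_splits=n_inner, shuffle=False).split(trn):
        te_trn.iloc[in_enc] = encode(trn.iloc[in_fit], trn.iloc[in_enc][col]).values

    # validation fold and test are encoded with the full training fold's targets
    te_val = encode(trn, val[col])
    te_test = encode(trn, test[col])
    return te_trn, te_val, te_test
</code></pre>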
<blockquote>
<p>but a higher cv also be a symptom of leak. </p>
</blockquote>
<p>Sure, if you don't have a reliable CV setting you can't do much. That's why I spent quite a lot of time testing different CV settings and their correlation with LB scores.</p>
<p>Thanks, but I didn't do that much. The time consuming part, analyzing data to find uids, was done by my team mates before I joined the team.</p>
<blockquote>
<p>Unless stacking definition forces the use of a learned model</p>
</blockquote>
<p>Well, that's the definition of stacking I know.</p>
<p>I think I mean what I wrote. If permuting the values for a feature leads to a better score then it means that the feature is noise. This is done without retraining any model.</p>
<p>Permutation importance is described in the link I provide.</p>
<p>Are you familiar with permutation importance (link in my post)? Then I use it with the 4 models I got from my 4 different folds. A feature is removed if permuting the values and then predicting improves the score for one of the models.</p>
|
IEEE-CIS Fraud Detection
|
40th place solution, my part.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Preamble</h3>
<p>First of all I want to congratulate all the winners, as you will get paid for your hard work. Also personal congratulation to <a href="/cdeotte">@cdeotte</a> with his 'promotion' to competitions master. 3 more gold medals and we will have one more triple GM.
Congratulations to <a href="/fatihozturk">@fatihozturk</a> for becoming competitions GM.
And congratulations to all other participants because the best prize you are getting out of every competition is what you have learned.</p>
<h3>Solution</h3>
<ul>
<li>We did find the same thing as, I suppose, any other team from top 1-2% has found, which is uid by card1, addr1, email and TransactionDT - D1. </li>
<li>So then comes a lot of different groupby's on that uid</li>
<li>Also my teammate <a href="/nvarganov">@nvarganov</a> has engineered some fancy shifting features, which added around 0.0005 to my best single model when I used them</li>
<li>All in all my best model had ~1000 features. It was a LightGBM on 10 KFolds with no shuffle and it scored 0.956 Public and 0.932 Private</li>
<li>Our best submit was a blend of all of our best models, which contained LightGBM and XGBoost.</li>
</ul>
<h3>C features interaction</h3>
<p>And also here is the code for some C features interactions which I used in my best model and which improved its score by a little bit. I believe it was around 0.0002-0.0003.</p>
<p>A list of C features:</p>
<pre><code>c_cols = [col for col in X.columns if col.startswith('C') and len(col) &lt;= 3]
</code></pre>
<p>Multiplication and subtraction interactions each-to-each:</p>
<pre><code>for i in range(len(c_cols)):
    for j in range(i + 1, len(c_cols)):
        X['{}_mul_{}'.format(c_cols[i], c_cols[j])] = X[c_cols[i]] * X[c_cols[j]]
        X['{}_sub1_{}'.format(c_cols[i], c_cols[j])] = X[c_cols[i]] - X[c_cols[j]]
        X['{}_sub2_{}'.format(c_cols[j], c_cols[i])] = X[c_cols[j]] - X[c_cols[i]]
        test['{}_mul_{}'.format(c_cols[i], c_cols[j])] = test[c_cols[i]] * test[c_cols[j]]
        test['{}_sub1_{}'.format(c_cols[i], c_cols[j])] = test[c_cols[i]] - test[c_cols[j]]
        test['{}_sub2_{}'.format(c_cols[j], c_cols[i])] = test[c_cols[j]] - test[c_cols[i]]
</code></pre>
<p>Computing a correlation matrix for all the newly generated features (note the filter has to match the <code>_sub1_</code> / <code>_sub2_</code> names created above):</p>
<pre><code>sum_cols = [col for col in X.columns if '_mul_' in col or '_sub1_' in col or '_sub2_' in col]
corr_matrix = X[sum_cols].corr()
</code></pre>
<p>Then removing all the features with a high correlation, keeping those which correlate with the target better.</p>
<pre><code>to_drop = list()

# Iterating over rows starting from the second one, because position [0, 0] will be self-correlation which is 1
for i in range(1, len(corr_matrix)):
    # Iterating over columns of the row. Only going under the diagonal.
    for j in range(i):
        # See if the correlation between the two features is more than a selected threshold
        if corr_matrix.iloc[i, j] >= 0.98:
            # Then keep the one of those two which correlates with the target better
            if abs(pd.concat([X[corr_matrix.index[i]], y], axis=1).corr().iloc[0][1]) > abs(pd.concat([X[corr_matrix.columns[j]], y], axis=1).corr().iloc[0][1]):
                to_drop.append(corr_matrix.columns[j])
            else:
                to_drop.append(corr_matrix.index[i])

to_drop = list(set(to_drop))
</code></pre>
<p>Wow, pretty nice to know that my kernels helped.</p>
<p>Did you use CatBoost with default parameters or did you tune something up right from the beginning?</p>
|
IEEE-CIS Fraud Detection
|
41st place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to my teammates Rob Mulla <a href="/robikscube">@robikscube</a> , <a href="/raddar">@raddar</a> , and Yang <a href="/ljjsfe">@ljjsfe</a> .</p>
<h3>Initial findings</h3>
<p>It starts with Rob Mulla and Yang independently submitting stuff when the competition first opened. Yang had independently achieved 0.927 private LB. Then when CHAMPS ended, Rob and I decided to pursue the IEEE competition more fiercely.</p>
<p>I had privately noted to Rob that the D-features leaked <code>user_id</code>. Just using the D-features alone, however, wasn't enough, and so I saw many users flip from <code>isFraud=1</code> back to <code>isFraud=0</code>. I reached out to Raddar because he is the king of reverse-engineering (I knew of his strength from ELO and Home Credit post-processing). After I explained the D-features to him, he knew exactly how to exploit this and was on board to team. The simple thing we were missing was to also enforce that the user should have the same <code>card1-6, P_emaildomain, addr1-2</code>, etc.</p>
<h3>Raddar post processing</h3>
<p>After raddar did that, he also began tinkering to enforce other rules, like C features should be monotonous. By simply post-processing people with <code>isFraud=1</code> and people with <code>isFraud=0</code> to 0.93 and 0.000001 respectively in the test set, he brought our private LB 0.927 -> 0.9297 (And Rob and I were very happy to have raddar on our team when we saw our public LB jump so high)</p>
<p>Yang reached out to join us on September 21, and we accepted given his strong performance on Home Credit.</p>
<h3>LB Probing</h3>
<p>I then began to LB probe based on the <code>user_id</code> raddar had created. Basically, if the <code>user_id</code> was only in test and not in train, and if it had a high prediction like >= 0.15, I changed him all to 1's. If LB improved, I kept it; if it worsened, I changed him to all 0's. I continued this every day until the competition ended; this ended up netting us a decent amount of AUC</p>
<p>We coded up some features related to <code>user_id</code> (like number transactions, max TransactionAmt, mean isFraud, etc.) and chose them if they improved the CV.</p>
<h3>Adversarial Validation, Pseudolabels, and train on all data blindly</h3>
<p>We began to get scared of train/test difference in distribution since many were saying there would be a shakeup (and it was evident that many features had different distribution in test). Therefore we fit a feature 1 at a time to predict train vs private test (this technique is called adversarial validation). From now on, we had 2 branches: one model that kept all of the features, and one model that only kept features that had <= 60% ROC AUC in adversarial validation.</p>
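<p>A sketch of such a single-feature adversarial check, assuming the feature is already numeric or label encoded; names are illustrative, not the team's code:</p>
<pre><code>import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

def adversarial_auc(train, test, feature):
    # label train rows 0 and test rows 1, then see how well one feature separates them
    X = pd.concat([train[[feature]], test[[feature]]], ignore_index=True)
    y = np.r_[np.zeros(len(train)), np.ones(len(test))]
    clf = lgb.LGBMClassifier(n_estimators=100, learning_rate=0.1)
    return cross_val_score(clf, X, y, cv=3, scoring='roc_auc').mean()

# keep a feature in the "safe" model only if adversarial_auc(train, test, f) &lt;= 0.60
</code></pre>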
<p>Additionally, after having so many LB probes, we began to train on full train set + use the LB probes as pseudolabels from the test set. The number of iterations we used was roughly (number of iterations on 5th fold) * 1.1</p>
<h3>Corey post processing</h3>
<p>In the last few days, I modified <code>corey_user_id</code> (different from <code>raddar_user_id</code>) to be just based on the D intercept and then group by <code>card1-6, ProductCD, and P_emaildomain</code>. Whenever I tried any LB probes based on my ID, it worsened LB. So I made a modification: IF in test set there was a <code>raddar_user_id</code> with <code>isFraud=1</code>, then for the <code>corey_user_id</code> mapped to that row, change all predictions to <code>isFraud=0.99</code>. Do the same if <code>raddar_user_id</code> had <code>isFraud=0</code>, except change predictions to 0.0002. Why? Because I had more confidence in <code>raddar_user_id</code> (he was grouping by more features, and was enforcing C monotonicity) and my <code>corey_user_id</code> was more shoddy. However, doing this post processing gave us a jump of like 20 places.</p>
<h3>Final submissions</h3>
<p>In the end, we had 2 submissions:
1. The first is a giant model LGBM trained on Rob's, my, raddar's, and Yang's features such that ROC AUC in adversarial validation is <= 60%. We bagged it thrice, trained blindly on full train data + pseudolabels from <code>raddar_user_id</code> and <code>corey_user_id</code> and then did post processing similar to how I described it above. This model gave 0.955 public LB, and 0.9339 private LB
2. The second is a 60% * (50/50 blend between Rob's old model that used all features and Yang's model that used solely his features, with post processing) + 40% * (first submission). This had 0.9586 public LB and 0.9363 private LB.</p>
<h3>Post-competition thoughts</h3>
<p>We thought there would be more of a shakeup, so we are sad not to achieve gold (though we did rise 18 LB spots). There are some problems we could have pursued: for example, using D3, D4, D5, or V307 to make our <code>user_id</code>'s better. Additionally, basically Rob was the only one training LGBM, so we had no big diversity from other models/feature sets apart from Yang's early model. Next time, we will work harder ;)</p>
|
IEEE-CIS Fraud Detection
|
5th Place Solution: Lions
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you, CPMP and congrats your 2nd place. I always learn a lot from your solutions and discussions.</p>
<p>> Users who are confirmed as isFraud=1 in test-public dataset by LB probing. (207rows)</p>
<p>We gained from this part, and its gain is about <code>+0.0004~0.0005</code>.</p>
<p>Thank you !</p>
<p>Thank you!</p>
<p>Thank you!</p>
<p>First of all, I would like to thank Kaggle and IEEE-CIS and Vesta corporation for hosting this competition. Congratulations to the winners. Thanks to the best team members <a href="/hmdhmd">@hmdhmd</a>, <a href="/shuheigoda">@shuheigoda</a>, <a href="/yryrgogo">@yryrgogo</a> who were able to share great results.</p>
<p>In this solution, we describe what we have devised, while omitting what has already been explained in other solutions as much as possible.</p>
<h2>Final submission summary</h2>
<ul>
<li>Simple average of LightGBM individually modeled by four team members. The effect of averaging was great even though everyone modeled using LightGBM & similar parameters.
<ul><li>Although our team was formed from the beginning, we almost didn’t share modeling ideas except the core part of the competition (e.g. userID). This is one of the factors that greatly improved the ensemble.</li></ul></li>
<li>After averaging, we override isFraud to 1 as following two user patterns. This gave us about <code>+0.0011</code> boost in private LB.
<ol><li>Users who are certain to exist in train dataset and isFraud=1. (335rows)</li>
<li>Users who are confirmed as isFraud=1 in test-public dataset by LB probing. (207rows)</li></ol></li>
<li>stats
<ul><li>Final Result: <code>private 0.942453 / public 0.964383</code></li>
<li>Base Models (LightGBM)
<ul><li>private 0.933088 / public 0.956396</li>
<li>private 0.938111 / public 0.960937</li>
<li>private 0.936381 / public 0.959820</li>
<li>private 0.933819 / public 0.958436</li></ul></li></ul></li>
</ul>
<h2>Key Ideas</h2>
<h3>User identification</h3>
<ul>
<li>User identification was a very important factor, but we briefly describe as it has already been explained in many other solutions.</li>
<li>Columns used for identification: <code>D1/D3/addr1/P_emaildomain/ProductID/C13</code></li>
<li>The identification work was performed with attention to the following two points.
<ol><li>emaildomain: Replace <code>anonymous.com</code> and <code>mail.com</code> with <code>NaN</code></li>
<li>There is a gap period between train and test-public
<ul><li>Recognized as the same user even if <code>D3</code> etc. don't fit due to the gap period</li></ul></li></ol></li>
<li>In the above user identification is not perfect, so we aggregate based on various keys to generate user group features as describe details below.</li>
</ul>
<h3>User Group Feature</h3>
<ul>
<li>This is the same type of feature that is called userID, but in one model we didn't try to identify the user. The concept of the model is to let the tree model specify a userID. We added a lot of the necessary features to the model.</li>
<li>We made about 80 features like this <code>df.groupby('uid_and_register_date')[C_or_V_columns].agg(['mean'])</code> (see the sketch after this list). This is similar to the 1st place solution as Chris described. The single LGB improvement from these 80 features is the following: <code>LB 0.9519-&gt;0.9565, CV 0.9257-&gt;0.9339</code></li>
<li>Since this user grouping is not perfect neither, we have created features to identify different users in user groups.
<ul><li>Whether the value matches the most frequent value in the group</li>
<li>Whether the value appeared in the user group last month</li>
<li>How many days ago that value appeared in the user group
etc...</li></ul></li>
</ul>
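<p>A hedged sketch of these user-group features; the group key follows the snippet in the list above, while the function and column names are illustrative:</p>
<pre><code>import pandas as pd

def add_user_group_features(df, group_col, value_cols):
    # simple per-group aggregates, e.g. mean/std of C or V columns within a user group
    for col in value_cols:
        df[group_col + '_' + col + '_mean'] = df.groupby(group_col)[col].transform('mean')
        df[group_col + '_' + col + '_std'] = df.groupby(group_col)[col].transform('std')
    return df

def add_matches_group_mode(df, group_col, col):
    # whether the row's value equals the most frequent value inside its user group
    mode_map = df.groupby(group_col)[col].agg(lambda s: s.mode().iloc[0] if len(s.mode()) else None)
    df[col + '_matches_group_mode'] = (df[col] == df[group_col].map(mode_map)).astype(int)
    return df
</code></pre>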
<h3>Utilizing test data which we grasp isFraud correctly</h3>
<ul>
<li>Since we were able to grasp isFraud of more than 5000 lines(1) of test data almost accurately by user identification and probing, we utilized it in the following three ways. (1: total 5448 rows / positive: 542 / negative: 4906)
<ol><li>Override at final submission (as wrote above)
<ul><li>Since the loss is very large when the one with true value is overridden as 0, we only override the one with true value isFraud=1.</li>
<li>This gave us about <code>+0.0011</code> boost. (Even if we didn't override, we got the gold medal.)</li></ul></li>
<li>Feature Selection
<ul><li>We calculated AUC with the data in the test set for which we knew the correct answer. If we add new features to model and the AUC dropped significantly, we threw away them.</li></ul></li>
<li>Determination of model ensemble ratio
<ul><li>The ensemble ratio was decided to maximize the value of above AUC calculated only in the test-private. (I'm not sure how useful this was.)</li>
<li>Weight optimize was executed using gpyopt.</li></ul></li></ol></li>
<li>The isFraud specification could also be used to identify negatives. To be specific, if a user who is isFraud=0 in train has completed all transactions within 120 days from the last transaction date of train, we can judge all (train & test) transactions of that user as isFraud=0.</li>
</ul>
<h3>Feature Selection Policy</h3>
<ul>
<li>At the beginning of the competition, features that contributed to Adversarial Validation were not added because of fear of shakedown due to the use of features with large discrepancy in train / test.</li>
<li>However, from the experiment described below, it was found that even a feature with a large discrepancy in train-test contributes to improving the score. Therefore, we changed the policy to add features that contribute to improving prediction accuracy, even if they contribute to Adversarial Validation.</li>
<li>Experiment
<ul><li>Divide train data into 3 parts by the following ratio
<ul><li>(Pseudo) train: 50%</li>
<li>(Pseudo) public: 10%</li>
<li>(Pseudo) private: 40%</li></ul></li>
<li>We investigated the relationship between adv val AUC and oof, pub, priv AUCs under different conditions.</li>
<li>Regardless of the value of AUC of Adversarial Validation, if oof is high, pubic and private AUC tend to be high</li></ul></li>
</ul>
<h2>Model details</h2>
<ul>
<li>Four team members used LightGBM with almost the same settings
<ul><li>validation: <code>DT_M</code> GroupKFold</li>
<li>userID GroupKFold didn't work well.</li>
<li>parameter: almost same as konstantin's kernel.</li></ul></li>
<li>I (ML_Bear) made an original model using only the data of users who have just one transaction.
<ul><li>purpose: If a user with a large number of transactions is in a scam, normal transactions will be overridden to fraud, making it really difficult to see what the scam transaction is. Therefore, I tried to extract the characteristics of isFraud using transactions of only one transaction users.</li>
<li>validation: StratifiedKFold</li>
<li>The oof / pred of this model was input to above LightGBM as feature.</li>
<li>parameter: almost same as konstantin's kernel.</li>
<li>Thanks to this special model(?), the public LB achieved by my model was not very high, but the weight of the ensemble was fairly high. (best my model: <code>private 0.935089 / public 0.958713</code>)</li></ul></li>
</ul>
<h2>Feature Engineering</h2>
<h3>Based on userID</h3>
<p>We used the userID excessively.
- <code>user_transaction_per_day</code>
- <code>user_transaction_amt_min</code>
- Aggregate the average and deviation of <code>TransactionDT.diff(1)</code> for the last N (= 2,3,4,5,10,20) rows.
- Counting the number of nulls for each userID for columns that contain 1% or more of nan.
- Aggregate the deviation, max, min, diff of max and min, and unique count (or rate) for <code>Cxx</code> or <code>Dxx</code> or 'V_group' features. (<code>userID_Vxx_Vxx_mean_mean</code>, <code>userID_Cxx_unique_rate</code>, etc.)</p>
<h3>Other features</h3>
<ul>
<li>Aggregation such as <code>TransactionAmt</code> with various keys including <code>regist_date</code> (Transaction date - D1)</li>
<li><code>Cxx</code> feature dimension reduction by PCA</li>
<li>TargetEncoding of <code>addr1</code> (Numerical features but TargetEncoding worked)</li>
<li>category groupby feature
<ul><li>Calculate frequency distribution of specified category data for each groupby key</li>
<li>Expresses the probability of fraud when it behaves differently from the usual by the probability of appearance of the category</li></ul></li>
<li>LDA / SVD of category co-occurrence matrix</li>
<li><code>Vxx</code> encodings
<ul><li>Since <code>Vxx</code> has many features, we tried a few encodings instead of using original values.</li>
<li>For example, since <code>V242~258</code> group has almost the same value scale, count encoding was performed for each value in that group. ("1" count in V242~258, "2" count in V242~258...)</li></ul></li>
</ul>
<p>In the single transaction users model:</p>
<ul>
<li>TargetEncoding of <code>TransactionAmt</code> (Numerical features but TargetEncoding worked)</li>
<li>TargetEncoding of <code>emaildomain</code> (both P and R)</li>
<li><code>TransactionAmt_mod_XXX_is_zero</code> (XXX=50,100,200)</li>
</ul>
<p>Other trivial content:</p>
<ul>
<li>START_DATE
<ul><li>We set <code>transactionDT=0</code> as <code>11/30 00:00:00</code></li>
<li>Estimate approximate date from browser history</li>
<li>Looking at the number of transactions, we saw a pattern of "many transactions for 5 consecutive days → fewer transactions for 2 consecutive days". So we estimate that the day with few transactions is a Saturday or Sunday, then 11/30 fits perfectly.</li></ul></li>
</ul>
<h2>What did not work</h2>
<ul>
<li>Seed Averaging
<ul><li>There was no high effect.</li>
<li>Averaging a model with high AUC in test data and a model with high AUC in train oof worked a little</li></ul></li>
<li>Split model for each <code>ProductCDs</code>.
<ul><li>We thought that the Fraud trend in <code>Cxx</code> and <code>Vxx</code> was different depending on <code>ProductCD</code>.</li>
<li>There were some features created for each <code>ProductCD</code> that helped improve CV & LB, but not enough.</li></ul></li>
<li>Change sample_weight according to the number of transactions</li>
</ul>
|
IEEE-CIS Fraud Detection
|
6th place solution: The Zoo
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The idea is roughly to rank all predictions using only the coverage model, take the nocoverage part and rank again, build a mapping of those ranks to overall ranks and apply it to the nocoverage ranks. Sounds complicated, is complicated, but makes sense ;) </p>
<p>We never subbed the NNs alone as they don't work well on the coverage part. But they did improve the non-coverage part. So can't really say. For the fusing I will elaborate.</p>
<p>Thanks a lot to Kaggle and IEEE to hosting this competition, I hope that we will have another tabular competition soon enough. Big shout out goes to all my teammates <a href="/dott1718">@dott1718</a>, <a href="/christofhenkel">@christofhenkel</a>, <a href="/ebouteillon">@ebouteillon</a>, and <a href="/maxjeblick">@maxjeblick</a> it was really a pleasure to working with them.</p>
<p>From reading the solutions that have already been posted and from expecting what others did, I am pretty sure that most did (similar as in Santander) a very similar thing with some fine-grained tricks. Our solution has maybe 2-3 tricks which might be different to what others did. I am a bit tired, so sorry when not all things are super precise :)</p>
<p><strong>User IDs</strong></p>
<p>As everyone should know by now, identifying users is the key to achieving a good score. We did a lot of trial and error here, and <a href="/ebouteillon">@ebouteillon</a> and <a href="/dott1718">@dott1718</a> were monumental in our identification of users. When we merged we combined our two approaches and in the end <a href="/dott1718">@dott1718</a> crafted a very elegant and well-working solution for user identification. You can find the script of how to do it here (let me call those ids uid6 from now on): <a href="https://www.kaggle.com/dott1718/ieee-userid-proxy">https://www.kaggle.com/dott1718/ieee-userid-proxy</a></p>
<p>Our models used different variations of the user id, with some simpler forms that have much less unique values ranging up to the ids posted before, which are very restrictive but also the most precise ones. </p>
<p>However, now comes the slightly trickier part of choosing the CV setup. There are different ways of doing it, and there are different types of leaks you then introduce in your CV setup. For example, one way is to do a monthly time-split, which is also what we chose to use in the end as imho this CV kind-of represents test data the closest. We chose to look mostly at average CV across folds. The important fact here though is that you have ID overlaps between folds, and you have more overlaps the more folds you have around your validation fold. IDs overlap way more in closer time proximity. So for example, Fold0 (Christmas) and the last fold perform worse than the folds in the middle, because the middle folds have more IDs overlapping with the training folds.</p>
<p>What is then also important is to understand how this overlap will behave in test:</p>
<p><img src="https://i.imgur.com/WmSVsf9.png" alt=""></p>
<p>Here you see different IDs (let’s focus on UID6) and on the x-axis the different months of test. The y-axis is the percentage of unique IDs in this month that we also observe in train. As the performance of the models is way better on overlapping ids compared to non-overlapping ones, we could expect private LB to have much lower score than public LB. This also means that it is more important on private LB to have a well working model for non-overlapping customers that you have never observed before in training.</p>
<p>As a result of this, we decided to split our efforts into both developing strong models for overlapping IDs and non-overlappping IDs. We created for both our CV setup as well for test two indices which both mark the overlaps, and the non-overlaps. For example, in a validation fold those transactions where we observe the ID in the other folds would be marked as overlap, and all others as non-overlap.</p>
<p><strong>Features</strong></p>
<p>To be honest, this is probably our weak point as we did not spend too much time on this. Maybe we could further improve our results with better features, but I don’t know. We use a lot of count encoded features (also the uids), as well as other aggregated features. We also introduce aggregated features on UID level. Overall this should not be too different compared to some public kernels.</p>
<p><strong>Models</strong></p>
<p>We utilize three types of models: LGB, Catboost, and Neural Networks all fitted on monthly CV setup. All models use pretty much the same / similar features with some differences of course. In Catboost it was necessary to include the IDs as explicit categorical variables to make it work. Neural networks of course needed some feature transformations and other tricks. <a href="/christofhenkel">@christofhenkel</a> and <a href="/maxjeblick">@maxjeblick</a> did most of the work there, so they are probably the best to elaborate on this further. We additionally did a bunch of pseudo-tagged LGB models where we used predictions from only overlapping UIDs (because we are confident with them) as either soft or hard labels. We use them partly in the blend, but hard for me to say if they help much.</p>
<p><strong>Blending</strong></p>
<p>Our blending is now probably the interesting part. First of all, we decided to do hillclimbing blending, mostly using mean or gmean on raw predictions. But instead of optimizing for overall CV, we optimize separately for overlapping and non-overlapping CV. So first, we only optimize the CV for those indices which we have marked as non-overlapping, and then we do the same for the overlapping indices. The cool thing is now that different models come into the mix for those different indices. </p>
<p>LGBs usually work well for both parts, so they are part of both overlapping and non-overlapping blends. Neural networks help significantly for the non-overlapping part, but not so much for the overlapping part because they don’t overfit heavily on the IDs and it is even hard to force them to. Catboost models work well on both parts, but have some issues on the non-overlaps leaking too much which doesn’t work well on test, so they are only part of overlaps. To summarize: Overlap Blend consists of LGBs + Catboosts and nonoverlap blend consists of LGBs + NNs.</p>
<p>Now the next tricky thing is how to combine them? One way that surprisingly works well is to just fuse them by raw probability. Actually, most of our final models just do it like that. We had some other fancy methods for fusing, starting from simple mean adjustments, towards a tricky rank-based fusing approach that <a href="/christofhenkel">@christofhenkel</a> came up with. The idea is roughly to rank all predictions using only the coverage model, take the nocoverage part and rank again, build a mapping of those ranks to overall ranks and apply it to the nocoverage ranks. </p>
<p><strong>Post-processing</strong></p>
<p>What further helped us is the following trick. Take all predictions from a customer based on our UID6, and combine the predictions to a single value so that all transactions of a customer have the same value. This gave both a nice boost on CV as well as on LB. I think this is also something that helped us quite significantly on private LB because it makes the ranking much more precise by grouping customers together. One way is to just take the mean of the predictions, what we mostly did is to take 90% quantile of the predictions as this usually worked best on CV.</p>
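<p>A minimal sketch of that per-user smoothing, with <code>uid6</code> and <code>prediction</code> as assumed column names:</p>
<pre><code># give every transaction of a user the same score: the 90% quantile of that user's predictions
sub['prediction'] = sub.groupby('uid6')['prediction'].transform(lambda s: s.quantile(0.9))
</code></pre>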
<p><strong>Final submission</strong></p>
<p>Our blends had a certain random range on both CV as well as LB and as we could not fully decide which one to pick we made a blend of blends by taking rank-average of a bunch of our top blends we had. This was a good choice and it is nearly our top private LB solution we picked by doing so. </p>
<p>In our second submission we tried to gamble a bit more by focusing even more on the non-overlaps. So we did a different CV setup splitting by UID instead of by month, so we had little to no UID leaking in the CV. Then we set the nonoverlap predictions based on a blend of that CV setup. It only scored a bit lower on private LB.</p>
<p>It fitted well together for us when merging. Eric had a version of uids which was quite different to the one we had before that. We then combined it and significantly improved. <a href="/dott1718">@dott1718</a> then step by step further improved it and it is really solid now. He can probably say more to it if need be.</p>
<p>UIDs you also observe in train</p>
<p>I just quickly checked it. Only subbing a simple blend of a bunch of NNs with postprocessing scores 0.9438 public LB and 0.9166 private LB. However, it is unfair as they don't work well for the overlap part. So if I just use them for non-overlap part (and use catboost+lgb for coverage) public LB is 0.9606 and private LB is 0.9343.</p>
<p>There is a parameter in the fit function where you specify which feature is categorical so you specify uid as a feature there.</p>
<p>Hard to say because we decided to just follow up on it as it gave us a boost on CV and immediately worked on LB. I guess it gave at least a boost of 0.0005. It was actually quite important to us as it allowed us to use NN models we had that didn't work well on overlap and remove overfitting catboost models from nonoverlap.</p>
<p>We tried stacking and it worked way worse than blending for us. Interesting.</p>
|
IEEE-CIS Fraud Detection
|
9th place solution notes
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Mainly I was selecting features in a loop. Start with one feature and go through all, calculating AUC. Select the one with highest AUC. Then try adding second feature to that one and find which gives the best increase in AUC. And so on, selecting one feature in each loop. </p>
<p>On big dataset that takes time. But negative downsampling, higher learning rate, limited number of rounds and other techniques can help with that. I used such technique for base set of features up to reasonable amount as it takes more and more time with each iteration. The rest I added simply by checking if it's improving my CV score.</p>
<p>I had several models with different numbers of features, starting from about 30 features to models with several hundreds of features.</p>
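<p>A rough sketch of that greedy forward-selection loop (details such as negative downsampling, parameters and early stopping are omitted or assumed):</p>
<pre><code>import lightgbm as lgb
from sklearn.metrics import roc_auc_score

def forward_selection(X_trn, y_trn, X_val, y_val, candidates, max_features=30):
    params = {'objective': 'binary', 'learning_rate': 0.1, 'verbosity': -1}
    selected, best_auc = [], 0.0
    while len(selected) &lt; max_features:
        best_feat = None
        for feat in candidates:
            if feat in selected:
                continue
            cols = selected + [feat]
            model = lgb.train(params, lgb.Dataset(X_trn[cols], y_trn), num_boost_round=200)
            auc = roc_auc_score(y_val, model.predict(X_val[cols]))
            if auc &gt; best_auc:
                best_auc, best_feat = auc, feat
        if best_feat is None:  # no remaining feature improves validation AUC
            break
        selected.append(best_feat)
    return selected
</code></pre>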
<p>Pseudo labelling helps if the quality of the labels is high. Here it was high thanks to identification of users. I was able to extract more than 100000 rows from the test set with a corresponding transaction in the train dataset (which means I can take the label from the corresponding train dataset entry and use it as a pseudo label). When I applied the same extraction to a local validation set, more than 0.999 of the pseudo labels were correct. I suspect that for the private dataset it had a bigger error, but I haven't checked that yet.</p>
<p>Thanks, Kha!</p>
<p>Yes, your story really looks very similar to ours :)</p>
<p>Thanks! We tried some NNs, but their scores were relatively much lower than for tree based models and they didn't help in overall ensemble.</p>
<p>Thanks and congrats to you, too! I'm not mad :) Just a bit disapoinited. That's good experience, will analyze later what could have been done better to use that for next time.</p>
<p>Regarding your question - for test predictions full training dataset was used.</p>
<p>I know, 9th place out of 6000+ is a very good place. But well... it doesn't exactly feel so after being 1st on public LB for quite a long time :)
Anyway, this was an interesting competition and - it's better to fly high and fall down, rather than never try flying.</p>
<p>First I would like to congratulate top teams and especially <strong>FraudSquad</strong> for well deserved 1st place with big gap from others!
Also special congratulations to some people I know better than others - my ex teammates from other competitions @johnpateha and @yryrgogo for getting in gold!</p>
<p>And of course, the biggest thanks to my teammate @kostoglot for great and professional teamwork!</p>
<h2>Some key points from our solution</h2>
<ul>
<li>We heavily used identification of transactions belonging to the same user (as I think all of the top teams did).</li>
<li>To identify users, a very helpful feature was ("2017-11-30" + TransactionDT - D1) - this corresponds to some date of the card (like the first transaction date, etc.) and is the same for all transactions of one card/user. Similar applies to several other D features.</li>
<li>To check identified users, very helpful features were V95, V97, V96, which are previous transaction counts in the day, week, month, and features V126, V128, V127 - the cumulative amount of transactions made by the user in the previous day, week, month.</li>
<li>We have different approaches on how to integrate user identification in solution - Konstantin used them as features, I used them in postprocessing and pseudo labelling.</li>
<li>We used cv setup taking first training data set 3 months for training, 1 month gap removed and 2 last months for validation. This cv setup correlated quite good with public LB, but as it turns out, in some cases it lied to us regarding private set and at some points we moved in a bit wrong direction resulting in falling down to 9th place.</li>
<li>Models used were LightGBM, Catboost and XGBoost.</li>
</ul>
<p>Good luck!</p>
<p>Thanks!
Basically I concentrated more on reducing the number of errors of my postprocessing instead of finding the maximum number of users. As each incorrect label in postprocessing does big damage, I used it very carefully. But sure, maybe it would be possible to reach a better score with V307 and friends.</p>
<p>Thanks, Evgeny!
At the beginning I tried also target encoding and using card id directly as categorical feature. Postprocessing showed the best CV scores (even significantly better) so I stayed with that. And public LB agreed on that, bringing me to the top :) So I had no reasons to discard postprocessing.
Sure, who knows, maybe at the end target encoding would yield better private score - maybe after some days I'll try that with late submissions.</p>
<p>I didn't concentrate much on parameters tuning for feature selection. Just used some "good enough" set from high scoring public kernel with minimal modifications for speed-up (number of rounds, leaves, depth). If feature has some strong value in it, in most of cases it will be found, whatever the parameters are. Maybe except some extreme values.</p>
<p>I used both, early stopping (typically triggers only on initial iterations with small number of features) and fixed rounds (to keep reasonable speed when number of features grows). For early stopping I tried different approaches - OOF, holdout. Holdout worked better for me here.</p>
<p>Different validation set.</p>
<p>Thanks!
In almost all cases each user has only isFraud=1 or isFraud=0 for all it's transactions.
So for all users having transactions in both - train and test data, we can assume that transactions in test have the same label as transactions in train for the same user.
I took all such transactions from test data, assigned them labels from corresponding transactions in train data and added these entries as additional training data.</p>
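<p>A hedged sketch of that step, assuming a <code>uid</code> column has already been built for both train and test (names are illustrative, not the author's code):</p>
<pre><code>import pandas as pd

user_label = train.groupby('uid')['isFraud'].max()           # per-user label from train
matched = test[test['uid'].isin(user_label.index)].copy()    # test rows of users seen in train
matched['isFraud'] = matched['uid'].map(user_label)          # pseudo label
train_extended = pd.concat([train, matched], ignore_index=True)
</code></pre>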
<p>When stated this way - yes, that could be the case indeed. Can't confirm it before deeper analysis, but I wouldn't be surprised, given that CV lied to us also in several other cases.</p>
<p>I enjoyed it, too. Even despite he won. This time :)</p>
<p>Thanks! Congratulations on 2nd place!</p>
<p>I did only quick look at submission scores yet, but it doesn't look that postprocessing did anything bad. It gave score boost even for models with user features already included. And boost seems to be even bigger for private - at least for some random models I find in submission history.</p>
<p>The issue was that CV (and public LB agreed with it!) showed us better scores for models which were not better on private. This resulted in non-optimal weights in the ensembles and in selecting not the best submissions as final ones. Probably a 2-2-2 CV setup could have been better than the 3-1-2 we used.</p>
|
IEEE-CIS Fraud Detection
|
Get silver medal using public kernel mainly.
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>This is my second competition, and first time committed from the biginning.</p>
<p>I tried to make good features, but unfortunately it failed.
So, I tried to use public kernels whose prediction distributions were different.
The links I mainly used were the following.
These two links had especially high weight in my blending (20% or more).</p>
<p>Litemort
<a href="/litemort">@litemort</a>
<a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108364#latest-634308">https://www.kaggle.com/c/ieee-fraud-detection/discussion/108364#latest-634308</a>
Catboost
<a href="/kyakovlev">@kyakovlev</a>
<a href="https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv">https://www.kaggle.com/kyakovlev/ieee-catboost-baseline-with-groupkfold-cv</a></p>
<p>I also combine over 15 submissions in public kernel for final submission, using this kernel.
<a href="https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x">https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x</a></p>
<p>What I also cared about was not using kernels whose data distribution is different between private and public. I also made bins for features whose distribution is different between train and test and built the model again.
I might not be able to say that this is all my own ability, so I will make more efforts.
I have to say thank you to all the people making kernels and to my teammates.</p>
<p>Thanks very much.</p>
<p>Thank you, litemort has high accuracy and a different distribution from other algorithms. I hope some day I can also use it for regression problems.</p>
|
IEEE-CIS Fraud Detection
|
IEEE-CIS 12nd solution 中英注释
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>大家好,很抱歉来晚了
但是我还是坚持把我们丑陋的代码开源出来。和其他的参赛队伍一样我们也发现了D2的秘密,关于这个magic其他帖子已经说了很多了,我这里就不在赘述。
特征工程上,我们将productCD按照类别进行了拆分,对D特征进行scale的操作,并采用其他常规encoding方法。
我的好友<a href="https://www.kaggle.com/newbielch">@ADAM</a>进行了大量的基础工作,并且进行了一些奇妙的特征工程,值得一提的是这是他第一次参加kaggle竞赛,为他鼓掌。我在赛程的后半场加入并发现了D2的秘密。
我将我们的工作分为5个部分</p>
<p>part 1 介绍了整个特征工程,part 2-4构建3个不同的模型,part 5进行了模型融合,后验发现XGB模型对融合的提高非常小,我们最好的score也是0.8lgb+0.2cat构成的。链接如下,请大家批评指正。</p>
<p>Hello everyone, I am sorry to be late.
But I still insist on opening up our ugly code. Like other participating teams, we also discovered the secret of D2. People have already said a lot about this magic on other posts. I won't go into details here.
In feature engineering, we split the productCD by category, scale the D features, and use other conventional encoding methods.
My friend [@ADAM] (<a href="https://www.kaggle.com/newbielch">https://www.kaggle.com/newbielch</a>) has done a lot of basic work and some wonderful feature engineering. It is worth mentioning that this is his first time participating in a Kaggle competition, so applaud him. I joined and discovered the secret of D2 in the second half of the race.
I divide our work into 5 parts.</p>
<p>Part 1 introduces the whole feature engineering, parts 2-4 build 3 different models, and part 5 does some blending. We find that the XGB model brings very little improvement to the blend; our best score is also 0.8 lgb + 0.2 cat. The links are as follows:</p>
<p><a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-1">IEEE-CIS 12nd solution 中英注释 part 1</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-2-lgb">IEEE-CIS 12nd solution 中英注释 lgb part 2</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-3-cat">IEEE-CIS 12nd solution 中英注释 cat part 3</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-4-xgb">IEEE-CIS 12nd solution 中英注释 xgb part 4</a>
<a href="https://www.kaggle.com/maxduan/ieee-cis-12nd-solution-part-5-blending">IEEE-CIS 12nd solution 中英注释 blending part 5</a></p>
<p>my pleasure</p>
|
IEEE-CIS Fraud Detection
|
LB9564 NN solution overview
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks! It's my honor to team up with you, a resourceful and trustable teammate. We can't make it without your magic feature! Now let's enjoy our first gold medal!</p>
<p>Thanks, Chris! My finding is that adding UID to the first layer only slightly improved local CV, but adding it to the last layer provided a significant boost</p>
<p>Thanks, Soonhwan! This is consistent with the results I saw although I don't have a clear idea of the reason why this is the case.</p>
<ol>
<li>Let's say, if I have a feature of cardinality 10000 (10000 different classes), then I will just keep the most frequent 100 classes (the specific number is also a hyperparameter to tune) and label all other classes as 'others'. Then the cardinality is reduced from 10000 to 101 (see the sketch after this list).</li>
<li>For FM, I use the implementation of FM from DeepFM(DeepFM is a DNN-model which contains a FM part) in this library: <a href="https://deepctr-doc.readthedocs.io/en/latest/Features.html#deepfm">https://deepctr-doc.readthedocs.io/en/latest/Features.html#deepfm</a>. </li>
</ol>
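<p>An illustrative sketch of the cardinality reduction from point 1 (the column name is just an example):</p>
<pre><code>def reduce_cardinality(series, top_n=100):
    # keep the top_n most frequent classes, bucket everything else into 'others'
    top_values = series.value_counts().nlargest(top_n).index
    return series.where(series.isin(top_values), 'others')

train['card1_reduced'] = reduce_cardinality(train['card1'].astype(str), top_n=100)
</code></pre>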
<p>I mentioned in another discussion that I trained NN with 9548 CV (unshuffled Kfold) and 9564 LB and promised to post the solution. Here it is:</p>
<p>Version 1:
- Features: all raw categorical features + all raw numerical features (except V features)
- Preprocess:
* Numerical: Log transformation first, then apply standardization; Fill Nan by mean and add boolean indicators for all numerical features to indicate whether a certain feature is missing or not
* Categorical features: Decrease the cardinality of all features first and then apply one-hot-encoding
- Optimizer: Nadam with Keras default setting
- Batch size: 2048
- Validation strategy: 5 fold unshuffled
- Early stopping based on AUC on Val dataset
- Score: LB 931+
- Architecture: DNN with 3 layers (256-256-256, dropout = 0.3, use batch normalization <strong>after</strong> the activation; relu as the activation function)</p>
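<p>A minimal Keras sketch matching the Version 1 description (3 dense layers of 256, batch normalization after the activation, dropout 0.3, Nadam); the input width and preprocessing are assumptions:</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(n_features):
    inp = keras.Input(shape=(n_features,))
    x = inp
    for _ in range(3):
        x = layers.Dense(256, activation='relu')(x)
        x = layers.BatchNormalization()(x)   # BN after the activation, as described
        x = layers.Dropout(0.3)(x)
    out = layers.Dense(1, activation='sigmoid')(x)
    model = keras.Model(inp, out)
    model.compile(optimizer='nadam', loss='binary_crossentropy',
                  metrics=[keras.metrics.AUC(name='auc')])
    return model
</code></pre>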
<p>Version 2:
- Extra features added: all numerical engineered features (Groupby features, count features, gap features, etc) that used by my lgbm model
- Preprocess method:
* Same as version 1
- score LB 937+</p>
<p>Version 3: here is the real magic
- Extra features added:
* uid: a user identifier engineered by my teammate. Just like the magic feature found by top solutions
* uid-based numerical features: Groupby(uid)[fea].transform('mean'); Groupby(uid)[fea].transform('std') for fea in all numerical_features
- Preprocess method:
* uid-based numerical features: same as the previous version
 * uid: This is a high cardinality categorical feature; I tried multiple approaches to handle this and the best approach I found is to use an embedding layer (size = 4) and concatenate it with the last dense layer of the NN
- Score LB 9556</p>
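<p>A sketch of the Version 3 uid handling (a size-4 embedding concatenated with the last dense layer); apart from the embedding size, the widths and names are assumptions:</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

def build_dnn_with_uid(n_features, n_uids, emb_size=4):
    num_inp = keras.Input(shape=(n_features,), name='numeric')
    uid_inp = keras.Input(shape=(1,), name='uid')
    x = num_inp
    for _ in range(3):
        x = layers.Dense(256, activation='relu')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.3)(x)
    uid_emb = layers.Flatten()(layers.Embedding(n_uids, emb_size)(uid_inp))
    x = layers.Concatenate()([x, uid_emb])   # join the uid embedding with the last dense layer
    out = layers.Dense(1, activation='sigmoid')(x)
    model = keras.Model([num_inp, uid_inp], out)
    model.compile(optimizer='nadam', loss='binary_crossentropy')
    return model
</code></pre>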
<p>Version 4:
- Extra features added: some engineered categorical features used by lgbm
- Preprocess method:
- Feed these categorical features into a FM (factorization machine) with a high regularization on the embedding. Add the output of FM to the output of DNN
 - Why did I use this approach? I tried other approaches like one-hot encoding with decreased cardinality, an embedding layer in the DNN, a DeepFM model, etc. All of these methods significantly decreased local CV
 - Score: 0.002 boost in local CV; slight decrease in LB</p>
<p>Something to notice:
- In each version, I actually trained 3 DNN with the same architecture with different loss functions (logloss, focal loss, tflearn.objectives.roc_auc_score) and blend them. This practice gave me a boost in local cv (about 0.0015)
- Applying the DAE+NN learning strategy improved the score from 9556 to 9564. One thing to notice is that I slightly modified the approach proposed by the original author: I trained the DAE with a subset of all features, and the second-stage NN is trained with both the original features and the latent feature representation.
- The original author of this approach: <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629</a></p>
<p>What can be further improved:
- Replace Nadam with SGD and train more epochs and smaller batch sizes. This will surely improve the performance but I cannot afford the computational burden needed
- While writing this post, I suddenly realized that I should have used a rank blend instead of a weighted average to blend the 3 DNNs. Rank blending should improve performance since the metric is AUC.
- I found enough evidence that restricting width and increasing depth (use residual connection when depth > 7) of the NN can improve NN performance on this dataset. But I didn't have enough time to further explore this strategy</p>
<p>Special thanks to this kernel. I learned a lot from it: <a href="https://www.kaggle.com/abazdyrev/keras-nn-focal-loss-experiments">https://www.kaggle.com/abazdyrev/keras-nn-focal-loss-experiments</a></p>
<p>PS: I'm in the midterm season and don't have enough time to prepare a clean kernel. Also, I may be too busy to answer the questions at the time being. Will post a kernel and answer all questions after the end of all my midterms.</p>
|
IEEE-CIS Fraud Detection
|
My first competition/ silver medal
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for organizing this exciting competition and thanks to my teammates for their incredible performance.
I am a first-year PhD student with a background in machine learning, more specifically recommender systems. I just started my PhD, so while busy with classes and looking for a good research subject, I decided to give Kaggle a try to see what the state of the art is for fraud detection/tabular data in general. I tried to play around with different competitions before, but this one was the first serious one.</p>
<p>My main goal was to learn not to compete and I think everyone should set their goal at the start of the competition so that they are really focused on achieving it. </p>
<p>As I am coming from an academic environment, this competition taught me a very important skill: validation strategy design. After competing here, I am questioning many of my previous experiences related to my research and other papers that I have seen that claim to be better than others. In fact, it is so easy to tune your "new" model until it overfits your local validation and then you can claim to be advancing the state of the art. </p>
<p>Another important thing that I learned as a researcher is how to present your work to the public. During the competition, I was the public. I looked at many research projects and GitHub repos and read many papers trying to find state-of-the-art models for different tasks. I found these key practices that should be present in decent research:
- Always attach your code to your paper
- Document your GitHub repo
- Provide easy installation / importing guidelines
- Provide a notebook explaining an example with your work</p>
<p>Now coming to the competition. We tried a gazillion things as we worked on it full time for at least 2 months. What really worked was feature engineering using groupby and lag methods and some feature interactions (C features); we also built some features using NMF on several groups of features. We tried different unsupervised methods like DAE and regular AE, but they did not give good results. We also tried Gaussian Mixture Models on each "user" and tried to single out anomalies. But in the end, most of our features are from public kernels with some modifications and cleaning</p>
<p>We tried to find the "magic" in the data. It did not work out well and we did not spend too much time on it as we were trying to learn other methodologies. </p>
<p>One thing that I wanted to try and did not have time to is to try some Recommender systems models for fraud detection. Basically in RS, the goal is to rank items based on some user-item similarities. What I wanted to try is to rank transactions based on user dissimilarities/anomalies and try Bayesian Personalized ranking as a learning algorithm which aims at maximizing the AUC. It was far fetched but I think it would have been cool.</p>
<p>Our final solution was a blend of 6 of our best models with different hyperparameters (scoring 0.949-0.95), 2 NNs scoring 0.91 on the LB and one catboost scoring 0.94 (nothing fancy)</p>
<p>Finally, I want to say thanks to everyone for their insightful discussions, also thanks to the trolls for the spice and the fun.
Next challenge: Image data competition.</p>
<p>Thanks :D</p>
<p>Thanks :D</p>
<p>Thank you. Nothing fancy, it looks like this:
train_df[c+'_lag'] = train_df[c]/train_df.groupby(['id'])[c].shift(1)
We mainly wanted to capture the difference between current and previous values. We did it for different columns like the Cs, TransactionAmt and some V features</p>
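<p>The one-liner above, written out as a small helper for readability (a sketch; the <code>id</code> column name and the example feature list are placeholders, not from the post):</p>
<pre><code>import pandas as pd

def add_lag_ratio_features(df, cols, id_col='id'):
    # ratio of the current value to the previous value for the same id
    for c in cols:
        df[c + '_lag'] = df[c] / df.groupby([id_col])[c].shift(1)
    return df

# add_lag_ratio_features(train_df, ['TransactionAmt', 'C1', 'C2'])
</code></pre>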
<p>Thanks :D</p>
<p>Thank you. I replied in the top comments. We did not exploit it perfectly as we were not able to identify the cards but it helped a little bit</p>
<p>Congrats to you for the well-deserved win and your humble attitude. </p>
<p>Thank you :D</p>
<p>You're welcome and thanks :D </p>
<p>Thanks. Our user ids were not as accurate as the top teams', but yeah, I believe lag and lead features were very helpful. In brief, if you are used to spending 50$ and suddenly your card is charged with 5000$, then it is probably a fraud.</p>
<p>Thanks and congrats on becoming Competition Experts. Next step: Master. Go Tunisia</p>
|
IEEE-CIS Fraud Detection
|
Silver Medal in My First Kaggle Competition
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Received a silver medal (54/6438) in my very first Kaggle competition! I know it's not the best (not as good as gold), but as someone who just started learning Machine Learning on Coursera not so long ago, and who only used logistic regression in real life, I am really happy where I am!</p>
<p>Taking this chance, I have a few special thank you to say.</p>
<p>Thank you @kyakovlev for your wonderful baseline kernels. I have to admit that without your baseline, I would have had little idea where to start. People say you are the MVP of this competition and I couldn't agree more. </p>
<p>Thank you @cdeotte for your discussion thread on systematic feature engineering and all of your kind replies to other Kagglers' questions. Learned so much.</p>
<p>And last but certainly not the least, thank you @raddar for your hint, and for showing me there was still time to build a good solution, when I was out of ideas and about to give up, by getting up to top 1% on leaderboard so fast although being late to the party. That was tremendous encouragement.</p>
<p>And for others who kindly shared their kernels and insights, I made sure I upvoted them. </p>
<p>Happy Kaggling!</p>
|
IEEE-CIS Fraud Detection
|
Solution and code from 21st
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>hey! It's been a long time from Mercari. :D</p>
<p>Haha, I played dota when yyf is famous.</p>
<p>Teams placing 20+ don't usually post solutions, but something happened that made me feel it is necessary to do this. <br>
I want to say that I don't know whether the shakeup is a coincidence or not. I post my track record of this competition to help anyone who wants to learn about it. People may have different opinions about this whole mess. You could ask about the solution or the code with comments. <strong>Please just don't argue here.</strong></p>
<p>All my track files are in this github:<a href="https://github.com/white-bird/kaggle-ieee">https://github.com/white-bird/kaggle-ieee</a> <br>
if you wanna only run the best model, you should run: <br>
f3_deepwalk.py, <br>
fd_key.py, <br>
fe_V307.py, <br>
fi_yu.py, <br>
model26.ipynb, <br>
model32.ipynb, <br>
<a href="https://www.kaggle.com/whitebird/ieee-internal-blend">https://www.kaggle.com/whitebird/ieee-internal-blend</a> </p>
<p>LB 9500-9520 : I spent most of my time here trying to dig into count/mean/std features, which didn't work. </p>
<p>LB 9520-9580 : I realized the bad guys steal cards and make transactions for money, but cards always have some protections, like a maximum amount per transaction. So they need to make many similar transactions on one card over a long period, or on many cards in a short time. That's the key point of this competition ----- it is the series of samples that makes it fraud, not a single sample.
We need to find some "keys" to group the data: </p>
<p>1) V307. There are too many V features. Some are ints and some are floats. It's not hard to find out that the int ones count how many times this card had transactions with the same website/seller, and the float ones are the accumulated amount. Obviously, int + cardid can easily cause misjudgment. If you have some baseline models, I recommend the eli5 library to find which features are most important; this led me to V307. You can find this EDA in model14.ipynb. I used fe_V307.py to process the feature. </p>
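<p>For completeness, a hedged sketch of inspecting feature importances with eli5 as suggested above (the <code>lgbm_model</code> variable and the <code>top=30</code> choice are my assumptions; this also assumes eli5's gradient-boosting support is installed):</p>
<pre><code>import eli5

# lgbm_model: an already-fitted LightGBM/XGBoost/sklearn model (hypothetical name)
explanation = eli5.explain_weights(lgbm_model, top=30)
print(eli5.format_as_text(explanation))
# In a notebook, eli5.show_weights(lgbm_model, top=30) renders an HTML table instead.
</code></pre>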
<p>2) deviceinfo & id. Different cards have the same amount at the same addr with the same device. Isn't it strange? So I used fd_key.py to extract them. </p>
<p>3) cardid + D. My teammates found this. Everyone knows the D features minus days mean a lot. We found that D2 and D15 run through time best by taking the max over all the data, and D2 and D15 have the biggest values. fi_yu.py </p>
<p>4) amt + days + addr1. It is simple but easy to misjudge. </p>
<p>LB 9590-9600: So we all know a fraud sample is fraud because its similar samples are fraud. Why not let the influence of fraud spread more aggressively? Making a two-stage model improved the score by 0.001: model26.ipynb + model32.ipynb </p>
<p>LB 9600-9630: This came from a bug. I grouped the keys above and got big improvements offline. However, there was one key, cardid + D, that behaved badly online. I spent 2~3 days finding out that I had grouped train and test separately. It improved the online score when I grouped the key over all the data. It means this key does not work like the other keys for making group features, but rather as an embedding key. Then I wrote some rules to process the results with kernels. It's easy to understand but gives a huge boost:<a href="https://www.kaggle.com/whitebird/ieee-internal-blend?scriptVersionId=21198581">https://www.kaggle.com/whitebird/ieee-internal-blend?scriptVersionId=21198581</a> </p>
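<p>A minimal illustration of that bug fix, under my own assumptions (function and column names are placeholders): group statistics computed on train and test separately give inconsistent encodings, while grouping over the concatenated data keeps the key consistent across both sets.</p>
<pre><code>import pandas as pd

def add_key_count(train, test, key_cols):
    # Build the grouping key over the CONCATENATED data, not per-dataset,
    # so that the same key value means the same group in train and test.
    full = pd.concat([train[key_cols], test[key_cols]], axis=0, ignore_index=True)
    key = full.astype(str).apply('_'.join, axis=1)
    counts = key.value_counts()
    train['key_count'] = key.iloc[:len(train)].map(counts).to_numpy()
    test['key_count'] = key.iloc[len(train):].map(counts).to_numpy()

# add_key_count(train, test, ['card1', 'addr1', 'D15_day'])  # hypothetical key columns
</code></pre>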
<p>And there are other small improvements I haven't mentioned. Post here if you have any question while reading/running my code. </p>
|
IEEE-CIS Fraud Detection
|
Summary of my competition (43rd)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<p>Thank you <a href="/cdeotte">@cdeotte</a> . I’m glad you enjoyed it. I not only enjoyed but also learned a lot from your commentaries. You are really good at explaining things.
The 0.0004 boost I talk about in the write-up is over your postprocessed final solution (0.946468 on private LB, 0.96798 on public LB), but I checked now and lgbm_meta_model.csv is already better than you say (0.945237/0.967396), so probably I’m not comparing what I thought I was. Anyway, what is clear to me is that nobody got to understand this dataset better than your teammate.</p>
<p>Thanks!</p>
<p>Thanks!</p>
<p>Thanks <a href="/fatihozturk">@fatihozturk</a> I had to go solo too as I needed to get more experience before being in a team, but as you say in your write-up it’s difficult to stay silent (my daughter is starting to think I’m crazy 😁) </p>
<p>First of all, I want to congratulate @kyakovlev and @cdeotte . Even before the end results you were the stars of this competition with your amazing kernels and topics. A well deserved first prize, I'm really happy for you. Congratulations as well to all the other medalists, especially @fatihozturk for his solo gold medal and for becoming GM.</p>
<p>Second, I want to thank all the people (including the hosts) who have contributed with their kernels, topics and commentaries to make this a brilliant competition. I learned so much and I feel so proud to be part of this great community that I want to apologize for not giving anything back. I was so absorbed, with not enough time to read about all the new methods, to create new features and to run the models, that I wasn't able to participate as I would have wanted. I'll try to do better in the next one. </p>
<p>On a personal note, I feel very happy not only for the silver medal but because I learned a lot. In my previous (and first) <a href="https://www.kaggle.com/c/santander-value-prediction-challenge">competition</a> I also got silver, but it was because I took advantage of a data leak that had been made public. I worked a lot trying to exploit that leak but I didn’t improve my data science skills at all. I ended the competition very tired and, even after the medal, a bit demotivated, so it took me more than one year to come back here. With that experience in mind, my idea when I entered this competition was not to care about the score but to use it as a way to learn and practise (for example Python). As I got to a good position on the leaderboard very early on, I didn’t stick to the plan (I kept using R), but I think that I got to know very valuable methods (especially around CV). </p>
<h3>Finding the users</h3>
<p>Since I read this commentary from the organizers:</p>
<blockquote>
<p>The logic of our labeling is define reported chargeback on the card as fraud transaction (isFraud=1) and transactions posterior to it with either user account, email address or billing address directly linked to these attributes as fraud too.</p>
</blockquote>
<p>I knew that the key to a good score was to be able to single out unique users. I started to create a script to identify users using the card variables and TransactionAmt, address, email, D1, D3, V279, V280, V306, V307 and V308. For the rest of the competition I kept improving the script to get as many users as possible without getting false positives (countless users for the same card and address combination and the month gap between train and test sets made it difficult). You can see it <a href="https://www.kaggle.com/artmatician/script-for-finding-unique-card-transactions">here</a>. It includes a part where it imputes D1 NA values using the same logic as for finding users. </p>
<p>It’s not a very well coded script, not optimized and very slow (it takes almost 3 days to get all the users), as it checks each transaction looking for a possible next one. But I think it does a good job identifying same-card transactions. Using my <a href="https://www.kaggle.com/artmatician/users-ids">list of userIds</a> to postprocess (average of predictions for the same ID) the preprocess solution of the winners (<a href="https://www.kaggle.com/kyakovlev/ieee-submissions-and-uids">lgbm_meta_model.csv</a>) gets a 0.0004 boost over their postprocess solution. It doesn’t find users with different cards as others did (<a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111320#latest-642831">Topic1</a>, <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/111306#latest-642921">Topic2</a>) though.</p>
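<p>A small sketch (not the author's script) of the postprocessing step described above: average the model predictions over transactions that share the same userID, leaving transactions without a userID untouched. Column names are assumptions.</p>
<pre><code>import pandas as pd

def average_by_user(sub, uids):
    # sub:  ['TransactionID', 'isFraud'] - model predictions
    # uids: ['TransactionID', 'userID']  - the reconstructed user ids
    merged = sub.merge(uids, on='TransactionID', how='left')
    averaged = merged.groupby('userID')['isFraud'].transform('mean')
    merged['isFraud'] = averaged.fillna(merged['isFraud'])  # keep originals where no userID
    return merged[['TransactionID', 'isFraud']]
</code></pre>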
<h3>"Magic features"</h3>
<p>Using a public R kernel and the list of users to postprocess it, I was able to get to a good position, even fifth after a script upgrade. Then I started to create my own models, but I wasn’t able to improve much and a lot of people were getting ahead, so I became obsessed with trying to find the “magic features”. Only at the very end of the competition did I realize that I already had them and only needed to use the userID to create aggregated features. I don’t know why I didn’t do it earlier, as I had always figured that would be a possible boost. When I think about it, and how this stole time I could have spent creating more features and diverse models, doing blending and getting to a better position, I feel very stupid, so I prefer to concentrate on the positive: the experience, the medal and, for example, having chosen the final submissions well (not the ones with the better public LB but worse private LB).</p>
<h3>Cross validation</h3>
<p>I tried different approaches during the competition but at the end I used a <a href="https://www.kaggle.com/c/ieee-fraud-detection/discussion/108616#latest-634925">downsampled</a> last 20% holdout for trying new features and a month GroupKfold to validate and create submissions. I gave great importance to the results of the first fold (December).</p>
<h3>Blending</h3>
<p>I kept improving my single model till the very end, so I had no time (or experience) to create new models and do proper blending. I only used <a href="https://medium.com/data-design/reaching-the-depths-of-power-geometric-ensembling-when-targeting-the-auc-metric-2f356ea3250e">power blending</a> with some very similar models (a change of hyperparameters or one or two different features). Choosing 2 or 4 as the power, I got a better public LB and OOF AUC but not a better score for the December fold.</p>
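<p>A hedged sketch of power blending as I read the linked article (my assumption: raise each model's predictions to a power p, e.g. 2 or 4, then average, which emphasizes confident predictions under an AUC metric; the function name is mine):</p>
<pre><code>import numpy as np

def power_blend(preds, p=2.0):
    # preds: list of prediction arrays, one per model, all in the same row order
    preds = np.asarray(preds)            # shape: (n_models, n_rows)
    return np.mean(preds ** p, axis=0)

# blend = power_blend([pred_model_a, pred_model_b, pred_model_c], p=4)
</code></pre>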
<h3>Postprocessing:</h3>
<p>I did vertical blending by userId using the max of the predictions, or averaging this value with the individual prediction. Both increased the score, but the former got better results for the December fold and the latter for the OOF AUC and the public LB.
I also replaced the submitted prediction with 1 if the user's last transaction in the train dataset was fraud. I tried to do the same for not-fraud but I got worse results. </p>
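<p>A sketch of those two postprocessing tricks under my own assumptions about column names (it is not the author's code): take the max prediction per userID, then force the prediction to 1 for users whose last train transaction was fraudulent.</p>
<pre><code>import pandas as pd

def postprocess(test_pred, train_data):
    # test_pred:  ['TransactionID', 'userID', 'isFraud'] - predictions plus user ids
    # train_data: ['TransactionID', 'userID', 'isFraud'] - labelled train transactions
    out = test_pred.copy()
    maxed = out.groupby('userID')['isFraud'].transform('max')
    out['isFraud'] = maxed.fillna(out['isFraud'])

    # users whose LAST train transaction was fraudulent get forced to 1
    last_label = (train_data.sort_values('TransactionID')
                            .groupby('userID')['isFraud'].last())
    fraud_users = set(last_label[last_label == 1].index)
    out.loc[out['userID'].isin(fraud_users), 'isFraud'] = 1.0
    return out[['TransactionID', 'isFraud']]
</code></pre>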
<h3>Final submission</h3>
<p>A single LGB model with 140 features (21 of them aggregations by userid).
AUC = 0.956246 / 0.930972 </p>
<p>with postprocessing:
Using max of predictions by userid
AUC = 0.957749 / 0.935836
Replacing with 1s for train fraud users
AUC = 0.958303 / 0.936062</p>
<p>I chose it because it was the one with better AUC for the first fold.</p>
|
IEEE-CIS Fraud Detection
|
Survived the shakeup and earned a silver medal!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Aha! Perfect thanks! The best Public LB solution was also the best in Private! It seems I had two other versions of the same blend (in different ratios) that scored about 0.002x higher than that! Anywayz, still happy...</p>
<p>I am pretty sure this post means very little to most of you when there are top-of-the-top solutions being published out there as I am writing these words. For me though, this comp meant really a lot!! First of all, I survived the shakeup: around me the vast majority of people have gained hundreds of positions, which means that the ones who were up there on the public LB are now much, much worse off. I only lost 30 positions. Secondly, I earned my first solo silver medal and I am super excited about it! I spent so many days and nights studying the analysis of others (and doing my own stuff) and all that paid off! </p>
<p>Thank you all for your wonderful, crazy and inspired EDAs and models. Congrats to the top performers and good luck to all next time! </p>
<p>As for the model? I will not get in much detail since there will be more interesting solutions out there but I'll say this: My first submission was at 0.9534 and was a blend of my own LightGBM and XGboost solutions blended with a highly ranking public kernel at ratios 0.25, 0.25, 0.5 respectively. My second solution (at 0.9417) was the same LightGBM (that scored around 0.949), another version of the XGboost model (that scored around 0.947) and again the same public kernel, in ratios 0.4, 0.3, 0.2. </p>
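<p>A minimal sketch of a weighted blend like the first one described above (file names are placeholders, not from the post; it assumes all three submission files share the same TransactionID order):</p>
<pre><code>import pandas as pd

lgb_sub = pd.read_csv('lgb_submission.csv')      # own LightGBM
xgb_sub = pd.read_csv('xgb_submission.csv')      # own XGBoost
pub_sub = pd.read_csv('public_kernel_sub.csv')   # public kernel

blend = lgb_sub[['TransactionID']].copy()
blend['isFraud'] = (0.25 * lgb_sub['isFraud'] +
                    0.25 * xgb_sub['isFraud'] +
                    0.50 * pub_sub['isFraud'])
blend.to_csv('blend_submission.csv', index=False)
</code></pre>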
<p>At this point I should say that I don't know which solution actually brought me to this private LB position - if anyone knows how I can see that, please let me know! Thanks!</p>
|
IEEE-CIS Fraud Detection
|
The REAL UserID, different from CardID
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IEEE-CIS Fraud Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I do not agree with the many discussions calling the CardID a UserID. It is very different.
A user can use multiple cards, and a fraudulent user will probably use more cards than a regular one.</p>
<h3>Part 1, convincing you to follow me in this idea</h3>
<p>If you are ready to assume that the following transactions have been made from a single user, despite using many different cards, then this discussion (and my notebook) will be of your interest.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fe4cc0c2e2dd227970a1021c4068e2840%2FCapture%20dcran%20de%202019-10-04%2012-30-47.png?generation=1570214279323977&alt=media" alt=""></p>
<p>You can see that the transactions are grouped by color (same color for the same card). You can also see that these transactions were probably made by the same user, because of the very specific amounts, the short period, the DeviceInfo, the "IP" (id19, id20) and the browser.</p>
<p>The cardID column was created thanks to an algo that i share in a kernel (<a href="https://www.kaggle.com/tuttifrutti/isolating-a-cardid">https://www.kaggle.com/tuttifrutti/isolating-a-cardid</a>)</p>
<p>After running the algo that I'm sharing in another notebook, we were able to group these transactions, using some transactions as "bridges" between groups.
Here, the bridge between the green group and the yellow one is the amount/Device/Browser/IP of one of the green group's transactions, which is the same as in the yellow one. The same applies to the pink group.</p>
<p>We've managed to create an algo that reconstructs the UserId based on its transactions. See the results for this example:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fc18ecdd814b52527141530f778828814%2FCapture%20dcran%20de%202019-10-04%2012-50-34.png?generation=1570214327951966&alt=media" alt=""></p>
<h3>Part 2, some results</h3>
<p><strong>1st example</strong>
On this first screenshot we can see:
- Card1 14276 was grouped by CardID with another card. We cannot see the second one on the screenshot, but the variable cardID_count indicates that this group contains 2 transactions (and 2 of them are fraudulent (cardID_sumFraud))
- Same for Card1 8755.
- These two cards were used by the same User, and are grouped with other cards (12 transactions in total by the User, with 12 frauds)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F413dc1fe22a09a630a7e30394de39b17%2FCapture%20dcran%20de%202019-10-04%2014-54-32.png?generation=1570214408842410&alt=media" alt=""></p>
<p><strong>2nd example</strong>
Here we can see that a fraudulent card had 10 transactions (incl. 10 fraudulent), two others had one. <br>
The User grouping succeeded in grouping these cards together along with others that don't appear on this screenshot. </p>
<p>However, we can see that this user has 2 non-fraudulent transactions out of 30 (an aggregation error?)</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F0ea890306f3ba6bb5fbc1d8cea44c1f1%2FCapture%20dcran%20de%202019-10-04%2014-54-10.png?generation=1570214447409779&alt=media" alt=""></p>
<p><strong>3rd example</strong>
This one is also nice: we can see that a card with 9 transactions has been grouped with a single-transaction card. It will obviously be easier to spot single-transaction frauds thanks to this grouping!</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F87232e396cd06955e7fb50ed42975474%2FCapture%20dcran%20de%202019-10-04%2014-54-18.png?generation=1570214547396949&alt=media" alt=""></p>
<p><strong>4th example : most spectacular</strong></p>
<p>According to the grouping, the User made 275 transactions, out of which 231 were fraudulent.
This high value is unique in the whole dataset</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F18c01c090b38fcb6454ae231b1bf93f6%2FCapture%20dcran%20de%202019-10-04%2014-54-14.png?generation=1570214677717746&alt=media" alt=""></p>
<p><strong>5th: most useful example</strong></p>
<p>The UserID that was most represented in the test set, and very fraudulent in the train set, had 144 rows in the test set.</p>
<p>This is the training data of a very big fraudulent group. You can see that some are misclassified (look at the column isFraud); you can easily spot them following the pattern in the C1/C2 columns.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2F10d4018c64b80e3d051b444da6c84322%2FCapture%20dcran%20de%202019-10-01%2000-40-15.png?generation=1570214732899130&alt=media" alt=""></p>
<p>This user had around 150 transactions in the test set. Here there is no isFraud column, but I guess you can easily spot the not-necessarily-frauds by looking at the C columns</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1148719%2Fd883038b0fed483e4a1659b6bbf1d74e%2FCapture%20dcran%20de%202019-10-01%2000-41-23.png?generation=1570214753990242&alt=media" alt=""></p>
<p>I hope that you'll find some value in it! A kernel showing how to compute these groupings is on its way. Imagine the feature-engineering opportunities it can give you (counting the number of cards, creating features to see how hard a fraudster tried to stay hidden (changing devices, IPs, emails, etc.))</p>
<p>here is the kernel:
<a href="https://www.kaggle.com/tuttifrutti/find-users-multiple-cardids-from-the-same-user">https://www.kaggle.com/tuttifrutti/find-users-multiple-cardids-from-the-same-user</a></p>
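<p>The actual grouping logic lives in the linked kernel; below is only a minimal union-find sketch of the "bridge" idea under my own assumptions (the bridge columns, the <code>cardID</code> column and the function name are hypothetical): two cardIDs that share a transaction with the same amount/DeviceInfo/IP signature get merged into the same user.</p>
<pre><code>import pandas as pd

def group_users(df, bridge_cols=('TransactionAmt', 'DeviceInfo', 'id_19', 'id_20')):
    parent = {}

    def find(x):
        # path-halving union-find lookup
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # merge cardIDs that co-occur with an identical "bridge" signature
    bridge = df[list(bridge_cols)].astype(str).apply('_'.join, axis=1)
    for _, grp in df.groupby(bridge)['cardID']:
        cards = grp.unique()
        for c in cards[1:]:
            union(cards[0], c)

    return df['cardID'].map(find)

# df['userID'] = group_users(df)
</code></pre>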
<p>To me also, from a business perspective this is what makes the most sense, and the reason behind this effort.</p>
<p>I agree that it did for some groups (of users) that were tracked by the counts (C) or some cumsums (V). But if you look at the kernel, or the examples in this discussion, you'll find that many tracks of the same user were lost during the process: changing device, then changing card, then changing amount, etc. It was a very interesting challenge to "follow" the user!</p>
<p>Actually I got really intrigued by the discussion you created during the competition, about manually finding the fraudulent groups. This reinforced my idea that spending some time identifying them might pay off!</p>
<p><a href="/cpmpml">@cpmpml</a>, it did. I think that it enables catching the cards that were used only once and that do not stand out through other variables (as their stats (counts (C), V, etc.) look like those of regular cards used only one time).</p>
<p>Anyway, before knowing whether it could bring some value to the model, I thought that it would make a lot of sense from a domain perspective!</p>
<p><a href="/cdeotte">@cdeotte</a> , I was talking about people actually using the word 'user'; it indeed created a big confusion on the forums. <a href="/kyakovlev">@kyakovlev</a> seems to have understood the data like no one else, I would love to see how he managed to create all these groups!</p>
|
IJCNN Social Network Challenge
|
How we did it (IND CCA)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IJCNN Social Network Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><meta http-equiv="content-type" content="text/html; charset=utf-8" rel="nofollow" rel="nofollow" ><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; font-family: 'Times New Roman'; font-size: medium; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; font-family: 'Times New Roman'; font-size: medium; "><span id="internal-source-marker_0.12246124283410609" style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Hi everyone,</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">This is Arvind Narayanan from the IND CCA team, and I’d like to share our techniques.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">First things first: in case anyone is wondering about our team name, we are all computer scientists, and most of us work in cryptography or related fields. IND CCA refers to a <a href="http://en.wikipedia.org/wiki/Ciphertext_indistinguishability#Indistinguishability_under_chosen_ciphertext_attack.2Fadaptive_chosen_ciphertext_attack_.28IND-CCA.2C_IND-CCA2.29" mce_href="http://en.wikipedia.org/wiki/Ciphertext_indistinguishability#Indistinguishability_under_chosen_ciphertext_attack.2Fadaptive_chosen_ciphertext_attack_.28IND-CCA.2C_IND-CCA2.29">property</a> of an encryption algorithm. 
Other than that, no particular significance.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">I myself work in computer security and privacy, and my specialty is de-anonymization. That explains why the other team members (Elaine Shi, Ben Rubinstein, and Yong J Kil) invited me to join them with the goal of de-anonymizing the contest graph and combining that with machine learning.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">To clarify: our goal was to map the nodes in the training dataset to the real identities in the social network that was used to create the data. That would allow us to simply look up the pairs of nodes in the test set in the real graph to see whether or not the edge exists. There would be a small error rate because some edges may have changed after the Kaggle crawl was conducted, but we assumed this would be negligible.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Knowing that the social network in question is Flickr, we crawled a few million users' contacts from the site. 
The crawler was written in python, using the curl library, and was run on a small cluster of 2-4 machines.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">While our crawl covered only a fraction of Flickr users, it was biased towards high degree nodes (we explicitly coded such a bias, but even a random walk is biased towards high degree nodes), so we were all set. By the time we crawled 1 million nodes we were hitting a 60-70% coverage of the 38k nodes in the test set. But more on that later.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Our basic approach to deanonymization is described in <a href="http://33bits.org/2009/03/19/de-anonymizing-social-networks/" mce_href="http://33bits.org/2009/03/19/de-anonymizing-social-networks/">my paper with Vitaly Shmatikov</a>. Broadly, there are two steps: “seed finding” and “propagation.” In the former step we somehow deanonymize a small number of nodes; in the latter step we use these as “anchors” to propagate the deanonymization to more and more nodes. 
In this step the algorithm feeds on its own output.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Let me first describe propagation because it is simpler. As the algorithm progresses, it maintains a (partial) mapping between the nodes in the true Flickr graph and the Kaggle graph. We iteratively try to extend the mapping as follows: pick an arbitrary as-yet-unmapped node in the Kaggle graph, find the “most similar” node in the Flickr graph, and if they are “sufficiently similar,” they get mapped to each other.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Similarity between a Kaggle node and a Flickr node is defined as cosine similarity between the already-mapped neighbors of the Kaggle node and the already-mapped neighbors of the Flickr node (nodes mapped to each other are treated as identical for the purpose of cosine comparison).</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "></span><br></div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; font-family: 'Times New Roman'; font-size: medium; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; font-family: 'Times New Roman'; font-size: medium; "><img mce_src="http://www.kaggle.com/components/com_ccboard/assets/uploads/5340_7d172f8e824f704ecc1aae8e05e2af73.png" src="http://www.kaggle.com/components/com_ccboard/assets/uploads/5340_7d172f8e824f704ecc1aae8e05e2af73.png" width="336" 
height="207"><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">In the diagram, the blue nodes have already been mapped. The similarity between A and B is 2 / (</span><span class="Apple-style-span" style="color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: 11px; font-size: small; " mce_style="color: #222222; font-family: arial, sans-serif; line-height: 11px; font-size: small; ">√</span><span class="Apple-style-span" style="font-family: Arial; font-size: 15px; white-space: pre-wrap; " mce_style="font-family: Arial; font-size: 15px; white-space: pre-wrap; ">3</span><span class="Apple-style-span" style="color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: 11px; font-size: small; " mce_style="color: #222222; font-family: arial, sans-serif; line-height: 11px; font-size: small; ">·</span><span class="Apple-style-span" style="color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: 11px; font-size: small; " mce_style="color: #222222; font-family: arial, sans-serif; line-height: 11px; font-size: small; ">√</span><span class="Apple-style-span" style="font-family: Arial; font-size: 15px; white-space: pre-wrap; " mce_style="font-family: Arial; font-size: 15px; white-space: pre-wrap; ">3) = ⅔. Whether or not edges exist between A and A’ or B and B’ is irrelevant.</span></div><meta http-equiv="content-type" content="text/html; charset=utf-8"><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; "><span style="font-family: arial, sans-serif; color: rgb(34, 34, 34); font-size: x-small; " mce_style="font-family: arial, sans-serif; color: #222222; font-size: x-small; "><span class="Apple-style-span" style="line-height: 11px;" mce_style="line-height: 11px;"><span class="Apple-style-span" style="white-space: pre-wrap; " mce_style="white-space: pre-wrap; "></span></span></span></div><meta http-equiv="content-type" content="text/html; charset=utf-8"><meta http-equiv="content-type" content="text/html; charset=utf-8"><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; "><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; 
font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">There are many heuristics that go into the “sufficiently similar” criterion, which will be described in our upcoming paper. There are two reasons why the similarity between a node and its image may not be 100%: because the contest graph is slightly different from our newer crawled graph, and because the mapping itself might have inaccuracies. The latter is minimal, and in fact the algorithm occasionally revisits already-mapped nodes to correct errors in the light of more data.</span></div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; "><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "><br></span></div><div style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; " mce_style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; background-color: transparent; "><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">I have elided over many details — edge directionality makes the algorithm significantly more complex, and there are some gotchas due to the fact that the Kaggle graph is only partially available. Overall, however, this was a relatively straightforward adaptation of the algorithm in the abovementioned paper with Shmatikov.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Finding seeds was much harder. Here the fact that the Kaggle graph is partial presented a serious roadblock, and rules out the techniques we used in the paper. 
The idea of looking at the highest-degree nodes was obvious enough, but the key observation was that if you look at the nodes by highest </span><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "><i>in-degree</i></span><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">, then the top nodes in the two graphs roughly correspond to each other (whereas ordered by out-degree, only about 1/10 of the Flickr nodes are in the Kaggle dataset). This is because of the way the Kaggle graph is constructed: </span><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "><i>all</i></span><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; "> the contacts of each of the 38K nodes are reported, and so the top in-degree nodes will show up in the dataset whether or not they’re part of the 38K.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Still, it’s far from a straightforward mapping if you look at the top 20 in the two graphs. During the contest I found the mapping by hand after staring at two matrices of numbers for a couple of hours, but later I was able to automate it using simulated annealing. We will describe this in detail in the paper. 
Once we get 10-20 seeds, the propagation kicks off fairly easily.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">On to the results. We were able to deanonymize about 80% of the nodes, including the vast majority of the high-degree nodes (both in- and out-degree.) We’re not sure what the overall error rate is, but for the high-degree nodes it is essentially zero.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Unfortunately this translated to only about 60% of the edges. This is because the edges in the test set aren’t sampled uniformly from the training graph; it is biased towards low-degree nodes, and deanonymization succeeds less often on low-degree nodes.</span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;" mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; font-size: 11pt; background-color: transparent;"></span><br><span style="font-size: 11pt; font-family: Arial; color: rgb(0, 0, 0); background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; " mce_style="font-size: 11pt; font-family: Arial; color: #000000; background-color: transparent; font-weight: normal; font-style: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; ">Thus, deanonymization alone would have been far from sufficient. 
<p>Fortunately, Elaine, Ben and Yong did some pretty cool machine learning, which, while it would not have won the contest on its own, would have given the other machine-learning solutions a run for their money. Elaine sends me the following description:</p>
<p>====</p>
<p>Our ML algorithm is quite similar to what vsh described earlier. However, we implemented fewer features, and spent less time fine-tuning parameters. That's why our ML performance is a bit lower, with an <i>estimated</i> AUC of 93.5-94%.</p>
<p>(Note that the AUC for ML is not corroborated with Kaggle due to the submission quota, but rather is computed over the ground truth from the deanonymization. The estimate is biased, since the deanonymized subset is not sampled randomly from the test set. [The 93.5-94% number is after applying Elaine’s debiasing heuristic. -- Arvind])</p>
<p>We also used Random Forest over a bunch of features, including the following (with acknowledgements to the "social network link prediction" paper):</p>
<p>1) whether the reverse edge is present<br>
2) Adamic/Adar<br>
3) |intersection of neighbors| / |union of neighbors|<br>
4) neighborhood clustering coefficient<br>
5) localized random walk, 2-4 steps<br>
6) degree of n1/n2<br>
7) … and a few other features, most of them 1-3 hop neighborhood characteristics.</p>
<p>For some of the above-mentioned features, we computed values for the out-graph, the in-graph, and the bi-directional graph (i.e., the union of in-graph and out-graph). We wanted to implement more features, but did not due to lack of time.</p>
<p>The best feature by itself is the localized random walk with 3-4 steps. On its own it has an <i>estimated</i> AUC of 92%.</p>
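<p>A minimal sketch of how a couple of the listed features (Adamic/Adar and the Jaccard ratio of neighbor sets) could be computed and fed to a Random Forest, using networkx and scikit-learn purely for illustration (the team's actual implementation used the Milk library, and all names below are hypothetical):</p>
<pre><code>import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(G, u, v):
    """Undirected ('bi-directional') neighborhood features for a candidate edge (u, v)."""
    nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
    common, union = nu & nv, nu | nv
    adamic_adar = sum(1.0 / np.log(G.degree(w)) for w in common if G.degree(w) > 1)
    jaccard = len(common) / len(union) if union else 0.0
    return [adamic_adar, jaccard, len(common), G.degree(u), G.degree(v)]

def fit_link_predictor(G, train_pairs, labels):
    """train_pairs: list of (u, v) node pairs; labels: 1 if the edge is real, else 0."""
    X = np.array([edge_features(G, u, v) for u, v in train_pairs])
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    clf.fit(X, labels)
    return clf
</code></pre>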
<p>The Random Forest implementation used the Python Milk library: http://pypi.python.org/pypi/milk/</p>
<p>Combining DA and ML:</p>
<p>1) Naive algorithm:<br>
For each test case: output the DA prediction if a DA prediction exists, else the ML score.</p>
<p>2) Improved version:<br>
For edge (n1, n2), if n1 and/or n2 has multiple DA candidates, and all candidates unanimously vote yes or no, output the corresponding prediction.</p>
<p>Finally, like everyone else, we did some trial-and-error tuning and adjustments.</p>
<p>====</p>
<p>Once we hit No. 1, we emailed the organizers to ask if what we did was OK. Fortunately, they said it was cool. Nevertheless, it is interesting to wonder how a future contest might prevent deanonymization-based solutions. There are two basic approaches: construct the graph so that it is hard to deanonymize, or require the winner to submit the code for human inspection.</p>
<p>Neither is foolproof; I’m going to do some thinking about this but I’d like to hear other ideas.</p>
<p>Last but not least, a big thanks to Kaggle and IJCNN!</p>
|
IJCNN Social Network Challenge
|
Sharing Techniques Used
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IJCNN Social Network Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|>First, thanks to the organizers --- this was an interesting problem. And thanks to those who described their methods so far. I see now that I should have been more creative when defining features.<br><br>My approach (in unlucky 13th place!) consisted of the following:<br><br>1. A binary SVD on the graph adjacency matrix (using sparse/missing values, trained via gradient descent, loosely following the paper I've attached by Kozma). Best AUC was ~0.81.<br><br>2. Like others, I used Adar/Adamic, Jaccard, #Common Neighbors, and preferential attachment as described in "The Link Prediction Problem for Social Networks" (2004). Individual AUCs were ~0.72.<br><br>3. Because the graph was bipartite in spots, I also tried using common neighbors-of-neighbors (i.e. nodes 2 links away from the test edge endpoints, instead of 1), as suggested in a paper I saw (by Huang & Chen, attached). That improved the Adar/Adamic & Jaccard AUCs to 0.74 & 0.76 respectively.<br><br>4. I blended all the predictors above using a small neural net. (Addendum: for everything except the BSVD, I first transformed the values using log(x+1) to make them have a more normal distribution). The result was ~0.91 AUC. So blending created quite a large improvement (+0.1000) over my best individual predictor.<br><br>Next, I wish I had had time to add some other features involving edge centrality, community detection & other "newer" methods for graphs. I read about them, and they seemed promising, though perhaps too computationally expensive.<br><br>I tried igraph too, via R, but the routines I wanted to use didn't seem to scale well to a graph of this size. So I wrote most of my routines in Jython (Python mostly, downshifting to Java for the computational bottlenecks).
Congratulations to all the leaders in this contest! Unfortunately, these forums have been pretty quiet during the contest, but now that it's over, I'm wondering if people are willing to disclose the techniques they used so others can learn something new.<br><br>From a few emails with a couple of contestants, I know there is a mix of techniques being used out there --- KNNs, neural nets, SVDs, node metrics (like Adar/Adamic, Jaccard, number of common neighbors), and some graph-based techniques (shortest paths, edge-betweenness centrality, etc.). So, what techniques did you use? What worked, and what didn't?<br><br>Thanks!
|
IceCube - Neutrinos in Deep Ice
|
10th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a> did you use TF or PyTorch? I was getting a worse score in TF. No idea why.</p>
<p>For the xyz prediction, I used it practically the same as they suggested in graphnet.</p>
<p>Thank you 🎉</p>
<p>I really had a negative experience with TF in this competition. At some point even submission inference was crashing non-stop. 🤷♂️</p>
<p>I think the smallest model was 3.5h per epoch and the largest maybe 7h. It was a lot of electricity.</p>
<p><a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> this sub was late only by a few minutes. I started to blend models a few hours before the end (I went to do a short crossfit session instead of blending right away). This sub is 0.000001 better than your LB sub 😂😂😂</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6674022%2Fb40e45ac0cae28c13cb4b06c64c57dd4%2FScreenshot%20from%202023-04-20%2022-15-50.png?generation=1682021763851108&alt=media" alt=""></p>
<p>I started to get decent results quite late in the competitions. A lot of the things I tried didn't work. Everything was much better when I switched to PyTorch.</p>
<p>Thanks <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a>! Great job on the 2nd position :)</p>
<p>The last few weeks have been intense, but in the end I am super happy that I've finally got my first solo gold medal. Great competition overall! It was great to see the stability between local validation and LB scores (even with using only <strong>1.5%</strong> of the data as <strong>validation</strong>).</p>
<p>The model training process involved a train-validation split, wherein <strong>batches 11-660</strong> were utilized for <strong>training</strong>, and <strong>batches 1-10</strong> were allocated for <strong>validation</strong>. An ensemble of 8 models was formed using <strong>hill climbing</strong> or <strong>Nelder-Mead</strong> optimization techniques, with a submission time of approximately 3-3.5 hours. The inclusion of more models in the ensemble resulted in a better score; however, since I was submitting the best blend only a few hours before the end of the competition, there was not enough time to do inference on a bigger ensemble. The same model architectures were trained multiple times and blended.</p>
<p>The ensemble's local validation score was <strong>0.97747</strong>, while the public and private LB scores were <strong>0.976</strong>.</p>
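<p>A minimal sketch of how such blend weights can be fitted with Nelder-Mead against the mean angular error on a held-out batch (the helper names and exact objective are assumptions, not the code actually used):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def mean_angular_error(pred, true):
    """pred, true: (N, 3) arrays of direction vectors; true is unit-norm."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    cos = np.clip((pred * true).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

def blend_error(weights, model_preds, true):
    """model_preds: list of (N, 3) per-model predictions on the validation batch."""
    blend = sum(w * p for w, p in zip(weights, model_preds))
    return mean_angular_error(blend, true)

def fit_blend_weights(model_preds, true):
    w0 = np.ones(len(model_preds)) / len(model_preds)     # start from the simple average
    res = minimize(blend_error, w0, args=(model_preds, true), method="Nelder-Mead")
    return res.x
</code></pre>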
<p>Input data for the model consisted of the following 9 features per event: <strong>sensor_x, sensor_y, sensor_z, time, charge, auxiliary, is_main_sensor, is_deep_veto, and is_deep_core</strong>.</p>
<p>Four model architectures have been selected for the final submission.</p>
<p><strong>Model 1</strong>:</p>
<p>Validation score 1: <strong>1.0016</strong><br>
Validation score 2: <strong>1.0005</strong><br>
Loss function: <strong>CrossEntropyLoss</strong></p>
<p>Data preprocessing steps:</p>
<ul>
<li>sensor_x, sensor_y, and sensor_z are divided by 600</li>
<li>time is divided by 1000 and subtracted by its minimum value</li>
<li>charge is divided by 300</li>
</ul>
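<p>A minimal sketch of the scalings listed above as a NumPy step (the column layout and function name are assumptions for illustration; only the scaled columns of the 9 input features are shown):</p>
<pre><code>import numpy as np

def preprocess_event_model1(pulses):
    """pulses: (T, 6) array with columns [x, y, z, time, charge, auxiliary]."""
    out = pulses.astype(np.float32).copy()
    out[:, 0:3] /= 600.0                      # sensor_x, sensor_y, sensor_z
    out[:, 3] /= 1000.0                       # time
    out[:, 3] -= out[:, 3].min()              # shift so the event starts at t = 0
    out[:, 4] /= 300.0                        # charge
    return out
</code></pre>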
<pre><code># NOTE: the class name and numeric hyperparameters below are illustrative placeholders --
# the literal values were lost from the original post.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class Model1(pl.LightningModule):
    def __init__(self, bin_num=24):
        super().__init__()
        self.bin_num = bin_num
        self.gru = nn.GRU(9, 256, num_layers=3, dropout=0.1,
                          batch_first=True, bidirectional=True)
        self.fc1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
        self.fc2 = nn.Linear(512, bin_num * bin_num)

    def forward(self, x, batch_sizes):
        batch_sizes = batch_sizes.cpu()
        x = pack_padded_sequence(x, batch_sizes, batch_first=True, enforce_sorted=False)
        x, _ = self.gru(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
        x = x.sum(dim=1)                              # sum, then divide by length = mean pooling
        x = x.div(batch_sizes.unsqueeze(-1).cuda())
        x = self.fc1(x)
        x = self.fc2(x)
        return x
</code></pre>
<p>Output bins were generated using <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">code</a> from <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>. </p>
<p><strong>Models 2-4</strong>:</p>
<p>Loss function: <strong>VonMisesFisher3DLoss</strong></p>
<p>Data preprocessing steps:</p>
<ul>
<li>sensor_x, sensor_y, and sensor_z are divided by 500</li>
<li>time is scaled by subtracting 1.0e04 and dividing by 3.0e4</li>
<li>charge is transformed using the logarithm base 10 and then divided by 3.0</li>
</ul>
<p>Validation score 3: <strong>0.9847</strong><br>
Validation score 4: <strong>0.9859</strong></p>
<pre><code># NOTE: the class name and numeric hyperparameters below are illustrative placeholders --
# the literal values were lost from the original post (imports as in the Model 1 block).
class Model2(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.bilstm = nn.LSTM(9, 256, num_layers=3, dropout=0.1,
                              batch_first=True, bidirectional=True)
        self.fc1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
        self.dropout = nn.Dropout(0.1)
        self.fc2 = nn.Linear(512, 3)

    def forward(self, x, batch_sizes):
        batch_sizes = batch_sizes.cpu()
        x = pack_padded_sequence(x, batch_sizes, batch_first=True, enforce_sorted=False)
        x, _ = self.bilstm(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
        x = x.sum(dim=1)                              # mean pooling over the padded sequence
        x = x.div(batch_sizes.unsqueeze(-1).cuda())
        x = self.fc1(x)
        x = self.dropout(x)
        pred = self.fc2(x)
        kappa = pred.norm(dim=1, p=2) + 1e-8          # vector length is used as the vMF kappa
        pred_x = pred[:, 0] / kappa
        pred_y = pred[:, 1] / kappa
        pred_z = pred[:, 2] / kappa
        return torch.stack([pred_x, pred_y, pred_z, kappa], dim=1)
</code></pre>
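<p>VonMisesFisher3DLoss is taken from the graphnet package; a stand-alone sketch of the 3D von Mises-Fisher negative log-likelihood it is based on could look like the following (written here from the standard formula with constant terms dropped, so treat the exact normalization as an assumption rather than graphnet's implementation):</p>
<pre><code>import math
import torch

def vmf_3d_nll(pred, target):
    """pred: (B, 4) rows of [x, y, z, kappa] with (x, y, z) unit vectors and kappa > 0;
    target: (B, 3) true unit direction vectors."""
    direction, kappa = pred[:, :3], pred[:, 3]
    cos_angle = (direction * target).sum(dim=1)
    # log C_3(kappa) = log(kappa) - log(4*pi) - log(sinh(kappa)), using a stable log-sinh
    log_sinh = kappa + torch.log1p(-torch.exp(-2.0 * kappa)) - math.log(2.0)
    log_c3 = torch.log(kappa) - log_sinh              # the constant -log(4*pi) is omitted
    return (-log_c3 - kappa * cos_angle).mean()
</code></pre>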
<p>Validation score 5: <strong>0.9872</strong><br>
Validation score 6: <strong>0.9887</strong></p>
<pre><code># NOTE: the class name and numeric hyperparameters below are illustrative placeholders --
# the literal values were lost from the original post (imports as in the Model 1 block).
class Model3(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Linear(9, 128)
        self.bilstm = nn.LSTM(128, 256, num_layers=3, dropout=0.1,
                              batch_first=True, bidirectional=True)
        self.fc1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
        self.fc2 = nn.Linear(512, 3)

    def forward(self, x, batch_sizes):
        x = self.embedding(x)
        batch_sizes = batch_sizes.cpu()
        x = pack_padded_sequence(x, batch_sizes, batch_first=True, enforce_sorted=False)
        x, _ = self.bilstm(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
        x = x.sum(dim=1)                              # mean pooling over the padded sequence
        x = x.div(batch_sizes.unsqueeze(-1).cuda())
        x = self.fc1(x)
        pred = self.fc2(x)
        kappa = pred.norm(dim=1, p=2) + 1e-8
        pred_x = pred[:, 0] / kappa
        pred_y = pred[:, 1] / kappa
        pred_z = pred[:, 2] / kappa
        return torch.stack([pred_x, pred_y, pred_z, kappa], dim=1)
</code></pre>
<p>Validation score 7: <strong>0.9842</strong><br>
Validation score 8: <strong>0.9841</strong></p>
<pre><code># NOTE: the class name and numeric hyperparameters below are illustrative placeholders --
# the literal values were lost from the original post (imports as in the Model 1 block).
class Model4(pl.LightningModule):
    def __init__(self, lstm_units=512):               # width of the bidirectional LSTM output
        super().__init__()
        self.embedding = nn.Linear(9, 128)
        self.bilstm = nn.LSTM(128, lstm_units // 2, num_layers=3, dropout=0.1,
                              batch_first=True, bidirectional=True)
        self.fc1 = nn.Sequential(nn.Linear(lstm_units, 512), nn.ReLU())
        self.fc2 = nn.Linear(512, 3)

    def forward(self, x, batch_sizes):
        batch_sizes = batch_sizes.cpu()
        x = self.embedding(x)
        x = pack_padded_sequence(x, batch_sizes, batch_first=True, enforce_sorted=False)
        x, _ = self.bilstm(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
        x = x.sum(dim=1)                              # mean pooling over the padded sequence
        x = x.div(batch_sizes.unsqueeze(-1).cuda())
        x = self.fc1(x)
        pred = self.fc2(x)
        kappa = pred.norm(dim=1, p=2) + 1e-8
        pred_x = pred[:, 0] / kappa
        pred_y = pred[:, 1] / kappa
        pred_z = pred[:, 2] / kappa
        return torch.stack([pred_x, pred_y, pred_z, kappa], dim=1)
</code></pre>
<p>Hyperparameters:<br>
Optimizer: <strong>Adam</strong><br>
Scheduler: <strong>CosineAnnealingLR</strong><br>
Batch size: <strong>2048</strong><br>
Max pulses: <strong>128</strong><br>
Max LR: <strong>1e-3</strong> or <strong>5e-4</strong><br>
Min LR: <strong>1e-6</strong><br>
Warmup steps: <strong>2000</strong><br>
Epochs: <strong>10-15</strong> (possibly a few extra fine tuning epochs)</p>
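<p>A minimal sketch of wiring these settings up in plain PyTorch, combining a linear warmup with CosineAnnealingLR (only the hyperparameter values above are given, so the warmup mechanics here are an assumption):</p>
<pre><code>import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LambdaLR, SequentialLR

def build_optimizer(model, total_steps, warmup_steps=2000, max_lr=1e-3, min_lr=1e-6):
    optimizer = torch.optim.Adam(model.parameters(), lr=max_lr)
    # linear ramp from 0 to max_lr over warmup_steps, then cosine decay down to min_lr
    warmup = LambdaLR(optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))
    cosine = CosineAnnealingLR(optimizer, T_max=total_steps - warmup_steps, eta_min=min_lr)
    scheduler = SequentialLR(optimizer, [warmup, cosine], milestones=[warmup_steps])
    return optimizer, scheduler
</code></pre>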
<p>The DL library used for the competition was PyTorch (Lightning), which delivered better results than TensorFlow. While blending multiple models in TensorFlow, the code produced some unanticipated errors which I wasn't able to debug. It was just simpler to switch to PyTorch.</p>
<p>I tried a few different transformer architectures, but didn't have enough time to push it through the end. I retrained a few models similar to graphnet (<a href="https://www.kaggle.com/code/amoshuangyc/icecube-gnn-baseline-rewrite" target="_blank">code</a> from <a href="https://www.kaggle.com/amoshuangyc" target="_blank">@amoshuangyc</a> was very useful), but it didn't give a boost to the final ensemble.</p>
<p>2 are in 2 prebuilt gamer PCs. The last 4 are in the same case. <a href="https://www.kaggle.com/init27" target="_blank">@init27</a> helped me build the machine. You can see it in this <a href="https://www.linkedin.com/posts/andrijamilicevic_deeplearning-nvidia-nvidiartx-activity-6969293456661704705-T4KU?utm_source=share" target="_blank">post</a>.</p>
<p>Thanks <a href="https://www.kaggle.com/ravi20076" target="_blank">@ravi20076</a> :) very happy to join the club :)</p>
<p>I have 6 RTX 3090s. I also rented a few A100s in the last 2 days.</p>
<p><a href="https://www.kaggle.com/junseonglee11" target="_blank">@junseonglee11</a> This is the first time I tried TF. Not sure if I will try it again anytime soon. Wasted too much time.</p>
<p>Yea, it wasn't easy to find them. They seemed to be the best buy at the time. Maybe A5000 is a decent buy now?</p>
|
IceCube - Neutrinos in Deep Ice
|
11-th place solution: Attention, GNN, Ensemble
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we are very happy to be a part of such profound and interesting competition, many thanks to the organizers for their support and to all competitors who selflessly shared their knowledge and insights, <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a>, <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>, <a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a>, <a href="https://www.kaggle.com/edguy99" target="_blank">@edguy99</a>, <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>, <a href="https://www.kaggle.com/pellerphys" target="_blank">@pellerphys</a>… Forgive me if I've missed someone, just so many people helped us so much with going forward. </p>
<p>Many thanks to my great team QuData <a href="https://www.kaggle.com/synset" target="_blank">@synset</a>, <a href="https://www.kaggle.com/alexz0" target="_blank">@alexz0</a> and <a href="https://www.kaggle.com/semenb" target="_blank">@semenb</a>, you are awesome! We all come from Ukraine, and we wish to thank everyone who is supporting our people now!</p>
<p>This competition was a great opportunity to unite a bit of relevant domain knowledge, as two of us were doing theoretical physics in the past, with the whole team's data science skills and knowledge. <br>
Taking part in a big astrophysics experiment was novel and exciting, and we hope we made a useful contribution to finding out how the Universe works.</p>
<h2> Source Code and Expanded Write-up </h2>
<p>Our report turned out to be too large to comfortably fit in one post, so here are the links to</p>
<ul>
<li><a href="https://qudata.com/projects/icecube-neutrino/en/" target="_blank">More detailed version of the solution description and background</a></li>
<li><a href="https://qudata.com/projects/icecube-neutrino/en/" target="_blank">Source code for data preparation, training and inference</a></li>
</ul>
<h2>General architecture</h2>
<p>
The solution has the following structure:
</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fd27747a4b9d7f2eb613032789cc095a0%2Fcommon_arch.png?generation=1681914442238565&alt=media" alt=""></p>
<p>Here are the important milestones in our progress during the competition:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Score LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Training base GNN model</td>
<td>1.012</td>
</tr>
<tr>
<td>Adding another layer to GNN</td>
<td>1.007</td>
</tr>
<tr>
<td>Increasing the number of neighbors</td>
<td>1.003</td>
</tr>
<tr>
<td>GNN - Expanding Layers MLP</td>
<td>0.996</td>
</tr>
<tr>
<td>GNN - Replacing regression with classification</td>
<td>0.992</td>
</tr>
<tr>
<td>Transformer (aggregated pulses)</td>
<td>1.002</td>
</tr>
<tr>
<td>Transformer (separate pulses)</td>
<td>0.995</td>
</tr>
<tr>
<td>GNN-Transformer Ensemble (trained MLP)</td>
<td>0.976</td>
</tr>
</tbody>
</table>
<p>We list only one score for each model because, thanks to the high quality of the data, the validation, public and private scores are all very close.</p>
<p>
"GNN" is a modification of the graph neural network architecture for Neutrino Telescope Event Reconstruction <a href="https://github.com/graphnet-team/graphnet">GraphNet</a>
(public score: <b>0.992</b>)
</p>
<p>
"Transformer" is a combination of architectures including fully connected blocks, an attention mechanism, and a recurrent layer
(public score: <b>0.995</b>)
</p>
<p>
"Ensemble" is a neural network that agregates outputs from "GNN" and "Transformer" models and predict direction.
(public score: <b>0.976</b>)
</p>
<h2>GNN</h2>
<h3>Model</h3>
<p>During the competition, we made many changes to the architecture of the graph neural network (described in detail on the <a href="https://qudata.com/projects/icecube-neutrino/en/gnn.html">https://qudata.com/projects/icecube-neutrino/en/gnn.html</a>), its final state:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F000a4894518c3090f4fad6146a865c13%2Fgnn_arch.png?generation=1681914695253186&alt=media" alt=""></p>
<p>The input to this model is a graph. Each node corresponds to a pulse, and its features are the pulse features combined with the aggregated features of the event. The graph topology is constructed with the <b>knn</b> algorithm. The graph then passes sequentially through the layers, each of which modifies the feature space and also updates the topology. The outputs of all layers are combined and fed to the <b>Layers MLP</b> block, which reduces the number of features to 256. A pooling operation then aggregates the features of all nodes of the graph with the <b>min, max, mean</b> functions. In the final <b>Classification MLP</b> block the resulting embedding is converted into a class id, where each class is responsible for its own direction.</p>
<p>Below is described our progress that led to this architecture.</p>
<h3>Training and enhancing the model</h3>
<h4>Training base model (1.018 → 1.012)</h4>
<p>Since we were limited in computational resources, we decided to proceed with retraining the best public GNN model from a notebook <a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission">GraphNeT Baseline Submission</a> (special thanks to <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a>) which gave LB score <b>1.018</b>.</p>
<p>We retrained the model on all batches except for the last one, which we used for validation. During training, the learning rate was reduced in steps, at those moments when the validation did not improve for about 100 epochs. As a result, after 956 epochs, the value of the metric dropped to <b>1.0127</b>.</p>
<h4>Adding another layer (1.012 → 1.007)</h4>
<p>Having a trained network, we tried to add another layer <b>EdgeConv</b> to it. In order not to learn from scratch, all layers in the new architecture were frozen except for the new one.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fc2015199cfa3d81a4f028d61458c239d%2Fgnn_plus_layer.png?generation=1681919370984185&alt=media" alt=""></p>
<p>In the frozen layers, weights were loaded from the model retrained at the previous stage and training continued.<br>
The model in this mode learns quickly, and the metric also quickly reaches the level of the model from which the weights are borrowed. Then, unfreezing was performed, the learning rate was reduced, and the entire model was retrained in this mode. After 1077 epochs, we reached the metric <b>1.007</b></p>
<h4>Increasing the number of neighbors (1.007 → 1.003)</h4>
<p>In the original Graphnet library, the number of neighbors for building a graph is chosen as <b>8</b>.<br>
We have tried increasing this value to <b>16</b>. In the module <b>EdgeConv</b> the same MLP model is applied to all neighbors, and then the result is summed. Therefore, with an increase in the number of neighbors, the number of network parameters does not change, but at the same time it learns and works twice as slowly.</p>
<p>Thus, we retrained the trained model from the previous stage with a new number of neighbors and after the 1444 epochs, the metric reached a new low of <b>1.003</b></p>
<h4>Expanding Layers MLP (1.003 → 0.9964)</h4>
<p>Since the number of layers was increased, we thought it reasonable that the number of parameters of <b>Layers MLP</b> which receives concatenated input of the outputs of each level, should also be increased. The first layer of the module <b>Layers MLP</b> was increased from 336 to 2048. Similarly, all levels were frozen, the weights of the model of the previous stage were loaded, and training was continued. After 1150 epochs, the metric dropped to <b>0.9964</b></p>
<h4>Replacing regression with classification (0.9964 → 0.9919)</h4>
<p>Studying the best solutions, we paid attention to the notebook <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu">Tensorflow LSTM Model Training TPU</a> (thanks to <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>) From it we borrowed the idea to move from a regression problem to a classification one.</p>
<p>The azimuth angle was uniformly divided into 24 bins. The zenith angle was also divided into 24 bins; here we worked with the cosine of the zenith, since it is the cosine that has a uniform distribution (as is seen from the statistics of all training events).</p>
<p>Accordingly, we received a total of <b>24x24=576</b> classes. The last layer of the MLP was increased from [128,3] to [512,576], and the loss function was changed to <b>CrossEntropyLoss</b>.<br>
We froze the entire model except for the last module and loaded the weights obtained at the previous stage and continued training. After 967 epochs, the metric reached the value of <b>0.9919</b>.<br>
This was the best result that we achieved for a standalone GNN model, and it was then used for<br>
ensembling with other models.</p>
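<p>A minimal sketch of the azimuth × cos(zenith) binning described above (the bin edges and the way a class is converted back to a direction are written from the description, so treat the details as assumptions):</p>
<pre><code>import numpy as np

N_BINS = 24  # per angle, giving 24 * 24 = 576 classes

def angles_to_class(azimuth, zenith):
    """azimuth, zenith: arrays in radians (azimuth in [0, 2*pi), zenith in [0, pi])."""
    az_bin = np.clip((azimuth / (2 * np.pi) * N_BINS).astype(int), 0, N_BINS - 1)
    # bin cos(zenith), which is uniformly distributed over [-1, 1] for the training events
    cz_bin = np.clip(((np.cos(zenith) + 1) / 2 * N_BINS).astype(int), 0, N_BINS - 1)
    return az_bin * N_BINS + cz_bin

def class_to_angles(class_id):
    """Return the centre of a class as (azimuth, zenith)."""
    az_bin, cz_bin = class_id // N_BINS, class_id % N_BINS
    azimuth = (az_bin + 0.5) / N_BINS * 2 * np.pi
    zenith = np.arccos((cz_bin + 0.5) / N_BINS * 2 - 1)
    return azimuth, zenith
</code></pre>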
<h3>What didn't help</h3>
<p>
During the competition we tried many other things that did not yield a noticeable improvement
of the metric, or none at all. Some of the things we did:
</p>
<ul>
<li> separately predicting zenith and azimuth angles;
</li><li> changing the type of convolution to SAGE, GAT;
</li><li> inserting transformer after Layers MLP;
</li><li> replacing pooling with RNN (GRU);
</li><li> inserting a TopKPooling layer;
</li><li> using 4 or more features to build the event graph.
</li></ul>
<h3>What have not been done</h3>
<p>
There are also several approaches which we thought over but did not pursue to the end,
mainly because of lack of time and hardware resources,
to mention just a few:
</p>
<ul>
<li> training the GNN from scratch, with scattering, absorption and DOM-embedding features;
</li><li> training a classifier with a larger number of bins;
</li><li> parallel training of transformer and GNN models.
</li></ul>
<h2>Transformer</h2>
<p>
The model, which we named <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">"Transformer"</a>, is a combination of architectures that include fully connected blocks, an attention mechanism, and a recurrent layer.
The typical metric achieved by this model was 0.995.
</p>
<p>
The general architecture of the Transformer model looked like this.
It receives two tensors as input: <b>(B,T,PF)</b> and <b>(B,EF)</b>,
where <b>B</b> is the index of the sample in the minibatch and <b>T</b> is the index of the pulse number in the sequence.
Each pulse was characterized by <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html#F"><b>PF=9</b> features</a>.
The second tensor <b>(B,EF)</b> characterized the properties of the entire event with <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html#AF"><b>EF</b> more
features</a>.
These features were calculated for all pulses of the event (24 features) and in addition only for the pulses marked with the auxiliary=0 flag (we'll shorten it to "aux"), 24 more.
</p>
<p>
In some modifications of the architecture, aggregated features with <b>aux=0</b>, using a fully connected MLP block
(multilayer perceptron) with one hidden layer, were compressed to a smaller dimension and then combined with the pulse feature tensor <b>(B,T,PF)</b>.
The motivation for compression is related to the fact that the number of event features (24) was significantly greater than the number of more significant pulse features (9).
In other architectures, only features of pulses were used.
</p>
<p>
After concatenation, the resulting tensor <b>(B,T,F)</b> was fed to the second MLP block (below the Feature Generator).
Its task was to increase the number of features to <b>E=128</b>.
The resulting tensor <b>(B,T,E)</b> was fed to a chain of 10–12 transformer blocks.
Each pulse, as a result of the mechanism of attention, "interacted" with all the pulses in the sequence.
As a result of the work of the transformer, the pulses received new features in the space of the same dimension.
</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fd22171be96432ebf984027aaaf96bfc8%2Ftransformer_00.png?generation=1681979981110852&alt=media" alt=""></p>
<p>Next, the Integrator block converted the transformer output tensor <b>(B,T,E)</b> into a tensor of <b>(B,E')</b> dimensions.<br>
At the initial stage of training, the Integrator block was simply the concatenation of mean-pooling and max-pooling over the features of all pulses.<br>
At the final stage, this block was replaced by an RNN layer integrating the event pulses.<br>
One unidirectional recurrent layer with GRU cells was chosen as RNN.<br>
In this case, the pulses of the event were sorted in reverse order by time.<br>
As a result, the input of the last GRU cell (the output of which was used by the Integrator) received the most significant, that is the first, pulse.<br>
</p><p></p>
<p>
The event feature tensor for all pulses and pulses with aux=0 was attached to the <b>(B,E')</b> tensor obtained after the Integrator.
The resulting tensor <b>(B,E'+2*EF)</b> was fed to an MLP whose three outputs were Cartesian components
of the direction vector predicted by the regression model.
</p>
<p>
The transformer had a standard architecture with 16 heads in the attention block and trainable weights on skip-connections:
Position embedding was not used (although we experimented with it early on).
Time was one of the features of a pulse, and apparently that was enough.
Numerous, but not final :) experiments were carried out on various modifications of this architecture.
In particular, we studied the version where the final layer was a classifier in 10k directions in space,
as well as a version in which the transformer consisted of two parallel chains of blocks that were eventually merged together.
The feasibility of such architectural modifications requires further research.
</p>
<p><a></a></p>
<h3>Dataset</h3>
<p>
The transformer training requires the same number of pulses for all examples in the minibatch.
The standard solution to this problem is masking tokens to align sequence lengths.
However, given our computational capabilities, this was a very wasteful method.
So we went the other way.
Every 5 training batches were collected in a pack of 1m examples.
These examples were sorted by sequence length and combined into groups of the same length.
Accordingly, the data were divided into minibatches within each group.
After that, the minibatches of all groups were mixed.
One learning epoch consisted of one pack. Then the next pack was loaded.
</p>
<p>
Another idea that allows you to cope with limited memory was related to the variable size of the minibatch.
The memory requirements increase quadratically with the length of the sequence T.
Therefore, long sequences can be collected into shorter mini-batch, and short sequences into longer batches.
Thus, several hyperparameters were set: T_max - the maximum length of the sequence,
batch_size - batch size for maximum length and batch_max - upper limit of minibatch size for short sequences.
For examples of length T, the minibatch size was determined by the following formula:
</p><pre>batch_size = min(int(batch_size * (T_max/T)**2), batch_max)
</pre>
This led to approximately the same memory consumption for batches with long and short sequences.
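<p>A minimal sketch of this length-bucketed batching, grouping a pack of events by pulse count and scaling the minibatch size with the quadratic rule above (names and default values are illustrative):</p>
<pre><code>import random
from collections import defaultdict

def make_length_buckets(events, t_max=256, base_batch_size=256, batch_max=2048):
    """events: list of (event_id, num_pulses). Returns a shuffled list of minibatches
    in which every event has (after truncation to t_max) the same sequence length."""
    by_length = defaultdict(list)
    for event_id, n_pulses in events:
        by_length[min(n_pulses, t_max)].append(event_id)

    batches = []
    for t, ids in by_length.items():
        # longer sequences -> smaller batches, so memory use stays roughly constant
        bs = min(int(base_batch_size * (t_max / t) ** 2), batch_max)
        for i in range(0, len(ids), bs):
            batches.append(ids[i:i + bs])
    random.shuffle(batches)
    return batches
</code></pre>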
<p></p>
<p><a></a></p>
<h3>Training implementation</h3>
<p>
Let's show a typical training start schedule (pulses are DOM-aggregated).
An error of the order of 1.020 when learning from scratch is achieved in 60-70 million examples (300 batches, not necessarily new ones).
The total time on the standard T4 card is about 12 hours.
This stage was done at a constant learning rate lr=1e-3 with the Adam optimizer:
</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F6bf188353b365fde71a4e1a94fe91b82%2Fexp_large_01.png?generation=1681973550189176&alt=media" alt=""></p>
<p>
It took significantly longer to get close to error 1.002.
To achieve it, the learning rate was gradually decreased to 1e-5 (below, the dotted brown line).
At the level of 1.004, there was a transition to large maxima of the sequence lengths.
This immediately reduced the error by 1-1.5 points.
</p>
<p>
Further training required a transition to non-DOM-aggregated pulses and took a few more days on the A100 card.
</p>
<p>
During the training process, we analyzed the value of the weights on various blocks of the model
and gradient propagation through them. A typical analysis chart looked like this
(see details in <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">our report</a>):
</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F2cd7550542fa9f05041b1d8266aa9756%2Fexp_large_grad_01.png?generation=1681973578263913&alt=media" alt=""></p>
<h2>Ensemble set-up</h2>
<p>
The results of the architectures we trained (<a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">Transformer</a> and <a href="https://qudata.com/projects/icecube-neutrino/en/gnn.html">GNN</a>) were not strongly correlated with each other.
This made it possible to form ensembles of models, which gave a significant improvement in the final metric.
In addition, thanks to the remarkably large dataset provided by the organizers, additional progress in the metric could be obtained using the trained ensemble.
The general architecture of the ensemble looked like:
</p>
<p></p><p></p>
<h2>Ensemble Models</h2>
<p>
In each architecture, several models were selected, obtained with different hyperparameters or architecture options.
80 batches were passed through these models, the results of which formed a dataset for the ensemble.
</p>
<p>
Below are the initial data of the selected models. The first column of numbers is the base metric of the competition (mean angular error).
The next two columns are the azimuth error and the absolute value of the zenith error of the model (all validations were carried out on the first 5 batches):
</p>
<pre>id ang_err az_err ze_err model
0 0.9935 1.035 0.606 | gnn_1
1 0.9922 1.033 0.595 | gnn_2
2 0.9963 1.039 0.598 | gnn_3
3 0.9954 1.018 0.612 | att_1
4 0.9993 1.022 0.636 | att_2
---------------------------------------------------------------
0.9846 1.021 0.540 | simple mean
</pre>
<p>
The criterion for selecting models for the ensemble was the angular error and coefficients of correlations of the angular error between different models:
</p>
<pre> 0 1 2 3 4
0 0.959 0.950 0.807 0.781
1 0.959 0.934 0.820 0.795
2 0.950 0.940 0.808 0.783
3 0.807 0.820 0.808 0.940
4 0.782 0.795 0.783 0.940
</pre>
<p>
Simply averaging the vectors predicted by each model resulted in a <b>0.9846</b> error.
</p>
<p>
Next, we built an ensemble with trainable weights (the multiplier of each model):
<b>n</b> = w<sub>1</sub> <b>n</b><sub>1</sub> + ... + w<sub>5</sub> <b>n</b><sub>5</sub>
This reduced the error to <b>0.9827</b>.
In this case, the regression weights of the models had the form:
</p>
<pre>gnn_1: 0.934, gnn_2: 1.346, gnn_3: 0.753, att_1: 1.466, att_2: 0.477
</pre>
<p><a></a></p>
<h2>Training Ensemble</h2>
<p></p><p><br>
Further advancement of the metric was achieved by a trained ensemble based on a neural network.<br>
Several architectures were considered, including a transformer and adding aggregated event features to the model outputs.<br>
However, they turned out to be no better than conventional MLP with one hidden layer.<br>
The tensor <b>(B,N,3)</b> was fed at the input of this network, where B is the number of examples in the batch, N is the number of models,<br>
each of which produced three components of the direction vector.<br>
This tensor was converted into dimension <b>(B,3*N)</b>.<br>
After MLP, its dimension became equal to <b>(B,3)</b>.</p>
<p>
The number of neurons in the hidden layer ranged from 128 to 2048. However, all these models gave approximately the same results.
For training, the best loss appeared to be the cosine between the predicted and the target vector.
The learning rate was quite high <b>lr = 1e-3 - 1e-4</b> and the Adam optimizer was used.
</p>
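<p>A minimal sketch of such an ensembling MLP and its cosine loss (the layer sizes and names here are illustrative, not the exact configuration used):</p>
<pre><code>import torch
import torch.nn as nn

class EnsembleMLP(nn.Module):
    def __init__(self, n_models, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * n_models, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x):                 # x: (B, N, 3) stacked per-model direction vectors
        return self.net(x.flatten(1))     # -> (B, 3)

def cosine_loss(pred, target):
    """1 - cosine between the predicted and the true (unit) direction vectors."""
    pred = pred / pred.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return (1.0 - (pred * target).sum(dim=1)).mean()
</code></pre>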
<p>
The best metric, as determined by the validation on the first 5 batches, gave the result <b>0.9796</b>.
Below are some typical learning curves:
</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F2342fe7cc1946eaaee3ca0f42d8a4339%2Fensemble_train.png?generation=1681973858282499&alt=media" alt=""></p>
<h3>Source code</h3>
<p></p><p><br>
The source code of our solutions can be found on <a href="https://github.com/QuDataAI/IceCube">https://github.com/QuDataAI/IceCube</a> and in Kaggle public notebooks.<br>
Below are links to modules:</p>
<ul>
<li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Dataset-kaggle.ipynb">IceCube-Dataset-kaggle</a> - сreating a dataset for transformer;
</li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Transformer-kaggle.ipynb">IceCube-Transformer-kaggle</a> - transformer train;
</li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube_GNN_train_colab.ipynb">IceCube-GNN-train-colab</a> - GNN train;
</li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Ensemble-kaggle.ipynb">IceCube-Ensemble-kaggle</a> - Ensemble train;
</li><li> <a href="https://www.kaggle.com/andreybeyn/icecube-ensemble-submit">IceCube-Ensemble-submit</a> - Ensemble submission;,
</li><li> <a href="https://www.kaggle.com/code/synset/icecube-qudata-line-fit-method">IceCube_QuData_Line-fit_Method</a> - modifications of Line-fit method;
</li><li> <a href="https://www.kaggle.com/code/synset/icecube-qudata-anisotropy">IceCube_QuData_Anisotropy</a> - anisotropy analyses;
</li></ul>
<p><a></a> </p>
<h3>What have not been done</h3>
<p>
There are also several approaches which we thought over but did not pursue to the end,
mainly because of lack of time and hardware resources,
to mention just a few:
</p>
<ul>
<li> Training the GNN from scratch, with scattering, absorption and DOM-embedding features;
</li><li> Training a GNN classifier with a larger number of bins;
</li><li> Parallel training of transformer and GNN models.
</li><li> Use a bidirectional recurrent layer as integrator in transformer.
</li><li> Use the transformer pre-trained on the regression task to detect noise pulses, for their subsequent filtering.
</li><li> Replace the regressor with a classifier in transformer, for a large set of directions (points on a sphere).
A similar model got to error 1.012, but there was not enough time to retrain it and include it in the ensemble.
</li><li> Train a model on a subset of the data (for example, single-string events, two-string events, etc.) by obtaining a set of models that specialize in specific event patterns.
</li><li> Add position embedding in transformer.
</li></ul>
<h2>Conclusions</h2>
<p>Thanks to the organizers for large dataset, without which we would not have been able to train complicated architectures and combine them into an ensemble.</p>
<p>Yes, the GNN predicts the zenith angle better, while the Attention model is better at the azimuth. So the ensemble gave a significant (-0.0159) boost compared with the best single model.</p>
|
IceCube - Neutrinos in Deep Ice
|
12th Place Solution Details - IceCube
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Summary</h1>
<p>My final solution used an ensemble with a simple weighted average of 6 models that were generalizations of publicly posted models: (1) the DynEdge model by <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a> and <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, and (2) the LSTM model by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>. Each model directly predicted a 3-dimensional vector for the target direction, using a loss that was the norm of the difference between the predicted vector and the target vector. The predicted vector was not normalized first.</p>
<h1>TLDR:</h1>
<ol>
<li>Final Ensemble of 6 models, linear weighting determined by nonlinear optimization minimizing MAE (Mean Angular Error) on entire held-out batch.</li>
<li>Pool of all models included 30 different models, best combination of 6 selected through optimization method.<br>
Models included Dynamic Graph Networks of different sizes, and LSTM networks of different sizes and head networks</li>
<li>Models included different number of pulses selected for training and different number of pulses used for inference (not always the same). Pulses were selected randomly from entire set of pulses.</li>
<li>Each model was used (effectively) 4 times and averaged (because of the random selection of pulse set) - a simple observation was that smaller events did not require multiple evaluations.</li>
<li>For prediction, each batch in the test set was read into memory and cached for use in evaluating all 6 models (and their 4x sampling). Because of the distribution of number of pulses per event, only about 15% additional inferences were required to produce a 4x average for each model.</li>
<li>Each model used as loss the norm of predicted vector minus true unit direction vector.</li>
</ol>
<h1>References</h1>
<p><a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">Robin Smits LSTM models</a><br>
<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383524" target="_blank">RASMUS ØRSØE DynEdge Post</a><br>
<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381747" target="_blank">My Derivation of Least Squares Fit</a><br>
<a href="https://www.kaggle.com/code/solverworld/12th-place-ensemble-models" target="_blank">My Ensembling Notebook</a> [The attached dataset contains the pytorch models]<br>
Mirco Hunnefield Masters Thesis, Online Reconstruction of Muon-Neutrino Events in IceCube using Deep Learning Techniques<br>
Kai Schatto, PhD Thesis, Stacked searches for high-energy neutrinos from blazars with IceCube</p>
<h1>Early Days</h1>
<p>I started by reading as many of the neutrino detection papers as I could, trying to understand the nature of the problem. There were many excellent posts with graphics (taken from the papers) showing how neutrinos generated a muon that then formed spinoff collisions that illuminated a large set of sensors embedded in the Antarctic Ice. I won't repeat all that excellent information, but I will try to outline my thought process as my solution evolved.</p>
<p>My first submissions were the <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381747" target="_blank">line-fit solutions</a> that got 1.214 and later improved to 1.174 with more clever picking of pulses through neighbor counting (thanks <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a>!).</p>
<p>I then decided to move on to neural net models. Since there was a lot of excitement around the DynEdge models, I though maybe I should zig while others were zagging, and looked into other graph neural networks. I implemented PointNet and then PointNet++, which are basically DynEdge models where you do not dynamically alter the graph at each iteration. That is, the graph is fixed by neighbors within a certain radius, and those neighbors are then operated on much as a Convolutional NN (CNN) does for the entire solution. The geometric pytorch <a href="https://pytorch-geometric.readthedocs.io/en/latest/cheatsheet/gnn_cheatsheet.html" target="_blank">cheatsheet</a> gives a nice summary of all the GNN models it supports with linked papers. </p>
<p>After looking at all the GNN models, and failing to have PointNet++ do much, I thought I should dive into the DynEdge models. Trying to install the graphnet module required backing up too many python modules (and using Python 3.7 instead of my local 3.10). So, since I hate to go backwards, I pulled out the dynedge code from the entire graphnet module, and used py-geometric to build my DynEdge models. This also freed me from the SQlite and other framework difficulties so I could build my own pipeline.</p>
<p>I solved the data size problem by preprocessing each parquet batch file into 780 individual hdf5 files which contained 256x2000xK matrices which were the data for 256 events. The pulses were limited to 2000 by Aux and Charge sorting. The rest were zero-padded, but because hdf5 compresses the data, this did not result in larger storage. The K features were x,y,z, time, charge, aux, num-of-neighbors, and some pre-computed time window information.</p>
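<p>A minimal sketch of writing such a zero-padded (256, 2000, K) block to a compressed hdf5 file (the dataset name and compression level are assumptions for illustration):</p>
<pre><code>import h5py
import numpy as np

def write_event_chunk(path, events, max_pulses=2000):
    """events: list of 256 arrays of shape (n_pulses, K). Zero-pad to a fixed block;
    gzip compression keeps the padded zeros from costing extra disk space."""
    k = events[0].shape[1]
    block = np.zeros((len(events), max_pulses, k), dtype=np.float32)
    for i, ev in enumerate(events):
        n = min(len(ev), max_pulses)
        block[i, :n] = ev[:n]
    with h5py.File(path, "w") as f:
        f.create_dataset("pulses", data=block, compression="gzip", compression_opts=4)
</code></pre>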
<p>I then was able to shuffle both batches and the 256-groups of events within each batch without massive amounts of memory. I was able to feed my GPU efficiently using the preprocessed files.</p>
<h1>Progression</h1>
<p>I decided to use batches 1-100 for training, and batch 601 and 602 for validation. I typically trained for 600-800 batches, thus used each batch only 6-8 times. I see now from the top finishers that people trained on much more data and for longer. My models were probably not complex enough to need more data.</p>
<p>I got the DynEdge models working close to the reported public notebooks, and eventually tried a few variations, including adding more layers (up to 6) and a larger output head. The default was just a 128 size FC layer, but I eventually used [384,256,128,64,3]. My best DynEdge models got around 1.001 on Batch 602 (my test batch).</p>
<h1>Choice of Output</h1>
<p>I tried outputting bins (thanks again, <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>) but I found better results with directly predicting the 3-dimensional direction and using the norm ||v-pred|| as the loss function. The gradient of this function for v close to pred looks much like the Angular Error function, so I thought it should work. I could not get the Angular Error to work directly as a loss function, because it does not control for the vector size and allows the predicted vector to shrink to zero size. There are also some inaccuracies involved in making sure you don't normalize a length 0 vector.</p>
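<p>A minimal sketch of that vector loss next to the competition metric, to make the comparison concrete (function names are illustrative):</p>
<pre><code>import torch

def vector_loss(pred, target):
    """Norm of the difference between the (unnormalized) prediction and the unit target."""
    return (pred - target).norm(dim=1).mean()

def mean_angular_error(pred, target):
    """Competition metric: angle between the normalized prediction and the unit target."""
    pred = pred / pred.norm(dim=1, keepdim=True).clamp_min(1e-8)
    cos = (pred * target).sum(dim=1).clamp(-1.0, 1.0)
    return torch.arccos(cos).mean()
</code></pre>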
<h1>Choice of Input</h1>
<p>I tried various ways of selecting the input pulses to use in my models. Every sorting or choosing method seemed to work worse than random selection. Because of the random selection nature, model accuracy could be improved by simply averaging 4 predictions <em>from the same model</em>. One simple trick was to realize that events with less pulses than were required for a model did not have a random element and thus those did not need to be repeated 4 times. This meant I could do inference 4x with only a 15% increase in computation effort on submissions.</p>
<h1>Back to LSTM Models</h1>
<p>I decided to go back to the LSTM models and, taking a hint from the 2nd place finish of the RSNA Intracranial Hemorrhage challenge by <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> (<a href="https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection/discussion/117228" target="_blank">2nd Place</a>), I used multiple LSTM layers, with the outputs of each layer concatenated together into a final multi-layer fully connected prediction head.<br>
The best models were 512 wide LSTMs in 3 layers. I used prediction heads with layers [256,64,64,3] and variations thereof. I also tried an Attention modification on the LSTM output which helped a little bit (this was simply a learned weight that is dotted with the LSTM output to produce a signal proportional to the weight that should be given to that element in the sequence).</p>
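<p>A minimal sketch of that attention pooling over the LSTM outputs (a single learned weight vector dotted with each timestep to produce pooling weights; the softmax variant shown here is an assumption, since the exact form is not specified):</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Weight each LSTM output by a learned relevance score and sum them."""
    def __init__(self, hidden_size):
        super().__init__()
        self.w = nn.Parameter(torch.randn(hidden_size) / hidden_size ** 0.5)

    def forward(self, lstm_out, mask):
        # lstm_out: (B, T, H); mask: (B, T) boolean, True for real (non-padded) pulses
        scores = lstm_out @ self.w                             # (B, T)
        scores = scores.masked_fill(~mask, float("-inf"))
        weights = torch.softmax(scores, dim=1)                 # (B, T)
        return (weights.unsqueeze(-1) * lstm_out).sum(dim=1)   # (B, H)
</code></pre>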
<h1>Trying all the Models</h1>
<p>Here are the scores of 35 models, just to show the variation and improvement from averaging. The s0-s3 suffix indicates a different random selection of the pulses from each event, the p value indicates how many pulses were used in the inference (prediction), and the last two columns indicate the change in score from using an average of the 4 predictions and the standard deviation, respectively. The score was done on Batch 602.<br>
You can see the average helps about 35 bps on models using 128 pulses, about 6-10 bps for models using 350 or 384 pulses. The description includes the type of model as well as the number of pulses used for training. E.g. LSTM512x3-384 was a 3 layer LSTM model trained on 384 pulses. ATT means attention was used.<br>
The best GNN model gets 1.001, the best LSTM model gets 0.9859 on batch 602. My scores on the leaderboard were about .002 lower than my cross validation scores on batch 602. I did not submit any individual models to the leaderboard, as I did not want to contaminate my procedure with fitting to the leaderboard.</p>
<pre><code>000
00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s0 0.98869
00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s1 0.98881
00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s2 0.98886
00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s3 0.98881
00 dog-qnr-state-664-s9 LSTM512x3-384 p384 -- 0.98810 -0.00069 0.00006
001
01 dog-ust-state-752-s9 LSTM512x3 p128 s0 0.99525
01 dog-ust-state-752-s9 LSTM512x3 p128 s1 0.99547
01 dog-ust-state-752-s9 LSTM512x3 p128 s2 0.99538
01 dog-ust-state-752-s9 LSTM512x3 p128 s3 0.99533
01 dog-ust-state-752-s9 LSTM512x3 p128 -- 0.99184 -0.00352 0.00008
002
02 dog-ust-state-752-s9 LSTM512x3 p200 s0 0.99056
02 dog-ust-state-752-s9 LSTM512x3 p200 s1 0.99073
02 dog-ust-state-752-s9 LSTM512x3 p200 s2 0.99069
02 dog-ust-state-752-s9 LSTM512x3 p200 s3 0.99061
02 dog-ust-state-752-s9 LSTM512x3 p200 -- 0.98900 -0.00165 0.00007
003
03 dog-ust-state-752-s9 LSTM512x3 p256 s0 0.98949
03 dog-ust-state-752-s9 LSTM512x3 p256 s1 0.98929
03 dog-ust-state-752-s9 LSTM512x3 p256 s2 0.98943
03 dog-ust-state-752-s9 LSTM512x3 p256 s3 0.98945
03 dog-ust-state-752-s9 LSTM512x3 p256 -- 0.98831 -0.00110 0.00008
004
04 dog-ust-state-752-s9 LSTM512x3 p320 s0 0.98896
04 dog-ust-state-752-s9 LSTM512x3 p320 s1 0.98897
04 dog-ust-state-752-s9 LSTM512x3 p320 s2 0.98885
04 dog-ust-state-752-s9 LSTM512x3 p320 s3 0.98891
04 dog-ust-state-752-s9 LSTM512x3 p320 -- 0.98813 -0.00079 0.00005
005
05 dog-ust-state-752-s9 LSTM512x3 p384 s0 0.98868
05 dog-ust-state-752-s9 LSTM512x3 p384 s1 0.98881
05 dog-ust-state-752-s9 LSTM512x3 p384 s2 0.98874
05 dog-ust-state-752-s9 LSTM512x3 p384 s3 0.98872
05 dog-ust-state-752-s9 LSTM512x3 p384 -- 0.98815 -0.00058 0.00005
006
06 dog-uva-state-648-s9 LSTM512x3-200 p320 s0 0.99047
06 dog-uva-state-648-s9 LSTM512x3-200 p320 s1 0.99047
06 dog-uva-state-648-s9 LSTM512x3-200 p320 s2 0.99047
06 dog-uva-state-648-s9 LSTM512x3-200 p320 s3 0.99047
06 dog-uva-state-648-s9 LSTM512x3-200 p320 -- 0.99047 0.00000 0.00000
007
07 dog-uva-state-768-s9 LSTM512x3-200 p200 s0 0.99120
07 dog-uva-state-768-s9 LSTM512x3-200 p200 s1 0.99138
07 dog-uva-state-768-s9 LSTM512x3-200 p200 s2 0.99120
07 dog-uva-state-768-s9 LSTM512x3-200 p200 s3 0.99140
07 dog-uva-state-768-s9 LSTM512x3-200 p200 -- 0.98955 -0.00175 0.00010
008
08 dog-uva-state-768-s9 LSTM512x3-200 p256 s0 0.98947
08 dog-uva-state-768-s9 LSTM512x3-200 p256 s1 0.98931
08 dog-uva-state-768-s9 LSTM512x3-200 p256 s2 0.98939
08 dog-uva-state-768-s9 LSTM512x3-200 p256 s3 0.98936
08 dog-uva-state-768-s9 LSTM512x3-200 p256 -- 0.98819 -0.00119 0.00006
009
09 dog-uva-state-768-s9 LSTM512x3-200 p320 s0 0.98846
09 dog-uva-state-768-s9 LSTM512x3-200 p320 s1 0.98843
09 dog-uva-state-768-s9 LSTM512x3-200 p320 s2 0.98848
09 dog-uva-state-768-s9 LSTM512x3-200 p320 s3 0.98835
09 dog-uva-state-768-s9 LSTM512x3-200 p320 -- 0.98756 -0.00087 0.00005
010
10 dog-uva-state-768-s9 LSTM512x3-200 p384 s0 0.98793
10 dog-uva-state-768-s9 LSTM512x3-200 p384 s1 0.98807
10 dog-uva-state-768-s9 LSTM512x3-200 p384 s2 0.98810
10 dog-uva-state-768-s9 LSTM512x3-200 p384 s3 0.98798
10 dog-uva-state-768-s9 LSTM512x3-200 p384 -- 0.98741 -0.00061 0.00007
011
11 dyno-dxu-state-456-s GNN5-RO p200 s0 1.00377
11 dyno-dxu-state-456-s GNN5-RO p200 s1 1.00385
11 dyno-dxu-state-456-s GNN5-RO p200 s2 1.00370
11 dyno-dxu-state-456-s GNN5-RO p200 s3 1.00397
11 dyno-dxu-state-456-s GNN5-RO p200 -- 1.00193 -0.00189 0.00010
012
12 dyno-dxu-state-456-s GNN5-RO p256 s0 1.00346
12 dyno-dxu-state-456-s GNN5-RO p256 s1 1.00343
12 dyno-dxu-state-456-s GNN5-RO p256 s2 1.00341
12 dyno-dxu-state-456-s GNN5-RO p256 s3 1.00340
12 dyno-dxu-state-456-s GNN5-RO p256 -- 1.00213 -0.00129 0.00002
013
13 dyno-dxu-state-456-s GNN5-RO p350 s0 1.00300
13 dyno-dxu-state-456-s GNN5-RO p350 s1 1.00308
13 dyno-dxu-state-456-s GNN5-RO p350 s2 1.00316
13 dyno-dxu-state-456-s GNN5-RO p350 s3 1.00299
13 dyno-dxu-state-456-s GNN5-RO p350 -- 1.00213 -0.00093 0.00007
014
14 dyno-kxc-state-524-s GNN5-RO-350 p350 s0 1.00250
14 dyno-kxc-state-524-s GNN5-RO-350 p350 s1 1.00255
14 dyno-kxc-state-524-s GNN5-RO-350 p350 s2 1.00268
14 dyno-kxc-state-524-s GNN5-RO-350 p350 s3 1.00245
14 dyno-kxc-state-524-s GNN5-RO-350 p350 -- 1.00158 -0.00096 0.00009
015
15 dyno-kxc-state-580-s GNN5-RO-350 p350 s0 1.00200
15 dyno-kxc-state-580-s GNN5-RO-350 p350 s1 1.00209
15 dyno-kxc-state-580-s GNN5-RO-350 p350 s2 1.00218
15 dyno-kxc-state-580-s GNN5-RO-350 p350 s3 1.00194
15 dyno-kxc-state-580-s GNN5-RO-350 p350 -- 1.00107 -0.00098 0.00009
016
16 dyno-rjl-state-708-s GNN4-RO-350 p200 s0 1.00572
16 dyno-rjl-state-708-s GNN4-RO-350 p200 s1 1.00582
16 dyno-rjl-state-708-s GNN4-RO-350 p200 s2 1.00562
16 dyno-rjl-state-708-s GNN4-RO-350 p200 s3 1.00587
16 dyno-rjl-state-708-s GNN4-RO-350 p200 -- 1.00374 -0.00202 0.00010
017
17 dyno-rjl-state-708-s GNN4-RO-350 p256 s0 1.00503
17 dyno-rjl-state-708-s GNN4-RO-350 p256 s1 1.00487
17 dyno-rjl-state-708-s GNN4-RO-350 p256 s2 1.00496
17 dyno-rjl-state-708-s GNN4-RO-350 p256 s3 1.00504
17 dyno-rjl-state-708-s GNN4-RO-350 p256 -- 1.00352 -0.00146 0.00007
018
18 dyno-rjl-state-708-s GNN4-RO-350 p350 s0 1.00432
18 dyno-rjl-state-708-s GNN4-RO-350 p350 s1 1.00434
18 dyno-rjl-state-708-s GNN4-RO-350 p350 s2 1.00439
18 dyno-rjl-state-708-s GNN4-RO-350 p350 s3 1.00445
18 dyno-rjl-state-708-s GNN4-RO-350 p350 -- 1.00337 -0.00101 0.00005
019
19 dyno-wnq-state-476-s GNN4 p200 s0 1.00640
19 dyno-wnq-state-476-s GNN4 p200 s1 1.00645
19 dyno-wnq-state-476-s GNN4 p200 s2 1.00646
19 dyno-wnq-state-476-s GNN4 p200 s3 1.00660
19 dyno-wnq-state-476-s GNN4 p200 -- 1.00457 -0.00191 0.00007
020
20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s0 0.99903
20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s1 0.99885
20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s2 0.99879
20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s3 0.99877
20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 -- 0.99553 -0.00333 0.00010
021
21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s0 0.99426
21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s1 0.99452
21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s2 0.99433
21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s3 0.99448
21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 -- 0.99282 -0.00158 0.00011
022
22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s0 0.99456
22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s1 0.99457
22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s2 0.99457
22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s3 0.99453
22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 -- 0.99396 -0.00060 0.00002
023
23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s0 0.99215
23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s1 0.99226
23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s2 0.99228
23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s3 0.99238
23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 -- 0.99057 -0.00170 0.00008
024
24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s0 0.99058
24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s1 0.99062
24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s2 0.99072
24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s3 0.99065
24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 -- 0.99003 -0.00061 0.00005
025
25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s0 0.99316
25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s1 0.99324
25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s2 0.99298
25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s3 0.99322
25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 -- 0.99145 -0.00170 0.00010
026
26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s0 0.99138
26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s1 0.99142
26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s2 0.99129
26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s3 0.99142
26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 -- 0.99050 -0.00087 0.00005
027
27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s0 0.99141
27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s1 0.99143
27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s2 0.99151
27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s3 0.99137
27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 -- 0.99071 -0.00072 0.00005
028
28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s0 0.98831
28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s1 0.98831
28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s2 0.98837
28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s3 0.98838
28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 -- 0.98771 -0.00063 0.00003
029
29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s0 0.98777
29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s1 0.98777
29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s2 0.98784
29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s3 0.98781
29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 -- 0.98717 -0.00063 0.00003
030
30 goo-uel-state-652-s9 GOO-200 p200 s0 0.99403
30 goo-uel-state-652-s9 GOO-200 p200 s1 0.99415
30 goo-uel-state-652-s9 GOO-200 p200 s2 0.99398
30 goo-uel-state-652-s9 GOO-200 p200 s3 0.99416
30 goo-uel-state-652-s9 GOO-200 p200 -- 0.99241 -0.00167 0.00008
031
31 goo-uel-state-652-s9 GOO-200 p350 s0 0.99091
31 goo-uel-state-652-s9 GOO-200 p350 s1 0.99087
31 goo-uel-state-652-s9 GOO-200 p350 s2 0.99092
31 goo-uel-state-652-s9 GOO-200 p350 s3 0.99093
31 goo-uel-state-652-s9 GOO-200 p350 -- 0.99013 -0.00078 0.00002
032
32 goo-wqz-state-664-s9 GOO-384 p384 s0 0.99036
32 goo-wqz-state-664-s9 GOO-384 p384 s1 0.99049
32 goo-wqz-state-664-s9 GOO-384 p384 s2 0.99043
32 goo-wqz-state-664-s9 GOO-384 p384 s3 0.99039
32 goo-wqz-state-664-s9 GOO-384 p384 -- 0.98976 -0.00066 0.00005
033
33 dyno-fdk-state-396-s GNN6-200 p200 s0 1.01130
33 dyno-fdk-state-396-s GNN6-200 p200 s1 1.01137
33 dyno-fdk-state-396-s GNN6-200 p200 s2 1.01136
33 dyno-fdk-state-396-s GNN6-200 p200 s3 1.01143
33 dyno-fdk-state-396-s GNN6-200 p200 -- 1.00933 -0.00204 0.00005
034
34 dyno-rqy-state-408-s GNN6-350 p350 s0 1.00987
34 dyno-rqy-state-408-s GNN6-350 p350 s1 1.00999
34 dyno-rqy-state-408-s GNN6-350 p350 s2 1.00999
34 dyno-rqy-state-408-s GNN6-350 p350 s3 1.01000
34 dyno-rqy-state-408-s GNN6-350 p350 -- 1.00892 -0.00104 0.00005
035
35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s0 0.98744
35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s1 0.98755
35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s2 0.98756
35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s3 0.98758
35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 -- 0.98687 -0.00066 0.00006
036
36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s0 0.98654
36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s1 0.98662
36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s2 0.98664
36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s3 0.98666
36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 -- 0.98595 -0.00066 0.00005
Total models present 37
</code></pre>
<h1>Combining the Models</h1>
<p>I used a linear combination of a subset of these models. For a baseline, I simply calculated the optimal weights on all 37 models using scipy.optimize.minimize with the Powell method. Using least squares gives an about 20 bps worse score because the true metric is the mean angular error. The baseline score was .97948 on Batch 601 (used for weight calculation) and .97964 on Batch 602 (for validation). This is a 63 bps improvement over the best single model. Since I didn't want to calculate all 37 models for submission, I chose a subset of models. You cannot simply pick the models with the highest weights, because the models are correlated: for example, if model A and model B are the best and yet effectively the same, their weights might be approximately equal and the highest of the bunch, but a subset should only include one of them. So I took an iterative approach, removing one model at a time and dropping the one whose removal had the least negative impact. These are the results:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F651278%2Fa906b327fa59d5259a491c6098ce82eb%2Fscore_by_models.png?generation=1682004978180418&alt=media" alt=""></p>
<p>You can see that too many models hurt cross-validation due to overfitting. I chose 6 models as a compromise between inference time and best score.</p>
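<p>A minimal sketch of how such weight fitting could look (not my exact code; <code>preds</code> is assumed to hold each model's predicted direction vectors and <code>target</code> the true unit directions):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def blended_angular_error(weights, preds, target):
    # preds: (n_models, n_events, 3), target: (n_events, 3) unit vectors
    blend = np.tensordot(weights, preds, axes=1)              # (n_events, 3)
    blend /= np.linalg.norm(blend, axis=1, keepdims=True) + 1e-12
    cos = np.clip((blend * target).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

def fit_weights(preds, target):
    w0 = np.full(len(preds), 1.0 / len(preds))                # start from equal weights
    res = minimize(blended_angular_error, w0, args=(preds, target), method="Powell")
    return res.x
</code></pre>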
<h1>Lessons Learned</h1>
<ol>
<li>Learn about Transformers!</li>
<li>Have a better framework in place to evaluate different models and parameters without pulling my hair out when changing stuff.</li>
<li>Spend less time on searching for small tweaks when there is a large gap to the leaders</li>
<li>To save time, do more fine tuning of models when adding things rather than training from scratch</li>
</ol>
<h1>Things that didn't work for me</h1>
<ol>
<li>I could not get vonMises-Fisher loss to work</li>
<li>Adding additional features, like number of neighbors, sensor type, line-fit estimated direction </li>
<li>Dropout or weight decay. My models tended not to overfit, probably because they were not complex enough</li>
<li>Pulse selection. I could never improve on random selection, which still mystifies me</li>
<li>More complex ensembles. I see <a href="https://www.kaggle.com/dipamc77" target="_blank">@dipamc77</a> did well with this, but I could not get XGBOOST or a neural network to combine predictions or even the embeddings into anything better than weighted averaging.</li>
</ol>
<h1>Conclusion</h1>
<p>Congratulations to all the top finishers! And a big thank you to all the people posting information to help everyone, to everyone for participating, and to the contest organizers for putting on such a great competition.</p>
<p>Yes, many thanks. I was surprised to see LSTM did better than DynEdge in the end.</p>
<p>vMF just didn't work for me. By using the norm of the vector error, you get some of the benefit of vMF: when the model is totally uncertain, it picks a short prediction vector, because that minimizes ||v-pred|| compared to a (longer) vector in a wrong direction. This helps in the ensembling, because those predictions automatically get weighted less. That's something I forgot to mention in the writeup. Normalizing the predictions before averaging gives a worse solution because of this effect.</p>
|
IceCube - Neutrinos in Deep Ice
|
13th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A thank you to the host, and well done to all the participants. I'm frustrated that I couldn't stay in the gold tier, but I gained valuable learning experiences. I'd like to take this opportunity to express my gratitude. Although it's late, I will share my solution. My English isn't so good so feel free to ask me if there is anything unclear.</p>
<p>My solution is:</p>
<ul>
<li>Noise reduction from training data</li>
<li>Customized GraphNet-based models</li>
</ul>
<p>I tried training LSTMs and Transformers, but they didn't go well, so I abandoned them early, which led to my failure.</p>
<p>I used an ensemble of eight GraphNet-based models for the final submission. A single graph model gets Public 0.982 and Private 0.984 scores, so there may not have been much of an ensemble effect.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2391815%2Fc25aef115dd90773cb76ae7fc0ae296c%2Flatesub.png?generation=1682672834486253&alt=media" alt="late sub"></p>
<p><strong>Noise reduction from training data</strong><br>
I suspected that events with few observed signals could not be fitted well and led to overfitting. So I implemented the following cycle:<br>
1) Run inference on the training data and exclude the samples that are not predicted well; specifically, exclude data with MAE > 1.<br>
2) Train on the dataset, and if the validation score improves, use that model to infer the training data again.<br>
3) Construct the data set in the same manner as step 1, exclude the training data that cannot be fitted, and train again.</p>
<p>The above process was repeated two or three times.</p>
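<p>A rough sketch of one round of this filtering (names such as <code>angular_error</code> and <code>model.predict</code> are assumed helpers, not the exact code):</p>
<pre><code>import numpy as np

def filter_training_events(model, events, targets, threshold=1.0):
    # drop events whose current prediction error exceeds the threshold (MAE > 1)
    preds = model.predict(events)
    errors = angular_error(preds, targets)   # per-event error; assumed helper
    drop = errors > threshold
    return events[~drop], targets[~drop]

# repeated two or three times:
# 1) filter with the current model, 2) retrain,
# 3) if the validation score improves, filter the training data again
</code></pre>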
<p>This idea is based on my past experience with overfitting on noisy labels, as in the PANDA competition. Since this competition deals with a physical phenomenon, prediction is relatively easy given ideal observational data. Therefore, I reasoned that a model trained on noise-like data would also fail to correctly predict difficult events in the test data. In fact, the model trained after removing the noisy data improved the CV. The public LB improved along with the CV, so I trusted this method.</p>
<p><strong>Customized GraphNet-based models</strong><br>
I created models based on the one provided by the host, as well as models that include the time variable in the features used for the KNN. I also changed the number of KNN neighbors (8 to 32) and the pooling layers (replacing pooling with attention). I tried other things such as changing the activation function, but activations with zero gradient in the negative region did not seem to be a good choice; PReLU was the best in my case.</p>
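<p>For illustration, building the KNN edges with time included could look like the following sketch with torch_geometric (not the author's exact code; the time scaling is an assumption):</p>
<pre><code>import torch
from torch_geometric.nn import knn_graph

def build_edge_index(xyz, time, batch, k=32, time_scale=0.1):
    # treat scaled time as a 4th coordinate so neighbours are close in space-time
    coords = torch.cat([xyz, time_scale * time.unsqueeze(-1)], dim=-1)
    return knn_graph(coords, k=k, batch=batch)
</code></pre>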
<p><strong>My Failure Points</strong></p>
<ul>
<li>Other information<br>
Tried to include information on QE, etc., but could not pursue it very deeply, especially since GNN did not seem to improve.</li>
<li>Other approach<br>
I could not train LSTM and Transformer well. It is possible that the inputs and settings were incorrect, but the biggest failure was to terminate this approach prematurely.</li>
<li>post-process<br>
Attempted post-processing in the classification task to deal with cases where the exact opposite direction was obtained, but it did not work. </li>
<li>Interpolation<br>
Since the accuracy was poor for events with few observed signals, I looked for ways to improve the accuracy by interpolating signals at intermediate times and intermediate locations, but this did not work.</li>
<li>Imaging Idea<br>
Tried to create an ensemble factor by predicting the position and orientation of the sensor as a video/imaging task, but it did not work because the number of events was too large (storage was disk full). I should have focused on something else.</li>
</ul>
<p><strong>Others</strong></p>
<ul>
<li><a href="https://www.kaggle.com/code/sinpcw/icecube-submit" target="_blank">inference notebook</a> (version 111 is best LB.)</li>
</ul>
|
IceCube - Neutrinos in Deep Ice
|
14th place solution: transformer and GRU
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We would like to thank the organizers and participants of this great competition! We learned a lot and had a lot of fun. Special thanks to <a href="https://www.kaggle.com/seungmoklee" target="_blank">ZhaounBooty</a> for his early sharing solution, <a href="https://www.kaggle.com/rsmits" target="_blank">Robin Smits</a> for sharing further improvements of the RNN idea, and <a href="https://www.kaggle.com/iafoss" target="_blank">Iafoss</a> for sharing the dataloader.</p>
<p>Below is a (preliminary) description of the two models we trained.</p>
<p>We preprocessed the whole training dataset and saved it as a memory-mapped file. We have written an iterable dataset that allowed very fast data loading.</p>
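<p>The memory-mapped setup could look roughly like this (file layout and names are assumptions, not our actual code):</p>
<pre><code>import numpy as np
import torch
from torch.utils.data import IterableDataset

class MemmapPulseDataset(IterableDataset):
    def __init__(self, pulses_path, offsets_path, n_features=6):
        # all pulses of all events stored row-wise in one float32 memory-mapped file
        self.pulses = np.memmap(pulses_path, dtype=np.float32, mode="r").reshape(-1, n_features)
        self.offsets = np.load(offsets_path)   # (n_events, 2): start/end row per event

    def __iter__(self):
        for start, end in self.offsets:
            yield torch.from_numpy(np.array(self.pulses[start:end]))
</code></pre>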
<h3>Transformer model:</h3>
<p><em>Private and public score 0.983</em></p>
<ul>
<li>Size: 4.9M params</li>
<li>Batch: 2000 events</li>
<li>Hidden size: 256</li>
<li>Activation: SiLU</li>
<li>Layernorm</li>
<li>Num transformer layers: 3</li>
<li>Num heads: 4</li>
<li>Global aggregation: [sum, mean, max, std_dev]</li>
</ul>
<p>Training:</p>
<ul>
<li>LR: 0.002</li>
<li>Patience: 1, multiplicative decay: 0.9</li>
<li>Run for 77 epochs</li>
<li>32 GPUs</li>
<li>Warmup: 2 epochs</li>
</ul>
<h3>Bidirectional GRU model:</h3>
<p><em>Private and public score 0.987</em></p>
<ul>
<li>Size: 1.7 M parameters</li>
<li>Batch size: 2048</li>
<li>Input shape: (96, 6)</li>
<li>Number of layers: 3</li>
<li>hidden_size: 160</li>
<li>Hidden states of the last GRU unit are passed to a fully connected layer (size: 512)</li>
</ul>
<p>Training</p>
<ul>
<li>max LR 1e-3 with a warm up and cosine annealing ( OneCycleLR, pct_start: 0.001)</li>
<li>AdamW with weight decay 1e-4</li>
<li>Trained for 14 epochs on the full dataset on GTX 1080 GPU for 90 hours</li>
</ul>
<p>In both cases, the network output was a 3-dimensional unit vector. We then directly used the mean angular error as the loss.</p>
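<p>The loss itself is simple; a minimal PyTorch sketch (assuming the target is already a unit vector):</p>
<pre><code>import torch

def mean_angular_error(pred, target, eps=1e-8):
    # pred: (B, 3) raw network output, target: (B, 3) true unit direction
    pred = pred / (pred.norm(dim=-1, keepdim=True) + eps)
    cos = (pred * target).sum(dim=-1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()
</code></pre>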
<p>Interestingly, despite very limited computer resources, the GRU model scores quite well. Unfortunately, we didn't have time to scale it further on the GTX 1080.</p>
<p>Thank you!</p>
|
IceCube - Neutrinos in Deep Ice
|
15th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we want to thank Kaggle team and the organizer for holding this amazing competition. In addition, we appreciate the people who shared their amazing notebooks and insightful ideas about this competition.</p>
<p>Unfortunately, we could not reach a gold medal, but we will share a brief explanation of our solution.</p>
<hr>
<ol>
<li>model<br>
custom DNN (based on <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">this</a>) <br>
We trained 4 folds (out of a 100-fold CV) of the DNN and ensembled them with simple averaging.<br>
We tried RNN, LSTM, and transformer models, but because of limited time and compute resources we could not make them work.</li>
<li>data<br>
We picked 200 pulses per event based on auxiliary, valid time window, and charge.</li>
<li>features<br>
We chose [x, y, z, time, charge, auxiliary, valid_time_window, string, qe, scatter, absorption] as input features based on small data experiments. Maybe some of them are not necessary for big data training because the model can implicitly learn them from data. But we don't have much time to check, so we left them.</li>
<li>prediction<br>
We made the model predict both the (x, y, z) direction the neutrino came from and the direction it goes to. During training, we calculated losses for each of them. At inference time, we flipped one of them, averaged the two, and obtained a single (x, y, z) direction (see the sketch after this list).</li>
<li>loss function<br>
We used VonMisesFisher3DLoss. We checked other losses or binning&classification, but this gave us the best result.</li>
<li>others<br>
We utilized polars for data processing. The processing speed was very fast, and our best score submission took only about 30min.<br>
One regret: we got a fairly good score with the GNN with little effort and then persisted in improving only its score. Reading the top-ranked teams' solutions, we think we should have taken a wider view and tried various approaches.</li>
</ol>
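<p>The sketch referred to in item 4 (assumed tensor shapes; not our exact code):</p>
<pre><code>import torch

def combine_two_heads(pred_from, pred_to, eps=1e-8):
    # pred_from: (B, 3) direction the neutrino came from
    # pred_to:   (B, 3) direction the neutrino travels to
    # flip the "from" head, average the two heads, and renormalise
    avg = 0.5 * (pred_to - pred_from)
    return avg / (avg.norm(dim=-1, keepdim=True) + eps)
</code></pre>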
<hr>
<p><a href="https://www.kaggle.com/code/ludditep/15th-place-dynedge-2-direction-prediction" target="_blank">Inference code</a></p>
<p>Thank you to Rist Inc. (株式会社Rist) and Scrum Sign Inc. (株式会社スクラムサイン) for sponsoring KaiRA and supporting the learning of its student members. On behalf of KaiRA, I would like to express our gratitude.</p>
<p>Thank you.<br>
As mentioned in other teams' solutions, random pulse sampling gave us a slightly better score. We also tried x-y 180° rotation TTA, which improved the score by 0.000X, but we dropped these ideas since the gains were within noise.<br>
What worked most was the 2-direction prediction, which improved the score by 0.015. I think it works as some kind of ensemble and forces the model to look at later pulses.</p>
<p>We updated our solution. If you have a question, ask us freely.</p>
|
IceCube - Neutrinos in Deep Ice
|
17th place solution (silver)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to everyone, it was an exciting competition!</p>
<p>I also want to give thanks to <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a> and <a href="https://www.kaggle.com/seungmoklee" target="_blank">@seungmoklee</a> for your notebooks with LSTM solutions to this problem. They really helped me to understand the possible approaches for this task. </p>
<h3>Our solution is an ensemble of 2 types of models:</h3>
<h4>1. Ensemble of 5 LSTM models, that are trained on classification problem to predict bins of angles</h4>
<p>We tried different architectures. It turned out that LSTM worked better here than GRU; however, one model used 3 LSTM + 2 GRU layers and also worked very well. We used an exponential LR schedule, which gave a slight boost, while cosine annealing did not work well in our case. Models were trained on 200-300 files. I once tried 400-600 files, but training was much slower and the score also improved slowly; with the deadline approaching, it was more "profitable" to use 200-300 files. We also tried predicting just an XYZ vector and converting it to angles with MSE and vMF loss; it was OK, but not better than predicting bins of angles.</p>
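<p>For reference, the angle binning itself can be as simple as the following sketch (the bin counts here are illustrative, not the ones we used):</p>
<pre><code>import numpy as np

N_AZI, N_ZEN = 24, 12                              # illustrative bin counts
azi_edges = np.linspace(0, 2 * np.pi, N_AZI + 1)
zen_edges = np.linspace(0, np.pi, N_ZEN + 1)

def angles_to_class(azimuth, zenith):
    # digitise the continuous angles into a single class id for cross-entropy training
    ia = np.clip(np.digitize(azimuth, azi_edges) - 1, 0, N_AZI - 1)
    iz = np.clip(np.digitize(zenith, zen_edges) - 1, 0, N_ZEN - 1)
    return iz * N_AZI + ia
</code></pre>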
<h4>2. DynEdge graphnets (5-7 in best submissions), just trained on VmF3d loss to predict XYZ vectors (as in the baseline)</h4>
<p>Didn't really change anything, just trained this model on 200 files. Tried different lengths of pulses: from 96 to 200. More pulses gave better results. We managed to train about 20 epochs before the deadline, but it can definitely give better results with more training.</p>
<p>While ensembling, we used different ensemble weights to predict azimuth and zenith. They were chosen on a validation set.</p>
<p>Each graphnet takes around 30 minutes on inference. LSTM ensemble takes around 2 hours and has 0.999 on the private LB. We didn't check the score of pure graphnet ensemble on the LB, but the best single model is around 1.006. Mixing these 2 types of models gave us a nice boost (0.987 on the private).</p>
<h3>Additional thoughts:</h3>
<ul>
<li>GraphNet is a cool thing; however, it felt like it could be pushed further in our case. It trains well, but after some epochs the loss starts to increase slowly. We probably should have tried different learning rate schedulers, but didn't have time at the end.</li>
<li>We also tried a transformer model, trained on the XYZ vector with MSE/VmF loss. I believed that in this competition it should win top places; however, for some reason it didn't outperform the LSTMs. Maybe I chose bad params in its architecture (it was my first time, yeah). Maybe we needed some additional techniques and other losses to make it work.</li>
<li>Ensembling gives nice boosts. I think that models are still not robust and in many cases they are not confident in predictions.</li>
</ul>
|
IceCube - Neutrinos in Deep Ice
|
1st Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, I would like to express my gratitude to the hosts and the Kaggle team for organizing such an engaging competition. <br>
Additionally, I want to extend my appreciation to those who shared their valuable insights by publishing notebooks and participating in the discussions.<br>
In particular, the following notebooks and posts have been incredibly helpful.</p>
<ul>
<li><a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission" target="_blank">graphnet_baseline_submission</a> by <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a></li>
<li><a href="https://www.kaggle.com/code/rasmusrse/graphnet-example" target="_blank">graphnet_example</a> by <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a> and <a href="https://www.kaggle.com/pellerphys" target="_blank">@pellerphys</a></li>
<li><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/392096" target="_blank">∞ Explanation and Improvement von Mises-Fisher Loss</a> by <a href="https://www.kaggle.com/synset" target="_blank">@synset</a></li>
<li><a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">Early Sharing Prize - DynEdge - 1.046</a> by <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a></li>
<li><a href="https://www.kaggle.com/code/seungmoklee/3-lstms-with-data-picking-and-shifting" target="_blank">3 LSTMs; with Data Picking and Shifting</a> by <a href="https://www.kaggle.com/seungmoklee" target="_blank">@seungmoklee</a></li>
<li><a href="https://www.kaggle.com/code/dschettler8845/ndi-let-s-learn-together-eli5-and-eda" target="_blank">NDI – Let's Learn Together – ELI5 and EDA</a> by <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a></li>
<li><a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">Tensorflow LSTM Model Training TPU</a> by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a></li>
</ul>
<h3>Architecture</h3>
<p>My model has a simple structure with EdgeConv and Transformer connected.<br>
I aimed to combine EdgeConv and Transformers in my model to efficiently gather information. GNNs were employed to capture local neighborhood information, while Transformers were used to collect global context, ensuring a comprehensive understanding of the data.<br>
This simple model achieves a public score of 0.9628 points and a private score of 0.9633 points as a single model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2Ff28760ca31a4cedaebaad8cac80352ac%2Ficecube2.png?generation=1682000618971016&alt=media" alt="model"></p>
<h3>Static edge selection for GNN</h3>
<p>In the <a href="https://arxiv.org/abs/1801.07829" target="_blank">original paper</a>, edges are calculated in each layer dynamically, but this edge selection is not differentiable and cannot be trained. This still works well for the segmentation task in the original paper, as points in the same segment are trained to be close in the latent space. However, the situation is different in this task, so I did not think it would make sense to dynamically select edges. Therefore, the edge selection used in EdgeConv is calculated at the input.</p>
<h3>Simple modification of EdgeConv</h3>
<p>Using <a href="https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html" target="_blank">EdgeConv</a>, the latent parameter \({x}_i\) of \({dom}_i\) is updated using the difference: \({x}_j-{x}_i\), where \({x}_j\) is the latent parameter of the \({dom}_i\)'s k-nearest neighbour \({dom}_j\).<br>
This is fine for x, y, z and time, but for charge and auxiliary, the absolute values are meaningful, so I make \({x}_i\) be updated using not only \({x}_j-{x}_i\) but also \({x}_j\).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F99153c1b48d05472673c6dcd1827d187%2Fedge4.png?generation=1682093716754213&alt=media" alt="edgeConv"></p>
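<p>A sketch of this modification with torch_geometric (my reading of the description above, not the author's code):</p>
<pre><code>import torch
from torch import nn
from torch_geometric.nn import MessagePassing

class ModifiedEdgeConv(MessagePassing):
    # EdgeConv whose message uses both the relative term x_j - x_i and the absolute x_j
    def __init__(self, in_dim, out_dim):
        super().__init__(aggr="max")
        self.mlp = nn.Sequential(nn.Linear(3 * in_dim, out_dim), nn.GELU())

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        return self.mlp(torch.cat([x_i, x_j - x_i, x_j], dim=-1))
</code></pre>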
<h3>Loss function</h3>
<p>VMFLoss is a good and stable loss function, but θ is in the form of cos:</p>
<pre><code>VMFLoss = -κ*cos(θ) + C(κ)
</code></pre>
<p>(where θ is the angle between the ground truth and the prediction and κ is the length of the 3D prediction).<br>
On the other hand, the metric of this competition is the angle θ itself.<br>
To minimise θ itself, I defined the loss function as follows:</p>
<pre><code>MyLoss = θ - κ*cos(θ) + C(κ)
</code></pre>
<p>This simple modification resulted in a 0.005👍 gain compared to VMFLoss.</p>
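<p>In PyTorch this objective could be written roughly as follows (my own sketch of the formula above; the stable form of C(κ) is the standard 3D von Mises-Fisher normaliser):</p>
<pre><code>import math
import torch

def my_loss(pred, target):
    # pred: (B, 3) unnormalised output; its length plays the role of kappa
    kappa = pred.norm(dim=-1).clamp(min=1e-6)
    cos_t = ((pred / kappa.unsqueeze(-1)) * target).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos_t)
    # C(kappa) of the 3D vMF distribution: log(4*pi) + log(sinh(kappa)) - log(kappa),
    # written in a numerically stable form
    log_c = math.log(4 * math.pi) + kappa + torch.log1p(-torch.exp(-2 * kappa)) \
            - math.log(2.0) - torch.log(kappa)
    return (theta - kappa * cos_t + log_c).mean()
</code></pre>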
<h3>Sequence Bucketing</h3>
<p>In transformers, computational complexity is proportional to the square of the sequence length, making it essential to reduce the sequence length. As a result, it is effective to group data with similar sequence lengths and create mini-batches. This approach speeds up computation and further decreases the GPU memory usage, allowing for the utilization of larger mini-batch sizes.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F3674d2af7f7552e176a208149f3ba2ec%2Fmini_batch.png?generation=1682128087385204&alt=media" alt=""></p>
<p>It is easy to implement by changing collate_fn in the DataLoader.</p>
<pre><code>def collate_fn(graphs):
    # drop too-short events (the threshold value was elided in the original) and
    # sort by length so each sub-batch contains events of similar length
    graphs = [g for g in graphs if g.n_pulses > 0]
    graphs.sort(key=lambda x: x.n_pulses)
    batch_list = []
    # split_list holds the cumulative fractions at which the sorted list is cut
    for minp, maxp in zip([0] + split_list[:-1], split_list):
        min_idx = int(minp * len(graphs))
        max_idx = int(maxp * len(graphs))
        this_graphs = graphs[min_idx:max_idx]
        this_batch = Batch.from_data_list(this_graphs)
        batch_list.append(this_batch)
    return batch_list
</code></pre>
<p>Note that even if gradients are calculated individually for each length-biased mini-mini-batch in the batch_list, the negative effects can be mitigated by updating the weights collectively afterwards.</p>
<h3>Data loading</h3>
<p>Because the data was huge and it was difficult to put everything in memory, I tried to load one batch per epoch.<br>
The pseudo code is shown below.</p>
<pre><code># class and hook names below are reconstructed from the garbled pseudo code
class IceCubeDataset(Dataset):
    def reset_batch(self):
        self.this_batch_id = self.get_next_batch_id()
        self.this_batch = pd.read_parquet(...)  # parquet file for this batch_id
        self.this_meta = self.meta[self.meta.batch_id == self.this_batch_id]

    def __len__(self) -> int:
        return len(self.this_meta)
    ...

class BatchSwitchCallback():
    def __init__(self, dataset):
        ...
        self._dataset = dataset

    def on_epoch_end(self):
        ...
        self._dataset.reset_batch()
</code></pre>
<h3>Model parameters</h3>
<p>This is some information about base models which are used for stacking. <br>
4 layers and 6M parameters seem to be smaller than other top solutions.</p>
<ul>
<li>Depth of EdgeConv+Transformer block layer: 3 or 4</li>
<li>Embedding dimension: 256</li>
<li>Used sequence length (train): 200 to 500</li>
<li>Used sequence length (inference): 6000</li>
<li>With/without global features</li>
<li>Number of parameters: 6M</li>
<li>Parameters used in kNN for GNN: x,y,z or x,y,z,time</li>
<li>Training time: 10 to 14 hours/epoch (on 2x titan RTX)</li>
<li>Inference time: 30 minutes/5 batches ( on Kaggle notebook)</li>
</ul>
<p>Inference time is relatively slow due to the use of 6000 sequences.</p>
<h3>Stacking</h3>
<p>The framework for stacking was primarily based on the original base model. <br>
The main difference was that the model was replaced with a 3-layer MLP. <br>
Through stacking, an improvement of +0.003 was achieved.</p>
<h3>Code</h3>
<p>Inference notebook is available <a href="https://www.kaggle.com/code/its7171/icecube-edgeconv-transformer-inference" target="_blank">here</a>.</p>
<p>Thank you! And congratulations to new GM, Well deserved!🎉</p>
<p>Thank you. And thank you for publishing the notebook. It was very helpful.</p>
<p>Thank you!</p>
<p>Thank you!<br>
The following documentation on EdgeConv may be helpful.</p>
<p><a href="https://arxiv.org/abs/1801.07829" target="_blank">https://arxiv.org/abs/1801.07829</a><br>
<a href="https://www.nbi.dk/~petersen/Teaching/ML2022/Week5/IntroductionToGNNs_RasmusO.pdf" target="_blank">https://www.nbi.dk/~petersen/Teaching/ML2022/Week5/IntroductionToGNNs_RasmusO.pdf</a><br>
<a href="https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html" target="_blank">https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html</a></p>
<p>Thank you. And thank you for publishing the notebook. It was insightful.</p>
<p>Thank you very much!</p>
<p><a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> I made my inference notebook pulblic.<br>
<a href="https://www.kaggle.com/code/its7171/icecube-edgeconv-transformer-inference" target="_blank">https://www.kaggle.com/code/its7171/icecube-edgeconv-transformer-inference</a></p>
<p>I just thought that it was the state-of-the-art for this task by this paper from hosts.<br>
<a href="https://iopscience.iop.org/article/10.1088/1748-0221/17/11/P11003/pdf" target="_blank">https://iopscience.iop.org/article/10.1088/1748-0221/17/11/P11003/pdf</a></p>
<p>It is typically used in Point Clouds segmentation tasks.<br>
<a href="https://arxiv.org/abs/1801.07829" target="_blank">https://arxiv.org/abs/1801.07829</a></p>
<p>The x, y, z and time data in this competition are only meaningful in terms of their differences.<br>
In this respect it is similar to Point Clouds, which I believe is why it was effective in this competition task.</p>
|
IceCube - Neutrinos in Deep Ice
|
20th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Team members are junseonglee11 (@junseonglee11), Ayaan Jang(@ayaanjang). <br>
We ensembled 6 LSTM models (2 different versions).<br>
We modified Robin Smits' and Robert Hatch's notebooks.</p>
<h1><strong>Our notebooks:</strong></h1>
<p>Inference: <a href="https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged" target="_blank">https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged</a><br>
Train: <a href="https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train" target="_blank">https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train</a><br>
Dataset (TFRecord): <a href="https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1" target="_blank">https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1</a> </p>
<h1><strong>References</strong></h1>
<p>Robin Smits' notebooks:<br>
<a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference</a><br>
<a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu</a><br>
<a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-data-preprocessor/notebook" target="_blank">https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-data-preprocessor/notebook</a> I modified his notebook </p>
<p>Robert Hatch's notebook<br>
<a href="https://www.kaggle.com/code/roberthatch/lb-1-183-lightning-fast-baseline-with-polars" target="_blank">https://www.kaggle.com/code/roberthatch/lb-1-183-lightning-fast-baseline-with-polars</a><br>
It was crucial to improving our score. Used the results of this notebook as additional inputs in our model.</p>
<p>Seungmoklee's notebook<br>
<a href="https://www.kaggle.com/code/seungmoklee/lstm-preprocessing-point-picker" target="_blank">https://www.kaggle.com/code/seungmoklee/lstm-preprocessing-point-picker</a></p>
<h1><strong>Data preprocessing</strong></h1>
<p><strong>Our code:</strong> [<a href="https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1" target="_blank">https://www.kaggle.com/code/junseonglee11/icecube-data-to-tfrecord-v2-1</a>]</p>
<ol>
<li><p>Reference part: preprocessed the data to generate 96-step time series with 6 features:</p>
<ul>
<li>Sensor signal measurement time</li>
<li>Sensor signal strength</li>
<li>Sensor signal quality</li>
<li>X, Y, Z coordinates of received sensor (3 features)</li></ul></li>
<li><p>Performed feature engineering on the 6 features to improve the prediction accuracy of the RNN (recurrent neural network) model (trained on 90 data files instead of the entire dataset, then experimented with feature transformations); see the sketch after this list:</p>
<ul>
<li><strong>Using original features: LB 1.015</strong></li>
<li><strong>Adding Time Difference: LB 1.0128</strong></li>
<li><strong>Replaced the sensor signal measurement time with the time interval (time difference between the next and current measurement, Time diff):</strong> The difference in sensor position between the current and next time points means different things depending on the time interval (for example, moving 10m in 1 second vs. 1m in 10 seconds is a 100-fold difference in speed). The absolute measurement time of the sensor signal cannot reflect this, so we judged the time interval to be a more appropriate input than the measurement time.</li>
<li><strong>Adding Coordinate Difference: LB 1.0115</strong><br>
<strong>Added three features with the difference in X, Y, Z coordinates between the next and current sensor positions:</strong> Similar to the time interval feature, we judged that the difference in coordinates between time points better reflects the direction information of the neutrino. However, when we replaced the sensor's xyz coordinates with the coordinate differences, the accuracy decreased, because the differences between non-adjacent time points also carry important information.</li></ul></li>
<li><p>Convert all the inputs above to TensorFlow TFRecord format to minimize CPU memory usage and accelerate training.</p></li>
</ol>
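<p>The sketch referred to in step 2 above (the column order of the 96×6 array is an assumption; not the exact preprocessing code):</p>
<pre><code>import numpy as np

def add_diff_features(seq):
    # seq: (96, 6) event array with columns [time, charge, quality, x, y, z] (assumed order)
    dt = np.diff(seq[:, 0], append=seq[-1, 0])          # time to the next pulse (0 for the last)
    dxyz = np.vstack([np.diff(seq[:, 3:6], axis=0), np.zeros((1, 3))])
    return np.hstack([seq, dt[:, None], dxyz])          # (96, 10) with diff features appended
</code></pre>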
<h1><strong>Model training and inference</strong></h1>
<p><strong>Our code (model):</strong> <a href="https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train" target="_blank">https://www.kaggle.com/code/junseonglee11/20th-tensorflow-tfrecord-tpu-lstm-line-fit-train</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7429668%2Feea4665a122c77a5c2e3aa60a38e65b1%2F.png?generation=1683061023145992&alt=media" alt=""></p>
<ul>
<li><strong>Model Overview</strong>: Bidirectional LSTM (long short-term memory) 7 layer+ batch normalization + Line fitting features concatenation + Dense layer</li>
<li><strong>Activation Function</strong>: GELU(Gaussian Error Linear Unit) activation layer</li>
<li><strong>Loss Function</strong>: Sparse_categorical_crossentropy loss</li>
<li><strong>Training Metric</strong>: Accuracy</li>
<li><strong>Optimizer:</strong> RAdam (Rectified Adam) optimizer</li>
<li><strong>Ensemble (2 versions of the model)</strong></li>
<li>Model trained on fold 0 of the dataset divided into 10 folds<ul>
<li>Train: 594 dataset, Valid: 66 dataset: model trained using only 1 validation dataset</li>
<li>Train: 659 dataset, Valid: 1 dataset: four models with the lowest Mean Angular Error values are selected from each version of the model (total of 8 models)</li>
<li>Optimal ensemble weights are determined through cross-validation</li></ul></li>
</ul>
<h1><strong>Data postprocessing</strong></h1>
<p><strong>Our code (Data postprocessing):</strong> <a href="https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged" target="_blank">https://www.kaggle.com/code/ayaanjang/20th-tensorflow-lstm-model-inference-merged</a><br>
<strong>Original code:</strong> the predicted probabilities are used as-is (no power transform).<br>
The azimuth and zenith of a neutrino are computed as a weighted average over the categories, using each category's predicted probability and the direction it represents.<br>
<strong>Changed code:</strong> raise the predicted probabilities from the model to a power before averaging.<br>
Simply combining the per-category directions with the raw predicted probabilities is not optimal, so we tried various transformations of the predicted probabilities (exponential, logarithmic, activation functions, etc.) to improve the post-processing stage.<br>
<strong>Improvement:</strong> raising the model-predicted category probabilities to the power of 1.35 reduced the mean angular error by about 0.002.<br>
→ This gives appropriate extra weight to the categories predicted with high probability.</p>
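<p>A sketch of this post-processing (the bin direction arrays are assumptions; not the exact notebook code):</p>
<pre><code>import numpy as np

def probs_to_angles(probs, bin_azimuth, bin_zenith, power=1.35):
    # probs: (B, n_bins) softmax output; bin_azimuth/bin_zenith: (n_bins,) bin centers
    w = probs ** power                                  # sharpen confident bins
    vx = np.sin(bin_zenith) * np.cos(bin_azimuth)
    vy = np.sin(bin_zenith) * np.sin(bin_azimuth)
    vz = np.cos(bin_zenith)
    x, y, z = (w * vx).sum(-1), (w * vy).sum(-1), (w * vz).sum(-1)
    azimuth = np.arctan2(y, x) % (2 * np.pi)
    zenith = np.arccos(z / np.sqrt(x**2 + y**2 + z**2))
    return azimuth, zenith
</code></pre>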
<p><a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a> additional preprocessing is as follows.</p>
<ol>
<li>Used the time interval (next time point - current time point) instead of the time point (because for an RNN, I felt that the interval is more natural).</li>
<li>Added difference of sensor position of next and current time point (because I thought that it may contain direction information)</li>
<li>Added line fit result of Robert Hatch's notebook and concatenated it to the layer after the last LSTM layer.</li>
</ol>
<p>Also, stacking 7 LSTM layers and using all data with TFRecord format were crucial.</p>
<p>Thank you!</p>
|
IceCube - Neutrinos in Deep Ice
|
2nd place solution: Neutrino direction prediction with transformers
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks </p>
<h1>Summary</h1>
<ul>
<li><strong>Transformer-based solution</strong></li>
<li>Chunk-based data loading with caching and length matching</li>
<li>Fourier encoder</li>
<li>Relative spacetime interval bias</li>
<li>von Mises-Fisher Loss + the competition metric</li>
</ul>
<p><strong>Our code is available at <a href="https://github.com/DrHB/icecube-2nd-place" target="_blank">this repo</a></strong>.</p>
<h1>Introduction</h1>
<p>While neutrinos are one of the most abundant particles in the universe and may bring information about violent astrophysical events, their fundamental properties make them difficult to detect and require a huge volume of material to interact with. The IceCube Neutrino Observatory is the first detector of its kind, which consists of a cubic kilometer of ice and is designed to search for nearly massless neutrinos. The rare interaction of neutrinos with matter may produce high-energy particles emitting Cherenkov radiation, which is recorded with photo-detectors immersed in the ice. The objective of this challenge was the reconstruction of the neutrino track direction based on the photon detections. <br>
To begin with, I would really like to express my gratitude to the organizers and Kaggle team for making this competition possible. Reading the description of the competition I immediately realized that transformers may be a really good approach for it. Since I wanted to experiment with transformers in depth, I decided to join this competition. Meanwhile, my outstanding teammate <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> has joined this competition <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381481#2117294" target="_blank">hoping to visit Antarctica</a>. </p>
<h1>Details</h1>
<p>Our solution is based on transformers, which appeared to be ideal for the considered task. Below we provide the key components of our approach.</p>
<h2>Why transformers</h2>
<p>Graph Neural Networks (GNNs) exhibit limitations in certain areas:<br>
(1) GNNs function primarily on a local level, as updates are performed based on neighboring nodes. Meanwhile, these models are expected to predict global quantities, such as the direction of a track. Organizers have attempted to address this issue through dynamic edge building, but this method falls short in comparison to the multi-head self-attention mechanism found in transformers. Moreover, dynamic edge building does not provide gradient feedback for neighbor selection.<br>
(2) Upon initial examination, GNNs might appear faster than transformers due to their consideration of only a fixed number of neighbors. Nevertheless, extensive sparse operations and data rearrangements in GNNs can be significantly slower than the highly optimized dense tensor operations in transformers for a reasonable sequence length and considering all possible interactions. In our experiments, the organizers’ baseline has the same computational cost as our reference T model (discussed below) while having only 1.4M parameters vs. 7.4M for T model.<br>
<strong>Transformer is a GNN on steroids</strong>. It may be considered as GNN on a fully connected graph using attention to estimate the edge weights dynamically. But this power comes with the need for huge training data.</p>
<h2>Data</h2>
<p>The organizers generously provided a sizable training dataset, which may be somewhat challenging to manage. Since transformers require massive data for training, it was vital to set up a proper data pipeline at the very beginning. Our approach consists of three primary components: (1) caching the selected data chunks to minimize the substantial cost of data loading and preprocessing, (2) employing chunk-based random sampling to effectively utilize caching, and (3) <strong>implementing length-matched data sampling</strong> (sampling batches with approximately equal lengths) to reduce the computational overhead associated with padding tokens when truncating the batch at the longest sequence. Components (1) and (2) offer a computationally efficient and RAM-friendly handling of the data and were shared through the <a href="https://www.kaggle.com/code/iafoss/chunk-based-data-loading-with-caching/notebook" target="_blank">"Chunk-based Data Loading with Caching" notebook</a> during the competition to facilitate the setup of a comprehensive training data pipeline for participants. <strong>Component (3) considerably expedited training and allowed us to use the maximum sequence length of 192 for training and 768 for inference (without a noticeable increase in inference time)</strong>. Inference at length 768 led to a 25 bps improvement over length 192. Initially, we also attempted to use Hugging Face datasets, which offered the ability to utilize chunked data but required additional file conversion; ultimately, we settled on a chunk caching pipeline.<br>
In our study, we observed no significant difference between utilizing only basic features (time, sensor position, charge, auxiliary, and the total number of detections) and incorporating additional features related to ice transparency and absorption. This outcome is expected, as for sufficiently large training datasets, the extra features merely offer an alternate representation of the z-coordinate, with the corresponding transparency and absorption distributions being explicitly learned. To ensure a diverse range of models, we employed both the base and extended feature sets in our experiments. The features are normalized in a similar way as in the organizers' baseline. <strong>It is important to consider detections with both auxiliary true and false.</strong> For events exceeding the maximum considered sequence length, we implemented the following selection process: Initially, we randomly selected detections from the auxiliary false subset, and if this subset proved insufficient to create a sequence of the required length, we then randomly sampled noisy auxiliary true detections. It is interesting that this random selection approach outperformed more complex methods, such as selecting based on charge or considering only events within a specific temporal boundary (IceCube size divided by the speed of light) surrounding the detection with the maximum charge.</p>
<h2>Model</h2>
<p>The model is schematically illustrated in the plot below:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F552b4994e2931207aa2267c170d6044d%2FModel2.png?generation=1681961703319090&alt=media" alt=""><br>
<em>Fourier encoder</em>. Our solution is based on a transformer model, which considers each event as a sequence of detections. It is crucial to process the provided continuous input, such as time or charge, into a form suitable for transformers. We use <a href="https://arxiv.org/pdf/1706.03762.pdf" target="_blank">Fourier encoding representation</a>, often used to describe the position in the sequence in language models. This method can be viewed as a soft digitization of the continuous input signal into a set of codes determined by Fourier frequencies. This procedure is applied for all continuous input variables, while the discrete auxiliary flag is encoded with learnable embedding. We multiply the normalized input variables by 1024-4096 to have a sufficient resolution after digitizing. For example, after multiplying the normalized time by 4096, the temporal resolution becomes 7.3 ns, and the model may understand smaller time variations because of the continuous nature of the Fourier encoding. This multiplication is critical, and even a change of the coefficient from 128 to 4096 gives 20 bps boost.<br>
The encoding of all variables is concatenated, passed through one GELU layer, and reprojected into the transformer dimension. An alternative to Fourier encoding may be a multilayer network that learns the corresponding representation of continuous variables, but it may require many more layers than we consider in our setup. <br>
<em>GraphNet encoder</em>. In addition to the use of Fourier encoding, in several of our models, we incorporated GraphNet feature extractor. In addition to learning the representation of input variables, it also constructs features accounting for the neighbors. <br>
<em>Relative spacetime interval bias</em>. In the special theory of relativity there is a quantity called spacetime interval: ds^2=c^2 dt^2-dx^2-dy^2-dz^2. For particles moving with a speed close to the speed of light, it is close to zero. What is more useful is that all particles and photons produced in the reactions caused by a single neutrino should have ds^2 close to zero too (it is not fully precise because the refractive index of ice is about 1.3, and the speed of light in it is lower than c). Therefore, by <strong>computing ds^2 between all pairs of detections in the given event, it is relatively straightforward to distinguish between detections originating from the same neutrino and those that are merely noise</strong>, as demonstrated in the figure below. This criterion can be naturally introduced to a transformer as a <a href="https://arxiv.org/pdf/1803.02155v2.pdf" target="_blank">relative bias</a>. Therefore, during the construction of the attention matrix, the transformer automatically groups detections based on the source event effectively filtering out noise. We use Fourier encoding representation of ds^2 to build the bias term. <strong>Use of relative bias boosts the performance by 40 bps</strong>.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fa96019c14603b2c54be402d9d960464f%2Fds.png?generation=1681961957521492&alt=media" alt=""><br>
<em>Model</em>. We use the standard <a href="https://arxiv.org/pdf/2208.06366.pdf" target="_blank">BEiT building blocks</a> (transformer with learnable shortcuts). To incorporate relative bias, the first transformer blocks are modified according to <a href="https://arxiv.org/pdf/1803.02155v2.pdf" target="_blank">this paper</a>. The model size is characterized with a standard ViT notation: T – tiny (dim 192), S – small (dim 384), B – base (dim 768). We use 16 transformer blocks in total: 4 blocks with relative bias + 12 blocks with cls token. The use of an additional cls token, included in the input sequence, enables the gradual collection of information about the track direction throughout the entire model. The attention head size is varied between 32 and 64: the smaller head size gives better performance, especially for the T model (10 bps). The details of the considered models are summarized below.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>dimension</th>
<th>number of heads</th>
<th>depth</th>
<th>number of parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>192</td>
<td>3-6</td>
<td>4+12</td>
<td>7.57M</td>
</tr>
<tr>
<td>S</td>
<td>384</td>
<td>6-12</td>
<td>4+12</td>
<td>29.3M</td>
</tr>
<tr>
<td>B</td>
<td>768</td>
<td>12-24</td>
<td>4+12</td>
<td>115.6M</td>
</tr>
</tbody>
</table>
<p><em>Training setup</em>. The models are trained with AdamW optimizer (weight decay 0.05) for 4-5 epochs (all data is considered). The learning rate is changed with a cosine annealing scheduler with a warmup. The maximum learning rate at the first epoch is 5e-4 for T and S models, and 1e-4 for B models. The following epochs are trained with the maximum learning rate of 0.5e-5 - 2e-5. The effective batch size is 4096 in all experiments (with the use of gradient accumulation). Starting from the second epoch Stochastic Weight Average is used. <br>
We perform training for the first 2-3 epochs with von Mises-Fisher Loss (the model predicts 3d vectors, and kappa is computed based on the length of the produced vector), and the remaining epochs are trained using the competition metric as the objective function + 0.05 von Mises-Fisher Loss (to give the model feedback on the vector length vs. confidence relation and to simplify the ensembling step). Use of the competition metric as the loss gives a 55 bps boost.</p>
<h1>Results</h1>
<p>The table below summarizes the performance of our finalized models. CV is evaluated at L=512. (*) refers to models trained on 2×A6000, which in contrast to rtx4090 enables larger actual batch size. Further finetuning with L=256 may result in about 5 bps boost.</p>
<table>
<thead>
<tr>
<th>Model setup</th>
<th>CV512</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>T d32</td>
<td>0.9704</td>
<td>0.9693</td>
<td>0.9698</td>
</tr>
<tr>
<td>S d32</td>
<td>0.9671</td>
<td>0.9654</td>
<td>0.9659</td>
</tr>
<tr>
<td>B d32</td>
<td>0.9642</td>
<td>0.9623</td>
<td>0.9632</td>
</tr>
<tr>
<td>B d64</td>
<td>0.9645</td>
<td>0.9635</td>
<td>0.9629</td>
</tr>
<tr>
<td>B+gnn d48</td>
<td>0.9643</td>
<td>0.9624</td>
<td>0.9627</td>
</tr>
<tr>
<td>* S+gnn d32</td>
<td>0.9639</td>
<td>0.9620</td>
<td>0.9628</td>
</tr>
<tr>
<td>* B+gnn d32</td>
<td>0.9633</td>
<td>0.9609</td>
<td>0.9621</td>
</tr>
</tbody>
</table>
<p>Each increase of the model to the next size T->S->B gives approximately 30 bps but is accompanied by an ×4 increase in the number of model parameters. However, the provided data may be insufficient for training models larger than B. Meanwhile, the results of using a GNN in addition to the Fourier extractor are rather contradictory and may be affected by the difference in the hardware configuration. <br>
So, even <strong>the smallest considered T model with only 7.53M parameters and fast training (under 12 hours on rtx4090 per epoch) can take the top-5 in this competition</strong> by itself. Meanwhile, with additional finetuning at L=256, <strong>our best model, B+gnn d32, reaches 0.9628 CV512, 0.9608 public, and 0.9618 private LB as a single model</strong>. </p>
<h2>Model ensemble</h2>
<p>With the use of von Mises-Fisher Loss, the vector length corresponds to the model confidence. Therefore, a weighted average of the predicted vectors automatically incorporates the confidence and biases the prediction towards the most confident direction. <br>
Our best final submission, a linear ensemble of 6 models, gives 0.9594 on the public and 0.9602 on the private LB (the contribution of each model is evaluated as an average of weights fitted in 5 random 5-fold splits). Ensembling gives only an insignificant improvement over our best single-model submission of 0.9608 public and 0.9618 private LB. We also tried more elaborate ensembling utilizing additional features such as sequence length, first detection time, total charge, etc., but without any improvement over the simple linear ensemble.</p>
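<p>A minimal sketch of this confidence-weighted vector ensembling (variable names are illustrative; the per-model weights are assumed to be fitted on held-out batches as described above):</p>
<pre><code>import numpy as np

def ensemble_directions(pred_vectors, weights):
    # pred_vectors: list of (N, 3) arrays; the vector length encodes confidence (kappa)
    # weights: per-model scalars fitted on validation folds
    combined = sum(w * v for w, v in zip(weights, pred_vectors))
    combined /= np.linalg.norm(combined, axis=-1, keepdims=True)
    azimuth = np.arctan2(combined[:, 1], combined[:, 0]) % (2 * np.pi)
    zenith = np.arccos(np.clip(combined[:, 2], -1.0, 1.0))
    return azimuth, zenith
</code></pre>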
<h2>Training and inference time</h2>
<p><strong>On rtx4090 the inference time can be as short as 3 min per 1M samples for the T model</strong> (without inference optimization) and may increase to 8-10 min for the B model. On outdated GPUs several generations behind, such as the P100 at Kaggle, the inference time may increase by ×10. The training time varies from 12 hours per epoch for the T model to 56 hours per epoch for the B+gnn model. On a machine with 2×A6000 the speed is approximately the same as (about 10% faster than) running the model on a single rtx4090.</p>
<h2>Robustness</h2>
<p>Regarding the robustness of model predictions with respect to perturbation of the input parameters, we have considered the following cases: (1) <strong>temporal noise</strong>: add a normally distributed random variable to the detection time of each event; (2) <strong>charge noise</strong>: multiply the charge by 10^x, where x is normally distributed with the variance reported on the horizontal axis; (3) <strong>auxiliary noise</strong>: randomly flip the auxiliary flag with a given probability; and (4) <strong>length reduction</strong>: remove a given percentage of detections (a short sketch of these perturbations is given after the plots below). The plot below reports results for our reference T model (noise-free CV512 is 0.9704).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Ffdbfc788586698d1965af8a024868dd7%2FNoise.png?generation=1681963294066000&alt=media" alt=""><br>
Although the model was not explicitly trained to handle the abovementioned types of noise, the predictions in most cases do not degrade by more than 15% at the largest magnitude of the considered noise, e.g. adding a 200 ns random time error or a ×2 reduction of the number of detected events. Training the model with specific data augmentation may improve its robustness further. However, there may be a limit at which the error starts rapidly increasing, e.g. 10 ns temporal noise or a 40% change in the value of the detected charge.<br>
The considered model is versatile enough to apply to any neutrino telescope with a similar detector concept as IceCube, regardless of the number of detectors and their arrangement. However, since the positions of the detectors are fixed during training, the model may not have a complete understanding of the geometry and may experience performance degradation if the detectors are moved. For example, the performance of the T model drops from 0.9704 to 1.004 when all detector positions are flipped along the X or Y axis. To avoid such performance degradation and improve the model's understanding of geometry, training could be performed on randomly generated configurations of detectors for each detection event. Incorporating geometry augmentation, such as shifts, flips, and rotations, would also improve the model's robustness to changes in the detector geometry. A less optimal solution is the generation of new training data for a new neutrino telescope. Consideration of <a href="https://arxiv.org/pdf/2006.10503.pdf" target="_blank">3D Roto-Translation Equivariant Attention</a> models, in which detector positions are incorporated through attention rather than direct input, could potentially address the issue of detector geometry. However, this improvement may come with an increase in both computational cost and VRAM requirements, and, therefore, a more practical solution may be training a simple model with spatial augmentation and randomized detector positions.<br>
The plot below visualizes the distribution of error for confident and unconfident prediction.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fd1fe483e3972b5ae6aafa7c4db3f4ea9%2Fdist.png?generation=1681963384438419&alt=media" alt=""></p>
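<p>As referenced above, the four perturbation types can be sketched as simple transforms of the pulse arrays (a hedged sketch; the noise scales and array layout are assumptions):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

def perturb_event(t, charge, aux, t_sigma=10.0, q_sigma=0.1, p_flip=0.1, keep_frac=0.9):
    # (1) temporal noise: add Gaussian noise (ns) to each detection time
    t = t + rng.normal(0.0, t_sigma, size=t.shape)
    # (2) charge noise: multiply the charge by 10**x with x ~ N(0, q_sigma^2)
    charge = charge * 10.0 ** rng.normal(0.0, q_sigma, size=charge.shape)
    # (3) auxiliary noise: flip the auxiliary flag with probability p_flip
    flip = rng.random(aux.shape) < p_flip
    aux = np.where(flip, ~aux.astype(bool), aux.astype(bool))
    # (4) length reduction: keep a random subset of the detections
    keep = rng.random(t.shape) < keep_frac
    return t[keep], charge[keep], aux[keep]
</code></pre>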
<h2>Interesting findings</h2>
<p>It appeared that <strong>the majority of participants in the competition are unconsciously using the following leak</strong>. The provided data is the result of a simulation, and 0 time has a particular meaning revealing some details of the generation process (the time of the first detection with respect to 0 time may give such information as the energy and approximate direction limitations, based on the traveling time of the neutrino from the box boundary to the detector). The models learn how to utilize this leak and improve their performance. For example, one of our experiments with the T model trained (1) on data keeping the 0-time reference and (2) subtracting the time of the first detection from all detections gives a performance gap of 145 bps at the end of the first epoch. <br>
In the real detector data, the 0-time reference is not available, and the time might be considered with reference to the first detected event only. For models trained using the 0-time reference, such experimental data should be shifted by the average first detection time. However, in this case, the performance may drop by 150+ bps, comparable to the results of the model trained without the 0-time leak. </p>
<h2>Things that did not work</h2>
<p>In this competition, we have experimented with a number of things that, unfortunately, did not give an improvement:<br>
<strong>Graph Neural Networks</strong>. After building the first transformer-based pipeline it became apparent that GNNs are well behind (0.98-0.99 vs. 1.0 LB for first models). One interesting comparison we ran at that moment was dividing the predictions based on kappa into 30% confident and 70% unconfident (noisy input and a very small number of detections for track reconstruction). The performance of both models on unconfident predictions was comparable, while the improvement on confident predictions was what gave transformers the performance gap over GNNs. This is fully in line with our initial expectation that transformers, with their global consideration of the input data, are more appropriate for predicting a global quantity such as the track direction, in contrast to local GNNs. So further efforts were spent on enhancing the transformer setup, and, in the end, we used a GNN only as an optional addition to our Fourier feature extractor for diversity.<br>
<strong>Ice properties</strong>. We did not find that additional inputs, such as ice properties, help to improve model performance.<br>
<strong>Local attention</strong>. We spent several weeks experimenting with local attention. The basic idea is simple: before giving a sequence of detections to the transformer model, which considers all-vs-all interactions between detections, process the features within local neighborhoods. In theory, local attention is expected to be fast and memory efficient since it examines only a small number of k=8 neighbors instead of requiring L×L attention. However, in practice, local attention appears to be quite slow because of extensive data shuffling (similar to GNN), and in some naïve implementations it may also consume significantly more VRAM than highly optimized transformers. Surprisingly, the fastest local attention is just regular transformer attention with the attention masked for elements beyond the k neighbors. Such masking helps at the beginning of training, but at the later stages full attention outperforms it or gives comparable results. We also tried configurations with alternating global-local attention (sandwich) or parallel blocks of local and global attention in the part of the model preceding the regular 12-layer transformer. However, a simple configuration with 4 layers of full attention with relative bias + a 12-layer regular transformer outperformed the setups with local attention.<br>
<strong>Classification loss</strong>. We tried to subdivide the angle space into bins and utilize classification loss to fight against angle uncertainty. The ground truth is represented as a 2D Gaussian around the provided label, as illustrated in the figure below. We introduced 8 cls tokens to predict the probability for a 64×128 angular bin map (32×32 per token). Unfortunately, this model ended up at approximately 1.0 CV, and we did not perform any further checks on this approach. However, this model is quite interesting to mention, and it may provide a good visualization of the predictions.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F1a2b2b993bb6a79ad13babb00ea689dc%2Fcls.png?generation=1681963515503826&alt=media" alt=""><br>
<strong>L and H models</strong> (400 and 800M parameters). Because of an Nvidia virtual address space driver bug, 2×rtx4090 could not be used together until 1-2 weeks before the end of the competition. And even after this bug was fixed, disabled p2p makes multi-GPU training with these cards quite inefficient. Therefore, we were unable to scale up our runs and stopped at relatively small B models. The expected boost from each model size increase is about 30 bps, and L and H models could likely reach 0.958 and 0.955-0.956 LB as single models, but with a massive increase in the required compute for both training and inference.</p>
<p>Yes, it is the place where it was introduced for the first time to my knowledge for the description of the position of tokens in the sequence, but I may be wrong.</p>
<p>A transformer can use any sequence length from the architecture point of view; the limitation you refer to comes from the predefined positional encoding in some transformer models, like ViT. In our case we do not need anything like that. </p>
<p>The code is available now <a href="https://github.com/DrHB/icecube-2nd-place" target="_blank">https://github.com/DrHB/icecube-2nd-place</a> </p>
|
IceCube - Neutrinos in Deep Ice
|
3rd Place - Attention + XGBoost Ensembler
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>3rd place Solution writeup - Attention + XGBoost</h1>
<p>Participated on Kaggle after 3 years, first gold for me, had a really exciting time. 😃</p>
<p><strong>Check the TL;Dr at the end for a short summary</strong> - The post is long with a mixture of model information and personal lessons.</p>
<h2>The bitter lesson - Attention is all you need</h2>
<p>This data is not a natural graph, so I'm confused by all the graph models being trained on it. The graph edges are artificially selected based on criteria such as time or distance, which I didn't think was the right approach. Hence I spent my time on LSTM and later moved to self-attention.</p>
<h3>Base Model</h3>
<p>The base model is a simple <strong>self-attention model</strong>. Most of the model code is from Andrej Karpathy's NanoGPT <a href="https://github.com/karpathy/nanoGPT.git" target="_blank">https://github.com/karpathy/nanoGPT.git</a></p>
<p>The inputs are normalized pulse data with transparency information from Datasaurus' Dynedge baseline <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046</a> - Sequences are sampled up to a max length of 256, prioritizing sampling of the pulses where <code>auxiliary=False</code></p>
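<p>A minimal sketch of this capped, priority-based sampling (a sketch under assumed array names, not the exact implementation):</p>
<pre><code>import numpy as np

def sample_pulse_indices(auxiliary, max_len=256):
    # auxiliary: boolean array with one flag per pulse of the event
    n = len(auxiliary)
    if n <= max_len:
        return np.arange(n)
    real = np.flatnonzero(~auxiliary)      # pulses with auxiliary=False are kept first
    noisy = np.flatnonzero(auxiliary)
    if len(real) >= max_len:
        idx = np.random.choice(real, max_len, replace=False)
    else:
        extra = np.random.choice(noisy, max_len - len(real), replace=False)
        idx = np.concatenate([real, extra])
    return np.sort(idx)                     # keep the time order of the selected pulses
</code></pre>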
<p>The outputs are <strong>two classification heads</strong>, one for each angle, with 128 bins each. I train them with a custom loss that locally smooths the one-hot target vectors; more on that below.</p>
<p>A small version of the model can be trained on an RTX 3060 to give a score close to ~1.02 in 1 hour. (Embedding size 128 - 6 layers).</p>
<h3>The bitter lesson - Scale beats everything</h3>
<p>The competition was undoubtedly compute-heavy, but I only had access to my home PC with an RTX 3060. Naturally, I spent a lot of time hand-designing features, optimizing the code, and trying ideas that felt like they should help - only to have to throw out most of the experiments, which helped a smaller model but got washed out when training a big model with more data.</p>
<p>Notable ideas that helped but later got discarded:</p>
<ol>
<li>Simple augmentation like moving the points slightly, changing the charges etc.</li>
<li>Supervised contrastive loss style training to predict the angle between two events.</li>
<li>Multiple pooling instead of just average pooling at the end. (in fact this caused overfitting)</li>
<li>Angle classifier on sphere coordinates instead of two classifiers.</li>
</ol>
<p>All of the above individually helped get better performance on a small model trained for 2-3 hours, but bigger models reach the same performance without them. There is no overfitting in the models even with 20 batches of training data.</p>
<p>The real improvements in score came from scaling the model. Training on more data was crucial as well; the final models I use train on 650 batches of data. They finally start to overfit after 4 cycles through the entire data. </p>
<p>All the wasted days on experimentation remind me of Richard Sutton's blog post "The Bitter Lesson". <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html" target="_blank">http://www.incompleteideas.net/IncIdeas/BitterLesson.html</a> - Haha GPUs go bitterrrrrrr.</p>
<h3>Scaling and engineering tricks</h3>
<p>At the start of April, I decided to buy a new GPU just seeing how much compute was needed. Upgraded to an RTX 4080, which is 3x faster. (No this is not the only scaling I did 😅, but more compute was crucial)</p>
<p>The final submitted model has 18 self-attention layers and embedding size 512. This reaches a LB score of 0.982, and I'm sure training a bigger model would get even better. I trained the same model with 15 layers, but it had worse performance due to mixed-precision instability.</p>
<p>Engineering tricks</p>
<ol>
<li>Mixed precision training is much faster; I train with <strong>FP16 with FlashAttention</strong>. However, this started getting unstable after reaching a score of 0.990, so I switched to FP32 after that. Only the bigger models showed this instability and I didn't get to investigate how to continue on FP16; probably the scales of the inputs or the loss can be managed.</li>
<li>The sequence lengths in the dataset have high variance. After sampling the data up to 256 pulses, the average length of each event is close to 100. This means a lot of attention processing (when not using FlashAttention) is wasted on padding. To speed up training I <strong>group the batches by event length</strong> (see the sketch after this list). This gives a 1.5x speedup over FlashAttention and a <strong>3.5x speedup</strong> over the normal attention implementation. Packing also speeds up inference in the same ratios.</li>
<li>Loading a portion of the training data for each "epoch". This is straightforward: switch out the batches in the dataloader after each epoch, or do it as a background process for prefetching.</li>
</ol>
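<p>A minimal sketch of the length-grouped batching mentioned in point 2 (sorting-based bucketing is an assumption; only the idea of putting similar-length events in the same batch is taken from the write-up):</p>
<pre><code>import numpy as np

def length_bucketed_batches(lengths, batch_size):
    # lengths: number of pulses per event; sorting puts similar lengths in the same batch,
    # which minimizes the padding each batch carries
    order = np.argsort(lengths)
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    np.random.shuffle(batches)   # shuffle batch order so training stays stochastic
    return batches
</code></pre>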
<p>Scaling worked incredibly well:</p>
<ol>
<li>128 Embedding - 9 Layers - 200 batches - Score ~1.001</li>
<li>256 Embedding - 12 Layers - 400 batches - Score ~0.990</li>
<li>512 Embedding - 18 Layers - 650 batches - Score ~0.982 (Some overfitting at the end)</li>
</ol>
<p>Further, I fine-tuned the model for sequence lengths up to 3072, which gives a slight improvement of 0.002.</p>
<h3>Smoothed cross entropy loss</h3>
<p>The von Mises-Fisher loss proved unstable with the attention model, and I never got it to go beyond 1.04 when training end to end.</p>
<p>For the classifier I digitize the angles into 128 bins. Cross-entropy loss works well, but I wanted to introduce an inductive bias in the model that the classes are ordered, not independent. To do so, I apply a <strong>1D convolution with a gaussian kernel</strong> to the target one-hot vectors. For azimuth this convolution has wraparound; for zenith I extend the bins based on the length of the kernel. (Notebook explaining the loss will be released later). This gives slightly more stable training, though with good hyperparameter tuning even plain cross-entropy might work just as well.</p>
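<p>A minimal sketch of this target smoothing (kernel width and bin count are illustrative; the wrap-around is applied only to the azimuth head, and the zenith head here pads by replication as a stand-in for extending the bins):</p>
<pre><code>import torch
import torch.nn.functional as F

def smooth_targets(bin_idx, n_bins=128, sigma=2.0, circular=True):
    # bin_idx: (B,) integer class of the digitized angle
    onehot = F.one_hot(bin_idx, n_bins).float().unsqueeze(1)          # (B, 1, n_bins)
    half = int(3 * sigma)
    x = torch.arange(-half, half + 1, dtype=torch.float32)
    kernel = torch.exp(-0.5 * (x / sigma) ** 2)
    kernel = (kernel / kernel.sum()).view(1, 1, -1)
    if circular:   # azimuth: wrap the bins around 2*pi
        padded = F.pad(onehot, (half, half), mode="circular")
    else:          # zenith: pad the edge bins instead of wrapping
        padded = F.pad(onehot, (half, half), mode="replicate")
    return F.conv1d(padded, kernel).squeeze(1)                        # (B, n_bins) soft targets

# soft-target cross entropy: -(smooth_targets(y) * F.log_softmax(logits, dim=-1)).sum(-1).mean()
</code></pre>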
<h2>The sweet ending - Attention is not all you need</h2>
<p>Obviously attention was not enough to win; 0.982 isn't even gold zone (cries in corner). However, the base model was crucial for the next ideas to work.</p>
<h3>Stack vMF model</h3>
<p>Training an MLP with vMF loss on top of the encoder embeddings works. I dump 100 batches of average-pooled encoder embeddings and train the stack model with them, which achieves a score of ~0.978. A simple average ensemble of the vMF and base models gives a score of ~0.976.</p>
<h3>The magic - Ensembling using XGBoost 🔥</h3>
<p>I know the standard ensembling tricks, but this is the first time I have ensembled using another model. If you are aware of this being used in practice, <strong>please let me know if there is literature on this</strong>. I got the idea just 3 days before the end of the competition and barely got to test it properly, yet it worked like magic.</p>
<p>My hypothesis on why this worked: the classifier and vMF models are far from perfect scores and have high disagreement. Here's a histogram of the zenith angles.</p>
<p>Both models have their own biases, and it's not obvious how to combine them.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2Fd4807872db2ad427f9484a1f6ac8ccbd%2Fzenith_histogram.png?generation=1681967255551488&alt=media" alt=""></p>
<p>First I tried a simple voting rule - if the two predictions are too far apart, don't average:<br>
This already bumped the score from 0.976 -> 0.973, already a top-5 solution.</p>
<pre><code># zn_stack / zn_base: zenith predictions from the stacked vMF model and the base classifier
avg_zenith = (zn_stack + zn_base) / 2
vote_zenith = avg_zenith.clone()
# if the two models disagree by more than a tuned threshold, trust the stacked model
voter = torch.abs(zn_stack - zn_base) > threshold  # threshold is a tuned hyperparameter
vote_zenith[voter] = zn_stack[voter].clone()
</code></pre>
<p>Both models have some way of indicating "confidence" in a prediction: the probability of the predicted bin for the classifier, and kappa for vMF. I played around with some thresholding on those, along with event lengths and z coordinates, and it bumped the score to 0.9715, so I decided to try XGBoost. On the first try the score went straight to 0.969. 🚀</p>
<p>Further, I tried another boosting classifier to "predict which predictions are bad" and used it to flip the angles; this also gives a slight improvement in score.</p>
<p>Adding some more simple features the final score for the 18 layer model was 0.965 (I made a submission with the 15 layer model which got 0.967).</p>
<p>I used sklearn's HistGradientBoostingClassifier for the final setup, but XGBoost has similar results.</p>
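<p>A minimal sketch of such a boosting selector (the feature set below is illustrative and much smaller than what is described above; only the idea of classifying "which model to trust" is taken from the write-up):</p>
<pre><code>import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

def selector_features(zn_base, zn_stack, kappa_stack, prob_base, event_len):
    # the two zenith predictions, their disagreement, and each model's confidence
    return np.column_stack([
        zn_base, zn_stack, np.abs(zn_base - zn_stack),
        kappa_stack, prob_base, np.log1p(event_len),
    ])

def fit_selector(zn_base, zn_stack, kappa_stack, prob_base, event_len, zn_true):
    X = selector_features(zn_base, zn_stack, kappa_stack, prob_base, event_len)
    # label: is the stacked vMF model closer to the truth than the base classifier?
    y = (np.abs(zn_stack - zn_true) < np.abs(zn_base - zn_true)).astype(int)
    return HistGradientBoostingClassifier(max_iter=300).fit(X, y)

def select_zenith(clf, zn_base, zn_stack, kappa_stack, prob_base, event_len):
    X = selector_features(zn_base, zn_stack, kappa_stack, prob_base, event_len)
    use_stack = clf.predict(X) == 1
    return np.where(use_stack, zn_stack, (zn_base + zn_stack) / 2)
</code></pre>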
<h3>Inference timing stat</h3>
<p>The inference time for the base model is around 4 minutes per batch, and another 4 minutes for the long-sequence model, which only gives a small boost. So it's possible to get a score of <strong>0.967 within 25 minutes</strong>.</p>
<h3>Final submission</h3>
<p>The final submission has 3 models: the 18-layer model, the 15-layer model, and another checkpoint from the same 18-layer run.</p>
<p>Another boosting classifier merges the 18-layer and 15-layer models, and then simple voting merges the 3.</p>
<p>Here's the inference notebooks for final submission and single attention model.</p>
<ol>
<li>Final - <a href="https://www.kaggle.com/code/dipamc77/3rd-place-attention-xgboost-ensembler" target="_blank">https://www.kaggle.com/code/dipamc77/3rd-place-attention-xgboost-ensembler</a></li>
<li>Attention only - <a href="https://www.kaggle.com/code/dipamc77/single-attention-model-inference" target="_blank">https://www.kaggle.com/code/dipamc77/single-attention-model-inference</a></li>
</ol>
<h2>Summary of learnings</h2>
<ol>
<li>Test the model for scaling, especially if it's underfitting, before trying feature engineering/other ideas.</li>
<li>Spend the initial engineering effort to optimize the pipeline, to test ideas fast.</li>
<li>If models have high disagreement and are not close to the max score, try training a classifier to ensemble the models.</li>
<li>Even though the final improvement came from ensembling tricks, scaling was the most important. Bigger models will likely give even better results.</li>
<li>Don't give up, I really threw everything in the kitchen sink to improve scores at the end!</li>
<li>Buy more GPUs. 💸</li>
</ol>
<p>In hindsight, I should have tried some of the graph models as they might have disagreed with the classifier, and ensembling them might give a good improvement.</p>
<h2>Notable Failed ideas</h2>
<p>Didn't expect these to fail, and didn't abandon prematurely, but couldn't make them work.</p>
<ol>
<li><p>Finding noise in the dataset to ignore during training.</p></li>
<li><p>vMF loss training with attention model.</p>
<ul>
<li>Edit - From reading the 2nd place solution, this might be because I didn't encode the inputs properly. Perhaps it was lack of understanding on my part. I thought encoding continuous variables with a normal FC layer is good enough. Clearly it did work with the classification loss though.</li></ul></li>
<li><p>Reducing bias in zenith predictions by class weightage and oversampling.</p></li>
</ol>
<h2>TL;Dr - Final Solution summary</h2>
<p>Pulses as tokens, sequence length up to 256, priority sampling for "real" pulses.</p>
<ul>
<li>(Self Attention (512 embedding, 18 layer, Avg pool, 2048 FF) + 2 Angle classifier heads) -> ~0.982</li>
<li>(Fine tuned attention model for sequence length upto 3072) -> 0.980</li>
<li>(MLP with vMF loss on attention model embeddings) -> ~0.976</li>
<li>(Combine classifier and vMF angles with boosting classifier) -> ~0.965 🔥</li>
<li>(15 layer + 18 layer + 1 checkpoint from same 18 layer run - combine with another boosting classifier) -> ~0.962</li>
</ul>
<hr>
<p>P.S - Doesn't matter what your rank is, please share your learnings and also failed ideas. It helps you organize what you learnt and get better insights, I'll make sure to read all of them.</p>
<p><strong>Training Code - Please share your feedback.</strong></p>
<p><a href="https://github.com/dipamc/kaggle-icecube-neutrinos" target="_blank">https://github.com/dipamc/kaggle-icecube-neutrinos</a></p>
<p>Pays off to make a machine at home, I'm definitely buying another GPU before the next competition. 😅</p>
<p>Thank you so much for the kind words.</p>
<p>A week ago I thought the max score I can get is probably around 0.973, the boosting model ensembler really did the magic! </p>
<p>Truly it was quite unexpected; I really want to know if ensembling using decision trees is generally a thing.</p>
<p>Also, I just realized, congratulations on soon becoming a Competitions Grandmaster.</p>
<p>I use OneCycle learning rate scheduler, max rate 5e-5 for the largest model, final learning rate is 2e-6. The scheduler is certainly important.</p>
<p>The final model I trained on a 4080 for 5 days, switching out from FP16 to FP32 after around ~1.3 epochs.</p>
<p>Thanks, attention definitely seemed like a more promising idea than using a graphnet, for the reason I described above. Making it work took some time. </p>
<p>Thanks.</p>
<p>I remember going like: I wrote some hacky for-loops to test "if kappa > k and zenith_prob < p and zenith_base - zenith_stack > d and z_first > z" etc., and scores were improving. Then I realized, hey, that's a cute little decision tree. </p>
<p>Also, noob question, how to get these exact time values for submission runs?</p>
<p>Thank you so much for these.</p>
|
IceCube - Neutrinos in Deep Ice
|
5th place solution: combining GraphNet and Transformer with LSTM meta-model
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We'd like to thank the organizers and Kaggle staff for putting together this enthralling challenge. As we share the scientific feast we've cooked up in our "Big Bento Box of Science," please accept our apologies for the slightly delayed write-up. The competition was very intense, with our team initially lagging over 400 spots away from the medal zone just two weeks before the end. Through hard work and persistence, we managed to improve our standing, ultimately finishing in 5th place and achieving our first gold medal.</p>
<h2>Summary:</h2>
<ul>
<li>Two classification heads with 256 bins each for transformer, smart label smoothing -> <strong>1.003</strong></li>
<li>Increasing the number of features GraphNet uses in the KNN graph builder, changing the normalization layers to LayerNorm, overall increasing model size, predicting <code>(x, y, z)</code> direction instead of azimuth and zenith directly -> <strong>0.990</strong></li>
<li>LSTM model selector which took the input data as well as predictions from the Transformer and GraphNet, and returned a prediction of which of these models to use -> <strong>0.974</strong></li>
</ul>
<h2>Transformer</h2>
<p>We used a standard encoder architecture with 8 layers, model size 512 and 8 attention heads. The input consisted of <code>(x, y, z, t, charge, auxiliary, is_core, rank)</code>. <code>is_core</code> was a 0/1 value indicating whether the sensor was part of the deep core or not. <code>rank</code> was computed the same way as in the <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference" target="_blank">following great notebook with LSTM</a>. For features like <code>(x, y, z, t, charge)</code> we used a few linear layers to preprocess them before passing them to the Transformer, while <code>(auxiliary, is_core, rank)</code>, due to their discrete nature, were embedded.</p>
<p>We had two output heads for both azimuth and zenith. Each head predicted 256 values that represented bins. We used equally spaced bins and trained them using cross entropy with smoothed labels. The labels were smoothed so that neighboring bins also have some probability (not like in standard label smoothing where remaining probability is equally assigned to all classes). For azimuth this smoothing was wrapped around in circular fashion, so that predicting \(0\) and \(2 \pi \) was equivalent. Prediction was just an argmax over the bins. We experimented with a different number of bins, ranging from 16 to 256, and more bins resulted in better performance. However, the difference between 128 and 256 was not significant and we chose to use 256. </p>
<p>We trained the model for 760k steps on TPU with batch size of 512 and sequence length of 256, so around 3 epochs (which is not much). We used a cosine schedule for LR with peak LR=3e-5 and repeated the cycle every 10k steps (looking back at it the cycle should probably be longer, but at the beginning we weren’t sure what to expect from the training). For the last 160k steps we changed peak LR to 3e-6. Whole training took around 26h. After 760k steps we could still see some slight improvements as the training progressed but decided to focus more on other ideas. This led to a model with a score of <strong>1.003</strong> on public LB.</p>
<h2>GraphNet</h2>
<p>As a starting point we’ve used <a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission" target="_blank">standard GraphNet model</a> with <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">additional input features from early sharing price notebook</a>.</p>
<p>The model input was composed of <code>(x, y, z, t, charge, auxiliary, quantum_effeciency, wrong_charge_dom, ice_absorption, ice_scatter)</code>, where <code>wrong_charge_dom</code> was a marking of one sensor (num. 1229) because our analysis showed there might be something wrong with it. The output of the model was a direction. </p>
<p>To achieve the final LB score we've made a few changes to the model.</p>
<p>First of all, after looking through the code of <code>DynEdge</code> we saw that the <code>features_subset</code> argument (latent feature subset for computing nearest neighbors in <code>DynEdge</code>) is applied not only in the first layer, but also on the inputs to the other layers. Intuitively this felt limiting, so we changed it so that <code>features_subset</code> for the non-first layers was variable. In the end we set it to 48 (bigger seems to work better). With this change our model achieved <strong>0.998</strong> on LB.</p>
<p>The next big change was moving from <code>BatchNorm1d</code> to <code>LayerNorm</code>. This decision was guided by a big difference in MAE metrics we observed between training and validation. Even the standard model (with <code>BatchNorm1d</code>) got better results without switching to eval mode, so we decided to move to <code>LayerNorm</code>, which does not differ between training and evaluation.</p>
<p>Last but not least, we experimented and tuned the whole training procedure. We used AdaBelief optimizer. Unlike the Transformer, with GraphNet model we took the approach of changing the learning rate in steps, starting from 1e-3 and reducing the learning rate by a factor of 2 every 200k steps. Total number of steps was 700k with 512 batch size. It took around 3 days on RTX 4090.</p>
<p>Finally, with all changes and scaling of the model we were able to achieve a <strong>0.990</strong> score on LB. </p>
<h2>Model stacking ensemble</h2>
<p>In the last few days of the competition we had two models and no time to train them any further. We had previously looked at ensembling them, but the results were not promising. Some analysis showed that the two models sometimes produce very different predictions. If we could pick the correct model perfectly, the lower bound was around 0.810. </p>
<p>Two days before the end of the competition we dumped the embeddings from our best Transformer (last common layer before the two categorical heads) and GraphNet (layer before predicting the direction) for the first 10 data batches. At first we fed both of those embeddings along with the model predictions into XGBoost. It was able to achieve 57% accuracy, which resulted in a <strong>0.976</strong> score on LB (and 0.978 on our internal validation).</p>
<p>Later on we coded up a simple 3-layer bi-directional LSTM as a model selector. Our thinking was that having a different architecture (not a Transformer or GraphNet) as the selector would be beneficial in learning different features from the data, resulting in better selections.</p>
<p>The LSTM's hidden states were initialized with the concatenated Transformer and GraphNet embeddings, and a subset of pulses was given as the input sequence*. We thought that training it on event data would help it make better decisions. We trained the model for around 9 epochs (keep in mind it's only on the first 10 batches), which took about 8h. Doing this improved accuracy to 58% and resulted in a score of <strong>0.974</strong>, which was our final result.</p>
<p>(*) We also experimented with concatenating the embeddings to the input sequence or just before the last classification layer, but both of those approaches turned out to be worse than just initializing hidden states with embeddings.</p>
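<p>A minimal sketch of how such an embedding-initialized selector can be wired (layer sizes and the embedding dimension are assumptions; only the general idea of initializing the LSTM hidden states with the concatenated embeddings is taken from the write-up):</p>
<pre><code>import torch
import torch.nn as nn

class SelectorLSTM(nn.Module):
    def __init__(self, pulse_dim=8, emb_dim=1024, hidden=256, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(pulse_dim, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        # project the concatenated Transformer + GraphNet embeddings into the
        # initial hidden/cell state of every (layer, direction) pair
        self.h0_proj = nn.Linear(emb_dim, 2 * layers * hidden)
        self.c0_proj = nn.Linear(emb_dim, 2 * layers * hidden)
        self.head = nn.Linear(2 * hidden, 1)   # probability of picking one model over the other

    def forward(self, pulses, embeddings):
        b = pulses.size(0)
        h0 = self.h0_proj(embeddings).view(b, -1, self.lstm.hidden_size).transpose(0, 1).contiguous()
        c0 = self.c0_proj(embeddings).view(b, -1, self.lstm.hidden_size).transpose(0, 1).contiguous()
        out, _ = self.lstm(pulses, (h0, c0))
        return torch.sigmoid(self.head(out.mean(dim=1))).squeeze(-1)
</code></pre>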
<p>Overall architecture of our final submission looked like this:</p>
<p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/2230884/19025/model-idea.png" alt="model diagram"></p>
<h2>Things that we tried</h2>
<ul>
<li>We trained two Transformer models on different subsets of the dataset and tried simple ensembling (adding logits/softmaxes and argmaxing), but this resulted only in a marginal improvement of 0.002</li>
<li>Training separate categorical heads (like in the case of the Transformer) on GraphNet embeddings and then ensembling both models by simply adding logits/softmaxes and picking the argmax - this slightly reduced the error but was not spectacular</li>
<li>The classification task for GraphNet didn't work that well, and neither did azimuth and zenith regression. Direction regression is what performed best.</li>
<li>At the beginning of the competition we found a paper “A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory” and implemented a variant of a 3d convolutional network with so called hexagonal convolutions, which are supposed to be localized in the DOMs space. However, we couldn’t get it to work and get the mean angular error below 1.2. A likely reason is that the data in the competition was quite different from the data used in the paper. The latter had great resolution in the time dimension, because the pulses were collected every 2ns, while our data was much more sparse.</li>
</ul>
<h2>Inference code</h2>
<p><a href="https://www.kaggle.com/grzego/inference-lstm-ensembler-graphnet-transformer" target="_blank">https://www.kaggle.com/grzego/inference-lstm-ensembler-graphnet-transformer</a></p>
<p>Yes, it was just a single output with sigmoid activation. For the loss we used binary cross entropy. Our experiments with softening the loss, based on the difference in MAE between two models, turned out worse than just the basic approach.</p>
|
IceCube - Neutrinos in Deep Ice
|
6th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>inference(normal ensemble,0.978)</strong>:<a href="https://www.kaggle.com/code/chihantsai/ens-gnn-v4-5-2-and-lstm-v8-and-tformer-v18-up300" target="_blank">https://www.kaggle.com/code/chihantsai/ens-gnn-v4-5-2-and-lstm-v8-and-tformer-v18-up300</a> <br>
<strong>inference(group ensemble,0.975):</strong><a href="https://www.kaggle.com/code/chihantsai/ens-group-gnn-v4-5-2-and-lstm-v8-and-tformer-v17" target="_blank">https://www.kaggle.com/code/chihantsai/ens-group-gnn-v4-5-2-and-lstm-v8-and-tformer-v17</a> </p>
<h3><strong>GNN part</strong> :</h3>
<p><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880</a></p>
<h3><strong>Transformer part</strong> :</h3>
<p><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880#2227949" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880#2227949</a></p>
<h3><strong>LSTM</strong>:</h3>
<p>Based on <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">LSTM model</a> by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>.<br>
loss : <br>
Custom cross-entropy loss can be enhanced for regression tasks by adding a nearby area loss to improve training. The weights of the nearby loss used are shown in the figure below, inspired by <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion/278362" target="_blank">this</a>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4184342%2F937cb92c0b9387a371379130f211eee4%2Fnearby_weight.png?generation=1682086516402080&alt=media" alt="weights of nearly loss"><br>
We believe that our custom cross-entropy loss can cover a larger area than the original cross-entropy loss, which enables us to divide the sphere into more classes.<br>
bin_num : 48 (the sphere is divided into 48*48 classes)<br>
pulse_count(the number of pulses selected per event) : 192 and 300<br>
optimizer: AdamW <br>
train step:<br>
Step 1: set the learning rate to 1e-3 and use pulse_count 192 to train the dataset for 3 rounds<br>
Step 2: apply the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration), and pulse_count 192. We train the dataset for 4 rounds<br>
Step 3: use the CosineAnnealingLR scheduler with a learning rate range of 1e-5 to 1e-7, t_max=4003 (an epoch iteration), and pulse_count 192. We train the dataset for 3 rounds<br>
Step 4: use the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration), and pulse_count 300</p>
<h3><strong>SAKT</strong>(LSTM CNN transformer) :</h3>
<p>(from <a href="https://www.kaggle.com/code/shujun717/1-solution-lstm-cnn-transformer-1-fold/notebook" target="_blank">https://www.kaggle.com/code/shujun717/1-solution-lstm-cnn-transformer-1-fold/notebook</a>)<br>
loss : as same as LSTM model<br>
bin_num : 48 (Spheres are classified into 48 classes)<br>
pulse_count(the number of pulses selected per event) : 192 <br>
optimizer: AdamW <br>
train step:<br>
Step 1:use learning rate to 1e-3 , train the dataset for 3 rounds<br>
Step 2:apply the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration) , train the dataset for 4 rounds<br>
Step 3: apply the CosineAnnealingLR scheduler with a learning rate range of 1e-5 to 1e-7, t_max=4003 (an epoch iteration)</p>
<h1><strong>Ensemble</strong></h1>
<p>Validation scores of the models on batches 657, 658, 659:<br>
Lstm : 0.9899<br>
Sakt : 0.9927<br>
Gnn : 0.9927<br>
Transformer : 0.9869</p>
<h2><strong>Normal ensemble</strong> :</h2>
<p>validation score : 0.9803<br>
lb : 0.978</p>
<h2><strong>Group ensemble</strong> :</h2>
<p>I used the results of the <a href="https://www.kaggle.com/code/solverworld/icecube-neutrino-path-least-squares-1-214" target="_blank">line-fit</a> and metadata to classify the events into 6 groups </p>
<pre><code># N is the pulse-count threshold used for the split; per-group event fractions omitted
group1 : number of pulses (aux=False) >= N, zenith (line-fit prediction) >= pi/2
group2 : number of pulses (aux=False) >= N, zenith (line-fit prediction) < pi/2
group3 : number of pulses (aux=False) < N, zenith (line-fit prediction) >= pi/2, the aux=False pulses are on a vertical string
group4 : number of pulses (aux=False) < N, zenith (line-fit prediction) < pi/2, the aux=False pulses are on a vertical string
group5 : number of pulses (aux=False) < N, zenith (line-fit prediction) >= pi/2, the aux=False pulses are on multiple strings
group6 : number of pulses (aux=False) < N, zenith (line-fit prediction) < pi/2, the aux=False pulses are on multiple strings
</code></pre>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4184342%2F48225ff16bf6c8a8bf69ef01029cc4e1%2FScreenshot%20from%202023-04-21%2022-04-42.png?generation=1682085909903995&alt=media" alt="Validation score"></p>
<p>We can observe that the validation scores of different groups vary considerably, and each model performs better on different groups. This is logical, since atmospheric muons always travel from top to bottom, so events with zenith (predicted by line-fit) < pi/2 are prone to more noise. Moreover, pulses with aux=False located on a single vertical string have a higher probability of containing noise. Additionally, events with more pulses are better predicted.</p>
<p>To enhance our results, we used different ensemble weights for each group, leading to an improvement of 0.003 in both validation score and lb (0.975). We further divided the events into 13 groups based on additional criteria, resulting in a gain of 0.001 in both validation score and lb (0.974, private score 0.975).</p>
<p>Perhaps using an MLP for ensemble could get better results, but unfortunately, we did not have enough time to investigate this approach.</p>
<p>P.S. My English is not very fluent, so please let me know if my explanation is not clear enough.</p>
|
IceCube - Neutrinos in Deep Ice
|
8th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers and staff for hosting such a wonderful competition. And thank you to all my wonderful teammates! <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, <a href="https://www.kaggle.com/allvor" target="_blank">@allvor</a>, <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>, <a href="https://www.kaggle.com/wrrosa" target="_blank">@wrrosa</a>. </p>
<p>Congratulations <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>! your promotion to GM!</p>
<p>Our code will be available <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice" target="_blank">here</a>.</p>
<h1>Datasaurus Part</h1>
<h2>Preprocessing</h2>
<h3>Raw data</h3>
<p>This dataset was really big, and repeatedly doing pandas operations every epoch would have been a waste of CPU cycles (polars does not appear to work with PyTorch data loaders with many workers yet). To address this, I made PyTorch Geometric <code>Data</code> objects for each event and saved them as <code>.pt</code> files which could be loaded during training. This took about 8 hours to create using 32 threads, and required about 1TB of space. </p>
<p>The issue with this was that the 1TB was spread across 130+ million tiny files. A Linux partition has a finite number of “index nodes” or <code>inodes</code>, i.e. indexes to individual files. Since these files are so small, I ran into my inode limit before I ran out of space on my 2TB drive. As a workaround, I had to spread some of these files across two drives, so heads up for anyone trying to reproduce this method or run my code.</p>
<p>A more efficient way could be to store <code>.pt</code> files that have already been pre-batched which would require fewer files, but then you lose the ability to shuffle every epoch which may/may not make a difference with this much data. I didn’t try the sqlite method suggested by the GraphNet team.</p>
<h2>Features</h2>
<p>In the context of GNNs, each DOM is considered as a node. Each node was given the following 11 features:</p>
<ul>
<li>X: X location from sensor_geometry.csv / 500</li>
<li>Y: Y location from sensor_geometry.csv / 500</li>
<li>Z: Z location from sensor_geometry.csv / 500</li>
<li>T: (Time from batch_[n].parquet - 1e4) / 3e3</li>
<li>Charge: log10(Charge from batch_[n].parquet) / 3.0</li>
<li>QE: See below</li>
<li>Aux: (False = -0.5, True = 0.5)</li>
<li>Scattering Length: See below</li>
<li>Distance to previous hit: See below</li>
<li>Time delta since previous hit: See below</li>
<li>Scattering flag (False = -0.5, True = 0.5). See below</li>
</ul>
<p>Many of the normalisation methods were taken from GraphNet as a <a href="https://github.com/graphnet-team/graphnet/blob/4df8f396400da3cfca4ff1e0593a0c7d1b5b5195/src/graphnet/models/detector/icecube.py#L64-L69" target="_blank">starting point</a>, but I altered the scale for time, since time is a very important feature here.</p>
<p>For events with large numbers of hits, to prevent OOM errors, I sampled 256 hits. This can make the process slightly non-deterministic.</p>
<h2>Quantum efficiency</h2>
<p>QE is the quantum efficiency of the photomultipliers in the DOMs. The DeepCore DOMs are quoted to have 35% higher QE than the regular DOMs (Figure 1 of this <a href="https://arxiv.org/pdf/2209.03042.pdf" target="_blank">paper</a>), so QE was set to 1 everywhere, and 1.35 for the lower 50 DOMs in DeepCore. The final QE feature was scaled using (QE - 1.25) / 0.25.</p>
<h2>Scattering length</h2>
<p>Scattering and absorption lengths are important to characterise differences in the clarity of the ice. This data is published on page 31 of this <a href="https://arxiv.org/abs/1301.5361" target="_blank">paper</a>. A datum depth of 1920 metres was used so that z = (depth - 1920) / 500. The data was resampled to the z values using <code>scipy.interpolate.interp1d</code>. I found that after passing the data through <code>RobustScaler</code>, the scattering and absorption data were near identical, so I only used the scattering length.</p>
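<p>A minimal sketch of this resampling (the depth/scattering table is assumed to be loaded from the paper's published values; the z convention follows the formula above):</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
from sklearn.preprocessing import RobustScaler

def build_scattering_feature(sensor_z, depth_table_m, scatter_table):
    # depth_table_m / scatter_table: depth (m) and scattering length values from the paper
    table_z = (np.asarray(depth_table_m) - 1920.0) / 500.0   # same z convention as above
    f = interp1d(table_z, scatter_table, bounds_error=False, fill_value="extrapolate")
    scatter = f(sensor_z)                                     # sensor_z is the scaled Z feature
    return RobustScaler().fit_transform(scatter.reshape(-1, 1)).ravel()
</code></pre>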
<h2>Previous hit features</h2>
<p>The two main types of events are track and cascade events. Looking at some of the amazing <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/388858" target="_blank">visualisation tools</a>, for example from edguy99, I got the idea that if a node had some understanding of where and when the nearest previous hit was, it might help the model differentiate between these two groups. To calculate this for each event, I sorted the hits by time, calculated the pairwise distances of all hits, masked any hits from the future, and calculated the distance, d, to the nearest previous hit. This was scaled using (d - 0.5) / 0.5. The time delta from the previous hit was also calculated using the same method and scaled using (t - 0.1) / 0.1.</p>
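<p>A minimal sketch of these two features (a sketch; inputs are the scaled positions and times of a single event, and the output is returned in time-sorted order):</p>
<pre><code>import numpy as np

def previous_hit_features(xyz, t):
    # xyz: (n, 3) scaled DOM positions of the hits, t: (n,) scaled hit times
    order = np.argsort(t)
    xyz, t = xyz[order], t[order]
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)   # pairwise distances
    not_earlier = t[None, :] >= t[:, None]                           # mask hits that are not strictly earlier
    d[not_earlier] = np.inf
    prev = np.argmin(d, axis=1)                                      # nearest previous hit per hit
    dist_prev = d[np.arange(len(t)), prev]
    dt_prev = t - t[prev]
    no_prev = np.isinf(dist_prev)                                    # the earliest hit(s) have no predecessor
    dist_prev[no_prev] = 0.0
    dt_prev[no_prev] = 0.0
    return (dist_prev - 0.5) / 0.5, (dt_prev - 0.1) / 0.1
</code></pre>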
<h2>Scattering flag</h2>
<p>I tried to create a flag that could discern whether a hit was caused directly by a track or by some secondary scattering, inspired by section 2.1 of this <a href="https://arxiv.org/pdf/2203.02303.pdf" target="_blank">paper</a>. A side effect of adding this flag was that training was much more stable. The flag is generated as follows (a short sketch follows the list):</p>
<ol>
<li>Identify the hit with the largest charge</li>
<li>From this DOM location, calculate the distances & time delta to every other hit</li>
<li>If the time delta is greater than the time needed to travel that distance at the speed of light in ice, assume that the photon is a result of scattering</li>
</ol>
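<p>A minimal sketch of the three steps above (the speed-of-light constant is an assumption, and the inputs are raw positions in metres and times in nanoseconds):</p>
<pre><code>import numpy as np

C_ICE_M_PER_NS = 0.22   # approximate speed of light in ice (assumption)

def scattering_flag(xyz_m, t_ns, charge):
    # 1. take the hit with the largest charge as the reference
    ref = np.argmax(charge)
    # 2. distance and time delta from the reference DOM to every other hit
    dist = np.linalg.norm(xyz_m - xyz_m[ref], axis=-1)
    dt = np.abs(t_ns - t_ns[ref])
    # 3. if the photon arrived later than a direct path allows, flag it as scattered
    direct_time = dist / C_ICE_M_PER_NS
    return np.where(dt > direct_time, 0.5, -0.5)   # True = 0.5, False = -0.5
</code></pre>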
<h2>Validation</h2>
<p>I used a 90%-10% train-validation split, and no cross-validation due to the size of the dataset. The split was done by creating 10 bins of log10(n_hits), and then using <code>StratifiedKFold</code> with 10 splits.</p>
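<p>A minimal sketch of this split (a sketch; <code>n_hits</code> per event is assumed to be precomputed):</p>
<pre><code>import numpy as np
from sklearn.model_selection import StratifiedKFold

def train_val_split(event_ids, n_hits, seed=42):
    # stratify on 10 bins of log10(n_hits) so the 10% validation set
    # matches the training set's event-length distribution
    log_hits = np.log10(n_hits)
    bins = np.digitize(log_hits, np.linspace(log_hits.min(), log_hits.max(), 10))
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    train_idx, val_idx = next(iter(skf.split(event_ids, bins)))
    return event_ids[train_idx], event_ids[val_idx]
</code></pre>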
<h2>Models</h2>
<p>I used the <code>DirectionReconstructionWithKappa</code> task directly from GraphNet, meaning that an embedding (e.g. shape of 128) will be projected to a shape of 4 (x, y, z, kappa)</p>
<h2>Architectures</h2>
<p>I used the following 3 architectures. All validation scores are with 6x TTA applied</p>
<p><a href="https://github.com/graphnet-team/graphnet" target="_blank">GraphNet/DynEdge</a> - Val = 0.98501<br>
<a href="https://arxiv.org/abs/2205.12454" target="_blank">GPS</a> - Val = 0.98945*<br>
<a href="https://arxiv.org/abs/1902.07987" target="_blank">GravNet</a> - Val = 0.98519</p>
<p>The average of these 3 models gave a 0.982 LB score.</p>
<p>The GraphNet/DynEdge model had very little modification, other than changing to GELU activations.</p>
<p>GPS & GravNet used 8 blocks and you can find the exact architectures for both in the code <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice/blob/main/src/modules.py" target="_blank">here</a>.</p>
<p>*GPS was the most powerful model, but also the slowest to train, being a transformer-type model (roughly 11 hours/epoch on my machine). I managed to train a model which achieved a validation score of 0.98XX but it was too late to include in our final submission.</p>
<h2>Loss</h2>
<p>I used VonMisesFisher3DLoss + (1 - CosineSimilarity) as the final loss function, since cosine similarity is a nice proxy for mean angular error. For CosineSimilarity I transformed the target azimuth & zenith values to cartesian coordinates.</p>
<p>This performed much better than separate losses for azimuth (VMF2D) & zenith (MSE), which is the route I <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383546" target="_blank">initially went down</a>.</p>
<h2>Augmentation</h2>
<p>I centered the data on string 35 (the DeepCore string) and rotated about the z-axis in 60 degree steps. This didn’t actually improve validation performance but did have the benefit of making the models rotationally invariant so that I could take advantage of the detector symmetry and apply a 6x test time augmentation (TTA). This often improved scores by 0.002-0.003. </p>
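<p>A minimal sketch of the z-axis rotation used for this augmentation and the 6x TTA (a sketch; it assumes the model predicts an (x, y, z) direction vector so predictions can be rotated back and averaged):</p>
<pre><code>import numpy as np

def rotate_z(points, angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def tta_predict(model_fn, xyz, other_feats):
    # rotate the event by 0, 60, ..., 300 degrees, predict, rotate predictions back, average
    preds = []
    for k in range(6):
        angle = np.deg2rad(60.0 * k)
        vec = model_fn(rotate_z(xyz, angle), other_feats)   # (3,) predicted direction
        preds.append(rotate_z(vec[None, :], -angle)[0])
    mean_vec = np.mean(preds, axis=0)
    return mean_vec / np.linalg.norm(mean_vec)
</code></pre>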
<h2>Training parameters</h2>
<ul>
<li>AdamW optimiser</li>
<li>Epochs = 6</li>
<li>Cosine schedule (no warmup)</li>
<li>Learning rate = 0.0002</li>
<li>Batch size = 1024</li>
<li>Weight decay = 0.001 - 0.1 depending on model</li>
<li>FP16 training</li>
<li>Hardware: 2x RTX 3090, 128 GB RAM</li>
</ul>
<h2>Final submissions</h2>
<p>Circular mean</p>
<h2>Robustness to perturbation</h2>
<p>TBC</p>
<h2>Lessons learned/stuff that didn’t work</h2>
<ul>
<li>The GraphNet DynEdge baseline is extremely strong and tough to improve on - kudos to the team! It is also the fastest/efficient model, and what I used for the majority of experimentation</li>
<li>More data = more better. The issue with this though is that I found that some conclusions drawn from experiments on 1% or 5% of the data were no longer applicable on the full dataset. This made experimentation slow and expensive</li>
<li>Batch normalisation made things unstable and didn’t show improvements</li>
<li>Lion optimiser didn’t generalise as well as AdamW</li>
<li>Weight decay was important for some models. As a result I assumed changing the epsilon value in Adam would have an effect, but I didn’t see anything significant</li>
<li>In my experiments, GNNs seem to benefit from leaky activations, e.g. GELU</li>
<li>For MPNN aggregation, it seems that [min, max, mean, sum] is sufficient. Adding more didn’t appear to make significant gains</li>
<li>Realigning all of the times to the time of the first hit of each event deteriorates performance, possibly due to noise in the data/false triggers etc.</li>
<li>Radial nearest neighbours didn’t work any better than KNN when defining graph edges</li>
<li>Only using 1 - CosineSimilarity as a loss function wasn’t very stable. Adding VMF3D helped a lot</li>
</ul>
<h2>Code</h2>
<p>All my code will be available here soon: <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice" target="_blank">https://github.com/Anjum48/icecube-neutrinos-in-deep-ice</a></p>
<h1>Isamu Part</h1>
<p>Before the team merge, I created LSTM and GraphNet models. After the team merge, I focused on GraphNet because Remek's LSTM model was superior to mine. I used graphnet (<a href="https://github.com/graphnet-team/graphnet" target="_blank">https://github.com/graphnet-team/graphnet</a>) as a baseline and made several changes to improve its accuracy. I list below some of the experiments I performed that worked (there are tons of things that didn't work)</p>
<ul>
<li>random sampling (random sampling from DB if the specified data length exceeds 800)</li>
<li>Increasing nearest neighbors of KNN layer(8->16)</li>
<li>Addition of features<ul>
<li>x, y, z</li>
<li>time</li>
<li>charge</li>
<li>auxiliary</li>
<li>ice_transparency feature</li></ul></li>
<li>2-stage model with kappa(sigma) of vonMisesFisher distribution <ul>
<li>Train 1st stage model to predict x, y, z, kappa<ul>
<li>restart from the weights of GraphNet from public baseline notebook</li>
<li>About 1-250 batches were used</li>
<li>batch size 512</li>
<li>epoch 20</li>
<li>DirectionReconstructionWithKappa</li></ul></li>
<li>Split data into easy and hard parts according to 1st stage kappa value<ul>
<li>Inference was performed using the 1st model and classified into two sets of data(easy part and hard part) according to their predicted kappa value</li></ul></li>
<li>Train expert models for easy and hard parts and combine their predictions<ul>
<li>About 250-350 batches were used</li>
<li>batch size 512</li>
<li>epoch 20</li>
<li>DirectionReconstructionWithKappa</li></ul></li></ul></li>
<li>TTA <ul>
<li>rotation 180-degree TTA about the z-axis</li></ul></li>
<li>Loss<ul>
<li>DirectionReconstructionWithKappa</li></ul></li>
<li>Hardware: RTX 3090, 64 GB RAM, 8TB HDD, 2TB SSD, Google Colab Pro</li>
</ul>
<p>The ensemble of the models (1st and 2nd) created above gave public LB 0.995669, private LB 0.996550 </p>
<h1>Remek Part - LSTM</h1>
<p>For LSTM training we used the approach proposed by Robin Smits (@rsmits) with some improvements.</p>
<ul>
<li><p>We added more LSTM/GRU layers (4 GRU/LSTM) - more or fewer than 4 layers was not better in our experiments.</p></li>
<li><p>We added ice transparency as additional features (we used both features – transparency and absorption).</p></li>
<li><p>We redesigned the training loop to train models on all batches – we can train models on different parts of the dataset (from one batch to the whole dataset in one epoch).</p></li>
<li><p>We checked many hypotheses (for some of them the Weights & Biases Sweep tool was used):</p>
<ul>
<li>Different size of LSTM units – finally 196 was the best in our model.</li>
<li>Different bin size – 24 was our final choice.</li>
<li>Different numbers of features and pulse selection strategies - finally we used the strategy of first selecting non-aux pulses and then adding random aux pulses (this gave us a score boost as well - it was a kind of augmentation technique).</li>
<li>Different model architectures – finally for our score blend we used a pure LSTM/GRU setup. We tested a transformer architecture but, as it appeared, we gave up too early (the first scores were way worse than pure LSTM).</li>
<li>Different optimizers (AdamW, NAdam) – final choice was Adam.</li>
<li>Different schedulers – CosineDecay, OneCycle, but in the end we used the step LR scheduling described below.</li></ul></li>
</ul>
<p>Training was divided into three parts scheduling LR:</p>
<ul>
<li>Step 1 - Train baseline models (two models) – batches 4-330 (m1) and 331-660 (m2) using LR = 0.005, for 6-8 epochs, sparse categorical crossentropy loss. Both models were validated on batches 1-3.</li>
<li>Step 2 - Fine tuning models m1 and m2 using LR/10 = 0.0005 for 3 epochs, sparse categorical crossentropy loss.</li>
<li>Step 3 – Fine tuning models m1 and m2 using LR = 0.00025 for 2 epochs with different loss function – categorical crossentropy with label smoothing 0.05</li>
<li>Then we took the 4 best models (according to the MAE metric) for both model m1 and m2 (8 models in total) and produced one model using SWA (Stochastic Weight Averaging). We simply averaged model weights. As it turned out, the model had the same performance as the ensemble of 8 models, but we significantly decreased the LSTM inference time. Single model score (public LB) (no TTA): 1.0024.</li>
</ul>
<p>Things did not improve our score:</p>
<ul>
<li>Dropout in LSTM/GRU layers and linear head.</li>
<li>GaussianNoise Layer after Masking layer or after LSTM/GRU layer.</li>
<li>Adam + Lookahead optimizer.</li>
<li>Gradient Accumulation to simulate a TPU-sized big batch – I (Remek) had problems implementing it properly in TF/Keras (it is easy in PyTorch; as it turned out, not exactly easy to implement in TF/Keras – my daily choice is PyTorch)</li>
</ul>
<p>Additional tools:</p>
<ul>
<li>Weights & Biases for two tasks – logging and monitoring the training process, and Sweeps for hyperparameter tuning (number of LSTM units, LR scheduler, optimizer and LR).</li>
</ul>
<p>For my (Remek's) experiments I used (in the first part of the competition) a ZbyHP Z4 with 2xA5000, but then HP sent me a ZbyHP Z8 workstation with 2x Intel Xeon CPUs and an Nvidia A6000 GPU. Personally, I can say that this helped me establish a fast experimentation pipeline. I was able to process the dataset files very fast, and having the A6000 gave me the possibility to train models with a bigger batch size. Thanks to HP for supporting my work.</p>
<p>My final words - we set up a great team in my opinion - each of us was responsible for part of the solution, we discussed a lot, but there was no "mine is better". Although we worked together for the first time, I had the impression that we had known each other forever. Great team, great result! Thank you guys for the opportunity to learn from great AI folks.</p>
<h1>Alvor part - blending</h1>
<p>My main contribution to this competition was the development of methods for blending my teammates’ solutions. Analysis of the different types of solutions (GraphNet, LSTM etc.) showed that they have different efficiency at different predicted zenith and azimuth values (zenith mainly). So I changed my initial "constant weight" approach to a "bins" approach.<br>
The method consists in splitting the predicted zenith values into 10 bins of equal width. Thus, when combining two solutions, we get 100 bins in total. The value of 10 is configurable, but experiments showed it to be close to optimal.</p>
<p>After that, a blending weight for the zenith and a blending weight for the azimuth were found in each bin. The blending of the predicted zenith values was done by a simple linear combination. Blending the azimuth values was a little more complicated, due to the possible transition through 2*pi. Therefore, to begin with, the difference between the predicted azimuths of the two solutions was calculated, together with the direction of the second value relative to the first.<br>
Weights fitting was carried out on a sample of training data close in size to the test data (5 batches with 1M events).</p>
<p>Several approaches to improve this blending method were also tested. In particular, the use of GBDT.</p>
<p>One of these approaches was an attempt to classify events into “simple” and “complex” ones (inspired by the kernel <a href="https://www.kaggle.com/code/tatelarkin/neutrino-event-type-classifier-auc-score-0-93" target="_blank">https://www.kaggle.com/code/tatelarkin/neutrino-event-type-classifier-auc-score-0-93</a>).</p>
<p>Experiments have shown that some types of neural networks are better at handling simple events, while other types are better at handling complex events. If we could accurately determine the type of each event, this would greatly improve the competition metric. A model was built whose event-classification performance was at the level of the public kernel (AUC 0.93), but this was not enough to get a noticeable improvement in the blend quality.</p>
<p>Several other classification models (binary target – which of the two neural networks better predicts a given event) and regression models (target – the difference in the competition metric between the two neural networks for a given event) were also trained. Unfortunately, all these approaches showed only a minor improvement in the result (fourth decimal place) and were very sensitive to overfitting.</p>
<p>Later, my method was significantly improved by Wojtek Rosa. Therefore, it was his approach that was used in the final submissions, which he describes in more detail in his part. In particular, I would like to note his brilliant idea of using a Decision Tree model to build blending bins.</p>
<h1>Wojtek part - more blending</h1>
<p>Thank you to the competition host for the exciting IceCube competition!<br>
Also thank you to my teammates – once again, congrats on your prizes, GM titles and great solutions.<br>
My part was about blending.<br>
I used batch_ids 1-5 for evaluation and adjusting parameters, later I used batch_ids 655+ only for evaluation purposes.<br>
When I joined the Remek & Alvor team, we had a great LSTM solution, the public GraphNet, and an amazing blend technique:</p>
<pre><code># NOTE: the column names below are reconstructed from the description above;
# the exact names in the original snippet were lost.
def add_bins(s):
    # absolute azimuth difference and the direction of solution 2 relative to
    # solution 1, handling the wrap-around at 2*pi
    s['az_diff'] = np.abs(s['azimuth_1'] - s['azimuth_2'])
    s['az_sign'] = np.where(
        s['az_diff'] < np.pi,
        np.sign(s['azimuth_2'] - s['azimuth_1']),
        -np.sign(s['azimuth_2'] - s['azimuth_1'])
    )
    N = 10   # 10 zenith bins per solution -> 100 combined bins (zenith lies in [0, pi])
    s['bins'] = (N * np.floor(N * s['zenith_1'] / np.pi)
                 + np.floor(N * s['zenith_2'] / np.pi)).astype(int)
    return s
</code></pre>
<p>and then, for each bin, minimize the score by finding the best qu and alpha such that:</p>
<pre><code># column names reconstructed as above; s0 is the subset of rows falling in one bin
s0['azimuth'] = s0['azimuth_1'] + alpha * s0['az_diff'] * s0['az_sign']
s0['zenith'] = (1 - qu) * s0['zenith_1'] + qu * s0['zenith_2']
</code></pre>
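<p>A minimal sketch of that per-bin search, using the column names reconstructed above; the <code>azimuth_true</code>/<code>zenith_true</code> columns and the grid resolution are assumptions for illustration, and the actual search we ran may have differed:</p>
<pre><code>import numpy as np

def angular_dist(az1, zen1, az2, zen2):
    # angle between two directions given as (azimuth, zenith); zenith is the polar angle
    cos_d = np.sin(zen1) * np.sin(zen2) * np.cos(az1 - az2) + np.cos(zen1) * np.cos(zen2)
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def fit_bin_weights(df, qu_grid=np.linspace(0, 1, 21), alpha_grid=np.linspace(0, 1, 21)):
    """For each bin, pick (qu, alpha) minimising the mean angular error."""
    best = {}
    for b, s0 in df.groupby('bins'):
        best_score, best_params = np.inf, (0.5, 0.5)
        for qu in qu_grid:
            zen = (1 - qu) * s0['zenith_1'] + qu * s0['zenith_2']
            for alpha in alpha_grid:
                az = s0['azimuth_1'] + alpha * s0['az_diff'] * s0['az_sign']
                score = angular_dist(az, zen, s0['azimuth_true'], s0['zenith_true']).mean()
                if score < best_score:
                    best_score, best_params = score, (qu, alpha)
        best[b] = best_params
    return best
</code></pre>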
<p>I realized that, compared to simple/public vector-weight ensembling, this method is much better.<br>
I tried to improve the score with greater N values, but this led to overfitting.<br>
After this, I managed to improve the score by creating bins using Regression Trees with target = score_1 - score_2 and the following features:</p>
<pre><code># The exact feature names were lost from the original post; per the text below they
# included the zenith/azimuth predictions of the blended solutions, the direction_%
# components from Isamu's submission, and zenith_3 from Robert's linear solution.
cls = [...]
</code></pre>
<p>where the direction_% features are from Isamu's submission and zenith_3 is from Robert's crazy quick and clean 1.183 linear solution. I tried many different features taken directly from the event, but none of them improved the result.<br>
The simplicity of the Decision Tree Regressor gave us (almost) full control over the N bins used for adjusting Alvor's parameters qu and alpha:</p>
<pre><code># hyperparameter values and column names were lost; the placeholders below only show the structure
regr_1 = DecisionTreeRegressor(max_depth=..., min_samples_leaf=..., min_samples_split=...)
s['tree_pred'] = regr_1.predict(s[cls])                  # predicted score_1 - score_2
s['bins'] = np.round(s['tree_pred'], ...).astype(str)    # rounded predictions used as bin labels
</code></pre>
<p>Using Decision Trees gave us a 0.0027 boost on both the local score and the LB. I tried other blending methods such as MLP or LGBM without success.<br>
Later on, I also slightly modified the <code>adjusting</code> loop to find a full linear combination of zenith_1 and zenith_2:</p>
<pre><code># column names reconstructed as above
s0['azimuth'] = s0['azimuth_1'] + alpha * s0['az_diff'] * s0['az_sign']
s0['zenith'] = ru * s0['zenith_1'] + qu * s0['zenith_2']
</code></pre>
<p>It takes much more computation to find ru and qu, but it gives an additional 0.001 improvement.<br>
Our final blend consists of:<br>
Isamu solution 0.995 -> Remek LSTM 1.002 -> datasaurus GraphNet 0.982 -> Robert 1.183<br>
local score 0.9774692, Public LB 0.975545, Private LB 0.97658</p>
<h2>Findings:</h2>
<ul>
<li>we found an interesting type of error, where the model ‘gives up’ and predicts a zenith close to 0.<br>
Our final solution's zenith prediction vs score (x-axis: predicted zenith, y-axis: ground-truth zenith):</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F4a8b117472efd756290083dcf0b787c1%2Fimage1.png?generation=1682343676129581&alt=media" alt=""></p>
<p>Ground truth zenith (x-axis) vs score (y-axis) - ‘error triangle’ visible:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F2ee6d7032191782027e45522daff9081%2Fimage2.png?generation=1682343631946316&alt=media" alt=""></p>
<ul>
<li>we found that the predictions from the LSTM classification are remarkably good at the centers of the bins; chart: zenith_pred_lstm*1000 (x-axis), mean score (y-axis), zenith bin edges (red lines)</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F7fde090be1100c83a09f045fbf57f5fd%2Fimage3.png?generation=1682343415933098&alt=media" alt=""></p>
|
IceCube - Neutrinos in Deep Ice
|
9th Place Solution: GNNs Ensemble and MLP Stacking
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we want to thank Kaggle and IceCube Collaboration for providing this spectacular competition!</p>
<p>Our solution is based on an MLP stack with shared weights over 6 DynEdge models, plus a blend of 4 GNNs within segments defined by the number of pulses.</p>
<h2>1. Model Zoo</h2>
<p>We took <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383524" target="_blank">GraphNeT</a> model with DynEdge architecture as a baseline and modified it in different ways. Main variations and their nicknames:</p>
<ul>
<li>v1 <strong>Vanilla</strong>. Changed KNN value and included additional pooling schemes. Replaced Adam optimizer with <a href="https://arxiv.org/abs/1810.06801" target="_blank">QHAdam</a>.</li>
<li>v2 <strong>Bigman</strong>. Added more layers and made them wider. Changed activation functions to <a href="https://arxiv.org/abs/1702.03118" target="_blank">SiLU</a> (they proved to be very powerful in various domains). Added Batch Normalization layers.</li>
<li>v3 <strong>DynEmb</strong>. To account for ice heterogeneity, we integrated sensor embeddings into the model (embeddings are initialized by XYZ coordinates). This approach can be generalized and used with any type of Observatory that has sensors distributed in ice or water. We also added Weight Normalization and switched to FP16 training with new <a href="https://arxiv.org/abs/2302.06675" target="_blank">Lion</a> optimizer. For this model, we published a submission <a href="https://www.kaggle.com/code/ukrainskiydv/icecube-submission-dynemb-example/notebook" target="_blank">example</a> that gives top 20 LB in solo for 1 hour 20 minutes.</li>
</ul>
<h2>2. Training Protocol</h2>
<p>Every model had a long training run from scratch with the von Mises-Fisher loss on events with up to 600 pulses. Then a couple of additional epochs were trained with a Euclidean distance loss and up to 200 pulses. As <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> noted below, this loss works much better than the original for second-stage training. The details on the optimizer choice and the second-stage training strategy are presented by <a href="https://www.kaggle.com/churkinnikita" target="_blank">@churkinnikita</a> in the comment section.</p>
<h2>3. Ensembling</h2>
<p>Finally, SWA was applied to the best checkpoints of each model for better robustness. As a result we had v1, v2, v3 models with performance < 1.000 LB and v1, v2, v3 models with performance < 0.990 LB. These 6 models were blended and stacked. Stack showed better performance ~ 0.977 LB in the very final version, <a href="https://www.kaggle.com/simakov" target="_blank">@simakov</a> added the details below.</p>
<h2>4. Alternative Approach</h2>
<p>Another part of our solution consists of several GNN models for different neutrino events. The dataset was segmented into ten uniform bins by the number of pulses (which reflects the energy level of the neutrino) and 4 new models were trained on their compositions. Eventually, the models were blended segment by segment; <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> added the details on this part of the solution in the post <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402995" target="_blank">here</a>.</p>
<h2>5. Results</h2>
<p>The final model is a 60/40 blend of the 6-model stack and the 4 segmented GNNs, and shows 0.975 LB (0.976 Private). We used a blend of stacked models to improve the overall stability of the ensemble and reduce the variance of individual models. We also hope this approach leads to better resilience of the model to possible noise in the data.</p>
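<p>A minimal sketch of such a fixed-weight blend, assuming the blending is done on predicted unit direction vectors which are then renormalised (the helper names are illustrative, not our exact code):</p>
<pre><code>import numpy as np

def to_vector(azimuth, zenith):
    # unit direction vector from (azimuth, zenith); zenith is the polar angle
    return np.stack([np.sin(zenith) * np.cos(azimuth),
                     np.sin(zenith) * np.sin(azimuth),
                     np.cos(zenith)], axis=-1)

def blend_directions(az_a, zen_a, az_b, zen_b, w_a=0.6, w_b=0.4):
    """Fixed-weight blend of two direction predictions on unit vectors, then renormalise."""
    v = w_a * to_vector(az_a, zen_a) + w_b * to_vector(az_b, zen_b)
    v /= np.linalg.norm(v, axis=-1, keepdims=True)
    azimuth = np.arctan2(v[..., 1], v[..., 0]) % (2 * np.pi)
    zenith = np.arccos(np.clip(v[..., 2], -1.0, 1.0))
    return azimuth, zenith
</code></pre>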
<p>For sure, Transformer-based models have great performance on the task, but a diversified ensemble of properly tuned GNNs can also show a keen score on the considered reconstruction problem.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743648%2F829df4033d8353d91b74f0559259506c%2FModels%20Zoo%20RESIZED.png?generation=1682034404975611&alt=media" alt=""></p>
|
Image Matching Challenge 2022
|
10th Place solution (brief summary)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am glad you understand how we feel. Thanks for the compliments.</p>
<p>Thanks for the compliments! And congratulations to you too😊</p>
<p>I would say that my team was able to make good use of the DKM tendency, such as creating ROI and using it alone under certain conditions💪</p>
<p>First of all, I'd like to thank Kaggle staff and the organizers for organizing this very practical and challenging competition. <br>
Our team is ranked 10th on the private leader board. </p>
<h2><strong>Overview</strong></h2>
<p><a href="https://postimg.cc/64yWk0Kn" target="_blank"><img src="https://i.postimg.cc/QCqFW4n4/IMC2022-10th.png" alt="IMC2022-10th.png"></a></p>
<p>The solution can be summarized as below.</p>
<h2><strong>Key Points</strong></h2>
<ul>
<li>Ensemble with DKM-ROI</li>
<li>Make Faster and Better Estimation</li>
<li>Outlier Handling</li>
<li>Trust CV</li>
</ul>
<h2>Ensemble with DKM-ROI</h2>
<p>DKM showed a robust tendency in our experiments. It can find many correspondence points almost all the time, but its peak performance fell short compared with the other algorithms.</p>
<p>Given this tendency, we regarded DKM as an “Average Hitter” and used it to create cropped ROI images in the following steps.</p>
<ol>
<li>Inference with DKM and calculate confidence threshold using Otsu's method</li>
<li>Filtering correspondence points using calculated threshold</li>
<li>Making a bounding box with filtered correspondence points </li>
<li>Cropping image from bounding box region</li>
<li>Ensemble with LoFTR, SE2-LoFTR, and SuperGlue</li>
</ol>
<p>A sample result of ROI extraction by DKM-ROI is shown below</p>
<p><a href="https://postimg.cc/HjZz0yhy" target="_blank"><img src="https://i.postimg.cc/TYGs6jjC/Untitled.png" alt="Untitled.png"></a></p>
<p>Figure.1 confidence map of points confirmed by DKM</p>
<p><a href="https://postimg.cc/Whc3H9CL" target="_blank"><img src="https://i.postimg.cc/3JdvWVZR/Untitled.png" alt="Untitled.png"></a></p>
<p>Figure.2 Binarized result of the confidence map by Otsu’s method</p>
<p><a href="https://postimg.cc/62ds2h0n" target="_blank"><img src="https://i.postimg.cc/C12w2PnP/Untitled-1.png" alt="Untitled-1.png"></a></p>
<p>Figure.3 bbox of the corresponding point on the original image</p>
<p>Basically, the ensemble with DKM-ROI worked well: it makes finding key points easier. But sometimes it missed important regions. As a simple countermeasure for this issue, our solution not only ensembles on the DKM-ROI crops, but also runs LoFTR on the original image and ensembles those matches as well.</p>
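<p>A minimal sketch of steps 1–4 of the DKM-ROI procedure, assuming <code>mkpts</code> is an (N, 2) array of DKM keypoints in one image and <code>confidence</code> the per-match confidences; the padding and the uint8 conversion used for Otsu's method are illustrative choices, not our exact code:</p>
<pre><code>import numpy as np
import cv2

def dkm_roi(mkpts, confidence, pad=10):
    """Otsu-threshold the DKM confidences, keep the confident points and
    return their bounding box (x0, y0, x1, y1) for cropping."""
    conf_u8 = (confidence * 255).astype(np.uint8).reshape(-1, 1)
    otsu_thr, _ = cv2.threshold(conf_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    pts = mkpts[confidence * 255 >= otsu_thr]
    x0, y0 = np.maximum(np.floor(pts.min(axis=0)).astype(int) - pad, 0)
    x1, y1 = np.ceil(pts.max(axis=0)).astype(int) + pad
    return x0, y0, x1, y1

# roi = dkm_roi(dkm_mkpts0, dkm_conf); crop = img0[roi[1]:roi[3], roi[0]:roi[2]]
</code></pre>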
<h2>Make Faster and Better Estimation</h2>
<p>As in most competitions, one of the important points here was reducing inference time one way or another. The main ideas of our solution are below:</p>
<ul>
<li>Pipelining:<ul>
<li>We optimized the overall process as a multithreaded implementation. It can be described as three components: 1. Preprocess (e.g. load image), 2. Inference (e.g. matcher), 3. Postprocess (e.g. RANSAC). Our solution ran multiple threads to process these in parallel (see the sketch after this list). Pipelining is especially effective for this competition's task, because the GPU-bound and CPU-bound parts are heavily loaded with different processes.</li></ul></li>
<li>Key points NMS: <ul>
<li>Detected key points are usually noisy and redundant, which increases computation time. To avoid this issue, we used an NMS algorithm for key points, namely SSC (Suppression via Square Covering). SSC could reduce processing time while maintaining accuracy.</li></ul></li>
<li>Weighted Random sampling:<ul>
<li>For the same reason, we tried to reduce the number of key points fed to RANSAC in a way that would maintain estimation accuracy. As an approach, we applied the weighted random sampling algorithm used in DKM to the key point ensemble results.</li></ul></li>
</ul>
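<p>A minimal sketch of the three-stage pipeline mentioned above, using Python threads and bounded queues; the <code>preprocess</code>/<code>inference</code>/<code>postprocess</code> callables are placeholders, not our actual functions:</p>
<pre><code>import threading
import queue

def run_pipeline(pairs, preprocess, inference, postprocess, maxsize=4):
    """Main thread preprocesses (CPU), one thread runs the matcher (GPU),
    another runs RANSAC (CPU); stages are connected by bounded queues."""
    q_pre, q_inf, results = queue.Queue(maxsize), queue.Queue(maxsize), []

    def stage(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:            # sentinel: propagate shutdown downstream
                if q_out is not None:
                    q_out.put(None)
                return
            out = fn(item)
            if q_out is not None:
                q_out.put(out)
            else:
                results.append(out)

    workers = [threading.Thread(target=stage, args=(inference, q_pre, q_inf)),
               threading.Thread(target=stage, args=(postprocess, q_inf, None))]
    for w in workers:
        w.start()
    for pair in pairs:                  # main thread acts as the preprocessing stage
        q_pre.put(preprocess(pair))
    q_pre.put(None)
    for w in workers:
        w.join()
    return results
</code></pre>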
<h2>Outlier Handling</h2>
<p>The input images in this competition are expected to be very diverse. </p>
<p>In difficult cases, such as textureless scenes, large distances, or very different viewpoints, the effect of a simple ensemble was limited because few correspondence points could be found. Therefore, the following measures were taken for these pairs, which we treat as outliers.</p>
<ul>
<li>Outlier case 1: When only a few dozen percent of the usual number of correspondence points are found.<ul>
<li>Boosting key points by incorporating methods that differ in tendency from those used in the previous stage of the ensemble. We used SGMNet as an additional ensemble.</li></ul></li>
<li>Outlier case 2: When only a few percent of the usual number of correspondence points are found.<ul>
<li>Judging the scene to be extremely difficult to find sufficient correspondence points with the usual approach, search for correspondence points with DKM alone, not with an ensemble.</li></ul></li>
</ul>
<p>These conditions were derived by analyzing the tendency in the output of the ensemble from image pairs of validation data.</p>
<h2>Trust CV</h2>
<p>To the best of my knowledge, some participants in this competition said “this competition is not Trust CV”. But actually, validating with CV (using our own validation settings) was a fundamental process for our team.<br>
I posted about this in a separate thread.</p>
<p><a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328999" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328999</a></p>
<h2>Plot results for test images</h2>
<p>Like the 11th place solution, we also agree with this post.<br>
<a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328715" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328715</a><br>
<a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328887" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328887</a></p>
<p>The results are here.<br>
<a href="https://postimg.cc/9RgQNbKM" target="_blank"><img src="https://i.postimg.cc/GtdTtSjk/image-7.png" alt="image-7.png"></a><br>
<a href="https://postimg.cc/H8514tCF" target="_blank"><img src="https://i.postimg.cc/90gX2xgX/image-8.png" alt="image-8.png"></a><br>
<a href="https://postimg.cc/tY3HsDkx" target="_blank"><img src="https://i.postimg.cc/QxPNnyW0/image-9.png" alt="image-9.png"></a></p>
<hr>
<h2>Acknowledgment</h2>
<p>We deeply acknowledge great OSS such as PyTorch, Kornia, OpenCV, etc.<br>
Special thanks go to authors of these great papers and pretrained models that we used in our final submission:</p>
<p>・DKM</p>
<p><a href="https://github.com/Parskatt/DKM" target="_blank">https://github.com/Parskatt/DKM</a></p>
<p>Edstedt, J., Wadenbäck, M. and Felsberg, M., 2022. Deep Kernelized Dense Geometric Matching. arXiv preprint arXiv:2202.00667.</p>
<p>・LoFTR</p>
<p><a href="https://github.com/zju3dv/LoFTR" target="_blank">https://github.com/zju3dv/LoFTR</a></p>
<p>Sun, Jiaming, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. "LoFTR: Detector-free local feature matching with transformers." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8922-8931. 2021.</p>
<p>・SE2-LoFTR</p>
<p><a href="https://github.com/georg-bn/se2-loftr" target="_blank">https://github.com/georg-bn/se2-loftr</a></p>
<p>Bokman, Georg and Kahl, Fredrik “A case for using rotation invariant features in state of the art feature matchers” ,CVPRW, 2022</p>
<p>・SuperGlue</p>
<p><a href="https://github.com/magicleap/SuperGluePretrainedNetwork" target="_blank">https://github.com/magicleap/SuperGluePretrainedNetwork</a></p>
<p>Sarlin, Paul-Edouard, et al. "Superglue: Learning feature matching with graph neural networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.</p>
<p>・SGMNet</p>
<p><a href="https://github.com/vdvchen/SGMNet" target="_blank">https://github.com/vdvchen/SGMNet</a></p>
<p>Hongkai Chen, Zixin Luo, Jiahui Zhang, Lei Zhou, Xuyang Bai, Zeyu Hu, Chiew-Lan Tai, and Long Quan. Learning to match features with seeded graph matching network. In <em>CVPR</em>, pages 6301–6310, 2021.</p>
<p>・Otsu’s method</p>
<p><a href="https://cw.fel.cvut.cz/b201/_media/courses/a6m33bio/otsu.pdf" target="_blank">https://cw.fel.cvut.cz/b201/_media/courses/a6m33bio/otsu.pdf</a></p>
<p>Nobuyuki Otsu. A threshold selection method from gray-level histograms. <em>IEEE
transactions on systems, man, and cybernetics</em>, 9(1):62–66, 1979.</p>
<p>・SSC</p>
<p><a href="https://github.com/BAILOOL/ANMS-Codes" target="_blank">https://github.com/BAILOOL/ANMS-Codes</a></p>
<p>Oleksandr Bailo, Francois Rameau, Kyungdon Joo, Jinsun Park, Oleksandr Bogdan, and In So Kweon. Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution. Pattern Recognit. Lett., 106:53–60, April 2018.</p>
|
Image Matching Challenge 2022
|
11th Place (Public: 0.847 / Private: 0.845)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, and congratulations to you too on your solo gold-winning and promotion to Kaggle Master!</p>
<p>>I have a question: is there a noticeable score difference between inference in FP16 and FP32?<br>
No, If I remember correctly, the difference in accuracy between fp16 and fp32 is about -0.001. <br>
That's why I used fp16 :).</p>
<p>First of all, thank you so much to the organizers for hosting this competition. <br>
This competition was educational with much of resources from organizers, and the responses to the competition's questions were very clear and fast.<br>
Also, I can't say this lightly, but thank you so much for handling this competition even during this extremely difficult time in Ukraine. I sincerely hope the peace in Ukraine as soon as possible.</p>
<h2><strong>[Overview]</strong></h2>
<p>My approach is similar to other solutions.<br>
<strong>In short: QuadTreeAttention LoFTR + Magic Leap’s SuperPoint/SuperGlue with Multi-Scale TTA & fp16.</strong><br>
Both models are pre-trained ones and didn’t be fine-tuned or trained from scratch with the training dataset.</p>
<p>After trying various pre-trained models, I selected two models, QuadTreeAttention LoFTR and MagicLeap’s SuperPoint+SuperGlue, as my final submission. In the ensemble of these two models, I applied multi-scale TTA for each model.<br>
(LoFTR and DKM were also candidates because they showed good accuracy, but for creating a simple solution I decided not to use DKM and excluded LoFTR from one final submission. However, now I think I should have incorporated DKM into final submissions for boosting the final performance.)</p>
<p>The resize parameters for the longer side are set to 832, 1024, 1152 for QuadTreeAttention LoFTR, and 840, 1200, 1400 for SuperGlue. When increasing the image size for QuadTreeAttention LoFTR, the number of output correspondences also increases in proportion to the image size and becomes imbalanced with respect to the original image size (i.e. 800), so I adjusted the confidence threshold that suppresses output correspondences for each image scale. The confidence parameters are set to 0.20, 0.40, 0.50 for the 832, 1024, 1152 image sizes.<br>
The score of each model is here:</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>Image Size (longer side)</th>
<th>Public LB / Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>SuperGlue (Single)</td>
<td>840</td>
<td>0.721 / 0.715</td>
</tr>
<tr>
<td>QuadTree (Single)</td>
<td>832</td>
<td>0.788 / 0.787</td>
</tr>
<tr>
<td>QuadTree (3 Ensemble)</td>
<td>832, 1024, 1152</td>
<td>0.811 / 0.821</td>
</tr>
<tr>
<td>QuadTree (3 Ensemble) + SuperGlue (3 Ensemble)</td>
<td>832, 1024, 1152 + 840, 1200, 1400</td>
<td>0.847 / 0.845</td>
</tr>
</tbody>
</table>
<p>Also, when resizing images for QuadTreeAttention LoFTR, the aspect ratio of the image is kept as in the original, and the width/height are adjusted to be divisible by 64 with zero-padding. When these images are used to output correspondences, the inferred coordinates can fall outside the original image boundaries, so I removed out-of-border points with simple post-processing.</p>
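<p>A minimal sketch of this resizing/padding and the out-of-border filtering, under the assumption that the keypoints have already been rescaled back to the original resolution (function names and the default target size are illustrative):</p>
<pre><code>import numpy as np
import cv2

def resize_pad_64(img, long_side=1024):
    """Resize so the longer side equals long_side (keeping aspect ratio),
    then zero-pad height/width up to the next multiple of 64."""
    h, w = img.shape[:2]
    scale = long_side / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh))
    padded = cv2.copyMakeBorder(resized, 0, -nh % 64, 0, -nw % 64,
                                cv2.BORDER_CONSTANT, value=0)
    return padded, scale

def drop_out_of_border(mkpts0, mkpts1, shape0, shape1):
    """Remove correspondences whose coordinates fall outside the original images."""
    ok = ((mkpts0[:, 0] < shape0[1]) & (mkpts0[:, 1] < shape0[0]) &
          (mkpts1[:, 0] < shape1[1]) & (mkpts1[:, 1] < shape1[0]))
    return mkpts0[ok], mkpts1[ok]
</code></pre>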
<p>To reduce GPU memory consumption, I also used torch.cuda.amp.autocast() for each model and converted the model’s parameters to fp16. QuadTreeAttention code internally uses CUDA C++ modules, so before passing variables to CUDA C++ modules, I converted variables to fp32 and after receiving variables from CUDA C++ modules I re-converted those variables to fp16.</p>
<h2><strong>[Training or No-Training]</strong></h2>
<p>When I first looked at the dataset, the training data looked like PhotoTourism Dataset, and the test data looked like GoogleUrban Dataset, so I thought about how we could fill in the gaps between the training data and test data. </p>
<p>However, after the appearance of the LoFTR 0.721 kernel by <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> (it’s a great kernel, really appreciate it), I thought that just using MegaDepth pre-trained models seemed enough for this competition, and decided to concentrate on searching for good pre-trained models instead of training/fine-tuning models myself. <br>
In addition, I didn’t have enough time & GPU machine to create strong image matching models, so I just had to give up training. If there are enough resources to train models, I wanted to try PosFeat (CVPR2022) for example.</p>
<h2><strong>[Handling the time cost of MAGSAC++ calculation]</strong></h2>
<p>As described in Table 4 of the VSAC paper, the worst-case calculation time of MAGSAC++ is larger than that of other RANSAC variants and had a bad effect on the 9-hour inference time limit, so I tried various other RANSAC variants that could replace MAGSAC++. <br>
However, they all performed worse than MAGSAC++, so I tried the approach of dividing/decomposing MAGSAC++ into 2 stages, i.e., first with fewer iterations (e.g. 10000) and then with more iterations (e.g. 90000). <br>
With this approach, the accuracy drops a bit (-0.002), but the calculation time for a difficult image pair (e.g. test image pair No.3) decreases from around 1.5s to 0.2s.</p>
<p>I didn’t incorporate this approach into the final submission because I prioritized accuracy, but I think it could be a useful technique if improved well.</p>
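<p>One possible reading of that two-stage idea, sketched below: run a cheap MAGSAC++ pass first and only spend the full iteration budget when the fast pass looks unreliable. The inlier-count criterion and the threshold/confidence values here are assumptions for illustration:</p>
<pre><code>import cv2

def two_stage_magsac(mkpts0, mkpts1, thr=0.25, conf=0.99999,
                     iters_fast=10000, iters_full=90000, min_inliers=100):
    """Cheap MAGSAC++ pass first; fall back to the expensive pass if needed."""
    F, mask = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC,
                                     thr, conf, iters_fast)
    if mask is not None and int(mask.sum()) >= min_inliers:
        return F, mask
    return cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC,
                                  thr, conf, iters_full)
</code></pre>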
<h2><strong>[Plot results for test images]</strong></h2>
<p>Yes, I agree with this post :). I also want to see the visualization results of each team.<br>
<a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328715" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328715</a><br>
The results are here. I used <a href="https://postimages.org/" target="_blank">https://postimages.org/</a> service to upload these images.<br>
<img src="https://i.postimg.cc/R00Msy7M/results-17-2.png"> <img src="https://i.postimg.cc/Ss2mq913/results-17-3.png"> <img src="https://i.postimg.cc/nzypLL6z/results-17-4.png"></p>
<h2><strong>[What didn’t work in my experiment]</strong></h2>
<p>・Pre-Processing approaches like gamma correction, contrast correction, histogram equalization, etc.<br>
・Post-Processing with segmentation mask to remove moving objects (e.g. car, person, sky) as used in IMC2021 winner solution. I tried mmsegmentation’s pspnet/segformer which was trained on Cityscapes, but the results were slightly worse than the original one, and also it’s time-consuming, so I decided not to use this approach.<br>
・Other pre-trained models like MatchFormer, SuperPoint+SGMNet, SE2-LoFTR, etc. However, I think SE2-LoFTR is useful for tackling the rotation-invariant matching problems. In this competition, as denoted on the data description page, all images are correctly prepared as upright, so I guess SE2-LoFTR didn’t have much gain for test images.</p>
|
Image Matching Challenge 2022
|
14th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks to organizers and all participants. This is a joint writeup with my wonderful teammate <a href="https://www.kaggle.com/yamash73" target="_blank">@yamash73</a> (big thanks to awesome team work).</p>
<p><strong>Overview</strong><br>
Our final model is an ensemble of MatchFormer, QuadTreeAttention, SuperPoint+SuperGlue (2 scales) and Patch2Pix. Like the other solutions, we used only pre-trained weights for our solution.<br>
Code: <a href="https://www.kaggle.com/code/yamash73/imc2022-14th-place-solution-private-lb-0-839/notebook" target="_blank">https://www.kaggle.com/code/yamash73/imc2022-14th-place-solution-private-lb-0-839/notebook</a><br>
NOTE: we used the above notebook for both calculation of CV and submission. These two modes can be switched by changing the "mode" parameter in the CFG class.</p>
<p><strong>Number of matches from each model</strong><br>
The maximum number of matches is set so that the number of matches from the 5 models is balanced, and the computation time is reduced. The number of matches from transformer-based methods (LoFTR/MatchFormer) is larger than that from SuperGlue in many cases.</p>
<p><strong>Ensemble</strong><br>
We used the following 5 models for our final model. We used Patch2Pix for the final model, since LB/CV is slightly improved. However, the change in score is so small, and hence it is likely that this model is not required for the final model.</p>
<table>
<thead>
<tr>
<th>#</th>
<th>model</th>
<th>resolution</th>
<th>Maximum number of matches</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>MatchFormer</td>
<td>840</td>
<td>900</td>
</tr>
<tr>
<td>2</td>
<td>QuadTreeAttention</td>
<td>960</td>
<td>900</td>
</tr>
<tr>
<td>3</td>
<td>SuperPoint + SuperGlue</td>
<td>2000</td>
<td>450</td>
</tr>
<tr>
<td>4</td>
<td>SuperPoint + SuperGlue</td>
<td>1600</td>
<td>450</td>
</tr>
<tr>
<td>5</td>
<td>Patch2Pix</td>
<td>2048</td>
<td>400</td>
</tr>
</tbody>
</table>
<p><strong>MAGSAC parameters</strong><br>
We used cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.2, 0.99999, 18000) to compute the fundamental matrix like other teams. We decreased the maximum number of iterations to reduce the computation time.</p>
<p><strong>Trust CV…?</strong><br>
We used two types of CV scores with fixed random seed.</p>
<ul>
<li>CV for 80 images (5 images for each scene): we used this CV to check whether our implementation is correct, and our approach has the potential to yield better results. If the CV was significantly low at this stage, the implementation was reviewed or that idea was not submitted.</li>
<li>CV for 800 images (50 images for each scene): after calculating the CV for 80 images, we calculated this CV and used it to check whether there is a correlation between CV and LB. Here we didn't take covisibility, distance, or rotation angle into account, so we could not see a strong correlation between LB and CV. However, the submissions with good LB scores had roughly the same CV scores (0.58–0.60 with our seed), so we checked whether the CV could reach this level.</li>
</ul>
<p><strong>Things that didn't work</strong></p>
<ul>
<li>Semantic segmentation to filter out keypoints/matches in specific class areas (sky, car, person, etc.). This never contributed to LB/CV scores…</li>
<li>Refinement using ALIKE. This is based on IMC 2021 solution (LoFTR-SPP). We used ALIKE instead of SuperPoint but LB was not improved.</li>
<li>Refinement using <a href="https://github.com/mihaidusmanu/local-feature-refinement" target="_blank">this repository</a>. The processing time is slow and the CV score was not good.</li>
<li>Combinations of the other models. We tested PDCNet+, ASLFeat and COTR. However, the processing time is slow and LB/CV is not improved.</li>
<li>DEGENSAC. We used this for only one submission, and the LB score was worse.</li>
</ul>
<p><strong>Things that may work</strong></p>
<ul>
<li>TTA (rotation/flip).</li>
<li>Ensemble with DKM. We tested DKM using a public notebook and observed low LB score. But we should have tried the higher resolution…</li>
<li>Retraining SuperGlue using Disk like IMC 2021 solution.</li>
</ul>
|
Image Matching Challenge 2022
|
17th place solution - Upscaling & Models
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our team would like to thank Google Research and Kaggle for organizing such an amazing competition.</p>
<h1>Models</h1>
<p>Our final solution which scored 0.835 consists of: </p>
<ul>
<li><strong>LoFTR DS/OT</strong> (default settings for DS, 0.55 match_coarse threshold for OT)</li>
<li><strong>SuperPoint/SuperGlue</strong></li>
<li><strong>DKM</strong> (100 features)</li>
<li><strong>MatchFormer-LargeLA</strong> (0.55 match_coarse threshold)</li>
</ul>
<h1>Upscaling with Lanczos interpolation over 8×8 pixel neighborhood</h1>
<p>The part which had the biggest impact on our solution was <strong>upscaling</strong> with Lanczos interpolation. Generally speaking, we upscaled images for all of our models except MatchFormer by a factor of <strong>1.5x</strong>, and then made their dimensions divisible by 8. In the end, the positions of the keypoints were adjusted back using the same scale factor. We think that the CNNs used in many models, especially in LoFTR, can simply extract richer local features from upscaled images.</p>
<blockquote>
<p>img = cv2.resize(img, None, fx=1.5, fy=1.5, interpolation=cv2.INTER_LANCZOS4)</p>
<p>w, h = img.shape[1], img.shape[0]<br>
w, h = w // 8 * 8, h // 8 * 8</p>
<p>img = img[:h, :w, :]</p>
</blockquote>
<h1>Outlier detection</h1>
<p>For outlier detection we incorporated simple MAGSAC from OpenCV:</p>
<blockquote>
<p>cv2.findFundamentalMat(keypoints_1, keypoints_2, cv2.USAC_MAGSAC, ransacReprojThreshold=0.25, confidence=0.99999, maxIters=100000)</p>
</blockquote>
<h1>What did not work for us?</h1>
<p>Here's a list of things which unfortunately didn't improve our score.</p>
<ul>
<li>SuperGlue re-training or fine-tuning (we did this to combine it with DISK descriptors)</li>
<li>DISK + OANet</li>
<li>Calculating additional keypoints from rotated images</li>
<li>Inverse image order for LoFTR</li>
</ul>
<p>I don't remember the exact difference in scores, but Lanczos was definitely better than other methods</p>
<p>At the moment when I tried DISK, I thought the performance would be quite good, so I didn't check its solo results. However, when I added DISK + OANet to LoFTR + SP/SG, the score went down from 0.775 to 0.768</p>
|
Image Matching Challenge 2022
|
18th place - Ensemble of 3 matchers + bonus crazy ideas that didn't work
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I and my teammate <a href="https://www.kaggle.com/radac98" target="_blank">@radac98</a> want to deeply thank all organizers and colleagues that participated in this challenge. It was our first experience as kagglers, and we enjoyed it a lot! Very nice community, with a lot of knowledge sharing.</p>
<hr>
<h1>[Our solution]</h1>
<p>Our best approach is actually very simple compared to several other complicated strategies we have tested during the competition. It is an ensemble of three matchers: Vanilla LoFTR + Vanilla DKM + Vanilla PDCNet_plus.<br>
For each image pair, we do as follows:</p>
<ol>
<li>For LoFTR, we perform a simple TTA rescaling of images for each pair (here, the numbers mean the size in pixels of the largest image dim for the first, and second image respectively):<br>
LoFTR(1200, 1200)<br>
LoFTR(1200, 600)<br>
LoFTR(600, 1200)</li>
<li>For DKM, we randomly sample 9000 * <strong>overlap</strong> correspondences (<em>overlap</em> as estimated by its confidence map above a threshold of 0.05), and the sampling probability is weighted by the confidence (a minimal sketch of this sampling appears after this list).</li>
<li>For PDCNet, we randomly sample 6000 * <strong>overlap</strong> correspondences (<em>overlap</em> as estimated by its confidence map above a threshold of 0.2)</li>
</ol>
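<p>A minimal sketch of the confidence-weighted sampling for DKM, assuming <code>matches</code> is an array of dense correspondences and <code>confidence</code> their per-match confidences (the helper name and the way <em>overlap</em> is estimated here are illustrative):</p>
<pre><code>import numpy as np

def sample_dkm_matches(matches, confidence, base_n=9000, conf_thr=0.05, rng=None):
    """Sample base_n * overlap matches, with probability proportional to confidence."""
    if rng is None:
        rng = np.random.default_rng(0)
    overlap = float((confidence > conf_thr).mean())   # estimated covisible fraction
    n = min(int(base_n * overlap), len(matches))
    probs = confidence / confidence.sum()
    idx = rng.choice(len(matches), size=n, replace=False, p=probs)
    return matches[idx]
</code></pre>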
<p>Concatenate all matches (with corrected scaling to the original image resolution) and use OpenCV's MAGSAC (15k iters, p=0.9999, px_thr = 0.25) and voilà!</p>
<p>We have chosen these parameters empirically based on the public LB score and it turned out to work quite nice for the private LB. </p>
<p><em>Little comment about QuadTree Attention</em>: After reading some solutions once the competition ended, we (and several other people) were surprised that QuadTree LoFTR alone performs very well on the LB. We simply overlooked it because the public kernel with QuadTree performed badly on the LB and its visual matches were quite bad. Maybe if we had included it in our ensemble, we could have obtained better results with quite a simple modification.</p>
<hr>
<h1>[Other matchers we have tried]</h1>
<p>We also tested including other pretrained matchers and filtering methods in our Ensemble, which ended up not improving the results:</p>
<ul>
<li>ASLFeat</li>
<li>Se-LOFTR</li>
<li>SGMNet</li>
<li>MatchFormer</li>
<li>Patch2Pix</li>
<li>SuperPoint + SuperGlue</li>
<li>SIFT + HardNet</li>
<li>SIFT + Key.Net kps + HardNet desc</li>
<li>Try to filter matches with AdaLAM</li>
</ul>
<p>*In this case, we tried to add each method individually to our best ensemble. Maybe if we had tested more combinations, they could have been useful in the resulting ensemble, but in the meantime we were testing more complicated solutions that ended up not working, which I'm describing below.</p>
<hr>
<h1>[Complicated solutions that ended up not working]</h1>
<p>By far, the most complicated one we have tried was a two-view bundle adjustment. To do that, we had to decompose the estimated F matrix into K, R, t by using an optimization technique described in this paper: <a href="https://openaccess.thecvf.com/content_iccv_2015/papers/Sweeney_Optimizing_the_Viewing_ICCV_2015_paper.pdf" target="_blank">Optimizing the Viewing Graph for Structure-from-Motion</a>, where the F matrix can be decomposed to E = K'.T @ F @ K, and the essential matrix then has to satisfy the constraint in eqn (10) of the paper. We used a scipy optimizer to find the focal lengths that minimize that cost. I tested this algorithm with images with known focal length, and surprisingly it was working quite well for pairs for which the cost function was very low. Plus, with the BA, even if the value was a little off from the optimum, it could converge during the optimization. After that, we were then able to obtain <strong>K</strong>, <strong>R</strong>, and <strong>t</strong> by decomposing E, and even obtained visually appealing 3D reconstructions. However, when attempting to bundle adjust the image pairs from the test data as a preliminary test, we observed that the estimated focal lengths were actually diverging from the pseudo-groundtruth of the test data during optimization. We suspect that this was happening because the two-view-only setup + near-planar structures causes the optimization to be ill-conditioned; maybe with more simultaneous views it would be possible to obtain stable results (or maybe it was just a bug we didn't find?). In the end, this solution was already quite complicated and we decided to focus on simpler ones.</p>
<p>Another thing that we tried was to obtain sub-pixel refined correspondences by using the two-view refinement stage from the work <a href="https://github.com/mihaidusmanu/local-feature-refinement" target="_blank">Multi-View Optimization of Local Feature Geometry</a>. It seemed to work at small scale, but it was slow to run, around 5+ seconds for thousands of matches, which was impractical. So we had to dramatically reduce the number of matches to a maximum of 800 to be able to run it within the time budget we had available for the submission. To focus only on very confident matches, we even tried to first estimate an F matrix, get the inliers, reduce them to 800 and run the optimization on them to finally estimate a refined F, but apparently limiting the matches was causing a big drop in the LB score.</p>
<p>We also tried a strategy similar to <a href="https://arxiv.org/pdf/2202.09050.pdf" target="_blank">Guide Local Feature Matching by Overlap Estimation</a>, where we estimated an overlap region with the ensemble and then used LoFTR (the best model) to extract matches on the estimated overlap region only, which would make the matcher only match parts of the image that were actually matchable, reducing the search space and thus facilitating the matcher's job. This kind of worked at the beginning of the competition, when we were only using LoFTR + local features, but when we included DKM and PDCNet it actually made the LB score worse.</p>
<p>More things that we have tried but they failed:</p>
<ul>
<li>We attempted to estimate a F matrix for each ensemble item individually, concatenated the inliers, and estimated a last F matrix from the inliers;</li>
<li>We recursively estimated planes in the scene until no more planes were found, and tried to balance the number of matches on each plane -- the rationale behind this was that the F matrix estimation from a wide range of different planes would be more well-defined mathematically (similar idea to DegenSAC).</li>
</ul>
<p>Last but not least, we thank the authors of all the amazing matching methods we have used in the experiments, releasing the source code to the community is essential to make progress in the field!</p>
|
Image Matching Challenge 2022
|
1st Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was a wonderful experience for me to participate with you as well. Thank you!</p>
<p>Thanks to all participants for their hard work in the competition.<br>
We are honored to have achieved a great result of 1st place in this competition.</p>
<p>We would also like to express our deepest gratitude to the organizers for organizing such a wonderful competition.<br>
Thank you very much.</p>
<p>Finally, thanks to my amazing teammates <a href="https://www.kaggle.com/gmhost" target="_blank">@gmhost</a>, <a href="https://www.kaggle.com/xiuqi0" target="_blank">@xiuqi0</a>, and <a href="https://www.kaggle.com/forcewithme" target="_blank">@forcewithme</a> the universe.<br>
I believe that their great insight and experimentation brought about honorable results. Thank you very much!</p>
<h1>OverView</h1>
<p>Our solution is based on an ensemble of matching results from three different models: LoFTR, SuperGlue, and DKM.<br>
They all use publicly available pre-trained weights and have not been trained or fine-tuned.</p>
<p>Significant score increases were also achieved by matching at multiple resolutions and by cropping only the covisibility region before matching.</p>
<p>Our solution is a 2-stage method and the framework is as follows.</p>
<ol>
<li><p>We first concat the keypoints produced by LoFTR with a 840 resolution and keypoints produced by Superpoint_SuperGlue with 840, 1024, 1280 resolution. This is the first stage of our framework.</p></li>
<li><p>We <strong>use DBSCAN to get clusters</strong> containing the top 80~90% matching keypoints. Generate a bbox for each image with those keypoints, and crop it. We call this method mkpt_crop. Mkpt_crop can efficiently filter outliers and extract key areas.</p></li>
<li><p>Use LoFTR, dkm, SuperPoint_SuperGlue to rematch on the key area. This is the second stage of our framework.</p></li>
<li><p>concat the keypoints produced by stage1 and stage2. Then do RANSAC and get fundamental matrix.</p></li>
</ol>
<p><a href="https://postimg.cc/JDhJzSCY" target="_blank"><img src="https://i.postimg.cc/W3G7XPWV/solution-framework.png" alt="solution-framework.png"></a></p>
<h1>KeyPoint</h1>
<ul>
<li>mkpt crop</li>
<li>ensemble multi-resolution matching</li>
</ul>
<h3>mkpt crop (matching keypoints crop)</h3>
<p>Based on the hypothesis that regions not shared by two images are unnecessary for matching, we developed the idea of <strong>mkpt crop</strong> (matching keypoints crop), which crops images efficiently and accurately.</p>
<p>The mkpt crop crops covisibility regions in the following way.</p>
<ol>
<li>We use pre-cropped images (original images) to match each other </li>
<li>Clustering the matching points of each image with DBSCAN</li>
<li>Extract clusters containing the top 80%~90% matching points to remove outlier regions</li>
<li>Crop the outer boundary of the cluster</li>
</ol>
<p><a href="https://postimg.cc/zLwvY7Lt" target="_blank"><img src="https://i.postimg.cc/s2qhS03r/mkpt-crop-image.png" alt="mkpt-crop-image.png"></a></p>
<p>Since a well pre-trained model finds quite a lot of correct matching points, we were able to find covisibility regions with high accuracy by thinning out the outliers to some extent. </p>
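<p>A minimal sketch of the mkpt crop, assuming <code>mkpts</code> holds the matched keypoints of one image as an (N, 2) array; the DBSCAN parameters, the kept fraction and the padding are illustrative placeholders, not our tuned values:</p>
<pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def mkpt_crop_bbox(mkpts, eps=75, min_samples=5, keep_frac=0.85, pad=20):
    """Cluster matched keypoints with DBSCAN, keep the largest clusters that
    together contain ~keep_frac of the points, and return their bounding box."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(mkpts)
    valid = labels >= 0                               # drop DBSCAN noise points
    counts = np.bincount(labels[valid])
    kept, total = [], 0
    for c in np.argsort(counts)[::-1]:                # clusters from largest to smallest
        kept.append(c)
        total += counts[c]
        if total >= keep_frac * valid.sum():
            break
    pts = mkpts[np.isin(labels, kept)]
    x0, y0 = np.maximum(pts.min(axis=0).astype(int) - pad, 0)
    x1, y1 = pts.max(axis=0).astype(int) + pad
    return x0, y0, x1, y1
</code></pre>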
<p>The models used in Step 1 were LoFTR and SuperGlue. At first, only the LoFTR results were used to calculate the crop area, but by adding the SuperGlue results, it was possible to cover the covisibility areas that LoFTR could not find, thus increasing the score significantly.</p>
<p>In addition, mkpt crop is a very lean and efficient method because the matching points computed in step 1 can also be used for the final ensemble.</p>
<p>We also tried to crop by segmentation using Mask2Former, but since segmentation crop cannot determine whether or not a region is common to two images, there were inevitably many wasted regions, and mkpt crop gave better results.</p>
<p>However, mkpt_crop has the risk of removing small covisibility regions because it removes clusters with few matching points. Therefore, our team added the matching points from the original image to the ensemble to achieve more stable matching.</p>
<h3>ensemble multi-resolution matching</h3>
<p>We used LoFTR, SuperGlue, and DKM to perform matching at various resolutions and ensemble the results.<br>
The final model used was as follows. </p>
<table>
<thead>
<tr>
<th>#</th>
<th>Model</th>
<th>input image</th>
<th>resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>LoFTR</td>
<td>original</td>
<td>840</td>
</tr>
<tr>
<td>2</td>
<td>SuperGlue</td>
<td>original</td>
<td>1280</td>
</tr>
<tr>
<td>3</td>
<td>SuperGlue</td>
<td>original</td>
<td>1024</td>
</tr>
<tr>
<td>4</td>
<td>SuperGlue</td>
<td>original</td>
<td>840</td>
</tr>
<tr>
<td>5</td>
<td>LoFTR</td>
<td>mkpt crop</td>
<td>1280</td>
</tr>
<tr>
<td>6</td>
<td>DKM</td>
<td>mkpt crop</td>
<td>840</td>
</tr>
<tr>
<td>7</td>
<td>SuperGlue</td>
<td>mkpt crop</td>
<td>1024</td>
</tr>
<tr>
<td>8</td>
<td>SuperGlue</td>
<td>mkpt crop</td>
<td>1280</td>
</tr>
<tr>
<td>9</td>
<td>SuperGlue</td>
<td>mkpt crop</td>
<td>1536</td>
</tr>
</tbody>
</table>
<p>(#1~4 are used for mkpt_crop calculation)</p>
<p>The matching points output by these models are rescaled to fit the original image size and then concatenated using np.concatenate to compute the F matrix.</p>
<p>We believe that changing the input resolution led to an improvement in scores because the matching points obtained were different and more diverse.</p>
<h1>Other tips</h1>
<h3>Use of pretrained models</h3>
<p>All of our solutions this time use a publicly available pretrained model.</p>
<p>Since the quality of the training and test data differed in this competition, we did not conduct fine tuning using the provided training data because we thought it would be less effective.</p>
<h3>RANSAC parameter</h3>
<p>We used USAC_MAGSAC to compute the F matrix.</p>
<p>However, the parameter settings in the public notebook took a long time to calculate and resulted in a timeout, so we repeated several experiments and adopted the following parameters, which are fast and do not change the accuracy much. <br>
<code>cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.2, 0.99999, 50000)</code><br>
With these parameters, the above model configuration took approximately 9 hours to complete the process, yielding a score of public:0.86287/private:0.86343.</p>
<p>Incidentally, although not selected in the final submission, a solution that excluded the #4 model (SuperGlue/original/840), increased the DBSCAN eps parameter slightly, and employed the following parameters yielded private:0.86516.<br>
<code>cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.19, 0.99999, 50000)</code><br>
We have not experimented enough with the model configuration and the combination of RANSAC parameters, so there may still be room for improvement.</p>
<h1>Things are also useful but not as effective as our final solution</h1>
<h3>TTA</h3>
<p>Finally we did not use TTA.<br>
We initially used horizontal flip TTA, but found that multi-resolution improved scores more, so we eliminated TTA and increased the number of models used in multi-resolution.</p>
<h3>Other Matching Methods</h3>
<p>We also tried matchformer and keynet_affnet_hardnet, but ultimately did not adopt them.</p>
<ul>
<li>matchformer: Slow, can improve by about 0.002.</li>
<li>keynet_affnet_hardnet: can improve by about 0.001</li>
</ul>
<h1>Things do not work</h1>
<ol>
<li>Train LoFTR, Superpoint, and SuperGlue.</li>
<li>Larger resolution for dkm</li>
<li>Use Segmentation to filter moving object(e. g. cars, people)</li>
<li>Adding ASLFeat</li>
<li>Adding ALike</li>
</ol>
<p>Originally, I was looking for a way to exclude outliers from the matching points, and I ended up with DBSCAN.<br>
Other than DBSCAN, I also tried Isolation Forest, which is an outlier detection method, although it is not a clustering method. However, it was not able to exclude outliers very cleanly, so I did not use it.</p>
|
Image Matching Challenge 2022
|
21st place solution summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes that world rank is crazy :) 🙏 I'm the Kaggle Expert Spokesperson/ambassador ;)</p>
<p>Thanks again for a great competition and exciting race to the finish line about the lines! 🙂<br>
Also extra thanks to the host that responded and interacted with professionalism, speed and full support during the competition. If we also had a host leaderboard this host would have been ranked high 😉</p>
<p>Thanks also to the fellow competitors; this is not a solo performance show, it's community teamwork, and that is what we have seen here: many contributions and top ideas.</p>
<hr>
<p><strong>Short summary of the solution as follows:</strong></p>
<p>The solution combines two public SOTA models, LoFTR + QuadTreeAttention and MatchFormer, tuned together in the same solution, using the information from each of the research papers as well as the information in the training and/or testing code, with some extra code and parameter changes to merge them for this challenge. The combined results were better than using the models separately, which is not a surprise, as ensembling/concatenating different kinds of models often leads to better results, at the cost of more time and hardware. Here we could also take advantage of the cheaper GPU usage and faster inference of the LoFTR+Q framework.</p>
<p><strong>Training:</strong></p>
<p>No training, just pretrained models.</p>
<p><strong>Validation during creation and testing:</strong></p>
<p>Used the training data as validation during testing with the competition metric as validation score.</p>
<p><strong>Models/matchers used:</strong></p>
<p>Loftr + QuadTreeAttention<br>
<a href="https://github.com/Tangshitao/QuadTreeAttention" target="_blank">https://github.com/Tangshitao/QuadTreeAttention</a></p>
<p>MatchFormer<br>
<a href="https://github.com/jamycheung/MatchFormer" target="_blank">https://github.com/jamycheung/MatchFormer</a></p>
<p><strong>Specific solution details and parameter changes:</strong></p>
<p>Loftr + QuadTreeAttention with FeatureMatching</p>
<p>Outdoor Megadepth pretrained model.<br>
config file = loftr_ds_quadtree.py<br>
['match_coarse']['thr'] = 0.2255<br>
['match_coarse']['border_rm'] = 1<br>
['match_coarse']['skh_prefilter'] = True</p>
<p>Matchformer</p>
<p>Outdoor-large-LA model pretrained model<br>
config file = megadepth_test_1500.py<br>
['scens'] = 'outdoor'<br>
['match_coarse']['thr'] = 0.180</p>
<p><strong>Image and fundamental matrix processing</strong></p>
<p>Image processing</p>
<p>Here I used the standard preprocessing from the LoFTR+Q framework for both models, since it worked well for both (and even for other LoFTR models) and gave better results than my own custom code for resizing, scaling, etc. – in other words, use the same preprocessing the model was designed with.<br>
I changed the standard value to mgdpt_df = 2, which worked best in this challenge.</p>
<p>Image dimension and scaling was one of the key areas in the challenge, and it looked like different models had different preferred (and sometimes mandatory) formats to work as they should. Here I think I found a middle way for both. I used a classic try/except approach with three attempts: the first used the max dim side of the second image (which seemed to work best) with the // 8 * 8 approach on the original size; if the model failed, the second try added +64 to the size; and as the last try I just used dim 1088. All sizes were tested for best results and to be fail-proof.</p>
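<p>A minimal sketch of that try/except cascade; the <code>matcher</code> callable and its <code>long_side</code> argument are placeholders for how the matchers were actually called:</p>
<pre><code>def match_with_fallback(matcher, img1, img2):
    """Try progressively different target sizes, falling back on failure
    (e.g. GPU out-of-memory or shape errors)."""
    base = max(img2.shape[:2]) // 8 * 8   # first try: longest side of the 2nd image, divisible by 8
    for target in (base, base + 64, 1088):
        try:
            return matcher(img1, img2, long_side=target)
        except Exception:
            continue
    return None
</code></pre>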
<p>Fundamental matrix processing</p>
<p>Then I filtered the results from the matchers with a confidence threshold of 1e-5, as described in the paper/site. I also tried skipping the threshold and instead sorting by score and keeping e.g. the top 1k, but the first approach was the better one here.<br>
The next part was scaling, which is a very important section too: scale the key points back to the original size, using the scales saved in the output of the matchers.<br>
After the scaling I just concatenated the key points before using the CV2 MAGSAC to create the fundamental matrix. I had to go down to 40k iterations to handle the time limit.</p>
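<p>A minimal sketch of this filter/rescale/concatenate/MAGSAC step; the tuple layout of <code>results</code> and the MAGSAC threshold/confidence values are illustrative (only the 1e-5 confidence filter and the 40k iterations come from the description above):</p>
<pre><code>import numpy as np
import cv2

def fuse_and_estimate(results, conf_thr=1e-5, max_iters=40000):
    """results: list of (mkpts0, mkpts1, conf, scale0, scale1) tuples from the matchers."""
    all0, all1 = [], []
    for mkpts0, mkpts1, conf, scale0, scale1 in results:
        keep = conf > conf_thr
        all0.append(mkpts0[keep] * scale0)   # scale factors map back to the original image size
        all1.append(mkpts1[keep] * scale1)
    mkpts0, mkpts1 = np.concatenate(all0), np.concatenate(all1)
    return cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC,
                                  0.25, 0.99999, max_iters)
</code></pre>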
<p>That’s it! 🙂</p>
<hr>
<p><strong>What I also tried</strong></p>
<p>Sinkhorn</p>
<p>Used Sinkhorn with the original LoFTR models, as it gave better results than the dual_softmax match type. The idea was to use it as a third model, but it couldn't fit into the ensemble in terms of GPU and time limits, and it also used SuperGlue, which didn't match the prize rule criteria.</p>
<p>VSAC, org MAGSAC++, Degensac</p>
<p>I tried other algorithms like VSAC and Degensac, but with my implementation they didn't work better than the CV2 MAGSAC (which is an implementation of the original MAGSAC++) this time. My plan with my own implementation of the original MAGSAC++ was also to combine different known algorithms and samplers like in this presentation <a href="http://cmp.felk.cvut.cz/cvpr2020-ransac-tutorial/presentations/RANSAC-CVPR20-Mishkin.pdf" target="_blank">http://cmp.felk.cvut.cz/cvpr2020-ransac-tutorial/presentations/RANSAC-CVPR20-Mishkin.pdf</a>, but I didn't have the time. My implementation of the original MAGSAC++ didn't work better than or equal to the CV2 one; more time is needed here.<br>
I have also shared the installation of these algorithms in the post <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328877" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328877</a></p>
<p>Thanks, yes, I know that, but it didn't work the same; I thought cv2 had done something more to it, but maybe it was my implementation of it then. I will edit the comment so it does not confuse anyone :)</p>
|
Image Matching Challenge 2022
|
23rd place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the challenge! It was a great learning experience.<br>
Here's a short summary of our best scoring method and some insights and open questions.</p>
<ol>
<li>Upscale image.</li>
<li>Use <strong>LoFTR</strong>.</li>
<li>If < 2000 matches, use <strong>SE2-LoFTR</strong> but with the two images in reverse order and add the found matches to the earlier ones.</li>
<li>If still < 2000 matches, use <strong>DKM</strong> to find ROIs in the images as follows. <ol>
<li>Sample 500 DKM matches. </li>
<li>Cluster the DKM correspondences (as 4D points) using DBSCAN and crop to the different clusters in both images (see the clustering sketch after this list). </li>
<li>Run SE2-LoFTR on each of the cropped (and resized) image pairs and add the found matches to the earlier ones.</li></ol></li>
<li>Downscale keypoints to compensate for 1.</li>
<li>Run <code>cv2.findFundamentalMat</code>.</li>
</ol>
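<p>For the DBSCAN clustering substep referenced above, a minimal sketch is given below; the eps, min_samples and margin values are illustrative, not the ones actually used.</p>
<pre><code>import numpy as np
from sklearn.cluster import DBSCAN

def correspondence_rois(mkpts0, mkpts1, eps=80.0, min_samples=20, margin=32):
    # Treat each correspondence as a 4D point (x1, y1, x2, y2) and cluster.
    pts4d = np.concatenate([mkpts0, mkpts1], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts4d)
    rois = []
    for lbl in set(labels) - {-1}:                   # -1 is DBSCAN noise
        sel = labels == lbl
        box0 = (mkpts0[sel].min(0) - margin, mkpts0[sel].max(0) + margin)
        box1 = (mkpts1[sel].min(0) - margin, mkpts1[sel].max(0) + margin)
        rois.append((box0, box1))                    # crop these regions and rematch
    return rois
</code></pre>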
<p><strong>SE2-LoFTR</strong><br>
In our experiments the "big" version of SE2-LoFTR usually slightly outperformed LoFTR while the "small" versions did worse. The disadvantage of the "big" version is that it is slower than LoFTR, but we still used it for performance reasons. Using two different networks in stages 2. and 3. improves robustness, but perhaps a retrained LoFTR version could have been an alternative to SE2-LoFTR. Upright images of static objects (buildings) are not a scenario where SE2-LoFTR has a clear upside over LoFTR. Still, I wanted to use SE2-LoFTR as it is my method. ;)</p>
<p><strong>Adaptively choosing ensemble</strong><br>
Most teams seem to have ensembled by always using a certain set of methods on all image pairs. We saw on phototourism that when LoFTR finds many matches, the score is usually very good. So we only use additional methods when LoFTR fails to find many matches, saving computational resources in this way.</p>
<p><strong>DKM</strong><br>
Simply applying DKM as is yielded a worse score than LoFTR, possibly due to poorer keypoint localizations (this seems to have been solved by some teams by changing the image size used internally in DKM). However DKM has the advantage of always producing as many matches as one wants and usually good ones. The idea of using DKM for finding ROIs in the images is similar to the 10th place solution and could probably be improved a lot by digging into the DKM method.</p>
<p><strong>Resizings</strong><br>
Upscaling the input images to longest side 1176 was the single most important reason for the improvement over the baseline LoFTR. Using dimension divisible by 8 is very important for LoFTR due to the 1/8 coarse scale. Modifying LoFTR to use a finer coarse scale and retraining would probably have worked quite well.</p>
<p><strong>RANSAC versions</strong><br>
<code>cv2.USAC_MAGSAC</code> always hugely outperformed other open-cv RANSAC presets in our experiments. The reason for this is a bit unclear. The last 2 days we tried changing the parameters of MAGSAC (i.e. nbr of local optimization samples and l.o. iterations), which seemed to give improvements on phototourism data using only LoFTR, but this did not translate to the test set and the complete method in the few submissions we had left.</p>
<p>In the end, the big thing we missed was perhaps using the QuadTreeAttention version of LoFTR as described in, for instance, the 6th, 11th and 21st place solutions. We also did not use SuperPoint/SuperGlue as it was semi-banned. Reading through other solutions, we could probably have used some multithreading tricks to get more computation done; we had quite frequent timeout issues when trying things.</p>
<p>Hi, thanks for trying SE2-LoFTR! I'm unsure what you mean - do you mean using a segmentation mask to filter out outlier correspondences obtained from LoFTR/SE2-LoFTR? Sure, share the error.</p>
<p>Do you know where in the LoFTR code they resize the mask? I can't currently think of a reason for why I would have changed this behavior in the SE2-LoFTR code.</p>
<p>Ah, nice to have it cleared up! No worries. :)</p>
<p>Since 396480 is 8*8*6195 I would assume that this might be due to using a mask at the full image resolution in the coarse matching step, which is at 1/8 scale. Did this work with LoFTR or did you get the same error?</p>
|
Image Matching Challenge 2022
|
26th place
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! Yes, trying ideas is the main fun part :) Congratulations to you too!!! About DKM: no, it was in the ensemble; on validation it was about 0.6 better than the original, but I'll wait for the official training code, since I might have missed something. I could only fine-tune; with re-training I couldn't reach the same performance level. Colab Pro+ was a really good option: I got an A100 a couple of times, which made it possible to try re-training from scratch :) </p>
<p>For SGMNet, I only tried it at the very beginning with the baseline setup; I don't have exact numbers, but I wasn't happy with it and dropped it (didn't spend time investigating, might be some bug).</p>
<p>Thanks! Congratulations to you!!! Great job! Looking forward to read the solution! :) </p>
<p>First, I would like to thank organizers for such a great competition, it was really easy to participate, great job! </p>
<p>And congratulations to all the winners! </p>
<p>My plan for the competition was to see how far I can get with limited resources (Colab Pro+) just to try what is out there on the market, but I’ve ended up training some models…)))</p>
<p>The main things to solve, as I see it, are scale and matching / smart outlier removal. </p>
<hr>
<p><strong>IMAGE</strong></p>
<ol>
<li>For LoFTR I didn’t resize, but cropped the image a little bit from the right/bottom, which didn’t affect the locations of the found keypoints. </li>
<li>Multiple scales (for LoFTR and SuperGlue), all keypoints after were resized to the original image size</li>
</ol>
<hr>
<p><strong>RECIPE</strong></p>
<p>Public 0.827, Private <strong>0.828</strong></p>
<p>For LoFTR, TTA: left-right, and merge keypoints</p>
<ol>
<li>LoFTR (pre-filtered keypoints by confidence)<br>
For multiple scales, I rescale uniformly (1.2, 1.2; 1.6,1.6) and filter out by the biggest number of outliers (with running MAGSAC, on conf.>0.7), rescale back to original image size </li>
<li>HardNet (2k, only for images not from google urban (based on image size))</li>
<li>ALIKE, 2k </li>
<li>DKM (TTA: left-right, 500 features subset, then >0.7 probability), default</li>
<li>Superpoint+SuperGlue (only for google urban data, 8k, 0.3thr, image original size, 1.2x, filter by confidence)</li>
<li>Combine all keypoints.</li>
<li>Magsac++ 0.185, 0.25, depends on google urban or not, 100k iter</li>
</ol>
<p>I also had a model I uploaded at the last minute and wanted to select, but didn't ))): public 0.824, private <strong>0.831</strong>; it uses the fine-tuned DKM model.</p>
<hr>
<p><strong>What maybe could have worked with more time investment:</strong></p>
<ol>
<li><p>Keypoints + line matching + homography + fundamental from homography. I found this paper and migrated it to Python: <a href="url" target="_blank">http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7371221</a> and tried running it with just multiple homography matrices estimated sequentially + RANSAC, as-is (it didn't work; it needs much wider, more general testing, maybe there is some bug). <br>
I think it's an idea to try in combination with line matches. Or even maybe with progressive-x it could have worked. But I didn’t have enough time for it.</p></li>
<li><p>Find a similar dataset for validation. That would certainly have made life much easier. For validation I was using the IMC train dataset, subsampled to 2k with the same distribution, but it is not the same. </p></li>
<li><p>DKM customization with re-training - I managed to fine-tune and get improvement on validation on approx. 0.6. I had an issue with the probability loss which I didn’t completely solve, so fine-tuning, but not to the scale I wanted. And some ideas I tried with epipolar loss, etc. </p></li>
<li><p>LogPolar (re-training on MegaDepth, does improve a little bit, was trying to reincarnate it, but too slow)</p></li>
<li><p>QuadTreeAttention - with a simple resize I didn't get great results, so I decided to leave it. </p></li>
</ol>
<hr>
<p><strong>What didn’t work for me:</strong></p>
<ol>
<li>Re-training NG-RANSAC </li>
<li>ScaleNet (<a href="url" target="_blank">https://github.com/axelbarroso/scalenet</a>)</li>
<li>GOPAC</li>
<li>SE2-LoFTR</li>
<li>SGMNet</li>
<li>HardNet + AdaLAM</li>
</ol>
<p><strong>References</strong> </p>
<ol>
<li>LoFTR: <a href="url" target="_blank">https://zju3dv.github.io/loftr/</a></li>
<li>DKM: <a href="url" target="_blank"></a><a href="https://github.com/Parskatt/DKM/" target="_blank">https://github.com/Parskatt/DKM/</a> </li>
<li>ALIKE: <a href="url" target="_blank"></a><a href="https://github.com/Shiaoming/ALIKE/" target="_blank">https://github.com/Shiaoming/ALIKE/</a> </li>
<li>SuperGlue: <a href="url" target="_blank"></a><a href="https://github.com/magicleap/SuperGluePretrainedNetwork" target="_blank">https://github.com/magicleap/SuperGluePretrainedNetwork</a> </li>
</ol>
|
Image Matching Challenge 2022
|
27th place 0.823 public lb - 0.828 private lb
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for hosting this competition and congrats to all the participants.</p>
<p><strong>Summary</strong></p>
<p>My final solution is a combination of several models:</p>
<ol>
<li>3 x LoFTR (472x840, 704x1280, 936x1704)</li>
<li>SuperGlue + SuperPoint (472x854, max_keypoints 4096)</li>
<li>DKM (472x854) with 600 feature points</li>
</ol>
<p>Before merging all the keypoints for fundamental-matrix estimation, I prefiltered DKM outliers by applying DEGENSAC with a threshold of 0.5. </p>
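<p>For reference, a pre-filtering step along these lines might look like the sketch below (assuming the pydegensac package and its documented <code>findFundamentalMatrix</code> call; the exact integration is not from the original code):</p>
<pre><code>import numpy as np
import pydegensac

def prefilter_dkm(dkm_kpts0, dkm_kpts1, th=0.5):
    # Fit F with DEGENSAC at a 0.5 px threshold and keep only the inliers
    # before merging DKM matches with the other matchers' keypoints.
    F, mask = pydegensac.findFundamentalMatrix(dkm_kpts0, dkm_kpts1, th)
    mask = np.asarray(mask, dtype=bool)
    return dkm_kpts0[mask], dkm_kpts1[mask]
</code></pre>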
<p>Final RANSAC parameters:</p>
<p><code>cv2.findFundamentalMat(keypoints_1, keypoints_2, cv2.USAC_MAGSAC, ransacReprojThreshold=0.25, confidence=0.99999, maxIters=50000)</code></p>
<p><strong>Tricks that made a huge impact on the score:</strong></p>
<ol>
<li>Upscaling images for LoFTR and making them divisible by 8</li>
<li>Changing DKM default image resolution</li>
<li>Filtering out DKM outliers before the final fundamental matrix estimation</li>
<li>RANSAC parameters tuning</li>
<li>Ensembling</li>
</ol>
<p><strong>Things that didn't work for me:</strong></p>
<ol>
<li>HardNet, Hynet, SosNet</li>
<li>R2D2, DISK</li>
<li>OANet</li>
<li>SGMNet (<a href="https://github.com/vdvchen/SGMNet" target="_blank">https://github.com/vdvchen/SGMNet</a>)</li>
<li>LISRD (<a href="https://github.com/rpautrat/LISRD" target="_blank">https://github.com/rpautrat/LISRD</a>)</li>
<li>Patch2pix (<a href="https://github.com/GrumpyZhou/patch2pix" target="_blank">https://github.com/GrumpyZhou/patch2pix</a>)</li>
<li>Segmenting out shaky classes (people, cars, sky etc.) with models pretrained on MIT ADE20K scene parsing dataset<br>
(<a href="https://github.com/CSAILVision/semantic-segmentation-pytorch" target="_blank">https://github.com/CSAILVision/semantic-segmentation-pytorch</a>)</li>
<li>Other types of RANSAC (GC-RANSAC, DEGENSAC etc.)</li>
</ol>
|
Image Matching Challenge 2022
|
2nd Place Solution, ensemble of old and new methods
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to all the teams and especially the organizers' excellent work for hosting this competition, which helps to push the performance of image matching to the limit. It's my first kaggle competition but not the first time to participate in IMC, and I find it's always exciting to learn from other participants' brilliant solutions :)</p>
<h2><strong>Keypoints of my solution</strong></h2>
<ol>
<li>A novel single model baseline achieving 0.838/0.833 in private/public LB</li>
<li>Ensemble with other strong matchers further boosts the baseline greatly</li>
<li>Normalized positional encoding for transformer-based matchers</li>
<li>No TTA/multi-resolution or post/pre processing is applied. </li>
</ol>
<h2><strong>Start with a good baseline</strong></h2>
<p>My initial motivation to participate in this competition was to benchmark a matching model we developed recently. Our model shares a similar paradigm with LoFTR, using a transformer-style network to generate matches from raw image pairs without keypoint detection. Since our paper is still under double-blind review, we might not be able to share more information publicly, but I'm sure the code should be available in the near future :)</p>
<p>The model was submitted to IMC very early and achieved an impressive score of 0.838/0.833 on the private/public LB without any TTA. </p>
<h2><strong>Ensemble with other methods</strong></h2>
<p>Starting from the initial baseline, I ensemble more strong matchers into the solution, which always seems helpful. The results are reported as follows.</p>
<table>
<thead>
<tr>
<th>Matcher</th>
<th>Score(private/public)</th>
<th>Input Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline</td>
<td>0.838/0.833</td>
<td>(1472,832) or (832,1472)</td>
</tr>
<tr>
<td>LoFTR</td>
<td>0.783/0.772</td>
<td>(1400,792) or (792,1400)</td>
</tr>
<tr>
<td>SuperGlue+8k SuperPoint</td>
<td>0.724/0.728</td>
<td>longest dim 1600</td>
</tr>
<tr>
<td>Baseline+QuadTree</td>
<td>0.854/0.848</td>
<td>--</td>
</tr>
<tr>
<td>Baseline+QuadTree+LoFTR</td>
<td>0.860/0.853</td>
<td>--</td>
</tr>
<tr>
<td>Baseline+QuadTree+LoFTR+SuperGlue</td>
<td>0.862/0.859</td>
<td>--</td>
</tr>
</tbody>
</table>
<p>Generally, both LoFTR and QuadTree contribute largely to the final score; SG boosts the public score while the private score benefits less from it. For all transformer-based matchers, I keep two fixed resolutions (chosen by aspect ratio) to maintain a roughly constant memory consumption. For all methods, I use <em>cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.2, 0.99999, 100000)</em> to get the fundamental matrix.</p>
<p>It also needs to be clarified that I use a <strong>normalized positional encoding</strong> for the baseline, LoFTR and QuadTree, which seems to always boost performance by 0.003-0.005 (sometimes even more) without additional cost. I will discuss it in the next section.</p>
<h2><strong>Normalized Positional Encoding</strong></h2>
<p>The original positional encoding of LoFTR is something like</p>
<p><strong>pe[0::4, :, :] = torch.sin(x_position * div_term)</strong>, where the <strong>x_position</strong> is absolute pixel coordinate</p>
<p>A potential issue with this implementation is that when the test resolution deviates from the training resolution, the PE will use unseen coordinates for encoding, which harms the network's ability to recognize locations. To alleviate this, we apply a simple normalization as</p>
<p><strong>pe[0::4, :, :] = torch.sin(x_position * (training_dim/test_dim) * div_term)</strong>, where <strong>training_dim/test_dim</strong> is a normalizing factor ensuring all coordinates stay within the training-resolution range. For example, if we train at resolution 840 and test at resolution 1472, the factor will be 840/1472=0.5706. Empirically I find this modification benefits more or less all transformer-based matchers by around 0.005.</p>
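<p>A minimal sketch of this normalization, adapted from the public LoFTR-style sinusoidal positional encoding (the exact integration into the baseline model is not public, so the function below is only an illustration):</p>
<pre><code>import math
import torch

def normalized_sine_pe(d_model, test_shape, train_shape):
    # test_shape / train_shape: coarse-level (H, W) at test and training time.
    h, w = test_shape
    pe = torch.zeros(d_model, h, w)
    y_pos = torch.ones(h, w).cumsum(0).unsqueeze(0)
    x_pos = torch.ones(h, w).cumsum(1).unsqueeze(0)
    # Rescale coordinates so they stay inside the range seen during training.
    y_pos = y_pos * (train_shape[0] / h)
    x_pos = x_pos * (train_shape[1] / w)
    div_term = torch.exp(torch.arange(0, d_model // 2, 2).float()
                         * (-math.log(10000.0) / (d_model // 2)))[:, None, None]
    pe[0::4] = torch.sin(x_pos * div_term)
    pe[1::4] = torch.cos(x_pos * div_term)
    pe[2::4] = torch.sin(y_pos * div_term)
    pe[3::4] = torch.cos(y_pos * div_term)
    return pe
</code></pre>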
<h2><strong>Tried but didn't work</strong></h2>
<ol>
<li><p>Use semantic masks to filter out unmatchable objects (sky, cars, trees, pedestrians). This pre-processing is generally costly for me and doesn't deliver consistent improvements.</p></li>
<li><p>Homography pre-filtering. Use a homography hypothesis to filter out some matches and then estimate F, which was adopted by one of last year's IMC winning teams. This pre-filtering is theoretically confusing to me and empirically didn't give me a gain. </p></li>
<li><p>Replace LoFTR with SE2-LoFTR. At some point I guessed there might be some mild rotation in the test set (around 30-40 degrees), so I tried replacing LoFTR with SE2-LoFTR, but it turned out to be ineffective.</p></li>
<li><p>Using larger resolution for all methods, I have tried to enlarge the input resolution (like >1500), but the gain is limited and run-time largely increases.</p></li>
<li><p>Using AdaLAM to filter matches. There are some hyper-params for AdaLAM and maybe I didn't tune it well :(</p></li>
</ol>
<h2><strong>Didn't try but may work</strong></h2>
<ol>
<li>TTA and multi-resolution input.</li>
<li>Learned outlier filters, like OANet(<a href="https://github.com/zjhthu/OANet)" target="_blank">https://github.com/zjhthu/OANet)</a>, LMCNet (<a href="https://liuyuan-pal.github.io/LMCNet/" target="_blank">https://liuyuan-pal.github.io/LMCNet/</a>) and CLNet (<a href="https://github.com/sailor-z/CLNet)" target="_blank">https://github.com/sailor-z/CLNet)</a>.</li>
<li>Two stage matching, cropping convisble regions/wrapping with an estimated homography at first and then matching again.</li>
<li>Ensemble SOTA descriptors, like ASLFeat(<a href="https://github.com/lzx551402/ASLFeat)" target="_blank">https://github.com/lzx551402/ASLFeat)</a>, ALIKE(<a href="https://github.com/Shiaoming/ALIKE" target="_blank">https://github.com/Shiaoming/ALIKE</a>) and PoSFeat(<a href="https://github.com/The-Learning-And-Vision-Atelier-LAVA/PoSFeat" target="_blank">https://github.com/The-Learning-And-Vision-Atelier-LAVA/PoSFeat</a>)</li>
</ol>
<p>Thanks :). But I guess I may not present it due to policy constraint.</p>
<p>Thanks! Your detailed solution report is also very insightful and I learn a lot from it :)</p>
<p>Thanks! After re-checking the submission policy, I think it's allowed to present our method to the workshop and I'll do it. </p>
|
Image Matching Challenge 2022
|
30th Place 827 Private LB with image size 840
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers for this wonderful competition.<br>
Great thanks to my teammate and friend <a href="https://www.kaggle.com/jaafarmahmoud1" target="_blank">@jaafarmahmoud1</a> for his hard work; we really enjoyed working together again here on Kaggle.</p>
<h2>What I didn't like about this competition</h2>
<ul>
<li>The inconsistency between the training and testing data: if you optimize against validation you will probably end up out of the medal zone. I thought the private testing set would be hard, testing the models' robustness and stability, but it seems really similar to the public data (in terms of the results).</li>
</ul>
<h2>Our solution:</h2>
<p>We developed two solutions: one to maximize the validation score and another to maximize the LB. </p>
<h3>Best validation submission:</h3>
<p>After testing SuperPoint + SuperGlue, DKM, LoFTR, LoFTR QuadTree and MatchFormer on the validation data, we noticed that some of them improve easy scenes like "notre_dame_front_facade" but drop the mAA for other (hard/medium) scenes like "colosseum_exterior", which indicates a huge variance in results when the data is more general.<br>
The difference for the same scene could be up to 0.08.<br>
To overcome this problem, we validated combinations of the models and took the best result for each scene to label the data. We then trained a classification model to tell when to use DKM + MatchFormer-light and when to skip them, as well as when to use SuperPoint + LoFTR and when to skip them. The main part (the models applied to all images) is an ensemble of MatchFormer and LoFTR QT. </p>
<h4>Tricks that will work for everyone:</h4>
<ul>
<li>LoFTR allows you to pass a mask along with the input. This mask is originally meant for padding, so if you pad the image from 840 x ? to 840x840 and pass a mask indicating the padded area to ignore, the results get even better, up to +0.01.</li>
<li>Building on the mask idea: what happens if you pass a depth or segmentation mask instead of just a padding mask? That changes the matches and points, giving you another ensemble member to add and enhancing your results by up to +0.01.</li>
<li>Reversed input: instead of passing the images as inp1, inp2, pass them as inp2 then inp1 (you can also flip them) to get better results. It is a really good way to get over the grid problem with LoFTR models, up to +0.015. It also works for SuperGlue, so you can apply SuperPoint once with two SuperGlues, +0.006. A small sketch of this reversed-input trick follows this list.</li>
</ul>
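<p>The sketch below shows the reversed-input trick in its simplest form; the <code>run_matcher</code> wrapper is a hypothetical stand-in for LoFTR/SuperGlue inference.</p>
<pre><code>import numpy as np

def match_with_reverse_tta(run_matcher, img1, img2):
    mk0_f, mk1_f = run_matcher(img1, img2)       # normal order
    mk1_r, mk0_r = run_matcher(img2, img1)       # reversed order, outputs swapped back
    mkpts0 = np.concatenate([mk0_f, mk0_r], axis=0)
    mkpts1 = np.concatenate([mk1_f, mk1_r], axis=0)
    return mkpts0, mkpts1
</code></pre>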
<p>This model scored 0.813 on public and 0.816 on private lb.</p>
<h3>Best submission</h3>
<p>Our best submission on the LB uses the same models without LoFTR QT, and the classification model is used only to determine whether to use DKM or not. The main models were SP + 2 SuperGlue, LoFTR, MatchFormer reversed and MatchFormer light.</p>
<h2>Tried but didn't work:</h2>
<ul>
<li>Anything other than MAGSAC++ was worse, including USAC_ACCURATE, DEGENSAC, VSAC, …</li>
<li>MEMORIZE: from two images, look for a third image that has a high covisibility with both and compute the F matrix via<br>
$$X_1^t F_{12} X_2 = X_1^t F_{13} X_3 X_3^t F_{32} X_2 \Rightarrow F_{12} = F_{13} X_3 X_3^t F_{32}$$</li>
<li>OANet - Disc - HardNet and (any other public model)</li>
<li>Multi step MAGSAC</li>
<li>TTA other than flipping was :( </li>
<li>Cropping the area of interest (has the major number of matches) or multiple areas</li>
</ul>
<p>I hope that my contribution to this competition was helpful. Thanks to all of you and congrats to all the medal winners!</p>
|
Image Matching Challenge 2022
|
37th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all many thanks to the competition hosts and Kaggle for what was in my opinion the most exciting competition of this year.</p>
<p>I personally started this competition with the only objective being 'to learn a bit more'….that turned out to be a lot more ;-)</p>
<p>With a lot of new techniques, learned models, some nice notebooks and a completely unexpected 37th place I couldn't be more happy.</p>
<p>I started out by following the many wonderful posts and shared papers posted by 'old-ufo'. Many thanks! They were a terrific starting point.</p>
<p>I used the training data locally to be able to do many validation runs. To determine the effect of any change I made, I looked both at the local validation score and at the effect on the LB, as the public part was 51% of the data.</p>
<p><strong>My final solution:</strong><br>
• An ensemble of 3 pretrained models.<br>
• LoFTR. A fixed resize of the longest side to 840 pixels, like in the training set, using INTER_LANCZOS4 interpolation. This boosted my score very early on to 0.807.<br>
• SuperGlue with a fixed 1024 keypoints. No modified image processing.<br>
• DKM with a fixed 256 keypoints. Image preprocessing: resize based on image orientation to sides of 512 and 768 pixels. This boosted my score over 0.820.<br>
• MAGSAC++ with a threshold of 0.20 and 60K iterations.</p>
<p>I discovered that the resizing of the images for DKM had a very large impact… unfortunately I only had 5 days left to exploit that further.</p>
<p>It turned out that my best private LB score is 0.822. In that version I increased the number of keypoints for DKM to 336 and decreased the SuperGlue amount to 832. However, both the public LB and the local validation score decreased for that version… so unfortunately it was not an obvious one to choose.</p>
<p><strong>What didn't work for me:</strong><br>
• I tried ANMS-SSC for non-maximum suppression to reduce keypoints. I spent a few evenings on it but couldn't get it to work properly. Interesting to read that the 10th place solution was able to use it. I will have to revisit that.<br>
• I tried dynamically balancing keypoint amounts: if one model finds more keypoints, increase the amount for the other models too, and vice versa. The effect seemed too small and fluctuated (sometimes better, sometimes worse) compared to a fixed amount of keypoints.<br>
• I tried combining my final ensemble with ASLFeat. It didn't give me a boost.<br>
• I tried combining my final ensemble with MatchFormer. It didn't give me a boost.<br>
• Using some of the filtered keypoints from SOLD2… again the score decreased.<br>
• Way too many configuration changes for all 3 models that seemed to have mixed and minor effects.<br>
• Training on the available training dataset. This just took way too long and only seemed to hurt performance.<br>
• 2 stages with RANSAC: first a quick round to get more filtered keypoints, then in a final stage the long run with many iterations. The score decreased. I read in some of the other solutions that they used something similar where the threshold for the first stage was increased. Very interesting.<br>
• Different versions of the RANSAC family. MAGSAC++ is just the current best.</p>
|
Image Matching Challenge 2022
|
3rd Place Solution 😎
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Overview</strong></p>
<ol>
<li>We use 4 sub-methods. SuperPoint + SuperGlue + Refinement, DISK + SuperGlue + Refinement, LoFTR, DKM.</li>
<li>Match NMS filters proximity keypoints based on matching score.</li>
<li>Adaptive augmentation matches again on transformed images. The transformation is computed from the original matching results.<br>
<img src="https://i.postimg.cc/3rZRkpCB/20220607-175754.jpg" alt=""></li>
</ol>
<p><strong>Keypoints of our method</strong></p>
<ol>
<li>Refinement for matching results of SG.</li>
<li>Ensemble sub-methods based on match_NMS.</li>
<li>Adaptive augmentation based on original matching results.</li>
</ol>
<table>
<thead>
<tr>
<th>Method</th>
<th>Private Score</th>
<th>Public Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>sp+sg+refine</td>
<td>0.77974</td>
<td>0.78240</td>
</tr>
<tr>
<td>(sp+sg+refine) + (disk+sg+refine) + loftr + dkm + match_nms</td>
<td>0.83904</td>
<td>0.84268</td>
</tr>
<tr>
<td>(sp+sg+refine) + (disk+sg+refine) + loftr + dkm + match_nms + Adaptive_aug</td>
<td>0.85866</td>
<td>0.86110</td>
</tr>
</tbody>
</table>
<p><strong>Team Information</strong><br>
Nreal is leading the world in the revolutionary AR transformation. Widely recognized for its superior display technology and product design, Nreal has established partnerships with more than 10 world-renowned carriers and brought its product to 6 countries to date. Founded in 2017, Nreal first received worldwide recognition when it debuted Nreal Light in early 2019, the first ever AR glasses for consumers. Not only did it feature a revolutionary design, it was also the first AR glasses connected to the mobile content ecosystem. <br>
Welcome to join us. <a href="https://www.nreal.ai/" target="_blank">https://www.nreal.ai/</a></p>
|
Image Matching Challenge 2022
|
4th Public LB 0.852 -> 6th Private LB 0.850
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sure! Our team will only be grateful if you refer to this post!</p>
<p>I cannot remember the original place where I found it. I guess it also comes from Loftr paper<br>
<a href="https://arxiv.org/pdf/2104.00680.pdf" target="_blank">https://arxiv.org/pdf/2104.00680.pdf</a> </p>
<blockquote>
<p>"We use ̃F A and ̃F B to denote the coarse-level features at 1/8 of the original image dimension, and ˆF A and ˆF B the fine-level features at 1/2 of the original image dimension". </p>
</blockquote>
<p>Highly appreciate! It was a great pleasure to work with you either!</p>
<p>Our team would like to deeply appreciate the Kaggle staff, Google Research and CVPR 2022 for hosting this exciting competition, as well has everyone here who compete and shared great discussions and notebooks. My congratulations to all participants!</p>
<p>This was my first joint participation in Kaggle competition. Special thanks to <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a> for hard and effective teamwork! We really enjoyed this competition!</p>
<p>Unfortunately, in the end we did not choose the right, best-scoring submission. We got a private score of 0.852, but selected the wrong version with 0.850. Sometimes it happens…</p>
<hr>
<h2>Overview</h2>
<p>Our solution is based on an ensemble of different pretrained models and a bunch of different tricks to reach the final score. We used np.concatenate to join matches from different models. I assume most Kagglers taking part in this competition used a similar approach.</p>
<p>All pretrained models that we tested are QuadTreeAttention, Loftr, SE2-Loftr, MatchFormer, DKM, SuperGlue, Patch2Pix, PDCNet, SGM, ASLFeat and AdaLam. <br>
Our final submission includes <strong>QTA</strong>, <strong>Loftr</strong>, <strong>DKM</strong>, and <strong>SuperGlue</strong>.</p>
<p>When QuadTreeAttention scored <strong>0.812/0.817</strong> (public/private) we were shocked. It was almost in the gold zone with only a single model. And now that the competition has finished, you can see that a silver-zone result was possible using only one model.</p>
<h2>Our usage of QuadTreeAttention model</h2>
<p><a href="https://www.kaggle.com/code/igorlashkov/imc-2022-baseline-quadtreeattention-0-812-0-817" target="_blank">https://www.kaggle.com/code/igorlashkov/imc-2022-baseline-quadtreeattention-0-812-0-817</a></p>
<p>Our models and their best scores independently:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Image size</th>
<th>w/o TTA Public/Private</th>
<th>with TTA Public/Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>QuadTreeAttention</td>
<td>1024</td>
<td>0.812/0.817</td>
<td>0.815/0.818</td>
</tr>
<tr>
<td>Loftr</td>
<td>1120</td>
<td>0.777/0.786</td>
<td>0.783/0.790</td>
</tr>
<tr>
<td>SuperGlue</td>
<td>1600</td>
<td>0.716/0.717</td>
<td>0.786/0.777</td>
</tr>
<tr>
<td>DKM</td>
<td>(672, 896)</td>
<td>0.735/0.748</td>
<td>0.744/0.757</td>
</tr>
<tr>
<td>Patch2Pix</td>
<td>1024</td>
<td>0.629/0.608</td>
<td>–</td>
</tr>
<tr>
<td>SGM</td>
<td>Original image size</td>
<td>0.464/0.427</td>
<td>–</td>
</tr>
<tr>
<td>PDCNet</td>
<td>Original image size</td>
<td>0.573/0.566</td>
<td>–</td>
</tr>
<tr>
<td>ASLFeat+AdaLam</td>
<td>Original image size</td>
<td>0.576/0.578</td>
<td>–</td>
</tr>
<tr>
<td>SE2-Loftr</td>
<td>1120</td>
<td>0.761/0.772</td>
<td>–</td>
</tr>
<tr>
<td>MatchFormer</td>
<td>1120</td>
<td>0.774/0.783</td>
<td>–</td>
</tr>
</tbody>
</table>
<p>Parameters that we used for F matrix calculation (cv2.findFundamentalMat):</p>
<pre><code>ransacReprojThreshold* = 0.24
confidence* = 0.999999
max iterations* = 100 000
</code></pre>
<h2>Ideas that worked out</h2>
<ul>
<li><strong>Input image size</strong> is super important. Certain models have specific requirements. For instance, LoFTR prefers images whose dimensions are multiples of 8. Another example is the QuadTreeAttention model, which accepts only images divisible by 32. In the first case, if necessary, we slightly reduced the image size to be compatible with this 'multiple of 8' rule. For the QuadTreeAttention model we used a zero-padding technique that makes it possible to enlarge the image while keeping the aspect ratio. In most cases a bigger image size leads to a better score.</li>
<li><strong>Ensemble different models</strong>. It may sound obvious, but most of the boost comes from concatenating matches produced by completely different models (zero-shot, one-stage, two-stage) with different architectures. For example, QuadTreeAttention gives a much bigger boost with SuperGlue than with LoFTR. Most probably the reason is that QuadTreeAttention has some common points with the LoFTR implementation, and both follow a Transformer approach. That said, I should admit that using all three models in the ensemble gives a better score.</li>
<li>Use of a <strong>filtering strategy</strong>, or, in other words, a kind of model ensemble weight. The idea is to filter matches using the normalized confidence scores produced by the models. For instance, you take the 300 most confident matches generated by the LoFTR model with a confidence of at least 0.3. By the way, this helps to reduce the number of matches and, eventually, speeds up F matrix calculation.</li>
<li><strong>Test time augmentation (TTA)</strong>. We tested rotation (15° and 45°) and scale ([0;2]) with Albumentations, and merged the matches produced by them with the matches extracted from images without augmentation. We saw that rotation works quite well, especially for the SuperGlue model: with TTA applied three times the score improved 0.735 -> 0.786, but in the ensemble the effect is not so noticeable. Scale helped to boost the score of the LoFTR model, but not by much: <strong>0.774 -> 0.777</strong>.</li>
<li><strong>Modifications in the DKM library</strong>. Initially, this library did not allow arbitrary image sizes and, internally, it resized every image to (512, 384). We changed this behavior and used (672, 896) as the input size for the model. It significantly boosted the public score <strong>0.672 -> 0.735 (private 0.688 -> 0.748)</strong>. </li>
<li><strong>Multi-threading</strong>. In the middle of the competition, we realized that we were running out of Kaggle time to complete a submission successfully. We experienced the 'Notebook timeout' message for notebooks exploiting multiple models, since some of them take quite a lot of time. To fix this, we implemented two important things. The first one is related to F matrix calculation. In the beginning we tested F matrix estimation itself in parallel, but it did not work; I'm sure it is because of OpenCV limitations. Therefore, we decided to run inference and F matrix calculation in parallel. The idea is pretty simple: we first calculate matches for the 1st image pair, then we take the next image pair and calculate matches for the current pair while computing the F matrix for the previous one, and the process repeats. At the last step, it is important not to forget to calculate the F matrix for the matches belonging to the last image pair (a small sketch of this pipelining follows the benchmarks below). Some multi-threading performance benchmarks are listed below:</li>
</ul>
<p>Calculation of the F matrix using multiple threads is faster by ~64% compared to running sequentially. We ran the experiments multiple times and the results are pretty stable on the Kaggle platform. Tests for 30 image pairs:</p>
<p>In parallel:</p>
<ul>
<li>Total running time: F matrix 24.9 s, inference 78.1 s</li>
<li>CPU times: user 1min 40s, sys: 7.61 s, total: 1min 47s</li>
<li>Wall time: 1min 27s</li>
</ul>
<p>Sequential run:</p>
<ul>
<li>CPU times: user 1min 35s, sys: 7.87 s, total: 1min 43s</li>
<li>Wall time: 1min 43s</li>
</ul>
<p>1 thread: 103 sec = 1 min 43 sec<br>
2 threads, in parallel: 87 sec = 1 min 27 s<br>
Improvement: -16 sec</p>
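<p>A simplified sketch of this pipelining idea is shown below; the <code>extract_matches</code> wrapper is hypothetical, and the MAGSAC parameters are the ones listed earlier in this write-up.</p>
<pre><code>import threading
import cv2

def pipelined_fmats(pairs, extract_matches):
    results = {}

    def estimate(idx, mkpts0, mkpts1):
        # CPU-bound F-matrix estimation runs while the GPU matches the next pair.
        F, _ = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC,
                                      0.24, 0.999999, 100000)
        results[idx] = F

    worker = None
    for idx, (img0, img1) in enumerate(pairs):
        mkpts0, mkpts1 = extract_matches(img0, img1)   # GPU-bound matching
        if worker is not None:
            worker.join()                              # wait for the previous F matrix
        worker = threading.Thread(target=estimate, args=(idx, mkpts0, mkpts1))
        worker.start()
    if worker is not None:
        worker.join()                                  # don't forget the last pair
    return results
</code></pre>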
<ul>
<li><strong>Release resources wisely</strong>. One more performance-related issue we found here is that resource release does not happen instantly: gc.collect() and torch.cuda.empty_cache() may take up to 0.30 sec, which is similar to how long SG runs. To save some computational resources and make the task more efficient, we performed resource release only on every 10th image pair.</li>
</ul>
<h2>What did not work</h2>
<ul>
<li><strong>Scene segmentation</strong> and masking classes, including person, sky, vehicle. We tested this approach with Detectron2 and mseg_semantic.</li>
<li><strong>RotNet to estimate a good rotation</strong> for every image.</li>
<li><strong>Run inference on the same image</strong> having different sizes.</li>
<li><strong>RANSAC USAC_ACCURATE</strong> did not work out.</li>
<li><strong>Crop image into multiple patches</strong> (e.g., 2 x 2). Essentially, at first, we propose to crop image into multiple patches in the way that they overlap each other. On the next step we find matches on the cropped areas. Finally, we merge all matches found on these patches. We tested this method using Loftr model, score decreased <strong>0.777 -> 0.745</strong>.</li>
<li><strong>Adaptive F/H strategy</strong>. The main idea is to remove some wrong matches using a homography matrix. In contrast to the fundamental matrix, a homography assumes that the points are planar. This kind of matrix could help to find matches on a flat building along the street. So you have to calculate the F matrix and the H matrix, as well as their scores (the symmetric transfer error), and compare them with a threshold. I tested this idea using only LoFTR at an early stage, and proper testing probably still has to be done.</li>
<li>Another idea is also related to the <strong>homography matrix</strong>. It has common points with the previous approach, but differs in calculation and outlier removal. While testing this idea on the available image data, I confirmed that the matches supported by the H matrix are quite stable and have a minimal number of outliers. I implemented a function that replaces all matches found by the F matrix with matches supported by the H matrix within a fixed-radius neighborhood of where the F matrix matches are located (tested with 3 px). You can think of it as an extra layer of outlier filtering prior to applying RANSAC.</li>
<li>Use of <strong>different versions of the same model</strong> (e.g., regular and lightweight weights of MatchFormer) did not show score improvement.</li>
<li><strong>Removal of matches residing outside the image</strong>. We noticed that sometimes matches are placed outside the bounds of the image. We deleted such matches, but it did not help to boost the score.</li>
<li><strong>Running inference for multiple models in parallel</strong> in order to reduce running time. Unfortunately, it almost immediately leads to GPU out-of-memory errors in the notebook.</li>
</ul>
<hr>
<p>It was an unusual competition though; I think it is the first time the train data didn't help at all, being useful only for debugging.</p>
<h2>Acknowledgement</h2>
<p>We deeply acknowledge great OSS such as PyTorch, PyTorch Lightning, PyTorch Image Models, Kornia, Albumentations, etc.</p>
<p>Special thanks go to authors of these great papers and pretrained models that we used in our final submission:</p>
<p><a href="https://github.com/Parskatt/DKM" target="_blank">DKM</a><br>
Edstedt, J., Wadenbäck, M. and Felsberg, M., 2022. Deep Kernelized Dense Geometric Matching. arXiv preprint arXiv:2202.00667.</p>
<p><a href="https://github.com/zju3dv/LoFTR" target="_blank">Loftr</a><br>
Sun, Jiaming, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. "LoFTR: Detector-free local feature matching with transformers." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8922-8931. 2021.</p>
<p><a href="https://github.com/magicleap/SuperGluePretrainedNetwork" target="_blank">SuperGlue</a><br>
Sarlin, Paul-Edouard, et al. "Superglue: Learning feature matching with graph neural networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.</p>
<p><a href="https://github.com/Tangshitao/QuadTreeAttention" target="_blank">QuadTreeAttention</a><br>
Tang, Shitao, Jiahui Zhang, Siyu Zhu, and Ping Tan. "QuadTree Attention for Vision Transformers." arXiv preprint arXiv:2201.02767 (2022).</p>
<p>Thank you! Sure. I made it public. Please, upvote the post to make it more visible for others<br>
Image size is 1056, which gives 0.817 on the private test<br>
<a href="https://www.kaggle.com/code/igorlashkov/imc-2022-baseline-quadtreeattention-0-812-0-817" target="_blank">https://www.kaggle.com/code/igorlashkov/imc-2022-baseline-quadtreeattention-0-812-0-817</a></p>
<p>Thank you! <br>
Congrats with solo gold!<br>
Yeah, that's right. Previous winning solutions is a fount of good ideas to start</p>
<p><a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> I'm sorry that I wasn't specific about this part. Actually, we did not train models; in the end, we only used pretrained weights.<br>
At the beginning of the competition I made an attempt to train LoFTR, but the score decreased.<br>
Probably you have to use a more powerful GPU setup; the authors of QTA rely heavily on multiple GPUs 👀</p>
<p>Thank you!</p>
<p>Thanks! Congratulations to you as well! Great job!</p>
|
Image Matching Challenge 2022
|
4th place solution (brief summary)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>😂快进到提桶跑路。</p>
<p>Our final submissions used "ransacReprojThreshold=0.15, confidence=0.9999, maxIters=30000" and "ransacReprojThreshold=0.25, confidence=0.99999, maxIters=40000". The performance of the former is slightly better in private LB score.</p>
<p>There are two potentially time-consuming operations:</p>
<ol>
<li>Extracting matching points</li>
<li>Calculating fundamental matrix</li>
</ol>
<p>Firstly, you can check the time spent on TTA and extracting matching points.</p>
<p>Another reason why it takes too long time may be the calculation of the fundamental matrix. </p>
<blockquote>
<p>F, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, ransacReprojThreshold=0.25, confidence=0.9999, maxIters=30000)</p>
</blockquote>
<p>More matching points usually take more time, but as you mentioned, you have set a threshold to limit the number of matching points, which is an effective way to reduce time. Besides, reducing the value of ransacReprojThreshold or increasing maxIters may bring greater computing costs, so I recommend setting <strong>ransacReprojThreshold</strong> to about 0.25 and reducing <strong>maxIters</strong> appropriately.</p>
<p>Before submitting your code, you can test how long it takes to process the three image pairs. In my experience, if it took less than 9 seconds, the probability of "Notebook Timeout" would be low.</p>
<p>Thanks for organizing this interesting competition! All members of our team are in civil engineering and had almost no relevant knowledge of image matching before the competition, so it was a challenging and fun experience for us to start from scratch. The competition hosts provided valuable learning material (problem definition, correspondence filtering and learned-matching papers), which allowed us beginners to get started quickly. It was a pleasant surprise for us to get fourth place. 😄</p>
<h1>Summary</h1>
<p>Our method is entirely based on pretrained models; no fine-tuning of these models is involved. Since we did not have enough computing resources, we focused on the ensembling of existing methods from the very beginning.</p>
<p>We refer to the following notebooks in our work: (Sincere thanks to the authors of the papers and notebooks!)</p>
<p>• LoFTR: <a href="https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725/notebook" target="_blank">https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725/notebook</a><br>
• QuadTree: <a href="https://www.kaggle.com/code/dschettler8845/quadtree-image-matching-challenge-2022" target="_blank">https://www.kaggle.com/code/dschettler8845/quadtree-image-matching-challenge-2022</a><br>
• DKM: <a href="https://www.kaggle.com/code/radac98/public-baseline-dkm-0-667" target="_blank">https://www.kaggle.com/code/radac98/public-baseline-dkm-0-667</a><br>
• SuperGlue: <a href="https://www.kaggle.com/code/losveria/superglue-baseline/notebook" target="_blank">https://www.kaggle.com/code/losveria/superglue-baseline/notebook</a></p>
<p>We combined all the matched key points obtained in different methods and used MAGSAC to calculate the fundamental matrix. The process is roughly as follows:</p>
<ol>
<li><strong>Extract matched key points from multiscale images</strong><br>
Resize image pairs to different scales and apply the matching methods.</li>
<li><strong>Rescale the key points</strong><br>
Rescale the coordinates of the key points to their positions in the original image.</li>
<li><strong>Combine all key points</strong></li>
<li><strong>Calculate the fundamental matrix</strong></li>
</ol>
<table>
<thead>
<tr>
<th></th>
<th>resize strategy</th>
</tr>
</thead>
<tbody>
<tr>
<td>loftr</td>
<td>resize the length of long side to 1000, 1200, 1400 + adjust the height-width ratio</td>
</tr>
<tr>
<td>superglue</td>
<td>resize the length of long side to 1200, 1600, 2000, 2800 + adjust the height-width ratio</td>
</tr>
<tr>
<td>dkm</td>
<td>resize image pixel to 346800 + maintain the height-width ratio</td>
</tr>
</tbody>
</table>
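<p>Putting the steps and the resize strategies above together, a minimal multiscale-matching loop could look as follows; the <code>loftr_match</code> wrapper and the exact aspect-ratio handling are simplified assumptions, not the original code.</p>
<pre><code>import cv2
import numpy as np

def multiscale_matches(img0, img1, loftr_match, long_sides=(1000, 1200, 1400)):
    all0, all1 = [], []
    for side in long_sides:
        s0 = side / max(img0.shape[:2])
        s1 = side / max(img1.shape[:2])
        r0 = cv2.resize(img0, None, fx=s0, fy=s0)
        r1 = cv2.resize(img1, None, fx=s1, fy=s1)
        mk0, mk1 = loftr_match(r0, r1)
        all0.append(mk0 / s0)            # rescale keypoints to original coordinates
        all1.append(mk1 / s1)
    return np.concatenate(all0), np.concatenate(all1)
</code></pre>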
<p>Since SuperGlue has a strict license and is not allowed for prize-winning (though it is permitted in the competition to get a better rank), we took two routes after breaking into the top five:</p>
<ol>
<li><p>ensemble of loftr, superglue and dkm<br>
best private score: 0.853 (not selected as final submission)<br>
<a href="https://www.kaggle.com/gufanmingmie/imc-2022-final-ensemble" target="_blank">https://www.kaggle.com/gufanmingmie/imc-2022-final-ensemble</a></p></li>
<li><p>ensemble of loftr, quadtree and dkm (exclude superglue)<br>
best private score: 0.843</p></li>
</ol>
<p>best private score selected as final submission: 0.852, ensemble of loftr, quadtree, superglue and dkm </p>
|
Image Matching Challenge 2022
|
7th place solution (brief summary)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!! I updated the links, thanks for letting me know :) </p>
<p>Thanks!! Do you mean on the github? I knew it was you :) </p>
<p>Thank you! We didn't rigorously do an ablation study but did some rough comparisons. <strong>(Based on Public LB)</strong> It was ~0.832 with a single crop and improved to ~0.835 with double crops. It was stuck there for a while, and warping matching got us to ~0.841 with a single crop (+ some resize trick)!</p>
<p>You're welcome! :) </p>
<h2>Acknowledgement</h2>
<p>Firstly, we'd like to thank the organizers ( <a href="https://www.kaggle.com/eduardtrulls" target="_blank">@eduardtrulls</a>, <a href="https://www.kaggle.com/oldufo" target="_blank">@oldufo</a>) and the Kaggle staff for opening this remarkable challenge and giving us an opportunity to learn, suffer (😏) and grow without any barriers. They kindly responded quickly to questions and left us helpful references and tutorials. Personally, although I'm quite a newbie here on Kaggle, I've never seen this before. </p>
<p>Secondly, thanks for sharing your notebooks( <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>, <a href="https://www.kaggle.com/radac98" target="_blank">@radac98</a>) and ideas( <a href="https://www.kaggle.com/kirderf" target="_blank">@kirderf</a>, <a href="https://www.kaggle.com/johanedstedt" target="_blank">@johanedstedt</a>) so we, beginners, could learn from it and keep moving forward. I'm looking forward to seeing me publishing notebooks or discussions like these one day:</p>
<ul>
<li><p><a href="https://www.kaggle.com/code/remekkinas/detector-free-local-feature-matching-w-transformer" target="_blank">https://www.kaggle.com/code/remekkinas/detector-free-local-feature-matching-w-transformer</a> </p></li>
<li><p><a href="https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725" target="_blank">https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725</a></p></li>
<li><p><a href="https://www.kaggle.com/code/radac98/public-baseline-dkm-0-667" target="_blank">https://www.kaggle.com/code/radac98/public-baseline-dkm-0-667</a></p></li>
<li><p><a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/324805" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/324805</a></p></li>
<li><p><a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/321177" target="_blank">https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/321177</a></p></li>
</ul>
<h2>Short Impression</h2>
<p>This is my second competition in Kaggle, but this one is the first I seriously dedicated my time and energy to.(Nights after work and weekends..😂) And, my teammates and I joined this competition with almost zero background knowledge and learned a lot from this competition, as I believe many of us here are the same. So this is a very meaningful result for me 😊</p>
<h2>Summary of our solution</h2>
<p>Actually, we were surprised that other top teams ensembled at least 3-4 models (though we only used 2), so we learned that ensembling almost always works. We will use it in the next competitions. </p>
<h4>0. Models</h4>
<ol>
<li>LoFTR(<a href="https://arxiv.org/abs/2104.00680" target="_blank">https://arxiv.org/abs/2104.00680</a>)</li>
<li>DKM(<a href="https://arxiv.org/abs/2202.00667" target="_blank">https://arxiv.org/abs/2202.00667</a>)</li>
</ol>
<h4>1. K-means and Crop</h4>
<p>We observed that the matching points tend to be concentrated on promising regions and spread out on doubtful regions such as roads. So why not crop those promising regions? We first used <strong>K-means</strong> to cluster the matching points, found the most promising cluster based on the number of points, and picked the center point of that cluster for cropping. We <strong>cropped</strong> that region (512x512) from image-1, <strong>extracted</strong> the corresponding region (512x512) from image-2, <strong>set</strong> them as a new image pair, and <strong>ran</strong> the model to get more meaningful matching points. This method boosted our score even without an ensemble.</p>
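<p>A rough sketch of this K-means cropping step (the cluster count, crop size and simple border clipping are illustrative assumptions):</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans

def promising_crop(img1, img2, mkpts0, mkpts1, k=3, size=512):
    # Cluster matches in image-1, keep the densest cluster and crop around it.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(mkpts0)
    best = np.bincount(labels).argmax()
    c1 = mkpts0[labels == best].mean(0).astype(int)
    c2 = mkpts1[labels == best].mean(0).astype(int)

    def crop(img, c):
        h, w = img.shape[:2]
        x0 = int(np.clip(c[0] - size // 2, 0, max(w - size, 0)))
        y0 = int(np.clip(c[1] - size // 2, 0, max(h - size, 0)))
        return img[y0:y0 + size, x0:x0 + size], (x0, y0)

    (crop1, off1), (crop2, off2) = crop(img1, c1), crop(img2, c2)
    return crop1, crop2, off1, off2      # offsets shift cropped keypoints back later
</code></pre>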
<p>Furthermore, we tried to crop two promising regions to get more and more matching points. It worked and boosted our score at first. However, this method takes a lot of inference time and restricted the resources we could use for experimenting with other methods, so we changed course. </p>
<h4>2. Warping (back and forth) matching</h4>
<p>We got this idea from the previous IMC2021 challenge winning team (<a href="https://github.com/PruneTruong/DenseMatching" target="_blank">https://github.com/PruneTruong/DenseMatching</a>): they first warp image-1 based on a coarse matched homography matrix and then obtain more accurate matches from the new warped image pair. We implemented this similarly: we first calculate the homography matrix using <strong>cv2.findHomography</strong> for the original image pair <strong>(1->2 and 2->1)</strong> with LoFTR, build the two new <strong>image pairs (warped_image1/image2 & image1/warped_image2)</strong>, run the model on both pairs, and concatenate the resulting matching points. </p>
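<p>A minimal sketch of one direction of this warping step (the <code>matcher</code> wrapper is hypothetical and the homography RANSAC threshold is illustrative):</p>
<pre><code>import cv2
import numpy as np

def warp_and_rematch(img1, img2, matcher, mk0, mk1):
    # Coarse homography from the first pass, warp image-1 onto image-2,
    # rematch, then map the new keypoints back to image-1 coordinates.
    H, _ = cv2.findHomography(mk0, mk1, cv2.RANSAC, 4.0)
    h2, w2 = img2.shape[:2]
    warped1 = cv2.warpPerspective(img1, H, (w2, h2))
    wk0, wk1 = matcher(warped1, img2)
    back = cv2.perspectiveTransform(
        wk0.reshape(-1, 1, 2).astype(np.float64), np.linalg.inv(H)).reshape(-1, 2)
    return np.concatenate([mk0, back]), np.concatenate([mk1, wk1])
</code></pre>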
<h4>3. Ensemble</h4>
<p>Ensembling LoFTR and DKM definitely helped to boost our score. LoFTR was more robust and strong for situations where general and global context-based understanding is required. On the other hand, DKM worked better in local and promising regions. Hence, we used LoFTR for the non-cropped image pairs to get matching points in the first place and then used both LoFTR/DKM to get matching points from the cropped image pairs. </p>
<h4>4. VSAC(🤔?)</h4>
<p>We found this issue (<a href="https://github.com/ivashmak/vsac/issues/5#issuecomment-1145685290" target="_blank">https://github.com/ivashmak/vsac/issues/5#issuecomment-1145685290</a>) in the official repository of VSAC and decided to somehow implement it in our notebook despite our lack of knowledge of how to do it without internet access. After a painful weekend, we made it by manually installing all the dependencies with dpkg install, revising the Python binding parts, etc.</p>
<p>It didn't improve our score with the default setting, so we tried to find better settings/parameters to at least improve the speed without any loss of score. This is the final setting that we used for the submission, but we believe this is not optimal and there should be better ones. </p>
<p><code>params = pvsac.Params(pvsac.EstimationMethod.Fundamental, 0.1, 0.99999, 100000, pvsac.SamplingMethod.SAMPLING_PROGRESSIVE_NAPSAC, pvsac.ScoreMethod.SCORE_METHOD_MAGSAC)</code><br>
<code>params.setParallel(True)</code><br>
<code>params.setLocalOptimization(pvsac.LocalOptimMethod.LOCAL_OPTIM_INNER_LO)</code><br>
<code>params.setPolisher(pvsac.PolishingMethod.MAGSAC)</code><br>
<code>params.setLOSampleSize(10*params.getSampleSize())</code><br>
<code>params.setLOIterations(20)</code><br>
<code>F, inliers = pvsac.estimate(params, f_mkpts0, f_mkpts1)</code></p>
<p>The public LB score seemed to get a little bit worse while gaining some speed. We were not sure at the moment, so we just decided to include this one in the final submission. It turned out to improve the speed (way faster than cv2.MAGSAC) without loss of score. However, it is not what we expected in the first place. (We expected more for the time we spent on it LOL) </p>
<h4>5. Radius NMS</h4>
<p>Since there are quite a lot of matching points that are redundant with each other, we thought an NMS-type filtering would work to remove those points. It certainly helped not only to decrease the time spent on RANSAC, but also to improve our score. We found that applying NMS at a 0.5~1.0 pixel radius works best. We implemented it based on the public GitHub repository (<a href="https://github.com/luigifreda/pyslam" target="_blank">https://github.com/luigifreda/pyslam</a>). </p>
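<p>A brute-force sketch of the radius NMS idea (a KD-tree, as in pyslam, would be faster; the radius value and confidence ordering are assumptions):</p>
<pre><code>import numpy as np

def radius_nms(mkpts0, mkpts1, conf, radius=0.75):
    # Keep a match only if no already-kept match lies within `radius` px in image-1.
    order = np.argsort(-conf)                 # most confident first
    kept = []
    for i in order:
        p = mkpts0[i]
        if all(np.linalg.norm(p - mkpts0[j]) > radius for j in kept):
            kept.append(i)
    kept = np.array(kept, dtype=int)
    return mkpts0[kept], mkpts1[kept], conf[kept]
</code></pre>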
<h4>6. Resize</h4>
<p>We tried several different image sizes for inference such as 640, 840, 1080, and 1280 for the longest side of the image. 1080 was our selection considering both memory resources and the score. We found that scaling up the image to some extent helps LoFTR find more matching points. </p>
<h2>Ideas that worked</h2>
<ul>
<li>K-Means/Crop</li>
<li>Warping (back and forth) matching</li>
<li>Ensemble</li>
<li>Some tuning on VSAC</li>
<li>Radius NMS</li>
<li>Multiple Scales Matching (worked in <strong>private score</strong> but not selected)</li>
<li>Confidence based Filtering (worked in <strong>private score</strong> but not selected)</li>
<li>Resizing</li>
</ul>
<h2>Ideas that didn't work</h2>
<ul>
<li>Multiple Crops</li>
<li>Adaptive-size Crops (based on original image size)</li>
<li>YOLOX-based Filtering (filtering cars/people/bikes, etc.)</li>
<li>Rotation and Projection based TTA</li>
</ul>
<h2>Lessons learned</h2>
<ul>
<li><strong>Ensemble</strong> almost always works great. I should exploit its benefits more for the next competitions for sure. </li>
<li>Learning from challenges is fun, great, and very helpful for growing as an engineer, but very painful at the same time.</li>
<li>Team play works better than solo play. (at least for me) </li>
<li>There are wonderful people out here I can learn from.</li>
<li>I like coffee.</li>
</ul>
|
Image Matching Challenge 2022
|
[28th] Baby Kagglers Deus Lab
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi everyone! First, I wanna thank the organizers for hosting such a great competition and all the participants for making it really competitive one. Now it's time to share solutions, here is ours. </p>
<h3>Brief Summary</h3>
<p>Ensemble of LoFTR, SuperGlue, DKM and ASLFeat with the resize calibration and appropriate number of keypoints for each model.</p>
<h3>Choosing a RANSAC</h3>
<p>Set up: <a href="https://www.kaggle.com/code/mcwema/imc-2022-kornia-loftr-score-plateau-0-726" target="_blank">baseline LoFTR</a>, 100k iterations</p>
<table>
<thead>
<tr>
<th>Approach</th>
<th>Threshold</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>MAGSAC</strong></td>
<td><strong>0.2</strong></td>
<td><strong>0.726</strong></td>
</tr>
<tr>
<td>DEGENSAC</td>
<td>0.5</td>
<td>0.713</td>
</tr>
<tr>
<td>DEGENSAC</td>
<td>0.2</td>
<td>0.684</td>
</tr>
<tr>
<td>DEGENSAC</td>
<td>1.0</td>
<td>0.705</td>
</tr>
<tr>
<td>GC-RANSAC</td>
<td>0.2</td>
<td>0.671</td>
</tr>
<tr>
<td>MAGSAC & DEGENSAC & GC-RANSAC</td>
<td>0.25, 1.0, 1.0</td>
<td>0.725</td>
</tr>
<tr>
<td>MAGSAC & DEGENSAC & GC-RANSAC</td>
<td>0.25, 3.0, 3.0</td>
<td>0.727 (but too slow)</td>
</tr>
<tr>
<td>MAGSAC & DEGENSAC & GC-RANSAC</td>
<td>0.25, 0.5, 0.5</td>
<td>0.720</td>
</tr>
</tbody>
</table>
<h3>Choosing an image size</h3>
<p>Image size is crucial for LoFTR. Here is the proof.<br>
Set up: <a href="https://www.kaggle.com/code/mcwema/imc-2022-kornia-loftr-score-plateau-0-726" target="_blank">baseline LoFTR</a></p>
<table>
<thead>
<tr>
<th>Approach</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>640, 480</td>
<td>0.533</td>
</tr>
<tr>
<td>840 longest size</td>
<td>0.726</td>
</tr>
<tr>
<td><strong>Just calibrate image size in such way that both sides were divisible by 8</strong></td>
<td><strong>0.755</strong></td>
</tr>
</tbody>
</table>
<h3>Building a ZOO</h3>
<h4>Models</h4>
<ul>
<li>LoFTR</li>
<li>SuperGlue</li>
<li>DKM</li>
<li>ASLFeat</li>
<li>SGMNet (isn't used in final submission)</li>
</ul>
<h4>Tuning</h4>
<p>We've tried a bunch of different parameters for the different models. Only one worked: switching the <code>border_rm</code> parameter in LoFTR from 4 to 1 generates more keypoints and gives a 0.006 boost.</p>
<h4>Ensembling approach</h4>
<p>1) Get the keypoints from all the models.<br>
2) Concatenate them.<br>
3) Run RANSAC algorithm.</p>
<p>We've also tried to concatenate keypoints after several RANSAC runs but it was too slow and was giving a worse score.</p>
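<p>A minimal sketch of this ensembling recipe, assuming all keypoints are already in the same (calibrated) image coordinates; the MAGSAC parameters here are illustrative, following the RANSAC table above:</p>
<pre><code>import cv2
import numpy as np

def ensemble_fundamental(keypoint_sets):
    # keypoint_sets: list of (mkpts0, mkpts1) arrays from LoFTR, SuperGlue, DKM, ASLFeat, ...
    mkpts0 = np.concatenate([k0 for k0, _ in keypoint_sets], axis=0)
    mkpts1 = np.concatenate([k1 for _, k1 in keypoint_sets], axis=0)
    if len(mkpts0) < 8:                  # not enough correspondences for a fundamental matrix
        return np.zeros((3, 3))
    F, inliers = cv2.findFundamentalMat(
        mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.2, 0.9999, 100000)
    if F is None or F.shape != (3, 3):
        return np.zeros((3, 3))
    return F
</code></pre>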
<h4>Image sizes</h4>
<p>I mentioned above that image size is crucial here, especially for LoFTR, but in terms of ensembling it becomes even more important. So here are some points:</p>
<p>1) We've chosen the <code>Just calibrate image size in such way that both sides were divisible by 8</code> approach for all the models, and it was the only way ensembling started giving a solid boost (a minimal sketch of this calibration is given below).<br>
2) The first reason is that while LoFTR is very sensitive to image size, the other models are less so, and the rule <code>the bigger is better</code> works here.<br>
3) Another reason is that for a proper ensemble all keypoints should be on the same scale, and using the same image size is the easiest way to achieve that. <br>
4) DKM resizes the image to (512, 384) in its forward function, so it was necessary to rewrite it in order to fix this.</p>
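<p>A minimal sketch of one way to implement the calibration from point 1 (the exact resize we used follows the official LoFTR repository and may differ in detail): keep roughly the original resolution, round both sides to multiples of 8, and remember the per-axis factors for scaling keypoints back.</p>
<pre><code>import cv2

def calibrated_resize(img):
    h, w = img.shape[:2]
    new_w = max(8, (w // 8) * 8)      # both sides divisible by 8 (LoFTR requirement)
    new_h = max(8, (h // 8) * 8)
    resized = cv2.resize(img, (new_w, new_h))
    # Multiply matched keypoints by these factors to map them back to the original image.
    return resized, (w / new_w, h / new_h)
</code></pre>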
<h4>Keypoints number</h4>
<p>The next important thing to do in order to get the most out of the ensemble is to select the number of keypoints, since the default values vary widely among the models. <strong>That's the reason why ensembles did not work for some people.</strong> </p>
<p>Not all the models have a strict parameter for the number of final keypoints, so here we show how to set them up:<br>
1) SuperGlue | <code>max_keypoints</code> in config: 1024 -> 2048<br>
2) DKM | <code>num</code> in sample function: 2000 -> 300<br>
3) ASLFeat | <code>kpt_n</code> in config: 8000 -> 2048</p>
<p>Our final keypoints' shapes for LoFTR, SuperGlue, DKM and ASLFeat accordingly look this way:</p>
<pre><code>(871, 2) (247, 2) (300, 2) (407, 2)
(529, 2) (210, 2) (300, 2) (236, 2)
(210, 2) (42, 2) (300, 2) (193, 2)
</code></pre>
<p>We also tried to create an adaptive number of keypoints for these models. For example, based on the number of LoFTR's keypoints, but it didn't work. </p>
<p>Decreasing the number of keypoints not only allows to create a stable ensemble but also gives a solid speed up.</p>
<h4>Overall</h4>
<p>DR - default resize<br>
CR - calibrated resize</p>
<table>
<thead>
<tr>
<th>Approach</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>LoFTR (DR)</td>
<td>0.732</td>
<td>0.742</td>
</tr>
<tr>
<td>LoFTR (DR) + SuperGlue (DR, max_keypoints=1024)</td>
<td>0.701</td>
<td>0.709</td>
</tr>
<tr>
<td>LoFTR (CR)</td>
<td>0.755</td>
<td>0.764</td>
</tr>
<tr>
<td>LoFTR (CR) + SuperGlue (CR, max_keypoints=1024)</td>
<td>0.795</td>
<td>0.797</td>
</tr>
<tr>
<td>LoFTR (CR + SuperGlue (CR, max_keypoints=1024) + DKM (DR, num=2000)</td>
<td>0.743</td>
<td>0.751</td>
</tr>
<tr>
<td>LoFTR (CR) + SuperGlue (CR, max_keypoints=1024) + DKM (CR, num=2000)</td>
<td>0.786</td>
<td>0.799</td>
</tr>
<tr>
<td>LoFTR (CR) + SuperGlue (CR, max_keypoints=1024) + DKM (CR, num=200)</td>
<td>0.816</td>
<td>0.820</td>
</tr>
<tr>
<td>LoFTR (CR) + SuperGlue (CR, max_keypoints=1024) + DKM (CR, num=200) + ASLFeat (CR, kpt_n=2048)</td>
<td>0.819</td>
<td>0.821</td>
</tr>
<tr>
<td>LoFTR (CR) + SuperGlue (CR, max_keypoints=1024) + DKM (CR, num=200) + ASLFeat (CR, kpt_n=2048) + SGMNet</td>
<td>0.818</td>
<td>0.818</td>
</tr>
<tr>
<td><strong>LoFTR (CR) + SuperGlue (CR, max_keypoints=2048) + DKM (CR, num=300) + ASLFeat (CR, kpt_n=2048)</strong></td>
<td><strong>0.820</strong></td>
<td><strong>0.827</strong></td>
</tr>
</tbody>
</table>
<h3>Handling exceptions</h3>
<p>We haven't had any submission exceptions after adding this to our code:</p>
<pre><code>if FM is None:
F_dict[sample_id] = np.zeros((3, 3))
continue
elif FM.shape != (3, 3):
F_dict[sample_id] = np.zeros((3, 3))
continue
</code></pre>
<h3>Another things that didn't work</h3>
<ul>
<li>TTA (horizontal flip)</li>
<li>90 degrees rotate</li>
<li>DISK</li>
<li>Inference in fp16 mode</li>
<li>Tuning confidence thresholds</li>
</ul>
<h3>Code and final keypoints</h3>
<p>The notebook with an entire solution and keypoints examples can be found <a href="https://www.kaggle.com/code/vadimtimakin/imc-solution" target="_blank">here</a>.</p>
<p>Hi, thanks!</p>
<p>It's pointed out in the official repository; I just took the code from there. It was hard for me to admit that the public notebook I had used as a baseline for a couple of weeks had the wrong resize approach. I think the reason many people didn't even think about it was the contrast with the first version of LoFTR published here, which gave an even worse score due to an even worse resize approach.</p>
<p>So, yes, just copying the LoFTR code from the official repository instead of using the public notebook gives a 0.755 score instead of 0.732. I think that's a good lesson :)</p>
<p>Thank you! We decided to give this model a try after viewing your public notebook, it was very useful for us.</p>
<p>Thanks, Vadim!</p>
<p><img src="https://i.ibb.co/hg1FyBF/depositphotos-28734339-stock-illustration-presenting-emoticon.webp" alt="sticker"></p>
|
Image Matching Challenge 2022
|
[5th] Refined SPSG + LoFTR (brief summary)
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Models:</h2>
<p>Pretrained SPSG and LoFTR models, COTR-like functional matching model trained by ourself.</p>
<h2>Brief introduction</h2>
<ol>
<li><p>Apply SPSG on original images and crop the patches which contain all match pairs.</p></li>
<li><p>Apply SPSG(and LoFTR) on cropped patches again and find matches.</p></li>
<li><p>Use COTR-like model to refine the matches provided by SPSG.<br>
<em>ps: We propose a new functional matching model here, which runs dozens to hundreds of times faster than COTR and applied it here. </em></p></li>
<li><p>Ensemble all the matches (first and second round SPSG matches refined by COTR-like method, LoFTR matches) and find Fundamental Matrix by MAGSAC</p></li>
</ol>
<h2>Some tricks and findings</h2>
<ul>
<li><p><strong>Crop images (two-stage matching)</strong></p>
<p>We find that taking the original images as input leads to sub-optimal results. Cropping patches and taking the overlapping areas as input gives better performance. The overlapping areas are estimated with the guidance of a robust matching method (a minimal sketch of this cropping step is given after this list).</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>parameters</th>
<th>score[private]</th>
</tr>
</thead>
<tbody>
<tr>
<td>two-stage SPSG</td>
<td>spsg: {size:1600px for long side, max-pts:2048, sp thresh:0.0001} <br>magsac:{thresh:0.25,conf:0.99999,max-iter:100000}</td>
<td>0.773</td>
</tr>
</tbody>
</table></li>
<li><p><strong>Ensemble</strong></p>
<p>Ensemble of LoFTR and SPSG can enhance performance. </p>
<p>We find that SPSG is more friendly to scenes with large perspective changes than LoFTR, due to differences in transformer attention types and search space (sparse vs. semi-dense), while semi-dense methods provide more accurate matches.</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>parameters</th>
<th>score[private]</th>
</tr>
</thead>
<tbody>
<tr>
<td>two-stage SPSG+<strong>LoFTR</strong></td>
<td>spsg: <br>{size:1600px for long side, max-pts:2048, sp thresh:0.0001} <br>magsac:<br>{thresh:0.25,conf:0.99999,max-iter:100000} <br>loftr:<br>{size:1200, retain the default settings for other hyperparameters}</td>
<td>0.836</td>
</tr>
</tbody>
</table></li>
<li><p><strong>Refine SPSG matches by functional query method</strong></p>
<p>Our local experiments with vanilla SPSG, 2-stage SPSG and refined SPSG show that refining the locations of SP points leads to better performance, since sparse matching methods like SP often suffer from inaccurate keypoint positions. </p>
<table>
<thead>
<tr>
<th>Method</th>
<th>AUC@5</th>
<th>AUC@10</th>
<th>AUC@20</th>
<th>Prec</th>
</tr>
</thead>
<tbody>
<tr>
<td>SPSG paper</td>
<td>39.02</td>
<td>59.51</td>
<td>75.72</td>
<td>98.72</td>
</tr>
<tr>
<td>SPSG reproduce</td>
<td>39.74</td>
<td>59.89</td>
<td>75.90</td>
<td>98.74</td>
</tr>
<tr>
<td>SPSG + refine method</td>
<td>41.46</td>
<td>60.97</td>
<td>76.52</td>
<td>98.77</td>
</tr>
<tr>
<td>SPSG(2 stage) + refine method</td>
<td>42.23</td>
<td>61.90</td>
<td>77.49</td>
<td>98.92</td>
</tr>
</tbody>
</table>
<p>Results on competition dataset gives similar trends.</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>parameters</th>
<th>score[private]</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Refined</strong> two-stage SPSG + LoFTR</td>
<td>same with previous</td>
<td>0.846</td>
</tr>
</tbody>
</table></li>
<li><p><strong>Query expansion by functional query method</strong></p>
<p>We also set virtual queries around the original matches provided by SPSG. Experiments show that this delivers better results. </p>
<table>
<thead>
<tr>
<th>Method</th>
<th>parameters</th>
<th>score[private]</th>
</tr>
</thead>
<tbody>
<tr>
<td>Refined two-stage <br>SPSG(<strong>query expansion</strong>) + LoFTR</td>
<td>same as previous</td>
<td>0.850</td>
</tr>
</tbody>
</table></li>
<li><p><strong>Mask doesn't work here</strong></p>
<p>We find that masking is of little help on this year's dataset (perhaps part of the Google urban dataset). Unlike the Phototourism dataset, SP detects far fewer meaningless keypoints here (e.g., keypoints caused by artifacts in the sky). </p></li>
</ul>
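<p>As referenced in the cropping bullet above, here is a rough sketch of the two-stage idea (not our production code): estimate the covisible region from the first-pass matches, crop both images around it, match again, and shift the second-pass keypoints back to full-image coordinates. <code>match_fn</code> is a placeholder for SPSG/LoFTR, and the padding and minimum-size values are illustrative.</p>
<pre><code>import numpy as np

def crop_around_matches(img, kpts, pad=20, min_size=200):
    h, w = img.shape[:2]
    x0, y0 = np.floor(kpts.min(axis=0)).astype(int) - pad
    x1, y1 = np.ceil(kpts.max(axis=0)).astype(int) + pad
    x0, y0 = max(0, x0), max(0, y0)
    x1 = min(w, max(x1, x0 + min_size))
    y1 = min(h, max(y1, y0 + min_size))
    return img[y0:y1, x0:x1], np.array([x0, y0], dtype=np.float32)

def two_stage_match(img0, img1, match_fn):
    k0, k1 = match_fn(img0, img1)             # first pass on the full images
    crop0, off0 = crop_around_matches(img0, k0)
    crop1, off1 = crop_around_matches(img1, k1)
    c0, c1 = match_fn(crop0, crop1)           # second pass on the covisible crops
    return (np.concatenate([k0, c0 + off0]),  # shift crop keypoints back to full-image coords
            np.concatenate([k1, c1 + off1]))
</code></pre>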
<h2>Summary</h2>
<p>We find that RSA is key to our performance, namely being Robust to scenes with large perspective changes, having Sufficient keypoints, and having Accurate keypoint locations. All the aforementioned tricks are based on these principles. </p>
<h2>Acknowledgement</h2>
<p>This is a brief summary of our method. Thanks to my master instructor and colleagues in the company who help me a lot with related paper and this competition. Thank Dmytro, Eduard and other organizers for holding this wonderful competition. I appreciate having this chance to play with many intelligent and talented competitors:)</p>
|
Image Matching Challenge 2022
|
[77th Place] Solo, LOFTR - SE - Matchformer / Thoughts on F calculation
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats everyone for nice results. I learned a lot trough this competition. </p>
<p>I know it's a bit of a joke to share my 77th place solution when people achieved almost 0.1 higher than I did, but I wanted to share some things I tried that other people also did, plus one thing that I thought would work but did not. So it would be nice if I could share that with you. </p>
<h1>My final Solution</h1>
<p>Image_Size 840, scaled keypoints in the end</p>
<table>
<thead>
<tr>
<th>Models</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Loftr, Se-LoFTR, Matchformer</td>
<td>0.781</td>
</tr>
<tr>
<td>Loftr, Se-LoFTR</td>
<td>0.759</td>
</tr>
<tr>
<td>Loftr, Matcformer</td>
<td>0.776</td>
</tr>
</tbody>
</table>
<h1>Scaling with different image sizes</h1>
<p>I tried running different image sizes for each model and rescaling back, but this never improved the ensemble result. However, when I ran Matchformer separately at 640, it was better than at 840. So I think there is a way to make it work, but I could not really find it.</p>
<p>Also, I applied some blurring to the images; for basic LoFTR, medium blurring made 0.722 - 0.725.</p>
<h1>Why I created this discussion</h1>
<p>As already said, people shared better results, so sharing this makes little sense. But one thing I tried that did not work was combining different methods for finding the fundamental matrix, including <code>pydegensac</code> alongside the cv2 methods, and averaging the fundamental matrices. For example,</p>
<pre><code>F1, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.2, 0.9999, 100000)
F2, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_ACCURATE, 0.1845, 0.9999, 100000)
F3, inliers = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.FM_LMEDS, 0.1845, 0.9999, 100000)
F4, inliers = pydegensac.findFundamentalMatrix(mkpts0, mkpts1, 0.5, 0.9999, enable_degeneracy_check= True)
</code></pre>
<p>And then write something like </p>
<pre><code>F = (F1 + F2 + F3 + F4) / 4
</code></pre>
<p>On single LoFTR this decreased accuracy from 0.725 to 0.696, and it's quite understandable because mathematically this makes no sense. But I wanted to follow up with an approach that I could not finish, so I am just interested in your opinion.</p>
<p>I wanted to decompose these F matrices into rotations/translations using static intrinsics, then average those rotations/translations and build a single fundamental matrix from them, which mathematically would be more correct. I could not make this work because of a lack of time, but if anyone has done something similar or has thoughts about it, it would be nice if you could share them. </p>
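<p>For reference, a rough, untested sketch of what that decomposition-and-averaging could look like. It assumes a single known intrinsic matrix <code>K</code> shared by both cameras (purely an assumption here, since the competition gives no intrinsics) and uses the matched points only for the cheirality check inside <code>cv2.recoverPose</code>:</p>
<pre><code>import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def average_fundamental(F_list, mkpts0, mkpts1, K):
    Rs, ts = [], []
    for F in F_list:
        E = K.T @ F @ K                                      # F -> E under shared intrinsics
        _, R, t, _ = cv2.recoverPose(E, mkpts0, mkpts1, K)   # cheirality check resolves the sign
        Rs.append(R)
        ts.append(t.ravel() / np.linalg.norm(t))
    R_mean = Rotation.from_matrix(np.stack(Rs)).mean().as_matrix()
    t_mean = np.mean(ts, axis=0)
    t_mean /= np.linalg.norm(t_mean)
    E_mean = skew(t_mean) @ R_mean                           # E = [t]x R
    K_inv = np.linalg.inv(K)
    return K_inv.T @ E_mean @ K_inv                          # back to a fundamental matrix
</code></pre>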
<p>Yes, I also wanted to find good thresholds for each, but execution took some time, so it was harder. Maybe I should have created my own script for some small custom validation data. Thanks for hosting a nice competition. I had a lot of fun.</p>
<p>Thanks, yeah, I felt I missed QuadTree badly. I had it in mind to implement it at the end but didn't have enough time, so I think I missed at least silver, maybe, if I'd had 1 more day. :( But it's fine, the learning experience is what matters at the end of the day. Good luck to you in the future. </p>
<p>Thanks for the reply.</p>
<p>Yes, as I said, I know it mathematically made no sense. That's why I wanted to try the rotation/translation route. But your second approach should also work. Thanks for sharing your solution, I really appreciate it. </p>
<p>Actually, I was surprised to see many people use SuperGlue, as its license reads as if you cannot use it, so I decided to avoid it because of that. But when I saw many people using it, I thought it's fine to use on Kaggle :)) </p>
<p>Thanks for sharing. Yes, I knew my approach was mathematically wrong. That's why I proposed the rotation decomposition and then reconstructing from there, because mathematically that would also give a valid fundamental matrix. But your solution is great and makes sense, to be honest. Thanks for sharing. I still doubt that even with the right constraints this would have improved the accuracy, but I might give it a try. </p>
|
Image Matching Challenge 2022
|
[9th place] Detailed report
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2022
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The most surprising thing for me when I moved stuff to different thread is that <code>scipy.spatial.cKDTree</code> works slower when I moved it to a different thread than when I kept it in main thread. My initial logic was "it will distribute to other cores anyways, because the back-end is written in C", but for some reason it became 10x slower. I think I don't understand GIL good enough… would appreciate if someone can elighten me!</p>
<p>That's right, I do crop out regions of the images (the default behavior of <code>cv2.warpAffine</code>); that's why I chose smaller rotation angles, like 10 or 15 degrees.</p>
<p>I have faced similar issue with DKM when I simply concatenate it with other methods. That's why I made some witchcraft selection strategy, so that DKM does not overwhelm other methods.</p>
<p>Sorry for not being clear. What I meant is that DKM requires both <code>image0</code> and <code>image1</code> to be of the same size! Even if you have a vertical and horizontal image, you should resize them to a similar size. In the code of DKM, it resizes all images down to 512x384. So, when you feed images of different sizes in the public notebook to DKM, what happens under the hood is that it re-scales your images down to 512x384.</p>
<p>Me learning about the stuffs that I have missed:<br>
<img src="https://i.imgur.com/ZgpgtGd.png" alt=""></p>
<p>Seems like I also missed the thing where LoFTR inputs should be divisible by 8…</p>
<p>Seems like the 4th place team used multiple resolutions instead of TTAs (rotations, flipping, CLAHE, etc.) and it worked well for them. In my case, I performed inference only on fixed size. Congrats with the medal btw! </p>
<p>First of all, I'd like to thank Kaggle staff and the organizers for such a fun and interesting competition, especially <a href="https://www.kaggle.com/oldufo" target="_blank">@oldufo</a> and <a href="https://www.kaggle.com/eduardtrulls" target="_blank">@eduardtrulls</a> for their incredible responsiveness and for their great sample notebooks that shows us how to submit with LoFTR, DISK, and DKM. They introduced us to the new field of feature matching in the most gentle way possible and made our lives sooooo much easier!</p>
<p>Also, huge congratulations to <a href="https://www.kaggle.com/forcewithme" target="_blank">@forcewithme</a>'s team with the 1st place! What you guys achieved, especially that jump from <code>0.836</code> to <code>0.863</code> just few days before the end, was amazing and inspired me to explore more creative solutions (although I failed to find anything haha). Can't wait to read the write-ups of top teams.</p>
<p>I joined the competition relatively late, only 19 days before the end and spent only few hours per day on it, so I did not expect a gold medal at all. I built my solution on top of the following notebooks: <a href="https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725" target="_blank">IMC 2022-kornia : Score 0.725</a> by <a href="https://www.kaggle.com/cbeaud" target="_blank">@cbeaud</a>, <a href="https://www.kaggle.com/code/losveria/superglue-baseline" target="_blank">SuperGlue baseline</a> by <a href="https://www.kaggle.com/losveria" target="_blank">@losveria</a>, and <a href="https://www.kaggle.com/code/radac98/public-baseline-dkm-0-667" target="_blank">Public Baseline DKM - 0.667</a> by <a href="https://www.kaggle.com/radac98" target="_blank">@radac98</a>. In this write-up, I'll try to describe in detail all tricks that I've used.</p>
<blockquote>
<p>Fun fact: I had a <strong>0.851 private</strong> run, that would put me on <strong>5th</strong> place, but for many reasons (I'll explain below) I did not even thought about choosing it for final submission.</p>
</blockquote>
<hr>
<h1>My solution</h1>
<p>In this competition, post-processing is king! As <a href="https://www.kaggle.com/old-ufo" target="_blank">@old-ufo</a> mentioned in <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/328976" target="_blank">this thread</a>, the main goal of the organizers was to discover better post-processing techniques, since prior works in this area only focused on better modelling.</p>
<h3>Few words about MAGSAC++</h3>
<p>I did not experimented with MAGSAC parameters at all. I kept my MAGSAC parameters as in this <a href="https://www.kaggle.com/code/cbeaud/imc-2022-kornia-score-0-725" target="_blank">public notebook</a>:</p>
<pre><code>cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 0.1845, 0.999999, 220000)
</code></pre>
<p>Only in last submissions, I changed number of iterations to <code>250000</code> for more stable results, and just because I can afford that. I'm reporting my scores below in "public LB / private LB" format.</p>
<h3>Simple ensemble, TTA, and scaling: 0.824/0.833 baseline</h3>
<p>A common bug in all public notebooks is that they don't re-scale the resulting matched keypoints correctly back to the original image size (a minimal sketch of this fix follows the list below). To ensemble multiple models and across TTAs, I just concatenated their keypoints together and fed everything to USAC_MAGSAC. After a few tries, here is what I added to my baseline ensemble:</p>
<ul>
<li><strong>Kornia LoFTR.</strong> Input image is resized so that the longest side is 1200 pixels (it's the validation setting described in <a href="https://arxiv.org/abs/2104.00680" target="_blank">the LoFTR paper</a>). TTAs: <code>[original, flip_lr]</code>.</li>
<li><strong>SuperPoint + SuperGlue.</strong> Input image is resized so that the longest side is 1600 pixels (it's the validation setting described in <a href="https://arxiv.org/abs/1911.11763" target="_blank">the SuperGlue paper</a>). TTAs: <code>[original, flip_lr]</code>. To speed-up inference a bit, I also modified the code from <a href="https://github.com/magicleap/SuperGluePretrainedNetwork" target="_blank">SuperGlue repository</a> so that I can perform batched inference for SuperPoint and per-pair inference (with different keypoints number) for SuperGlue.</li>
<li><strong>DKM</strong>, proposed by <a href="https://www.kaggle.com/johanedstedt" target="_blank">@johanedstedt</a> in <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/321177" target="_blank">this thread</a>. It seems to be a good addition to the ensemble, especially in harder samples when LoFTR and SuperPoint fails to detect many points. Unfortunately, this one can operate only on fixed image size. I found <code>640x480</code> resolution to be reasonably okay-ish. TTAs: <code>[original, flip_lr]</code>. I take only 100 points from each TTA of it though, which means DKM will have minimal effect when LoFTR and SuperGlue can detect a lot of matches, and it will take over when the other two fails to find matches.</li>
</ul>
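<p>The re-scaling fix itself is tiny; a minimal sketch, assuming keypoints in (x, y) pixel coordinates of the resized image:</p>
<pre><code>import numpy as np

def rescale_kpts(kpts, orig_hw, resized_hw):
    # kpts: (N, 2) array of (x, y) detected on the resized image.
    oh, ow = orig_hw
    rh, rw = resized_hw
    return kpts * np.array([ow / rw, oh / rh], dtype=np.float32)
</code></pre>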
<p>Such a simple baseline (with no hyperparameter search) got me <strong>0.824/0.833</strong> (public LB/private LB). Getting close to the gold range just 1 week after I joined (and 12 days before the end of this competition), spending only a few hours per day, was quite exciting. That's when I thought "hmm, maybe I will have a chance here".</p>
<h3>DKM is weird. Pushing to 0.836/0.838</h3>
<p>DKM is a bit more tricky to use. By default, it samples N pairs from ALL matched keypoint pairs (which is WxH pairs, where W, H is the size of the input image, because it is a dense matcher) with sampling probability equal to <code>pair_confidence / sum_of_all_pair_confidences</code>. I modified it to use absolute confidence values, and for each TTA I randomly sample 200 pairs among all pairs with absolute confidence larger than 0.8 (also an arbitrarily chosen number). So, in the end, I'm sampling 600 pairs from DKM. I also changed the input resolution to 800x600. After such witchcraft manipulations, my score increased to <strong>0.836/0.838</strong> (public LB/private LB).</p>
<p>I have tried other sampling strategies: by relative confidence, just random on top of a threshold, with cycle-consistency check, etc. Their results in both local CV and public LB are not that good though.</p>
<h3>Pushing the limits with simple engineering: 0.842/0.845</h3>
<p>It became clear that the bottleneck in my pipeline was MAGSAC (i.e. cv2.findFundamentalMat). I could not add more TTAs because I kept hitting time limits. No problem, <strong>let's move it to a parallel thread!</strong> (also, pre-fetch test samples)</p>
<p>Running MAGSAC in a different thread, in parallel with keypoints detection with LoFTR/SuperGlue/DKM allowed me to add some more TTAs: <code>[original, flip_lr]</code> for LoFTR, <code>[original, flip_lr, rotate(15), rotate(-15)]</code> for SuperGlue, and <code>[original, flip_lr, rotate(15)]</code> for DKM.</p>
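<p>A hedged sketch of this threading trick (the sample iterator and matching pipeline below are placeholders, not my actual functions). OpenCV's Python bindings generally release the GIL during <code>findFundamentalMat</code>, so a plain <code>ThreadPoolExecutor</code> is enough to overlap MAGSAC with GPU inference:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
import cv2
import numpy as np

def run_magsac(sample_id, mkpts0, mkpts1):
    if len(mkpts0) < 8:
        return sample_id, np.zeros((3, 3))
    F, _ = cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC,
                                  0.1845, 0.999999, 250000)
    return sample_id, F if F is not None and F.shape == (3, 3) else np.zeros((3, 3))

F_dict, futures = {}, []
with ThreadPoolExecutor(max_workers=1) as pool:
    for sample_id, image_0, image_1 in iterate_test_samples():   # placeholder generator
        mkpts0, mkpts1 = extract_and_match(image_0, image_1)     # placeholder GPU pipeline
        futures.append(pool.submit(run_magsac, sample_id, mkpts0, mkpts1))
    for fut in futures:
        sid, F = fut.result()
        F_dict[sid] = F
</code></pre>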
<p>The pipeline now produced far too many matched keypoints. To lower the number of matched keypoints fed to MAGSAC, I set a confidence threshold of <code>0.3</code> for LoFTR matches and <code>0.4</code> for SuperGlue matches. I also increased the number of MAGSAC iterations to <code>250000</code>. Bam, 7 days before the competition ended: an instant increase to <strong>0.842/0.845</strong> (public LB/private LB).</p>
<h3>The right way to do TTA with SuperGlue: 0.848/0.847</h3>
<p>It was 3 days before the competition ended. I was pretty nervous about my position. So far, I had performed TTA in a naive way. Schematically, it looked as follows:</p>
<pre><code>tta1_img0 --> SuperPoint --> tta1_kpts0 --> | Super | --> tta1_matches0
tta1_img1 --> SuperPoint --> tta1_kpts1 --> | Glue | --> tta1_matches1
...
ttaN_img0 --> SuperPoint --> ttaN_kpts0 --> | Super | --> ttaN_matches0
ttaN_img1 --> SuperPoint --> ttaN_kpts1 --> | Glue | --> ttaN_matches1
</code></pre>
<p>This is a total waste of resources. Why can't we mix and match, e.g. pairing <code>tta1</code> with <code>ttaN</code>? This way, we only have to execute SuperPoint (the slower network) N times, but we can run SuperGlue up to N^2 times and get even more matched keypoints.</p>
<pre><code>1. Extract keypoints and descriptors: ttaK_image{0,1} --> SuperPoint --> ttaK_kpts{0,1}
2. Apply SuperGlue on various pairs of different TTAs
3. Convert the coordinates back (after SuperGlue, not before!!!)
</code></pre>
<p>My final choice of TTA pairs to apply SuperGlue on was chosen randomly: <code>[(original, original), (original, rotation(10)), (rotation(10), original), (original, rotation(-10)), (rotation(-10), original), (rotation(-10), rotation(10)), (flip_lr, flip_lr)]</code> (note that we cannot cross-match flipped and non-flipped images).</p>
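<p>A minimal sketch of step 3 above ("convert the coordinates back after SuperGlue"), assuming rotation TTAs built with <code>cv2.warpAffine</code>; the angle is illustrative:</p>
<pre><code>import cv2
import numpy as np

def make_rotation_tta(img, angle_deg=10):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)   # 2x3 affine matrix
    return cv2.warpAffine(img, M, (w, h)), M

def kpts_to_original(kpts, M):
    # Map (N, 2) keypoints matched on the warped image back to source-image coordinates.
    M_inv = cv2.invertAffineTransform(M)
    return cv2.transform(kpts.reshape(-1, 1, 2).astype(np.float64), M_inv).reshape(-1, 2)
</code></pre>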
<p>This simple idea boosted my score to <strong>0.848/0.847</strong> (public LB/private LB) and I used it for final submission (code: <a href="https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration?scriptVersionId=97150029)" target="_blank">https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration?scriptVersionId=97150029)</a>.</p>
<h3>Multi-stage inference. The 0.851 submission that I ignored</h3>
<p>I wanted to apply LoFTR to larger image sizes (i.e. with 1600 pixels on the longer side), but Kaggle doesn't have enough GPU memory for that and it became too slow. So I tried a multi-stage approach:</p>
<pre><code>Stage 1: inference on full image
- LoFTR with no TTA
- SuperGlue with enhancements as in 0.842/0.845 section
Preparation for stage 2:
- Run a simple MAGSAC with only 1000 iterations to get inliers
- Sample from each image a rectangle that contains all inliers
(with min size of 200 and padding 20)
Stage 2: inference on Region-of-Interest
- LoFTR with no TTA, scaled ROI to 2x (with limit on long side of 1200px)
- Run DKM with above settings on 640x640 size
</code></pre>
<p>Here is the full inference code: <a href="https://www.kaggle.com/code/chankhavu/two-stage-loftr-superglue-dkm" target="_blank">https://www.kaggle.com/code/chankhavu/two-stage-loftr-superglue-dkm</a></p>
<p>I thought this solution was too unstable because it relies too heavily on a correct ROI, tends to ignore parts of the co-visibility regions, and showed only a 0.002 improvement on the public LB (over the 0.842/0.845 submission), so I decided not to try my luck with it in the final submission. It turns out that even without the improved TTAs of the "0.848/0.847" submission, it outperformed that one on the private LB. If I'd had 1 or 2 more days, I would definitely have tried to combine this idea with the improved TTAs for SuperGlue.</p>
<h3>What did not work?</h3>
<ul>
<li>OpenGlue. I wanted to add SIFTAffnetHardnet to my pipeline, but it was too slow.</li>
<li>DISK. I don't have a good keypoint matcher for it, and the naive matching algorithm (provided in the repo) is not good enough.</li>
<li>Semantic segmentation to filter out keypoints. I tried to add Mask2Former to filter out "people", "cars", "sky", etc. but quickly found out that it was not worth it. The score did not increase at all.</li>
</ul>
<h5>LoFTR keypoints correction with SuperPoint</h5>
<p>On the last day, I also tried to correct LoFTR points with SuperPoint keypoints. The idea is that LoFTR is actually a region matcher (with each "point" corresponding to a whole region of 5x5 pixels or more), and is not really suitable for tasks that require pixel-perfect keypoints. So I used this idea from the IMC2021 lecture (link: <a href="https://www.youtube.com/watch?v=9cVV9m_b5Ys&ab_channel=KwangMooYi" target="_blank">https://www.youtube.com/watch?v=9cVV9m_b5Ys&ab_channel=KwangMooYi</a>):</p>
<p><img src="https://i.imgur.com/CHfrPK4.png" alt=""></p>
<p>For each LoFTR keypoint, if there's a SuperPoint keypoint nearby (not more than 3 pixels) and it is closer to it than another LoFTR point, I'll "move" that LoFTR keypoint to the keypoint suggested by SuperPoint. The pseudocode is basically:</p>
<pre><code>from scipy import spatial

# Nearest-neighbour distance between LoFTR keypoints themselves
# (k=2 because the closest neighbour of a point is the point itself).
loftr_kpts_tree = spatial.cKDTree(loftr_kpts)
sp_kpts_tree = spatial.cKDTree(sp_kpts)
loftr2loftr_dist, loftr2loftr_idx = loftr_kpts_tree.query(loftr_kpts, k=2)
loftr2loftr_dist, loftr2loftr_idx = loftr2loftr_dist[:,-1], loftr2loftr_idx[:,-1]
# Distance from each LoFTR keypoint to its closest SuperPoint keypoint.
loftr2sp_dist, loftr2sp_idx = sp_kpts_tree.query(loftr_kpts)
for i in range(len(loftr2sp_dist)):
    # Snap only if the SuperPoint keypoint is clearly the closest candidate.
    if loftr2sp_dist[i] < loftr2loftr_dist[i] / 2 and loftr2sp_dist[i] < 3:
        loftr_kpts[i] = sp_kpts[loftr2sp_idx[i]]
</code></pre>
<p>This idea miserably decreased my score by 0.01 points!! Code: <a href="https://www.kaggle.com/code/chankhavu/chasing-pixel-perfection?scriptVersionId=97293363" target="_blank">https://www.kaggle.com/code/chankhavu/chasing-pixel-perfection?scriptVersionId=97293363</a></p>
<h5>More "creative" TTA for SuperGlue</h5>
<p>I think I misunderstood last year's winners (in the paper, the way I understood it is that they claimed to concatenate keypoints from different scales and TTAs before feeding to SuperGlue for matching). Inspired by that, I tried more sophisticated mix-and-match schemes, for example mixing 2 TTA groups:</p>
<pre><code>For two TTA groups [ttaA, ttaB] and [ttaX, ttaY):
1. Convert SuperPoint keypoints (i.e. ttaA_kpts0) back to original image coordinates
2. Concatenate kpts0 = [ttaA_kpts0, ttaB_kpts0], same with kpts1
3. Run SuperGlue on (kpts0, kpts1)
</code></pre>
<p>but this failed miserably (both my CV and score dropped 0.3 points with this one). I realized that you cannot feed affine-transformed keypoints to SuperGlue, because it will de-couple the keypoint position encodings from the orientation information baked into their descriptors, and produce poor results. Code: <a href="https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration?scriptVersionId=97151572" target="_blank">https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration?scriptVersionId=97151572</a></p>
<p>Okay, how come I totally missed the QuadTreeAttention thing? It was not even on my radar but everyone seems to be using it…</p>
<p>Thanks for your kind words! Your public kernels (including the kornia one from HappyWhale comp) and discussions were super helpful in this competition and steered me in the right direction!</p>
|
Image Matching Challenge 2023
|
(Prize Eligible) 7th place Solution using a novel matcher LightGlue
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Intro</h1>
<p>Hi there, we are Alex, Andri, Felix, Deep and Philipp, 4 master students and one PhD from ETH Zurich.</p>
<p>First of all, we would like to thank the organizers for hosting this fun and interesting challenge, we have greatly enjoyed it! We also greatly enjoy reading about the solution of other teams, who present captivating and innovative concepts through their fantastic works.</p>
<p>We have been pushing for a prize-eligible version that could compete against solutions using SuperPoint (SP) and SuperGlue (SG). In order to achieve this, we tried out various replacements for SP such as DISK and ALIKED and moved from SuperGlue to <a href="https://github.com/cvg/LightGlue" target="_blank">LightGlue</a> (LG), a cheaper and more accurate local feature matcher developed at ETHZ which is released under the APACHE license. While LG provided very promising results, we were unable to get a satisfactory score without using SP until the very last submission on the last day. This last submission, using an ensemble of ALIKED, DISK and SIFT, gave us enough confidence to choose it as our final submission. However, our best scoring submission would have been an ensemble using DISK, SIFT with SP, which would have matched the score of 2nd place (0.562). Additionally, we had another submission that was able to match the score of 5th place on the private leaderboard but did not produce convincing train or public scores.</p>
<table>
<thead>
<tr>
<th>Features</th>
<th>Matchers</th>
<th>Train</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>ALIKED*</td>
<td>LG</td>
<td>0.763</td>
<td>0.361</td>
<td>0.407</td>
</tr>
<tr>
<td>ALIKED+SIFT*</td>
<td>LG+NN</td>
<td>0.594</td>
<td>0.434</td>
<td>0.480</td>
</tr>
<tr>
<td>DISK*</td>
<td>LG</td>
<td>0.761</td>
<td>0.386</td>
<td>0.437</td>
</tr>
<tr>
<td>DISK+SIFT*</td>
<td>LG+NN</td>
<td><strong>0.843</strong></td>
<td>0.438</td>
<td>0.479</td>
</tr>
<tr>
<td>ALIKED2K+DISK*</td>
<td>LG+LG</td>
<td>0.837</td>
<td>0.444</td>
<td>0.488</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT**</td>
<td>LG+LG+NN</td>
<td>0.837</td>
<td><strong>0.475</strong></td>
<td>0.523</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT</td>
<td>LG(h)+LG(h)+NN</td>
<td>0.824</td>
<td>0.450</td>
<td><strong>0.529</strong></td>
</tr>
<tr>
<td>------------------------</td>
<td>------------</td>
<td>---------</td>
<td>---------</td>
<td>---------</td>
</tr>
<tr>
<td>DISK+SP</td>
<td>LG+LG</td>
<td>0.876</td>
<td>0.484</td>
<td><strong>0.562</strong></td>
</tr>
<tr>
<td>DISK+SP*</td>
<td>LG+SG</td>
<td>0.880</td>
<td>0.498</td>
<td>0.517</td>
</tr>
<tr>
<td>DISK+SIFT+SP*</td>
<td>LG+NN+LG</td>
<td><strong>0.890</strong></td>
<td><strong>0.511</strong></td>
<td>0.559</td>
</tr>
<tr>
<td>DISK+SIFT+SP*</td>
<td>LG+NN+SG</td>
<td>0.867</td>
<td>T/o</td>
<td>T/o</td>
</tr>
</tbody>
</table>
<p>** was our final submission, * were submitted after the deadline, and LG(h) uses an increased LightGlue matching threshold of 0.2 (the default is 0.1).</p>
<h1>LightGlue vs SuperGlue</h1>
<p>LightGlue is an advanced matching framework developed here at ETH Zurich, which exhibits remarkable efficiency and precision. Its architecture features self- and cross-attention mechanisms, empowering it to make robust match predictions. By employing early pruning and confidence classifications, LightGlue efficiently filters out unmatchable points and terminates computations early, thus avoiding unnecessary processing. LightGlue, along with its training code, is made available under a permissive APACHE license, facilitating broader usage.</p>
<p>In comparison to SuperGlue when combined with SuperPoint, LightGlue demonstrates superior performance in both accuracy and speed. It notably enhances the scores on the train, public, and private datasets while accomplishing these results in nearly half the time required by alternative methods.</p>
<table>
<thead>
<tr>
<th>Config</th>
<th>Train</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>SP+SG</td>
<td>0.643</td>
<td>0.361</td>
<td>0.438</td>
</tr>
<tr>
<td>SP+LG</td>
<td>0.650</td>
<td>0.384</td>
<td>0.461</td>
</tr>
</tbody>
</table>
<h1>Method</h1>
<p>We developed a modular pipeline that can be called with various arguments, enabling us to try out different configurations and combine methods very easily. In our pipeline, we made heavy use of hloc, which we used as a starting point.</p>
<h2>Image Retrieval</h2>
<p>To avoid matching all image pairs of a scene in an exhaustive manner, we used NetVLAD to retrieve the top k images to construct our image pairs. We also tried out CosPlace but did not observe any notable improvements over NetVLAD. Depending on the configuration of each run, we either used <em>k=20</em>, <em>30</em> or <em>50</em> due to run time constraints. For our final submission, we used <em>k=30</em>.</p>
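<p>Conceptually, the retrieval step boils down to the short sketch below (our actual pipeline uses hloc); <code>global_descs</code> is assumed to be an (N, D) array of L2-normalised global descriptors and <code>names</code> the corresponding image names:</p>
<pre><code>import numpy as np

def topk_pairs(global_descs, names, k=30):
    sim = global_descs @ global_descs.T        # cosine similarity of normalised descriptors
    np.fill_diagonal(sim, -np.inf)             # never pair an image with itself
    pairs = set()
    for i in range(len(names)):
        for j in np.argsort(-sim[i])[:k]:      # k most similar images for image i
            pairs.add(tuple(sorted((names[i], names[int(j)]))))   # deduplicate (i, j)/(j, i)
    return sorted(pairs)
</code></pre>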
<h2>Feature Extraction</h2>
<p>For keypoint extraction, we combined and tried multiple alternatives. For all feature extractions, we experimented with different image sizes but finally settled on resizing the larger edge to 1600 as it provided the most robust scores:</p>
<ul>
<li>ALIKED: We played around with a few settings and finally chose to add it to our ensemble as it showed promising results on a few train scenes. We had to limit the number of keypoints to 2048 due to run-time limitations.</li>
<li>DISK: DISK was the most promising replacement for SP. We tried a few different configurations and finally settled with the default using a max of 5000 keypoints.</li>
<li>SIFT: Due to its rotation invariance and fast matching, adding sift to our ensemble turned out to boost performance, especially for heritage/dioscuri and heritage/cyprus.</li>
<li>SP: SuperPoint was the best-performing features extractor in all our experiments, however, we did not choose it for our final submission because of its restrictive license.</li>
</ul>
<h2>Feature Matching</h2>
<p>We used NN-ratio to match SIFT features. For the deep features such as DISK, ALIKED and SP, we trained LightGlue on the MegaDepth dataset.</p>
<h2>Ensembles</h2>
<p>The ensembles gave us the biggest boost in score. They allowed us to run extraction and matching for different configurations and combine the matches of all of them, which basically gives us the benefits of all the methods used. The only drawback is the increased run-time, so we had to decrease the number of retrievals. Adding SIFT was always a good option because it did not increase the run-time by much while helping to deal with rotations.</p>
<h2>Structure-from-Motion</h2>
<p>For the reconstruction, we used PixSfM and forced COLMAP to use shared camera parameters for some scenes.</p>
<h3>Pixel-Perfect-SfM</h3>
<p>We added PixSfM (after compiling a wheel for manylinux, following the build pipeline of pycolmap) as an additional refinement step to the reconstruction process. During our experiments, we noted that using PixSfM decreased the score on scenes with rotated images as the S2DNet features are not rotation invariant. We thus only used it if no rotations are found in the scene. Due to the large number of keypoints in our ensemble, we had to use the low memory configuration in all scenes, even on the very small ones.</p>
<h3>Shared Camera Parameters</h3>
<p>We noticed that most scenes have been taken with the same camera and therefore decided to force COLMAP to use the same camera for all images in a scene if all images have the same shape. This turned out to be especially valuable on the haiper scenes where COLMAP assigned multiple cameras.</p>
<h2>Localizing Unregistered Images</h2>
<p>Some images were not registered, even with a high number of matches to registered ones, possibly because the assumption of shared intrinsics was not always valid. We, therefore, introduced a post-processing step where we used the hloc toolbox to estimate the pose of unregistered images. Specifically, we checked if the camera of an unregistered image is already in the reconstruction database. If that was not the case, we would infer it from the exif data.</p>
<h1>Other things tried</h1>
<ul>
<li>rotating images → We used an image orientation prediction model to correct for rotations. This worked well on the training set but reduced our score significantly upon submission.</li>
<li>Inspired by last year's solutions, use cropping to focus matching on important regions between image pairs → Became infeasible as we would have a different set of keypoints for each pair of images used for matching.</li>
<li>Other feature extractors and matcher such as a reimplementation of SP and dense matchers such as LoFTR, DKM → did not improve results or too slow, also unclear license for SP reimplementation.</li>
<li>Estimated relative in-plane rotation pairwise from sift matches and then estimated the rotation for each image by propagating the rotation through the maximum spanning tree of pairwise matches. → Worked sometimes on Dioscuri but failed on other scenes.</li>
<li>Resize for sfm did not help.</li>
</ul>
<h1>Acknowledgments</h1>
<p>We would like to thank Philipp Lindenberger for his awesome guidance, tips, and support. We also want to give a huge credit to his novel matcher LightGlue. We also want to thank the <a href="https://cvg.ethz.ch" target="_blank">Computer Vision and Geometry Group, ETH Zurich</a> for the awesome project that started all this.</p>
<h1>Links</h1>
<ul>
<li><a href="https://github.com/cvg/LightGlue" target="_blank">LightGlue Repo</a></li>
<li><a href="https://arxiv.org/pdf/2306.13643.pdf" target="_blank">LightGlue Paper</a></li>
<li><a href="https://github.com/veichta/IMC-2023" target="_blank">Solution Repo</a></li>
<li><a href="https://www.kaggle.com/code/alexanderveicht/imc2023-from-repo" target="_blank">Kaggle Notebook</a></li>
</ul>
<h2>Per Scene Train Scores</h2>
<h3>Heritage</h3>
<table>
<thead>
<tr>
<th>Features</th>
<th>Matchers</th>
<th>Cyprus</th>
<th>Dioscuri</th>
<th>Wall</th>
<th>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>ALIKED</td>
<td>LG</td>
<td>0.850</td>
<td>0.684</td>
<td><strong>0.967</strong></td>
<td><strong>0.833</strong></td>
</tr>
<tr>
<td>DISK</td>
<td>LG</td>
<td>0.314</td>
<td>0.592</td>
<td>0.843</td>
<td>0.583</td>
</tr>
<tr>
<td>ALIKED+SIFT</td>
<td>LG+NN</td>
<td>0.991</td>
<td>0.772</td>
<td>0.436</td>
<td>0.733</td>
</tr>
<tr>
<td>DISK+SIFT</td>
<td>LG+NN</td>
<td>0.993</td>
<td>0.624</td>
<td>0.756</td>
<td>0.791</td>
</tr>
<tr>
<td>ALIKED2K+DISK</td>
<td>LG+LG</td>
<td>0.792</td>
<td>0.712</td>
<td>0.930</td>
<td>0.811</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT**</td>
<td>LG+LG+NN</td>
<td><strong>0.993</strong></td>
<td><strong>0.802</strong></td>
<td>0.595</td>
<td>0.796</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT</td>
<td>LG(h)+LG(h)+NN</td>
<td><strong>0.993</strong></td>
<td><strong>0.802</strong></td>
<td>0.595</td>
<td>0.796</td>
</tr>
</tbody>
</table>
<h3>Haiper</h3>
<table>
<thead>
<tr>
<th>Features</th>
<th>Matchers</th>
<th>bike</th>
<th>chairs</th>
<th>fountain</th>
<th>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>ALIKED</td>
<td>LG</td>
<td>0.431</td>
<td>0.735</td>
<td><strong>0.998</strong></td>
<td>0.721</td>
</tr>
<tr>
<td>DISK</td>
<td>LG</td>
<td><strong>0.926</strong></td>
<td>0.799</td>
<td><strong>0.998</strong></td>
<td>0.908</td>
</tr>
<tr>
<td>ALIKED+SIFT</td>
<td>LG+NN</td>
<td>0.579</td>
<td>0.931</td>
<td><strong>0.998</strong></td>
<td>0.836</td>
</tr>
<tr>
<td>DISK+SIFT</td>
<td>LG+NN</td>
<td>0.917</td>
<td>0.929</td>
<td><strong>0.998</strong></td>
<td>0.948</td>
</tr>
<tr>
<td>ALIKED2K+DISK</td>
<td>LG+LG</td>
<td>0.918</td>
<td>0.812</td>
<td><strong>0.998</strong></td>
<td>0.909</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT**</td>
<td>LG+LG+NN</td>
<td>0.922</td>
<td>0.801</td>
<td><strong>0.998</strong></td>
<td>0.907</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT</td>
<td>LG(h)+LG(h)+NN</td>
<td>0.920</td>
<td>0.934</td>
<td><strong>0.998</strong></td>
<td><strong>0.951</strong></td>
</tr>
</tbody>
</table>
<h3>Urban</h3>
<table>
<thead>
<tr>
<th>Features</th>
<th>Matchers</th>
<th>kyiv-puppet-theater</th>
<th>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>ALIKED</td>
<td>LG</td>
<td>0.735</td>
<td>0.735</td>
</tr>
<tr>
<td>DISK</td>
<td>LG</td>
<td>0.793</td>
<td>0.793</td>
</tr>
<tr>
<td>ALIKED+SIFT</td>
<td>LG+NN</td>
<td>0.215</td>
<td>0.215</td>
</tr>
<tr>
<td>DISK+SIFT</td>
<td>LG+NN</td>
<td>0.789</td>
<td>0.789</td>
</tr>
<tr>
<td>ALIKED2K+DISK</td>
<td>LG+LG</td>
<td>0.742</td>
<td>0.742</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT**</td>
<td>LG+LG+NN</td>
<td>0.806</td>
<td>0.806</td>
</tr>
<tr>
<td>ALIKED2K+DISK+SIFT</td>
<td>LG(h)+LG(h)+NN</td>
<td><strong>0.824</strong></td>
<td><strong>0.824</strong></td>
</tr>
</tbody>
</table>
|
Image Matching Challenge 2023
|
12nd Place Solution: SP+SG
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the competition organizers and Kaggle staff for hosting this amazing competition and solid support. Thanks to each my team member for their creative perspective and effort which gain us this place in competition.</p>
<ol>
<li><p>Overview<br>
Our final model is rather simple. The solution is based on a modular toolbox named Hierarchical-Localization (an implementation of the typical SP+SG pipeline), and the key modification is to set the input resolution to 2000.<br>
As the resolution increased from 1024 to 2000, the public LB score increased from 0.24 to 0.46. As the resolution was further increased to 3000, there was still improvement on the train set (especially "wall").</p></li>
<li><p>Configuration in detail </p>
<pre><code>confs = {
'superpoint_aachen': {
'output': 'feats-superpoint-n-r',
'model': {
'name': 'superpoint',
'nms_radius': ,
'max_keypoints': ,
},
'preprocessing': {
'grayscale': True,
'resize_max': ,
},
}
}
</code></pre>
<pre><code>'superglue': {
'name': 'superglue',
'weights': 'outdoor',
'sinkhorn_iterations': ,
}
</code></pre></li>
<li><p>Tricks that didn't work<br>
Our code suffers from randomness; the difference in public LB score for the same code can be up to 0.04! Unfortunately, during the whole competition, we failed to find a way to eliminate this randomness. We deemed a modification useless if no obvious improvement was observed over repeated submissions, but there may still be mistakes in that judgement.</p>
<ul>
<li>Image retrieval<br>
After incremental mapping, we tried to add the images that failed to register into the successfully reconstructed model.</li>
<li>TTA<br>
We reversed the transforms, concatenated the features extracted from [original, left-right flip, 10-deg rotation], and applied NMS; however, the computation time increased hugely while no obvious improvement was observed.</li>
<li>Ensemble SIFT to SP<br>
We learned that SP fails when the image has an extreme in-plane rotation. So we tried performing SIFT extraction and matching with pycolmap, extracting the matches from the database, and concatenating them with the SPSG matches. This trick can improve the score for "cyprus" but was useless on this year's test set. Worth noting is that it doesn't harm the LB score either.</li>
<li>Multi-models and multi-resolution<br>
We tried to ensemble SPSG with DKM, LoFTR, QuadTree and SiLK, but these models perform poorly despite running at the same resolution as SPSG. Some of them are one-stage models, which made them inefficient for this task. We also tried to concatenate SPSG matches computed at [800, 1500, 2000] resolutions; the result is nearly the same as with 2000 alone.</li>
<li>Suppress randomness by more iterations<br>
Hloc uses geometric verification (<code>pycolmap.verify_matches</code>); setting <code>max_num_trials=40000</code> didn't work.</li></ul></li>
<li><p>Some experience from the competition<br>
Reading images with cv2 can lose EXIF information, which is usually not conducive to reconstruction. But there is a counter-example: "theater" can score even higher after the information is removed.</p></li>
<li><p>References</p></li>
</ol>
<ul>
<li>Hloc<br>
<a href="https://github.com/cvg/Hierarchical-Localization/" target="_blank">https://github.com/cvg/Hierarchical-Localization/</a></li>
<li>Superpoint introduction<br>
<a href="https://github.com/magicleap/SuperPointPretrainedNetwork/blob/master/assets/DL4VSLAM_talk.pdf" target="_blank">https://github.com/magicleap/SuperPointPretrainedNetwork/blob/master/assets/DL4VSLAM_talk.pdf</a></li>
</ul>
|
Image Matching Challenge 2023
|
16th Place Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, we would like to say thank you to the organizers and Kaggle staff for setting up this challenge, it has been an amazing experience for us. </p>
<p><strong>Our solution</strong><br>
For our final submission, we used an ensemble of SuperPoint, KeyNet/AffNet/HardNet and SOSNet as our feature detectors/descriptors, and used SuperGlue and Adalam for feature matching. We used Colmap for reconstruction and camera localization. We also used <a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">hloc</a> to speed up the pipeline and make it more scalable for validation and testing. </p>
<p><strong>Image Retrieval</strong><br>
We used <a href="https://openaccess.thecvf.com/content_cvpr_2016/papers/Arandjelovic_NetVLAD_CNN_Architecture_CVPR_2016_paper.pdf" target="_blank">NetVLAD </a>as implemented in <a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">hloc</a> as the global feature descriptor for image retrieval</p>
<p><strong>Feature Matching</strong><br>
The first thing we noticed in the dataset was that some scenes contain a lot of rotated images, and we tried to tackle this problem with 2 approaches: <br>
1) use rotation invariant feature matchers (e.g., KeyNet/AffNet/HardNet, SOSNet). <br>
2) use a <a href="https://github.com/ternaus/check_orientation" target="_blank">lightweight orientation detector</a> to detect the rotation angles and rotate the image pairs accordingly so that both images have a similar orientation (for simplicity, we only set the rotation angles to 90, 180 and 270 degrees). </p>
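<p>A hedged sketch of approach 2 (not our exact code): rotate one image by a multiple of 90 degrees so the pair shares a similar orientation, match, then map the keypoints of the rotated image back to its original pixel coordinates. <code>match_fn</code> is a placeholder for any matcher such as SuperPoint+SuperGlue.</p>
<pre><code>import numpy as np

def unrotate_kpts(kpts_rot, k_rot, orig_h, orig_w):
    # Map (x, y) keypoints detected on np.rot90(img, k=k_rot) back to img coordinates.
    x, y = kpts_rot[:, 0], kpts_rot[:, 1]
    k = k_rot % 4
    if k == 0:
        xo, yo = x, y
    elif k == 1:                       # image was rotated 90 degrees counter-clockwise
        xo, yo = orig_w - 1 - y, x
    elif k == 2:                       # rotated 180 degrees
        xo, yo = orig_w - 1 - x, orig_h - 1 - y
    else:                              # k == 3, rotated 90 degrees clockwise
        xo, yo = y, orig_h - 1 - x
    return np.stack([xo, yo], axis=1)

def rotate_and_match(img0, img1, k_rot, match_fn):
    kpts0, kpts1_rot = match_fn(img0, np.rot90(img1, k=k_rot))
    kpts1 = unrotate_kpts(kpts1_rot, k_rot, img1.shape[0], img1.shape[1])
    return kpts0, kpts1
</code></pre>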
<p>We proceeded with both approaches and found out that both approaches achieve a similar improvement on the Heritage dataset, however, by ensembling more feature matchers, we observe some extra improvements on Urban and Haiper datasets, so we finally took this approach, and this ensemble achieved the best results for us within the time limit of 9h: <strong>SuperGlue + KeyNet/AffNet/HardNet (with Adalam) + SOSNet (with Adalam)</strong>. Using orientation compensation on the ensembled model does not bring any extra improvements. </p>
<p><strong>Things that did not work</strong>:<br>
1) We first tried <a href="https://github.com/pidahbus/deep-image-orientation-angle-detection" target="_blank">this SOTA orientation detector</a>, however it consumes too much memory and could not be integrated into our pipeline on Kaggle<br>
2) We found <strong>KeyNet/AffNet/HardNet + Adalam</strong> in the baseline the <strong>best single feature matcher</strong> without any preprocessing -- We could achieve 0.455/0.433 (equal to 47th place) by only tuning its parameters and resizing the input images to 1600, however, when we integrated them into our pipeline using hloc, its performance dropped significantly to 0.334/0.277 (locally as well, mainly on urban), we tried to investigate but still do not know why. <br>
3) We experimented on a lot of recent feature matchers and ensembles, including DKMv3, DISK, LoFTR, SiLK, DAC, and they either do not perform as well or are too slow when integrated into the pipeline. In general, we found that end-to-end dense matchers not well suited for this multiview challenge despite their success in last year's two view challenge, their speed is too slow and the scores they achieve are also not as good. Here are some local validation results:</p>
<ol>
<li>SiLK (on ~800x600):<br>
urban: 0.125<br>
haiper: 0.165</li>
<li>DKMv3 (on ~800x600 and it's still very slow):<br>
heritage: 0.185<br>
haiper: 0.510</li>
<li>DISK (on ~1600x1200):<br>
urban: 0.461<br>
heritage: 0.292 (0.452 with rotation compensation)<br>
haiper: 0.433</li>
<li>SOSNet with Adalam (on ~1600x1200):<br>
urban: 0.031<br>
heritage: 0.460 (same with rotation compensation)<br>
haiper: 0.653</li>
<li>Sift / Rootsift with Adalam (on ~1600x1200):<br>
urban: 0.02<br>
heritage: 0.396<br>
haiper: 0.635</li>
<li>DAC: the results are very bad</li>
</ol>
<p><strong>Reconstruction</strong><br>
After merging all the match points from the ensemble, we apply <a href="https://github.com/colmap/pycolmap/blob/743a4ac305183f96d2a4cfce7c7f6418b31b8598/pipeline/match_features.cc#L76" target="_blank">geometric verification</a> in Colmap before reconstructing the model, which speeds up the reconstruction. <br>
<strong>Things that did not work</strong>: <br>
1) We tried using Pixel-Perfect SFM: we set it up locally and it gave decent results, visually comparable to our pipeline, but since we could not get it up and running on Kaggle we did not proceed further. <br>
2) We tried using MAGSAC++ to replace the default RANSAC function Colmap uses to remove bad matching points before reconstructing the model, but we did not see a significant difference in the final scores. </p>
|
Image Matching Challenge 2023
|
19th Place Solution - Simple SP+SG via HLoc
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our solution is based on the baseline. In addition, we used the well-known open-source code "Hierarchical-Localization" (<a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">https://github.com/cvg/Hierarchical-Localization</a>) to implement the "SuperPoint + SuperGlue" image matching pipeline with pretrained weights only, without any fine-tuning, so we spent no time on training. We changed parameters such as nms_radius, max_keypoints, resize_max and sinkhorn_iterations to achieve our best leaderboard score.</p>
<p>The github link of HLoc: <a href="https://github.com/cvg/Hierarchical-Localization" target="_blank">https://github.com/cvg/Hierarchical-Localization</a></p>
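<p>For readers new to HLoc, here is a minimal sketch of the typical SuperPoint + SuperGlue workflow the write-up refers to. This is not the authors' notebook: the paths are placeholders, the parameter values are illustrative assumptions rather than their tuned settings, and the config key names follow hloc's published defaults, which may differ across versions.</p>
<pre><code>from pathlib import Path
from hloc import extract_features, match_features, pairs_from_exhaustive, reconstruction

images = Path("images/")      # scene images (placeholder path)
outputs = Path("outputs/")    # working directory (placeholder path)
sfm_pairs = outputs / "pairs.txt"

feature_conf = extract_features.confs["superpoint_aachen"]  # SuperPoint
matcher_conf = match_features.confs["superglue"]            # SuperGlue

# The knobs mentioned above; values here are illustrative, not the authors' ones.
feature_conf["model"]["nms_radius"] = 4
feature_conf["model"]["max_keypoints"] = 4096
feature_conf["preprocessing"]["resize_max"] = 1600
matcher_conf["model"]["sinkhorn_iterations"] = 20

features = extract_features.main(feature_conf, images, outputs)
pairs_from_exhaustive.main(sfm_pairs, image_list=sorted(p.name for p in images.iterdir()))
matches = match_features.main(matcher_conf, sfm_pairs, feature_conf["output"], outputs)
model = reconstruction.main(outputs / "sfm", images, sfm_pairs, features, matches)
</code></pre>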
|
Image Matching Challenge 2023
|
1st Place Solution: Sparse + Dense matching, confidence-based merge, SfM, and then iterative refinement
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks very much for the insight! I really like your improvement on the learning-based image retrieval (NetVlad), which is a phase we did not elaborate on. We are glad to try your method in our framework in the future :).</p>
<h1>0. Introduction</h1>
<p>We are delighted to be participating in the Image Matching Challenge 2023. Thanks to the organizers, sponsors, and Kaggle staff for their efforts, and congrats to all the participants. We learned a lot from this competition and from the other participants.</p>
<p>Our team members include Xingyi He, Dongli Tan, Sida Peng, Jiaming Sun, and Prof. Xiaowei Zhou. We are affiliated with the State Key Lab. of CAD&CG, Zhejiang University. I would like to express my gratitude to my teammates for their hard work and dedication. </p>
<h1>1. Overview and Motivation</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2F2b2b6c045d8a2dfa0a536090f025db02%2Fmain_fig.png?generation=1686841291013288&alt=media" alt="Fig.1"><br>
We proposed a coarse-to-fine SfM framework to draw benefits from the recent success of detector-free matchers, while solving the multi-view inconsistency issue of detector-free matchers.<br>
Due to the time limitation of the competition, we also incorporate "light-weight" sparse feature detection and matching methods to determine the image rotation and the final overlap region between pairs, upon which the detector-free matcher is then run.</p>
<p>However, because of the multi-view inconsistency of detector-free matchers, directly using their matches for SfM leads to an excessive number of 2D and 3D points. It is hard to construct feature tracks, and the incremental mapping phase becomes extremely slow.</p>
<p>Our coarse-to-fine framework solves this issue by first quantizing matches with a confidence-guided merge approach, improving consistency while sacrificing the matching accuracy. We use the merged matches to reconstruct a coarse SfM model.<br>
Then, we refine the coarse SfM model by a novel iterative refinement pipeline, which iterates between an attention-based multi-view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy.</p>
<h1>2. Method</h1>
<h2>2.1 Image Pair Construction</h2>
<p>For each image, we select k relevant images using an image retrieval method. Here we haven't found significant differences among retrieval methods. This could potentially be attributed to the relatively small number of images or scenes in the evaluation dataset.</p>
<h2>2.2 Matching</h2>
<h3>2.2.1 Rotation Detection</h3>
<p>Some scenes within the competition datasets contain rotated images. Since many popular learning-based matching methods cannot handle this case effectively, our approach, similar to that of many other participants, involves rotating one of the query images several times [0, π/2, π, 3π/2] and matching each rotation with the target image. This helps to mitigate the drastic reduction in the number of matching points caused by image rotations.</p>
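<p>A small sketch of this rotation trick follows. It assumes images are numpy arrays and that <code>match_pair(img_a, img_b)</code> is a hypothetical SuperPoint/SuperGlue wrapper returning two (N, 2) arrays of matched (x, y) keypoints; it is an illustration, not the team's code.</p>
<pre><code>import numpy as np

def unrotate_keypoints(kpts, k, orig_h, orig_w):
    """Map (x, y) keypoints found on np.rot90(img, k) back to img coordinates."""
    x, y = kpts[:, 0].copy(), kpts[:, 1].copy()
    for step in range(k % 4, 0, -1):
        # Width of the image *before* this 90-degree CCW rotation was applied.
        prev_w = orig_w if (step - 1) % 2 == 0 else orig_h
        x, y = prev_w - 1 - y, x
    return np.stack([x, y], axis=1)

def match_with_rotation(img_a, img_b, match_pair):
    best, best_k = None, 0
    for k in range(4):  # 0, 90, 180, 270 degrees counter-clockwise
        kpts_a, kpts_b = match_pair(img_a, np.rot90(img_b, k=k))
        if best is None or len(kpts_a) > len(best[0]):
            best, best_k = (kpts_a, kpts_b), k
    kpts_a, kpts_b = best
    h, w = img_b.shape[:2]
    return kpts_a, unrotate_keypoints(kpts_b, best_k, h, w)
</code></pre>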
<h3>2.2.2 Overlap Detection</h3>
<p>Like last year's solution, estimating the overlap region is a commonly employed technique. We use the first round of matching to obtain the overlap region and then perform the second round of matching within them. According to the area ratio, we resize the smaller region in one image and align it with the larger region. We find a sparse matcher is capable of balancing efficiency and effectiveness.</p>
<h3>2.2.3 Matching</h3>
<p>We find that an ensemble of multiple methods tends to outperform any individual method. Due to time constraints, we chose the combination of one sparse method (SPSG) and one dense method (LoFTR). We also find that substituting LoFTR with DKMv3 performs better in this competition.</p>
<h2>2.3 Multi-view inconsistency problem</h2>
<p>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2Fab3bc20584b40131a4fe28b55d7d1530%2Fnon_repeatable_problem.png?generation=1686841396817545&alt=media">
</p>
<p>As shown in Fig.2, the resulting feature locations of detector-free matchers (e.g., LoFTR) in an image depend on the other image. This pair-dependent nature leads to fragmentary feature tracks when running pair-wise matching over multiple views, which makes detector-free matchers not directly applicable to existing SfM systems (e.g., COLMAP).<br>
Moreover, as for the sparse detection and matching part, since the cropped image overlap regions are also relevant to the other image, re-detecting keypoints on the cropped images for matching also shares the same multi-view inconsistency issue.<br>
This issue is solved by the following coarse-to-fine SfM framework.</p>
<h2>2.4 Coarse SfM</h2>
<p>In this phase, we first strive for consistency by merging to reconstruct an initial coarse SfM model, which will be further refined for higher pose accuracy in the refinement phase.</p>
<h3>2.4.1 Confidence-guided Merge</h3>
<p>After the matching, we merge matches on each image based on confidence to improve the consistency (repeatability) of matches for SfM. For each image, we first aggregate all its matches with other images and then perform NMS with a window size of 5 to merge matches into the points with the locally highest confidence, as depicted in Fig.1(2). After the NMS, the number of 2D points is significantly reduced, and if the total number of points still exceeds the threshold, the top 10000 points are selected for each image by sorting the confidence.</p>
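<p>The following is a rough sketch of one way such a confidence-guided merge could be implemented (an assumed interpretation, not the authors' code): pool every matched 2D point of an image across all its pairs, greedily keep the highest-confidence point per grid cell, which approximates the 5-pixel NMS, and cap the result at 10000 points.</p>
<pre><code>import numpy as np

def merge_keypoints(points, conf, window=5, max_points=10000):
    """points: (N, 2) pooled keypoints of one image; conf: (N,) match confidences."""
    order = np.argsort(-conf)          # highest confidence first
    occupied = set()
    kept = []
    for idx in order:
        x, y = points[idx]
        cell = (int(x) // window, int(y) // window)
        if cell in occupied:
            continue                   # a stronger point already owns this cell
        occupied.add(cell)
        kept.append(points[idx])
        if len(kept) >= max_points:
            break
    return np.array(kept)
</code></pre>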
<h3>2.4.2 Mapping</h3>
<p>Based on the merged matches, we perform the coarse SfM with COLMAP. Note that geometric verification is skipped since RANSAC is performed in the matching phase. For the reconstruction of scenes with a large number of images (~250 in this competition), we enable the parallelized bundle adjustment (PBA) in COLMAP. Specifically, since PBA uses a PCG solver, which gives an inexact solution to the BA problem, unlike the exact Levenberg-Marquardt (LM) solver used by default in Ceres, we enable PBA only after a large number of images are registered (i.e., >40). This is based on the intuition that the beginning of reconstruction is of critical importance, and the inexact solution of PBA may lead to a poor initialization of the scene.</p>
<h2>2.5 Iterative Refinement</h2>
<p>We proceed to refine the initial SfM model to obtain improved camera poses and point clouds. To this end, we propose an iterative refinement pipeline. Within each iteration, we first enhance the accuracy of feature tracks with a transformer-based multi-view refinement matching module.<br>
These refined feature tracks are then fed into a geometry refinement phase which optimizes camera poses and point clouds jointly. The geometry refinement iterates between the geometric-BA and track topology adjustment (including complete tracks, merge tracks, and filter observations). The refinement process can be performed multiple times for higher accuracy.<br>
Our feature track refinement matching module is trained on the MegaDepth, and more details are in our paper which is soon available on arXiv.</p>
<table>
<thead>
<tr>
<th>Method</th>
<th>score(private)</th>
</tr>
</thead>
<tbody>
<tr>
<td>spsg</td>
<td>0.482</td>
</tr>
<tr>
<td>spsg+LoFTR</td>
<td>0.526</td>
</tr>
<tr>
<td>spsg+LoFTR+refine</td>
<td>0.570</td>
</tr>
<tr>
<td>spsg+DKM^+refine</td>
<td>0.594</td>
</tr>
</tbody>
</table>
<p>^only replace LoFTR in the Haiper dataset</p>
<h1>3. Ideas tried but not worked</h1>
<h2>3.1 Other retrieval modules</h2>
<p>Other than NetVLad, we have also tried Cosplace, as well as SIFT+NN as a lightweight detector and matcher for retrieval. However, there was no noticeable improvement, and they even performed slightly worse than NetVLad in our framework. We think this may be because pair construction is at the very beginning of the overall pipeline, and our framework is pretty robust to image pair variance.</p>
<h2>3.2 Other sparse detectors and matchers</h2>
<p>Other than Superpoint + Superglue, we have also tried Silk + NN, which performs worse than Superpoint + Superglue. I think it may be because we did not successfully tune it to work in our framework.</p>
<h2>3.3 Other detector-free matchers</h2>
<p>Other than LoFTR, we also tried MatchFormer and AspanFormer in our framework. We find MatchFormer performs on par with LoFTR but is slower, which would lead to running out of time. AspanFormer performs worse than LoFTR when used in our framework in this challenge.</p>
<h2>3.4 Visual localization</h2>
<p>We observed that some images may not be successfully registered during mapping. Our idea is to "focus" on these images and treat them as a visual localization problem by trying to register them into the existing SfM model. We use a specifically trained version of LoFTR for localization, which can bring ~3% improvement on the provided training dataset. However, we did not have a spare running-time quota in the submission and, therefore, did not successfully evaluate visual localization in the final submission.</p>
<h1>4. Some insights</h1>
<h2>4.1 About the randomness</h2>
<p>We observe that the ransac performed with matching, the ransac PnP during mapping, and the bundle adjustment multi-threading in COLMAP may contain randomness.<br>
After a careful evaluation, we found that the ransac random seed in both matching and mapping is fixed, and the remaining randomness can be eliminated by setting the number of threads to 1 in COLMAP.<br>
Therefore, our submission can achieve exactly the same results after multiple rerunning, which helps us to evaluate the performance of our framework.</p>
<h2>4.2 About the workload of the evaluation machine</h2>
<p>Given that the randomness problem of our framework is fixed, we observed that submissions during the last week before the deadline were ~20 min slower than earlier submissions with the same configuration.<br>
Our final submission before the deadline, which used DKM as the detector-free matcher, ran out of time; we believed it might bring improvements, so we decided to choose it as one of our final submissions.<br>
We reran this submission after the deadline, and it finished within the time limit, finally achieving 59.4.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14597895%2F93c7812782a9455aaeafabdffec752b9%2Ffinal_shot_2.png?generation=1686842141320946&alt=media" alt=""></p>
<h1>5. Acknowledgment</h1>
<p>The members of our team have participated in the IMC for three consecutive years (IMC 2021, 2022, and 2023), and we are glad to see more and more participants in this competition; the number of submissions reached a new high this year. We really enjoyed this year's competition since one of the most important applications of feature matching is SfM. The organizers removed the matching-only submission restriction of IMC2021 but limited the running time and computation resources (a machine with only 2 CPU cores and 1 GPU is provided), which makes the competition more interesting, challenging, and flexible. Thanks to the organizers, sponsors, and Kaggle staff again!</p>
<h1>6. Suggestions</h1>
<p>We also have some suggestions that we notice the scenes in this year's competition are mainly outdoor datasets. We think more types of scenes, such as indoor and object-level scenes with severe texture-poor regions, can be added to the competition in the future. In our recent research, we also collected a texture-poor SfM dataset which is object-centric with ground-truth annotations. We think it may be helpful for the future IMC competition, and we are glad to share it with the organizers if needed.</p>
<p>Special thanks to the authors of the following open-source software and papers: COLMAP, SuperPoint, SuperGlue, LoFTR, DKM, HLoc, pycolmap, Cosplace, NetVlad.</p>
|
Image Matching Challenge 2023
|
2nd Place Solution for the IMC 2023 – Win Over COLMAP Randomness?!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><strong>Bonus Point</strong></h1>
<p>We presented our solution at <a href="https://image-matching-workshop.github.io/" target="_blank">CVPR 2023</a>. Watch on YouTube <a href="https://youtu.be/9JpGjpITiDM?si=l7pGDPw4vOnZsH9H&t=13519" target="_blank">here</a></p>
<h1><strong>Intro</strong></h1>
<p>Our team would like to deeply appreciate the Kaggle staff, Google Research, and Haiper for hosting the continuation of this exciting image matching challenge, as well as everyone here who competed and shared great discussions. My congratulations to all participants!</p>
<p>The work we describe here is truly a joint effort of <a href="https://www.kaggle.com/yamsam" target="_blank">@yamsam</a>, <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>, and <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a>. I’m grateful of being a part of this hardworking, cohesive, and skilled team. Thanks a lot, guys! I learned a lot from you.</p>
<h6>We enjoyed this competition!</h6>
<p>The best submission that was used for final scoring on the private LB finished on the last day of the competition. We weren’t completely sure about this submission because it was not clear how much randomness was in it. We made a practice of re-running the same notebook multiple times to see what scores we could get. We discussed the solution implemented on the last day, trusted it, and it worked out. Another interesting fact is that our 2nd selected submission for final evaluation scored <strong>0.497/0.542</strong>, which would also have been enough for 2nd place. This second submission is the same as the 1st one except for the “Run reconstruction multiple times” trick described below. Anyway, the difference between the best submission (<strong>0.562</strong>) and the 2nd one is noticeable. </p>
<h1><strong>Overview</strong></h1>
<p>In general, throughout our code submissions every time we fight with the randomness coming from COLMAP responsible for scene reconstruction. Our final solution is based on the use of COLMAP and pretrained SuperPoint/SuperGlue models running on different resolutions for every image in the scene. We apply a bunch of different tricks aimed at different parts of COLMAP-based pipeline in order to stabilize our solution and reach the final score.</p>
<h1><strong>Architecture</strong></h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5065877%2F68911db7c4cc430dec05670cd196a960%2Fslide_architecture.png?generation=1687202186098466&alt=media" alt="architecture"></p>
<h1><strong>Key Takeaways:</strong></h1>
<ul>
<li>Initially, use all <strong>possible unique image pairs</strong> generated from the scene set. Remove the model and logic used for finding and ranking similar images in the scene. A threshold of 100 defines a minimum number of matches that we expect each image pair to have. If it is less, we discard that pair.</li>
<li><strong>SP/SG</strong> settings: unlimited number of keypoints, keypoint threshold <strong>0.005</strong>, match threshold <strong>0.2</strong>, and <strong>20</strong> sinkhorn iterations.</li>
<li><strong>Half precision</strong> for SP/SG helped to reduce memory usage without sacrificing noticeable accuracy. Another great performance trick is to cache the keypoints/descriptors generated by SP for each image, and then cache the SG matches for every image pair. This reduced the running time a lot.</li>
<li><strong>TTA</strong>. Ensemble of matches extracted from images at different scales. In our local experiments, the best results are achieved with a combination of <strong>[1088, 1280, 1376]</strong>. We used np.concatenate to join matches from different models, which was a common trick in last year's IMC22 competition (see the sketch after this list).</li>
<li>Apply <strong>rotate detector</strong> to un-rotate images in the scene if necessary. We discovered that some scenes in the train dataset (cyprus, dioscuri) have many 90/270 rotated images. Some of the images have EXIF meta information. Unfortunately, looking into the train dataset, we did not find any specifics about the orientation the image was captured at. To address this rotation issue, we re-rotate the image to its natural orientation by employing this <a href="https://github.com/ternaus/check_orientation" target="_blank">solution</a>. We use it w/o any threshold and look at the number of rotations that we need to apply to an image. After applying rotation, the score for cyprus scene jumped up significantly from <strong>~0.02</strong> up to <strong>~0.55</strong>. RotNet implementation did not work out for us.</li>
<li><strong>Set-up initial image</strong> for COLMAP reconstruction explicitly. For each image we store the number of pairs in which it is seen and the number of matches these pairs produce. Then we pick the one with the highest number of pairs. If multiple images satisfy this criterion, we pick the one with the highest number of matches. It helped to boost the score.</li>
<li>To reduce randomness in the score, we decided to do something like <strong>averaging multiple match_exhaustive</strong> calls. The idea is to run match_exhaustive N times on the original database of matches and then keep only those matches that appear in 8/10 cases; other matches are neglected. It was done in a crude way with database copies, write/read, etc.</li>
<li><strong>Run reconstruction multiple times</strong> from scratch with different matcher thresholds, e.g. <strong>[100, 125, 75, 100]</strong>. By looking at the number of registered images and the number of 3D cloud points, we select the best reconstruction. This trick not only allows finding a better reconstruction by finding a better threshold for matches, but also decreases the randomness effect and acts as a countermeasure against a shake-up. Due to its running-time cost, we used this strategy only for scenes having fewer than 40-45 images. This is the last step in our solution, and it helped us to boost the score from <strong>0.497/0.542</strong> to <strong>0.506/0.562</strong>. We also experimented with pycolmap.incremental_mapping employing a similar idea, but that scenario did not work out.</li>
</ul>
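<p>As referenced in the TTA bullet above, here is a minimal sketch of multi-scale match concatenation. It assumes a hypothetical <code>superglue_match(img_a, img_b, longest_edge)</code> that resizes both images to the given longest edge and returns matched keypoints in that resized frame; it is an illustration rather than the team's code.</p>
<pre><code>import numpy as np

def multiscale_match(img_a, img_b, superglue_match, sizes=(1088, 1280, 1376)):
    all_a, all_b = [], []
    for size in sizes:
        kpts_a, kpts_b = superglue_match(img_a, img_b, longest_edge=size)
        # Scale the keypoints back to the original image resolutions.
        all_a.append(kpts_a * (max(img_a.shape[:2]) / size))
        all_b.append(kpts_b * (max(img_b.shape[:2]) / size))
    return np.concatenate(all_a, axis=0), np.concatenate(all_b, axis=0)
</code></pre>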
<h1><strong>Ideas that did not work out or not fully tested:</strong></h1>
<p>• <strong>TTA multi crop</strong>, no success. The idea was to split image into multiple crops and extract matches in order to find similar images in the scene and determine the best image pairs.<br>
• <strong>Square-sized images.</strong><br>
• <strong>Bigger image size</strong> (e.g., 1600) for SP/SG.<br>
• <strong>Manual RANSAC</strong> instead of using COLMAP internal implementation. We run experiments by disabling geometric verification, but the score was not good.<br>
• <strong>NMS filtering</strong> to reduce number of points by using <a href="https://github.com/BAILOOL/ANMS-Codes" target="_blank">ANMS</a>. <br>
• Filter the least significant image pairs by the number of outliers instead of relying on raw matches. It was quite important to look for a certain number of matches for an image pair. We ran experiments using SP/SG and Loftr. We got higher mAA with Loftr, but, probably, more effort was needed here to make it work properly; not enough time.<br>
• <strong>Downscale scene images</strong> before passing them to COLMAP.<br>
• <strong>Pixel-Perfect Structure-from-Motion</strong>. It was a promising method to evaluate as we got a good boost locally with <a href="https://github.com/cvg/pixel-perfect-sfm" target="_blank">PixSfm</a>, using a single image size of 1280, and it boosted our score from <strong>0.71727</strong> to <strong>0.76253</strong>. Then, we managed to install this framework successfully and run it in Kaggle environment, but could not beat our best score at that moment. It is a heavyweight framework taking too much RAM, and we could run it only for scenes having at most ~30 images. A bit upset because we spent tons of hours compiling all this stuff.<br>
• <strong>Adaptive image sizes</strong>. Say, if the longest image side >= 1536 for most images in the scene, we use higher image resolution for matchers ensemble e.g. [1280, 1408, 1536]. Otherwise, a default one is applied [1280,1088,1376]. Did not have enough time to test this idea. It worked locally for cyprus and wall that have big resolution. One of our last submissions implementing this idea crashed with internal error.<br>
• Different <strong>detectors, matchers</strong>. We tested Loftr, QuadreeAttention, AspanFormer, DKM v3, GlueStick (keypoints + lines), Patch2Pix, KeyNetAffNetHardNet, DISK, PatchNetVLAD. We also experimented with the confidence matching thresholds and number of matches, but no boost here. Eventually, SP/SG was the best choice for us. Probably, the reason why many dense-based methods did not work out for us is because of the low performance of the “repeatability” metric and high noise in the matches.<br>
• Different <strong>CNNs to find the most similar images</strong> in the scene and generate corresponding image pairs (NetVLAD, different pretrained timm-based backbones, CosPlace etc). We even specifically trained a model to find similar images in the scene, but no success here. Later, we gave up using this strategy at all.<br>
• Different <strong>keypoint/matching refinement</strong> methods (e.g., recently published <a href="https://github.com/TencentYoutuResearch/AdaMatcher" target="_blank">AdaMatcher</a>, Patch2Pix ), but did not have enough time. AdaMatcher seems a promising idea to try, a quote from their paper “as a refinement network for SP/SG we observe a noticeable improvement in AUC”</p>
<h1><strong>Performance Improvements Step By Step</strong></h1>
<table>
<thead>
<tr>
<th>Method</th>
<th>Private LB</th>
<th>Public LB</th>
<th>Δ Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline</td>
<td>0.382</td>
<td>0.317</td>
<td>–</td>
</tr>
<tr>
<td>Manual RANSAC</td>
<td>0.336</td>
<td>0.286</td>
<td>–0.046</td>
</tr>
<tr>
<td>Image size 1024 → 1280</td>
<td>0.407</td>
<td>0.345</td>
<td>+0.025</td>
</tr>
<tr>
<td>Image size 1376, similarity=None</td>
<td>0.489</td>
<td>0.426</td>
<td>+0.026</td>
</tr>
<tr>
<td>Exhaustive matching, 8/10</td>
<td>0.486</td>
<td>0.441</td>
<td>-0.003</td>
</tr>
<tr>
<td>TTA, Image sizes [840, 1024, 1280]</td>
<td>0.491</td>
<td>0.447</td>
<td>+0.002</td>
</tr>
<tr>
<td>TTA, Image sizes [1088, 1280, 1376]</td>
<td>0.523</td>
<td>0.475</td>
<td>+0.032</td>
</tr>
<tr>
<td>Manual image initialization</td>
<td>0.529</td>
<td>0.492</td>
<td>+0.006</td>
</tr>
<tr>
<td>Rotate detection</td>
<td>0.542</td>
<td>0.497</td>
<td>+0.013</td>
</tr>
<tr>
<td>Multi-run reconstruction, matches thr [100, 125, 75, 100]</td>
<td>0.562</td>
<td>0.506</td>
<td>+0.02</td>
</tr>
</tbody>
</table>
<h5>The final score is 0.506/0.562 in Public/Private.</h5>
<h1><strong>Local Validation</strong></h1>
<p>As a reference, this is one of our latest metric reports using train dataset: </p>
<p>urban / kyiv-puppet-theater (26 images, 325 pairs) -> mAA=0.921538, mAA_q=0.991077, mAA_t=0.921846<br>
urban -> mAA=0.921538</p>
<p>heritage / cyprus (30 images, 435 pairs) -> mAA=0.514713, mAA_q=0.525287, mAA_t=0.543678<br>
heritage / wall (43 images, 903 pairs) -> mAA=0.495792, mAA_q=0.875637, mAA_t=0.509302<br>
heritage -> mAA=0.505252</p>
<p>haiper / bike (15 images, 105 pairs) -> mAA=0.940952, mAA_q=0.999048, mAA_t=0.940952<br>
haiper / chairs (16 images, 120 pairs) -> mAA=0.834167, mAA_q=0.863333, mAA_t=0.839167<br>
haiper / fountain (23 images, 253 pairs) -> mAA=0.999605, mAA_q=1.000000, mAA_t=0.999605<br>
haiper -> mAA=0.924908</p>
<p><strong>Final metric -> mAA=0.783900</strong></p>
<p>Finally, we had two submissions running on the last day. One of them succeeded and allowed us to get 2nd place, but the other one did not fit the time limit, unexpectedly. Sometimes it is stressful to make a submission on the last day of the competition.</p>
<h1><strong>Helpful Resources</strong></h1>
<p>Special thanks to the authors of the following projects:<br>
• <a href="https://colmap.github.io/" target="_blank">COLMAP</a><br>
• <a href="https://ieeexplore.ieee.org/document/7780814" target="_blank">SuperPoint</a><br>
• <a href="https://arxiv.org/abs/1911.11763" target="_blank">SuperGlue</a><br>
• <a href="https://github.com/ternaus/check_orientation" target="_blank">check_orientation</a></p>
<p>Use of caching allowed us to significantly increase the speed of our solution. We mostly cached 3 heavy things – images resized and packed into CUDA tensors beforehand, SP keypoints, and SG matches saved in memory on the 1st run. Also, if we see that the number of matches for a certain image pair is low, we do not proceed with other image sizes and exit early. Special thanks to <a href="https://www.kaggle.com/vostankovich" target="_blank">@vostankovich</a> for making our solution blazing fast!</p>
|
Image Matching Challenge 2023
|
30th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to everyone for the journey we had throughout the competition. Also, thanks to the organizers for bringing the Image Matching Challenge to Kaggle again.<br>
I will give a brief of my solution to this challenge.</p>
<h1>Architecture</h1>
<p>I came to the competition with very limited time and only a little experience from IMC 2022, so it was essentially the first time I walked through the flow of 3D reconstruction.<br>
I strictly followed the host pipeline, which I split into 3 modules:</p>
<ul>
<li>Global descriptors</li>
<li>Local descriptors (matching)</li>
<li>Reconstruction</li>
</ul>
<p>My main work was focused on improving their efficiency separately.</p>
<h1>Global descriptors</h1>
<p>From my point of view, well-trained models on a <strong>landmark</strong> dataset could provide better descriptors than ImageNet pre-trained backbones.<br>
As a result, I utilized some of the models that I had trained for the <a href="https://www.kaggle.com/competitions/landmark-recognition-2021" target="_blank">Google Landmark 2021</a> competition, and then concatenated their outputs into a single global descriptor:</p>
<pre><code>EfficientNetV2-M  \
EfficientNetV2-L   \
                    |--> [concat] --> [fc]
ResNeSt-           /
ResNeSt-          /
</code></pre>
<h1>Local descriptors</h1>
<p>This year, competitors are required to perform matching in a strict time interval.<br>
I focused on <strong>detector-based</strong> (2-stage) methods only, because I thought I could save time on the keypoint detector part (for example, to match <em>(image_i, image_j)</em> and <em>(image_i, image_k)</em>, semi-dense and dense methods have to "extract" <em>image_i</em> two times). Reading other top-team solutions, it seems this was a wrong decision, since so many good matching models were left out 😭. However, here is the list of methods I tried:<br>
<strong>Detector</strong>: SuperPoint, SuperPoint + FeatureBooster, KeyNetAffNetHardNet, DISK, SiLK, ALIKE.<br>
<strong>Matcher</strong>: SuperGlue, GlueStick, SGMNet, AdaLAM.<br>
Among the detectors, I found that <strong>SuperPoint</strong> gave better results than the others.<br>
Among the matchers, <strong>SuperGlue</strong> showed the best balance of accuracy and efficiency. <strong>GlueStick</strong> is quite good but slower. <strong>SGMNet</strong> is quite fast but less accurate. I then ensembled keypoints and matches from their predictions and filtered out duplicates.</p>
<h1>Reconstruction</h1>
<p>I didn't think I could improve much on this, so I only played with <strong>colmap parameters</strong> a bit to find out a (maybe) better combination than the default one.<br>
Some parameters I changed:</p>
<pre><code>max_num_trials
b_images_freq
b_max_num_iterations
b_points_freq
b_max_num_iterations
init_num_trials
max_num_models
min_model_size
</code></pre>
<p>I could save a little time with a "lighter" combination of parameters while still keeping the accuracy.</p>
<h1>Final thoughts</h1>
<p>I guess mine is quite a simple solution, but it still gave me a silver :D.<br>
However, the knowledge I gained from the competition may be the most valuable thing I took away.<br>
Thank you for your reading and happy Kaggling!</p>
|
Image Matching Challenge 2023
|
39th place solution - SuperGlue + SIFT
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>We used SuperGlue or SIFT on different scenes based on a heuristic and you can see our scores below.</p>
<p>Notebook: <a href="https://www.kaggle.com/code/gunesevitan/image-matching-challenge-2023-inference" target="_blank">https://www.kaggle.com/code/gunesevitan/image-matching-challenge-2023-inference</a><br>
Code: <a href="https://github.com/gunesevitan/image-matching-challenge-2023" target="_blank">https://github.com/gunesevitan/image-matching-challenge-2023</a></p>
<h3>Scene Scores</h3>
<table>
<thead>
<tr>
<th></th>
<th>mAA</th>
<th>mAA Rotation</th>
<th>mAA Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>bike</td>
<td>0.9228</td>
<td>0.9904</td>
<td>0.9228</td>
</tr>
<tr>
<td>chairs</td>
<td>0.9775</td>
<td>0.9916</td>
<td>0.9775</td>
</tr>
<tr>
<td>fountain</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
<tr>
<td>dioscuri</td>
<td>0.5062</td>
<td>0.5220</td>
<td>0.5236</td>
</tr>
<tr>
<td>cyprus</td>
<td>0.6523</td>
<td>0.7887</td>
<td>0.6586</td>
</tr>
<tr>
<td>wall</td>
<td>0.8150</td>
<td>0.9359</td>
<td>0.8317</td>
</tr>
<tr>
<td>kyiv-puppet-theater</td>
<td>0.7704</td>
<td>0.8756</td>
<td>0.7895</td>
</tr>
</tbody>
</table>
<h3>Dataset Scores</h3>
<table>
<thead>
<tr>
<th></th>
<th>mAA</th>
<th>mAA Rotation</th>
<th>mAA Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>haiper</td>
<td>0.9667</td>
<td>0.9994</td>
<td>0.9667</td>
</tr>
<tr>
<td>heritage</td>
<td>0.6578</td>
<td>0.7489</td>
<td>0.6713</td>
</tr>
<tr>
<td>urban</td>
<td>0.7704</td>
<td>0.8756</td>
<td>0.7895</td>
</tr>
</tbody>
</table>
<h3>Global Scores</h3>
<table>
<thead>
<tr>
<th>mAA</th>
<th>mAA Rotation</th>
<th>mAA Translation</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.7983</td>
<td>0.8746</td>
<td>0.8092</td>
</tr>
</tbody>
</table>
<h3>LB Scores</h3>
<p>Public LB Score: <strong>0.415</strong><br>
Private LB Score: <strong>0.465</strong></p>
<h2>SuperPoint & SuperGlue</h2>
<p>SuperPoint and SuperGlue models are used with almost default parameters, except that <code>keypoint_threshold</code> is set to 0.01. We found that SuperGlue works better with raw sizes, but some of the scenes had very large images that didn't fit into GPU memory. We resized images so that the longest edge was 2560 (the maximum that can be used safely on Kaggle) whenever either edge exceeded that number; otherwise, raw sizes are used.</p>
<h2>SIFT</h2>
<p>We initially started with COLMAP's SIFT implementation and it worked pretty well as a baseline. It performed better than the deep models on some scenes with very large images and strong rotations. There was a score trade-off between cyprus and wall when enabling <code>estimate_affine_shape</code> and <code>upright</code>, so we ended up disabling both of them.</p>
<pre><code>sift_extraction_options.max_image_size =
sift_extraction_options.max_num_features =
sift_extraction_options.estimate_affine_shape =
sift_extraction_options.upright =
sift_extraction_options.normalization =
</code></pre>
<h2>Model Selection</h2>
<p>We noticed that large images with EXIF metadata have very high memory consumption, and those are the images that have 90-degree rotations because of the DSLR camera orientation. We added a simple if block that checked the mean memory consumption of each scene: if it was greater than 16 megabytes, we used SIFT; otherwise, we used SuperGlue on that scene.</p>
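<p>A rough sketch of this routing heuristic is shown below. It assumes "memory consumption" refers to the average image file size on disk, which is one plausible reading of the write-up.</p>
<pre><code>import os

def pick_matcher(image_paths, threshold_mb=16):
    # Route heavy-image scenes (often DSLR shots with EXIF rotations) to SIFT.
    mean_mb = sum(os.path.getsize(p) for p in image_paths) / len(image_paths) / 2**20
    return "sift" if mean_mb > threshold_mb else "superglue"
</code></pre>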
<h2>Incremental Mapper</h2>
<p>We used COLMAP's incremental mapper for reconstruction with almost default parameters, except that <code>min_model_size</code> is set to 3. The best reconstruction is selected based on the registered image count, and unregistered images are filled with the scene-mean rotation matrix and translation vector.</p>
<h2>Things that didn't work</h2>
<ul>
<li>OpenCV SIFT (COLMAP's SIFT implementation was working way better for some reason)</li>
<li>DSP-SIFT (domain size pooling was boosting my validation score on local but it was throwing an error on Kaggle)</li>
<li>LoFTR (too slow)</li>
<li>KeyNet, AffNet, HardNet (bad score)</li>
<li>DISK (bad score)</li>
<li>SiLK (too slow)</li>
<li>ASpanFormer (too slow)</li>
<li>Rotation Correction (we probably didn't make it correctly within the limited timeframe)</li>
<li>Two stage matching (it was boosting my validation score but didn't have enough time to fit it into pipeline)</li>
</ul>
|
Image Matching Challenge 2023
|
3rd Place Solution - Significantly Reduced the Fluctuations caused by Randomness!
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, PyCOLMAP is a Python wrapper for COLMAP. PyCOLMAP does not expose many parameters of COLMAP. If you need to set these parameters, you should directly call COLMAP instead of using PyCOLMAP.</p>
<p>Everyone is welcome to join us 👋</p>
<p>We are delighted to participate in this competition and would like to express our gratitude to all the Kaggle staff and sponsors. Congratulations to all the participants. <br>
The team members include 陈鹏、陈建国、阮志伟 and 李伟. I would like to express my sincere gratitude to everyone for the excellent teamwork over the past month. I have thoroughly enjoyed working with all of you, and I am delighted to be a part of this team.</p>
<h1>1 Overview</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Fc8338666681788720689a7ead39a92ab%2F.png?generation=1686757921566293&alt=media" alt="pipeline"></p>
<h1>2 Main pipeline</h1>
<h2>2.1 SP/SG</h2>
<h3>2.1.1 Rotation</h3>
<p>Rotating the image has a significant effect, since SG lacks rotation invariance. Therefore, for each image pair A-B, we fixed image A and rotated image B four times (0, 90, 180, 270). After performing SG matching four times, we selected the rotation angle with the most matches for the next stage. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F33aa1271502732f84ca6420949ed19a6%2F.png?generation=1686758077155244&alt=media" alt="Rotation"></p>
<h3>2.1.2 Resize image</h3>
<p>In the early stage of the competition, we used the original images for keypoint extraction and matching, and we noticed that the mAA score on the heritage cyprus scene was only 0.1. After experiments, we found that if we scaled the images in the cyprus scene to 1920 on the longer side, the mAA score improved to 0.6. Note that we didn't rescale the image size after SP keypoint extraction. Finally, in our application, we rescaled an image if its longer side was larger than 1920, and otherwise kept the original image size for the following SP + SG inference.</p>
<h3>2.1.3 SP/SG setting</h3>
<p>We increased the NMS value of SP from 3 to 8 and set the maximum number of keypoints to 4000.</p>
<h2>2.2 GeoVerification(RANSAC)</h2>
<p>We used USAC_MAGSAC for geometric verification with the following configuration:<br>
<code>cv2.findFundamentalMat(mkpts0, mkpts1, cv2.USAC_MAGSAC, 2, 0.99999, 100000)</code></p>
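<p>A minimal sketch of applying this verification and keeping only the inlier correspondences via the returned mask (not the authors' exact code):</p>
<pre><code>import cv2
import numpy as np

def magsac_filter(mkpts0, mkpts1):
    """mkpts0, mkpts1: (N, 2) float arrays of matched keypoints."""
    F, mask = cv2.findFundamentalMat(
        mkpts0, mkpts1, cv2.USAC_MAGSAC, 2, 0.99999, 100000)
    if F is None or mask is None:
        return mkpts0[:0], mkpts1[:0]      # no valid model found
    inliers = mask.ravel().astype(bool)
    return mkpts0[inliers], mkpts1[inliers]
</code></pre>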
<h2>2.3 Setting Camera Params(Randomness)</h2>
<p>In our experiments, we wanted to eliminate the effect of randomness on our final mAA, since we observed that the same notebook could result in fluctuations of approximately 0.03 on the LB. We found that the randomness comes mainly from the Ceres optimization (<a href="https://github.com/colmap/colmap/issues/404" target="_blank">https://github.com/colmap/colmap/issues/404</a>). <br>
After analysis, the initial values of camera parameters, such as the focal length, had a significant impact on the stability of optimization results.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F9411c508f380417b835bb253b06eae7b%2Fwall_random.png?generation=1686758232496900&alt=media" alt="randomness"><br>
Since the camera focal length information could be extracted from the image EXIF data, we provided the prior camera focal length to the camera and shared the same camera settings among images captured by the same device. Although there were still some fluctuations in the metrics, the majority of the experimental results were consistent. <br>
The figure below shows four submissions of the same final notebook. <strong>On the public LB, our metric fluctuates by no more than 0.004.</strong> Moreover, compared to many other teams, <strong>we do not have significant metric fluctuations between the public LB and the private LB.</strong> Our public LB score was amazingly close to the private score. This might indicate that our method <strong>Significantly Reduced the Fluctuations caused by Randomness!</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Ffd18321289d3745de78691742d0ca28d%2Fonline_maa.png?generation=1686758332640650&alt=media" alt="mAA"></p>
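<p>For reference, a sketch in the spirit of the public baseline helper for deriving such a prior focal length (in pixels) from EXIF is shown below. How the prior is then written into the COLMAP camera is not shown here, and the fallback constant is the commonly used 1.2 prior rather than a value from this write-up.</p>
<pre><code>from PIL import Image, ExifTags

def prior_focal_length_px(image_path):
    image = Image.open(image_path)
    max_size = max(image.size)
    focal_35mm = None
    for tag_id, value in image.getexif().items():
        if ExifTags.TAGS.get(tag_id) == "FocalLengthIn35mmFilm":
            focal_35mm = float(value)
            break
    if focal_35mm is not None and focal_35mm > 0:
        # Convert the 35mm-equivalent focal length to pixels.
        return max_size * focal_35mm / 35.0
    # Common fallback prior when EXIF is missing.
    return 1.2 * max_size
</code></pre>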
<h2>2.4 Mapper</h2>
<p>In the COLMAP reconstruction process, we revised some default parameters of incremental mapping. After the first trial, if the best model's register ratio was below 1, we relaxed the mapper configuration (abs_pose_min_num_inliers=15) and re-ran incremental mapping.</p>
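<p>A rough sketch of this retry logic is given below. It assumes the installed pycolmap build exposes <code>abs_pose_min_num_inliers</code> on its mapper options; if not, the same knob is available through COLMAP's <code>--Mapper.abs_pose_min_num_inliers</code> flag. This is an illustration, not the team's code.</p>
<pre><code>import pycolmap

def run_mapping(database_path, image_dir, output_path, num_images):
    options = pycolmap.IncrementalMapperOptions()

    def best_of(maps):
        # Pick the reconstruction that registered the most images.
        return max(maps.values(), key=lambda m: m.num_reg_images(), default=None)

    maps = pycolmap.incremental_mapping(
        database_path=database_path, image_path=image_dir,
        output_path=output_path, options=options)
    best = best_of(maps)
    if best is None or best.num_reg_images() != num_images:
        # Register ratio below 1: relax the pose threshold and rerun.
        options.abs_pose_min_num_inliers = 15
        maps = pycolmap.incremental_mapping(
            database_path=database_path, image_path=image_dir,
            output_path=output_path, options=options)
        retry_best = best_of(maps)
        if retry_best is not None:
            best = retry_best
    return best
</code></pre>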
<h1>3 Conclusion</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2F47522f20ba1cf5b608a1806f0cf761f4%2Fconclution.png?generation=1686760502421586&alt=media" alt="Conclusion"></p>
<h1>4 Not fully tested</h1>
<h2>4.1 Select image pairs</h2>
<p>Our final solution took nearly 9 hours, since we generated the image matching pairs exhaustively. In order to speed up our solution, we tried Efficientnet_b7, Convnextv2_huge, and DINOv2 to extract image features and generate image matching pairs by feature similarity. In offline experiments, we selected the top N/2 most similar images (N being the total number of images) for each image to form image pairs. Compared to the other methods, DINOv2 performed the best. By incorporating DINOv2 into the image pair selection, we could keep the processing time to 7 hours, and achieved 0.485 on the LB (compared with 0.507 for exhaustive pairing).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Fbfdb729a111a6633b27e573788e09263%2Fdinov2.png?generation=1686760519524067&alt=media" alt="dinov2"></p>
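<p>A generic sketch of similarity-based pair selection follows (plain numpy on precomputed global descriptors, not the authors' DINOv2 code): for each image, keep the N/2 most similar images as its matching pairs.</p>
<pre><code>import numpy as np

def select_pairs(descriptors, top_k=None):
    """descriptors: (N, D) array with one global descriptor per image."""
    n = len(descriptors)
    top_k = top_k or max(1, n // 2)
    desc = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    sim = desc @ desc.T                 # cosine similarity
    np.fill_diagonal(sim, -np.inf)      # never pair an image with itself
    pairs = set()
    for i in range(n):
        for j in np.argsort(-sim[i])[:top_k]:
            pairs.add((min(i, j), max(i, j)))
    return sorted(pairs)
</code></pre>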
<h2>4.2 Pixel-Perfect Structure-from-Motion</h2>
<p>PixSFM gave a good performance improvement in our local validation. However, during online testing, it consistently timed out even when we ran it on scenes with fewer than 50 images. Moreover, it took a lot of time for us to install the environment and run it on Kaggle. Timed out, sad! </p>
<h1>5 Ideas that did not work well</h1>
<h2>5.1 Different detectors and matchers</h2>
<p>We tested DKMv3, GlueStick, and SILK, but none of them was able to surpass SPSG. In our experiments, we observed that DKMv3 performed better than SPSG in challenging scenes, such as those with large viewpoint differences or rotations. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3163774%2Ffae472d5266e521f18f667592eaf3454%2Fdkm.png?generation=1686758763195086&alt=media" alt="dkm"><br>
However, the overall metrics of DKMv3 were not as good as SPSG, which may be due to the operation of sampling dense matches we performed.</p>
<h2>5.2 Crop image</h2>
<p>Just like last year's winning solution, we took a crop after completing the first stage of matching. In the second stage, we performed matching again on the crop and merged all the matching results together.</p>
<h2>5.3 Half precision</h2>
<p>It confused our team a lot. In the Kaggle notebook, we found that half precision improved the results, but on the public LB it returned lower results. </p>
<h2>5.4 Merge matches in four directions</h2>
<p>In our method, we rotated the image (0, 90, 180, 270) and performed matching four times. Instead of keeping all matches from the four directions, we only kept the matches from the best angle, because keeping all matches didn't lead to any improvement in the results.</p>
<h2>5.5 findHomography</h2>
<p>When feature points lie on the same plane (e.g., in a wall scene) or when the camera undergoes pure rotation, the fundamental matrix degenerates. Therefore, in RANSAC, we simultaneously used the findFundamentalMat and the findHomography to calculate the inliers. However, this approach didn't lead to an improvement in the metrics.</p>
<h2>5.6 3D Model Refinement</h2>
<p>After the first trial of mapping, we tried to filter noisy 3D points with large reprojection error or short track length, and then re-bundle-adjust the model. Empirically, this could help export better poses, but... that's life.</p>
|
Image Matching Challenge 2023
|
46th solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our method consists of three simple parts: keypoint matching, structure from motion, and post-processing. I will briefly explain each of them with a focus on the differences from the baseline.</p>
<h2>Keypoint Detection and Matching</h2>
<p>We adopted a method that performs keypoint extraction and matching separately, rather than an end-to-end matching method, so that 3D model points can be shared across many images. Ultimately, we only used KeyNetAffNetHardNet, but if we had more time, we would have liked to ensemble it with SuperPoint-based methods.<br>
By making simple changes listed below, we can improve the score.</p>
<ul>
<li>increasing the number of keypoints (2048→2048*4).</li>
<li>extracting keypoints from different resized images.</li>
<li>using the AdaLAM and OriNet algorithms provided in the code.</li>
<li>using both the Fundamental matrix and the Homography matrix to narrow down the matches based on geometric constraints, and merging the two results.</li>
<li>setting an upper limit on the number of matches per image pair to reduce the computational cost of 3D reconstruction.</li>
</ul>
<h2>Structure from Motion</h2>
<p>We tried multiple minimum match counts and used, as the final result, the model that registered the largest number of images. However, while trying the minimum match counts from the largest one downwards, if the proportion of registered images already exceeded a threshold, we did not try any smaller minimum match count.</p>
<h2>Post-Processing</h2>
<p>Images that could not be estimated by colmap were estimated using cv2.solvePnPRansac.</p>
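<p>A hedged sketch of this post-processing step is shown below: it registers an image that COLMAP missed by solving PnP from 2D-3D correspondences (the bookkeeping that gathers those correspondences from matches.h5 and the 3D points, discussed in the reply below, is omitted). The camera matrix and the RANSAC parameters are assumptions, not values from the write-up.</p>
<pre><code>import cv2
import numpy as np

def register_with_pnp(points2d, points3d, camera_matrix):
    """points2d: (N, 2) image keypoints; points3d: (N, 3) triangulated points."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points3d, dtype=np.float64),
        np.asarray(points2d, dtype=np.float64),
        camera_matrix, None,               # assume no lens distortion
        reprojectionError=8.0, iterationsCount=10000)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)             # world-to-camera rotation
    return R, tvec.reshape(3)
</code></pre>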
<p>Thank you for your question.<br>
1) We narrowed down the matches using the two methods and simply merged them; of course, matches found by both methods were treated as one.<br>
2) This was accomplished by using matches.h5, images.txt and points3d.txt to establish the correspondences. It was a bit tedious, but the files contained the necessary information.<br>
3) Both methods improved the score by about 0.01. Method 2 had a smaller contribution than I expected.</p>
|
Image Matching Challenge 2023
|
4th place solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to you, too!<br>
And thank you for the supplement and pointing out the issues.</p>
<p>As you mentioned, the lack of internal parameters of the camera was indeed the main reason that made calculations difficult. <br>
Actually, I tried something like this: in the output of Colmap's incremental mapping, there are (estimated) camera matrices for registered images. For images that were not registered, I assumed the camera matrices were the same as those of the registered images and fed them into the ComputeEssentialMatrix function (found in <a href="https://www.kaggle.com/code/eduardtrulls/imc2022-training-data" target="_blank">this notebook</a>). However, it was rarely the case that the result fell within the threshold of the evaluation function…<br>
Reading your explanation, I've understood an additional reason why it didn't work out.</p>
<p>Thank you for your kind words of congratulations. I'm truly happy! </p>
<p>Here are the tables to compare the effects of model choice onto local score.<br>
Since I haven't evaluated all models at the same time, the benchmark values differ. This is why the table is not consolidated into one…</p>
<table>
<thead>
<tr>
<th>Exp. 1</th>
<th>Local mAA</th>
</tr>
</thead>
<tbody>
<tr>
<td>baseline (disk+SuperGlue+DKM)</td>
<td>0.613</td>
</tr>
<tr>
<td>+LoFTR</td>
<td>0.604</td>
</tr>
<tr>
<td>+QuadTreeAttention</td>
<td>0.609</td>
</tr>
<tr>
<td>OpenGlue instead of SuperGlue</td>
<td>0.504</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Exp. 2</th>
<th>Local mAA</th>
</tr>
</thead>
<tbody>
<tr>
<td>baseline (only SuperGlue)</td>
<td>0.740</td>
</tr>
<tr>
<td>+DKM</td>
<td>0.783</td>
</tr>
<tr>
<td>+SILK</td>
<td>0.749</td>
</tr>
<tr>
<td>+DKM+SILK</td>
<td>0.768</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Exp. 3</th>
<th>Local mAA</th>
</tr>
</thead>
<tbody>
<tr>
<td>baseline (only SuperGlue)</td>
<td>0.792</td>
</tr>
<tr>
<td>+DKM</td>
<td>0.848</td>
</tr>
<tr>
<td>+disk</td>
<td>0.802</td>
</tr>
<tr>
<td>+SE2-LoFTR</td>
<td>0.750</td>
</tr>
</tbody>
</table>
<p>In my situation, DKM is indeed the best for ensembling, and Disk and SILK are the second best.</p>
<p>Thank you very much for your kind comments, and introduction to RoMa! <br>
Although I haven't tested it, I expect it to work as well as DKM.<br>
I've provided the tables of local mAA at the reply to old-ufo. Please refer to it.</p>
<p>(June 21st: Added additional explanation.)</p>
<p>Firstly, I'd like to express my gratitude to the hosts and Kaggle staff for conducting the IMC 2023 competition. The task was both exciting and challenging, which made it a pleasure to engage with over the two months.</p>
<h2>Overview</h2>
<p>SuperPoint/SuperGlue proved to be exceptionally accurate and quick.<br>
My code is partly a combination of the baseline provided by the host and <a href="https://www.kaggle.com/code/chankhavu/loftr-superglue-dkm-with-inspiration" target="_blank">the notebook by </a><a href="https://www.kaggle.com/chankhavu" target="_blank">@chankhavu</a> at IMC 2022. I extend my appreciation to both for providing the codes.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F1036693590b6655d8d48d588bfd69e9c%2FIMC-solution.png?generation=1686658721858306&alt=media" alt=""></p>
<h2>Main pipeline</h2>
<h3>Screening process based on the number of matches</h3>
<p>Considering the large number of image combinations, an effective screening method was necessary. I noticed that the number of matches made by SG is significantly low (<10) when an image pair is unsuitable for stereo matching. Consequently, I decided to bypass the process if the number of matches achieved by SG (longside = 1200) fell below a certain threshold (in this case, 30 matches). This strategy significantly reduced processing time, allowing for more pair trials within the given timeframe, leading to a noticeable improvement (LB: +0.08).</p>
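<p>The screening rule boils down to something like the sketch below, where <code>quick_superglue</code> is a hypothetical low-resolution SP/SG pass that returns the number of matches; it is an illustration, not the author's code.</p>
<pre><code>def should_keep_pair(img_a, img_b, quick_superglue, min_matches=30):
    # Skip the expensive full pipeline if a cheap low-res pass finds too few matches.
    return quick_superglue(img_a, img_b, longest_edge=1200) >= min_matches
</code></pre>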
<h3>Rotation during the screening process</h3>
<p>Procuring meaningful matches from pairs with unsorted image orientations, such as those found in Cyprus, proved to be challenging. Therefore, I incorporated a rotation process into the screening procedure, resulting in further improvement (LB: +0.04).</p>
<h3>Image splitting</h3>
<p>Each image was divided into four sections, each generating its own set of keypoints, followed by the execution of matchings across all pair combinations (4x4 = 16 pairs) with a batched process for SP/SG (longside = 1400). <br>
With image splitting, the number of matches increased to almost 3 times that of the single-image case, as shown below,<br>
and this gain does not depend on the original size of the image.<br>
Simply increasing the input size of an image cannot achieve this.<br>
I obtained similar benefits on the other scenes.<br>
This method proved to be more effective and time-efficient than traditional TTA in my case (LB: +0.01~0.02).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F59ee7b75b96f413775ba3f8f7e6fc6c0%2Fsplit_2.png?generation=1687299587316264&alt=media" alt=""></p>
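<p>A sketch of the splitting step is given below (assumptions: images are numpy arrays, keypoints are (x, y) arrays, and <code>extract_keypoints</code> is a hypothetical per-tile SuperPoint wrapper). The key detail is shifting each tile's keypoints back into full-image coordinates before matching.</p>
<pre><code>import numpy as np

def split_and_extract(image, extract_keypoints):
    h, w = image.shape[:2]
    kpts, descs = [], []
    for top, bottom in [(0, h // 2), (h // 2, h)]:
        for left, right in [(0, w // 2), (w // 2, w)]:
            k, d = extract_keypoints(image[top:bottom, left:right])
            kpts.append(k + np.array([left, top]))   # back to full-image coords
            descs.append(d)
    return np.concatenate(kpts), np.concatenate(descs)
</code></pre>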
<h3>Ensembling with DKM</h3>
<p>After comparing various models, DKM v3 emerged as a relatively lightweight and effective choice when used in conjunction with SG (LB: +0.01~0.04).<br>
SuperGlue could not create correct matches at the stair in the image pair below (see the yellow region), while it provided good matches at the other objects such as the arch and pillar.<br>
On the other hand, DKM can detect the correct corresponding points of the stair, which suggests that these matchers are complementary to each other.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9249230%2F4a040a43881adc0532ccc0e5b4878c17%2FDKM_2.png?generation=1687299538792519&alt=media" alt=""></p>
<h3>Parallel execution of separate matching and mapping processes</h3>
<p>Both matching and mapping/reconstruction are time-intensive tasks. However, the former utilizes both the GPU and a single CPU, while the latter only requires CPU resources. Therefore, implementing parallel processing using the queue library improved time efficiency by approximately 20~30%. This concept was inspired by a gold-prize solution at IMC 2022. <br>
Remember to set <code>mapper_options.num_threads = 1</code>, which can also help avoid OOM during reconstruction.</p>
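<p>A rough sketch of this producer/consumer arrangement with the standard-library <code>queue</code> and <code>threading</code> modules is shown below (an illustration with hypothetical <code>match_scene</code>/<code>reconstruct_scene</code> callables, not the author's code).</p>
<pre><code>import queue
import threading

def run_pipeline(scenes, match_scene, reconstruct_scene):
    work = queue.Queue(maxsize=1)   # small buffer keeps the two stages overlapped

    def mapper_worker():
        while True:
            item = work.get()
            if item is None:        # sentinel: no more scenes to map
                break
            scene, matches = item
            reconstruct_scene(scene, matches)   # CPU-bound COLMAP mapping

    thread = threading.Thread(target=mapper_worker)
    thread.start()
    for scene in scenes:
        matches = match_scene(scene)            # GPU-bound matching
        work.put((scene, matches))              # hand off to the mapping thread
    work.put(None)
    thread.join()
</code></pre>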
<h2>The final score:</h2>
<p>urban / kyiv-puppet-theater (26 images, 325 pairs) -> mAA=0.921538, mAA_q=0.987385, mAA_t=0.921538<br>
urban -> mAA=0.921538</p>
<p>heritage / dioscuri (174 images, 15051 pairs) -> mAA=0.594950, mAA_q=0.689662, mAA_t=0.602279<br>
heritage / cyprus (30 images, 435 pairs) -> mAA=0.706437, mAA_q=0.727126, mAA_t=0.724828<br>
heritage / wall (43 images, 903 pairs) -> mAA=0.805980, mAA_q=0.935105, mAA_t=0.824917<br>
heritage -> mAA=0.702456</p>
<p>haiper / bike (15 images, 105 pairs) -> mAA=0.933333, mAA_q=0.999048, mAA_t=0.933333<br>
haiper / chairs (16 images, 120 pairs) -> mAA=0.981667, mAA_q=0.999167, mAA_t=0.981667<br>
haiper / fountain (23 images, 253 pairs) -> mAA=0.999605, mAA_q=1.000000, mAA_t=0.999605<br>
haiper -> mAA=0.971535</p>
<p><strong>Final metric -> mAA=0.865176</strong></p>
<p><strong>Public LB: 0.471</strong><br>
<strong>Private LB: 0.534</strong></p>
<p>It should be noted that the selected submission had the highest local and public scores, and also resulted in the best private score among my submissions. While I struggled with the randomness of Colmap, I now recognize that the dataset was useful and served as a valuable reference in aiming for the correct goal.</p>
<h2>Ideas that did not work well</h2>
<ul>
<li>Other models such as LoFTR, SE2-LoFTR, disk, QuadTreeAttention, OpenGlue, and SILK were tested. It was revealed that the combination of SP/SG and DKM consistently outperformed them in terms of both speed and performance. (The differences in local mAA scores have been provided in the comment section.)</li>
<li>Employing USAC_MAGSAC prior to reconstruction occasionally shortened the reconstruction duration, but the effect was minor with the parallel execution, where the matching process is rate-determining. Also, it never improved the score in my case.</li>
<li>Implementing CLAHE rendered my pipeline unstable and less robust. While it proved effective in some scenes, it often deteriorated the accuracy in others and, on the whole, tended to decrease the score.</li>
<li>Other forms of TTA (resolution, flip, 10deg-rotation, crop) provided only minimal improvement while consuming significant time. It appeared more beneficial to experiment with numerous pairs than to utilize TTA.</li>
<li>I attempted to determine the R and T for each image that could not be registered with Colmap, employing the same methodology as IMC2022. However, this approach failed to improve the score. The reason seems straightforward: with either method, if matches cannot be identified, there is little that can be done. (On the other hand, adding the R and T values obtained from reconstructions other than best_idx to the submission also slightly improved the score.)</li>
</ul>
|
Image Matching Challenge 2023
|
5th place solution: kNN shortlist and rotation correction
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Understood. I am curious about those numbers too. I will have it ready by the workshop!<br>
In my memory, SPSG, crop and rotation correction were particularly important. I think the other changes were not that significant.</p>
<p>Great! You are right. This may be my mistake. I focused too much on saving GPU runtime. In fact the computational cost of keypoint extraction is relatively small.</p>
<p>Thanks for the question! Just updated the description. I used 5 matchers in a large scene. 9 matchers for small scenes.</p>
<p>I will be in Vancouver for the CVPR IMW. I am very much looking forward to learning from you all!</p>
<h2>Overview</h2>
<p>Basically, the stereo matching part is the same as that of the winning solution in 2022, <a href="https://www.kaggle.com/competitions/image-matching-challenge-2022/discussion/329131" target="_blank">the crop and multi-scale ensemble composition by DBSCAN</a>. In addition to this, the improvement of the shortlist, the improvement of computational efficiency, and rotation correction are the main features of my solution.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F522450db593ec4545cbb3ab6d3f59cd6%2Fmatcher.png?generation=1686648814730059&alt=media" alt=""></p>
<h2>kNN shortlist + completion</h2>
<p>In the baseline, the euclidean distance of the global descriptor is used to filter stereo matching candidates by a threshold value. However, since the scale of the distance differs among datasets, I thought it would not be appropriate to use a common threshold value. It’s not robust to unknown data sets.</p>
<p>Since the number of images per scene is only about 200 at most, stereo matching and RANSAC verification can be performed <strong>using a very lightweight model</strong> for all image combinations. The shortlist was generated by extracting the k nearest neighbors for each image based on the number of inliers.</p>
<p>The very lightweight model may not match well enough: in some scenes, certain images do not match any of the other images. This is fatal because it prevents camera pose estimation. Therefore, image pairs were added to the shortlist as a complement, based on the number of matching keypoints, so that every image has at least k neighbors.</p>
<p><strong>The very lightweight model</strong>: I used SPSG as the very lightweight model. The number of keypoints was set to 512 for efficiency as shown in Fig. 11, Appendix B of superglue's paper.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F6f82729162d57442693ff3fa2b408209%2Finference_time.png?generation=1686630524341903&alt=media" alt=""></p>
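<p>A simplified sketch of the shortlist construction, assuming <code>inlier_counts</code> and <code>match_counts</code> are N×N matrices already computed with the very lightweight matcher (the names and structure are mine, not the author's):</p>
<pre><code>import numpy as np

def build_shortlist(inlier_counts, match_counts, k):
    """kNN shortlist from RANSAC inlier counts, completed with raw matching-keypoint
    counts so that every image keeps at least k neighbours."""
    n = inlier_counts.shape[0]
    pairs = set()

    # k nearest neighbours by number of RANSAC inliers
    for i in range(n):
        order = np.argsort(-inlier_counts[i])
        for j in [j for j in order if j != i][:k]:
            pairs.add((min(i, j), max(i, j)))

    # completion: guarantee at least k neighbours per image using matching-keypoint counts
    degree = np.zeros(n, dtype=int)
    for i, j in pairs:
        degree[i] += 1
        degree[j] += 1
    for i in range(n):
        if degree[i] >= k:
            continue
        for j in np.argsort(-match_counts[i]):
            if j == i or (min(i, j), max(i, j)) in pairs:
                continue
            pairs.add((min(i, j), max(i, j)))
            degree[i] += 1
            degree[j] += 1
            if degree[i] >= k:
                break
    return sorted(pairs)</code></pre>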
<h2>Rotation correction</h2>
<p>A notable weakness of superglue's pretrained model is its lack of robustness to rotation. This is especially noticeable in the heritage dataset of the IMC2023 training set, cyprus scene. I addressed this issue without training by just rotating images.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F212bfdaf476e1ec79efa5692162d99d2%2Fcyprus1.png?generation=1686630454984938&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F75ce985499c56086b721df6a3e63e5d1%2Fcyprus2.png?generation=1686630470817962&alt=media" alt=""></p>
<p>Specifically, 4 different rotated images were prepared and matched using the very lightweight model. All images were resized to the same size (840x840) and processed in one batch.</p>
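<p>A minimal sketch of this rotation search, assuming a hypothetical <code>lightweight_match</code> helper that returns the number of verified matches for a pair (batching details are omitted; only one image of the pair is rotated):</p>
<pre><code>import numpy as np

ROTATIONS = [0, 90, 180, 270]   # candidate rotations for the second image; 0 means "as is"

def estimate_relative_rotation(img_a, img_b, lightweight_match):
    """Try the four 90-degree rotations of img_b against the unrotated img_a and
    keep the rotation that yields the most matches from the lightweight matcher."""
    best_rot, best_matches = 0, -1
    for rot in ROTATIONS:
        rotated_b = np.rot90(img_b, k=rot // 90)
        n_matches = lightweight_match(img_a, rotated_b)   # hypothetical helper
        if n_matches > best_matches:
            best_rot, best_matches = rot, n_matches
    return best_rot  # rotate img_b by this angle before the main SP/SG matcher</code></pre>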
<h2>Parallel Execution</h2>
<p>GPU-intensive and CPU-intensive tasks can be executed in parallel for efficiency. Specifically, COLMAP BA is a CPU-intensive task, but stereo matching is a GPU-intensive task. Therefore, I implemented COLMAP processing in a separate thread and run it in parallel with stereo matching.</p>
<h2>Local Validation</h2>
<p>Only some difficult scenes were verified in the local environment. In particular, dioscuri, which has 174 images, was used as a reference to adjust the algorithm so that it would not time out.</p>
<table>
<thead>
<tr>
<th>scene</th>
<th># images</th>
<th>score</th>
<th>time (local env)</th>
</tr>
</thead>
<tbody>
<tr>
<td>bike</td>
<td>15</td>
<td>0.9342</td>
<td>134 s</td>
</tr>
<tr>
<td>kyiv-puppet-theater</td>
<td>27</td>
<td>0.7781</td>
<td>291 s</td>
</tr>
<tr>
<td>cyprus</td>
<td>30</td>
<td>0.6239</td>
<td>312 s</td>
</tr>
<tr>
<td>wall</td>
<td>43</td>
<td>0.4753</td>
<td>851 s</td>
</tr>
<tr>
<td>dioscuri</td>
<td>174</td>
<td>0.8775</td>
<td>2045 s</td>
</tr>
</tbody>
</table>
<h2>Processing time</h2>
<p>My best submission run time was <strong>7 hours and 40 minutes</strong>. This is well under the time limit of 9 hours. Therefore, I added LoFTR to the ensemble on the last day, but the Kaggle server broke and the notebook is still running. If the notebook had been successfully processed, my best submission could have been a little better.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F538925ab8b7ece79f2630f2a9f18708e%2Ferrors.png?generation=1686630579463016&alt=media" alt=""></p>
<p>The issue of not being able to submit on the last day is very stressful. Something similar occurred with Google Landmark Recognition in 2019. I pray that this kind of trouble will not occur again🙏</p>
<h2>Things that didn't work</h2>
<p>Too many things to try to do alone :-)</p>
<ul>
<li>TTA … no significant improvement</li>
<li>Pixel-Perfect SfM .. I got a slight improvement in my local environment but gave up because I couldn't set up ceres in my kaggle notebook</li>
<li>TensorRT .. I had to downgrade from PyTorch 2.0, but the setup was successful. I can't say with much certainty, but at least my experiments did not improve execution speed.</li>
<li>Half precision … It was about 10%(?) faster, but mAA was worse, so it was not adopted in the end.</li>
<li>other matchers … Tried DKMv3, LoFTR, Silk, but finally used only SPSG and MatchFormer due to high mAA.</li>
<li>OpenGLUE … Trained DISK+OpenGLUE and SuperPoint+OpenGLUE on the MegaDepth dataset but did not reach the scores of the SPSG pretrained model. Converted phototourist dataset to OpenGLUE format and built MegaDepth + phototourist dataset, but did not validate it because of lack of time.</li>
<li>Incremental Mapper parameter tuning … contributed to the stable high score in the local environment. Specifically, I reduced the max refinement change of BA and increased the max num iteration. However, all the submissions to kaggle were out of memory, so I did not check the improvement on the leaderboard.</li>
</ul>
<p>Thanks for your comment! It was very encouraging to see you at the top of the leaderboard early in the contest.</p>
<p>Also, thanks a lot for the question! I realize that my explanation is not very accurate. I will fix it later. Only one of the pair rotates. One of the four is not rotated. This very lightweight model is only used to compensate for rotation and build a shortlist. The keypoints calculated here are not used in the two-stage matcher.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6388%2F68b3c0c1944e1a4f430db4feec0e6b42%2Fkpts.png?generation=1686639018524622&alt=media" alt=""></p>
<p>Thank you!! Congratulations to you too! I look forward to learning from your team's solutions.</p>
<p>Thanks for your comment! Congratulations <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a> and your teammates!! I was confident that I could move up, but I didn't think it would be this big jump. I still can't stop my excitement :-)</p>
<p>Great findings! Thanks for sharing the dataset! Congratulations on your Gold Medal!<br>
I enjoyed your team name, TLE forever. I saw endless TLEs and OOMs too.</p>
|
Image Matching Challenge 2023
|
6th Place Solution - [ --- ]AffNetHardNet8 + AdaLAM
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am actually new to SfM but I found something from the Colmap source code that may answer your question:</p>
<ol>
<li>When the "estimate_focal_length" option is True, the absolute pose estimator will quadratically sample some <a href="https://github.com/colmap/colmap/blob/43de802cfb3ed2bd155150e7e5e3e8c8dd5aaa3e/src/estimators/pose.h#LL56C1-L56C1" target="_blank">(30 by default)</a> scale factors and select the best one to correct the camera model <a href="https://github.com/colmap/colmap/blob/43de802cfb3ed2bd155150e7e5e3e8c8dd5aaa3e/src/estimators/pose.cc#L88" target="_blank">(Source Code)</a>. I think if the given initial focal length is off by too much and the number of samples is not enough, the "best" estimated camera model won't be good enough for the <a href="https://github.com/colmap/colmap/blob/43de802cfb3ed2bd155150e7e5e3e8c8dd5aaa3e/src/estimators/pose.cc#L50" target="_blank">absolute pose estimation</a> and refinement.</li>
<li>Setting "prior_focal_length" flag to 1 will disable the sampling (<em>estimate_focal_length</em>) mentioned above and only use the given camera model in absolute pose estimation <a href="https://github.com/colmap/colmap/blob/43de802cfb3ed2bd155150e7e5e3e8c8dd5aaa3e/src/sfm/incremental_mapper.cc#L430" target="_blank">(Source Code)</a>. This won't affect the refinement options.</li>
<li>Images from a camera with a prior focal length will have a higher priority to be selected as the initial pair. <a href="https://github.com/colmap/colmap/blob/43de802cfb3ed2bd155150e7e5e3e8c8dd5aaa3e/src/sfm/incremental_mapper.cc#L845" target="_blank">Source Code</a>.</li>
</ol>
<p>I would like to express my thanks to the competition organizers and Kaggle staff for hosting this amazing competition. I would also like to thank the competition hosts (@oldufo , <a href="https://www.kaggle.com/eduardtrulls" target="_blank">@eduardtrulls</a>) for providing helpful materials and a great example submission, which allowed me to quickly get up to speed with the competition.<br>
Participating in this competition and IMC2022 has taught me a lot about image matching, Kornia, and SfM. </p>
<h1>1. Overview</h1>
<p>My final solution was based on the submission example, but I pushed the limits of KeyNetAffNetHardNet + AdaLAM matcher with Colmap. </p>
<ul>
<li>I implemented four local feature detectors that are similar to KeynetAffnetHardnet (<a href="https://kornia.readthedocs.io/en/latest/feature.html#kornia.feature.GFTTAffNetHardNet" target="_blank">Example from Kornia</a>), and each detector extracted local features for all the images (with limited size) in a scene. See more info about other non-learning based keypoint detectors <a href="https://kornia.readthedocs.io/en/latest/feature.html" target="_blank">here</a>. </li>
<li>Using HardNet8 rather than HardNet although I didn't see a performance difference.</li>
<li>AdaLAM matcher was used to match all the pairs and get the matches for the scene. Only pairs with an average matching distance < 0.5 were kept. 'force_seed_mnn' was set to True and 'ransac_iters' was increased to 256 compared with the submission example.</li>
<li>The matches from the four detectors were then combined.</li>
<li>I used USAC_MAGSAC to get the fundamental matrix for each pair and only kept the inlier matches. The results are written to the database as two-view geometry. <code>incremental_mapping</code> is then performed without <code>match_exhaustive</code>.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2Fbb53c3f1e7ebad9394a673a421b47d11%2Fdetector.png?generation=1686695228576401&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F051e9413d0ce9cd8357723ba7efab1be%2Ffull_pipeline.png?generation=1686695522836099&alt=media"></p>
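<p>A rough sketch of one detector + AdaLAM matching step with Kornia, assuming the <code>kornia.feature</code> API used in the example submission; the parameters shown here are illustrative rather than the exact ones from the final solution:</p>
<pre><code>import torch
import kornia.feature as KF

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# One of several similar local-feature detectors (KeyNet / GFTT variants + AffNet + HardNet)
detector = KF.KeyNetAffNetHardNet(num_features=8000, upright=False, device=device).eval()

@torch.no_grad()
def extract(gray_image):                      # gray_image: (1, 1, H, W) float tensor in [0, 1]
    lafs, resps, descs = detector(gray_image)
    return lafs, descs

@torch.no_grad()
def match_with_adalam(feat1, feat2, hw1, hw2):  # hw1/hw2: image (H, W) as tensors
    lafs1, descs1 = feat1
    lafs2, descs2 = feat2
    # AdaLAM filters tentative matches using local affine consistency
    dists, idxs = KF.match_adalam(descs1[0], descs2[0], lafs1, lafs2, hw1=hw1, hw2=hw2)
    # keep the pair only if the average descriptor distance is small enough
    keep_pair = len(dists) > 0 and dists.mean().item() < 0.5
    return idxs, keep_pair</code></pre>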
<h2>1.1. How I reached this solution</h2>
<p>After running the submission example a few times, I thought that the key to finding a good solution was to identify a good shortlist of image pairs and apply similar solutions from IMC2022. I noticed that the KeyNetAffNetHardNet solution only needed to run the slower local feature detection once for each image, and the matching using AdaLAM matcher was fast enough to match each pair. Therefore, I decided to use KeyNetAffNetHardNet and the matching distance of AdaLAM to find the matching/pairing shortlist. However, with some optimizations, it ended up being the final solution in the last week.</p>
<h2>1.2. KeyNetAffNetHardNet</h2>
<p>I was surprised by the performance of KeyNetAffNetHardNet after using AdaLAM to match all possible pairs. With an increased number of features to 8000 and a maximum longer edge of 1600, I was able to achieve a score of 0.414/0.457 (Public/Private).</p>
<p>During my experimentation, I discovered that the matching distance can be used to determine whether two images have overlapping areas or not. See my <a href="https://www.kaggle.com/code/maxchen303/imc2023-test-notes" target="_blank">Test Notebook</a> for some experiments using KeyNetAffNetHardNet + Adalam for pairing.<br>
Disabling the Upright option (enabling OriNet) can make the Adalam matching more robust in handling rotated images.</p>
<h1>2. Implementation details with Colmap</h1>
<h2>2.1. Using focal length</h2>
<p>In the submission example, there is a section that extracts the focal length from image exif. However, the "FocalLengthIn35mmFilm" property may not be found using the <code>image.getexif()</code> method. In some cases, the focal length information may exist in the <code>exif_ifd</code> (as described in this <a href="https://stackoverflow.com/questions/68033479/how-to-show-all-the-metadata-about-images" target="_blank">Stack Overflow post</a>). <br>
To extract the focal length from the <code>exif_ifd</code>, I used the following code:</p>
<pre><code>from PIL import Image

image = Image.open(image_path)
exif = image.getexif()
exif_ifd = exif.get_ifd(0x8769)  # 0x8769 is the Exif IFD pointer tag
exif.update(exif_ifd)            # merge the Exif IFD (incl. focal length tags) into the main dict
</code></pre>
<p>If the focal length is found in the exif, I also set the "prior_focal_length" flag to true when adding the camera to the database. This improved the mAA for some scenes (mAA of urban / kyiv-puppet-theater 0.764 -> 0.812).</p>
<p>According to the <a href="https://colmap.github.io/tutorial.html#database-management" target="_blank">Colmap tutorial</a>:</p>
<blockquote>
<p>By setting the prior_focal_length flag to 0 or 1, you can give a hint whether the reconstruction algorithm should trust the focal length value.</p>
</blockquote>
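<p>A hedged sketch of how such a focal-length prior might be registered, assuming <code>db</code> is a <code>COLMAPDatabase</code> from COLMAP's <code>scripts/python/database.py</code> and using the common 35mm-equivalent conversion; the fallback value and camera model are illustrative, not necessarily what the author used:</p>
<pre><code># Convert a 35mm-equivalent focal length to pixels (common approximation) and
# register the camera with prior_focal_length=1 so Colmap trusts the value.
focal_35mm = exif.get(0xA405)            # 0xA405 = FocalLengthIn35mmFilm
width, height = image.size

if focal_35mm is not None and float(focal_35mm) > 0:
    focal_px = float(focal_35mm) / 35.0 * max(width, height)
    prior_focal_length = 1
else:
    focal_px = 1.2 * max(width, height)  # rough fallback guess when EXIF is missing
    prior_focal_length = 0

# db: COLMAPDatabase from colmap's scripts/python/database.py
# camera model 2 = SIMPLE_RADIAL with params [f, cx, cy, k]
camera_id = db.add_camera(
    model=2,
    width=width,
    height=height,
    params=[focal_px, width / 2.0, height / 2.0, 0.0],
    prior_focal_length=prior_focal_length,
)</code></pre>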
<h2>2.2. Handling Randomness: Bypass match_exhaustive()</h2>
<p>Instead of calling <code>match_exhaustive</code>, whose geometric verification is a source of run-to-run randomness, the MAGSAC-verified matches were written to the database as two-view geometry, and <code>incremental_mapping</code> was run directly on that database.</p>
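<p>A minimal sketch of this step, assuming keypoints and matches are already available as numpy arrays and <code>db</code> is a <code>COLMAPDatabase</code> handle; the MAGSAC thresholds are illustrative:</p>
<pre><code>import cv2
import numpy as np

def write_verified_pair(db, image_id1, image_id2, kpts1, kpts2, matches):
    """Verify matches with USAC_MAGSAC and store the inliers as two-view geometry,
    so that match_exhaustive (and its nondeterministic verification) can be skipped
    before incremental_mapping."""
    if len(matches) < 8:
        return
    pts1 = kpts1[matches[:, 0]]
    pts2 = kpts2[matches[:, 1]]
    F, inlier_mask = cv2.findFundamentalMat(
        pts1, pts2, cv2.USAC_MAGSAC, 0.25, 0.9999, 10000)
    if F is None or F.shape != (3, 3) or inlier_mask is None:
        return
    inliers = matches[inlier_mask.ravel().astype(bool)]
    db.add_matches(image_id1, image_id2, inliers)
    # config=2 is the default two-view geometry config in COLMAP's database.py
    db.add_two_view_geometry(image_id1, image_id2, inliers, F=F, config=2)</code></pre>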
<h2>2.3. Multiprocessing</h2>
<p>I discovered that adding matches from other AffNetHardNet detectors could increase the overall mAA score. To fit more similar detectors into the process, I used multiprocessing to ensure that the second CPU core was fully utilized. The final solution takes about 7.5 hours to run using four detectors with edges no longer than 1600.</p>
<p>To optimize the process, I sorted all the scenes by the number of images before processing, from greater to smaller. For each scene, there were two stages: matching to generate the database and reconstruction from the database. The reconstruction of different scenes is allowed to run in parallel with the matchings. Ideally, the pipeline should look like the following:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2Fc9cbe4d2f30c6874e597dd5949c2b62e%2Fmultiprocessing.png?generation=1686696885412256&alt=media"></p>
<h1>3. Other things I tried</h1>
<ul>
<li>After discovering that KeyNetAffNetHardNet was good at finding matching/pairing shortlists, I spent a lot of time exploring pair-wise matching such as LoFTR, SE-LoFTR, and DKMv3. However, these methods were slow even just running on the selected pairs and did not improve the mAAs in my implementations.</li>
<li>Use AdaLAM matcher with other keypoint detectors such as DISK, SiLK, and ALIKE: I also used OriNet and AffNet to convert the keypoints from these detectors to Lafs and get the HardNet8 descriptors. I then tried matching on the native descriptors, HardNet8 descriptors, or concatenated descriptors. This approach seemed to perform better than matching on the native descriptors without Lafs. If I had more time, I would like to explore this direction further.</li>
<li>Running local feature detection without resizing by splitting large images into smaller images that could fit into the GPU memory. However, this approach was extremely slow and did not improve performance. I found that the mAAs did not improve beyond a certain resolution. Allowing the longer edge to be 1600 or 2048 yielded similar scores.</li>
<li>Tuning Colmap mapping options: Too many options and difficult to evaluate the outcomes.</li>
</ul>
<h1>4. Local Validation Score</h1>
<p><em>(Per-scene local validation scores — mAA, mAA_q, mAA_t — for kyiv-puppet-theater, dioscuri, cyprus, wall, bike, chairs, and fountain, plus the overall mAA metric and runtime.)</em></p>
|
Image Matching Challenge 2023
|
9th Solution
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Image Matching Challenge 2023
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks <a href="https://www.kaggle.com/gunesevitan" target="_blank">@gunesevitan</a>! Yes, you can check this wheel for pycolmap 0.4.0. Basically, changing everything inside to one thread and skipping <a href="https://github.com/colmap/pycolmap/blob/743a4ac305183f96d2a4cfce7c7f6418b31b8598/pipeline/match_features.cc#L76" target="_blank">geometric verification</a> made things deterministic, because that process is somewhat random; we then replaced it with MAGSAC and provided a specific seed via the USAC parameters. <br>
<a href="https://www.kaggle.com/datasets/jaafarmahmoud1/pycolmap-040" target="_blank">https://www.kaggle.com/datasets/jaafarmahmoud1/pycolmap-040</a></p>
<p>Thanks! Great work as well! Congrats on the 2nd place🎉! </p>
<p>Great thanks to the organizers and Kaggle staff for this amazing competition.<br>
Our solution shares architectural similarities with the baseline provided by the organizers. <br>
Firstly, we addressed the issue of rotation in-variance by implementing a rotation model to standardize the orientation of input images. Next, we employed a neural network for the retrieval task, enabling us to extract matching pairs for the feature extraction and matching process. This process generated a database which served as the input for the incremental mapping in Colmap.</p>
<h2>Orientation Model</h2>
<p>We would like to express our gratitude to <a href="https://www.kaggle.com/iglovikov" target="_blank">@iglovikov</a> for developing this great <a href="https://github.com/ternaus/check_orientation">model</a> , which was trained on a large dataset and exhibited good performance in both (LB) and (CV) evaluations. We made a slight modification to this model, which we refer to as self-ensembling.<br>
In our proposed adjustment, we utilized the model iteratively on the query image and its different rotated versions. Subsequently, we tuned the threshold by conducting validation tests on more than 1500 images with different rotation angles. Through experimentation, we discovered that using a threshold between 0.8 and 0.87 might result in incorrect orientation predictions. To address this issue, we applied self-ensembling by further rotating the image and checking if the predicted class probability fell within this range of thresholds. We repeated this last step twice to ensure accuracy.</p>
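<p>A hedged sketch of this self-ensembling logic, with a hypothetical <code>predict_rotation_probs</code> standing in for the orientation model; the rotation sign convention is illustrative only:</p>
<pre><code>import numpy as np

AMBIGUOUS_LOW, AMBIGUOUS_HIGH = 0.80, 0.87   # ambiguity band found on rotated validation images

def predict_orientation(image, predict_rotation_probs, max_retries=2):
    """Self-ensembling: if the top class probability falls inside the ambiguous band,
    rotate the image a further 90 degrees and query again (at most twice)."""
    extra_k = 0                                   # how many extra 90-degree rotations were applied
    for _ in range(max_retries + 1):
        probs = predict_rotation_probs(np.rot90(image, k=extra_k))
        best = int(np.argmax(probs))              # predicted rotation class of the *rotated* image
        if not (AMBIGUOUS_LOW <= probs[best] <= AMBIGUOUS_HIGH):
            break
        extra_k += 1
    # undo the extra rotations to recover the rotation of the original image
    # (the sign convention depends on the model's class definition; illustrative only)
    return (best - extra_k) % 4                   # in units of 90 degrees</code></pre>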
<h2>Retrieval Method</h2>
<h3>Retrieval Model</h3>
<p>In order to address the challenges posed by the cost inefficiency of exhaustive matching and the limitations of sequential matching for 3D reconstruction, we sought alternative methods for image retrieval. <br>
After careful evaluation, we chose to utilize NetVlad as our chosen method due to its superior performance compared to openibl and cosplace. To further enhance the results, we employed various techniques, including:<br>
1) We passed the original image and its horizontally flipped version through the model. By summing the descriptors obtained from both passes, we achieved improved performance, particularly for highly similar images. This technique is especially effective in cases where the scene exhibits symmetry. By redefining a new point in the n-dimensional descriptive space, which effectively increase the similarity distance between two distinct parts of the scene.</p>
<p>2) Re-ranking: After calculating the similarity scores, we performed re-ranking by selecting the top 1 match. We then re-queried the retrieval process using these two images instead of just one. The similarity scores of the resulting matches were summed together after raising them to a specific power "m". This manipulation of probabilities ensures that if the best match for one of the query images is found, it will be favored over an image that is similar to both where the sum of the similarities on both scenarios is equal.</p>
<p>We repeated this procedure twice using NetVlad with Test Time Augmentation (TTA) on the image size. The results were nearly perfect, and the approach even enabled the correct ordering of The Wall scene pairs, which can be best matched by doing sequential matching.</p>
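<p>A simplified sketch of the flip-sum descriptor and the re-ranking step, with a hypothetical <code>extract_global_desc</code> standing in for NetVlad and <code>m</code> as the tunable exponent mentioned above:</p>
<pre><code>import numpy as np

def flip_sum_descriptor(image, extract_global_desc):
    """Sum the global descriptors of the image and its horizontal flip, then L2-normalize."""
    d = extract_global_desc(image) + extract_global_desc(image[:, ::-1])
    return d / (np.linalg.norm(d) + 1e-12)

def rerank_similarities(sim, m=2.0):
    """Re-rank: for each query, add the similarity row of its top-1 neighbour,
    after raising both rows to the power m."""
    sim = sim.copy()
    np.fill_diagonal(sim, -np.inf)
    reranked = np.power(np.clip(sim, 0, None), m)
    for i in range(sim.shape[0]):
        top1 = int(np.argmax(sim[i]))
        reranked[i] += np.power(np.clip(sim[top1], 0, None), m)
    return reranked   # higher is more similar; used for shortlist selection</code></pre>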
<h3>Number of Matches</h3>
<p>Determining the number of image pairs to select is a critical factor that directly affects both validation and leaderboard performance. This becomes particularly important for large scenes where there may be no or very few common images between subsets of the scene.</p>
<p>To address this challenge, we devised a strategy based on a complete graph representation. In this graph, each edge represents the similarity between two images. The goal is to choose a subset of edges where each image has an equal number of connected nodes.</p>
<p>We employed a Binary Search approach to determine the number of matches for each image, with a check function to verify if the resulting graph is connected or not. The lower bound of the binary search was set to half the number of images, ensuring that we consider common matches and prevent incomplete 3D model reconstruction. Additionally, we made sure that the approach remains exhaustive for small scenes containing less than 40 images.</p>
<p>By employing this method, we aimed to strike a balance between capturing sufficient matching pairs for accurate 3D reconstruction while avoiding redundant or disconnected image subsets.</p>
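<p>A rough sketch of this selection strategy, with a plain-Python connectivity check; the exhaustive fallback for scenes with fewer than 40 images follows the description above:</p>
<pre><code>import numpy as np

def is_connected(pairs, n):
    """DFS connectivity check over the image graph defined by the selected pairs."""
    adj = [[] for _ in range(n)]
    for i, j in pairs:
        adj[i].append(j)
        adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def pairs_for_k(sim, k):
    """Top-k most similar neighbours per image (symmetrized)."""
    n = sim.shape[0]
    pairs = set()
    for i in range(n):
        order = np.argsort(-sim[i])
        for j in [j for j in order if j != i][:k]:
            pairs.add((min(i, j), max(i, j)))
    return pairs

def select_pairs(sim):
    """Binary search the per-image neighbour count; exhaustive for small scenes."""
    n = sim.shape[0]
    if n < 40:
        return [(i, j) for i in range(n) for j in range(i + 1, n)]
    lo, hi = n // 2, n - 1          # lower bound = half the number of images
    best = pairs_for_k(sim, hi)
    while lo <= hi:
        mid = (lo + hi) // 2
        pairs = pairs_for_k(sim, mid)
        if is_connected(pairs, n):
            best, hi = pairs, mid - 1
        else:
            lo = mid + 1
    return sorted(best)</code></pre>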
<h2>Feature Extraction and Matching</h2>
<p>In our selected submissions, we have utilized the SuperPoint and SuperGlue algorithms followed by MagSac filtering. Unfortunately SuperGlue is not licensed for commercial use. However, we have also achieved highly promising results using GlueStick. We have made modifications to the GlueStick architecture to integrate it with SuperPoint, and we achieved a score of approximately 0.450 on the public leaderboard and 0.513 on private leaderboard without employing our best tuning parameters. It is worth noting that this modified architecture is permitted for commercial use and offers improved processing speed. We anticipate that further tuning can yield even better results with GlueStick, but didn't choose it as our last submissions.</p>
<h2>Refinement</h2>
<p>Although not included in our selected submissions, we would like to mention an approach that significantly improved validation results across various scenes. We employed Pixel-Perfect-SFM in conjunction with sd2net for dense matching, but it did not improve the results on the public leaderboard.</p>
<h2>Registering Unregistered Images</h2>
<p>While not part of our selected submissions, we made attempts to register unregistered images using various techniques. However, these attempts did not yield significant improvements on the leaderboard. We explored the following strategies:</p>
<ul>
<li>Utilizing different orientations and attempting registration.</li>
<li>Applying different extractor-matchers, such as LoFTR and R2D2, for the unregistered images.</li>
<li>Adjusting parameters for SuperPoint and SuperGlue to optimize the registration process.</li>
</ul>
<h2>Tried but did not work</h2>
<ul>
<li>Pixel-Perfect-SFM</li>
<li>Semantic Segmentation masks</li>
<li>Illumination enhancement</li>
<li>PnP localization for unregistered Images</li>
<li>LoFTR/ SILK / DAC / Wireframe</li>
<li>CosPlace / OpenIBL</li>
<li>Large Image splitting into Parts</li>
<li>Grid-based Point Sampling (equally spatial replacement of points in an image).</li>
<li>Rotation self-ensemble averaging</li>
<li>Filtering the extracted pairs using a threshold from a binarysearch function or a fixed threshold.</li>
</ul>
<h2>Important Notes:</h2>
<p>1) Most participants were using Colmap reconstruction, which is nondeterministic. We were able to modify the code so that it became deterministic, albeit running on only one CPU thread, which helped us observe our improvements and avoid randomness.<br>
2) We found out that the OpenCV and PIL libraries apply EXIF information when reading images unless flags are provided to prevent it; that is, the EXIF orientation will rotate the image automatically before processing. This was confusing for us, as there was missing information about whether the ground truth for the rotation part was collected with or without considering EXIF. That is why one of our chosen solutions included this correction, and we lost a lot of submissions checking the effect of this issue on the leaderboard. It would have been more helpful if there had been a better explanation of how the GT was calculated.<br>
3) Our solution scored 0.485 on public and 0.514 on private, with a local score of 0.87.<br>
4) The fact that validation and leaderboard were not correlated made things more difficult and random, as it was clear there would be a shake-up: for some specific scenes, some solutions might fail, which would drastically impact the mAA score.</p>
<h2>Acknowldgment</h2>
<ul>
<li>I wanted to take a moment to express my sincere appreciation for the exceptional contribution and dedication to my friend and teammate <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> . His hard work and commitment have played a crucial role in our collective success, and I am truly grateful for the opportunity to work alongside such a remarkable individual. we had fun and we learned a lot.</li>
<li>Special thanks to the <a href="https://github.com/cvg" target="_blank">Computer Vision and Geometry Lab</a> Group for (hloc, gluestick, pixel perfect sfm ..)</li>
</ul>
|
Inclusive Images Challenge
|
1st place solution summary
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey guys,</p>
<p>I'm sorry, I did not have enough time to write a post about my solution here. </p>
<p>However, I would like to share my video-presentation from NeurIPS 2018 which I recorded because the Canadian Embassy rejected my visa application. <a href="https://drive.google.com/open?id=1wI1cUwxoir8Y0Adabnpx8Kt2nqzvi013">Link</a> to the video.</p>
<p>Also, you may watch our <a href="https://youtu.be/fZS-h8fwRIk?t=5908">Moscow ML Trainings</a>. Unfortunately, the presentation was in Russian, but you can choose automatically generated English subtitles.</p>
<p>Also, <a href="https://drive.google.com/file/d/1Rnhl_BAI816gDM7fw4ZdGrVlyg20NXLi/view?usp=sharing">this is a link</a> to my PDF presentation.</p>
<p>Feel free to ask me any questions you have :)</p>
<p>Always yours,
Pavel</p>
<p>Hi guys!</p>
<p>We wrote a paper about the solution and published it on arxiv: <a href="https://arxiv.org/abs/1901.06345">https://arxiv.org/abs/1901.06345</a></p>
<p>Enjoy reading and ask questions if you have any :)</p>
|
Inclusive Images Challenge
|
2nd place solution overview
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don’t know about other people, but when I saw $1,000 in my inbox I had Christmas Morning moment. Free GCP credits are extremely important for our community. Many thanks to Google & Kaggle!</p>
<h2>Solution</h2>
<p>Ensemble of 3 Deep CNNs: ResNet50, InceptionV3, and InceptionResNetV2.</p>
<h2>Concept</h2>
<p>We expected test images to come from different regions, so there was no doubt that augmentation and ensembling would be the most important directions for building robust models. But later, when detailed EDA showed a substantial difference in train/test label distributions, it became clear that tuning models on the tuning set would also play a very important role.</p>
<h2>Diversity</h2>
<p>To get more diversity for ensemble I used following approaches: </p>
<ol>
<li>Different CNN architectures</li>
<li>Different image sizes</li>
<li>Train with/without augmentation</li>
<li>Different augmentation schedules</li>
</ol>
<h2>Models</h2>
<p>Of course all models were trained from scratch according to the rules. I think it was nice because there was none of the usual pretrained-model search rush :) Scores below are from the Stage 1 LB.</p>
<ol>
<li>ResNet50, 57 epochs, 224x224, adam 0.001, batch 64, increasing augmentation schedule, 4 hours per epoch on 1x P100, 0.311 without tuning, 0.510 with tuning</li>
<li>InceptionV3, 25 epochs, 400x400, adam 0.001, batch 32, increasing augmentation schedule, 9 hours per epoch on 1x P100, 0.327 without tuning, 0.545 with tuning</li>
<li>InceptionResNetV2, 8 epochs, 598x598, adam 0.001, batch 12, no augmentation, 38 hours per epoch on 1x P100, 0.318 without tuning, 0.550 with tuning</li>
</ol>
<h2>Final submission</h2>
<p>I trained 3 models described above, tuned them, predicted original test images, predicted mirrored test images, and computed weighted average.</p>
<h2>Notes</h2>
<ol>
<li>By an increasing augmentation schedule I mean the following: train without augmentation until a plateau (or until it feels like enough), then apply soft augmentation (a wide range of values) for several epochs, e.g. <code>rotate(np.random.randint(-30, 30))</code>, then apply hard augmentation (a short range or even just two possible values) for another several epochs, e.g. <code>rotate(np.random.choice([-30, 30]))</code>. A sketch of such a schedule is shown right after this list.</li>
<li>About training times. I used batch generator derived from <a href="https://keras.io/utils/#sequence">Sequence</a> and multiprocessing with 8 processes per GPU. My typical hardware setup on GCP looks like: 1x P100, 8x CPU, 15 GB RAM, SSD or 2x P100, 16x CPU, 30 GB RAM, SSD, etc.</li>
</ol>
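<p>A tiny sketch of such an increasing schedule; the epoch thresholds are placeholders, and scipy's <code>rotate</code> is just one way to implement the <code>rotate</code> helper referenced above:</p>
<pre><code>import numpy as np
from scipy.ndimage import rotate

def augment(image, epoch, plateau_epoch=40, hard_epoch=50):
    """Increasing augmentation schedule: none -> soft (wide range) -> hard (extreme values)."""
    if epoch < plateau_epoch:
        return image                                   # train without augmentation first
    if epoch < hard_epoch:
        angle = np.random.randint(-30, 30)             # soft: any angle from a wide range
    else:
        angle = np.random.choice([-30, 30])            # hard: only the extreme values
    return rotate(image, angle, reshape=False, mode="nearest")</code></pre>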
<h2>Side Notes</h2>
<ol>
<li>Time after time I had been running 8x V100. It definitely feels like a ride on racing motorbike. I mean not only extreme speed of computation. I also mean adrenaline related to the speed of money flow.</li>
<li>Looking at some images I was curious about one thing. How often do you experience something like nostalgia about the country you’ve never been?</li>
<li>And by the way have you seen <a href="https://www.kaggle.com/c/inclusive-images-challenge/discussion/69789">convolved foxes</a>?</li>
</ol>
|
Inclusive Images Challenge
|
4th place solution and code
|
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
|
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Inclusive Images Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Summary</h1>
<p>My solution consists of a single model (SE-ResNet-101), which I first trained on the Open Images Training Set using human generated labels, and then finetuned on 1,000 images from the stage 1 test set using provided tuning labels. Finetuning was very important since distributions of training and testing labels are very different. For training, I came up with a new loss function MultiLabelSoftmaxWithCrossEntropyLoss, which is easier to optimize than the classic SigmoidWithCrossEntropyLoss. Also, I used random crops and horizontal flips for training data augmentation. To enhance the result, I applied ten crops for test-time augmentation and tuned (using stage 1 leaderboard scores) the following parameters: global threshold, minimum and maximum number of predictions per image. </p>
<h1>Model And Code</h1>
<p>You can use my <a href="https://github.com/azat-d/inclusive-images-challenge">code</a> to reproduce the result.</p>
<h1>Detailed Description</h1>
<p>Here's what I did: </p>
<ul>
<li>Firstly, I trained the SE-ResNet-101 from scratch on the Open Images Training Set using human generated labels (I used 1% of training images for validation). <br>
<ul><li>I used all 18k labels instead of 7k labels listed in the classes_trainable.csv </li>
<li>I trained the network for 25 epochs using Nesterov Momentum SGD (momentum=0.9, batch_size=32) with the following training schedule: </li>
<li>epochs 1..10: lr = 0.01 </li>
<li>epochs 10..20 lr=0.001 </li>
<li>epochs 20..25 lr=0.0001 </li>
<li>The input image size was 224x224. I used random crops + random horizontal flips for augmentation. </li>
<li>I came up with the following loss function, which is generalization of the softmax with cross-entropy loss to the case of multiple labels per sample </li></ul></li>
</ul>
<pre><code>import numpy as np
import torch
import torch.nn as nn


class MultiLabelSoftmaxWithCrossEntropyLoss(nn.Module):
    def __init__(self):
        super(MultiLabelSoftmaxWithCrossEntropyLoss, self).__init__()
        self.criterion = nn.CrossEntropyLoss(size_average=False)

    def forward(self, predictions, labels):
        assert len(predictions) == len(labels)
        n_classes = predictions.shape[1]
        all_classes = np.arange(n_classes, dtype=np.int64)
        zero_label = torch.tensor([0]).to(predictions.device)
        loss = 0
        denominator = 0
        for prediction, positives in zip(predictions, labels):
            negatives = np.setdiff1d(all_classes, positives, assume_unique=True)
            negatives_tensor = torch.tensor(negatives).to(predictions.device)
            positives_tensor = torch.tensor(positives).to(predictions.device).unsqueeze(dim=1)
            for positive in positives_tensor:
                # each positive competes against all negatives in a softmax over (1 + #negatives) logits
                indices = torch.cat((positive, negatives_tensor))
                loss = loss + self.criterion(prediction[indices].unsqueeze(dim=0), zero_label)
                denominator += 1
        loss /= denominator
        return loss</code></pre>
<ul>
<li>Secondly, I finetuned the network for 80 epochs on the Stage 1 Tuning Set using the same optimizer and loss function with the fixed learning rate 0.0001. </li>
<li>Finally, I did the following to improve the model: <br>
<ul><li>I used ten crops for test-time augmentation. (+0.005) </li>
<li>I tuned the global threshold for predictions using leaderboard scores </li>
<li>I set the minimum and maximum number of predictions per image to be 2 and 5, respectively. (+0.02) </li></ul></li>
</ul>
<p>For predictions, I used the following function (top_k was set to 150): </p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F


def predict(classifier, x, top_k):
    input_shape = x.shape
    if len(input_shape) == 5:
        # Test-time augmentation: (batch, n_crops, C, H, W) -> average logits over the crops
        x = x.view(-1, input_shape[2], input_shape[3], input_shape[4])
        predictions = classifier(x)
        predictions = predictions.view(input_shape[0], input_shape[1], -1).mean(dim=1)
    else:
        predictions = classifier(x)
    scores, labels = predictions.sort(dim=1, descending=True)
    pred_scores = np.zeros(shape=(len(scores), top_k), dtype=np.float32)
    pred_labels = labels[:, :top_k].cpu().numpy()
    for i in range(top_k):
        # score of the i-th candidate = softmax over (its logit + all logits outside the top_k)
        i_scores = torch.cat((scores[:, i:i + 1], scores[:, top_k:]), dim=1)
        pred_scores[:, i] = F.softmax(i_scores, dim=1)[:, 0].cpu().numpy()
    return pred_scores, pred_labels</code></pre>
<h1>Simple Model</h1>
<p>The private leaderboard score of the final model without test-time augmentation is 0.32601, which gives me exactly the same place in this competition.</p>
<h1>Model Execution Time</h1>
<p>My hardware: Intel Core i7 5930k, 2x1080, 64 GB of RAM, 2x512GB SSD, 3TB HDD. I used only 1x1080 for this challenge. It takes approximately 10 days to train the model. The generation of predictions for the second stage takes about an hour and a half. The simple model (without test-time augmentations) is 10 times faster. </p>
|