Columns: Title of Competition | Title of Writeup | User | Writeup
NFL Big Data Bowl
Public 14th Solution - Transformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for such an interesting competition! This is the first time I've used an NN as my main model, and I've learned a lot from this competition ;)</p> <p>After a quick glance at the training data, I thought this competition was quite similar to the CHAMPS Predicting Molecular Properties competition - so I decided to use a transformer in the early stage of the competition.</p> <h3>Model</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532198%2F4c07f028121a2d4eacc48a4c78f1ab57%2FUntitled%20Diagram%20(1).png?generation=1574937511409845&amp;alt=media" alt=""></p> <p>My architecture is 3 pointwise convolution layers (like PointNet) plus 4 transformer encoders followed by a few dense layers. I tried multi-headed attention, but it didn't work for me. The transformer implementation is based on PyTorch 1.3, but slightly modified (changed the post-LN architecture to pre-LN; see <a href="https://openreview.net/pdf?id=B1x8anVFPr">this paper</a> for details).</p> <h3>Features</h3> <p>I use 17 features in total. Except for YardLine and Distance, the remaining 15 features are calculated for each player (so a 15 x 22 feature matrix is fed into the transformer encoder). All features are quite simple, but the most important one is the angle from the rusher's moving direction.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532198%2F6b57f4d27053e5bc50d04731485524bd%2FUntitled%20Diagram.png?generation=1574935570947304&amp;alt=media" alt=""></p> <p>These polar coordinates based on the rusher's Dir gave me a big boost (~0.0003). I also compute this angle 0.001 s after the given time and take the difference between θ(t) and θ(t+0.001).</p> <p>List of all features used:</p> <ul> <li>YardLine</li> <li>Distance</li> <li>X, Y, S, A, Dis, PlayerWeight</li> <li>IsBallCarrier</li> <li>IsOffence</li> <li>sin(Dir), cos(Dir)</li> <li>dX, dY, atan2(dY, dX) from Rusher</li> <li>angle and angle variation from the rusher's direction (described above)</li> </ul> <p>That's all :) CV and adversarial validation were used to remove irrelevant or suspicious features.</p> <h3>Validation</h3> <p>In addition to standard GroupKFold, I also tried time-split validation (discussed in <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/113861#latest-658998">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/113861#latest-658998</a>). Both worked well for me, but their validation scores differ a lot, so I chose one model from each validation strategy for the final submission.</p> <h3>Other techniques that (maybe) worked</h3> <ul> <li>AdamW optimizer (slightly better than Adam)</li> <li>Gaussian noise and Y-shift augmentation on player vectors</li> <li>TTA (20x augmentation, adding Gaussian noise to Y and Dir)</li> <li>Snapshot Ensemble</li> <li>Downsampling 2017 data (randomly drop 40% of the data each epoch)</li> <li>Keeping the epoch length constant (12,000 samples = 1 epoch), regardless of the total number of training samples <ul><li>This helped me estimate the running time in the 2nd stage</li></ul></li> </ul> <p>EDIT: Here is my solution code.
<a href="https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th">https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th</a></p> <p>Thanks! :)</p> <blockquote> <p>your learning method to keep num samples of epoch is really good idea to prevent over running at 2nd stage.</p> </blockquote> <p>Thanks. It also aims to keep my LR scheduling consistent between the 1st and 2nd stages. To estimate the running time in the 2nd stage, I ran the same script 5~10 times with a doubled train.csv (and there was always enough margin from the 4h limit). I hope my script passes the 2nd stage...</p> <p>Thanks a lot!</p> <p>Thanks! One is ~12,000 sec (2 folds, 100 epochs, LB 1268) and the other is ~9,000 sec (1 fold, 150 epochs, LB 1271).</p> <p>Thanks! I was also surprised by your solution (I didn't think the Voronoi feature would work). Maybe combining our solutions would give a far better result ;)</p> <p>Thanks! I hope my extra tricks will work...</p> <p>Q, K, V have the same size and are initialized with random weights. I've made my final submission public. Please see my code for details ;)</p> <p><a href="https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th">https://www.kaggle.com/nyanpn/pytorch-transformer-public-14th-private-22th</a></p> <p>Thanks! I've fixed the link.</p> <blockquote> <p>I'm comparing transformers from several competitors. Yours is also different.</p> </blockquote> <p>Yeah, it's very interesting for me, too :)</p> <p>1) Yes. I tried 2, 4, 8 heads but found no improvement. Maybe I should try further... 2) No. I want to treat the players as a set, so positional invariance is needed. 3) While they used player pairs as the fundamental block, I simply arranged 15 features for each player (including the rusher). 4) 128 for each player. 5) Not tried, but I guess there would be little difference because YardLine and Distance are less important in my model (almost ignored by the NN).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532198%2Fbb6d9854830c21aa12ee6cb8a9860e2c%2FArchitecture%20detail.png?generation=1575899031950959&amp;alt=media" alt=""></p>
NFL Big Data Bowl
Public 15th place overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a very interesting and enjoyable competition. My final solution used a simple feedforward neural network: 3 hidden layers of 100 units each, strong regularization (0.5 dropout), and categorical cross-entropy. The main CV used the last few weeks of 2018. A separate CV predicted different weeks for both 2017 and 2018 (so 34 folds).</p> <p>The main (and practically the only) idea was to use minimal distances from the ball carrier at different time frames. This assumes that everyone keeps following the same trajectory for the next 2 seconds. Adding that to basic features gave a large boost, from around LB 0.01364 to LB 0.01304. (To avoid overfitting it helps to fine-tune the time interval; I used 2 seconds, stepped by 0.2 seconds.)</p> <p>The rest was a long slog. Adding<br>
1. averages of physics features (energy, force) inversely weighted by distances from the ball,<br>
2. pretending the ball carrier turned slightly left or right,<br>
3. fine-tuning regularization and the number of hidden units,<br>
4. bagging 10 neural nets<br>
improved it to 0.01282.</p> <p>Then an even longer slog. Lots of small tweaks:<br>
1. data augmentation by flipping Y to 53-Y,<br>
2. adding noise, changing the Adam optimizer to Nadam,<br>
3. clipping impossible distances,<br>
4. using two additional outputs to also predict average yards and touchdown<br>
improved it to 0.01272.<br> At the end of the competition, small changes that improved one CV made the other CV worse, sometimes by a lot (0.00004), so it was hard to select the "best" 2 versions. (Instead I selected seemingly "safe" ones that gave balanced CV values and ran faster.) It will be interesting to see the final results. Good luck to everyone.</p> <p>Thanks for the kind words, and best of luck. It was an inspiration to see the top of the leaderboard improve so much.</p> <p>It was minimally better (by perhaps 0.00001-0.00002), and that may have been just noise. However, convergence was faster with cross-entropy.</p>
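<p>A minimal sketch of the central idea above - minimal distances from the ball carrier at future time frames, assuming everyone keeps their current trajectory for 2 seconds in 0.2 s steps. Column names follow the competition data; the velocity decomposition from Dir and the helper name are assumptions rather than the author's code.</p>

```
import numpy as np
import pandas as pd

def min_distance_features(play: pd.DataFrame, horizon=2.0, step=0.2) -> dict:
    """Minimum defender distance to the ball carrier at each future time frame."""
    rusher = play[play.NflId == play.NflIdRusher].iloc[0]
    defenders = play[play.Team != rusher.Team]

    def vx_vy(df):
        rad = np.deg2rad(df.Dir)
        return df.S * np.sin(rad), df.S * np.cos(rad)

    dvx, dvy = vx_vy(defenders)
    rvx, rvy = vx_vy(rusher)                      # scalars for the rusher
    feats = {}
    for t in np.arange(0.0, horizon + 1e-9, step):
        dx = (defenders.X + dvx * t) - (rusher.X + rvx * t)
        dy = (defenders.Y + dvy * t) - (rusher.Y + rvy * t)
        feats[f"min_dist_t{t:.1f}"] = float(np.sqrt(dx**2 + dy**2).min())
    return feats
```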
NFL Big Data Bowl
Public 17th place overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[Data standardization]</p> <p><code>df['S'] = df['Dis']*10</code></p> <p>The host says:</p> <p><code>Dis measures time covered in the most recent window of player tracking data. Given that tracking data roughly covers 10 frames per second, Dis corresponds to distance traveled in the recent 0.1 seconds. Note that speed and acceleration are directly calculated using Dis (this is done in the data pre-processing)</code></p> <p>[Features] I created about 300 features.
・N seconds later, minimum distance between the rusher and other players
・N seconds later, minimum (distance/S) between the rusher and other players
・N seconds later, how many players are around the rusher (square, circle)
...etc.</p> <p><code># N seconds later player's position
df['X_after_N_seconds'] = df['X'] + df['S'] * np.cos(np.deg2rad(df['Dir'])) * N
df['Y_after_N_seconds'] = df['Y'] + df['S'] * np.sin(np.deg2rad(df['Dir'])) * N
</code></p> <p>[Modeling]
・5-fold with stratified k-fold
・Binning the target and treating it as classification</p> <p>・NN ... CV 0.01234. The structure is here.</p> <p><code>x = Concatenate(axis=1)([category_features_embedding, num_features])
x = Dense(150, activation='softplus')(x)
x = Dropout(0.5)(x)
x = Dense(100, activation='softplus')(x)
x = Dropout(0.25)(x)
predictions = Dense(num_classes, activation='softmax')(x)
</code></p> <p>・LightGBM ... CV 0.0125. Use conservative parameters, e.g. 'num_leaves': 10, 'max_depth': 3, 'min_data_in_leaf': 150, 'max_bin': 64.</p> <p>[Postprocess] This gives +0.00005 on the LB. For example, if the yardline is 30, the possible gained yards are within -30 to 70.</p> <p><code>max_yard = 99 + 70
min_yard = 99 + -30

pred[max_yard-1] += pred[max_yard:].sum()
pred[max_yard:] = 0

pred[min_yard+1] += pred[:min_yard].sum()
pred[:min_yard] = 0
</code></p> <p>[Final submission] Averaging 2 NNs and 1 LightGBM.</p> <p>[Doesn't work]
・Predicting outliers: I think predicting long Yards exactly would give a big jump, but I couldn't manage it.</p> <p>・Using A of season 2017: this caused overfitting for me. How can I deal with A of 2017...?</p> <p>I create aggregated features, so one PlayId has one row. The train data shape is (23171, 317). My NN structure is a simple MLP. Finally, training the NN takes about 240 s.</p>
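<p>For readers new to the "bin the target and treat it as classification" setup above: the softmax output over yard bins is accumulated into a CDF and scored with CRPS. A minimal NumPy sketch; the 199-bin layout (-99 to +99 yards) matches the competition target, while the function name is an assumption.</p>

```
import numpy as np

def crps(pred_proba: np.ndarray, yards: np.ndarray) -> float:
    """pred_proba: (n_plays, 199) softmax over yard bins; yards: (n_plays,) true gains."""
    cdf = np.clip(np.cumsum(pred_proba, axis=1), 0.0, 1.0)   # P(Yards <= n), n = -99..99
    n = np.arange(-99, 100)
    target = (n[None, :] >= yards[:, None]).astype(float)    # step function at the true gain
    return float(((cdf - target) ** 2).mean())
```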
NFL Big Data Bowl
Public 5th place solution overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to give a big shout out to the National Football League and to Kaggle for hosting such an interesting competition! Personally, what I liked most about this competition was that it challenged you to build a custom model which reflects the dynamics of the game.</p> <p>The competition instantly reminded us of the <a href="https://www.kaggle.com/c/champs-scalar-coupling">Molecular Properties competition</a>, where one was asked to predict scalar coupling constants based on the spatial configuration of atoms. With that, our idea was to incorporate ideas from there into our models. We had the 2nd place solution of Quantum Uncertainty in mind:</p> <blockquote><p>This immediately triggered the idea of using transformer layers (encoders) stacked taking as an input x,y,z (normalized but otherwise as-is), and atom type and j-coupling type embeddings; just concatenated… nothing fancy.</p></blockquote> <p>We therefore did minimal feature engineering and used mainly the data preprocessing steps which can be found in the public kernels. All plays were transformed such that the play direction is towards the right. Ordinal features were transformed using StandardScaler.</p> <p>Our model consists of three distinct blocks: a <strong>player-player interaction block</strong>, a <strong>player block</strong> and a <strong>global game block</strong>.</p> <p><strong>Player-player interaction block</strong>: Input is a tensor <code>X_1</code> of shape <code>(batch_size, num_players, num_players, num_features)</code>, where <code>X_1[:, i, j]</code> contains information about player i and player j - their relative distance, velocity and acceleration (distance also extrapolated into the future), their relative distances/velocities/accelerations in the x and y directions respectively, as well as their team - offense/defense/rusher. <code>X_1[:, i]</code> is then fed into one of three possible Transformer-type blocks. That is, we have one Mini-Transformer (just as in BERT, with 2-3 blocks) for offense players, one for defense players and one for the rusher. After these blocks, we concatenate the outputs together (same shape as <code>X_1</code>) and take the mean over tensor dimension 1. With that, we have a tensor <code>X_1’</code> of shape <code>(batch_size, num_players, num_features’)</code>.</p> <p>This output is then concatenated along the last axis to the <strong>player input tensor X_2</strong>, which has shape <code>(batch_size, num_players, num_features’’)</code>. It contains features such as the position of each player, distance to the rusher, velocity in the x and y directions, etc. The concatenated tensor is processed with another Mini-Transformer (usually 2 blocks). After that, we keep only the first dimension of the output <code>X_2[:, 0]</code>, where index 0 is the position of the rusher, and concatenate it with the <strong>play input</strong>.</p> <p>The play input contains information such as distance to the yardline, etc. The concatenated tensor is then processed with the prediction head (just 2-5 linear layers) to predict the cumulative probability distribution of shape <code>(batch_size, 199)</code>.</p> <p>Before the sigmoid output, we add a mask such that “inaccessible” yards (as determined by the current yardline) are predicted correctly (the mask is either -100, 0 or 100, such that <code>sigmoid(x + mask)</code> will be 0 or 1 for regions for which we know the actual label). This gave us a boost of about ~0.0003 and stabilized the training of the model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1435684%2Fc3223d9e355c93a3aebbc4c7c514ecac%2FModel%20Overview.svg?generation=1574927943599355&amp;alt=media" alt=""> Model development was performed using time-based 5-fold splits. As the validation loss varies a lot across the folds (from 0.010x to 0.013x), we added additional metrics which compute the MSE loss on long runs (which we defined as runs with Yards&gt;=15) and short runs (runs with Yards&lt;15). The long-run loss is about ~10 times larger (0.07-0.08) than the short-run loss (0.007-0.01, depending on whether some rushers run negative yards). The difference in validation loss can thus be explained by the percentage of long runs, which varies among the folds. We used MSE loss, a batch size of 32, 5% swap dropout for certain features after epoch 4, and learning rate decay with a start learning rate of 5e-4. The transformer hidden size of our models ranges between 32-48, with either 1 or 2 attention heads. We did not scrape the test data for model development.</p> <p>There were numerous tweaks on how to improve the model’s performance, e.g. it was important to leave the distance features in the player-player interaction block unscaled (no preprocessing). Also, splitting the interaction into different parts (offense/defense/rusher) improved our score. We implemented a TimeDistributed layer which processes X_1 in parallel; this reduced training time by about 40% compared to a Python loop over the players. We also applied some data cleaning on the 2017 games. For our final submissions, we used 3 and 4 different model architectures (the three-model notebook is conservative regarding the run time). Each model is trained for up to 12 epochs and the two best epochs per model are kept for prediction.</p> <p>What did not work: A lot! We tried to incorporate the last play’s features into the model, used aggregated information about the rusher and added several features which we thought were meaningful input. In all cases the model’s performance was about the same. I guess this is due to the fact that the geometrical information is the most important factor, whereas other features’ impact tends to fade out if they are not crucial. We also experimented with different loss functions, giving higher loss to either long or short runs, without any decisive advantage.</p> <p>We concatenated some of the player features to the distance features in the player-player interaction block. We actually tried both concatenating / passing the features through a block first, but it made no big difference in the end.</p> <p>Interestingly, we never faced any severe overfitting issues. The transformer blocks have a hidden size of 32-48 and worked quite well out-of-the-box, even with a small dropout rate of 0.03. Using swap dropout on the input features certainly helped avoid overfitting, too.</p>
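<p>A minimal sketch of the yardline mask described above: before the sigmoid, a -100/0/+100 offset forces the CDF to 0 for gains the play cannot reach and to 1 once the end zone is reached. PyTorch; the tensor names and how the reachable range is passed in are assumptions, not the team's code.</p>

```
import torch

def masked_cdf(logits: torch.Tensor, max_loss: torch.Tensor,
               max_gain: torch.Tensor) -> torch.Tensor:
    """logits: (batch, 199); bin n corresponds to a gain of n-99 yards.
    max_loss / max_gain: (batch,) largest possible loss / gain for each play (assumed inputs)."""
    n = torch.arange(-99, 100, device=logits.device).unsqueeze(0)   # (1, 199)
    mask = torch.zeros_like(logits)
    mask[n < -max_loss.unsqueeze(1)] = -100.0   # unreachable losses: CDF forced to 0
    mask[n >= max_gain.unsqueeze(1)] = 100.0    # at or beyond the end zone: CDF forced to 1
    return torch.sigmoid(logits + mask)
```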
NFL Big Data Bowl
Public 8th place overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>model and training</h2> <ul> <li>I used the neural network model based on my public kernel (<a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384">https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384</a>).</li> <li><p>The changes from the kernel are</p> <ul><li>global average pooling over the player dimension</li> <li>mish activation</li> <li>RAdam optimizer</li> <li>binary classification losses (e.g., yard &gt; 0 or not)</li> <li>mixup augmentation <ul><li>All strong features are numerical features, so mixup augmentation might be useful</li></ul></li></ul></li> </ul> <h2>features</h2> <ul> <li><p>My best features are based on future player positions calculated from X, Y, A, S, and Dir</p> <ul><li>distance and distance/speed between players at different time points</li> <li>These features made big improvements on the LB (about 0.01335 → 0.01275)</li></ul></li> <li>Feature selection was performed by recursive feature elimination with permutation importance</li> </ul> <h2>rough LB score history with the main changes</h2> <p>0.01385 : my public kernel (<a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384">https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381?scriptVersionId=22302384</a>)<br>
0.01350 : add aggregation features grouped by rusher team, non-rusher team, and players near the rusher<br>
0.01335 : change the neural network model to not depend on the player order of the inputs by using global average pooling<br>
0.01275 : features based on future player positions<br>
0.01261 : tuning (feature selection and Bayesian optimization of neural network parameters)</p> <p>Like you, I did not. Another possible reason may be that the maximum speed of a person was not taken into account in the calculation with A. Players cannot keep accelerating at the same pace. Constant-acceleration motion is probably not appropriate for describing human movement.</p> <p>Yes, in the 4th version of the kernel, I used average pooling. In the 1st version of the kernel (LB = 0.01385), I didn't use pooling, and I described that 1st version as the original one here. I added the modification to the public kernel afterwards, and the result is version 4.</p> <p>Like this:</p> <p><code>df["Post05X"] = df["X"] + 0.5 * df["Sx"]
df["Post05Y"] = df["Y"] + 0.5 * df["Sy"]

df["Post10X"] = df["X"] + df["Sx"]
df["Post10Y"] = df["Y"] + df["Sy"]

df["Post20X"] = df["X"] + 2 * df["Sx"] + df["Ax"]
df["Post20Y"] = df["Y"] + 2 * df["Sy"] + df["Ay"]
</code></p> <p>This calculation may not be correct about the acceleration term, but it worked better for me than the correct one, x_t = x_0 + s_0 * dt + 0.5 * a * dt^2.</p>
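<p>The "global average pooling over the player dimension" change above makes the network invariant to player order: a shared per-player MLP is followed by a mean over the 22 players. A minimal Keras sketch under assumed layer sizes; the 199-unit sigmoid head shown here is a generic CDF-style output and may differ from the kernel's actual head.</p>

```
from tensorflow.keras import layers, Model

def build_model(num_players=22, num_feats=30, num_out=199):
    inp = layers.Input(shape=(num_players, num_feats))
    x = layers.Dense(128, activation="relu")(inp)   # shared weights, applied per player
    x = layers.Dense(128, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)          # order-invariant pooling over players
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(num_out, activation="sigmoid")(x)  # one value per yard bin
    return Model(inp, out)
```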
NFL Big Data Bowl
Public LB 10th Place - NN vs. GBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I guess everyone who competed in this competition realized fairly quickly that NNs were going to be much better than GBMs for this data. My team provides a case study in the pros and cons of the two approaches.</p> <p>@sasrdw and I teamed up early in this competition, on Oct. 20. At that time, we both had top-10 solutions. His was an NN and mine was an LGB model. We decided to work on our respective models separately but share FE and other ideas.</p> <p>The NN model started outscoring the LGB model fairly quickly and extended its lead as time went on. In the end, our best NN model scored 0.01265 on the LB and our best LGB model scored only 0.01297. Originally, we thought we would be able to blend or ensemble the two models, but ultimately we couldn't do this and still remain within the time limit.</p> <p>In retrospect, it would have made more sense for me to abandon the LGB model and work up an alternative NN model, but I felt the LGB model, even though it was under-performing, was still useful for testing new features and approaches because it was fast. This turned out to be only partly true, because many of the things that worked for the LGB model didn't work for the NN model, and vice versa. Here are a few examples:</p> <ul> <li><p>Handling 2017 S &amp; A data: The best approach I found for the LGB model was to make all S &amp; A values relative per play based on the max values. So the fastest players per play received a value of 1 and the others were some fraction of 1. (The absolute speeds were still represented in the Dis feature.) For the NN, Russ found it was better to treat the 2017 &amp; 2018 values as entirely separate features.</p></li> <li><p>Augmentation: For the LGB model, it helped to augment the data with two duplicates, modifying the labels +1 and -1, as @gertjac mentioned in his solution description. This didn't help the NN model.</p></li> <li><p>Bagging: Bagging with different seeds and different groups of training data had almost no effect on the LGB model but helped the NN model significantly.</p></li> </ul> <p>Some things that helped both models:</p> <ul> <li><p>Validation: We had a few different validation methods, but the one that worked best for both models was to use the last 3 games of all teams as the holdout set.</p></li> <li><p>Label clipping: Clipping labels to reduce the effect of outliers (and speed up training) helped both models. For LGB, the best clipping values were -7 to 22. For the NN, I think it was -10 to 35.</p></li> <li><p>FE: Most of the geometric features (distances, Voronoi areas, etc.) helped both models. Also, simulating the plays forward by n seconds helped a lot. In the end, the NN model had about 110 features and the LGB model about 160.</p></li> </ul> <p>It's really interesting to see how everyone approached this problem, and I'm looking forward to seeing the final results. We found that just a few long runs can have a large impact on scores, so I'm expecting a fair amount of shake-up.</p> <p>We had only 1 NN architecture, but bagging the models generated from KFold validation helped a lot. The final solution bagged 15 models.</p>
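<p>A minimal sketch of the per-play relative S &amp; A handling described for the LGB model above (the highest value in each play becomes 1, the rest a fraction of it). pandas; the column names follow the competition data, the helper name is an assumption.</p>

```
import pandas as pd

def add_relative_speed(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for col in ["S", "A"]:
        # scale by the maximum within each play, so values stay comparable across seasons
        df[f"{col}_rel"] = df[col] / df.groupby("PlayId")[col].transform("max")
    return df
```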
NFL Big Data Bowl
Public LB 34th Solution Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks everyone for the great work. I'm impressed by how elegant the solution of @philippsinger is. Congrats to everyone, and I hope our kernels survive stage 2.</p> <p>I'll just share some ideas that really helped me.</p> <ol> <li>Model: my model structure was mostly based on the kernel shared by @mrkmakr</li> <li>Features: cosine/sine of speed, distance, sum(exp(-defender distance from rusher)), etc.</li> <li>Simulate player positions after 1 second and compute derived features</li> <li>Data augmentation by symmetry (flipping X/Y, as most said)</li> <li>Data augmentation by adding random noise to X/Y; for me, uniform random noise of 1 yard helped most</li> <li>CV: GroupKFold by week</li> </ol>
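<p>A minimal sketch of the two augmentations in points 4 and 5: mirroring the play along the Y axis and adding uniform positional noise. The field is 53.3 yards wide; the way Dir/Orientation are mirrored assumes angles measured clockwise from the +Y axis after standardization, which is an assumption rather than the author's exact code.</p>

```
import numpy as np
import pandas as pd

def augment_flip_y(df: pd.DataFrame, noise_yards: float = 1.0, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    aug = df.copy()
    aug["Y"] = 53.3 - aug["Y"]
    aug["Dir"] = (180.0 - aug["Dir"]) % 360.0            # mirror the movement direction
    aug["Orientation"] = (180.0 - aug["Orientation"]) % 360.0
    aug["X"] += rng.uniform(-noise_yards, noise_yards, len(aug))
    aug["Y"] += rng.uniform(-noise_yards, noise_yards, len(aug))
    return aug
```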
NFL Big Data Bowl
Public LB 9th Place solution highlights
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Pitch control</h1> <p>Using <a href="/statsbymichaellopez">@statsbymichaellopez</a>'s VIP hint helped a lot. <a href="http://www.lukebornn.com/papers/fernandez_ssac_2018.pdf">Here</a> is the paper.</p> <p>To implement it, I used <a href="https://www.kaggle.com/pednt9/vip-hint-coded">this</a> good kernel (thank you <a href="/pednt9">@pednt9</a>) and made it run under 10 ms per play for 224 points in front of the rusher.</p> <p>I did not keep track of all improvements on the LB, but to be safe, this gave me between a 0.00010 and 0.00030 boost.</p> <h1>Tabular model</h1> <p>My model is a derivation of the standard TabularModel from fastai with several submodules, as described here:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2949020%2F0bc38ace700a464e95f5638e12933ed6%2FUntitled%20Diagram%20(1).svg?generation=1574941608426908&amp;alt=media" alt=""></p> <h1>Loss</h1> <p>My loss activation is <a href="https://pytorch.org/docs/stable/nn.html#softplus">softplus</a>, then I normalize the output so it sums to 1 and backpropagate CRPS.</p> <p><code>inps = softplus(inps)
inps = (inps / inps.sum(1).unsqueeze(-1)).cumsum(1)
return (inps - targ).pow(2).mean()
</code></p> <h1>Rusher heading north-east</h1> <p>A lot of competitors flipped plays so they all happen left to right. I also flipped them so all rushers are heading towards increasing Y. The latter gave me a 0.00010 LB boost.</p> <h1>Biggest regret</h1> <p>Not cleaning S nor A as <a href="/wimwim">@wimwim</a> explains in his <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119314#latest-683357">solution overview</a>. I guess it is a beginner mistake not to analyse the initial features well enough and not catch inconsistencies. Lesson learned!</p> <p>Thank you!</p> <p>I control training with a basic 6-fold CV, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html">stratified</a> on Yards.</p> <p>My optimizer is RAdam with weight_decay=0.01 and eps=1e-5.</p> <p>For the learning rate I use the fastai <code>fit_one_cycle</code> method for 20 epochs with max_lr=2e-2: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2949020%2Fcfbb504dfb090151aaf4a7e948ef4ee8%2Fdownload.png?generation=1574950156005479&amp;alt=media" alt=""></p> <p>No. But the "team level" pitch control kind of accounts for that.</p> <p>Sure!</p> <blockquote> <p>Could you elaborate on how your 224 points were computed</p> </blockquote> <ul> <li>First, in your public kernel <code>player_influence</code> is recomputed for each point. This can be factorized.</li> <li>Then, I used <a href="http://numba.pydata.org/">numba jit</a> to speed up the calculation.</li> <li>Finally, as you point out in your kernel, the influence radius function can be changed to improve results. I used <code>1/4 * np.log(1 + np.exp(dist_to_ball)) + 3.5</code>. This is very close to a linear function such as <code>1/4 * dist_to_ball + 3.5</code>.</li> </ul> <blockquote> <p>what did you do with that?</p> </blockquote> <p>The 224 points were selected as follows: I computed 16 points in each of 14 directions. In each direction, the points were spaced 2 yards apart. I used two main directions, straight to the end zone and the rusher's direction. Each was offset 7 times by [-67.5, -45, -22.5, 0, +22.5, +45, +67.5].</p> <p>I first fed them into a basic fully connected module with 2 hidden layers of sizes 128 and 32, and an output of size 8. Then I used this output as an input to the final module of my network, as shown in my model diagram above.</p>
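<p>A minimal sketch of how the 224-point grid described above could be generated: two main directions (towards the end zone and the rusher's direction), each offset by the seven listed angles (14 directions total), with 16 points spaced 2 yards apart along every direction. Plays are assumed standardized left-to-right and the angle convention is an assumption, not the author's code.</p>

```
import numpy as np

def control_grid(rusher_x: float, rusher_y: float, rusher_dir_deg: float) -> np.ndarray:
    offsets = np.array([-67.5, -45.0, -22.5, 0.0, 22.5, 45.0, 67.5])
    main = np.array([0.0, rusher_dir_deg])          # 0 deg = straight towards the end zone
    angles = np.deg2rad((main[:, None] + offsets[None, :]).ravel())   # 14 directions
    dists = 2.0 * np.arange(1, 17)                                    # 16 points, 2 yards apart
    xs = rusher_x + np.cos(angles)[:, None] * dists[None, :]
    ys = rusher_y + np.sin(angles)[:, None] * dists[None, :]
    return np.stack([xs.ravel(), ys.ravel()], axis=1)                 # shape (224, 2)
```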
NFL Big Data Bowl
The Most Important Thing... (50th Public LB Solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The Most Important Thing is that I beat the #1 seed in my Fantasy Football League in Week 12. Wait, you wanted to read a writeup? Okay, here goes.</p> <p>Thank you to my teammates @siavrez and @kainsama and to all the competitors and the sponsors for making this such a fun competition. It will be exciting to see our scores live over the coming weeks. I want to give a quick summary of our team's solution.</p> <p>My wonderful teammates, who were better on the LB, invited me, and we eventually reached 0.01330 on the LB by combining our features, adding some new ones, and changing the NN architecture a bit.</p> <p>The biggest boost in our score after merging was using Permutation Importance to get rid of unimportant features (0.01330 -&gt; 0.01320).</p> <p>We tried SOOOOO much stuff that did not improve our score of 0.01320. The most shocking to me is that adding Embeddings of RusherPosition, OffenseFormation, and Offense/Defense Teams did not help.</p> <p>My teammates and I also learned SOOOOO much stuff in Tensorflow. We were trying to think out of the box, so we were trying everything, including:
- Cleaning Orientation and Speed of the 2017 season (no improvement)
- Adding a small amount of noise (no improvement)
- Adding L1 distances in addition to L2 (no improvement)
- Representing the play as an image and running a CNN on it, concatenated with the Multilayer Perceptron (no improvement)
- Fitting to Dense(2) being mu and sigma, then creating a Gaussian from the mu and sigma and using that for a CDF (no improvement)
- Predicting the parameters of a Log-Logistic function for each play, and using that for a CDF (because the kernel density plot of yards was shaped like a Log-Logistic) (no improvement)
- Making different models for different RusherPositions, e.g. WR vs RB (no improvement)
- Using FloatingPoint16 or FloatingPoint64 in the neural network (no improvement)
- Changing batch sizes (no improvement)
- Running Convolutional1D on many features sorted by the player's distance to the rusher (e.g. perform Conv1D on [S_closest_defender1, S_closest_defender2, ..., S_closest_defender3]) then GlobalMaxPooling/GlobalAveragePooling and concatenating with the previous Multilayer Perceptron (no improvement)
- Running LSTM/GRU on those same sorted features and GlobalMaxPooling/GlobalAveragePooling and concatenating with the previous Multilayer Perceptron (no improvement)
- The same as before, but BiDirectional (better than before, but no improvement)
- Changing Adam to AdamW (no improvement)
- Adding a Warm Restart Weight Decay Scheduler to the AdamW Optimizer (no improvement)
- Feeding NflIdRusher via an Embedding (if you have &lt;10 occurrences of NflIdRusher you get mapped to "Rare") (no improvement)
- CuDNNLSTM/CuDNNGRU (looked interesting, but we realized we have no GPU and therefore cannot use this layer)
- Using the Kolmogorov-Smirnov statistic to remove features that have different distributions between train and test (no improvement)
- Concatenating "Season" with the Team's name and feeding it into an Embedding, because teams can change season-to-season (no improvement)
- Using Gert's method to bin/cut predictions and modify them (no improvement)
- Learning how to use TensorflowProbability to predict a NormalDistribution for Yards (MAE) and then using the standard deviations it produces to obtain a CDF (no improvement)
- We also tried using a GammaDistribution with TensorflowProbability (no improvement)
- Multiobjective loss (fit to both Dense(200) CRPS and Dense(1) MAE) (small improvement)
- Multiobjective loss v2 (fit to Dense(200), Dense(25), Dense(1)) (small improvement)
- Removing outlier Yards where abs(Z-score) &gt;= 3 (no improvement)
- Switching to SGD with a small learning rate after finding an optimum with Adam (small improvement)
- Adding GaussianNoise/GaussianDropout layers
- Trying to smooth the CRPS output by averaging with neighboring predictions
- Fitting to the Yards MAE, then using the standard error of the Yards predictions to create Gaussian CDFs (this is what 1st place did in the previous competition that used the CRPS metric; unfortunately, this brought us no improvement)
- Post-processing based on the min/max yards of the NflIdRusher_Season (no improvement)
- Post-processing based on the min/max yards allowed by the DefenseTeam_Season (no improvement)
- Post-processing based on the min/max yards gained by the OffenseTeam_Season (no improvement)
- Post-processing based on the min/max yards gained by that Position (no improvement)
- Online Learning (yes, we had implemented online learning into our CV using <code>train_on_batch</code>. We found the improvement minimal and only decided to try it on the penultimate day, so we did not have time to implement it into the test predictions).</p> <p>As you can see, we were stuck at a wall with no improvements. We tried SO much stuff. In the end, we could not break 0.01320. However, I am proud to have learned so much (Tensorflow Probability is awesome, and I am already using it at my work). In the end, we submitted two models: a 10-fold Multilayer Perceptron with AdamW, Warm Restart Weight Decay, QuantileTransformer, softplus activations, and a MultiObjective (softmax+mae) head. The second submission is exactly the same, except it is trained on all data without validation, bagged 10 times, stopping when the categorical_crossentropy loss reaches 2.52 (this value was found by submitting to the LB and seeing what worked best).</p> <p>Things we didn't try that I think would've been helpful:
- Using a holdout validation set instead of KFold (allows you to fit more models, which lets you blend and yield a better score)
- Graph Neural Networks/PointNet (if anyone was able to fit this onto the CPU in 4 hours, you are an amazing data scientist)
- Treating the 22 NflIds sorted by distance to the rusher as a sentence, and running a Seq2Seq NLP architecture on it. We wanted to try this, but seeing how feeding NflIdRusher through an embedding made the score SO much worse, I think it was good that we did not travel down this rabbit hole.</p> <p>Once again thank you to my talented teammates, it was a pleasure working with you, and thank you to the NFL for creating a fun competition. P.S.: A funny joke: the whole competition, our slack channel was named "NLF World" because of a typo ;)</p>
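<p>For the multi-objective loss that gave a small improvement above (a softmax head over yard bins plus an MAE head on raw yards), here is a minimal Keras sketch; the layer sizes, the 199-bin head, and the loss weights are assumptions, not the team's configuration.</p>

```
from tensorflow.keras import layers, Model

def build_multiobjective(num_feats: int, num_bins: int = 199) -> Model:
    inp = layers.Input(shape=(num_feats,))
    x = layers.Dense(256, activation="softplus")(inp)
    x = layers.Dense(256, activation="softplus")(x)
    p_bins = layers.Dense(num_bins, activation="softmax", name="bins")(x)   # yard-bin head
    y_mae = layers.Dense(1, name="yards")(x)                                # raw-yards head
    model = Model(inp, [p_bins, y_mae])
    # targets: yard index (Yards + 99) for "bins", raw Yards for "yards"
    model.compile(optimizer="adam",
                  loss={"bins": "sparse_categorical_crossentropy", "yards": "mae"},
                  loss_weights={"bins": 1.0, "yards": 0.1})
    return model
```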
NFL Big Data Bowl
Things that worked
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p><strong>NickL. wrote:</strong> I did this as well and thought it lead to overfitting.</p> </blockquote> <p>For me, GBM immediately improved; individual NNs worsened, but their ensemble improved.</p> <blockquote> <p><strong>CPMP wrote:</strong> How do you know that? I missed it.</p> </blockquote> <p><a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/111918#657892">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/111918#657892</a></p> <p>I'm very curious to learn about what others did, here's my approach:</p> <ul> <li>augment the data by copying all rows twice, once with Yards-1 and once with Yards+1</li> <li>engineer hundreds of features, including features on individual players, sorted by their distance to the rusher (within offense/defense)</li> <li>forward feature selection, using a linear (Ridge) model, with strong regularization and a QuantileTransformer to reduce overfitting and outlier effects; greedy selection helped to find different feature sets; I even searched pairwise products of all features</li> <li>modeling: weighted ensemble of LightGBM (5-10 leaves), MultiLayerPerceptron (4 hidden layers) and Logistic Regression (all classification models) and the regression model using ExtraTrees that I shared as a notebook</li> <li>special trick: add the last known X/Y, obtained by offsetting the position by 0.5 x S along Dir (because S was calculated using the distance half a second before and after handoff)</li> <li>postprocessing: set predictions that are impossible (Yardline + Yards below 0 or above 100) to 0/1</li> </ul>
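<p>The first bullet above (duplicating every play with Yards-1 and Yards+1) is cheap to implement; a minimal pandas sketch, with the function name as an assumption:</p>

```
import pandas as pd

def augment_plus_minus_one(train: pd.DataFrame) -> pd.DataFrame:
    """Return the training data plus two label-shifted copies of every row."""
    minus = train.copy()
    minus["Yards"] -= 1
    plus = train.copy()
    plus["Yards"] += 1
    return pd.concat([train, minus, plus], ignore_index=True)
```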
NFL Big Data Bowl
how a newbie got to public 19th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Big Data Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Also, I added a "flipped data" indicator into my model as a feature... just in case I forgot something that needed to be flipped... This also helped a little (~0.00003): so mirrored games are not exactly the same as the original games - maybe I forgot to flip some features, or maybe it's just the difference between left and right (for example, right-handed people might find it slightly easier to turn right).</p> <p>Yes, I did exactly as you described (but we also need to flip some other features like Dir, Orientation, WindDirection, etc. - basically mirror everything).</p> <p>This idea gained me roughly 0.00015 on the public board, which helped me advance from about 500th place to 150th place at that time. But that was in the early stage, when the model was simple and a simple change could easily make a difference. I'm not sure how much marginal improvement this has in my current model...</p> <p>Thank you for sharing this! It's amazing to see this data. I wish I had had something like this before, but I didn't even try to search for it, as I never thought these data would actually exist...</p> <p>I totally agree with you that these orientations are difficult to use without knowing which point is considered the (0,0) point in each stadium. And it's definitely a lot of trouble hard-coding these. But I'm wondering about adding sin/cos of these orientations into the wind model - maybe the embedding of the stadium would help the model figure out which point is (0, 0)?</p> <p>I guess the reason that the 2017/2018 data works ok in my case is that I removed all season-specific features in that submission, and only kept those that are expected to be stable across seasons. Also the data augmentation helped - so getting additional data doesn't make as much difference to my model. <br> (The 13th place is lucky, and I expect to drop significantly next week... but thank you ;)</p> <p>Of course. Wish you all the best in the last rerun!</p> <p>(First, I'm heartbroken to see what happened with the Zoo. Their solution is the most elegant one I've seen, and they do deserve a top place in this competition. Hope they will reappear on the leaderboard after next week's rerun.)</p> <p>I didn't share my solution earlier because there's not much to highlight in my work. I'm new to ML, and this is literally my first NN project. <strong>What I did was take this <a href="https://www.kaggle.com/mrkmakr/neural-network-with-mae-objective-0-01381">great kernel</a> by <a href="/mrkmakr">@mrkmakr</a> (many thanks! You saved my life!)</strong>, work very hard to learn all the magic done there, and make my small changes based on my intuition. -- But then I thought it might be helpful to some other newbies to learn <strong>how a newbie with minimal ML knowledge could approach this competition.</strong> So here I am.</p> <p><strong>1. Data augmentation:</strong>
- First flip all data so the play direction goes from left to right. This is a standard procedure.
- Train augmentation: flip the play along the Y axis (so X'=X, Y'=53.3-Y, similarly for other variables).
- <strong>Test augmentation</strong>: flip the play along the Y axis, predict it, and return the average of the original test prediction and the augmented test prediction.</p> <p><strong>2. FE</strong>
- For all speed and distance features: it's helpful to split them into x and y components based on Dir
- Meet time with the rusher, and the distance that the rusher could travel within that time
- Game dynamics: the relative distances after 0.5s, 1s, 2s if everyone keeps going in their current direction.
- Potential positions of the rusher after 1s (if he runs at current S&amp;A towards: front left, front right, straight ahead) - and for each of those positions, the distance and travel time needed for each player from his current position.
- For all distance and meet-time features, for the defense team and the offense team, calculate the 1st, 3rd, 6th smallest values. Also calculate the differences between the 1st/3rd/6th teammate and defender. (This step produces a lot of features. I use them in the "Rank" layer as discussed below.)</p> <ul> <li><strong>Wind</strong> + wind direction + stadium: am I the only one who finds wind directions useful? (About 0.00005 boost). <ul><li>The idea is to first split the wind direction into only 2 dimensions (N, W) - S and E are just negative N and W. Thus each wind direction can be represented by two numbers. For example, "NE" = (0.7, -0.7), "S" = (-1, 0), etc. Then multiply this by wind speed - this gives the speed in each direction.</li> <li>This info is stadium specific - because it's relative to the angle between the stadium and the earth.</li> <li>Also it matters whether the stadium is indoor or outdoor.</li> <li>I had to be very careful when handling wind speed with flipped/augmented data - the wind components need to be flipped as well.</li></ul></li> </ul> <p><strong>3. NN structure</strong>
- As mentioned earlier, mainly based on the kernel by <a href="/mrkmakr">@mrkmakr</a>, but I did make it more complicated by adding more layers.
- "Game" layer: game features (except for the ones listed below) + player features for the closest defending player and closest teammate.
- "Rusher", "Offense", "Defend" layers: rusher/offense/defense team specific features. For offense and defense, there are multiple players, so I do AveragePooling. It helped to have several dense layers for these before concatenating the results with other layers.
- "Wind" layer: directional wind speed (North, West) + embedding of Stadium + embedding of Stadium type (outdoor or indoor)
- "Rank" layer: lots of calculated features based on the 1st/3rd/6th rank of various distances, speeds, etc., as discussed in FE. This needs to be a separate layer because these are a lot of very strong but correlated features, so the dimension needs to be reduced before concatenating with other layers.
- "All" layer: concat of all the above, and 3 dense layers after the concat before the prediction.</p> <p><strong>4. Control overfitting</strong>
- Overfitting needs to be carefully controlled when I use a complicated NN structure with so little data. So I did the following.
- Set a small batch size. (128 for me - it would be 64 if I didn't do BatchNormalization)
- Dropout is super helpful. My dropout was pretty high (ranging from 0.3 after player layers to 0.6 after most other layers).
- BatchNormalization can be very helpful when used in the right way (or otherwise hurt me very badly when used improperly). It really helped me by enabling me to use a large learning rate (0.01) and a larger batch size. It took me a whole day to figure out the right way of using BN: it has to be used after a dense layer, but before activation. This means I had to set activation=None in the Dense layer, and then use an activation layer after BN.
- Early stopping</p> <p><strong>5. Issue with 2017 data (hence overweight 2018)</strong>
- As many others have noted, A and S have very different distributions in 2017 compared with 2018. I tried very hard to adjust the 2017 data, but failed. Then I decided to overweight 2018, which was super helpful.
- My best attempt at adjusting the 2017 A, S data was to note that different types of players have very different A, S. Thus I split the 2017 data by player type and did a rank-based mapping to the 2018 data of the same player type. (For example, if a certain entry has S ranked at 26% within 2017 rushers, then I replace it with the 26%-ranked value within 2018 rushers.) This didn't work - I guess the 2017 data has larger measurement error, so it can't be adjusted in a systematic way.</p> <p><strong>6. Small tricks</strong>
- Post processing: manually assign 0 probability to the "infeasible" yards that are beyond the boundaries of the field.
- I bet anything &lt; -20 can't happen, so I'm predicting a vector with 129 dims instead of 199.
- Try/except in the prediction stage: set a prior = distribution of the train data, and return the prior if prediction fails for a certain play.</p> <p><strong>7. My two submissions:</strong>
- "Safe": only uses 2017/2018 data to train. Hence I removed features that might be season specific, like team names, player names, etc. This one scored 0.01279 on the original public leaderboard (19th place), and gets me 0.01145 after the first week's rerun (13th place), 0.01153 after the 2nd week's rerun (16th place), 0.01183 after the 3rd week's rerun (17th place), and currently 0.01200 after the 4th week's rerun (17th place).
- "Aggressive": includes and overweights 2019 data, as well as season-specific info. Also slightly overweights the later half of each season. This one scored 0.01282 originally, and dropped to 0.01320 after the first week's rerun. My guess is that this one fails during the training process, and thus only outputs my "prior" estimates for all test plays. (I don't understand why, as I almost try-except everywhere during FE. Anyway, the "safe" submission was designed for situations like this.)</p> <p>Finally, thank you everyone for sharing your thoughts via discussions and kernels. I couldn't imagine learning NN within 4 weeks without all those great resources you shared.</p> <p>Thank you ;) Congrats on your standing as well!</p> <p>I wish I had had that angle (shared by <a href="/booooooow">@booooooow</a> above) earlier, but I didn't have it until he shared this amazing plot yesterday...</p> <p>So all I could do was an embedding on "Stadium", adding the result to my wind layer, in the hope that the outcome of the embedding contains some information about that angle. It seems helpful - without the stadium embedding my wind layer is almost pure noise, but with the stadium embedding it gives about a 0.00005 improvement.</p> <p>Thank you ;) You have a very good point that stadium+play_direction is more logical. Actually my full story was the following: I first flip play_direction='left' to right by (X'=120-X, Y'=53.3-Y, etc.). Here I change the wind directions as well (from SE to NW, etc.), which helped me avoid using play_direction. Then for data augmentation, (X'=X, Y'=53.3-Y), there's no way to flip wind directions accordingly, so I used a combined categorical variable of stadium+indicator(augmented data).</p>
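<p>A minimal sketch of the wind-direction decomposition described in the FE section above: each compass direction is mapped to (north, west) components and scaled by wind speed. The compass table is partial and the exact cleaning of the raw WindDirection strings is an assumption.</p>

```
import numpy as np

COMPASS = {"N": (1, 0), "S": (-1, 0), "W": (0, 1), "E": (0, -1),
           "NE": (0.7, -0.7), "NW": (0.7, 0.7), "SE": (-0.7, -0.7), "SW": (-0.7, 0.7)}

def wind_components(wind_dir: str, wind_speed: float):
    """Return the (northward, westward) wind speed for one play."""
    n, w = COMPASS.get(str(wind_dir).upper().strip(), (0.0, 0.0))
    return n * wind_speed, w * wind_speed
```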
NFL Health & Safety - Helmet Assignment
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health &amp; Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The input is just a mask - like the image I use in this post, but normalized by dividing by 255.</p> <p>Sorry <a href="https://www.kaggle.com/ouyimin19" target="_blank">@ouyimin19</a>, I have no plan to release my code.</p> <p>Thanks <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, thanks for being active in the discussion forum.</p> <p>gold so hard :))</p> <p>Thank you Chris</p> <p>Thanks to the NFL and Kaggle for hosting this interesting competition.<br> I am happy that I got a gold in this very challenging problem.<br> So, here is my approach.</p> <p>Firstly, I use YOLOv5 to detect helmets, then I use a regression network to match the detected helmets with the tracking data. This regression network is an EfficientNetB0 encoder with a Unet decoder. Using YOLOv5 and my regression model, I am able to reach a CV of 0.7 (0.68 public and 0.7 private).<br> The output is then postprocessed by a tracking algorithm (I use DeepSORT and SiamRPN); applying tracking boosts my CV to 0.9, public LB 0.85 and private LB 0.86.</p> <p>YOLOv5 and DeepSORT are quite popular in the public notebooks and discussions, so I won't write more about them here. I just use the public source code and optimize it for this competition.</p> <p>The regression network is quite similar to a normal segmentation network.</p> <ul> <li>Encoder: EfficientNetB0</li> <li>Decoder: Unet</li> <li>Input: 2-channel image (2x256x256)</li> <li>Output: 2-channel regression mask (2x256x256). Channel 1 predicts the x-coordinate and channel 2 predicts the y-coordinate.<ul> <li>For example, the point at position (x',y') in the output mask is responsible for predicting the value (x,y).</li> <li>Loss function: L1 loss. The loss is only calculated at the pixels where tracking data exist (white points in channel 2).<br> <a href="https://ibb.co/bzY9V82"><img src="https://i.ibb.co/NZfBqHm/nfl-matching-drawio.png" alt="nfl-matching-drawio"></a></li></ul></li> </ul> <p><em>My real implementation is a little different in that I use an extra segmentation channel to guide the network for better training. I simplified the idea here to make it easier to understand.</em></p>
NFL Health & Safety - Helmet Assignment
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health &amp; Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the host for organizing such an exciting competition. I am very glad to have won my first gold medal!</p> <h1>Overview</h1> <p>My solution consists of the following five steps.</p> <ol> <li>Helmet detection</li> <li>Tracking &amp; bbox interpolation</li> <li>Team classification</li> <li>Player mapping</li> <li>Post process</li> </ol> <h1>1. Helmet detection</h1> <ul> <li>YOLOv5l, input size: 1280x1280</li> <li>Training data: 9947 images in the "images" directory</li> <li>Validation data: ~1000 images from 120 training videos</li> <li>Use TTA ("augment=True" in YOLOv5's function)</li> </ul> <h1>2. Tracking &amp; bbox interpolation</h1> <p>I created my own tracking algorithm. Basically my algorithm uses the distance of two points (LT, RB) of bboxes between two frames, and it also predicts the next frame's bboxes utilizing optical flow. The optical flow method was inspired by the previous competition's top solution. When implementing the optical-flow bbox prediction, I referenced <a href="https://github.com/jguoaj/multi-object-tracking" target="_blank">this GitHub</a>. The IoU between predicted bboxes and detected bboxes is used together with the bbox distance.</p> <p>When the tracking is lost on some bboxes, they are kept for two frames, as in the following figure. As a result, the number of bboxes increases. This is aimed at reducing the number of bbox false negatives. Since the tracking is processed two times (forward direction and reverse direction), bboxes are interpolated in the previous and next frames.<br> <img src="https://i.postimg.cc/Kj3CkHQT/bbox-interpolation.png" alt=""></p> <h1>3. Team classification</h1> <p>I trained a team classification CNN which has an output size of 2 (teams) x len(gameKeys). As the training images, I cropped helmet bboxes. Aiming to utilize the color of the uniform, I cropped bboxes slightly larger towards the bottom than the original bbox size.</p> <ul> <li>MobileNet v2, input size: 160x160</li> <li>Loss function: Cross-entropy</li> <li>Augmentation: Horizontal flip, Shift, Scale, Rotate, Shear, Brightness, Contrast, Cutout</li> <li>Feature size: 1280</li> </ul> <p>The trained CNN is used as a feature extractor (the last fully connected layer is removed). The extracted features are split into two clusters by K-means. At this point, I don't know which one is the "H" team or the "V" team. The team mapping is completed in the next "Player mapping" step.</p> <h1>4. Player mapping</h1> <p>My base mapping algorithm is <a href="https://www.kaggle.com/its7171/nfl-baseline-simple-helmet-mapping" target="_blank">this great notebook</a>. I added some improvements to it.</p> <ul> <li>Two-dimensional mapping (x and y)</li> <li>To weight the x-axis of the bbox coordinates, the y-axis was normalized to 0~0.3 instead of 0~1.</li> <li>Use the Hungarian algorithm to minimize the mapping distance</li> <li>The team mapping is used as part of the distance cost (a team mismatch increases the distance).</li> <li>The two possible team assignments between the K-means result and H/V are tried, and the one with the minimum distance is adopted.</li> </ul> <h1>5. Post process</h1> <p>In this step, the final player label of each bbox is determined by merging the tracking result and the mapping result. Firstly, the mapping accuracy of each frame is predicted by an XGBoost model. As input features of the XGBoost model, the following were used.</p> <ul> <li>The number of detected bboxes</li> <li>The number of players from tracking.csv</li> <li>Frame number</li> <li>The absolute difference between this frame number and the nearest tracking frame number</li> <li>Endzone or Sideline</li> <li>Mean value of the x(y)-coordinate of all the players from tracking.csv</li> <li>Standard deviation of the x(y)-coordinate of all the players from tracking.csv</li> </ul> <p>Secondly, for each tracking ID, each player's total predicted mapping score is calculated and the highest-scoring player is assigned. If the same player is assigned twice in the same frame, the bbox with the lower total score is reassigned to the second-highest player.</p>
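<p>A minimal sketch of the Hungarian-algorithm step in the player mapping above: detections are assigned to tracked players by minimizing a distance cost, with an added penalty when the predicted team does not match. SciPy; the cost definition and penalty value are assumptions, not the author's exact implementation.</p>

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_players(det_xy: np.ndarray, trk_xy: np.ndarray,
                   det_team: np.ndarray, trk_team: np.ndarray,
                   team_penalty: float = 10.0):
    """det_xy: (n_det, 2) normalized box coords; trk_xy: (n_trk, 2) tracking coords.
    det_team / trk_team: team labels per detection / tracked player."""
    cost = np.linalg.norm(det_xy[:, None, :] - trk_xy[None, :, :], axis=2)
    cost = cost + team_penalty * (det_team[:, None] != trk_team[None, :])
    rows, cols = linear_sum_assignment(cost)           # minimize the total distance
    return list(zip(rows.tolist(), cols.tolist()))     # (detection index, player index)
```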
NFL Health & Safety - Helmet Assignment
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you. It's very helpful to understand how it works.<br> It's interesting for me to know the effect of attention weight visually.</p> <p>Regarding "Player's RGB Info", I used both the mask of the target player and the mask of other players as obstacles. So there are 3 inputs in total, which are rgb, mask_target and mask_others.</p> <p>Thank you. </p> <p>I haven't decided yet. If you want to see the loss functions, they are defined in model.py in my dataset released already.</p> <p>Models are trained in colab and kaggle notebook except final ensemble models. Every model takes about 5hr. High resolution detector is the largest one. It takes about 7hr. So I didn't use all dataset in the first month and a half.<br> I should buy GPUs with prize money.</p> <p>Thank you for hosting! It's really fun competition! </p> <p>No, this model predicts the relative position of the players.<br> The helmets' position in the image does not necessarily match the actual position in real world coordinate. So it converts the image coordinate to world coordinate.</p> <p>Thank you <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a> <br> Congrats on the gold!</p> <p>You're absolutely right. <br> Lots of players are close to each other, so it's difficult to distinguish them without team labels.<br> Team labels improves the score by approx. 0.02 in my case.<br> Late WBF is stable as you say since it can make up for the error in registration as well as in detection.</p> <p>I really enjoyed competing with your team. See you in the future competition!</p> <p>Very simple encoder-decoder network(FCOS) with the backbone of EfficientNetV2. I haven't even tried YOLO since the output of my detector is single level and single anchor. CenterNet-like architecture is enough for me.</p> <p>Wait a little please. Dirtiness of my code would be also the 1st place.😂</p> <p>Now available! But still dirty.</p> <p>Inference code is just released!</p> <p><a href="https://www.kaggle.com/shangweichen" target="_blank">@shangweichen</a><br> <img src="https://i.imgur.com/BTcCZJg.png" alt="globalanddelta"></p> <p>Zoom_ratio is not trainable parameter. It is the value solved numerically by fitting the predicted position to the provided tracking data. In other words, Zoom_ratio is used for training Zoom_Scale. </p> <p>Very nice question <a href="https://www.kaggle.com/minomonter" target="_blank">@minomonter</a> ! You're correct. I had the same idea and added the output of trapezoid factor, but no changes were seen. This is because only the delta map can compensate this effect.</p> <p>At first, I used many parameters such as the camera intrinsic matrix and depth estimation, but it turns out that the simple model works well enough.</p> <p>I haven't decided yet. If you want to see the loss functions, they are defined in model.py in my dataset released already.</p> <p>Players' position in tracking data is the ground truth but the loss in this model is determined adaptively and actual target values(labels?) are not fixed in advance. 
<br> <img src="https://storage.googleapis.com/kagglesdsdata/datasets/1691218/2775836/fig_11.png?X-Goog-Algorithm=GOOG4-RSA-SHA256&amp;X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20211104%2Fauto%2Fstorage%2Fgoog4_request&amp;X-Goog-Date=20211104T085432Z&amp;X-Goog-Expires=345599&amp;X-Goog-SignedHeaders=host&amp;X-Goog-Signature=31b88b0a10a8af81d28c843bf106513715ba58d77274d4d98e47204897c485cc1743bee20c4bb8f3f0b1945df71c640bdc1abb586767d3b7f35ff21150ec99d31e0297138e5dc7cbf8dc35f668ad45c1b404a1346a205c62a86ff1fe2bb0b2c68c233c0d9ae4db839464bffeb50cc8457293ff364182550a3f8abf7ea1618c1f84eb855a69d96e8a34312f22c3c5dcada86f46b2b1da65c4716df7465766272805843aa89f8ab92b07814054c1cfd6c8e4a8742d55e95e4cd9afa166875c82e4d5b8aaa016088a8a81ed7447df5ada33d5a8a5290e2b8d714f2cb7e84ef5866f308e9f1be7f6b69438139037619d6822a1d59a62875c6aee526d31bfe0fbaf04" alt="lossformap"></p> <p>Thanks to organizers for this interesting challenge and congrats everyone who enjoyed it! It’s one of the most fun competitions that I’ve ever experienced. I especially appreciate Rob’s supports before and during competition. I believe he makes this competition such great thing.</p> <p>It’s really tough challenge. It requires various skills of detection, registration, optimization and tracking (and debugging). I guess many competitors build complicated pipelines, and struggled to debug it. I respect those who have completed this competition until the end. Congratulations!</p> <h1>Overview of my solution</h1> <p>My pipeline consists of:</p> <ul> <li><strong>Detector to find helmets.</strong></li> <li><strong>Converter to project movies’ images to 2D map (bird’s eye view).</strong></li> <li><strong>Classifier to classify players into 2(H/V) teams.</strong></li> <li><strong>Registration of detected players on 2D map to provided tracking data.</strong></li> <li><strong>Track detected bounding boxes and reassign players.</strong></li> </ul> <p>I guess the <strong>key difference between my solution and public notebooks is the mapping and registration modules</strong>, which gave me the score of approx. 0.8 only by helmets.csv though it works faster than 10 frames/sec on GPU.<br> Let me explain each modules briefly.<br> <img src="https://i.imgur.com/nEmBuby.png" alt="pipeline"></p> <h2>01. Helmet Detector</h2> <ul> <li>2 stage detector to find helmets.</li> <li>1st detector predicts the average helmets’ size and resize input images(higher resolution) based on it. </li> <li>2nd detector detects helmets in high resolution images. I thought detecting fixed-size objects is easier than detecting objects of various sizes.<br> <img src="https://i.imgur.com/Yz8uzgJ.png" alt="detector"></li> </ul> <h2>02. Image to Map Converter</h2> <ul> <li>CNN(U-Net base) to convert helmets’ bounding boxes in images to 2D map. It predicts 2D location(x and y) looking from camera location.</li> <li>It outputs the global location from bottleneck and small residuals from decoder.</li> <li>Attention by helmets bounding-boxes improves accuracy.<br> <img src="https://i.imgur.com/xZgDZGa.png" alt="image2map"></li> </ul> <h2>03. Points to Points Registration</h2> <ul> <li>Matching predicted players on 2D map to the provided tracking data.</li> <li>ICP (Iterative Closest Points) based algorism. 
Iteratively solve the nearest search and normal equation to get 4 unknown parameters (xy translation, rotation and zoom ratio) by least squares fitting.</li> <li>Pre/post-processing is applied to remove inappropriate predictions such as sideline players.<br> <img src="https://i.imgur.com/76TN1q3.png" alt="p2p_registration"><br> <img src="https://i.imgur.com/6MMnD2e.png" alt="pre_registration"></li> </ul> <h2>04. Team Classifier</h2> <ul> <li>Team information is also important to improve the registration accuracy. X,Y location and team can be used in Points to Points Registration above.</li> <li>CNN classifier predicts the similarity matrix to show each pair of players belongs to the same team or not.</li> <li>Using arcface and pseudo-labeling improved the accuracy. Validation score is approx. 97%.<br> <img src="https://i.imgur.com/ln8Wk4F.png" alt="team_classifier"></li> </ul> <h2>05. Tracker</h2> <ul> <li>Tracker accumulates the results of player assignment through all frames, and re-assign players to bounding boxes.</li> <li>I applied simple IoU tracker because it’s fast and enough accurate.</li> <li>Not only frequency but also tracking confidence and frame distance is used for reassignment. Assignment results far from the target frame should not be weighted.<br> <img src="https://i.imgur.com/ltWNkDV.png" alt="tracker"></li> </ul> <h2>06. Ensemble</h2> <ul> <li>WBF(Weighted Box Fusion) is applied in the reassignment phase in the tracker. It ensembles multi-frame and multi-model predictions.</li> <li>Take the weighted average of the player-assignment matrix of each models, then choose the final assignment by Hungarian algorithm. It works better than normal WBF which ensembles the results of bounding boxes and detection confidences only.</li> <li>4 detectors’ ensemble for final submission. <br> <img src="https://i.imgur.com/AAP5yKT.png" alt="ensemble"></li> </ul> <h2>Scores History</h2> <p><img src="https://i.imgur.com/lgOj7hI.png" alt="scores_comparison"></p> <h1>Final thoughts</h1> <p>Fortunately, I have the experience of the object detection, depth estimation, pointcloud registration and tracking. I think this is the reason why I could win this competition.<br> I started learning programming and machine learning at kaggle 3 years ago, and most of my knowledge comes from kaggle. Thanks again to the whole of kaggle community!</p> <h3>Inference code is released</h3> <p><a href="https://www.kaggle.com/kmat2019/nfl-1stplace-inference" target="_blank">https://www.kaggle.com/kmat2019/nfl-1stplace-inference</a></p> <h2>Ablation Study (Late Submission)</h2> <p><img src="https://i.imgur.com/ee8G8Rl.png" alt="scores_comparison"></p> <p>Good question.<br> Delta map(xy_offsets) starts training from approx. 5 epoch. Only global map is trained in first 5 epochs. So epoch_dependent_value is 0.0 if epoch&lt;5, otherwise 1.0.</p> <p>Actually, this model works without the global map, since the delta map can include it. But I believed using the former leads to the better prediction. So I forced the model to learn the location without the delta map for the first several epochs.</p> <p>I just want to limit the value between exp(0) to exp(3) for scale_ratio and exp(0.4) to exp(2.4) for aspect_ratio. So the following might work<br> scale_ratio = clip_by_value( linear_activation(x), min_value=exp(0), max_value=exp(3))</p> <p>Using the logarithmic scale is intuitive. <br> I checked the distribution of helmets' size by<br> plt.hist(helmet_size).<br> This is not normal distribution. 
Using logarithmic scale is better.<br> plt.hist(log(helmet_size))</p> <p>So the outputs of my model is log scale value. I thought it is more natural.<br> But I've not compared them.😑</p> <p>1.<br> These coordinate-like feature maps are Coord Conv.<br> <a href="https://arxiv.org/abs/1807.03247" target="_blank">https://arxiv.org/abs/1807.03247</a><br> I applied it so that this model can be aware of the positions in the image, which is effective when compensating the gap due to camera.</p> <p>2.<br> shape of y_pred [batch, num_player, num_player, 1]<br> shape of y_true [batch, num_player, num_player, 2]<br> The second channel of y_true is mask. I'm not using it now. so the mask is same as tf.ones_like(y_pred).</p> <p>I implemented FCOS but it's slightly different from original one. <br> <a href="https://arxiv.org/abs/1904.01355" target="_blank">https://arxiv.org/abs/1904.01355</a><br> The backbone(encoder) is EfficientNet v2.</p>
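<p>To make the registration step (03) more concrete, here is a minimal sketch of an ICP-style loop that alternates nearest-neighbour matching with a least-squares fit of the 4 parameters (xy translation, rotation and zoom). It is only an illustrative stand-in under assumed variable names; it uses a closed-form similarity (Umeyama-style) fit rather than the author's exact normal-equation solve, and it omits the pre/post-processing that removes sideline players.</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist

def fit_similarity(src, dst):
    # Least-squares scale s, rotation r (2x2) and translation t such that dst ~ s * r @ src + t.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d
    u, d, vt = np.linalg.svd(b.T @ a / len(src))
    e = np.ones(2)
    e[-1] = np.sign(np.linalg.det(u @ vt))       # keep a proper rotation (no reflection)
    r = u @ np.diag(e) @ vt
    s = (d * e).sum() * len(src) / ((a ** 2).sum() + 1e-9)
    t = mu_d - s * r @ mu_s
    return s, r, t

def icp_register(track_xy, pred_xy, n_iter=30):
    # track_xy: provided tracking positions, pred_xy: players predicted on the 2D map.
    s, r, t = 1.0, np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        moved = track_xy @ (s * r).T + t
        nn = cdist(moved, pred_xy).argmin(axis=1)    # nearest-neighbour correspondences
        s, r, t = fit_similarity(track_xy, pred_xy[nn])
    return s, r, t
</code></pre>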
NFL Health & Safety - Helmet Assignment
21st Place Solution - Hungarian Algorithm, Frame Filling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A lot of thanks to organiser for organising this competition and my best wishes to all the winners. This was one of the most interesting competitions I took part on Kaggle. I am glad to get a solo silver here.</p> <p>Briefly Summarising my 21st place solution:</p> <h2>Step 1: Helmet Detection</h2> <p>Two models were used in final submission: YoloV5l and YoloV5x. Both were trained on 1280 image size.</p> <h2>Step 2: Helmet Mapping</h2> <h3>(Linear Sum Assignment Problem - Hungarian Algorithm)</h3> <p><strong>1. The helmet assignment problem for each frame was framed as linear sum assignment problem as follows:</strong></p> <p>INPUT: Players at the boundaries of the video frame, Angle of rotation</p> <p>OUTPUT: All players in the video frame</p> <pre><code>a. Assume you have solved the problem for 4 players at the boundaries (LEFT, RIGHT, TOP, DOWN) of the video frame and you know the angle of rotation (theta). b. Transform the ground positions of all the players (x-center, y-center) by rotating the positions by angle theta and then applying min-max scaling. c. Scale the video coordinates with min-max approach d. Compute weighted euclidean distance matrix: For Sideline (0.5*x-dist + 0.5*y-dist) For Endzone (0.8*x-dist + 0.2*y-dist) e. Solve the linear sum assignment problem with hungarian algorithm to get all the players </code></pre> <p><strong>2. Grid Search over all possible candidates for players at boundary and angles of rotation</strong></p> <pre><code>Suppose N_VIDEO = no of helmets detected in video N_GROUND = 22 OFFSET = N_GROUND-N_VIDEO a. A set of 4 players are said to be valid candidates if no of players in the rectangle enclosed by these players on the ground is N_VIDEO-1, N_VIDEO or N_VIDEO+1 b. Grid Search over all possible 4 players and angles of rotation allowed for views to get minimum distance with linear assignment approach c. Angles of Rotation allowed for Endzone = (90,270) d. Angles of Rotation allowed for Sideline = (-30,-15,0,15,30,150,165,180,195,210) e. If N_VIDEO&gt;22, reverse the problem -&gt; pick 4 valid boxes from video and apply grid search for players at boundaries on ground f. If OFFSET&lt;=8 problem can be solved in less than 1000 iterations g. If OFFSET&gt;8 pick top 8 candidates for each direction -&gt; Sort the coordinates and pick the ones whose distance from nearest players are highest For example: If offset = 9 and coordinates of players at left are : [1,5,8,9,13,15,18,22,25] -&gt; Reject 9 as its distance from nearest player on left side is minimum. Then apply grid search on remaining 8 </code></pre> <h2>Step 3: Helmet Tracking (Deepsort with IOU)</h2> <ol> <li>The assignment part of public deepsort code was modified </li> <li>IOU matrix was computed for deepsort coordinates and yolo coordinates. Hungarian algorithm was applied for deepsort cluster assignment</li> <li>As in public notebook, Most frequent label for each cluster was chosen </li> </ol> <h2>Step 4: Ensemble</h2> <ol> <li>Helmet Mapping and Tracking part was run for each model separately</li> <li>For each player and frame, predictions were ensembled with following approch<br> a. 
If model 1 has an assignment for the player, add model 1's solution for the player <br> b. If model 1 has no assignment for the player, add the solution from model 2</li> <li>Note: this approach works best if both models are equally good.</li> </ol> <h2>Step 5: Postprocessing (Forward &amp; Backward Filling)</h2> <ol> <li>If for some player we don't have a solution at frame t, search for a solution in the frames belonging to (t-5, t+5)</li> <li>Add the solution from the nearest frame if found. This approach works because the competition metric doesn't give any penalty for false positives and, in most cases, there is not much movement within 5 frames. </li> </ol> <h2>CV strategy</h2> <p>There are around 15 impacts per video, so around 90 impacts in the public test. The competition metric is highly affected by these bounding boxes. I decided to go with a full CV approach due to these observations:</p> <ul> <li>The best public notebooks had around 0.6 LB but only 0.45-0.48 CV</li> <li>My CV-LB difference was much smaller; many times LB&lt;CV</li> <li>A better CV submission was not always a better LB submission.</li> </ul> <p>I kept 2 validation sets and computed 2 different metrics for each of them.</p> <p>Validation set 1: 20 videos kept aside while training YOLO, with YOLO predictions<br> Validation set 2: all 120 videos with baseline predictions <br> Metric 1: Competition metric <br> Metric 2: Assignment accuracy without giving any special weight to impacts </p> <p><strong>I chose a strategy or hyperparameter only if it improved assignment accuracy on validation set 1 and the competition metric on validation set 2</strong></p> <h2>Final Results</h2> <p><strong>CV 0.805 (Endzone 0.81, Sideline 0.79)</strong><br> <strong>Public LB 0.849</strong><br> <strong>Private LB 0.795</strong></p>
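<p>To make the per-frame matching in step 2 concrete, here is a small sketch of the rotate / min-max scale / weighted-distance / Hungarian sequence. The grid search then simply loops this over candidate boundary players and the allowed rotation angles. Variable names and the exact weighting details are illustrative assumptions.</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def minmax(p):
    return (p - p.min(axis=0)) / (p.max(axis=0) - p.min(axis=0) + 1e-9)

def assignment_cost(video_xy, ground_xy, theta_deg, view="Sideline"):
    # video_xy: helmet box centers in the frame; ground_xy: candidate players' ground positions.
    th = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    g = minmax(ground_xy @ rot.T)                  # rotate then min-max scale
    v = minmax(video_xy)
    wx, wy = (0.5, 0.5) if view == "Sideline" else (0.8, 0.2)
    dist = (wx * np.abs(v[:, None, 0] - g[None, :, 0]) +
            wy * np.abs(v[:, None, 1] - g[None, :, 1]))
    rows, cols = linear_sum_assignment(dist)       # Hungarian algorithm
    return dist[rows, cols].sum(), rows, cols

# Grid search (sketch): for each valid set of 4 boundary players and each allowed angle,
# call assignment_cost() and keep the combination with the smallest total distance.
</code></pre>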
NFL Health & Safety - Helmet Assignment
22nd place solution: Yolov5l + Deepsort + CPD registration
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners!</p> <p>My best private score is 0.804, but its CV score was not as high as that of my selected notebook.</p> <p>My solution pipeline:</p> <h2>1. Yolov5l</h2> <ul> <li>I trained a single yolov5l model on 9k images at full resolution and validated on ~4k images extracted from 10 training videos</li> <li>My submission is the ensemble of the 3 checkpoints that have the highest mAP.</li> <li>I also tried Yolov5l6 and yolov5x, but the training time was too long so I gave up :))</li> </ul> <h2>2. Deepsort</h2> <ul> <li>I extracted features for each helmet box with the Deepsort feature extractor using the pretrained weight (ckpt.t7) and saved them to disk for step 4</li> </ul> <h2>3. Point set registration</h2> <ul> <li>I simply use CPD non-rigid registration from <a href="https://github.com/neka-nat/probreg" target="_blank">https://github.com/neka-nat/probreg</a> to transform box coordinates to the tracking-point plane</li> <li>For each image, I rotate from -45 to +45 degrees in steps of 10 to find the transformation that has the lowest chamfer distance</li> <li>Then I assign labels to each helmet using the Hungarian algorithm; the cost matrix is the distance between each helmet coordinate and the tracking points</li> </ul> <h2>4. Post-processing</h2> <ul> <li>For each helmet bounding box, I collect all bounding boxes from previous/next frames that have the same deepsort cluster and assign a score to each label by multiplying the box confidence and the cosine similarity of the helmet features (from step 2) as follows</li> </ul> <pre><code>cosine = np.dot(feats, feats.T).T[0,1:] score = row.conf.dot(cosine) </code></pre> <p>and then re-assign the label that has the highest score.</p>
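<p>As an illustration of step 3, here is a small sketch of the rotation sweep and chamfer-distance scoring. The CPD call itself is left as a comment (see the probreg repository linked above), so only the scoring logic is shown; names and the normalization are assumptions.</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist

def chamfer(a, b):
    # Symmetric chamfer distance between two 2D point sets (assumed on a comparable scale).
    d = cdist(a, b)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def best_rotation(track_xy, box_xy, angles=range(-45, 46, 10)):
    center = track_xy.mean(axis=0)
    results = []
    for deg in angles:
        th = np.deg2rad(deg)
        rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        rotated = (track_xy - center) @ rot.T + center
        # In the actual solution, CPD non-rigid registration (probreg) is run here to warp
        # the points onto the box-coordinate plane before scoring the chamfer distance.
        results.append((chamfer(rotated, box_xy), deg, rotated))
    return min(results, key=lambda r: r[0])    # (distance, angle, transformed points)
</code></pre>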
NFL Health & Safety - Helmet Assignment
23rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host for an interesting competition and congratulations to everyone who fought until the end of the competition. </p> <p>My approach doesn't use custom helmet detection because I have no experience in Computer Vision. The main improvements are in Tracking-Helmet Mapping and Post Process. Here is my approach.</p> <h2>1. Helmet filter</h2> <p>I ran a simple IoU tracking algorithm first to get a cluster id and cluster score for each helmet box.<br> The cluster score is equal to the maximum confidence score of all boxes belonging to a cluster. A cluster is valid if its size is greater than 3. The cluster score is then used to remove boxes when the number of boxes is greater than the number of tracked players. To remove outlier helmets (sideline helmets), I simply keep only boxes which have (top &gt;= 70) &amp; (top &lt;= 650).</p> <p>I think there must be a better way to assign the cluster score, but I couldn't find one during the competition; I found some potential ideas in the public solutions and will try them afterwards.</p> <h2>2. Tracking-Helmet Mapping</h2> <h4>2.1 1-stage</h4> <p>I did some EDA and found that the score of an individual frame is much higher when the number of boxes is &gt;= 18. So I ran the Tracking-Helmet Mapping process only on frames that have a number of boxes above a certain threshold (I set it to 17). I choose the candidate player set by keeping players whose x is in [L, R] and y is in [T, B], use linear_sum_assignment to assign boxes and candidate players, then use the cluster id to vote a label for each box; duplicated labels in each frame are set to np.nan. </p> <h4>2.2 Find Camera Direction</h4> <p>After the 1-stage of Tracking-Helmet Mapping, I calculate the median of the top and left values for each team's boxes and compare this to the tracking coordinates to find the camera direction.</p> <h4>2.3 2-stage</h4> <p>The 2-stage uses the camera direction and some fixed labels from the 1-stage. Non-duplicated labels from the 1-stage are fixed and used as constraints in linear_sum_assignment. After the 2-stage, the cluster id is used again to vote labels, and all duplicated labels are dropped.</p> <h2>3. Post Process</h2> <p>I add additional boxes for 6 frames before and after the frame sequence of a cluster. This works because the ground truth box overlaps the added box in some cases, and also because there is no penalty in the evaluation metric when the predicted box and the ground truth box are too far from each other. </p> <h5>Summary</h5> <p><strong>CV 0.787 (run on the whole train dataset)</strong> <br> <strong>+ Endzone 0.792</strong><br> <strong>+ Sideline 0.783</strong></p> <p><strong>Public 0.756</strong><br> <strong>Private 0.790</strong></p>
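<p>As a reference for the helmet filter step, here is a minimal sketch of a greedy IoU tracker of the kind described above: boxes are linked frame to frame by IoU, every chain gets a cluster id, and the cluster score is the maximum confidence of its boxes. It is illustrative only (a real tracker should also forbid two boxes linking to the same previous box).</p>
<pre><code>import numpy as np

def iou(a, b):
    # Boxes as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def iou_track(frames, thr=0.35):
    # frames: list of (boxes, confs) per frame; returns per-frame cluster ids and cluster scores.
    next_id, prev, cluster_score, all_ids = 0, [], {}, []
    for boxes, confs in frames:
        ids = []
        for box, conf in zip(boxes, confs):
            cands = [(iou(box, pb), pid) for pb, pid in prev]
            best = max(cands, key=lambda c: c[0]) if cands else (0.0, None)
            if best[0] &gt;= thr:
                cid = best[1]                        # continue the previous cluster
            else:
                cid, next_id = next_id, next_id + 1  # start a new cluster
            cluster_score[cid] = max(cluster_score.get(cid, 0.0), conf)
            ids.append(cid)
        prev = list(zip(boxes, ids))
        all_ids.append(ids)
    return all_ids, cluster_score
</code></pre>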
NFL Health & Safety - Helmet Assignment
25th place solution (Trained DeepSort + Hyperparameter Sweep + Gmmreg + Hungarian Algorithm)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>25th place solution</h1> <p>First of all, I would like to thank the hosts for this amazing competition it was truly interesting and challenging!</p> <p>Our solution followed the following 4 steps pipeline:</p> <ol> <li>Helmet detection using YoloV5</li> <li>Bounding box fusion using WBF</li> <li>Helmet tracking using DeepSort</li> <li>Label assignment (i.e. mapping tracking data to video data)</li> </ol> <h2>1) Helmet detection</h2> <p>We trained 5 yolov5x6 using 5-fold CV for 10 epochs at full image resolution (~1h per epoch). You can find the whole notebook <a href="https://colab.research.google.com/drive/1tI0_nMW_V_tAWd_AIS3zc71nNQt5xZZB?usp=sharing" target="_blank">HERE</a></p> <h2>2) Bounding box fusion</h2> <p>We then fused the boxes using WBF</p> <h2>3) Tracking</h2> <p>We trained DeepSort using comp data (which boosted LB by a bit). We trained that by simply using the cropped boxes and training a classification model (with 1k+ classes). You can find the training notebook <a href="https://colab.research.google.com/drive/1APloA_9FTE3ou9PBCDajOBtoBMoJIPxl?usp=sharing" target="_blank">HERE</a> <br> We also did hyperparameter tuning using WandB Sweeps, you can find the optimization notebook <a href="https://colab.research.google.com/drive/1nGuWjZRCxF6xtmO2u4hnEgR0C42wrXpp?usp=sharing" target="_blank">HERE</a><br> <img src="https://i.imgur.com/BXKsjcY.png" alt=""></p> <h2>4) Label assignment</h2> <p>By far the most challenging part was doing the label assignment. You can see our algorithm in the image below.<br> <img src="https://i.imgur.com/VzC6alE.png" alt=""><br> By far the most important part of this algorithm was label averaging (using tracking information) and the linear sum assignment (Hungarian algorithm).<br> Registration was performed using gmmreg.</p> <p>You can see the whole code <a href="https://www.kaggle.com/coldfir3/nfl-label-assignment?scriptVersionId=77839706" target="_blank">HERE</a></p> <p>There are a few hyperparameters that we tuned using the amazing WandB sweep tools, you can find the notebook <a href="https://www.kaggle.com/coldfir3/nfl-label-assignment-sweep" target="_blank">HERE</a><br> <img src="https://i.imgur.com/8V6b7qj.png" alt=""></p> <h2>Things that didn't work for us:</h2> <ol> <li>PointNet: could not make the network to learn anything useful (<a href="https://www.kaggle.com/coldfir3/point-net-v0?scriptVersionId=77205269" target="_blank">Notebook1</a> and <a href="https://colab.research.google.com/drive/1Wx9uLYX1pZFTPHc55A82Yqklh5mtTZ8y?usp=sharing" target="_blank">Notebook2</a>) </li> <li>ProbReg registration algorithm: it works but <code>gmmreg</code> is much better</li> <li>Classification based on image: too many classes and too much occlusion (network can learn on train set but do not generalize well)</li> </ol>
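<p>For step 2, a minimal sketch of fusing the 5 fold models' boxes with WBF is shown below, assuming the <code>ensemble_boxes</code> package; coordinates are normalized to [0, 1] before fusion and scaled back afterwards. The image size, thresholds and variable names are assumptions for illustration.</p>
<pre><code>import numpy as np
from ensemble_boxes import weighted_boxes_fusion

def fuse_fold_predictions(preds, img_w=1280, img_h=720, iou_thr=0.55, skip_box_thr=0.01):
    # preds: list of (boxes, scores) per fold model, boxes as [x1, y1, x2, y2] in pixels.
    scale = np.array([img_w, img_h, img_w, img_h], dtype=float)
    boxes_list = [np.asarray(b, dtype=float) / scale for b, _ in preds]
    scores_list = [list(s) for _, s in preds]
    labels_list = [[0] * len(s) for _, s in preds]        # single class: helmet
    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list,
        iou_thr=iou_thr, skip_box_thr=skip_box_thr)
    return boxes * scale, scores                          # fused boxes back in pixels
</code></pre>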
NFL Health & Safety - Helmet Assignment
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you,<br> The source code is so dirty that I am wondering if I should make this public.<br> if I release it , I'll let you know here.</p> <p>Thank you, narsil. :)<br> And congratulations on your gold medals of Landmark and Grandmaster!</p> <p>Thank you,</p> <ol> <li><br> Helmet boxes and it's orientation are detected from "full image" at the same time by CenterNet model.<br> The results from Yolov5 and CenterNet are merged later.<br> 2.<br> I did not test without using coordinate transformation.</li> </ol> <p>Congratulations to all the winners. Congratulations especially to K_mat. Your scores and solutions are really great. <br> <br><br> I would like to thank the Kaggle and host team for this interesting competition. I especially want to thank Rob. Your contribution to this competition is very significant! <br> <br><br> This is a brief summary of my solution.<br> <br></p> <h2>1 Helmet detection</h2> <p>I trained YoloV5 with only supplemental photos.<br> The prediction accuracy for small helmets was not good, so I upsampled from1280 to 1664 and trained it.</p> <h2>2 Helmet clustering (team classification)</h2> <p>I performed 2-stage K-means clustering using the helmet images.<br> The first K-means clustering extracts 20 representative colors from the set of each pixels of all helmet images in the entire video-frames.<br> In the second K-means, helmets are clustered using feature vectors which shows how many of each representative color the helmet have.<br> This model achieved 97% accuracy.<br> This model was improved to 98% accuracy by using the tracking information to ensure that the same players are on the same team in post-processing.<br> The result of this team clustering is used for player assignment and player tracking.<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/clustering.png" alt="helmet clustering"></p> <h2>3 Feature extraction (used as distance for matching)</h2> <p>In addition to the 2D information of up/down, left/right, team (Home team or Visitor team) information, player orientation, and the gap between the helmet and the sensor position were used as distances for mapping players.</p> <h3>3-1 Player orientation</h3> <p>I created a model that predicts the "o" in the tracking data and used it to penalizes if the angle is too far off when matching.<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/%E5%90%91%E3%81%8D.png" alt="Player Orientation Prediction"><br> Red: Predicted value (Home)<br> Orange: Predicted value (Visitor)<br> Light blue: Ground Truth (Home)<br> Dark blue: Ground Truth (Visitor)</p> <h3>3-2 Apparent gap between helmet and sensor position</h3> <p>For example, even with the same coordinates, the apparent helmet position changes depending on whether the user is standing or squatting.<br> I have created a model to predict this shift.<br> In advance, I made mapping Grand-Truth and helmet box to obtain the positional gap.<br> Then I made a model to predict this gap using this training data.<br> This image show the helmet box updated by gap.<br> You can see that the boxes of kneeling and fallen 
players have been corrected.<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/gap_with_arrow.png" alt="updated boxes with the gap"><br> For these feature extractions, I used <a href="https://github.com/xingyizhou/CenterNet" target="_blank">CenterNet</a>, and I shared this code here.<br> <a href="https://www.kaggle.com/its7171/nfl-centernet" target="_blank">https://www.kaggle.com/its7171/nfl-centernet</a></p> <h2>4 Coordinate transformation</h2> <p>I used lines on the ground to align the coordinates of the video image and the tracking.<br> I used CV2 to detect the lines.<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/line.png" alt="White line detection"><br> By extracting two lines from the extracted line and doing a little calculation, geometric coordinate transformation is possible.<br> Coordinate transformation1 (aspect ratio)<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/aspect_ratio.png" alt="Coordinate transformation"><br> Coordinate transformation2 (trapezoid correction)<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/pt.png" alt="Coordinate transformation"><br> Coordinate transformation3 (rotation)<br> <img src="https://raw.githubusercontent.com/CuteChibiko/kaggle-images/main/nfl/座標変換.png" alt="Coordinate transformation"></p> <h2>5 Player assignment</h2> <p>Since there are often less than 22 players in the video, I adjusted the number of players by removing the players on the top, bottom, left and right of the tracking information (by cutting off the tracking coordinates with a rectangle), and then solved it as a linear assignment problem.<br> There are many candidates for removing players, but I chose the candidate with the smallest minimum distance among all candidates.</p> <h2>6 Player tracking</h2> <p>Modern object tracking models use the closeness of image features as a metric for the distance between objects.<br> However, I thought this approach might not be very appropriate for this competition since all helmets are the same for the same team.<br> So I used an algorithm based on <a href="https://github.com/abewley/sort" target="_blank">SORT</a>.<br> However, I introduced the helmet image feature as a step-function to avoid tracking the same person across different teams.<br> The image features are reused from K-means team clustering.</p> <p>The transformation can be done if three parameters are known, as follows.<br> (I have added pictures to illustrate this.)<br> (1) Aspect ratio<br> (2) The ratio of the upper and lower bases of the trapezoidal transformation<br> (3) Rotation angle<br> Here, (2) and (3) can be calculated geometrically using two extracted lines.<br> For (1), the distance is calculated with several candidates and the best one is selected as the result.</p> <p>Then scale and parallel translation are done by aligning the maximum and minimum values of the coordinates of the players to be compared.</p>
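<p>A minimal sketch of the 2-stage K-means team clustering from section 2 is shown below: the first K-means learns 20 representative colors from all helmet pixels, and the second K-means clusters the helmets by their color histograms over those representative colors. It is illustrative only; crop handling, the color space and the K-means settings are assumptions.</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans

def team_clusters(helmet_crops, n_colors=20):
    # helmet_crops: list of HxWx3 RGB arrays (one per helmet box in the play).
    pixels = np.concatenate([c.reshape(-1, 3) for c in helmet_crops]).astype(float)
    # Stage 1: 20 representative colors over all helmet pixels in the video frames.
    palette = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    # Stage 2: describe each helmet by how many pixels it has of each representative color.
    feats = []
    for crop in helmet_crops:
        idx = palette.predict(crop.reshape(-1, 3).astype(float))
        hist = np.bincount(idx, minlength=n_colors).astype(float)
        feats.append(hist / hist.sum())
    teams = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
    return teams    # 0/1 team id per helmet (which one is "H" or "V" is decided later)
</code></pre>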
NFL Health & Safety - Helmet Assignment
30th place solution. Affine transformation and voting deepsort cluster
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone on your great results, especially the winners. It has been a very time-consuming and fun competition!</p> <p>The solutions are really great and impressive. I have little of value to add, but I wanted to share this solution for, I think, its simplicity: it just uses the provided boxes and standard DeepSORT.</p> <p>It uses DeepSORT clusters and votes assignments obtained with an affine transformation matrix that moves coordinates from the tracking data to video-image coordinates.</p> <h3>The pipeline</h3> <ul> <li>Apply DeepSORT to the provided boxes.</li> <li>Then on every frame:<ul> <li>Obtain the affine transformation matrix (least-squares minimization using 4 ground-truth points). </li> <li>Linear sum assignment</li> <li>Vote for the cluster</li> <li>Once a cluster receives enough votes, assign the label.</li> <li>Split clusters that after some time receive enough votes for another label </li></ul></li> </ul> <h3>Obtaining the affine transformation matrix</h3> <p>To obtain the affine transform matrix in the first frame, I used 4 player-box combinations as ground truth. I took the outermost boxes of the image (left, top, bottom and right), tried the 6 outermost players on each side as ground-truth matches for those boxes, and compared the assignment costs of each combination.</p> <ul> <li>Get the outermost boxes (left, top, bottom and right)</li> <li>Iterate over:<ul> <li>Try a combination of outer players (tracking data)</li> <li>Compute the transformation on all players</li> <li>Linear sum assignment and get the cost of the assignments</li></ul></li> <li>Get the transformation matrix with the minimum assignment cost.</li> </ul> <p>The score on the first 20 frames was in most cases 100%. In the following frames, the transformation matrix was calculated using the most isolated boxes/players. <br> (Image: The little squares next to the boxes are the transformation result of the tracking data)</p> <p><a href="https://postimg.cc/QHQKWYjg" target="_blank"><img src="https://i.postimg.cc/c4jR2Pjz/Initial-Assigment2.png" alt="Initial-Assigment2.png"></a></p> <p>As the play advances, the solution had problems mainly with 'z', when players were on the ground, and with clusters containing mixed players. But it had the ability to recover and get good scores again (like the image below, recovering 100% accuracy at frame 302).</p> <p><a href="https://postimg.cc/v1B4kQfX" target="_blank"><img src="https://i.postimg.cc/xdGLk8Cr/midplay.png" alt="midplay.png"></a></p> <p>SCORE<br> Position: 30th<br> Public LB: 0.814<br> Private LB: 0.732</p>
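<p>For illustration, here is a minimal sketch of evaluating one candidate set of first-frame correspondences as described above: estimate the affine (similarity) matrix from the 4 assumed player-box pairs, move all tracking points into image coordinates, and score the linear sum assignment. Names and details are assumptions, not the author's code.</p>
<pre><code>import numpy as np
import cv2
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def try_correspondence(track_xy, box_xy, gt_track_idx, gt_box_idx):
    # gt_track_idx / gt_box_idx: 4 assumed player-box correspondences (outermost players/boxes).
    src = track_xy[gt_track_idx].astype(np.float32)
    dst = box_xy[gt_box_idx].astype(np.float32)
    m, _ = cv2.estimateAffinePartial2D(src, dst)    # least-squares 2x3 similarity transform
    if m is None:
        return np.inf, None
    # Move every tracking point into video-image coordinates.
    moved = track_xy @ m[:, :2].T + m[:, 2]
    cost = cdist(moved, box_xy)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), (rows, cols)

# Grid search (sketch): iterate over combinations of outer players, keep the correspondence
# with the minimum assignment cost, then vote the resulting labels into the DeepSORT clusters.
</code></pre>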
NFL Health & Safety - Helmet Assignment
39th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers of the competition that we've had a possibility to take part in 😃</p> <h2>1. Detection</h2> <p>No custom detector was used, only raw bboxes from test_baseline_helmets.csv.</p> <h2>2. Camera Mapping.</h2> <p>I've tried to locate the global camera position (x, y, z) and adjust its rotation (yaw, pitch, raw) by some kind of the Iterative Closest Point algorithm (ICP).</p> <h3>2.1. Rotation Angles.</h3> <p>First of all, we need to get initial rotation angles so the camera can take a look at the field of players (90°, 180°, 270°). It is made by computation the euclidean distance (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" target="_blank">cdist</a>) between corresponding points (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html" target="_blank">linear_sum_assignment</a>) from tracking 2d projections and detections point clouds after transformations (move, scale). The points to be matched are chosen by up-down-left-right points and considered as a main cluster.<br> <img src="https://i.imgur.com/vxfvL4I.png" alt="Point Cloud Transformation"><br> It will also help in the measuring the approximate distance between the camera and the actual game center (the position of a player who make a ball snap).</p> <h3>2.2. Position Coordinates (with ICP).</h3> <p>Here the camera position is defined by spherical coordinates (r, theta, fi) with <em>r</em> equal to the distance to a ball snap player. We select the position with less distance between point clouds too. <br> <img src="https://i.imgur.com/6kCx4nh.png" alt="Sideline Spherical Coordinates"><br> For Endzone videos, I've had to choose between two positions because the same projections are made by different angles (0°, 180°, 90° and 270°, 0°, 180°, as far as I remember…).<br> <img src="https://i.imgur.com/8WjEWHq.png" alt="Endzone Spherical Coordinates"></p> <h3>2.3. Adjust both by ICP.</h3> <p>Here we choose the direction of the camera position and rotation shifts and "move" them until the distance stops increasing. Then we choose the direction once again, and if it fails, we stop the algorithm.<br> <img src="https://i.imgur.com/idMBTp8.png" alt="57583_000082_Sideline_10"><br> <img src="https://i.imgur.com/vZQioSJ.png" alt="57583_000082_Endzone_24"></p> <h2>3. Tracking by sensor data.</h2> <p>The ICP algorithm is used too.</p> <p>Sometimes, it is necessary to disable "outliers" - detections that doesn't belong to any tracking data but are connected to some missing points anyway. First of all, we try to disable a single outlier if it is. If it fails, we try to find a main cluster by border points as it was described earlier (up-down-left-bottom points). It helps us to disable all the points that don't belong to the main cluster.<br> <img src="https://i.imgur.com/YJDh4Fk.png" alt="58094_002819_Sideline_340"></p> <h2>4. 
Tracking by IoU.</h2> <p>It is the simple tracking by Intersection-over-Union (IoU) of bounding boxes with the threshold of 0.35.</p> <p>If someone is interested in technical details, here is my notebook: <a href="https://www.kaggle.com/thehemen/nfl-camera-tracking-39th-place" target="_blank">https://www.kaggle.com/thehemen/nfl-camera-tracking-39th-place</a>.</p>
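<p>To illustrate section 2.2, below is a minimal pinhole-camera sketch that places the camera at spherical coordinates (r, azimuth, elevation) around the play center and projects the tracking points onto an image plane; the resulting 2D points can then be compared with the detections via cdist and linear_sum_assignment as described above. The focal length, angle conventions and variable names are assumptions, not the author's implementation.</p>
<pre><code>import numpy as np

def project_tracking(track_xy, center_xy, r, azim, elev, focal=1500.0):
    # Place a pinhole camera at spherical coordinates (r, azim, elev) around the play center
    # (angles in radians) and project the (x, y, z=0) tracking points onto the image plane.
    cam = np.array([center_xy[0] + r * np.cos(elev) * np.cos(azim),
                    center_xy[1] + r * np.cos(elev) * np.sin(azim),
                    r * np.sin(elev)])
    target = np.array([center_xy[0], center_xy[1], 0.0])
    fwd = target - cam
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, np.array([0.0, 0.0, 1.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, fwd)
    pts = np.column_stack([track_xy, np.zeros(len(track_xy))]) - cam
    x_cam, y_cam, z_cam = pts @ right, pts @ up, pts @ fwd
    # Perspective projection; points behind the camera are not handled in this sketch.
    return np.column_stack([focal * x_cam / z_cam, -focal * y_cam / z_cam])
</code></pre>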
NFL Health & Safety - Helmet Assignment
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Run with jersey number model:</p> <ul> <li>0.883 public</li> <li>0.888 private</li> <li><code>0.77*0.888 + 0.23*0.883 = 0.887</code> test set score</li> </ul> <p>Run without jersey number model:</p> <ul> <li>0.841 public</li> <li>0.890 private</li> <li><code>0.77*0.890 + 0.23*0.841 = 0.879</code> test set score</li> </ul> <p>Once the pipeline is strong, jersey number model's effect is not very significant. But you can train jersey number model with more data and get more robust model. Then you can apply lower confidence threshold and affect more helmets.</p> <h1>4th Place Solution</h1> <p>Congrats to winners and thanks to <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for hosting it. It was really a fun competition. We took <a href="https://www.kaggle.com/robikscube/helper-code-helmet-mapping-deepsort" target="_blank">this</a> as the starter notebook and improved until it gets the 4th place.</p> <p>I will explain the solution pipeline step by step:</p> <h2>Helmet Detection</h2> <ul> <li>I replaced the baseline helmets with a trained yolov5 model.</li> <li>15 epochs training with default parameters on the provided extra images with image size 640.</li> <li>5 epochs finetuning on the every 3rd frame of video images with image size 1280 excluding V00 and H00 boxes.</li> <li>Detection on test set videos with image size 1280, confidence threshold 0.03 and iou threshold 0.2.</li> </ul> <p>Training our own model was better than using the baseline helmets since we could use the video images for training and have control over the detection parameters.</p> <h2>Deepsort</h2> <ul> <li>Tuned parameters of the deepsort config.</li> <li>I updated the deepsort code so that it returns unconfirmed boxes too.</li> <li>I replaced the merge_asof logic in the starter notebook with greedy minimum distance assignment to map tracks to helmets.</li> <li>Helmets that couldn't be assigned to any track became a cluster of one helmet.</li> <li>Helmet track confidence is calculated as sum of helmet detection confidences within the same cluster. It is used for selecting top ~22 helmets in 2D matching step.</li> </ul> <p>In the last days, I replaced the deepsort with my custom tracking. It scored significantly worse on Public LB (I had no time and hope for testing its validation performance), therefore we ignored it but it turned out to be our best Private LB submission. In summary, it was using helmet similarity score and iou score together to track helmets rather than thresholded step by step approach. And its feature extractor model was trained within ArcFace Team Detection model.</p> <h2>2D Mapping</h2> <p>I skip this part for now. <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a> can explain this part better since he worked on it. </p> <h2>Jersey Number Prediction</h2> <p><img src="https://i.imgur.com/kRW9Cwf.png" alt="Jersey Number Training Data"></p> <p>While jersey numbers are not always visible, we can still utilize them when they are clearly visible. With cropping the area around the helmet, I have created a jersey number dataset. 
I trained a 2 head model (one head for each digit) with resnet34 backbone. Despite resnet34 is small, it was still very easy to overfit on this data in just 2 epochs. Therefore some augmentations were needed:</p> <ul> <li>Cutmix with 40% probability where center of the image and the border of the image are from different images.</li> <li>Put random 2 digits with Times font on the center of the image and change the target accordingly with 20% probability.</li> <li>Convert to black and white with 50% probability.</li> <li>Invert colors with 50% probability.</li> </ul> <p>Given the noise, this model didn't have high recall but it has high precision. And having only ~22 players (jersey numbers) available helped the model to eliminate impossible jersey numbers. When the model is confident, labels from 2D mapping is overwritten by jersey number model. </p> <p><img src="https://i.imgur.com/7lGRJkh.png" alt="Early jersey number model results"></p> <h2>Cluster Ensembling</h2> <p>For each helmet h in each frame f, I calculate score for each label prediction it has within its cluster c: <code>np.log1p(conf[c][label])*conf[c][label]/conf[c].sum()</code><br> All helmet-label possible matches within a frame is sorted according to this score. Then they are assigned one by one starting from the highest score and same label is not assigned twice.</p> <h2>Linear Regression</h2> <p>After cluster ensembling, we may have some helmets not assigned to any label. For covering those, I fit a Linear Regression model within each frame left and top of helmets being the target and x, y of tracking data being the points. So model does the mapping between these 2 2D spaces.</p> <p>I create a distance matrix with model estimations and helmet left, top values. If we directly use this distance matrix, it performs very close to just doing 2D mapping. Therefore, for each (label, helmet) pair from cluster ensembling, I subtract 100*match_score to bias the matrix towards cluster ensembling output. Then greedy minimum distance assignment is done and all helmets get a label.</p> <p>Thanks <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>. I had first trained a Helmet Team ArcFace model for clustering helmets by team but my main goal was to reduce the options for jersey number predictions from 22 to 11. It didn't really improve the pipeline. Then later I used the same model for my custom tracking to replace deepsort.</p> <p>Jersey identification was very useful in the beginning. But after the pipeline got strong, I don't know if it kept its importance. Let me re-run our selected submission without it and see.</p>
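<p>As a sketch of the Linear Regression step above (under assumed names and a simplified greedy loop): fit a per-frame mapping from tracking (x, y) to helmet (left, top) using the pairs already matched by cluster ensembling, bias the distance matrix toward those matches, and assign the remaining helmets greedily by minimum distance.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.spatial.distance import cdist

def assign_frame(track_xy, helmet_lt, matched_pairs, match_score):
    # track_xy: (22, 2) tracking x/y; helmet_lt: (N, 2) helmet left/top.
    # matched_pairs: list of (track_i, helmet_j) already matched by cluster ensembling
    #                (assumes at least a few pairs so the regression is well-posed).
    # match_score: dict {(track_i, helmet_j): score} used to bias the distance matrix.
    ti = [p[0] for p in matched_pairs]
    hj = [p[1] for p in matched_pairs]
    reg = LinearRegression().fit(track_xy[ti], helmet_lt[hj])   # 2D to 2D mapping
    est = reg.predict(track_xy)                                 # expected left/top per player
    dist = cdist(est, helmet_lt)
    for (i, j), s in match_score.items():
        dist[i, j] -= 100.0 * s          # bias toward the cluster-ensembling output
    # Greedy minimum-distance assignment.
    labels, used_t, used_h = {}, set(), set()
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    for i, j in order:
        if i not in used_t and j not in used_h:
            labels[j] = i
            used_t.add(i)
            used_h.add(j)
    return labels            # helmet index to player (tracking row) index
</code></pre>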
NFL Health & Safety - Helmet Assignment
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First off thanks to the organizers / <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for hosting this competition - it must have been the most fun I have had in a while, since I originally I just wanted to check some public notebooks (like <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> 's awesome notebook) out and bail when I was happy but got sucked in :D. And congratulations to the winners, as some of your solutions are really a testament to how creative Kagglers can get. The competition was also not too hardware intensive so it was very manageable resource-wise for a guy who is still stuck with his 1080Tis.</p> <p>I'll be only verbal and graphically lacking since I'm exhausted from other commitments, and which is why I dropped the ball and slowed down in the last two weeks, plus I think some of my ideas overlap with other solutions anyway - but not to mention some unique differences in other solutions and more so perhaps the very unique top solution, which definitely blew my mind :D and is indeed a well deserved win on a different league. I will try to augment the writing for this on the weekends when I get more time.</p> <h1>Primary features</h1> <h2>Detector</h2> <ul> <li>YOLOv5 (can't go wrong with this) pretrained from yolov5x6.pt, 1280x720 with standard augmentation as well as a 1472 version. Both used hyp.scratch.yaml (not p6, which I didn't evaluate). I used all the boxes in the videos beyond the ones given in <code>extra</code>. In my final submission I used 118 videos and left only a pair for sanity checking [could have used all videos]</li> <li>WBF applied on the above and their horizontal flip TTA versions, using iou_thres=0.35, min_confidence=0.01. Limited number of boxes to 22.</li> <li>I found that the detector performance greatly impacted my CV. When I used the ideal boxes (from the ground truth) my score was around 0.958+ for the entire training set, which meant that detector performance was a bottleneck.</li> <li>I did not investigate but perhaps Scaled-YOLOv4 and YOLOR might have been better options given their metrics are supposedly higher. </li> </ul> <h2>Label assignment</h2> <ul> <li>Euclidean distance minimization search by Hungarian assignment (use <code>scipy.linear_sum_assignment</code>, <code>lapsolver</code>, etc. and get their sum) including sweeping rotations, sweeping outlier tracker points as rectangular boundaries, permuting centering (mean vs min-max center). Used min-max normalization on both images and tracker. <ul> <li>After determining minimal distance for a set of parameters, determine label assignment from tracker to image again using Hungarian assignment via weighted Euclidean distance.&nbsp; I found that the dy^2 was&nbsp;best at around 0.25 of the dx^2. 
</li></ul></li> <li>For subsequent frames, I also added a speedup to fix to one side via majority vote on the first 50 frames and limiting rotations and tracker outleir boundary search based on past frames.</li> <li>In addition, I added gravity features which scaled the cost matrix elements for the Hungarian label assignment, determined via individual discrete classifiers. These included:</li> </ul> <ol> <li>Digit matching - I trained a multi-output classifier along the lines of the paper shared in discussions, which reached an accuracy of ~35+%. The classifier also had a side input branch that took an input vector of size 100 that one-hot any existing player digits that were available (since these were given to you), and added ~50% augmentation with generated digits on different colors. I used this classifier to generate digit features while the initial detector boxes were being generated, and in final application on Endzone videos only if one digit matches scale the image-tracker distance by 0.7; if two digits match scale the cost matrix element distance by 0.49.</li> <li>Orientation matching - I trained an 8-class classifier with each class at 45 degree increments. The angular difference between the classifier and the transformed tracker orientations was then inversely scaled with the corresponding cost matrix element. For my final score I used 0.5 as the scaling factor, across an ensemble of 3 different orientation models.</li> <li>Direction matching - Similar to the orientation model, but here I took +/-4 frames around the box of interest (similar to the previous NFL competition) and plugged them into a 2.5D CNN (replacing input layer) generating 8x angles for direction of movement. Angular difference was then inversely scaled against the cost matrix element, and I used 0.7 as the scaling factor. Anecdotally based on a submission check I think this improved my public/private score by +0.01.</li> <li>Using past labels - scale the cost matrix element if the IOU of the current box and a previous frame box exceed a threshold for a given label. This has somewhat of a dampened IOU tracker effect. This helped in public, uncertain in private.</li> </ol> <p>These significantly improved the pre-tracking validation/public scores (e.g. from 0.75 to 0.8+) - especially the orientation model - but barely moved any part of the post-tracker scores. I suspect the images were too noisy e.g. if the individual was occluded or the digit was not visible the classifiers should either yield a low confidence or a null value but this would have required significant annotation cleanup that I was not ready for.. Also, adding these significantly slowed down my pipeline and mine tended to run close to 8+ hours for the full submission.</p> <h2>Tracking</h2> <ul> <li>Standard deepsort seemed to just work here, and the parameters seemed to indicate that low max age, high iou is preferable. </li> <li>I use my own dynamic programming with bitmask function to maximize the sum of IOUs between the DS cluster boxes and the YOLO boxes to find the best correspondence between DS-detector boxes. 
I guess I could have done this using Hungarian assignment too.</li> <li>I did some extra post-processing to reassign duplicate labels, which basically took the re-assigned duplicated label clusters, sorted the counts of their labels, and assigned those labels which were not duplicated.</li> <li>If duplicates remained, as well as NaN labels, I used the tracker info to look for a nearest neighbor that was unassigned and just used that.</li> <li>One more feature -in trying to fix id switching - I added a parameter to break long clusters that had two / bi-modal large label counts since there were instances where the post-tracking score was lower than the pre-tracking score, but that occasionally did not work. Example a cluster could have labels {A,A,A,A,….B,B,B,B} and if the counts of A and B are significant and over a certain threshold, I tried to break it up heuristically. Perhaps a better way would be to use a learned model here…</li> </ul> <h2>Misc</h2> <p>One more thing - I had one submission which had 0.912 public / 0.911 private, and the only difference of that submission with my current private of 0.885 is the number of epochs - the 0.911 private was using a epoch=10 1280 detector model, while the 0.885 private was an epoch=34 1280 detector model! The scores were definitely highly variable within most of the top positions, and perhaps I should have evaluated more carefully since that seemed to indicate the detector model overfitted beyond 10+ epochs.</p> <h2>Summary of importance:</h2> <ol> <li>Detector - most important, this made the most impact</li> <li>Hungarian assignment on Euclidean distance - most important</li> <li>Gravity features - moderately important (tbd, I need to evaluate further) (?)</li> <li>Duplicate fixing in tracker - moderately important</li> <li>DP w/ bitmask maxmizing IOU btw tracker and detector boxes - most important</li> <li>Cluster breaking post-tracking - questionable (?)</li> </ol> <h1>What did not work</h1> <p>Oh boy, could I have a Kaggle point for every one of these:</p> <ol> <li>FairMOT (even with YOLOv5 replaced as backbone), SORT.&nbsp; V-IOU tracker was competitive, but I didn't find the time unfortunately to port the code over.&nbsp; I also tried adding the gravity features from the matching above into IOU tracker, but that didn't help.&nbsp;</li> <li>Adding homography / cv2.AffinePartial2D / additional ICP on top of the Hungarian distance minimization.</li> <li>Performing tracking first then using the tracked clusters to assist matching directly.</li> <li>fast-reid retraining. This marginally helped, but slowed down the pipeline a lot since the models were larger than the default REID provided.</li> <li>Performing a forward and reverse tracking on the same video and taking the intersection of clusters (as an ensembling mechanism) to break clusters i.e. those with id switching.</li> <li>In fact, I'm not sure if the gravity features mentioned above helped that much, but I'll do a more in depth evaluation against the public/private scores when I get the chance.</li> <li>Tuning Kalman Filter parameters in DeepSort, e.g. with the position/velocity parameters and noise covariance. Didn't seem to help.</li> <li>Matching acceleration/velocity. 
Training a 2.5D CNN regressing between min/max values on this gave extremely poor results and not surprising since it's hard to tell with the camera moving around.</li> </ol> <h1>What I didn't add, but should have and was relatively low hanging and most top competitors did this:</h1> <p>Team classifier as gravity features. I suspect this would have helped, but I saw some examples that had similar colors so decided against it. </p>
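<p>To show how a "gravity feature" can be wired into the label assignment, here is a minimal sketch using the orientation model as the example: the Euclidean cost matrix (with dy^2 weighted at 0.25 of dx^2) is shrunk where the classifier's predicted orientation agrees with the transformed tracker orientation. The exact scaling function is an assumption; the write-up only states that the angular difference inversely scales the cost with a factor of 0.5.</p>
<pre><code>import numpy as np
from scipy.optimize import linear_sum_assignment

def angular_diff(a, b):
    # Smallest absolute difference between two angles, in degrees.
    return np.abs((a - b + 180.0) % 360.0 - 180.0)

def assign_with_orientation(box_xy, pred_orient, track_xy, track_orient, scale=0.5):
    # box_xy/track_xy are already min-max normalized to the same space; orientations in degrees.
    cost = np.sqrt((((box_xy[:, None, :] - track_xy[None, :, :]) ** 2) *
                    np.array([1.0, 0.25])).sum(axis=-1))     # dy^2 weighted at 0.25 of dx^2
    diff = angular_diff(pred_orient[:, None], track_orient[None, :])
    # Gravity feature: cost is halved for a perfect angle match, untouched for opposite angles.
    gravity = 1.0 - scale * (1.0 - diff / 180.0)
    rows, cols = linear_sum_assignment(cost * gravity)
    return rows, cols
</code></pre>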
NFL Health & Safety - Helmet Assignment
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No I used only one class. All the helmets in the training dataset belong to 'helmet' class.</p> <p>I'm not sure, but I doubt it succeeds because I think there are little clue to distinguish helmets of sideline players from those in field. My approach was to give up making such detector. Instead, I used shape context to distinguish them because sideline players usually stood along sideline and histogram of their relative positions between each other were very different from those in the field.</p> <p>First of all, I would like to thank hosts for hosting such a nice competition! The problem was so interesting that I was absorbed to this competition for almost all 3 months.</p> <p>Here I will show you how I tackled this problem. The solution pipeline consists of 4 parts.</p> <ol> <li>YOLO-v5 helmet detection</li> <li>Tracking&lt;-&gt;bounding box homography matrix estimation<ol> <li>Homography matrix estimation using shape context in frame 1</li> <li>Modify homography matrix in previous frame to get that of current frame.</li></ol></li> <li>First helmet assignment using estimated homography matrices</li> <li>Second helmet assignment using deepsort and previously obtained homography matrices</li> </ol> <h1>1. YOLO v5 helmet detection</h1> <p>As you know, baseline helmet detections were provided by the host. At first I thought the performance of the detection was enough and I used them for a while. However, I noticed some helmets involving in impact were missing. So I tried training a YOLO detector to suppress this false negative. The training settings was:</p> <ul> <li>Training data: jpg images in "images" folder, with 1280px resolution</li> <li>Model: yolov5x</li> <li>Batch size: 16</li> <li>Epochs: 20 (fixed)</li> <li>Mixed precision: on</li> <li>Environment: AWS SageMaker p3.8xlarge</li> <li>No validation</li> <li>Other settings(learning rate, optimizer, etc): default</li> </ul> <p>It was quite effective and the score jumped up by about 0.1 at that time. <br> EDIT: I replaced detections in my final submission with baseline detections and did late submission. I got 0.803(private)/0.841(public), which indicates that YOLO improved the score by 0.057.</p> <h1>2.1 Initial frame homography matrix estimation using shape context</h1> <p>To match sensor data to bboxes in frame 1, I used shape context. Shape context is a histogram of relative position of points. It can be calculated for each points and can be used in point set registration. For details of shape context, please google it. The detailed procedure was:</p> <ol> <li>Normalize coordinates of bboxes (confidence&gt;0.25) and sensor data. As a normalization factor, I used np.median(np.sort(cdist(points,points),axis=1)[:,1]). This quantity means median distance between nearest players.</li> <li>Scale and rotate sensor data coordinates by some factor(mentioned later).</li> <li>Calculate shape context for each points in tracking data and centers of mass of bboxes.</li> <li>Calculate cosine distance matrix between sensor descriptors and bbox descriptors.</li> <li>Hungarian algorithm matching using scipy.optimize.linear_sum_assignment. 
At this point, the matching result is usually contaminated by wrongly assigned pairs.</li> <li>Homography matrix calculation using cv2.findHomography. I did this calculation like iterative closest point algorithm. During first several iterations I used cv2.LMEDS to repair wrongly assigned pairs. If the cost function stopped improving, LMEDS was turned off and switched to normal homography estimation.</li> <li>If the cost function stopped improving, stop iteration.</li> </ol> <p>The picture below describes the flow from 2.3 to 2.7.<br> <img src="https://user-images.githubusercontent.com/5365048/140308438-098c5d52-7378-4ac3-99bb-27f225c92379.png" alt=""></p> <h2>Cost function</h2> <p>I evaluated fitness of homography transform using the following code. This cost function can be interpreted as RMS distance between bboxes and transformed tracking coordinates.<br> dist=cdist(homography_transformed_tracking_points,bbox_points)**2<br> row_ind,col_ind=linear_sum_assignment(dist)<br> score=np.sqrt(dist[row_ind,col_ind].sum()/len(row_ind))</p> <h2>Brute force optimization of rotation/scaling factors</h2> <p>I optimized rotation/scaling factors used in 2.1.2. In other words, I did above procedure many times changing rotation/scaling factors and selected best factors. The optimization ranges are:</p> <ul> <li>Rotation angle: [0, 2pi] with resolution pi/500</li> <li>X scaling factor: {0.5,1.25, 3}</li> <li>Y scaling factor: {0.5,1.25, 3}</li> </ul> <h1>2.2 Modify homography transform in a preceding frame to get that of current frame</h1> <p>Since player positions of adjacent frames are similar, the homography matrix should be also similar. First, I applied homography transform in previous frame to the tracking data in current frame. Then I assigned players to bboxes(conf&gt;0.25) in current frame like aforementioned way to get current homography matrix. To improve robustness of the algorithm, I tried homography matrices in 60 preceding frames and selected one with best cost. To deal with false positive of helmet detections, I had to add rows full of zero in distance matrix (ignore false positive bboxes). The number of ignored helmets was adaptively selected by looking at cost function. If increasing ignore number from n to n+1 helped decreasing cost function by 10, the increase was accepted.</p> <p><img src="https://user-images.githubusercontent.com/5365048/140308464-0d072238-4757-4f13-a0d1-e283b746494b.png" alt=""></p> <p>The typical quality of obtained homography transform looks like this (57583_000082_Endzone.mp4):<br> <img src="https://user-images.githubusercontent.com/5365048/140337106-4e87606a-f2d3-411f-bfe9-5826078f542f.gif" alt=""><br> Blue: transformed tracking coordinates, Orange: BBox centers of mass</p> <h1>3. First helmet assignment using estimated homography transforms</h1> <p>Using homography matrix and ignore numbers in all frames, I assigned player label to each bboxes. The procedure is:</p> <ol> <li>Assign labels to helmets with confidence&gt;0.25. To assign them, I used aforementioned cost function and ignore number.</li> <li>If the number of assigned helmets is smaller than 22, the all remaining labels and helmets were matched using linear_sum_assignment.</li> </ol> <h1>4. Second helmet assignment using deepsort and previously obtained homography transforms</h1> <p>I ran deepsort to improve assignment. 
The basic procedure was almost the same as the notebook Rob posted.</p> <ol> <li>Run deepsort using the bboxes that were labeled in the first helmet assignment.</li> <li>Assign labels to the output bboxes of deepsort. The homography matrices were those obtained in section 2. Helmet ignore numbers were recalculated.</li> <li>For each deepsort cluster, calculate the number of assigned labels for each player.</li> <li>Try assigning the most voted label to each deepsort cluster. If there were label conflicts in a frame, only the cluster with the most votes was labeled; the remaining ones stay unlabeled here.</li> <li>Assign the second most voted labels to the unlabeled deepsort clusters.</li> <li>Repeat 4 and 5.</li> <li>If there were still unlabeled clusters after 100 iterations, the remaining labels were assigned to them using linear_sum_assignment.</li> </ol> <h2>Parameters of deepsort</h2> <p>The ReID model was the same as in Rob's notebook.</p> <ul> <li>MAX_DIST: 0.28743019816658505</li> <li>MIN_CONFIDENCE: 0.4857573798672442</li> <li>NMS_MAX_OVERLAP: 0.43969024230191284</li> <li>MAX_IOU_DISTANCE: 0.7320514748544789</li> <li>MAX_AGE: 2</li> <li>N_INIT: 1</li> <li>NN_BUDGET: 30</li> </ul> <h1>What I couldn't make work</h1> <ul> <li>FairMOT, ByteTrack, Tracktor++</li> <li>YOLOX detector</li> <li>Exploiting speed/acceleration/orientation data</li> <li>Jersey number detection (did not try)</li> <li>Training ReID models using torchreid</li> <li>Camera matrix optimization to get an initial assignment guess for frame 1<ul> <li>Actually it worked to some extent, but it tended to get trapped in local minima. This tendency became stronger in videos with a lot of sideline players (e.g. 57781_000252_Sideline.mp4). So I concluded that shape context was better.</li></ul></li> </ul> <p>Thank you very much! This is my first solo gold, so I feel really happy.</p> <p>Thank you very much😄</p> <p>Thank you Rob! Your helper codes and notebooks were really helpful.</p>
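<p>To make the matching loop of section 2.1 concrete, below is a minimal Python sketch of the Hungarian-matching plus homography refinement described above. The cost function is the one quoted in the write-up; the iteration count, the convergence tolerance and the point at which LMEDS is switched off are placeholder choices of mine, not the original settings.</p> <pre><code>import numpy as np
import cv2
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def assignment_cost(warped_track_xy, bbox_xy):
    # RMS distance between transformed tracking points and bbox centers
    # after optimal (Hungarian) matching.
    dist = cdist(warped_track_xy, bbox_xy) ** 2
    row_ind, col_ind = linear_sum_assignment(dist)
    score = np.sqrt(dist[row_ind, col_ind].sum() / len(row_ind))
    return score, row_ind, col_ind

def refine_homography(track_xy, bbox_xy, H_init, n_iter=20, tol=1e-3):
    # ICP-like loop: warp tracking points, re-match, re-fit the homography.
    H, best = H_init, np.inf
    for it in range(n_iter):
        warped = cv2.perspectiveTransform(
            track_xy.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        score, r, c = assignment_cost(warped, bbox_xy)
        if best - score < tol:          # stopped improving, keep the current matrix
            break
        best = score
        # robust LMEDS in early iterations to repair wrongly assigned pairs,
        # plain least-squares estimation once the matching has stabilised
        method = cv2.LMEDS if it < 5 else 0
        H_new, _ = cv2.findHomography(track_xy[r].astype(np.float32),
                                      bbox_xy[c].astype(np.float32), method)
        if H_new is not None:
            H = H_new
    return H, best
</code></pre> <p>In the full pipeline this kind of loop is wrapped in the brute-force search over rotation/scaling factors, and the result with the lowest cost is kept.</p>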
NFL Health & Safety - Helmet Assignment
[3rd place solution] YOLOv5 + DeepSort + ICP + Hungarian algorithm
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NFL Health & Safety - Helmet Assignment <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!<br> The deepsort notebook that you shared was really helpful!</p> <p>Thank you!<br> The last assignment improved the validation score from 0.8469 to 0.9102.</p> <p>Thank you to all the participants and organizers! &nbsp;<br> Thanks especially to <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for sharing their amazing notebooks!&nbsp;</p> <h3>Summary</h3> <p>My solution consists of the following stages.</p> <ul> <li><p>(Stage-1) Use Yolov5 to detect helmets.</p></li> <li><p>(Stage-2) Use deepsort to track the helmet.</p></li> <li><p>(Stage-3) Use ICP (Iterative Closest Point) and the Hungarian algorithm to assign helmet boxes to the tracking data for each frame. &nbsp;</p></li> <li><p>(Stage-4) Divide helmets into two clusters using k-means based on the color of the helmet.</p></li> <li><p>(Stage-5) Use ICP (Iterative Closest Point) and the Hungarian algorithm to assign helmet boxes to the tracking data again, taking into account the results of deepsort and the helmet color information. &nbsp;</p></li> <li><p>(Stage-6) Use the Hungarian algorithm to determine the final label. &nbsp;</p></li> </ul> <h3>(Stage-1) Use Yolov5 to detect helmets.</h3> <p>I trained yolov5l6 and yolov5x6 with an image size of 1280. &nbsp;<br> After converting the videos to images, I trained more frames with impact. &nbsp;<br> In order to avoid detecting sideline helmets, sideline helmets were not trained. &nbsp;<br> During inference, yolov5l6 and yolov5x6 were used, and only horizontal flip was used for TTA. &nbsp;<br> Since helmets with impact are often only partially visible, I had to set lower thresholds for confidence and iou. &nbsp;</p> <h3>(Stage-2) Use deepsort to track the helmet. &nbsp;</h3> <p>I ran deepsort on the helmets detected by yolov5 to track the helmet. <br> At this time, to reduce false positives, I deleted helmets that could not be tracked for frames longer than a certain length. &nbsp;<br> I modified the deepsort code to return the features extracted by ReID and the confidence. &nbsp;<br> Features are used in stage-4, and confidence is used in stage-3 and stage-5. &nbsp;<br> I used the ReID's weights distributed in git as is. &nbsp;</p> <h3>(Stage-3) Use ICP and the Hungarian algorithm to assign helmet boxes to the tracking data for each frame. &nbsp;</h3> <p>I used ICP and the Hungarian algorithm to assign the helmet boxes to the tracking data. &nbsp;<br> ICP is used to adjust the position of the helmet boxes so that the total distance between the helmet boxes and the tracking data is minimized. <br> At that time, each coordinate is normalized to 0~1 for both x and y. &nbsp;<br> Then the Hungarian algorithm is used to assign each helmet's boxes to each tracking data. &nbsp;<br> The labels are assigned by creating and solving a cost matrix based on the distance between each helmet's boxes and each tracking data. &nbsp;</p> <p>In order to adjust the position of the tracking data using ICP, I need to remove False Positive helmet boxes and tracking data that is not shown in the video. 
&nbsp;<br> To remove tracking data that is not shown in the video, I removed the outermost tracking data one by one and repeated the ICP. &nbsp;<br> To remove False Positive helmet boxes in the same way, I removed helmet boxes with a confidence lower than 0.5 one by one and repeated the ICP. <br> Also, when the number of helmet boxes is greater than the tracking data, remove the outermost helmet boxes so that the number is equal. &nbsp;<br> It's because most of the time, the helmets of the audience are detected. <br> After repeating the ICP and then getting multiple results, the result with the smallest total distance between the helmet boxes and the tracking data is chosen. &nbsp;<br> Since ICP is highly dependent on the initial values, I reversed the top, bottom, left, and right sides and ran it in multiple patterns. &nbsp;<br> <img src="https://i.postimg.cc/T34KcYfs/NFL-HA-ICP2.png" alt="Alternate Text"></p> <h3>(Stage-4) Divide helmets into two clusters using k-means based on the color of the helmet &nbsp;</h3> <p>Helmet boxes were divided into two clusters based on the color of the helmet using k-means. &nbsp;<br> To calculate the distance between each Helmet's boxes, I used the features calculated by the deepsort process. &nbsp;<br> Combining this result with the result of stage-3, I can determine which team (H or V) each helmet belongs to. &nbsp;<br> <img src="https://i.postimg.cc/FH4qdPg7/Screenshot-from-2021-11-03-22-18-20.png" alt="Alternate Text"></p> <h3>(Stage-5) Use ICP and the Hungarian algorithm to assign helmet boxes to the tracking data again, taking into account the results of deepsort and the helmet color information.</h3> <p>I ran the same process as stage-3. &nbsp;<br> The difference from stage-3 is that it takes into account the results of deepsort and the color of the helmet. &nbsp;<br> This is done by reducing the value of the cost matrix according to the results of deepsort and the color of the helmet. &nbsp;<br> When reducing the value of the cost matrix based on the results of deepsort, reduce the distance to the tracking data that has the same label as the label that occurs frequently in the deep sort. <br> This increases the probability that the same label will be assigned to helmets of the same deepsort. &nbsp;<br> When reducing the value of the cost matrix based on the color of the helmet, I use the results of Stage-4 to reduce the distance to the tracking data for the same team that the helmet belongs to. <br> This increases the probability that helmets of the same color will be assigned the same team label. &nbsp;</p> <h3>(Stage-6) Use the Hungarian algorithm to determine the final label &nbsp;</h3> <p>I used the Hungarian algorithm again to assign the helmets to the tracking data. &nbsp;<br> In this stage, the cost matrix is created based on the percentage of labels in the same deepsort helmets. &nbsp;<br> Therefore, labels that appear frequently in the same deepsort helmets are assigned priority. &nbsp;<br> Relatively successful allocation of tracking data, which is difficult to adjust in ICP, because the distance between helmet boxes and tracking data is not used in the cost matrix.</p> <h3>Score</h3> <p>validation score (24 videos) : 0.9102</p> <p>public LB : 0.910</p> <p>private LB : 0.904</p>
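<p>As an illustration of how the stage-5 cost matrix can be discounted by the deepsort votes and the helmet-colour team, here is a hypothetical sketch; the bonus values and the function signature are my own guesses, the write-up only states that the corresponding cost entries are reduced.</p> <pre><code>import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def assign_with_priors(box_xy, track_xy, box_team, track_team,
                       box_vote, deepsort_bonus=0.1, team_bonus=0.05):
    # box_xy / track_xy : (N, 2) ICP-adjusted coordinates, normalised to 0..1
    # box_team[i]       : team cluster ('H'/'V') of box i from the k-means colour step
    # track_team[j]     : team of tracking player j
    # box_vote[i]       : tracking player most voted for inside box i's deepsort
    #                     cluster, or None if the cluster has no vote yet
    cost = cdist(box_xy, track_xy)
    for i in range(len(box_xy)):
        for j in range(len(track_xy)):
            if box_vote[i] is not None and box_vote[i] == j:
                cost[i, j] -= deepsort_bonus   # favour the deepsort majority label
            if box_team[i] == track_team[j]:
                cost[i, j] -= team_bonus       # favour players of the matching team
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
</code></pre>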
NIPS 2017: Defense Against Adversarial Attack
Solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NIPS 2017: Defense Against Adversarial Attack <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle, and especially a big thank you to Alex, for all three of these exciting competitions. I have learned deep learning, tensorflow and a little bit of “game theory”.</p> <p>Here is my code for all 3 submissions: <a href="https://github.com/tiendzung-le/cleverhans-models">https://github.com/tiendzung-le/cleverhans-models</a>.</p> <p>My defense solution can be summarized as below:</p> <p>• It is basically an ensemble of 7 models.</p> <p>• The inception_v3 model's prediction is not included in the ensemble but is used as a special adversarial-image detector. Let's call the output of the ensemble main_label and the output of the inception_v3 model fool_label. Note that the inception_v3 model is strongly attacked, so most of the time fool_label is wrong. The final outcome is main_label if it differs from fool_label or if more than half of the ensemble classifiers vote for main_label. Otherwise the final outcome is the second-best class of the ensemble.</p> <p>• In order to avoid OOM issues, each model is run separately and its prediction is redirected into a temporary file. The ensemble script reads all 8 predictions and produces the final result.</p> <p>For the 3rd dev round, only 5 models were in the submission and it took 2nd position in the ranking.</p>
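<p>A minimal sketch of the decision rule described above; averaging the per-model probabilities to obtain the ensemble output is my assumption, the write-up only specifies the voting logic.</p> <pre><code>import numpy as np

def final_label(ensemble_probs, inception_probs):
    # ensemble_probs : (n_models, n_classes) probabilities from the 7-model ensemble
    # inception_probs: (n_classes,) probabilities from the strongly attacked inception_v3
    mean_probs = ensemble_probs.mean(axis=0)
    main_label = int(mean_probs.argmax())
    fool_label = int(inception_probs.argmax())        # usually wrong on adversarial images
    votes = int((ensemble_probs.argmax(axis=1) == main_label).sum())
    if main_label != fool_label or votes > ensemble_probs.shape[0] / 2:
        return main_label
    return int(np.argsort(mean_probs)[-2])            # fall back to the second best
</code></pre>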
NIPS 2017: Non-targeted Adversarial Attack
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NIPS 2017: Non-targeted Adversarial Attack <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>It seems that the final results have been uploaded to the leaderboard.<br> I made a conceptual diagram for my solution. My solution is based on the basic iterative method. The key point is applying a blur effect to the gradient by convolving it with a Gaussian filter. This works as regularization and enables the attack to fool many types of models simultaneously (this ability is called <em>transferability</em> in adversarial-example research).</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-nips-2017-adversarial-attack">https://github.com/toshi-k/kaggle-nips-2017-adversarial-attack</a> <img src="https://raw.githubusercontent.com/toshi-k/kaggle-nips-2017-adversarial-attack/master/img/solution.png" alt="enter image description here" title=""></p>
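<p>A minimal sketch of the attack idea, assuming a framework-agnostic <code>grad_fn(x)</code> that returns the loss gradient with respect to the input image; the number of iterations and the Gaussian sigma are placeholders, the exact values are in the linked repository.</p> <pre><code>import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_iterative_attack(x, grad_fn, eps, n_iter=10, sigma=3.0):
    # x       : input image, shape (H, W, 3), values in [-1, 1]
    # grad_fn : callable returning d(loss)/dx for the true label
    alpha = eps / n_iter
    x_adv = x.copy()
    for _ in range(n_iter):
        g = grad_fn(x_adv)
        # smooth the gradient spatially (not across channels) before the sign step;
        # this acts as a regularizer and improves transferability between models
        g = gaussian_filter(g, sigma=(sigma, sigma, 0))
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay inside the epsilon ball
        x_adv = np.clip(x_adv, -1.0, 1.0)          # stay a valid image
    return x_adv
</code></pre>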
NIPS 2017: Non-targeted Adversarial Attack
9th place solution - paper
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NIPS 2017: Non-targeted Adversarial Attack <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We wrote a paper about our 9th place solution for this sub-competition. <a href="http://alekseynp.com/nips2017-adversarial.html">link to paper</a></p>
NIPS 2017: Non-targeted Adversarial Attack
Sharing my submissions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NIPS 2017: Non-targeted Adversarial Attack <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Non-targeted attacker using ensemble of models.</h1> <p>Note: I have doubts about my final score because of major changes in attackers from last dev round 2. But, I want to share some of my approaches and observations.</p> <p>Sources are available <a href="https://github.com/erko/nips17-attack">here</a> and <a href="https://github.com/erko/nips17-targeted-attack">here</a>, defense.</p> <p>In this post described observations and improvements of non-targeted attacks by team erko (Yerkebulan Berdibekov) as a participator of competition “NIPS 2017: Non-targeted Adversarial Attack”. Currently the most robust and transferable attacks described in Carlini et. al. [ 1 ]. Also A. Kurakin et al [ 2 ] shown ensemble attack can produce for adversarial samples for gradient masking/defensive distillations and etc. However, Carlini’s attacks are very costly in term of wall-clock time to generating adversarial samples in initial proposed version and not suitable to this competition by constraint of maximum epsilon perturbation. I adopted this techniques described in this papers to generate faster adversarial examples. Particularly, to be suitable to this competition, I have removed requirement of clipping by epsilon perturbation at every or final iteration at all, and tuned to reach faster a region of adversarial examples to ensemble of models. Also, in Carlini et. al. [ 3 ] shown experiments of attacking against spatial smoothing defense. But generating this adversarial samples takes more resources and cannot be joint with attacks against other defenses within same competition time constraints. Therefore, in this submission this type of attack did not included.</p> <h2>Description of Carlini and Wagner attacks:</h2> <p>To produce adversarial examples we can optimize image by minimizing loss function. In this procedure, this image must stay a valid image. E.g. pixel value of images must stay in range [-1.0, 1.0]. After optimization step we can clip variable by -1.0 and 1.0. In this scenario our optimization procedure can stuck at such area where we always wipe-out last change of variable and so improvement in optimization. I.e. we reach in some pixel boundary, but gradient still tell to go further.</p> <p>Another, more proposed method is to introduce new variable that is not bounded by range, and can be monotonically optimized. We can squeeze variable to bound to [-1.0, 1.0] using hyperbolic tangent to get valid image. Using optimizer on good loss function with confidence threshold value (see original paper) with some low learning rate this proposed method can generate robust adversarial examples.</p> <h2>Removing of clipping by epsilon perturbation:</h2> <p>In this competition, valid adversarial example can deviate from given image by max epsilon value (2, 4, 8, 16). Therefore, just staying in range [-1.0, 1.0] is no longer valid. To incorporate this constraint we can introduce new variable, that hyperbolic tangent of this variable represents difference of possible valid image. I.e. possible valid image is [image - eps, image + eps]. Let's call this values x_min &amp; x_max respectively. 
Then valid image can be obtained: <img src="https://image.ibb.co/iw3Uow/xscale_formula.png" alt="formula" title=""></p> <p>Here, X- is given image, e- epsilon, maximum perturbation of image pixel value, Xvariable- variable, that we optimizing. Using this method, we can make improvement at every iteration. </p> <h2>Ensembling multiple models and generating faster adversarial samples:</h2> <p>Using as many as 6 models we can generate more transferable adversarial samples. At experimenting this ensemble attacker can produce about 95-100% adversarial samples for included models. Adding more models gives more and more transferability feature, however performance of producing each example drastically decreases by every new model. Due to limitation on GPU memory 11GB this attacker’s batch size is reduced to 10 to fit ensemble of 6 models. </p> <p>Another competition constraint is to produce 100 adversarial samples in 500s of wall-clock time.This includes all initialization and execution time (runs in docker container). Ensemble models graph initialization takes about 100s at google compute cloud. And model variables restoring takes another 50s. I have tuned number of iterations to 9 to fit in 300s to produce adversarial samples. To be able to generate robust adversarial examples in this iterations, due to squeezing space to epsilon range we need to set large learning rate. I have set learning rate to 2.0.</p> <h2>Experiments on attacking spatial smoothing defenses:</h2> <p>In Weilin Xu et. al. [ 4 ] also described feature squeezing technique of defenses against types of adversarial perturbations. However, using feature squeezing with depth 2 of color channel detecting adversarial samples gives about 75% AUC on development set images, bigger depth (4,5,7) of color channel gives even lower accuracy. Experimentally, on development set median filtering adversarial samples detecting gives about 97% AUC. At this development set evaluation using median filtering is not harming clean images accuracy, and gives relatively good accuracy on adversarial samples. To penetrate this defense I tested multiple variations: adding another model to ensemble with median filtered input, and all ensemble models median filtered. Tensorflow hasn’t median filtering method, I implemented own using top_k function: </p> <pre><code>def median_filter_gray(images, k=3): patches = tf.extract_image_patches(images, [1, k, k, 1], [1, 1, 1, 1], [1, 1, 1, 1], padding='SAME') m_idx = int(k * k // 2 + 1) top = tf.nn.top_k(patches, m_idx, sorted=True)[0] median = tf.slice(top, [0, 0, 0, m_idx - 1], [-1, -1, -1, 1]) return median def median_filter(images, k=3): stacked = tf.concat( [median_filter_gray(images[:, :, :, i:i + 1], k) for i in range(images.get_shape().as_list()[-1])], axis=3) return stacked </code></pre> <p>Although any variations using median filtering harms generating speed and samples adversarial ratio, it can produce samples with 20-50% adversarial ratio (within same number of iterations). I presume that with bigger number of iterations it can produce samples with high adversarial ratio.</p> <h2>“Easy-to-Code” suggestions:</h2> <p>This suggestions may be obvious and insignificant. But this helped me to speed up coding of ensembles. In this submissions I experimented with coding and came up using tensorflow slim’s nets_factory script. 
Following code snippets describes how to ensemble models in few lines:</p> <pre><code>from slim.nets import nets_factory </code></pre> <hr> <p>function to inference any models listed <a href="https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models">here</a> and also adversarial models:</p> <pre><code>def model(model_name, x_input, num_classes, preprocess=False, is_training=False, label_offset=0, scope=None, reuse=None): network_fn = nets_factory.get_network_fn( model_name, num_classes=num_classes - label_offset, is_training=is_training, scope=scope, reuse=reuse) eval_image_size = network_fn.default_image_size images = x_input if preprocess: images = preprocess_batch(model_name, x_input, eval_image_size, eval_image_size) logits, _ = network_fn(images) return logits </code></pre> <hr> <pre><code>for model_name in models_scopes.keys(): logits = model(model_name, clipped_x, num_classes, True, label_offset=label_offset, scope=scope) </code></pre> <hr> <pre><code>var_lists = [] for i, model_name in enumerate(models_scopes.keys()): scope = models_scopes[model_name] mapping_scope = scope if model_name not in checkpoint_mapping else checkpoint_mapping[model_name] var_list = {(mapping_scope + v.name[len(scope):][:-2]): v for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope)} var_lists.append(var_list) </code></pre> <hr> <pre><code>savers = [tf.train.Saver(var_lists[i]) for i, scope in enumerate(models_scopes.values())] for i, model_name in enumerate(models_scopes.keys()): savers[i].restore(sess, model_checkpoints[model_name]) </code></pre> <h2>References:</h2> <ol> <li>Nicholas Carlini, David Wagner. Towards evaluating the robustness of neural networks. <a href="https://arxiv.org/abs/1608.04644">https://arxiv.org/abs/1608.04644</a> </li> <li>Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, Patrick McDaniel. Ensemble Adversarial Training: Attacks and Defenses. <a href="https://arxiv.org/abs/1705.07204">https://arxiv.org/abs/1705.07204</a></li> <li>Nicholas Carlini, David Wagner. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. <a href="https://arxiv.org/pdf/1705.07263.pdf">https://arxiv.org/pdf/1705.07263.pdf</a> </li> <li>Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. <a href="https://arxiv.org/abs/1704.01155">https://arxiv.org/abs/1704.01155</a> </li> <li>...</li> </ol>
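<p>Going back to the reparameterization described in the section on removing the epsilon clipping: the change of variable can be written in a few lines. This is a numpy sketch; in the actual submission the same transform presumably lives inside the TensorFlow graph so the optimizer can update the unconstrained variable directly.</p> <pre><code>import numpy as np

def reparameterize(w, x, eps):
    # w   : unconstrained optimisation variable (same shape as the image)
    # x   : original image with pixel values in [-1, 1]
    # eps : maximum allowed perturbation
    x_min = np.clip(x - eps, -1.0, 1.0)
    x_max = np.clip(x + eps, -1.0, 1.0)
    # tanh squashes w into (-1, 1), so the result always lies in [x_min, x_max]
    # and no clipping step is needed during optimisation
    return x_min + (x_max - x_min) * (np.tanh(w) + 1.0) / 2.0
</code></pre>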
NIPS 2017: Targeted Adversarial Attack
5th Place Solution - Paper
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NIPS 2017: Targeted Adversarial Attack <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My teammate, Ross Wightman and I wrote a paper about our attack. If anyone thinks it deserves to be in the arxiv or the competition book, let me know ;)</p> <p><a href="http://alekseynp.com/papers/nips2017-adversarial-paper.pdf">http://alekseynp.com/papers/nips2017-adversarial-paper.pdf</a></p>
NOAA Fisheries Steller Sea Lion Population Count
23rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that the final validation has ended, I am sharing my solution.<br> My approach is based on semantic segmentation followed by simple regression.<br> I remove the fully-connected layer from ResNet-34 and use the 7x7 bottom surface for segmentation.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-steller-sea-lion-population-count">https://github.com/toshi-k/kaggle-steller-sea-lion-population-count</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-steller-sea-lion-population-count/master/img/solution.png" alt="enter image description here" title=""></p> <p>Hi eagle4,</p> <p>The result of the semantic segmentation has values in [0~1] for each pixel.<br> I summarize this result with three features and use them for linear regression.<br> This procedure is done in this code:<br> <a href="https://github.com/toshi-k/kaggle-steller-sea-lion-population-count/blob/master/source/02_deep/lib/extract.lua">https://github.com/toshi-k/kaggle-steller-sea-lion-population-count/blob/master/source/02_deep/lib/extract.lua</a></p>
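<p>The exact three summary features are defined in the linked extract.lua; as an illustration only, the mask-to-count regression could look like the sketch below (the features here are stand-ins, not necessarily the ones used).</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

def mask_features(prob_mask, thresh=0.5):
    # prob_mask: predicted per-pixel probabilities in [0, 1] for one class
    return [prob_mask.sum(),              # soft "area" of the class
            (prob_mask > thresh).sum(),   # hard area after thresholding
            prob_mask.max()]              # strongest response

def fit_counter(prob_masks, counts):
    # prob_masks: predicted masks for the training images (one class)
    # counts    : dotted ground-truth counts for the same class
    X = np.array([mask_features(m) for m in prob_masks])
    return LinearRegression().fit(X, np.asarray(counts))
</code></pre>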
NOAA Fisheries Steller Sea Lion Population Count
6th solution, object detection + cnn classification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>1) Your solution does not use the BN model for region proposals, only AN?</p> <p>I use AN only to adjust the picture size, then use CNNs to classify proposals from BN.</p> <p>2) In the second stage, you fed the CNNs not the exact candidate boxes, but crops of predefined size (72, 108, 144) centered on the region proposals?</p> <p>Yes, the crop is centered on the region proposal and extracted after scaling the picture.</p> <p>Thanks to @threeplusone for your coordinates.</p> <p>My solution consists of two parts: first using Faster R-CNN to get candidate boxes, and then using CNNs to classify them.</p> <p>Two Faster R-CNNs (AN and BN) are trained. AN is trained with multi-scale input, zooming in on each picture two to four times randomly. BN is trained with single-scale input, zooming in on each picture twice. The reason is that the scale of the same age group of sea lions varies greatly between pictures; for example, in the test set, the adults' size varies from ~100 pixels to ~300 pixels. When sea lions are too big, my single-scale network does not completely cover the whole sea lion (mainly adult males), but it has fewer false positives. Because in the second stage I need to use the box size, I also train the multi-scale Faster R-CNN: it completely covers the sea lions, but it has more false positives.</p> <p>I then train a CNN to reduce false positives.</p> <p>Finally, I train about a dozen CNNs to classify the candidate boxes from AN. In the end I choose the best four of them and average their results.</p> <p>The four CNNs are as follows:</p> <p>1) 72x72 input, VGG-like CNN</p> <p>2) 72x72 and 144x144 multi-input network</p> <p>3) 72x72 and 72x72 (108x108 patch scaled to 72x72) multi-input network</p> <p>4) same as net 3, but outputs five classes (no background)</p> <p>At test time, I scale each picture by a certain factor (scale = target_scale / max_box_length, where max_box_length is the longest edge of all the candidates from BN in a picture and target_scale is about 78-88). The candidate boxes' coordinates are scaled by the same factor.</p>
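<p>A sketch of the test-time scale normalisation and crop extraction described above; crop size 72 and target scale 84 are example values within the stated ranges, and the padding behaviour is my own choice.</p> <pre><code>import numpy as np
import cv2

def scaled_crops(image, boxes, target_scale=84, crop=72):
    # boxes: candidate boxes from BN as (x1, y1, x2, y2) in original image coordinates
    max_len = max(max(x2 - x1, y2 - y1) for x1, y1, x2, y2 in boxes)
    s = target_scale / max_len                    # scale so the largest box has a fixed size
    img = cv2.resize(image, None, fx=s, fy=s)
    crops = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = int(s * (x1 + x2) / 2), int(s * (y1 + y2) / 2)
        x0, y0 = max(cx - crop // 2, 0), max(cy - crop // 2, 0)
        patch = img[y0:y0 + crop, x0:x0 + crop]
        # pad crops that fall over the image border so every crop is crop x crop
        patch = cv2.copyMakeBorder(patch, 0, crop - patch.shape[0],
                                   0, crop - patch.shape[1], cv2.BORDER_CONSTANT)
        crops.append(patch)
    return crops
</code></pre>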
NOAA Fisheries Steller Sea Lion Population Count
AdaBoost, SSD: object detection for lions counting
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I don't know how well it works, but network stably predicts adult, subadult males and pups. For the rest two classes scale is a problem that was unrivaled for me, and yes females predicted more often than juveniles.</p> <p>I mean that tried the same models (didn't train new), but tested on images, resized to different size, also added different extra amount to pups, juveniles number, thus evaluate no new ideas, just tried to fit the LB.</p> <p>There is famous <a href="https://github.com/pdollar/toolbox">Piotr Dollar's toolbox</a> for object detection approach: sliding window for localization + AdaBoost for classification. I've used it as is, RealBoost with 1024-2048 decision trees.</p> <p>Hyper feature - it's a concatenated feature maps from different network layers: shallow for fine details, mid for class specific responses, deep for scene context, e.g. (conv3_3 &amp; maxpool) concatenated with conv4_3 concatenated with (conv5_3 &amp; deconvolution), after that 1x1 convolutions can be used to integrate the features and reduce the width (also features from each layer need to have similar scale before concatenation, so BN or scale/normalization layers are added). Nice example can be found the in paper <a href="https://arxiv.org/pdf/1604.00600.pdf">"HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection"</a> by Intel guys.</p> <p>Since we have confidence for each rectangle, they can be suppressed using this cue: rectangle b, with confidence smaller than rectangle a, is suppressed if intersection_area(a, b) / min(a.area(), b.area()) &gt; threshold (0.3 is used), taken from: <a href="https://github.com/pdollar/toolbox/blob/master/detector/bbNms.m">Piotr Dollar's toolbox</a>.</p> <p>Yep.</p> <p>We made a bet with my friend, how far we can go in this challenge (and two month of life have blown away). The problem looks like a type of crowd counting task. Recent solutions prepare density maps for dotted crowd annotations, train network to predict them, and then sum the density in each pixel to obtain the final number. However, I've decided to use object detection approaches, and my friend started with segmentation...</p> <hr> <p>My path:</p> <p>1) Classic ml - AdaBoost</p> <p>2) SSD on full images (since it is faster than faster rcnn)</p> <p>3) SSD on tiles</p> <p>For object detection boxes is needed, so decided to use squares around the dots coordinates (size was selected manually, OpenCV <code>inRange</code> did the work with dots coordinates extraction).</p> <p>As for now DL is a mainstream with state of art results, but around 3 years ago, <em>the best off-the-shelf classifier</em> <strong>AdaBoost</strong> crushes the leaderboards, so cannot skip to evaluate it. One of the fastest and successful AdaBoost pipelines is boosted decision forest with aggregated channel features (thx to Piotr Dollar for this milestone in object detection). Thus trained 3 models: adult, subadult males and pups. This gives ~20 on public LB <em>(it took around of an hour to evaluate one model on all test set, on CPU)</em>. 
The recall is almost 100%, however, due to lions rotations, models learned to find circle-shaped objects, so there were a lot of false positives on stones.</p> <p>DL's time. Single Shot Multibox Detector was selected for the speed reasons and good accuracy. Scaled the images to fit the GPU memory (~1200x800) and run with VGG backbone out of the box. DL features give ~18 on LB, mostly due to better (than mean values) work on females and juveniles, testing time is also ~1 hour, but on GPU. Surprisingly, AdaBoost with ACF did well on adult, subadult males, the difference was ~0.1-0.3 compared to SSD. The issues:</p> <ul> <li><p>pups detected badly</p></li> <li><p>can't distinguish between females and juveniles</p></li> </ul> <p>What tried to beat this (without any success):</p> <ul> <li><p>Better features - construct a hyper feature from conv3_3 &amp; conv4_3 &amp; conv5_3 to integrate the details from shallow layer and context from deeper.</p></li> <li><p>Learn the image scale - average pool of feature maps + fc128 + fc16 and concatenate these 16 learned channels with feature maps for final prediction. So, I also did nothing with the scale problem.</p></li> <li><p>Tried multitask loss - added regression output to SSD classification and localization. Regression produced more or less reasonable values for adult, subadult males, but for classes with huge deviation in number of lions per image obtained <code>nan</code>. Looks like need to spend more time here =), and use small tiles instead of full images.</p></li> </ul> <p>So, gave up with sophisticated methods, tile the train data. Used huge tiles 1000x1000 for speed reasons and to avoid border effects. It helped to detect pups better, gives ~17 on LB (during training and testing images were resized to 3000 by the shortest side, then tiles extracted, did flip augmentation), runs ~8 hours on GPU: <img src="http://i67.tinypic.com/juhicw.png" alt="Picture 1" title=""> <img src="http://i63.tinypic.com/nv76ns.png" alt="Picture 2" title=""> The rest 1.5 point is stacking models (spent ~100 submissions while doing this).</p> <p>Thanks for everyone who did involve in this competition. It was fun!</p> <p>I can't distinguish with my eyes between those similar classes, so switched to DL for better features, but found there similar problems. So I haven't tried one-vs-all or other tricks. My guess that cross-entropy loss doesn't good for this case, since it forces the rest classes to have near 0 score, and network cannot learn well if classes have similar features. I've tried hinge loss to simplify the training, but with no success =), probably due to wrong integration in SSD pipeline.</p> <p>Small objects is a pain for all OD. Usually hyper feature should help, but it didn't, or more sophisticated classifier (didn't help), also tried to learn predictors on different layers - conv3 - conv5, but results were similar. So looks like it's not a feature representation problem (it's strong enough), check the pups: <img src="http://i65.tinypic.com/2646had.jpg" alt="pups" title=""> Thus SSD here performs as everything else (faster rcnn), but faster.</p> <p>From my experience highly overlapped objects here is one of the main issues with OD approach, they merges during nms and creates troubles during matching. Tried to add 6'th class: female + near pup, but without success. I played a lot with boxes sizes and nms thresholds, last submissions have only 2 box sizes during training: huge for males, smaller for the rest 3 classes to mach appropriately. 
During inference nms first 4 classes at first, then scale down boxes for females and juveniles, nms them with pups.</p>
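<p>The suppression rule quoted above can be sketched as follows; boxes are (x1, y1, x2, y2) and scores are the detection confidences.</p> <pre><code>import numpy as np

def nms_min_area(boxes, scores, thresh=0.3):
    # box b (lower confidence) is suppressed by box a when
    # intersection_area(a, b) / min(area(a), area(b)) > thresh
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        x1, y1, x2, y2 = boxes[i]
        suppressed = False
        for j in keep:
            a1, b1, a2, b2 = boxes[j]
            iw = max(0, min(x2, a2) - max(x1, a1))
            ih = max(0, min(y2, b2) - max(y1, b1))
            inter = iw * ih
            min_area = min((x2 - x1) * (y2 - y1), (a2 - a1) * (b2 - b1))
            if min_area > 0 and inter / min_area > thresh:
                suppressed = True
                break
        if not suppressed:
            keep.append(i)
    return keep
</code></pre>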
NOAA Fisheries Steller Sea Lion Population Count
Density estimation approach: quick overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulation to the winners and all participants. Special thanks to @threeplusone for Sea Lion Coordinates.</p> <p>My approach was started as gaussian density regression, similar to "Interactive Object Counting" by Arteta, Lempitsky, Noble, Zisserman, <a href="https://www.robots.ox.ac.uk/~vgg/publications/2014/Arteta14/arteta14.pdf">link</a>. Final model consists of 3 major parts:</p> <ol> <li><p>sea lion classifier (5+1 classes) trained on 75x75 patches. Implemented as Dilated Residual Networks, DRN (ResNet-18 + Dilated convolution layers). Classification output was removed leaving the model with pixel-level activation of size 138x138 and 512 feature depth for an input tile 276x276. By design, DRN can leave resolution intact, but due to computational issues receptive field has been reduced by a factor of 2 in the first convolutional layer.</p></li> <li><p>input tiles 276x276 have been converted into feature maps 138x138x512 and linearly regressed to Gaussian fields centered on sea lions, class-wise. Every Gaussian is normalized to 1, and integration over activation map gives objects count. Linear models have been added on top of DRN as 1x1 convolitional layers. Initially this stage was intended as final, and it worked just perfect on synthetic data giving under 2% error rate. Unfortunately, it was not so good for sea lions. The picture demonstrates 2 problems: a) inter-class mis-classification, esp. adult/sub-adult males, b) reconstructed Gaussians sometimes biased towards body shape, thus overestimating density, phenomenon noticed by other participants <a href="https://www.kaggle.com/c/noaa-fisheries-steller-sea-lion-population-count/discussion/33788#190898">as well</a>. The reason could be a sub-optimal Gaussian kernel size. (do not pay attention to border artifacts, it's dilated conv features that has been routinely cropped out)</p></li> <li><p>Gaussian activations, 138x138, 10 feature depth (I used 2 linear models, 5 activations each) have been fed into simplified variation of ResNet-18 with 5 regression outputs, trained for RMSE loss.</p></li> </ol> <p>DRN is computationally very intensive. It's purpose localization, and unlike traditional conv models it does not reduce resolution. To filter empty tiles I trained aux VGG-16 binary classifier. </p> <p>There is room for improvement.</p> <ul> <li>I considered U-net too, but opted for DRN to avoid deconvolution. This decision has its pros and contras, and I would give U-net a try if started over.</li> <li>linear regression to Gaussian fields appears primary source of error. It does not handle well multi-class problem, has issues with different object scales, and difficult to maintain huge (x100) variability of object counts per image. One thing to consider is substitute linear regression with trainable end-to-end non-linear layer</li> </ul> <p>Update: code uploaded to github <a href="https://github.com/alexander-rakhlin/Sea-Lion-Population-Count">link</a></p>
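<p>To illustrate the regression target described in part 2, here is a sketch of how a class-wise ground-truth density map can be built from the dot annotations; one unit-mass Gaussian is placed per sea lion, so integrating the map recovers the count. The kernel width is a placeholder, and as noted above a sub-optimal kernel size is itself a likely source of error.</p> <pre><code>import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(dots, shape, sigma=8.0):
    # dots : list of (x, y) coordinates for one sea lion class
    # shape: (height, width) of the target map
    dm = np.zeros(shape, dtype=np.float32)
    for x, y in dots:
        dm[int(y), int(x)] += 1.0
    # each delta becomes a Gaussian that still integrates to ~1,
    # so dm.sum() stays approximately equal to the number of dots
    return gaussian_filter(dm, sigma)

# at inference time: predicted_count = predicted_density.sum()
</code></pre>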
NOAA Fisheries Steller Sea Lion Population Count
Finally, we have 1,178,071 sea lions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think so.</p> <p>Something I could share about why misclassification on females and juveniles is worse more than I expected.</p> <p><img src="http://i.imgur.com/rJBOLir.gif" alt="enter image description here" title=""></p> <p>I generated random dot for explanation. I wish the threshold between females and juveniles in the model could be the green line, but it is too naive. Then I think at least it should be the red line by statistically, but in fact the threshold is dotted red line. (It is about size feature, there are other features like lion distribution etc. in the model let the prediction better.)</p> <p>I guess it is because in the small scale patch, lion count is larger, and the squared error even more. I tried add large scale training patch with more juveniles and it helps a little, but overall score is not significant.</p> <p>for p in test_patches: p = p*f(count(p)) #make sure 0 &lt; f &lt; 1 or do some normalized processing</p> <p>then stitch them side by side</p> <p>By the way, this is how I verify the prediction: (more count more bright) <img src="http://i.imgur.com/FKQdbpX.jpg" alt="enter image description here" title=""></p> <p>I mean, in my network, predictions are only numbers. Or you want to see some testing patches with its output?</p> <p>I don't quite understand what is batch_of_predictions?</p> <p>Ideally, output should be 0.5 if the lion has been cut in half. Because in the same situation at training stage, one patch is 1 and the other one is 0.</p> <p>Just intuition, and I think it would work undoubtedly even when I got trouble at the beginning. VGG is a basic CNN, I took the 1024 FC as a buffer between FCN and output. The size is according to RAM limit and model size, and intuitively I think it is enough. When I got scale problem, I tried more neurons and nothing happened. SGD is more stable than other aggressive optimizers in my work, maybe it is due to the large variation in output labels.</p> <p>Thanks. But I may try it when I have time or need to do farther research on this topic.</p> <p>Why not?</p> <p>I haven't studied receptive field in detail yet. So, thanks for sharing.</p> <p>I tried a little different, and failed as expected.</p> <p>Sure, and it is not easy work. By the way, if they have grounding truth bounding box, I want to train a object detection for image scale detect by sea lion's size, just a thought.</p> <p>I think it is typically because the testing image's sea lion is larger than training in average. In my opinion, the model don't know what sea lion is and where it is.</p> <p>Sorry for being unclear and thanks to erde. It is the ratio between juveniles and juveniles + adult_females.</p> <p>I think the ratio problem comes from scale problem. The CNN basically is a statistical model, simply distinguish juveniles and adult_females by size. Scale down image can get more juveniles, but the problem is if out of range, total lions count will go down. 
For example:</p> <ul> <li>scale predict</li> <li>0.59 [ 13 5 180 120 2]</li> <li>0.53 [ 14 8 141 149 0]</li> <li>0.48 [ 10 5 132 165 2]</li> <li>0.43 [ 9 7 122 182 0]</li> <li>0.39 [ 6 7 101 185 0]</li> <li>0.35 [ 7 7 74 176 0]</li> <li>0.31 [ 4 6 62 168 1]</li> <li>0.28 [ 5 7 48 137 0]</li> </ul> <p>This is the predict result of test/12690 on multi-scale model with different test scale. The juveniles + adult_females keeps around 300 for 0.59~0.43, and began to decrease from 0.39.</p> <p>So, all you are interesting in without post processing's performance. It is sure if I scale down more. But I don't want to take the risk if sea lions in private set are smaller then public.</p> <blockquote> <p>the multi-scale model is essentially heavy data augmentation with different scales?</p> </blockquote> <p>Yes, and single scale model's keeping number scale range is narrower.</p> <p>Relu got problem at 0, and also need to do clip.</p> <p>The "Patches looks like" section in notebook, image is training patch, [1 0 3 0 1] is label (lion count per type in the patch), the last is 5 neurons fully-connected layer with linear output.</p> <ol> <li><p>I use custom data generator.</p></li> <li><p>I trained a 200x200 and it performed worse. Maybe it is because 300 contains more relationship information between pups and adult_females. Flipping(rotating) 4 times at prediction time improved 0.04 RMSE. Using only convolutional layer in VGG is size free.</p></li> <li><p>Just a normal loss function.</p></li> <li><p>You can reference the table in the section.</p></li> </ol> <p>Yes, and if trained well, it is really hard to predict -10, since training labels never less then 0. In statistically, it is no need to clip even use linear, but we can simply clip the noise around 0 and get better performance.</p> <ol> <li><p>Yes, train the new FC layers first, then all layers.</p></li> <li><p>I use linear activation at output layer. (because 3.)</p></li> <li><p>Yes.</p></li> <li><p>Training got exploded sometimes, but it is not problem. When testing, distinguish juveniles and adult_females is difficult. I think the model don't know what is "relative" size. Actually, in the training set, we also got a lot of human label noise at juveniles and adult_females.</p></li> </ol> <p>i7 7770K, 16GB RAM, 1 GTX 1080.</p> <p>For single scale model, positive training patches = 14237, so total training patches = 54100 (positive*4*0.95). If one epoch means total training patches, it took about 30 mins a epoch. (Multi-scale takes 4 times) It took several days for training.</p> <p>Thanks to everyone. I love this competition.</p> <p><a href="https://www.kaggle.com/outrunner/use-keras-to-count-sea-lions/notebook">Here is my solution.</a> </p> <p>I use VGG16 without top, add FC-1024, and FC-5 with linear output.</p> <p>SGD optimization and mean_squared_error loss.</p> <p>And post processing for better regression.</p> <p>I think it is a simple way to count. Looking forward to hearing your thoughts, thanks.</p> <p>I found a submission got 13.46393 private LB score is made by:</p> <ul> <li>training scale: *1.23~0.66</li> <li>testing scale: *0.48</li> </ul> <p>only cut pups more than 1.8x adult_females after predict, and this affect a little in my experience.</p> <p>So, without post processing on other combination, maybe it could be around 13.</p> <p>Additionally, post processing also add pups, in this case, it should be [10,10,70,130,24]. 
But the main contribution comes from juveniles/adult_females adjustment.</p> <p>Thanks again.</p> <p>Sorry, do you mean different score or what make the score difference? I did not submit r0.59, I think it may be 14~15. About post processing: if a image predict in r0.59 = [10,10,100,100,20], then submit r0.59*1.3 = [10,10,70,130,20]</p> <p>I tried other optimizers/learning rate schedule, just for observe the learning loss convergence.</p> <p>I scaled down the original image (*0.9, ..., *0.6561), cut into 300x300, and use those patches together to train a network.</p> <p>58 training epochs is a example in the kernel. I train the new FC several epochs, observe the loss, then train more layers until the train loss close to 0.3. The total epochs before training the whole network is about 60(I think it is too slow due to my poor skill). And the whole network takes 27 epochs to got 0.126/0.247 train/validation loss with private LB 11.7. </p>
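<p>A Keras sketch of the model described above. The VGG16 backbone without top, the FC-1024 layer, the 5-way linear output, the MSE loss and SGD all come from the write-up; the Flatten layer, learning rate and momentum are assumptions of mine, and the exact setup is in the linked notebook.</p> <pre><code>from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense
from keras.models import Model
from keras.optimizers import SGD

base = VGG16(include_top=False, weights='imagenet', input_shape=(300, 300, 3))
x = Flatten()(base.output)
x = Dense(1024, activation='relu')(x)
counts = Dense(5, activation='linear')(x)   # sea lion counts per class for one 300x300 patch

model = Model(inputs=base.input, outputs=counts)
model.compile(optimizer=SGD(lr=1e-4, momentum=0.9), loss='mean_squared_error')

# at inference time, negative patch predictions are clipped to 0
# and the per-patch counts are summed over each test image
</code></pre>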
NOAA Fisheries Steller Sea Lion Population Count
My segment and count solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The label is 320*320*6<br> 0:background 1:adult-males 2:subadult-males...5:pups</p> <p>I also read solutions of the Data science Bowl,they are very good.You can get very useful information from them and implement your net.</p> <p>I did not find a good tool to label image ,so I used GIMP to label the sealions.I added layer on train set images,and extracted the mask from new layer using GIMP python APIs.</p> <p>thanks</p> <p>JuGL,we have 5 types of sealions and we should add a type to indicate there is no sealion on that pixel.</p> <p>Thanks,I read a lot of papers related and checked different structures and found the UNET is so powerfule.</p> <p>Thanks!<br>I got the idea when I was watching an TV program about the galaxies ,stars and planets...:)</p> <p>thanks</p> <p>kglspl,I selected the images easy to fit (with no complex background and having a lot of sealions),they were cutted into a lot of 320*320 small images to input to unet.Then I predicted similar images(easy to fit) and did it iterately.<br> As to mask tools,I just used GIMP and added layers to it,then I read the mask layers via GIMP python api .I think the solution is ok. </p> <p>Congratulations to outrunner,konstantin,thanks to threeplusone for the lion coordinates,thanks to kaggle and noaa for hosted a very good competition.<br></p> <p>Here is my brief overview of my solution.<br></p> <p><strong>My goal of this competition:</strong><br> I want to try my best to find and count all sealions .<br></p> <p><strong>Challenges:</strong><br> 1.Scale variance of the test data which lead to no stable local cross-validate method.<br> 2.Pups hard to find.<br> 3.In some images sealions are close to each other,and those images have large weights to RMSE.<br> 4.Sealion types almost can only be distinguished by their size,especially adult-females and juveniles.<br></p> <p><strong>Result:</strong><br> 1.My model can find almost all sealions and get their outlines(masks) except pups.<br> 2.I did not find a way to handle scale variance.<br></p> <p>Best private/public scores:13.03/14.03<br></p> <p><strong>Some demos:</strong><br> Some dotted test set images:<br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/sealions/3190.png" alt="3190 color dotted"> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/sealions/361-580.png" alt="361 color dotted"> The masks,outlines predicted by my models:<br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/sealions/48-mask-outline.png" alt="outline demo 1 "> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/sealions/6983-mask-outline.png" alt="outline demo 2 "> The sealion-outline-star relation<br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/sealions/star-demo.png" alt="relation "> <strong>The way to find all sealions:</strong><br></p> <p>My final solution base on two UNet:<br> The first one, to predict dot on sealions,I called it star-unet<br> The second one:to predict the the mask of each type of the sealions(I labeled all training set sealions mask) I called it outline-unet<br> <br> The two net’s 
structure are almost the same,the difference is the first net is logistic regression and the second is 6-way softmax regression.<br></p> <p>As to star-unet,I generated the mask by setting the value of the mask to 255 at the center of the sealions,and then decreasing to 255*0.8,255*0.6,255*0.4 as the position go far away from the center.<br></p> <p>As to outline-unet:I labeled the sealion’s outline step by step.I labeled 3 images and trained the net on them first,and let the net predict the mask of the some train-set images,and I modified the predicted images’ mask,when I have labeled 100 images,the result were very good.<br></p> <p><strong>The way to count the sealions:</strong><br> Because my segment net is very good,so I decided to count the sealions based on them,I used a lot of functions of skimage(especially morphology) and scipy.ndimage. <br> It’s not a easy job to split the sealions outline when they are connected,I used the star predicted by star-unet and used a lot of skills on morphology’s closing/opening and scipy.ndimage.label,and finally found more than 1 million sealions. I guess the recall-rate &gt;80% except pups.<br> <br> As to pups,I predicted the mask with different scales and ensembled them.<br> <br> <strong>The pities:</strong><br> I failed to find a good way to get the scale factor of every test image.I planed to find the sealions with large area in every test image and compared to the average areas of the adult-males and get the correct scale of every test image(or similar ways)BUT I had no time to finish it,because I under-estimated the difficulty of this competition.I got 15.74 public LB using two Faster-RCNN network and then I entered another competition, when I was back, there were only 3 weeks left and the test-set is a big one.<br> <br> So,my final submission is simple ensemble of the prediction with different scale factors.It’s not much better than single model.My best single model is 13.21/14.26. 
<br> <br> <strong>Other methods I tried</strong>:<br> I trained two Faster-RCNN networks with ResNet-101, but I found they could not handle sea lions that stay close together; I could not count those, only estimate them. I thought I should find a better way, so they do not appear in the final ensemble.</p> <p>Thanks. If we could get more information (for example, the height or zoom at which the images were taken), then we could calibrate the images to the same resolution and the results would be much better, although perhaps we can calibrate by the area of the adult males.</p> <p>Thanks. Yes, perhaps we under-estimated the UNET; labeling the sea lions is not the hardest part of this competition.</p> <p>Some important points about the training:<br></p> <ol> <li>Use Adam with a learning rate of 0.0001 when using softmax; otherwise use Adam with a learning rate of 0.001.</li> <li>I train the UNET with 320*320 patches.</li> <li>I don't train the UNET on patches without sea lions during the first 40 epochs. Then I predict on the patches without sea lions and use those patches which contain false-positive responses for 5 epochs (sampling 10000 of them every epoch).<br> <br> I also trained a ResNet-50 UNET and found it fine too, but I had no time to use it.</li> </ol> <p>Thanks. As for code, there are thousands of lines of it, written in a hurry, and I have no time to clean and refactor them now; the Planet competition is waiting for me. :)<br> But I want to share my model structure and training strategy here:</p> <p>My UNET structure is simple (Keras 1.x API):</p> <pre><code>N_Cls = 6
ISZ = 320
smooth = 1e-12

def get_unet():
    inputs = Input((3, ISZ, ISZ))
    conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(inputs)
    conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(pool1)
    conv2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Convolution2D(128, 3, 3, activation='relu', border_mode='same')(pool2)
    conv3 = Convolution2D(128, 3, 3, activation='relu', border_mode='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(pool3)
    conv4 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = Convolution2D(512, 3, 3, activation='relu', border_mode='same')(pool4)
    conv5 = Convolution2D(512, 3, 3, activation='relu', border_mode='same')(conv5)

    up6 = merge([UpSampling2D(size=(2, 2))(conv5), conv4], mode='concat', concat_axis=1)
    conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(up6)
    conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(conv6)

    up7 = merge([UpSampling2D(size=(2, 2))(conv6), conv3], mode='concat', concat_axis=1)
    conv7 = Convolution2D(128, 3, 3, activation='relu', border_mode='same')(up7)
    conv7 = Convolution2D(128, 3, 3, activation='relu', border_mode='same')(conv7)

    up8 = merge([UpSampling2D(size=(2, 2))(conv7), conv2], mode='concat', concat_axis=1)
    conv8 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(up8)
    conv8 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(conv8)

    up9 = merge([UpSampling2D(size=(2, 2))(conv8), conv1], mode='concat', concat_axis=1)
    conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(up9)
    conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same')(conv9)

    conv10 = Convolution2D(6, 1, 1, activation='relu', border_mode='same')(conv9)
    conv10 = core.Reshape((6, ISZ * ISZ))(conv10)
    conv10 = core.Permute((2, 1))(conv10)
    conv11 = core.Activation('softmax')(conv10)

    model = Model(input=inputs, output=[conv11])
    model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy',
                  metrics=[dice_coef, 'accuracy'])
    return model
</code></pre> <p>The dice coefficient used as a metric:</p> <pre><code>def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
</code></pre> <p>Sorry, I meant logistic regression on each pixel to get the probability; I deleted the '2 way' from the post.</p>
NOAA Fisheries Steller Sea Lion Population Count
My heatmap regression+segmentation solution and possible ways to normalize scale changes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes, I had 2 losses, the first one is L2 loss, the other is dice coefficient loss, I up weighted the dice coeff loss by a factor of 50, since I chose very large peak value in heatmap (500), so the L2 loss is relatively very large, while the dice coeff loss is always less than 1.0 . I doubt if the overall results will be better I trained them separately, but when train them together, the heatmap do have a better looking than training it individually, maybe the multi-task learning did help the Unet to converge towards better direction.</p> <p>Thank you, I will push my project to github this weekend</p> <p>Congratulations to the winners! I really enjoy the time I spend on that competition, though for a while I almost <a href="https://www.kaggle.com/c/noaa-fisheries-steller-sea-lion-population-count/discussion/33788#190898">gave up finishing it</a> . My approach is kind of similar to some already released approaches, I really learned a lot by reading those posts, so I decide to share my approach here. In general it is a combination of gaussian heatmap regression and semantic segmentation. At the beginning I tried to regress 5 heatmaps for each individual class with Unet, later I felt it was too difficult tuning that task. So instead I regressed one single gaussian heatmap for all classes, so I can identify the location of each sea lion by detecting peaks without considering their classes, much alike the first step in some object detection methods, in which they propose a set of class agnostic bounding boxes before knowing their classes. <img src="http://i.imgur.com/7Lx31GB.jpg" alt="heatmap" title=""> <img src="http://i.imgur.com/p8VNf98.jpg" alt="peak_detection" title=""> To classify the detected sea lions, an auxiliary semantic segmentation task is added to the same Unet and trained together that output a 6 class binary mask, the ground truth is generated by thresh-holding the gaussian heatmap. So after each sea lion was detected by peak detection, an argmax operation will be applied to that same location on the 6 class binary mask to decide which class it belongs to. <img src="http://i.imgur.com/0IJWu4B.png" alt="6 class masks" title=""> My final submission was made by re-scaling all the test images to 0.6x original size, later I made some attempts try to generate more results with different re-scaling factors and ensemble them, I work with my team mate @hx364 on different scales, but it was still too slow.</p> <p>Meanwhile I made some experiments to see if I can estimate an optimal scaling factor for each testing image. The main idea is to find a proper scaling factor with template matching: I first hand picked about 400 template patches from training images of sea lions of class 2 and 3 that I thought has a "good" scale, then I rotate them to all 360 degrees and add to the template collection to cover different rotations. A VGG was used to extract features for all the templates, and I averaged all the features and divided by their covariances to make an "exemplar sea lion feature". 
<img src="http://i.imgur.com/1uKN4Mu.png" alt="templates" title=""> Since I had already detected some sea lions with the 0.6x method, I center-cropped all of the detections belonging to classes 2 and 3 within the same image, re-scaled each cropped patch with a series of scaling factors (1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4) to make an image pyramid, computed their mean cosine distance to the "exemplar sea lion" using VGG features, and picked the lowest one and its corresponding scaling factor as the final result. <img src="http://i.imgur.com/JZ20JLd.png" alt="estimated scales" title=""> With that approach I experimentally estimated the scaling factor for some of the test images, but I wasn't able to finish it before the deadline, so I am not sure whether it really works, but I am still very interested in exploring that method further.</p>
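<p>The detect-then-classify step at the core of this write-up (peak detection on the class-agnostic heatmap, then an argmax over the 6-class mask at each peak) can be sketched in a few lines; the peak threshold and minimum distance below are illustrative placeholders, not the values actually used:</p> <pre><code>import numpy as np
from skimage.feature import peak_local_max

def count_per_class(heatmap, class_masks, min_distance=8, threshold_abs=50.0):
    # heatmap: class-agnostic gaussian heatmap of shape (H, W)
    # class_masks: (6, H, W) segmentation output; channel 0 = background, 1..5 = sea lion types
    peaks = peak_local_max(heatmap, min_distance=min_distance, threshold_abs=threshold_abs)
    counts = np.zeros(5, dtype=int)
    for r, c in peaks:
        cls = int(np.argmax(class_masks[:, r, c]))  # most likely class at the detected peak
        if cls != 0:                                # skip peaks that land on background
            counts[cls - 1] += 1
    return counts
</code></pre>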
NOAA Fisheries Steller Sea Lion Population Count
Quick overview my solution: UNet + regression on masks
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Russ, I really liked your solution, I think it's the most principled and robust of the ones shared due to explicit accounting for scale.</p> <p>Finally got to check this clever hack, a milder version of it (just increasing pups by 20%) also gives a moderate boost: 12.5 -&gt; 12.0 on private (and 13.2 -&gt; 12.9 on public), but the optimal settings are probably different for my model.</p> <p>Sure - I used blob_log from skimage to detect blobs <a href="http://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.blob_log">http://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.blob_log</a> and then generated two kinds of features from them: "blob-sum" is the sum of probabilities at blob centers (so if we have two blobs with max probability 0.3 and 0.05 the value would be 0.35), and "blob-count" is just the number of blobs (so here the value of the feature would be 2). I also did this for several blob thresholds (minimal blob intensities). In order to speed things up and to save space, I saved 4x downscaled prediction, and ran blob detection at this scale too, so it was not so slow.</p> <p>Yes, I would say it's a classical UNet, and it's similar to bn-VGG, just conv3s everywhere. It's almost the same as this <a href="https://github.com/lopuhin/kaggle-dstl/blob/master/models.py#L188">https://github.com/lopuhin/kaggle-dstl/blob/master/models.py#L188</a> with a small twist: it has 4x pooling instead of 2x in the "deepest" part - this save a bit of memory and in theory gives a larger receptive field.</p> <p>Thanks!</p> <blockquote> <p>So you resized the test images by half. Did it not make all the sea lions smaller compare to training? (in pixels). Or did you resize the train images too?</p> </blockquote> <p>I think that images in test are about 2x bigger than what we have in the training set. Probably they used a different camera, or didn't resize them, or flew lower, or something like this :)</p> <p>Still the images in test had different scales, so I applied scale augmentation during training to make the model more robust. Also I was not sure if 2x was the precise difference, and didn't really want to fit it using the public LB.</p> <blockquote> <p>How did you deal with half cut sea lions on the patches (like the tail in on patch and the other half with the head on another patch)?</p> </blockquote> <p>I averaged overlapping predictions at test time. At train time I didn't do anything special.</p> <p>Thanks Kapil!</p> <blockquote> <p>How was the performance with deconvolution layers instead of upsampling?</p> </blockquote> <p>I didn't check on this task, but on DSTL Satellite image segmentation I just could not get good results with deconvolutions at all - maybe it was a problem with my implementation (because others used them just fine), or bad init - I was using PyTorch. So here I just went with what I knew worked.</p> <blockquote> <p>I am guessing, you used 6 layer masks as Unet output with cross entropy without weighs for your Unet.</p> </blockquote> <p>Right. 
I tried applying a different weight to the background, but it resulted in more false positives and lower quality masks. On the other hand, oversampling of images with lions helped - which makes sense in retrospect.</p> <blockquote> <p>Did you encounter problems with misclassification/ crowded sea lions ? ( I was using Unet and most of the sea-lions were being misclassified as adult-females).</p> </blockquote> <p>Yes to both, and to be honest, I didn't manage to implement a good solution to any of those. For crowded areas maybe xgboost regressor + custom features helped a little. For misclassificatoin, I added a VGG-based classifier that tried to re-classify blobs, but I implemented it only two days before the deadline and didn't manage to get any gain from it - still curious if this would help.</p> <blockquote> <p>How was the training done ? I mean, how many epochs were trained to get these results ? Also, what kind of optimizer and lr was used? In my case, after 1-2 epochs, the log loss stops decreasing and remains constant. The output masks at this point remain inaccurate. </p> </blockquote> <p>For me one epoch covered entire training set (but I sampled randomly and did all augmentations on the fly), and I got best results with training for 10-20 epochs. I used Adam with lr 0.0001 and batch size 32 (but I multiplied loss by batch size) - it was much better than SGD. I divided lr by 5 when validation loss stopped decreasing for more than two epochs (just once usually). Both training and validation losses continued to go down, but it turned out it's better to stop training even before the best validation loss is reached - probably due to difference between train and test.</p> <blockquote> <p>The output masks had sea-lion shaped predictions but they were being misclassified and some other structures were also classified as sea-lions.</p> </blockquote> <p>I've seen such shapes early in training, maybe after the first few epochs. I also saw some misclassifications and some false positives (especially on the test set), didn't get to solve it.</p> <blockquote> <p>Is the purple background in the output image coming from the background label?</p> </blockquote> <p>Yes - it's just some default way of visualising masks with matplotlib, definitely not the best way to show it.</p> <p>Thanks! Yeah, training UNet can be painful!</p> <p>Forgot to mention one more difference from vanilla UNet: I'm using upsampling instead of deconvolutions, on other problems it was much more robust and produced better looking masks.</p> <p>I also could not make UNet to work properly with dice loss (which should work better in case of class imbalance), and also UNet that predicted 4x smaller output also didn't work so well (which is a pity because it was much faster and more convenient).</p> <p>I'm happy to answer the questions :)</p> <p>I sampled patches that formed the batch completely independently. With probability 0.8, I sampled a random image, a random location in that image, a random scale and random angle, and created a patch from it. With probability 0.2, I sampled a random lion (uniformly across all lions, so crowded images got more samples), and then sampled a random location within a pre-defined distance (~patch size) from that lion, and then again the angle and scale.</p> <p>Thanks @gbhalla! 
I used a log loss with 6 classes, one for background and the other for different lions - so it's the same loss you normally use for classification problems, but applied per-pixel on the output mask.</p> <p>Yes, sure - I will post the code in 1-2 weeks or earlier, will post the link here. </p> <p>Hi, congrats to @outrunner for an amazing performance, and congrats to everyone who participated! Special thanks to @threeplusone for the lion coordinates!</p> <p>Quick overview of my solution:</p> <p><strong>UNet</strong> with batch normalization and upsampling instead of deconvolution, trained on 256x256 patches with scale (about 0.8 -- 1.6) and rotation augmentation. With probability 0.2 patches were sampled close to some random sea lion. The task was to predict fixed sized squares centered at lion coordinates, softmax output with log loss was used. Here is what it looked like: <img src="http://imgur.com/I9JlU9g.png" alt="UNet training" title=""></p> <p>And here is what predictions (here all classes are combined) looked like:</p> <p><img src="http://i.imgur.com/YST2jU1.png" alt="UNet predictions (all classes)" title=""></p> <p>In order to <strong>predict the lion count</strong>, patches of about the same size (240x240) were extracted from predicted masks, and several features were extracted: mostly sum of outputs (with several thresholds) and also blobs were detected and counted. This features were extracted from predictions of each class separately, and for each class a separate regressor was trained (an ensemble of two extra trees regressors from sklearn and one xgboost regressor).</p> <p>The only <strong>trick for Test</strong> was to predict at 0.5 scale (so I downscaled test 2x) - I determined this value by just looking at the images.</p> <p>All in all I was very unsure of the leaderboard score as it often did not correlate with my local validation, so I didn't do any per-class LB tuning. For the final submission I averaged several models that were better on the LB (mid-13 public LB / mid-12 private LB), and some models that were better on my validation (they gave about 14 on public LB / 13 on private LB). </p> <p>Training took about 12-24 hours on a single 1070, and prediction took another 12 hours. Plus about 3 hours for training and running count prediction.</p> <p>Here is the code: <a href="https://github.com/lopuhin/kaggle-lions-2017">https://github.com/lopuhin/kaggle-lions-2017</a> (including all failed experiments, so in some places the code is less clean then I'd like)</p>
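<p>For readers who want to see the counting stage in code, here is a small sketch of the "blob-sum" and "blob-count" features described above, computed from one class's (downscaled) probability map for one patch; the blob_log parameters are placeholders rather than the values used in the actual pipeline, which is in the linked repository:</p> <pre><code>import numpy as np
from skimage.feature import blob_log

def blob_features(pred, thresholds=(0.02, 0.05, 0.1)):
    features = []
    for thr in thresholds:
        blobs = blob_log(pred, min_sigma=1, max_sigma=4, threshold=thr)  # rows: (y, x, sigma)
        ys = blobs[:, 0].astype(int)
        xs = blobs[:, 1].astype(int)
        features.append(float(pred[ys, xs].sum()))  # "blob-sum": sum of probabilities at blob centers
        features.append(float(len(blobs)))          # "blob-count": number of detected blobs
    features.append(float(pred.sum()))              # plain sum of the probability map
    return features
</code></pre> <p>Per-class feature vectors of this kind are then fed to the per-class regressors (extra trees and xgboost in the write-up above).</p>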
NOAA Fisheries Steller Sea Lion Population Count
R&D&G team solution, 5th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think for scale estimation your sea lion areas would be an even more reliable data source than the area of the rects predicted by SSD, especially when you predict the sea lion class as well as the area and do a few iterations of: initial scale -&gt; generate segmentation -&gt; estimate scale -&gt; scale images by the estimated scale -&gt; generate heatmaps -&gt; improved scale estimation.</p> <p>Multiple iterations help to improve the quality of classification, as after each iteration the size of the sea lions is closer to the size used during training.</p> <p>What I tried that did not work:</p> <p>I tried to build a conv net to predict the scale from patches, from a downscaled image, or even from the image after a cosine transform, but nothing worked reliably, partially due to the different terrain on the test images.</p> <p>I tried to build a second-level model to predict counts from heatmaps; the results looked reasonable and improved the local cross-validation score, but the leaderboard result was always worse. One possible explanation would be the difference between the training and test datasets.</p> <p>Applying the same postprocessing as used by outrunner improved the public/private score to 13.1/12.2.</p> <p>Congrats to outrunner, Konstantin and bestfitting!</p> <p>A short overview of our team's solution:</p> <p>We estimated the image scale and implemented a U-Net-like network, with the sum of the heatmaps as the resulting count.</p> <p>Our image scale estimation solution:</p> <p>First, I labelled the scale of the training dataset images.
To do this, I wrote a simple application to display all sea lions from an image next to lions from a reference image, and adjusted the scale until the lions visually matched for all classes: <a href="https://ibb.co/mWg5oQ">https://ibb.co/mWg5oQ</a></p> <p>Such manual labelling is not possible for the test set, so instead I trained an SSD network to detect sea lion rects (using the average sea lion size for the known image scale) and used the predicted rects as input to an XGBoost model that predicts the actual image scale.</p> <p>So the training pipeline looks like:</p> <ol> <li><p>Label the image scale for the train dataset</p></li> <li><p>Extra scale augmentation, prepare a set of sea lion squares</p></li> <li><p>Train SSD, predict sea lion squares</p></li> <li><p>Combine the SSD results (mean predicted crop size, sizes at different percentiles of rects sorted by detection confidence, the same per class, etc.) and train XGBoost to predict the image scale</p></li> </ol> <p>Prediction pipeline:</p> <ol> <li><p>Downscale the image 2x, as sea lions in some images are over 3x larger than in the train dataset</p></li> <li><p>Predict sea lion rects with SSD and the image scale with XGBoost</p></li> <li><p>Scale the test image to the scale predicted in step 2; it should be a better match than the original 0.5x</p></li> <li><p>Predict the scale with SSD and XGBoost again</p></li> </ol> <p>Applying the estimated scale improved the score from ~23 to 15.9.</p> <p>Distribution of image scale for the train and test datasets (the plot shows how much each image has to be rescaled to match train/0.jpg):</p> <p><a href="https://ibb.co/i3X38Q">https://ibb.co/i3X38Q</a></p> <p>For some reason the test dataset is significantly different from the train dataset; the image scale is approx 2x larger.</p> <p>To count sea lions, I combined ImageNet-pretrained networks such as ResNet and VGG with the decoder part of a U-Net.</p> <p>I used a ResNet50 pretrained and fine-tuned on sea lion crops, and attached 3 stages of a U-Net-like decoder to the network output and internal layers.</p> <p>The resulting resolution was 4x lower than that of the original image. I tried to use the original U-Net, or to add more stages to increase the resolution, but the results were worse. Adding residual blocks to the U-Net decoder improved the results.</p> <p>I marked each lion as a 3x3 square regardless of class and predicted a heatmap with a softmax loss over 5 sea lion classes + 1 background class.</p> <p>My idea was: since the logloss/softmax reaches its minimum when the output equals the probability of the particular class, the count of sea lions should equal the sum of the per-class heatmap divided by the area of the marker used for training. It worked reasonably well; on images with the wrong scale the count shifted between classes (as the network learned the lion sizes), but the total count matched.</p> <p>Examples of heatmaps; the number is the predicted count, computed as sum(heatmap)/(3*3):</p> <p><a href="https://ibb.co/ndUmv5">https://ibb.co/ndUmv5</a></p> <p><a href="https://ibb.co/jg6ANk">https://ibb.co/jg6ANk</a></p> <p>For the final submission, Gilberto trained a few 2nd-level models, which we averaged together with the predictions of the ResNet50- and VGG16-based models.</p>
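<p>The counting rule described above is simple enough to state in a couple of lines; a sketch, assuming <code>heatmap</code> is the softmax output of shape (6, H, W), channel 0 is the background (an assumption, not stated in the post), and each lion was marked as a 3x3 square during training:</p> <pre><code>import numpy as np

MARKER_AREA = 3 * 3  # each lion was labelled as a 3x3 square

def count_from_heatmap(heatmap):
    # sum of per-class probabilities divided by the marker area gives the expected count
    per_class_sum = heatmap[1:].sum(axis=(1, 2))  # skip the background channel
    return per_class_sum / MARKER_AREA
</code></pre>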
NOAA Fisheries Steller Sea Lion Population Count
Solution in a nutshell. 2th Public LB / 4th Private LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here I will give a short description of the approach of our team (me and <a href="https://www.kaggle.com/chelovekparohod">DmitryKotovenko</a>). <br> The source code is available on GitHub <a href="https://github.com/asanakoy/kaggle_sea_lions_counting">https://github.com/asanakoy/kaggle_sea_lions_counting</a>.</p> <h2>Preprocessing</h2> <p>Thanks to @Radu Stoicescu for his blob detection to get corrected counts from dotted images.</p> <p>GT count for each tile was generated as a sum over heatmap (to overcome cases with lions on the border of the tile). On top of each lion we put a Gaussian with a standard deviation heuristically estimated by calculating the smallest distance between lion on the image. <br> We set the standard deviation to be 50 at least for each Gaussian and adjust it according to size of the animals from different classes ( multiplied by 2 for adult males and by 0.5 for pups).</p> <p><a href="https://ibb.co/hhAda5">GT Gaussians for tile</a> <img src="https://preview.ibb.co/bJ5Bv5/Screenshot_from_2017_06_28_16_28_57.png" alt="GT Gaussians for tile" title=""></p> <h2>Model and Training</h2> <p>Our model incarnates <strong>regression for 5 classes on tiles of the images</strong>. (In similar spirit as the approach of @outrunner)</p> <p><strong>Inception Resnet v2</strong> pretrainedon Imagenet. <br> We substituted the last layer with 256-way FC layer + dropout + 5-way FC layer on top. + RMSE loss. Then we fine-tuned the model on 299x299 image tiles with Adam optimizer. </p> <p><strong>Augmentations:</strong> random rotation on 90/180/270 grads, random flip left-right, bottom-up.</p> <p><strong>Scale augmentations:</strong> one model without them, one model with 0.83-1.25 random scaling, one model with 0.66 - 1.5 random scaling.</p> <p><a href="https://ibb.co/kVhwTQ">RMSE on val for 3 best models</a> <img src="https://preview.ibb.co/jx6O8Q/Screenshot_from_2017_06_28_17_02_17.png" alt="RMSE on val for 3 best models" title=""></p> <h2>Testing</h2> <p><strong>During test</strong> we made predictions up to 5 times for each model using different shifts of the tiles in the image. Test images were downscaled in 0.4-0.5 times.</p> <p>The final ensemble was made by averaging all the predictions. <br> <strong>Private LB RMSE:</strong> <em>13.18968</em> <br> <strong>Public LB RMSE:</strong> <em>13.29065</em></p> <p>Applying further postprocessing as suggested by @outrunner, could improve results. <br> Just increasing the number of pups by 20% gives a huge improvement: <br> <strong>Private LB RMSE:</strong> <em>12.58131</em> <br> <strong>Public LB RMSE:</strong> <em>12.75510</em></p> <h2>Some negative experiments</h2> <p>We labeled some images from train set according to scale and trained a CNN to regress a scale of the image. This could unify all the images to have the same approximate size of the lions of corresponding classes and simplify the CNN training to count animals.</p> <p>But it didn't work out. 
I reckon the reason is the high variation in terrain and the inability to estimate the scale of objects when you look at them within a small spatial context (even with my own eyes).</p> <p>UPD: I have added a link to the source code on GitHub.</p> <p>Hey @eagle4. Gaussians are used to estimate the ground-truth number of lions on each tile; Gaussians are not used during inference. Putting a Gaussian on top of each sea lion allows the tile border to cut a lion and still assign a fraction of it to each adjacent tile. Otherwise, we would have to assign the whole lion to a single tile.</p>
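<p>A minimal sketch of the ground-truth generation described above, assuming <code>coords</code> is a list of (row, col, class_id) dot coordinates for one image, the usual class order (adult males first, pups last), and that each Gaussian is normalized to unit mass so that tile sums behave like counts; the distance-based sigma heuristic is simplified to a constant here:</p> <pre><code>import numpy as np

SIGMA_SCALE = {0: 2.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 0.5}  # adult males 2x, pups 0.5x (class order assumed)

def gt_heatmaps(coords, shape, base_sigma=50.0):
    # returns a (5, H, W) stack with one normalized Gaussian per labelled animal
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    heatmaps = np.zeros((5,) + shape, dtype=np.float32)
    for r, c, cls in coords:
        sigma = base_sigma * SIGMA_SCALE[cls]
        g = np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2.0 * sigma ** 2))
        heatmaps[cls] += g / g.sum()  # unit mass per animal
    return heatmaps

# GT counts for a 299x299 tile are then heatmaps[:, y0:y0+299, x0:x0+299].sum(axis=(1, 2))
</code></pre>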
NOAA Fisheries Steller Sea Lion Population Count
Solution sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners and thanks to the organizers and fellow competitors. What I really enjoyed in this challenging competition was the fact that the Customer is a scientific agency with a good cause.</p> <ul> <li><p>The approaches I experimented with:</p> <ol><li>Five binary linear SVMs (background vs sea lion type, one model for each class) that use HOG features and iteratively search and train-on hard examples. Result: Detecting too much background as sea lions.</li> <li>Six class CNN (five sea lion types + background) trained with and without Euclidean distances to 50 nearest sea lions (for each sea lion type). The idea with the distances was to train a NN to learn the group behavior of sea lions. Result: (i) With the distances: The problem here was that you only know the distances for the training data. In the testing phase, I replaced the distances by a missing value identifier, which apparently was the cause for the NN to output only background predictions. (ii) Without the distances: much better predictions than in the HOG solution, but still way too much background getting detected as sea lions.</li> <li>Binary CNN operating on 512x512 patches to first identify the spots in the images where it's probable to have sea lions, and then passing images of those spots to CNN regressor outputting counts for each sea lion type. Result: By manual inspection, these predictions looked actually pretty good, but the LB scores were only slightly better than the all-zeros submission.</li> <li>Extracting features from 512x512 patches through pre-trained VGG16 and feeding these features to five gradient boosted regression trees (one for each sea lion type). Result: My best performing solution, scoring 23.9 on private LB.</li></ol></li> <li><p>An estimate of the time spent on this competition:</p> <ul><li>2/6 programming and innovating data pre-processing.</li> <li>3/6 programming and innovating machine learning models.</li> <li>1/6 waiting until the computation finishes.</li></ul></li> </ul>
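<p>Of the approaches listed above, the best-scoring one (#4) is easy to sketch: extract features from 512x512 patches with a pretrained VGG16 and regress per-class counts with gradient boosting. A minimal, illustrative version follows; the pooling choice, batch size and hyperparameters are assumptions, not the author's exact setup:</p> <pre><code>import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.ensemble import GradientBoostingRegressor

# frozen VGG16 body with global average pooling: one 512-d feature vector per patch
feature_extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')

def patch_features(patches):
    # patches: float array of shape (N, 512, 512, 3) in RGB order
    return feature_extractor.predict(preprocess_input(patches), batch_size=8)

def fit_per_class_regressors(train_features, train_counts):
    # train_counts: (N, 5) per-patch counts, one column per sea lion type
    models = []
    for cls in range(train_counts.shape[1]):
        reg = GradientBoostingRegressor(n_estimators=300, max_depth=3)
        models.append(reg.fit(train_features, train_counts[:, cls]))
    return models
</code></pre>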
National Data Science Bowl
Auroraxie's report and code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle for holding this competition! Congratulations to all the winners, and thanks all the players! It's a great fun to participate this competition with all of you.</p> <p>Here is the link for my report and code:</p> <p>https://github.com/auroraxie/Kaggle-NDSB</p> <p>Thanks for your interest!</p> <p>A lot of papers show the advantage of deep convolution networks over shallow nets. &nbsp;You can see the paper by VGG (http://arxiv.org/pdf/1409.1556.pdf) as a reference.&nbsp;Deeper nets have better performance.</p> <p>The number of iterations I used ranged from 250-500. You can check it out in my conf files.</p>
National Data Science Bowl
Blog post describing the winning solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Right, I was going to do that next ;)</p> <p>Hi everyone,</p> <p>The code to reproduce our winning solution as well as documentation are now available on GitHub:&nbsp;https://github.com/benanne/kaggle-ndsb</p> <p>Just to clarify, with a &quot;gradient step&quot; I meant a single update of the parameters, i.e. a single forward and backward pass through the network. It sounds like you might be interpreting it as a single pass through the training data, which indeed took a lot longer! I don't have numbers for this since we randomly sampled data points for each minibatch, but I guess you can extrapolate that it took more than 100 seconds for the network to have seen ~30000 training examples for almost all of our models.</p> <p>To determine when the CNNs converged, we just kind of eyeballed the learning curves at the start of the competition. Because we were using dropout, the validation error tended not to increase as the network started overfitting, so it wasn't a disaster to leave it training for a bit too long. We settled on 215k updates + 2 learning rate decreases pretty quickly,&nbsp;and any attempts to come up with a better learning schedule&nbsp;didn't really improve things. It's all very &quot;ad hoc&quot;, but it's simple and it works.</p> <p>I will definitely try to include them! Probably as appendices though, since they sort of fall outside of the main story, which will be about deep learning for music.</p> <p>[quote=old-ufo;66676]</p> <p>@sedielem, could you please also share score of the best-single model (with or without TTA)?</p> <p>[/quote]</p> <p>We'll see what we can do :) It's probably going to be one of those in the ensemble anyway, so probably yes!</p> <p>Thanks for the kind words everyone :) Thanks zzspar also for the effective tl;dr summary!</p> <p>Personally, my next project is going to be to finally finish my dissertation, which Kaggle has been distracting me from ;)&nbsp;I don't know about the rest of the team though!</p> <p>Hey everyone,</p> <p>Thanks to Kaggle, Booz Allen and the Hatfield Marine Science Center for a very interesting competition. Congratulations to the other winners!</p> <p>Here's a blog post describing our approach in some detail:&nbsp;http://benanne.github.io/2015/03/17/plankton.html</p> <p>Code and documentation to reproduce the winning solution will follow soon!</p> <p>UPDATE:</p> <p>Code and documentation are now available on GitHub:&nbsp;https://github.com/benanne/kaggle-ndsb</p> <p>This is actually quite a challenge as there were usually a bunch of people working on different things in parallel at any given time. So it was not a linear process :)</p> <p>It's also not a 'discrete' process in the sense that all decisions we made were approximate. We never had perfect information about whether something worked better than something else, due the noise on our validation score estimates (this is always a problem with Kaggle competitions, but in this case it was more severe than I'm used to). So when we decided to use something, it was because it was &quot;probably&quot; better, but it's also possible that it wasn't making any difference. 
A good example are the untied biases we ended up using - I'm still not 100% convinced that this actually helps, but it didn't hurt, so why not :)</p> <p>There's also another important Kaggle lesson that follows from this: don't discard an idea you believe in after a single failed experiment. Keep it around and try it again later, maybe you'll be pleasantly surprised.</p> <p>Anyway, here's a rough, incomplete overview of what I did. This is mostly network architecture stuff, which I was more or less in charge of together with Aaron.</p> <p>--</p> <p>I don't have a great record of the very first few things we tried (first two weeks or so) because we only started properly validating things and recording experimental results after that. But the very first network I tried to train had 5 layers: 3 convolutional with 32 3x3 filters each and 2 dense (one hidden layer with 128 units, one output layer with 121 units). It had pooling layers after the 1st and 2nd convolutional layer and 50x50 input, just the 'center patch' of the images, randomly rotated (so no rescaling). This got us ~1.6 on the leaderboard.</p> <p>(That's another important Kaggle lesson by the way, start small so you can iterate fast! Being able to try a ton of things is much more important than having a big model that gets epsilon better performance.)</p> <p>We gradually made this bigger by adding more layer, more filters, more augmentation and increasing the input size. A 7-layer network applied to 68x68 patches gave us a score of ~1.1. A 9-layer net with 76x76 input and more augmentation gave us ~1.0.</p> <p>When we started to experiment with rescaling the images, our score improved&nbsp;quite a bit. We tried a bunch of different strategies: initially I thought rescaling based on image area would be a good idea, but it turned out that scaling based on the maximal side of the image worked a lot better.</p> <p>A network with 8 convolutional layers, a discrete learning rate schedule and an area-based rescaling strategy got us down to 0.79. Then we started experimenting with test-time augmentation and saw a huge drop to ~0.71.</p> <p>Adding an extra dense layer and changing the rescaling strategy to be based on the maximal side got us down to ~0.68.</p> <p>At this point, I was trying many architectural improvements in parallel so the scores are hard to compare one to one sometimes. I believe cyclic pooling and adding a roll operation only after the first dense layer got us from ~0.67 to ~0.66, so a modest improvement.</p> <p>Our first attempt at a multiscale network got us ~0.64, but I don't immediately remember what model this should be compared to, so I don't know how big the jump was.</p> <p>I think our best single model eventually got about 0.607 on the public leaderboard (with test-time augmentation). After that we started submitting blends almost exclusively, as well as networks trained with pseudo-labeling.</p> <p>--</p> <p>This is about the best I can do, based on our submission history. Hopefully it's helpful. Some general guidelines are:</p> <p>* Iteration speed is crucially important. keep the ideas that slow down training too much for later in the competition. Initially it's crucial that you are able to try a ton of things. 
Maybe 80% of them will fail, but that's normal.</p> <p>* Don't discard ideas based on a single failed experiment, as mentioned before.</p> <p>* Keep track of your ideas in a list, and constantly re-prioritize them based on new information from experiments, or just from thinking about them really hard :) Write EVERYTHING down because you will forget things.</p> <p>* Keep track of your results in a list as well. In a team, this is especially important. We used a Google Drive spreadsheet for this, where every team member had their own sheet.</p> <p>* Look at the data and what your models learn about it (visualize filters / activations). One thing I find especially useful is to look at validation set examples that the model makes the worst predictions on. This is great for generating ideas. Find patterns in the mistakes the model makes and try to fix them. Write a bunch of data visualization tools while the experiments are running, you won't regret it.</p>
National Data Science Bowl
Brief Describe Method and CXXNET V2
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Xudong has posted the network structure. Just use insanity as activation.</p> <p>Warning: 48x48 one will cost more than 3GB ram and took about 2-3 days on a single Geforce 780</p> <p>[quote=Ertuka;69379]</p> <p>Bing,</p> <p>Since bowl.conf in github is not using this special layer, Can you post a good network config with Insanity layer/magical layer for cxxnet?</p> <p>[/quote]</p> <p>I suggest you do some study before random trying.&nbsp;</p> <p>Stanford cs231n is a good start guide, then try popular ImageNet papers.</p> <p>If you know how conv net works, then you will understand why vgg or xudong's net will cost so much computation resource.</p> <p>Personally, i think the magic is network structure but not optimization algorithm or activation function. Those two are just helpful for a good net.&nbsp;</p> <p>For network below 1.0, I remember someone has posted a net with 0.75 logloss by using graph lab, you can try it, it is same to cxxnet.</p> <p>[quote=Ertuka;69543]</p> <p>Bing,</p> <p>Apologies that haven't went thru Xudong's post completely and I was using bowl.conf from cxxnet latest changeset, and each iteration seem to take 160 seconds on my laptop with GTX 860M with 2GB ram.</p> <p>Thanks for the warning. I am trying to understand this huge memory/speed change. Is that due to Insanity layer or Xudong's network structure.</p> <p>What is the complete iteration count in your good runs in this competition or could you post the logloss cost function wrt iteration count. The reason I am asking is that kind of blindly I tried various networks and logloss seem to stuck around 1 after 300-400 iterations. I am curious is it possible to go under 1 in less than 200 or 300 complete iteration of the training data?</p> <p>Thanks</p> <p>[/quote]</p> <p>V2 used to be public in branch V2-refactor.</p> <p>Now CXXNET V2 is merged to master, and become part of distributed machine learning common.</p> <p>[quote=Patrick Chan;66927]</p> <p>Hi Bing Xu,Was using CXXNet in this competition, unfortunately work caught up. &nbsp;Thanks for sharing and looking forward to hearing more. Any idea when V2 will be on the public sight? &nbsp;Keep up the good works!</p> <p>[/quote]</p> <p>Thank you! Xudong Cao holds some black magic to design neural network. His structure&nbsp;works much better than ours. His single 48x48 network is able to achieve 0.631. I think his black magic should be uncovered by himself later and I am just talking about new and unique feature in CXXNET.</p> <p>[quote=old-ufo;66919]</p> <p>Congrats to your team. And I`m looking forward for your full method description, new activation function - is superb result of the competition!</p> <p>&gt;We still consider to publish it as an academic paper/workshop paper, but some theory problem need to be solved before that. <br>ICCV deadline is coming, good luck ;)</p> <p>P.S.&nbsp;It was really exciting (no offence, Bing, I just like lone wolfs), when single person hold 1st place for a long-long time</p> <p>[/quote]</p> <p>First I apologize&nbsp;that&nbsp;this post is not formal method sharing&nbsp;and CXXNET V2 comes a little late. 
The full document / configuration will be posted by team lead Xudong Cao.</p> <p>In general, we are using an extremely deep network (much deeper than the VGG network) with Spatial Pyramid Pooling and Deeply-Supervised Nets. For the activation function, we are using a strange/unpublished one. The activation function code has been public in the GitHub CXXNET repo for several months; I guess some careful readers may have found it and used it. It brings much faster convergence than ReLU, and it is not quite like Leaky ReLU/PReLU either. We are still considering publishing it as an academic/workshop paper, but some theoretical problems need to be solved before that. We also used a batch-normalization layer; the code became public in the repo after we read the paper.</p> <p>Thanks to CXXNET V2, the unique GPU parameter server brings nearly linear speed-up for multi-card training, so training such a deep network is not so painful.</p> <p>Most augmentations are similar to Sander Dieleman's solution. Some minor tricks are shuffling the images every epoch and real-time random augmentation. CXXNET's new multi-threaded binary image iterator is able to randomly shuffle the whole dataset without hurting IO performance, so we fully utilized the computation power.</p> <p>The new CXXNET V2 has moved to the <a href="https://github.com/dmlc">Distributed (Deep) Machine Learning Common</a>. DMLC will gradually provide distributed solutions for various learning algorithms. I think this will be useful for Kagglers, since competitions cost much more computation than ever; the single-machine winner is fading out.</p> <p>CXXNET V2 is located at <a href="https://github.com/dmlc/cxxnet">https://github.com/dmlc/cxxnet</a>. The new version will provide Python/Matlab training/prediction interfaces. I hope more people will benefit from it in the <a href="https://www.kaggle.com/c/diabetic-retinopathy-detection">Diabetic Retinopathy Detection competition</a>.</p> <p>Cheers,</p> <p>Bing</p>
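<p>For readers unfamiliar with the Spatial Pyramid Pooling mentioned in this write-up: it is a generic, framework-agnostic layer, and a short PyTorch-style illustration is below. This is only a standard SPP sketch, not code from CXXNET or the winning model:</p> <pre><code>import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    # x: feature map of shape (N, C, H, W); returns one fixed-length vector per sample,
    # so arbitrary input sizes can feed the same fully connected layers
    pooled = [F.adaptive_max_pool2d(x, output_size=lvl).flatten(start_dim=1) for lvl in levels]
    return torch.cat(pooled, dim=1)  # length = C * sum(lvl * lvl for lvl in levels)
</code></pre>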
National Data Science Bowl
Other Solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It is very impressive to read about the top 3 winning solutions. They have pulled away from the rest of the pack by a wide gap.</p> <p>Some other solutions may still have complementary components that, in combination, could offer additional improvement.</p> <p>Our team came 5th. Like others, we used CNNs. One thing that we found helpful was to vary the aspect ratio of the image. We used both aspect-ratio-preserving and non-preserving resizing, and combined them in an ensemble. Each alone gave similar scores, but the combination improved by 0.1-0.15.<br>Others also mentioned using one or the other method. I think this helps for plankton because some of them are very long and thin, so when resizing with the correct aspect ratio, not much detail is left on the narrow side.</p> <p>As another method, we used the hierarchical structure to define 4 levels of higher-level labels, so that the highest level has the fewest labels. We trained the same network on the separate label sets and then combined them using conditional probabilities, enforcing that lower-level labels sum to the corresponding higher-level ones. An ensemble of such models improved our scores for a given network structure by about 0.1-0.15.</p> <p>It will be interesting to hear from teams that got improvements from other ideas.</p> <p>In particular, we noticed that our solution did not do as well for the class &quot;unknown&quot; and the various &quot;others&quot;. It makes sense, since these are not normal classes but collections of exceptions. Did anyone try a method focused specifically on this aspect that helped? I was thinking of trying some kind of novelty detection, but did not get to it.</p> <p>For us, we just used the original image/class distribution. <br>Yes, we noticed some rare classes were hard.</p>
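<p>The conditional-probability combination described above can be implemented as a simple per-group renormalization; a sketch, assuming one coarse level and one fine level (the actual solution used 4 levels, so something like this would be applied repeatedly), with <code>parent_of</code> mapping each fine class to its coarse parent:</p> <pre><code>import numpy as np

def combine_hierarchy(p_fine, p_coarse, parent_of):
    # p_fine: (N, n_fine) softmax output of the fine-level net
    # p_coarse: (N, n_coarse) softmax output of the coarse-level net
    combined = np.zeros_like(p_fine)
    for parent in range(p_coarse.shape[1]):
        idx = np.where(parent_of == parent)[0]
        group = p_fine[:, idx]
        group = group / (group.sum(axis=1, keepdims=True) + 1e-12)  # p(fine | parent)
        combined[:, idx] = group * p_coarse[:, [parent]]            # p(fine) = p(fine|parent) * p(parent)
    return combined  # fine probabilities within each parent now sum to the parent probability
</code></pre>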
National Data Science Bowl
Poisson Process' Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is just a quick overview for those curious about our solution. &nbsp;It seems quite simple compared to the other top two teams. &nbsp;I would like to make a blog post of how I approached this competition since this was my first time using CNNs, and my experience may be valuable for others as I was rapidly improving my methods towards the end of the competition. Of course I have also extensively followed the progress of neural networks since 2012's ImageNet competition, which helped a lot.</p> <p><strong>Why Spatially Sparse Networks?</strong></p> <p>I think this was probably a large advantage in this competition. &nbsp;Most of these images are simply white background. &nbsp;In fact I believe only 15% of the pixels had initial activations in my networks. &nbsp;This architecture allowed for a lot of flexibility in data augmentation since you can have really wide input fields and not greatly hurt performance.</p> <p><strong>Network Architecture</strong></p> <p>The architectures are mostly based off of templates for&nbsp;<a href="http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/graham/sparsecnn.pdf">Spatially Sparse Networks</a>&nbsp;with varying the number of layers between max pools.</p> <p>I tried 27 layer 2x2 convolutional filters with 5 2x2 Max Pooling layers. &nbsp;I believe that a 3x3 with 2 stride would have produced better results. &nbsp;I also had 18 layers with wider nets and 22 layers. &nbsp;All worked very well and produced single network scores of 0.618 - 0.625. &nbsp;Most networks had 4-5 max pooling layers.</p> <p>I eventually settled on a network similar to VGG with 3x3 convolutional filters. &nbsp;This was a shorter network with only 14 layers. These scored from 0.610 to 0.620. &nbsp;They were also moderately small. &nbsp;My networks were 3 million to 11 million parameters in size with moderate drop out. (0.2)</p> <p>Dr. Ben Graham was able to achieve a single network score of <strong>0.599</strong>&nbsp;with a network consisting of 12 Fractional Max Pooling layers. &nbsp;(see&nbsp;<a href="http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/graham/fmp.pdf">FMP Paper</a>) He also had a network with 0.608 which used 3x3 max pooling layers with a stride of 2.</p> <p><strong>Data Augmentation</strong></p> <p>Everything was done in real time. &nbsp;Dr. Ben Graham used affine transformations and I was using perspective transformations. &nbsp;The real key is that pretty much anything that doesn't change classes is fair game. &nbsp;If you look at the test data, many classes are heavily biased to certain orientations. &nbsp;However, it is best not to try and match these biases.</p> <p>Too much of a good thing can be bad. &nbsp;It was good to add small random variations in scaling, but if you increased it too much then the score was greatly hurt. &nbsp;I read somewhere that CNNs have problems dealing with data at different scales. 
It seems that this may be somewhat true.</p> <p>Since this was a spatially sparse network I found that dividing the largest dimension of each input image by 2.5 and adding 30 produced much better results than simply scaling the largest dimension to 80 or 100 or something similar.</p> <p><strong>Encoding</strong></p> <p>I have no idea how much the methods I used here really helped. &nbsp;I thresholded out any pixels &gt; 252. &nbsp;I then calculated the global mean and standard deviation of the training set, and each individual image over the remaining pixels. &nbsp;I also calculated a windowed (11x11) mean and standard deviation for each pixel. &nbsp;Those were combined to form 3 channels. &nbsp;I added a forth channel which used the global mean and image standard deviation.</p> <p>I really should have explored how I encoded the images more as I believe this method caused several classes to be very difficult to predict properly. (see below)</p> <p>Ben used a different method which can be seen in his code at his page.</p> <p><strong>Problem Areas</strong></p> <p>I believe that the encoding method I used made several classes very difficult for my network to predict properly. &nbsp;This problem did not appear in Ben's networks.</p> <p>Artifact_Edges was predicted as 2.4% of my total aggregated classes. &nbsp;It should have been closer to 0.56% (Based on training data). &nbsp;Protist_dark_center was predicted as 1.14% of my total aggregated classes. It should have been closer to 0.36%. These two classes were very large outliers for me.</p> <p>Our final ensemble was a little better with these classes, but this definitely hurt our final score.</p> <p><strong>Ensemble</strong></p> <p>Our final ensemble had 15ish&nbsp;networks with Dr. Graham's best net being counted twice. &nbsp;However, an ensemble of 5 networks reached 99.8% of the best score.</p> <p>I am really amazed out how you can add a network with a score of 0.63 and still see an improvement in score.&nbsp;</p> <p><strong>Some Things Different</strong></p> <p>Sometimes training really deep nets with dropout can make convergence painfully slow at the start. &nbsp;So I implemented a gradual drop out where I start at 0 drop out and introduce more and more drop out as the score improves on the training set. &nbsp;Usually by the 10-20th epoch I have reached maximum dropout across all layers. This doesn't seem to hurt final performance of the network.</p> <p><strong>Some Things Bad</strong></p> <p>To save time I tried to take partially trained networks and then change the input variables to create a new net. &nbsp;This was not a good idea.</p> <p><strong>Last Words</strong></p> <p>I haven't consulted with Dr. Graham before writing this so he may have something more to add.</p> <p>If you have any questions I will be glad to try and answer them.</p> <p>Great questions! &nbsp;And yes, it is my first time though I do computer vision work for a living. &nbsp;I spent the first half of the competition trying to get a breakthrough on semisupervised learning on Cifar 100 data set. &nbsp;I learned that coming up with breakthroughs is&nbsp;hard!</p> <p>[quote]</p> <p>Is this a leaderboard score? Was any averaging of predictions (i.e. test-time augmentation) done to arrive at this score? Because that's pretty impressive. I think the lowest we got for a single model (not counting models trained with pseudo-labeling) was about 0.607, and this was the score with test-time augmentation (i.e. 
each prediction was actually an average of 70 predictions).</p> <p>[/quote]</p> <p>It was a leaderboard score! &nbsp;He submitted it 2 hours before the end of the competition. &nbsp;It was a test time augmentation over 20-30 tests. All of our nets used test time augmentation. &nbsp;However, our&nbsp;method&nbsp;differed from yours in that I just did random transformations of the test data exactly like the training data.</p> <p>He used pseudo random overlapping fractional max pooling layers. &nbsp;Previously, the same net scored 0.608 I think. &nbsp;Unfortunately, substituting the net predictions didn't help our overall score very much.</p> <p>None of our nets used soft labeling techniques, though I did explore such methods early on.</p> <p>[quote]How large were the scale variations in the end? We ended up jittering the scale of the images between 1/1.6 and 1.6 (after initial size-based rescaling), which I found pretty extreme, but it seemed to help.[/quote]</p> <p>I believe they were 1/2.2 to 2.2. &nbsp;Some of this could be the size of the networks I was working with at the time. &nbsp;I ended up with only doing +- 30% on most of my networks.</p> <p>[quote]So if I understand this correctly, your rescaling approximately preserved the relative sizes of the organisms?[/quote]</p> <p>Yes. &nbsp;It helped quite a bit. &nbsp;But it was horrible use of memory, because if I randomly selected several large images in a single batch my application would crash. &nbsp;So I had to reduce the batch size to 20.</p> <p>[quote]</p> <p>Is this code available online anywhere yet? I can't seem to find it but I'm very curious :)</p> <p>[/quote]</p> <p>His FMP work is not yet available (It will be), but he has his spatially sparse code available which shows his augmentations.</p> <p>[quote]</p> <p>If your single best network achieved 0.599 on the leaderboard I'm somewhat surprised that the ensemble score isn't better, to be honest! Did you just average the predictions of all the networks together?[/quote]</p> <p>Yes, we did just average the scores together. &nbsp;To count twice I simply dropped a copy of the prediction file into my combine directory. &nbsp;It was very crude, but I was running out of time. &nbsp;I think the biggest problem was that I was greatly over predicting artifact_edges and protist_dark_center. All of my networks were biased in this same manner. &nbsp;The local normalization I used was probably greatly over exaggerating the background noise in many of the images.&nbsp;</p> <p>I did try replacing my predictions for those classes with Ben's predictions with five minutes left in the competition, but it was too little too late.</p>
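<p>The encoding scheme described in this write-up (threshold out near-white pixels, then combine global, per-image and windowed statistics into input channels) can be sketched with numpy and scipy; the exact channel combination is described only loosely in the post, so the normalisation details below are an approximation:</p> <pre><code>import numpy as np
from scipy.ndimage import uniform_filter

def encode_image(gray, global_mean, global_std, window=11):
    # gray: 2-D float image; pixels above 252 are treated as background and zeroed out
    valid = gray &lt;= 252
    x = np.where(valid, gray, 0.0)

    img_mean, img_std = x[valid].mean(), x[valid].std() + 1e-6

    # windowed (11x11) mean and std for local normalisation
    local_mean = uniform_filter(x, size=window)
    local_sq = uniform_filter(x * x, size=window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0)) + 1e-6

    return np.stack([
        (x - global_mean) / global_std,   # global normalisation
        (x - img_mean) / img_std,         # per-image normalisation
        (x - local_mean) / local_std,     # local (windowed) normalisation
        (x - global_mean) / img_std,      # fourth channel: global mean with image std
    ])
</code></pre>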
National Data Science Bowl
winning solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: National Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>congrats to Sander et. al.</p> <p>http://benanne.github.io/2015/03/17/plankton.html</p>
NeurIPS 2023 - Machine Unlearning
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks challenge organizers for this interesting competition, I have learned more about the concept of machine unlearning (<strong>MU</strong>) and related algorithms.</p> <h1>Overview</h1> <p>The solution is based on a recent <strong>MU</strong> algorithm called <strong>SalUn</strong> (Saliency-based unlearning) [1].<br> <a href="https://www.kaggle.com/code/kuohsintu/saliency-based-unlearning-11th-place-solution" target="_blank">Solution source code</a></p> <h1>Terminology</h1> <ul> <li><strong>Dr</strong>: retain dataset.</li> <li><strong>Df</strong>: forget dataset.</li> <li><strong>θo</strong>: pretrained weights before unlearning.</li> <li><strong>L</strong>: class balanced cross entropy loss (beta is 0.99) from [2]</li> <li><strong>Gradient ascent (GA) unlearning</strong> [3]: reverses the training of a pretrained model using gradient ascent on <strong>Df</strong>.</li> <li><strong>Random labeling (RL) unlearning</strong>: assign random labels that are not the same as the original ones to all <strong>Df</strong> samples to produce <strong>Df’</strong>. Then, we fine-tune the model both on <strong>Df’</strong> and <strong>Dr</strong>.</li> </ul> <h1>Details</h1> <p>The method contains two stages:</p> <h2>Generate model weight saliency mask</h2> <p>In this stage, we would like to find out what parts of <strong>θo</strong> are salient (important) to a <strong>MU</strong> process.
First, we use the gradient information with respect to the negative <strong>L</strong> in <strong>GA</strong> to identify the model weights that are important to the unlearning of <strong>Df</strong> and generate a weight saliency mask <strong>M</strong>.
Then, we apply the sigmoid function and shift the values of <strong>M</strong> to [-1.0, 1.0]. In the end, we only keep the absolute values of M.</p> <h2>Saliency-based unlearning</h2> <p>In this step, we enhance <strong>MU</strong> methods with the weight saliency mask <strong>M</strong>.
Here, we run <strong>RL</strong> with <strong>L</strong> for an epoch and apply <strong>M</strong> (multiply gradients by <strong>M</strong>) when computing the gradients.
Finally, we fine-tune the unlearned model on <strong>Dr</strong> for an epoch to strike a balance between unlearning on forget data and preserving the model’s generalization ability for non-forgetting data.</p> <p>Notes:</p> <ul> <li>The solution has a few differences from the original paper<ul> <li>Do not reload <strong>θo</strong> in the saliency-based unlearning step since the public leaderboard scores are better.</li> <li>Use <strong>M</strong> to scale gradient values instead of hard thresholding (based on the official source code).</li> <li>Use <strong>Dr</strong> in addition to randomly labeled <strong>Df</strong> in <strong>RL</strong> because it produces better results empirically.</li></ul></li> </ul> <h1>Dataset</h1> <p>I only used the CIFAR10 data and pertained weights provided by the challenge organizers to validate the unlearning algorithm and tune parameters.</p> <h1>Results</h1> <p>With the submitted solution (a few hyperparameter tunings), the best score is 0.0736350706 on the private leaderboard.</p> <h1>Things did not work well</h1> <ul> <li>Normalized and scaled class balance weights.</li> <li>Various recent <strong>MU</strong> methods (ℓ1-sparse [4], Blindspot [5], and SCRUB [6]).</li> <li>Local validation dataset with face images and corresponding age labels.</li> </ul> <h1>Things to be done</h1> <ul> <li>Better local validation dataset: generate synthetic face data with generation models such as Stable Diffusion [7].</li> <li>Investigate and integrate MIA dense approaches to the unlearning process.</li> </ul> <h1>References</h1> <ol> <li>SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation<ul> <li><a href="https://arxiv.org/abs/2310.12508" target="_blank">Paper</a></li> <li><a href="https://github.com/OPTML-Group/Unlearn-Saliency" target="_blank">Official source code</a></li></ul></li> <li><a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf" target="_blank">Class-balanced loss based on effective number of samples</a></li> <li><a href="https://arxiv.org/abs/2109.13398" target="_blank">Unrolling sgd: Understanding factors influencing machine unlearning</a></li> <li><a href="https://arxiv.org/abs/2304.04934" target="_blank">Model Sparsity Can Simplify Machine Unlearning</a></li> <li><a href="https://arxiv.org/abs/2210.08196" target="_blank">Deep Regression Unlearning</a></li> <li><a href="https://arxiv.org/abs/2302.09880" target="_blank">Towards Unbounded Machine Unlearning</a></li> <li><a href="https://huggingface.co/spaces/pcuenq/uncanny-faces" target="_blank">Generate Uncanny Faces with ControlNet Stable Diffusion</a></li> </ol> <h1>Update History</h1> <ul> <li>Edit 1: make details more clear.</li> <li>Edit 2: revisit values in M</li> <li>Edit 3: add information concerning L</li> </ul>
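<p>To make the two stages above more concrete, here is a minimal sketch of the idea (not the author's actual code; the model, loaders, optimizer and all names below are placeholders). The mask is built from gradients of the negative loss on the forget set, passed through a sigmoid, shifted to [-1, 1] and taken in absolute value, and is then used to scale the gradients during the random-labeling fine-tuning step.</p> <pre><code>import torch
import torch.nn.functional as F

def build_saliency_mask(model, forget_loader, device):
    """Weight saliency mask from the gradient-ascent direction on the forget set."""
    model.zero_grad()
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        # negative cross-entropy = gradient-ascent objective on the forget set
        (-F.cross_entropy(model(x), y)).backward()
    mask = {}
    for name, p in model.named_parameters():
        g = p.grad if p.grad is not None else torch.zeros_like(p)
        # sigmoid to (0, 1), shift to (-1, 1), keep magnitudes only
        mask[name] = (2.0 * torch.sigmoid(g) - 1.0).abs()
    model.zero_grad()
    return mask

def masked_random_label_step(model, optimizer, x, y_random, mask, device):
    """One RL fine-tuning step in which every gradient is scaled by the saliency mask."""
    x, y_random = x.to(device), y_random.to(device)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y_random)  # the write-up uses a class-balanced CE here
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p.grad.mul_(mask[name])
    optimizer.step()
</code></pre>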
NeurIPS 2023 - Machine Unlearning
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>At first, thanks to Kaggle for hosting such an interesting competition and every Kagglers. It’s been a truly memorable experience for me, not only because of the active discussion zone, which provides interesting and useful directions for thought and practice, but also because of the meaningful topic of unlearning. I'm surprised to have this chance to introduce my solution here.<br> My solution is based on a 2-stage training method, which contains a one epoch forgetting stage on forget dataset and an 8-epoch adversarial fine-tuning stage on both forget dataset and retain dataset. Here are the details:</p> <h2>Overview:</h2> <p>(1). In order to let the model initially forget in the logits space, we optimize the KL-divergence between the output logits and a uniform pseudo label.<br> (2). An adversarial fine-tuning stage: <br> <strong>Forget Round:</strong><br> Instance-scale SCL(self supervised contrastive learning) could be a beneficial method when facing some unsupervised learning task such as clustering feature embeddings. Self supervised contrastive learning aims to pull the distance between positive samples and their enhanced samples closer, and to push the distance between positive samples and all samples further. In my assumption, we need to let the forget sample as uniform as possible in feature space, which could be seen as pushing the distance between any forget sample and all retain samples further. Hence, I have proposed two kinds of contrastive loss here:</p> <ol> <li><p>For a forget sample x, use its enhanced version x’ as a positive pair, and use all samples in the retain batch as a negative pair. The final loss formula could be written as this: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9888178%2F5ad52ea1d39d433629469ce7663e740b%2FKER3I(PZHAPA4Q0CBUN.png?generation=1701420464802101&amp;alt=media" alt=""></p></li> <li><p>Each retain sample could have the same probability to be chosen as positive sample with the any forget sample, which means they have the same weight in the formula of the contrastive loss. The final loss formula could be written as this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9888178%2Fa49dca72b4fc934c0ba13cb732080025%2FE8Q9FVJMZ2IQYR7FYX.png?generation=1701414982494286&amp;alt=media" alt=""><br> I have use second loss function as the final submission version (Because the first loss function performs not as good as the second preliminarily, I give up this idea due to the days limitation). 
I set the temperature coefficient t=1.15 here.<br> The forget-round training code is shown below:</p></li> </ol> <pre><code># note: calls lost in the original post's formatting are restored with standard
# PyTorch equivalents; `net` denotes the model being unlearned.
for sample_forget, sample_retain in zip(forget_loader, retain_ld4fgt):
    t = 1.15  # temperature coefficient
    inputs_forget, inputs_retain = sample_forget[0], sample_retain[0]  # image tensors
    inputs_forget, inputs_retain = inputs_forget.to(DEVICE), inputs_retain.to(DEVICE)
    optimizer_forget.zero_grad()
    # forget-batch embeddings keep gradients; retain-batch embeddings are detached
    outputs_forget, outputs_retain = net(inputs_forget), net(inputs_retain).detach()
    # weighted contrastive loss: every retain sample is an equally weighted positive
    loss = (-1 * nn.LogSoftmax(dim=-1)(outputs_forget @ outputs_retain.T / t)).mean()
    loss.backward()
    optimizer_forget.step()
</code></pre> <p>Due to the different number of steps in forget_loader and retain_loader (here referred to as retain_ld4fgt, a new dataloader for contrastive learning), there is an unexpected random shuffle at different checkpoints, which can be viewed as an ensembling method.<br> <strong>Retain Round:</strong><br> To enhance performance on the retain dataset, fine-tuning on it is unavoidable. Here, I simply retrain the model using cross-entropy loss.</p> <h2>Some useful tricks:</h2> <ol> <li>Because of the competition time limit, the maximum number of epochs is about 6 when the retain batch size is 64. (This conclusion still holds for adversarial training, because the forget round takes little time compared with the retain round.) I tried to add more epochs by increasing the batch size of the retain dataloader and the learning rate; results show that eight epochs of training with a batch size of 256 performs best on the public leaderboard.</li> <li>CosineAnnealingLR could also be useful in forget rounds (simply adding this, the score on the public LB increases from 0.084 to 0.091).</li> </ol> <h2>Some directions I have not investigated deeply:</h2> <ol> <li>Add more forget rounds in an epoch. For example, to intensify the adversarial stage (as a fiercer confrontation may enhance forgetting performance), adding another forget round could increase the total steps of forgetting.</li> <li>Replace the first stage with a contrastive learning loss.</li> </ol> <h2>Conclusion:</h2> <p>During the competition, my thinking went through multiple stages, and each stage had a different understanding of the competition and tasks. In fact, many excellent codes or discussions in the forum have been of great help to me. For example, <a href="https://www.kaggle.com/Maria" target="_blank">@Maria</a> Gorinova 's <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/discussion/454949" target="_blank">evaluation notebook</a> is an excellent work that helps you quickly understand the evaluation metrics. I have also witnessed many vastly different ideas on the forum. I believe that each idea has its own reasons, but the competition scenes are limited and idealized. While high-scoring methods may have limitations in practical scenarios, their underlying innovative approaches are universally applicable. I always look forward to seeing any creative solution.</p> <p>Solution link: <a href="https://www.kaggle.com/code/fanchuan/2nd-place-machine-unlearning-solution?kernelSessionId=153137657" target="_blank">2nd place Solution</a></p> <p>Yeah, it's my misexpression. Actually I mean the uniformity in the whole feature space comes from the denominator part of the InfoNCE loss. (Make clusters stay around instead of a small part of the space.) The random forgetting comes from the numerator part, which will provide a clustering process like k-means as you said.</p> <p>Already read. It's an interesting method. Glad to hear if you make any progress.</p>
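<p>For completeness, the first stage described in the overview (pushing the forget-set logits toward a uniform pseudo-label with a KL-divergence objective) can be sketched as follows. This is an illustrative reconstruction, not the exact competition code; net, the loader, the batch format and num_classes are placeholders.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

def forget_logits_stage(net, forget_loader, optimizer, device, num_classes=10):
    """One pass that pulls the forget-set predictions toward a uniform distribution."""
    net.train()
    kl = nn.KLDivLoss(reduction='batchmean')
    for batch in forget_loader:
        x = batch[0].to(device)  # assumes (image, label) batches
        optimizer.zero_grad()
        log_probs = F.log_softmax(net(x), dim=-1)
        uniform = torch.full_like(log_probs, 1.0 / num_classes)  # uniform pseudo-label
        loss = kl(log_probs, uniform)
        loss.backward()
        optimizer.step()
</code></pre>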
NeurIPS 2023 - Machine Unlearning
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle and Google for such an exciting competition. It was a great experience.</p> <h3>Summary</h3> <p>In this work, we developed a gradient-based re-initialization method. We assumed that if the gradients of the weights in the model are similar on the retain set and the forget set, it becomes challenging to forget information from the forget set while retraining on the retain set. To address this, we selectively reinitialized weights with similar gradients between the two sets, followed by retraining the model.</p> <h3>Method</h3> <p>The proposed gradient-based re-initialization method for unlearning consists of three main steps:</p> <ol> <li>Gradient collection: Gradient information is collected from the forget set and the retain set. The forget set’s gradient is collected using gradient ascent on the cross-entropy loss with the forget set labels, while the retain set’s gradient is collected using gradient descent on the cross-entropy loss with the retain set labels. Due to the unequal sample sizes between the retain set and forget set, random sampling was used from the retain set to match the number of samples in the forget set for gradient collection.</li> <li>Weight initialization: Based on the gradient information collected in the first step, a percentage of the convolution filter weights are re-initialized. The weights with the smallest absolute values of gradient information are re-initialized (our best result re-initialized 30% of the weights). The weights are re-initialized globally and unstructured, following the same He initialization (mode="fan_out", nonlinearity="relu") as the torchvision ResNet code.</li> <li>Retraining: The model is retrained with the retain set. The learning rate for the weights that were not re-initialized is 1/10 of the base learning rate. This is accomplished by scaling the gradients of those weights by 1/10.</li> </ol> <h3>Learning rate scheduler</h3> <p>The learning rate scheduler is also an important factor. A linear decay learning rate scheduler with a few warmup epochs consistently produces better results than a plain linear decay or a linear increase learning rate scheduler.</p> <h3>Randomness</h3> <p>Randomness is also an important factor. Selecting weights using a subset of the retain set (resulting in a different initialization for each run) results in better performance than using the entire retain set.
</p> <h3>Image</h3> <ul> <li>gradient collection<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3106709%2F9db2c688349380dc6feee88a71eacee9%2F1.PNG?generation=1701663754125244&amp;alt=media" alt=""></li> <li>Weight initialization<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3106709%2F4bcb6052f572a1edd1f49c34efa5453d%2F2.PNG?generation=1701663777339997&amp;alt=media" alt=""></li> </ul> <h3>Our <a href="https://www.kaggle.com/code/nuod8260/targeted-re-initialization-3rd-on-private-lb" target="_blank">code</a></h3> <p>Affiliation: School of Artificial Intelligence, College of Computer Science, Kookmin University</p> <p>Doun Lee<br> Email: <a href="mailto:[email protected]">[email protected]</a></p> <p>Name: Jinwoo Bae<br> Email: <a href="mailto:[email protected]">[email protected]</a></p> <p>Name: Jangho Kim (Professor)<br> Email: <a href="mailto:[email protected]">[email protected]</a></p>
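<p>As a rough illustration of steps 1 and 2 above (gradient collection and targeted re-initialization), here is a minimal sketch. The 30% ratio follows the write-up; the model, loaders, selection rule and all names are placeholders rather than the team's actual code.</p> <pre><code>import torch
import torch.nn.functional as F

def collect_grads(model, loader, device, sign):
    """Accumulate per-parameter gradients of (sign * cross-entropy) over a loader."""
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        (sign * F.cross_entropy(model(x), y)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                grads[n] += p.grad.detach()
    model.zero_grad()
    return grads

def reinit_similar_gradient_conv_weights(model, forget_loader, retain_subset_loader, device, ratio=0.3):
    # gradient ascent on the forget set, gradient descent on a subsample of the retain set
    g_forget = collect_grads(model, forget_loader, device, sign=-1.0)
    g_retain = collect_grads(model, retain_subset_loader, device, sign=1.0)
    conv_weight_names = [n + '.weight' for n, m in model.named_modules()
                         if isinstance(m, torch.nn.Conv2d)]
    params = dict(model.named_parameters())
    # small magnitude of the combined gradient = similar gradients on the two sets
    scores = {n: (g_forget[n] + g_retain[n]).abs() for n in conv_weight_names}
    threshold = torch.quantile(torch.cat([s.flatten() for s in scores.values()]), ratio)
    with torch.no_grad():
        for n in conv_weight_names:
            w = params[n]
            fresh = torch.empty_like(w)
            torch.nn.init.kaiming_normal_(fresh, mode='fan_out', nonlinearity='relu')
            selected = scores[n].le(threshold)  # global, unstructured selection
            w[selected] = fresh[selected]
</code></pre> <p>During retraining, the re-initialized entries would then learn at the base learning rate, while the untouched weights have their gradients scaled by 1/10, as described in step 3.</p>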
NeurIPS 2023 - Machine Unlearning
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the competition hosts for a most interesting competition. I really enjoyed getting familiarized with the research topic of machine unlearning and this competition. </p> <p>My approach is quite simple but effective. I tried to make small incremental changes to some very simple basic ideas which I will share here. This strategy turned out to be quite successful, landing me a 5th place on the private leaderboard and a 6th place on the public one.</p> <p>The code notebook for my solution can be found here: <a href="https://www.kaggle.com/code/sebastianoleszko/prune-re-init-entropy-regularized-fine-tuning" target="_blank">Prune/Re-init &amp; Entropy Regularized Fine-Tuning</a></p> <h1>Overview</h1> <p>My solution consisted of two primary contributions, </p> <ol> <li>Introducing sparsity through pruning and re-initializing</li> <li>Fine-tuning the model with both regular cross-entropy loss and a kind of regularization on entropy</li> </ol> <p>First, the model was pruned (99% of the parameters) based on an unstructured L1-norm criteria and then re-initialized to random weights. The idea of this step was simply to both unlearn the information from the forget set, and to add sufficient noise to the model to "move" far enough away from the initial model that we can properly imitate the distribution of fully retrained models. Then, the model was fine-tuned on the retain set but with a modification to the loss,<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4680847%2Fe6799ec8e6221ca32bf685684e708946%2Fformulas.png?generation=1701634759913711&amp;alt=media" alt=""><br> where L_FT is the cross entropy loss for fine-tuning on the retain set and L_RegEntropy is the MSE of entropy between the original network’s (trained on all data) predictions and the fine-tuned networks predictions. The idea is that this acts as a sort of regularization on the fine-tuning, making sure that the entropy is similar to our starting point which is also the best estimate of entropy for the retrained models. </p> <p>With these two steps, my thinking was that the model should have (1) forgotten the information of the forget set, (2) moved far away from the fine-tuning starting point which was the full data model, and (3) been fine-tuned back to high performance and trained in such a way that the resulting predictions are similar to those of a model trained with similar parameters as the original training.</p> <h1>Important ideas</h1> <h2>References and inspiration</h2> <p>As I developed the general idea of my solution, I found two articles that either inspired me or to some extent confirmed my choice of methods. Although I didn't use any of the suggested methods directly, these papers gave me valuable insights into SOTA approaches and why they might succeed. 
I built on those results and found simple steps that led to improvements in the competition.</p> <ul> <li><a href="https://arxiv.org/pdf/2304.04934.pdf" target="_blank">Model Sparsity Can Simplify Machine Unlearning</a></li> <li><a href="https://arxiv.org/pdf/2302.09880.pdf" target="_blank">Towards Unbounded Machine Unlearning</a></li> </ul> <h2>Hyperparameters</h2> <p>As I started to experiment with different ideas in the competition, I quickly found that the most important thing for me would be to properly tune the hyperparameters of the model. Although I understand the risk of overfitting to the public leaderboard when tuning against submitted results, I think that the value of doing so outweighed the risk. I think more than half of my submissions were spent tuning well performing methods' hyperparameters in small steps. This could often lead to jumps in the public leaderboard of 0.005+ at a time and quickly moved me up the leaderboard. I found most success in tuning the learning rate/number of epochs along with adding a very small amount of class weights to the cross entropy loss term w^(0.1). I never managed to achieve good results with the maximum number of epochs that I could train with the time limitations, however I expect that this would have helped if I managed to tune the rest of the parameters optimally for this (scheduler, learning rate, class weights etc.).</p> <p>My reasoning of why hyperparameters were especially important in this competition was due to the lack of precise information about how the original model was trained. Since I believe that this information is required to make an efficient unlearning algorithm under the proposed evaluation metric, the next best thing was to systematically guess parameters until the score went up. I would have liked to have more information about the training procedure to avoid this guesswork for future unlearning competitions.</p> <h2>What I would have liked to try</h2> <ul> <li>Other pruning criteria such as removing parameters with high importance for predicting forget set images but not others</li> <li>Maximizing the allowed time to run the algorithm (more epochs)</li> <li>Further explore if the MSE loss was better than something like a KL divergence or cross entropy. This is something that I changed a couple of times without improvement on the public leaderboard. This loss term was possibly one of the most important pieces of my final solution.</li> </ul> <h1>Conclusion</h1> <p>This competition had me looking for new exciting solutions from start to finish but I ended up with quite a simple but efficient method. I tried a lot of other ideas but I either didn't have the time to properly tune them, thus couldn't know if they actually improved my unlearning algorithm, or they didn't work for me. I do however believe that the key was in somehow performing the three steps that i outlined in the summary:</p> <ol> <li>Forget the data (prune)</li> <li>Move the model randomly away from the fixed fully trained model (randomly re-initialize parameters)</li> <li>Regain performance while not moving too far from the original model (Regularized Fine-tune)</li> </ol> <p>I look forward to learning more about unlearning in the future and to follow the work of my fellow Kagglers and researchers. 
Thank you again to the competition hosts and to everyone who shared their solutions and insights, I've really enjoyed reading up on how you all have approached this competition!</p> <h3>Final scores</h3> <p>Public LB: 0.0932121752<br> Private LB: 0.0863052371</p>
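<p>A minimal sketch of the regularized fine-tuning objective described above, where the regularizer is the MSE between the prediction entropies of the original network and the fine-tuned network; the weighting factor and function names are illustrative, not the actual solution code.</p> <pre><code>import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """Shannon entropy of the softmax predictions, one value per sample."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)

def regularized_finetune_loss(student_logits, original_logits, targets, lambda_reg=1.0):
    l_ft = F.cross_entropy(student_logits, targets)  # fine-tuning loss on the retain set
    # keep the fine-tuned model's prediction entropy close to the original model's
    l_reg = F.mse_loss(prediction_entropy(student_logits),
                       prediction_entropy(original_logits.detach()))
    return l_ft + lambda_reg * l_reg
</code></pre>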
NeurIPS 2023 - Machine Unlearning
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, thank you for hosting this intriguing research competition! I'd also like to express my gratitude to everyone for sharing numerous important insights in discussions.</p> <p>We have published our notebook, and the description is as follows:<br> Notebook : <a href="https://www.kaggle.com/code/marvelworld/toshi-k-rotate-and-marvel-pseudo-blend48-52" target="_blank">https://www.kaggle.com/code/marvelworld/toshi-k-rotate-and-marvel-pseudo-blend48-52</a><br> (The code for adding to the dataset and debugging does not affect the submission in any way)</p> <h2>1. Overview</h2> <p>Our solution is an ensemble of two approaches: (1) Retraining from transposed weights and (2) Fine-tuning with pseudo-labels.</p> <ul> <li>(1) involves retraining the model using a modified version of the original model. In this modification, all weights in Conv2D are transposed. This process helps in forgetting samples in the forget-set, while enabling the reuse of valuable features from the original model.</li> <li>(2) reproduces the behavior of the retrained model with pseudo-labels. We estimate the mistakes made by the retrained model on the forget data and use them as pseudo-labels for fine-tuning the pretrained model.</li> </ul> <p>One of these approaches is executed each time unlearning is performed for one model.</p> <h2>2. Retraining from transposed weights (by <a href="https://www.kaggle.com/toshik" target="_blank">@toshik</a>)</h2> <p>At the start of the unlearning process, all weights in Conv2D are transposed both vertically and horizontally. This operation facilitates the forgetting of the forget-set. The modification is carried out simply as follows:</p> <pre><code>def rotate(local_model):
    print('rotate weight')
    for module in local_model.modules():
        if isinstance(module, torch.nn.modules.conv.Conv2d):
            # flip each kernel vertically and horizontally (calls restored from the garbled original)
            module.weight = torch.nn.Parameter(torch.flip(module.weight, dims=[2, 3]))
</code></pre> <p>Afterwards, the model is trained with the retain set for 3 epochs. This is considered equivalent to inputting flipped images into the original model and performing fine-tuning.<br> When compared to retraining from scratch, transposed weights still retain valuable features from the original model. This allows for the reuse of such information, resulting in faster and more stable convergence during fine-tuning.</p> <h2>3. Fine-tune with pseudo-labels (by <a href="https://www.kaggle.com/marvelworld" target="_blank">@marvelworld</a>)</h2> <p>The first step is to run inference on the forget data with the pretrained model. Next, infer the forget data again after a simple unlearning as shown in the example notebook. Compare the results of both inferences and identify the data on which the inference moves in the wrong direction.<br> We also developed a simple model trained from scratch on the retain data to infer the forget data. This result also identifies data that the model tends to mistake with some confidence.<br> Finally, we fine-tune the pretrained model with the identified data and the incorrect inferences as pseudo-labels.</p> <h2>4. Ensemble</h2> <p>Our solution comprises a mixture of two types of models.
We opted for different combinations, and the final scores are as follows,</p> <table> <thead> <tr> <th>Retraining from transposed weights</th> <th>Fine-tune with pseudo-labels</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>246 models</td> <td>266 models</td> <td>0.0785184178</td> </tr> <tr> <td>266 models</td> <td>246 models</td> <td>0.0756313425</td> </tr> </tbody> </table> <h2>5. What we tried but didn’t work</h2> <ul> <li>class weights</li> <li>approach to each layer like LLRD, init/freeze</li> <li>softmax labels / KL Loss</li> <li>Relax Loss<ul> <li><a href="https://openreview.net/forum?id=FEDfGWVZYIn" target="_blank">https://openreview.net/forum?id=FEDfGWVZYIn</a></li></ul></li> <li>SCRUB<ul> <li><a href="https://arxiv.org/abs/2302.09880" target="_blank">https://arxiv.org/abs/2302.09880</a></li></ul></li> </ul> <p>Although Relax Loss and SCRUB appeared to be a state-of-the-art methods in this field, they unexpectedly did not perform well.<br> The unlearning metric was so stringent that the defenses against MIA were insufficient to achieve a good score.</p> <h2>6. Final Remarks</h2> <p>Our solution is built upon two distinctive approaches, contributing to the stability of our solution in the private leaderboard. However, there are certain limitations when applying it to real-world problems, particularly in manually tuning some parameters.</p> <p>We hope that our solution serves as a valuable reference in future research.<br> Thank you for reading this to the end!</p> <p>At least SCRUB inspired us, but ultimately the idea wasn't adopted.</p>
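<p>One possible reading of the pseudo-label step in Section 3 is sketched below: forget samples whose prediction drifts away from the true label after a simple unlearning pass are kept, and the drifted (wrong) prediction becomes the pseudo-label for fine-tuning. This is illustrative only; the selection rule, loaders and model names are not taken from the actual notebook.</p> <pre><code>import torch

@torch.no_grad()
def collect_pseudo_labels(pretrained, unlearned, forget_loader, device):
    pretrained.eval()
    unlearned.eval()
    pseudo = []
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        pred_before = pretrained(x).argmax(dim=-1)
        pred_after = unlearned(x).argmax(dim=-1)
        # "moves in the wrong direction": correct before, wrong after the simple unlearning
        moved = (pred_before == y).logical_and(pred_after != y)
        for xi, yi in zip(x[moved], pred_after[moved]):
            pseudo.append((xi.cpu(), int(yi)))
    return pseudo  # (image, pseudo-label) pairs used to fine-tune the pretrained model
</code></pre>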
NeurIPS 2023 - Machine Unlearning
6th Place Solution for the NeurIPS 2023 - Machine Unlearning Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank the organizing committee; it was an incredible experience, and we are so grateful for all the hard work and effort you and your team put into making it happen.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/overview" target="_blank">https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/data" target="_blank">https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/data</a></li> </ul> <h1>Overview of the approach</h1> <p>In our solution, we present a simple yet effective machine unlearning approach that uses a selective model parameter reset, a warm-up phase employing knowledge distillation and, finally, a fine-tuning phase. Specifically, our method focuses on resetting the first and last layer of the original model, which is trained on the full training set. By resetting the first layer, we disrupt the initial hidden representations of the data, leading to a direct decline in model performance and facilitating the forgetting of previously trained data. Resetting the last layer enables the model to diverge from its original output patterns, allowing it to approximate a new output distribution, which can result in an output distribution similar to that of a model trained on a specific data subset, i.e., the retain set.</p> <ul> <li>We followed the data processing pipeline as provided in the starter notebook (run-unlearn-finetune), without further data augmentations.</li> <li>We submitted two final versions: one that uses the entire validation set and another that uses only samples from the first two classes for the warm-up phase. Note that these two classes were the only ones with samples in the forget set.</li> <li>Our method does not require the forget set.</li> </ul> <h1>Details of the submission</h1> <h2>Insights from Cifar-10</h2> <p>The initial phases involve resetting the first and last layers of the original model. These two layers are chosen because the first layer significantly influences the rest of the model layers and the last layer determines the model’s final output distribution. With this reset step, we enable the model to deviate from its original state. <br> In addition, empirical observations from local experiments on the CIFAR-10 dataset showed that these two layers exhibited mostly negative cosine similarity between model weights trained on the full training set and models trained from scratch on a smaller subset (i.e., the retain set).
<br> Thus, in terms of global weight parameters, this means that the two models found a similar solution with respect to predictive performance, but with different directions (See Figure "Weight Similarities" on the CIFAR-10 dataset between the pretrained and retrained on retain set model).</p> <h3>Weight Similarities</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4512660%2Ffa884bf7bbac22713c58b723cea9d13e%2Fcos_sim_layer.png?generation=1701973645041086&amp;alt=media" alt=""></p> <p>Furthermore, we measured the differences in terms of activation distributions between the pre-trained model and the model trained from scratch on the retain set. This experiment revealed that the differences in terms of cosine similarity are clearly evident even regarding their activations (See figures below) These observations lay the foundations for resetting the first and the last layer parameters of the original model. </p> <h3>Similarity of Activations after MaxPooling layer</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4512660%2Fc887eea40d7e4d6087598c2aed603d6b%2F5.Maxpool_Embeddings_Cropped.png?generation=1701974052559749&amp;alt=media" alt=""></p> <h3>Embedding Similarity (after AvgPool layer)</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4512660%2F202a742464afbd7f0de798730944545d%2F1.Avgpool_Embeddings_Cropped.png?generation=1701974207236188&amp;alt=media" alt=""><br> Finally, we performed experiments with baseline unlearning algorithms including simple finetuning, finetuning by resetting only the final layer, eu-k and cf-k forgetting [1], unlearning by maximizing the loss on the forget set [2], unlearning with bad teaching [3] and SCRUB [4], which minimizes the divergence of a student model from the teacher on the retain set and maximizes the divergence on forget set. In all of the above cases, none of the methods achieved significantly higher public score compared to the baseline fine-tuning approach.</p> <p>After resetting the first and the last layer of the network, we include a rapid warm-up phase using the validation set. The goal in this phase is to prepare the model for the third phase by minimizing the Kullback-Leibler (KL) divergence between the outputs of the original pre-trained model (teacher) and the reinitialized model (student).</p> <p>The final phase includes fine-tuning the student model on the retain set. In this stage, we combine three types of losses: standard cross-entropy loss, soft cross-entropy loss and KL divergence. The cross-entropy loss ensures the model's accuracy on the underlying task using hard labels from the retain set. The soft cross-entropy loss involves comparing softened predictions of the student model with soft labels from the teacher model. The KL divergence loss combined with the soft cross-entropy facilitates rapid knowledge transfer and broader information capture. 
Additionally, the cosine annealing scheduler is integrated to optimize training dynamics, promoting faster convergence and efficient learning.<br> The figure below shows a summary of the proposed pipeline.</p> <h2>Unlearning Pipeline Visualization</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4512660%2Feb7b840f4ca0ea41f5df37386440712d%2FMachineUnleaning.drawio.png?generation=1701971655736516&amp;alt=media" alt=""></p> <h1>Final Submissions</h1> <p>As mentioned earlier, we selected two variations of our pipeline. The first uses the entire validation set and the second employs only a subset for the warm-up phase, specifically the samples from the first two classes. The inspiration for this strategy derives from the distribution of the forget set, which exclusively contains samples from the first two classes (0, 1).</p> <p>To extract the samples belonging to the first two classes, we used the following function:</p> <pre><code># names and values lost in the original post's formatting are filled in with plausible defaults
def get_first_two_classes_loader(batch_size):
    val_ds = HiddenDataset(split='validation')
    samples = []
    for sample in val_ds:
        y = sample['age_group']
        if int(y) == 0 or int(y) == 1:
            samples.append(sample)
    val_loader = DataLoader(samples, batch_size=batch_size, shuffle=True)
    return val_loader
</code></pre> <p>The scores of these two approaches are summarized in the following table:</p> <table> <thead> <tr> <th>Warm-up Approach</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>All classes</td> <td>0.08383</td> <td>0.07219</td> </tr> <tr> <td>Classes 0,1</td> <td>0.08324</td> <td><strong>0.07831</strong></td> </tr> </tbody> </table> <p>Notably, both approaches resulted in almost equivalent public scores. However, the approach that uses only the first two classes of the validation set for warming up the model demonstrated improved performance on the hidden test set.</p> <h2>Useful Tricks</h2> <ul> <li>Resetting a portion of model parameters</li> <li>CosineAnnealingLR Scheduling for Finetuning Phase</li> </ul> <h2>Strategies</h2> <ul> <li>Threefold Optimization Loss during Finetuning Phase</li> <li>High Temperature in both stages</li> </ul> <h2>What didn't work</h2> <ul> <li>Use of class weights (despite their use in the original training pipeline)</li> <li>Algorithms that did not employ resetting resulted in lower scores</li> <li>Variants of retraining (starting from a model pretrained on ImageNet resulted in low scores or threw exceptions, probably due to exploding gradients)</li> </ul> <h1>Conclusion</h1> <p>In conclusion, throughout this competition we gained a better understanding of Machine Unlearning, which has been both enlightening and challenging. Special thanks also to the non-Kaggle participant of our team, Nikos Komodakis, for his contribution.</p> <h1>Sources</h1> <ol> <li>Goel, S., Prabhu, A., Sanyal, A., Lim, S. N., Torr, P., &amp; Kumaraguru, P. (2022). Towards adversarial evaluations for inexact machine unlearning. arXiv preprint arXiv:2201.06640.</li> <li>Golatkar, A., Achille, A., &amp; Soatto, S. (2020). Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9304-9312).</li> <li>Chundawat, V. S., Tarun, A. K., Mandal, M., &amp; Kankanhalli, M. (2023). Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 6, pp. 7210-7217).</li> <li>Kurmanji, M., Triantafillou, P., &amp; Triantafillou, E. (2023). Towards Unbounded Machine Unlearning.
arXiv preprint arXiv:2302.09880.</li> </ol> <ul> <li>Starter Notebook (run-unlearne-finetune) <a href="https://www.kaggle.com/code/eleni30fillou/run-unlearn-finetune" target="_blank">https://www.kaggle.com/code/eleni30fillou/run-unlearn-finetune</a></li> </ul> <p>Code for <a href="https://www.kaggle.com/code/stathiskaripidis/unlearning-by-resetting-layers-7th-on-private-lb" target="_blank">7th Place Solution</a></p> <h2>Team</h2> <ul> <li><a href="https://scholar.google.com/citations?user=jif2JYsAAAAJ&amp;hl=en" target="_blank">Efstathios Karypidis</a>, PhD Student Archimedes Unit, Athena RC | NTUA</li> <li><a href="https://scholar.google.com/citations?user=V1t6u_YAAAAJ&amp;hl=en&amp;oi=ao" target="_blank">Vasileios Perifanis</a>, PhD Student, DUTH</li> <li><a href="https://scholar.google.com/citations?user=srkOEMYAAAAJ&amp;hl=el&amp;oi=ao" target="_blank">Christos Chrysanthos Nikolaidis</a>, PhD Student, DUTH</li> <li><a href="https://scholar.google.com/citations?user=xCPoT4EAAAAJ&amp;hl=en&amp;oi=ao" target="_blank">Nikos Komodakis</a>, Assistant Professor, (non-Kaggle Participant), UoC</li> <li><a href="https://scholar.google.com/citations?user=Dxz0pLsAAAAJ&amp;hl=en" target="_blank">Pavlos Efraimidis</a>, Professor, DUTH</li> </ul>
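<p>As a rough sketch of the distillation objectives used in the warm-up and fine-tuning phases described above (the temperature and weighting values are illustrative assumptions, not the team's exact settings):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

kl_div = nn.KLDivLoss(reduction='batchmean')

def warmup_loss(student_logits, teacher_logits, T=4.0):
    """Warm-up: match the partially re-initialized student to the original teacher."""
    return kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits.detach() / T, dim=-1)) * T * T

def finetune_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5, beta=0.5):
    """Retain-set fine-tuning: hard CE + soft cross-entropy + KL divergence to the teacher."""
    hard_ce = F.cross_entropy(student_logits, targets)
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    soft_ce = -(soft_teacher * log_student).sum(dim=-1).mean()
    kl = kl_div(log_student, soft_teacher)
    return hard_ce + alpha * soft_ce + beta * kl
</code></pre>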
NeurIPS 2023 - Machine Unlearning
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to express my sincere gratitude to the Kaggle platform and its dedicated team for their invaluable support and assistance. Their efforts are truly commendable. I feel fortunate to have secured the eighth position in this competition, and the experience has opened a new door to a fascinating field of study. I want to extend my appreciation to the other participating teams whose selfless sharing of knowledge is one of the reasons for the rapid growth and development of machine learning.</p> <p>The code for our solution can be found at: <a href="https://www.kaggle.com/code/sunkroos/noise-injection-unlearning-8th-place-solution" target="_blank">noise injection unlearning</a></p> <h2><strong>Overview:</strong></h2> <p>Our team designed a simple yet effective unlearning strategy that only requires the use of the retrain set. The solution comprises three main stages:</p> <p>(1) Resetting the parameters of the fully connected (FC) layer and fine-tuning the entire model.</p> <blockquote> <p>By resetting the FC layer parameters, we can alter the distribution of the final output while preserving the features learned by the network. This effectively reduces the overfitting effect of the model on the forget set.</p> </blockquote> <p>(2) Randomly selecting N layers from the network, adding noise to specific layers in an additive manner, and fine-tuning these layers.</p> <blockquote> <p>Adding noise helps the network 'forget' the information it has learned, and the randomness of the noise and layer selection contributes to enhancing the model's diversity. It is noteworthy that model diversity significantly affects the final score, as a single model distribution makes membership inference attacks (MIA) more likely to succeed.</p> </blockquote> <p>(3) Fine-tuning all network layers.</p> <blockquote> <p>The primary goal of this step is to restore the accuracy lost in the network due to 'forgetting' information.</p> </blockquote> <h2><strong>Tricks attempted but did not yield improvement:</strong></h2> <ul> <li>Training with class weights.</li> <li>Changing additive noise to multiplicative noise, but no significant improvement observed.</li> <li>Calculating the importance of parameters in the retrain set and forget set based on model gradient statistics and adding more noise to parameters deemed more important in the forget set.</li> <li>Using label smoothing or focal loss to reduce overfitting/increase model generalization.</li> </ul> <h2><strong>Unexplored methods due to personal constraints:</strong></h2> <p><em>Due to limited resources, some potentially helpful methods were not explored.</em></p> <ul> <li>Increasing the distance between original model parameters and current model parameters during finetuning.</li> <li>Using multiple unlearning strategies for ensemble learning in 512 models.</li> </ul> <h2><strong>Conclusion:</strong></h2> <p>In conclusion, this competition has been a tremendous learning experience for me, providing insights into a completely new domain over the course of approximately two months. 
I am fortunate to have achieved a reasonably good result, and I hope that my modest contributions can be of some benefit to the field. I would like to reiterate my gratitude to all participating teams for their generous sharing. While this competition may not have resulted in medals, the ideas and thinking sparked from discussions serve as solid stepping stones along the path of machine learning development.</p>
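<p>A minimal sketch of the second stage above (injecting additive noise into a few randomly selected layers before fine-tuning them); the number of layers and the noise scale below are placeholders rather than the author's values.</p> <pre><code>import random
import torch

def add_noise_to_random_layers(model, n_layers=3, sigma=0.05):
    """Pick n_layers parameter tensors at random and perturb them additively."""
    params = dict(model.named_parameters())
    names = [n for n, p in params.items() if p.ndim != 1]  # skip biases / BN scalars
    chosen = random.sample(names, k=min(n_layers, len(names)))
    with torch.no_grad():
        for n in chosen:
            params[n].add_(sigma * torch.randn_like(params[n]))
    return chosen  # these layers are fine-tuned first, before the final full fine-tuning
</code></pre>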
NeurIPS 2023 - Machine Unlearning
84th place solution and consideration on the use of validation data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, first of all, I would like to express my sincere gratitude for hosting such an interesting competition.</p> <p>The solution I finally submitted involved <strong>1) label smoothing for the forget set</strong> and <strong>2) ensemble with the original model.</strong></p> <p>The reason for using the original model and performing the ensemble was that, instead of training for a small number of iterations on the retain set, I believed that this method could capture information about the retain set without investing additional training time.<br> Although I tried various other approaches, it seems that the aforementioned experiments conducted early on achieved the best scores. <strong>Among these, I'd like to explore whether there are any other potential issues with the approach I took, beyond what I have considered.</strong></p> <hr> <h2>The approach I'd like to discuss involves using statistics from the validation data for label smoothing. Here are the details.</h2> <h2>0. Basic assumptions</h2> <p>I'm not sure why, but it seems that writing equations within Markdown sentences isn't functioning properly. If anyone has any advice on how to address this, I would appreciate it.</p> <p><strong>Notations</strong></p> <p>$M_T$ (pre-trained model)</p> <p>$M_R$ (retrained-from-scratch model)</p> <h2>Contents</h2> <p>I make several assumptions for the following training workflow.</p> <p>1) The output distributions for the validation and forget sets in $M_R$ (the retrained-from-scratch models) are expected to be similar.</p> <p>2) The output distributions for the validation set in $M_T$ (the pre-trained model) and $M_R$ are anticipated to be similar.</p> <p>3) In other words, the result of feeding the validation set into $M_T$ is expected to have a similar output distribution to that of $M_R$ with the forget set.</p> <h1>1. Get statistics from the validation set</h1> <ul> <li>The validation data is used as input for $M_T$ to obtain the output.</li> <li>Sort all outputs based on the index corresponding to the class.</li> <li>Divide the total range into five intervals and obtain the mean value for each interval as a statistical measure.</li> </ul> <h1>2. Change the labels of the forget set to pseudo-labels obtained from the validation set</h1> <ul> <li>Sort the results of the forget set fed into $M_T$ and divide them into five intervals.</li> <li>Assign pseudo-labels to each interval based on the statistical values obtained in step 1.</li> </ul> <h1>3. Training with the forget set with pseudo-labels</h1> <ul> <li>Proceed with training using the forget set with the newly assigned labels, aiming to align the results for the forget set with those of the validation set.</li> <li>I conducted training using the simple approach of reducing the L2 distance.</li> </ul> <p>This approach worked well in experiments using CIFAR10. I believe the differences in the dataset and the degree of model overfitting may contribute to the variation in performance. However, if you have any other concerns or think there might be issues with the method, I would greatly appreciate your advice. Thank you.</p>
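<p>For reference, label smoothing on the forget set (part 1 of the submitted solution) can be written as below; the smoothing strength, the number of classes and this particular formulation are illustrative assumptions rather than the author's code.</p> <pre><code>import torch
import torch.nn.functional as F

def smoothed_forget_loss(logits, targets, num_classes=10, smoothing=0.3):
    """Cross-entropy against smoothed targets for forget-set samples."""
    with torch.no_grad():
        soft = torch.full_like(logits, smoothing / (num_classes - 1))
        soft.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
</code></pre>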
NeurIPS 2023 - Machine Unlearning
9th Place Solution (Forget set-free Approach, 3rd on Public LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Forget set-free Approach (9th on Private LB, 3rd on Public LB)</h1> <p>First of all, thank you for organizing this meaningful challenge. It provided a great opportunity to deeply engage with the concept of machine unlearning and to exchange ideas with esteemed researchers.</p> <h2>Context</h2> <p>Business context: <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/overview" target="_blank">https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/data" target="_blank">https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/data</a></p> <h2>Overview</h2> <p>Our solution consists of <em>forgetting phase</em> and <em>remembering phase</em>. In forgetting phase, model parameters are stochastically selected and re-initialized. In remembering phase, knowledge preserving loss is calculated between the original model and the target unlearning model to remind the target model about retain set. Furthermore, the forgetting phase and remembering phase are repeated for some cycles to enhance unlearning performance.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2F606b7592ad5f5e6b32c17f85818b3c6b%2Foverall.PNG?generation=1701416396700810&amp;alt=media" alt=""></p> <p>Table 1 demonstrates the unlearning performance of our final solutions, compared with other published unlearning methods [1-4]. Note that the public scores are recorded.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2Ffcd863e1c450a399533f3925210fa46a%2Ftable1.PNG?generation=1702128358580812&amp;alt=media" alt=""></p> <h2>Details</h2> <h3>Experimental Settings</h3> <p>NeurIPS 2023 Machine Unlearning Challenge has specific code requirements. For example, the submissions should run 512 completely independent unlearning algorithms without any pre-compiled code or caching work between each run. Furthermore, these 512 runs should be completed within 8 hours. To obligate these requirements, few cycles of stochastic re-initialization and knowledge preserving process are executed. In detail, the first algorithm has 3 cycles with [1, 2, 2] epochs for each cycle. During the total 5 epochs, cosine learning rate scheduler (init_lr=0.001, T_max=2) is employed. The second algorithm has 4 cycles with [2, 1, 1, 1] epochs for each cycle and learning rate is set to [0.0005, 0.001, 0.001, 0.001, 0.001] for each epoch. Both algorithms randomly select 6 layers from the selection pool, while allowing replacement. The gaussian noise is sampled from the distribution of zero mean with 0.01 sigma. Note that only retain set is utilized for the experiments.</p> <h3>Logit Distribution</h3> <p>If the unlearning is successful, the unlearned model should produce similar logit distributions compared with the retrained model. Therefore, in addition to the quantitative metric, we observed the distribution of logits when feeding forget set and retain set to the models. 
Logit values of forget set and retain set are collected and visualized as a overlapped histograms in Figure 2 and Figure 3. As demonstrated in Figure 2 and Figure 3, Our unlearned models produces closer distributions with the distributions of retrained model, when compared with the naively fine-tuned model. Because the challenge dataset is hidden, MUFAC dataset [5] is utilized to visualize these distributions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2F82c3f0aa1b1df3f1354253ea0d30913d%2Flogits.PNG?generation=1702129823962503&amp;alt=media" alt=""></p> <h3>Explored Methods</h3> <p>Aforementioned unlearning methods come from several interim experiments. In this section, some interim findings are introduced to follow our decision process. </p> <h4>Re-initialization</h4> <p>When stochastic re-initialization is employed in addition to the fine-tuning, the score surpasses that of pure fine-tuning. Furthermore, interestingly, parameter selection using diagonal elements of fisher information matrix decreases the score. These results are demonstrated in Table 2.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2Fa7bafc650a81274f36644b8160bcbb30%2Ftable2.PNG?generation=1702128837516533&amp;alt=media" alt=""></p> <h4>Data Augmentation</h4> <p>Table 3 describes the effect of data augmentation. Gaussian noise, generated from a gaussian distribution with a mean of 0.0 and standard deviation of 0.1, is added to the retain set images.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2Fde255579512f779d39d06265c5527828%2Ftable3.PNG?generation=1702128859512521&amp;alt=media" alt=""></p> <h4>Loss Functions</h4> <p>Table 4 describes the effect of different loss functions. Between the loss functions, MSE loss outperforms other loss functions, such as cross-entropy loss and L1 loss. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2F3317c74874cd638d5c5435f839253ef4%2Ftable4.PNG?generation=1702128943728505&amp;alt=media" alt=""></p> <h4>N Cycles</h4> <p>It is unable to improve unlearning performance by simply increasing parameter selection ratio for re-initialization. However, we find that repeating cycles of forgetting phase and remembering phase significantly improves the performance. Table 5 demonstrates these improvements.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2Ff61d629e383ceda7a89654731f5575f2%2Ftable5.PNG?generation=1702128920501681&amp;alt=media" alt=""></p> <h4>Layer-wise vs. Element-wise</h4> <p>In our experiments, layer-wise parameter selection highly surpasses element-wise parameter selection method, as shown in Table 6.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2Fe64dcf0616033366e9065ccccc26ee5f%2Ftable6.PNG?generation=1702128969844128&amp;alt=media" alt=""></p> <h4>Selection Pool</h4> <p>Furthermore, it is effective to prevent some layers from the re-initialization. In our experiments, fully-connected layers and projection-shortcut layers should be excluded from parameter selection and re-initialization. 
Table 7 demonstrates these results.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4500975%2F378326d5b46ec9dd7237ef4ddcd80e16%2Ftable7.PNG?generation=1702128988087226&amp;alt=media" alt=""></p> <h2>Conclusion</h2> <p>Our solution exclusively utilizes the retain set, omitting forget set and validation set. This can be effective when the forget set is already delected or its usage is prohibited.<br> However, as our solution involves random selection of layers, the unlearning performance may vary depending on the specific layers chosen.</p> <h2>Sources</h2> <p>[1] Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, May 2021.<br> [2] Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot. Unrolling sgd: Understanding factors influencing machine unlearning. In 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&amp;P), pages 303–319. IEEE, June 2022. ISBN 9781665416146, 9781665416153. doi: 10.1109/EuroSP53844.2022.00027.<br> [3] Eleni Triantafillou, Fabian Pedregosa, Jamie Hayes, Peter Kairouz, Isabelle Guyon, Meghdad Kurmanji, Gintare Karolina Dziugaite, Peter Triantafillou, Kairan Zhao, Lisheng Sun Hosoya, Julio C. S. Jacques Junior, Vincent Dumoulin, Ioannis Mitliagkas, Sergio Escalera, Jun Wan, Sohier Dane, Maggie Demkin, and Walter Reade. Neurips 2023 - machine unlearning, 2023.URL <a href="https://kaggle.com/competitions/neurips-2023-machine-unlearning" target="_blank">https://kaggle.com/competitions/neurips-2023-machine-unlearning</a>.<br> [4] Alexander Warnecke, Lukas Pirch, Christian Wressnegger, and Konrad Rieck. Machine unlearning of features and labels, August 2021.<br> [5] Dasol Choi and Dongbin Na. Towards machine unlearning benchmarks: Forgetting the personal identities in facial recognition systems, November 2023.</p> <h2>Supplementary Material</h2> <p>The detailed explanation of our methods can be found in pdf below: <a href="https://www.dropbox.com/scl/fi/izmzhmj3ktqk3ze6rhjv6/Kaggle_Unlearning_Challenge_Solution.pdf?rlkey=9pqw47izw4nuuthanw3czu1gi&amp;dl=0" target="_blank">https://www.dropbox.com/scl/fi/izmzhmj3ktqk3ze6rhjv6/Kaggle_Unlearning_Challenge_Solution.pdf?rlkey=9pqw47izw4nuuthanw3czu1gi&amp;dl=0</a></p> <p>Our solution code can be found in: <a href="https://www.kaggle.com/code/jaesinahn/forget-set-free-approach-9th-on-private-lb" target="_blank">https://www.kaggle.com/code/jaesinahn/forget-set-free-approach-9th-on-private-lb</a></p>
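<p>To illustrate the cyclic structure described above, here is a rough sketch; the layer-selection pool, the count of 6 layers, the 0.01 sigma and the cycle epochs follow the text, while everything else (function names, loaders, the exact re-initialization) is an illustrative assumption.</p> <pre><code>import random
import torch
import torch.nn.functional as F

def forgetting_phase(model, n_layers=6, sigma=0.01):
    """Layer-wise re-initialization: overwrite a few randomly chosen conv layers with noise."""
    pool = [m for n, m in model.named_modules()
            if isinstance(m, torch.nn.Conv2d) and 'downsample' not in n]  # keep FC and shortcuts
    for m in random.choices(pool, k=n_layers):  # selection with replacement
        with torch.no_grad():
            m.weight.normal_(mean=0.0, std=sigma)

def remembering_step(model, original_model, x, optimizer):
    """Knowledge-preserving step: MSE between the unlearned and the original model's outputs."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), original_model(x).detach())
    loss.backward()
    optimizer.step()

def unlearn(model, original_model, retain_loader, optimizer, device, cycle_epochs=(1, 2, 2)):
    for epochs in cycle_epochs:
        forgetting_phase(model)
        for _ in range(epochs):
            for x, _ in retain_loader:
                remembering_step(model, original_model, x.to(device), optimizer)
</code></pre>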
NeurIPS 2023 - Machine Unlearning
From top 50 to 800th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was definitely a great experience, a bittersweet one because of the final standings but great nonetheless. I have open source the <a href="https://github.com/luispintoc/machine-unlearning" target="_blank">repo</a> with all the different approaches we tried, includes a combination of:</p> <ul> <li>Retrain on retain set</li> <li>Gradient ascent on forget set</li> <li>Add noise</li> <li>Reset layers</li> <li>Pruning</li> <li>KL loss (e.g. SCRUBS)</li> <li>GANs (train a discriminator on differentiating retain/val loss)</li> <li>Contrastive learning</li> <li>Perturb weights guided by the gradient (SSD paper and variations)</li> <li>Reduce retain set to remove images with embeddings closer to forget set images</li> <li>Label poisoning</li> </ul> <p>Moreover, we trained a model from scratch on a downsampled version of CIFAR, with the same distribution as the competition dataset.</p> <p>Our <a href="https://www.kaggle.com/code/luispintoc/pruning-forget-and-retrain" target="_blank">final submission</a> consisted on:</p> <ul> <li>Pruning the model</li> <li>Adding a small noise to the fc</li> <li>Within one epoch, and using warmup: first 10% retrain, then for every forget batch retrain with X batches and finalize the epoch with only retrain.</li> </ul> <p>I believe our mistake was using CIFAR to test our approaches, where we found that using 1 epoch was enough and almost never tried more epochs (now that I see everyone's solutions, I can confirm that more epochs was indeed needed). Most likely the reason 1 epoch was enough for CIFAR, as mentioned in one of the discussion threads, is that the model is already overfitting to the data, which is not the case for the competition dataset (i.e. need for more epochs).</p> <p>PS: We'll be at NeurIPS if anyone wants to chat</p> <p>Ouch… 😞</p>
NeurIPS 2023 - Machine Unlearning
Unlearning solution (3rd rank)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Presentation and general context</h1> <p>Hello! I am Seifeddine Achour, an engineering student specializing in mathematical modeling and data science at ENIT, enrolled simultaneously in a Master's program in image processing and living complexity at Université Paris Cité. I am also the co-founder of Google's machine learning community MLAct.</p> <p>I have conducted many AI research projects; some were academic, others professional, but most grew out of spontaneous questions that came to my mind, which I then worked on just out of passion and interest. One of these questions was: what if I have a large model and I want to let it forget some data while keeping almost the same performance on the other data, without retraining it from scratch, as that may take much time?<br> I mulled over that for some time until I stumbled upon this competition, which allowed me to focus more on it, and I then authored a scientific paper on this topic.</p> <h1>2. Overview</h1> <p>The competition aims to find the model state that maximizes the memory on the wanted data and minimizes it on the unwanted data. In my article, I used a different approach which worked perfectly in the Starting Kit using the CIFAR10 dataset and a ResNet18 model, as shown in the following figure:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9882418%2F0d7473ba58c21828942b871a8bce85cd%2Funlearning.png?generation=1701713408479176&amp;alt=media" alt=""></p> <p>However, in this competition, I encountered some compilation problems that were hard to diagnose, as I did not have access to the error interface.<br> Therefore, I used a different approach, starting from our main problem, which is the heavy computation time related to retraining from scratch. It is based on 2 main axes:</p> <ol> <li><p>Instead of running a forgetting algorithm, which is time-consuming, the algorithm initially confuses the model's vision instantly by slightly changing the state of the convolutional weights while keeping the original information on average, and then trains it on the retain set.</p></li> <li><p>Due to the class imbalance in this dataset specifically, I introduced a new criterion that gives more importance to the majority class “Class0” to minimize outlier decisions.</p></li> </ol> <h1>3. Approach details</h1> <p>As mentioned in the previous section, the problem with retraining from scratch is the heavy computation time. The main reason behind this heaviness is the slow backpropagation of the model state using gradient descent on all parameters, starting from an initial random state. That is why, to overcome this problem, the approach consists of retaining on average the global information existing in the original trained model and just confusing its vision a little, to reconstitute it properly later. This is achieved by deviating the convolutional parameters a bit from their actual state while keeping the old parameters on average, then reinforcing the model's knowledge of the desired data through epochs.
</p> <p>Let's look at the maths behind this: we initially treat the convolutional weights w_1, …, w_n as random variables that follow the Gaussian law N(m_i, σ²), where m_i is the original weight and σ = 0.6. We then continue training on the retain set for 4 epochs using SGD with learning rate 0.0007, momentum 0.9, and a weight decay (L2 regularization term) of 5e-4. Before the final training epoch we apply another light perturbation to the weights to avoid non-differentiable minima in the loss landscape, and then run the last epoch (a minimal code sketch of this procedure is given at the end of this post).</p> <p>The second crucial component to consider while unlearning is the class imbalance present in the dataset, as mentioned in the organizers' article and shown in the following figure:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9882418%2F0d94529afdec56a7b6646b59727bd22c%2FCapture%20dcran%202023-12-03%20151053.png?generation=1701713844326082&amp;alt=media" alt=""><br> [1]</p> <p>To account for this during training, I introduced a customized loss function based on the cross-entropy principle which additionally gives weight 1 to class 0 and 0.05 to the rest of the classes. The loss is represented as follows:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9882418%2Fc01d714daff5f65871244be1502348ce%2FCapture%20dcran%202023-12-04%20191925.png?generation=1701714020731331&amp;alt=media" alt=""></p> <h1>4. Results</h1> <p>The scoring system in this competition is based on the degree of similarity between the retrained model and the unlearned model. The highest score my model obtained on the first 50% of the test set was 0.0968, and the average of all scores of this same algorithm was 0.092. On the second 50% of the test set, it obtained 0.0885 as the final validation score.</p> <p>It is also worth noting that the whole algorithm takes around 6 hours, from the confusion step through the 4 training epochs.</p> <h1>5. Discussion</h1> <p>In this section, I would like to highlight three aspects:</p> <ol> <li><p>Increasing the number of epochs seems to stop helping above a certain threshold. Although it is hard to confirm this without access to the output interface for a proper analysis and interpretation of model performance, I assume it is due to overfitting caused by the class imbalance.</p></li> <li><p>Despite the relative speed of the algorithm I implemented in the competition and its good score using only the retain set, using the forget set can be beneficial, as it adds information that helps the model make the right decision. I confirmed this with the approach I developed theoretically in my article on a regression problem, and used it to run experiments on a classification problem with the starting kit of this competition.</p></li> <li><p>It sounds logical that the retrained model will have the best forgetting performance on the forget set while keeping exactly the same accuracy on the retain set.
However, according to the experiments I have conducted, if we accept a slightly lower retain accuracy we can obtain much better forgetting performance, which can be more beneficial in many applications, as the following figure suggests:</p></li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9882418%2F38d519141cfd3a3441c398c99976897c%2FCapture%20dcran%202023-12-03%20185038.png?generation=1701714211392961&amp;alt=media" alt=""></p> <h1>6. References</h1> <p>[1] NeurIPS Machine Unlearning Competition organizers (August 2023), "Evaluation for the NeurIPS Machine Unlearning Competition".</p> <p>Code link: <a href="http://www.kaggle.com/code/seifachour12/unlearning-solution-4th-rank" target="_blank">www.kaggle.com/code/seifachour12/unlearning-solution-4th-rank</a></p>
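<p>For readers who want to see the mechanics, here is the minimal PyTorch sketch promised above (my illustration, not the author's released code) of the two steps described in the approach: re-sampling the convolutional weights around their original values with Gaussian noise of standard deviation 0.6, then fine-tuning on the retain set with SGD and the class-weighted cross-entropy. The number of classes (10, as in the CIFAR-10 starting-kit experiments) and the data loader are assumptions, and the second light perturbation before the last epoch is omitted for brevity.</p> <pre><code># Sketch only: mean-preserving Gaussian "confusion" of conv weights,
# then fine-tuning on the retain set with a class-weighted cross-entropy.
import torch
import torch.nn as nn

def confuse_conv_weights(model, sigma=0.6):
    """Re-sample each conv weight from N(original_weight, sigma^2)."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                module.weight.add_(torch.randn_like(module.weight) * sigma)
    return model

def unlearn(model, retain_loader, device="cuda", epochs=4, num_classes=10):
    model = confuse_conv_weights(model).to(device)
    # class 0 gets weight 1.0, every other class gets 0.05
    class_weights = torch.tensor([1.0] + [0.05] * (num_classes - 1), device=device)
    criterion = nn.CrossEntropyLoss(weight=class_weights)
    optimizer = torch.optim.SGD(model.parameters(), lr=7e-4,
                                momentum=0.9, weight_decay=5e-4)
    model.train()
    for _ in range(epochs):
        for x, y in retain_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
</code></pre>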
Nomad2018 Predicting Transparent Conductors
17th place solution - deep learning approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Nomad2018 Predicting Transparent Conductors <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all, firstly, congratulations to the winners and thanks to the organizers for such an interesting competition. I had the chance to try many different approaches and learn a lot of new things.</p> <p>My colleague Kareem and I have computer engineering backgrounds and our colleague Mohamed has a bioinformatics background. Since we lacked domain knowledge in materials science, we decided to work with deep learning models.</p> <p>We worked mainly with two models:</p> <ul> <li><p>Voxnet [1]: we divided the space into small, equally sized voxels and constructed a 3D matrix for each material. In each voxel, we counted the number of atoms located in it, then applied a series of 3D convolution and max-pooling layers followed by feed-forward layers. It achieved a CV score of 0.0529.</p></li> <li><p>Pointnet [2]: we described point-wise features (xyz position, one-hot encoding of atom type, molecular weight, count of adjacent atoms and bond valence), then used the PointNet model, which is basically a series of 1D convolution layers followed by max-pooling as an aggregation function and lastly some feed-forward layers. It achieved a CV score of 0.0509.</p></li> </ul> <p>In both models we added the precomputed features provided by the organizers in the last feed-forward layers. For cross-validation, we used 24 folds.</p> <p>Our selected submission is an ensemble of multiple versions of PointNet and Voxnet with different numbers of layers and filters, plus some xgboost models with distance- and position-based features. This submission had a CV score of 0.0464 and a private LB score of 0.0573.</p> <p>At the very last moment we tried pointnet++ [3] and added it to our ensemble, but we didn't select that submission for the private leaderboard. It would have given us a slightly better private leaderboard score (sad :( ).</p> <p>We hope that our approach will be helpful for the research community, and please feel free to ask us any questions about it.</p> <p>Regards.</p> <p>References</p> <p>[1] Maturana, Daniel, and Sebastian Scherer. "Voxnet: A 3d convolutional neural network for real-time object recognition." Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE, 2015.</p> <p>[2] Qi, Charles R., et al. "Pointnet: Deep learning on point sets for 3d classification and segmentation." Proc. Computer Vision and Pattern Recognition (CVPR), IEEE 1.2 (2017): 4.</p> <p>[3] Qi, Charles Ruizhongtai, et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space." Advances in Neural Information Processing Systems. 2017.</p>
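<p>To make the Voxnet-style input concrete, here is a small sketch (an illustration under assumptions, not the authors' code) of how atom positions can be binned into a 3D grid of per-voxel atom counts before the 3D convolutions; the grid size and box length below are placeholders.</p> <pre><code># Sketch: count atoms per voxel to build the 3D input tensor described above.
import numpy as np

def voxelize(positions, grid_size=16, box_length=25.0):
    """positions: (n_atoms, 3) Cartesian coordinates.
    Returns a (grid_size, grid_size, grid_size) array of atom counts per voxel."""
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    # map coordinates to voxel indices, clipping anything outside the assumed box
    idx = np.floor(positions / box_length * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    for i, j, k in idx:
        grid[i, j, k] += 1.0
    return grid

# example: 80 random "atoms" in a 25-unit box
grid = voxelize(np.random.rand(80, 3) * 25.0)
print(grid.shape, grid.sum())  # (16, 16, 16) 80.0
</code></pre>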
Nomad2018 Predicting Transparent Conductors
5th place - Dual GRU ensemble using geometry files
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Nomad2018 Predicting Transparent Conductors <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to share my approach with you. First of all, big thanks to the competition hosts for my first Kaggle experience, and congrats to the winners. I had a lot of fun and learned many new things!</p> <p>My approach was to use an ensemble of five RNNs, each consisting of two parallel GRU cells. I assumed that the intrinsic properties of the unit cell are different for the two energies and therefore require two separate representations. Each RNN output tensor was averaged over time, concatenated and then fed into a three-layer MLP with ReLU activations. I attached a visualization of the network architecture.</p> <p>Each network was trained with MSE as the objective function, the Adam optimizer and gradient clipping. The final predictions were weighted by a linear regression model.</p> <p>As RNNs are very data-hungry, I also used data augmentation to artificially increase the size of the dataset. I rotated the atomic coordinates using rotation matrices with different angles in the range of 45° to 315°. This was driven by the assumption that the intrinsic properties are invariant to such linear transformations. The best results were obtained with 7 equidistant angles. I also found that too much data augmentation performed worse. This approach led to a significant improvement of ~0.01 in CV and LB score.</p> <p>Thank you and have a nice weekend!</p> <p>Edit:</p> <p>Training the networks took approx. 1 - 1.5 hours. This is due to the fast Keras CuDNNGRU implementation.</p> <p>The elements as one-hot-encoded vectors were concatenated to the corresponding position vectors. Lattice vectors were concatenated in front of the atomic position matrix and the resulting matrix was zero-padded to a shape of 80 x 7. I also tried to embed the elements, but it did worse than one-hot encoding.</p> <p>Parameters</p> <pre><code>Initial learning rate: 0.01
Learning rate decay: 6.668e-5
Epochs: 15
Gradient Norm Clipping: 1.5
</code></pre> <p>Architecture</p> <pre><code>CuDNNGRU 1/2: 256 cells
Dense 1: 128 neurons
Dense 2: 90 neurons
</code></pre> <p>Thank you :)</p> <p>I trained a recurrent denoising autoencoder to extract features and combined them with the ones that were given, but saw no significant increase in my CV/LB back then.</p> <p>Unfortunately I did not include them in the MLP stage, which maybe would have helped to generalize more. I did include the lattice vectors though, by concatenating them to the beginning of the position-element matrix.</p> <p>I edited my post and added the parameters of the training and architecture.</p> <p>Indeed, training RNNs is painfully slow, but fortunately there are fast GPU versions of LSTM and GRU implemented in TensorFlow and Keras. In general, adaptive optimizers like Adam combined with a learning rate schedule and gradient norm clipping stabilize the training procedure.
I found this blog post very helpful:</p> <p><a href="https://danijar.com/tips-for-training-recurrent-neural-networks/">https://danijar.com/tips-for-training-recurrent-neural-networks/</a></p> <p>The elements as one-hot-encoded vectors were concatenated to the corresponding position vectors and the resulting matrix was zero-padded to a shape of 80 x 7. I also tried to embed the elements, but it did worse than one-hot encoding.</p> <p>Thank you :)</p> <p>I will share my code in the upcoming days as I am quite busy with exams right now.</p>
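<p>As an illustration of the rotation-based augmentation described above, here is a tiny NumPy sketch; rotating about the z-axis and the exact angle sampling are assumptions on my part, not the author's exact code.</p> <pre><code># Sketch: generate rotated copies of the atomic coordinates for data augmentation.
import numpy as np

def rotation_z(angle_deg):
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def augment(positions, n_angles=7):
    """positions: (n_atoms, 3). Returns rotated copies at 7 equidistant angles in 45-315 deg."""
    angles = np.linspace(45.0, 315.0, n_angles)
    return [positions @ rotation_z(a).T for a in angles]

rotated = augment(np.random.rand(80, 3))
print(len(rotated), rotated[0].shape)  # 7 (80, 3)
</code></pre>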
Nomad2018 Predicting Transparent Conductors
9th place solution with geometric features
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Nomad2018 Predicting Transparent Conductors <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As a physicist by training, I of course had to participate in this competition. But it turned out that detailed knowledge of physics was not necessary at all, just some basic 3 dimensional geometry.</p> <p>What didn't work: </p> <p>First I tried to actually calculate the band gaps and the formation energies with density functional theory (using the ase and gpaw python packages). Because of limited resources, I could only spend a few minutes of CPU time on each crystal. So in the end the accuracy of the solutions was not good enough, although there was a clear correlation between the results and the target variables.</p> <p>What did work:</p> <p>My idea then was to construct features which better describe the geometry of the crystals in a coordinate system independent way. So I first made 80 new features that encode the radial density distributions of the atoms as described in <a href="https://arxiv.org/abs/1307.1266">https://arxiv.org/abs/1307.1266</a> . However, this approach throws away all information about angles. So I made 80 additional features that encode the angular distributions of the atoms that are closest to each other.</p> <p>For the machine learning model I used xgboost. I noticed that the log error for the formation energy was always much lower than for the band gap. So I first predicted the formation energies, and used the predictions as an additional feature when predicting the band gap.</p> <p>My local CV score was 0.049, and my public leaderboard score was 0.0515. I was only at the 253rd position on the public data, so I was very pleasantly surprised that I finished on 9th place in the private leaderboard! I guess that the 100 test data points for the public leaderboard data were really not enough for a reliable estimate of the quality of the model, and I got a bit unlucky there.</p> <p>Anyway, it was a fun competition, but too bad that there are no medals awarded...</p>
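<p>The two-stage idea above (predict the formation energy first, then feed that prediction as an extra feature when predicting the band gap) can be sketched as follows. This is only an illustration with placeholder data shapes and hyperparameters; in practice out-of-fold predictions should be used for the second stage to avoid leakage.</p> <pre><code># Sketch of the two-stage xgboost setup described above.
import numpy as np
import xgboost as xgb

X = np.random.rand(2400, 160)   # placeholder for the 80 radial + 80 angular features
y_fe = np.random.rand(2400)     # formation-energy target
y_bg = np.random.rand(2400)     # band-gap target

# stage 1: formation energy
model_fe = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
model_fe.fit(X, y_fe)

# stage 2: band gap, with the (ideally out-of-fold) formation-energy prediction appended
X_bg = np.hstack([X, model_fe.predict(X).reshape(-1, 1)])
model_bg = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
model_bg.fit(X_bg, y_bg)
</code></pre>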
Nomad2018 Predicting Transparent Conductors
< 0.062 solution and other things.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Nomad2018 Predicting Transparent Conductors <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used the 25th, 50th and 75th percentiles at first, and it worked pretty well from the start. The expansion part (periodic boundary) held me up for a little while because I didn't know how to expand the positions correctly. I ended up writing Python scripts in Blender to visualize the atoms and make sure I had implemented the expansion correctly. Actually the periodic treatment does not give a much better result, but I use it anyway because it is much nicer to work with corrected distances. I also released the code so that maybe someone can look at it and point out errors (it looks correct so far).</p> <p>As for the NN part, one of my submissions got 0.541 using Catboost + features from the LSTM encoder that I posted here. And I would agree that with more data the features from the LSTM encoder would be more useful.</p> <p>Hi everyone, now that the competition is over I want to make some of my rough code and ideas available for future competitions, so here's how I got &lt; 0.062.</p> <p>The data available to us contains a csv file of important features such as spacegroup, number of atoms, percent of each atom, etc. These features can easily be used to achieve &lt; 0.07 private with Catboost. To improve my model I thought about the geometry data and what I could do with it. First I generated pairwise distances between atoms and used features like percentiles and counts for each atom pair. The idea is that bond distances should be correlated with formation energy. For example, if there is 1 Ga atom and 16 O atoms, I extract features such as the 25th percentile of those distances. Later I used periodic boundary conditions to duplicate atoms in the neighbouring directions to get more accurate atom distances. This is the main thing I did; in fact the final submission only uses these features.</p> <p>Another thing I was interested in from the start of the competition was using an LSTM to read the geometry data and map it to a space of around 50 dimensions, then adding those features to Catboost. It turned out to do something and nothing at the same time: looking at the submissions, it lowered the error for some of them and not for others. My hypothesis is that for some instances (unit cells) the loss surface is sharp, so the representation we pull out might not be good, causing the worse score (I did use SGD instead of the Adam optimizer for several rounds at the end; it seemed to work, but I have no idea if it really did).</p> <p>There was also another failed attempt where I tried using a CNN on rendered pictures of the unit cell. It actually did work on its own but had a pretty bad score, around 0.08 or so. The LSTM on its own achieved only about 0.07 or a bit worse using the geometry data alone.</p> <p>In conclusion, this competition was interesting to me because searching for new materials is always interesting, but the sample size is relatively low and most of the loss comes from a few outliers.
I plotted the validation loss on my train set; something like the top-20 errors contribute a lot to the final score, so it is unfortunate that luck is a pretty big factor in this competition.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/283945/8583/lstm_p_recon.png" alt="lstm_dist_3d"> LSTM: actual atomic positions and the reconstructed positions.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/283945/8582/lstm_dist.png" alt="lstm_diff"></p> <p>Difference between the actual atomic positions and the reconstructed positions.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/283945/8584/16.png" alt="rendered_example"></p> <p>Rendered example of unit cells used for the CNN.</p>
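<p>For reference, here is a rough sketch (assumptions only, not the author's released code) of the periodic-image pairwise-distance features described in this write-up: atoms are replicated into the 26 neighbouring images of the unit cell, and percentiles of the distances for a given atom-type pair become features.</p> <pre><code># Sketch: periodic expansion of atom positions and per-pair distance percentiles.
import itertools
import numpy as np
from scipy.spatial.distance import cdist

def periodic_images(frac_coords, species, lattice):
    """frac_coords: (n, 3) fractional coords; lattice: (3, 3) lattice vectors as rows."""
    shifts = np.array(list(itertools.product([-1, 0, 1], repeat=3)))
    frac_all = (frac_coords[None, :, :] + shifts[:, None, :]).reshape(-1, 3)
    cart_all = frac_all @ lattice                  # Cartesian coords of all 27 images
    species_all = np.tile(np.asarray(species), len(shifts))
    return cart_all, species_all

def pair_distance_features(frac_coords, species, lattice,
                           pair=("Ga", "O"), percentiles=(25, 50, 75)):
    cart = frac_coords @ lattice                   # atoms of the original cell only
    cart_all, species_all = periodic_images(frac_coords, species, lattice)
    d = cdist(cart[np.asarray(species) == pair[0]],
              cart_all[species_all == pair[1]]).ravel()
    d = d[d &gt; 1e-8]                             # drop exact self-distances
    return np.percentile(d, percentiles)
</code></pre>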
Nomad2018 Predicting Transparent Conductors
Sharing my approach, 20th place solution -- Local 0.0487, Public 0.0498, Private 0.0582
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Nomad2018 Predicting Transparent Conductors <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Link edited in the header and working.</p> <p>Hi Aaron, my round parameter was about a thousand. I am pretty sure many features were highly correlated, especially the elemental properties, which were essentially different linear combinations of each other. Regarding the curse of dimensionality, I tried removing the least useful features, but this resulted in lower performance. PCA features also did poorly.</p> <p>Sorry for being unclear, this was from observations of my submissions. I compared local CV, public and private leaderboard scores, and I found that the local CV did not correlate well with the public and private leaderboard scores, nor the public with the private scores.</p> <p>Dear all, I am sharing my best solution according to CV scores, which is a single xgboost model. In retrospect, I found that the public leaderboard and private leaderboard correlate pretty well, and one reason is that the test set is all drawn from a similar distribution -- the training set is drawn from a slightly different distribution (relatively speaking, compared to the test set).</p> <p>Now for my approach ....</p> <p>code : <a href="https://github.com/leexa90/Projects/tree/master/Materials_semiconducotor">https://github.com/leexa90/Projects/tree/master/Materials_semiconducotor</a></p> <p>Feature engineering - with a total of 350 features:</p> <ul> <li>Crystal lattice properties (volume, angles)</li> <li>Elemental properties</li> <li>Bond lengths</li> <li>Bond/dihedral angles</li> <li>Forces and energies (van der Waals and Ewald)</li> <li>The mean, median, interquartile range and discretized distribution of the features above</li> <li>A linear estimate using the above parameters</li> </ul> <p>Training</p> <p>Heavy subsampling of columns (~20%) and data points (~40%) was used, since it was found to improve performance. Intuitively, the linear estimate gets the 'main trend' correct, and the other 350 features provide incremental improvements in the gradient boosting algorithm. Since there were so many highly correlated features, subsampling at 20% gives each feature a "chance" of being selected. The linear estimate failed to converge for some samples, but the xgboost model takes care of that.</p> <p>Predictions were made using 20-fold CV, with 1 validation and 1 test set.</p> <p>Results</p> <ul> <li>CV error: 0.0487</li> <li>Public LB score: 0.0498</li> <li>Private LB score: ~0.058</li> </ul>
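<p>A minimal sketch of the heavily subsampled xgboost setup described above might look like the following; apart from the stated ~20% column and ~40% row subsampling and the ~1000 rounds, the hyperparameters and data shapes are placeholders rather than the author's exact configuration.</p> <pre><code># Sketch: 20-fold CV with aggressive column/row subsampling in xgboost.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

X = np.random.rand(2400, 350)   # placeholder for the ~350 engineered features
y = np.random.rand(2400)        # placeholder target

params = dict(objective="reg:squarederror", eta=0.02,
              colsample_bytree=0.2,  # ~20% of columns per tree
              subsample=0.4,         # ~40% of rows per tree
              max_depth=6)

oof = np.zeros(len(y))
for tr_idx, va_idx in KFold(n_splits=20, shuffle=True, random_state=0).split(X):
    dtrain = xgb.DMatrix(X[tr_idx], label=y[tr_idx])
    dvalid = xgb.DMatrix(X[va_idx], label=y[va_idx])
    booster = xgb.train(params, dtrain, num_boost_round=1000,
                        evals=[(dvalid, "valid")],
                        early_stopping_rounds=50, verbose_eval=False)
    oof[va_idx] = booster.predict(dvalid)
</code></pre>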
Novozymes Enzyme Stability Prediction
11th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Good to have you back. You should do your annual physical right now - you will probably never have a higher red blood cell count!!</p> <p>Sure thing. Congratulations on your gold medal!</p> <p>I want to start by expressing my admiration for the many competitors who figured out how to navigate a terrain that was completely foreign to you. More importantly, you were so gracious to share those insights with us. It takes a long time to figure out the domain knowledge needed for protein modeling, using protein language models, and performing energy and molecular dynamics calculations. Yet many of you did it and made it easier for us to have a meaningful and productive competition. Thank you!</p> <p>You will have heard this in other write-ups by now, but I will still repeat it: ensembling the solutions is the way to go.</p> <p>I started this competition on a lazy note, mostly because it was not obvious how the train dataset they provided could be used without a major time investment. There were several very nice solutions based on test data alone, so I went along with that. Like everyone else, I was figuring out which of the molecular quantities we can calculate from test data are most informative, and how to blend them to get the best score. Along the way I kept warning that pLDDT values and B-factors are not really predictive of thermal stability, yet there was enough evidence they worked fine for this protein. At some point in the middle of October, as I fortuitously sat in 3rd place, it occurred to me that I needed to decide: focus on getting the best result on this nuclease, or try to make the best general model that will hopefully work on this protein. It is difficult to give your best effort until you fully commit, so I went with the latter option.</p> <p>I knew already that Rosetta's energy function worked well, but had never used FoldX before for the purpose of mutagenesis. It turns out that FoldX scores alone on test data got me to ~0.47 on public LB. I used Rosetta's lesser-known programs <code>ddg_monomer</code> and <code>pmut_scan</code>, which are designed to score mutants. <code>pmut_scan</code> alone scored 0.499 on public LB, which convinced me that it is imperative to use these programs and their physics-based energy functions. In parallel, I started building a train dataset and doing full-scale AlphaFold2 modeling of all wild-type proteins and mutants. In the end the total was 2335 test mutants and 2964 train mutants, of which the last 300 PDB models or so are yet to be completed. That has kept my GPU under a constant workload since early November. It was relatively easy to get dTm values for the first 1000-1500 train mutants that had a good match between residue numbers and were measured at a single pH value. It became difficult to do it in the second half, as I was manually verifying that protein residues matched PDB residues, had to find which pH value was physiologically relevant before including them, and so forth. It took until mid-December to complete the train dataset (except for AlphaFold2 modeling), but eventually I had 2964 curated mutants for training.
I extracted ~50 features from PDB files, calculated FoldX, ddg_monomer and pmut_scan scores, did ESM2 and ProtT5 embeddings and subtracted mutant values from the wild-type. Finally, I symmetrized the dataset by flipping the sign of dTm and all other quantities. This was now a tabular dataset that could be modeled using any ML approach, and I chose LightGBM for speed and TabNet for the diversity of its models. When this dataset was stratified by dTm and divided 75:25 train:test, I got a ~0.6 10-fold CV and a 0.65 Spearman correlation coefficient on test data. It seemed to work, and I think it will work even better for predicting ddG as opposed to dTm. I will paste below the relative importance of features as determined by SHAP from the LightGBM trees. You will see that energy-based features (FoldX, ddg_monomer and pmut_scan scores) were most important, followed by language embeddings expressed as PCA components. Structural features extracted from PDB files worked relatively well on their own (~0.5 10-fold CV), but when they were in a big feature set very few of them made it into the top 50 features by importance.</p> <p>Now it was time to test this model on our nuclease and – womp, womp - it didn't work very well. The best LightGBM models were scoring 0.47-0.5 on public LB, while the best TabNet models were in the 0.45-0.48 range. All of them had considerably higher CV scores and behaved well when I set a general test dataset aside, but that didn't translate to Novozyme's protein. Eventually I blended 5 LightGBM models and 2 TabNet models, which scored 0.531 on public LB and 0.511 on private LB. That was a good score given our assumptions at the start of this competition, but it wouldn't have been good enough for any medal.</p> <p>In the last two weeks I went back to my test-based models, and decided to blend them with my LGB and TabNet models from general training. Since I didn't have out-of-fold data here to guide the blending process, that had to be done again using only public LB scores as a guideline. This always involves luck. In the end I chose an equal blend of the publicly available ThermoNet V3 (I think it scored ~0.59), my pmut_scan model (0.499), a slightly different SASA model from the one I published (0.453), a slightly different depth model from the one I published (0.42), two LGB models from general training that scored 0.505 and 0.5, and a single TabNet model that scored 0.477. That blend had 0.604 on public LB and 0.529 on private, which was in the end my best submitted score. There were &gt;20 blends with either 2 or 3 TabNet models that scored in the high 0.53s on private LB, but I didn't select any of them because of their lower public LB scores.</p> <p>There were many other things that didn't work, and several for which I am not sure yet whether they work. I will add them later to this post or write a separate topic.</p> <p>Thanks again to all of you for your support and kind words throughout the competition and especially in the past two days. I always learn something from each competition, and this one was no different.</p> <p>PS A final colorful look at our protein is <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/376393" target="_blank"><strong>here</strong></a>.</p> <p><img src="https://i.ibb.co/ThCvhwz/LGB-feature-importance-SHAP-bar-03-v3z.png" alt="Feature importance"></p> <p>Thank you, and congratulations on your shared gold medal.
You and Darien explored all kinds of interesting avenues.</p> <p>In the end I caved a bit because one of my final submissions was a mix of public models, my own test-based models and full training results. But really, all of you guys near the top are so good at this that it wouldn't have surprised me if one of your scores was a really good general solution.</p> <p>Thank you for your kindness. In a long competition there are always many things that didn't work, and I may put a summary together with an invitation to others to share their ideas.</p> <p>I appreciate your support, and I am happy to hear that many benefited from my participation. I learned quite a bit from others as well.</p> <p>If we take my best model trained on a general protein dataset as some kind of baseline (0.511 on private LB), the improvement from using the nuclease-specific data is on the order of 0.06-0.08 in Spearman coefficient. That is not Earth-shattering, but it should help Novozyme. Beyond the improved model, they will have so many ideas shared on this forum to guide their own ML efforts. They could never buy that kind of brainstorming power in a conventional setting for $25 K.</p>
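<p>As a rough illustration of the tabular pipeline described above (difference features per mutant, sign-flip symmetrization, LightGBM with 10-fold CV scored by Spearman correlation), here is a minimal sketch with placeholder data; it is not the author's code and the feature count and hyperparameters are assumptions.</p> <pre><code># Sketch: symmetrise the mutant dataset and fit LightGBM with 10-fold CV.
import numpy as np
import pandas as pd
import lightgbm as lgb
from scipy.stats import spearmanr
from sklearn.model_selection import KFold

# df: one row per mutant; feature columns are (mutant - wildtype) differences
n, n_feat = 2964, 60
df = pd.DataFrame(np.random.randn(n, n_feat),
                  columns=[f"diff_{i}" for i in range(n_feat)])
df["dTm"] = np.random.randn(n)

# symmetrise: the reverse mutation gets the opposite sign everywhere
df_sym = pd.concat([df, -df], ignore_index=True)

X, y = df_sym.drop(columns="dTm"), df_sym["dTm"]
oof = np.zeros(len(y))
for tr, va in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = lgb.LGBMRegressor(n_estimators=2000, learning_rate=0.02)
    model.fit(X.iloc[tr], y.iloc[tr],
              eval_set=[(X.iloc[va], y.iloc[va])],
              callbacks=[lgb.early_stopping(100, verbose=False)])
    oof[va] = model.predict(X.iloc[va])
print("CV Spearman:", spearmanr(oof, y).correlation)
</code></pre>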
Novozymes Enzyme Stability Prediction
1st Place Public - Shakedown to 967th Place Private - Hahaha
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Andre!</p> <p>Hi. What other Kaggle competition are you referring to?</p> <p>My final solution is a simple weighted ensemble. The reason for my big drop is that I normalized the <code>submission.csv</code> file with <code>sub.tm = (sub.tm - mean) / std</code> before ensembling. Apparently this worked for the public LB but did not generalize to the private LB. We needed to rank submissions before ensembling with <code>sub.tm = rankdata( sub.tm )</code>. See the difference in scores below:<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/hc_met2.png" alt=""></p> <p>Interesting. I will try min max now and see how that works.</p> <blockquote> <p>X=(X-Xmin)/(Xmax-Xmin) and ranked them.</p> </blockquote> <p>Did you only use min max? Or did you do min max and then rank with <code>sub.tm = rankdata( (sub.tm - min)/(max-min) )</code>?</p> <p>Novo did not publish. Furthermore, we have not heard any concluding remarks from either Kaggle or Novo about this competition. It would be nice to hear a few words of conclusion.</p> <p>Intuitively I also believe RF is the best. In general, ensembles of decision trees (i.e. RF and GBT) are powerhouses in ML/DL. They model interactions well and we can control overfitting with bagging (both samples and features) and tree depth.</p> <p>I mainly did the CV experiments for my own sanity. I can now say that I had no reason to pick RF (based on local experiments) and can feel less regretful for not selecting it 😃</p> <p>But of course in retrospect it seems silly of me not to pick one hill climbing and one random forest for my final submissions. Also I should have picked one final sub using my 27 strong features and one using all 41 of my features. So hill climb w/ 27. And random forest w/ 41. But of course that is classic hindsight thinking 😃</p> <p><a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> I did some more experiments with GroupKFold (and my public test labels). (1) I used the residual number for GroupKFold. Then (2) I used <code>(residual - 16)//10</code> and (3) <code>(residual - 16)//20</code>. These latter two keep more consecutive residuals together. I did 5 folds. Then I computed a CV score based on Spearman correlation with my LB 860 sub (then multiplied by 0.86 to estimate the LB score):</p> <ul> <li>(1) 580, 601, 615 for Random Forest, Hill Climb positive, SVR</li> <li>(2) 539, 555, 502 for RF, HC, SVR</li> <li>(3) 560, 556, 531 for RF, HC, SVR</li> </ul> <p>So this method of GroupKFold does not discover that random forest would do so well on private LB. Also it picks <code>C=500</code> as the best parameter for SVR, when C=500 gets private LB 490 and C=80 gets private LB 535. It does find <code>max_depth=4</code> for random forest. All in all, this local validation doesn't seem helpful for choosing the best final submission.</p> <p>Using this CV scheme also says that using only my 27 strong features with LB 300+ is slightly better (+005) than using all features LB 200+. I think this is because of leakage.
In actuality using all features LB 200+ does better on private LB like (+020)</p> <p>Yes, <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> . As you can imagine, i have also submitted many variations to see "what if". No matter what i do it gets 1st place 😓</p> <p>I have submitted different seeds. I submit with max depth 3,4,5,6,7, or 8. I submit with all my features/models over LB 150+ over LB 200+ over LB 250+. No matter what i do with random forest it wins 1st place 😓 So in retrospect, i didn't need any validation. I just needed to submit it 😄</p> <p>To me, this indicates that I failed to understand the power of random forest to work well. Also random forest doesn't care how you preprocess the data! </p> <p>UPDATE: Corrected a bug (i.e. poor preprocess choice) in my hill climbing code. My final private LB would be 0.520 with hill climbing and private LB rank would be 198th. This explains part of the reason for the huge difference between hill climbing and random forest. I think the remainder of difference is because </p> <ul> <li>my random forest sub uses 41 features instead of 27 like hill climb</li> <li>random forest learns interactions</li> <li>random forest overfits less because of bagging samples and features</li> </ul> <p>I am making a notebook now which has all 41 of my single models and my LB 865 sub. (And trains and submits both hill climbing and random forest). This will help others run experiments and compare the 41 different ways to predict stability.</p> <p>My new notebook will show how it works</p> <h2>UPDATE</h2> <p>I published my final solution <a href="https://www.kaggle.com/code/cdeotte/public-lb-1st-place-solution/notebook" target="_blank">here</a>. It shows the hill climbing with faulty preprocess that i selected. It shows how hill climbing could achieve private 0.525 with correct preprocess and it shows how random forest can achieve private 0.560. It also has a dataset containing my 41 features/models plus my LB 865 sub if others want to experiment.</p> <p>I did not share my LB probe code at this time because i want to use it to get a gold in a future competition. Once it succeeds, i will share the algorithm. </p> <p><a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> I shared my LB 865 submission and 41 best single model submissions in a dataset attached to my new notebook</p> <p>Hi Michael. It was difficult to validate, optimize, and select the best model in this competition. Kaggle did not provide train data (that was similar to test data) and it was difficult to find data online to create a local validation. Furthermore the public and private leaderboard are both small. Each have 1000 targets related to only 100 residuals (with public private being different data distributions, i.e. different parts of an enzyme). It was a "don't overfit" comp without any local validation 😃</p> <p>I'm reviewing my notes and analyzing private LB to understand what the best way to generalize from public to private is in this comp. I have no conclusions yet but i was surprised to see random forest generalize better than hill climbing. But this may have been luck. That's one thing i'm reviewing. </p> <p>It was also strange that hill climbing with positive weights achieved same private LB as hill climbing with positive and negative weights when both achieved different public LB. 
It looks like models began overfitting as early as LB 0.500, which was about 1.5 months ago, haha.</p> <p>You were correct <a href="https://www.kaggle.com/hdynamics" target="_blank">@hdynamics</a>, we were overfitting. I knew this but I didn't realize we were overfitting as much as we were. I thought the top private LB would be around LB 0.600. But the top private ended up around 0.550. Had we known this, we probably should have focused on models with lower public LB but more generalizability.</p> <p><a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> I found it by accident. When you use the ddgun server to compute both 3d and seq at the same time <a href="https://folding.biofold.org/ddgun/index.html" target="_blank">here</a>, it returns a column named <code>ddgun_RSA[WT]</code>. Then when you use the imut server to compute 3d <a href="https://folding.biofold.org/i-mutant+/index.html" target="_blank">here</a>, it returns 2 columns: one for the mutant <code>imut2_RSA[MUT]</code> and one for the wildtype <code>imut2_RSA[WT]</code>. And lastly the file that RopeOnMars shared <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">here</a> has a column named <code>ratio</code>, which is the wildtype RSASA.</p> <p><a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> When we use the ddgun server, it returns a column named <code>S_BL</code>, which the ddgun website <a href="https://folding.biofold.org/ddgun/help.html" target="_blank">here</a> describes as </p> <blockquote> <p>scores calculated by Blosum62</p> </blockquote> <p>However this column is the result of a continuous asymmetric substitution and scores public LB 207 and private LB 315. What is this, and how come it is not the same as the normal discrete symmetric Blosum62 substitution matrix (which scores much lower)?</p> <p>Most of my ideas originated from something someone shared in a discussion or notebook. When I start reading about how something works, I realize there are more ways to utilize that something. For example, if we learn how DeMask works, it is actually a linear combination of 3 things; then I learn about each of the 3 things.</p> <p>One of the things is an asymmetric substitution matrix, so now we can explore other substitution matrices like BLOSUM. One of the things is an entropy measurement, so now we can explore other entropies like ESM entropy, etc. So from reading about one thing, there are suggestions and paths to other things. The number of roads to explore is endless. I have never been in a comp where I ran out of ideas to investigate.</p> <p>Also I enjoy exploring and learning, so I don't get tired of trying ideas every day. It's interesting and exciting.</p> <p>Thanks <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>, I appreciate everything that you shared. Your sharing, along with others', made this a fun biology learning experience. Congratulations on achieving a solo gold!</p> <p>It was fun reading all the discussion posts where Kagglers were guessing what I was doing. It was hard for me not to respond with insight and/or jokes.</p> <p>Here's my public LB 865 submission - <a href="https://www.kaggle.com/datasets/cdeotte/novozyme-public-lb-865-submission" target="_blank">link</a>.</p> <p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, my original algorithm attempted to predict private test in a meaningful way. But later versions of my algorithm did not.
None-the-less, my algorithm still accepts full length submissions of length 2413 as input and outputs full length submissions of 2413 as output. In my latest version output, public test rows are optimized and private test rows are garbage.</p> <p>I updated my original post and removed "but of course the private LB was a low LB 266." I agree that is confusing.</p> <p>Thanks for all your sharing <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> . You made this competition more enjoyable. This was an awesome group competition, with many great biology discussions from many domain experts.</p> <blockquote> <p>How did you do your CV</p> </blockquote> <p>I half heartedly used a Group 2 Fold. I split the public test data into first half of residuals and second half (to mimic public private split). I tried out a few models including hill climbing, linear regression, quadratic regression, lasso, ridge, svr, random forest. But i never fully trusted my "CV" because i didn't know whether this part of enzyme would generalize to private and I believed that my CV was leaky because my features were selected because we already know they do well on both of my "2 folds".</p> <p>I am now comparing my Group 2 Fold CV score with actual private LB and I see that it was misleading. My group 2 fold, unselected many surface area features. And my group 2 fold showed hill climbing and random forest as equal. So my group 2 fold didn't help me select the best final model. In retrospect, I would have been better to ignore validation completely and just submit one hill climbing and one random forest. And just tune both to have public LB 640.</p> <p>Interesting. This is another mystery :-) There are many mysteries in this competition about what really helped and what didn't.</p> <p><a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> </p> <blockquote> <p>Just gonna ask one more time if you're planning to share your exact algorithm(s) for the LB probing?</p> </blockquote> <p>I don't plan to share it yet but I will most likely share it after I use it to achieve gold in a future comp. </p> <p>My code also works reasonably well when the public/private split is unknown. For example in this comp, i think it still gets around public LB 0.700 without using public/private split information.</p> <p>It took me a few weeks to discover and write my algorithm during this comp. And this was my first time using it. I believe my algorithm can be adapted to all Kaggle comps regardless of competition metric! </p> <p>Wow, big shakedown. It hurts! Congratulations to all teams that did well on private leaderboard. Impressive!</p> <p>I spent a lot of time on this competition, so I will share my approach. I did have an unselected submission that would have won 1st place private so I believe there are things to learn from my solution.</p> <h1>Solution Summary</h1> <p>Like everyone, I struggled to create a local validation. Since the test data was one enzyme, I decided to use public test as my validation. So my strategy was as follows:</p> <ul> <li>Generate as many strong single model/features as possible</li> <li>Probe LB to acquire the public test labels via mathematical optimization</li> <li>Train one model using my single models as features and public test labels as targets</li> </ul> <p>This approach worked well. But i had no way to know which trained model was best. 
For my final two submissions, I chose to use (1) hill climbing with positive weights and (2) hill climbing with positive and negative weights, because I felt these two models would not overfit and would do well on private. Their public LBs were 609 and 645 respectively, which seemed good.</p> <p>During the competition, I also trained (3) a random forest with sklearn default settings, but I suspected it was overfitting because RF allows for non-linearities and interactions. In retrospect, the random forest was the best and achieved private LB 546, first place.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/best.png" alt=""></p> <p>Another reason I didn't choose RF was that its public LB was 0.829, which also suggested overfitting. After the competition ended, I changed the random forest hyperparameters to <code>depth=4</code> and achieved public LB 655 and private LB 558! 🔥</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/best2.png" alt=""></p> <h1>Probe LB</h1> <p>Since the Kaggle LB returns an error if all of the public predictions are zeros (and this does not count against the submission quota), we can determine the exact public/private split using only 2 quota submissions. We slowly move where our non-zero predictions are.</p> <blockquote> <p>Public LB data is exactly <code>df.iloc[:541]</code> and <code>df.iloc[1757:]</code> and <code>df.iloc[1169]</code>. Public LB is residuals 16 thru 123 inclusive plus wildtype, and private LB is residuals 124 thru 221 inclusive. So the public LB is the first half of residuals while the private LB is the second half of the residuals.</p> </blockquote> <p>Once we know the public split exactly, it is a mathematical optimization problem to maximize public LB. Download all 350 public submissions from all versions of public notebooks. Then find <code>X</code> such that the correlation of <code>X</code> with each of the 350 submissions equals their LB score. Next create 5 new submissions each day where the <code>submission.csv</code> values are chosen to acquire the most information where your optimization is least confident. I ran my optimization algorithm 24 hours a day for 3 months; each day I uploaded my new 5 submissions and any new public notebooks to my optimization algorithm. My optimization algorithm achieved public LB 865.</p> <h1>Build Single Model/Features</h1> <p>While my optimization algorithm was running, I set out to discover new features and/or single models. I searched the internet, current research, etc. For each row of test data, compute the feature (or infer the model), submit, and observe the LB score.</p> <h1>The Best Feature</h1> <p>The strongest feature on private LB was RSASA. No public notebooks or discussions mentioned it. This is <strong>relative</strong> solvent accessible surface area. Robert posted a notebook <a href="https://www.kaggle.com/code/roberthatch/nesp-alphafold-getarea-exploration" target="_blank">here</a> which submits SASA to LB and achieves public LB 407 and private LB 466. (Also RopeOnMars published SASA <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">here</a>).</p> <p>SASA is the surface area of an amino acid exposed to the outside (i.e. the solvent) of the enzyme. RSASA is the relative surface area (i.e. 50%, 75%, etc. of the amino acid is exposed). This single feature (it's not even a model, it's just a single amino acid property, though it does require the PDB structure) achieves public LB 442 and private LB 496.
</p> <h1>Simplest 3rd Place Gold Medal Solution</h1> <p>The simplest gold medal solution is to blend 50% Rosetta energy scores with 50% RSASA (25% wildtype RSASA and 25% mutation RSASA). This simple model achieves public LB 556 private LB 542! <strong>3rd Place</strong> 🔥</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/best3.png" alt=""></p> <h1>Strong Single Models/Features (Public LB 300+)</h1> <p>Below is a list of all my single models and features which achieved LB 300+ on their own. Many of these were made public in notebooks and discussion. Note that the LB may be better than you expect. This is because i have corrected the <code>delete mutation</code> test rows as explained below.</p> <table> <thead> <tr> <th>Model</th> <th>Public LB</th> <th>Private LB</th> <th>links</th> </tr> </thead> <tbody> <tr> <td>Thermonet v2</td> <td>495</td> <td>460</td> <td><a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">link</a></td> </tr> <tr> <td>Rosetta Energy</td> <td>471</td> <td>438</td> <td><a href="https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores" target="_blank">link</a></td> </tr> <tr> <td>RASP</td> <td>454</td> <td>416</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/368270" target="_blank">link</a> <a href="https://www.kaggle.com/code/sgreiner/novo-esp-rasp-baseline" target="_blank">link</a></td> </tr> <tr> <td>RSASA wt</td> <td>449</td> <td>484</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a> <a href="https://folding.biofold.org/i-mutant+/index.html" target="_blank">link</a></td> </tr> <tr> <td>RSASA mut</td> <td>428</td> <td>477</td> <td><a href="https://www.kaggle.com/code/homofaberus/novozymes-data-enrichement" target="_blank">link</a></td> </tr> <tr> <td>SASA wt+mut</td> <td>420</td> <td>487</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a></td> </tr> <tr> <td>RMSD</td> <td>410</td> <td>350</td> <td><a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">link</a></td> </tr> <tr> <td>XGB v2</td> <td>408</td> <td>394</td> <td><a href="https://www.kaggle.com/code/cdeotte/xgboost-5000-mutations-200-pdb-files-lb-0-410" target="_blank">link</a></td> </tr> <tr> <td>SASA wt</td> <td>407</td> <td>466</td> <td><a href="https://www.kaggle.com/code/roberthatch/nesp-alphafold-getarea-exploration" target="_blank">link</a> <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a></td> </tr> <tr> <td>XGB v1</td> <td>397</td> <td>400</td> <td><a href="https://www.kaggle.com/code/cdeotte/xgboost-5000-mutations-200-pdb-files-lb-0-410" target="_blank">link</a></td> </tr> <tr> <td>SA apolar</td> <td>395</td> <td>406</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a></td> </tr> <tr> <td>Thermonet v1</td> <td>386</td> <td>428</td> <td><a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet" target="_blank">link</a></td> </tr> <tr> <td>SA sidechain</td> <td>377</td> <td>342</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a></td> </tr> <tr> <td>deepDDG</td> <td>364</td> <td>429</td> <td><a 
href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/354631" target="_blank">link</a> <a href="https://www.kaggle.com/code/hengck23/lb0-335-deepdgg-server-benchmark" target="_blank">link</a></td> </tr> <tr> <td>DeMask score</td> <td>363</td> <td>340</td> <td><a href="https://demask.princeton.edu/query/" target="_blank">link</a></td> </tr> <tr> <td>gemme</td> <td>352</td> <td>260</td> <td><a href="https://www.kaggle.com/code/geraseva/gemme/notebook" target="_blank">link</a></td> </tr> <tr> <td>DeMask log2f var</td> <td>323</td> <td>259</td> <td><a href="https://demask.princeton.edu/about/" target="_blank">link</a></td> </tr> <tr> <td>residue depth wt</td> <td>322</td> <td>326</td> <td><a href="https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb" target="_blank">link</a></td> </tr> <tr> <td>residue depth mut</td> <td>320</td> <td>338</td> <td><a href="https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb" target="_blank">link</a></td> </tr> <tr> <td>continuous Blosum62</td> <td>315</td> <td>207</td> <td><a href="https://www.kaggle.com/code/homofaberus/novozymes-data-enrichement" target="_blank">link</a></td> </tr> <tr> <td>ESM finetune</td> <td>308</td> <td>352</td> <td><a href="https://www.kaggle.com/code/cdeotte/protein-bert-finetune-lb-0-20" target="_blank">link</a></td> </tr> <tr> <td>SA backbone</td> <td>297</td> <td>420</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/357899" target="_blank">link</a></td> </tr> <tr> <td>pLDDT diff</td> <td>297</td> <td>287</td> <td><a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/361816" target="_blank">link</a> <a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">link</a></td> </tr> <tr> <td>inps 3d</td> <td>293</td> <td>318</td> <td><a href="https://inpsmd.biocomp.unibo.it/inpsSuite/default/index3D" target="_blank">link</a></td> </tr> <tr> <td>duet</td> <td>291</td> <td>406</td> <td><a href="https://www.kaggle.com/code/geraseva/duet-tool" target="_blank">link</a></td> </tr> <tr> <td>pLDDT</td> <td>291</td> <td>209</td> <td><a href="https://www.kaggle.com/code/chinartist/fork-nesp-an-error-was-corrected" target="_blank">link</a></td> </tr> <tr> <td>DeMask entropy</td> <td>285</td> <td>340</td> <td><a href="https://demask.princeton.edu/about/" target="_blank">link</a></td> </tr> </tbody> </table> <h1>Weak Single Models/Features (Public LB 200-300)</h1> <p>Below is a list of all my single models and features which achieved LB 200-300 on their own. Many of these were made public in notebooks and discussion. Note that the LB may be better than you expect. 
This is because i have corrected the <code>delete mutation</code> test rows as explained below.</p> <table> <thead> <tr> <th>Model</th> <th>Public LB</th> <th>Private LB</th> <th>links</th> </tr> </thead> <tbody> <tr> <td>ddgun seq</td> <td>281</td> <td>231</td> <td><a href="https://www.kaggle.com/code/homofaberus/novozymes-data-enrichement" target="_blank">link</a></td> </tr> <tr> <td>sdm</td> <td>277</td> <td>328</td> <td><a href="https://www.kaggle.com/code/geraseva/sdm-tool" target="_blank">link</a></td> </tr> <tr> <td>inps seq</td> <td>256</td> <td>244</td> <td><a href="https://inpsmd.biocomp.unibo.it/welcome/default/index" target="_blank">link</a></td> </tr> <tr> <td>imut 3d</td> <td>246</td> <td>208</td> <td><a href="https://folding.biofold.org/i-mutant+/index.html" target="_blank">link</a></td> </tr> <tr> <td>ESM entropy</td> <td>238</td> <td>134</td> <td><a href="https://www.kaggle.com/code/kaggleqrdl/esm-quick-start-lb237/comments#1981223" target="_blank">link</a></td> </tr> <tr> <td>DeMask sub matrix</td> <td>236</td> <td>211</td> <td><a href="https://demask.princeton.edu/about/" target="_blank">link</a></td> </tr> <tr> <td>ESM</td> <td>234</td> <td>107</td> <td><a href="https://www.kaggle.com/code/kaggleqrdl/esm-quick-start-lb237" target="_blank">link</a></td> </tr> <tr> <td>dynamut</td> <td>226</td> <td>357</td> <td><a href="https://www.kaggle.com/code/geraseva/dynamut2" target="_blank">link</a></td> </tr> <tr> <td>imut seq</td> <td>224</td> <td>191</td> <td><a href="https://folding.biofold.org/i-mutant+/index.html" target="_blank">link</a></td> </tr> <tr> <td>mCSM</td> <td>214</td> <td>384</td> <td><a href="https://www.kaggle.com/code/geraseva/mcsmtool" target="_blank">link</a></td> </tr> <tr> <td>ESM mut prob</td> <td>210</td> <td>065</td> <td><a href="https://www.kaggle.com/code/kaggleqrdl/esm-quick-start-lb237/comments#1981223" target="_blank">link</a></td> </tr> <tr> <td>blosum 95</td> <td>203</td> <td>141</td> <td></td> </tr> <tr> <td>ddgun 3d</td> <td>199</td> <td>216</td> <td><a href="https://folding.biofold.org/ddgun/index.html" target="_blank">link</a></td> </tr> <tr> <td>blosum 100</td> <td>197</td> <td>135</td> <td></td> </tr> </tbody> </table> <h1>Meaningless Models/Features (Public LB 0-200)</h1> <p>I submitted a bunch of random submissions and was able to achieve LB 215 with pure randomness. Therefore I concluded that any model or feature with public LB less than 200 cannot be trusted on private LB. </p> <h1>Delete Mutation</h1> <p>The simplest way to predict <code>delete mutation</code> test rows is to predict the worst stability that occurs at that residual location. I learned this from Geraseva's notebooks and confirmed it works well.</p> <pre><code>test['min_pos'] = test.groupby('position').tm.transform('min') test.loc[test.seq_id.isin(DELETE),'tm'] = test.loc[test.seq_id.isin(DELETE),'min_pos'] </code></pre> <h1>Train Final Submission</h1> <p>Now we have 27 strong features and 14 weak features. All single features/models with LB 300+ I trusted to generalize to both public and private LB. All single features/models with LB 200-300 I partially trusted. And all single feaures/models with LB less than 200 I did not trust and I ignored. We train our final submission model using the public test labels from LB probe LB 865. 
We use features from the tables above.</p> <ul> <li>Random Forest trained with 27 strong features - Public LB 656 - Private LB 535 - <strong>5th Place</strong> 🔥</li> <li>Random Forest trained with 41 features - Public LB 655 - Private LB 558 - <strong>1st Place</strong> 🔥</li> <li>Hill Climb with 27 strong and pos and neg weights - Public 645 - - Private LB 517 without bug <strong>227th place</strong></li> <li>Hill Climb with 27 strong and positive weights - Public 609 - - Private LB 520 without bug <strong>198th place</strong></li> </ul> <p>The above illustrates the difficulty of model selection. Based only on the public LB score, how can we tell which model generalizes best to unseen data? Which will do best on private LB? My instincts chose hill climbing, but apparently random forest modeled important feature interactions. Who knew??</p> <h1>Random Forest</h1> <p>To train the random forest, first we normalize each single model/feature prediction with <code>pred = (pred - min) / std</code>. Then we normalize the LB 865 submission too. Then <code>x_train = np.zeros((1162,27))</code>: the rows of <code>x_train</code> are all the rows of the public LB excluding the <code>delete mutation</code> rows and the <code>wildtype row 1169</code>. We only fit our random forest to the <code>replace mutations</code>. Afterward we use the code above to predict the <code>delete mutation</code> rows (a small illustrative sketch of this stacking step is given at the end of this post). Below is the total feature importance from the entire forest, and below that is a picture of one tree in the forest of 100 trees.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/f_imp.png" alt=""></p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Dec-2022/tree4.png" alt=""></p> <p>We can enlarge the image of the tree above by right clicking and choosing "open in new tab". We observe that many decisions in the tree are based on <code>SASA</code> and <code>RSASA</code>. Random forest learns to model interactions like "if an amino acid has such and such SASA and the Thermonet prediction is large, predict this", or "if an amino acid has such and such RSASA and the Rosetta energy is low, predict this", etc.</p> <p>Hill climbing has always been my favorite algorithm to create my final submission, but I will be more careful and open minded in the future…</p> <h1>UPDATE</h1> <p>I published my final solution <a href="https://www.kaggle.com/code/cdeotte/public-lb-1st-place-solution/notebook" target="_blank">here</a>. It shows the hill climbing with the faulty preprocess that I selected. It shows how hill climbing could achieve private 0.525 with the correct preprocess, and it shows how random forest can achieve private 0.560. It also has a dataset containing my 41 features/models plus my LB 865 sub if others want to experiment.</p> <p>I did not share my LB probe code at this time because I want to use it to get a gold in a future competition. Once it succeeds, I will share the algorithm.</p>
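<p>Here is the promised sketch of that final stacking step (an illustration under assumptions, not the exact winning code): each single-model prediction is normalized as described, a random forest is fit on the probed public-test labels, and the private rows are predicted from the same normalized features. The data here are random placeholders with the stated shapes.</p> <pre><code># Sketch: fit a random forest on normalized single-model predictions,
# using the recovered public-test labels as targets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n_public, n_private, n_models = 1162, 1000, 41        # approximate sizes for illustration
preds_public = np.random.rand(n_public, n_models)     # single-model predictions, public rows
preds_private = np.random.rand(n_private, n_models)   # same models, private rows
probed_labels = np.random.rand(n_public)              # recovered public-test targets

def normalise(col):
    # the write-up normalises each model as (pred - min) / std
    return (col - col.min()) / col.std()

X_public = np.apply_along_axis(normalise, 0, preds_public)
X_private = np.apply_along_axis(normalise, 0, preds_private)

rf = RandomForestRegressor(n_estimators=100, max_depth=4, random_state=0)
rf.fit(X_public, normalise(probed_labels))
private_pred = rf.predict(X_private)
</code></pre>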
Novozymes Enzyme Stability Prediction
1st place solution - Protein as a Graph
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My notebooks are now available to the public.<br> Sorry for the delay in publishing.</p> <ul> <li>GNN (Data Preparation): <a href="https://www.kaggle.com/code/gyozzza/nesp-gnn-data" target="_blank">https://www.kaggle.com/code/gyozzza/nesp-gnn-data</a></li> <li>GNN (Training and Inference): <a href="https://www.kaggle.com/code/gyozzza/nesp-gnn" target="_blank">https://www.kaggle.com/code/gyozzza/nesp-gnn</a></li> <li>Graph Centrality: <a href="https://www.kaggle.com/code/gyozzza/graphcentrality" target="_blank">https://www.kaggle.com/code/gyozzza/graphcentrality</a></li> </ul> <p>Although not mentioned, the physical and chemical properties of residues are also used as node features.</p> <p>Correct</p> <pre><code>pred = model(graph, wildtype_seq, mutant_seq) </code></pre> <p>Bug</p> <pre><code>pred = model(graph, wildtype_seq, wildtype_seq) </code></pre> <p>The graph is generated from wildtype PDB file.</p> <p>Interesting story about BLOSUM62!</p> <p>Thank you!</p> <p>On the public LB, 1st place submission scored 0.612</p> <h1>1st place solution - Protein as a Graph</h1> <p>This is the 1st place solution, but not necessarily the best solution. <br> Since there can be a large difference between a private score and a public score, many participants could not choose a submission with high private score because of low public score. It seems that there are many participants who have submissions with higher private scores than my 1st place score.<br> Therefore, this is not the best solution. </p> <p>However, I want to share my idea so below I share my approach treating protein as a graph.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3852768%2F7be85d61fadccbb4cf4aa1c3451d25a6%2Foverview.png?generation=1672961940076762&amp;alt=media" alt=""></p> <h2>Data</h2> <h3>Protein as a Graph</h3> <p>The three-dimensional structure of a protein can be represented by a graph. Nodes represent residues, and if two residues are closer than a certain distance, the nodes are connected by edges. <br> I created graph data from PDB files.<br> (see <a href="https://www.kaggle.com/code/gyozzza/create-graph-data-from-pdb-files-for-gnn" target="_blank">https://www.kaggle.com/code/gyozzza/create-graph-data-from-pdb-files-for-gnn</a>)</p> <h3>Feature</h3> <p>The features of each node are the embedding vectors of wildtype sequence and mutant sequence generated by HuggingFace's <code>facebook/esm2_t33_650M_UR50D</code>. </p> <p>I referred to Chris' public notebook (<a href="https://www.kaggle.com/code/cdeotte/prot-bert-finetune-lb-0-30" target="_blank">https://www.kaggle.com/code/cdeotte/prot-bert-finetune-lb-0-30</a> ).</p> <h2>Model</h2> <h3>Graph Neural Network (GNN)</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3852768%2F31eddb7c601905075248107e0589025b%2FGNN.png?generation=1672962016785069&amp;alt=media" alt=""></p> <p>My primary GNN model is a 3-layer Graph Isomorphism Network (GIN). 
The intermediate features of each layer are concatenated, and the global readout features and the features of the node where the mutation occurred are input to a MLP to obtain predictions. </p> <p>My best single GNN model in public LB scored</p> <ul> <li>Public: 0.365</li> <li>Private: 0.368</li> </ul> <p>My best single GNN model in private LB scored</p> <ul> <li>Public: 0.315</li> <li>Private: 0.49</li> </ul> <hr> <h2>Details of 1st place submission</h2> <p>The bugs mentioned in discussion(<a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/375920" target="_blank">https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/375920</a>) are</p> <ol> <li>GNN model output almost identical predictions for all samples in each mini-batch due to a memory issue. </li> <li>GNN model takes wildtype features and mutant features as inputs but two wildtype features were input when predicting on test data.</li> </ol> <p>Submission by the model with the bugs scored:</p> <ul> <li>Public: 0.295</li> <li>Private: 0.222</li> </ul> <p>1st place submission is an ensemble of the GNN with bugs and some public notebooks. The following table shows the notebooks and weights used for the ensemble.</p> <table> <thead> <tr> <th>Model</th> <th>Weight</th> </tr> </thead> <tbody> <tr> <td><a href="https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores" target="_blank">Rosetta</a></td> <td>2</td> </tr> <tr> <td><a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">Thermonet v2</a></td> <td>1</td> </tr> <tr> <td><a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">RMSD</a></td> <td>1</td> </tr> <tr> <td><a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">pLDDT diff</a></td> <td>1</td> </tr> <tr> <td>GNN(with bugs)</td> <td>1</td> </tr> </tbody> </table> <hr> <h2>Another approach</h2> <h3>Graph Centrality</h3> <p>Inspired by Elias' notebook ( <a href="https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb" target="_blank">https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb</a> ), I examined the correlation between dTm and the centrality of the node where the mutation occurred in the graph. <br> The figure below shows the relationship between information centrality and dTm in some training data.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3852768%2F872204bc6ff62321a3122d1f1e4a5eb9%2Finformation_centrality.png?generation=1672962053946235&amp;alt=media" alt=""></p> <p>Submission of inverse of centrality scored</p> <ul> <li>Public: 0.37</li> <li>Private: 0.425</li> </ul> <p><br><br> <br></p> <p>I hope you find the approach treating the 3D structure of a protein as a graph interesting.<br> I will share some notebooks later.</p> <p><br></p> <p>Thanks to the organizers of this competition and all those who shared their knowledge and insights in the public notebooks and discussions.</p>
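<p>The graph construction described above (one node per residue, an edge whenever two residues are closer than a fixed cutoff, ESM-2 embeddings as node features) can be sketched as follows. This is an illustrative reconstruction, not the author's code: the 8 Å cutoff, the use of C-alpha atoms for distances and the <code>node_features</code> array are assumptions.</p> <pre><code>import numpy as np
import torch
from Bio.PDB import PDBParser
from torch_geometric.data import Data

def pdb_to_graph(pdb_path, node_features, cutoff=8.0):
    # one node per residue; edge if the C-alpha atoms are within `cutoff` angstroms
    structure = PDBParser(QUIET=True).get_structure('wt', pdb_path)
    ca = np.array([res['CA'].coord for res in structure.get_residues() if 'CA' in res])
    dist = np.linalg.norm(ca[:, None, :] - ca[None, :, :], axis=-1)
    contact = np.less(dist, cutoff)
    np.fill_diagonal(contact, False)
    edge_index = torch.tensor(np.stack(np.nonzero(contact)), dtype=torch.long)
    # node_features: e.g. per-residue embeddings of the wildtype and mutant sequences
    return Data(x=torch.tensor(node_features, dtype=torch.float), edge_index=edge_index)
</code></pre> <p>In the write-up the graph itself is always built from the wildtype PDB file; only the sequence embeddings change between wildtype and mutant, which is also where the reported inference bug (passing the wildtype sequence twice) crept in.</p>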
Novozymes Enzyme Stability Prediction
21st Place Solution [ Representation is all you need]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank all the competitors who shared insightful topics, Novozymes for launching this type of competition, which I enjoyed and would like to see more.</p> <p>I know it is too late to share such information, but it was not in my hands after all, so thank you for being here, I have really appreciated, hope you're getting something interesting. </p> <p><strong>Unfortunately, I have joined the competition before the deadline with only 20 days. Due to my limited time I have decided to use one of the public code and add something unique to it then submit and wait for the shake.</strong></p> <p>So I have chosen <a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2/notebook" target="_blank">🧬 NESP: ThermoNet v2 🧬 </a>code by <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> to be my baseline and build in above of it.</p> <p>🧬 NESP: ThermoNet v2 🧬 single model public score was great. <br> </p> <table> <thead> <tr> <th>1-fold</th> <th>10 folds</th> </tr> </thead> <tbody> <tr> <td>0.494</td> <td>0.582</td> </tr> </tbody> </table> <p></p> <p><br></p> <h2>&nbsp;Idea</h2> <p><strong>The most important feature for us was "protein_sequence" a lot of codes tried to encode it with a lot of tokenizers, and it was big. But I thought if we could find a network that could map our problem perfectly, In other words, we could use its back-to-back FC layer to REPRESENT the sequence better, and instead of mapping it to linear layers, we could map it to something else, such as XGBOOST.</strong></p> <p>Trick Note: I have made a simple change in the inference stage for all experiments.</p> <ol> <li>Using np.exp(preds) for predicting replacements</li> <li>Use np.quantile(0.333) to remove.</li> </ol> <p>And the reason is that:<br> As we're predicting ddG the distribution range was too short, our LeadBoard metrics using a rank Algorithm to sort the best submission, this will end up with many tie sequences. <br> Exponential (the easiest way to normalize and input) will lead to larger range.</p> <p>So it becomes from something like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2Ffe0322bb3870b17726f0b74ab1542d8b%2F__results___6_1.png?generation=1678353109951360&amp;alt=media" alt="short range dist"></p> <p>To this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2Fe81cd5ea223d6113b6c175f180f3e0ec%2F__results___35_1.png?generation=1678353151391108&amp;alt=media" alt="long range dist"></p> <p>Yeah, here we go…</p> <p>ThermoNet v2 representations layer contains 74 nodes </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2Fe40653f0ef1e60ccffab5394648323e2%2FScreenshot%202023-03-08%20at%201.20.58%20PM.jpg?generation=1678304345121711&amp;alt=media" alt="ddg"></p> <p>Also, we have added the PH to end up with 75 features as a representation for each sequence.</p> <h2>&nbsp;1. Rep. 
with xgboost</h2> <h3>1.1 Diagram</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2F3c0c07444e50ab110bf9095a25c58d81%2Fxgboost1.jpg?generation=1678353719592938&amp;alt=media" alt="xgb stage 1"></p> <h3>&nbsp;1.2 Results:</h3> <table> <thead> <tr> <th>folds</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>single</td> <td>0.46987</td> <td>0.4455</td> </tr> <tr> <td>10</td> <td>0.49001</td> <td>0.45407</td> </tr> </tbody> </table> <h2>2. Ensemble with ThermoNet notebook</h2> <h3>&nbsp;2.1 Digram</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2F92e51952213c81fcb7583341b875c498%2Fstage2.jpg?generation=1678353854133134&amp;alt=media" alt="stage 2"></p> <h3>&nbsp;2.2 Results:</h3> <table> <thead> <tr> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.5887</td> <td>0.5026</td> </tr> </tbody> </table> <h2>&nbsp;3. Final Stage (21st place LB)</h2> <p>final stage is to ensemble all with <a href="https://www.kaggle.com/code/elcaiseri/difference-features-final-stage/notebook" target="_blank">Difference Features </a> notebook.<br> In this notebook we create difference features based on Kvigly's published PDBs <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/361816" target="_blank">here</a> </p> <h2>3.1 Digram</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2535352%2Fa33d79aa67ccb9e22f12a68d7f368fdc%2Ffinal%20stage.jpg?generation=1678354341393156&amp;alt=media" alt="final stage"></p> <h3>&nbsp;2.3 Results:</h3> <table> <thead> <tr> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>0.59749</td> <td>0.52441</td> </tr> </tbody> </table> <hr> <p><strong>Update:</strong><br> — Code: <a href="https://github.com/elcaiseri/Novozymes-Enzyme-Stability-Prediction" target="_blank">21st Place Solution - Representation Is All You Need</a>.</p> <p>Thanks@elcaiseri</p> <p><strong>Update:</strong></p> <p>Code: <a href="https://github.com/elcaiseri/Novozymes-Enzyme-Stability-Prediction" target="_blank">21st Place Solution - Representation Is All You Need</a>.</p>
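<p>A minimal sketch of the representation-to-XGBoost idea and the two inference tricks described above: exponentiating predictions to widen the narrow ddG range, and (as I read the second trick) assigning deletion rows the 0.333 quantile of the predictions. <code>train_repr</code>/<code>test_repr</code> are the 75-dimensional vectors (74 ThermoNet penultimate-layer activations plus pH), <code>train_ddg</code> the targets and <code>is_deletion</code> a boolean mask; the XGBoost hyperparameters are placeholders.</p> <pre><code>import numpy as np
import xgboost as xgb

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
model.fit(train_repr, train_ddg)

preds = model.predict(test_repr)
submission['tm'] = np.exp(preds)  # longer value range, fewer ties under the rank metric
submission.loc[is_deletion, 'tm'] = np.quantile(submission['tm'], 0.333)
</code></pre>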
Novozymes Enzyme Stability Prediction
2nd place solution & private LB 0.577 QUICK OVERVIEW
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hahaha thanks 😂</p> <p>Hi Chris, you're almost correct. but a little correction that the model used in either of positive or negative will not be used again.<br> for example:<br> pred = 0.2 * A + 0.5 * B + 0.3 * C <br> pred = pred - (D * 0.2 + E * 0.7 + F * 0.1) </p> <p>I used <code>scipy.stats.rankdata()</code> for both of them so that they can be in same scale. however, I believe <code>normalize</code> or some other function such as <code>MinMaxScaler</code> could work just fine</p> <p>Sure, I will share the code. </p> <p><a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> public score was 0.618</p> <p>Hi everyone! My solution is very simple. It's just an ensemble of different methods and a little smoothing at the end. </p> <p>a quick overview:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3350264%2Fc36a345c2440a24230fb59f7e6023607%2Fwhole.png?generation=1672861825096708&amp;alt=media" alt=""></p> <p>In the +ve weights block we have different methods. let me know if you want the explanation of each of them separately, I will make another post for it but as they are available on the internet, I didn't write about them here.</p> <p>I'm assuming that up to the ensemble, everything is clear.</p> <p>Now coming to the smoothing part:</p> <p>for me, combing ML models (the ones I have mentioned) with the positive block drops the score, my assumption is that it makes the values extreme. even if we overfit the public LB, we might drop the private LB. <br> Private LB was mostly from ~500 rows to ~1600 rows so if we have extreme values based on public LB, this part of private LB could drop significantly. So to avoid shakeup we needed some smoothing.<br> Now we have many methods for smoothing, we could simply try log transform but the problem is we don't know which part of the prediction needs smoothing and which doesn't. one method to solve this is to train a neural network but as we have training data belonging to different distributions, this won't help us here. <br> So I did something risky: I ensembled all the ML models and subtracted them from the ensemble.<br> let's say we have an ensemble prediction of 0.9 and ground truth of 0.5 and an ML prediction of 0.6. now subtracting ML prediction from ensemble 0.9-0.6 = 0.3 will put us a little closer to 0.5.<br> The reason I called it risky is that if you didn't set the right weights, you will fall terribly on private LB. assume ground truth and ensemble both 0.9 and ML pred as 0.6. it will add a big error difference here.<br> So to make it work, we need to set weights based on CV scores but in this competition, we don't have similar training data so I had to set the weights based on public LB. </p> <p>With different combinations of the above, I was able to make about ~30 submissions in the range of private scores 0.545 to 0.577. 
I found their private and public scores reasonably reliable and correlated.<br> Here is a snap:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3350264%2F945e4f75a3785b75c0b1eb30b5658c7a%2FScreenshot%20(1681).png?generation=1672863185750539&amp;alt=media" alt=""></p> <p>This was the first time I had worked on a protein-related competition. I tried many different things every day for the past 3 months (80% of them are not part of my solution), and I'm really happy to have won a solo gold. I hope to become GM soon.<br> Thanks.</p>
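<p>A minimal sketch of the ensembling-with-subtraction scheme described above. Every method is first put on a common scale with <code>scipy.stats.rankdata()</code> (as the author mentions in the comments), the positive block is combined with positive weights, and the weighted ML block is subtracted as the smoothing step; the predictions <code>pred_A</code>…<code>pred_F</code> and the weights are illustrative.</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def scale(x):
    r = rankdata(x)
    return r / r.max()

# positive block: methods blended with positive weights
pos = 0.2 * scale(pred_A) + 0.5 * scale(pred_B) + 0.3 * scale(pred_C)
# ML block: subtracted from the ensemble to pull extreme values back
neg = 0.2 * scale(pred_D) + 0.7 * scale(pred_E) + 0.1 * scale(pred_F)

final = pos - neg  # weights were tuned against the public LB, as explained above
</code></pre>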
Novozymes Enzyme Stability Prediction
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I didn't notice that. I will try it.</p> <p>Hi, FredPiko.<br> To be clear, my model gets a higher public score, but the private score is actually not high(0.464).<br> I will answer your questions based on my previous attempts, because I haven't verified them on private LB yet.</p> <blockquote> <p>1.Adding three features calculated by “ProteinAnalysis“ module of Bio library, such as “stability_analysis”. The main idea of this change comes from ”Residue Depth“ notebook Darien posted.</p> </blockquote> <p>Adding these three feature will probably increase my Public score by 0.005, but the increase in Private score is small.</p> <blockquote> <p>2.Adding spatial features mentioned above in this discussion. There are a many features of this type, but each feature is weak, mostly arranged in the rear of feature importance.</p> </blockquote> <p>Althougth the single feature is weak, but the score only base on these features is not bad. You can check my notebook <a href="https://www.kaggle.com/code/zrongchu/4000-unique-mutation-with-dtm/notebook" target="_blank">here</a>(Public 0.404, Private 0.432).It will achieve Private score 0.489 adding sasa(version 49). By the way, the score is affected by the training data. I used more data here and only kept ubstable mutations.</p> <blockquote> <p>3.For mutations with deleted residues, using the result in "deletion-ensemble" notebook.</p> </blockquote> <p>This method has little improvement in public LB, which is not much different from setting deletion mutants to mean of subtitution mutations.</p> <p>Welcome back, Rope on Mars.<br> Thanks for you great contribution in this competition.</p> <p>Thank you, Chris.<br> Please check this document(<a href="https://foldxsuite.crg.eu/documentation" target="_blank">https://foldxsuite.crg.eu/documentation</a>) and discussion <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/366106" target="_blank">here</a>. But I will get an proxy error now when I open this website, I don't know why. I downloaded foldx5Win32.zip from this website. Following Robert Hatch's instructions, I first ran the command to repair the wildtype pdb. Then I created the .cfg file and ran the positionscan command. According to the guidance of the document, the format of .cfg is as follows:</p> <blockquote> <p>command=PositionScan<br> pdb=wildtype_structure_prediction_af2_Repair.pdb<br> positions=LA17E,LA17K,KA18C,KA18F….(wildtype+chain+position+mutation)</p> </blockquote> <p>You will get a repair_scanning_output.txt when it's done. That's what I'm using <a href="https://www.kaggle.com/code/zrongchu/foldx" target="_blank">here</a>.The second column of the file is ddg. I don't know if this is correct.</p> <p>Sorry, I missed this comment. The public LB score is 0.614.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8874838%2F1babf306a41db313de2525f22be5b9b8%2F2023-01-10%20144804.png?generation=1673333312315669&amp;alt=media" alt=""></p> <p>Thanks a lot to the hosts and Kaggle for hosting this interesting competition. 
Congratulations to the other competitors for the great solutions and results.Special thanks to the competitors who selflessly shared their knowledge during this competition, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> , <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> , <a href="https://www.kaggle.com/ropeonmars" target="_blank">@ropeonmars</a> , <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> , <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a> , <a href="https://www.kaggle.com/jinyuansun" target="_blank">@jinyuansun</a> , <a href="https://www.kaggle.com/kvigly55" target="_blank">@kvigly55</a> , <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> , <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a> , <a href="https://www.kaggle.com/shlomoron" target="_blank">@shlomoron</a> …, If I've missed someone, please let me know.</p> <h1>Training Set</h1> <p>I used the dataset <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/359284" target="_blank">D1</a> ,<a href="https://www.kaggle.com/datasets/shlomoron/train-wildtypes-af" target="_blank">D2</a> and <a href="https://www.kaggle.com/code/vslaykovsky/14656-unique-mutations-voxel-features-pdbs" target="_blank">D3</a>.</p> <h1>Feature Engineering</h1> <p>I selected the 21 amino acids closest to the mutant amino acids to construct the features.</p> <ul> <li>The relative distances of the 21 amino acid mutation sites were calculated.<br> $$D_{ci}=\frac{r_{c}-r_{i}}{max_{i∈K}{(r_{c}-r_{i})}}$$,$$r_{ci}=\sqrt{{(x_{c}-x_{i})}^{2}+{(y_{c}-y_{i})}^{2}+{(z_{c}-z_{i})}^{2}}$$</li> <li>The average relative distance between the 21 amino acids and the mutation site was calculated.</li> <li>The angles of the 21 amino acids to the mutation site were calculated.<br> $$\alpha_{1}=arccos\frac{(z_{c}-z_{i})}{r}$$<br> $$\alpha_{2}=arccos\frac{(x_{c}-x_{i})}{r}$$<br> $$\alpha_{3}=arccos\frac{(y_{c}-y_{i})}{r}$$<br> $$\beta_{4}=arctan\frac{(y_{c}-y_{i})}{(x_{c}-x_{i})}$$<br> $$\beta_{5}=arctan\frac{(z_{c}-z_{i})}{(y_{c}-y_{i})}$$<br> $$\beta_{6}=arctan\frac{(x_{c}-x_{i})}{(z_{c}-z_{i})}$$</li> <li>B_factor, sasa, blosum, demask…</li> <li>Use the adjusted blosum62 to multiply sasa, b_factor…</li> </ul> <pre><code> (): -(/(+np.exp(-x/adjustment_factor))) </code></pre> <ul> <li>Features of the whole protein sequence, mutant amino acids and 21 amino acids extracted by esm.</li> <li>There are a few other interesting features <a href="https://www.kaggle.com/code/dschettler8845/novo-esp-residue-depth-and-more-w-biopython" target="_blank">here</a>.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8874838%2Fe933f40cad3209abc44bd7fc34206c3f%2F2023-01-04%20142508.png?generation=1672813546656646&amp;alt=media"></li> </ul> <h1>Model</h1> <h2>XGB_v1</h2> <p>This model gets public score <strong>0.52132</strong> and private score: <strong>0.46642</strong>.<br> We used <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/359284" target="_blank">D1</a> to train my first Xgboost model. This dataset is wildtypes and mutants selected from kaggle's training set with 4000+ rows of train data. First of all, I did some simple processing on the dataset, removing duplicate rows and deletion mutation, deleting rows that did not match pdb, and retaining groups with mutation samples &gt; 20. 
Finally, we get a training set with 1800+ rows.<br> We used the features mentioned above and took Tm rank as target to train the model,so we have 298 features for our model.</p> <h2>XGB_v2</h2> <p>This model gets public score <strong>0.40512</strong> and private score: <strong>0.30495</strong>. <br> We used <a href="https://www.kaggle.com/datasets/shlomoron/train-wildtypes-af" target="_blank">D2</a> to train my second Xgboost model. This dataset is wildtypes and mutants selected from kaggle's training set. But this data set is cleaned up in more detail.The cleaned dataset contains only 986 mutations from 30 groups, all groups are same-source, same-pH.</p> <h2>XGB_v3</h2> <p>This model gets public score <strong>0.40435</strong> and private score: <strong>0.43294</strong>. <br> We used <a href="https://www.kaggle.com/code/vslaykovsky/14656-unique-mutations-voxel-features-pdbs" target="_blank">D3</a> to train my third Xgboost model. Since the strong correlation between dTm and ddg(ddG is a measure of the change in energy between the folded and unfolded states (𝚫Gfolding) and the change in 𝚫Gfolding when a point mutation is present.), we can also use ddg as target in order to increase the training data. This external dataset contains 14656 unique mutations. Since the wild type is more stable than most mutants, we kept only unstable mutations and ensure that the pH and source of each group are the same. After a simple cleanup, the training set contains 6000+ rows.</p> <h1>Ensemble</h1> <p>I used the three xgb models ensemble with some public notebooks, I don't have a good idea here, just try to change the weight until the score stops improving.</p> <h1>reference</h1> <ul> <li><a href="https://www.kaggle.com/code/cdeotte/xgboost-5000-mutations-200-pdb-files-lb-0-410" target="_blank">XGBoost - 5000 Mutations 200 PDB Files [LB 0.410]</a></li> <li><a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">🧬 NESP: ThermoNet v2 🧬</a></li> <li><a href="https://www.kaggle.com/code/roberthatch/nesp-alphafold-getarea-exploration" target="_blank">NESP: AlphaFold+GetArea exploration</a></li> <li><a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">Difference Features - [LB 0.600]</a></li> <li><a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">RMSD from Molecular Dynamics</a></li> <li><a href="https://www.kaggle.com/code/dschettler8845/novo-esp-residue-depth-and-more-w-biopython" target="_blank">NOVO-ESP – Residue Depth and More w/ BioPython</a></li> <li><a href="https://www.kaggle.com/code/dschettler8845/novo-esp-eli5-performant-approaches-lb-0-451" target="_blank">NOVO ESP – ELI5 - Performant Approaches [LB=0.451]</a></li> </ul> <hr> <p>Update:<br> The ensemble weight of this notebook wihch got 0.542 in private Leaderboard.</p> <blockquote> <p><a href="https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb" target="_blank">3Dgeometry</a>(0.32) + 1.5×foldx(0.413) + 1.5×<a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">ThermoNet v2</a>(0.495) + 2×<a href="https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores" target="_blank">rosetta</a>(0.471) + 1.5×<a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">rmsd</a>(0.410) + xgb2(0.404) + 1.5×xgb1(0.521) + 1.5×<a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">difplddt</a>(0.299) + 1.5×xgb3(0.405)</p> </blockquote> <table> 
<thead> <tr> <th>Model</th> <th>Public LB</th> <th>Private LB</th> <th>Link</th> </tr> </thead> <tbody> <tr> <td>3Dgeometry</td> <td>0.32</td> <td>0.338</td> <td><a href="https://www.kaggle.com/code/gehallak/nesp-3d-geometry-0-32-lb" target="_blank">link</a></td> </tr> <tr> <td>Foldx</td> <td>0.413</td> <td>0.375</td> <td><a href="https://www.kaggle.com/code/zrongchu/foldx" target="_blank">link</a></td> </tr> <tr> <td>Thermonet v2</td> <td>0.495</td> <td>0.460</td> <td><a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">link</a></td> </tr> <tr> <td>rosetta</td> <td>0.471</td> <td>0.438</td> <td><a href="https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores" target="_blank">link</a></td> </tr> <tr> <td>rmsd</td> <td>0.410</td> <td>0.350</td> <td><a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">link</a></td> </tr> <tr> <td>xgb2</td> <td>0.404</td> <td>0.432</td> <td><a href="https://www.kaggle.com/code/zrongchu/4000-unique-mutation-with-dtm/notebook" target="_blank">link</a></td> </tr> <tr> <td>xgb1</td> <td>0.524</td> <td>0.464</td> <td><a href="https://www.kaggle.com/code/zrongchu/nesp-xgboost" target="_blank">link</a></td> </tr> <tr> <td>difplddt</td> <td>0.297</td> <td>0.287</td> <td><a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">link</a></td> </tr> <tr> <td>xgb3</td> <td>0.405</td> <td>0.304</td> <td><a href="https://www.kaggle.com/code/zrongchu/nesp-xgboost?scriptVersionId=113857946" target="_blank">link</a></td> </tr> </tbody> </table> <p>Thank you,Barrry.🤝</p> <p>Thanks, gary.<br> I just made my notebook public <a href="https://www.kaggle.com/code/zrongchu/nesp-xgboost/notebook?scriptVersionId=115466978" target="_blank">here</a> and it's not very readable.</p> <p>Thanks,FredPiko.<br> The demask I added is the wildtype of the test set, and it is also used for the train set. I didn't generate demask for every wildtype(I tried to do this at first, but then I realized it wasn't possible), please refer to this <a href="https://www.kaggle.com/code/cdeotte/xgboost-5000-mutations-200-pdb-files-lb-0-410" target="_blank">notebook</a>. To clarify, for the dataset 14656, the authors have already generated PDBs for each wildtype and mutant. You can use biopython to get sasa features. See this <a href="https://www.kaggle.com/code/dschettler8845/novo-esp-surface-area-w-biopython" target="_blank">notebook</a> for details. Of course, you can also use this getarea web. You can also use this website to get other useful features besides sasa, such as sidechain, in/out, etc., which can also improve the performance of the model.</p> <p>Hi,Swati Parida.Forgive me for the delay in replying.The best mse I got is 0.26+.But there's not much correlation between mse and Spearman's coefficient</p> <p>Thanks you! Marília Prata.</p>
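<p>A minimal sketch of the neighbourhood geometry features described above (relative distances and angles of the 21 residues closest to the mutation site). It assumes <code>coords</code> is an (N, 3) array with one coordinate per residue (e.g. C-alpha atoms); the write-up does not state which atom is used, and <code>np.arctan2</code> is used here as a numerically safer stand-in for the arctan ratios.</p> <pre><code>import numpy as np

def mutation_site_geometry(coords, mut_idx, k=21):
    diff = coords[mut_idx] - coords            # (x_c - x_i, y_c - y_i, z_c - z_i)
    dist = np.linalg.norm(diff, axis=1)
    nearest = np.argsort(dist)[1:k + 1]        # k closest residues, mutation site excluded
    d = dist[nearest]
    dx, dy, dz = diff[nearest].T
    rel_dist = d / d.max()                     # relative distance D_ci
    alphas = np.arccos(np.clip(np.stack([dz, dx, dy]) / d, -1.0, 1.0))
    betas = np.arctan2(np.stack([dy, dz, dx]), np.stack([dx, dy, dz]))
    return rel_dist, rel_dist.mean(), alphas, betas
</code></pre>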
Novozymes Enzyme Stability Prediction
8-th place solution - AVGP3 + Rosetta
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://software.acellera.com/moleculekit/moleculekit.tools.voxeldescriptors.html" target="_blank">https://software.acellera.com/moleculekit/moleculekit.tools.voxeldescriptors.html</a></p> <p>this tool makes 7 3D features from pdb-file</p> <p>we have found code example in thermonet sources</p> <p><a href="https://github.com/gersteinlab/ThermoNet" target="_blank">https://github.com/gersteinlab/ThermoNet</a></p> <p>gends.py -i p53_variants.txt -o p53_direct_stacked_16_1 -p ./ --boxsize 16 --voxelsize 1</p> <p>Train CV on 5 k-fold</p> <p>CV: 0.370, Public LB: 0.482, Private LB: 0.470</p> <p>First of all, we are very happy to be a part of such unordinary and interesting competition, many thanks to the organizers and competitors who selflessly shared their knowledge during this competition, <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a>, <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a>, <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a>, <a href="https://www.kaggle.com/kvigly55" target="_blank">@kvigly55</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/shlomoron" target="_blank">@shlomoron</a> …, If I've missed someone, please let me know.</p> <p>Many thanks to my great team QuData <a href="https://www.kaggle.com/synset" target="_blank">@synset</a>, <a href="https://www.kaggle.com/alexz0" target="_blank">@alexz0</a>, <a href="https://www.kaggle.com/kerrit" target="_blank">@kerrit</a> and <a href="https://www.kaggle.com/semenb" target="_blank">@semenb</a>, you are awesome! Despite the constant blackouts in Ukraine and the explosion of rockets over our heads, we enjoyed diving into the world of biochemistry.</p> <h2>AVGP3</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fe47a345ccc56dc455e4a30773efd8b40%2Fthermonet.png?generation=1672846271446547&amp;alt=media" alt="Thermonet features"></p> <p>At the beginning of the competition, we noticed that the <a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet" target="_blank">https://www.kaggle.com/code/vslaykovsky/nesp-thermonet</a> approach gives a high contribution to the prediction of thermal stability.</p> <p>We studied the material from this article <a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008291" target="_blank">https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008291</a> and decided to experiment with different neural network architectures, and arrived at the Resnet-3D with a public score 0.488.</p> <p>Then we wondered which parts of the network really affect the result. By iteratively pruning the network we came up with a fairly simple architecture:</p> <pre><code> (): x = x.mean(axis=(,,)) -x[:,]&lt;br&gt; </code></pre> <p>The same shape as for the Thermonet work was used as the input, (14,16,16,16).</p> <p>This model gave a metric of 0.477 on public lb. 
Thus, averaging a single parameter <strong>hbond_donor</strong> (hydrogen bond donor) yielded a fairly high result.</p> <p>To improve the metrics, we built different variations of feature descriptors using the <strong>molekulekit</strong> utility. The optimal ones were: boxsize:16, voxelsize: 0.5, which improved public lb to 0.482. This model, which we called AVGP3, became a part of our final ensemble solution.</p> <h2>Rosetta</h2> <p>Thanks to the notebook <a href="https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble" target="_blank">https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble</a> we noticed that thermal stability is also well predicted by Rosetta scores.</p> <p>Combining two methods in the ensemble (0.5 rosetta + 0.5 AVGP3) we got <strong>0.590</strong> in public lb and, as it eventually turned out, <strong>0.540</strong> in private lb (<strong>4th</strong> place). But unfortunately we didn't choose this result in the final submission.</p> <p>Next, the AVGP3 result was combined with <a href="https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble" target="_blank">https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble</a> with weights of 0.2 and 0.8 and improved our public lb score to 0.613</p> <p>We then proceeded to examine what Rosetta Scores success consists of. After the relaxation procedure, the pdb files contain POSE_ENERGIES_TABLE at the bottom</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fba3bc23a2c2e896a338a0e11c16aadb6%2Frosetta.png?generation=1672847774943319&amp;alt=media" alt="energy table"></p> <p>which consists of 20 parameters and weights to them, and we used them in our approach.</p> <h2>Ranked rosetta</h2> <p>The idea emerged to use the weights for the ensemble. Independent predictions of thermostability were created for each parameter separately for all mutations, and then combined into an ensemble using the weights from POSE_ENERGIES_TABLE </p> <p>In fact, we have replaced:<br> $$ tm = {rank(\sum_{i=1}^n w_i * prop_i)} $$<br> on:<br> $$ tm = {\sum_{i=1}^n w_i * rank(prop_i)} $$</p> <p>This ensemble gave a score of 0.338 (we call it RankedRosetta)</p> <p>Combining this result with the best ensemble (0.9 * 0.613 + 0.1 * 0.338) we got 0.615 on the public lb.</p> <h2>Final ensemble</h2> <p>On the final day, after analyzing the forums, another result (<a href="https://www.kaggle.com/code/sgreiner/novo-esp-rasp-baseline" target="_blank">https://www.kaggle.com/code/sgreiner/novo-esp-rasp-baseline</a>) was added to the ensemble, which added another point to our public score<br> (0.05 * Rasp + 0.95 * 0.615) = 0.616 and gave us the 0.532 private score.</p> <p>So, to sum it all up,</p> <p>the best submitted result:</p> <p>0.95(0.9(0.8 (DelSpec:0.603) + 0.2 (AVGP3)) + 0.1(RankedRosetta)) + 0.05*(RASP) = 0.612 pub, 0.532 private (<strong>8th</strong> place)</p> <p>the best result overall (non-submitted):<br> 0.5(Rosetta)+0.5(AVGP3) = 0.590 pub, 0.540 private (would be <strong>4th</strong> place)</p> <p>As we can see, the thermostability in both cases was significantly affected by Rosetta Scores throughout the molecule and the average of the hydrogen bond donor parameter in the vicinity of the mutation.</p> <p>Thanks to all!</p>
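<p>Since the pruned "network" shown above arrived garbled in this post, here is a plausible reconstruction of AVGP3: average one voxel channel over the three spatial axes and negate it. The input shape follows ThermoNet, (batch, 14, D, D, D), and the channel index used for <code>hbond_donor</code> is an assumption.</p> <pre><code>import torch.nn as nn

class AVGP3(nn.Module):
    HBOND_DONOR = 3  # placeholder index of the hbond_donor channel among the 14

    def forward(self, x):              # x: (batch, 14, D, D, D) voxel descriptors
        x = x.mean(axis=(2, 3, 4))     # average each channel over the voxel grid
        return -x[:, self.HBOND_DONOR]
</code></pre>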
Novozymes Enzyme Stability Prediction
My incredibly lucky 6th place solution w code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Unfortunately, confidence had nothing to with my selection. I simply did not select any submissions and the 0.594 was actually my best public sub (I was testing a lot of e3nn single models exps so my subs were all taken up and I didn't even make a sub with the 0.603), which is why I titled my solution "incredibly lucky" lol. </p> <p>What happened was that I worked on my e3nn for like two weeks, but my cv/lb correlation was very weak/no correlation, and I thought that both cv and public couldn't really be trusted. Therefore, I kind of gave up one week before the competition end date, with no sub selected, leading to my lucky finish.</p> <p>Thanks Chris! Sorry it wasn't clear in the post. The .603 notebook has 0.8 weight and the 8-fold e3nn has 0.2 weight. The public score is 0.594.</p> <p>Thanks Robert. In theory, adding rotational augmentation would be helpful, although a ridiculous amount of augmentation would be needed to provide a similar performance to an equivariant network, to a point where it's not computationally feasible. You can take a look at figure 4 of e3nn paper (<a href="https://arxiv.org/pdf/2207.09453.pdf" target="_blank">https://arxiv.org/pdf/2207.09453.pdf</a>) to see the data efficiency scaling, which is quite interesting. Also, certainly it seems to me that the voxel generation would be too slow to use in real time with a data generator. With that being said, thermonet still outperformed my e3nn, which is surprising, but I think that the input features of thermonet probably help a lot compared to my e3nn which only uses xyz cood, atom type, and residue type. I think a lot of simple features also outperform my e3nn such as sasa, but since e3nn is so different, it adds nicely to the ensemble. I tried to generate the same features per atom instead of per voxel but the results weren't good. I suspect I had some error in my per atom feature generation code but didn't have enough time to figure it out.</p> <p>Thanks! Your discussion posts are very helpful, although there were so many of them and I couldn't keep up at the end haha</p> <p>Somehow I ended up in 6th place, and I can't believe how lucky I am. I started working on this competition with only 4 weeks left and my main idea was to use a rotation and translation equivariant network (e3nn), which I failed to develop into a strong solution on its own. Most of my ensemble comes from public notebooks, I will reference them shortly but in this post I will quickly discuss my e3nn network.</p> <h1>Data</h1> <p>I used the same dataset compiled by the thermonet notebook, so nothing fancy really.</p> <h1>My e3nn network</h1> <p>The good performance of thermonet surprised me a lot, mostly because 3D conv is not rotation equivariant, meaning that if you rotate a protein structure, it would give you different results. Obviously, you would think this would be a problem because we know rotating a protein structure shouldn't change its properties but thermonet actually outperformed my e3nn network on both private and public. 
However, it could be that my e3nn is simply unoptimized because I've never used it before and don't know how to use it well. </p> <p>For convenience, i simply ported most the data pipeline from the themonet notebook but instead of 3D voxel grids I used atom types and xyz coordinates as inputs to e3nn, a euclidean neural network specifically designed to tackle 3D structures with rotation, translation, and permutation equivariance. The rest of my solution is identical to the thermonet notebook.</p> <p>My code: <a href="https://github.com/Shujun-He/Novaenzymes" target="_blank">https://github.com/Shujun-He/Novaenzymes</a></p> <p>Some notable specifics of my e3nn:</p> <ol> <li>I calculate the geometric center of the residue of the mutated position and only take atoms within a certain range (8)</li> <li>Input node features other than atom types include residue and whether the structure is a mutant structure or wt structure</li> </ol> <p>My e3nn used in the final sub scored 0.362/0.346 on private/public.</p> <h1>Ensemble</h1> <p>I ensemble an 8 fold e3nn with the public 0.603 notebook (<a href="https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble" target="_blank">https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble</a>) with 0.8 (public 0.603 notebook)/0.2 (8 fold e3nn) weight. For inference, I use both alphafold structures and rosetta relaxed structures (generated with thermonet codebase) and averaged the predictions.</p> <h1>References</h1> <p>pLDDT, BLOSUM, DeepDDG and DeMaSk procedures are from:<br> <a href="https://www.kaggle.com/code/dschettler8845/novo-esp-eli5-performant-approaches-lb-0-425" target="_blank">https://www.kaggle.com/code/dschettler8845/novo-esp-eli5-performant-approaches-lb-0-425</a><br> with additional references:<br> <a href="https://www.kaggle.com/code/kvigly55/plldt-and-ddg" target="_blank">https://www.kaggle.com/code/kvigly55/plldt-and-ddg</a><br> <a href="https://www.kaggle.com/code/lucasmorin/nesp-changes-eda-and-baseline" target="_blank">https://www.kaggle.com/code/lucasmorin/nesp-changes-eda-and-baseline</a><br> <a href="https://www.kaggle.com/code/hengck23/lb0-335-deepdgg-server-benchmark" target="_blank">https://www.kaggle.com/code/hengck23/lb0-335-deepdgg-server-benchmark</a><br> Differential pLDDT procedure is from:<br> <a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600</a><br> with mutation PDB files from:<br> <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/361816" target="_blank">https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/361816</a><br> <a href="https://www.kaggle.com/datasets/roberthatch/nesp-kvigly-test-mutation-pdbs" target="_blank">https://www.kaggle.com/datasets/roberthatch/nesp-kvigly-test-mutation-pdbs</a><br> RMSD and SASA from MD simulation procedure:<br> <a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics</a><br> with the following two MD simulation outputs used:<br> <a href="https://www.kaggle.com/datasets/oxzplvifi/novozymes-md" target="_blank">https://www.kaggle.com/datasets/oxzplvifi/novozymes-md</a> for SASA measurement<br> <a href="https://www.kaggle.com/datasets/oxzplvifi/novozymes-md2" target="_blank">https://www.kaggle.com/datasets/oxzplvifi/novozymes-md2</a> for RMSD measurement<br> The output files of Rosetta and 
Thermonet were loaded directly since I am not sure yet if they can be implemented in R:<br> <a href="https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores" target="_blank">https://www.kaggle.com/code/shlomoron/nesp-relaxed-rosetta-scores</a> for Rosetta<br> <a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2</a> for Thermonet</p> <p>e3nn github:<a href="https://github.com/e3nn/e3nn" target="_blank">https://github.com/e3nn/e3nn</a><br> e3nn paper:<a href="https://arxiv.org/abs/2207.09453" target="_blank">https://arxiv.org/abs/2207.09453</a></p> <p>Thanks Patrick. The reason I used 8 is because I started out with 8 due to thermonet using a box of 16x16x16 (16/2=8). I tried some different ranges (e.g. 5, 10, 15) but 8 works the best</p> <p>Thanks. My e3nn used in the final sub scored 0.362/0.346 on private/public. Added that o the solution as well</p> <p>Thanks <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> ! Indeed it was quite lucky</p> <p>Thanks. Please see the comment above for my e3nn score</p>
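<p>A minimal sketch of the local-environment selection mentioned in the "notable specifics" above: take the geometric centre of the mutated residue and keep only atoms within 8 Å of it before building the e3nn input. Array names are placeholders.</p> <pre><code>import numpy as np

def local_environment(atom_coords, atom_types, residue_ids, mut_res_id, radius=8.0):
    center = atom_coords[residue_ids == mut_res_id].mean(axis=0)
    dist = np.linalg.norm(atom_coords - center, axis=1)
    keep = np.less_equal(dist, radius)
    # centred coordinates and atom types of the kept atoms become the node inputs
    return atom_coords[keep] - center, atom_types[keep]
</code></pre>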
Novozymes Enzyme Stability Prediction
[56th place] Fitting? I'm over it.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Novozymes Enzyme Stability Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This has been an enjoyable but also very demanding competition, three months seems like a very long time, and 501 is perhaps too many submissions. Thanks to <a href="https://www.kaggle.com/maggiemd" target="_blank">@maggiemd</a>, <a href="https://www.kaggle.com/inversion" target="_blank">@inversion</a> and the Kaggle team for running this competition and to Novozymes for hosting. Congratulations to those who won medals, and commiserations to the many excellent competitors who were unluckily shaken down the private LB. Thanks to everyone for your engagement on the Discussion forum, particularly <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a>, <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a>, <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> &amp; <a href="https://www.kaggle.com/ropeonmars" target="_blank">@ropeonmars</a>.<br> At this point, I have to admit that I’ve significantly rewritten my original draft given the seismic shift between public and private LB, and the discovery that many of the ideas whose public LB benefits I was most excited about turned out to have the opposite effect on the private.</p> <p>The large distance between the provided training and test data meant that this contest offered distinct approaches; either the usual ML method of building a model based on the training data, or alternatively to throw bioinformatics descriptors at the public LB in the hope of building a sufficiently high scoring and adequately robust model to survive the shake-up.</p> <p>My approach is almost entirely of the second kind, and consists of a hybrid between a multi-linear regression and an ensemble. Hence, it is entirely devoid of ML sophistication and its content rests solely on the features. My models are at <a href="https://www.kaggle.com/code/jbomitchell/model-a" target="_blank">model A</a> and <a href="https://www.kaggle.com/code/jbomitchell/model-b" target="_blank">model B</a> - you can read the full details there if you have the patience. Component sub-models (but not binary or integer descriptors) are mostly scaled onto 0-100 for ensembling, and for the most part I do not use ranks.</p> <p>There are a handful of descriptors that both contributed to my model and turned out to be robust (thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for the thorough analysis in his write-up). Relative Accessibility (rSASA), is calculated for each residue in the provided AlphaFold2 model of the target protein by dividing the Solvent Accessible Surface Area by the maximum area available for the given residue. This feature scored 0.428 on its own, a useful advance on the ~0.400 from the usual SASA. Thus, rSASA (called “accRel” in my models) provided a base prediction for each residue position in the structure, which will subsequently be modified by sequence-dependent (i.e., varying depending on the incoming residue) features. 
The rSASA feature also provided an important division of sites into ‘core’ and ‘surface’, which I treated differently throughout, with all positions having more than the median accRel value being defined as ’surface’ and those with less than median relative accessibility as ‘core’. More position-specific information came from the RMSD in MD (Molecular Dynamics) simulations, as shown in the wonderful notebook by <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a>. The pLDDT measure of AlphaFold2’s prediction confidence is similarly position-specific, though in fact I incorporated it via the notebook by <a href="https://www.kaggle.com/cafelatte1" target="_blank">@cafelatte1</a>, which mixed it with position-independent residue-replacement preference information from a substitution matrix (BLOSUM100 in this case, though I also used BLOSUM30 in my model). Each of Thermonet, Rosetta, RASP, deepDDG and GEMME were ensembled into my model, along with some other predictors. However, much of what I did was finally revealed to have been useless or harmful to the private LB score.</p> <p>I had about 11 submissions that would have hit the gold zone, but no indication that might have led to my selecting them. The best scored .536 private LB and came when I was playing with position-specific residue preferences for this nuclease protein family, based on my transcription of the <a href="https://www.kaggle.com/competitions/novozymes-enzyme-stability-prediction/discussion/356776" target="_blank">sequence logo</a> that <a href="https://www.kaggle.com/tilii7" target="_blank">@tilii7</a> previously posted in the Discussion forum. My plan was always to submit my best public LB model (model A , in the vain hope of a small shake-up) and a slightly safer (less overfitted) variant model B that excluded all steps which had failed to tick over the public LB by the minimum visible increment of 0.001. Ultimately, I’m not sure whether to be disappointed at being shaken down from the gold region or relieved to get silver in the craziness of the reranking. 
Unsurprisingly, my safer model B scored higher (model A would still have got bronze).</p> <p>Finally, thanks also to the authors of all the public notebooks whose work and willingness to share so greatly advanced this competition, and in particular:<br> <a href="https://www.kaggle.com/code/oxzplvifi/rmsd-from-molecular-dynamics" target="_blank">RMSD from Molecular Dynamics</a> by <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a>;<br> <a href="https://www.kaggle.com/code/oxzplvifi/deletion-specific-ensemble" target="_blank">Deletion-specific ensemble</a> by <a href="https://www.kaggle.com/oxzplvifi" target="_blank">@oxzplvifi</a><br> <a href="https://www.kaggle.com/code/vslaykovsky/nesp-thermonet-v2" target="_blank">NESP: ThermoNet v2</a> by <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a>;<br> <a href="https://www.kaggle.com/code/cafelatte1/nesp-blosum100" target="_blank">NESP - BLOSUM100</a> by <a href="https://www.kaggle.com/cafelatte1" target="_blank">@cafelatte1</a>;<br> <a href="https://www.kaggle.com/code/cafelatte1/nesp-plddt-with-blosum100-score-scaling" target="_blank">NESP - pLDDT with BLOSUM100 Score Scaling</a> by <a href="https://www.kaggle.com/cafelatte1" target="_blank">@cafelatte1</a>;<br> <a href="https://www.kaggle.com/code/cdeotte/xgboost-5000-mutations-200-pdb-files-lb-0-3" target="_blank">XGBoost - 5000 Mutations 200 PDB Files - LB 0.3+</a> by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>;<br> <a href="https://www.kaggle.com/code/cdeotte/difference-features-lb-0-600" target="_blank">Difference Features - LB 0.600</a> by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>;<br> <a href="https://www.kaggle.com/code/carlmcbrideellis/enzyme-location-location-location" target="_blank">Enzyme: "Location, Location, Location"…</a> by <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> (this is pretty much minus one times the sample submission);<br> <a href="https://www.kaggle.com/code/viktorfairuschin/thermostability-prediction-with-rasp-model" target="_blank">Thermostability Prediction with RaSP Model</a> by <a href="https://www.kaggle.com/viktorfairuschin" target="_blank">@viktorfairuschin</a>;<br> <a href="https://www.kaggle.com/code/sgreiner/novo-esp-rasp-baseline" target="_blank">NOVO ESP RASP Baseline</a> by <a href="https://www.kaggle.com/sgreiner" target="_blank">@sgreiner</a>;<br> <a href="https://www.kaggle.com/code/geraseva/gemme" target="_blank">GEMME</a> by <a href="https://www.kaggle.com/geraseva" target="_blank">@geraseva</a>.</p>
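<p>For completeness, a minimal sketch of the Relative Accessibility (rSASA / "accRel") feature and the core/surface split described above. The per-residue SASA values are assumed to come from the provided AlphaFold2 model, and the maximum-area table shown here is the commonly used Tien et al. (2013) theoretical values; the write-up does not state which table was actually used.</p> <pre><code>MAX_ASA = {  # theoretical maximum accessible surface area per residue (angstrom^2)
    'A': 129, 'R': 274, 'N': 195, 'D': 193, 'C': 167, 'Q': 225, 'E': 223,
    'G': 104, 'H': 224, 'I': 197, 'L': 201, 'K': 236, 'M': 224, 'F': 240,
    'P': 159, 'S': 155, 'T': 172, 'W': 285, 'Y': 263, 'V': 174,
}

df['accRel'] = df['sasa'] / df['residue'].map(MAX_ASA)        # relative accessibility
df['is_surface'] = df['accRel'].gt(df['accRel'].median())     # surface vs core split
</code></pre>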
OSIC Pulmonary Fibrosis Progression
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/aadhavvignesh" target="_blank">@aadhavvignesh</a></p> <p>Thanks <a href="https://www.kaggle.com/imoore" target="_blank">@imoore</a> 😊</p> <p>Congratz to the winners !</p> <p>Many thanks to kaggle team for hosting this competition,hope the winning solutions will bring useful insight to the problem.</p> <p>My solution is based only on tabular data.</p> <h1>Data Augmentation</h1> <p>The strategy has been shared in many public kernels.<br> You can find it <a href="https://www.kaggle.com/ttahara/osic-baseline-lgbm-with-custom-metric" target="_blank">here</a>.</p> <h1>Validation</h1> <p>I already shared my validation strategy <a href="https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression/discussion/168610" target="_blank">here </a>during the competition.</p> <h1>Features</h1> <p>I use the following features : <code>'base_FVC', 'base_Percent', 'base_Age', 'Week_passed', 'Sex', 'SmokingStatus'</code></p> <h1>FVC Prediction</h1> <p>Final fvc prediction is a mean of the following regressor : </p> <ul> <li>SVM</li> <li>KNN</li> <li>NN</li> <li>Quantile Regressor (0.5)</li> <li>RF</li> <li>LM</li> <li>HuberRegressor</li> <li>ElasticNet</li> <li>Lgbm</li> </ul> <h1>Confidence Prediction</h1> <p>1 - Compute the optimal Confidence value for final fvc,which is actually:<br> <code>Confidence = np.sqrt(2)*np.abs(FVC_true - FVC_pred)</code><br> 2 - Use the same models in FVC prediction to train on the Confidence this time<br> 3 - Train a binary classifier to know if <code>Confidence&lt;=100</code><br> 4 - Use prediction from (2) and post-process them by (3)</p> <h1>Things that didn't work for me</h1> <ul> <li>Everything with image data</li> <li>Linear Augmentation on tabular data</li> </ul> <p>Thanks <a href="https://www.kaggle.com/code1110" target="_blank">@code1110</a> </p> <p>Thanks <a href="https://www.kaggle.com/hypothermia15" target="_blank">@hypothermia15</a> </p> <p>Yes only tabular data !</p> <p>Thanks <a href="https://www.kaggle.com/johannhuber" target="_blank">@johannhuber</a> .</p> <p><strong>Dataset preparation</strong></p> <ul> <li>I used ordinal encoder. ( eg: Male is more important than Female in Sex variable …..).</li> <li>No scaling</li> <li>Yes direct prediction of FVC.<br> I don't think the 3 points above has a huge impact on the result. The most important was the augmentation trick.</li> </ul> <p><strong>Training policy &amp; CV</strong><br> CV and validation set were the key.<br> I randomly remove 20% of patient in train and add them to test. (Based on k=2 clustering ) <br> So 20% of Patient will not be part of CV they will be a validation set i'll not train on them.<br> NN : I used Mlpregressor from sklearn and I fixed params based on oof and validation predictions<br> RF: fixed params according to oof and validation predictions</p> <p><strong>Post-Processing</strong><br> Because of the metric you can ajust Confidence to get better score.<br> The binary classifier tell you for which observation you should do that.</p>
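<p>A minimal sketch of the confidence pipeline described above, together with the competition metric it is derived from; model and array names (<code>confidence_ensemble</code>, <code>confidence_classifier</code>, <code>X</code>, <code>fvc_true</code>, <code>fvc_pred</code>) are placeholders, and the post-processing value in step 4 is not stated in the write-up, so the clip shown here is only one plausible choice.</p> <pre><code>import numpy as np

def laplace_log_likelihood(fvc_true, fvc_pred, confidence):
    # competition metric (higher is better)
    sigma = np.clip(confidence, 70, None)
    delta = np.clip(np.abs(fvc_true - fvc_pred), None, 1000)
    return np.mean(-np.sqrt(2) * delta / sigma - np.log(np.sqrt(2) * sigma))

# step 1: optimal confidence for the final FVC prediction
optimal_conf = np.sqrt(2) * np.abs(fvc_true - fvc_pred)

# steps 2-4 (sketch): regress the optimal confidence, then post-process with the classifier
conf_pred = confidence_ensemble.predict(X)              # same model zoo as for FVC
tight = confidence_classifier.predict(X).astype(bool)   # predicts Confidence of at most 100
conf_pred[tight] = np.minimum(conf_pred[tight], 100)    # one plausible post-processing rule
</code></pre> <p>The step-1 formula is where the unclipped metric is maximised: its derivative with respect to the confidence vanishes at sigma = sqrt(2) * |FVC_true - FVC_pred|.</p>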
OSIC Pulmonary Fibrosis Progression
1st place "mostly" unpredictable solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all! I am really happy to be among the top scorers of this competition, even though I didn't expect that much. My best solution is heavily based on <a href="https://www.kaggle.com/khoongweihao/efficientnets-quantile-regression-inference" target="_blank">this notebook</a> so kudos to <a href="https://www.kaggle.com/khoongweihao" target="_blank">@khoongweihao</a></p> <p><strong>Introduction</strong></p> <p>You might have noticed that during the competition there were a lot of public notebooks where authors just tuned the hyperparameters of the notebook mentioned above. When I started to solve this competition I noticed some unlogical parts in the notebooks, which for some reason resulted in a high score on the public leaderboard. My initial hypothesis was that on the private leaderboard those notebooks should fail, which turned out to be true. </p> <p>Here are some of the things that I have noticed:</p> <ul> <li>Usage of the "Percent" feature, there were even discussions on the forum about the usefulness of this feature. While it wasn't obvious if it was really useful or not I decided not to use that</li> <li>Strange blending weights for the models. To my mind, it was very illogical to give such a huge weight to the EfficientNet models because of 2 reasons. Firstly, we had only around 200 patients in the training set. Secondly, models were trained on the random slices of the initial 3d CT scans, which isn't a really reliable technique. </li> </ul> <p><strong>Validation</strong></p> <p>I tried different techniques of validation and none of them really correlated with the leaderboard. Finally, I stopped on the following validation scheme:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2139711%2Fe2fa663a66bb474595bbeb7b88fd7b7e%2Fscreen1.png?generation=1602065497962232&amp;alt=media" alt=""></p> <p>Even though I didn't notice any correlation with the public lb, I decided that this approach should be close to the one used in this competition for scoring. After creating this validation, if my model scored badly both on my validation and leaderboard - I disregarded it. But in terms of selecting the best submissions, it was still a black box for me.</p> <p><strong>Training, models, and final solution</strong></p> <p>I have tried a lot of things (I will add more about this below), but ironically the backbone to my best solution turned out to be this <a href="https://www.kaggle.com/khoongweihao/efficientnets-quantile-regression-inference" target="_blank">kernel</a> :)</p> <p>Speaking about my final solution, here is what I have done to achieve this score. Firstly, I trained both models (Quantile Regression + EfficientNet b5) from scratch. For both models, I lowered the number of epochs. For effnet I decided to train for 30 epochs and for quantile regression for 600 epochs. Then, I changed the architecture of Quantile Regression a bit, because on validation my architecture worked better. Apart from that, I removed all the "Percent" related features for both models, it turned out that it gave a huge boost on private lb for me. 
The hardest decision was how to choose the weights for the blend. Well, I just decided to give a slightly higher weight to Quantile Regression because for me it seemed to work better. Finally, I did some more improvements to the backbone notebook; for example, there was a part with quantile selection based on the best loglikelihood score for the EfficientNet models. This part took ages to finish and, moreover, it did not look like a good decision to me, so I just set the quantile to 0.5 and didn't select anything, which allowed my inference notebooks to run in just 3 minutes total.</p> <p>Also, I would like to give a small tip on how I tend to select submissions and verify prediction correctness; in general, this helps me a lot. When I have trained a new model and receive the submission file, I always draw distribution plots of the prediction values, along with plots of the predictions themselves (a minimal sketch of this kind of check is given at the end of this post). Here is how they look:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2139711%2Fdb8ced80a746d1946238e60ba981273a%2FUntitled.png?generation=1602065623926772&amp;alt=media" alt=""></p> <p>These are plots of the test set "Confidence" for a subset of my models; sometimes, by looking at those plots, you can identify strange model behavior and find a bug. In general, I always analyze the predictions really carefully and build a lot of graphs.</p> <p><strong>What didn't work</strong></p> <p>As I have said, I tried a lot of stuff, but it almost always worked badly both on LB and CV. Here are a few things:</p> <ul> <li>Calculated lung volume with methods from the public notebooks and passed it as features for both models</li> <li>Tested other models, XGBoost, Log Regressions on tabular data. Thanks to my CV it immediately turned out that trees do not work here, so I didn't do anything with trees since the beginning of this competition.</li> <li>Since I was testing simple models, my 2nd selected submission was a really simple logistic regression model, which by the way landed in the bronze zone</li> <li>Augmentations for the CT scans worked poorly, maybe I should have spent more time testing them</li> <li>Histogram features of the image didn't work either</li> <li>If you have analyzed model outputs, you might have noticed those spikes (both for Confidence and FVC)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2139711%2Fd0538eb2a86d7017f0042c79694ac4de%2FUntitled%20(1).png?generation=1602065688923432&amp;alt=media" alt=""></p> <p>It made total sense to remove them, but it didn't work on my validation, so I left it as is. It turned out it didn't work on the private test set either. Though, I am still confused about the reason why it happened.</p> <p><strong>Final words</strong></p> <p>This is my first gold on Kaggle and I am really happy about that. I am also happy about this huge shake-up which helped me land in the first place, which I wasn't expecting. I would like to thank all the Kaggle community for making so many notebooks public and being active on the forum!
Without you guys, I wouldn't have learned that much during this competition and all other previous ones.</p> <p>And one last thing, don't ever track your public LB score, this mostly helps ;)</p> <p>Below I will attach links to my final submission notebooks, along with Medium writeup and Github repo.</p> <p><a href="https://www.kaggle.com/artkulak/inference-45-55-600-epochs-tuned-effnet-b5-30-ep" target="_blank">1st Place notebook</a></p> <p><a href="https://www.kaggle.com/artkulak/simple-logreg?scriptVersionId=44081090" target="_blank">Bronze zone very simple solution</a></p> <p><a href="https://medium.com/@artkulakov/how-i-achieved-the-1st-place-in-kaggle-osic-pulmonary-fibrosis-progression-competition-e410962c4edc" target="_blank">Medium writeup</a></p> <p><a href="https://github.com/artkulak/osic-pulmonary-fibrosis-progression" target="_blank">Github repository</a></p> <p>Thank you! Why do you think min_FVC leaks information? Isn't that just the initial measurement for each patient, which is present in the test set as well?</p>
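<p>A minimal sketch of the sanity-check plots mentioned above (the file names below are hypothetical, and this is just an illustration of the idea, not the code from the original notebooks):</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

# one submission file per model to compare (hypothetical paths)
subs = {"model_a": "submission_a.csv", "model_b": "submission_b.csv"}

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
for name, path in subs.items():
    df = pd.read_csv(path)
    # distribution of the predicted Confidence values
    df["Confidence"].plot(kind="hist", bins=50, alpha=0.5, ax=axes[0], label=name)
    # raw FVC predictions in submission order, useful for spotting spikes
    axes[1].plot(df["FVC"].values, alpha=0.5, label=name)

axes[0].set_title("Confidence distribution")
axes[1].set_title("FVC predictions")
for ax in axes:
    ax.legend()
plt.show()
</code></pre>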
OSIC Pulmonary Fibrosis Progression
23rd Place Solution - Single Tabnet model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first Kaggle silver medal, super happy about that 😄 This was a great learning experience for me. Kudos to all the winners and thanks to the entire Kaggle ecosystem.</p> <h3><strong>Solution Overview</strong></h3> <p>Tabular features and features obtained from CT scans were used in a <a href="https://github.com/dreamquark-ai/tabnet" target="_blank">Tabnet model</a>, trained with pinball loss. As a form of data augmentation, every FVC score available for each person was assumed to be the first FVC test score, and other features were built accordingly.</p> <p><strong>Training Notebook</strong> - <a href="https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-training?scriptVersionId=44239361" target="_blank">https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-training?scriptVersionId=44239361</a></p> <p><strong>Inference Notebook</strong> - <a href="https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-inference/data?scriptVersionId=43574381" target="_blank">https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-inference/data?scriptVersionId=43574381</a></p> <h3><strong>Tabular Features</strong></h3> <p>This is fairly straightforward.</p> <p><code>first_test_fvc</code> - FVC score of the assumed first visit<br> <code>predict_week</code> - Weeks from the CT scan, <code>Week</code> feature from the train data<br> <code>weeks_from_first_visit</code> - number of weeks from the first FVC test.<br> <code>expected_fvc</code> - <code>fvc * percent</code> this is a constant for each patient.<br> <code>percent</code> - Percent of the <code>first_test_fvc</code> when compared to the <code>expected_fvc</code><br> <code>age</code> - Age of the patient</p> <p>One-hot encoded values of <code>Sex</code> and <code>SmokingStatus</code></p> <h3><strong>CT Scan Features</strong></h3> <p>3D rescaled segmented lung model was generated for each patient from which the image features were obtained.</p> <p><strong>Segmentation</strong> - The lung segmentation was performed using <a href="https://www.kaggle.com/aadhavvignesh/lung-segmentation-by-marker-controlled-watershed" target="_blank">Marker-controlled watershed segmentation</a>, thanks to the amazing work by <a href="https://www.kaggle.com/aadhavvignesh" target="_blank">Aadhav Vignesh</a></p> <p><strong>Rescaling the voxel pixels</strong> - The rescaling was done using the <code>resample</code> method from <a href="https://www.kaggle.com/allunia/pulmonary-dicom-preprocessing" target="_blank">Pulmonary Dicom Preprocessing</a>. Thanks <a href="https://www.kaggle.com/allunia" target="_blank">Laura Fink</a> for the great kernel.</p> <p>The only issue I faced with this approach was to stay within the 4 hours submission runtime while generating the 3D models for private dataset in the inference notebook. 
I was able to overcome this by switching <code>scipy.ndimage.zoom</code> in the <code>resample</code> function to <code>torch.nn.functional.interpolate</code>, which was much faster (a rough sketch of this swap is given at the end of this post).</p> <p><strong>Image features notebook</strong> - <a href="https://www.kaggle.com/gautham11/lung-volume-lung-height-hu-values-image-features" target="_blank">https://www.kaggle.com/gautham11/lung-volume-lung-height-hu-values-image-features</a></p> <p><strong>train Image features dataset</strong> - <a href="https://www.kaggle.com/gautham11/lung-image-features" target="_blank">https://www.kaggle.com/gautham11/lung-image-features</a></p> <p>The image features generated are:</p> <p><code>lung_volume_in_liters</code> - Number of segmented lung voxel pixels in the 3D model / 1e6.<br> <code>lung_height</code> - Difference between the first and last slice with more than 1000 lung voxel pixels.<br> <code>lung_mean_hu</code> - Mean of the HU values.<br> <code>lung_skew</code> - Skew of the HU values.<br> <code>lung_kurtosis</code> - Kurtosis of the HU values.</p> <p>And histogram bin values of the HU values in the 3D model were used as features - <code>bin_-900</code>, <code>bin_-800</code>…<code>bin_200</code>.</p> <p>The inspiration for most of these features was from the <a href="https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression/discussion/165727" target="_blank">Domain expert's insights</a> post by <a href="https://www.kaggle.com/sandorkonya" target="_blank">Dr. Konya</a>.</p> <p>These features gave me a good improvement in my CV score.</p> <h3><strong>Model Training</strong></h3> <p>The data was min-max scaled and split into 5 folds, stratified on <code>expected_fvc</code>, the intuition being that the split would then be based on the patients' characteristics; I'm not sure if this was the best method. For each split, the Tabnet model was trained with the <code>SGD</code> optimizer and a One-cycle LR scheduler for 350 epochs.</p> <p><strong>Model weights dataset</strong> - <a href="https://www.kaggle.com/gautham11/gautham-quantmodel" target="_blank">https://www.kaggle.com/gautham11/gautham-quantmodel</a></p> <p>I gave Pytorch-Lightning a try for the first time in this competition, and I'm fairly certain that I'll use it in every project from here on.</p> <p>Finally, on selecting my submissions, I selected my best CV score model and my best Public LB model. In retrospect, selecting the best Public LB model was a poor decision because it got a bad score on the private LB, all the more affirmation to always <strong>trust the CV</strong>.</p> <p>Thanks! My tabnet model with only tabular features also scored in the same range.<br> Great question! It seems pytorch-lightning supports <a href="https://pytorch-lightning.readthedocs.io/en/latest/hyperparameters.html#hyperparameter-optimization" target="_blank">hyperparameter tuning by integrating with other libraries</a>. I didn't try it in this competition, but we should be able to tune tabnet parameters this way.</p> <p>Thanks, <a href="https://www.kaggle.com/aadhavvignesh" target="_blank">@aadhavvignesh</a>! Good to see a fellow pytorch-lightning fan :P</p> <p>Thanks <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a> :)</p> <p><a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a> Very thankful for the work you've done, can't wait for the updates.
<br> The training code is available in this notebook - <a href="https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-training?scriptVersionId=44239361" target="_blank">https://www.kaggle.com/gautham11/quantile-regression-pytorch-lightning-training?scriptVersionId=44239361</a><br> I didn't need the sklearn wrapper, so used <code>TabNet</code> directly.</p>
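<p>A rough sketch of the resampling swap mentioned above (this is an illustration, not the exact code from the notebooks; the spacing handling is an assumption):</p> <pre><code>import numpy as np
import torch
import torch.nn.functional as F

def resample_torch(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a (D, H, W) HU volume to a new voxel spacing with trilinear interpolation."""
    spacing = np.asarray(spacing, dtype=np.float32)
    new_spacing = np.asarray(new_spacing, dtype=np.float32)
    new_shape = np.round(np.array(volume.shape) * spacing / new_spacing).astype(int)

    t = torch.from_numpy(volume.astype(np.float32))[None, None]  # (1, 1, D, H, W)
    out = F.interpolate(t, size=tuple(new_shape.tolist()), mode="trilinear", align_corners=False)
    return out[0, 0].numpy()
</code></pre>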
OSIC Pulmonary Fibrosis Progression
34th place solution - Using CT Scans (+ Code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I did not expect to win this comp so I was not immediately prepared with my solution or the code. I have made my code available here: <a href="https://github.com/GreatGameDota/OSIC-Pulmonary-Fibrosis-Prediction" target="_blank">https://github.com/GreatGameDota/OSIC-Pulmonary-Fibrosis-Prediction</a></p> <p>As always I want to thank Kaggle for another amazing competition as well as the hosts for putting this together!</p> <p>This was an interesting competition for me because I saw that everyone else was only using the tabular data. I have no experience at all with tabular data comps and I like CV, so I was stubborn and forced myself to use the CT scans. I was discouraged by my poor LB score but I kept pushing, and it turned out to be the correct decision!</p> <p>Anyway, on to the solution:</p> <h2>Models</h2> <ul> <li>Resnet50</li> </ul> <h2>Dataset</h2> <p>Image resolution: 512x512<br> For each scan I loaded in all the dicom files then converted them to HU. I then cropped all the images and reshaped them to the size 50x512x512. After doing that I windowed each image into three parts (a rough sketch of this kind of windowing is given at the end of this post). For a detailed explanation of the windowing take a look at <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">this</a> post (I used the same function and window values as shared there, thanks so much Ian Pan!). After I got the three windowed images I simply saved them as pngs.<br> Each image looked similar to this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3543139%2F3b1f2461da6d31658806e1ace841dd5f%2Flung.png?generation=1602033034958099&amp;alt=media" alt=""></p> <p>I made the dataset I used public here: <a href="https://www.kaggle.com/greatgamedota/osic-windowed-lung-images" target="_blank">https://www.kaggle.com/greatgamedota/osic-windowed-lung-images</a></p> <p>For metadata I used the same metadata from <a href="https://www.kaggle.com/ulrich07" target="_blank">@ulrich07</a> 's <a href="https://www.kaggle.com/ulrich07/osic-multiple-quantile-regression-starter" target="_blank">baseline kernel</a>, except I only used the base Percent value as it increased my CV to LB to PB correlation.</p> <h2>Augmentation</h2> <ul> <li>Coarse dropout</li> <li>SSR</li> <li>Horizontal + Vertical flip</li> <li>For one model: Random Saturation and Brightness</li> </ul> <p>No tabular/meta augmentation</p> <h2>Training</h2> <ul> <li>Adam optimizer with Reduce on Plateau scheduler</li> <li>Trained with an LR of .003 for 30 epochs</li> <li>Batch size of 16 (bs of 4 for one model)</li> <li>Trained using Quantile Regression with .8 qloss + .2 metric loss (same as Ulrich's kernel)</li> <li>Trained 5 fold for every model</li> <li>Didn't use any batch accumulation or mixed precision</li> <li>Saved checkpoint based on best validation score (all weeks)</li> </ul> <p>For training I removed 6 entire patients because their CT scans were broken.
I then split each patient into a fold by randomly shuffling them then using GroupKFold.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3543139%2F69a1967fbe3f4067d7fd9f6e5e91350c%2Ffolds.png?generation=1602036568284315&amp;alt=media" alt=""></p> <p>Then while training I pick a random unique patient and randomly select an image from 10-40 (since the first and last 10 images don't contain any lung info). I then made it so that each iteration lasted 4 * amount of patients. For validation I only picked the 15th image since it was the middle image that usually contains the most information.</p> <h2>Ensembling/Blending</h2> <p>Simple mean average of their FVC predictions and confidence</p> <h2>Final Submission</h2> <p>My final submission was a blend of 3 Resnet50 models:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3543139%2F773b5cc6fdf39b2fe77ec08d4b295684%2Ffinal%20sub.png?generation=1602040599569002&amp;alt=media" alt=""><br> And another point is that the single model that would have scored gold had the same training parameters as the other models except the added brightness/saturation augmentation</p> <h2>What didn't work</h2> <ul> <li>3d resnets (I tried for at least a month with these)</li> <li>Linear Decay Regression</li> <li>Any other type of model besides Resnets and Efficientnets (determinism issues)</li> <li>Efficientnets</li> <li>Simple meta data head (just concat)</li> <li>Random erase augmentation</li> </ul> <h2>Final Thoughts</h2> <p>I want to say again that this was an awesome comp that I am so glad I participated in!! Very glad to get my third medal and second silver medal! :D</p> <h2>Some helpful links</h2> <p>Inference notebook: <a href="https://www.kaggle.com/greatgamedota/osic-inference2?scriptVersionId=44254523" target="_blank">https://www.kaggle.com/greatgamedota/osic-inference2?scriptVersionId=44254523</a></p>
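<p>A minimal sketch of HU windowing into three channels, as described in the Dataset section above (illustrative only; the (center, width) values below are assumptions, the actual values used come from the referenced post):</p> <pre><code>import numpy as np

def window(hu_image, center, width):
    """Clip an HU image to a window and scale it to [0, 255]."""
    low, high = center - width / 2, center + width / 2
    img = np.clip(hu_image, low, high)
    return ((img - low) / (high - low) * 255).astype(np.uint8)

def make_three_channel(hu_image):
    # illustrative (center, width) pairs, e.g. lung / mediastinal / wide windows
    windows = [(-600, 1500), (40, 400), (100, 700)]
    channels = [window(hu_image, c, w) for c, w in windows]
    return np.stack(channels, axis=-1)  # H x W x 3, ready to save as a png
</code></pre>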
OSIC Pulmonary Fibrosis Progression
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/aadhavvignesh" target="_blank">@aadhavvignesh</a> I have updated my post with the solution.</p> <p>This moment seems to be magical, a 4th place finish. In my first two competitions(Trends and SIIM Melanoma), I ended with top 12% and 8% rank on final LB. I did not win big in those competitions but learnt a lot. </p> <p>In this competition, I believe that the tissue segmentation and other features extracted from CT scans made the difference. So what worked for us?</p> <ul> <li>Robust CV</li> <li>Good quality lung and tissue segmentation</li> <li>Rich features extracted from CT scans</li> <li>Ensemble of 10 models</li> <li>A lot of courage to not get baffeled by LB scores</li> </ul> <p>Tissue segmentation can be found <a href="https://www.kaggle.com/abhishekgbhat/tissue-segmentation-used-in-4th-place-solution" target="_blank">here</a><br> Feature Extraction and the entire data processing and inference pipeline can be found <a href="https://www.kaggle.com/abhishekgbhat/quantreg-linear-decay-efficientnet-b1-su" target="_blank">here</a></p> <p>It would be totally unfair if I dont thank the Kaggle community for selfless sharing of ideas and approaches on discussion forums and in the form of notebooks. Our solution is built on the amazing work done by other kagglers in this competition.</p> <p>I'd specially like to thank:</p> <ul> <li><a href="https://www.kaggle.com/andradaolteanu" target="_blank">@andradaolteanu</a> for her lung segmentaion notebook. This served as a base for our lung segmentation. With a few tweaks we could improve the lung segmentation and further add tissue segmentation as well.</li> <li><a href="https://www.kaggle.com/miklgr500" target="_blank">@miklgr500</a> for sharing Linear Decay notebook</li> <li><a href="https://www.kaggle.com/ulrich07" target="_blank">@ulrich07</a> for the Quant Reg notebook</li> </ul> <p>Thanks Everyone!</p> <p>Hey <a href="https://www.kaggle.com/tanulsingh077" target="_blank">@tanulsingh077</a>, I have updated my post to include solution notebooks. Hope you find it helpful.</p>
OSIC Pulmonary Fibrosis Progression
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to the winners and thanks to kaggle and the hosts for this competition. Special thanks to <a href="https://www.kaggle.com/LukeReijnen" target="_blank">@LukeReijnen</a> for being an awesome teammate. I would also like to thank the community for sharing their ideas on what worked and did not work and for reinforcing the idea of trusting your CV. </p> <h5>Model:</h5> <p>Now to the model. We used a small network with only tabular data. It had the following input features: <code>[WeekInit, WeekTarget, WeekDiff, FVC, Percent, Age, Sex, CurrentlySmokes, Ex-smoker, Never Smoked]</code>. These were followed by two dense hidden layers of 32 nodes each with the swish activation function. The output consisted of a node for the FVC prediction and a node for the direct prediction of sigma. The network was set up to predict values between 0 and 1, so the outputs were multiplied by 5000 and 500 respectively.</p> <h5>Loss function:</h5> <p>We initially used the competition metric. However, as we wanted a higher emphasis on MAE, we used the loss function (a code sketch is given at the end of this post):<br> $$\frac{\sqrt2 \Delta}{70} + \frac{\sqrt2 \Delta}{\sigma_{clipped}} + \ln(\sqrt2 \sigma_{clipped})$$</p> <h5>Validation:</h5> <p>As a validation technique, we used group K-fold. We split the patients into 5 folds of roughly equal size. As train and validation sets we used all combinations of data for each patient with <code>WeekTarget &gt; WeekInit</code>. In other words, given a measurement of a patient, we trained on predicting all future measurements.</p> <h5>Data augmentation:</h5> <p>On the training set, we added Gaussian noise to the input. For FVC we used a rather big standard deviation of 500 mL. However, as we added the same noise to the target FVC that was to be predicted, this turned out to work great. To change the percent feature accordingly, we added <code>Percent*FVC_noise/FVC</code> to it on top of the noise on percent. Finally, we normalized the input to lie roughly between 0 and 1.</p> <p>As prediction, we used the average of the FVC predictions across folds. For sigma, we used the quadratic mean of the predicted sigmas across folds.</p> <p>This was an interesting competition from which I learned a lot. We tried a lot of techniques that did not work out in the end but were fun to implement anyway. It was tempting to look at the public leaderboard score but we often reminded ourselves to trust our CV. With a jump of ~1500 from the public LB that seems to have paid off. We are very pleased with the results and enjoyed this competition.</p>
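<p>A minimal sketch of the loss above (the PyTorch framework choice here is an assumption; the original network may have been written differently):</p> <pre><code>import torch

SQRT2 = 2 ** 0.5

def fvc_loss(fvc_pred, sigma_pred, fvc_true):
    # sigma is clipped at 70, as in the competition metric
    sigma = torch.clamp(sigma_pred, min=70.0)
    delta = torch.abs(fvc_true - fvc_pred)
    # the extra sqrt(2)*delta/70 term puts a higher emphasis on the absolute error (MAE)
    loss = SQRT2 * delta / 70.0 + SQRT2 * delta / sigma + torch.log(SQRT2 * sigma)
    return loss.mean()
</code></pre>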
OSIC Pulmonary Fibrosis Progression
67th Place Solution but Just ElasticNet performed way better!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to OSIC and Kaggle for hosting this competition. It was a lot of fun exploring and learning from public kernels and discussion forums, and testing new methods.</p> <p><strong>Brief Summary</strong><br> Our solution is an ensemble of EffNet B5 (with the competition metric for evaluating CV performance), Quantile Regression with a modified loss function, and ElasticNet.</p> <ul> <li>EffNet B5 -&gt; (520x520) -&gt; Linear Decay based on EffNet for coefficient prediction, CV performance evaluated based on the competition metric</li> <li>Quantile Regression with a modified loss function</li> </ul> <pre><code>def new_asy_qloss(y_true, y_pred):
    qs = [0.2, 0.50, 0.8]
    q = tf.constant(np.array([qs]), dtype=tf.float32)
    e = y_true - y_pred
    epsilon = 0.8
    v = tf.maximum(-(1-q)*(e+q*epsilon), q*(e-(1-q)*epsilon))
    v1 = tf.maximum(v, 0.0)
    return K.mean(v1)
</code></pre> <ul> <li>ElasticNet with 10 folds on GroupKFold</li> </ul> <p>And finally an ensemble of all the models.</p> <p><strong>Just an ElasticNet on 10 folds with GroupKFold performed better than our ensemble submission.</strong></p> <p><a href="https://www.kaggle.com/jagadish13/osic-baseline-elasticnet-eda-testing?scriptVersionId=43936071" target="_blank">Check: ElasticNet Final Submission</a></p> <p><strong>Congrats to all the winners!</strong></p>
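<p>A minimal sketch of the ElasticNet + GroupKFold setup described above (illustrative only; the hyperparameters and variable names are assumptions, see the linked notebook for the actual code):</p> <pre><code>import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GroupKFold

# X, y: tabular features and FVC targets; groups: the patient id for each row
def fit_elasticnet_groupkfold(X, y, groups, n_splits=10):
    oof = np.zeros(len(y))
    models = []
    for trn_idx, val_idx in GroupKFold(n_splits=n_splits).split(X, y, groups):
        model = ElasticNet(alpha=0.3, l1_ratio=0.8)  # hyperparameters are illustrative
        model.fit(X[trn_idx], y[trn_idx])
        oof[val_idx] = model.predict(X[val_idx])
        models.append(model)
    return models, oof
</code></pre>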
OSIC Pulmonary Fibrosis Progression
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>I don't think we should monitor last3 cv because it's smaller to trust, so I haven't checked it.</p> <p>My recent submissions are as follows, seems CV and Private correlate well.</p> <ul> <li>CV: -6.58651 Public: -6.8921 Private: -6.8443</li> <li>CV: -6.54917 Public: -6.8922 Private: -6.8421</li> <li>CV: -6.53815 Public: -6.8972 Private: -6.8363</li> </ul> <p>Yes, I think my input data construction strategy is important.</p> <p>Thank you! Glad to hear that, Congrats to your gold too :)</p> <p>Thanks you!<br> Glad to hear that, I'll do my best :)</p> <p>Thank you!</p> <p>I would like to thank kaggle &amp; host for the interesting competition and to all the participants for giving me a lot of ideas. And congrats to winners!</p> <p>My work is based on </p> <ul> <li>My tabular approach notebooks<ul> <li><a href="https://www.kaggle.com/yasufuminakama/osic-lgb-baseline" target="_blank">https://www.kaggle.com/yasufuminakama/osic-lgb-baseline</a> </li> <li><a href="https://www.kaggle.com/yasufuminakama/osic-ridge-baseline" target="_blank">https://www.kaggle.com/yasufuminakama/osic-ridge-baseline</a> </li></ul></li> <li>Image approach notebooks<ul> <li><a href="https://www.kaggle.com/miklgr500/linear-decay-based-on-resnet-cnn" target="_blank">https://www.kaggle.com/miklgr500/linear-decay-based-on-resnet-cnn</a></li> <li><a href="https://www.kaggle.com/khoongweihao/k-fold-tf-efficientnet-models-training" target="_blank">https://www.kaggle.com/khoongweihao/k-fold-tf-efficientnet-models-training</a></li></ul></li> </ul> <h1>Solution Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1695531%2F1f5423ba0fc5e5bb3bc182c0b8a75b5a%2FOSIC-solution.png?generation=1602040115132884&amp;alt=media" alt="OSIC-solution-overview"></p> <h1>How to construct input data</h1> <p>As I showed in <a href="https://www.kaggle.com/yasufuminakama/osic-lgb-baseline" target="_blank">my notebook</a>, I constructed input data by treating every single measurement as if it were a “baseline” measurement. And “Week” is used to create “Week_passed” for each measurements. We don’t know which “Week” is given in test data, so this input data construction gives us robustness to build models.</p> <h1>How to treat image data</h1> <p>Based on image approach notebooks, I used efficientnet-b0 for image size 320x320. 
<br> I tried other efficientnet models but there are not so much different results among them when I fix quantile value as 0.5, so I used the smallest one.<br> This efficientnet-b0 output is used for FVC &amp; Confidence model training with tabular features.</p> <h1>FVC prediction</h1> <p>FVC prediction is almost same as my notebooks, the difference is I prepared 5 models and blend them automatically using <code>sp.optimize.minimize</code>, weight is [Lasso, Ridge, ElasticNet, SVM, NN] = [0.68573749, 0., 0., 0.07551167, 0.23750526].</p> <h1>Confidence prediction</h1> <p>Confidence labels are made using FVC oof &amp; <code>sp.optimize.minimize</code> as shown in my notebooks.<br> Confidence prediction is almost same as my notebooks too, the difference is I prepared 5 models and blend them automatically using <code>sp.optimize.minimize</code>, weight is [Lasso, Ridge, ElasticNet, SVM, NN] = [0.22062125, 0., 0., 0., 0.80819966].</p> <h1>About shake up</h1> <p>There are so many public notebooks which overfits Public LB.<br> Public LB has not so much data, so you don't need to care about Public LB so much.</p> <h1>CV and LB Transition</h1> <table> <thead> <tr> <th>Model</th> <th>CV</th> <th>Public</th> <th>Private</th> <th>Medal</th> </tr> </thead> <tbody> <tr> <td>LGB</td> <td>-6.85094</td> <td>-6.9605</td> <td>-7.0037</td> <td>None</td> </tr> <tr> <td>Ridge</td> <td>-6.73738</td> <td>-6.9357</td> <td>-6.8562</td> <td>Bronze</td> </tr> <tr> <td>Efficientnet-b0 + Ridge</td> <td>-6.58651</td> <td>-6.8921</td> <td>-6.8443</td> <td>Silver</td> </tr> <tr> <td>Efficientnet-b0 + Blend models v1</td> <td>-6.54917</td> <td>-6.8922</td> <td>-6.8421</td> <td>Top Silver</td> </tr> <tr> <td>Efficientnet-b0 + Blend models v2</td> <td>-6.53815</td> <td>-6.8972</td> <td>-6.8363</td> <td>Gold</td> </tr> </tbody> </table> <h1>Final result</h1> <p>I selected my best CV, and it was the best Private LB submission.<br> CV (4fold): -6.53815<br> Private LB: -6.8363</p> <p>Thanks you!</p> <p>Thank you! :) </p> <p>Thanks! :)</p> <p>Thanks!<br> I have tabular only submission that scores private -6.8366 for gold, but CV using CT dicom files is much better than tabular only, so IMHO tabular only submission can get gold with some luck, but using CT dicom files leads to more solid result.</p> <p>I think the reason you got higher CV is that it's not same Cross-Validation before you apply <code>-12&lt;=output['Week_passed']</code>, you can't compare them.<br> Thanks &amp; Congrats to your result!</p> <p>Thank you &amp; Congrats to your effort and result!</p> <p>Thank you! Glad to hear that :)<br> It's for all data, I don't think we should monitor last3 cv because it's smaller to trust.</p> <p>Thanks!</p> <ul> <li>How did you actually use the image data?<ul> <li>Training is almost same method as public notebooks I wrote above. I simply used efficientnet-b0 oof predictions for each patient to train other models. Quantile value is fixed to 0.5 when I get predictions. <strong>Linear decay for weeks is not applied.</strong></li></ul></li> <li>From what I experimented using images did not help with the predictions much<ul> <li>I'm not sure about your CV, but in my experiments for example my single ridge improved CV around 0.15 by using image predictions.</li></ul></li> </ul> <p>Updated CV and LB Transition, CV and Private correlate well :)</p> <p>Thanks again!</p>
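<p>A minimal sketch of the kind of <code>sp.optimize.minimize</code> blending described above (illustrative only; the objective shown here, out-of-fold mean absolute error, and the variable names are assumptions):</p> <pre><code>import numpy as np
from scipy.optimize import minimize

# oof_preds: (n_samples, n_models) out-of-fold predictions from the base models
# y_true: the corresponding FVC values
def find_blend_weights(oof_preds, y_true):
    n_models = oof_preds.shape[1]

    def objective(weights):
        blended = oof_preds @ weights
        return np.mean(np.abs(y_true - blended))

    init = np.ones(n_models) / n_models
    bounds = [(0.0, 1.0)] * n_models
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, init, method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x  # e.g. weights for [Lasso, Ridge, ElasticNet, SVM, NN]
</code></pre>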
OSIC Pulmonary Fibrosis Progression
A non-selected solution [private lb: -6.8331]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We all know that, and we are looking forward to your solution :)</p> <p>This baseline kernel was made two months ago and I believe many of you could have kernels with a similar result. I didn't learn a lot from this result, so please feel free to comment on your experience.</p> <p><a href="https://www.kaggle.com/dxchen/fork-of-pytorch-tabular-nn" target="_blank">https://www.kaggle.com/dxchen/fork-of-pytorch-tabular-nn</a></p>
OSIC Pulmonary Fibrosis Progression
An efficient solution to Lung function decline prediction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, You can check out this notebook for an efficient solution to this competition.<br> Notebook link: <a href="https://www.kaggle.com/alifrahman/osic-pulmonary-fibrosis-progression-submission" target="_blank">OSIC Pulmuonary Fibrosis Progression</a></p> <p>Thank you for your time!😊</p>
OSIC Pulmonary Fibrosis Progression
Another first place solution and shake up explanation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>These were our best private leaderboard submissions. There was no way to know they were going to win because they were my test submissions.</p> <p><img src="https://i.ibb.co/khVpqpb/subs.jpg" alt="i"></p> <p>The best score is the output of a quantile regression model trained for 25 epochs. Another thing to consider is that those models were using the baseline Percent instead of the Percent feature. All of the public notebooks were using 800 epochs, which caused lots of overfitting on the training data. Using the Percent feature instead of the baseline Percent caused models to output non-linear curves. I think those two things caused the shake up.</p> <p>I plotted oof FVC curves in those versions. You can see what kind of predictions did well on the private leaderboard.<br> <a href="https://www.kaggle.com/gunesevitan/osic-pulmonary-fibrosis-progression-3-cv-3-model?scriptVersionId=39886853" target="_blank">https://www.kaggle.com/gunesevitan/osic-pulmonary-fibrosis-progression-3-cv-3-model?scriptVersionId=39886853</a></p>
OSIC Pulmonary Fibrosis Progression
I should have won gold medal and 12th by trusting my augmentation approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks for your kind words!!!</p> <p>In the beginning I had an augmentation with only tabular data that I knew was stable, but I didn't choose it. It reminds me of my M5 blunder. For those interested, I posted the related notebook here <a href="https://www.kaggle.com/ulrich07/osic-basic-tabular-data-augmentation-nn?scriptVersionId=39788369" target="_blank">https://www.kaggle.com/ulrich07/osic-basic-tabular-data-augmentation-nn?scriptVersionId=39788369</a>.</p>
OSIC Pulmonary Fibrosis Progression
Last 3 CV -6.8746 solution which suffered giant shake up to 1525th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners of this shaky competition. For me, it's a very disheartening result…<br> I made an effort to achieve a good CV score with 3 folds, and it was successful at CV -6.8746.<br> I believed that my solution would at least give a silver medal, but my CV betrayed me.</p> <p>This is my solution:</p> <p><strong>Features</strong><br> I used tabular features from this notebook<br> <a href="https://www.kaggle.com/ulrich07/osic-multiple-quantile-regression-starter" target="_blank">https://www.kaggle.com/ulrich07/osic-multiple-quantile-regression-starter</a><br> and image features that my teammate <a href="https://www.kaggle.com/samshipengs" target="_blank">@samshipengs</a> engineered (following).</p> <p>total_lung_kurtosis<br> max_slice_lung_skew<br> min_slice_lung_skew<br> total_lung_skew<br> lung_height<br> total_lung_volume</p> <p>This helped a lot in improving CV (about +0.05), but the private score turned out dismal.</p> <p><strong>Model 1: Lasso</strong><br> last 3 MAE 182.20, last 3 CV -6.8892</p> <p>I made this model on the final day of this comp.<br> I thought Lasso greatly reduced overfitting, but…<br> This model follows the process below; every prediction step is a Lasso (a rough code sketch is given at the end of this post):</p> <ol> <li>Assume we predict fold 3: construct training data with fold 1&amp;2 data + fold 3 first visit data * 6 (sampled 6 times). Simultaneously, construct training data with fold 1&amp;2 data + test first visit data * 2 for test prediction</li> <li>Predict the first FVC value and replace the first FVC value with (prediction + actual value) / 2</li> <li>Predict FVC with the adjusted first FVC value + other feats for both training data and validation/test data, then take the absolute value of the prediction error on the training data</li> <li>Predict the absolute value of the error, then Confidence = max(predicted error * √2, 70)</li> </ol> <p>Using the first visit data of the validation data worked great for CV.</p> <p><strong>Model 2: 4-layer MLP</strong><br> last 3 MAE 185.51, last 3 CV -6.9057</p> <p>This is by my teammate <a href="https://www.kaggle.com/drtausamaru" target="_blank">@drtausamaru</a>.<br> This model predicts FVC and Confidence separately,<br> using Huber loss for FVC and quantile loss for Confidence. </p> <p><strong>Ensemble</strong><br> FVC = Lasso * 0.65 + MLP * 0.35<br> Confidence = Lasso * 0.5 + MLP * 0.5<br> last 3 MAE 180.50, last 3 CV -6.8746</p>
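<p>A rough sketch of the Lasso pipeline above (this is only an illustration of steps 1-4, not the actual code; the column names, the <code>feats</code> list and the alpha values are assumptions):</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso

# train / valid: DataFrames with a "first_FVC" column, a "FVC" target and feature columns "feats"
# (feats excludes "first_FVC"); valid_first: first-visit rows of the validation (or test) patients
def lasso_pipeline(train, valid, valid_first, feats):
    # 1) augment training data with the repeated first-visit rows of the validation patients
    train_aug = pd.concat([train] + [valid_first] * 6, ignore_index=True)

    # 2) predict the first FVC value and average it with the actual value
    m_first = Lasso(alpha=1.0).fit(train_aug[feats], train_aug["first_FVC"])
    valid = valid.copy()
    valid["first_FVC"] = (m_first.predict(valid[feats]) + valid["first_FVC"]) / 2

    # 3) predict FVC and compute absolute errors on the training data
    cols = feats + ["first_FVC"]
    m_fvc = Lasso(alpha=1.0).fit(train_aug[cols], train_aug["FVC"])
    train_err = np.abs(train_aug["FVC"] - m_fvc.predict(train_aug[cols]))

    # 4) predict the absolute error and turn it into Confidence
    m_err = Lasso(alpha=1.0).fit(train_aug[cols], train_err)
    fvc_pred = m_fvc.predict(valid[cols])
    confidence = np.maximum(m_err.predict(valid[cols]) * np.sqrt(2), 70)
    return fvc_pred, confidence
</code></pre>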
OSIC Pulmonary Fibrosis Progression
Making my (not submitted) 5th place solution public
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone!</p> <p>This has been a very thrilling competition and I've learned a lot from it. Sadly, I was late to properly implementing cross-validation and good tests, so I overlooked <a href="https://www.kaggle.com/dcasbol/osic-my-best-solution-6-8348?scriptVersionId=44213715" target="_blank"><strong>this solution</strong></a> I made some weeks ago. Anyway, I'm very happy with my 38th place 😁 and I hope someone can learn from what I wrote.</p> <p>This is a quick release, but soon I will update it with proper balanced tests and so on.</p> <p>AND, after implementing a good cross-validation strategy, (surprise) it turns out that it was just a matter of luck. I'm constantly obtaining expected values of -6.98 or so with a 95% C.I. of +/- 0.45, so it could have been anything.</p>
OSIC Pulmonary Fibrosis Progression
My first Silver Medal on Kaggle! (65th on Private LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My hearty congratulations to all the winners of this competition. Quite honestly, I have to admit that I had given up any hopes of winning this competition. I simply tried tinkering around with models &amp; ideas. And when I found that I had secured my first silver at my first ever serious kaggle competition, my joy knew no bounds! I had a public LB rank of 1480 the last time I checked, and I woke up to find a rank of 65. </p> <p>Here's a link to my quick inference Notebook: <a href="https://www.kaggle.com/doctorkael/osic-inference?scriptVersionId=44167954" target="_blank">https://www.kaggle.com/doctorkael/osic-inference?scriptVersionId=44167954</a></p> <p>My winning solution can be summarized as follows:</p> <p><strong>1. Perform data augmentation:</strong> Create synthetic weekly FVC values from other patients' data with high <em>cosine similarity</em>. For this purpose, we calculate the various features to ensure that the augmented data is as similar as possible to the actual patient's data. I also tried another type of augmentation with simple interpolation methods such as akima, linear, slinear, etc., used cross-validation to see which performed best at run time, and used it to create the models which would make the test predictions. </p> <p><strong>2. Some more augmentations</strong> (sort of): We treat each week's FVC and Percent as <em>Baseline</em> values from which we try to predict data for other weeks. This idea was borrowed from <a href="https://www.kaggle.com/yasufuminakama/osic-lgb-baseline" target="_blank">here</a>. It helps create more robust models and helps when we have to predict for weeks like -12 when all the model would otherwise have seen is week -5. Using <code>Week_Offset</code> as a feature instead of <code>Weeks</code> really helped here.</p> <p><strong>3. The model:</strong> The secret sauce was to use a simple Gradient boosting regressor to predict the confidence intervals (a minimal sketch is given at the end of this post). Methods on how to accomplish this can be found <a href="https://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_quantile.html#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-quantile-py" target="_blank">here</a>. Choosing an <code>alpha</code> value of 0.75 worked best. We create models on all the different augmented data samples and cross-validate them to see which perform best with the least error in patient scores (since we only have 3 final predictions which would actually count in scoring, we need to minimize this).</p> <p><strong>4. Making Predictions:</strong> The maximum (for the upper bound) and minimum (for the lower bound) of all the predictions made for that particular patient and week are calculated and subtracted to obtain the <code>Confidence</code> values.</p> <p>I will soon publish my notebook containing all the ideas I tinkered around with, as it is still in an unedited format. </p> <p>I have documented my experiments in this Notebook <a href="https://www.kaggle.com/doctorkael/osic-incrementally-improved-models" target="_blank">here</a>. Do check it out if you are interested :)</p>
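<p>A minimal sketch of quantile prediction intervals with sklearn's GradientBoostingRegressor, along the lines of the linked example (this is not the exact scheme used above; the variable names and the 0.5-quantile point estimate are assumptions):</p> <pre><code>from sklearn.ensemble import GradientBoostingRegressor

def quantile_interval(X, y, X_test, alpha=0.75):
    """Fit upper/lower quantile GBMs and use the interval width as Confidence."""
    upper = GradientBoostingRegressor(loss="quantile", alpha=alpha).fit(X, y)
    lower = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha).fit(X, y)
    mid = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, y)

    fvc_pred = mid.predict(X_test)
    confidence = upper.predict(X_test) - lower.predict(X_test)
    return fvc_pred, confidence
</code></pre>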
OSIC Pulmonary Fibrosis Progression
Probably simplest solution - 180 lb
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I just made a good validation (k-fold with k == 20) and found one coefficient manually, that's all :)<br> <a href="https://www.kaggle.com/lao777/baseline?scriptVersionId=40987085" target="_blank">link</a></p>
OSIC Pulmonary Fibrosis Progression
Silver medal in three days of work
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone and thank you Kaggle for hosting this competition.<br> This contest was by far one of the most realistic ones. </p> <p>I decided to join this competition late, after I saw the public kernels; it was obvious that people were overfitting the public leaderboard. Then I decided to find the best validation strategy and use it to find new features and tune hyper-parameters.</p> <h2>Validation strategy</h2> <p>I used GroupKFold to create a K-fold partition (K == 3) with non-overlapping groups (here the Patient ID), so that the distribution of patients across the folds is close to uniform (explanation and code taken from <a href="https://www.kaggle.com/rftexas" target="_blank">@rftexas</a>'s notebook).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1803801%2Fff68dceee4b8e5de81f1bcb5cc7dc1ca%2FScreen%20Shot%202020-10-07%20at%201.17.31%20AM.png?generation=1602030140932166&amp;alt=media" alt=""></p> <p>Then, like I mentioned in this discussion, I didn't include the Percent feature in my validation folds and I only included the last three measurements for each patient. Why? Because I want to mimic the testing dataset exactly. Since we don't have too many features, using Percent improved CV and LB scores, BUT we only have initial Percent values in test. So I decided to create an init_Percent feature that I used in my validation folds (a sketch of this validation setup is given at the end of this post).</p> <p>With that I was able to have a good correlation between CV and LB scores, which made me more confident.</p> <h2>Features used</h2> <p>I only used the tabular dataset and I kept features that improved my validation score. I didn't include weeks and base_week. <br> I created init_FVC and init_Percent features; these are the initial values of FVC and Percent for each patient.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1803801%2F3f5a07f25b7a753c48d1807f173ae01b%2FScreen%20Shot%202020-10-07%20at%201.26.32%20AM.png?generation=1602030491883502&amp;alt=media" alt=""></p> <p>I added Sex_SmokingStatus <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1803801%2Fc3965f9c27c8278dfd6c2fed72ff62ae%2FScreen%20Shot%202020-10-07%20at%201.26.36%20AM.png?generation=1602030521844243&amp;alt=media" alt=""></p> <p>I used group statistics features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1803801%2Ff93fb7f94f4636fe054e1c8133a4f94f%2FScreen%20Shot%202020-10-07%20at%201.26.26%20AM.png?generation=1602030547099121&amp;alt=media" alt=""></p> <h2>Model used</h2> <p>I used a simple neural network with pinball loss for multiple quantiles using [0.2, 0.5, 0.8].</p> <h2>What didn't work</h2> <p>I tried to extract features from images but I wasn't able to improve my CV score. <br> I also tried other models like linear regression and lightgbm but they performed poorly. Even ensembling them didn't improve my results.</p> <p>I didn't spend a lot of time on this competition, and I didn't find time to improve this solution.
</p> <h2>Key for this competition</h2> <p>Find the validation strategy that gives you a correlation between CV and LB scores !!!</p> <p>Again, thank you for this competition and for reading my solution, it is only a top 100 solution but it means a lot to me because I will be a kaggle competition master :p </p>
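<p>A minimal sketch of the validation setup described above (illustrative only; it assumes the standard train.csv columns and is not the exact code used):</p> <pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

def make_validation_folds(train, n_splits=3):
    """train: DataFrame with Patient, Weeks, FVC, Percent, ... columns."""
    # keep only the initial Percent value, since that is all we get at test time
    first = (train.sort_values("Weeks").groupby("Patient")["Percent"]
             .first().rename("init_Percent").reset_index())
    train = train.merge(first, on="Patient")

    folds = []
    for trn_idx, val_idx in GroupKFold(n_splits=n_splits).split(train, groups=train["Patient"]):
        val = train.iloc[val_idx]
        # score only on the last three measurements of each patient, like the private test set
        val_last3 = val.sort_values("Weeks").groupby("Patient").tail(3)
        folds.append((train.iloc[trn_idx], val_last3))
    return folds
</code></pre>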
OSIC Pulmonary Fibrosis Progression
Solution for the first place, but we didn't select
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I couldn't upload the image. Our actual notebook submitted before the deadline scored -6.8250; I prepared this notebook for sharing, but the score decreased :D </p> <p>I don't think so :)</p> <p>We'd like to first thank the Open Source Imaging Consortium (OSIC) and Kaggle for providing us with an interesting, real-world dataset of decline in lung function based on CT scans of patients' lungs to play with!</p> <p>We are very happy to get our first silver medal on Kaggle from this competition.</p> <p>It was really difficult to select the best model. We trusted our CV score and selected our model for the submission, which allowed us to win a silver medal.<br> However, the fact that we had a first-place solution shows how difficult it was to select the best model for this competition.</p> <p>We shared our first-place solution (not selected) in the notebook below. I think the embedding layer helped improve the score.</p> <p><a href="https://www.kaggle.com/lhagiimn/solution-for-the-first-place-but-we-didn-t-select" target="_blank">https://www.kaggle.com/lhagiimn/solution-for-the-first-place-but-we-didn-t-select</a></p>
OSIC Pulmonary Fibrosis Progression
Surrogate loss for modified Laplace log likelihood (79th solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners and all those who survived the shakeup!</p> <p>I wasn't really present in the last month of this competition since I was moving house and my computer is in a shipping container somewhere on the sea, but I thought I'd share the loss function I made that served as a pretty good surrogate for the competition metric (in PyTorch). The predictions should be a tensor of shape (N, 2) where the first column is the FVC prediction, and the second is the confidence.</p> <pre><code>class LaplaceLogLikelihood(nn.Module):
    def __init__(self):
        super(LaplaceLogLikelihood, self).__init__()
        self.l1_loss = nn.SmoothL1Loss(reduction="none")
        self.lrelu = nn.RReLU()
        self.root2 = torch.sqrt(torch.tensor(2, dtype=torch.float, requires_grad=False))

    def forward(self, predictions, target, clamp=False):
        delta = self.l1_loss(predictions[:, 0], target)
        if clamp:
            delta = torch.clamp(delta, max=1000)
            sigma = torch.clamp(predictions[:, 1], min=70)
        # clip sigma without destroying gradient
        else:
            sigma = self.lrelu(predictions[:, 1] - 70) + 70
        laplace_ll = -(self.root2 * delta) / sigma - torch.log(self.root2 * sigma)
        laplace_ll_mean = torch.mean(laplace_ll)
        return -laplace_ll_mean
</code></pre> <p>The delta part of the metric was easy - we can just use <code>L1Loss</code> or <code>SmoothL1Loss</code> as I have.</p> <p>The tricky part was the <code>max(sigma, 70)</code> for the clipping of confidence. For this, I used a leaky ReLU so that we could do something similar without killing the gradient.</p> <p>My best private LB score was <strong>-6.8383</strong> (gold) using tabular only, but of course I chose the wrong version ☹️. I'm not going to make excuses, and there were people who had worked harder than me who deserved it more. I would say though: a) trust your CV and b) annotate/track your notebooks clearly to make picking submissions easy for yourself.</p> <p>Here's the complete code: <a href="https://www.kaggle.com/anjum48/fully-connected-custom-loss-private-6-8383" target="_blank">https://www.kaggle.com/anjum48/fully-connected-custom-loss-private-6-8383</a></p>
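<p>A quick usage sketch for the class above (dummy tensors, not taken from the original notebook):</p> <pre><code>import torch
import torch.nn as nn

criterion = LaplaceLogLikelihood()
preds = torch.tensor([[2800.0, 150.0], [3100.0, 90.0]])  # columns: FVC prediction, confidence
target = torch.tensor([2750.0, 3300.0])

loss = criterion(preds, target)                 # training: leaky-ReLU keeps the gradient alive
metric = criterion(preds, target, clamp=True)   # evaluation: hard clamping, mirrors the metric
</code></pre>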
OSIC Pulmonary Fibrosis Progression
[#145] Robust Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I decided to model only with the values that are also available in the test set. That means no FVC other than the first one, and no Percent other than the first one.</p> <p>I quickly realized that the data doesn't really tell anything and went with the easiest and most robust solution. I normalized all data to FVC_init = 1 and shifted the start_week to 0.</p> <p>Then, I fitted a linear regression model to shifted weeks vs normalized FVC. Turns out that was my best private LB submission.</p> <p><img src="https://i.imgur.com/qN56d6Q.png" alt="img"></p> <p><a href="https://www.kaggle.com/ilu000/osic-my-normalized-submission-with-explanation" target="_blank">link to my notebook</a></p>
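<p>A minimal sketch of this normalized linear-regression idea (illustrative only; it assumes the standard train.csv columns and is not the code from the linked notebook):</p> <pre><code>import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_normalized_trend(train):
    """train: DataFrame with Patient, Weeks, FVC columns."""
    first = (train.sort_values("Weeks").groupby("Patient").first()
             .reset_index()[["Patient", "Weeks", "FVC"]])
    df = train.merge(first, on="Patient", suffixes=("", "_init"))
    df["week_shifted"] = df["Weeks"] - df["Weeks_init"]  # shift the start week to 0
    df["fvc_norm"] = df["FVC"] / df["FVC_init"]          # normalize so that FVC_init = 1

    model = LinearRegression().fit(df[["week_shifted"]], df["fvc_norm"])
    # predicted FVC for a patient = FVC_init * model.predict([[weeks_since_first_visit]])
    return model
</code></pre>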
OSIC Pulmonary Fibrosis Progression
top 9 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks </p> <p>Thank you so much, <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> . This competition was quite difficult for all. </p> <p>Thank you so much. Actually <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> today won 2 gold medals and became a 3x GM.</p> <p>Congratulations to all participants and thanks to the organizers for making this competition possible. Although it didn't go as expected, I hope many people learned something new here. Also, I would like to express my gratitude to my teammates for working together with me on this challenge. Special thanks to <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a> for not giving up on this competition and motivating our team to keep going; without him we would not have gone that far. Though, getting the medal, I have a slightly bitter feeling. I'll take this chance and describe some of our ideas: hopefully they could help the organizers to fight fibrosis. </p> <h3>CT model: Concatenate Tile Pooling</h3> <p>Working on the PANDA competition I proposed the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169205" target="_blank">Concatenate Tile Pooling</a> method, which also turned out to apply quite well to CT scan data. The idea of this method is illustrated in the image below. Instead of assigning labels, like FVC decay and confidence, to each CT layer, which may be difficult to predict based on a single image, why not assign them to all images together?<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fde95b636611aa0e65432f4dcf6756269%2Ftile.png?generation=1602038549896215&amp;alt=media" alt=""><br> <strong>why not 3D conv?</strong></p> <ul> <li>CT scans have different spacing, which deteriorates the performance of 3D models. It can be viewed as training and performing inference with a random stretch of images along one dimension.</li> <li>Many accurate and well-optimized 2D pretrained models.</li> <li>It is not possible to train a good 3D model from scratch with such limited data.</li> </ul> <p><strong>Important details:</strong></p> <ul> <li>Since the provided data is very limited, just ~150 samples, train the model first on masked lungs only (masks are produced with <a href="https://github.com/JoHof/lungmask/" target="_blank">lungmask</a>), followed by finetuning on the original images to be able to perform inference without lungmask (so I could do inference of a single 4-fold model within just 10-20 min on Kaggle). Check the image below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F31a1d24eb0da1882deb565777c1281f2%2Fmasks.png?generation=1602039389832792&amp;alt=media" alt=""></li> <li>For CT models it's better to predict percent rather than FVC. Image augmentation, like zoom, as well as the use of only a limited number of slices, makes it difficult to compute the volume based on CT.
Meanwhile, the model could more reliably say what percent of the entire volume is affected by fibrosis by looking just at several scans.</li> <li>Assume linear dependence of the FVC and confidence on the week and predict the slope and the initial value.</li> <li>LLL_loss (Laplace Log Likelihood) works, but for convergence one needs to ensure that the output at the beginning of training is close to the gt FVC. I used the following: <code>FVC = V0*(0.01*a*(w-w0)/134 + b + 0.01*p0)</code>, <code>sigma = V0*softplus(c*w/134 + d)</code>, where a, b, c, d are model predictions and V0 is the full lung volume computed as V0 = 100*FVC/percent (see the sketch at the end of this post). So, initially the FVCs predicted by the untrained model are quite close to the expected FVC, and the loss converges nicely. I computed the loss based on all FVC measurements for a given patient (dropping the first one), so there was no need to evaluate gt slopes explicitly. An alternative could be doing something similar to public kernels: the model tries to predict FVC directly using mloss, followed by model finetuning with LLL_loss (which is essentially the metric in this competition).</li> <li><a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169205" target="_blank">Tile cutout</a> - random selection of CT layers during training could be viewed as a cutout in a 3D image.</li> <li>When working with the data, plot it to see what is really going on. For a number of patients I saw incorrect offsets, so the signal level had to be adjusted. Also I clipped the signal at [-1024,600]. Finally, some images are not square, so I needed to take a crop to prevent deformation of the images when rescaling. </li> </ul> <p><strong>Additional details:</strong><br> 64 layers x 256 x 256 setup<br> ResNeXt50 and ResNet18 backbones<br> Standard augmentation like rotation, zoom, horizontal flip at the pretraining stage.<br> Unfortunately, this beautiful approach didn't work that well, giving only ~6.85/6.90 at private/public LB.</p> <h3>Tabular models</h3> <p>This is quite standard, and one can find many examples in public kernels. <strong>The important thing: always do proper validation and exclude all kinds of leaks.</strong> So percent must not be used as a feature unless it is set to be equal to the value at the first visit (and the first visit must be excluded from training and validation). Also, we computed validation based on the last 3 visits, similar to the competition metric. We had 2 kinds of models: one based only on tabular features and one on tabular+CT features (with some additional variations). <br> Some thoughts about the <strong>percent madness</strong> people used in public kernels: It seems that by chance the public LB contains mostly cases with low slope (FVC decay rate), so anything that artificially reduces the slope could boost the public LB. If one trains with the percent feature and then sets it to a constant, it is equivalent to pivoting the prediction in a way that reduces the slope. </p> <h3>Final submission</h3> <p>Our final submission is an ensemble of CT models + TAB NN based models + <a href="https://www.kaggle.com/jeabat/osic-bayesian-ridge-regression" target="_blank">Bayesian Ridge Regression</a> based on maximizing CV. This submission got 6.8385/6.8884 at private/public LB. Given such a large shake-up, it is surprising that we had chosen our best private LB sub.</p> <p>Thank you </p> <p>Thank you. It was indeed quite difficult.</p> <p>Thank you so much</p> <p>Thank you </p> <p>Thank you. I joined the competition because I got this idea.
Unfortunately, it didn't work out completely as expected, but I think it's the best way to make use of CT.</p> <p>Thank you so much, I really like those ideas too. I hoped that they would work better, but fortunately they made a strong contribution to our ensemble.</p> <p>Thank you so much. </p> <p>Thanks, and you are welcome.</p> <p>Thanks </p> <p>Thank you so much.</p> <p><a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> , Thank you so much. The public LB was not very representative because many participants were exploiting the percent feature: train with FVC percent given at each week and do inference with percent only from the first week, which by coincidence worked really well on the public LB but did not make much sense.</p> <p>You are welcome</p> <p>Thank you so much <a href="https://www.kaggle.com/sandorkonya" target="_blank">@sandorkonya</a> , I saw your hand-labeled dataset posted, really impressive work, and the amount of effort you and your team contributed is impressive. I'm so sorry that you didn't win a reward proportional to your contribution.</p> <p>Thanks, I really appreciate your words.</p> <p>Thanks </p> <p>Thank you <a href="https://www.kaggle.com/ronaldokun" target="_blank">@ronaldokun</a></p> <p>Thank you so much, congratulations to you too on the runner-up reward; the attention maps were a really good idea. But I'm really worried about the model interpretability at this point( Hopefully the organizers will figure out how to get the most out of the method, especially since it was not that bad on the external data. </p> <p>Thanks <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a> , congratulations to you too on the solo gold in the Vaccine competition, well done. Also, nice presentation at the workshop today.</p>
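<p>For reference, here is a minimal PyTorch sketch (not the team's actual code) of the FVC/sigma parametrisation and the Laplace log-likelihood loss described above. The model is assumed to output four numbers (a, b, c, d) per CT volume; V0, w0 (first-visit week) and p0 (first-visit percent) come from the tabular data, and 134 is just the scaling constant quoted in the write-up.</p> <pre><code>import torch
import torch.nn.functional as F

def predict_fvc_sigma(out, v0, w, w0, p0):
    # out: (N, 4) model outputs; v0, w, w0, p0: (N,) tensors
    a, b, c, d = out[:, 0], out[:, 1], out[:, 2], out[:, 3]
    fvc = v0 * (0.01 * a * (w - w0) / 134 + b + 0.01 * p0)
    sigma = v0 * F.softplus(c * w / 134 + d)
    return fvc, sigma

def laplace_log_likelihood_loss(fvc_pred, sigma, fvc_true):
    # negative mean of the competition metric, so it can be minimised
    sqrt2 = torch.sqrt(torch.tensor(2.0))
    delta = torch.clamp((fvc_pred - fvc_true).abs(), max=1000)
    sigma = torch.clamp(sigma, min=70)
    metric = -sqrt2 * delta / sigma - torch.log(sqrt2 * sigma)
    return -metric.mean()
</code></pre> <p>With near-zero outputs at initialisation, fvc is approximately V0 * 0.01 * p0, i.e. the first-visit FVC, which is exactly the convergence trick the author describes.</p>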
OTTO – Multi-Objective Recommender System
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to OTTO &amp; Kaggle for hosting such an interesting competition. My solution is straightforward, just candidate generation + reranker; a short summary is below.</p> <h6>Candidate Generation</h6> <ul> <li><strong>re-visit</strong> - all visited items</li> <li><strong>co-visit1</strong> - based on the public notebook with 2 improvements. Thanks <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <ol> <li>candidate generation by weight decay, e.g. a session has N items (idx1 by the reverse ts) &amp; each item has M co-visited items (idx2 by time weight or type weight); the score is calculated as 1 / idx1 / idx2 for each item &amp; candidate pair, then the scores are summed as the candidate score (see the sketch at the end of this post)</li> <li>co-visit features for the rerank model - counts &amp; probabilities of pairwise item co-occurrences, index distance &amp; time distance (considering order or not), etc.</li></ol></li> <li><strong>co-visit2(buy2buy)</strong> - only carts or orders</li> <li><strong>co-visit3(next item)</strong> - only consider the last item when generating candidates</li> <li><strong>sknn1</strong> - simple sknn with the 100 most similar &amp; recent sessions </li> <li><strong>sknn2</strong> - a variant of sknn named Sequence and Time Aware Neighborhood; see <a href="https://arxiv.org/abs/1910.12781" target="_blank">https://arxiv.org/abs/1910.12781</a> for both sknn implementations</li> </ul> <h6>Re-Rank Model</h6> <ul> <li><strong>aid features</strong> - simple stats by time window (7, 14, 21)</li> <li><strong>session features</strong> - length, aid count, count by type, time to the end of the prediction period</li> <li><strong>revisit aid X session</strong> - clicks/carts/orders count, absolute/relative position</li> <li><strong>co-visit features</strong> - as mentioned before, a lot of features describing pairwise item co-occurrences</li> <li><strong>sknn features</strong> - only candidate score/rank</li> <li><strong>similarity features</strong> - similarity between the candidate item and the session via item embeddings from word2vec &amp; implicit's BPR</li> </ul> <p>Finally, 3 models (clicks/carts/orders) are trained with LightGBM (lambdarank).</p> <p>Yes, I only take the last 40 aids, not selected carefully. And I have another group of features that takes carts or orders only.</p> <p>I just implemented it myself; the logic is simple.</p> <p>I built my own code to prepare the training data and labels; the logic should be the same. </p> <p>Thanks. I calculate the cosine distance between the candidate and the last item, and the max, min, and mean over all items in the session.</p> <p>Thanks, lucky solo, let's find a chance to team up.</p>
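<p>A minimal sketch (assumed data structures, not the author's code) of the weight-decay candidate scoring described above: <code>covisit</code> maps an aid to its co-visited aids already ranked by time/type weight, and <code>session_aids</code> holds the session's aids in reverse time order, so each (history item, co-visited item) pair contributes 1 / idx1 / idx2.</p> <pre><code>from collections import defaultdict

def score_candidates(session_aids, covisit, top_k=50):
    scores = defaultdict(float)
    for idx1, aid in enumerate(session_aids, start=1):               # idx1: reverse-ts rank
        for idx2, cand in enumerate(covisit.get(aid, []), start=1):  # idx2: co-visit rank
            scores[cand] += 1.0 / idx1 / idx2
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# toy usage
covisit = {11: [12, 13], 12: [11, 14]}
print(score_candidates([12, 11], covisit))   # candidates ranked by summed decay weight
</code></pre>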
OTTO – Multi-Objective Recommender System
110th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the organizers for this competition. It was my first serious attempt to learn recsys. I also want to thank <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for very helpful notebooks and comments.</p> <p>My solution consisted of a candidate model and a ranking model.</p> <p>My first candidate model was based on matrices from this notebook <a href="https://www.kaggle.com/code/cdeotte/compute-validation-score-cv-565" target="_blank">https://www.kaggle.com/code/cdeotte/compute-validation-score-cv-565</a> . But it was different in that I just took candidates from each of the matrices. I also added all the aids that were in the session. I took 100 candidates for each of the matrices by aggregating the values. I used 5 aggregations:<br> 1) the most common<br> 2) the largest total value in the matrix<br> 3) the largest sum value divided by the frequency of the aid in the session<br> 4) the largest sum value in the matrix multiplied by 1/(session_len - aid_number)<br> 5) the largest sum divided by the frequency of the aid in the session multiplied by 1/(session_len - aid_number) <br> After that, I combined all the aids across all the matrices.</p> <p>I used such aggregations because I wanted to get aids that strongly correlate with low-frequency aids in the session, and I also wanted to boost the latest ones.</p> <p>For the ranker I used a CatBoost model with 3 folds split by session_id. It was trained with 20 negatives from the candidate model. My features were: <br> 1) aggregations from the matrices described above<br> 2) current session len<br> 3) number of clicks/carts/orders divided by session len<br> 4) 1/(session_len - aid_number)</p> <p>The LB score of this solution was 0.581.</p> <p>On the next iteration, I created two additional matrices. The first of these is the inverted distance matrix. This is a matrix of the sum of inverted distances between pairs of aids (see the sketch at the end of this post). And it improved my LB score to 0.582.</p> <p>After that, I decided that my matrices did not take into account which of the aids came before and which after. Thus I added a new matrix similar to the previous one, but I took only pairs where aid_x's position was smaller than aid_y's position. And it improved my LB score to 0.583.</p> <p>At the 4th iteration I decided to add a Word2Vec similarity model. I took the model from this notebook <a href="https://www.kaggle.com/code/balaganiarz0/word2vec-model-training-and-submission-0-533" target="_blank">https://www.kaggle.com/code/balaganiarz0/word2vec-model-training-and-submission-0-533</a>. In the candidate model I took the top 50 most similar elements from the 6 latest aids in the session + 3 last orders + 3 last carts. I also added cosine similarity scores to my ranker features. It improved my LB to 0.584, but it was 0.583 on private.</p> <p>At the end of the competition I made a voting ensemble of my two last models' submission files with <a href="https://www.kaggle.com/code/karakasatarik/0-578-ensemble-of-public-notebooks" target="_blank">https://www.kaggle.com/code/karakasatarik/0-578-ensemble-of-public-notebooks</a>.
The LB of the ensemble was 0.586, but the private score was only 0.585.</p> <p>It seems that my candidate model was generating a very large number of candidates, but the candidates from different matrices and aggregations were very similar, and the average was ~200 candidates per session.</p>
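<p>A rough sketch (pure Python, assumed structure) of the "inverted distance" co-visitation matrix described above: for every pair of aids inside a session, 1 / positional distance is accumulated; the directed variant from the next iteration simply keeps only pairs where aid_x appears before aid_y.</p> <pre><code>from collections import defaultdict

def build_inverted_distance_matrix(sessions, directed=False):
    matrix = defaultdict(float)                  # (aid_x, aid_y): summed inverted distance
    for aids in sessions:                        # aids in chronological order
        for i, aid_x in enumerate(aids):
            for j, aid_y in enumerate(aids):
                if i == j:
                    continue
                if directed and i &gt; j:           # keep only "aid_x before aid_y" pairs
                    continue
                matrix[(aid_x, aid_y)] += 1.0 / abs(i - j)
    return matrix

print(build_inverted_distance_matrix([[1, 2, 3]], directed=True))
</code></pre>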
OTTO – Multi-Objective Recommender System
116th place write up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to say thank you to OTTO for holding such an interesting competition. Since we only have the interaction information, the task was inevitably hard to solve, but I learned a lot of things from that quite challenging task. I'm also involved in recommender systems at work, so I'm sure this experience will help me.</p> <p>Handling a large number of sessions (users) was also a hard point of this competition. I'm sure many people suffered from this. Maybe some of you started using cudf or polars instead of the famous pandas. I used BigQuery for joining large tables, and this was the first time for me to use BQ for a Kaggle competition.</p> <p>We've seen some sad things happen. Some Kaggle GMs and Masters seemed to have been cheating, and it turned out that they have likely been doing so for years. I hope Kaggle will take appropriate action on this problem.</p> <p>Anyway, here's what I did to achieve this place in this competition.</p> <h2>Overview</h2> <p>I followed the standard candidate re-rank pipeline. I didn't have time to prepare ensembles, so my score comes from a single XGBoost Ranker model.</p> <p>In the first step, I split the <code>train.jsonl</code> into train data (which I call <code>train split</code> afterwards) and validation data (<code>validation split</code>). This was done using the code from the <a href="https://github.com/otto-de/recsys-dataset/blob/main/src/testset.py" target="_blank">host's repository</a>.</p> <p>Then, I randomly truncated each session in <code>train split</code> to make labels for it.</p> <p>Then, I made three types of co-visitation matrices, which were introduced in <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">this notebook</a>. To avoid leaks, I made them in three ways. The first ones are for training, so I used the truncated version of <code>train split</code> and the test data (<code>test.jsonl</code>; I'll call this <code>test split</code>) to make three co-visitation matrices. The second ones are for validation, so I used the non-truncated version of <code>train split</code> and <code>test split</code>. The third ones are for inference, so I used the whole data.</p> <p>I also trained two Word2Vec models in three ways (the same way I used for making the co-visitation matrices). One is trained with <code>aid</code> sequences and the other is trained with <code>aid_type</code> sequences (I concatenated <code>aid</code> and <code>type</code>. For example, if there is a record with <code>aid=0</code> and <code>type='carts'</code>, then the <code>aid_type</code> token will be <code>0_carts</code>; see the sketch at the end of this post).</p> <p>Then I generate candidates. For each session (user), I use all the aids in the history, the top 50 most co-occurring items to the last item (for each co-visitation matrix), the top 50 most similar items to the last item (using the word2vec vector), the top 50 most similar (co-occurring) items to the session's most frequent item, the top 50 most similar (co-occurring) items to the session's most frequent clicks (carts, orders), and the most popular items. For each session, there are average 193 aids (max.
834 items), and candidate recalls are</p> <ul> <li>clicks: 0.663</li> <li>carts: 0.541</li> <li>orders: 0.726</li> </ul> <p>for the <code>validation split</code>.</p> <p>I sampled around 200,000 sessions for each target. Then I created around 50 features. With these features, I trained XGBoost Rankers for each target. Finally I reranked the candidates with this model. The recalls@20 for each target are</p> <ul> <li>clicks: 0.528</li> <li>carts: 0.422</li> <li>orders: 0.657</li> <li>overall: 0.574</li> </ul> <p>This gave me 0.586 on public LB.</p> <h2>Features</h2> <p>I used 53 features in total. </p> <h3>Rank features from public notebook</h3> <p>The ranking of <a href="https://www.kaggle.com/code/cdeotte/candidate-rerank-model-lb-0-575" target="_blank">this notebook</a> was pretty good. I used the rank of each item in the result of the notebook. In the notebook, there are two types of lists (a list for the clicks target and a list for the buys target), so I used the aid rank from both lists. These are called <code>clicks_rank</code> and <code>buys_rank</code> respectively.</p> <h3>history features</h3> <p>Some candidate items come from the session's history, so I made some history features. I made groups inside each session. It is said that what is called a <code>session</code> is in fact a <code>user</code> in the general meaning, so I used a simple method to detect <code>session</code> (general meaning) and called them <code>groups</code> (I used a method similar to the way introduced in <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/366138" target="_blank">this discussion</a>.)</p> <p>For each <code>aid</code> I generated 16 history features.</p> <ul> <li>clicks count</li> <li>orders count</li> <li>carts count</li> <li>events count <ul> <li>(clicks_count + orders_count + carts_count)</li></ul></li> <li>clicks group count<ul> <li>the number of groups which contain a 'clicks' event of the item</li></ul></li> <li>orders group count</li> <li>carts group count</li> <li>events group count</li> <li>time elapsed from last clicks</li> <li>time elapsed from last orders</li> <li>time elapsed from last carts</li> <li>time elapsed from last events</li> <li>events count from last clicks<ul> <li>the number of actions taken on other items after the user took a 'clicks' action on the item</li></ul></li> <li>events count from last orders</li> <li>events count from last carts</li> <li>events count from last events</li> </ul> <h3>co-visitation matrices' weights features</h3> <ul> <li>buy2buy covisitation matrix weight of the item to the user's last item</li> <li>buy2buy covisitation matrix weight of the item to the most frequently actioned item in the user's history</li> <li>carts_orders covisitation matrix weight of the item to the user's last item</li> <li>carts_orders covisitation matrix weight of the item to the most frequently actioned item in the user's history</li> <li>clicks covisitation matrix weight of the item to the user's last item</li> <li>clicks covisitation matrix weight of the item to the most frequently actioned item in the user's history</li> </ul> <h3>popularity features</h3> <ul> <li>global carts count (counted in three ways to avoid leaks)</li> <li>global orders count</li> <li>global clicks count</li> <li>number of sessions that clicked the item</li> <li>number of sessions that put the item into their cart</li> <li>number of sessions that ordered the item</li> </ul> <h3>cosine similarity features</h3> <ul> <li>items
word2vec vector similarity to the last item of the session</li> <li>aid_type word2vec vector similarity to the last aid_type of the session<ul> <li>concatenate the candidate aid and actions (clicks, carts, orders)</li></ul></li> <li>aid_type word2vec vector similarity to the mean of aid_types in the last group of the session<ul> <li>concatenate the candidate aid and actions (clicks, carts, orders)</li></ul></li> <li>aid_type word2vec vector similarity to the mean of unique aid_types in the last group of the session</li> <li>max aid_type word2vec vector similarity to the aid_types in the last group of the session</li> <li>aid_type word2vec vector similarity to the mean of aid_types in the session</li> <li>aid_type word2vec vector similarity to the mean of unique aid_types in the session</li> <li>max aid_type word2vec vector similarity ot the aid_types in the session</li> </ul> <h2>XGBoost Ranker</h2> <p>I used the following parameters to train my XGBoost model</p> <pre><code>params = { : , : , : , : , : , : , : , } </code></pre> <p>I put aside 20,000 sessions at random to use them for early stopping.</p> <p>Here's the F-score plot for carts model</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1489975%2F748998b852c018b77c0079b3f42a8a75%2Fcarts.png?generation=1675210503694228&amp;alt=media" alt=""></p> <p>and here's the one for orders model</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1489975%2F062ed3117586218146f4ed9599c67a16%2Forders.png?generation=1675210547893838&amp;alt=media" alt=""></p> <p>and here's the one for clicks model</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1489975%2F6cefdba258d07289198de30823cc331b%2Fclicks.png?generation=1675210576351554&amp;alt=media" alt=""></p> <blockquote> <p>For each session, there are average 193 aids (max. 834 items)</p> </blockquote> <p>This is about the number of candidates for each session!</p> <blockquote> <p>You used top50 candidates to generate inputs for the Ranker, right?</p> </blockquote> <p>It isn't. Here's the way I generated the candidates.</p> <pre><code> I use the whole aids in the history, top 50 most co-occurring items to the last item (for each co-visitation matrix), top 50 most similar items to the last item (using word2vec vector), top 50 most similar (co-occuring) items to session's most frequent item, top 50 most similar (co-occuring) items to session's most frequent clicks (carts, orders), and most popular items. </code></pre> <p>The number of candidates varies between sessions (avg: 193, max: 834).</p> <blockquote> <p>I see you only used co-visitation matrices comparing candidate to a single aid. You used BigQuery and still considered it would take too much time to calculate sum/mean weights for candidate in comparison to 5/10/15 last aids from session?</p> </blockquote> <p>Calculating Co-visitation matrices was done on my local PC (with cudf, which was the way Chris introduced). I used BigQuery for merging features to the candidate pairs.</p>
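<p>A minimal sketch (pandas + gensim; assumed file name and hyperparameters, since the original pipeline used BigQuery and cudf) of the <code>aid_type</code> Word2Vec idea mentioned above: every event becomes a token like <code>0_carts</code>, one sentence per session, and the resulting vectors are later used for cosine-similarity features.</p> <pre><code>import pandas as pd
from gensim.models import Word2Vec

events = pd.read_parquet("train_events.parquet")   # assumed columns: session, aid, ts, type
events = events.sort_values(["session", "ts"])
events["aid_type"] = events["aid"].astype(str) + "_" + events["type"].astype(str)

# one "sentence" per session, in chronological order
sentences = events.groupby("session")["aid_type"].apply(list).tolist()
w2v = Word2Vec(sentences=sentences, vector_size=64, window=5, min_count=5, sg=1, workers=4)

def aid_type_similarity(candidate_token, last_token):
    # similarity feature between a candidate token and the session's last event token
    if candidate_token in w2v.wv and last_token in w2v.wv:
        return float(w2v.wv.similarity(candidate_token, last_token))
    return 0.0
</code></pre>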
OTTO – Multi-Objective Recommender System
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>1. Retrieval Stage</h3> <p>• Re-visitation:<br> All past events<br> • Co-visitation:<br> Top_150 of [ Top_140(type_weighted) + Top_140(buy2buy) + Top_50(time_weighted) ]</p> <p>• SRGNN:<br> Top_172<br> A DNN model can boost the retrieval recall significantly in my solution, even while using a slightly smaller average number of candidates.<br> The recall here means recall@full_candidates.</p> <table> <thead> <tr> <th>retrieval_strategy</th> <th>recall_clicks</th> <th>recall_carts</th> <th>recall_orders</th> <th>recall_weighted_sum</th> <th>average candidates number per session</th> </tr> </thead> <tbody> <tr> <td>all_past_events | top_258_co_visitation</td> <td>0.697</td> <td>0.555</td> <td>0.732</td> <td>0.675</td> <td>236.47</td> </tr> <tr> <td>all_past_events | top_150_co_visitation | top_169_CORE_trm</td> <td>0.723</td> <td>0.575</td> <td>0.742</td> <td>0.690</td> <td>234.26</td> </tr> <tr> <td>all_past_events | top_150_co_visitation | top_172_SRGNN</td> <td>0.734</td> <td>0.581</td> <td>0.746</td> <td>0.695</td> <td>234.07</td> </tr> </tbody> </table> <p>Combining multiple DNN models does not work for me; a single SRGNN performs best across dozens of experiments.</p> <h4>Data preparation &amp; augmentation for SRGNN:</h4> <p>Delete items appearing fewer than 3 times.<br> train_set: 3rd_week + truncated_4th_week_former_part<br> valid_set: truncated_4th_week_latter_part (with the former part as basis)</p> <h5>Augmentation:</h5> <p>Train_set: For each sequence, iteratively separate the last item as the target, truncate or pad the rest to a max_length of 50 as the training sequence, until only 1 item is left (see the sketch at the end of this post).<br> Valid_set: For each sequence, use truncated_4th_week_former_part as the basis, then iteratively concatenate truncated_4th_week_latter_part as the target, until the last item in the latter part, as illustrated below:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4993592%2F081097ff55a958dc4326dc27ed76cbbb%2FMind%20map%20(1).png?generation=1675234390597141&amp;alt=media" alt=""><br> As in the test case, we have the former part as the basis for prediction, so I used this method to best mimic this scenario. It increased the number of validation samples as well compared with using the latter part only, in which case the first item of the latter part cannot be used as a target.<br> The validation metric is recall@20, which correlates very well with my retrieval strategy's final recall across all experiments &amp; DNN models.<br> I did not try other augmentations, such as Swapping, Shuffling, etc…, simply because the data scale is already huge for my machine.</p> <h3>2.
Ranking Stage</h3> <h4>CV strategy for ranking:</h4> <p>Using only the 4th week as training data for forming the session-item pairs.<br> For each type (clicks/carts/orders), drop all sessions without positive targets, then do GroupKFold for each, respectively.<br> Negative sample downsampling: positive:negative = 1:20</p> <h4>Feature Engineering:</h4> <ol> <li><p>Session features: statistics about the session: <br> session_clicks/carts/orders_count, </p> <p>session_length</p> <p>session_items_nunique</p> <p>session_duration </p> <p>etc…</p></li> <li><p>Item features: statistics about the item:<br> simple statistics</p> <p>item_total_count</p> <p>item_count as clicks/carts/orders</p> <p>clicks/carts/orders_counts in each week</p> <p>clicks/carts/orders_counts changing trends in temporal order, weekly basis</p> <p>clicks/carts/orders_unique_session_counts in each week</p> <p>clicks/carts/orders_unique_session_counts changing trends in temporal order, weekly basis</p> <p>etc…</p></li> <li><p>Session-item interaction features:<br> item appearing count in session</p> <p>item appearing as clicks/carts/orders count in session</p> <p>item recent 1 hour/day/week clicks/carts/orders count from the prediction timestamp</p> <p>relative time gap between the last time the item was clicked/carted/ordered and the prediction timestamp</p> <p>co_visitation weights (time weight, type weight, plain count) for items in each session</p> <p>dnn model cosine similarities between session and item embeddings</p> <p>etc…</p></li> </ol> <h4>Models used:</h4> <p>Lgbm, Catboost, Xgboost, MLP<br> Classification and ranking algorithms perform similarly in my solution.<br> And ensembling classification and ranking does not provide any boost.<br> So only binary classification is used at the end.</p> <h4>Ensemble:</h4> <p>Blending of 4 models, each with 5 folds.<br> Stacking with a meta model leads to a similar performance, thus only blending is used.</p> <h4>Post processing:</h4> <p>There are 3327 sessions with fewer than 20 candidates.<br> So I re-calculated the co_visitation items for these sessions without constraining the time range, namely with the full data.<br> After this supplement, the average number of candidates is above 19.<br> The hottest items within half a day around the prediction timestamp are used to fill the last gap.</p> <h3>Acknowledgement</h3> <p>Thanks to <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for introducing polars to the community, which saved me in this competition with limited RAM.<br> Its lazy evaluation and native multithreading support make it such a nice library on a CPU platform.<br> Thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for sharing the knowledge and valuable notebooks as always.<br> Thanks to everyone who shared their knowledge and insights in notebooks and discussions. I learned a lot from you guys.</p> <p>Just use all items in one session as its candidates.</p> <p>No, I created my own dataset using the host's version 1 script, where there are unseen items in the valid set. And it turned out to be not that robust: it only correlated well with the public LB, while the private LB shook down severely.</p> <p>Yes, your feeling is right. The SRGNN similarity score is the top 1 feature in some of my models.</p> <p>Just modified the data processing part to accelerate training speed.<br> Filtered out less frequent items to reduce cardinality.</p> <p>Thanks mate</p>
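<p>A small sketch (assumed max_length and padding token) of the training-pair augmentation described above: for every session, iteratively split off the last item as the target and truncate or left-pad the remaining prefix to a fixed length.</p> <pre><code>MAX_LEN = 50
PAD = 0   # assumes real aids are remapped to start from 1

def make_training_pairs(session_aids):
    pairs = []
    seq = list(session_aids)
    while len(seq) &gt; 1:
        target = seq[-1]
        prefix = seq[:-1][-MAX_LEN:]                          # truncate to the most recent MAX_LEN items
        prefix = [PAD] * (MAX_LEN - len(prefix)) + prefix     # left-pad to MAX_LEN
        pairs.append((prefix, target))
        seq = seq[:-1]                                        # repeat until only one item is left
    return pairs

print(len(make_training_pairs([5, 8, 13])))   # a 3-item session yields 2 (prefix, target) pairs
</code></pre>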
OTTO – Multi-Objective Recommender System
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you for organizing a great competition. I would also like to thank everyone who participated in the competition with us, and the four of us who worked together as a team.<br> Special thanks to my teammates: <a href="https://www.kaggle.com/trasibulo" target="_blank">@trasibulo</a> <a href="https://www.kaggle.com/virilo" target="_blank">@virilo</a> and <a href="https://www.kaggle.com/albert2017" target="_blank">@albert2017</a> </p> <h2>Candidates Selection</h2> <p>We merge several types of models to get the final candidates. Each model contributes a different number of candidates, and then we ensemble them with ranking-based weights to get the final 150 candidates (see the sketch at the end of this post). We tested 100 and 150 and noticed a slight improvement with 150.</p> <h3>Scores:</h3> <p>Our final top150 recall was: <br><br> clicks recall=0.7241 <br><br> carts recall=0.5636 <br><br> orders recall=0.7399 <br><br> <strong>Overall Recall(top 150)</strong> = 0.6854 <br></p> <h3>Base models</h3> <ul> <li>historical items weighted by time and type.</li> <li>fnoa candidates (50-150 candidates depending on the history).</li> <li>transformers4rec based model in order to get candidates. (100)</li> <li>short-term co-occurrence (100)</li> <li>covisitation matrix (100)</li> <li>w2vec knn (100)</li> </ul> <h2>Ranking</h2> <h3>Fnoa</h3> <p>I started with pandas but finally changed my whole pipeline to polars.<br> I decided to do one model per item type: clicks, carts, orders.</p> <h4>Features used:</h4> <p>item_features<br> session_features<br> user_items_features<br> features from candidate generation (probs, similarity, etc..)</p> <h4>Models:</h4> <p>We tried lgb and catboost;<br> Although lgb was slightly better on LB, catboost was faster, so in the end I decided to go only with catboost.</p> <h4>Things that didn't work:</h4> <p>I tried many things to improve the validation score at the end, but nothing I tried seemed to help:<br> Using carts predictions as a feature for orders<br> Stacking<br> Different models<br> Training with all data<br> w2vec similarity features</p> <h3>Xiao</h3> <p>Xgboost model and lgbm model. (coming soon)</p> <h3>Virilo</h3> <p>BST Transformer.<br> <a href="https://www.kaggle.com/competitions/otto-recommender-system/discussion/384441" target="_blank">https://www.kaggle.com/competitions/otto-recommender-system/discussion/384441</a></p> <h2>Ensemble &amp; Stacking</h2> <p>We tried both ensembling and stacking.<br> <strong>Stacking:</strong> we didn't manage to make it work on LB (while we were getting a good validation score, it was not reflected on LB)<br> <strong>Ensemble:</strong> it seemed to be more stable but also didn't give us the expected boost.<br> In the end our final submission was an ensemble of probabilities from 3 models: xgb, lgb, catboost<br> LB results -&gt; single model (0.602 low); ensemble (0.602 high)</p>
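<p>A minimal sketch (assumed per-retriever weights) of the rank-based candidate blending mentioned above: each retriever returns an ordered candidate list per session, every candidate receives model_weight / rank from each list it appears in, and the top 150 by total score are kept.</p> <pre><code>from collections import defaultdict

def blend_candidates(candidate_lists, weights, top_k=150):
    # candidate_lists / weights: dicts keyed by retriever name
    scores = defaultdict(float)
    for name, cands in candidate_lists.items():
        w = weights[name]
        for rank, aid in enumerate(cands, start=1):
            scores[aid] += w / rank
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# toy usage with made-up retriever names and weights
blended = blend_candidates(
    {"history": [3, 7], "covisitation": [7, 9, 3], "w2vec_knn": [9, 4]},
    {"history": 2.0, "covisitation": 1.0, "w2vec_knn": 0.5},
)
print(blended)
</code></pre>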
OTTO – Multi-Objective Recommender System
14th Place Solution(Ethan&qyxs part)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's time!✌️✌️✌️</p> <p>Thanks to OTTO and Kaggle for hosting such a good competition. Thanks to my great teammates <a href="https://www.kaggle.com/juzqyxs" target="_blank">@juzqyxs</a> <a href="https://www.kaggle.com/rib" target="_blank">@rib</a> <a href="https://www.kaggle.com/ria" target="_blank">@ria</a>.</p> <h2>Validation</h2> <p>Use the sessions in the last week.</p> <h2>Retrieval</h2> <p>Two traditional methods:</p> <ul> <li>u2i: the aids the user acted on before</li> <li>i2i: itemcf, done separately for each task. We used several kinds of weights to optimize its performance, such as the "time_gap", "loc_gap" and "type_difference" between two aids.</li> </ul> <h2>Ranking</h2> <ul> <li>Features</li> </ul> <ol> <li>session based: count, last action time_gap (click/carts/orders), last action type </li> <li>aid based: count, ratios (ratio of clicks, ratio of carts to orders…), time</li> <li>session+aid based: count, last action time_gap (click/carts/orders), last action type</li> <li>w2v embeddings: difference and similarity between candidates and the session's history aids, aggregated with mean/max/min/last/std (see the sketch at the end of this post)</li> <li>collaborative filtering score: the i2i collaborative filtering score, calculated between the candidates and the user's historical aids.</li> <li>count and ratio features of aids before/after different windows (1 day, 3 days) around each session's last timestamp</li> </ol> <ul> <li>Data Augmentation</li> </ul> <ol> <li>use more weeks to train; we used 3 weeks for the final submission.</li> <li>use different seeds to generate more train data (different cutoffs of sessions); we used 5 seeds in the end. <br> Those bring a 0.001 boost on local CV.</li> </ol> <ul> <li>Model</li> </ul> <ol> <li>lightgbm binary classifier, learning rate: 0.05, 1000 rounds</li> <li>catboost binary classifier, learning rate: 0.05, 6000 rounds<br> Ensembling these two results scored LB 0.600.</li> </ol> <h2>Interesting Findings</h2> <p>When optimizing the orders task, </p> <ol> <li>orders_prob = 0.7* orders_prob + 0.2* carts_prob + 0.1* click_prob brings a 0.0004 boost on orders' local CV (evaluated on the same sessions for the three tasks).</li> <li>My teammate rib concatenates the carts and orders training data and adds one (0, 1) feature indicating whether the sample comes from carts or not, which gets a 0.0003 boost on orders' score.</li> </ol> <h2>Ensemble</h2> <p>For the final submission, we used voting to ensemble with Rib and ria's result (different candidates, LB: 0.601), and LB goes to 0.602.</p>
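<p>A rough sketch (numpy, assumed embedding source) of the w2v features and the orders-probability blend described above: cosine similarity between a candidate's embedding and each history aid's embedding, aggregated with mean/max/min/last/std.</p> <pre><code>import numpy as np

def w2v_similarity_features(candidate_vec, history_vecs):
    hist = np.asarray(history_vecs, dtype=np.float32)     # history aids in chronological order
    cand = np.asarray(candidate_vec, dtype=np.float32)
    sims = hist @ cand / (np.linalg.norm(hist, axis=1) * np.linalg.norm(cand) + 1e-8)
    return {
        "w2v_sim_mean": float(sims.mean()),
        "w2v_sim_max": float(sims.max()),
        "w2v_sim_min": float(sims.min()),
        "w2v_sim_last": float(sims[-1]),
        "w2v_sim_std": float(sims.std()),
    }

def blend_orders_prob(orders_prob, carts_prob, clicks_prob):
    # the weighted blend from the "Interesting Findings" section
    return 0.7 * orders_prob + 0.2 * carts_prob + 0.1 * clicks_prob
</code></pre>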
OTTO – Multi-Objective Recommender System
15th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OTTO – Multi-Objective Recommender System <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Sure. For the LB submission, I would re-train the model on week4 data and predict the test data. (For CV, I train on week3 and predict week4.)</p> <p>First of all, I would like to thank the organizers and those who shared their knowledge and ideas. Especially, I would like to thank <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> for providing a great numba notebook. Without him, I would have given up this competition and enjoyed my New Year holiday lying in bed every day (LOL.)</p> <h2>Approach Summary</h2> <ul> <li>Retrieval &amp; Re-rank 2-stage model</li> <li>local cv: train on week3 / valid on week4</li> <li>Retrieval recall20: 0.585 (public LB) / 0.585 (private LB)</li> <li>Re-rank models: lightgbm ranker with 182 features (a single model gets 0.601 (public LB) / 0.600 (private LB))</li> <li>ensemble: 5 lightgbm rankers with the same features but different seeds for negative sampling and model training. (0.601 (public LB) / 0.601 (private LB))</li> </ul> <h2>Retrieval/candidate stage</h2> <ul> <li>100 candidates for each session. (candidate number 50 -&gt; 100, which boosts 0.0007; I had no time and memory to test 200.)<ul> <li>historical aids.</li> <li>i2i2i based on <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> 's notebook </li> <li>most popular aids to fill up to 100 candidates.</li></ul></li> <li>local cv on week4 [LB 0.585]</li> </ul> <table> <thead> <tr> <th>num of candidates</th> <th>click</th> <th>cart</th> <th>order</th> <th>sum</th> </tr> </thead> <tbody> <tr> <td>20</td> <td>0.5397</td> <td>0.4220</td> <td>0.6580</td> <td>0.5754</td> </tr> <tr> <td>50</td> <td>0.6223</td> <td>0.4838</td> <td>0.69498</td> <td>0.6243</td> </tr> <tr> <td>100</td> <td>0.6745</td> <td>0.5265</td> <td>0.7188</td> <td>0.6567</td> </tr> </tbody> </table> <ul> <li>main modifications of i2i<ul> <li>order weight: adjacent aids in interactions have larger weight. (If a session has aids a, b, c, the weight of the pair b,c is 1 and the weight of a,c is 1/2.) Use those weights in both i2i similarity and u2i recommendation. I didn't use a time weight here, because I noticed most interactions are within 1 hour.</li> <li>Normalize the i2i similarity dict. Popular aids have larger i2i scores with other aids, which is not fair in the u2i stage. Just use the mean of the i2i scores of popular aids to normalize (see the sketch at the end of this post).</li></ul></li> </ul> <h2>Rerank-models</h2> <p>lightgbm ranker with 182 features, 0.2 negative ratio
(a single model gets 0.601 (public LB) / 0.600 (private LB)).</p> <ul> <li>session features (adding more session features didn't work for me)<ul> <li>event count / type count</li> <li>aid_nunique / type aid_nunique</li> <li>first time, last time, time range</li></ul></li> <li>aid features <ul> <li>kinds of count/nunique over (1 week, 2 weeks)</li> <li>hour features to detect some "promotion" aids which only exist for 1-2 hours in a week.</li> <li>ratio of buy2cart, buy2order, cart2order</li> <li>ratio of re-cart, re-order, re-click</li> <li>first/last time of aids </li></ul></li> <li>aid X session features<ul> <li>kinds of count, order-decayed counts</li> <li>time diff</li></ul></li> <li>retrieval features<ul> <li>revisit rank/scores</li> <li>i2i rank/scores</li> <li>(avg / median / max / last one / last two / last 3 avg) i2i scores</li></ul></li> <li>embedding similarity<ul> <li>(avg / median / max / last one / last two / last 3 avg) w2v similarity</li> <li>(avg / median / max / last one / last two / last 3 avg) bpr similarity</li></ul></li> </ul> <h2>ensemble</h2> <ul> <li><p>5 lightgbm rankers with the same features but different seeds for negative sampling and model training. </p> <table> <thead> <tr> <th>LB</th> <th>click</th> <th>cart</th> <th>order</th> <th>sum</th> </tr> </thead> <tbody> <tr> <td>public</td> <td>0.056</td> <td>0.136</td> <td>0.408</td> <td>0.601</td> </tr> <tr> <td>private</td> <td>0.056</td> <td>0.136</td> <td>0.408</td> <td>0.601</td> </tr> </tbody> </table></li> <li><p>A catboost &amp; MLP ensemble can boost 0.0003 in local CV, but I had no time to run it.</p></li> </ul> <h2>What didn't work for me</h2> <ul> <li>prone similarity</li> <li>The click/cart/order targets are correlated, so I used the pred scores of the other 2 targets as features (i.e. when predicting order, I use the pred scores of click and cart). In local CV, it boosted almost 0.001 but didn't work on LB. I don't know why.</li> <li>Combining week3 and week4 data as the final trainset didn't work. I didn't create week2 data to test it in local CV. I just used 1.25 * the number of rounds of lgb using only week4 to train.</li> <li>Adding more session features and interaction features.</li> <li>Adding k-nearest neighbors of w2v embeddings in the retrieval stage didn't improve the hit rate.</li> </ul> <p>Thanks, Devastator!</p> <p>hahaha. good luck to us.😜</p>
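<p>A simplified sketch (pure Python) of the order-weighted itemCF described above: adjacent aids in a session get weight 1, aids two steps apart get 1/2, and so on; afterwards each item's similarity list is normalised so that popular aids do not dominate. The exact normalisation statistic (here, the mean score of each item's neighbours) is an assumption about the write-up's "mean of i2i scores" idea.</p> <pre><code>from collections import defaultdict

def build_order_weighted_i2i(sessions):
    sim = defaultdict(lambda: defaultdict(float))
    for aids in sessions:                                  # aids in chronological order
        for i, a in enumerate(aids):
            for j, b in enumerate(aids):
                if i != j and a != b:
                    sim[a][b] += 1.0 / abs(i - j)          # order weight: closer pairs count more
    # normalise each neighbour list by its mean score so popular aids don't dominate
    for a, neigh in sim.items():
        mean_score = sum(neigh.values()) / len(neigh)
        for b in neigh:
            neigh[b] /= mean_score
    return sim

print(dict(build_order_weighted_i2i([[1, 2, 3]])[1]))
</code></pre>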