Columns: Title of Competition · Title of Writeup · User · Writeup
GoDaddy - Microbusiness Density Forecasting
11th Place Solution - Simplicity (Luck) Is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, my thanks to Kagglers for very informative and helpful discussions and notebooks. Also, thank you to Kaggle and GoDaddy for hosting the competition.</p> <p>My code is available at <a href="https://github.com/ywugwu/GoDaddy-MDF-Competition" target="_blank">github</a>, which is based on <a href="https://www.kaggle.com/code/vadimkamaev/microbusiness-density" target="_blank">VADIM KAMAEV's</a> public notebook. </p> <p>In this competition, my idea was to keep improving my CV and PB score without letting my model grow too complex.</p> <h1>Validation (CV)</h1> <p>I used the last 5 months of data for validation. The model with the best validation score was used as one of my final submissions.</p> <h1>Model</h1> <p>As in the public notebook, I used a CatBoost Regressor to stack a LightGBM, an XGBoost, and a CatBoost model (see the sketch below), and tuned a lot of hyperparameters.</p> <h1>Feature Engineering</h1> <p>I didn't do much novel feature engineering, and only used techniques from public notebooks. </p> <p>But I changed the hyperparameters a lot.</p> <h1>Blending Submissions</h1> <p>I used different hyperparameters of the model and different features (i.e., with different diff, max, mean, etc. settings) to produce a large number of submissions. I selected the best 4 (or maybe 3) submissions and used a weighted average to create a blended submission. The best (according to the public leaderboard) blended result was selected as one of my final submissions. In the end, the private leaderboard score of the blended submission was slightly better than that of the best CV model.</p>
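<p>To make the stacking idea above concrete, here is a minimal sketch (not the author's actual code): out-of-fold predictions from LightGBM, XGBoost and CatBoost base regressors are used as inputs to a CatBoost meta-regressor. <code>X</code> and <code>y</code> are placeholder numpy arrays of features and microbusiness-density targets, and all hyperparameters are left at defaults.</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from catboost import CatBoostRegressor


def fit_stacked_ensemble(X, y, n_splits=5):
    """Train base models out-of-fold, then stack their predictions with a CatBoost meta-model."""
    base_models = [LGBMRegressor(), XGBRegressor(), CatBoostRegressor(verbose=0)]
    oof = np.zeros((len(X), len(base_models)))
    for tr_idx, va_idx in KFold(n_splits=n_splits).split(X):
        for j, model in enumerate(base_models):
            model.fit(X[tr_idx], y[tr_idx])
            oof[va_idx, j] = model.predict(X[va_idx])
    meta_model = CatBoostRegressor(verbose=0)
    meta_model.fit(oof, y)  # the meta-model learns how to weight the three base predictions
    return base_models, meta_model
</code></pre>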
1st and Future - Player Contact Detection
16th place solution - Team : Deimon Devil Bats
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and the kaggle team for organizing the contest. EDA (match watching) was a lot of fun. Thanks to all participants for their hard work. I'll be reading and learning from your solutions!</p> <p>Also, thanks to my team, I was able to do my best until the finish. Thanks <a href="https://www.kaggle.com/yokuyama" target="_blank">@yokuyama</a> <a href="https://www.kaggle.com/shimishige" target="_blank">@shimishige</a> !</p> <p>During the first half of the competition, each team member tried to create models in their own way (3D segmentation, CenterNet, etc.), but unfortunately, the scores did not increase at all (LB score &lt; 0.7). With 3 weeks remaining, the policy was changed to proceed on the basis of public notebooks.</p> <h2>Summary</h2> <p>This is a 2-stage model of Deep Learning (2.5D CNN, Transformer) and GBDT. Each is based on two public notebooks. Thanks <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a> (<a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">2.5DCNN</a>) , <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> (<a href="https://www.kaggle.com/code/columbia2131/nfl-player-contact-detection-simple-xgb-baseline" target="_blank">GBDT</a> ).</p> <p>The deep models' CV calculation is rough enough that its correctness is questionable (the deep CV was computed only for pairs with dist&lt;2), but their predictions seem to have been sufficient as features for the GBDT. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3381868%2F6d49493565d812fb14f36fded4e4e005%2Fnfl_summary.png?generation=1677776470936082&amp;alt=media" alt=""></p> <h2>1st stage</h2> <p><strong>2.5D CNN</strong><br> We created a 1-class output model that predicts player contact and G in the same class, and a 2-class output model that predicts them separately. 
Two models were created for Endzone and Sideline, respectively, for a total of 4 models.</p> <ul> <li>Common settings<ul> <li>input : Image (±4 frames), Tracking data</li> <li>backbone : tf_efficientnet_b0_ns</li> <li>Image cropping based on predicted player helmet size (max(width, height)*5)</li> <li>Prediction only for distance&lt;2</li> <li>mixup</li></ul></li> <li>1class<ul> <li>Training data downsampling (negative samples reduced to 40,000)</li></ul></li> <li>2class<ul> <li>Helmet position heatmap for player 1 and 2 (<a href="https://www.kaggle.com/competitions/nfl-impact-detection/discussion/208947" target="_blank">reference</a>)</li> <li>Temporal Shift Module (<a href="https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout/discussion/360236#2003353" target="_blank">reference code</a> Thanks <a href="https://www.kaggle.com/bamps53" target="_blank">@bamps53</a>)</li></ul></li> </ul> <p><strong>Transformer + LSTM</strong></p> <ul> <li>30% skip connection Transformer (<a href="https://www.kaggle.com/code/cdeotte/tensorflow-transformer-0-112?scriptVersionId=79039122&amp;cellId=21" target="_blank">reference code</a> Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> )</li> <li>LSTM in last layer</li> <li>25 features based on tracking data</li> <li>Scaling with RobustScaler</li> </ul> <h2>2nd stage</h2> <ul> <li>catboost was a little better than XGB</li> <li>Features (public notebook features plus the following; see the sketch at the end of this write-up)<ul> <li>Tracking data : diff, shift, product</li> <li>Deep model prob : shift, cummax, cumsum</li> <li>helmet size, etc.</li></ul></li> </ul> <p>Adding deep model predictions (especially the CNN) improves the score, as shown below:</p> <table> <thead> <tr> <th></th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>Table only</td> <td>0.7030</td> </tr> <tr> <td>+ 2.5D CNN 1class</td> <td>0.7540</td> </tr> <tr> <td>+ 2.5D CNN 2class</td> <td>0.7681</td> </tr> <tr> <td>+ Transformer+LSTM</td> <td>0.7683</td> </tr> </tbody> </table> <hr> <p><strong>Did not work</strong></p> <ul> <li>Interpolation of undetected helmet boxes by homography transformation. There was probably a lot of noise, and the score worsened when the boxes interpolated by the transformation were used as-is.</li> <li>Using embeddings of the CNN</li> <li>Batch prediction of players in images (3D segmentation, CenterNet, etc.)</li> </ul>
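<p>A minimal sketch (assumed column names, not the team's actual code) of the kind of 2nd-stage features listed above, for a pandas DataFrame <code>df</code> with one row per candidate pair and step, a pair identifier <code>pair_id</code>, a tracking <code>distance</code>, per-player speeds <code>speed_1</code>/<code>speed_2</code>, and a deep-model probability <code>cnn_prob</code>:</p> <pre><code>import pandas as pd


def add_second_stage_features(df):
    df = df.sort_values(["pair_id", "step"]).copy()
    g = df.groupby("pair_id")
    # Tracking data: diff / shift / product style features
    df["distance_diff"] = g["distance"].diff()
    df["distance_shift1"] = g["distance"].shift(1)
    df["speed_product"] = df["speed_1"] * df["speed_2"]
    # Deep model probability: shift / cummax / cumsum style features
    df["cnn_prob_shift1"] = g["cnn_prob"].shift(1)
    df["cnn_prob_cummax"] = g["cnn_prob"].cummax()
    df["cnn_prob_cumsum"] = g["cnn_prob"].cumsum()
    return df
</code></pre>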
1st and Future - Player Contact Detection
3rd place solution, single stage approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to thank the organisers for a very interesting challenge (especially <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> for providing very useful answers and helping teams). It was interesting to participate.</p> <h2>Overview</h2> <p>The approach is single-stage, trained end-to-end with a single model executed per player and step interval (instead of per pair of players) and predicting, for the whole input step range, the ground contact for the current player and contact with the 7 nearest players. The model has a video encoder part to process input video frames and a transformer decoder to combine tracking features and video activations.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Fa93573968664b0e0af1534e3819e9d49%2FKaggle%20model_1.png?generation=1677986296204472&amp;alt=media" alt=""></p> <h3>Video encoders</h3> <p>The video encoders used a number of input video frames around the requested steps and produced activations at the corresponding steps at downsampled resolution, usually for 16 steps with the corresponding 96 frames, using every second frame as input.</p> <p>I used a few different models for video encoders:</p> <ul> <li>2d imagenet pretrained models + 3d Conv layer (credits to the Team Hydrogen solution of one of the previous competitions). 3 input frames around the current step are converted to grayscale and used as input to the 2d model, with the results combined using 3d conv. Usually larger models performed better for me, with the best performing model based on the convnext large backbone. Other Convnext based models or DPN92 also worked ok.</li> <li>2d imagenet pretrained models + TSM, with color inputs for every 2nd or 3rd frame and TSM-like activation exchange between frames before every convolution. 
Worked better with smaller models like convnext pico or resnet 34 (would probably work better with larger models if the TSM converted model were pretrained on video tasks).</li> <li>3D/Video models like X-CLIP (X-CLIP-B/16 was the second best performing model) or the Video Swin Transformer (performed okay-ish but not included in the final submission).</li> </ul> <p>Video frames were cropped to 224x224 resolution with the current player's helmet placed at the center/top part of the frame and scaled so the average size of helmets in the surrounding frames would be 34 pixels.<br> I applied augmentations to randomly shift, scale and rotate images, shift HSV, and add blur and noise.</p> <p>For video model activations (at the 32x downsampled 7x7 resolution) I added the positional encoding and learnable separate sideline / endzone markers.<br> Optionally the video activations may be encoded using transformers per frame in a similar way as done in DETR, but I found it has little to no impact on the result.</p> <h3>Transformer player features / video activations decoder</h3> <p>The idea is to use attention mechanisms to combine a player's features with the information of other surrounding players and to query the relevant parts of the images.</p> <p>For a particular player and step, I selected the current player's features for the surrounding -7..+8 steps, and for every step I selected up to 7 nearest players within 2.4 yards, so in total 16 steps * (7+1) players inputs.</p> <p>For every player/step input I used the following features, added together using a per-feature linear transformation to match the transformer feature dim:</p> <ul> <li>position encoding for the helmet pos on the sideline and endzone video, if within 128 pixels from the crop.</li> <li>is it visible on sideline and endzone frames</li> <li>pos encoding for the step number</li> <li>is the player the current selected player</li> <li>is the player from the same team as the current player or not</li> <li>player position (not xy but the role from the tracking metadata)</li> <li>speed over +- 2 frames</li> <li>signed acceleration over +- 2 frames</li> <li>distance to the current player, both values and one hot encoding over +- 2 frames</li> <li>relative orientation, of the player relative to player-player0 and of player0 relative to player, encoded as sin and cos over +- 2 frames</li> <li>for visible helmets, I also added the activations from the video at the helmet position directly to player features. The idea was - it's most likely relevant and may help to avoid using the attention heads for the same task, but I found no difference in the final result.</li> </ul> <p>Player/step features are used as inputs/targets for a few iterations of transformer layers:</p> <ul> <li>For all step/player inputs, I applied the transformer decoder layer with the query over video activations from the same step. 
</li> <li>For all step/player inputs I applied the transformer encoder with self attention over all players/steps:</li> </ul> <pre><code># video shape is HW*2 x T*B x C
# player_features shape is P, T, B, C
# where P - players, T - time_steps, B - batch, C - features, HW - video activations dims
x = player_features
for step in range(self.num_decoder_layers):
    # reshape to move time steps to batch to use attention only over the current step
    x = x.reshape(P, T*B, C)
    x = self.video_decoders[step](x, video)
    # attention over all players/steps
    x = x.reshape(P*T, B, C)
    x = self.player_decoders[step](x)
</code></pre> <p>I tested with the number of iterations between 2 and 8 and the results were comparable, so I used 2 iterations for most models.</p> <h2>Data pre-processing</h2> <p>Mostly smoothing of the predicted helmet trajectories: I smoothed the predictions to find and remove outliers, then interpolated/extrapolated.<br> During early tests the impact on performance was not very large, so this is not conclusive.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743064%2Feb690ac7195ad8d03205681175d3c979%2Fplayers_trajectory_pp.png?generation=1677916456068367&amp;alt=media" alt=""></p> <h2>Training</h2> <p>For training I selected all players and steps with a helmet detected on at least one video (so the model would have the tracking features for a few steps before or after the player was visible for the first/last time). I have not excluded any samples using other rules.</p> <p>I used the AdamW optimiser with quite a small batch size of 1 to 4 and a CosineAnnealingWarmRestarts scheduler with an epoch size of 1024-2048 samples, trained for 68 epochs. It takes about 6-10 hours to train a single model on a 3090 GPU.<br> I evaluated the model every time the scheduler reached the min rate, at epochs 14, 36 and 68.</p> <p>I used the BCE loss with slight label smoothing of 0.001..0.999 (it was a guess, I have not tuned hyperparameters much).</p> <p>I added aux outputs to the video models to predict if the current player has contact with other players or the ground, and a heatmap of other players' helmets with contacts, but the impact on the score was not very large.</p> <h2>Prediction</h2> <p>The prediction is very straightforward: for a model with an input interval of 11 or 16 steps I run it with a smaller offset of 5 steps to predict over overlapping intervals for every player.</p> <p>predictions = defaultdict(list) # key is (game, step, player1, player2)</p> <p>Every prediction between the current and another player is added to the list at the dictionary key (gameplay, step, min(player0, player), max(player0, player)),<br> and all predictions are averaged. 
Usually predictions for a pair of players at a certain step would include predictions with each player as the current one and from a few step intervals where the current step is closer to the beginning, middle and end of the interval.</p> <p>When ensembling multiple models, their predictions are added to the same predictions dictionary, with better models added 2-3 times to increase their weight.<br> In total, I used 7 models for the best submission.</p> <h2>Individual models performance</h2> <table> <thead> <tr> <th>Video model type, backbone</th> <th>Notes</th> <th>Private LB score</th> </tr> </thead> <tbody> <tr> <td>Convnext large, 2D + 3D conv</td> <td>16 steps/96 frames, skip 1 frame.</td> <td>0.7915</td> </tr> <tr> <td>Convnext base, 2D + 3D conv</td> <td>16 steps/96 frames, skip 1 frame.</td> <td>0.786</td> </tr> <tr> <td>DPN92, 2D + 3D conv</td> <td>16 steps/96 frames, skip 1 frame.</td> <td>0.784</td> </tr> <tr> <td>X-CLIP-B/16</td> <td>11 steps/64 frames, skip 1 frame.</td> <td>0.791</td> </tr> <tr> <td>X-CLIP-B/32</td> <td>11 steps/64 frames, skip 1 frame.</td> <td>0.784</td> </tr> <tr> <td>Convnext pico, TSM</td> <td>63 steps/384 frames, skip 2 frames.</td> <td>0.788</td> </tr> <tr> <td>Convnext pico, 2D + 3D conv</td> <td>64 steps/384 frames, skip 2 frames.</td> <td>Local CV slightly worse than TSM</td> </tr> <tr> <td>2 best models ensemble</td> <td>Convnext large and X-CLIP-B/16</td> <td>0.7925</td> </tr> <tr> <td>6 models ensemble</td> <td>Without DPN92, re-trained on full data with original helmets</td> <td>0.7932</td> </tr> <tr> <td>6 models ensemble</td> <td>Without DPN92, re-trained on full data with fixed helmets</td> <td>0.7934</td> </tr> <tr> <td>7 models ensemble</td> <td>Convnext large added with weight 3 and X-CLIP-B/16 with weight 2. Models trained on different folds.</td> <td>0.7956</td> </tr> </tbody> </table> <h2>What did not work</h2> <ul> <li>Training the Video Encoder model using aux losses before training transformer decoders. The Video Encoder overfits.</li> <li>Adding many more tracking features to player transformer inputs. When the history over a larger number of steps was added for each player input, the transformer encoder overfits.</li> <li>Larger models with TSM</li> <li>Fixing the players/helmets assignment in the provided baseline helmets prediction. On some folds the impact was negligible, on some the score improved by ~ 0.005 even without re-training models. On the private LB the score was similar with and without helmets fixed. One submitted model was using the original data pre-processing, the other a more complex pipeline with helmets re-assigned.</li> </ul> <h2>Local CV challenges</h2> <p>To check for possible issues with model generalisation, I decided to split into folds using the list of games sorted by game play, with the first 25% of games assigned to the fold 0 validation set and so on.</p> <p>I found not only differences in score between folds, but also that models/ideas performing well on one fold may work much worse on another.<br> For example, I found that on fold 2 the models with a very large receptive field over time/steps (384 steps, over 6 seconds; the convnext pico based models in the submission) performed about 0.008 better than the best larger models, while the score of such models was a similar 0.007 worse on fold 3.</p> <p>All this made the local validation much more challenging and harder to trust. 
Taking into account that the private dataset is even smaller than any single fold, I expected to see a significant shakeup.</p> <h2>Player helmets re-assignment</h2> <p>Since it was not part of the best submission, it is described in a separate post: <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392392" target="_blank">https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/392392</a></p> <p>Instead of the data pre-processing described above, I used the estimated tracking -&gt; video transformation to interpolate/extrapolate missing helmet information. The best result was when I discarded the first or the last predicted helmet position and extrapolated by 8 steps, maintaining the difference with the position predicted from the tracking and tracking-&gt;view transformation.</p> <p>The submission source is available at <a href="https://www.kaggle.com/dmytropoplavskiy/nfl-sub-place3" target="_blank">https://www.kaggle.com/dmytropoplavskiy/nfl-sub-place3</a></p>
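<p>A minimal sketch (my reading of the description above, not the author's exact code) of the overlapped-window prediction averaging: every prediction for a player pair is appended under a symmetric key, better models can be appended several times to increase their weight, and the final score is the mean.</p> <pre><code>from collections import defaultdict

import numpy as np

predictions = defaultdict(list)  # key is (game_play, step, player1, player2)


def add_prediction(game_play, step, player0, player, prob, weight=1):
    # Symmetric key, so predictions made with either player as the "current" one are pooled.
    key = (game_play, step, min(player0, player), max(player0, player))
    predictions[key].extend([prob] * weight)  # weight 2-3 for the better models


def final_scores():
    return {key: float(np.mean(probs)) for key, probs in predictions.items()}
</code></pre>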
GoDaddy - Microbusiness Density Forecasting
2nd Place solution: The Godaddy Microbusiness Data Cleaning Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Interesting problem, and it was also interesting to see the solutions people came up with. I'm pretty new to these competitions. The biggest uncertainty to me is how much probing of the hidden data is allowed, especially in a case like this where it matters a lot for time series prediction.</p> <p>I also thought SMAPE was a weird choice of metric that didn't make sense from a business perspective. A change in a small county has a large influence on SMAPE, whereas if you are trying to capture revenue streams, wouldn't you care a lot more about a change in the number of active forecasts in Los Angeles? I can only imagine this would be useful for allocation of advertising dollars.</p> <p>The code: <a href="https://www.kaggle.com/code/danielphalen/godaddy-microbusiness-final/notebook?scriptVersionId=133942554" target="_blank">Godaddy Microbusiness Final</a></p> <h2>The biggest issue - Data quality</h2> <p>As has been pointed out by many people, SMAPE is a relative metric. With the kind of forecast values we are looking at (~1.2-1.5 for 1 month, ~3.2 for months 3-5 together), we can look at the contribution of an individual CFIPS. </p> <p>$$\Delta_{SMAPE,i} = \frac{200}{n_{counties}} \frac{|F_i - A_i|}{F_i+A_i}$$</p> <p>This, in the case of something going from zero to non-zero, can be a ~0.0638 per-month difference. At the same time, a prediction for most of these models seems to be on the order of a 0.5% change per month, which is far below the smallest possible change in active entries for maybe 25% of the counties. </p> <p>Second, there are many CFIPS where the data is terrible. The hosts acknowledged that they had a methodology change in Jan 2021, leading to a number of jumps, and I suspect there was another change after the first month. </p> <p>The one which always bothered me was CFIPS 56033, Sheridan Co, WY, where there are 2.36 microbusinesses/working age person. I can only guess there is some bug where if there is a misclassification they dump it in that CFIPS. I also wondered if there was some fraud happening during COVID times as people chased PPP loans, given the large jumps. However, many of these issues would eventually revert, like the below:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5552115%2Fc85777fd4f981dee2765e2b78b7f1236%2Fcfips48155.png?generation=1678978908088571&amp;alt=media" alt=""></p> <p>So the question that I think makes or breaks this is: if we identify a large jump, will it revert? In this I believe we are helped by the gap between December and the first forecast month, March. I also hope GoDaddy gets better in their data collection procedures, which would help. In the end, if they want to use this in business, it would probably not be that useful to have a large machine learning error correction model for data collection issues.</p> <p>My solution is a mix of public leaderboard probing for individual CFIPS changes, reversion for outlier CFIPS, and a forecast of the smooth changes.</p>
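<p>A quick sanity check of the ~0.0638 figure from the formula above (assuming 3,135 counties, which is not stated explicitly in the write-up): a county whose actual value is zero and whose forecast is non-zero contributes the maximum possible term.</p> <pre><code>n_counties = 3135            # assumed number of CFIPS in the dataset
forecast, actual = 1.0, 0.0  # any non-zero forecast against a zero actual
delta_smape = (200 / n_counties) * abs(forecast - actual) / (forecast + actual)
print(round(delta_smape, 4))  # 0.0638
</code></pre>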
<h3>Public leaderboard - still very useful information</h3> <p>Given the equation above and, as <a href="https://www.kaggle.com/petersorenson360" target="_blank">@petersorenson360</a> suggested, that the leaderboard was accurate to 4 decimal places, you are sensitive to changes of less than 1% in an individual value. Given you had about 20 days from the revealed test set to the final submission deadline, you could probe about 100 CFIPS for final values. It was actually easier if you made the value worse, since the equation above would then allow you to work out the exact number of active entries. </p> <p>Many large jumps would then revert, and we tried to work out how long that might take. This will probably be the difference, in the end, between who wins and loses, and I think the couple-month gap between the test and private data will help ensure they revert. This was by far the top influence.</p> <h3>Continuous Model</h3> <p>After seeing GiBa's <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> notebook, his data cleaning method reminded me of something used in futures algorithmic trading, called a <code>continuous contract</code>. Basically, you need to take a discontinuous series of prices and make it smooth. So for what I will call the continuous forecast, I used a data cleaning method where I looked for large jumps in the number of active entries, then did a shift to smooth them out (see the sketch at the end of this write-up). So the month where there was a large jump in actives became a zero jump. This smoothing method gave the best CV score of the approaches that I tried.</p> <p>I then set up the CV environment. For the model I used XGBoost, added extra indicators, and prevented peeking into the future:</p> <ul> <li>lagged density changes and lagged active values</li> <li>pct_bb, pct_college, pct_it, pct_foreign_born, median_hh_inc of the last year (Many public notebooks used an implicit forward bias in their features by looking at 2021 census data when training on 2019 data)</li> <li>Labor Force and Labor force participation for the county</li> <li>10 year average pct population change for the county</li> <li>latitude and longitude</li> <li>engagement, participation, and MAI_composite from the Godaddy website.</li> <li>The difference between the microbusiness density and the average of the neighboring counties, weighted by population.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5552115%2Ff1ba9755aeb2b45525fd98359dbf9cd2%2FFeatureImportances_1.png?generation=1687036241940575&amp;alt=media" alt=""></p> <p>I found a population cut of about 5,000 was helpful. Rounding the model output to an integer number of actives helped a bit. A lot of these external indicators proved more helpful for the longer term forecasts, where there is some reversion to an average.</p> <p>Other notes:</p> <ul> <li>I actually found that training on the 1 month forward change, then feeding that prediction back into the model and recalculating all indicators, gave the best CV, as opposed to directly forecasting 3 months ahead. It turned out to be less biased.</li> <li>The <code>scale</code> variable used seemed to work in forecasting the public leaderboard, but did terribly in my CV environment.</li> <li>Again, noting that the last value is pretty good and I was using a roll forward model, I used the tuned public models to patch in a January forecast as an estimate for the ground truth, then rolled the XGBoost model forward from that.</li> </ul>
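<p>A minimal sketch (my interpretation, not the author's code) of the "continuous contract" style cleaning referenced above: flag unusually large month-over-month jumps in a county's active counts and shift the earlier history so each flagged jump becomes a zero jump; the 10% threshold is an arbitrary placeholder.</p> <pre><code>import numpy as np
import pandas as pd


def smooth_active_series(active, jump_threshold=0.1):
    """active: pd.Series of monthly active counts for one CFIPS, in chronological order."""
    active = active.astype(float).copy()
    rel_jump = active.diff() / active.shift(1)  # month-over-month relative change
    for i in np.where(rel_jump.abs() &gt; jump_threshold)[0]:
        # Shift all earlier history so the large jump at month i becomes a zero jump.
        active.iloc[:i] += active.iloc[i] - active.iloc[i - 1]
    return active
</code></pre>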
RSNA Screening Mammography Breast Cancer Detection
⭐️ Remek & Andrij - #9 solution ⭐️
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congratulations to all participants. Congratulations to the dream teams from the gold zone. I'm impressed by your consistency in winning Kaggle competitions. Waiting to learn from your solutions.</p> <p>Thank you to my teammate Andrij <a href="https://www.kaggle.com/aikhmelnytskyy" target="_blank">@aikhmelnytskyy</a>. We had a great collaboration 👍👍👍 - I feel that from the first minute we played as one team with one goal - find a better solution.</p> <p>Gold in a competition was a dream for me. Last year we (with <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> ) were #1 in silver (#12 solution in Image Matching Challenge 2021). This year I decided to work hard to experience the gold zone and finally become a competition master. Even though the official LB is not finalized …. we are #9 and in gold! :) and I am …… extremely happy! 😁😁😍😜</p> <p>This competition was really great for testing many different computer vision techniques. Three months passed very quickly. The first phase of the competition was difficult. We performed a large number of different tests that did not give us results higher than 0.3 (LB score). It was very frustrating. We were unable to find any correlation between local CV and LB. Then we set up a good training pipeline – our main success points are:</p> <ul> <li>Sampling strategy and positive class balancing</li> <li>Augmentation</li> <li>Model selection</li> <li>Postprocessing</li> </ul> <p>The last two weeks of the competition were hard for me – I caught covid and had to pause (during recovery time I coded using an iPad). But we cooperated all the time and we finally managed to jump to the TOP10 and 0.63 (public LB). In the end we were closing the gold zone at #13 (public LB) and #9 on the private LB!</p> <p><strong>Models score summary</strong><br> • best public LB: 0.63 (ensemble) / 0.57 (single model) / local CV (0.48)<br> • private lb: 0.50 (max: 0.50)</p> <p><strong>Competition achievements</strong><br> • new experience in Kaggle competition - a lot of good discussion <br> • 1x gold medal - dataset (I am very happy - my first one)<br> • 2x gold medals - notebook <br> • 1x gold medal - competition-&gt; extremely happy! 😁😁😍😜</p> <p><strong>Our final selection</strong><br> We selected two different solutions based on the same model setup.<br> • 3 models - ensemble average model prediction probabilities -&gt; LB: 0.63 (PL: 0.47)<br> • 3 models - voting strategy and then score average (on votes score) -&gt; LB: 0.62 (PL: 0.5)<br> After many tests we had a strong feeling that our second choice (even though its score was lower on the LB than many of our other solutions) was more stable (less sensitive to the threshold) than the other solutions. So we closed our eyes and trusted our tests rather than the LB score.</p> <p><strong>Solution description in 4 steps</strong><br> Our solution is very simple. 
We tried different ways to predict breast cancer, but in the end it appeared that the simplest solution worked best for us (on both local CV and LB).</p> <ol> <li>Process dicom files to png (windowing).</li> <li>Inference – 3 convnext (v1) models with TTA</li> <li>Ensemble – probabilities average or voting</li> <li>Thresholding – th ~0.5 -&gt; final prediction result 0|1</li> </ol> <p><strong>Dataset</strong><br> • Image resolution: 1536x768<br> • ROI cropped - cv2.connectedComponentsWithStats method (we started with yolov5 in the prototyping phase but then used cv2 - due to licence regulations)<br> • 4-GroupFold: on patient_id<br> • image pixel scaling: div by 255.<br> • dicom2png – proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> (<a href="https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example" target="_blank">https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example</a>)<br> • we did not use external datasets – all models were trained on competition data<br> • we processed data in two steps (to avoid problems with file storage capacity – during the tests we exported images to different resolutions – max 2400px):</p> <ol> <li>process j2k format files (dicom2png and crop to 1536xW - we did not resize to the final resolution in this step) -&gt; inference -&gt; delete files </li> <li>process non-j2k files -&gt; inference -&gt; delete files</li> </ol> <p>The fast and reliable crop ROI function we used during the competition (credits to <a href="https://www.kaggle.com/vslaykovsky" target="_blank">@vslaykovsky</a> : <a href="https://www.kaggle.com/code/vslaykovsky/rsna-cut-off-empty-space-from-images" target="_blank">"RSNA: Cut Off Empty Space from Images"</a>)</p> <pre><code>def crop_roi(img, photometric_interpretation):
    # it can be improved
    Y = img
    xmin = Y.min()
    xmax = Y.max()
    norm = np.empty_like(Y, dtype=np.uint8)
    dicomsdl.util.convert_to_uint8(Y, norm, xmin, xmax)
    if photometric_interpretation == 'MONOCHROME1':
        norm = 255 - norm
    X = norm
    X = X[5:-5, 5:-5]
    output = cv2.connectedComponentsWithStats((X &gt; 10).astype(np.uint8)[:, :], 8, cv2.CV_32S)
    stats = output[2]
    idx = stats[1:, 4].argmax() + 1
    x1, y1, w, h = stats[idx][:4]
    x2 = x1 + w
    y2 = y1 + h
    X_out = Y[y1: y2, x1: x2]
    return X_out
</code></pre> <p><strong>Augmentation</strong> <br> • Albumentations (we tried Kornia and it works great, but we would need more time to rewrite the functions we used in the competition).</p> <pre><code>transformation = [
    A.OneOf([
        A.RandomBrightnessContrast(always_apply=False, p=.5, brightness_limit=(-1, 1.0), contrast_limit=(-1, 1.0), brightness_by_max=True),
        A.RandomGamma(always_apply=False, p=.5, gamma_limit=(60, 120), eps=None),
    ], p=0.5),
    A.Rotate(limit=5, p=0.5),
    A.Affine(rotate=5, translate_percent=0.1, scale=[0.9, 1.5], shear=0, p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Resize(im_size[0], im_size[1]),
    A.ShiftScaleRotate(always_apply=False, p=.2, shift_limit_x=(-1.0, 1.0), shift_limit_y=(-1.0, 1.0), scale_limit=(-0.1, 0.1), rotate_limit=(-5, 5), interpolation=0, border_mode=3, value=(0, 0, 0), mask_value=None, rotate_method='largest_box'),
    A.OneOf([
        GridDropoutv2(always_apply=False, p=.2, ratio=.25, unit_size_min=100, unit_size_max=400, holes_number_x=100, holes_number_y=100),
        CoarseDropoutv2(always_apply=False, p=.2, max_holes=12, max_height=250, max_width=100, min_holes=3, min_height=50, min_width=50, mask_fill_value=None)
    ], p=0.25),
    ToTensorV2()
]
</code></pre> <p>GridDropoutv2 and CoarseDropoutv2 are our modifications of the Albumentations functions. 
They generate cutouts in random greyscale.</p> <p><img src="https://i.ibb.co/6WtqnNv/batch-images-ep7.jpg" alt=""></p> <p><strong>Training</strong><br> • Framework: Pytorch scripts (multi gpu support: DDP)<br> • Optimizer: RAdam <br> • Lookahead: on<br> • Scheduler: OneCycle - no warm up<br> • Weight decay: 1e-2<br> • Dataset was seeded with a different seed each epoch.<br> • Batch size: 16<br> • Mixed precision (AMP): on<br> • Gradient clipping: on<br> • Loss function: BCEWithLogitsLoss with pos_weight = 1.0 - 1.25<br> • Epochs: 7 (best models are from epochs 4-5) - we decided to take early stopping (best probf1 score) models instead of the last one.<br> • Sampler (thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>): custom sequential sampler (sequence -&gt; positive sample / negative sample = 8) – small changes compared to the original one.</p> <pre><code>class Balancer(torch.utils.data.Sampler):
    def __init__(self, pos_cases, neg_cases, ratio=3):
        self.r = ratio - 1
        self.pos_index = pos_cases
        self.neg_index = neg_cases
        self.length = self.r * int(np.floor(len(self.neg_index) / self.r))
        self.ds_len = self.length + (self.length // self.r)

    def __iter__(self):
        pos_index = self.pos_index
        neg_index = self.neg_index
        np.random.shuffle(pos_index)
        np.random.shuffle(neg_index)
        neg_index = neg_index[:self.length].reshape(-1, self.r)
        # pos_index = np.random.choice(pos_index, self.length//self.r).reshape(-1,1)
        pos_index_len = len(pos_index)
        pos_index = np.tile(pos_index, ((len(neg_index) // pos_index_len) + 1, 1))
        pos_index = np.apply_along_axis(np.random.permutation, 1, pos_index)
        pos_index = pos_index.reshape(-1, 1)[:len(neg_index)]
        index = np.concatenate([pos_index, neg_index], -1).reshape(-1)
        return iter(index)

    def __len__(self):
        return self.ds_len
</code></pre> <p><strong>Models</strong><br> ConvNext_v1 small (timm - checkpoint: convnext_small.fb_in22k_ft_in1k_384). We tried different architectures but this one worked best in our solution.<br> • avg pooling or <br> • GEM pooling (trainable parameters set to True)<br> • drop_path_rate=0.2 and drop_rate=0.05 (we tested different settings but these worked best for our setup)<br> • input size: (1536, 768), 3 channels (b&amp;w images)</p> <p><strong>Local validation</strong><br> • Metrics: probf1, ROC_AUC, prec/recall and MCC (Matthews correlation coefficient)<br> • For local validation we used the tool provided by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/378521" target="_blank">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/378521</a><br> • For each training epoch we additionally plotted: prediction dynamics, confusion matrix.<br> • We looked into the predictions and performed visual analysis of missed or incorrectly classified samples.</p>
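<p>For reference, a generic implementation of the GEM (generalized mean) pooling mentioned in the model settings above, with the exponent as a trainable parameter (a standard formulation, not necessarily the exact code used):</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F


class GeM(nn.Module):
    """Generalized mean pooling over the spatial dimensions with a trainable exponent p."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)  # trainable, as in the write-up
        self.eps = eps

    def forward(self, x):  # x: (B, C, H, W) feature map from the backbone
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, kernel_size=(x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)  # (B, C) pooled embedding
</code></pre>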
<p><strong>Inference</strong><br> • Three models – ensemble (voting or model probabilities averaging):</p> <ol> <li>avg pooling - fold 1 (trained on 1536 x 768) -&gt; probf1: 0.52 (local CV) / 0.57 (LB)</li> <li>GEM pooling - fold 2 (trained on 1536 x 768) -&gt; probf1: 0.44 (local CV) (we did not test it on the LB)</li> <li>GEM pooling - fold 3 (trained on 1536 x 768) -&gt; probf1: 0.46 (local CV) (-)<br> • TTA: h_flip only (TTA weighted - 0.6 original image + 0.4 flipped image) -&gt; it increased our score by 0.03</li> </ol> <p><strong>Ensembling</strong><br> • Model probabilities ensembling using a better-than-median function</p> <pre><code>def better_than_median(inputs, axis):
    """Compute the mean of the predictions if there are no outliers,
    or the median if there are outliers.
    Parameter: inputs = ndarray of shape (n_samples, n_folds)"""
    spread = inputs.max(axis=axis) - inputs.min(axis=axis)
    spread_lim = 0.6
    print(f"Inliers:  {(spread &gt; spread_lim).sum():7} -&gt; compute mean")
    print(f"Outliers: {(spread &lt;= spread_lim).sum():7} -&gt; compute median")
    print(f"Total:    {len(inputs):7}")
    return np.where(spread &gt; spread_lim, np.mean(inputs, axis=axis), np.median(inputs, axis=axis))
</code></pre> <p>• Voting and then averaging</p> <pre><code>def voting_confidence(x):
    if x &lt;= 0.25:
        return 0.0
    elif x &lt;= 0.45:
        return 0.35
    elif x &lt;= 0.55:
        return x
    elif x &lt;= 0.75:
        return 0.65
    else:
        return 1.0
</code></pre> <p><strong>CPU/GPU</strong><br> • Training: 1-2xA6000 or A100 GPU-80GB on Cloud </p> <p><strong>Experiment tracking</strong><br> • Weights &amp; Biases </p> <p><strong>Things we tested during the competition that did not improve our score (probably more tests are needed)</strong><br> • Full dataset training, site_1/site_2 separate models, up-sampling (by sample frac), down-sampling.<br> • Stochastic Weight Averaging and Exponential Moving Average<br> • Optimizers – AdamW, SGD, NAdam, Lion, Lamb<br> • Training with layer freeze (different levels)<br> • Loss function: Focal loss, LDAM, LMFLoss, Label smoothing<br> • Models: Effnet, NextVit (similar score but harder to train and slower), DenseNet, Inception_v3, maxvit<br> • Effnet + pixel-wise self attention, Effnet + cross attention<br> • Convnextv1 + max and avg concatenated pooling<br> • Synthetic dataset: synthetic breast cancer generator (an experienced radiologist is required to evaluate the results).<br> • PatchGD - <a href="https://arxiv.org/pdf/2301.13817.pdf" target="_blank">https://arxiv.org/pdf/2301.13817.pdf</a><br> • Training approaches - pair training – MLO / CC – patient_id and side (L/R) <br> • Metamodel - SVM and XGBoost on image embeddings.<br> • Pseudolabeling - using new samples during submission time.<br> • Selecting the best samples for cases where the patient has more than 2 views </p> <pre><code>sub_s1 = sub_s1[['prediction_id', 'site_id', 'patient_id', 'laterality', 'cancer']].\
    groupby(['patient_id', 'laterality']).\
    apply(lambda x: x.nlargest(3, 'cancer')).reset_index(drop=True)
</code></pre> <p><strong>Special thanks to</strong><br> • <a href="https://www.kaggle.com/Andrij" target="_blank">@Andrij</a> - for great cooperation during the competition. For very good and substantive talks aimed at solving the problem and improving results.<br> • <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - Thanks for the great activity on the forum and sharing knowledge. I implemented many of your ideas, and many of them improved our score. 
Thank you!</p> <p>Source code will be available soon (within a week) on Remek's GitHub: <a href="https://github.com/rkinas" target="_blank">https://github.com/rkinas</a></p>
RSNA Screening Mammography Breast Cancer Detection
16th place solution : segmentation & meta-classifier
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA Screening Mammography Breast Cancer Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone!</p> <p>First, thanks to the Kaggle team and the organizers for hosting the competition! Also, congratulations to all the winners!</p> <h2>Data</h2> <h3>Preprocessing</h3> <p>My preprocessing code heavily depends on the public notebooks (e.g. remove letters, crop breast via contour).</p> <ol> <li>decode <code>.jpeg</code> with <code>dicomsdl</code> &amp; <code>nvjpeg2000</code>.</li> <li>crop edge (margin pixel 10)</li> <li>extract breast with <code>opencv2</code> (contour based)</li> <li>resize to 1536x960. (I roughly guess that resizing into a 1.5 ~ 2.0 aspect ratio is fine.)</li> </ol> <p>In my experiments, windowing didn't affect the score positively, so I decided not to use it.</p> <h3>Augmentation</h3> <p>Heavy augmentation works well. Light augmentation tends to overfit.</p> <ul> <li>v/hflip</li> <li>scale / rotate</li> <li>brightness / contrast</li> <li>cutout (coarse dropout with large patch size)</li> <li>mixup</li> </ul> <h2>Architecture</h2> <p>I couldn't spend much time running various experiments due to a lack of time &amp; computing resources. So, I only tested a few backbones &amp; training recipes. (about 70% of my submissions are runtime errors &amp; mistakes lol)</p> <p>Here's the full pipeline.</p> <ol> <li>pre-train a segmentation model with the <code>cbis-ddsm</code> &amp; <code>vindr</code> datasets.<ul> <li>segment: provided RoI image.</li> <li>label: <code>malignant</code> to cancer / <code>BIRADS 5</code> to cancer.<br> % Of course, the label doesn't perfectly correlate with the competition standards. But, I roughly think that maybe it could help train the model in some ways.</li></ul></li> <li>train with competition data (initialize the weights with the pre-trained model)<ul> <li>segment: inferred with the pre-trained model.</li> <li>auxiliary: given meta-features (11 features in total).</li></ul></li> <li>re-label the external data with the <code>step 2</code> model.</li> <li>re-train with competition data (initialize with the <code>step 3</code> model)</li> <li>train a meta-classifier (oof + meta-features (e.g. laterality, age, …))</li> </ol> <p>For a baseline, I ran steps 1 ~ 2 and 5 and achieved CV 0.4885 LB 0.59 (PB 0.46). Also, I tested pre-training only with the <code>cbis-ddsm</code> dataset, and there was about a 0.02 drop on CV &amp; LB, but the same score on PB (CV 0.4656 LB 0.57 PB 0.46).</p> <p>A week before the deadline, I finished up to step ~ 5 and got CV 0.5012 LB 0.55 (PB 0.51). Sadly, I didn't choose it as a final submission : (</p> <p>On the last day of the competition, I ensembled the <code>effnet_v2_s</code> backbone and got CV 0.5063 LB 0.56 (PB 0.49).</p> <p>Lastly, I chose the best LB &amp; CV for the final submission.</p> <h3>Meta-Classifier</h3> <p>I built a meta-classifier with meta-features like age, laterality, and the (per-breast) statistics of the predictions. Usually, it gives ~ 0.02 improvements on the CV &amp; LB (also PB).</p> <p>I was worried about overfitting to some meta-features (e.g. 
machine id, (predicted) density, …), so I decided to use only 3 auxiliary features (age, site_id, laterality) to train the model.</p> <ul> <li>feature: age, site_id, laterality, (mean, std, min, max) of the predictions.</li> <li>cv: stratified k fold (5 folds)</li> <li>model: CatBoost</li> </ul> <h2>Works</h2> <ul> <li>higher resolution (1536x768 ~ 1024) is good.</li> <li>external data<ul> <li>it gives about +0.02 boosts.</li></ul></li> <li>architecture<ul> <li>encoder: backbone: <code>effnet-b3</code> works best</li> <li>decoder: u-net++</li></ul></li> <li>augmentation</li> <li>mixup (alpha 1.0)</li> <li>loss<ul> <li>0.6 * cls_loss (cross_entropy) + 0.4 * seg_loss (dice) + 0.1 * aux_loss (cross-entropy)</li></ul></li> <li>stratified group k fold (4 folds)</li> <li>meta-classifier</li> <li>TTA</li> </ul> <p>thanks for reading! I hope this could help you :)</p>
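<p>A minimal sketch (assumed column names, not the author's code) of the meta-classifier stage described above: aggregate per-image out-of-fold predictions into per-breast statistics, attach the chosen meta-features, and fit CatBoost with a stratified K-fold.</p> <pre><code>import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold


def train_meta_classifier(oof):
    """oof: per-image rows with columns pred, cancer, age, site_id, laterality, patient_id."""
    feats = (oof.groupby(["patient_id", "laterality"])
                .agg(pred_mean=("pred", "mean"), pred_std=("pred", "std"),
                     pred_min=("pred", "min"), pred_max=("pred", "max"),
                     age=("age", "first"), site_id=("site_id", "first"),
                     cancer=("cancer", "max"))
                .reset_index())
    X = feats[["pred_mean", "pred_std", "pred_min", "pred_max", "age", "site_id"]]
    y = feats["cancer"]
    models = []
    for tr, va in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        model = CatBoostClassifier(verbose=0)
        model.fit(X.iloc[tr], y.iloc[tr], eval_set=(X.iloc[va], y.iloc[va]))
        models.append(model)
    return models
</code></pre>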
Learning Equality - Curriculum Recommendations
24th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>No shakeup! Congrats to all the winners! I've learned a lot from this competition. Can't wait to see the winner solutions!!! For me, this is my first solo silver medal.</p> <p>I've published my inference code <a href="https://www.kaggle.com/code/xfffrank/lb-0-693-lecr-inference-two-stages-ensemble" target="_blank">here</a>.</p> <hr> <p>The image below shows a summary of my approach.</p> <p><strong>TLDR</strong>: An ensemble of 3 retrievers using max_length of 64 + top10 neighbors + 6 rerankers using mixed max_length. The reranker ensemble strategy is just a simple average.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1429419%2F29bf4a65cff09c08c709ece87eeb1c7d%2Fapproach%20summary.jpg?generation=1678838705022493&amp;alt=media" alt="Solution summary"></p> <ul> <li>Retriever candidates:</li> </ul> <table> <thead> <tr> <th>Model</th> <th>Batch size</th> <th>Validation recall@10</th> </tr> </thead> <tbody> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</td> <td>640</td> <td>0.79869</td> </tr> <tr> <td>sentence-transformers/all-distilroberta-v1</td> <td>1280</td> <td>0.78176</td> </tr> <tr> <td>xlm-roberta-base</td> <td>512</td> <td>0.77429</td> </tr> </tbody> </table> <ul> <li>Reranker candidates:</li> </ul> <table> <thead> <tr> <th>Model</th> <th>max length</th> <th>validation F2 score</th> </tr> </thead> <tbody> <tr> <td>MoritzLaurer/mDeBERTa-v3-base-mnli-xnli (with further pretraining)</td> <td>128</td> <td>0.6567</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (with further pretraining)</td> <td>256</td> <td>0.6448</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (with further pretraining)</td> <td>128</td> <td>0.6376</td> </tr> <tr> <td>MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7</td> <td>128</td> <td>0.6361</td> </tr> <tr> <td>xlm-roberta-large (with further pretraining)</td> <td>128</td> <td>0.6581</td> </tr> <tr> <td>timpal0l/mdeberta-v3-base-squad2</td> <td>256</td> <td>0.6269</td> </tr> <tr> <td>Ensemble</td> <td>/</td> <td>0.6882 (public LB: 0.660; private LB: 0.693)</td> </tr> </tbody> </table> <h3>CV setup</h3> <p>Basically I followed what was proposed <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/376873" target="_blank">here</a>. </p> <ol> <li>Split the topics which are not from "source" into 4 folds. The folds were stratified by "channels" because the hosts mentioned that the test topics could come from seen channels.</li> <li>Put the topics from fold 0 into the validation set.</li> <li>Put all the topics from the "source" category, and the topics not from fold 0, into the training set.</li> <li>Use the correlation file to get all the matches of the training topics, and use them as positive pairs for sentence transformers.</li> </ol> <h3>Retriever training (Stage #1)</h3> <p>Use <strong>separate encoders</strong> for "topic" and "content" because intuitively their distributions are different. 
It turned out that it improved the validation recall by 2% or more compared with using one unified encoder. </p> <p>My best retriever was trained with "paraphrase-multilingual-mpnet-base-v2"; it achieved a recall@10 of <strong>~0.80</strong>.</p> <p>One thing to note is that using longer sequences for training did not give me benefits, so I used 64 in all models. Meanwhile, a large batch size is very important in this stage. It is worth sacrificing the information in longer sequences.</p> <p>Contrastive loss was much worse than <code>MultipleNegativesRankingLoss</code>. I assume I didn't use it in the right way.</p> <h3>Retrieve (Stage #2 prep)</h3> <p>This is the tricky part. It is difficult to retrieve more possible candidates while introducing less noise at the same time. Naïvely retrieving more to increase recall will only lead to much longer training time and no better performance.</p> <p>I used 3 retrievers in my ensemble. I retrieved the top10 candidates with each one and took the union set (see the sketch at the end of this write-up). Altogether I got recall = <strong>0.853</strong> on the validation set.</p> <h3>Reranker training (Stage #2)</h3> <ul> <li><p>Here I used the same train/val splits for topics as in stage #1.</p></li> <li><p>I found that pretraining can boost the performance of the reranker. Here pretraining means retriever training using a unified encoder; before training the reranker, I load the pretrained checkpoint. This allows the model to gain some domain knowledge, but it could lead to data leakage to some extent. In the final ensemble, I mixed rerankers with and without the pretraining.</p></li> <li><p>In order to make the model converge faster and reduce training time, I used a customised balancing sampler, so that during each epoch the numbers of positive and negative pairs are the same.</p></li> </ul> <p><strong>Final thoughts</strong><br> It looks like my pipeline is close to the <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/394838" target="_blank">3rd solution</a> except that they used top100 neighbours and I only used top10 instead. I would say this is a rather simple pipeline. Their success indicates that big models and rich resources are able to create a simple yet very performant model!</p>
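<p>A minimal sketch (assumed usage, not the author's exact code) of the candidate-union step described in the retrieve stage: each retriever encodes topics and contents, its top-10 most similar contents per topic are collected, and the candidate sets are unioned. For brevity a single encoder per retriever is shown, whereas the write-up trains separate topic and content encoders; the model names would be the fine-tuned checkpoints.</p> <pre><code>import numpy as np
from sentence_transformers import SentenceTransformer


def union_candidates(topic_texts, content_texts, model_names, top_k=10):
    candidates = [set() for _ in topic_texts]
    for name in model_names:  # e.g. the three fine-tuned retriever checkpoints
        model = SentenceTransformer(name)
        t_emb = model.encode(topic_texts, normalize_embeddings=True)
        c_emb = model.encode(content_texts, normalize_embeddings=True)
        sims = t_emb @ c_emb.T                       # cosine similarity (embeddings are normalized)
        top = np.argsort(-sims, axis=1)[:, :top_k]   # top-k content indices per topic
        for i, idxs in enumerate(top):
            candidates[i].update(idxs.tolist())
    return candidates
</code></pre>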
Learning Equality - Curriculum Recommendations
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary</h1> <ul> <li>Retriever and reranker</li> <li>Transformer-based model for retriever</li> <li>GBDT-based model for reranker</li> <li>Inference code <a href="https://www.kaggle.com/code/iiyamaiiyama/llecr-ens8910-125128-149154-155157" target="_blank">https://www.kaggle.com/code/iiyamaiiyama/llecr-ens8910-125128-149154-155157</a></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Fa101ba6640b7753f57c8d362e540c16a%2Flecr-reranker.png?generation=1678838816684059&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2304617%2Fe6c650f31f8520c19cb6e34b825d6ded%2Flecr-reranker2.png?generation=1678838830446492&amp;alt=media" alt=""></p> <h1>CV strategy</h1> <p>Simple 5fold random split with topic-id.<br> For stage 1 I used all data, including "source".<br> For stage 2, I used the same fold split as for stage 1, and created several models with and without "source" to ensure diversity.<br> CV and LB were well correlated.</p> <p>I trained a model based on channel GroupKFold and selected it for one of my final submissions, but a simple random kfold gave slightly better results on CV and LB.</p> <h1>Retriever(stage1)</h1> <h2>Text</h2> <p>The training text was created as follows (see the sketch at the end of this write-up):</p> <ul> <li>Topics <br> Recursively traversed to the root node and added titles. Finally, the topic description was added. For example, (root title + parent1 title + … + topic title + topic description). </li> <li>Contents <br> The title and description were concatenated. The "text" column was discarded.</li> </ul> <p>(Per-model scores are listed in the appendix table at the end of this post.)</p> <h2>Model</h2> <p>Each topic and its correlated contents were grouped together as one class. <br> The model was trained with ArcFace. Each model produces 768-dimensional embeddings. <br> Trained for 30 or 60 epochs, which took about 5 hours per fold. 
<br> The margin was gradually increased from 0.2 to 0.6 during training.<br> The following models were used for the final submission.</p> <ul> <li><a href="https://huggingface.co/sentence-transformers/LaBSE" target="_blank">sentence-transformers/LaBSE</a></li> <li><a href="https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2" target="_blank">sentence-transformers/paraphrase-multilingual-mpnet-base-v2</a></li> <li><a href="https://huggingface.co/facebook/xlm-v-base" target="_blank">facebook/xlm-v-base</a></li> <li><a href="https://huggingface.co/xlm-roberta-base" target="_blank">xlm-roberta-base</a></li> </ul> <h2>KNN</h2> <p>For each topic, find the 50 nearest neighbor contents; these pairs are passed to the reranker (stage2).</p> <h1>Reranker(stage2)</h1> <p>When I submitted the stage1 model only, I couldn't reach 0.6 on the public LB. I think this is because the stage1 model doesn't have any topic-tree structure information. So I added many tree-based features for the stage2 model.</p> <h2>Model</h2> <p>CatBoost and XGBoost were used for the reranker.<br> Features were created for each pair, then the GBDT model predicts the probability that the pair is correlated or not.</p> <h2>Example of features</h2> <ul> <li>Embeddings distance</li> <li>TF-IDF distance</li> <li>Whether the topic's siblings are correlated with the target content</li> <li>How many times the content was correlated in the channel</li> </ul> <h2>Ensemble</h2> <ul> <li>stage1 <br> Concatenate embeddings then find KNN</li> <li>stage2 <br> Average GBDT predictions, then select pairs above the threshold as positive predictions.</li> </ul> <h1>Post-processing</h1> <ul> <li>After stage2, if there were no predictions for a topic, the content with the highest predicted score was added.</li> <li>If two contents were always correlated together in "correlations.csv" and one of them appeared in the prediction, the other one was added.</li> <li>Search for the best threshold for each channel<br> For channels not included in the training data, a fixed threshold was used (based on CV across all channels).</li> </ul> <h1>Did not work</h1> <ul> <li>"text" information of contents <br> I could not find a good way to use the "text" column.</li> <li>Transformer-based reranker <br> They were very prone to overfitting. </li> </ul> <h1>Appendix: my models</h1> <table> <thead> <tr> <th>Model</th> <th>max positive score@50</th> <th>CV F2@stage1</th> <th>CV F2@stage2</th> <th>public LB@stage2</th> </tr> </thead> <tbody> <tr> <td>sentence-transformers/LaBSE</td> <td>0.8887</td> <td>0.5462</td> <td>0.6727</td> <td>0.676</td> </tr> <tr> <td>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</td> <td>0.8891</td> <td>0.5429</td> <td>0.6698</td> <td>0.678</td> </tr> <tr> <td>facebook/xlm-v-base</td> <td>0.8869</td> <td>0.5320</td> <td>0.6690</td> <td>0.671</td> </tr> <tr> <td>xlm-roberta-base</td> <td>0.8832</td> <td>0.5388</td> <td>0.6666</td> <td>0.676</td> </tr> <tr> <td>Naive ensemble above four</td> <td>0.9336</td> <td>-</td> <td>0.6916</td> <td>(I didn't sub this)</td> </tr> <tr> <td>My published submission</td> <td>-</td> <td>-</td> <td>0.7152</td> <td>0.707</td> </tr> </tbody> </table>
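<p>A minimal sketch (assumed data layout, not the author's code) of the topic-text construction described in the retriever section: walk from the topic up to the root, concatenate the titles root-first, then append the topic description.</p> <pre><code>import pandas as pd


def build_topic_text(topic_id, topics):
    """topics: the topics.csv DataFrame indexed by id, with 'title', 'description' and 'parent' columns."""
    titles = []
    node = topic_id
    while isinstance(node, str) and node in topics.index:
        titles.append(str(topics.loc[node, "title"]))
        node = topics.loc[node, "parent"]  # NaN at the root ends the walk
    titles.reverse()                       # root title first, topic title last
    description = str(topics.loc[topic_id, "description"])
    return " ".join(titles) + " " + description
</code></pre>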
Learning Equality - Curriculum Recommendations
31st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, many thanks to the organizing team for the competition! I am just starting to focus on NLP domain and this competition was very educational for me, it also gave me my first solo silver 😀. It included a recommender-system problem and allowed me to practice text-embeddings. But I have reached my limits in creativity towards the end of the competition and tried unnecessary, overkill stuff. I will try to summarize my solution briefly. You can see the whole source-code from my <a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations" target="_blank">GitHub here</a>. You can also see the inference code of my submission from <a href="https://www.kaggle.com/code/nlztrk/lecr-st-cross-encoders-cv-0-6461-50-pp" target="_blank">here</a>.</p> <h2>Solution Summary</h2> <p><img src="https://raw.githubusercontent.com/UKPLab/sentence-transformers/master/docs/img/InformationRetrieval.png" alt="architecture"></p> <p>I used <a href="https://github.com/UKPLab/sentence-transformers" target="_blank">sentence-transformers</a> library and the models from <a href="https://huggingface.co/" target="_blank">HuggingFace</a>. I tried to implement the shared architecture <a href="https://www.sbert.net/examples/applications/retrieve_rerank/README.html" target="_blank">here</a>.</p> <p>The pipeline consists of:</p> <ul> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/0.%20Generating%20Splits.ipynb" target="_blank">Splitting the Data as Train/Val</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/utils/unsupervised_utils.py#L154" target="_blank">Text Processing</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/1.%20Unsupervised%20Training.py" target="_blank">Training Sentence-Transformer (Stage 1)</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/2.%20Unsupervised%20Sampling.ipynb" target="_blank">Retrieve with kNN using Stage 1 Embeddings</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/3.%20Supervised%20Training.py" target="_blank">Training Cross-Encoder (Stage 2)</a></li> <li><a href="https://github.com/nlztrk/Learning-Equality-Curriculum-Recommendations/blob/main/4.%20Inference.ipynb" target="_blank">Inference</a></li> </ul> <h3>Splitting the Data as Train/Val</h3> <p>I've seen a lot of different approaches on the forum. I also wanted to use the imbalance in language distribution in my approach. I set all the data coming from <strong>source</strong> as <strong>train</strong>. 
For the remaining, I used:</p> <ul> <li><strong>CV Scheme:</strong> Grouped Stratified K-Fold</li> <li><strong>Folds:</strong> 5 (Used only the first)</li> <li><strong>Group:</strong> Topic ID</li> <li><strong>Stratifier Label:</strong> Language</li> </ul> <h3>Text Processing</h3> <ul> <li>Created topic tree</li> <li>Created special tokens for each value <strong>language</strong> and <strong>content kind</strong> can take.</li> <li>Created identifier separators for <strong>topic title</strong>, <strong>topic tree</strong>, <strong>topic description</strong>, <strong>content title</strong>, <strong>content description</strong> and <strong>content text</strong>.</li> </ul> <p>My final input for the model was like:</p> <ul> <li><strong>Topic:</strong> <code>[&lt;[language_en]&gt;] [&lt;[topic_title]&gt;] videos [&lt;[topic_tree]&gt;] maths g3 to g10 &gt; maths &gt; g6 &gt; 17. geometrical constructions &gt; perpendicular and perpendicular bisector &gt; videos [&lt;[topic_desc]&gt;] nan</code></li> <li><strong>Content:</strong> <code>[&lt;[language_en]&gt;] [&lt;[kind_exercise]&gt;] [&lt;[cntnt_title]&gt;] level 3: identify elements of simple machine(axle,wheel,pulley and inclined plane etc [&lt;[cntnt_desc]&gt;] nan [&lt;[cntnt_text]&gt;] nan</code></li> </ul> <h3>Training Sentence-Transformer (Stage 1)</h3> <ul> <li><strong>Base Model:</strong> <a href="https://huggingface.co/AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2" target="_blank">AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2</a></li> <li><strong>Sequence Length:</strong> 128</li> <li><strong>Epochs:</strong> 50</li> <li><strong>Batch Size:</strong> 128</li> <li><strong>Warm-Up Ratio:</strong> 0.03</li> </ul> <h3>Retrieve with kNN using Stage 1 Embeddings</h3> <p>I used <strong>kNN</strong> from <a href="https://rapids.ai/" target="_blank">RAPIDS</a> and get closest <strong>100</strong> content embedding for each topic embedding using <strong>cosine-similarity</strong>.</p> <h3>Training Cross-Encoder (Stage 2)</h3> <ul> <li><strong>Base Model:</strong> Trained model from Stage 1</li> <li><strong>Output:</strong> Sigmoid</li> <li><strong>Sequence Length:</strong> 128</li> <li><strong>Epochs:</strong> 15</li> <li><strong>Batch Size:</strong> 256</li> <li><strong>Warm-Up Ratio:</strong> 0.05</li> </ul> <h3>Inference</h3> <ul> <li>Ran all the steps above sequentially in a single script.</li> <li>Tuned classification threshold on the hold-out validation set to maximize F2-Score.</li> <li>Imputed empty topic rows with the highest scoring content IDs.</li> </ul> <h3>Didn't Work &amp; Improve</h3> <ul> <li>Language specific kNN</li> <li>Smaller models</li> <li>Lower sequence length</li> <li>Lower batch-size</li> <li>Union submission blending</li> </ul>
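<p>As a small illustration of the threshold-tuning step above, here is a sketch that picks the probability cut-off maximizing a per-topic F2 on the hold-out candidates. The column names (<code>prob</code>, <code>label</code>) and the <code>valid_candidates</code> frame are illustrative, and for simplicity the ground truth is restricted to the retrieved candidates:</p> <pre><code>import numpy as np

def mean_f2(cand_df, thr):
    # cand_df: pandas DataFrame with one row per (topic_id, content_id) candidate,
    # a 'prob' column (cross-encoder output) and a 'label' column (1 if correlated)
    f2s = []
    for _, g in cand_df.groupby("topic_id"):
        pred = set(g.loc[g["prob"] >= thr, "content_id"])
        true = set(g.loc[g["label"] == 1, "content_id"])
        tp = len(pred & true)
        if tp == 0:
            f2s.append(0.0)
            continue
        precision, recall = tp / len(pred), tp / len(true)
        f2s.append(5 * precision * recall / (4 * precision + recall))
    return float(np.mean(f2s))

grid = np.arange(0.05, 0.95, 0.01)
best_thr = max(grid, key=lambda t: mean_f2(valid_candidates, t))  # valid_candidates = hold-out pairs
</code></pre>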
Learning Equality - Curriculum Recommendations
28th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this interesting competition! I joined one month before the competition ended and am happy that a simple and direct approach got me to 28th place.</p> <h1>Stage 1 retriever: Biencoder</h1> <p>Teach model to embed related topics and contents closely together, using the following common setup:</p> <ul> <li>Embed topic and content using same transformer backbone and mean pooling</li> <li>Compute cosine similarity between all examples in a batch</li> <li>Multiple in-batch negative ranking loss with temperature 0.05</li> </ul> <p>The following modifications gave a significant boost to the model:</p> <ul> <li>Modify sampler such that all examples in a batch share the same language. This makes the negative examples much more meaningful. For inference, I only consider candidate contents with matching language.</li> <li>Compute cross-entropy loss row-wise and column-wise (with respect to topics and contents)</li> <li>Penalize scores for the correct class with a margin</li> </ul> <p>The best retriever used XLM-RoBERTa (large) as backbone and was trained for 7 epochs. On my CV, it achieves a recall@50 of 91%.</p> <h1>Stage 2: Crossencoder</h1> <p>For every topic generate 50 candidates using the biencoder. The crossencoder feeds a joint representation of topic + content through a transformer and does binary prediction whether they match or not. The stage 2 model was trained using the out-of-fold predictions of the stage 1 model as input.</p> <p>The only things that worked for me in improving model performance were oversampling the positive class and using differential learning rates.</p> <p>My best stage 2 model was a multilingual BERT trained for 12 epochs achieving 63.6% on my CV (64.3% and 68.3% on public and private LB respectively).</p> <p>Looking at its performance and predictions, I realized that the crossencoder was having a hard time predicting a match from the text representations alone and experimented with GBDT as stage 2 model, but was not able to come up with a better-performing model.</p> <h1>Input representation</h1> <p>Could not find a input representation that worked better than simple concatenation.</p> <ul> <li>Topic representation: Title. Parent title. … Root title. Topic description.</li> <li>Content representation: Title. Description. Text.</li> </ul> <p>Sequence length was 128 tokens for both models.</p> <h1>Cross validation</h1> <p>I used 5 folds split on topics for stage 1, and 3 folds split on non-source topics for stage 2. I had good correlation between CV and LB using this setup. Splitting on channels instead led to unstable CV for me. 
Probing the public LB revealed that in fact there was a great amount (over 40%) of near-duplicate topics, so I settled in favor of a simple CV setup that gave me good correlation with LB.</p> <p>My evaluation metrics for stage 1 were average precision and recall@N, and directly the competition metric for stage 2.</p> <h1>Post processing</h1> <ul> <li>Topics with no predictions get assigned the nearest neighbor according to biencoder</li> <li>If a new topic shared the same title, parent title, and grandparent title as a seen topic, add the contents of the seen topic to the predictions (tiny boost).</li> </ul>
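<p>A minimal sketch of the stage 1 training objective described above: in-batch negatives with temperature 0.05, cross-entropy computed both row-wise (topic to content) and column-wise (content to topic), and a margin that penalizes the score of the correct pair. The margin value here is an assumption, and batches are expected to contain a single language as per the sampler described above:</p> <pre><code>import torch
import torch.nn.functional as F

def symmetric_infonce(topic_emb, content_emb, temperature=0.05, margin=0.1):
    # topic_emb, content_emb: (B, D) embeddings where row i of each is a matching pair
    t = F.normalize(topic_emb, dim=-1)
    c = F.normalize(content_emb, dim=-1)
    sim = t @ c.T                                                    # (B, B) cosine similarities
    sim = sim - margin * torch.eye(sim.size(0), device=sim.device)   # penalize the positive pair
    sim = sim / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    # row-wise and column-wise cross-entropy, averaged
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.T, labels))
</code></pre>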
Learning Equality - Curriculum Recommendations
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I had taken a long break from Kaggle and came back with this competition because I thought it was very relevant to the app I am building, namely Epicurus. This break made me miss Kaggle and kept me motivated in this competition.</p> <p>My actual solution and efficiency solution are very similar. So I will describe both of them at the same time. I hope I can refactor my code and share it on Github soon.</p> <p><strong>Pipeline:</strong><br> Candidate Selection (Retriever methods) -&gt; Feature Engineering -&gt; Lightgbm -&gt; Postprocessing</p> <p><strong>Validation Scheme:</strong></p> <ul> <li>1 fold validation</li> <li>All source topics and a random 67% of the other topics are selected for the training set. The rest are validation topics.</li> <li>The contents which only match with validation topics are excluded from the training set.<br> For evaluation, validation topics are matched with all the contents and the competition metric is calculated.</li> <li>While training the lightgbm model on the candidates, group 4-fold on topic_id is used on the validation set. Evaluation is done on the whole validation set afterwards.</li> </ul> <p>At the end of the competition, I had a 0.764 validation score and 0.727 LB. While it is a big gap, improvements in my validation score were almost always correlated with LB. And I got my validation score as my Private LB score, which I didn't expect.</p> <p>Edit: Efficiency model got 0.718 validation, 0.688 Public LB, 0.740 Private LB and around 20 minutes CPU run-time.</p> <p><strong>Topic/Content Representation</strong></p> <p>Each topic is represented as a text using its title, its description and its ancestor titles up to 3 parents above in the tree. Example: </p> <p>'Triangles and polygons @ Space, shape and measurement @ Form 1 @ Malawi Mathematics Syllabus | Learning outcomes: students must be able to solve problems involving angles, triangles and polygons including: types of triangles, calculate the interior and exterior angles of a triangle, different types of polygons, interior angles and sides of a convex polygon, the size and exterior angle of any convex polygon.'</p> <p>Each content is represented as a text using its title, its kind and its description (its text if it doesn’t have a description). Example:</p> <p>'Compare multi-digit numbers | exercise | Use your place value skills to practice comparing whole numbers.'</p> <p><strong>Candidate Selection</strong></p> <p><strong>TFIDF</strong><br> Char 4gram TFIDF sparse vectors are created for each language and matched with sparse_dot_topn, which is a package I co-authored (<a href="https://github.com/ing-bank/sparse_dot_topn" target="_blank">https://github.com/ing-bank/sparse_dot_topn</a>). It is very fast and memory efficient. For each topic, the top 20 matches above 1% cosine similarity are retrieved.</p> <p><strong>Transformer Models</strong><br> I used paraphrase-multilingual-MiniLM-L12-v2 for the efficiency track and an ensemble of bert-base-multilingual-uncased, paraphrase-multilingual-mpnet-base-v2 (it is actually an xlm-roberta-base) and xlm-roberta-large for the actual competition. 
</p> <ul> <li>Sequence length: 64. But only the first half of the output is mean pooled for the representation vector. Last half is only fed for context. This worked the best for me.</li> <li>Arcface training: Training contents are used as classes. Therefore topics have multiple classes and l1-normalized target vectors. The margin starts with 0.1 and increases linearly to 0.5 at the end of 22 epochs. First 2 and last 2 epochs have significantly lower LR. Arcface class centers are initialized with content vectors extracted from pretrained models.</li> <li>Ensemble method: Concatenation after l2 normalization</li> </ul> <p>Edit: Models are re-trained with whole data for submission at the end.</p> <p>Top 20 matches within the same language contents are retrieved.</p> <p>In addition, for each topic, its closest train set topic is found and its content matches are retrieved as second degree matches.</p> <p><strong>Matches from Same Title Topics</strong></p> <p>For each topic, train set topics with the same title are found and their matched contents are retrieved.</p> <p><strong>Matches from Same Representation Text Topics</strong></p> <p>For each topic, train set topics with the same representation text are found and their matched contents are retrieved.</p> <p><strong>Matches from Same Parent Topics</strong></p> <p>For each topic, train set topics with the same parent are found and their matched contents are retrieved.</p> <p>All retrieved topic-content pairs are outer joined.</p> <p><strong>Feature Engineering</strong></p> <ul> <li>tfidf match score</li> <li>tfidf match score max by topic id</li> <li>tfidf match score min by topic id</li> <li>vector cosine distance</li> <li>vector cosine distance max by topic id</li> <li>vector cosine distance min by topic id</li> <li>topic title length</li> <li>topic description length</li> <li>content title length</li> <li>content description length</li> <li>content text length</li> <li>content same title match count</li> <li>content same title match count mean over topic id</li> <li>content same representation text match count</li> <li>content same representation text match count mean over topic id</li> <li>content same parent match count</li> <li>content same parent match count mean over topic id</li> <li>topic language</li> <li>topic category</li> <li>topic level</li> <li>content kind</li> <li>same chapter (number extracted from the text)</li> <li>starts same</li> <li>is content train</li> <li>content max train score</li> <li>topic max train score</li> <li>is content second degree match</li> </ul> <p><strong>Lightgbm Model</strong></p> <ul> <li>Hit or miss classification problem</li> <li>Overweight hit (minority) class</li> <li>Monotonic constraint and 2x feature contribution on most important feature: vector cosine distance</li> <li>2 diverse lightgbms: Excluded features which will potentially have different distribution on real test set in one of the models, vector cosine distance min by topic id. Also used slightly different parameters and kfold seed.</li> </ul> <p><strong>Postprocess</strong></p> <p>Postprocessing was very important. Using relative probabilities (gaps with highest matches) and using different conditions for train and test set contents were the key. 
While matching train set contents was like a classification problem, matching test set contents was like an assignment problem.</p> <p>Topic-content pairs are included if they have one of the conditions below:</p> <ul> <li>Content has the best matching probability among other contents for the given topic.</li> <li>Content is among the train set contents and has above 5% probability and has less than 25% gap with the highest matching probability in the given topic.</li> <li>Content is among the test set contents and has less than 5% gap with the highest matching probability in the given topic.</li> <li>Content is among the test set contents and the topic is its best match and its total gap* is less than 55%.</li> </ul> <p><strong>Code</strong></p> <p>All training notebooks: <a href="https://github.com/aerdem4/curriculum-recommendations" target="_blank">https://github.com/aerdem4/curriculum-recommendations</a><br> My actual inference notebook (v15 selected): <a href="https://www.kaggle.com/code/aerdem4/lecr-ensemble-v03" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-ensemble-v03</a><br> My best Efficiency notebook: <a href="https://www.kaggle.com/code/aerdem4/lecr-efficiency-minilm" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-efficiency-minilm</a><br> An alternative Efficiency solution: <a href="https://www.kaggle.com/code/aerdem4/lecr-efficiency-nobert" target="_blank">https://www.kaggle.com/code/aerdem4/lecr-efficiency-nobert</a></p>
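<p>As a rough illustration of the char 4-gram TFIDF candidate generation above, a minimal sketch using <code>awesome_cossim_topn</code> (the helper as named in older releases of sparse_dot_topn). In practice this would be run separately per language, and the vectorizer settings beyond the 4-gram / top-20 / 1% values are assumptions:</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sparse_dot_topn import awesome_cossim_topn   # API of older sparse_dot_topn releases

def tfidf_candidates(topic_texts, content_texts, ntop=20, min_sim=0.01):
    # char 4-gram TF-IDF, fit on both sides so the vocabularies line up;
    # TF-IDF rows are L2-normalized, so the dot product is cosine similarity
    vec = TfidfVectorizer(analyzer="char", ngram_range=(4, 4), min_df=2)
    vec.fit(list(topic_texts) + list(content_texts))
    T = vec.transform(topic_texts).tocsr()
    C = vec.transform(content_texts).tocsr()
    # sparse matrix product keeping only the top-n similarities per topic row
    sims = awesome_cossim_topn(T, C.T.tocsr(), ntop, lower_bound=min_sim)
    return sims   # CSR matrix: row = topic, non-zero columns = candidate contents
</code></pre>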
Learning Equality - Curriculum Recommendations
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Learning Equality - Curriculum Recommendations <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and the hosts, specifically <a href="https://www.kaggle.com/jamiealexandre" target="_blank">@jamiealexandre</a>, for hosting this competition. Unfortunately, we could only join after the NFL competition finished, so we tried to do our best within an eleven-day sprint, and we are very happy about the outcome. Improving every day can be quite thrilling - but at the same time we regret a bit not having more time for this competition as it was really interesting and fun participating here, and I believe our solution has not yet reached its ceiling. </p> <p>Due to time constraints, our solution is based only on an ensemble of embedding models with cosine similarity matching as well as additional post-processing.</p> <h3>Data processing</h3> <p>We only use the following features as text input for our models: language, category, title and description.<br> An "issue" with the data is that some topics and content items do not have a proper description, or have very inconclusive titles. So it is helpful to supplement them.</p> <p>For topics, we replace the title with an inverse track of the category tree titles, so basically adding the breadcrumbs; we do the same for the description. <br> For content, we concatenate the description and text fields as a single column.</p> <p>The benefit of this approach is that it will add information to those records that have incomplete data. With tokenization and truncation, the models will prioritize the original information, if available, and otherwise use the supplemented information. So for instance, if a content item has a description and text available, it will prioritize the description, but if the description is missing, it will use the text.</p> <h3>Validation setup and models</h3> <p>Due to lack of time, we focused on a single holdout. For this, we split out non-source categories. Contents matching these categories can then either already be part of training, or be completely unseen. We got a strong correlation between this setup and the public and private leaderboards in relative terms.</p> <p>For submissions, we always retrained models on full data. As we were running out of time, we could not always do both a validation and fullfit. So 3-4 days before the end, we only did blind fullfits, and blended them on submission. And for local validation and testing post-processing we relied on some earlier models. While not ideal, it was a reasonable approach given the time constraints.</p> <h3>Embedding models</h3> <p>We only use ArcFace models. For the input and training of the models we use two different schemas:</p> <p><strong>Topic-based labeling.</strong> Here, a single label is defined as a topic, and all content items that match this topic. So for instance: Label 1: Topic A, Content A, Content C, …</p> <p>This means that content items occur as many times as they match topics, and each topic is only a single sample. This approach is strongest on its own with the F2-based metric.</p> <p><strong>Content-based labeling.</strong> Here, a single label is defined as a content item, and a topic that matches to this content. 
So for instance: Label 1: Content A, Topic A</p> <p>This means that each topic-content pair is a single label. This method worked worse individually, but blended quite nicely with the topic-based approach.</p> <p>Our final blend contains 7 topic-based and 2 content-based models. Backbones are mostly xlm-roberta-base, xlm-roberta-large, paraphrase-multilingual-mpnet-base-v2, or deberta-v3-large.</p> <h3>Post-processing</h3> <p>We played a lot with different post-processing techniques as this is always something that is useful in metric-learning matching. We optimize the threshold automatically in the kernel to a certain average number of matches per topic. Also, we found that penalizing the cosine similarities based on additional information helps. First, we slightly reduce similarity probabilities for content that only matches a single topic in the whole training data. Second, we increase the probability of content items that are not available in training. We also always match new content to the top ranked topic, if above a certain threshold. Finally, we also additionally add new matches if we have less than five matches for a topic, but the additional probabilities are above a certain ratio to the higher ranked probabilities.</p> <h3>What did not work (due to time)</h3> <p>We spent 2-3 days trying to tune bi-encoder text models for second-stage, but could not get anything that improved our first-stage embedding models to be worth the additional runtime. So we decided to drop it and focus on first-stage only.</p> <p>Also, we spent some time on trying to tune LGB second-stage models. We were quite sure that they should be working well, and might also replace some of the manual post-processing. But while CV looked reasonable, LB was dropping a bit, and we were not too confident in the validation setup for it, so we dropped it. Seeing other solutions, it definitely seems to be helpful, and I believe it could push our solution higher. </p> <h3>Efficiency sub</h3> <p>We also have an efficiency sub scoring 0.72, ensembling two smaller models with shorter token lengths and running in 22 minutes. We use multiprocessing and ONNX. We probably lack a good 2nd stage LGB model to boost the score higher here. </p> <p>As always, cheers to my amazing team-mate <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>.</p> <p>All training and inference code can be found <a href="https://github.com/psinger/kaggle-curriculum-solution" target="_blank">online</a>.</p>
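<p>As an illustration of the automatic threshold selection mentioned above (picking the cosine-similarity cut-off that yields a certain average number of matches per topic), here is a minimal sketch; the target average and the search grid are placeholders, not values from the write-up:</p> <pre><code>import numpy as np

def tune_threshold(sim_matrix, target_avg_matches=4.0):
    # sim_matrix: (n_topics, n_contents) cosine similarities from the embedding models;
    # choose the threshold whose average number of retained contents per topic
    # is closest to the desired average
    best_thr, best_gap = None, float("inf")
    for thr in np.arange(0.30, 0.90, 0.01):
        avg = (sim_matrix >= thr).sum(axis=1).mean()
        gap = abs(avg - target_avg_matches)
        if gap < best_gap:
            best_thr, best_gap = thr, gap
    return best_thr
</code></pre>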
GoDaddy - Microbusiness Density Forecasting
35th Place Solution - GRU+XGB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Goal of this competition was to predict monthly <a href="https://www.kaggle.com/competitions/godaddy-microbusiness-density-forecasting/discussion/374293" target="_blank">domain registrations of small businesses</a> divided by the active population for all counties in the US. The biggest difference to other timeseries competitions is in my opinion that the public leaderboard tracked the score of the one month forecast (January) while our final score would be determined on 3-5 months in the future (March, April, May). My expectation for this competition was to deal with various macroeconomic explanatory variables and maybe even include some capital markets data. As it turned out this was not the case😄.</p> <h3>Things that did not work</h3> <ul> <li>I spent more time than I'm willing to admit on scrolling through files from statsamerica (<a href="http://www.statsamerica.org/downloads" target="_blank">www.statsamerica.org/downloads</a>) trying to find anything helpful to enhance the data we had at hand. Unfortunately I can't present any evidence for it's usefulness in this context. Anyway they have cool stuff (<a href="https://www.statsamerica.org/innovation" target="_blank">https://www.statsamerica.org/innovation</a>) over there.</li> <li>Capital markets indicators also had little effect. The decision to register a domain is surprisingly not driven by the VIX or FOMC dots (or more precise it does not help on county level forecasting).</li> </ul> <h3>Data Quality and Preprocessing</h3> <p>Instead, outlier detection was the name of the game. I didn't probe the leaderboard but nevertheless spent much time comparing small counties and their erratic timeseries. Additionally, we had to deal with a <a href="https://www.kaggle.com/competitions/godaddy-microbusiness-density-forecasting/discussion/375715" target="_blank">structural change</a> in the data in January 2021 and the <a href="https://www.kaggle.com/competitions/godaddy-microbusiness-density-forecasting/discussion/389215" target="_blank">underlying population numbers</a> were also shifting. 
</p> <p>Therefore the target variable was adjusted for the latest census data and I removed suspect jumps for all counties in this month if they exceeded a threshold of +-7%:</p> <pre><code>import numpy as np
from tqdm import tqdm

# NOTE: the literals in this snippet (clip bounds, loop range, column names) were
# lost in extraction; they are reconstructed from the +-7% threshold described above.
for o in tqdm(raw.cfips.unique()):
    indices = (raw['cfips'] == o)
    tmp = raw.loc[indices].copy().reset_index(drop=True)
    var = tmp.microbusiness_density.values.copy()
    var_pct = tmp.microbusiness_density.pct_change().clip(-0.07, 0.07)
    # rebuild the series backwards from the latest value, flattening suspect jumps
    # (the original code also special-cased the index of the January 2021 break)
    for j in range(len(var) - 1, 0, -1):
        if (var_pct[j] == 0.07) or (-var_pct[j] == 0.07):
            var[j - 1] = var[j]
        else:
            var[j - 1] = var[j] / (1 + var_pct[j])
    raw.loc[indices, 'microbusiness_density'] = var
</code></pre> <p>Afterwards the timeseries was further denoised with an autoencoder:</p> <pre><code>import tensorflow as tf
from tensorflow.keras.optimizers import Adam

# NOTE: the layer sizes, dropout rates and activations were lost in extraction;
# the values below are placeholders that keep the original structure.
def make_denoiser(n_features, noise=0.05):
    i = tf.keras.Input(shape=(n_features,))
    encoded = tf.keras.layers.BatchNormalization()(i)
    encoded = tf.keras.layers.GaussianNoise(noise)(encoded)
    encoded = tf.keras.layers.Dense(64, activation='relu')(encoded)
    decoded = tf.keras.layers.Dropout(0.2)(encoded)
    decoded = tf.keras.layers.Dense(n_features)(decoded)
    x = tf.keras.layers.Dense(32, activation='relu')(decoded)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Dense(1, activation='linear')(x)
    encoder = tf.keras.Model(inputs=i, outputs=x)
    loss = tf.keras.losses.MeanSquaredError()
    encoder.compile(optimizer=Adam(), loss=loss)
    return encoder
</code></pre> <p>I'm unsure if this could add any kind of leakage as it was applied before/outside of the CV fold structure. But it stabilized my CVs and had no negative impact on the public leaderboard score. Anyway I kept using it only for the GRU model and trained XGB on the 'raw' data.</p> <h3>Models and Target</h3> <p>Only counties with more than 150 microbusinesses were considered as input for the model. For counties below this threshold the last value was used.<br> Building on top of <a href="https://www.kaggle.com/competitions/godaddy-microbusiness-density-forecasting/discussion/381038#2114357" target="_blank">Chris' proposal</a>, I used a GRU with a 24-month sequence of np.log1p(microbusiness density) as input, combining 3 folds (GroupKFold grouped by date) and forecasting 1, 3, 4, 5 months into the future.<br> As a second model I used XGB derived from what <a href="https://www.kaggle.com/code/titericz/better-xgb-baseline" target="_blank">GIBA shared</a>, defining the 1, 3, 4, 5 months growth difference as the target. Most of the county level features were included here but I think they added little explanatory value. For postprocessing I clipped the XGB predictions of the percentage changes with the corresponding 10% and 90% quantile values observed after preprocessing. Both models are weighted with 50% in the final submission.</p> <h3>Second Submission</h3> <p>This was a simple linear model inspired by the success of <a href="https://www.kaggle.com/code/vitalykudelya/21-lines-of-code" target="_blank">21 lines of code</a>. It calculated the average monthly growth per county on the 21 months after the structural break in January 2021. The mean was clipped at -0.5% and +1%, and for counties with less than 150 active domains this value was scaled by 0.3 (all of this was expert judgement based on very crude statistics: average growth and the 5%, 25%, 75%, 95% quantiles of growth). For counties with stale growth in November and December the last value was forecasted perpetually. This scored 4.0731 which would have been place 175 (still a silver medal with 10% of the effort). 🙀</p>
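<p>For reference, a simplified sketch of the second submission's clipped-growth forecast described above; the column usage follows the competition data, the horizon handling is illustrative, and the special treatment of stale counties is omitted:</p> <pre><code>import numpy as np

def clipped_growth_forecast(train, horizons=(3, 4, 5), min_active=150, small_scale=0.3):
    # average monthly growth per county after the Jan-2021 break, clipped to
    # [-0.5%, +1%]; counties with few active domains get a dampened growth rate
    preds = {}
    for cfips, grp in train.groupby("cfips"):
        grp = grp.sort_values("first_day_of_month")
        recent = grp[grp["first_day_of_month"] >= "2021-01-01"]
        growth = float(np.clip(recent["microbusiness_density"].pct_change().mean(), -0.005, 0.01))
        if grp["active"].iloc[-1] < min_active:
            growth *= small_scale
        last = grp["microbusiness_density"].iloc[-1]
        preds[cfips] = {h: last * (1 + growth) ** h for h in horizons}
    return preds
</code></pre>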
1st and Future - Player Contact Detection
18th place solution : 2d-cnn / 1d-cnn / XGB / 1d-cnn
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank the hosts of this competition and my team: <a href="https://www.kaggle.com/chenlin1999" target="_blank">@chenlin1999</a> and <a href="https://www.kaggle.com/hanzhou0315" target="_blank">@hanzhou0315</a></p> <h1>Summary</h1> <p>Our solution is made of 4 stages :</p> <ol> <li><strong>2d-cnn</strong> : The model predicts for each player if the player is in contact as well as if the player is on the ground</li> <li><strong>1d-cnn</strong> : This stage is intended to smooth the prediction of 1. using the temporality</li> <li><strong>XGB</strong> : It is at this moment that we associate the contacts between players</li> <li><strong>1d-cnn</strong> : This stage is intended to smooth the prediction of 3. using the temporality</li> </ol> <h1>Validation methodology</h1> <p>We opt for a Stratified group 5-Fold cross-validation by <code>game_play</code> : this strategy seemed to be the most correlated with LB and the most obvious. We have a final solution that reaches a CV score : 0.77174 for a LB : 0.77004 and a public LB : 0.76219</p> <h1>Stage1 : 2d-cnn</h1> <p>At this stage it is very easy to overfit on the data so we only trained for 2 epochs. We used the timm models: efficientnetv2_rw_s and convnext_base_in22k for the final submission.</p> <p>The input is composed of 2 RGB images for the Endzone and the Sideline, then we concatenate the features to make a prediction. To add supervision to this model we used features created from the tabular data. Tt's a bit similar to <a href="https://www.kaggle.com/competitions/petfinder-pawpularity-score/discussion/301015" target="_blank">this</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6012866%2Fccac7fba20baef4c47053060a748ff75%2Fimage_2023-03-03_164244573.png?generation=1677879764555718&amp;alt=media" alt=""></p> <h1>Stage2 : 1d-cnn</h1> <p>It is a simple CNN with 5 layers and kernels of 3. To ensure temporal consistency we sorted by <code>["game_play","nfl_player_id","step"]</code></p> <h1>Stage3 : XGB</h1> <p>It's XGB like <a href="https://www.kaggle.com/code/columbia2131/nfl-player-contact-detection-simple-xgb-baseline" target="_blank">this</a> by <a href="https://www.kaggle.com/columbia2131" target="_blank">@columbia2131</a> and we added the features from the previous stage</p> <h1>Stage4 : 1d-cnn</h1> <p>It is a simple CNN with 5 layers and kernels of 3. To ensure temporal consistency we sorted by <code>["game_play","nfl_player_id_1","nfl_player_id_2","step"]</code></p> <h1>Final results</h1> <table> <thead> <tr> <th>Stage1</th> <th>Stage2</th> <th>Stage3</th> <th>Stage4</th> <th>CV</th> <th>public LB</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>✓</td> <td></td> <td>0.65</td> <td>0.645</td> <td>0.645</td> </tr> <tr> <td>✓</td> <td></td> <td>✓</td> <td></td> <td>0.718</td> <td>0.715</td> <td>0.716</td> </tr> <tr> <td>✓</td> <td></td> <td>✓</td> <td>✓</td> <td>0.731</td> <td>0.729</td> <td>0.725</td> </tr> <tr> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>0.771</td> <td>0.762</td> <td>0.770</td> </tr> </tbody> </table>
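<p>A minimal sketch of what the stage 2 / stage 4 temporal smoothers described above could look like: a 5-layer 1D CNN with kernel size 3 applied along the step axis after sorting by the keys mentioned above. The channel width, normalization and padding are assumptions, not the team's exact architecture:</p> <pre><code>import torch.nn as nn

class SmootherCNN1D(nn.Module):
    # 5 convolutional layers with kernel size 3; input channels hold the
    # upstream predictions/features per step, output is one contact logit per step
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(4):
            layers += [nn.Conv1d(c, hidden, kernel_size=3, padding=1),
                       nn.BatchNorm1d(hidden), nn.ReLU()]
            c = hidden
        layers += [nn.Conv1d(c, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (batch, in_channels, steps)
        return self.net(x).squeeze(1)
</code></pre>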
IceCube - Neutrinos in Deep Ice
5th place solution: combining GraphNet and Transformer with LSTM meta-model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We'd like to thank the organizers and Kaggle staff for putting together this enthralling challenge. As we share the scientific feast we've cooked up in our "Big Bento Box of Science," please accept our apologies for the slightly delayed write-up. The competition was very intense, with our team initially lagging over 400 spots away from the medal zone just two weeks before the end. Through hard work and persistence, we managed to improve our standing, ultimately finishing in 5th place and achieving our first gold medal.</p> <h2>Summary:</h2> <ul> <li>Two classification heads with 256 bins each for transformer, smart label smoothing -&gt; <strong>1.003</strong></li> <li>Increasing the number of features GraphNet uses in KNN graph builder, changing activation functions to LayerNorm, overall increasing model size, predicting <code>(x, y, z)</code> direction instead of azimuth and zenith directly -&gt; <strong>0.990</strong></li> <li>LSTM model selector which took the input data as well as predictions from Transformer and GraphNet, and returned prediction which of these models to use -&gt; <strong>0.974</strong></li> </ul> <h2>Transformer</h2> <p>We used standard encoder architecture with 8 layers, 512 model size and 8 attention heads. The input consisted of <code>(x, y, z, t, charge, auxiliary, is_core, rank)</code>. <code>is_core</code> was a 0/1 value indicating whether the sensor was part of a deep core or not. <code>rank</code> was computed the same way as in the <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-inference" target="_blank">following great notebook with LSTM</a>. For features like <code>(x, y, z, t, charge)</code> we used a few linear layers to preprocess them before passing them to the Transformer. While <code>(auxiliary, is_core, rank)</code> due to their discrete nature were embedded.</p> <p>We had two output heads for both azimuth and zenith. Each head predicted 256 values that represented bins. We used equally spaced bins and trained them using cross entropy with smoothed labels. The labels were smoothed so that neighboring bins also have some probability (not like in standard label smoothing where remaining probability is equally assigned to all classes). For azimuth this smoothing was wrapped around in circular fashion, so that predicting \(0\) and \(2 \pi \) was equivalent. Prediction was just an argmax over the bins. We experimented with a different number of bins, ranging from 16 to 256, and more bins resulted in better performance. However, the difference between 128 and 256 was not significant and we chose to use 256. </p> <p>We trained the model for 760k steps on TPU with batch size of 512 and sequence length of 256, so around 3 epochs (which is not much). We used a cosine schedule for LR with peak LR=3e-5 and repeated the cycle every 10k steps (looking back at it the cycle should probably be longer, but at the beginning we weren’t sure what to expect from the training). For the last 160k steps we changed peak LR to 3e-6. Whole training took around 26h. 
After 760k steps we could still see some slight improvements as the training progressed but decided to focus more on other ideas. This led to a model with a score of <strong>1.003</strong> on public LB.</p> <h2>GraphNet</h2> <p>As a starting point we’ve used <a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission" target="_blank">standard GraphNet model</a> with <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">additional input features from the early sharing prize notebook</a>.</p> <p>Model input was composed of <code>(x, y, z, t, charge, auxiliary, quantum_effeciency, wrong_charge_dom, ice_absorption, ice_scatter)</code> where <code>wrong_charge_dom</code> was a marking of one sensor (num. 1229) because our analysis showed there might be something wrong with it. The output of the model was a direction. </p> <p>To achieve the final LB score we've made a few changes to the model.</p> <p>First of all, after looking through the code of <code>DynEdge</code> we saw that the <code>features_subset</code> argument (latent feature subset for computing nearest neighbors in <code>DynEdge</code>) is applied not only on the first layer, but also on inputs to other layers. Intuitively it felt limiting, so we changed it so that <code>features_subset</code> for the non-first layers was variable. In the end we set it to 48 (bigger seems to work better). With this change our model achieved <strong>0.998</strong> on LB.</p> <p>The next big change was moving from <code>BatchNorm1d</code> to <code>LayerNorm</code>. This decision was guided by a big difference in MAE metrics we observed between training and validation. Even the standard model (with <code>BatchNorm1d</code>) got better results without switching to eval mode, so we decided to move to <code>LayerNorm</code>, which does not differ between training and evaluation.</p> <p>Last but not least, we experimented and tuned the whole training procedure. We used the AdaBelief optimizer. Unlike the Transformer, with the GraphNet model we took the approach of changing the learning rate in steps, starting from 1e-3 and reducing the learning rate by a factor of 2 every 200k steps. The total number of steps was 700k with a batch size of 512. It took around 3 days on an RTX 4090.</p> <p>Finally, with all changes and scaling of the model we were able to achieve a <strong>0.990</strong> score on LB. </p> <h2>Model stacking ensemble</h2> <p>In the last few days of the competition we had two models and no time to train them any further. We looked previously at ensembling them but results were not promising. We did some analysis and found that the predictions of the two models are sometimes very different. If we could pick the correct model perfectly the lower bound was around 0.810. </p> <p>Two days before the end of the competition we dumped the embeddings from our best Transformer (last common layer before two categorical heads) and GraphNet (layer before predicting direction) for the first 10 data batches. At first we fed both of those embeddings along with model predictions into XGBoost. It was able to achieve 57% accuracy which resulted in a <strong>0.976</strong> score on LB (and 0.978 on our internal validation).</p> <p>Later on we coded up a simple 3 layer Bi-directional LSTM as a model selector. 
Our thinking was that having a different architecture (not Transformer or GraphNet) as a selector will be beneficial in learning some different features from data resulting in better selections.</p> <p>LSTMs hidden states were initialized with concatenated Transformer and GraphNet embeddings and subset of pulses were given as input sequence*. We thought that training it on event data will help it make better decisions. We trained the model for around 9 epochs (keep in mind it’s only on the first 10 batches) which took about 8h. Doing this improved accuracy to 58% and resulted in a score of <strong>0.974</strong> which was our final result.</p> <p>(*) We also experimented with concatenating the embeddings to the input sequence or just before the last classification layer, but both of those approaches turned out to be worse than just initializing hidden states with embeddings.</p> <p>Overall architecture of our final submission looked like this:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/2230884/19025/model-idea.png" alt="model diagram"></p> <h2>Things that we tried</h2> <ul> <li>We trained two Transformer models on different subset of dataset and tried simple ensembling (adding logits/softmaxes and argmaxing) but this resulted only in marginal improvements of 0.002</li> <li>Training a separate categorical heads (like in case of transformer) on GraphNet embeddings and then ensemble both models by simply adding logits/softmaxes and picking argmax - this slightly reduced the error but was not spectacular</li> <li>Classification task for GraphNet didn’t work that great, neither have azimuth and zenith regression. Direction regression is what performed best.</li> <li>At the beginning of the competition we found a paper “A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory” and implemented a variant of a 3d convolutional network with so called hexagonal convolutions, which are supposed to be localized in the DOMs space. However, we couldn’t get it to work and get the mean angular error below 1.2. A likely reason is that the data in the competition was quite different from the data used in the paper. The latter had great resolution in the time dimension, because the pulses were collected every 2ns, while our data was much more sparse.</li> </ul> <h2>Inference code</h2> <p><a href="https://www.kaggle.com/grzego/inference-lstm-ensembler-graphnet-transformer" target="_blank">https://www.kaggle.com/grzego/inference-lstm-ensembler-graphnet-transformer</a></p>
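<p>To illustrate the wrapped label smoothing used for the azimuth head above, here is a minimal sketch that builds a soft target over the bins. The Gaussian shape and the width <code>sigma</code> are assumptions; the write-up only states that neighboring bins receive some probability and that the smoothing wraps around so that 0 and 2*pi coincide:</p> <pre><code>import math
import torch

def circular_smoothed_target(azimuth, n_bins=256, sigma=1.5):
    # soft label over azimuth bins: probability mass on the true bin and its
    # neighbours, wrapped circularly; use with soft-target cross-entropy
    two_pi = 2 * math.pi
    centers = (torch.arange(n_bins) + 0.5) * two_pi / n_bins
    diff = (centers - azimuth + math.pi) % two_pi - math.pi   # wrapped angular distance
    bins_away = diff * n_bins / two_pi
    weights = torch.exp(-0.5 * (bins_away / sigma) ** 2)
    return weights / weights.sum()
</code></pre>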
IceCube - Neutrinos in Deep Ice
14th place solution: transformer and GRU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We would like to thank the organizers and participants of this great competition! We learned a lot and had a lot of fun. Special thanks to <a href="https://www.kaggle.com/seungmoklee" target="_blank">ZhaounBooty</a> for his early sharing solution, <a href="https://www.kaggle.com/rsmits" target="_blank">Robin Smits</a> for sharing further improvements of the RNN idea, and <a href="https://www.kaggle.com/iafoss" target="_blank">Iafoss</a> for sharing the dataloader.</p> <p>Below is a (preliminary) description of the two models we trained.</p> <p>We preprocessed the whole training dataset and saved it as a memory-mapped file. We have written an iterable dataset that allowed very fast data loading.</p> <h3>Transformer model:</h3> <p><em>Private and public score 0.983</em></p> <ul> <li>Size: 4.9M params</li> <li>Batch: 2000 events</li> <li>Hidden size: 256</li> <li>Activation: SiLU</li> <li>Layernorm</li> <li>Num transformer layers: 3</li> <li>Num heads: 4</li> <li>Global aggregation: [sum, mean, max, std_dev]</li> </ul> <p>Training:</p> <ul> <li>LR: 0.002</li> <li>Patience: 1, multiplicative decay: 0.9</li> <li>Run for 77 epochs</li> <li>32 GPUs</li> <li>Warmup: 2 epochs</li> </ul> <h3>Bidirectional GRU model:</h3> <p><em>Private and public score 0.987</em></p> <ul> <li>Size: 1.7 M parameters</li> <li>Batch size: 2048</li> <li>Input shape: (96, 6)</li> <li>Number of layers: 3</li> <li>hidden_size: 160</li> <li>Hidden states of the last GRU unit are passed to a fully connected layer (size: 512)</li> </ul> <p>Training</p> <ul> <li>max LR 1e-3 with a warm up and cosine annealing ( OneCycleLR, pct_start: 0.001)</li> <li>AdamW with weight decay 1e-4</li> <li>Trained for 14 epochs on the full dataset on GTX 1080 GPU for 90 hours</li> </ul> <p>In both cases, the network output was a 3-dimensional unit vector. We then directly used the mean angular error as the loss.</p> <p>Interestingly, despite very limited computer resources, the GRU model scores quite well. Unfortunately, we didn't have time to scale it further on the GTX 1080.</p>
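<p>Since both networks output a 3-dimensional unit vector trained directly on the mean angular error, the loss can be written as in the following sketch; the clamping epsilon is an implementation detail added here for numerical stability, not taken from the write-up:</p> <pre><code>import torch
import torch.nn.functional as F

def angular_error_loss(pred, target, eps=1e-7):
    # pred: raw (B, 3) network output; target: (B, 3) true direction vectors
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    cos = (pred * target).sum(dim=-1).clamp(-1 + eps, 1 - eps)  # avoid NaN gradients at +-1
    return torch.acos(cos).mean()   # mean angular error in radians
</code></pre>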
IceCube - Neutrinos in Deep Ice
13th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A thank you to the host, and well done to all the participants. I'm frustrated that I couldn't stay in the gold tier, but I gained valuable learning experiences. I'd like to take this opportunity to express my gratitude. Although it's late, I will share my solution. My English isn't so good, so feel free to ask me if there is anything unclear.</p> <p>My solution is:</p> <ul> <li>Noise reduction from training data</li> <li>Customized GraphNet-based models</li> </ul> <p>I tried training LSTM and Transformers, but it didn't go well, so I abandoned them early, which led to my failure.</p> <p>I used an ensemble of eight GraphNet-based models for the final submission. A single graph model got Public 0.982 and Private 0.984 scores. Therefore, there may not have been much of an ensemble effect.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2391815%2Fc25aef115dd90773cb76ae7fc0ae296c%2Flatesub.png?generation=1682672834486253&amp;alt=media" alt="late sub"></p> <p><strong>Noise reduction from training data</strong><br> I considered that when an event had few observed signals, the model couldn't fit it well, which led to overfitting. So, I implemented the following cycle:<br> 1) Infer on the training data and exclude training data that is not predicted well. Specifically, exclude data with MAE &gt; 1.<br> 2) Train on the dataset, and if the validation score improves, use that model to infer the training data again.<br> 3) Construct the data set in the same manner as step 1, exclude the training data that cannot be fitted, and train again.</p> <p>The above process was repeated two or three times.</p> <p>This idea is based on my past experience with overfitting with noisy labels, as in the PANDA competition. Since this competition deals with physical phenomena, prediction is relatively easy given ideal observational data. Therefore, I considered that a model trained on noise-like data would not correctly predict difficult events in the test data either. In fact, the model trained after removing the noisy data improved the CV. The Public LB also improved as did CV, so I trusted this method.</p> <p><strong>Customized GraphNet-based models</strong><br> I created models based on the one provided by the host. I also created models that include time variables in the features used to build the KNN graph. I also changed the number of KNN neighbors (8 to 32) and the pooling layers (replacing pooling with attention). I tried other things as well, such as changing the activation function, but activation functions with zero gradient in the negative region did not seem to be a good choice. PReLU was the best in my case.</p> <p><strong>My Failure Points</strong></p> <ul> <li>Other information<br> Tried to include information on QE, etc., but could not pursue it very deeply, especially since the GNN did not seem to improve.</li> <li>Other approaches<br> I could not train the LSTM and Transformer well. 
It is possible that the inputs and settings were incorrect, but the biggest failure was to terminate this approach prematurely.</li> <li>post-process<br> Attempted post-processing in the classification task to deal with cases where the exact opposite direction was obtained, but it did not work. </li> <li>Interpolation<br> Since the accuracy was poor for events with few observed signals, I looked for ways to improve the accuracy by interpolating signals at intermediate times and intermediate locations, but this did not work.</li> <li>Imaging Idea<br> Tried to create an ensemble factor by predicting the position and orientation of the sensor as a video/imaging task, but it did not work because the number of events was too large (storage was disk full). I should have focused on something else.</li> </ul> <p><strong>Others</strong></p> <ul> <li><a href="https://www.kaggle.com/code/sinpcw/icecube-submit" target="_blank">inference notebook</a> (version 111 is best LB.)</li> </ul>
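<p>A sketch of one iteration of the noise-reduction cycle described above, assuming the true and predicted angles are available per <code>event_id</code> in pandas DataFrames; the <code>_pred</code> column names and the helper functions are illustrative, not the author's code:</p> <pre><code>import numpy as np

def angular_error(az_t, zen_t, az_p, zen_p):
    # great-circle angle between true and predicted directions, in radians
    cos = (np.sin(zen_t) * np.sin(zen_p) * np.cos(az_t - az_p)
           + np.cos(zen_t) * np.cos(zen_p))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def filter_train_events(meta_df, pred_df, threshold=1.0):
    # keep only events the current model already reconstructs reasonably well;
    # meta_df holds the true angles, pred_df the model's predictions per event_id
    df = meta_df.merge(pred_df, on="event_id", suffixes=("", "_pred"))
    err = angular_error(df["azimuth"], df["zenith"],
                        df["azimuth_pred"], df["zenith_pred"])
    return df.loc[err < threshold, meta_df.columns]
</code></pre>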
IceCube - Neutrinos in Deep Ice
9th Place Solution: GNNs Ensemble and MLP Stacking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we want to thank Kaggle and IceCube Collaboration for providing this spectacular competition!</p> <p>Our solution is based on MLP stack with shared weights of 6 DynEdge models + blend of 4 GNNs inside the segments by pulses number.</p> <h2>1. Model Zoo</h2> <p>We took <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383524" target="_blank">GraphNeT</a> model with DynEdge architecture as a baseline and modified it in different ways. Main variations and their nicknames:</p> <ul> <li>v1 <strong>Vanilla</strong>. Changed KNN value and included additional pooling schemes. Replaced Adam optimizer with <a href="https://arxiv.org/abs/1810.06801" target="_blank">QHAdam</a>.</li> <li>v2 <strong>Bigman</strong>. Added more layers and made them wider. Changed activation functions to <a href="https://arxiv.org/abs/1702.03118" target="_blank">SiLU</a> (they proved to be very powerful in various domains). Added Batch Normalization layers.</li> <li>v3 <strong>DynEmb</strong>. To account for ice heterogeneity, we integrated sensor embeddings into the model (embeddings are initialized by XYZ coordinates). This approach can be generalized and used with any type of Observatory that has sensors distributed in ice or water. We also added Weight Normalization and switched to FP16 training with new <a href="https://arxiv.org/abs/2302.06675" target="_blank">Lion</a> optimizer. For this model, we published a submission <a href="https://www.kaggle.com/code/ukrainskiydv/icecube-submission-dynemb-example/notebook" target="_blank">example</a> that gives top 20 LB in solo for 1 hour 20 minutes.</li> </ul> <h2>2. Training Protocol</h2> <p>Every model had longrun training from scratch with von Mises-Fisher Loss on events up to 600 pulses. Then a couple of additional epochs was trained with Euclidean Distance Loss and up to 200 pulses. As <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> noted below this loss works much better than original for second-stage training. The details on the optimizers choice and second-stage training strategy presented by <a href="https://www.kaggle.com/churkinnikita" target="_blank">@churkinnikita</a> in the comment section.</p> <h2>3. Ensembling</h2> <p>Finally, SWA was applied to the best checkpoints of each model for better robustness. As a result we had v1, v2, v3 models with performance &lt; 1.000 LB and v1, v2, v3 models with performance &lt; 0.990 LB. These 6 models were blended and stacked. Stack showed better performance ~ 0.977 LB in the very final version, <a href="https://www.kaggle.com/simakov" target="_blank">@simakov</a> added the details below.</p> <h2>4. Alternative Approach</h2> <p>Another part of our solution consists of several GNN models for different neutrino events. The dataset was segmented by ten uniformly bins on pulses number (that represents energy levels of neutrino) and 4 new models were trained on their compositions. 
Eventually, the models were blended segment by segment, <a href="https://www.kaggle.com/rusg77" target="_blank">@rusg77</a> added the details on this part of solution in the post <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402995" target="_blank">here</a>.</p> <h2>5. Results</h2> <p>Final model is a 60 / 40 blend of 6-stacked / 4-segmented GNNs and shows 0.975 LB (0.976 Private). We used a blend of stacked models to improve the overall stability of the ensemble and reduce the variance of individual models. We also hope this approach leads to better resilience of the model to possible noise in the data.</p> <p>For sure, Transformer-based models have great performance on the task, but a diversified ensemble of properly tuned GNNs can also show a keen score on the considered reconstruction problem.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F743648%2F829df4033d8353d91b74f0559259506c%2FModels%20Zoo%20RESIZED.png?generation=1682034404975611&amp;alt=media" alt=""></p>
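<p>A minimal sketch of the sensor-embedding idea from the DynEmb variant above, i.e. a learnable per-DOM embedding whose first dimensions are initialized with the XYZ coordinates; the embedding size and the zero-initialization of the remaining dimensions are assumptions:</p> <pre><code>import torch
import torch.nn as nn

class SensorEmbedding(nn.Module):
    # learnable embedding per sensor, initialized from the detector geometry so
    # that training starts from the plain (x, y, z) representation
    def __init__(self, sensor_xyz, emb_dim=16):       # emb_dim must be >= 3
        super().__init__()
        n_sensors = sensor_xyz.shape[0]
        self.emb = nn.Embedding(n_sensors, emb_dim)
        with torch.no_grad():
            self.emb.weight.zero_()
            self.emb.weight[:, :3] = torch.as_tensor(sensor_xyz, dtype=torch.float32)

    def forward(self, sensor_id):    # sensor_id: (N,) long tensor of DOM indices
        return self.emb(sensor_id)
</code></pre>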
IceCube - Neutrinos in Deep Ice
2nd place solution: Neutrino direction prediction with transformers
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary</h1> <ul> <li><strong>Transformer-based solution</strong></li> <li>Chunk-based data loading with caching and length matching</li> <li>Fourier encoder</li> <li>Relative spacetime interval bias</li> <li>von Mises-Fisher Loss + the competition metric</li> </ul> <p><strong>Our code is available at <a href="https://github.com/DrHB/icecube-2nd-place" target="_blank">this repo</a></strong>.</p> <h1>Introduction</h1> <p>While neutrinos are one of the most abundant particles in the universe and may bring information about violent astrophysical events, their fundamental properties make them difficult to detect and require a huge volume of material to interact with. The IceCube Neutrino Observatory is the first detector of its kind, which consists of a cubic kilometer of ice and is designed to search for nearly massless neutrinos. The rare interaction of neutrinos with matter may produce high-energy particles emitting Cherenkov radiation, which is recorded with photo-detectors immersed in ice. The objective of this challenge was a reconstruction of the neutrino trace direction based on the photon detections. <br> To begin with, I would really like to express my gratitude to the organizers and Kaggle team for making this competition possible. Reading the description of the competition I immediately realized that transformers may be a really good approach for it. Since I wanted to experiment with transformers in depth, I decided to join this competition. Meanwhile, my outstanding teammate <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> joined this competition <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381481#2117294" target="_blank">hoping to visit Antarctica</a>. </p> <h1>Details</h1> <p>Our solution is based on transformers, which appeared to be ideal for the considered task. Below we provide the key components of our approach.</p> <h2>Why transformers</h2> <p>Graph Neural Networks (GNNs) exhibit limitations in certain areas:<br> (1) GNNs function primarily on a local level, as updates are performed based on neighboring nodes. Meanwhile, these models are expected to predict global quantities, such as the direction of a track. Organizers have attempted to address this issue through dynamic edge building, but this method falls short in comparison to the multi-head self-attention mechanism found in transformers. Moreover, dynamic edge building does not provide gradient feedback for neighbor selection.<br> (2) Upon initial examination, GNNs might appear faster than transformers due to their consideration of only a fixed number of neighbors. Nevertheless, extensive sparse operations and data rearrangements in GNNs can be significantly slower than the highly optimized dense tensor operations in transformers for a reasonable sequence length and considering all possible interactions. In our experiments, the organizers’ baseline has the same computational cost as our reference T model (discussed below) while having only 1.4M parameters vs. 7.4M for the T model.<br> <strong>Transformer is a GNN on steroids</strong>. 
It may be considered as GNN on a fully connected graph using attention to estimate the edge weights dynamically. But this power comes with the need for huge training data.</p> <h2>Data</h2> <p>The organizers generously provided a sizable training dataset, which may be somewhat challenging to manage. Since transformers require massive data for training, it was vital to set up a proper data pipeline at the very beginning. Our approach consists of three primary components: (1) caching the selected data chunks to minimize the substantial cost of data loading and preprocessing, (2) employing chunk-based random sampling to effectively utilize caching, and (3) <strong>implementing length-matched data sampling</strong> (sapling batches with approximately equal lengths) to reduce the computational overhead associated with padding tokens when truncating the batch at the longest sequence. Components (1) and (2) offer a computationally efficient and RAM-friendly handling of the data and were shared through the <a href="https://www.kaggle.com/code/iafoss/chunk-based-data-loading-with-caching/notebook" target="_blank">"Chunk-based Data Loading with Caching" notebook</a> during the competition to facilitate the setup of a comprehensive training data pipeline for participants. <strong>Component (3) considerably expedited training and allowed us to use the maximum sequence length of 192 for training and 768 for inference (without a noticeable increase in inference time)</strong>. Inference at a 768 -length led to a 25 bps improvement over the 192-length. Initially, we also attempted to use Hugging Face datasets, which offered the ability to utilize chunked data but required additional file conversion; ultimately, we settled on a chunk caching pipeline.<br> In our study, we observed no significant difference between utilizing only basic features (time, sensor position, charge, auxiliary, and the total number of detections) and incorporating additional features related to ice transparency and absorption. This outcome is expected, as for sufficiently large training datasets, the extra features merely offer an alternate representation of the z-coordinate, with the corresponding transparency and absorption distributions being explicitly learned. To ensure a diverse range of models, we employed both the base and extended feature sets in our experiments. The features are normalized in a similar way as in the organizers' baseline. <strong>It is important to consider detections with both auxiliary true and false.</strong> For events exceeding the maximum considered sequence length, we implemented the following selection process: Initially, we randomly selected detections from the auxiliary false subset, and if this subset proved insufficient to create a sequence of the required length, we then randomly sampled noisy auxiliary true detections. It is interesting that this random selection approach outperformed more complex methods, such as selecting based on charge or considering only events within a specific temporal boundary (IceCube size divided by the speed of light) surrounding the detection with the maximum charge.</p> <h2>Model</h2> <p>The model is schematically illustrated in the plot below:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F552b4994e2931207aa2267c170d6044d%2FModel2.png?generation=1681961703319090&amp;alt=media" alt=""><br> <em>Fourier encoder</em>. 
Our solution is based on a transformer model, which considers each event as a sequence of detections. It is crucial to process the provided continuous input, such as time or charge, into a form suitable for transformers. We use <a href="https://arxiv.org/pdf/1706.03762.pdf" target="_blank">Fourier encoding representation</a>, often used to describe the position in the sequence in language models. This method can be viewed as a soft digitization of the continuous input signal into a set of codes determined by Fourier frequencies. This procedure is applied for all continuous input variables, while the discrete auxiliary flag is encoded with learnable embedding. We multiply the normalized input variables by 1024-4096 to have a sufficient resolution after digitizing. For example, after multiplying the normalized time by 4096, the temporal resolution becomes 7.3 ns, and the model may understand smaller time variations because of the continuous nature of the Fourier encoding. This multiplication is critical, and even a change of the coefficient from 128 to 4096 gives 20 bps boost.<br> The encoding of all variables is concatenated, passed through one GELU layer, and reprojected into the transformer dimension. An alternative to Fourier encoding may be a multilayer network that learns the corresponding representation of continuous variables, but it may require many more layers than we consider in our setup. <br> <em>GraphNet encoder</em>. In addition to the use of Fourier encoding, in several of our models, we incorporated GraphNet feature extractor. In addition to learning the representation of input variables, it also constructs features accounting for the neighbors. <br> <em>Relative spacetime interval bias</em>. In the special theory of relativity there is a quantity called spacetime interval: ds^2=c^2 dt^2-dx^2-dy^2-dz^2. For particles moving with a speed close to the speed of light, it is close to zero. What is more useful is that all particles and photons produced in the reactions caused by a single neutrino should have ds^2 close to zero too (it is not fully precise because the refractive index of ice is about 1.3, and the speed of light in it is lower than c). Therefore, by <strong>computing ds^2 between all pairs of detections in the given event, it is relatively straightforward to distinguish between detections originating from the same neutrino and those that are merely noise</strong>, as demonstrated in the figure below. This criterion can be naturally introduced to a transformer as a <a href="https://arxiv.org/pdf/1803.02155v2.pdf" target="_blank">relative bias</a>. Therefore, during the construction of the attention matrix, the transformer automatically groups detections based on the source event effectively filtering out noise. We use Fourier encoding representation of ds^2 to build the bias term. <strong>Use of relative bias boosts the performance by 40 bps</strong>.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fa96019c14603b2c54be402d9d960464f%2Fds.png?generation=1681961957521492&amp;alt=media" alt=""><br> <em>Model</em>. We use the standard <a href="https://arxiv.org/pdf/2208.06366.pdf" target="_blank">BEiT building blocks</a> (transformer with learnable shortcuts). To incorporate relative bias, the first transformer blocks are modified according to <a href="https://arxiv.org/pdf/1803.02155v2.pdf" target="_blank">this paper</a>. 
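<p>As a minimal sketch of the pairwise spacetime interval described above (units, the bias scaling, and the projection to attention heads are assumptions rather than the exact implementation):</p> <pre><code>import torch

def spacetime_interval(pos, t, c=0.2998):
    # pos: (N, 3) sensor positions in metres, t: (N,) detection times in ns
    # c is the vacuum speed of light in m/ns; the refractive index of ice (~1.3)
    # makes the "ds^2 close to zero" criterion only approximate, as noted above
    dt = t[:, None] - t[None, :]
    dx = pos[:, None, :] - pos[None, :, :]
    ds2 = (c * dt) ** 2 - (dx ** 2).sum(-1)   # (N, N), near zero for pulses from the same particle
    return ds2

# ds2 is then Fourier-encoded and projected to one scalar per attention head,
# which is added to the attention logits of the first transformer blocks.
</code></pre>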
The model size is characterized with a standard ViT notation: T – tiny (dim 192), S – small (dim 384), B – base (dim 768). We use 16 transformer blocks in total: 4 blocks with relative bias + 12 blocks with cls token. The use of an additional cls token, included in the input sequence, enables the gradual collection of information about the track direction throughout the entire model. The attention head size is varied between 32 and 64: the smaller head size gives better performance, especially for the T model (10 bps). The details of the considered models are summarized below.</p> <table> <thead> <tr> <th>Model</th> <th>dimension</th> <th>number of heads</th> <th>depth</th> <th>number of parameters</th> </tr> </thead> <tbody> <tr> <td>T</td> <td>192</td> <td>3-6</td> <td>4+12</td> <td>7.57M</td> </tr> <tr> <td>S</td> <td>384</td> <td>6-12</td> <td>4+12</td> <td>29.3M</td> </tr> <tr> <td>B</td> <td>768</td> <td>12-24</td> <td>4+12</td> <td>115.6M</td> </tr> </tbody> </table> <p><em>Training setup</em>. The models are trained with AdamW optimizer (weight decay 0.05) for 4-5 epochs (all data is considered). The learning rate is changed with a cosine annealing scheduler with a warmup. The maximum learning rate at the first epoch is 5e-4 for T and S models, and 1e-4 for B models. The following epochs are trained with the maximum learning rate of 0.5e-5 - 2e-5. The effective batch size is 4096 in all experiments (with the use of gradient accumulation). Starting from the second epoch Stochastic Weight Average is used. <br> We perform training for the first 2-3 epochs with von Mises-Fisher Loss (the model predicts 3d vectors, and kappa is computed based on the length of the produced vector), and the remaining epochs are performed with using the competition metric as the objective function + 0.05 von Mises-Fisher Loss (to give the model feedback on the vector length vs. confidence relation and simplify ensembling step). Use of the competition metric as the loss gives 55 bps boost. </p> <h1>Results</h1> <p>The table below summarizes the performance of our finalized models. CV is evaluated at L=512. (*) refers to models trained on 2×A6000, which in contrast to rtx4090 enables larger actual batch size. Further finetuning with L=256 may result in about 5 bps boost.</p> <table> <thead> <tr> <th>Model setup</th> <th>CV512</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>T d32</td> <td>0.9704</td> <td>0.9693</td> <td>0.9698</td> </tr> <tr> <td>S d32</td> <td>0.9671</td> <td>0.9654</td> <td>0.9659</td> </tr> <tr> <td>B d32</td> <td>0.9642</td> <td>0.9623</td> <td>0.9632</td> </tr> <tr> <td>B d64</td> <td>0.9645</td> <td>0.9635</td> <td>0.9629</td> </tr> <tr> <td>B+gnn d48</td> <td>0.9643</td> <td>0.9624</td> <td>0.9627</td> </tr> <tr> <td>* S+gnn d32</td> <td>0.9639</td> <td>0.9620</td> <td>0.9628</td> </tr> <tr> <td>* B+gnn d32</td> <td>0.9633</td> <td>0.9609</td> <td>0.9621</td> </tr> </tbody> </table> <p>Each increase of the model to the next size T-&gt;S-&gt;B gives approximately 30 bps but is accompanied by an ×4 increase in the number of model parameters. However, the provided data may be insufficient for training models larger than B. Meanwhile, the results of the use GNN in addition to the Fourier extractor are rather contradictive and may be affected by the difference in the hardware configuration. 
<br> So, even <strong>the smallest considered T model with only 7.53M parameters and fast training (under 12 hours on rtx4090 per epoch) can take the top-5 in this competition</strong> by itself. Meanwhile, with additional finetuning at L=256, <strong>our best model, B+gnn d32, reaches 0.9628 CV512, 0.9608 public, and 0.9618 private LB as a single model</strong>. </p> <h2>Model ensemble</h2> <p>With the use of von Mises-Fisher Loss, the vector length corresponds to the model confidence. Therefore, a weighted average of the predicted vectors automatically incorporates the confidence and biases the prediction towards the most confident direction. <br> Our best final submission, a linear ensemble of 6 models, gives 0.9594 at public and 0.9602 at private LB (the contribution of each model is evaluated as an average of weights fitted in 5 random 5-fold splits). Ensembling gives only an insignificant improvement over our best single model submission of 0.9608 public and 0.9618 private LB. We also tried more elaborate ensembling by utilizing additional features such as sequence length, first detection time, total charge, etc., but without any improvement over the simple linear ensemble.</p> <h2>Training and inference time</h2> <p><strong>On rtx4090 the inference time can be as short as 3 min per 1M samples for the T model</strong> (without inference optimization) and may increase to 8-10 min for the B model. On outdated GPUs several generations behind, such as the P100 at Kaggle, the inference time may increase by x10. The training time varies from 12 hours per epoch for the T model to 56 hours per epoch for the B+gnn model. On a machine with 2×A6000 the speed is approximately the same as (~10% faster than) running the model on a single rtx4090.</p> <h2>Robustness</h2> <p>Regarding the robustness of model predictions with respect to perturbation of the input parameters, we have considered the following cases: (1) <strong>temporal noise</strong>: add a normally distributed random variable to the detection time of each event; (2) <strong>charge noise</strong>: multiply the charge by 10^x, where x is normally distributed with the variance reported on the horizontal axis; (3) <strong>auxiliary noise</strong>: randomly flip the auxiliary flag with a given probability; and (4) <strong>length reduction</strong>: remove a given percentage of detections. The plot below reports results for our reference T model (noise-free CV512 is 0.9704).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Ffdbfc788586698d1965af8a024868dd7%2FNoise.png?generation=1681963294066000&amp;alt=media" alt=""><br> Although the model was not explicitly trained to consider the abovementioned types of noise, the predictions in most cases do not degrade by more than 15% at the largest magnitude of the considered noise, e.g. adding 200 ns random time error or a ×2 reduction of the number of detected events. Training with specific data augmentation may improve the robustness of the model. However, there may be a limit where the error starts rapidly increasing, e.g. 10 ns temporal noise or a change in the value of detected charge by 40%.<br> The considered model is versatile enough to apply to any neutrino telescope with a similar detector concept to IceCube, regardless of the number of detectors and their arrangement. 
However, since during the training of our models, the positions of the detectors are fixed, the model may not have a complete understanding of the geometry and experience the degradation of the performance if the detectors are moved. For example, the performance of T model drops from 0.9704 to 1.004 under flip of all detector positions along X or Y axes. To avoid such performance degradation and improve the understanding of geometry by the model, the training could be performed at randomly generated configurations of detectors at each detection event. Incorporation of geometry augmentation, such as shifts, flips, and rotation, would also improve the model robustness to the change of the detector geometry. The less optimal solution is the generation of new training data for a new neutrino telescope. Consideration of <a href="https://arxiv.org/pdf/2006.10503.pdf" target="_blank">3D Roto-Translation Equivariant Attention</a> models, in which detector positions are incorporated through attention rather than direct input, could potentially address the issue of detector geometry. However, this improvement may come with both the computational cost and VRAM requirement increase, and, therefore, a more practical solution may be training a simple model with spatial augmentation and randomized detector positions.<br> The plot below visualizes the distribution of error for confident and unconfident prediction.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fd1fe483e3972b5ae6aafa7c4db3f4ea9%2Fdist.png?generation=1681963384438419&amp;alt=media" alt=""></p> <h2>Interesting findings</h2> <p>It appeared that <strong>the majority of participants of the competition are unconsciously using the following leak</strong>. The provided data is the result of a simulation, and 0 time has a particular meaning revealing some details of the generation process (the time of the first detection with respect to 0 time may probably give such information as the energy and approximate direction limitations based on the traveling time of the neutrino from the box boundary to the detector). The models learn how to utilize this leak and improve its performance. For example, one of our experiments with the T model trained (1) on data with keeping 0-time-reference and (2) subtracting the time of the first detection from all detections gives the performance gap of 145 bps at the end of the first epoch. <br> In the real detector data, the 0-time-reference is not available, and the time might be considered with the reference to the first detected event only. For models trained using 0-time-reference, such experimental data should be shifted by the average first detection time. However, in this case, the performance may drop by 150+ bps, comparable to the results of the model trained without the 0-time leak. </p> <h2>Things that did not work</h2> <p>In this competition, we have experimented with a number of things that, unfortunately, did not give an improvement:<br> <strong>Graph Neural Networks</strong>. After building the first transformer-based pipeline it becomes apparent that GNNs are well behind (0.98-0.99 vs. 1.0 LB for first models). One interesting comparison we ran at that moment is dividing the predictions based on the kappa by 30% of confident and 70% of unconfident (noisy input and very small number of detections for track reconstruction). 
The performance of both models on unconfident predictions was comparable, while the improvement on confident predictions was the thing giving the performance gap between transformers and GNNs. It is fully in line with our initial expectation that transformers with global consideration of the input data are more appropriate for the prediction of the global quantity as the track direction in contrast to local GNNs. So further efforts were spent on enhancing the transformer setup. So, in the end, we used GNN as an optional addition to our Fourier feature extractor for diversity.<br> <strong>Ice properties</strong>. We did not see that use of additional input, such as ice properties, helps to improve model performance.<br> <strong>Local attention</strong>. We spent several weeks experimenting with local attention. The basic idea is simple: before giving a sequence of detections to the transformer model, which considers all-vs-all interactions between detections, process the features within the local neighbors. In theory, local attention is expected to be fast and memory efficient since it examines only a small number of k=8 neighbors instead of requiring L×L attention. However, in practice, local attention appears to be quite slow because of extensive data shuffling (similar to GNN), and in some naïve implementations, it also may consume significantly more VRAM than highly optimized transformers. Surprisingly, the fastest local attention is just regular transformer attention with masking attention for elements beyond k neighbors. Such masking helps at the beginning of training, but at the later stage, full attention outperforms it or gives comparable results. We also tried to consider configurations with alternating global-local attention (sandwich) or parallel blocks of local and global attention in the part of the model preceding the regular 12-layer transformer. However, a simple configuration with consideration of 4 layers with full attention and rel bias + 12-layer regular transformer outperformed the setups with local attention.<br> <strong>Classification loss</strong>. We tried to subdivide the angle space into bins and utilize classification loss to fight against angle uncertainty. The ground truth is represented as a 2D Gaussian around the provided label, as illustrated in the figure below. We introduced 8 cls tokens to predict the probability for a 64×128 angular bin map (32×32 per token). Unfortunately, this model ended up at approximately 1.0 CV, and we did not perform any further checks on this approach. However, this model is quite interesting to mention, and it may provide a good visualization of the predictions.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F1a2b2b993bb6a79ad13babb00ea689dc%2Fcls.png?generation=1681963515503826&amp;alt=media" alt=""><br> <strong>L and H models</strong> (400 and 800M parameters). Because of Nvidia virtual address space driver bug, 2×rtx4090 could not be used together until 1-2 weeks before the end of the competition. And even after this bug was fixed, disabled p2p makes multi-GPU training with these cards to be quite inefficient. Therefore, we were unable to scale up our runs and stopped at relatively small B models. The expected boost at each model size increase is about 30 bps, and L and H single models could likely reach 0.958 and 0.955-0.956 LB as a single model but with a massive increase of the required compute for both training and inference.</p>
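<p>For reference, a minimal sketch of the objective described in the training setup (the competition metric plus a small von Mises-Fisher term, with kappa taken as the length of the predicted vector); the exact normalizer handling and weighting in the actual code may differ:</p> <pre><code>import torch
import torch.nn.functional as F

def direction_loss(pred, target, vmf_weight=0.05, eps=1e-8):
    # pred: (B, 3) unnormalized output vectors, target: (B, 3) unit direction vectors
    kappa = pred.norm(dim=-1).clamp(min=eps)
    cos = (F.normalize(pred, dim=-1) * target).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7)
    angle = torch.arccos(cos)                     # competition metric per event
    # 3D von Mises-Fisher negative log-likelihood up to an additive constant:
    # -kappa*cos + log C(kappa), with log C(kappa) = log(sinh kappa) - log kappa
    log_c = kappa + torch.log1p(-torch.exp(-2.0 * kappa)) - torch.log(kappa)
    vmf = -kappa * cos + log_c
    return (angle + vmf_weight * vmf).mean()
</code></pre>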
IceCube - Neutrinos in Deep Ice
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First and foremost, I would like to express my gratitude to the hosts and the Kaggle team for organizing such an engaging competition. <br> Additionally, I want to extend my appreciation to those who shared their valuable insights by publishing notebooks and participating in the discussions.<br> In particular, the following notebooks and posts have been incredibly helpful.</p> <ul> <li><a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission" target="_blank">graphnet_baseline_submission</a> by <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a></li> <li><a href="https://www.kaggle.com/code/rasmusrse/graphnet-example" target="_blank">graphnet_example</a> by <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a> and <a href="https://www.kaggle.com/pellerphys" target="_blank">@pellerphys</a></li> <li><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/392096" target="_blank">∞ Explanation and Improvement von Mises-Fisher Loss</a> by <a href="https://www.kaggle.com/synset" target="_blank">@synset</a></li> <li><a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">Early Sharing Prize - DynEdge - 1.046</a> by <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a></li> <li><a href="https://www.kaggle.com/code/seungmoklee/3-lstms-with-data-picking-and-shifting" target="_blank">3 LSTMs; with Data Picking and Shifting</a> by <a href="https://www.kaggle.com/seungmoklee" target="_blank">@seungmoklee</a></li> <li><a href="https://www.kaggle.com/code/dschettler8845/ndi-let-s-learn-together-eli5-and-eda" target="_blank">NDI – Let's Learn Together – ELI5 and EDA</a> by <a href="https://www.kaggle.com/dschettler8845" target="_blank">@dschettler8845</a></li> <li><a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">Tensorflow LSTM Model Training TPU</a> by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a></li> </ul> <h3>Architecture</h3> <p>My model has a simple structure with EdgeConv and Transformer connected.<br> I aimed to combine EdgeConv and Transformers in my model to efficiently gather information. GNNs were employed to capture local neighborhood information, while Transformers were used to collect global context, ensuring a comprehensive understanding of the data.<br> This simple model achieves a public score of 0.9628 points and a private score of 0.9633 points as a single model.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2Ff28760ca31a4cedaebaad8cac80352ac%2Ficecube2.png?generation=1682000618971016&amp;alt=media" alt="model"></p> <h3>Static edge selection for GNN</h3> <p>In the <a href="https://arxiv.org/abs/1801.07829" target="_blank">original paper</a>, edges are calculated in each layer dynamically, but this edge selection is not differentiable and cannot be trained. This still works well for the segmentation task in the original paper, as points in the same segment are trained to be close in the latent space. 
However, the situation is different in this task, so I did not think it would make sense to dynamically select edges. Therefore, the edge selection used in EdgeConv is calculated at the input.</p> <h3>Simple modification of EdgeConv</h3> <p>Using <a href="https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html" target="_blank">EdgeConv</a>, the latent parameter \({x}_i\) of \({dom}_i\) is updated using the difference: \({x}_j-{x}_i\), where \({x}_j\) is the latent parameter of \({dom}_i\)'s k-nearest neighbour \({dom}_j\).<br> This is fine for x, y, z and time, but for charge and auxiliary, the absolute values are meaningful, so I make \({x}_i\) be updated using not only \({x}_j-{x}_i\) but also \({x}_j\).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F99153c1b48d05472673c6dcd1827d187%2Fedge4.png?generation=1682093716754213&amp;alt=media" alt="edgeConv"></p> <h3>Loss function</h3> <p>VMFLoss is a good and stable loss function, but θ only enters it in the form of cos(θ):</p> <pre><code>VMFLoss = -κ*cos(θ) + C(κ) </code></pre> <p>(where θ is the angle between the ground truth and the prediction and κ is the length of the 3D prediction).<br> On the other hand, the metric of this competition is the angle θ itself.<br> To minimise θ itself, I defined the loss function as follows:</p> <pre><code>MyLoss = θ - κ*cos(θ) + C(κ) </code></pre> <p>This simple modification resulted in a 0.005👍 gain compared to VMFLoss.</p> <h3>Sequence Bucketing</h3> <p>In transformers, computational complexity is proportional to the square of the sequence length, making it essential to reduce the sequence length. As a result, it is effective to group data with similar sequence lengths and create mini-batches. This approach speeds up computation and further decreases the GPU memory usage, allowing for the utilization of larger mini-batch sizes.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F548996%2F3674d2af7f7552e176a208149f3ba2ec%2Fmini_batch.png?generation=1682128087385204&amp;alt=media" alt=""></p> <p>It is easy to implement by changing collate_fn in the DataLoader.</p> <pre><code>def collate_fn(graphs):
    # split_list: increasing fractions in (0, 1] defined elsewhere;
    # the threshold and boundary literals shown here are illustrative
    graphs = [g for g in graphs if g.n_pulses &gt; 0]
    graphs.sort(key=lambda x: x.n_pulses)
    batch_list = []
    for minp, maxp in zip([0] + split_list[:-1], split_list):
        min_idx = int(minp * len(graphs))
        max_idx = int(maxp * len(graphs))
        this_graphs = graphs[min_idx:max_idx]
        this_batch = Batch.from_data_list(this_graphs)
        batch_list.append(this_batch)
    return batch_list
</code></pre> <p>Note that even if gradients are calculated individually for each length-biased mini-mini-batch in the batch_list, the negative effects can be mitigated by updating the weights collectively afterwards.</p> <h3>Data loading</h3> <p>Because the data was huge and it was difficult to put everything in memory, I tried to load one batch per epoch.<br> The pseudo code is shown below.</p> <pre><code># class and method names are reconstructed approximations of the pseudo code
class OneBatchDataset(Dataset):
    def reset_batch(self):
        self.this_batch_id = self.get_next_batch_id()
        self.this_batch = pd.read_parquet(...)
        self.this_meta = self.meta[self.meta.batch_id == self.this_batch_id]

    def __len__(self) -&gt; int:
        return len(self.this_meta)
    ・・・

class OneBatchDataModule:
    def __init__(self, dataset):
        ・・・
        self._dataset = dataset

    def reload(self):  # called once per epoch
        ・・・
        self._dataset.reset_batch()
</code></pre> <h3>Model parameters</h3> <p>This is some information about the base models which are used for stacking. 
<br> With 4 layers and 6M parameters, these base models seem to be smaller than other top solutions.</p> <ul> <li>Depth of EdgeConv+Transformer block layers: 3 or 4</li> <li>Embedding dimension: 256</li> <li>Used sequence length (train): 200 to 500</li> <li>Used sequence length (inference): 6000</li> <li>With/without global features</li> <li>Number of parameters: 6M</li> <li>Parameters used in kNN for GNN: x,y,z or x,y,z,time</li> <li>Training time: 10 to 14 hours/epoch (on 2x Titan RTX)</li> <li>Inference time: 30 minutes/5 batches (on Kaggle notebook)</li> </ul> <p>Inference time is relatively slow due to the use of a sequence length of 6000.</p> <h3>Stacking</h3> <p>The framework for stacking was primarily based on the original base model. <br> The main difference was that the model was replaced with a 3-layer MLP. <br> Through stacking, an improvement of +0.003 was achieved.</p> <h3>Code</h3> <p>The inference notebook is available <a href="https://www.kaggle.com/code/its7171/icecube-edgeconv-transformer-inference" target="_blank">here</a>.</p>
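<p>For illustration, a minimal sketch of the EdgeConv modification described above, written with torch_geometric's MessagePassing (the MLP sizes, activation, aggregation, and k value are assumptions; edges are built once from the input coordinates with a static kNN, as discussed):</p> <pre><code>import torch
from torch import nn
from torch_geometric.nn import MessagePassing, knn_graph

class ModifiedEdgeConv(MessagePassing):
    def __init__(self, in_dim, out_dim):
        super().__init__(aggr="max")
        self.mlp = nn.Sequential(nn.Linear(3 * in_dim, out_dim), nn.GELU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        # standard EdgeConv uses [x_i, x_j - x_i]; x_j is appended as well so that
        # absolute values such as charge and auxiliary stay visible to the MLP
        return self.mlp(torch.cat([x_i, x_j - x_i, x_j], dim=-1))

# static edges, computed once at the input (not per layer), e.g.:
# edge_index = knn_graph(xyz_or_xyzt, k=8, batch=batch)
</code></pre>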
Lux AI Season 2
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I have to acknowledge <a href="https://www.kaggle.com/ttigga" target="_blank">@ttigga</a> and <a href="https://www.kaggle.com/danmctree" target="_blank">@danmctree</a> for co-producing the most uncertain and stressful race to the finish I could have imagined. Huge congrats to both of you.</p> <h1>Background</h1> <p>The Lux game specification can be found <a href="https://www.kaggle.com/competitions/lux-ai-season-2/overview/lux-ai-specifications" target="_blank">here</a>.</p> <h1>Overall Strategy</h1> <p>I competed in the <a href="https://www.kaggle.com/competitions/lux-ai-2022-beta" target="_blank">Lux Beta</a> just before this competition began, but threw out most of my strategic code when the post-beta rule changes were announced. The major change was that lichen now produced power. I decided then that I'd really push lichen growth early and often to generate as much power as possible. That mostly stayed true throughout the competition, even when robot-produced solar power was being promoted as a viable alternative. I just didn't have the discipline necessary to keep a robot stationary for long - there are so many other things for it to be doing!</p> <h3>My state-of-the-art visualizer</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9709029%2F11d8af058a0c975d0fb7edcda449f913%2FIMG_5429.jpg?generation=1683587685788613&amp;alt=media" alt=""></p> <h3>Forward simulation</h3> <p>I would determine actions for all units and factories, update state, then do it all over again. I'd do that for about 2.9 seconds every invocation, which gave me anywhere from 5 to 50+ steps worth of planning, depending on unit count and pathfinding complexity. I simulated just about everything including my future units. I tried to guess at opponent unit movement in limited combat situations. I never added logic for handling self-collisions or my factory explosions - probably due to optimism that maybe it'll work out differently when we get there! I assumed opponent lichen values were static, which was never correct, but never too far off.</p> <h3>Factory placement</h3> <p>Basically: adjacent to ice, close to ore, close to no-/low-rubble territory. I went back and forth on the relative value of nearby ore vs nearby flatland. Flatland can be incredibly valuable during the early game. Instead of digging (expensive), my light robots can go antagonize nearby opponent lights (cheap), throwing off their early plans. But I can't just avoid ore and end up completely outnumbered by mid-game. I eventually added some logic for measuring ore scarcity and comparing my current factories' ore situations vs my opponent's to determine a dynamic bonus for ore proximity. There's also the matter of ice vulnerability/security (i.e. can this location readily be denied access to ice), but I'll cover that in more detail shortly.</p> <h3>Mining resources and building robots</h3> <p>My strategy largely depended on growing lichen until my power income was greater than my current (estimated) power usage. 
Then I'd build some units and go back to mining ice and growing lichen - until my power income was ready for more units. This was definitely not a perfect strategy but it worked ok. I tended to err on the side of under-building units so that I always had plenty of power to hand out. I never knew for sure how to decide when to build a light vs save metal to build a heavy. I tried to maintain some minimum number of lights per factory and then build heavies beyond that point.</p> <h3>On chains</h3> <p>I used chains heavily during the beta, with limited success. My code was error-prone, and it seemed too easy to disturb a whole chain of activity with only a single light robot. I decided early on I would avoid chains except in the trivial factory-adjacent case. Clearly Tigga and Siesta proved me wrong in the end. Their economic system based on chains is definitely superior. I was rarely able to adequately disturb a chain for long. And using chains allows for more flexible factory spawn locations - ice no longer necessarily needs to be factory-adjacent. All resources are just 1 chain away! Very powerful.</p> <h1>Roles and Goals</h1> <p>For every step I simulated, I would set or update a "role" and "goal" for each unit. A role is a long-term but non-permanent job/responsibility. A goal is a single multi-step objective that typically involves movement and/or hitting some resource or power threshold. Each role controlled a different goal state machine. The current role and goal would be persisted between invocations. Every step I would:</p> <ol> <li>Check each unit to see that the current role is still valid</li> <li>Check each unit for special role transition situations</li> <li>Iterate over all units, assigning roles to units without one, until all units had a role</li> <li>Update goals for all units</li> </ol> <p>I ended up with about 10 unique roles: antagonizer, attacker, blockade, cow, miner, pillager, protector, recharge, relocate, power_transporter, water_transporter.</p> <p>Role selection looked something like this, and would depend on unit type and factory situation (the first constructor that returns a role wins; the literal argument values are omitted here):</p> <pre><code>new_role = (
    RolePillager.from_lichen_cell(step, unit, max_dist=..., max_count=...)
    or RoleAntagonizer.from_chain(step, unit, max_dist=..., max_count=...)
    or RoleRelocate.from_assist_ice_conflict(step, unit)
    or RoleAntagonizer.from_mine(step, unit, max_dist=..., ice=..., max_water=...)
    or RoleAntagonizer.from_mine(step, unit, max_dist=..., max_count=...)
    or RoleCow.from_lowland_route(step, unit, max_dist=..., min_size=...)
)
</code></pre> <h1>Prioritizing Action Selection</h1> <p>Not all units are created equal. I didn't want a low-priority light robot to claim a cell that a heavy robot urgently needed to move to, or pick up power that could be better used elsewhere. I used a list of prioritized situations to ensure a reasonable order in which units (and factories) locked their actions into place. 
I wrote some nifty python to do this in a relatively clean way.</p> <pre><code>class EntityGroup:
    def __init__(self, step, entities):
        self.step = step
        self.entities = entities

    def __getattr__(self, attr):
        # names and control flow reconstructed approximately from the original
        if attr not in self.__dict__:
            def func(*args, **kwargs):
                for e in self.entities:
                    if e.last_action_step &lt; self.step:
                        e.action = getattr(e, attr)(self.step, *args, **kwargs)
                        if e.action is not None:
                            e.last_action_step = self.step
            return func
        return super().__getattr__(attr)
</code></pre> <pre><code># boolean arguments whose values are omitted here are shown as ...
group = EntityGroup(step, board.player.units() + board.player.factories())
group.do_pickup_resource_from_exploding_factory()
group.do_move_win_collision()
group.do_protector_transfer(heavy=...)
group.do_protector_pickup(heavy=...)
group.do_protector_move(heavy=...)
group.do_factory_end_phase_water()
group.do_factory_build()
</code></pre> <h1>Ice Conflicts</h1> <p>During sprint 2, I noticed some matchups would incidentally result in two factories fighting over ice. It felt like a crazy idea at the time, but I figured it might be worth purposely choosing an otherwise suboptimal factory spawn position if it meant I was well-positioned to deny ice to an opponent’s factory, while using light water transporters to outlast them. I called this strategy “ice conflicts”. I started doing this during sprint 2 by giving a huge boost to my final factory placement score if it was within 4 distance or so of all nearby ice deposits around an opponent factory. This led to my agent winning sprint 2, and temporarily breaking Kaggle’s matchmaking algorithm, sending my score to 36k+ (RIP Homeostat). I figured this strategy would be copied or countered but it never really was. While Bovard was fixing the matchmaking algorithm, I deactivated the ice conflict logic from my “production” build to save it for a rainy day (i.e. the last day). I continued to make improvements to this strategy, including using a pair of light "blockade" units to block opponent water transporters. In the end, ice conflict spawning is the only thing that kept me competitive with Siesta's and Tigga’s improved economic strategy. Interestingly, this did not work at all against flg, who tended to use heavies to flexibly move ice around the map wherever it was needed. This resulted in very few factory kills, and I just had to live with the otherwise suboptimal factory placements. This weak matchup really hurt my final score.</p> <h1>Coordination Among Units and Factories</h1> <p>There were a few situations where units and factories needed to coordinate with one another to make sure they weren't trying to do/use the same thing.</p> <ul> <li>Enforce unique unit-&gt;cell assignments<ul> <li>Units would claim a cell whenever they were tasked with digging/antagonizing/standing there.</li> <li>Some units could “displace” another unit’s assignment, invalidating its role and forcing a new role to be determined. This usually happened when a heavy took over a job from a light.</li></ul></li> <li>Enforce unique unit-&gt;unit assignments<ul> <li>Only really relevant for units with the attacker role. 
I didn't need more than one unit pursuing an opponent unit at a time.</li></ul></li> <li>Enforce unique cell-&gt;factory assignments<ul> <li>Ice cells are assigned to my various factories over time, determined by whichever factory sends a heavy robot there first.</li> <li>This ensures factories can rely on relatively stable ice income over time (no risk of “losing” an ice cell to a friendly factory)</li> <li>Any unit can mine any ice cell if its available, but if a unit belonging to the owner factory wants it, they get it, displacing whoever was there previously.</li></ul></li> <li>Collision avoidance<ul> <li>As described earlier, units determine their actions in priority order. This means I often needed to determine if a high priority unit could safely move to a cell currently occupied by a lower priority unit. Eventually I settled on simply comparing the low-priority unit's power to the power necessary to move <em>anywhere</em>, assuming at least one safe move was possible. Similarly, I needed to check for situations where a high-priority unit might move onto a cell that was the only remaining safe option for a lower-priority unit to move <em>to</em>. These checks were a little messy, and there were still small gaps in the logic, but it worked most of the time.</li></ul></li> </ul> <h1>Reflection / Looking Ahead</h1> <p>I had very few expectations coming into this competition. I had no idea how much time I was going to put into it, and I was mostly under the impression that RL was going to reign supreme as in season 1. The game proved to be very challenging, surprisingly deep, and well balanced. I so enjoyed tinkering on this project for 3+ months. Thanks to <a href="https://www.kaggle.com/stonet2000" target="_blank">@stonet2000</a> and <a href="https://www.kaggle.com/bovard" target="_blank">@bovard</a> for putting this all together. And a special shout out to the folks who were constantly pushing the bounds of the game/meta: <a href="https://www.kaggle.com/harmbuisman" target="_blank">@harmbuisman</a>, <a href="https://www.kaggle.com/danmctree" target="_blank">@danmctree</a>, and <a href="https://www.kaggle.com/ttigga" target="_blank">@ttigga</a>. And to <a href="https://www.kaggle.com/ferdinandlimburg" target="_blank">@ferdinandlimburg</a> and <a href="https://www.kaggle.com/philippkostuch" target="_blank">@philippkostuch</a> who pushed radically different and competitive solutions over the last month or so. I look forward to seeing you all on future leaderboards.</p> <p>Edit:</p> <p>My open sourced code for Season 2 (Python) can be found <a href="https://github.com/ryandy/Lux-S2-public" target="_blank">here</a>. And Season 2 NeurIPS Edition (C++) <a href="https://github.com/ryandy/Lux-S2-neurips-public" target="_blank">here</a>.</p>
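<p>As a schematic sketch of the unique unit-&gt;cell assignment with displacement described above (all class, attribute and method names here are illustrative; the actual logic lives in the linked repo):</p> <pre><code>class CellAssignments:
    def __init__(self):
        self.cell_to_unit = {}  # cell -&gt; unit currently claiming it

    def claim(self, cell, unit):
        current = self.cell_to_unit.get(cell)
        if current is None:
            self.cell_to_unit[cell] = unit
            return True
        if unit.priority &gt; current.priority:   # e.g. a heavy displacing a light
            current.invalidate_role()          # displaced unit picks a new role
            self.cell_to_unit[cell] = unit
            return True
        return False
</code></pre>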
IceCube - Neutrinos in Deep Ice
12th Place Solution Details - IceCube
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary</h1> <p>My final solution used an ensemble with a simple weighted average of 6 models that were generalizations of publicly posted models: (1) the DynEdge model <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a> , <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a> , and (2) LSTM model by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>. Each model predicted directly a 3-dimensional vector of the target direction, using a loss that was the norm of the difference between the predicted vector and the target vector. The predicted vector was not normalized first.</p> <h1>TLDR:</h1> <ol> <li>Final Ensemble of 6 models, linear weighting determined by nonlinear optimization minimizing MAE (Mean Angular Error) on entire held-out batch.</li> <li>Pool of all models included 30 different models, best combination of 6 selected through optimization method.<br> Models included Dynamic Graph Networks of different sizes, and LSTM networks of different sizes and head networks</li> <li>Models included different number of pulses selected for training and different number of pulses used for inference (not always the same). Pulses were selected randomly from entire set of pulses.</li> <li>Each model was used (effectively) 4 times and averaged (because of the random selection of pulse set) - a simple observation was that smaller events did not require multiple evaluations.</li> <li>For prediction, each batch in the test set was read into memory and cached for use in evaluating all 6 models (and their 4x sampling). Because of the distribution of number of pulses per event, only about 15% additional inferences were required to produce a 4x average for each model.</li> <li>Each model used as loss the norm of predicted vector minus true unit direction vector.</li> </ol> <h1>References</h1> <p><a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">Robin Smits LSTM models</a><br> <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383524" target="_blank">RASMUS ØRSØE DynEdge Post</a><br> <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381747" target="_blank">My Derivation of Least Squares Fit</a><br> <a href="https://www.kaggle.com/code/solverworld/12th-place-ensemble-models" target="_blank">My Ensembling Notebook</a> [The attached dataset contains the pytorch models]<br> Mirco Hunnefield Masters Thesis, Online Reconstruction of Muon-Neutrino Events in IceCube using Deep Learning Techniques<br> Kai Schatto, PhD Thesis, Stacked searches for high-energy neutrinos from blazars with IceCube</p> <h1>Early Days</h1> <p>I started by reading as many of the neutrino detection papers as I could, trying to understand the nature of the problem. There were many excellent posts with graphics (taken from the papers) showing how neutrinos generated a muon that then formed spinoff collisions that illuminated a large set of sensors embedded in the Antarctic Ice. 
I won't repeat all that excellent information, but I will try to outline my thought process as my solution evolved.</p> <p>My first submissions were the <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381747" target="_blank">line-fit solutions</a> that got 1.214 and later improved to 1.174 with more clever picking of pulses through neighbor counting (thanks <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a> !).</p> <p>I then decided to move on to neural net models. Since there was a lot of excitement around the DynEdge models, I thought maybe I should zig while others were zagging, and looked into other graph neural networks. I implemented PointNet and then PointNet++, which are basically DynEdge models where you do not dynamically alter the graph at each iteration. That is, the graph is fixed by neighbors within a certain radius, and those neighbors are then operated on much as a Convolutional NN (CNN) does for the entire solution. The geometric pytorch <a href="https://pytorch-geometric.readthedocs.io/en/latest/cheatsheet/gnn_cheatsheet.html" target="_blank">cheatsheet</a> gives a nice summary of all the GNN models it supports with linked papers. </p> <p>After looking at all the GNN models, and failing to have PointNet++ do much, I thought I should dive into the DynEdge models. Trying to install the graphnet module required backing up too many python modules (and using Python 3.7 instead of my local 3.10). So, since I hate to go backwards, I pulled out the dynedge code from the entire graphnet module, and used py-geometric to build my DynEdge models. This also freed me from the SQLite and other framework difficulties so I could build my own pipeline.</p> <p>I solved the data size problem by preprocessing each parquet batch file into 780 individual hdf5 files which contained 256x2000xK matrices which were the data for 256 events. The pulses were limited to 2000 by Aux and Charge sorting. The rest were zero-padded, but because hdf5 compresses the data, this did not result in larger storage. The K features were x,y,z, time, charge, aux, num-of-neighbors, and some pre-computed time window information.</p> <p>I then was able to shuffle both batches and the 256-groups of events within each batch without massive amounts of memory. I was able to feed my GPU efficiently using the preprocessed files.</p> <h1>Progression</h1> <p>I decided to use batches 1-100 for training, and batches 601 and 602 for validation. I typically trained for 600-800 batches, thus used each batch only 6-8 times. I see now from the top finishers that people trained on much more data and for longer. My models were probably not complex enough to need more data.</p> <p>I got the DynEdge models working close to the reported public notebooks, and eventually tried a few variations, including adding more layers (up to 6) and a larger output head. The default was just a 128 size FC layer, but I eventually used [384,256,128,64,3]. My best DynEdge models got around 1.001 on Batch 602 (my test batch).</p> <h1>Choice of Output</h1> <p>I tried outputting bins (thanks again, <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>) but I found better results with directly predicting the 3-dimensional direction and using the norm ||v-pred|| as the loss function. The gradient of this function for v close to pred looks much like the Angular Error function, so I thought it should work. 
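<p>A minimal sketch of this objective (tensor shapes and helper names are illustrative, not the exact training code):</p> <pre><code>import torch

def angles_to_unit_vec(azimuth, zenith):
    # standard spherical-to-Cartesian conversion for the ground-truth direction
    return torch.stack([torch.sin(zenith) * torch.cos(azimuth),
                        torch.sin(zenith) * torch.sin(azimuth),
                        torch.cos(zenith)], dim=-1)

def vector_norm_loss(pred, target_unit):
    # pred: (B, 3) unnormalized network output, target_unit: (B, 3) unit vectors
    # ||pred - target|| behaves like the angular error when pred is close to the
    # target while still penalizing the vector length, so the prediction cannot
    # collapse to the zero vector
    return (pred - target_unit).norm(dim=-1).mean()
</code></pre>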
I could not get the Angular Error to work directly as a loss function, because it does not control for the vector size and allows the predicted vector to shrink to zero size. There are also some inaccuracies involved in making sure you don't normalize a length 0 vector.</p> <h1>Choice of Input</h1> <p>I tried various ways of selecting the input pulses to use in my models. Every sorting or choosing method seemed to work worse than random selection. Because of the random selection nature, model accuracy could be improved by simply averaging 4 predictions <em>from the same model</em>. One simple trick was to realize that events with fewer pulses than were required for a model did not have a random element and thus those did not need to be repeated 4 times. This meant I could do inference 4x with only a 15% increase in computation effort on submissions.</p> <h1>Back to LSTM Models</h1> <p>I decided to go back to the LSTM models. Taking a hint from the <a href="https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection/discussion/117228" target="_blank">2nd place finish</a> of the RSNA Intracranial Hemorrhage challenge ( <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> ), I used multiple LSTM layers, with the outputs of each layer concatenated together into a final multi-layer fully connected prediction head.<br> The best models were 512 wide LSTMs in 3 layers. I used prediction heads with layers [256,64,64,3] and variations thereof. I also tried an Attention modification on the LSTM output which helped a little bit (this was simply a learned weight that is dotted with the LSTM output to produce a signal proportional to the weight that should be given to that element in the sequence).</p> <h1>Trying all the Models</h1> <p>Here are the scores of 35 models, just to show the variation and improvement from averaging. The s&lt;i&gt; indicates a different random selection of the pulses from each event, the p&lt;n&gt; indicates how many pulses were used in the inference (prediction), and the last two columns indicate the change in score by using an average of the 4 predictions and the standard deviation, respectively. The score was done on Batch 602.<br> You can see the average helps about 35 bps on models using 128 pulses, and about 6-10 bps for models using 350 or 384 pulses. The description includes the type of model as well as the number of pulses used for training. E.g. LSTM512x3-384 was a 3 layer LSTM model trained on 384 pulses. ATT means attention was used.<br> The best GNN model gets 1.001, the best LSTM model gets 0.9859 on batch 602. My scores on the leaderboard were about .002 lower than my cross validation scores on batch 602. 
I did not submit any individual models to the leaderboard, as I did want to contaminate my procedure with fitting to the leaderboard.</p> <pre><code>000 00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s0 0.98869 00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s1 0.98881 00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s2 0.98886 00 dog-qnr-state-664-s9 LSTM512x3-384 p384 s3 0.98881 00 dog-qnr-state-664-s9 LSTM512x3-384 p384 -- 0.98810 -0.00069 0.00006 001 01 dog-ust-state-752-s9 LSTM512x3 p128 s0 0.99525 01 dog-ust-state-752-s9 LSTM512x3 p128 s1 0.99547 01 dog-ust-state-752-s9 LSTM512x3 p128 s2 0.99538 01 dog-ust-state-752-s9 LSTM512x3 p128 s3 0.99533 01 dog-ust-state-752-s9 LSTM512x3 p128 -- 0.99184 -0.00352 0.00008 002 02 dog-ust-state-752-s9 LSTM512x3 p200 s0 0.99056 02 dog-ust-state-752-s9 LSTM512x3 p200 s1 0.99073 02 dog-ust-state-752-s9 LSTM512x3 p200 s2 0.99069 02 dog-ust-state-752-s9 LSTM512x3 p200 s3 0.99061 02 dog-ust-state-752-s9 LSTM512x3 p200 -- 0.98900 -0.00165 0.00007 003 03 dog-ust-state-752-s9 LSTM512x3 p256 s0 0.98949 03 dog-ust-state-752-s9 LSTM512x3 p256 s1 0.98929 03 dog-ust-state-752-s9 LSTM512x3 p256 s2 0.98943 03 dog-ust-state-752-s9 LSTM512x3 p256 s3 0.98945 03 dog-ust-state-752-s9 LSTM512x3 p256 -- 0.98831 -0.00110 0.00008 004 04 dog-ust-state-752-s9 LSTM512x3 p320 s0 0.98896 04 dog-ust-state-752-s9 LSTM512x3 p320 s1 0.98897 04 dog-ust-state-752-s9 LSTM512x3 p320 s2 0.98885 04 dog-ust-state-752-s9 LSTM512x3 p320 s3 0.98891 04 dog-ust-state-752-s9 LSTM512x3 p320 -- 0.98813 -0.00079 0.00005 005 05 dog-ust-state-752-s9 LSTM512x3 p384 s0 0.98868 05 dog-ust-state-752-s9 LSTM512x3 p384 s1 0.98881 05 dog-ust-state-752-s9 LSTM512x3 p384 s2 0.98874 05 dog-ust-state-752-s9 LSTM512x3 p384 s3 0.98872 05 dog-ust-state-752-s9 LSTM512x3 p384 -- 0.98815 -0.00058 0.00005 006 06 dog-uva-state-648-s9 LSTM512x3-200 p320 s0 0.99047 06 dog-uva-state-648-s9 LSTM512x3-200 p320 s1 0.99047 06 dog-uva-state-648-s9 LSTM512x3-200 p320 s2 0.99047 06 dog-uva-state-648-s9 LSTM512x3-200 p320 s3 0.99047 06 dog-uva-state-648-s9 LSTM512x3-200 p320 -- 0.99047 0.00000 0.00000 007 07 dog-uva-state-768-s9 LSTM512x3-200 p200 s0 0.99120 07 dog-uva-state-768-s9 LSTM512x3-200 p200 s1 0.99138 07 dog-uva-state-768-s9 LSTM512x3-200 p200 s2 0.99120 07 dog-uva-state-768-s9 LSTM512x3-200 p200 s3 0.99140 07 dog-uva-state-768-s9 LSTM512x3-200 p200 -- 0.98955 -0.00175 0.00010 008 08 dog-uva-state-768-s9 LSTM512x3-200 p256 s0 0.98947 08 dog-uva-state-768-s9 LSTM512x3-200 p256 s1 0.98931 08 dog-uva-state-768-s9 LSTM512x3-200 p256 s2 0.98939 08 dog-uva-state-768-s9 LSTM512x3-200 p256 s3 0.98936 08 dog-uva-state-768-s9 LSTM512x3-200 p256 -- 0.98819 -0.00119 0.00006 009 09 dog-uva-state-768-s9 LSTM512x3-200 p320 s0 0.98846 09 dog-uva-state-768-s9 LSTM512x3-200 p320 s1 0.98843 09 dog-uva-state-768-s9 LSTM512x3-200 p320 s2 0.98848 09 dog-uva-state-768-s9 LSTM512x3-200 p320 s3 0.98835 09 dog-uva-state-768-s9 LSTM512x3-200 p320 -- 0.98756 -0.00087 0.00005 010 10 dog-uva-state-768-s9 LSTM512x3-200 p384 s0 0.98793 10 dog-uva-state-768-s9 LSTM512x3-200 p384 s1 0.98807 10 dog-uva-state-768-s9 LSTM512x3-200 p384 s2 0.98810 10 dog-uva-state-768-s9 LSTM512x3-200 p384 s3 0.98798 10 dog-uva-state-768-s9 LSTM512x3-200 p384 -- 0.98741 -0.00061 0.00007 011 11 dyno-dxu-state-456-s GNN5-RO p200 s0 1.00377 11 dyno-dxu-state-456-s GNN5-RO p200 s1 1.00385 11 dyno-dxu-state-456-s GNN5-RO p200 s2 1.00370 11 dyno-dxu-state-456-s GNN5-RO p200 s3 1.00397 11 dyno-dxu-state-456-s GNN5-RO p200 -- 1.00193 -0.00189 0.00010 012 12 dyno-dxu-state-456-s GNN5-RO p256 s0 
1.00346 12 dyno-dxu-state-456-s GNN5-RO p256 s1 1.00343 12 dyno-dxu-state-456-s GNN5-RO p256 s2 1.00341 12 dyno-dxu-state-456-s GNN5-RO p256 s3 1.00340 12 dyno-dxu-state-456-s GNN5-RO p256 -- 1.00213 -0.00129 0.00002 013 13 dyno-dxu-state-456-s GNN5-RO p350 s0 1.00300 13 dyno-dxu-state-456-s GNN5-RO p350 s1 1.00308 13 dyno-dxu-state-456-s GNN5-RO p350 s2 1.00316 13 dyno-dxu-state-456-s GNN5-RO p350 s3 1.00299 13 dyno-dxu-state-456-s GNN5-RO p350 -- 1.00213 -0.00093 0.00007 014 14 dyno-kxc-state-524-s GNN5-RO-350 p350 s0 1.00250 14 dyno-kxc-state-524-s GNN5-RO-350 p350 s1 1.00255 14 dyno-kxc-state-524-s GNN5-RO-350 p350 s2 1.00268 14 dyno-kxc-state-524-s GNN5-RO-350 p350 s3 1.00245 14 dyno-kxc-state-524-s GNN5-RO-350 p350 -- 1.00158 -0.00096 0.00009 015 15 dyno-kxc-state-580-s GNN5-RO-350 p350 s0 1.00200 15 dyno-kxc-state-580-s GNN5-RO-350 p350 s1 1.00209 15 dyno-kxc-state-580-s GNN5-RO-350 p350 s2 1.00218 15 dyno-kxc-state-580-s GNN5-RO-350 p350 s3 1.00194 15 dyno-kxc-state-580-s GNN5-RO-350 p350 -- 1.00107 -0.00098 0.00009 016 16 dyno-rjl-state-708-s GNN4-RO-350 p200 s0 1.00572 16 dyno-rjl-state-708-s GNN4-RO-350 p200 s1 1.00582 16 dyno-rjl-state-708-s GNN4-RO-350 p200 s2 1.00562 16 dyno-rjl-state-708-s GNN4-RO-350 p200 s3 1.00587 16 dyno-rjl-state-708-s GNN4-RO-350 p200 -- 1.00374 -0.00202 0.00010 017 17 dyno-rjl-state-708-s GNN4-RO-350 p256 s0 1.00503 17 dyno-rjl-state-708-s GNN4-RO-350 p256 s1 1.00487 17 dyno-rjl-state-708-s GNN4-RO-350 p256 s2 1.00496 17 dyno-rjl-state-708-s GNN4-RO-350 p256 s3 1.00504 17 dyno-rjl-state-708-s GNN4-RO-350 p256 -- 1.00352 -0.00146 0.00007 018 18 dyno-rjl-state-708-s GNN4-RO-350 p350 s0 1.00432 18 dyno-rjl-state-708-s GNN4-RO-350 p350 s1 1.00434 18 dyno-rjl-state-708-s GNN4-RO-350 p350 s2 1.00439 18 dyno-rjl-state-708-s GNN4-RO-350 p350 s3 1.00445 18 dyno-rjl-state-708-s GNN4-RO-350 p350 -- 1.00337 -0.00101 0.00005 019 19 dyno-wnq-state-476-s GNN4 p200 s0 1.00640 19 dyno-wnq-state-476-s GNN4 p200 s1 1.00645 19 dyno-wnq-state-476-s GNN4 p200 s2 1.00646 19 dyno-wnq-state-476-s GNN4 p200 s3 1.00660 19 dyno-wnq-state-476-s GNN4 p200 -- 1.00457 -0.00191 0.00007 020 20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s0 0.99903 20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s1 0.99885 20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s2 0.99879 20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 s3 0.99877 20 egg-jhn-state-604-s9 LSTM384x3-ATT p128 -- 0.99553 -0.00333 0.00010 021 21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s0 0.99426 21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s1 0.99452 21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s2 0.99433 21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 s3 0.99448 21 egg-jhn-state-604-s9 LSTM384x3-ATT p200 -- 0.99282 -0.00158 0.00011 022 22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s0 0.99456 22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s1 0.99457 22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s2 0.99457 22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 s3 0.99453 22 egg-jhn-state-604-s9 LSTM384x3-ATT p350 -- 0.99396 -0.00060 0.00002 023 23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s0 0.99215 23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s1 0.99226 23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s2 0.99228 23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 s3 0.99238 23 egg-jqn-state-572-s9 LSTM512x3-ATT200 p200 -- 0.99057 -0.00170 0.00008 024 24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s0 0.99058 24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s1 0.99062 24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s2 0.99072 24 egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 s3 0.99065 24 
egg-jqn-state-572-s9 LSTM512x3-ATT200 p384 -- 0.99003 -0.00061 0.00005 025 25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s0 0.99316 25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s1 0.99324 25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s2 0.99298 25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 s3 0.99322 25 fir-irl-state-456-s9 LSTM512x3-200-mod p200 -- 0.99145 -0.00170 0.00010 026 26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s0 0.99138 26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s1 0.99142 26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s2 0.99129 26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 s3 0.99142 26 fir-irl-state-456-s9 LSTM512x3-200-mod p300 -- 0.99050 -0.00087 0.00005 027 27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s0 0.99141 27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s1 0.99143 27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s2 0.99151 27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 s3 0.99137 27 fir-irl-state-456-s9 LSTM512x3-200-mod p350 -- 0.99071 -0.00072 0.00005 028 28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s0 0.98831 28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s1 0.98831 28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s2 0.98837 28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 s3 0.98838 28 egg-kvu-state-516-s9 LSTM512x3-ATT384 p384 -- 0.98771 -0.00063 0.00003 029 29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s0 0.98777 29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s1 0.98777 29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s2 0.98784 29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 s3 0.98781 29 egg-kvu-state-548-s9 LSTM512x3-ATT384 p384 -- 0.98717 -0.00063 0.00003 030 30 goo-uel-state-652-s9 GOO-200 p200 s0 0.99403 30 goo-uel-state-652-s9 GOO-200 p200 s1 0.99415 30 goo-uel-state-652-s9 GOO-200 p200 s2 0.99398 30 goo-uel-state-652-s9 GOO-200 p200 s3 0.99416 30 goo-uel-state-652-s9 GOO-200 p200 -- 0.99241 -0.00167 0.00008 031 31 goo-uel-state-652-s9 GOO-200 p350 s0 0.99091 31 goo-uel-state-652-s9 GOO-200 p350 s1 0.99087 31 goo-uel-state-652-s9 GOO-200 p350 s2 0.99092 31 goo-uel-state-652-s9 GOO-200 p350 s3 0.99093 31 goo-uel-state-652-s9 GOO-200 p350 -- 0.99013 -0.00078 0.00002 032 32 goo-wqz-state-664-s9 GOO-384 p384 s0 0.99036 32 goo-wqz-state-664-s9 GOO-384 p384 s1 0.99049 32 goo-wqz-state-664-s9 GOO-384 p384 s2 0.99043 32 goo-wqz-state-664-s9 GOO-384 p384 s3 0.99039 32 goo-wqz-state-664-s9 GOO-384 p384 -- 0.98976 -0.00066 0.00005 033 33 dyno-fdk-state-396-s GNN6-200 p200 s0 1.01130 33 dyno-fdk-state-396-s GNN6-200 p200 s1 1.01137 33 dyno-fdk-state-396-s GNN6-200 p200 s2 1.01136 33 dyno-fdk-state-396-s GNN6-200 p200 s3 1.01143 33 dyno-fdk-state-396-s GNN6-200 p200 -- 1.00933 -0.00204 0.00005 034 34 dyno-rqy-state-408-s GNN6-350 p350 s0 1.00987 34 dyno-rqy-state-408-s GNN6-350 p350 s1 1.00999 34 dyno-rqy-state-408-s GNN6-350 p350 s2 1.00999 34 dyno-rqy-state-408-s GNN6-350 p350 s3 1.01000 34 dyno-rqy-state-408-s GNN6-350 p350 -- 1.00892 -0.00104 0.00005 035 35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s0 0.98744 35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s1 0.98755 35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s2 0.98756 35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 s3 0.98758 35 egg-fsh-state-620-s9 LSTM512x3-ATT384 p384 -- 0.98687 -0.00066 0.00006 036 36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s0 0.98654 36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s1 0.98662 36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s2 0.98664 36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 s3 0.98666 36 egg-fsh-state-640-s9 LSTM512x3-ATT384 p384 -- 0.98595 -0.00066 0.00005 Total 
</code></pre> <h1>Combining the Models</h1> <p>I used a linear combination of a subset of these models. For a baseline, I simply calculated the optimal weights on all 37 models using scipy.optimize.minimize with the Powell method. Using least squares gives a roughly 20 bps worse score because the true metric is the mean angular error. The baseline score was .97948 on Batch 601 (used for weight calculation) and .97964 on Batch 602 (for validation). This is a 63 bp improvement over the best single model. Since I didn't want to calculate all 37 models for submission, I chose a subset of models. You cannot simply use the models with the highest weights to pick a subset because the models are correlated. For example, if model A and model B are the best and yet effectively the same, then their weights might be approximately equal and the highest of the bunch, but a subset should only include one of them. So I took an iterative approach, removing one model at a time and always dropping the one whose removal had the least negative impact. These are the results:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F651278%2Fa906b327fa59d5259a491c6098ce82eb%2Fscore_by_models.png?generation=1682004978180418&amp;alt=media" alt=""></p> <p>You can see that using too many models hurts cross-validation due to overfitting. I chose 6 models as a compromise between inference time and best score.</p> <h1>Lessons Learned</h1> <ol> <li>Learn about Transformers!</li> <li>Have a better framework in place to evaluate different models and parameters without pulling my hair out when changing stuff.</li> <li>Spend less time searching for small tweaks when there is a large gap to the leaders.</li> <li>To save time, fine-tune existing models when adding things rather than training from scratch.</li> </ol> <h1>Things that didn't work for me</h1> <ol> <li>I could not get the von Mises-Fisher loss to work.</li> <li>Adding additional features, like number of neighbors, sensor type, or a line-fit estimated direction.</li> <li>Dropout or weight decay. My models tended not to overfit, probably because they were not complex enough.</li> <li>Pulse selection. I could never improve on random selection, which still mystifies me.</li> <li>More complex ensembles. I see <a href="https://www.kaggle.com/dipamc77" target="_blank">@dipamc77</a> did well with this, but I could not get XGBoost or a neural network to combine predictions, or even the embeddings, into anything better than weighted averaging.</li> </ol> <h1>Conclusion</h1> <p>Congratulations to all the top finishers! And a big thank you to all the people posting information to help everyone, to everyone for participating, and to the contest organizers for putting on such a great competition.</p>
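<p>As a concrete, hypothetical sketch of the procedure described in "Combining the Models" above: the blend weights are fitted with scipy.optimize.minimize (Powell method) directly on the mean angular error, and models are then dropped one at a time. The array names and shapes (preds, y_true) are assumptions for illustration, not the author's actual code.</p> <pre><code>import numpy as np
from scipy.optimize import minimize

def mean_angular_error(weights, preds, y_true):
    # preds: (n_models, n_events, 3) unit direction vectors; y_true: (n_events, 3).
    blend = np.tensordot(weights, preds, axes=1)                  # weighted sum per event
    blend /= np.linalg.norm(blend, axis=1, keepdims=True) + 1e-12
    cos = np.clip(np.sum(blend * y_true, axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

def fit_weights(preds, y_true):
    w0 = np.full(len(preds), 1.0 / len(preds))
    res = minimize(mean_angular_error, w0, args=(preds, y_true), method="Powell")
    return res.x, res.fun

def greedy_subset(preds, y_true, keep=6):
    # Repeatedly drop the model whose removal hurts the refitted blend the least.
    idx = list(range(len(preds)))
    while len(idx) > keep:
        candidates = []
        for drop in idx:
            rest = [i for i in idx if i != drop]
            _, err = fit_weights(preds[rest], y_true)
            candidates.append((err, drop))
        _, least_useful = min(candidates)
        idx.remove(least_useful)
    return idx
</code></pre>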
Lux AI Season 2
RL PPO Self-play Solution (Digs and Transfers Water, Survives Full Game)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Kaggle code submission: <a href="https://www.kaggle.com/code/sgoodfriend/luxs2-rl-algo-impls-v0-0-12-rl-ppo-selfplay" target="_blank">https://www.kaggle.com/code/sgoodfriend/luxs2-rl-algo-impls-v0-0-12-rl-ppo-selfplay</a></p> <p>Training repo: <a href="https://github.com/sgoodfriend/rl-algo-impls/releases/tag/v0.0.12" target="_blank">https://github.com/sgoodfriend/rl-algo-impls/releases/tag/v0.0.12</a> (the specific commit the model was trained on was <a href="https://github.com/sgoodfriend/rl-algo-impls/tree/1c3f35f47cfcdb542e10e1666c0a0968dc6f7779" target="_blank">1c3f35f</a>)</p> <p>Weights &amp; Biases report: <a href="https://api.wandb.ai/links/sgoodfriend/v9g3qfbd" target="_blank">https://api.wandb.ai/links/sgoodfriend/v9g3qfbd</a></p> <p>After 20 million steps, the agent learned the following skills:</p> <ul> <li>Dig for ice and transfer the ice to base. The agent usually manages to survive the game if the opponent doesn’t intervene.</li> <li>Grow lichen on the map over the course of the game.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1298%2Fc5be5b5f760a11d32600d042c3e3868d%2FLuxS2MapResult.png?generation=1683140601660520&amp;alt=media" alt=""></p> <h2>RL Algorithm: PPO</h2> <p>The PPO implementation mostly reproduces <a href="https://github.com/vwxyzjn/ppo-implementation-details/blob/main/ppo_multidiscrete_mask.py" target="_blank">vwxyzjn/ppo-implementation-details/main/ppo_multidiscrete_mask.py</a> and uses similar hyperparameters to the Gym-μRTS paper [1]. Lux uses a much bigger map (48x48 vs 16x16 in the Gym-μRTS paper), which required a lot more memory to train. In order to get it to fit I reduced the <code>batch_size</code> to 1152 (from 3072) for an Nvidia A10 GPU and 2304 for an Nvidia A100. <code>n_steps</code> was reduced to keep the 4 minibatches per epoch.</p> <p>Initial bidding is a random distribution with mean 0 and standard deviation 5. Factory placement logic uses the logic from Abhishek Kumar’s <a href="https://www.kaggle.com/code/scakcotf/building-a-basic-rule-based-agent-in-python" target="_blank">Building a Basic Rule Based Agent in Python</a> with a slight weighing towards building factories near opponents.</p> <p><strong>“Spike” learning rate decay:</strong> Instead of using linear decay for learning rate, I implemented a learning rate schedule that started 100 times lower, raises to peak at 10% of total timesteps, and drops to 1000 times lower at the end. I hoped the initially low learning rate would allow me to use a higher peak learning rate (<code>4e-4</code>, about 60% higher than Gym-μRTS), since the biggest errors and jumps occur at the beginning. The estimated KL-divergence hovered around 0.005-0.02 for much of the training, not catastrophic, but a higher than the standard recommended 0.01 threshold.</p> <p><strong>Transitioning dense reward to sparse(-ish) reward over the course of training:</strong> At the beginning of training, the agent gets rewards for digging and transferring raw resources to the factory and gets penalized for losing factories. 
At the end of training, the agent only gets rewarded/penalized for win/loss, lichen generation, and difference vs opponent in lichen generation. In between, the reward is a linear interpolation based on number of training steps.</p> <p><strong>Training curve:</strong>The chart below shows ice generation significantly climbing at just before 5 million steps. At 3.3M steps, the agent is averaging a dig action’s worth of ice. By the end, ice generation averages over 40K . Water generation isn’t shown because it closely mimics ice except 4-fold less accounting for the conversion ratio. Episode length lags behind ice generation a bit, but crosses the important 150 step threshold at 5.2M steps (each factory was given 150 water+metal [last factory got less for any used in the bidding]). The training mean score tapers down to ~5 by the end because the reward by the end is primarily win/loss (+10/-10).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1298%2Fb89c39489680fd10a9e6b4b6cf73787a%2FSubmissionTrainingCharts.png?generation=1683140790910775&amp;alt=media" alt=""></p> <h2>Autocurriculum</h2> <p>The agent trained with half the agents playing themselves while the other half played agents with weights from prior versions. The prior agents were selected from snapshots taken every 300,000 steps with a window size of 33 (~10M step history). Every 144,000 steps (6000 steps per environment) the weights are switched out randomly.</p> <h2>Model: U-net-like</h2> <p>The Gym-μRTS paper uses an encoder-decoder; however, I went with something U-net-like as it was similar to the paper’s encoder-decoder with the addition of skip connections at each resolution level. The model scaled down the map 4 times to a size of 3x3 at the bottom. The model had ~3.8M parameters.</p> <p><strong>Critic:</strong> The value head was attached to the bottom of the U-net with an AvgPool2D to allow the map size to change while using the same model.</p> <p><strong>Actor:</strong> The actor head generates a MultiDiscrete action output for every grid position (what the Gym-μRTS paper refers to as Gridnet). Each grid position has the following discrete action outputs (30 bools):</p> <ul> <li>Factory action (4, includes do nothing)</li> <li>Unit action<ul> <li>Action type (6)</li> <li>Move direction (5)</li> <li>Transfer direction (5)</li> <li>Transfer resource type (5)</li> <li>Pickup resource type (5)</li></ul></li> </ul> <p>Transfer, pickup, and recharge action amounts are the maximum capable by the robot. The actions are enqueued repeating effectively indefinitely.</p> <h2>Input</h2> <p>The LuxAI_S2 PettingZoo ParallelEnv is wrapped in gym Wrappers and an AsyncVectorEnv reimplementation.</p> <p><strong>Mapping the Lux observation to Gym Box Space:</strong> The Lux observation dictionaries are converted to a Box space of WxH and ~80 boolean and float outputs. Each grid position has information on the following:</p> <ul> <li>Board ice, ore, rubble, lichen;</li> <li>Factory ownership, tile occupancy, cargo, and water survival info;</li> <li>Unit ownership, cargo, and enqueued action;</li> <li>Turn, day index, and day cycle info.</li> </ul> <p><strong>Invalid action mask:</strong> Actions that would be invalid to the LuxAI_S2 env are masked out by setting the logits of such actions to extremely large negative values, effectively zeroing the probability and gradients. 
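As a minimal, hypothetical illustration of this logit-masking trick (tensor names and shapes are my assumptions, not the author's code): <pre><code>import torch

def mask_logits(logits: torch.Tensor, legal: torch.Tensor) -> torch.Tensor:
    # logits: (batch, n_action_logits, H, W); legal: same shape, True where the action is allowed.
    # Illegal actions get a huge negative logit, so softmax assigns them ~0 probability
    # and effectively no gradient flows through them.
    return torch.where(legal, logits, torch.full_like(logits, -1e8))
</code></pre>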
Invalid actions masked out include:</p> <ul> <li>Not enough power to even enqueue actions</li> <li>Not enough power to move to a location (whether base power requirements or rubble) or illegal move into opponent factory</li> <li>No chance a transfer would lead to a valid destination (so if another friendly robot could be in that location, it’s not masked)</li> <li>Nothing possible to pickup</li> <li>Not enough power to dig or self-destruct</li> </ul> <p>Generating the above action masks “appears” to be very expensive. Training steps per second dropped from ~120/second to starting at 20-30/s. The reason I say “appears” is that over the course of training, training steps/second eventually climbs to ~250/s. My guess is that early slowness is from the game resetting every ~15 steps (without masking the agent picks invalid actions doing nothing vs the masked agent doing actions that cause factory destruction):</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1298%2F6982bb693c8e4eeee135597cae2606a3%2FTrainStepsPerSecond.png?generation=1683141143403629&amp;alt=media" alt=""></p> <h2>Performance</h2> <p>The above chart on training steps/second probably explains why my attempts at improving performance through increased caching and asynchronous execution of environments did not help very much. Even using a modified version of gym’s AsyncVectorEnv to handle passing along action masks through shared memory (<a href="https://github.com/sgoodfriend/rl-algo-impls/blob/1c3f35f47cfcdb542e10e1666c0a0968dc6f7779/rl_algo_impls/shared/vec_env/lux_async_vector_env.py" target="_blank">LuxAsyncVectorEnv</a>) helped little.</p> <h2>Reproducibility</h2> <p>The submitted agent was my first agent to exceed mean episode length above 150 steps (each factory got at most 150 water). This important milestone meant the model learned how to dig for ice and transfer the ice to the factory. This agent was also the best performing agent, outperforming followup agents during the competition and apparently outperforming an agent trained on the same commit:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1298%2Fade52a0a09a270700b41f16efef17bce%2FReproducingSubmissionTraining.png?generation=1683141239740251&amp;alt=media" alt=""></p> <p>The submitted agent is red. Blue is an agent trained at <a href="https://github.com/sgoodfriend/rl-algo-impls/commit/4f3a500cf1e575c55b06ebc2409d6537493f9207" target="_blank">4f3a500</a> (immediately after the submission version [v0.0.12] and meant to use the same hyperparams as the submission agent). The green line is being trained on the same commit as the submission, but it did worse than both!</p> <p>The submission agent isn’t a one-off. The blue line is trending towards 1000 step episode lengths (and most runs near the end are averaging over 990 steps [faded blue line]). With more steps and tweaking of the reward shape, training would likely catch up. However, this demonstrates how the same code and hyperparams can lead to drastically different training results.</p> <h2>Transfer Learning from a Simpler Map</h2> <p>The same U-net-like model can run on different sized maps since the actor is entirely convolutions-based and the critic uses an AvgPool2D before its fully-connected layers. 
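A minimal, hypothetical sketch of why that works (layer sizes are made up, and a global average pool stands in for the AvgPool2D mentioned above): <pre><code>import torch
import torch.nn as nn

class GridnetActorCritic(nn.Module):
    # Fully convolutional actor head plus a pooled critic head, so the same weights
    # accept 32x32 or 48x48 (or any other) map sizes.
    def __init__(self, in_channels: int = 80, n_action_logits: int = 30, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.actor_head = nn.Conv2d(hidden, n_action_logits, 1)   # per-cell MultiDiscrete logits
        self.critic_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hidden, 1))

    def forward(self, x):
        h = self.backbone(x)
        return self.actor_head(h), self.critic_head(h)

# model = GridnetActorCritic()
# model(torch.zeros(1, 80, 32, 32))   # works
# model(torch.zeros(1, 80, 48, 48))   # same weights, larger map
</code></pre>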
Therefore, I tried training the agent on a smaller map (32x32) with one factory each for 20M steps and used this trained model as a starting point for the same 20M step full-game training.</p> <p>The simple-env trained agent was promising. After 20M “pre-training” steps, the agent was producing water reliably (though at an average of 10 per episode, which isn’t enough to survive).</p> <p>Even more promising, if this resulting model was the starting point of the full-game training, the agent nearly immediately lasts over 150 steps. However, the agent took a while to get out of this local optima, taking ~15M steps before exceeding 300 steps in episodes. By 20M steps, episodes were going for ~600 steps, despite likely collecting enough ice to survive (15K ice → ~3750 water → 3.75 factory-games). This is about one-third of the ice of the submission agent.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1298%2F3e60de33ce8a76576f828ed1e9b05725%2FSmallAndTransferLearning.png?generation=1683141368677955&amp;alt=media" alt=""></p> <h2>Next Steps</h2> <p>20M steps is not nearly enough for RL in a game of this complexity. With more time, I plan on training the agents for 100M+ steps, but with the following improvements:</p> <ul> <li><strong>Distinct phases for reward weights, <code>gamma</code>, and <code>gae_lambda</code>.</strong> The thought is to have a Phase 1 where rewards are skewed towards mining and survival lasting 30-50% of the training depending on total steps. Phase 3 will be sparse win/loss + score difference rewards (with higher <code>gamma</code> [0.999] and <code>gae_lambda</code> [0.99] for end-game only rewards), with a transitionary Phase 2. This resembles the Lux Season 1 paper using training phases [2], but with the addition of a linear transition period.</li> <li><strong>Switch from lichen generation to power generation reward.</strong> Lichen generation is problematic because lichen can be lost by not watering, which the agent could exploit. “Current lichen” is a better metric, but I’m going to switch to power generation, which combines factory survival, number of robots, and number of lichen tiles.</li> <li><strong>Asynchronous rollout generation instead of asynchronous rollout step generation.</strong> LuxAsyncVectorEnv isn’t that efficient because if one of the 18 environments is resetting, then the entire step has to wait for map generation to complete. Even in the best case scenario of 1000-step games, a reset in one of the environments will happen every 1/55 steps. If instead each rollout of 192 <code>n_steps</code> (A10 GPU) runs independently, then the resets are handled more equally (at most once per environment for 1000-step games, almost one-quarter reset pauses). This effect should be significantly larger at the beginning of training.</li> </ul> <p>The goal is for the agent to learn more advanced behaviors such as building robots, clearing rubble, direct competition with the opponent, and late-game strategies.</p> <h2>References</h2> <p>[1] Huang, S., Ontañón, S., Bamford, C., &amp; Grela, L. (2021). Gym-μRTS: Toward Affordable Full Game Real-time Strategy Games Research with Deep Reinforcement Learning. arXiv preprint <a href="https://arxiv.org/abs/2105.13807" target="_blank">arXiv:2105.13807</a>.</p> <p>[2] Chen, H., Tao, S., Chen, J., Shen, W., Li, X., Yu, C., Cheng, S., Zhu, X., &amp; Li, X. (2023). Emergent collective intelligence from massive-agent cooperation and competition. 
arXiv preprint <a href="https://arxiv.org/abs/2301.01609" target="_blank">arXiv:2301.01609</a>. </p>
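<p>As an addendum to the PPO section above, here is a hypothetical sketch of the described "spike" learning-rate schedule (the linear ramp and decay between the stated points are my assumption):</p> <pre><code>def spike_lr(progress: float, peak_lr: float = 4e-4) -> float:
    # progress = fraction of total timesteps completed, in [0, 1].
    # Start at peak/100, rise to the peak at 10% of training, end at peak/1000.
    start, end = peak_lr / 100, peak_lr / 1000
    if progress < 0.1:
        return start + (peak_lr - start) * (progress / 0.1)
    return peak_lr + (end - peak_lr) * ((progress - 0.1) / 0.9)
</code></pre>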
Lux AI Season 2
Final notes and parameter tweaking
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>While the final games of Lux AI 2 are playing and I'm still uncertain of my final ranking in what's still looking like a close 2-5th place, I thought I'd do a little (long) writeup. I thought this was an exciting competition to take part in, with a deep and very well balanced game that really rewarded different types of strategies. I think we probably only just tapped the surface level of this game, and I'd definitely recommend checking it out even post-competition for fun or perhaps research</p> <p><strong>Approach</strong></p> <p>My main approach in this competition was to use C++ logic forming as a skeleton for a large selection (about 1500) of custom parameters that I would tune using custom made statistical analysis tooling. Everything from target selection to combat micro and move conflict management was deeply affected by these parameters. These parameters ranged from serving as a desire adjustment for further tiles, factory spawn tile evaluation, to booleans that could adjust critical if statements or even turn entire functionalities off to overarching multipliers like META_SIMCITY, which would adjust my strategic level desire for peaceful actions (collecting resources, clearing rubble) versus aggressive actions like trying to kill enemy units or digging up enemy lichen. I tried to get most 'opinionated' logic captured in at least some parameter so that I could discover whether my intuition was good. It often was, but there were some real stinkers in there too</p> <p>I chose this approach because I know it's a very reliable way for me to do well and because it's a lot of fun. I think it combines some of the good parts of rule based bots, such as the ability to really calculate out an optimal solution in places (eg. pathing) and a certain flexibility in strategy and game mechanics, while also taking some of the good parts of reinforcement learning, such as your software slowly intuiting optimal proportions of types of actions. I do believe that reinforcement learning strategies may have somewhat higher potential here, but I didn't go for that as the process has always felt a little unpredictable to me, and just not as fun. And well, I'm not a research group with the available hardware funds to achieve it reliably</p> <p><strong>Parameter Optimization</strong></p> <p>I tried to tune my parameters by running a lot of games, in each of which I'd randomize a significant proportion of the parameters (normal distribution for floats, uniform for ints). On every game I'd generate a file with the values of all parameters, whether I'd win or lose and also an assortment of game statistics I thought might be useful, for example the amount of light units I built, the amount of turns units spend digging rubble, the starting rubble on the map, etc. Then, outside of the game in my stats tool I built for halite 2, I would check to see if there was any way to improve the parameters. </p> <p>For that, I've tried several approaches and rewards. 
For rewards, some options I picked were chance of victory, the lichen difference with my opponent, the amount of power I produced, the match making rating of associated versions, and even a neural net output trying to estimate my matchmaking rating based on statistics such as the proportion of time my units spend mining ice.</p> <p>Using this reward, I'd try to find better values using several automatic and manual methods: I've used basic statistical tools such as the Pearson correlation coefficient and Spearman's rank correlation coefficient to see if there was a slope in the data (perhaps I'd win slightly more often if I made my units dig a little bit more ore). Neural nets and random forest tree learning approaches to try to find better values. Just plain old average values of integers or weighted averages of floats. And sometimes I'd just stare at graphs myself and hope to make sense of it. Basically I treated the parameter reward space as an unknown noisy multidimensional function to find a maximum on, but generally made the assumption that complicated interactions between the parameters would be small enough that I could fairly effectively hillclimb by adjusting individual parameters on their individual contributions. I just couldn't generate data fast enough to try and work with anything else </p> <p>I picked this type of approach over more standardized approaches like simulated annealing, because the amount of games I could run was tiny compared to the amount of parameters I had and there was a lot of noise and I was constantly changing the code as I went. Under those circumstances, that approach doesn't tend to do well</p> <p>After parameter changes, I'd check to see if there was an improvement on the leaderboard, and revert if not. The leaderboard was extremely noisy, so that may not always have been correct, but I figured it'd be correct more often than not. A nice advantage of this stats orientated approach is that I could often make improvements with low time and brain commitment. I would often take 5 minutes to automatically tweak and upload a new version or two before work, and then the same before going to sleep. Some of my biggest periods of growth were in weeks I had no time to mess with the code while doing this learning</p> <p>Initially I mostly had both of the agents in a game randomized and learning, for a double learning rate. Later I switched to static reference players to better track my progress. In the end I'm still not quite sure which is better. Static reference players give purer data, but less of it, and I felt there's some advantage in playing against more randomized players too to prevent overfitting. </p> <p>Some lessons learned here were:</p> <ul> <li>This learning approach, even despite all of its randomness, is pretty good at finding the low hanging fruit of improvements</li> <li>It's also very inconsistent and has a lot of trouble finetuning less important parameters in all the noise. </li> <li>You would think that if you have thousands of clean datapoints, where you know for sure a causation exists, and you take some parameter and your odds of victory get a Pearson correlation coefficient of 1.05 and a corresponding p-value of 0.0000001 for the null-hypothesis. And on top of that the average victory odds of that parameter are in fact observably higher on a graph. That if you then adjust this parameter upwards, you will start winning more, at least locally. 
Nothing could be further from the truth, almost every time it will have absolutely no measurable impact. I don't know what this implies, maybe just that I'm bad at stats, but I feel like there has to be some kind of fundamental effect here that consistently overexaggerates or invents apparent effects where there are none. I know it's not just that sometimes with many parameters you just hit the low p value odds, because it consistently happened to more parameters than you would expect. I feel like I might take some time post competition to more properly explore this</li> <li>Victory chance was probably overall the most effective at getting results. Optimizing for mmr or opponent win rate did almost nothing for me this time (but was very effective in halite). </li> <li>I found that varying my used reward seemed to work a bit better than consistently applying the same one. Likewise with the used approach for tweaking values. Some rewards I think had insidious side effects, such as if you optimize for proportion of turns spend digging, combat units may want to start dying on purpose. By varying rewards, that effect is limited a little</li> <li>Manual adjustments using a combination of the Pearson correlation coefficient and my insight into the games generally gave me the most gains. Automatic tweaking using Pearson was fairly effective too. For that I generally used: newvalue = oldvalue + (abs(oldvalue) * pearson * scaler), sometimes with a max change, or requiring a p value below some level. At the start I was very careful, using a low scaler ( &lt; 1), later I discovered using a fake parameter simulation thing, that high scalers and frequent tweaks actually get you to the best value much faster, even with lots of noise and low sample sizes ( &lt; 1000). That did seem to work better, though I often had to manually correct drift. I had a few parameters doing absolutely nothing, and these drifted from 1-&gt;0.88, 2-&gt;3.81, 3-&gt;3.7, 4-&gt;2.75, -5 -&gt; -5.39. Which honestly wasn't that bad. Meanwhile actual parameters I must've initially estimated wrong could shift by a factor of 10-100 over the course of the competition, I presume for the better</li> <li>Using neural networks and random forest trees to try to predict improvements was a waste of time. I think there just wasn't enough data, and enough random noise in parameters for extreme overfitting.</li> <li>Correlation between local results and the leaderboard were not great but tolerable</li> <li>It was critical to have a parameter indicating my version number, and to adjust the reward based on the average for that version. This erased a lot of weird artifacts and allowed me to fairly reliably use data across different versions without too much of an impact on results</li> <li>In the end I probably would've honestly done a little better if I didn't spend so much time on the parameter finetuning, and instead spend more time designing deep game specific algorithms and by watching games and fixing bugs, but I think the experience I gained using this approach was probably more valuable</li> </ul> <p><strong>Graphs and power</strong></p> <p>As a fun side effect of collecting in game data, I could easily find interesting observations and draw cool plots. Honestly I mostly underused this, but there was one graph that had a huge impact on my strategy. But first, let's introduce lichen power generation for those less familiar with it. <br> For every lichen tile you had, you would generate 1 power per turn. 
Power was a super critical bottleneck, so that's pretty nice. To create lichen you need to spend water at your factory to water your tiles. This costs ceil(tiles/10) water for both new and already existing lichen tiles. On turns you don't water, all your lichen tiles decay by 1. In most games, most water comes from ice mined by heavy units. You can expect to spend a minimum of 12 power per water, plus probably a bit of loss on inefficiencies. Now much has been analyzed about this mechanic already, but the basics are that ignoring weird integer cutoffs of the watering, you can expect that on a turn that you're watering, you have a cost of 1.2 power per tile, and a gain of 1, for a total change of -0.2. On tiles that you're not watering, you have a gain of 1 power per tile. So if you alternate watering and not watering, you can expect a gain of 0.4 power per tile per turn while keeping your lichen field intact. <br> This sounds pretty good. By just spending a bit of upfront power, you can grow a large, profitable lichenfield, which additionally is your source of points at the end of the game. Through the power gains, you could invest back even more into the factory, collecting more ice, more lichen, more power. About 1-2 months into the competition, I think the dominating thought was that this was a totally obvious way of making economies work.</p> <p>The main catch was, that it didn't actually really work. Now, results may be a bit different for other players, but for me it looked like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2Fc91f315e69ae44daa7a3a26b4cf64bc8%2Ficeprofit.png?generation=1682417547993146&amp;alt=media" alt=""></p> <p>This is a graph with on the x-axis, the amount of ice I've collected throughout the game ). On the y-axis, it's my total ice profit. Both are divided over starting factory count. Ice profit here is the amount of power I gained from my lichen, minus the amount of power I spend digging for lichen-ice (ice used for the factory upkeep is not counted). The blue lines are different fit lines, purple dots are victories, red dots are losses. The green/red line shows the running average with red meaning few data points</p> <p>Now I see two main things here. First, I never really made all that much of a profit from ice, on any game. Second, the amount of ice I collected doesn't affect profits as much as you'd hope. The only reason you seem to get a bit more profit on the far right end, is that those are overwhelming victories. For the bulk of games and for losses, there might even be a bit of a downward trend. Now critically, this graph did not include the rubble clearing costs and move/requeue costs necessary for the process. If I include the rubble clearing costs, you get this graph:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2F535e0ba0e1b9f2b10b2d73d8ff70475a%2Ficerubbleprofit.png?generation=1682417568260059&amp;alt=media" alt=""></p> <p>On top of this seeming lack of profits, most of the gains tend to be at the very end of the game where power isn't as useful anymore. Basically, big lichen power is a scam! There's no profit here. Reasons why range from the awkward shapes lichen fields often have to form, which means you require longer water investment stages, to enemy units clearing out your rubble.</p> <p>Of course you need the points so you might as well water the lichen early to at least sort of run even. 
But it just isn't quite what it was chalked up to be. Anyway, all of that is just kind of to illustrate that collecting data of this kind can be super useful. In this case, it led me to go for unit based power generation much earlier, and made me realize that the popular lichen canals used were actually not that good of an idea. While they saved you the rubble digging costs of digging out everything around the factory, the shape of the lichen actually meant you ran a bigger loss on the lichen itself, and additionally canals were very vulnerable to harassment. <br> On the other hand, building units for power generation was much more obviously profitable. You just need to have your units stand still often enough that they don't just burn their generated power. This was far from trivial in a many-units style approach like mine. Clogged situations around factories in particular can force many moves when scary enemy units pass by, and you gotta handle your conflicts so that units don't run into each other. Getting units to calm down and stop clogging was always one of my largest battles. In the end, most of my units ended up self sufficient, never taking any power from the factories, so that I could just direct most factory power to making more units. Here are two graphs showing ice and ore based power profits in a typical game for me (ice profits here ignore rubble costs):</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2F6cb5369b88d05c07c19ed43cb01901c2%2Ficeprofitabilityreplay.png?generation=1682417603348006&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2Feb5b40c0ba036755630f3e617aeca40f%2Foreprofitability.png?generation=1682417621598366&amp;alt=media" alt=""></p> <p><strong>Replay viewer</strong></p> <p>Besides my stats tool, I also used a custom replay viewer designed for my own replay format, which was arguably even more useful. I generated the replay data from within the code of my agent, allowing me to visually present not just objective game data, but also various forms of debugging data.</p> <p>Here's a quick peek at my replay viewer. The shape of units shows their movement. The fill of units shows their power for easy at-a-glance viewing. There are some little side-things too, like highlights for unit deaths (magenta) and transfers (yellow) and a water carrier highlight. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2F823005a28c53f2b01a83255bab8d7a37%2Freplayviewer.png?generation=1682417639130120&amp;alt=media" alt=""></p> <p>This image shows the 'base' desires for every tile on the map; this corresponds to how much units will want to go there, before unit-specific adjustments such as distance from the tile.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F557674%2F106ed8896acf4b109cde46ee836fad58%2Freplayviewerdebug.png?generation=1682417661219381&amp;alt=media" alt=""></p> <p>This viewer was insanely useful for debugging purposes. It's much easier to see what is going on when you can easily see it across the screen like this instead of having to dig through logs. For example, I had a bug in my spawning logic at some point, where I messed up an x and y coordinate, so factories would preferentially be spawned on the same column as tiles adjacent to ice. 
Probably would never have found it if I didn't spot the weird vertical lines in my spawn desire output</p> <p>For anyone who likes frequenting these kinds of competitions, I'd strongly recommend taking the time to invest a bit in both a reusable custom replay viewer and possibly your own tool to analyze stats. The time invested in any such infrastructure easily pays itself back through ease of debugging, and it's otherwise a great learning experience. </p> <p>Speaking of learning experiences. I largely chose c++ as my agents language of choice because I wanted to learn the language. That task mostly failed, in that I ended up mostly just sticking to some really basic usages of the language, but at least it was much more performant than alternatives might've been. I mostly used an ugly but effective style, where I had a large collection of global variables I could easily access everywhere at very low performance cost. One critical trick is to precompute everything you can precompute. Perhaps my most used thing throughout the codebase is to iterate over the neighbors of a tile. It was a huge help to just precompute that all once at the start, and never have to worry about performance impact and literal edge cases anymore. Outside of one awkward O(n^2) algorithms that went haywire near ~900 units I never really experienced timeouts. The game still was rather sluggish locally, but a significant part of that was I/O I couldn't really speed up without a total redesign. Oh yeah, preprocessor definitions to avoid logging overhead is cool</p> <p>Long, accurate action queues were very beneficial here, I mostly did queues through going turn by turn into the future, having every unit determine its course of action for that turn, resolving any move, transfer or pickup conflicts and then forward simulating the next turn. It unfortunately failed quite frequently, especially when units had to adjust for combat, made others scoot over, which then prevented transfers from landing, etc. Overall though my requeue costs were not that bad because I at least got the heavy units on the mines adjacent to the factories (and solar panels) to work predictably. The one thing I really wish I could've done is some system to really have significant information from predicted further turns carry back down to earlier turn in the queue. That unfortunately proved a little too challenging for me to come up with a good performant system, maybe in a follow-up competition!</p> <p>Throughout most of the competition and until the very end I've been in a tight race with a few other competitors. In particular ry_andy_, Tigga and Harm Buisman. Later to be joined by reinforcement learning player flg. Thanks to them for making this competition very exciting and of course to Stone Tao and to Bovard Doerschuk-Tiberi for organizing this and creating such a fun game. Hope any of that helps someone and I guess I'll see some of you in some future competitions!</p>
Lux AI Season 2
[76th place] Another fork of "Building a Basic Rule Based Agent in Python"
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>[76th place] Another fork of "Building a Basic Rule Based Agent in Python"</h1> <h2>Intro</h2> <p>First of all, big thanks to the organizers for hosting such a great competition! It was really interesting and fun all the time during the competition timeline!<br> Big thanks to the community for the inspiration and interesting discussions - it was really cool to be a part of it.<br> Special thanks to <a href="https://www.kaggle.com/scakcotf" target="_blank">@scakcotf</a> for posting their public kernel - it really helped me to start my season 2 journey; I'm not even sure I would have come up with anything without this baseline!</p> <h2>Approach</h2> <p>The solution is a rule-based agent improved from the <a href="https://www.kaggle.com/code/scakcotf/building-a-basic-rule-based-agent-in-python" target="_blank">scakcotf baseline</a>, consisting of mostly predefined factory behavior and somewhat dynamic robot reactions to the environment.</p> <h3>Early stage</h3> <p>Taken from the baseline with almost no changes.</p> <p>Bid 5, then place all the factories using a simple heuristic that scores each possible spawn location based on:</p> <ul> <li>rubble density</li> <li>distance to the closest ice</li> <li>distance to the closest ore</li> <li>distance to the closest opponent factory</li> <li>distance to the closest player factory</li> </ul> <h3>Main stage: Factories</h3> <p>Factories first build robots and assign their tasks based on a predefined sequence, then mostly stay inactive during the game. They also detect opponent appearances nearby and assign a robot to defend the area.<br> Starting from turn 800, a factory grows lichen if it has enough water to keep doing so for all the following turns (determined by a simple heuristic).</p> <p>The final agent does not heavily rely upon economics and does not dig ore or generate power. Only a few robots are built - one heavy and 4-5 light - which lets them avoid a severe lack of power, survive till the end if no intervention occurs, and prepare enough space for lichen growth.</p> <h3>Main stage: Robots</h3> <ul> <li><p>Each robot belongs to a particular factory, which is its place to return gathered resources and take power. </p></li> <li><p>Robots have one of the tasks given by the factory and obey the logic it determines. </p></li> <li><p>Every robot reacts to opponent closeness: it tries to destroy the enemy robot if it can, and escapes to a safe zone otherwise.</p></li> <li><p>Paths for robots are built using a graph shortest-path algorithm as described in <a href="https://www.kaggle.com/code/jtbontinck/shortest-path-on-mars-using-networkx" target="_blank">this notebook by jtbontinck</a>.</p></li> <li><p>Collision avoidance was handled in "real-time": every robot checks whether it can move further, and if the cell is already occupied - changes its direction.</p></li> <li><p>Actions were computed each turn, and action sequences were used for power usage optimization. 
The robot creates a sequence of all the following actions determined by its task, then compares the new sequence with the existing one: if the first action is the same, the queue is not overwritten.</p></li> </ul> <h4>Tasks: Ice</h4> <p>Task handled by the first heavy robots. It simply digs the closest ice and, when its cargo is full, returns to the factory to transfer the ice and take power. Additionally, an urgent transfer was added: if the factory is out of water and the robot has ice in its cargo - it returns immediately.</p> <h4>Tasks: Rubble</h4> <p>Handled by light robots - digs the closest rubble or enemy lichen, returns to the factory if discharged.<br> A few restrictions were added:</p> <ul> <li>ignore rubble on resource tiles</li> <li>ignore tiles that prevent opponent lichen from spreading</li> <li>consider rubble tiles only in a specific radius around the factory, or adjacent to lichen.</li> <li>if there is no rubble to dig - wait near the radius border to prevent crowding</li> </ul> <p>The last two restrictions, which were originally added to prevent robots from digging too far from the factory, accidentally resulted in some interesting behavior - when their base cleaning is done, robots start to help other factories with cleaning, and massively attack enemy lichen. Actually, these features helped achieve the last score gain and stay in bronze.</p> <h4>Tasks: Ore</h4> <p>Same as ice: find the closest resource - dig - back to the factory to unload and recharge. Was not used in the final agent.</p> <h4>Tasks: Kill</h4> <p>The logic for an attacking robot - find and go to the closest opponent robot, and try to fight it with simple common combat logic. Was not used in the final agent.</p> <h4>Tasks: Defend</h4> <p>The same code as Kill, but treated by the factory differently. When an opponent robot appears close to the factory - one of the factory heavies changes its task to defend and tries to strike the enemy. If the enemy is destroyed or has left the factory area - all defenders are recalled back to their normal tasks.</p> <h2>Summary</h2> <p>A few things I have learned during the competition:</p> <ul> <li>The rule-based approach is extremely fun</li> <li>Bugs. Bugs everywhere. All the time during the competition I got the bigger gains not by implementing features, but by making them actually work!</li> <li>Code organization really matters. Suboptimal code organization - suboptimal agent logic. Keep it in mind, and don't be like me 😄</li> </ul> <p>Thanks for reading, and hope to see you in season 3!</p> <blockquote> <p>P.S. Code can be found here - <a href="https://github.com/andrii0yerko/lux-ai-s2-2023" target="_blank">https://github.com/andrii0yerko/lux-ai-s2-2023</a></p> </blockquote>
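<p>As a small, hypothetical illustration of the action-queue handling described in the robots section above (not the repository's actual code): a new sequence is only sent to the environment, and the update cost paid, when its first action differs from the current queue.</p> <pre><code>import numpy as np

def maybe_update_queue(unit_id, new_queue, current_queue, actions):
    # new_queue / current_queue are lists of Lux action arrays; actions is the dict sent to the env.
    if current_queue and new_queue and np.array_equal(new_queue[0], current_queue[0]):
        return  # first action unchanged: keep the existing queue and save power
    actions[unit_id] = new_queue
</code></pre>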
Lux AI Season 2
[15th place] A goal-based logic bot & code.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first time competing in a competition of this kind and building a logic bot. I have found it extremely interesting to compete and recommend it to people who were on the fence about joining this year. It is a great experience to code your bot and then visually see it perform against other bots. The game design has been superb, creating some new types of challenges. There have been a few pitfalls for me in this competition, which I would like to share with you. Hopefully, this will help some people avoid them in the future.</p> <p>In this post, I will first quickly explain the game. Then I will go through my design. After that, I will go through my pitfalls and end by going through what I would have liked to do if I had had more time.</p> <h2>Python Code</h2> <p>For anybody interested in the code, it's stored on <a href="https://github.com/BartvonMeijenfeldt/LuxAISeason2Bot" target="_blank">GitHub</a>.</p> <h1>Game Summary</h1> <p>The LuxAI 2 challenge is a 2-player zero-sum game where both players compete to terraform Mars by building the most lichen in 1000 time steps. They start with a few factories and enough resources to build a few units. During the game, the units can collect two types of resources—ore for building new units, ice for keeping the factories alive and watering lichen. Lichen, besides counting for the score at the end of the game, also generates power that units need to perform actions. In general, power was a very scarce resource in this game, so efficient power usage was essential.</p> <p>There are two types of units: a light unit which is more power-efficient, and a heavy unit able to dig more per turn and therefore has higher resource utilization. Units execute actions and require power to perform these actions.</p> <p>If two units collide, the stronger one survives. If they are the same type and one unit is stationary, that one is lost. If they are both moving, the unit with the most power survives. When your units collide with each other, the same rules apply. Therefore avoiding collisions between your units is extremely important. This wasn't easy because you control your unit not by sending its next action but by updating an action queue which can easily consist of 50 actions. One could update the action queue for every unit at every step, but this is quite costly, as it costs power every time you update the queue.</p> <h1>Final design</h1> <p>I switched my design around a few times during the competition, but in the end, I ended up with the following design.</p> <h2>Factories decide on strategic focus</h2> <p>Each unit is assigned to the factory that spawns them, and the factory will decide, based on its current resources, the game time step, and the goals that it already scheduled, what currently is the highest priority strategy for its units to focus on. Some examples of these strategies are to increase lichen, collect ore, attack the opponent or defend a lichen tile. Based on these strategies, an attempt is made to schedule the most suitable unit for this task. 
The unit selection picks the unit that, at a glance, appears to have the highest value per step (explained in more detail in the get_best_case_value_per_step section). If none of the factory's units can be scheduled to aid in this strategy, an attempt will be made to schedule a unit on the next highest priority strategy. After a factory schedules a unit, the factory recalculates the priorities of the strategies again to decide where to schedule the next unit. This loop continues until all units are scheduled, or no unit can be scheduled on any relevant strategy.</p> <h2>Units carry out strategic focus with goals</h2> <p>I ended up with 11 different goals for units. These goals either directly contribute to fulfilling a strategy or help the unit survive (e.g. fleeing from opponents). For example, these goals include collecting ice at a specific coordinate (x, y), clearing out rubble at a specific coordinate or destroying a specific opponent's lichen tile. The abstract base class of these goals has the following methods: get_best_case_value_per_step, generate_action_plan and get_value_per_step_of_action_plan.</p> <h3>get_best_case_value_per_step</h3> <p>To evaluate goals, I used the following simple formula:<br> value_per_step = (power_benefit - power_cost) / nr_steps</p> <p>This method estimates, at a glance, this formula for the best-case scenario of scheduling a unit on a goal, without computing an exact action plan. To estimate the power benefit of each goal, I created a formula which contained some parameters I could tweak. Then, to decide on the parameters, I tuned them based on performance on the leaderboard.</p> <p>To quickly compute the best-case scenario, we assume that everything is optimal. E.g. to collect ice, we calculate the distance for the unit to get there assuming it is not obstructed by any other unit and there is no rubble on the way, assume it digs out the maximum amount of ice, assume it is day (meaning that power is generated each step), and then have it go back to the factory to deliver what it collected.</p> <p>These best-case values for goals are used to select the most suitable unit for a strategy. E.g. if we want to increase ore to build units, all units belonging to a factory that are not yet scheduled on a goal are asked to compute their best-case values of mining ore on the ore tiles around them. The unit with the best value is selected, and for this unit, we will carry out the expensive method, generate_action_plan.</p> <h3>generate_action_plan</h3> <p>To get from a goal to an action plan, a specific set of actions, you will need some type of planning algorithm. I used <a href="https://en.wikipedia.org/wiki/A*_search_algorithm" target="_blank">A*</a> as the basis. A* is an algorithm similar to the more famous Dijkstra's algorithm, but it can better prioritize the most promising node if you can set accurate lower bounds on the remaining cost to reach the endpoint. In a grid world, as in this game, a reasonable lower bound can be obtained based on the distance to the final node. Goals are broken down into some subgoals (e.g. move to a factory and pick up power, collect ice and transfer ice to the factory) to reduce the planning horizon, which improves computational performance significantly. However, the found solution can be slightly worse than the optimal solution.</p> <p>Another crucial task to solve is ensuring that your units don't collide. 
This is especially hard in this game, as we are not planning one step at a time per unit, but we update action queues containing multiple actions. For our planning algorithm, this means we must also consider other units' action plans. In the end, I used a reservation table for this. Which works in a first come, first serve manner. The first unit to be scheduled will reserve the time coordinates (x, y and t=time) corresponding to its action plan. Then any future unit planning will not be allowed to be on those coordinates and will generate its action plans accordingly.</p> <h3>get_value_per_step_of_action_plan</h3> <p>Given that we now have the exact action plan of the unit, we can calculate the value per step of this plan precisely. It was essential to do this, as in the middle game, often there are more units than worthwhile goals to carry out. If you always schedule your units on goals, you end up using too much power, and then your units will eventually have too little power left for the most important goals. Therefore, goals with negative values are never scheduled.</p> <h1>Pitfalls</h1> <h2>All units performing the same strategy</h2> <p>At the start, every unit thought for themselves what the most worthwhile goal was, and they were not assigned to a factory. Using a <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html" target="_blank">sum-assignment problem solver</a>, units were assigned to specific goals. This has one big drawback. Often all units ended up performing the same goal. E.g. if the parameters of ice mining were slightly more favourable than ore mining and there were many ice resources around the factory, the units would only collect ice, neglecting the ore resources completely. Then I would end up with a lot of ice but no ore to build new units. Similarly, if ore mining was slightly more favourable, I might end up with many units but no ice and, therefore, no lichen to generate power for these units. </p> <p>Therefore, some self-balancing rules were required in the planning algorithm. I ended up scheduling units to perform goals based on their factories' needs. After the factory schedules a unit on a goal, it recomputes its most important strategies using the last scheduled unit's goal. For example, suppose the factory scheduled the last unit to collect ore. In that case, it will soon build new units, so it will now prioritize the strategy of generating power for these future units.</p> <h2>Optimal goal resolution</h2> <p>The biggest pitfall I spent a lot of time on at the start was attempting to compute the perfect joint plan at every step. I was using conflict-based planning for this. The idea of this algorithm is that first, every unit comes up independently with its action plan to complete its goal. After this, we validate whether the joint plan is valid, i.e. does it not contain any collisions? If any units' paths collide, we split the problem into two sub-problems, one where the first unit is granted the time coordinate and one where he isn't. Then, in my implementation, I would reconsider the best distribution of goals given this restriction (if a unit is not granted its optimal path, it might be better to have the unit perform a similar goal on an unblocked path). Then the unit that was not granted its optimal path will generate a new action plan. If this is invalid again, we split the problem in two another time. 
In a branch-and-bound style, we focus on sub-problems whose optimal value is most promising and repeat this process until a node is found with the best-case joint value and without collisions.</p> <p>At the start, this worked out well and created joint action plans which were always optimal. But as I started to build more efficient economies and my unit size increased, collisions increased, and no collision-free joint plans were found in time. To solve this, I spent significant time optimizing the code to generate action plans under the time limit. But it appeared impossible to get this to work. Therefore, in the end, I switched to using a reservation table, which, although might not grant the perfect solution, is magnitudes faster and allowed me to focus computational resources and, more importantly, my available programming time for this project on more important parts.</p> <h2>Under-utilizing logging</h2> <p>I underestimated the complexity of building a bot to control 100+ units. At the start, when a bug popped up, I figured out the problem using the debugger and carried on. But there were two big disadvantages in mainly using the debugger vs logging. One was that matches often took more than 20 minutes, so replicating this state in debug mode was time-consuming. Also, there were some recurring problems which are harder to detect in debug mode. For example, I had some situations where some units were scheduled on a goal, but then another unit got scheduled on the same goal. The original unit would be unscheduled but would later schedule themselves on this goal. Detecting this takes a decent amount of effort using a debugger, especially when other units are scheduled in between as well. But this is very easy to detect by logging every scheduling and unscheduling decision.</p> <h2>More parameter optimization and better evaluation</h2> <p>The best way to know if your bot is improving is by comparing it against opponents on the leaderboard. Therefore it is essential to make good use of your five daily submissions. Unfortunately, I focused too much on adding in all the functionality needed for the bot, planning to focus on optimizing the parameters at the end. But in the end, I realized there was a lot more I wanted to test than the number of uploads I had left. Looking back, I should have started using all my uploads from the start, as it takes little effort to tune a few parameters each day.</p> <p>To make the best use of these uploads, it helps to evaluate them as accurately as possible. Kaggle's rating system jumps around a lot, especially before the final submission date. For leaderboard reasons, this might be optimal, but for accurate performance evaluation in a day of games, there might be better options. I haven't found much information on the Kaggle rating system, but I believe it works similarly to the <a href="https://en.wikipedia.org/wiki/Elo_rating_system" target="_blank">Elo</a> and <a href="https://en.wikipedia.org/wiki/Glicko_rating_system" target="_blank">Glicko</a> rating systems. In these systems, the impact of recent games is more important than that of old games. This makes a lot of sense for the original domains of these methods, sports. But for static bots, you can be more accurate by using a method that assumes a stationary level of the competitor.</p> <h1>Improvements if I had the time</h1> <h2>Changing the goal-scheduling method</h2> <p>During the competition, I set up my scheduling in two different ways, both with their shortcomings. 
When units schedule purely by themselves, they might end up performing goals irrelevant to the factory. On the other hand, when factories focus only on the most critical strategy, this might lead to poorly accomplishing the most important strategy instead of efficiently accomplishing the second most important strategy. I would have wanted to combine the two methods by multiplying the goal's value by the strategy's importance. The importance of the strategy should then be a number whose average value is approximately 1. </p> <h2>More focus on attack/defence goals</h2> <p>Initially, I spent most of my time getting the planning algorithms to work correctly, quickly enough and without collisions. Later, I struggled with balancing my economy. There was also a lot of small functionality to build in, e.g. differentiating between a unit's private action plan and the action queue it sends to the environment and precisely calculating the power of this action plan, including the moments it will update the action queue.</p> <p>In the end, I ran out of time to properly implement the defence goals and also some attacking goals. I did implement clearing out the opponent's lichen decently, but defending ice tiles close to my base or defending my lichen was rushed and did not work properly, nor was it prioritized appropriately by the factories. Neither did I have the time to implement camping important opponent resources or capturing opponent's units low on energy and far away from their factory, where they are safe. Lacking these capabilities hampered the performance of my bot severely.</p> <h1>Conclusion</h1> <p>In conclusion, it has been a great experience. I want to thank Stone Tao for creating such a rich competition environment. I have learned a lot by competing here. I hope this write-up helps some people avoid my pitfalls for the next challenge. And lastly, I can't wait to see what LuxAI Challenge 3 will bring!</p>
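<p>The reservation table mentioned above can be illustrated with a minimal sketch. The class and method names below are illustrative only, not the bot's actual code; the point is simply that units claim (x, y, t) cells first come, first served, and later units plan around those claims.</p> <pre><code>
# Minimal sketch of a first-come-first-serve reservation table (illustrative names).
class ReservationTable:
    def __init__(self):
        self.reserved = set()  # (x, y, t) cells already claimed by earlier units

    def is_free(self, x, y, t):
        return (x, y, t) not in self.reserved

    def reserve_plan(self, start_xy, moves, start_t):
        """Reserve every time coordinate an action plan will occupy."""
        x, y = start_xy
        t = start_t
        for dx, dy in moves:        # here an action is just a move delta
            x, y, t = x + dx, y + dy, t + 1
            self.reserved.add((x, y, t))


table = ReservationTable()
table.reserve_plan(start_xy=(10, 10), moves=[(1, 0), (1, 0)], start_t=0)
print(table.is_free(11, 10, 1))  # False: claimed by the first unit
print(table.is_free(11, 11, 1))  # True: a later unit may plan through here
</code></pre>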
Lux AI Season 2
[10th place] Deimos's RL approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Deimos's RL approach</h1> <h2>Background</h2> <p>Our colleagues participated in the Lux Season 1 competition last year and conducted an in-depth study of the competition environment. During the preparation phase of this year's competition, we were invited by Stone to rewrite the Lux environment using the JAX framework, which resulted in a considerable speedup (the overall user experience still needs improvement and we plan to optimize further in the future). Therefore, we also wanted to participate in Lux Season 2 to gain first-hand experience.</p> <h2>Method</h2> <p>Our approach for this competition remains the PPO algorithm, and the specific implementation follows the off-policy approach provided in RLlib. Since the observation space of this season's competition environment is structured similarly to that of the previous season, we have adopted the neural network architecture outlined in [1] for the backbone of our model and have made our code available as a reference in [2]. However, we have also made some modifications to our model that are unique to this season's competition environment, which we will highlight in the following sections.</p> <h3>Feature space</h3> <p>The feature space consisted of three main components: global information, map features, and the action queue. Each component was designed to capture critical information that allowed our robots to operate efficiently in the given environment.</p> <ul> <li><p>Global Information<br> The first component of our feature space is global information, which includes the current time phase of the environment (step, cycle, hour) and the total number of robots and resources. We utilized one-hot encoding to represent certain features, resulting in a feature dimension of 32.</p></li> <li><p>Map Features<br> The map feature component contained information about the resources (lichen, ice, ore, rubble) present in each block as well as the robot (unit, factory) located in each block. The feature dimension was designed to be 48x48x30.</p></li> <li><p>Action Queue<br> The last component of our feature space was the action queue, which represented the current set of actions in robots' queues. This component had a feature dimension of 48x48x20x6.</p></li> </ul> <p>To encode the feature space, we replicated the encoded global information across every grid cell and used a ResNet to merge the three components efficiently. </p> <h3>Action related</h3> <p>The biggest challenge for RL players was managing the massive action space in the second season. To address this, we restrict the number of actions sent to a robot to one per turn, i.e., we completely ignore the action queue and turn it into a real-time control problem. Unlike flg's approach, we allow the action to be repeated, so some power can be saved.
Besides, we implement a relatively complicated conditional sampling as follows.</p> <h4>Sampling Order for action sequences</h4> <p>We sample the different dimensions of an action in the following order.</p> <ul> <li>Action type (move, transfer, pick up, dig, self-destruct, recharge, do nothing)</li> <li>Action direction (5 directions)</li> <li>Resource type for action (ice, ore, water, metal, power)</li> <li>Quantity of resources for action (float, 0-1)</li> <li>Infinite repeat or not (boolean)</li> <li>n (integer)</li> </ul> <p>The dimensions sampled before affect the dimensions sampled later, so the whole action space is modeled as a conditional probability distribution.</p> <pre><code> P(type, direction, resource, quantity, repeat, n) = P(type) * P(direction | type) * P(resource | type, direction) * P(quantity | type, direction, resource) * P(repeat | type, direction, resource, quantity) * P(n | type, direction, resource, quantity, repeat) </code></pre> <h4>Continuous Value Sampling</h4> <p>From the action space of the unit-type robot described above, we can observe that the quantity of resources operated on is a bounded real value in $[0,1]$. Generally, for actions involving continuous numerical values, researchers usually model them as a Gaussian distribution, but the Gaussian distribution is unbounded, so some truncation is needed. Here, we adopted a more reasonable distribution, the beta distribution, which naturally lies in $[0,1]$ (a small sketch of this is given at the end of this write-up). The advantages of this approach can be found in our reference at [3].</p> <h2>Model Architecture</h2> <p>In brief, it is an 8-layer-deep, 64-neuron-wide ResNet with additional feature pre-processing layers and action/value heads. The backbone is the same as [1], and the specific implementation can be found in [2]. </p> <h2>Training &amp;&amp; Performance</h2> <h3>Random Initialization</h3> <p>The distributed reinforcement learning framework consists of two types of processes:</p> <ul> <li>Rollout worker: interacts with the environment to generate data, and </li> <li>Trainer: performs policy optimization.</li> </ul> <p>As this competition is a perfect information game, we set the rollout workers to play against themselves, i.e., self-play. However, we found that the model did not explore new game states during the training process. Therefore, we downloaded a large number of replays from Kaggle, randomly selected various game states from them, and used these intermediate states as the initial settings for the self-play process. This helped our strategy explore many game situations that were difficult to reach through self-play; the data download procedure is described in the Lux Season 1 sharing blog.</p> <h3>CURRICULUM LEARNING PHASES</h3> <p>There are three main phases during the training process.</p> <ol> <li>dense reward + rule-based factory spawning</li> <li>dense reward + neural network factory spawning</li> <li>sparse reward + neural network factory spawning</li> </ol> <h4>Phase1</h4> <p>The reward is set to the dense reward shown in the table below, and in this phase we wrote a script for choosing the factory spawning location and initial water/metal. In addition, if a player loses the game, it will be penalized by a large negative reward, which cancels out all its past reward.
Thus, only the winner gets a positive reward in this phase.</p> <table> <thead> <tr> <th>Reward type</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>reward for victory</td> <td>0.1</td> </tr> <tr> <td>penalty for defeat</td> <td>-0.1</td> </tr> <tr> <td>light units increment</td> <td>0.04</td> </tr> <tr> <td>heavy units increment</td> <td>0.4</td> </tr> <tr> <td>ice increment</td> <td>0.0005</td> </tr> <tr> <td>ore increment</td> <td>0.001</td> </tr> <tr> <td>water increment</td> <td>0.001</td> </tr> <tr> <td>metal increment</td> <td>0.002</td> </tr> <tr> <td>power increment</td> <td>0.00005</td> </tr> <tr> <td>growing lichen</td> <td>0.001</td> </tr> <tr> <td>penalty for losing factory</td> <td>-0.5</td> </tr> <tr> <td>reward for surviving in this round</td> <td>0.01</td> </tr> </tbody> </table> <p>Most of the rewards can also be penalties. For example, if a player builds a light unit, it will get a reward of 0.04, but if it loses the light unit, it will get a penalty of -0.04.</p> <h4>Phase2</h4> <p>The dense reward described above is still used in this phase, but the factory spawning is now decided by the neural network. This phase is quite short. As long as robots know how to collect resources during phase 1, the neural network can learn to spawn factories in a few hours.</p> <h4>Phase3</h4> <p>In this phase, we switch to a sparse reward.</p> <table> <thead> <tr> <th>Reward type</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>reward for lichen increment</td> <td>0.001</td> </tr> <tr> <td>reward for surviving in this round</td> <td>0.01</td> </tr> </tbody> </table> <p>During this phase, the reward for lichen increment is zero-sum. All lichen reward a player gains also becomes a penalty for its opponent, so the goal for a player is to gain more lichen than its opponent.</p> <h1>Conclusion</h1> <p>Our training got off to a smooth start, and we could see the ELO score increase steadily. However, it was stuck around 1900 in the last two months. We did lots of experiments to improve the ELO score, but none of them worked. In general, our approach is quite similar to flg's. Both ours and flg's follow last year's champion's approach, but in detail we have different action space modeling, different rewards, and different curriculum learning phases. Maybe there is a bug; maybe our action space modeling is too complicated; maybe our reward is not good enough. We can't figure out the reason, because an RL system is too complex to debug. It consists of too many components, and each component has too many hyper-parameters. Most of the time, if you get one component wrong, you just get a bad ELO score silently, without any explicit exceptions. Despite all the above, we still want to share our approach with you. </p> <h2>References</h2> <pre><code>[1] Chen, H., Tao, S., Chen, J., Shen, W., Li, X., Yu, C., Cheng, S., Zhu, X., &amp; Li, X. (2023). Emergent collective intelligence from massive-agent cooperation and competition. arXiv preprint [arXiv:2301.01609](https://arxiv.org/pdf/2301.01609.pdf). [2] https://github.com/RoboEden/lux2-deimos [3] Po-Wei Chou, Daniel Maturana, Sebastian Scherer Proceedings of the 34th International Conference on Machine Learning, [PMLR 70:834-843, 2017](https://proceedings.mlr.press/v70/chou17a.html). </code></pre>
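<p>As a rough illustration of the Beta-distributed quantity sampling described in the "Continuous Value Sampling" section, here is a minimal PyTorch sketch. The layer names and the softplus-plus-one parameterisation are assumptions for illustration, not the team's actual code.</p> <pre><code>
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative head: maps a unit's embedding to Beta(alpha, beta) parameters.
class QuantityHead(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, 2)

    def forward(self, h):
        # softplus + 1 keeps both concentrations above 1, giving a unimodal Beta
        alpha, beta = (F.softplus(self.fc(h)) + 1.0).unbind(dim=-1)
        return torch.distributions.Beta(alpha, beta)


head = QuantityHead()
h = torch.randn(4, 64)              # embeddings for 4 robots
dist = head(h)
quantity = dist.sample()            # fraction of cargo/power, always in [0, 1]
log_prob = dist.log_prob(quantity)  # would feed into the PPO objective
</code></pre>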
IceCube - Neutrinos in Deep Ice
3rd Place - Attention + XGBoost Ensembler
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>3rd place Solution writeup - Attention + XGBoost</h1> <p>Participated on Kaggle after 3 years, first gold for me, had a really exciting time. 😃</p> <p><strong>Check the TL;Dr at the end for a short summary</strong> - The post is long with a mixture of model information and personal lessons.</p> <h2>The bitter lesson - Attention is all you need</h2> <p>This data is not a natural graph, and I'm confused by all the graph models being trained on it. The graph edges are artificially selected based on criteria such as time or distance, which I didn't think was the right approach. Hence I spent my time on LSTM and later moved to self-attention.</p> <h3>Base Model</h3> <p>The base model is a simple <strong>self-attention model</strong>. Most of the model code is from Andrej Karpathy's NanoGPT <a href="https://github.com/karpathy/nanoGPT.git" target="_blank">https://github.com/karpathy/nanoGPT.git</a></p> <p>The inputs are normalized pulse data with transparency information from Datasaurus' Dynedge baseline <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046</a> - Sequences are sampled up to a max length of 256, prioritizing sampling of the pulses where <code>auxiliary=False</code></p> <p>The outputs are <strong>two classification heads</strong>, one for each angle, with 128 bins each. I train it with a custom loss that locally smoothens the one-hot target vectors; more on that below.</p> <p>A small version of the model can be trained on an RTX 3060 to give a score close to ~1.02 in 1 hour. (Embedding size 128 - 6 layers).</p> <h3>The bitter lesson - Scale beats everything</h3> <p>The competition was undoubtedly compute-heavy, but I only had access to my home PC with an RTX 3060. Naturally, I spent a lot of time hand-designing features, optimizing the code, and trying ideas that felt like they should help. Only to have to throw out most of the experiments, which help a smaller model but get washed out when training a big model with more data.</p> <p>Notable ideas that helped but later got discarded:</p> <ol> <li>Simple augmentation like moving the points slightly, changing the charges etc.</li> <li>Supervised contrastive loss style training to predict the angle between two events.</li> <li>Multiple pooling instead of just average pooling at the end. (in fact this caused overfitting)</li> <li>Angle classifier on sphere coordinates instead of two classifiers.</li> </ol> <p>All of the above individually helped get better performance on a small model trained for 2-3 hours, but reached the same performance with bigger models. There is no overfitting in the models even with 20 batches of training data.</p> <p>The real improvements in score came from scaling the model. Training on more data was crucial as well; the final models I use train on 650 batches of data. They finally start to overfit after 4 cycles through the entire data. </p> <p>All the wasted days on experimentation remind me of Richard Sutton's blog on "The Bitter Lesson".
<a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html" target="_blank">http://www.incompleteideas.net/IncIdeas/BitterLesson.html</a> - Haha GPUs go bitterrrrrrr.</p> <h3>Scaling and engineering tricks</h3> <p>At the start of April, I decided to buy a new GPU just seeing how much compute is needed. Upgraded to RTX 4080, which is 3x faster. (No this is not the only scaling I did 😅, but more compute was crucial)</p> <p>Final submitted model has 18 self attention layers, embedding size 512. This reaches a LB score of 0.982, and I'm sure training a bigger model will get even better. I trained the same model with 15 layers, but had worse performance due to mixed precision instability.</p> <p>Engineering tricks</p> <ol> <li>Mixed precision training is much faster, I train with <strong>FP16 with FlashAttention</strong>. However this started getting unstable after reaching a score of 0.990, so I switched to FP32 after that. Only the bigger models showed this instability and I didn't get to investigate how to continue on FP16, probably the scales of the inputs or the loss can be managed.</li> <li>The sequence lengths in the dataset has high variance. After sampling the data upto 256 pulses, the average length of each event is close of 100. This means a lot of attention processing (when not using FlashAttention) is wasted on padding. To speed up training I <strong>group the batches by event length</strong>. This gives a 1.5x speedup over FlashAttention and <strong>3.5x speedup</strong> over normal attention implementation. Packing also speeds up inference in the same ratios.</li> <li>Loading a portion of the training data for each "epoch". This is straightforward, switch out the bathces in the dataloader after each epoch, or do it as a background process for prefetch.</li> </ol> <p>Scaling worked incredibly well:</p> <ol> <li>128 Embedding - 9 Layers - 200 batches - Score ~1.001</li> <li>256 Embedding - 12 Layers - 400 batches - Score ~0.990</li> <li>512 Embedding - 18 Layers - 650 batches - Score ~0.982 (Some overfitting at the end)</li> </ol> <p>Further, I fine tuned the model for sequence lengths upto 3072, which gives a slight imporvement of 0.002.</p> <h3>Smoothed cross entropy loss</h3> <p>The Von Mises-Fisher Loss proved unstable with Attention model, and I never got it to go beyond 1.04 when training end to end.</p> <p>For the classifier I digitize the angles to 128 bins. Cross entropy loss works well, but I wanted to introduce inductive bias in the model that the clases are ordered, and not independent. To do so, I apply <strong>1D convolution with a gaussian kernel</strong> to the target one hot vectors. For azimuth this covolution has wraparound, for zenith I extend the bins based on the length of the kernel. (Notebook explaining the loss will be released later). This gives slightly more stable training, though with good hyperparam tuning even cross entropy might work just as well.</p> <h2>The sweet ending - Attention is not all you need</h2> <p>Obviously attention was not enough to win, 0.982 isn't even gold zone (cries in corner). However, the base model was cruicial for the next ideas to work.</p> <h3>Stack vMF model</h3> <p>Training an MLP with vMF loss on top of the encoder embeddings works. I dump 100 batches of average pooled encoder embeddings and train the stack model with it, achieves a score of ~0.978. 
A simple average ensemble of the vMF and base models gives a score of ~0.976.</p> <h3>The magic - Ensembling using XGBoost 🔥</h3> <p>I know the standard ensembling tricks, but this is the first time I have ensembled using another model. If you are aware of this being used in practice, <strong>please let me know if there is literature on this</strong>. I got the idea just 3 days before the end of the competition, barely got to test it properly, yet it worked like magic.</p> <p>My hypothesis on why this worked - the classifier and vMF models are far from perfect scores, and have high disagreement. Here's a histogram of the zenith angles.</p> <p>Both models have their own biases, and it's not obvious how to combine them.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2Fd4807872db2ad427f9484a1f6ac8ccbd%2Fzenith_histogram.png?generation=1681967255551488&amp;alt=media" alt=""></p> <p>First I tried a simple voting rule - if the two predictions are too far apart then don't average:<br> This already bumped the score from 0.976 -&gt; 0.973, already a top 5 solution.</p> <pre><code>avg_zenith = (zn_stack + zn_base) / 2
vote_zenith = avg_zenith.clone()
voter = torch.abs(zn_stack - zn_base) &gt; threshold  # threshold: cutoff constant (placeholder)
vote_zenith[voter] = zn_stack[voter].clone()
</code></pre> <p>Both models have some way of indicating "confidence" in the prediction: the probability of the bin for the classifier, and kappa for vMF. I played around with some thresholding for those, along with event lengths and z coordinates, and it bumped the score to 0.9715, so I decided to try XGBoost. On the first try the score went straight to 0.969. 🚀</p> <p>Further, I tried another boosting classifier to "predict which predictions are bad" and used it to flip the angles; this also gives a slight improvement in score.</p> <p>Adding some more simple features, the final score for the 18 layer model was 0.965 (I made a submission with the 15 layer model which got 0.967).</p> <p>I used sklearn's HistGradientBoostingClassifier for the final setup, but XGBoost has similar results.</p> <h3>Inference timing stat</h3> <p>The inference time for the base model is around 4 minutes per batch, and another 4 minutes for the long sequence model, which only gives a small boost. So it's possible to get a score of <strong>0.967 within 25 minutes</strong>.</p> <h3>Final submission</h3> <p>The final submission has 3 models: the 18 layer, the 15 layer, and another checkpoint from the same 18 layer run.</p> <p>Another boosting classifier is used to merge the 18 layer and 15 layer models, and then simple voting merges the 3.</p> <p>Here are the inference notebooks for the final submission and the single attention model.</p> <ol> <li>Final - <a href="https://www.kaggle.com/code/dipamc77/3rd-place-attention-xgboost-ensembler" target="_blank">https://www.kaggle.com/code/dipamc77/3rd-place-attention-xgboost-ensembler</a></li> <li>Attention only - <a href="https://www.kaggle.com/code/dipamc77/single-attention-model-inference" target="_blank">https://www.kaggle.com/code/dipamc77/single-attention-model-inference</a></li> </ol> <h2>Summary of learnings</h2> <ol> <li>Test the model for scaling, especially if it's underfitting, before trying feature engineering/other ideas.</li> <li>Spend the initial engineering effort to optimize the pipeline, to test ideas fast.</li> <li>If models have high disagreement and are not close to the max score, try to train a classifier to ensemble the models.</li> <li>Even though the final improvement came from ensembling tricks, scaling was the most important.
Bigger models will likely give even better results.</li> <li>Don't give up, I really threw everything in the kitchen sink to improve scores at the end!</li> <li>Buy more GPUs. 💸</li> </ol> <p>In hindsight, I should have tried some of the graph models as they might have disagreed with the classifier, and ensembling them might give a good improvement.</p> <h2>Notable Failed ideas</h2> <p>Didn't expect these to fail, and didn't abandon prematurely, but couldn't make them work.</p> <ol> <li><p>Finding noise in the dataset to ignore during training.</p></li> <li><p>vMF loss training with attention model.</p> <ul> <li>Edit - From reading the 2nd place solution, this might be because I didn't encode the inputs properly. Perhaps it was lack of understanding on my part. I thought encoding continuous variables with a normal FC layer is good enough. Clearly it did work with the classification loss though.</li></ul></li> <li><p>Reducing bias in zenith predictions by class weightage and oversampling.</p></li> </ol> <h2>TL;Dr - Final Solution summary</h2> <p>Pulses as tokens, sequence length upto 256, priority sampling for "real" pulses.</p> <ul> <li>(Self Attention (512 embedding, 18 layer, Avg pool, 2048 FF) + 2 Angle classifier heads) -&gt; ~0.982</li> <li>(Fine tuned attention model for sequence length upto 3072) -&gt; 0.980</li> <li>(MLP with vMF loss on attention model embeddings) -&gt; ~0.976</li> <li>(Combine classifier and vMF angles with boosting classifier) -&gt; ~0.965 🔥</li> <li>(15 layer + 18 layer + 1 checkpoint from same 18 layer run - combine with another boosting classifier) -&gt; ~0.962</li> </ul> <hr> <p>P.S - Doesn't matter what your rank is, please share your learnings and also failed ideas. It helps you organize what you learnt and get better insights, I'll make sure to read all of them.</p> <p><strong>Training Code - Please share your feedback.</strong></p> <p><a href="https://github.com/dipamc/kaggle-icecube-neutrinos" target="_blank">https://github.com/dipamc/kaggle-icecube-neutrinos</a></p>
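<p>A rough sketch of the Gaussian-smoothed classification targets described above (a minimal version for illustration, not the exact training code): the one-hot azimuth target is blurred with a normalized Gaussian kernel using circular padding, so that bin 0 and bin 127 stay neighbours.</p> <pre><code>
import torch
import torch.nn.functional as F

def smoothed_azimuth_target(azimuth, n_bins=128, sigma=1.0, kernel_size=7):
    """Gaussian-blurred one-hot targets for binned azimuth, with wraparound."""
    bins = (azimuth / (2 * torch.pi) * n_bins).long().clamp(0, n_bins - 1)
    onehot = F.one_hot(bins, n_bins).float().unsqueeze(1)        # (B, 1, n_bins)
    x = torch.arange(kernel_size) - kernel_size // 2
    kernel = torch.exp(-0.5 * (x / sigma) ** 2)
    kernel = (kernel / kernel.sum()).view(1, 1, -1)
    padded = F.pad(onehot, (kernel_size // 2, kernel_size // 2), mode="circular")
    return F.conv1d(padded, kernel).squeeze(1)                   # (B, n_bins), rows sum to 1

targets = smoothed_azimuth_target(torch.tensor([0.01, 3.14]))
logits = torch.randn(2, 128)
loss = -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
</code></pre> <p>For zenith, the same idea applies, with the bins extended at both ends instead of circular padding.</p>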
GoDaddy - Microbusiness Density Forecasting
44th Place Solution - Simple LGBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for the competition on a very interesting topic. It was a time-series forecasting problem that predicts the true future, and was a fun theme to solve. I understood that the United States is a very large and diverse country.</p> <p>This is a simple LightGBM, 44th Place Solution.</p> <h3>Solution Overview</h3> <ul> <li><p>Only LightGBM with 32 features </p></li> <li><p>Multiplier Prediction<br> I chose target as the multiplier between the previous month's data and the month to be predicted.</p></li> <li><p>Target smoothing<br> Because there were so many outliers in the data, I smoothed the target by taking the medians of the three cases before and after the target.</p></li> <li><p>Model to forecast 1-6 months later<br> I created several models to forecast 1-6 months later separately.</p></li> <li><p>Average of 3 months<br> Average the multipliers for 3 months, including the months before and after the month of forecast. This is more accurate.</p></li> <li><p>Conversion of population from 2020 to 2021</p></li> <li><p>Round(0) since it is a discrete value</p></li> </ul> <h3>My Notebook</h3> <p>I have published my notebook below.<br> <a href="https://www.kaggle.com/code/thajime/godaady-44th-solution-lgbm" target="_blank">https://www.kaggle.com/code/thajime/godaady-44th-solution-lgbm</a></p>
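<p>A small pandas sketch of how the multiplier target and the median smoothing described above could look. The column names follow the competition files, but the window size and other implementation details are illustrative guesses, not the notebook's actual code.</p> <pre><code>
import pandas as pd

# Toy frame standing in for train.csv
df = pd.DataFrame({
    "cfips": [1001] * 6,
    "first_day_of_month": pd.date_range("2022-01-01", periods=6, freq="MS"),
    "microbusiness_density": [3.0, 3.1, 2.9, 6.0, 3.2, 3.3],   # 6.0 plays the outlier
})
df = df.sort_values(["cfips", "first_day_of_month"])

# Target: multiplier from the current month to the next month, per county
df["target_mult"] = (
    df.groupby("cfips")["microbusiness_density"].shift(-1) / df["microbusiness_density"]
)

# Smooth the target with a centered rolling median over neighbouring months
df["target_smooth"] = (
    df.groupby("cfips")["target_mult"]
      .transform(lambda s: s.rolling(7, center=True, min_periods=1).median())
)
</code></pre>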
IceCube - Neutrinos in Deep Ice
11-th place solution: Attention, GNN, Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we are very happy to be a part of such a profound and interesting competition, many thanks to the organizers for their support and to all competitors who selflessly shared their knowledge and insights, <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a>, <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>, <a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a>, <a href="https://www.kaggle.com/edguy99" target="_blank">@edguy99</a>, <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a>, <a href="https://www.kaggle.com/pellerphys" target="_blank">@pellerphys</a>… Forgive me if I've missed someone; so many people helped us move forward. </p> <p>Many thanks to my great team QuData <a href="https://www.kaggle.com/synset" target="_blank">@synset</a>, <a href="https://www.kaggle.com/alexz0" target="_blank">@alexz0</a> and <a href="https://www.kaggle.com/semenb" target="_blank">@semenb</a>, you are awesome! We all come from Ukraine, and we wish to thank everyone who is supporting our people now!</p> <p>This competition was a great opportunity to unite a bit of relevant domain knowledge, as two of us were doing theoretical physics in the past, with the whole team's data science skills and knowledge. <br> Taking part in a big astrophysics experiment was novel and exciting, and we hope we made a useful contribution to finding out how the Universe works.</p> <h2> Source Code and Expanded Write-up </h2> <p>Our report turned out to be too large to comfortably fit in one post, so here are the links to</p> <ul> <li><a href="https://qudata.com/projects/icecube-neutrino/en/" target="_blank">More detailed version of the solution description and background</a></li> <li><a href="https://qudata.com/projects/icecube-neutrino/en/" target="_blank">Source code for data preparation, training and inference</a></li> </ul> <h2>General architecture</h2> <p> The solution has the following structure: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fd27747a4b9d7f2eb613032789cc095a0%2Fcommon_arch.png?generation=1681914442238565&amp;alt=media" alt=""></p> <p>Here are the important milestones in our progress during the competition:</p> <table> <thead> <tr> <th>Model</th> <th>Score LB</th> </tr> </thead> <tbody> <tr> <td>Training base GNN model</td> <td>1.012</td> </tr> <tr> <td>Adding another layer to GNN</td> <td>1.007</td> </tr> <tr> <td>Increasing the number of neighbors</td> <td>1.003</td> </tr> <tr> <td>GNN - Expanding Layers MLP</td> <td>0.996</td> </tr> <tr> <td>GNN - Replacing regression with classification</td> <td>0.992</td> </tr> <tr> <td>Transformer (aggregated pulses)</td> <td>1.002</td> </tr> <tr> <td>Transformer (separate pulses)</td> <td>0.995</td> </tr> <tr> <td>GNN-Transformer Ensemble (trained MLP)</td> <td>0.976</td> </tr> </tbody> </table> <p>We list only one score for each model, as due to the high quality data, the validation, public and private scores are all very close.</p> <p> "GNN" is a modification of the graph neural network architecture
for Neutrino Telescope Event Reconstruction <a href="https://github.com/graphnet-team/graphnet">GraphNet</a> (public score: <b>0.992</b>) </p> <p> "Transformer" is a combination of architectures including fully connected blocks, an attention mechanism, and a recurrent layer (public score: <b>0.995</b>) </p> <p> "Ensemble" is a neural network that aggregates the outputs of the "GNN" and "Transformer" models and predicts the direction (public score: <b>0.976</b>) </p> <h2>GNN</h2> <h3>Model</h3> <p>During the competition, we made many changes to the architecture of the graph neural network (described in detail at <a href="https://qudata.com/projects/icecube-neutrino/en/gnn.html">https://qudata.com/projects/icecube-neutrino/en/gnn.html</a>); its final state:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F000a4894518c3090f4fad6146a865c13%2Fgnn_arch.png?generation=1681914695253186&amp;alt=media" alt=""></p> <p>The input for this model is a graph. Each node corresponds to a pulse, and its features are the features of the pulse combined with the aggregated features of the event. To construct the topology of the graph, the <b>knn</b> algorithm is used. The resulting graph sequentially passes through layers, each of which modifies the feature space and also updates its topology. The outputs of all layers are combined and fed to the <b>Layers MLP</b> layer, in which the number of features is reduced to 256. Then a pooling operation is performed in which the features of all nodes of the graph are aggregated by the functions <b>min, max, mean</b>. In the last block, <b>Classification MLP</b>, the resulting embedding is converted into a class id, and each class is responsible for its own direction.</p> <p>Our progress that led to this architecture is described below.</p> <h3>Training and enhancing the model </h3><h4>Training base model (1.018 → 1.012)</h4> <p>Since we were limited in computational resources, we decided to proceed with retraining the best public GNN model from the notebook <a href="https://www.kaggle.com/code/rasmusrse/graphnet-baseline-submission">GraphNeT Baseline Submission</a> (special thanks to <a href="https://www.kaggle.com/rasmusrse" target="_blank">@rasmusrse</a>), which gave an LB score of <b>1.018</b>.</p> <p>We retrained the model on all batches except for the last one, which we used for validation. During training, the learning rate was reduced in steps, at those moments when the validation did not improve for about 100 epochs. As a result, after 956 epochs, the value of the metric dropped to <b>1.0127</b>.</p> <h4>Adding another layer (1.012 → 1.007)</h4> <p>Having a trained network, we tried to add another <b>EdgeConv</b> layer to it. In order not to learn from scratch, all layers in the new architecture were frozen except for the new one.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fc2015199cfa3d81a4f028d61458c239d%2Fgnn_plus_layer.png?generation=1681919370984185&amp;alt=media" alt=""></p> <p>In the frozen layers, weights were loaded from the model retrained at the previous stage and training continued.<br> The model in this mode learns quickly, and the metric also quickly reaches the level of the model from which the weights are borrowed. Then, unfreezing was performed, the learning rate was reduced, and the entire model was retrained in this mode.
After 1077 epochs, the metric reached <b>1.007</b>.</p> <h4>Increasing the number of neighbors (1.007 → 1.003)</h4> <p>In the original GraphNet library, the number of neighbors for building a graph is chosen as <b>8</b>.<br> We tried increasing this value to <b>16</b>. In the <b>EdgeConv</b> module the same MLP model is applied to all neighbors, and then the result is summed. Therefore, with an increase in the number of neighbors, the number of network parameters does not change, but at the same time it learns and works twice as slowly.</p> <p>Thus, we retrained the trained model from the previous stage with the new number of neighbors, and after 1444 epochs the metric reached a new low of <b>1.003</b>.</p> <h4>Expanding Layers MLP (1.003 → 0.9964)</h4> <p>Since the number of layers was increased, we thought it reasonable that the number of parameters of <b>Layers MLP</b>, which receives the concatenated outputs of each layer, should also be increased. The first layer of the <b>Layers MLP</b> module was increased from 336 to 2048. As before, all layers were frozen, the weights of the model of the previous stage were loaded, and training was continued. After 1150 epochs, the metric dropped to <b>0.9964</b>.</p> <h4>Replacing regression with classification (0.9964 → 0.9919)</h4> <p>Studying the best solutions, we paid attention to the notebook <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu">Tensorflow LSTM Model Training TPU</a> (thanks to <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>). From it we borrowed the idea to move from a regression problem to a classification one.</p> <p>The azimuth angle was uniformly divided into 24 bins. The zenith angle was also divided into 24 bins; here we worked with the cosine of the zenith, since it is the cosine that has a uniform distribution (as is seen from the statistics of all training events).</p> <p>Accordingly, we received a total of <b>24x24=576</b> classes (a small sketch of this binning is given at the end of this post). The last layer of the MLP was increased from [128,3] to [512,576], and the loss function was changed to <b>CrossEntropyLoss</b>.<br> We froze the entire model except for the last module, loaded the weights obtained at the previous stage, and continued training. After 967 epochs, the metric reached the value of <b>0.9919</b>.<br> This was the best result that we achieved for a standalone GNN model, and it was then used for<br> ensembling with other models.</p> <h3>What didn't help</h3> <p> During the competition we tried many other things that did not yield a noticeable improvement of the metric, or any at all. Some of the things we did: </p> <ul> <li> separately predicting zenith and azimuth angles; </li><li> changing the type of convolution to SAGE, GAT; </li><li> inserting a transformer after Layers MLP; </li><li> replacing pooling with an RNN (GRU); </li><li> inserting a TopKPooling layer; </li><li> using 4 or more features to build the event graph. </li></ul> <h3>What have not been done</h3> <p> There are also several approaches which we thought over but did not pursue to the end, mainly because of lack of time and hardware resources, to mention just a few: </p> <ul> <li> training the GNN from scratch, with scattering, absorption and DOM-embedding features; </li><li> training a classifier with a larger number of bins; </li><li> parallel training of transformer and GNN models.
</li></ul> <h2>Transformer</h2> <p> The model, which we named <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">"Transformer"</a>, is a combination of architectures that include fully connected blocks, an attention mechanism, and a recurrent layer. The typical metric achieved by this model was 0.995. </p> <p> The general architecture of the Transformer model looked like this. It receives two tensors as input: <b>(B,T,PF)</b> and <b>(B,EF)</b>, where <b>B</b> is the index of the sample in the minibatch and <b>T</b> is the index of the pulse number in the sequence. Each pulse was characterized by <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html#F"><b>PF=9</b> features</a>. The second tensor <b>(B,EF)</b> characterized the properties of the entire event with <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html#AF"><b>EF</b> more features</a>. These features were calculated for all pulses of the event (24 features) and inaddition only for the pulses marked with the auxiliary=0 flag (we'll shorten it to "aux"), 24 more. </p> <p> In some modifications of the architecture, aggregated features with <b>aux=0</b>, using a fully connected MLP block (multilayer perceptron) with one hidden layer, were compressed to a smaller dimension and then combined with the pulse feature tensor <b>(B,T,PF)</b>. The motivation for compression is related to the fact that the number of event features (24) was significantly greater than the number of more significant pulse features (9). In other architectures, only features of pulses were used. </p> <p> After concatenation, the resulting tensor <b>(B,T,F)</b> was fed to the second MLP block (below the Feature Generator). Its task was to increase the number of features to <b>E=128</b>. The resulting tensor <b>(B,T,E)</b> was fed to a chain of 10–12 transformer blocks. Each pulse, as a result of the mechanism of attention, "interacted" with all the pulses in the sequence. As a result of the work of the transformer, the pulses received new features in the space of the same dimension. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2Fd22171be96432ebf984027aaaf96bfc8%2Ftransformer_00.png?generation=1681979981110852&amp;alt=media" alt=""></p> <p>Next, the Integrator block converted the transformer output tensor <b>(B,T,E)</b> into a tensor of <b>(B,E')</b> dimensions.<br> At the initial stage of training, the concatenation of the operation of averaging and obtaining the maximum according to the features of all pulses acted as the Integrator block.<br> At the final stage, this block was replaced by an RNN layer integrating the event pulses.<br> One unidirectional recurrent layer with GRU cells was chosen as RNN.<br> In this case, the pulses of the event were sorted in reverse order by time.<br> As a result, the input of the last GRU cell (the output of which was used by the Integrator) received the most significant, that is the first, pulse.<br> </p><p></p> <p> The event feature tensor for all pulses and pulses with aux=0 was attached to the <b>(B,E')</b> tensor obtained after the Integrator. The resulting tensor <b>(B,E'+2*EF)</b> was fed to an MLP whose three outputs were Cartesian components of the direction vector predicted by the regression model. 
</p> <p> The transformer had a standard architecture with 16 heads in the attention block and trainable weights on skip-connections: Position embedding was not used (although we experimented with it early on). Time was one of the features of a pulse, and apparently that was enough. Numerous, but not final :) experiments were carried out on various modifications of this architecture. In particular, we studied the version where the final layer was a classifier in 10k directions in space, as well as a version in which the transformer consisted of two parallel chains of blocks that were eventually merged together. The feasibility of such architectural modifications requires further research. </p> <p><a></a></p> <h3>Dataset</h3> <p> The transformer training requires the same number of pulses for all examples in the minibatch. The standard solution to this problem is masking tokens to align sequence lengths. However, given our computational capabilities, this was a very wasteful method. So we went the other way. Every 5 training batches were collected in a pack of 1m examples. These examples were sorted by sequence length and combined into groups of the same length. Accordingly, the data were divided into minibatches within each group. After that, the minibatches of all groups were mixed. One learning epoch consisted of one pack. Then the next pack was loaded. </p> <p> Another idea that allows you to cope with limited memory was related to the variable size of the minibatch. The memory requirements increase quadratically with the length of the sequence T. Therefore, long sequences can be collected into shorter mini-batch, and short sequences into longer batches. Thus, several hyperparameters were set: T_max - the maximum length of the sequence, batch_size - batch size for maximum length and batch_max - upper limit of minibatch size for short sequences. For examples of length T, the minibatch size was determined by the following formula: </p><pre>batch_size = min(int(batch_size * (T_max/T)**2), batch_max) </pre> This led to approximately the same memory consumption for batches with long and short sequences. <p></p> <p><a></a></p> <h3>Training implementation</h3> <p> Let's show a typical training start schedule (pulses are DOM-aggregated). An error of the order of 1.020 when learning from scratch is achieved in 60-70 million examples (300 batches, not necessarily new ones). The total time on the standard T4 card is about 12 hours. This stage was done at a constant learning rate lr=1e-3 with the Adam optimizer: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F6bf188353b365fde71a4e1a94fe91b82%2Fexp_large_01.png?generation=1681973550189176&amp;alt=media" alt=""></p> <p> It took significantly longer to get close to error 1.002. To achieve it, the learning rate was gradually decreased to 1e-5 (below, the dotted brown line). At the level of 1.004, there was a transition to large maxima of the sequence lengths. This immediately reduced the error by 1-1.5 points. </p> <p> Further training required a transition to non-DOM-aggregated pulses and took a few more days on the A100 card. </p> <p> During the training process, we analyzed the value of the weights on various blocks of the model and gradient propagation through them. 
A typical analysis chart looked like this (see details in <a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">our report</a>): </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F2cd7550542fa9f05041b1d8266aa9756%2Fexp_large_grad_01.png?generation=1681973578263913&amp;alt=media" alt=""></p> <h2>Ensemble set-up</h2> <p> The results of the architectures we trained (<a href="https://qudata.com/projects/icecube-neutrino/en/transformer.html">Transformer</a> and <a href="https://qudata.com/projects/icecube-neutrino/en/gnn.html">GNN</a>) were not strongly correlated with each other. This made it possible to form ensembles of models, which gave a significant improvement in the final metric. In addition, thanks to the remarkably large dataset provided by the organizers, additional progress in the metric could be obtained using a trained ensemble. The general architecture of the ensemble is described below. </p> <h2>Ensemble Models</h2> <p> In each architecture, several models were selected, obtained with different hyperparameters or architecture options. 80 batches were passed through these models, and the results formed a dataset for the ensemble. </p> <p> Below are the initial data of the selected models. The first column of numbers is the base metric of the competition (mean angular error). The next two columns are the azimuth error and the absolute zenith error of the model (all validations were carried out on the first 5 batches): </p> <pre>id   ang_err   az_err   ze_err     model
0    0.9935    1.035    0.606    | gnn_1
1    0.9922    1.033    0.595    | gnn_2
2    0.9963    1.039    0.598    | gnn_3
3    0.9954    1.018    0.612    | att_1
4    0.9993    1.022    0.636    | att_2
---------------------------------------------------------------
     0.9846    1.021    0.540    | simple mean
</pre> <p> The criteria for selecting models for the ensemble were the angular error and the correlation coefficients of the angular error between different models: </p> <pre>       0      1      2      3      4
0         0.959  0.950  0.807  0.781
1  0.959         0.934  0.820  0.795
2  0.950  0.940         0.808  0.783
3  0.807  0.820  0.808         0.940
4  0.782  0.795  0.783  0.940
</pre> <p> Simply averaging the vectors predicted by each model resulted in a <b>0.9846</b> error. </p> <p> Next, we built an ensemble with trainable weights (one multiplier per model): <b>n</b> = w<sub>1</sub> <b>n</b><sub>1</sub> + ... + w<sub>5</sub> <b>n</b><sub>5</sub>. This reduced the error to <b>0.9827</b>. In this case, the regression weights of the models had the form: </p> <pre>gnn_1: 0.934, gnn_2: 1.346, gnn_3: 0.753, att_1: 1.466, att_2: 0.477 </pre> <h2>Training Ensemble</h2> <p> Further advancement of the metric was achieved by a trained ensemble based on a neural network.<br> Several architectures were considered, including a transformer and adding aggregated event features to the model outputs.<br> However, they turned out to be no better than a conventional MLP with one hidden layer.<br> A tensor <b>(B,N,3)</b> was fed to the input of this network, where B is the number of examples in the batch and N is the number of models,<br> each of which produced three components of the direction vector.<br> This tensor was reshaped to dimension <b>(B,3*N)</b>.<br> After the MLP, its dimension became <b>(B,3)</b>.</p> <p> The number of neurons in the hidden layer ranged from 128 to 2048. However, all these models gave approximately the same results. For training, the best loss appeared to be the cosine between the predicted and the target vector.
The learning rate was quite high, <b>lr = 1e-3 - 1e-4</b>, and the Adam optimizer was used. </p> <p> The best metric, as determined by validation on the first 5 batches, was <b>0.9796</b>. Below are some typical learning curves: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1667828%2F2342fe7cc1946eaaee3ca0f42d8a4339%2Fensemble_train.png?generation=1681973858282499&amp;alt=media" alt=""></p> <h3>Source code</h3> <p>The source code of our solutions can be found at <a href="https://github.com/QuDataAI/IceCube">https://github.com/QuDataAI/IceCube</a> and in Kaggle public notebooks.<br> Below are links to the modules:</p> <ul> <li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Dataset-kaggle.ipynb">IceCube-Dataset-kaggle</a> - creating a dataset for the transformer; </li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Transformer-kaggle.ipynb">IceCube-Transformer-kaggle</a> - transformer training; </li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube_GNN_train_colab.ipynb">IceCube-GNN-train-colab</a> - GNN training; </li><li> <a href="https://github.com/QuDataAI/IceCube/blob/main/kaggle_2023/IceCube-Ensemble-kaggle.ipynb">IceCube-Ensemble-kaggle</a> - Ensemble training; </li><li> <a href="https://www.kaggle.com/andreybeyn/icecube-ensemble-submit">IceCube-Ensemble-submit</a> - Ensemble submission; </li><li> <a href="https://www.kaggle.com/code/synset/icecube-qudata-line-fit-method">IceCube_QuData_Line-fit_Method</a> - modifications of the Line-fit method; </li><li> <a href="https://www.kaggle.com/code/synset/icecube-qudata-anisotropy">IceCube_QuData_Anisotropy</a> - anisotropy analyses; </li></ul> <h3>What have not been done</h3> <p> There are also several approaches which we thought over but did not pursue to the end, mainly because of lack of time and hardware resources, to mention just a few: </p> <ul> <li> Training the GNN from scratch, with scattering, absorption and DOM-embedding features; </li><li> Training a GNN classifier with a larger number of bins; </li><li> Parallel training of transformer and GNN models. </li><li> Using a bidirectional recurrent layer as the integrator in the transformer. </li><li> Using the transformer pre-trained on the regression task to detect noise pulses, for their subsequent filtering. </li><li> Replacing the regressor with a classifier in the transformer, for a large set of directions (points on a sphere). A similar model got to an error of 1.012, but there was not enough time to retrain it and include it in the ensemble. </li><li> Training a model on a subset of the data (for example, single-string events, two-string events, etc.), obtaining a set of models that specialize in specific event patterns. </li><li> Adding position embedding in the transformer. </li></ul> <h2>Conclusions</h2> <p>Thanks to the organizers for the large dataset, without which we would not have been able to train complicated architectures and combine them into an ensemble.</p>
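<p>A minimal sketch (for illustration, not taken from the repository above) of the 24 x 24 binning used when replacing regression with classification: azimuth is binned uniformly in [0, 2π) and zenith is binned uniformly in cos(zenith), giving 576 classes.</p> <pre><code>
import numpy as np

N_AZ, N_ZEN = 24, 24

def angles_to_class(azimuth, zenith):
    az_bin = np.clip((azimuth / (2 * np.pi) * N_AZ).astype(int), 0, N_AZ - 1)
    # cos(zenith) is uniformly distributed over events, so bin it uniformly in [-1, 1]
    cz_bin = np.clip(((np.cos(zenith) + 1) / 2 * N_ZEN).astype(int), 0, N_ZEN - 1)
    return az_bin * N_ZEN + cz_bin            # class id in [0, 576)

def class_to_angles(cls):
    az_bin, cz_bin = cls // N_ZEN, cls % N_ZEN
    azimuth = (az_bin + 0.5) / N_AZ * 2 * np.pi          # bin centres
    zenith = np.arccos((cz_bin + 0.5) / N_ZEN * 2 - 1)
    return azimuth, zenith

cls = angles_to_class(np.array([0.3, 5.9]), np.array([1.2, 2.8]))
print(cls, class_to_angles(cls))
</code></pre>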
1st and Future - Player Contact Detection
45th place solution (the most simple method to get seliver madel)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to our teammates.<br> Our method is made up of a tabular model and a CNN model.<br> First, the tabular model is based on <a href="https://www.kaggle.com/code/ahmedelfazouan/nfl-player-contact-detection-helmet-track-ftrs" target="_blank">this model</a>.<br> Second, we add (mean, std, max, min) aggregation features from the helmet trajectory data (train_player_tracking.csv) to increase LB from 0.650 to 0.684 (a rough sketch of this aggregation is shown below).<br> Third, we add step_rate (0.684-&gt;0.693) from the video.<br> We then apply TTA to the <a href="https://www.kaggle.com/code/zzy990106/nfl-2-5d-cnn-baseline-inference" target="_blank">CNN model</a> to increase LB from 0.667 to 0.671.<br> Finally, we add the CNN model's prediction to the tabular model.<br> After doing this, we get 0.724 (public score) and 0.728 (private score).<br> Our <a href="https://www.kaggle.com/code/yoyobar/cnn-with-feature/notebook" target="_blank">code</a>.<br> Cheers!</p>
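<p>A rough sketch of the kind of (mean, std, max, min) aggregation over the tracking file. The grouping keys and the choice of columns here are illustrative guesses, not the team's exact feature set.</p> <pre><code>
import pandas as pd

tracking = pd.read_csv("train_player_tracking.csv")

# Summary statistics of each player's tracking signals within a game_play
agg = (
    tracking.groupby(["game_play", "nfl_player_id"])[["speed", "acceleration", "distance"]]
            .agg(["mean", "std", "max", "min"])
)
agg.columns = ["_".join(col) for col in agg.columns]
agg = agg.reset_index()

# These columns are then merged onto the tabular model's feature table, e.g.
# features = features.merge(agg, on=["game_play", "nfl_player_id"], how="left")
</code></pre>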
1st and Future - Player Contact Detection
Result of Late Submission: 2D-CNN + XGB + 1D-CNN (Private LB: 0.78703)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 1st and Future - Player Contact Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will share my experiment results after the competition (LB: 0.78703) for future reference. My original solution is available <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391607" target="_blank">here</a> (Private LB: 0.76741).</p> <p>First of all, I appreciate all the solution write-ups by others. I got much insight from their solutions. For example, giving numeric features to the CNN as isolated channels is from Team Hydrogen's solution, and using player-anyone &amp; player-ground contact information is adopted from Qishen and Bo's solution (and from the 18th place team).</p> <h2>What I Did on the Late Submissions</h2> <ol> <li>use <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/393400" target="_blank">less-player-duplicated fold split</a></li> <li>5-channel 2D-CNN (channel design is the same as <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/391740" target="_blank">Team Hydrogen's solution</a> except for not using 2.5D)</li> <li>add group feature &amp; lag feature of 1st/2nd stage prediction score (+group +lag)</li> <li>add player-anyone contact feature (+group +lag)</li> <li>add player-ground contact feature on player-player model (+group +lag)</li> <li>apply player-player sequence-level pruning</li> <li>add 4th-stage 1D-CNN</li> </ol> <h2>Discussion</h2> <p>The 5-channel 2D-CNN gave me a notable boost over my original architecture, a 3-channel 2.5D-CNN (+0.54%); it also greatly reduces train/scoring time.</p> <p>One of the largest gains is from the strictly split CV. As I already wrote in <a href="https://www.kaggle.com/competitions/nfl-player-contact-detection/discussion/393570" target="_blank">the post</a>, the fold split with less player duplication drastically improves CV/LB correlation. Thanks to this fold split, I could boost LB further with additional features on the 3rd stage (+0.87%).</p> <p>Sequence-level pruning (+0.16%) and the 1D-CNN (+0.26%) boosted the score further although there was no gain on CV. One possible reason is that the CV I used is too strict and the test data may contain a few players that also appeared in the train data.</p> <h2>Tricks to speed up scoring time</h2> <p>I also reduced scoring time with the following tricks.</p> <ul> <li>use 2D CNN instead of 2.5D (3-4h -&gt; 2h)</li> <li>use <code>@lru_cache</code> when loading images (2h -&gt; 1h); a small sketch is given at the end of this post</li> <li>use numpy arrays instead of JPEG (1h -&gt; 45 min)</li> </ul> <h2>What didn't work</h2> <ul> <li>using the prediction score of an additional CNN trained on player-anyone contact labels (it only ties with the group features extracted from the player-player contact CNN)</li> <li>2.5D-CNN (it only ties with the 2D-CNN)</li> </ul> <h2>Score Results</h2> <table> <thead> <tr> <th>Submissions</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> <th>architecture</th> <th>description</th> <th>#features(p2g)</th> <th>#features(p2p)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.7950</td> <td>0.7701</td> <td>0.7738</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>stage 1 feats.
+ stage-2 pred score</td> <td>1032</td> <td>1032</td> </tr> <tr> <td>2</td> <td>0.8038</td> <td>0.7806</td> <td>0.7773</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#1 + lag &amp; group feats. of stage-2 pred score</td> <td>1057</td> <td>1057</td> </tr> <tr> <td>3</td> <td>0.8035</td> <td>0.7788</td> <td>0.7799</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#2 + lag &amp; group feats. of stage-1 pred score</td> <td>1083</td> <td>1083</td> </tr> <tr> <td>4</td> <td>0.8039</td> <td>0.7799</td> <td>0.7782</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#3 + stage 2 p2anyone feats (+group)</td> <td>1087</td> <td>1087</td> </tr> <tr> <td>5</td> <td>0.8040</td> <td>0.7800</td> <td>0.7808</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#3 + stage 2 p2anyone feats (+group +lag)</td> <td>1104</td> <td>1104</td> </tr> <tr> <td>6</td> <td>0.8055</td> <td>0.7815</td> <td>0.7820</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#5 + stage 1 p2anyone feats (+group)</td> <td>1108</td> <td>1108</td> </tr> <tr> <td>7</td> <td>0.8053</td> <td>0.7808</td> <td>0.7819</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#5 + stage 1 p2anyone feats (+group +lag)</td> <td>1125</td> <td>1125</td> </tr> <tr> <td>8</td> <td>0.8064</td> <td>0.7827</td> <td>0.7825</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#6 + p2g feat on p2p model (+group +lag)</td> <td>1108</td> <td>1130</td> </tr> <tr> <td>9</td> <td>0.8051</td> <td>0.78368</td> <td>0.78412</td> <td>XGB + 2D-CNN(5-channel) + XGB</td> <td>#8 + sequence level pruning</td> <td>1108</td> <td>1130</td> </tr> <tr> <td>10</td> <td>0.8053</td> <td>0.78687</td> <td>0.78672</td> <td>XGB + 2D-CNN(5-channel) + XGB + 1D-CNN</td> <td>#9 + 4th stage (1D-CNN; input stages 1-3 output)</td> <td>1108</td> <td>1130</td> </tr> <tr> <td>11</td> <td>0.80577</td> <td>0.78713</td> <td>0.78703</td> <td>XGB + 2D-CNN(5-channel) + XGB + 1D-CNN</td> <td>#9 + 4th stage (1D-CNN; input only stage 3 output)</td> <td>1108</td> <td>1130</td> </tr> </tbody> </table>
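<p>The <code>@lru_cache</code> trick mentioned in the list above can be sketched as follows; the function names, file layout, and numpy-array format are assumptions for illustration rather than the actual pipeline code.</p> <pre><code>
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=256)
def load_frames(video_id, view):
    # Frames pre-extracted once to .npy, instead of decoding JPEGs repeatedly
    return np.load(f"frames/{video_id}_{view}.npy", mmap_mode="r")

def get_crop(video_id, view, frame_idx, cx, cy, size=128):
    frames = load_frames(video_id, view)      # cached after the first call
    half = size // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    return frames[frame_idx, y0:y0 + size, x0:x0 + size]
</code></pre>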
IceCube - Neutrinos in Deep Ice
15th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we want to thank the Kaggle team and the organizers for holding this amazing competition. In addition, we appreciate the people who shared their amazing notebooks and insightful ideas about this competition.</p> <p>Unfortunately, we could not reach a gold medal, but we will share a brief explanation of our solution.</p> <hr> <ol> <li>model<br> custom DNN (based on <a href="https://www.kaggle.com/code/anjum48/early-sharing-prize-dynedge-1-046" target="_blank">this</a>) <br> We trained 4 folds (out of a 100-fold CV split) of the DNN and ensembled them with simple averaging.<br> We tried RNNs, LSTMs, and transformers, but because of limited time and compute resources we could not make them work.</li> <li>data<br> We picked 200 pulses per event based on auxiliary, valid time window, and charge.</li> <li>features<br> We chose [x, y, z, time, charge, auxiliary, valid_time_window, string, qe, scatter, absorption] as input features based on small-data experiments. Some of them may not be necessary when training on the full data, because the model can learn them implicitly, but we did not have time to check, so we kept them.</li> <li>prediction<br> We made the model predict the (x, y, z) direction from where neutrinos came and to where they go. During training, we calculated losses for each of them. At inference time, we flipped one of them, averaged them, and got one (x, y, z) direction.</li> <li>loss function<br> We used VonMisesFisher3DLoss. We checked other losses and binning &amp; classification, but this gave us the best result.</li> <li>others<br> We utilized polars for data processing. The processing speed was very fast, and our best-scoring submission took only about 30 min.<br> One regret: we got a fairly good score with the GNN with little effort and then persisted in improving only that approach. Reading the top-ranked teams' solutions, we think we should have taken a wider view and tried more varied approaches.</li> </ol> <hr> <p><a href="https://www.kaggle.com/code/ludditep/15th-place-dynedge-2-direction-prediction" target="_blank">Inference code</a></p> <p>Rist Inc. (株式会社Rist) and Scrum Sign Inc. (株式会社スクラムサイン), thank you for supporting KaiRA and the learning of its student members as sponsors. On behalf of KaiRA, I would like to express our gratitude.</p>
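<p>As a small illustration of point 4 above (not from the original write-up), the two predicted directions can be put on a common convention by flipping one of them before averaging; the array names below are assumptions.</p>
<pre><code>import numpy as np

def combine_directions(dir_from, dir_to):
    # dir_from: predicted unit vector of where the neutrino came from,
    # dir_to: predicted unit vector of where it goes. Flipping dir_from puts
    # both on the same convention; the average is then re-normalised.
    combined = dir_to - dir_from
    norm = np.linalg.norm(combined, axis=-1, keepdims=True)
    return combined / np.clip(norm, 1e-12, None)

def to_angles(direction):
    # Convert a unit direction vector back to (azimuth, zenith).
    x, y, z = direction[..., 0], direction[..., 1], direction[..., 2]
    azimuth = np.arctan2(y, x) % (2 * np.pi)
    zenith = np.arccos(np.clip(z, -1.0, 1.0))
    return azimuth, zenith
</code></pre>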
Lux AI Season 2
Solution from QuData
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Decision for 58 place</h1> <p>First of all, we are very happy to be part of such a well-developed and interesting competition, many thanks to the organizers for their support and to all the contestants who selflessly shared their knowledge and insights.</p> <p>Many thanks to <a href="https://www.kaggle.com/kerrit" target="_blank">@kerrit</a>, a member of our QuData team! We are located and working in Ukraine and we want to thank everyone who now supports the people of Ukraine!</p> <h2>Solution Description</h2> <p>The solution is a library of several strategies for different stages of the game and active units, as well as some auxiliary functions.<br> Each strategy is a set of rules on the basis of which the behavior of robots and factories is built.</p> <p>The solution is based on the <strong>DefaultStrategy</strong> class from the basic.py file. It has a set of mandatory methods for implementing a full-fledged game strategy, in particular:</p> <ul> <li><em>getBid(…)</em> - bid selection</li> <li><em>getSpawnPos(…)</em> - select the position of the factory</li> <li><em>getActions(…)</em> - action definitions for factories and robots</li> </ul> <p>A complete strategy can be implemented in the <strong>DefaultStrategy</strong> class.</p> <p>Next, the <strong>DefaultStrategy</strong> strategy is divided into two groups <strong>Early</strong> and <strong>Game</strong>.</p> <h3>Early strategies</h3> <p><strong>Early</strong> strategies are the strategies of the factory setup phase.<br> Each of the strategies has a set of mandatory methods for implementing the strategy for the game phase, in particular:</p> <ul> <li><em>getBid(…)</em> - bid selection</li> <li><em>getSpawnPos(…)</em> - select the position of the factory</li> </ul> <p>In total, 6 <strong>Early</strong> strategies were implemented, located in the corresponding files.</p> <ul> <li><em>default</em> - Basic strategy of the factory placement library. The position for the factory is chosen randomly based on a previously compiled weight matrix. An example of the algorithm can be viewed <a href="https://docs.google.com/spreadsheets/d/e/2PACX-1vSxd6FtgHrewPwfVBQtcwuRW0I46YdhTiy38FRarN9gOhW1b9-N3miff7Gg5rfQ92cEzD_BzV99Htkc/pubhtml?gid=986897905&amp;single=true" target="_blank">here</a>.</li> <li><em>from_kaggle_strategy</em> - This strategy is an extension of <em>default</em>. The strategy for calculating the weights uses algorithms from the notebook <a href="https://www.kaggle.com/code/istinetz/picking-a-good-starting-location" target="_blank">@istinetz</a>.</li> <li><em>best_strategy</em> - This strategy is an improvement <em>default</em> in terms of position selection. The most important change was the elimination of randomness in position selection. In this strategy, the position of the factory is chosen based on the amount of rubble in the 3x3 square around the point. In addition, the coefficients of the weight matrix have been slightly changed.</li> <li><em>single_strategy</em> - Strategy for test training of RL model. Based on <em>best_strategy</em>. 
A distinctive feature is that only one factory is exposed in the strategy.</li> <li><em>new_from_kaggle</em> - Strategy using algorithms from notebook <a href="https://www.kaggle.com/code/istinetz/picking-a-good-starting-location" target="_blank">@istinetz</a>. A distinctive feature is that the ratio of the total number of cells around the factory to the "bad" (&gt; 25) cells is used as weights.</li> <li><em>next_generation</em> - The last and, according to tests, the best strategy implemented in the library. The strategy calculates separate weight matrices for ice and ore, and when choosing a factory position, the amount of rubble in a 7x7 square around the point is taken into account. In addition, if all the best positions adjacent to the ice are occupied, the weight matrix for ice is recalculated.</li> </ul> <h3>Game strategies</h3> <p><strong>Game</strong> strategies are the strategies of the game phase, which are divided into two groups <strong>Factory</strong> and <strong>Robot</strong>.<br> Each of the strategies has a set of mandatory methods for implementing the strategy for the game phase, in particular:</p> <ul> <li><em>getFactoryActions(…)</em> - return actions for factories</li> <li><em>getRobotActions(…)</em> - return actions for robots</li> </ul> <h4>Factory strategies</h4> <p><strong>Factory</strong> strategies are strategies for how factories work.</p> <p>In total, 6 strategies were implemented, located in the corresponding files.</p> <ul> <li><em>default</em> - Basic strategy for factories. If resources are available, it creates 1 heavy and up to 3 light robots, and after 700 steps it starts growing lichen, reserving 1000-step water.</li> <li><em>mean_water</em> - Improved <em>default</em> strategy. The strategy uses the average water change per step to calculate the lichen planting step.</li> <li><em>for_best</em> - New factory strategy for <em>best_strategy</em>. The strategy creates up to 20 light robots, while the number of robots is calculated by the formula $min(max(round(step-step_{start})/l_{ts}*l_{max}, l_{min}), l_{max} )$, where $step_{start}$ - to what step should be $l_{min}$ robots, $l_{ts}$ - in how many steps the number of robots should increase to $l_{max}$. The number of heavy robots is not limited. To calculate the planting time of the lichen, the coefficient of increase in the cost of the lichen per step is also used.</li> <li><em>for_best_v2</em> - A branch of the <em>for_best</em> strategy. Added accounting for lichen to the strategy, which we will potentially grow in $min(300, 1000-step)$ steps.</li> <li><em>no_limit</em> - Improved <em>for_best</em> strategy. In the strategy, the threshold for light robots has been increased to 100 per factory, and the number of heavy ones is limited to 6.</li> <li><em>no_limit_v2</em> - An offshoot of the <em>no_limit</em> strategy. Added accounting for lichen to the strategy, which we will potentially grow in $min(300, 1000-step)$ steps.</li> </ul> <h4>Robot strategies</h4> <p><strong>Robot</strong> strategies are robot behavior strategies.<br> Robot strategies are the most complex among other strategies and use more auxiliary tools.</p> <h5>General information about tasks</h5> <p>In a general sense, the behavior of robots in strategies is determined by <strong>task</strong>.</p> <p><strong>Task</strong> is a set of algorithms for implementing robot behavior. Initially, the library had two tasks MINER (mines ice) and CLEANER (removes rubble). 
Later this list was expanded.</p> <p>The duties of the mining robot included:</p> <ul> <li>find the position of the ice,</li> <li>build a route,</li> <li>add digging actions if the robot is on a resource,</li> <li>return to the factory if we can't dig anything else (run out of energy).</li> </ul> <p>The responsibilities of the cleaning robot included:</p> <ul> <li>find the nearest rubble,</li> <li>build a route,</li> <li>add digging actions if the robot is on rubble,</li> <li>return to the factory if the energy runs out.</li> </ul> <h5>Description of strategies</h5> <p>To implement the algorithms of robot tasks, some auxiliary tools were used. The first of these are <strong>Eyes</strong>.</p> <p><strong>Eyes</strong> - This is a class that stores game state arrays, and also allows you to conveniently perform various operations with them.<br> Initially, the <strong>Eyes</strong> data was updated inside the <strong>Game</strong> strategy at every step. In later versions of the library, the <strong>DataController</strong> class was implemented to control the state of the game.<br> In total, N matrices were used in Eyes:</p> <ul> <li><em>factories</em> - Location of enemy factories. All the cells occupied by the enemy's factory were entered in the matrix.</li> <li><em>units(version 1)</em> - Location of allied units.</li> <li><em>units(version 2)</em> - Location of allied units or their next turn.</li> <li><em>units(version 3)</em> - Location of allied units or their next turn. In the case of a collision, the values are added together.</li> <li><em>u_move</em> - Ally's possible moves. Later versions used robot type instead of 1.</li> <li><em>e_move</em> - Possible enemy moves. Later versions used robot type instead of 1.</li> <li><em>u_energy</em> - The energy of an ally, distributed over possible moves.</li> <li><em>e_energy(version 1)</em> - Enemy's energy distributed over possible moves.</li> <li><em>e_energy(version 2)</em> - Enemy's energy distributed over possible moves. In the event of a collision, the values are added together.</li> <li><em>e_energy(version 3)</em> - Enemy's energy distributed over possible moves. In the event of a collision, the values are added together. In the position of the robot - 0.</li> <li><em>e_lichen</em> - Enemy lichen location.</li> </ul> <p>Using only <strong>Eyes</strong> the following 3 strategies were implemented.</p> <ul> <li><em>default</em> - The basic strategy of the robots' behavior, the essence of which is that the robots performed only the MINER and CLEANER tasks. The robots took into account the position of the allies when building the route, thereby avoiding the collision with the allies. The task was assigned as follows: heavy robots were always MINER, light ones up to 500 steps were assigned the task MINER after 500 - CLEANER. The strategy used the Eyes matrices: <em>factories and units(1)</em>.</li> <li><em>cautious</em> - Runaway strategy for robots. The strategy implemented a mechanism for escaping robots. The essence of the mechanism was that if the difference <em>e_energy(1)</em>-<em>u_energy</em> &gt; 0, then we could be crushed and the robot would return to the base. The strategy used Eyes matrices: <em>factories, units(1), e_energy(1), u_energy</em>. Also, this strategy was implemented as a separate GameStrategy as an example of the fact that any convenient approach can be used when implementing the strategy.</li> <li><em>curious</em> - Catching up strategy. This is an improved <em>cautious</em> strategy. 
A pursuit indicator was added to the strategists, which was responsible for how many steps the robot would pursue the enemy in order to crush. The strategy used Eyes matrices: <em>factories, units(2), e_energy(2), u_energy, u_move</em>. In addition, the u_energy and u_move matrices were individual for each robot, which eliminated some problems.</li> </ul> <p>The next step was to implement the <strong>Observer</strong> auxiliary class, which dealt with the distribution of tasks to robots. The number of tasks has been expanded to: MINER, CLEANER, RETURN (go to base), JOBLESS (unemployed), WARRION (crushes opponents), LEAVER (runs away), RECHARGE (charges). This contributed to more flexible control of robots.</p> <p>Using <strong>Eyes</strong> and <strong>Observer</strong> the following 2 strategies were implemented.</p> <ul> <li><em>optimised</em> - Improved and optimized <em>curious</em> robot strategy for new challenges. The strategy used Eyes matrices: <em>factories, units(2), e_energy(2), u_energy, u_move</em>.</li> <li><em>fixed</em> - Fixed version of <em>optimised</em> strategy. The RECHARGE(recharging) task was added to the strategy. The Eyes matrices were used in the strategy: <em>factories, units(3), e_energy(3), u_energy, u_move</em>.</li> </ul> <p>The last auxiliary is the <strong>TaskManager</strong> class. The tasks of the class include a deeper indication of the tasks of the robot. On the <strong>Observer</strong> class, only the functions of monitoring the environment and informing that the task of the robot needs to be changed remained. In addition, new tasks were defined: WALKER (retreat if an ally is coming at us), CARRIER (removes rubble to the nearest resource), ENERGIZER (charges robots adjacent to the factory), DESTROYER (destroys enemy lichen), the MINER task is divided into ICE_MINER and ORE_MINER, and the tasks themselves were divided into permanent (ICE_MINER, CARRIER …) and one-time (WALKER, LEAVER …).<br> Permanent tasks are assigned to the robot and are redefined when the robot has run out of queue, one-time tasks are determined only when a certain event occurs (an enemy is coming at us and he will crush us (LEAVER) or an ally (WALKER), we have run out of energy (RECHARGE)).</p> <ul> <li><em>next_generation</em> - This strategy fully implemented all the tasks of the robots, in addition, many errors were fixed. In the strategy, thanks to <strong>TaskManager</strong>, the number of robots on tasks was regulated, in particular, the ICE_MINER task was not assigned to three robots if there were only two ices next to us, etc. In addition, the number of ENERGIZER robots was controlled so that there were no more of them than ICE_MINER. CARRIER cleared the path to the resource only in the first 50 steps, and then went to extract the resource. For the ICE_MINER task, priority was given to heavy robots, if there were none, then light ones took their place. The amount of CLEANER was controlled. If the distance to the nearest rubble is &gt; 14, then the robots become DESTROYER or WARRIOR if the enemy did not have lichen. In addition, the total number of MINER robots depended on the current step.</li> </ul> <h5>Auxiliary Functions</h5> <p>The <strong>Path</strong> class was implemented to search for the path of bots.<br> A distinctively and unexpectedly useful feature of pathfinding is the added window mode. Its essence boiled down to the fact that we are looking for a path not on the entire map, but on a cut-out area with a minimum size of 5x5. 
The starting point of the route and the end point were located in the corners of this area.</p> <p>The picture below shows an example of a window mode:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11578546%2F4ee6b070461f1d1202be78082ca84c20%2Fwindow_mode.png?generation=1683474895479322&amp;alt=media" alt="Example of window mode when searchin"></p> <h2>Solution Links</h2> <ul> <li><a href="https://github.com/BooCreator/Lux-AI-Season-2-Strategy-Library" target="_blank">Here</a> you can see the full code of the solution.</li> <li><a href="https://github.com/BooCreator/Lux-AI-Season-2-Strategy-Library/tree/main/strategy" target="_blank">Here</a> is the implementation of the solution strategies</li> <li><a href="https://github.com/BooCreator/Lux-AI-Season-2-Strategy-Library/tree/main/bots" target="_blank">Here</a> is the implementation of the latest uploaded bots</li> <li><a href="https://github.com/BooCreator/Lux-AI-Season-2-Strategy-Library/tree/main/strategy/kits" target="_blank">Here</a> is the implementation of all helpers and classes</li> </ul>
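<p>The window mode above can be sketched roughly as follows (an illustration under assumptions, not the library code): the search region is cut out of the full rubble map so that the start and goal cells sit in opposite corners, padded to at least 5x5, and the path search is then run only on that sub-array.</p>
<pre><code>import numpy as np

def make_window(rubble, start, goal, min_size=5):
    # Cut a rectangle with start and goal in opposite corners, expanded so
    # each side has at least min_size cells, clipped to the map borders.
    (sx, sy), (gx, gy) = start, goal
    h, w = rubble.shape
    x0, x1 = min(sx, gx), max(sx, gx)
    y0, y1 = min(sy, gy), max(sy, gy)
    while x1 - x0 + 1 &lt; min_size and (x0 &gt; 0 or x1 &lt; h - 1):
        x0, x1 = max(x0 - 1, 0), min(x1 + 1, h - 1)
    while y1 - y0 + 1 &lt; min_size and (y0 &gt; 0 or y1 &lt; w - 1):
        y0, y1 = max(y0 - 1, 0), min(y1 + 1, w - 1)
    window = rubble[x0:x1 + 1, y0:y1 + 1]
    # A* or Dijkstra is then run on `window` only, with coordinates shifted
    # by (x0, y0) and shifted back when building the unit's action queue.
    return window, (sx - x0, sy - y0), (gx - x0, gy - y0)
</code></pre>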
GoDaddy - Microbusiness Density Forecasting
GoDaddy's Solution that didn't Appear in Public Notebooks.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition is full of uncertainty, and I don't know whether I will get a good result in the end. But I figured I'd forget about the competition if I waited another 3 months, so I decided to share my solution anyway.</p> <h2>0. Preface:</h2> <p>Hi to everyone diving into this competition,</p> <p>We had a very memorable competition experience together (at least for me). I learned a lot from this competition, and to be honest, I'm just a newbie in machine learning and time-series research. I met a lot of talented people here <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> <a href="https://www.kaggle.com/joshuascj" target="_blank">@joshuascj</a> <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> <a href="https://www.kaggle.com/vitalykudelya" target="_blank">@vitalykudelya</a>. There are too many friends to list them all here.<br> They kindly shared good baselines, ideas and data, which taught me a lot. I must say that we reached this score by standing on the shoulders of giants.</p> <h2>1. Solution:</h2> <h3>1.1. Ideas</h3> <p>Our team used a GRU blended with XGBoost as the solution.<br> The composition of time-series data is roughly (this is my understanding; I hope the expression is correct):</p> <p>$$\text{time series} = \text{trend} + \text{other information} + \text{noise}$$</p> <p>The general idea is to use the GRU to capture the "trend" information of the data, and XGBoost to capture the "other information".</p> <p>Expressed in pseudocode:</p> <p><strong>Fit stage</strong></p> <p><code>GRU.fit(trend_data)</code><br> <code>XGB.train(X_XGB[other_information_features], residual_data)</code></p> <p><strong>Inference stage</strong><br> <code>residual_data_pred = XGB.predict(X_XGB[other_information_features])</code><br> <code>trend_data_pred = GRU.predict(X_GRU)</code><br> <code>sub_data_pred = trend_data_pred + residual_data_pred</code></p> <h3>1.2. Cross-validation</h3> <ol> <li>When training the GRU, I predict 5 months in total, using a rolling-window CV repeated 5 times.</li> </ol> <p><strong>Results:</strong> 1st month: 1.333; 3rd month: 2.47; 4th month: 2.95; 5th month: 3.39</p> <ol> <li>When training XGB, we use group CV repeated 3 times.<br> <strong>Results:</strong> 1st month: NaN; 3rd month: 2.34; 4th month: 2.74; 5th month: 3.12</li> <li>Improvement:</li> <li><strong>Results:</strong> 1st month: NaN; 3rd month: 0.13; 4th month: 0.20; 5th month: 0.26</li> </ol> <h3>1.3. LB tests:</h3> <p>Also, I used the LB to check my CV's reliability.<br> I treat January (LB) as "March", "April" and "May" respectively to output the results (just slide the GRU forecast starting point backwards from 2022-12-01).</p> <p><strong>Only GRU results:</strong> 3rd month: 2.82, 4th month: 3.15, 5th month: 3.4</p> <p><strong>GRU blended with XGB</strong>: 3rd month: 2.66, 4th month: 2.8, 5th month: 2.5</p> <p><strong>Improvement</strong>: 3rd month: 0.16, 4th month: 0.35, 5th month: 0.9 (maybe this is partly luck)</p> <h3>1.4. Conclusion</h3> <p>Both the CV score and the LB score improved. XGBoost complements the GRU by capturing the information the trend model misses. Thank you, and good luck to us all.</p>
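<p>For readers who want to reproduce the idea, here is a compact sketch of the fit/inference pseudocode above (an illustration only, not the author's code; model classes, feature sets and hyperparameters are placeholders, and the GRU is assumed to expose a scikit-learn-like fit/predict interface):</p>
<pre><code>import xgboost as xgb

def fit_blend(gru_model, X_gru, y, X_xgb):
    # Stage 1: the GRU learns the smooth trend component of the series.
    gru_model.fit(X_gru, y)
    trend = gru_model.predict(X_gru)
    # Stage 2: XGBoost learns the residual ("other information") part.
    residual = y - trend
    xgb_model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
    xgb_model.fit(X_xgb, residual)
    return gru_model, xgb_model

def predict_blend(gru_model, xgb_model, X_gru, X_xgb):
    # Final prediction = predicted trend + predicted residual.
    return gru_model.predict(X_gru) + xgb_model.predict(X_xgb)
</code></pre>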
IceCube - Neutrinos in Deep Ice
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers and staff for hosting such a wonderful competition. And thank you to all my wonderful teammates! <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>, <a href="https://www.kaggle.com/allvor" target="_blank">@allvor</a>, <a href="https://www.kaggle.com/remekkinas" target="_blank">@remekkinas</a>, <a href="https://www.kaggle.com/wrrosa" target="_blank">@wrrosa</a>. </p> <p>Congratulations <a href="https://www.kaggle.com/anjum48" target="_blank">@anjum48</a>! your promotion to GM!</p> <p>Our code will be available <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice" target="_blank">here</a>.</p> <h1>Datasaurus Part</h1> <h2>Preprocessing</h2> <h3>Raw data</h3> <p>This dataset was really big, and repeatedly doing pandas operations every epoch would have been a waste of CPU cycles (polars does not appear to work with PyTorch data loaders with many workers yet). To address this, I made PyTorch Geometric <code>Data</code> objects for each event and saved them as <code>.pt</code> files which could be loaded during training. This took about 8 hours to create using 32 threads, and required about 1TB of space. </p> <p>The issue with this was that that 1TB was across 130+ millon tiny files. A Linux partition has a finite number of “index nodes” or <code>inodes</code>, i.e. an index to a certain file. Since these files are so small I ran into my inode limit before I ran out of space on my 2TB drive. As a workaround, I had to spread some of these files across two drives, so heads up for anyone trying to reproduce this method or run my code.</p> <p>A more efficient way could be to store <code>.pt</code> files that have already been pre-batched which would require fewer files, but then you lose the ability to shuffle every epoch which may/may not make a difference with this much data. I didn’t try the sqlite method suggested by the GraphNet team.</p> <h2>Features</h2> <p>In the context of GNNs, each DOM is considered as a node. Each node was given the following 11 features:</p> <ul> <li>X: X location from sensor_geometry.csv / 500</li> <li>Y: Y location from sensor_geometry.csv / 500</li> <li>Z: Z location from sensor_geometry.csv / 500</li> <li>T: (Time from batch_[n].parquet - 1e4) / 3e3</li> <li>Charge: log10(Charge from batch_[n].parquet) / 3.0</li> <li>QE: See below</li> <li>Aux: (False = -0.5, True = 0.5)</li> <li>Scattering Length: See below</li> <li>Distance to previous hit: See below</li> <li>Time delta since previous hit: See below</li> <li>Scattering flag (False = -0.5, True = 0.5). See below</li> </ul> <p>Many of the normalisation methods were taken from GraphNet as a <a href="https://github.com/graphnet-team/graphnet/blob/4df8f396400da3cfca4ff1e0593a0c7d1b5b5195/src/graphnet/models/detector/icecube.py#L64-L69" target="_blank">starting point</a>, but I altered the scale for time, since time is a very important feature here.</p> <p>For events with large numbers of hits, to prevent OOM errors, I sampled 256 hits. 
This can make the process slightly non-deterministic.</p> <h2>Quantum efficiency</h2> <p>QE is the quantum efficiency of the photomutipliers in the DOMs. The DeepCore DOMs are quoted to have 35% higher QE than the regular DOMs (Figure 1 of this <a href="https://arxiv.org/pdf/2209.03042.pdf" target="_blank">paper</a>), so QE was set to 1 everywhere, and 1.35 for the lower 50 DOMs in DeepCore. The final QE feature was scaled using (QE - 1.25) / 0.25.</p> <h2>Scattering length</h2> <p>Scattering and absorption lengths are important to characterise differences in the clarity of the ice. This data is published on page 31 of this <a href="https://arxiv.org/abs/1301.5361" target="_blank">paper</a>. A datum depth of 1920 metres was used so that z = (depth - 1920) / 500. The data was resampled to the z values using <code>scipy.interpolate.interp1d</code>. I found that after passing the data though <code>RobustScaler</code>, the scattering and absorption data was near identical, so I only used scattering length.</p> <h2>Previous hit features</h2> <p>The two main types of events are track and cascade events. Looking at some of the amazing <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/388858" target="_blank">visualisation tools</a> for example from edguy99, I got the idea that if a node had some understanding where and when the nearest previous hit was, it might help the model differentiate between these two groups. To calculate this for each event, sorted the hits by time, calculated the pairwise distances of all hits, masked any hits from the future and calculated the distance, d, to the nearest previous hit. This was scaled using (d - 0.5) / 0.5. The time delta from the previous hit was also calculated using the same method and scaled using (t - 0.1) / 0.1.</p> <h2>Scattering flag</h2> <p>I tried to create a flag that could discern whether a hit was caused directly from a track, or some secondary scattering, inspired by section 2.1 of this <a href="https://arxiv.org/pdf/2203.02303.pdf" target="_blank">paper</a>. A side effect of adding this flag was that training was much more stable. The flag is generated as follows:</p> <ol> <li>Identify the hit with the largest charge</li> <li>From this DOM location, calculate the distances &amp; time delta to every other hit</li> <li>If the time taken to travel that distance is &gt; speed of light in ice, assume that the photon is a result of scattering</li> </ol> <h2>Validation</h2> <p>I used 90% - 10% train-validation split, and no cross validation due to the size of the dataset. The split was done by creating 10 bins of log10(n_hits), and then using <code>StratifiedKFold</code> using 10 splits.</p> <h2>Models</h2> <p>I used the <code>DirectionReconstructionWithKappa</code> task directly from GraphNet, meaning that an embedding (e.g. shape of 128) will be projected to a shape of 4 (x, y, z, kappa)</p> <h2>Architectures</h2> <p>I used the following 3 architectures. 
All validation scores are with 6x TTA applied</p> <p><a href="https://github.com/graphnet-team/graphnet" target="_blank">GraphNet/DynEdge</a> - Val = 0.98501<br> <a href="https://arxiv.org/abs/2205.12454" target="_blank">GPS</a> - Val = 0.98945*<br> <a href="https://arxiv.org/abs/1902.07987" target="_blank">GravNet</a> - Val = 0.98519</p> <p>The average of these 3 models gave a 0.982 LB score.</p> <p>The GraphNet/DynEdge model had very little modfification, other than changing to GELU activations.</p> <p>GPS &amp; GravNet used 8 blocks and you can find the exact architectures for both in the code <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice/blob/main/src/modules.py" target="_blank">here</a>.</p> <p>*GPS was the most powerful model, but also slowest to train being a transformer type model (roughly 11 hours/epoch on my machine). I managed to train a model which achieved a validation score of 0.98XX but was too late to include in our final submission.</p> <h2>Loss</h2> <p>I used VonMisesFisher3DLoss + (1 - CosineSimilarity) as the final loss function, since cosine similarity is a nice proxy for mean angular error. For CosineSimilarity I transformed the target azimuth &amp; zenith values to cartesian coordinates.</p> <p>This performed much better than separate losses for azimuth (VMF2D) &amp; zenith (MSE) which I the route I <a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/383546" target="_blank">initially went down</a>.</p> <h2>Augmentation</h2> <p>I centered the data on string 35 (the DeepCore string) and rotated about the z-axis in 60 degree steps. This didn’t actually improve validation performance but did have the benefit of making the models rotationally invariant so that I could take advantage of the detector symmetry and apply a 6x test time augmentation (TTA). This often improved scores by 0.002-0.003. </p> <h2>Training parameters</h2> <ul> <li>AdamW optimiser</li> <li>Epochs = 6</li> <li>Cosine schedule (no warmup)</li> <li>Learning rate = 0.0002</li> <li>Batch size = 1024</li> <li>Weight decay = 0.001 - 0.1 depending on model</li> <li>FP16 training</li> <li>Hardware: 2x RTX 3090, 128 GB RAM</li> </ul> <h2>Final submissions</h2> <p>Circular mean</p> <h2>Robustness to perturbation</h2> <p>TBC</p> <h2>Lessons learned/stuff that didn’t work</h2> <ul> <li>The GraphNet DynEdge baseline is extremely strong and tough to improve on - kudos to the team! It is also the fastest/efficient model, and what I used for the majority of experimentation</li> <li>More data = more better. The issue with this though is that I found that some conclusions drawn from experiments on 1% or 5% of the data were no longer applicable on the full dataset. This made experimentation slow and expensive</li> <li>Batch normalisation made things unstable and didn’t show improvements</li> <li>Lion optimiser didn’t generalise as well as AdamW</li> <li>Weight decay was important for some models. As a result I assumed changing the epsilon value in Adam would have an effect, but I didn’t see anything significant</li> <li>In my experiments, GNNs seem to benefit from leaky activations, e.g. GELU</li> <li>For MPNN aggregation, it seems that [min, max, mean, sum] is sufficient. 
Adding more didn’t appear to make significant gains</li> <li>Realigning all of the times to the time of the first hit of each event deteriorates performance, possibly due to noise in the data/false triggers etc.</li> <li>Radial nearest neighbours didn’t work any better than KNN when defining graph edges</li> <li>Only using 1 - CosineSimilarity as a loss function wasn’t very stable. Adding VMF3D helped a lot</li> </ul> <h2>Code</h2> <p>All my code will be available here soon: <a href="https://github.com/Anjum48/icecube-neutrinos-in-deep-ice" target="_blank">https://github.com/Anjum48/icecube-neutrinos-in-deep-ice</a></p> <h1>Isamu Part</h1> <p>Before the team merge, I created LSTM and GraphNet models. After the team merge, I focused on GraphNet because Remek's LSTM model was superior to mine. I used graphnet (<a href="https://github.com/graphnet-team/graphnet" target="_blank">https://github.com/graphnet-team/graphnet</a>) as a baseline and made several changes to improve its accuracy. I list below some of the experiments I performed that worked(There are tons of things that didn't work)</p> <ul> <li>random sampling (random sampling from DB if the specified data length exceeds 800)</li> <li>Increasing nearest neighbors of KNN layer(8-&gt;16)</li> <li>Addition of features<ul> <li>x, y, z</li> <li>time</li> <li>charge</li> <li>auxiliary</li> <li>ice_transparency feature</li></ul></li> <li>2-stage model with kappa(sigma) of vonMisesFisher distribution <ul> <li>Train 1st stage model to predict x, y, z, kappa<ul> <li>restart from the weights of GraphNet from public baseline notebook</li> <li>About 1-250 batches were used</li> <li>batch size 512</li> <li>epoch 20</li> <li>DirectionReconstructionWithKappa</li></ul></li> <li>Split data into easy and hard parts according to 1st stage kappa value<ul> <li>Inference was performed using the 1st model and classified into two sets of data(easy part and hard part) according to their predicted kappa value</li></ul></li> <li>Train expert models for easy and hard parts and combine their predictions<ul> <li>About 250-350 batches were used</li> <li>batch size 512</li> <li>epoch 20</li> <li>DirectionReconstructionWithKappa</li></ul></li></ul></li> <li>TTA <ul> <li>rotation 180-degree TTA about the z-axis</li></ul></li> <li>Loss<ul> <li>DirectionReconstructionWithKappa</li></ul></li> <li>Hardware: RTX 3090, 64 GB RAM, 8TB HDD, 2TB SSD, Google Colab Pro</li> </ul> <p>The ensemble of models(1st and 2nd) created above gave public LB 0.995669, private LB 0.996550 </p> <h1>Remek Part - LSTM</h1> <p>For LSTM training we used the attitude proposed by Robin Smits (@rsmits) with some improvements.</p> <ul> <li><p>We added more LSTM/GRU layers (4 GRU/LSTM) - more and less than 4 layers was not better in our experiments.</p></li> <li><p>We added ice transparency as additional features (we used both features – transparency and absorption).</p></li> <li><p>We redesigned the training loop to train models on all batches – we can train models on different parts of DS (from one batch to all DS in one epoch).</p></li> <li><p>We checked many hypotheses (for part of them weight and biases Swipe tool was used):</p> <ul> <li>Different size of LSTM units – finally 196 was the best in our model.</li> <li>Different bin size – 24 was our final choice.</li> <li>Different amount of features and impulse selection - finally we use strategy - first select non_aux events then add random aux events (it gave us score boost as well - this was kind of an augmentation technique).</li> <li>Different 
model architectures – finally for our score blend we used pure LSTM/GRU setup. We tested transformer architecture but as it appeared we gave up too early (first scores were way worse then pure LSTM).</li> <li>Different optimizers (AdamW, NAdam) – final choice was Adam.</li> <li>Different schedulers – CosineDecay, OneCycle but then we use step LR scheduling described below.</li></ul></li> </ul> <p>Training was divided into three parts scheduling LR:</p> <ul> <li>Step 1 - Train baseline models (two models) – batches 4-330 (m1) and 331-660 (m2) using LR = 0.005, for 6-8 epochs, sparse categorical crossentropy loss. Both models were validated on batches 1-3.</li> <li>Step 2 - Fine tuning models m1 and m2 using LR/10 = 0.0005 for 3 epochs, sparse categorical crossentropy loss.</li> <li>Step 3 – Fine tuning models m1 and m2 using LR = 0.00025 for 2 epochs with different loss function – categorical crossentropy with label smoothing 0.05</li> <li>Then we took 4 best models (according to MAE metrics) for both model m1 and m2 (8 models in total) and produced one model using SWA (Stochastic Weight Averaging). We simply averaged model weights. As it appeared the model had the same performance compared for 8 ensembled models but we significantly decreased LSTM inference time. Single model score (public LB) (no TTA): 1.0024.</li> </ul> <p>Things did not improve our score:</p> <ul> <li>Dropout in LSTM/GRU layers and linear head.</li> <li>GaussianNoise Layer after Masking layer or after LSTM/GRU layer.</li> <li>Adam + Lookahead optimizer.</li> <li>Gradient Accumulation to simulate TPU big batch size – I (Remek) had problem to implement it properly in TF/Keras (it is easy in Pytorch, as it appeared not exactly easy to implement in TF/Keras – my daily choice is Pytorch)</li> </ul> <p>Additional tools:</p> <ul> <li>Weights and biases for two tasks – logging and monitoring training process and swipe for hyperparameter tuning (number of LSTM units, LR scheduler, Optimizer and LR).</li> </ul> <p>For my (Remek) experiment I use (in first part of competition) ZbyHP Z4 with 2xA5000 but then HP sent me ZbyHP Z8 workstation with 2x Intel Xeon CPU and Nvidia A6000 GPU. Personally I can say that this help me to establish fast experimentation pipeline. I was able to process dataset files very fast. Having A6000 gave me possibility to train models on bigger batch size. Thank HP for supporting my work.</p> <p>My final words - we set up a great team in my opinion - each of us was responsible for part of the solution, we discussed a lot but there was no “my is better”. Although we worked together for the first time, I had the impression that we had known each other forever. Great team, great result! Thank you guys for having the opportunity to learn from great AI guys.</p> <h1>Alvor part - blending</h1> <p>My main contribution to this competition was the development of methods for blending my teammates’ solutions. Analysis of the different types of solutions (GraphNet, LSTM etc.) showed that they have different efficiency at different predicted zeniths and azimuths values (zenith mainly). So I changed my initial "constant weight" approach to a "bins" approach.<br> The method consists in splitting the predicted zenith values into 10 bins of equal width. Thus, when combining two solutions, we get 100 bins in total. The value of 10 is configurable, but experiments showed it to be close to optimal.</p> <p>After that, the blending weight for the zenith was found in each bin and the blending weight for the azimuth was found. 
The blending of predicted zenith values was done by a simple linear combination. Blending the azimuth values was a little more complicated, due to the possible transition through 2*pi. Therefore, to begin with, the difference in the predicted azimuths in the two solutions was calculated and the direction of the second value relative to the first value.<br> Weights fitting was carried out on a sample of training data close in size to the test data (5 batches with 1M events).</p> <p>Several approaches to improve this blending method were also tested. In particular, the use of GBDT.</p> <p>One of these approaches was an attempt to classify events into “simple” and “complex” ones (inspired by the kernel <a href="https://www.kaggle.com/code/tatelarkin/neutrino-event-type-classifier-auc-score-0-93)" target="_blank">https://www.kaggle.com/code/tatelarkin/neutrino-event-type-classifier-auc-score-0-93)</a>.</p> <p>Experiments have shown that some types of neural networks are better at handling simple events, while other types are better at handling complex events. If we could accurately determine the type of each event, this would greatly improve the competition metric. A model was built, the efficiency of which in events classifying was at the level of the public kernel (AUС 0.93), but this was not enough to get a noticeable improvement in the blend quality.</p> <p>Several other models of classification (binary variable - which of the two neural networks better predicts a given event) and regression (target variable - the difference in competition metrics from two neural networks for a given event) were also trained. Unfortunately, all these approaches showed only a minor improvement in the result (fourth decimal place), but they were very overfitting-sensitive.</p> <p>Later, my method was significantly improved by Wojtek Rosa. Therefore, it was his approach that was used in the final submissions, which he describes in more detail in his part. In particular, I would like to note his brilliant idea of using a Decision Tree model to build blending bins.</p> <h1>Wojtek part - more blending</h1> <p>Thank you competition host for exciting competition IceCube!<br> Also thank you my Teammates - once again congrats for their prizes, GM titles and great solutions.<br> My part was about blending.<br> I used batch_ids 1-5 for evaluation and adjusting parameters, later I used batch_ids 655+ only for evaluation purposes.<br> When I joined Remek&amp;Alvor team, we have great LSTM solution, public Graphnet, and amazing blend technique:</p> <pre><code> (): s[] = np.(s[] - s[]) s[] = np.where( s[] &lt; np.pi, np.sign(s[] - s[]), -np.sign(s[] - s[]) ) N = s[] = (N*np.floor(N*s[]/np.pi) + np.floor(N*s[]/np.pi)).astype() </code></pre> <p>and minimize score for each bin, finding best qu, alpha such as:</p> <pre><code>s0[] = s0[] + alpha * s0[] * s0[] s0[] = (-qu) * s0[] + qu * s0[] </code></pre> <p>I realize, that compared to simple/public vector weight ensembling this method is much better.<br> I tried to improve score with greater N values but this leads me to overfit.<br> After this, I managed to improve the score, by creating bins using Regression Trees with target = score_1 - score_2 and features:</p> <pre><code>cls = [,,,,,, ,,,,,, ,,,] </code></pre> <p>where direction_% are from Isamu submission, zenith_3 is from crazy quick and clean 1.183 Robert linear solution. 
I tried many different features directly from event, but<br> The simplicity of Decision Tree Regressor gave us (almost) full control of N bins for adjustment allvors parameters qu and alpha:</p> <pre><code>regr_1 = DecisionTreeRegressor(max_depth=, min_samples_leaf=, min_samples_split=) s[] = regr_1.predict(s[cls]) s[] = (s[],).astype() </code></pre> <p>Using DecissionTrees gave us 0.0027 boost both with local score and LB. I tried other methods of blending such as MLP or LGBM without success.<br> Later on, I also slightly modified <code>adjusting</code> loop, for finding full linear combination of zenith_1 and zenith_2:</p> <pre><code>s0[] = s0[] + alpha * s0[] * s0[] s0[] = ru * s0[] + qu * s0[] </code></pre> <p>It takes much more computation to find ru and qu, but that gives us additional 0.001 improvement.<br> Our final blend consist:<br> Isamu solution .995 -&gt; Remek LSTM 1.002 -&gt; datasaurus Graphnet .982 -&gt; Robert 1.183<br> local score 0.9774692 Public LB 0.975545, Private LB 0.97658</p> <h2>Findings:</h2> <ul> <li>we found interesting type of error, ‘model give up’ and predicts zenith close to 0.<br> Our final solution zenith prediction vs score (x-axis zenith predicted, y-axis zenith ground truth):</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F4a8b117472efd756290083dcf0b787c1%2Fimage1.png?generation=1682343676129581&amp;alt=media" alt=""></p> <p>Ground truth zenith (x-axis) vs score (y-axis) - ‘error triangle’ visible:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F2ee6d7032191782027e45522daff9081%2Fimage2.png?generation=1682343631946316&amp;alt=media" alt=""></p> <ul> <li>we found that predictions from LSTM classification goes incredibly good at the center of bins, chart: zenith_pred_lstm*1000 (x-axis), mean score (y-axis), zenith edges (red lines)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3974868%2F7fde090be1100c83a09f045fbf57f5fd%2Fimage3.png?generation=1682343415933098&amp;alt=media" alt=""></p>
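<p>As an illustration of the 6x rotational TTA described in the Augmentation section above, here is a minimal sketch (under assumptions, not the actual training code): each event, already centred on the DeepCore string, is rotated about the z-axis in 60-degree steps, predicted, and the predicted direction is rotated back before averaging.</p>
<pre><code>import numpy as np

def rotate_z(xyz, angle):
    # Rotate points or direction vectors about the z-axis by `angle` radians.
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return xyz @ rot.T

def predict_with_tta(predict_fn, hit_xyz, other_feats, n_rot=6):
    # predict_fn maps (rotated hit positions, other features) to a unit
    # direction vector of shape (3,). hit_xyz is assumed to be centred on
    # string 35 so the 60-degree rotations exploit the detector symmetry.
    preds = []
    for k in range(n_rot):
        angle = 2.0 * np.pi * k / n_rot
        pred = predict_fn(rotate_z(hit_xyz, angle), other_feats)
        preds.append(rotate_z(pred.reshape(1, 3), -angle)[0])
    mean = np.mean(preds, axis=0)
    return mean / np.linalg.norm(mean)
</code></pre>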
Predict Student Performance from Game Play
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>13th place solution</h1> <p>First of all, I would like to thank the Kaggle community for sharing great ideas and engaging discussions. I would also like to thank the hosts for organizing this interesting task competition.</p> <h2>Summary</h2> <ul> <li>Ensemble of LightGBM and NN</li> <li>Cross validation: Nested cross validation<ul> <li>Training data: Data for which the first four digits of the session_id are 2200 or less.</li> <li>Validation data: Data for which the first four digits of the session_id are 2201 or more.</li> <li>I trained the model on the training data using a 5-fold cross validation strategy, and evaluated it on the validation data using predictions from all 5 trained models.</li> <li>For the final submission, I trained the model on the entire dataset using a 5-fold cross validation approach.</li></ul></li> </ul> <h2>LightGBM</h2> <ul> <li>I trained one model for level group 0-4 and another for level group 5-12. For these, I included features representing the target and trained a single model.</li> <li>For level group 13-22, I trained a distinct model for each target.</li> <li>Main features:<ul> <li>The count of categorical data for each session_id</li> <li>The statistical measures of numerical data for each session_id</li> <li>An aggregate of the next action taken </li></ul></li> <li>Scores<ul> <li>CV : 0.7032</li> <li>Public Score : 0.704</li> <li>Private Score : 0.701</li></ul></li> </ul> <h2>NN</h2> <ul> <li>Model: Transformer + GRU<ul> <li>The standalone Transformer didn't perform very well.</li> <li>The addition of GRU improved the score.</li></ul></li> <li>Trained with fewer features</li> <li>I trained a separate model for each level group.</li> <li>Scores<ul> <li>CV : 0.7010</li> <li>Public Score: 0.700</li> <li>Private Score: 0.700</li></ul></li> </ul> <h2>Ensemble</h2> <ul> <li>LightGBM * 0.66 + NN * 0.34</li> <li>CV : 0.7053</li> <li>Public Score : 0.706</li> <li>Private Score : 0.702</li> </ul>
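<p>As a concrete illustration of the nested validation split described in the summary (a sketch only; the session_id prefix rule follows the write-up, the rest is an assumption):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import KFold

def nested_split(df, n_splits=5, seed=42):
    # df: one row per session (e.g. aggregated features) with a numeric
    # session_id column. The first four digits decide the outer split.
    prefix = df["session_id"].astype(str).str[:4].astype(int)
    train_sessions = df.loc[prefix &lt;= 2200, "session_id"].to_numpy()
    valid_sessions = df.loc[prefix &gt;= 2201, "session_id"].to_numpy()

    # 5-fold CV over the training sessions; the held-out sessions are only
    # used to evaluate the average prediction of the 5 trained models.
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    folds = list(kf.split(train_sessions))
    return train_sessions, valid_sessions, folds
</code></pre>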
Predict Student Performance from Game Play
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would first like to thank all the participants who dedicated so much time and effort to this competition, as well as the hosts and the management. While I still believe there are areas in which the administration of the competition could be improved, in this post, I will focus solely on discussing my solution.</p> <h2>Overview</h2> <p>I didn't do anything particularly special.</p> <p>Mainly, I just kept adding features to improve the accuracy of the single model. For each question, I built models using LightGBM and Catboost, and took the simple average of the two models. I used the models with the highest CV scores as the final candidates. The high CV submit also got almost the best score for private.</p> <ul> <li>LightGBM CV:0.7018 LB(Public):0.703 LB(Private):0.702</li> <li>Catboost CV:0.7011 LB(Public):0.7 LB(Private):0.701</li> <li>Merge(final submit) CV:0.7024 LB(Public):0.7 LB(Private):0.702</li> </ul> <h2>Features</h2> <p>Rather than introducing each of the numerous features I created, I'll share my overall approach and discuss a few specific features that particularly contributed to the accuracy.</p> <p>As already demonstrated in public notebooks, an important element in this competition was "how much time one spends playing." To delve deeper, I felt that "how much time it took from a certain point to another" was crucial, so I created many features related to this.</p> <ul> <li>Checkpoint feature (as I named it)<br> In this game, there are events that almost every player will inevitably experience. For instance, every user will find a notebook and see the message "found it!" I identified these "events that almost every user goes through," and used the time taken between these events (i.e., the elapsed time from event A to event B) and the number of clicks as features. This seemed to significantly contribute to the accuracy.</li> </ul> <p>I created such features in various patterns, like the elapsed time from viewing text A to text B, the elapsed time from one fqid to the next, the elapsed time from one room to the next, and so on.</p> <ul> <li>Other than this, I obviously included features like the time elapsed for each level and the average coordinates, as introduced in the public notebooks.</li> </ul> <p>The number of features increases with the level group. Ultimately, the feature counts were as follows:<br> Level group 0-4: 3009 features Level group 5-12: 9747 features Level group 13-22: 18610 features</p> <h2>Modeling Approach</h2> <ul> <li><p>I chose to use 10-fold rather than 5-fold as it gave slightly higher CV scores (around +0.0005).</p></li> <li><p>I used straified Kfold with the number of correct answers for 18 questions of the user. So, the model is made with the distribution of the total number of correct answers of the users almost aligned. (However, I don't think it would be much different with a simple K-fold)</p></li> <li><p>For feature selection, I simply used the top 500 features based on their importance. To prevent leakage, I selected the feature importance for each fold, and retrained the model for each fold. 
For example, when training fold1, I first train with all features, then select the features using the fold1 model, and retrain the fold1 model with the top 500 features.</p></li> <li><p>I used the prediction probabilities of previous questions as features. For example, when predicting question 3, I used the prediction probabilities for questions 1 and 2. When predicting question 15, I used the prediction probabilities for questions 1 through 14, etc. (this improved CV by around +0.001)</p></li> <li><p>Inference took the following amounts of time:<br> LightGBM: 90min Catboost: 120min Merge: 150min<br> LightGBM's inference became significantly faster by compiling the model with a library called "lleaves."<br> <a href="https://github.com/siboehm/lleaves" target="_blank">https://github.com/siboehm/lleaves</a><br> I'm sharing my inference code. Features not mentioned here can be somewhat understood by looking at it.<br> <a href="https://www.kaggle.com/code/mhyodo/restart-model-merge-v1" target="_blank">https://www.kaggle.com/code/mhyodo/restart-model-merge-v1</a></p></li> </ul> <h2>What Didn't Work</h2> <ul> <li>NN models (I tried several types, such as LSTM and MLP, but none contributed to the CV)</li> </ul> <p>Lastly, I've seen posts from others where the LB score was higher than the CV score, but in my case, they were pretty much the same. I was expecting some sort of shakeup, but I didn't anticipate making it into the top 10. I'm curious as to how those with higher LB scores achieved this, as I was unable to significantly increase my LB score. <br> Anyway, thank you! If the ranking is confirmed, I can become a new GrandMaster!</p>
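<p>To make the checkpoint feature idea above concrete, here is a minimal sketch (illustrative only; the exact checkpoint texts the author used are not specified, so the first text below is a placeholder):</p>
<pre><code>import pandas as pd

def checkpoint_elapsed(session_df, text_a, text_b):
    # Elapsed time between two events that almost every player passes
    # through; a click count over the same span can be built the same way.
    t_a = session_df.loc[session_df["text"] == text_a, "elapsed_time"]
    t_b = session_df.loc[session_df["text"] == text_b, "elapsed_time"]
    if t_a.empty or t_b.empty:
        return float("nan")
    return float(t_b.iloc[0] - t_a.iloc[0])

# Example usage with a placeholder first checkpoint:
# feat = checkpoint_elapsed(one_session, "SOME_EARLIER_TEXT", "found it!")
</code></pre>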
Predict Student Performance from Game Play
8th Place Solution and Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The competition was really exciting and it gave us a chance to practice feature engineering. I'm very thankful for the support and help from my team <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a> <a href="https://www.kaggle.com/hoangnguyen719" target="_blank">@hoangnguyen719</a> and <a href="https://www.kaggle.com/martasprg" target="_blank">@martasprg</a>. They were always there for me and together we made a big difference.</p> <p>I would like to thank the hosts, and special thanks to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/pdnartreb" target="_blank">@pdnartreb</a> for identifying the issue of data leak, which made the competition right back on track.</p> <p>Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his great starter notebooks and insights that helped me in the early phase of the competition.</p> <h2>Overview</h2> <p>Here's an overview of what each of us worked on:<br> · My main focus was on improving the XGBoost model and handling feature engineering.<br> · Minh Tri Phan worked on a Transformer model with a CV (cross-validation) score of 0.699 and a public leaderboard (LB) score of 0.7.<br> · Hoang processed the external data.<br> · Martin worked on selecting the most relevant features.</p> <p>In our final submissions, we ensembled the XGBoost and Transformer models, which helped us achieve the gold position. Our ensemble submission had a public LB score of <strong>0.705</strong> and a private LB score of approximately <strong>0.7025</strong>. Additionally, we had two other submissions with single XGBoost models, where one had a public LB score of <strong>0.705</strong> and a private LB score of <strong>0.700</strong>.</p> <h2>My Part</h2> <p>Code: The code is a bit uncleaned, apologies for that. For any queries, contact me on <a href="https://www.linkedin.com/in/priyanshu-chaudhary-ba0b23199/" target="_blank">LinkedIn</a> <br> <strong>FE code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-11-external-data/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-11-external-data/notebook</a><br> <strong>Inference code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/inference-xgb-25-11-17/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/inference-xgb-25-11-17/notebook</a><br> <strong>Training code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-9-training/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-9-training/notebook</a></p> <h3>Overview</h3> <p>I created a 5-fold XGBoost model for each question (a total of 90 models). I used Kaggle kernels only to train XGBoost since it took only 45 mins on Kaggle’s P100 GPU to train all 90 models.<br> The single XGBoost model achieved a Public leaderboard (LB) score of 0.705 and took 45-50 mins for inference, but it didn't perform as well on the private LB. 
When we included Hoang's external data, the model's score improved to <strong>0.704</strong> on the private LB. However, we decided not to use it because the public LB score was unusually low at <strong>0.702</strong>.</p> <h3>Feature engineering</h3> <ol> <li><p><strong>Session length:</strong> simply accounts for the total length of the session per level group.</p></li> <li><p><strong>Instance features:</strong> I created Object click-based features (first object click, room coordinates of that click, I called them Instance features)that were most important and gave an improvement of 0.0007, when I added them with standard features. I created a total of 36 features since there were 12 instances where object clicks were present.</p></li> <li><p><strong>Magic bingo features:</strong> Inspired from the public notebooks. I created more such features for all 3 level groups and it improved the CV by <strong>0.0003</strong>.</p></li> <li><p><strong>Standard features:</strong></p> <p>a) <strong>Count features:</strong> I created count features based on <code>Fqid, text_Fqid, room_fqid, level, and event_comb</code>. These features capture the frequency of specific events or combinations. </p> <p>b) <strong>Binning of indexes:</strong> I performed binning on indexes with bin sizes of approximately 30 or 50 in sorted order. Raw indexes worked better on the private LB, while binned features yielded better results on the public LB.</p> <p>c) <strong>First and Sum features:</strong> I generated first and sum of elapsed_time_diff for all categorical columns. I found that min, max, and std did not work well in my case. </p> <p>d) <strong>Aggregations based on hover duration.</strong></p></li> <li><p><strong>Top Level Group Features:</strong> Used top 15-25 features (according to feature importance), Duration and instance features across different level groups.</p></li> <li><p><strong>Meta features:</strong> Using past questions predictions to predict the current question. i.e. for question<code>t</code> I used all predictions for questions <code>(1 to t-1)</code>. Using them gave an improvement of around <strong>0.001</strong>.</p></li> </ol> <h3>Feature Selection (Martin's Part):</h3> <ol> <li>I eliminated features that had zero importance based on their Gain and Shapley feature importance scores.</li> <li>After performing feature selection, I made adjustments to the learning rate by reducing it from <strong>0.05</strong> to <strong>0.03</strong> and adding more features. </li> <li>Additionally, I removed duplicate features and features with more than <strong>95%</strong> values as null.</li> </ol> <h3>External data:</h3> <ol> <li>We used publicly available data. It had about 7500 sessions where all 18 questions were answered.</li> <li>Adding this external data improved our model's performance by 0.0005 in cross-validation and 0.002 on the leaderboard.</li> <li>Hoang also created processed external data that worked well on the private leaderboard (score of 0.704). If we had included it, our single XGBoost model could have reached a top 5. position. However, we decided not to use it because of its lower Public leaderboard score (a bad decision).</li> </ol> <h3>Inference:</h3> <ol> <li><p>We made improvements to retain the original order of the sequence during inference.</p></li> <li><p>We found that there are approx. 
250 sessions with abnormal indexing (interestingly, all of them are from the 5th and 6th of December 2020)</p></li> <li><p>Created a function to preserve the original sequence for 99.5% of sequences, with only a small portion (0.5%) having events misplaced by no more than 4-5 positions from the actual index.</p></li> <li><p>Reindexed these abnormal sessions, which slightly improved our LB score.</p></li> </ol> <h3>Things that didn't work:</h3> <ol> <li>Ensembling with LGBM and CatBoost didn’t work.</li> <li>Created a custom eval metric based on the benchmark true positives and negatives a model should have. It increased the CV by 0.001 but the LB decreased, probably due to overfitting.</li> <li>Different thresholds for each question (increased CV, decreased LB).</li> </ol> <p>The table below lists our experiments with the best results.</p> <table> <thead> <tr> <th>External Data Used</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> <th>Final Sub</th> </tr> </thead> <tbody> <tr> <td>No</td> <td>0.6996</td> <td>0.701</td> <td>0.698</td> <td>No</td> </tr> <tr> <td>No</td> <td>0.7001</td> <td>0.702</td> <td>0.700</td> <td>No</td> </tr> <tr> <td>No</td> <td>0.6996</td> <td>0.701</td> <td>0.698</td> <td>No</td> </tr> <tr> <td>Yes (Public ED)</td> <td>0.7015</td> <td>0.705</td> <td>0.700</td> <td>Yes</td> </tr> <tr> <td>Yes (Hoang's ED)</td> <td>0.7019</td> <td>0.702</td> <td>0.704</td> <td>No</td> </tr> <tr> <td>Yes (Hoang's ED)</td> <td>0.7022</td> <td>0.703</td> <td>0.703</td> <td>No</td> </tr> </tbody> </table> <h2>Tri's Part:</h2> <p>The model is shown in the following figure:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5005351%2F196a1b67b9b8bdf56ba47c91097e8d0d%2FPicture1.png?generation=1688196490904650&amp;alt=media" alt=""></p> <p>Specifically, it consists of 2 parts:<br> (i) Training a neural network, then extracting the embedding.<br> (ii) Concatenating the embedding from the neural network to a set of aggregated features, then training a gradient boosting model (XGBoost, CatBoost, LightGBM).</p> <h3>Neural network</h3> <p>I was inspired by the RIIID competition and <a href="https://www.kaggle.com/letranduckinh" target="_blank">@letranduckinh</a>’s solution, in which he customized the multi-head attention mechanism to incorporate the time gap between 2 actions. In my opinion, if we have to relate the problem to an NLP problem, the RIIID competition is like a token classification task (e.g., NER), whereas this competition is like a document classification task. Therefore, I decided to use a transformer and some other recurrent network types.</p> <p>I used the encoder-only structure as I didn’t see any motivation to have the decoder. <br> However, the transformer encoder alone didn’t work so well, so I decided to add some more (3) GRU layers in front of the encoder. The detailed architecture (PyTorch code) is given here (<a href="https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance/blob/main/Transformer/model.py" target="_blank">https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance/blob/main/Transformer/model.py</a>).</p> <h3>Some remarks about training:</h3> <ol> <li>I used 3 models for 3 level groups. At each level, I used the sequence of previous levels (e.g.
The model for the 0-4 level uses the 0-4 sequence, the model for the 5-12 level uses the 0-4 and 5-12 sequences, and so on.)</li> <li>I used all the given features to train the model,</li> </ol> <pre><code>NUM_COLS = [, , , , , , ] TXT_COLS = [, , , , , , ] </code></pre> <ol> <li>I think the performance of a student, for example, in level 13-22 could carry some information to predict his/her performance in level 0-4. This is what I call the “global knowledge” of a student, and I want the network to capture that. Therefore, the neural network is trained in a multi-tasking manner, in which in the main output is the set of questions in the corresponding level (e.g., for level 0-4, the main output is 3-dimensional for questions 1, 2, and 3), the auxiliary head is used to predict all other questions. This trick helps to gain <strong>+0.002</strong> in CV.</li> </ol> <p>Overall, the NN gets <strong>0.695/0.700</strong> in CV and public LB (before the API crisis, after that I never check how the NN works in the public LB anymore as it was combined always with XGBoost)</p> <h3>Gradient Boosting</h3> <p>However, the NN in my case was not super satisfactory. I then decided to extract the embedding from the trained NN, concatenate them into a set of aggregated features, then use XGBoost to train the model. This helped me to get a huge boost in both CV and LB.</p> <p>Overall, the scores of this approach are shown below,</p> <table> <thead> <tr> <th>External Data Used</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>No</td> <td>0.6993</td> <td>0.702</td> <td>0.697</td> </tr> <tr> <td>Yes</td> <td>0.6989</td> <td>0.701</td> <td>0.699</td> </tr> </tbody> </table> <p>Unfortunately, as I didn’t observe any gain in CV and public LB with external data, I decided not to choose that model to add to our model pool.</p> <p><strong>Links:</strong><br> Training code: <a href="https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance----part-of--8th-solution.git" target="_blank">https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance----part-of--8th-solution.git</a></p> <p><strong>Inference code:</strong> <br> <strong>Without external data:</strong> <a href="https://www.kaggle.com/code/shinomoriaoshi/psp-v7b-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/psp-v7b-infer</a><br> <strong>With external data:</strong> <a href="https://www.kaggle.com/code/shinomoriaoshi/psp-v9a-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/psp-v9a-infer</a></p> <h2>Hoang's Part:</h2> <p>Hoang has described his work in a separate thread that describes the preprocessing of external data, experimental results and why to trust CV over LB.<br> Link to Hoang's Part: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420315" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420315</a></p>
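<p>To make the meta-feature idea described above concrete, here is a minimal sketch (not the team's actual code: the helper name, hyperparameters and the flat feature matrix are illustrative, and level-group handling is omitted). For question t, the out-of-fold probabilities of questions 1..t-1 are appended to the features before training the next 5-fold XGBoost model.</p> <pre><code># Hedged sketch of "predictions of earlier questions as features" with per-question models.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import GroupKFold

def train_with_meta_features(X, Y, groups, n_questions=18, n_folds=5):
    """X: (n_sessions, n_features); Y: (n_sessions, n_questions) binary labels."""
    oof = np.zeros((len(X), n_questions))
    models = {q: [] for q in range(n_questions)}
    for q in range(n_questions):
        X_q = np.hstack([X, oof[:, :q]])   # append out-of-fold predictions of earlier questions
        for tr, va in GroupKFold(n_splits=n_folds).split(X_q, Y[:, q], groups):
            clf = xgb.XGBClassifier(n_estimators=300, max_depth=4,
                                    learning_rate=0.05, eval_metric='logloss')
            clf.fit(X_q[tr], Y[tr, q])
            oof[va, q] = clf.predict_proba(X_q[va])[:, 1]
            models[q].append(clf)
    return models, oof
</code></pre> <p>In practice the earlier-question predictions come from the models of the previous level groups, but the mechanics of "predictions as features" are the same.</p>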
Predict Student Performance from Game Play
49th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First and foremost, I would like to express my gratitude to the hosts who made significant efforts in organizing the competition, despite numerous challenges. Congratulations to all the winners! While my model does not come close to the top performers, I am sharing my solution, hoping that it can be of use to someone, as it is relatively simple.</p> <h2>Overview</h2> <p>The central idea of my model revolves around the ensemble of raw log data processing using 1D-CNN and aggregated feature processing using GBDT. Individually, these models achieved CV=0.696 and Public LB=0.697, respectively. However, by ensembling them, I was able to improve the scores to CV=0.700, Public LB=0.700, and Private LB=0.700.</p> <h2>Models</h2> <p>The competition data provided consisted of gameplay logs, with several hundred to several thousand logs per session. Hence, I employed two modeling approaches: 1D-CNN, which directly extracts features from the temporal log sequences, and LightGBM, which utilizes aggregated features obtained through feature engineering. For 1D-CNN, I based my implementation on the <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part" target="_blank">public notebook by ABAOJIANG</a>. As for feature engineering and LightGBM, I referred to the <a href="https://www.kaggle.com/code/leehomhuang/catboost-baseline-with-lots-features-inference" target="_blank">public notebook by ONELUX</a>. I extend my gratitude to them for sharing their excellent notebooks.</p> <p>Regarding 1D-CNN, I used the encoder part of the public notebook as the base. After performing feature extraction using 1D-CNN, I applied the Multi-Head Attention structure before conducting temporal aggregation. I utilized five input features: numerical features such as diff(elapsed_time) and log(elapsed_time), and categorical features such as event_name + name, room_fqid, and fqid + text_fqid.</p> <p>For LightGBM, I added several features to the ones presented in the public notebook, resulting in inputting over 2000 features. Most of the additional features were related to text_fqid, including total time spent displaying text for each fqid and the reading speed per word.</p> <p>Furthermore, I combined a subset of these features (around 6) with 1D-CNN to create a new neural network model, which also had a positive effect (CV+0.001 approximately). I integrated these three models using linear regression-based stacking to generate the final predictions.</p> <p>However, for the simplest questions (2, 3, 18), I did not perform any modeling and predicted all of them as 1. 
The inference time was cutting it close at around 9 hours (528 minutes), and I was quite nervous during the final submission lol.</p> <h2>What Didn't Work</h2> <p>Here is a list summarizing the experiments I conducted that did not yield successful results:</p> <ul> <li>Building the encoder solely using Transformers</li> <li>Utilizing state-of-the-art time series neural networks like <a href="https://github.com/timeseriesAI/tsai/blob/main/tsai/models/PatchTST.py" target="_blank">Patch TST</a> or <a href="https://github.com/cure-lab/SCINet" target="_blank">SCINet</a></li> <li>Including CatBoost in the ensemble</li> <li>Handling all the problems with a single model</li> <li>TabNet</li> <li>Merging <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/412098" target="_blank">additional data</a></li> <li>DAE (Denoising Autoencoder)</li> <li>Applying an anomaly detection model to the simplest questions (2, 3, 18)</li> <li>Using LightGBM with the latent features of the neural network</li> </ul>
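<p>As a small illustration of the linear-regression stacking step described above, here is a hedged sketch (the helper name and the threshold grid are my assumptions, not the author's code): the out-of-fold probabilities of the three models are blended with a linear regression, and a single global threshold is chosen by F1.</p> <pre><code># Minimal stacking + threshold-search sketch for blending model probabilities.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import f1_score

def stack_and_threshold(oof_preds, y_true):
    """oof_preds: (n_samples, n_models) out-of-fold probabilities; y_true: 0/1 labels."""
    stacker = LinearRegression().fit(oof_preds, y_true)
    blended = stacker.predict(oof_preds)
    best_t, best_f1 = 0.5, 0.0
    for t in np.arange(0.40, 0.81, 0.005):          # coarse threshold grid
        score = f1_score(y_true, (blended > t).astype(int), average='macro')
        if score > best_f1:
            best_t, best_f1 = t, score
    return stacker, best_t, best_f1
</code></pre>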
Predict Student Performance from Game Play
98th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Despite all the confusion, I would like to express my gratitude to the staff and the Kagglers who shared great information and knowledge. My result is not the top one, but I am posting this in the hope that it will be helpful to someone else and as a record for myself.</p> <h2>Overview</h2> <p>I created multiple XGboost models and blended their predictions as submission. Public Score was 0.703 - 0.704 and Private Score was 0.698 for all three.</p> <h2>Solution</h2> <p><a href="https://www.kaggle.com/code/pourchot/simple-xgb" target="_blank">Laurent's Notebook</a> was used as the baseline. The following features were added</p> <ul> <li>Identifiy conversations and actions that are essential to the progress of the game, and calculate the elapsed time between them and the number of Indexes.</li> <li>The number of Indexes and the sum of elapsed time diff using fqid, room, and level as keys, since the same fqid can occur in multiple rooms and levels.</li> <li>Number of times a very large elapsed time diff has occurred</li> <li>Whether the order of level_group is swapped or not, since there are cases in which the order of level_group is swapped as described in <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/395250" target="_blank">AbaoJiang's Notebook</a>.</li> </ul> <p>Data from the previous level_group and predictions for the previous questions were also added as features</p> <ul> <li>e.g.) For the model with level_group = 5-12, the features created for level_group = 0-4 are added as they are.</li> <li>For the model predicting question t, I added the predictions for questions 1, 2, 3, …, t-1 as features.</li> </ul> <p>In this way, 1200 features were prepared for level_group = 0-4, 3000 for level_group 5-12, and 5500 for level_group = 13-22. From here, feature selection is performed by feature importance in terms of gains. I created multiple XGboost models (like 3 - 8 models), which vary which features to include and how much to reduce the number of features by feature selection. In many models, the number of features are around 800, 1500, 2500, respectively. The weighted average of these models was used as the final prediction. In the inference code, I sorted the test data as described in <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/416963" target="_blank">Daniel's Notebook</a>. 
In many cases, this sorting improved scores over sorting by index even in GBDT models.</p> <p>The addition of the past level_group features and the addition of the past question predictions as features contributed in particular to the scores.</p> <h2>What did not work</h2> <ul> <li>Tuning the hyperparameters for individual models did little to improve the scores</li> <li>Adjusting the thresholds for individual questions also did little to improve the scores</li> <li>I tried to use additional raw data available on the web, but could not improve scores.</li> <li>I tried to convert the coordinates of clicks in a room into a good feature, but could not improve scores.</li> <li>I also tried to incorporate the predictions of the previous question in two steps using stacking, but this did little to improve scores, so I adopted the simpler approach described above.</li> </ul> <h2>Reflection</h2> <ul> <li>Since I started feature selection, I calculated the CV score in a bad way, resulting in an inappropriate CV score. This caused an over-fitting to Public Score, because I did not know what to trust.</li> <li>I took the weighted average of multiple XGboosts as the final predicted value, but the models were so similar that it improved the Public Score but not the Private Score. Blending <a href="https://www.kaggle.com/code/vadimkamaev/catboost-new/notebook" target="_blank">VADIM’s publick notebook</a> and my XGboost model gives private score 0.700 but I could not select this one. I should have increased the diversity of the models instead of being drawn to the Public Score.</li> <li>In my submission list, I found a model with a Public Score of 0.695 but a Private Score of 0.704. I think it would have been impossible to choose this as my final submission because it was a fluke result!</li> </ul> <p>My code can be found in <a href="https://github.com/KazuakiTokunaga/kaggle-studentperformance" target="_blank">this github repository</a>.</p>
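<p>To illustrate the gain-based feature selection mentioned above, here is a hedged sketch (the function name and fallback handling are mine, not the author's code): a first XGBoost model is fit on everything, the features are ranked by gain importance, and only the top few hundred are kept for the refit.</p> <pre><code># Sketch of selecting the top-N features by XGBoost gain importance.
def top_features_by_gain(model, feature_names, top_n=800):
    """model: a fitted xgboost.XGBClassifier; returns the top_n feature names by gain."""
    gain = model.get_booster().get_score(importance_type='gain')
    # keys are column names when fit on a DataFrame, otherwise 'f0', 'f1', ...
    def resolve(key):
        if key.startswith('f') and key[1:].isdigit():
            return feature_names[int(key[1:])]
        return key
    ranked = sorted(gain.items(), key=lambda kv: kv[1], reverse=True)
    return [resolve(key) for key, _ in ranked[:top_n]]
</code></pre> <p>The reduced lists (roughly 800, 1500 and 2500 features per level_group in this write-up) are then used to retrain the final models.</p>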
Predict Student Performance from Game Play
14th Place Solution Joseph Part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank my teammates for their efforts, I have learned a lot from them. Luckily we don't shake-down too much. Now I would like to introduce my solution to you. </p> <h1>Modeling</h1> <p>My modeling method is like a 'cumulative' one: using 0-4 part data to generate the train set of q1-q3, using 0-4 and 5-12 part data to generate the train set of q4-q13, using 0-4, 5-12 and 13-22 part data to generate the train set of q14-q18. <strong>Question is also a feature</strong>. It has merit that I don't need to save 'historical' data. And this one is time-saving and costs about 40min for inference.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2Fe49b6d8581547e3a9477a626647d36cc%2FWX20230629-0924562x.png?generation=1688002004825453&amp;alt=media" alt=""></p> <h1>Feature Engineering</h1> <p>There are my FE ideas:</p> <ul> <li><p><strong>basic agg</strong> features: eclipse_time_diff sum, count and max of each group, each level, each event_name, …, each text; eclipse_time_diff sum, count under a particular room_fqid and an event_name, etc.</p></li> <li><p><strong>behavior-change</strong> features: the number of room change, and the number of room change under each level; the number of text_fqid change, and the number of text_fqid change under each level, etc.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F2f61e63e58a3bfab1944b9af16d0e589%2F1688005632589.jpg?generation=1688005685285390&amp;alt=media" alt=""><br> In this picture, we can see a <strong>room change</strong> behavior, we calculate the change times and average to characterize one's ability to understand and reason. Some of them have pretty high feature importance.</p></li> <li><p><strong>Meta</strong> features: Besides basic <code>groupby</code> feature engineering, I add the meta feature for 5-12 and 13-22 groups. There are two way to use them:</p></li> </ul> <ol> <li>each question's <code>predict_proba</code> as a feature, 5-12's model includes features <strong>q1_proba</strong>, <strong>q2_proba</strong>, and <strong>q3_proba</strong>, 13-22's model includes features <strong>q1_proba</strong>, <strong>q2_proba</strong>, … <strong>q13_proba</strong>.</li> <li>mean of all question in one group as a feature, for instance, 5-12's model includes a feature <strong>mean_of_q1-q3_proba</strong>, 13-22's model includes features <strong>mean_of_q1-q3_proba</strong>, <strong>mean_of_q4-q13_proba</strong>.</li> </ol> <h1>Models</h1> <p>For my part, I use 9 models for my ensemble. They are 4 xgboost, 1 lightgbm, 2 dart, 2 catboost. The private-best single model is a dart, which achieved <strong>Public 0.704</strong> and <strong>Private 0.704</strong>. The public-best model is a xgboost, which achieved <strong>Public 0.705</strong> and <strong>Private 0.698</strong>. 
My dart notebook: <a href="https://www.kaggle.com/code/takanashihumbert/game-play-lgbdart-infer/notebook" target="_blank">Game-Play-LGBDart[INFER] Private LB 0.704</a></p> <h1>The difficulty</h1> <p>I think the most difficult part of this comp was establishing a reliable CV and choosing the threshold and the submissions. As you can see, my dart model and XGBoost model under the same CV strategy vary wildly, which was beyond my expectations. I didn't even have the confidence to give my dart models a bigger weight. <strong>I believe many teams didn't choose their best results.</strong></p> <p>Finally, I would like to pay tribute to all kagglers who shared their ideas. See you next game.</p>
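<p>For the behavior-change features described above, here is a small pandas sketch (column names follow the competition schema; the implementation is my guess, not the author's code) that counts room changes per session and per level:</p> <pre><code># Sketch of "behavior-change" features: how often the player switches room.
import pandas as pd

def room_change_features(df):
    df = df.sort_values(['session_id', 'index'])
    changed = (df['room_fqid'] != df.groupby('session_id')['room_fqid'].shift()).astype(int)
    df = df.assign(room_changed=changed)
    # note: the first event of each session counts as one "change"; subtract 1 if needed
    total = df.groupby('session_id')['room_changed'].sum().rename('n_room_changes')
    per_level = (df.groupby(['session_id', 'level'])['room_changed'].sum()
                   .unstack(fill_value=0).add_prefix('room_changes_level_'))
    return pd.concat([total, per_level], axis=1)
</code></pre> <p>The same pattern can be applied to text_fqid changes, giving the "ability to understand and reason" signal the write-up describes.</p>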
Predict Student Performance from Game Play
Efficiency: 14th place Public
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code:<br> Training: <a href="https://www.kaggle.com/roberthatch/student-efficiency-catboost-polars-train" target="_blank">https://www.kaggle.com/roberthatch/student-efficiency-catboost-polars-train</a><br> Inference: <a href="https://www.kaggle.com/code/roberthatch/student-efficiency-polars-inference" target="_blank">https://www.kaggle.com/code/roberthatch/student-efficiency-polars-inference</a></p> <p>My efficiency solution wasn't very innovative, mainly due to limited time. I only did first couple weeks of competition, then last week of competition, and didn't actually focus too much on efficiency solution, even though it was much more interesting to me, as I wanted a silver medal. (But failed to medal.)</p> <p>[Aside: yes, I know it was a 5 month competition. Frankly, I was excited to get a second chance after 60 days in which I didn't even edit a single Kaggle Notebook. But I also split my time on the second chance sign language competition, so only gave myself a week on this one.]</p> <p>The interesting thing was taking <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> 's excellent <a href="https://www.kaggle.com/code/carnozhao/cpu-catboost-baseline-using-polars-inference" target="_blank">Polars starter notebook</a> and fixing the inference script to allow submission with latest API was already 15th place public(!)</p> <p>So - on public LB - I only went from 15th -&gt;14th place with some small improvements. With private LB I probably gained another few places, as public score was 682-&gt;687, but private score was 681-&gt;691. Given that it took about 2.5 more minutes, that makes the public vs private even more of a difference, at about +0.0025 vs +0.0075, so much more improvement on private LB.</p> <p>The semi-minimal updates to get that from 0.682 -&gt; 0.687, while adding about 140 seconds to the runtime.</p> <ul> <li>3 digit threshold.</li> <li>So far: [no CV test, 0.681 public, 0.682 private]</li> <li>Rerun training with all data</li> <li>SKIP 6 questions (inspired by top public notebook). Skip 2, 3, 12, 13, 16, 18. (Guess 0 on Q13, 1 on the rest.) Naturally this speeds up inference, which is why I decided to be slightly aggressive on how many questions to skip. Only about -0.0001 CV penalty each for skipping the 4th, 5th, 6th questions. 
There is an added benefit: if a question predicts poorly on the private LB (since the private LB is a different population), I could get lucky and avoid a score penalty that would affect almost everyone else.</li> <li>With all the above so far: [0.6819 CV, 0.682 public, 0.683 private, ~5:25 scoring time]</li> <li>34-&gt;60 features [0.6859 CV, 0.685 public, 0.683 private, ~7:15 scoring time run A, ~6:37 run B]</li> <li>60-&gt;117 features</li> <li>Train on all train data with per-question iteration counts (number of trees) based on the results of 5-fold early stopping, tuned for each question individually.</li> <li>[0.688 CV, 0.687 public, 0.691 private, ~7:41 scoring time]</li> </ul> <p>There were many, many things I didn't have time to implement and try:</p> <ul> <li>checkpoint features</li> <li>save prior level predictions (or all features)</li> <li>hyper-parameter tuning</li> <li>Extensive feature selection to optimize for efficiency prize</li> <li>Use checkpoint features for CutMix style data augmentation.</li> </ul>
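<p>To illustrate the question-skipping shortcut described above (the guessed constants follow the write-up: 0 for Q13, 1 for the other skipped questions; the function and threshold value are illustrative, not the actual notebook code):</p> <pre><code># Hedged sketch of "skip 6 questions": hard-code the answer and only run models for the rest.
SKIP = {2: 1, 3: 1, 12: 1, 13: 0, 16: 1, 18: 1}   # question -> constant guess

def predict_question(q, features, models, threshold=0.63):
    if q in SKIP:
        return SKIP[q]                        # no model call, so inference is faster
    proba = models[q].predict_proba(features)[:, 1]
    return int(proba[0] > threshold)
</code></pre>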
Predict Student Performance from Game Play
1st Place Solution for the Predict Student Performance from Game Play Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Unbelievable to write this!</p> <h1>Thanks!</h1> <p>As it is the usage, we first <strong>thank the host and Kaggle</strong>. These are special thanks because you and us have had a special link in this competition as we gave you more work by reporting data leaks. No doubt you tried to do your best. You are right to animate this community and to trust in it. You are part of it. Please take care of this community that is able to build so much together by sharing. As all of us you have made mistakes and we hope you will learn from them.</p> <p>We also want to <strong>thank all of you</strong>, Kagglers. We love and are grateful to be part of our group/community. Thanks for sharing and for the collective learning experience.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/overview" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/overview</a>,</li> <li>Data context: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/data" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/data</a>.</li> </ul> <h1>Overview of the Approach</h1> <p>Our solution is essentially a blend of a XGBoost and a NN models. Both heavily rely on duration that appeared to be a powerful leverage. Time was aggregated in different ways and combined with counts for the GBDT while it is transformed via a custom TimeEmbedding block based on 1D convolutions that produce a representation combined with user event representations for the NN.<br> Robustness and efficiency founded our work. XGBoost models were validated on 10 bags of 5 folds and features incorporated only if the mean of the CV of these 10 bags was greater than the level of noise we quantified while we opted for a majority/consensus strategy to build the NN, i.e. validate choices only if 4 of 5 folds were improved. The 3rd place of the efficiency LB was achieved with a lightweight NN accelerated via TF Lite.</p> <h1>Details of the submission</h1> <h2>Code</h2> <p>After publishing this write-up we decided to open our code: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332</a>.<br> It is composed by several parts: <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-gbdt-training" target="_blank">how to train the XGBoost models</a>, how to <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-nn-pretraining" target="_blank">pretrain</a> and <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-nn-training" target="_blank">train</a> the NN models and the <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-inference" target="_blank">inference notebook</a> used to win this competition.</p> <h2>Data</h2> <p>Looking at the 1st data released showed that there aren't a lot of sessions so not a lot of sequences. Moreover these are long sequences. 
This is not ideal for a deep learning approach. <br> Exploring the Field Day Lab research instructed that the Jo Wilder application was built to help learning to read and that way more than 11,500 learners had played this game. <br> These 2 ideas led to search for a bigger dataset. In 1 Google search and 3 clicks we came up to the open data portal (<a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>) which contains a lot of sessions. 1 hour and 3 bash commands latter we knew that the train set was in part in the open data. So we took a week to <strong>build a pipeline that extracts 98 % of the sessions of the train set perfectly and with minor errors for the last 2 %</strong>. Our data are even better than the comp data because we knew before the host confirmation that for the sessions with 2 games the target was skewed (0 if wrong in 1 of the 2 games when we aim at predicting the responses for the 1st game). It seems that fixing these targets can bring a significant boost up to +0.002.</p> <p>We took 1 more week to build a GBDT/XGBoost baseline that would have scored top 10 given the CV score, with the use of the supplemental data (~20,000 sessions) that gave +0.003/0.004 at that time. As we simulated the API locally (see after), we used some training sessions to infer and noticed that it scored 0.718. We were hoping that the LB sessions were not part of the open data portal but our 1st submission, LB 0.708, immediately showed to us that we had rebuilt about a half of the data and especially the targets in the public LB, because 0.708 = (0.698 + 0.718) / 2. The host and Kaggle have been immediately informed. You know what happened next (<a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/415820)" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/415820)</a>. <br> After the release of the LB data we measured that we perfectly rebuilt ~7000 sessions over the ~11,500 of the LB data.</p> <p>We spent the first month exploring the data until we understood/knew it pretty well. For example we even reconstituted sessions for what might be schools (several games on 1 IP session), extracted every single session with at least 1 answer, …</p> <p>After the update we made a first submission that scored 0.72. This was shocking because this meant that some leaked data were remaining. A few days later we noticed that the open data was not totally similar with the state we found it 1 month before. A file was missing. So we returned to the host and Kaggle to give them more work.</p> <p><strong>This process/work led us to perfectly understand the data model</strong> (that changed since the 1st release of the game). This also allowed us to deeply understand the data itself.</p> <p>Note that we only used the sessions for which we had responses to all questions of the 2 1st level groups. 
1) This is more consistant with the sessions we want to predict (game from the beginning to the end) and 2) this approach preserves performance (vs all data) while reducing training time.</p> <p>Our dataset is constituted by <strong>37323 complete sessions (23562 comp + logs) in a total of 66376 sessions</strong>.</p> <p>The supplemental data (that we fully added 1 month ago) gave us consistently <strong>CV +0.002</strong>.</p> <h2>Model</h2> <p>Our solution is mainly an ensemble of GBDT + NN models.</p> <h3>Trust your validation</h3> <p>We think that <strong>the main reason of the robustness of our solution is that we only relied on CV</strong> for decision making. No choice had been made on LB.</p> <p>Probing showed to us that the private test consists in the 1st 1450/1500 sessions served by the API. This is a small set. In our experiments 5,000 sessions is the minimum to guarantee a stable CV/LB alignment. A set less than 2,000 is very noisy so <strong>robustness was the way to go</strong>.</p> <p>We <strong>only added features that improved the CV for sure</strong>. This is not easy to delete features that you believe in but this is needed as science is not a matter of belief. There are several ways to do so: for example monitor all folds in a CV (and accept only on majority or consensus), monitor several bags (composition of CV to not overfit validation), …</p> <p>For the GBDT approach, we mainly validated on the mean of 10 bags (we defined a bag as a composition of the folds). As we estimated the noise to be ~0.0003, only improvements greater than the noise have been considered. For the NN as we needed to iterate quicker we only used a single bag and only incorporated &gt; 0.0003 overall improvements with at least 3 or 4 (over 5) folds improved.</p> <h3>Metric</h3> <p>We experimented a lot on finding a threshold by question but found that this approach is less robust than a single threshold. We mainly used 0.625 as global threshold despite our highest LB scores that were obtained with a threshold per question.</p> <h3>GBDT</h3> <p>We prototyped a baseline with <strong>XGBoost because of the structured/tabular nature of the data</strong>. The feature engineering process is interesting to understand what is predictive and to understand the causation, i.e. how the features or decision criteria that enable to predict correctly.</p> <p>Generally speaking we followed 3 ways to build features: <strong>business knowledge</strong>, our <strong>intuition</strong> playing the game and a meticulous <strong>exploration of the data</strong>.<br> Business knowledge refers to using expert knowledge. Reading the papers of the researchers that built this game allow to understand the game beyond usage. For example, Jo Wilder has been built to improve the players reading skills. So this means that the text duration should be important. 
These are like killer features.</p> <p>We exclusively made use of Polars because of the CPU constraints and to simply learn it.<br> Our features (663, 1993, 3734 for each level_group) are mainly <strong>durations and counts for different aggregations</strong>: how much time in a level, in a room, reading a text, interacting in some way (event type), how many events in a level_group, how many events of each type, how many events of each type in a room or a level, … <br> We also built a few notebook dedicated features: how many type of events on the notebook in a level, … <br> Despite our efforts we weren't able to extract useful information from the coordinates, the only few features of this type had been mean and std for some events in the activities (journal interactions for example).</p> <p>We considered that injecting targets predicted in the previous level groups was a compression of the signal, meaning a loss of information, so we used, for each session, <strong>all interactions from the beginning of the game/session</strong>. This led to a +0.002 at the time of this choice.</p> <p>After the API needed to order the data, we noticed that <strong>models trained both on original order and on index order</strong> but validated on index order (inference order) improved our scores. This leads to more variety that was needed to <strong>improve stability and robustness</strong>. The same goes for the composition of the validation sets: usage of several bags (composition of validation sets) based on the comp data but also on the extracted data improved our scores. We detected late that increasing the number of folds from 5 to 10 could also be leveraged.</p> <p>The code for GBDT allows to switch from XGBoost to LightGBM and CatBoost with a simple variable parameter but despite the good scores (~0.001 less than XGBoost), this did not bring to ensemble so we sticked to only XGBoost.</p> <p>We experimented a lot around feature selection but were unable to build a stable strategy. So instead of a top-down approach consisting in deleting useless features, we adopted a bottom-up approach choosing carefully each group of features.</p> <p>Our <strong>XGBoost models score CV ~0.7025 +/-0.0003</strong> and blending 5 of them (the only XGBoost we still have with correct score) scores <strong>LB 0.704</strong>.</p> <h3>NN</h3> <p>After achieving a good score with gradient boosting and having understood well the data we focused on deep learning.</p> <p>The <strong>first attempt was with Transformers</strong>. The 1st results were disappointed: CV 0.685 with 2 hours / fold (as far as we can remember). Transformers are very computationally intensive. Resources: <a href="https://arxiv.org/pdf/1912.09363.pdf" target="_blank">https://arxiv.org/pdf/1912.09363.pdf</a>, <a href="https://arxiv.org/pdf/2001.08317.pdf" target="_blank">https://arxiv.org/pdf/2001.08317.pdf</a>, <a href="https://arxiv.org/pdf/1711.03905.pdf" target="_blank">https://arxiv.org/pdf/1711.03905.pdf</a>, <a href="https://arxiv.org/pdf/1907.00235.pdf" target="_blank">https://arxiv.org/pdf/1907.00235.pdf</a>, …</p> <p>We then gave a try to <strong>Conv1D</strong>. In one day we had a very simple model that scored as Transformers but <strong>10x faster</strong> allowing to iterate quicker. 
So we pushed this approach and could seamlessly scale it beyond our expectations.</p> <p>It is difficult to share the <strong>tens or hundreds of experiments</strong> needed to achieve the final solution, which is based on a simple architecture but a slightly complex training pipeline.</p> <h4>Architecture roots</h4> <p>We browsed the literature based on the question: how to model time in deep learning? <br> This research led us to the idea of <strong>time-aware events</strong> (i.e. <a href="https://proceedings.mlr.press/v126/zhang20c/zhang20c.pdf" target="_blank">https://proceedings.mlr.press/v126/zhang20c/zhang20c.pdf</a>) and back to <strong>WaveNet</strong> (<a href="https://arxiv.org/pdf/1609.03499.pdf" target="_blank">https://arxiv.org/pdf/1609.03499.pdf</a>) because it uses <strong>Conv1D to model long sequences with considerations on causation</strong>. <br> Other papers also inspired us: <a href="https://arxiv.org/pdf/1703.04691.pdf" target="_blank">https://arxiv.org/pdf/1703.04691.pdf</a> builds on top of the WaveNet paper for time series, and <a href="https://idus.us.es/bitstream/handle/11441/114701/Short-Term%20Load%20Forecasting%20Using%20Encoder-Decoder%20WaveNet.pdf?sequence=1&amp;isAllowed=y" target="_blank">https://idus.us.es/bitstream/handle/11441/114701/Short-Term%20Load%20Forecasting%20Using%20Encoder-Decoder%20WaveNet.pdf?sequence=1&amp;isAllowed=y</a> also builds on top of WaveNet. <br> We also have to mention the excellent work that <a href="https://www.kaggle.com/abaojiang" target="_blank">@abaojiang</a> shared (<a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/398565" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/398565</a> and <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part" target="_blank">https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part</a>). It inspired our research and maybe successfully biased it.</p> <p>Let's focus on the model of our efficiency submission, which is also part of our final ensemble and whose performance is nearly the same as models with a few more features.</p> <h4>Feature representations</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2Ff692423c63172494ea4df97b01d65d33%2Fdata.png?generation=1702113769960411&amp;alt=media" alt=""></p> <p><strong>5 features as inputs: duration, text_fqid, room_fqid, fqid, event_name + name</strong> (this is the event type from the original data model as far as we remember). Each of these is encoded/embedded into a vector representation (d_model = 24) before being merged. The 4 <strong>categorical features feed a classical Embedding layer and the duration a TimeEmbedding</strong> which is a custom block.</p> <p>Developing the GBDT solution showed that the <strong>duration</strong> was crucial, so we put a large amount of time into modelling it well. The TimeEmbedding layer is a composition of 4x ConvBlock which is inspired by the Transformer main block: Conv1D -&gt; skip connection -&gt; layer norm -&gt; dropout.</p> <pre><code># Reconstructed from a garbled extract: the structure follows the description above,
# but the exact constructor arguments were lost and are assumptions.
import tensorflow as tf

class TimeEmbedding(tf.keras.layers.Layer):
    def __init__(self, d_model=24, n_blocks=4, name='time_embedding'):
        super(TimeEmbedding, self).__init__(name=name)
        self.conv_blocks = [ConvBlock(d_model) for _ in range(n_blocks)]

    def call(self, inputs):
        # duration arrives as (batch, seq_len); add a channel axis for Conv1D
        x = tf.expand_dims(inputs, axis=-1)
        for conv_block in self.conv_blocks:
            x = conv_block(x)
        return x
</code></pre> <pre><code># Conv1D, then skip connection, layer norm, dropout (kernel size and dropout rate
# were lost in extraction; the values below are placeholders).
class ConvBlock(tf.keras.layers.Layer):
    def __init__(self, d_model=24, kernel_size=3, dropout_rate=0.1, name=None):
        super(ConvBlock, self).__init__(name=name)
        self.conv1d = tf.keras.layers.Conv1D(d_model, kernel_size, padding='same')
        self.layer_norm = tf.keras.layers.LayerNormalization()
        self.dropout = tf.keras.layers.Dropout(dropout_rate)

    def call(self, inputs):
        x = self.conv1d(inputs)
        x = x + inputs          # skip connection
        x = self.layer_norm(x)
        outputs = self.dropout(x)
        return outputs
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F9407390ffd7f226b7be0f57c37051f77%2Ftime_embedding.png?generation=1702113795144186&amp;alt=media" alt=""></p> <h4>Time-aware events</h4> <p>As said, the goal of building these representations was to model time-aware events. We considered the <strong>categorical features as events</strong> because they represent the user interactions with business entities of the game. We then tried to incorporate duration to make them time-aware. Our main intuition turned out to be the best. It is a <strong>simple solution based on operation priority to represent that the duration should be associated with each event before the events are combined together</strong>: duration * event_1 + duration * event_2 + … which had been factorized to duration * (event_1 + event_2 + …).</p> <pre><code># Reconstructed backbone; the input unpacking and the Embedding input dimensions are
# assumptions, only the overall structure follows the original write-up.
class ConvNet(tf.keras.Model):
    def __init__(self, input_dims, n_outputs, d_model=24, n_blocks=4, name='conv_net'):
        super(ConvNet, self).__init__(name=name)
        self.input_dims = input_dims
        self.n_outputs = n_outputs
        self.d_model = d_model
        self.n_blocks = n_blocks
        self.event_embedding = tf.keras.layers.Embedding(input_dims['event'], d_model)
        self.room_embedding = tf.keras.layers.Embedding(input_dims['room'], d_model)
        self.text_embedding = tf.keras.layers.Embedding(input_dims['text'], d_model)
        self.fqid_embedding = tf.keras.layers.Embedding(input_dims['fqid'], d_model)
        self.duration_embedding = TimeEmbedding(d_model, n_blocks)
        self.gap = tf.keras.layers.GlobalAveragePooling1D()

    def call(self, inputs):
        event = self.event_embedding(inputs['event'])
        room = self.room_embedding(inputs['room'])
        text = self.text_embedding(inputs['text'])
        fqid = self.fqid_embedding(inputs['fqid'])
        duration = self.duration_embedding(inputs['duration'])
        x = duration * (event + room + text + fqid)   # time-aware events
        outputs = self.gap(x)
        return outputs

    def get_config(self):
        config = super().get_config().copy()
        config.update({
            'input_dims': self.input_dims,
            'n_outputs': self.n_outputs,
            'd_model': self.d_model,
            'n_blocks': self.n_blocks,
            'name': self._name,
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2Fca563daa4f9ef7bf8d8b20e0c85c7951%2Ftime_aware_events_1.png?generation=1702113822015438&amp;alt=media" alt=""><br> The 2 representations are equivalent: you can either think of time-aware events as a combination of time and sub-events, or as a combination of sub-events and time.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F1f65d399dfd93f128ca6c887d0f0be89%2Ftime_aware_events_2.png?generation=1702113853644045&amp;alt=media" alt=""></p> <h4>Training pipeline</h4> <p>The training pipeline is not totally straightforward.</p> <p><a href="https://www.kaggle.com/dongyk" target="_blank">@dongyk</a> published great schematics that can be useful to illustrate what is explained below: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420217#2332166" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420217#2332166</a>.</p> <h5>1st step (pre-training?)</h5> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F970baa66fc37b301bb8960cc92cf8ab4%2Fpre_training.png?generation=1702113879947155&amp;alt=media" alt=""></p> <p>The best approach for us consists of a kind of <strong>backbone that represents the events of a level_group</strong>.</p> <p>This backbone is trained on all the data available for this level_group (i.e. on complete + incomplete sessions).
It is associated with a temporary SimpleHead optimizing BCE loss.</p> <pre><code># Reconstructed from a garbled extract; the hidden activations were lost, so ReLU and a
# sigmoid output are assumptions consistent with the BCE loss.
class SimpleHead(tf.keras.Model):
    def __init__(self, n_units, n_outputs, name='simple_head'):
        super(SimpleHead, self).__init__(name=name)
        self.ffs = [tf.keras.layers.Dense(units, activation='relu') for units in n_units]
        self.out = tf.keras.layers.Dense(n_outputs, activation='sigmoid')

    def call(self, inputs):
        x = inputs
        for ff in self.ffs:
            x = ff(x)
        outputs = self.out(x)
        return outputs
</code></pre> <p>This approach scores <strong>CV 0.70025 +/- 0.0005</strong>.</p> <h5>2nd step (training?)</h5> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F98e32800c54ffab63d98d90d220fbbcd%2Fend_2_end.png?generation=1702113900174294&amp;alt=media" alt=""></p> <p><strong>The weights of each of the 3 backbones (1 by level_group) are frozen</strong> for the 2nd level of training to speed up training, but also because it is more stable and efficient. These backbones can be thought of as "embedders".</p> <p>During this 2nd step, <strong>all the submodels that compose the solution were trained on all complete sessions in an end-to-end setup</strong>. The input data are 3 sequences of the 5 features, 1 for each of the 3 level groups. Each "embedder" outputs a 24-dim vector representation. These outputs are the inputs of a head: the representation of level_group '0-4' is used to predict the 3 first questions, and the concatenation of the previous and current representations is used for level_groups '5-12' and '13-22' to make use of all information.</p> <p>Proceeding like this allows us to optimize the overall performance and to monitor it with the F1 score, which is the competition metric. This means we optimized BCE with the F1 score as a metric.</p> <p>Our winning submission uses a simple <strong>MLP head</strong> but also a <strong>skip head</strong> (e.g. 512 -&gt; 512 -&gt; 512). <strong>MMoE</strong> did not improve on the simplest approaches.</p> <p>This approach scores <strong>CV 0.70175 +/- 0.0003</strong>, which is <strong>comparable to the GBDT solution</strong>.</p> <h3>Inference</h3> <h4>Build a simulator</h4> <p>Early in the competition we built a simulator of the API. Doing so, we never experienced any submission error. Maybe trying to keep ideas and code as simple as possible was also key to debugging easily.</p> <h4>Efficiency</h4> <p>We invested in the efficiency part of the challenge for GBDT as well as NNs. <br> Using <strong>Treelite</strong> for XGBoost allowed us to halve the execution time. <br> Our deep learning models were lightweight: <strong>400,000 weights</strong> for the end-to-end model which combines all parts/sub-models. Having already used <strong>TF Lite</strong> we knew it could be a game changer. Converting our models led to a significant boost in inference time without any performance loss (we do not remember exactly but we think it is at least <strong>6x faster</strong> on our local inference simulator). <br> Exploring pruning as well as hard quantization showed that the performance loss would be significant (which is OK in production but not in a competition), so we stuck to a simple TF Lite conversion (a short conversion sketch is given at the end of this write-up).</p> <p>We have not leveraged what seems to be a problem in the efficiency metric. As we identified the private test sessions to be the first 1450/1500 served by the API, we tried to predict only the others to check which time was used (public for public and private for private).
Doing so we gained a place, but we chose not to use this.</p> <p>Our <strong>efficiency submission is a NN that scores public LB 0.702 and private LB 0.699 in less than 5 minutes</strong>.</p> <h4>Ensemble</h4> <p>We experimented with a lot of ensembling alternatives. In the end we stuck to a simple 50/50 GBDT/NN average with:</p> <ul> <li>2 kinds of GBDT: trained on original order + trained on index order (validated on index order, which is the inference case),</li> <li>3 kinds of NNs: trained on original order + trained on index order, with 5 or all features.</li> </ul> <p>As our models are lightweight we were able to build a huge ensemble: <strong>2 x 4 x 10 folds XGBoost + 3 x 4 x 5 folds NNs</strong>. The bottleneck for us is the 8 GB RAM constraint.</p> <p>The winning submission scores <strong>CV 0.705, public LB 0.705 and private LB 0.705</strong>.</p> <h2>Conclusion</h2> <p>The main achievement of our work is that it is a good solution for the researchers, learners and children who can benefit from it, and we hope it will contribute to progress toward a better learning experience. Up to you guys!</p> <p>Thanks if you read until here!<br> If you have any questions do not hesitate to ask. We will do our best to respond.</p> <h2>Presentation to the host</h2> <p>A video presentation to the host has been recorded and is available on demand. Feel free to ask via PM.</p> <h1>Sources</h1> <p>Below are the main sources that we used. More sources can be found in the section <em>Details of the submission</em> above.</p> <ul> <li><a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332</a>,</li> <li><a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>,</li> <li><a href="https://arxiv.org/pdf/1609.03499.pdf" target="_blank">https://arxiv.org/pdf/1609.03499.pdf</a>,</li> <li><a href="https://www.tensorflow.org/lite/guide" target="_blank">https://www.tensorflow.org/lite/guide</a></li> </ul>
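<p>The TF Lite conversion mentioned in the Efficiency section can be sketched as follows (not the team's code; a single-input Keras model is assumed for brevity, whereas the real model takes several input sequences):</p> <pre><code># Minimal TF Lite conversion + inference sketch.
import numpy as np
import tensorflow as tf

def to_tflite(keras_model):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    return converter.convert()               # plain conversion, no hard quantization

def tflite_predict(tflite_bytes, x):
    interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp['index'], x.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out['index'])
</code></pre> <p>In practice the interpreter would be created once and reused across API iterations; a plain conversion keeps the weights in float32, which is consistent with the "no performance loss" observation above.</p>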
Predict Student Performance from Game Play
7th Place Solution (Efficiency 1st)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am pleased to have fought the long and hard competition with all of you here.<br> Here I would like to outline my solution.</p> <h2>Overview</h2> <ul> <li>To make predictions for 18 questions, I trained 3 LightGBM models, one for each level_group. The reason I did not build a model for each question was primarily to reduce inference time.</li> <li>Most of the features I have created are features based on the time difference between two consecutive actions. (More on this later.)</li> <li>The CV score was improved by about 0.002 by adding raw data published by the competition host.</li> <li>Unexpectedly, the submission for the Efficiency Prize had the best score in Private Leaderboard amoung the selected sumissions. The inference time of that is approximately 3 minutes.</li> </ul> <p>The notebooks reproducing my submission are as follows:</p> <ul> <li><a href="https://www.kaggle.com/code/rsakata/psp-1-save-data" target="_blank">https://www.kaggle.com/code/rsakata/psp-1-save-data</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-2-process-raw-data" target="_blank">https://www.kaggle.com/code/rsakata/psp-2-process-raw-data</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-3-fe-and-train-lgb" target="_blank">https://www.kaggle.com/code/rsakata/psp-3-fe-and-train-lgb</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-4-test-inference" target="_blank">https://www.kaggle.com/code/rsakata/psp-4-test-inference</a></li> </ul> <h2>Feature Engineering</h2> <p>The six variables (level, name, event_name, room_fqid, fqid, and text) were concatenated as aggregation keys, and the time difference from the previous or following record was summed for each key and used as the feature. If written in pandas-like code, <br> <code>df.groupby(['level', 'name', 'event_name', 'room_fqid', 'fqid', 'text'])['elapsed_time_diff'].sum()</code></p> <p>In addition to the time difference from the previous or following records, the number of occurrences of each key is also added as a feature. Since these features can be calculated by sequentially reading the user's session, they can be calculated very efficiently by treating the data as the Python list instead of using Pandas.</p> <p>Furthermore, the record whose event_name is 'notification_click' is considered as a important event, and the time difference between the two events is added to the feature.</p> <p>The procedures for calculating these features can be found by reading the third published notebook.</p> <h2>Modeling</h2> <p>Since the variety of keys (combinations of six variables) is very large, I reduced features before training by excluding in advance rare combinations that appear only in a small number of sessions. However, since the number of features still amounted to several thousand, I first trained LightGBM with a large learning rate (0.1) and performed feature selection based on gain feature importance. 
The training was then performed again with a smaller learning rate (0.02) using 500 to 700 features.</p> <p>In the second modeling, raw data published by the host (<a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>) was included in the training. Although I was unable to reproduce the host's train.csv file completely, I was able to reproduce it approximately using the second published notebook.</p> <p>Many of the sessions included in this data were different in nature from the competition data because they did not complete the game until the end. In fact, users who left the game midway through tended to have lower percentages of correct responses. To reflect this difference, the maximum level of each session was added as a feature.</p> <p>When training the model for the last level_group, I augmented the training data with the labels of the second level_group, which contributed to the improvement in accuracy. I believe that the reason for this is that overfitting was suppressed by using more data to determine the split point when splitting nodes of decision trees. However, for the first and second level_groups, this data augmentation method did not contribute to improving accuracy in local validation.</p> <p>The CV/LB scores of my best submission are:</p> <ul> <li>CV: 0.7034</li> <li>Public LB: 0.703</li> <li>Private LB: 0.703</li> </ul> <h2>Other Remarks</h2> <ul> <li>For stability of evaluation, 4-fold CV was repeated three times with different seeds.</li> <li>Based on the validation results, the threshold was set at 0.625. No adjustment was made for each question.</li> <li>To reduce inference time, models trained in CV were not used, but retrained models using all data were used for inference.</li> </ul>
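<p>As a rough sketch of the streaming aggregation described in the feature engineering section (not the author's implementation; for simplicity it only accumulates the time difference to the previous record, while the write-up also uses the difference to the following one):</p> <pre><code># Per-session aggregation with plain Python structures instead of Pandas.
from collections import defaultdict

def session_features(rows):
    """rows: list of dicts for one session, in the order served by the API."""
    time_sum = defaultdict(float)
    counts = defaultdict(int)
    prev_time = None
    for r in rows:
        key = (r['level'], r['name'], r['event_name'],
               r['room_fqid'], r['fqid'], r['text'])
        if prev_time is not None:
            time_sum[key] += r['elapsed_time'] - prev_time   # diff to the previous record
        counts[key] += 1
        prev_time = r['elapsed_time']
    return time_sum, counts
</code></pre> <p>Working with plain lists and dicts like this is the point the author makes about avoiding Pandas during inference, which is how the roughly 3-minute runtime becomes possible.</p>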
GoDaddy - Microbusiness Density Forecasting
My Forecasting Strategy
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Trend</h1> <p>My cross-validation didn't show that complicated gradient boostings showed significantly better results than simple approaches. Also for some months gradient boostings performed worse than the Last Value baseline and simple approaches showed more stable performance. </p> <p>The following formula shows how I estimated trend for cfips.<br> <code>active_quantile_group_21_last_6m_trend</code> means the best multiplication constant (clipped from 1 to 1.006) for the last value of <code>microbusiness_density</code> for the next month on a previous 6 month for a group of cfips (total 21 groups by quantile of active). The same logic for <code>state</code> and concrete <code>cfips</code> group.<br> <code>forward_2</code> means the best multiplication constant for the next 2 month (clipped from 1 to 1.012).</p> <pre><code>df_features_data[] = ( df_features_data[] * + df_features_data[] * + df_features_data[] * + df_features_data[] * + df_features_data[] * + df_features_data[] * ) * + (( df_features_data[] * + df_features_data[] * + df_features_data[] * )**) * </code></pre> <h1>Using the best public submission as a starting point for predictions</h1> <p>Public Leaderboard data wasn't published and for time-series data it should be beneficial to use the best public submission.</p> <h1>Select 2 Submissions. Positive and Negative</h1> <p>It's a good idea to cover different scenarios selecting 2 submissions. I call it Positive submission (we under-forecasting trend) and negative (we over-forecasting trend). <br> For example for Positive submission <code>* (trend + 0.0005)**(month_number)</code></p> <h1>May Seasonality</h1> <p>I just used forecast for April <code>* (1 - 0.0025)</code> for Positive submission, because May showed a dropdown of values the last two years.</p> <h1>Cfips with active &lt; 150</h1> <p>My cross-validation showed that using the last value baseline for such cfips (as many public baselines do) is good only for one month forward forecast, for March, April, May i used some multiplication constants.<br> The following plot shows that for active &lt; 75 using the last value baseline for 3 month forecast is the best, for 75-150 it's better to use trend. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F297456%2F7059d3c2b37887d58d0ae4b88dd4cb4d%2Fnewplot%20(98).png?generation=1678841088819412&amp;alt=media" alt=""></p> <h1>Hope that random helps 🤞</h1>
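<p>As an illustration of what "the best multiplication constant (clipped from 1 to 1.006) for the last value over the previous 6 months" could look like in code, here is a hedged sketch (the function names, the SMAPE variant and the grid search are my assumptions, not the author's code):</p> <pre><code># Grid-search the clipped trend multiplier that minimises SMAPE on recent months.
import numpy as np

def smape(y_true, y_pred):
    denom = np.maximum((np.abs(y_true) + np.abs(y_pred)) / 2, 1e-9)
    return np.mean(np.abs(y_true - y_pred) / denom)

def best_trend_multiplier(last_7_values, lo=1.0, hi=1.006, n_grid=61):
    """last_7_values: the last 7 monthly densities for a cfips group (6 transitions)."""
    prev_vals = np.asarray(last_7_values[:-1], dtype=float)
    next_vals = np.asarray(last_7_values[1:], dtype=float)
    grid = np.linspace(lo, hi, n_grid)
    scores = [smape(next_vals, m * prev_vals) for m in grid]
    return float(grid[int(np.argmin(scores))])
</code></pre> <p>The per-cfips, per-state and per-quantile-group constants found this way are then blended with fixed weights, as in the formula above.</p>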
Predict Student Performance from Game Play
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to the hosts of the competition and my teammates ( <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>, <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a>, and <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a>). I am thrilled that <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a> and I will become GM, <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a> is one step closer to becoming a Master, and <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a> will get his second gold medal after this competition!</p> <p>Here We will explain our overall solution, <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a> also provided additional explanation of the parts he worked on: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274</a></p> <h1>Classification Task Formulation</h1> <p>In this competition, we are asked to predict 18 values for each session. Each session contains 3 level groups. There are multiple ways to model this.</p> <ol> <li>18 binary classifiers</li> <li>3 Level group classifiers, each one can be<br> a. A multi-label classifier that predicts all values within a level group<br> b. A binary classifier that takes “question index” as a feature within a level group</li> <li>1 classifier that is<br> a. A multi-label classifier that predicts 18 values within a session<br> b. A binary classifier that takes “question index” as a feature within a session</li> </ol> <p>For Gradient boosted tree models, method 2b &gt; method 3b &gt; method 1. Method 2a and 3a are ignored because training the multi-label task is a lot slower with Gradient boosted tree models.</p> <p>For NN models, we focus on the method 2a and 3a, because</p> <ul> <li>These 2 methods are not well handled by tree models</li> <li>Multi-label learning makes more sense, because of the F1 score setting of this competition. (some posts discuss we should not optimize for 1 question).</li> <li>Multi-label NN models are faster to train and infer.</li> </ul> <h1>Additional dataset generated from the raw data</h1> <p>We create an additional dataset from the raw data, it contains 11343 complete sessions.<br> This dataset boosts the CV scores for GBT models by about +0.001~2, but there is not much effect on the public and private scores, and it has both positive and negative outcomes.<br> However, it works very well for NN models, we see +0.002 improvement in both CV and public scores.</p> <h1>Validation</h1> <p>We are using 5-fold GroupKFold on session_id so that there won’t be any seen sessions in the validation set. 
Also we didn’t include additional data in our validation set.</p> <h1>Gradient Boosted Tree</h1> <p>Per question classifier is handled by <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a>, Per level, and All-in-1 classifier is handled by <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a> <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>. That’s why there are some inconsistencies in the data preprocessing steps, such as sort by index vs sort by time.</p> <h2>Per Question Classifiers</h2> <p>We create features for each level group and sorted by index. The features and the sorting methods differ from other models.</p> <pre><code> df1 = df.(pl.col() == ) df2 = df.(pl.col() == ) df3 = df.(pl.col() == ) df1 = df1.sort(pl.col(), pl.col()) df2 = df2.sort(pl.col(), pl.col()) df3 = df3.sort(pl.col(), pl.col()) </code></pre> <h5>The number of features:</h5> <ul> <li>Level group 0-4: 1,000 features</li> <li>Level group 5-12: 2,000 features</li> <li>Level group 13-22: 2,400 features</li> </ul> <h5>Feature Selection</h5> <p>We try feature selection with out-of-folds but the public scores tend to decrease, so we don’t select features about this model in the final submission.</p> <h5>The typical features</h5> <ul> <li>Elapsed time between the previous level group and the current level group.</li> <li>Elapsed time and index count between flag events.</li> <li>Prediction probabilities for previous questions.</li> <li>Sum of the most recent M (M=1,2,…) prediction probabilities.</li> </ul> <p>Flag events are events that must be passed during game progression. We extract them with reference to jo_wilder's source code, game playing, and the log data of users who have got perfect scores. </p> <h5>Single Best Model(5folds XGBoost)</h5> <p>CV: 0.702, Public LB: 0.700, Private LB: 0.701</p> <h2>Per Level Group Classifiers</h2> <p>In order to allow the level group models to utilize information from previous level groups, we first split the training data by:</p> <pre><code> df1 = df.(pl.col() == ) df2 = df.((pl.col() == ) | (pl.col() == )) df3 = df df1 = df1.sort(pl.col(), pl.col()) df2 = df2.sort(pl.col(), pl.col()) df3 = df3.sort(pl.col(), pl.col()) </code></pre> <p>Feature selection is then applied after feature engineering.</p> <h5>Features Engineering</h5> <ul> <li>Room distance and screen distance</li> </ul> <pre><code> (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), </code></pre> <ul> <li>Final scene, checkpoint and answer time</li> </ul> <p>By playing the game manually, we know that students are only taking the quiz at the end of each level. The shorter time they used to finish the session of answering questions, the higher probability that they answered those questions correctly. Captured by features like:</p> <pre><code> pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), (pl.col().(pl.col() == ).() - pl.col().(pl.col() == ).()).alias() </code></pre> <ul> <li>Unnecessary moves</li> </ul> <p>Also from the experience of playing the game, we believe that there are many people who have played the game for more than one time. 
It would be great if we had some feature to identify these players</p> <pre><code>unnecessary_data_values = {} q (): unnecessary_data_values[q] = {} feature_type [, , ]: unnecessary_data_values[q][feature_type] = [] unique_values = (df.((pl.col() == q))[feature_type].unique()) val unique_values: df.((pl.col() == q) &amp; (pl.col(feature_type) == val))[].n_unique() &lt; : unused_data_values[q][feature_type].append(val) </code></pre> <p>If they are playing for the first time, they likely have many unnecessary moves. Then we calculate the time / actions they have spent on these moves</p> <pre><code> col []: aggs.extend([ *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], ]) </code></pre> <ul> <li>Time / actions spent on tasks</li> </ul> <p>Another class of features to filter out experienced players is to measure how fast they finish the tasks before the quiz in every level group. For example, the first task of the game is to find the notebook; our hypothesis is that an experienced player would spend less time and fewer actions to finish it. And they have a higher chance to answer the quiz questions correctly.</p> <p>Two examples for chapter 1</p> <pre><code> pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias() </code></pre> <h5>Feature Selection</h5> <p>The selection is based on Catboost feature importance over the Catboost feature importance with shuffled labels. (Which is the idea of Null Importances <a href="https://www.kaggle.com/code/ogrellier/feature-selection-with-null-importances" target="_blank">https://www.kaggle.com/code/ogrellier/feature-selection-with-null-importances</a>)</p> <ol> <li>Compute Catboost feature importance with the entire training data.</li> <li>Shuffle the training data labels and obtain the importance again N times.</li> <li>Compute the final importance as the base importance divided by the mean random importance.</li> <li>We then use <code>gp_minimize</code> to search for the best feature size based on 5-fold cross-validation.<br> In the end, we have 233, 647, 693 features respectively for each level group.</li> </ol> <p>With Catboost 5-fold CV out of fold F1: 0.7019<br> With Xgboost 5-fold CV out of fold F1: 0.7021</p> <p>Then feature engineering is applied to each of the data frames above. 
And the transformed data frames are used to train our level group models.</p> <h5>18-in-1 Classifiers</h5> <p>To train the 18-questions-in-1 classifier, we further concat the above 3 data frames together to form a large data frame.</p> <pre><code> all_df = pd.concat([ df1[FEATURES1 + []], df2[FEATURES2 + []], df3[FEATURES3 + []], ], axis=) </code></pre> <p>This mega concatenation creates many null values because some features only exist in a particular level group. That’s why when building the features for this 18-in-1 classifier:<br> First, reuse the feature selection results from the Per Level Group case.<br> Rerun feature selection again after the mega concatenation</p> <p>With Catboost 5-fold CV out of fold F1: 0.7002<br> With Xgboost 5-fold CV out of fold F1: 0.7007</p> <h1>Neural Network</h1> <p>Model: Transformer + LSTM</p> <p>The pipeline of our NN is based on this public notebook: <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-event-aware-tconv-with-only-4-features" target="_blank">https://www.kaggle.com/code/abaojiang/lb-0-694-event-aware-tconv-with-only-4-features</a></p> <h5>Numerical input:</h5> <ul> <li>np.log1p( elapsed_time_diff )</li> </ul> <h5>Categorical inputs:</h5> <ul> <li>event_comb, room_fqid, page, text_fqid, level</li> </ul> <h5>Transformer part (3 variants):</h5> <ul> <li>Type A: Conformer like transformer, with last query attention (<a href="https://www.kaggle.com/competitions/riiid-test-answer-prediction/discussion/218318" target="_blank">https://www.kaggle.com/competitions/riiid-test-answer-prediction/discussion/218318</a>)</li> <li>Type B: Conformer like transformer, with last query attention</li> <li>Type C: Standard transformer</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1904629%2Fb48403960b8dc8cceff7d92e3d18a1bd%2FPSP%20NN.png?generation=1688066794674184&amp;alt=media" alt=""></p> <h5>Post Transformer LSTM:</h5> <ul> <li>1 Bidirectional LSTM + 1 LSTM layer</li> </ul> <h5>Pooling method:</h5> <ul> <li>Concat of sum, std, max, last</li> </ul> <h5>Training method:</h5> <ol> <li>As mentioned in the previous section, we train the model with multi-label, and there are two variants:<br> a. One model per level group<br> b. Same model for ALL level groups</li> <li>We find that combining models trained with different settings can improve both the CV and public LB.</li> <li>Additional data was used for training, it improves both CV and public LB for NN</li> </ol> <h3>Best NN only ensemble (5 NN with different settings):</h3> <ul> <li>CV: 0.7028, Public LB: 0.701, Private LB: 0.704</li> <li>It turns out that NN doesn’t perform very well in Public LB, but does well in Private LB.</li> </ul> <h1>Submission Selection</h1> <p>We selected a submission with the highest LB, a submission with the highest CV, and a submission with a target on a reasonably high CV and a high variety of methods/models. </p> <p>Our best-selected sub is an ensemble of </p> <ul> <li>One level group Catboost, one 18-in1 Catboost, two 18-in1 Xgboost, and three NN.</li> <li>The NNs we selected are Type A per level group, Type B per level group, and Type C ALL level groups. This combination gives good diversity to the final ensemble.</li> <li>We ensemble GBT models and NN models on oof data separately with 2 standalone Logistic regression models, then combined them with GBT:NN = 6:4 ratio. 
</li> <li>The manual weighting in combining GBT and NN results is due to NN not performing well in public LB, so we didn't have enough confidence to give too much weight to our NN models as discussed below. </li> </ul> <p>Best selected ensemble:</p> <ul> <li>CV: 0.7046, Public LB: 0.706, Private LB: 0.704</li> </ul> <h1>0.705 subs that we haven’t picked</h1> <p>We have three 705 private score submissions that are not selected. Our best-selected subs ranked 13th in all of our subs in terms of private score.</p> <p>Among these 705 private subs:</p> <ul> <li>Per-question GBT model + Group Level GBT model gives us the 705 private score, but not a high ensemble CV score. </li> <li>Per-level-group GBT + NN models with Logistic regression ensemble gives us the 705 private score, but not a high public score.</li> </ul> <h1>Observations:</h1> <ol> <li>NN models perform well in CV and private but very poorly in public, while GBT models fit the public so well, It is very strange…</li> <li>Single-question GBT models makes a lower CV ensemble but perform quite ok in both public and private</li> </ol>
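<p>To make the last ensembling step concrete, here is a minimal sketch of the described stacking (all array and variable names are placeholders, not the team's code): one logistic regression is fit on the GBT models' out-of-fold probabilities and one on the NN models', and their outputs are blended with the 6:4 ratio mentioned above.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacker(oof_probs, labels):
    # oof_probs: (n_sessions, n_models) out-of-fold probabilities for one question
    return LogisticRegression(max_iter=1000).fit(oof_probs, labels)

def blend_gbt_nn(gbt_stacker, nn_stacker, gbt_probs, nn_probs, w_gbt=0.6, w_nn=0.4):
    p_gbt = gbt_stacker.predict_proba(gbt_probs)[:, 1]
    p_nn = nn_stacker.predict_proba(nn_probs)[:, 1]
    return w_gbt * p_gbt + w_nn * p_nn  # final probability before thresholding
</code></pre>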
Predict Student Performance from Game Play
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>2nd Place Solution</h1> <p>First, I would like to take the opportunity to thank The Learning Agency Lab for hosting the competition and the Kaggle team for making it happen.</p> <p>Here are the details of the 2nd place solution.</p> <p><strong>Summary</strong></p> <ul> <li>A single LightGBM model was used to predict all the questions (i.e. not separate models per question or level group)</li> <li>5-fold cross-validation was used during development but for the final submission a single model was trained on all of the data</li> <li>The code was optimised to minimise the efficiency score<ul> <li>For the final submission the vast majority of the execution time was spent on the LightGBM prediction stage</li> <li>There was extensive use of numba and C for the feature generation code</li></ul></li> <li>The model contained 1296 features</li> <li>A threshold value of 0.63 was used</li> </ul> <p><strong>Features</strong></p> <p>A lot of the most important features were based on the time taken to complete a task or react to an event. One of the most important features (after some of the basic features such as the question number and the total count of events for a level group) was the amount of time the user spent looking at the report in level 1 (feature name: LG0_L1_first_report_open_duration).</p> <p>A plot of the feature importance (LightGBM gain) of the top features can be seen below:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1199911%2F8bb9d994b0bb3f59dcff38f99eb95ebe%2Ffeature_importance.png?generation=1688479725194354&amp;alt=media" alt=""></p> <p><strong>Code</strong></p> <p>The code for each stage of the solution can be found here:</p> <ol> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-0-preprocess-data" target="_blank">preprocess data</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-1-features-code" target="_blank">features code</a> (<a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-1-c-feature-code" target="_blank">features code utility script</a>)</li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-2-generate-features" target="_blank">generate features</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-3-train-model" target="_blank">train model</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-4-submission" target="_blank">submission</a></li> </ol>
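<p>A minimal sketch of the single-model setup described above (hyperparameters, array names and the feature matrix are placeholders, not the author's actual code): one binary LightGBM is trained over all 18 questions with the question number included as a feature, and a single 0.63 threshold is applied at prediction time.</p> <pre><code>import numpy as np
import lightgbm as lgb

params = {"objective": "binary", "learning_rate": 0.05, "num_leaves": 63}  # illustrative values

def train_single_model(X, question_number, y):
    # X: per-session features, question_number: 1..18 repeated per row, y: correctness labels
    X_aug = np.column_stack([X, question_number])
    return lgb.train(params, lgb.Dataset(X_aug, label=y), num_boost_round=1000)

def predict_correct(model, X, question_number, threshold=0.63):
    X_aug = np.column_stack([X, question_number])
    return (model.predict(X_aug) &gt; threshold).astype(int)
</code></pre>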
Predict Student Performance from Game Play
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Acknowledgement</h1> <p>I'd like to thank the hosts for providing a very interesting and difficult project to work on for the past months. I am also grateful for all the public sharing on Kaggle, this has been an insane learning experience for me. Without all the public notebooks, discussion posts and old competition solutions available i would have had no chance in this competition.</p> <h1>Overview</h1> <ul> <li>Used most of the raw data for training, while validating only on the kaggle data.</li> <li>Ensemble of Transformer, XGBoost and Catboost, with 3 seeds and 5 folds each. </li> <li>Used a generic set of features based on time, index and screen_coor differences.</li> <li>Linear regression as a meta model.</li> <li>Thresholds have a big impact on LB score</li> </ul> <h1>Data</h1> <p>I used most of the raw data for training, including sessions that only completed level group 0-4 and 5-12. About ~38000 whole sessions and ~58000 sessions in total. Using the raw data increased CV by over 0.001. I validated only on the kaggle data. </p> <p>My initial data preprocessing is simply sorting by level group and index, same as what happens during inference. Also, my experiments indicated no benefit from using the hover durations, so after sorting i dropped the hover rows and re-indexed each session from 0 to len(session).</p> <h1>Transformer</h1> <p>I spent much of my time experimenting with transformers, which resulted in a light weight model that achieved 0.698 on the public and private LB, and 0.702 CV. </p> <pre><code>class (nn.Module): def (self, num_cont_cols, embed_dim, num_layers, num_heads, max_seq_len): (NN, self).() self.emb_cont = nn.( nn.(num_cont_cols, embed_dim//), nn.(embed_dim//) ) self.emb_cats = nn.( nn.(max_seq_len + , embed_dim//), nn.(embed_dim//) ) encoder_layer = nn.( d_model=embed_dim, nhead=num_heads, dim_feedforward=embed_dim, dropout=, batch_first=True, activation=, ) self.encoder = nn.(encoder_layer, num_layers=num_layers) self.clf_heads = nn.([ nn.(embed_dim, out_dim) for out_dim in [, , ] ]) def (self, x, grp): emb_conts = self.(x[:, :, :-]) emb_cats = self.(x[:, :, -].(torch.int32)) x = torch.([emb_conts, emb_cats], dim=) x = self.(x) x = x.(dim=) x = self.clf_heads[[, , ].(grp)](x) return x.() </code></pre> <ul> <li>embed_dim: 64</li> <li>num_layers: 1</li> <li>num_heads: 8</li> <li>max_seq_len: 452 (explained below), though the sequences are cropped to 256</li> <li>I used the same single model for all questions</li> </ul> <p>I found the data easy to overfit with transformers, so in an attempt to improve the signal to noise ratio i did the following:</p> <ol> <li>Identify different points in the game by string concatenating event_name, level, name, page, fqid, room_fqid, text_fqid, in the dataframe.</li> <li>Some of these occur more than once in a session. Treat these as different points by enumerating them and adding the enumeration to their names.</li> <li>Filter out the rows with points that is present in over 0.999 of the sessions. This makes each session maximum 452 steps long. 
</li> <li>Create 6 feature columns:<br> time difference, index difference, distance (cumulative distance moved, calculated from screen_coor's) difference, room_coor_x, room_coor_y and the categorical point column embedded.</li> </ol> <h1>XGBoost</h1> <p>This is my strongest single model with public LB 0.701, private 0.702, and 0.7029 CV. What stands out is that I flattened 5 of the transformer input columns (excluding the categorical column), and used all those values as individual features.</p> <p>The other features are mainly stats that can be found in public kernels, like mean and max time diff over the categoricals. The stats were calculated before applying the transformer input filtering.</p> <p>From early on I trained one model for each level group, inputting the question number as a feature. I found that CV increased by around 0.0002 compared to using a model for each question. This could be randomness, but I went with it since I thought 3 models instead of 18 would make my life easier during experimentation. Similar reasoning behind using one model for the transformer. </p> <h1>Catboost</h1> <p>Essentially looks the same as XGBoost. CV 0.7022.</p> <h1>Ensemble</h1> <p>I trained a linear regression meta model for each question, using the above models' output probabilities as input, to produce the final predictions. I included probabilities of past questions and some future ones! For example, the regression model trained to predict question 2 took probabilities on questions 1-3 as input, to predict question 7 I used probabilities on questions 1-13, and for question 16 I used probabilities on questions 1-18. I averaged over 3 seeds before the linear regression input to make it more robust. </p> <p>This finally results in public LB 0.702, private LB 0.703, CV 0.7044. </p> <h1>On threshold and submission selection</h1> <p>I tried to trust CV as much as possible, but the consistent gap between my CV and LB was suspicious until the last few days. Then I realized one reason could be that my selected threshold was suboptimal on the test data. I made some submissions with my highest CV solution, only changing the threshold, and noticed it was indeed suboptimal and caused more variation in LB score than most of my latest experiments. So in the end I selected 3 of the same solution, with different thresholds: 0.60 (best on LB), 0.62 (best during CV) and 0.64. Turned out 0.61 would have resulted in 0.704 private, but no regrets ;)</p> <p>Thank you for reading!</p> <h1>Code</h1> <p><a href="https://github.com/joelerikanders/pspgp/tree/main" target="_blank">Training code</a><br> <a href="https://www.kaggle.com/erijoel/4th-place-submission" target="_blank">Submission notebook</a></p>
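<p>To make the per-question meta-model above concrete, here is a hedged sketch (the window boundaries come from the examples in the text; variable names and data layout are assumptions): for each target question, a linear regression is fit on the seed-averaged out-of-fold probabilities of a window of questions from the base models.</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

# target question -&gt; (first, last) question whose probabilities are used, per the examples above
WINDOWS = {2: (1, 3), 7: (1, 13), 16: (1, 18)}

def fit_meta_model(oof_probs_per_model, labels, question):
    # oof_probs_per_model: list of (n_sessions, 18) arrays, one per base model (seed-averaged)
    lo, hi = WINDOWS.get(question, (1, 18))
    X = np.concatenate([p[:, lo - 1:hi] for p in oof_probs_per_model], axis=1)
    return LinearRegression().fit(X, labels[:, question - 1])
</code></pre>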
Stable Diffusion - Image to Prompts
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We thank all organizers for this very exciting competition.<br> Congratulations to all who finished the competition and to the winners.</p> <h2>Solution summary</h2> <p>Our approach involved training multiple image models to predict prompt embeddings (generated by a sentence transformer) directly from images, then ensembling the predictions from these models. The final submission was an ensemble of four models:</p> <ul> <li>eva02_large</li> <li>convnext_xxlarge</li> <li>convnext_large</li> <li>vit_large</li> </ul> <p>Initially, in the early stages of the competition, we experimented with models that predicted the image prompts themselves. However, we found that the method of directly predicting prompt embeddings yielded better scores. Consequently, from the midpoint of the competition onwards, we focused on improving the method of direct embedding prediction.</p> <h2>Dataset Generation</h2> <p>We generated approximately 5 million pairs of prompt-image sets (some images were generated multiple times from the same prompt) as detailed in the table below:</p> <table> <thead> <tr> <th>dataset name</th> <th># of prompts</th> <th>images per prompt</th> <th>Total images</th> </tr> </thead> <tbody> <tr> <td>cc3m</td> <td>249,593</td> <td>3</td> <td>748,781</td> </tr> <tr> <td>lexica</td> <td>67,101</td> <td>3</td> <td>201,303</td> </tr> <tr> <td>diffusiondb</td> <td>326,035</td> <td>3</td> <td>978,105</td> </tr> <tr> <td>cc3m part2</td> <td>1,166,852</td> <td>1</td> <td>1,166,852</td> </tr> <tr> <td>diffusiondb part2</td> <td>1,371,480</td> <td>1</td> <td>1,371,480</td> </tr> <tr> <td>mscoco</td> <td>197,527</td> <td>3</td> <td>592,583</td> </tr> <tr> <td>Total:</td> <td>3,378,588</td> <td>1or3</td> <td>5,059,104</td> </tr> </tbody> </table> <p>During training, for samples with three images corresponding to a prompt, we randomly selected one image to ensure that duplicate images from the same prompt were not used within the same epoch.</p> <p>Dataset references</p> <ul> <li>cc3m: <a href="https://ai.google.com/research/ConceptualCaptions/download" target="_blank">https://ai.google.com/research/ConceptualCaptions/download</a></li> <li>lexica: <a href="https://www.kaggle.com/datasets/safavieh/gustavosta-stable-diffusion-prompts" target="_blank">https://www.kaggle.com/datasets/safavieh/gustavosta-stable-diffusion-prompts</a></li> <li>diffusiondb: <a href="https://huggingface.co/datasets/poloclub/diffusiondb" target="_blank">https://huggingface.co/datasets/poloclub/diffusiondb</a></li> <li>mscoco: <a href="https://cocodataset.org/" target="_blank">https://cocodataset.org/</a></li> </ul> <h2>Validation Strategy</h2> <p>To partition the generated image-prompt pairs for training and evaluation, we first calculated the prompt embeddings using a sentence transformer. We then grouped samples with similar prompt embeddings. After grouping samples with a cosine similarity of 0.7 or higher in their embeddings, we divided the dataset into 10 folds based on these groups using GroupKFold. 
Model training was conducted on all folds except fold0, which was reserved for evaluation.</p> <p>Furthermore, during the creation of the cc3m and diffusiondb datasets, we precomputed the cosine similarity of prompt embeddings. For prompts with a similarity of 0.9 or above, we only used one for image generation. From the midpoint of the competition, we also began to utilize the remaining prompts that we had not used before, creating the cc3m_part2 and diffusiondb_part2 datasets.</p> <h2>Model Architecture</h2> <p>Our model architecture is straightforward, consisting of a backbone connected to a 384-dimensional fully connected (FC) layer (without bias), which matches the output dimensions of the sentence transformer.</p> <p>Some models have a larger input image resolution as shown below.</p> <table> <thead> <tr> <th>model</th> <th>pretrained backbone</th> <th>input size</th> </tr> </thead> <tbody> <tr> <td>eva02_large</td> <td>eva02_large_patch14_448.mim_m38m_ft_in22k (timm)</td> <td>448 (pretrained@448)</td> </tr> <tr> <td>convnext_xxlarge</td> <td>laion2b_s34b_b82k_augreg_rewind (open_clip)</td> <td>384 (pretrained@256)</td> </tr> <tr> <td>convnext_large</td> <td>laion2b_s29b_b131k_ft_soup (open_clip)</td> <td>384 (pretrained@320)</td> </tr> <tr> <td>vit_large</td> <td>laion2b_s32b_b82k (open_clip)</td> <td>336 (pretrained@224)</td> </tr> </tbody> </table> <p>The ViT model of open_clip can change the input image size as follows.</p> <pre><code>backbone = open_clip_model.visual hgrid, wgrid = backbone.grid_size hpatch = input_height // hgrid wpatch = input_width // wgrid backbone.patch_size = (hpatch, wpatch) backbone.conv1.kernel_size = (hpatch, wpatch) backbone.conv1.stride = (hpatch, wpatch) </code></pre> <h2>Training Process</h2> <p>We used the PyTorch framework for training our models, with the following configurations:</p> <ul> <li>10 or 15 epochs with Distributed Data Parallel (DDP), Automatic Mixed Precision (AMP), and Gradient Checkpointing</li> <li>CosineEmbeddingLoss as the loss function</li> <li>MADGRAD as the optimizer</li> <li>Warmup 1 or 3 epochs training only FC layer with the backbone frozen (learning rate: from 1e-2 to 1e-4)</li> <li>Cosine Learning Rate Scheduler with learning rates ranging from 1e-4 to 1e-7.</li> <li>Finetune 3 or 5 epochs with all dataset.</li> </ul> <p>As for data augmentation, we implemented a relatively light configuration as follows:</p> <ul> <li>No Rotation</li> <li>No Horizontal Flip</li> <li>RandomResizedCrop with a scale of 0.5 to 1.0</li> <li>ColorJitter (brightness=0.05, contrast=0.05, saturation=0.05, hue=0.05)</li> </ul> <p>Until midway through the competition, we trained our models using only three datasets: cc3m, lexica, and diffusiondb.</p> <table> <thead> <tr> <th>model</th> <th>epochs</th> <th>warmup</th> <th>initial lr</th> <th>final lr</th> <th>dataset</th> </tr> </thead> <tbody> <tr> <td>eva02_large</td> <td>10</td> <td>1</td> <td>1e-4</td> <td>1e-6</td> <td>cc3m,lexica,diffusiondb</td> </tr> <tr> <td>convnext_xxlarge</td> <td>10</td> <td>1</td> <td>1e-4</td> <td>1e-6</td> <td>cc3m,lexica,diffusiondb</td> </tr> <tr> <td>convnext_large</td> <td>15</td> <td>3</td> <td>1e-4</td> <td>1e-6</td> <td>cc3m,lexica,diffusiondb</td> </tr> <tr> <td>vit_large</td> <td>15</td> <td>3</td> <td>1e-4</td> <td>1e-6</td> <td>cc3m,lexica,diffusiondb</td> </tr> </tbody> </table> <p>From the midpoint onwards, we included the cc3m_part2, diffusiondb_part2, and mscoco datasets for fine-tuning the model.</p> <table> <thead> <tr> <th>model</th> 
<th>epochs</th> <th>warmup</th> <th>initial lr</th> <th>final lr</th> <th>dataset</th> </tr> </thead> <tbody> <tr> <td>eva02_large</td> <td>3</td> <td>0</td> <td>1e-6</td> <td>1e-7</td> <td>all</td> </tr> <tr> <td>convnext_xxlarge</td> <td>3</td> <td>0</td> <td>1e-6</td> <td>1e-7</td> <td>all</td> </tr> <tr> <td>convnext_large</td> <td>5</td> <td>0</td> <td>1e-5</td> <td>1e-7</td> <td>all</td> </tr> <tr> <td>vit_large</td> <td>5</td> <td>0</td> <td>1e-5</td> <td>1e-7</td> <td>all</td> </tr> </tbody> </table> <h2>Ensemble Strategy</h2> <p>In our final submission, we utilized four models. Also, for each model, we used four or five different cross-validation folds, and thus we created an ensemble of 18 folds across the four models.</p> <table> <thead> <tr> <th>model</th> <th>folds</th> </tr> </thead> <tbody> <tr> <td>eva02_large</td> <td>0,2,2*,5,9</td> </tr> <tr> <td>convnext_xxlarge</td> <td>0,1,6,6*,7</td> </tr> <tr> <td>convnext_large</td> <td>0,1,2,3</td> </tr> <tr> <td>vit_large</td> <td>0,3,4,8</td> </tr> </tbody> </table> <p>Folds marked with * are the ones where we fine-tuned for only 1 epoch without data augmentation.</p> <p>We had models validated on fold0 prepared for each model, and the weights of the ensemble were adjusted using these fold0 validation data.</p> <p>We also experimented with determining weights not just on a per-model basis, but along each of the 384 output dimensions for each model (amounting to 384x4 weights), which were also determined through training. However, the training didn't result in significant variability across the weights for specific dimensions, and the outcome was almost the same as when determining a single weight per model.</p> <p>Upon submitting both methods and comparing their scores, the method using 384 weights per model resulted in a slight score increase of +0.0002. Therefore, we implemented the latter method for our final submission. However, it's worth noting that this improvement is likely within the margin of error.</p> <h2>Things that didn't work well for us</h2> <ul> <li>Mixup</li> <li>GeM (Generalized Mean) Pooling</li> <li>GGeM pooling (<a href="https://arxiv.org/abs/2212.04114" target="_blank">Group Generalized Mean Pooling for Vision Transformer</a>)</li> <li>Triplet Attention (<a href="https://arxiv.org/abs/2010.03045" target="_blank">Convolutional Triplet Attention Module</a>)</li> <li>Image captioning like BLIP, BLIP2, CoCa<ul> <li>Predicting the prompt embedding directly was more effective in terms of both computational cost and score.</li></ul></li> <li>Weighting loss based on scores from the model's OOF predictions (loss considering easy or difficult samples)</li> <li>kNN approach using a pretrained clip model<ul> <li>Although improvements were observed when ensembled with the prediction results of a single model, no effect was observed when added to the results of the 4-model ensemble used in the final submission.</li></ul></li> <li>Cosine similarity loss for all sample combinations (details to follow)</li> </ul> <h3>Cosine Similarity Loss for All Sample Combinations</h3> <p>When using CosineEmbeddingLoss, the loss calculation is performed for each image-prompt pair, without considering the relationships between different pairs. 
To address this, we tried to implement a loss that would bring the cosine similarity of all pairs of model outputs closer to the cosine similarity of all pairs of corresponding targets within a mini-batch, considering all combinations of samples.</p> <p>However, we didn't adopt this approach as it didn't improve performance, whether used alone or in combination with CosineEmbeddingLoss.</p>
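<p>For clarity, here is a small sketch of the "all sample combinations" loss described above, under our reading of it (this is the variant that did not help; tensor names are placeholders): the cosine-similarity matrix of the predicted embeddings within a mini-batch is pushed towards the cosine-similarity matrix of the target embeddings.</p> <pre><code>import torch
import torch.nn.functional as F

def pairwise_cosine_similarity_loss(pred, target):
    # pred, target: (batch, 384) predicted and ground-truth prompt embeddings
    p = F.normalize(pred, dim=1)
    t = F.normalize(target, dim=1)
    sim_pred = p @ p.T   # (batch, batch) cosine similarities between predictions
    sim_tgt = t @ t.T    # (batch, batch) cosine similarities between targets
    return F.mse_loss(sim_pred, sim_tgt)
</code></pre>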
Lux AI Season 2
FLG's Approach - Deep Reinforcement Learning with a Focus on Performance - 4th place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1. Introduction</h1> <h2>Lux Season 2</h2> <p>LuxS2 is a two-player, zero-sum game in which each participant controls a swarm of robots racing to terraform Mars faster than their opponent. It is played on a 48x48 square map in multiple phases:</p> <ol> <li>A simultaneous-move bidding phase to decide who goes first,</li> <li>A sequential placement phase in which players place 2-5 factories (number chosen randomly) and</li> <li>A simultaneous-move main phase.</li> </ol> <p>During the main phase players acquire metal to create robots, water to grow lichen and clear rubble to make space for the lichen farms. At the end of 1000 turns the player with the larger lichen amount wins. </p> <p>The main phase is split into 20 day-night-cycles, each lasting 50 turns out of which 30 are day and 20 night.<br> There are three types of actors:</p> <ol> <li>Factories grow lichen, produce robots and turn raw ore/ice into usable metal/water,</li> <li>Heavy units excel at combat and resource gathering but are somewhat sluggish to move around especially in high-rubble areas,</li> <li>Light units are good at moving around efficiently but mine fewer resources and lose in combat to heavies.</li> </ol> <p>Units require power for almost all actions. Power is gained each turn by factories, slowly by all units during the day and by growing lichen. A unit without power cannot take any actions and needs to (slowly) recharge.</p> <p>When two units collide (move to the same target position), heavies destroy lights and among units of the same type the one with more power wins. This collision handling also applies to your own units!</p> <p>On top of these mechanics there is also a planning component: players can send a queue of up to 20 actions to each of their units. This is useful since each time the action queue is updated, a small power charge needs to be paid by the affected unit. Example:</p> <ol> <li>A heavy unit digs 20 times (60 power each) and the dig action is sent each turn anew (10 power per update): power cost: 20 * 60 + 20 * 10 = 1400.</li> <li>Same case but a queue of 20 dig actions is sent once: 20 * 60 + 10 = 1210.</li> </ol> <p>The unit used almost 15% less power in the second case. Power is pretty much the scarcest resource so this efficiency gain makes a very big difference.</p> <h2>Comparison with last season</h2> <p>The game increased in complexity compared to last year quite a bit:</p> <ul> <li>the map is 2.2x as large,</li> <li>a match lasts 3x as many turns,</li> <li>there are three different game phases,</li> <li>for learning approaches, like mine, the power system (especially in combination with the need to clear rubble to gather resources and grow lichen) made random exploration a lot harder,</li> <li>the action queue system changed the game from a single-step problem to a multistep planning one.</li> </ul> <h1>2. Method</h1> <h2>Plan</h2> <p>The matches of the competition are run on Kaggle using CPU-only VMs that seemed to have varying performance but mostly to be single-core machines. 
Because of this performance was the main consideration on every decision and I came up with a plan focused on it:</p> <ol> <li>Simplify the action and observation space to manageable size,</li> <li>Handle collisions and action queues,</li> <li>Use imitation learning and small map RL to find a neural network architecture that combines CPU-performance, large receptive field and sample efficiency,</li> <li>Train a small model using RL,</li> <li>Train a larger main model using the small one as a teacher,</li> <li>Optimize the final model for cpu inference.</li> </ol> <h2>Action and observation space</h2> <h3>Action space</h3> <p>I structured my model to have seven different actors:</p> <ol> <li>Bid: Discrete(10): [0, 10, 20, 30, 40, 50, 70, 100, 150, 200],</li> <li>Spawn location: Discrete(48*48),</li> <li>Spawn resource amount for water: Discrete(7): [50, 100, 120, 150, 200, 250, 300],</li> <li>Spawn resource amount for metal: Discrete(7): [50, 100, 120, 150, 200, 250, 300],</li> <li>Factories: Grid(4): [noop, build light, build heavy, grow lichen],</li> <li>Light units: Grid(23): [moves in 5 directions (incl. noop), dig, self-destruct, pick-up all power available, transfers in 5 directions for [ore, ice, power]]</li> <li>Heavy units: Grid(23): same as light units.</li> </ol> <p>Grid(k) means a 48x48xk action space in which for each position of the 48x48 map one of k actions is sampled independently.<br> On transfer actions a unit transfers everything it holds for ice and ore and 90% of it’s current value for power. When performing a spawn action, a location and a starting resource amount for water and metal need to be selected at the same time. I first sample a location then predict the water/metal amounts for all locations in parallel and sample one of the 7 actions at the chosen spawn location.<br> Note: there are no actions for transferring or picking up refined resources like metal and water. So the agent could only use unrefined resources for transferring over long distances.</p> <h3>Observation space</h3> <p>I encoded the positions of all actors using 0/1 variables, most other quantities (power, resources, rubble) were taken as continuous variables normalized to usually be in the -1 – 1 range. I tried using explicit embeddings for all discrete observations but couldn’t find any difference in performance. The one exception to this was the lichen ownership: each factory has its own lichen strain. With up to five factories per player that’s eleven discrete values when you add one more for “no lichen”. I embedded these using a 16 dimensional embedding layer and summed the result with a 16-dim embedding of the player id to indicate the owner of that lichen.<br> Global features like the current turn or the spawn resources remaining are encoded as separate feature layers (with the same value repeated for every position in the 48x48 feature layer).<br> In total this gave me 54 different features. To these I added historic information: the positions and resources of all actors as well as lichen and rubble values on the board for the last three turns (17 features per turn). The final observation space had a shape of 48x48x105 (54 + 3*17=105).</p> <h2>Collisions and action queues</h2> <h3>Collisions</h3> <p>Self collisions were very punishing this season since all but one of the colliding units died. During early RL I realized that my agent avoided having units work next to each other and in general didn’t like making many units. 
This seemed to be caused by the fact that a somewhat random policy (which is needed for exploration) will cause many harmful self collisions. To counteract this I canceled all moves that would result in self collisions, so that units performed a noop instead. This required reproducing quite a bit of game logic but it increased (early) learning speed a lot.</p> <h3>Action queues</h3> <p>To fill the action queues with multiple actions at inference time I used forward prediction: the agent predicts actions for itself and the opponent then steps the environment and repeats the process. Since the opponent’s action queue is part of the public game observation, I had one agent on the LB that used it for forward prediction (and falling back to its own prediction if no move from the queue is available) – it performed similarly to just using predictions.</p> <p>During training I did not use action queues and only had the agent predict its next action. This method only had one problem: the amount of power used to update action queues differs in both cases. This causes the agent’s expectations about how long it can mine, which parts of the map it can reach etc. to be off for longer sequences. To combat this I used an expected communications cost approach: I randomly applied the comm cost with p=0.2, simulating an average length five action queue (which was pretty close to the effective action queue length that the agent achieved after all optimizations).</p> <h2>Neural network architecture search</h2> <p>I used some beta agents and later a few early submissions to create a dataset for imitation learning (IL). More precisely, I:</p> <ul> <li>downloaded around 2000 matches from top-scoring agents via MetaKaggle</li> <li>transformed the obs and actions into the spaces mentioned above when creating the dataset</li> <li>stored everything as a compressed HDF5 dataset (the final dataset was around 300GB and HDF5 allows for very fast random access without loading everything into RAM).</li> </ul> <p>Submissions that used actions I did not support were excluded (some agents were transferring water between factories).</p> <p>I evaluated different NN architectures based on their unit-accuracy on this dataset (I also trained and evaluated for factory- and spawn-location-accuracy but these were much easier to learn). The main traits I was looking for were:</p> <ul> <li>CPU performance</li> <li>Receptive field (“see the whole board quickly”)</li> <li>GPU-Memory usage</li> <li>Sample efficiency.</li> </ul> <p>The best result I got was a “DoubleCone” network; a DoubleCone(4, 6, 4) is depicted below. It performed a decent share of its compute at a downscaled resolution of 12x12 (instead of 48x48) which helped with respect to all goals above except sample efficiency. To avoid a loss of fine-grained control there is a skip connection around the DoubleCone-block. Additionally before and after that block there are multiple full resolution ResBlocks. 
The actor heads are simple 3x3 convolution layers without activations, the one exception to this is the bidding head, which uses the critic architecture (minus one Conv-Layer).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2F047dcaede92d9da504c479852e3b9971%2Fnn-arch.png?generation=1683117990007566&amp;alt=media" alt="DoubleCone(4, 6, 4)"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2F94da0139e55fca3108cd69630ec66c6a%2Fnn_critic.png?generation=1683118310164650&amp;alt=media" alt="Critic"></p> <p>The critic is depicted above, its architecture did not seem to be very important in my tests. The slightly unusual AdaptiveAveragePool is there since many of my early RL tests were done with 16x16 and 24x24 maps and I wanted a truly resolution-independent architecture.</p> <p>I also experimented with layer optimizations, especially ResBlock design. Here I tested recent advances from computer vision (e.g. ConvNext) with some task specific optimizations. Some results:</p> <ul> <li>I did not use any Bottleneck or InverseBottleneck blocks as the 1x1 convolutions are fairly GPU-intensive but do not provide receptive field. </li> <li>I used way less activations, in particular there is no activation at all after the second convolution, only SqueezeAndExcitation.</li> <li>With my compute budget I couldn’t find real differences between ReLU, LeakyReLU or GELU.</li> <li>3x3 and 5x5 convolutions performed similarly if you roughly double the depth when using 3x3s (lower parameters / receptive field).</li> <li>There are no normalization layers as I couldn’t find any benefit in them, as long as there are SqueezeAndExcitation layers. Without these performance dropped significantly but part of this could be recovered e.g. with LayerNorm.</li> </ul> <p>A comparison between my final architecture and a ResNet can be seen here:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2F9c3f15ae406ea47cb78aae76d2afeb5b%2Fil_performance.png?generation=1683118089236048&amp;alt=media" alt="IL Performance"></p> <h2>Reinforcement Learning</h2> <p>My agent was trained using self-play and reinforcement learning. I used shaped rewards that were strongly focused on the game result to avoid biasing the agent as much as possible.</p> <h3>Rewards</h3> <p>In a set-up phase (around 65M steps) the agent was trained using heavily shaped rewards focusing on gathering resources. It received rewards for:</p> <ul> <li>moving towards its closest resources</li> <li>clearing rubble from resources</li> <li>mining resources</li> <li>moving to closest factory when carrying e.g. &gt; 100 resources</li> <li>dropping resources on factories</li> <li>bonus for mining ore and ice</li> <li>making units</li> <li>growing lichen</li> <li>placing factories near resources.</li> </ul> <p>These rewards were “selfish” for each agent (not zero-sum).</p> <p>After this phase, I switchted to zero-sum rewards:</p> <ul> <li>{-1, 0, 1} for game result</li> <li>[-1, 1] for relative lichen advantage at the end of the game</li> <li>0.3 for each factory</li> <li>[-0.8, 0.8] relative unit advantage</li> <li>[-0.1, 0.1] for relative lichen power production.</li> </ul> <p>Each reward was given relative to its opponent's performance to yield a zero-sum reward in the range of [-1, 1]. 
One exception was the factory reward: here the agent received a flat + or - 0.3 for each factory it had more / less than its opponent. Rewards were only given when a change was made (e.g. when the factory was lost), not each turn afterwards. This way the (undiscounted) sum of all relative rewards equals the relative advantage at the end of the game.<br> I tested many different reward weightings over the course of training and the agent seemed very robust to changes, probably because the game result and lichen advantage dominate the reward.</p> <h3>Reinforcement learning method</h3> <p>My RL method uses a single-learner-multiple-actor approach. I implement everything using Python’s Multiprocessing: each actor and evaluator is its own process and all processes communicate via queues. The architecture can be seen below. I use Pytorch for all NNs and vectorized compute but no RL framework.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2Ff913c0f00e3d5ea22e4cac9a934a0346%2Frl-arch.png?generation=1683118426313787&amp;alt=media" alt="RL architecture"></p> <p>In the beginning I tested PPO and Vtrace for advantage calculation on 16x16 and 24x24 maps and achieved similar results. I went with Vtrace. All in all my agent learned using the following losses:</p> <table> <thead> <tr> <th>Type of loss</th> <th>Method</th> <th>Count</th> <th>Main parameters</th> </tr> </thead> <tbody> <tr> <td>Value</td> <td>TD(λ)</td> <td>1</td> <td>λ=0.95, γ=0.9995, weighting coef: 1.0</td> </tr> <tr> <td>Policy gradient / advantages</td> <td>Vtrace</td> <td>7 (1 per actor)</td> <td>λ=1.0, γ=0.9995, clipping c/ρ = 1, weighting coef: 1.0</td> </tr> <tr> <td>Entropy</td> <td>KL</td> <td>7</td> <td>Weighting coef: 1e-5 .. 1e-4, depending on actor</td> </tr> <tr> <td>Teacher</td> <td>KL</td> <td>7</td> <td>Weighting coef: ~5e-3</td> </tr> </tbody> </table> <p>I lowered the entropy very slowly over training and updated the teacher regularly, keeping it around 30M steps behind the agent.</p> <p>One problem with my approach was that bidding and spawning factories are fairly rare events, they only occur on ~1% of all steps. This meant that a typical batch of size ~500 contained 0-2 of these events which didn’t allow stable learning. To counteract this I “upsampled” spawns and bidding by having a dedicated spawn actor. Every 16 update steps this actor played around ten matches from the starting position for roughly 50 steps each then abandoned these games.</p> <h3>Training larger models</h3> <p>It was my plan to only use the DoubleCone(4, 6, 4) network shown above as a teacher for a second and maybe third generation of larger models. However due to problems described below with runtime performance, I had to cancel the training of a DoubleCone(6, 8, 6) model after around 100M steps. I then continued finetuning the smaller model. Its training performance can be seen here:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2F2b3c46c03fc3662bdf740524666a0a04%2Ftraining_progress.png?generation=1683118499788164&amp;alt=media" alt="Training progress"></p> <h2>Optimizing Kaggle Performance</h2> <p>The fact that each unit had to pay an engergy cost when updating its action queue meant that solutions that only predicted few steps ahead had a strong disadvantage. Over the course of the competition I ran many local and leaderboard tests to evaluate the impact of energy efficiency. 
Results were fairly consistent: one should aim for at least 4-6 step predictions; below that performance was crippled (especially single step). Getting more steps only provided modest advantages in my tests. Some results can be seen here:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2F9a1de816ff9044508fdae3e46604a1ce%2Fpred_steps.png?generation=1683118552693371&amp;alt=media" alt="Prediction steps influence on performance"> </p> <p>When you compare these results with prediction performance of ResNet and DoubleCone architectures, you can see that large models were very hard to use. See below. The 24 block ResNet is similar to what was used by last year’s winner and it would barely be able to predict a single step. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2360589%2Fe6916e3ff69a821ac7a3ffa77eb17312%2Finference_speed.png?generation=1683118597376763&amp;alt=media" alt="Inference speed"></p> <p>One very important detail is that no fast inference runtime (like onnxruntime with platform-specific execution providers like OpenVino mentioned above) was available in the Kaggle environment. Since it wasn’t possible to install any packages, a couple of other competitors and I tried packaging onnxruntime ourselves but as far as I know no one was successful. The fast execution providers like OpenVino somehow weren’t working on the competition environment (although it worked everywhere else including Kaggle notebooks). It also was close to impossible to debug this as all error logs were 404 for me. </p> <p>When looking at the performance numbers above and the fact that for example OpenVino doubled/tripled performance in all my tests, it was clear it was make or break for using larger models on Kaggle hardware. After unsuccessfully debugging the issue for a couple of days, I decided to give up on it, cancel the DoubleCone(6, 8, 6) model I was training and went back to fine-tuning the smaller one.</p> <p>After this I tried many other approaches for speeding up inference:</p> <ul> <li>static post-train quantization</li> <li>dynamic post-train quantization</li> <li>model compression</li> <li>jit trace and script.</li> </ul> <p>Most of these either failed as they had some kind of setup time which was too long on Kaggle hardware or they had an unfavorable accuracy-performance trade-off compared to using a tiny model to fill up action queues:</p> <h3>Filling up action queues</h3> <p>My agent usually managed to predict 3-5 steps ahead using a DoubleCone(4, 6, 4). To improve on this, I used transfer learning with a DoubleCone(2, 6, 2) network with only 64 (instead of 128) channels. The action queues were then built as follows:</p> <ol> <li>Predict up to 3 steps with the DoubleCone(4, 6, 4) network.</li> <li>Fill up with predictions from the tiny net until time runs out.</li> <li>Use pattern matching to fill the queue to 20 actions.</li> </ol> <p>Always doing 1-3 slow predictions meant that any mistakes made by the tiny net during previous predictions would be corrected. This way the agent usually managed 6-8 predictions, with effective (corrected) queue length between 5 and 6.</p> <p>The pattern matching used common behaviors to naively fill the queue: units moving constantly back and forth in combat, (light) units picking up power and transferring it to adjacent mining units and digging. 
If none of these worked, it just copied the last action.</p> <p>For switching between these stages I used predictive time management. This was necessary as VM performance was somewhat unreliable. </p> <p>Transfer learning was done with a (Q)Dagger-inspired approach: first training on the teacher’s trajectories and then switching to the student’s to correct remaining mistakes. The strength of the teacher was reduced over the course of training.</p> <h1>3. Conclusion</h1> <p>The competition was a bit of up and down for me. On the one hand the lack of fast deployment options for NNs was a huge letdown and the debugging problems made it very frustrating to work with. On the other hand the environment was very interesting and fun to work with. My solution worked quite well and I had a chance to use a wide array of tools from IL and architecture search over RL to transfer learning and performance optimization. But most importantly the community especially on Discord is just amazing. Lots of thanks to Stone Tao, Bovard and the sponsors for running this competition! Thanks to Agenlu for helping with performance optimization, Tigga, ry_andy_, Harm and SiestaGuru for being the people I battled with the most and finally thanks to ry_andy_ and loh-maa for (unknowingly) providing an IL dataset.</p>
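<p>A rough sketch of the queue-building loop described in the "Filling up action queues" section (the model and simulator interfaces here are hypothetical stand-ins, not the actual agent code): a few accurate steps with the large network, then the distilled tiny network until the time budget is spent, then a simple fallback that repeats the last action.</p> <pre><code>import time

def build_action_queues(big_net, tiny_net, obs, env_sim, time_budget_s=2.5, max_len=20):
    start = time.time()
    queues = {}  # unit_id -&gt; list of planned actions
    state = obs
    for step in range(max_len):
        within_budget = (time.time() - start) &lt; time_budget_s
        if step &lt; 3 and within_budget:
            actions = big_net.predict(state)    # slow, accurate predictions for the first steps
        elif within_budget:
            actions = tiny_net.predict(state)   # fill up with the small distilled network
        else:
            # fallback "pattern": simply repeat each unit's last planned action
            actions = {u: q[-1] for u, q in queues.items() if q}
        for unit_id, action in actions.items():
            queues.setdefault(unit_id, []).append(action)
        if within_budget:
            state = env_sim.step(state, actions)  # forward-predict the next observation
    return queues
</code></pre>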
Stable Diffusion - Image to Prompts
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners, and thanks to kaggle for hosting such an interesting competition. This competition was a great learning experience for me. I am really looking forward to reading everyone's solutions.</p> <p>My solution is basically the same as the ViT-based method in the public notebook. I created my own dataset by running Stable Diffusion and trained a model to predict sentence embeddings in a supervised manner. The overview is shown in the figure below.</p> <p><img src="https://user-images.githubusercontent.com/8179588/238365771-166ab184-9639-49e1-bd67-9d6e8fd05c90.png" alt="image"></p> <h2>Dataset Generation</h2> <h3>Preparation</h3> <p>The following modifications were made to 4x speed up the generation with fp16.</p> <ul> <li>Change the scheduler from DDIM to DPMSolver++ (diffusers.DPMSolverMultistepScheduler) and change the number of steps from 50 to 16.</li> <li>Use xformers</li> </ul> <p>There was concern that reducing the number of generation steps would cause a shift in the mapping from prompt to image, but there was no significant qualitative difference, and the advantage of increased generation speed was judged to be greater, so I chose this method.</p> <p>The xformers led to a speedup of about 30%. xformers were also used in the training of the model, which is described after.</p> <p>Note that the random numbers used in the generation have a significant impact on the overall image generated, so it is necessary to generate images using different seeds. At first I fixed the seed to 0 to ensure reproducibility, but I realized this mistake and used a different seed for each prompt, which greatly increased the score. I was generating the seed using zlib.adler32, the name of the prompt set, and the number of the prompt.</p> <p>As for the guidance scale, I had been using the default value of 7.5 from diffusers until I realized that 9.0 seemed to be the correct value (<a href="https://github.com/Stability-AI/stablediffusion/blob/main/scripts/txt2img.py#L142" target="_blank">link</a>), so I used data generated with either value. LB scores showed that there was no significant difference between 7.5 and 9.0, but scores with 3.0 were significantly lower.</p> <h3>DiffusionDB</h3> <p>To deal with the vocabulary specific to Stable Diffusion, we generated images using prompts from <a href="https://huggingface.co/datasets/poloclub/diffusiondb" target="_blank">DiffusionDB</a>. Excluding duplicates, there are approximately 1.8M Prompts, and images were generated for all of them.</p> <h3>COCO Captions</h3> <p>Captions from <a href="https://arxiv.org/abs/1504.00325" target="_blank">Microsoft COCO Captions</a> were used to generate the images. 500k images were generated from approximately 600k captions in train and validation.</p> <h3>Open Images</h3> <p>To create even more diverse prompt sets, I utilized the natural images in <a href="https://github.com/openimages/dataset/blob/main/READMEV3.md" target="_blank">Open Images Dataset V3</a> (OID) and Image-To-Text pretrained models. 
Images from OID are input to the Image-To-Text model, and the resulting caption is input to StableDiffusion as a prompt. OID contains 9M natural images, of which the first 5M were used to generate.</p> <p><a href="https://huggingface.co/Salesforce/blip2-flan-t5-xxl" target="_blank">Salesforce/blip2-flan-t5-xxl</a> was used as the Image-To-Text model. I used Nucleus sampling with <code>do_sample=True</code> and <code>top_p=0.9</code>.</p> <p>The sample prompts in the competition consist of multiple objects and multiple sentences, but BLIP-2 tends to produce very concise captions by default. Therefore, by chaining BLIP-2's caption generation, I made sure that sentences with a certain level of complexity are generated. The procedure for chaining is as follows:</p> <ol> <li>generate a sentence following <code>"a photo of"</code> as usual (denote the generated sentence as A)</li> <li>generate a sentence following <code>"a photo of" + A</code> (denote this as B)</li> <li>Output <code>A + B</code> as the final caption</li> </ol> <p>The prompt sets created are as follows.</p> <table> <thead> <tr> <th>Prompt set name</th> <th>Subset</th> <th># of images</th> <th>Chain</th> </tr> </thead> <tbody> <tr> <td>OID1</td> <td>train0</td> <td>1M</td> <td>2</td> </tr> <tr> <td>OID2</td> <td>train1</td> <td>1M</td> <td>3</td> </tr> <tr> <td>OID3</td> <td>train2</td> <td>1M</td> <td>1*</td> </tr> <tr> <td>OID4</td> <td>train3</td> <td>1M</td> <td>2</td> </tr> <tr> <td>OID5</td> <td>train4</td> <td>1M</td> <td>2</td> </tr> </tbody> </table> <p>For OID3, in order to realize a combination of objects that do not exist in natural images, prompts were created by connecting two short captions created with chain=1 with commas. In concrete terms, the short captions created in chain=1 were converted to sentence embeddings using SentenceTransformer, and divided into 1000 clusters using Spherical KMeans. Then, for every combination of two clusters, we randomly selected one caption for each cluster and connected them with a comma.</p> <h3>ChatGPT (gpt-3.5-turbo)</h3> <p>This is a dataset that has utilized the ideas from <a href="https://www.kaggle.com/code/safavieh/chatgpt-generated-prompts/notebook" target="_blank">this notebook</a>. When I tried generating prompts using ChatGPT, I found that using the same prompt only yielded similar generation results. Altering parameters such as temperature did somewhat improve the situation, but it also made it more likely to produce broken sentences or excessively long ones. Therefore, I considered modifying the prompt sent to ChatGPT utilizing the previously generated prompts, and implemented it with the following code.</p> <pre><code>import numpy as np

# The literal instruction text, example prompts and sampling sizes did not survive
# formatting, so placeholders are used for them below.
command = "..."              # instruction asking ChatGPT to produce new image prompts
sample_set = ["...", "..."]  # hand-written example prompts
generated_prompts: list[str] = []

for _ in range(num_trials):
    if len(generated_prompts) &lt; min_pool_size:
        examples = list(sample_set)
    else:
        # mix hand-written examples with previously generated prompts
        examples = list(np.random.choice(sample_set, n_seed, replace=False)) + list(
            np.random.choice(generated_prompts, n_gen, replace=False)
        )
    np.random.shuffle(examples)
    prompt = command + "\n".join([e for e in examples])
    generated_prompts += send_to_chat_gpt_and_cleansing(prompt)
</code></pre> <p>With this method, I created about 1M prompts, inputted them into Stable Diffusion after removing duplicates, and produced about 1M images.</p> <p>I also tested a few other LLMs, especially focusing on those with open weights, but I was unable to obtain a prompt set equivalent or superior to gpt-3.5-turbo using the same method, based on my qualitative evaluation. 
Typical behaviors of such LLMs are starting some sort of explanation instead of answering in the desired format or only being able to produce slightly modified examples. I have not tested LLaMA and its derivative models because I was unable to obtain LLaMA's weights in a license-compliant manner.</p> <h2>Model and Training</h2> <h3>CLIP Models</h3> <p>In the initial, simplistic experiments, the pretrained models with CLIP performed better than those for ImageNet classification. As a result, I focused on fine-tuning CLIP models.</p> <p>The following four models were used in the final submission.</p> <ul> <li>ConvNeXt xxlarge (<a href="https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind" target="_blank">CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-rewind</a>)</li> <li>BLIP-2 VisionModel (EVA01-g?, <a href="https://huggingface.co/Salesforce/blip2-opt-2.7b" target="_blank">Salesforce/blip2-opt-2.7b</a>)</li> <li>EVA02-L (<a href="https://huggingface.co/timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k" target="_blank">timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k</a>)</li> <li>EVA02-e (<a href="https://huggingface.co/timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k" target="_blank">timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k</a>)</li> </ul> <h3>Linear-Probing</h3> <p>Regarding the fine-tuning of CLIP, it has been suggested that Linear-Probing (LP), which trains only the last linear layer, and <a href="https://arxiv.org/abs/2202.10054" target="_blank">LP-FT</a>, which fine-tunes the entire model after LP, are effective to generalize. Indeed, these methods were effective in my experiments as well.</p> <h3>Resolution</h3> <p>Increasing the resolution also had a significant effect. CLIP models are typically trained at a resolution of 224, but I believe that 224 was disadvantageous for recognizing complex contexts containing multiple objects.</p> <p>ViT can infer with only a slight degradation in performance for inputs at a higher resolution than during training, by interpolating positional encoding. For instance, you can use functions like <a href="https://github.com/huggingface/pytorch-image-models/blob/v0.9.2/timm/layers/pos_embed.py#L17" target="_blank">resample_abs_pos_embed</a> implemented in timm. By using this, you can use a model trained at a lower resolution as a good initial value when training at a higher resolution.</p> <p>Until the middle of the competition, I was using models trained at 224, but towards the end, I improved my score by using those as initial values and training models at 336 or 448.</p> <h3>Q-former</h3> <p>SentenceTransformer's sentence embeddings are the normalized sum of token embeddings. I considered leveraging these token embeddings as additional supervision signals.</p> <p>I implemented an architecture using the Q-former from BLIP-2 to convert the patch embeddings output by the vision model into token embeddings with a length of 77. I then compared the output of that model to the token embeddings from the SentenceTransformer, padded with zeros until they reached a length of 77, and trained the model based on this comparison. Simply taking the mean squared error (MSE) would be heavily influenced by word order, so I used the Hungarian method to find the matching with the minimum MSE and calculated the MSE between those matches. 
I also used the cosine similarity between the sentence embeddings.</p> <p>Simultaneous training of the Q-former and the vision model was unstable, and I could not achieve success during the competition period. However, by freezing the pretrained vision model and training only the added Q-former, I achieved a performance improvement of +0.001 to +0.003 on the leaderboard (LB) for individual models. Moreover, since the vision model is frozen, I could obtain the vision model's standalone output at almost no additional cost when predicting with the Q-former. It provided a slight benefit in ensembling.</p> <p>The effectiveness of the Q-former was not particularly pronounced in this problem setting, but I believe it would be useful in the original sense of an inversion task, where the goal is to reproduce images.</p> <h3>Weighted Averaging</h3> <p>For ensembling, I used weighted averaging. The weights were optimized using Adam and a validation set from my ChatGPT dataset.</p>
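<p>As an illustration of this last step, here is a minimal PyTorch sketch of learning ensemble weights with Adam on a held-out set. The variable names and the softmax parameterisation are my own assumptions rather than the author's code.</p> <pre><code>import torch
import torch.nn.functional as F

# preds: (n_models, n_samples, 384) model outputs on the validation set
# target: (n_samples, 384) ground-truth prompt embeddings
def fit_ensemble_weights(preds, target, steps=500, lr=0.05):
    logits = torch.zeros(preds.shape[0], requires_grad=True)  # unconstrained weights
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)                      # positive weights summing to 1
        blend = (w[:, None, None] * preds).sum(dim=0)
        loss = -F.cosine_similarity(blend, target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
</code></pre> <p>The learned weights are then held fixed and re-used to blend the test-time predictions of the same models.</p>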
GoDaddy - Microbusiness Density Forecasting
ETS, Linear Models, and MARS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Below is a brief description + thoughts on my forecasting solution.<br> <strong>Private LB Place</strong>: 122/3,546<br> <strong>Private LB Score</strong>: 3.9884<br> <strong>Code</strong>: <a href="https://www.kaggle.com/code/abandura/mbd-forecasting-w-mars-lms-r" target="_blank">https://www.kaggle.com/code/abandura/mbd-forecasting-w-mars-lms-r</a></p> <p>Thanks to everyone who shared their insights and code!</p> <h2>Pre-Processing</h2> <p>Many counties exhibit level shifts in their historical microbusiness density, which reduces overall accuracy if simply included in the modeling process. I separated these from the rest of the counties by identifying any counties with a month-to-month change greater than 25%, and simply used last value predictions for them. Many other contestants took a similar approach. Looking at the size distribution of these counties (visualized in the histogram below) we can see that it excludes most of the smallest counties as well as a group of larger volume counties.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F12985b7771d35940abaa8a6db8e72484%2Fexcl_cfips.png?generation=1687123492757230&amp;alt=media" alt=""></p> <p>Small counties are unpredictable, as a single domain registration can dramatically change the density. However, looking at the larger counties with high variance it's clear that there is still a strong signal in many of them that could be leveraged for better predictions than a simple last value. For example, county ID 24013 has only one level shift up early on in the data set, and data before that shift could be removed and then interpolated so that it could be included in the forecast model training/prediction process. County IDs 10003 and 32510 have two level shifts that creates a "hump", which could be corrected in a similar manner using interpolation. I had planned to implement an algorithm to correct shifts for these larger counties, but unfortunately ran out of time. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F6b3c0541a03e4f405b1aa745b943d5b6%2Flarge_excl_cfips.png?generation=1687123510916081&amp;alt=media" alt=""></p> <p>Like many others, I also made the "percent change" adjustment to microbusiness density before training so that loss metrics like RMSE would weight small counties higher in the same way that SMAPE does. I used that adjusted target for all training other than the linear models, because they can be flexibly trained to directly minimize SMAPE (as described below).</p> <h2>Traditional Time-Series Forecasting, and other features</h2> <p>Of the traditional autoregressive time-series forecasting methods (ARIMA, Holt-Winters, Exponential Smoothing, etc.) that I tried, Exponential Smoothing (ETS) was the best. 
I ended up using ETS forecasts as features for the linear modeling by rolling through the data and generating ETS forecasts for 1-6 month time horizons (a small illustrative sketch of this rolling set-up is included at the end of this post).</p> <p>Other features I toyed with included moving-average terms, difference terms, month and year, and the average of all previous MB density values. </p> <h2>CV Strategy</h2> <p>For CV I used the last 6 months of data, as shown below, and trained separate models for each forecast horizon. I wanted to use as much data as possible for training, since we only had a few years of historical data spanning the pandemic.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F6ae7279f1d78b31a125355f0c2d34cd8%2Fcv_strategy.png?generation=1687123527511586&amp;alt=media" alt=""></p> <h2>Linear Modeling via Optimization</h2> <p>I tried other ML approaches like XGBoost, but found the most success with the simpler linear models.</p> <p>Another nice feature of linear models is that you can directly fit them to minimize SMAPE by solving the optimization problem with the following objective:</p> <pre><code># df is a data.table; smape(a, fc) returns the SMAPE of forecasts fc against actuals a
fn &lt;- function(par, df, r_cols, y, norm_col, l2, l1) {
  preds &lt;- rep(par[1], df[, .N])   # par[1] is the intercept
  l2p &lt;- 0
  l1p &lt;- 0
  for (ind in seq_along(r_cols)) {
    i_col &lt;- r_cols[ind]
    # standardised feature times its fitted weight
    preds &lt;- preds + par[ind + 1] * (df[, get(i_col)] - mean(df[, get(i_col)])) / sd(df[, get(i_col)])
    l2p &lt;- l2p + l2 * par[ind + 1]^2
    l1p &lt;- l1p + l1 * abs(par[ind + 1])
  }
  err &lt;- smape(a = df[, get(y)], fc = preds) + l1p + l2p
  err
}
</code></pre> <p>I tested L1 and L2 regularization, but found the lowest CV accuracy with no regularization. Looking at the fitted weight magnitudes, it seems that MA terms are most important in the forecasts. This was my best-scoring final submission.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2F8b4a56ee77fff6fe87562ab6916726f5%2Flm_weights.png?generation=1687130521279924&amp;alt=media" alt=""></p> <h2>Multivariate adaptive regression spline (MARS)</h2> <p><a href="https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_spline" target="_blank">https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_spline</a></p> <p>Because I had success with the LMs, I thought that MARS would be a natural improvement since it is an algorithm for fitting piecewise linear models to automatically deal with non-linearities and incorporate variable interactions.</p> <p>I did some hyperparameter tuning, but had the most success with the default parameters, and based on the CV accuracy I hoped it would outperform the LMs. However, that wasn't the case with the private LB (I'm still not sure why, perhaps MARS was more sensitive to erroneous Dec data…)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13474157%2Fb41c10f7948ea74940de90bd4e8f357a%2Fsmape_comp.png?generation=1687127830752749&amp;alt=media" alt=""></p>
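<p>As referenced above, here is a small illustrative sketch of the rolling-origin ETS feature generation. The original work was done in R; this Python/statsmodels version, including the column names, is my own illustration rather than the author's code.</p> <pre><code>import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def add_ets_features(df, horizons=range(1, 7), min_history=12):
    """df holds cfips, first_day_of_month and microbusiness_density, sorted by date."""
    rows = []
    for cfips, g in df.groupby("cfips"):
        y = g["microbusiness_density"].to_numpy()
        for t in range(min_history, len(y)):
            fit = ExponentialSmoothing(y[:t], trend="add").fit()
            fc = fit.forecast(max(horizons))
            row = {"cfips": cfips, "origin_idx": t}
            for h in horizons:
                row[f"ets_h{h}"] = fc[h - 1]  # forecast made at origin t for month t + h
            rows.append(row)
    return pd.DataFrame(rows)
</code></pre> <p>Each row can then be joined back onto the training frame so that the model for horizon h sees the matching <code>ets_h</code> column for that horizon as a feature.</p>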
Stable Diffusion - Image to Prompts
Silver medal solution: ViT, ConvNext, kNN-regression with BLIP-2 captions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host, organizers, all the participants and my teammate <a href="https://www.kaggle.com/manwithaflower" target="_blank">@manwithaflower</a>!</p> <p>Brief summary of our solution:</p> <ul> <li><p>Models trained to predict prompt embeddings (ViT, ConvNext):</p> <ul> <li>First, train a separate MLP projection head for each model using selected image-text pairs from the LAION dataset. Filter the images based on image/text cosine similarity, retaining only the highest-scoring ones. This actually results in a dataset primarily consisting of images of written text with related prompts containing that same text - but it is still surprisingly effective, achieving 0.54434 public/0.54351 private on a single model.</li> <li>Fine-tune the models further with selected prompt/image pairs. Filter the prompts based on cosine similarity, the more aggressive filtering the better, with our final filtering threshold of 0.55 cosine similarity. Mainly use data from DiffusionDB 14M, <a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a> GPT2 &amp; Hardcoded datasets, <a href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a> ChatGPT dataset, <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a> dataset, images from Discord downloaded with URLs from Open Prompts dataset, and images downloaded from URLs in <a href="https://www.kaggle.com/tanreinama" target="_blank">@tanreinama</a> 900k dataset.</li> <li>Apply layerwise learning rate decay for fine-tuning.</li> <li>Use a loss function based on angular distance, which outperforms raw cosine similarity.</li> <li>For consistent local validation, filter prompts from the training dataset that are similar to validation prompts based on cosine similarity.</li> <li>Determine ensemble weights for the models using Nelder-Mead minimization algorithm on local validation.<br> The resulting ensemble achieves 0.59333 public/0.59109 private on the leaderboard.<br> <br></li></ul></li> <li><p>Retrieval-based/kNN-regression approach with BLIP-2 image captioning:</p> <ul> <li><p>Improve <a href="https://www.kaggle.com/motono0223" target="_blank">@motono0223</a>'s kNN-regression approach:</p> <ul> <li>Filter out duplicate text embeddings from the datasets.</li> <li>Combine multiple sub-arrays of numpy arrays.</li> <li>Penalize distances between sample and examples differently by simple distance exponentiation.</li> <li>Add more diverse data.</li> <li>Memory-map stored reference prompt embedding arrays for efficient access without loading the files into memory.</li></ul> <p>This allows to reduce inference time to 1 hour and achieves 0.57179 public/0.56952 private.</p></li> <li><p>Generate 20 diverse image captions for each test image using BLIP-2 and diverse beam search decoding. Store these embeddings along with the corresponding reference embeddings, and use them with the retrieval-based/kNN-regression approach. This achieves 0.58821 public/0.58617 private.</p></li></ul></li> <li><p>Ensemble both approaches! The resulting ensemble scores 0.61004 public/0.60787 private on the leaderboard.</p></li> </ul>
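<p>For the "loss function based on angular distance" mentioned above, a minimal PyTorch sketch could look like the following; the epsilon clamp and the mean reduction are my own choices, not taken from the write-up.</p> <pre><code>import torch
import torch.nn.functional as F

def angular_distance_loss(pred, target, eps=1e-7):
    """pred, target: (batch, 384) embedding tensors; returns the mean angle between them."""
    cos = F.cosine_similarity(pred, target, dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()
</code></pre> <p>The angle is a proper metric on the unit sphere, which may be part of why it behaved better than optimising raw cosine similarity directly.</p>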
Lux AI Season 2
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello Folks</p> <p>Responding to a certain request and fulfilling my obligations, I'm sharing my humble solution:</p> <p><a href="https://github.com/loh-maa/luxai-s2-submission" target="_blank">https://github.com/loh-maa/luxai-s2-submission</a></p> <p>It's a rather ordinary rule-based agent with some extra quirks, none of which are particularly unique -- they have already been described in other, better solutions.</p> <p>See you next time<br> loh-maa</p>
Lux AI Season 2
Some notes on a pure logic approach (lots of focus on domestic admin, not enough on fighting)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Notes below are not highly structured, just a few bits I find worth sharing in my approach (matches the notebook itself well in that sense: it is adequately commented I hope but far from production-ready code and the structure is difficult to penetrate as it is a nasty mix of 3 stages of design: legacy code from my first aimless submission, highly generalised code when I thought I could run 20+ solar panels and multiple ore/ice sources and then code that drops that generalisation to get at least somewhere).<br> <strong>- Bidding:</strong> bids very high but not dynamically; I came to the conclusion that a bid of 30 upfront is an immaterial cost relative to the benefits of better placement chances (it represents &lt;900 Power in value, so even at only 2 factories this bid sets me back by &lt;10 turns)<br> -<strong>Placement:</strong> </p> <ul> <li>primary score assesses the resource situation at a spawn and only spawns with the current max primary score are considered; if I run out of spawns with good ice/ore options on their own, friendly factories with their own ice are considered as ice donors if they are not donors yet (making an ore_adjacent factory with no ice an attractive primary score); hides ore/ice sources previously allocated to my own factories and all ice/ore sources within tier 2 of opponent factories</li> <li>among those, a secondary score picks the winner: the key ingredient for the secondary score is the potential to host mines (applies to placements with own ice: looks to maximise the number of ore sources within radius 20 for later mine placement); also giving a hefty bonus to placements where the factory puts itself between the resource and the closest opponent to make attacking the resource more costly<br> -<strong>General stuff:</strong> </li> <li>works extensively with internal record keeping to store knowledge/decisions across turns (all in "permanent_record" which is an unwieldy big dictionary, poorly structured in some places) - done at 3 levels: global, factory and robot (i.e. every robot has its own entry to remember stuff like its mission, destination etc)</li> <li>movement is done via Networkx pathfinding but the edge costs are manipulated to hard-core avoid some places (e.g. opp factories, the parking spots of my solar panels) and discourage others</li> <li>friendly collisions are avoided by having a "flight-log": persistent record over all future turns, giving for any future turn the list of already reserved locs that turn; any order must check for consistency with said flight-log and if the order is feasible and gets committed, it updates the flight-log accordingly (flight-log clean-up isn't implemented well, alas, e.g. when move orders get interrupted the matching entries don't get removed; a minimal sketch of this flight-log idea is shown after these notes)</li> <li>orders that take place on specified tiles (e.g. de-rubbling, de-lichening, staging for attack, parking on the defense grid) reserve their allocated tile for the duration of their order execution, but release the reservation (i.e. 
open it up for another robot) if they get disturbed and abort the mission</li> <li>orders given by combat code generate an entry blocking a robot from receiving another order until the combat order is fully executed, to avoid counter-ordering (on the assumption that combat orders are always more important than other orders)<br> -<strong>Factory management:</strong></li> <li>factories are governed by pre-set scripts (e.g. determining number of attack robots to build, when to stop ore mining etc) and try to follow the outlined script development through phases; </li> <li>all production is organised through Congas (absolutely no code here that allows for non-conga-mining, which is a grave design flaw); </li> <li>the dynamic factory controller mainly reacts to falling below Water and Power safety stock (SS) levels (by increasing Water production or shutting down Ore production till Power levels recover)</li> <li>Water SS increases over time to reach 500 water per factory by turn 700 (to have 200 spare water for final lichen growth - once this is achieved across all factories, all ice production is shut down); Ore production shuts down at 650 (as payback is 250 turns and I want max available power at 850 when the main attack starts)</li> <li>Factories trade water and metal; any factory can launch a Water emergency call and then other factories may respond to this if they can by dispatching a truck (there is also a Metal emergency call, if a factory has a broken Ore conga and no suitable repair robots left); for mine/farm specialised factories, that trade is done regularly plus some metal flowing the other way to balance power production; beyond emergency water trades, there is also an option to donate water to ensure roughly equal distribution of water levels across all factories<br> -<strong>Robot management:</strong></li> <li>organised around 8ish different roles that robots can be given; typically given at birth, but dynamic reclassification occurs (e.g. 
trucks are taken by reclassifying light terraform or defender units, and are handed back to that pool upon completion); solar panels peel off from the factory at the start of the end game to get a Heavy attacker or defender role</li> <li>3 domestic roles for solar panels/production Congas (that differentiation is pretty pointless now, but was important during the original design calling for 20+ robot mega factories): they just move into position and then get production/solar generation orders from the factory controller</li> <li>Each non-domestic robot type has a controller, that typically triggers when it finds a robot of that mission type with an empty order queue: trucks for metal/water distribution, repair to move solar panels into production congas when they are broken; terraformers (given some prio locs, then paths to reduce final attack costs, and general tier 2 tiles only after turn 500 when power is less valuable); defenders (light and heavy); Heavy lichen attackers</li> <li>Every controller has a recharge option that triggers when the cost to move to the closest factory gets close to the power left in the robot<br> -<strong>Combat management:</strong></li> <li>home defense is simply an exercise in power management; the base robots (ie those sitting on the factory) are responsible for maintaining the power SS in each exposed robot; so when they start dancing around due to an invader, the base refills them so that they don't run out of power</li> <li>main combat code is appropriately named SD (Self defense): triggers when a stationary robot has an adjacent threat or a moving robot is about to enter a tile that can be reached by a threat this turn (i.e. robot of same or higher size with more power); role-dependent preference for stepping away or stepping onto the threat; lichen attackers have a particular preference for stepping away from the attacked factory, in the hope of drawing out defending Heavy robots; for lichen attackers this gets turned off 30 turns before game end to make sure that they are (if necessary by suicide) going to delichen Tier 1 (which won't happen otherwise if enemy Heavies are parked inside the factory)</li> <li>delichen attackers first stage in tier 4 of the attacked factory; then at a fixed turn (870) 4 of them start to delichen; the rest remain inactive until the number of robots doing this drops below 4 (any robot engaged in SD counts as no longer delichening); this is done to preserve power in the attack force and not get into their own way</li> <li>allocation of attack robots to factories is done giving preference to achieving superiority in Heavy counts over the defenders and then to balance numbers across factories, moderated by the distance to the target factories (which effectively means I do not always send a Heavy attacker to the closest opposition factory)<br> -<strong>High value features missing</strong></li> <li>Some code to mine resources without Congas to allow more placement flexibility and react better to breaking of production Congas early on (and even mid-game, probably preferable over a Conga with 2 exposed Heavies as they are still hilariously easy to break)</li> <li>Actual defense code: detect incoming heavy, pick the closest solar panel, have the base unit charge up that solar panel with all the available power, detach the solar panel and have it meet the intruder in the field before reaching the factory; same for lights in the end game to at least slow down delichening by lights; same for heavy defenders in the end game (currently I fake this by giving some Heavy 
the Terraform mission in the end game, but it would be much better if they chased the intruder)</li> <li>Resource attack code: I have a significant power advantage throughout the midgame, and should use this to create attackers to disrupt ore mining (maybe even ice mining); given the globalised attack code structure, I should in theory be able to focus fire a single factory, overwhelming it with 6+ Heavy before turn 500 and maybe taking out a factory early</li> <li>Lichen attackers should get reallocated periodically: often I find 6+ of them huddling around an obviously beaten factory while the factory one over isn't delichened yet</li> <li>There is clearly a bug in SD: the rules of the game mean that robots should be nearly immortal as long as you move a robot threatened by a neighbour and never move it to a tile where a higher power robot could move in at the same turn; if you have code for that, the only way to die is heavy congestion (ie no such tiles exist) or you run out of power (which you in theory can avoid by having a dynamic recharge order trigger when you run the risk of that happening)… and yet, my robots die like flies even before I tell them to </li> </ul>
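<p>As referenced in the movement notes above, here is a minimal, hypothetical sketch of the flight-log idea: a per-turn reservation table that move orders must check before being committed. The data structures and function names are my own illustration, not the actual bot code.</p> <pre><code>from collections import defaultdict

flight_log = defaultdict(set)  # maps a future turn to the set of (x, y) cells already reserved for it

def path_is_free(path, start_turn):
    """path[i] is the cell a robot would occupy at start_turn + i."""
    return all(cell not in flight_log[start_turn + i] for i, cell in enumerate(path))

def commit_path(path, start_turn):
    for i, cell in enumerate(path):
        flight_log[start_turn + i].add(cell)

def release_path(path, start_turn):
    """Call when an order is interrupted, so other robots can reuse the cells."""
    for i, cell in enumerate(path):
        flight_log[start_turn + i].discard(cell)
</code></pre> <p>The clean-up problem mentioned above corresponds to <code>release_path</code> not being called consistently when orders are aborted.</p>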
AMP®-Parkinson's Disease Progression Prediction
Top 89 - Non Leaky Solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution is simple and doesn't use any "magic" in the data. It uses visit_month as a feature plus 200 standard-scaled peptide features. The idea is to minimize the variance of the results by running multiple train/validation splits and abusing BaggingRegressor with this simple algorithm:</p> <pre><code>from sklearn.ensemble import BaggingRegressor
from sklearn.svm import SVR

for fold in range(n_folds):
    valid = ...   # sample a subset of patients from the train set
    train = ...   # the remaining patients
    model = BaggingRegressor(
        estimator=SVR(C=C_value),     # hyper-parameter values were not preserved here
        n_estimators=n_estimators, max_samples=max_samples,
        max_features=max_features, n_jobs=-1)
    model.fit(train[features].values, train[target])   # target column name is a placeholder
    valid["pred"] = model.predict(valid[features])
    VALID_LIST.append(valid)
</code></pre> <p>This is basically bagging with bootstrap and helps a lot to stabilize the results. The thing is that using the peptide features boosted my baseline score by around CV: +0.3 / LB: +0.3. <br> After the deadline I added the visit_month leak features into this model and it boosted my scores by a lot, but I wasn't able to beat the baseline anymore. What I concluded is that the leak features are so much stronger than the peptide features that the model focuses only on the magic features.<br> Hosts, in a real world application, without any magic feature, this sampling strategy may work to extract a slightly better signal from the data.</p>
Stable Diffusion - Image to Prompts
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all for the great competition, and especially to my teammate <a href="https://www.kaggle.com/evilpsycho42" target="_blank">@evilpsycho42</a>. Don't let his "evilpsycho" username fool you--he was a fantastic teammate, and we had a great Slack conversation throughout the competition as we discussed our strategy and commiserated together when many of our best ideas failed to work as well as we hoped.</p> <p>Here are some key elements to our 7th place solution:</p> <p><strong>Data Generation</strong></p> <p>We assumed the training data would play a major role in this competition, so we started generating custom images using Stable Diffusion as quickly as possible. We tried to gather a diverse set of prompts, generally limiting them to have a cosine similarity of less than 0.7 using the competition metric. We filtered what we thought we the highest quality, low correlation prompts from the following sources:</p> <ul> <li>Diffusion DB</li> <li>Conceptual Captions</li> <li>COCO image captions</li> <li>Flickr Image Captions</li> <li>The ChatGPT prompts from <a href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a> (<a href="https://www.kaggle.com/datasets/jeinsong/chatgpt-images-w-prompts" target="_blank">https://www.kaggle.com/datasets/jeinsong/chatgpt-images-w-prompts</a>)</li> </ul> <p>Most importantly, we designed a custom prompt generator that we found to be very effective. It formed prompts from random combinations of subjects (40k people, places, and things), artistic media, adjectives, and verbs.</p> <p><strong>Validation Set</strong></p> <p>I'm sure most of us struggled at first to find a good validation set. We found it was important to ensure that our validation prompts came from a variety of sources and had a low correlation with our training data. </p> <p><strong>Models</strong></p> <p>Our best models were CLIP vision towers, especially <code>'convnext_xxlarge.clip_laion2b_soup'</code> and <code>'eva02_large_patch14_clip_336.merged2b'</code>, and we also included a few of the smaller ViT and convnext models from open_clip/timm in our final ensemble.</p> <p>While these CLIP backbones had the best single model scores, we found a robustness benefit from also including a small weight on a zero-shot image captioning model in our final ensemble, though not a large difference which one we used. Our top model uses the GIT model captioning model (<a href="https://arxiv.org/abs/2205.14100" target="_blank">https://arxiv.org/abs/2205.14100</a>)</p> <p><strong>Head Pretraining Trick</strong></p> <p>Since we were using CLIP models, we found a pretraining trick that allowed us to initialize the model's projection head that maps the CLIP backbone output to the 384 embedding used by the competition. We used our custom prompt generator to generate a million random prompts, and we created their text embeddings with the CLIP model's text tower as well as the text embeddings using the competition metric (<code>all-MiniLM-L6-v2</code>). Then we trained a simple linear layer that mapped the CLIP embedding to the competition embedding (e.g. 
<code>nn.Linear(in_features=clip_embeddings_size, out_features=384, bias=True)</code>)</p> <p>We used this as the pretrained weights for our model's projection head, which added information and performed better than using random weights to initialize the head (a small sketch of this pretraining step is given at the end of this post).</p> <p><strong>Training</strong></p> <p>We tried a few different variations of training techniques, and our favorite runs generally included:</p> <ul> <li>2 epochs</li> <li>layer-wise learning rate decay</li> <li>very little image augmentation (small amount of random/resize crop)</li> <li>large amounts of gradient accumulation to simulate large batch sizes</li> </ul> <p>We found the WiSE-FT technique was very effective in improving single model scores, but not helpful in our ensemble. </p> <p><strong>Single Model LB Scores</strong></p> <p>In case you are curious, here are the LB scores of our top models:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F642735%2Fe87c32b582ab1f658a722de90addcaa7%2Fprivate_lb_score_single_model.png?generation=1684201595735019&amp;alt=media" alt="private_lb_score"></p> <p><strong>Conclusions</strong></p> <p>Thanks to the organizers, and congrats to the other contestants!</p> <p><strong>Edit May 18</strong><br> Based on comments below, we shared the following, which I will add to the original post.</p> <ul> <li><p>The prompt generator is here <a href="https://www.kaggle.com/datasets/rturley/custom-prompt" target="_blank">https://www.kaggle.com/datasets/rturley/custom-prompt</a> with sample code in this notebook <a href="https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation" target="_blank">https://www.kaggle.com/code/rturley/custom-prompt-generator-for-image-generation</a></p></li> <li><p>The first 100k images that came from this prompt generator are in this dataset <a href="https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images" target="_blank">https://www.kaggle.com/datasets/rturley/stable-diffusion-100k-custom-prompts-and-images</a> with sample code in this notebook <a href="https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset" target="_blank">https://www.kaggle.com/rturley/tour-of-stable-diffusion-custom-prompt-dataset</a></p></li> </ul>
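<p>As referenced above, here is a minimal sketch of the head-pretraining trick under my own assumptions: <code>clip_text_emb</code> holds the CLIP text-tower embeddings of the random prompts and <code>target_emb</code> the corresponding all-MiniLM-L6-v2 embeddings. The loss choice and training-loop details are illustrative, not the team's code.</p> <pre><code>import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def pretrain_projection_head(clip_text_emb, target_emb, epochs=10, lr=1e-3, batch_size=1024):
    """clip_text_emb: (N, clip_dim) float tensor; target_emb: (N, 384) float tensor."""
    head = nn.Linear(clip_text_emb.shape[1], target_emb.shape[1], bias=True)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(clip_text_emb, target_emb), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            loss = 1 - nn.functional.cosine_similarity(head(x), y, dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head.state_dict()  # copied into the image model's projection head before fine-tuning
</code></pre>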
Stable Diffusion - Image to Prompts
6th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you organizers for hosting this competition, and congrats to all the winners. I was depressed today after missing the prize, but I am happy to have won the gold medal.<br> I tried some approaches at the beginning of the competition, but after all the solution settled down to quite a simple one:<br> <strong>create 8.87M SD2-generated images from the captions of the image/video captioning dataset, and after deduplicating/preprocessing the dataset, train the generated images with timm's latest backbones (i.e. eva, convnext, swin).</strong><br> The inference notebook for final submission is <a href="https://www.kaggle.com/code/sokazaki/sdip-6th-place-submission" target="_blank">here</a></p> <p><strong>First Attempts</strong><br> At the beginning of this competition, as the public notebooks do, I also tried some image captioning models.<br> After trying several models, the combination of 4 captioning models (<a href="https://arxiv.org/abs/2205.14100" target="_blank">GIT trained on MS COCO</a> / CoCa / <a href="https://arxiv.org/abs/2302.01328" target="_blank">Socratic model from IC3</a> / CLIP Interrogator) and ViT trained model from <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-inference" target="_blank">@shoheiazuma notebook</a> reached 0.5568 Public LB, and with the 4 image captioning models + ViT&amp;Swin trained using <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a> the score reached 0.57439 Public LB.<br> Therefore, at the beginning I considered creating two ways (i.e. image captioning models and direct optimization models like <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a>) to create final submissions, however, the inference time of image captioning models were too long, so I discarded to search for the best image captioning models, and shift to concentrate on creating direct optimization models. For creating direct optimization models, it was the key to create the SD2 dataset as soon as possible, so I started the survey of the image/video captioning dataset.</p> <p><strong>Data Generation</strong><br> I surveyed the image captioning / video captioning dataset and selected the below datasets to create SD2 images.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F8a506f048c85efde0c8b8894827b5cb5%2F2023-05-16%20231638.png?generation=1684246639225798&amp;alt=media" alt="SD2 generated dataset"><br> For the training dataset, The rich vocabulary &amp; the moderate length of the text captioning seemed important to me, so I selected the dataset based on the two factors. Also, I used the faiss script from <a href="https://www.kaggle.com/code/tomokihirose/use-vector-search-for-diffusion-db-cleansing" target="_blank">@tomokihirose notebook</a> only for Diffusion14M DB and WebVid10M to remove redundant above 0.9 cosine similarity captions from the dataset. 
Also, as preprocessing for the dataset, the captions which are not English / shorter than 5 words / longer than 70 words are removed from the training dataset. <br> To strengthen the depth effect <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398407" target="_blank">as discussed in this thread</a>, I also added the diffusion2M DB 224x224 resized dataset [15] and another 722K images (180.5K captions * 4 different seeds) created from diffusion2M DB captions to the above training dataset.<br> The gain from the depth was nice (+ 0.005~0.010 Public LB) as shown in the later results, so I should have dug deeper into the depth side for further performance improvement.</p> <p>To create the SD2 dataset, I used the diffusers library (the weights are from stabilityai/stable-diffusion-2) with the default settings. For fast generation, I also applied torch.compile mode and xformers with PyTorch 2.0 using the snippet below:</p> <pre><code>stable_model.unet = torch.compile(stable_model.unet)
stable_model.enable_xformers_memory_efficient_attention()
</code></pre> <p>For further speedup, applying <a href="https://github.com/facebookresearch/ToMe" target="_blank">Token Merging</a> or using DPMSolver++ in SD2 were candidates, however, I couldn't have confidence that using those approaches wouldn't decrease the total performance and it's hard to recreate the dataset due to the time resources, so I decided not to use those techniques.<br> Creating the 8.87M dataset took about 2 months with 5~10 V100s and 1~5 A100s. In my setting, a V100/A100 can produce about 100K/200K images in one week, so it costs about 600 V100 days / 300 A100 days to create the 8.87M dataset.</p> <p><strong>Model Training</strong><br> I selected 3 backbones (<a href="https://huggingface.co/timm/eva_large_patch14_336.in22k_ft_in22k_in1k" target="_blank">eva-large 336 size</a>, <a href="https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384" target="_blank">convnext-large 384 size</a>, <a href="https://huggingface.co/timm/swin_large_patch4_window12_384.ms_in22k_ft_in1k" target="_blank">swin-large 384 size</a>), and each model was trained for 5 epochs. For training, it took about 3~7 days on one A100. All models converged around 4 epochs for the validation dataset (train/valid split is 0.9/0.1).<br> In my experiments, the performance of eva 196 size, deit3, and efficientnet variants was lower than the above 3 models, so I didn't use them.</p> <p>The training script is almost the same as <a href="https://www.kaggle.com/code/shoheiazuma/stable-diffusion-vit-baseline-train" target="_blank">@shoheiazuma notebook</a>. The differences from the training notebook are:</p> <ul> <li>changed batch size to 128 for all models</li> <li>training with fp16 mode</li> <li>applied horizontal flip (ColorJitter, GaussianBlur, and RandomCrop didn't lead to improvement)</li> </ul> <p>Also, in the retraining phase, I changed CosineEmbeddingLoss() to CosineEmbeddingLoss(margin=-0.5) + 0.01 x MSELoss and trained with all datasets (a small sketch of this combined loss is given at the end of this post).<br> For final submission, the highlighted 14 models in the attached picture were ensembled with different resize resolution TTA (460x460 and 512x512) with equal ensemble weights.<br> The performance of each trained model is here. 
Split1 (5.02M) and Split2 (6.50M) are selected from the created 8.87M SD2 dataset except for 722K (180.5K * 4 different seeds) from DiffusionDB2M:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F59e4ce02426f42578cef6d1cdca8bb5e%2F2023-05-18%20004915.png?generation=1684338631593500&amp;alt=media" alt="ConvNext Results"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2F628b5ae887ae41492ad40f6c4800c03e%2F2023-05-18%20004936.png?generation=1684338658637625&amp;alt=media" alt="Swin Results"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1476759%2Fd02d95a4f44090f657b420bb7ee45762%2F2023-05-18%20004955.png?generation=1684338689799294&amp;alt=media" alt="EVA Results"></p> <p><strong>What didn't work in my environment</strong></p> <ul> <li>Hard Prompts Made Easy (<a href="https://arxiv.org/abs/2302.03668):" target="_blank">https://arxiv.org/abs/2302.03668):</a> <br> the score using this method was low, and it took a bit long time to generate each optimized prompt.</li> <li>Stable unCLIP (<a href="https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD):" target="_blank">https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD):</a> <br> I tried to use unCLIP for Test Time Augmentation, but it took a long time to generate each image variation.</li> <li>Training with Stable Diffusion's latent feature: <br> Extracted the feature of VAE in SD2 to train, but it didn't work well.</li> <li>Using LLM (e.g. T5, GPT2, GPT3 Davinci-003) to summarize/regenerate the captions from image captioning models: <br> I couldn't get much gain even with GPT3 Davinci-003, so I stopped to use this approach.</li> <li>Also I tried to generate prompts using Vicuna-13B, however, I couldn't get the weight from the application form, so I gave up using it.</li> </ul> <p><strong>Acknowledgement</strong><br> I sincerely would like to appreciate Hitachi Ltd., as I used my company's resources for creating datasets and training models.</p> <p><strong>Reference for the dataset</strong><br> <a href="https://cocodataset.org/#download" target="_blank">[1] MS COCO captions 2017</a><br> <a href="https://huggingface.co/datasets/poloclub/diffusiondb/tree/main" target="_blank">[2] DiffusionDB 14M</a><br> <a href="https://paperswithcode.com/dataset/flickrstyle10k" target="_blank">[3] Flickr Funny &amp; Romantic dataset</a><br> <a href="https://paperswithcode.com/dataset/flickr30k" target="_blank">[4] Flickr 30K</a><br> <a href="https://www.imageclef.org/photodata" target="_blank">[5] Iaprtc12</a><br> <a href="https://nocaps.org/download" target="_blank">[6] Nocaps</a><br> <a href="https://www.cs.rice.edu/~vo9/sbucaptions/" target="_blank">[7] SBU Captioning Dataset</a><br> <a href="http://users.cecs.anu.edu.au/~u4534172/senticap.html" target="_blank">[8] Senticap Dataset</a><br> <a href="https://textvqa.org/textcaps/" target="_blank">[9] Textcaps Dataset</a><br> <a href="https://vizwiz.org/tasks-and-datasets/image-captioning/" target="_blank">[10] VizWiz Dataset</a><br> <a href="https://trecvid.nist.gov/trecvid.data.html" target="_blank">[11] VTT 2016-2022 from TRECVID</a><br> <a href="https://m-bain.github.io/webvid-dataset/" target="_blank">[12] WebVid10M</a><br> <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/402146" target="_blank">[13] captions from </a><a 
href="https://www.kaggle.com/jeinsong" target="_blank">@jeinsong</a><br> <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">[14] captions from </a><a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a><br> <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/396136" target="_blank">[15] DiffusionDB 2M 224x224 resized dataset from </a><a href="https://www.kaggle.com/atom1231" target="_blank">@atom1231</a><br> <a href="https://huggingface.co/datasets/poloclub/diffusiondb/tree/main" target="_blank">[16] DiffusionDB 2M captions</a></p>
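<p>As referenced above, a minimal sketch of the retraining loss (CosineEmbeddingLoss with margin -0.5 plus a small MSE term) could look like this; everything beyond the stated margin and the 0.01 factor is my own assumption.</p> <pre><code>import torch
import torch.nn as nn

cos_loss = nn.CosineEmbeddingLoss(margin=-0.5)
mse_loss = nn.MSELoss()

def retraining_loss(pred, target):
    """pred, target: (batch, 384) embedding tensors."""
    ones = torch.ones(pred.shape[0], device=pred.device)  # label +1: pull each pair together
    return cos_loss(pred, target, ones) + 0.01 * mse_loss(pred, target)
</code></pre>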
Lux AI Season 2
Yet-another-logic-bot
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lux AI Season 2 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>With the competition still ongoing I've no idea whether I'm going to end up 1st or 5th (though judging by the first few days I suspect it'll be 1st-3rd, which I'm very happy with!), but now is the time for the writeup while it's still fresh in my mind! </p> <h1>General outline</h1> <p>I went for a stateful rules based bot with no full forward projection, though with some internal forward projection. I wrote it in Typescript. Early on I was planning on doing a team entry with some other Screeps players, but that fell through.</p> <p>Most of my work was done with self-play, especially for the first couple of months where I had some tricks that other people hadn't figured out yet which I wanted to keep under wraps. Pretty much all of them other people figured out on their own though! This got a bit tricky as per-step runtimes of my agent climbed, with some self-play games taking &gt;20 minutes. I think next time if we're still at 1000 steps of 3 seconds, I may drop down to C++ as I got to the stage where I'd needed 10+ hours on an 8 core machine to get any sort of reasonable data. Season 1 at 360 steps of 3 seconds was much more managable.</p> <h1>Placement and bidding</h1> <p>In general I felt everyone was bidding too low throughout the season, but I didn't want to bid higher, because then others would bid higher. Only my final agents bid "high" at 41, chosen because other agents were using 40 as their limit. Bidding was based on a simulation of best placements, then running things through my scoring system. I do rarely bid -1, as placing second was very rarely judged to be an advantage. I set the max range for a resource to be 4 from any factory tile. I noticed others had this set further. I found I'd too often have my chain stalled out if this was the case, but maybe I should have based it on range to enemy factory after everything had spawned. While the majority of my mining was chains I did have support for non-chain stuff which occasionally kicks in.</p> <p>Placement was designed to focus on good spots with both ice and ore. Having ice within one tiles was heavily prioritized, and having it outside of two was heavily penalized. Rubble and map edges were also penalized. Aggression, where my factory was closer to a resource than the opponent's only source of that resource, was a big bonus, again with ice more important than metal. </p> <p>If good ore+ice spots could not be found I tried to place ice-only factories, or, to pair with ice-only factories, ore-only factories. I then had metal and water couriers to move resources between them. I'm not sure I saw anyone else doing metal couriers: I reasoned it was better than spawning myself and moving the robot as often ice only factories would have an excess of power. Not 100% sure this was true.</p> <h1>Robot logic</h1> <p>I went for a stateful rules based bot. Each robot was spawned to fulfill a "role", and each step each robot would generate the actions to fulfill that role to its best ability. Some roles were quite static, like mining along a chain, while others were given more freedom. 
In cases where decisions were based on the "target" of another robot, I'd take the latest information - either from something previously run this tick or something run last tick. Everything was iterated through in the same order to keep things stable. The commands could get complicated enough to fill my action queues fairly well (go here, dig that rubble, go back to factory, pickup power, go here, dig that other rubble was plenty!)</p> <p>Once those actions were generated they were passed through a check to see if those actions would change the queue, and if not, nothing was submitted (a minimal sketch of this check is included at the end of these notes).</p> <p>All robots went through the same core "starting" logic where they'd do urgent tasks irrespective of role which went down a common path:</p> <ul> <li>Clearing lichen urgently (ie. where there was an opportunity to cut lichen off)</li> <li>Clearing rubble urgently (to reconnect my lichen),</li> <li>Giving power urgently (usually to a heavy that was at risk of being killed)</li> <li>Fighting neighbours</li> </ul> <p>Safety logic was also shared - I had a very important and expensive <code>checkSafetyForAction</code> function which would try to figure out if any enemy robot could kill my robot before I made it back to the factory. This was a bit more complicated than simply knowing how much power I needed to return home (which I also had) and I never really got it perfect. If a robot was "unsafe" or going to become "unsafe", it would ignore its previously computed set of actions and either go home for more power or sit still and charge.</p> <p>At the end of every step I'd have a resolver stage that figured out collisions, repathing and bumping/swapping, as well as validating pickup/transfer commands to avoid waste, and adding new transfer commands if they were better than what the robot was already doing. I then had a final "emergency" bump logic for when everything else failed and a robot needed to move. Everything else shouldn't have been failing… but there were corner cases in cramped environments that sometimes fell through the cracks.</p> <p>Robots could transition between roles. For example rubble clearers who got dragged away from their desired target by too much would turn into combat robots. Less important robots could be moved to a different role if a more important role was stuck waiting for metal. Things like that. I feel this was an underdeveloped part of my bot. Often I'd over-provision a role and not have anything stealing the robot for another role. For example: I might over-build rubble clearers, transition them to combat, then realise I needed more rubble clearers after all and not transition back.</p> <h1>Factory logic</h1> <p>Lichen growth was one of the trickiest parts to balance as it was rarely clear what the best strategy was. Others have proposed "only do it at the end" and maybe that's correct, though most of the top agents disagree. I tried to balance efficiency (eg. not watering when at 100, trying to minimize the losses in the ceil(numTiles/10) function) with the need for power and the current water situation. I had a forward projection for late game which helped a lot earlier in the event, though once late game attacks on lichen started appearing it became a bit too conservative and often I found myself holding on to too much water at the end.</p> <h1>Weaknesses/"future work"</h1> <p>While I don't plan on doing any future work, if I did, there are things I'd take a step back on and re-write.</p> <ul> <li>Movement in crowded situations. 
It often ended up a bit too much "first come first serve" with lots of cancelled queues due to blockages. I feel I could have analysed this better to form better queues.</li> <li>Countering late game heavy lichen blitzing. Seems I lose a lot of games to this. Have to figure out a way to identify it and combat the blitzers. I tried this with my 10pm final day submission, but it was too rushed. Many games I lose despite being 4x lichen up at step 900 and basically dropping to zero by step 1000.</li> <li>Less movement. My robots tend to be a bit twitchy and waste power. More "doing nothing" would benefit it.</li> <li>Better handling of "mobbed" factories (and mobbing myself). A factory can quite easily be denied ice if it's surrounded by enemy factories. Right now I don't really do that.</li> <li>In general, better attack/defend priorities. It's not really focussed enough right now. For example, I've seen it stop camping the ice of a factory with very low water where camping it would have killed the factory.</li> <li>Maybe more solar-panel stuff?</li> </ul>
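<p>The "only submit when the queue changes" check referenced above can be sketched roughly as follows, in Python rather than the author's TypeScript; the method names are my own simplification, not the actual bot code.</p> <pre><code>def actions_to_submit(robots):
    """Return only the queue updates that actually change something, to avoid paying the update cost."""
    actions = {}
    for robot in robots:
        desired = robot.plan_actions()          # freshly generated for this step
        current = robot.current_action_queue()  # what the engine already holds
        if desired != current:                  # identical queue: submit nothing for this robot
            actions[robot.unit_id] = desired
    return actions
</code></pre> <p>In practice the comparison also has to tolerate the queue shrinking as actions are consumed each turn.</p>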
March Machine Learning Mania 2023
6th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2F7bb4578e8da82d07c168e93a18f4da67%2FPicture1.png?generation=1681071447070772&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2Fd09996b8accf6cbd6b875d4d33f94af5%2FPicture2.png?generation=1681071456828606&amp;alt=media" alt=""><br> <strong>Background</strong> </p> <p>I graduated from Heriot-Watt University in Edinburgh, Scotland in 2006 with a degree in Actuarial Mathematics and Statistics. </p> <p>Since then, I have been working for multiple banks in the United Kingdom and Australia predominately in credit risk modelling. This has involved creating mathematical models to help the organisation make the correct decisions. In addition I have also looked at regulatory models and forecasting of provisions. I am currently a Director, Analytic Consulting in an analytics company based in Madrid.</p> <p>This competition has become an annual hobby for me. I first discovered it in 2017 which is when I found out what March Madness was. In that competition I came 5th but I wasn’t happy with the approach and have changed it since then.</p> <p>This year I spent a bit more time than usual because I moved my processes from running on my local machine to running on Kaggle. As part of that, I have reviewed and updated all my old codes to make sure they were doing what they should have and updated the process. This includes updating to use R code rather than using the SQL library which is how I produced some of some steps previously. I have included my code here (<a href="https://www.kaggle.com/code/djscott1909/march-madness-2023-ds" target="_blank">https://www.kaggle.com/code/djscott1909/march-madness-2023-ds</a>) which you can review but please be kind. I have taught myself R during these competitions so I apologise for any inefficient coding. </p> <p><strong>Summary</strong></p> <p>My code breaks down into multiple sections. The first 6 sections of my code are included to use the data provided by the competition and looking at different ways to generate variables. This code includes making a range of my own ratings for teams. They include:</p> <ul> <li><strong>A Simple rating system</strong> – This creates a rating for all teams based on their season results and using the margin of victory in each game to generate a rating. </li> <li><strong>An ELO rating</strong> – The same concept except it uses the ELO process (<a href="https://en.wikipedia.org/wiki/Elo_rating_system" target="_blank">https://en.wikipedia.org/wiki/Elo_rating_system</a>) to generate the teams rating.</li> <li><strong>Summary statistics</strong> – Using the information to generate some statistics which could be beneficial from the games. This includes Win percentage, Points for and against, Strength of Schedule, quadrant 1 wins, etc.</li> </ul> <p>The last two sections are focused on generating a unique model for the Men’s and Women’s competition. 
I use all the information available and don’t introduce any of the statistics which are only available in later years, i.e. I don’t use the detailed results dataset, location information (only if a team was home, away or neutral), 538, odds or use the 0/1 method to optimise. I have used both odds and 0/1 in previous years but with the changes this year I didn’t think it was worth it. I also haven’t used the override approach, i.e., setting 1 Seed to 100% chance of winning. In some years I am sure this will be beneficial but I want to simply see how the model performs. </p> <p>The model itself is using a stepwise regression. I start the model with no variables and the step process looks to add in the most predictive variable and repeats the process. I prefer to use a step process which also reviews the variables in the model and removes them if they no longer add value to the model. This helps to make sure all variables considered are truly adding value. </p> <p>My final models looked as follows:</p> <p><strong>Men</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2F762288b66a2ff686a56f076c3af621e7%2FPicture3.png?generation=1681071956155461&amp;alt=media" alt=""></p> <p><strong>Women</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F485694%2Fa64887b1057974e7789a7c32aef1bd76%2FPicture4.png?generation=1681071974834457&amp;alt=media" alt=""></p> <p><strong>Interesting findings</strong></p> <p>I decided to test some variable transformations in the model development. This included creating a Weight of Evidence (WoE) binning for the variables, removing outliers and normalising the variables. </p> <p>During the Warmup competition I was careful to build only on the development sample and leverage the out-of-time sample to help inform which direction I should go next. During this time, it seemed that normalising didn’t add much to the Women’s competition. I can only assume the distribution within some of the variables made this better but I didn’t expect to see a difference between the two. My plan for next year is to investigate this further. </p> <p><strong>Simple Features and Methods</strong></p> <p>This year, with the changes made, I decided to create 2 models and not leverage the 0/1 approach with the 2 submissions. I thought that the reward wasn’t as high this year so I went a different way. </p> <p>I had a much simpler model which didn’t contain the WoE fields. Unfortunately, it appears to have been too simple (or maybe was just unlucky this year). This model would only have been good enough for 329th so I might need to investigate alternatives for my 2nd model for next year. </p> <p>Since my approach to the model development uses a stepwise regression and develops a single model, the model ends up fairly simplistic in comparison to others (I think). While the models may look like they use a lot of variables, it is really the same variable with different WoE binnings, so the number of underlying characteristics is fairly small.</p> <p><strong>Additional Note</strong></p> <p>I always enjoy these competitions but I wanted to give a mention to the work done by <a href="https://www.kaggle.com/zdbradshaw" target="_blank">@zdbradshaw</a> (<a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2023/discussion/397987)" target="_blank">https://www.kaggle.com/competitions/march-machine-learning-mania-2023/discussion/397987)</a>. 
The work he did with the publicly shared submissions made the final games much more enjoyable for me, as I knew which results to cheer for. I really encourage people to share their solutions next year so a similar exercise can be done. It adds a lot to the experience for me. </p> <p>This year was a roller coaster of emotions (being 800+ after the first 4 games) but I got luckier as the competition went on, and that is why I keep coming back to it. I hope it continues for many years to come. </p>
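<p>To make the Elo rating idea from the summary concrete, here is a minimal Python sketch of an Elo update over a season of results. The write-up's actual implementation is in R inside the linked notebook; the K-factor and starting rating below are illustrative assumptions, not the values used:</p>
<pre><code>K = 32            # illustrative K-factor, not the value used in the write-up
ratings = {}      # team_id -> current rating

def expected(r_a, r_b):
    # Probability that team A beats team B under the Elo model
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(winner, loser):
    r_w = ratings.setdefault(winner, 1500.0)
    r_l = ratings.setdefault(loser, 1500.0)
    e_w = expected(r_w, r_l)
    ratings[winner] = r_w + K * (1 - e_w)
    ratings[loser] = r_l - K * (1 - e_w)

# Usage: replay the regular season in chronological order
# for _, row in regular_season.sort_values("DayNum").iterrows():
#     update(row["WTeamID"], row["LTeamID"])
</code></pre>
<p>The resulting ratings (and the analogous margin-of-victory based simple rating) then become candidate inputs to the stepwise regression.</p>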
AMP®-Parkinson's Disease Progression Prediction
8th place solution: One trick to win the gold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AMP®-Parkinson's Disease Progression Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammates <a href="https://www.kaggle.com/hookman" target="_blank">@hookman</a> <a href="https://www.kaggle.com/zui0711" target="_blank">@zui0711</a> <a href="https://www.kaggle.com/max2020" target="_blank">@max2020</a> <a href="https://www.kaggle.com/librauee" target="_blank">@librauee</a>; I have learnt a lot from them. A crucial trick found by them is that there are two groups in the data: <strong>the true patients and the control group</strong>. We can find this by <strong>the minimum visit_month diff</strong> of each id:</p> <pre><code>for df in [clinical, sup_clinical]:
    df['visit_month_diff'] = df.groupby(['patient_id'])['visit_month'].diff()
    df['visit_month_diff_min'] = df.groupby('patient_id')['visit_month_diff'].transform('min')
</code></pre> <p>There are 3, 5, 6, 12 and 36 visit_month diffs in clinical and sup_clinical. We choose only the 3, 6, 12 and 36 parts as training data, and transform 3 and 36 to 6. We find that 3, 6 and 36 have obviously higher updrs values and 12 has lower updrs values (the 5 part is dropped as abnormal).<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F99690a48cfd9b72ad8bc4fc746815856%2FWX20230519-0926512x.png?generation=1684460243462016&amp;alt=media" alt="updrs in different visit_month diff and months"><br> We can conclude that <strong>visit_month diff==6</strong> are the true patients and <strong>visit_month diff==12</strong> are the control group.<br> We use a piecewise function to optimize the two groups' smape separately:</p> <pre><code>def calculate_predictions(pred_month, trend):
    if target == 'updrs_4':
        pred_month = pred_month.clip(60, None)
    pred_month2 = (pred_month - 60).clip(0, None)
    return np.round(trend[0] + pred_month * trend[1] + pred_month2 * trend[2])
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F5705a4149bab70f274bd16c521c5c967%2FWX20230519-1027572x.png?generation=1684463307465921&amp;alt=media" alt="0=control group, 1=true patients"><br> In the first inference loop, we cannot get the visit_month diff, so we train additional coefficients based on true patients + control group. In every loop, we record history data so that we can get <strong>the minimum visit_month diff</strong>.<br> Finding these two groups can help you reach 54.2~54.8 on the public board and 60.1~60.7 on the private board.</p>
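<p>To illustrate how the per-group <code>trend</code> coefficients used in <code>calculate_predictions</code> could be fitted by directly minimizing SMAPE, here is a minimal sketch. The optimizer choice, starting point and variable names are assumptions for illustration; the team's exact fitting procedure is not described above:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def smape(y_true, y_pred):
    # Competition metric: symmetric mean absolute percentage error (0 when both values are 0)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    safe = np.where(denom == 0, 1.0, denom)
    return 100 * np.mean(np.where(denom == 0, 0.0, np.abs(y_true - y_pred) / safe))

def fit_trend(pred_month, y_true):
    # trend = [intercept, slope, extra slope after month 60], matching calculate_predictions
    pred_month2 = (pred_month - 60).clip(0, None)

    def loss(trend):
        y_pred = np.round(trend[0] + pred_month * trend[1] + pred_month2 * trend[2])
        return smape(y_true, y_pred)

    return minimize(loss, x0=np.zeros(3), method="Powell").x

# Usage (illustrative): one trend per group and per updrs target
# trend_patients = fit_trend(months_patients, updrs1_patients)
</code></pre>
<p>Fitting one set of coefficients per (group, updrs target) is what lets the true-patient and control-group trends differ.</p>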
Predict Student Performance from Game Play
[0.67] 742nd Place Solution - Single Transformers Model Comparison Between Data Aggregation and None
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>742nd Place Solution</strong></h1> <p>First of all, congratulations to the first 4 teams who emerged victorious in this long and challenging competition. This was my very first challenge since I created this account 4 years ago, during my first year at university for a random course at that time.</p> <p>In this competition, I made every effort to implement transformers to solve the problem, resulting in two solutions with different approaches:</p> <h1>1. Sequence-N Aprroach</h1> <ul> <li>Codes: <a href="https://www.kaggle.com/phuhoang26/psp-llama" target="_blank">Llama</a></li> <li>Model's weights: <a href="https://www.kaggle.com/datasets/phuhoang26/psp-seqtf-llama" target="_blank">Llama-W</a></li> <li>Public score: 0.616</li> <li>Private score: 0.621</li> </ul> <p>This approach was inspired by the <a href="https://www.kaggle.com/competitions/data-science-bowl-2019/discussion/127891" target="_blank">3rd solution of limerobot in the Data Science Bowl 2019</a> by <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a>.<br> In this approach, I utilized N information (rows) of session_id as input to my model. The pipeline for this approach is as follows:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563184%2F8c31538c1619c8e330bdde51b078247f%2Fdrawio.png?generation=1688024191708037&amp;alt=media" alt="Sequence-N data pipeline"></p> <p>The rationale behind this approach was to leverage the sequential characteristics of the data. After preprocessing and grouping the data by sessions, it was fed into a language model. The model was trained using StratifiedKFold (K=5).</p> <p>Instead of using BERT, as in the original codes of limerobot, I employed <a href="https://arxiv.org/abs/2302.13971" target="_blank">Llama</a>, a state-of-the-art language model developed by Facebook. Llama claims to perform well or even better than previous models (with billions of parameters) while having a significantly smaller number of parameters (4M) for the same tasks.</p> <p>I trained this approach with different sequence lengths. For level_group = '0-4', I trained with sequence length equals to 256; level_group = '5-12' is 528; and level_group = '13-22' is 808. These number was the output of EDA (counting number of rows at quantile 0.9)</p> <p>I also experimented with different classifiers to learn from the output of Llama, but the scores did not show significant improvement. These are the classifiers I tried:</p> <ul> <li>Llama + <a href="https://arxiv.org/abs/1408.5882" target="_blank">1D CNN</a></li> <li>Llama + XGBOOST</li> <li>Llama + LSTM</li> </ul> <p><strong>Reasons for the failure of this approach:</strong></p> <ul> <li>For processing CATS features (categorical features), I employed a mapping technique (e.g., cutsence_click as 1, personal_click as 2, navigate_click as 3, etc.). 
This technique works well when the tokens are carefully customized, as done in Glove or Fasttext, where similar tokens are embedded close to each other in the feature space (e.g., run and walk, hit and slap, brown and yellow).</li> <li>For NUMS features (numerical features), I only filled the NaN values with 0, without performing feature selection or generation. This may have contributed to the failure of the model.</li> </ul> <h1>2. Gated Tab Transformer</h1> <ul> <li>Codes: <a href="https://www.kaggle.com/phuhoang26/gated-tab-transformer" target="_blank">GATED-TAB-TF</a></li> <li>Model's weights: <a href="https://www.kaggle.com/datasets/phuhoang26/gated-tf-psp" target="_blank">GATED-TAB-TF-W</a></li> <li>Public score: 0.674</li> <li>Private score: 0.67</li> </ul> <p>Paper of this approach: <a href="https://arxiv.org/abs/2201.00199" target="_blank">The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling</a></p> <p>Data aggregation ideas and code derive from <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a>'s notebook, which you can see <a href="https://www.kaggle.com/code/pourchot/simple-xgb" target="_blank">here</a>.</p> <p>In this approach, I used aggregated information of each session_id to feed into my model. Here is this approach's pipeline:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563184%2F4d9e348215f7856a575c1fb8db55c739%2F2.drawio.png?generation=1688024800924132&amp;alt=media" alt=""></p> <h1>Conclusion</h1> <p>Initially, I believed that the Sequence-N approach would outperform the Gated Tab Transformer. However, the scores of the latter model turned out to be significantly better than the former, and I have yet to determine the reason for this.</p> <p>Furthermore, as I stated earlier, I trained the Sequence-N approach with quite long sequences. However, in the submitted versions, I saw that the smaller the sequence length, the better! I cannot explain this.</p> <p>If you have any questions, comments, or contributions, please feel free to contact me or leave a comment here.</p> <p>Thank you for taking the time to read this!</p>
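<p>To give a feel for the aggregation step that feeds the Gated Tab Transformer, here is a minimal pandas sketch of per-session, per-level_group aggregation. The column lists and statistics are illustrative; the full feature set follows the linked notebook by @pourchot:</p>
<pre><code>import pandas as pd

CATS = ["event_name", "name", "fqid", "room_fqid", "text_fqid"]
NUMS = ["elapsed_time", "level", "hover_duration", "room_coor_x", "room_coor_y"]

def aggregate(df: pd.DataFrame) -> pd.DataFrame:
    # One row per (session_id, level_group): number of unique values of the
    # categorical columns plus mean/std of the numeric columns.
    group = df.groupby(["session_id", "level_group"])
    feats = []
    for c in CATS:
        feats.append(group[c].nunique().rename(f"{c}_nunique"))
    for c in NUMS:
        feats.append(group[c].mean().rename(f"{c}_mean"))
        feats.append(group[c].std().rename(f"{c}_std"))
    return pd.concat(feats, axis=1).reset_index()

# train_feats = aggregate(train)   # then one binary classifier per question
</code></pre>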
GoDaddy - Microbusiness Density Forecasting
Public 1125 - > Private 48 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Methods that are beneficial for prediction:</p> <p>Outlier smoothing<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5998972%2Fbc31d3c4fcf04c23e7afb091f931984b%2F1.png?generation=1687061025743863&amp;alt=media" alt=""><br> Coefficient adjustment + approximation<br> active: The raw count of microbusinesses in the county The target to be predicted is population density = active / population. The population statistics change every January. The competition requires predicting the population density in 2023, which has changed in the denominator of population density. Based on the existing data, use (2023 predicted value * 2020 population) / 2021 population to estimate the coefficient of population trend change, and then correct the predicted value with this coefficient. Because the numerator part active is originally an integer, the predicted value multiplied by the population is usually a continuous value, such as 1.3, while the actual value may be 1, and rounding can help increase accuracy. The predicted value after coefficient correction should be multiplied by the 2021 population, then rounded, and then divided by the 2021 population for further correction.</p> <p>Optimal shift time = 12<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5998972%2Fc78306ebaa11dbf3adbc2c6a1d6a1b62%2F2.png?generation=1687061119239898&amp;alt=media" alt=""><br> Create lagged features for the past 12 months, if it is 24 or 6, it will make the effect worse, especially 24.</p> <p>The state and county fields are label encoded, plus the cfips field (equivalent to state + county), which are beneficial for the prediction results. 
<br> Rate of change effect is better<br> For predicting n + gap tasks, change the original density continuous value to be predicted to the growth value between the current value and the previous gap months.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5998972%2F9c4f2fa8203de8a5e29cc538e1dfccb8%2F3.png?generation=1687061133256397&amp;alt=media" alt=""><br> Make some lagged features for the original density continuous value, and then add some lagged features for this growth value, which improves the prediction accuracy.</p> <p>After converting the density target value into a growth value, LightGbm and Xgboost models need to change the default objective parameter and use pseudo huber loss as the optimization indicator.</p>
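<p>A minimal sketch of the two ideas above, the rate-of-change target and the pseudo-Huber objective, assuming a training dataframe <code>df</code> with the competition's <code>cfips</code>, <code>first_day_of_month</code> and <code>microbusiness_density</code> columns. The <code>gap</code> value, lag windows and hyperparameters are placeholders, not the values actually used:</p>
<pre><code>import xgboost as xgb

gap = 1  # forecast horizon in months (placeholder)

# Target: growth rate relative to the density `gap` months earlier, per county
df = df.sort_values(["cfips", "first_day_of_month"])
df["target"] = df.groupby("cfips")["microbusiness_density"].transform(
    lambda s: s / s.shift(gap) - 1.0
)

# Lagged features on both the raw density and the growth rate
for lag in range(1, 13):
    df[f"density_lag{lag}"] = df.groupby("cfips")["microbusiness_density"].shift(lag)
    df[f"growth_lag{lag}"] = df.groupby("cfips")["target"].shift(lag)

model = xgb.XGBRegressor(
    objective="reg:pseudohubererror",  # pseudo-Huber loss instead of the default squared error
    n_estimators=1000,
    learning_rate=0.05,
)
# feature_cols = [c for c in df.columns if "lag" in c]
# model.fit(df[feature_cols], df["target"])
</code></pre>
<p>LightGBM offers a similar robust alternative via its <code>huber</code> objective.</p>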
Stable Diffusion - Image to Prompts
12th place solution - GPUs go Brrr
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3076212%2Fcd59e11f814393fda802c8d225b02635%2FKaggle%20Days%20post-competition.001.jpeg?generation=1684439846271607&amp;alt=media" alt=""></p> <p>YAY this is my first solo gold medal! Thanks Kaggle for organizing this! As always it was a fun but demanding competition. Also huge thanks to my employer, <a href="http://freepik.com" target="_blank">freepik.com</a> for the compute. I would have never made it with my own equipment!</p> <p>My strategy all along was to amass as many images and train as many models as possible in the 3 months span of the competition.</p> <h1>Data</h1> <p>I generated roughly 4.4M images. Initially I used the script referred to by the organizer, but a few weeks into the competition I adopted a few optimizations from pytorch 2.0 and the diffusers library, such as memory efficient attention and <code>torch.compile</code> and rolled out my own generator largely based on Pedro Cuenca's <a href="https://pytorch.org/blog/accelerated-diffusers-pt-20/" target="_blank">accelerated diffusers blog post</a>. I used the same generation procedure as the organizer, or at least the bits that were public: generate 768px images then rescale to 512px, same CFG, noise sampler, etc. It would have been much faster to get 512px images straight out of the model but I hypothesized that the former would approximate the test set domain better. I kept the images in lossless format (WebP). Using JPGs was much more manageable but resulted in slightly worse LB.</p> <p>I tried to gather prompts as diverse as possible including real image captions from LAION, CoCo, etc. and scraped prompts (DB2, OP, MP):</p> <ul> <li>Conceptual Captions (~2M)</li> <li>Open Prompts (~1.2M)</li> <li>CoCo (~0.4M)</li> <li>Magic Prompt (~0.4M)</li> <li>DiffusionDB2 (~0.4M)</li> <li>LAION aesthetics (~0.1M)</li> </ul> <p>I used <a href="https://www.kaggle.com/xiaozhouwang" target="_blank">@xiaozhouwang</a> 's idea to <a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">filter out images with cosine similarities &gt;.9</a> as many prompts, especially those scraped from discord, etc. are simply a rephrasing of another prompt. I also discarded prompts longer than 77 tokens. 
Deduplication had a significant impact in both CV and LB.</p> <p>2023-08-14 EDIT: <a href="https://huggingface.co/datasets/jamarju/sd-4.4M" target="_blank">Link to the full train/validation dataset</a>.</p> <h1>Single models</h1> <p>I tried many architectures with a 100K subset distributed more or less uniformly from the Open Prompts database, and then selected the following models for the final train:</p> <ul> <li>eva_giant_patch14_336 (1B params)</li> <li>eva02_large_patch14_448 (300M params)</li> <li>eva02_large_patch14_448.mim_m38m_ft_in22k (300M params)</li> <li>vit_huge_patch14_clip_224.laion2b (630M params)</li> <li>vit_large_patch14_clip_336.openai_ft_in12k_in1k (300M params)</li> </ul> <p>I used timm's pretrained weights with a single <code>Linear(x, 384)</code> head and negative cosine similarity loss. Some of those models were released in the middle of the competition, so enormous thanks go to Ross Wightman for his titanic and continuous job.</p> <p>I trained for 3 epochs with OneCycleLR schedule and lr=1e-6 for most of the models and Adam, 8-bit Adam from bitsandbytes or 8-bit Lion (also from bitsandbytes) depending on the model and deepspeed stage 2 for distributed training. Overall 8-bit training worked fine but some models were only competitive when trained with regular Adam + AMP (bfloat16). Each architecture was trained with a different random seed. In the last couple of weeks I retrained 10 of the weaker models using constant LR + warm up as some models improved both CV and LR over their OneCycleLR counterparts, but that was not always true so I kept the OneCycleLR checkpoint where it was better.</p> <p>The total compute I used to train all 25 models was ~215 gpu*days plus another ~85 gpu*days taken by failed/suboptimal/interrupted trains. GPUs were mostly RTX 3090.</p> <h1>Validation strategy</h1> <p>To tame the CV/LB correlation, I clustered the embeddings into 100 clusters using KMeans and ran K-fold validation with 20% holdout samples grouped by cluster. My CVs were consistently between 9% and 17% higher than the LB for single fold models, 13% on average. </p> <h1>Ensemble</h1> <p>For the final ensemble I trained a very simple <code>nn.Linear(5, 1, bias=False)</code> to find the optimum blending coefficients. I tried other sophisticated sample-wise strategies, such as a transformer encoder with or without architecture hinting or an MLP, but blending linearly worked better.</p> <p>In the very last hours I had 1 spare sub, so I submitted just the EVAs dropping the two (weaker) ViTs and that turned out to be my best scoring submission.</p> <h1>Pain points</h1> <p>Keeping track of so many models was the hardest part for me in this competition, as I am not an overly organized person. Also, some models training lasted for as long as 9 days! Murphy's law held true most of the time and I experienced many kinds of interruptions:</p> <ul> <li>power outages, </li> <li>NVMe errors, </li> <li>disk full at checkpoint save (ouch),</li> <li>Linux OOM killing sprees, </li> <li>GPU thermal throttling, </li> <li>higher prio GPU tasks (ie. other people wanting to use the computer for real work),</li> <li>even tmux has blown up in my face twice</li> </ul> <p>If I had to chose my least favorite crash that would probably be tmux's.</p> <p>I can't imagine what months long training of massive LLMs must be!</p> <p>I used pytorch lightning + checkpoint callbacks every epoch AND every hour. 
Unfortunately, the hourly checkpoints (or any mid-epoch checkpoint for this matter) are not properly resumable because the dataloader is not deterministically fast-forwarded to the exact same step. That results in some images being run twice and some others not run at all, and therefore a lower CV.</p> <h1>Things that didn't work</h1> <ul> <li>LORA (trains fast but less competitive)</li> <li>Many other archs (ConvNext XL, XXL, efficientnet v2, smaller ViTs, BEiT, BEiTv2, vanilla resnets, etc).</li> <li>Patch dropout (trains really fast but less competitive)</li> <li>Tensor-rt (what a nightmare)</li> <li>Contrastive loss instead of -cosine similarity loss.</li> <li>GPT-3.5/-4 prompt generation (problems of obedience to the prompt and repetition of themes, but I didn't try too hard)</li> </ul> <h1>Thanks</h1> <p>Again, Ross Wightman for timm, <a href="https://www.kaggle.com/ptrblck" target="_blank">@ptrblck</a> at the pytorch forums, my employer <a href="http://freepik.com" target="_blank">freepik.com</a> (currently hiring, btw!) and the organizers.</p>
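<p>A minimal sketch of the clustered validation split described in the Validation strategy section: KMeans on the prompt embeddings, then group-aware folds so that near-duplicate prompts never straddle the train/holdout boundary. The embedding array here is a random placeholder:</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GroupKFold

# Placeholder embeddings; in practice these are the 384-d all-MiniLM-L6-v2
# embeddings of every training prompt.
prompt_embeddings = np.random.rand(1000, 384).astype(np.float32)

# Cluster similar prompts together, then split by cluster so near-duplicates
# never end up on both sides of the split.
clusters = KMeans(n_clusters=100, n_init=10, random_state=0).fit_predict(prompt_embeddings)

gkf = GroupKFold(n_splits=5)  # each fold holds out ~20% of the samples
for fold, (train_idx, valid_idx) in enumerate(gkf.split(prompt_embeddings, groups=clusters)):
    assert set(clusters[train_idx]).isdisjoint(set(clusters[valid_idx]))
    # train on train_idx, score mean cosine similarity on valid_idx
</code></pre>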
Stable Diffusion - Image to Prompts
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! Thank you, Kaggle, for hosting such a fun competition (kudos for coming up with 16K prompts)! Also, a shoutout to all the open-source efforts in the DS community.</p> <p><a href="https://www.kaggle.com/code/gerwynng/sd-itp-4th-place-solution" target="_blank">Final Submission Notebook</a></p> <h1>Summary</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2796179%2F06b4a722da105f50301945222179de44%2F4th_solution.png?generation=1684243194540991&amp;alt=media" alt=""></p> <table> <thead> <tr> <th></th> <th>Public</th> <th>Private</th> <th>Runtime</th> </tr> </thead> <tbody> <tr> <td>(1+2+3) Text Candidate Generation</td> <td>0.614X</td> <td>0.612X</td> <td>6 hours</td> </tr> <tr> <td>(4) ViT Encoders</td> <td>0.644X</td> <td>0.641X</td> <td>3 hours</td> </tr> <tr> <td>Weighted ensemble (CV Best: 0.4, 0.6)</td> <td>0.646X</td> <td>0.643X</td> <td>9 hours</td> </tr> <tr> <td>Weighted ensemble (LB Best: 0.25, 0.75)</td> <td>0.648X</td> <td>0.645X</td> <td>9 hours</td> </tr> </tbody> </table> <p><strong>Text Candidate Generation</strong></p> <ul> <li>Text retrieval: performing a top-k exhaustive search from 56M precomputed text embeddings, using pretrained <a href="https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k" target="_blank">CLIP-bigG</a>, followed by cosine similarity filtering using pretrained <a href="https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K" target="_blank">CLIP-H14</a></li> <li>CLIP interrogator: performing a top-k exhaustive search from precomputed CLIP-bigG text embeddings of 1.6M "prompt components". Each of the retrieved string components are concatenated into prompts.</li> <li>CoCa: using <a href="https://github.com/mlfoundations/open_clip#fine-tuning-coca" target="_blank">LAION's implementation</a>. </li> </ul> <p><strong>ViT Encoders</strong><br> Two-stage finetune process using: 1. DDB; 2. 
generated 7M images<br> Initialised from the following checkpoints: CLIP-ViT <a href="https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K" target="_blank">L</a>, H (as above), <a href="https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K" target="_blank">G</a>, bigG (as above)</p> <h1>Dataset</h1> <p><strong>Caption Set (Text Retrieval + Image Generation)</strong><br> Total: ~5.2M, Generated Images: ~5.2M </p> <ul> <li><a href="https://github.com/poloclub/diffusiondb" target="_blank">DDB</a></li> <li><a href="https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts" target="_blank">Gustavosta's Stable Diffusion Prompts</a></li> <li><a href="https://github.com/krea-ai/open-prompts" target="_blank">Kree AI's Open Prompts</a></li> <li><a href="https://huggingface.co/datasets/bartman081523/stable-diffusion-discord-prompts" target="_blank">Stable Diffusion Discord Prompts</a></li> <li><a href="https://huggingface.co/datasets/succinctly/midjourney-prompts" target="_blank">Midjourney prompts</a></li> <li><a href="https://www.kaggle.com/datasets//900k-diffusion-prompts-dataset" target="_blank">tanreinama's 900k</a></li> <li><a href="https://ai.google.com/research/ConceptualCaptions/" target="_blank">Google Conceptual Captions</a></li> <li><a href="https://nocaps.org/download" target="_blank">nocaps</a></li> <li><a href="https://cocodataset.org/#download" target="_blank">coco</a></li> <li><a href="https://textvqa.org/textcaps/dataset/" target="_blank">textvqa textcaps</a></li> </ul> <p><strong><a href="https://github.com/google-research-datasets/wit" target="_blank">WIT (Text Retrieval + Image Generation)</a></strong><br> Total: ~2.5M, Generated Images: ~1M</p> <p><strong>Generated Prompts (Text Retrieval + Image Generation)</strong><br> Total: ~10M+, Generated Images: ~1M</p> <ul> <li><a href="https://www.kaggle.com/datasets/leonidkulyk/magic-prompt-1m" target="_blank">leonidkulyk's magic prompt 1M</a></li> <li>xiaozhouwang's <a href="https://www.kaggle.com/datasets/xiaozhouwang/sd2gpt2" target="_blank">sd2gpt</a> and <a href="https://www.kaggle.com/datasets/xiaozhouwang/sd2hardcode" target="_blank">sd2hardcode</a></li> </ul> <p>i also generated more using <a href="https://huggingface.co/succinctly/text2image-prompt-generator" target="_blank">text2image-prompt-generator</a> and <a href="https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion" target="_blank">MagicPrompt</a></p> <p><strong><a href="https://huggingface.co/datasets/dalle-mini/YFCC100M_OpenAI_subset" target="_blank">YFCC 100M open subset</a> (Text Retrieval only)</strong><br> Total: ~4M</p> <p><strong><a href="https://huggingface.co/datasets/laion/laion-coco" target="_blank">Laion CoCo</a> (Text Retrieval only)</strong><br> Total: 27M (random sampled)</p> <p><strong><a href="https://huggingface.co/datasets/mlfoundations/datacomp_pools/tree/main/small" target="_blank">Datacomp Small</a> (Text Retrieval only)</strong><br> Total: 5M</p> <p><strong>Prompt Components (Interrogator only)</strong><br> Total: 1.6M components</p> <ul> <li><a href="http://www.ashley-bovan.co.uk/words/partsofspeech.html" target="_blank">English PoS word list</a></li> <li><a href="https://storage.googleapis.com/openimages/web/download_v7.html" target="_blank">Open Image Class List</a></li> <li><a href="https://www.kaggle.com/datasets/gerwynng/img2text-prompt-components" target="_blank">CuPL Imagenet Prefixes</a></li> <li><a href="https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data" 
target="_blank">Original Clip Interrogator Components</a></li> <li>+ i took top frequent phrases (e.g. split prompts by <code>,</code>) from OpenPrompts dataset (~1.2M)</li> </ul> <h2>Text Preprocessing</h2> <p>Most stable-diffusion prompts are really messy with lots of unreadable/unnecessary words and punctuations. I manually looked at samples and tried to clean them. (<a href="https://www.kaggle.com/gerwynng/sdi2p-text-preprocessing" target="_blank">functions in this notebook</a>)</p> <ul> <li>removing near-duplicates (using sentence transformers' cosine similarity) works well for text-retrieval solutions; reduces 10-20% in dataset sizes while achieving similar scores. </li> </ul> <h2>CV</h2> <p>32k sampled prompts from Caption Set. Total is ~34k after allowing some prompts to repeat with different generated images.</p> <h1>Solutions and Training</h1> <h2>Text Retrieval</h2> <p>No training was performed in this step.</p> <p>I selected the top 60 candidates by calculating the cosine similarity between the image CLIP-bigG embeddings and my precomputed text embeddings. Then, for each of these top 60 candidates, I recalculated the cosine similarity using CLIP-H14 and filtered out those with a cosine similarity &lt;= 0.29.</p> <p>To perform exhaustive search on the 56M embeddings at inference time, I employed the following technique:</p> <ul> <li>I divided my dataset into different sections as listed above (dataset chunking).</li> <li>PCA with dimensions of 768 for all sections, except for Laion CoCo where I used 512 dimensions.</li> <li>mem-mapped numpy arrays in float16 format.</li> <li>a two-layer chunking approach. The outer loop involved splitting the 16k test image embeddings into 768 chunks, while the inner loop involved chunking the text image embeddings into 1024*1024 chunks.</li> <li><code>torch.mm</code> calculations on GPU to compute the cosine similarity.</li> </ul> <h2>CLIP Interrogator</h2> <p>I came out with 14 templates of how to create prompts from components. E.g. <code>['adjectives', 'nouns', 'verbs', 'openprompts_modifiers', 'adverbs', 'movements']</code>. For each template, I selected the top 8 candidates from each component, resulting in a total of 112 text candidates.</p> <p>Initially, I experimented with random search, using permutations of prompt components + the number of components. However, I observed that this approach led to overfitting on the CV set too quickly. Thus, I decided to use the 14 templates that I manually selected from EDA.</p> <h2>CoCa Model</h2> <p>This is low impact. I tried finetuning it, but it did not offer significant improvement over the pretrained version.</p> <h2>Vit-Encoders</h2> <p>I kept batch size ~160, and then un-freezed layers depending on model size</p> <ul> <li>L: un-freeze from 2 onwards</li> <li>H: un-freeze from 14 onwards </li> <li>G: un-freeze from 24 onwards </li> <li>bigG: un-freeze from 36 onwards</li> </ul> <p>Stage 1: DDB (~1.9M left after text-preprocessing, used their original images)<br> Stage 2: Self generated 7M<br> Both using LR: 4/5e-5 for 3/4 Epoch, using CosineEmbeddingLoss</p> <p>Did not work: adding dropout (/ spatial), increasing FC layers, epoch ensemble, changing patch size</p> <h3>TTA</h3> <p>In my final effort, I was able to achieve a 0.02X boost by utilizing different image mean and standard deviation normalization as Test-Time Augmentation (TTA). 
I used the pretrained statistics along with statistics computed from my 32K test set (<a href="https://www.kaggle.com/gerwynng/estimate-image-stats" target="_blank">notebook on how i computed here</a>). I performed two inferences and averaged the results, followed by a simple average across the four ViT encoders.</p> <h2>Weighted Average (Text Candidates + ViT Encoders)</h2> <ul> <li>Prior to taking a weighted average, I normalized the ST embeddings (from Text Candidates) and ViT embeddings.</li> <li>My CV only relied on one fold. As a risk mitigation strategy, I made one submission with weights based on the maximum CV (Text Candidates: 0.4, ViTs: 0.6), and another submission with weights based on the maximum LB score (Text Candidates: 0.25, ViTs: 0.75).</li> </ul>
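<p>To illustrate the two-layer chunked exhaustive search described in the Text Retrieval section above, here is a simplified PyTorch sketch. It assumes both embedding matrices are already L2-normalized tensors; the real pipeline additionally handles PCA projection, float16 memory-mapped arrays and per-dataset chunking:</p>
<pre><code>import torch

def topk_cosine(image_emb, text_emb, k=60, img_chunk=768, txt_chunk=1024 * 1024):
    # image_emb: (n_images, d), text_emb: (n_texts, d); both L2-normalized,
    # so a plain matrix product equals cosine similarity.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    results = []
    for i in range(0, image_emb.shape[0], img_chunk):
        img = image_emb[i:i + img_chunk].to(device)
        best_val = torch.full((img.shape[0], k), -2.0, device=device, dtype=img.dtype)
        best_idx = torch.zeros((img.shape[0], k), dtype=torch.long, device=device)
        for j in range(0, text_emb.shape[0], txt_chunk):
            txt = text_emb[j:j + txt_chunk].to(device)
            sim = torch.mm(img, txt.T)  # one chunk of the full similarity matrix
            val, idx = torch.topk(sim, min(k, sim.shape[1]), dim=1)
            # merge the running top-k with this chunk's top-k
            merged_val = torch.cat([best_val, val], dim=1)
            merged_idx = torch.cat([best_idx, idx + j], dim=1)
            best_val, order = torch.topk(merged_val, k, dim=1)
            best_idx = torch.gather(merged_idx, 1, order)
        results.append((best_val.cpu(), best_idx.cpu()))
    return results
</code></pre>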
Google - Isolated Sign Language Recognition
2nd Place Solution | Google - Isolated Sign Language Recognition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>TLDR</h2> <p>We used an approach similar to audio spectrogram classification using the EfficientNet-B0 model, with numerous augmentations and transformer models such as BERT and DeBERTa as helper models. The final solution consists of one EfficientNet-B0 with an input size of 160x80, trained on a single fold from 8 randomly split folds, as well as DeBERTa and BERT trained on the full dataset. A single fold model using EfficientNet has a CV score of 0.898 and a leaderboard score of ~0.8.</p> <p>We used only competition data.</p> <h2>1. Data Preprocessing</h2> <h3>1.1 CNN Preprocessing</h3> <ul> <li>We extracted 18 lip points, 20 pose points (including arms, shoulders, eyebrows, and nose), and all hand points, resulting in a total of 80 points.</li> <li>During training, we applied various augmentations.</li> <li>We implemented standard normalization.</li> <li>Instead of dropping NaN values, we filled them with zeros after normalization.</li> <li>We interpolated the time axis to a size of 160 using 'nearest' interpolation: <code>yy = F.interpolate(yy[None, None, :], size=self.new_size, mode='nearest')</code>.</li> <li>Finally, we obtained a tensor with dimensions 160x80x3, where 3 represents the <code>(X, Y, Z)</code> axes. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4212496%2Fc47290891a7ac6497a6a0c296973f071%2Fdata_prep.jpg?generation=1682986293067532&amp;alt=media" alt="Preprocessing"></li> </ul> <h3>1.2 Transformer Preprocessing</h3> <ul> <li><p>Only 61 points were kept, including 40 lip points and 21 hand points. For left and right hand, the one with less NaN was kept. If right hand was kept, mirror it to left hand.</p></li> <li><p>Augmentations, normalization and NaN-filling were applied sequentially.</p></li> <li><p>Sequences longer than 96 were interpolated to 96. Sequences shorter than 96 were unchanged.</p></li> <li><p>Apart from raw positions, hand-crafted features were also used, including motion, distances, and cosine of angles.</p></li> <li><p>Motion features consist of future motion and history motion, which can be denoted as:</p></li> </ul> <p>$$<br> Motion_{future} = position_{t+1} - position_{t}<br> $$<br> $$<br> Motion_{history} = position_{t} - position_{t-1}<br> $$</p> <ul> <li><p>Full 210 pairwise distances among 21 hand points were included. </p></li> <li><p>There are 5 vertices in a finger (e.g. thumb is <code>[0,1,2,3,4]</code>), and therefore, there are 3 angles: <code>&lt;0,1,2&gt;, &lt;1,2,3&gt;, &lt;2,3,4&gt;</code>. So 15 angles of 5 fingers were included.</p></li> <li><p>Randomly selected 190 pairwise distances and randomly selected 8 angles among 40 lip points were included.</p></li> </ul> <h2>2. Augmentation</h2> <h3>2.1 Common Augmentations</h3> <blockquote> <p>These augmentations are used in both CNN training and transformer training</p> </blockquote> <ol> <li><p><code>Random affine</code>: Same as <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> shared. In CNN, after global affine, shift-scale-rotate was also applied to each part separately (e.g. 
hand, lip, body-pose).</p></li> <li><p><code>Random interpolation</code>: Slightly scale and shift the time dimension.</p></li> <li><p><code>Flip pose</code>: Flip the x-coordinates of all points. In CNN, <code>x_new = x_max - x_old</code>. In transformer, <code>x_new = 2 * frame[:,0,0] - x_old</code>.</p></li> <li><p><code>Finger tree rotate</code>: There are 4 root-children pairs in a finger with 5-vertices. E.g. in thumb (<code>[0,1,2,3,4]</code>), these 4 root-children pairs are: <code>0-[1,2,3,4]</code>,<code>1-[2,3,4]</code>,<code>2-[3,4]</code>,<code>3-[4]</code>. We randomly choose some of these pairs, and rotate the children points around root point with a small random angle.</p></li> </ol> <h3>2.2 CNN Specific Augmentations</h3> <ul> <li><code>Mixup</code>: Implement basic mixup augmentation (only works with CNNs, not transformers).</li> <li><code>Replace augmentation</code>: Replace some random parts from other samples of the same class.</li> <li><code>Time and frequence masking</code>: This basic torchaudio augmentation works exceptionally well.</li> </ul> <pre><code>freq_m = torchaudio.transforms.FrequencyMasking() time_m = torchaudio.transforms.TimeMasking() </code></pre> <h3>2.3 Augmented Sample Example</h3> <p>Before augmentation:</p> <p><img alt="aug1" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4212496%2F5d2b97cf6754c5f4063724181bfe7172%2Fbefore_aug.png?generation=1682986332091937&amp;alt=media"> </p> <p>After augmentation:</p> <p> <img alt="aug2" src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4212496%2F516f0790f6f9903ad8943977bc392965%2Fafter_aug.png?generation=1682986413818912&amp;alt=media"> </p> <h2>3. Training</h2> <h3>3.1 CNN Training</h3> <ul> <li>Train on one fold with a random split (8 folds in total) or the full dataset using the best parameters</li> <li>Onecycle scheduler with 0.1 warmup.</li> <li>Use weighted <code>CrossEntropyLoss</code>. Increase the weights for poorly predicted classes and classes with semantically similar pairs (such as kitty and cat)</li> <li>Implement a hypercolumn for EfficientNet with 5 blocks</li> </ul> <h3>3.2 Transformer Training</h3> <ul> <li>Train on one fold with a random split (8 folds in total) or the full dataset using the best parameters</li> <li>Ranger optimizer with 60% flat and 40% cosine annealing learning rate schedule.</li> <li>A 4-layer, 256 hidden-size, 512 intermediate-size transformer were trained.</li> <li>A 3-layer model was initialized with 4-layer model's first 3 layers. Knowledge distillation were used in 3-layer model training, in which the 4-layer model is the teacher.</li> </ul> <h3>3.3 Hyperparameter Tuning</h3> <p>Since we trained only one fold and used smaller models, we decided to tune most parameters with Optuna. </p> <p>Here is the parameters list of CNN training (transformer training has a similar param-list):</p> <ul> <li><p>All augmentations probabilities (0.1 - 0.5+)</p></li> <li><p>Learning rate (2e-3 - 3e-3)</p></li> <li><p>Drop out (0.1 - 0.25)</p></li> <li><p>Num of epochs (170-185)</p></li> <li><p>Loss weights powers (0.75 - 2)</p></li> <li><p>Optimizer (<code>Lookahead_RAdam</code>, <code>RAdam</code>)</p></li> <li><p>Label smoothing (0.5 - 0.7)</p></li> </ul> <h2>4. Submissions, Conversion and Ensemble</h2> <ol> <li><p>We rewrote all our models in Keras and transferred PyTorch weights to them, resulting in a speed boost of around 30%. 
For the transformer model, the pytorch-onnx-tf-tflite path generates too many useless tensor shape operations; a full rewrite can remove these manually. For the CNN model, we rewrote DepthwiseConv2D in a hard-coded way, whose speed is 200%~300% of the original tflite DepthwiseConv2D.</p></li> <li><p>After that, we aggregated all these models in the <code>tf.Module</code> class. Converting directly from Keras resulted in lower speed (we don't know why).</p></li> <li><p>We calculated ensemble weights for models trained on fold 0 using the local fold 0 score and applied these weights to the full dataset models.</p></li> </ol> <p>EfficientNet-B0 achieved a leaderboard score of approximately 0.8, and transformers improved the score to 0.81. The final ensemble included:</p> <ol> <li>EfficientNet-B0, fold 0</li> <li>BERT, full data train</li> <li>DeBERTa, full data train</li> </ol> <p>Interestingly, a key feature was using the ensemble without softmax, which consistently provided a boost of around 0.01.</p> <h2>5. PS. Need <strong>BETTER</strong> TFLite DepthwiseConv2D</h2> <p>Depthwise convolution models performed very well for these tasks, outperforming other CNN and ViT models (rexnet_100 was also good).<br> We spent a lot of time dealing with the conversion of the DepthwiseConv2D operation. Here are some strange results:</p> <p>Given an input image of 82x42x32 (HWC), there are two ways to do a 3x3 depthwise convolution in Keras. One is <code>Conv2D(32, 3, groups = 32)</code>, the other is <code>DepthwiseConv2D(3)</code>. However, after converting these two to tflite, the running time of the <code>Conv2D</code> is 5.05ms, and the running time of <code>DepthwiseConv2D</code> is 3.70ms. More strangely, a full convolution <code>Conv2D(32, 3, groups = 1)</code> with FLOPs = HWC^2 only takes 2.09ms, even faster than the previous two with FLOPs = HWC.</p> <p>Then we rewrote the depthwise-conv like this:</p> <pre><code>def forward(self, x):
    # start from the (0, 0) kernel tap, then accumulate the remaining taps
    out = x[:, :self.H_out:self.strides, :self.W_out:self.strides] * self.weight[0, 0]
    for i in range(self.kernel_size):
        for j in range(self.kernel_size):
            if i == j == 0:
                continue
            out += x[:, i:self.H_out + i:self.strides, j:self.W_out + j:self.strides] * self.weight[i, j]
    if self.bias is not None:
        out = out + self.bias
    return out
</code></pre> <p>The running time of this is 1.24 ms.</p> <p>In summary, our version (1.24ms) &gt; full <code>Conv2D</code> with larger FLOPs (2.09ms) &gt; <code>DepthwiseConv2D</code> (3.70ms) &gt; <code>Conv2D(C, groups = C)</code> (5.05ms).</p> <p>However, our version introduced too many nodes into the tflite graph, so its running time is not stable. If the tensorflow team has a better implementation of DepthwiseConv2D, we can even ensemble two CNN models, which is expected to reach 0.82 LB.</p> <p>By the way, EfficientNet with ONNX was ~5 times faster than TFLite.</p> <h3>Big thanks to my teammates <a href="https://www.kaggle.com/artemtprv" target="_blank">@artemtprv</a> and <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> and congrats with new tiers, Master and GrandMaster!</h3> <p><a href="https://github.com/ffs333/2nd_place_GISLR" target="_blank">github code</a></p>
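<p>As an illustration of the hyperparameter tuning described in section 3.3, a minimal Optuna sketch over the listed search space might look like the following; <code>train_one_fold</code> is a hypothetical function that trains on fold 0 and returns the validation accuracy:</p>
<pre><code>import optuna

def objective(trial):
    params = {
        # Search ranges follow the list in section 3.3
        "aug_prob": trial.suggest_float("aug_prob", 0.1, 0.5),
        "lr": trial.suggest_float("lr", 2e-3, 3e-3, log=True),
        "dropout": trial.suggest_float("dropout", 0.1, 0.25),
        "epochs": trial.suggest_int("epochs", 170, 185),
        "loss_weight_power": trial.suggest_float("loss_weight_power", 0.75, 2.0),
        "optimizer": trial.suggest_categorical("optimizer", ["Lookahead_RAdam", "RAdam"]),
        "label_smoothing": trial.suggest_float("label_smoothing", 0.5, 0.7),
    }
    return train_one_fold(params)  # hypothetical: trains fold 0, returns validation accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
</code></pre>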
Google - Isolated Sign Language Recognition
14th place solution - publicly shared Transformer architecture was so strong!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Summary</h1> <p>I thank Kaggle administraror &amp; host for holding this competition. Although struggling with Tensorflow's unfriendly errors and lots of try-and-errors for failing TFLite conversion was really, really tough, these low-layer experience was valuable for me.<br> Below is my solution writeup of this competition.</p> <h2>Pipeline</h2> <p>I tried over 377 different patterns of training models for this competition, however, the best architecture is only minor-changed one from <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">Mark Wijkhuizen's great public notebook</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2F980bafe93f77c3d2418f5512ebfb2aa3%2Fpipeline.png?generation=1682985827786639&amp;alt=media" alt=""></p> <h3>Mixed PostLN &amp; PreLN Architecture</h3> <p>I tested a) PostLN, b) PreLN, c) Mixed architectures, and found Mixed architecture provides the best result. This architecture was originally (perhaps unintentionally) implemented in the Mark Wijkhuizen's public notebook (in earlier version).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4910466%2Fe045ca21bb34a220006c06d9306446aa%2Ftransformer_block.png?generation=1682985844553627&amp;alt=media" alt=""></p> <h3>Other modifications</h3> <ul> <li>keep frames with no hands (instead of dropping) in pre-processing.</li> <li>increase number of layers in the keypoint encoder. With more layers, the accuracy gets better. The 4x layer seetting is the best tradeoff for accuracy and inference time.</li> <li>set number of hidden units in keypoint encoder independently for each parts (lips=192, left_hand=256, right_hand=256, pose=128). This reduces inference time without losing accuracy.</li> <li>attach ArcFace layer on training.</li> </ul> <h2>Training Setting</h2> <ul> <li>loss function: <code>0.5 * ArcFace + 0.5 * CrossEntropy</code> (This setting was shared by <a href="https://www.kaggle.com/code/medali1992/gislr-nn-arcface-baseline" target="_blank">Med Ali Bouchhioua</a>). Using ArcFace loss together with cross entropy loss converges faster, as well as the accuracy gets better.</li> <li>I tested 50, 80, 100, 120 epochs, but 100 epoch is the best on LB.</li> <li>Label smoothing (0.20-0.25) can avoid overfitting, but ArcFace is better. Using both label smoothing and ArcFace didn't increase CV/LB.</li> </ul> <h3>Data Augmentation</h3> <p>The augmentation strategy is almost the same as <a href="https://www.kaggle.com/competitions/asl-signs/discussion/391265" target="_blank">that of </a><a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> was shared.</p> <ul> <li>handedness swapping (p=0.5)</li> <li>global 2D Affine transformation (p=0.5, shift=(-0.1, 0.1), rotation=(-30, 30), scale=(0.9, 1.1), shear=(-1.5, 1.5))</li> <li>random frame masking (p=0.5, mask_ratio=0.75).</li> </ul> <p>Frame masking simulates the mis-detection of keypoins. 
It also acts like cutout augmentation in image tasks.</p> <h2>TFLite Conversion</h2> <p>When FP16 quantization is applied to Mark's original implementation, the Transformer block outputs NaN values, so I rewrote the transformer block based on <a href="https://www.kaggle.com/code/hengck23/lb-0-73-single-fold-transformer-architecture/notebook" target="_blank">@hengck23's implementation</a>.<br> FP16 quantization made the model size about 1/2, without losing model accuracy.</p> <h3>TTA</h3> <ul> <li>apply handedness swapping augmentation to 2/4 of the ensemble-seed models</li> </ul> <h3>The output TFLite model</h3> <ul> <li>model size=26MB</li> <li>scoring time=56-59 min</li> </ul> <h2>Experiments That Didn't Work</h2> <ul> <li>distance/velocity features didn't contribute to increasing accuracy.</li> <li>using a synthesized hand pose dataset, I trained a z-axis prediction model. I used this model's prediction result for a) 3D-affine transform augmentation and b) a sub-task to predict the z-axis from the keypoint encoder's output, but neither trial contributed to increasing accuracy.</li> </ul>
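<p>A minimal NumPy sketch of the random frame masking augmentation listed in the Data Augmentation section above. Masked frames are set to NaN to mimic keypoint mis-detection; whether <code>mask_ratio</code> is a fixed fraction or an upper bound is not stated, so here it is treated as a fixed fraction:</p>
<pre><code>import numpy as np

def random_frame_mask(frames, p=0.5, mask_ratio=0.75, rng=None):
    # frames: (n_frames, n_landmarks, 3) array of x/y/z keypoints for one sample
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() >= p:
        return frames
    n_frames = frames.shape[0]
    n_mask = int(n_frames * mask_ratio)
    idx = rng.choice(n_frames, size=n_mask, replace=False)
    out = frames.copy()
    out[idx] = np.nan  # masked frames look like frames where the landmark detector failed
    return out
</code></pre>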
Google - Isolated Sign Language Recognition
8th place solution ... close but no cigar
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is a quick overview of the 8th place solution. </p> <p>3 transformers models, 2 layers each (384 hidden, 512 hidden ffn), with an ffn encoder (512-&gt;384), trained from scratch. LR 8e-4 with cosine schedule trained for ~300 epochs, dropout 0.1, batch size 1024, label smoothing 0.1. Using hands, lips and pose (above waist only). On one transformer all pose and a subset of lips were used for diversity. </p> <p><strong>Augmentations</strong></p> <ul> <li>most important was sequence cutout. On each sample, and each body part (left hand, right hand, lips, pose) with a 0.4 proba convert to nan 5 random slices of 0.15 x SequenceLength. It was hard to overfit with this in.</li> <li>mirror left</li> <li>random rotate. </li> </ul> <p><strong>Preprocessing</strong></p> <ul> <li>Linear interpolation of longer sequences to max length of 96. </li> <li>Normalise each body part, using min max - I found this better than mean/std. In one model I used mean/std for diversity. </li> <li>Create time shift delta features on a subset of points, using time shifts of <code>[1, 2, 3, 4, 6, 8, 12, 16]</code>. It was important to forward fill NANs as opposed to 0-fill. </li> <li>Angle features between xyz position, and xy positions, on a few different joints - corner of mouth, hand, arms. </li> <li>Some point to point distances, interestingly this did not help much. </li> </ul> <p><strong>Tflite</strong><br> Training was all in pytorch. Converting to tflite, the feature preprocessing was rewritten in tensorflow, and the base models were converted via onnx to tf (this turned out to be a mistake from looking at #2 solution, I should have rewritten the transformer encoder). It was a great opportunity to learn tensorflow. I like it. <br> A funny thing… doing the following on the pytorch model <code>model = model.half().float()</code> before converting to onnx, gave a good speed up in the final tflite inference. I tried quantizing the pytorch model and it did not further increase the speed. </p> <p>In the last week I found in preprocessing, normalising points across the whole sequence channel wise gave a good lift. Particularly what I tried to do was for each body part get the channel wise range (max-min), then get the average range over all frames, and use this to normalise the time deltas and the raw coordinates - as opposed to normalising each frame independently. Every time I submitted this it gave a very low score after ~25 mins. I rewrote it in different ways a number of times and it worked with tfilte in kaggle kernels but not when submitted. It was very frustrating at the end. However, I do not think the boost would have brought me into the prize range which was the goal 🤑</p> <p><strong>Failed attempts</strong></p> <ul> <li>Mixup worked ( worked for #9 team), I tried it on the embeddings and it worked ok, . See comments below on mixup.</li> <li>CNNs with mixup - tried efficientnet and edgenext. I tried on raw points and it did not work, looked like I had an incorrect normalisation from #2 solution. It worked well with an FFN encoder, but was too slow in inference. 
Looks like I missed to rewrite it into tf, or train in tf. </li> </ul>
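<p>A rough NumPy sketch of the sequence cutout augmentation described in the Augmentations section above (per body part, with probability 0.4, five random slices of 15% of the sequence length are set to NaN). The landmark index ranges are placeholders:</p>
<pre><code>import numpy as np

# Placeholder landmark index ranges for each body part (the real split depends
# on which landmarks the preprocessing keeps).
BODY_PARTS = {
    "left_hand": slice(0, 21),
    "right_hand": slice(21, 42),
    "lips": slice(42, 82),
    "pose": slice(82, 100),
}

def sequence_cutout(x, p=0.4, n_slices=5, frac=0.15, rng=None):
    # x: (seq_len, n_landmarks, 3) keypoint array for one sample
    if rng is None:
        rng = np.random.default_rng()
    seq_len = x.shape[0]
    cut = max(1, int(frac * seq_len))
    out = x.copy()
    for part in BODY_PARTS.values():
        if rng.random() >= p:
            continue
        for _ in range(n_slices):
            start = int(rng.integers(0, max(1, seq_len - cut + 1)))
            out[start:start + cut, part] = np.nan  # NaNs behave like missing detections
    return out
</code></pre>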
Stable Diffusion - Image to Prompts
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners, and thank you to Kaggle for organizing such an interesting competition. Big thanks to all the other Kagglers out there who shared their ideas and codes throughout the competition. Apologize for the slight delay as I spent some time conducting ablation experiments and late submissions to showcase the performance improvements from various mechanisms.</p> <h1>Solution Summary</h1> <p>I generated approximately 10.6 million images from around 8.6 million prompts. Various methods were utilized to accelerate the image generation process. These images and prompts were used to train ViT-Large, ConvNeXt-XXLarge, and BLIP-2 models. Modifications were made to these models, such as removing the Large Language Model (LLM) component of BLIP-2 and employing LORA. My final submission involved a weighted ensemble of embeddings from the three aforementioned model types.</p> <h1>Datasets</h1> <p>In this competition, having diverse and high-quality data is important. To achieve this, I first utilized DiffusionDB_2M and prioritized datasets relevant to image captioning tasks, including COCO Captions, VizWiz, Open Image, ADE20K, Flickr30K, and TextCaps. Duplicates with a similarity score &gt;0.8 were removed. Finally, prompts generated by ChatGPT were also added into the dataset with stricter rules, resulting in a dataset of around 2 million high-quality prompts that we refer to as "PROMPTS_HQ" In the following text. For every prompt, I generated 2 images using different random seeds. </p> <p>COYO-700M is a large-scale dataset that contains 747M image-text pairs, I attempted to select and add some of them to the PROMPT_HQ dataset, it did not improve the score. Therefore, I utilized this dataset for pretraining and found it to be highly effective(+0.008~0.01). From this big dataset, I selected approximately 6.6 million samples that met specific criteria, such as ensuring the similarity between the image and text was greater than 0.3 and removing duplicates. I then utilized the text as prompts, feeding them into Stable Diffusion.This dataset is referred to as "PROMPTS_LQ". </p> <h1>Image Generation</h1> <p>Generating 10.6M images using Stable Diffusion is not an easy task for me, generating an image using scripts provided is about 15s, to speed up the generation, many measures were taken:<br> 1.XFormer, 10s/img<br> 2.FP16, 5s/img <br> 3.The output image size from 768x768 to 512x512 and the steps from 50 to 25, 2s/img</p> <h1>Models</h1> <p>In the early stages, I spent several days designing models for this task by understanding and applying the mathematical principles of stable diffusion, which motivated me to enter this competition. Unfortunately, I did not found efficient and effective way. Therefore, I used vision models which were supervised using target vectors produced by the all-MiniLM-L6-v2 model.</p> <p>My final submission included the ViT-Large, ConvNeXt-XXLarge, and BLIP-2 models. 
Although I also tried SWIN, DinoV2, and other models, they did not demonstrate an advantage within the limited time and resources available.</p> <p>The Large Language Model(LLM) component in BLIP-2 was removed, not solely due to efficiency concerns, but primarily because both grammar and sequence of prompts in this task were unimportant or even a little harmful if supervised by the them. However, if time permits, the characteristics of the prompts and dedicated losses designed for them may bring improvements.</p> <p>LORA was adopted to finetune the models, I used LORA for the bottom layers and it bring some improvement.</p> <p>To my surprise, adding another large fully connected layer before the output layer brought significant improvement to the CLIP model.</p> <pre><code>ebd_dim = fc_dim = * self.head = nn.Sequential( nn.Linear(ebd_dim, fc_dim), nn.BatchNorm1d(fc_dim), nn.ReLU(), nn.Linear(fc_dim, ), ) </code></pre> <p>I trained my models in two stages: first, pretraining on PROMPTS_LQ, and then finetuning them on PROMPTS_HQ.</p> <p>When training, images from the same prompt were randomly selected once per epoch. Other options were also tested, such as training different models and ensembling, but they performed nearly the same.</p> <p>I found that using larger image sizes led to higher scores, but I did not attempt to use sizes beyond 336x336 as I believed computation resources should be invested in other more important options.</p> <p>No augmentation was used when training CLIP models and default augmentation was used when training BLIP-2 model.</p> <p>Other training details and their respective contributions are listed in the table 1.</p> <table> <thead> <tr> <th><strong>Model</strong></th> <th><strong>Stage</strong></th> <th><strong>Epochs</strong></th> <th><strong>Warmup</strong></th> <th><strong>Initial lr</strong></th> <th><strong>Final lr</strong></th> <th><strong>EMA</strong></th> <th><strong>Aug</strong></th> </tr> </thead> <tbody> <tr> <td>ViT-Large</td> <td>pretraining</td> <td>2</td> <td>1</td> <td>1e-4</td> <td>8e-5</td> <td>✓</td> <td></td> </tr> <tr> <td>ViT-Large</td> <td>finetuning</td> <td>3</td> <td>1</td> <td>1e-4</td> <td>5e-5</td> <td>✓</td> <td></td> </tr> <tr> <td>ConvNeXt-XXLarge</td> <td>pretraining</td> <td>2</td> <td>1</td> <td>1e-4</td> <td>8e-5</td> <td>✓</td> <td></td> </tr> <tr> <td>ConvNeXt-XXLarge</td> <td>finetuning</td> <td>3</td> <td>1</td> <td>1e-4</td> <td>5e-5</td> <td>✓</td> <td></td> </tr> <tr> <td>BLIP-2</td> <td>pretraining</td> <td>3</td> <td>1</td> <td>1e-4</td> <td>4e-5</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>BLIP-2</td> <td>finetuning</td> <td>4</td> <td>1</td> <td>1e-4</td> <td>2e-5</td> <td>✓</td> <td>✓</td> </tr> </tbody> </table> <p>Table1. 
Training details of models, all models were trained in two stages</p> <h1>Results</h1> <p>The results are shown in table 2.</p> <table> <thead> <tr> <th><strong>Index</strong></th> <th><strong>Model</strong></th> <th><strong>Samples</strong></th> <th><strong>Image size</strong></th> <th><strong>Trained params</strong></th> <th><strong>LORA</strong></th> <th><strong>+FC</strong></th> <th><strong>Private</strong></th> <th><strong>Public</strong></th> </tr> </thead> <tbody> <tr> <td>1</td> <td>ViT-Large</td> <td>0.2M</td> <td>224</td> <td>25%</td> <td></td> <td></td> <td>0.5579</td> <td>0.5573</td> </tr> <tr> <td>2</td> <td>ViT-Large</td> <td>0.2M</td> <td>336</td> <td>25%</td> <td></td> <td></td> <td>0.5612</td> <td>0.5617</td> </tr> <tr> <td>3</td> <td>ViT-Large</td> <td>0.2M</td> <td>336</td> <td>30%</td> <td></td> <td>✓</td> <td>0.5674</td> <td>0.5687</td> </tr> <tr> <td>4</td> <td>ViT-Large</td> <td>0.2M</td> <td>336</td> <td>47%</td> <td>✓</td> <td>✓</td> <td>0.5725</td> <td>0.5737</td> </tr> <tr> <td>5</td> <td>ViT-Large</td> <td>2M</td> <td>336</td> <td>47%</td> <td>✓</td> <td>✓</td> <td>0.6368</td> <td>0.6386</td> </tr> <tr> <td>6</td> <td>ViT-Large</td> <td>6.6M+2M</td> <td>336</td> <td>47%</td> <td>✓</td> <td>✓</td> <td>0.6459</td> <td>0.6461</td> </tr> <tr> <td>7</td> <td>ConvNeXt-XXLarge</td> <td>6.6M+2M</td> <td>256</td> <td>47%</td> <td>✓</td> <td>✓</td> <td>0.6505</td> <td>0.6518</td> </tr> <tr> <td>8</td> <td>BLIP-2</td> <td>6.6M+2M</td> <td>224</td> <td>20%</td> <td>✓</td> <td></td> <td>0.6458</td> <td>0.6489</td> </tr> <tr> <td>9</td> <td>Ensemble(6+7+8)</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>0.6620</td> <td>0.6634</td> </tr> <tr> <td>10</td> <td>Ensemble(11 diverse predictions)</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>0.6632</td> <td>0.6645</td> </tr> </tbody> </table> <p>Table2. Models, strategies and their scores </p> <p><a href="https://www.kaggle.com/code/bestfitting/en-3models-vit-blip-convnext-notebook" target="_blank">This notebook contains an ensemble of three models that achieved scores of 0.6620/0.6634.</a><br> <strong><em>During the writing process of this solution, ChatGPT was utilized for proofreading purposes, including this sentence. :)</em></strong></p>
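<p>To make the ensembling step in rows 9 and 10 of Table 2 concrete, here is a minimal sketch of a weighted embedding ensemble scored with mean cosine similarity. The weights, array shapes, and names are illustrative assumptions, not the values used in the actual submission.</p> <pre><code>import numpy as np

def ensemble_embeddings(preds, weights):
    """Weighted ensemble of per-model embedding predictions.
    preds   : list of arrays, each (n_images, 384), one per model
    weights : list of floats, same length as preds (illustrative values)
    """
    out = np.zeros_like(preds[0], dtype=np.float64)
    for p, w in zip(preds, weights):
        p = p / np.linalg.norm(p, axis=1, keepdims=True)  # L2-normalize each model first
        out += w * p
    return out

def mean_cosine_similarity(pred, target):
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    return float(np.mean(np.sum(pred * target, axis=1)))

# toy usage with random data; real inputs would be model outputs vs. all-MiniLM-L6-v2 targets
rng = np.random.default_rng(0)
preds = [rng.normal(size=(8, 384)) for _ in range(3)]            # ViT-L, ConvNeXt-XXL, BLIP-2 (hypothetical)
target = rng.normal(size=(8, 384))
blend = ensemble_embeddings(preds, weights=[0.4, 0.35, 0.25])    # illustrative weights
print(mean_cosine_similarity(blend, target))
</code></pre> <p>In practice the per-model weights would be tuned on a validation split; the key point is that each model's 384-dimensional prediction is L2-normalized before the weighted sum, so no single model dominates simply because its embeddings have a larger norm.</p>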
Google - Isolated Sign Language Recognition
12th place solution - MLP Based (Structured Keypoint Pooling network)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the hosts and the Kaggle team for hosting such an amazing competition.<br> Below is my brief solution.</p> <h2>Model</h2> <ul> <li>MLP-based. I used a modified version of <a href="https://arxiv.org/abs/2303.15270" target="_blank">the Structured Keypoint Pooling network</a>.</li> <li>Input joint and bone features separately. Bidirectional lateral connection. Concatenate after 2 MLP blocks. <ul> <li>Referred to <a href="https://arxiv.org/abs/2211.01367" target="_blank">Two-Stream Network for Sign Language Recognition and Translation</a></li></ul></li> </ul> <h2>Data</h2> <ul> <li>Only original data</li> <li>Lip + Pose (upper body) + Left hand + Right hand</li> <li>Resize to 64 frames, if longer.</li> <li>Features: xy (joint or bone), motion over 1 &amp; 2 frames.</li> <li>So, the input shape is [64 (frames), (107 (joints) + 142 (bones)) x 2 (xy) x 3 (+1 &amp; 2 frame motion)]</li> <li>Exclude data with a low estimation probability on the training data (1%). [CV: -0.006, public LB +0.01]</li> </ul> <h2>Augmentation</h2> <ul> <li>Flip [CV: +0.02]</li> <li>Scale [CV: +0.008]</li> <li>Rotate3d all data [CV: +0.003]<ul> <li>Better than Rotate2d by CV 0.002</li></ul></li> <li>Rotate3d hands individually [CV: +0.002]</li> <li>Resize frames (only shortening) [CV: +0.005]</li> <li>Cut the first or last frame [CV: +0.001]</li> <li>Swap hand or lip with the same sign. [Hand CV: +0.006, Lip CV: +0.002]</li> </ul> <h2>Loss</h2> <ul> <li>CrossEntropy (label smoothing 0.4)</li> </ul> <h2>What Didn't Work</h2> <ul> <li>Word-based label smoothing</li> <li>Manifold mixup</li> <li>Shift DA</li> <li>Adding eye landmarks</li> <li>Pseudo labels</li> </ul> <h2>Especially thanks to these kernels</h2> <ul> <li><a href="https://www.kaggle.com/code/hengck23/lb-0-67-one-pytorch-transformer-solution" target="_blank">[LB 0.67] one pytorch transformer solution</a> <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a></li> <li><a href="https://www.kaggle.com/code/roberthatch/gislr-lb-0-63-on-the-shoulders" target="_blank">GISLR [LB 0.63]: On the Shoulders</a> <a href="https://www.kaggle.com/roberthatch" target="_blank">@roberthatch</a></li> <li><a href="https://www.kaggle.com/code/danielpeshkov/animated-data-visualization" target="_blank">Animated Data Visualization</a> <a href="https://www.kaggle.com/danielpeshkov" target="_blank">@danielpeshkov</a></li> </ul>
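<p>To make the input construction above concrete, here is a minimal sketch (not the author's code) of turning one variable-length landmark sequence into the [64, (107 + 142) x 2 x 3] tensor described: the sequence is shortened to 64 frames if long, bone features are differences between connected joints (the exact 142 connections are not given, so a hypothetical pair list is used), and motion is the frame difference at lags 1 and 2.</p> <pre><code>import numpy as np

def resize_frames(x, target_len=64):
    """Shorten a (T, J, 2) sequence to target_len frames by index sampling (only if it is longer)."""
    t = x.shape[0]
    if t &lt;= target_len:
        return x
    idx = np.linspace(0, t - 1, target_len).round().astype(int)
    return x[idx]

def build_features(joints_xy, bone_pairs):
    """joints_xy: (T, 107, 2) selected lip/pose/hand landmarks; bone_pairs: 142 (i, j) index pairs."""
    joints_xy = resize_frames(joints_xy)
    bones = np.stack([joints_xy[:, i] - joints_xy[:, j] for i, j in bone_pairs], axis=1)  # (T, 142, 2)
    feats = []
    for part in (joints_xy, bones):
        m1 = np.diff(part, n=1, axis=0, prepend=part[:1])        # 1-frame motion
        m2 = part - np.roll(part, 2, axis=0)                     # 2-frame motion
        m2[:2] = 0
        feats.append(np.concatenate([part, m1, m2], axis=-1))    # xy + both motions
    x = np.concatenate(feats, axis=1)                            # (T, 107 + 142, 6)
    return np.nan_to_num(x).reshape(x.shape[0], -1)              # (T, 1494)

# toy usage with a hypothetical skeleton; the real bone connections come from the pose topology
rng = np.random.default_rng(0)
seq = rng.normal(size=(90, 107, 2))
pairs = [(i % 107, (i + 1) % 107) for i in range(142)]
print(build_features(seq, pairs).shape)   # (64, 1494)
</code></pre>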
GoDaddy - Microbusiness Density Forecasting
Public 531st -> Private 10th: The Complete Solution Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: GoDaddy - Microbusiness Density Forecasting <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>GoDaddy - Microbusiness Density Forecasting</strong></h1> <p>The solution is 3 months ago, so if there is anything missing, please let me know.</p> <p>The objective of the competition is to predict monthly <strong>microbusiness density</strong> in <strong>3315 cfips</strong> (County FIPS) in the United States. The code implementation incorporates several special features and techniques to improve the accuracy of the predictions.</p> <h2><strong>Approach</strong></h2> <h3><strong>1. Predicting 3315 Time Series</strong></h3> <p>This approach predicts the microbusiness density for each of the <strong>3315 cfips</strong> individually. This allows for more granular and accurate predictions tailored to specific regions.</p> <h3><strong>2. Technical Indicators</strong></h3> <p>The code utilizes popular technical indicators such as <strong>Exponential Moving Average (EMA)</strong>, <strong>Momentum (MOM)</strong>, and <strong>Relative Strength Index (RSI)</strong>. These indicators capture underlying trends, momentum, and market conditions, enhancing the predictive power of the model.</p> <h3><strong>3. External Datasets</strong></h3> <p>In addition to the microbusiness density data, the model incorporates several external datasets. These datasets include information such as <strong>unemployment data</strong>, <strong>earnings</strong>, <strong>rent</strong>, <strong>DSG10</strong>, <strong>tax rate</strong>, <strong>housing price</strong>, and <strong>population estimates</strong>. By incorporating these relevant external factors, the model can capture the influence of broader economic and demographic factors on microbusiness density.</p> <h3><strong>4. Optuna for Hyperparameter Optimization</strong></h3> <p><strong>Optuna</strong> is used to minimize the <strong>SMAPE</strong> in the <strong>Catboost</strong> model.</p> <h3><strong>5. Catboost Model with Cross Validation</strong></h3> <p>The code employs the <strong>Catboost model</strong>. Also used <strong>Cross Validation (CV)</strong> to generalize well to unseen data.</p> <h3><strong>6. Multiple Model Training</strong></h3> <p>To predict multiple future time periods, five separate models are trained. Each model is designed to predict a specific time horizon, such as <strong>t+1 month</strong>, <strong>t+2 months</strong>, and so on.</p> <h3><strong>7. External Dataset Addition</strong></h3> <p>The code "<strong>6-external-datasets.ipynb</strong>" is found on Kaggle and is provided to incorporate additional external datasets into the prediction model.</p> <h2><strong>Limitations</strong></h2> <ul> <li><strong>Limited Historical Data</strong>: The current approach utilizes only the past three months of data to predict future microbusiness density. Exploring longer intervals of historical data could provide additional context and potentially improve the forecasting accuracy.</li> </ul> <h2><strong>Code Files</strong></h2> <ul> <li><p><strong>6-external-datasets.ipynb</strong>: add additional external datasets to the prediction model. 
It can be found on Kaggle and is intended to enhance the feature set.</p></li> <li><p><strong>kaggle_competition_microbusiness.ipynb</strong>: contains the full model training code for microbusiness density prediction. It encompasses data preprocessing, model training using <strong>Catboost</strong> and <strong>Optuna</strong>, and evaluation using <strong>SMAPE</strong>.</p></li> </ul> <p><strong>Github link</strong>: <a href="https://github.com/ttterence927/kaggle_competition_microbusiness/" target="_blank">https://github.com/ttterence927/kaggle_competition_microbusiness/</a></p> <h2><strong>Show Your Support</strong></h2> <p>If you find this code implementation valuable or interesting, please consider giving it a star on <strong>GitHub</strong>.</p>
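<p>To illustrate the technical-indicator features mentioned in the approach (EMA, MOM, RSI) and the SMAPE objective, here is a minimal pandas sketch. The window lengths, spans, and column names are illustrative assumptions, not the exact settings used in the solution.</p> <pre><code>import numpy as np
import pandas as pd

def add_indicators(df, col="microbusiness_density", span=3, window=3):
    """Append EMA / MOM / RSI columns per cfips. df needs cfips, first_day_of_month and `col`."""
    df = df.sort_values(["cfips", "first_day_of_month"]).copy()
    g = df.groupby("cfips")[col]
    df["ema"] = g.transform(lambda s: s.ewm(span=span, adjust=False).mean())
    df["mom"] = g.diff(window)
    delta = g.diff()
    gain = delta.clip(lower=0).groupby(df["cfips"]).transform(lambda s: s.rolling(window).mean())
    loss = (-delta.clip(upper=0)).groupby(df["cfips"]).transform(lambda s: s.rolling(window).mean())
    df["rsi"] = 100 - 100 / (1 + gain / (loss + 1e-9))
    return df

def smape(y_true, y_pred):
    """Symmetric MAPE, the competition metric, usable as an Optuna objective."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    return float(np.mean(np.where(denom == 0, 0, np.abs(y_true - y_pred) / denom)) * 100)
</code></pre>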
March Machine Learning Mania 2023
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>2023 March Madness Mania Competition</h1> <p><a href="https://github.com/JackLich10/kaggle_2023" target="_blank">GITHUB Link</a></p> <h2><a href="https://www.kaggle.com/competitions/march-machine-learning-mania-2023/leaderboard" target="_blank">7th Place Submission</a></h2> <pre><code>Private Leaderboard Score: 0.17704 Private Leaderboard Place: 7th </code></pre> <h3>Background</h3> <p>I am a 2022 graduate of Duke University with a degree in Data Science. I currently work for a sports analytics company in Washington, D.C.. This is my second year entering this type of competition. I entered because I think it's super fun to forecast the results of such a crazy basketball tournament. Last year, I finished 100th in the men's only competition. I did not participate in the women's only competition last year. My men's model was written 2 years ago for fun while my women's model I created this year in ~4 days to enter this competition.</p> <h3>Methodology</h3> <p>All coding was done in R. The basic methodology behind both of my models is to:</p> <p>1) generate team ratings via ridge <a href="https://glmnet.stanford.edu/index.html" target="_blank">(<code>glmnet</code>)</a>/mixed effects <a href="https://github.com/lme4/lme4" target="_blank">(<code>lme4</code>)</a> modeling predicting point difference, offense/defense efficiency, offense/defense pace, etc. (i.e. what is team A's impact on offense efficiency, score difference, etc. after controlling for opponent, home court, etc.?)</p> <p>2) use team ratings from 1) to predict game level score difference (via <a href="https://xgboost.readthedocs.io/en/stable/" target="_blank">XGBoost</a>)</p> <p>The most important features to predict game level score difference are the various team ratings from 1). Each of these models are trained after each day of each season. As such, it takes multiple days for the entirety of this modeling framework to run for the first time. A simple GLM converts predicted score difference to predicted win probability.</p> <h3>Code</h3> <p>This <a href="https://github.com/JackLich10/kaggle_2023" target="_blank">repository</a> has my men's model and women's model in their respective folders, <code>mens/</code> and <code>womens/</code>. Within each of <code>mens/</code> and <code>womens/</code>, the <code>run_all.sh</code> shell script specifies the order in which the files need to be run for the entire modeling framework to work. The <code>kaggle_preds.R</code> script combines the predictions into the final submission file.</p> <p>The first time running everything, the scripts inside the subfolder <code>models/</code> within each <code>mens/</code> and <code>womens/</code> must be run to tune the hyper-parameters of the XGBoost models found within each workflow as well as to train the men's shot probability model (This tuning is done via the <a href="https://www.tidymodels.org/" target="_blank"><code>tidymodels</code></a> framework and using crossfold validation). 
You will also have to create <code>data/</code> folders within the <code>mens/</code> and <code>womens/</code> directories.</p> <p><em>NOTE</em>: My men's model uses outside public data via my own <a href="https://jacklich10.github.io/gamezoneR/index.html" target="_blank">gamezoneR</a> R package. My women's model only uses data provided via the competition website.</p> <h3>Manual Overrides</h3> <p>In one of my submissions, I manually set Oral Roberts' win probability over Duke to 80%… that did not work out. In the other, I manually set Michigan State's win probability over USC to 80%… that one worked out. These overrides were dumb because I should have put them at 99%, and I could have been in a very bad spot if both of them had failed. For the women's bracket, I overrode all 1-seed-over-16-seed probabilities to 99%. </p>
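<p>The methodology above converts a predicted score difference into a win probability with a simple GLM. The original pipeline is written in R; the sketch below is an illustrative Python equivalent (a logistic regression with the predicted margin as the single feature, fit on simulated data), not the author's code.</p> <pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

# toy historical data: predicted margins from the game-level model and observed outcomes
rng = np.random.default_rng(0)
pred_margin = rng.normal(0, 10, size=2000)                            # predicted score difference
won = (pred_margin + rng.normal(0, 8, size=2000) &gt; 0).astype(int)     # simulated results

glm = LogisticRegression()
glm.fit(pred_margin.reshape(-1, 1), won)

# convert a new predicted margin into a win probability
new_margin = np.array([[4.5]])
print(glm.predict_proba(new_margin)[0, 1])
</code></pre>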
IceCube - Neutrinos in Deep Ice
6th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: IceCube - Neutrinos in Deep Ice <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>inference(normal ensemble,0.978)</strong>:<a href="https://www.kaggle.com/code/chihantsai/ens-gnn-v4-5-2-and-lstm-v8-and-tformer-v18-up300" target="_blank">https://www.kaggle.com/code/chihantsai/ens-gnn-v4-5-2-and-lstm-v8-and-tformer-v18-up300</a> <br> <strong>inference(group ensemble,0.975):</strong><a href="https://www.kaggle.com/code/chihantsai/ens-group-gnn-v4-5-2-and-lstm-v8-and-tformer-v17" target="_blank">https://www.kaggle.com/code/chihantsai/ens-group-gnn-v4-5-2-and-lstm-v8-and-tformer-v17</a> </p> <h3><strong>GNN part</strong> :</h3> <p><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880</a></p> <h3><strong>Transformer part</strong> :</h3> <p><a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880#2227949" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/402880#2227949</a></p> <h3><strong>LSTM</strong>:</h3> <p>Based on <a href="https://www.kaggle.com/code/rsmits/tensorflow-lstm-model-training-tpu" target="_blank">LSTM model</a> by <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>.<br> loss : <br> Custom cross-entropy loss can be enhanced for regression tasks by adding a nearby area loss to improve training. The weights of the nearby loss used are shown in the figure below, inspired by <a href="https://www.kaggle.com/competitions/ventilator-pressure-prediction/discussion/278362" target="_blank">this</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4184342%2F937cb92c0b9387a371379130f211eee4%2Fnearby_weight.png?generation=1682086516402080&amp;alt=media" alt="weights of nearly loss"><br> We believe that our custom cross-entropy loss can cover a larger area than the original cross-entropy loss, which enables us to divide the spheres into more classes<br> bin_num : 48 (Spheres are classified into 48*48 classes)<br> pulse_count(the number of pulses selected per event) : 192 and 300<br> optimizer: AdamW <br> train step:<br> Step 1: set the learning rate to 1e-3 and use pulse_count 192 to train the dataset for 3 rounds<br> Step 2: apply the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration), and pulse_count 192. We train the dataset for 4 rounds<br> Step 3: use the CosineAnnealingLR scheduler with a learning rate range of 1e-5 to 1e-7, t_max=4003 (an epoch iteration), and pulse_count 192. 
We train the dataset for 3 rounds<br> Step 4: use the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration), and pulse_count 300</p> <h3><strong>SAKT</strong>(LSTM CNN transformer) :</h3> <p>(from <a href="https://www.kaggle.com/code/shujun717/1-solution-lstm-cnn-transformer-1-fold/notebook" target="_blank">https://www.kaggle.com/code/shujun717/1-solution-lstm-cnn-transformer-1-fold/notebook</a>)<br> loss : as same as LSTM model<br> bin_num : 48 (Spheres are classified into 48 classes)<br> pulse_count(the number of pulses selected per event) : 192 <br> optimizer: AdamW <br> train step:<br> Step 1:use learning rate to 1e-3 , train the dataset for 3 rounds<br> Step 2:apply the CosineAnnealingLR scheduler with a learning rate range of 1e-4 to 1e-6, t_max=4003 (an epoch iteration) , train the dataset for 4 rounds<br> Step 3: apply the CosineAnnealingLR scheduler with a learning rate range of 1e-5 to 1e-7, t_max=4003 (an epoch iteration)</p> <h1><strong>Ensemble</strong></h1> <p>Validation score of the model in batch 657 658 659<br> Lstm : 0.9899<br> Sakt : 0.9927<br> Gnn : 0.9927<br> Transformer : 0.9869</p> <h2><strong>Normal ensemble</strong> :</h2> <p>validation score : 0.9803<br> lb : 0.978</p> <h2><strong>Group ensemble</strong> :</h2> <p>I used the results of the <a href="https://www.kaggle.com/code/solverworld/icecube-neutrino-path-least-squares-1-214" target="_blank">line-fit</a> and metadata to classify the events into 6 groups </p> <pre><code>group1 : the number of pulses aux= &gt;= zenith(prediction by line-fit) &gt;= pi/ (~%) group2 : the number of pulses aux= &gt;= zenith(prediction by line-fit) &lt; pi/ (~%) group3: the number of pulses aux= &lt; , zenith(prediction by line-fit) &gt;= pi/ the pulses of aux= are on a vertical string (~%) group4 : the number of pulses aux= &lt; , zenith(prediction by line-fit) &lt; pi/ the pulses of aux= are on a vertical string (~%) group5 : the number of pulses aux= &lt; , zenith(prediction by line-fit) &gt;= pi/ the pulses of aux= are on multiple strings (~%) group6 : the number of pulses aux= &lt; , zenith(prediction by line-fit) &lt; pi/ the pulses of aux= are on multiple strings (~%) </code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4184342%2F48225ff16bf6c8a8bf69ef01029cc4e1%2FScreenshot%20from%202023-04-21%2022-04-42.png?generation=1682085909903995&amp;alt=media" alt="Validation score"></p> <p>we can observe that the validation scores of different groups vary considerably, and each model performs better on different groups. This is logical since atmospheric muons always travel from top to bottom, so the zenith (predicted by line-fit) &lt; pi/2 is prone to more noise. Moreover, pulses of aux=False located on a vertical string have a higher probability of containing noise. Additionally, events with more pulses are better predicted.</p> <p>To enhance our results, we used different ensemble weights for each group, leading to an improvement of 0.003 in both validation score and lb (0.975). We further divided the events into 13 groups based on additional criteria, resulting in a gain of 0.001 in both validation score and lb (0.974, private score 0.975).</p> <p>Perhaps using an MLP for ensemble could get better results, but unfortunately, we did not have enough time to investigate this approach.</p> <p>P.S. My English is not very fluent, so please let me know if my explanation is not clear enough.</p>
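<p>The "nearby area" addition to the cross-entropy loss described for the LSTM and SAKT models can be illustrated as follows: instead of a one-hot target over the direction bins, some probability mass is also placed on neighbouring bins, with weights that decay with distance. The kernel values below are illustrative (the exact weights come from the team's figure), and only the circular azimuth axis is shown; the actual models binned the sphere into 48x48 azimuth-zenith classes.</p> <pre><code>import numpy as np

def nearby_soft_target(true_bin, n_bins=48, kernel=(1.0, 0.5, 0.2)):
    """Soft label over azimuth bins: full weight on the true bin, smaller weight on neighbours.
    Azimuth is circular, so neighbours wrap around. Kernel values are illustrative."""
    t = np.zeros(n_bins)
    for offset, w in enumerate(kernel):
        t[(true_bin + offset) % n_bins] += w
        if offset:
            t[(true_bin - offset) % n_bins] += w
    return t / t.sum()

def soft_cross_entropy(logits, soft_target):
    """Cross-entropy against a soft target distribution (numerically stable log-softmax)."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -float((soft_target * logp).sum())

rng = np.random.default_rng(0)
target = nearby_soft_target(true_bin=5)
logits = rng.normal(size=48)
print(soft_cross_entropy(logits, target))
</code></pre> <p>Spreading the target over nearby bins keeps the ordinal/angular structure of the problem visible to a classification loss, which is why the team could afford a finer binning than with plain one-hot cross-entropy.</p>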
March Machine Learning Mania 2023
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Private leaderboard place:</strong> 2nd<br> <strong>Private leaderboard score:</strong> 0.17522</p> <p>UPDATE: Notebooks now available. Each notebook is seeded with public data, built from the prior notebook, to allow anyone to run any notebook. This keeps RAM requirements under the 30GB limit for non-paying accounts. </p> <ol> <li><strong><a href="https://www.kaggle.com/code/matthiask/01-prep-march-mania-2023-data-round-1" target="_blank">Prep men's and women's data</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/02-march-mania-2023-feature-engineering" target="_blank">Feature engineering</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/03-march-mania-2023-modeling" target="_blank">Modeling</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/04-march-mania-2023-smooth-predictions" target="_blank">Smoothing predictions</a></strong></li> <li><strong><a href="https://www.kaggle.com/code/matthiask/05-march-mania-2023-submission-evaluation" target="_blank">Model evaluation technique</a></strong></li> </ol> <h3>Intro</h3> <p>My original solution used a blend of pre- and post-processing steps in R, and API calls to DataRobot's AutoML platform to leverage feature engineering and AutoML functionality. I happen to work at DataRobot and have access to their most recent software, but I understand that many do not. For that reason I have attempted to replicate the entire process in R, where I have achieved outputs with &gt; 0.993 correlation to my original final submission and a very similar leaderboard score. I have uploaded a series of notebooks linked above.</p> <p>I have a BA in mathematics and an MS in statistics, with 10+ years of professional experience as a university professor, actuary, and data scientist. I would be remiss not to point out that with just 126 scoring observations, I very well could have been lucky. But I am optimistic about some of these ideas working year-over-year. Here is the basic pipeline:</p> <p>R: pre-processing -&gt; R: conference effects offset -&gt; DataRobot time-aware feature engineering -&gt; DataRobot AutoML -&gt; R: smoothing probability predictions and outputting in Kaggle format.</p> <h3>Primary objective and methodology</h3> <p>I wanted to find a way to leverage information from entire seasons, while also allowing the model(s) to focus on how teams are playing leading up to the tournament, and focus on games that are more like tournament games. I leveraged AutoML to save time, because my 5- and 7-year-olds are my real bosses, and they are demanding ;-) To that end, here are the most unique things about my approach.</p> <p>1) Derived conference effects for every season from the regular season non-conference schedule, i.e., information that would be available when the tournament starts. This came in iteration 2 or 3, when I saw the South Dakota State women as my #2 overall team. Sure they went 18-0 in conference, but they also play in the Summit conference, which isn't very good. 
Clearly my model was not picking up conference effects very well before this addition.</p> <p>For this, I used R's glmnet on only non-conference, regular season games, and I leveraged four primary features: </p> <ul> <li>each team's in-conference point differential to control for how good the team is within their conference</li> <li>home court indicator</li> <li>the season (as a factor), as many conferences have gone up and down over the years</li> <li>conference labels (and interaction between season and conference). This is the main effect I'm looking for, really. The rest are just controls.</li> </ul> <p>Maybe the real magic here is in the creation of the model matrix, where instead of duplicating conference features for home and away, I made a single column for each conference. The home team's conference was labeled +1 and the away team's conference was labeled -1. This allowed me to isolate an entire conference's information into one column, rather than splitting it up between home[conf] and away[conf]. I used two targets: home team's point differential in the game and home team win indicator. Then I extracted the season-conference coefficients for each of home margin and home win models (log-odds), and appended them onto each game in my training dataset, to the home and away teams, respectively. I took the difference between the home team's conference effects and away team's conference effects, and these became offsets in my main models to help control for conference effects that are hard to pick up in the relatively few tournament games that are played.</p> <p>2) Related to the above, I had to do something about neutral site games, as the entire NCAA tournament is played on neutral site courts (for the men, and some for the women, as well). So from the Kaggle-provided data, I duplicated all neutral site rows flipping "home team" and "away team", and used a "home" indicator = 0 for when games were on neutral courts. This made all neutral site games have similar impacts on the influence of "home" and "away" features, as well as doubling the weight of most NCAA tournament games in model training. When I went to predict tournament games, I predicted two rows per every matchup and then averaged the results. </p> <p>3) I built four models in the main training portion of the work. Point differential and home win models for each of men and women. There was more information given for men's tournaments, and I thought it best to just separate out the genders. I also believe you can extract more information from differentials--as a 20-point win isn't the same as a 1-point win--so I wanted to make sure I adjusted my win probabilities by margins. To do so, using the outputs of the four models described above, I built a <code>gam()</code> with the <code>mgcv</code> package in R that regressed log-odds of predicted home win probability on the predicted home point differentials to "smooth" the probabilities out and leverage those point differentials. Anecdotally, this helped the predictions for the Iowa women, as their predicted point differentials were relatively greater than their original predicted win probabilities. 
</p> <p>4) Time-aware feature engineering: I noted that I wanted to identify teams that may be getting relatively better or worse, and this is how I achieved it using DataRobot's Feature Discovery tool:</p> <ul> <li>Derived rolling 2-week averages, medians, maxes, mins, and standard deviations of each Massey ranking (for men)</li> <li>Derived rolling averages, medians, maxes, mins, and standard deviations for box score statistics, going back 6 months (effectively all season), 3 months, 1 month, and 2 weeks. </li> <li>In addition to the box score stats provided, I derived team differentials for each game that were aggregated up. So, for example, for the designated home team in an upcoming game I would have their "average offensive rebounding differential over the past 2 weeks" as one such feature.</li> <li>For all these box score statistics, I normalized them before aggregation on a per-67 possessions basis. </li> </ul> <p>5) AutoML revealed that elastic net regressors did very well on this data. I think that makes some sense in sports, as typically effects are pretty monotonic and linear. You can't really be "too good" at something, such that you'd need xgboost or a comparable algorithm to detect non-monotonicity or non-linearity. </p> <h3>Top features</h3> <ul> <li>The home effect was quite large, equivalent to between 3-4 points per game. For example, the men's (un-standardized) coefficient effect was +1.72, which represents the gap between playing at home and playing on a neutral court. Thus the home-vs.-away effect is more like 2x1.72 = 3.44, which jives with what you can typically infer from betting lines.</li> <li>Conference effects were very large, especially in early round games where you often see power conferences playing mid-majors.</li> <li>Lagged, aggregated offensive rebounding differential: showed up highest for men, less so for women.</li> <li>Lagged, aggregated versions of score differential (points, FGM, FTM): showed up high for men <em>and</em> women. In general, longer windows were more valuable (e.g., whole season over 2-week lag), but FTM for men was important in the last month. I hypothesize that getting to the line is something a team has a fair amount of control over, and there is signal in shorter rolling windows. </li> <li>Individual Massey ranking systems showed up early and often for men. I would have thought some of my aggregations (like median rank across systems) would have been top contributors, but instead it was DOK and 7OT that both made the top 10. </li> </ul>
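<p>The signed conference design matrix from point 1) is the most distinctive idea above, so here is a minimal sketch of it: each conference gets a single column that is +1 for the home team's conference and -1 for the away team's, and a ridge regression on non-conference games recovers one coefficient per conference. The original work used glmnet in R; the Python version below with made-up data and column names is only an illustration.</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def signed_conference_matrix(games):
    """games: DataFrame with home_conf / away_conf columns (non-conference games only)."""
    confs = sorted(set(games["home_conf"]) | set(games["away_conf"]))
    X = pd.DataFrame(0.0, index=games.index, columns=confs)
    for i, row in games.iterrows():
        X.at[i, row["home_conf"]] += 1.0   # home team's conference
        X.at[i, row["away_conf"]] -= 1.0   # away team's conference
    return X

# toy example: point differential ~ home indicator + signed conference effects
games = pd.DataFrame({
    "home_conf": ["ACC", "Summit", "SEC", "ACC"],
    "away_conf": ["Summit", "SEC", "ACC", "SEC"],
    "home": [1, 1, 0, 1],                  # 0 = neutral court
    "home_margin": [18, -7, -3, 5],
})
X = pd.concat([games[["home"]], signed_conference_matrix(games)], axis=1)
model = Ridge(alpha=1.0).fit(X, games["home_margin"])
conf_effects = pd.Series(model.coef_[1:], index=X.columns[1:])  # one strength estimate per conference
print(conf_effects)
</code></pre> <p>In the full pipeline these columns would also be interacted with season and accompanied by the in-conference point-differential and home-court controls described above, and the fitted coefficients would then be attached to tournament games as offsets.</p>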
March Machine Learning Mania 2023
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Private Leaderboard Score:</strong> 0.17525<br> <strong>Private Leaderboard Place:</strong> 3th<br> <strong><a href="https://www.kaggle.com/tihonby/march-madness-3th-place-solution/" target="_blank">Notebook link</a></strong></p> <h4>Introduction</h4> <p>Hello Kaggle Members! <br> I want to thank the organizers for hosting this contest, Kaggle for providing a platform for the contest. Thanks to the colleagues at Kaggle for the public codes that I learned a lot from. Thanks to <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> and <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> for their codes that I used in my work. My congratulations to the finalists and winners of this wonderful competition!</p> <h4>Background</h4> <p>I am ML Engineer, 3 years of professional experience, this is my first time participating in this competition, I used to predict the results of other sports.</p> <h4>Datasets</h4> <p>In the final, I only used data from NCAATourneyDetailedResults, NCAATourneySeeds, RegularSeasonDetailedResults. I experimented with MasseyOrdinals data, but in the end I abandoned this data, as I did not find a clear increase. I think that it is necessary to return to them, since I have not studied them enough.</p> <h4>Features</h4> <p>Features that were used in training the final model:'Score', 'FGM', 'FGA', 'FGM3', 'FGA3', 'OR', 'Ast', 'TO', 'Stl', 'PF', 'seed', and derivatives from them. When testing, I excluded other features, as they did not give an increase in the quality of predictions. With an increase in the number of features over 70, the quality of forecasts began to deteriorate.<br> Added data extraction function for the last 3 seasons. The function retrieves the value of the feature in the current season and in the previous two. Assigns weights in descending order. The current season carries more weight and contributes more. Calculates and writes the average value to a new function. This feature allows you to smooth out outliers when a team performs differently this season than in previous years. In some cases, this can improve the forecast by 0.001-0.003. The function is little studied and is applied to the parameters 'Score', 'FGA', 'FGA3', 'quality', 'seed'.</p> <h4>Scaling</h4> <p>I used Normalizer, MinMaxScaler, StandardScaler. The best results are from Normalizer, which was used in the final.</p> <h4>Models</h4> <p>XGBoost, ElasticNet, LogisticRegression have been tested.XGBoost has better performance, final settings: xgb.cv, repeat_cv=4, max_depth=3, folds=KFold, n_splits=5.<br> When tested with repeat_cv = 7-10 gave an improvement of 0.003-0.005 over the multi-season average. But if the team that the algorithm was sure of had lost, the score could have been worse, which happened in this tournament. For this tournament, the result for repeat_cv=7 was worse than for 4. 
I checked the options with a forecast of the probability of winning 1 or 0, or a forecast by points, I did not find a significant difference.</p> <h4>Results processing</h4> <p>I did not post-process the final results of the model prediction, the predictions were in the original version.Later I rechecked for 1-3, 14-16, but this did not noticeably improve the results.I do not welcome manual adjustment, because in this way you can reduce all predictions to a manual forecast based on rating and personal preferences. Although I think that it can improve the forecast and add emotions during the competition.<br> The search for optimal parameters was carried out by training in all seasons, except for the last one, which was used for validation. The training was held separately for men's and women's teams. Prediction results are better for women's matches. Approximately 0.145-0.151 versus 0.187-0.192. I think that the men's teams have more equal strength, and it is more difficult to predict the outcome of the match in advance. The average result for the seasons is about 0.172-0.173, the difference for individual seasons is 0.169-0.177. I expected my result in the competition to be in the range of 0.174 - 0.176, although I thought that there would be those who could predict better than 0.169.<br> The 2 most successful options were selected for sending, the results of the forecasts: 0.17525 (3rd) and 0.1763 (6th) places. Some predictions for strong teams turned out to be incorrect, but others had the same predictions for this game, so the place in the rankings did not change much. The results obtained are very pleasing.</p> <h4>Source</h4> <p><a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a>: <a href="https://github.com/fakyras/ncaa_women_2018/blob/master/win_ncaa.R" target="_blank">Here's the original R from 2018</a>. The guidelines of <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> - Darius Barušauskas, <a href="https://www.youtube.com/watch?v=KmhGNc7gcCM&amp;t=18s&amp;ab_channel=Kaggle" target="_blank">youtube presentation</a><br> <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> Python notebooks from 2023: 1: <a href="https://www.kaggle.com/code/verracodeguacas/xgboost-ken-pom-data-sagarin-ranks" target="_blank">men</a>, 2: <a href="https://www.kaggle.com/code/verracodeguacas/xgboost-ncaa-women-s-side" target="_blank">women</a>, 3: <a href="https://www.kaggle.com/code/verracodeguacas/easy-mixer-men-women" target="_blank">mixer</a></p> <h5>Thanks Kaggle! Good luck to everyone in the new season!</h5>
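<p>The "last 3 seasons" feature described above can be sketched as follows: a season-level statistic is blended with its values from the two previous seasons using descending weights, so the current season contributes the most. The weights below are illustrative; the write-up does not state the exact values.</p> <pre><code>import pandas as pd

def weighted_three_season(df, col, weights=(0.6, 0.3, 0.1)):
    """df: one row per (TeamID, Season) with a season-level statistic in `col`.
    Returns the stat blended with the two previous seasons using descending weights."""
    df = df.sort_values(["TeamID", "Season"]).copy()
    g = df.groupby("TeamID")[col]
    prev1 = g.shift(1)
    prev2 = g.shift(2)
    # fall back to the most recent available value when a previous season is missing
    prev1 = prev1.fillna(df[col])
    prev2 = prev2.fillna(prev1)
    df[col + "_3season"] = weights[0] * df[col] + weights[1] * prev1 + weights[2] * prev2
    return df

# toy usage
df = pd.DataFrame({"TeamID": [1101] * 3 + [1102] * 3,
                   "Season": [2021, 2022, 2023] * 2,
                   "Score": [70, 75, 80, 60, 65, 72]})
print(weighted_three_season(df, "Score")[["TeamID", "Season", "Score_3season"]])
</code></pre>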
March Machine Learning Mania 2023
5th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><a href="https://www.kaggle.com/code/mbund1237/marchmadness/notebook" target="_blank">Notebook link</a></p> <p><strong>Private leaderboard place:</strong> 5th<br> <strong>Private leaderboard score:</strong> 0.17619</p> <h3>Intro</h3> <p>Before getting into it, I just want to thank Kaggle for getting all the different datasets together and for hosting this competition. I've learned a lot and this was a great way to challenge myself. I also want to thank anyone who has shared a notebook or participated in any discussion with the goal of helping others! However, it's sad to see that ~60% of the top 100 was taken up by a fork of one notebook (without any changes). I was surprised at the number (and profile rank) of people who put more value on winning an arbitrary medal vs. improving their skills and challenging themselves. I hope that future competitions will put more emphasis on original submissions.</p> <h3>Background</h3> <p>I graduated in August 2022 with a BS in Business Administration from San Diego State University. I don't have any professional experience (yet) but hope to land a data-driven role. I am self-taught and took up machine learning at the beginning of 2023. However, I do have experience playing sports, sports analytics, and building betting models for basketball primarily (personal projects). I am definitely not the best modeler in the competition, but I do feel as if my knowledge in this subject specifically is what gave me a bit of an edge.</p> <h3>Approach Introduction</h3> <p>One of my favorite movies of all time is Moneyball. In Moneyball, the Oakland Athletics took a statistical approach to building a baseball team. One of the main metrics they judged hitters by was OBP (on-base percentage). They strategized that winning in baseball can primarily be broken down to getting batters on base. I have always wanted to apply a similar strategy to basketball, but it's tough as it is a very different game. </p> <h3>My Approach</h3> <p>My approach to recreate the Moneyball strategy was to only use features that normalized any difference in pace of play, playstyle, bias, noise, etc. In order to do this, **I focused on how well teams played on a possession-by-possession basis only with an emphasis on their efficiency. Teams who play more efficiently per possession are more likely to beat their opponent, irrespective of strength/seed. ** I am somewhat confident that this approach would do fairly well every year and could possibly be expanded to the NBA as well.</p> <h3>Features</h3> <p>During the testing phase, most of my time spent was on feature selection. As mentioned before, having previous knowledge and experience using some advanced basketball statistics made the feature selection much easier. I also decided to drop "NumOT" and "WLoc" from all of the datasets that I used. I view "NumOT" as noise due to the fact that it doesn't give any value in determining the strength of either team. 
"WLoc" is irrelevant in my opinion as I see the NCAA Tournament as a neutral location so there is no home-court advantage.</p> <p>Here is a breakdown of each of my features, why I used them, and their importance (will average between the two teams… i.e. AWinRatio and BWinRatio will just be referred to as WinRatio and I will use their average importance):</p> <table> <thead> <tr> <th>Feature</th> <th>Formula</th> <th>Explanation</th> <th>Importance</th> </tr> </thead> <tbody> <tr> <td>Win Ratio (Win %)</td> <td>WR = W / G</td> <td>This one is fairly straightforward. A team with a higher win percentage is more likely the stronger team. Teams with a high win percentage and a low seed may be stronger than anticipated.</td> <td>0.327</td> </tr> <tr> <td>Seed</td> <td>-</td> <td>This one is also fairly straightforward. A team with a high seed is more likely to be the stronger team than a low-seeded team.</td> <td>0.052</td> </tr> <tr> <td>PPP (points per possession)</td> <td>PPP = PTS / Poss</td> <td>PPP metric represents the efficiency at which a team is able to generate points on a possession-by-possession basis. The higher the PPP, the better a team is at generating efficient points each time they have possession. A low PPP may be indicative of bad shot selection (i.e. long 2's), poor efficiency, or bad offense.</td> <td>0.014</td> </tr> <tr> <td>Team</td> <td>-</td> <td>Some teams have performed better than others (and vice-versa). This may be due to better recruiting, good coaching, conference differences, or even luck.</td> <td>0.028</td> </tr> <tr> <td>EFG% (Effective field goal %)</td> <td>EFG = (FG + 0.5 * 3P) / FGA</td> <td>EFG is a shooting efficiency metric that accounts for the fact that a 3 pointer is worth 1 more point than a 2 pointer (50% more) when looking at FG% (field goal percentage).</td> <td>0.008</td> </tr> <tr> <td>TOV%</td> <td>TOV% = (TOV / (FGA + 0.475 * FTA + TOV))</td> <td>TOV% (Turnover percentage) is an estimate for the number of times a team turns over the ball per 100 possessions. This can be used to measure a team’s ability to take care of the ball on offense as teams who turn the ball over often have lower efficiency and allow the defense to get back possession of the ball. Note: Some calculations use 0.44 as the weight instead of 0.475.</td> <td>0.008</td> </tr> <tr> <td>OREB%</td> <td>OREB% = OREB / (OREB / OppDREB)</td> <td>OREB% (Offensive rebound percentage) is an estimate of how often a team gets an offensive rebound. This measurement is useful as a team with a high OREB% is more likely to create second chance opportunities after a missed shot.</td> <td>0.008</td> </tr> <tr> <td>Set</td> <td>1 = Tournament, 0 = Regular Season</td> <td>This feature was created by me in order to allow the model to possibly assign a different weight to regular season games vs tournament games. Winning a tournament game can be seen as more important than winning a regular season game.</td> <td>0.008</td> </tr> <tr> <td>TS%</td> <td>TS% = PTS / (2 * FGA + 0.475 * FTA)</td> <td>Another advanced shooting efficiency metric that takes into account free throws when calculating field goal percentage (FG%). Traditional FG% only utilizes total field goals (made and attempted) while TS% takes into account the difference between a free throw, 2 pointer, and 3 pointer when calculating. 
Note: Some calculations use 0.44 as the weight instead of 0.475.</td> <td>0.008</td> </tr> <tr> <td>FTR</td> <td>FTR = FTA / FGA</td> <td>FTR (Free throw rate) represents how often a team is able to draw shooting fouls relative to how many shots they take. Teams with a high FTR are able to get to the free throw line often when attempting shots. This metric can be useful as a team with a high FTR is not only able to generate additional points but can also get the other team into foul trouble.</td> <td>0.007</td> </tr> <tr> <td>DREB%</td> <td>DREB% = DREB / (DREB + OppOREB)</td> <td>DREB% (Defensive rebound percentage) measures a team’s ability to secure a defensive rebound after a missed shot by the offense. Teams with a high DREB% minimize the opponent’s chances to get an offensive rebound after a missed shot and give their team possession of the ball.</td> <td>0.007</td> </tr> <tr> <td>Season</td> <td>-</td> <td>I wasn’t sure if I should leave this feature in but it seemed to perform well. My rational was that teams that have performed well the last few seasons may be more likely to perform well present day. It can also be used to quantify playstyle differences throughout the years (i.e. 3 pointers are taken at a much higher rate than in the early 2000’s).</td> <td>0.007</td> </tr> <tr> <td>4F</td> <td>-</td> <td>Please see references for an explanation. The 4 factors concept was created by Dean Oliver in his book, “Basketball On Paper”.</td> <td>0.007</td> </tr> <tr> <td>Gender</td> <td>1 = Men, 0 = Women</td> <td>This feature is used to try and quantify the playstyle differences between men and women. Due to only training one model (for both men and women), I figured this could be a possible solution. Next year I hope to train two separate models. It didn’t carry any weight in the final feature importance list.</td> <td>0.000</td> </tr> <tr> <td>ORTG</td> <td>ORTG = 100 * (PTS / Poss)</td> <td>ORTG (Offensive rating) is used to measure the efficiency of an offense. It’s value represents the number of points a team scores per 100 possessions. Looking back, PPP (points per possession) does almost the same thing- just doesn’t adjust it per 100 possessions. While testing the model in the March Madness Warmup, this held more importance than it did here.</td> <td>0.000</td> </tr> <tr> <td>DRTG</td> <td>ORTG = 100 * (OppPTS / OppPoss)</td> <td>DTRG (Defensive rating) is used to measure the number of points a team’s defense allows per 100 possessions. Similar to ORTG, PPP is highly correlated with this.</td> <td>0.000</td> </tr> <tr> <td>Opponent Stats</td> <td>-</td> <td>In addition to calculating all of these metrics for a team, I did it for their opponent’s averages as well. For example, if a team has a very low “OppEFG” for the season, it means their opponents do not shoot efficiently against them. This is highly indicative of a strong defense. Another example is OppFTR. If a team has a very low “OppFTR”, it means that the team does not foul shooters often and doesn’t allow the opposing team to shoot free throws at a high rate. 
In general, you want your Opponents’ metrics lower than yours (other than TOV%).</td> <td>See Full Importance List Below</td> </tr> </tbody> </table> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9608345%2F957d47dd74e565808e51f14f21cc4c51%2Fimportances.png?generation=1681346881119087&amp;alt=media" alt="Full importance dataframe"></p> <h3>Datasets &amp; Modeling</h3> <p><strong>Datasets:</strong> I only used a total of 6 datasets and did not incorporate any outside data (i.e. 538, Massey, KenPom). For both men's and women's I used TourneyDetailedResults, NCAATourneySeeds, and RegularSeasonDetailedResults. I also used the sample submission when creating my entry. </p> <p><strong>Scaling:</strong> I tested RobustScaler, MinMaxScaler, and StandardScaler. I got the best CV scores using MinMaxScaler so that is what I used.</p> <p><strong>Cross-validation:</strong> I used KFold with shuffling and tested 5, 8, and 10 splits with 10 performing the best. With this said, this is one area where I need to improve, and hope to cross-validate more effectively.</p> <p><strong>Modeling:</strong> I decided to take a classification approach- 1 being a win and 0 being a loss. It seems as if many people took a regression approach to predict the scoring margin. However, I feel that this isn't the right way to approach the problem as we are not being scored on the accuracy of the margin. There is no difference if a team wins by 2 or 15 in this competition- only the fact that they won (or lost). Many people tried to extrapolate a win probability based off of this margin but I don't think that approach would be as accurate. </p> <p>I tested 5 different classification models, LogisticRegression, as well as a voting ensemble. The classification models included XGBoost, LightGBM, RandomForest, KNearestNeighbors, and StochasticGradientDescent. While testing, the voting ensemble was taking too long to train so I decided to scrap that approach. In the end, XGBoost performed the best so that is what I used for my submission.</p> <h3>Post Processing</h3> <p>Unlike many others, I didn't manually override any of the predictions. I considered this, but felt it would be incorporating my own bias into the modeling (which I wanted to avoid). With this said, any manual overrides seem fairly safe for the women's tournament but much riskier for the men's. </p> <h3>Conclusion</h3> <p>I'm very happy with my results and am curious to see how this approach would score on previous tournaments. I fully intend to work on a submission for next year's tournament!</p> <h3>Wishlist and suggestions</h3> <ul> <li>Individual player data for each season</li> <li>Score the competition using the scoring margin rather than the probability of winning. There is a huge difference between a team winning by 20 vs 2, especially if you consider the seeds. A 16-seed blowing out a 1-seed by 20 points would be incredibly unlikely (looking at you Virginia) and could be more indicative of a team's strength.</li> <li>No direct forks of public notebooks (or cluster them as a team like I saw someone suggest)</li> <li>Sub-competitions? I feel as if this could be interesting. I saw a few people mention that this competition is getting a bit repetitive or boring. Having different sub-competitions for March Madness could be really fun. I.e. separate competitions or scoring for only the first round, whoever can predict the closest to the sum of all the scores, etc. 
Some people may enjoy the variety and additional challenge(s) it brings.</li> </ul> <h3>References</h3> <p>About. THE POWER RANK. (n.d.). Retrieved from <a href="https://thepowerrank.com/cbb-analytics/" target="_blank">https://thepowerrank.com/cbb-analytics/</a><br> All 68 Ncaa Tournament Teams in 12 Minutes. (2023). Youtube. Retrieved from <a href="https://www.youtube.com/watch?v=1LUgGiw-cHw&amp;" target="_blank">https://www.youtube.com/watch?v=1LUgGiw-cHw&amp;</a>.<br> Basketball Reference. (n.d.). Four factors. Four Factors. Retrieved from <a href="https://www.basketball-reference.com/about/factors.html" target="_blank">https://www.basketball-reference.com/about/factors.html</a><br> Kaggle. (2019). Ncaa March Madness Workshop | led by Darius Darius Barušauskas | Kaggle Days Paris. YouTube. Retrieved from <a href="https://www.youtube.com/watch?v=KmhGNc7gcCM" target="_blank">https://www.youtube.com/watch?v=KmhGNc7gcCM</a>.<br> Kotzias, K. (2018, March 15). The four factors of basketball as a measure of success. Statathlon. Retrieved from <a href="https://statathlon.com/four-factors-basketball-success/" target="_blank">https://statathlon.com/four-factors-basketball-success/</a><br> squared2020. (2017, September 6). Introduction to oliver's four factors. Squared Statistics: Understanding Basketball Analytics. Retrieved from <a href="https://squared2020.com/2017/09/05/introduction-to-olivers-four-factors/" target="_blank">https://squared2020.com/2017/09/05/introduction-to-olivers-four-factors/</a> </p> <p>Glossary links:<br> <a href="https://www.basketball-reference.com/about/glossary.html#tsa" target="_blank">Basketball Reference</a><br> <a href="https://www.nbastuffer.com/analytics101/" target="_blank">NBAStuffer</a></p> <p>Thanks again and congratulations to all the winners! Feel free to ask questions below, I will answer them all to the best of my ability. </p>
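<p>As a companion to the feature table above, here is a minimal pandas sketch that computes the possession-based metrics directly from the Kaggle detailed-results columns. It follows the standard definitions (possessions = FGA - OR + TO + 0.475*FTA as an approximation, OREB% = OR / (OR + opponent DR), TS% = PTS / (2*(FGA + 0.475*FTA))); the W and L column prefixes are the winner/loser prefixes of the detailed results files.</p> <pre><code>import pandas as pd

def add_efficiency_metrics(df):
    """df: rows from (M/W)RegularSeasonDetailedResults with winner (W*) and loser (L*) columns."""
    out = df.copy()
    poss_w = out.WFGA - out.WOR + out.WTO + 0.475 * out.WFTA   # winning team's possessions
    poss_l = out.LFGA - out.LOR + out.LTO + 0.475 * out.LFTA   # losing team's possessions
    out["W_PPP"]     = out.WScore / poss_w
    out["W_EFG"]     = (out.WFGM + 0.5 * out.WFGM3) / out.WFGA
    out["W_TS"]      = out.WScore / (2 * (out.WFGA + 0.475 * out.WFTA))
    out["W_TOVpct"]  = out.WTO / (out.WFGA + 0.475 * out.WFTA + out.WTO)
    out["W_OREBpct"] = out.WOR / (out.WOR + out.LDR)
    out["W_DREBpct"] = out.WDR / (out.WDR + out.LOR)
    out["W_FTR"]     = out.WFTA / out.WFGA
    out["W_ORTG"]    = 100 * out.WScore / poss_w
    out["W_DRTG"]    = 100 * out.LScore / poss_l
    return out
</code></pre> <p>The same calculations would be repeated from the losing team's perspective and then aggregated per team and season before joining onto the matchup rows.</p>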
Google - Isolated Sign Language Recognition
5th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is a quick overview of the 5th-place solution.</p> <ol> <li><p><strong>we applied various augmentations like flip, concatenation, etc</strong><br> 1.1. By applying different augmentations, we can increase the cv by ~ 0.02 (0.76 -&gt; 0.78)</p></li> <li><p><strong>the model is only a transformer model based on the public kernels</strong><br> 2.1. By increasing the number of parameters, the performance of a single model can be increased to around 0.8 (0.78-&gt;0.8) in public LB.<br> 2.1.1. 3 layers of transformer with the embedding size 480.</p></li> <li><p><strong>Preprocessing by mean and std of single sign sequence</strong><br> 3.1. the preprocessing does affect the final performance. <br> 3.1.1. we tried different ways of calculating the mean and std and found out that using the mean and std of the single sign sequence results in better cv.</p></li> <li><p><strong>Feature engineering like distances between points</strong><br> 4.1. we selected and used around 106 points (as the public notebook by Heck).<br> 4.2. distances withinpoints of hands/nose/eyes/… are calculated.</p></li> <li><p><strong>some methods to prevent overfitting like awp, random mask of frames, ema, etc …</strong></p></li> </ol> <p>many thanks to my teammates <a href="https://www.kaggle.com/qiaoshiji" target="_blank">@qiaoshiji</a> <a href="https://www.kaggle.com/zengzhaoyang" target="_blank">@zengzhaoyang</a></p> <p>The source code for training models can be found here : <a href="https://github.com/zhouyuanzhe/kaggleasl5thplacesolution" target="_blank">https://github.com/zhouyuanzhe/kaggleasl5thplacesolution</a></p>
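<p>Two of the points above, preprocessing with the mean and std of the single sign sequence and adding distances within the selected points, can be illustrated with a short sketch. The landmark indices, shapes, and random data below are assumptions for illustration, not the team's code.</p> <pre><code>import numpy as np

def normalize_per_sequence(seq):
    """seq: (T, L, 2) landmarks for one sign; statistics are computed from this sequence only."""
    mean = np.nanmean(seq, axis=(0, 1), keepdims=True)
    std = np.nanstd(seq, axis=(0, 1), keepdims=True) + 1e-6
    return np.nan_to_num((seq - mean) / std)

def pairwise_distances(seq, idx):
    """Per-frame distances between all pairs of the landmarks listed in idx."""
    pts = seq[:, idx]                                  # (T, K, 2)
    diff = pts[:, :, None, :] - pts[:, None, :, :]     # (T, K, K, 2)
    d = np.linalg.norm(diff, axis=-1)                  # (T, K, K)
    iu = np.triu_indices(len(idx), k=1)
    return d[:, iu[0], iu[1]]                          # (T, K*(K-1)/2)

rng = np.random.default_rng(0)
sign = rng.normal(size=(40, 106, 2))                   # ~106 selected points, as in the write-up
x = normalize_per_sequence(sign)
hand_idx = list(range(21))                             # hypothetical indices of one hand
feats = np.concatenate([x.reshape(x.shape[0], -1), pairwise_distances(x, hand_idx)], axis=1)
print(feats.shape)                                     # (40, 106*2 + 210)
</code></pre>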
Google - Isolated Sign Language Recognition
🥈18th Place Solution🥈
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>TLDR</h1> <p>Based on my experiment, it seems that the most significant factors contributing to improved performance were <strong>regularization</strong>, <strong>feature processing</strong> and <strong>embedding_layer</strong>. The final ensemble model consisted of various input features and models, including Transformer, MLP, and GRU.<br> <br></p> <h1>Data Processing</h1> <p>Out of the 543 landmarks in the provided raw data, a total of 115 landmarks were used for hands, poses, and lips. Input features were constructed by concatenating xy(z), motion, and distance, resulting in a final input feature size of 1196. Initially, using only the distance feature of hand data was employed, but the significant performance improvement (CV +0.05) was achieved by adding the distance feature of pose and lip data. </p> <pre><code>feature = tf.concat([ tf.reshape(xyz_hand[:, :21, :3], [-1, 21 * 3]), tf.reshape(xyz_pose[:, 21:46, :2], [-1, 25 * 2]), tf.reshape(xyz_lip[:, 46:66, :2], [-1, 20 * 2]), tf.reshape(motion_hand[:, :21, :3], [-1, 21 * 3]), tf.reshape(motion_pose[:, 21:46, :2], [-1, 25 * 2]), tf.reshape(motion_lip[:, 46:66, :2], [-1, 20 * 2]), tf.reshape(distance_hand, [-1, 210]), tf.reshape(distance_pose, [-1, 300]), tf.reshape(distance_outlip, [-1, 190]), tf.reshape(distance_inlip, [-1, 190]), ], axis=-1) </code></pre> <p>Additionally, the hand with fewer NaN values was utilized, and the leg landmarks were removed from the pose landmarks. Based on this, two versions of inputs were created. The first version only utilized frames with non-NaN hand data, while the second version included frames with NaN hand data. The former had a max_length of 100, and the latter had a max_length of 200.</p> <pre><code>cond = lefth_sum &gt; righth_sum h_x = tf.where(cond, lefth_x, righth_x) xfeat = tf.where(cond, tf.concat([lefth_x, pose_x, lip_x], axis = 1), tf.concat([righth_x, pose_x, lip_x], axis = 1)) </code></pre> <p><br></p> <h1>Augmentation</h1> <p>The hand with fewer NaN values was utilized, and both hands were flipped to be recognized as right hands in the model, which actually contributed to a performance improvement of about 0.01 in CV.</p> <pre><code># x-axis mirroring cond = lefth_sum &gt; righth_sum xfeat_xcoordi = xfeat[:, :, 0] xfeat_else = xfeat[:, :, 1:] xfeat_xcoordi = tf.where(cond, -xfeat_xcoordi, xfeat_xcoordi) xfeat = tf.concat([xfeat_xcoordi[:, :, tf.newaxis], xfeat_else], axis = -1) </code></pre> <p>I did not observe that flip, rotate, mixup, and other augmentation techniques contributed to performance improvement in CV, so I supplemented this by ensembling the models of the two versions of inputs mentioned earlier.<br> <br></p> <h1>Model</h1> <p>Prior to being utilized as inputs for the transformer model, the input features, namely xy(z), motion, and distance, underwent individual processing through dedicated embedding layers. 
Compared to the scenario where features were not processed independently, a performance improvement of 0.01 was observed in the CV score when the features were treated independently.</p> <pre><code># embedding layer xy = xy_embeddings(xy) motion = motion_embeddings(motion) distance_hand = distance_hand_embeddings(distance_hand) distance_pose = distance_pose_embeddings(distance_pose) distance_outlip = distance_outlip_embeddings(distance_outlip) distance_inlip = distance_inlip_embeddings(distance_inlip) x = tf.concat([xy, motion, distance_hand, distance_pose, distance_outlip, distance_inlip], axis=-1) x = relu(x) x = fc_layer(x) x = TransformerModel(input_ids = None, inputs_embeds=x, attention_mask=x_mask).last_hidden_state </code></pre> <p><br></p> <p>For Transformer models, I used huggingface's RoBERTa-PreLayerNorm, DeBERTaV2, and GPT2. The input was processed independently for xyz, motion, and distance, and then concatenated to form a 300-dimensional transformer input. The mean, max, and std values of the Transformer output were then concatenated to obtain the final output.</p> <pre><code># pooling function def get_pool(self, x, x_mask): x = x * tf.expand_dims(x_mask, axis=-1) # apply mask nonzero_count = tf.reduce_sum(x_mask, axis=1, keepdims=True) # count nonzero elements max_discount = (1-x_mask)*1e10 apool = tf.reduce_sum(x, axis=1) / nonzero_count mpool = tf.reduce_max(x - tf.expand_dims(max_discount, axis=-1), axis=1) spool = tf.sqrt((tf.reduce_sum(((x - tf.expand_dims(apool, axis=1)) ** 2) * tf.expand_dims(x_mask, axis=-1), axis=1) / nonzero_count) + 1e-9) return tf.concat([apool, mpool, spool], axis=-1) </code></pre> <p>In addition to the Transformer model, simple linear models and GRU models also achieved similar performance to the Transformer model, so I ensembled these three types of models.<br> <br></p> <h1>Training</h1> <ul> <li>Scheduler : lr_warmup_cosine_decay </li> <li>Warmup Ratio : 0.2 </li> <li>Optimizer : AdamW </li> <li>Weight Decay : 0.01</li> <li>Epoch : 40</li> <li>Learning Rate : 1e-3 </li> <li>Loss Function : CrossEntropyLoss </li> <li>Smoothing Value : 0.65 ~ 0.75</li> </ul> <p><br></p> <h1>Regularization</h1> <p>During model training, there are three primary regularization techniques that have made significant contributions to both improving convergence speed and final performance</p> <ul> <li><p>Weight normalization applied to the final linear layer(<a href="https://arxiv.org/abs/1602.07868" target="_blank">paper</a>)</p> <pre><code>final_layer = torch.nn.utils.weight_norm(nn.Linear(hidden_size, 250)) </code></pre></li> <li><p>Batch normalization applied before the final linear layer</p> <pre><code># head x = fc_layer(x) x = batchnorm1d(x) x = relu(x) x = dropout(x) x = final_layer(x) </code></pre></li> <li><p>High weight decay value with the AdamW<br> <br></p></li> </ul> <h1>TFLite Conversion</h1> <p>In the early stages of the competition, I worked with PyTorch, which meant I had to deal with numerous errors when converting to TFLite, and ultimately failed to handle dynamic input shapes. 
In the latter stages of the competition, I began working with Keras, and the number of errors when converting to TFLite was significantly reduced.<br> <br></p> <h1>Didn't Work</h1> <ul> <li>Adding the distance feature contributed to performance improvement, but adding angle and direction features did not.</li> <li>Increasing the number of transformer layers did not contribute to performance improvement.</li> <li>I attempted to model the relationships between landmark points using Transformers or GATs, but the inference speed of the model became slower, and the performance actually decreased.</li> <li>BERT-like pretraining (MLM) for XYZ coordinates did not improve performance with the provided data in the competition.<br> <br></li> </ul> <h1>Learned</h1> <p>I am currently serving in the military in South Korea. During my army training, I learned a lot while coding in between. I was amazed by the culture of Kagglers sharing various ideas and discussing them. It's really cool that something bigger and better can be born through this community, and I also want to participate in it diligently in the future. Thank you for reading my post.<br> <br></p> <h1>Code</h1> <p><a href="https://github.com/siwooyong/Google-Isolated-Sign-Language-Recognition" target="_blank">https://github.com/siwooyong/Google-Isolated-Sign-Language-Recognition</a><br> <br></p>
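<p>Since the write-up credits Keras for a much smoother TFLite export, here is a minimal sketch of a generic Keras-to-TFLite conversion with FP16 post-training quantization; it uses the standard TensorFlow Lite converter API and is not the author's exact export script.</p>
<pre><code>import tensorflow as tf

def export_tflite(model, path="model.tflite"):
    # model: a built tf.keras.Model; FP16 weights roughly halve the file size
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    tflite_bytes = converter.convert()
    with open(path, "wb") as f:
        f.write(tflite_bytes)
</code></pre>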
Google - Isolated Sign Language Recognition
6th Place Solution - Transformer Tweaking Madness
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to both, the organizers of this competition who offered a fun yet challenging problem as well as all of the other competitors - well done to everyone who worked hard for small incremental increases. </p> <p>Although I am the one posting the topic, this is the result of a great team effort, so big shoutout to <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>.</p> <h1>Brief Summary</h1> <p>Our solution is a 2 model ensemble of a MLP-encoder-frame-transformer model. We pushed our transformer models close to the limit and implemented a lot of tricks to climb up to 6th place. <br> I have 1403 hours of experiment monitoring time in April (that’s 48h per day :)).</p> <p><strong>Update :</strong> Code is available here : <a href="https://github.com/TheoViel/kaggle_islr" target="_blank">https://github.com/TheoViel/kaggle_islr</a></p> <h1>Detailed Summary</h1> <h2>Preprocessing &amp; Model</h2> <p><a href="https://ibb.co/g3np4jQ"><img src="https://i.ibb.co/80hpYKG/ISLR-drawio-1.png" alt="ISLR-drawio-1"></a></p> <h3>Preprocessing</h3> <ul> <li>Remove frames without fingers</li> <li>Stride the sequence (use 1 every n frames) such that the sequence size is <code>&lt;= max_len</code>. We used <code>max_len=25</code> and <code>80</code> in the final ensemble</li> <li>Normalization is done for the whole sequence to have 0 mean and 1 std. We do an extra centering before the specific MLP. Nan values are set to 0</li> </ul> <h3>Embedding</h3> <ul> <li>2x 1D convolutions (<code>k=5</code>) to smooth the positions</li> <li>Embed the landmark id and type (e.g. lips, right hand, …), <code>embed_dim=dense_dim=16</code></li> </ul> <h3>Feature extractor</h3> <ul> <li>One MLP combining all the features, and 4 for specific landmark types (2x hands, face, lips)</li> <li>Max aggregation for hands is to take into account that signers use one hand</li> <li><code>dim=192</code>, <code>dropout=0.25</code></li> </ul> <h3>Transformer</h3> <ul> <li>Deberta was better than Bert, but we had to rewrite the attention layer for it to be efficient</li> <li>To reduce the number of parameters, we use a smaller first transformer, and modify the output layer to upscale/downscale the features. This was key to enable blending 2 models</li> <li><code>d_in=512</code>, <code>δ=256</code> for <code>max_len=25</code>, <code>δ=64</code> for <code>max_len=80</code>, <code>num_heads=16</code>, <code>dropout=0.05</code> for the first layer, <code>0.1</code> for the other 2</li> <li>Unfortunately, we did not have any luck with using a pre-trained version of Deberta, for example by importing some of the pretrained weights</li> </ul> <h2>Training strategy</h2> <p><a href="https://imgbb.com/"><img src="https://i.ibb.co/VvT0pVx/ISLR-Train-drawio-1.png" alt="ISLR-Train-drawio-1"></a></p> <h3>Augmentations:</h3> <ul> <li>Horizontal Flip (<code>p=0.5</code>)</li> <li>Rotate around <code>(0, 0, 0)</code> by an angle between -60 and 60° (<code>p=0.5</code>)</li> <li>Resizing by a factor in <code>(0.7, 1.3)</code> (<code>p=0.5</code>). 
We also allow for distortion (<code>p=0.5</code>)</li> <li>Crop 20% of the start or end (<code>p=0.5</code>)</li> <li>Interpolate to fill missing values (<code>p=0.5</code>)</li> <li>Manifold Mixup <a href="https://arxiv.org/abs/1806.05236" target="_blank">[1]</a> (scheduled, <code>p=0.5 * epoch / (0.9 * n_epochs)</code>) : random apply mixup to the features before one of the transformer layer</li> <li>Only during the first half of the training, since it improved convergence<ul> <li>Fill the value of the missing hand with those of the existing one (<code>p=0.25</code>)</li> <li>Face CutMix : replace the face landmarks with those of another signer doing the same sign (<code>p=0.25</code>)</li></ul></li> </ul> <h3>Training</h3> <ul> <li>100 epochs, <code>lr=3e-4</code>, 25% warmup, linear schedule</li> <li>Cross entropy with smoothing (<code>eps=0.3</code>)</li> <li><code>weight_decay=0.4</code>, <code>batch_size=32</code> (x8 GPUs)</li> <li>OUSM <a href="https://arxiv.org/pdf/1901.07759.pdf" target="_blank">[2]</a>, i.e. exclude the top k (<code>k=3</code>) samples with the highest loss from the computation</li> <li>Mean teacher <a href="https://arxiv.org/abs/1703.01780" target="_blank">[3]</a> &amp; Knowledge Distillation (see image above). We train 3 models at the same time, and use the distilled one for inference</li> <li>Model soup <a href="https://arxiv.org/abs/2203.05482" target="_blank">[4]</a> of the last 10 epochs checkpoints</li> </ul> <h2>Experiments</h2> <p><a href="https://ibb.co/MV343Xx"><img src="https://i.ibb.co/jM121pP/cvlb.png" alt="cvlb"></a></p> <h3>Validation strategy</h3> <p>We use a Stratified 4-fold, grouped by patient. We had a great CV - LB correlation (see figure), and our best model achieved CV 0.749 - Public 0.795 - Private 0.877. 
We submitted single models trained on the full dataset, and 2 models for ensembles.</p> <h3>What worked &amp; reported improvements</h3> <p>Starting from early April :</p> <ul> <li>Baseline : MLP + Bert : LB 0.72</li> <li>Deberta instead of Bert : CV +0.015 <strong>-&gt; LB 0.73</strong></li> <li>Improve Feature extractor : CV +0.01 <strong>-&gt; LB 0.74</strong></li> <li>Flip augmentation : CV +0.015 <strong>-&gt; LB 0.75</strong></li> <li>Increase model size : CV +0.006 </li> <li>Two stage training + interp aug : CV + 0.002 ?</li> <li>Increase model size : CV +0.005 <strong>-&gt; LB 0.76</strong></li> <li>Deberta Rework : CV +0.005 <strong>-&gt; LB 0.77</strong></li> <li>OUSM : CV +0.003</li> <li>Mean Teacher : CV +0.005</li> <li>Ensemble two distilled models : +0.005 <strong>-&gt; LB 0.78</strong></li> <li>Conv layers : CV +0.001</li> <li>Mish <a href="https://arxiv.org/abs/1908.08681" target="_blank">[5]</a> activation instead of ReLU/LeakyReLU : CV +0.001</li> <li>Warmup 0.1 -&gt; 0.25 : CV +0.001</li> <li>Model soup : CV +0.001</li> <li><a href="https://github.com/AlexanderLutsenko/nobuco" target="_blank">Nobuco</a> instead of onnx : 45ms/it -&gt; 38ms/it, this enabled to improve the ensemble </li> <li>Weight decay 0 -&gt; 0.4 : CV +0.004</li> <li>Truncation aug : CV +0.001</li> <li>Manifold mixup : CV +0.002</li> <li>Different input, model and teacher size for ensemble : CV +0.005 <strong>-&gt; LB 0.79</strong></li> <li>Centering before the MLP layers : CV +0.004 <strong>-&gt; LB 0.795 (final)</strong></li> </ul> <h3>What did not work (for us)</h3> <ul> <li>Unfortunately we could not make CNNs work</li> <li>GCNs, or other architectures that are “sota” for ASL</li> <li>Heng’s architecture, although others had great results with it</li> <li>Getting a successful 3-model sub</li> <li>Relabel the data to reduce label noise</li> <li>Other augmentations such as noise, dropout, drop frame, shift</li> <li>Dropout scheduling, custom lr per layer</li> <li>Pretraining part of the model</li> <li>Handcrafted graph features (adjacency matrices, edge features)</li> <li>Adding more landmarks (eyes, eyebrows)</li> </ul> <p><em>Thanks for reading !</em></p>
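<p>For readers unfamiliar with OUSM (excluding the k highest-loss samples of each batch, listed above with k=3), here is a hedged PyTorch sketch assuming plain cross-entropy; the team's actual implementation may differ.</p>
<pre><code>import torch
import torch.nn.functional as F

def ousm_cross_entropy(logits, targets, k=3):
    # per-sample losses, then drop the k largest ones (likely label noise)
    losses = F.cross_entropy(logits, targets, reduction="none")
    if k &gt; 0 and losses.numel() &gt; k:
        losses = torch.topk(losses, losses.numel() - k, largest=False).values
    return losses.mean()
</code></pre>
<p>Dropping a handful of the hardest samples per batch is a cheap way to reduce the influence of mislabeled sequences without relabeling the data.</p>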
Google - Isolated Sign Language Recognition
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the organizers and all the competitors. <br> The overall picture of my solution is shown in the figure below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1630583%2F2d2c527274d7918f8c3a447a7ffe1863%2Foverview.png?generation=1683110270790469&amp;alt=media" alt=""></p> <h1>Preprocessing</h1> <ul> <li>Use XY coordinates</li> <li>Normalize the coordinates between the eyebrows to (0,0).</li> <li>Compare the number of frames detected for the right and left hands, and flip only when the number of frames detected for the right hand is less than that of the left hand (x=-x).</li> <li>Use XY coordinates of 21 feature points of the right hand (flip left hand) and 40 feature points of the lips, for a total of 122 dimensions.</li> <li>Delete frames in which the feature points of the hand have not been detected. Therefore, the number of frames for the hand and the number of frames for the lips will be different."</li> </ul> <h1>Modeling</h1> <ul> <li>I have constructed two types of 1D CNNs. </li> <li>The first is a model that classifies fixed-length sequences (1DCNN-FixLen). </li> <li>The second is a model that classifies variable-length sequences (1DCNN-VariableLen). </li> <li>Both models have a hand backbone, a lips backbone, and a head for classification. </li> <li>The hand and lips backbones are similar, except that the convolution dimension differs: 128 for the hand backbone and 64 for the lips backbone.</li> </ul> <h2>1DCNN-FixLen</h2> <ul> <li>Interpolate the sequence to 96. This process is done for both hand and lip feature points.</li> </ul> <h3>Backbone</h3> <ul> <li>Apply conv 11 times and max_pooling 3 times to reduce sequence length to 12 (=96/(2**3)).</li> <li>Apply conv to increase the dimension to 512, then apply global_max_pooling.<ul> <li>Changing global_max_pooling to global_avg_pooling resulted in a significant performance drop. This is probably because important features in the sequence are only included in a few frames. We believe that with global_avg_pooling, these features are averaged out and disappear.</li> <li>I tried other gating mechanisms such as Gated Linear Unit, but global_max_pooling was the best.</li></ul></li> </ul> <h3>Head</h3> <ul> <li>Sum the hand features (512 dim) and lips features (512 dim).</li> <li>Apply 6 fully connected layers (shallow version, stochastic_depth=0.1) or 18 fully connected layers (deep version, stochastic_depth=0.5).<ul> <li>The deep version ties the parameters of the fully connected layers to prevent model bloat.</li></ul></li> <li>Classify into 250 classes.</li> </ul> <h2>1DCNN-VariableLen</h2> <h3>Backbone</h3> <ul> <li>Apply conv (kernel_size=3) 5 times in total and conv (kernel_size=1) 6 times in total. Unlike above, pooling is not applied.<ul> <li>Therefore, the size of the receptive field is 11 frames. 
Since reducing the number of frames further worsened the situation, and increasing the number of frames worsened the situation, it seems that the meaningful sequence was about 11 frames, even if the sequence was long.</li></ul></li> <li>Apply global_max_pooling after increasing the dimension to 512 with conv.<ul> <li>During training, the output is masked by sequence length and global_max_pooling is applied. This is because 0-padding is applied to normalize by the maximum length in the batch.</li></ul></li> </ul> <h3>Head</h3> <ul> <li>Same as FixLen with 1DCNN.</li> </ul> <h1>Training</h1> <ul> <li>3-fold participant CV.</li> <li>300 epochs, apply SWA after 15 epochs.</li> <li>AdamW optimizer.</li> <li>ArcMarginProduct is used, but only norm normalization of features and weight is applied since margin m=0.</li> <li>Stochastic Depth, 0.1 or 0.5.</li> <li>Label Smoothing Loss, epsilon=0.5.</li> </ul> <h1>Data Augmentation</h1> <ul> <li>Randomly drop frames (p=0.3).</li> <li>Augment hand position, size, and angle.</li> <li>Sequence length input to 1DCNN-FixLen is between 64 and 128.</li> </ul> <h1>CleanLab</h1> <ul> <li>CleanLab was used to remove approximately 5,000 scenes that were considered noise.<ul> <li>Calculated posterior probabilities with 21-participant-fold and used filter_by="both".</li></ul></li> <li>The model trained on the cleaned dataset increased LB in the stand-alone model, but not much when ensembling.<ul> <li>Is it because the data was too clean and the diversity of the models was reduced? I honestly don't know.</li></ul></li> <li>Also, the gap between CV and LB appeared, so I tried not to be too overconfident.<ul> <li>In the final submission, CleanLab was applied to only 2 of the 6 ensembles.</li></ul></li> </ul> <h1>Extra Experiments</h1> <p>I did some experiments, including some that weren't included in the final submission. 
The following two points can be made from this evaluation:</p> <ul> <li>CleanLab is effective (+0.003) (compare A and B, D and E).</li> <li>Large effect of ensembling FixedLen and VariableLen (+0.01) (A0+D).</li> <li>I couldn't get the prize just by ensembling fixedlen.</li> </ul> <table> <thead> <tr> <th>model</th> <th>Length</th> <th>head size</th> <th>cleanlab</th> <th>seed</th> <th>Private</th> <th>Public</th> </tr> </thead> <tbody> <tr> <td>A0</td> <td>fixed</td> <td>deep</td> <td>no</td> <td>0</td> <td>0.8661</td> <td>0.7843</td> </tr> <tr> <td>A1</td> <td>fixed</td> <td>deep</td> <td>no</td> <td>1</td> <td>0.8682</td> <td>0.7842</td> </tr> <tr> <td>B0</td> <td>fixed</td> <td>deep</td> <td>yes</td> <td>5</td> <td>0.8721</td> <td>0.7848</td> </tr> <tr> <td>B1</td> <td>fixed</td> <td>deep</td> <td>yes</td> <td>6</td> <td>0.8702</td> <td>0.7862</td> </tr> <tr> <td>C0</td> <td>fixed</td> <td>shallow</td> <td>no</td> <td>5</td> <td>0.8651</td> <td>0.7794</td> </tr> <tr> <td>C1</td> <td>fixed</td> <td>shallow</td> <td>no</td> <td>6</td> <td>0.8665</td> <td>0.7825</td> </tr> <tr> <td>D</td> <td>variable</td> <td>deep</td> <td>no</td> <td>25</td> <td>0.8653</td> <td>0.7812</td> </tr> <tr> <td>E</td> <td>variable</td> <td>deep</td> <td>yes</td> <td>35</td> <td>0.8688</td> <td>0.7840</td> </tr> <tr> <td>F</td> <td>variable</td> <td>shallow</td> <td>no</td> <td>150</td> <td>0.8647</td> <td>0.7794</td> </tr> <tr> <td>A0+A1</td> <td>fixed</td> <td>-</td> <td>no</td> <td>-</td> <td>0.8722</td> <td>0.7905</td> </tr> <tr> <td>B0+B1</td> <td>fixed</td> <td>-</td> <td>yes</td> <td>-</td> <td>0.8761</td> <td>0.7935</td> </tr> <tr> <td>A0+D</td> <td>-</td> <td>-</td> <td>no</td> <td>-</td> <td>0.8766</td> <td>0.7945</td> </tr> <tr> <td>fixedlen only (A0+A1+B0+B1+C0+C1)</td> <td>fixed</td> <td>-</td> <td>-</td> <td>-</td> <td>0.8774</td> <td>0.7962</td> </tr> <tr> <td>best sub(A0+B0+B1+C0+E+F)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.8824</td> <td>0.7999</td> </tr> </tbody> </table>
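<p>Because masked global_max_pooling over variable-length sequences is central to the 1DCNN-VariableLen backbone above, here is a small, hypothetical sketch of that operation; the author's framework and implementation details may differ.</p>
<pre><code>import torch

def masked_global_max_pool(x, lengths):
    # x: (batch, time, channels) features, lengths: (batch,) valid lengths per sample
    time_idx = torch.arange(x.size(1), device=x.device)
    pad_mask = time_idx[None, :] &gt;= lengths[:, None]         # True on padded frames
    x = x.masked_fill(pad_mask[:, :, None], float("-inf"))   # padding can never win the max
    return x.max(dim=1).values
</code></pre>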
Google - Isolated Sign Language Recognition
1st place solution - 1DCNN combined with Transformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express my gratitude to the Google for hosting this amazing competition. I have always been a big fan of the services, frameworks, and platforms provided by Google.(Colab, GCP, TensorFlow.. all of them are amazing). Without access to these offerings from Google, I wouldn't have been able to win this competition. I would also like to thank all the other participants who shared their ideas. In particular, I gained valuable insights from the ideas shared by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>.</p> <h1>TL;DR</h1> <p>My solution involved a combination of a 1D CNN and a Transformer, trained from scratch using all training data(competition data only), and used 4x seed ensemble for submission. and I initially started with PyTorch + GPU but later switched to TensorFlow + Colab TPU(tpuv2-8) to ensure compatibility with TensorFlow Lite.</p> <h1>1D CNN vs. Transformer?</h1> <p>My hypothesis was that in modeling sequential data, <br> <strong>if there is a strong inter-frame correlation, 1D CNNs would be more efficient than Transformers.</strong></p> <p>As in my experiment, the pure 1D CNN easily outperformed the Transformer and as a result, I was able to achieve a public LB score of 0.80 using only the 1D CNN at the end. </p> <p>However, there still were roles for the Transformer, which could be used on top of the 1D CNN(we can view 1d cnn as some kind of trainable tokenizer).</p> <h1>Model</h1> <pre><code> (): inp = tf.keras.Input((max_len,CHANNELS)) x = tf.keras.layers.Masking(mask_value=PAD,input_shape=(max_len,CHANNELS))(inp) ksize = x = tf.keras.layers.Dense(dim, use_bias=,name=)(x) x = tf.keras.layers.BatchNormalization(momentum=,name=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = TransformerBlock(dim,expand=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = TransformerBlock(dim,expand=)(x) dim == : x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = TransformerBlock(dim,expand=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = Conv1DBlock(dim,ksize,drop_rate=)(x) x = TransformerBlock(dim,expand=)(x) x = tf.keras.layers.Dense(dim*,activation=,name=)(x) x = tf.keras.layers.GlobalAveragePooling1D()(x) x = LateDropout(, start_step=dropout_step)(x) x = tf.keras.layers.Dense(NUM_CLASSES,name=)(x) tf.keras.Model(inp, x) </code></pre> <p>Combining CNN and Transformer is a prevalent idea in recent state-of-the-art models(coatnet, conformer, Maxvit, nextvit…). I started with an 192d 8-layer 1D CNN, then switched to a 192d (3+1)x2 conv-transformer structure, which yielded a +0.01 CV and LB improvement. </p> <p>The 1D CNN model employed depthwise convolution and causal padding. 
The Transformer used BatchNorm + Swish instead of the typical LayerNorm + GELU, due to slightly(negligible) lighter inference with the same accuracy.<br> single model has around 1.85M parameters.</p> <h1>Masking</h1> <p><strong>Handling variable-length input correctly was very crucial for ensuring train-test consistency and efficient inference</strong> , as we do not necessarily have to pad the short videos. During training, I used a max_len=384 with padding and truncation, while for inference, I only applied truncation. This approach provided sufficient inference speed and allowed the use of reasonably large models. To accurately apply masking to the 1D CNN, I used causal padding to maintain the mask index. In TensorFlow, masking can be easily implemented using tf.keras.layers.Masking at the beginning of the model. Plus, It is essential to ensure that masking is accurately applied to operations like batch normalization and global average pooling which can be affected by masking.</p> <h1>Regularization</h1> <ol> <li>Drop Path(stochastic depth, p=0.2)</li> <li>high rate of Dropout (p=0.8)</li> <li>AWP(Adversarial Weight Perturbation, with lambda = 0.2)</li> </ol> <h6> </h6> <p>As we need to train the model from scratch, regularization technics played a significant role. I used drop_path(0.2, applied after each block), dropout(0.8, applied after GAP) and AWP(Adversarial Weight Perturbation, lambda=0.2) AWP and the dropout applied after epoch 15. All these methods were very crucial for preventing overfitting when training with long epochs(&gt;300). All three methods had a significant impact on both CV and leaderboard scores, and removing any one of them led to noticeable performance drops.</p> <h1>Preprocessing</h1> <pre><code> (tf.keras.layers.Layer): (): ().__init__(**kwargs) self.max_len = max_len self.point_landmarks = point_landmarks (): tf.rank(inputs) == : x = inputs[,...] : x = inputs mean = tf_nan_mean(tf.gather(x, [], axis=), axis=[,], keepdims=) mean = tf.where(tf.math.is_nan(mean), tf.constant(,x.dtype), mean) x = tf.gather(x, self.point_landmarks, axis=) std = tf_nan_std(x, center=mean, axis=[,], keepdims=) x = (x - mean)/std self.max_len : x = x[:,:self.max_len] length = tf.shape(x)[] x = x[...,:] dx = tf.cond(tf.shape(x)[]&gt;,:tf.pad(x[:,:] - x[:,:-], [[,],[,],[,],[,]]),:tf.zeros_like(x)) dx2 = tf.cond(tf.shape(x)[]&gt;,:tf.pad(x[:,:] - x[:,:-], [[,],[,],[,],[,]]),:tf.zeros_like(x)) x = tf.concat([ tf.reshape(x, (-,length,*(self.point_landmarks))), tf.reshape(dx, (-,length,*(self.point_landmarks))), tf.reshape(dx2, (-,length,*(self.point_landmarks))), ], axis = -) x = tf.where(tf.math.is_nan(x),tf.constant(,x.dtype),x) x </code></pre> <p>I used left-right hand, eye, nose, and lips landmarks. For normalization, I used the 17th landmark located in the nose as a reference point, since it is usually located close to the center ([0.5, 0.5]). 
I used motion feature of lag1 x[:1] - x[1:], and lag2 x[:2] - x[2:](lag &gt; 2 did not help much).</p> <h1>Augmentation</h1> <ul> <li>temporal augmentation</li> </ul> <ol> <li>Random resample (0.5x ~ 1.5x to original length)</li> <li>Random masking</li> </ol> <ul> <li>Spatial augmentation</li> </ul> <ol> <li>hflip</li> <li>Random Affine(Scale, shift, rotate, shear)</li> <li>Random Cutout</li> </ol> <h1>Training</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5003978%2Ff23fef7388aa47b7a1e26f6261e6c0a4%2F.png?generation=1683114294544440&amp;alt=media" alt=""></p> <ul> <li>Epoch = 400</li> <li>Lr = 5e-4 * num_replicas = 4e-3</li> <li>Schedule = CosineDecay with no warmup</li> <li>Optimizer = RAdam with Lookahead(better than AdamW with optimal parameters)</li> <li>Loss = CCE with label smoothing=0.1 or just plain CCE</li> </ul> <h6> </h6> <p>final single model result<br> CV(participant split 5fold): 0.80<br> public LB: 0.80<br> private LB: 0.88</p> <h6> </h6> <p>Training takes around 4 hours with colab TPUv2-8.</p> <p>Single model CV was around 0.80 with participant split(5fold) at the end. I ensemble 4 different seed(with some minor differences in training configurations) for the final model and got LB 0.81. By the way, I could see slightly worse score when I submitted 4x size(384d, 16layers) single model with the same settings. I think it can achieve same or better score with better configurations.</p> <h1>Tried but not worked</h1> <ul> <li>GCNs</li> <li>More Complex augmentations: augmentation based on angle and the distance between the landmarks, grid distortion on temporal, spatial axis, etc.</li> <li>CutMix, MixUp: Not worked. Main problem was how to define new label with two different length of inputs. I could not find the correct way to implement it.</li> <li>Knowledge Distillation: I tried to use single 4x sized model and distill it with 4x seed 4x sized model, but I could not manage it to work due to lack of time.</li> </ul> <h6> </h6> <p>I'm always amazed by the fact that we did try each other’s methods, but we came up with different result. I also gave the 2D-CNN(similar to <a href="https://www.kaggle.com/kolyaforrat" target="_blank">@kolyaforrat</a> 's brilliant solution) and pure transformer approaches a try, but due to their initial weak performance and my confidence in my own hypothesis, I didn't dig any deeper. Seeing other teams succeed with the ideas I had trouble with, through their skill and hard work, has been truly inspiring. I've learned a lot through this competition, and I'm grateful for the experience. Thank you all!!! :)</p> <p>Edit: I made my code public, check <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">https://www.kaggle.com/competitions/asl-signs/discussion/406978</a></p>
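<p>To make the causal-padding depthwise convolution idea above more tangible, here is a hedged Keras sketch of one possible Conv1D block; the dimensions, kernel size and layer order are illustrative assumptions rather than the exact winning block, and <code>DepthwiseConv1D</code> requires TensorFlow 2.9+.</p>
<pre><code>import tensorflow as tf

def causal_dw_conv_block(x, dim=192, ksize=17, drop_rate=0.2):
    # x: (batch, time, dim); left-only padding keeps frame t independent of future frames
    skip = x
    x = tf.keras.layers.ZeroPadding1D(padding=(ksize - 1, 0))(x)
    x = tf.keras.layers.DepthwiseConv1D(ksize, padding="valid", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization(momentum=0.95)(x)
    x = tf.keras.layers.Dense(dim, activation="swish", use_bias=False)(x)
    x = tf.keras.layers.Dropout(drop_rate, noise_shape=(None, 1, 1))(x)
    return tf.keras.layers.add([x, skip])
</code></pre>
<p>Causal (left-only) padding also keeps the padding mask aligned with the sequence, which matters when short videos are not padded at inference time.</p>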
Google - Isolated Sign Language Recognition
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used an <strong>ensemble of six conv1d models and two versions of transformers</strong> based on the <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">public notebook</a>. The key points are <strong>data preprocessing, hard augmentation and ensemble</strong>.</p> <p>I initially tried to develop my own solution based on the aforementioned public transformer but then I found that the architecture of a model didn’t matter as much as proper work with data and ensembling, so I switched to more simple architectures like <strong>conv1d multilayer models</strong>. I used <code>Keras</code> and utilized all the advantages of native tflite layers like <code>DepthwiseConv1D</code>. My typical model looked like that:</p> <pre><code>do = model = Sequential() model.add(InputLayer(input_shape=(max_len, , ))) model.add(Reshape((max_len, *))) model.add(Conv1D(, , strides=, padding=, activation=)) model.add(BatchNormalization()) model.add(DepthwiseConv1D(, strides=, padding=, depth_multiplier=, activation=)) model.add(BatchNormalization()) model.add(Conv1D(, , strides=, padding=, activation=)) model.add(BatchNormalization()) model.add(DepthwiseConv1D(, strides=, padding=, depth_multiplier=, activation=)) model.add(BatchNormalization()) model.add(MaxPool1D(, )) model.add(Conv1D(, , strides=, padding=, activation=)) model.add(BatchNormalization()) model.add(DepthwiseConv1D(, strides=, padding=, depth_multiplier=, activation=)) model.add(BatchNormalization()) model.add(Conv1D(, , strides=, padding=, activation=)) model.add(BatchNormalization()) model.add(DepthwiseConv1D(, strides=, padding=, depth_multiplier=, activation=)) model.add(BatchNormalization()) model.add(GlobalAvgPool1D()) model.add(Dropout(rate=do)) model.add(Dense(, activation=)) model.add(BatchNormalization()) model.add(Dropout(rate=do)) model.add(Dense(, activation=)) model.add(BatchNormalization()) model.add(Dropout(rate=do)) model.add(Dense(, activation=)) </code></pre> <p>Then two of my colleagues joined me. We had a lot of ideas such as using synthetic data. We found this <a href="https://upcommons.upc.edu/bitstream/handle/2117/94313/TEPP1de1.pdf" target="_blank">paper</a>, created a script which generated synthetic 3D points of a hand parametrized by inner parameters (joints rotations) and camera model and tried to predict the inner parameters of hand by a model, based on the 3d points. And then use this pretrained model to preprocess data. But unfortunately we didn’t have enough time to finish it.</p> <p>Also we tried to train on additional data (WLASL), found that this dataset gave ~0.005-0.007 increase in local CV score but then we abandoned this idea and haven’t used additional data in our LB submissions.</p> <h2>Normalization</h2> <p>We used normalization by this reference points <code>ref = coords[:, [500, 501, 512, 513, 159, 386, 13,]]</code><br> We didn’t use depth dimension.</p> <p>We didn’t throw away frames without hands but we took the presence of a hand into account during TTA. 
</p> <h2>Augmentation</h2> <ul> <li>Random rotation, shift, and scale applied globally to each frame</li> <li>Random small shift added to every point</li> <li>Combinations of parts - for example, we took the hand from one sample and the lips from another sample with the same label. This augmentation gave a significant increase in score</li> <li>CutMix - took parts of samples from different classes. For the hand we used 0.7 as the label value, for the other parts 0.3</li> </ul> <h2>TTA</h2> <ul> <li>Random left/right padding for short sequences</li> <li>For long sequences of frames we drop frames with different probabilities, taking into account the presence of hands in a frame</li> </ul> <p>For different models we used different combinations of points: both hands / only the active hand, lips, eyes, and the top part of the pose. For some models we used only every second point of the lips/eyes.<br> For models where we used only one hand, we determined which hand is more present in the video and, if it was the left hand, mirrored all points and their coordinates.<br> We took only <strong>32 frames</strong> for all models except for one where we used 96 frames.</p> <p>These conv1d models were fast and lightweight, so we could ensemble up to six models and still had space and time for additional models and TTA. An ensemble of six such models got <strong>0.7948</strong> on the public LB and <strong>0.8711</strong> on the private LB.<br> Then we decided to collaborate with <a href="https://www.kaggle.com/sqqqqy" target="_blank">@sqqqqy</a>, who had been working on improving the transformer from a public kernel. His solution looked like this:</p> <h2>1. Transformer Model</h2> <ul> <li>Our transformer model is based on the <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">public notebook</a>, using different sequence normalization and a smaller UNITS size to reduce model parameters.</li> <li>We implemented two kinds of transformer-based models: the first learns separate embeddings for each part (just like the public notebook), the second learns one embedding from the whole xyz sequence.</li> <li>1 seed of the first model scores <strong>LB 0.77+</strong>, 1 seed of the second model scores <strong>LB 0.768</strong></li> </ul> <h2>2. Preprocessing</h2> <ul> <li>We use 20 lip points, 32 eye points, 42 hand points (left hand and right hand) and 8 pose points.</li> <li>The input sequence is normalized with shoulder, hip, lip and eye points.</li> <li>Filling the NaN values with 0.0</li> <li>Learn a motion embedding from the sequence $(d_x, d_y, \sqrt{(d_x)^2+(d_y)^2})_t$, where $$(d_x, d_y)_t = xyz_t - xyz_{t-1}$$</li> <li>The final embedding is the concatenation of the motion embedding and the xyz embedding</li> </ul> <h2>3. Augmentation</h2> <ul> <li><p>Global augmentation (apply the same aug to all frames), including rotation (-10, 10), shift (-0.1, 0.1), scale (0.8, 1.2), shear (-1.0, 1.0), flip (applied for some signs)</p></li> <li><p>Time-based augmentation (apply aug to some frames): randomly select some frames (1-8) and apply affine augmentations, randomly drop frames (fill with 0.0)</p></li> <li><p>The augmentations greatly improve the score</p></li> </ul> <h2>4. HyperParameters</h2> <ul> <li>NUM_BLOCKS 2</li> <li>NUM_HEAD 8</li> <li>lr 1e-3</li> <li>Optimizer AdamW</li> <li>Epoch 100</li> <li>LateDropout 0.2-0.3</li> <li>Label Smoothing 0.5</li> </ul> <p>Six conv1d models with TTA combined with two transformers gave us third place in this competition.</p>
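<p>The "combinations of parts" augmentation above can be sketched in a few lines; the snippet below is a hypothetical illustration (array shapes and the index layout are assumptions), not the team's code.</p>
<pre><code>import numpy as np

def combine_parts(sample_a, sample_b, lip_idx):
    # sample_a, sample_b: (n_frames, n_points, 2) sequences that share the same sign label;
    # keep the hands from sample_a and graft the lips from sample_b
    n = min(len(sample_a), len(sample_b))
    out = sample_a[:n].copy()
    out[:, lip_idx] = sample_b[:n, lip_idx]
    return out
</code></pre>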
Google - Isolated Sign Language Recognition
26th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><a href="https://github.com/dajes/asl-signs" target="_blank">Github with all the code used</a></h2> <h1>Summary</h1> <p>The most important part of the solution is the data utilization. Major improvements were from keypoints choice and mixup. External data does not help because it is from a very different distribution. Given data amount does not benefit larger models so ensembles of small models is the way to utilize given constraints to the fullest.</p> <p>Most augmentations are not helpful, because they prevent model from learning the true data distribution. So only used mirroring and mixup (0.5).</p> <h1>Inputs to the model</h1> <p>All models are trained to support sequences of up to 512 frames.</p> <h2>Preprocessing</h2> <ul> <li>Only 2d coordinates are used as 3rd dimension leads to unstable training.</li> <li>To normalize inputs all keypoints are shifted so that head is located at the origin.</li> <li>Scaling did not provide any benefit so not used.</li> <li>All nans are replaced with 0 after normalization.</li> </ul> <h2>Chosen keypoints</h2> <ul> <li>All (21) hand keypoints</li> <li>26 face keypoints</li> <li>17 pose keypoints<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2609709%2Ff5ab58ff94d879f12b65cdd7b06e5d57%2Fbody_keypoints.png?generation=1683103187741075&amp;alt=media" alt=""><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2609709%2Fb8b0058e8ce4f760fb3a0d229ffef1e3%2Fface_keypoints.png?generation=1683103204162958&amp;alt=media" alt=""></li> </ul> <h1>Architecture</h1> <ul> <li><p>LLaMa-inspired architecture. Most notable improvement comes from much better normalization RMSNorm.</p></li> <li><p>For all models head dimensions are set to 64</p></li> <li><p>Single model (Private/Public LB: 0.8543689/0.7702471)</p> <p>6 heads 5 layers 9.2M parameters</p></li> <li><p>Ensemble of 3 models (Private/Public LB: 0.8584568/0.7725324)</p> <p>2 heads 6 layers 1.7M parameters per model</p></li> </ul> <p>Larger models could be fit into file size limit, but it would time out during submission.</p> <h1>Augmentations</h1> <ul> <li>The most important is mixup. Significantly improves accuracy and allows for a much longer train before overfitting. Using with alpha=0.5.</li> <li>Mirroring, does not show much benefit on CV but can't hurt to use.</li> </ul> <h2>Augmentations that did not work:</h2> <ul> <li>Interpolation over time. Tried using linear and spline interpolation to change the speed of the video and sample at sub-frame coordinates.</li> <li>Shifting keypoints. Tried shifting keypoints in groups and individually by a small amounts.</li> <li>Rotating keypoints. Tried rotating keypoints around the head.</li> <li>Scaling keypoints. Tried scaling keypoint distance to the head.</li> <li>Mixup of 3+ datapoints. Did not have time to tune the alpha but from the start it did not show any benefit. 
So I stayed with the default mixing of two datapoints.</li> <li>Rotating and scaling the datapoints that are to be mixed up, so that only the relative pose is mixed and not the absolute position.</li> </ul> <h1>Training</h1> <ul> <li>Lion optimizer is used. Faster convergence and a better result (+1%) at the end.</li> <li>LR is 1e-4 with cosine decay to 1e-5 at the end of training.</li> <li>Batch size 128</li> <li>Dropout 0.1</li> <li>Exponential moving average of weights is used to improve training stability. Gives a 2-4% accuracy boost on CV.</li> </ul> <h2>Did not work:</h2> <ul> <li>Training a model of the data (i.e. trying to predict the next keypoint locations). Tried different frequencies of training the next-token predictor, and the best frequency turned out to be 0.</li> <li>Label smoothing. Slower convergence and a worse final result on CV.</li> </ul>
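<p>Since mixup with alpha=0.5 is named above as the single most useful augmentation, here is a minimal generic sketch of batch-level mixup on one-hot labels; it is an illustration with assumed shapes, not the author's exact code.</p>
<pre><code>import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.5):
    # x: (batch, ...) inputs, y_onehot: (batch, num_classes) soft labels
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
</code></pre>
<p>With alpha=0.5 the Beta distribution is U-shaped, so lambda usually lands close to 0 or 1 and most mixes stay fairly mild, which fits the long training schedule described above.</p>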
Google - Isolated Sign Language Recognition
28th place solution - Using Arcface is not trivial.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Google for hosting the competition :)<br> All of the work is done with <a href="https://www.kaggle.com/ydjoo12" target="_blank">@ydjoo12</a> </p> <h2>Summary</h2> <p>We used 5-layers Transformer encoder with cross entropy and sub-class Arcface loss and ensemble 6-models trained with different combintation of cross entropy and sub-class Arcface loss.</p> <h2>Data Preprocessing</h2> <ul> <li><p>129 keypoints are used. (21 for each hand, 40 for lip, 11 for pose, 16 for each eye, 4 for nose)</p></li> <li><p>Only use x and y.</p></li> <li><p>Pipeline: Augmentations, frame sampling, normalization, feature engineering and 0 imputations for NaN values.</p></li> <li><p>Frame sampling: When a length exceeds the maximum length, sampling was done at regular intervals up to the maximum length. It is better than center-crop in local CV and it minimized performance degradation while reducing maximum length.</p> <pre><code>L = (xyz) L &gt; max_len: step = (L - ) // (max_len - ) indices = [i * step i (max_len)] xyz = xyz[indices] L = (xyz) </code></pre></li> <li><p>Normalization: Standardization x and y independently.</p></li> <li><p>Feature Engineering</p> <ul> <li>motion: current xy - future xy</li> <li>hand joint distance</li> <li>time reverse difference: xy - xy.flip(dims=[0]): +0.004 on CV</li></ul></li> </ul> <h2>Augmentations</h2> <ul> <li><p>Flip pose: </p> <ul> <li>x_new = f(-(x - 0.5) + 0.5), where f is index change function.</li> <li>+0.01 on Public LB</li> <li>xy values are in [0, 1]. 
Shift to origin by subtracting 0.5 before flip, Back to original coordinates by adding 0.5</li> <li>There was no performance improvement when not moving to the origin.</li></ul></li> <li><p>Rotate pose: </p> <pre><code> (): radian = np.radians(theta) mat = np.array( [[np.cos(radian), -np.sin(radian)], [np.sin(radian), np.cos(radian)]] ) xyz[:, :, :] = xyz[:, :, :] - xyz_reshape = xyz.reshape(-, ) xyz_rotate = np.dot(xyz_reshape, mat).reshape(xyz.shape) xyz_rotate[:, :, :] + </code></pre> <ul> <li>angle between -13 degree ~ 13 degree</li> <li><a href="https://openaccess.thecvf.com/content/WACV2022W/HADCV/papers/Bohacek_Sign_Pose-Based_Transformer_for_Word-Level_Sign_Language_Recognition_WACVW_2022_paper.pdf" target="_blank">Reference</a></li></ul></li> <li><p>Interpolation (Up and Down)</p> <pre><code>xyz = F.interpolate( xyz, size=(V * C, resize), mode=, align_corners= ).squeeze() </code></pre> <ul> <li>Up-sampling(Down-sampling) up to 125%(75%) of original length </li></ul></li> </ul> <h2>Training</h2> <ul> <li>5-layers Transformer encoder with weighted cross entropy<ul> <li>Class weight is based on performance of the class.</li> <li>Accuracy on LB and CV consistently improved until stacking 4~5 layers.</li> <li>Public LBs =&gt; 1-layer(5 seeds): 0.714, 2-layers(5 seeds): 0.738, 3-layers(5 seeds): 0.748, 4-layers(5 seeds): 0.751</li> <li>We add dropout layers after self-attention and fc layer based on the PyTorch official code.</li></ul></li> <li>5-layers Transformer encoder with weighted cross entropy and sub-class Arcface loss.<ul> <li>loss = cross entropy + 0.2 * subclass(K=3) Arcface</li> <li>loss = 0.2 * cross entropy + subclass(K=3) Arcface</li></ul></li> <li>Arcface<ul> <li>+0.01 on Public LB</li> <li>Using arcface loss alone resulted in worse performance.</li> <li>Arcface armed with cross entropy converges much faster and better than cross entropy alone.</li> <li>Subclass K=3, margin=0.2, scale=32</li></ul></li> <li>Scheduled Dropout<ul> <li>+0.002 on CV</li> <li>Dropout rate on final [CLS] increased x2 after half of training epochs</li></ul></li> <li>Label Smoothing<ul> <li>parameter: 0.2</li> <li>+0.01 on CV</li></ul></li> <li>Hyper parameters<ul> <li>Epochs: 140</li> <li>Max lenght: 64</li> <li>batch size: 64</li> <li>embed dim: 256</li> <li>num head: 4</li> <li>num layers: 5</li> <li>CosineAnnealingWarmRestarts w/ lr 1e-3 and AdamW</li></ul></li> </ul> <h2>Ensemble</h2> <ul> <li>2 different seeds of Transformer with weighted cross entropy<ul> <li>Single model LB: 0.75</li></ul></li> <li>2 different seeds of weighted cross entropy + 0.2 * subclass(K=3) Arcface<ul> <li>Inference: Weighted ensemble of cross entropy and Arcface head.</li> <li>Single model LB: 0.76</li></ul></li> <li>2 different seeds of 0.2 * cross entropy + subclass(K=3) Arcface<ul> <li>Inference: Weighted ensemble of cross entropy and Arcface head.</li> <li>Single model LB: 0.75</li></ul></li> <li>6 models ensemble Public LB: 0.779</li> <li>All models are fp16</li> <li>Total Size: 20Mb</li> <li>Latency: 60ms/sample</li> </ul> <h2>Working on CV but not included in final submission</h2> <ul> <li>TTA<ul> <li>+0.000x on CV</li> <li>Sumbission Scoring Error. It might be a memory issue. </li></ul></li> <li>Angle between bones of each arm<ul> <li>0.000x on 1 fold. We couldn't fully validate it due to time limits.</li></ul></li> </ul> <h2>Not working</h2> <ul> <li>GCN embedding layer instead of Linear</li> <li>Stacking Spatial Attention &amp; Temporal Conv. 
blocks</li> <li>Distance between pose keypoints</li> <li>Removing outliers and retraining<ul> <li>We used the angle between learned Arcface subclass vectors</li> <li>About 5% (~4000 samples) were removed</li></ul></li> <li>Knowledge distillation with a bigger Transformer</li> <li>Stochastic Weight Averaging</li> <li>Using all [CLS] tokens in every layer</li> <li>Averaging all tokens instead of the [CLS] token</li> <li>Stacking with an MLP as meta-learner</li> </ul>
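<p>For readers unfamiliar with sub-class Arcface, the sketch below shows one common way to implement a K-subcenter ArcFace head with the hyperparameters quoted above (K=3, margin=0.2, scale=32); it is a generic reference implementation of the idea, and details may differ from the code actually used.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SubclassArcFace(nn.Module):
    # K weight vectors ("subcenters") per class; the margin is applied to the target class only
    def __init__(self, emb_dim, num_classes, k=3, m=0.2, s=32.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes * k, emb_dim))
        nn.init.xavier_uniform_(self.weight)
        self.num_classes, self.k, self.m, self.s = num_classes, k, m, s

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.weight))       # (B, C*K) cosines
        cos = cos.view(-1, self.num_classes, self.k).max(dim=2).values   # best subcenter per class
        theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        target = F.one_hot(labels, self.num_classes).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits
</code></pre>
<p>The scaled logits are then fed into (label-smoothed) cross-entropy and combined with the plain classification loss using the weights listed above.</p>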
Google - Isolated Sign Language Recognition
119th Place Solution: Transformer is much much better than GRU!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I want to thank <strong>Kaggle</strong>, <strong>PopSign</strong>, <strong>Google</strong>, and <strong>Partners</strong> for organizing this interesting competition! It provided me a great opportunity to learn and compete with other participants. Second, I want to congratulate to all the winners and other Kagglers for participating in this competition. </p> <p>To document my learning journey in this competition, I want to share my approach towards this competition. The full inference code is publicly shared as <a href="https://www.kaggle.com/code/nghihuynh/gislr-inference-transformer" target="_blank">GISLR: Inference Transformer</a>.</p> <hr> <p><strong>Data Preprocessing:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2Fc5c9a1b33b3c547327ed99a16323a36f%2Ffeature_processing.jpg?generation=1683047881721695&amp;alt=media" alt=""><br> I applied a simple data preprocessing pipeline with a small modification (without data normalization) from my shared notebook: <a href="https://www.kaggle.com/code/nghihuynh/gislr-eda-feature-processing" target="_blank">GISLR: EDA + Feature Processing</a>. Then, I saved it as featured_preprocessed data for further training.</p> <p><strong>Model Architecture:</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2Ffdf7168215528bb5e1e1289537440306%2Fmodel_architecture.jpg?generation=1683047933576113&amp;alt=media" alt=""></p> <table> <thead> <tr> <th><strong>Model</strong></th> <th><strong>Architecture</strong></th> <th><strong>Original Size</strong></th> <th><strong>Post Quantization (TFLite FP16 )</strong></th> </tr> </thead> <tbody> <tr> <td>GRUNet</td> <td>Landmark embedding:128; Combined embedding:396; GRU units:368; Conv1D: filter: 256, kernel_size: 5</td> <td>8.78MB</td> <td>4.4MB</td> </tr> <tr> <td>BiGRUNet</td> <td>Landmark embedding: 128; GRU units: 768; BiGRU units: 396; Conv1D: filter:396, kernel_size:5</td> <td>30.84MB</td> <td>15.4MB</td> </tr> <tr> <td>Transformer</td> <td>Landmark embedding: 256; Combined embedding: 396; GRU units: 368; Transformer: MHA units: 368, n_head:8, num_block:1; Conv1D: filters:396, kernel_size:5</td> <td>13.87MB</td> <td>6.9MB</td> </tr> </tbody> </table> <p><strong>Cross-Validation</strong>: Participant K-Fold=4<br> 2 sets of validation from fold 3= [62590, 36257, 22343, 37779]</p> <ul> <li>GRU: val_set= [22343, 37779]</li> <li>BiGRU &amp; Transformer: val_set=[37779]</li> </ul> <p><strong>Training:</strong><br> Reduce LR on plateau: factor=0.30, patient=3<br> Adam optimizer: lr=4.0e-4<br> Sparse Categorical Loss Entropy <br> Epochs:100</p> <p><strong>Results:</strong></p> <table> <thead> <tr> <th><strong>Model (Ensemble)</strong></th> <th><strong>Public LB</strong></th> <th><strong>Private LB</strong></th> </tr> </thead> <tbody> <tr> <td>GRU-BiGRU-Transformer (selected as final submission)</td> <td>0.7452</td> <td>0.82339</td> </tr> <tr> <td>Transformer (not-selected 🙁)</td> <td>0.73918</td> <td>0.82484</td> </tr> <tr> <td>Transformer (x2)-GRU (not-selected 
🙁)</td> <td>0.74217</td> <td>0.82457</td> </tr> </tbody> </table> <hr> <p><strong>Conclusion:</strong></p> <p>Special thanks to the following public notebooks, which helped me a lot during the competition.</p> <ul> <li><a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">GISLR TF Data Processing &amp; Transformer Training by Mark Wijkhuizen</a></li> <li><a href="https://www.kaggle.com/code/aikhmelnytskyy/gislr-tf-on-the-shoulders-ensamble-v2-0-69" target="_blank">GISLR TF: On the Shoulders ENSAMBLE V2 0.69 by Andrij</a></li> <li><a href="https://www.kaggle.com/code/jvthunder/lstm-baseline-for-starters-sign-language" target="_blank">LSTM Baseline for Starters - Sign Language by JvThunder</a></li> <li><a href="https://www.kaggle.com/code/hengck23/lb-0-67-one-pytorch-transformer-solution" target="_blank">[LB 0.67] one pytorch transformer solution by hengck23</a></li> </ul> <p>Things I tried that didn't work:</p> <ul> <li>Different preprocessing (increasing the number of frames, including more pose and eye data, normalizing the data)</li> <li>Positional embedding (overfitting)</li> <li>Attention mask in GRU/BiGRU (overfitting)</li> </ul>
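<p>As a closing illustration, ensembling the GRU, BiGRU and Transformer predictions can be as simple as averaging (optionally weighted) softmax probabilities; the sketch below is a generic example with placeholder model callables, not the exact submission code.</p>
<pre><code>import numpy as np

def ensemble_predict(models, x, weights=None):
    # models: callables mapping an input sequence x to a (num_classes,) logit vector
    probs = []
    for m in models:
        logits = np.asarray(m(x), dtype=np.float32)
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    w = np.ones(len(models)) if weights is None else np.asarray(weights, dtype=np.float32)
    w = w / w.sum()
    blended = sum(wi * p for wi, p in zip(w, probs))
    return int(np.argmax(blended))
</code></pre>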
Google - Isolated Sign Language Recognition
11th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizer and Kaggle for hosting this interesting challenge.<br> Especially I enjoyed this strict inference time restriction. It keeps model size reasonable and requires us for some practical technique.</p> <h2>TL;DR</h2> <ul> <li>Ensemble 5 transformer models</li> <li>Strong augmentation</li> <li>Manual model conversion from pytroch to tensorflow</li> <li>Code is available here -&gt; <a href="https://github.com/bamps53/kaggle-asl-11th-place-solution" target="_blank">https://github.com/bamps53/kaggle-asl-11th-place-solution</a></li> </ul> <h2>Overview</h2> <p>I started from <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> ‘s <a href="https://www.kaggle.com/competitions/asl-signs/discussion/391265" target="_blank">great discussion</a> and <a href="https://www.kaggle.com/code/hengck23/lb-0-67-one-pytorch-transformer-solution" target="_blank">notebook</a>. Thanks for sharing a lot of useful tricks as always!</p> <p>The changes I made are following;</p> <ul> <li>Change model architecture to CLIP transformer in HuggingFace</li> <li>Decrease parameter size to maximize latency within the range of same accuracy</li> <li>Some strong augmentations<ul> <li>Horizontal flip(p=0.5)</li> <li>Random 3d rotation(p=1, -45~45)</li> <li>Random scale(p=1, 0.5~1.5)</li> <li>Random shift(p=1, 0.7~1.3)</li> <li>Random mask frames(p=1, mask_ratio=0.5)</li> <li>Random resize (p=1, 0.5~1.5)</li></ul></li> <li>Add motion features<ul> <li>current - prev</li> <li>next - current</li> <li>Velocity </li></ul></li> <li>Longer epoch, 250 for 5 fold and 300 for all data</li> </ul> <p>For the details, please refer to the code.(planning to upload)</p> <h2>Model conversion</h2> <p>I’m too lazy to implement augmentations in tensorflow dataset, so I keep using pytorch training pipeline. But I’ve realized that inference time of the model converted by onnx_tf is way slower than bare tensorflow models. Then I’ve tried some model conversion framework like nobuco, but there were too many errors for some reason. Finally I’ve decided to write the model architecture both in pytorch and tensorflow, then manually port the weight. Thankfully HuggingFace has both pytorch and tensorflow CLIP implementation, this work is easier than I thought. 
This significantly speeds up inference time and I can put more models when ensembling.</p> <h2>Ensemble</h2> <p>I’ve tried to diversify the models as much as possible within the same accuracy range.</p> <table> <thead> <tr> <th>seed</th> <th>layers</th> <th>dim</th> <th>act</th> <th>max_len</th> <th>features</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>2</td> <td>384</td> <td>relu</td> <td>64</td> <td>lip/hand</td> </tr> <tr> <td>1</td> <td>3</td> <td>256</td> <td>relu</td> <td>48</td> <td>lip/hand/eye/motion</td> </tr> <tr> <td>2</td> <td>2</td> <td>384</td> <td>geru</td> <td>64</td> <td>lip/hand</td> </tr> <tr> <td>3</td> <td>2</td> <td>384</td> <td>geru</td> <td>64</td> <td>lip/hand/eye/motion</td> </tr> <tr> <td>4</td> <td>3</td> <td>256</td> <td>geru</td> <td>48</td> <td>lip/hand/eye/motion</td> </tr> </tbody> </table> <p>The scores are as follows;</p> <table> <thead> <tr> <th></th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>Single best model</td> <td>0.786</td> <td>0.865</td> </tr> <tr> <td>Ensemble 5 models</td> <td>0.794</td> <td>0.87</td> </tr> </tbody> </table> <h2>What didn’t work;</h2> <ul> <li>Normalization matters a lot, so I’ve tried a lot of variants but couldn’t get a better result than simple video mean/std normalization.</li> <li>More landmarks(like arm, ear, nose or pose)</li> <li>Conv1d</li> <li>Knowledge distillation</li> <li>Model soup</li> <li>Bigger model</li> <li>Pretrained model (ex. CLIP pretrained model or pretrain with NTU dataset)</li> <li>And so on..</li> </ul> <h2>Code</h2> <p><a href="https://github.com/bamps53/kaggle-asl-11th-place-solution" target="_blank">https://github.com/bamps53/kaggle-asl-11th-place-solution</a></p>
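<p>One of the augmentations listed above, random resize along the time axis, can be sketched as follows; the shapes and scale range are assumptions for illustration, not the exact training code.</p>
<pre><code>import random
import torch
import torch.nn.functional as F

def random_time_resize(xyz, scale_range=(0.5, 1.5)):
    # xyz: (T, P, C) float tensor of landmarks; resample the sequence to a random new length
    T, P, C = xyz.shape
    new_t = max(2, int(T * random.uniform(*scale_range)))
    x = xyz.permute(1, 2, 0).reshape(1, P * C, T)
    x = F.interpolate(x, size=new_t, mode="linear", align_corners=False)
    return x.reshape(P, C, new_t).permute(2, 0, 1)
</code></pre>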
Google - Isolated Sign Language Recognition
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Google and Kaggle for organizing this competition, I hope our solutions will help to create a PopSign product and lower the entering barrier to ASL. </p> <p>Below I'll give a brief overview of our solution, but before that I want to congratulate my teammate <a href="https://www.kaggle.com/timriggins" target="_blank">@timriggins</a>, with this competition he became Competition Grandmaster! The fun part is that we met during Kaggle Days World Championship in Barcelona last October, once again thanks HP and Kaggle Days for such an opportunity :D</p> <h2>Model</h2> <p>We used a transformer model which is very similar to the one <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> shared<br> Things we changed in architecture/features:</p> <ul> <li>1 -&gt; 2 transformer layers (in the final ensemble we used 3 1-layer models and 1 2-layer model)</li> <li>aside from motion and distance features for hands, our models use lip and eye distances, <br> absolute hand coordinates <code>abdcoords_r = (rhand - np.nanmean(rhand, 0))</code> <br> and acceleration <code>racc = np.pad(rh_dist[1:] - rh_dist[:-1], [[0,1], [0,0], [0,0]])</code></li> </ul> <p>In two models we used absolute learnable (init. from sinusoidal) positional encoding, <br> in other two we used learnable 1d-depthwise convolution as pe encoding </p> <pre><code>self.pe = nn.Conv1d(emb_dim, emb_dim, kernel_size=5, padding=2, stride=1, groups=emb_dim) ## x.shape (B, L, E) x = self.pe(x.permute(0, 2, 1)).permute(0, 2, 1) + x </code></pre> <h2>Augmentations</h2> <p>Empirically we found following augmentations to be effective:</p> <ul> <li>flip-aug</li> <li>random fps drop <code>xyz = xyz[::2]</code></li> <li>affine 3d - rotate around spine</li> <li>early mixup (keypoint space) &amp; manifold mixup (embedding space)</li> <li>random cuts from start-end of the frame series</li> <li>interpolation</li> </ul> <p>Mix-up was an important augmentation, it allowed us to use longer training schedule w/ cosine scheduler without overfit, all of our models were trained for 150 epochs and converged on the last epoch.</p> <h2>Pytorch to TFLite conversion</h2> <p>We used <a href="https://github.com/AlexanderLutsenko/nobuco" target="_blank">nobuco</a>, check it out, it is very intuitive </p> <h2>Fun bug</h2> <p>I made a bug in a code that led to a better performance, when implementing a mixup augmentation I did this in the model forward pass function, - </p> <pre><code>if np.random.randn() &lt; 0.5 and mode_flag == 'train': x, labels_ohe = self.mixup(x,labels_ohe) </code></pre> <p>Later I found out that I'm using normal distribution instead of a uniform one thus if we imagine a PDF of normal disribution, we can say that we're using mixup in ~70% of cases, and when I corrected it, the performance dropped! (so it's better to use mixup with higher probability) </p> <h2>Results</h2> <p>Best single model reaches 0.7316 on the hengck participant split, 0.754 if using 4-model ensemble, CV/LB correlation was perfect and each CV improvement translated to both public and private LB. Single model gets scored in approx. 
30 mins, and the ensemble takes very close to an hour. I was very worried, since we made our last submission on the final day of the competition, but thankfully it didn't time out :D</p> <h2>What we tried, but didn't work</h2> <ul> <li>External data - WLASL pretraining / adding WLASL to train / pretraining on a Greek sign language dataset; none of those gave a performance boost, and some even decreased it compared to the original initialization</li> <li>Continuous MLM / deep predictive coding pretraining - I think that the model is too shallow to benefit from pre-training; I would like to hear from other participants if they used it in their solution</li> <li>Using a 1d-CNN model - we tried a 1d-CNN with depthwise-separable convolutions </li> <li>Arcface</li> <li>Train-time overparametrization (for linear layers); the concept is described in the MobileOne paper </li> </ul>
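<p>As a rough illustration of the keypoint-space augmentations listed above (flip, random fps drop, rotation around the spine), here is a minimal sketch. The probabilities, the <code>spine_idx</code> landmark index, and the simplified 2-D rotation are assumptions for illustration, not the exact implementation:</p> <pre><code>import numpy as np

def augment_keypoints(xyz, p_fps_drop=0.5, p_rotate=0.5, max_angle_deg=15, spine_idx=0):
    """Keypoint-space augmentations on a (frames, landmarks, coords) array."""
    # random fps drop: keep every second frame
    if np.random.rand() &lt; p_fps_drop:
        xyz = xyz[::2]
    # rotate all landmarks around the spine landmark in the x-y plane
    if np.random.rand() &lt; p_rotate:
        angle = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg))
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        center = np.nanmean(xyz[:, spine_idx:spine_idx + 1, :2], axis=0, keepdims=True)
        xyz = xyz.copy()
        xyz[..., :2] = (xyz[..., :2] - center) @ rot.T + center
    return xyz
</code></pre>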
Google - Isolated Sign Language Recognition
49th place silver solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Kaggle and PopSign for hosting this competition. I enjoyed working on this problem and learned quite a bit from really good <a href="https://www.kaggle.com/competitions/asl-signs/discussion/391265" target="_blank">discussion posts</a> and <a href="https://www.kaggle.com/datasets/hengck23/asl-demo" target="_blank">code</a> shared - Thank you <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> </p> <ul> <li>Pytorch 2.0 (pure pytorch, no training library) → ONNX → TFLITE</li> <li>Max length, i.e. number of frames per video = 256 (center crop)</li> <li>Normalization by LIP, SPOSE, Lhand and Rhand (total 90 keypoints)</li> <li>Flip augmentation (p=0.5)</li> <li>Distance, motion and acceleration features (total 1050)</li> <li>Model almost the same as shared <a href="https://www.kaggle.com/code/hengck23/lb-0-73-single-fold-transformer-architecture" target="_blank">here</a> - with positional encoding 16, added layernorm in FeedForwardNetwork and 2 fully connected layers with batchnorm, CLS token + mean pooling for the final fully connected layer</li> <li>Loss function: cross entropy with label smoothing = 0.75</li> <li>Epochs 50 - dropout 0.0 for 15 epochs, dropout 0.4 from 15-35 epochs, dropout 0.2 for the remaining</li> <li>FP16 quantization reduced model size significantly - final size ~26MB and ~60 mins for 5 folds (see the sketch at the end of this write-up)</li> </ul> <p>Things that didn’t work </p> <ul> <li>External data - it improved by 3-4 ranks, but I decided not to use it: too messy data and no commercial license</li> <li>Adding more features, reducing max length, removing data with too high/low number of frames</li> <li>SWA - couldn’t get it to work</li> <li>Augment Affine improved CV and private LB but not public LB.</li> <li>Various hyper parameters like # of encoder blocks, embedding dimension</li> <li>Arcface Loss - Based on this <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406301" target="_blank">post</a> it seems Arcface loss + label smoothing didn’t work. Should have tried just Arcface and no label smoothing.</li> <li>Also tried to convert the TF model <a href="https://www.kaggle.com/code/markwijkhuizen/gislr-tf-data-processing-transformer-training" target="_blank">here</a> to PyTorch</li> </ul> <p>Based on private LB scores it seems that the data in the private LB is much less noisy than the training set. </p> <hr> <p>My code here - <a href="https://github.com/rashmibanthia/ASL" target="_blank">https://github.com/rashmibanthia/ASL</a></p>
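<p>As a rough illustration of the FP16 quantization step mentioned above, here is one common way to apply FP16 post-training quantization with the TFLite converter; the <code>saved_model_dir</code> path is a placeholder, and this is not necessarily the exact conversion path used in the write-up:</p> <pre><code>import tensorflow as tf

# FP16 post-training quantization: weights are stored in float16,
# which roughly halves the size of a float32 model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
</code></pre>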
March Machine Learning Mania 2023
1st Place Submission - Another victory for raddars code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>** I may be a bit early with this post but will update if anything ends up changing.</p> <p>As I had noted in a prior post, my submission was essentially the <a href="https://www.kaggle.com/raddar" target="_blank">@raddar</a> code from a few years back (the python version instead of the R version that is widely used). Initially I was planning on using the base code and updating the features; however, I ended up not having the time. </p> <p>I haven’t had much time for competitions over the last year or two; however, the March Madness one is always a lot of fun to come back to. Ultimately I was more lucky than anything else, as I didn’t add much past updating the code to work with the current year’s data. Basically I just verified the resulting predictions seemed appropriate and went with the results. </p> <p>For fun I used the same predictions in the ESPN Bracket Challenge and ended up in the 50th percentile in the Men’s and the 97th percentile in the Women’s. </p> <p>The notebook for my submission can be found here (<a href="https://www.kaggle.com/code/rustyb/paris-madness-2023" target="_blank">Paris Madness 2023</a>). </p> <p>In keeping with my submission, maybe I should have gone with less effort and had ChatGPT provide some commentary. Note: I actually did this, asking for some witty commentary that ended up being far more interesting (<em>make sure to check out the bottom of this post</em>). </p> <h3>Submissions</h3> <p>Best Submission: My contribution was limited to 1) commenting out the np.exp() line when calculating Team Quality, as it ended up returning quite a few inf values, and 2) no overrides in match-ups of seeds 1-4 against seeds 13-16.</p> <p>Without the changes above the model still performed really well and ended up with a score of 0.17629, which would have been good enough for 6th place on the public leaderboard at the end of the competition. </p> <p>Although the code was pretty similar to the R version that has resulted in top finishes over the last few years, I thought I’d provide a bit of commentary on the Python version. Guess I feel a bit guilty about the amount of effort I put into my submission that somehow ended up winning. </p> <h3>Features</h3> <p>Mean of the following regular season stats for each team (where T2 is the opponent of T1). So basically these are repeated four times (team 1, team 1 opponents, team 2 and team 2 opponents). Of the features below only <code>'PointDiff'</code> was a calculated variable. </p> <p><code>['T1_FGM', 'T1_FGA', 'T1_FGM3', 'T1_FGA3', 'T1_OR', 'T1_Ast','T1_TO', 'T1_Stl', 'T1_PF', 'T2_FGM', 'T2_FGA', 'T2_FGM3', 'T2_FGA3', 'T2_OR', 'T2_Ast', 'T2_TO', 'T2_Stl','T2_Blk','PointDiff']</code></p> <p>The only other features used in the model were the following, with the win ratio over the last 14 days being common in many of the models made public. <br> <code>['T1_win_ratio_14d', 'T2_win_ratio_14d', 'T1_quality', 'T2_quality', 'Seed_diff', 'T1_seed', 'T2_seed']</code></p> <h3>Team Quality</h3> <p>One of the main items in the R version of the code was the random effects GLMM.
It looks like the Python version doesn’t use random effects, which would be a difference between the two. I can’t say I’m that familiar with R or the from_formula based GLM, so I’m not sure how important this difference was. Also, even with the np.exp() removed, there were quite a few NaN values (about 25% of the training data). </p> <h3>Model</h3> <p>XGB was used with xgb.cv(folds = KFold(n_splits = 5, shuffle = True)), repeated 3 times. This is different from the original R code, which used a recommended repeat_cv of 10. (A sketch of this repeated-CV setup appears at the end of this write-up.)</p> <h1>ChatGPT Witty Commentary</h1> <h1>Title: A Slam Dunk of a Model: XGBoost's Swish Predictions for March Madness 2023</h1> <h2>Introduction:</h2> <p>March Madness 2023 proved to be a wild ride full of upsets, nail-biting finishes, and buzzer-beaters. But while the on-court action left fans on the edge of their seats, our trusty XGBoost model coolly calculated the winners behind the scenes. With only a few minor tweaks, our predictive powerhouse once again clinched the top spot in the Kaggle competition, leaving the rivals in the dust like an ankle-breaking crossover.</p> <h2>Model Overview:</h2> <p>XGBoost, the LeBron James of machine learning algorithms, is an ensemble learning technique famous for its slam-dunk performance across diverse tasks. Our winning model harnessed the power of XGBoost to tackle the NCAA tournament, predicting outcomes with such accuracy that even the most seasoned sports analysts would be left speechless.</p> <h2>Data and Features:</h2> <p>The model feasted on a buffet of historical NCAA data, devouring team stats, player performances, and even the odd tidbits of info like a hungry freshman at an all-you-can-eat dining hall. Our feature set comprised the crème de la crème of data points, cherry-picked for their impact on game outcomes:</p> <ol> <li>Team statistics: win-loss ratios, offensive and defensive wizardry, and average margin of victory.</li> <li>Player performance: on-court heroics such as points per game, rebounds, assists, and shooting percentages.</li> <li>Miscellaneous factors: the elusive home-court advantage, battle-hardened experience, coaching prowess, and the all-important team momentum leading up to the tournament.</li> </ol> <h2>Model Training and Validation:</h2> <p>Like a well-coached team running drills, our model underwent rigorous training and validation procedures. The dataset was divided into training and validation sets, and a cross-validation strategy was adopted to fine-tune the model's hyperparameters. By repeatedly putting our model through its paces, we pinpointed the winning combo of parameters that delivered the most accurate predictions.</p> <h2>Key Changes and Improvements:</h2> <p>Though our XGBoost model has been a perennial contender, we couldn't resist tweaking it to keep it ahead of the pack. The changes we made for this year's competition were akin to swapping out our playbook's dog-eared pages for fresh new strategies:</p> <ol> <li>Updated data: We retrained our model on the freshest NCAA tournament data, ensuring it was up to speed with the latest college basketball shenanigans.</li> <li>Hyperparameter tuning: A fine-tuning of our model's hyperparameters, based on the most recent competition data, turned it into an unstoppable force.</li> </ol> <h2>Conclusion:</h2> <p>Our XGBoost model's triumphant performance in the 2023 Kaggle March Madness competition is a testament to its predictive prowess.
With just a few strategic adjustments and an eye for the latest trends, our model swished its way to the top, earning the title of MVP in the world of college basketball predictions.</p>
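<p>As a rough illustration of the repeated <code>xgb.cv</code> setup described in the Model section above, here is a minimal sketch; <code>X</code> and <code>y</code> are placeholders for the feature matrix and target, and the parameters below are illustrative rather than the submission's exact values:</p> <pre><code>import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "eta": 0.05, "max_depth": 4}

repeat_cv = 3  # the original R code recommends 10
scores = []
for seed in range(repeat_cv):
    cv = xgb.cv(
        params,
        dtrain,
        num_boost_round=1000,
        folds=KFold(n_splits=5, shuffle=True, random_state=seed),
        early_stopping_rounds=25,
    )
    scores.append(cv["test-rmse-mean"].iloc[-1])

print("mean CV RMSE:", np.mean(scores))
</code></pre>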
March Machine Learning Mania 2023
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: March Machine Learning Mania 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So it's the time of the year again … As someone who doesn't usually follow basketball, this is probably the time of the year where I become a basketball "fan". Although I must say that even as a casual observer, I was deeply engaged and really enjoyed watching some of the matches, especially towards the end. </p> <h1>Models</h1> <h2>Women's Bracket</h2> <p>I didn't do too well on the women's side of the competition, with a Brier score of about 0.160, mainly because my model was pretty simple, but I felt like the predictions were not decisive enough even though I only fed it simple features.</p> <p>I used Theo's Logistic Regression as a baseline and added some of my own features, namely the aggregates (MIN / MAX / Mean / Median / etc …) of important features like Point Difference / Total Points / etc …, then did some tuning. However, I believe Theo's original notebook did way better than my women's prediction.</p> <p>I didn't spend too much time on the women's model because, from prior iterations of the competition, I was rather certain that there wouldn't be too many upsets on the women's side, so I just focused on building a model that used the most (logically) "straightforward" features and did not dive into matchups at all.</p> <h2>Men's Bracket</h2> <p>It was likely my men's bracket that pushed me up the LB. I had an overall Brier score of about 0.195, and given how many upsets there were this year, I am pretty happy with the score. Similarly, from past iterations of March Madness, I think it is easy to come to the conclusion that the men's side of the bracket will likely have more variance. I decided to build an XGB model with the following features:</p> <ul> <li>Rankings based on external rating systems (Sagarin, Pomeroy, Moore, 538 ratings, etc …), just to name a few. <ul> <li>Ranking changes throughout the season</li></ul></li> <li>Win Rates &amp; Point Differentials (Away / Home) </li> <li>Team Box Scores (Aggregates)</li> </ul> <p>Also, as I have yet to mention, there were lots of discussions on how the Brier score could encourage more 1/0 flipping and override gambling. As a risk-averse person, this is not something I would do, but it got me thinking about the prediction distribution. </p> <p>I spent a lot of time here, trying not to repeat my mistakes in last year's competition, where my predictions were too conservative.
Here are some of the more interesting things I tried:</p> <ul> <li>Different forms of scaling as post-processing (didn't work)</li> <li>Added a <code>Risk Appetite</code> feature based on a variety of features (e.g. head-to-head matchup results, seed differences, etc …) to manage the prediction distribution</li> <li>Ensembling with a simple Logistic Regression model (see the sketch at the end of this write-up)</li> </ul> <p>Surprisingly, the last one worked the best (by far), and I was able to get a set of predictions I was happy with that were not too conservative and of course not too extreme.</p> <p><strong>CV Stuff</strong></p> <p>While I think CV is important for all ML problems, I didn't really look at it too much when I was trying to play around with the prediction distributions, and I added / removed features as long as there wasn't a drop in CV across ALL the seasons used as validation. My final ensemble actually had a lower overall CV than just the XGB model itself.</p>
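<p>As a rough illustration of the XGBoost + logistic regression ensemble that worked best, here is a minimal sketch; <code>X_train</code>, <code>y_train</code>, <code>X_test</code>, the hyperparameters, and the 50/50 blend weights are placeholders, not the exact values used:</p> <pre><code>from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

xgb_model = XGBClassifier(n_estimators=500, max_depth=4, learning_rate=0.05)
lr_model = LogisticRegression(C=1.0, max_iter=1000)

xgb_model.fit(X_train, y_train)
lr_model.fit(X_train, y_train)

xgb_pred = xgb_model.predict_proba(X_test)[:, 1]
lr_pred = lr_model.predict_proba(X_test)[:, 1]

# blending pulls the tree model's confident predictions back toward
# the smoother logistic-regression probabilities
blend = 0.5 * xgb_pred + 0.5 * lr_pred
</code></pre>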
BirdCLEF 2023
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the Cornell Lab of Ornithology and the Kaggle Team for hosting this competition. It was a great opportunity to learn something new.</p> <p>In this post, I want to present a summary of my solution.</p> <h3>Datasets</h3> <ul> <li>2023/2022/2021 competition data</li> <li>Additional Xeno-Canto data containing 2023 comp. species in foreground and background</li> <li>ESC50 noise</li> <li>No-call noise from 2021 competition data</li> </ul> <h3>Models</h3> <p>I used the SED architecture (same as in <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> 4th place <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">solution</a>) with the following backbones:</p> <ul> <li>tf_efficientnet_b1_ns</li> <li>tf_efficientnet_b2_ns</li> <li>tf_efficientnet_b3_ns</li> <li>tf_efficientnetv2_s_in21k</li> </ul> <h3>Training</h3> <p>I trained all models in two steps:</p> <ul> <li>Pretrain with 2022/2021 data. I used only white noise (p=0.5) for this step</li> <li>Finetune on 2023 data with the following augmentations:<ul> <li>For waveform - Mixup (p=1) and OneOf([White noise, pink noise, brown noise, noise injection, esc50 noise, no-call noise]) (p=0.5)</li> <li>For spectrogram - Two time masks (p=0.5 each) and one freq mask (p=0.5)</li></ul></li> </ul> <p>Training details:</p> <ul> <li>All models were trained on 5-sec clips</li> <li>4-fold stratified CV split</li> <li>I used both primary and secondary labels</li> <li>BCEWithLogitsLoss with a weight for each sample based on the rating (see the sketch at the end of this write-up)</li> <li>AdamW - 5e-4 lr, 1e-3 weight decay for most of the models</li> <li>CosineLRScheduler with default parameters</li> <li>40 epochs - the best score was almost always in the last epoch, so in addition to 4 folds, I also trained a 5th model using all available data. This full-fit version was consistently better than a single fold by 0.002-0.003 on the public and private LB.</li> <li>Some models were finetuned on soft/hard pseudolabels. </li> </ul> <p>My best model was trained with a combination of actual labels for competition data and hard pseudo labels for XC data. It has 0.72714/0.81836 private/public LB and 10 minutes submission time. </p> <h3>Inference</h3> <p>The probabilities from 15 models (folds) were averaged. Six of these models have the same first three layers of the backbone, so the number of models is not entirely fair.</p> <p>I've also used ONNX Runtime and ThreadPoolExecutor - thanks to <a href="https://www.kaggle.com/leonshangguan" target="_blank">@leonshangguan</a> for sharing this <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">notebook</a>.</p> <p>Inference kernel: <a href="https://www.kaggle.com/evgeniimaslov2/birdclef-5th-place-code" target="_blank">https://www.kaggle.com/evgeniimaslov2/birdclef-5th-place-code</a><br> Github: <a href="https://github.com/yevmaslov/birdclef-2023-5th-place-solution" target="_blank">https://github.com/yevmaslov/birdclef-2023-5th-place-solution</a></p>
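<p>As a rough illustration of the rating-weighted <code>BCEWithLogitsLoss</code> mentioned in the training details, here is a minimal sketch; the mapping from rating to sample weight is an assumption, not the exact formula used:</p> <pre><code>import torch
import torch.nn.functional as F

def rating_weighted_bce(logits, targets, rating):
    # per-sample BCE, averaged over the class dimension
    per_sample = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    ).mean(dim=1)
    weights = rating / rating.max()  # e.g. scale xeno-canto ratings to (0, 1]
    return (per_sample * weights).mean()

logits = torch.randn(4, 264)                  # batch of 4 clips, 264 species
targets = torch.randint(0, 2, (4, 264)).float()
rating = torch.tensor([5.0, 4.0, 3.5, 2.0])   # recording quality ratings
loss = rating_weighted_bce(logits, targets, rating)
</code></pre>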
BirdCLEF 2023
top 7th solution - `sumix` augmentation did all the work
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4>What's important in <a href="https://www.kaggle.com/code/loonypenguin/ensemble-openvino-models-inference-notebook" target="_blank">our best inference notebook</a>?</h4> <ul> <li>Ensemble of 19 models</li> <li>Ensemble of <code>efficientnet_b2</code> from <code>torchvision</code> and <code>rexnet150</code> from <code>timm</code></li> <li>Speedup with <code>openvino</code></li> <li>For submission we trained all models with knowledge distillation on the full dataset (the <code>rkl</code> prefix is the indicator)</li> </ul> <p>Our five models (out of 19) with the <code>rkl-rx-wdfbn-rx-ema-200-full</code> prefix achieve 0.7471 on the private test set and 0.83194 on the public test set; <code>rx</code> stands for <code>rexnet150</code>, i.e. <code>rexnet150</code> models distilled from a previously trained <code>rexnet150</code>.</p> <h4>Custom Augmentations</h4> <p>Let's imagine you're at a party right now and you're surrounded by talking people. It's not hard for you to pay attention to a certain group to hear their conversation, and you can freely change your target. So the task - to pick a specific speaker out of the audio - is natural. What can we say about audio in which one person cries and another one laughs? In such audio there are two actions - laughing and crying - and this is true for birds too. With this consideration we came up with the <code>sumup</code> idea:</p> <pre><code>import torch

def sumup(waves: torch.Tensor, labels: torch.Tensor):
    # note: the argument/key names and clip bounds here are reasonable assumptions
    batch_size = len(labels)
    perm = torch.randperm(batch_size)
    waves = waves + waves[perm]
    return {
        "waves": waves,
        "labels": torch.clip(labels + labels[perm], min=0, max=1),
    }
</code></pre> <h6>Boosting score</h6> <p>What if you have an audio clip where one bird is louder than another? The audio still contains two different birds, but if the signal from the second bird is tiny, it's okay to assume that the probability of the second bird is small or even zero. </p> <pre><code>def sumix(waves: torch.Tensor, labels: torch.Tensor,
          max_percent: float = 1.0, min_percent: float = 0.3):
    # defaults follow the values used in the training pipeline below;
    # key names and literals here are reasonable assumptions
    batch_size = len(labels)
    perm = torch.randperm(batch_size)
    coeffs_1 = torch.rand(batch_size, device=waves.device).view(-1, 1) * (
        max_percent - min_percent
    ) + min_percent
    coeffs_2 = torch.rand(batch_size, device=waves.device).view(-1, 1) * (
        max_percent - min_percent
    ) + min_percent
    # linear slope for the labels: full label above 0.5, scaled down below it
    label_coeffs_1 = torch.where(coeffs_1 &gt;= 0.5, torch.ones_like(coeffs_1), 1 - 2 * (0.5 - coeffs_1))
    label_coeffs_2 = torch.where(coeffs_2 &gt;= 0.5, torch.ones_like(coeffs_2), 1 - 2 * (0.5 - coeffs_2))
    labels = label_coeffs_1 * labels + label_coeffs_2 * labels[perm]
    waves = coeffs_1 * waves + coeffs_2 * waves[perm]
    return {
        "waves": waves,
        "labels": torch.clip(labels, 0, 1),
    }
</code></pre> <p>We propose yet another data-mixing customization specific to the audio domain - the <code>sumix</code> augmentation. As you can see, we used a linear slope in our experiments, but it's an open question which label slope is better to use; maybe <code>min_percent = 0.3</code> doesn't need a slope at all. This augmentation is applicable to any audio classification task. <code>sumix</code> is better than <code>sumup</code> because it also affects loudness, i.e. it implies another audio augmentation. </p> <p>When we decided to check solutions from past competitions, we tried to add background noise to our pipeline, but noise didn't show any improvement.
Our thoughts: incorporating new background noise into the audio is already covered by <code>sumix</code> or <code>sumup</code>. For Gaussian/colored noise we have a similar explanation for why those augmentations didn't work: the competition data is not clean and consists of noisy samples, so <code>sumix</code> and <code>sumup</code> already propagate noise and make it diverse by themselves.</p> <h4>Training pipeline</h4> <ul> <li><code>concatmix</code> with probability 0.5 - <a href="https://github.com/dazzle-me/birdclef-2022-3rd-place-solution/blob/main/experiments/new-metrics-exp-17/fold_4/saved_src/models/layers/concat_mix.py" target="_blank">implementation</a></li> <li><code>sumix(min_percent=0.3, max_percent=1.0)</code> with probability 1</li> <li><code>mixup</code> on mel-spectrograms with <code>Beta(1.5, 1.5)</code> with probability 1</li> <li><code>cutmix</code> on mel-spectrograms with <code>Beta(1.5, 1.5)</code> with probability 0.5</li> <li><code>AdamW</code> with constant <code>LR</code> 1e-3 and <code>weight_decay</code> 1e-2 for all models</li> <li><code>binary_cross_entropy</code> for every class; labels are primary labels plus secondary labels</li> <li>Random 5 continuous seconds from the audio at training, first 5 seconds at validation</li> <li>We turned off <code>weight_decay</code> for all normalization layers</li> <li><code>EMA</code> with <code>decay</code> 0.999</li> <li>Almost total correlation between LB and our CV; an ensemble of 20 models selected as best on CV achieves 0.75021 on the private test set and 0.83328 on the public test set, and this ensemble is our top-3 submission on the public test set</li> </ul> <h4>Knowledge distillation</h4> <p>After training a model, we use it as a teacher for another model with the same architecture, augmentations, <code>AdamW</code>, <code>EMA</code>, etc., except the <code>loss</code>. At the second stage, with distillation, we use $$L = 0.33 * BceFromTargets + 0.34 * KL(probs_{student} || probs_{teacher}) + 0.33 * KL(probs_{teacher} || probs_{student}) $$</p> <h4>Models</h4> <ul> <li><code>rexnet150</code> with <code>dropout</code> 0.3 and <code>drop_path</code> 0.2</li> <li><code>rexnet150-time</code> - same <code>rexnet150</code> but with an attention head, distilled from a <code>rexnet150</code> without the <code>time</code>-aware head</li> <li><code>efficientnet-b2</code> with <code>dropout</code> 0.3 and <code>drop_path</code> 0.2</li> <li><code>efficientnet-b2-time</code> - analogously, distilled from <code>efficientnet-b2</code></li> </ul> <p>The hacks mentioned above allow reaching a score of 0.7471 with 5 <code>rexnet150</code> models trained on the full data with <code>batch_size=336</code> for 200 epochs and knowledge distillation on a single <code>A100 (40GB)</code>. To boost performance we ensemble <code>rexnet150</code> with <code>efficientnet-b2</code> from <code>torchvision</code>. To boost performance yet another time we use an attention head - a copy-paste of the <code>self-attention</code> operation from <code>transformers</code>.
All experiments were conducted in mixed precision with the <code>bfloat16</code> type.</p> <p>An ensemble of 5 models with the prefix <code>rkl-rx150-wdfbn-rx150-time-ema-ep200-full-{5seeds}</code>, i.e. <code>rexnet150</code> with an attention head for time awareness distilled from a simple <code>rexnet150</code>, achieves 0.75151 on the private set and 0.83261 on the public set.</p> <h4>We didn't try:</h4> <ul> <li>External data not from BirdClef2021/BirdClef2022</li> <li>SSL</li> <li>SED models</li> <li>Utilization of BirdNet</li> <li><code>eca_nfnet</code> from <code>timm</code></li> </ul> <h4>We tried, but it didn't work:</h4> <ul> <li>Pretraining our models on BirdClef2021/BirdClef2022; we made a few attempts, it didn't show any improvement on the LB, and running experiments became much slower, so we gave up the idea</li> <li>Any background noise (even the noise used in the previous year's competition)</li> <li>Any LR scheduler</li> <li>Any noise (Gaussian, colored)</li> <li><code>tf_efficientnet_b2_ns</code> from <code>timm</code></li> <li><code>SwinV2</code>/<code>ConvNeXt</code> from <code>torchvision</code></li> <li>Manifold MixUp</li> <li><code>focal_loss</code>, <code>soft_focal_loss</code></li> <li>Weighted samplers</li> <li><code>SpecAugment</code></li> </ul> <h4>Speeding up inference time</h4> <p>We measured inference time with random input in a Kaggle kernel; I don't remember exactly which Intel processor it was, but in our Kaggle notebook we often got an <code>Intel Xeon 2.2 GHz</code>. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13651743%2F0d18919aee04af8f54bfe0880472543d%2Fphoto_2023-05-25%2023.34.07.jpeg?generation=1685046878803412&amp;alt=media" alt=""></p> <p>Our thoughts: <code>ONNX</code> by itself utilizes 2 CPUs; <code>OpenVino</code> is faster because it additionally fuses some layers for inference and compiles for the specific CPU of the submission notebook, taking advantage of the processor's type.</p> <h4>Appendix: <code>MultiHeadAttentionClassifier</code>'s implementation. All hyperparameters are preserved</h4>
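<p>A minimal sketch of such a time-attention classifier head, assuming a standard <code>nn.MultiheadAttention</code> pooling over the time axis; the layer names, sizes and head count below are illustrative assumptions rather than the preserved hyperparameters:</p> <pre><code>import torch
import torch.nn as nn

class MultiHeadAttentionClassifier(nn.Module):
    def __init__(self, in_features: int, num_classes: int, num_heads: int = 8):
        super().__init__()
        # learnable query token that attends over the time steps
        self.query = nn.Parameter(torch.randn(1, 1, in_features))
        self.attn = nn.MultiheadAttention(in_features, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(in_features)
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x):  # x: (B, T, C) backbone features over time
        q = self.query.expand(x.size(0), -1, -1)
        pooled, _ = self.attn(q, x, x)      # attention-pool over the time axis
        pooled = self.norm(pooled.squeeze(1))
        return self.fc(pooled)
</code></pre>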
BirdCLEF 2023
20th place solution: SED + CNN ensemble using onnx
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank the competition hosts for organizing this competition and also the participants.</p> <p>I would especially like to express a huge thank you to my teammates <a href="https://www.kaggle.com/yokuyama" target="_blank">@yokuyama</a>, <a href="https://www.kaggle.com/yiiino" target="_blank">@yiiino</a>, <a href="https://www.kaggle.com/mobykjr" target="_blank">@mobykjr</a>, and <a href="https://www.kaggle.com/vupasama" target="_blank">@vupasama</a> !!!</p> <h2>Overview</h2> <p>My final submission is an ensemble of the following 5 models.</p> <ul> <li>eca_nfnet_l0 (SED)</li> <li>eca_nfnet_l0 (Simple CNN)</li> <li>tf_efficientnetv2_b0 (SED)</li> <li>tf_efficientnet_b0.ns (Simple CNN)</li> <li>tf_mobilenetv3_large_100 (Simple CNN)</li> </ul> <p>These models were accelerated using onnx.</p> <h2>Our Approach</h2> <ul> <li>Model<ul> <li>We used two main model structures: a SED and a simple CNN.</li> <li>For the SED model, We created it based on <a href="https://www.kaggle.com/kaerurururu" target="_blank">@kaerurururu</a>'s notebook from BirdCLEF 2022.<br> <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0</a></li></ul></li> <li>Audio<ul> <li>Torchaudio was used.</li> <li>Waveform<ul> <li>duration: 5s</li> <li>sample rate: 32000</li> <li>Secondary label was used (used with soft label at 0.5)</li> <li>If the audio was less than 5 seconds, the audio was repeated and concatenated until it exceeds 5 seconds</li> <li>Used the first 5s of the audio during validation.</li> <li>Normalized with audiomentations.Normalize</li></ul></li> <li>Mel Spectrogram<ul> <li>n_fft: 2048</li> <li>win_length: 2048</li> <li>hop_length: 320</li> <li>f_min: 50</li> <li>f_max: 14000</li> <li>n_mels: 128</li> <li>top_db: 80</li> <li>Standardized using the mean and standard deviation of the ImageNet.</li></ul></li></ul></li> <li>Model Parameters<ul> <li>drop_rate: 0.5</li> <li>drop_path_rate: 0.2</li> <li>criterion: BCEWithLogitsLoss (label smoothing: 0.0025)</li> <li>optimizer: AdamW (lr: 1.0e-3, weight_decay: 1.0e-2)</li> <li>scheduler: OneCycleLR (pct_start: 0.1, div_factor: 1.0e+3, max_lr: 1.0e-3)</li> <li>epoch: 30</li> <li>batch_size: 32</li></ul></li> <li>Data Augmentation<ul> <li>waveform<ul> <li>During training, we extracted random crops or the first 5s of the audio from the total audio. Each was done at a ratio of 50% each.</li> <li>audiomentations were used.<ul> <li>audiomentations.AddBackgroundNoise</li> <li>audiomentations.AddGaussianSNR</li> <li>audiomentations.AddGaussianNoise</li></ul></li> <li>BackgroundNoise was based on the first-place solution from Birdclef2022. 
We used this dataset.<br> <a href="https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise" target="_blank">https://www.kaggle.com/datasets/christofhenkel/birdclef2021-background-noise</a></li></ul></li> <li>melspectrogram<ul> <li>Torchaudio masking was used.<ul> <li>torchaudio.transforms.FrequencyMasking(<br> freq_mask_param=mel_specgram.shape[1] // 5<br> )</li> <li>torchaudio.transforms.TimeMasking(<br> time_mask_param=mel_specgram.shape[2] // 5<br> )</li></ul></li> <li>mixup (alpha = 0.5), cutmix (alpha = 0.5)</li></ul></li></ul></li> <li>Pretraining<ul> <li>Data from Birdclef 2021 and 2022 were used for pre-training.</li> <li>Pre-training was done with a duration of 15s and no oversampling.<ul> <li>Duration of 15s seemed to be better than 5s for pre-training.</li></ul></li></ul></li> <li>Dealing with Imbalanced Data<ul> <li>We oversampled and set up each class to have at least 50 samples.</li></ul></li> <li>Adding Nocall Data<ul> <li>The nocall data used the background noise audio used in augmentation. The nocall data was created with all labels set to 0.</li></ul></li> <li>Post-processing<ul> <li>For the SED model, the output was averaged framewise and clipwise.</li></ul></li> <li>Inference Acceleration<ul> <li>We used onnx, which was considerably faster than torchscript.</li> <li>Data preprocessing was standardized so that the same preprocessed data could be input into multiple models.</li> <li>Data was preprocessed in advance and stored in a dict. Parallel processing was used for this preprocessing.</li></ul></li> <li>CV Strategy<ul> <li>StratifiedKFold (n=5, stratify=primary_label)</li> <li>All bird species were included in the training data.</li></ul></li> </ul> <h2>What Worked</h2> <ul> <li>oversampling</li> <li>Random cropping or extraction of the first 5s of audio from the total audio during training (each executed at a ratio of 50%).</li> <li>The following augmentations<ul> <li>Background noise, Gaussian noise, mel-spectrogram masking</li></ul></li> <li>SED model</li> <li>label smoothing</li> <li>Adding Nocall Data</li> <li>Increasing the duration in pre-training</li> <li>Ensemble<ul> <li>The SED model and simple CNN ensemble were good.</li></ul></li> <li>secondary label</li> <li>Setting up many num_workers in the data loader → learning speed is greatly increased!</li> </ul> <h2>What Didn’t Work</h2> <ul> <li>The following augmentations<ul> <li>Pink noise, Random volume</li></ul></li> <li>Focal Loss</li> <li>Long durations (15s, 30s, etc.) when training on 2023 data.</li> <li>Dealing with previous and next seconds of audio data in the submission<ul> <li>Use 5 seconds before and after for a total of 15 seconds to create a mel spectrogram and predict → Notebook Timeout</li> <li>Moving average, moving weighted average</li></ul></li> <li>Large Models</li> <li>geometric mean</li> <li>LSTM head</li> <li>strong dropout</li> <li>rank ensemble</li> <li>hand labeling<ul> <li>It is thought to have led to overfitting…</li></ul></li> </ul> <h2>Code</h2> <ul> <li>GitHub (My training code) → <a href="https://github.com/moritake04/birdclef-2023" target="_blank">https://github.com/moritake04/birdclef-2023</a></li> </ul>
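<p>As a rough illustration of the ONNX-based inference acceleration described above, here is a minimal export-and-run sketch; <code>model</code>, the dummy input shape, and the tensor names are placeholders:</p> <pre><code>import torch
import onnxruntime as ort

dummy = torch.randn(1, 3, 128, 313)  # (batch, channels, mel bins, time frames)
torch.onnx.export(
    model, dummy, "bird_model.onnx",
    input_names=["mel"], output_names=["logits"],
    dynamic_axes={"mel": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)

session = ort.InferenceSession("bird_model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"mel": dummy.numpy()})[0]
</code></pre>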
BirdCLEF 2023
8th Place Solution: Implementing Multimodal Data Augmentation Methods (last updated 05/28)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners!<br> Our gratitude extends to Kaggle and the Cornell Lab of Ornithology for their exceptional efforts in organizing this intriguing competition. In particular, I was able to contribute to the improvement of accuracy with highly original methods, so I enjoyed the competition very much.<br> Allow me to give a succinct overview of my solution. I’ll provide a more detailed update on the solution in the following days.</p> <h1><strong>Summary</strong></h1> <p>In my solution, only one type of backbone is used, and multimodal data augmentation techniques drawn from object detection, recommendation systems, and NLP are used to increase diversity and improve model accuracy.</p> <h2>Preprocess Pipeline Summary</h2> <p>We built a preprocessing pipeline that prevents overfitting by designing the preprocessing so that only combinations that could actually occur are produced, while maintaining the diversity of the data. In particular, in the preprocessing for Sound Object Detection, weighting by the signal density of the spectrogram made it easier to pick out the regions that contain the contribution of the secondary labels. Thanks to that, training accuracy improved even when the weight given to the secondary labels is large.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2Ffc17c767400faa897953a3034352d16e%2F.drawio%20(1).png?generation=1685242284411201&amp;alt=media" alt=""></p> <h2>Training Pipeline</h2> <p>To capture the features of bird calls with various periodicities, we built a pipeline that first learns from a long spectrogram input, then shortens the time axis each epoch, and finally learns local features.
Preprocessing by SOD prevents noise-like offsets from being selected even for local spectrograms, so training is stable.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2F090bc781738d7cfe844801e0bcd23f6e%2F8thplacesolution-2.drawio.png?generation=1685260705814564&amp;alt=media" alt=""></p> <p>code detail: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-solution-down-scaling-detail/notebook?scriptVersionId=131297676" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-solution-down-scaling-detail/notebook?scriptVersionId=131297676</a></p> <h1><strong>Multimodal Data Augmentation Part (in Detail)</strong></h1> <p>Code detail: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-solution-notebook/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-solution-notebook/notebook</a></p> <h3>・Time Segment DownScaling Augmentation (Global to Local)</h3> <p>Building on the segment-based training used in the BirdCLEF 2021 2nd place solution, we let the model learn local features after pre-learning global features by reducing the number of time segments per clip from 15 to 1 epoch by epoch.</p> <p>By first learning the global features and then learning the local features, it becomes possible to predict the local features after understanding the global features.</p> <h3>・Weak to Strong Preprocessing by YOLOv8 (Sound Object Detection)</h3> <p>I used the data pointed out by the competition host to train the YOLOv8 object detection model, and selected the offset that maximizes the number of detected sound objects.<br> Also, by choosing frequency masks that minimize the confidence-weighted IoU with the detected bounding boxes, frequency masking effectively hides noise rather than the calls.</p> <h4>Sample Inference</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6220676%2F058db543cb9d88e54a699785bd32c3b2%2F__results___3_2.png?generation=1685058390267527&amp;alt=media" alt="Inference Sample Sound Object Detection"></p> <p>Reference Notebook<br> [1]<a href="https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-prepare-annot-data/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-prepare-annot-data/notebook</a><br> [2]<a href="https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectiontrain/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectiontrain/notebook</a><br> [3]<a href="https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-batch-predict/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/sound-object-detection-batch-predict/notebook</a><br> [4]<a href="https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectionanalysis/notebook" target="_blank">https://www.kaggle.com/code/kunihikofurugori/birdclef2023-soundobjectdetectionanalysis/notebook</a></p> <h2>・Various Mixup Strategies</h2> <p>By using various mixup methods, we tried to avoid choosing unnatural pairs.
Details of the method are described below.</p> <h3>・Geometrical Mixup</h3> <p>Assuming that geographically close samples should have similar background noise, we created a model that mixes up geographically close samples without learning the background noise.</p> <p>If the distance matrix is properly constructed, the amount of calculation will be O(n^2), so the code is written so that the parts that match up to the second decimal place of the longitude and latitude are treated as the same group.</p> <h3>・Resonance Mixup (Inspired by collaborative filtering)</h3> <p>Choosing a pair that resonates easily as a mixup destination is important for creating a more natural sound. Implemented a mixup that preferentially selects bird samples that are likely to resonate based on the collaborative filtering idea of the recommender system.</p> <h3>・Inner mixup for geometrical clustering space</h3> <p>Groups that are geographically close and have the same label do not change the label characteristics even if they share inputs and mixup. Therefore, we clustered such groups and made it possible to perform Inputmixup freely.</p> <h1><strong>Models</strong></h1> <p>My models are based on the BirdCLEF 2021 2nd place solution using backbone, eca_nfnet_l0.</p> <h1><strong>Dataset</strong></h1> <ol> <li>Bird CLEF 2023</li> <li>Bird CLEF 2022/2021/2020 (Pretraining)</li> <li>ff1010bird_nocall</li> <li><a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358" target="_blank">Zendo data (YOLOv8) </a></li> </ol> <h1><strong>CV setup</strong></h1> <p>I thought it would be difficult to create a multifold model and check the LB due to resource limitations, so I did a simple test-train split instead of out-of-fold.</p> <p>For the test data, we built validation by splitting 30 seconds into 5 second intervals and bootstrap sampling to the same size as the LB dataset size (8400 samples).</p> <p>Furthermore, we found that using only the data with the primary label showed behavior similar to LB, so we excluded the data with the secondary label from the validation and calculated.</p> <p>Inference Kernel: <a href="https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel" target="_blank">https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel</a><br> Github link: <a href="https://github.com/furu-kaggle/birdclef2023" target="_blank">https://github.com/furu-kaggle/birdclef2023</a></p> <h1>Result</h1> <table> <thead> <tr> <th>commit_id</th> <th>single inference kernel</th> <th>LB</th> <th>PB</th> <th>tag</th> </tr> </thead> <tbody> <tr> <td>4a57a984d1f4101b9f470095e6a6fae0a6cc5af2</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=130154067" target="_blank">link</a></td> <td>0.82628</td> <td><strong>0.73777</strong></td> <td>ecanfnet_nmel128_best1</td> </tr> <tr> <td>55557aea59976c5befc11cbcd077f22c1d8289c0</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129843959" target="_blank">link</a></td> <td>0.82475</td> <td>0.73301</td> <td>ecanfnet_nmel128_best3</td> </tr> <tr> <td>ccc435294a9f5f9989f5865def93da0817c3b030</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129843959" target="_blank">link</a></td> <td><strong>0.82786</strong></td> <td>0.73341</td> <td>ecanfnet_nmel128_best2</td> </tr> <tr> <td>c9e077c172368c31c2d1974e0288554ebd014b2f</td> <td><a 
href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=130288082" target="_blank">link</a></td> <td>0.82279</td> <td>0.72918</td> <td>ecanfnet_nmel64_best2</td> </tr> <tr> <td>a3791c561544adcc37507d5c27ec98e9d1c603a7</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129584055" target="_blank">link</a></td> <td>0.82172</td> <td>0.73223</td> <td>ecanfnet_nmel64_best1</td> </tr> <tr> <td>ea70ecc95de1625212465b8e0148bc996e0237f9</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/best-single-model-lb-0-826-pb-0-73777?scriptVersionId=129564172" target="_blank">link</a></td> <td>0.82149</td> <td>0.72850</td> <td>ecanfnet_nmel64_best3</td> </tr> <tr> <td>ecanfnet_nmel128_best1~3(300 stride pred) + ecanfnet_nmel64_best1~3 ensemble</td> <td><a href="https://www.kaggle.com/code/kunihikofurugori/8th-place-solution-inference-kernel?scriptVersionId=130834386" target="_blank">link</a></td> <td><strong>0.83616</strong></td> <td><strong>0.75285</strong></td> <td>best_submission</td> </tr> </tbody> </table>
BirdCLEF 2023
4th Place Solution: Knowledge Distillation Is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle and Cornell Lab of Ornithology for hosting this interesting competition. And I would like to emphasize the thanks to the bird sound recordists who provided their data through xeno-canto.</p> <h3>Solution Summary</h3> <ul> <li>Knowledge Distillation is all you need.</li> <li>Adding no-call data, xeno-canto data, and background audios(Zenodo) is effective.</li> </ul> <h3>Datasets</h3> <p>Additional datasets can be found <a href="https://www.kaggle.com/datasets/atsunorifujita/birdclef-2023-additional" target="_blank">here</a>.</p> <ul> <li>Bird CLEF 2023</li> <li>Bird CLEF 2021, 2022 (for pretraining)</li> <li>ff1010bird_nocall (5,755 files for learning no call)</li> <li>xeno-canto files not included in training dataset CC-BY-NC-SA (896 files) and CC-BY-NC-ND (5,212 files).</li> <li>Zenodo dataset (background noise)</li> <li>esc50 (rain, frog)</li> <li>aicrowd2020_noise_30sec (background noise for pretraining)</li> </ul> <h3>Models</h3> <ul> <li>My models are based on <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243463" target="_blank">the BirdCLEF 2021 2nd place solution</a> using <a href="https://github.com/huggingface/pytorch-image-models" target="_blank">timm</a>.</li> <li>My solution consists of a total of 4 models, all with eca_nfnet_l0 as the backbone. Each set is slightly different.</li> </ul> <ol> <li>MelSpectrogram (sample_rate: 32000, mel_bins: 128, fmin: 20, fmax: 16000, window_size: 2048, hop_size: 512, top_db=80.0, NormalizeMelSpec) following <a href="https://www.kaggle.com/competitions/birdclef-2021/discussion/243293" target="_blank">this solution</a>.<ul> <li>Public: 0.8312, Private: 0.74424</li></ul></li> <li>Balanced sampling with the above settings.<ul> <li>Public: 0.83106, Private: 0.74406</li></ul></li> <li><a href="https://github.com/daemon/pytorch-pcen" target="_blank">PCEN</a> (sample_rate: 32000, mel_bins: 128, fmin: 20, fmax: 16000, window_size: 2048, hop_size: 512).<ul> <li>Public: 0.83005, Private: 0.74134</li></ul></li> <li>MelSpectrogram (sample_rate: 32000, mel_bins: 64, fmin: 50, fmax: 14000, window_size: 1024, hop_size: 320).<ul> <li>Public: 0.83014, Private: 0.74201</li></ul></li> </ol> <h3>Training</h3> <p>Knowledge distillation of pre-computed predictions in <a href="https://www.kaggle.com/models/google/bird-vocalization-classifier/frameworks/TensorFlow2/variations/bird-vocalization-classifier/versions/1" target="_blank">Kaggle Models</a> (bird-vocalization-classifier) was the hallmark of my solution. This model cannot complete inference in less than 2h, but it is very powerful. cmAP_5 on my validation dataset was 0.9479. So I tried to extract useful information from this model.<br> <strong>The Kaggle models pre-computed predictions were created <a href="https://www.kaggle.com/code/atsunorifujita/extract-from-kaggle-models/notebook" target="_blank">here</a></strong>.<br> According to this <a href="https://arxiv.org/abs/2106.05237" target="_blank">paper</a>, they argued that efficient distillation required 1. consistent input, 2. aggressive mixup, and 3. a large number of epochs. 
So I did 2 and 3 because it was challenging to integrate the Kaggle model into my training pipeline. It certainly looked effective.<br> I chose only approaches that improve both CV and LB. In this competition, I just couldn't believe in one or the other.</p> <ul> <li>Using 5 StratifiedKFold. Only 1 fold did not cover all classes, so the remaining 4 were used for training.</li> <li>The evaluation metric is padded_cmap1. The best results were the same with padded_cmap5 in most cases.</li> <li>Use primary_label only</li> <li>no-call is represented by all 0</li> <li>loss: 0.1 * BCEWithLogitsLoss (primary_label) + 0.9 * KLDivLoss (from Kaggle model)<ul> <li>Softmax temperature=20 was best (tried 5, 10, 20, 30).</li></ul></li> <li>I used randomly sampled 20-sec clips for training. Audio that is less than 20 sec is repeated.</li> <li>epoch = 400 (Most models converge at 100-300)</li> <li>early stopping(pretraining=10, training=20)</li> <li>Optimizer: AdamW (lr: pretraining=5e-4, training=2.5e-4, wd: 1e-6)</li> <li>CosineLRScheduler(t_initial=10, warmup_t=1, cycle_limit=40, cycle_decay=1.0, lr_min=1e-7, t_in_epochs=True,)</li> <li>mixup p = 1.0 (It was better than p=0.5)<br></li> </ul> <p>I also encountered a situation where the training time was significantly longer when pretrained with past competition data. Thanks to everyone who suggested solutions.</p> <h4>Augmentation</h4> <ul> <li>OneOf ([Gain, GainTransition])</li> <li>OneOf ([AddGaussianNoise, AddGaussianSNR]</li> <li>AddShortNoises esc50 (rain, frog)</li> <li>AddBackgroundNoise from <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/394358#2179605" target="_blank">Zenodo</a>. The 60 minutes with the fewest bird calls were extracted from each dataset and divided into 30 sec (training only).</li> <li>AddBackgroundNoise from aicrowd2020_noise_30sec and ff1010bird_nocall (pretraining only).</li> <li>LowPassFilter</li> <li>PitchShift</li> </ul> <h4>Hardware</h4> <ul> <li>1 * RTX 3090</li> <li>Pretraining time: 30-40h per model. </li> <li>Training time: 9-12h per model.</li> </ul> <h3>Inference</h3> <p>Reducing the inference time is the part I was having trouble with. Thanks to <a href="https://www.kaggle.com/leonshangguan" target="_blank">@leonshangguan</a> for sharing his <a href="https://www.kaggle.com/code/leonshangguan/faster-eb0-sed-model-inference" target="_blank">effective approach</a>, I was able to use 4 models.</p> <ul> <li>Ensemble with simple averaging.</li> <li>4 models using PyTorch JIT. total 110min.</li> <li><strong><a href="https://www.kaggle.com/code/atsunorifujita/4th-place-solution-inference-kernel" target="_blank">Inference notebook</a></strong></li> </ul> <h3>What didn’t work</h3> <ul> <li>focal loss.</li> <li>Split a low-sample class.</li> <li>Backbone other than eca_nfnet_l0 and eca_nfnet_l1.</li> <li>Optimizers (adan, lion, ranger21, shampoo). I tried to create a custom normalize-free model but failed.</li> <li><a href="https://github.com/naver-ai/cmo" target="_blank">CMO</a> (mixup worked better when using distillation).</li> <li><a href="https://github.com/KinWaiCheuk/nnAudio/blob/master/Installation/nnAudio/features/cqt.py" target="_blank">CQT</a> (slow and degraded).</li> <li>Change to first stride(1, 1). CV was good but inference takes a long time and not a single model is completed in less than 2h.</li> <li>Pretraining Zenodo data (story before trying distillation).</li> <li>Distillation training from scratch (not use imagenet weights). 
It didn't converge in the amount of time I could tolerate.</li> <li>MelSpectrogram, PCEN, and CQT integrated into input channels in all combinations. There was no synergistic effect.</li> </ul> <h3>Ablation study</h3> <table> <thead> <tr> <th>Name</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>BaseModel</td> <td>0.80603</td> <td>0.70782</td> </tr> <tr> <td>BaseModel + Knowledge Distillation</td> <td>0.82073</td> <td>0.72752</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto</td> <td>0.82905</td> <td>0.74038</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto + Pretraining</td> <td>0.8312</td> <td>0.74424</td> </tr> <tr> <td>BaseModel + Knowledge Distillation + Adding xeno-canto + Pretraining + Ensemble (4 models)</td> <td>0.84019</td> <td>0.75688</td> </tr> </tbody> </table> <h3>Acknowledgments</h3> <p>My solution builds on contributions from participants in this and past competitions, birding enthusiasts, and the work of the machine learning community. thank you very much 🙏 <br> <strong>Training Code</strong>: <a href="https://github.com/AtsunoriFujita/BirdCLEF-2023-Identify-bird-calls-in-soundscapes" target="_blank">https://github.com/AtsunoriFujita/BirdCLEF-2023-Identify-bird-calls-in-soundscapes</a></p>
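<p>As a rough illustration of the distillation objective described above (0.1 × BCE on the primary labels plus 0.9 × KL divergence against the pre-computed teacher predictions, with softmax temperature 20), here is a minimal sketch; the reduction settings and variable names are assumptions:</p> <pre><code>import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()
kld = nn.KLDivLoss(reduction="batchmean")
T = 20.0  # softmax temperature

def distillation_loss(student_logits, teacher_logits, targets):
    hard = bce(student_logits, targets)             # primary labels (multi-hot)
    soft = kld(
        F.log_softmax(student_logits / T, dim=-1),  # student log-probabilities
        F.softmax(teacher_logits / T, dim=-1),      # teacher probabilities
    )
    return 0.1 * hard + 0.9 * soft
</code></pre>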
Google - Isolated Sign Language Recognition
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Isolated Sign Language Recognition <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><blockquote> <p>First, I would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not stop and move forward.</p> </blockquote> <p>TLDR: Transformer models with embeddings based on Linear layers for pose, lips, and eyes, and STGCN + Linear layer for the target hand</p> <p><strong>Preprocessing</strong><br> Find the target hand using NaNs in the time dimension, flip if needed, select frames where the target hand is not NaN, resize to 16 in the time dimension, and finally normalize with</p> <pre><code># the axis choice shown here is a reasonable assumption:
# statistics are taken over time and the face landmarks
xyz = (xyz - np.nanmean(xyz[:, FACE, :], axis=(0, 1))) / np.nanstd(xyz[:, FACE, :], axis=(0, 1))
</code></pre> <p><strong>Modeling</strong><br> (All validation scores are based on <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> split on participant id)<br> After getting stuck at 0.75 LB (~0.7 validation), I understood that I needed to change strategy. I went back and started modelling only with the target hand. After finding the best normalization strategy using the face, I got ~0.69 validation score with only the hand.<br> Then I tried to add pose, lips, and eyes and got ~0.71 validation score. So it was clear to me that most of the signal is in the dominant hand, and I tried to get different representations of it and ensemble them. First I tried to plot the target hand and then run 2.5D CNN or Transformer-CNN based models, and got a 0.6 validation score with the hand only. Then I tried to run STGCN (Spatio-Temporal Graph Convolutional Networks) for the target hand and got a ~0.66 score. Because the CNN takes too much time to process, I decided to drop it.
So finally, after ensembling pose, lip, eye, hand, and STGCN(hand) embeddings -&gt; attention block, I got a ~0.72 score</p> <pre><code># NPOSE, NLIP, NEYE, NHAND stand in for the landmark-group sizes
# (illustrative names; the original snippet used literal offsets)
x = torch.cat([
    self.pose_emb(x[:, :, :NPOSE, :].view(B, L, -1), x_mask)[0],
    self.lip_emb(x[:, :, NPOSE:NPOSE + NLIP, :].view(B, L, -1), x_mask)[0],
    self.reye_emb(x[:, :, NPOSE + NLIP:NPOSE + NLIP + NEYE, :].view(B, L, -1), x_mask)[0],
    self.leye_emb(x[:, :, NPOSE + NLIP + NEYE:NPOSE + NLIP + 2 * NEYE, :].view(B, L, -1), x_mask)[0],
    self.hand_emb(x[:, :, NPOSE + NLIP + 2 * NEYE:NPOSE + NLIP + 2 * NEYE + NHAND, :].view(B, L, -1), x_mask)[0],
    self.graph_emb(x[:, :, NPOSE + NLIP + 2 * NEYE:NPOSE + NLIP + 2 * NEYE + NHAND, :].permute(0, 3, 1, 2).unsqueeze(-1)),
    self.pos_embed[:L, :].unsqueeze(0).repeat(B, 1, 1),
], axis=-1)
</code></pre> <p>An example of how I ensemble it.</p> <p>Then I started to add different augmentations (here I definitely should have spent more time to increase my score even more). Most of the boost came from:</p> <ul> <li>Mixup</li> </ul> <pre><code># the dict keys and the Beta parameters here are illustrative placeholders
if np.random.random() &lt; self.args.mixup:
    indices = torch.randperm(batch["inputs"].size(0), device=batch["inputs"].device, dtype=torch.long)
    beta = np.random.beta(1.0, 1.0)
    batch["inputs"] = beta * batch["inputs"] + (1 - beta) * batch["inputs"][indices]
    batch = self.run_logits(batch)
    batch["loss"] = beta * self.args.loss(logits=batch["logits"], labels=batch["labels"]) \
        + (1 - beta) * self.args.loss(logits=batch["logits"], labels=batch["labels"][indices])
</code></pre> <ul> <li>Copy paste</li> </ul> <pre><code># the dict keys and literal values here are illustrative placeholders
if np.random.random() &lt; self.args.cutout:
    indices_to_shuffle = torch.randperm(batch["inputs"].size(0), device=batch["inputs"].device, dtype=torch.long)
    cutout_len = self.args.cutout_len
    if isinstance(cutout_len, (tuple, list)):  # a (min, max) range
        cutout_len = random.uniform(*cutout_len)
    sz = int(batch["inputs"].size(1) * cutout_len)
    ind = np.random.randint(0, batch["inputs"].size(1) - sz)
    indices_to_cut = torch.arange(ind, ind + sz, device=batch["inputs"].device, dtype=torch.long)
    beta = 1 - cutout_len
    batch["inputs"][:, indices_to_cut] = batch["inputs"][indices_to_shuffle][:, indices_to_cut]
    batch["input_mask"][:, indices_to_cut] = batch["input_mask"][indices_to_shuffle][:, indices_to_cut]
    batch = self.run_logits(batch)
    batch["loss"] = beta * self.args.loss(logits=batch["logits"], labels=batch["labels"]) \
        + (1 - beta) * self.args.loss(logits=batch["logits"], labels=batch["labels"][indices_to_shuffle])
</code></pre> <p>After that I got a final validation score of ~0.735-0.74, 0.77 LB for a single model, and 0.79 LB for the ensemble</p> <p><strong>What didn't work:</strong></p> <ul> <li>CNN on the dominant hand; didn't get a lot of boost from it.</li> <li>Different losses based on embeddings for the words.</li> <li>Funny idea: train a model with an arcface label (participant_id + sign, nearly 5230 labels for the full train set). It gave a ~0.02 improvement in validation score for a split based on label, because the LB also contains samples from train; I tried to add it to the ensemble, but didn't get any significant boost, so I decided to drop it.</li> </ul> <p>One more time I want to say thanks to everyone who supports Ukraine in such hard times.<br> Slava Ukraini.</p>
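<p>As a rough illustration of the dominant-hand selection and flipping described in the preprocessing above, here is a minimal sketch; the <code>LHAND</code>/<code>RHAND</code> index arrays and the simple mirror on the x axis are assumptions:</p> <pre><code>import numpy as np

def select_target_hand(xyz, LHAND, RHAND):
    """xyz: (frames, landmarks, coords). Mirrors the keypoints if the left hand dominates."""
    lh, rh = xyz[:, LHAND], xyz[:, RHAND]
    l_missing = np.isnan(lh[..., 0]).all(axis=1).sum()  # frames with no left-hand landmarks
    r_missing = np.isnan(rh[..., 0]).all(axis=1).sum()
    if l_missing &lt; r_missing:  # left hand is the signing hand -&gt; mirror it to the right
        xyz = xyz.copy()
        xyz[..., 0] = -xyz[..., 0]
    return xyz
</code></pre>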
Stable Diffusion - Image to Prompts
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stable Diffusion - Image to Prompts <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Kaggle staff and host for organizing the competition and congratulations to the winners. I appreciate my teammates <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a> and <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>!</p> <h2>Overview</h2> <p>We took the approach of training the image model to directly predict the target embeddings. For almost the entire duration of the competition, we used the image encoder of OpenCLIP-ViT/H as a backbone, which is a counterpart of the text encoder used in Stable Diffusion v2. In this competition, it was much stronger than usual ImageNet pre-trained models, such as EfficientNet or ConvNeXt. However, we realized that OpenCLIP-ViT/bigG and BLIP-2 worked better in the last 1-2 days, which we didn't have enough time for tuning and training fully. Our final submission was an ensemble of OpenCLIP-ViT/H, OpenCLIP-ViT/g, OpenCLIP-ViT/bigG, and BLIP-2.</p> <h2>Dataset</h2> <p>We downloaded and generated 4.5M+ images in total and used a part of them as a validation set. The validation scores correlated a lot with LB scores, although they were much higher (0.7+). The datasets we used are as follows. We extracted the prompts with cosine similarity away from each other. Also, we randomly concatenated <a href="https://github.com/krea-ai/open-prompts/tree/84d1628e7b3d97a45d237311579faf39692f2a02/modifiers" target="_blank">modifiers from Open Prompts</a> to a part of the dataset.</p> <ul> <li><a href="https://poloclub.github.io/diffusiondb/" target="_blank">DiffusionDB</a></li> <li><a href="https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion" target="_blank">MagicPrompt-Stable-Diffusion</a></li> <li><a href="https://cocodataset.org/#home" target="_blank">Microsoft COCO captions</a></li> <li><a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts/discussion/398529" target="_blank">gpt_generated, hardcoded (Kaggle)</a></li> <li><a href="https://ai.google.com/research/ConceptualCaptions/" target="_blank">Conceptual Captions</a></li> <li><a href="https://nocaps.org/" target="_blank">nocaps</a></li> <li><a href="https://shannon.cs.illinois.edu/DenotationGraph/" target="_blank">Flickr30k</a></li> </ul> <p>We first used the <a href="https://github.com/Stability-AI/stablediffusion/blob/cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf/scripts/txt2img.py" target="_blank">official script</a> and mainly used the Diffusers library to generate images. The default parameter of <code>guidance_scale</code> differed from the official script, but we used the default parameters since it did not affect the performance a lot.</p> <h2>What worked</h2> <p>The backbones we used can only get images of (224, 224). We split a (448, 448) image into 4x (224, 224) patches and concat each patch (+full image) embeddings. 
This improved the performance but slowed down training and inference by 5x, so we used it together with normal models that take (224, 224) input.</p> <p>The following small techniques helped a lot when fine-tuning the large models.</p> <ul> <li>freezing of the first layers</li> <li>layer-wise learning rate decay</li> <li>pre-trained weight decay</li> </ul> <p>Also, a contrastive loss using the text encoders worked for the CLIP models.</p> <h2>What did not work</h2> <ul> <li>Making use of Stable Diffusion's weights, such as the encoder, decoder, and UNet</li> <li>Training from Stable Diffusion's latents</li> <li>Image2text</li> <li>Auxiliary losses, such as CLIP text embeddings or prompt prediction</li> <li>Decreasing the step size in image generation (we thought it might work as data augmentation)</li> <li>kNN with trained models</li> <li>CLIP Interrogator</li> </ul> <h2>Acknowledgement</h2> <p>We deeply acknowledge great OSS such as PyTorch, PyTorch Lightning, Hugging Face, etc. We would also like to thank Preferred Networks, Inc. for allowing us to use its computational resources.</p>
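<p>As a rough illustration of the (448, 448) trick from the "What worked" section: split the image into four (224, 224) tiles, add a downscaled full view, encode each crop with the image backbone, and concatenate the embeddings before the regression head. The encoder and head below are generic placeholders (any CLIP-style image encoder returning a fixed-size embedding), not the team's exact model.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchConcatRegressor(nn.Module):
    """Encode 4 tiles of a 448x448 image plus the downscaled full image,
    then regress the 384-dim prompt embedding from the concatenation."""
    def __init__(self, encoder, emb_dim, out_dim=384):
        super().__init__()
        self.encoder = encoder                       # placeholder CLIP-style image encoder
        self.head = nn.Linear(emb_dim * 5, out_dim)

    def forward(self, x):
        # x: (B, 3, 448, 448)
        tiles = [
            x[:, :, :224, :224], x[:, :, :224, 224:],
            x[:, :, 224:, :224], x[:, :, 224:, 224:],
            F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False),
        ]
        feats = torch.cat([self.encoder(t) for t in tiles], dim=-1)   # (B, 5 * emb_dim)
        # L2-normalize because the competition metric is cosine similarity
        return F.normalize(self.head(feats), dim=-1)
</code></pre>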
BirdCLEF 2023
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Training Data</h1> <p>I use 2023 competition data only, but 3 of my models have pre-train weights on 2022 data.</p> <h1>Model Architecture</h1> <p>I use SED architecture, almost the same as <a href="https://www.kaggle.com/code/kaerunantoka/birdclef2022-use-2nd-label-f0" target="_blank">this notebook</a> with very minor modifications (e.g., emb_size, num_layers).</p> <p>I tried 8 backbones in total, including 3 large and 5 small: <br> large backbones: efficientnetv2_m, eca_nfnet_l0, seresnext50<br> small backbones: efficientnet_v0, mnasnet_100, spnasnet_100, mobilenetv2_100, resnet34</p> <p>As seresnext50 and resnet34 perform not well based on my cv, so I dropped them in the final ensemble.</p> <p>I got the information of mnasnet_100, spnasnet_100, and mobilenetv2_100 by asking ChatGPT: <code>What are small models offered by timm</code> and ChatGPT replied with these models with the number of parameters, which is very helpful. </p> <h1>Code</h1> <p>One of my inference notebooks can be found <a href="https://www.kaggle.com/code/leonshangguan/fork-of-fork-of-onnx-final-combine-bs4-b9daf6?scriptVersionId=130783179" target="_blank">here</a></p> <h1>Training Strategy</h1> <ol> <li><p>all models are trained on 5s clip.</p></li> <li><p>soft label: in the one-hot gt, I give the primary label with "1" and the secondary label with "0.3".</p></li> <li><p>data augmentation: add GaussianNoise, Background noise, PinkNoise, adjust Gain, use cut mix and mix up (very helpful).</p></li> <li><p>Progressive pretraining: the hardest part of this competition is to find a good set of parameters when applying melspectrum, especially the n_fft, hop_length, and n_mels, which affect the input image (melspectrum) size. I tried many different params, and surprisingly, I found that if you have a model trained on params set A, then you load the model as a pre-trained model (except the first and last layer), and then train it on params set B, the model will perform much better than directly training on params set B, and this is the key idea of my solution. All of my models were trained on many different parameters and finally finetuned on n_mels=128, n_fft=1024, and hop_length=320 to make it the same size, so that can be ensemble efficiently.</p></li> <li><p>loss function: BCEWithLogits + BCEFocal2WayLoss, same as previous years.</p></li> <li><p>cv strategy: I use 5fold stratified cross-validation and select the best 1 or 2 fold according to cv. </p></li> <li><p>ensemble strategy: each model has 1/2 fold for ensemble, and I have 6/7 models in total, with 6 different backbones as mentioned before. 
The reason I didn't use all folds is the computational limit set by the host.</p></li> <li><p>cpu acceleration: I use ThreadPoolExecutor as I have mentioned <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/401587" target="_blank">here</a>, and I also convert the models to ONNX, which allows model quantization and gives faster inference times.</p></li> </ol> <h1>Others</h1> <ol> <li>more than 70 of my submissions encountered the timeout error, so I would strongly request that more information be released for each submission, for example the running time <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a>. I suffered from this a lot: many nearly identical submissions sometimes passed and sometimes did not, even with identical code.</li> </ol> <p>I previously made a request regarding the submission time function, as mentioned <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337343" target="_blank">here</a>, when <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> and <a href="https://www.kaggle.com/jaafarmahmoud1" target="_blank">@jaafarmahmoud1</a> found that 10 PM to 6 AM EST is a good time to make submissions. The reason behind this recommendation is that 10 PM EST is considered late for the American region, while it corresponds to 4 AM in Europe and 10 AM in Asia, which might be too early for many participants. Consequently, this particular timeframe becomes somewhat magical, as very few people tend to make submissions during this period. Therefore, if one chooses to make a submission during this time, they could potentially benefit from it.</p> <ol start="2"> <li><p>based on my experiments, I believe the model parameters have an impact on inference time when using the CPU, i.e., with the same model architecture and the same number of parameters but different models, some may be fast and some may not. <a href="https://www.kaggle.com/chrisqiu" target="_blank">@chrisqiu</a>'s <a href="https://www.kaggle.com/competitions/birdclef-2023/discussion/409331" target="_blank">post</a> verifies this situation. As the host is trying to build apps that can be deployed in the real world using the CPU, I think this issue is worth mentioning <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a>.</p></li> <li><p>I found that both my CV and LB vary depending on the fold, which indicates that a good data split may have a great impact on the result, and some luck is needed when setting the random seed for the data split.</p></li> </ol> <h1>Acknowledgement</h1> <p>I appreciate <a href="https://www.kaggle.com/stefankahl" target="_blank">@stefankahl</a> for hosting this amazing competition and <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a> for answering my questions regarding the submission timeout error, and I thank the Kaggle community for sharing their insights, ideas, and thoughts. This is a wonderful learning experience and I hope everyone enjoys it!</p>
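<p>A small sketch of the "progressive pretraining" idea from the training strategy above: load the weights of a model trained with mel-spectrogram parameter set A into a model that will be trained with parameter set B, keeping only the tensors whose names and shapes still match (so the first and last layers are typically reinitialized). This is a generic PyTorch pattern, with hypothetical helper and file names, rather than the author's exact code.</p> <pre><code>import torch

def load_partial_weights(model, ckpt_path):
    """Copy matching weights from a checkpoint trained with different mel parameters."""
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("state_dict", state)           # unwrap Lightning-style checkpoints
    own = model.state_dict()
    kept = {k: v for k, v in state.items() if k in own and own[k].shape == v.shape}
    model.load_state_dict(kept, strict=False)
    skipped = [k for k in own if k not in kept]
    print(f"loaded {len(kept)} tensors, left {len(skipped)} at random init (e.g. first/last layers)")

# Hypothetical usage:
# model = build_sed_model(backbone="eca_nfnet_l0", num_classes=264)
# load_partial_weights(model, "sed_params_set_A.pth")
# ...then fine-tune with n_mels=128, n_fft=1024, hop_length=320
</code></pre>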
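<p>And a minimal sketch of the ThreadPoolExecutor trick mentioned under "cpu acceleration": decode several test soundscapes in parallel threads so audio loading overlaps with model inference. The soundfile backend and the function names are assumptions for illustration, not the author's exact pipeline.</p> <pre><code>import concurrent.futures as cf
import soundfile as sf   # assumed audio backend

def load_clip(path):
    wav, sr = sf.read(path, dtype="float32")
    return wav

def load_many(paths, workers=4):
    # Audio decoding is largely I/O-bound and releases the GIL,
    # so a thread pool keeps the CPU busy with inference while files load.
    with cf.ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(load_clip, paths))
</code></pre>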
BirdCLEF 2023
9th Place Solution: 7 CNN Models Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: BirdCLEF 2023 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for hosting BirdClef 2023 again this year and giving us a learning experience.</p> <p>Heartfelt Congratulations to the winners and everyone who got to learn and experience.</p> <p>I would also like to express my gratitude to my talented teammates <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a> and <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a></p> <h1>Training</h1> <p>We used efficientnet_b0, eca_nfnet_l0 and convnext_tiny architectures in our final submission.</p> <p>The training was done in multiple rounds using datasets from all previous years. All models were trained on 5s clips</p> <p>For eca_nfnet_l0 and convnext_tiny<br> Hop size 320, n_mels 64, image_size = 64x501<br> For efficientnet_b0<br> Hop size 320, n_mels 64, image_size (Bilinear image size increase) = 128x1002</p> <p>The training was done in multiple rounds:</p> <p><strong>Round 1:</strong></p> <p>Hop size 512, n_mels 64, image_size = 64x313<br> (Bilinear image size increase) = 128x636</p> <p>Models : Eca nfnet l0 (128x636), convnext tiny (128x636).</p> <p>Training data: 2022 &amp; 2023</p> <p>Augmentations: HFlip, Random Cutout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Type of Training: Competition Labels</p> <p>CV: ~.79 (Low)</p> <p><strong>Round 2:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models : Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Augmentations: HFlip, Random Cutout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Training data: 2022 &amp; 2023</p> <p>Type of Training: Distillation</p> <p>CV: ~.79 (Slight increase)</p> <p><strong>Round 3:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models : Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Training data: 2021 &amp; 2022 &amp; 2023</p> <p>Extra methods: AWP, SWA, Attention Head</p> <p>Augmentations: HFlip, Random Cutout, Pixel Dropout</p> <p>Type of Training: Competition Labels</p> <p>CV: ~.8</p> <p><strong>Round 4:</strong></p> <p>Hop size 320, n_mels 64, image_size = 64x501</p> <p>Models : Efficientnet B0 (128x1002 Bilinear increase), Eca nfnet l0 (64x501), convnext tiny (64x501).</p> <p>Training data: 2023</p> <p>Augmentations: HFlip, Random Cutout, Pixel Dropout, Pink Noise, Gaussian Noise, Noise Injection, Random Volume</p> <p>Extra methods: SWA, Attention Head</p> <p>Type of Training: Pseudo Labels </p> <p>CV: ~.81 (mild leak) </p> <p>Our CV was the first 5 seconds of the clips, this CV although was not always consistent with LB, gave us a good enough idea of what we could expect.</p> <p>Pseudo labeling improved our score a lot. 
We used pseudo-labels to make new secondary labels and gave them a hard probability of 0.5.</p> <p>The convnext model was first fine-tuned to increase the score on the targets where secondary labels exist; although this sacrificed the primary-label score, it added diversity to our ensembles.</p> <h1>Inference</h1> <p>We used OpenVINO to increase the inference speed of our models.</p> <p>We also saw that when loading an audio file and feeding it in batches, the first batch of inference is especially slow compared to the rest, so we made the pipeline load 8 audio files at once, build a data loader over all 8, and then forward pass the model; doing this increased the inference speed by a further 20%.</p> <p>So we were able to submit 7 models: 2 of efficientnet_b0 + 2 of eca_nfnet_l0 + 3 of convnext_tiny.</p> <p>We also noticed that applying sigmoid after weight averaging was more effective than applying sigmoid before weight averaging. We had also been using a softmax activation, which gave a better score than sigmoid until we reached the top of .82 (public LB); after that, sigmoid started to work better.</p> <h1>BirdNet</h1> <p>Our best private submission turned out to be the one where we ensembled our models with BirdNet. Although it scored around the level of our top 10 public submissions, it did give us a lot of variance, and we did not select it because we were a little afraid of it being a dice roll on the private LB.</p> <h1>Post Processing</h1> <p>Looking at last year’s competition, we tried increasing the confidence of a class by a factor of 1.5 if any prediction of that class in the audio file is more than .96.</p> <p>This did work on a lot of submissions and was in fact amongst our top subs, but in the end it did not play well on the private LB, and we did not select it because of the risk of overfitting the public LB.</p> <h1>What did not work</h1> <p>SED models<br> Q-Transform<br> More than 5s training<br> Rolling mean with the previous and next 5s segments</p> <p>More than 60 of our submissions had timeout errors, and on the second-to-last day of the competition 4 of 5 submissions encountered the timeout error, while before that the chance of getting a timeout in the same notebook was only 20-30%.</p>
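<p>For reference, a minimal sketch of the post-processing rule described above: boost a class by a factor of 1.5 across an entire audio file whenever any 5-second window predicts that class above 0.96. Array names and the clipping step are illustrative assumptions, not the team's exact implementation.</p> <pre><code>import numpy as np

def boost_confident_classes(probs, thresh=0.96, factor=1.5):
    """probs: (n_windows, n_classes) sigmoid outputs for one audio file."""
    boosted = probs.copy()
    confident = (probs &gt; thresh).any(axis=0)     # classes seen anywhere with very high confidence
    boosted[:, confident] = np.clip(boosted[:, confident] * factor, 0.0, 1.0)
    return boosted
</code></pre>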