Columns: Title of Competition, Title of Writeup, User, Writeup
PetFinder.my - Pawpularity Contest
54th place – single model solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks, yes with more time maybe I would have done a multi-model ensemble, but I used the 15-TTA single model instead :)</p> <p>Congrats to the top solutions and thanks for an important competition.<br> Also, thanks to fellow Kagglers contributing valuable insights, ideas, and notebooks.<br> I found inspiration and ideas in many notebooks, very useful as I entered the competition just 2 weeks ago.<br> I had this public notebook as the starting point: <a href="https://www.kaggle.com/calvchen/swin-transformer-224-meta" target="_blank">https://www.kaggle.com/calvchen/swin-transformer-224-meta</a><br> Many public notebooks also drew inspiration from this notebook: <a href="https://www.kaggle.com/cdeotte/rapids-svr-boost-17-8" target="_blank">https://www.kaggle.com/cdeotte/rapids-svr-boost-17-8</a></p> <p>The solution summary:</p> <p>Data selection</p> <p>Used the clean dataset for training together with all the metadata and image-dimension information; for inference I also used YOLOv5 with TTA to add the dog/cat features.</p> <p>Model selection</p> <p>I used FastAI with swin_large_patch4_window7_224 from the timm library. I tried many others, both higher and lower in the model rankings, and even tried the largest ones on stronger GPUs, but the Swin-L 224 worked best.</p> <p>Training</p> <p>Used the augmentation techniques Flip, Rotate, Zoom, and Warp, all with Reflection pad mode, and also added Brightness and Contrast.<br> Started the 10-fold training with 1 epoch with frozen layers, followed by 5 epochs with the layers unfrozen, saving the best models using the competition metric.<br> After finishing the training of the main model I extracted the embeddings and trained the extra regression head; for this I used a stacking approach – I trained a stacked model with input from SVR, Ridge and LR.</p> <p>Inference</p> <p>15 TTA with some of the training augmentation techniques. For the final ensemble, more weight was given to the stacked regression.<br> I also dropped the worst OOF folds, on the idea that they were badly trained rather than that their validation sets were noisy.</p> <p>That’s it! 😊</p>
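<p>The "extra regression head" above is a standard stacking setup: extract the backbone's embeddings, fit several simple regressors on them, and let a meta-model combine their predictions. Below is a minimal scikit-learn sketch of that idea; it is a hedged reconstruction, not the author's code, and the embedding/target arrays are synthetic stand-ins.</p> <pre><code>import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.ensemble import StackingRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

# Stand-ins for the real inputs: backbone embeddings (N x D) extracted from the
# trained Swin model and the Pawpularity targets. Replace with the real arrays.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2000, 1536))
targets = rng.uniform(1, 100, size=2000)

# Base learners SVR, Ridge and plain linear regression, combined by a Ridge
# meta-model -- a stacked regression head of the kind described above.
stack = StackingRegressor(
    estimators=[("svr", SVR(C=20.0)),
                ("ridge", Ridge(alpha=1.0)),
                ("lr", LinearRegression())],
    final_estimator=Ridge(alpha=1.0),
    cv=5,
)

# Out-of-fold predictions of the stacked head give an honest RMSE estimate.
oof_pred = cross_val_predict(stack, embeddings, targets, cv=5)
print("stacked head RMSE:", mean_squared_error(targets, oof_pred, squared=False))

stack.fit(embeddings, targets)  # refit on everything before inference
</code></pre>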
PetFinder.my - Pawpularity Contest
68th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you.<br> It depends on the model size: for Swin 224 I used 32, while for Swin 384 I used 8.</p> <p>Thanks, this makes me happy 😄</p> <p>You are welcome 😊</p> <p>Thanks to the organizers for this interesting challenge and congrats to everyone who enjoyed it! </p> <p><strong>My approach</strong><br> <strong>1-</strong> I split the data into 5 folds using stratified kfold.<br> <strong>2-</strong> I used Brightness(), Contrast(), Hue() and Saturation() for augmentation.<br> <strong>3-</strong> I trained Swin-L 224x224 with a 2e-5 learning rate for 10 epochs for each fold.<br> <strong>4-</strong> I applied TTA on Swin-L 224x224 with n = 4.<br> <strong>5-</strong> I trained an SVR model on the activations of Swin-L 224x224 and metadata.<br> <strong>6-</strong> I trained Swin-L 384x384 with a 2e-5 learning rate for 10 epochs for each fold.<br> <strong>7-</strong> I applied TTA on Swin-L 384x384 with n = 4.<br> <strong>8-</strong> I trained another SVR model on the activations of Swin-L 384x384 and metadata.<br> <strong>9-</strong> I combined the predictions of these four models with metadata using a ridge regression model as a blender.<br> <strong>10-</strong> I trained another Swin-L 224x224 model on the complete training data together with the test data, whose <br> predictions were taken from the blender model, for 17 epochs with a 2e-5 learning rate, and I added <br> noise to the test predictions from the ridge model during training within the range (-17.8, +17.8).<br> <strong>11-</strong> I did a weighted sum of the blender preds and the last model preds on test data, weighted according to the LB, <br> like this:<br> final_preds = first_model_preds * 0.9 + last_model_preds * 0.1</p> <p>My kernel link:<br> <a href="https://www.kaggle.com/riadalmadani/inf-final-blender-with-psudolabe-17-epoch/notebook" target="_blank">https://www.kaggle.com/riadalmadani/inf-final-blender-with-psudolabe-17-epoch/notebook</a></p>
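<p>Step 9, the ridge "blender", is the glue of this solution: it learns weights over the four models' out-of-fold predictions plus the metadata columns. The sketch below is a hedged illustration with synthetic stand-in arrays (the real inputs would be the OOF/test predictions and metadata), not the author's code.</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: OOF/test predictions of the four models
# (Swin-L 224, SVR-on-224, Swin-L 384, SVR-on-384) plus 12 metadata columns.
rng = np.random.default_rng(0)
n_train, n_test = 9912, 8
oof_preds = rng.uniform(20, 80, size=(n_train, 4))
test_preds = rng.uniform(20, 80, size=(n_test, 4))
meta_train = rng.integers(0, 2, size=(n_train, 12)).astype(float)
meta_test = rng.integers(0, 2, size=(n_test, 12)).astype(float)
y_train = rng.uniform(1, 100, size=n_train)

X_train = np.hstack([oof_preds, meta_train])
X_test = np.hstack([test_preds, meta_test])

# Step 9: ridge regression blender over model predictions + metadata.
blender = Ridge(alpha=1.0)
cv_rmse = -cross_val_score(blender, X_train, y_train, cv=5,
                           scoring="neg_root_mean_squared_error").mean()
print("blender CV RMSE:", cv_rmse)

blender.fit(X_train, y_train)
blended_test = blender.predict(X_test)

# Step 11: weighted sum of the blender and the pseudo-label model (weights tuned on the LB).
last_model_preds = rng.uniform(20, 80, size=n_test)   # stand-in for the step-10 model
final_preds = blended_test * 0.9 + last_model_preds * 0.1
</code></pre>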
PetFinder.my - Pawpularity Contest
6th Place - Multitask Learning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The best answer is to try both (freeze and unfreeze) and see what achieves the best CV score. In general i will freeze less for classification tasks and more for regression.</p> <p>There are other factors too like how similar the competition images are to the pretraining images (which are imagenet collection). When they are similar we can freeze more. And when they are different we need to unfreeze more.</p> <p>After downloading a pretrained CNN from the internet, you can display the number of layers with <code>len( base_model.layers )</code> in TensorFlow. For example, InceptionV3 has 311 layers. Then you can freeze the first 90% of layers with </p> <pre><code>for layer in base_model.layers[:-28]: layer.trainable = False </code></pre> <p>There is a great public notebook which demonstrates this <a href="https://www.kaggle.com/vishalkarangale/inceptionv3-10folds-tensorflow" target="_blank">here</a>. (Note if you fork that notebook, FYI there is CV leak. You need to create a new <code>base_model</code> for each fold. That notebook uses the same <code>base_model</code> for each fold).</p> <p>Regarding the reason, this is frequently done when fine tuning models. From their pretraining, the early layers have already learned to extract features from images (or in NLP the early layers already understand words). When we fine tune, we can use a lower learning rate for early layers and/or freeze these layers to avoid losing the pretraining.</p> <p>Additionally when doing regression (which is different than classification), I have found that freezing is often required. I'm not sure why. I think it may be because of batch normalization layers and drop out layers. There is a discussion about this <a href="https://www.kaggle.com/c/commonlitreadabilityprize/discussion/260729#1442448" target="_blank">here</a> and blog <a href="https://towardsdatascience.com/pitfalls-with-dropout-and-batchnorm-in-regression-problems-39e02ce08e4d" target="_blank">here</a></p> <p>If image is <code>192x224</code>, then the largest square is <code>192x192</code>. First i randomly select a largest square from <code>192x224</code>, next I resize it to <code>224x224</code></p> <p>In general if dim is <code>A x B</code>, then largest square is <code>min(A,B) x min(A,B)</code>. Then resize to <code>224x224</code>. (Sometimes largest square is smaller than 224, sometimes larger).</p> <p>Thanks <a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a>!</p> <p>Your second equation. But we only have 5 models if we kept every model from day 1 to 4. For example we may discard Day 3 like this:</p> <p>Day 1, CV = Day1. Keep Day 1<br> Day 2, CV = (Day1 + Day2)/2, CV increase. Keep Day 2<br> Day 3, CV = (Day1 + Day2 + Day3)/3. <strong>CV decrease. Discard Day 3</strong><br> Day 4, CV = (Day1 + Day2 + Day4)/3. CV increase. Keep Day 4<br> Day 5, CV = (Day1 + Day2 + Day4 + Day5)/4. etc etc</p> <p>And then every few days, I may remove each previous model one by one to make sure removing a previous day doesn't increase CV. 
Because sometimes an old model is no longer helpful.</p> <p>The reason I use equal weights is to guarantee that the newest model really helps. If we only add 1% like <code>CV = 0.33*Day1 + 0.33*Day2 + 0.33*Day3 + 0.01*Day4</code> then by pure luck adding Day4 may increase CV by a little bit. But if a model increase CV with equal weights that it really is helpful without luck.</p> <p>I don't think adding these features <strong>as input</strong> help the model because the model can already see this in the image. For example, an image model doesn't need to be told whether animal is dog or cat. It can learn this to nearly 99.9% accuracy on its own.</p> <p>What i think is helpful is adding the feature you suggest and others as multi task learning (i.e. <strong>as output</strong>). That encourages the model to work harder while it is learning Pawpularity and usually results in a smarter model.</p> <p>(This is what i think but i could be wrong)</p> <p>Here are some ideas that come to mind</p> <ul> <li>change batch size</li> <li>change input image size</li> <li>change backbone</li> <li>use CNN vs. Transformer</li> <li>use model pretrained on different original image datasets (1k image vs. 22k imagenet vs. etc)</li> <li>use imagenet pretrain vs. noisy-student vs. AP etc etc</li> <li>preprocess with square crops to keep aspect ratio vs. squish image into square</li> <li>data augmentation</li> <li>add meta as input</li> <li>add meta as auxiliary target</li> <li>change loss from regression, to BCE, to multiclass (put Pawpularity into buckets)</li> <li>initialize layers in different ways</li> <li>use different activation functions</li> <li>do regression with MAE instead of MSE</li> <li>have model predict breed as auxiliary</li> <li>have model predict dog cat as auxiliary</li> <li>use imagenet pretrain model <strong>without</strong> finetuning it. (Just train a head).</li> <li>modify architecture like removing downsampling from first EffNet layer</li> <li>make custom head like add spatial attention to CNN etc</li> <li>use different pooling, like attention pooling, max pooling, avg pooling, etc</li> <li>freeze model batch norms</li> <li>freeze different parts of backbone during train<br> etc etc</li> </ul> <p>Making models with the squish preprocess only improved my <code>best LB ensemble</code> (strangely). All models in my <code>best CV ensemble</code> use random square cropping (and no squish preprocess).</p> <p>This is why teaming up makes strong ensembles. Because although everyone is doing basically the same stuff, one teammate will do just one thing different. Like perhaps they will initialize the layers of their NN differently. Then when you ensemble their model with your team ensemble, it gets a surprising big boost.</p> <p>Thanks Mutopia</p> <p>Thanks <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a> . Your discussion and notebooks were very helpful. My earlier ensembles included your ViT model but as I continued to add more recent SOTA image transformers, using ViT no longer helped my CV LB. None-the-less, i learned from your code, discussion, and published experiments.</p> <p>UPDATE: I added emphasis that freezing batch norm layers was important when training EfficientNet to achieve good CV LB RSME. Team 42nd achieved CV 17.30 LB 17.92 with an EfficientNet explained <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/301259" target="_blank">here</a>. 
Also Robin Smit's great notebook demonstrates it <a href="https://www.kaggle.com/rsmits/effnet-b2-feature-models-catboost" target="_blank">here</a>. These are awesome results!</p> <p>Yes. When you do KFold CV, you get an OOF dataframe that is the same length as the train dataset. Your OOF has 1 prediction for every row of train data.</p> <pre><code>oof = np.zeros(len(train))  # one OOF prediction per training row
for fold in range(K):
    train_idx = train.loc[train.fold!=fold].index.values
    valid_idx = train.loc[train.fold==fold].index.values
    model.fit(train.loc[train_idx])
    oof[valid_idx] = model.predict(train.loc[valid_idx])
</code></pre> <p>After Day 2, you have <code>oof_day1</code> Numpy array from Day 1 and <code>oof_day2</code> Numpy array from Day 2. The ensemble is </p> <pre><code>oof = (oof_day1 + oof_day2)/2
CV_RMSE = np.sqrt(np.mean( (oof - train.y.values)**2.0 ))
</code></pre> <p>So <strong>every time</strong> you train a new model, save the <code>oof</code> to disk as <code>numpy.save('oof_day1', oof_day1)</code>. Instead of doing it once per day, you can collect many <code>OOF</code> and then load all your Numpy OOF into my notebook <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">here</a>. (In code cell 5, change <code>TOL = 0.0003</code> to a meaningful value. In the Pet Comp, I would use <code>TOL = 0.01</code> and only accept a new model if it improves RMSE by <code>0.01</code>. To be more careful, you can set this higher, like 0.015 or 0.02.)</p> <p>Early in every competition, I decide whether I will pursue a single model or an ensemble. If I believe a single model is best then I spend most of my time improving the CV LB of one model (after exploring a bunch of different models). If I believe an ensemble is best then I begin searching for diversity from the beginning. (And then yes, hill climbing is a great approach that I have used before. And when using an ensemble, it's important to read every public notebook to find diversity.)</p> <p>The decision to choose a single model or an ensemble is as follows. If the data is very noisy and the CV changes when you train the same model with a different seed. And the LB score is much different from the CV score. And it feels difficult to improve a single model. And you're basically confused about what to trust, then I go with an ensemble.</p> <p>If the CV score feels robust and behaves in a consistent manner. If the LB score follows the CV score very closely. If I notice that changes to the model improve CV and there is potential for more improvement. Then I will develop a single model. Of course in both cases (single model vs. ensemble), I will still spend time on each, but for each competition I will focus on one more than the other.</p> <p>If I have time, perhaps I will post. The idea is this. Choose a <strong>backbone</strong> and build a <strong>head</strong> with 12 outputs. Then train with the previous comp's images and predict the 12 meta targets from the previous comp. This has the benefit of improving the backbone. Next remove the head, and keep the pretrained backbone. Next build a new head with 1 output for this comp's Pawpularity. Next continue training with this comp's images and this comp's target.</p> <p>Note that this is exactly how EffNet, Swin etc. are pretrained. Someone else trained these backbones with a head with 1000 outputs. They then trained with the images from the imagenet collection which has 1000 targets. Then we download the backbone, remove the head, and add our new head.</p> <p>Congrats KhanhVD and team. 53rd out of 3500 teams is excellent!
Sorry to hear, you missed Gold. You'll get Gold next time! Good work.</p> <h1>Kaggle PetFinder Comp</h1> <p>Thanks Kaggle and Petfinder for hosting this wonderful competition. Building models to help pets was fun and rewarding. My solution is the combination of my shared discussion posts and notebooks.</p> <h1>Trust Your CV</h1> <p>For my final two submissions, I chose my best LB score (<strong>CV 17.15, LB 17.64</strong>) and my best CV score (<strong>CV 16.98, LB 17.78</strong>). My best CV score only had public leaderboard rank 120. It was <strong>scary</strong> to select it, but my final private LB 6th rank was this best CV submission!<br> <img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/catlb2.png" alt=""></p> <h1>Preprocess</h1> <p>There are at least two ways to resize images for our models. We can squish the original images into a square (distorting aspect ratio), or we can square crop (and preserve aspect ratio). I trained my models with random square crops. Not only does this create natural looking images that match how they are displayed at Petfinder website but it also provides data augmentation by using random crops during training (which helps our models generalize). <strong>This boosted CV LB</strong>.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/preprocess.png" alt=""></p> <h1>Image CNN Training</h1> <p>Training CNNs to achieve a good CV LB RSME score was more difficult than training Transformers and it required freezing backbones to train the head (and/or freezing first 66% backbone to train last 33% backbone plus head). It's also <strong>important</strong> to freeze all batchnorm layers. (This was demonstrated <a href="https://www.kaggle.com/rsmits/effnet-b2-feature-models-catboost" target="_blank">here</a> and explained <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/301259" target="_blank">here</a>). There are many great examples showing CNN training in public notebooks <a href="https://www.kaggle.com/rsmits/effnet-b2-feature-models-catboost" target="_blank">here</a>, <a href="https://www.kaggle.com/jackstapleton/d169-baseline-nc-inference" target="_blank">here</a>, and <a href="https://www.kaggle.com/vishalkarangale/inceptionv3-10folds-tensorflow" target="_blank">here</a>. My best single model CNN was <strong>EfficientNetB2 with RAPIDS SVR</strong> head. It achieved <strong>CV 17.55</strong></p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/freeze2.png" alt=""></p> <h1>Image Transformer Training</h1> <p>There are dozens of image transformers. To view all available pretrained image transformers, just type <code>import timm; timm.list_models(pretrained=True)</code>. You will see <code>beit, cait, convit, crossvit, deit, levit, pit, swin, vit, xcit</code>! Next you can fork the public <code>fastai</code> notebook <a href="https://www.kaggle.com/tanlikesmath/petfinder-pawpularity-eda-fastai-starter" target="_blank">here</a> and run with each of these models! To evaluate which is the best, you must correct the RSME metric with <code>metrics=AccumMetric(func=petfinder_rmse)</code> as explained <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/294054" target="_blank">here</a> (otherwise CV score is meaningless and wrong).</p> <p>My best single model was <code>BeIT</code> (not <code>Swin</code>). It's a transformer with BERT pretraining. Research paper <a href="https://arxiv.org/abs/2106.08254" target="_blank">here</a>. 
A <strong>single BeIT transformer</strong> achieved my best <strong>CV 17.42</strong> and best <strong>Private LB 17.00</strong> which is top 50 private LB!</p> <h1>Image Size and Data Augmentation</h1> <p>I trained models with image sizes 224, 384, 512. The advantage of a variety of image sizes is explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">here</a>. For large image sizes more augmentation was required. When using image size 512, the following augmentations were applied; Random square crop, Horizontal Flip, Vertical Flip, Transpose, Brightness, Contrast, Hue, Saturation, Rotation, Random Erase. Some of the small 224 models, only used Random square crop and Flips.</p> <h1>Multi Task Learning</h1> <p>There are two ways to use meta data. We can input it into our model or use it as an additional output (i.e. auxiliary target) during training. The easiest way to train an image transformer with additional output is by modifying the PyTorch notebook <a href="https://www.kaggle.com/ytakayama/train-pytorch-swin-5fold-some-tips" target="_blank">here</a>. The meta data provided in this competition didn't help, but using Cat Dog labels as an additionally target from <a href="https://www.kaggle.com/chrisbradley/pawpularity-cat-or-dog" target="_blank">here</a> helped boost CV LB and add model diversity.</p> <p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Jan-2022/multitask.png" alt=""></p> <h1>Hill Climbing</h1> <p>My approach in this competition was to build an ensemble slow and steady. First I corrected my local RSME computation so that CV score was correct. Then each day during the competition, I would train a new model, save the OOF predictions, and <strong>ensemble it will equal weights</strong> to my previous best CV ensemble and previous best LB ensemble. If it increased CV then i added it to my best CV ensemble. If it increased LB then I added it to my best LB ensemble. </p> <p>I describe hill climbing more <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">here</a>. The secret to hill climbing is to make new <strong>diverse</strong> models. (i.e. Try different image sizes, backbones, augmentations etc). Each new model doesn't necessarily need to have a great CV by itself. They <strong>need to be different</strong> than your previous models so that when you ensemble them to your previous models, the result is an improved CV LB. My two (hill climbing generated) ensembles contained different models. Here are the lists</p> <h3>Best CV Ensemble</h3> <p><strong>CV = 16.98, Public LB = 17.78, Private LB = 16.90</strong></p> <ul> <li>Swin 224 with Dog Cat auxiliary output</li> <li>Swin 384</li> <li>BeIT 224</li> <li>BeIT 384</li> <li>BeIT 512</li> <li>ConvNeXt 224</li> <li>ConvNeXt 384 with Dog Cat auxiliary output</li> <li>EfficientNetB2 with RAPIDS SVR head</li> </ul> <h3>Best LB Ensemble</h3> <p><strong>CV = 17.15, Public LB = 17.64, Private LB = 17.01</strong></p> <ul> <li>Swin 224 with Dog Cat auxiliary output</li> <li>Swin 384</li> <li>BeIT 224</li> <li>BeIT 384</li> <li>EfficientNetB2 with RAPIDS SVR head</li> <li>DenseNet 224 with MLP regression head</li> <li>Inception 229 with 66% frozen backbone</li> </ul> <h1>What Didn't Work</h1> <p>I tried to pseudo label external images but it didn't improve my CV nor LB. I also tried to use last year competition data by pretraining models with last year's images and last year's meta data. 
This did not help CV nor LB. I also tried to train models on last year data and infer new features on this year's data. But that didn't help either. <strong>In the end, I didn't use last year's comp data nor external data</strong>.</p> <p>After reading other winners' solutions; <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300929" target="_blank">2nd place here</a>, <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/301072" target="_blank">4th place here</a>, <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300928" target="_blank">5th place here</a>, it appears that last year's data was actually very powerful. The way to use it was to find the duplicate images from last years data that were also used this year. Then for those images (which were about 33% of all images) concatenate last years meta features onto this year's data. Then train a model to use this year's image with the concatenated last years meta data. Apparently the meta data from the previous PetFinder competition was more powerful than the meta data provided this year.</p> <h1>The Power of SOTA Image Models!</h1> <p>It was shown from <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/300938" target="_blank">1st place here</a>, <a href="https://www.kaggle.com/c/petfinder-pawpularity-score/discussion/301044" target="_blank">3rd place here</a> (and my 6th place solution) that using meta features from last year were not needed to achieve a powerful model. Today's SOTA image models are amazingly good at looking at images and extracting information!</p> <p>Thanks <a href="https://www.kaggle.com/crained" target="_blank">@crained</a> . Congratulations to you and your team for shaking upward on private LB and finishing in top 100. You must have had a good reliable local CV.</p> <p>Thanks <a href="https://www.kaggle.com/blankaf" target="_blank">@blankaf</a> . Congrats on finishing solo Silver 52nd out of 3550 teams. That's fantastic!</p>
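<p>The multitask idea that names this write-up means adding the dog/cat label (and, if desired, the meta columns) as extra <strong>outputs</strong> rather than extra inputs, so the backbone is pushed to learn more while still optimizing Pawpularity. A minimal PyTorch/timm sketch is below; the head layout, loss form and auxiliary weight are illustrative assumptions, not the author's exact configuration.</p> <pre><code>import timm
import torch
import torch.nn as nn

class MultiTaskPetModel(nn.Module):
    """One backbone, one Pawpularity head, one auxiliary dog/cat head."""
    def __init__(self, backbone="swin_large_patch4_window7_224"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.backbone = timm.create_model(backbone, pretrained=False, num_classes=0)
        dim = self.backbone.num_features
        self.paw_head = nn.Linear(dim, 1)   # main target: Pawpularity
        self.aux_head = nn.Linear(dim, 1)   # auxiliary target: dog vs. cat

    def forward(self, x):
        feats = self.backbone(x)
        return self.paw_head(feats), self.aux_head(feats)

def multitask_loss(paw_logit, aux_logit, paw_target, dogcat_target, aux_weight=0.2):
    # Pawpularity is scaled to [0, 1] and trained with BCE, as in many public
    # notebooks; the auxiliary dog/cat output also uses BCE. aux_weight is illustrative.
    bce = nn.BCEWithLogitsLoss()
    return (bce(paw_logit.squeeze(1), paw_target / 100.0)
            + aux_weight * bce(aux_logit.squeeze(1), dogcat_target))

model = MultiTaskPetModel()                        # set pretrained=True in practice
imgs = torch.randn(2, 3, 224, 224)                 # dummy batch
paw = torch.tensor([63.0, 28.0])                   # Pawpularity scores
dogcat = torch.tensor([1.0, 0.0])                  # assumed encoding: 1 = dog, 0 = cat
loss = multitask_loss(*model(imgs), paw, dogcat)
loss.backward()
</code></pre>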
PetFinder.my - Pawpularity Contest
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It may be too late, but I have shared the code. Due to the environmental constraints of Kaggle Code, the model size, augmentation and number of epochs are limited, and the score is poor.</p> <p>I think mere classification of 100-class pawpularity without label distribution learning would not be able to learn well because the ordinality of the labels is not taken into account, so I didn't tried.<br> I thought this competition was similar to the age estimation task. Age is difficult to estimate in one-year increments, and it has the characteristic of being ordinal. The same is true for pet pawpularity. And I adopted DLDL-v2, one of the SOTAs from Papers with Code's <a href="https://paperswithcode.com/task/age-estimation" target="_blank">Age Estimation</a>.</p> <p>The <a href="https://github.com/PuNeal/DLDL-v2-PyTorch/blob/master/data.py" target="_blank">code</a> I referred to used std=2. I tried around 2 and 2 was the best.<br> (Fixed the reference DLDL-v2 link to the PyTorch version.)</p> <p>Thanks. ConvNeXt looks promising for future competitions too.</p> <p>I created normal distributions with the expected value of targets and the standard deviation of 2.0.<br> <img src="https://user-images.githubusercontent.com/75870915/149624023-4e6a27d3-9f76-4845-92d6-5f5a362fc4c7.jpg" alt=""></p> <p>My solution is a weighted average ensemble of the following three type of models. NN models are implemented by original PyTorch[1, 2] and fastai[3]. 
I did not use external data.</p> <ol> <li>Deep Label Distribution Learning (DLDL) models based on <a href="https://github.com/PuNeal/DLDL-v2-PyTorch" target="_blank">DLDL-v2-PyTorch</a> (<a href="https://www.ijcai.org/proceedings/2018/0099.pdf" target="_blank">Paper</a>) (3 models)</li> <li>Regression models trained as binary classification tasks (2 models)</li> <li>fastai models based on <a href="https://www.kaggle.com/ilovescience" target="_blank">@ilovescience</a>'s <a href="https://www.kaggle.com/tanlikesmath/petfinder-pawpularity-eda-fastai-starter" target="_blank">highly influential notebook</a> (2 models)</li> </ol> <p>The brief overview of each model is as follows.</p> <h3>[1] DLDL models</h3> <ul> <li>NN backbone : Swin Transformer L 384</li> <li>Augmentation : torchvision RandAugment(2, 10), mixup(alpha=0.2, p=0.5)</li> <li>Metadata : used tabular data (I think tabular metadata is useless, but I did not omit it.)</li> <li>Loss : (KLDivLoss and HuberLoss) or (FocalLoss and HuberLoss)</li> <li>Optimizer : <a href="https://github.com/lessw2020/Ranger21" target="_blank">Ranger21</a></li> <li>Scheduler : OneCycleLR</li> <li>Regularization : <a href="https://arxiv.org/abs/1909.11299" target="_blank">MixOut</a></li> <li>Data split : StratifiedKFold(n_splits=5)</li> <li>Additional head : SVR and Catboost</li> <li>TTA : 2 times [1: Resize(size)-&gt;CenterCrop(size) / 2: Resize(size, size)]<br> <img src="https://user-images.githubusercontent.com/75870915/149617749-fc23fab2-a420-4ae4-88f1-ee37a3addfcd.jpg" alt="DLDL model"></li> </ul> <h3>[2] Regression models</h3> <ul> <li>NN backbone : Swin Transformer L 384</li> <li>Augmentation : torchvision RandAugment(2, 10), mixup(alpha=0.2, p=0.5)</li> <li>Metadata : used tabular data</li> <li>Loss : BCEWithLogitsLoss()</li> <li>Optimizer : AdamW</li> <li>Scheduler : OneCycleLR</li> <li>Data split : StratifiedKFold(n_splits=5)</li> <li>Additional head : SVR and Catboost</li> <li>TTA : None [Resize(size)-&gt;CenterCrop(size)]</li> </ul> <h3>[3] fastai models</h3> <ul> <li>NN backbone : Swin Transformer L 224 / ConvNeXt XL 224</li> <li>Augmentation : Brightness(), Contrast(), Hue(), Saturation()</li> <li>Metadata : None</li> <li>Loss : BCEWithLogitsLossFlat()</li> <li>Additional head : None</li> <li>Data split : StratifiedKFold(n_splits=10) / StratifiedKFold(n_splits=5)</li> <li>TTA : 5 times / 1 time</li> </ul> <h3>Selected submissions for final score</h3> <ul> <li>Best public LB submission<ul> <li>Public LB : 17.74385 (33th)</li> <li>Private LB : 16.95033 (17th))</li></ul></li> <li>The only successful submission containing ConvNeXt (fastai model)<ul> <li>Public LB : 17.77487 (94th)</li> <li>Private LB : <strong>16.91650</strong> (9th)</li></ul></li> </ul> <p>As for the single ConvNeXt model , the CV was good, but the LB was not so good.<br> My best single NN model in private LB during competition was a DLDL model, its score is:</p> <ul> <li>CV : 17.36578</li> <li>Private LB : 17.01783 (2 TTA)</li> </ul> <p><strong>Update</strong> : I had completed training of a DLDL model of ConvNeXt XL about 9 hours before the competition end, but when I added it to the ensemble in a hurry and submitted it, I made a mistake in the weight calculation. I made a quick fix to the code and late submitted again, it scored <strong>16.89143</strong> in private. I just set the weights of weighted average by intuition, I think higher score will be obtained by hill climbing or stacking. I think the model to add into the ensemble need not be ConvNeXt. 
I should have tried wider variety of models.</p> <p><strong>Update2b</strong> : After competition end, using several single DLDL models, I tested the effectiveness of TTA and found that following 3 TTA using torchvision.transforms looks consistently work well in both private LB and public LB.</p> <ul> <li>[TTA1] <code>Resize(image_size), CenterCrop(image_size)</code> # preserve aspect ratio</li> <li>[TTA2] <code>Resize((image_size, image_size))</code> # squish the original images into a square</li> <li>[TTA3] <code>Resize(image_size), CenterCrop(image_size), RandomHorizontalFlip(p=1)</code></li> </ul> <p>All models are trained using random crop choice augmentation of TTA1 and TTA2, different seeds, and a little different hyper parameters.</p> <table> <thead> <tr> <th>model</th> <th>head</th> <th>TTA times</th> <th>Private LB</th> <th>Public LB</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>Swin L 224 DLDL #1</td> <td>NN only</td> <td>None (1)</td> <td>17.02085</td> <td>17.92344</td> <td>17.37</td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td><strong>17.00839</strong></td> <td>17.91271</td> <td></td> </tr> <tr> <td>^</td> <td>NN + SVR + CAT</td> <td>None (1)</td> <td>16.99053</td> <td>17.91827</td> <td></td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td><strong>16.97933</strong></td> <td>17.90832</td> <td></td> </tr> <tr> <td>Swin L 224 DLDL #2</td> <td>NN only</td> <td>None (1)</td> <td>17.13297</td> <td>17.92975</td> <td>17.37</td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.09943</td> <td>17.90875</td> <td></td> </tr> <tr> <td>^</td> <td>NN + SVR + CAT</td> <td>None (1)</td> <td>17.04425</td> <td>17.82960</td> <td></td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.00938</td> <td>17.81680</td> <td></td> </tr> <tr> <td>Swin L 224 DLDL #3</td> <td>NN only</td> <td>None (1)</td> <td>17.05831</td> <td>17.96708</td> <td>17.32</td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.04335</td> <td>17.94935</td> <td></td> </tr> <tr> <td>^</td> <td>NN + SVR + CAT</td> <td>None (1)</td> <td>17.02163</td> <td>17.90633</td> <td></td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.01208</td> <td>17.89366</td> <td></td> </tr> <tr> <td>ConvNeXt XL 224 DLDL #1</td> <td>NN only</td> <td>None (1)</td> <td>17.10452</td> <td>17.98510</td> <td>17.37</td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.08961</td> <td>17.96937</td> <td></td> </tr> <tr> <td>^</td> <td>NN + SVR + CAT</td> <td>None (1)</td> <td>17.05040</td> <td>17.98521</td> <td></td> </tr> <tr> <td>^</td> <td>^</td> <td>3</td> <td>17.03909</td> <td>17.97758</td> <td></td> </tr> </tbody> </table> <p>The correlation between CV, Public LB, and Private LB is not high between other models, but within the same model, the improvement effect of TTA and additional head is consistently observed.<br> The idea and the hyper parameters of additional heads are based on these notebooks : <a href="https://www.kaggle.com/cdeotte/rapids-svr-boost-17-8" target="_blank">SVR</a>, <a href="https://www.kaggle.com/rsmits/effnet-b2-feature-models-catboost" target="_blank">Catboost</a>. Thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/rsmits" target="_blank">@rsmits</a>.</p> <p><strong>Update3</strong> : I share the code example of DLDL model. <a href="https://www.kaggle.com/shigemitsutomizawa/9th-place-swinb224-dldl-nn-svr-cat?scriptVersionId=86009375" target="_blank">[9th place] SwinB224 DLDL NN+SVR+CAT</a></p>
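<p>The core of the DLDL models above is the target construction: each scalar Pawpularity score is turned into a discretized normal distribution over the 1–100 label bins (std 2.0, as described), the 100-way head is trained with KL divergence plus a small regression term on the decoded expectation, and the prediction is that expectation. The following is a hedged reconstruction of that recipe, not the author's code.</p> <pre><code>import torch
import torch.nn.functional as F

LABELS = torch.arange(1, 101, dtype=torch.float32)  # 100 label bins

def make_label_distribution(scores, std=2.0):
    # scores: (B,) tensor of Pawpularity values in [1, 100]
    d = torch.exp(-0.5 * ((LABELS[None, :] - scores[:, None]) / std) ** 2)
    return d / d.sum(dim=1, keepdim=True)            # normalize to a distribution

def dldl_loss(logits, scores, std=2.0):
    target = make_label_distribution(scores, std)
    log_prob = F.log_softmax(logits, dim=1)
    kl = F.kl_div(log_prob, target, reduction="batchmean")
    # decode the prediction as the expectation over the label bins
    expectation = (log_prob.exp() * LABELS).sum(dim=1)
    # DLDL-v2 also adds an L1/Huber term on the decoded expectation
    reg = F.smooth_l1_loss(expectation, scores)
    return kl + reg, expectation

logits = torch.randn(4, 100, requires_grad=True)     # dummy model outputs
scores = torch.tensor([23.0, 45.0, 67.0, 88.0])
loss, pred = dldl_loss(logits, scores)
loss.backward()
</code></pre>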
PetFinder.my - Pawpularity Contest
Jumped from 45th to 42nd place? What happened?
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>🤦‍♂️🤦‍♂️🤦‍♂️</p> <p>Why don't they remove them during the competition? Does this happen often?</p> <p>After the competition ended we were 45th on the private LB. Today I see we are 42nd.</p> <p>I see <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> is still in his unlucky position, which means none of the gold medal positions changed.</p> <p>Any idea what happened?</p> <p>Would that be your favorite gold then? 🤣</p>
PetFinder.my - Pawpularity Contest
Oopsie Doopsie, Apparently my public notebook got a silver medal (#145) 😆
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Guys, </p> <p>I've made a bit of an oopsie 😆<br> Apparently, I accidentally got a silver medal with my "Nuclear Option" public notebook.</p> <blockquote> <p><a href="https://www.kaggle.com/yamqwe/the-nuclear-option-train" target="_blank">Here</a> If you are interested</p> </blockquote> <p><img src="https://i.ibb.co/gwpJXBm/lb.png" alt=""></p> <hr> <h4>The outline of the solution is simple:</h4> <p>I trained every single SOTA model from 2021 and then <strong>Nuked the leaderboard with 10 Billion parameters</strong> ensemble of ensembles. <br> The ensembles were also supplemented a bit with catboost 2nd stage models. <br> Just for the "why not". </p> <p>The approach: </p> <p><img src="https://i.ibb.co/McJ39mW/image-nuke.png" alt=""></p> <p>This stunt was done mainly for the purpose me catching up with the current most recent SOTA vision papers. </p> <p>I seriously didn't try to compete on the leaderboard and never had the intention of releasing a public notebook that actually gets a silver medal. Especially not so close to the competition deadline, I also didn't publish all the trained model weights. Just in case.</p> <p>This came as a complete surprise to me! <br> Hope the solution will be useful for many others in the future.</p> <p>If you got any questions or feedback, I'll be more than happy to discuss!</p>
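<p>The "catboost 2nd stage models" mentioned above are stacking models: a gradient-boosted regressor fit on the first-stage networks' out-of-fold predictions (optionally plus metadata). Below is a hedged sketch of that pattern with synthetic stand-in inputs, not the notebook's actual code.</p> <pre><code>import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, n_models = 9912, 8
oof_preds = rng.uniform(20, 80, size=(n, n_models))   # stand-in for first-stage OOF predictions
y = rng.uniform(1, 100, size=n)                        # stand-in for Pawpularity targets

# Second-stage CatBoost trained fold-wise on the first-stage predictions.
oof_stage2 = np.zeros(n)
for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(oof_preds):
    model = CatBoostRegressor(iterations=500, depth=4, learning_rate=0.05,
                              loss_function="RMSE", verbose=0)
    model.fit(oof_preds[tr], y[tr])
    oof_stage2[va] = model.predict(oof_preds[va])

rmse = np.sqrt(np.mean((oof_stage2 - y) ** 2))
print("2nd-stage CatBoost OOF RMSE:", rmse)
</code></pre>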
PetFinder.my - Pawpularity Contest
Silver medal notebook (Private LB 17.04615, rank 126)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So because I used a two notebook structure for my submission, the submission attached to my final score doesn't do a good job of explaining exactly what I did.</p> <p>My main notebook was used to train the weights for my submission notebook (which just pulled in the trained weights and made predictions).</p> <p>I put together a complete notebook that walks you through the process I used to train and submit.</p> <p>Here is the link:<br> <a href="https://www.kaggle.com/malachymoran/the-general" target="_blank">The General</a></p> <p>Is the link in the bottom of the post not working?</p>
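<p>The two-notebook structure described here usually comes down to one thing: the training notebook saves per-fold weights, the weights are attached to the submission notebook as a dataset, and the submission notebook only loads them and predicts. A hedged sketch of that pattern follows; the paths, file names and model factory are illustrative assumptions.</p> <pre><code>import torch

# --- training notebook: after each fold finishes ---
# torch.save(model.state_dict(), f"swin_fold{fold}.pth")

# --- inference notebook: weights attached as an input dataset ---
def load_fold_models(model_factory, n_folds=5, weights_dir="../input/my-weights"):
    models = []
    for fold in range(n_folds):
        model = model_factory()
        state = torch.load(f"{weights_dir}/swin_fold{fold}.pth", map_location="cpu")
        model.load_state_dict(state)
        model.eval()
        models.append(model)
    return models

@torch.no_grad()
def predict(models, batch):
    # Average the fold models' predictions for the final submission.
    return torch.stack([m(batch) for m in models]).mean(dim=0)
</code></pre>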
PetFinder.my - Pawpularity Contest
Tentative 2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I answered the question in another comment. Please find it!</p> <p>Thanks! I answered the question in another comment. Please find it!</p> <p>We are also looking forward to discussing you and your team! btw, an email I got from Kaggle says</p> <blockquote> <p>Winner’s Information Gathering: Please use this form to learn about the materials and other information we need from you. Each member of your team should submit a form. <strong>There will be no winners call for this competition</strong>, so you may skip the steps calling for that presentation.</p> </blockquote> <p>I thought we won't have such a presentation opportunity. <a href="https://www.kaggle.com/kaggleteam" target="_blank">@kaggleteam</a>, what should we do next?</p> <p>Hi Jedi, <br> Thank you for hosting such a nice competition! We are preparing our solution to submit. Please let us know if there is something we can help to implement our solution.</p> <p>I think train data also has about 2,000 images overlapping. I am not sure if this can be the answer though.</p> <p>Thanks a lot!</p> <p>Thanks a lot!</p> <p>Yes, I just used this loss</p> <pre><code>poisson_loss = torch.nn.PoissonNLLLoss(log_input=False, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') </code></pre> <p>For training image models, I did not do any special things. I start from this kernel <a href="https://www.kaggle.com/keagle/petfindder-updated-rules" target="_blank">https://www.kaggle.com/keagle/petfindder-updated-rules</a>. Then, I just changed the model name for timm…</p> <p>Hi all,<br> Congratulations to <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>! Here is our brief solution write-up. We will publish all our codes and datasets. Please post any questions in this thread if you have any.</p> <p><strong>Update:</strong> our code is here! <a href="https://www.kaggle.com/ktakita/team-merged-ensemble-fixednewmodelsexp029nonmean" target="_blank">https://www.kaggle.com/ktakita/team-merged-ensemble-fixednewmodelsexp029nonmean</a></p> <h1>Models</h1> <p>We mainly used Swin Transformer as also used in top kernels. I just used the fastai pipeline that appears in many public kernels. My models and ensemble weights are following. Then, we ensemble each model from each member. 
</p> <table> <thead> <tr> <th>Modell</th> <th>Loss</th> <th>Weight</th> </tr> </thead> <tbody> <tr> <td>convnext_large</td> <td>BCE</td> <td>0.107571</td> </tr> <tr> <td>cait_m36_384</td> <td>BCE</td> <td>0.086765</td> </tr> <tr> <td>vit_base_patch16_224_miil_in21k</td> <td>Poisson</td> <td>0.084596</td> </tr> <tr> <td>cait_m36_384</td> <td>100-class CE</td> <td>0.076999</td> </tr> <tr> <td>xcit_small_24_p16_384_dist</td> <td>BCE</td> <td>0.076158</td> </tr> <tr> <td>xcit_small_24_p8_384_dist</td> <td>BCE</td> <td>0.073455</td> </tr> <tr> <td>crossvit_18_dagger_408</td> <td>BCE</td> <td>0.072553</td> </tr> <tr> <td>swin_large_patch4_window12_384_in22k</td> <td>BCE</td> <td>0.072102</td> </tr> <tr> <td>vit_base_patch16_224_miil_in21k</td> <td>BCE</td> <td>0.071953</td> </tr> <tr> <td>vit_base_r50_s16_384</td> <td>BCE</td> <td>0.058601</td> </tr> <tr> <td>swin_base_patch4_window12_384</td> <td>BCE</td> <td>0.057393</td> </tr> <tr> <td>vit_base_patch16_224_miil_in21k</td> <td>MSE</td> <td>0.056347</td> </tr> <tr> <td>jx_nest_base</td> <td>BCE</td> <td>0.055556</td> </tr> <tr> <td>vit_large_patch16_224</td> <td>Poisson</td> <td>0.054602</td> </tr> </tbody> </table> <p><a href="https://www.kaggle.com/ktakita" target="_blank">@ktakita</a> and <a href="https://www.kaggle.com/teyosan1229" target="_blank">@teyosan1229</a> parts are as follows:</p> <p>We used three types of the following data<br> ①Image data<br> T.Resize(image_size)<br> T.CenterCrop([image_size, image_size])<br> ②One hot data<br> 'Subject Focus', 'Eyes', 'Face', 'Near', 'Action', 'Accessory',<br> 'Group', 'Collage', 'Human', 'Occlusion', 'Info', 'Blur'<br> ③Image shape<br> 'height', 'width', 'aspect'</p> <p>For the models, we ensemble the following 6 models by Ridge.</p> <ul> <li><p>swin_large_patch4_window12_384<br> used data:①<br> image_size:384<br> loss:nn.BCEWithLogitsLoss</p></li> <li><p>poolformer_m36<br> used data:①, ②<br> image_size:224<br> loss:BCEWithLogitsLossFlat</p></li> <li><p>swin_large_patch4_window7_224_in22k<br> used data:①, ②, ③<br> image_size:224<br> loss:BCEWithLogitsLossFlat</p></li> <li><p>swin_large_patch4_window12_384<br> used data:①<br> image_size:384<br> loss:BCEWithLogitsLossFlat</p></li> <li><p>swin_large_patch4_window7_224<br> used data:①, ②<br> image_size:224<br> loss:BCEWithLogitsLossFlat</p></li> <li><p>tf_efficientnetv2_b1<br> used data:①, ②, ③<br> image_size:224<br> loss:BCEWithLogitsLossFlat<br> Kohei single model<br> swin_large_patch4_window12_384_in22k<br> used data:①, ②, ③<br> image_size:384<br> loss:BCEWithLogitsLossFlat<br> teyosan single model<br> dm_nfnet_f3<br> used data:①,②<br> image_size:512<br> loss:BCEWithLogitsLoss</p></li> </ul> <h1>Association with the data of the previous competition</h1> <p>As discussed, it is allowed to use the data of the previous competition (<a href="https://www.kaggle.com/c/petfinder-adoption-prediction)" target="_blank">https://www.kaggle.com/c/petfinder-adoption-prediction)</a>. We find that some images are overlapping between this and previous competitions. In the training data, there are about 1,600 images. We then guess that such kind of overlapping could happen also in test data. <br> Based on the finding, we join the metadata in the previous competition with this current competition data by using the image hash as a key. </p> <h1>2nd-stage GBDT model</h1> <p>To use the associated metadata, we trained a 2nd-stage GBDT model, which is also known as a “stacking model”. 
We use the prediction of the 1st-stage DNN models (e.g., SwinT) as a feature of the 2nd-stage GBDT model. The training code is available here (TBA).</p> <p><strong>Update:</strong> After discussion with other teams, one possible reason why we jumped to 2nd seems to be the training strategy that we train 2nd-stage GBDT model with the entire of this competition data, not only with the records which are associated with the previous competition. It may be helpful to preserve the distribution of predicting Pawpularity.</p> <p>These GBDT models easily got overfitted. We were not able to find a good combination of feature sets. In the previous competitions, there are a lot of effective features are reported, but the features we tested made LB score worse. There should be some effective features we have not tested due to the time limit. We suspect that this may be one of the reasons why we did not reach the top.</p> <p>It may only have a minor effect. I did not use it. How do you think, <a href="https://www.kaggle.com/ktakita" target="_blank">@ktakita</a> and <a href="https://www.kaggle.com/teyosan1229" target="_blank">@teyosan1229</a> ?</p> <p>Thanks! With CV prediction, I solved a least-square problem to minimize MSE as follows.</p> <pre><code>coef = np.linalg.lstsq(df[['pred1', 'pred2', 'pred3']], df['Pawpularity'] )[0] </code></pre> <p>Then, I iteratively remove models which have less than 5% contribution. </p> <p>The score of each model is as follows.</p> <table> <thead> <tr> <th></th> <th>RMSE</th> </tr> </thead> <tbody> <tr> <td>exp7_cait_m36_384</td> <td>17.661636</td> </tr> <tr> <td>exp7_swin_large_patch4_window12_384_in22k</td> <td>17.687250</td> </tr> <tr> <td>exp20_convnext_large</td> <td>17.741086</td> </tr> <tr> <td>exp7_vit_base_patch16_224_miil_in21k</td> <td>17.772114</td> </tr> <tr> <td>exp20_convnext_base</td> <td>17.823614</td> </tr> <tr> <td>exp4_xcit_small_24_p16_384_dist</td> <td>17.861272</td> </tr> <tr> <td>exp7_swin_base_patch4_window12_384</td> <td>17.864037</td> </tr> <tr> <td>exp7_jx_nest_base</td> <td>17.880826</td> </tr> <tr> <td>exp4_crossvit_18_dagger_408</td> <td>17.893266</td> </tr> <tr> <td>exp4_xcit_small_24_p8_384_dist</td> <td>17.953217</td> </tr> <tr> <td>exp8_vit_base_patch16_224_miil_in21k</td> <td>18.021636</td> </tr> <tr> <td>exp15_vit_base_patch16_224_miil_in21k</td> <td>18.085443</td> </tr> <tr> <td>exp7_vit_base_r50_s16_384</td> <td>18.115582</td> </tr> <tr> <td>exp15_vit_large_patch16_224</td> <td>18.223578</td> </tr> <tr> <td>exp9_cait_m36_384</td> <td>23.401867</td> </tr> </tbody> </table> <p>The ensembled score is eventually 17.21.</p> <p>I did not check the LB score, but you may try it with the code we shared. </p> <p>Thanks. I have made them public. I mainly used AMD Ryzen 9 3950X, 64GB mem, and RTX 3090 in my lab. </p> <p>Hi, I think my code is completely messed up and may be hard to understand. To submit our solution to the host, I have updated all code as Kaggle kernels as below. I checked that all code is executable. To reproduce our models, please increase the number of epochs to 20. 
Hope my code is easy to understand.</p> <ul> <li>1st stage models<ul> <li>Exp4: Trained with BCE Loss<ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp4" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp4</a> </li></ul></li> <li>Exp7: Trained with BCE Loss and larger batch size<ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp7" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp7</a> </li></ul></li> <li>Exp8: Trained with MSE Loss<ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp8" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp8</a> </li></ul></li> <li>Exp9: Trained with Cross-entropy loss as a 100-label problem<ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp9" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp9</a> </li></ul></li> <li>Exp15: Trained with Poisson NLL loss <ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp15" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp15</a> </li></ul></li> <li>Exp20: ConvNext training<ul> <li><a href="https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp20" target="_blank">https://www.kaggle.com/tkm2261/petfinder-2nd-place-model-exp20</a> </li></ul></li></ul></li> <li>2nd stage models training with 1st stage prediction and metadata from the current and previous competitions<ul> <li><a href="https://www.kaggle.com/ktakita/0113-2nd-stage-gbdt-fixed-w-newmodels3" target="_blank">https://www.kaggle.com/ktakita/0113-2nd-stage-gbdt-fixed-w-newmodels3</a> </li></ul></li> <li>Inference<ul> <li><a href="https://www.kaggle.com/ktakita/team-merged-ensemble-fixednewmodelsexp029nonmean?scriptVersionId=85153314" target="_blank">https://www.kaggle.com/ktakita/team-merged-ensemble-fixednewmodelsexp029nonmean?scriptVersionId=85153314</a> </li></ul></li> </ul>
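<p>The "image hash as a key" trick above is worth spelling out: hash every image in both competitions and, where the hashes match, carry the previous competition's richer metadata over onto this competition's rows. The write-up does not say which hash function was used, so the perceptual hash below (ImageHash library) is an assumption, as are the paths and column handling.</p> <pre><code>import glob
import os
import pandas as pd
from PIL import Image
import imagehash

def hash_images(image_dir):
    rows = []
    for path in glob.glob(os.path.join(image_dir, "*.jpg")):
        img_id = os.path.splitext(os.path.basename(path))[0]
        rows.append({"Id": img_id, "hash": str(imagehash.phash(Image.open(path)))})
    return pd.DataFrame(rows)

curr = hash_images("../input/petfinder-pawpularity-score/train")
prev = hash_images("../input/petfinder-adoption-prediction/train_images")

# Previous-competition images are named like "&lt;PetID&gt;-1.jpg"; recover the PetID
# and attach that competition's metadata to each hashed image.
prev_meta = pd.read_csv("../input/petfinder-adoption-prediction/train.csv")
prev["PetID"] = prev["Id"].str.split("-").str[0]
prev = prev.merge(prev_meta, on="PetID", how="left").drop_duplicates(subset="hash")

# Join the old metadata onto this competition's rows via matching hashes.
train = pd.read_csv("../input/petfinder-pawpularity-score/train.csv")
train = train.merge(curr, on="Id", how="left")
train = train.merge(prev.drop(columns=["Id"]), on="hash", how="left")
</code></pre>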
PetFinder.my - Pawpularity Contest
Two main tricks helped us to avoid the shake
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Prediction + Ranking Ensembling</h1> <p>We ranked each model's predictions from 1 to N (including test data), effectively doubling the features for the ensemble. </p> <pre><code>import numpy as np
from scipy.stats import rankdata

num_trains = len(ensemble_oofs)

# Rank-transform each model's OOF + test predictions jointly
all_preds = np.concatenate([ensemble_oofs, ensemble_tests])
all_ranks = np.zeros_like(all_preds)
for i in range(all_preds.shape[1]):
    all_ranks[:, i] = rankdata(all_preds[:, i])
ensemble_oof_ranks = all_ranks[:num_trains]
ensemble_test_ranks = all_ranks[num_trains:]

# Append the rank features to the raw predictions
ensemble_oofs = np.column_stack([ensemble_oofs, ensemble_oof_ranks])
ensemble_tests = np.column_stack([ensemble_tests, ensemble_test_ranks])
</code></pre> <h1>Cross Validate the Ensembling</h1> <p>Checking all of our historical submissions, it turns out that switching to CV-based (e.g. leave-one-out) ensembling is the main reason our team avoided the shake-up.</p> <p>Code for leave-one-out ridge ensembling:</p> <pre><code>from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV

def leave_one_out_ensemble(oofs, labels, tests, search_range=np.arange(100, 10000, 0.5)):
    sc = StandardScaler()
    sc.fit(oofs)
    # cv=None uses efficient leave-one-out cross-validation inside RidgeCV
    regcv = RidgeCV(alphas=search_range, cv=None,
                    scoring="neg_root_mean_squared_error", normalize=False)
    regcv.fit(sc.transform(oofs), labels)
    print(f"regcv.alpha_: {regcv.alpha_}, regcv.best_score_: {regcv.best_score_}")
    return regcv.predict(sc.transform(tests))
</code></pre> <p>Also, H2O AutoML performs very well for ensembling. </p> <pre><code>aml1 = H2OAutoML(max_runtime_secs=3600, stopping_metric='rmse', seed=721)
</code></pre>
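<p>For completeness, a hedged usage sketch tying the two snippets above together, with synthetic stand-ins for the rank-augmented OOF/test matrices and the labels:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
# e.g. 10 model prediction columns + 10 rank columns, as produced by the first snippet
ensemble_oofs = rng.uniform(20, 80, size=(9912, 20))
ensemble_tests = rng.uniform(20, 80, size=(8, 20))
labels = rng.uniform(1, 100, size=9912)

final_test_preds = leave_one_out_ensemble(ensemble_oofs, labels, ensemble_tests)
</code></pre>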
PetFinder.my - Pawpularity Contest
[ 31th solution ] noisy student, knowledge distillation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's very interesting! I didn't test 3 folds.</p> <p>It's not dramatic. When I calculate the CV score, TTA7 gives the best score. So when I ensemble and predict, I use TTA7.</p> <p>I used 5 folds. And I created 5 NN models by training on the OOF data.<br> In my case, the CV improved but the LB dropped. So I didn't use this ensemble method. But on the private LB, I found that this ensemble method achieved 16.90625. I'm sorry I didn't use it.</p> <p>It's simple. In fastai:<br> <code>learn.tta(n=7)</code></p> <p>You mean the SVR ensemble? It's simple. Just extract each model's embeddings and concatenate them. Then train an SVR. <a href="https://www.kaggle.com/cdeotte/rapids-svr-boost-17-8" target="_blank">reference code</a></p> <p>Thank you to all the people who participated in this competition, and to the organizer for hosting such a sweet competition.</p> <p>I am glad to have shaken up to 31st place, and I achieved Competition Master.<br> In this competition, I wanted to study multi-modal learning, noisy student, and knowledge distillation.</p> <h1>Overview</h1> <p>For my solution, refer to this paper: <br> <a href="https://arxiv.org/abs/1911.04252" target="_blank">Self-training with Noisy Student improves ImageNet classification</a></p> <ol> <li>train single models</li> <li>ensemble the models to predict and label external data</li> <li>train single models using external data + petfinder data</li> </ol> <h2>step1. train single models</h2> <h3>model architecture</h3> <p>I use the fastai and timm libraries.<br> The model is as follows.</p> <pre><code>import torch
import torch.nn as nn
from timm import create_model

class cust_fastai_model(nn.Module):
    def __init__(self, model_name='swin_large_patch4_window7_224', emb=True):
        super().__init__()
        self.emb = emb
        self.backbone = create_model(model_name, pretrained=False, num_classes=0)
        image_num_features = self.backbone.num_features
        self.image_fc = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(image_num_features, 128)
        )
        self.custom_head = nn.Sequential(
            nn.Dropout(0.4),
            nn.Linear(128, 1)
        )

    def forward(self, image):
        emb = self.backbone(image)
        emb = self.image_fc(emb)
        out = self.custom_head(emb)
        if self.emb:
            # return the prediction together with the 128-dim embedding (used later for SVR)
            return torch.cat([out, emb], dim=1)
        else:
            return out
</code></pre> <h3>single model performance</h3> <table> <thead> <tr> <th>model name</th> <th>image size</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>swin_large_patch4_window7_224</td> <td>224</td> <td>17.4291008</td> </tr> <tr> <td>swin_base_patch4_window7_224</td> <td>224</td> <td>17.62332115</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>384</td> <td>17.51918907</td> </tr> <tr> <td>tf_efficientnet_b5_ns</td> <td>456</td> <td>17.70378685</td> </tr> <tr> <td>vit_large_patch16_224</td> <td>224</td> <td>17.63189049</td> </tr> <tr> <td>eca_nfnet_l2</td> <td>384</td> <td>17.58191376</td> </tr> </tbody> </table> <h2>step2.
ensemble models for predict and label external data.</h2> <h3>calculate CV about tta</h3> <table> <thead> <tr> <th>TTA</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>17.22635913</td> </tr> <tr> <td>2</td> <td>17.20699514</td> </tr> <tr> <td>3</td> <td>17.2165909</td> </tr> <tr> <td>4</td> <td>17.20069566</td> </tr> <tr> <td>5</td> <td>17.21008584</td> </tr> <tr> <td>6</td> <td>17.20981791</td> </tr> <tr> <td>7</td> <td>17.1992845</td> </tr> <tr> <td>8</td> <td>17.19996605</td> </tr> <tr> <td>9</td> <td>17.21487459</td> </tr> <tr> <td>10</td> <td>17.2102797</td> </tr> </tbody> </table> <p>Then, I use tta7 to predict and label external data.</p> <h3>External data</h3> <ol> <li><a href="https://cocodataset.org/#home" target="_blank">COCO cat and dog</a></li> <li><a href="https://www.kaggle.com/c/petfinder-adoption-prediction" target="_blank">Petfinder adopt dataset</a></li> <li><a href="https://www.kaggle.com/c/dogs-vs-cats" target="_blank">kaggle cat and dog dataset</a></li> </ol> <h2>step3. train single models by use external data</h2> <h3>single models performance</h3> <table> <thead> <tr> <th>model name</th> <th>image size</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>swin_large_patch4_window7_224</td> <td>224</td> <td>17.32193184</td> </tr> <tr> <td>swin_base_patch4_window7_224</td> <td>224</td> <td>17.34374313</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>384</td> <td>17.28282471</td> </tr> <tr> <td>tf_efficientnet_b5_ns</td> <td>456</td> <td>17.25554123</td> </tr> <tr> <td>vit_large_patch16_224</td> <td>224</td> <td>17.37071266 ​</td> </tr> <tr> <td>eca_nfnet_l2</td> <td>384</td> <td>-</td> </tr> </tbody> </table> <p>In this experiment, eca_nfnnet_l2 need a lot of times, so i didn't train this model.</p> <p>And I can found out efficientnet can learn more than other models.</p> <h3>ensemble performance</h3> <table> <thead> <tr> <th>TTA</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>17.19522166</td> </tr> <tr> <td>2</td> <td>17.18533359</td> </tr> <tr> <td>3</td> <td>17.19308663</td> </tr> <tr> <td>4</td> <td>17.18103402 ​</td> </tr> <tr> <td>5</td> <td>17.18683142</td> </tr> <tr> <td>6</td> <td>17.18794017</td> </tr> <tr> <td>7</td> <td>17.17808405</td> </tr> <tr> <td>8</td> <td>17.18468712</td> </tr> <tr> <td>9</td> <td>17.18554875</td> </tr> <tr> <td>10</td> <td>17.18290917</td> </tr> </tbody> </table> <p>I can improve CV a little. And i have a problem about notebook execution time. When i use TTA7.</p> <h3>predict and train again.</h3> <p>In noisy student paper, they can perform more when they predict and train again. so predict and train again.<br> And ensemble performance is as follow.</p> <table> <thead> <tr> <th>TTA</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>17.17037105</td> </tr> <tr> <td>2</td> <td>17.17464033</td> </tr> <tr> <td>3</td> <td>17.17522471</td> </tr> <tr> <td>4</td> <td>17.1676797​</td> </tr> <tr> <td>5</td> <td>17.17269141</td> </tr> <tr> <td>6</td> <td>17.1767502</td> </tr> <tr> <td>7</td> <td>17.16777866</td> </tr> <tr> <td>8</td> <td>17.17085731</td> </tr> <tr> <td>9</td> <td>17.17535145</td> </tr> <tr> <td>10</td> <td>17.17163645 ​</td> </tr> </tbody> </table> <p>In this experminet I can improve CV a little and I don't have to TTA much. So I use TTA2. And I can use all of this models.</p> <h3>more models</h3> <p>Looking at the experiment above, I was able to save time. 
So I trained more models with the 2nd-round pseudo labels.<br> I trained 5 more models:</p> <ol> <li>regnet</li> <li>resnext</li> <li>convnext base 384</li> <li>convnext large 224</li> <li>convnext xlarge 224</li> </ol> <p>RegNet and ResNeXt could not reach a high CV, so I kept only the ConvNeXt models.</p> <h2>Final models</h2> <p>I used the following models.</p> <table> <thead> <tr> <th>model name</th> <th>dataset</th> <th>image size</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>swin_large_patch4_window7_224</td> <td>2nd pseudo label</td> <td>224</td> <td>17.31553802</td> </tr> <tr> <td>swin_base_patch4_window7_224</td> <td>2nd pseudo label</td> <td>224</td> <td>17.36221161</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>2nd pseudo label</td> <td>384</td> <td>17.31249619</td> </tr> <tr> <td>tf_efficientnet_b5_ns</td> <td>2nd pseudo label</td> <td>456</td> <td>17.31544266</td> </tr> <tr> <td>vit_large_patch16_224</td> <td>2nd pseudo label</td> <td>224</td> <td>17.36038437</td> </tr> <tr> <td>eca_nfnet_l2</td> <td>original dataset</td> <td>384</td> <td>17.58191376</td> </tr> <tr> <td>swin_large_patch4_window7_224_pse</td> <td>1st pseudo label</td> <td>224</td> <td>17.32193184</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>1st pseudo label</td> <td>384</td> <td>17.28282471</td> </tr> <tr> <td>tf_efficientnet_b5_ns</td> <td>1st pseudo label</td> <td>456</td> <td>17.25554123</td> </tr> <tr> <td>convnext-base</td> <td>2nd pseudo label</td> <td>384</td> <td>to be updated</td> </tr> <tr> <td>convnext-large</td> <td>2nd pseudo label</td> <td>224</td> <td>to be updated</td> </tr> <tr> <td>convnext-xlarge</td> <td>2nd pseudo label</td> <td>224</td> <td>to be updated</td> </tr> </tbody> </table> <h2>Other things</h2> <p>I used SVR to ensemble.<br> <img src="https://user-images.githubusercontent.com/24911666/149612546-8f4f8988-06eb-4e24-aab8-89a635c70d98.png" alt="ensemble"><br> It improved CV but decreased the LB a lot, so I didn't use it. On the private LB, however:</p> <table> <thead> <tr> <th>model</th> <th>dataset</th> <th>private LB</th> </tr> </thead> <tbody> <tr> <td>ensemble SVR</td> <td>original dataset</td> <td>16.90625</td> </tr> <tr> <td>ensemble 6 models</td> <td>original dataset</td> <td>17.02122</td> </tr> <tr> <td>my final model 1</td> <td>pseudo labeling</td> <td>16.99605</td> </tr> <tr> <td>my final model 2</td> <td>pseudo labeling</td> <td>16.99253</td> </tr> </tbody> </table> <p>I'm sorry I didn't use the SVR ensemble.</p> <h1>What I learned in this competition</h1> <ul> <li>Using noisy student, I could improve single-model performance.</li> <li>Using noisy student, I didn't need as much TTA.</li> <li>We can use SVR to ensemble models.</li> <li>Trust your CV.</li> </ul> <p>Everyone did great! See you at the next competition.</p>
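<p>To make the SVR-ensemble idea above concrete, here is a minimal sketch (my own illustration, not the author's exact code): it assumes you have already extracted one embedding matrix per trained backbone for the training images, concatenates them, and fits an SVR head with out-of-fold validation on the Pawpularity target. The referenced notebook uses a GPU SVR from RAPIDS/cuML; plain scikit-learn is used here only to keep the example self-contained, and the C value is a placeholder.</p> <pre><code>import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

def svr_ensemble_oof(emb_list, y, n_splits=5, C=10.0):
    """emb_list: list of (n_samples, emb_dim) arrays, one per backbone; y: Pawpularity targets."""
    X = np.concatenate(emb_list, axis=1)              # concatenate the embeddings
    oof = np.zeros(len(y))
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = SVR(C=C)
        model.fit(X[tr_idx], y[tr_idx])
        oof[va_idx] = model.predict(X[va_idx])
    rmse = np.sqrt(np.mean((oof - y) ** 2))           # competition metric on the OOF predictions
    return oof, rmse
</code></pre>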
PetFinder.my - Pawpularity Contest
[114th] Ensemble 7 models (pseudo label and not pseudo label)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congrats to all winners and thanks to organizers. I have learned a lot from this competition, so I would like to share my solution here!</p> <p>Inference code and the detail of my solution:<br> <a href="https://www.kaggle.com/ttkagglett/119th-ensemble-7models-pseudo-and-not-pseudo" target="_blank">https://www.kaggle.com/ttkagglett/119th-ensemble-7models-pseudo-and-not-pseudo</a></p> <p>Make pseudo label dataset code:<br> <a href="https://www.kaggle.com/ttkagglett/petfinder-make-pseudolabel-dataset" target="_blank">https://www.kaggle.com/ttkagglett/petfinder-make-pseudolabel-dataset</a></p> <p>Semi-Supervised Learning (Training with pseudo labeled data) code:<br> <a href="https://www.kaggle.com/ttkagglett/petfinder-cnn-semi-supervised-training-fastai" target="_blank">https://www.kaggle.com/ttkagglett/petfinder-cnn-semi-supervised-training-fastai</a></p> <p>Training code:<br> <a href="https://www.kaggle.com/ttkagglett/petfinder-cnn-training-for-pseudolabel-2-fastai" target="_blank">https://www.kaggle.com/ttkagglett/petfinder-cnn-training-for-pseudolabel-2-fastai</a><br> I made many cloned notebooks from this to make various types of models.</p> <p>I used Kaggle environment only (1 GPU including GPU time limitation). Many of you can not use GPUs as you want, but we can train models! <br> I hope some Kagglers find new ideas from my solution. Please let me know if you have any questions.</p>
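<p>To illustrate the pseudo-labeling workflow this write-up links to, here is a minimal sketch (my own illustration, not the author's code): an already-trained model or ensemble labels extra unlabeled pet images, and the pseudo-labeled rows are appended to the real training data, optionally with a lower sample weight. The column names, the <code>predict_fn</code> interface and the 0.5 weight are assumptions for the example only.</p> <pre><code>import pandas as pd

def build_pseudo_label_df(train_df, unlabeled_paths, predict_fn, weight=0.5):
    # predict_fn: any callable mapping image paths to Pawpularity predictions,
    # e.g. an ensemble of the models trained on the original data.
    pseudo = pd.DataFrame({
        'path': list(unlabeled_paths),
        'Pawpularity': predict_fn(unlabeled_paths),
        'is_pseudo': True,
    })
    labeled = train_df[['path', 'Pawpularity']].assign(is_pseudo=False)
    combined = pd.concat([labeled, pseudo], ignore_index=True)
    # Optionally down-weight pseudo-labeled rows when training the next model
    combined['sample_weight'] = combined['is_pseudo'].map({False: 1.0, True: weight})
    return combined
</code></pre>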
PetFinder.my - Pawpularity Contest
[Public 1st/ Private 5th] Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my - Pawpularity Contest <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'm glad to receive a comment from a legendary Kaggler like you.<br> I always learn from your solutions and discussion comments!</p> <p>Thank you to all the teams who competed with us, to everyone who participated in this competition, and to the organizer for hosting such a great competition.</p> <p>I'm very sorry we shook down and missed 1st … !</p> <p>Our final notebooks are available.</p> <ul> <li>sub1 : <a href="https://www.kaggle.com/kaerunantoka/offense-final?scriptVersionId=85192352" target="_blank">https://www.kaggle.com/kaerunantoka/offense-final?scriptVersionId=85192352</a></li> <li>sub2 : <a href="https://www.kaggle.com/yururoi/stacking-pp-avg-pp-34-defensive-breed-group?scriptVersionId=84896876" target="_blank">https://www.kaggle.com/yururoi/stacking-pp-avg-pp-34-defensive-breed-group?scriptVersionId=84896876</a></li> <li>2nd stage training code : <a href="https://www.kaggle.com/yururoi/offense-stacking-pp-avg-pp-36-b285-b103" target="_blank">https://www.kaggle.com/yururoi/offense-stacking-pp-avg-pp-36-b285-b103</a></li> </ul> <h1>Overview</h1> <p>Our solution has 3 steps.</p> <ul> <li><p>1st : Make a lot of single models (our final submission has 14 single models).</p></li> <li><p>2nd : Calculate a similarity score to detect the same images between the previous competition's train+test images (about 70,000 images) and this competition's images.</p></li> <li><p>Final : For pets whose similarity score is over the threshold, we can use the previous competition's metadata, so we built models for each AdoptionSpeed as post-processing.</p></li> </ul> <p><img alt="Screen Shot 2022-01-15 at 7 23 54" src="https://user-images.githubusercontent.com/38504303/149600083-089fab1c-94fa-4065-997f-e90896c080b3.png"></p> <h1>Single Models</h1> <p>We used the models below.</p> <ul> <li>All models are trained with images only 
(Without meta data!)</li> <li>Cut long side to make the aspect ratio fixed in Dataset class.</li> </ul> <table> <thead> <tr> <th>Model</th> <th>Loss func</th> <th>CV</th> <th>LB (if any)</th> </tr> </thead> <tbody> <tr> <td>swin_large_patch4_window12_384</td> <td>BCE</td> <td>17.484603</td> <td>18.02737</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>BCE</td> <td>17.3944997416399</td> <td>-</td> </tr> <tr> <td>swin_large_patch4_window12_384</td> <td>BCE</td> <td>17.7691509886505</td> <td>17.98811</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.7937685311947</td> <td>17.97379</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.7916477565051</td> <td>17.94652</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.458288</td> <td>-</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.5616612122726</td> <td>17.96412</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.6927744617933</td> <td>17.98374</td> </tr> <tr> <td>swin_large_patch4_window7_224</td> <td>BCE</td> <td>17.52656</td> <td>17.90651</td> </tr> <tr> <td>vit_base_r50_s16_384</td> <td>BCE</td> <td>17.9787132114343</td> <td>-</td> </tr> <tr> <td>vit_large_r50_s32_384</td> <td>BCE</td> <td>17.705468</td> <td>-</td> </tr> <tr> <td>densenet121</td> <td>MSE</td> <td>18.811589</td> <td>-</td> </tr> <tr> <td>tf_efficientnet_b0_ns</td> <td>MSE</td> <td>18.1307282713156</td> <td>-</td> </tr> <tr> <td>resnext101_32x8d</td> <td>BCE</td> <td>18.4265347215127</td> <td>-</td> </tr> </tbody> </table> <h1>Validation Strategy</h1> <p>We used below 2 method</p> <ul> <li><p>GroupKfold … Add id to the same images, this id is used as group.</p></li> <li><p>StratifiedKfold … Drop the same images, and split fold.</p></li> </ul> <h1>DataAugmentation</h1> <pre><code>CFG.get_transforms = { 'train' : A.Compose([ A.OneOf([ A.RandomResizedCrop(CFG.IMG_SIZE, CFG.IMG_SIZE, p=0.3, scale=(0.85, 0.95)), A.Resize(CFG.IMG_SIZE, CFG.IMG_SIZE, p=0.4), A.Compose([ A.Resize(int(CFG.IMG_SIZE * 1.5), int(CFG.IMG_SIZE * 1.5), p=1.0), A.CenterCrop(p=1.0, height=CFG.IMG_SIZE, width=CFG.IMG_SIZE), ], p=0.3), ], p=1.0), A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0, p=1.0,), ], p=1.0), 'valid' : A.Compose([ A.Resize(CFG.IMG_SIZE, CFG.IMG_SIZE, p=1), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0, p=1.0,), ], p=1.0), } </code></pre> <h1>Detect similar (maybe the same) image</h1> <ul> <li>Feature Extractor : EfficientNetB0</li> <li>Calculate cosine similarity</li> </ul> <h1>Stacking</h1> <ul> <li>Model … BayesianRidge</li> <li>fold … 4fold</li> </ul> <h1>Post Processing</h1> <ul> <li>Model … 5 BayesianRidge Models for each AdoptionSpeed (0~4)</li> <li>fold … 4fold</li> <li>Feature … oof, Age, Breed, TextLength, and so on.</li> </ul> <h1>What didn't work</h1> <ul> <li>Use Metadata</li> <li>Cropped images with Yolo v5</li> <li>Data augmentations on CNN models</li> <li>Image stats features</li> </ul> <p>Thank you!</p> <p>Thank you for comment! お疲れ様でした!</p>
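<p>A minimal sketch of the "detect similar (maybe the same) image" step described above (my own illustration, not the team's code): image features are extracted with an ImageNet-pretrained EfficientNet-B0, L2-normalized, and compared with cosine similarity; the 0.9 threshold and the image preprocessing are placeholders.</p> <pre><code>import timm
import torch
import torch.nn.functional as F

extractor = timm.create_model('efficientnet_b0', pretrained=True, num_classes=0).eval()

@torch.no_grad()
def embed(images):
    # images: (N, 3, H, W) tensor of already resized/normalized images
    feats = extractor(images)            # (N, 1280) pooled features
    return F.normalize(feats, dim=1)     # L2-normalize so a dot product = cosine similarity

def find_duplicate_pairs(emb_a, emb_b, threshold=0.9):
    sim = emb_a @ emb_b.T                                  # cosine similarity matrix
    return (sim &gt; threshold).nonzero(as_tuple=False)       # (i, j) pairs of likely duplicates
</code></pre> <p>Pairs above the threshold are treated as the same pet, which is what allows reusing the previous competition's metadata (or assigning a shared group id for GroupKFold).</p>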
PetFinder.my Adoption Prediction
10th Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you for reading.</p> <p>Thank you for pointing it out. Sorry, I made a mistake here; LGB *6 (multi-classify/regression) is more suitable. I do not combine regression and classification.</p> <p>Multi-classify details: I use multi-classification to get a 5-dim probability matrix, then sum([0,1,2,3,4]*prob_matrix) to get one column of values, and finally use optR to map it to the five classes.</p> <p>Thank you for reading.</p> <p>First of all, thanks to Petfinder.my and Kaggle for hosting this great competition. And congratulations to the winners! Thanks to my teammates for their efforts. Here is our solution.</p> <h2>General</h2> <p>As our team name says, stacking is all you need. We have 4 groups of features:</p> <h3>features one:</h3> <ul> <li>1) Clean breed:</li> </ul> <pre><code>def deal_breed(df):
    if df['Breed1'] == df['Breed2']:
        df['Breed2'] = 0
    if df['Breed1'] != 307 and df['Breed2'] == 307:
        temp = df["Breed1"]
        df['Breed1'] = df['Breed2']
        df['Breed2'] = temp
    return df
</code></pre> <ul> <li>2) Rescuer features:</li> </ul> <pre><code>def get_res_feat(df):
    temp = pd.DataFrame(index=range(1))
    temp['RescuerID'] = df['RescuerID'].values[0]
    temp['res_type_cnt'] = len(df['Type'].unique())
    temp['res_breed_cnt'] = len(df['Breed1'].unique())
    temp['res_breed_mode'] = df['Breed1'].mode()
    temp['res_fee_mean'] = df['Fee'].mean()
    temp['res_Quantity_sum'] = df['Quantity'].sum()
    temp['res_MaturitySize_mean'] = df['MaturitySize'].mean()
    temp['res_Description_unique'] = len(df['Description'].unique())
    return temp
</code></pre> <ul> <li>3) Meta features from the public kernel</li> <li>4) Description features: <ul> <li>tfidf + SVD</li> <li>desc+type+breed+color -&gt; tfidf + SVD/NMF/LDA</li> <li>desc+type+breed+color -&gt; countvec + SVD/LDA</li> <li>desc -&gt; wordbatch + SVD</li> </ul> </li> <li>5) category_col one-hot + SVD</li> <li>6) densenet121 extracted image features + SVD</li> <li>7) State features (external data): state population density, state_rank (according to state population density)</li> <li>8) Mean target encoding with breed (breed = breed1 + breed2), boost 0.007 (from 0.463 to 0.470)</li> <li>9) Linear model OOF: use features 1-7 to build some linear models and take their OOF predictions.</li> </ul> <p>With these 101-dim features and a multi-classify LGB (BEST SINGLE MODEL, LB 0.471).</p> <h3>note:</h3> <p>Get the multi-classify result and optimize it: sum([0,1,2,3,4]*prob_matrix)</p> <pre><code>class_list = [0, 1, 2, 3, 4]
pred_test_y = np.array([sum(pred_test_y[ix] * class_list) for ix in range(len(pred_test_y[:, 0]))])
</code></pre> <h3>features two</h3> <p>Early in the competition, I forked from the public kernel (LB 0.444).</p> <h3>features three <a href="/zhouqingsongct">@zhouqingsongct</a></h3> <p>We used featuretools to automatically extract features.</p> <h3>features four [used to build lgb6]</h3> <p>From my teammate <a href="/amgis3">@amgis3</a> (LB 0.470).</p> <h3>models</h3> <p>We stacked them with these models:</p> <ul> <li>LGB *6 (multi-classify/regression)</li> <li>CAT *2 (multi-classify/regression)</li> <li>NN *3 (multi-classify/regression)</li> <li>linear model (regression)</li> </ul> <h3>NN <a href="/gmhost">@gmhost</a> (NN structure references <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#latest-450523">https://www.kaggle.com/c/avito-demand-prediction/discussion/59880#latest-450523</a>)</h3> <p>We did not use public embeddings; we just used the train+test descriptions to pretrain a new word2vec model (considering that many descriptions are in Chinese or Malay). Maybe we were wrong; our best NN was only near LB 0.44.</p> <h2>structures</h2> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/513498/12922/img.jpg" alt="img"></p> <h3>optR</h3> <p>The open-source optR predicts very few zeros, so I manually assign the smallest 0.95 * len_0 values to class 0 (len_0 is the number of zeros in the train set):</p> <pre><code>def predict(self, X, coef, len_0):
    X_p = np.copy(X)
    temp = sorted(list(X_p))
    threshold = temp[int(0.95 * len_0) - 1]
    for i, pred in enumerate(X_p):
        if pred &lt; threshold:
            X_p[i] = 0
        elif pred &gt;= threshold and pred &lt; coef[1]:
            X_p[i] = 1
        elif pred &gt;= coef[1] and pred &lt; coef[2]:
            X_p[i] = 2
        elif pred &gt;= coef[2] and pred &lt; coef[3]:
            X_p[i] = 3
        else:
            X_p[i] = 4
    return X_p
</code></pre> <p>That's all, thank you. The <a href="https://www.kaggle.com/chizhu2018/final-submit-two-10th-solution-private-0-442">code</a> is here: <a href="https://github.com/chizhu/kaggle-petfinder-adoption-prediction-10th-solution">github</a></p> <p>Thank you for figuring out this error.</p> <p>Thank you.</p> <p>Thank you! Congratulations on your solo gold medal.</p>
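<p>As a quick illustration of the "tfidf + SVD" description features listed above, here is a minimal sketch (my own illustration, not the team's code); the n-gram range, <code>min_df</code> and number of components are placeholders.</p> <pre><code>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def tfidf_svd_features(descriptions, n_components=16):
    # descriptions: iterable of raw Description strings (missing values filled with '')
    text = pd.Series(descriptions).fillna('')
    tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X = tfidf.fit_transform(text)
    # Compress the sparse TF-IDF matrix into dense low-dimensional features
    svd = TruncatedSVD(n_components=n_components, random_state=42)
    X_svd = svd.fit_transform(X)
    cols = [f'desc_tfidf_svd_{i}' for i in range(n_components)]
    return pd.DataFrame(X_svd, columns=cols)
</code></pre>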
PetFinder.my Adoption Prediction
13-th place solution summary 0.44091 (65-th on public LB: 0.459~0.467)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! It seems I had a verrrry good luck in this competition:)</p> <p>I published the code :) <a href="https://www.kaggle.com/corochann/13-th-place-solution-ensemble-of-5-models">13-th place solution: ensemble of 5 models</a></p> <p>It's similar to one-hot encoding, but it is not one-hot vector but 2 places are set 1 when both breed1 &amp; breed2 are set. I applied cutoff, all minor breeds are dealt as "unknown" breed" so breed have only 18 dim vector.</p> <p>Thank you for your interest!</p> <p>I permute only tabular input feature, target value is kept same value. Every time when I sample minibatch, I select 15 % of columns randomly to permute then I permute rows within minibatch for these 15% of columns. Thus 85% of columns are remain same value, and target value is also remain same.</p> <p>Actually I also tried mixup &amp; adding gaussian noise on input feature, but both approach do not seem to contribute to performance so much. And I did not adopt them in the final submission.</p> <p>Thank you for the interest. Currently the code is quite messy but I will consider :) </p> <p>First of all, thank you for organizers. It was my first time to seriously enter kernel competition, I enjoyed it. I will write up my approach to summarize.</p> <p><strong>[UPDATED] I published the kernel code</strong> - <a href="https://www.kaggle.com/corochann/13-th-place-solution-ensemble-of-5-models">13-th place solution: ensemble of 5 models</a></p> <h2>Feature Engineering</h2> <p>Many feature are adopted from <a href="https://www.kaggle.com/ranjoranjan/single-xgboost-model">Single XGBoost model</a> as a baseline. I will only write the additional feature engineering from this kernel.</p> <h3>Tabular data</h3> <p>Applied cutoff for "Age", "Quantity", "VideoAmt" and "PhotoAmt".</p> <p>Additional information I used is: - state gdp info (ref <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/78040">The GDP and population of Malaysia states</a>) - breed ratings (ref <a href="https://www.kaggle.com/hocop1/cat-and-dog-breeds-parameters">Cat and dog breeds parameters</a>): performance does not change so much. I adopted only 4 keys which are included both dogs &amp; cats. - language: English, Malay and Chinese are mainly used in Malaysia. Most of the description are English but some are Malay or Chinese. I detected language using <code>langdetect</code> library, and added these language as categorical feature. Detect language is meaningful, because Malay people are mainly Muslim and do not have dogs (detail info can be found in <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/86581">Stray animals in Malaysia: the Reality I Saw Travelling There For the Past Months</a>).</p> <h3>Text data</h3> <p>TFIDF --&gt; SVD feature extraction is executed for 3 text type 'Description', 'metadata_annots_top_desc', 'sentiment_entities' as same with Simple XGBoost model kernel.</p> <p>Additionally, glove word embedding feature is extracted <strong>only for "metadata_annots_top_desc"</strong>. When I tried to extract word embedding for description or sentiment_entities performance was worse. 
I tried both <a href="https://www.kaggle.com/rtatman/glove-global-vectors-for-word-representation">glove</a> and <a href="https://www.kaggle.com/facebook/fatsttext-common-crawl">fasttext</a> word embedding and <code>glove.6B.200d.txt</code> was the best performance which I adopted as final submission.</p> <h3>Image data</h3> <p>Final submission used <code>densenet</code> feature extraction (See <a href="https://www.kaggle.com/christofhenkel/extract-image-features-from-pretrained-nn">Extract Image features from pretrained NN</a>). Difference is that - I did not perform AveragePooling1D, and obtrain 1024 feature. I applied SVD to this 1024-dim vector. - More than 1 image may be contained in each pet, I calculated "mean" of image embedding (I calculated image embeddings up to 10 images per pet) feature when more than 1 image is contained.</p> <p>Although I tested a lot of different models supported in ChainerCV (see <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/75943#486597">here</a> for supported models), I could not get better score than densenet and the final model used only the densenet. I was thinking Densenet performed better because it contains shallow &amp; deep information at once.</p> <h2>Validation strategy</h2> <p>I applied GroupKFold with rescuerid, instead of StratifiedKFold done in many public kernels. (Refer <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/81809">this discussion</a>).</p> <p>When I check train and test data, it seems train and test rescuers are not overlapped. GroupKFold shows much "worse" score in local experiment, but I think this is more proper setting.</p> <p>I used 4-fold during fast experiment on local, and applied 10-fold for final submission to create 10 models for each architecture.</p> <h2>Model</h2> <h3>XGBoost</h3> <p>It was quite fast to train models among these 5 models when using GPU, and I mainly tested my feature engineering only with XGB through this competition. Since XGBoost cannot handle categorical data as opposed to LightGBM or CatBoost, I converted categorical values to numeric values by following - one hot encoding: Type, Gender, Vaccinated, Dewormed, Sterilized, State, FurLength, Health - breed encoding: Minor breeds are truncated as "unknown" and added breed1 &amp; breed2 as same field. - color encoding: Same with breeds, color1 color2 and color3 are added as same field.</p> <h3>LightGBM</h3> <p>Since it can handle categorical feature, no additional feature engineering is performed.</p> <h3>CatBoost</h3> <p>Same with LightGBM.</p> <h3>xlearn</h3> <p>As written in these kernels (<a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/81597">The Hitchhiker's Guide to the PetFinder Competition</a>, <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/80937">https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/80937</a>), using FFM for sparse category dataset seems to good to try. It was a problem that official kaggle kernel does not support <code>xlearn</code> library to use FFM, but <a href="/bminixhofer">@bminixhofer</a> showed how to use it by adding library as a dataset, thank you! (Ref: <a href="https://www.kaggle.com/bminixhofer/xlearn">xlearn</a>)</p> <p>It basically handles category data, so I made discretize to bin for the numeric data such as age &amp; fee. 
Other features (PhotoAmt etc) are simply used as categorical data.</p> <h3>Neural Network (XDeepFM)</h3> <p>At first, I used simple MLP model but the performance is very bad. During investigating <code>xlearn</code> approach, I found there are research which utilizes "FM" idea into neural network. <a href="https://arxiv.org/abs/1803.05170">xDeepFM</a> is one of the model which aims to apply many sparse categorical data. The model is also applied for <a href="https://www.kaggle.com/c/criteo-display-ad-challenge">criteo dataset</a> in the paper. I implemented &amp; used this model.</p> <p>Main problem for this competition is that model easily overfits. Other than architecture, below Technics are used to reduce overfitting.</p> <ul> <li>permutation augmentation: to reduce overfitting, I permute some column's value between rows for data augmentation purpose.</li> <li><a href="https://openreview.net/forum?id=B1QRgziT-">spectral normalization</a></li> <li>dropout, weight decay</li> <li>weight tying: same weights are used among the depth for CIN network inside xDeepFM.</li> </ul> <p>I used most of the time for developing neural network, but sadly the performance was worst compared to the other models. However it is still effective to include for ensemble model, so it was not vain.</p> <p>I used <a href="https://github.com/pfnet/optuna">optuna</a> for hyper parameter tuning in the beginning-middle of the stage for neural network part. But I could not manage time to apply it for GBM &amp; xlearn models, so score might improve little bit more.</p> <h3>Ensemble</h3> <p>Ensemble is performed by simply taking mean of each models. Each model is trained with 10 models (using 10-folded GroupKFold), so final prediction is made by 10 * 5 = 50 models.</p> <h2>Model performance summary</h2> <p>Below RMSE &amp; QWK are the value for validation data of <code>train</code> dataset, calculated by 10 GroupKFold on RescuerID.</p> <p>| Model | RMSE | QWK (after optimized by nelder-mead) | | --- | --- | --- | | XGBoost | 1.041 | 0.448 | | LightGBM | 1.047 | 0.439| | CatBoost | 1.043 | 0.447| | xlearn | 1.055 | 0.432 | | NN(XDeepFM) | 1.065 | 0.414 | | Ensembled | 1.038 | 0.457 |</p> <h2>Final submission</h2> <p>How to determine threshold for regression model is quite important, and optimal way is not trivial in this competition (some <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/83870">discussion</a> can also be found).</p> <p>I could consider following 4 options.</p> <p>[Determine threshold] 1.Determine threshold to align with train histogram. 2.Determine threshold to align with histogram which gets good score on public LB data.</p> <p>[Fix final prediction's histogram, instead of setting threshold.] 3.To align with train histogram. 4.To align with histogram which gets good score on public LB data.</p> <p>I submitted A: 3 (0.459 on public LB) and B: 2 (0.467 on public LB) as final submission.</p> <h3>[Updated] private leader board result</h3> <p>As I expected, threshold tuning for training histogram, method A, got high score = 0.44091 on private leader board which achieved me to reach gold medal in this competition. method B got score 0.43866.</p> <h2>Discussion</h2> <p>I have no idea how top kagglers got more than 0.470 scores and I am interested for those posts after the competition ends!</p> <p>One interesting approach which I could not try during competition is to predict "ranking" rather than actual value. 
Since the thresholds are adjusted in post-processing, a proper ranking is quite important for getting a high score in this competition.</p> <p>Thanks for your post, it is interesting to know the background of the competition :) </p> <p>Thanks for the comment. The scores in the table are calculated using the validation data of the <code>train</code> set. I could not submit each model separately.</p> <p>I can say that since the "Single XGBoost model" public kernel already achieves a 0.455 LB score with a local validation of 0.437, I could not get much improvement on the LB even when the RMSE &amp; QWK scores improved on local <code>train</code> validation.</p> <p>Calculating the image embeddings does not take much time, about 6 minutes for all images (train &amp; test, up to 10 per pet). The code is almost the same as the "Extract Image features from pretrained NN" kernel, but I skipped the embedding calculation when no image exists; for pets with 0 images, an all-zeros array is used as the embedding.</p>
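<p>To make the "determine threshold to align with the train histogram" idea from this write-up concrete, here is a minimal sketch (my own illustration, not the author's code): cutoffs are chosen as quantiles of the test predictions so that the predicted class frequencies match the AdoptionSpeed frequencies observed in train.</p> <pre><code>import numpy as np

def histogram_thresholds(train_labels, test_preds, n_classes=5):
    # train_labels: integer AdoptionSpeed labels; test_preds: raw regression outputs
    freqs = np.bincount(train_labels, minlength=n_classes) / len(train_labels)
    # Cumulative class fractions become quantile cutoffs on the predictions
    return np.quantile(test_preds, np.cumsum(freqs)[:-1])

def apply_thresholds(preds, cuts):
    # searchsorted maps each prediction to a class index 0..n_classes-1
    return np.searchsorted(cuts, preds, side='right')

# cuts = histogram_thresholds(y_train, test_raw_preds)
# adoption_speed = apply_thresholds(test_raw_preds, cuts)
</code></pre>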
PetFinder.my Adoption Prediction
15th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>thank you very much</p> <p>thank you dofu.</p> <p>Very pitty to stand on the side of solo gold. Here is the solution. Thanks for opensource in kernels. It help me start in the last 10 days.</p> <p><a href="https://www.kaggle.com/baomengjiao/fork-of-v5-fork-v4-change-gpu-add-name-feature">https://www.kaggle.com/baomengjiao/fork-of-v5-fork-v4-change-gpu-add-name-feature</a></p>
PetFinder.my Adoption Prediction
1st Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks! I am very happy to hear that.</p> <p>The answers are as follows:</p> <ol> <li><p>Dropping by adversarial validation was a pretty offensive strategy. According to the experiment in the 1st stage, CV &amp; LB score had a certain fluctuation by the dropping rate. Therefore, we decided to adopt this method to only one model.</p></li> <li><p>Managing kernel source codes is difficult for us. We gave editing authority each other, and forked the other's kernel. In order to prevent confusion, we regularly create figure of kernel relationships and share them within the team.</p></li> </ol> <p>Thanks!</p> <p>Simply because I love cats more than dogs! (Of course I love both.)</p> <p>[edit: 2020-01-11] Updated title following <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/125436">this discussion</a> [/edit]</p> <p>We got 2nd place in the private leaderboard! Very happy to be rewarded with lots of trials and errors. I'd like to thank everyone, and give special thanks for my great teammates <a href="https://www.kaggle.com/gegege">gege</a>, <a href="https://www.kaggle.com/takuok">takuoko</a>, <a href="https://www.kaggle.com/naka2ka">ynktk</a>, <a href="https://www.kaggle.com/kaerunantoka">kaerururu</a> !</p> <p>This post is a brief summary, and more detailed information will be updated or posted as a new topic by my teammates.</p> <ul> <li><a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/88812">2nd Place Solution about k_features and LGBM2</a></li> <li><a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/88963">g_features + XGB (part of 2nd place solution)</a></li> <li><a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/89042">2nd solution tyu part</a></li> </ul> <h1>Our Model Pipeline</h1> <p>Our model pipeline is as follows:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/511979/12897/wodori-2nd-place.png" alt=""></p> <h2>Feature Engineering</h2> <p>We created 3 types of features.</p> <ul> <li>tyu_features: mainly by takuoko, ynktk and u++</li> <li>k_features: mainly by kaerururu</li> <li>g_features: mainly by gege</li> </ul> <h3>tyu_features</h3> <p>The size of tyu_features is over 5000, and we selected 1000 features based on lgbm feature importance. As a result of trying various thresholds, this threshold was determined by execution times and cv &amp; public lb scores</p> <h3>k_features</h3> <p>k_features and LGBM2 is based on <a href="https://www.kaggle.com/wrosinski/baselinemodeling">wrosinski’s public kernel</a>. The size of k_features is not so big.</p> <p>You can see <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/88812">2nd Place Solution about k_features and LGBM2</a> for more detail.</p> <h3>g_features</h3> <p>g_features and XGB is based on <a href="https://www.kaggle.com/reppy4620/xgboost">reppy4620’s public kernel</a>. What’s surprising for me is that 15% of features are dropped by adversarial validation. Adversarial validation tell us similarities of train datasets to test datasets. 
Dropping dissimilar train datasets gave us robustness and diversity, I suppose.</p> <p>You can see <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/88963">g_features + XGB (part of 2nd place solution)</a> for more detail.</p> <h2>Validation Strategy</h2> <p>GroupKFold by Rescure_ID is used in the best score kernel. </p> <p>We first used GroupKFold, and from around mid-March we implemented and started using “StratifiedGroupKFold” like <a href="https://www.kaggle.com/jakubwasikowski/stratified-group-k-fold-cross-validation">jakubwasikowski’s public kernel</a>. </p> <p>As final submissions, one kernel uses GroupKFold and another uses StratifiedGroupKFold.</p> <h2>Models and Stacking</h2> <p>We used 4 models and stacking. Since this is a 2 stage competition, we didn't put much effort into hyperparameter tuning for boosting models. </p> <p>NN didn't work very well at first, but ynktk, takuoko and kaerururu tried a lot. Finally we can get NN model worth putting into the pipeline. You can see the information of NN <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/89042">here</a>.</p> <ul> <li>NN</li> <li>LGBM1</li> <li>LGBM2</li> <li>XGB</li> </ul> <p>Of course, we tried the other models like catboost, xlearn, linier regression and so on. But we decided not to use them due to executions time and cv &amp; public lb scores.</p> <h2>Threshold Optimizer</h2> <p>In this competition, we have to submit the discrete values (from 0 to 4). We used <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107">OptimizedRounder given by hocop1</a>. </p> <p>For additional improvement, we use rank averaging:</p> <ol> <li>Convert predicted value of each model to rank</li> <li>Converting predicted values to rank even with stacking</li> <li>Determine the threshold by OptimizedRounder()</li> </ol> <p>The idea of rank averaging comes from <a href="https://www.kaggle.com/c/quora-insincere-questions-classification/discussion/80568">Quora 1st place solution</a>.</p> <h2>Post Processing</h2> <p>We found image duplications and overwrote the predictions as post processing. The idea is based on <a href="https://www.kaggle.com/appian/let-s-find-out-duplicate-images-with-imagehash">appian’s wonderful public kernel</a>. The original Kernel searchd only the first image in each pet, but ynktk extended the search range to the second. We tried to extend the range to the fifth, but execution time was not enough.</p> <p>ynktk contributed to the model pipeline comprehensively for reducing execution time and handling errors. There may be no success in the 2nd stage without his effort.</p> <h1>Conclusion</h1> <p>In this post, the brief summary of our model pipeline was described. The more detailed information will be updated or posted as a new topic by my great teammates. </p> <h1>Appendix</h1> <h2>Data Cleaning (by u++)</h2> <p>We created “typo dictionary” for the text data. I don’t know how this contribute to the scores, but there is a few typos in the text data as follows. 
I checked all the words that were out of vocabulary manually, and updated the dictionary.</p> <pre><code>u"whatapp": u"whatapps",
u"whatssapp": u"whatapps",
u"whatssap": u"whatapps",
u"whatspp": u"whatapps",
u"whastapp": u"whatapps",
u"whatsap": u"whatapps",
u"whassap": u"whatapps",
u"watapps": u"whatapps",
</code></pre> <pre><code>u"sherpherd": u"shepherd",
u"sherphed": u"shepherd",
u"sherperd": u"shepherd",
u"sherpard": u"shepherd",
</code></pre> <p>I was also in charge of preparing the external data; some operations to unify BreedName were necessary there.</p> <p>This was a bit of hard work, so I went to a cat cafe for some healing (and domain knowledge). 😺</p> <p><img src="https://pbs.twimg.com/media/D12QmZaUgAA83Ga?format=jpg&amp;name=4096x4096" alt=""></p> <p>Thank you, and congrats also to you!</p> <p>Your understanding is a bit off. We removed not features but part of the train dataset. </p> <ul> <li>Shape of g_feature is around (14993, 200)</li> <li>Shape of g'_feature is around (14993 * 0.85, 200)</li> </ul> <p>Which training samples to remove is decided by the classifier, as you said.</p> <p>Thank you and congrats to you! We thought the Quora competition's solution would be helpful because we also needed to determine thresholds.</p> <p>Maybe <a href="https://www.kaggle.com/takuok">takuoko</a> will post a NN solution!</p> <p>Your understanding seems correct. We didn't know how it would work in stage 2, but we got about a +0.002 improvement on the public LB.</p> <p>Each of us selected features by watching the CV &amp; public LB scores.</p> <p>When we stack, we only use 85% of the train data.</p> <pre><code>if T_flag and K_flag and G_flag:
    with timer('stacking'):
        X = np.concatenate([y_pred_t[extract_idx].reshape(-1, 1),
                            y_pred_t_nn[extract_idx].reshape(-1, 1),
                            y_pred_k[extract_idx].reshape(-1, 1),
                            y_pred_g.reshape(-1, 1),
                            ], axis=1)
        X_test = np.concatenate([y_test_t.reshape(-1, 1),
                                 y_test_t_nn.reshape(-1, 1),
                                 y_test_k.reshape(-1, 1),
                                 y_test_g.reshape(-1, 1),
                                 ], axis=1)
</code></pre> <p>Notebook has been published, and you can see the full code <a href="https://www.kaggle.com/naka2ka/stack-480-speedup-groupkfold-with-no-dict?scriptVersionId=12166931">here</a>.</p> <p>We utilized <a href="/hocop1">@hocop1</a>'s implementation, and don't know enough about the original intention. <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107">https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107</a></p>
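<p>A minimal sketch of the rank-averaging step used before the threshold optimizer (my own illustration, not the team's code): each model's predictions are converted to ranks so that models with different output scales contribute comparably, and the thresholds are then searched on the averaged ranks.</p> <pre><code>import numpy as np
from scipy.stats import rankdata

def rank_average(pred_list):
    # Convert each model's raw predictions to ranks in (0, 1], then average them
    ranks = [rankdata(p) / len(p) for p in pred_list]
    return np.mean(ranks, axis=0)

# blended = rank_average([oof_lgbm1, oof_lgbm2, oof_xgb, oof_nn])
# OptimizedRounder-style thresholds are then fit on `blended`
# against the true AdoptionSpeed labels.
</code></pre>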
PetFinder.my Adoption Prediction
1st place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>gone but with $$$; the next action by the host will be interesting 🤔 🤔 &gt; 1st Place - $ 10,000</p> <p>Letz hope they reveal their 'magic' soon !!!</p> <p>Am I the only one desperate for the 1st place(and other top holders) solution write up? </p>
PetFinder.my Adoption Prediction
5th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>You're welcome! At the moment I'm kind of busy, but I'm planning to take part in a competition in early 2020 (thinking February or March) if any interest me at that time.</p> <p>Thanks for letting me know! The image hosting service I used apparently doesn't exist anymore. I fixed it. Great to know people still find my solution useful :)</p> <p>I only used the first image except in my LGB model where I used mean, sum and variance of image metadata features over all images. This is one area of my solution which could probably be improved upon.</p> <p>Thanks! And yes, I had a good XGB model (around the same score as LGB on the public LB) which also had reasonably low correlation with LGB. On the public LB, it significantly decreased my score though.</p> <p>I submitted it in my CV-tuned submission together with some changes in the NN architecture but it turned out worse than the LB-tuned models there too.</p> <p>Thanks! We both experienced the same situation, leading for such a long time and then falling back a little in the end. Good luck on the private LB in the santander competition though!</p> <p>You can see the exact weights <a href="https://www.kaggle.com/bminixhofer/6th-place-solution-code">here</a>. I'd say NN and LGB were very important (my score with only LGB on the public LB was ~ 0.467). FFM helped but didn't contribute that much.</p> <p>Thanks! Also congratulations on Gold :) </p> <p><strong>Edit: <a href="https://www.kaggle.com/bminixhofer/6th-place-solution-code">The code is now public.</a></strong></p> <p>First of all, thanks to Petfinder.my for hosting this great, meaningful competition. And congratulations to the winners! I did not expect to do this well in it. I started because I wanted to investigate the mismatch between CV and LB, but quickly managed to come up with a good solution and become first on the public LB. So naturally, I decided to stick with it. "Defending" this position was a challenging task and it motivated me to work a lot on this competition. And although I did not manage to do that in the end it was a great experience and I learnt amazingly much.</p> <p>As a student, one challenge was the time I could spend on this competition. I tried to work on it for 1h - 2h everyday during the week, more on weekends, but that was not nearly enough to try everything I wanted to try.</p> <p>I'll get right to summarizing my solution. You can see the code <a href="https://www.kaggle.com/bminixhofer/6th-place-solution-code">here</a>. 
I'd recommend reading this summary first.</p> <h1>General</h1> <p>I have been stacking 5 models:</p> <ul> <li>3 NNs using PyTorch</li> <li>one LightGBM model</li> <li>one FFM with xlearn</li> </ul> <p>I used repeated stratified cross-validation with 5 splits and 5 repeats in my submission to reduce the effect of randomness, so each model was run 25 times overall.</p> <p>I used another strategy for evaluating changes in the models.</p> <h1>Validation Strategy</h1> <p>Like in <a href="https://www.kaggle.com/bminixhofer/a-validation-framework-impact-of-the-random-seed">my kernel about validation</a> in the recent quora competition, I used a kind of nested K-Fold CV. In an outer split, I split the dataset into a local test set and a train+valid set. The train+valid set is then also split into K-Folds and a model is trained on each fold. The predictions of each model on the data in the local test set are then averaged to get a QWK score.</p> <p>This is repeated for every outer split. I made a figure to illustrate this.</p> <p><img src="https://i.imgur.com/8NdTYuN.png" alt="Validation Strategy"></p> <p>The advantage of this strategy over others is that it takes into account the effect of averaging the test predictions of each fold. The same is done in the actual submission and because the correlation between folds has been low (especially of the NNs), the score would not be accurately estimated when using regular K-Fold CV.</p> <h1>Features</h1> <p>I used all categorical and numerical features available in the .csv files, the document sentiment magnitude and score, and the image metadata features commonly used in public kernels. Additionally, I:</p> <ul> <li>created count features of Breed1, Breed2 and State. When using them in the NNs, they were logarithmically scaled.</li> <li>used GDP per capita of the state.</li> <li>binned RescuerID count into 10 quantile bins. This greatly increased my score (almost 0.01 on the public LB) and intuitively makes sense because rather than the exact amount of pets rescued, the <em>type</em> of rescuer should matter (e. g. whether it is corporation or a single person).</li> </ul> <h1>Neural Networks</h1> <p>All my neural networks use the same basic structure. To make the neural networks as diverse as possible, I modified:</p> <ul> <li>The loss function</li> <li>The image activations</li> <li>How the network treats text</li> </ul> <p>You can see the shared structure between networks below. The black arrows represent connections in every network. The red, green and blue arrows represent connections only made in NN1, NN2 and NN3, respectively.</p> <p><img src="https://i.imgur.com/A8HHxXa.png" alt="Neural Network Overview"></p> <p>All network were trained using Cyclic LR and the Adam optimizer. I achieved the best results training for 10 epochs.</p> <h2>NN 1 - TfIdf matrix</h2> <p>My first and best performing network uses a kind of ridiculous structure. Text is encoded using 3-grams of all words with document frequency &gt;= 2. That amounts to ~120k features. It is fed to the NN through a linear layer with 8 neurons, so there are 8 weights per 3-gram which amounts to ~1 million weights just in the text layer. To prevent this from miserably overfitting I used dropout of 80% and completely dropped the text features for 40% of all samples.</p> <p>I used MSE loss to train it. 
</p> <h2>NN 2 - Non-trainable 300d crawl embeddings</h2> <p>The second network uses a regular RNN structure: Text is fed to the NN via an embedding layer pretrained on crawl embeddings and non-trainable. It is then processed using a bidirectional LSTM. Max-pooled output and output of an attention layer is then concatenated with the other features.</p> <p>This network is also trained using MSE loss.</p> <h2>NN 3 - Trainable 200d GloVe embeddings + flipped image activations</h2> <p>The third network uses the exact same architecture as the second, but with 200d GloVe embeddings that I froze for the first 6 epochs and started training afterwards.</p> <p>Also, instead of using the regular image features I used image features extracted from the same densenet121 model on flipped images to increase diversity. I also tried multiple other augmentations (rotating, zooming, ...) but none of them worked. Flipping the images significantly increased my score though.</p> <p>This network was trained using SmoothL1Loss.</p> <h1>Other models</h1> <p>Additionally, I used LightGBM and xlearn. Text is encoded using SVD of a TfIdf matrix with 3-grams. The SVD has 10 components for xlearn and 120 components for LightGBM.</p> <p>I found that raw image activations were too high dimensional to work well, so I trained an image activation extractor NN model. This model has the top layer activations from Densenet121 as input and tries to predict the Type, Age and Breed1 of the pet. It is trained on the train + test set.</p> <p>This model has two hidden layers with 64 and 16 neurons. The activations of the first hidden layer are used as input for LightGBM, the activations of the second layer for xlearn. This greatly increased my score.</p> <p>For xlearn, it worked best to treat all features as categorical, so I binned the image extractor activations and the SVD representation into 10 quantile bins. You can see the structure of the input features for the two models below.</p> <p><img src="https://i.imgur.com/TsBN6D1.png" alt="Other models overview"></p> <h1>Stacking</h1> <p>At first, I used a linear regression to stack the models, but it did not work well. There were mainly 2 problems:</p> <ol> <li>Taking into account the increase when averaging predictions of each fold from the NN.</li> <li>Mismatch between train and test distribution.</li> </ol> <p>I solved the first problem by dividing the coefficients of the linear regression by the squared correlation between folds of the model. This somewhat accounts for the increase in score from averaging the predictions. I chose to use the squared correlation instead of just dividing by correlation because it slightly increased local CV score.</p> <p>The second problem was that neural networks were weighted significantly lower than expected. Of course, I didn't want to overfit to the public LB so adjusting the coefficients manually was definitely a bad idea. What i came up with is setting the sample weights of the linear regression to the out-of-fold predictions of an adversarial validation model. As such, the samples which were seen as closer to the test distribution were weighted higher and the coefficients of the neural networks increased strongly.</p> <p>When these two problems were solved, I arrived at a solid stacking strategy in which CV and LB score were closely related.</p> <p>To convert the output of the stacking model to classes, I just followed the train distribution of classes. 
I tried the public OptimizedRounder and various genetic algorithms but they performed on par with following the train distribution, so I just sticked with it.</p> <h1>Trouble with the Leaderboard</h1> <p>Like everyone else, I had problems with the mismatch between CV and LB. My validation strategy largely solved mismatches when just tuning hyperparameters of the models, but some significant discrepancies remained.</p> <h2>Binning Age</h2> <p>Binning the pet age into 20 quantile bins greatly increased my CV score and it intuitively makes sense to me because rather than looking at the exact age of a pet, adopters will typically divide pets into categories like young, middle aged or old. I guess there is little difference between chance for a pet that is e. g. 6 months old and another one that is 8 months old. However, this decreased my score on the public LB significantly.</p> <h2>Extreme sensivity to small changes</h2> <p>Another problem I faced was that the LB score was extremely sensitive to small changes in input data. Just changing the number of SVD components from 120 to 256 decreased my LB score by 0.007, although it slightly increased CV score.</p> <h2>Using ordinal regression</h2> <p>This competition is an ordinal regression problem. There are different classes but the classes have an ordinal relation. So why not treat is as such? Neural networks can directly handle ordinal regression by using 4 output neurons and encoding classes as:</p> <ul> <li>0: [0 0 0 0]</li> <li>1: [1 0 0 0]</li> <li>2: [1 1 0 0]</li> <li>3: [1 1 1 0]</li> <li>4: [1 1 1 1]</li> </ul> <p>More information on this kind of encoding <a href="https://stats.stackexchange.com/questions/140061/how-to-set-up-neural-network-to-output-ordinal-data">here</a>.</p> <p>The model can then be trained using binary cross entropy loss. The 4 outputs are summed to convert them back to a single scalar for each pet. This also greatly increased my CV score (by about 0.01) but decreased the public LB score.</p> <h2>A mistake increasing the LB score</h2> <p>For a long time, I used the following two lines of code in my kernel to create the gdp per capita feature: <code> train_df["gdp_vs_population"] = train_df["state_gdp"] / train_df["state_population"] test_df["gdp_vs_population"] = test_df["state_gdp"] / train_df["state_population"] </code></p> <p>When taking a closer look at this, you can see that it is wrong because the state GDP of pets in the test dataframe is divided by the state population of pets in the train dataframe. I didn't notice this until the last day in the competition. And I discovered that this wrong code gave me boost of an insane 0.007 on the leaderboard. So the LB score of my best correct submission was 0.485. This must have been just noise and it surely wouln't have held up in the private LB.</p> <p>But there is another strange thing about this. Why did the code even run? At first glance, I would think that it has to fail because the test dataframe and train dataframe have a different length so division can not be done correctly. It turns out that pandas automatically pads the series with NaN values to allow for a division to work. And that it then only uses the elements of this series up to the length of the test dataframe to make the assignment operation work. In my opinion, that is confusing behaviour and should probably be removed. </p> <p>Anyway, this is a testament for how noisy to public LB was, and how noisy the private LB probably also is because of the very little data that is available.</p> <p>Thanks. 
And I agree, I couldn't trust my CV that much after all. It is worth noting that my CV-tuned submission scored significantly worse than my LB-tuned submission on the private LB.</p> <p>Thanks! I'm working on making the kernel public. I had to run it again in a separate kernel because I chose my final solution from my main kernel and it is not the most recent version there, so it would be confusing to publish it. But it should be finished soon.</p> <p>I'm looking forward to your solution too if you'll publish it :)</p> <p>Thanks! And great question. One problem with my validation strategy is that in the case of 5 test splits and 5 "regular" splits I only use 80% * 80% = 64% of the data to train each model compared to the 80% of the data which the model in my submission is trained on in each fold. This might account for some mismatches.</p> <p>However, especially in this competition, there is a lot of unavoidable noise because we have very few samples to work with. </p> <p>In a future competition, when you find that CV and LB are not consistent, it is worth putting a lot of effort in making your CV as consistent as possible. Although my CV-tuned submission scored worse in the end, I only arrived at my "LB-tuned" submission because I was able to reduce the mismatches between CV and LB to a minimum and thus get a - in most cases - relatively accurate estimate of my LB score through local validation.</p> <p>It's important to note that when writing of "LB-tuned" I mean a submission which scored well in CV and LB, as opposed to CV-tuned where the submission scored <em>better</em> in CV but worse on LB. I did not just submit to the leaderboard and see what works.</p> <p>Thanks! The densenet model was not trained, I extracted the activations once for regular images and once for horizontally flipped images. It took about 20 minutes total. That might be a little unclear from the images. I'll update them.</p> <p>The rest of my models all take less than 30 seconds to train, except of LightGBM which takes about 1min 20s so time was not an issue.</p> <p>Edit: The images are now updated.</p> <p>Thanks! </p> <p>We average the <em>test</em> predictions, but not the <em>validation</em> predictions in normal K-Fold CV.</p> <p>When submitting to the Leaderboard, the averaged predictions on the test set of each fold are submitted. But when validating our model using regular K-Fold CV, the CV score is given by the mean of the validation scores of each fold. There is no averaging of predictions involved there.</p> <p>Yes. I explicitly specify categorical features when constructing the LightGBM dataset:</p> <pre><code>dtrain = lgb.Dataset(x_train_proc, y_train, feature_name=feature_names, categorical_feature=cat_features) </code></pre> <p>However, it even works well without that if the feature has low cardinality because LightGBM constructs bins of features internally anyway.</p>
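<p>The ordinal-regression target encoding described in this write-up is simple to implement; here is a minimal sketch (my own illustration): an AdoptionSpeed label k becomes a 4-dim vector whose first k entries are 1, the network is trained with binary cross-entropy on 4 outputs, and the sigmoid outputs are summed to recover a single ordinal score.</p> <pre><code>import numpy as np
import torch

def encode_ordinal(labels, n_classes=5):
    # label k -&gt; first k entries set to 1, e.g. 3 -&gt; [1, 1, 1, 0]
    targets = np.zeros((len(labels), n_classes - 1), dtype=np.float32)
    for i, k in enumerate(labels):
        targets[i, :k] = 1.0
    return torch.from_numpy(targets)

def decode_ordinal(logits):
    # Sum of per-threshold probabilities gives a continuous score in [0, 4]
    return torch.sigmoid(logits).sum(dim=1)

# Training would use torch.nn.BCEWithLogitsLoss() between the network's 4 raw
# outputs and encode_ordinal(adoption_speed_labels).
</code></pre>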
PetFinder.my Adoption Prediction
7th Place Solution Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks all! After reading all the sols shared till now, I feel we have the most simplest one :(</p> <p>Good for me! Steep Learning lies ahead:)</p> <h3><a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478">The code Is public now</a></h3> <p>First of all, thanks to Petfinder.my and Kaggle for hosting this great and meaningful competition. And congratulations to the winners! And it's the first time for many team members to grab the Gold!! Thanks All :)</p> <p>&gt;PS Pardon me if it isn't upto the mark, as it's my first time writing the solution desc..</p> <p><a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478">Kernel link</a></p> <p>The main issue with the PetFinder JSON Parsing script shared on public was that it wasn't accounting for many other json keys that were there and when you will add this, the script will break and this is what Shaz meant... (though the file having such contents were quite less, but we still added them because in case the test set had them in depth). Also we used all the files, not just the $-1$ tagged for everything (images, sentiments and metadata iirc)</p> <h2>FE's (Mainly, handcrafted)</h2> <p>Now coming to FE's, well we did a lot of work on that as well. - We tried cleaning the names col - Added Relative age feature <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L362">line</a> - Also many interaction b/w Health, Sterelized etc cols as well :) - We also tried to mimic the way PetFinder.my adds the rating to their images(i created an account on their website to try to see it myself as well), we called it <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L380">SEO Features</a> - After that we also ranked them accordingly as if someone visits the website, the pets are ranked somehow, so we just tried to mimic the same <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L399">rankbyG</a> - Also we added a lot of aggregates as well on basis of rescuer id/state as well ( i guess this worked for the ID because since the rescuer id was auto-generated, it had the TS effect to it hidden as well because if you will select first 4-5 chars and sort the df, you will see the same, like earlier joinied users might save more etc kinda analogy) - Next we added <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L556">nlp feats</a> as well which were borrowed from Quora's Shaz's work.. We tried POS tagging as well but didn't add it in the final solution as it was time consuming (but it helped improved the model) - You can see <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L804">this</a> for many basic features we added as well, basically adding many groupby feats, states stats, cat/dogs breeds stats, ranking, insta features, year, top breeds, pets alllowed according to malaysian laws etc.. 
- Next we also had <a href="https://www.kaggle.com/adityaecdrid/best-sub-selected-re-run-478#L993">image_dims_feats</a> - We also added <a href="https://arxiv.org/abs/1709.05424">NIMA</a> features as well (image quality; this ranked way higher for us) - Added features to indicate the dullness/brightness (refer to the Avito comp, Peter's solution and the great kernels there) - Also Lucas did a great job by verifying the CV with a double CV strategy as well..</p> <ul> <li><p>Modelling included LGB+XGB with different CV strategies and different STDs as well, for better ensembles.</p></li> <li><p>And there are a ton of things which didn't work as well....</p></li> </ul> <p>(Will be updated with the CV used and other things by teammates as well)</p> <p>Thanks and Happy Kaggling :)</p> <p>&gt; Would request team-mates ( <a href="/shaz13">@shaz13</a>, <a href="/lucamassaron">@lucamassaron</a>, <a href="/init27">@init27</a>, <a href="/init927">@init927</a>, <a href="/backaggle">@backaggle</a>) (in no order) to drop whatever I missed</p> <p>Check this out: mlcourse.ai </p>
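<p>To illustrate the group-by aggregate features mentioned above, here is a minimal pandas sketch (my own illustration, not the team's code; <code>RescuerID</code>, <code>State</code>, <code>Age</code>, <code>Fee</code> and <code>PhotoAmt</code> are columns from the competition data, the helper name is made up):</p> <pre><code>import pandas as pd

def add_group_aggregates(df, group_col, num_cols):
    # per-group statistics of a few numeric columns, merged back onto each row
    aggs = df.groupby(group_col)[num_cols].agg(["mean", "min", "max", "std", "count"])
    aggs.columns = [f"{group_col}_{col}_{stat}" for col, stat in aggs.columns]
    return df.merge(aggs, left_on=group_col, right_index=True, how="left")

# e.g. rescuer-level and state-level statistics
# train = add_group_aggregates(train, "RescuerID", ["Age", "Fee", "PhotoAmt"])
# train = add_group_aggregates(train, "State", ["Age", "Fee"])
</code></pre>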
PetFinder.my Adoption Prediction
99th Solution: Ensemble of 8 models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners of this wonderful competition. This is my second silver medal. <a href="https://www.kaggle.com/bigswimatom/ensemble-8-model">https://www.kaggle.com/bigswimatom/ensemble-8-model</a></p> <p>I appreciate all the Kagglers who published the following kernels. - <a href="https://www.kaggle.com/rookzeno/ensemble-other-people-kernel">https://www.kaggle.com/rookzeno/ensemble-other-people-kernel</a> @rookzeno - <a href="https://www.kaggle.com/ranjoranjan/single-xgboost-model">https://www.kaggle.com/ranjoranjan/single-xgboost-model</a> @ranjoranjan - <a href="https://www.kaggle.com/reppy4620/xgboost">https://www.kaggle.com/reppy4620/xgboost</a> @reppy4620 - <a href="https://www.kaggle.com/wakamezake/lightgbm-image-features-etc">https://www.kaggle.com/wakamezake/lightgbm-image-features-etc</a> @wakamezake</p>
PetFinder.my Adoption Prediction
[22nd place] From 325th in Public LB to 22nd in Private LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers, this is my first competition and I am excited to share our solution. Please provide feedback if you spot anything!</p> <p>We’d like to thank for Kaggle community and Petfinder.my for this learning experience. We learned a lot from those who share their knowledge and want to share our method with the hope of contributing to the community.</p> <p><a href="https://www.kaggle.com/enisimsar/22nd-place-solution">See our kernel here</a></p> <h1>Data Loading and Feature Extraction</h1> <h2>Metadata, Sentiment, Image and Their Aggregations</h2> <ul> <li>We took this part from <a href="https://www.kaggle.com/ranjoranjan/single-xgboost-model">Single XGBoost model</a> </li> <li>Addition to kernel above, we added min, max and std to its aggregations. Image number for each pet is also added. <h2>Breed Features</h2></li> </ul> <h3>Handling Mix Breed</h3> <p>We want to detect mixed breed instances and put specific(values other than 0-unknown- or 307-mixed-) breed information to <em>Breed1</em> column. Here is our strategy. 1. If any of <em>Breed1</em> or <em>Breed2</em> is mix breed(denoted by 307), we set *mixed_breed* column to 1. Then we change 307 value with 0. 2. For each mixed breed instance we set Breed1 as Breed1 + Breed2. Aim is to hold specific breed information in the <em>Breed1</em> column as much as possible. 3. If <em>Breed1</em> and <em>Breed2</em> columns are different than 0 for an instance, we set *mixed_breed* column to 1.</p> <h3>Breed Prediction</h3> <p>For some instances we do not have specific breed type(<em>Breed1 == 0</em> and *mixed_breed=1*). We’ve applied breed prediction only for dogs since we were only be able to find proper dataset for dogs. </p> <p><a href="https://github.com/sez-ai/Dog-Breed-Classifier">See our dog breed type prediction code here</a> <a href="https://www.kaggle.com/enisimsar/dog-identification-pretrained">Get pretrained model here</a></p> <p>We insert a column *pred_breed*, and fill that column with predicted values. If we do not predict breed since that instance already has, we simply copy <em>Breed1</em> value to *pred_breed*.</p> <h3>Creating pseudo-feature from breed names</h3> <p>Search breed names, if they contain any of the adjective below, add it as a categorical feature. [‘hair’, ’domestic’, ‘short’, ‘medium’, ‘retriever’, ‘terrier’, ‘tabby’, ‘long’]</p> <h2>Age</h2> <p>9 Bins are created for age column. See code for details.</p> <h2>Name</h2> <p>Following features are extracted from names. [‘Name_len’,’Name_isna’,”Name_with_numbers”]</p> <h2>Fee</h2> <p>5 Bins are created for fee column. See code for details.</p> <h2>State</h2> <p>We used *state_gdp* and *state_population* just like everyone else.</p> <h2>Word - Sentence Embeddings</h2> <p>We used Spacy’s <code>en_vectors_web_lg</code> and applied SVD with n_components 32.</p> <h2>Image Features</h2> <p>We were too lazy to use all images thus only used first photos of pets… Bottleneck image features are extracted from DenseNet121, Inception V3 and Xception with applying SVD for each one.</p> <h2>External Data for Breed</h2> <ul> <li>Scrapped from petfinder.com, note that it’s not petfinder.my. 
<a href="https://www.kaggle.com/enisimsar/petfindercomexternal">petfinder.com External Data</a> </li> <li><a href="https://www.kaggle.com/hocop1/cat-and-dog-breeds-parameters">Cat and dog breeds parameters</a></li> <li><a href="https://www.kaggle.com/rturley/pet-breed-characteristics">Pet Breed Characteristics</a> <h2>Rescuer ID</h2></li> </ul> <p>There were many discussion about Rescuer ID, we also observed it causes overfitting. Thus one of our kernel was playing with rescuer ID while other was dropping this feature. Well, sometimes overfitting wins…</p> <p><a href="https://www.kaggle.com/enisimsar/final-submission-with-rescuer-features#Rescuer-ID">See code for details</a></p> <h2>Aggregations</h2> <p>To enrich features even more, we have extracted many aggregations. <a href="https://www.kaggle.com/enisimsar/final-submission-with-rescuer-features#Aggregations">See code for details</a></p> <h2>Freq Encoder Categorical Features</h2> <p>Thanks to <a href="https://www.kaggle.com/c/avito-demand-prediction/discussion/55521">Sample categorical feature encoding methods</a></p> <h1>Model</h1> <p>We used 2 models, XGBoost and LightGBM. Golden Section Search method is used for OptimizedRounder. Thanks to @hocop1 at <a href="https://www.kaggle.com/c/petfinder-adoption-prediction/discussion/76107">How to use regression here? | Kaggle</a>. We’ve called OptimizedRounder in every fold and stored coefficients for each model. Then took mean of that coefficients for final use. </p> <p>As for final submission we’ve simply averaged coefficients and regression output of LGBM and XGBoost. </p>
PetFinder.my Adoption Prediction
[72nd place solution] Stacking Madness (only CPU)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used DenseNet121 only for XXX-1.jpg. Then it takes only 4500 sec for DenseNet121.</p> <p>As it takes time to translate the text about my model into English, I will only share the diagrams for now.</p> <p>I wrote 2 kernels. Please check and upvote them. 1. <a href="https://www.kaggle.com/nadare/72nd-place-key-points">essence of my solution</a> 2. <a href="https://www.kaggle.com/nadare/xlearn-model-cv-42-lb">xlearn example for PetFinder</a></p> <p>CPU: 17701.8 sec <img src="https://pbs.twimg.com/media/D30E-vzV4AAxdyP.png" alt="72nd place solution"></p>
PetFinder.my Adoption Prediction
part of 9th place (denoising auto-encoder NN)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: PetFinder.my Adoption Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank petfinder.my for hosting such a challenging competition. Also thanks to my wonderful teammates @peterhurford @rdizzl3 . Our solution (which we will share in more detail in the coming days) consists mainly of 4 NNs and several XGBs. In our main summary we will also address feature engineering, the CV scheme, post processing, and more details on the pretrained image NNs. Here I want to give a first glimpse of what the general architecture of our NNs looks like. They are all different and have quite low correlation, but follow the same idea: using denoising auto-encoders for automatic feature engineering, as used by @mjahrer in the Porto Seguro’s Safe Driver Prediction competition and explained <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629#250927">here</a>. Here is an illustration of one of our NNs:</p> <p><img src="https://i.imgur.com/gWDC532.jpg" alt=""></p> <p>The training consists of two stages. </p> <p>First train an auto-encoder on train+test to predict the numerical and categorical variables, also using text + image. Then in a second stage use the encoder part of that auto-encoder to encode your variables and add a classification head. To be more specific:</p> <ol> <li><p>Stage 1: Training the auto-encoder. Use tf-idf for feature extraction of the text data (sentiment.json + label.json + description text), one-hot encoding of the categoricals, and a pretrained image model (we used different ones) to get image features. Concatenate everything into one vector and use several fully connected layers to train the reconstruction of the numerical and categorical features. In that way the auto-encoder will learn the interactions between features, which I think was the most important task in this competition. A nice side effect is that, since the auto-encoder works unsupervised, you can also leverage the test data. </p></li> <li><p>Stage 2: Training the classifier. Throw away the reconstruction head of the auto-encoder and you end up with an NN that is able to interpret the given features and their interactions. This can now be used to perform the classification task. One thing (besides many others) that I learned from @mjahrer's post was to concatenate the main part of the auto-encoder to leverage even more from its encoding. I also realized that, since I only trained the auto-encoder to reconstruct categorical and numerical features, it might help to feed the image features to the classification head again.</p></li> </ol> <p>In summary, the main advantage (at least for me) compared to other NN architectures was that an auto-encoder was easier to tune to handle overfitting. If you, for example, drop out some of the input features for the auto-encoder, it can practically never overfit, and as a side effect it even learns to reconstruct variables from others, e.g. reconstruct type from image. As mentioned above I will give more details in our overall summary, but I already wanted to give you a glimpse of what we did.</p> <p>Yes, I used swap noise. That worked best here.</p> <p>We ignored multiple images</p> <p>Everything is shared apart from the classification head. 
The filled yellow layers are re-ordered (concatenated), but in the classification task they still have the same weights they learned during auto-encoder training.</p> <p>you are funny</p>
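<p>For readers who want to see what "swap noise" plus a denoising auto-encoder looks like in code, here is a minimal PyTorch sketch. It is my own illustration of the general idea, not the team's model; the layer sizes and noise rate are arbitrary placeholders.</p> <pre><code>import torch
import torch.nn as nn

def swap_noise(x, p=0.15):
    """Replace a fraction p of entries with the same column's value from a random other row."""
    noisy = x.clone()
    mask = torch.rand_like(x) &lt; p
    perm = torch.randint(0, x.size(0), (x.size(0),))
    noisy[mask] = x[perm][mask]
    return noisy

class DAE(nn.Module):
    def __init__(self, n_features, hidden=1500):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: train on train+test features to reconstruct the clean input, e.g.
#   loss = nn.MSELoss()(model(swap_noise(batch)), batch)
# Stage 2: reuse model.encoder (optionally concatenating its hidden activations)
#   and train a small classification head on top of it.
</code></pre>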
Planet: Understanding the Amazon from Space
0.93017 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I've just pushed the PyTorch code <a href="https://github.com/EdwardTyantov/pytorch-kaggle-amazon-space">https://github.com/EdwardTyantov/pytorch-kaggle-amazon-space</a>. Nothing special inside, but the code is very clean &amp; structured, so it can help someone.</p> <p><strong>Results</strong>:</p> <ul> <li>Best single model: mixnet_v6: public: 0.92905, private: 0.93071</li> <li>Best blended ensemble: public: 0.93015, private: 0.93217</li> <li>Best submission during the competition: public: 0.93023, private: 0.93168 (also an ensemble)</li> </ul> <p><strong>Quick overview of the solution</strong>:</p> <p>The best single model (mixnetv6) solution consists of the following tricks:</p> <ul> <li>6-channel input (3 JPG channels, NIR channel, NDWI index, SAVI index)</li> <li>model: resnet18 on jpg, resnet18 on nir+indexes, concat -&gt; 256 embedding FC + final FC <ul><li>jpg branch lr modifier 0.05 to base LR, for nir branch - layer{3,4} - 1.0, layer{2,3} - 0.1, FC - 1.0 </li></ul></li> <li>plateau scheduler on val loss (cross-entropy), patience=3</li> <li>early stopping: 6 epochs</li> <li>train time augmentation: shift, flip, scale, rotate, transpose</li> <li>test time augmentation, 6x: as-is, rotate 90*{1,2,3}, flip x, flip y</li> <li>the standard-for-this-challenge threshold search for F2 (I implemented a per-class search first - it is more consistent, but the default on-scale search is better)</li> </ul> <p>Best Ensemble:</p> <ul> <li>trained various models on jpg, mix channels <ul><li>models: <ul><li>densenet{121,169} on jpg, 5 folds</li> <li>mixnetv6, mixnetv3 (different LRs) 5,6,7 folds</li> <li>wideresnet on 6-channel (for the mix branch an unpretrained WideResNet), 7 folds</li> <li>resnet18 + embedding FC on 8 folds</li></ul></li></ul></li> <li>the best submission was based on weighting predictions using the holdout F2 score, weight=((score - min_score)/max_score)**0.5</li> </ul> <p>Slides about the challenge and my solution: <a href="https://www.slideshare.net/Eduardyantov/kaggle-reviewplanet-understanding-the-amazon-from-space">https://www.slideshare.net/Eduardyantov/kaggle-reviewplanet-understanding-the-amazon-from-space</a></p>
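<p>For reference, the two spectral indexes used in the 6-channel input above can be computed from the TIFF bands roughly like this. This is a sketch under the assumption that the TIFF planes are ordered blue, green, red, NIR; scaling and normalisation are left out.</p> <pre><code>import numpy as np

def make_six_channel(rgb, tif, eps=1e-6, L=0.5):
    # rgb: H x W x 3 JPG image, tif: H x W x 4 array assumed to hold (blue, green, red, nir)
    b, g, r, nir = [tif[..., i].astype(np.float32) for i in range(4)]
    ndwi = (g - nir) / (g + nir + eps)                 # water index
    savi = (1 + L) * (nir - r) / (nir + r + L + eps)   # soil-adjusted vegetation index
    extra = np.stack([nir, ndwi, savi], axis=-1)
    return np.concatenate([rgb.astype(np.float32), extra], axis=-1)  # H x W x 6
</code></pre>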
Planet: Understanding the Amazon from Space
3rd place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>There was some modification of bisection method which was performed on validation data. Obtained thresholds then used for test data. </p> <p>On first step we search for single threshold for all classes. On the second step search optimal threshold independently for each class, while fix all other thresholds. Step 2 can be repeat several times since it keep increasing accuracy.</p> <p>There are many syntax and parameters changes in Keras 2.0. Not all the networks worked fine in 2.0 version while I solve Amazon problem. But I plan to move on Keras 2.0 in next projects, since all CNNs now fixed to work with 2.0.</p> <h1>Solution summary</h1> <p><strong>Software</strong></p> <ul> <li>Windows 10, Python 3.4 + 3.5</li> <li>Keras 1.2.1 - main framework for CNN training</li> <li>Theano 0.9.2 - main backend</li> <li>Tensorflow - additional backend for neural nets which is not supported by Theano (ResNet152 and XCeption)</li> <li>XGboost, LightGBM, Keras - main classificators for final ensemble</li> <li>OpenCV - image preprocessing</li> </ul> <p><strong>Hardware</strong></p> <ul> <li>We used 6 GPUs: 2*1080Ti + 4*1080. </li> <li>Parallelization was made using either different fold on different GPUs, or different CNNs on different GPUs.</li> </ul> <p><strong>Training Time</strong></p> <ul> <li>On single GPU around a month for full pipeline. On 6 GPUs it’s possible to finish in around 1 week.</li> </ul> <h1>Dataflow</h1> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/217773/7225/Dataflow.png" alt="Dataflow" title=""></p> <h1>Set of CNN based models</h1> <p>Due to problem with TIFFs all CNN models used only JPEG images</p> <ul> <li>11 models for different CNNs with 17 output neurons with sigmoid activation. Set of CNNs: INCEPTION_V3, INCEPTION_V4, DENSENET_121, DENSENET_169, DENSENET_161, RESNET50, RESNET101, VGG16, VGG19, RESNET152, XCEPTION</li> <li>1 DENSENET_121 model with weather classes only: ['clear', 'partly_cloudy', 'haze', 'cloudy']. “Softmax” activation because classes is mutually exclusive - 4 output neurons.</li> <li>1 DENSENET_121 model with land classes only - 13 output neurons.</li> <li>17 DENSENET_121 models for single classe. Each model has 1 output neuron to predict single class. 50/50 batches, where half of images have class presented.</li> </ul> <p>We used 5 KFold Cross Validation. So each model has 5 different weights set. 30 CNN models in total. 
150 weight files</p> <h1>Training Process and Data Augmentation</h1> <ul> <li>Keras module for Python with Theano or Tensorflow backend.</li> <li>Typical batch size around 20 images for small nets, 16 images for large nets due to GPU memory limitation</li> <li>Batches created on the fly, so no need to store all the data in memory</li> <li>As optimizer we used Adam, with learning rate ~0.00003</li> <li>We used “logloss” loss function, since direct usage of F2-score in training process gave worse results.</li> <li>Single class models (for rare classes) used 50/50 batches, where half of images have class presented</li> </ul> <p><strong>Data augmentation includes</strong></p> <ul> <li>Random crops: chose some part of picture and resize it to CNN input shape (224x224 or 299x299)</li> <li>Random mirrors or 90 degrees rotations (8 possibilities in total)</li> <li>Random intensity changes</li> </ul> <h1>Validation and Test Time Augmentation (TTA)</h1> <ul> <li>We used 5 KFold, so to process Train images for validation we process it one time with corresponding fold model. In total 40K images</li> <li>To process test images each image must be processed with each fold model. 5 times in total, so for test data we process 300K images.</li> <li>To increase the accuracy we used TTA. This mean we process each image several times with some changes. We used 32 variation of single image, predict each and use mean as final prediction: -- For Train images 1.2M predictions -- For Test images 9.6M predictions</li> </ul> <p>After this we got 2 text CSV files with probabilities for each image to have predicted class. One CSV file for train images, second for test images. These files used later in second level classifiers.</p> <h1>Neighbors</h1> <p>More details in this thread: <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36738">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36738</a></p> <p>Based on neighbours IDs, panorama IDs and CNN predictions we created the following set of features:</p> <ul> <li>Average CNN predictions for 4 neighbours</li> <li>Average CNN predictions for 8 neighbours (including diagonal elements)</li> <li>Average CNN predictions for whole Panorama ID</li> </ul> <p>Each image got “panorama ID” and “panorama Size“ as features for second level models or “N/A” depending on following rules:</p> <ul> <li>panorama must has more than one element</li> <li>panorama must contain at least one element from train and at least one element from test parts of data</li> </ul> <h1>2nd level models: XGBoost blender</h1> <p>We have large amount of CNN predictions and features. Now we need to some methods to join them altogether for more accurate predictions. One of methods is to use many random XGboost runs with random parameters.</p> <p>XGboost can predict only one class at the time. So each run create 17 different XGBoost models, each predict its own class. On the input we had ~300 features. 
What we vary from run to run:</p> <ul> <li>Folds number [4 - 10]</li> <li>Learning rate [0.06 - 0.45]</li> <li>Max Depth [2 - 5]</li> <li>Subsample [0.6 - 0.99]</li> <li>Colsample by tree [0.6 - 0.99]</li> </ul> <h1>2nd level models: Keras blender</h1> <p>Keras neural net classification models are good for the following reasons:</p> <ul> <li>They’re very different from the XGBoost models</li> <li>They can predict all 17 classes at once, catching the mutual influence of classes. The bad thing is that in most cases they have slightly worse accuracy than XGBoost or LightGBM.</li> </ul> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/217773/7230/Keras-blender.png" alt="Keras blender" title=""></p> <p>Variation parameters:</p> <ul> <li>Number of neurons on each level: 1st: 400 - 700, 2nd: 350 - 600, 3rd: 200 - 500</li> <li>Dropout value</li> <li>Random choice of activation: ’RELU’, ‘ELU’, ‘PRELU’</li> <li>Batch size: 200-1000</li> <li>Learning rate: 1e-5 - 1e-3</li> <li>Early stopping: 50 - 150</li> <li>Number of folds: 4 - 10</li> </ul> <h1>Final Ensemble</h1> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/217773/7229/Ensemble.png" alt="Ensemble" title=""></p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/217773/7228/Score1.png" alt="Score" title=""></p> <p>It's possible to get 2nd place with this solution:</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/217773/7232/2nd-place.png" alt="2nd place" title=""></p> <h1>Code</h1> <p><strong><a href="https://github.com/ZFTurbo/Kaggle-Planet-Understanding-the-Amazon-from-Space">Github repo</a></strong></p> <p>Trial and error, plus previous experience. I think the LR can be tuned further for faster performance with the same accuracy.</p>
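<p>A small sketch of the two-step threshold search described at the top of this write-up (one shared threshold first, then per-class refinement while keeping the other thresholds fixed). It uses scikit-learn's <code>fbeta_score</code> on out-of-fold predictions and is illustrative only, not the authors' code:</p> <pre><code>import numpy as np
from sklearn.metrics import fbeta_score

def search_thresholds(y_true, y_prob, n_rounds=3):
    # y_true, y_prob: N x 17 arrays (binary labels and predicted probabilities)
    grid = np.arange(0.05, 0.51, 0.01)
    # Step 1: one shared threshold for every class
    best_global = max(grid, key=lambda t: fbeta_score(
        y_true, y_prob &gt; t, beta=2, average="samples"))
    thr = np.full(y_prob.shape[1], best_global)
    # Step 2: refine each class independently while the others stay fixed;
    # repeating the sweep keeps improving the score slightly
    for _ in range(n_rounds):
        for c in range(y_prob.shape[1]):
            def score(t, c=c):
                tmp = thr.copy()
                tmp[c] = t
                return fbeta_score(y_true, y_prob &gt; tmp, beta=2, average="samples")
            thr[c] = max(grid, key=score)
    return thr
</code></pre>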
Planet: Understanding the Amazon from Space
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh you could definitly have achieved similiar results with keras+tf or any other neural networks library :)</p> <p>I optimised the threshold on the validation. :/</p> <p>I used the same binary-crossentropy for both.</p> <p>Good explanation! I will definitely try more sophisticated research of semisupervised learning the next time, but in this competition to be honest I was just happy I could increase my score and I didn't think much further than that :)</p> <p>I have no experience with "classic" classifiers like SVMs or random forests, but still wanted the best possible classifier. XGBoost seems to win most of the competitions on Kaggle, so I just chose it for that reason.</p> <p>For me it was hard to find a good α(t) so I just fixed it to 1. I used 25% of the test data for semi-supervised learning. Instead of using α(t) used the running average and only started to adapt the labels after a few epochs. Keep in mind this system is a not a result of structured research. I just tried things and if they worked I build upon these. I think there are better ways to implement Pseudo-Labeling!</p> <p>Yes. 20%. No stratification.</p> <p>You are right. I will upload the code soon and then you can just look it up there.</p> <p>I like the low-level approach. In the end Pytorch is a tensor library like tensorflow. I never used such libraries, only keras. So I appreciated how easy it is to do custom implementations :)</p> <p>Code here: <a href="https://github.com/Mctigger/KagglePlanetPytorch">https://github.com/Mctigger/KagglePlanetPytorch</a> This code is not meant to reproduce the 9th place submissions, but to give you a basic idea of our approach.</p> <h1>CCT: 9th place solution</h1> <p>This was my first serious Kaggle competition and it has been an amazing experience. I went from merely understanding basic convolutional networks to understanding them a little better (in a practical way at least).</p> <h2>1. First attempts</h2> <p>In the beginning I tried custom convolutional neural networks. I got inspiration from VGG-like networks and just tried different architectures to get a hang of architectures and their performance. Since I started with "only" a GTX1060 my archiectures where limited to small to medium sized models. When I saw Heng CherKengs repository I tried custom implementations of DenseNets and ResNets in Keras, which has been a great learning experience, but did not yield results better than 0.927 on the public leaderboard.</p> <h2>2. Getting serious</h2> <p>Heng CherKeng convinced me to use Pytorch instead of Keras and I instantly learned to love it. Also a big advantage is the availability of many pretrained models, which have been key to improving my score. Although I was lacking any structure approach to this competition at this time I ranked quite good on the leaderboard. For my score of ~0.9315 I used an ensemble of the following models: - ResNets: 34, 50, 101, 152 - DenseNets: 121, 163, 169, 201</p> <p>I used SGD with momentum and weight-decay. The learning-rate scheduling was manually determined on based on the local validation loss and the leaderboard f2-score. 
I used data augmentation for training which included arbitrary rotations, flipping, translation and zooming. I also tried blurring and distortion, which did not change much. Also I did not see a difference between 90° rotations and arbitrary rotations. To continue training models which had been pretrained on Imagenet, I just removed the last layer and replaced it by a simple fully connected layer with sigmoids. I did not use combinations like sigmoids + softmax (for weather) since I expected mislabeling of the data in the test set, and also for the f2-score I would rather risk some false positives than false negatives. I used block-wise learning rates: - For DenseNets I used a constant learning rate for all layers, but trained the classifier alone for a few epochs in the beginning. - For ResNets I froze all weights and unfroze them every 2 epochs until the classifier, 4 convolutional blocks and the low-level filter weights were unfrozen. From then on I continued to train on all layers.</p> <p>I trained exclusively on JPG files. The persons who labeled the data would have also only seen the JPG files. To get hard labels for our submissions we used the f2-score optimiser from the kernels. Also, my 8 models were trained on the full dataset and I used many submissions to find out the best epoch of each model. This obviously means I probably overfit the public leaderboard pretty hard at that point in time.</p> <h2>3. Teaming up</h2> <p>Finally, approximately one month before the competition end I teamed up with Cpruce (<a href="https://www.kaggle.com/cpruce">https://www.kaggle.com/cpruce</a>) and Chris Chross (<a href="https://www.kaggle.com/crailtap">https://www.kaggle.com/crailtap</a>). While Cpruce had the same leaderboard score as me and the same approach, ChrisChross used 5-fold cross validation for his models and replaced the final fully connected layers of his ResNets by 3 fully connected layers with dropout! This led to a leaderboard score of ~0.93177 for an averaged ensemble of 5 folds. His overall leaderboard score was ~0.9325, and by combining our models with weighted voting we got to ~0.9330.</p> <h2>4. A structured approach</h2> <h3>Cross-validation</h3> <p>Now our final approach. So far I had never cross validated my models, since I thought this would take too much time, but thankfully I got a GTX1080TI and 5-fold CV did not seem too time consuming anymore. First I just tried 5-fold CV on all my existing models, averaged the results and added them to the ensemble, which did not increase our score substantially. The best 5-fold CV ensembles scored ~0.93050-0.93100.</p> <h3>Semisupervised learning</h3> <p>Based on Sander Dieleman's competition reports (<a href="http://benanne.github.io/2015/03/17/plankton.html">http://benanne.github.io/2015/03/17/plankton.html</a>) I tried pseudo-labeling to fight overfitting of my models. I added a part of the test data to the training procedure and used the (soft) outputs of the network as labels. I initialized the labels for the test data with our best leaderboard submission and used a running average of the network outputs and the initialization as the label. This led to a minor increase in model performance for DenseNets, but ResNet-50 and ResNet-101 improved a lot.</p> <h3>XGBoosted TTA</h3> <p>This has probably been the most important part of our final approach. We used 8-14 different transformations for test-time augmentation (TTA). 14 had the best performance, but in the end there was not enough time to rerun these for all existing models. 
First we combined our TTAs by averaging, which was already quite good. When I read about model stacking I got the idea to train a classifier to combine the TTAs in the most efficient way. Also, since we already used 5-fold cross validation, it was trivial to implement stacking. As the classifier we used XGBoost, even though we had never worked with it before. Now we got 0.93200 for ResNet-50 and ResNet-101 and big jumps for the worse models (for example ResNet-34 went from ~0.9307 to ~0.9312). We combined the new models with weighted voting (based on leaderboard f2) and removed most of the old models from our ensemble. This gave us our final public leaderboard score of 0.93340 and private leaderboard score of 0.93220.</p> <h3>XGBoosted Ensembling</h3> <p>We tried the same procedure we did for TTA for ensembling all TTAs of all models, but we got very bad results (0.900). Either an implementation error or a systematic error, we do not know yet.</p> <p>We will upload our code soon!</p> <p>Hi, what do you mean by unsupervised features? SOTA is just the current best submission we had on the public leaderboard at that point. We used it as labels for the test data. :) Btw: I did not try normal pseudo-labeling so I do not even know whether this strategy increases performance!</p> <p><a href="https://github.com/Mctigger/KagglePlanetPytorch">https://github.com/Mctigger/KagglePlanetPytorch</a></p>
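<p>My reading of the running-average pseudo-labeling described in the semi-supervised learning section above, written down as a tiny sketch (not the authors' code; the momentum value and warm-up length are made-up placeholders):</p> <pre><code>import numpy as np

class PseudoLabels:
    def __init__(self, init_probs, momentum=0.9, warmup_epochs=3):
        # init_probs: N_test x 17 soft labels taken from the best leaderboard submission
        self.labels = init_probs.astype(np.float32)
        self.momentum = momentum
        self.warmup = warmup_epochs

    def update(self, epoch, model_probs):
        # keep the initial labels fixed for a few epochs, then track the model's outputs
        if epoch &gt;= self.warmup:
            self.labels = self.momentum * self.labels + (1 - self.momentum) * model_probs

# During training, a fraction of each batch can be drawn from the test set,
# using self.labels as the soft binary-crossentropy targets.
</code></pre>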
Planet: Understanding the Amazon from Space
Code from a record breaking 903 place drop
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>FYI, the best single model from that codebase was a fbresnet-200 that hit 0.93143 on the public LB and 0.92915 on the private with 8x flip/transpose test-time augmentation enabled. My teammate had a few models that bested that score on the private LB almost cracking .93 with ResNet-50. </p> <p>So, instead of rounding out the bottom of the top 15 with a private LB score of 0.93202 and public 0.93386, we ended up in the bottom 50 after a 903 place drop thanks to an undetected issue jumbling filenames of just private LB in one of the scripts we were using to process and ensemble our submissions. Damn!</p> <p><a href="https://github.com/rwightman/pytorch-planet-amazon">https://github.com/rwightman/pytorch-planet-amazon</a></p> <p>Anyways, I wrote the code I used for this competition to be re-usable in the future. Nothing super fancy here, just a good collection of models (ResNets, DenseNets, fbresnet200, Inception-V4, Inception-Resnet-V2, Wide-Resnet 50, ResNext) pulled together from a variety of sources with a lot of different training and tweaking options, a convenient cross validation setup for training, and some test-time augmentation.</p> <p>My teammate also used Pytorch for his codebase so we had some model overlap but ended up favouring different hyper parameters and models so our ensembles worked well together. Most of our models were trained at 256x256 resolution with quite a bit of augmentation but we also had several at 224 and 299. </p> <p>F2 threshold optimization was used (thanks anokas!) and additional manual thresholding was done at ensembling time. We definitely overfit a bit to the leaderboard for the 'best' submissions as many did. Several earlier submissions that I unjumbled and re-verified since competition close performed better on the private than our best public scores. </p> <p>Note: the fateful CSV handling code is not in the above repo, sorry ;) </p> <p>Thanks! Hehe, you're right, I could still get a medal. Anyone who likes the code, give this a bump :) </p> <p>It was a complete fail... I saw your comments in their github issue tracker. It was very unstable, and within a few epochs the loss started skyrocketing. I tried clipping gradients and fiddling with the parameters but got nowhere so I abandoned that idea pretty quickly.</p> <p>This makes for a more amusing story :) </p> <p>Yes, that was the goal, and a very inefficient/brute force way of trying to achieve it. The .2 was a fudge factor to give a wider margin of acceptance so it had a hope of succeeding in a reasonable time. For the most part I used a 5-fold and 10-fold set, I was going to experiment with a more granular 12 fold and use two of the 12 for different tiers of CV but never got around to running that is it is. </p>
Planet: Understanding the Amazon from Space
My brief overview of my solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to Team plant and Team Russian Bears.<br> Special thanks to @Heng CherKeng for your sharing on the forum; your codes and reports are very helpful, and the competition would not have been so interesting without your generous contributions. <br>Here is a brief overview of my solution.<br> </p><h2>Challenges</h2><br> 1. Submissions are evaluated based on the F2 score, so we must search for thresholds after predicting, and we cannot find perfect thresholds.<br> 2. Label noise. We can only control the side effect of the noise.<br> 3. The public scores are very close, and the division of the test data can cause a shakeup.<br> <br> <h2>Solution</h2><br> My solution is similar to Heng’s, so I will only introduce the differences.<br> 1. I wrote an F2 loss function in PyTorch (thanks to @Bruno G. do Amaral, <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/34484">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/34484</a>) and added it to the logloss. I found it was helpful, but I did not check how much improvement we can get, because I only compared them using resnet34 with 15 epochs and did not predict and submit.<br><br> 2. I used "Single Image Haze Removal using Dark Channel Prior" from <a href="http://kaiminghe.com/">http://kaiminghe.com/</a>. I found that it was very useful; my networks can '<b>see</b>' the images more clearly. I hesitated to use it on weather labels, but I found that it’s quite good on all kinds of labels, especially road/water/habitation... <br><br> 3. I trained a hard example mining network on simple-net (based on Heng’s code): I selected the 1/3 of examples with the largest loss and did back-propagation. I also trained with other hard example methods, but they did not help a lot, so I did not use them.<br><br> 4. I trained different networks with 64*64, 224*224 and 256*256 inputs, and I used dilation in a resnet34 network. Different networks have different capabilities on different labels. For example, SimpleNet 64 has good performance on the label "clear".<br><br> 5. Because I have networks with different capabilities, I did Ridge regression on them to predict each label separately; I mean, I have 17 regression models.<br><br> 6. My final submissions are based on some strong models (based on CV); my best submission was a vote based on 9 models.<br><br> 7. I selected the tiff images with correct labels (20000+ in the train set); although they are clearer, I found they had little value. I guess my dark channel pre-processing was strong enough. 
And I was afraid that the difference in distribution between the train and test sets would be harmful to the F2 thresholds, so I did not use them.<br><br> 8. I also found I could get big images like ZFturbo did (<a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36738#latest-205616">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36738#latest-205616</a>), but I did not use it because I thought it would be of little value, since convolutional network models are strong enough.<br><br> 9. I tried Bayesian inference but I found it was not helpful.<br><br> 10. I compared the perfect F2 thresholds with the thresholds I searched for; this can lead to a 0.001 difference. What are perfect F2 thresholds? If the distribution of the test data is exactly the same as the train set, then the thresholds are perfect. I tried a lot of ways (such as normalizing the predictions, controlling the ratio of the labels in the test set), but I found that I could not get good results, and it seemed to be very difficult for all of us to reach perfect thresholds, so I just let it be.<br><br><h2>Shake up</h2><br> I tried my best to evaluate the possible shakeup in the final stage of this competition.<br> I simulated the test set using half of the train set, and I split the simulated test set 66:34 using different random seeds. I found that as the seed changes, the public-private score difference changes by 0.001 to 0.0025.<br> <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/amazon_demos/gap.jpg" alt="gap" title=""><br> The x axis is the seed index, y is the simulated public-private difference (I sorted the seeds by the value public-private). <img src="https://raw.githubusercontent.com/bestfitting/kaggle/master/amazon_demos/dist2.png" alt="dist" title=""> The distributions of the simulated public-private values. <br><br> As we can see, the shakeup happened on the real Leaderboard. <br> I thought carefully and I checked the public LB; I had to persuade myself that the public scores were not a perfect metric of the capabilities of the models. It was difficult, because at first glance we have 40000+ images in the public test set, so it should be stable.<br><br> But<br> 1. The gap between us was very small (0.0005-0.001).<br> 2. I thought the public-private score change along with the seed is caused by Hard Examples, and these Hard Examples can only be random guesses, as they are caused by label noise.<br><br> Finally, I persuaded myself with a guess: if we labeled all the images ourselves 3 times, some of the images would have different labels, because we cannot say for sure whether some images are haze or cloudy, road or water, slash burn or not, blooming or not... So the noise is random.<br> And if the noise is random, the score gained from the public LB will be lost on the private LB. 
<br> So, I adjusted my goal to keeping myself in the TOP 10, decided not to care about the public LB in the last week, and tried to seek the most stable way to ensemble my models. I threw away any models that might cause over-fitting, and I used just voting and ridge regression. And the final result is a big surprise to me.<br> <br><br> By the way, I entered this competition partly because I have used Google satellite maps for years when preparing my mountain bike trips; they are very useful when we cross mountains. I once thought I could benefit from my ability to distinguish water/road or habitation or not, but I found this kind of skill can only be used on my trips. :)<br> Thanks to Kaggle and the organizer!<p></p> <p>We only have probabilities for each class when training and we cannot use thresholds on them, so we cannot use the F2 function directly; and the most important reason is that we must make it differentiable and back-propagate the loss.</p> <p>Thanks, waiting for your solution, I think it will be a very good one.</p> <p>Hi, Alex,<br> I have no time to clean my codes in the next weeks, and the good news is Heng will collect all solutions and re-implement them. At the moment, you can try a schedule like this:</p> <pre>def schedule(self, net, epoch, all_data=False):
    self._sgd = optim.SGD(net.parameters(), lr=0, momentum=0.9, weight_decay=0.0005)
    self._cur_optimizer = self._sgd
    lr = 0.005
    if epoch &gt; 5:
        lr = 0.001
    if epoch &gt; 10:
        lr = 0.0001
    if epoch &gt; 15:
        lr = -1
    self._cur_lr = lr
    return self._cur_optimizer, self._cur_lr
</pre> <p>If you want your model to be stable, you can use n-fold training. n can be 4, 5, or even 10.<br></p> <p>Hi, Jeremy,<br> Happy to reply to your question!<br> I used, and only used, the simplenet version of hard example mining in the final ensemble. I tried to use hard examples on other nets, but my local CV decreased, so I removed them. My ensemble strategy is very simple: just let the Ridge model select the correct models to predict a label.<br> For dilation, I changed the resnet34 model: self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=2)<br> As I use PyTorch, the dilation is a parameter of conv2d, so it's very easy.<br></p> <pre>if dilation &gt; 1:
    self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1,
                           dilation=dilation, padding=2, bias=False)
else:
    self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                           padding=1, bias=False)
</pre> <p><br> And as to an end-to-end solution, to be frank, I am not very satisfied with my solution in this competition. I planned to find an end-to-end one at the early stage, but I entered the NOAA sea lion competition at the same time (I managed to find an end-to-end way in that competition), and when I came back there were only less than 3 weeks left, so I could only train a lot of models and ensemble them :) <br> I think we can find a good network structure with high performance to solve this problem.<br> If we use resnet, we can use a 256*256 input, split the network from conv4 and split the labels into groups: the weather labels with a large stride, and blooming, road, water with dilation, and add some hard-example loss on them. 
I hope I have time to have a try, but I am exhausted now after these months.</p> <p>Hi, Mark,<br> I did not make any single-model submissions after the very early stage of this competition.<br> But I checked the local validation of my models, and I think the CV F2 scores are good.<br> densenet161_224 0.933014<br> resnet_50_256 0.932966<br> resnet_34_256 0.932912<br> resnet_dilation 0.932307<br> My ensemble model's validation F2 score is 0.9344x, and the gap to the public LB score is 0.0005 to 0.0008, so I think the dataset of this competition is balanced among the train set, public LB test set and private LB test set. We can trust our local CV.</p> <p>Hi, Gene,<br>If we have 3 models such as resnet50, densenet121 and simplenet, they have 17 predictions (probabilities) each, and we have the 17 true values of the 17 tags.<br>We can train 17 Ridge models, each of them predicting 1 tag.<br>For example, for the tag Water,<br>X is [water_prob_of_resnet50, water_prob_of_densenet121, water_prob_of_simplenet]<br> y is [True_tag_of_water],<br>so we find an optimized combination of the three models for predicting the water tag.</p> <p>Hi, Alex,<br> I used part of Heng's augmentation methods, and I don't think too much augmentation will help. I don't want to use crop or translation augmentations when training, because they change the image too much; they move important objects (cloud, road....) out of the image. And using too much test-time TTA will slow down prediction, and I used only the 4 basic augmentations provided by Heng in some of my models. And I have emailed some key codes (haze removal, F2 loss function..) of my own to Heng.</p> <p>Zsolt, I split the samples randomly:<br></p> <pre>df_train = pd.read_csv('/data4/data/amazon/input/train_v2.csv')
kf = KFold(n_splits=n_splits, shuffle=True, random_state=random_state)
for i, (train, test) in enumerate(kf.split(df_train)):
    print(train.shape, test.shape)
</pre> <p>Please check the topic: <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36743">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36743</a></p> <p>Fan Fei Chong,<br> 1. I fixed my random seed during the competition; I fixed my random seed to 100 in all competitions, indeed. :) I simulated the train and test data set split by using different seeds to evaluate the influence of the randomness. 
2.Yes,I used the same train-val split during the competition.<br> 3.We can split train set into 4 parts,train on 3 three of them and validate on the left one ,it is very common practice in Machine Learning competitions,but in Deep Learning competitions,we must take the performance into account,the dataset of this competition is not so large so we can use 4 folds or even 5 folds.But if the datasets are two large we can not use this method ,we must find a good validate method ,the key is keep the train-validate-test dataset in nearly same distribution.</p> <p>Jeremy,I replied Zolst's questions about local-cv strategy above <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36809#206724">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36809#206724</a> ,the reason,the codes,the rule of how to use local-CV,if you have any more question please let me know.</p> <p>Zsolt,<br> Simple random is a good way ,because the images are splited to train-test set in the same way!As you konw,I recovered the images from 256*256 to 2560*2560 images,and I can say for sure that the train and test set images are random selected.What's more,I checked train-val-test predictions carefully to confirm the distribution.As we must search the thresholds for each label,we must make sure we keep the balance between train and test<br> I chose models by local-cv of ensemble , when I found the local-cv high and public-LB was not so good or too good,then I decided not use newly added models(Too good LB was also harmful,and I confirmed after the post competition submissions ).I tried to keep public-LB and local-CV in a reasonable range.<br> And as we discussed in Fisheries competition,that competition was a little special,it was a two-stage one,we could not build a good CV strategy,we must guess the distribution of stage 2 dataset.If I knew all the images were from new boat,I could use by-boat split....I did so indeed and found the local-cv scores were OK after competition but I was afraid that some new images were from train-set boats...<br> The reason use ridge regression was quite simple,I wanted to find importance of each model,and I did not want to use complex one.And I also tried lasso but I found it had higher local-score and a little lower public score,so I did not use it.I guess lasso weights are more sparse and is not good for F2 thresholds,we need more continuous value space.<br></p> <p>Yes,4 folds means 4X time to train and predict.<br> As to the worries about the performace,there were a lot of discussios(criticisms) after kaggle competitions,here are my opinions:<br> 1.Performance is important only when it's a really problem.In this competion,the models are compared to human,If the models are accurate,I don't care if they are predicted in relatively long time.<br> 2.Kaggle competitions are attractive because of their clear and fair rules:the private LB,I some case,performance is important,kaggle will improve their rules,for example,In Two Sigma Competition,all models must be finished in one hour(train and predict).In NOAA sealion competition there are 18000 very large images(&gt;5000*5000) which prevent anyone from using too complex models.<br> 3.As Andrew Ng said in CS229,when the train-data obtained from months hard work,we must make full use of them.If I work for Planet,I will crop more images and use all top models to predict labels and ensemble them,and correct the mislabeled images by human,and add them to train set,and I will select less 
models iteratively by CV to get a balance among human labor, computer resources, performance and accuracy.<br> 4. In most Kaggle competitions, the winners often have a top single model too.<br> 5. When we have a look at the improvements from RCNN and Fast-RCNN to Faster-RCNN and SSD, RFCN, we can see that we can find our way to better models/algorithms as we get more understanding of the problem and use different models in different situations. As for this competition, I think we can also find an end-to-end simple model.<br></p> <p>I hope it will be helpful to you.<br></p> <pre><code>def f2_loss(logits, labels):
    __small_value = 1e-6
    beta = 2
    batch_size = logits.size()[0]
    p = F.sigmoid(logits)
    l = labels
    num_pos = torch.sum(p, 1) + __small_value
    num_pos_hat = torch.sum(l, 1) + __small_value
    tp = torch.sum(l * p, 1)
    precise = tp / num_pos
    recall = tp / num_pos_hat
    fs = (1 + beta * beta) * precise * recall / (beta * beta * precise + recall + __small_value)
    loss = fs.sum() / batch_size
    return (1 - loss)
</code></pre> <p>Yes, your pseudo code is clear, we have 17 models. In this competition, we can use the ground truth labels when we train the NN models, so their correlations are considered. I tried to use Bayesian theory during the competition; however, it did not help me, so I used Ridge regression, which I have found helpful in a lot of competitions.<br> Although we did not use all the true labels when we train the Ridge models, the Ridge models are trained to decide whether we should choose some label when certain labels coexist with certain probabilities in our predictions.</p> <p><a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/32402">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/32402</a><br> <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/33559">https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/33559</a><br></p> <p>You can sort the topics by most voted, then you can find the most valuable information of a competition. :)</p>
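<p>The 17-Ridge-models idea explained in the reply to Gene above can be written down in a few lines. This is an illustrative scikit-learn sketch, not bestfitting's code; <code>oof_preds</code> is a list of N x 17 out-of-fold probability matrices, one per base CNN, and <code>y_true</code> is the N x 17 label matrix.</p> <pre><code>import numpy as np
from sklearn.linear_model import Ridge

def fit_label_ridges(oof_preds, y_true, alpha=1.0):
    models = []
    for label in range(y_true.shape[1]):
        # one column per base model: its probability for this particular label
        X = np.column_stack([p[:, label] for p in oof_preds])
        models.append(Ridge(alpha=alpha).fit(X, y_true[:, label]))
    return models

def predict_label_ridges(models, test_preds):
    cols = [m.predict(np.column_stack([p[:, i] for p in test_preds]))
            for i, m in enumerate(models)]
    # blended per-label scores; F2 thresholds are searched afterwards
    return np.column_stack(cols)
</code></pre>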
Planet: Understanding the Amazon from Space
New to 14th in 1 week
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all, I wanted to make a thread to share some of the things that I have learned throughout this competition, and hopefully to help newer Kagglers have a course of action and game-plan for competitions. I'm quite new to machine learning, python, and data science, and I had never touched image recognition before entering this competition with a little over a week left to go. So, before I go into details on my solution (which is quite similar to Team Urucu) I am going to give a list of things that I found helpful during this competition as a newbie. </p> <p>1) The forums are your best friend. Before you tackle a competition, read any EDA's, discussion topics with more than 5 votes, and look through the code on starter kernels to familiarize yourself with the important packages and techniques and how they work. Our amazon hero, Heng, is the only reason that I knew enough about image processing to even participate in this competition. There are a lot of smart Kagglers out there to learn from!</p> <p>2) Don't reinvent the wheel. There are a lot of great resources out there for Kagglers. While it is important to understand how things work, you don't need to implement everything from scratch. A few that I have found helpful which I don't see very often: - <a href="https://github.com/Far0n/kaggletils">https://github.com/Far0n/kaggletils</a> (shoutout to @Faron) - past competition solutions. Reading those is a great way to generate ideas to apply to the current competition.</p> <p>3) Don't be afraid to try things. You will probably make some very time-consuming mistakes at the start, and that's OK . It does help to understand enough of the theory behind and inner workings of algorithms/libraries to understand when you are making mistakes though. On this competition, my worst moment was the first time I ran a pretrained model. I got a MemoryExhaustedError 8 hours into training, and had not been saving models. Ouch.</p> <p>4) Setting up good cross-validation is key. Don't pay too much attention to the leaderboard- focus on making your solution stable. On this competition, I actually had submissions that dropped by as little as .0005 (eg. went from .9268 public -&gt; .9215 private, but alas, hindsight is 20/20), and none of my submissions in the final two days dropped by significantly more than .001. </p> <hr> <p>On to the solution. Public LB: 0.93288 (Not my best public LB) Private LB: 0.93201 Drop: .00087 Since I had limited computation time, I only used 5 base models in my final submission. They were trained on 5-fold CV, and the test set predictions were bagged predictions of the 5 models trained during cross validation. The models were all keras pre-trained models: VGG-19 (128x128 VGG-16 (96x96) InceptionV3 (160x160) Xception (192x192) Resnet50 (256x256)</p> <p>For data augmentation, i used all possible random rotations/flips with equal probability, and with a small probability (1/4) I zoomed to a random value in (.85, 1.15).</p> <p>For the classification layer(s), I looked at the model base code in Keras, and used the same layers (scaled appropriately for the no. 
of classes), and this gave me fairly good results (.928-.931 CV per model). Peter G. tipped me off to this strategy when he did so for VGG and scored ~.931 on the leaderboard. </p> <p>Something I noticed during my first day on the competition is that when stacking models, using models with different input shapes made the std. dev. on stacking CV lower and also made the gains from stacking larger, and I think this was the key point in making my solutions stable. In addition, I didn't have time to run all of my models on a 256x256 input (ResNet50 took me almost 35 hours to run 5-fold) so that contributed to the decision as well. </p> <p>Then, I used the OOF predictions (with OOF-time augmentation on all rotations, just like TTA) to train xgboost, lightgbm, and catboost classifiers (one classifier for each label), again with 5-fold CV and bagging of the test predictions. I optimized f1 on each model/class combo separately, which led to quite big LB gains (.005). My final model was a voting ensemble of the second-level classifiers.</p> <p>I think there was still a lot of room to improve this solution, but there was no time for me to run models on larger input sizes in addition to the smaller ones. I'd estimate at least .001 could be gained by adding all of the architectures with size 228x228 (center cropped) or 256x256, or by adding more sensitive models (maybe trained only on one class with a revised dataset, for example).</p> <p>You can actually just automatically reload at each point by iterating through the values that you want for epochs and learning rates, and then reloading at the end of the loop. This allows you to start with a lower learning rate from your best previous iteration (although I got mixed results with it: for a few models it did much better and for others it was somewhat of a wash). Here is how it would work: </p> <pre><code>import os
from keras import optimizers
from keras.callbacks import EarlyStopping, ModelCheckpoint

model = your_model()                    # placeholder for your architecture
epochs_arr = [10, 4, 4, 4, 4]
learn_rates = [0.0001, .00001, .000005, .000001, .0000005]
kfold_weights_path = your_filepath      # placeholder for the checkpoint path

for learn_rate, epochs in zip(learn_rates, epochs_arr):
    opt = optimizers.Adam(lr=learn_rate)
    model.compile(loss=your_loss_function, optimizer=opt, metrics=[your_metrics])
    callbacks = [EarlyStopping(monitor='val_loss', patience=2, verbose=0),
                 ModelCheckpoint(kfold_weights_path, monitor='val_loss',
                                 save_best_only=True, verbose=0)]
    model.fit_generator(generator=train_generator(),
                        steps_per_epoch=(len(X_train) // batch_size) + 1,
                        epochs=epochs, verbose=2, callbacks=callbacks,
                        validation_data=valid_generator(),
                        validation_steps=(len(X_valid) // batch_size) + 1)
    # reload the best checkpoint before continuing with the next, lower learning rate
    if os.path.isfile(kfold_weights_path):
        model.load_weights(kfold_weights_path)
</code></pre>
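<p>And a compact sketch of the second-level stacking described above: one gradient-boosted classifier per label, trained on the out-of-fold base-model predictions with 5-fold CV, bagging the test predictions across folds. It is illustrative only (hyper-parameters and shapes are placeholders), not the author's code.</p> <pre><code>import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

def stack_label(oof_features, y_label, test_features, n_splits=5):
    """oof_features: N x F matrix of base-model/TTA outputs, y_label: N binary targets."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    test_pred = np.zeros(len(test_features))
    for trn, val in kf.split(oof_features):
        clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
        clf.fit(oof_features[trn], y_label[trn])
        # bag the test predictions over the folds
        test_pred += clf.predict_proba(test_features)[:, 1] / n_splits
    return test_pred

# Run this once per label; per-class thresholds are then tuned on the OOF predictions.
</code></pre>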
Planet: Understanding the Amazon from Space
Our approach to PUB-LB 0.93444 #3 / PRIV-LB 0.93206 #13
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is an image generated from noise using this AC-GAN. Every four images, from left to right and top to bottom, should match the labels 'clear', 'cloudy', 'haze', and 'partly_cloudy'.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/207064/6910/fake_samples_epoch_137.png" alt="AC-GAN" title=""></p> <p>Hi Miha!</p> <p>Unfortunately, I did this implementation in the last days of the competition and had no time to analyze how it contributed to the final score. My hope with this implementation was that it would add diversity to the ensemble.</p> <p>I based my code on <a href="https://github.com/kimhc6028/acgan-pytorch">this implementation</a>. The only difference is that I used images from test-jpg to train the true/fake output and images from train-jpg to train the classifier (so I have two forward passes on the Discriminator). My thesis was that this would lead to better generalization on unseen data (though I have no evidence to support this claim).</p> <p>By the end, the generated samples were far from perfect and had little relation to the original classes.</p> <p>Hi Tim! Indeed, it is a brute-force-like approach.</p> <p>XGBoost was used to join predictions from all base models (same approach until step <a href="http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/">3.2.1 here</a>).</p> <p>The difference is that <strong>we didn't fit any model to the whole dataset</strong> (steps 4 and 5 from the previous link). Instead, we used the same 4-fold scheme from the base models to train the XGB. This generates 4 out-of-fold predictions, which we individually tuned for the best thresholds (targeting the best f2-score).</p> <p>Regarding our rank drop, we did a little investigation to find the reason for such a loss. We concluded that we may have used the wrong TIF files for some of the base models. Furthermore, we found that some models had bad class-correlation between train and test, which indicates a bug somewhere in our implementation (I found one input-normalization issue, for example). Anyway, the LB drop is not perfectly clear to us.</p> <p><strong>Summary of Urucu team solution</strong></p> <p>Public LB: 0.93444 #3 Private LB: 0.93206 #13</p> <p>Team members: Bruno, Gilberto, Paulo and Yang</p> <p>Our solution is based on an ensemble of convolutional neural networks (CNN), trained with 4-fold cross validation. The final ensemble included 25 models in 4-fold, for a total of 100 trained neural networks!</p> <p>One important step to reach this number was to start early in the competition, as each train/predict would take up to 3 days to run (on a 980ti). By the end, Yang joined forces with a 4-TitanX machine, which accelerated this process.</p> <p>To develop the models, we used both Keras and PyTorch with (mostly) pre-trained weights on ImageNet. Architectures include VGG[16|19], Resnet50, Densenet121, InceptionV3 and custom convolutional neural networks trained from scratch. 
By the end, we also included one AC-GAN, trained only on the weather classes (in this case, using a softmax output).</p> <p>Our best single model, a VGG19, has an average logloss below 0.085 (f2 ~ 0.931 CV):</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/206968/6908/Screen%20Shot%202017-07-25%20at%2010.44.47.png" alt="Train loss" title=""></p> <p>Some models were trained on only one class. We resampled the dataset to ensure 50% of the training samples were from the trained class. This produces a biased and sensitive model, but one with great power in an ensemble.</p> <p>After training, we started the prediction step. In this phase, we augmented each image 8 times (4 rotations and flips). Then we computed the mean, min, max and std of the activations for each output class for that model.</p> <p>The ensemble is a combination of all features from all models generated in the previous steps. For this we used an XGBoost, trained for each label on each fold (!). The number of trees is also tuned for each label, and some feature selection techniques were applied for each individual label.</p> <p>The final step is to find the best thresholds that maximize the f2-score. An initial approach was to use a single global threshold for all labels, but we noticed that tuning it individually per label led to better results. Thresholds are also optimized per fold, and a smoothing filter was applied when selecting the best threshold to avoid local minima.</p> <p>To build the submission, we included each label that was predicted in more than 2 of our folds.</p> <p>The final submission, made 20 minutes before the deadline, was a Level-3 average of the previous Level-2 models' soft predictions (before thresholding). This scored 0.9345 locally (CV), 0.93444 on the public LB (but only 0.93206 on private). Anyway, this was our best private submission.</p>
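<p>As a rough sketch of the 8-view test-time augmentation and the mean/min/max/std meta-features described above (not the team's code; it assumes a Keras-style model.predict and square images stored as NumPy arrays):</p> <pre><code>import numpy as np

def augmented_views(img):
    """8 dihedral views: 4 rotations of the image and of its horizontal flip (assumes a square image)."""
    views = []
    for base in (img, np.fliplr(img)):
        for k in range(4):
            views.append(np.rot90(base, k))
    return np.stack(views)

def tta_meta_features(model, img):
    """Mean/min/max/std of the per-class activations over the 8 views -> one row of ensemble features."""
    probs = model.predict(augmented_views(img))       # shape: (8, n_classes)
    return np.concatenate([probs.mean(axis=0), probs.min(axis=0),
                           probs.max(axis=0), probs.std(axis=0)])
</code></pre>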
Planet: Understanding the Amazon from Space
San Francisco. August 8. Meetup. Team ods.ai 7th place solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I really enjoyed discussions and overall atmosphere at the meetup in which I presented our 3rd place solution for the Dstl Satellite Imagery Feature Detection competition.</p> <p>I would like to do something similar and I am going to present our 7th place solution on August 8 at the meetup in San Francisco. It would be great if you will join. </p> <p>P.S. Beer and pizza provided.</p> <p><a href="https://www.meetup.com/San-Francisco-Kagglers/events/241919255/">https://www.meetup.com/San-Francisco-Kagglers/events/241919255/</a></p> <p>Are you also coming this time?</p> <p>Let's get this topic close to the top :)</p> <p>Reminder, tomorrow at Adroll in San Francisco, I will talk about our solution to this problem, guys from Planet will talk on how this competition looked from their side, and I hope Wendy will stop by and say a couple words from a Kaggle perspective.</p>
Planet: Understanding the Amazon from Space
a very simple brute force solution(public LB 0.9334,private LB 0.9318)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Planet: Understanding the Amazon from Space <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My main idea comes from @Peter Giannakopoulos in this topic <a href="https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/discussion/36091">Struggling to achieve LB 0.93 with fine tune pretrained model</a> - it really gave me great help, thanks a lot. As a newbie, I had no idea how to train the best possible network. I spent a lot of time training a single model, but it just couldn't get over 0.929, so I decided to use model combination.</p> <h1>1. some nets I used</h1> <pre><code>1) vgg19
2) densenet121, densenet169, densenet201
3) inceptionv3
4) resnet50, resnet101, resnet152
</code></pre> <h1>2. training pipeline</h1> <pre><code>1) resize pictures to 128 * 128
2) data augmentation by rotate90, horizontal and vertical flips, transposition.
3) Adam optimizer, LR starting at 1e-4; when valid loss stops decreasing for 2 epochs, divide LR by 10, until LR is smaller than 1e-6 (see the sketch at the end of this post).
4) train every net 10 times, with a random split of the data into train (0.8) and valid (0.2) sets each time.
5) each time I keep 2 snapshots: one with the best training set F2 score, another with the best valid set F2 score, so for every net I get 15 ~ 20 models.
6) every model predicts with its own optimal threshold, and the final result is generated by voting.
</code></pre> <h1>3. some results</h1> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/206078/6896/vgg.png" alt="" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/206078/6897/vgg_den.png" alt="" title=""> <img src="https://storage.googleapis.com/kaggle-forum-message-attachments/206078/6898/full.png" alt="" title=""></p> <h1>4. possible improvements</h1> <pre><code>1) No single model could reach a stable 0.93 LB, so maybe a higher LB could be reached by combining only stable 0.93 models.
2) Because of the time limit I couldn't do multi-scale training; maybe different models trained at different scales could give a better result.
3) I didn't use CV to choose the best combination of these nets, because I didn't split a test set from the data set, and when I wanted to combine there was no time left.
</code></pre> <h1>5. some puzzles</h1> <pre><code>1) I tried using only models whose F2 &gt; 0.93 to generate the result, but it got a worse LB - I don't know if it is because of overfitting.
2) DA such as shear, brightness and color seems to give no benefit for training; I even think flip and rotate are useless - does somebody know why, or am I wrong?
</code></pre> <h1>Hope my method can help somebody~~~</h1>
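<p>If you are working in Keras, the "divide LR by 10 when valid loss stops decreasing for 2 epochs, down to 1e-6" rule from the training pipeline maps directly onto the built-in ReduceLROnPlateau callback. A minimal sketch, assuming a compiled model and train/valid arrays already exist:</p> <pre><code>from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

callbacks = [
    # divide the learning rate by 10 when val_loss has not improved for 2 epochs,
    # and never go below 1e-6
    ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, min_lr=1e-6, verbose=1),
    # keep the snapshot with the best validation loss
    ModelCheckpoint('best_weights.h5', monitor='val_loss', save_best_only=True),
]

model.fit(X_train, y_train,
          validation_data=(X_valid, y_valid),
          epochs=50, batch_size=64, callbacks=callbacks)
</code></pre>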
Plant Pathology 2020 - FGVC7
0.98454(>1st) solution (RegNetY)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is NOT the 11th solution. I did not choose it.</p> <p>Model RegNetY-32GF  ・the newest model from FAIR; it is powerful.  ・<a href="https://github.com/facebookresearch/pycls">https://github.com/facebookresearch/pycls</a></p> <p>Data augmentation (this idea is mostly from <a href="https://www.kaggle.com/akasharidas/plant-pathology-2020-in-pytorch">https://www.kaggle.com/akasharidas/plant-pathology-2020-in-pytorch</a>)  Horizontal and Vertical Flip,  ShiftScaleRotate,  RandomBrightness,  IAAEmboss, IAASharpen, Blur,  ElasticTransform,  IAAPiecewiseAffine</p> <p>Imagesize  512x512</p> <p>scheduler  cosine annealing</p> <p>optimizer  SGD+momentum</p> <p>loss  Focal loss (see the sketch below)</p> <p>Inference  one fold, no TTA</p> <p>other  Pseudo labeling (&gt;0.98 confidence)  This idea is from my teammate xiaopeng! </p> <p>Thank you so much for letting me join the team.😃 </p> <p>I used a laptop with no GPU. I trained all models on Google Colaboratory.</p> <p>Sorry for the late reply 😨 </p>
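<p>For readers who have not used it, here is a minimal PyTorch sketch of a multi-class focal loss. The gamma value is the usual default from the focal loss paper, not necessarily the one used in this solution:</p> <pre><code>import torch
import torch.nn.functional as F

class FocalLoss(torch.nn.Module):
    """Focal loss on softmax logits: down-weights easy, well-classified examples."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target, reduction='none')   # per-sample -log p_t
        p_t = torch.exp(-ce)                                      # probability of the true class
        return ((1.0 - p_t) ** self.gamma * ce).mean()
</code></pre>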
Plant Pathology 2020 - FGVC7
10th place - missed 30+ better submissions :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>:)</p> <p>you are welcome, any time</p> <p>it's simple but it worked on public</p> <p>it's pretty much based on this <a href="https://www.kaggle.com/dimakyn/classification-densenet201-efficientnetb7">kernel</a> with some minor changes like keeping aspect ratio while resizing (600x850) and using whole training without cross validation; added TTA with flips (0, H, V, HV). </p> <p>I did this for B5, B6, B7 and IRv2. The best was IRv2 with 0.981 on public and ensemble gave 0.984. </p> <p>Then added pseudo labels and got single IRv2 0.984 and ensemble 0.988.</p> <p>No fancy augmentation or loss functions, i did not treat imbalance. Still i ended up getting many high scores on both public and private. But it was hard to pick the best one as I had no validation set and both test and training sets are small. </p> <p>So I took the best public submission and randomly chose one out of 100 best submission :). The random one ended up above my best public. </p> <p>Also, I missed 30+ better submissions than my current score :D</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1556833%2Ff36bff29350340c3669cdad0a7dd203d%2Fbad_luck.png?generation=1590563501929665&amp;alt=media" alt=""></p> <p>thx</p>
Plant Pathology 2020 - FGVC7
1st place solution(single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used cross entropy loss.</p> <p>I resized the image to 320x512 and I don't use crop.</p> <p>Congrats to all. Many thanks to Kaggle and the organizers for holding this competition. And special thanks to my teammates.</p> <h2>Problems with the data</h2> <p>We noticed that there are different labels on the same images, for example, Train_379 and Train_1173 are exactly the same image while they have different labels. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2Fcb794f440db244c11e0e78966e2202ae%2Fimage.png?generation=1590583887495151&amp;alt=media" alt=""> Some images in the training dataset are generated by one image, but they have different labels. We can tell from the details of image Train_171 and Train_1 that they are generated by the same one image, but are labelled differently.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2F7e3d1d198e843a907b52e61b0a9a4e14%2Fimage1.png?generation=1590583924522781&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2186594%2Fa953717fab66a2fe1ce37bc3bb646691%2Fimage2.png?generation=1590583935937220&amp;alt=media" alt=""> Therefore, we mainly want to solve the problem of noisy labels in the training dataset. We used the knowledge distillation method, first train a 5-fold models and get out-of-fold results about valid dataset, and then mix the out-of-fold results and ground truth by 3: 7 as the labels of a new training model.</p> <h2>Model</h2> <h3>Data augmentation</h3> <ul> <li>Brightness, Contrast</li> <li>Blur</li> <li>Flip</li> <li><p>Shift, Scale, Rotate</p> <h3>Backbone</h3></li> <li><p>seresnextnet50</p> <h3>Imagesize</h3></li> <li><p>320x512</p> <h3>loss</h3></li> <li><p>Cross entropy loss</p> <h3>Inference</h3></li> <li><p>5-fold average and 5 times TTA</p></li> </ul> <h3>Others:</h3> <p>The amount of data in this competition is very small and the data distribution is imbalanced. A wrong classification of a sample with multiple disease categories will have significant impact on the final results. So, don’t trust the public leaderboard, trust your CV.</p> <p>Thank you. I have no idea whether it will helps, for there are too many noisy labels.</p> <p>Thank you. Just like <a href="/jinzcoder">@jinzcoder</a> said below.</p> <p>It's right.😃 </p>
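<p>The knowledge distillation step described above (blending the out-of-fold predictions with the ground truth 3:7 and retraining on the resulting soft labels) can be sketched as follows. This is an illustration, not the winners' code; it assumes the OOF softmax predictions and the one-hot labels are available as NumPy arrays:</p> <pre><code>import numpy as np
import torch
import torch.nn.functional as F

# blend out-of-fold predictions with the one-hot ground truth, 3:7
soft_targets = 0.3 * oof_probs + 0.7 * y_onehot        # both of shape (n_samples, n_classes)

def soft_cross_entropy(logits, targets):
    """Cross entropy against a probability distribution instead of a hard class index."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()

# inside the training loop of the new model, for a batch with row indices `idx`:
# targets = torch.as_tensor(soft_targets[idx], dtype=torch.float32, device=logits.device)
# loss = soft_cross_entropy(logits, targets)
</code></pre>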
Plant Pathology 2020 - FGVC7
20th Place: Bilinear CNN with FL + LS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Welcome!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1234220%2F9d8ab256053f1510a99a6924c08cca2a%2FBiLinearModel.PNG?generation=1590573890707261&amp;alt=media" alt=""> Kaggle Notebook: <a href="https://www.kaggle.com/jimitshah777/bilinear-efficientnet-focal-loss-label-smoothing">https://www.kaggle.com/jimitshah777/bilinear-efficientnet-focal-loss-label-smoothing</a></p> <p>This was one of the ideas I have used during my experimentation.</p> <p><strong>Apart from this I have tried various other things like:</strong> Different CNN Models(B0,....B7), Different image sizes, different Focal Loss parameters different Label Smoothing parameters Combination of Focal + LS Various Augmentations Early Stopping Late stopping Combination of various CNNs as backbone for BiLinear CNN And a combination of everything that worked.</p> <p>Finally I submitted an ensemble of all my solutions which had a public score of greater than 0.970.</p> <p><strong>Final Note:</strong> Congratulations to all the Winners! It was a great experience for me. I felt like I was learning something or the other day after day, submissions after submissions. Best of luck for future competitions...!</p>
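<p>For readers unfamiliar with the bilinear head, a minimal TensorFlow sketch of bilinear pooling (the outer product of two backbone feature maps, followed by the signed square root and L2 normalisation from the original bilinear CNN paper) might look like this - it is not the notebook's exact code:</p> <pre><code>import tensorflow as tf

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two conv feature maps with the same spatial size.

    feat_a: (B, H, W, C1), feat_b: (B, H, W, C2)  ->  (B, C1*C2) descriptor.
    """
    phi = tf.einsum('bhwi,bhwj->bij', feat_a, feat_b)            # sum of outer products over H*W
    hw = tf.cast(tf.shape(feat_a)[1] * tf.shape(feat_a)[2], phi.dtype)
    phi = tf.reshape(phi / hw, (tf.shape(phi)[0], -1))
    phi = tf.sign(phi) * tf.sqrt(tf.abs(phi) + 1e-12)            # signed square root
    return tf.math.l2_normalize(phi, axis=-1)                    # L2 normalisation
</code></pre> <p>The pooled descriptor is then fed to the final classification layer.</p>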
Plant Pathology 2020 - FGVC7
33rd->27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>haha</p> <p>Congratulations to everyone. Many thanks to Kaggle and CVPR for holding this nice competition. We were very lucky to win 33rd place. We were concerned about data balance and overfitting, so we share our solution.</p> <p>Augmentation (we used so many augmentations because we didn't have time to experiment with what works):</p> <pre><code>A.IAAFliplr(p=0.5),
A.IAAFlipud(p=0.5),
A.FancyPCA(alpha=0.1, always_apply=False, p=0.5),
A.OneOf([
    A.IAAEmboss(p=1.0),
    A.IAASharpen(p=1.0),
    A.Blur(p=1.0),
], p=0.5),
A.OneOf([
    A.ElasticTransform(p=1.0),
    A.IAAPiecewiseAffine(p=1.0)
], p=0.5),
A.Normalize(mean=RGB_MEAN, std=RGB_STD, p=1.0),
ToTensorV2(p=1.0),
</code></pre> <p>Backbone: efficientnet-b6 and efficientnet-b5. We changed the model: model1 = cbam_EfficientNet_B6, model2 = cbam_EfficientNet_B5, features = model1._fc.in_features + model2._fc.in_features (see the sketch below for the idea). We also added an 'senet' block but it did not work.</p> <p>Image size: 350x350. Loss: CBLoss (did not work), Categorical Cross Entropy + label smoothing. Other: SMOTE (oversampling), 5 folds, 4*TTA.</p> <p>Final: ensembling with other models took the Public Leaderboard from 0.979-&gt;0.981 and the Private Leaderboard from 0.975-&gt;0.976.</p> <p>One more thing: do not trust the LB, trust your CV. You can ensemble many, many models, but it is not always necessary.</p>
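<p>The "concatenate the features of two backbones into one head" idea above can be sketched like this - a minimal illustration using plain timm EfficientNets rather than the team's CBAM variants:</p> <pre><code>import torch
import torch.nn as nn
import timm

class TwoBackboneNet(nn.Module):
    """Concatenate pooled features from two backbones and classify with a single linear head."""
    def __init__(self, n_classes=4):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits
        self.b5 = timm.create_model('tf_efficientnet_b5', pretrained=True, num_classes=0)
        self.b6 = timm.create_model('tf_efficientnet_b6', pretrained=True, num_classes=0)
        self.head = nn.Linear(self.b5.num_features + self.b6.num_features, n_classes)

    def forward(self, x):
        feats = torch.cat([self.b5(x), self.b6(x)], dim=1)
        return self.head(feats)
</code></pre>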
Plant Pathology 2020 - FGVC7
3rd Place Score Model Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first competition outside of Titanic and was a fun intro to Kaggle competitions. </p> <p>My submission ended up being fairly simple; it was an average of two ensembles, the first with three averaged models the other with two.</p> <p>For all of the models I trained EfficientNet B7 models with Tensorflow on TPU. I used 768x768 resolution for all runs and slightly varying data augmentations. The augmentations I used were: rotation, shear, zoom, shift, flip left/right, flip up/down, random brightness. I also upsampled “multiple diseases” data in my training set by 5x. Thanks to <a href="https://www.kaggle.com/cdeotte">@ChrisDeotte</a> for some helpful data augmentation guides.</p> <p>Ensemble A: Scores: 3 models, all 0.977 public score, averaged public score: 0.979 Model: EfficientNet B7 Data Balancing: Multiple diseases by 5x Data Augmentation: rotation, shear, zoom, shift, flip left/right, flip up/down, random brightness</p> <p>Ensemble B: Scores: 2 models, both 0.979 public score, averaged public score 0.980 Model: EfficientNet B7 Data Balancing: Multiple diseases by 5x Data Augmentation: &gt;35 degree random rotations, &gt;64 pixel shift, same other augmentations as above</p> <p>Ensemble A + B: Average public score 0.981</p>
Plant Pathology 2020 - FGVC7
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone. I learned a lot and had fun. Thank you so much to Kaggle and the competition organizers for a really interesting and relevant problem. I attempted this Kaggle competition after titanic and mnist (though I read a lot of code from Flowers TPU).</p> <p>I think my solution is simple, much of the interesting stuff I tried didn't work sadly.</p> <p>Also would like to mention some Kagglers for helping me, my success is their success: <a href="https://www.kaggle.com/anyexiezouqu" target="_blank">@anyexiezouqu</a> for efficientnet b7 and incepresnetv2 ensemble at 800 image size. <a href="https://www.kaggle.com/ateplyuk" target="_blank">@ateplyuk</a> for efficientnet b7 on TPU. <a href="https://www.kaggle.com/mgornergoogle" target="_blank">@mgornergoogle</a> for multiple TPU tutorials and TPU troubleshooting posts I found during debugging.</p> <p>Submission 1: private score .978, public score .982 (placed 4th)<br> Submission 2: private score .980, public score .978 (hypothetically ~20th)</p> <p>I'll focus on the first submission here.<br> model1 = b7<br> model2 = (b6+incepresnetv2)/2<br> Final model = (model1 + model2) / 2</p> <h2><strong>What worked:</strong></h2> <ul> <li>Increasing the image size (I used 896 or 800 before memory limitations). If you use 896 you need to drop the batch size.</li> <li>Using noisystudent weights instead of imagenet</li> <li>drop val to train on more data (fairly standard practice)</li> <li>random flip LR augmentation</li> <li>random flip up/down augmentation</li> </ul> <h2><strong>No change or slight improvement:</strong></h2> <ul> <li>Use b6 instead of b7. With (b7 + incepresnetv2)/2 I got .975 LB, and also many LB scores lower than that. With (b6 + incepresnetv2)/2 I got .975 LB on the first try. So needs more data but maybe it improves. Though CV was not representative of LB for me, It was clear from model's confidence I was still overfitting. Also others used b5 and b6 successfully on LB so these 3 factors contributed to me bumping down to b6. I also wanted to try a b5 but not enough time.</li> </ul> <h2><strong>No change or slightly worse:</strong></h2> <ul> <li>Random cropping size 896 down to size 800. Not sure why this didn't improve the score, it's possible the pretrained networks are already quite translationally invariant.</li> <li>random rotation</li> <li>TTA with same, flip lr, flip down, flip lr flip down.</li> <li>Saturation (I might have done too much, but it didn't help)</li> </ul> <h2><strong>Implementation did not work:</strong></h2> <ul> <li>Gaussian blur on images (too slow), though I suspect it might have been useful if done carefully because of blurry/out of focus images.</li> <li>L2 regularization (seemed to make CV better but LB worse). After discovering this <a href="https://www.hds.utc.fr/~fdavoine/dokuwiki/_media/en/pr2020.pdf" target="_blank">I found a paper</a> saying, for transfer learning, to make weights approach imagenet initialization versus 1 (L2) or 0 (L1). Didn't try this yet.</li> </ul> <p>I think the theme of this competition for me was trying many techniques I thought would help, a few did but many didn't. 
So my model was quite simple in the end.</p> <p>Thanks, hope to see you all on Kaggle in future.</p>
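<p>The "L2 towards the ImageNet initialization" regulariser mentioned at the end (often called L2-SP) was not actually tried in this solution, but a minimal PyTorch sketch of the idea looks like this - `model` is assumed to be the network being fine-tuned and the alpha value is an assumption:</p> <pre><code>import torch

# snapshot of the pretrained starting point, taken right after loading the ImageNet weights
start_point = {name: p.detach().clone() for name, p in model.named_parameters()}

def l2_sp_penalty(model, alpha=1e-4):
    """Penalise distance from the pretrained weights instead of distance from zero (plain L2)."""
    reg = 0.0
    for name, p in model.named_parameters():
        if p.requires_grad:
            reg = reg + ((p - start_point[name]) ** 2).sum()
    return alpha * reg

# inside the training loop:
# loss = criterion(model(images), labels) + l2_sp_penalty(model)
</code></pre>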
Plant Pathology 2020 - FGVC7
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>my architecture for this competition was <a href="https://github.com/mrT23/TResNet">TResNet</a>, that provides a very good speed-accuracy-batch_size tradeoff for GPUs. for example, i was able to train on input resolution 600x600 with batch size of 64 and training rate of 90 img/sec, which enabled me to experiment fast and efficient.</p> <ul> <li>as a loss function, i used <a href="https://arxiv.org/pdf/1910.07038.pdf">soft triplet loss</a> + cross entropy</li> <li>i used squish cropping with <a href="https://docs.fast.ai/vision.transform.html#_dihedral">8-rotation</a> augmentation, both for training and for inference.</li> <li>i used class balancing to give more weight to class1. without class-balancing, i could not learn that specific class properly, and the competition loss metric would punish me greatyl.</li> <li>i used pseudo-label to enrich a bit the ultra-small train set (added about 60% of the test set to train set)</li> <li>i used EMA to shorten the training even further and provide better results.</li> <li>my ensemble was composed of different input resolutions and different augmentations regimes.</li> </ul> <p>p.s. 1 "Do not trust the LB, trust your CV" ? everyone on every kaggle forums say that, but in hindsight there was a strong correlation for me between private and public scores. if i had chosen the top public score as submission,i would have ended at 2nd place :-)</p> <p>p.s. 2 test set was way too small, and just injected randomness and luck to the final positions</p> <p>6th place is infinitesimal close :-)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F658430%2Fbd834381a11701e06ca75cadf4906639%2Faaa.png?generation=1590644297590525&amp;alt=media" alt=""></p> <p>7th (or 6th) in a competition with 1317 teams is not that bad :-)</p> <p>my model (<a href="https://github.com/mrT23/TResNet">TResNet</a>) is fully available on github. regarding training code and tricks, they dont contain ideas very different from other public kernals already published, so i dont plan to release the code.</p>
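<p>The class balancing mentioned above (giving more weight to the rare class) is commonly done in PyTorch with a WeightedRandomSampler. A minimal sketch, assuming integer class ids in train_labels and an existing train_dataset:</p> <pre><code>import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

class_counts = np.bincount(train_labels)                 # samples per class
class_weights = 1.0 / class_counts                       # rarer classes get sampled more often
sample_weights = class_weights[train_labels]             # one weight per training sample

sampler = WeightedRandomSampler(weights=torch.as_tensor(sample_weights, dtype=torch.double),
                                num_samples=len(sample_weights),
                                replacement=True)
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
</code></pre>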
Plant Pathology 2020 - FGVC7
The 2nd Solutions(Single fold and single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>0.95 is a commonly used threshold for pseudo labels (a filtering sketch is added below). I did not have time to try other backbones, including EfficientNet.</p> <p>Settings - Pseudo Label (&gt;=0.95 confidence) - Single fold</p> <p>Data augmentation - HorizontalFlip - VerticalFlip - ShiftScaleRotate - IAAEmboss, IAASharpen, Blur - IAAPiecewiseAffine - Random Erasing</p> <p>Backbone - Pnasnet5large (0.98182 in private LB, 0.98189 in public LB, final submission) - ResneSt101 (0.97481 in private LB, 0.98240 in public LB)</p> <p>Imagesize - Resize the image to (545, 545)</p> <p>Loss and Learning rate - CrossEntropyLoss - CosineSchedule with 30 epochs and Warmup</p>
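<p>A minimal sketch of the confidence filter for the pseudo labels (not the author's code; it assumes test_probs holds the softmax outputs and test_df the test image ids in the same row order):</p> <pre><code>import numpy as np
import pandas as pd

confidence = test_probs.max(axis=1)                 # highest softmax probability per test image
keep = confidence >= 0.95                           # only keep confident predictions

pseudo = pd.DataFrame({
    'image_id': test_df.loc[keep, 'image_id'].values,
    'label': test_probs[keep].argmax(axis=1),
})
# `pseudo` is then concatenated with the labelled training data and the model is retrained
</code></pre>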
Plant Pathology 2020 - FGVC7
[placeholder] (8th->5th) place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My score was 0.986. :)</p> <p>Congratulations to everyone. Many thanks to Kaggle and the organizers for holding this amazing competition. And special thanks to my teammates. <a href="/aziz69">@aziz69</a> <a href="/datadote">@datadote</a></p> <p>We were concerned about being overfitting. I don't know if it worked out as intended, but I would like to share a solution anyway. :)</p> <h2>Agumentation</h2> <ul> <li>VFlip, HFlip, Rotate, RandomResizedCrop</li> </ul> <h2>Backbone</h2> <ul> <li>resnet50, densenet121, inceptionresnetV2, efficientnet-b7 with noisy-studient EfficientNet worked well for multiple diseases compared to other models relatively.</li> </ul> <h2>loss</h2> <ul> <li>Categorical Cross Entropy + Focal Loss</li> </ul> <h2>Image sizes</h2> <ul> <li>812, 1024 (The score improved when the image size was 800 ~.)</li> </ul> <h2>Other</h2> <ul> <li>Pseudo Labeling</li> <li>SWA</li> <li>Label Smoothing (Improved resnet)</li> <li>TTA (Improved effcientnet / incepresnet / densenet )</li> <li>Oversampling (We tried to fit the data balance.)</li> </ul> <h2>Things that didn't help us.</h2> <ul> <li>Cutmix</li> <li>Mix Up</li> <li>GridMask</li> <li>Blockout</li> <li>etc.</li> </ul> <p>Actually, I did a lot of experiments, but I couldn't write them all down. (Too many resources were spent creating images with CycleGAN.)</p> <p>And I share a failed experiment. I wanted to solve the imbalance problem. I used additional data to generate multiple diseases data.</p> <p>[PlantPathology Apple Dataset] <a href="https://www.kaggle.com/piantic/plantpathology-apple-dataset">https://www.kaggle.com/piantic/plantpathology-apple-dataset</a></p> <p>Finally, I made 224, 512, 1024 sized generated images. The 512 and 1024 images didn't seem to be usable even with my bad eyes. I picked out some useful ones, and there were only a few hundred images. And When I used it for training, but the score was rather low, so I didn't use it.</p> <p>I share some generated images.</p> <p>Anyway, It was the first Kaggle competition working hard on. I learned a lot and it was fun. Thanks! :)</p> <p>p.s. 1. "Focus on your CV." Don't trust your LB score too much.</p> <p>p.s. 2. Perhaps our team members will post clean code and will update it soon. 
[efficientnetb7-tpu-tta] <a href="https://www.kaggle.com/aziz69/efficientnetb7-tpu-tta">https://www.kaggle.com/aziz69/efficientnetb7-tpu-tta</a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F8363a5bb94628521301011aa18f389e3%2Fmd_gen_1024_21.jpg?generation=1590540098980549&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2Fb0ccc2d7410b36c1726cbc23f3c69795%2Fmd_gen_512_164.jpg?generation=1590540070826273&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F386104d87325450519a506bbb5379b59%2Fmd_gen_512_166.jpg?generation=1590540057196169&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F37a64707be15ebc7a8077786b307f6e5%2Fmd_gen_512_5.jpg?generation=1590540038769512&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F317c4ff5d05d784ce5a1b31acbfaaeea%2Fmd_gen_1024_32.jpg?generation=1590540311065011&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F330c70a8fdbedb1835dc412c19a76469%2Fmd_gen_512_137.jpg?generation=1590595682856445&amp;alt=media" alt=""> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2F6de8e5b3934c954435e5f78ae4ff0bfa%2Fupload.png?generation=1590595777706473&amp;alt=media" alt=""></p> <p>Stochastic Weight Averaging (SWA) <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/119371">https://www.kaggle.com/c/google-quest-challenge/discussion/119371</a></p> <p>:)</p> <p>At that time, the lb score was bad. I don't trust the lb score but the score is so different that I decided to exclude it. </p> <p>The public lb score dropped, but the private lb score may have risen. It would be nice to experiment later. :)</p> <p><a href="https://www.kaggle.com/c/plant-pathology-2020-fgvc7/discussion/155929" target="_blank">https://www.kaggle.com/c/plant-pathology-2020-fgvc7/discussion/155929</a></p> <p>He got the pseudo labeling data (&gt;=0.95 confidence).<br> <a href="https://www.kaggle.com/changfu24" target="_blank">@changfu24</a> </p>
Plant Pathology 2020 - FGVC7
[placeholder] 49th solution + 0.981 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Well done, everyone! Thanks for the kernels and very interesting discussions. I'll post my best private score solution which is 0.981 (around top10 scores I would say) and then my 49th place solution</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696514%2F65231b0ddc924eb86ffcd47f4110f7d1%2FScreenshot%202020-05-27%20at%2011.18.56.png?generation=1590572282021536&amp;alt=media" alt=""></p> <h1>Best private score - 0.981</h1> <h2>Augmentation</h2> <p>ImageNetPolicy + HFlip</p> <h2>Backbone</h2> <p>Ensemble of efficientnet-b{2+3+4+5} with noisy-studient with around 20 epochs</p> <h2>Optimizer</h2> <p>Ranger</p> <h2>Loss</h2> <p>BCEWithLogitsLoss</p> <h2>Image sizes</h2> <p>600x600</p> <h2>Other</h2> <p>TTA</p> <h1>49th place score - 0.975</h1> <h2>Agumentation</h2> <p>ImageNetPolicy + HFlip + PCAJitter</p> <h2>Backbone</h2> <p>Ensemble of efficientnet-b{2+3+4+5} with noisy-studient with around 20 epochs</p> <h2>Optimizer</h2> <p>RAdam</p> <h2>Loss</h2> <p>BCEWithLogitsLoss</p> <h2>Image sizes</h2> <p>600x600</p> <h2>Other</h2> <p>TTA, smoothed results</p> <h1>Things I tried without improvement on my LB score</h1> <p>Cutmix, Mixup, weighted loss, focal loss, higher image size</p>
Plant Pathology 2020 - FGVC7
very simple 46th solution + potential 17th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Solution</strong></p> <p><strong>Model 1 (Tensorflow TPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-tenserflow-tpu">Kernel</a> EfficientNet B7 (imagenet), 800x800, tf.image (Hflip, Vflip, Rot90), cosine schedule with warmup (warmup = 15, epochs = 40, lr = 0.0008), TTA 4 [ran the same notebook 3 times and ensembled the output] </p> <p><strong>Model 2 (Pytorch GPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-pytorch-gpu">Kernel</a> EfficientNet B5 (imagenet), 520x520, albumentations (Hflip, Vflip, Rotate 90), cosine schedule with warmup (warmup = 15, epochs = 35, lr = 0.001), TTA 4</p> <p><strong>Model 3 (Tensorflow TPU)</strong>: <a href="https://www.kaggle.com/apthagowda/plant-pathology-2020-tenserflow-tpu?scriptVersionId=33622663">Kernel</a> EfficientNet B7 (noisy student), 800x800, tf.image (Hflip, Vflip, Rot90), cosine schedule with warmup (warmup = 15, epochs = 35, lr = 0.0008), No TTA</p> <p><strong>Ensemble</strong>: </p> <p>(Model1 + (Model2 + Model3)/2)/2 - Public LB: 0.984, Private LB: 0.97541. (Model1 + Model2 + Model3)/3 - Public LB: 0.983, Private LB: 0.97749 (I didn't select this one).</p> <p>Thanks, everyone, for the public kernels and discussions. Learnt a lot in this competition. Cheers</p>
Plant Pathology 2021 - FGVC8
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>&nbsp;</p> <ul> <li><p><strong>soft label <br></strong><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;train 5-folds efficientnetv2<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;label==1 &amp; pred_score &lt; 0.7 ==&gt; soft_label=0.3(for train folds)<br> <br></p></li> <li><p><strong>multi-label augment <br></strong></p> <ul> <li><p><strong>cutmix <br></strong><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;different from norm cutmix, I didn't use:<br> <br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;loss = lam * loss(outputs, labels) + (1-lam) * loss(outputs, mix_labels)<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;because I think 'healthy' + 'scab' == 'scab'<br><br><br> 'scab'+'frog_eye_leaf_spot' ==&gt; 'scab frog_eye_leaf_spot'(There are also other different combinations)<br> <img src="https://i.loli.net/2021/06/01/tAIaBcgWirRuwM9.png" alt="1.png"></p></li> <li><p><strong>mosaic <br></strong><br> 'scab' + 'scab' + 'scab frog_eye_leaf_spot' + 'complex' ==&gt;'scab frog_eye_leaf_spot complex' (There are also other different combinations)<br><br> <img src="https://i.loli.net/2021/06/01/FDCuve7csdXROUJ.png" alt="2.png"><br> <br></p></li></ul></li> <li><p><strong>augment <br></strong> <br> Crop HFlip VFlip brightness contrast ShiftScaleRotate OpticalDistortion GridDistortion IAAPiecewiseAffine Cutout CoarseDropout<br> <br></p></li> <li><p><strong>other <br></strong><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;resnet50-5folds+resnext50_32x4d-5folds<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TTA(flip, centerCrop)<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;class balance(last three epoch)<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;label_smooth<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;384x576<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;warmup<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CosineAnnealingLR<br><br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AdamW<br></p></li> <li><p><strong>unfinished LeafGAN <br></strong><br> Due to time and computing resource constraints, I did not finish it, but I think it's the best way for multi-label<br><br> we can train LeafGAN generate healthy-&gt;disease_A and healthy-&gt;disease_B, than we can get multi-label('disease_A disease_B') by mixup('disease_A', 'disease_B') <br><br> <a href="https://github.com/IyatomiLab/LeafGAN" target="_blank">https://github.com/IyatomiLab/LeafGAN</a><br> <img src="https://i.loli.net/2021/06/01/LQB3dEtp8IrGfg2.png" alt="3.png"></p></li> </ul>
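<p>The label handling in the CutMix above ('healthy' + 'scab' == 'scab', i.e. the multi-hot labels are combined with a union rather than the usual lam-weighted mix) can be sketched as follows. This is an illustration rather than the winner's code, and it assumes plain NumPy images and multi-hot disease vectors (a separate 'healthy' flag, if kept as its own column, would still need to be cleared whenever any disease is present):</p> <pre><code>import numpy as np

def cutmix_union(img_a, y_a, img_b, y_b, alpha=1.0, rng=np.random):
    """Paste a random box from img_b into img_a; combine the multi-hot labels with a union."""
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.randint(h), rng.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    label = np.maximum(y_a, y_b)     # union of the labels instead of lam * y_a + (1 - lam) * y_b
    return mixed, label
</code></pre>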
Plant Pathology 2021 - FGVC8
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Oh, they aren't very different. We replaced the label "multiple_diseases" with "complex".</p> <p>Our method isn't that different from those published in the top scoring notebooks. Let me list a few things that I believe contributed to our solution placing in the top three.</p> <p><strong>1 Trained on both fgvc7 and fgvc8 data</strong>. Interestingly, adding fgvc7 data seemed to contribute more than its fair share. Leaving 2000 images out of fgvc8 data didn't seem to make much difference. However, leaving out fgvc7 data (with the number of images in the same ballpark) led to a big drop in the leaderboard score. This was true even after <a href="https://www.kaggle.com/c/plant-pathology-2021-fgvc8/discussion/234332" target="_blank">the leak</a> was fixed, though less glaring.</p> <p><strong>2 Usage of soft labels</strong>. Like many others observed, the labels in the training set are very noisy. The label "complex" is especially problematic. It looks like "complex" could mean too many diseases or an unidentified disease. As we were using cross entropy as the loss function, it didn't make much sense to use hard labels that are often wrong. To soften the labels, we used this simple approach:</p> <pre><code>training_loop: train_for_one_epoch() labels = gamma*predictions + (1 - gamma)*labels </code></pre> <p>We considered <code>gamma</code> as a hyperparameter and tuned it to minimize validation error.</p> <p><strong>3 Automated hyperparameter tuning</strong>. Tuned using a new protocol being developed by <a href="https://luminide.com/" target="_blank">Luminide</a>.</p> <p><strong>4 Scaling the probabilities</strong>. We left out the "healthy" label while training. During inference, all the images that came out negative for all the diseases were deemed healthy. Validation showed that this method overestimated the proportion of healthy leaves. To compensate, we multiplied the logistic outputs with 1.3 before rounding them to get the predictions (this is pretty much equivalent to setting the threshold to 0.4).</p> <p>We used pretrained weights from <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a>'s excellent <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">timm</a> library. The final submission averaged predictions from resnet50 and seresnext50.</p> <p>Resized to 432x432 and then cropped to 384x384.</p>
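<p>To make the label-softening loop above slightly more concrete, here is a minimal sketch that keeps the same placeholder names as the pseudocode (gamma is the tuned hyperparameter, and the model is trained with a loss that accepts soft targets):</p> <pre><code>import numpy as np
import torch
import torch.nn.functional as F

# soft_labels starts as the (noisy) hard labels, cast to float: shape (n_samples, n_classes)
soft_labels = y_train.astype(np.float32).copy()

def soft_target_loss(logits, row_idx):
    """BCE of the model outputs against the current soft targets for this batch of rows."""
    targets = torch.as_tensor(soft_labels[row_idx], dtype=torch.float32, device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, targets)

def soften(soft_labels, train_preds, gamma):
    """After each epoch, pull the targets towards the model's own predictions."""
    return gamma * train_preds + (1.0 - gamma) * soft_labels
</code></pre>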
Plant Pathology 2021 - FGVC8
[86.385] 10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2021 - FGVC8 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the organizers and Kaggle team for this competition, and congratulations to all the top scorers. This was a great learning experience throughout, read some great papers and tried to implement some of them, and reached a satisfactory position. <br> I started from the beginning then took a break and rejoined towards the end. My solution is quite simple but here are some key points for those who can benefit from it!</p> <h1><strong>Approach :</strong></h1> <ul> <li><p><strong>Data:</strong></p> <ul> <li>I used 640x640 resized tfrecords of the competition data. <a href="https://www.kaggle.com/ashish2001/640x640-plant-pathology-tfrecords" target="_blank">link</a></li> <li>Duplicates and misleading images were <strong>, not</strong> removed 🙃</li></ul></li> <li><p><strong>Model:</strong></p> <ul> <li>I experimented with Effnets (B0-B7), EffnetV2s, SeResNext-50, MobileNets, ViTs etc. with different image sizes (640, 512, 224) etc. Out of these Effnet B5 (imagenet) was my best single model (<strong>Public: 84.2, Private: 86.38</strong>) B6s and B7s were overfitting a lot!</li> <li>Noisy student weight performed worse throughout my experiments.</li> <li><strong>Best Model:</strong><ul> <li>EffnetB5(imagenet), all BatchNormalization layers were kept frozen, GlobalAveragePooling, followed by Dropout and Output layer of 6 sigmoidal nodes.</li></ul></li></ul></li> <li><p><strong>Loss Function</strong></p> <ul> <li>This played an important role in my pipeline, normal cross-entropy didn't work well possibly due to the fact that it inhibits learning in the early stages by penalizing more on the wrongly classified samples.</li> <li>Focal loss seemed to work well with consistent results throughout.</li> <li>Asymmetric loss taken from <a href="https://arxiv.org/pdf/2009.14119.pdf" target="_blank">here</a> worked well for increasing CV but couldn't improve LB (It could have made overfitting easy or maybe I translated it poorly from PyTorch to TensorFlow 😂).</li> <li>The most promising loss was to use a differentiable version of MacroF1 as described in this great <a href="https://towardsdatascience.com/the-unknown-benefits-of-using-a-soft-f1-loss-in-classification-systems-753902c0105d" target="_blank">article</a></li></ul></li> <li><p><strong>Augmentations</strong></p> <ul> <li>Random Shearing/ Rotations/ Flipping/ Saturation/ Contrast/ Brightness/ Gamma and Central Cropping were used thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's awesome <a href="https://www.kaggle.com/cdeotte/rotation-augmentation-gpu-tpu-0-96" target="_blank">notebook</a></li></ul></li> <li><p><strong>Other Details</strong></p> <ul> <li>Used Adam optimizer.</li> <li>Cosine Decay-based Learning Rate.</li> <li>5-Fold cross-validation scheme.</li> <li>Batch Size = 32 per replica of TPU.</li></ul></li> <li><p><strong>Inference Scheme</strong></p> <ul> <li>Averaged outputs of all the 5 folds of a model at a threshold of 0.5.</li></ul></li> </ul> <p>In the end, I chose one ensemble-based submission [Effnet B5, MobileNetv3 large initialized with Crop net weights, SeResNext50 and others.](Public: 82.9, 
Private 85.313), didn't have much time to create more diverse ensembles and my best EffNet B5. </p> <ul> <li>I could've tweaked my inference pipeline to figure out the best threshold (I got an increase of 0.2 on private lb just by tweaking the threshold but I didn't select that as my final submission).</li> </ul> <p>I would put the link to my code after tidying it up a bit. </p> <p>Overall I enjoyed participating in the competition and would love to know how other's tackled this problem. If you have any suggestions on how I could've improved please comment on them!</p> <p>See you guys again in some other competition!😄</p> <p>Thanks, and congrats to you too!</p> <p>Thanks, Chris! 😀</p>
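<p>The differentiable macro-F1 loss referenced above (from the linked article) can be written in TensorFlow roughly like this - a minimal sketch, not the exact code used in the solution:</p> <pre><code>import tensorflow as tf

def macro_soft_f1_loss(y_true, y_pred):
    """1 - soft-F1 averaged over classes; y_pred are sigmoid probabilities, y_true multi-hot labels."""
    y_true = tf.cast(y_true, tf.float32)
    tp = tf.reduce_sum(y_pred * y_true, axis=0)
    fp = tf.reduce_sum(y_pred * (1.0 - y_true), axis=0)
    fn = tf.reduce_sum((1.0 - y_pred) * y_true, axis=0)
    soft_f1 = 2.0 * tp / (2.0 * tp + fn + fp + 1e-16)
    return tf.reduce_mean(1.0 - soft_f1)

# model.compile(optimizer='adam', loss=macro_soft_f1_loss)
</code></pre>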
Porto Seguro’s Safe Driver Prediction
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Porto Seguro and Kaggle for bringing us such a wonderful competition. I learned a lot from it, and I could feel the strong enthusiasm of the Kaggle community - many people actively share their experience, which is unbeatable. It helped me grow and gain valuable experience. It's so great.</p> <p><strong>Final score</strong></p> <p>Our best private score: 0.29187. Very lucky - the final result did not overfit and stayed in the top 20.</p> <p><strong>Data cleaning</strong></p> <p>We removed all the columns that started with “ps_calc_”, and removed "ps_car_11" and “ps_ind_11_bin”; these columns seem to be noise.</p> <p><strong>Feature engineering</strong></p> <p>We didn't do a lot of complicated feature engineering, we just did OHE, for a total of 207 features, and we used these 207 features the whole time.</p> <p><strong>GBDT model (using R)</strong></p> <ul> <li>XGBoost: without cv folds/207 features/fixed a random seed</li> <li>LightGBM: with 5 cv folds/207 features/fixed a random seed for each fold</li> </ul> <p><strong>Neural Network model (using Python)</strong></p> <p>Many thanks to Joe Eddy, we simply modified his public nn kernel. We divided the input into 3 parts: reg, car and ind, and designed an nn architecture to capture their inner relations. The idea came from entity embedding. Usually people use a weight matrix to represent the inner relations of a category variable, so we can use a weight matrix to capture the inner relations of the so-called 'main-class' such as reg, car and ind.</p> <p><strong>Ensemble strategy</strong></p> <p>We tried stacking, but the results were not ideal, so this time we borrowed the harmonic averaging method (a small sketch is added at the end of this post).</p> <p>鲲's topic: <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/41658">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/41658</a></p> <p>Wiki: <a href="https://en.wikipedia.org/wiki/Harmonic_mean">https://en.wikipedia.org/wiki/Harmonic_mean</a></p> <p><img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/753130a05a1fab890e5785924b5bdbb5f97c8b6a" alt="hmean" title="">(1) <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/4b0ece72e6275f3628f83d849bd56bf5347bb818" alt="hmean2" title="">(2)</p> <p><strong>Blending model</strong></p> <ol> <li>2 XGBoost models in (1) </li> <li>91 LightGBM models in (1)</li> <li>XGBoost + LightGBM + NN models in (2), weights: 0.4, 0.4, 0.2</li> </ol> <p>Finally, thanks to my teammate @lessonnair - we couldn't have gotten such a good result without him.</p> <p>hi Shubin, thanks! Let me contact my teammate @lessonnair; the structure of the NN was built by him.</p> <p>Thank you Zeeshan-ul-hassan Usmani, I hope it's useful for you.</p> <p>hi Joe, thank you and also congratulations! We also learned a lot from you.</p> <p>hi Ankit, the parameters of the 91 LightGBM models are slightly different; what we adjusted was mainly nrounds, and the other parameters did not change much. In addition, we did not specify the random seeds of the CV folds, and we didn't do any hyperparameter search.</p>
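<p>A minimal NumPy sketch of the (weighted) harmonic-mean blending referred to in formulas (1) and (2) above - an illustration, not the team's code; it assumes the model outputs are probabilities stacked row-wise:</p> <pre><code>import numpy as np

def harmonic_blend(preds, weights=None, eps=1e-15):
    """Weighted harmonic mean of predictions; preds has shape (n_models, n_rows)."""
    preds = np.clip(np.asarray(preds, dtype=np.float64), eps, 1.0)
    if weights is None:
        weights = np.ones(len(preds))                       # plain harmonic mean, formula (1)
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    return 1.0 / np.sum(weights[:, None] / preds, axis=0)   # weighted version, formula (2)

# e.g. blending XGBoost, LightGBM and NN predictions with weights 0.4 / 0.4 / 0.2:
# blend = harmonic_blend([xgb_pred, lgb_pred, nn_pred], weights=[0.4, 0.4, 0.2])
</code></pre>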
Porto Seguro’s Safe Driver Prediction
17th solution - seed selection, NN, ensembling..
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you! I am really grateful to you for your posts, comments and kernels, I have learn a lot from it.</p> <p>Thank you. About Feature selection method - yes. The main idea is using LGB/XGB feature importance not to select Important feature, but to eliminate Unimportant. If gradient boosting algo doesn't select some feature to split - it means that this feature doesn't add "value" to the model.</p> <p>So at first step we eliminate unimportant features. The we need to remove all collineary features, noise features and so on to make our data more robust. To do this we use "shuffling method", it very like as boruta algo, but simple. Instead of creating "shuffled" copy of feature all at once, we do (in CV of course):</p> <ol> <li>train model on the train</li> <li>create base_score on test</li> <li>for each feature randomly shuffle it's values (shuffle column with this feature on the test dataset)</li> <li>calc shuflled_score and compare it to base_score</li> </ol> <p>Main idea is that really important feature after shuffling lead to worst score, instead noise features leads to the same or even better score. <br> As you train model once and then predict on many shuffled datasets - the algo is fast. <br> As the last step we repeat removing 0-importance feature from lgb importance.</p> <p>About non-linearity boosting I have no idea, really. May be it is dataset-specific behavior,... I don't have guess, sorry for that.</p> <p>Thank you and congrats you too! It is 5 nice lines of code - as it helps to add diversity to model ) Thanks again!</p> <p>Congratulation to all winners - it was strange, but very trainable competition ) Thanks to all who share their approaches - I have learnt a lot from @CPMP, @Tilii, @Olivier, @Scirpus, @Andy Harless and many others kagglers as well.</p> <p>My solution is not unique - just ensemble of high performance diverse models. But I want to share some approaches I use, may be it will be helpful to somebody.</p> <p><strong>Cross-Validation</strong></p> <p>I use standard 5-Folds Stratified split, but at first I try to find "magic" seed(s) that split data to folds with more-or-less similar internal statistical characteristic of data to reduce intra-fold variance. I discussed this approach here (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/42785#240324">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/42785#240324</a>). In addition to it I use averaging of 3-run per each fold with the same model but different (random!) seeds.</p> <p>This scheme can be extended to @CPMP version of CV with random seeds (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44614#251014">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44614#251014</a>) by changing [0,1,2,3,4] :) to seeds with smallest variance. I still not sure that my approach is more robust to bias/noise that other, possible it is my luck that I don't catch bias, but... 
in this competition it helps me to get gold )</p> <p>// I checked (and submitted) several other models with another seed - all of them had worst CV/LB.</p> <p><strong>Model to use</strong></p> <p>Nothing special - XGB, LGB, CatBoost, RGF (with 2 or 3 base set of features): <br> - no "calc" / OHE / remove unimportant features <br> - no "calc" / LabelEncoded (for CatBoost / LGB-Cat) <br> - no "calc" / OHE / +some base FE (iteractions, counters, ....) </p> <p>FFM (added some value to ensemble)</p> <p>Thank you @Scirpus for <a href="https://www.kaggle.com/scirpus/libffm-generator-lb-280">https://www.kaggle.com/scirpus/libffm-generator-lb-280</a>, <br> @Chia-Ta Tsai for <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43741">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43741</a>, <br> @Oscar Takeshita for <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43741#245562">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43741#245562</a> )</p> <p>NN (based on Keras/TF)</p> <p>I use my own version of Embedding NN and try several architecture of model:</p> <ul> <li>Concat All Ind/Reg/Car category features to three groups </li> <li>Embed groups separately</li> <li>Merge with Base columns</li> <li>Merge with Calc (or drop Calc)</li> <li>Dense*N</li> </ul> <p>===</p> <ul> <li>Concat All Ind/Reg/Car/Calc features to 4 groups</li> <li>Dense over each group</li> <li>Merge output</li> <li>Dense*N</li> </ul> <p>// very nice version of embedding NN you can find here (<a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44601">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44601</a>) </p> <p>Hyperparameters to tune: <br> - embed size (it's strange, but best result for me was size=2 of all category features) <br> - dense: one or two layers with 8-64 cells and dropout 0.25-0.1</p> <p>NN had high weight in my ensemble.</p> <p><strong>Feature selection</strong></p> <p>To reduce dataset (after adding some pack of features) I use the feature selection method: <br> - at first run lgb (it is fast!) with small depth (5) in loop and remove features with 0 importance <br> - then run shuffling selection of features using lgb (again it is fast!) <br> - final tuning - remove features with 0 importance on the new dataset </p> <p>// more detailed description you can find in coments below</p> <p>I have tried RFECV/Boruta (you can remember several nice discussion/kernel about this topic), but hold out using it (remove importance/shuffling is much faster with very nice result)</p> <p><strong>Ensemble</strong></p> <p>I use 2-step ensembling method. At first stage I stack all high-performance model of the same type (LGB-XGB/FFM/NN/...). 
Then I stack all L2 models together.</p> <p>The ensembling procedure: <br> - use LRCV to obtain correct C for current set of models <br> - create cor_matrix for prediction (I try pearson/spierman) <br> - get 2 models with smallest corr_coef and combine it <br> - iteratively add model with smallest corr_coef to current ensemble (drop model if adding don't get improvement in score) <br> - at the end build LR with C on selected models </p> <p>// I try to use NN / LGB as stacker but LR was better (at least in my case)</p> <p><strong>What don't fly</strong></p> <ul> <li>Clustering and all method of dimensionality reduction</li> <li>KNN and all attempt to play with similarities</li> <li>Feature Engineering...</li> </ul> <p><strong>Main lesson from this competition</strong></p> <ol> <li>Robust CV is 50% of success ) // remember Mercedes )</li> <li>Trust your CV!!! (I have submit to 12 place with better CV but not to choose it as I think it's overfit)</li> <li>Correct Ensemble scheme is nice ) : Use same folds / Use High performance diverse models / Be careful about overfitting/leakage.</li> <li>Don't looking for best n_rounds. Averaged Folds test predictions works well!</li> </ol> <p>Thank to all! Good luck and happy kaggling! :)</p> <p>@kruegger <br> P.S. you can upvote if you want )</p> <p>@Michail, thank you for congrats ) About shuffling - you are right, this idea is really amazing.</p> <p>According your question - your approach is correct, perhaps more robust, but slow. I have tried it on test set of features and don't give significant difference with my current scheme, but it is faster. Let me describe both methods:</p> <ol> <li>Shuffle feature then CV then compare score. You have to train N*K models (N- number of features, K - folds), and make N*K predictions.</li> <li>Shuffle feature on test inside CV. You train only K models, and make the same N*K predictions.</li> </ol> <p>In fact that training is more consumable process than prediction, the time for the whole procedure is grow lineary by O(N). If we have small set of features, but we need to be really sure what of them are important - your approach is better. When (in my case) we have ~200 features the speed (with reasonable precision) became the key.</p> <p>So advantage of shuffle in CV is speed with reasonable accuracy )</p> <p>P.S. It is just my opinion for method I use in this competition, perhaps I am not right, so I am ready to discuss about any other FSel methods.</p> <p>P.P.S. For example, RFECV in this competition doesn't help for me, but in recent Sberbank competition it helps me a lot.</p> <p>Thank you, @olivier! And personal thank you for your researches and posts. They are very helpful!</p> <p>Thank you! I m really happy to get gold here by using my own decisions, and again - thank you for sharing a lot of information (in this and others competitions) - it really helps me to improve my skills!</p> <p>Thank you!</p>
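<p>To make the shuffling-based feature selection described in this write-up concrete, here is a minimal sketch of the idea: train once per fold, then score many shuffled copies of the validation fold and keep only features whose shuffling hurts the score. The LightGBM parameters, the pandas DataFrame layout and the Gini helper are illustrative assumptions, not the author's original code.</p> <pre><code># Minimal sketch of shuffle-based feature elimination (assumed data layout).
# Train once per fold, then score many shuffled copies of the validation fold.
import numpy as np
import lightgbm as lgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def gini_normalized(y_true, y_pred):
    # Normalized Gini for a binary target, via its relation to ROC-AUC.
    return 2.0 * roc_auc_score(y_true, y_pred) - 1.0

def shuffle_importance(X, y, n_splits=5, seed=0):
    rng = np.random.RandomState(seed)
    drops = {c: [] for c in X.columns}
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for trn_idx, val_idx in folds.split(X, y):
        model = lgb.LGBMClassifier(n_estimators=200, max_depth=5, learning_rate=0.05)
        model.fit(X.iloc[trn_idx], y.iloc[trn_idx])
        X_val = X.iloc[val_idx]
        base = gini_normalized(y.iloc[val_idx], model.predict_proba(X_val)[:, 1])
        for col in X.columns:
            X_shuffled = X_val.copy()
            X_shuffled[col] = rng.permutation(X_shuffled[col].values)
            score = gini_normalized(y.iloc[val_idx], model.predict_proba(X_shuffled)[:, 1])
            drops[col].append(base - score)  # a large positive drop means the feature matters
    # Features whose average drop is near zero (or negative) are removal candidates.
    return {c: float(np.mean(v)) for c, v in drops.items()}
</code></pre> <p>This matches the cheap variant discussed above: the model is trained only once per fold, and only the prediction step is repeated on the shuffled copies.</p>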
Porto Seguro’s Safe Driver Prediction
18th Place Solution - Careful Ensembling + Resampling Diversity
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Edit: <a href="https://jeddy92.github.io/JEddy92.github.io/seguro/">here</a> is a blog post I wrote describing some aspects of my solution in more detail. It's focused on stacking, entity embedding neural networks, and resampling strategies.</p> <p>This was great fun and an incredible learning experience. What an awesome surprise to land in the top 100, let alone top 20. I hope that this post is useful!</p> <p>My approach was influenced by many public kernels and discussions, so I owe a huge thank you to the community and many individuals. To mention specific people whose kernels I directly used or adapted in my solution -- <a href="https://www.kaggle.com/aharless/xgboost-cv-lb-284">Andy Harless / Olivier</a>, <a href="https://www.kaggle.com/tunguz/rgf-target-encoding-0-282-on-lb">Bojan</a>, <a href="https://www.kaggle.com/xbf6xbf/single-xgb-lb284">xbf</a>, and <a href="https://www.kaggle.com/snowdog/xgb-sandwich">Snow Dog</a>. </p> <p>This last kernel deserved a lot more attention. Building models that used its sandwich downsampling technique made the difference between top 100 and top 20 for me. Looking back on it, I'm quite surprised that there wasn't more in-depth discussion of resampling strategies given that class imbalance was a central part of this problem. </p> <p>My final solution included 16 base models, stacked with a regularized logistic regression. I decided early on that trying to get too fancy with stacking could lead to a bad overfit. I also prioritized trying to find model diversity over hyper-tuning individual models. Most models were trained with the 5-fold cv / average method seen in the kernels. To select which models to add to the stack I looked at spearman correlations of the test predictions and 5-fold cv scores (with a final mean cv of .2924, not far off my private score). I carefully chose which public kernels to consider including based on their having good cvs. </p> <p>I did a tiny bit of "feature engineering", but it was mostly data processing and probably didn't add much. For some models I combined the ps06-09 and ps16-18 bin variables into single category columns (these were clearly one-hot-encoded or functionally equivalent to it), and I also imputed ps_reg_03 from the other reg features with a simple linear regression. I also sometimes added interaction terms based on boosting feature importances. I added a sum of null values column, and dropped the _calc columns. </p> <p>Diversity in my stack comes from 3 main sources: 1) different models, 2) different features 3) different resampling of the training data. 
Here's a breakdown, most models used 2x resampling or class weights unless stated otherwise:</p> <p><strong>Gradient Boosting</strong></p> <p>4 LGB models: 1 with OHE categoricals, 1 with target encoding, 1 with entity embedding features, 1 with 5-fold sandwich downsampling (so total of 26x5 = 130 individual models).</p> <p>3 XGB models: 1 with OHE (xbf's), 1 with target encoding (Andy / Olivier), 1 with sandwich downsampling.</p> <p><strong>Neural Network</strong></p> <p>1 entity embedding network, basically my <a href="https://www.kaggle.com/aquatic/entity-embedding-neural-net">public kernel</a>.</p> <p><strong>Regularized Greedy Forest</strong></p> <p>3 models: 1 with OHE, 1 with target encoding (Bojan's), 1 with the sandwich.</p> <p><strong>Field-Aware Factorization</strong></p> <p>2 models: 1 without resampling and 1 with the sandwich (yes, writing a shell script to train these 130 models was very fun).</p> <p><strong>The Stuff That May Have Been A Bad Idea</strong></p> <p>In the end I think I took resampling diversity a bit too far, and actually included some LGB models where I upsampled the 1s enough to be the majority class in a fold. Those are the remaining models in my stack, and though they improved my CV from .2920 to .2924 and my public LB score, they worsened my private score. I think in the CV they only really improved my score on one fold, so I should have been more skeptical. If I had stuck with my highest LB score solution 3 days ago instead of my highest LB solution today, I would have finished at .29183, in 11th place. </p> <p>I hope that this writeup is helpful and please let me know if I can answer any questions about what I did. I may write about some of the methods I used in more detail in a blog post that I'd share here later on. </p> <p>Thanks again, and happy kaggling!</p> <p>Depended on the model - if I remember correctly, I just used class weights for 2, 2x upsampling was more common. One of the xgbs (Andy's kernel) had scale_pos_weight = 1.6 so that's a relatively small class weight adjustment. One of the lgb modeled had is_unbalance set to True, which is the same as 'balanced' class weights in sklearn models.</p> <p>Check out this kernel -- <a href="https://www.kaggle.com/headsortails/steering-wheel-of-fortune-porto-seguro-eda">https://www.kaggle.com/headsortails/steering-wheel-of-fortune-porto-seguro-eda</a>. The upshot is that they seemed to essentially be not much better than random noise as predictors. Models tend to do worse when you add unhelpful features to the mix.</p> <p>"In the “calc” binary features we see that we still haven’t managed to find a useful signal. There might be a tentative decrease in claim rate between the values of 2 vs 3, but everything is consistent within the uncertainties. Therefore, it appears that there is very little predictive power in the “calc” group."</p> <p>"The “calc” binary features on the other hand display no significant variation in their claim rates; thus providing further evidence that the “calc” group might not be very useful for our prediction goal. Here the sum distribution also peaks at 2, but takes a minimum of 0 and also a maximum of 6 for in total 6 “calc” features. 
Toward larger sums the (log-scaled) distribution has less of a sharp decline than for the “ind” features."</p> <p>Sorry could have been more clear there - I upsampled the positive class within each training fold by duplicating all positives, as in <a href="https://www.kaggle.com/ogrellier/xgb-classifier-upsampling-lb-0-283">https://www.kaggle.com/ogrellier/xgb-classifier-upsampling-lb-0-283</a> for example.</p> <p>Thanks, you too! Looking forward to reading your solution later</p> <p>Good to hear! I wrote about the class balancing / resampling stuff in more detail at the end of my <a href="https://jeddy92.github.io/JEddy92.github.io/seguro/">blog post,</a> if you're still interested in seeing that.</p>
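<p>As a rough illustration of the within-fold upsampling mentioned above (duplicating the positives in the training split only, never in the validation split), a minimal sketch could look like the following; the LightGBM wrapper and data layout are assumptions made for the example, not the author's code.</p> <pre><code># Sketch: duplicate the positive rows inside each training fold only (assumed setup).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

def cv_with_upsampling(X, y, params, n_splits=5, seed=42):
    oof = np.zeros(len(X))
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for trn_idx, val_idx in folds.split(X, y):
        X_trn, y_trn = X.iloc[trn_idx].values, y.iloc[trn_idx].values
        pos = y_trn == 1
        # 2x upsampling: append one extra copy of every positive training row
        X_trn = np.vstack([X_trn, X_trn[pos]])
        y_trn = np.concatenate([y_trn, y_trn[pos]])
        model = lgb.LGBMClassifier(**params)
        model.fit(X_trn, y_trn)
        # The validation split stays untouched, so fold scores remain comparable.
        oof[val_idx] = model.predict_proba(X.iloc[val_idx].values)[:, 1]
    return oof
</code></pre>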
Porto Seguro’s Safe Driver Prediction
1st place with representation learning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Danzel, [Q1] in my swapNoise idea I sample a single feature value with a specific probability from the same feature column. It's completely randomized within the batch matrix, not just replacing the full column with different values (as you showed).</p> <p>You can try both swapping ideas (random per value [i,j], random for a full column [:,j]). But this only influences the autoencoder pretraining step. You need to verify whether it works in a supervised setting later. I can say that I tried this DAE+NN approach in many competitions; it only "worked" for two, here and in Otto, and even then the numeric improvement was very small, if any.</p> <p>[Q2] The idea of rankGauss is to map the numeric feature values to a meaningful distribution based on ranking. Values which appear often in train get a bigger space after the rankGauss transformation (= ranking transformation). Based on my experience, categoric columns work well as a one-hot dense [0,0,0,1,0,0,0,0] representation for a NN; this is what my normal pre-processing does. So in my rankGauss implementation I don't transform columns which have 2 unique values (assuming these come from one-hot categories). I think you mean raw categorical features as integers? This is bad, the input values are too large. I recommend preprocessing any csv-based data as: rankGauss(numeric columns) + one-hot(cat columns) + (binary columns). If a categorical column has too many tokens, you can try using counts or numeric integers instead.</p> <p>No problem. To answer your questions: 1) 4000-1000r-1000r-s means 4000 input features, then two hidden layers of 1000 ReLU units, then one sigmoid output neuron. 2) I always followed the approach of maximizing the randomness of the samples drawn (max. entropy). So on average 15 features out of 100 will be swapped with other values from inside the feature column.</p> <p>Thanks to Porto Seguro for providing us with such a nice, leakage-free, time-free and statistically correct dataset. A nice playground to test the performance of everything; this competition was statistically similar to <a href="https://www.kaggle.com/c/otto-group-product-classification-challenge">otto</a> (larger test set than train, anonymous data), but differed in the details. I want to dive straight into the solution. It's a blend of 6 models: 1x lightgbm, 5x nn, all on the same features; I just removed *calc and added 1-hot on *cat. All neural nets are trained on denoising autoencoder hidden activations, which did a great job of learning a better representation of the numeric data; lightgbm runs on the raw data. Nonlinear stacking failed; simple averaging works best (all weights=1). That's the final .2965 solution. Two single models would have been enough to win (#1 + #2 give me 0.29502 on private). The complete list of models in the final blend:</p> <p><img src="https://i.imgur.com/z62wCWj.png" alt="summary"> The font is a bit small; you need to increase the zoom with ctrl (+). The difference to my private .2969 score is that I added bagged versions (nBag=32) of the above-mentioned 6 models, all weight=1, and Igor's 287 script with weight=0.05. Not really worth the effort for a .2965 -&gt; .2969 gain, huh!?
I selected these 2 blends at the end.</p> <h1>feature engineering</h1> <p>I dislike this part most, my creativity is too low for an average competition lifetime, also luck plays huge role here. Therefore I like representation learning, its also an step towards AI. Basically I removed *calc, added 1-hot to *cat features. Thats all I've done. No missing value replacement or something. This is featureset "f0" in the table. This ends up in exactly 221 dense features. With single precision floats its 1.3GB RAM (1e-9*4*221*(595212+892816)). Thanks to the public kernels (wheel of fortune eg.) that suggest to remove *calc features, I'm too blind and probably would not have figured this out by myself. I never remove features.</p> <h1>local validation</h1> <p>5-fold CV as usual. Fixed seed. No stratification. Each model has own rand seed in CV (weight init in nn, data_random_seed in lightgbm). Test predictions are arithm. averages of all fold models. Just standard as I would use for any other task. Somebody wrote about bagging and its improvements, I spend a week in re-training all my models in a 32-bag setup (sampling with replacement). Score only improved a little.</p> <h1>normalization</h1> <p>Input normalization for gradient-based models such as neural nets is critical. For lightgbm/xgb it does not matter. The best what I found during the past and works straight of the box is "RankGauss". Its based on rank transformation. First step is to assign a linspace to the sorted features from 0..1, then apply the inverse of error function ErfInv to shape them like gaussians, then I substract the mean. Binary features are not touched with this trafo (eg. 1-hot ones). This works usually much better than standard mean/std scaler or min/max.</p> <h1>unsupervised learning</h1> <p>Denoising autoencoders (DAE) are nice to find a better representation of the numeric data for later neural net supervised learning. One can use train+test features to build the DAE. The larger the testset, the better :) An autoencoder tries to reconstruct the inputs features. So features = targets. Linear output layer. Minimize MSE. A denoising autoencoder tries to reconstruct the noisy version of the features. It tries to find some representation of the data to better reconstruct the clean one. With modern GPUs we can put much computing power to solve this task by touching peak floating point performance with huge layers. Sometimes I saw over 300W power consumption by checking nvidia-smi. So why manually constructing 2,3,4-way interactions, use target encoding, search for count features, impute features, when a model can find something similar by itself? The critical part here is to invent the noise. In tabular datasets we cannot just flip, rotate, sheer like people are doing this in images. Adding gaussian or uniform additive / multiplicative noise is not optimal since features have different scale or a discrete set of values that some noise just didnt make sense. I found a noise schema called "swap noise". Here I sample from the feature itself with a certain probability "inputSwapNoise" in the table above. 0.15 means 15% of features replaced by values from another row. Two different topologies are used by myself. Deep stack, where the new features are the values of the activations on all hidden layers. Second, bottleneck, where one middle layer is used to grab the activations as new dataset. 
This DAE step usually blows the input dimensionality to 1k..10k range.</p> <h1>learning with train+test features unsupervised</h1> <p>You might think I am cheating when using test features too for learning. So I've done an experiment to check the effectiveness of unsupervised learning without test features. For reference I took model #2, public:0.28970, private:0.29298. With exactly same params it ends up in a slighty weaker CV gini:0.2890. public:0.28508, private:0.29235. Private score is similar, public score is worse. So not a complete breakdown as expected. Btw total scoring time of the testset with this "clean" model is 80[s].</p> <h1>other unsupervised models</h1> <p>Yes I tried GANs (generative adversarial networks) here. No success. Since NIPS2016 I was able to code GANs by myself. A brilliant idea. Generated MNIST digits looked fine, CIFAR images not that. For generator and discrimiator I used MLPs. I think they have a fundamental problem in generating both numeric and categoric data. The discrimiator won nearly all the time on my setups. I tried various tricks like truncation the generator output. Clip to known values, many architectures, learn params, noise vec length, dropout, leakyRelu etc. Basically I used activations from hidden layers of the discrimiator as new dataset. At the end they were low 0.28x on CV, too low to contribute to the blend. Havent tried hard enough.</p> <p>Another idea that come late in my mind was a min/max. game like in GAN to generate good noise samples. Its critical to generate good noise for a DAE. I'm thinking of a generator with feature+noiseVec as input, it maximizes the distance to original sample while the autoencoder (input from generator) tried to reconstruct the sample... more maybe in another competition.</p> <h1>neural nets</h1> <p>Feedforward nets trained with backprop, accelerated by minibatch gradient updates. This is what all do here. I use vanilla SGD (no momentum or adam), large number of epochs, learning rate decay after every epoch. Hidden layers have 'r' = relu activation, output is sigmoid. Trained to minimize logloss. In bottleneck autoencoder the middle layer activation is 'l' = linear. When dropout!=0 it means all hidden layers have dropout. Input dropout often improve generalization when training on DAE features. Here a slight L2 regularization also helps in CV. Hidden layer size of 1000 works out of the box for most supervised tasks. All trained on GPU with 4-byte floats.</p> <h1>lightgbm</h1> <p>Nice library, very fast, sometimes better than xgboost in terms of accuracy. One model in the ensemble. I tuned params on CV.</p> <h1>xgboost</h1> <p>I didnt found a setup where xgboost adds something to the blend. So no used here in Porto.</p> <h1>blending</h1> <p>Nonlinear things failed. Thats the biggest difference to otto competition where xgb, nn were great stackers. Every competition has its own pitfalls. Whatever. For me even tuning of linear blending weights failed. So I stick with all w=1.</p> <h1>software used</h1> <p>Everything I've done here end-to-end was written in C++/CUDA by myself. Of course I used lightgbm and xgboost C interface and a couple of acceleration libs like cuBLAS. I'm a n00b in python or R like you guys are experts. My approach is still oldschool and low level. I want to understand what is going from top to bottom. 
At some time, I'll learn it, but currently there are just too much python/R packages that bust my head, I'm stick with loop-based code.</p> <h1>hardware used</h1> <p>All models above can be run on a 32GB RAM machine with clever data swapping. Next to that I use a GTX 1080 Ti card for all neural net stuff.</p> <h1>total time spend</h1> <p>Some exaflops and kilowatts of GPU power was wasted for this competition for sure. Models run longer than I spend on writing code. Reading all the forum posts also costs a remarable amount of time, but here my intention was dont miss anything. At the end it was all worth. Big hands to all the great writers here like Tilli, CPMP, .. really great job guys.</p> <h1>what did not work</h1> <p>upsampling, deeper autoencoders, wider autoencoders, KNNs, KNN on DAE features, nonlinear stacking, some feature engineering (yes, I tried this too), PCA, bagging, factor models (but others had success with it), xgboost (other did well with that) and much much more..</p> <p>thats it. The attatched file is the console log of model nr 3. If you do a "cat nn.cfg.log" from a Linux console you get the output with nice colors.</p> <p>I tried DAE for years, on every attempt I found some minor improvements. They worked in otto as well but we didnt publish it.</p> <p>Hi, short answer no. Maybe in future. Exactly same nn code is used here at operasolutions in our ml module. Also who will use a hacky command line tool ? 99% use python packages</p> <p>I recommend linear activation in the middle layer of bottleneck setup because relu truncate the values &lt;0. Yes just concat to a long feature vector. Here for a deep stack DAE 221-1500-1500-1500-221 you get new dataset with 4500 features.</p> <p>thanks, yes I noticed the nice tsne figures by Tilii, very well done! During competition I also tried to map the data to 2D space just for curiosity with a bottleneck DAE, see: <img src="https://i.imgur.com/4gxbsWJ.png" alt="porto 2D by DAE" title=""></p> <p>yes basically you can use any layer activations as new dataset. Information is present is all layers. When using all layers you don't loose any info (this was my idea, I use ALL layer outputs as features) but maybe not needed. But haven't tried only taking eg. last hidden layer in a deep stack setting.</p> <p>yea .. I am thinking of it. NVIDIA may donate a new GPU to open-source projects .. a V100 would be nice :D since I can switch to GPU16 2-byte floats (have full support coded for that) and the volta provide &gt;100TFLOPs. gamer cards like the 1080 suck with half-precision floats.</p> <p>thanks man. Without swapNoise DAE would not work that well. But this is also not optimal as well</p> <p>thanks, I tried this with #4 model (you see colGroups=1 in unsupervised params). This was the idea to swap columns with same meaning (categoric + all corresponding 1-hot categories). No success...</p> <p>My interest in all kind of nn things last back 15 years, I tried everything and still scratching only the surface. A good paper is: <a href="https://arxiv.org/abs/1406.3269">Scheduled denoising autoencoders</a></p> <p>basically I do nothing. If we have 3.6% ones, just use them as it is. We cannot generate more samples, not more info available, thats all we have. Yes I tried upsampling in supervised nn as other kernels suggested, but had no success that improved the blend, I guess I could improve individual models with upsampling but only a little.</p> <p>Just noise at input is enough. 
The noisy samples are look different for every batch, and the DAE learns from millions of batches. Its nearly always a different noisy version of input features that the DAE tries to reconstruct. It learns an internal representation of the prob distribution of the dataset. The internal state is used later to learn supervised from. </p> <p>Yes you can add noise to hidden layers too, havent tried this.</p> <p>thanks, good question. I would like to have ADAM as a switch to try it out, its not there :) I was too lazy to code it and I am only 99% sure how to implement it, I used it for another sgd model in another domain .. another story. Anyway. ADAM needs additional memory storage of the weights, you need to store 2 copies of the weights for all this moment estimates. A problem for really large nets, but doable. Being too happy with vanilla SGD, works just fine.</p> <p>yes exactly</p> <p>no. You need the rank transformation before to equalize the numeric feature regions.</p> <p>hi, yes scale should be nearly equal. But still the problem of mixed numeric/1-hot columns. There's only one answer :D .. try it out. Yes makes sense, you can put in DAE features to xgboost/lightgbm. It improved my CV a little but get worse on public LB, so I rejected it (BIG mistake).</p> <p>very good question. I don't have an answer. Trial and error. You can look at reconstruction rmse during training of DAE when having fix noise level. But its only an indicator. I've tried very deep and wide DAE with better rmse, but results in worse supervised score.</p> <p>hi, thanks. Yes I think you are right, linspace should be -1..+1. Just look at the histogram after the trafo. When its gauss-shaped and centered you've done it right.</p> <p>spend some time in life to think about solving problems. Often a simple approach explains the underlying</p> <p>suggestion from public kernels. Improved my CV. So I removed *calc</p> <p>yes. 0.291 private score with lightgbm with 8[min] total train time. I think that's the best for a simple/quick model.</p> <p>32GB RAM is minimum for 5CV on 4500 features with this data size. Of course you can use an older GPU, will run slower.</p> <p>it dont blend, at least my implementation does not contribute to the ensemble.</p> <p>ubuntu 16.04 what else?</p> <p>yes linear within bottleneck layer. Other layers relu.</p> <p>no. But I put it on the todo list for future experiments ..</p> <p>at each batch. Each batch samples new noise from the complete dataset. Complete dataset is train+test features. Copy the features before to use as target. Targets should be the clean uncorrupted features.</p> <p>I've done a run to train the unsupervised DAE step without test features. Its described in one paragraph in my writeup. Here I run my framework in prediction mode like I never saw the testset before. Result got worse, but not much.</p> <p>yes, see:</p> <p>37 2..38 no *calc</p> <p>5 ps_ind_02_cat</p> <p>3 ps_ind_04_cat</p> <p>8 ps_ind_05_cat</p> <p>13 ps_car_01_cat</p> <p>3 ps_car_02_cat</p> <p>3 ps_car_03_cat</p> <p>10 ps_car_04_cat</p> <p>3 ps_car_05_cat</p> <p>18 ps_car_06_cat</p> <p>3 ps_car_07_cat</p> <p>2 ps_car_08_cat</p> <p>6 ps_car_09_cat</p> <p>3 ps_car_10_cat</p> <p>104 ps_car_11_cat</p> <hr> <p>221 features</p> <p>I subtract mean afterwards. And do not touch 1/0 (binary columns). The basic idea of this "RankGauss" was to apply rank trafo and them shape them like gaussians. Thats the basic idea. You can try your own variation of this.</p> <p>I tried to drop, but CV got worse. 
So I kept the originals</p> <p>yep. Output layer is always linear in my DAE experiments.</p> <p>right</p> <p>thanks, my lRate is a raw value. This means without dividing by batchsize of whatever. Try my lRate/128 and use batchsize=128</p> <p>2013 I bought a book "CUDA Programming: A Developer's Guide to Parallel Computing with GPUs (Applications of Gpu Computing)" for intro. But major progress was made with self-experiments. I run kernels on GPU and exactly compare the CPU version, where I know its calculated correctly. Big difficulty was to understand the grid layout on a GPU (blockIdx, threadIdx..) to efficiently parallelize. Made many experiments on runtime and howto grid the problem (eg. calculate dropout or activation functions). All done in C++ with cuda-toolkit and gcc.</p> <p>no, all plain SGD</p> <p>Thanks too. Denoising autoencoders does not need to be symmetric or have a bottleneck layer. Their aim is to reconstruct a "noisy" version of the sample. I figured out that a deep stack of same layer size like input-N-N-N-output does a good job. Thanks for the paper link, I will read it when time let me do. No swap noise was just implemented as I think about the "noise generator". In tabular data you have mixed colums like binary, categoric integers and numeric floats. With this swapNoise one can sample noise values from the same distribution column-wise. 10..15% swapNoise is a good start value.</p> <p>pipeline is: raw-&gt;normalization(=RankGauss)-&gt;DAE-&gt;nn no normalization after DAE.</p> <p>thanks. I started with some params that worked on common problems like MNIST or other competitions. Like swapNoise=0.15 is a good start. Or 1-3 layers of 1000 units for the supervised nn. Learning rate for DAE is good when its near to divergence (as large as possible). DAE topology for deep stack 2-5 hidden layers is common. Sometimes memory limits the number of hidden units. Dropout in supervised nn is always a good idea. Values 0.1 .. 0.9 can be tried. lRate in supervised nn should be set that the net learns at least 100 epochs or so until it overfits.</p> <p>no, everything on the 595212 train samples.</p> <p>with colGroups I tried to swap columns with same meanings. This concerns categoric cols. I tried to swap the interger category+corresponding one-hot cols at once. No success. I answered a similar question in one of the comments above..</p> <p>in my models here no. Maybe there is some potential, I tried upsampling in a single nn on raw data and it improved to non-upsampling version. But on raw data I could not get better than 0.286 on CV or so, too low to contribute to the blend.</p> <p>correct.</p> <p>ok, I attached a file in my original post, its the console log of model nr 3. If you do a "cat nn.cfg.log" from a Linux console you get the output with nice colors. Maybe somebody can read something out of it. You know the MSE depends largely on the processed input data.</p> <p>thanks for pointing to this paper, very interesting. Why they are using the word "Hybrid Bootstrap" for that? Why hybrid? They sample from another training point, this is exactly what I was doing here. Nice to see they use it as a replacement for dropout :) I also tried exactly this in the hidden layers (as replacement for dropout) a year ago or so with no success. Anyways. funny. Again, for generating artificial noise samples for a denoising autoencoder swapNoise (or hybrid bootstrap) are good choices for csv/table data. But there must be better ways to generate artificial samples (GANs). 
For image data all kind of natural distortions (shear, affine, scale, rotate, flip..) are better ones.</p> <p>with denoising setup (swapNoise) you can never learn down to zero training error. there is a lower error bound, hence no regularization needed. train down as best as you can is my advice. But there is also some black art involved, for example lower training error does not necessarily mean better results on supervised task afterwards. Many unknowns .. but at least I managed for some datasets to show that DAE can do something useful.</p> <p>representation learning is the future :) ^^</p> <p>with more neurons in hidden the model has more power to learn the denoising function</p> <p>yes, transformed values are in -3..+3 range after normalization. In DAE I use linear output (no activation function), they can approximate any range.</p> <p>i used 10000-10000-10000. 20% swapnoise. Data normalization is the critical part. Which numeric preprocessing have you done?</p> <p>:) rankgauss+dae+nn does not work as well as plain lgbm. Cannot breach .79 barrier on CV :( still trying .. but this is the regular behavior. neural nets are behind boosted trees on csv datasets. I saw this so many times. One some datasets (like here on porto) it works. Especially when nTest &gt; nTrain and train+test comes from same distribution, unsupervised approach can learn better representations, "sometimes" :)</p> <p>between 300 and 700, depending on which dataset</p> <p>simple backprop (vectorized over batch). So I guess its dL/dw</p> <p>I added reference code to transform one vector with rankGauss under my github account.</p> <p>proper question. Yes I tried that (dae+lgbm) but gave only minor improvement in blend. As far as I remember it dropped LB score, so I did not experiment further.</p> <p>yea you figured it out - there are so many unknowns in this unsupervised phase. Years ago I also tried something like this and I got lucky to found a setting where error improves locally and I followed this path harder, this is what this solution here is about. There are many details, begins with data normalization to topology, learn params, etc.. The mse you mentioned with .0061 cannot be compared with my log output coz input data is totally different, hence I cannot say weather this is good or not. Ultimate test is using the generated features to the supervised learner and see some gains. Recently I got an invite to join Bojans team @ "home-credit-default-risk", maybe dae+nn helps also there. I'm not too optimistic because it has only a small testset.</p> <p>"calc" feats hurt the generalization of the models here, so better to reject them. one-hot is the more natural way to me to handle categoric data. For example cat=1 can be totally different to cat=2, so when I have 1-hot encoding these are different features, hence independent. But all this feature generation / normalization (needed in nn) is still black art</p> <p>no, I'm not aware of any papers about normalization. But I am away from academics since many years. I do not have a good overview of recent work in this field. My feeling is that nn cannot split in arbitrary fine granularity like gbdt(xgboost) does therefore you need to guess "good input regions"</p> <p>try it on your own on a particular dataset, CV will give you a hint which might work better</p> <p>we all know neural nets need a proper normalization on numeric cols to learn coz they calc gradients directly from inputs. 
In books you find mean/std or min/max normalization, which are not optimal when values are not gaussian distributed. During many years I tried many things, for me this "RankGauss" works well out-of-the-box for every dataset. Recently I found out that using the target information too leads to a better rank transformation. The idea here is to give the input range more weight when target changes often (more interesting regions). You see there is still room for creative approaches in normalizing data for neural nets.</p> <p>about swap noise: Neural net training happens in batches, this means the net only see a very small subset of total randomized rows, I use nearly always batchsize=128 (good compromise of speed and accuracy). Eg. with 1M rows the net processes 10^6/128=7813 batches per epoch. In an autoencoder setup input features and targets are the same. If you train with input=targets and huge hidden layers error will become nearly zero quickly (overcomplete representation, no regularizer). Swap noise on the input will modify the values in the input batch (randomly sampled from whole dataset column-wise), this acts as a very strong regularizer because the net need to "de-noise" the samples.</p> <p>yes, width of hidden layers determine model power. More is usually better, at least 1000 neurons I would say to fully occupy GPU (batchsize 128). More should not hurt when trained with swap noise (=strong regularizer, error can never be 0). </p> <p>very often normalization it does not hurt. A few exceptions: 1-hot category columns are ok to put in as 0 and 1. Outputs from DAE (activations from hidden units) surprisingly doesn't need normalization. Image pixels (uchar) work well by divide them by 255. </p> <p>I think I need to attach reference c++ code for this transformation</p> <p>2 or 3 hidden layers with 10x input size.</p> <p>1) I only use pure SGD, my experience is that you need a huge number of epoch to get good DAE representations. I recommend 1000 epochs.</p> <p>2) Yes in one epoch you iterate over all data batches (see all rows exactly once). In my implementation I randomize the dataset before and then train always on the same batches. Good.</p> <p>For swap noise on the inputs (targets are not changed, right!) I always sample new rows in every batch from the same column, this gives most randomization. Always generate new random numbers, I think this is important to fully explore the noise. So when u process a batch select eg. random 15% of the columns and exchange them with a value from a random row.</p> <p>I use cross entropy, (y_target-y_prediction). This works usually best. But as you said it comes down what works best, you can try squared error, L1 error, CE error.</p> <p>Thermometer coding (with equal ranked buckets) is a very robust code for representing numeric values to a nn. It always works. The bad thing is that you need 10..100 inputs to represent one num feature. And it can do worse than original numeric feature itself. Again here try yourself which normalization works best. 
CV will tell you.</p> <p>Hi, thanks for your comments.</p> <p>Yes the 221-1500-1500-1500-221 architecutre is not a bottleneck.<br> It is another kind of data coding by the help of denoising autoencoder training.<br> A different kind of code - it helped a little within the final blend.</p> <p>Both topologies does a different data coding style.<br> It always depends what can work, it need to be tried out, always.</p> <p>I remember trying to use the outputs too as features, but never helped the supervised model after. And yes you can also add original data, I guess was never trying this, but would be suprised if it helps.</p> <p>Never followed the idea of VAE, my believe was better noise like swap noise add good variation to sample the feature space. In this competition I was lucky enough to apply this successfully.</p> <p>Yes, categoricals as 1-hot, numerics are rank gauss normalized. This works really well since today. I found another schema for numericals which sometimes gives advantage over this rank gauss: thermometer rank coding. For one numeric feature I create eg. 100 binary features and use the thermometer code as representation (on a rank-equalized numeric scale).</p> <p>Error for my DAE is: (y-y_target). I think this is called logloss error or crossentropy. Not MSE which is (y-y_target)^2.</p> <p>Michael</p> <p>"gauss normalize": with this kind of transformation you do not loose information coded in numeric features, rank preserving and it seems better for nn's. Sometimes raw are better. You should have this in your "toolbox" for playing around.</p> <p>As far as I remember, I havent tried normalization of neuron outputs (output of DAE). One should try this, I guess it doesnt help much.</p> <p>With "output is sigmoid" I mean the problem is binary classification. The activation function for the supervised model after. Outputs of DAE are linear.</p> <p>Cheers!</p>
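<p>The two ideas that carry most of this write-up, RankGauss scaling and swap noise, can be sketched in a few lines. The snippet below is only a minimal reading of the description above (rank transform, ErfInv shaping, mean subtraction; per-cell, column-wise value swapping) and not the author's C++/CUDA implementation; the exact constants and library choices are assumptions.</p> <pre><code># Sketch of RankGauss scaling and swap noise (assumptions noted above).
import numpy as np
from scipy.special import erfinv

def rank_gauss(x, eps=1e-6):
    # Rank-transform a numeric column, then shape the ranks like a Gaussian.
    ranks = np.argsort(np.argsort(x))          # 0 .. n-1 (ties broken arbitrarily)
    lin = ranks / (len(x) - 1.0)               # 0 .. 1
    lin = lin * (2 - 2 * eps) - (1 - eps)      # squeeze into (-1, 1) for erfinv
    g = erfinv(lin)
    return g - g.mean()                        # center at zero

def swap_noise(batch, full_data, swap_prob=0.15, rng=None):
    # Replace roughly swap_prob of the cells in `batch` with values drawn from the
    # same column of `full_data` (train+test), donor row chosen at random per cell.
    rng = rng or np.random.RandomState(0)
    noisy = batch.copy()
    n_rows, n_cols = batch.shape
    mask = rng.rand(n_rows, n_cols) &lt; swap_prob
    donor_rows = rng.randint(0, len(full_data), size=(n_rows, n_cols))
    cols = np.tile(np.arange(n_cols), (n_rows, 1))
    noisy[mask] = full_data[donor_rows[mask], cols[mask]]
    return noisy
</code></pre> <p>In a DAE training loop the swap noise would be regenerated for every batch, with the clean (uncorrupted) batch kept as the reconstruction target, as described above.</p>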
Porto Seguro’s Safe Driver Prediction
23rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats CPMP on your silver medal and for your many interesting forum posts.</p> <p>Yes, it's a lot of hardware for "One Old Dog".</p> <p>Thanks Yifan Xie, I'm glad you liked it.</p> <p>23rd place solution:</p> <p>The best of my two final selected submissions scored 0.29144 on the private LB and was my highest scorer on the public LB but only my 28th best on the private. My actual best scored 0.29157 and would have placed 19th on the private LB had I selected it. My final public LB rank was 1252, so I was one of the fortunate participants who moved up quite a bit (1229 places) from public to private ranking. I attribute this boost (and similar ones I got in previous Kaggle competitions that saw a lot of shakeup) to "clean living", by which I mean a strict avoidance of public kernels except for scanning them for potentially useful ideas. I also credit my silver medal in this contest to a combination of good luck, hard work, persistence, lots of experience, a modest amount of cleverness, ideas generously contributed on the contest forums by my fellow competitors, and fairly powerful hardware. I think the inventor Thomas Alva Edison was mostly right when he said that "Genius is one percent inspiration and ninety-nine percent perspiration", although in some cases one could usefully add several percent of "desperation". From my description below you might gather that my approach to this contest involved much more perspiration than inspiration, with my computers doing a lot of the perspiring.</p> <p>I worked in R on several multi-core Intel Xeon workstations with a total of 74 physical cores and 448 GB of RAM, all running Ubuntu Linux 16.04. I tried many ML algorithms, including catboost, deepboost, randomForest, glmnet, FactoRizationMachines, lightgbm, mxnet (function mx.mlp), rerf (Randomer Forest), rgf (regularized greedy forest), ranger (A Fast Implementation of Random Forests), softmaxreg (Training Multi-Layer Neural Network for Softmax Regression and Classification), neural networks in pytorch/nn (accessed in R through reticulate package), and xgboost. But in the end I ended up using only catboost, lightgbm, and xgboost, sometimes stacked together with glmnet. lightgbm seemed to be the fastest and best performer of the three boosting algorithms. I tuned the hyperparameters of these algorithms both manually and also in sequences of short holdout runs using my own optimization algorithm (not grid search, but a combination of random steps, gradient descent, and quadratic stepping).</p> <p>My best submission (0.29157) and my best selected submission (0.29144) were both minor variations on the same theme, which is massive ensembling, and they shared many of the same components. 
I choose to describe in more detail the 0.29157 submission, which is an ensemble (blend using weighted averaging) of many sets of predictions, each of which is itself a blend of many separate runs that differ from each other only in their random number seeds, and each of these runs also averaged the predictions of a stack of one or more algorithms run in parallel in separate jobs across multiple cores on the same machine with different random number seeds and different random subsets of the training data. Generally these subsets were "enriched" by selecting a larger fraction of target 1 records than target 0. For a stacker I developed my own algorithm, which is loosely based on the description of StackNet by kaz-Anova.</p> <p>All this may sound like ensemble overkill, but I hoped it would boost performance and also provide "safety in numbers" that protected to some extent against overfitting the public LB. I also did some cross-validation, which involved splitting the training data randomly into 75% modeling and 25% validation sets, measuring the Gini score on the holdouts, repeating this step 100 times with a fixed sequence of random number seeds, and computing the average (and standard deviation) of the Gini scores across all 100 runs. Those CV runs were used only to help select ML algorithms and tune hyperparameters - none of their models or forecasts were used directly for submissions.</p> <p>Data pre-processing: I converted the categoric features to binary 1-hots, except for catboost, which I allowed to handle categoric variables directly. I normalized the numeric features to a mean of 0 and standard deviation of 1, although that step was not important for the tree-based algorithms. I tried creating some features with PCA, but abandoned that after they failed to boost performance. My principal method of feature generation was to create a large number (20000 to 100000) of synthetic features using random combinations of 2, 3, or 4 numeric and/or binary variables, evaluate each of them on the training data using at first the magnitude of the spearman correlation with the target and then later the Gini score, and then pick some number of the top scorers (typically 50) to add to my feature set. The component variables were normalized and then combined linearly using random coefficients or multiplied together after being offset from their means by some random number of standard deviations. Some of my later runs also transformed some of the component variables via the sigmoid function to add additional diversity. The random feature generation and evaluation were spread across multiple cores for efficiency. To reduce the extent to which too many very similar variants of the same synthetic features would be generated and selected, I imposed various rules about how many times the same variable could appear as a synthetic feature component during the selection process. Because the selection of the highest-scoring synthetic features obviously involves some leakage from the target variable, there is a risk of overfitting. So the process has to be carefully tuned and monitored. I have used this method of feature generation for several years; it has proven more useful in some contests than others. 
Note that each run generates its own set of synthetic features using a separate random number seed; this enhances the diversity of the ensembles in which they are embedded.</p> <p>Here is an example of a fairly high-scoring (Gini 0.17336918) synthetic feature as an R expression: 0 + (0.6349437 * (((feat_ps_car_13) - 0.8132124) / 0.2245003)) + (0.6690679 * (((feat_ps_car_04_cat_dum_10) - 0.03195052) / 0.1758686)) + (0.511576 * (((feat_ps_reg_02) - 0.4391957) / 0.4043154)).</p> <p>I also thought of trying more sophisticated methods of feature generation and/or dimensionality reduction, but ran out of time before I could incorporate those into my models. But with Michael Jahrer's success I plan to look into denoising autoencoders for future contests.</p> <p>One of my most successful stacks ran catboost, lightgbm, and xgboost in one layer, then used their predictions as features for glmnet, which optimized a linear combination of those features and their first-level interactions. It turned out that a single ensemble of 15 runs using this stack scored 0.29154 on private LB, which was better than my best selection, although not as good as my best submission, in which it was a prominent component.</p> <p>Some of the other stacks I blended into my submissions included: a single layer of lightgbm; a single layer of catboost; lightgbm in one layer followed by catboost in a second, with restacking enabled to feed catboost both the original features and lightgbm's predictions as features; and variations on the preceding with lightgbm followed by xgboost and vice-versa;</p> <p>Finally, here are some of the hyperparameter values I used; these are examples, there is some variation between runs. Note that some of the integer parameters were tuned as non-integers; they were rounded probabilistically when actually passed:</p> <p>catboost: bagging_temperature 0.563685853, border_count 40.4459381, ctr_border_count 10.3875028, depth 7.3654202, iterations 567.283195, l2_leaf_reg 4.42588479, learning_rate 0.217045003, rsm 1</p> <p>lightgbm: bagging_fraction 1, bagging_freq 1, feature_fraction 0.228928379, lambda_l1 2.66438287, lambda_l2 1.25103924, learning_rate 1.6958276, max_bin 18.2150011, max_depth 14, min_data_in_leaf 219.512785, min_gain_to_split 2.5, min_sum_hessian_in_leaf 15.24851, num_iterations 13, num_leaves 500</p> <p>xgboost: alpha 1.63640353, colsample_bylevel 0.573134086, colsample_bytree 1, eta 0.490429693, gamma 1.91065198, lambda 0.300002699, max_delta_step 0, max_depth 5.45180973, min_child_weight 1, num_round 23.9970725, scale_pos_weight 1, subsample 0.696144779</p> <p>Thanks CoreyLevinson for your question. I don't include the target among the variables combined to make synthetic features. The danger of overfitting arises from the fact that of the very large number of synthetic features that I generate, I select only a relatively small number with the highest Gini scores (or whatever measure is appropriate) with respect to the target to actually include in my model. Otherwise I would end up with too many features for the modeler (lightgbm or whatever) to handle. Hope this helps.</p> <p>CoreyLevinson:</p> <p>The component variables are the existing feature variables that are combined to form the synthetic features. 
Here is the R code I use to generate a synthetic feature involving the product of component features:</p> <pre><code>Def &lt;- "1"  # initialize definition of product
for (Ivar in 1:NumFeats) {  # loop over all component vars to insert in product
  Mean &lt;- Means[Ivar]  # mean of component variable
  Sd &lt;- Sds[Ivar]      # standard deviation of component variable
  Off &lt;- Sds[Ivar] * runif(1, -1, 1) - Mean  # random offset between -1 and 1 standard deviations
  Term &lt;- sprintf("(((%s) + %.7g) / %.7g)", FeatNameList[Ivar], Off, Sd)
  Def &lt;- sprintf("%s * %s", Def, Term)
}
</code></pre> <p>And here is a sample expression produced by that code:</p> <p>"0.12301280 1 * (((feat_ps_ind_17_bin) + 0.1238053) / 0.3262747) * (((feat_ps_car_15) + -2.710025) / 0.7318236)"</p> <p>If I remember correctly, the idea of the sigmoid was to constrain the range of the values taken on by the variable to limit the effect of outliers. I actually used "1 / (1 + sigmoid( V))". The sigmoid transform was a recent addition to this feature synthesis method and is somewhat arbitrary. As you point out, there are numerous other transforms one could apply.</p> <p>Hope this helps.</p>
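<p>Since every score quoted in this write-up is a normalized Gini, a small reference implementation may be useful; this is the commonly used formulation (for a binary target it is equivalent, up to tie handling, to 2*AUC - 1) rather than anything taken from the author's R code.</p> <pre><code># Reference normalized Gini, the metric behind the scores quoted in these write-ups.
import numpy as np

def gini(y_true, y_pred):
    # Sort by prediction (descending), then accumulate the share of positives captured.
    order = np.argsort(y_pred)[::-1]
    y_sorted = np.asarray(y_true, dtype=float)[order]
    cum = np.cumsum(y_sorted) / y_sorted.sum()
    n = len(y_sorted)
    return cum.sum() / n - (n + 1) / (2.0 * n)

def gini_normalized(y_true, y_pred):
    # Divide by the Gini of a perfect ordering so the best possible score is 1.
    return gini(y_true, y_pred) / gini(y_true, y_true)
</code></pre>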
Porto Seguro’s Safe Driver Prediction
31st/5k+ with a single model, no stacking or blending - possible, wow!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Done.</p> <p>Yes, I will - just right now I'm not at home.</p> <p>200+ features, but many of them are just OHE. Of course, I can even share the kernel if it's interesting, but maybe the most important parts were tuning the model and testing it against my CV: my CV was 0.287+, public LB 0.285, but Private LB 0.29115!</p> <p>I'm a beginner in ML (my first model was just 1 month ago), and I'm very glad about the result I got from this competition. I really appreciate it and would like to say thanks to all participants and the organizers of this competition.</p>
Porto Seguro’s Safe Driver Prediction
35th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks @CPMP, target encoding is mentioned in the LightGBM model ! But using folds 2 and 3 I fine tuned which features to encode to get a better Public LB score.</p> <p>Thanks @Kruegger, here is the package on my github account. Not sure I committed all recent updates but you'll get the idea. <a href="https://github.com/goldentom42/predictor_stacker">https://github.com/goldentom42/predictor_stacker</a></p> <p>At each round the package </p> <ul> <li>tests all feature pairs by removing a bit of one and adding the same bit of the other</li> <li>keeps the best pair swap</li> <li>iterate until no significant improvement is made</li> </ul> <p>You can use several bags (without replacement) with a ratio of features and samples. I still have a problem with predict_proba since it does not check probabilities are positive ;-) That is why I used LogisticRegression to tidy things up. Future additions are : intercept, parallel processing of pairs (it's currently too long)...</p> <p>@Tilii, I think we'll have to create a competition of our own: how to rank in top 100 with "absolutely nothing special" solutions ;-) </p> <p>What I really meant is <strong>nothing special</strong> compared to the top 20. I'm really impressed by genetic features and the NN wake up call by Michael Jahrer and IronBar. </p> <p>Here is my solution for a 35th position. Nothing special here just a blend of different models.</p> <p><strong>Cross-validation</strong></p> <p>I used StratifiedKFold with seed 15 and saw early on that fold 2 and 3 were closely linked to Public LB. Some models were closer to Fold 2 and others were close to the average of Fold 2 and 3. I could increase my LB position while checking my overall local CV improvement and hope for the best on private LB.</p> <p><strong>Feature engineering</strong></p> <p>2 and 3 way feature interactions tested with a simple SGD. Continuous data was binned using pd.cut (did not have time to test Tilii's binning strategy but will in the future) All was then One Hot encoded to sparse matrix, removing small occurences ( &lt; 100 ). My best SGD scores 0.28206 on private LB.</p> <p>I tried to use a simple distance to mean of positive samples and mean of negative samples. The distance in itself was able to score 0.23828 on public LB. But for some reason models were unable to build on this... </p> <p>The only thing I found in this dataset is this: suppose you train an xgboost on the full train set and you get 0.286 local. If you check the scores for samples where ps_car_03_cat is equal/different to -1 you have a substantial score difference like 0.23 or 0.24 for one part and 0.30+ for the other samples. This is also true for ps_car_05_cat and ps_reg_03 Training models on these parts of the dataset did not really give local improvements...</p> <p><strong>Models</strong> </p> <ul> <li>LightGBM with 300+ features (OHE, target encoding, frequency, you name it) Local CV 0.2863 and private LB of 0.2869</li> <li>XGBoost with less features. It seems LightGBM sklearn API is better at managing memory. Local CV 0.2865 and private LB 0.2875</li> <li>Regularized greedy forest (XGBoost features). 
Local CV 0.282 and Private LB 0.284</li> <li>LGBM in Random Forest mode. Local CV of 0.2738 and Private LB 0.2732</li> <li>Keras, 2 layers. I'm not very good with NNs as I find them too long to train (I don't have a Tilii Special Nvidia GPU ;-), you know, the Ti ones). This used OHE sparse data and 5 bags on each fold: 1st layer 50 PReLUs and 2nd layer 25 PReLUs, L2 regularization, no dropout. Local CV 0.277 and private LB 0.282</li> <li>LibFFM - a big thank you to Scirpus and Tilii. The only thing I did here was to generate the files for each fold and simply run the executable. Local CV 0.2816 and private score 0.28597</li> <li>FTRL proximal - another big thank you to Scirpus and Tilii. Local CV in the 0.269 range and private LB 0.273</li> <li>SGD and Ridge, both on One Hot Encoded data. For Ridge I used a sigmoid to transform the decision_function.</li> </ul> <p><strong>Ensembling</strong></p> <p>I used a linear stacker of my own that swaps meta features to maximize/minimize a given metric (and no intercept). And yes, weights can be negative ;-) In my experience, allowing negative weights always gives the best results. The submission was an average over all folds, with a rescaling procedure that uses a LogisticRegression. It scored 0.29085 on local CV and 0.29113 on private LB.</p> <p>XGBoost gave a substantially worse local CV but scored 0.29083 on private LB, so very close to linear stacking.</p>
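<p>The linked predictor_stacker package implements the pairwise weight-swapping described above. As a generic stand-in (not the author's code), out-of-fold blend weights that are free to go negative can also be found with a plain optimizer; the use of scipy and the AUC-based Gini helper below are my assumptions.</p> <pre><code># Generic search for out-of-fold blend weights, negatives allowed (not the author's package).
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def gini_normalized(y_true, y_pred):
    return 2.0 * roc_auc_score(y_true, y_pred) - 1.0

def fit_blend_weights(oof_preds, y):
    # oof_preds: array of shape (n_samples, n_models) holding out-of-fold predictions.
    n_models = oof_preds.shape[1]

    def neg_gini(w):
        return -gini_normalized(y, oof_preds @ w)

    w0 = np.full(n_models, 1.0 / n_models)
    # Unconstrained search, so weights are free to go negative.
    res = minimize(neg_gini, w0, method="Nelder-Mead")
    return res.x

# Usage sketch: weights = fit_blend_weights(oof_matrix, y_train)
#               test_blend = test_preds @ weights
#               (optionally rescale the blend with a LogisticRegression afterwards)
</code></pre>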
Porto Seguro’s Safe Driver Prediction
39th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As easy as an ensemble of XGB, LGB, NN, FFM, RGF and a couple of public kernels.</p> <p>Their scores were 0.284, 0.284, 0.280, 0.280 and 0.283, with the public kernels in the 0.282-0.283 range. The averaging technique was the geometric mean.</p> <p>Even though it was very tough to squeeze anything out of feature engineering, it was a good competition. Thanks to all of you who contributed and competed!</p>
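<p>For reference, the geometric-mean averaging mentioned above is only a few lines; a rank-average variant is shown alongside it since it is a common alternative when model outputs live on different scales. Both assume a list of equally sized prediction arrays; none of this is taken from the author's code.</p> <pre><code># Geometric-mean averaging of several prediction vectors, plus a rank-average variant.
import numpy as np
from scipy.stats import rankdata

def geo_mean(preds_list, eps=1e-15):
    stacked = np.clip(np.vstack(preds_list), eps, 1.0)  # guard against log(0)
    return np.exp(np.log(stacked).mean(axis=0))

def rank_mean(preds_list):
    # Rank averaging: useful when model outputs live on different scales.
    ranks = [rankdata(p) / len(p) for p in preds_list]
    return np.mean(ranks, axis=0)
</code></pre>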
Porto Seguro’s Safe Driver Prediction
39th solution utilizing Pr(is_test|X) by adversarial validation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used MulticoreTSNE with default settings, which is way faster than the sklearn implementation. Train and test were concatenated beforehand.</p> <p>The val set is simply split to be roughly the top 30% of the train data by Pr(is_test|X). </p> <pre><code>X_val = X_train[prob_is_test &gt; th]
X_train_ = X_train[prob_is_test &lt;= th]
y_val = y_train[prob_is_test &gt; th]
y_train_ = y_train[prob_is_test &lt;= th]
</code></pre> <p>Thank you to my teammate Maxwell, the Kaggle admins, Porto Seguro, and all participants of this competition. I would like to share our 39th solution, which utilizes Pr(is_test|X) produced by adversarial validation for various purposes.</p> <h2>Basic insight</h2> <p>From an early stage of this competition, we were aware that the train and test data are quite different. One clue is the subtitle of this competition: "Predict if a driver will file an insurance claim <strong>next year</strong>." Yes, the test data may be that of next year! <br> Actually, t-SNE shows that there is an unignorable difference between train and test. You can see unseen clusters in the test set below. Even though adversarial validation with only the original data could not distinguish between train and test at all, adversarial validation with t-SNE features could clearly separate them at a ROC-AUC of 0.89.</p> <p><a href="https://github.com/nejumi/images/blob/master/images/tsne.png">t-SNE</a> <br> <a href="https://github.com/nejumi/images/blob/master/images/prob_is_test_refined.png">adversarial validation with t-SNE</a></p> <h2>Our approach</h2> <p>Our approach consists of two predictions. One is a relatively conservative prediction based on adversarial validation. The other is a relatively aggressive prediction based on intensive rank averaging.</p> <h3>Part A: Prediction based on adversarial validation</h3> <p>We split the val set based on the probability of "is_test" (= Pr(is_test|X)) and conducted feature selection and model selection. This validation system well reproduced the situation we experienced on the Public LB. Moreover, we created sample_weight from this Pr(is_test|X). By using this sample_weight for fitting, data points similar to the test data can be weighted more heavily. </p> <h3>Part B: Prediction based on CV</h3> <p>We conducted CV with lots of models and various random seeds so as not to overfit to a certain fold. This is similar to the "kagglemix" kernel. With enough ensembling, we could trust the CV score to some extent.</p> <h3>Weighted averaging based on Pr(is_test|X)</h3> <p>We conducted weighted averaging of the two predictions based on Pr(is_test|X). For data points which are similar to the training data, the kagglemix-like aggressive prediction gets more weight. On the other hand, for data points which are far from the training data, the conservative prediction from adversarial validation gets more weight.</p> <p>Final Prediction = Coeff * (Part A) + (1 - Coeff) * (Part B), <br> where Coeff = alpha * (Pr(is_test|X) + beta); alpha and beta were determined experimentally.</p> <h2>Our hypothesis</h2> <p>Why does such a difference appear? 
Of course, one reason is the time-series nature implicitly contained in this data set. In addition, we want to propose the following hypothesis for further investigation.</p> <ul> <li>There is a huge gap in data size between train and test. In the insurance business, such remarkable growth cannot be expected in the usual way, especially for a large company such as Porto Seguro. </li> <li>Porto Seguro acquired AIG car insurance in 2016. This event may be the reason for the large increase in data size.</li> <li>There could be some difference in these two companies' portfolios. Adversarial validation might have detected this difference with the help of manifold learning.</li> </ul> <h2>Things which we could not do</h2> <ul> <li>Semi-supervised learning: we confirmed the effectiveness of semi-supervised learning in our validation system. Maybe it can catch future trends in the test data. We did not have enough time to create predictions for the test data.</li> <li>More investigation of unsupervised learning: it is surprising to us that t-SNE could catch structure in the data which cannot be detected by supervised learning such as XGBoost. We should have spent more time digging into this phenomenon.</li> <li>We could not get gold medals. We will try again!</li> </ul> <p>Thank you for giving me this important information. I'll be careful when using t-SNE and other algorithms. Although even MulticoreTSNE could improve our score in this competition, a precise understanding would require substantially more than that.</p>
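<p>For readers who want to reproduce the core idea, a bare-bones version of the adversarial-validation step could look like the sketch below. It uses LightGBM as the train-vs-test classifier and omits the t-SNE features that made the separation work here; the weighting formula and the 30% threshold in the comments are also assumptions rather than the exact recipe used:</p> <pre><code>
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_predict

def prob_is_test(train_df, test_df, features):
    # Adversarial validation: out-of-fold Pr(is_test|X) for every training row
    X = pd.concat([train_df[features], test_df[features]], axis=0)
    y = np.r_[np.zeros(len(train_df)), np.ones(len(test_df))]
    clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
    oof = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return oof[:len(train_df)]

# p = prob_is_test(train, test, feature_cols)
# sample_weight = p / p.mean()            # up-weight rows that look like the test set
# val_mask = p &gt; np.quantile(p, 0.7)      # top ~30% most test-like rows as the val set
</code></pre>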
Porto Seguro’s Safe Driver Prediction
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have to say, I've been expecting some upward movement on private part of LB, but the final results were really pleasant and surprising! My solution is not very complex, just an average of 1 lgb and 1 nn, built pretty much on the same feature space. Most important parts:</p> <ol> <li>Feature elimination. I dropped all of calc features and ['ps_ind_14','ps_car_10_cat','ps_car_14','ps_ind_10_bin','ps_ind_11_bin', 'ps_ind_12_bin','ps_ind_13_bin','ps_car_11','ps_car_12']. I was excluding them one by one in greedy fashion and checking lgb cross validation score.</li> <li>Hot encoding categorical variables. It helped to reduce noise while getting the splits for most useful categories.</li> <li>For NN model it was also necessary to hot encode numeric features with small number of unique values - ['ps_car_15','ps_ind_01','ps_ind_03','ps_ind_15','ps_reg_01','ps_reg_02'] (without dropping the original ones)</li> <li>Regularized models. lgb_par = {'feature_fraction': 0.9, 'min_data_in_leaf': 2**4, 'lambda_l1':10, 'bagging_fraction': 0.5, 'learning_rate': 0.01, 'num_leaves': 2**4}</li> </ol> <p>Another thing that I unfortunately haven't explored well is anomaly detection on train+test datasets. Just like less frequent categories (or combinations) of categories are more likely to have label 1, we could find 'strange' samples via unsupervised methods. For example, if we train a basic autoencoder, AUC score of sample-wise reconstruction error would be ~0.60, which is pretty high. I believe more thorough analysis could make this approach really useful. </p> <p>Generally, it's hard to tell what else did not really work or could have worked: almost everything that you will try to do in a competition like this will result in no significant change, no matter whether you did it right or wrong:)</p> <p><code>sellout</code></p> <p>If you want to learn useful ML techniques and competition specific tricks, check out this course <a href="https://www.coursera.org/learn/competitive-data-science/home/welcome">https://www.coursera.org/learn/competitive-data-science/home/welcome</a> from experienced kagglers like KazAnova and me.</p> <p><code>/sellout</code></p> <p>See my response about structure below. As for the tricks, this is really it, nothing more.</p> <p>Thank you! What kind of parameters do you mean? If stuff like tree depth and row subsample ratios - I just tune them manually on holdout set. There is no specific code for it, I change parameter and rerun the cell:)</p> <p>Hi, no, it didn't work. Even CatBoost couldn't scrape anything from categorical features.</p> <p>5-fold CV with fixed seed, nothing out of ordinary.</p> <p>Well, normally you don't need to deal with numerical variables in specific ways, just scale them and use as is. I think that onehot helped lgb, because it allowed to reduce the noise a bit. Imagine that only 1 category is really important: then tree on labelencoded variable will use 2 splits and grab some noise while doing so, while in OHE case tree will simply choose good one with 1 split.</p> <p>3-layers, 4096-1024-256, with dropout of 0.5 between them. 
For regularization purposes the 1st big layer has only 2% of nonzero weights. Batch size was 1024, with Nesterov momentum for the optimizer. As I checked, the architecture is not very important - I got very close results with a single layer network: 1024 units with 10% nonzero weights and then dropout of 0.5. So the right preprocessing and regularization do the trick.</p> <p>I removed the categorical ones, but kept the numerical ones, because it wasn't clear exactly which numerical features were better with ohe.</p> <p>It was 80 epochs.</p> <p>Yes, it's important to average a few runs, otherwise the process might fall apart because of randomness. And by the way, fixing the random seed makes sense only when everything else doesn't change, i.e. both data and parameters. About dropping a feature that is seemingly useless, I'd say it depends on the situation and definitely requires some analysis. For example, you can try changing cv-split seeds and check if it's useful again. </p> <p>The process is really simple. 1) At first you have some feature set, and you run a cross validation with those features to get score S0. 2) Exclude only feature1 from the feature set and run cv again to get score S1, ..., exclude only featureN from the feature set and get score Sn. 3) Completely remove the feature that corresponds to the best score from S1,...,Sn. 4) Repeat 1-3 until the score stops improving.</p> <ol> <li>I didn't use seeds. Data was already shuffled, so for cross validation I used 5 consecutive blocks. When learning the model I always average the results of 4-8 runs with different seeds to ensure the stability of the results. My lgb version is 2.0.10.</li> <li>Straight number of epochs. Just like for lgb, I average several runs with different random seeds.</li> <li>In every framework a fully connected (or linear) layer is implemented via multiplication of the input batch by a weight matrix. You can add one more multiplication by a fixed matrix consisting of 1s and 0s in order to nullify the weights (see the sketch after this list).</li> <li>I use PyTorch.</li> </ol>
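<p>A minimal PyTorch sketch of the fixed 0/1 weight mask described in the list above is shown here; the random choice of which weights to keep, the ReLU activations and the single-logit output are my assumptions, while the 4096-1024-256 layout, the 2% density and the 0.5 dropout come from the description:</p> <pre><code>
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    # Linear layer whose weights are multiplied by a fixed random 0/1 mask,
    # so only about `density` of them can ever be non-zero
    def __init__(self, in_features, out_features, density=0.02):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        mask = (torch.rand(out_features, in_features) &lt; density).float()
        self.register_buffer("mask", mask)   # fixed, never trained

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight * self.mask,
                                    self.linear.bias)

def build_net(n_inputs):
    # 4096-1024-256 with dropout 0.5 between the layers
    return nn.Sequential(
        MaskedLinear(n_inputs, 4096, density=0.02), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 1024), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(256, 1),
    )
</code></pre> <p>Training would then pair this with a binary cross-entropy loss on the logit and something like <code>torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True)</code>; the exact learning rate and momentum values are assumptions.</p>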
Porto Seguro’s Safe Driver Prediction
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Apologies for the delay -was flying ;)</p> <p>This competition was another great example for why kaggle is...well great. There was so much sharing and valuable information in kernels/forums that I am sure, irrespective of our ranks, everybody learnt from it :)</p> <p>Congrats to Jahrer for winning single-handedly the most popular kaggle competition ever! congrats to all winners in general and utility for <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44608">a simple and neat 3rd place solution</a> . As he pointed out, you can find more information about <a href="https://www.coursera.org/learn/competitive-data-science/home/welc">how to do well in competitions here.</a> </p> <p>Last but not least, I would like to thank my teamates Mario and Mathias for all their efforts in this competition . Now that credits are done (:)) , lets go to the solution.</p> <h1>cv strategy</h1> <p>We spent half of this competition trying to figure out why the miss-match between train and test performance. In the end we may have lost valuable time with this, because all are models were inline with private leaderboard. In any case I can conclude that Strattified cv was somehow off than other approaches. We tried 2 other schemas:</p> <ol> <li>Random 5 fold-cv with a seed (=0, original order) that <a href="https://www.kaggle.com/mariofilho">Mario</a> pointed that results were consistent . After 5 folder were done, full model is trained on the whole data and test predictions are with that model. We would still see a varying up to -0.005 gap from LB for certain models that were consistent in private. </li> <li>20 kfolds with 2 splits each. We averaged the results of all these 20 fold runs. So test predictions were made by averaging (20x2) 40 models. These models had smaller gaps with NNs and bigger gaps with tree-based models at LB </li> </ol> <p>We saw (the few) nns (we built) to score better in LB with (2), but tree-based models (like lightgbm and xgboost better with (1)</p> <p>We built 2 ensembles, one with models of (1) and one with models of (2) and we (rank) averaged them</p> <h1>Model</h1> <p>we recreated all popular (0280+) public kernels and fed them into these 2 cv schemas - there was no time for much improvisation. </p> <p>Sadly, we did not try many NNs until the last 2-3 days, so most of our scores are basically lightgbms and xgboosts. </p> <p>It seems that best transformations were dummies for categorical and everything else as is, target encoding was not adding much to this. For NNs, we managed to get one with 0.281 public LB after using these <a href="https://www.kaggle.com/scirpus/big-gp/">Genetic algorithm features</a> (but removed all coefficients, divisions, tanh), using only the sums and a very simple 1-layer architecture with relu. Numerical features were scaled with Standard scaler and <code>np.log1p</code></p> <p>We briefly tried the NNs with embeddings , but did not work for us and did not have much time to tune them</p> <p>in Addition to the public kernels, we were using other loss functions, specifically <code>regression</code> with lightgbm. 
CV-wise, some of the regression objectives (<code>huber, fair, regression</code>) were scoring better than the standard <code>binary</code>.</p> <p>We built around 100 models with varying parameters, loss functions, etc. Some had our own feature engineering, maybe 1 or 2 extra feature interactions.</p> <p>Some models came from StackNet too; for instance, the <a href="https://github.com/kaz-Anova/StackNet/blob/master/parameters/PARAMETERS.MD#libffmclassifier">libffm model</a> added well to the blend.</p> <h1>Stacking</h1> <p>Initially nonlinear stacking was failing and a linear blender (after rank transformations) was giving better results, BUT Faron did find a simple NN architecture that worked better and gave +0.001 on top of the linear blender (consistent with LB, public and private). We built 2 ensembles with the same models (just a different cv strategy) and averaged the results. The gain was not too big on the private LB; we should have explored different models instead of spending so much time there.</p> <p>I have asked Faron to post more information - I don't even know what he did :)</p>
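<p>For illustration, the final (rank) averaging of the two ensembles mentioned above can be sketched as follows; the DataFrame layout with id/target columns and the equal weights are assumptions:</p> <pre><code>
import pandas as pd
from scipy.stats import rankdata

def rank_average(frames, weights=None):
    # Rank-average a list of prediction DataFrames that share id/target columns
    if weights is None:
        weights = [1.0] * len(frames)
    out = frames[0][["id"]].copy()
    blended = 0.0
    for frame, w in zip(frames, weights):
        blended = blended + w * rankdata(frame["target"]) / len(frame)
    out["target"] = blended / sum(weights)
    return out

# final = rank_average([ensemble_cv_scheme_1, ensemble_cv_scheme_2])
</code></pre>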
Porto Seguro’s Safe Driver Prediction
Keep It Simple (21st solution & *mea culpa* for missing gold)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I still don't understand how it's possible to have a consistent difference between CV and public LB scores (except by chance) if your tuning process involves feedback from your CV score (unless there is an equal amount of feedback from LB scores, but that's unlikely given the submission limitation). Some of the improvements in your CV score from the tuning process will be real improvements (correction of underfits), and some will be illusory (overfits), and you can't easily tell which, but only the real improvements will affect your LB score, so your CV score will rise faster.</p> <p>If you <em>have</em> a trustworthy CV scheme (as apparently you did and we didn't), then you should trust it. But how do you know if it's trustworthy? And if you develop a scheme that you feel confident about, then why wouldn't you trust it? (You trust it by definition.) But the problem is (1) developing such a scheme, which is difficult, and (2) deciding whether you have succeeded. That decision comes with both type 1 and type 2 errors. But the way people assign blame for those errors is asymmetrical.</p> <p>I didn't say our best private model is also the best public LB model. What I said is that the best public LB model almost got gold. That was one of our final selections, and it did better than the other one, which had a higher CV score. Our best private LB model was an unweighted rank average, but <em>we didn't select it</em>, because we thought either the high-public-LB-score models or the high-CV-score models would do better. Part of the problem is that we never came up with any validation scheme for the rank averages, so (since they had lower public LB scores then the probability-weighted models and no CV score at all), they were never in the running to be final selections. They might have had higher CV scores if we had validated them. Or maybe not: you're right that we didn't have a reliable CV scheme. Or rather, we had several CV schemes that might have been individually reliable in different contexts but weren't comparable. I went through the process of developing CV schemes more than once, but the game changes as you go along, and if you want to take advantage of new information, you find that the old schemes don't apply.</p> <p>After every competition, it seems, a bunch of people make sad posts about how they should have trusted their CV instead of choosing the submissions with the best public LB scores. This competition is no different. But my own whine will be of a different vintage. This competition has left me still skeptical about the trustworthiness of cross-validation. (The stacking models that produced my team's best CV scores barely made it into the bronze range, while our best public LB score almost got gold. Almost.) What I have learned, though, is that parameters are your enemy. </p> <p>My teammate <a href="https://www.kaggle.com/justfor">Justfor</a> was doing fine with unweighted rank averages (the same method used in Victor Paslay's kernel, but with a better mix of models), and I messed things up by fitting them into weighted (log-odds) averages that allowed us to overfit the public LB. 
I think I had a pretty good method of combining ranks with probabilities (transform the ranks via an inverse logit-normal CDF, which leaves them in the same order but puts them in a metric comparable to the probabilities), but the weights on the probabilities themselves, as far as I can tell now, were already overfit. (We also experimented with weighted rank averages, which produced higher public LB scores separately than plain rank averages but didn't mix well with probabilities, and which would have had lower private LB scores separately.)</p> <p>So I'm going to ignore the 32% of our model that was probabilities and just list the 8 elements in the remaining rank average (which, by itself, would have been #11 on the private LB, at least if you assume everyone else's submissions unchanged):</p> <ol> <li>An XGBoost model from Jeru666's (from <a href="https://www.kaggle.com/jeru666/random-forest-pipeline?scriptVersionId=1634526">version 33</a> of his kernel)</li> <li>A slightly optimized version of Vladimir Demidov's <a href="https://www.kaggle.com/yekenot/simple-stacker-lb-0-284">Simple Stacker</a></li> <li>A slightly optimized version of my <a href="https://www.kaggle.com/aharless/stacking-with-log-odds-and-no-intercept">log-odds version</a> of that stacker</li> <li>A slightly optimized version of my <a href="https://www.kaggle.com/aharless/logistic-of-genetic-features">logistic regression</a> on Scirpus' <a href="https://www.kaggle.com/scirpus/big-gp">big GP</a> features</li> <li>Keui Shen Nong's <a href="https://www.kaggle.com/kueipo/base-on-froza-pascal-single-xgb-lb-0-284">Forza and Pascal</a> XGBoost kernel</li> <li>A Libffm model taken directly from <a href="https://www.kaggle.com/scirpus/libffm-generator-lb-280">Scirpus' kernel</a></li> <li>Bojan Tunguz' <a href="https://www.kaggle.com/tunguz/rgf-target-encoding-0-282-on-lb">RGF kernel</a></li> <li>A neural network model contributed by our mysterious teammate <a href="https://www.kaggle.com/loldataset">loldataset</a></li> </ol> <p>Except for the last (which was helpful but not critical), these were all either taken directly from public kernels or used slight variations thereof. The <a href="https://www.kaggle.com/vpaslay/lb-0-287-porto-seguro-mix">Paslay method</a> (or should we call it the Justfor method?) seems to work quite well if you take some care in choosing and validating (where possible) the base models. </p> <p>Meta-discussion: I see now why quotes sometimes appear with a gt sign rather than as a separate block. There is an apparent bug in the forum software, such that, when you edit a post that has already been posted, it interprets the gt sign literally and renders it as an entity rather than a format element.</p> <p>@NoFreeLunch You <em>can</em> get a CV score for rank mean (or linear blends or whatever) if you do the rank averaging (or linear blending or whatever) separately in separate folds. But it requires more effort, and in theory it requires you to re-fit the base models for each fold, which may not always be possible (for example, if one base model is Scirpus' black box of genetic programming, which we have no way to replicate on our own). To have a full menu of modeling mehods available, sometimes one has to work around these limitations (for example, by adding noise to training set predictions to create pseudo-OOF data).</p> <p>@CPMP:</p> <blockquote> <p>proper CV setting you get feedback from many train/test splits. With public LB you get feedback from ONE train/test split. 
How can you seriously think the latter is better in general? Best is probably to combine both, but certainly not to ignore the CV feedback altogether.</p> </blockquote> <p>I certainly would not advocate <em>ignoring</em> CV feedback. Indeed, I would say, in general, it should be — must be, given submission limitations — more important than LB feedback. I'm not against cross-validation. My <a href="https://www.kaggle.com/aharless/xgboost-cv-lb-284">most popular kernel</a> in this competition was about doing CV properly. (Really, it was more about copying almost every aspect of @olivier's <a href="https://www.kaggle.com/ogrellier/xgb-classifier-upsampling-lb-0-283">brilliant work</a>, but the innovations were to remove a leak in the CV and to save the OOF predictions.)</p> <p>But "combine both" also means don't ignore LB feedback. One popular approach, which I think is reasonable, is to make decisions based on CV feedback but roll back a set of decisions when the LB result (or a separate holdout from the training set) disagrees. That approach amounts to using the LB as a validation set for "early stopping" of whatever you're doing based on CV feedback. And as with ordinary early stopping, the optimal stop often happens at a time when the fit to the training set (in this case, the CV) is already much closer than the fit to the validation set (in this case, the LB). It depends on how the bias and variance interact in the context of the particular data and model.</p> <p>And as I continue to maintain, "trust your CV" is facile advice (in the English sense of the word "facile", not necessarily in the French sense). Every time you try something new, you bring in new potential sources of leakage. And the wider you cast your net for new methods, the more difficult it will be to fit those methods into your CV approach. You can make a point of casting narrowly enough to avoid major new CV issues, but then you miss out on a lot of potentially useful stuff. I don't think it's luck that some people do well without a consistent CV: they're just following an approach that makes cross-validation more difficult but, in their sometimes vindicated view, has advantages to compensate for that.</p> <p>I'm somewhat contradicting myself if I say "parameters are you enemy" but also praise Michael Jahrer for showing that deep learning works with moderate-sized data sets. So how about "parameters are your enemy, but they are an enemy that can be defeated and bent to your will if you have the right weapons, so don't go near them unless you are confident in your arsenal."</p>
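<p>For readers curious about the rank-to-probability transform mentioned above, here is a minimal sketch. The inverse CDF of a logit-normal distribution is expit(mu + sigma * Phi^-1(q)); fitting mu and sigma from the probability model's log-odds, and the 32/68 blend in the final comment, are my assumptions rather than the exact procedure used:</p> <pre><code>
import numpy as np
from scipy.stats import rankdata, norm
from scipy.special import expit, logit

def ranks_to_prob_scale(scores, reference_probs):
    # Map scores to the probability scale via an inverse logit-normal CDF.
    # Order is preserved; only the metric changes.
    q = rankdata(scores) / (len(scores) + 1)            # normalized ranks in (0, 1)
    log_odds = logit(np.clip(reference_probs, 1e-6, 1 - 1e-6))
    mu, sigma = log_odds.mean(), log_odds.std()         # match the reference scale
    return expit(mu + sigma * norm.ppf(q))              # inverse logit-normal CDF

# transformed = ranks_to_prob_scale(rank_average_pred, probability_pred)
# blend = 0.32 * probability_pred + 0.68 * transformed
</code></pre>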
Porto Seguro’s Safe Driver Prediction
My 513th place approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Looks like there is something wrong with Kaggle and my discussion was not saved.</p>
Porto Seguro’s Safe Driver Prediction
Solution 1178 Public / 29 Private
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>What was your CV?</p> <p>Not full time, but it requires planning of experiments. I usually do this in the morning and in the evening, and I check from time to time during the day. Goal is to keep the machines as busy as possible.</p> <p>Thanks.</p> <p>I've been burned twice, in Santander and in Mercedes competitions. Santander was my first, competition so I have an excuse, but in Mercedes I screw a good submission in favor of ones that were in line with the crowd. I ended up at 2100+ rank instead of 30th or so that way...</p> <p>I think that using the same folds split for all models is hurting. But I have not proof.</p> <p>For the second question, you are right, we average more models for test, which I see as a benefit, but it also introduces a disymmetry between train and test. This disymmetry comes from reusing models trained in folds to make test predictions rather than retraining on the full train set. I did test the latter and found it to be less effective. It may be more effective in another dataset.</p> <p>Good to know, thanks. It was shared too in Web Traffic : <a href="https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/43529">https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/43529</a> but that data could be scrapped from the internet anyway.</p> <p>These two competitions are research competitions, which may explain why test data could be shared.</p> <p>You're quite close.</p> <p>Say I have N base models. First for each model, I average all predictions made on the same fold (3 n general), so for each model I have 25 out of fold predictions, and 25 test predictions.</p> <p>For each of my 5 fold seed, I have a 5 fold split of the train data, and 5*N oof prediction for each of these folds. I just concatenate these 5*N fold predictions into N predictions for the full train set, as is usual in stacking, then run Ridge on that.</p> <p>When I'm done for each of the 5 ways to split into olds I average the predictions. Goal is to average across random seeds, whenever there is a seed.</p> <p>I never saw Kaggle share the private dataset. </p> <p>No, but remember, I need to use the same folds when ensembling. 5 fold splits is easier to manage than 15 ;)</p> <p>Thanks! Yes, long time. For xgb I use hist and not exact, which helps a lot. I ran these on a 20 core xeon machine, and training was taking between 6 and 24 hours depending on the input dataset and the objective function. </p> <p>For NN I ran them on a 1080 Ti, but the NNs are small, and the gpu wasn't used at full capacity. I did use very large batch size of 4096 for the factorization models, which made their training faster. For the feed forward NN I used 128, and this one was quite slow to train, almost 2 days for 25 runs.</p> <p>First, let me say you'll learn way more useful things from top teams than from this solution. Like many I'm eagerly waiting for Michael Jahrer magic to be disclosed. But other teams, esp #2 already shared very insightful info. Reading what top teams share is how one can really learn on Kaggle.</p> <p>Second, let me thanks the organizers for this challenging competition. 
My strength is in feature engineering based on understanding of the underlying real world phenomenon, but here, with anonymous features in an industry I don't know, it was very challenging for me.</p> <p>Third, let me thank all those who shared a lot, starting with @Tilii and @Olivier. There are many more, and I wish I could list them all.</p> <p>Back to my own solution, nothing really fancy, mostly XGBoost models, with a few LGBM, RGF, and Keras models. Let's look at the ingredients one by one.</p> <p><strong>Cross Validation</strong></p> <p>It became clear very rapidly to all that there was a huge variation of gini scores among folds, whatever the model used. To me it meant that the public LB was no different, and therefore was not reliable. Relying on the public LB was like optimizing for a single fixed fold throughout the whole competition: just too risky. I therefore focused on decreasing the variance of my models. One great way to do it is averaging. There is some math behind this, known as the central limit theorem. If we assume the gini score is a random variable of unknown distribution and standard deviation S, then each model training creates one instance of that random variable. Then the average over N runs has a std of S / sqrt(N). In order to achieve this we need independence of each random variable, which means that we should not rely on the same random seeds in our algorithms.</p> <p>My CV setting was therefore to run 5 stratified cross validations, with a different fold generator seed each. Each CV was a 5 fold. And I ran 3 model trainings for each fold, again with different seeds. Therefore I train 75 models each time. I then average the out of fold predictions, and the test predictions, over the 75 models. For NNs, given the longer training time, I only ran each fold with one seed.</p> <p>The score for averaged OOF predictions was very well correlated with the public LB, which enabled me to tune parameters and feature engineering effectively, and ignore the public LB till the end. I only checked that public LB and CV evolved the same way, which they always did for first level models. </p> <p><strong>Feature Engineering</strong></p> <p>Very simple, too simple obviously when I see the top teams' solutions. I removed some features as it is another way to decrease variance. I started from @Olivier's feature selection kernel, and removed car_14 and the two calc features he still had. </p> <p>Categories were one hot encoded, numerical features were either left as is, or binned using @Tilii's entropy based binning. When numerical features were not binned I created a binary for their missing value. I trained models using both variants. The only new feature I introduced was counts of the feature interaction for (ind_0 and ind_17_bin). This alone gave me a 0.0005 lift.</p> <p>A last variant was to replace categorical features by embeddings from NN models, following the idea of the <a href="https://arxiv.org/abs/1604.06737">categorical embedding paper</a>. These models performed well on the public LB but a bit less than the ones with original categorical values on the private LB.</p> <p><strong>First level models</strong></p> <p>XGBoost, LightGBM, RGF, and NN. I spent time tuning XGBoost parameters. Parameter tuning gave me at least a 0.004 LB improvement. 
Here is what worked best for me</p> <pre><code>params = { "objective" : "binary:logistic", "eval_metric" : "rmse", "eta" : 0.02, "max_depth" : 5, "min_child_weight" : 6, "gamma" : 10, "subsample" : 0.7, "colsample_bytree" : 0.5, "alpha" : 16, "lambda" : 1, 'tree_method' : 'hist', 'nthread' : 20, 'base_score' : y_train.mean(), 'max_delta_step' : 1, 'scale_pos_weight' : 3 } </code></pre> <p>Idea for min_child_weight and scale_pos_weight is to go as deep as what can isolate at least two claims. The rest is heavy regularization. In hindsight, the need for such heavy regularization should have led me to consider linear models more seriously.</p> <p>I trained xgb and lgb models with several objectives, logistic regression, regression, and poisson.</p> <p>For NNs I used 3 architectures. One is a feed forward model with 3 levels, a second one is an extension of a libffm model , and the third one is an extension of a libfm model. I'll share the libffm and the libfm models in more detail later.</p> <p><strong>Ensembling</strong></p> <p>I was fearing overfit like hell, and went for a very (too) conservative approach. I use a weighted combination of models, but restricted weights to be positive. I tried various ways to do it, and best for me was to use Ridge regression It gave results similar to using scipy minimize() as proposed by @FabienDaniel for a tiny fraction of the time. Maybe using scipy minimize would have got me few place higher, but I did not have time to run it in the last day of the competition. Allowing negative weights was better at first, but led to overfit when the number of base models increased.</p> <p>Given Ridge implementation in sklearn does not support positive weights, I iterated: run it, remove the model with the largest negative coefficient, and repeat until all coefficients are positive. I ran Ridge in the same 5 stratified 5 folds as for first level. To be clear, for each fold seed, I ran Ridge on the oof predictions corresponding to the same folds, i.e. I ran a Ridge selection process 5 times, one for each fold seed. Using different folds at level 1 than level 0 overfit for me. That was the bug I have been chasing in my ensemble till I found it the last day. It is ironic I made that mistake because I <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43467#245043">explained why one should not use different folds when ensembling</a></p> <p><strong>What did not work</strong></p> <p>I noticed that there was a correlation with rows that were quite different from the rest. I tried to capture that in various ways, using knn, isolation forrest, gaussian mixture models, clustering, but they all led to overfit. Just using isolation forrest in particular yield a gini score above 0.23 which is quite amazing to me. I'm sad I didn't manage to use it effectively.</p> <p>I spent too much time on XGBoost instead of looking at other models less prone to overfit.</p> <p>Lambda rank was disappointing, both for xgboost and for NNs, I didn't use any resulting model in my ensemble.</p> <p><strong>Public Kernels</strong></p> <p>I decided to not use them after I saw a few very popular kernels that were obviously overfit to public LB. This was too extreme as several kernels proved to be very useful to others. I did reuse some of @Tilii and @Olivier's code though, but I should have looked at more kernels.</p> <p><strong>Final words</strong></p> <p>Trust what you do. Here one has to trust local CV. Beware, this may not be the case in other competitions. 
</p> <p>I can't say I was not impacted by my rank sinking by more than 1000 during the last weeks of the competition, and my inability to reach 0.287 on the public LB, but I was convinced people were overfitting to the public LB en masse. I'm very happy to have been right on this.</p> <p>I'm not based in Paris, but I go there from time to time for my job. Why not have a meetup then?</p> <p>You got it right.</p> <p>I tried LR, it was not as good for me. And I had to use balanced class weights to make it produce decent results.</p> <p>Fold seeds were 0, 1, 2, 3, 4 ;) The seeds should not matter much as soon as you average enough runs, thanks to the central limit theorem. I find it very dangerous to start selecting random seeds as this introduces a bias that is impossible to remove.</p> <p>Manually ;)</p> <p>Right, I should have said I used LR as a filter for feature engineering at the first level. If I saw no improvement in my LR model I didn't proceed with training more complex models.</p> <p>Thanks Gilberto, I know I learned a lot from your past top solution sharing, and hope I reused some of it wisely here ;)</p> <p>Thanks for the kind words!</p> <p>...</p> <p>There is no wisdom of the crowd here.</p> <p>I tried both, and what worked best in the end was rank. To be clearer, if I only have tree ensemble models (xgb, lgb, rgf), using raw predictions was best. But to blend with NN I had to use ranks.</p> <p>Glad to see you back, you had a happy surprise didn't you? Well deserved, congrats.</p>
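<p>The "run Ridge, drop the model with the largest negative coefficient, repeat" loop described above can be sketched in a few lines; the alpha value and the guard against dropping the last model are assumptions:</p> <pre><code>
import numpy as np
from sklearn.linear_model import Ridge

def ridge_positive_weights(oof_matrix, y, alpha=1.0):
    # Refit Ridge on the out-of-fold predictions, removing the model with the
    # most negative coefficient, until all remaining coefficients are positive
    kept = list(range(oof_matrix.shape[1]))
    while True:
        model = Ridge(alpha=alpha)
        model.fit(oof_matrix[:, kept], y)
        worst = int(np.argmin(model.coef_))
        if model.coef_[worst] &gt;= 0 or len(kept) == 1:
            return kept, model
        kept.pop(worst)

# kept, blender = ridge_positive_weights(oof_preds, y_train)
# test_blend = blender.predict(test_preds[:, kept])
</code></pre>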
Porto Seguro’s Safe Driver Prediction
Solution Sharing
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations Michael Jahrer, Début_Kele Team and utility. Congrats to all other top finishers as well. Really interested in knowing the top solutions.</p> <p>Life is all about learning from failures. Thanks to all in advance for sharing your solutions. :)</p> <p><strong>1st place solution</strong></p> <ul> <li>author:<a href="https://www.kaggle.com/mjahrer">Michael</a></li> <li><a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629">Discussion link</a></li> </ul> <p><strong>2nd place Solution</strong> </p> <ul> <li>author: <a href="https://www.kaggle.com/xiaozhouwang/competitions?sortBy=best&amp;group=active&amp;page=1&amp;pageSize=20">Little boat</a> </li> <li><a href="https://www.kaggle.com/xiaozhouwang/2nd-place-solution-nn-model">kernel</a></li> </ul> <p><strong>3rd place Solution</strong></p> <ul> <li>author: <a href="https://www.kaggle.com/utility">utility</a></li> <li><a href="https://www.kaggle.com/cpmpml">Discussion link</a></li> </ul> <p><strong>Solution 1178 Public / 29 Private</strong> </p> <ul> <li>author: <a href="https://www.kaggle.com/cpmpml">CPMP</a></li> <li><a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44614">Discussion link</a></li> </ul>
Porto Seguro’s Safe Driver Prediction
Taylor-made NN for 0.285 PLB (part of solution of 8º)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Michael, congratulations for your well deserved first position.</p> <blockquote> <p><strong>Oscar Takeshita wrote</strong></p> <blockquote> <p>Beautiful! It was my post about NN being behind. I was seeking reasons for that and it seems the reason was lack of creativity ;) What's the number of neurons used in the layer or what method do you use to tune their size? Did you regularize and if yes what type of regularization and why? </p> </blockquote> </blockquote> <p>I used l1 and l2 regularization on the encoding. It helped to fight the curse of dimensionality. I have added the values to the first post. <br> Dropout was used on the classifier. <br> The values I used were chosen after running extensive optimization search. </p> <blockquote> <p><strong>den3b wrote</strong></p> <blockquote> <p>So pretteh!!</p> </blockquote> <p>Question, where in the DNN did you put the dropouts?</p> </blockquote> <p>Only on the last part, on the classifier</p> <blockquote> <p><strong>Roberto Spadim wrote</strong></p> <blockquote> <p>details details details =) NN layers activation functions dropouts and others layers specifics details :)</p> </blockquote> <p>congrats! very nice solution</p> </blockquote> <h3>More details about the architecture</h3> <ul> <li>Tanh activation was used on the embedding, relu on the classifier and sigmoid on the output</li> <li>Dropout was applied only to the classifier, not the embedding</li> <li>Regularization l1 and l2 were applied to the embedding</li> <li>Number of units and layers is the following:</li> </ul> <p>'encoding_conf': { 'reg_categorical': 3.0, 'car_categorical': 1, 'ind_categorical': 50.0, 'ind': 70.0, 'car': 35.0, 'reg': 34.0}, 'n_layers': 1, 'n_units_per_layer': 90.0, </p> <blockquote> <p><strong>CPMP wrote</strong></p> <blockquote> <p>I get that, and I used the same. That's not my question. You have 3 concatenate layers in parallel, why not have one concatenate layer?</p> </blockquote> </blockquote> <p>Ok. Know I understand. I tried what you say but it worked worse. I think having 3 layers in parallel allows to do a better fine tuning of the size of the layers and assign to each type of feature (car, region and driver) the importance that it has for the prediction. </p> <blockquote> <p>Thanks for sharing, and congrats. I have a question: would your network be the same if you concatenate everything at level 2 instead of keeping 3 different concatenate? That's where my NN differ from yours. Now that I ask I see you can have different regularization/dropout/normalization for each of the three whereas I could not.</p> </blockquote> <p>Hi, The idea of using different levels is that categorical features are fed into the network using one-hot encoding. So their dimensionality is very high and it's necessary to use a different embedding to avoid the curse of dimensionality.</p> <p>Once that the competition has ended I wanted to share with you one of the parts of our ensemble. 
In fact it was the model with higher score of the ensemble, so it was funny to read in the forum that neural networks did not worked well in this competition ;)</p> <h2>Neural Network model</h2> <p>I use the library Keras for building and training the NN. The architecture is the following.</p> <p><img src="https://i.imgur.com/T3SsyPD.png" alt=" " title="NN architecture"></p> <p>The input is divided in 3 parts: reg, car and ind. Moreover each part is divided between categorical and numerical input. For each different input the network learns an encoding, and those encodings are combined later to produce the prediction. </p> <p><strong>The idea is that instead of working on feature engineering (which I'm not good at it) leave the network learn the best encoding for the input features.</strong></p> <p>The dataset was small so I could run careful optimization of the parameters of the network: </p> <ul> <li>Number of units on each layer/embedding</li> <li>dropout</li> <li>regularization</li> <li>lr</li> <li>....</li> </ul> <h3>Training strategy</h3> <p>I used simple KFold for cross-validation with 5 folds. For making the submission I trained with up to 20 different seeds, and averaged the predictions on the test set. That makes a total of 100 averaged models.</p> <p>Also I observed that the model was very prone to overfitting, in fact it usually reached the best score on epoch 0 or 1. So instead of using the whole dataset on each epoch I trained with smaller portions and that allowed to find a better stop point of the training. </p> <p>I used oversampling of positive samples with a ratio of 25.</p> <h3>More details about the architecture</h3> <ul> <li>Tanh activation was used on the embedding, relu on the classifier and sigmoid on the output</li> <li>Dropout was applied only to the classifier, not the embedding</li> <li>Regularization l1 and l2 were applied to the embedding</li> <li>Number of units and layers is the following:</li> </ul> <p>'encoding_conf': { 'reg_categorical': 3.0, 'car_categorical': 1, 'ind_categorical': 50.0, 'ind': 70.0, 'car': 35.0, 'reg': 34.0}, 'n_layers': 1, 'n_units_per_layer': 90.0, </p> <h3>Notebook with the code</h3> <p><a href="https://www.kaggle.com/ironbar/taylor-made-nn-for-0-285-plb-part-of-8/">https://www.kaggle.com/ironbar/taylor-made-nn-for-0-285-plb-part-of-8/</a></p> <hr> <p>Feel free to ask details.</p> <p>Thanks to MSChuan for being such a good team mate. </p> <p>&gt; <strong>Tilii wrote</strong> &gt; &gt; &gt; Thanks for sharing a truly original approach. An old cliche that I can't upvote you enough certainly applies here. I hope you get around to publish your code at some point. &gt; &gt; You have taught me a new way to fish!</p> <p>Thanks Tilii, <br> I have just uploaded the notebook I used to train the model. <br> <a href="https://www.kaggle.com/ironbar/taylor-made-nn-for-0-285-plb-part-of-8/">https://www.kaggle.com/ironbar/taylor-made-nn-for-0-285-plb-part-of-8/</a></p> <blockquote> <p><strong>Kruegger wrote</strong></p> <blockquote> <p>Thank you. And how many epochs did you get in the result?</p> </blockquote> </blockquote> <p>I think I splitted the epoch into 50 slices, and it could take about 75 miniepochs...</p> <blockquote> <p><strong>Kruegger wrote</strong></p> <blockquote> <p>Thank you for sharing. Some questions:</p> </blockquote> <ol> <li>What framework did you use? Keras/TF?</li> <li>Did you use early stopping or straight number of epochs?</li> </ol> </blockquote> <p>I use Keras with Theano backend. 
I used early stopping, moreover the validation score was really noisy so I made frequent evaluations of the validation set to ensure stopping at a good point.</p> <blockquote> <p><strong>Hossein Amirkhani wrote</strong></p> <blockquote> <p>Congratulation! One question: why you used tanh activation on the embeddings instead of ReLU?</p> </blockquote> </blockquote> <p>I run an optimization algorithm using the following activations for embedding: relu, tanh, sigmoid. <br> Tanh gave the best results.</p> <blockquote> <p><strong>Lorenzo Ridolfi wrote</strong> 1) What treatment you did for the missing values? In the code you shared the 'DatasetCleaner" code isn't available.</p> </blockquote> <p>I treated each feature independtly. If the number of missing values was small they were replaced using the most frequent feature value. Otherwise a new category was created for the missing values.</p> <blockquote> <p>2) When you oversampled the positive samples did you add some kind of noise?</p> </blockquote> <p>I did not use noise</p> <p>Usually in one epoch you train with the whole train dataset. Instead of that I train with a random 1/50 subset on each epoch. But I use the whole train set.</p>
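<p>Putting the pieces above together, a condensed sketch of the branch-per-group architecture could look like the code below. It is written against the modern tf.keras API rather than the original Keras/Theano setup; the input widths, the dropout rate, the l1/l2 strengths, the optimizer and the final plain concatenation of the three groups are assumptions; only the embedding and classifier unit counts follow the encoding_conf quoted above:</p> <pre><code>
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def encoder(width, units, name):
    # One encoding branch: dense tanh embedding with l1/l2 regularization
    inp = keras.Input(shape=(width,), name=name)
    enc = layers.Dense(units, activation="tanh",
                       kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-5))(inp)
    return inp, enc

# (input width, embedding units) per block; widths are placeholders, the units
# follow the encoding_conf values quoted above
conf = {"ind_categorical": (60, 50), "ind": (12, 70),
        "car_categorical": (180, 1), "car": (16, 35),
        "reg_categorical": (20, 3),  "reg": (3, 34)}

inputs, groups = [], []
for group in ["ind", "car", "reg"]:
    cat_in, cat_enc = encoder(*conf[group + "_categorical"], group + "_cat")
    num_in, num_enc = encoder(*conf[group], group + "_num")
    inputs += [cat_in, num_in]
    groups.append(layers.Concatenate()([cat_enc, num_enc]))  # per-group concat

merged = layers.Concatenate()(groups)  # combine the three group encodings

# Classifier: one hidden layer of 90 relu units, dropout, sigmoid output
x = layers.Dropout(0.3)(layers.Dense(90, activation="relu")(merged))
out = layers.Dense(1, activation="sigmoid")(x)

model = keras.Model(inputs, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
</code></pre>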
Porto Seguro’s Safe Driver Prediction
genetic algorithm solution (20th place) - very long read
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>kernel: <a href="https://www.kaggle.com/jacekpoplawski/genetic-algorithm-main-part-20th-place/">https://www.kaggle.com/jacekpoplawski/genetic-algorithm-main-part-20th-place/</a></p> <p>It will be long post, but maybe someone will find it interesting - I want to show you my journey with this competition and I think my solution is quite different.</p> <p>No ensembling, no OHE, no manual feature engineering. I have more than 1500 different models. I could use just 3 of them to get higher score I have now.</p> <h2>Intro</h2> <p>I am a C++ programmer with over 13 years of commercial experience, I had big love for AI and Alife in 90s, but it took me some time to realize "Deep Learning" means Neural Networks. This year I started doing courses on udemy, then Andrew Ng courses in coursera. Then I found Kaggle.</p> <p>It was my second competition. I have spend lots of time on Porto Seguro. In the last weeks I was sure this time was wasted, because my models were not really better than average model from public kernels. I was shocked yesterday when I realized I jumped over 1000 places up and I am 20th. I will try to describe what I tried and I what i learned. Please correct me if I am wrong somewhere.</p> <p>I removed some stories related to my Neural Network approach or different variances of genetic algorithm to make it more focused and less boring ;)</p> <p>I will probably publish some kernel in next days, I will try to choose single one with good private score (not sure will it be calculated for new kernels?)</p> <h2>First approach - build basic subset of features</h2> <p>According to discussions on forum some features are just noise. According to xgb feature importance some features are very important. How can we check which features should be used?</p> <p>I think simply testing random forest or xgboost feature importance is not a good idea. If you have some score and you will look on feature importance you should not assume that by removing some low-important feature your score will be higher. You can remove feature which is not used at all, or you can be sure that some feature is big part of your model. But features can also hurt your model and feature importance won't tell you that.</p> <p>Instead we should build model with different feature subsets and compare which one has best score. How many combinations we have?</p> <p>len(train_data.columns) - 2 = 57</p> <p>If I am correct you can calculate it this way: each feature can be part of model or not. So it will be 2^57 minus 1 or maybe 2^56 minus 1. Anyway it's a quite big number.</p> <p>We can't expect to check them all. So how to find good subset? </p> <p>We know that ps_car_13 is important. I wasn't able to run xgboost with single feature so we need at least two. Let's do following:</p> <ul> <li><p>take one feature</p></li> <li><p>build model from ps_car_13 and this feature</p></li> <li><p>calculate score</p></li> <li><p>repeat until all features are processed (each feature plus ps_car_13)</p></li> </ul> <p>So now we have two features. Let's move forward and find third one. Then fourth. 
Etc.</p> <p>Then we have n=15 features with some nice score close to 0.270. Now we can try opposite way:</p> <ul> <li><p>remove one feature from the model</p></li> <li><p>calculate score</p></li> <li><p>repeat until all features are processed (each model is n-1 features)</p></li> </ul> <p>And again we can find feature which removed gives highest score.</p> <p>By executing this algorithm we can find nice subset of features. Later in the discussions I found Boruta algorithm which different but quite promising too.</p> <h2>Hyperparameters tuning</h2> <p>To create a model you need to execute some algorithm - like xgboost. And while this algorithm works on dataset it is also configured by many hyperparameters. In general you need make model as complex as needed but no more, because it will overfit, or as simple as possible but not more, because it won't be able to achieve good score.</p> <p>I tried hyperparameter tuning multiple times and the best way in this competition I found was step by step random search. </p> <ul> <li><p>start with default hyperparameters</p></li> <li><p>set random depth - random.randint(3,7) for instance</p></li> <li><p>calculate score</p></li> <li><p>very important: save hyperparameters and score to external file, write separate jupyter notebook to display scores and plot curves or scatterplots</p></li> <li><p>repeat infinite times, manually break loop when you will be happy with graph</p></li> </ul> <p>It is very tempting to skip plotting and just look on numbers on the screen. After I started using plots I never stopped. You just see on plots what's going on, while you can be fooled by looking on raw numbers.</p> <p>Depth is most important parameter because it defines how complex your tree will be. Then you need to deal with overfitting by using lambda and alpha. Now plots are extremely useful. Then you should also try to reduce number of features used in your trees and part of data. In xgboost this is colsample and subsample. There is also nice setting for unbalanced datasets - scale_pos_weight.</p> <h2>LightGBM</h2> <p>I didn't know what LGB is. It was word used in discussions and kernel titles but it was similar to LB so it wasn't clear to me what's the difference between LB and LGB ;)</p> <p>When I was starting with Kaggle I wanted to use NN only. Then I realized xgboost is only way to go. When I looked at some kernels I have noticed that people build xgb and lgb models and blend them. So I wanted to try it just to see will there be any difference.</p> <p>When I was calculating my models with xgb I realized it uses only small part of CPU. Why? Maybe because it's Windows or because it can't use threads, I don't know. So I was using multiple jupyter notebooks with xgboosts and it was working. Well maybe except I needed to fit my models in memory. How to fit multiple models in memory? Don't use many features. That was main reason I stopped using One Hot Encoding.</p> <p>Using lgb was very easy. I could use same code as for xgb, just changed few lines - create data for lgb instead xgb and then call train with different hyperparameters. Was the score much different? I seriously have no idea, because the first thing I've noticed was CPU usage. LightGBM is using my whole CPU and it's much much faster.</p> <p>I submitted some xgb and lgb data with LB 0.280 or 0.281 and soon I stopped using xgb at all.</p> <h2>Target encoding</h2> <p>My first Kaggle competition was NYC Taxi. The target was trip duration between two points on map. 
There was very interesting idea for feature engineering in this competition. You can do some clustering and then find average trip duration between two clusters. Or better, not just trip duration, but calculate feature called speed which is distance divided by trip duration. The first kernel which uses similar idea I saw here was Olivier's kernel with target_encode method.</p> <p>At first I didn't know why he used some calculations inside this method. There is some min_samples_leaf and smoothing. Was it some magic? Well, I did what I learned previously - I plotted it. Then I understood everything.</p> <p>Target encoding means that you group data with the same column value, then you calculate average target and use this average target as new column. This method can lead to overfitting when used incorrectly. For instance when number of records with given column value is very small your new column will store just target value. So algorithm will try to predict target from target. This is pointless and won't work at all with test data. So you need to be sure that number of records is big enough, that's what min_samples_leaf is. So what should you store in this new column when number of records is small? Just average value of target in whole dataset. It won't hurt. Result is just blend of average global target value and average target value for this column value.</p> <p>And now the most important thing. When you open public kernels you will notice little nice method called add_noise. What it does? It does absolutely nothing. Please go to kernel sections and check yourself. Maybe I missed some kernel where it is used, or maybe I am wrong (I never copied this code, I wrote it from scratch) but if I understand correctly everyone just copied add_noise which is called with argument equal to zero.</p> <p>When you are using target encoding on the same data you are doing your training you must use noise. Without noise the result will be bad. I compared noise_level = 0.1 then noise_level = 0 then noise_level = 0.5 and larger. At first it was counter intuitive. How can large noise be useful? It can. If your data is 0.1, 0.2, 0.3 and you add large random noise and result will be 0.7 0.4 0.9 - you still have 3 different values which can be used by tree algorithm, and you have no overfitting.</p> <p>(this is very important to understand, I can draw some table if needed)</p> <p>But now you may ask - I took one categorical value and converted it with complex algorithm into another categorical value, what's the difference? </p> <p>Because I can use more than one categorical value! I can combine them! Back to NYC Taxi. In this competition we were using 4 values (start/end trip points) for target encoding. So why not use this method in Porto Seguro?</p> <p>And now the madness starts.</p> <h2>feature engineering on steroids</h2> <p>How many new features can you create from some numeric data you don't understand at all? A lot.</p> <p>First you can add them. </p> <p>ps_car_10_cat_add_ps_ind_02_cat means you add columns ps_car_10_cat and ps_ind_02_cat</p> <p>You can also sub them, mul and divide.</p> <p>Is there any point in dividing number of apples by number of oranges?</p> <p>I think yes, at least it was best idea I had. In Andrew Ng course he gave example of some feature engineering where you divide two values and it can be very useful. 
Important note: there is one difference now between tree and NN: when you divide something you can get NaN values which are perfectly fine with tree but problematic with NN.</p> <p>You can also try (x1+x2)/x1 or (x1-x2)/(x1+x2) but I had no time for that. So my math features were only add, sub, div and mul.</p> <p>Then there was target encoding. My notation was like that:</p> <p>ps_car_07_cat_mean_ps_ind_02_cat_mean_ps_ind_10_bin - is average target value grouped by [ps_car_07_cat, ps_ind_02_cat, ps_ind_10_bin]</p> <p>ps_car_11_mean_ps_ind_10_bin - is average target value grouped by [ps_car_11, ps_ind_10_bin]</p> <p>0_mean_ps_ind_04_cat - is average target value grouped by [ps_ind_04_cat]</p> <p>As I said before I wasn't using OHE or any features from public kernels (I tried "missing" but it wasn't useful for me so i removed it)</p> <p>So now I am able to store my features in list of strings, score them and save scores and feature list in csv files.</p> <h2>genetic algorithm</h2> <p>Let's take basic set of features. For instance all features trom train set minus <em>calc</em>. Now let's generate random feature - math one or mean one (see previous section). Then let's calculate its score. Save it to the file. Repeat. Repeat. Repeat all night.</p> <p>Now we have nice long csv file with many feature sets and by using some pandas and matplotlib we can see the scores. It's time for stage two.</p> <p>Sort feature sets by score. Take two random sets with high score. Add sets. Randomly remove some feature or add new generated one. Calculate score. Save and repeat, repeat, repeat. After some days of different workflows I got collection of feature sets with 0.290 score in my CV. I mean average of 5 folds was over 0.290. I got also about 500 feature sets higher than 0.288. </p> <h2>Failure</h2> <p>Then I started submitting them to LB. Score was 0.282. Once it was 0.283. Later I started blending them. 0.284 wow. So at least my idea was better than basic xgboost model I was starting with. </p> <p><a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43455">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/43455</a></p> <p>I tried also ensembling or using rank instead average. But then I wanted to blend 300 models and it was difficult to do something more than averaging with them. I even found way to split data to 10 parts and then fit many hunderts datasets in memory to calculate rank. But it was still 0.284.</p> <p>There were 0.287 scores in public kernels. 0.288 in LB. And single 0.285 models in discussions. What can I do with my poor 0.284? It was time to admit I have lost. And I wasted lots of time, both my coding time and computation time. At least I learned something, maybe?</p> <h2>Grand Finale</h2> <p>5 days ago I wrote on forum:</p> <p>"I have build large number of feature sets and calculated local CV for them. Then I am averaging best ones (according to local CV). In 4 days I will see was it good idea or bad. 
My only chance for higher position on LB is that most public kernels overfit public LB :)" <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44210#248360">https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44210#248360</a></p> <p>my best submission was 0.29173, submitted 9 days ago, it was average of 3 kernels, only 3, hand picked, no random hyperparameters or seeds</p> <p>my other submissions: 0.29166, 0.29165, 0.29164, 0.29162, 0.29159, 0.29158, 0.29156, 0.29155, 0.29154 - none of them was selected by me to use</p> <p>selected submissions were 0.29153 and 0.29147 - blend of 180 models with random hyperparameters, submitted 4 days ago, after that submission I decided I am done</p> <h2>Conclusion</h2> <ul> <li><p>Kaggle is the best place on the planet </p></li> <li><p>by reading kernels and discussions and building your models at same time you can learn a lot</p></li> <li><p>it was easy for people to lose their money in tulip mania, it's easy to follow crowd on stock market, but it's good idea to make sure you know how the world works and don't overfit to LB which is just random 30% of test data</p></li> <li><p>it was very bad idea to stop fighting just before the end, my algorithm was working and with enough time I could try my feature sets with NN and xgb </p></li> <li><p>I hope for me it is just a start - thanks for reading :)</p></li> </ul> <p><img src="https://i.imgur.com/gcKlR5N.png" alt="my top submissions - none of them selected" title=""></p> <p>hello Olivier,</p> <p>I was reading a lot of kernels and discussions in the last two months but I think it was your code which inspired me most, yes you are right I even remember reading this comment about noise, probably when I was ready to start reading the code i was looking on newer kernel where noise was gone</p> <p>I did some research, I think the author or popularizator of target encoding idea was Owen Zhang <a href="https://www.kaggle.com/owenzhang1">https://www.kaggle.com/owenzhang1</a> - on google I found his slides "Winning data science competitions", so basicaly if you want to do target encoding you should do it on some other part of data than your train set, but since we are using kfold we are doomed, so the only way to make it work is to add some noise, please look at my final values:</p> <p>lgb_pars = { 'max_depth': 4, 'min_data_in_leaf': 20, 'min_sum_hessian_in_leaf': 1e-3, 'feature_fraction': 0.47, 'bagging_fraction': 0.87, 'bagging_freq': 10, 'lambda_l1': 8.0, <br> 'lambda_l2': 13.0, <br> 'min_split_gain': 0, 'max_bin': 255, 'min_data_in_bin': 3, 'learning_rate': 0.08, 'metric': {'gini_lgb'}, 'objective': "binary" }</p> <p>hyper = {"rounds": 1000, "early": 100, "lgb_pars": lgb_pars, "features": base_features, "noise_level": 0.1, "kfold" : 5, "smoothing": 30.0, "min_samples_leaf": 300}</p> <p>unfortunately in my final solution I randomized my hyperparameters a little to increase variance of results and blend more than 100 models, it wasn't very good idea because I could be 13th without trying that</p> <p>but that was set of hyperparameters I was using for my genetic algorithm - to find best features</p> <p>hello S.Iqbal </p> <p>1) please look at attached files, on hyper1 you can see scatter plot, on hyper2 you can see same data but line plot and better y scale, you can clearly see that there is some good area for validation score, then please look at hyper3, it is other hyperparameter and we see it has no big effect for score, this is just noise, if instead graphs you 
will just calculate 3 or 5 values and you will look at them - your eyes and your brain will start overfitting to small dataset, you will see something which doesn't exist, it was very visible when I was searching for good lambda value for xgboost, I was testing values 1 to 10 while the decrease of score happened after value 10000</p> <p>each dot on the plot means full model score, at least few minutes of calculations, so to create such graph you need hours</p> <p>2) You can't check everything. Genetic algorithm is just another way to search for solution. You need to decide which way to go. I did my hyperparameters tuning multiple times in the last two months. </p> <p>Please notice this dataset is very specific - variance is very very high (see forum discussions from the past). That's why simple random hyperparameter tuning was not enough and that's why changing seed changes your score a lot. So I needed to decide how to configure my local CV and then not change that during 1500 models calculation. Of course 1500 models was just last stage. Before I was calculating much more with other hyperparameters, and with xgboost and with NN. But it's this last stage which worked.</p> <p>PS. Daimler/Germany? greetings from Siili/Wroclaw ;)</p> <p>hello bigzhao,</p> <p>please read section "genetic algorithm" and also make sure you follow the basic algorithm description in section "First approach - build basic subset of features"</p> <p>to me "genetic algorithm" means that you have <strong>population of solutions</strong> - in this case one "solution" was just a set of features, each solution was scored by LGB (it's crucial to have this score stable, so no random values allowed inside), then from given population you are creating new solutions:</p> <ul> <li><p>take one good (high score) solution and add one new random feature (simple mutation)</p></li> <li><p>take two good solutions and mix them (crossover) - it mean you can add two feature sets or take some part of each feature set</p></li> <li><p>take multiple good solutions and mix them together (I tried 3 or 5)</p></li> <li><p>take one good solution and decrease number of features (I jumped from 0.289 to 0.290 this way)</p></li> </ul> <p>Please look at attached screenshot (top_groups), size is number of features in given solution.</p> <p>hello Mikhail,</p> <p>1) for me noise_level was just another hyperparameter to tune, for given set of features I was just testing noise_level values 0.0, 0.1, 0.01, 0.2, 0.5 then 1.0 or 2.0. Even as huge noise as 2.0 is better than 0.0. Why? Because with large noise you are no longer storing true information about target value, but you still have some value which can be used by tree. With noise 0.0 you are in danger of predicting target from target - so you are overfitting the train set which means total disaster for test set.</p> <p>2) Yes, just do scatter plot of lambda as X and score as Y, I will try to find picture to show you</p> <p>3) I was using multiple xgboost models at same time (see LGB section) and I tried NN in the beginning, please notice one cat column had more than 100 values, so I would need to use a lot of columns just to fit standard set of features - RAM was a problem. 
Instead I focused on fast calculation of model so I was able to build lots of them.</p> <p>code is Python, not C++ (see linked kernel - single model over 0.290 private LB)</p> <p>if anyone is interested here is the code of core idea:</p> <p><a href="https://www.kaggle.com/jacekpoplawski/genetic-algorithm-main-part-20th-place/">https://www.kaggle.com/jacekpoplawski/genetic-algorithm-main-part-20th-place/</a></p> <p>(there was problem with kernel uploading whole saturday, now it should be OK)</p> <p>When you look at example feature set you may wonder - how does it work?</p> <p>There are two interpretations you may accept.</p> <p>First, this is genetic algorithm result, so as an evolution it works strange way but the final result must fit the world, this is how nature works. So let's call it "magic".</p> <p>But if you want more scientific answer.... Please look at EDA kernels for this competition. They plot correlation between each feature and target. Do something more instead - draw a heatmap. Map two features and calculate average target value for each cell. You will notice hills and valleys. This is how <em>mean</em> features work. I was analysing these heatmaps few weeks ago, but my decision was to go fully algorithmic - so no manual feature engineering.</p>
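<p>As a small illustration of that heatmap idea, here is a minimal pandas/matplotlib sketch (the column pair and the number of bins are just examples, not necessarily the ones I analysed): bin one feature, group by a second one, compute the average target per cell and plot it - the "hills and valleys" are exactly what a <em>mean</em> feature exposes to the trees.</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")

# bin a continuous feature so the grid stays small (example pair of columns)
train["car13_bin"] = pd.qcut(train["ps_car_13"], 10, labels=False, duplicates="drop")

# average target value for each (bin, category) cell
grid = train.pivot_table(index="car13_bin", columns="ps_ind_03",
                         values="target", aggfunc="mean")

plt.imshow(grid, aspect="auto", cmap="viridis")
plt.colorbar(label="mean target")
plt.xlabel("ps_ind_03")
plt.ylabel("ps_car_13 decile")
plt.show()
</code></pre>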
Porto Seguro’s Safe Driver Prediction
三个臭皮匠's approach(s)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Porto Seguro’s Safe Driver Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We didn't focus too much on single model cv/plb difference. Rather we decided that whenever weighted averaged models has more or less same degree of improvement on cv and plb we consider that "safe". For single models, especially for NN, we could see a 0.002 difference easily on NNs with same cv scores.</p> <p>no rationale just a hunch :)</p> <p>That is my hypothesis as well. I was too lazy to test it out tho.</p> <p>yes, loop through all features with 'reg' in it and use 'car' and 'ind' as features. Out of fold prediction was used. It did nothing to my other models but had a pretty decent boost on NN and linear models. If I remember correctly, for NN, I had about 0.003 improvement with that.</p> <p>I was thinking of doing that at some point but was too afraid of overfitting so I gave up. Was there a big boost there?</p> <p>It definitely came as a surprise to me that there was this huge shake up in the end. I personally thought that the shake up should be relatively small because my stacking model performance on public LB and local cv was always aligned although for single model they could vary quite a bit. We did trust mostly on local CV and we tried very hard on nonlinear stacking (without success) so we relied on weighted average cv score and only submitted when there was an improvement on local cv. </p> <p>I cannot wait to see what @Michael Jahrer did but I would like to share some of our approaches (actually, just the NN part, and leave the rest to my teammates) before Michael publishes his and then nobody cares what we did :(</p> <p>So my best NN had a 5 fold local CV 0.294, and public LB 0.284 and private 0.290. And here is roughly how you can achieve that:</p> <p>1) important features' interactions (e.g. ps_car_13, ps_ind_03, ps_reg_03, ...)</p> <p>2) count of categorical features</p> <p>3) xgboost predictions: divide feature sets into three groups (car, ind, reg) and then use two group as features and the other group as target, train a xgboost model on it, and use prediction as features </p> <p>4) feature aggregation: pick two features (e.g. ps_car_13, ps_ind_03), and then use one as group variable, the other as value variable, do mean, std, max, min, median. Still top important features are picked only</p> <p>5) do embedding layers on all categorical features (I used embedding dimension 4 with dropout 0.25)</p> <p>6) model is 2 layer with relu activation and big dropout (512 with dropout 0.8, and 64 with 0.8)</p> <p>7) a few categorical count features @qianqian created</p> <p>That is about it! 
Hope it helps :)</p> <p>Update: my best lightgbm code is here </p> <p><a href="https://www.kaggle.com/xiaozhouwang/2nd-place-lightgbm-solution">https://www.kaggle.com/xiaozhouwang/2nd-place-lightgbm-solution</a></p> <p>It scored 0.29124 on private LB.</p> <p>best nn code is here <a href="https://www.kaggle.com/xiaozhouwang/2nd-place-solution-nn-model">https://www.kaggle.com/xiaozhouwang/2nd-place-solution-nn-model</a></p> <p>It scored 0.29008 on private LB.</p> <p>And their model results can be found in the comments (Seems I cannot upload files here?)</p> <p>Also, we will prepare all the relevant code in this repo <a href="https://github.com/xiaozhouwang/kaggle-porto-seguro">https://github.com/xiaozhouwang/kaggle-porto-seguro</a></p> <p>If you want to ensemble with your models, get the lightgbm and nn model results here. :)</p> <p>both lightgbm and nn code scripts are uploaded to kernel. </p> <p>it is counting the unique values. For example, you have feature A with possible values 1, 2, 3. And they appear 100, 200, 50 times in a training set (with 100+200+50 rows), then use 100, 200, 50 to replace 1, 2, 3</p>
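<p>A minimal pandas sketch of the count encoding described above (the column name is illustrative; counts are computed on the training set and the same mapping is then applied to the test set):</p> <pre><code>import pandas as pd

train = pd.DataFrame({"A": [1, 2, 3, 2, 1, 2]})
test = pd.DataFrame({"A": [3, 1, 2]})

# how often each category appears in the training set
counts = train["A"].value_counts()

train["A_count"] = train["A"].map(counts)
test["A_count"] = test["A"].map(counts).fillna(0)
</code></pre>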
Practice Fusion Diabetes Classification
Congrats to the winners!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Practice Fusion Diabetes Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>For me it was an interesting competition, especially with the strong improvements at the top of the leaderboard towards the end of the competition. I'm keen on knowing what you guys did :)</p> <p>Just want to share my approach in short: Basically, I tried to create meaningful features most of the time, like the change of weight in some time frame or the minimum/maximum weight. Furthermore, I didn't use the prescription files, and the information about allergies was pretty useless for me too. Did anyone use these? In the end I had about 380 features which I fed into a GBM. That's it.</p> <p>Unfortunately I didn't select my best submission (it would have been 12th place). Given that, the most important thing I learned from this competition is to have faith in your CV scores.</p>
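<p>A minimal sketch of the kind of per-patient weight features mentioned above (the table and column names are assumptions for illustration, not the actual Practice Fusion schema):</p> <pre><code>import pandas as pd

# hypothetical vitals table: one row per visit
vitals = pd.DataFrame({
    "PatientGuid": ["a", "a", "a", "b", "b"],
    "VisitDate":   pd.to_datetime(["2010-01-05", "2010-06-01", "2011-02-10",
                                   "2010-03-02", "2010-09-15"]),
    "Weight":      [82.0, 80.5, 78.0, 95.0, 97.5],
})

vitals = vitals.sort_values(["PatientGuid", "VisitDate"])
feats = vitals.groupby("PatientGuid")["Weight"].agg(
    weight_min="min", weight_max="max", weight_first="first", weight_last="last")
feats["weight_change"] = feats["weight_last"] - feats["weight_first"]
</code></pre>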
Predict Closed Questions on Stack Overflow
Sharing my solution (Ranked #10)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Closed Questions on Stack Overflow <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello all</p> <p>I joined this competition fairly late in the game, partly intrigued by Foxtrot's post [1]. I'd first noticed the competition when it was announced, but had not got round to looking at it until about 1 week before the closing of the model phase. I was not sure how to deal with the quantity of data available, as this was larger than I had tackled in the past. I had a number of sub-sampling approaches in mind, but they seemed like quite a bit of work for something that might not pay off. Foxtrot's post pointed out Vowpal Wabbit [2], which I'd previously heard of but never paid any real attention to. I saw what he was doing with it, which gave me a great platform to build from. I quickly replicated his set-up, then implemented cross-validation, then set about generating some additional features. In the end, I did better than I expected - perhaps because people made mistakes in their final submissions that they did not realize until the final scores were released, or perhaps because solutions had been tuned against the leaderboard results and ended up overfitting. I had a lot of fun learning new tools, and working at a higher pace than I'm used to. In the spirit of Foxtrot's original post, I am sharing my own implementation [3].</p> <p>[1] <a href="http://www.kaggle.com/c/predict-closed-questions-on-stack-overflow/forums/t/2818/beating-the-benchmark-hands-down">http://www.kaggle.com/c/predict-closed-questions-on-stack-overflow/forums/t/2818/beating-the-benchmark-hands-down</a></p> <p>[2] <a href="http://hunch.net/~vw/">http://hunch.net/~vw/</a></p> <p>[3] <a href="https://github.com/saffsd/kaggle-stackoverflow2012">https://github.com/saffsd/kaggle-stackoverflow2012</a></p> <p>Hello Stephen</p> <p>The main body of feature extraction is actually in <a href="https://github.com/saffsd/kaggle-stackoverflow2012/blob/master/data2vw.py"> data2vw.py</a>, data2user was used to generate some supplementary user-level features. Most of the features you mention are actually in data2vw.py - including:</p> <ul> <li>divide post into code and non-code </li><li>sentence tokenization (done with NLTK, which internally uses punkt) </li><li>descriptive statistics <ul> <li>number of sentences (broken down by type, identified by terminal punctuation) </li><li>number of words </li><li>number of special tokens (e.g. URLs, digits, &quot;nonwords&quot;) </li></ul> </li></ul> <p>I also treated the first and last sentence as special. One thing I didn't get much mileage out of was the code blocks themselves. I used the number and size, but I didn't get much out of the content. I didn't get into n-grams of words either - not sure how useful collocations would be, but it's not implausible and might be worth testing empirically.</p> <p>My gut feeling from the task is that the lowest-hanging fruit I left behind was from the temporal aspect of the task. 
Slicing the training data by time and weighting the same features in different time slices differently would probably have been the most productive next step.</p> <p>Hope this helps!</p> <p>Cheers</p> <p>Marco</p>
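<p>A rough sketch of the descriptive-statistics features listed in the reply above (simplified with respect to the actual data2vw.py linked in the repository; the code/non-code split here is just a crude indentation heuristic):</p> <pre><code>import re
from nltk.tokenize import sent_tokenize  # requires the punkt model to be downloaded once

def post_features(body):
    # crude code / non-code split: indented lines are treated as code
    lines = body.split("\n")
    code = [l for l in lines if l.startswith("    ")]
    text = "\n".join(l for l in lines if not l.startswith("    "))

    sents = sent_tokenize(text)
    words = text.split()
    return {
        "n_code_lines": len(code),
        "code_chars": sum(len(l) for l in code),
        "n_sentences": len(sents),
        "n_questions": sum(s.rstrip().endswith("?") for s in sents),
        "n_words": len(words),
        "n_urls": len(re.findall(r"https?://\S+", text)),
        "n_digit_tokens": sum(w.isdigit() for w in words),
        "first_sent_words": len(sents[0].split()) if sents else 0,
        "last_sent_words": len(sents[-1].split()) if sents else 0,
    }
</code></pre>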
Predict Grant Applications
I'm ineligible for the prize - congrats to Quan Sun
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Grant Applications <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|>Because I have recently started employment with Kaggle, I am not eligible to win any prizes, which means the prize-winner for this comp is Quan Sun (team 'student1')! Congratulations!<br><br>My approach to this competition was to first analyze the data in Excel PivotTables. I looked for groups which had high or low application success rates. In this way, I found a large number of strong predictors - including by date (New Year's Day is a strong predictor, as are applications processed on a Sunday), and for many fields a null value was highly predictive.<br><br>I then used C# to normalize the data into Grants and Persons objects, and constructed a dataset for modeling including these features: CatCode, NumPerPerson, PersonId, NumOnDate, AnyHasPhd, Country, Dept, DayOfWeek, HasPhd, IsNY, Month, NoClass, NoSpons, RFCD, Role, SEO, Sponsor, ValueBand, HasID, AnyHasID, AnyHasSucc, HasSucc, People.Count, AStarPapers, APapers, BPapers, CPapers, Papers, MaxAStarPapers, MaxCPapers, MaxPapers, NumSucc, NumUnsucc, MinNumSucc, MinNumUnsucc, PctRFCD, PctSEO, MaxYearBirth, MinYearUni, YearBirth, YearUni.<br><br>Most of these are fairly obvious as to what they mean. Field names starting with 'Any' are true if any person attached to the grant has that feature (e.g. 'AnyHasPhd'). For most fields I had one predictor that just looks at person 1 (e.g. 'APapers' is the number of A papers from person 1), and one for the maximum of all people in the application (e.g. 'MaxAPapers').<br><br>Once I had created these features, I used a generalization of the random forest algorithm to build a model. I'll try to write some detail about how this algorithm works when I have more time, but really, the difference between it and a regular random forest is not that great.<br><br>I pre-processed the data before running it through the model by grouping up small groups in categorical variables, and replacing continuous columns with null values with 2 columns (one containing a binary predictor that is true only where the continuous column is null, the other containing the original column, with nulls replaced by the median). Other than the Excel PivotTables at the start, all the pre-processing and modelling was done in C#, using libraries I developed during this competition. I hope to document and release these libraries at some point - perhaps after tuning them in future comps.
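<p>A small sketch of the missing-value treatment described above (shown in Python/pandas for brevity - the original pipeline was written in C#):</p> <pre><code>import pandas as pd

df = pd.DataFrame({"YearBirth": [1965, None, 1980, None, 1972]})

def expand_nullable(df, col):
    # binary predictor that is true only where the continuous column is null
    df[col + "_isnull"] = df[col].isnull().astype(int)
    # original column with nulls replaced by the median
    df[col] = df[col].fillna(df[col].median())
    return df

df = expand_nullable(df, "YearBirth")
</code></pre>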
Predict Grant Applications
Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Grant Applications <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|>The solution file is attached to this post.<div><br></div><div>Thanks all for participating,</div><div><br></div><div>Anthony</div>
Predict Student Performance from Game Play
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I respect all of you for your tough and long term fight and I am glad that we were able to fight together. I also want to thank my teammates ( <a href="https://www.kaggle.com/tereka" target="_blank">@tereka</a>, <a href="https://www.kaggle.com/deepkun1995" target="_blank">@deepkun1995</a>, <a href="https://www.kaggle.com/ryotak12" target="_blank">@ryotak12</a>, <a href="https://www.kaggle.com/yurimaeda" target="_blank">@yurimaeda</a>) for their hard work.</p> <p>I'm happy because this is the first time I got a gold medal.</p> <h1>Overview</h1> <p>We're not doing anything special in our solution. We used 1 NN, 1 LightGBM and 4 XGBoost with various features for the Stage 1, and MLP and Logistic Regression stacking for the Stage 2. We used average and threshold optimization for the Stage 3.</p> <ul> <li>CV: 0.70573</li> <li>Public LB: 0.706</li> <li>Private LB: 0.702</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5563440%2F010fd90d184db7dc2858562025f3867a%2Fpsp_solution_overview.drawio%20(1).png?generation=1688311769280377&amp;alt=media" alt="psp_solution_overview"></p> <h2>Code</h2> <ul> <li>Inference: <a href="https://www.kaggle.com/code/shu421/psp-10thsolution-public0706-private0702/notebook" target="_blank">https://www.kaggle.com/code/shu421/psp-10thsolution-public0706-private0702/notebook</a></li> <li>shu421 XGBoost and Stacking Training: <a href="https://github.com/shu421/Kaggle_PSP_10thSolution" target="_blank">https://github.com/shu421/Kaggle_PSP_10thSolution</a></li> </ul> <h1>Models</h1> <h2>Stage 1: XGBoost (shu421 part)</h2> <p>I created XGBoost for each level_group. The base features are not so different from those in the public code. It is an aggregate feature of elapsed_time_diff and hover_duration, and other numerical features. However, in addition to these, I used previous level_group features and predicted probability as current level_group features. <br> I used numpy and numba to create them. Initially, I had used polars, but I switched to numba which is my teammate <a href="https://www.kaggle.com/yurimaeda" target="_blank">@yurimaeda</a> 's approach. The submission time was significantly reduced from 2 hours with polars to just 13 minutes with numpy and numba. I used 5-StratifiedGroupKFold as cross-validation strategy.</p> <ul> <li>CV: 0.70111</li> <li>Public LB: 0.702</li> <li>Private LB: 0.699</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5563440%2Faabaecb685f9740c8b542c8282791901%2Fpsp_solution_xgb.drawio%20(1).png?generation=1688311791487937&amp;alt=media" alt="psp_solution_xgboost"></p> <h2>Stage 2: Stacking</h2> <p>We created MLP and Logistic Regression for each question. 
Thus, there are 18 models each, and the output dimension of each model is (n_samples, 1).<br> Since stacking was very prone to overfitting, we kept the model architecture simple.<br> Here is the code for the MLP.</p> <pre><code>import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.head = nn.Linear(hidden_size, output_size)
        self.dropout = nn.Dropout()
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.head(x)
        return x
</code></pre> <h2>Stage 3: Threshold Optimization</h2> <p>We take the average of the predictions of the 2 models in Stage 2 and optimize the threshold for each question.</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score
from scipy.optimize import minimize

def f1_score_macro_for_thresholds(y_true, y_pred_prob, thresholds):
    y_pred_binary = (y_pred_prob &gt; thresholds).astype(int)
    score = f1_score(y_true.flatten(), y_pred_binary.flatten(), average="macro")
    return score

def optimize_thresholds(y_true, y_pred_prob, method="Powell"):
    n_labels = y_pred_prob.shape[1]
    init_thresholds = np.full(n_labels, 0.5)  # initial guess for every question's threshold
    objective = lambda thresholds: -f1_score_macro_for_thresholds(
        y_true, y_pred_prob, thresholds
    )
    result = minimize(
        objective, init_thresholds, bounds=[(0, 1)] * n_labels, method=method
    )
    return result.x
</code></pre> <p>We tried some optimization methods, but Powell worked best.<br> This method improved CV by 0.008.</p> <h1>What worked</h1> <ul> <li>feature engineering<ul> <li>elapsed_time_diff and hover_duration agg features were important</li></ul></li> <li>threshold optimization</li> <li>ensemble</li> <li>lstm + transformer (ryota part)</li> <li><a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/416963" target="_blank">sort_frame</a></li> </ul> <h1>What didn't work</h1> <ul> <li>1D CNN</li> <li>stacking (the methods below seemed to overfit)<ul> <li>CNN(1D/2D)</li> <li>RNN</li> <li>level_group preds</li></ul></li> <li>datetime agg features</li> <li>use NN embedding as gbdt features</li> <li>TimeSeriesClustering (elapsed_time_diff)</li> <li>additional data</li> </ul>
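<p>For context on the Stage 1 setup, here is a minimal sketch of a 5-fold StratifiedGroupKFold training loop for one question of one level_group (hyperparameters, variable names and the stratification target are illustrative, not the exact settings used above):</p> <pre><code>import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedGroupKFold

# X, y: features / labels for one (level_group, question) pair
# groups: the session_id of each row, so a session never spans two folds
cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
oof = np.zeros(len(X))

for fold, (tr_idx, va_idx) in enumerate(cv.split(X, y, groups=groups)):
    model = xgb.XGBClassifier(n_estimators=1000, learning_rate=0.05,
                              eval_metric="logloss", early_stopping_rounds=50)
    model.fit(X.iloc[tr_idx], y.iloc[tr_idx],
              eval_set=[(X.iloc[va_idx], y.iloc[va_idx])], verbose=False)
    oof[va_idx] = model.predict_proba(X.iloc[va_idx])[:, 1]
</code></pre>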
Predict Student Performance from Game Play
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I didn't exactly use early stopping. Instead, I set the number of epochs to 20 and saved the checkpoint that produced the best validation score. The choice to use 20 epochs was based on several experiments, where this setting resulted in the best cross-validation performance.</p> <p>Thank you for your comment! Also, congratulations on your gold medal! <br> For instance, I created the following model for level group 5-12.</p> <pre><code>import torch
import torch.nn as nn
from torch.nn import TransformerEncoder

num_cols_level2 = [...]  # the four numerical feature columns (names omitted in the original post)
cat_cols = [...]         # the five categorical feature columns: name, event_name, fqid, room_fqid, level

class MeanPooling(nn.Module):
    def __init__(self):
        super(MeanPooling, self).__init__()

    def forward(self, last_hidden_state, attention_mask):
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
        sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded, 1)
        sum_mask = input_mask_expanded.sum(1)
        sum_mask = torch.clamp(sum_mask, min=1e-9)
        mean_embeddings = sum_embeddings / sum_mask
        return mean_embeddings

class PspTransformerLevel2Model(nn.Module):
    def __init__(self, dropout=0.2, input_numerical_size=4, input_name_nunique=7,
                 name_embedding_size=12, input_event_nunique=12, event_embedding_size=12,
                 input_fqid_nunique=130, fqid_embedding_size=24, input_room_fqid_nunique=20,
                 room_fqid_embedding_size=12, input_level_nunique=24, level_embedding_size=12,
                 categorical_linear_size=120, numeraical_linear_size=24, model_size=160,
                 nhead=16, dim_feedforward=480, out_size=10):
        super(PspTransformerLevel2Model, self).__init__()
        self.name_embedding = nn.Embedding(num_embeddings=input_name_nunique, embedding_dim=name_embedding_size)
        self.event_embedding = nn.Embedding(num_embeddings=input_event_nunique, embedding_dim=event_embedding_size)
        self.fqid_embedding = nn.Embedding(num_embeddings=input_fqid_nunique, embedding_dim=fqid_embedding_size)
        self.room_fqid_embedding = nn.Embedding(num_embeddings=input_room_fqid_nunique, embedding_dim=room_fqid_embedding_size)
        self.level_embedding = nn.Embedding(num_embeddings=input_level_nunique, embedding_dim=level_embedding_size)
        self.categorical_linear = nn.Sequential(
            nn.Linear(name_embedding_size + event_embedding_size + fqid_embedding_size
                      + room_fqid_embedding_size + level_embedding_size, categorical_linear_size),
            nn.LayerNorm(categorical_linear_size)
        )
        self.numerical_linear = nn.Sequential(
            nn.Linear(input_numerical_size, numeraical_linear_size),
            nn.LayerNorm(numeraical_linear_size)
        )
        self.linear1 = nn.Sequential(
            nn.Linear(categorical_linear_size + numeraical_linear_size, model_size),
            nn.LayerNorm(model_size),
        )
        self.transformer_encoder = TransformerEncoder(
            encoder_layer=nn.TransformerEncoderLayer(d_model=model_size, nhead=nhead,
                                                     dim_feedforward=dim_feedforward, dropout=dropout),
            num_layers=1)
        self.gru = nn.GRU(model_size, model_size, num_layers=1,
                          batch_first=True, bidirectional=False)
        self.linear_out = nn.Sequential(
            nn.Linear(model_size, out_size)
        )
        self.pool = MeanPooling()

    def forward(self, numerical_array, name_array, event_array, fqid_array,
                room_fqid_array, level_array, mask, mask_for_pooling):
        name_embedding = self.name_embedding(name_array)
        event_embedding = self.event_embedding(event_array)
        fqid_embedding = self.fqid_embedding(fqid_array)
        room_fqid_embedding = self.room_fqid_embedding(room_fqid_array)
        level_embedding = self.level_embedding(level_array)
        categorical_emedding = torch.cat([name_embedding, event_embedding, fqid_embedding,
                                          room_fqid_embedding, level_embedding], dim=2)
        categorical_emedding = self.categorical_linear(categorical_emedding)
        numerical_embedding = self.numerical_linear(numerical_array)
        concat_embedding = torch.cat([categorical_emedding, numerical_embedding], dim=2)
        concat_embedding = self.linear1(concat_embedding)
        concat_embedding = concat_embedding.permute(1, 0, 2).contiguous()
        output = self.transformer_encoder(concat_embedding, src_key_padding_mask=mask)
        output = output.permute(1, 0, 2).contiguous()
        output, _ = self.gru(output)
        output = self.pool(output, mask_for_pooling)
        output = self.linear_out(output)
        return output
</code></pre> <p>The main improvements I made are listed below. Through a series of experiments, I found that the following practices were advantageous:</p> <ul> <li>Keeping the model size small:<ul> <li>For instance, I kept the number of layers for both the Transformer and GRU at 1.</li> <li>I didn't enlarge the encoder layer or hidden size.</li> <li>I also maintained a relatively small embedding size for categorical features.</li></ul></li> <li>Using fewer features:<ul> <li>For instance, I didn't use features such as screen_coor_x, screen_coor_y, hover_duration, and text.</li></ul></li> </ul> <p>Thank you for your comment! <br> Regarding level groups 0-4 and 5-12, I adopted the above method because creating a single model for them all, rather than creating a model for each target, resulted in better CV and public score.</p> <h1>13th place solution</h1> <p>First of all, I would like to thank the Kaggle community for sharing great ideas and engaging discussions. I would also like to thank the hosts for organizing this interesting task competition.</p> <h2>Summary</h2> <ul> <li>Ensemble of LightGBM and NN</li> <li>Cross validation: Nested cross validation<ul> <li>Training data: Data for which the first four digits of the session_id are 2200 or less.</li> <li>Validation data: Data for which the first four digits of the session_id are 2201 or more.</li> <li>I trained the model on the training data using a 5-fold cross validation strategy, and evaluated it on the validation data using predictions from all 5 trained models.</li> <li>For the final submission, I trained the model on the entire dataset using a 5-fold cross validation approach.</li></ul></li> </ul> <h2>LightGBM</h2> <ul> <li>I trained one model for level group 0-4 and another for level group 5-12. For these, I included features representing the target and trained a single model.</li> <li>For level group 13-22, I trained a distinct model for each target.</li> <li>Main features:<ul> <li>The count of categorical data for each session_id</li> <li>The statistical measures of numerical data for each session_id</li> <li>An aggregate of the next action taken </li></ul></li> <li>Scores<ul> <li>CV : 0.7032</li> <li>Public Score : 0.704</li> <li>Private Score : 0.701</li></ul></li> </ul> <h2>NN</h2> <ul> <li>Model: Transformer + GRU<ul> <li>The standalone Transformer didn't perform very well.</li> <li>The addition of GRU improved the score.</li></ul></li> <li>Trained with fewer features</li> <li>I trained a separate model for each level group.</li> <li>Scores<ul> <li>CV : 0.7010</li> <li>Public Score: 0.700</li> <li>Private Score: 0.700</li></ul></li> </ul> <h2>Ensemble</h2> <ul> <li>LightGBM * 0.66 + NN * 0.34</li> <li>CV : 0.7053</li> <li>Public Score : 0.706</li> <li>Private Score : 0.702</li> </ul> <p>Thank you for your comment! I used nested cross-validation to evaluate for the following two reasons:</p> <ul> <li><p>The evaluation metric for this time fluctuates significantly based on the threshold, and just changing the threshold also significantly changed the Public Score. 
I wanted to create the same situation locally as the Public Score (including the ensemble of 5-fold) and check the change in CV due to changing the threshold.</p></li> <li><p>For instance, in situations like the one described in the link below, I believed that nested cross-validation would be more reliable than standard cross-validation.<br> <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/388682" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/388682</a></p></li> </ul> <p>Thanks!</p> <ol> <li><p>For the LightGBM model, the number of features considered for each level_group model is as follows: For level_group 0-4, I consider 1570 features. For level_group 5-12, I consider 3992 features. And for level_group 13-22, I consider 5290 features.</p></li> <li><p>As for the Neural Network, the sequence length of the event records was determined based on cross-validation results. Specifically, for level_group 0-4, I chose a sequence length of 250. For level_group 5-12, the sequence length is 500. And for level_group 13-22, the sequence length is 800.</p></li> </ol> <p>Yes, as you said, the optimal epoch does differ for each level group. As for the checkpoints, I decided on the one where the AUC was highest. Even then, if the AUC improved for each level group, the final macro-f1 score also improved.</p>
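<p>A minimal sketch of the nested validation split described in the summary above (the first four digits of session_id encode the recording period, so this is effectively a chronological holdout; the exact fold splitter shown here is an assumption):</p> <pre><code>import pandas as pd
from sklearn.model_selection import GroupKFold

df = pd.read_csv("train.csv")
df["session_head"] = df["session_id"].astype(str).str[:4].astype(int)

# chronological holdout: older sessions for training, newer ones for validation
train_part = df[df["session_head"] &lt;= 2200]
holdout    = df[df["session_head"] &gt;= 2201]

# 5-fold CV inside the training part; the holdout is then scored
# with the average prediction of the 5 fold models
gkf = GroupKFold(n_splits=5)
folds = list(gkf.split(train_part, groups=train_part["session_id"]))
</code></pre>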
Predict Student Performance from Game Play
14th Place Solution Joseph Part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank my teammates for their efforts, I have learned a lot from them. Luckily we don't shake-down too much. Now I would like to introduce my solution to you. </p> <h1>Modeling</h1> <p>My modeling method is like a 'cumulative' one: using 0-4 part data to generate the train set of q1-q3, using 0-4 and 5-12 part data to generate the train set of q4-q13, using 0-4, 5-12 and 13-22 part data to generate the train set of q14-q18. <strong>Question is also a feature</strong>. It has merit that I don't need to save 'historical' data. And this one is time-saving and costs about 40min for inference.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2Fe49b6d8581547e3a9477a626647d36cc%2FWX20230629-0924562x.png?generation=1688002004825453&amp;alt=media" alt=""></p> <h1>Feature Engineering</h1> <p>There are my FE ideas:</p> <ul> <li><p><strong>basic agg</strong> features: eclipse_time_diff sum, count and max of each group, each level, each event_name, …, each text; eclipse_time_diff sum, count under a particular room_fqid and an event_name, etc.</p></li> <li><p><strong>behavior-change</strong> features: the number of room change, and the number of room change under each level; the number of text_fqid change, and the number of text_fqid change under each level, etc.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3110858%2F2f61e63e58a3bfab1944b9af16d0e589%2F1688005632589.jpg?generation=1688005685285390&amp;alt=media" alt=""><br> In this picture, we can see a <strong>room change</strong> behavior, we calculate the change times and average to characterize one's ability to understand and reason. Some of them have pretty high feature importance.</p></li> <li><p><strong>Meta</strong> features: Besides basic <code>groupby</code> feature engineering, I add the meta feature for 5-12 and 13-22 groups. There are two way to use them:</p></li> </ul> <ol> <li>each question's <code>predict_proba</code> as a feature, 5-12's model includes features <strong>q1_proba</strong>, <strong>q2_proba</strong>, and <strong>q3_proba</strong>, 13-22's model includes features <strong>q1_proba</strong>, <strong>q2_proba</strong>, … <strong>q13_proba</strong>.</li> <li>mean of all question in one group as a feature, for instance, 5-12's model includes a feature <strong>mean_of_q1-q3_proba</strong>, 13-22's model includes features <strong>mean_of_q1-q3_proba</strong>, <strong>mean_of_q4-q13_proba</strong>.</li> </ol> <h1>Models</h1> <p>For my part, I use 9 models for my ensemble. They are 4 xgboost, 1 lightgbm, 2 dart, 2 catboost. The private-best single model is a dart, which achieved <strong>Public 0.704</strong> and <strong>Private 0.704</strong>. The public-best model is a xgboost, which achieved <strong>Public 0.705</strong> and <strong>Private 0.698</strong>. 
My dart notebook <a href="https://www.kaggle.com/code/takanashihumbert/game-play-lgbdart-infer/notebook" target="_blank">Game-Play-LGBDart[INFER] Private LB 0.704</a></p> <h1>The difficulty</h1> <p>I think the most difficult part of this comp is to establish CV and choose the threshold and the submissions. As you can see, my dart model and xgboost model in the same CV strategy vary wildly. It's beyond my expectation. I even have no confidence to give my dart models a bigger weight. <strong>I believe many teams didn't choose their best results.</strong></p> <p>Finally, I would like to pay tribute to all kagglers who share their ideas. See you next game.</p> <p>It's also my pleasure, Professor A.</p> <p>Thanks! And sadly I didn't do any feature selection.🤕</p> <p>The external data, as Adam mentions above, helps my local cv. The F1-score improved by nearly 0.001, but public LB didn't change. What confuses me the most is that xgboost models are far lower than dart on private LB.</p> <p>Thank you, Chris. I have learned a lot from you in many competitions!</p> <ul> <li>dart cv: one is 0.70199, the other is 0.70203</li> <li>catboost cv: one is 0.70122, the other is 0.70095</li> </ul> <p>The second question is difficult for me to answer; because I didn't tweak the parameters too much.🤕</p> <p>Thanks <a href="https://www.kaggle.com/serangu" target="_blank">@serangu</a> !</p> <p>Thanks! We indeed need some luck in this comp.</p>
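<p>A rough sketch of the "question is also a feature" framing described above (function and variable names are illustrative, and the wide label frame with one q1-q18 column per session is an assumption about how the labels were prepared):</p> <pre><code>import pandas as pd

def build_train(session_feats, labels, questions):
    # stack one row per (session, question); the question id becomes a feature
    frames = []
    for q in questions:
        df = session_feats.copy()              # aggregated features for this span of level groups
        df["question"] = q
        df["correct"] = labels[f"q{q}"].values
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# e.g. the 0-4 data answers q1-q3, 0-4 + 5-12 answers q4-q13, and so on:
# train_q1_3 = build_train(feats_0_4, labels, questions=[1, 2, 3])
</code></pre>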
Predict Student Performance from Game Play
1st Place Solution for the Predict Student Performance from Game Play Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi <a href="https://www.kaggle.com/murugesann" target="_blank">@murugesann</a>!</p> <p>Thanks!<br> No problem with your question.</p> <h1>TL;DR</h1> <p>My recommendation is to use our solution, which can be considered real-world compliant.</p> <h1>Detailed reply</h1> <p>Given the fact that predicting the mean of each question leads to a ~0.66 score and that the ML models do not learn that much (+0.04), the 3rd decimal (+/- 0.002) is what we want to improve. The final LB shows that the 3rd decimal is meaningful: for example, even if 2nd place private is +101 positions, it is only a 0.002 diff with public. The 4th decimal is noise. I do not understand why you state that only the 2nd decimal is meaningful, implying that the 3rd is just luck. I disagree.<br> My first competition was <a href="https://www.kaggle.com/competitions/microsoft-malware-prediction" target="_blank">https://www.kaggle.com/competitions/microsoft-malware-prediction</a>. The private dataset was totally skewed because the host did not have all the targets so they were put to 0 (<a href="https://www.kaggle.com/competitions/microsoft-malware-prediction/discussion/83946)" target="_blank">https://www.kaggle.com/competitions/microsoft-malware-prediction/discussion/83946)</a>. A lot of competitors, including me, had worked very hard (ask <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>) and the shakeup was monstrous and very hard to accept. I invite you to have a look at <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s results: strong on the public LB and strong on the private LB! Production-ready models, right? Arguing that it is just luck is biased. I learned a lot from my failure as well as from <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s and <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>'s success. These guys are true legends.<br> Last year I participated in <a href="https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating" target="_blank">https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating</a> and only entered in the last few days because of <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s feedback: <a href="https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating/discussion/304441#1671696" target="_blank">https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating/discussion/304441#1671696</a>. I only submitted twice. Just luck? I would rather say the private test set was pretty unfair and that robustness (my solution was significantly better than every benchmark used in the field) ensured a good place that was in fact, for me and without modesty, a huge disappointment. 
I let you search for the details if you want to understand why: <a href="https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating/discussion/306181" target="_blank">https://www.kaggle.com/competitions/jigsaw-toxic-severity-rating/discussion/306181</a>.</p> <p>As the context is set, it is now possible to respond to your question:</p> <blockquote> <p>What would be your recommended model architecture for a best second decimal accuracy for this specific problem?</p> </blockquote> <p>I would recommand to consider 3rd decimal and the model architecture we used. As it is explained (sorry for the english that is far from perfect) in the writeup, we focused on CV scores and just use the LB as a subsidiary information. It means that the private score corresponds to the real world environment you are referring at. Even if luck is present, keep in mind that our solution scores CV 0.705, public LB 0.705 (the winning solution is not our best LB because we thought our best LB overfitted) and private LB 0.705.<br> If you want a stable/robust solution, I would recommand to blend GBDT and NN because as you need robustness, you need good but also diverse models. That being said proceeding like this you will limit variance so this choice is a tradeoff between ultimate performance and robustness. Real-world requires tradeoffs and robustness and it is the path we chose.<br> Hope it helps.</p> <p>We are considering to release the code but are not alone to decide. Stay tuned in the coming week.</p> <p>Thanks <a href="https://www.kaggle.com/abaojiang" target="_blank">@abaojiang</a>!</p> <p>The writeup has been updated to mention your excellent notebook. We first saw the time-aware events and the Conv1D ideas in your work. Finding once more these ideas while exploring the litterature likely made us focus more on them.</p> <p>Continuous features are tricky to preprocess. We knew from our GBDT modeling work that time and especially <code>duration</code> was very important. We put a lot of careful efforts to try to model it correctly. Processing the <code>duration</code> seemed important so we tried a lot of things: scaling, normalization, standardization, clipping, … We explored the data to analyze the distribution of duration times and experimented with values around a baseline that seemed to us reasonnable because of the better distribution on the continuum. What worked best was clipping at 60 seconds + simple scaling.</p> <p>Thanks <a href="https://www.kaggle.com/zrongchu" target="_blank">@zrongchu</a>!<br> We have decided to share our code: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332</a>.</p> <p>Unbelievable to write this!</p> <h1>Thanks!</h1> <p>As it is the usage, we first <strong>thank the host and Kaggle</strong>. These are special thanks because you and us have had a special link in this competition as we gave you more work by reporting data leaks. No doubt you tried to do your best. You are right to animate this community and to trust in it. You are part of it. Please take care of this community that is able to build so much together by sharing. As all of us you have made mistakes and we hope you will learn from them.</p> <p>We also want to <strong>thank all of you</strong>, Kagglers. We love and are grateful to be part of our group/community. 
Thanks for sharing and for the collective learning experience.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/overview" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/overview</a>,</li> <li>Data context: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/data" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/data</a>.</li> </ul> <h1>Overview of the Approach</h1> <p>Our solution is essentially a blend of a XGBoost and a NN models. Both heavily rely on duration that appeared to be a powerful leverage. Time was aggregated in different ways and combined with counts for the GBDT while it is transformed via a custom TimeEmbedding block based on 1D convolutions that produce a representation combined with user event representations for the NN.<br> Robustness and efficiency founded our work. XGBoost models were validated on 10 bags of 5 folds and features incorporated only if the mean of the CV of these 10 bags was greater than the level of noise we quantified while we opted for a majority/consensus strategy to build the NN, i.e. validate choices only if 4 of 5 folds were improved. The 3rd place of the efficiency LB was achieved with a lightweight NN accelerated via TF Lite.</p> <h1>Details of the submission</h1> <h2>Code</h2> <p>After publishing this write-up we decided to open our code: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332</a>.<br> It is composed by several parts: <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-gbdt-training" target="_blank">how to train the XGBoost models</a>, how to <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-nn-pretraining" target="_blank">pretrain</a> and <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-nn-training" target="_blank">train</a> the NN models and the <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-inference" target="_blank">inference notebook</a> used to win this competition.</p> <h2>Data</h2> <p>Looking at the 1st data released showed that there aren't a lot of sessions so not a lot of sequences. Moreover these are long sequences. This is not ideal for a deep learning approach. <br> Exploring the Field Day Lab research instructed that the Jo Wilder application was built to help learning to read and that way more than 11,500 learners had played this game. <br> These 2 ideas led to search for a bigger dataset. In 1 Google search and 3 clicks we came up to the open data portal (<a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>) which contains a lot of sessions. 1 hour and 3 bash commands latter we knew that the train set was in part in the open data. So we took a week to <strong>build a pipeline that extracts 98 % of the sessions of the train set perfectly and with minor errors for the last 2 %</strong>. Our data are even better than the comp data because we knew before the host confirmation that for the sessions with 2 games the target was skewed (0 if wrong in 1 of the 2 games when we aim at predicting the responses for the 1st game). 
It seems that fixing these targets can bring a significant boost up to +0.002.</p> <p>We took 1 more week to build a GBDT/XGBoost baseline that would have scored top 10 given the CV score, with the use of the supplemental data (~20,000 sessions) that gave +0.003/0.004 at that time. As we simulated the API locally (see after), we used some training sessions to infer and noticed that it scored 0.718. We were hoping that the LB sessions were not part of the open data portal but our 1st submission, LB 0.708, immediately showed to us that we had rebuilt about a half of the data and especially the targets in the public LB, because 0.708 = (0.698 + 0.718) / 2. The host and Kaggle have been immediately informed. You know what happened next (<a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/415820)" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/415820)</a>. <br> After the release of the LB data we measured that we perfectly rebuilt ~7000 sessions over the ~11,500 of the LB data.</p> <p>We spent the first month exploring the data until we understood/knew it pretty well. For example we even reconstituted sessions for what might be schools (several games on 1 IP session), extracted every single session with at least 1 answer, …</p> <p>After the update we made a first submission that scored 0.72. This was shocking because this meant that some leaked data were remaining. A few days later we noticed that the open data was not totally similar with the state we found it 1 month before. A file was missing. So we returned to the host and Kaggle to give them more work.</p> <p><strong>This process/work led us to perfectly understand the data model</strong> (that changed since the 1st release of the game). This also allowed us to deeply understand the data itself.</p> <p>Note that we only used the sessions for which we had responses to all questions of the 2 1st level groups. 1) This is more consistant with the sessions we want to predict (game from the beginning to the end) and 2) this approach preserves performance (vs all data) while reducing training time.</p> <p>Our dataset is constituted by <strong>37323 complete sessions (23562 comp + logs) in a total of 66376 sessions</strong>.</p> <p>The supplemental data (that we fully added 1 month ago) gave us consistently <strong>CV +0.002</strong>.</p> <h2>Model</h2> <p>Our solution is mainly an ensemble of GBDT + NN models.</p> <h3>Trust your validation</h3> <p>We think that <strong>the main reason of the robustness of our solution is that we only relied on CV</strong> for decision making. No choice had been made on LB.</p> <p>Probing showed to us that the private test consists in the 1st 1450/1500 sessions served by the API. This is a small set. In our experiments 5,000 sessions is the minimum to guarantee a stable CV/LB alignment. A set less than 2,000 is very noisy so <strong>robustness was the way to go</strong>.</p> <p>We <strong>only added features that improved the CV for sure</strong>. This is not easy to delete features that you believe in but this is needed as science is not a matter of belief. There are several ways to do so: for example monitor all folds in a CV (and accept only on majority or consensus), monitor several bags (composition of CV to not overfit validation), …</p> <p>For the GBDT approach, we mainly validated on the mean of 10 bags (we defined a bag as a composition of the folds). 
As we estimated the noise to be ~0.0003, only improvements greater than the noise have been considered. For the NN as we needed to iterate quicker we only used a single bag and only incorporated &gt; 0.0003 overall improvements with at least 3 or 4 (over 5) folds improved.</p> <h3>Metric</h3> <p>We experimented a lot on finding a threshold by question but found that this approach is less robust than a single threshold. We mainly used 0.625 as global threshold despite our highest LB scores that were obtained with a threshold per question.</p> <h3>GBDT</h3> <p>We prototyped a baseline with <strong>XGBoost because of the structured/tabular nature of the data</strong>. The feature engineering process is interesting to understand what is predictive and to understand the causation, i.e. how the features or decision criteria that enable to predict correctly.</p> <p>Generally speaking we followed 3 ways to build features: <strong>business knowledge</strong>, our <strong>intuition</strong> playing the game and a meticulous <strong>exploration of the data</strong>.<br> Business knowledge refers to using expert knowledge. Reading the papers of the researchers that built this game allow to understand the game beyond usage. For example, Jo Wilder has been built to improve the players reading skills. So this means that the text duration should be important. These are like killer features.</p> <p>We exclusively made use of Polars because of the CPU constraints and to simply learn it.<br> Our features (663, 1993, 3734 for each level_group) are mainly <strong>durations and counts for different aggregations</strong>: how much time in a level, in a room, reading a text, interacting in some way (event type), how many events in a level_group, how many events of each type, how many events of each type in a room or a level, … <br> We also built a few notebook dedicated features: how many type of events on the notebook in a level, … <br> Despite our efforts we weren't able to extract useful information from the coordinates, the only few features of this type had been mean and std for some events in the activities (journal interactions for example).</p> <p>We considered that injecting targets predicted in the previous level groups was a compression of the signal, meaning a loss of information, so we used, for each session, <strong>all interactions from the beginning of the game/session</strong>. This led to a +0.002 at the time of this choice.</p> <p>After the API needed to order the data, we noticed that <strong>models trained both on original order and on index order</strong> but validated on index order (inference order) improved our scores. This leads to more variety that was needed to <strong>improve stability and robustness</strong>. The same goes for the composition of the validation sets: usage of several bags (composition of validation sets) based on the comp data but also on the extracted data improved our scores. We detected late that increasing the number of folds from 5 to 10 could also be leveraged.</p> <p>The code for GBDT allows to switch from XGBoost to LightGBM and CatBoost with a simple variable parameter but despite the good scores (~0.001 less than XGBoost), this did not bring to ensemble so we sticked to only XGBoost.</p> <p>We experimented a lot around feature selection but were unable to build a stable strategy. 
<p>We considered that injecting targets predicted in the previous level groups was a compression of the signal, meaning a loss of information, so we used, for each session, <strong>all interactions from the beginning of the game/session</strong>. This led to a +0.002 at the time of this choice.</p> <p>Once the API required the data to be ordered, we noticed that <strong>models trained both on the original order and on the index order</strong>, but validated on index order (the inference order), improved our scores. This brings more variety, which was needed to <strong>improve stability and robustness</strong>. The same goes for the composition of the validation sets: using several bags (compositions of validation sets) based on the competition data but also on the extracted data improved our scores. We detected late that increasing the number of folds from 5 to 10 could also be leveraged.</p> <p>The code for GBDT allows switching from XGBoost to LightGBM and CatBoost with a simple variable parameter, but despite the good scores (~0.001 less than XGBoost), these did not bring anything to the ensemble, so we stuck to XGBoost only.</p> <p>We experimented a lot with feature selection but were unable to build a stable strategy. So instead of a top-down approach consisting of deleting useless features, we adopted a bottom-up approach, carefully choosing each group of features.</p> <p>Our <strong>XGBoost models score CV ~0.7025 +/-0.0003</strong> and blending 5 of them (the only XGBoost models we still have with a correct score) scores <strong>LB 0.704</strong>.</p> <h3>NN</h3> <p>After achieving a good score with gradient boosting and having understood the data well, we focused on deep learning.</p> <p>The <strong>first attempt was with Transformers</strong>. The 1st results were disappointing: CV 0.685 with 2 hours / fold (as far as we can remember). Transformers are very computationally intensive. Resources: <a href="https://arxiv.org/pdf/1912.09363.pdf" target="_blank">https://arxiv.org/pdf/1912.09363.pdf</a>, <a href="https://arxiv.org/pdf/2001.08317.pdf" target="_blank">https://arxiv.org/pdf/2001.08317.pdf</a>, <a href="https://arxiv.org/pdf/1711.03905.pdf" target="_blank">https://arxiv.org/pdf/1711.03905.pdf</a>, <a href="https://arxiv.org/pdf/1907.00235.pdf" target="_blank">https://arxiv.org/pdf/1907.00235.pdf</a>, …</p> <p>We then gave <strong>Conv1D</strong> a try. In one day we had a very simple model that scored on par with the Transformers but was <strong>10x faster</strong>, allowing us to iterate quicker. So we pushed this approach and could seamlessly scale it beyond our expectations.</p> <p>It is difficult to share the <strong>tens or hundreds of experiments</strong> needed to achieve the final solution, which is based on both a simple architecture and a slightly complex training pipeline.</p> <h4>Architecture roots</h4> <p>We browsed the literature based on the question: how do you model time in deep learning? <br> This research led us to the idea of <strong>time-aware events</strong> (i.e. <a href="https://proceedings.mlr.press/v126/zhang20c/zhang20c.pdf" target="_blank">https://proceedings.mlr.press/v126/zhang20c/zhang20c.pdf</a>) and back to <strong>WaveNet</strong> (<a href="https://arxiv.org/pdf/1609.03499.pdf" target="_blank">https://arxiv.org/pdf/1609.03499.pdf</a>) because it uses <strong>Conv1D to model long sequences with considerations of causality</strong>. <br> Other papers also inspired us: <a href="https://arxiv.org/pdf/1703.04691.pdf" target="_blank">https://arxiv.org/pdf/1703.04691.pdf</a> builds on top of the WaveNet paper for time series, and <a href="https://idus.us.es/bitstream/handle/11441/114701/Short-Term%20Load%20Forecasting%20Using%20Encoder-Decoder%20WaveNet.pdf?sequence=1&amp;isAllowed=y" target="_blank">https://idus.us.es/bitstream/handle/11441/114701/Short-Term%20Load%20Forecasting%20Using%20Encoder-Decoder%20WaveNet.pdf?sequence=1&amp;isAllowed=y</a> also builds on top of WaveNet. <br> We also have to mention the excellent work that <a href="https://www.kaggle.com/abaojiang" target="_blank">@abaojiang</a> shared (<a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/398565" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/398565</a> and <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part" target="_blank">https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part</a>).
It inspired our research and maybe successfully biased it.</p> <p>Let's focus on the model of our efficiency submission, which is also part of our final ensemble and whose performance is nearly the same as that of models with a few more features.</p> <h4>Feature representations</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2Ff692423c63172494ea4df97b01d65d33%2Fdata.png?generation=1702113769960411&amp;alt=media" alt=""></p> <p><strong>5 features as inputs: duration, text_fqid, room_fqid, fqid, event_name + name</strong> (name is the event type from the original data model, as far as we remember). Each of these inputs is encoded/embedded into a vector representation (d_model = 24) before being merged. The 4 <strong>categorical features feed a classical Embedding layer and the duration feeds a TimeEmbedding</strong>, which is a custom block.</p> <p>Developing the GBDT solution showed that the <strong>duration</strong> was crucial, so we spent a significant amount of time trying to model it well. The TimeEmbedding layer is a composition of 4x ConvBlock, which is inspired by the Transformer main block: Conv1D -&gt; skip connection -&gt; layer norm -&gt; dropout.</p> <pre><code>import tensorflow as tf

class TimeEmbedding(tf.keras.layers.Layer):
    # Embeds the scalar duration into a d_model-dim vector with a stack of ConvBlocks.
    def __init__(self, d_model=24, n_blocks=4, name='time_embedding'):
        super(TimeEmbedding, self).__init__(name=name)
        self.conv_blocks = [ConvBlock(d_model) for _ in range(n_blocks)]

    def call(self, inputs):
        x = tf.expand_dims(inputs, axis=-1)  # (batch, seq_len) -&gt; (batch, seq_len, 1)
        for conv_block in self.conv_blocks:
            x = conv_block(x)
        return x
</code></pre> <pre><code>class ConvBlock(tf.keras.layers.Layer):
    # Transformer-like block: Conv1D -&gt; skip connection -&gt; layer norm -&gt; dropout.
    # The kernel size and dropout rate were lost in the original post; the values here are assumptions.
    def __init__(self, d_model=24, kernel_size=3, dropout=0.1, name=None):
        super(ConvBlock, self).__init__(name=name)
        self.conv1d = tf.keras.layers.Conv1D(d_model, kernel_size, padding='same')
        self.layer_norm = tf.keras.layers.LayerNormalization()
        self.dropout = tf.keras.layers.Dropout(dropout)

    def call(self, inputs):
        x = self.conv1d(inputs)
        x = x + inputs  # skip connection
        x = self.layer_norm(x)
        outputs = self.dropout(x)
        return outputs
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F9407390ffd7f226b7be0f57c37051f77%2Ftime_embedding.png?generation=1702113795144186&amp;alt=media" alt=""></p> <h4>Time-aware events</h4> <p>As said, the goal of building these representations was to model time-aware events. We considered the <strong>categorical features as events</strong> because they represent the user's interactions with business entities of the game. We then tried to incorporate the duration to make them time-aware. Our main intuition turned out to be the best. It is a <strong>simple solution based on operator priority, representing that the duration should be associated with each event before the events are combined together</strong>: duration * event_1 + duration * event_2 + … which factorizes to duration * (event_1 + event_2 + …).</p> <pre><code>class ConvNet(tf.keras.Model):
    # Level-group backbone ("embedder"): merges the 4 categorical embeddings and the duration
    # TimeEmbedding into time-aware events, then pools over the sequence.
    # The exact input structure (here a dict of feature tensors) and the assumption that
    # input_dims maps each categorical feature to its vocabulary size are reconstructions.
    def __init__(self, input_dims, n_outputs, d_model=24, n_blocks=4, name='conv_net'):
        super(ConvNet, self).__init__(name=name)
        self.input_dims = input_dims
        self.n_outputs = n_outputs
        self.d_model = d_model
        self.n_blocks = n_blocks
        self.event_embedding = tf.keras.layers.Embedding(input_dims['event_name'], d_model)
        self.room_embedding = tf.keras.layers.Embedding(input_dims['room_fqid'], d_model)
        self.text_embedding = tf.keras.layers.Embedding(input_dims['text_fqid'], d_model)
        self.fqid_embedding = tf.keras.layers.Embedding(input_dims['fqid'], d_model)
        self.duration_embedding = TimeEmbedding(d_model, n_blocks)
        self.gap = tf.keras.layers.GlobalAveragePooling1D()

    def call(self, inputs):
        event = self.event_embedding(inputs['event_name'])
        room = self.room_embedding(inputs['room_fqid'])
        text = self.text_embedding(inputs['text_fqid'])
        fqid = self.fqid_embedding(inputs['fqid'])
        duration = self.duration_embedding(inputs['duration'])
        x = duration * (event + room + text + fqid)  # time-aware events
        outputs = self.gap(x)
        return outputs

    def get_config(self):
        config = super(ConvNet, self).get_config().copy()
        config.update({
            'input_dims': self.input_dims,
            'n_outputs': self.n_outputs,
            'd_model': self.d_model,
            'n_blocks': self.n_blocks,
            'name': self._name,
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2Fca563daa4f9ef7bf8d8b20e0c85c7951%2Ftime_aware_events_1.png?generation=1702113822015438&amp;alt=media" alt=""><br> The 2 representations are equivalent: you can think of time-aware events either as a combination of time and sub-events or as a combination of sub-events and time.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F1f65d399dfd93f128ca6c887d0f0be89%2Ftime_aware_events_2.png?generation=1702113853644045&amp;alt=media" alt=""></p> <h4>Training pipeline</h4> <p>The training pipeline is not totally straightforward.</p> <p><a href="https://www.kaggle.com/dongyk" target="_blank">@dongyk</a> published great schematics that are useful to illustrate what is explained below: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420217#2332166" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420217#2332166</a>.</p> <h5>1st step (pre-training?)</h5> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F970baa66fc37b301bb8960cc92cf8ab4%2Fpre_training.png?generation=1702113879947155&amp;alt=media" alt=""></p> <p>The best approach for us consists of a kind of <strong>backbone that represents the events of a level_group</strong>.</p> <p>This backbone is trained on all the data available for this level_group (i.e. on complete + incomplete sessions). It is associated with a temporary SimpleHead optimizing the BCE loss.</p> <pre><code>class SimpleHead(tf.keras.Model):
    # Small MLP head. The hidden and output activations were lost in the original post;
    # relu + sigmoid (for the BCE loss) are assumptions.
    def __init__(self, n_units, n_outputs, name='simple_head'):
        super(SimpleHead, self).__init__(name=name)
        self.ffs = [tf.keras.layers.Dense(units, activation='relu') for units in n_units]
        self.out = tf.keras.layers.Dense(n_outputs, activation='sigmoid')

    def call(self, inputs):
        x = inputs
        for ff in self.ffs:
            x = ff(x)
        outputs = self.out(x)
        return outputs
</code></pre> <p>This approach allows us to score <strong>CV 0.70025 +/- 0.0005</strong>.</p> <h5>2nd step (training?)</h5> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2208184%2F98e32800c54ffab63d98d90d220fbbcd%2Fend_2_end.png?generation=1702113900174294&amp;alt=media" alt=""></p> <p><strong>The weights of each of the 3 backbones (1 per level_group) are frozen</strong> for the 2nd stage of training, to speed up training but also because it is more stable and efficient. These backbones can be thought of as "embedders".</p> <p>During this 2nd step, <strong>all the submodels that compose the solution were trained on all complete sessions in an end-to-end setup</strong>. The input data are 3 sequences of the 5 features, 1 for each of the 3 level groups. Each "embedder" outputs a 24-dim vector representation. These outputs are the inputs of the heads: the representation of level_group '0-4' is used to predict the first 3 questions, and the concatenation of the previous and the current representations is used for level_groups '5-12' and '13-22' to make use of all the information.</p>
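<p>To illustrate how this second step is wired, here is a minimal sketch of assembling the 3 frozen embedders and the heads. The head sizes, the helpers <code>input_dims</code> and <code>make_level_group_inputs</code>, and the exact concatenations are reconstructed from the description and the replies further down (for the last level group, only the previous and current representations are concatenated here; concatenating all three would be the other possible reading), so treat the details as assumptions.</p> <pre><code>import tensorflow as tf

LEVEL_GROUPS = ('0-4', '5-12', '13-22')
N_QUESTIONS = {'0-4': 3, '5-12': 10, '13-22': 5}   # 18 questions in total

# One sequence of the 5 features per level group (make_level_group_inputs is hypothetical).
backbones = {grp: ConvNet(input_dims[grp], n_outputs=24) for grp in LEVEL_GROUPS}
inputs = {grp: make_level_group_inputs(grp) for grp in LEVEL_GROUPS}
emb = {grp: backbones[grp](inputs[grp]) for grp in LEVEL_GROUPS}   # 24-dim each

# Load the pre-trained backbones and freeze them: only the heads are trained end-to-end.
for grp, backbone in backbones.items():
    backbone.load_weights(f'backbone_{grp}.h5')    # weights from the 1st (pre-training) step
    backbone.trainable = False

# Heads: 24-dim input for the 1st group, 48-dim concatenations for the next ones.
concat = tf.keras.layers.Concatenate()
out_1 = SimpleHead([512, 512], n_outputs=3)(emb['0-4'])
out_2 = SimpleHead([512, 512], n_outputs=10)(concat([emb['0-4'], emb['5-12']]))
out_3 = SimpleHead([512, 512], n_outputs=5)(concat([emb['5-12'], emb['13-22']]))

model = tf.keras.Model(inputs, [out_1, out_2, out_3])
model.compile(optimizer='adam', loss='binary_crossentropy')   # BCE, F1 monitored on OOF data
</code></pre>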
<p>Proceeding like this allows us to optimize the overall performance and to monitor it with the F1 score, which is the competition metric. This means we optimized BCE with the F1 score as a metric.</p> <p>Our winning submission uses a simple <strong>MLP head</strong> but also a <strong>skip head</strong> (512 -&gt; 512 -&gt; 512, for example). <strong>MMoE</strong> did not improve on the simplest approaches.</p> <p>This approach allows us to score <strong>CV 0.70175 +/- 0.0003</strong>, which is <strong>comparable to the GBDT solution</strong>.</p> <h3>Inference</h3> <h4>Build a simulator</h4> <p>Early in the competition we built a simulator of the API. Doing so, we never experienced a single submission error. Maybe trying to keep ideas and code as simple as possible was also key to easy debugging.</p> <h4>Efficiency</h4> <p>We invested in the efficiency part of the challenge for the GBDT as well as the NNs. <br> Using <strong>Treelite</strong> for XGBoost allowed us to halve the execution time. <br> Our deep learning models were lightweight: <strong>400,000 weights</strong> for the end-to-end model that combines all the parts/sub-models. Having already used <strong>TF Lite</strong>, we knew it could be a game changer. Converting our models led to a significant boost in inference time without any performance loss (we do not remember exactly, but we think it is at least <strong>6x faster</strong> on our local inference simulator). <br> Beginning to explore pruning as well as hard quantization showed that the performance loss would be significant (which is OK in production but not in a competition), so we stuck to a simple TF Lite conversion.</p> <p>We did not leverage what seems to be a problem in the efficiency metric. As we identified the private test sessions to be the first 1450/1500 served by the API, we tried to predict only the others to check which time was used (public for public and private for private). Doing so we gained a place, but we chose not to use this.</p> <p>Our <strong>efficiency submission is a NN that scores public LB 0.702 and private LB 0.699 in less than 5 minutes</strong>.</p> <h4>Ensemble</h4> <p>We experimented with a lot of ensembling alternatives. In the end we stuck to a simple 50/50 GBDT/NN average with:</p> <ul> <li>2 kinds of GBDT: trained on original order + trained on index order (validated on index order, which is the inference case),</li> <li>3 kinds of NNs: trained on original order + trained on index order, with 5 or all features.</li> </ul> <p>As our models are lightweight we were able to build a huge ensemble: <strong>2 x 4 x 10 folds XGBoost + 3 x 4 x 5 folds NNs</strong>. The bottleneck for us was the 8 GB RAM constraint.</p> <p>The winning submission scores <strong>CV 0.705, public LB 0.705 and private LB 0.705</strong>.</p>
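<p>A minimal sketch of this final blending step: the fold averaging and the array layout are assumptions; only the 50/50 weighting and the 0.625 global threshold come from the write-up.</p> <pre><code>import numpy as np

def blend_predictions(xgb_probs, nn_probs, threshold=0.625):
    # xgb_probs: list of probability arrays, one per XGBoost model (2 x 4 x 10 folds).
    # nn_probs:  list of probability arrays, one per NN model (3 x 4 x 5 folds).
    # Each array has shape (n_sessions, n_questions_of_the_level_group).
    gbdt = np.mean(xgb_probs, axis=0)
    nn = np.mean(nn_probs, axis=0)
    proba = 0.5 * gbdt + 0.5 * nn
    return (proba &gt; threshold).astype(int)
</code></pre>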
<h2>Conclusion</h2> <p>The main achievement of our work is that it is a good solution for the researchers, learners and children who can benefit from it, and we hope it will contribute to progress towards a better learning experience. Up to you guys!</p> <p>Thanks if you read this far!<br> If you have any questions, do not hesitate to ask. We will do our best to respond.</p> <h2>Presentation to the host</h2> <p>A video presentation to the host has been recorded and is available on demand. Feel free to ask via PM.</p> <h1>Sources</h1> <p>Below are the main sources that we used. More sources can be found in the section <em>Details of the submission</em> above.</p> <ul> <li><a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420332</a>,</li> <li><a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>,</li> <li><a href="https://arxiv.org/pdf/1609.03499.pdf" target="_blank">https://arxiv.org/pdf/1609.03499.pdf</a>,</li> <li><a href="https://www.tensorflow.org/lite/guide" target="_blank">https://www.tensorflow.org/lite/guide</a></li> </ul> <p>I, too, think that I have been lucky to team with you!</p> <p>Hi <a href="https://www.kaggle.com/dongyk" target="_blank">@dongyk</a>!</p> <p>You are working hard. Great! It is really satisfying to see that our work is useful for you. With this mindset you are going to learn a lot.</p> <p>Your understanding and your schematic seem nearly correct. <br> Keep in mind that in step 2 of the training the weights of the ConvNets are frozen. This means backpropagation only changes the heads' weights. <br> In this 2nd step we have 3 sequences of data as inputs, 1 for each level_group. Each of these sequences feeds its own ConvNet/embedder, which outputs a representation in the form of a 24-dim vector. The representation for level_group 0-4 feeds the SimpleHead for level_group 0-4, which outputs a vector of 3 values that are the probabilities of the 3 responses of this level_group. The representations for level_group 0-4 and 5-12 are concatenated into a 48-dim vector that feeds the SimpleHead for the 2nd level_group, which outputs 10 values corresponding to the 10 questions of level_group 5-12. Same for level_group 13-22.</p> <p>You correctly spotted that this was done to train the whole ensemble to optimize the overall F1 score, which is what we want.<br> We tried every setup: training the whole ensemble from scratch, each part independently, … The setup we presented here corresponds to what worked best for us.</p> <p>Does this answer your question(s)? If not, feel free to ask.</p> <p>Wow! Thanks for your schematics, which are 100% correct as I understand them! The write-up has been updated to link to your post.</p> <p>If the weights of the ConvNets are not frozen, then they are updated by the backprop coming from each output they are linked to when training end-to-end.</p> <p>We first trained separately, which gave a baseline. Then we tried to train the end-to-end setup from scratch. As you might have noticed, the outputs allow us to monitor the loss, which is composed of the losses of each level group, and these can also be monitored individually. We observed that the loss for 0-4 was the first to converge, but also that, as the others went down, it tended to increase. We interpreted that as the representations of the previous level groups being optimized for the next level groups. This is not what we wanted. We wanted a good representation for a level group that can be used by the next level groups. So we chose to first optimize each ConvNet separately and to exploit its representation with the heads in a second stage.</p> <p>As the F1 score cannot be optimized directly, the main goal of the end-to-end approach was to be able to monitor the F1 score to select the best weights.</p>
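<p>A minimal sketch of what "monitor the F1 score to select the best weights" can look like in practice, assuming out-of-fold probabilities are collected at each checkpoint. The flattened macro-F1 below is the way the competition metric was usually computed in public notebooks, and the variable names are illustrative.</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score

def oof_f1(oof_probs, oof_labels, threshold=0.625):
    # oof_probs / oof_labels: arrays of shape (n_sessions, 18), one column per question.
    preds = (oof_probs &gt; threshold).astype(int)
    return f1_score(oof_labels.reshape(-1), preds.reshape(-1), average='macro')

# Keep the checkpoint with the best OOF F1 (training_checkpoints is a hypothetical
# list of (oof_probs, weights) pairs collected during the end-to-end training).
best_f1, best_weights = -1.0, None
for checkpoint_probs, weights in training_checkpoints:
    score = oof_f1(checkpoint_probs, oof_labels)
    if score &gt; best_f1:
        best_f1, best_weights = score, weights
</code></pre>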
<p>Thanks for your question <a href="https://www.kaggle.com/aesoptacit" target="_blank">@aesoptacit</a>!</p> <p>There is no single answer, as it depends on the data, the problem, the metric, the level of performance achieved, …<br> If 4 out of 5 folds improve, it is unlikely that the average does not improve, but it is possible. Such a dynamic would probably require further exploration to understand why.</p> <p>In this competition, for the GBDT approach, we mainly worked with 10 bags (maybe 5 would have been enough) and estimated the noise to be ~0.0003. Only overall improvements &gt; 0.0003 were considered. See the code to look at the outputs we monitored: <a href="https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-gbdt-training" target="_blank">https://www.kaggle.com/code/pdnartreb/pspfgp-1st-place-gbdt-training</a>. For the NN, as we needed to iterate quicker, we only used a single bag and only incorporated overall improvements &gt; 0.0003 with at least 3 or 4 (out of 5) folds improved.</p>
Predict Student Performance from Game Play
240th place solution. Or, how difficult it is to earn your first Kaggle medal.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, Kagglers!<br> This is my 240th solution (which earned a bronze medal on the public leaderboard).<br> It's a simple blend of CatBoost and XGBoost with hyperparameters tuning.</p> <p><a href="https://www.kaggle.com/code/lyalindmitriy/catboost-and-xgb-blend" target="_blank">https://www.kaggle.com/code/lyalindmitriy/catboost-and-xgb-blend</a></p> <p>P.S. Thanks to organizers for such a competition!<br> P.P.S. How do you feel, guys, when you work hard, carefully select final submissions, but then realize that a silver medal was among your other submissions? Arrgh!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11108995%2F6aa114e1f5b5e6b6a9daf0df7e1d1aad%2F2023-06-29%20%2020.30.24.png?generation=1688041861940054&amp;alt=media" alt="silver"></p> <p>Thanks, Andrea! I will try.<br> P.S. I was in Naples and Vesuvius, by the way. These places are gorgeous and astonishing!<br> This shot is from my collection:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11108995%2F3cda239fef529c69b0449444340dac60%2Fvess.jpg?generation=1688047452206495&amp;alt=media" alt=""></p> <p>Swapnil Chowdhury, 🙏</p>
Predict Student Performance from Game Play
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks <a href="https://www.kaggle.com/erijoel" target="_blank">@erijoel</a> and well done on your 4th place.</p> <p>I tend to find that it is better to focus on improving a single model than spread my efforts over multiple. That said, it can definitely be beneficial to add more models, particularly if they are different architectures. This is especially true if your are part of a team, when each member can focus on a different model. For small datasets, I think it still depends, I guess one advantage is that it typically requires less time/resources to train multiple models for small datasets which makes it more practical.</p> <h1>2nd Place Solution</h1> <p>First, I would like to take the opportunity to thank The Learning Agency Lab for hosting the competition and the Kaggle team for making it happen.</p> <p>Here are the details of the 2nd place solution.</p> <p><strong>Summary</strong></p> <ul> <li>A single ‭LightGBM ‬model was used to predict all the questions (i.e. not separate models per question or level group)</li> <li>5 fold cross validation was used during development but for the final submission a single model was trained on all of the data</li> <li>The code was optimised to minimise the efficiency score<ul> <li>For the final submission the vast majority of the execution time was spent on the LightGBM prediction stage</li> <li>There was extensive use of numba and C for the feature generation code</li></ul></li> <li>The model contained ‬1296 features</li> <li>A Threshold value of 0.63 was used</li> </ul> <p>‭<strong>Features</strong></p> <p>A lot of the most important features were based on the time taken to complete a task or react the an event. 
One of the most important features (after some of the basic features such as the question number and the total count of events for a level group) was the amount of time the user spent looking at the report in level 1 (feature name: L‬G0_L1_first_report_open_duration‭).</p> <p>A plot of the feature importance (LightGBM gain) of the top features can be seen below:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1199911%2F8bb9d994b0bb3f59dcff38f99eb95ebe%2Ffeature_importance.png?generation=1688479725194354&amp;alt=media" alt=""></p> <p><strong>Code</strong></p> <p>‭The code for each stage of the solution can be found here:</p> <ol> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-0-preprocess-data" target="_blank">preprocess data</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-1-features-code" target="_blank">features code</a> (<a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-1-c-feature-code" target="_blank">features code utility script</a>)</li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-2-generate-features" target="_blank">generate features</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-3-train-model" target="_blank">train model</a></li> <li><a href="https://www.kaggle.com/mark4h/jowilder-2nd-place-solution-4-submission" target="_blank">submission</a></li> </ol> <p>Thanks. Well done on 1st place.</p> <p>I can’t really add much to the topic of CV-LB correlation. I only made a handful of submissions and they were all of essentially the same model (tuned to improve execution time), so I don’t have much of a spread to correlate.</p> <p>On the question of single model vs separate models, my take was to use as simple a solution as possible (a single model), unless there was evidence that a more complex solution was needed (separate models). I never found any evidence that separate models were needed.</p>
Predict Student Performance from Game Play
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to the hosts of the competition and my teammates ( <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>, <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a>, and <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a>). I am thrilled that <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a> and I will become GM, <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a> is one step closer to becoming a Master, and <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a> will get his second gold medal after this competition!</p> <p>Here We will explain our overall solution, <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a> also provided additional explanation of the parts he worked on: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274</a></p> <h1>Classification Task Formulation</h1> <p>In this competition, we are asked to predict 18 values for each session. Each session contains 3 level groups. There are multiple ways to model this.</p> <ol> <li>18 binary classifiers</li> <li>3 Level group classifiers, each one can be<br> a. A multi-label classifier that predicts all values within a level group<br> b. A binary classifier that takes “question index” as a feature within a level group</li> <li>1 classifier that is<br> a. A multi-label classifier that predicts 18 values within a session<br> b. A binary classifier that takes “question index” as a feature within a session</li> </ol> <p>For Gradient boosted tree models, method 2b &gt; method 3b &gt; method 1. Method 2a and 3a are ignored because training the multi-label task is a lot slower with Gradient boosted tree models.</p> <p>For NN models, we focus on the method 2a and 3a, because</p> <ul> <li>These 2 methods are not well handled by tree models</li> <li>Multi-label learning makes more sense, because of the F1 score setting of this competition. (some posts discuss we should not optimize for 1 question).</li> <li>Multi-label NN models are faster to train and infer.</li> </ul> <h1>Additional dataset generated from the raw data</h1> <p>We create an additional dataset from the raw data, it contains 11343 complete sessions.<br> This dataset boosts the CV scores for GBT models by about +0.001~2, but there is not much effect on the public and private scores, and it has both positive and negative outcomes.<br> However, it works very well for NN models, we see +0.002 improvement in both CV and public scores.</p> <h1>Validation</h1> <p>We are using 5-fold GroupKFold on session_id so that there won’t be any seen sessions in the validation set. 
Also we didn’t include additional data in our validation set.</p> <h1>Gradient Boosted Tree</h1> <p>Per question classifier is handled by <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a>, Per level, and All-in-1 classifier is handled by <a href="https://www.kaggle.com/tangtunyu" target="_blank">@tangtunyu</a> <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>. That’s why there are some inconsistencies in the data preprocessing steps, such as sort by index vs sort by time.</p> <h2>Per Question Classifiers</h2> <p>We create features for each level group and sorted by index. The features and the sorting methods differ from other models.</p> <pre><code> df1 = df.(pl.col() == ) df2 = df.(pl.col() == ) df3 = df.(pl.col() == ) df1 = df1.sort(pl.col(), pl.col()) df2 = df2.sort(pl.col(), pl.col()) df3 = df3.sort(pl.col(), pl.col()) </code></pre> <h5>The number of features:</h5> <ul> <li>Level group 0-4: 1,000 features</li> <li>Level group 5-12: 2,000 features</li> <li>Level group 13-22: 2,400 features</li> </ul> <h5>Feature Selection</h5> <p>We try feature selection with out-of-folds but the public scores tend to decrease, so we don’t select features about this model in the final submission.</p> <h5>The typical features</h5> <ul> <li>Elapsed time between the previous level group and the current level group.</li> <li>Elapsed time and index count between flag events.</li> <li>Prediction probabilities for previous questions.</li> <li>Sum of the most recent M (M=1,2,…) prediction probabilities.</li> </ul> <p>Flag events are events that must be passed during game progression. We extract them with reference to jo_wilder's source code, game playing, and the log data of users who have got perfect scores. </p> <h5>Single Best Model(5folds XGBoost)</h5> <p>CV: 0.702, Public LB: 0.700, Private LB: 0.701</p> <h2>Per Level Group Classifiers</h2> <p>In order to allow the level group models to utilize information from previous level groups, we first split the training data by:</p> <pre><code> df1 = df.(pl.col() == ) df2 = df.((pl.col() == ) | (pl.col() == )) df3 = df df1 = df1.sort(pl.col(), pl.col()) df2 = df2.sort(pl.col(), pl.col()) df3 = df3.sort(pl.col(), pl.col()) </code></pre> <p>Feature selection is then applied after feature engineering.</p> <h5>Features Engineering</h5> <ul> <li>Room distance and screen distance</li> </ul> <pre><code> (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), (pl.col() - pl.col().shift()).over([]).().alias(), </code></pre> <ul> <li>Final scene, checkpoint and answer time</li> </ul> <p>By playing the game manually, we know that students are only taking the quiz at the end of each level. The shorter time they used to finish the session of answering questions, the higher probability that they answered those questions correctly. Captured by features like:</p> <pre><code> pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), (pl.col().(pl.col() == ).() - pl.col().(pl.col() == ).()).alias() </code></pre> <ul> <li>Unnecessary moves</li> </ul> <p>Also from the experience of playing the game, we believe that there are many people who have played the game for more than one time. 
Would be great if we are have some feature to identify these players</p> <pre><code>unnecessary_data_values = {} q (): unnecessary_data_values[q] = {} feature_type [, , ]: unnecessary_data_values[q][feature_type] = [] unique_values = (df.((pl.col() == q))[feature_type].unique()) val unique_values: df.((pl.col() == q) &amp; (pl.col(feature_type) == val))[].n_unique() &lt; : unused_data_values[q][feature_type].append(val) </code></pre> <p>If they are playing for the first time, they likely have many unnecessary moves. Then we calculate the time / actions they have spent of these moves</p> <pre><code> col []: aggs.extend([ *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).count().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], *[pl.col(col).((pl.col() == level) &amp; (pl.col().is_in(unused_data_values[level][]))).().alias() level level_feature], ]) </code></pre> <ul> <li>Time / actions spent on tasks</li> </ul> <p>Another class of feature to filter out experienced players is to measure how fast they finish the tasks before the quiz in every level group. For example the first task of the game is to find the notebook, our hypothesis is that an experienced player would spend less time and actions to finish it. And they have a higher chance to answer the quiz questions correctly.</p> <p>Two examples for chapter 1</p> <pre><code> pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias(), pl.col().((pl.col() == ) | (pl.col() == )).apply( s: s.() - s.()).alias() </code></pre> <h5>Feature Selection</h5> <p>The selection is based on Catboost feature importance over the Catboost feature importance with shuffled labels. (Which is the idea of Null Importances <a href="https://www.kaggle.com/code/ogrellier/feature-selection-with-null-importances" target="_blank">https://www.kaggle.com/code/ogrellier/feature-selection-with-null-importances</a>)</p> <ol> <li>Compute Catboost feature importance with the entire training data.</li> <li>Shuffle the training data labels and obtain the importance again for N times.</li> <li>Compute the final importance by the base importance divided by mean random importance.</li> <li>We then use <code>gp_minimize</code> to search for the best feature size based on 5-fold cross-validation.<br> In the end, we have 233, 647, 693 features respectively for each level group.</li> </ol> <p>With Catboost 5-fold CV out of fold F1: 0.7019<br> With Xgboost 5-fold CV out of fold F1: 0.7021</p> <p>Then feature engineering is applied to each of the data frames above. 
And the transformed data frames are used to train our level group models.</p> <h5>18-in-1 Classifiers</h5> <p>To train the 18-questions-in-1 classifier, we further concat the above 3 data frames together to form a large data frame.</p> <pre><code> all_df = pd.concat([ df1[FEATURES1 + []], df2[FEATURES2 + []], df3[FEATURES3 + []], ], axis=) </code></pre> <p>This mega concatenation creates many null values because some features only exist in a particular level group. That’s why when building the features for this 18-in-1 classifier:<br> First, reuse the feature selection results from the Per Level Group case.<br> Rerun feature selection again after the mega concatenation</p> <p>With Catboost 5-fold CV out of fold F1: 0.7002<br> With Xgboost 5-fold CV out of fold F1: 0.7007</p> <h1>Neural Network</h1> <p>Model: Transformer + LSTM</p> <p>The pipeline of our NN is based on this public notebook: <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-event-aware-tconv-with-only-4-features" target="_blank">https://www.kaggle.com/code/abaojiang/lb-0-694-event-aware-tconv-with-only-4-features</a></p> <h5>Numerical input:</h5> <ul> <li>np.log1p( elapsed_time_diff )</li> </ul> <h5>Categorical inputs:</h5> <ul> <li>event_comb, room_fqid, page, text_fqid, level</li> </ul> <h5>Transformer part (3 variants):</h5> <ul> <li>Type A: Conformer like transformer, with last query attention (<a href="https://www.kaggle.com/competitions/riiid-test-answer-prediction/discussion/218318" target="_blank">https://www.kaggle.com/competitions/riiid-test-answer-prediction/discussion/218318</a>)</li> <li>Type B: Conformer like transformer, with last query attention</li> <li>Type C: Standard transformer</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1904629%2Fb48403960b8dc8cceff7d92e3d18a1bd%2FPSP%20NN.png?generation=1688066794674184&amp;alt=media" alt=""></p> <h5>Post Transformer LSTM:</h5> <ul> <li>1 Bidirectional LSTM + 1 LSTM layer</li> </ul> <h5>Pooling method:</h5> <ul> <li>Concat of sum, std, max, last</li> </ul> <h5>Training method:</h5> <ol> <li>As mentioned in the previous section, we train the model with multi-label, and there are two variants:<br> a. One model per level group<br> b. Same model for ALL level groups</li> <li>We find that combining models trained with different settings can improve both the CV and public LB.</li> <li>Additional data was used for training, it improves both CV and public LB for NN</li> </ol> <h3>Best NN only ensemble (5 NN with different settings):</h3> <ul> <li>CV: 0.7028, Public LB: 0.701, Private LB: 0.704</li> <li>It turns out that NN doesn’t perform very well in Public LB, but does well in Private LB.</li> </ul> <h1>Submission Selection</h1> <p>We selected a submission with the highest LB, a submission with the highest CV, and a submission with a target on a reasonably high CV and a high variety of methods/models. </p> <p>Our best-selected sub is an ensemble of </p> <ul> <li>One level group Catboost, one 18-in1 Catboost, two 18-in1 Xgboost, and three NN.</li> <li>The NNs we selected are Type A per level group, Type B per level group, and Type C ALL level groups. This combination gives good diversity to the final ensemble.</li> <li>We ensemble GBT models and NN models on oof data separately with 2 standalone Logistic regression models, then combined them with GBT:NN = 6:4 ratio. 
</li> <li>The manual weighting in combining GBT and NN results is due to NN not performing well in public LB, so we didn't have enough confidence to give too much weight to our NN models as discussed below. </li> </ul> <p>Best selected ensemble:</p> <ul> <li>CV: 0.7046, Public LB: 0.706, Private LB: 0.704</li> </ul> <h1>0.705 subs that we haven’t picked</h1> <p>We have three 705 private score submissions that are not selected. Our best-selected subs ranked 13th in all of our subs in terms of private score.</p> <p>Among these 705 private subs:</p> <ul> <li>Per-question GBT model + Group Level GBT model gives us the 705 private score, but not a high ensemble CV score. </li> <li>Per-level-group GBT + NN models with Logistic regression ensemble gives us the 705 private score, but not a high public score.</li> </ul> <h1>Observations:</h1> <ol> <li>NN models perform well in CV and private but very poorly in public, while GBT models fit the public so well, It is very strange…</li> <li>Single-question GBT models makes a lower CV ensemble but perform quite ok in both public and private</li> </ol> <p>Hi <a href="https://www.kaggle.com/tezdhar" target="_blank">@tezdhar</a> , thank you for the question.<br> You are right, I embed categorical inputs, here are the nodebooks to train these NN.</p> <p>Type A per level group : <a href="https://www.kaggle.com/wimwim/psp-transformer-type-a-grp-level" target="_blank">https://www.kaggle.com/wimwim/psp-transformer-type-a-grp-level</a><br> Type B per level group: <a href="https://www.kaggle.com/wimwim/psp-transformer-type-b-grp-level" target="_blank">https://www.kaggle.com/wimwim/psp-transformer-type-b-grp-level</a><br> Type C ALL level groups: <a href="https://www.kaggle.com/wimwim/psp-transformer-type-c-18in1" target="_blank">https://www.kaggle.com/wimwim/psp-transformer-type-c-18in1</a></p> <p>Yes, and you can find more information about this section in <a href="https://www.kaggle.com/yyykrk" target="_blank">@yyykrk</a> 's solution write-up:<br> <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420274</a></p> <p>Thank you my friend!</p>
Predict Student Performance from Game Play
43th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you </p> <h2>Solution</h2> <ul> <li>Create features for each level_group.<ul> <li>In addition, using the previous level_group features.</li></ul></li> <li>LGBM and XGB model for each level.</li> <li>Optimize hyperparameters for each level. (Only XGB)</li> <li>I think the amount of features is almost the same as what is in the public.</li> </ul> <h3>Not work for me</h3> <ul> <li>Catboost model</li> <li>level_group probability as feature fo stacking model.</li> <li>sample weight for each level.</li> <li>optimize threshold of f1-score for each level.</li> <li>As a feature of gbdt, using event seqence vectorize with w2v.</li> </ul> <h3>Not try yet</h3> <ul> <li>Ensenble knoledge tracing model with transformer or 1dcnn</li> <li>Optimize hyperparameters of LGBM for each level.</li> </ul> <p>repo: <a href="https://github.com/konumaru/predict_student_performance" target="_blank">https://github.com/konumaru/predict_student_performance</a></p> <p>Thank you.</p> <p>Sorry, but the results of the experiment are not neatly summarized and difficult to share.</p> <p>I was looking at the git commit log and SubmissionScore during the experiment, so I will only share the last few submission results so if you have time you can compare them by date, etc.</p> <p>commit log: <a href="https://github.com/konumaru/predict_student_performance/commits/main" target="_blank">https://github.com/konumaru/predict_student_performance/commits/main</a><br> <code>[update] cv=hogehoge</code></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3460322%2F671e9833bdec7cac9db131573eb142b7%2F2023-07-01%20144143.png?generation=1688190165710104&amp;alt=media" alt=""></p>
Predict Student Performance from Game Play
49th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you!</p> <p>Thank you so much!</p> <p>First and foremost, I would like to express my gratitude to the hosts who made significant efforts in organizing the competition, despite numerous challenges. Congratulations to all the winners! While my model does not come close to the top performers, I am sharing my solution, hoping that it can be of use to someone, as it is relatively simple.</p> <h2>Overview</h2> <p>The central idea of my model revolves around the ensemble of raw log data processing using 1D-CNN and aggregated feature processing using GBDT. Individually, these models achieved CV=0.696 and Public LB=0.697, respectively. However, by ensembling them, I was able to improve the scores to CV=0.700, Public LB=0.700, and Private LB=0.700.</p> <h2>Models</h2> <p>The competition data provided consisted of gameplay logs, with several hundred to several thousand logs per session. Hence, I employed two modeling approaches: 1D-CNN, which directly extracts features from the temporal log sequences, and LightGBM, which utilizes aggregated features obtained through feature engineering. For 1D-CNN, I based my implementation on the <a href="https://www.kaggle.com/code/abaojiang/lb-0-694-tconv-with-4-features-training-part" target="_blank">public notebook by ABAOJIANG</a>. As for feature engineering and LightGBM, I referred to the <a href="https://www.kaggle.com/code/leehomhuang/catboost-baseline-with-lots-features-inference" target="_blank">public notebook by ONELUX</a>. I extend my gratitude to them for sharing their excellent notebooks.</p> <p>Regarding 1D-CNN, I used the encoder part of the public notebook as the base. After performing feature extraction using 1D-CNN, I applied the Multi-Head Attention structure before conducting temporal aggregation. I utilized five input features: numerical features such as diff(elapsed_time) and log(elapsed_time), and categorical features such as event_name + name, room_fqid, and fqid + text_fqid.</p> <p>For LightGBM, I added several features to the ones presented in the public notebook, resulting in inputting over 2000 features. Most of the additional features were related to text_fqid, including total time spent displaying text for each fqid and the reading speed per word.</p> <p>Furthermore, I combined a subset of these features (around 6) with 1D-CNN to create a new neural network model, which also had a positive effect (CV+0.001 approximately). I integrated these three models using linear regression-based stacking to generate the final predictions.</p> <p>However, for the simplest questions (2, 3, 18), I did not perform any modeling and predicted all of them as 1. 
The inference time was cutting it close at around 9 hours (528 minutes), and I was quite nervous during the final submission lol.</p> <h2>What Didn't Work</h2> <p>Here is a list summarizing the experiments I conducted that did not yield successful results:</p> <ul> <li>Building the encoder solely using Transformers</li> <li>Utilizing state-of-the-art time series neural networks like <a href="https://github.com/timeseriesAI/tsai/blob/main/tsai/models/PatchTST.py" target="_blank">Patch TST</a> or <a href="https://github.com/cure-lab/SCINet" target="_blank">SCINet</a></li> <li>Including CatBoost in the ensemble</li> <li>Handling all the problems with a single model</li> <li>TabNet</li> <li>Merging <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/412098" target="_blank">additional data</a></li> <li>DAE (Denoising Autoencoder)</li> <li>Applying an anomaly detection model to the simplest questions (2, 3, 18)</li> <li>Using LightGBM with the latent features of the neural network</li> </ul>
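<p>Going back to the stacking step described in the Models section above (the 1D-CNN, the LightGBM model and the hybrid NN combined through linear regression), here is a minimal sketch of that kind of stacking. The per-question loop, the variable names and the threshold value are assumptions, not the exact code.</p> <pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

def fit_stackers(oof_cnn, oof_lgbm, oof_hybrid, oof_labels):
    # One linear-regression blender per question, fitted on out-of-fold predictions.
    # Each array has shape (n_sessions, n_questions); oof_labels holds the 0/1 targets.
    stackers = []
    for q in range(oof_labels.shape[1]):
        X = np.column_stack([oof_cnn[:, q], oof_lgbm[:, q], oof_hybrid[:, q]])
        stackers.append(LinearRegression().fit(X, oof_labels[:, q]))
    return stackers

def predict_stacked(stackers, cnn, lgbm, hybrid, threshold=0.6):
    # threshold: placeholder value, the write-up does not state the one actually used.
    preds = []
    for q, reg in enumerate(stackers):
        X = np.column_stack([cnn[:, q], lgbm[:, q], hybrid[:, q]])
        preds.append((reg.predict(X) &gt; threshold).astype(int))
    return np.column_stack(preds)
</code></pre>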
Predict Student Performance from Game Play
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Acknowledgement</h1> <p>I'd like to thank the hosts for providing a very interesting and difficult project to work on for the past months. I am also grateful for all the public sharing on Kaggle, this has been an insane learning experience for me. Without all the public notebooks, discussion posts and old competition solutions available i would have had no chance in this competition.</p> <h1>Overview</h1> <ul> <li>Used most of the raw data for training, while validating only on the kaggle data.</li> <li>Ensemble of Transformer, XGBoost and Catboost, with 3 seeds and 5 folds each. </li> <li>Used a generic set of features based on time, index and screen_coor differences.</li> <li>Linear regression as a meta model.</li> <li>Thresholds have a big impact on LB score</li> </ul> <h1>Data</h1> <p>I used most of the raw data for training, including sessions that only completed level group 0-4 and 5-12. About ~38000 whole sessions and ~58000 sessions in total. Using the raw data increased CV by over 0.001. I validated only on the kaggle data. </p> <p>My initial data preprocessing is simply sorting by level group and index, same as what happens during inference. Also, my experiments indicated no benefit from using the hover durations, so after sorting i dropped the hover rows and re-indexed each session from 0 to len(session).</p> <h1>Transformer</h1> <p>I spent much of my time experimenting with transformers, which resulted in a light weight model that achieved 0.698 on the public and private LB, and 0.702 CV. </p> <pre><code>class (nn.Module): def (self, num_cont_cols, embed_dim, num_layers, num_heads, max_seq_len): (NN, self).() self.emb_cont = nn.( nn.(num_cont_cols, embed_dim//), nn.(embed_dim//) ) self.emb_cats = nn.( nn.(max_seq_len + , embed_dim//), nn.(embed_dim//) ) encoder_layer = nn.( d_model=embed_dim, nhead=num_heads, dim_feedforward=embed_dim, dropout=, batch_first=True, activation=, ) self.encoder = nn.(encoder_layer, num_layers=num_layers) self.clf_heads = nn.([ nn.(embed_dim, out_dim) for out_dim in [, , ] ]) def (self, x, grp): emb_conts = self.(x[:, :, :-]) emb_cats = self.(x[:, :, -].(torch.int32)) x = torch.([emb_conts, emb_cats], dim=) x = self.(x) x = x.(dim=) x = self.clf_heads[[, , ].(grp)](x) return x.() </code></pre> <ul> <li>embed_dim: 64</li> <li>num_layers: 1</li> <li>num_heads: 8</li> <li>max_seq_len: 452 (explained below), though the sequences are cropped to 256</li> <li>I used the same single model for all questions</li> </ul> <p>I found the data easy to overfit with transformers, so in an attempt to improve the signal to noise ratio i did the following:</p> <ol> <li>Identify different points in the game by string concatenating event_name, level, name, page, fqid, room_fqid, text_fqid, in the dataframe.</li> <li>Some of these occur more than once in a session. Treat these as different points by enumerating them and adding the enumeration to their names.</li> <li>Filter out the rows with points that is present in over 0.999 of the sessions. This makes each session maximum 452 steps long. 
</li> <li>Create 6 feature columns:<br> time difference, index difference, distance (cumulative distance moved, calculated from screen_coor's) difference, room_coor_x, room_coor_y and the categorical point column embedded.</li> </ol> <h1>XGBoost</h1> <p>This is my strongest single model with public LB 0.701, private 0.702, and 0.7029 CV. What stands out is that i flattened 5 of the transformer input columns (excluding the categorical column), and used all those values as individual features.</p> <p>The other features are mainly stats that can be found in public kernels, like mean and max time diff over the categoricals. The stats were calculated before applying the transformer input filtering.</p> <p>From early on I trained one model for each level group, inputing the question number as a feature. I found that CV increased by around 0.0002 compared to using a model for each question. This could be randomness, but i went with it since i thought 3 models instead of 18 would make my life easier during experimentation. Similar reasoning behind using one model for the transformer. </p> <h1>Catboost</h1> <p>Essentially looks the same as XGBoost. CV 0.7022.</p> <h1>Ensemble</h1> <p>I trained a linear regression meta model for each question, using the above models output probabilites as input, to produce the final predictions. I included probabilities of past questions and some future ones! For example the regression model trained to predict question 2 took probabilities on question 1-3 as input, to predict question 7 I used probabilities on question 1-13, and for question 16 i used probabilities on question 1-18. I took 3 seeds average before linear regression input to make it more robust. </p> <p>This finally results in public LB 0.702, private LB 0.703, CV 0.7044. </p> <h1>On threshold and submission selection</h1> <p>I tried to trust CV as much as possible, but the consistent gap between my CV and LB was suspicius until the last few days. Then I realized one reason could be my selected threshold was suboptimal on the test data.. I made some submissions with my highest CV solution, only changing the threshold, and noticed it was indeed suboptimal and caused more variation in LB score than most of my latest experiments. So in the end I selected 3 of the same solution, with different thresholds: 0.60 (best on LB), 0.62 (best during CV) and 0.64. Turned out 0.61 would have resulted in 0.704 private, but no regrets ;)</p> <p>Thank you for reading!</p> <h1>Code</h1> <p><a href="https://github.com/joelerikanders/pspgp/tree/main" target="_blank">Training code</a><br> <a href="https://www.kaggle.com/erijoel/4th-place-submission" target="_blank">Submission notebook</a></p> <p>Thank you 😄</p> <p>Thanks a lot!</p> <p>I think you're right that nested CV would have lead to more accurate or slightly pessimistic CV. Though I'm not sure whether or not it would have been a better approach, or lead to better thresholds in the end, maybe. I suppose it's a tradeoff between fast iteration and accuracy of the experiments</p> <p>Thanks for the comment! Surprising indeed, do you think the difference between question wise and single model score is due to randomness or something else? The only time I tried linear regression I did it question wise with previous probabilities the way I described</p>
Predict Student Performance from Game Play
7th Place Solution (Efficiency 1st)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for your comment.<br> In looking at the data with my own eyes, "notification_click" seemed to be some sort of checkpoint event.</p> <p>I am pleased to have fought the long and hard competition with all of you here.<br> Here I would like to outline my solution.</p> <h2>Overview</h2> <ul> <li>To make predictions for 18 questions, I trained 3 LightGBM models, one for each level_group. The reason I did not build a model for each question was primarily to reduce inference time.</li> <li>Most of the features I have created are features based on the time difference between two consecutive actions. (More on this later.)</li> <li>The CV score was improved by about 0.002 by adding raw data published by the competition host.</li> <li>Unexpectedly, the submission for the Efficiency Prize had the best score in Private Leaderboard amoung the selected sumissions. The inference time of that is approximately 3 minutes.</li> </ul> <p>The notebooks reproducing my submission are as follows:</p> <ul> <li><a href="https://www.kaggle.com/code/rsakata/psp-1-save-data" target="_blank">https://www.kaggle.com/code/rsakata/psp-1-save-data</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-2-process-raw-data" target="_blank">https://www.kaggle.com/code/rsakata/psp-2-process-raw-data</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-3-fe-and-train-lgb" target="_blank">https://www.kaggle.com/code/rsakata/psp-3-fe-and-train-lgb</a></li> <li><a href="https://www.kaggle.com/code/rsakata/psp-4-test-inference" target="_blank">https://www.kaggle.com/code/rsakata/psp-4-test-inference</a></li> </ul> <h2>Feature Engineering</h2> <p>The six variables (level, name, event_name, room_fqid, fqid, and text) were concatenated as aggregation keys, and the time difference from the previous or following record was summed for each key and used as the feature. If written in pandas-like code, <br> <code>df.groupby(['level', 'name', 'event_name', 'room_fqid', 'fqid', 'text'])['elapsed_time_diff'].sum()</code></p> <p>In addition to the time difference from the previous or following records, the number of occurrences of each key is also added as a feature. Since these features can be calculated by sequentially reading the user's session, they can be calculated very efficiently by treating the data as the Python list instead of using Pandas.</p> <p>Furthermore, the record whose event_name is 'notification_click' is considered as a important event, and the time difference between the two events is added to the feature.</p> <p>The procedures for calculating these features can be found by reading the third published notebook.</p> <h2>Modeling</h2> <p>Since the variety of keys (combinations of six variables) is very large, I reduced features before training by excluding in advance rare combinations that appear only in a small number of sessions. However, since the number of features still amounted to several thousand, I first trained LightGBM with a large learning rate (0.1) and performed feature selection based on gain feature importance. 
The training was then performed again with a smaller learning rate (0.02) using 500 to 700 features.</p> <p>In the second modeling, raw data published by the host (<a href="https://fielddaylab.wisc.edu/opengamedata/" target="_blank">https://fielddaylab.wisc.edu/opengamedata/</a>) was included in the training. Although I was unable to reproduce the host's train.csv file completely, but I was able to reproduce it approximately using the second published notebook.</p> <p>Many of the sessions included in this data were different in nature from the competition data because they did not complete the game until the end. In fact, users who left the game midway through tended to have lower percentages of correct responses. To reflect this difference, the maximum level of each session was added as a feature.</p> <p>When training the model for the last level_group, I augmented the label of the second level_group, which contributed to the improvement in accuracy. I believe that the reason for this is that overfitting was suppressed by using more data to determine the split point when splitting nodes of decision trees. However, for the first and second level_groups, this data augmentation method did not contribute to improve accuracy in local validation.</p> <p>The CV/LB scores of my best submission is:</p> <ul> <li>CV: 0.7034</li> <li>Public LB: 0.703</li> <li>Private LB: 0.703</li> </ul> <h2>Other Remarks</h2> <ul> <li>For stability of evaluation, 4-fold CV was repeated three times with different seeds.</li> <li>Based on the validation results, the threshold was set at 0.625. No adjustment was made for each question.</li> <li>To reduce inference time, models trained in CV were not used, but retrained models using all data were used for inference.</li> </ul> <p>Thank you for your comment and questions. Here are the answers.</p> <ol> <li>After computing the features, duplicate them for the number of questions and add the question numbers to the features.</li> <li>Same as 1.</li> <li>Yes. I used the simple average of best iterations when retrain models.</li> </ol> <p>Regarding the first question, it is simply the result of a series of modifications to the program to eliminate errors that were output due to changes in the data format.</p> <p>Regarding the second question, note that for the third level_group, bdiff and count are skipped, but fdiff is calculated. The reason for this is that the validation results showed that the contribution was small for bdiff and count, and the validation score for the third level_group was almost the same without including it.</p>
Predict Student Performance from Game Play
8th Place Solution and Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The competition was really exciting and it gave us a chance to practice feature engineering. I'm very thankful for the support and help from my team <a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a> <a href="https://www.kaggle.com/hoangnguyen719" target="_blank">@hoangnguyen719</a> and <a href="https://www.kaggle.com/martasprg" target="_blank">@martasprg</a>. They were always there for me and together we made a big difference.</p> <p>I would like to thank the hosts, and special thanks to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> and <a href="https://www.kaggle.com/pdnartreb" target="_blank">@pdnartreb</a> for identifying the issue of data leak, which made the competition right back on track.</p> <p>Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his great starter notebooks and insights that helped me in the early phase of the competition.</p> <h2>Overview</h2> <p>Here's an overview of what each of us worked on:<br> · My main focus was on improving the XGBoost model and handling feature engineering.<br> · Minh Tri Phan worked on a Transformer model with a CV (cross-validation) score of 0.699 and a public leaderboard (LB) score of 0.7.<br> · Hoang processed the external data.<br> · Martin worked on selecting the most relevant features.</p> <p>In our final submissions, we ensembled the XGBoost and Transformer models, which helped us achieve the gold position. Our ensemble submission had a public LB score of <strong>0.705</strong> and a private LB score of approximately <strong>0.7025</strong>. Additionally, we had two other submissions with single XGBoost models, where one had a public LB score of <strong>0.705</strong> and a private LB score of <strong>0.700</strong>.</p> <h2>My Part</h2> <p>Code: The code is a bit uncleaned, apologies for that. For any queries, contact me on <a href="https://www.linkedin.com/in/priyanshu-chaudhary-ba0b23199/" target="_blank">LinkedIn</a> <br> <strong>FE code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-11-external-data/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-11-external-data/notebook</a><br> <strong>Inference code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/inference-xgb-25-11-17/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/inference-xgb-25-11-17/notebook</a><br> <strong>Training code:</strong> <a href="https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-9-training/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/mb-fb5-train-xgb-25-9-training/notebook</a></p> <h3>Overview</h3> <p>I created a 5-fold XGBoost model for each question (a total of 90 models). I used Kaggle kernels only to train XGBoost since it took only 45 mins on Kaggle’s P100 GPU to train all 90 models.<br> The single XGBoost model achieved a Public leaderboard (LB) score of 0.705 and took 45-50 mins for inference, but it didn't perform as well on the private LB. 
When we included Hoang's external data, the model's score improved to <strong>0.704</strong> on the private LB. However, we decided not to use it because the public LB score was unusually low at <strong>0.702</strong>.</p> <h3>Feature engineering</h3> <ol> <li><p><strong>Session length:</strong> simply accounts for the total length of the session per level group.</p></li> <li><p><strong>Instance features:</strong> I created Object click-based features (first object click, room coordinates of that click, I called them Instance features)that were most important and gave an improvement of 0.0007, when I added them with standard features. I created a total of 36 features since there were 12 instances where object clicks were present.</p></li> <li><p><strong>Magic bingo features:</strong> Inspired from the public notebooks. I created more such features for all 3 level groups and it improved the CV by <strong>0.0003</strong>.</p></li> <li><p><strong>Standard features:</strong></p> <p>a) <strong>Count features:</strong> I created count features based on <code>Fqid, text_Fqid, room_fqid, level, and event_comb</code>. These features capture the frequency of specific events or combinations. </p> <p>b) <strong>Binning of indexes:</strong> I performed binning on indexes with bin sizes of approximately 30 or 50 in sorted order. Raw indexes worked better on the private LB, while binned features yielded better results on the public LB.</p> <p>c) <strong>First and Sum features:</strong> I generated first and sum of elapsed_time_diff for all categorical columns. I found that min, max, and std did not work well in my case. </p> <p>d) <strong>Aggregations based on hover duration.</strong></p></li> <li><p><strong>Top Level Group Features:</strong> Used top 15-25 features (according to feature importance), Duration and instance features across different level groups.</p></li> <li><p><strong>Meta features:</strong> Using past questions predictions to predict the current question. i.e. for question<code>t</code> I used all predictions for questions <code>(1 to t-1)</code>. Using them gave an improvement of around <strong>0.001</strong>.</p></li> </ol> <h3>Feature Selection (Martin's Part):</h3> <ol> <li>I eliminated features that had zero importance based on their Gain and Shapley feature importance scores.</li> <li>After performing feature selection, I made adjustments to the learning rate by reducing it from <strong>0.05</strong> to <strong>0.03</strong> and adding more features. </li> <li>Additionally, I removed duplicate features and features with more than <strong>95%</strong> values as null.</li> </ol> <h3>External data:</h3> <ol> <li>We used publicly available data. It had about 7500 sessions where all 18 questions were answered.</li> <li>Adding this external data improved our model's performance by 0.0005 in cross-validation and 0.002 on the leaderboard.</li> <li>Hoang also created processed external data that worked well on the private leaderboard (score of 0.704). If we had included it, our single XGBoost model could have reached a top 5. position. However, we decided not to use it because of its lower Public leaderboard score (a bad decision).</li> </ol> <h3>Inference:</h3> <ol> <li><p>We made improvements to retain the original order of the sequence during inference.</p></li> <li><p>We found that there are approx. 
250 sessions with abnormal indexing(interestingly all of them are from the 5th and 6th December 2020)</p></li> <li><p>Created a function to preserve the original sequence for 99.5% of sequences, with only a small portion (0.5%) having misplaced events not more than 4-5 positions of the actual index.</p></li> <li><p>Reindexed these abnormal sessions which improved or scored on LB slightly.</p></li> </ol> <h3>Things not worked:</h3> <ol> <li>Ensemble with LGBM, Catboost didn’t work.</li> <li>Created a custom eval metric that uses benchmark true positives and negatives a model should have. It increased the CV by 0.001 but LB decreased probably due to overfitting.</li> <li>Different thresholds for each question. (increased CV decreased LB).</li> </ol> <p>The below tables list our experiments with the best results.</p> <table> <thead> <tr> <th>External Data Used</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> <th>final Sub</th> </tr> </thead> <tbody> <tr> <td>No</td> <td>0.6996</td> <td>0.701</td> <td>0.698</td> <td>No</td> </tr> <tr> <td>No</td> <td>0.7001</td> <td>0.702</td> <td>0.700</td> <td>No</td> </tr> <tr> <td>No</td> <td>0.6996</td> <td>0.701</td> <td>0.698</td> <td>No</td> </tr> <tr> <td>Yes((Public ED)</td> <td>0.7015</td> <td>0.705</td> <td>0.700</td> <td>Yes</td> </tr> <tr> <td>Yes(Hoang's ED )</td> <td>0.7019</td> <td>0.702</td> <td>0.704</td> <td>No</td> </tr> <tr> <td>Yes (Hoang's ED)</td> <td>0.7022</td> <td>0.703</td> <td>0.703</td> <td>No</td> </tr> </tbody> </table> <h2>Tri's Part:</h2> <p>The model is shown in the following figure:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5005351%2F196a1b67b9b8bdf56ba47c91097e8d0d%2FPicture1.png?generation=1688196490904650&amp;alt=media" alt=""></p> <p>Particularly, it consists of 2 parts,<br> (i) Training a neural network, then extracting the embedding.<br> (ii) Concatenating the embedding from the neural network to a set of aggregated features, then training a gradient boosting model (XGBoost, CatBoost, LightGBM).</p> <h3>Neural network</h3> <p>I was inspired by the RIIID competition and <a href="https://www.kaggle.com/letranduckinh" target="_blank">@letranduckinh</a>’s solution, in which he customized the multi-head attention mechanism to adopt the time gap between 2 actions. In my opinion, if we have to relate the problem to an NLP problem, RIIID competition is like a token classification task (e.g., NER), meanwhile, this competition is like a document classification task. Therefore, I decided to use a transformer and some other recurrent network types.</p> <p>I used the encoder-only structure as I didn’t see any motivation to have the decoder. <br> However, the transformer encoder alone didn’t work so well, so I decided to add some more (3) GRU layers in front of the encoder. The detailed architecture (Pytorch code) is given here (<a href="https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance/blob/main/Transformer/model.py" target="_blank">https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance/blob/main/Transformer/model.py</a>).</p> <h3>Some remarks about training:</h3> <ol> <li>I used 3 models for 3 level groups. At each level, I used the sequence of previous levels (e.g. 
The model for the 0-4 level uses the 0-4 sequence, the model for the 5-12 level uses the 0-4 and 5-12 sequences, and so on.)</li> <li>I used all the given features to train the model:</li> </ol> <pre><code># The exact column lists were lost when this post was extracted; each list
# held seven of the raw columns provided in the competition data.
NUM_COLS = [...]  # numerical columns
TXT_COLS = [...]  # categorical (text) columns
</code></pre> <ol> <li>I think the performance of a student, for example, in level 13-22 could carry some information to predict his/her performance in level 0-4. This is what I call the “global knowledge” of a student, and I want the network to capture that. Therefore, the neural network is trained in a multi-tasking manner, in which the main output is the set of questions in the corresponding level (e.g., for level 0-4, the main output is 3-dimensional for questions 1, 2, and 3), while the auxiliary head is used to predict all other questions. This trick helps to gain <strong>+0.002</strong> in CV.</li> </ol> <p>Overall, the NN gets <strong>0.695/0.700</strong> in CV and public LB (before the API crisis; after that I never checked how the NN alone performed on the public LB anymore, as it was always combined with XGBoost).</p> <h3>Gradient Boosting</h3> <p>However, the NN in my case was not super satisfactory. I then decided to extract the embeddings from the trained NN, concatenate them with a set of aggregated features, and then use XGBoost to train the model. This helped me to get a huge boost in both CV and LB.</p> <p>Overall, the scores of this approach are shown below:</p> <table> <thead> <tr> <th>External Data Used</th> <th>CV</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>No</td> <td>0.6993</td> <td>0.702</td> <td>0.697</td> </tr> <tr> <td>Yes</td> <td>0.6989</td> <td>0.701</td> <td>0.699</td> </tr> </tbody> </table> <p>Unfortunately, as I didn’t observe any gain in CV and public LB with external data, I decided not to add that model to our model pool.</p> <p><strong>Links:</strong><br> Training code: <a href="https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance----part-of--8th-solution.git" target="_blank">https://github.com/minhtriphan/Kaggle-competition---Predicting-Student-Performance----part-of--8th-solution.git</a></p> <p><strong>Inference code:</strong> <br> <strong>Without external data:</strong> <a href="https://www.kaggle.com/code/shinomoriaoshi/psp-v7b-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/psp-v7b-infer</a><br> <strong>With external data:</strong> <a href="https://www.kaggle.com/code/shinomoriaoshi/psp-v9a-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/psp-v9a-infer</a></p> <h2>Hoang's Part:</h2> <p>Hoang has described his work in a separate thread covering the preprocessing of the external data, the experimental results, and why to trust CV over LB.<br> Link to Hoang's Part: <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420315" target="_blank">https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/420315</a></p> <p>Hey Ino,<br> In the dataset, there are fqids that have the word bingo in them. They signify that a user was asked to find something on the screen and press it; when the user presses the correct item, a bingo fqid pops up. Therefore these features indicate the time and number of clicks it took for the player to get the correct item so that the bingo-based fqid appeared.<br> I'd advise going through one session and observing the fqids for level group 5-12; you will see some fqids that have the word bingo in them.</p>
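<p>One idea highlighted above (the "meta features" in my part) is feeding the predictions of earlier questions into the model for the current question. Below is a minimal, self-contained sketch of that loop on random data; it is not our actual pipeline, and the shapes and hyperparameters are purely illustrative.</p> <pre><code>import numpy as np
from sklearn.model_selection import cross_val_predict
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_sessions, n_feats, n_questions = 1000, 50, 18
X_base = rng.normal(size=(n_sessions, n_feats))
y = rng.integers(0, 2, size=(n_sessions, n_questions))

meta = np.zeros((n_sessions, 0))  # grows by one column per answered question
models = []

for q in range(n_questions):
    X_q = np.hstack([X_base, meta])
    clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
    # out-of-fold probabilities, so later questions never see their own labels
    oof = cross_val_predict(clf, X_q, y[:, q], cv=5, method="predict_proba")[:, 1]
    clf.fit(X_q, y[:, q])          # refit on all data for inference
    models.append(clf)
    meta = np.hstack([meta, oof.reshape(-1, 1)])
</code></pre> <p>At inference time the same loop runs in question order, with each model consuming the probabilities produced by the models before it.</p>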
Predict Student Performance from Game Play
98th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Despite all the confusion, I would like to express my gratitude to the staff and the Kagglers who shared great information and knowledge. My result is not a top one, but I am posting this in the hope that it will be helpful to someone else and as a record for myself.</p> <h2>Overview</h2> <p>I created multiple XGBoost models and blended their predictions for the submission. The Public Score was 0.703 - 0.704 and the Private Score was 0.698 for all three.</p> <h2>Solution</h2> <p><a href="https://www.kaggle.com/code/pourchot/simple-xgb" target="_blank">Laurent's Notebook</a> was used as the baseline. The following features were added:</p> <ul> <li>Identify conversations and actions that are essential to the progress of the game, and calculate the elapsed time and the number of indexes between them.</li> <li>The number of indexes and the sum of elapsed time diff using fqid, room, and level as keys, since the same fqid can occur in multiple rooms and levels.</li> <li>Number of times a very large elapsed time diff has occurred.</li> <li>Whether the order of level_group is swapped or not, since there are cases in which the order of level_group is swapped, as described in <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/395250" target="_blank">AbaoJiang's Notebook</a>.</li> </ul> <p>Data from the previous level_group and predictions for the previous questions were also added as features:</p> <ul> <li>e.g., for the model with level_group = 5-12, the features created for level_group = 0-4 are added as they are.</li> <li>For the model predicting question t, I added the predictions for questions 1, 2, 3, …, t-1 as features.</li> </ul> <p>In this way, 1200 features were prepared for level_group = 0-4, 3000 for level_group = 5-12, and 5500 for level_group = 13-22. From there, feature selection was performed based on gain feature importance. I created multiple XGBoost models (3 to 8 models), which varied in which features were included and how far the number of features was reduced by feature selection. In many models, the numbers of features were around 800, 1500, and 2500, respectively. The weighted average of these models was used as the final prediction. In the inference code, I sorted the test data as described in <a href="https://www.kaggle.com/competitions/predict-student-performance-from-game-play/discussion/416963" target="_blank">Daniel's Notebook</a>. 
In many cases, this sorting improved scores over sorting by index, even in GBDT models.</p> <p>Adding the past level_group features and the past question predictions as features contributed the most to the scores.</p> <h2>What did not work</h2> <ul> <li>Tuning the hyperparameters of individual models did little to improve the scores.</li> <li>Adjusting the thresholds for individual questions also did little to improve the scores.</li> <li>I tried to use additional raw data available on the web, but could not improve scores.</li> <li>I tried to convert the coordinates of clicks in a room into a good feature, but could not improve scores.</li> <li>I also tried to incorporate the predictions of the previous question in two steps using stacking, but this did little to improve scores, so I adopted the simpler approach described above.</li> </ul> <h2>Reflection</h2> <ul> <li>After I started feature selection, I calculated the CV score incorrectly, resulting in an unreliable CV score. This caused overfitting to the Public Score, because I did not know what to trust.</li> <li>I took the weighted average of multiple XGBoost models as the final predicted value, but the models were so similar that it improved the Public Score but not the Private Score. Blending <a href="https://www.kaggle.com/code/vadimkamaev/catboost-new/notebook" target="_blank">VADIM’s public notebook</a> and my XGBoost model gives a private score of 0.700, but I could not select this one. I should have increased the diversity of the models instead of being drawn to the Public Score.</li> <li>In my submission list, I found a model with a Public Score of 0.695 but a Private Score of 0.704. I think it would have been impossible to choose this as my final submission because it was a fluke result!</li> </ul> <p>My code can be found in <a href="https://github.com/KazuakiTokunaga/kaggle-studentperformance" target="_blank">this github repository</a>.</p>
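<p>Since gain-based feature selection is central to the approach above, here is a small illustrative sketch (on random data, not my actual code) of ranking XGBoost features by gain importance and retraining on a reduced set:</p> <pre><code>import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))
y = rng.integers(0, 2, size=2000)

feature_names = ["f%d" % i for i in range(X.shape[1])]
dtrain = xgb.DMatrix(X, label=y, feature_names=feature_names)
params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=200)

# gain importance: total loss improvement contributed by splits on each feature
gain = booster.get_score(importance_type="gain")
top = sorted(gain, key=gain.get, reverse=True)[:100]

# retrain using only the selected columns
keep_idx = [feature_names.index(name) for name in top]
dtrain_small = xgb.DMatrix(X[:, keep_idx], label=y, feature_names=top)
booster_small = xgb.train(params, dtrain_small, num_boost_round=200)
</code></pre> <p>In practice the ranking would come from models trained on the real features, and the cut-off (here 100) is the knob that trades accuracy against inference time.</p>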
Predict Student Performance from Game Play
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I see, thank you for your info! I had not tried either.</p> <p>Oh, yes, I forgot to mention that. Thank you very much. I will add it later.<br> I used straified Kfold with the number of correct answers for 18 questions of the USER. So, the model is made with the distribution of the total number of correct answers of the users almost aligned. (However, I don't think it would be much different with a simple K-fold)</p> <p>I didn't use any extra data.</p> <p>First of all, I would first like to thank all the participants who dedicated so much time and effort to this competition, as well as the hosts and the management. While I still believe there are areas in which the administration of the competition could be improved, in this post, I will focus solely on discussing my solution.</p> <h2>Overview</h2> <p>I didn't do anything particularly special.</p> <p>Mainly, I just kept adding features to improve the accuracy of the single model. For each question, I built models using LightGBM and Catboost, and took the simple average of the two models. I used the models with the highest CV scores as the final candidates. The high CV submit also got almost the best score for private.</p> <ul> <li>LightGBM CV:0.7018 LB(Public):0.703 LB(Private):0.702</li> <li>Catboost CV:0.7011 LB(Public):0.7 LB(Private):0.701</li> <li>Merge(final submit) CV:0.7024 LB(Public):0.7 LB(Private):0.702</li> </ul> <h2>Features</h2> <p>Rather than introducing each of the numerous features I created, I'll share my overall approach and discuss a few specific features that particularly contributed to the accuracy.</p> <p>As already demonstrated in public notebooks, an important element in this competition was "how much time one spends playing." To delve deeper, I felt that "how much time it took from a certain point to another" was crucial, so I created many features related to this.</p> <ul> <li>Checkpoint feature (as I named it)<br> In this game, there are events that almost every player will inevitably experience. For instance, every user will find a notebook and see the message "found it!" I identified these "events that almost every user goes through," and used the time taken between these events (i.e., the elapsed time from event A to event B) and the number of clicks as features. This seemed to significantly contribute to the accuracy.</li> </ul> <p>I created such features in various patterns, like the elapsed time from viewing text A to text B, the elapsed time from one fqid to the next, the elapsed time from one room to the next, and so on.</p> <ul> <li>Other than this, I obviously included features like the time elapsed for each level and the average coordinates, as introduced in the public notebooks.</li> </ul> <p>The number of features increases with the level group. 
Ultimately, the feature counts were as follows:<br> Level group 0-4: 3009 features<br> Level group 5-12: 9747 features<br> Level group 13-22: 18610 features</p> <h2>Modeling Approach</h2> <ul> <li><p>I chose to use 10-fold rather than 5-fold as it gave slightly higher CV scores (around +0.0005).</p></li> <li><p>I used stratified K-fold based on the number of correct answers to the 18 questions for each user. So the folds are built with the distribution of the users' total number of correct answers almost aligned. (However, I don't think it would be much different with a simple K-fold.)</p></li> <li><p>For feature selection, I simply used the top 500 features based on their importance. To prevent leakage, I computed the feature importance for each fold and retrained the model for each fold. For example, when training fold 1, I first train with all features, then select the features using the fold-1 model, and retrain the fold-1 model with the top 500 features.</p></li> <li><p>I used the prediction probabilities of previous questions as features. For example, when predicting question 3, I used the prediction probabilities for questions 1 and 2. When predicting question 15, I used the prediction probabilities for questions 1 through 14, etc. (This improved CV by around +0.001.)</p></li> <li><p>Inference took the following amounts of time:<br> LightGBM: 90 min<br> Catboost: 120 min<br> Merge: 150 min<br> LightGBM's inference became significantly faster by compiling the model with a library called "lleaves."<br> <a href="https://github.com/siboehm/lleaves" target="_blank">https://github.com/siboehm/lleaves</a><br> I'm sharing my inference code. Features not mentioned here can be somewhat understood by looking at it.<br> <a href="https://www.kaggle.com/code/mhyodo/restart-model-merge-v1" target="_blank">https://www.kaggle.com/code/mhyodo/restart-model-merge-v1</a></p></li> </ul> <h2>What Didn't Work</h2> <ul> <li>NN models (I tried several types, such as LSTM and MLP, but none contributed to the CV)</li> </ul> <p>Lastly, I've seen posts from others where the LB score was higher than the CV score, but in my case, they were pretty much the same. I was expecting some sort of shakeup, but I didn't anticipate making it into the top 10. I'm curious as to how those with higher LB scores achieved this, as I was unable to significantly increase my LB score. <br> Anyway, thank you! If the ranking is confirmed, I can become a new GrandMaster!</p> <p>Here are my parameters. <br> Since there are very many features and the model is prone to overfitting, making min_data_in_leaf large enough and taking feature_fraction small enough improved the CV in my case.</p> <pre><code># The original parameter names and values were lost in extraction; the text above
# highlights max_depth, min_data_in_leaf and feature_fraction as the settings
# that mattered most.
param = {...}
</code></pre> <p>I didn't spend much time on tuning; I tried several patterns by hand, guessing the parameters that seemed important (e.g., max_depth, min_data_in_leaf, feature_fraction).</p> <p>Thanks Sirius!<br> Congrats to you too on your high placement in the KDDcup!</p> <p>Yes, I noticed.<br> However, in my case, even if I made the text consistent, the CV did not increase that much.</p>
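<p>For readers curious about the lleaves trick mentioned above, compiling a saved LightGBM model typically looks roughly like the sketch below (based on the library's documented usage rather than my actual notebook; the toy data is made up):</p> <pre><code>import numpy as np
import lightgbm as lgb
import lleaves

# train a small LightGBM model on toy data and dump it to a text file
X = np.random.rand(5000, 20)
y = np.random.randint(0, 2, size=5000)
booster = lgb.train({"objective": "binary", "verbose": -1},
                    lgb.Dataset(X, label=y), num_boost_round=100)
booster.save_model("model.txt")

# compile the dumped model to native code, then predict as usual
llvm_model = lleaves.Model(model_file="model.txt")
llvm_model.compile()
fast_preds = llvm_model.predict(X)

# the compiled model is intended as a drop-in replacement for booster.predict(X)
assert np.allclose(fast_preds, booster.predict(X), atol=1e-6)
</code></pre>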
Predict Student Performance from Game Play
Efficiency: 14th place Public
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I figured it would be a gamble, and that public LB score would be pretty random compared with private, so didn't check the difference in LB score if I had predicted all. </p> <p>Code:<br> Training: <a href="https://www.kaggle.com/roberthatch/student-efficiency-catboost-polars-train" target="_blank">https://www.kaggle.com/roberthatch/student-efficiency-catboost-polars-train</a><br> Inference: <a href="https://www.kaggle.com/code/roberthatch/student-efficiency-polars-inference" target="_blank">https://www.kaggle.com/code/roberthatch/student-efficiency-polars-inference</a></p> <p>My efficiency solution wasn't very innovative, mainly due to limited time. I only did first couple weeks of competition, then last week of competition, and didn't actually focus too much on efficiency solution, even though it was much more interesting to me, as I wanted a silver medal. (But failed to medal.)</p> <p>[Aside: yes, I know it was a 5 month competition. Frankly, I was excited to get a second chance after 60 days in which I didn't even edit a single Kaggle Notebook. But I also split my time on the second chance sign language competition, so only gave myself a week on this one.]</p> <p>The interesting thing was taking <a href="https://www.kaggle.com/carnozhao" target="_blank">@carnozhao</a> 's excellent <a href="https://www.kaggle.com/code/carnozhao/cpu-catboost-baseline-using-polars-inference" target="_blank">Polars starter notebook</a> and fixing the inference script to allow submission with latest API was already 15th place public(!)</p> <p>So - on public LB - I only went from 15th -&gt;14th place with some small improvements. With private LB I probably gained another few places, as public score was 682-&gt;687, but private score was 681-&gt;691. Given that it took about 2.5 more minutes, that makes the public vs private even more of a difference, at about +0.0025 vs +0.0075, so much more improvement on private LB.</p> <p>The semi-minimal updates to get that from 0.682 -&gt; 0.687, while adding about 140 seconds to the runtime.</p> <ul> <li>3 digit threshold.</li> <li>So far: [no CV test, 0.681 public, 0.682 private]</li> <li>Rerun training with all data</li> <li>SKIP 6 questions (inspired by top public notebook). Skip 2, 3, 12, 13, 16, 18. (Guess 0 on Q13, 1 on the rest.) Naturally this speeds up inference, which is why I decided to be slightly aggressive on how many questions to skip. Only about -0.0001 CV penalty each for skipping the 4th, 5th, 6th questions. 
It gives an added benefit that I could get lucky and if a question predicts poorly on private LB, (since private LB is a different population), then I would avoid a score penalty that would affect most everyone else.</li> <li>With all the above so far: [0.6819 CV, 0.682 public, 0.683 private, ~5:25 scoring time]</li> <li>34-&gt;60 features [0.6859 CV, 0.685 public, 0.683 private, ~7:15 scoring time run A, ~6:37 run B ]</li> <li>60-&gt;117 features</li> <li>Train on all train data with per question iterations (number of trees) based on results of 5 fold early stopping tuned for each question individually.</li> <li>[0.688 CV, 0.687 public, 0.691 private, ~7:41 scoring time]</li> </ul> <p>Many, many things didn't have time to implement and try:</p> <ul> <li>checkpoint features</li> <li>save prior level predictions (or all features)</li> <li>hyper-parameter tuning</li> <li>Extensive feature selection to optimize for efficiency prize</li> <li>Use checkpoint features for CutMix style data augmentation.</li> </ul>
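<p>As a rough sketch of the "per question iterations (number of trees) based on results of 5 fold early stopping" step above, the loop could look like this; it is only an illustration with made-up data and a plain CatBoost setup, not my actual Polars pipeline:</p> <pre><code>import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))
y_all = rng.integers(0, 2, size=(2000, 18))   # one label column per question

final_models = {}
for q in range(18):
    y = y_all[:, q]
    best_iters = []
    for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
        model = CatBoostClassifier(iterations=2000, learning_rate=0.05, verbose=0)
        model.fit(X[tr_idx], y[tr_idx],
                  eval_set=(X[va_idx], y[va_idx]),
                  early_stopping_rounds=100)
        best_iters.append(model.get_best_iteration())
    # retrain on all data with the averaged early-stopped tree count
    n_trees = max(1, int(np.mean(best_iters)))
    final = CatBoostClassifier(iterations=n_trees, learning_rate=0.05, verbose=0)
    final.fit(X, y)
    final_models[q] = final
</code></pre> <p>Skipped questions simply never enter this loop and get a hard-coded answer at inference time, which is where the runtime savings come from.</p>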
Predict Student Performance from Game Play
Solution focusing on efficiency LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I'd like to congratulate all prize/medal winners and to thank the organizers for such a chance for newbies in the field of machine learning to gain experience and for experts to increase theirs, too. <br> Second, I want to share the simple idea behind my latest submissions so that I can get feedback from others who have more experience than me.<br> As an introduction, I joined the competition solo and my goal was to learn by doing a real project, even if the results are not good or satisfactory. I tried a lot of things and learnt how to make a successful submission and a working model, even if it took a relatively long time :D.<br> But at some point, I checked the efficiency LB and found out that my position in it (when compared to mine in the main LB) is way better, so I started to focus more on simple, non-time-consuming ideas that get a public score that is not that bad.<br> My feature engineering concentrated on the idea that, maybe, the actions of players in the main game who answer a specific question right (or wrong) are somehow similar, focusing more on the elapsed time they consume in such actions.<br> So, for each question I developed 2 different models that are ensembled with each other, based on the following:<br> 1st model: elapsed time in each unique ‘room_fqid’ &amp; ‘level’ group<br> 2nd model: length of each group of ‘fqid’ texts divided by the length of the whole group, since the elapsed time difference in this column wouldn’t be telling a lot, as the same values don’t appear consecutively.</p> <p>The features are entered into an LGBMClassifier with pre-tuned hyperparameters.<br> The voting weights of the two models vary with the question number, depending on which weights perform better on a small slice of the same training data, and that’s it!</p> <p>My code runs in about half a minute and takes between 4 and 6 minutes to complete scoring, with a highest private LB score of 0.689 among my selected submissions; my highest record was 0.693 with about 9 minutes of scoring + running time, but unfortunately I didn't choose that submission :) </p> <p>I’d like to know about everyone’s feedback, as I am trying to evaluate my experiment in learning through doing real projects. Thank you again and wish me luck in the final efficiency LB :D!</p> <p>Link for my code example: <a href="https://www.kaggle.com/code/belalemadhussein/simple-highly-efficient-model" target="_blank">https://www.kaggle.com/code/belalemadhussein/simple-highly-efficient-model</a></p> <p>Thank you! I will be waiting to see your solution idea :)</p>
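<p>As a rough illustration of the per-question weighting described above (not the actual notebook code), the blend weight between the two models could be picked on a held-out slice of the training data like this:</p> <pre><code>import numpy as np
from sklearn.metrics import f1_score

def pick_blend_weight(p1, p2, y_true):
    """Grid-search the weight of model 1 that maximises F1 on a holdout (0.5 threshold)."""
    weights = np.linspace(0.0, 1.0, 21)
    scores = [f1_score(y_true, np.rint(w * p1 + (1.0 - w) * p2).astype(int))
              for w in weights]
    best = int(np.argmax(scores))
    return weights[best], scores[best]

# toy holdout: true answers plus probabilities from the two per-question models
rng = np.random.default_rng(0)
y_hold = rng.integers(0, 2, size=500)
p_model1 = np.clip(0.6 * y_hold + 0.4 * rng.random(500), 0.0, 1.0)
p_model2 = rng.random(500)
print(pick_blend_weight(p_model1, p_model2, y_hold))
</code></pre> <p>Repeating this search per question gives exactly the kind of question-dependent voting weights described above.</p>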
Predict Student Performance from Game Play
Top 0.5% Efficiency Leaderboard with datatable
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before jumping into details, I would like to say thank you to the organizers, Kaggle team and community.</p> <p>This was actually my first competition, though I had some limited Kaggle experience. Several years&nbsp;ago I benchmarked my newly developed models here, but at that time I didn't really look into the data and compete.&nbsp;</p> <p>Surprisingly, with the first attempt my solution was ranked 7th out of 2051 on <a href="https://www.kaggle.com/code/philculliton/student-performance-efficiency-leaderboard/notebook" target="_blank">the Efficiency Leaderboard</a>.</p> <h2>Background</h2> <p>When I joined about a month ago I was impressed by the fact, that simply <a href="https://www.kaggle.com/code/cpmpml/random-submission/" target="_blank">submitting mean values</a> could bring your score to&nbsp;<code>0.659</code> LB, while the most advanced models were at the level of ~<code>0.7</code>.&nbsp;</p> <p>So I decided to compete on the efficiency LB only and at the same&nbsp;time give a try to <a href="https://datatable.readthedocs.io/en/latest/api/models/linear_model.html" target="_blank">LinearModel</a>&nbsp;GLM I've recently developed. </p> <p>For data munging and feature engineering I've been using Python <a href="https://datatable.readthedocs.io/en/latest/index.html" target="_blank">datatable</a>,&nbsp;a package similar to pandas, but with a specific emphasis on speed and big data support.</p> <p>First, I have designed and shared <a href="https://www.kaggle.com/code/kononenko/datatable-linearmodel-0-676lb-in-6-seconds" target="_blank">a simple baseline</a>, that scored <code>0.676</code> and was pretty high on the efficiency LB.&nbsp;My next goal was to improve feature engineering and the overall code performance.</p> <h2>Feature engineering</h2> <p>Moving&nbsp;forward I've ended up with the following numeric features</p> <ul> <li>number of events per session, i.e. <code>sessions_id.count()</code>;</li> <li>session duration, i.e. <code>elapsed_time.max()</code>;</li> <li>mean level, i.e. <code>level.mean()</code>;</li> <li>screen x/y range, i.e. <code>screen_coor_x/y.max() - screen_coor_x/y.min()</code>.</li> </ul> <p>For categorical columns, I started with a number of unique values per a&nbsp;column. Then, created additional&nbsp;features for each of the values. Even though it worked locally on my CV, it didn't work on the public LB, so I had to employ feature selection based on the importances and also picked different features for different level groups.</p> <h2>Performance tuning</h2> <p>I have also&nbsp;employed some overall code tuning</p> <ul> <li>since <code>LinearModel</code> is fully&nbsp;parallel, both <code>.fit()</code> and <code>.predict()</code> methods, I have adjusted the number of threads to <code>2</code> to match the number of CPUs. By default, 4 threads were detected that could lead to over-parallelization;</li> <li>avoided <code>!pip install</code> in the inference code by pre-installing packages in a separate&nbsp;notebook. 
This saved me at least 30 seconds per submission;</li> <li>disabled "Persistence", so that no additional time is spent when the inference notebook starts.</li> </ul> <h2>Final model</h2> <p><strong>Pros</strong></p> <ul> <li>a small number of features;</li> <li>trains in less than one minute;</li> <li>highly interpretable;</li> <li>pretty robust: no errors due to the API changes, no submission errors, and public LB scores exactly the same as the private scores.</li> </ul> <p><strong>Cons</strong></p> <ul> <li>since in the end it is just a logistic regression, I don't think there is much room to further improve the model's score.</li> </ul> <h2>Conclusions</h2> <p>The best submission I selected scores <code>0.687</code> LB with a scoring time of around 2-3 minutes. The notebook, which includes the training and the inference parts, is available <a href="https://www.kaggle.com/code/kononenko/top-1-public-efficiency-lb-with-linearmodel" target="_blank">here</a>.</p>
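<p>For readers who have not used datatable before, the per-session aggregation described above can be expressed roughly like this (a sketch with made-up values; the commented-out LinearModel call follows the documented API linked above rather than my exact notebook):</p> <pre><code>from datatable import dt, f, by

# toy event log using the competition's column names
events = dt.Frame(
    session_id=[1, 1, 1, 2, 2],
    elapsed_time=[0, 500, 1200, 0, 900],
    level=[0, 0, 1, 0, 1],
    screen_coor_x=[10.0, 250.0, 400.0, 30.0, 120.0],
)

feats = events[:, {
    "n_events": dt.count(),
    "duration": dt.max(f.elapsed_time),
    "mean_level": dt.mean(f.level),
    "screen_x_range": dt.max(f.screen_coor_x) - dt.min(f.screen_coor_x),
}, by(f.session_id)]
print(feats)

# fitting the GLM then looks roughly like:
# from datatable.models import LinearModel
# lm = LinearModel(nthreads=2)
# lm.fit(feats[:, 1:], dt.Frame(correct=[1, 0]))
# proba = lm.predict(feats[:, 1:])
</code></pre>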
Predict Student Performance from Game Play
[0.67] 742nd Place Solution - Single Transformers Model Comparison Between Data Aggregation and None
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predict Student Performance from Game Play <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1><strong>742nd Place Solution</strong></h1> <p>First of all, congratulations to the first 4 teams who emerged victorious in this long and challenging competition. This was my very first challenge since I created this account 4 years ago, back in my first year at university, for a random course at that time.</p> <p>In this competition, I made every effort to implement transformers to solve the problem, resulting in two solutions with different approaches:</p> <h1>1. Sequence-N Approach</h1> <ul> <li>Code: <a href="https://www.kaggle.com/phuhoang26/psp-llama" target="_blank">Llama</a></li> <li>Model's weights: <a href="https://www.kaggle.com/datasets/phuhoang26/psp-seqtf-llama" target="_blank">Llama-W</a></li> <li>Public score: 0.616</li> <li>Private score: 0.621</li> </ul> <p>This approach was inspired by the <a href="https://www.kaggle.com/competitions/data-science-bowl-2019/discussion/127891" target="_blank">3rd solution of limerobot in the Data Science Bowl 2019</a> by <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a>.<br> In this approach, I utilized N rows of information per session_id as input to my model. The pipeline for this approach is as follows:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563184%2F8c31538c1619c8e330bdde51b078247f%2Fdrawio.png?generation=1688024191708037&amp;alt=media" alt="Sequence-N data pipeline"></p> <p>The rationale behind this approach was to leverage the sequential characteristics of the data. After preprocessing and grouping the data by sessions, it was fed into a language model. The model was trained using StratifiedKFold (K=5).</p> <p>Instead of using BERT, as in the original code of limerobot, I employed <a href="https://arxiv.org/abs/2302.13971" target="_blank">Llama</a>, a state-of-the-art language model developed by Facebook, which claims to perform as well as or even better than previous models (with billions of parameters) while having a significantly smaller number of parameters (4M) for the same tasks.</p> <p>I trained this approach with different sequence lengths. For level_group = '0-4', I trained with a sequence length equal to 256; for level_group = '5-12' it was 528; and for level_group = '13-22' it was 808. These numbers were the output of EDA (counting the number of rows at the 0.9 quantile).</p> <p>I also experimented with different classifiers to learn from the output of Llama, but the scores did not show significant improvement. These are the classifiers I tried:</p> <ul> <li>Llama + <a href="https://arxiv.org/abs/1408.5882" target="_blank">1D CNN</a></li> <li>Llama + XGBOOST</li> <li>Llama + LSTM</li> </ul> <p><strong>Reasons for the failure of this approach:</strong></p> <ul> <li>For processing CATS features (categorical features), I employed a mapping technique (e.g., cutscene_click as 1, personal_click as 2, navigate_click as 3, etc.). 
This technique works well when the tokens are carefully customized, as done in GloVe or fastText, where similar tokens are embedded close to each other in the feature space (e.g., run and walk, hit and slap, brown and yellow).</li> <li>For NUMS features (numerical features), I only filled the NaN values with 0, without performing feature selection or generation. This may have contributed to the failure of the model.</li> </ul> <h1>2. Gated Tab Transformer</h1> <ul> <li>Code: <a href="https://www.kaggle.com/phuhoang26/gated-tab-transformer" target="_blank">GATED-TAB-TF</a></li> <li>Model's weights: <a href="https://www.kaggle.com/datasets/phuhoang26/gated-tf-psp" target="_blank">GATED-TAB-TF-W</a></li> <li>Public score: 0.674</li> <li>Private score: 0.67</li> </ul> <p>Paper for this approach: <a href="https://arxiv.org/abs/2201.00199" target="_blank">The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling</a></p> <p>Data aggregation ideas and code derive from <a href="https://www.kaggle.com/pourchot" target="_blank">@pourchot</a>'s notebook, which you can see <a href="https://www.kaggle.com/code/pourchot/simple-xgb" target="_blank">here</a>.</p> <p>In this approach, I used aggregated information for each session_id to feed into my model. Here is this approach's pipeline:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563184%2F4d9e348215f7856a575c1fb8db55c739%2F2.drawio.png?generation=1688024800924132&amp;alt=media" alt=""></p> <h1>Conclusion</h1> <p>Initially, I believed that the Sequence-N approach would outperform the Gated Tab Transformer. However, the scores of the latter model turned out to be significantly better than the former, and I have yet to determine the reason for this.</p> <p>Furthermore, as I stated earlier, I trained the Sequence-N approach with quite long sequences. However, in the submitted versions, I saw that the smaller the sequence length, the better! I cannot explain this.</p> <p>If you have any questions, comments, or contributions, please feel free to contact me or leave a comment here.</p> <p>Thank you for taking the time to read this!</p> <p>I am cleaning my code, stay tuned! :3 </p>
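<p>To make the Sequence-N idea a bit more tangible, here is a heavily simplified, self-contained PyTorch sketch of an encoder-only model over padded event sequences (categorical embeddings plus one numeric feature). It is not my Llama/GRU architecture, just an illustration of the general shape of such models, and all sizes are made up:</p> <pre><code>import torch
import torch.nn as nn

class EventSequenceEncoder(nn.Module):
    def __init__(self, n_event_types=20, d_model=64, n_questions=3):
        super().__init__()
        self.event_emb = nn.Embedding(n_event_types + 1, d_model, padding_idx=0)
        self.num_proj = nn.Linear(1, d_model)          # e.g. elapsed_time_diff
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_questions)    # one logit per question

    def forward(self, event_ids, elapsed, pad_mask):
        # event_ids: (B, L) ints, elapsed: (B, L) floats, pad_mask: (B, L) bools (True = padding)
        x = self.event_emb(event_ids) + self.num_proj(elapsed.unsqueeze(-1))
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        x = x.masked_fill(pad_mask.unsqueeze(-1), 0.0)
        pooled = x.sum(dim=1) / (~pad_mask).sum(dim=1, keepdim=True).clamp(min=1)
        return self.head(pooled)                       # logits for the level group's questions

# toy batch: 2 sessions padded to length 10; the second has only 7 real events
model = EventSequenceEncoder()
event_ids = torch.randint(1, 21, (2, 10))
elapsed = torch.rand(2, 10)
pad_mask = torch.zeros(2, 10, dtype=torch.bool)
pad_mask[1, 7:] = True
print(model(event_ids, elapsed, pad_mask).shape)       # torch.Size([2, 3])
</code></pre>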
Predicting Molecular Properties
#1 Solution - hybrid
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, here's a brief writeup of the method we used for the #1 entry. I was hoping to post this a bit sooner, and apologies for the delay.</p> <h2>Update: 9/13/19</h2> <p>We have posted the code for our method, all of which is available under MIT license at: <a href="https://github.com/boschresearch/BCAI_kaggle_CHAMPS">https://github.com/boschresearch/BCAI_kaggle_CHAMPS</a> The main code for the model is available in the <code>src/</code> directory, though for those interested, the <code>models/</code> directory contains slight variants on this code that were used in the ensemble (mainly earlier versions of the same architecture) so that you can recreate the predictions exactly.</p> <h2>Introduction</h2> <p>First, a little bit of background on the team. This project was done at Bosch Research, specifically as a collaboration between two groups, one at Bosch Corporate Research, and one at the Bosch Center for AI (BCAI). Our team consisted of both some ML experts and domain experts. To introduce our team:</p> <ul> <li><p>Jonathan Mailoa and Mordechai Kornbluth are both Research Engineers working out of the Boston lab of Bosch Research. They are domain experts on DFT and ML approaches to molecular simulation, and have worked a great deal on molecular modeling, including lately some work with GNNs.</p></li> <li><p>Myself (Zico Kolter, I'm a faculty member working in machine learning at CMU, but work in industry one day a week at BCAI in Pittsburgh), Devin Willmott (Research Scientist at BCAI), and Shaojie Bai (my student at CMU, but doing this while interning at BCAI) were all coming to the competition from the ML side. I had actually done a bit of (pre-deep-learning, so ancient history) work in ML for molecular modeling, though we didn't end up using many of those methods.</p></li> </ul> <h2>Overall architecture</h2> <p>Our overall approach is what I would call a kind of "soft" graph transformer. We wrote the model all from scratch for this work, instead of building upon any existing code. The model processes an entire molecule at once, simultaneously making a prediction for each of the scalar couplings in the molecule (we hadn't considered the per-atom approach that Quantum Uncertainty used, and frankly it sounds like that may be a pretty competitive approach, given that they did nearly as well with much less physical information).</p> <p>Unlike a traditional graph model, though, we're really processing the data as more of a "meta-graph". In contrast to most graph methods for molecules, where atoms are nodes and bonds are edges, in our graph each atom, bond (both chemical bonds and non-chemical bonds, i.e., just pairs of atoms are included in the model), and even triplets or quads, if desired, all become nodes for the graph transformer. This means that each molecule has on the order of ~500 nodes (depending on whether we include all the bonds or not, or whether we include triplets or quads, which would only be included for chemical bonds). 
At each layer of the network, we maintain an embedding for each node in the graph, of dimension d ~= 600-750 in most of our models.</p> <p>Following the standard transformer architectures, at each layer of the network we use a self-attention layer that mixes the embeddings between the nodes. The "standard" scaled self-attention layer from the transformer paper would be something like (forgive the latex-esque notation formatted as code ... I'm entirely unprepared to describe model architectures without being able to write some form of equation):</p> <p><code>Z' = W_1 Z softmax(Z^T W_2^T W_3 Z)</code></p> <p>where W_1, W_2, and W_3 are weights of the layer. However, following the general practice of graph transformer architectures, we instead use a term</p> <p><code>Z' = W_1 Z softmax(Z^T W_2^T W_3 Z - gamma*D)</code></p> <p>where D is a distance matrix defined by the graph. For a "hard" graph transformer, this would work like the mask in normal self-attention layers, and be infinite for nodes that are not connected, and zero for nodes that are connected (and the gamma term would be fixed to one, say). In our "soft" version of the graph transformer, however, D was just the squared distance matrix between nodes in the graph, and gamma was a learnable parameter: as gamma went to zero, this would become a standard transformer with no notion of distance between objects, whereas as gamma went to infinity, it would become a hard graph transformer. To be even more precise, in the final architecture we used a multi-head version of this self-attention layer, as is also common in transformer models.</p> <p>As a final note, for this to work, we needed to define a distance measure between all the nodes in the graph. For, e.g., atom-to-atom distances, we just used the actual distance between atoms; for atom-to-bond distances, we would use the minimum distance from the atom to the two atoms in the bond, with similar extensions for triplets, quads, etc.</p> <p>After the self-attention layer, we used the normal fully-connected and layer-norm layers standard to transformer architectures, and used models of depth ranging from 14-16 (depending on available memory). After the final embeddings, we had separate heads that would predict the final scalar coupling for the nodes that corresponded to pairs for which we needed the coupling value, using a simple two-layer MLP for each type (or actually, for several sub-types of the bonds, which we'll mention below).</p> <h2>Input features and embeddings</h2> <p>As our input representation (i.e., the first-layer embeddings for all nodes in the network), for each type of node in the network we would include a kind of hierarchical embedding, where we had different levels of specificity for the different atoms, bonds, etc.</p> <p>As an example, for each bond (again, really meaning just a pair of atoms ... I'm referring to pairs generally as bonds even if they are not chemical bonds in the molecule), we described it in terms of the two atoms belonging to the bond, but also in terms of the number of bonds that each atom would have. 
Thus, each bond could be described by multiple types and subtypes: first by just the type of atoms in the bond, then by the type and total number of bonds that each atom had, and then by a few additional properties such as the bond order, etc.</p> <p>This led to substantially more coupling "types" than just the 8 that were used in the competition, and we actually had separate final layers for 33 different types of bonds, rather than the 8 in the competition (for instance, the 1JCH coupling had very different properties depending on the number of bonds the C atom had), which definitely improved our predictions slightly.</p> <p>In addition to the "discrete" embedding, each node type would have associated with it one or two scalar constants that we would embed with a Fourier encoding, much like positional encoding in a standard sequential Transformer model. For atoms, this consisted of the partial charge of the atom, as given by the OpenBabel library (*correction: the original post said this was from RDKit, but RDKit was used for bond orders and connections, whereas OpenBabel was used for charges), just using some simple rules based upon graph structure; for bonds it was the distance between the two atoms; for triplets the angle between the center atom and the two others; and for quads the dihedral angle between the two planes formed by the center bond and the two other bonds (quads didn't end up helping too much for this particular task, though, so were left out of most of our final models).</p> <h2>Ensembling</h2> <p>In the end, we trained 13 models that we used for the final ensemble, which basically just corresponded to different iterations and versions of the same basic structure (at times we also included a few models based upon a more standard graph neural network approach from the PyTorch Geometric library, though they weren't included in the final ensemble). We timed about 4 final models to complete on the final day of the competition, including the model which eventually got the best performance, which is why we managed to sneak into the top spot on the very last day. As I had mentioned in my last post, there really wasn't that much happening during the last ~4 days when we moved up the rankings, nor were we "holding anything back": our models simply kept improving each day, and we'd submit our best version of the ensemble, which kept bumping us up day by day.</p> <p>Our best single model got about -3.08 on the public leaderboard, which actually surprises me quite a bit, given that Quantum Uncertainty's best model was substantially better. But I think the fact that we predicted entire molecules at once actually may have increased the variance of predictions across all molecules, but therefore also seemingly made it work much better when ensembling several different models. 
By taking a straight median across predictions from the best models, for instance, we could get to the ~-3.22 range, and with a slightly more involved blending scheme (using the median of all 13 models to determine which 9 models seemed best, then taking the mean of a few different medians of the different model predictions), we were able to achieve our score of -3.245 on the private leaderboard.</p> <h2>Other random notes</h2> <ul> <li>We used small amounts of dropout at each layer, as in standard transformer models, though found that it was best to use a very small amount of dropout.</li> <li>At the very end of the competition, we did find that for our model a kind of cutout procedure (where we would randomly drop out two atoms from the network, plus all bonds, triplets, etc, that contained this atom), worked as a very effective regularizer.</li> <li>We didn't use QM7/QM9 or in fact any of the extra data that was included in the competition besides the structures and train/test files (so just atoms and bonds).</li> <li>We used RDKit/xyz2mol and a few other packages to parse the atomic structure to e.g., the bond and neighbor configuration feautres. Jonathan had a post about this earlier listing the packages we used.</li> <li>I'm not even going to attempt to list all the things we tried that didn't work, but there was lots :-). Quad / dihedral angle information, for instance, actually seemed to <em>hurt</em> generalization performance, as did including simple Coulomb forces at the bond level.</li> <li>As you'd expect, we had a fair amount of compute resources for the work. Most of the models were trained on 4x RTX2080 Ti systems (we had 5 of these available through the month we were working on the project), with a handful also trained on six single V100s we got access to in the last week.</li> </ul> <h2>Final thoughts</h2> <p>I want to thank the CHAMPS team for putting on an amazing competition. As many others have pointed out, the stability between the private/public leaderboards demonstrates an understanding of how to run a machine learning contest that sadly seems to be missing from many of the other contests I looked over on Kaggle previously.</p> <p>I also again want to thank the Quantum Uncertainty team, who as I mentioned before, were our goalposts the entire competition. After reading their solution I'm coming away more convinced about Transformers as the architecture that's going to be dominant across many different domains, not just sequence models (despite the fact that I will never, ever, forgive the original paper for the monstrosity that is the "query", "key", and "value" terminology for self-attention layers ;-) ). I also think their per-atom transformer is an awesome idea, and something I wish we had thought of ... I think most likely it took us using a <em>lot</em> of domain knowledge and engineering to make back up the difference that their per-atom approach got. And while it's wild to me that a non-rotationally invariant model would do so well (since we only used distance as a feature at the bond level, our model is rotationally invariant), it's impossible to argue with results. Their model is excellent, and I think it actually goes to show there is substantial room for improvement still in the performance we can get on this task.</p> <p>Thanks again, Zico, Devin, Shaojie, Jonathan, and Mordechai</p>
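<p>P.S. For readers who would like to see the distance-biased attention from the equations above in code, here is a single-head PyTorch sketch with a learnable gamma. It is only an illustration written for this post (the released code linked above is the real thing), and all sizes are made up.</p> <pre><code>import torch
import torch.nn as nn

class SoftGraphAttention(nn.Module):
    """One attention head whose scores are penalised by gamma times the squared distance."""
    def __init__(self, d_model):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.gamma = nn.Parameter(torch.tensor(1.0))   # learnable distance weight
        self.scale = d_model ** 0.5

    def forward(self, z, dist2):
        # z: (B, N, d) node embeddings; dist2: (B, N, N) squared distances between nodes
        q, k, v = self.w_q(z), self.w_k(z), self.w_v(z)
        scores = q @ k.transpose(-2, -1) / self.scale - self.gamma * dist2
        return scores.softmax(dim=-1) @ v

# toy "molecule" with 5 nodes (atoms/bonds/triplets) and 16-dim embeddings
z = torch.randn(1, 5, 16)
coords = torch.randn(1, 5, 3)
dist2 = ((coords.unsqueeze(2) - coords.unsqueeze(1)) ** 2).sum(-1)
print(SoftGraphAttention(16)(z, dist2).shape)          # torch.Size([1, 5, 16])
</code></pre> <p>As gamma is driven to zero this reduces to standard attention, and as it grows large it approaches the "hard" masked graph transformer described above.</p>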
Predicting Molecular Properties
#2 solution 🤖 Quantum Uncertainty 🤖
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Correct</p> <p>Check this code in the <code>AtomTransformer</code> constructor:</p> <p><code> n_atom_embedding = d_model//2 n_type_embedding = d_model - n_atom_embedding - 3 #- 1 - 1 -1 self.type_embedding = nn.Embedding(len(types)+1,n_type_embedding) self.atom_embedding = nn.Embedding(len(atoms)+1,n_atom_embedding) self.drop_type, self.drop_atom = nn.Dropout(embed_p), nn.Dropout(embed_p) </code></p> <p>In my case I bought a 3rd 2080 Ti which I will be using more. In renting I spent ~70$ in vast.ai</p> <p>Yeah! I liked the notion of alining distributions. In the IEEE fraud one I am trying to do the same automatically with deep learning instead of manual alignment. Not proven yet.</p> <p>Yes.</p> <p>We started building a Molecule Transformer which had all couplings in one shot (diff input representation) but stopped it bc we focussed on training the Atomic Transformer instead.</p> <p>Thanks! You made us sweat a lot in the final hours. It was fun tough and we learned a few lessons in those final hours.</p> <p>Say there are N source/reference atoms (<code>atom_index_0</code> in the csv file) for a given molecule. We build N sibling molecules, and for each the GT is the array of 29 atoms (max) where we place valid j-couplings at target (<code>atom_index_1</code> in the csv) indices.</p> <p>We want to thank Kaggle and CHAMPs for organizing such an awesome competition: </p> <ul> <li>No leakage.</li> <li>Same distribution in train, private and public test (very stable CV vs. LB, public LB vs. private LB). This is very relevant in other competitions given so many participants when the 3 distributions are different some winners (not all, e.g. <a href="https://www.kaggle.com/c/microsoft-malware-prediction/discussion/84069#latest-499864">CPMP approach to manually align test ~ train in the Microsoft malware comp</a>) are just lucky they fit private test distribution by chance; not the case here. </li> <li>Inspiring and useful science problem hopefully used for good purposes as described in the context (new drugs, etc.).</li> </ul> <p>Neither my teammate <a href="/pavelgonchar">@pavelgonchar</a> nor I had any previous domain expertise and we made the decision early on that we would tackle this problem using a very pure deep learning way: letting the model build the features for us, not the other way around (b/c obviously we were at a disadvantage if we tried to become quantum experts in 1 month… hence our team name 🤖Quantum Uncertainty🤖 we didn't know if our yet-to-be-developed approach was going to work).</p> <p>Our solution had two major parts: 1) the input representation and 2) deep learning architecture.</p> <p><strong>Input representation</strong></p> <p>This is in our opinion the key part: we take a molecule and a source atom and move it so the source atom is @ (0,0,0). For each molecule we create N molecule siblings (N being as many source atoms are defined for that molecule), and each molecule sibling is translated so its source is at (0,0,0). 
</p> <p>The <code>x</code> (input) are three arrays of dimension 29 (maximum number of atoms): </p> <p>1) <code>x,y,z</code> position of each atom, 2) <code>atom type</code> index (C=0, H=1, etc…) 3) <code>j-coupling type</code> index (1JHC=0,'2JHH=1,etc.)</p> <p>Padding is done by placing -1 in <code>atom type</code> index and <code>j-coupling type</code> for molecules which have less than 29 atoms.</p> <p>The <code>y</code> (ground truth) is just an array of dimension 29 containing j-couplings located at target atom indices.</p> <p>Note that there is no graph information nor any other manually engineered features.</p> <p><strong>Data augmentation</strong></p> <p>We did two types of data augmentation: - Rotations: which worked and were useful in our first attempt model: pointnet-based, but proved worthless in the final models (atomic transformer). - J-coupling symmetry: as described <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/94706#latest-563148">in this discussion</a></p> <p><strong>First attempt: Pointnet-inspired architecture (got up to -2.28200 LB)</strong></p> <p>Our input representation is basically a point cloud: an unordered set of elements with absolute positions <code>x,y,z</code> and two attributes <code>atom type</code> and <code>j-coupling type</code>. We modified the <a href="https://arxiv.org/abs/1612.00593">Pointnet</a> architecture to regress j-couplings. Training was a bit unstable and we tried many variations of the architecture, swapping FC layers by linear (fixed) projections (Hadamard), adding coulomb matrix as input, etc. </p> <p>While this worked OK and got us to -2.28200 LB (ensembling a few models) we felt that this architecture was limited by the extreme pooling/bottleneck operation so we decided to explore other architectures: meet the Atomic Transformer.</p> <p><strong>Final architecture: meet the Atomic Transformer</strong></p> <p>You may know that the recent NLP revolution is mostly due to the transformer architecture described in the <a href="https://arxiv.org/abs/1706.03762">Attention is all you need paper</a>. The vanilla transformer architecture uses a very clever technique to add positional encodings that are needed for position-dependent input, such as language. </p> <p>Our input representation is a set, which means we can (and should) remove positional encoding. Prior to this competition we had no experience with transformers either but there's a section in <a href="https://www.youtube.com/watch?v=Kedt2or9xlo">Lex Fridman MIT podcast interviewing Orion Vinyals</a> where he mentions the inherent position invariance of a barebone transformer encoder layer. This immediately triggered the idea of using transformer layers (encoders) stacked taking as an input <code>x,y,z</code> (normalized but otherwise as-is), and <code>atom type</code> and <code>j-coupling type</code> embeddings; just concatenated… nothing fancy. </p> <p>The dimension of the embeddings was such that the total dimension of the input vectors was <code>d_model</code> (as normally reference in transformer literature). We started with 256 and got immediately great results surpassing our pointnet-inspired architecture so we followed this path.</p> <p>We trained a total of 14 models, with varying dimensions from 512 to 2048 and layers from 6 to 24. Each model parameter size ranged from ~12M to ~100M (biggest model).</p> <p>We trained some models from scratch, others we fine-tuned. 
We also fine-tuned a few models on the troublesome j-couplings: reaching -2.12 CV on 1JHC and -2.19 CV on 1JHN.</p> <p>Our best score is an ensemble of 14 models achieving private LB of -3.22349, and our best single model achieved private LB of -3.16234, again just with <code>x,y,z</code>, <code>atom type</code> and <code>j-coupling type</code> inputs (no QM9, etc.).</p> <p><strong>What didn't work</strong></p> <p>Many things! We tried:</p> <ul> <li>Multi-task learning using contributions and other organization-provided values.</li> <li>Dropout: We made multiple attempts to add dropout at various stages (embeddings, encoder layers, pre-decoder, etc.). None of them worked.</li> <li>Knock-out: We added a variation in which we deleted 10% of the input atoms, the idea being that the model would build an internal representation of the missing atoms. Surprisingly this worked in that the model still converged nicely, but it failed to reduce the train ~ val gap.</li> <li>Rotations and TTA in the Atomic Transformer: it didn't reduce the train ~ val gap and didn't produce meaningful TTA gains.</li> <li>Deep decoder: Our decoder is just a projection of ~ the model dimension to 1 (scalar coupling). We tried adding more expressive power to the decoder but this didn't help.</li> <li>Fp16 training: This worked for models of dimension 256, but as training evolved it gave <code>NaN</code>s despite numerous attempts to fix it.</li> </ul> <p><strong>Source code</strong></p> <p>We will make the source code available once we clean it up. It's a single Jupyter notebook using FastAI. Be patient.</p> <p><strong>Computational resources</strong></p> <p>We had more ideas than computational resources, even if our computational resources were not tiny:</p> <ul> <li>3 x 2080 Ti + 128 Gb RAM + 16c32t processor</li> <li>2 x 1080 Ti + 64 Gb RAM + 8c16t processor</li> <li>Rented 8+ 2080 Ti + 64 Gb RAM + 16c32t processor (multiple machines rented as needed)</li> </ul> <p><strong>Final thoughts</strong></p> <p>This was our most fun and hardest competition so far:</p> <ul> <li>Challenging problem</li> <li>Most teams in the top 5 had domain experts (although we went <em>domainless</em> as part of our strategy)</li> <li>Hungry computational resources</li> </ul> <p>Even if we lost the #1 position just a few hours before competition end, we feel very excited that we were able to achieve such <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/98375#latest-569312">useful results for the organizers</a>. In retrospect we believe a single Atomic Transformer model may achieve even better results with further training.</p> <p>Best - Pavel &amp; Andres</p> <p>p.s. No graph NNs. We thought of graphs as manually engineered features that the model can infer by itself.</p>
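<p>To make the input representation above concrete, here is a minimal sketch of how the per-source-atom arrays could be built. This is not the authors' code: the helper name <code>build_inputs</code>, the vocabularies, and the loss mask are illustrative assumptions.</p>

```python
import numpy as np

MAX_ATOMS = 29
ATOMS = ['C', 'H', 'N', 'O', 'F']                                  # atom-type vocabulary
TYPES = ['1JHC', '1JHN', '2JHH', '2JHC', '2JHN', '3JHH', '3JHC', '3JHN']

def build_inputs(xyz, atom_symbols, couplings):
    """Build one sample per source atom of a molecule.

    xyz          : (n_atoms, 3) float array of coordinates
    atom_symbols : list of n_atoms element symbols
    couplings    : iterable of (atom_index_0, atom_index_1, type, scc) rows
    Returns a list of (positions, atom_idx, type_idx, target, mask) tuples.
    """
    xyz = np.asarray(xyz, dtype=np.float32)
    n = len(atom_symbols)
    samples = []
    for src in sorted({row[0] for row in couplings}):
        # translate the molecule so the source atom sits at (0, 0, 0)
        pos = np.zeros((MAX_ATOMS, 3), dtype=np.float32)
        pos[:n] = xyz - xyz[src]

        # -1 marks padding for non-existing atoms / non-coupled targets
        atom_idx = np.full(MAX_ATOMS, -1, dtype=np.int64)
        atom_idx[:n] = [ATOMS.index(s) for s in atom_symbols]
        type_idx = np.full(MAX_ATOMS, -1, dtype=np.int64)

        # ground truth: couplings placed at the target atom indices
        target = np.zeros(MAX_ATOMS, dtype=np.float32)
        mask = np.zeros(MAX_ATOMS, dtype=bool)
        for a0, a1, ctype, scc in couplings:
            if a0 == src:
                type_idx[a1] = TYPES.index(ctype)
                target[a1] = scc
                mask[a1] = True      # assumed: loss computed only on valid target positions
        samples.append((pos, atom_idx, type_idx, target, mask))
    return samples
```

<p>The three input arrays are then embedded, concatenated to <code>d_model</code>, and fed to stacked transformer encoder layers with no positional encoding, as described above.</p>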
Predicting Molecular Properties
#4 Solution [Hyperspatial Engineers]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone,</p> <p>First I'd like to thank the organizers for the well organized competition, but I'd also like to thank other competitors for making things fun, your great scores pushed us to do much more and get much better scores than we have originally though we could achieve.</p> <p>Here we'd like to share our solution which is also based on a Graph Transformer network, though with a few twists.</p> <h3>Input data</h3> <p>We have used OpenBabel to infer bonds from atom coordinates, and have used some custom code to fix what we identified as mistakes made by OpenBabel.</p> <p>We construct a graph where each node is an atom, each bond becomes an edge, then we add artificial edges between all nodes at distance 2 and 3 in the graph (2J and 3J edges respectively). All edges are directional, so we have one edge for each direction to make the graph fully bidirectional.</p> <p><strong>Attributes</strong>: - Nodes have atom type embedding, electronegativity, first ionization energy, electron affinity, Mulliken charge taken from the QM9 data (originally predicted using the same model architecture trained on the provided Mulliken charges) - All edges have distance and edge type embedding (single bond, double bond, triple bond, 2J, 3J) - 2J edges have bond angle on the atom they are skipping over. - 3J edges have dihedral angle between the atoms they are connecting</p> <p>No explicit XYZ data is used, as we wanted to make the model rotation and translation invariant.</p> <p>All input data was normalized to zero mean unit variance.</p> <h3>Model</h3> <p>Core of the architecture is the graph attention network with multiple heads, with a few twists: - Attention heads do not attend to all data from previous layer, but only the output of the same head from previous layer - All edge embeddings are first updated from triplets (src, edge, dst), the attention then updates atom embeddings by aggregating over edge embeddings, not neighboring node embeddings - We use gated residual connections between attention blocks similar to: <a href="https://arxiv.org/abs/1805.10988">https://arxiv.org/abs/1805.10988</a> - We output scalar coupling constants directly on the edges, this makes it two predictions, one for each edge direction and each is treated independently in the loss function. These two predictions are averaged to get a final prediction making this a kind of micro-ensemble.</p> <p>Loss function is a mean of MAEs per coupling type. We have tried mean log MAE but it was giving us worse results. 
We have used both targets scaled to zero mean and unit variance and zero-mean targets with no variance scaling, as different types benefited from different setups.</p> <p>The quickest way to outline the network is with code:</p>

```python
emb = 48
heads = 24
bias = False

def AttnBlock(in_emb, out_emb):
    return nn.Sequential(
        EdgeLinear(in_emb, out_emb),
        NodeLinear(in_emb, out_emb),
        GraphLambda(lambda x: x.view(x.shape[0], heads, -1)),
        TripletCat(out='triplet'),
        MagicAttn(emb, 3 * emb, heads, attn_key='triplet'),
        TripletMultiLinear(emb, emb, emb, heads, bias=bias),
        GraphLambda(torch.nn.LayerNorm(heads * emb))
    )

net = nn.Sequential(
    Embed(emb, emb),
    AttnBlock(emb, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    GatedResidual(AttnBlock(emb * heads, emb * heads), emb * heads, emb * heads),
    GraphLambda(nn.PReLU()),
    EdgeLinear(emb * heads, 512, bias=True),
    GraphLambda(nn.PReLU(), node_key=None),
    EdgeLinear(512, 8, bias=True)
)
```

<h3>Optimizer</h3> <p>We have used the LAMB optimizer (<a href="https://arxiv.org/abs/1904.00962">https://arxiv.org/abs/1904.00962</a>), again with a small twist: we noticed that the weight decay is included in the step norm on which the trust ratio is calculated. This didn't make sense to us, as weight decay should be independent of the update based on the batch gradient, so we moved the application of weight decay to after the LAMB update, and this gave us better results. We call this LAMBW.</p> <h3>Training regime</h3> <p>We have split the data into 90/10 train/eval, two times with two different splits. We used a one-cycle learning rate for 30 epochs with high weight decay, then dropped weight decay and continued training until eval saturation (~70 more epochs). Then we fine-tuned 100 epochs for each type to get further improvement (except for *JHN types, as they didn't improve with fine-tuning).</p> <p>We have used Stochastic Weight Averaging (<a href="https://arxiv.org/abs/1803.05407">https://arxiv.org/abs/1803.05407</a>) of the last 25 epochs to get to a better minimum.</p> <p>We then average the predictions of the two training runs for the two splits we had.</p> <h3>What didn't work</h3> <p>Many, many things:</p> <ul> <li>RAdam</li> <li>Combining RAdam and LAMBW</li> <li>Many different forms of attention mechanisms</li> <li>Mean log MAE loss, mean MSE loss</li> <li>GradNorm for multitask learning for different coupling types <a href="https://arxiv.org/abs/1711.02257">https://arxiv.org/abs/1711.02257</a></li> <li>Multi-Task Learning as Multi-Objective Optimization <a href="https://arxiv.org/abs/1810.04650">https://arxiv.org/abs/1810.04650</a></li> <li>And many, many other things</li> </ul> <p>The full package with transformed data, additional data (and all code, which is also included in champs_code.tgz) is 7GB, so we are providing a link <a href="https://zenodo.org/record/3406154#.XXpX_ygzabg">https://zenodo.org/record/3406154#.XXpX_ygzabg</a> instead of uploading directly to the Kaggle forum.</p>
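<p>As a rough illustration of the graph construction above (not the team's code; the use of <code>networkx</code> and the helper name are assumptions), the artificial 2J/3J edges can be derived from the inferred bonds via shortest graph distances, adding both directions to keep the graph fully bidirectional:</p>

```python
import itertools
import networkx as nx

def build_directed_edges(n_atoms, bonds):
    """Return directed edges labelled by graph distance: 1 (bond), 2 (2J), 3 (3J).

    bonds: iterable of (i, j) atom index pairs, e.g. inferred with OpenBabel.
    Bond order, bond angles (2J) and dihedral angles (3J) would be attached
    as edge attributes in the full pipeline.
    """
    g = nx.Graph()
    g.add_nodes_from(range(n_atoms))
    g.add_edges_from(bonds)

    dist = dict(nx.all_pairs_shortest_path_length(g, cutoff=3))
    edges = []
    for i, j in itertools.permutations(range(n_atoms), 2):   # one edge per direction
        d = dist.get(i, {}).get(j)
        if d in (1, 2, 3):
            edges.append((i, j, d))       # the distance doubles as the edge-type label
    return edges

# toy example: a 9-atom molecule with an ethanol-like bond skeleton
print(build_directed_edges(9, [(0, 1), (1, 2), (2, 3), (0, 4), (0, 5), (0, 6), (1, 7), (1, 8)]))
```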
Predicting Molecular Properties
#57 Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Published <a href="https://www.kaggle.com/stocks/schnet-by-j-type-helped-me-win-silver-57">SchNet Kernel by J-Type</a></p> <p>Welcome to the community! Enjoy the journey!</p> <p>i've added my final sub to the summary. it should blend well with LGBM since it is mostly SchNet. I will publish SchNet Kernel with the batch parameters I used for each type later today.</p> <p>Thank you!</p> <p>This turned out to be one of my favorite contests. I have always been fascinated by chemistry and have a reasonable knowledge base in chemistry in general. Originally, I hoped to use my chemical intuition to get an edge in this contest. Bu I realized soon enough that, since I had zero knowledge in molecular modeling, my edge was very slim.</p> <p>I sat on the fence for a very long time reluctant to invest time in learning the state of the art of molecule representation and modeling of molecular properties. Luckily, the Kaggle community came through with many great kernels that I could sample quickly and zero down to a few that were most promising in my hands.</p> <p>I started working hard on this contest about two weeks ago. For a long time I was out of medal range. Then, things started to click around one week ago and I managed quickly move into solid silver range within a few days.</p> <p>There was not much time left and models took long time to run. Since there was very little danger of overfitting to LB from what I observed on oof predictions, I chose not waste resources on creating oof prediction and blend old models with newer versions as they came in using LB as the only guide. The current standings and scores: #57 and -2.21819 (Private) and #58 and -2.22542 (Public), prove that that was a solid strategic choice.</p> <p>The three kernels below (with minor tweaks) contributed the most to the final blend: <a href="https://www.kaggle.com/toshik/schnet-starter-kit">SchNet Starter Kit</a> by <a href="/toshik">@toshik</a> , who barely missed Gold with -2.8 solo score! Amazing! <a href="https://www.kaggle.com/criskiev/distance-is-all-you-need-lb-1-481">Distance - is all you need. LB -1.481</a> by <a href="/criskiev">@criskiev</a> , whose team got Silver! <a href="https://www.kaggle.com/xwxw2929/keras-neural-net-and-distance-features">Keras Neural Net and Distance Features</a> by <a href="/xwxw2929">@xwxw2929</a>, who got solo Bronze! Basically, most credit for my 🥈 goes to the creators of these great kernels. I am glad they all did great in the contest. Congrats!</p> <p>At first the core of my blend was Sergii’s kernel, then Aicanghai_Smile’s Keras version started to weigh in more, but, in the last 5 days or so, I realized that <strong>the biggest hidden gem is actually SchNet.</strong> Wow! In the final blend it has by far the most weight. </p> <p>For SchNet my main modifications was training separate models for each type and adjustment of batch size upward (often I used 16 for mini-batches). Each of 8 models took 6-8 hrs to train for 60-90 epochs using Kaggle’s GPUs. 
The best SchNet model was -1.971 and I feel that I barely scratched the surface of what it could really accomplish.</p> <p>99% of the computing was done with Kernels. It is great to have access to such powerful GPUs. My main wish is that GPU kernels would not freeze two days before the deadline. In the end, I ran out of time before I could apply my secret chemical intuition skills : (. I am guessing that a promising strategy to improve further is to split molecules into finer subgroups based on molecular features such as double bonds, rings, etc. and then train separate SchNet models on these subgroups.</p> <p>Congrats to all! Thank you to the organizers and Kaggle for making this a great contest!</p> <p><strong><em>Update:</em></strong> <a href="https://www.kaggle.com/stocks/schnet-by-j-type-helped-me-win-silver-57">SchNet Kernel by J-Type</a></p> <p>👍 </p> <p>60-90 epochs for each type, probably 50 hrs of GPU time to cover the whole train set once. But I simply did not have time to optimize anything. I cut train time by maybe 20% in the last day but only managed to cover 1/3 of the dataset (lots of snags). </p> <p>SchNet should be able to reach -2.4 at least with proper params, I believe. </p> <p><a href="/jesucristo">@jesucristo</a> Very impressive LGBM score!</p> <p>I don't know if publishing the kernels is practical. They take forever to run. It is essentially the original SchNet just split by type and with longer train time. Usually, I did not submit individual results but blended them right away with the previous best blend. Basically, I don't know which kernel set is the best. I will look into the batch sizes I used.</p> <p>Everything went crazy in the last few days: GPUs died and I lost access to my remote machine where I combined predictions. I ended up hastily re-running models with slightly different parameters and piling them all into one monster stew: it looked disgusting but ended up tasting good. LOL.</p> <p>I guess I could share my final monster blend submission via kernels for others to blend with. Maybe it is the fastest route. I guess it is OK to publish the submissions now.</p>
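<p>The blending itself is just a weighted sum of the per-kernel submission files; a minimal sketch follows (the file names and weights here are purely illustrative, since the actual weights were tuned incrementally against the LB):</p>

```python
import pandas as pd

# hypothetical submission files and blend weights, with most weight on SchNet
weights = {
    'schnet_by_type.csv': 0.6,
    'keras_distance_nn.csv': 0.25,
    'lgbm_distance.csv': 0.15,
}

parts = []
for path, w in weights.items():
    sub = pd.read_csv(path).set_index('id').sort_index()
    parts.append(w * sub['scalar_coupling_constant'])

blend = sum(parts).rename('scalar_coupling_constant').reset_index()
blend.to_csv('blended_submission.csv', index=False)
```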
Predicting Molecular Properties
#9 Solution: model with only atom type and distance
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Yes. We should appreciate them as winners of this competition. Their results broadened my view and helped to break my stereotypes about developing deep learning models for molecules.</p> <p>Hi, all Kagglers,</p> <p>I am glad to share a brief explanation of my method and why I designed the model the way I did. I really appreciate the organizers for holding such a nice competition. I learned a lot and really enjoyed the competition. All questions and opinions are always welcome!</p> <ol> <li><p>Strategy: I have some domain knowledge in quantum chemistry, but nothing specific to this competition. The only domain expertise I used is that "all quantum mechanical properties are determined by atom types and the distances between atoms". This was my fundamental strategy, so I used only atom type and distance as input features. However, other participants showed that other features can improve the performance of their models. More surprisingly, some models which don't preserve permutation, rotational, and translational invariance outperformed my models. It really shocked me! I really appreciate their efforts to find such an innovative idea. It gave me inspiration for my research topic. I guess that the reason why such things don't cause any problems in this competition is that there are some hidden patterns in the atom ordering and 3D geometry of the molecules of the QM9 set. I wonder how the performance of those models would change if we randomly changed the atom ordering and randomly rotated and translated the geometry of the molecules by a large amount. In addition, I still personally believe that such conditions must be satisfied to develop robust and reliable models to predict molecular properties. </p></li> <li><p>Model: I represented a molecule with a fully connected graph. The initial node feature includes only the atom type and the initial edge feature includes only the distance between atoms. I started with the MPNN architecture. In contrast to the original MPNN, my model predicts the scalar coupling between two atoms from their node states obtained after the propagation step. I used 8 different fully connected layers, one for each type of scalar coupling. I found that not sharing interaction layers improved the performance. I used 4 interaction blocks with 8 towers and 256 dimensions. This model achieved about -1.7 to -1.8 on the public LB. The following are my major modifications and the corresponding LB scores.</p> <ul><li>Replacing the one-hot encoding of distance with the RBF expansion proposed in the SchNet paper: LB -2.1 to -2.2</li> <li>Edge update network: -2.4</li> <li>Edge update network with GRU update function: -2.6</li> <li>Hyperparameter tuning: -2.8</li> <li>Ensemble of 10 models with different initial weight parameters: -2.98</li></ul></li> </ol> <p>Finally, I congratulate all the prize winners and thank all participants. I appreciate the organizers' efforts again.</p>
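<p>The single biggest jump listed above came from replacing the one-hot distance encoding with the SchNet-style radial basis (Gaussian) expansion. A minimal sketch of that expansion follows; the hyperparameters are illustrative, not the author's exact settings.</p>

```python
import torch

def rbf_expand(dist, d_min=0.0, d_max=12.0, n_centers=64, gamma=10.0):
    """SchNet-style radial basis expansion of interatomic distances.

    dist: tensor of distances (any shape); returns a tensor of shape (..., n_centers).
    """
    centers = torch.linspace(d_min, d_max, n_centers, device=dist.device)
    return torch.exp(-gamma * (dist.unsqueeze(-1) - centers) ** 2)

# example: initial edge features for a toy molecule with 5 atoms
xyz = torch.randn(5, 3)
dist = torch.cdist(xyz, xyz)        # (5, 5) pairwise distances of the fully connected graph
edge_feat = rbf_expand(dist)        # (5, 5, 64) smooth distance features
```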
Predicting Molecular Properties
10th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Sauve. Based on each type, we have 80-150 features. Best LGBM is -2.1. I will share more later. </p> <p>Thanks Andrew and I love all your kernels in all competitions. You are very passionate and a great asset for Kaggle community. </p> <p>1 day for 1/4 fold of JHC on P100. Around 500-800 epochs as cyclicLR requires a lot of epochs. </p> <p>Thanks Agnis. I want to compete in IEEE but I’m completely worn out. I wish you get a good result, and a little bit lucky in that competition (will it shuffle? I think so...)</p> <p>Thanks the organizers for such a great competition and great dataset. I just remembered the very recent nightmarish LANL competition, where its organizers were totally inactive and leaked dataset, then this Molecule competition is indeed one of the best ever. I love it very much.</p> <p>And special thanks to all of my teammates, who each one actually did a great deal of work from the beginning. Thanks so much.</p> <p>In summary, main methods are:</p> <ul> <li><p>SchNet1 (1st train) with input features: node-features (x, y, z, atom_type, mulliken_charges), edge-features (bond binary (for pairs of atoms with distance 1), angle (for pairs of atoms with distance 2), and dihedral (for pairs of atoms with distance 3). We predict 4 scc (scalar_coupling_constant) contributions and scc itself. This model gave us around -2.7 LB.</p></li> <li><p>SchNet2 (2nd train): the model is the same as SchNet1, except we feed 4 out-of-fold scc contributions and scc itself as edge features where applicable, EXCEPT the edge of the learning instance itself (so in other words, we prevent leak from this, and this is not a stacking method). This model gave us -2.85 LB.</p></li> <li><p>Scheduling: Cyclic LR. </p></li> <li><p>Train 3 to 4 different seeds of 4-fold for each type. Some small types (xJHN) we also had an extra training of 8-fold.</p></li> <li><p>Quantile regression and median averaging: since MAE cares more about predicting median of the variables, we tried to use quantile regression (with q=0.5), instead of linear regression, for 2 sub-tasks: 1) blending periodical valley model checkpoints of CyclicLR, where we removed the first 50% earlier checkpoints and only kept the last 50% checkpoints. 2) blending oof predictions of seeds of SchNet1 and SchNet2. This gave us around 0.01-0.02 boost on LB. For averaging test folds predictions, we also use median instead of mean, and found that it worked perfectly on LB.</p></li> </ul> <p>Now, I would like to describe the process of methods from start to end, like a journey.</p> <p>First we began with tree-based models, and without much chemistry knowledge. - Zidmie <a href="/zidmie">@zidmie</a> in our team wrote a custom code to extract bond information from only 3D coords, which was really accurate and impressive (and later proved unnecessary since we have some chemistry toolboxes to do that). - We even know aromatic indicator of each atom by his method! 
His early work laid a good foundation to get our team to top 15, at the time we were strong enough to add in <a href="/joshxsarah">@joshxsarah</a> and Kyle <a href="/petersk20">@petersk20</a>, who had great mindsets and diverse ideas.</p> <p>It was proved that feature importance permutation is a spectacularly good technique to eliminate useless features. Josh and Kyle with their strong features, together with importance permutation once got us to 6th place. But not late after that we gradually slipped on the LB as we saw tree-based models can't compete GCN in this competition.</p> <p>We then developed a custom GCN code based on Heng's shared code in the forum, which got us -2.5 to -2.6 LB, but can never reach -2.7. </p> <p>Then we merged with Akira <a href="/akirasosa">@akirasosa</a> , who has extremely great skills in NN!</p> <p>In the remaining of the competition onwards when having 5 people, we exploited all possible local GPUs (6) as well as 6 accounts of Google Cloud Platform (each with $300 free credits), to boost the score until the end.</p> <p>My teammates, especially Akira, will add in necessary information regarding SchNet if I missed something. You can view his separate post about SchNet architecture here: <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106293#latest-610869">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106293#latest-610869</a></p> <p>Our kernels are here: [GCN from Heng's code] <a href="https://www.kaggle.com/joshxsarah/custom-gcn-10th-place-solution">https://www.kaggle.com/joshxsarah/custom-gcn-10th-place-solution</a> [SchNet light Kaggle version] <a href="https://www.kaggle.com/petersk20/schnet-10th-place-solution">https://www.kaggle.com/petersk20/schnet-10th-place-solution</a></p> <p>Thanks for reading.</p> <p>Thanks Nanashi <a href="/jesucristo">@jesucristo</a> . No, we just used conventional PyTorch implementation.</p> <p>We edited Heng's code considerably much, and add a final layer which uses LGBM features. That can allow Heng's code to reach -2.5. We will share the kernel soon.</p> <p>Each new credit card will have $300 first free from Google, so 6 accounts = $ 1800. However, computation on GCP is extremely expensive! We only managed to train 2 seeds (on 8 models of 8 types) with those $1800 credits.</p> <p>Akira in our team has 5 GPU 1080Ti (if I remember it right...), besides, we use 6 accounts worth of $1800 on GCP, and select P100. </p> <p>Hi RNA, If you used permutation importance too, but cannot reach -2.1, I guess we might have just some better features. We crafted some binary features indicating groups of atoms, such as -CH3, -COOH, -OH... so we know that if those groups are connected to any atom of the molecule. </p> <p>Hi Sauvé. If you used permutation importance too, but cannot reach -2.1, I guess we might have just some better features. We crafted some binary features indicating groups of atoms, such as -CH3, -COOH, -OH... so we know that if those groups are connected to any atom of the molecule. </p> <p>Can you delete all those above comments? I can’t do that because you are the owner of comments. I lost my mind. Sorry for hurting you and the readers. Nothing to do with LANL. Just my bad behaviour. Not good to keep it. So please delete. </p>
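<p>A small sketch of the q=0.5 quantile-regression blending and the median fold-averaging described above. It uses scikit-learn's <code>QuantileRegressor</code> purely for illustration; the team's actual implementation may have differed.</p>

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def blend_with_median(oof_preds, y_true, test_preds):
    """oof_preds : (n_train, n_models) out-of-fold predictions (e.g. SchNet1/SchNet2 seeds)
    y_true     : (n_train,) true scalar coupling constants
    test_preds : (n_test, n_models, n_folds) test predictions per model and fold
    """
    # q=0.5 quantile regression optimises MAE directly, unlike ordinary least squares
    blender = QuantileRegressor(quantile=0.5, alpha=0.0)
    blender.fit(oof_preds, y_true)

    # median (not mean) across folds, then apply the learned blend to the test set
    per_model = np.median(test_preds, axis=2)
    return blender.predict(per_model)
```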
Predicting Molecular Properties
12 Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I suspect it's because our model works on very different principles. Most other models were based on graph neural networks or SchNet. These are perform rotationally invariant calculations. Ours, however, is rotationally <em>equivariant</em>. We also do many of the calculations in rotational Fourier space.</p> <p>Our implementation is built on PyTorch. We're still working on making it presentable, but hopefully an alpha version should be up soon.</p> <p>Son's and Horace's codes are built for graph neural networks. There is a lot of ideological overlap with cormorant, but (unless I'm mistaken) they don't work explicitly with rotational equivariance. The method we are using does, however.</p> <p><a href="https://github.com/zlin7/CGNet">https://github.com/zlin7/CGNet</a></p> <p>Until we get cormorant up, this code might be informative.</p> <p>In the long-run, we'd really like to a write a custom-library for performing rotational equivariant computations. The problem is that, in general, there is a lot of sparse linear algebra on objects that can look like lists of matrices of different shapes. This means that, to really do the calculations fast on a GPU, one needs custom codes. Hopefully we can this this up soon as well.</p> <p>Sorry for the belated answer. The Supplement is coming :-). </p> <p>Here is a belated ELI5. The basic approach is that we are building an architecture to generalize the idea of standard convolutional layers in Convolutional Neural Networks (CNNs) to molecules. The key property that convolutional layers are designed to preserve is <em>translational covariance</em>: if you translate the input to the network, the output of the network translates as well. Ultimately, this allows you to learn commonalities across an image: if you learn what an object (e.g. a mug) looks likes in one part of the image, you can detect it in a different position in a different image. It’s also worth discussing how CNN’s work: the covariance comes from a linear transformation (the convolution, hence the name), which is followed by a standard nonlinearity.</p> <p>We want to transfer this to molecules. Let’s consider the linear convolution first. But there’s a complication: now the symmetry that we care about isn’t just <em>translation</em>, but <em>rotation in 3 dimensions</em>. If we rotate a molecule, we want our signal to rotate with it. Rotations are harder than translations. There are several complications, including the fact that it is not clear how to “discretize” rotations to perform a convolution without breaking symmetry in some way.</p> <p>Our approach is to use ideas from Fourier analysis. It turns out that it is possible to generalize Fourier analysis not just functions on the line, <em>but in the space of rotations</em>. This is nice, because convolutions in Real space look like multiplications in Fourier space. So if we have a Fourier transformed signal, performing convolutions is easy [1]. So we Fourier transform our data, at which point our linear convolutions are just linear algebra.</p> <p>So this helps us define the linear section: what about the nonlinear bit? 
It’s actually somewhat nontrivial to build a nonlinear function that doesn’t break covariance, or require transformation back into Real space (which can lead to numerical artifacts). However, one thing we can do is multiply signals together. For regular Fourier analysis, this is like constructing beats out of our Fourier modes [2, 3]. We can then efficiently decompose the beating signals into it’s Fourier components using known relations (these are the famous Clebsch-Gordan coefficients), allowing us to keep everything in Fourier space.</p> <p>The only thing that is left to do is to define the signals. This part is a bit technical, but basically in each layer, we grab signals from neighboring atoms based on their relative orientation of the atom and it’s signal (because our signals are rotationally covariant, they have a notion of orientation) [4]. We then perform the Fourier-space convolution and a few (tensor) multiplications, rinse and repeat!</p> <p>[1] See <a href="https://arxiv.org/pdf/1802.03690.pdf">https://arxiv.org/pdf/1802.03690.pdf</a> for further reading [2] See <a href="https://en.wikipedia.org/wiki/Beat_(acoustics)">https://en.wikipedia.org/wiki/Beat_(acoustics)</a> [3] Specifically we actually take tensor products of signals. This is more general, but just thinking about multiplications gets the right intuition across. [4] The coefficients for this message passing looks a lot like an edge network, and is what we actually read the signals off of for this Kaggle challenge.</p> <p>Thank you! We're very excited to see how far this architecture can go on point-cloud data.</p> <p>Absolutely! We're hoping to release a software package soon (in the next few weeks) for public use.</p> <p>We basically used this.</p> <p><a href="https://arxiv.org/abs/1906.04015">https://arxiv.org/abs/1906.04015</a></p> <p>We modified the architecture slightly so that the outputs would be on the connections of atoms, but that was pretty much it. For simplicity, we trained one net each on j_1, j_2, and j_3 couplings, and then split it into one net for each coupling type (e.g. 1JHC).</p> <p>The only slightly fancy thing we did was that we pretrained by initially using the Mulliken charges as features, which we removed further on in the training.</p> <p>That's it, no particularly clever tricks :-).</p> <p>P.S. If anyone has any questions about the paper, Brandon and I will be happy to address them to the best of our ability.</p>
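<p>The covariance property being described can be checked numerically on a toy per-atom feature: a vector built from rotation-invariant weights and relative positions rotates with its input. This is only an illustration of the property f(Rx) = R f(x), not the Cormorant architecture itself.</p>

```python
import numpy as np
from scipy.spatial.transform import Rotation

def toy_equivariant_feature(xyz):
    """For each atom, sum the relative position vectors to the other atoms,
    weighted by a rotation-invariant function of the distance."""
    diff = xyz[None, :, :] - xyz[:, None, :]               # (n, n, 3) relative vectors
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(xyz))
    weight = np.exp(-dist)                                  # invariant under rotation
    return (weight[..., None] * diff).sum(axis=1)           # (n, 3) vector per atom

xyz = np.random.randn(7, 3)
R = Rotation.random().as_matrix()

# rotating the input rotates the output: f(R x) == R f(x)
assert np.allclose(toy_equivariant_feature(xyz @ R.T),
                   toy_equivariant_feature(xyz) @ R.T)
```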
Predicting Molecular Properties
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!!!</p> <p>Thank you, and congratulations to you too, <a href="/akirasosa">@akirasosa</a>! I never expected my slides to be used in a place like this, but I am glad if they helped even a little.</p> <p>Thanks for your interest! I may publish my code, but I need some time because of its complexity... I used two GeForce 2080 Ti GPUs, and p3.2xlarge or p3.8xlarge instances in the last week of the competition. The 1st-stage (300 epochs) and 2nd-stage (150 epochs) training each take about 1 day. The effect of the 1st-stage training on the overall performance was not significant, so I froze the 1st-stage model at an early point. The 3rd-stage (130 epochs) training takes about 10 h to 1 day for each coupling type (1JHN has less data, so its computation time is shorter: 10 h). As mentioned above, our model was not computationally efficient, so we could only complete 2 of 10 folds at the 4th stage.</p> <p>Thanks, and congrats to you too! Here is the group log MAE curve for the validation set during the 1st-stage and 3rd-stage training.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2311404%2Fb4f20cdba447cc569e536f554cde1b33%2F2019-08-31%209.58.29.png?generation=1567213143907405&amp;alt=media" alt=""></p> <p>The orange line (with pseudo-labels) is better than the blue line (without pseudo-labels) in spite of the same NN architecture. I think one of the reasons why pseudo-labeling worked is the fine-tuning (2nd stage). (I forgot to mention this procedure... Sorry. I revised the above document.) We got 8 models with different parameters but the same architecture in the 2nd stage, which could label with an accuracy that could not be achieved by 1st-stage training. It might be a kind of distillation, I think.</p> <p>Thanks and congrats on the result and on becoming GM! I was afraid of your submissions, which overtook me every day in the last week. But it helped to improve our score, I think.</p> <p>Thank you! I used Chainer and chainer-chemistry, and mainly customized the following parts: <a href="https://github.com/pfnet-research/chainer-chemistry/blob/master/chainer_chemistry/links/update/schnet_update.py">https://github.com/pfnet-research/chainer-chemistry/blob/master/chainer_chemistry/links/update/schnet_update.py</a></p> <p>First of all, I would like to thank the competition hosts. <br> Here I will try to summarize some of the main points of our solution.</p> <h2>1. Model architecture</h2> <p>We chose SchNet (CFConv) as the base neural network architecture. Details are shown below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2311404%2F77175b9a4b8e3a6f892a7ad421fe1851%2F2019-08-29%2018.31.19.png?generation=1567071105142394&amp;alt=media" alt=""></p>
<ul> <li>Message passing part <ul><li>SchNet with Edge Update <ul><li>Might be similar to "SchNet EU" in <a href="https://storage.googleapis.com/kaggle-forum-message-attachments/610869/14042/csc_10th.pdf">10th place</a></li> <li>The atom pair features we used are explained in <code>2. Features</code></li></ul></li> <li>Edge Update Networks are prepared independently per graph distance (distance = 1, 2, 3, and 4 or more)</li></ul></li> <li>Readout part <ul><li>Sum up the target atom embeddings updated by CFConv (message passing part)</li> <li>Concatenate the atom pair features (same as used in the message passing part)</li></ul></li> <li>Prediction part: MLP <ul><li>2-layer MLPs for each coupling type (1JHC, 1JHN, etc.)</li></ul></li> </ul> <h2>2. Features</h2> <ul> <li>Atom features <ul><li>Only the one-hot encoding of chemical elements (C, H, etc.)</li></ul></li> <li>Atom pair features <ul><li>Gaussian expansion of the 3D distance</li> <li>Gaussian expansion of dihedral angle features (cosθ, (cosθ)^2)</li> <li>Bond information (e.g. single bond, double bond, etc.)</li></ul></li> </ul> <h2>3. Training</h2> <ul> <li>1st stage <ul><li>Training the model for all coupling types simultaneously.</li></ul></li> <li>2nd stage <ul><li>Fine-tuning the model trained in the 1st stage to each coupling type individually.</li></ul></li> <li>3rd stage <ul><li>Performing the same training as the 1st stage, using data that includes examples pseudo-labeled in the 2nd stage.</li></ul></li> <li>4th stage <ul><li>Fine-tuning the model trained in the 3rd stage to each coupling type individually.</li></ul></li> </ul> <p>Thank you! <a href="/toshik">@toshik</a>, getting such a result solo is impressive, as expected. Your triplet feature was also unique, and I learned a lot from it.</p>
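<p>For reference, here is a minimal sketch of how the dihedral-angle pair feature above can be computed from raw coordinates and then Gaussian-expanded. The function names and expansion hyperparameters are illustrative, not taken from the author's code.</p>

```python
import numpy as np

def cos_dihedral(p0, p1, p2, p3):
    """Cosine of the dihedral angle along the path p0-p1-p2-p3
    (e.g. the bonded path between the two coupled atoms of a 3J pair)."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b1, b2)            # normal of the first plane
    n2 = np.cross(b2, b3)            # normal of the second plane
    return float(np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)))

def gaussian_expand(x, centers=np.linspace(-1.0, 1.0, 16), gamma=8.0):
    """Gaussian expansion of a scalar feature such as cos(theta)."""
    return np.exp(-gamma * (x - centers) ** 2)

cos_t = cos_dihedral(*np.random.randn(4, 3))
pair_feature = np.concatenate([gaussian_expand(cos_t), gaussian_expand(cos_t ** 2)])
```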
Predicting Molecular Properties
14th place solution (GCN from scratch in Kaggle kernel)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I didn't write it in Julia, but I got used to Julia's arrays starting with index 1 rather than 0. I actually prefer it, it makes things a bit more straightforward. </p> <p>The script is now public: <a href="https://www.kaggle.com/cedben/squid?scriptVersionId=19764703">https://www.kaggle.com/cedben/squid?scriptVersionId=19764703</a></p> <p>Ah, I made a mistake. The dataset should be public now!</p> <blockquote> <p>Let me confirm shielding tensor eigenvalues were sorted just like large to small ?</p> </blockquote> <p>Indeed, the eigenvalues are sorted (maybe some sort of permutation-invariant NN would be better?)</p> <blockquote> <p>One more question, training NN take a lot of time. So, I suppose frequent modifications would be very hard. How do you improve the model ?</p> </blockquote> <p>I started with smaller networks, and sometimes based my judgment on the first 10 epochs, as it seemed to correlate with the final CV score. But I also used full runs, which allowed me to still do my day job in-between :) Also at the end I used the full computations both for contributing to the final average and testing different parameters. </p> <p>It helped that Kaggle allowed 5 parallel jobs then...</p> <blockquote> <p>Congratulations. I want be a great kaggler like you. 😊</p> </blockquote> <p>Thanks :)</p> <p>For this competition, I implemented a GCN using keras.</p> <p><strong>Input features</strong>: </p> <p>1/ Atom types (one-hot encoding) 2/ Bond adjacency 3/ Bond types (one-hot encoding, generated using Open Babel) 4/ Bond vectors, normalized 5/ Inverse distance between all atom pairs, together with powers up to 4 6/ Cosine of bond angles for pairs of atoms 2 apart (for which we have to predict the coupling) 7/ Cosine of dihedral angles for pairs of atoms 3 apart (for which we have to predict the coupling) 8/ A flag on atoms whose dihedral contribution come from more than one path because they are part of a loop (in which case the max dihedral angle is retained, which is probably suboptimal)</p> <p>Most features were created "offline", and pickled in a compact form to avoid exhausting the Kaggle instance memory. The data is "decompressed'' per-batch by a keras generator object. All the information regarding pairs of atoms are input in a 29x29 tensor, where 29 is the maximum number of atom per molecule. Entries for non-existing atoms are just zero'd, and masks are used where necessary. </p> <p><strong>Data augmentation</strong>:</p> <p>Random rotations to the bond vectors, to enforce independence from the reference frame. </p> <p><strong>Network structure</strong>:</p> <p>The network consists of 4 sections.</p> <p>1/ The vertex (atoms) and edge (atoms x atoms) information is processed by 4 layers of width 1 convolutions (i.e., the same fully connected network is apply to each vertex independently, and similarly for the edges). 2/ 6 graph convolution layers are applied (transforming the vertex data, but leaving the edge data constant). 3/ A special layer passes the information from the vertices to the edges. 
4/ 4 layers of width 1 convolutions are applied to the edges, to produce the magnetic coupling predictions.</p> <p>Originally I only had 2 (body), 3 (head) and 4 (tentacles) :)</p> <p><strong>Extra data</strong>:</p> <p>The extra training data not present in the test data was used as follows.</p> <p>1/ Mulliken charges and shielding tensor eigenvalues were predicted by two width 1 convolutions from the output of the 3rd GC layer, and the predictions input into the 4th GC layer. 2/ The magnetic contributions are predicted all together at the output of the network and summed to provide the final predictions.</p> <p>Unfortunately, only the eigenvalues of the shielding tensor can be used because the tensors are not provided in the correct reference frame.</p> <p><strong>Regularization</strong>:</p> <p>All layers are followed by ReLU nonlinearities, and most preceded by a Batch Normalization layer.</p> <p><strong>Result</strong>:</p> <p>This produces a score of -2.6 after about 27 hours of script (kernel) runtime (3 times 9-hour continuous runs, with weights saved in-between). Averaging the predictions from about 10 runs yields the -2.9 final score.</p> <p><strong>Potential improvement</strong>:</p> <p>I was working on implementing a GCN with operations equivariant to rotations (as in <a href="https://arxiv.org/abs/1802.08219">https://arxiv.org/abs/1802.08219</a> and <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106275#">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106275#</a>), but didn't have enough time to finish.</p> <p><strong>Script</strong>: <a href="https://www.kaggle.com/cedben/squid?scriptVersionId=19764703">https://www.kaggle.com/cedben/squid?scriptVersionId=19764703</a></p> <p>[edited: network description]</p> <p>Ah, indeed I didn't realize I wasn't setting the index 0. I can't recall why... maybe because I had just switched from months of Julia :D The code appears consistent though. Let me just update the description. Thanks for noticing!</p>
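<p>A minimal sketch of the random-rotation augmentation applied to the bond vectors (not the author's Keras generator; the QR-based sampler is just one standard way to draw a random rotation matrix):</p>

```python
import numpy as np

def random_rotation_matrix(rng=np.random):
    """Draw an (approximately uniform) random 3D rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix the signs so the factorization is unique
    if np.linalg.det(q) < 0:          # ensure a proper rotation (determinant +1)
        q[:, 0] *= -1
    return q

def augment_bond_vectors(bond_vectors):
    """Apply one random rotation to all (normalized) bond vectors of a molecule,
    so the network cannot latch onto the original reference frame."""
    return bond_vectors @ random_rotation_matrix().T

# example: rotate a 29 x 29 x 3 tensor of pairwise bond vectors
rotated = augment_bond_vectors(np.random.randn(29, 29, 3))
```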
Predicting Molecular Properties
15th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First off, I would like to thank Kaggle, the host, and my team <a href="/senkin13">@senkin13</a>, <a href="/alexpartisan">@alexpartisan</a>, <a href="/chenlongwang">@chenlongwang</a>. It has been an interesting journey for me and certainly very rewarding to get my first gold. And congrats to all who have worked hard in this competition!</p> <p>I mainly worked on the GNN while my teammates worked on NN/LGB/feature engineering/stacking. So I will be focusing on the GNN and leave the rest to them. </p> <p>I mostly used <a href="https://github.com/rusty1s/pytorch_geometric">pytorch_geometric</a> for the GNN, which is an amazing library. Network structures I tried that failed to work include NNConv (<a href="https://arxiv.org/abs/1704.02901">https://arxiv.org/abs/1704.02901</a>) and SchNet (<a href="https://arxiv.org/abs/1706.08566">https://arxiv.org/abs/1706.08566</a>). I guess the reason is that those networks do not update edge information, while this competition aims to predict edge attributes. I tried various ways (concat/add/multiply) to combine node information from the last layer into edges to make predictions, but they did not work as well.</p> <p>Our major breakthrough is based on Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals (<a href="https://arxiv.org/pdf/1812.05055.pdf">https://arxiv.org/pdf/1812.05055.pdf</a>), in which both edge and node information are updated together. On top of the edge prediction, I add an MLP per type to make the final prediction. </p> <p>As for training, we tried both training each type separately and all types together, and found that training all types together gives more stable results. We also tried fine-tuning each type after training all types together first, but we found that it overfits very fast and leads to worse results.</p> <p>One question I was never able to answer: there are two types of graph, one based on chemical bonds and one based on the couplings that we seek to predict. While they have the same nodes, the edge connectivity is very different. Not having any background knowledge in this domain, we were not sure which one to train the GNN on and left it to trial and error. Maybe someone with domain knowledge could shed some light on this.</p> <p>Again, congrats to all winners; I look forward to their solutions.</p> <p>Best,</p> <p>Zhenlan</p> <p>Each molecule forms its own graph, with nodes being atoms and edges being chemical bonds or couplings.</p> <p>We ended up using a combination of both: "real" bonds followed by prediction bonds. </p>
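<p>A minimal PyTorch sketch of the edge-then-node update pattern of the MEGNet-style paper cited above (the dimensions, MLPs, and mean aggregation are illustrative assumptions, not the author's exact layers):</p>

```python
import torch
from torch import nn

class EdgeNodeBlock(nn.Module):
    """Update edges from (source node, target node, edge), then update each node
    from the mean of its incident (updated) edge states."""

    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index                        # each of shape (n_edges,)
        e = self.edge_mlp(torch.cat([x[src], x[dst], edge_attr], dim=-1))

        # mean-aggregate the updated edge states onto their target nodes
        agg = torch.zeros_like(x).index_add_(0, dst, e)
        cnt = torch.zeros(x.size(0), 1, device=x.device).index_add_(
            0, dst, torch.ones(e.size(0), 1, device=x.device))
        x = self.node_mlp(torch.cat([x, agg / cnt.clamp(min=1)], dim=-1))
        return x, e

# toy usage: 5 atoms, 8 directed coupling/bond edges, 64-dimensional states
x, edge_attr = torch.randn(5, 64), torch.randn(8, 64)
edge_index = torch.randint(0, 5, (2, 8))
x, edge_attr = EdgeNodeBlock(64)(x, edge_index, edge_attr)
```

<p>A per-type MLP head on the final edge states then produces the scalar coupling prediction for each pair, as described above.</p>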
Predicting Molecular Properties
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that final validation has ended, I am sharing my solution. My solution is what I call <em>Triplet Update</em>. My model has hidden layers for combinations of three atoms (triplets). This model is a generalization of <em>Edge Update</em>, and similar to Higher-order Graph Neural Networks in a sense.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-champs-scalar-coupling">https://github.com/toshi-k/kaggle-champs-scalar-coupling</a></p> <p><img src="https://raw.githubusercontent.com/toshi-k/kaggle-champs-scalar-coupling/master/img/concept.png" alt="conceptual diagram"></p> <blockquote> <p>Do you typically use chainer - or was it mainly because of the chainer-chemistry package?</p> </blockquote> <p>I usually use other DL frameworks too (TensorFlow, PyTorch, and so on). The reason I used Chainer in this competition was that chainer-chemistry was a good entry point.</p> <p>I suppose the running time would be almost the same if implemented well.</p>
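<p>A minimal sketch of what a triplet-update step can look like on a dense per-molecule pair representation (this only illustrates the idea in the conceptual diagram above; the author's Chainer implementation in the linked repository differs in details):</p>

```python
import torch
from torch import nn

class TripletUpdate(nn.Module):
    """Build a hidden state for every ordered atom triplet (i, j, k) from the pair
    states h[i, j] and h[j, k], then aggregate the triplets back onto the pairs."""

    def __init__(self, dim):
        super().__init__()
        self.triplet_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pair_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pair):                            # pair: (n_atoms, n_atoms, dim)
        n = pair.size(0)
        h_ij = pair.unsqueeze(2).expand(n, n, n, -1)    # h[i, j] broadcast over k
        h_jk = pair.unsqueeze(0).expand(n, n, n, -1)    # h[j, k] broadcast over i
        triplet = self.triplet_mlp(torch.cat([h_ij, h_jk], dim=-1))   # (n, n, n, dim)

        # sum over the middle atom j to get an update for the (i, k) pair
        agg = triplet.sum(dim=1)
        return self.pair_mlp(torch.cat([pair, agg], dim=-1))

updated = TripletUpdate(32)(torch.randn(12, 12, 32))    # toy molecule with 12 atoms
```

<p>The triplet tensor is O(n³) in the number of atoms, which stays small for this dataset since molecules have at most 29 atoms.</p>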
Predicting Molecular Properties
20th place solution, MPNN part
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks!</p> <p>Thank you!</p> <p>Thanks!</p> <p>🙌 </p> <p>🙏 🙏 🙏 thanks for all the help, was good working with you!</p> <p>Thanks! Yeah there were many more things to try, but I am pretty happy with the top 1% result. Congrats with the gold!</p> <p>Thank you, I'm glad it helped!</p> <p>So the main inspiration for our best solution came mostly from two papers: the <a href="https://arxiv.org/abs/1704.01212">Gilmer/Brain </a> paper, as implemented in my <a href="https://www.kaggle.com/fnands/1-mpnn">public kernel</a>, and then <a href="https://dl4physicalsciences.github.io/files/nips_dlps_2017_29.pdf">Neural Message Passing for Jet Physics.</a>. </p> <p>From the latter paper I took the idea of adding an extra step of not only updating the node values at each step, but also updating the adjacency matrix at each step, as in this algorithm from the above mentioned paper: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F314368%2Fcabe0b91e3333064b4d81ed5984ef351%2FNMPAlgo1.png?generation=1567064230681357&amp;alt=media" alt=""></p> <p>Also, from <a href="https://arxiv.org/abs/1805.10988">this paper</a>, of which <a href="/jaechang">@jaechang</a> is a co-author, I added skip connections between layers, which allowed us to go deeper (although we saw no improvement above 7 layers). </p> <p>The message passing functions and update functions are different from what was done in my public kernel, and for those interested in the details, please have a look at <a href="https://www.kaggle.com/fnands/mpnn-skip-2">this kernel</a> (and <a href="https://www.kaggle.com/fnands/internalmakegraphinput/output">pre-processing kernel</a> ) Of course our solution used 4/5 fold CV, and a bit of stacking, but the main workhorse of out model is described in the kernel. If you have any questions about the model please ask and I will clarify. I did it in TF2.0 as one of my goals for this Kaggle was to get familiar with the new version. </p> <p>My gut feeling is that this model has too many parameters and there is a simpler model out there that will do a better job. Any comments/suggestions/ideas are greatly appreciated. </p> <p>On a personal note, this has been the first Kaggle in which I have really competed and I had a great time! Thanks to everyone who made kernels, participated in discussions and who were just cool people in general. I learned a lot in this competition and hope to apply what I have learned in the future. </p> <p>Specifically, thanks to my teammates <a href="/kazanova">@kazanova</a> , <a href="/rohanrao">@rohanrao</a> , <a href="/supreethmanyam">@supreethmanyam</a> and <a href="/ogrellier">@ogrellier</a> who were great and really helped to squeeze the most out of our solution. They were all great teammates and I can highly recommend working with them!</p> <p>Now, I'm going to go read up on what the winners did... 🤓 </p> <p>Pleasure! 🙏 </p> <p>Thank you!</p>
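<p>A minimal PyTorch sketch of the two ideas highlighted above, updating the edge/adjacency representation at every message-passing step and adding skip connections between layers (the actual solution is in TF 2.0 in the linked kernels; the sizes and update functions here are illustrative):</p>

```python
import torch
from torch import nn

class MPStep(nn.Module):
    """One message-passing step that updates both the node states and the dense
    edge (adjacency) states, in the spirit of the algorithm referenced above."""

    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, a):                       # h: (b, n, dim), a: (b, n, n, dim)
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)
        a = self.edge_mlp(torch.cat([hi, hj, a], dim=-1))            # edge update
        h = self.node_mlp(torch.cat([h, a.sum(dim=2)], dim=-1))      # node update from messages
        return h, a

class MPNNWithSkips(nn.Module):
    """Stack of message-passing steps with additive skip connections between layers."""

    def __init__(self, dim, n_layers=7):
        super().__init__()
        self.steps = nn.ModuleList(MPStep(dim) for _ in range(n_layers))

    def forward(self, h, a):
        for step in self.steps:
            dh, da = step(h, a)
            h, a = h + dh, a + da                  # skip connections
        return h, a

# toy usage: batch of 2 molecules padded to 29 atoms, 64-dimensional states
h, a = MPNNWithSkips(64)(torch.randn(2, 29, 64), torch.randn(2, 29, 29, 64))
```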
Predicting Molecular Properties
22nd place solution - Vanilla SchNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Predicting Molecular Properties <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have added my results.</p> <p>The notebook for 1JHN: <a href="https://www.kaggle.com/tonyyy/vanilla-schnet">https://www.kaggle.com/tonyyy/vanilla-schnet</a></p> <p>I used SchNet models from SchNetPack. The notebook for 1JHN is available from <a href="https://www.kaggle.com/tonyyy/vanilla-schnet">this link</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F53f40ec4718c0dab9c0eb747728925ae%2Fslide%201.png?generation=1567084000793466&amp;alt=media" alt="Slide 1"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F819fdf2b2b7d03dbbc49d6502139e628%2Fslide%202.png?generation=1567084094339157&amp;alt=media" alt="Slide 2"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F9f6e3dbdd63a073dd7e7fb4f4f7ba64b%2Fslide%203.png?generation=1567084180507117&amp;alt=media" alt="Slide 3"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2Ffc2292ec0d8065469d092414fce5c9ec%2Fslide%204.png?generation=1567084247618833&amp;alt=media" alt="Slide 4"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F224674%2F3708ad0cf1709d6c7abdcf863beb401c%2Fslide%205.png?generation=1567119800569793&amp;alt=media" alt="Slide 5"></p> <h2>Related Kernels</h2> <p><a href="https://www.kaggle.com/tonyyy/schnet-champs">Potential Energy</a> <a href="https://www.kaggle.com/tonyyy/schnet-magnetic-shielding">Chemical Shielding</a></p>